Marc Hassenzahl, Jan Borchers, Susanne Boll, Astrid Rosenthal-von der Pütten, Volker Wulf
From the command line to today's immersive, tangible, or gesture-based interaction, the evolution of interaction paradigms is a story of increasing embodiment. We predominantly design interactive systems as tools, which withdraw from conscious thought to be always ready-to-hand. In this paradigm, people are at the center of action and use ubiquitous "everyware" technology to extend their minds and bodies.
But this paradigm is more and more at odds with what is happening in technology today. New self-learning, proactive, AI-infused systems are no longer tools to act through. When encountering personal assistants, chatbots, smart home devices, self-driving cars, or autonomous robots, people perceive these as counterparts, not as extensions of themselves. Sometimes this is by accident, because one cannot understand the complex, opaque reasoning of a deep-learning algorithm. Sometimes it is by design, as in the case of an anthropomorphic social robot. Either way, it creates a fundamental shift from an embodied relationship with technology to one of alterity: Technology becomes other. We call this class of interactive systems otherware (https://otherware.net). And, so far, HCI has a very limited idea of how people should best interact with it.
Otherware implies the application of social metaphors to human-technology interaction: We converse with robots or voice assistants; we cooperate, delegate, or command; and we need to trust. With these novel forms of interaction, companies take the very first approach to interaction design that comes to mind: mimicking interaction among humans or with animals as closely as possible. This has some benefits. People can apply already existing human-human or human-animal communication skills to the interaction with this new technology. But it also has its costs. Such "social" interaction is, for example, prone to inappropriate gender stereotypes, hence the call for gender-neutral synthetic voices (https://www.genderlessvoice.com/). The Washington Post reported on parents who worry that their children will unlearn to ask politely after engaging in blunt, commanding conversations with the Alexa voice assistant. In general, mimicking human-human interaction often fails to fulfill people's expectations, leaving them disappointed and annoyed (see the story of Breazeal's Jibo; https://de.wikipedia.org/wiki/Jibo). But even if otherware could live up to our full expectations, just knowing that we are really talking to a piece of technology trying hard to appear as a fellow human or animal might be enough to make it feel creepy and uncanny.
If otherware will never be experienced as an extension of the self but also should not be designed to closely mimic humans or animals, what else is there? HCI seems void of alternative interaction metaphors. Shouldn't otherware be a third species, or, rather, many different species? And if so, what are appropriate paradigms for communicating with it—if communicating is even the right word here?
Some researchers have begun to explore alternatives. For example, Robert Wortham and Vivienne Rogers presented a robot that "vocalized" internally generated plans by "muttering" sentences, such as "Attempting forward avoiding obstacle" or "Doing sleep 10 seconds." By appealing to the social practice of muttering to oneself, but in a particular, machine-like way, the robot presents itself as a counterpart, yet remains true to its mechanistic nature. It is halfway between machine and social counterpart without fully pretending to be a living being.
To remain true to its mechanistic nature can actually be an advantage in social encounters. Consider a robot used in rehabilitation or caregiving: It can have endless patience, it is not naturally competitive, and it won't take things personally—all because it is a machine rather than a living being. These qualities are beneficial to companionship, yet they are hard for a human caregiver to attain and easy to implement in a robot. Unlocking these "psychological superpowers" requires a design that presents the technology as a counterpart but emphasizes its otherness instead of trying to imitate living beings. Figure 1 shows an attempt at this, the companion robot Sympartner. While its lower part looks like a piece of furniture, it also has a head that allows for "emotional expression" via movable fans, an expressive display, and head position. It is clearly animistic, but neither in an especially anthropomorphic nor zoomorphic way.
A recent study compared Sympartner with Softbank's highly anthropomorphic Pepper and Savioke's machine-like Relay (Figure 2). As expected, Sympartner's hybrid design was more ambiguous and led to a broader spectrum of possible social responses compared with the other robots. For example, participants assigned a gender to Sympartner as often as they refrained from doing so, while Pepper was more often assigned a gender than not. While Pepper forces users into a social interaction akin to interacting with a child, Sympartner invites social interaction but is less specific about the particular quality of the relationship. It breaks with the naive anthropomorphic design strategy in favor of an alternative, broader notion of animistic design.
|Figure 2. Pepper (left), Sympartner (middle), Relay (right).|
In a similar vein, Tennent and colleagues looked at robots intended to improve the dynamics in small groups of humans. They argued that design's predominant approach, namely to present such robots as team members through anthropomorphism, is misguided. It distracts rather than facilitates. As an alternative, they proposed MicBot, a microphone on a stand that can autonomously show attention through particular movements. This attention is then mirrored by the group members. They concluded that their animistic approach "did not set any expectations that could not be met. None of the participants attempted to interact with the robot in ways not intended." Thus, their approach was especially effective because it provided a socially active counterpart without the pitfall of excessive anthropomorphism.
The Little Data Wranglers are a further example of an alternative design strategy for otherware. Their purpose is to help designers with Internet searches and personal archiving. Each wrangler was designed as otherware, with its own specific behavior, quirks, and personality. For example, the Twins (Figure 3) are a gateway to Google's Image Search. However, the Good Twin randomly adds positive search terms, while the Bad Twin adds negative terms. Together, they proactively customize and skew the search, providing a "sense of personality, intention, and even mood." This may seem odd at first. However, since Google's search algorithm is opaque and skewed itself, the Twins provide an alternative, outspoken, and at the same time playful approach to transparency.
|Figure 3. The Twins.|
To quote their creators, "Animism reframes and makes practical these [computational artifacts], not by revealing the literal inner workings of the system (which are too complex to unravel for the end-user), but through animistic fictions that use personality and narrative to explain the behavior and intent of the system." The designers of the Twins did not just add unrelated anthropomorphic features, a facade of eyes and voices, to create a counterpart. Instead, they crafted the Twins' personality from their functionality. The goal is not to reproduce faux intelligence but to allow for a "new set of relations between people and the digital"—in other words, an alternative interaction paradigm for otherware.
Currently, alternatives to anthropomorphism happen only at the fringes of HCI. Given HCI's tendency to critique treating computers as counterparts—the Humpty Dumpty syndrome, as Ben Shneiderman once called it—and iconic failures such as Microsoft's Clippy the paperclip, this comes as no surprise. There simply has been little interest in tackling this problem. As a result, HCI does not seem to know much about how to best design socially acceptable interactions with otherware beyond simple anthropomorphism or zoomorphism. Meanwhile, autonomous systems are taking the world by storm, and technology perceived as otherware is entering our daily lives at an exponential rate.
To address this problem requires research and practice in all areas of HCI. We will need to develop reliable models to predict what makes us perceive an interactive system as otherware or as an extension of ourselves. This perception will not be static; it will change with people, context, system behavior, and even just over time. Nor will it be binary. Systems will straddle the continuum between the rock in your hand and fully autonomous collaborators, with many shades of otherness in between.
We will also need to develop new methods and processes to design, prototype, and evaluate otherware. The physical manifestation of many otherware systems means that physical form and function will need to be designed hand in hand, with appropriate support from a new class of integrated hardware and software prototyping tools and processes.
We will need to revisit the software architectures that create today's otherware. This goes far beyond explainable AI—we will need deep hooks into today's autonomous systems to provide people with interactive control and feedback where needed. Most likely, desirable otherware will not only be transparent in its reasoning but also treat humans with respect and empathy. Something we can trust and negotiate with—and modify quickly if negotiation fails.
Ultimately, we will need to arrive at new interaction paradigms and HCI design patterns for otherware that encode desirable qualities of interaction for this emerging class of technology, beyond today's naive anthropomorphism. We believe this to be HCI's next grand challenge. How we respond to it will shape the way we live with technology in the foreseeable future.
Part of this paper is based on discussion in a workshop with Judith Dörrenbächer, Wilko Heuten, Maximilian Krüger, Matthias Laschke, Diana Löffler, Thomas Ludwig, Claudia Müller, Saskia Nagel, Volkmar Pipek, Shadan Sadeghian, and Gunnar Stevens.
4. Rosenwald, M.S. How millions of kids are being shaped by know-it-all voice assistants. The Washington Post. Mar. 2, 2017; https://wapo.st/33Qajyo
5. Wortham, R.H. and Rogers, V.E. The Muttering Robot: Improving robot transparency through vocalisation of reactive plan execution. Proc. of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) Workshop on Agent Transparency for Human-Autonomy Teaming Effectiveness. 2017.
8. Tennent, H., Shen, S., and Jung, M. Micbot: A peripheral robotic object to shape conversational dynamics and team performance. Proc. of ACM/IEEE International Conference on Human-Robot Interaction. 2019, 133–142; https://doi.org/10.1109/HRI.2019.8673013
9. Marenko, B. and Van Allen, P. Animistic design: How to reimagine digital interaction between the human and the nonhuman. Digital Creativity 27, 1 (2016), 52–70; https://doi.org/10.1080/14626268.2016.1145127
Marc Hassenzahl is a professor of ubiquitous design/experience and interaction at the University of Siegen, Germany. He combines his training in psychology with a love for interaction design. With his group of designers and psychologists, he explores the theory and practice of designing pleasurable, meaningful, and transforming interactive technologies. email@example.com
Jan Borchers is a professor of media informatics and human-computer interaction in the Department of Computer Science at RWTH Aachen University, Germany. With his lab, he studies and creates new interaction techniques and technologies for personal fabrication, tangible, wearable, and haptic interaction, soft robotics, and augmented reality. firstname.lastname@example.org
Susanne Boll is a professor of media informatics and multimedia systems in the Department of Computing Science at the University of Oldenburg, Germany. She heads the Interactive Systems Group, which focuses on ambient, mobile, and tangible interfaces for children and older adults. email@example.com
Astrid Rosenthal-von der Pütten is a professor of psychology and heads the group Individual and Technology at the Department of Society, Technology, and Human Factors at RWTH Aachen University. Her research interests include social effects of artificial entities, human-robot interaction, linguistic alignment with robots and virtual agents, presence, and communication in social media. firstname.lastname@example.org
Volker Wulf is a professor of socio-informatics at the University of Siegen, Germany. His research interests focus on a practice-based approach to the design of IT systems in real-world settings. This includes the development of innovative applications in the areas of cooperation systems, knowledge management, and community support. email@example.com
©2021 ACM 1072-5520/21/01 $15.00