Oftentimes, and quite logically, developers and scientists become fixated on interactions and interaction design from the perspective of humans operating machines. Over decades of user experience research, increasing attention has been paid to non-instrumental, emotional qualities in design; yet the end goal of user experience design remains smooth and easy operation, an increased willingness to buy, positive word of mouth, and brand/product/service loyalty.
When considering robotic and especially humanoid robot development, common spoken and unspoken questions include: Are the movements natural? How many actuators/motors does it take to make a humanoid robot genuinely smile? What tones and emphasis should be activated with which words, in response to what emotions? It seems logical to focus on the traits of individual designs to produce a desired reaction (emotional or otherwise) in a human user or onlooker. That is how people raised in Western or modern consumerist cultural paradigms are trained: to focus on one's own self or another self and its innate qualities, so that the self (or ego, in Sigmund Freud's terms) can act in the world in relation to other selves [1,2]. Thus, when approaching the design of humanoid robots and other intelligent systems (IS), there is a tendency to focus on the features of the robot itself and what it can do (its functions) rather than on the effects the robot can generate on a less tangible level, in terms of broader social impact.
The question posed in this brief piece is what would happen if, rather than focusing on the isolated qualities of the technological artifacts and systems they are designing, developers instead focused on the desired qualities and wider social outcomes of interactions and interactional processes. Rather than focusing on robot traits, developers could focus on human traits, their impact on the social environment, and how robotic systems could respond to them, in order to read, react to, and perhaps redirect or channel human social interactions toward more beneficial and desirable social outcomes. That is, if the objective and underlying intentions behind societal planning are well-being and social harmony, then rather than designing highly efficient, convincing, and easy-to-use robots, could it not be the case that robots are in fact designed to be social problem solvers, to act and react in relation to social actors, climates, and circumstances? After all, social dynamics and atmospheres can be enhanced or worsened on a number of levels, ranging from work or living conditions to the chemistry between people, to specific individuals (who may have ill intentions).
Here, the emphasis is on interaction design not in terms of how humans can interact with or operate robotic systems, but rather how robots can operate to guide and influence human-interaction systems. For instance, while robots are introduced into a factory setting to replace human workers in tasks requiring both speed and precision, there may simultaneously be an IS that reallocates the human workers to more suitable tasks or organizations, composing human teams that are optimal in terms of personality, skills, and interests. From a social health and safety point of view, too, there may be situations in which these ISes, either during an employment interview or soon after someone has arrived at an organization, detect that not all is socially healthy, in terms of potential bullying, negativity, or even sociopathy. This may be detected in the person's interactions with other employees, behavior, performance, and/or observable intentions in general. The IS can then intervene, either by alerting the appropriate people or by creating systems of interaction and movement around the employees that calm or resolve the situation through a variety of actions.
Could it not be the case that the robots are in fact designed to be social problem solvers?
Thus, the IS interaction design is not robot focused; rather, it is human-social-interaction focused. With this in mind, if humanoid robots are designed or purchased for specific social and organizational contexts, it should be done from the perspective of how they make their human counterparts feel, and how they will enhance the social interactions and subsequently the morale of the environment. There are moves in the direction of co-responsive robotics in areas such as robot dance and collaborative robot dance [3,4], as well as studies of ecological relationalism, in which social cognitive psychology is applied to robotics, and android science, which examines and develops human-robot relationships.
The latter comes closer to the current argument toward designing robotics, or, rather, designing relational systems, to encourage and enhance social harmony within and between humans. Instead of observing robots as ego-driven agents or philosophizing about robot personhood, the humanism remains within the people, yet the robotic systems assist in orchestrating human social systems for harmonious social and psychological outcomes. The ISes in question are semiotic I-You-Me technologies designed to read signs in the physical and social environment, and to activate or adjust behavior to induce desired social and emotional outcomes among the humans in these systems.
Thus, at the core of the robotic program lies the desired social outcome and the qualities of human interactions: not tasks, not specific physical traits of the robot itself, nor particular emotional responses to the robots themselves. Rather, the robotic systems, or ISes, could encourage positive human agency by focusing on "functional coordination and co-action." Therefore, in this interpretation of interaction design for robots and other intelligent systems, the idea is to look at the state of social conditions in groups, organizations, communities, and even societies that are induced and supported by social interactions, that is, the ways in which people treat and communicate with one another. If the end goal for interactions is social well-being and harmony, robotic systems should be developed to help achieve this overall state among people. Then we can really say that technology is being developed to improve quality of life.
Rebekah Rousi is a postdoctoral researcher in cognitive science. Rousi's research explores human experience in relation to technology, with a keen focus on human-robot interaction, cyborgism, emotions, semiotics, critical theory, and other areas of cultural studies. Rousi is a trained and experienced performance and visual artist. email@example.com
Copyright held by author
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.