Authors:
Eva Hornecker, Antonia Krummheuer, Andreas Bischof, Matthias Rehm
The idea of a robot that autonomously takes over human work stems from automation, where robots have successfully transformed industry and the work context on factory floors. But when robots move out of the restricted and often static environment of the factory floor and are introduced into social and dynamic contexts, we can no longer rely on human-robot interaction (HRI) patterns derived from industry. If we want robots to contribute meaningfully to more sectors in society, they must be developed with people in mind, supporting ongoing interactions, and not with a focus on autonomous behavior.
HRI traditionally assumes a dyadic interaction model of one human and one robot. Most HRI concepts, experiments, and evaluations focus on single users. We have found that for therapeutic and care contexts, a dyadic model is not appropriate and fails to capture important characteristics of interactions in such settings. We illustrate how in contexts such as therapy, rehabilitation, and care, the situation is instead characterized by the interactions of several human participants with a single robot, where often the robot relies on their participation to fulfill its role.
In both public and scientific discourse, robots are usually imagined as acting in isolation, working autonomously, and replacing human work. Thus, we often respond skeptically when asked whether robots will, for instance, autonomously fulfill eldercare tasks and replace care workers. A key issue here is the word autonomously. Robotic agency and functionality are not just a technological achievement; they are a situated and mutual construction achieved by the actions and interactions of humans with the technology [1,2]. We illustrate this with two examples from our work, one from a rehabilitation and life-support context for people with disabilities, and one from eldercare. We then expand the discussion and call for future work in HRI to focus on an understanding of distributed agency, not autonomy, and to develop and be guided by non-dyadic models of interaction.
Example 1: Triadic Interaction With a Scheduling and Reminder Robot
Rehabilitation and healthcare robots are seen as promising solutions for supporting the independence and well-being of people with cognitive impairments and for addressing the challenges of an aging society in which growing numbers of people need support with taking medication or keeping track of appointments. Our build-your-own-robot (BYOR) project engaged in a cocreation process with residents and staff of a Danish facility for people with acquired brain injury to develop individualized reminder robots that take account of the situated and distributed nature of agency and memory. Design and testing sessions were videotaped to enable analysis from a micro-sociological perspective based on principles from ethnomethodology and conversation analysis. The focus was on the situated processes through which the participants coordinate their actions and construct meaning.
Ida (not her real name) has problems with short-term memory and cannot handle or understand entries in traditional calendars. She and her carer (caregiver) wanted a robot that would prepare her for upcoming events, such as getting picked up for a doctor's appointment. Ida should also be included actively in scheduling her appointments and programming the robot, tasks currently taken care of by her carer. The cocreation process resulted in a green robot face that is fed with cards representing appointments [3]. The day of the week is scheduled by pressing one of the robot's teeth, and the time is programmed by moving the lever of a clock placed around the robot's face (Figure 1).
Figure 1. The Reminder Robot.
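To make this interaction model concrete, the following is a minimal, purely illustrative Python sketch of how card, tooth-button, and clock-lever inputs could be mapped to a scheduled reminder. The data structures and function names are our own assumptions for illustration and do not represent the BYOR project's actual software.

```python
from dataclasses import dataclass
from datetime import datetime, time, timedelta

# Illustrative sketch only -- not the BYOR project's implementation.
# We assume the hardware exposes three inputs: an appointment card carrying
# an activity label, one of seven "tooth" buttons for the day of the week,
# and a clock lever read as a time of day.

@dataclass
class Appointment:
    activity: str   # e.g., "Physiotherapy", read from the inserted card
    weekday: int    # 0 = Monday ... 6 = Sunday, from the tooth button
    at: time        # from the clock lever around the robot's face

def next_occurrence(appt: Appointment, now: datetime) -> datetime:
    """Return the next date/time matching the programmed weekday and time."""
    days_ahead = (appt.weekday - now.weekday()) % 7
    candidate = datetime.combine(now.date(), appt.at) + timedelta(days=days_ahead)
    if candidate <= now:  # already past today, so use the same weekday next week
        candidate += timedelta(days=7)
    return candidate

def confirm(appt: Appointment) -> str:
    """Spoken confirmation, mirroring the robot echoing 'Physiotherapy'."""
    return f"{appt.activity} on weekday {appt.weekday} at {appt.at.strftime('%H:%M')}."

# Example: an appointment for physiotherapy at half past ten on a Wednesday.
appt = Appointment(activity="Physiotherapy", weekday=2, at=time(10, 30))
print(confirm(appt))
print("Reminder due:", next_occurrence(appt, datetime.now()))
```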
Figure 2 shows a session where Ida and her carer program a prototype robot that will be kept in Ida's apartment to remind her of appointments. The carer assists Ida by breaking down the task into smaller pieces, using instructions, requests, and gestures to support her. At the beginning of the transcript, the carer informs Ida about an upcoming appointment; at the same time, the formulation identifies the relevant information for programming the robot: "Tomorrow you go to Phys at half past 10." Ida takes this as an instruction to find the appointment card and puts it in the robot. Ida thus draws on multiple resources, the information and instructions as well as the material environment, and becomes a competent participant in scheduling the robot despite her memory impairment. This continues with the carer leading Ida through the programming, informing her of the day of the appointment and indicating the relevant button by pointing, while the robot confirms the input of the appointment card ("Physiotherapy"). As in the prior sequence, Ida takes up the instruction and exploits these cues and the robot's features to schedule the appointment.
Figure 2. Ida and her carer program the Reminder Robot together.
In this example, the robot profits from not aiming for the individual's full autonomy, but rather for their participation in a human-robot interaction that is scaffolded by the carer. Instead of a dyadic HRI, we observe a triadic HRI. This is very different from traditional reminder robots that provide training or support to help the individual act autonomously, an approach that tends to neglect the social, material, and distributed character of scheduling, guiding, and reminding practices commonly used in interactions with people with severe cognitive impairments. As our example shows, the design of successful robots will rely especially on an understanding of this distributed and socially situated agency (and memory), instead of a focus on autonomy and dyadic interactions.
Example 2: The Interactive Enactment of Lifting Functionalities
Our second example [4] is from another major application area for healthcare robots: eldercare. The ReThiCare project aims to rethink and develop concepts for care robotics following a creative design approach. Early on, researchers conducted an extended ethnography in a German care home to gain firsthand knowledge and inspiration. The practices that involved the most advanced technology concerned transferring residents from a bed into a wheelchair or reclining chair and vice versa with lifting devices, some of which were remotely controlled by staff. The research team decided to investigate this in detail, given that lifting constitutes an intimate and vulnerable situation in which a human body is touched and handled, and is a highly relevant scenario for care robotics. While not robotic, these lifting machines are the closest thing to an autonomous machine that might be found in a care home. Thus, we might learn from how these existing machines are used and integrated into everyday practice what is needed for the successful integration of robots into caregiving interactions.
For deeper insight into the process of lifting, researchers engaged in an autoethnographical trial, taking the position of residents and being moved around by two staff members who had volunteered for the session. Most relevant for HRI is a battery-powered, remote-controlled passive sling lift ("passive" in that the resident does not lift themselves up) that lifts immobile bedridden residents who have little control over their body or physical strength. The resident is rolled on their side and into a net, then lifted into a wheelchair or reclining chair; later they are lifted back into bed (Figure 3). The session was videotaped, which allowed a microanalysis of interactions as well as comparison with field notes, revealing that staff adopted the same kinds of routines and conversational patterns with the researchers as in their daily practice.
Analysis revealed how all involved parties played an essential role in ensuring the success of the lifting act. The residents are not passive packages to be handled and hoisted; they cooperate, moving as much as they are able to and thereby easing the task for staff, responding to instructions, and contributing however they can, even if this just means relaxing their body and letting themselves be moved. This cooperation is enlisted by staff, who take great care to create a trusting, lighthearted atmosphere (field notes: "only flying is better"). Especially for residents with dementia, clear announcements and physical touch are essential to orient them to the procedure and to establish trust.
Many of the staff's actions can be described as "configuring" the resident toward the process. Before pushing the buttons to lift the net with a researcher in it, they always said, "So, now going up," in the same tone of voice, similar to when lowering the researcher into the bed or chair. Taking position inside the net revealed that the moment of liftoff or being put down involves a sudden shift in balance and body-weight distribution, and can thus startle those who are unaware. Turns in direction were further announced with, "I drive/turn you to your wheelchair." For residents with dementia, almost every action was announced, for example, "We will now put the net under you," or "We will now turn you over." Moreover, staff performed various small adjustments habitually and almost in passing, to ensure the comfortable positioning of the person being moved so that legs or feet would hang or lie loosely and not get stuck at awkward angles. Finally, caregiving is highly social and involves tuning in with residents, projecting the feeling of being safe and cared for, demonstrating competence, calming people, and enlisting their collaboration.
Overall, this seemingly functional task, in which a machine plays a central role, consists of complex interactions, involving various physical, verbal, and emotional configurations of the human-machine interaction. Moving a resident is rendered easier and more comfortable (for both resident and staff) when the staff enlist the residents' collaboration. Given the vulnerable situation, physically and mentally, of elderly residents, it is hard to envision an autonomous robot that would be able to react adequately in these situations (which include residents who panic and fight against the procedure) while enlisting trust. Thus, the participation of staff in such care activities will be required even if machines and robots are able to take over more of the "hard work." Furthermore, handing these kinds of tasks over to robots would ignore the role of caregivers in ensuring residents' emotional, cognitive, and physical well-being.
Calling for a Non-Dyadic Understanding
Figure 4 illustrates the structure of HRI in these two scenarios. Our observations are not surprising in light of science and technology studies (STS) and ethnographic studies of HRI. Various HRI studies have shown how humans participate in the construction of technological agency and how robot behavior gets interpreted as a part of social interactions. This ranges from children not engaging with a robot when the experimenter ignored it [1] to the role of staff or family in introducing the social robot Paro to care home residents and creating acceptance of it [2]. Other examples include robot behavior that does not adhere to the social rules of a situation being interpreted as "queue jumping" or "cheating" in a game context, or a robotic trash can being referred to as "hungry" when it approaches a table. Thus, robots are always part of a larger situation, one among many actors, and can be interpreted in multiple ways.
Moreover, the humans and the robots present in given situations can assume or expect different roles. An important and often underestimated role configuration for HRI is that of the human mediator. Robots in public, in organizations, or in HRI lab situations usually require guidance and introduction by a facilitator. Before a NAO robot can successfully tell a joke or story to a preschool class, the children's attention and gaze have to be focused, and silence has to be established and maintained. This holds not just for short-term interactions. In a 10-week trial, the German Padero project found that it needed human moderation "to facilitate interaction" with Pepper in a care home, introducing it, moving it, and motivating residents to interact with it. On closer inspection, it turns out that researchers in HRI studies frequently mediate interaction with a robot, directing participants' attention or signaling that the robot is a point of interest, weaving the robot's agency into the social situation.
Once analysis takes account of the overall context, it becomes apparent that the use of robots is highly socially situated. The robot may simply take a service role (e.g., a Roomba), become a point of entertainment and conversation (when pets ride on a Roomba), or be utilized as a catalyst, for instance by a caregiver in order to relax and soothe another person (examples include Paro and Purrble (https://www.purrble.com)). Robots thus may be assigned different roles, which may shift over the course of deployment and across users [2].
And it's not just facilitators and mediators. In many situations typical for HRI—in museums, airports, shopping centers, hospitals, and schools—other people are present: observers, bystanders, and group members. Although these individuals might be passive and out of the range of robotic sensors, their presence affects the behavior of the person directly interacting with the robot. Somebody typing on the keyboard interface of a conversational agent may feel stress under observation or be incited by peers to push the system's limits, as we observed with school classes in museums. Interaction can also turn into a performance for an audience. For the robot, the contribution of bystanders will often remain invisible, even if other group members suggest what to do next or switch roles of interactor and observer so that there is not one single interactor. In some cases, the robot needs to be able to recognize and handle group situations; for instance, prior research has investigated how a museum guide robot can react to groups of visitors, identifying a person for conversation, supervising attention levels, and navigating with the group.
An audience need not even be physically present to influence the people who interact with the robot. Expectations of or norms for adequate behavior often become internalized and thus remain effective even when the user is alone.
We need to "free" HRI and robotics from the idea of dyadic interaction with an autonomous robot, into which its function is unquestionably inscribed. Robots and humans are not independent entities. Instead, we need to understand the robot's functionality and agency as a situated and mutual construction. These constructions are achieved in the ongoing, collaborative, and distributed activities of often several humans and the robot platform in a socio-material context.
Our case studies add emphasis to the fact that often the robot will not be able to fulfill its functional role or task without human participation and relies on humans making it part of interactions: The technical functionality of a robot cannot be separated from its interactional enactment. Besides interaction with the robot itself, we find interaction around it, where the robot is something to be handled in a specific situation, to be talked about and adjusted to the current needs, or where it is treated as a resource and as another participant in the situation (Figure 4). While our two case studies both feature triads—two people and one robot—the discussion above highlights that we need to think more generally about this as a multiagent situation, where agents can be humans as well as robots. The analogy of HRI to HCI needs extension, similar to the perspective of CSCW and the paradigm shifts or waves in HCI [5].
The work reported here was funded by VolkswagenStiftung through the ReThiCare grant, and by Helsefonden (Bevilling nr. 16-A-0467 Byg-selv-robot til selvhjaelp) and Spar Nord Fonden (Proposal ID 23086).
1. Alač, M., Movellan, J., and Tanaka, F. When a robot is social: Spatial arrangements and multimodal semiotic engagement in the practice of social robotics. Social Studies of Science 41, 6 (2011), 893–926. DOI:10.1177/0306312711420565
2. Šabanovič, S. and Chang, W-L. Socializing robots: Constructing robotic sociality in the design and use of the assistive robot PARO. AI & Society 31, 4 (2016), 537–551. https://doi.org/10.1007/s00146-015-0636-1
3. Krummheuer, A.L., Rehm, M., and Rodil, K. Triadic human-robot interaction: Distributed agency and memory in robot assisted interactions. Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, 2020, 317–319; https://doi.org/10.1145/3371382.3378269
4. Hornecker, E., Bischof, A., Graf, P., Franzkowiak, L., and Krüger, N. The interactive enactment of care technologies and its implications for human-robot-interaction in care. Proc. of ACM NordiCHI 2020. Article 78, 1–11; https://doi.org/10.1145/3419249.3420103
5. Serholt, S., Ljungblad, S., and Ní Bhroin, N. Introduction: special issue—critical robotics research. AI & Society (2021); https://doi.org/10.1007/s00146-021-01224-x
Eva Hornecker is a professor of HCI at Bauhaus-Universität Weimar. Her research focus is on non-desktop interaction, in particular anything tangible or embodied. [email protected]
Antonia Krummheuer is an associate professor in the Department of Communication and Psychology at Aalborg University. [email protected]
Andreas Bischof has a doctorate in sociology and specializes in science and technology studies, with a focus on robotics. [email protected]
Matthias Rehm is a professor at Aalborg University focusing on HRI. [email protected]
Copyright 2022 held by owners/authors