XII.2 March + April 2005
Page: 37

Humans, robots, rubble, and research

Robin Murphy


Rescue robotics has been declared an exemplar domain for human-robot interaction by a DARPA/NSF study and a computer science grand challenge by the Computing Research Association. This is not surprising given the extreme challenges of the urban search and rescue (US&R) domain, where small robots are used to explore highly confined voids deep in the interior of rubble. US&R itself is demanding on emergency responders: the environment is deconstructed and noisy, a priori knowledge may be unavailable, incomplete, or incorrect, and the pace of the rescue is unpredictable (e.g., "hurry up and wait"). Possibly more interesting for HRI is that US&R involves a diversity of "consumers" (victims, searchers, specialists, off-site support specialists), many of whom are working under unfavorable, error-inducing conditions. No single type of robot is sufficient, so multiple, difficult-to-control platforms must be used by the same operators. Communication links frequently break down or drop to reduced bandwidth, constricting the flow of information and introducing team breakdowns. Humans must collaborate with software agents to overcome perceptual challenges, in addition to the more typical use of agents to mine and represent related knowledge.

The Center for Robot-Assisted Search and Rescue (CRASAR) conducts basic and applied field research. Field research at realistic venues with hardened robots and rescue professionals is often difficult to organize, and CRASAR is fortunate to have developed a network of contacts with the Federal Emergency Management Agency (FEMA), individual states, and the U.S. Department of Defense that enables field work at least four times a year. (CRASAR provides opportunities for researchers to join us in the field or for us to collect data by proxy through the NSF-funded "R4: Rescue Robots for Research and Response" program.) To date, our field research has produced five distinct results.

One of the fundamental results has been the creation of task models of the technical search and remote medical-support missions, along with the associated mental models of the operator. Task models are essential because they define the goals of the human-robot team and how the team accomplishes them. Such models do not yet exist because robots are not currently used for US&R (though FEMA has begun the process of adding robots to the accepted equipment list for response teams, and regional teams are looking at purchases). Since robots are not yet in use, traditional work-analysis methods of defining the task and then the human's role in it do not apply. Instead, we have created the RASAR modular coding scheme to decompose novel robot-assisted tasks, identify specific situation-awareness requirements, and capture the effective modalities of information transfer. Because they were built by working directly in the field with US&R professionals, these models represent how robots will really be used and how humans will truly interact with them.
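The details of the coding scheme are beyond this column, but the idea of a modular task decomposition can be sketched as a simple data structure: each mission breaks into subtasks, each annotated with its situation-awareness requirements and information-transfer modalities. The field names and example values below are hypothetical illustrations under that reading, not the actual RASAR scheme.

```python
# Illustrative sketch of a modular task-coding scheme in the spirit of
# RASAR: a mission decomposes into subtasks, each annotated with
# situation-awareness (SA) requirements and information-transfer
# modalities. All names and values here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subtask:
    name: str
    sa_requirements: List[str]   # what the human must know to do this step
    modalities: List[str]        # how that information is conveyed

@dataclass
class TaskModel:
    mission: str
    subtasks: List[Subtask] = field(default_factory=list)

technical_search = TaskModel(
    mission="technical search",
    subtasks=[
        Subtask("traverse void",
                sa_requirements=["robot pose", "obstacle proximity"],
                modalities=["video", "verbal relay"]),
        Subtask("assess structure",
                sa_requirements=["void geometry", "load-bearing cues"],
                modalities=["video", "annotated stills"]),
    ],
)
print(len(technical_search.subtasks))  # 2
```

A decomposition like this makes the situation-awareness requirements explicit per subtask, which is what allows the scheme to be applied to novel robot-assisted tasks for which no prior work analysis exists.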

Possibly the most striking outcome of our field research to date has been the discovery that two rescuers working cooperatively are nine times more likely to find victims with the robot than a single operator. Our studies indicate that a single rescuer simply cannot effectively build and maintain the necessary situation awareness to be effective. While it is common practice in the aerial vehicle community to use two operators, it has been a standard assumption in the ground-vehicle community that the state of the art is one operator. Instead, our work suggests that a single operator per single robot (SOSR) may be the state of the practice but underperforms.

Our field studies have also confirmed the observation from the World Trade Center deployment of robots that perception, not mobility, is the bottleneck. In two field studies, robot movements were recorded, and the robots were found to be stationary, with only the camera moving, for an average of 49 percent of each run. Essentially half of the time was spent stopped and looking, not only to find victims and assess the structure but also to build situation awareness. This finding emphasizes the need for HRI: More sophisticated mobility and navigation algorithms, without an accompanying improvement in situation-awareness support, can reduce the time spent on a mission by no more than 25 percent.
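The 25-percent ceiling can be reconstructed with a back-of-envelope time budget. If roughly half of each run is stationary, navigation improvements only touch the moving half; assuming, purely as an illustration, that better navigation could at best halve that moving time (the speedup factor is my assumption, not a figure from the study), the maximum saving works out to about a quarter of the mission:

```python
# Back-of-envelope mission time budget. The 0.49 stationary fraction is
# from the field data; the best-case navigation speedup is an assumption
# made to illustrate the "no more than 25 percent" ceiling.
stationary_fraction = 0.49            # stopped, looking through the camera
moving_fraction = 1.0 - stationary_fraction

best_case_speedup = 0.5               # assume navigation can halve moving time
time_saved = moving_fraction * best_case_speedup

print(f"Maximum mission-time reduction: {time_saved:.3f} of the mission")
```

However the exact figure is derived, the structure of the argument is the same: time spent stopped and perceiving is untouched by faster mobility, so perception support bounds the achievable gains.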

Our recent work with medical specialists who may remotely interact with multiple trapped victims through robot mediation has yielded a task model, a preliminary mental model, and a vocabulary of how these users wish to direct the robot. We have found that such users may be able to dispense with a mental model of the robot and interact strictly through a "drive the camera" approach. This approach is manifested in two styles of communication: targeted communication, where commands are relative to an object or feature in the image, and non-targeted communication, with commands such as "up," "down," and "zoom."
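The distinction between the two command styles can be captured in a few lines. The class and vocabulary below are an illustrative sketch, assuming a simple verb-plus-optional-target representation; they are not CRASAR's actual interface.

```python
# Hypothetical sketch of the two "drive the camera" command styles:
# targeted commands reference an object or feature in the image, while
# non-targeted commands are pure camera motions. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraCommand:
    verb: str                     # e.g. "up", "down", "zoom", "look at"
    target: Optional[str] = None  # image feature the command refers to, if any

    @property
    def targeted(self) -> bool:
        return self.target is not None

commands = [
    CameraCommand("look at", target="victim's left arm"),  # targeted
    CameraCommand("zoom"),                                 # non-targeted
    CameraCommand("up"),                                   # non-targeted
]
print([c.targeted for c in commands])  # [True, False, False]
```

The practical point of the distinction is that targeted commands require the robot (or an interpreting human) to resolve a reference in the image, whereas non-targeted commands map directly onto camera actuation.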

Finally, our studies highlight that humans will interact "face to face" with the robot as well as from "behind" as operators. Victims will certainly be "in front" of a rescue robot, and that interaction is only now beginning to be explored. Another class of humans in front is the rescuers themselves. Within the past year, we have documented cases where rescue robots were spontaneously adopted as a virtual presence inside the void for the safety officer and team manager outside it, effectively allowing them to look over the shoulders of, and stand side by side with, the extraction teams.

Figure 1.

Based on nine years of experience with rescue robotics, we conclude that the central issue in rescue robotics isn't making the robots autonomous, but rather supporting team processes. Assuredly, robots will become increasingly autonomous, though autonomous navigation appears more feasible than object recognition and situation awareness. While this increase in autonomy will reduce the demands on a robot specialist, it will not reduce the need for, or demands on, the human decision-makers. As progress continues in distributed networking, these decision-makers are increasingly likely to be geographically distributed and to have never worked together before. As a result, the pressing research issue is how to support the larger human-robot system in reaching the overarching goals of the rescue enterprise.

For more information about the R4 program and publications, please visit

Author

Robin Roberson Murphy received a B.M.E. in mechanical engineering and an M.S. and Ph.D. in computer science from Georgia Tech, where she was a Rockwell International Doctoral Fellow. She is a professor in the Computer Science and Engineering Department at the University of South Florida, with a joint appointment in Cognitive and Neural Sciences in the Department of Psychology. She is also director of the Center for Robot-Assisted Search and Rescue and the NSF Safety Security Rescue Research Center. She was awarded the NIUSR Eagle Award for her work using rescue robots at the World Trade Center disaster.

Figures

Figure 1. Robot's-eye view of the interior of a void in the World Trade Center Tower 2 rubble, 40 feet below the surface. Note the lack of color and depth cues, creating challenges for both computer and human image interpretation.


©2005 ACM  1072-5220/05/0300  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2005 ACM, Inc.
