Robots!

XII.2 March + April 2005
Page: 30

Case study


Authors:
Emily Hamner, Mark Lotter, Illah Nourbakhsh, Skip Shelly

Introducing a Mission to Mars

In 2003, members of a Carnegie Mellon University research team designed, prototyped, and installed the Personal Exploration Rover (PER) exhibit as part of a vision to:

  • help the general public understand how NASA mission scientists use robots as space exploration tools
  • increase the general public’s understanding of advances in robot autonomy

To achieve these educational goals, the Carnegie Mellon team set about creating an experience that was emblematic of the real challenges NASA mission scientists face as they explore Mars remotely.

The PER exhibit comprises an autonomous robot, a physically simulated Mars terrain, an interface for developing rover missions, and a wireless communication network between the rover and the interface. The mission-creation interface is displayed on a flat-screen monitor that overlooks the Mars terrain and is controlled through a trackball with a single button.

Users identify a target rock that they think may contain life, and then develop a mission for the rover by specifying a direction and distance to the rock using an overhead "satellite" image. This mission plan is sent to the PER, and real-time images of the mission from the rover's point of view illustrate the rover's autonomous capabilities. The PER confirms that it has reached the target rock, then collects and analyzes data. The mission concludes with a task in which users analyze an image of their target rock to determine if there are signs of life.

The PER exhibit openings were scheduled for January 2004 to capitalize on the arrival at Mars of NASA's Mars Exploration Rovers (MERs), Spirit and Opportunity. By October 2003, five high-visibility venues had reserved space for the PER exhibit:

  • Smithsonian Air & Space Museum in Washington, D.C.
  • The Stephen F. Udvar-Hazy Center in Chantilly, Virginia
  • NASA/Ames Visitor Center in Palo Alto, California
  • The National Science Center in Augusta, Georgia
  • The Exploratorium in San Francisco, California

[Figure: ins01.gif]

Project Summary

In September 2003 the Carnegie Mellon technical team working on the PERs determined that there was a critical need to improve the existing user interface by their November 2003 deadline. In addition to addressing tactical usability issues, the team believed it needed to deliver a more compelling experience for the users. Two professional interface design consultants, Mark Lotter and Skip Shelly, were added to the technical team to improve the PER.

The team agreed to a simple design process: identify the greatest opportunities for improvement, test alternative design solutions with users, and continually iterate the designs until the deadline arrived. With a large set of tasks and only two months until the first installation at the National Science Center, team communication needed to be frequent, flexible, and peer-to-peer, with no interpersonal issues or attitudes to delay the work. The team also needed a clear understanding of roles, dependencies, and priorities. Whole-team meetings were scheduled twice each week to review progress, and sub-teams were encouraged to meet and communicate informally outside of these official meetings.

Design Challenges

Many of the design challenges centered on the public exhibits in the different venues. Rovers would be required to operate in at least four different "Mars yards," each one a different shape, size, and color. The Mars yards would not be completed until a few days before each exhibit opening, leaving limited time for any customization and testing. The PER team needed to communicate general requirements to each Mars yard construction team while still allowing the flexibility and freedom to create a unique Mars yard for each venue.

The PER team hoped and anticipated that the exhibit would be a popular attraction. If so, a challenge would be to maximize the number of people who could use the exhibit and avoid long, frustrating wait times. Would the educational experience be lost if users perceived the PER as a game of beating the clock? How could we subtly reinforce the passage of time without creating anxiety for users who had an unsuccessful mission and wanted to try again?

The exhibit would also need to attract users when sessions weren't running. This presented an opportunity to distinguish the autonomous behavior of the PER from that of direct-drive machines. Rather than controlling every move, users design a small instruction set for the PER to execute, much as NASA scientists control the MERs.

Each session would start with the PER building a 360-degree panoramic view of the terrain from its current position. To avoid confusion, we would need to help users quickly understand the spatial relationships by coordinating their direct view of the PER in the Mars yard with both the image sent from the rover’s point of view and the overhead (satellite) map view.

Sharing the individual experience with people waiting in line or watching the exhibit became an important strategy for achieving the educational goals. Museum venues had the option to provide additional information but the team could not depend on it. Our educational goals would need to be met entirely through spontaneous interaction with the exhibit.

The PER needed a "walk-up-and-work" user interface. A design that engaged the public in watching active sessions could also deliver training just prior to use, reducing users' apprehension about failure.

Finally, adjustments to the PER hardware, software and firmware needed to be completed in parallel with an improved user interface due to the constrained time frame, necessitating seamless team interactions.

Identify Key Improvement Opportunities

After getting a demonstration and a crash course in robot autonomy from Illah Nourbakhsh and Emily Hamner, the two designers evaluated the existing design with the goal of identifying the greatest opportunities for improvement. Their analysis and recommendations were delivered in a 14-page report in which the designers organized improvement opportunities into the following groups:

  • Overall Experience
  • Orientation
  • Interaction Clues
  • Language
  • Visual Presentation
  • Typography

[Figure: ins02.gif]

Each category included more detailed comments in which problems were coupled with possible solutions visualized with quick sketches done in Adobe Illustrator. The sketches were juxtaposed with current screens.

The report was designed to elicit quick reactions and gain consensus about the specific problems to be solved and the order in which they could be solved. As an example of the trust the group quickly established, the team accepted the risk that the attract loop on the initial screens would have to wait until last to be designed. Prioritizing issues as a team enabled us to focus on the critical improvements first and implement them quickly, then use the remaining time to iteratively refine them. For example, the interactions for specifying distance and direction went through five design proposals within a three-week period.

To facilitate the design process, the designers pinned ten existing screens like a filmstrip onto a long roll of paper to underscore the need for continuity and narrative. The screens were organized into four segments that would establish the mental model we hoped users would form as they waited in line for their turn and eventually created their mission. We converged on this model (sketched in code after the list):

  • identify a rock in the terrain represented in the UI
  • specify distance and direction coordinates
  • execute the mission with real-time feedback
  • analyze the target rock and end the mission
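
To make this model concrete, the sketch below walks a hypothetical session through the four segments in order. It is only an illustration of the flow described above; the function names and the fields of the mission plan are our own assumptions, not the actual PER software.

    # Illustrative sketch of the four-segment session model.
    # All names here are hypothetical, not the real PER code.
    from dataclasses import dataclass

    @dataclass
    class MissionPlan:
        direction_deg: float   # heading toward the target rock, in degrees
        distance_m: float      # straight-line distance to the target, in meters

    def run_session(ui, rover):
        # 1. Identify a rock in the terrain represented in the UI.
        target = ui.choose_target_rock()

        # 2. Specify distance and direction coordinates.
        plan = MissionPlan(direction_deg=ui.read_direction(),
                           distance_m=ui.read_distance())

        # 3. Execute the mission with real-time feedback.
        for status in rover.execute(plan):   # images and messages stream back
            ui.show_feedback(status)

        # 4. Analyze the target rock and end the mission.
        ui.analyze_for_signs_of_life(rover.capture_target_image(), target)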

Looking at the current screens from this perspective revealed improvement opportunities that affected the whole user experience.

These simple methods and materials had several interesting consequences. First, they made the (re)design process accessible to the entire interdisciplinary team. Second, this low-tech form invited others into the UI design effort; team members worked together to fill in the details of the general structure. Disagreement could erupt spontaneously and be resolved quickly, often resulting in entirely new ideas. Fittingly, the rolls themselves reminded the team of the need to also think of themselves as filmmakers.

[Figure: ins03.gif]

Storyboarding

The team agreed on the key improvement opportunities and began work on two different narrative concepts, "game" and "mission," to make the user interface more provocative and expressive. Difficult usability issues, like presenting spatial manipulation controls, would be tackled after establishing a narrative pattern.

Shortly after presenting the report, the designers developed more comprehensive hand-drawn storyboards and reviewed them with the team.

Hand drawing allowed for rapid development and lowered the barrier to entry for user interface design. All group members drew solutions and then drew on top of drawings created by others—an early indicator of the healthy sense of trust that grew among technical team members and the interface designers. This openness was reciprocated by the programmers when the user interface designers suggested changes that required software changes.

Hand-Drawn Paper Prototypes

The team photographed key (hand-drawn) frames from the storyboards and uploaded them to a project Web site that was accessible by the whole team. Each screen was annotated with notes about the unique role it needed to play in the narrative and the larger user experience.

Agreement on the overall narrative theme, as well as the goals of each segment and the individual screens, was achieved within days. Establishing these requirements quickly allowed the team to document shared success criteria, and that shared vision let team members contribute their unique skills to the project effectively and efficiently.

Spatial Manipulations

When designing a rover mission, users must specify the distance and direction that the PER needs to travel to reach a target rock in the Mars terrain. To decide these values, they must coordinate among several different visual representations of space: the physical Mars terrain, a panoramic image, and an overhead "satellite" image.

Mentally connecting the panoramic and satellite images is a difficult task. In initial user tests with pre-teen and early-teenage children, users had a hard time identifying the same rock in both the panoramic image and the satellite image. Even adults struggled with this task during early user testing.

Orienting oneself in space can be done by referencing landmarks, so just as the MERs on Mars orient themselves using the sun, our users would be able to see the sun in all views of the Mars yard. The sun would be visible on the hip wall opposite users from their vantage point while creating a mission, and it would also be clearly visible in both the panoramic image sent from the PER and the overhead satellite map. While the task remained challenging, this shared landmark let real-world statements like "I want to go to the rock that's to the left of the sun" be translated more easily into direction and distance commands. We also emphasized to each venue that, in addition to the sun, unique rock sizes and shapes and other markings on the hip wall visible from where users sit or stand (resembling the Mars terrain, of course) would help users orient themselves.

The panoramic image is different for every mission since it depends on the position of the PER. The overhead satellite image, by contrast, is always static. We chose not to generate a real-time overhead image: while that would have made things easier by showing the rover's actual location, height limitations at each of the venues and the available development time ruled it out.
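
As an illustration of the translation users perform, the sketch below turns two points picked on the static overhead map, the rover's position and the chosen rock, into the distance and direction a mission needs. The coordinate convention, the map scale, and the function name are assumptions made for this example, not details of the exhibit software.

    import math

    METERS_PER_PIXEL = 0.01   # assumed scale of the overhead "satellite" image

    def plan_from_map(rover_xy, rover_heading_deg, target_xy):
        """Convert two map points into a distance/direction command."""
        dx = target_xy[0] - rover_xy[0]
        dy = target_xy[1] - rover_xy[1]

        distance_m = math.hypot(dx, dy) * METERS_PER_PIXEL

        # Absolute bearing of the target on the map, then relative to the
        # rover's current heading, normalized to the range -180..180 degrees.
        bearing_deg = math.degrees(math.atan2(dy, dx))
        direction_deg = (bearing_deg - rover_heading_deg + 180) % 360 - 180
        return distance_m, direction_deg

    # Example: rover at (120, 200) facing 90 degrees, target rock at (180, 260).
    print(plan_from_map((120, 200), 90.0, (180, 260)))   # -> (about 0.85 m, -45 degrees)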

[Figure: ins04.gif]

Real-time Feedback

Displaying feedback messages in real time helped expose the autonomous behavior of the rover and was essential for achieving our educational goals. The feedback was very basic in early designs; we decided that a more personal tone would reinforce the concept of teamwork between the user and the PER in completing the mission.

The designers worked with Eric Porter to understand the tasks that the PER actually executes and the decisions it makes based on the data it collects. Using this information, they crafted feedback messages to be displayed alongside the real-time images from the rover executing the mission. We found that this worked especially well for parents or teachers visiting the exhibit with children: the real-time feedback served as a script to guide them through the steps of each mission.
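
The pattern can be sketched as a simple mapping from low-level rover status events to conversational messages shown next to the live images. The event names and wording below are hypothetical stand-ins for the messages the designers actually wrote.

    # Hypothetical mapping of rover status events to personal-tone feedback.
    FEEDBACK_MESSAGES = {
        "turning":        "I'm turning {angle} degrees toward the rock you chose.",
        "driving":        "Driving {distance} meters. I'll watch for obstacles on my own.",
        "obstacle":       "Something is in my way, so I'm steering around it myself.",
        "target_reached": "I've reached your target rock and am lining up my sensors.",
        "analyzing":      "Collecting data now. Let's see whether this rock could harbor life.",
    }

    def feedback_for(event, **details):
        """Return the message to display alongside the rover's live image."""
        template = FEEDBACK_MESSAGES.get(event, "Working on your mission...")
        return template.format(**details)

    print(feedback_for("driving", distance=1.2))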

The museums wanted some part of the exhibit to attract users from a distance. We aimed for the PER to explain itself to users through the attract loop, an approach with many advantages. First, it acted as an "advance organizer" for the exhibit, summarizing the narrative. We used actual screens to foreshadow how users would interact with the PER to specify distance and direction immediately prior to use; we believe this reduced the potential for the kind of surprise that often undermines users' confidence. It also allowed people to work through the difficult spatial manipulations at their own pace, as anonymous members of the public.

Balancing Drama, Gamesmanship, and Learning

With approximately 50,000 interactions in the first four months of operation and a roughly 95-percent success rate, the final PER user interface design is highly usable, yet it also blends the drama of a film with the competitiveness of a game to provoke wonder in users and spectators. The entire team evaluated designs for interactions and user interfaces with a common vision to deliver an experience that included aspects of drama, gaming, and learning.

A simple trackball-and-button interface already suggested a game, but not a direct-drive game. Rather, users competed against a clock that counted the time it took them to develop a mission for the PER. The actual use experience was then followed by learning delivered through live docents or static displays that explained the temporal challenge actual mission scientists face in programming the MERs in time to execute the next day on Mars. Competition becomes a vehicle for users to team up with an autonomous robot to learn, rather than to beat another user. Because it all happens in public, everybody learns; even those who lurk on the periphery can participate without ever operating the user interface.

Drama is used at the start of each session, when the rover sends a panoramic image to the user. A canned animation shows a top view of the rover rotating 360 degrees to capture a panoramic view of Mars from the rover's point of view. This animation is coordinated with the gradual display of the actual panoramic image taken by the PER. We believe this drama helped users place themselves in the Mars yard and begin to regard the PER as a tool for extending their reach into the unknown.
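
One way to picture the coordination: as the canned top-view animation sweeps through 360 degrees, the visible width of the real panorama grows in proportion to the sweep. The helper below is a minimal sketch under that assumption; it is not the exhibit's display code.

    def panorama_reveal_width(sweep_deg, panorama_width_px):
        """Pixels of the real panorama to show when the canned animation has
        rotated the top-view rover by sweep_deg degrees (0..360)."""
        fraction = max(0.0, min(sweep_deg, 360.0)) / 360.0
        return int(round(fraction * panorama_width_px))

    # Halfway through the animation, half of the panorama is visible.
    print(panorama_reveal_width(180, 2048))   # -> 1024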

The team developed three different prototypes for testing with children and adults in less than three weeks. The first tests revealed an unanticipated problem: users were confused about the order in which they needed to specify distance and direction, perceiving a connection between the two that did not exist. This created extra work that consumed too much time and lowered the confidence of people watching in public and awaiting their turn.

[Figure: ins05.gif]

Results

The final experience delivered to PER users was only possible because of changes made throughout the system, not just the user interface. User interface improvements made clear the opportunity to fine-tune other subsystems, such as the feedback messages and the camera control software; iterative improvement to the user interface drove improvements in parts of the system that might not otherwise have been changed. During user testing, for example, one of the graphic designers observed that the robot would sometimes end up near a rock but report that it could not find the target, which frustrated users. Based on this observation, the rover was reprogrammed to perform two scans rather than one, allowing it to locate rocks within a broader range.
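
The reprogramming can be pictured as a simple retry loop: if the first scan does not find the target rock, the rover scans again before reporting failure. The function names below, and the idea that a later attempt covers a wider area, are illustrative assumptions rather than the actual rover firmware.

    # Illustrative retry logic for locating the target rock (hypothetical API).
    def locate_target(scan, max_attempts=2):
        """Scan for the target rock, retrying once so that rocks in a broader
        range can still be found before the rover reports failure."""
        for attempt in range(max_attempts):
            rock = scan(attempt)      # a later attempt may cover a wider area
            if rock is not None:
                return rock
        return None                   # only now report "target not found"

    # Example with a fake scanner that succeeds on the second attempt.
    fake_scan = lambda attempt: {"x": 1.4, "y": 0.3} if attempt == 1 else None
    print(locate_target(fake_scan))   # -> {'x': 1.4, 'y': 0.3}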

To measure the success of the PER exhibit against the educational goals of increasing understanding of NASA's robotic exploration missions and of robot autonomy, a team from the University of Pittsburgh's Learning Research and Development Center analyzed the exhibit at the Exploratorium and the Smithsonian Air and Space Museum. The results show that family groups interacting with the exhibit discussed the Mars missions, compared the PER with the MERs, talked about communicating with robots, and discussed robot technology and autonomy. Since discussion at museum exhibits reflects the learning that is taking place [1], this indicates that the exhibit was successful in educating its audience. The analysis also included interviews with children after they had used the exhibit. Many children were able to connect their exhibit experience with the MER missions, although their understanding of autonomy was inconsistent.

Acknowledgements

The PER project would not have been possible without funding and equipment provided by NASA/Ames Autonomy and Intel Corporation. The team acknowledges efforts from Ellen Ayoob, who provided an independent review of storyboards and user interface language. We would also like to acknowledge the children, students, and CMU staff who allowed us to observe them using the PER. Our thanks go as well to the following individuals for their contributions and support: Debra Bernstein, Jim Butler, Daniel Clancy, Kevin Crowley, Edward Epp, Rachel Gockley, Jean Harpley, Thomas Hsiu, Anuja Parikh, Kristen Stubbs, and Peter Zhang.

References

1. Leinhardt, G., Crowley, K. & Knutson, K. (Eds.). Learning Conversations in Museums. Mahwah, NJ: Lawrence Erlbaum Associates (2002).

Authors

Emily Hamner is a research assistant at the Robotics Institute at Carnegie Mellon University. She received her BS degree in Computer Science from Carnegie Mellon University in 2002. Emily currently manages the Mobile Robot Programming Lab and serves as project coordinator for the PER project. Her research focuses on interface design, human-robot interaction, and the application of robots in education.    etf@andrew.cmu.edu

Mark Lotter is a designer and founding partner in LotterShelly, a design consultancy which helps clients envision, design, and build products for research as well as commercial applications. Prior to forming LotterShelly, Mark worked for five years as a senior visual interface designer at MAYA Design Group in Pittsburgh, PA and worked as a designer at H2 design and the Software Engineering Institute at Carnegie Mellon University.    mark@lottershelly.com

Illah Nourbakhsh is an associate professor of Robotics in The Robotics Institute at Carnegie Mellon University. He received his Ph.D. in computer science from Stanford University in 1996. He is co-founder of the Toy Robots Initiative at The Robotics Institute. He is a founder and chief scientist of Blue Pumpkin Software, Inc. Illah recently authored the MIT Press textbook, Introduction to Autonomous Mobile Robots.    illah@cs.cmu.edu

Skip Shelly is a designer and founding partner in LotterShelly, a design consultancy which helps clients envision, design, and build products for research as well as commercial applications. Prior to forming LotterShelly, Skip worked for five years as a senior information designer at MAYA Design Group in Pittsburgh, PA. Prior to that, he was an art director for nine years at the Software Engineering Institute at Carnegie Mellon University.    skip@lottershelly.com

©2005 ACM  1072-5220/05/0300  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
