Andrew Hunsucker, Emily Baumgartner, Kelly McClinton
Augmented reality (AR) is an emerging technology that could soon become an established paradigm in computing. It incorporates hand gestures and voice commands into the user's experience, bypassing traditional input tools such as a touchscreen, mouse, or keyboard.
This paradigm creates challenges not only for interaction designers, who must invent new ways for users to interact intuitively with these devices, but also for researchers who must evaluate the resulting experiences. Because AR takes place in the real world and its experiences are visible only to a single user, researchers have a difficult time correlating their external observations with what the user is seeing. In this article, we explore lessons learned over eight weeks of usability testing a HoloLens experience in a museum: both what we learned about conducting this research within the museum's environmental constraints, and how we made sense of users' reactions in order to improve our design. Interestingly, the problems we faced as researchers provided insights into problems designers will have to solve to make AR viable in the real world.
Our exhibit of the Resting Satyr statue was featured in the center of the Eskenazi Art Museum's gallery room, surrounded by an assortment of other artifacts displayed in glass showcases and on the walls. The original Resting Satyr, an ancient Greek work likely by sculptor Praxiteles, has a large number of replicas compared with other statues from its period. The exhibit's objective was to enable users to compare the physical Resting Satyr statue alongside some of its copies, which are spread all over the world. To do this, we leveraged AR to display virtual copies, which were viewable through the HoloLens in the form of 3D models, allowing users to compare the physical artifact to its virtual copies. We studied this interaction in a museum context. This method of comparing a physical object to digital representations of similar ones is an interaction with applications in many other contexts. Examples include zoos, where patrons could view animals that they normally would not be able to get close to, and the medical field, where a doctor could compare a patient's visible symptoms with the symptoms of a virtual patient.
Harmonizing the museum's artistic and historic presentation with this emerging technology required great care, and we had to address a variety of novel problems throughout our design process.
Although our team was aligned on the strategy and goals, the usability testing proved to be the most difficult part of the project. The luxury of conducting our tests in a lab environment was not a realistic option. There were several reasons we needed to test in the actual gallery with the physical piece of art. First, we needed to allow the user to compare the real and virtual statues; a stand-in object, like a box, would not suffice. Second, asking museum staff to relocate a priceless artifact to a separate location for testing would have been unreasonable. As is often the case, testing in the context of use also had benefits. For example, we did not receive immunity from the museum's protocol for its patrons; the guards kept a close eye, ready to interrupt our tests if our participants' gestures threatened the artifacts. Of course, this reflects a user experience issue in its own right.
As with most emerging technologies, the HoloLens presented a number of technical limitations that created experiential complications and distractions. For example, while wearing the headset, one's field of view is noticeably diminished: the available view is constrained to the center, omitting the periphery. This makes it easy to overlook parts of the experience that exist outside the central view. Because peripheral vision is nonexistent in virtual space, we discovered that users often missed important details during the experience, such as the moment when virtual statues were replaced with new versions or which buttons were being activated, simply because they had not pivoted their heads. This limitation was noticed at both a design level and a user level; complaints about the lack of peripheral vision became our most consistent piece of feedback. We attempted to apply traditional screen-based methods such as clearly labeled buttons, animations, and action-based colors, but users commonly missed these as well. What we learned (and a problem we did not solve during our testing) is that designers will have to develop new ways of drawing attention in AR.
A user's view of our AR exhibit through a HoloLens. Pictured is the physical Resting Satyr statue positioned next to two digital representations, along with supporting interactive tools (i.e., audio, podiums).
Regarding research methodology and tactics, one unexpected challenge was that, initially, we could not see what the user was seeing through the headset. This was a major departure from traditional usability testing, for which myriad screen-sharing and recording tools are available. However, we did eventually learn of a Microsoft app that allowed us to record the user's view during the test. Unfortunately, this recording degraded the quality of the experience, and as the app grew more complex and gained more features, we simply stopped using the recording feature. By that point, however, everyone on the team knew exactly what the experience looked like and could predict what the user was seeing based on where they were positioned. This suggested to us that, more than ever, researchers will need to carefully observe the actions of their users and have a strong grasp of what the user is seeing as they move to different parts of the experience. Conducting research with this awareness provides a helpful guide for thoughtful follow-up questions and mitigates confusion during the course of the study.
Our process was simple. Each week we performed testing, usually with a specific goal in mind related to questions we wanted answered. We then created a report of user reactions, bug fixes, and feature requests for our art history expert and developer in preparation for the next week. Each round of testing offered new insights; this iterative feedback was vital in shaping the design.
Although augmented reality is situated in the real world, its technology can still create an overlay that distracts from it. Much like smartphones, this technology could cause accidents or lead users to unintentionally ignore real hazards. Our testing took place in the center of the gallery, with no ability to change the existing layout. We did not initially know whether users would cautiously walk around the holograms as if they were real objects or carelessly walk through them. Because of the safety issues, we enforced protocols designed to prevent harmful participant behavior, such as warning users of the dangers in advance and disrupting the AR experience with warnings if participants wandered too close to a showcase. However, we quickly learned that the majority of participants responded to the artificial statues with authentic caution. This inhibition had a major impact on their experience. We noted in our first tests that participants were unwilling to view the exhibit from the rear because they felt the virtual statues created a barrier. While this suggests how compelling the AR experience was, designers will have to consider what behavior to foster and what behavior to discourage based on research insights and observations, and how to do so without degrading the experience.
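The proximity warning described above amounts to a simple distance check between the tracked user position and the fixed showcases. The sketch below illustrates the idea; the coordinates, safety radius, and function names are hypothetical, not taken from our exhibit's actual implementation.

```python
import math

# Illustrative showcase positions on the gallery floor (meters) and a
# safety radius -- these values are assumptions for the sketch.
SHOWCASES = [(1.5, 0.0), (-2.0, 3.0), (0.0, -2.5)]
SAFETY_RADIUS_M = 0.75

def too_close(user_xy, showcases=SHOWCASES, radius=SAFETY_RADIUS_M):
    """Return True if the tracked user position is within the safety
    radius of any showcase, signaling that a warning should interrupt
    the AR experience."""
    ux, uy = user_xy
    return any(math.hypot(ux - sx, uy - sy) < radius for sx, sy in showcases)
```

In practice the check would run every frame against the headset's tracked position, with the warning overlay shown while `too_close` returns True.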
To showcase more statues without crowding the walking space around the exhibit, we placed two statues in a single position and added a button to toggle between them. To achieve this, we had to decide where to place the button in the augmented space. We approached the problem by following accepted usability guidelines for "on-screen" experiences, since AR practices are still developing. We grouped the button with the object it controlled, initially placing it on the far left of the exhibit. However, because of the HoloLens's narrow field of view, participants were unlikely to see the button at all. We then placed the button in the center of the exhibit for better visibility, but in that case, activating the button did not guarantee that participants would notice the statue change. We also had issues with the look and feel of the button. No matter what call to action was placed on the button, users were not sure what type of interaction was required.
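The toggle interaction itself is straightforward: two models share one pedestal position, and activating the button swaps which is visible. A minimal sketch, with placeholder model names rather than the exhibit's actual assets:

```python
class StatuePosition:
    """One pedestal position that cycles through several statue models.
    Model names here are placeholders for illustration."""

    def __init__(self, models):
        self.models = models  # e.g., ["replica_a", "replica_b"]
        self.index = 0        # which model is currently visible

    @property
    def visible(self):
        return self.models[self.index]

    def on_button_activated(self):
        # Cycle to the next model. As we found in testing, this is the
        # moment that needs a strong visual cue, or users miss the swap.
        self.index = (self.index + 1) % len(self.models)
        return self.visible
```

The logic is trivial; the hard design problem, as described above, is making sure the user actually sees both the button and the resulting change.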
This revealed a general lack of knowledge about AR among our participants; they simply did not have the comfort level to experiment in AR. With a smartphone, users by now know the repertoire of possible actions (e.g., tap, swipe, drag, hold) and will attempt them when uncertain. In contrast, in the AR experience, users don't have the interactive memory (i.e., they aren't familiar with the types of interactions that are possible in AR) to support such experimentation. In addition, the fact that the AR experience happens in the real world increases the degrees of freedom designers have in deciding where to place interactive controls. This lack of natural constraints is challenging for both designer and user. Designers will have to create new onboarding tools to help users acclimate to this new interactive medium, complemented by new interactive norms in the AR space.
In studying how users compared virtual and physical objects, we learned a few lessons as well. For instance, the virtual statues often overpowered the physical statues in the darkly lit gallery. This privileged the virtual statues, leading to greater interest from users. In addition, while the art historians we tested invariably recognized the significance of the exhibit, those with less knowledge were unable to relate the physical statue to its virtual replicas. Helping patrons understand the artistic significance of an exhibit continues to be a challenge for museums, and AR as a technology cannot solve this problem alone. It can, however, give museum curators a new tool to develop thought-provoking exhibits.
Augmented reality is an emerging technology and possibly a new paradigm in computing. Understanding user expectations and the perceived value of AR technology will be essential for the future. Throughout this project, we have experienced the challenges of usability testing in AR. For example, while many usability studies are performed in a controlled lab setting, we feel it is ideal to test AR experiences in the real setting. Because AR experiences are embodied, it is critical to understand how a user will interact with the space around them. Is there enough space? Will the 3D models you're working with contrast with the background? If you're in a public space, will the interactions you've designed interfere with other people's use of the space?
For AR designers and researchers, the biggest challenge to overcome is the user's discomfort with the technology. When smartphones appeared, they had a history of similar devices that helped users acclimate: flip phones, touchscreens, and music players. The smartphone worked because it combined many different technologies into something familiar yet revolutionary. It is possible AR will follow the same incremental path. Our work merely scratched the surface of user experience in AR and how designers will be able to engage their users in this new space. But it is clear that people immersed in screen-based technology will need to be carefully guided into this form of computing.
We see usability testing as an ideal way to understand user reactions to this medium. Until AR becomes a ubiquitous technology, it will continue to be new and strange to many users. Through usability testing, designers can gain an understanding of how interaction methods and experiences will work for different user groups. Engineers and developers will continue to make the technology smaller, more efficient, and more powerful. But it is up to the designers and researchers to make it user-centered.
Andrew J. Hunsucker is a Ph.D. candidate at Indiana University studying design communication and augmented reality. He has previously published at CHI and has written for XRDS magazine. He expects to graduate in May 2019. firstname.lastname@example.org
Emily Baumgartner gathers and translates user needs by designing and executing meaningful research methods at innovatemap, a digital product agency. She studied communication and returned to school to earn her M.S. in HCI from Indiana University which has helped her truly understand the user experience and champion the design of strategic, useful products. email@example.com
Kelly Elizabeth McClinton focuses on the application of computational methods to material culture studies. During her time at Indiana University, this research has extended to the ancient collection in the Eskenazi Museum of Art, where she has developed digital exhibitions with the acting ancient curator. Her current research focuses on Roman domestic spaces and the analysis of visual programs within the houses of Pompeii. firstname.lastname@example.org
©2018 ACM 1072-5520/18/07 $15.00