What is design?

XVII.5 September + October 2010
Page: 68

Relying on failures in design research


Authors:
Nicolas Nova

Sitting next to the automatic door between train coaches in Switzerland provides a fantastic opportunity to observe the range of behaviors when people interact with a very basic instance of ubiquitous computing. Interestingly, most Swiss trains have sensors located in the upper part of the doorway. Experienced travelers know they have to wait for their presence to be detected, while nervous commuters wave their hand at the sensor to open the door. However, a longer period of observation reveals plenty of less-than-fluid usage: Elderly travelers try to find an (absent) handle; some people in a rush bang their head against the door because the sensor did not have time to detect them; angry knowledge workers (who know how the sensor works) wave their arm, but the door fails to open because of some momentary flaw with the sensor. Meanwhile, kids, luggage, or even the combination of both lead to even more complex situations.

Observing frustration with the automatic door is an example of how investigating accidents within a larger process can be inspiring from a design viewpoint. Surfacing people’s problematic reactions when confronted with invisible pieces of technology reveals their mental models and can have implications for design. In the case of the automatic door, these observations show that previous encounters with non-automatic doors shape our expectations of what a door is and how it should work. Noticing these problems can also lead to understanding how the sensor calibration should be tuned, or that the sensors’ presence should be made more visible.
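As a rough illustration of the timing problem, here is a minimal sketch of a presence-activated door with a fixed detection-and-actuation delay. The numbers are illustrative assumptions, not measurements of actual Swiss trains, but they show why someone in a rush can reach the door before it opens:

```python
# Minimal sketch of a presence-activated door with a detection delay.
# All values are illustrative assumptions, not measurements of real trains.

SENSOR_RANGE_M = 1.5     # distance at which the overhead sensor "sees" a person
DOOR_OPEN_DELAY_S = 1.0  # detection + motor time before the doorway is clear

def door_open_in_time(walking_speed_mps: float) -> bool:
    """Return True if the door is fully open by the time the walker arrives."""
    time_to_reach_door = SENSOR_RANGE_M / walking_speed_mps
    return time_to_reach_door >= DOOR_OPEN_DELAY_S

for speed in (0.8, 1.4, 2.5):  # a stroll, an average walk, a person in a rush
    print(f"{speed} m/s -> door open in time: {door_open_in_time(speed)}")
```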

From Observing Failures to Provoking Them

I am interested in how users appropriate technology, and especially in product failures and prototype flops. As a user experience researcher, I am intrigued by failures of all sorts. User mistakes, errors, and accidents (as in the example above) are pertinent because of their design implications.

Curiously, as pointed out by various authors [1, 2], there is little field research about design flops and failures. This is surprising, given design’s long-standing interest in avoiding or fixing failures, as in Herbert Simon’s famous quote: “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones” [3]. To some extent, preferring one solution over another is a matter of preventing accidents and mistakes.

Furthermore, the idea that design can be inspired and fueled by people’s practices has become more common with the establishment of approaches such as user-centered design. We observe users and investigate their needs and interests, or we try to understand consumers’ motivations, and then turn these findings into insights and design concepts. But as we saw in the automatic-door example, failures and mistakes matter too, because they are implicit signs of a need or problem that requires a solution. Examining failures reveals what is commonly referred to in HCI as the “gulf of execution,” i.e., the gap between the actions a user expects will achieve a goal and the actions the system actually requires [4].

However, my quirky mind-set left me wondering about the role of failure in design research: If problems and mistakes are so interesting and insightful, why not be a bit bolder and enlist them as a design tactic? I am suggesting the conscious design of “questionable” prototypes to investigate user experience. Drawing on the “probe” metaphor [5], the approach here is to use “anti-probes”: deliberately flawed embodiments of technology that can be shown to people in order to engage them in open-ended ways.

In doing so, what kind of insights can be derived from leading people in the wrong direction?

Wrong Positioning

The first example of this approach stems from research I conducted with my colleague Fabien Girardin when we worked at the Swiss Federal Institute of Technology in Lausanne (EPFL). We designed a location-based game called CatchBob! to run geolocation field studies. We were interested in the user experience of positioning technologies: how people react when they see their own location, and how they react to awareness of their contacts’ locations.

An interesting example of provoking failures in the context of location-based services is to position people in the wrong place (close to the proper location, a bit farther away, much farther, etc.) and observe their reactions. Incorrectly positioning the user and locating a friend in the wrong place on the display enabled us to test different “acceptable” accuracies of positioning. This helps the researcher understand the radius of the area within which people are comfortable being located (for themselves and for others). Evaluating users’ reactions to a wrong location is a proxy for understanding their mental model: Should positioning be accurate? What is an acceptable uncertainty? Could this be an iterative process to define a “comfort zone” in the context of location-based services?
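To make the tactic concrete, here is a minimal sketch of how a displayed position could be deliberately offset by a configurable error radius before being shown to players. The function, coordinates, and radii are illustrative assumptions; this is not the actual CatchBob! code.

```python
import math
import random

# Sketch of an "anti-probe" for location-based services: deliberately degrade
# the displayed position by a chosen error radius. Names and values are
# illustrative, not taken from CatchBob!.

EARTH_RADIUS_M = 6_371_000

def degrade_position(lat: float, lon: float, error_radius_m: float):
    """Shift a lat/lon fix by roughly error_radius_m meters in a random direction."""
    bearing = random.uniform(0, 2 * math.pi)
    d_lat_rad = (error_radius_m * math.cos(bearing)) / EARTH_RADIUS_M
    d_lon_rad = (error_radius_m * math.sin(bearing)) / (
        EARTH_RADIUS_M * math.cos(math.radians(lat))
    )
    return lat + math.degrees(d_lat_rad), lon + math.degrees(d_lon_rad)

# Show the same fix with increasing error, e.g. to probe the "comfort zone".
true_fix = (46.5191, 6.5668)  # an arbitrary point near Lausanne, for illustration
for radius_m in (10, 50, 200, 1000):
    print(radius_m, degrade_position(true_fix[0], true_fix[1], radius_m))
```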

Wii Superpower

In another project I examined failures in console games. Games are especially interesting when exploring failures because difficulties and hurdles can be playful and intriguing from the players’ standpoint. Unlike the design of business applications, game design is not necessarily a matter of making everything as easy as possible. Failure is acceptable when framed as play, simply because it is a prominent component of a game’s mechanics. We therefore thought video games could be an interesting platform for exploring failures in the context of interaction design.

In this project, we looked at the Nintendo Wiimote and the sensitivity calibration of its accelerometers. When preparing the software that used the Wiimote (and Nunchuk) sensors, programmers intentionally coded the effects of the gestures to be highly sensitive to motion. Small movements made by players had an extra-large influence on the character’s movements in the virtual environment. At first, this was done to gain an understanding of how people would react to sensitivity so we could fine-tune it properly. But play tests revealed that players liked this utterly wrong calibration because it gave them a sort of superpower. We observed children gesturing and shouting dramatically: The on-screen reaction was more compelling than what they had experienced before. In this example, provoking failures was a way to disrupt the way game designers thought about players’ interests.
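The idea can be sketched as a single gain factor that turns small accelerometer readings into exaggerated on-screen motion. The gain values and the sample reading below are illustrative assumptions, not Nintendo’s API or our actual game code:

```python
# Sketch of a deliberately over-sensitive motion mapping used as an "anti-probe".
# Gain values and the sample reading are illustrative assumptions.

NORMAL_GAIN = 1.0       # a "sensible" calibration
SUPERPOWER_GAIN = 8.0   # the intentionally wrong, over-sensitive calibration

def character_velocity(accel_g: float, gain: float) -> float:
    """Map a raw accelerometer reading (in g) to in-game velocity units."""
    return gain * accel_g

small_flick = 0.3  # a tiny wrist movement, in g
print("normal calibration:    ", character_velocity(small_flick, NORMAL_GAIN))
print("superpower calibration:", character_velocity(small_flick, SUPERPOWER_GAIN))
```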

So What?

Failures result from incompatibilities between the way technical objects are designed and the way people actually perceive those objects, think, and act. Provoking and observing failures can be an insightful tactic in design research. This approach nevertheless raises the question of which failures can or should be provoked. Choosing which problems can be tested on users is obviously conditioned by social, technical, and ethical constraints.

Within those constraints, user experience researchers can start a different kind of dialogue with users, one that surfaces inspirational data about how people behave, adjust, and solve problems. Knowing how users react to problems can lead to insights about how to prevent these failures from happening, how to communicate malfunctions (i.e., error messages), or simply how to find solutions so users are not too bothered. In addition, fieldwork in the context of misuse (or flawed use) can shed light on original design possibilities and questions.

References

1. Latour, B. Aramis, or the Love of Technology. Cambridge: Harvard University Press, 1996.

2. Gaver, W., Bowers, J., Kerridge, T., Boucher, A. and Jarvis, N. “Anatomy of a Failure: How We Knew When our Design Went Wrong, and What We Learned From It.” In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems. 2213–2222.

3. Simon, H.A. The Sciences of the Artificial. Cambridge: MIT Press, 1969.

4. Norman, D. and Draper, S. eds. User Centered System Design: New Perspectives on Human-computer Interaction. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1986.

5. Gaver, B., Dunne, T. and Pacenti, E. “Cultural Probes.” interactions 6, 1 (1999): 21–29.

Author

Nicolas Nova is a researcher and consultant in the domains of user experience and interaction design, undertaking field studies to inform and evaluate the creation of innovative products and services. At Lift Lab, he works for clients such as Orange, Nespresso, Nokia Design, and UBS. He is also the editorial director of the Lift Conferences, and he teaches design research at the Geneva University of Arts and Design (HEAD-Geneva) as well as the Ecole Nationale Supérieure de Création Industrielle (ENSCI, Paris). Nova holds a Ph.D. in human-computer interaction from the Swiss Federal Institute of Technology in Lausanne (EPFL).

Footnotes

DOI: http://doi.acm.org/10.1145/1836216.1836234

©2010 ACM  1072-5220/10/0900  $10.00


 
