Features

XXII.5 September-October 2015
Page: 44

Benefiting from legacy bias


Authors:
Anne Köpsel, Nikola Bubalo


In “Reducing Legacy Bias in Gesture Elicitation Studies” (May–June 2014), Meredith Morris and colleagues presented techniques for reducing the bias that comes from users’ “experience with prior interfaces and technologies, particularly the WIMP (windows, icons, menus, and pointing) interfaces that have been standard on traditional PCs for the past two decades” [1]. But isn’t it exactly this bias that should help us design good gestures?


But let’s start at the beginning. More and more of the systems around us are no longer controlled by buttons and a mouse; instead, they are controlled by hand gestures. These can be used on small surfaces such as smartphone screens, as well as in touchless systems with infrared sensors such as the Microsoft Kinect. With the growing number of possible application domains, it is necessary to design “good” gestures for these systems. Until now, designers have been the main creators of these gestures. The results have not always been “good,” which according to Morris et al. means meeting “design criteria such as discoverability, ease-of-performance, memorability, or reliability” [1], because the gestures are developed by small groups that do not represent the majority of the target audience.

One possible solution for this issue is gesture elicitation studies, in which users themselves define gestures instead of a small group of designers [2]. But this solution is also not free of problems, such as the aforementioned legacy bias. Morris et al. describe how to adapt the elicitation studies to reduce this bias. Their approach includes three techniques (three Ps): production, priming, and partners.

Production means that “requiring users to produce multiple interaction proposals for each referent may force them to move beyond simple, legacy-inspired techniques to ones that require more reflection.” Their pilot study showed that, on average, users are satisfied not with their first proposal for a gesture but with their third, after they have had the chance to rethink their choice. Priming aims to give users an idea of the possibilities they have when generating gestures; it can be implemented by letting users watch demonstrations to get a feeling for how to use a new technology. Partners, the third technique, suggests letting users work together with others to build on one another’s ideas. The results of their study support their claim that, given the stated goals of elicitation studies, legacy bias should be reduced. But, again, isn’t it this very bias that sometimes makes it easier to design new interactions?
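The production technique can be sketched as a simple session loop. The following is a minimal, hypothetical illustration, not the procedure from the study itself; the function names, the referents, and the toy participant model are ours:

```python
# Hypothetical sketch of the "production" technique: each participant
# must propose several gestures per referent, then pick the one they
# ultimately prefer. All names here are illustrative.

def run_production_round(referents, propose, pick_preferred, n_proposals=3):
    """For each referent, collect n_proposals gesture ideas and
    record which proposal the participant prefers after reflection."""
    preferred = {}
    for referent in referents:
        proposals = [propose(referent, i) for i in range(n_proposals)]
        preferred[referent] = pick_preferred(referent, proposals)
    return preferred

# Toy simulation: the participant's first idea is the legacy-inspired
# one, but on reflection they prefer their third (last) proposal.
referents = ["scroll-down", "go-back", "close-window"]
propose = lambda ref, i: f"{ref}-idea-{i + 1}"
pick_preferred = lambda ref, proposals: proposals[-1]

choices = run_production_round(referents, propose, pick_preferred)
print(choices["scroll-down"])  # scroll-down-idea-3
```

The point of forcing multiple proposals is visible in the structure itself: the legacy-inspired first idea is never the only candidate on the table.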

The authors mention that legacy bias is based, among other things, on regional and cultural factors, which makes gestures affected by it especially easy to figure out and learn for people from that region or with that cultural background. This makes such gestures particularly useful in settings where multiple users are expected to interact with the same system (e.g., all members of a family interacting with a TV, or ticket-vending machines at train stations). Furthermore, gestures shaped by legacy bias have in most cases the advantage of being simple. Users do not want to perform complex sequences of movements to achieve a basic goal, such as picking one item out of many or moving it from one on-screen location to another, if a shorter, quicker gesture already exists. Indeed, the simplicity of interactions has always been a key concern in human-computer interaction, with the goal of minimizing the user’s mental and physical effort. Completely suppressing legacy bias would erase the headway made over decades of HCI research and possibly create unnecessary redundancies in the development of interfaces for novel systems.

There might be better gestures for single users, but when designing gestures for the majority, going back to the familiar will probably inspire more confidence. Moreover, implementing gestures at the individual level leads to problems such as rendering a device useless for other users unless they learn the owner’s gestures. Of course everyone will produce “good” gestures for themselves, but wouldn’t this be the case for every gesture, as long as a person has internalized it, no matter who designed it? For example, turning the page of a book is not natural in the sense that books are not natural objects (i.e., they are man-made), but everyone who knows how to use a book will say its usage is intuitive. The same holds for an eBook reader, where a page is turned in almost the same way. Wouldn’t it further confuse users to change this gesture just because it is biased and a better one might conceivably exist?

To see whether we were right, we replicated the pilot study discussed earlier. We created a questionnaire in which we asked 18 users to choose gestures for certain tasks: going backward, scrolling down, and closing the window in a Web browser, and the same actions for controlling an eBook reader. Afterward, the users had the opportunity to rethink their choices and change them. Half the group were asked only for their gesture suggestions; the rest were primed by the instructor, as recommended.

Only three participants from each group changed their answers; the rest stuck with the ones they gave initially. Most of the gestures, regardless of group, were a kind of swipe gesture, as learned from using smartphones, and were executed with one hand. Every user had previous experience with those kinds of gestures, which affected their choices; the gestures they chose can therefore be said to contain legacy bias.

But neither study can make predictions about the performance of the gestures. We therefore created a second experiment in which we implemented abstract gestures on a multitouch display. The gestures were designed in the style of the EdgeWrite system [3] used on Palm handhelds: for each gesture, the user had to cross the corners of the screen in a certain order, with no relation to the meaning of the gesture (Figure 1). The gestures were thus intentionally designed to be neither “natural” nor “intuitive.” That way we could see how users’ opinions of the system changed once they became familiar with it.
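To make the corner-crossing idea concrete, here is a minimal sketch of an EdgeWrite-style recognizer, assuming, as in our experiment, that a gesture is identified purely by the sequence of screen corners a stroke passes through. The gesture set, coordinates, and function names below are illustrative, not those of the actual system:

```python
# Sketch of an EdgeWrite-style recognizer: a stroke is reduced to the
# sequence of screen corners it visits, and that sequence is looked up
# in a gesture table. Gesture set and dimensions are illustrative.

def nearest_corner(x, y, width, height):
    """Map a touch point to the nearest corner: TL, TR, BL, or BR."""
    horiz = "L" if x < width / 2 else "R"
    vert = "T" if y < height / 2 else "B"
    return vert + horiz

def corner_sequence(stroke, width, height):
    """Reduce a stroke (a list of (x, y) points) to its corner
    sequence, collapsing consecutive duplicates."""
    seq = []
    for x, y in stroke:
        corner = nearest_corner(x, y, width, height)
        if not seq or seq[-1] != corner:
            seq.append(corner)
    return tuple(seq)

# Illustrative gesture set: the corner order deliberately bears no
# relation to the gesture's meaning, so nothing here is "intuitive".
GESTURES = {
    ("TL", "BR", "TR"): "scroll-down",
    ("TR", "TL", "BL"): "go-back",
}

def recognize(stroke, width=100, height=100):
    return GESTURES.get(corner_sequence(stroke, width, height))

stroke = [(5, 5), (50, 50), (95, 95), (95, 5)]  # TL, BR, then TR
print(recognize(stroke))  # scroll-down
```

Because recognition depends only on corner order, the mapping from gesture to meaning is arbitrary by construction, which is exactly what lets such an experiment separate learnability from prior familiarity.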

The users had to learn the gestures and execute them in four sessions, each on a different day. Afterward they filled out standardized questionnaires. These showed that all users’ performance increased from session to session, and with this increase, their subjective task demand, frustration, and effort decreased.

These results are not surprising given what research on human learning and cognitive load has uncovered. Human working memory (used, e.g., for mental arithmetic) is limited and thus needs support from long-term memory (e.g., knowing for a fact that the square root of 49 is 7). A new gesture is highly demanding for working memory, since it is perceived not as one item but as a complex sequence of movements, which can push working memory to its limits. With sufficient repetition, however, a schema of the gesture is created and stored in long-term memory. When that schema is recalled, the gesture is no longer a sequence of independent movements; instead it is a single item for working memory to handle, which frees up capacity. The execution of the same gesture that once required full attention can now be done while focusing on something else entirely [4]. A good example of this process is learning to drive a car. At first, one’s working memory and attention are completely taken up by monitoring the road and the rear-view mirrors, as well as by the complex process of shifting gears (at least with a manual gearshift). But after some training, one can do all those things automatically while holding a complex discussion with a passenger.

The opinions and results presented here do not mean that creating new “good” gesture interactions is unnecessary, since users will adapt to and internalize them. This is especially true for new systems, which will involve new, unknown gestures. But we should also not wholly condemn what was invented earlier [5]. After all, once users have had sufficient exposure to a system, there is no significant difference in performance or satisfaction between interacting with it via entirely new gestures and via gestures derived from already familiar forms of interaction.

So why should we dismiss legacy bias? Taking advantage of it shortens the time and effort needed to learn new ways of interacting, and the system is then perceived as more intuitive. Whatever was good before, such as using a mouse to control a personal computer, should thus be integrated into new inventions rather than suppressed at every opportunity. Moreover, legacy bias matters not only in gestural interaction but also when developing new user interfaces or websites. The arrangement of icons, symbols, information, and so on is likewise subject to the gestalt laws of organization, guidelines for designing interfaces in the way the human brain can process most easily [6]. And even though legacy bias is acquired through training, it can still inform rules for creating new interfaces. Nearly everybody is used to a mouse cursor, icons, and even the illogical QWERTY keyboard, a legacy of the typewriter era. Even though it might be better to arrange the letters according to their frequency in a given language, nobody would start designing a new personal computer by rearranging the keyboard.

In summary, the challenge posed by Morris et al. should not be rejected, but perhaps it should be tempered. Instead of discarding legacy bias, take what is good from it and invent better solutions for what is problematic or disadvantageous, so that users feel comfortable and familiar even when the system is new. Designers of new interactive systems should therefore thoroughly investigate current systems and their use, and then weigh which properties of the current system actually need improvement, in order to prevent unnecessary reinvention. Consequently, when testing new interaction methods, one should always compare them with the current state of the art as a baseline, focusing not only on the performance of the system and the user but also on the user’s satisfaction and comfort. Slight improvements in performance may not be desirable if they come with large reductions in comfort. To this end, a new system should, during its early stages, also incorporate the old and familiar as a backup. This serves two purposes: On the one hand, it enables the identification of beneficial changes to the interaction design and thus furthers the development of intuitive and comfortable systems. On the other hand, legacy bias can be a helpful tool for gently introducing new forms of interaction, such as gestures or multimodal interfaces, to the general public. Drastic and sudden changes can trigger an aversive reaction in users, as most prominently seen in the uproar against the tiles of Windows 8 on desktop computers.

So, by separating the wheat from the chaff in legacy bias, one can save cost and effort, focus on investigating and designing truly good new ways of interaction, and, most notably, avoid repelling potential users.

References

1. Morris, M., Danielescu, A., Drucker, S., Fisher, D., Lee, B., schraefel, m.c., and Wobbrock, J. Reducing legacy bias in gesture elicitation studies. Interactions 21, 3 (May–June 2014), 40.

2. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-defined gestures for surface computing. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2009.

3. Wobbrock, J. EdgeWrite: A versatile design for text entry and control. IBM, 2006.

4. Van Merrienboer, J.J. and Sweller J. Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review 17, 2 (2005), 147–177.

5. Köpsel, A. and Huckauf, A. Evaluation of static and dynamic freehand gestures in device control. Proc. of the Tilburg Gesture Research Meeting, 2013.

6. Wertheimer, M. Untersuchungen zur Lehre von der Gestalt II. Psychologische Forschung. 1923/1938, 301–350.

Authors

Anne Köpsel is a Ph.D. student at Ulm University. Her research primarily focuses on free-air gesture interaction as well as on touchscreens. anne.koepsel@uni-ulm.de

Nikola Bubalo is a Ph.D. student at Ulm University. His work is user-centered research in the field of adaptive multimodal human-computer interaction. nikola.bubalo@uni-ulm.de

Figures

Figure 1. The gestures used in the experiment. The dot marks the start of a gesture, and the arrowhead marks the end. The symbol in the middle stands for the meaning of the gesture.

Copyright held by authors. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.
