Features

XX.5 September + October 2013
Page: 38

‘Let me finish’


Authors:
Charles Hannon


Almost 20 years ago in Being Digital, Nicholas Negroponte imagined the ideal computer interface as a digital butler, a machine that would anticipate and provide for our needs before we were even conscious of them [1]. In his vision, our digital butler would take the smallest piece of information—the fact that we always visit family in December, or that we've had a paper accepted at a conference in February—and from that factoid begin making plane and hotel reservations, recommending restaurants based on our dietary needs, finding friends and colleagues in the area with whom we might want to meet up, even setting our alarm to a time that will ensure we get to the airport on time (and adjusting that wakeup call, if necessary, for flight delays). His vision was based on the concept we today call Web services. This massive categorizing and sharing of metadata may not yet function to the extent of Negroponte's vision, but we can at least imagine the workings of it from our current historical and technological vantage point.

The digital butler is a useful metaphor for how interactive systems can make our complicated lives easier to navigate. But recently the metaphor of conversation has begun to guide interaction designers as they develop systems and define how they should behave as we are using them. In the conversation metaphor, the system is interpreting and responding to our speech and gestures in real time, the way an engaged friend would—anticipating what we are about to say or do and responding in kind, sometimes knowing where we are headed in the interaction before we do ourselves, but also guiding the conversation based upon its own knowledge of our past together and the world we mutually inhabit.

These two metaphors, the butler and the conversationalist, offer very different explanations for why users can become so frustrated when the exchange goes wrong, when our needs or intentions are misinterpreted or ignored. The digital butler, after all, is a servant, and that metaphor allows us to excuse the system even as we blame it—"good help is hard to find." This reaction is also described by Alan Cooper as the "dancing bear" effect: We are so impressed that a bear can dance that we don't expect it to dance very well [2]. But in this regard the conversation metaphor is superior because it suggests that we should consider our machines our equals, and hold them to a higher standard—a human standard—when it comes to our expectations in the conversation.

I would offer what researchers have learned about mirror neurons to make this argument. The first researchers to discover mirror neurons observed them in monkeys. They detected activity in the monkeys' brains in those neural circuits associated with motor-control activities when the monkeys performed actions such as pulling a lever, grabbing a peanut, and putting the peanut in their mouth. They then discovered that a subset of these motor neurons fired when the monkeys observed a human performing these same actions. As V.S. Ramachandran writes, "These were not mere motor-command neurons; they were adopting the other animal's point of view" [3]. Subsequent experiments have confirmed the existence of mirror neurons in humans, although the means of testing them is more limited because of the difficulty of tapping into specific neurons in live human subjects. But this has not stopped Ramachandran and others from speculating that the rapid development of mirror neurons in humans over the past 150,000 years has played an essential role in the development and transmission of culture and language, two things that make humans different from our primate relatives.

Mirror neurons, and the networks of which they are a part, have at least three interesting effects as we interact with each other. First, they activate when we observe others engaging in familiar actions. If I see you reach out to grab a pencil, the mere observation activates in my brain the same neurons that are activated when it is my own hand that is reaching out. "Think of what this means," writes Ramachandran. "Anytime you watch someone doing something, the neurons that your brain would use to do the same thing become active—as if you were doing it." Second, my brain will also anticipate your next action, firing neurons that are related to associated actions, such as picking up the pencil or holding it in my hand. Ramachandran claims that this anticipatory effect allows me to consider your point of view, even to feel empathy with you. As he expresses it, "The ability to see the world from another person's vantage point is ... essential for constructing a mental model of another person's complex thoughts and intentions." Finally, because they help us understand the intentions of others and feel empathy with their experiences, mirror neurons also help us to know how others might perceive us. In other words, because they allow you to adopt the perspective of others, mirror neurons allow you to "see yourself as others see you—an essential ingredient of self-awareness." Ramachandran postulates that the very concept of being self-conscious would make no sense if it were not for these reciprocal effects of mirror neurons.

As interaction designers (and as conversationalists), we should imagine a further effect of this phenomenon of mirror neurons: Not only do my neural networks anticipate your intentions and actions, and yours mine; I also unconsciously know this is taking place in you as we interact. Indeed, I expect it, to the extent that if you do not anticipate my next actions, I begin to feel that something is amiss in our interaction. You should see my point of view just as I see yours, and it should be obvious that you do when we interact. The unpleasantness one feels when another person introduces a non sequitur into a conversation conveys what I mean: You don't know what the other person is talking about, and then you wonder how they even got there. They have completely disregarded the common ground that the two of you have been establishing together in conversation. They have stopped anticipating the directions of your thoughts entirely. The result is an interruption to the flow, a requirement that one take a few steps backward, reestablish common ground, and redefine the path forward in the interaction.

In the context of interactive computing, we get this feeling when the system we are using asks a question that seems utterly unrelated to where we thought the conversation was heading—when I swipe my card at the gas pump, for instance, and I am asked to report my zip code before I am allowed to proceed. There might be perfectly good security-based reasons for the system to ask this question, but if that is the case, tell me: "In order for us to protect the security of your card, please enter your zip code." Otherwise, it just seems like the system is ignoring me when I tell it, by swiping my card, that I want to put gas into my car.
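As a rough illustration (the function name and prompt wording here are hypothetical, not drawn from any real pump software), the difference is simply whether the system states its reason before asking. A minimal sketch in Python:

# Contrast an unexplained request with one that keeps the conversation
# coherent by stating its reason before asking.
def request_zip_code(explain_reason: bool) -> str:
    """Build the prompt shown after a card swipe."""
    if explain_reason:
        # The extra clause ties the question back to what the user just did.
        return ("To protect the security of your card, "
                "please enter your billing ZIP code.")
    # Without the reason, the question reads as a non sequitur.
    return "Enter ZIP code."

print(request_zip_code(explain_reason=True))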

Jon Kolko uses the conversation metaphor to begin his Thoughts on Interaction Design by calling design "the creation of a dialogue between a person and a product, service, or system" [4]. To do this "requires an understanding of the fluidity of natural dialogue, which is both reactionary and anticipatory at the same time." Systems should anticipate our intentions in the same way a good conversationalist would. But this is not to say they need to get it right every time. A good conversationalist will sometimes ask, "When you say that, do you mean a or b?" When I tap a text message that contains a URL on my phone, the system asks me, "Do you want to open/view this text message, or do you want to visit the URL with the phone's Web browser?" When I want to share a photo, the system asks me whether I want to share using email, text message, Twitter, a photo-sharing site, or some other means. When I tap a tweet in my Twitter application to indicate that I want to interact with the tweet in some way, the system asks me how I want to interact: Direct Reply, Retweet, Mark as Favorite, etc. It is fine for the system not to know exactly what I want to do, but it should take the clues I've provided about my intentions and build on them, without introducing non sequiturs or, worse, making false assumptions about my intentions.

It is okay for a computer to say, "I don't know." Also: "Did you really mean that?" I once accidentally separated the keyboard on my iPad into two pieces without knowing how, or how to undo it (it turns out a pinch-out gesture splits the keyboard and a pinch-in rejoins it). It would have been okay, that first time, for the system to interrupt our conversation to make sure I understood why it was responding to my input in that way. Think about it: Computers are most annoying when they come across as bullies, assuming they know what it is that you want to say or do, not caring enough to make sure they have it right. This behavior is difficult to suffer with humans; we should not tolerate it in conversation with our computer systems.
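To make the pattern concrete, here is a minimal sketch, again in Python with hypothetical names, of the "ask, don't guess" behavior described above: the system builds its clarifying question from the clues the user has already supplied, and only asks when the tap is genuinely ambiguous.

from dataclasses import dataclass

@dataclass
class TapContext:
    """Clues the system already has about the tapped message."""
    contains_url: bool
    contains_phone_number: bool

def candidate_actions(ctx: TapContext) -> list:
    """Build the clarifying choices from the clues the user has provided."""
    actions = ["View message"]              # always a sensible default
    if ctx.contains_url:
        actions.append("Open link in browser")
    if ctx.contains_phone_number:
        actions.append("Call this number")
    return actions

def respond_to_tap(ctx: TapContext) -> str:
    actions = candidate_actions(ctx)
    if len(actions) == 1:
        return actions[0]                   # unambiguous: just do it
    # Ambiguous: ask a clarifying question rather than assume an intention.
    return "Which would you like? " + ", ".join(actions)

print(respond_to_tap(TapContext(contains_url=True, contains_phone_number=False)))

The point is not the code but the shape of the exchange: the system's question grows out of what the user has already said, rather than interrupting with an unrelated demand.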


Systems should anticipate our intentions in the same way a good conversationalist would.


In conversation, we sometimes begin to fear that our partner is anticipating us too much. We worry that we are being misread, that our partner is interpreting our signals too quickly, not waiting for all the evidence, jumping to wrong conclusions. We want to say, "Let me finish," before our partner's brain starts sending itself false signals about what we mean to say or do. In face-to-face communication, humans have more than 150,000 years of experience in working through these various iterations of signal and message. With the design of interactive systems, we should take the additional time to make sure the conversations we are designing are built upon this collective store of communicative knowledge.

References

1. Negroponte, N. Being Digital. Knopf, New York, 1995.

2. Cooper, A. The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. Sams Publishing, Indianapolis, 2004.

3. Ramachandran, V.S. The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human. W.W. Norton, New York, 2011.

4. Kolko, J. Thoughts on Interaction Design. Morgan Kaufmann, New York, 2009.

Author

Charles Hannon is associate dean of the faculty and professor of computing and information studies at Washington & Jefferson College in Washington, Pennsylvania. He teaches courses in human-computer interaction, the history of information technology, data presentation, and project management.


©2013 ACM  1072-5220/13/09  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

