Features

XXI.3 May-June 2014

Reflecting our better nature


Author:
Deborah Tatar

Computerization has moved from providing a counterpoint to life, with the potential to highlight and shade experience, to constituting a constant force, defining life as experienced. The co-evolution of computing and society means that even the central tenets of HCI are subject to questioning and refinement. Gilbert Cockton put the consequence for research more emphatically than I would in a recent Interactions opinion piece:

“HCI must move out from its human science comfort zones to embrace all ways of understanding humanity such as via the arts, humanities, theologies, and ideologies. There is not enough scientifically legitimated knowledge for the job at hand…. HCI has to move up to the existentialist subjective challenges of taking full responsibility for design purpose rather than just doing what the big boys ask us to do, too often as mercenaries or serfs” [1].

These words reflect two important truths: First, it is more convenient to investigate matters of direct economic impact than the more elusive and long-term issues of quality of life and the nature of society, and second, the influence of the computer is too important to be left purely to one or two segments of society, with their narrow motives, values, and ways of knowing. Perhaps implicit in Cockton’s argument is an observation about the peculiarity of our Western relationship to technology, made trenchantly by the great social critic Jonathan Swift in this description of the Lilliputians’ discovery of Gulliver’s watch:

“We conjecture it is either some unknown animal, or the god that he worships; but we are more inclined to the latter opinion, because he assured us (if we understood him, for he expressed himself very imperfectly) that he seldom did anything without consulting it. He called it his oracle, and said it pointed out the time for every action in his life” [2].

Insights

  • One of our cultural strengths is that we are willing to adapt to and exploit the power of the machine.
  • Often this capacity is important; sometimes, as in Swift’s view, it is funny.
  • But undue deference constitutes a serious problem, addressable by design action.

The Authority of Computers

The following examples are drawn from our own research in designing and implementing a range of distributed activities for teaching, learning, and games:

  • Undergraduates asked to design learning games are astonished, even repelled, that a computer-based collaborative crossword puzzle game would fail to tell the players who should go next. “But the computer can tell you!” Further, when asked whether strict alternation of turns is important, they are puzzled. One speaks up, “It’s not fair if people decide who goes next!” The others chime in with agreement. The brute fact of the computer’s ability is prioritized over the ability of the people. (A sketch of this deliberately silent design appears after this list.)
  • When asked what is most important about a game, undergraduates say, “To win.” When asked whether that is always the most important thing, they reassert this. But when asked how they would run a footrace with a four-year-old, their view of games and playing changes radically. Suddenly fun is associated with process, with deliberately not winning. The ability to shape the interaction, to have social agency, is important but experienced as remote.
  • When computer science undergraduates are asked, “If you were to pretend that the computer was a person, what kind of person would it be? What kind of personality would it have?” some reply that the computer is rude and domineering. But one woman puts her arms around her laptop, open on the desk, and says, “I love my computer. It always does what I want it to do.” The computer’s compliance is seductive, and its shallowness is often invisible.
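
To make the first example concrete, here is a minimal sketch of the design stance the students resisted. It is written in Python with hypothetical names (CrosswordSession, Move, submit_word); it is an illustration, not the code of any actual system from our lab. The machine validates and records moves, but it deliberately omits any notion of whose turn it is; that negotiation is left to the players.

```python
# A sketch of principled omission in a turn-taking design: the system
# records moves but deliberately has no notion of whose turn it is.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class Move:
    player: str   # whoever the group decided should go; never checked for "turn order"
    word: str
    row: int
    col: int
    across: bool


@dataclass
class CrosswordSession:
    players: list
    moves: list = field(default_factory=list)

    def submit_word(self, move: Move) -> bool:
        """Accept a move from any player. The machine enforces a rule it
        is entitled to enforce (the player must be in the game), but it
        never decides or announces who should go next: turn order is
        left to the players' own social negotiation."""
        if move.player not in self.players:
            return False  # unknown player: a rule the machine does enforce
        self.moves.append(move)
        return True

    # Note what is absent: no next_player(), no turn counter, no
    # "it's Alice's turn" prompt. The omission is the design decision.


if __name__ == "__main__":
    session = CrosswordSession(players=["Alice", "Bob", "Chen"])
    # Two moves in a row by the same player are fine with the machine;
    # whether that is fair is for the people at the table to decide.
    session.submit_word(Move("Alice", "ORACLE", row=0, col=0, across=True))
    session.submit_word(Move("Alice", "EQUATION", row=2, col=4, across=False))
```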

We are not alone in this examination. Sherry Turkle has documented and popularized similar issues in Alone Together, expressing concern about human connectedness [3]. Turkle points out that not all is clear, bright, and healthy for everyone all the time in the online world. Many people are substituting simulacra of consolation and sympathy for real human sympathy, while others look on with approval. Children play with robotic pets that demand little of real consequence of them as people. The same wind that Turkle once perceived as pushing human behavior toward a beautiful future is one she now sees as pushing us toward frightening shoals.

The difficulty is that it is very hard to say what we should do differently based on Turkle’s analyses. What precisely is the harm on any given occasion or over any period of time? It is easiest for designers just to keep on as before.

A first idea to help us with questions of design action is the observation that a great deal of human intelligence lies in the way we arrange the world. Thus, we leave our keys by the door to avoid forgetting them. Therefore, we must understand the implications of how the computer arranges the world for us, including when its arrangement is not our arrangement. Additionally, we must insist on arrangements that make us more likely to act as the selves we wish we were. As I told my eight-year-old, we put away our cash not primarily because we mistrust others, but because it is unkind to tempt them.

Turkle’s ideas (or my own) do not require theory to lead to design action. But without theory they are just contenders among other contenders for our attention. Without theory, something left out is just a missing feature, not an instance of what I call zensign, disciplined and principled omission.

My research group has been investigating a theory related to the nature of the computer’s authority and influence—and the entailed design consequences. This is a capital-T theory in that it integrates several different trends to form a picture of a whole. A capital-T theory does not depend on empirical support in every instance, but rather, like the theory of evolution, is a framework that makes sense of otherwise disparate factors.

Human Malleability and Machine Intransigence

Our theory concerns human malleability in the face of the intransigence of the machine. It pulls together factors from different disciplines. The argument starts with the fact of human sociality: we are influenced by others, and we expect to influence others in turn. But when an interactor is not influenced by us, we tend to conform to it. We do not necessarily conform immediately, nor do we necessarily know that we have conformed, but we comply all the same. The second part of the argument is that computers are interactors: they influence us, yet they are inflexible. In response, we become more passive, less happy, and less able to engage in independent action.

There are four underpinnings to this argument: social trends, properties of interaction, human self-concept, and the design of modern computing and telephony systems. There is not enough space to present our research or design responses.

Social trends. In Generation Me, Jean Twenge documents a rise in anxiety and depression. Based on generational changes in survey data, she argues that today’s children see themselves as solitary actors, untied to others [4]. While some components of this vision of the solitary self are positive (e.g., the belief in the fundamental equality of people across group boundaries), some are not, as she conveys in her book’s subtitle: “Why today’s young Americans are more confident, assertive, entitled—and more miserable than ever before.” Twenge locates the cause in a focus on the self in child-rearing and education that leaves each person drawing on their own internal resources for emotional sustenance and unable to see themselves as part of a bigger picture.


However, there are other possible causal factors. While the vision of the “digital native” is a happy one, it might obscure the fact that we may not actually know what is important about human coordination. To repeat Cockton: “There is not enough scientifically legitimated knowledge for the job at hand.” The situation with computers is arguably similar to that of nutrition. At the same time that modern science has been teaching us a great deal about nutrition, American society has developed crippling rates of obesity. What Americans think they know about nutrition is evidently not enough to keep us well. Similar caution should be exercised with coordination.

Most sciences have spent hundreds of years creating and structuring relevant phenomenologies, yet we frequently treat our bits and pieces in HCI as fully sufficient. A rise in anxiety, stress, and loneliness suggests a need for increased attention to the terms and organizations of underlying phenomena.

Properties of interaction. We perceive computers as interactors because of our own sociality. Lucy Suchman long ago argued that our inclination to perceive human-computer relationships as interactive stems from their reactive, linguistic, and opaque properties [5]. The computer reacts. Human reaction to others starts at birth. People are so focused on reacting to one another that the gap between speakers in conversation averages 209ms across cultures and languages. By comparison, it takes about 200ms for a person to complete the simplest action-reaction cycle, such as pressing a button when a light goes on. Computers are linguistic, at minimum, in the simplest sense in which people comprehend abbreviated communications (such as “more”). Computers are opaque. Human interactors are opaque. Both require active sense-making.

Suchman’s description of human-machine interaction is surprisingly close to what would be predicted from a very different source. Byron Reeves and Clifford Nass [6] argue that there is an equation, an equivalency, between how we treat people and how we treat media. When I was a post-doctoral fellow in their laboratory long ago, I struggled against this formulation because it seemed deterministic and reductive.

Yet while Reeves and Nass are not entirely right, they also cannot be entirely wrong. We may not treat computers exactly like people, but there is an undoubted similarity. The fundamental human capacity is for interaction, and interaction of a sort is achieved with computers, as it is with dogs, comatose people, and pre-verbal babies. Of course we draw upon our human skills and capacities. What else could we draw upon?

Human self-concept. The media equation formulation [6] says that we treat computers like people. One of the oldest theories in psychology is that of the Looking Glass Self [7], which holds that how we see ourselves is influenced by the ways in which we are seen by those around us. If what we do with computers is appropriately conceptualized as interaction, then, in the same ragtag way that our self-conceptualization is influenced by people, dogs, and babies, we will be influenced in our view of ourselves by the regard of the computing device.

If we take this idea seriously—the idea that computers are interactive and that we, in some sense, act as though they are human—this question arises: “What kind of self is reflected in the behavior of the computer?” We may seek to see ourselves as actualized, agentic, effective, worthy of the regard and attention of others, as moral agents, kind, creative, patient, loving, healing, able to bring care and meaning to other people’s lives, and all sorts of other good things. Many of these qualities can be found in the gaze of a dog and often in the gaze of a person. Arguably, not so much with the computer. Yet we now spend more time with computers than with people.

The design of computer systems. The self that we see reflected back by the device is increasingly ineffective, submissive, and with limited agency. The computer nags us about our spelling and makes insistent noises that disrupt more important activities. When online Scrabble tells me what words exist in English, I might complain, but I cannot negotiate. If a Google search returns incompetent results, I have to think about it, not it about me. I accept user agreements that absolve the company from all blame without giving me any power except that of wholesale rejection. The computer sets the rules, runs the show, is unyielding in its demands and structures—and, worse, may gain unscrupulous compliance through recruiting other people and organizations in social networks to exert the pressure of social expectations. My colleagues use Google+ and thereby pressure me to give Google access to my intimate information. My intimate information is then woven into a vision reflected back in unknown ways for dubious purposes. In short, viewed as a person, the computer is a bully.

Of course, not all experiences of the computer are like this. There are wonderful examples of people organizing around quests in World of Warcraft, or experiencing the liberation of acting in a way that seems free of the contingencies of gender, handicap, or distance. Democracy can be promoted! People who are engaged in serious mastery of the machine, in the open source or DIY worlds, may also have important good experiences. Almost all of us have at least some good experiences.

But our positive experiences exist side-by-side with a very different experience of how we are seen by the computer. The general user has to behave in stilted and sometimes superstitious ways to avoid impenetrable problems. S/he has no recourse or ratification in the face of the computer’s incessant insistence. People can shrug off poor treatment on any given occasion, but unresisted intransigence makes us subservient. It’s depressing. It’s reductive.

The theory of human malleability and machine intransigence suggests that we need to resist the drip, drip, drip of palliative computing. My lab creates situations that explore phenomena related to human malleability and machine influence. We create interpersonal dilemmas that people have to solve, under different technological conditions. We look at the detailed interactional resources that participants bring to bear, the interactive underpinnings of “persuasive computing,” and other unintended side effects. We focus on phenomena that are outside the design brief of most projects. We reject the idea that self-reported user satisfaction constitutes the entirety of good design.

Even the core HCI design principle of “seamlessness” needs to be called into question. It may lead us into accepting “help” that is really undue control. In my lab, we view seams as similar to pain. Some kinds of pain keep us safe.
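
As a small illustration of what a deliberate seam might look like, consider the sketch below. It is hypothetical (suggest_correction stands in for any spelling heuristic, and the names are invented for this example): the seamless version silently substitutes the machine’s judgment for the person’s word, while the seamful version surfaces the suggestion and leaves the decision with the person.

```python
# A sketch contrasting "seamless" and "seamful" correction.
# suggest_correction() is a stand-in for any spelling heuristic.

def suggest_correction(word: str) -> str:
    # Hypothetical dictionary of machine "corrections."
    return {"zensign": "design", "teh": "the"}.get(word, word)

def seamless_autocorrect(word: str) -> str:
    # The machine's judgment silently replaces the user's word.
    return suggest_correction(word)

def seamful_correct(word: str, ask) -> str:
    # The seam: the machine shows its opinion, but the person decides.
    suggestion = suggest_correction(word)
    if suggestion != word and ask(f"Replace '{word}' with '{suggestion}'?"):
        return suggestion
    return word

if __name__ == "__main__":
    # With the seamless version, a coined word like "zensign" can never survive.
    print(seamless_autocorrect("zensign"))                   # -> design
    # With the seam, it survives if the user declines the suggestion.
    print(seamful_correct("zensign", ask=lambda q: False))   # -> zensign
```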

This endeavor needs more than our small, underfunded efforts. Redirecting even a tiny portion of the effort devoted to turning us into compliant consumers could make a tremendous difference in all our lives.

Acknowledgments

Thank you: Steve Harrison, Michael Muller, the VT Third Lab, especially Samantha Yglesias and Joon Suk Lee. Also, NSF grant # IIS-1018607, which is not responsible for this content.

References

1. Cockton, G. Refuser (centered design): Moving on, moving out, moving up. Interactions 19, 6 (2012), 8–9.

2. Swift, J. Gulliver’s Travels. Signet, New York, 1960.

3. Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other. MIT Press, Cambridge, MA, 2011.

4. Twenge, J.M. Generation Me: Why Today’s Young Americans are More Confident, Assertive, Entitled—and More Miserable Than Ever Before. Free Press, New York, 2006.

5. Suchman, L. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, New York, 2007.

6. Reeves, B. and Nass, C. The Media Equation. CSLI Publications and Cambridge University Press, New York, 1996.

7. Cooley, C.H. Human Nature and the Social Order. Scribner’s, New York, 1902.

Author

Deborah Tatar (Ph.D., Psychology, Stanford) is currently associate professor of computer science and, by courtesy, psychology, and a member of the Women’s and Gender Studies program at Virginia Tech. Her research focus is investigating and designing face-to-face interaction with and through technology, especially in the context of classroom education.

©2014 ACM  1072-5220/14/05  $15.00

