Recently I sat on a national committee to evaluate HCI research funding. As part of that process, we discussed past successes and current challenges for HCI. It's a good time to be thinking about these issues, as HCI is no longer a minority interest. The field has major conferences, active journals, practitioners, and degrees. HCI skills are in demand at the world's leading technology companies. The success of design-oriented companies like Apple means that everyone understands the importance of interfaces. But what are these external indicators of success based on? What have we achieved, what do we know, and why are others interested in what we do? Where should we go from here? Here, I first outline three successes: transformative technology, the importance of experience, and the user-centric design process. I then argue that now we need to build on these successes by taking a more principled approach to what we do.
One unquestionable success is the development of transformative technology. The most obvious example here is graphical user interfaces (GUIs). GUIs completely altered the way in which we interact with computers, changing them from the tools of a few hardcore programmers to a technology used by billions of people. This success extends beyond the specific technologies of windows, bitmapped screens, and mice that facilitated the emergence of personal computing. Instead, the emergence of a set of design principles, WYSIWYG, laid the foundations for repeated, equally radical changes. Later developments such as the Web were dependent on underlying technologies such as HTML and search, but they could never have achieved mass penetration without effective GUI browsers based on WYSIWYG principles. Ditto for smartphones. The fundamental ways in which we interact with smartphones follow old-school design principles (give or take a finger gesture or two). Just because WYSIWYG is old and established doesn't negate its utility. I return to the critical role of design principles later.
Another key success to recognize is the importance of user experience (UX) and design. There is more to design than simple functionality. It's clear that rather than just checking off a list of features, users develop love-hate relationships with products. We see this in user investment in transformative designs like the PalmPilot and Razr, and, more strikingly, in the near cultism (iPhetishism?) surrounding Apple products. People line up overnight to be first with the new iPhone, iPad, or iPod. Explanations are fueled by press stories that Apple products trigger the same neurons that mediate religious experiences. Whatever the brain science uncovers, there is a conceptual move in HCI to recognize the importance of experience. But having acknowledged experience, how do we systematically design compelling experiences? Although we are increasingly able to recognize engaging designs after the fact, we can't yet do this to order. It's not enough to package cool kit in a thin silver case and charge a little more for it.
A less controversial success is consensus about the process of design. Everyone in HCI agrees that design involves four iterative steps: understanding users, generating designs, prototyping, and evaluation. Although there is broad agreement about the importance of each step, practitioners differ in how they define them. And for each step there remain outstanding issues, which I now describe.
Understanding users is fundamental to HCI. If we don't orient to user experience, tasks, activities, and values, we are mere technologists. Much has changed since the early days, when the job of HCI was to model user "tasks" using cognitive science techniques. The need to understand experience has led us to annex qualitative techniques, including ethnography, observation, interviews, and, more exotically, cultural probes. But how and when we deploy each technique is still very much up for debate.
Equally important, how do we move from user data to identify its implications for new technology? Indeed, there has been debate within HCI about whether fieldwork even needs to produce such "design implications"! I believe this debate is a red herring: If understanding users has no direct implications for technology, then we devolve into pure social science and should publish in its venues. More critically, an implication-free stance doesn't help UX professionals in the trenches; researchers need to provide designers and developers with explicit guidance about potential new technologies. Saying that understanding users doesn't require design implications undermines their professional modus operandi.
Rather than adopting an entrenched position about this, we should focus on how we define design implications. There are many different ways in which user data can inform design innovation. Empirical findings can motivate specific proposals about entirely new technologies. Or more conservatively, implications might suggest improvements to existing technologies. There are clear precedents. For example, work on video-mediated communication offered two different sets of design implications. One strand of work proposed innovative new types of awareness systems and media spaces. Other work studied videoconferencing, offering incremental proposals to improve that specific technology.
A different approach to implications is to propose general properties for new systems. This is difficult because it requires abstraction from user data, but it is ultimately more useful because it can spur the development of multiple new systems. One good example is Mark Weiser's proposal for general design characteristics of ubiquitous systems, which spawned an entire research area. A related question concerns how we present design implications. Designers want "just enough" information to ground new designs in real user practice but not so much that it stifles their creative process.
There are also outstanding practical questions about how we generate new design concepts. Successful design companies like Apple follow best practices and generate multiple solutions, avoiding early commitment to a single suboptimal design. But the exact process remains a black art. There are no systematic principles for refining early design concepts, including how feedback should be solicited and who should provide it. At a theoretical level, how does generating multiple solutions optimize design choices? What are the results of the design phase? We end up with multiple promising design concepts, but it's hard to know how good these are. It's too costly to implement every concept, but we need reliable lightweight methods to evaluate them if we are to commit to a particular option.
Next we turn to prototyping new technology. This used to be central to the community, but there are alarming signs of a shift to exclusively studying technology. Historically there have been two different approaches to HCI system building: technique oriented and transformation oriented. One important output of HCI is the invention of hundreds of new interaction techniques, for example, pie menus and pen, gestural, multitouch, handwriting, speech, multimodal, and affect-based UIs. Such techniques usually address an existing task, whether this is data entry or system control. Tests are lab based, and success is defined as showing improvements over existing techniques for an existing user task.
Contrast this specific approach with attempts to transform the interaction ecosystem by building systems that fundamentally alter what we do with computers. Historical examples include the Xerox Star (GUI principles) and MediaSpaces (distributed collaboration). Now, in the age of ubiquitous computing, every object and surface potentially affords computation, moving it into our everyday lives. But transformational systems are fiendishly difficult to build: Their infrastructure has to be built from scratch, and fundamental new interaction techniques have to be developed as the system is built. It's also hard to know how good they are. By definition they attempt to facilitate radically new user practices, which may involve the appropriation of emerging technology. So unlike classical systems development, transformational systems don't have predefined user requirements, making it hard to determine progress.
This brings us to evaluation, an indisputable strength of HCI. For many companies, this was the initial reason for hiring UX professionals. UX folk were brought in to evaluate systems that technologists had already built. A critical success of our field has been to transform the scope of HCI practice from after-the-fact evaluators to involvement in every phase of design. We now have a slew of successful evaluation techniques, from controlled lab methods to lightweight inspection methods such as cognitive walkthroughs and heuristic evaluation.
One emerging question concerns testing deployed working systems, where we lack consensus about evaluating emergent user behaviors or determining success when systems are under constant development. New techniques promise to transform working system evaluation. User evaluation for mature large-scale applications is now being crowdsourced, exploiting active user populations in A/B testing. Unbeknownst to many users, Google, Yahoo!, and Facebook routinely expose subsets of users to modified UI features and measure the effects. They do this at a scale that would make most UX practitioners salivate. This new approach provokes important questions.
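The mechanics behind this kind of crowdsourced evaluation are simple to sketch. Below is a minimal, hypothetical illustration of the two core pieces of an A/B test: deterministic bucketing (so each user consistently sees the same variant) and per-variant outcome comparison. All names and the simulated log are invented for illustration; production systems add consent handling, statistical significance testing, and much more.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant by hashing
    the (experiment, user) pair, so repeat visits are consistent."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rates(events):
    """events: iterable of (variant, converted) pairs from interaction logs.
    Returns the fraction of converting users per variant."""
    counts, successes = {}, {}
    for variant, converted in events:
        counts[variant] = counts.get(variant, 0) + 1
        successes[variant] = successes.get(variant, 0) + int(converted)
    return {v: successes[v] / counts[v] for v in counts}

# Simulated log of user outcomes under two UI variants.
log = [("A", True), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False)]
print(conversion_rates(log))
```

The deterministic hash is the key design choice: it avoids storing per-user assignments while still guaranteeing a stable experience for each user across sessions.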
So far we have focused on the practice of HCI, but a final topic I want to touch on is theory. Again, a lot has changed since the inception of our field. When there was no distinct field of HCI, people naturally were reliant on theories developed elsewhere. Early Goals, Operators, Methods, and Selection (GOMS) approaches saw HCI as applied cognitive science, while others saw it as applied sociology or business studies. Although most practitioners don't currently see the field this way, there is little consensus on what our theories are. Here I want to offer my own viewpoint. We are a design discipline, so theory should provide guidelines for building different classes of systems, offering explanations for the successes and failures of past implementations of each system class. By this rubric, Fitts's Law and distributed cognition are not complete HCI theories: Fitts's Law provides guidelines for an interaction technique, not a system, and distributed cognition doesn't tell us how to distribute cognition between people and specific systems. Design guidelines also represent encapsulations of prior implementations of a given system type, generalizing from real instances what works and what does not. Guidelines must inform the design of new instances of the system type. Such principles are manifested in thousands of deployed WYSIWYG interfaces, but also in computer-mediated communication (CMC) and ubicomp applications.
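For readers unfamiliar with it, Fitts's Law in its common Shannon formulation predicts the movement time $MT$ to acquire a target of width $W$ at distance $D$:

$$ MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) $$

where $a$ and $b$ are empirically fitted constants for a given device and user population. The formula makes the point above concrete: it predicts pointing time for a single interaction technique, but says nothing about how to design a whole system.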
In the past, theory hasn't been a major HCI focus. In part this is because we are always playing catch-up with technology. However, it is imperative that we address theory as we move into a new era. Recent embedded systems have direct implications for our everyday lives. In the developed world, we are living more of our lives online, being increasingly affected socially and psychologically by technology. There is much speculation in the media about how modern technologies are breeding a teenage generation with minimal attention spans, how we can't resist the temptations of email and Facebook, and how online access is ruining personal and family life. This demands new design principles that extend literal system design to encompass the individual and social behaviors that result from system use. We need to acknowledge that when we design social media systems, we are manipulating behavior. This implication is strikingly obvious for systems that try to change behavior. New healthcare systems aim to change personal behaviors by showing people the relationship between their behaviors and health outcomes (e.g., food intake, weight loss, and diabetes). This is often a good thing. However, such systems aren't value neutral, because they implicitly assume that weight loss is positive. This may not be true for anorexics. The importance of values argues for far greater attention to design guidelines. Such systems demand that we make explicit not only what our systems should do, but also the social and individual behaviors that are affected by them.
In conclusion, HCI is in a very robust state. We have produced transformational technologies, developed many new methods, and achieved consensus about design practices. In each area there are still challenges, however: For system building, what is the next transformational technology, and can we better understand system-building processes in order to predictably repeat our successes? As for methods, how do we understand experience, what are design implications and when are they useful, and how do we develop, iterate, and evaluate design concepts? How will crowdsourcing change system evaluation? And finally, we need much better theory and clearer design guidelines as we move to designing and building systems that directly affect social and individual behavior.
Steve Whittaker is a professor of human-computer interaction at the University of California at Santa Cruz. He works at the intersection of psychology and computation. He uses insights from cognitive and social science to design new digital tools to support effective attentional focus, memory, collaboration, and socializing.
©2013 ACM 1072-5220/13/07 $15.00