XIV.3 May + June 2007
Page: 53


Robert Jacob, Audrey Girouard, Leanne Hirshfield, Michael Horn, Orit Shaer, Erin Solovey, Jamie Zigelbaum


Is there an emerging next generation of human-computer interaction? Or, rather, are there simply "a thousand points of light": disparate, unrelated, innovative new developments? A wide-ranging group of top HCI researchers gathered to consider this question at CHI 2006 in Montreal, in what turned out to be the largest, and possibly the most interesting, preconference workshop.

Titled "What Is the Next Generation of Human-Computer Interaction?," the workshop brought together researchers in a range of emerging new areas of HCI to look for common ground and a common understanding of a "next generation" human-computer interaction style. If we consider command-line interfaces as the first generation, then direct manipulation and the graphical user interface define the second generation of user interfaces [5] that still dominate the state of practice. We look for the next generation by considering research currently in progress (presented at CHI, for example) to find developments that will move into practice and constitute a third generation.

Our goal was to find common elements for understanding and discussing the next generation of HCI and to build a community of researchers who will think about this topic explicitly. Unlike the early days of graphical user interfaces, recent developments in new interaction styles are proceeding independently on unconnected and unrelated fronts, making the next generation more difficult to connect and define. Yet much current research appears to be moving away from the screen-based GUI in a broadly shared direction. We think the key components of next-generation interaction styles are found in the variety of loosely related current research areas in HCI detailed in the sidebar below.

Workshop Madness

With 39 participants and a desire to leave time for interactive discussion within the one-day session, we began with "CHI Madness"-style presentations. While not quite as fast-paced as the main CHI Madness session at CHI 2006, ours featured rapid-fire three-minute talks, with all the slides concatenated onto a single laptop in advance. The session worked surprisingly well: The participants cooperated to keep within the draconian time limit, and the result was a lot of information and an excellent overview of the area, efficiently covered in a short time. Participants presented their current research or interface designs that they saw as part of next-generation interaction; their ideas or approaches for describing or defining next-generation interaction styles; and various research challenges and agendas in this area. The presentations were grouped into sessions on:

  • Frameworks and Surveys
  • Broader Perspectives, Psychological Considerations
  • New Interaction Styles
  • New Interface Designs and Systems
  • Tools and Development Techniques

(The position papers and presentation slides are available on the workshop website.)

Ben Shneiderman served as special guest, agent provocateur, and chief kibitzer, speaking on "A Second Path to HCI Innovation: Generative Theories Tied to User Needs." He challenged the group to go beyond technology and consider other dimensions of future interaction, including societal impact. His work, which helped define the second generation, was a model for us. He took what was then a set of disparate new user interfaces and research projects and unified them through their common characteristics [5]. Hutchins, Hollan, and Norman then explained the power and success of these interfaces with a theoretical framework [4]. Our goal was to take a first step in that direction for the emerging generation, so we were delighted to have both Ben and Jim Hollan participating in the workshop.

Background

To date, few researchers have explicitly addressed the issue of a unifying framework for next-generation interaction styles, but several have discussed subareas and made contributions toward it. People who have attempted to explain or organize new styles of user interfaces have tended to concentrate more on individual classes or groups of new interfaces than on concepts that cut across them. For example, Ullmer and Ishii provide a framework for tangible interfaces [6]; Fishkin, Moran, and Harrison propose the concept of embodied interfaces [3]; Bellotti, Back, Edwards, Grinter, Henderson, and Lopes define sensing interfaces and raise a set of key problems [2]; and Beaudouin-Lafon's Instrumental Interaction model sheds light on post-WIMP interfaces [1].

Starting Point: Reality-Based Interaction

As a starting point for discussion, we proposed the concept of natural, realistic, or reality-based interfaces. This notion focuses on the ways in which the new interfaces leverage users' built-in abilities and pre-existing knowledge. These interfaces draw strength from exploiting the user's pre-existing skills and expectations from the real world rather than trained computer skills. For example, navigating through a conventional computer-graphics system requires a set of learned commands, such as keywords to be typed in or function keys to be pressed. By contrast, navigating through virtual reality exploits the user's existing real-world "navigational commands": positioning the head and eyes, turning the body, and walking toward something of interest. Perhaps basing the interaction on the real world reduces the mental effort required to operate the system because the user is already skilled in those aspects of the system. For casual use, this reduction can speed learning; for use in situations involving information overload, time pressure, or stress (e.g., surgery, disaster recovery), this reduction of overhead effort could improve performance.

A unifying characteristic of much of the research leading to next-generation interaction styles is the extent to which users' abilities and pre-existing knowledge are tapped. Direct manipulation moved user interfaces toward more realistic interaction with the computer; next-generation, reality-based interfaces push further in this direction, increasing the realism of the interface objects and allowing the user to interact even more directly with them.

We can also take this approximate notion of "realistic" or "natural" and make it more precise, perhaps by focusing on the pieces of knowledge or skills that a system requires its user to have. This leads to a notional checklist of the knowledge the user needs. However, the user already knows many kinds of things. Moving the head to change point of view is one. The user may also know more arcane facts, such as that pressing the Alt-F4 keys will close a window. It seems intuitively better to exploit the more basic, more built-in knowledge that the user learned in infancy (or perhaps was born with) than more recently learned, less innate knowledge, like the Alt-F4 keys. We could explore how to measure reality-based versus non-reality-based knowledge on a more continuous scale. This requires a way to rate a piece of knowledge according to how real or innate it is; one possible measure is when the user learned it, and we conjecture that earlier is better. Information that is deeply ingrained in the user seems more robust and more highly practiced, and should take less effort to use than information learned recently.

Another side of this issue is that reality alone is typically not sufficient. A useful interface will rarely mimic the real world entirely, but will necessarily include some "unrealistic" or artificial features and commands. In fact, much of the power of using computers comes from this "multiplier" effect, the ability to abstract from or go beyond a precise imitation of the real world.

Discussion Groups

Discussion commenced with four groups, each working in parallel with the same agenda, to develop alternative ideas. We used reality-based interaction as an initial candidate to tie together developments in next-generation interaction styles. From there, different groups considered ways to extend, expand, or discredit this approach, or to introduce alternative opposing or complementary approaches to the problem. The groups began by considering these issues:

  • Do you see a next generation or just a set of disparate developments?
  • What is common about these new interfaces; what things or ideas connect them? (List three things on sticky notes that were common among the morning presentations.)
  • What differs? (Three more sticky notes)
  • Agreement, disagreement, extensions, or alternatives to the reality-based interaction approach
  • Psychological evidence or theories
  • Ways to test or validate frameworks and concepts we develop
  • Opportunities for new designs inspired by gaps uncovered by new integrative thinking.

Groups began by analyzing the research overviews presented during the morning. Each person listed common threads and differences. While the groups diverged, we saw general agreement that the focus is shifting away from the desktop and that technology is moving into new domains. There was also general support for the reality-based interaction concept with some new ideas and dimensions added to it. Many of the commonalities that the groups identified were related to reality-based interaction, for example, exploiting users' existing knowledge about different materials and forms to enforce syntax.

Some axes that are useful for discussing and comparing new interaction styles also emerged from the group discussions:

  • Extent to which physicality is embedded in the feedback loop
  • Bandwidth of the interaction: just using your fingers on a keyboard and mouse versus full-body interaction; tactile I/O in addition to visual
  • Use of multipurpose interaction devices versus specialized devices
  • Extent to which interaction style is configurable by the user (e.g., a TUI where users can couple information to physical objects of their choice)

Other concerns or problems we would like to see solved in the next generation:

  • Broader use, by better integration of everyday skills
  • Lowering technical barriers
  • Universal usability: Next-generation interfaces have the potential to better serve populations that rely on physical representation and manipulation. They may also have an important role in decreasing the digital divide in third-world countries
  • Use psychology to guide development rather than only to evaluate
  • Concerns about trust, especially with lightweight interaction and ubicomp. Mainly relevant where users are being watched and where the technology is not obvious to them
  • Using technology to bring people together (collaboration)
  • Increase interaction and social copresence, collaborative support plus individual support

Wrapping Up

We found encouraging support for the notion of reality-based interaction, but phrased in a variety of different terms. Attendees generally agreed that we need new tools and understanding in order to properly judge current HCI research, which contains the seeds of the next generation. Current evaluation techniques for user interfaces may not be sufficient for these next-generation interaction styles. A focus on new evaluation techniques, metrics, and frameworks is an important research problem.

Defining the next-generation human-computer interaction style is a tall order for a single-day workshop. Still, ideas emerging from the workshop can serve as a lens or common language for viewing, discussing, comparing, and advancing innovative new interface developments and technologies, providing coordinate axes on which to organize them and put them into perspective. Such a framework can also give us explanatory power for understanding what makes particular new interfaces better or worse, and for making predictions about them. It could also point to a research agenda, suggesting new work wherever a new taxonomy reveals gaps or "sweet spots." We are seeking the next generation by considering research currently in progress, rather than attempting to predict future research.


Next Steps

We hope to give the HCI community a new, more explicit way of thinking about and connecting next-generation interaction styles, and that this will lead to a research agenda for future work in this area. Our goal is to create a community of HCI researchers who are thinking specifically about connecting their research to other developments in next-generation interaction. This extends well beyond the original workshop participants; we invite all readers to contact us or to join our discussion forum website, listed below.

We are also pursuing this area further at Tufts, under an NSF grant on "Reality-based Interaction: A New Framework for Understanding the Next Generation of Human-Computer Interfaces," which will provide a nexus for continuing and collecting work in this topic after the workshop.

Acknowledgments

We thank our collaborators Andrew Afram, Eric Bahna, Georgios Christou, Michael Poor, and Larissa Winey of the Computer Science Department at Tufts, as well as Caroline Cao and Holly Taylor of Tufts, Leonidas Deligiannidis of University of Georgia, Hiroshi Ishii of the MIT Media Lab, Sile O'Modhrain of Queen's University Belfast, and Frank Ritter of Penn State University.

We also thank the National Science Foundation for support for our research project on this topic (NSF Grant No. IIS-0414389). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

1. M. Beaudouin-Lafon, "Instrumental Interaction: An Interaction Model for Designing Post-WIMP User Interfaces," Proc. ACM CHI 2000 Human Factors in Computing Systems Conference, pp. 446-453, Addison-Wesley/ACM Press, 2000.

2. V. Bellotti, M. Back, W.K. Edwards, R.E. Grinter, A. Henderson, and C. Lopes, "Making Sense of Sensing Systems: Five Questions for Designers and Researchers," Proc. ACM CHI 2002 Human Factors in Computing Systems Conference, pp. 415-422, ACM Press, 2002.

3. K.P. Fishkin, T.P. Moran, and B.L. Harrison, "Embodied User Interfaces: Toward Invisible User Interfaces," Proc. of EHCI'98 European Human Computer Interaction Conference, Heraklion, Crete, 1998.

4. E.L. Hutchins, J.D. Hollan, and D.A. Norman, "Direct Manipulation Interfaces," in User Centered System Design: New Perspectives on Human-Computer Interaction, ed. by D.A. Norman and S.W. Draper, pp. 87-124, Lawrence Erlbaum, Hillsdale, N.J., 1986.

5. B. Shneiderman, "Direct Manipulation: A Step Beyond Programming Languages," IEEE Computer, vol. 16, no. 8, pp. 57-69, 1983.

6. B. Ullmer and H. Ishii, "Emerging Frameworks for Tangible User Interfaces," in Human-Computer Interaction in the New Millennium, ed. by J.M. Carroll, Addison-Wesley/ACM Press, Reading, Mass., 2001.

Authors

Robert Jacob
Department of Computer Science
Tufts University

Audrey Girouard
Department of Computer Science
Tufts University

Leanne M. Hirshfield
Department of Computer Science
Tufts University

Michael Horn
Department of Computer Science
Tufts University

Orit Shaer
Department of Computer Science
Tufts University

Erin Treacy Solovey
Department of Computer Science
Tufts University

Jamie Zigelbaum
Tangible Media Group
MIT Media Laboratory

About the Authors

Robert Jacob is a professor of computer science at Tufts University, where his research interests are new interaction media and techniques and user interface software. He was also a visiting professor at the MIT Media Laboratory, in the Tangible Media Group. Before coming to Tufts, he was in the Human-Computer Interaction Lab at the Naval Research Laboratory. He received his Ph.D. from Johns Hopkins University. He was Papers Co-Chair of CHI 2001, Co-Chair of UIST 2007, and Vice-President of SIGCHI. He was elected to the ACM CHI Academy in 2007.

Audrey Girouard is a Ph.D. student in computer science at Tufts University specializing in human-computer interaction. She is currently studying the use of brain imagery to enhance HCI through functional near-infrared spectroscopy. In 2007, she received the Postgraduate Scholarship for doctoral studies from NSERC and completed her master's in computer science at Tufts. Before matriculating at Tufts, Audrey received her undergraduate degree in software engineering from École Polytechnique de Montréal, Canada.

Leanne Hirshfield is working on her Ph.D. in computer science at Tufts University. She received her M.S. in computer science from the Colorado School of Mines in 2005. Leanne is conducting research using brain activity to acquire an objective measure of user workload for unbiased, real-time evaluation of current and emerging computer interfaces.

Michael Horn is a Ph.D. candidate in computer science at Tufts University working in the HCI group. He received his undergraduate degree in computer science from Brown University. Afterwards he worked for five years for Classroom Connect, an Internet company that provides curriculum and professional development services for K-12 educators. His research interests include exploring innovative ways to integrate appropriate and useful technology into classrooms. In particular, he is interested in educational programming languages and the opportunities created by tangible user interface (TUI) technology.

Orit Shaer is a Ph.D. candidate in computer science at Tufts University. Her research focuses on developing interaction techniques and software tools for tangible user interfaces. She was also a visiting researcher at the University College London Interaction Center and in the Design Machine Group at the University of Washington. She received an M.Sc. in computer science from Tufts University and is a member of ACM SIGCHI.

Erin Solovey is a Ph.D. student in the computer science department at Tufts University. Her main area of research is human-computer interaction, specifically next-generation interaction techniques. Before coming to Tufts, she was a senior software engineer at Oracle Corporation. She received her bachelor's degree in computer science from Harvard University and an M.S. in computer science from Tufts University.

Jamie Zigelbaum is a graduate research assistant in the Tangible Media Group at the MIT Media Laboratory, where he is working with Dr. Hiroshi Ishii to create new physical embodiments of the digital world that elegantly empower users. Before MIT, Jamie received a B.A. in HCI from Tufts University.

Sidebar: Some research areas in next-generation interaction styles:

  • virtual and augmented reality
  • ubiquitous, pervasive, and handheld interaction
  • tangible user interfaces
  • lightweight, tacit, passive, or noncommand interaction
  • perceptual interfaces
  • affective computing
  • context-aware interfaces
  • ambient interfaces
  • embodied interfaces
  • sensing interfaces
  • eye-movement-based interaction
  • speech and multimodal interfaces

Sidebar: Commonalities and Differences in HCI Trends

Commonalities:
  • Embodiment
  • Interaction takes place in the real world
  • Concern for or relation to the real world and its properties
  • Very little concern for the desktop and GUIs, a sense that our interests have moved on
  • Interaction over a larger physical space
  • Out of virtual world, into real world
  • Full-body interaction: Positioning of the user's body is part of the interface, not just positioning of interaction objects
  • Emphasis on mobile HCI
  • Doing other (noncomputing) tasks while interacting
  • Specialized, aimed at limited rather than general activities
  • Uses hands more than eyes
  • The task is king
  • Common technology-driven approach to the development of these interfaces
  • Individual user versus social focus
  • Performance and productivity are not necessarily relevant measurements for evaluating these interfaces
  • Require new evaluation techniques such as use of ubicomp (e.g., sensors) in the evaluation, ethnographic long-term studies

Differences:
  • Adherence to versus enhancement of reality
  • Extent to which the interface uses physical forms and materials
  • Integration of the interface with the physical world (e.g., VR is less integrated with the real world than TUI)
  • Abstraction, how much the interface maps the "real" physical world
  • Support for colocated collaboration
  • Virtual versus real: an artificial dichotomy?
  • World model versus conversation model
  • Different modalities, human input channels
  • Technology used: GUI, VR, vision, audition, mechanical, EEG, multimodal
  • Feasibility, how realistic and close to deployment they are
  • Time scale of the interface: one interaction versus many, versus lifetime, versus history
  • Social scales: user, task, community, world
  • Level of analysis: meta versus specific technology or problem or solution; concrete tool versus abstract system
  • Practicality: purposeful versus fantasy
  • How "smart" or "autonomous" should systems be?
  • Degree of training (novice/ambient versus expert/tool)
  • Need to expand HCI to accessibility
  • Is new sensor development the key, or is it finding what to do with existing sensors? Need to push into new sensor areas?

Sidebar: For more information on the workshop:

  • Workshop website (including list of participants, position papers, slides, and other info):
  • Discussion forum:
  • Project website:


©2007 ACM  1072-5220/07/0500  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2007 ACM, Inc.
