Cover story

XVIII.4 July + August 2011
Page: 50

Reimagining HCI


Author:
Liam Bannon

Some years ago, HCI researcher Panu Korhonen of Nokia outlined to me how HCI is changing, as follows: In the early days the Nokia HCI people were told “Please evaluate our user interface, and make it easy to use.” That gave way to “Please help us design this user interface so that it is easy to use.” That, in turn, led to a request: “Please help us find what the users really need so that we know how to design this user interface.” And now, the engineers are pleading with us: “Look at this area of life, and find us something interesting!” This, in a nutshell, tells a story of how HCI has moved from evaluation of interfaces through design of systems and into general sense-making of our world.

This essay argues for a reformulation of the HCI discipline for the 21st century, centered on the exploration of new forms of living with and through technologies that give primacy to human actors, their values, and their activities. The area of concern is much broader than the simple “fit” between people and technology to improve productivity (as in the classic human factors mold); it encompasses a much more challenging territory that includes the goals and activities of people, their values, and the tools and environments that help shape their everyday lives. We have ever more sophisticated and complex technologies available to us in the home, at work, and on the go, yet in many cases, rather than augmenting our choices and capabilities, this plethora of new widgets and systems seems to confuse us—or even worse, disable us. (Surely there is something out of control when a term such as “IT disability” can be taken seriously in national research programs.) Solutions do not reside simply in ergonomic corrections to the interface, but instead require us to rethink our whole value frame concerning means and ends, and the place of technology within this frame.

The ambit of HCI has expanded enormously since the field’s emergence in the early 1980s. Computing has changed significantly; mobile and ubiquitous communication networks span the globe, and technology has been integrated into all aspects of our daily lives. Computing is not simply for calculating, but rather is a medium through which we collaborate and interact with other people. The focus of HCI is not so much on human-computer interaction as it is on human activities mediated by computing [1]. Just as the original meaning of ACM (Association for Computing Machinery) has become dated, perhaps so too has the original meaning of HCI (human-computer interaction). It is time for us to rethink how we approach issues of people and technology.

In this article I explore how we might develop a more human-centered approach to computing. I begin with a brief look at early human factors history, discussing the move toward human-centered automation in the 1970s and 1980s, and then note the influences of the participative design and CSCW fields on HCI. I continue with the impact of the design tradition on HCI thinking, leading to the emergence of the interaction design field. All of these developments can be seen as moves in the game toward a more nuanced approach to understanding the way in which people design and use technology in our world. I then consider how a more human-centered approach might reframe certain current research topics, and end on a note of optimism about the heterogeneity of conceptual and methodological approaches we might find within a revitalized HCI community.

Human Factors—A Brief History

“In the rush to computerize, to automate, and to network, relatively little attention seems to have been paid to evaluating the consequences of submitting that particular realm of human activity to rapid and ofttimes radical technical change.” —Gene Rochlin [2]

The field of human-computer interaction emerged in the early 1980s from the confluence of a variety of concerns about the human aspects of working with computer systems. Traditionally, human-machine interaction was the province of the human factors field. Indeed, the annual ACM CHI conference is still labeled as a conference on “Human Factors in Computing Systems.” This linkage is worth exploring, especially since today most people in the HCI field do not have direct contact with this tradition.

Human factors engineering developed within industrial engineering at the turn of the last century, when the concern was to maximize industrial productivity through maximal utilization of technology and the most effective exploitation of human labor. The focus was on improving the “man-machine fit,” as it was referred to at that time. Frederick W. Taylor is regarded as one of the pioneering figures in industrial engineering, with his gospel of “scientific management” seen by management as a beacon of hope for improving business performance. At the same time, Taylorism, as it became known, was viewed as a disaster by unions and workers, who experienced a reduction in their autonomy on the shop floor due to Taylor’s maxim of “The One Best Way” (that is, his “scientific” way) and the separation of the conception of tasks from their execution. Workers would be told exactly what to do by management—their skill and ingenuity were no longer required. The optimization of the man-machine fit often seemed to fit the person to the machine, rather than vice-versa—after all, machines were expensive, and people (human labor, including women and children) at that time were not [3].

Automation extended its reach during the last century, moving from the physical to the mental sphere. Workers were still required for tasks not yet fully automated and for certain supervisory or control functions, but the impetus was to expand automation inexorably out from the factory and into the office and other domains. In the 1970s and 1980s, however, there was an interesting turn within human factors, in which some researchers and practitioners began to raise questions about the quality of human-machine systems and note the dangers of overautomation. Researchers examining various kinds of complex socio-technical safety-critical systems realized that designing reliable, resilient systems depended crucially on an understanding of human, technical, social, and institutional strengths and weaknesses, and they began designing for this complexity from the outset. Bainbridge’s classic paper “Ironies of Automation” [4] sounded a warning bell. Over the past 40 years, many others have documented similar concerns, especially those investigating complex safety-critical processes, whether in nuclear power-plant operation, battlefield systems, emergency management systems, aircraft cockpits, or air traffic control systems. (See [2] for an overview of many of these studies.) The usual blaming of any failure in complex automated systems on human error is now seen as simplistic and problematic (see, for example, [5]). While the human in the loop may indeed perform some inappropriate action, rarely is this action alone sufficient to cause a mishap. A complex nexus of poor interfaces to instruments and sensors and their lack of integration, poor problem prioritization, poor training, and weak operating procedures is usually even more to blame. Although in some circles humans are seen as hazards due to their likelihood of making errors and violating procedures, they are also wonderfully resilient and adaptive creatures who can make adjustments, compensate for technical faults, recover from problematic states, and improvise to solve problems.




What is important for our purposes here is the realization that building reliable and robust complex human-machine or socio-technical systems requires us to go beyond approaches that aim for full-blown automation, with some residual role for humans added as an afterthought when complete automation is impossible. Rather, we need to develop our designs from the outset to take advantage of some of the wonderful flexibilities and capabilities of human beings. Human judgment is required to manage rule-based automatic systems. Issues such as reliability and dependability of systems are thus not viewed simply as technical features, but instead as inherently sociotechnical, produced in and through the actions of people and systems in the course of the working day. This has led to the development of a more human-centered approach to automation, based on an increasing awareness of the importance of human skills and judgment in making such systems work.

HCI—Early Influences and Emergent Forms

This insight into the value of human skills and practices applies not only to complex safety-critical systems, but also to all areas where people and technology commingle. Early HCI embraced the dictum “Know the user!” and endorsed a user-centered approach to interface design, but its focus was still mainly on the individual human user, viewed as an information-processing mechanism. The gap between research lab studies and practical systems design seemed huge. The human was seen as a set of factors that had to be accounted for in the design process, but the notion of the user—a very limiting term—as an active actor in the process was missing [6].

The view of people as competent human actors with skill sets that could be augmented via various forms of computer applications (tools) was strengthened in HCI through the influence of what became known as the Scandinavian participative design movement, led by Kristen Nygaard and others [7]. Around this time the field of computer-supported cooperative work (CSCW) also emerged. Some in HCI view its emergence as a simple enlargement of the HCI field—a move beyond the human-machine dyad to encompass groups. However, it can also be seen more radically as a shift from a psychological to a sociological perspective on human work and activity, emphasizing field observation methods rather than lab studies. This shift resulted in a refocusing not just of HCI, but also of fields such as office automation, away from interfaces and interaction toward understanding workplace practices and the support requirements of cooperative work [8]. The CSCW field has provided us with a large corpus of workplace studies that show the artful ways in which people manage to accomplish their work with and through technology. Once again, the focus is not on replacing human labor with machines but on supporting people through technology.

The Emergence of Interaction Design

“...the design role is the construction of the ‘interspace’ in which people live, rather than an ‘interface’ with which they interact...” —Terry Winograd

In a lucid and, to my mind, somewhat neglected essay entitled “The Design of Interaction” [9], Terry Winograd provided an excellent rationale for the emergence of interaction design, a field that grew out of the confluence of “design thinking” with earlier, task-centered HCI concerns. Again, this can be viewed as simply an enlargement of the HCI community, or it can be seen as a more radical break with the engineering and human-science framework within HCI. Winograd argues that we are moving from the design of interfaces between people and machines to the design of “interspaces” inhabited by “multiple people, workstations, servers and other devices” in a complex web of interactions. The focus is on how to design spaces within which people can communicate, rather than on the computing machinery itself; this shift creates new opportunities for interaction between more traditional HCI researchers and members of various design professions. This mixing of concepts, methods, and practices has been exciting—and at times bewildering. Concepts such as aesthetics, user experience, enjoyment, and play are now discussed at HCI conferences. Technology is presented not simply as something to use in the workplace, but also as a part of our lifestyle and a presence in our homes—something we live with, not simply something we use [10].

This expansion of traditional HCI opens up new forms of inquiry, new questions, and new methods of investigation. The dominance of the cognitivist approach is questioned. Phenomenological approaches are explored [11]. The nature of experience is analyzed [12]. Attention to the human body in space, and to the environment in which activities are performed, becomes more pronounced. Screen interfaces give way to wall displays and tangible surfaces. Playfulness and ambiguity become new methods of exploring forms of interaction and living [13]. The role of performance has begun to be investigated seriously, in terms of how people engage in activities in space. Human concerns and values become more explicit and a part of the discourse. Critical design approaches question how we live with and through technology [14]. Researchers argue for the need for thoughtful and reflective design [15, 16].

This panoply of ideas, critiques, art, designs, and reflections at times sits uneasily with a more scientific research agenda. There is something about the kinds of questions being raised that makes us realize this mixing of scientific knowledge, on the one hand, and design expertise, on the other, can create uneasy bedfellows. Returning to Winograd [9], he argues that the challenge for interaction design is to combine the practical aspects learned from engineering, the human concerns that guide design, and social science perspectives on our world. For some, the attempt to combine this plethora of approaches under the old HCI banner is too limiting, and even the field of interaction design is not sufficiently broad. Some argue that we require a new frame. The terms “human-centered computing” and “human-centered design” have been touted as possible replacements for HCI—a term many see as beyond its sell-by date, even when used in the loose umbrella fashion mentioned earlier.

What’s New (If Anything) in “Human-Centered Design”?

“There is no single recipe for human-centered design.” —Rob Kling and Susan Leigh Star [17]

While one can find references to the need for a human-centered approach—as distinct from a system-centered one—from the early days of HCI, only within the past decade has the term become visible as the name of a field of inquiry, as seen in labels such as human-centered computing (HCC), human-centered design (HCD), and human-centered systems (HCS) [18]. People often use these terms in a generic way to encompass a range of distinct research themes, such as interaction design, intelligent systems, and human-computer interaction, without any commitment to an overarching conceptual framework other than a general interest in the development of complex human-machine systems that pay close attention to human and social factors. From this perspective, the “new” field is simply the sum of its parts and can be described by enumerating the various topics and subfields that make it up.

For others, the term “human centered” implies a more specific theoretical—and even ethical—commitment to the design and development of technologies that augment the already existing skills and expertise of human workers, and thus links back to the earlier concerns expressed in the human-centered automation movement. This perspective of “human-centered design” (or computing, or systems) as a paradigm shift takes the term “human centered” to mean more than simply “considering the user” in technology development. Rather, it places our understanding of people, their concerns, and their activities at the forefront in the design of new technology. As an example of what this might mean, I will mention the issue of values. Concerns over the ethical and moral stance being taken in research and systems design projects have become more pronounced in recent years, partly as a result of the variety of different disciplinary groups and interests now involved in the HCI and interaction design communities. One can no longer assume a shared set of values among researchers or the communities in which they work. This concern with values is evident in, for example, the work of Batya Friedman and her colleagues at the University of Washington, who have developed the topic of value-sensitive design [21]. A recent workshop report on the future of HCI published by Microsoft Research entitled, appropriately enough, Being Human, is instructive [22]. The authors state that the biggest issue confronting the field is to consider values more explicitly in our designs. What might this mean?

A Way Forward

“... technology is not given. It’s not like the sun or the moon or the stars. It was made by people like us. If it’s not doing for us what we want, we have a right and a responsibility to change it.” —Mike Cooley, Right Livelihood Award Speech, 1981

We should not believe that applying new labels such as “human centered” to HCI or computing or design in itself changes anything. Rather, the name change points to a more bottom-up process of rediscovering our human potential and reconstructing the very foundations on which we attempt to build any form of human-centered informatics. In what follows, I provide two short examples of topics that have generated interest as new possibilities in our ubicomp world, and show how a more human-centered agenda could not just raise some concerns but point to a radically reworked agenda for these fields.

Example 1: A Different Perspective on the Theme of “Ambient Assisted Living”

Many HCI researchers have become interested in the area of ambient intelligence and healthcare. They have focused on introducing such technologies into the home to support elderly people living independently, with a view to providing them with a better quality of life at home than in an institution and, concomitantly, to keeping them from becoming a burden on the state as they grow older. Much of this work is couched in the language of empowering older people through independent living, although on closer examination this framing of the issue is questionable. The 24/7 remote monitoring of one’s vital signs or the alerting of one’s relatives when one has fallen or failed to take medication may, of course, be potentially life-saving, but it does not necessarily add to the dignity or empowerment of the individual concerned.

In examining the large number of experiments currently in progress under the “ambient assisted living” banner, one often finds that while there are some sensitively conducted studies, the real needs and concerns of the central people involved are somewhat secondary to those of either the medical specialists, who require the logging of various physiological parameters, or the technical specialists, who wish to explore various ambulatory and domestic sensor technologies. Some of this work is done in specially equipped laboratories—the “intelligent home,” or the future home for the elderly, where extensive logging of human behavior and of the performance of the instrumentation is possible. However, the relevance of these kinds of studies for the real problem of our aging population is reduced by the fact that in these lab environments, many of the real problems of daily living for older people are effectively ignored. How these new technologies might fit into the domestic environments and daily activities of people becomes difficult, if not impossible, to observe. Even when trials are done in actual homes, further issues can be masked. Trials of a few hours, or even a few days, while providing certain kinds of information, do not allow one to observe issues of longer-term use and habituation. More important, proceeding with a technology-first or even medical-first model can blind one to much more fundamental issues for those who supposedly are at the center of the investigations: elderly people.

When viewed from the perspective of the people concerned, it is often not clear how these studies are addressing the participants’ fundamental needs, such as their need to be in contact with family, relations, and neighbors in a natural and unproblematic way so they have a sense of belonging; their need for a sense that their lives have meaning; their need to feel of use to society and to be actively engaged in living; and their need for their privacy and wishes more generally to be respected and not overtaken by well-meaning family members, social services, or medical personnel. Investigating how these technologies can be utilized in a cooperative manner by older couples, or other family members, or close neighbors, is also an area that surprisingly has not figured substantially in this work, with the focus often being on individual self-help or else on more formal external support. Likewise, local community help and support, which have been shown to be of great importance, often get minimal attention.

I am not claiming that any of the topics mentioned here are easy to deal with in our research, but what is of concern is the relatively meager amount of space devoted to such concerns in the ubicomp and ambient assisted-living research program. This brings me back to the basic point of reflecting on our own values and attitudes in performing research, and ensuring that we listen to all the stakeholders involved very carefully from the outset of any planned intervention. I believe this area could be an important test case in which those advocating more human-centered approaches could devise alternative research programs and strategies to the current mainstream approaches.

Example 2: Persistence vs. Ephemerality, or Who Wants Total Recall?

It is frequently assumed that in an era of RFID tagging and ubiquitous technologies, our conceptions of privacy will be radically reshaped. Complete logging of our location, our communications, our purchases, and so on will be commonplace due to the mediated nature of our activities. I have been particularly interested in the ways in which it is assumed that this time-stamping of our lives will provide us with many benefits. One claim is that we will be able to have total recall of all events and communications, as they will all be accessible. It is assumed that having such access would virtually always be a benefit. There are numerous research projects concerned with logging events through the use of personal locator devices and environmental sensors in order to provide people—for example, those with Alzheimer’s disease—with diary logs that, it is hoped, might allow them to recall the past. Also, there are projects investigating the creation of life logs of people’s experiences and materials. A showcase example is the Microsoft MyLifeBits project, which attempts to capture every aspect of a person’s life-world in electronic form and make it available for later perusal [23]. My argument with many of these projects concerns the assumptions about human memory and human practices that are implicit in them. They seem to rest on an overly simplistic model of individual and collective memory, one that assumes the real-time capture of every event and activity might somehow become relevant to later human activities of remembering and communicating.

My concerns begin with the assumption about the positive benefits of recording everything about our lives. Setting aside, for the moment, the issue of framing that exists with the use of any digital recording device, the question at one level is how useful such a record would be for people in making sense of their world. Human memory is not like computer memory—it is a constructive, combinative process, not a readout from a memory register. The fact that we forget, individually or collectively, is not simply an error in the human psyche, or in the social fabric of a group or society. There are many positive values associated with forgetting at both an individual and collective level—for instance, in allowing new ideas to emerge and take root. At the same time, our ways of living are intricately connected to the nuanced ways in which we can express ourselves in different settings through different media. Discussions in a coffee house or pub have a different value and set of expectations from those held in a meeting hall. Assuming that all of our interactions and discussions are forever frozen and recorded would significantly reduce the subtleties of social intercourse. Rather than augmenting our capabilities through recording and storage, we would actually be reducing our range of options. There are many intriguing questions concerning both individual and collective remembering and forgetting that could be explored. What is worrisome is how little discussion we find about such human and social issues in many of the projects and scenarios focused on “augmenting memory” [24]. Rather, the focus seems to be more on technological exploration for its own sake. While I have nothing against such exploration, presenting it as being motivated primarily by real-world concerns about human-memory fallibility is, in my view, problematic.

Space for Imagination

“Le Corbusier said in the early part of this century that a house is a machine for living in… Think about it: A house is a machine for living in. An office is a machine for working in. A cathedral is a machine for praying in. This has become a terrifying prospect, because what has happened is that designers are now designing for the machine and not for people...” —William McDonough [25]

I return now to the underlying theme of this essay: What is HCI, and where is it going? As noted, one perspective sees all of the developments described as just expanding the large HCI umbrella so that HCI is viewed as comprising this enlarged community and their activities. Concepts, methods, and practices from different research and design traditions are thus entangled. From this perspective, the field has more the feel of a bazaar than of a cathedral, to use Eric Raymond’s phrase. On the positive side, this opens up spaces in which people may explore a wide range of approaches to understanding, building, and evaluating human-machine systems, but, in this view, the core of HCI is everywhere and yet nowhere. A more radical approach might be to argue that HCI in its original meaning as “human-computer interaction” is no longer a relevant framework or approach. In this view, fields such as participative design, CSCW, interaction design, and human-centered design are not simply aspects of HCI but themselves radically different interdisciplinary research programs that re-specify the very nature of the relation between people and technology.

Over the years, HCI as a field and as a community has been exhorted to change in various ways. Jonathan Grudin expanded the interface notion in his early paper on the computer “reaching out” [26]. Alan Newell, the UK academic, argued forcefully at INTERCHI ‘93 for more effort in HCI to be focused on the needs of people with extraordinary needs and abilities. At CHI 2000 design theorist John Thackara suggested a focus on the societal concerns of sustainability and meaningful social action. Susanne Bødker has explored “third wave” challenges for HCI [27], and Yvonne Rogers has argued for a more engaged approach to ubiquitous computing that focuses on human actors [28]. The more recent concerns with examining human values and the theme of human-centered computing can be seen as once again highlighting human and societal concerns that can be explored with and through new media and technologies. The very notion of computing or informatics as a discipline focused simply on what can be automated becomes open to debate. From a human-centered computing perspective, the question becomes what should be automated? [29].

When I suggest reimagining HCI, I am encouraging an openness to new forms of thinking about the human-technology relationship, especially as we confront such challenges as the Internet of Things [30]. Designers Anthony Dunne and Fiona Raby argue this is their intent with their notion of critical design, which in their words aims to “raise awareness, expose assumptions, provoke action, spark debate, and even entertain” [14]. Julian Bleecker envisions the blending of science fact and science fiction with design to create what he terms design fictions—artifacts that tell stories—new forms of imagining and prototyping [31]. New groups are forming to explore alternative directions in human-machine technology [32, 33]. New open source software and hardware platforms, fabrication labs, and similar facilities provide a drawing board on which people can imagine possible futures and then turn their dreams into artifacts and services that can be tried out, exported, and then hacked by others. New innovative practices are developing, building on individual and collaborative creativity. Design is no longer confined to specific sites and pedagogical traditions. Perhaps the issue is no longer simply about reimagining HCI—it’s about reimagining, and then acting out, a better world.

References

1. Bannon, L. and Kaptelinin, V. From human-computer interaction to computer-mediated activity. In User Interfaces for All: Concepts, Methods and Tools. C. Stephanidis, ed. Lawrence Erlbaum Associates, Hillsdale, NJ, 2000.

2. Rochlin, G.I. Trapped in the Net: The Unanticipated Consequences of Computerization. Princeton University Press, Princeton, NJ, 1997.

3. Charlie Chaplin’s classic film, Modern Times (1936), offers a comic, yet insightful, commentary on this machine-centered working world.

4. Bainbridge, L. Ironies of automation. Automatica 19, 6 (1983), 775–779.

5. Woods, D.D., Dekker, S., Cook, R., Johannesen, L., and Sarter, N. Behind Human Error (2nd Edition). Ashgate Publishing, Abingdon, UK, 2010.

6. Bannon, L. From human factors to human actors: the role of psychology and human-computer interaction studies in systems design. In Design at Work: Cooperative Design of Computer Systems. J. Greenbaum and M. Kyng, eds. Lawrence Erlbaum Associates, Hillsdale, NJ, 1991, 25–44.

7. Bjerknes, G., Ehn, P., and Kyng, M., eds. Computers and Democracy: A Scandinavian Challenge. Avebury, Aldershot, UK, 1987.

8. Schmidt, K. and Bannon, L. Taking CSCW seriously: supporting articulation work. Computer Supported Cooperative Work (CSCW): An International Journal 1, 1–2 (1992), 7–40.

9. Winograd, T. The design of interaction. In Beyond Calculation: The Next Fifty Years of Computing. P. Denning and R. Metcalfe, eds. Copernicus, Springer-Verlag, New York, 1997, 149–161.

10. Hallnäs, L. and Redström, J. From use to presence: On the expressions and aesthetics of everyday computational things. ACM Transactions on Computer-Human Interaction 9, 2 (2002), 106–124.

11. Dourish, P. Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge, MA, 2001.

12. McCarthy, J. and Wright, P. Technology as Experience. MIT Press, Cambridge, MA, 2004.

13. Gaver, W., Beaver, J., and Benford, S. Ambiguity as a resource for design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems (Fort Lauderdale, FL, Apr. 5–10). ACM, New York, 2003, 233–240.

14. Dunne, A. and Raby F. Design Noir: The Secret Life of Electronic Objects. Birkhäuser, Basel, 2001.

15. Löwgren, J. and Stolterman, E. Thoughtful Interaction Design. MIT Press, Cambridge, MA, 2004.

16. Sengers, P., Boehner, K., David, J., and Kaye, J. J. Reflective design. Proc. of the 4th Decennial Conference on Critical Computing. ACM, New York, 2005, 49–58.

17. Kling, R. and Star, S.L. Human-centered systems in the perspective of organizational and social informatics. ACM SIGCAS Computers and Society 28, 1 (1998), 22–29.

18. Space precludes a historical exegesis here, but note that the term human-centered computing was used as a placeholder label for an interdisciplinary initiative in the U.S. inaugurated at an NSF workshop in 1997 (for an overview of some of the issues discussed, see [17] and [19]). Professional associations such as IEEE have also had task forces on HCC [20].

19. Talbert, N. Toward human centered systems: A special report. IEEE Computer Graphics and Applications 17, 4 (1997), 21–28.

20. Jaimes, A., Gatica-Perez, D., Sebe, N., and Huang, T.S. Human centered computing: Toward a human revolution. IEEE Computer (May 2007), 30–34.

21. Friedman, B., Kahn, P.H., Jr., and Borning, A. Value sensitive design and information systems. In Human-Computer Interaction in Management Information Systems: Foundations. P. Zhang and D. Galletta, eds. M.E. Sharpe, Armonk, NY; London, UK, 2006, 348–372.

22. Harper, R., Rodden, T., Rogers, Y., and Sellen, A., eds. Being Human: Human-Computer Interaction in the Year 2020. Microsoft Research Ltd., Cambridge, 2008.

23. Gemmell, J., Bell, G., and Lueder, R. MyLifeBits: A personal database for everything. Comm. ACM 49, 1 (Jan. 2006), 88–95.

24. Bannon, L.J. Forgetting as a feature, not a bug: The duality of memory and implications for ubiquitous computing. CoDesign Journal 2, 1 (2006), 3–15.

25. McDonough, W. Design, ecology, ethics and the making of things. From a talk at the Cathedral of St. John the Divine, New York (Feb. 7, 1993); http://www.mcdonough.com/writings.htm

26. Grudin, J. The computer reaches out: The historical continuity of interface design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1990, 261–268.

27. Bødker, S. When second wave HCI meets third wave challenges. Proc. of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles (Oslo, Norway, Oct. 14–18). ACM, New York, 2006, 1–8.

28. Rogers, Y. Moving on from Weiser’s vision of calm computing: Engaging ubicomp experiences. Proc. of Ubicomp 2006. Springer-Verlag, New York, 2006, 404–421.

29. Tedre, M. What should be automated? interactions 15, 5 (2008), 47–49.

30. Greenfield, A. Everyware: The Dawning Age of Ubiquitous Computing. New Riders, Berkeley, CA, 2006.

31. Bleecker, J. Design fiction: A short essay on design, science, fact and fiction. 2009; http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/

32. The Internet of Things Council; http://www.theinternetofthings.eu/

33. The European Society for Socially Embedded Technology; http://www.eusset.eu/

Author

Liam Bannon is a visiting professor at the Université de Technologie de Troyes (UTT), France. He also holds professorial positions at the University of Limerick, Ireland, and Aarhus University, Denmark. His research interests include human-centered computing, human-computer interaction, computer-supported cooperative work and learning, knowledge management, cognitive ergonomics, new media, interaction design, and social dimensions of new technologies.

©2011 ACM  1072-5220/11/0700  $10.00


 
