On the question of human in HCI

Authors: Elizabeth Churchill
Posted: Wed, January 16, 2013 - 11:43:23

WOW! I am so honored and so happy to be part of this amazing blogging lineup for interactions magazine. As some of you may know, I have been writing a somewhat whimsical column for interactions for several years, reflecting on research, design, and the emerging technoscapes that surround us every day. While I will continue to contribute as a columnist for the magazine, I am very excited to be given the chance to offer some quick reflections on design, HCI, technology, and people in this more interactive format.

So, what am I going to be focusing on in this blog?

Mostly (and don’t hold me too tightly to this, this is a creative space after all) I want to think about what it means to be human. I’d like to muse on what we as individuals, as technologists, designers, observers, and interactive technology developers believe we know about “humanness” and “computerness.” And what those explicit or tacit beliefs open up for consideration—or not—when we think about technology design and redesign. 

Like many HCI professionals, I came to HCI from psychology. When I started thinking about this area of research, cognition was front and center for me. However, I was not just interested in the brain’s frontal lobe, where cognition apparently resides. I also cared about bodies and places. So, I read and contributed to ideas on situated and distributed cognition, and to notions of virtual and “real” embodiment. Ultimately, I was always intrigued by what we mean when we say human—what is mutable and what is immutable about being human? Without fully believing it, I signed up tacitly for the idea that technology is in constant flux, always changing, but that people—humanity—stay the same. I believed that humans are the stable entity around which the chaotic world of technology plays out. I believed that technologies enhance us but they do not fundamentally change us: “Technologies change. People don’t” is the elegant and compelling mantra that goes with this viewpoint.

But, I am not so sure of any of this these days.

An instrumental view would acknowledge that, in the “task” moment, we are augmented through the use of tools, including computational ones. Am I the same human without my reading glasses as I am with them? No. If those glasses offer annotations on the visual field, I have augmented and altered my cognitive processes, and thus likely changed the course of my decision-making and action. But has that experience fundamentally changed me? Am I the same person when the glasses come off? Maybe.

Such persuasive and enlightening experiences aside, I am also amazed by the sophistication of prosthetic “bionic” limbs that sense when we are about to initiate a movement before we have made it, enthralled by the idea that we can use the tongue to “see,” and curious about the ways in which brain waves can be harnessed such that objects can be controlled through thought. I am delighted with emerging modal/sensory substitution frameworks where devices can select the modality for presentation appropriate for the situation, based on contextual factors. None of this is even slightly far-fetched. These technological wonders are here now. But we have only begun to scratch the surface of what is possible, or to ask the philosophical questions: If it is my tongue that registers the visual image of an object, am I seeing? Is it vision? Computational objects that reach into our bodies, invasively or not, start to talk to each other and to us. They start to adapt their behavior as we adapt ours. We are far from the instrumental, tool-for-the-task, master-slave technological perspective that fueled early conceptions of human-computer interaction. We are explicitly in the world of not just cohabitation, but of co-constitution.

I am curious about the significance of this realization for us as designers of augmentative and facilitative technologies. Without becoming completely relativist, I am more and more excited about exploring the ways in which interactive technologies are changing us, and changing how we think about us—physiologically, psychologically, socially. What new design possibilities are opened up, and what new discussions around ethics, agency, and decision-making are looming? The binary distinction between human and computer remains useful in some contexts, certainly. But increasingly, we are in cahoots with constellations of computational and cognizing devices, services, and applications. This expanded view of human-computer interaction brings into question what it means to be human.

In this blog, I’d like to discover, share, and investigate some ways in which humans are changing (or not) as computation’s embodiments evolve. That is, I want to engage with the idea that perhaps we need to rethink what it means to be human, just as we have rethought what it means to be computational. And, to be more pragmatic, to ask how we will translate our insights from such ruminations into the settings where we work.

In this blog, in the name of exploration and debate, I intend to be petulant, insistent, strident, argumentative, and probably usually wrong. I am here to share, question, probe, and learn. With you.

Elizabeth F. Churchill is Director of Human Computer Interaction at eBay Research Labs in San Jose, California. She is also Vice President of ACM SIGCHI.


@Gopi (2013 01 24)

What an exciting question to focus on! Something very close to my own heart. Thank you for sharing your thoughts. I look forward to reading future posts.