Authors:
Elizabeth F. Churchill, Mikael Wiberg
Welcome to the May–June 2024 issue of Interactions! Our cover this issue reflects something that has been top of mind for many of us: the ways in which we interact with AI systems.
Many of us have already experienced how output from large language model (LLM) systems can be superficial and nonhuman, leaving us with an uncanny sense of disquiet. While LLMs can deftly produce written text or images based on prompts, there is still something missing—for example, the ability to respond to queries with questions that unpack nuance and ambiguity—basically, to collaborate on understanding in a way another human might. This has led to the development of a broad spectrum of new digital tools that paraphrase and ask further questions in an attempt to make AI speak human.
Notably, this is simply the latest chapter in a decades-long story of establishing humanlike human-AI conversations. The Turing test asks a human interrogator to distinguish between a machine and another human in conversation. While some believe AI systems are close to passing that test, others say, "No way," and tell us that this is the wrong test. Most believe current AI can mimic human conversation by processing language patterns but that current AI conversation lacks "genuine" deep understanding. Certainly, current AI systems struggle with concepts that are intuitive to humans. They fail to grasp context, social cues, and empathy, or to make logical connections the way humans do. In response, training in "prompt engineering" is growing as humans learn to speak to AI tools.
This of course raises the question: How about going the other way and exploring a human-centered, rather than a machine-centered, approach to human-AI conversations?
Our cover story this month addresses exactly this topic. In their article "From Prompt Engineering to Collaborating: A Human-Centered Approach to AI Interfaces," Tanya Kraljic and Michal Lahav invite us to consider that, despite the recent leap forward in AI driven by LLMs, the way people actually talk isn't yet fully reflected in the technology. They argue that the onus of query creation remains firmly on us users, whose responsibility it is to "package up our thoughts in very specific language so that AI can decode or 'understand' it." They discuss prompt engineering and, drawing on years of research, illustrate the artificiality of the ways in which people issue prompts. They show how, when humans interact, they shape meaning with one another. Conversely, prompt engineering, to date, leaves the primary burden on people to specify and clarify through iteration. In short, prompt engineering recommendations typically take a machine-centered approach. The authors provoke us to ask: What would a real co-shaping of understanding look like?
Of course, embodiment, or its lack, is also an issue for AI systems. As humans, understanding is tied to our corporeal senses and to our environmental interactions in profound ways. It remains to be seen whether robotics and AI integration could give AI systems some real-world "experience," improving their ability to relate to human concepts and conversational contexts. In her Making/Breaking piece, Sofia Guridi brings embodiment into the picture with a discussion of an interactive multisensory installation composed of handwoven biotextile pieces. "Borrowed Matter/Materia Prestada" explores "innovative uses of tree cellulose as a biomedium while provoking reflection on the extractive processes and economic considerations involved in its production." A beautiful, engaging provocation for us all, this piece also appears on our website with a Spanish translation.
As this issue coincides with CHI 2024, we are very excited to share some thoughts from the CHI Steering Committee. Regan Mandryk, Cliff Lampe, and Aaron Quigley summarize some discussions and possibilities for the future of the CHI conference. Based on feedback from several events involving key members of the SIGCHI community, a number of options are presented for us to consider as we think about the future of our flagship conference. Nothing has been finalized, but hopefully this thoughtful overview will spark conversations and reflection in our community. The authors invited feedback, so please do consider sharing your ideas.
Speaking of CHI, we are aware that many people cannot attend CHI 2024 in Hawaii in person. In their Blog@IX submission "Doing CHI Together: The Benefits of a Writing Retreat for Early-Career HCI Researchers," Ava Elizabeth Scott, Leon Reicherts, and Evropi Stefanidi share their experience of running writing workshops at the University of St. Gallen. Called CHI Together, these workshops bring researchers together for two and a half weeks prior to the CHI deadline to work through ideas for papers, refine papers, and discuss collaborations that may lead to papers being submitted to future CHI conferences. Bringing novice and experienced authors together not only creates an environment for papers to be completed for submission but also builds community for future collaborations.
Finally, we would like to welcome our new Exit curators, Scott Minneman and Renato Verdugo. In their first piece for Exit, Minneman shares one piece from the work of artist Alexander Reben, an MIT Media Lab graduate and current artist-in-residence at OpenAI. Using AI tools as his muse, Reben creates extraordinary sculptures that invite us to ask: What do human-AI collaborative inspiration and creativity look like?
As always, we invite you to consider submitting to Interactions. Let us know what you are doing in your technology and design explorations, as well as in your labs, projects, prototyping activities, and fieldwork. Please invite others to submit as well—we are always interested in foregrounding the ways we interact with, or are deeply entangled with, interactive systems.
Elizabeth F. Churchill and Mikael Wiberg
[email protected]
Copyright held by authors