Elizabeth Churchill, Philip van Allen, Mike Kuniavsky
AI techniques have been affecting people’s experiences with digital services, applications, interactions, and interfaces for a long while now, but without much engagement with or input from interaction designers and HCI researchers.
Over the past two years, as part of the AAAI Symposia series (2017–2018), we have organized a gathering of researchers and practitioners from various disciplines to discuss the UX of AI & ML—the user experience of artificial intelligence and machine learning. Our main interest in organizing these events was to discuss the HCI and design-practice implications of the current fascination with and adoption of artificial intelligence and machine learning in user-facing products.
For the symposia, we invited researchers and practitioners to consider how we can more effectively bring AI considerations and techniques, HCI research, and design practice closer together. Our hope was to lay out concrete activities to close the gap between technical developments in AI methods and techniques, interaction and interactive-system design, and debates around designing with AI, including discussions of ethical, explainable, and accountable AI. We invited considerations of how HCI and design could influence the development of new AI technologies from a number of angles: first, by questioning what AI means and by exploring new definitions of AI; second, by investigating notions of authority in AI algorithmic decision making; third, by considering how to design non-deterministic ecologies, and by exploring what it would mean to design for autonomy and serendipity; fourth, by outlining ethical issues and choices that arise when designing with and for AI systems; and finally, by raising discussion around the aesthetics of AI.
The articles in this Special Topic were selected specifically because they reflect some of the key discussions that spanned both the 2017 and the 2018 symposia. Here are some themes that emerged:
- Terminology matters. A critical project is to identify and clearly define terms at the intersection of HCI, interaction design, user experience, and AI. We explored where the same words used colloquially and in the context of these disciplines may have very different denotations and connotations, and considered what the consequences of such disconnects may be. Consider the words conversation, recommendation, and learning, for example—what do those words mean in everyday conversation, versus their meanings when used by researchers and practitioners in the fields of HCI, interaction design, user experience, and AI?
- Algorithms and data are design materials. AI is made up of a number of algorithmic techniques that manage, summarize, abstract, manipulate, and/or draw inferences from different forms of data. However, neither algorithms nor data are at the heart of AI. Algorithms and data are materials with which to craft robust, reliable, effective, and appropriate experiences for people.
- At the core of HCI, interaction design, and user experience are iterative testing and evaluation. Evaluating systems that use AI techniques requires new iterative testing and evaluation methods designed for the analysis and (re)design of distributed, complex, sociotechnical systems, including the collection of case studies that illustrate unexpected and unintended consequences.
- Historical perspectives can reveal and inspire new ways of thinking about AI. As designers move from the design of individual things to complex ecologies of smart things, some older conceptions that influenced the development of AI may have renewed relevance. In their article “Cybernetics and the Design of the User Experience of AI Systems,” Nick Martelaro and Wendy Ju explore one influential area in the development of AI as a field of inquiry. They showcase the perspectives and voices of a number of HCI and design leaders who consider how cybernetics can help us rethink the dynamics of systems, goals, feedback, and conversations with AI systems.
- Speculative design. Designers have a long history of practice oriented not toward solving a problem, but rather toward exploring a new domain or future potential. Design speculation for AI can consider the new opportunities; ethical, cultural, and societal impacts; and potential hazards. In “Design and Fiction: Imagining Civic AI,” Jason Wong explores the idea that “speculation is essential to work through the uncertainty, complexity, and ambiguity of AI problems.” As an example, his speculative project shows how the good parts of human governance—negotiation, diverse perspectives, slowness—can make AI better.
- Design tools. Given the differences in design goals and strategies for autonomous systems, new tools are needed that help designers build working prototypes for both exploration and application. These tools should provide ways of working around the difficult aspects of AI and enable designers and others to quickly experiment and iterate so they can build their understanding and design better systems. With an exploratory and playful perspective, Phil van Allen’s article “Prototyping Ways of Prototyping AI” describes a toolkit he recently developed and is refining with his students. The tool is itself a prototype that asks the questions: What should an AI prototyping tool look like? What affordances do AI designers need? What are the new requirements for the design of AI?
- AI collaborators. Rather than seeing AI as a way to automate activities or provide solutions, AI systems can be designed as collaborators that participate with humans in creating shared outcomes. In this sense, human beings and AI approaches augment each other. This takes into account concepts around cybernetics, distributed cognition, the limits of narrow AI, and the complexity of human creativity. “From Machine Learning to Machine Teaching: The Importance of UX” by Martin Lindvall, Jesper Molin, and Jonas Löwgren discusses this cooperative approach. The authors address human-machine teaching as a key factor in building effective machine-learning systems, pointing out that learning algorithms can offer far superior predictions when they have substantial amounts of high-quality, well-annotated training data. They describe a two-step process for involving people in the labeling of training data as part of their everyday tasks, offering examples from the analysis of medical images.
- Explainable AI (XAI). Beyond the technical challenges of XAI, our discussion focused on the design issues involved. What affordances can we make available so the user can respond to explanations? How much explanation is too much? What is the role of trust? What if an AI decision is non-intuitive? Does the public need to understand how AI systems make decisions? Cramer et al.'s article, "Assessing and Addressing Algorithmic Bias in Practice," explores one critical facet of XAI, considering algorithmic bias and calling for AI systems to be made more visible, transparent, and interrogable.
The symposia confirmed our view that discussions that bridge HCI concerns, design research and practice, and AI have been lacking—but there is a great deal of positive energy around the idea that we could build strong communities of collaboration and practice to address this lack. Our discussions clearly point to the potential and value of treating AI techniques, the data they utilize, and the results they produce as design materials worthy of critical reflection and investigation. They also ask us to consider the many levels of granularity and scopes of impact, whether what is being designed is a micro interaction, an urban infrastructure, or a social structure.
We hope that the articles in this Special Topic spark some broader discussions and ideation in the HCI and IxD communities, and that further symposia and workshops will reinforce that AI methods and techniques are everybody’s material for creating better digital experiences.
2. Although we could only select a few of the papers for this Special Topic, we want to acknowledge that the ideas in this short introduction were developed with all the participants of the 2017 and 2018 symposia.
3. See Lars Holmquist’s article in Interactions: Holmquist, L. Intelligence on tap: AI as a new design material. Interactions 24, 4 (2017); http://interactions.acm.org/archive/view/july-august-2017/intelligence-on-tap
4. On Explainable AI: https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
Elizabeth Churchill is a director of user experience at Google. Her current research focuses on effective tools for creative work, including tools for interactive technology designers and developers. She is the current vice president of the ACM.
Philip van Allen is a professor at ArtCenter College of Design, interested in new models for the IxD of AI, including non-anthropomorphic animistic design. He also develops tools for prototyping complex technologies and consults for industry. He received his B.A. in experimental psychology from the University of California, Santa Cruz.
Mike Kuniavsky is a user experience designer, researcher, and author. Currently at PARC, he previously cofounded several successful user-experience-centered companies, including ThingM, which designs and manufactures ubiquitous computing and Internet of Things products, and Adaptive Path, a well-known design consultancy.
©2018 ACM 1072-5520/18/11 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.