Wei Xu, Marvin Dainoff
Artificial intelligence (AI) has brought benefits, but it may also cause harm if not appropriately developed. Current development is mainly driven by a technology-centered approach, which has caused many failures [1,2]. For example, the AI Incident Database has documented more than a thousand AI-related accidents [3]. To address these challenges, a human-centered AI (HCAI) approach has been promoted and has gained growing acceptance over the past few years [1,2]. HCAI calls for combining AI with HCI design to enable the development of AI systems (e.g., autonomous vehicles, intelligent user interfaces, or intelligent decision-making systems) that achieve design goals such as usable and explainable AI, human-controlled AI, and ethical AI.
While HCAI promotion continues, it has not specifically addressed the collaboration between the AI and HCI communities, resulting in uncertainty about what actions each side should take to apply HCAI in developing human-centered AI systems.
→ We are at a historical junction for a shared journey of collaboration between the AI and HCI communities to enable HCAI.
→ The complementary nature of the two communities' approaches encourages this collaboration.
This article focuses on the collaboration between the AI and HCI communities, leading to nine recommendations for effective collaboration to enable HCAI in developing AI systems.
When personal computers emerged in the 1980s, their development followed a technology-centered approach that ignored the needs of ordinary users and resulted in user experience (UX) problems. Through an initiative driven primarily by collaboration among human factors, computer science, and psychology professionals, the new field of HCI emerged. This interdisciplinary, collaborative field has benefited the promotion of a human-centered approach through UX-driven design.
History seems to be repeating itself, having brought the AI and HCI communities to a new juncture. This time the consequences of ignoring the human-centered approach are much more serious.
AI technology has unique characteristics compared with non-AI systems. The AI community has recently made huge investments in the ethics of AI systems, acknowledging the value of identifying human-centered principles that can guide how AI systems are deployed. The HCI community has also been applying the human-centered approach to computing systems but faces challenges in applying it to AI systems [5,6]. How can the AI and HCI communities work together to develop human-centered AI systems?
To better understand the current transition from traditional human interaction with non-AI systems to interaction with AI systems, we conducted a study to assess the transition between the two types of interaction across seven aspects (see the selected results in Table 1). The focus of the study was to 1) identify the major transitions from human interaction with non-AI systems to AI systems; 2) identify the unique characteristics introduced by AI technology; and 3) analyze the pros and cons of the AI and HCI approaches to enabling HCAI [4]. Based on this assessment, we identify the challenges and opportunities for collaboration between the AI and HCI communities and generate nine recommendations for a shared journey toward enabling HCAI.
|Table 1. The complementarity of approaches across the AI and HCI communities for enabling HCAI.|
As Table 1 shows, AI technology introduces unique characteristics that pose challenges in developing human-centered AI systems; the approaches taken by each community reflect its individual focus; and, more important, both sides reveal gaps in developing HCAI systems.
The complementary nature of the AI and HCI communities' approaches, however, encourages collaboration between them, which will lead to developing human-centered AI systems more effectively. For instance, one HCAI design goal is to develop human-controlled AI. Driven by the HCAI approach, we advocate that hybrid intelligence be developed in the context of human-machine systems, leveraging the complementary advantages of AI and human intelligence to produce a more powerful form of intelligence: human-machine hybrid intelligence. This strategy not only overcomes the bottleneck of developing AI technology in isolation but also treats humans and machines as a single human-machine system, integrating human functions and roles into AI systems as the ultimate decision makers, without harming humans. As an interdisciplinary field, future HCI work needs to help the AI community explore cognitive computing based on human cognitive abilities (e.g., intuitive reasoning, knowledge evolution) in support of developing human-machine hybrid intelligent systems. Future HCI work should also help accelerate the conversion of existing psychological research results to support cognitive computing and define cognitive architectures for AI research. HCI professionals likewise need to collaborate with AI professionals to explore integrating the cognitive computing method with the human-in-the-loop method, at the system and/or biological levels (e.g., brain-computer interfaces).
Over the past 50 years, there have been collaborations between the AI and HCI communities. When the development of AI encountered a bottleneck, HCI often provided new research ideas and application scenarios for AI technology, such as voice input. In turn, AI has brought breakthroughs to HCI technology and opened new development space for HCI.
As an emerging approach, HCAI will need collaboration between the two communities, which currently faces challenges in practice. Research shows that while HCI professionals struggle to effectively influence the development of AI systems, AI professionals may not fully understand HCAI. Many HCI professionals still join AI projects only after requirements are defined, a problem typical of when HCI was an emerging field 40 years ago. Consequently, recommendations from HCI professionals can be easily ignored by AI professionals. AI professionals often claim that many ease-of-use issues in UI design that HCI could not solve in the past have been solved through AI technology (e.g., voice input). However, studies have shown that the outcomes of a technology-driven approach may not be acceptable from a UX perspective [7]. AI and HCI professionals also find it challenging to collaborate with each other effectively: Recent studies show that HCI professionals do not seem prepared to provide effective design support for AI systems due to a lack of AI knowledge, while AI professionals may not fully understand the purpose behind HCI work. Without a shared language and process, there can be no well-integrated process between the two sides.
We call for the nine actions below to foster collaboration between the two communities.
Share a common design philosophy. In response to the challenges faced by interdisciplinary communities a few decades ago, William Howell proposed a "shared philosophy" model that integrates the human-centered design philosophy with other disciplines. Over the past 40 years, the participation of multiple disciplines in the field of HCI to promote this design philosophy has embodied that model. HCI and AI professionals now need to jointly promote HCAI, just as we promoted the human-centered design approach for PC applications 40 years ago.
Apply an integrated interdisciplinary approach. To enable HCAI in practice, both communities need to enhance their own methods by leveraging the other's. For example, we need to enhance current processes for developing AI systems by incorporating HCI processes and methods, such as iterative prototyping and UX testing, and to enhance current software verification/validation methodology to effectively manage evolving machine behavior in AI systems. Recent work on interactive machine learning looks promising, while the AI-as-a-material approach helps HCI professionals improve current HCI design in developing AI systems. Thus, members from diverse disciplines can collaborate to attain shared goals with complementary methods.
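To make the interactive machine learning idea mentioned above concrete, here is a minimal, hypothetical sketch: a human labels a few examples per round, and a simple model (a one-dimensional nearest-centroid classifier, chosen only for brevity) is retrained after each round. All data, labels, and function names are illustrative, not from the article or any specific library.

```python
# Hypothetical sketch of an interactive machine learning loop:
# human labels arrive in small rounds; the model is retrained each round.

def train_centroids(labeled):
    """Compute the mean feature value per class from (x, label) pairs."""
    sums, counts = {}, {}
    for x, label in labeled:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Round 1: the human labels two examples.
labeled = [(0.1, "low"), (0.9, "high")]
model = train_centroids(labeled)
print(predict(model, 0.4))  # -> low

# Round 2: the human corrects a borderline case; retraining shifts
# the "high" centroid, changing the prediction for the same input.
labeled.append((0.4, "high"))
model = train_centroids(labeled)
print(predict(model, 0.4))  # -> high
```

The point of the sketch is the loop structure, not the model: each cycle of human feedback immediately changes system behavior, which is what makes iterative prototyping and UX testing natural companions to this style of ML development.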
Build optimal UX into AI systems. Some AI professionals assume that AI technologies already make intelligent UIs usable (e.g., with voice input) and show little concern for UX. The reality is that we need more UX work, not less, when developing AI-based intelligent interaction. HCI professionals can develop new interaction paradigms for usable and natural intelligent UIs and further support the new paradigm of human-AI collaboration. A good example of addressing the AI black-box problem is collaborative work between AI and HCI that leads to more explainable and interpretable AI decisions for drivers of autonomous vehicles through a human-centered XAI approach [11].
Enhance HCI and UX design with AI. AI technologies have transformed how HCI and UX design is practiced. For example, we can use machine-learning and algorithm-based approaches to identify insights in user research, design, and UX evaluation. Despite attempts to integrate HCI and AI, HCI designers face challenges in incorporating ML into UX design and in collaborating with data scientists. The HCI and AI communities need to collaborate on developing innovative methods, tools, and processes to help HCI designers better innovate with AI.
Design ethical AI collaboratively. Many AI-related ethical standards are now available. However, implementing ethical AI in the real world remains a challenge. Some AI professionals view ethical decision making as another form of technical problem solving, and many lack formal training in applying ethics to design, so the community lacks the technical knowledge and solution examples needed to implement ethical AI. A multidisciplinary approach may help achieve ethical AI. For instance, AI professionals may apply HCI iterative prototyping/testing and behavioral science methods to improve the training and validation of algorithms and minimize algorithmic bias.
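As a concrete illustration of folding bias checks into iterative validation, here is a minimal sketch that computes one simple fairness metric, the demographic parity difference (the gap in positive-decision rates across groups), and gates an iteration on a tolerance threshold. The group names, data, and threshold are all illustrative assumptions, not from the article.

```python
# Hypothetical sketch: a demographic parity check run during each
# prototype/validation iteration. Data and tolerance are illustrative.

def selection_rate(predictions):
    """Fraction of positive (e.g., 'approve') decisions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in selection rates across demographic groups."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs (1 = positive decision) for two groups.
preds_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 0.375 selected
}

gap = demographic_parity_difference(preds_by_group)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375

# A team might gate each iteration on a tolerance, revisiting training
# data or the model whenever the gap exceeds it.
TOLERANCE = 0.1
if gap > TOLERANCE:
    print("Bias check failed: revisit training data or model.")
```

Demographic parity is only one of several fairness criteria, and the right metric and tolerance are themselves design decisions, which is exactly where behavioral science and HCI input belong in the loop.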
Update skill sets and knowledge. While AI professionals should understand HCI, HCI professionals also need to understand AI technology and apply it to facilitate collaboration. Mutual understanding from an interdisciplinary perspective will help overcome the inability of HCI professionals to influence AI systems reported today. AI professionals likewise need to acquire the necessary knowledge from HCI, as well as from the behavioral and social sciences.
Train the next generation of AI developers and designers. Over the past 40 years, HCI, human factors, and psychology have provided an extensive array of professional capabilities, contributing to a mature UX culture. For this to occur for HCAI, new measures at the level of college education are required, including cultivating interdisciplinary skills. These can include offering students hybrid curriculum options such as "HCI + AI" or "AI major + social science minor."
Accelerate interdisciplinary research and application. The development of AI technology itself has benefited from interdisciplinary collaboration. We advocate further collaborative projects for developing AI systems across disciplines and domains. The development of autonomous vehicles is a good example: Many companies are investing heavily in autonomous vehicles, and there are opportunities for collaboration to overcome the challenges coming to light as these vehicles encounter the real world.
Foster a mature culture of HCAI. Fostering a mature culture of HCAI requires support through management commitment, organizational culture, optimized development processes, and design standards and governance, among other measures. We firmly believe that a mature HCAI culture will eventually emerge; history has already shown our initial success in promoting the human-centered design philosophy through collaboration in the PC era.
To conclude, although the HCAI approach is in its initial stages, its ultimate influence will depend on our continuing efforts, just as the culture of UX was jointly developed over the past 40 years. We thus find ourselves at a historical junction, initiating a new journey of collaboration between the AI and HCI communities to enable the HCAI approach.
3. McGregor, S. AI Incident Database. https://incidentdatabase.ai/
4. Xu, W., Dainoff, M., Ge, L., and Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human-Computer Interaction (Mar. 2022). DOI: 10.1080/10447318.2022.2041900
5. Yang, Q., Steinfeld, A., Rosé, C., and Zimmerman, J. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. Proc. of CHI Conference on Human Factors in Computing Systems. ACM, New York, 2022.
7. Budiu, R. and Laubheimer, P. Intelligent assistants have poor usability: A user study of Alexa, Google Assistant, and Siri. NN Group Report, 2018; https://www.nngroup.com
11. Wintersberger, P., Nicklas, H., Martlbauer, T., Hammer, S., and Riener, A. Explainable automation: Personalized and adaptive UI to foster trust and understanding of driving automation systems. Proc. of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, New York, 2020, 252–261.
Wei Xu is a professor of psychology/HCI at Zhejiang University in China. He is an elected fellow of the International Ergonomics Association. He received his Ph.D. in cognitive psychology with emphasis on HCI and his M.S. in computer science from Miami University in 1997, as well as his M.S. in engineering psychology/human factors from Zhejiang University, China, in 1988. His research interests include human-AI interaction, HCI, and aviation human factors. [email protected]
Marvin Dainoff is a professor emeritus of psychology at Miami University. He is an elected fellow and past president of the Human Factors and Ergonomics Society. He received his Ph.D. in psychology from the University of Rochester. His research interests include human factors, sociotechnical approaches for complex systems, and workplace ergonomics. [email protected]
©2023 ACM 1072-5520/23/01 $15.00