Authors:
Wei Xu
In a 2018 survey of emerging trends among 6,300 corporate IT executives around the world, Accenture found that the foremost concern was citizen artificial intelligence (AI)—that AI research and development should make AI-based solutions responsible and productive actors in society. This means that as AI is developed further, attention should be given not only to the technology itself but also to other important nontechnical factors.
As with other transformative and controversial technologies, such as nuclear and biochemical technologies, AI has both potential benefits and risks. Since its development and use are decentralized and global, the barrier to entry is relatively low, making it more difficult to control [1]. Machine learning (ML)-based AI systems trained with incomplete or distorted data (i.e., their "worldview") can develop biased "thinking," which may in turn magnify prejudice and inequality, spread rumors and fake news, and even cause physical harm. Because of these concerns, some large AI projects never came to fruition. As more and more enterprise and government services based on AI/ML algorithms are released, decisions influenced by AI thinking and worldviews will increasingly affect people's daily lives. Some even worry that future AI systems may completely reject human beings, who would eventually lose control.
In response to these concerns, Stanford University, UC Berkeley, and MIT have established human-centered AI (HAI) research institutes. Their HAI research strategies emphasize that the next frontier of AI is not just technological but also humanistic and ethical: AI is to enhance humans rather than replace them. For example, researchers from Stanford University believe that AI research and development should follow three objectives: to technically reflect the depth characterized by human intelligence; to improve human capabilities rather than replace them; and to focus on AI's impact on humans [1]. In industry, several leading high-tech companies have advocated for AI solutions that are responsible, ethical, secure, and inclusive by publishing guiding principles for AI technology development, processes, tools, and training.
As the core technology of AI, ML and its learning processes are opaque, and the outputs of AI-based decisions are not intuitive. To many nontechnical users, an ML-based intelligent system is a black box, especially the neural networks used for pattern recognition in deep learning. This black-box phenomenon causes users to question the system's decisions: Why did you do this? Why is this the result? When do you succeed or fail? When can I trust you? Such reflexive skepticism directly affects users' trust and decision-making efficiency, and thus the adoption of AI solutions. Because of the black-box effect, AI solutions are not explainable or comprehensible to users. The phenomenon can occur across many types of AI applications, including financial and legal decisions, medical diagnoses, industrial process monitoring, security screening, employment recruitment, university admissions, smart homes, and autonomous vehicles.
In addition, some AI applications have been very expensive yet failed because they lacked practical use value. There is now a consensus that usefulness is a bottleneck in developing AI solutions: They must have a clear purpose. By providing useful AI that matches user needs, solutions can gain acceptance and generate economic benefits. Making AI intuitively usable also remains a challenge for HCI design. For example, a recent user experience (UX) test of three top brands of voice-interaction intelligent assistants in the U.S. market showed that the smart assistants failed in all the main types of complex tasks and succeeded only in some simple ones [2]. In recent years, autonomous vehicles have also been involved in multiple fatal accidents, partially due to HCI design issues. This shows the importance of HCI design for usable AI.
How can the HCI community respond to this and help deliver a comprehensive HAI solution encompassing not just ethics and technology, but also explainable, comprehensible, useful, and usable AI?
This article proposes an extended HAI framework (Figure 1). The framework includes three main components: 1) ethically aligned design, which creates AI solutions that avoid discrimination, maintain fairness and justice, and do not replace humans; 2) technology that fully reflects human intelligence, which further enhances AI to reflect the depth characterized by human intelligence (i.e., making it more humanlike); and 3) human factors design, which ensures that AI solutions are explainable, comprehensible, useful, and usable. Human factors design is not fully considered in today's HAI research agenda [1]. The purpose of this framework is to promote a comprehensive approach, ultimately providing people with safe, efficient, healthy, and satisfying HAI solutions.
Figure 1. An extended HAI framework.
The HAI framework shows synergy across the three domains. For example, ethical AI design emphasizes enhancing human capabilities rather than replacing them. This requires HCI design to ensure that human operators can quickly and effectively take over control of an intelligent system in an emergency, so that fatal accidents such as the autonomous-vehicle crashes mentioned above can be avoided.
Third-Wave AI and Opportunities for HCI
Looking back at the history of AI development (Table 1), we may conclude that the first two AI waves failed not only because they lacked mature technologies but also because they left human needs unsatisfied. In the third wave, AI is beginning to meet human needs and provide a positive UX across a variety of application scenarios, and to deliver mature business models built on useful AI. People have also begun to consider the ethics of AI, as well as its interpretability and comprehensibility. These are all human aspects. Thus, the third AI wave can be characterized as technological enhancement and application + a human-centered approach.
Table 1. A comparison of the three waves of AI.
History seems to be repeating itself. When PCs were emerging in the 1980s, computer users were mainly programmers, who considered only technical factors and not wide usability when designing products: Experts designed for other experts in the same field. Similarly, much current AI research focuses only on technical aspects, and therefore AI solutions are facing similar problems. The proposed HAI framework is intended to promote a refocus to a user-centered approach in a broader context, much like the rise of the user-centered design (UCD) practice 30 years ago. Thus, a new version of UCD practice, HAI, has again fallen on the shoulders of HCI professionals, promising many great opportunities for the HCI community.
Explainable and Comprehensible AI
Explainable AI (XAI) enables users to understand the algorithms and parameters used, and is intended to address the AI black-box problem. If a medical-diagnosis intelligent system predicts the likelihood of cancer based on a patient's personal data, XAI should provide the reasons for the prediction to the patient through a UI. Previous research on XAI has mainly proceeded in two ways: visualization of ML processes and explainable ML algorithms. However, these approaches may be biased in explaining how ML algorithms work, and they rely mainly on abstract visualization methods or statistical algorithms, which may further increase complexity. The most representative effort at present is DARPA's five-year XAI research program [3]. The program has organized 13 universities and research institutes to approach XAI primarily by developing new or improved explainable ML algorithms. It also investigates explanation UIs built with advanced HCI techniques (e.g., UI visualization, conversational UI) and evaluates psychological theories of explanation to assist XAI research. The program is only at its midpoint, and few mature results have been published so far, particularly results related to HCI.
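To make the explainable-ML direction concrete, here is a minimal sketch of one common post-hoc explanation technique: fitting an interpretable surrogate (a shallow decision tree) to a black-box model's predictions so that the surrogate's rules can be shown to a user. This is a generic illustration, not the DARPA program's method; the dataset, models, and parameters are assumptions chosen for the example.

```python
# A minimal post-hoc explanation sketch: approximate a black-box model
# with an interpretable surrogate (a shallow decision tree), then show
# the surrogate's rules as a human-readable explanation.
# Assumptions: scikit-learn is available; data and models are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# The "black box": an ensemble whose internal logic is opaque to users.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns to mimic the black box's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's if-then rules double as a global explanation of the black box.
print(export_text(surrogate, feature_names=feature_names))
```

Whether such a rule listing is actually comprehensible to a given target user is exactly the HCI question raised next.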
From an HCI perspective, there is no guarantee that the target users of an XAI system will be able to understand it. For example, an XAI explanation created for data scientists may be incomprehensible to most non-expert users. According to UCD, a design needs to provide a comprehensible AI that is based on target users' needs and capabilities (e.g., knowledge level). The ultimate goal of XAI should be to ensure that target users can understand the outputs, thus helping them improve their decision-making efficiency.
Since the research has so far been dominated by AI/ML professionals, it is time for HCI professionals to proactively seek to make contributions to XAI and comprehensible AI. First, HCI professionals can provide effective HCI design for the UIs of HAI solutions by adopting effective visualization models, adaptive UIs, and natural-dialogue UI technologies. Also, while there are many psychological theories of explanation, they have not been fully considered in current research. HCI professionals can take advantage of their interdisciplinary approaches to work with AI experts to build UI or computational models that contribute to comprehensible AI and accelerate the transition from theory to practice. Finally, HCI professionals can drive rigorous user-involved behavioral experiments to validate proposed research protocols, something overlooked in most previous research driven by AI professionals.
Useful and Usable AI

Useful AI can be defined as an AI solution that provides the functions required to satisfy target users' needs in the valid usage scenarios of their work and life. From a historical point of view, one of the main reasons third-wave AI can penetrate people's work and life is that it now solves practical problems with the right usage scenarios and UX, something not achieved in past waves. As HCI professionals working on AI solutions, we must ask ourselves how much we can contribute to the foundation for useful AI.
HCI professionals are good at identifying usage scenarios with HCI methods such as ethnographic studies and contextual inquiry, and at helping mine user needs, behavioral patterns, and usage scenarios. For example, they can use AI and big data to model real-time user behaviors and build digital user personas that identify potential user needs and real-world usage scenarios, as sketched below.
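As one hedged illustration of such behavior mining, the following sketch clusters hypothetical usage-log features into candidate "digital personas." The feature set, the values, and the cluster count are all illustrative assumptions, not findings from any study cited here.

```python
# A minimal sketch of mining behavioral patterns from usage logs with
# clustering, as one way to derive data-driven "digital personas."
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-user behavior features:
# [sessions per week, mean session minutes, voice-command share]
logs = np.array([
    [21, 4.0, 0.80],   # frequent, short, voice-heavy sessions
    [3, 35.0, 0.05],   # rare, long, mostly manual sessions
    [18, 5.5, 0.75],
    [2, 40.0, 0.10],
    [20, 3.5, 0.85],
])

# Standardize so no single feature dominates the distance metric.
features = StandardScaler().fit_transform(logs)

# Each cluster is a candidate persona; its centroid summarizes the pattern.
personas = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(personas.labels_)  # e.g., [0 1 0 1 0]: two distinct usage patterns
```

The clusters are only hypotheses; they still need to be validated against qualitative field data before being treated as personas.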
AI also needs to be usable. Usable AI can be defined as an AI solution that is easy to learn and use via optimal UX created by effective HCI design. Although the HCI community has mature processes and tools (e.g., UCD) to specify user requirements, prototype UIs, and conduct UX tests to validate designs, we do have challenges in delivering usable AI.
The first challenge is to move beyond mere interaction. With the addition of learning capabilities in AI-based machine intelligence, human-machine relationships have shifted from human-computer interaction to human-machine integration and human-machine teaming [4]. Humans and machines are now teammates and collaborative partners. The dynamic cooperation between two cognitive agents, with capability growing on the machine side as it learns over time, adds complexity to the HCI design of AI solutions. A series of questions requires systematic HCI research: for example, dynamic functional allocation and task assignment between human and machine, dynamic goal setting, and the allocation of decision-making power between the two over time. A simple allocation policy is sketched below.
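As a toy illustration of dynamic functional allocation, this sketch hands a task to the machine or the human based on machine confidence, task criticality, and human workload. The thresholds and the task model are illustrative assumptions, not a validated policy from the literature.

```python
# A minimal sketch of confidence-based dynamic function allocation:
# the machine acts autonomously only when its confidence is high;
# otherwise the task is handed to the human teammate.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    machine_confidence: float  # model's self-reported confidence, 0..1
    safety_critical: bool

def allocate(task: Task, human_workload: float,
             confidence_floor: float = 0.9) -> str:
    """Return 'machine' or 'human' for who should perform the task."""
    # Safety-critical tasks with shaky confidence always go to the human.
    if task.safety_critical and task.machine_confidence < confidence_floor:
        return "human"
    # Offload to the machine when the human is overloaded and the
    # machine is confident enough to proceed.
    if human_workload > 0.8 and task.machine_confidence >= 0.7:
        return "machine"
    return "machine" if task.machine_confidence >= confidence_floor else "human"

print(allocate(Task("lane keeping", 0.97, True), human_workload=0.4))  # machine
print(allocate(Task("merge in fog", 0.55, True), human_workload=0.4))  # human
```

A real system would also have to handle the handover itself, such as alerting the operator and budgeting time for them to regain situation awareness.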
Another challenge is that current HCI methods were originally created for non-intelligent solutions. In our study, we suggested a series of enhanced HCI methods specifically for AI solutions [5]. For example, during UI prototyping, instead of rushing into visual and interaction design as usual, HCI designers should consider an AI-first approach: carrying out dynamic functional allocation between human and machine, prioritizing machine-intelligence functions (e.g., smart search, real-time user behavior, contextual information, voice input) to reduce repetitive human activities, and designing more intuitive UIs to optimize the UX. This is the approach we have explored in our research project, with promising results [6].
HCI professionals can also play a critical role in validating AI solutions. Traditional software verification assumes a system that has no learning ability to change its behavior, so that behavior is predictable. The behaviors of intelligent systems, however, develop over time. Verifying and evaluating AI solutions therefore requires collaboration between AI software engineers and HCI professionals, and a combination of methods (e.g., software validation, user-involved UX validation) may achieve better results. Early UX evaluation of low-fidelity intelligent design prototypes requires alternatives such as Wizard of Oz (WOZ) prototypes to simulate and validate the learning and intelligent behaviors of AI; a minimal harness is sketched below.
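Here is a minimal sketch of what such a WOZ harness might look like for a conversational prototype: the "system" replies are typed by a hidden human wizard and logged for later UX analysis. The loop structure and log format are assumptions for illustration only.

```python
# A minimal Wizard of Oz (WOZ) harness sketch: the "intelligent"
# responses of a low-fidelity prototype are secretly supplied by a
# human wizard, letting UX evaluation start before any model exists.
import datetime

def wizard_reply(user_utterance: str) -> str:
    """A hidden human operator types the system's response."""
    print(f"[wizard console] user said: {user_utterance!r}")
    return input("[wizard console] type the system's reply: ")

def woz_session(log_path: str = "woz_log.txt") -> None:
    with open(log_path, "a") as log:
        while True:
            utterance = input("user> ")
            if utterance.lower() in {"quit", "exit"}:
                break
            reply = wizard_reply(utterance)
            print(f"system> {reply}")
            # Timestamped transcripts feed the later UX analysis.
            log.write(f"{datetime.datetime.now().isoformat()}\t"
                      f"{utterance}\t{reply}\n")

if __name__ == "__main__":
    woz_session()
```

In an actual study, the participant and wizard would of course sit at separate terminals so that the simulation remains hidden.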
Finally, current AI-related standards focus primarily on ethical design issues, such as the guidelines published by IEEE. There are currently no specific HCI design standards for guiding AI solutions; the HCI community needs to develop these.
Additional HCI Considerations of the HAI Framework
In addition to the human factors design component of the HAI framework, the HCI community can contribute to the other components. For example, in the case of ethically aligned design, AI engineers typically lack formal training in applying ethics to design and tend to view ethical decision making as another form of technical problem solving. Fortunately, the AI community now recognizes that ethical AI design requires wisdom and cooperation from multiple disciplines extending beyond computer science [1]. The HCI community can leverage its interdisciplinary skills to assess ethics-related issues and help propose solutions by adopting social and behavioral science methods from a broader sociotechnical-systems perspective.
The HCI community can also support the technology-enhancement component of the HAI framework. HCI adopts the UCD process: defining human needs, designing and prototyping solutions for those needs, and then testing the design with human users. Because HCI studies the interaction between human and machine (AI), it can contribute to algorithm modeling, training, and testing. Following a human-centered ML approach, for example, AI and HCI professionals can work together to define UX criteria, iteratively test and optimize ML training data and algorithms, and avoid extreme algorithmic bias; one simple bias check is sketched below.
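As one hedged example of such collaboration, this sketch computes a simple demographic-parity gap: the spread in a model's positive-prediction rates across groups, which a cross-disciplinary team might monitor against a UX-agreed threshold. The data, group labels, and threshold idea are illustrative assumptions, not a complete fairness methodology.

```python
# A minimal sketch of one bias check HCI and AI professionals might run
# together: comparing a model's positive-prediction rates across groups.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favorable outcome) and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.20 here; flag if above an agreed threshold
```

Demographic parity is only one of several competing fairness criteria; choosing and interpreting the right one for a given application is itself a sociotechnical design decision.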
Although history seems to be repeating itself in the initial stages of AI's third wave, future success is much more likely if HCI professionals proactively embrace the challenges and start to participate directly in the research and development of AI today, just as the entire HCI community has been doing over the past 30 years. The maturation of key technologies and the accumulated experience of the field make this all the more likely.
Key Takeaways and Actions Required
As we see, there are lessons to be learned and actions to be taken. The third wave of AI is characterized by technological enhancement and application + a human-centered approach, which offers a great opportunity for the HCI community to deliver comprehensive HAI solutions that address emergent challenges. This requires systematic consideration of ethically aligned design, technology that fully reflects human intelligence, and human factors design. Specifically, HCI professionals should take a leading role in the human factors design component of the HAI framework by providing explainable, comprehensible AI and useful, usable AI. They can also contribute to ethical AI design and AI technological enhancement. The deep involvement of the HCI community in these areas has yet to be fully realized, but it is necessary and urgent.
In order to provide full disciplinary support for HAI solutions, the work of the HCI community should include research on human-machine integration/teaming, UI modeling and HCI design, transference of psychological theories, enhancement of existing methods, and development of HCI design standards. HCI professionals should proactively participate in AI research and development to increase their influence, enhance their AI knowledge, and integrate methods between the two fields to promote effective cooperation.
1. Li, F.F. and Etchemendy, J. A common goal for the brightest minds from Stanford and beyond: Putting humanity at the center of AI. 2018; https://hai.stanford.edu/news/introducing-stanfords-human-centered-ai-initiative
2. Budiu, R. and Laubheimer, P. Intelligent assistants have poor usability: A user study of Alexa, Google Assistant, and Siri. 2018; https://www.nngroup.com
3. Gunning, D. Explainable artificial intelligence (XAI) at DARPA. 2017; https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
4. Farooq, U. and Grudin, J. Human-computer integration. ACM Interactions 23, 6 (2016), 27–32.
5. Xu, W. User-centered design (III): Methods for user experience and innovative design in the intelligent era. Chinese Journal of Applied Psychology 25, 1 (2019), 3–17.
6. Xu, W., Furie, D., Mahabhaleshwar, M., Suresh, B., and Chouhan, H. Applications of an interaction, process, integration, and intelligence (IPII) design approach for ergonomics solutions. Ergonomics 62, 7 (2019), 954–980; https://doi.org/10.1080/00140139.2019.1588996
Wei Xu is a researcher at Intel. He is chair of the Intel IT Cross-Domain HCI/UX Technical Working Group, leading HCI/UX design strategy, standards, and governance. He has a Ph.D. in psychology (HCI focused) and an M.S. in computer science from Miami University. His research interests include HCI, cognitive engineering, and aviation human factors. [email protected].
©2019 ACM 1072-5520/19/07 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.