XXVIII.1 January - February 2021

From automation to autonomy and autonomous vehicles: Challenges and opportunities for HCI


Authors:
Wei Xu


Autonomous systems based on artificial intelligence (AI) are increasingly entering people's daily work and life. A major category is autonomous vehicles (AVs). The U.S. National Highway Traffic Safety Administration hopes that future AVs can drastically reduce fatalities by eliminating the driver errors that account for 94 percent of fatal road accidents. However, a study has shown that 35 percent of 8,571 American drivers surveyed did not believe that AVs could operate safely without driver control. In recent years, there have been several fatal AV-related accidents. Failures have also occurred in other autonomous systems, including autonomous weapons, Google Flu Trends, and high-speed trading systems.


Concerned about the safety of autonomous systems, leading researchers from the human factors community have warned that attention must be focused on human factors design for this new class of technology [1,2]. They argue that society is in the midst of a major change with the introduction of AI-based autonomous systems, which are of a different class from conventional automation. There is confusion between automation and AI-based autonomy, which may lead to inappropriate expectations and the misapplication of technology.


Ben Shneiderman describes the situation as a second Copernican revolution. The HCI community must recognize the potential risks of autonomous systems that put machines at the center (as most AI professionals would advocate) instead of humans (as HCI professionals should advocate). However, according to Peter Hancock, "The horse [autonomous systems] has left the stable" [1]. As HCI professionals, how much do we know about autonomous systems and their potential impacts on safety? How can we ensure that humans are the ultimate decision makers? What do we need to do to improve their safety?

A Human Factors Perspective: Automation Versus Autonomy

Automation is the ability of a system to perform well-defined tasks and produce deterministic results, relying on a fixed set of rules and algorithms without AI technologies. It does not usually replace the human; instead, it changes the nature of the human's work from direct operation to a more supervisory role. For example, on a commercial aircraft, the autopilot can carry out certain flight tasks after pilots set it up, moving them into a supervisory role in managing the automation. However, in abnormal scenarios for which the autopilot was not designed, pilots must be able to immediately disconnect the autopilot and take manual control.

In the intelligent era, autonomy refers specifically to the ability of an AI-based autonomous system to perform specific tasks independently. Such systems can exhibit behaviors and evolve to gain certain levels of human-like cognitive, self-executing, and adaptive abilities [3]. They may operate successfully in situations that were not fully anticipated, and their results may not be deterministic.

With AI technologies, we can develop autonomous systems, but today they typically perform only limited tasks in specific situations. In reality, there will be situations that the designers never considered and that the system cannot handle. For example, no current AV can handle all driving scenarios without human intervention by understanding cause and effect and adapting to changing situations.
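To make the distinction concrete, here is a minimal, purely illustrative Python sketch; the scenario, names, and toy learning rule are hypothetical rather than drawn from any system discussed in this article:

    import random

    def automated_cruise_control(speed_kmh, target_kmh):
        # Automation: a fixed rule set. The same input always yields the
        # same, deterministic output, and the rules never change.
        if speed_kmh < target_kmh - 2:
            return "accelerate"
        if speed_kmh > target_kmh + 2:
            return "brake"
        return "hold"

    class LearnedPolicy:
        # Autonomy: behavior is learned and keeps adapting after
        # deployment, so outputs may be nondeterministic and may fail
        # in situations outside the designers' anticipation.
        def __init__(self):
            self.q = {}  # crude state -> {action: value} table

        def act(self, state):
            # Occasional exploration: the same state may yield
            # different actions at different times.
            if state not in self.q or random.random() < 0.1:
                return random.choice(["accelerate", "brake", "hold"])
            return max(self.q[state], key=self.q[state].get)

        def update(self, state, action, reward):
            # The policy evolves with use in its environment.
            self.q.setdefault(state, {}).setdefault(action, 0.0)
            self.q[state][action] += 0.1 * (reward - self.q[state][action])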

Table 1 compares the characteristics of automation and autonomy from a human factors perspective. The fundamental difference between the two is that autonomous systems can be built with human-like intelligent abilities. At the same time, both require human intervention in operations for safety. These differences and similarities are of great significance for HCI.

Table 1. Human factors comparative analysis between automation and autonomy.

Lessons Learned From Automation

Automation has been widely used to ensure reliable performance. Over the past five decades, a great deal of research has been conducted, primarily by the human factors community, on the impact of automation on human performance, workload, situation awareness (SA), and other areas. This research has achieved a certain level of consensus.

Mica Endsley summarizes the key findings in [2]. When using automation, human operators play a critical monitoring role and must intervene in situations that the system cannot handle. Unfortunately, many complex automated systems are brittle when handling unanticipated situations. This creates challenges such as "automation surprise," in which the human operator does not understand what the system is doing or why. Research also shows that automation can push operators out of the loop: They have low SA, are slow to detect a problem, and are then also slow to intervene appropriately in urgent situations. In civil aviation, this has resulted in accidents.

Continuing Challenges in Autonomy

Lisanne Bainbridge identified a classic challenge of automation, the "ironies of automation" [4]: The more automation is added to a system and the more reliable it becomes, the lower the SA of human operators and the less likely they are to be able to take over manual control when needed. This irony arises from the combined challenges operators face: being out of the loop, losing SA, lacking transparency in the system interface, overrelying on automation, and so on.

Endsley believes these problems can create fundamental obstacles for AI-based autonomy too [2]. In autonomous systems, as the level of automation of individual functions increases, the autonomy of the overall system increases and the operator's SA of those functions decreases. In an emergency, the operator is thus more likely to be out of the loop and may fail to take over the system when needed. Recent fatal AV accident reports have confirmed the same safety issues we see in automation [2].

Furthermore, once deployed, autonomous systems will evolve with use in different environments. This means the technology has the potential to develop in unexpected ways, making it likely that autonomy will surprise human operators to an even greater extent than automation does. This could magnify the previously identified problems with automation, a phenomenon known as the lumberjack effect [2]. Thus, the HCI community must recognize the potential dangers of autonomous systems and take appropriate action.

A Paradigm Shift: The New Human-Machine Collaborative Relationship

In HCI, the computer has been considered a tool supporting human operations through interaction. With AI technologies, machines exhibit behaviors and are evolving from tools into teammates with certain levels of human-like abilities (Table 1), forming a collaborative relationship between human and machine. This transformation represents a major shift in the human-machine relationship in the intelligent era.

Table 2 summarizes the major changes in human-machine relationships across technological eras. We are now entering an autonomous world driven by AI technologies, in which ubiquitous autonomous systems open up potential human-machine collaborative relationships. The level of collaboration depends on the level of human-like abilities of an autonomous system. Such new relationships will inevitably bring about major changes in HCI practice, implying that we may minimize the potential safety risks of autonomous systems by leveraging the human-machine collaborative relationship.

Table 2. Human-machine relationships across technological eras.

The Human Factors-Related Problem Space of Autonomy

As discussed previously, possession of human-like abilities is the essential difference between automation and autonomy; we can therefore treat the two as orthogonal dimensions. Human intervention is the safety guarantee for automated and autonomous systems in emergency scenarios. There is an inverse relationship between human intervention and the level of autonomy or automation: As the level of autonomy and automation increases, the need for human intervention decreases, but humans must remain the ultimate decision makers. Autonomy, automation, and human intervention can therefore be represented as the three dimensions of a conceptual space (Figure 1).

Figure 1. The conceptual space of human factors-related problems of autonomy [5].

In Figure 1, point A refers to a system requiring full human operation, without automation or autonomy. Point B represents a system with full automation capability but no autonomy for a specific operating scenario, still requiring a certain level of human intervention. Point C refers to an autonomous system with a high level of autonomy; even here, a reasonable level of human intervention must be reserved, and humans must always be the ultimate decision makers, implying that all autonomous systems must be supervised or controlled by human operators at some level for safety.
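One way to make this space concrete is sketched below. The coordinates and the monotonicity constraint are illustrative assumptions layered on Figure 1, not definitions taken from the figure itself:

    % Illustrative formalization (an assumption, not from Figure 1 itself):
    % a = level of automation, u = level of autonomy, h = level of human
    % intervention, each normalized to [0, 1]; O is the origin of the axes.
    \[
      O = (0,0,0), \qquad A = (0,\,0,\,1), \qquad
      B = (1,\,0,\,h_B), \qquad C = (a_C,\,1,\,h_C)
    \]
    \[
      h \ \text{decreasing in } \max(a, u), \qquad h \ge h_{\min} > 0
    \]

Here the floor $h_{\min} > 0$ encodes the requirement that humans remain the ultimate decision makers at every level of automation or autonomy.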

Thus, the 3D space within the four planes (ABC, ABO, ACO, and BCO) conceptually illustrates the human factors-related problem space of autonomy:

  • Autonomous systems are similar to automation in that human intervention is required at all times; humans must be the ultimate decision makers.
  • Humans interact with autonomy and automation within the same space, implying that autonomous systems may encounter human factors problems similar to those seen in automation; past findings on automation may help address issues in autonomy.
  • Collaboration between humans and autonomous systems occurs through human interaction with automation and autonomy across various operating scenarios.

The importance of human intervention defined in Figure 1 echoes the human-centered AI (HCAI) approach advocated by me [6] and by Ben Shneiderman [7]. Human-controlled autonomy and the avoidance of excessive autonomy are our ultimate goals, and they must guide the search for solutions for autonomous systems within this problem space, where challenges and opportunities for HCI coexist.

Opportunities and a Call for the HCI Community to Act on Autonomy

First, the HCI community should proactively participate in the development of autonomous systems with a sound strategy: applying the HCAI approach, just as we promoted human-centered design 30 years ago. We need to advocate for human-controlled/supervised autonomy to avoid excessive autonomy by integrating human intervention into the system and ensuring humans are the ultimate decision makers. To this end, we may deploy innovative design approaches, such as an effective human-control mechanism for autonomy with well-designed UIs and built-in "flight data recorder" systems that track historical failures for design improvement.
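As a rough illustration of the "flight data recorder" idea, the sketch below logs each system decision alongside any human intervention; the class, fields, and usage are hypothetical, not an existing API:

    import json
    import time

    class DecisionRecorder:
        # Sketch of a built-in "flight data recorder" for an autonomous
        # system: an append-only log of inputs, decisions, confidence,
        # and human interventions, kept for post-incident analysis and
        # design improvement. All field names are illustrative.
        def __init__(self, path):
            self.path = path

        def record(self, observation, decision, confidence, human_override=None):
            entry = {
                "t": time.time(),                  # when the decision was made
                "observation": observation,        # sensor summary at that moment
                "decision": decision,              # what the system chose to do
                "confidence": confidence,          # the system's own uncertainty
                "human_override": human_override,  # whether/how a human intervened
            }
            with open(self.path, "a") as f:        # one JSON line per event
                f.write(json.dumps(entry) + "\n")

    # Usage:
    # recorder = DecisionRecorder("decisions.log")
    # recorder.record({"speed": 62, "obstacle": True}, "brake", 0.71,
    #                 human_override="driver took manual control")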

Second, machine behavior is a double-edged sword. On the one hand, humans and machines can become teammates through collaboration-based design, which cannot be achieved with conventional human-machine interaction. Human-machine collaboration in AI-based autonomous systems is not new to HCI, as seen in human-robot interaction and human-machine teaming. On the other hand, if a design fails to ensure that a human is the ultimate decision maker, autonomous systems may evolve with use in different environments, potentially developing in unexpected ways and harming humans. Autonomy is therefore both a new opportunity and a challenge for HCI, requiring HCI professionals to go beyond current design thinking and develop innovative design approaches for autonomy.


We need to advocate for human-controlled/supervised autonomy to avoid excessive autonomy by integrating human intervention into the system and ensuring humans are the ultimate decision makers.


Third, we need to explore alternative paradigms to optimize the design of autonomous systems. By leveraging the human-machine collaborative relationship, we can explore designs for autonomy that emulate interactions between people, considering mutual trust, shared SA, and shared control authority in a reciprocal relationship between humans and autonomous systems. For example, robots should collaborate with humans rather than replace them. In this way, we can maximize design opportunities to minimize the potential risks of autonomous systems.

Fourth, we need to enhance existing HCI methods for autonomy. The behaviors of autonomous systems may be nondeterministic, and traditional validation methods face challenges because the system is no longer static. ML-based solutions may generate extreme content or unexpected actions through learning. We can help reduce ML algorithm bias and errors by testing extreme edge cases. We can also help AI professionals fine-tune algorithms against user expectations to prevent bad responses. And we can improve systems by collecting training data, defining user-expected results, and retraining based on user feedback after release.
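A minimal sketch of what edge-case testing might look like in practice appears below; the callables and sample scenarios are assumptions for illustration, not a specific testing framework:

    def test_edge_cases(model_predict, edge_cases, is_acceptable):
        # Run the model on hand-curated rare or extreme scenarios and
        # flag any output a reviewer deems unacceptable. The resulting
        # failures can then feed retraining and fine-tuning.
        failures = []
        for case in edge_cases:
            output = model_predict(case["input"])
            if not is_acceptable(case, output):
                failures.append({"case": case["name"], "output": output})
        return failures

    # Example edge cases an HCI team might curate (inputs elided):
    edge_cases = [
        {"name": "pedestrian at dusk, partially occluded", "input": None},
        {"name": "stopped firetruck in lane, low sun glare", "input": None},
    ]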

Fifth, we need to close the knowledge and skill gaps between the HCI and AI communities. People working in HCI need to acquire knowledge of AI, while those working in AI need to better understand the HCAI approach. An HCI + AI education and training curriculum would help develop the knowledge and skills that enable collaboration between the two sides, each leveraging the capabilities of the other. We can then jointly deliver autonomous systems that will not endanger humans.

Finally, HCI is interdisciplinary. HCI professionals are in a unique position to address emerging issues from the perspective of broad sociotechnical systems, including the impacts of autonomous systems on human expectations, operational roles, ethics, privacy, and so on. Research already shows that the introduction of autonomy may provoke a more emotional response from humans than mere automation does, and that some social and psychological factors will have a bigger impact on humans than others.

Autonomous Vehicles: The Opportunity for HCI to Take Immediate Action on Autonomy

Investigations of recent fatal AV accidents have revealed issues similar to those seen in automation, such as overreliance on the system and low SA, which led to failures to quickly take over driving in an emergency. Although HCI professionals have participated in the development of AVs, these accidents sound an alarm, compelling us to rethink our current design approach.

First, we need a strategic approach, beginning by ensuring that we fully understand the implications of the ironies of automation and the lumberjack effect for autonomy [2]. We need to treat an AV as a "wheeled autonomous system" rather than as conventional automation, which is especially important for AVs at Level 3 or above per the SAE J3016 standard. A design strategy of human-machine collaboration will help optimize the design of the system architecture and its interactions through shared SA, trust, and transitions of control between human and machine drivers.

For interaction design, we need to take innovative approaches beyond traditional HCI design by leveraging 1) lessons from automation, 2) human-machine collaborative design, and 3) a driving data recorder that tracks all failures to support continuous improvement. Applying the HCAI approach, we need to develop an effective human-control mechanism with well-designed UIs that enable drivers to monitor their AVs and quickly take over control in emergencies. In this way, we can systematically address problems such as humans falling out of the loop and loss of control.
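The sketch below illustrates one possible shape for such a human-control mechanism: a control loop in which the driver can take over immediately and takes over by default when the system's confidence drops. The objects, methods, and threshold are hypothetical:

    def control_loop(autonomy, driver, vehicle, confidence_floor=0.5):
        # The autonomous policy drives by default, but a driver takeover
        # request, or low system confidence, immediately transfers
        # authority to the human, who remains the ultimate decision maker.
        while vehicle.is_running():
            state = vehicle.sense()
            if driver.requests_takeover() or autonomy.confidence(state) < confidence_floor:
                driver.alert("Manual control required")  # well-designed UI cue
                vehicle.apply(driver.command())          # human has authority
            else:
                vehicle.apply(autonomy.decide(state))    # machine drives, human monitors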

We also need to promote the HCAI approach beyond the HCI community. The SAE J3016 standard is system-centered and does not rigorously classify autonomy levels in terms of design requirements, measurement, and certification [1]. Beyond requiring no driver intervention for AVs at Level 4, SAE seems to ignore the differences in human factors characteristics between automation and autonomy, neglecting the potential design benefits of autonomy and human-machine collaboration. We need to ensure that humans are always the ultimate decision makers to avoid potential threats to safety.
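For reference, the J3016 levels can be summarized as follows; the comments paraphrase the standard's taxonomy, and the "human fallback" gloss reflects the human factors concern raised here rather than the standard's own wording:

    from enum import IntEnum

    class SAELevel(IntEnum):
        # Paraphrase of the SAE J3016 driving-automation levels.
        NO_AUTOMATION     = 0  # human performs all driving
        DRIVER_ASSISTANCE = 1  # system assists with steering or speed
        PARTIAL           = 2  # system steers and controls speed; human monitors
        CONDITIONAL       = 3  # system drives; human must take over on request
        HIGH              = 4  # no human fallback needed within the design domain
        FULL              = 5  # no human fallback needed anywhere

    def human_fallback_required(level):
        # Levels 0-3 rely on the human as the fallback; Levels 4-5 do
        # not, which is precisely the human factors concern raised here.
        return level <= SAELevel.CONDITIONAL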

Finally, we need to address AVs from the broader perspective of sociotechnical systems. The development of AVs has a wide-ranging impact on the whole of society. Some manufacturers seem to overestimate the technology, leading to inappropriate expectations and operational behaviors among consumers. Drivers have reportedly overrelied on AVs and even fallen asleep while the autopilot was in control of their vehicles; meanwhile, most drivers' confidence in AVs remains low. We need to fully evaluate the influence of AVs on drivers' operational roles, expectations, and behaviors, including ethics, public trust and acceptance, training, and certification.

Understanding automation-related problems has been a long-term process, and the classic ironies of automation remain unsolved after more than 30 years. Today, as we enter the autonomous world, we encounter new ironies, as discussed here. This presents both challenges and opportunities for the HCI community. As Paul Salmon writes, "Again, we find ourselves chasing a horse that has bolted" [8]. There is a long way to go to completely solve these problems, but we must step in and take the right actions to ensure that the human is the ultimate decision maker for autonomous systems.

Acknowledgments

The author appreciates the comments on an earlier draft from Professor Ben Shneiderman. Any opinions herein are those of the author and do not reflect the views of any individual or corporation.

References

1. Hancock, P.A. Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics 62, 4 (2019), 479–495. DOI: 10.1080/00140139.2018.1498136

2. Endsley, M.R. From here to autonomy: lessons learned from human-automation research. Human Factors 59, 1 (2017), 5–27. DOI: 10.1177/0018720816681350

3. Rahwan, I. et al. Machine behaviour. Nature 568, 7753 (Apr. 2019), 477–486.

4. Bainbridge, L. Ironies of automation. Automatica 19, 6 (Nov. 1983), 775–779.

5. Xu, W. and Ge, L. Engineering psychology in the era of artificial intelligence. Advances in Psychological Science 28, 9 (2020), 1409–1425.

6. Xu, W. Toward human-centered AI: A perspective from human-computer interaction. ACM Interactions 26, 4 (Jul.–Aug. 2019), 42–46. DOI: 10.1145/3328485

7. Shneiderman, B. Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction 36, 6 (2020), 495–504. DOI: 10.1080/10447318.2020.1741118

8. Salmon, P.M. The horse has bolted! Why human factors and ergonomics has to catch up with autonomous vehicles (and other advanced forms of automation). Ergonomics 62, 4 (2019), 502–504.

Author

Wei Xu is a senior researcher at Intel. He is chair of the Intel IT Cross-Domain HCI/UX Technical Working Group, driving HCI/UX design strategy, standards, architecture, and governance. He has a Ph.D. in psychology (HCI focused) and an M.S. in computer science from Miami University. His research interests include HCI, human-AI interaction, and aviation human factors. weixu6@yahoo.com


©2021 ACM  1072-5520/21/01  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

