Forums

XXV.5 September-October 2018
Page: 72

How to avoid an AI interaction singularity


Authors:
Paul Lukowicz, Philipp Slusallek

The ways in which we address societal as well as personal challenges are inherently linked to the technologies to which we have access. Ongoing digitization, coupled with advances in the field of artificial intelligence (AI), is leading us to yet another critical point in history, one in which society, from the workplace to the home, from nations to individuals, will undergo a radical transformation.

AI researchers have made groundbreaking advances in hard, long-standing problems related to machine learning, image recognition, speech recognition, and planning. AI is everywhere, from smartphones and watches to personal digital assistants (e.g., Amazon Echo, Google Home) to autonomous vehicles, smart cities, Industry 4.0, and beyond. Packaging AI functionality in cloud services and libraries has lowered the hurdle for using AI technologies, pushing forward innovative applications in different domains. There is general agreement that what we see today is just the beginning of the AI revolution. There is also a strong consensus that the resulting changes will be bigger than those of any other technological revolution in human history. There is much less agreement, however, as to where this revolution will take us. The outlook from scientists and industry leaders ranges from utopian, perfect-world scenarios to dark, dystopian forecasts.

The most widely discussed dystopia is the singularity [1], a vision where AI suddenly reaches some sort of sentience and decides to enslave or eliminate humanity. We argue that AI gaining sentience (which, from a computer science point of view, we currently cannot even properly define) is not what we should watch out for. Instead, we need to be aware of AI’s lack of finesse in its interactions with sentient beings (= humans), in particular its lack of appreciation for the complexity of the social contexts and processes in which those beings are embedded. In other words, the problem is not a sentience singularity, with an AI somehow deciding to enslave humans, but rather an interaction singularity, resulting from the lack of adequate paradigms for (and understanding of) the interaction between humans and social systems on one side, and ever more complex and omnipresent, interconnected AI systems on the other. Such an interaction singularity could indeed lead to a significant loss of autonomy and potentially catastrophic unintended social consequences.

To grasp the idea, consider a simple example. Recently, an AI-based assistant for personalized pedestrian navigation was proposed to help users avoid “bad neighborhoods.” The idea is to build profiles of neighborhoods using, for example, data on who goes to or avoids certain neighborhoods (collected from smartphones, navigation systems, etc.) and information mined from the Web (e.g., which neighborhoods are often mentioned in the context of crime). For a user requesting navigation, the assistant can also draw on user-specific information, such as which types of neighborhoods that user has visited or avoided in the past. Combining the profiles of different neighborhoods with the information collected about the user can produce a personalized route suggestion that the user would consider safe, interesting, and comfortable.
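To make the mechanism concrete, here is a minimal sketch of the kind of scoring such an assistant might perform. All neighborhood data, user attributes, weights, and names below are invented for illustration; they are not taken from any deployed system.

```python
# Toy sketch of personalized route scoring; every number and name is hypothetical.
# A public "neighborhood profile" (mined from the Web, sensors, movement data)
# is combined with a per-user preference profile learned from past behavior.

NEIGHBORHOOD_PROFILE = {
    "riverside": {"crime_mentions": 0.7, "visit_rate": 0.2},
    "old_town":  {"crime_mentions": 0.1, "visit_rate": 0.9},
}

USER_PROFILE = {
    "avoids_high_crime": 0.8,   # 0..1, inferred from routes the user chose in the past
    "prefers_busy_areas": 0.5,
}

def route_score(neighborhoods, user):
    """Higher score = a route the assistant deems safer and more comfortable."""
    score = 0.0
    for name in neighborhoods:
        profile = NEIGHBORHOOD_PROFILE[name]
        score -= user["avoids_high_crime"] * profile["crime_mentions"]
        score += user["prefers_busy_areas"] * profile["visit_rate"]
    return score

candidates = {"route_a": ["old_town"], "route_b": ["riverside"]}
best = max(candidates, key=lambda r: route_score(candidates[r], USER_PROFILE))
print(best)  # route_a: the personalized suggestion systematically steers around "riverside"
```

Each ingredient looks innocuous in isolation; the trouble emerges only in the aggregate, as the reaction described next made clear.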

However, what seemed like a good idea at first led to a major public outcry, and the assistant was never deployed. “Avoid the Ghetto App. It will damage the economies of poor communities,” read the headlines. As people heard about the app, it became clear that a well-intentioned AI-based service like this, meant to help on an individual level, may, when deployed within a greater social context, have major harmful consequences—depriving certain communities of business and ruining their reputations, and possibly harming adjacent neighborhoods.

In abstract terms we consider the following situation:

  • Interconnected AI-based assistants build up complex models of an environment, user activity, and user preferences by autonomously harvesting and analyzing information from myriad sources, including sensors and the Internet, cooperation between devices, and interactions with humans.
  • Systems leverage such data and models to provide their users with personalized, situation-adapted, and contextualized information, advice, and support. In doing so, they go beyond merely informing the user to influencing behaviors, manipulating opinions, changing attitudes, and affecting social interactions.

As more and more people use these systems, the induced individual changes begin to affect society as a whole. Once the vast majority of people rely on such assistive systems, individual behavior changes add up, producing unforeseen collective phenomena that alter human interactions at a societal level.
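A toy agent-based simulation can make this aggregation effect visible. In the hypothetical sketch below (all parameters are invented), each agent independently follows a recommendation that only mildly biases its route away from one neighborhood; as adoption grows, that neighborhood’s share of foot traffic collapses.

```python
import random

# Toy agent-based sketch with invented numbers: each assistant user is nudged
# away from neighborhood "B". Individually the nudge is small; collectively,
# B's share of foot traffic (and with it business) drains away.

random.seed(0)

def simulate(num_agents, adoption_rate, days=100):
    visits_b = 0
    for _ in range(num_agents):
        uses_assistant = random.random() < adoption_rate
        # Baseline: 50% chance of walking through B; the assistant cuts it to 10%.
        p_visit_b = 0.1 if uses_assistant else 0.5
        visits_b += sum(random.random() < p_visit_b for _ in range(days))
    return visits_b / (num_agents * days)

for adoption in (0.0, 0.3, 0.6, 0.9):
    share = simulate(1000, adoption)
    print(f"assistant adoption {adoption:.0%}: share of walks through B = {share:.2f}")
```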

The problem with the above is threefold:

  • Despite all efforts in personalization, such systems essentially remain prescriptive. The basic assumption is that the AI knows what is right and the purpose of the system is to make sure that users do not deviate from the “right” path. As more and more areas of our lives are determined by such prescriptive systems, the loss of autonomy, creativity, and inventiveness is a real danger.
  • The systems focus on creating both benefits for individuals and business value for a specific stakeholder (e.g., Alexa increases convenience for users and adds to sales for Amazon). Designed in an individual, customer-centric way, they ignore potential social and ecological impacts resulting from the interaction among many such systems at various spatial and temporal scales.
  • Today AI systems are black boxes that base their decisions on the complex statistical analysis of huge data sets. This means that it is often impossible to predict what even a single system will do in a concrete situation. Yet to steer and control the impact of AI systems in a social context, we need to anticipate and influence how millions (or even billions) of such systems, interacting dynamically with each other and with their users, shape complex collective phenomena.

To address the above issues, future AI development must coordinate closely with HCI research, with the humanities, and with complexity science. This should include:

  • Technologies and tools for enhancing human cognitive capabilities and channeling human creativity, inventiveness, and intuition. Such technologies will let people improve their performance and successfully handle more complex tasks, while reducing the cognitive load and stress experienced during those tasks [2]. They will also empower humans to make important decisions in a more informed way, helping us understand and foresee long-term implications.
  • Technologies that allow seamless, productive interaction among humans, intelligent machines, and complex, dynamic, real-world environments. This implies AI systems that can interact intelligently with and within complex social settings, and that can adapt to changing, open-world environments.
  • Explainable, transparent, validated, and thus trustworthy AI systems, optimally supporting both individuals and society as a whole in dealing with the increasing complexity of a networked globalized world. The goal is to overcome unpredictable black-box systems, currently the norm in AI.
  • Adding values, ethics, privacy, and security as core design considerations in all AI systems and applications. AI systems will require design decisions that make their values accessible and understandable to humans.

The above goals are closely connected to some widely recognized AI grand challenges:

Explainability and accountability of machine-learning methods. Many recent AI success stories are based on the application of complex statistical analysis to massive amounts of training data. Unfortunately, the resulting models are, in general, impossible for humans to understand and interpret, creating a number of problems (one partial workaround is sketched after the list below):

  • For safety-critical applications in particular, this creates a major trust problem.
  • It is hard to explain why the system produced a given result, and thus the fairness and transparency of machine-learning algorithms, and any discrimination or bias in them, are hard to assess. Especially for decisions involving human subjects, the responsible use of algorithms and training data to ensure diversity, equality, and security remains a major challenge for public acceptance [3].
  • It is difficult to incorporate expert domain knowledge and non-statistical models into learning and decision-making processes.
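As a partial workaround, model-agnostic inspection techniques probe a black box from the outside. The sketch below shows a hand-rolled permutation-importance check on synthetic data: shuffle one input feature at a time and measure how much the model’s accuracy drops. The stand-in “model” and all data are invented; such a check reveals which inputs a model relies on, but still not why.

```python
import numpy as np

# Permutation-importance sketch on synthetic data (everything here is invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # feature 2 carries no signal

def black_box_predict(X):
    # Stand-in for an opaque trained model we cannot inspect directly.
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

def accuracy(X, y):
    return float(np.mean(black_box_predict(X) == y))

baseline = accuracy(X, y)
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])   # destroy the information carried by feature j
    print(f"feature {j}: importance = {baseline - accuracy(Xp, y):.3f}")
```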

Evolution and adaptation in open-world environments. In a well-publicized failure, a Tesla operating in its semi-autonomous Autopilot mode recently hit a truck crossing a highway because the car faced a situation that deviated widely from the models on which it was trained: 1) as the car proceeded up a hill, the truck’s white exterior was difficult to distinguish from the bright sky, and 2) again because of the uphill angle, the radar fingerprint of the truck’s undercarriage was mistaken for an overhead road sign. This illustrates a key problem of today’s AI. While tremendous progress has been made in achieving and even surpassing human-level performance in closed-world pattern-recognition tasks, current AI struggles with dynamic, open-world situations [4], in particular when humans do things that are seemingly unreasonable and/or when the environment deviates significantly from the norm. Humans deal with such situations by falling back on general models of the world (deep understanding), common sense, and improvisation. How AI systems can achieve similar capabilities is an open problem. Clearly, an important aspect is to go beyond learning patterns toward understanding the environment by observing the world and continuously adapting appropriate dynamic models of how it works. This is often combined with simulations and the use of synthetic data to derive, evaluate, and eventually act on such adaptive models (e.g., through reinforcement learning).
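None of this removes the underlying limitation, but a system can at least notice when it has left the conditions it was trained for and fall back to safer behavior. The sketch below uses a simplistic Mahalanobis-distance check on synthetic data; the threshold and every number are invented, and a real deployment would need a far more careful notion of “out of distribution.”

```python
import numpy as np

# Toy out-of-distribution check: flag inputs that lie far (in Mahalanobis
# distance) from the training data before trusting the model's output on them.
rng = np.random.default_rng(1)
train = rng.normal(size=(5000, 4))            # stand-in for the training features

mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 4.0   # invented cutoff; in practice calibrated on held-out data

for x in (rng.normal(size=4), np.array([8.0, -7.0, 9.0, 6.0])):
    dist = mahalanobis(x)
    action = "defer to a human / safe fallback" if dist > THRESHOLD else "trust the model"
    print(f"distance = {dist:5.2f} -> {action}")
```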


Understanding and naturally interacting with humans in a social context. Today’s AI systems lack a human-like capability to understand user goals, beliefs, emotions, and capabilities. As an example, consider a well-known yet seemingly trivial AI problem: how to automatically decide whether it is appropriate for a mobile phone to ring, given a certain setting and a specific caller. This requires an in-depth understanding of the fine points of a user’s current social context [5] and an ability to anticipate the potential significance of taking or delaying the call in the framework of the user’s life. Ringing is not helpful if the user is on the verge of convincing his bosses of something he deeply cares about. On the other hand, if he has already lost the argument, then taking the call may be wise. While research areas such as context-aware, affective, and social computing have considered various aspects of this problem, acting and interacting in complex social settings while taking into account the full complexity of human feelings and decision-making processes remains an unsolved AI grand challenge.
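Even a caricature of this decision shows where the difficulty lies. In the hypothetical sketch below, the arithmetic is trivial; what remains unsolved is reliably estimating inputs such as “the user is mid-argument with the boss.” Every feature, weight, and threshold is invented.

```python
# Caricature of the "should the phone ring?" decision; all features, weights,
# and thresholds are invented. The hard part is not this arithmetic but
# estimating the social-context inputs reliably in the first place.
def should_ring(context):
    score = 0.0
    score += 0.6 * context["caller_importance"]            # 0..1
    score -= 0.8 * context["meeting_stakes"]               # 0..1, how much this moment matters
    score -= 0.5 * (1.0 if context["user_is_speaking"] else 0.0)
    score += 0.4 * (1.0 if context["argument_already_lost"] else 0.0)
    return score > 0.2

print(should_ring({"caller_importance": 0.9, "meeting_stakes": 0.9,
                   "user_is_speaking": True,  "argument_already_lost": False}))  # False
print(should_ring({"caller_importance": 0.9, "meeting_stakes": 0.9,
                   "user_is_speaking": False, "argument_already_lost": True}))   # True
```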

Achieving reflexivity and expectation management in AI systems. Ultimately, smooth interaction between humans and AI systems requires that the latter can explain, reflect on, and assess their own capabilities with reference to human partners’ stated or presumed intentions and expectations. While current prototypes already monitor well-defined limits of their own capabilities, the unlimited diversity of future collaborative scenarios will demand more fundamental solutions. AI systems will need to ensure that they fulfill human expectations in dynamically evolving situations, while making sure that their human partners do not come to rely on misguided expectations.

Embedding ethics and values into AI systems. As we integrate AI systems into our economy and daily lives, their actions must be well aligned with the values and expectations of both individual users and society at large to be accepted. To this end, AI research needs to 1) integrate and collaborate with other disciplines on topics such as ethics, policy, and privacy; 2) foster a wide and well-informed discussion among all stakeholders from society, research, and industry about the relevant ethical values and norms that AI systems should follow and under what conditions; 3) explore methods to integrate and enforce such values and norms in AI systems; and 4) define standards and best practices for how these methods should be used to integrate relevant values and norms into products and systems across various market domains. When embedding values, it is essential to make design decisions explicit and visible. People should be able to inquire about and understand the underlying values for which an AI system optimizes.
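One way to make such design decisions inspectable is for a system to carry an explicit, human-readable declaration of the values it weighs and the constraints it will not violate, and to report them on request. The sketch below is purely illustrative, not a proposal for any specific standard; all value names, weights, and options are invented.

```python
# Illustrative sketch: an assistant whose optimization weights and hard
# constraints are declared explicitly, so a user can ask what it optimizes for.
VALUES = {"user_convenience": 0.5, "vendor_revenue": 0.3, "community_impact": 0.2}
HARD_CONSTRAINTS = ["never share location data with third parties"]

def explain_values():
    lines = [f"- {name}: weight {weight}" for name, weight in VALUES.items()]
    lines += [f"- constraint: {c}" for c in HARD_CONSTRAINTS]
    return "This assistant optimizes for:\n" + "\n".join(lines)

def choose(options):
    """Pick the option scoring highest under the declared value weights."""
    def score(option):
        return sum(weight * option.get(value, 0.0) for value, weight in VALUES.items())
    return max(options, key=score)

print(explain_values())
options = [
    {"name": "reroute around neighborhood B", "user_convenience": 0.9, "community_impact": -0.5},
    {"name": "balanced route",                "user_convenience": 0.7, "community_impact": 0.2},
]
print("chosen:", choose(options)["name"])   # the community-impact weight tips the choice
```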

Ultimately, developing adequate paradigms to let AI optimally support humans within social systems (see sidebar) is closely related to better understanding the relationship between natural and artificial intelligence. The question of the fundamental difference between artificial and human cognition is hotly debated. It usually comes down to creativity, intuition, emotions, self-awareness, or consciousness as uniquely human capabilities, contested by the view that the brain is just a biological computing machine with no fundamental difference from a computer other than speed. However, this debate is often driven more by the ideologies, emotions, and backgrounds of the people involved than by facts or provable arguments. Scientifically solid answers that include a better understanding of what we mean by consciousness, creativity, and intuition could contribute both to fundamental advances in AI and to the ways in which AI is used to better support humans.

References

1. Eden, A.H., Steinhart, E., Pearce, D., and Moor, J.H. Singularity Hypotheses: An Overview. In Singularity Hypotheses. The Frontiers Collection. A. Eden, J. Moor, J. Søraker, and E. Steinhart, eds. Springer, Berlin, Heidelberg, 2012.

2. Schmidt, A. Augmenting human intellect and amplifying perception and cognition. IEEE Pervasive Computing 16, 1 (2017), 6–10.

3. Lepri, B., Oliver, N., Letouzé, E. et al. Fair, transparent, and accountable algorithmic decision-making processes. Philos. Technol. (2017); https://doi.org/10.1007/s13347-017-0279-x

4. Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Storrs Hall, J. et al. Mapping the landscape of human-level artificial general intelligence. AI Magazine 33, 1 (2012), 25–42.

5. Lukowicz, P., Pentland, S., and Ferscha, A. From context awareness to socially aware computing. IEEE Pervasive Computing 11, 1 (2012), 32–41.

Authors

Paul Lukowicz is full professor of artificial intelligence at the Technical University of Kaiserslautern and scientific director at the German Research Center for Artificial Intelligence (DFKI), leading the research lab on embedded intelligence. paul.lukowicz@dfki.de

Philipp Slusallek is full professor of computer graphics at Saarland University. He is also scientific director at the German Research Center for Artificial Intelligence (DFKI), leading the research lab on agents and simulated reality. philipp.slusallek@dfki.de

Sidebar: FICTIONAL EXAMPLE SCENARIO: WHAT WORKING WITH AN AI SHOULD BE LIKE TO AVOID THE AI INTERACTION SINGULARITY

Consider a team of specialists trying to determine the cause of an unusual accumulation of failures in a production line.

“Looks like the work of a ghost,” says Specialist A.

“Indeed, the accumulation of failures between midnight and dawn is significant. But we should be careful not to make it look like we blame the night shift. They’ve been under a lot of pressure recently, on top of being a touchy bunch to begin with!” answers Specialist B.

“The only thing…” starts Specialist C, a new engineer who seems unsure whether he should say something. Specialist B notices that the young man is intimidated by the presence of his older boss and encourages him to speak up. “The only thing that’s special about midnight in our factory is the fact that the accounting mainframe is rebooted at that hour, but I don’t see how it could be relevant,” finishes Specialist C.

“Well, I haven’t been specifically trained to deal with computer mainframes, but basic physics indicates that a reboot might cause a spike in power. Such spikes have indeed been linked to the type of failures that we’re seeing. However, I would tend to rule it out as a cause since the production line has its own decoupled power supply,” explains Specialist B.

“Not quite,” says Specialist A. “The production line should have its own decoupled power supply, but it went offline for maintenance some time ago, and we’ve been piggybacking on the general power line.”

To grasp our vision of AI, imagine that Specialist B is an AI!

Compare this interaction to the dialogue you can have with today’s intelligent assistants, which can neither put vague figurative speech into proper context nor take into account the potential emotional impact that their actions could have in subtle social contexts. They also cannot reason about things they were not specifically trained on and cannot explain their reasoning.


