Special topic: Designing AI

XXV.6 November - December 2018
Page: 42

Design and fiction


Authors:
Jason Wong


As artificial intelligence (AI) and machine learning (ML) grow more prominent in daily human-computer interaction, the field of interaction design must shift to understand the possibilities, limitations, and biases of AI. Currently, academia and technology corporations are leading the way in research and implementation, working in distinct fields such as natural language processing and object recognition. There has been enormous progress: AI, supported by deep learning [1], is very good at completing a discrete task such as scheduling an appointment [2]. But it is not capable of a more general intellectual task, such as knowing how and when to reschedule that appointment when more urgent matters arise. Moreover, ML’s reliance on data collected through human annotation and transcription [3] makes it susceptible to the same biases that plague human cognition [4]. Platforms such as ML Kit, TensorFlow, scikit-learn, and IBM Watson are available; however, these platforms are intended for developers and researchers who have a foundational knowledge of programming and an understanding of AI’s capabilities.


Design’s role is to contribute a humanist perspective that considers the social, political, ethical, cultural, and environmental factors of implementing AI into daily human-to-computer interactions. By necessity, interaction designers must imagine how people can interact with AI that doesn’t constrain the human experience but rather facilitates interactions that allow for nuance and exchange. Interaction designers have the opportunity to use their range of skills to imagine an interaction that considers both the breadth of the experience and the precise interactions necessary to create a functioning AI. Specifically, we can imagine a holistic system that augments and amplifies human experience by making it tangible through imagery, text, animation, or sound. Finally, as ML researchers are often specialists who lack a comprehensive understanding of the field, interaction designers must work as a conduit, combining different forms of AI to work in concert.

Speculation as a Technique

Because of the difficulty and limitations of AI prototyping, speculation is a powerful technique that enables designers to propose, conceive, and ideate on how AI could work. Anthony Dunne and Fiona Raby, advocates of this practice, have primarily used speculation to create tangible products and fictional scenarios that serve as catalysts for discussing present issues by “collectively redefining our relationship to reality” [5]. For example, their 2013 project United Micro Kingdoms (http://umk.techamigo.net/) envisioned four fictional societies organized along a spectrum of ethical and political positions, proposing how a future society might implement sustainability, collectivism, and surveillance. Their project highlights what speculation offers design: the opportunity to propose new realities that are not bound to the technical limitations of the present, and to provoke a dialogue about how we want our society to progress. When envisioning new realities, designers must remember to question techno-determinism, the assumption that all technology is good because it is capable of solving problems. Designers, developers, and researchers must evaluate whether their work actually addresses their defined problem space while respecting the user and the environment in which their creation exists.

Committee of Infrastructure: An AI Design Provocation

The new realities that designers are tasked with envisioning make space for the introduction of human values, such as humor and play, while addressing the moral and ethical uncertainties of implementing AI. My recent project, Committee of Infrastructure, is a design provocation that interrogates the issues of agency, representation, and bias. The project explores a need to scrutinize how governance will change with the introduction of AI. Specifically, it considers how humans and AI systems interact with each other in a civic setting to negotiate issues pertaining to a local community. The project also argues for the introduction of civic dialogue as a model for interaction with AI. The good aspects of bureaucracy (e.g., checks and balances) can provide a way to negotiate with AI at a human scale and pace.


Committee of Infrastructure imagines four different groups of stakeholders that advocate on behalf of their organizations in a city council meeting (Figure 1), wherein humans and AIs negotiate with each other. Stakeholders include engineers, city council members, presidents, AI experts (machines), smart roads, and sensors, each expressing conflicting positions, ideologies, and motivations (Figure 2). The project purposefully positions these imaginary human and machine stakeholders arguing among themselves to demonstrate how this absurd scenario might become a reality. Discussed at the meeting: a ballot measure for the removal of traffic lights to create a fully autonomous intersection. Smart street lights, autonomous vehicles, and embedded sensors will sense vehicles, people, and other objects through machine vision and proximity detection, allowing for open communication and for traffic to move efficiently through the intersection (Figure 3).

Figure 1. This speculative L.A. city council meeting depicts how a member of the public advocates for the well-being of the animal population, aligning with the motivations of PETA.
Figure 2. This meeting stakeholder chart for the L.A. city council proposes a mix of human advocates, AI representatives, human engineers, and AI experts, all negotiating with each other on a ballot measure.
Figure 3. The diagram illustrates how the intersection will work after the removal of traffic lights at Sunset Blvd. and N. Alvarado St.; proximity sensors, speed sensors, and computer vision will work together to account for and govern the different types of traffic (vehicular, pedestrian, cyclist, and animal).

Data and Algorithms as Material

As interaction designers, we use color, typography, icons, and sound to bring an experience to life. Data and algorithms are the materials that shape how a user experiences AI: its personality, biases, and idiosyncrasies. In Committee of Infrastructure, data is used to explicitly reveal bias and the shortcomings of relying on AI to make decisions. Bias is a serious issue with real-world consequences, but speculation allows designers to play out bias in absurd and imagined ways. To conceptualize how this meeting might take place, a city council transcript was created using Andrej Karpathy’s char-rnn machine-learning algorithm [6] (Figure 4). The algorithm learned from seminal texts important to the ethos of each organization participating in the meeting [7].

Figure 4. Transcript excerpts created using char-rnn. Human and AI representatives argue for their respective organizations in a vernacular learned by training the ML algorithm on seminal texts important to the ethos of each organization (https://vimeo.com/250998475).
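The char-rnn used for the transcript is Karpathy’s LSTM-based character model. As a simplified stand-in for the same next-character-sampling idea, a toy character-level model can be sketched in a few lines of Python; the corpus and `order` parameter here are hypothetical illustrations, not the project’s actual training data or code:

```python
import random
from collections import defaultdict

def train_char_model(text, order=4):
    """Build a character-level model: map each context of
    `order` characters to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=80, rng=None):
    """Sample one character at a time, sliding the context window,
    in the spirit of char-rnn's next-character prediction."""
    rng = rng or random.Random(0)
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # unseen context: stop early
            break
        out += rng.choice(choices)
    return out

# Hypothetical corpus standing in for an organization's seminal texts.
corpus = ("the committee shall review the measure. "
          "the committee shall vote on the measure. ") * 3
model = train_char_model(corpus, order=4)
print(generate(model, seed="the "))
```

A real char-rnn replaces the lookup table with a recurrent network, which is what produces the stilted but fluent vernacular seen in the transcript.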

The language created is awkward and direct. The algorithm’s narrowness glosses over the difficulty of consensus building, which is central to civic dialogue and to the meeting’s progress. However, the transcript does effectively display how an AI might interact with humans and other AIs. In addition to the verbal arguments of each organization, sets of video evidence (machine vision with image classification) reveal each organization’s motivations. Specifically, computers classify moving objects and assign a value to each object, creating a hierarchy that allows vehicles, people, and animals to communicate and avoid collisions. PETA’s video, for example, prioritizes animals by predicting their behavior and location. It exhibits how a well-intentioned motivation can become susceptible to a flawed outcome when programmed with narrow intentions (Figure 5).

Figure 5. PETA’s image-classification algorithm over-optimizes its computer-vision analysis, leading to the incorrect classification of vehicles and people as animals (https://vimeo.com/213373126).
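The classify-and-assign-value logic described above might be sketched as follows. The labels, weights, and `right_of_way` function are hypothetical illustrations of how a stakeholder’s priority table encodes its bias, not the project’s actual code:

```python
# Hypothetical priority weights one stakeholder might program.
# PETA's version ranks animals above everything else; a transit
# agency would weight the same table differently -- the bias lives
# in the table, not the sensing.
PETA_PRIORITIES = {"animal": 3, "pedestrian": 2, "cyclist": 2, "vehicle": 1}

def right_of_way(detections, priorities):
    """Given (label, confidence) detections from a vision system,
    order them by the stakeholder's priority, then by confidence."""
    return sorted(
        detections,
        key=lambda d: (priorities.get(d[0], 0), d[1]),
        reverse=True,
    )

detections = [("vehicle", 0.95), ("animal", 0.40), ("pedestrian", 0.88)]
ranked = right_of_way(detections, PETA_PRIORITIES)
# A low-confidence animal detection still outranks a near-certain
# vehicle: the kind of flawed outcome Figure 5 dramatizes.
print(ranked)
```

The point of the sketch is that a single weight table silently encodes an organization’s ideology, which is exactly what the negotiated video evidence makes visible.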

The Value of Speculation

By revealing the limitations and biases of AI, Committee of Infrastructure demonstrates how AI systems are subject to the same fallibility present in human-to-human interaction. Code, personality, and data are necessary to create a functioning AI, but the ethical framework that guides how these systems interact is just as fundamental. AI cannot be blindly trusted; it should be subject to the same scrutiny as a bill, law, or ballot measure. The project illustrates why developers and researchers should create a record of an AI system’s motivations, origin, and data sources that is visible to everyone [8]. With all of these variables detailed, a discussion among a broader set of stakeholders can challenge preexisting assumptions and provide the evidence needed to thoroughly negotiate how these systems influence daily life.
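One way to picture such a public record is a minimal, machine-readable provenance structure. The fields and values below are illustrative assumptions, not a published schema or the project’s own format:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ProvenanceRecord:
    """A minimal, public-facing record of an AI system's origins.
    Field names are illustrative, not a standard schema."""
    system: str
    operator: str
    stated_motivation: str
    data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

record = ProvenanceRecord(
    system="intersection-vision",
    operator="L.A. DOT (hypothetical)",
    stated_motivation="move traffic efficiently through the intersection",
    data_sources=["traffic-camera footage", "proximity-sensor logs"],
    known_limitations=["animal classifier over-triggers on small vehicles"],
)
# Publishing the record as JSON opens it to public scrutiny,
# like a bill or ballot measure.
print(json.dumps(asdict(record), indent=2))
```

Publishing limitations alongside motivations is what would let the broader set of stakeholders contest the system on evidence rather than assumption.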

Ultimately, Committee of Infrastructure proposes that speculation is essential for working through the uncertainty, complexity, and ambiguity of AI problems. Designers have the opportunity to make sense of what is not yet truly understood in both capability and application. Further, AI issues are incredibly difficult and esoteric. Committee of Infrastructure utilizes speculation to show how humor and play can be used to examine serious issues like surveillance and governance, allowing designers, developers, citizens, and policymakers to have a dialogue about how we want AI to influence our daily life. By using humanistic values, designers can promote new forms of interaction that facilitate inclusive and compassionate experiences. This field affords new opportunities that call for radical ways of working.

References

1. Deep learning is a method in which neural networks learn from vast amounts of data to detect patterns and make predictions.

2. Google Duplex, a virtual assistant demonstrated in 2018, can schedule an appointment so skillfully that it can be difficult to discern whether a human or a computer is speaking. The assistant pauses, intonates, and affirms just like a human.

3. CrowdFlower and Amazon Mechanical Turk are platforms that solicit humans to annotate data for use in ML, which requires copious amounts of data.

4. Biases such as stereotyping, attentional bias, and confirmation bias can become amplified when incorporated into machine learning. For example, the COMPAS algorithm used by corrections departments in Wisconsin, New York, and Florida has produced risk scores biased against African American defendants. See Angwin, J., Larson, J., Mattu, S., and Kirchner, L. Machine bias. ProPublica. May 23, 2016; https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

5. Dunne, A. and Raby, F. Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press, Cambridge, MA, 2013.

6. Karpathy, A. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy Blog. May 21, 2015; https://karpathy.github.io/2015/05/21/rnn-effectiveness/

7. For example, the L.A. DOT representative learned to speak from City of Los Angeles Transportation Impact Study Guidelines and Traffic Studies Policy and Procedures.

8. The idea of data provenance for AI systems was discussed at the 2017 AAAI symposium.

Author

Jason Shun Wong is a designer, researcher, and strategist of interactions and critical media working in emerging technology. His work focuses on the intersection of smart cities, networking, behavioral psychology, object-oriented ontology, civics, Chinese science fiction, and meme culture. info@jasonshunwong.com

Footnotes

https://jasonshunwong.com/work/committee-of-infrastructure-part-2/


Copyright held by author. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
