Features | Special topic: Designing AI

XXV.6 November–December 2018, Page 38

Cybernetics and the design of the user experience of AI systems


Authors:
Nikolas Martelaro, Wendy Ju


Cybernetics and artificial intelligence (AI) are often considered the same thing, with cybernetics having something to do with creating intelligent cyborgs and robots. In actuality, cybernetics and AI are different ways of thinking about intelligent systems or systems that can act toward reaching a goal. AI is primarily concerned with making computers mimic intelligent behavior based on stored representations of the world. Cybernetics more broadly encompasses the study of how systems regulate themselves and take action toward goals based on feedback from the environment. These systems are not just computational; they include biological (maintaining body temperature), mechanical (governing the speed of an engine), social (managing a large workforce), and economic (regulating a national economy) systems [1]. In addition to reaching goals, AI and cybernetics both consider how systems can learn; however, while AI considers using stored representations as a means of acting intelligently, cybernetics focuses on grounded and situated behaviors that express intelligence and learning based on feedback and interaction [2].


Currently, we are seeing a growth in AI systems based on machine-learning techniques that exhibit forms of intelligent behavior by creating representations from large amounts of data. Some examples include computer-vision systems that accurately identify and label objects in images and speech-input systems that can understand natural language. Much of the focus in recent AI has been on training systems to exhibit intelligent behavior, such as image recognition. As these systems become more reliable and easier to work with, designers can embed them into products as AI subsystems that interact with people. The designer’s goal is to develop the interaction between the person and the product. This interaction can then support the AI subsystem to further learn and adapt to the user. Given these advances, designers should understand and equip themselves with ways of thinking about systems that can learn and adapt via interaction. Learning and adapting to the needs of a system are the goals of both iterative design processes and cybernetics [3]. Thus, cybernetics can provide a useful framework for augmenting designers in creating human-centered interactive AI-enabled products.
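To make this interaction loop concrete, here is a minimal, hypothetical sketch of an AI subsystem embedded in a product that adapts its behavior from user feedback. The scenario, option names, and update rule are invented for illustration and are not drawn from any particular system:

    # Minimal sketch of an embedded AI subsystem that adapts from user feedback.
    # The scenario, option names, and update rule are illustrative assumptions.

    class AdaptiveSuggester:
        """Suggests an option and nudges its preference scores from feedback."""

        def __init__(self, options, learning_rate=0.1):
            self.scores = {option: 0.0 for option in options}
            self.learning_rate = learning_rate

        def suggest(self):
            # Act toward the current goal: offer the highest-scoring option.
            return max(self.scores, key=self.scores.get)

        def feedback(self, option, accepted):
            # The user's response closes the loop and adjusts future behavior.
            target = 1.0 if accepted else -1.0
            self.scores[option] += self.learning_rate * target

    suggester = AdaptiveSuggester(["map view", "list view", "voice summary"])
    choice = suggester.suggest()                # e.g., "map view"
    suggester.feedback(choice, accepted=False)  # the user rejects the suggestion
    print(suggester.suggest())                  # the next suggestion shifts

In a real product, the update rule would be whatever learning the AI subsystem actually performs; the point is that the interaction the designer shapes is also the channel through which the subsystem receives its feedback.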


To explore how designers can employ cybernetics in their work, we recently held a discussion with three people who are thinking and working at the junction of cybernetics and design. Hugh Dubberly of Dubberly Design, co-creator of the Apple Knowledge Navigator concept film, discussed how the changing nature of designed products should prompt design education to incorporate ideas from cybernetics. Deborah Forster, a primatologist and cognitive scientist at UC San Diego’s Contextual Robotics Lab, spoke about how design researchers can use self-observation interviews and feedback to improve the design process. Jody Medich, director of Design for Singularity University Labs, discussed how our current cybernetic models for interacting with AI computer systems limit their ability to augment human thought and require a redesign to truly support human intelligence. Each speaker has provided a synopsis of their thinking below.

The Relevance of Cybernetics to Design and AI Systems

By Hugh Dubberly

Knowledge of cybernetics is increasingly relevant to both what and how designers design.

Cybernetics is the science of feedback, information that travels from a system through its environment and back to the system. A feedback system is said to have a goal, such as maintaining the level of a variable (e.g., water volume, temperature, direction, speed, or blood glucose concentration). Feedback reports the difference between the current state and the goal, and the system acts to correct differences. This process helps ensure stability when disturbances threaten dynamic systems, such as machines, software, organisms, and organizations.
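As a toy illustration (a hypothetical thermostat; the numbers and correction rule are illustrative assumptions, not taken from the text), the basic loop can be sketched in a few lines: the system measures the difference between its state and its goal, then acts to reduce it.

    # Toy first-order feedback loop: a thermostat-like controller acts to reduce
    # the difference between the measured state and a fixed, imposed goal.

    goal_temperature = 21.0   # the goal is imposed on the system from outside
    room_temperature = 17.0   # current state, disturbed by the environment
    gain = 0.5                # how strongly the system corrects on each step

    for step in range(8):
        error = goal_temperature - room_temperature   # feedback: goal minus state
        room_temperature += gain * error              # act to correct the difference
        print(f"step {step}: {room_temperature:.2f} C")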

Simple feedback systems have goals imposed on them. Second-order systems, which observe themselves, may adjust their goals. Second-order systems don’t just react; they may also learn. When two first-order systems engage, the result is interaction. They push each other. When two second-order systems engage, the result may be conversation, an exchange about both goals and means. As discourse on cybernetics expands to second-order systems, issues of ethics emerge.
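Continuing the same toy example (the goal-revision rule below is purely an invented illustration of the distinction), a second-order sketch lets the system observe its own performance and revise the goal itself, rather than only chasing a fixed one:

    # Toy second-order sketch: the system observes its own recent errors and may
    # adjust its goal, not just its actions. The revision rule is an illustration.

    goal = 21.0
    state = 17.0
    gain = 0.5
    recent_errors = []

    for step in range(10):
        error = goal - state
        state += gain * error
        recent_errors.append(abs(error))
        # Second-order observation: if the system keeps finding itself far from
        # its goal, it relaxes the goal instead of only correcting harder.
        if len(recent_errors) >= 3 and sum(recent_errors[-3:]) / 3 > 2.0:
            goal -= 0.5
        print(f"step {step}: state={state:.2f}, goal={goal:.2f}")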

Cybernetics offers a language (both vocabulary and frameworks) that enables scientists (and designers and others) from different domains of knowledge and practice to communicate—to describe the structural similarities of systems and to recognize patterns in information flows. This shared language is especially useful in analyzing, designing, and managing complex, adaptive systems, which are intertwined with many of today’s wicked problems.

What designers design. In the past 30 years, design practice has expanded from a focus on the form of objects to a broader concern for interaction with systems and product-service ecologies (systems of systems).

Today’s products are often smart (controlled by microprocessors), aware (full of sensors), and connected (to each other and to cloud-based services). These products and services, and our interactions with them, generate increasing volumes of data—just when computer processing is becoming an on-demand utility and pattern-finding software (AI) is advancing.

Today’s designers must consider how information flows through these systems, how data can make operations more efficient and user experiences more meaningful, and how feedback creates opportunities for learning. Knowledge of cybernetics can inform these processes.

How designers design. Traditionally, designers delivered plans-for-making, which clients approved before manufacturing large quantities of finished things. In mass production, risks are high (set-up costs, costs of materials, and costs of fixing mistakes), causing designers to obsess about perfecting their plans-for-making.

Designing for systems and product-service ecologies is different. Today’s information systems are not mass-produced. In the language of systems, they are emergent. They are rarely defined a priori and in toto; rather, they grow over time and key features evolve through interaction with users and the environment.

Now, instead of finished plans, designers must create possibilities for others to design and make; designers must build flexible platforms, defined by patterns and rules for interaction and rules for changing the rules. Instead of making decisions about what and how, designers facilitate conversations about why and who.

In sum, designers are now engaged in designing first- and second-order cybernetic systems, and sometimes, systems for conversation—using methods that draw on cybernetics. These changes suggest that knowledge of cybernetics and other aspects of systems thinking, such as systems dynamics and complexity theory, is a prerequisite for practicing design going forward.

Mind the Gap: Mapping Meaning Making to Sensor-Driven Data in AI Systems

By Deborah Forster

The cybernetics movement was rooted in an appreciation for dynamics in the natural world. The participants in the Macy conference series (1946–1953) were concerned with identifying biological dynamics that could be designed and instantiated in man-made machines and mechanisms (physical, social, cultural) [4]. The participation of Margaret Mead, Gregory Bateson, and Heinz von Foerster (as well as other social scientists) in these meetings allowed the relevance of the cybernetics formalisms to seep into less formalized realms—identified as “soft” cybernetics—as well as the reflexive extension to second-order cybernetics.

The more recent success of the cybernetic turn in technology—especially technology platforms that can track and feed back rich sensor input in iterative cycles—has filtered so quickly into how we perform our daily lives that we have missed the opportunity to reflect properly on the impact of adaptive technologies. We’ve moved—almost leaped—from ecological niche construction to sociotechnical niche construction. In other words, the new sociotechnical configurations are constantly transforming the very landscape from which they emerge.

In this environment, the task of user experience practitioners is to slow down and pay attention to the dynamics that would otherwise pass by unnoticed—to be the guardians of meaning making on a human scale. Given the kinds of sensor networks we can deploy and the kind of data we can gather, we now have many more tools to do this—to see human behavioral dynamics and the subtleties of ecological processes. We are still grasping, however, for reliable practices that capture and map human meaning making onto these large sensor-derived datasets.
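One modest illustration of what such a practice could look like (the driving scenario, timestamps, values, and annotation text are hypothetical, and this is a sketch rather than an established method): aligning a participant's own account of a moment with the sensor samples recorded around it, so that meaning making can be indexed against the behavioral record.

    # Illustrative sketch: join timestamped self-observation annotations with a
    # sensor-derived log so human interpretation sits next to the recorded data.
    # The scenario, values, and annotation text are invented assumptions.

    sensor_log = [  # (seconds, speed in m/s) from a hypothetical driving session
        (0.0, 8.2), (1.0, 8.4), (2.0, 3.1), (3.0, 0.0), (4.0, 0.0),
    ]
    annotations = [  # what the participant said while reviewing their own video
        (2.2, "I braked hard because I thought the car ahead was stopping."),
    ]

    for t, note in annotations:
        # Attach each annotation to the nearest sensor sample in time.
        nearest_time, nearest_speed = min(sensor_log, key=lambda sample: abs(sample[0] - t))
        print(f"t={t:.1f}s (sensor t={nearest_time:.1f}s, speed={nearest_speed} m/s): {note}")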

The reflexive turn of cybernetics was echoed in the cognitive and social sciences. Anthropologists started practicing autoethnography—systematically revisiting the recorded traces of their own experience in order to enrich their understanding of themselves. Confronting people with video presentations of their own activity [5] emerged as an explicit autoconfrontation methodology in the ethnographic toolkit in the 1990s and continues to evolve [6]. As sensor networks become ubiquitous and capture multiple modalities, we are increasingly surrounded by our own digital and virtual echoes. Designing autoconfrontation experiences is already an important aspect of monitoring wearable technologies, although we argue that this methodology could and should be used throughout the research and development cycle.

Supporting users’ reflection on their own doings can be useful during various phases of the design process and during the development of technologies across use contexts. In contrast to usual recall-based data-collection methods, autoconfrontation offers two main opportunities: First, the user is confronted with aspects of the activity that they do not remember or did not attend to at the time. Second, users are able to recall and reflect on more than is recorded in the data, providing critical insights for further design and development. Previous work has used autoconfrontation in the development of surgical robotics [7] and intelligent driver-support systems [8]. Autoconfrontation can support designers in the pre-design phase, where it aids task analysis and the discovery phase of user-experience research, and later in the validation and calibration of behavioral metrics in simulated or prototype-testing environments. More recent work in the context of autonomous vehicles (cars and ships) points to the potential value of this method in the training, coaching, and behavior-change phase of advanced driver-assistance systems (ADAS). By using autoconfrontation, we can expose mismatches between the user’s experience and the automated technology’s behavior. This provides a cybernetic loop that can help improve the design of autonomous systems throughout the design process.

Time to Speak Human

By Jody Medich

We designed artificial intelligence—like all tools—to extend a human ability: our ability to think. We created a cybernetic model with most of the onus on the human to maintain the system and encode the exchanged information into computer speak. The tech side of the cybernetic system has not spent much time adapting to and learning from the human side, except through this encoded, translated format. The more fluent the human in tech speak, the more powerful the things they can accomplish together.

But technology has finally matured to a level where it is able to speak more fluently and efficiently to itself than we will ever be able to do again. Soon it will write its own code more effectively than we ever could. And it’s able to encode the real world into computer speak without waiting for human translation. We will never again dominate conversations in that language.

And that’s great. That language, and the cybernetic system around it, is terrible for humans. As a tool, it is unwieldy and noisy.

Great tools disappear into the hand. We don’t think about their operation; instead, they act as an extension of the body. To our brains, the two appear quite similar: Swinging a well-made hammer and walking generate the same level of pink noise in the brain. Barely requiring attention, the not-quite-random randomness of the activity is more like controlled falling than conscious thought. All relevant information is sensory data—the native language of the brain—ensuring that most of the mental effort of the task is relegated to background cognitive abilities. The absence of constant data translation allows the brain to focus attention on more difficult, higher-level functions such as deep creative thinking, rather than on operating the tool or the body.

The operation of technological tools rarely reaches that level of functioning. Instead, the human-machine interface is incredibly noisy, not because of the amount of data but because of how the data is encoded: into a system that relies heavily on linguistics and human working memory to carry tidbits of information across multiple contexts. Unfortunately, that part of our brain is very easily interrupted by context switches and maintains only contextually pertinent data. When the context switches, the brain resets working memory for the new situation and wipes out all the previous information. This is called the doorway effect, and it’s why you so easily forget that you want a glass of water when you walk from the couch into the kitchen. The level of noise inherent in the technological tool is deafening, killing our ability to think. That is ironic, considering that AI tools are designed to amplify our cognition.

Technology is finally capable of thinking. Soon it will cross the exaflop barrier, a level of raw computing power often compared to that of the human brain. It’s time to radically update the cybernetic system that we rely on to interact with technology. We need to teach technology to speak human—to encode all that data into a multisensory physical interface that can disappear into our hands.

Conclusion: Cybernetic Design Tools

Though cybernetic thinking is rooted in the past, we believe it may be the way forward for design, especially as our products increasingly embody forms of intelligence. One unifying theme from our conversation was embedding cybernetic thinking into our design processes and tools. As Dubberly suggests, our current design methods may not suffice in the new world of AI-enabled products. Designers will no longer explicitly craft the interactions between a product and a user. Instead, we will need to design the meta-systems that design a product’s interactions. This suggests a need for new tooling to develop these meta-systems. But as Medich suggests, we may need to fundamentally redesign our current interaction model with our computing tools. In doing so, we may need to slow down and observe ourselves. Some of the methods that Forster suggests, such as participant observation and self-observation, may provide designers with a feedback loop on their own process. Moving forward, we would argue that more designers should learn about cybernetics. More important, those who are developing design tools should consider augmenting their tools with feedback systems that support designers both in the design project at hand and in their design process.

References

1. Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press, 1961.

2. Dubberly, H. and Pangaro, P. Cybernetics and design: Conversations for action. Cybernetics & Human Knowing 22, 2–3 (2015), 73–82.

3. Glanville, R. Researching design and designing research. Design Issues 15, 2 (1999), 80–91.

4. Pias, C., ed. Cybernetics: The Macy Conferences 1946–1953. The Complete Transactions. Univ. of Chicago Press, 2016.

5. Clot, Y., Faïta, D., Fernandez, G., and Scheller, L. Entretiens en autoconfrontation croisée: une méthode en clinique de l’activité. Education permanente 146, 1 (2001), 17–25.

6. Cahour, B. and Licoppe, C. Confrontations with traces of one’s own activity. Revue d’anthropologie des connaissances 4, 2 (2010), a–k.

7. Wahlström, M., Seppänen, L., Norros, L., Aaltonen, I., and Riikonen, J. Resilience through interpretive practice – A study of robotic surgery. Journal manuscript submitted for publication.

8. Boer, E.R., Joyce, C.A., Forster, D., Chokshi, M., Mogilner, T., Garvey, E., and Hollan, J. Mining for meaning in driver’s behavior: A tool for situated hypothesis generation and verification. Proc. of Measuring Behavior 2005: 5th International Conference on Methods and Techniques in Behavioral Research.

Authors

Nikolas Martelaro is a researcher at Accenture Technology Labs. He is interested in interaction design for autonomous systems and design methods. He received his Ph.D. in mechanical engineering from Stanford University’s Center for Design Research. nikolas.martelaro@accenture.com

Wendy Ju is an assistant professor of information science at Cornell Tech interested in how people respond to interactive and autonomous technologies. She received her Ph.D. in mechanical engineering from Stanford University’s Center for Design Research. wendyju@cornell.edu


Copyright held by authors. Publication rights licensed to ACM




@Dr David Pao (2018 11 13)

I have always found cybernetics as a concept difficult to embrace because its apparent complexity seems to fall in with common human sense. I’ve wondered ‘why does this need to be a theory?’ This is despite being a PhD researcher at the RCA and having had a cybernetics supervisor (the late Ranulph Glanville). However, when pitched against AI (in this case), its contribution shines. I begin to see the theoretical perspective that I have been looking for in my thesis. My research is about designing for healthcare systems - in particular how to nurture clinician performance through the visual design of electronic interfaces and how to improve the patient digital experience of healthcare. We are attempting to do exactly as Dubberly says - bring these two second-order systems together in conversation. This is, in our view, the holy grail.