Special topic: Designing AI

XXV.6 November - December 2018

Prototyping ways of prototyping AI


Author:
Philip van Allen


We are moving toward an era of pervasive interactions with machine intelligence. Consider that increasing numbers of autonomous vehicles are mixing with pedestrians and traditional vehicles. Home IoT devices independently sense and act on our behalf. Co-bots work side by side with people.


These new types of interactions are affecting our civic, domestic, and work lives in significant ways. But the tools for exploring potential effects are at best lacking and, in many cases, nonexistent. There is an urgent need to develop generative and effective methods that explore the consequences of design choices when creating AI systems, including consideration of the new ecologies they create. This is an interaction-design meta-task—not just designing for AI, but also designing how to design for AI.


This article explores how prototyping and developing tools specifically for the design of AI can help advance the emerging field of UX/IxD for AI. Using the author’s experimental Delft AI Toolkit as an example (Figure 1), we discuss several ways to think about the needs and features of an AI toolkit.

Figure 1. The Delft AI Toolkit running on a laptop, a robot device that works with the toolkit, and a tablet used to marionette the device's behavior as a Wizard of Oz technique. In this photo, the robot is interacting in a way defined by the visual authoring system, mirroring what the on-screen 3D model of the robot is doing.

The Challenge

Interaction designers face new challenges with AI: the underlying technologies are hard to master, and it is even more difficult to understand when, how, and why they should be applied. Our tools should help us create great, thoughtful designs with AI.

Just as important, our tools should enable us to develop a facility with AI as a material for design projects. Designers need to develop a strong tacit understanding of AI through tinkering, exploring, and building. This demystification and understanding can lead to new collaborations, design concepts, and methods for AI through the unique engagement that designers have with their material. Designers bring an important positionality and value system that needs a place at the table in creating new directions for AI itself.

While there are interesting new tools being introduced for the development of machine-learning models (e.g., lobe.ai), there is little that specifically targets the design of the interactions and behaviors that compose the human experience around the AI models. The character of AI interactions is complex and different, arising from independent, evolving systems that need ongoing tending and maintenance by end users. How will each new application of AI work, and what are the human outcomes?

To go beyond the clichés of AI and create consequential, fulfilling, and ethically sound new interactions, designers need productive ways to craft a refined aesthetic for the behaviors, sociality, and narratives that AI systems require.

Figure 2. This diagram identifies possible functionality in the toolkit for adjusting the personality of the AI device through a visual interface. This example list is not exhaustive, but it does illustrate the kinds of high-level controls that would help designers.
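
As a rough illustration of what such high-level controls might look like beneath a visual interface, here is a minimal Python sketch of a personality configuration. The parameter names and ranges are assumptions for illustration only, not the toolkit's actual controls.

```python
from dataclasses import dataclass

# Hypothetical high-level personality controls a designer might expose as
# sliders in a visual interface; the parameter names are illustrative only.
@dataclass
class Personality:
    randomness: float = 0.2   # 0 = fully predictable, 1 = highly improvised
    patience: float = 0.7     # how long the device waits before re-prompting
    verbosity: float = 0.5    # how much the device talks or shows
    mood: str = "curious"     # e.g., "curious", "stern", "provocative"

    def clamped(self) -> "Personality":
        """Return a copy with numeric traits kept in the 0..1 range."""
        clip = lambda v: min(max(v, 0.0), 1.0)
        return Personality(clip(self.randomness), clip(self.patience),
                           clip(self.verbosity), self.mood)

# A designer could tune these per device and feed them into behavior nodes:
needy = Personality(randomness=0.4, patience=0.2, verbosity=0.9, mood="insecure")
print(needy.clamped())
```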

Such tools must address the prototyping and design of:

  • Personality and character for autonomous behavior (e.g., Figure 2)
  • Multimodal, non-visual interactions
  • User training/pruning/tending/learning
  • POV and biases (considering diversity in people and machines)
  • Mixed social interactions—M2H and M2M
  • Intentions/goals/rules
  • Ethics/civic responsibility
  • Indication of expertise and affordance.

And the tools must enable the designer to use methods such as:

  • Fast, iterative, experimental prototyping of interactions and autonomous behaviors
  • Wizard of Oz (WOZ) design experiments and testing with people
  • Comparing different algorithms, datasets, and training methods
  • Iterative testing of embodied working prototypes
  • Finding a minimum viable product (MVP) and minimum viable data (MVD)
  • Creating new AI technology requirements.

To support these needs and methods, AI design tools need powerful capabilities and workflows that mesh fluidly with design processes. In the same way that we use tools such as InVision and Proto.io to prototype screen interactions, we need new tools to prototype autonomous systems and their interactions.

The Delft AI Toolkit

Over the past year, I’ve been developing the Delft AI Toolkit (https://github.com/pvanallen/delft-toolkit-v2) as an experiment in prototyping ways to prototype AI. It is an open source visual authoring system that integrates AI techniques and technologies into an easy-to-use drag-and-drop environment that strives toward the needs and methods outlined above.

This toolkit builds on my prior research: the NTK toolkit (netlabtoolkit.org) and an approach to AI I call animistic design. That work helped inform both what to do and what mistakes to avoid.

NTK. Inspired by data-flow tools like MAX/MSP, NTK (Figure 3) is a drag-and-drop authoring system for designing projects with sensors and actuators. Using microcontrollers like the Arduino, NTK helps the designer create working prototypes with easy-to-use visual widgets that encapsulate domain expertise. By experimenting in real time and tweaking as they go, designers can focus on concepts and interactions through designerly play [1].

Figure 3. The interface for the author's NTK visual programming toolkit for working with Arduino, sensors, actuators, and media (netlabtoolkit.org).

Animistic design. Animistic design is an approach for AI that creates a sense of life in devices through non-anthropomorphic, fictional, and idiosyncratic personalities [2,3,4]. It proposes multiple heterogeneous smart things that use their form and behavior to embody humble honesty about their capabilities, limits, and points of view (Figure 4). Building on the theory of distributed cognition, they become a part of human imagination and thinking processes, and act as collaborators rather than servants in a shared exploration.

Figure 4. An example of a system of animistic devices that someone can work with, each with a different attitude and goals. This “team” of collaborators, each with their own point of view, might sit on a desk in conversation with each other and people, talking and showing media. For example, the Needy asks a lot of questions; the Nerd goes deep on a topic; the Nostalgic prefers older references.

This animistic design approach, which focuses on developing subtle narratives, interactions, and autonomy while incorporating AI, prompts the design question: How do we prototype sophisticated next-generation AI systems?

A Working Prototype of the Tool

The Delft AI Toolkit begins to answer the question of how designers might work with AI. It’s a visual authoring tool for designing and prototyping AI that works on-screen and with a physical device.

The tool incorporates three key strategies: the simulation of AI by enabling the live marionetting of the system’s behavior; a next-generation visual authoring environment; and the simulation of physical devices through interactive 3D models (Figure 5).

Figure 5. Functional diagram of the Delft AI Toolkit featuring the three key strategies: marionetting simulated AI, node-based visual authoring, and 3D simulation of physical devices.

A next-generation visual authoring environment. In the toolkit authoring environment, the designer creates blocks and connects them together to incrementally build and experiment with the AI processing, interaction, logic, and behavior of an AI system (Figures 6 and 7).

Figure 6. A simple example in the toolkit of a visually authored behavior tree for a robot moving and speaking. To create a behavior, the user clicks in the graph to add an action, machine-learning behavior, sensory input, etc., and then connects the nodes together and creates a flow by dragging “wires” from one node to the next.
Figure 7. Simplifying the node system: It is common for node-based visual authoring systems to become large and complex. To reduce this problem, the strategy is to make the nodes do more. On the left is an early version of the toolkit that uses five nodes to perform a sequence of actions. On the right is a new approach that consolidates the same sequence of actions into a single node.

Drawing on my experience with NTK, as well as on how game designers develop non-player characters (NPCs), I made the authoring system a hybrid of data-flow and behavior-tree models. This allows for creating complex sequences and interactions that include initiating AI processes (e.g., object recognition, speech-to-text, reinforcement learning), reacting to the user, sensing, and generating behaviors (e.g., kinetic expression, audio, light, text).
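
As a rough sketch of that hybrid (not the toolkit's internal representation), the Python below shows the idea in miniature: a behavior-tree Sequence runs actions in order, while each action reads incoming sensor data on a shared blackboard, which is the data-flow side.

```python
# Minimal behavior-tree sketch: a Sequence runs children in order and stops
# at the first failure; leaves read shared data (the "blackboard").
class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self, blackboard):
        return self.fn(blackboard)  # True = success, False = failure

class Sequence:
    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        return all(child.tick(blackboard) for child in self.children)

def speak(bb):
    print("Hello!")
    return True

def move_forward(bb):
    # Data-flow side: the action reacts to a sensor value on the blackboard.
    if bb.get("proximity", 0) > 27:
        print("moving forward")
        return True
    return False

greet_and_approach = Sequence(Action("speak", speak), Action("move", move_forward))
greet_and_approach.tick({"proximity": 42})  # a sensor reading flows in as data
```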

Simulate then implement AI. To support the simulation of AI, the toolkit has a marionette system that lets the designer stand in for the AI in the initial design phases. Using the toolkit, the designer can puppeteer the behavior of the prototype in real time while observing how people experience and interact with it (Figure 8). This is supported by a standard protocol (OSC), so a remote control such as a phone or tablet can wirelessly trigger predefined behaviors, simulating the AI.

Figure 8. Using a wireless tablet to marionette a behavior in response to simulated voice recognition. Here the designer presses a button remotely, away from the user test, making the system seem to react to a voice command (the robot moves backward when the user says “Goodbye”) when in fact no voice-to-text system is active (https://youtu.be/vKxXVijCcdk).

The behaviors that the marionette controls can be pre-built or designer-improvised sequences that include movement, speaking, sensing, sound, and light. More subtle kinds of control can also be used—for example, the puppeteer could vary aspects of an algorithm in real time such as the amount of randomness, confidence thresholds for decision making, or the mood for how the system responds to people (e.g., patient, provocative, stern). In addition, the puppeteer could swap out different machine-learning models (e.g., trained on different data sets) during a test to see how they perform in actual use with people.
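
Because the marionette channel is ordinary OSC, almost any OSC-capable client can serve as the remote. The sketch below uses the python-osc package to show the general shape of such a remote; the host, port, addresses, and parameter names are assumptions for illustration, not the toolkit's actual OSC namespace.

```python
# pip install python-osc
# A minimal Wizard-of-Oz remote: send OSC messages to the machine running the
# toolkit to trigger behaviors or nudge parameters live during a user test.
from pythonosc.udp_client import SimpleUDPClient

TOOLKIT_HOST = "192.168.1.50"  # hypothetical address of the laptop running the toolkit
TOOLKIT_PORT = 5005            # hypothetical OSC listening port

client = SimpleUDPClient(TOOLKIT_HOST, TOOLKIT_PORT)

# Trigger a pre-built behavior, e.g., when the participant says "Goodbye".
client.send_message("/behavior/trigger", "move_backward")

# Vary aspects of the algorithm in real time (illustrative parameter names).
client.send_message("/param/randomness", 0.3)
client.send_message("/param/confidence_threshold", 0.8)

# Swap the active (simulated or trained) ML model mid-test.
client.send_message("/model/select", "objects_household_dataset")
```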

This Wizard of Oz (WOZ) approach allows the designer to easily sketch the AI for themselves, for collaborators, and for usability testing, before investing in a functional AI system. Once the design matures, the toolkit allows the designer to replace the WOZ version of AI with appropriate functional implementations of algorithmic AI.

In discussions at the AAAI symposium, one insight that emerged is that this iterative, experimental approach to the design of AI can not only help define the MVP (minimum viable product) in agile terms, but also help narrow down the MVD—a term we coined for the minimum viable data needed to train the ML models.

Data collection, tagging, and ML model training are expensive and time consuming. Defining an MVD strategy in the prototyping phase (perhaps in collaboration with a data scientist [5]) could lead to significant reductions in cost and time to market, or to new insights about the characteristics of the data needed.
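
One concrete way to probe an MVD during prototyping is to train on progressively larger slices of the available data and see where performance stops improving. The scikit-learn sketch below illustrates the idea on a toy dataset; it is not part of the toolkit, and the dataset and model are stand-ins.

```python
# pip install scikit-learn
# Probe a "minimum viable data" point: train on growing subsets and see where
# accuracy plateaus. Toy dataset and model chosen purely for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 100, 200, 400, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} examples -> test accuracy {model.score(X_test, y_test):.3f}")

# The smallest n that is "good enough" for the concept being tested is a
# candidate MVD to take into a conversation with a data scientist.
```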

To facilitate identifying the appropriate scope and quality of data, the tool will allow designers to experiment with simulated ML and then use a simple dropdown menu to switch between different ML models to see how they behave in use (e.g., pointing a camera while the model performs object recognition). In this way, the prototype can show how different data and training impact speed, accuracy, and the experience. This will help the designer understand how the data and model fit project goals, as well as identify issues such as unwanted bias, bad data, and unexpected outcomes.
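
That dropdown idea can be sketched as a set of interchangeable recognizers behind one interface, timed so the prototype can report speed alongside the result. The recognizers, names, and return values below are stand-ins for illustration, not real models or the toolkit's API.

```python
import time

# Interchangeable "recognizers" behind one interface, so a prototype can swap
# them at runtime the way a dropdown would. Each stand-in mimics a model with
# different training data, returning (label, confidence).
RECOGNIZERS = {
    "simulated":      lambda image: ("cup", 0.99),         # Wizard-of-Oz stand-in
    "small_dataset":  lambda image: ("mug", 0.62),         # e.g., trained on little data
    "large_dataset":  lambda image: ("coffee cup", 0.91),  # e.g., trained on more data
}

def recognize(image, which="simulated"):
    start = time.perf_counter()
    label, confidence = RECOGNIZERS[which](image)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return label, confidence, elapsed_ms

for which in RECOGNIZERS:
    label, conf, ms = recognize(image=None, which=which)
    print(f"{which:14s} -> {label!r} (confidence {conf:.2f}, {ms:.3f} ms)")
```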

Moreover, the ability to play with data will, over time, give the designer a better sense of how data collection, training, and ML algorithms can be used most effectively and creatively while considering ethics and civic responsibility.

Simulate then implement hardware. To avoid the time and expense of working with physical electronics and mechanics too early in the process, the toolkit allows the designer to simulate the behavior of a physical AI device in 3D within Unity. Whether they are designing a simple tabletop device or an autonomous car, designers can sketch a range of physical behaviors (changing shape, speaking/listening, seeing other objects, moving around, etc.) in 3D on a screen, through the visual authoring system (Figure 9).

Figure 9. The 3D cube moves and behaves like the intended physical device would, driven by the visual node system on the right. Once the behavior is defined, the physical robot enacts the same behaviors (sensing, navigating, moving, lighting up) as the simulated 3D device on the screen—see Figure 1 (https://youtu.be/vKxXVijCcdk).

To create a more immersive experience, the toolkit will ultimately allow interaction with the 3D simulation through augmented reality, merging the virtual device with the physical world.

When the designer is ready to move to a working hardware prototype, a critical moment in the design process, the toolkit makes the transition easier and relatively seamless. It does so by supporting a standard set of behaviors that work both in 3D simulation and on a physical robotic platform (Figures 10 and 11).
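
One way to read that shared set of behaviors is as a common interface with interchangeable simulated and physical backends, so the same authored behavior drives either one. The sketch below illustrates the pattern; the class names, method names, and the messaging client for the physical robot are assumptions, not the toolkit's actual architecture.

```python
from abc import ABC, abstractmethod

class RobotBackend(ABC):
    """A shared behavior vocabulary used by both the 3D simulation and the hardware."""
    @abstractmethod
    def move(self, direction: str, duration: float) -> None: ...
    @abstractmethod
    def speak(self, text: str) -> None: ...

class SimulatedRobot(RobotBackend):
    def move(self, direction, duration):
        print(f"[sim] animating 3D model: {direction} for {duration}s")
    def speak(self, text):
        print(f"[sim] speech bubble: {text}")

class PhysicalRobot(RobotBackend):
    def __init__(self, client):
        self.client = client  # hypothetical wireless link (e.g., an OSC client)
    def move(self, direction, duration):
        self.client.send_message("/action/move", [direction, duration])
    def speak(self, text):
        self.client.send_message("/action/speak", text)

def goodbye_behavior(robot: RobotBackend):
    """The same authored behavior runs on either backend."""
    robot.speak("Goodbye")
    robot.move("backward", 1.0)

goodbye_behavior(SimulatedRobot())  # swap in PhysicalRobot(client) when hardware is ready
```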

Figure 10. A simple visually designed algorithm for navigating using a proximity sensor labeled /analog/0. The robot reports the real-time value from the sensor, and the toolkit checks that value to decide what action to take. On the left, the behavior runs if the sensor reports a value of 42 (actually a range around that value), moving the robot forward. On the right, if the robot gets too close to an object (less than 27), it turns right and then moves forward.
Figure 11. The actual robot enacting the algorithm in Figure 10 by reporting values from the proximity sensor and following the logic dictated by the toolkit. From left to right, the robot moves toward the object, turns, and moves forward (https://youtu.be/gn1e1ZpLe2o).
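
Rendered as ordinary code, the Figure 10 logic is a small sense-decide-act loop: move forward while the proximity reading sits in a range around 42, and turn right then move forward when it drops below 27. The robot calls and the exact width of that range in the sketch below are assumptions for illustration.

```python
import random
import time

def read_proximity():
    # Stand-in for the real-time value the robot reports from its proximity sensor.
    return random.uniform(10, 60)

def navigate(steps=10, too_close=27, clear=42, tolerance=5):
    """Sense-decide-act loop following the Figure 10 logic."""
    for _ in range(steps):
        value = read_proximity()
        if value < too_close:
            print(f"{value:5.1f}: too close -> turn right, then move forward")
        elif abs(value - clear) <= tolerance:
            print(f"{value:5.1f}: in the clear range -> move forward")
        else:
            print(f"{value:5.1f}: no rule matched -> wait for the next reading")
        time.sleep(0.1)

navigate()
```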

A Working Prototype

The Delft AI Toolkit is an experiment intended to provoke discussion about tools for AI. But it is also a working prototype and will soon be usable by designers to experiment and gain a stronger understanding of AI as a material of design.

As the system develops, the plan is to support behavior trees, internal machine-learning blocks, Unity’s reinforcement learning agents, APIs for cognitive services such as IBM Watson, and an interface to external ML models such as Google’s TensorFlow.

In addition, plans include providing working examples in the toolkit that help designers understand how a range of machine-learning approaches work (and don’t work).

The system is available on GitHub (github.com/pvanallen/delft-toolkit-v2), and I plan several releases through fall 2018 and beyond that will be stable enough to use. I encourage designers and others to try it out and provide feedback. You can follow progress at philvanallen.com/portfolio/delft-ai-toolkit/.

Acknowledgments

To pursue the goal of deeper engagement for designers in the AI field, in fall 2017 I took a leave of absence from the Media Design Practices MFA program at ArtCenter College of Design to work on a toolkit for the design of AI. This was supported by a research fellowship at TU Delft in the Netherlands, funded by Design United. It took place in the Industrial Design Engineering program, working closely with the idStudioLab and professor Elisa Giaccardi. The project is currently supported by a small grant from the ArtCenter Faculty Council.

References

1. Hartmann, B., Klemmer, S.R., Bernstein, M., Abdulla, L., Burr, B., Robinson-Mosher, A., and Gee, J. Reflective physical prototyping through integrated design, test, and analysis. Proc. of the 19th Annual ACM Symposium on User Interface Software and Technology. ACM, New York, 2006, 299–308; https://doi.org/10.1145/1166253.1166300

2. Marenko, B. and van Allen, P. Animistic design: How to reimagine digital interaction between the human and the nonhuman. Digital Creativity 27, 1 (2016), 52–70. DOI:10.1080/14626268.2016.1145127; https://www.tandfonline.com/doi/full/10.1080/14626268.2016.1145127

3. van Allen, P. Reimagining the goals and methods of UX for ML/AI. AAAI Spring Symposium Series 2017; https://aaai.org/ocs/index.php/SSS/SSS17/paper/view/15338

4. van Allen, P. Rethink IxD. Philip van Allen. May 9, 2016; https://medium.com/@philvanallen/rethink-ixd-e489b843bfb6.

5. Yang, Q. et al. Investigating how experienced UX designers effectively work with machine learning. Proc. of the 2018 Designing Interactive Systems Conference. ACM, 2018, 585–596. DOI:10.1145/3196709.3196730.

Author

Philip van Allen is a professor at ArtCenter College of Design interested in new models for the IxD of AI, including non-anthropomorphic animistic design. He also develops tools for prototyping complex technologies and consults for industry. He received his B.A. in experimental psychology from the University of California, Santa Cruz. vanallen@artcenter.edu


Copyright held by author. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
