Ms. Wong (not her real name) couldn't contain her joy. This young, petite woman bounced on her couch as she described what most think of as the most mundane of activities—driving. My team and I were in Beijing on a research trip to design the future of autonomous vehicles. I was smiling because of the infectious joy the young Chinese mother exhibited as she described to a roomful of American strangers why driving was such an enjoyable experience.
"For one thing," she said through a translator, "I really enjoy a road trip… driving while listening to music and then watching the horizon…it's a very beautiful thing. I feel very, very happy and content.
"You can only see the road and it's connected with the sky," she said. "It's a sky road. There…I want to drive it by myself."
At that moment, I realized my top job as a designer in an autonomous future was to help engineers and technologists determine what not to design. Because it would be downright inhumane to steal that joy of driving from Ms. Wong.
In my extensive career as a designer and researcher, I have personally seen how technology can help solve some of the most complex problems in the world. I have worked on projects that used automated technology to help doctors reduce risk to their patients, reduce a child's anxiety when awaiting chemotherapy, and reduce food waste to make restaurants more sustainable. And who can question how automated technology has helped the world maintain order in the midst of the Covid-19 chaos?
But as a human-centered designer, I also know the dark side of automation. How increasingly powerful algorithms used in social media have moms suddenly questioning whether to vaccinate their children and celebrities claiming the Earth is flat. Or how automation has an amplifying effect on already biased systems—loan origination and criminal justice procedures, to name a few. And of course, the top fear of automation: the displacement of front-line service workers. Algorithms aren't the culprits, but these real events signal the profound effect that automated technology can have on humanity.
To preserve humanity in a future of automation, we cannot dwell on what can be automated. Rather, we must take a stand and decide what not to automate. This is a bold change and requires a paradigm shift in how we think about our craft, our industry, and our duty as designers. But it can and should be done. If we want a future of automation that shelves the sins of our past, we have to make some drastic changes. We need to:
- Accept we are part of the problem. We must recognize that the bias, racism, sexism, and phobias that seep into our automated technology all start with us. We must admit that human culture—good and bad—plays a huge part in designing automation systems. Only then can we redesign how we design future systems. To address the problem of us, we must widen our design narrative. For an ethical, automated future we must accept that we, alone, are no longer the arbiters of what's right and wrong in design.
- Evolve our design methods. Our current design and research methods focus predominantly on the relationship between an individual and a device. We design automation with a task-oriented framework that sits squarely in the slave/master philosophy. In a world of automation, fueled by artificial intelligence, there will be several entities, beyond humans, in an ecosystem with agency and the ability to act. We must consider the entire system of people, platforms, and products when designing for this future world.
- Embrace design as a verb. This may be the hardest change of all.
Increasingly, our job as designers will be to determine what not to design, what not to make, in a quest to preserve cultural, ritual, and distinctly human qualities in our future world. If not designers, then who? And this monumental shift will require us to embrace design as a verb, not a noun, making it a willful act to stop the ensuing chaos when humans mix with machines. We can no longer have design hampered by siloed turf wars with product managers, data scientists, developers, engineers, and sales and marketing. We must demand our seat at the executive table or destroy the room.
Step 1: Accept that we are part of the problem. Then widen our design narrative. As creators, we're notoriously bad at separating our human failings from our technology. This is an argument that has been made many times, so I won't dwell on it here. In our schools, our courts, our neighborhoods, and our government, we've used data and mathematical models to push racist, sexist, and biased agendas. If you want more examples, read the siren calls of Cathy O'Neil, Virginia Eubanks, and Ruha Benjamin.
So what do we do about it? Start with the person in the mirror. When was the last time you actually visualized and communicated your own biases? If you're not doing it on every project, you should start.
Mitigating bias at scale means design schools must integrate ethics into the very essence of design practice. Design ethics shouldn't start on the job. It needs to be part of our process as well as our profession. When we teach ethnography, or wireframing, or heuristic evaluation, we teach ethics. Not separately, but as part of what it means to be a designer. For example, as part of a graduate course, I helped my students craft their design ethics statement, one that they could use for the rest of their career.
But to be honest, the bias problem in automation will be solved only by widening our design narrative. Yes, that means recruiting more designers of diverse perspectives, backgrounds, cultures, and philosophies. But it also means questioning the "rightness of whiteness" when it comes to design. Our profession must expand beyond Westernized ideals and principles.
Most of what we practice with design thinking and human-centered design stems from a single conference of white designers in London in 1962. The conference upended design by putting humans in the center of engineering and industrial product making. For a more equitable, automated future, it may be time to upend design methods again, from a less Westernized point of view. We must embrace a wider perspective of design that includes more co-creation, co-participation, and a form of collective design, with communities, societies, and people who will use our future products. I gain inspiration for myself from people like Indigenous futurist Jason Edward Lewis. Lewis, a professor of design at Concordia University, has been showing new ways of creating the future using a very non-human-centered design framework. "Man," he writes, "is neither height nor center of creation. This belief is core to many Indigenous epistemologies."
Step 2: Evolve our design methods. At the Conference on Design Methods in London in 1962, Welsh designer John Christopher Jones introduced design methods to the world. It was a seminal moment in which design matured. We as design practitioners still engage today with much of what was outlined at that conference, including design research with people, creating user scenarios, prototyping, testing, and validation. But these methods place designers as the arbiters of the process and people as the center of the outcome.
User experience in an autonomous product era shifts the agency paradigm from individual and device to a multiagency ecosystem. It's an era where agency can be passed among many entities, including multiple devices, machines, and people. This pushes us into a different framework of design, one less about interaction and more about relationships—a more collective point of view that contrasts with the individualism embedded in today's design methods.
Human and machine cooperation (HMC), as explained by experts in the human factors field, focuses on multiple sentient parties, each with their own agenda, having to operate in the same context, cooperating to achieve a common goal. This multiagency dynamic presents unique challenges to traditional design and research methods. It also presents a unique future where, as Lewis writes, "man is not the center of the product creation but a part of a cooperative of agencies."
As a design research lead on several autonomous projects at IDEO, I found that many traditional methods of discovery, such as interviewing, surveying, and even observing people in their environments, left me unsure of the design direction. I couldn't just ask people about driving in an autonomous vehicle. It wasn't until my teams and I began delving into more intangible territory, such as the value driving holds for a society, or the rituals call center managers valued most, that I realized we had to understand fundamental human values, cultures, and practices to give good design direction. So we brought in historians, psychologists, and ethicists to help us design.
For a more equitable and ethical autonomous future, we need to focus not on a person and a device but rather on what I call mindful AI. Mindful AI requires designers to be concerned not just with human behavior and decision making, but also with people's culture, rituals, and values.
When you design products and services that interrupt a person's free will (say, an AV car door that refuses to open because its camera can see a cyclist in the door's pathway better than the driver can), you need a fundamental understanding of how to create such gestures without violating abstract values such as trust and the need for freedom.
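The door example shows how much design judgment hides inside a single automated gesture. Here is a minimal, hypothetical Python sketch of that judgment (none of these names come from a real autonomous-vehicle platform): an interlock that never refuses silently, always explains its reason, and ultimately yields to an informed person rather than overriding their free will.

```python
# Hypothetical sketch: a door interlock that explains itself and
# preserves the rider's agency. All names here are illustrative
# assumptions, not a real AV SDK.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Hazard:
    kind: str            # e.g., "cyclist"
    seconds_away: float  # estimated time until the hazard reaches the door

def door_open_decision(hazard: Optional[Hazard],
                       rider_insists: bool) -> Tuple[bool, str]:
    """Return (open_door, message). The door never refuses silently:
    it always communicates why, and it yields to an informed rider."""
    if hazard is None:
        return True, "Door released."
    if rider_insists:
        # Preserve free will: warn, but honor the person's informed choice.
        return True, f"Opening anyway. A {hazard.kind} is {hazard.seconds_away:.0f}s away."
    # Delay, don't dictate: hold the door briefly and explain why.
    return False, f"Holding the door: a {hazard.kind} is approaching ({hazard.seconds_away:.0f}s away)."

opened, msg = door_open_decision(Hazard("cyclist", 3.0), rider_insists=False)
print(opened, msg)
```

The design choice worth noticing is that the machine's override is temporary and legible: it trades a few seconds of friction for trust, instead of trading the person's freedom for safety.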
In a truly automated world, design will be less about what humans do and more about what humans fundamentally need to meet that definition of humanity for themselves.
Step 3: Embrace design as a verb. As we develop new, more communal methods to design, we must expand our influence as designers in our organizations to ensure that what we design gets adopted into product creation. One of my favorite definitions of design comes from the controversial designer Victor Papanek. He defined design as "the conscious and intuitive effort to impose meaningful order."
As designers, we can't just worry about the look, feel, and convenience of our designs anymore. We must dig deeper because our designs will have agency; they will make decisions, and those decisions could adversely or favorably affect people, communities, and society as a whole. When you're designing for an autonomous future, you're designing for an ecosystem of connected individuals, devices, and relationships. This requires more than just nailing down affordances and usability.
Design as a verb means treating design as a willful act, one that requires leadership and responsibility from its creators. How often do we question the products we make real? How often do we challenge technologists on their product roadmaps? How often are we in the C-suite showing technology CEOs our plans for change?
Design leaders like us can't sit on the sidelines and complain pitifully about what the world is coming to. We're culpable and therefore can help change the course. We have to stop picking petty fights with product managers, engineering, and other disciplines, and acknowledge who the real product owner is—the people for whom we design.
So, as designers, we must get better at weaving design methods and processes into the processes and methods of our disciplinary counterparts. To do this, we need to push for and establish trading zones—areas of transdisciplinary product making where designers, engineers, data scientists, and researchers all work together in a collective leadership model to produce new solutions for the future. What would this collective operating model look like? Nothing like what we do today. Here's a snapshot:
- We treat the disciplines involved in product making as a network of connected systems rather than as a set of individual discipline organizations collaborating. We reimagine reporting structures and project leadership, and build transdisciplinary acumen across all disciplines, so that everyone leads at some point in the product-making process and no one discipline "owns" the product. You can ship products with this collective operating model. I know because I've done it.
- We rotate and share decision making to avoid siloed, transitional leadership handoffs, using an inclusive product-making model in which people facilitate learning across disciplines as part of the process.
- We demand and create diverse product teams so that we may have various mental models at the table to call out bias, privilege, and designer arrogance as we design for an increasingly diverse populace. We normalize diversity in research, recruiting, and design.
To ensure that our future automated world isn't marred and overwhelmed by mishaps of the past, we as designers must embrace new methods. But more important, we must embrace a new, increased responsibility for product making. Smash open a door to a more equitable and collective process of product making. We can no longer wait for the baton to be passed to us—we must take our place at the executive table. Ms. Wong is depending on us.
Ovetta Sampson is a principal creative director at Microsoft and was formerly a design research lead at IDEO. She has a master's in human-computer interaction from DePaul University and a bachelor's in communication and journalism from Truman State University. email@example.com
Copyright held by author. Publication rights licensed to ACM.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.