Columns

XXXII.5 September - October 2025
Page: 24

Outfitting the Emperor with New Clothes: How Human-Centered Design Can Reimagine AI to Minimize Its Risk


Authors:
Ovetta Sampson


The airport was on fire. It was January 2012, and I was working as an international writer and photographer. A group of doctors and I were traveling from Kaduna State in northern Nigeria to the country's largest city, Lagos.

Nigerian President Goodluck Jonathan had just removed the fuel subsidy, and gas prices had doubled. Laborers, shop owners, and ordinary folk took to the streets to protest the sudden surge in fuel prices, and we were caught in the struggle to maintain order. The protests eventually stopped, and we were allowed to fly to Lagos, but the sight of burning oil, thick black smoke, and the desperately angry people who felt betrayed never left me.

While I don't imagine such dramatic events will occur on the streets of the U.S., I know there is a reckoning coming once the real cost of our new toy hits American households. Yes, that catalyst of impending economic pressure is automation, or, as the marketers like to call it, artificial intelligence. And its price is our climate, our jobs, and, if you believe the philosophers, our humanity [1].

But it does not have to be this way. Even as the real costs of AI are slowly being unveiled and its return on investment remains unclear, we—engineers, designers, and researchers—have the opportunity to redesign the new automation age in a way that minimizes the human engagement risks. As designers, we often complain about not getting a seat at the table. The reality is that it's time for us to reinvent the entire industry, and we're uniquely suited to do it.

In the coming months, I will outline a pathway that showcases how design, in particular human-centered design, is not only necessary but essential to minimize the three big AI risks: acceleration of climate change, diminishing labor opportunities, and disruption of human cognitive function. Forget the doom and Skynet scenarios. These three are the real, immediate, and largely ignored risks of adopting automation technology. As human-centered design practitioners, we have an unprecedented opportunity to use our well-honed methodologies, with some updated additions, to minimize them.

Let's first talk about the real cost of AI. Every time you ask ChatGPT a question or plug in a calculation for Gemini to check your math or ask Claude about the meaning of life, the amount of water, power, and human brainpower it takes to process your request is mind-boggling.

I began doing AI design in the early 2000s. I've spent most of my career designing ML/AI infrastructure. Most recently, at Google, I led multiple teams of designers, researchers, and engineers who worked on tools and platforms that helped machine learning researchers, data scientists, and engineers create, build, scale, and produce models.

This work encompassed the entire model development process—from TPU configuration, chip storage, and distribution to the collection, engineering, and storage of data—as well as model creation, training, and serving, all the way to production, monitoring, and risk management. While I don't know everything about AI, I know a lot about how it is created, developed, built, trained, and produced. And today its production is woefully inefficient.

Throughout my decade-plus career using AI as a design tool, one reality has been ignored: The production of AI, especially generative AI, is heavily subsidized. It's subsidized by the human labor that enables it, by the large tech companies that build and distribute it, and by the American households whose water and power make it possible.

And now, just like the fuel subsidies in Nigeria, those subsidies are starting to be unveiled. In their recent article "AI Needs So Much Power, It's Making Yours Worse," Leonardo Nicoletti, Naureen Malik, and Andre Tartar lay out stunning visualizations that depict how the power-hungry data centers that fuel AI models are causing blackouts and frying appliances in cities such as my hometown of Chicago and in rural areas [2].

Let's make it plain. Cooling one large data center takes about 132 gallons of water a day, equal to the amount used by 4,200 people in the U.S. With fresh water making up just 3 percent of the planet's water resources, you can see how using AI is a risk to us all [3]. And every day ChatGPT uses as much electricity as it would take to power an average household for 70 years, and that's a conservative estimate [4].
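The scale of these figures becomes easier to grasp with a back-of-envelope conversion. The sketch below is illustrative only: the per-query energy figure, daily query volume, and average household consumption are stated assumptions, not measurements, and published estimates for each vary widely.

```python
# Back-of-envelope: translate daily AI chat-query volume into
# household-years of electricity. All inputs are illustrative
# assumptions, not measured figures.

WH_PER_QUERY = 0.3               # assumed energy per chat query, watt-hours
QUERIES_PER_DAY = 1_000_000_000  # assumed daily query volume
HOUSEHOLD_KWH_PER_YEAR = 10_500  # rough average annual U.S. household use

def household_years(queries_per_day: int) -> float:
    """Express one day of query energy as years of household electricity."""
    daily_kwh = queries_per_day * WH_PER_QUERY / 1000
    return daily_kwh / HOUSEHOLD_KWH_PER_YEAR

print(f"{household_years(QUERIES_PER_DAY):.0f} household-years per day")
```

Whatever inputs you plug in, the point stands: per-query costs that look negligible compound into household-scale energy draws at the volumes these systems serve.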

While AI is providing efficiency in logistics, banking, health care, retail, and beyond, right now its costs far outweigh its benefits. So how do we turn this around?

Most design leaders today are focused on incorporating GenAI into their research practices and creating automated shopping carts. They're missing the true design challenge of today, and of the future: How do we maintain our technology use trajectory while creating efficiency and sustainability gains? In short, how do we reimagine AI in a way that's responsible to humanity and the planet we inhabit? Designers need to think about how to solve our machine learning optimization problems.

Model optimization, typically the territory of engineers and ML researchers, is an area where designers and researchers can have real impact. The model development process is inefficient, almost haphazard: Developers adapt and build models using institutional knowledge rather than standardized practice.

Chip configuration, model selection, and model training, for example, are processes that could be improved to optimize model production. Most designers' eyes glaze over when words like machine learning operations and optimization are used, but if my tenure in ML infrastructure has taught me anything, it's that ML infrastructure is where we're needed the most (at least for now). Improving AI model efficiency would go a long way toward minimizing the biggest risks this technology holds.

We're overindexed on large language models and generative pretrained transformers (that's the GPT in ChatGPT), expecting them to produce the seemingly magical human-computer interactions we see today. What if, instead of models sketchily trained on the kitchen sink of the Internet, we were more judicious and intentional about the data we feed these generative model systems, keeping model size smaller and specificity of purpose higher?

What if we took a distributive approach rather than the large-size-fits-all approach we have now? In future articles, I'll lay out how human-centered design can assist this transformation and help make model systems more accurate and thereby more efficient for use cases.
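One way to picture that distributive approach, purely as an illustrative sketch, is a front end that sends each request to a small, purpose-built model and reserves the large general model as a fallback. The model names and the naive keyword router below are hypothetical placeholders, not any real product's API.

```python
# Illustrative sketch of a "distributive" model setup: route each
# request to a small, purpose-built model and fall back to a large
# general model only when nothing specialized matches. Model names
# and the keyword router are hypothetical.

SPECIALIZED_MODELS = {
    "invoice": "billing-small",      # hypothetical small billing model
    "diagnosis": "med-notes-small",  # hypothetical small clinical model
    "contract": "legal-small",       # hypothetical small legal model
}
FALLBACK_MODEL = "general-large"

def route(request: str) -> str:
    """Pick the small model whose domain keyword appears in the request."""
    text = request.lower()
    for keyword, model in SPECIALIZED_MODELS.items():
        if keyword in text:
            return model
    return FALLBACK_MODEL

print(route("Summarize this invoice for me"))  # billing-small
print(route("Write a poem about rain"))        # general-large
```

A production router would of course be a learned classifier rather than keyword matching, but the design question is the same: which requests actually need the biggest, most expensive model?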

Another avenue for design to play the hero is improving how AI is applied in the world. If the recent decision by fintech company Klarna to rehire humans after firing them in favor of AI is any indication, AI is being misapplied in egregious ways [5]. A year ago, Klarna's founder and CEO, Sebastian Siemiatkowski, told news outlets that he ran an AI-first company and began layoffs. Just months later, he reversed course, hiring people back because of the poor quality of the AI's work. We know AI fails when it comes to nuance, yet companies are rushing to integrate the technology in areas such as disease diagnosis and mental health therapy.


Designers and researchers are needed to help companies determine the best use case for AI, not just how to apply it. Just because you can use AI doesn't mean you should. And as with all human-centered design projects, our essential job is to help engineers determine when a technology is a problem solver and when it's not.

If you've been anxious about your future as a human-centered designer or researcher, don't fret. There are plenty of opportunities for us to not only participate but also to lead this new technology into a more humane and sustainable future for everyone.

References

1. Davies, S. We're already at risk of ceding our humanity to AI. Literary Hub. Feb. 6, 2025; http://bit.ly/4f8Hj8o

2. Nicoletti, L., Malik, N., and Tartar, A. AI needs so much power, it's making yours worse. Bloomberg. Dec. 27, 2024; http://bit.ly/4l3sDcg

3. Privette, A.P. AI's challenging waters. Center for Secure Water, University of Illinois Urbana-Champaign. Oct. 11, 2024; http://bit.ly/40Hk6EB

4. Bolt, O. How much energy does ChatGPT use per day? Energy Theory. Mar. 2, 2024; http://bit.ly/3GXSswn

5. Ivanova, I. As Klarna flips from AI-first to hiring people again, a new landmark survey reveals most AI projects fail to deliver. Fortune. May 9, 2025; http://bit.ly/4ocsSUY

Author

Ovetta Sampson is the founder of Right AI, which helps businesses navigate the increasing risk of adopting and deploying machine learning and AI. As an executive design leader and AI expert, she has worked for Google, Capital One, Microsoft, and IDEO. [email protected]


Copyright 2025 held by owner/author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2025 ACM, Inc.
