XXX.5 September - October 2023
Page: 39
Digital Citation

Creating Standards: Our Secret Job as Researchers

Danny Spitzberg


Designing and developing software involves making claims about what users want. In the early years of personal computing, central claims about user goals, desires, and everyday life came from psychologists, anthropologists, and other social scientists who collaborated with technologists in innovation centers like Xerox PARC [1]. In the past decade, however, the widespread adoption of software and massive growth in the industry came with a new professional title: the user experience (UX) researcher.

Insights

  • Researchers in software companies have huge potential to improve tech governance, but advocating for "ethics" produces small wins and risks cooptation by executives.
  • A more influential approach for researchers is to focus on standards that shape ongoing collaboration with engineers, regulators, and other stakeholders.
  • Over the long term, standards can also enable large-scale tech governance through labor organizing and policy change.

In my day job, I am one of tens of thousands of UX researchers working in the software industry. Researching people's desires comes with ethical challenges for software professionals, and like most of my peers, I want to do ethical work. I aspire to help coworkers design and develop useful software products and positive experiences for users, and, where possible, to help users advocate for themselves. But recently, advocating for ethics has become a research duty on its own. "Ethical AI" is appearing everywhere, from company marketing materials to individual LinkedIn profiles, with little hope of improving tech governance.

My goal in this article is to argue that researchers do our best work influencing software production and guiding tech governance not by advocating for "ethics" but rather by taking on a secret job: creating standards.

Talking in Circles

As software becomes more pervasive, researchers have exponentially more problems to solve. The gig economy monetized flexible labor; cryptocurrency enabled new financial complexity; AI amplified bias and plagiarism, with more complications emerging every day. Even basic accessibility is a constant game of catch-up.

While UX researchers have told colleagues and clients, "My job is to advocate for the user," a cottage industry has emerged to advocate for "ethics." The popular outputs of this industry, however, seem to have people talking in circles rather than making progress.

The design firm IDEO entered the fray with its 2015 publication of The Little Book of Design Research Ethics [2]. At 55 pages, the book offers a preliminary guide for ethics. Noteworthy, however, is its lack of any direct reference to human rights; human rights law has myriad accountability and enforcement mechanisms and institutions external to any project or firm like IDEO, which could guide client selection and project scoping.

IDEO's book is far from the only resource on applied ethics in software design and UX. I've come across some 150 ethics offerings, ranging from checklists and card decks to workshops and community groups. However, I've yet to find one discussing the role of power in advancing ethics, let alone the realities of office politics in a corporate workplace that make ethical interventions a necessity in the first place.

Another popular book in this genre is Design Justice: Community-Led Practices to Build the Worlds We Need, by Sasha Costanza-Chock [3]. In contrast to IDEO's guide, this is an explicitly political scholarly work about "the matrix of oppression" affecting design, especially as applied to software. It includes numerous case studies highlighting the need to center people who are normally marginalized by design, plus 10 principles to do so that emerged from design activist gatherings. Overall, it's a worthwhile read if you've had a negative experience with U.S. airport security—and especially if you haven't.


Shortly after the book came out in February of 2020, I hosted a talk with Sasha for the Design Justice Network. One participant asked for advice in the Q&A: "How do I apply design justice principles to my UX day job at an auto insurance company?"

Sasha attempted to provide a satisfying response, but identifying sources of harm, let alone "community-led practices" addressing them, is far less tractable in the insurance industry than in airport security, border control, or other notoriously problematic areas. Interestingly, I learned a week later that economists pay close attention to auto insurance data because late payments and defaults are a leading indicator of economic distress; they know people depend on cars to commute to work, and sometimes for shelter.

It's difficult to imagine addressing systemic inequality by designing a better auto insurance app. Nevertheless, the vast majority of UX researchers work in the gray area of corporate workplaces.

How can we tell where to intervene, or whom we can trust as co-conspirators? Ethics and justice are important topics to discuss, but they offer little in terms of strategy for engaging corporate actors, let alone for influencing overall software production.

Points of Intervention

When a software company finds itself under public scrutiny, ethics initiatives often serve as a sort of empty promise for the company to do better next time. In a 2019 talk, Meredith Whittaker explains that this plays out because "ethics" are almost never pegged to external accountability, such as human rights law [4].

Researchers in most software companies are already far removed from making decisions on company direction. And as Emanuel Moss and colleagues argue in their 2019 report, the more a researcher strives to become "the ethics person" (or to introduce terms like design justice), the more management is likely to target them and then isolate, sideline, burden, and even blame them for negative outcomes while also ensuring they lack power or influence to make desired changes [5].

So, without means of internal influence or access to external accountability, few of us researchers can afford a second, underpaid job digging our company out of a crisis. The magnitude of this task cannot be overstated: Even if researchers secure "ethics" commitments from leadership long before production begins, shipping software to consumers involves an entire company. In a 2023 paper, Sanna J. Ali and colleagues studied 23 workers tasked with implementing ethical values in AI systems as part of their job and found these "ethics entrepreneurs" struggling to prioritize, quantify, and reorganize production, all while taking great personal risk [6].

It is still true that ethics are vitally important to consider. The question, however, is about how we influence organizations to make decisions about software production overall.

That inspired me and a few colleagues to conduct a study asking, "Do influential projects focus more on decisions than ethics?" Based on a survey and a scenario-based workshop with around 80 people, the answer was yes. We published a paper about the strategies teams used [7]. What surprised me was that the most influential projects focused on decisions in messy situations where there was no right answer and no best practice. We've since run nine more workshops with more than 800 designers and researchers [8].

Returning to our earlier question: Where do researchers have influence along the production line? We found a small cluster of strategies used to align product teams, for example, a playbook with quality assurance metrics. We also found a much larger set of educational strategies shaping our profession, like helping coworkers educate themselves about oppression in our field and how that affects users.


But the most compelling strategies came from the messiest situations. One firm trying to support restaurant workers early in the pandemic realized there were no existing safety measures to help guide their designs. So, they recruited a social worker to lead the design research, someone whose training and licensing corresponds to external accountability. Their intervention was less about "ethics" and more about bringing standards into production—enabling them to create new standards.

Researchers can exert influence wherever we focus on standards. The rest of this article introduces the theory and practice of creating standards, what they do, and how we can make them better.

More Than a Nudge

A standard is a normative assertion—a statement about what should be "normal"—that works like infrastructure. For example, consider the now-ubiquitous shipping container:

  • Containers should be 40 feet in length.
  • Containers should stack across all transport modalities.

Shipping container standardization led to huge productivity and efficiency gains. In this case, the standards work at many levels and scales, from establishing specifications and constraints that enable collaboration among engineers, to setting expectations for how things fit together that facilitate market exchange.

Standards exist for everything, from what we build to how we build it, from shipping container specifications to minimum wage. In Western countries, standards often came about as a matter of national security or national pride. Governments set tens of thousands of standards; some are mandatory to keep users safe from harm (or to protect businesses from liability), but most are voluntary. In the private sector, some company standards are closely guarded secrets, like the Coca-Cola recipe, while others are promoted widely, like IDEO's books on human-centered design methods.

Putting intellectual property rights and reciprocity aside, here is a condensed list of what it takes to make a better standard:

  • Observable: possible to see in action
  • Reasonable: easy to follow the logic
  • Adaptable: open to feedback

Of course, researchers can't simply author new standards and expect positive change. If you've tried making an edit on Wikipedia, you might know that the majority of edits are made by white male volunteers who really like to be right on the Internet. Web standards can also become dominated by a minority of people with too much time on their hands.

However, if we understand the dynamics of who makes standards and who follows them, we can make things better. Most software standards are set by third-party associations like the World Wide Web Consortium (W3C), funded and directed by major companies. Interestingly, these same companies hire third-party firms to audit their products against those standards. Most audits involve accessibility concerns: ensuring that unsubscribe links are a minimum of 80 x 20 pixels, that blinking icons don't cause seizures, and so on. Bocoup is one firm that conducts such audits; as a member of another association, the Contract for the Web, it also pushes for open and inclusive standards that go beyond harm reduction to advancing human rights.
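Checks like these are easy to automate once a standard states a measurable rule. Here is a minimal sketch; the page model, the element sizes, and the 80 x 20 pixel minimum are illustrative assumptions for this sketch, not an official W3C rule or any real audit firm's tooling:

```python
# Minimal sketch of an automated accessibility audit check. The page
# model, the element sizes, and the 80 x 20 pixel minimum are
# illustrative assumptions, not an official W3C rule.

MIN_WIDTH_PX = 80
MIN_HEIGHT_PX = 20

def audit_target_sizes(elements, min_w=MIN_WIDTH_PX, min_h=MIN_HEIGHT_PX):
    """Return the elements whose clickable area falls below the minimum size."""
    return [
        e for e in elements
        if e["width"] < min_w or e["height"] < min_h
    ]

# Hypothetical elements measured from a rendered page
page_elements = [
    {"id": "unsubscribe-link", "width": 60, "height": 12},
    {"id": "subscribe-button", "width": 120, "height": 40},
]

for e in audit_target_sizes(page_elements):
    print(f"{e['id']}: {e['width']}x{e['height']} px is below the minimum")
```

The point is less the code than the shape of the rule: Because the standard is observable and reasonable, anyone can verify compliance, which is exactly what makes it enforceable.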

The Duty of Care

What's especially powerful about standards is who cares about them. As researchers, we can influence what gets built or not by helping create standards—that is, by influencing what should be "normal" for software. And while engineers get stereotyped for ignoring ethics, they do value and follow certain standards that facilitate collaboration and overall help build better products. We can see this in cases of consumer electronics, public safety tech, and transportation labor.

The consumer success of two competing smartwatches is a case of several standards at work. Some products succeed because of existing standards, and other products succeed so well that they set new standards. The first Pebble watch shipped in record time and at an affordable price because it was built with interchangeable off-the-shelf parts, a manufacturing standard dating back centuries. It played music, displayed emails, and supported sports and fitness applications, and you could customize it using a smartphone. Manufacturers and hobbyists alike loved it; iFixit gave the second model a 9 out of 10 repairability score.

Two years later, Apple launched the Apple Watch. Sold at almost twice the price, it was nevertheless a huge success due to Apple interoperability: the standard that says any one product should be able to interact with any other in its ecosystem. The Apple Watch is now the best-selling smartwatch, and consumers expect the same user experience on watches and across all Apple products. But iFixit gave the Apple Watch a 5 out of 10 repairability score. Hobbyists might find that problematic, but Apple is fine with it, because that's part of how the company maintains quality control.


When it comes to public safety, standards are much higher stakes. Consider the walkie-talkie. It's an ideal tool for first responders because any device should, and can, talk with any other: a more mission-critical instance of interoperability. Now, imagine you find yourself trapped in a fire or an earthquake. Whom do you trust to coordinate the rescue: a firefighter guided by an on-site crew with walkie-talkies, or one guided by a remote colleague wearing a VR headset and viewing building floor plans?

Chris Arellano (right) and a group of ride-hail drivers in San Francisco filed hundreds of wage claims in February 2020, urging the state to uphold their rights as employees. Using a Web app over the coming months, drivers filed 5,600 claims worth $1.3 billion and triggered a state wage theft suit against Uber and Lyft.

VR companies interested in expanding into new markets like public safety might make the case that their headsets ought to be standard issue, but new tech isn't necessarily better; it's just new. And for UX researchers exploring better software and tech, ethics checklists aren't enough; we need leadership from practitioners in the field, like firefighters and other first responders.

Working conditions for Uber and Lyft drivers bring our attention to labor standards and the issue of enforcement. In 2019, a new ruling in California said that people who do the core work of a business, such as driving for a ride-hail company, should be classified as employees, with all of the rights and protections that entails. To help educate San Francisco workers on their rights under this ruling, I co-organized a legal clinic with a few dozen drivers for Uber, Lyft, and other companies. After a year of meeting and organizing, hundreds of drivers held a rally in front of Uber headquarters to make their demands known. Our group passed out flyers saying we were volunteers in tech collaborating with legal aid workers to help drivers recover lost wages.

Chris was one driver and organizer who had been driving for Lyft so long, he had the founder's number in his phone. He said, "I think this wage claim could be useful for organizing other drivers, but come back when you have something that works." The problem is, if you drive for Uber, filling in a typical wage claim form often takes over two hours, in person with a legal aid worker, because your income is based on piecework; each ride earns you a different dollar amount. So, over the course of a few months, I organized volunteers, drivers, and legal aid workers, and we made a spreadsheet tool to crunch the numbers faster.
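The arithmetic the spreadsheet automated can be sketched roughly as follows. Everything here, from the rates to the ride data to the `estimate_claim` helper, is a made-up illustration of the general idea (what employee status would guarantee, minus what piecework actually paid), not the actual tool or California's legal formula:

```python
# Rough sketch of a piecework wage-claim estimate: compare what a driver
# actually earned per ride against what employee status would guarantee
# (minimum wage for hours worked plus mileage reimbursement). All rates
# and ride data are made-up illustrations, not the actual tool or
# California's legal formula.

MIN_WAGE_PER_HOUR = 15.00   # assumed local minimum wage
MILEAGE_RATE = 0.58         # assumed per-mile expense reimbursement

def estimate_claim(rides):
    """Estimate unpaid wages: guaranteed employee pay minus actual piecework pay."""
    actual_pay = sum(r["payout"] for r in rides)
    hours = sum(r["minutes"] for r in rides) / 60
    miles = sum(r["miles"] for r in rides)
    guaranteed = hours * MIN_WAGE_PER_HOUR + miles * MILEAGE_RATE
    return max(0.0, guaranteed - actual_pay)

# Two hypothetical rides from a driver's earnings history
rides = [
    {"payout": 7.50, "minutes": 35, "miles": 11.2},
    {"payout": 12.00, "minutes": 50, "miles": 18.0},
]
print(f"Estimated claim: ${estimate_claim(rides):.2f}")
```

A driver's actual history runs to thousands of rides, which is why doing this by hand with a legal aid worker took hours and why a shared calculator changed what drivers could collectively demand.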

At last, Chris gave the spreadsheet calculator a try and liked it. We also tested the tool with deaf drivers who spoke English as a second language, and they began teaching each other how to file claims. But success came from a leadership pivot, when one driver used his familiarity with wage data to create a new spreadsheet that produced more reliable claim estimates.

After drivers completed a few hundred claims, they marched to California labor standards enforcement offices in three cities to say, "We need you state officials to enforce the law." A few reporters came out to cover the event. A few months later, we turned the spreadsheet into a Web app and brought the wage claim process from two hours down to 15 minutes on a phone. Soon, around 5,600 drivers had filed wage claims worth more than $1.3 billion, forcing the California labor commissioner to file a wage theft lawsuit against Uber and Lyft [9]. The lawsuit is still pending because those companies are fighting it, based on a different view of labor and tech standards. Sadly, Chris passed away in 2023, but he was remembered at a memorial attended by family as well as fellow drivers and organizers who continue building a union to protect their rights.

Better Behavior

One fact familiar to smartwatch makers, first responders, ride-hail drivers, Uber and Lyft, and the state of California is that you can really get what you want by creating standards that favor you.

As researchers, we too can exert positive influence by changing standards, a subset of policy. Perhaps the most strategic approach to UX research, then, is creating a new or better standard; that's where ethics come in. This line of inquiry helps integrate ethics and standards:

  • What change do you want to see in your organization?
  • What standard is blocking that change?
  • How and why is it "standard"?

If the organization keeps talking about ethics but doesn't do better, we may have to change law or policy. And depending on our place in production, we may have to organize our workplace and sector to get that done.

References

1. Smith, D.K., and Alexander, R.D. Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer. William Morrow & Co., 1988.

2. IDEO. The Little Book of Design Research Ethics. 2015; https://www.ideo.com/post/the-little-book-of-design-research-ethics

3. Costanza-Chock, S. Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press, 2020; https://direct.mit.edu/books/oa-monograph/4605/Design-JusticeCommunity-Led-Practices-to-Build-the

4. Whittaker, M. Reclaiming the future: Privacy, ethics & organizing in tech. City Arts & Lectures. Jun. 7, 2019 (audio recording); https://www.cityarts.net/event/the-future-of-civil-liberties-privacy-ethics-in-tech/

5. Moss, E. et al. 2019. Owning ethics: Corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research 86, 2 (Summer 2019), 449–476; https://muse.jhu.edu/article/732185

6. Ali, S.J. et al. Walking the walk of AI ethics: Organizational challenges and the individualization of risk among ethics entrepreneurs. Proc. of FAccT 2023; https://arxiv.org/pdf/2305.09573.pdf

7. Spitzberg, D. et al. Principles at work: Applying "design justice" in professionalized workplaces. CSCW Workshop, 2020; https://techotherwise.pubpub.org/pub/djnaw/release/2

8. Spitzberg, D. and Pei, Y. Justice in the gray zone. In From the Desks of Designers and Researchers Bringing Design Justice Principles to Work. Design Justice Network, 2022, 6–9; bit.ly/47tNlML

9. Hawkins, A.J. California labor commissioner sues Uber and Lyft for alleged wage theft. The Verge. Aug. 5, 2020; https://www.theverge.com/2020/8/5/21356096/uber-lyft-california-labor-commissioner-lawsuit-driver-classification

Author

Danny Spitzberg is a visiting researcher at the Georgia Institute of Technology. As a user researcher and sociologist, he facilitates and studies worker codesign. [email protected]


Copyright 2023 held by owner/author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.
