Cover story

XXXI.4 July - August 2024

UX Matters: The Critical Role of UX in Responsible AI


Authors:
Q. Vera Liao, Mihaela Vorvoreanu, Hari Subramonyam, Lauren Wilcox


Let's imagine a scenario—inspired by true events—in which a company has deployed an AI-powered system in a hospital. The system provides recommendations for treatment plans. Some clinicians find that the new system requires them to change their routine and significantly adds to their workload, so they start resisting the use of the system. Other clinicians are amazed by this new, powerful technology and overly rely on the AI by accepting its recommendations even when they are incorrect, resulting in medical errors with negative effects on patients—especially women, for whom the AI system tends to underperform.

Insights

  • UX practitioners can play an instrumental role in responsible AI (RAI) practices because UX and RAI share common goals, principles, and perspectives.
  • UX is uniquely positioned to contribute to RAI throughout the AI development cycle, from understanding the sociotechnical system and ideation to creating design interventions and evaluation.
  • Organizations should prioritize and incentivize the involvement of the UX discipline; the UX discipline should expand its toolkit to meet RAI-specific product requirements.


Unfortunately, scenarios like this occur all too often as AI technologies are deployed across domains. A responsible approach to AI development and deployment would have aimed to prevent these issues, and UX practitioners could have played an instrumental role in it. For example:

  • UX researchers could have identified stakeholder needs, concerns, and values, including those of clinicians and patients, to inform better choices of AI use cases, datasets, model parameters, evaluation criteria, and so on.
  • UX designers could have taken a leading role in designing the system interactions to be compatible with current work processes, and created interface features that help mitigate overreliance on AI.

Responsible AI (RAI)—an umbrella term for approaches to understand and mitigate AI technologies' harms to people—has entered academic work, policy, industry, and public discourse. In this article, we argue that the UX discipline shares common goals, principles, and perspectives with RAI, and that UX practitioners can be instrumental to RAI practices throughout the AI development and deployment cycle. Drawing from our work studying AI UX practices and the RAI ecosystem [1,2,3,4,5,6], we discuss concrete contributions that the UX discipline is uniquely positioned to make to RAI and suggest paths forward to remove current hurdles that prevent realization of these contributions.

Converging Paths: The Intersection of UX and RAI in Sociotechnical Perspectives

To mitigate harms of AI to people—individuals, communities, and society—RAI uses a set of principles and best practices to guide the development and deployment of AI systems. For example, a 2020 report from the Berkman Klein Center reviewed 36 sets of prominent responsible and ethical AI principles and mapped them to eight themes: fairness and nondiscrimination, transparency and explainability, accountability, privacy, safety and security, human control, professional responsibility, and promotion of human values. Recently, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (RMF), which is becoming a guiding document for the AI industry in the U.S. The AI RMF lists characteristics of trustworthy AI that are generally consistent with these themes (Figure 1).

Figure 1. Characteristics of trustworthy AI systems specified in the NIST AI Risk Management Framework 1.0 (re-created from Figure 4 in https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf).

More fundamentally, RAI foregrounds a sociotechnical perspective. As the NIST AI RMF document puts it:

AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks—and benefits—can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.


Understanding and positively augmenting this interplay requires the combined expertise of people who specialize in technology and people who specialize in understanding people, such as those working in UX.

For decades, the UX discipline has been at the forefront of championing human-centered values that directly reflect and promote themes that are now central to RAI. For example, the widely appreciated "Guidelines for Human-AI Interaction" [1] make concrete design recommendations for providing transparency, enabling human control, and ensuring reliability, safety, and resilience in human-AI systems. These UX recommendations resonate deeply with the principle-guided methodology of RAI, which advocates for iterative development and evaluation focused on human-centric outcomes. RAI's emphasis on considering the context-specific consequences for people along each RAI principle aligns seamlessly with foundational UX methods such as human-centered design, which prioritizes the needs of end users at every stage of the design process, and value-centered design, which extends this focus to include the stakeholders, values, ethics, and sociocultural context surrounding the technology.

How Can UX Contribute to RAI?

Given the synergies between the core values and practices across UX and RAI, UX researchers and designers can play an instrumental role in operationalizing RAI within product teams. Below, we consider a non-exhaustive list based on our work interviewing and observing many UX practitioners who work on AI-powered products.

Explicating the "socio" components of the sociotechnical system through UX research. The "socio" components—who the stakeholders are; what their values, needs, and concerns are; how the deployment context(s) is set up—are often an integral part of UX research outcomes for any product. For RAI practice, they can be instrumental. Most important, knowledge about the "socio" components is necessary for contextualizing and operationalizing RAI principles for a specific system or product. For example, fairness is a socially constructed and negotiated concept that varies across contexts and cultures. Developers of an AI-powered system must choose fairness criteria and harm mitigation strategies based on who could be harmed by unfairness and how. The Fairness Checklist [7]—a useful RAI resource aimed at supporting the development of fairer AI products—starts with an "Envision" step, which includes scrutinizing the system's potential fairness-related harms to various stakeholder groups and soliciting their input and concerns. UX researchers are well poised to lead this step. Let's take our opening scenario as an example. UX researchers could identify and conduct research with various stakeholder groups, for example, communities of patients, clinicians, and other hospital staff of different roles and career stages. The research could help define fairness criteria for the AI system, such as ensuring comparable error rates across patient demographic groups and comparable performance improvement and workload changes across clinician groups. These criteria should then guide all the following steps, from defining system architecture, dataset choices, and UX designs to evaluating system readiness for deployment. Similarly, UX research can provide inputs to operationalize other RAI principles (e.g., identifying what people want to understand about the model as explainability criteria [5]) and inform better model or system architectures accordingly (e.g., working with data scientists to translate what people care about in the deployment context to features or signals for the model).
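To make this fairness criterion concrete, here is a minimal sketch, in Python, of one way a team might check for comparable error rates across patient demographic groups using a hypothetical log of the system's recommendations. The column names, group labels, and disparity threshold are illustrative assumptions, not prescriptions from the Fairness Checklist or any particular product.

```python
# Minimal, hypothetical sketch: compare the AI system's error rates across
# patient demographic groups from a log of its recommendations. All column
# names and the tolerance threshold are illustrative assumptions.
import pandas as pd

def error_rate_by_group(log: pd.DataFrame, group_col: str = "patient_group") -> pd.Series:
    """Return the fraction of incorrect recommendations per demographic group."""
    errors = log["recommended_plan"] != log["clinically_correct_plan"]
    return errors.groupby(log[group_col]).mean()

def flag_disparity(rates: pd.Series, max_gap: float = 0.05) -> bool:
    """Flag the system if the gap between best- and worst-served groups exceeds max_gap."""
    return (rates.max() - rates.min()) > max_gap

# Example usage with toy data
log = pd.DataFrame({
    "patient_group": ["A", "A", "B", "B", "B"],
    "recommended_plan": ["x", "y", "x", "x", "y"],
    "clinically_correct_plan": ["x", "y", "y", "x", "x"],
})
rates = error_rate_by_group(log)
print(rates)                  # per-group error rates
print(flag_disparity(rates))  # True if the disparity exceeds the chosen threshold
```

A check like this is only one possible operationalization of the criterion; the point is that the choice of groups and thresholds should come from the stakeholder research described above, not from the data science team alone.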

Facilitating purposeful and responsible use of AI through responsible ideation. RAI must start with the question of whether an AI capability should be built and whether AI is fit for purpose. Blindly adopting AI that does not solve a real user problem, or that mismatches stakeholders' needs and goals, is not only a waste of resources but can also be a root cause of unintended harms. Identifying purposeful and responsible uses of AI for a given product requires understanding problem spaces and identifying unmet needs, then considering whether AI is needed or whether there is a better alternative, such as using deterministic software. It also requires exploring possible AI-powered system features and assessing their risks and benefits. Such assessment requires understanding how the system features behave in interaction and affect people. All of these tasks—identifying needs, exploring the design space of system features and envisioning their effect on people (as well as guiding the team to do so), and testing possible solutions—are within the UX discipline's core toolbox. In our opening scenario, if UX practitioners were involved early on, they could start by identifying pain points in clinicians' treatment decision-making process and understanding which aspects of decision making might or might not fit a workflow that leverages AI capabilities. They could guide the team to thoroughly explore possible AI and non-AI features to address these pain points and compare these features through user testing. The product team might end up landing on a more effective solution than "treatment prediction," such as a system to help clinicians gather treatment evidence for similar patients or compare treatment options. Despite the critical role of responsible ideation, the unfortunate reality is that, in many organizations, UX professionals are currently not involved in the product-feature definition stage of AI-powered systems, resulting in a missed opportunity for RAI.

Creating interaction-level RAI interventions. The UX discipline can significantly expand RAI interventions to mitigate potential harms of AI. Taking the RAI principle of explainability as an example, technical solutions (e.g., explainable AI algorithms or techniques) often provide complex outputs about how the model works. They may be challenging for a stakeholder to consume, and recent research shows that they risk causing information overload and even exacerbating harmful overreliance on AI. UX designers can create AI explanation designs that are easier to understand, such as leveraging the modality (e.g., text or visualization) that the stakeholder group is more accustomed to, lowering the information workload, or giving users more control through progressive disclosure and interaction. To make an AI-powered system more reliable, safe, fair, and privacy-preserving, UX can provide a range of design features [2], such as system guardrails, alternative modalities to support accessibility, transparency about risks and limitations, control mechanisms for people to take action, and paths for auditing, feedback, and contestability. Taking the treatment-prediction AI system as an example again, to mitigate potential fairness issues, UX designers could create a warning that appears when the current patient belongs to a group for which the model had scarce similar training data. They could also create a feedback feature for clinicians to report erroneous model predictions to help improve the AI system's future performance.
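As a rough illustration of the warning feature described above, the following sketch checks how well represented the current patient's group was in the model's training data and returns a caution message for the interface when coverage falls below a threshold. The group definitions, threshold, and message wording are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: surface a UI warning when the current patient belongs to
# a group that was sparsely represented in the model's training data.
from collections import Counter
from typing import Optional

def build_coverage_index(training_groups: list[str]) -> Counter:
    """Count how many training examples fall into each patient group."""
    return Counter(training_groups)

def low_coverage_warning(patient_group: str,
                         coverage: Counter,
                         min_examples: int = 100) -> Optional[str]:
    """Return a warning string for the UI if the patient's group is under-represented."""
    n = coverage.get(patient_group, 0)
    if n < min_examples:
        return (f"Caution: this recommendation is based on only {n} similar "
                f"training cases for this patient group; review it carefully.")
    return None

# Example usage
coverage = build_coverage_index(["A"] * 500 + ["B"] * 40)
print(low_coverage_warning("B", coverage))  # warning shown for the sparse group
print(low_coverage_warning("A", coverage))  # None; no warning needed
```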

Carrying out responsible evaluation. RAI requires iterative development and evaluation against RAI principles. RAI evaluation must involve much broader approaches than model-centric performance metrics to reflect the real effects and potential harms of AI-powered systems on stakeholders in their deployment contexts. Human-centered evaluation is a core part of the UX discipline, encompassing a range of quantitative and qualitative methods that can cater to different evaluation criteria and the practical situations of stakeholder groups. RAI practices can benefit from involving UX practitioners to lead the evaluation steps. UX practitioners can work with data scientists to define and operationalize evaluation metrics for RAI principles, advocating for evaluation methods that involve stakeholders to reflect their real needs and behaviors. For example, when evaluating whether the treatment-prediction AI system introduces fairness issues for clinicians, the benchmark datasets that data scientists typically use may not suffice. Instead, UX researchers can design and conduct user studies to compare the outcomes and experiences of different clinician groups, considering a multitude of experience-related metrics, such as efficiency, workload, and subjective satisfaction. UX can also expand the RAI evaluation toolbox with practical, relatively lower-cost "test-drive" approaches, such as lab experiments and surveys, that can help surface RAI issues early on.
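The following sketch illustrates one way the results of such a user study might be summarized: per-group descriptive statistics for a few experience-related metrics, so that disparities between clinician groups are easy to spot. The metrics, column names, and toy data are assumptions for illustration only; a real study would also need appropriate statistical tests and qualitative analysis.

```python
# Hypothetical sketch: summarize experience-related metrics per clinician group
# from user-study data so that group disparities are visible at a glance.
import pandas as pd

def compare_groups(study: pd.DataFrame,
                   group_col: str = "clinician_group",
                   metrics: tuple = ("task_minutes", "workload", "satisfaction")) -> pd.DataFrame:
    """Report mean and standard deviation of each metric per clinician group."""
    return study.groupby(group_col)[list(metrics)].agg(["mean", "std"])

# Example usage with toy study data
study = pd.DataFrame({
    "clinician_group": ["nurse", "nurse", "physician", "physician"],
    "task_minutes":    [12.0, 14.5, 8.0, 9.5],   # time to complete the task
    "workload":        [62, 58, 40, 45],         # e.g., a 0-100 workload rating
    "satisfaction":    [3, 4, 5, 4],             # e.g., a 1-5 rating
})
print(compare_groups(study))
```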


All of these contributions reflect the well-established role of UX professionals as advocates for stakeholders' values, needs, and concerns. We recognize that an RAI lens is a reflexive and ongoing perspective that seeks to account for the social responsibilities of those who develop and shape a technology, which is inherent to the role of the UX discipline [4]. UX professionals can play a vital role in cultivating a shared RAI lens by actively collaborating and negotiating with non-UX professionals such as data scientists, engineers, and product managers. In short, UX professionals can act as human advocates to reinforce an RAI lens throughout the AI development and deployment cycle.

Paths Forward for UX to Meet RAI Needs

The instrumental role of UX and the above-mentioned UX contributions have not been fully realized in most organizations' current RAI practices. Drawing on our research bridging UX and RAI, we suggest paths forward for organizations and the UX discipline to enable meaningful impact on RAI.

For organizations that prioritize RAI, consider the following:

  • Involving UX researchers and designers early on, in the product ideation and definition stage, and throughout the iterative development and evaluation of datasets, models, and systems.
  • Defining and incentivizing UX professionals' role in RAI and actively promoting a shared understanding of possible UX contributions to RAI outcomes—beyond UI designs, advocating for UX's involvement in all the stages discussed in the "How Can UX Contribute to RAI?" section. It is also necessary to incentivize UX individuals' RAI-specific contributions. For example, it may be time for organizations to create and recruit for RAI-specific UX roles, as role uncertainty often results in RAI falling through the cracks.
  • Facilitating collaboration between UX and non-UX disciplines. The traditional separation-of-concerns practice that isolates UX work from engineering work is not compatible with RAI [3], which requires tightly coupled understanding and augmentation of the "socio" and "technical" components. Cross-disciplinary collaboration requires ongoing organizational work and practical steps to break the expertise boundaries and cultural barriers by being intentional about developing a common language and establishing frequent touch points across functions. For example, recent research [8] suggests that AI design guidelines can be used as a boundary object to facilitate cross-disciplinary collaboration by establishing shared goals and languages, and empowering UX practitioners to take a lead in fostering a human-centered AI culture.

For the UX discipline (as well as academic HCI research informing UX practices) to meet the demand of RAI, consider the following:

  • Developing research, design, and evaluation methods; design frameworks and principles; and other UX toolboxes specific to RAI principles. For example, current UX evaluation metrics may fall short in capturing the potential harms of an AI system. These new methods and toolboxes should aim to inform not only design solutions but also choices of datasets, model architectures, and other algorithmic solutions. Further assessments should evaluate the potential societal impact, ethical considerations, and unintended consequences of AI systems, guiding design decisions toward more-responsible outcomes. In addition to the Fairness Checklist [7] we mentioned earlier, another example of such a UX tool is the "question-driven design process for explainability" that we proposed [5]. This design process starts by identifying stakeholders' explainability needs (expressed as the questions they ask), which then guide the selection of explainability techniques, in collaboration with data scientists, and the design and evaluation of explainability features built on the chosen techniques.
  • Expanding UX research and design perspectives from more-transactional interactions with users to community groups and affected stakeholders. Stakeholder groups may include domain experts, affected communities (who may or may not be direct users), policymakers, and members of the public. This requires adaptation and innovation of existing user research methods: Community-collaborative approaches to AI involve community and stakeholder groups and build on traditions such as collaborative, speculative design and community-based participatory or action research.
  • Strengthening the critical and ethical lenses in UX training and resources. Many resources have emerged in recent years aiming to educate UX professionals about the affordances and design opportunities of AI technologies. While the AI readiness of UX practitioners is important, they also need to be sensitized to the limitations, risks, and ethical considerations of different AI technologies. We note that it may be insufficient to provide a high-level description of limitations, especially for current complex and multicapability AI technologies such as generative AI. More-sophisticated resources are required to support UX professionals in exploring and anticipating an AI technology's risks specific to its stakeholders and deployment contexts. For example, an AI incident database could be a useful resource for anticipating risks and potential harms of AI. Recent HCI research has developed tools (e.g., [6]) that allow UX designers to "tinker" with AI as a design material—for example, observing different inputs to and outputs from a model, directly incorporating model inputs-outputs into prototyping, and testing and understanding possible failures of a model in a specific application context (see the sketch after this list).
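To illustrate the kind of "tinkering" the last point describes, here is a hypothetical sketch of probing a model with varied inputs and tabulating where it fails in a specific application context. The toy model, labels, and probe attributes are stand-ins; this is not the interface of fAIlureNotes [6] or any particular tool.

```python
# Hypothetical sketch: probe a model with varied inputs and group its failures
# by a context attribute a designer cares about (e.g., lighting conditions).
from collections import defaultdict
from typing import Callable

def probe_failures(predict: Callable[[dict], str],
                   probes: list[dict],
                   context_key: str = "lighting") -> dict:
    """Group mispredicted probe inputs by a chosen context attribute."""
    failures = defaultdict(list)
    for probe in probes:
        if predict(probe) != probe["expected_label"]:
            failures[probe[context_key]].append(probe)
    return dict(failures)

# Example usage with a toy stand-in "model" that struggles in dim lighting
toy_model = lambda p: "cat" if p["lighting"] == "bright" else "unknown"
probes = [
    {"image_id": 1, "lighting": "bright", "expected_label": "cat"},
    {"image_id": 2, "lighting": "dim", "expected_label": "cat"},
    {"image_id": 3, "lighting": "dim", "expected_label": "cat"},
]
print(probe_failures(toy_model, probes))  # failures concentrated in "dim" contexts
```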

We hope this article provides a starting point to further align RAI and UX practices, as well as convincing arguments for the AI industry to prioritize UX. RAI work is becoming all the more important and challenging in the current "AI arms race." Looking inside to leverage and build on existing UX expertise and practices within an organization may pave the way for responsible and human-centered AI technologies.

References

1. Amershi, S. et al. Guidelines for human-AI interaction. Proc. of CHI 2019. ACM, New York, 2019.

2. Liao, Q.V., Subramonyam, H., Wang, J., and Wortman Vaughan, J. Designerly understanding: Information needs for model transparency to support design ideation for AI-powered user experience. Proc. of CHI 2023. ACM, New York, 2023.

3. Subramonyam, H., Im, J., Seifert, C., and Adar, E. Solving separation-of-concerns problems in collaborative design of human-AI systems through leaky abstractions. Proc. of CHI 2022. ACM, New York, 2022.

4. Wang, Q., Madaio, M., Kane, S., Kapania, S., Terry, M., and Wilcox, L. Designing responsible AI: Adaptations of UX practice to meet responsible AI challenges. Proc. of CHI 2023. ACM, New York, 2023.

5. Liao, Q.V., Pribić, M., Han, J., Miller, S., and Sow, D. Question-driven design process for explainable AI user experiences. arXiv preprint arXiv:2104.03483, 2021.

6. Moore, S., Liao, Q.V., and Subramonyam, H. fAIlureNotes: Supporting designers in understanding the limits of AI models for computer vision tasks. Proc. of CHI 2023. ACM, New York, 2023.

7. Madaio, M.A., Stark, L., Wortman Vaughan, J., and Wallach, H. Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. Proc. of CHI 2020. ACM, New York, 2020.

8. Yildirim, N., Pushkarna, M., Goyal, N., Wattenberg, M., and Viégas, F. Investigating how practitioners use human-AI guidelines: A case study on the people + AI guidebook. Proc. of CHI 2023. ACM, New York, 2023.

Authors

Q. Vera Liao is a principal researcher at Microsoft Research Montreal, where she is part of the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI, with an overarching goal of bridging emerging AI technologies and human-centered perspectives. [email protected]

Mihaela Vorvoreanu leads UX Research and Responsible AI Education at Aether, Microsoft's research and advisory body for AI ethics and effects in engineering and research. She is an expert in human-centered AI and an AI and RAI educator, giving frequent talks to major companies' leadership and to Microsoft employees. Before joining Microsoft, she had a career in academia, most recently as a tenured professor at Purdue University. [email protected]

Hari Subramonyam is a research assistant professor at Stanford University. His research sits at the intersection of HCI and learning sciences. He explores enhancing human learning through AI, emphasizing ethical design, cocreation with educators, and developing transformative AI-powered learning experiences. [email protected]

Lauren Wilcox has held research and organizational leadership roles in both industry and academia. At Google Research, she was a senior staff research scientist and group manager of the Technology, AI, Society and Culture (TASC) team. She holds an adjunct associate faculty position at Georgia Tech's School of Interactive Computing. She is an ACM Distinguished Member and was an inaugural member of the ACM Future of Computing Academy. [email protected]


Copyright held by authors. Publication rights licensed to ACM.

