
Why codesigning AI is different and difficult


Authors: Malak Sadek, Rafael Calvo, Céline Mougenot
Posted: Tue, June 27, 2023 - 12:04:00

It is estimated that 98 percent of the population are novices with regard to technology (excluding extremes such as infants) [1]. It is this 98 percent, however, that makes up the majority of users and stakeholders affected by AI-based systems. It then makes sense that members of this segment of the population should be involved in designing these systems, beyond just the small, homogenous set of experts currently involved.

There have been countless calls for the introduction of a transdisciplinary, participatory design process for AI/ML systems [2,3]. Such a collaborative design (codesign) process has been heralded as especially useful in aiding explainability and transparency [4], embedding values into AI-based systems [5], providing accountability, and mitigating downstream harms arising from several cascading biases and limitations [6]. There have also been calls for collaboration within the entire AI pipeline, including in data creation and selection, instead of having designers at the front end of the process and engineers at the back end [7]. In fact, it has been said that the only way to combat existing structural and data biases that creep into AI systems is to step away from custom-built, solely technical solutions and view AI systems as sociotechnical systems [8]. By looking across them as opposed to inside of them and using open constructive dialogues, collaborations, and group reflections [9], we can bridge the gulf between stakeholder visions and what gets implemented [7]. Having the needed voices give their input in a meaningful, impactful way through codesign activities can allow us to examine this broader frame of the wider sociocultural contexts in which AI systems are being used [10].

Despite the numerous cited benefits of and calls for codesigning AI systems, it is still not a widespread practice. The communication gap between designers’ products and services, developers’ algorithms and systems, and users’ needs and applications is one of the largest challenges when it comes to using AI systems in real-world settings, where their implications could be life-altering for different stakeholders [10].

Bridging this gap is a uniquely and notoriously difficult process for AI systems, as they differ inherently from other types of systems in ways that require various considerations across different design and development phases [10]. We summarize several of these challenges below in an effort to consolidate the barriers to codesigning AI-based systems. The goal is to start a conversation on possible solutions and remedies to enable a much-needed, more widespread adoption of codesign practices for these systems.

Why Is Codesigning AI Different and Difficult?

Data considerations

  • There are several extra considerations when it comes to AI systems about which datasets to use or what type of data to collect, as these systems are based on existing training data from the get-go, as opposed to other systems where user data can be collected after the system has been created for evaluation purposes. Additionally, assessing the initial feasibility of many ideas and being able to develop prototypes for them will be largely dependent on whether the data exists or can be collected.
  • There are also several ethical and moral concerns that arise regarding data provenance and data quality.
  • Both technical and nontechnical participants tend to focus on the ML model itself, overlooking the rest of the wider system, the data needed, and the broader sociotechnical and interactional contexts surrounding the system and in which it is embedded. This is especially true when tackling value-related or nonfunctional requirements such as fairness. Finally, there is a general lack of awareness regarding values and nonfunctional requirements in the field and how different ML design decisions affect them, in addition to no regulations regarding their elicitation and use, and a deficiency of tools and methods for measuring them.
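Because training data is a precondition rather than an afterthought, even simple data-quality audits can ground early codesign conversations. The sketch below is a minimal, illustrative example (the records and field names are invented) of surfacing missing-value rates before any model is discussed.

```python
# Minimal sketch: surface basic data-quality signals ahead of codesign
# discussions. The toy records and field names here are illustrative,
# not drawn from any real dataset.
from collections import Counter

def missing_rates(rows, fields):
    """Return the share of records with a missing (None) value, per field."""
    missing = Counter()
    for row in rows:
        for f in fields:
            if row.get(f) is None:
                missing[f] += 1
    return {f: missing[f] / len(rows) for f in fields}

rows = [
    {"age": 34,   "income": 52000},
    {"age": None, "income": 48000},
    {"age": 29,   "income": None},
    {"age": None, "income": 61000},
]
rates = missing_rates(rows, ["age", "income"])  # {"age": 0.5, "income": 0.25}
```

A summary like this gives nontechnical participants something concrete to react to ("half the age values are missing: is that acceptable for our use case?") before any modeling decisions are locked in.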

Ideation and communication

  • It is difficult for technical experts to explain to users and nontechnical experts an AI’s behavior, what counts as AI, and what it can/cannot do.
  • It is also difficult for designers to communicate AI design ideas, interactions, and appropriate use cases/user stories to technical experts, codesign partners, and users. It is also challenging to imagine ways to purposefully use AI to solve a given problem, creating an overall “capability uncertainty” [10].
  • It is challenging for designers and developers to understand how to collaborate and co-ideate without a common language or shared boundary objects, especially when designers join late in the project.

User research

  • Embedding values into AI systems must be planned for from the outset, instead of being added post hoc, because the way the system is built will influence which kinds of values and ethical systems can be ‘loaded in.’ There then becomes an additional step of eliciting those values that diverse stakeholders agree on early in the process and upholding them on a sociotechnical scale.
  • User experience research also becomes more difficult for AI systems, as user profiles and preferences are created dynamically during interactions and there is no predefined persona or profile to check against.

Design, development, and testing

  • Designing AI systems goes beyond simply grounding design outcomes in what is technically viable; it also requires understanding and designing for the critical trade-offs between different algorithms and understanding the constraints, abilities, and interactions of the model.
  • It is difficult to anticipate unpredictable or unwanted AI behaviors and effects, and how they will evolve over time. It is also often unclear whom to hold accountable, and very difficult to communicate all of this to users.
  • It is also challenging to design and visualize branching, open-ended AI interactions, difficult to envision the potential of AI for use cases that do not already exist, and difficult to rapidly prototype AI ideas. Designing interactions for the incredibly complex and diverse outputs and possible errors of an AI system is challenging.
  • Compared to other systems, it is more difficult to respect and implement some stakeholder values or nonfunctional requirements such as transparency and fairness, given the complex, uncertain, and often unpredictable nature of AI/ML systems (and the high-level, general nature of those values). It is also challenging to measure them within those systems.
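Part of the measurement challenge above is that a high-level value like "fairness" must first be operationalized, and the choice of metric is itself a value-laden design decision that codesign participants could weigh in on. As one hedged illustration, the sketch below computes demographic parity difference, one common (and contested) operationalization: the gap in positive-prediction rates between two groups. The predictions and group labels are invented for illustration.

```python
# Hedged sketch: one way to turn the abstract value "fairness" into a
# measurable quantity -- demographic parity difference, the gap in
# positive-prediction rates between two groups. All data is illustrative.
def demographic_parity_diff(preds, groups, group_a="A", group_b="B"):
    """Absolute difference in positive-prediction rate between two groups."""
    def rate(g):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(selected) / len(selected)
    return abs(rate(group_a) - rate(group_b))

preds  = [1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical binary model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_diff(preds, groups)  # 0.75 - 0.25 = 0.5
```

Whether a gap of 0.5 is unacceptable, and whether demographic parity is even the right lens, are exactly the kinds of questions that benefit from the broader, sociotechnical dialogue the authors call for rather than from engineers alone.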

This list is by no means comprehensive but instead aims to be a starting point for a dialogue on the different challenges that appear during various design stages for AI-based systems. By summarizing and consolidating these challenges, we can begin to discuss solutions and mitigations as a community in order to make codesigning AI systems an easier and more practical endeavor.

Endnotes

1. Sadler, J., Aquino Shluzas, L., and Blikstein, P. Abracadabra: Imagining access to creative computing tools for everyone. In Design Thinking Research. H. Plattner, C. Meinel, and L. Leifer, eds. Springer, 2018, 365–376.

2. Gabriel, I. Artificial intelligence, values, and alignment. Minds and Machines 30 (2020), 411–437.

3. Yu, B., Yuan, Y., Terveen, L., Wu, S., Forlizzi, J. and Zhu, H. Keeping designers in the loop: Communicating inherent algorithmic trade-offs across multiple objectives, Proc. of the ACM Designing Interactive Systems Conference. ACM, New York, 2020.

4. Chazette, L. and Schneider, K. Explainability as a non-functional requirement: Challenges and recommendations. Requirements Engineering 25 (2020), 493–514.

5. Crawford, K. and Calo, R. Here is a blind spot in AI research. Nature 538 (2016), 311–313.

6. Delgado, F., Barocas, S., and Levy, K. An uncommon task: Participatory design in legal AI. Proc. of the ACM on Human-Computer Interaction 6, CSCW1 (2022).

7. Frost, B. and Mall, D. Rethinking designer-developer collaboration. 2020; https://www.designbetter.co/podcast/brad-frost-dan-mall

8. Ananny, M. and Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20, 3 (2018), 973–989.

9. Yang, Q., Scuito, A., Zimmerman, J., Forlizzi, J., and Steinfeld, A. Investigating how experienced UX designers effectively work with machine learning. Proc. of the CHI Conference on Designing Interactive Systems. ACM, New York, 2018.

10. Yang, Q., Steinfeld, A., Rosé, C., and Zimmerman, J. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. Proc. of the CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020.



Malak Sadek

Malak Sadek is a design engineering Ph.D. candidate working in the space of conversational agents and value-sensitive design. Her background is in computer engineering and human-computer interaction, and she uses this mixed background to constantly look for ways to bridge between technology and design. [email protected]
View All Malak Sadek's Posts

Rafael Calvo

Rafael A. Calvo is a professor at Imperial College London focusing on the design of systems that support well-being in the areas of mental health, medicine, and education, and on the ethical challenges raised by new technologies. [email protected]
View All Rafael Calvo's Posts

Céline Mougenot

Céline Mougenot is a senior lecturer (associate professor) at the Dyson School of Design Engineering, where she leads the Collaborative Design group. [email protected]
View All Céline Mougenot's Posts


