Authors:
Matt Jones, Dani Kalarikalayil Raju, Jen Pearson, Thomas Reitmaier, Simon Robinson, Arka Majhi
Biases in AI datasets are well known, and their impacts in misrepresenting gender and ethnicity regularly surface in generative AI (GenAI) services. Many of us have seen examples of these—some absurd, some comical, and others completely inappropriate. A notorious example is the image Midjourney produced from the prompt "Black African doctor is helping poor and sick white children, photojournalism." The image showed a white "savior" doctor helping Black children [1]. And what would you expect to see given the following two prompts? "A person at social services" and "a productive person." Stable Diffusion, another…