Features

XXXI.4 July - August 2024

Unmasking AI: Informing Authenticity Decisions by Labeling AI-Generated Content


Authors:
Olivia Burrus, Amanda Curtis, Laura Herman


The rapid proliferation of artificial intelligence technologies has ushered in a transformative era in content creation, and with it, shifting perceptions of what makes content authentic. The notion of authenticity (i.e., the accuracy, reliability, and trustworthiness of content) becomes particularly fraught in the context of content made using generative AI (GenAI) tools [1] such as DALL-E, ChatGPT, or Adobe Firefly. Providing transparency means presenting viewers with clear and understandable information about how content was created, and disclosing the role of AI in that process, if any. Transparency is essential for fostering viewer trust and understanding, as well as for ensuring accountability and support for responsible AI development and deployment.

Insights

AI labeling can be a valuable tool in the fight against mis- and disinformation, helping viewers understand the potential for biases or manipulations.
Creators wish to demonstrate their creative process, while viewers seek information to inform authenticity perceptions.
We need participation in industrywide collaborations, adherence to legislative measures, and the integration of provenance information at both the creation and display stages of AI-generated content.

Many researchers have explored AI-labeling solutions to support this type of transparency for viewers, which involves providing metadata, tags, or annotations to content produced by AI models [2,3,4]. Such studies have shown that surfacing GenAI labels to consumers with text or graphics displayed on or next to content can be a valuable tool in the fight against mis- and disinformation, helping viewers understand the potential for biases or manipulations in the content they are consuming. For example, Chloe Wittenberg et al. [4] propose a framework for labeling AI-generated media according to the goal a specific label is intended to accomplish. The authors propose either process-based transparency (i.e., communicating to users how and from where a particular piece of content originated while "remaining agnostic about the potential consequences of that content for viewers or society" [4]) or harm-based transparency (i.e., communicating to users that a piece of content is potentially misleading or deceptive). Building on this body of work, we conducted research on a process-based transparency approach that presents viewers with key metadata about images. In choosing to remain impact-agnostic, we wanted to understand whether we could empower users to make their own trust decisions without making any value judgments for them (Figure 1).


Figure 1. Example of process-based transparency from the Content Authenticity Initiative.

Whether it is for news media, arts, or product marketing, effective labels should enable viewers to quickly recognize AI's involvement, allowing them to evaluate source credibility, verify content accuracy, gain contextual knowledge, and ultimately make their own informed decisions around trust and authenticity. AI labels should also empower content creators by recognizing innovation, mitigating stigma, and reinforcing creative ownership. But what trust signals do viewers need, what do creators want to share, and how can we design a truly effective labeling system?

Overview of Our Research Studies

Over the past two years, we have conducted a number of interviews and surveys exploring viewer and content creator perceptions of authenticity in AI-generated news, influencer marketing, and art. These studies were part of our larger research initiative to explore the future of digital trust, how to best mitigate harms, and how to empower creators and viewers. Our participants have been primarily North American creators such as art photographers, illustrators, photo and video journalists, and short-form video influencers (using TikTok or Instagram Reels), as well as mostly North American viewers of these creative outputs. We purposefully sampled a wide range of viewers and creators from the U.S. and Canada with diverse lived experiences (including socioeconomic backgrounds, ethnicity, gender identity, and political affiliations). We felt it critical to gain perspectives from both creators, who need to adopt this system, and viewers, who need to make sense of and use it. Future research should also explore the needs of the implementers of AI labeling, including both platforms that enable the information to be surfaced to viewers (social media sites, Web browsers, news media, etc.) as well as the creative tools that would allow creators to use such a labeling system.

In this piece, we report on high-level insights synthesized across these studies. In our first study, we conducted ethnographic interviews on digital trust with seven content creators and 10 viewers. We then dug deeper into creators' perceptions of authenticity and AI-labeling solutions by conducting a second study with 10 creators, observing their interaction with an AI-labeling prototype embedded within a creative tool. In this study, creators explored what such labels could and should look like during their creative processes. Our third study was a concept test, in which we interviewed 19 viewers and asked them to interact with an example of AI labeling on social media posts. Our fourth study aimed to prioritize possible trust signals—specific metadata, tags, or annotations—based on what creators and viewers would most want to share or see, respectively. This mixed-methods study included 66 creators (photojournalists and influencers) and 126 viewers of this type of content; it also evaluated participant knowledge of GenAI and perceptions of its impact on content authenticity. Finally, we conducted two back-to-back unmoderated studies with a total of 47 viewers of art and news media to gather iterative feedback on updated AI-labeling prototypes that incorporated the insights from the previous four studies.

Our findings have allowed us to critically consider approaches to generative AI labeling and implications for creators and corporations. To further our learnings, we recommend additional studies that include international perspectives, particularly from non-Western and nondemocratic countries.

Creator and Consumer Perceptions of Authenticity in the Context of AI

Across studies, we found that viewers' perceptions of GenAI change as contexts change, and thus the information provided to them needs to change accordingly. For example, in one of our studies, viewers believed the use of GenAI in news content to be a clear signal that something is untrustworthy, whereas the same viewers found the use of GenAI to be acceptable in the context of influencer-marketing content. We also found that edits perceived by viewers to be simple auto-enhancements, such as blemish removal or color corrections (whether or not that perception was accurate), have less impact on perceptions of authenticity than those perceived as complex GenAI production, such as turning a frown into a smile, or adding, removing, or replacing an object. Interestingly, in comparison with their viewers, social media influencers were less concerned about the impact of AI-powered auto-enhancements, but they were more concerned about changes made using GenAI. In sum, AI has a time and place in content creation. GenAI is closely tied with mis- and disinformation for many of our participants, and as such, they found it to be more appropriately used in "lower stakes" situations than news reporting. It is critical for viewers of AI-labeling systems to have information about the context of the content in question. Further, how, when, and where GenAI is used in the creation process is paramount, and any labeling solutions should carefully consider how to display this information.

It is important to note that many viewers did not know what generative AI meant, even if they reported using generative AI tools. Specifically, there were two areas of confusion. First, the term itself was confusing to the average person: participants understood that tools such as DALL-E used AI, but they had not encountered the term generative AI before. Second, it was not clear to our participants whether the label "generative AI" implied that an entire piece of content was created using AI or that AI was used in only part of the process. Even without fully understanding what GenAI is, participants perceived that it somehow reduced the authenticity of digital content. Participants' confusion around what exactly AI-generated content is adds a layer of complexity to already tricky trust decisions, where viewers continue to center the human creators, not the technologies, in the content creation process. Viewers wanted information about which elements had been enhanced or generated with AI, what information that AI was trained on (particularly, whose work was used to train the AI model), and how to recenter the human work that goes into content creation. Based on these insights, we recommend considering the content creator's intended audience, message, goals, process, and platform when designing a labeling system for authenticity. We also suggest being mindful of terminology, without assuming that the average person knows what exactly generative AI entails.
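To make these information needs concrete, the sketch below shows, in TypeScript, one way such a label payload could be structured. The interface, field names, and example values are illustrative assumptions; they are not drawn from any existing standard or from the prototypes used in our studies.

```typescript
// Illustrative sketch only: the structure and field names are assumptions,
// not an existing labeling standard or the prototypes used in our studies.

type AiInvolvement = "none" | "auto-enhancement" | "generative-edit" | "fully-generated";

interface ContentLabel {
  contentId: string;                        // identifier for the labeled image or video
  context: "news" | "art" | "influencer-marketing" | "other"; // viewers weigh GenAI differently by context
  creator: { name: string; role: string };  // keeps the human creator centered
  elements: Array<{
    description: string;                    // e.g., "background", "subject's expression"
    involvement: AiInvolvement;             // which parts were enhanced vs. generated
    toolUsed?: string;                      // the tool involved, if disclosed
  }>;
  trainingDataDisclosure?: string;          // whose work the model was trained on, if known
}

// Example record reflecting the kinds of edits participants distinguished.
const exampleLabel: ContentLabel = {
  contentId: "img-001",
  context: "influencer-marketing",
  creator: { name: "Example Creator", role: "photographer" },
  elements: [
    { description: "skin blemishes", involvement: "auto-enhancement" },
    { description: "object added to background", involvement: "generative-edit", toolUsed: "text-to-image model" },
  ],
  trainingDataDisclosure: "not disclosed by the tool vendor",
};

console.log(JSON.stringify(exampleLabel, null, 2));
```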


Effective labels should enable viewers to quickly recognize AI's involvement, allowing them to evaluate source credibility, verify content accuracy, and gain contextual knowledge.


When we shared our prototyped labeling system with content creators, many were open to this approach of sharing an overview of their process with their audience. Creators also saw the value in clarifying the specific role AI played in the content generation and editing processes. Specifically, content creators who were creating AI-generated content viewed AI labeling as a means to transparently communicate the involvement of AI in the creation process, fostering trust with their audiences. AI labeling was also viewed as a way to showcase innovative approaches to content creation and position creators as forward-thinking and adaptive to emerging technologies. On the other hand, creators who were not using AI were also interested in sharing information about AI's lack of involvement in their process, in order to demonstrate the "handmade" nature of their work. So whether or not creators are using AI, our research indicated that most are not only willing to adopt AI labeling but also see value in doing so for themselves and society.

Mitigating Potential Challenges of AI-Labeling Solutions

There are a number of challenges to be considered when developing labeling solutions for generative AI:

  • First, it is critical for labeling mechanisms to inspire consumer trust in the system itself, as misleading labels can severely undermine viewers' confidence in the entire media ecosystem. This assurance is invaluable in an environment where misinformation and manipulation can have far-reaching consequences.
  • Second, AI labeling should provide transparency while also aligning with viewers' mental models and information needs. However, issues such as the degree of algorithmic intervention and when in the creative process AI was employed present substantial challenges. Viewer cognitive load further compounds this issue, requiring a delicate balance between providing sufficient information and avoiding overwhelming users with technical details [5]; a progressive-disclosure approach, sketched below, is one way to strike that balance.
  • Third, a widespread focus on AI-generated content could inadvertently foster viewer skepticism toward media more broadly [3], as they become aware of the presence of AI in the media ecosystem. This could result in viewers discounting authentic information. Such discounting could in turn create situations in which decision makers and influential figures dismiss legitimate media as fake [4,6].
  • Fourth, there is the risk of totalizing and stigmatizing AI by labeling its content, perpetuating biases against GenAI and the creators that use it. As such, stringent labeling requirements may impede the adoption of AI in creative processes by causing content creators to shy away from it, dismissing possible positive use cases for GenAI.
  • Fifth, mandatory labeling has the potential to erode the sense of creative ownership among creators by forcing them to reveal the use of a tool that is currently the subject of strong public debates around copyright, originality, and authorship. Effective AI labeling should address this by serving to reinforce the sense of creative ownership, assuring creators that their unique vision remains central to the artistic process.

These issues underscore the importance of balancing the potential risks and benefits of labeling AI-generated content to foster a more nuanced and constructive approach in the evolving landscape of AI and media.
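
The cognitive-load tension noted in the second challenge suggests progressive disclosure: show a short summary label by default and reveal the full provenance record only on demand. The sketch below illustrates this idea; the record shape and label wording are assumptions of ours rather than part of any existing labeling system.

```typescript
// Illustrative progressive-disclosure sketch; the record shape and label
// wording are assumptions, not part of any labeling standard.

interface ProvenanceRecord {
  aiGenerated: boolean;        // the whole piece was produced by a generative model
  aiEditedRegions: string[];   // regions altered with generative tools
  autoEnhancements: string[];  // simple corrections such as color or blemish fixes
}

// Collapse detailed provenance into a short label viewers can parse at a glance.
function summarizeLabel(record: ProvenanceRecord): string {
  if (record.aiGenerated) return "AI-generated";
  if (record.aiEditedRegions.length > 0) return "Edited with generative AI";
  if (record.autoEnhancements.length > 0) return "Auto-enhanced";
  return "No AI used";
}

// Full detail remains available for viewers who choose to expand the label.
function expandLabel(record: ProvenanceRecord): string[] {
  return [
    ...record.aiEditedRegions.map((r) => `Generative edit: ${r}`),
    ...record.autoEnhancements.map((e) => `Auto-enhancement: ${e}`),
  ];
}

const record: ProvenanceRecord = {
  aiGenerated: false,
  aiEditedRegions: ["replaced background"],
  autoEnhancements: ["color correction"],
};

console.log(summarizeLabel(record)); // "Edited with generative AI"
console.log(expandLabel(record));
```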

Call for a Deeply Researched, Industry-Wide Approach to AI Labeling

In light of the increasing sophistication of GenAI technology, there is an urgent need for user-friendly solutions that disclose which content (and which parts of that content) has been generated by AI. Both the companies developing GenAI software and the platforms that display AI-generated content have a responsibility to address the authenticity concerns expressed by creators and viewers of online content. Companies should be proactively engaging in efforts to increase transparency, accountability, and user control over AI-generated content. Creators would like to demonstrate the mechanisms and processes by which their content was made, while viewers seek information to inform their trust decisions. Therefore, creation software tools should embed provenance information into the content they produce, and content platforms should display this information in a readily accessible manner.
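
As a rough illustration of that division of labor, the sketch below shows a creation tool attaching a minimal provenance record at export time and a display platform reading it back. The types and function names are hypothetical and deliberately much simpler than real provenance formats.

```typescript
// Minimal sketch of that division of labor; the types and functions are
// hypothetical and far simpler than real provenance standards such as C2PA.

interface ProvenanceInfo {
  creator: string;
  generativeAiUsed: boolean;
  editsSummary: string[];      // human-readable list of edits
  issuedAt: string;            // timestamp added by the creation tool at export
}

// Creation side: the authoring tool attaches provenance when content is exported.
function embedProvenance<T extends object>(content: T, info: ProvenanceInfo): T & { provenance: ProvenanceInfo } {
  return { ...content, provenance: info };
}

// Display side: the platform surfaces the record in a readily accessible way.
function renderProvenance(item: { provenance?: ProvenanceInfo }): string {
  if (!item.provenance) return "No provenance information available";
  const p = item.provenance;
  return [
    `Created by ${p.creator}`,
    p.generativeAiUsed ? "Generative AI was used" : "No generative AI used",
    ...p.editsSummary,
  ].join(" | ");
}

const exported = embedProvenance(
  { fileName: "photo.jpg" },
  {
    creator: "Example Photojournalist",
    generativeAiUsed: false,
    editsSummary: ["color correction"],
    issuedAt: new Date().toISOString(),
  }
);

console.log(renderProvenance(exported));
```

In practice, such a record would also need to be cryptographically signed and to survive compression and re-uploads, which is part of what the industry standards discussed below are designed to address.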

There is growing interest in labeling content generated by AI, as companies including OpenAI, Adobe, YouTube (Google), Meta, and TikTok adopt this practice (see examples in Figure 2). This momentum is being reinforced by legislative measures such as the U.S. Congress's AI Disclosure Act, the European AI Act, and similar regulations emerging from other political bodies globally. These regulatory efforts are largely focused on governance and the ethical application of AI technologies, stressing transparency and accountability in AI systems. Recognizing the importance of industry-wide collaboration, self-regulation, and adherence to emerging regulations, initiatives like the cross-sector consortium the Coalition for Content Provenance and Authenticity (C2PA) provide an open access standard for embedding provenance information into the publishing, creation, and consumption experiences of online content (Figure 3).

Figure 2. Example of AI labeling from TikTok (left) and YouTube (right).
Figure 3. Examples of a proposed content credentialing system in action, following the C2PA 2.0 UX Guidance, found at https://c2pa.org/specifications/specifications/1.4/ux/UX_Recommendations.html.

This research has shown that there is a clear need for a labeling system that takes a deeply researched approach, responding to cultural shifts in awareness, comprehension, and trust of GenAI and its outputs. We need active participation in industrywide collaborations, adherence to legislative measures, and the integration of provenance information at both the creation and display stages of AI-generated content. Through such concerted efforts, the industry can collectively advance toward a future where transparency, user control, and trust define the landscape of AI-generated content.

References

1. Epstein, Z. et al. Art and the science of generative AI. Science 380, 6650 (2023), 1110–1111.

2. Epstein, Z., Fang, M.C., Arechar, A.A., and Rand, D.G. What label should be applied to content produced by generative AI? PsyArXiv preprint, 2023. DOI: 10.31234/osf.io/v4mfz

3. Hoes, E., Aitken, B., Zhang, J., Gackowski, T., and Wojcieszak, M. Prominent misinformation interventions reduce misperceptions but increase skepticism. PsyArXiv preprint, 2023. DOI: 10.31234/osf.io/zmpdu

4. Wittenberg, C., Epstein, Z., Berinsky, A.J., and Rand, D.G. Labeling AI-generated content: Promises, perils, and future directions. Topical Policy Brief, MIT Schwarzman College of Computing, 2023; https://computing.mit.edu/wp-content/uploads/2023/11/AI-Policy_Labeling.pdf

5. Sweller, J. Cognitive load during problem solving: Effects on learning. Cognitive Science 12, 2 (1988), 257–285.

6. Ternovski, J., Kalla, J., and Aronow, P.M. The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety 1, 2 (2022). DOI: 10.54501/jots.v1i2.28

Authors

Olivia Burrus was most recently leading UX research at Adobe for the Content Authenticity Initiative, an effort focused on bringing more transparency to digital content by sharing how content was made, who made it, and how it changed over time. [email protected]

Amanda Curtis previously led research for Adobe's Content Authenticity Initiative and is currently a user experience researcher and Ph.D. student at Oxford University's Internet Institute focusing predominantly on gaming, immersive, and emerging technologies. She researches video game experiences, knowledge creation, and creativity. [email protected]

Laura Herman specializes in emerging technologies' impact on creative practices. She is a research leader at Adobe and a doctoral researcher at Oxford University's Internet Institute. She has held research roles at Harvard, Princeton, and Intel and has worked with arts institutions such as the Serpentine Galleries, the Tate, Studio Olafur Eliasson, and Ars Electronica. [email protected]


Copyright held by authors. Publication rights licensed to ACM.

