In April 2023, four months after ChatGPT was released, the Writers Guild of America (WGA), which represents screen and TV writers, was on the verge of a strike vote heading into contract negotiations. Before negotiating, they published a list of the broad goals the union's bargaining committee would take to the table. Nestled between items on discrimination and contract exclusivity was a goal whose language-model-fueled subtext was clear: "Regulate use of material produced using artificial intelligence or similar technologies." The WGA was seeking to effectively self-regulate the use of large language models (LLMs) in their workplace. AI-generated scripts or job automation may not have topped the union's list of demands, but the anxieties that ChatGPT generated for screenwriters offer an important glimpse into the evolving dynamics of creative, white-collar work in the era of large language models.
→ Academics, business leaders, and policymakers treat automation as an uncontrollable force, but is this really the case?
→ Workers have more power to direct the force of automation than many think by using data as leverage in negotiations.
→ Researchers and HCI practitioners can influence this process by working on democratic and participatory AI systems.
This isn't the first time that generative AI has entered Hollywood contracts. Many actors are already familiar with agreements that give studios the right to train generative models using their voice, creating new content using creative workers' intellectual property. So why did the WGA feel the need to include this particular goal in their demands?
First, effective generative AI models are poised to significantly inflate the value that can be extracted from intellectual property such as past screenplays. Using a dataset of past screenplays, accessible and effective models like GPT-4 could allow unskilled managers, producers, and executives to create a torrent of new scripts that are derivative of screenwriters' past work. This transforms culturally important work into valuable raw material for algorithmic production. By limiting how management can use generative AI and what data models can be trained on, the WGA can better control the value that Hollywood management can extract from their members' labor.
Second, and perhaps more important, it would protect screenwriter jobs from a "hollowing out" of their agency and autonomy. After a strike was approved, John August, one of the Charlie's Angels screenwriters, clarified the writers' anxieties, focusing on how management might use an LLM to rewrite existing scripts: "A terrible case of like, 'Oh, I read through your scripts, I didn't like the scene, so I had ChatGPT rewrite the scene'—that's the nightmare scenario." Rather than full job replacement, he worried that if studios could freely generate screenplays using writers' past content, it would undermine the core creative control writers enjoy in their work.
The anxiety that automation and the new forms of organizing work it enables will severely affect creative workers' autonomy is not new. It is a concern that resonates with the principles of operaismo, or workerism, a sociopolitical philosophy that emerged in Italy in the 1960s and 1970s. Operaist theorists were some of the first to recognize that creative work, like other forms of labor, is embedded within and influenced by our political economy. In the 1970s, operaists argued that the emerging "creative class" enjoyed a false sense of autonomy. Operaists would argue that while an advertising art director might have benefited from some creative freedom in choices such as color, layout, or visual themes, the principles and goals of their work were ultimately aligned with profit-making. Their work was valued not for its artistic merit or creative potential but for its ability to generate profit; their creative work was transformed into a commodity.
Although this commodification has been true for decades, creative workers risk facing a severe intensification of this process through the unfettered use of generative algorithms. Just as an art director's work was valued for its ability to sell products, a screenwriter's work in the ChatGPT era can be valued for its potential as a training set for future AI models that generate profitable screenplays in part or in whole. This process formalizes the consumption of culture, turning creative works into raw material for algorithmic production. The goal of creative work like screenwriting then becomes a kind of dataset creation rather than a creative pursuit in itself, stripping writers of creative control and removing what is arguably the most meaningful and fulfilling aspect of their work.
While most scholars agree we are a ways away from models that can generate compelling, original content comparable to that of a human, this doesn't mean we won't see this new pattern of creative value production in the near future. Rather than fully automate screenwriting, tools leveraging models like GPT can enable new ways of delegating and recombining human work that might achieve the same effect. Algorithmic management, a term coined to describe the use of complex algorithms to manage human workers, has mainly been a concern in the realm of what's considered blue-collar work: gig economy drivers and delivery workers, warehouse workers, and "clickworkers" on platforms like Amazon's Mechanical Turk.
However, a significant amount of literature in the CSCW and CHI communities investigates how crowd work can be leveraged to perform complex, context-dependent tasks that are core to what we consider "creative work." Although using algorithmically guided delegation or management for tasks like programming has been investigated, writing has been the gold standard for systems aimed at delegating and distributing complex creative work. Even without models like GPT, task delegation systems in creative work remove important aspects of agency and control from workers. Some studies even found that removing levels of creative control from crowd workers doing broken-up "piecework" writing tasks actually improved final pieces produced by the system, further incentivizing such agency-stripping designs. As HCI researcher Ali Alkhatib's work warns us, combining task-delegation systems with new generative AI tools could lead to a "contemporary instantiation of piecework" if designers do not carefully consider their impacts on worker well-being.
The reality of workplace monitoring and productivity scoring in modern white-collar workplaces puts many workers uncomfortably close to this vision. After millions of people were forced to work from home during the Covid-19 pandemic, workers are increasingly under heightened scrutiny by employers seeking to translate workplace discipline and surveillance into the work-from-home context. This heightened surveillance takes the form of algorithmically determined productivity scores, tracking text communications in the workplace, and software that takes regular screenshots of workers' computer screens for employer review. In the LLM era, all of this surveillance—and the data it creates—turns into potential training material for future AI systems.
For example, LLMs could be used to monitor and analyze workers' written communication, providing managers with summaries of their productivity, work habits, and emotions. New models could predict in detail what specific subtasks in a project should be completed by which team members, a form of AI-powered micro-delegation. Futures like this have consequences beyond individual workers' job satisfaction or dignity. Improvements in delegation and management technologies that analyze and predict worker behavior can create severe power asymmetries in the workplace while limiting workers' ability to challenge or negotiate the terms of their work.
LLMs like ChatGPT are not just tools that can be used to automate tasks or manage workers. They are also systems that require constant input and training to improve and evolve. This means that they rely on a steady stream of new content from people to function effectively. This dependence on human creativity and labor is a feature, not a bug—it is what allows them to generate content that is fresh, relevant, and engaging.
This dependence of LLMs on human labor also provides a glimmer of hope for workers seeking to exert some control over the direction of future AI model development. If workers choose to withhold their labor, they can effectively starve these systems of the data they need to improve. Although developed mostly in the context of consumer relations, Nicholas Vincent's proposed strategies of "data leverage" or "data strikes," where users withhold or manipulate data collected for algorithmic training, might prove to be effective, modern organizing strategies for workers seeking to negotiate better terms and conditions for their work. These approaches are effectively a strategy of refusal, rooted in the principles of operaismo and autonomism, that emphasize the power of workers to resist and shape the economic forces that affect their lives.
This strategy of refusal is not without its challenges. For one, it requires a high degree of coordination and solidarity among workers, a major issue among fragmented and algorithmically managed workforces. Workers also need to have a better understanding of the value of their data and the role it plays in the development of AI systems. This is a difficult task, particularly given the often opaque, "black box" nature of algorithmic management systems and the fact that current data protection law focuses on individual, rather than collective, rights.
Traditional strikes can be part of this strategy. Members of SAG-AFTRA, the union that represents Hollywood performers and other media professionals nationwide, approved joining the WGA on strike through their own strike vote on July 13, 2023. In a statement to CNBC, SAG-AFTRA Executive Director Duncan Crabtree-Ireland said that a major negotiating point is ensuring "a human-centered approach to the implementation of AI" as studios experiment with using generative models. Worker codetermination and union negotiation can be important ways of defining red lines around AI use in the workplace.
This potential power to influence how automation advances raises an important question: What, if any, kinds of automation should workers, and the labor movement at large, strategically advocate for? Political theorists like Nick Srnicek and Alex Williams have made the case that workers should argue for full automation in certain fields, such as industrial manufacturing, as part of a "political project against work." Empirical work on the automation priorities of workers, however, is sparse. Answering the question of what, exactly, workers want to be automated is broader than protecting certain jobs from automation or enforcing labor rights, although these goals are important. It is about shaping the future of work in ways that enhance human creativity, dignity, and well-being.
Collaborating with workers and valuing their perspective on automation can provide an important, complementary thread to work that focuses on automation itself. More research like that of University of Texas at Austin's Min Kyung Lee, who prototypes systems for participatory algorithmic management, can inform the design of systems that prioritize workers' values. The public discourse on automation at work is preoccupied with the technical aspects of AI, such as evaluating model capabilities and the specific tasks they may be able to perform. This preoccupation treats automation as an unstoppable, technical force that workers must adjust to, rather than as a sociopolitical one they can influence and direct. Workers have agency. They can resist automation, shape it—even refuse it.
HCI researchers and practitioners have a crucial role to play in challenging these assumptions and informing the development of worker-led automation. There are several main ways to do this. First, empirical studies of worker autonomy, preferences, and the impact of automating certain parts of work are essential. For example, how does task automation affect workers' sense of agency at work? Existing research largely investigates this topic with the goal of measuring how automated a task can be before one feels a loss of control. Are there contexts, tasks, or working structures where automation can increase workers' feelings of agency? Which tasks do workers see as most pressing to automate?
Second, most AI models are far from democratic or participatory, even in the most basic sense. Further design methods, case studies, and technical tools for developing participatory AI and governance in the era of LLMs will be crucial to maintaining worker agency. What are democratic ways of uncovering the automatable tasks that people find least fulfilling? How can workers best have a say in how automated systems do those tasks? Hollywood writers may not want management to create new scripts using GPT, but they might be open to other writers using a model trained on their work to brainstorm story ideas. Systems that allow writers to place limits on that model's use, such as only using work from a particular era of their career, could help writers maintain agency. With worker voice, automation has the potential to be an empowering tool. Frameworks for answering these questions will be critical to workers fighting for advancements or limits in how their work is automated.
Third, the impact of algorithmic management systems and workplace automation on worker well-being and mental health is still largely unknown. Scholars like Emilia Vignola have raised urgent questions about these technologies' impact on several dimensions of job quality, such as task significance, schedule stability, and trust, that have links to health. Understanding these impacts will be central to workers and organizers who want to negotiate and strategically direct automation in their workplaces. To measure this impact, longitudinal studies conducted jointly by HCI researchers, occupational health scholars, and public health scholars are desperately needed.
Finally, the case of the WGA's stance on AI-generated screenplays offers a glimpse into how workers can collectively negotiate the use of these technologies in their industry. As I've argued elsewhere, collective regulation through worker codetermination, union negotiation, and other forms of workplace democracy is a promising way to shape the future of work in a manner that prioritizes worker autonomy and agency.
Achieving these goals requires more than developing new technologies. It asks for a paradigm shift in how we approach technology in the workplace and automation as a whole. Rather than treating automation as an inevitable force that workers must adapt to, we need to recognize it as a social process that can and should be shaped by those it affects most—workers themselves.
1. Writers Guild of America. 2023 Pattern of Demands. Apr. 29, 2023; https://www.wgacontract2023.org/the-campaign/pattern-of-demands
2. Scheiber, N. and Koblin, J. Will a chatbot write the next 'Succession'? New York Times. Apr. 29, 2023; https://www.nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html
4. Alkhatib, A., Bernstein, M.S., and Levi, M. Examining crowd work and gig work through the historical lens of piecework. Proc. of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2017.
6. Calacci, D. and Stein, J. From access to understanding: Collective data governance for workers. European Labour Law Journal 14, 2 (2023), 253–282; https://doi.org/10.1177/20319525231167981
Dan Calacci is a postdoctoral fellow at the Center for Information Technology Policy at Princeton University. [email protected]
This work is licensed under a Creative Commons Attribution International 4.0 License.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.