Author:
Nazanin Andalibi
The European Union recently banned emotion AI in workplaces and educational settings, with exceptions for medical and safety uses. The AI Act, which contains this ban, entered into force on August 1, 2024. Technologists, policymakers, and the public may wonder what this means for the U.S. and how emotion AI might be better regulated.
→ Emotion AI and emotion data must be regulated in the U.S.
→ Emotion AI use often exacerbates the very problems its advocates claim it solves.
→ Emotion AI is not the solution to societal challenges like worker well-being, fairness in hiring, or improved patient care. Claiming otherwise is unrealistic.
Emotion AI is rooted in the field of affective computing, founded by Rosalind Picard in the 1990s. It is also referred to as emotional AI, affect recognition, and emotion recognition. Drawing on data such as voice, facial expressions, and text, it promises to infer people's affective states, such as anger, stress, distraction, and excitement. Whether this technology truly measures emotions remains controversial and contested.
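To make concrete what "inferring affective states from data" can look like, here is a deliberately minimal, hypothetical Python sketch: a lexicon-based text classifier whose word list, labels, and function name are invented for illustration and do not represent any particular vendor's system. Commercial products typically rely on machine-learned models over voice, facial, and textual signals.

# A toy, hypothetical illustration of text-based affect inference: map words
# to claimed affective states and tally hits in an utterance. Real products
# typically use machine-learned models over voice, facial expressions, and
# text; whether any of them validly measure emotion is contested.
AFFECT_LEXICON = {  # invented word-to-affect mapping, for illustration only
    "furious": "anger",
    "annoyed": "anger",
    "deadline": "stress",
    "overwhelmed": "stress",
    "thrilled": "excitement",
    "excited": "excitement",
}

def infer_affect(utterance: str) -> dict[str, int]:
    """Count lexicon hits per claimed affective state in an utterance."""
    scores: dict[str, int] = {}
    for word in utterance.lower().split():
        state = AFFECT_LEXICON.get(word.strip(".,!?'\""))
        if state:
            scores[state] = scores.get(state, 0) + 1
    return scores

print(infer_affect("I'm overwhelmed by this deadline but excited to start."))
# -> {'stress': 2, 'excitement': 1}

Even in this toy form, the central assumption is visible: observable signals are mapped to presumed inner states by stipulation, which is part of why the validity of such inferences remains contested.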
Despite concerns about validity, bias, and accuracy [1], the emotion AI market is expected to reach $446.6 billion by 2032 [2]. The use of emotion AI products in the world of work and human resources is also on the rise in the U.S., and this is happening without targeted regulation and with limited public input, scrutiny, and awareness. For example, CallFinder analyzes customers' and agents' emotions during calls, promising to improve both parties' experience. This expansion is part of broader, conceptually related trends in biometric surveillance and passive sensing within and beyond the world of work.
In this article, I review what recent empirical research suggests about emotion AI's implications in the workplace and hiring and the harms that people subjected to it experience. I conclude with implications for regulators, technologists, and decision makers, explaining what we can learn from the recent EU efforts to better protect freedoms, privacy, and civil rights in the U.S.
The World of Work with Emotion AI
An analysis of publicly available information on 229 vendors that provide emotion AI for employment use [3] and 86 patent applications for similar uses [4] paints a rosy picture: Emotion AI delivers unbiased and fair hiring practices that recruit the best-fit candidates, reveals who companies' future employees truly are, and improves employees' performance and well-being at the same time. What's not to like? A lot, I argue.
A survey [5] of 395 U.S. adults explored attitudes toward emotion AI. When presented with hypothetical scenarios, one-third of respondents noted no benefit to them as employees. Those who acknowledged some benefits had serious concerns about the technology, including loss of privacy and autonomy, negative impact on well-being and psychological harms, diminished work performance, and exacerbation of existing power dynamics between employees and employers. For example, while proponents claim that emotion AI use will improve performance, workers report that being subjected to emotional surveillance causes anxiety and distraction (e.g., trying to make the system read them favorably) and leads to reduced performance. Furthermore, while proponents claim that emotion AI would enhance well-being, workers express that engaging in emotional labor (e.g., changing how they are feeling and displaying their emotions) takes a toll and affects them negatively. What I find striking is that much of what people (potentially) subjected to emotion AI find harmful is what proponents claim the technology will address or improve.
Additionally, concerns around harm are not unique to those who already have jobs. Interviews with 14 U.S.-based job seekers who had experienced emotion AI–enabled job interviews [6], such as asynchronous video interviews, indicate a range of perceived injustices. These include discriminatory and inaccurate inferences leading to unfair outcomes and processes, privacy loss, and psychological harms. For example, people were concerned about not being recommended for a job because of normative emotional expression expectations encoded in these systems (e.g., that women should smile more). In terms of privacy, job seekers worried that emotion AI–generated insights would be sold, shared with, or leaked to third parties. Lastly, a common psychological harm was the pressure to display emotions in ways that felt unnatural but might be read favorably by the system. What I find surprising is that the only thing job seekers appreciated was the convenience of remote, asynchronous interviews, a feature that does not require AI at all.
Technology's Impact on Society and Responsible Innovation
There are several ways to mitigate the potential harms of emotion AI. First, U.S. regulation should treat emotion and affective data as sensitive [7]. This should include data that captures affect directly (e.g., someone saying "I am sad") and by inference (e.g., someone liking content about depression on social media), whether or not the data subject is personally identifiable.
Second, in line with law scholar Danielle Keats Citron's argument for intimate privacy—that is, privacy pertaining to our identities, bodies, health, and relationships—as a civil right [8], I advocate that privacy over our affect should also be considered a civil right. Emotion AI's impact on people's expressions and autonomy remains destructive, and it is still unclear how we can ensure that people can provide meaningful consent for its use.
Third, while the EU AI Act's attention to emotion AI is well justified and commendable, I argue that it does not adequately protect against harms, as it falls short in addressing other high-stakes contexts. For example, while some proponents tout emotion AI as a solution in healthcare for issues such as enhancing diagnosis by removing provider bias, improving care provision, facilitating patient-provider communication, and boosting well-being, significant harms to patients may persist (e.g., reduced patient influence in care decisions) [9]. I suggest that regulation in the U.S. should follow the EU's lead but go even further, covering contexts that the EU legislation leaves out.
Fourth, the U.S. Patent and Trademark Office evaluates patent applications based on utility, nonobviousness, and novelty. The office should expand the utility criterion [9], for example, by requiring applicants to provide evidence about how and in which populations their proposed technologies have been evaluated and what the outcomes were. The Patent and Trademark Office should also regulate these audits and request them before deployment, adopting the Food and Drug Administration's approach.
Lastly, as emotion AI continues to be patented, developed, and integrated into our lives, it is crucial to prioritize the voices and experiences of those most affected by it before it is too late and the technology enters the mainstream and becomes entrenched. This means technologists should ask themselves: What is the problem we're trying to solve? Is the solution to this problem really emotion AI? Are there other ways of thinking about the problem or the solution? For example, while I agree that worker well-being is an important problem to address, the techno-solutionist emotion AI approach fails to consider the deeper, structural problems at the root of workers' diminished well-being.

This also means policymakers and regulators should consider the very real impacts of emotion AI on people across a range of contexts. Decision makers considering deploying emotion AI should ask themselves: How is this technology's use going to affect real people beyond the promised (though unproven) organizational profit? Was it reliably audited for bias? And more broadly, what type of culture are we creating when we normalize emotion AI? Developing this kind of sensibility is challenging, because companies that do not adopt the hot new technology might be perceived as lacking a competitive edge, but it is crucial to protecting against these harms.
1. Stark, L. and Hoey, J. The ethics of emotion in AI systems. OSF Preprints, 2020; https://doi.org/10.31219/osf.io/9ad4u
2. Global Market Insights Inc. Affective computing market to cross $446.6 bn by 2032, says Global Market Insights Inc. Oct. 16, 2024; https://www.globenewswire.com/news-release/2024/10/16/2963911/0/en/Affective-Computing-Market-to-cross-446-6-Bn-by-2032-Says-Global-Market-Insights-Inc.html
3. Roemmich, K., Rosenberg, T., Fan, S., and Andalibi, N. Values in emotion artificial intelligence hiring services: Technosolutions to organizational problems. Proc. of the ACM on Human-Computer Interaction 7, CSCW1 (2023), Article 109, 1–28.
4. Boyd, K.L. and Andalibi, N. Automated emotion recognition in the workplace: How proposed technologies reveal potential futures of work. Proc. of the ACM on Human-Computer Interaction 7, CSCW1 (2023), Article 95, 1–37.
5. Corvite, S., Roemmich, K., Rosenberg, T.I., and Andalibi, N. Data subjects' perspectives on emotion artificial intelligence use in the workplace: A relational ethics lens. Proc. of the ACM on Human-Computer Interaction 7, CSCW1 (2023), Article 124, 1–38.
6. Pyle, C., Roemmich, K., and Andalibi, N. U.S. job-seekers' organizational justice perceptions of emotion AI-enabled interviews. Proc. of the ACM on Human-Computer Interaction 8, CSCW2 (2024), Article 454, 1–42.
7. Andalibi, N. and Buss, J. The human in emotion recognition on social media: Attitudes, outcomes, risks. Proc. of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, 2020, 1–16.
8. Citron, D.K. The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age. W.W. Norton & Company, 2023.
9. Karizat, N., Vinson, A.H., Parthasarathy, S., and Andalibi, N. Patent applications as glimpses into the sociotechnical imaginary: Ethical speculation on the imagined futures of emotion AI for mental health monitoring and detection. Proc. of the ACM on Human-Computer Interaction 8, CSCW1 (2024), Article 106, 1–43.
Nazanin Andalibi is an assistant professor at the University of Michigan School of Information. She examines how marginality is experienced, enacted, facilitated, and disrupted in and mediated through sociotechnical systems, such as social media and AI. Her National Science Foundation CAREER award project examines the ethical, privacy, and justice implications of emotion AI technologies in high-impact contexts. [email protected]
This work is licensed under Creative Commons Attribution-NoDerivs International 4.0.