Authors:
Alice Ashcroft, Clàudia Figueras Julián
Many organizations are quickly incorporating AI systems into their work processes to make them more efficient and automated [1]. But a comprehensive study of how ethics and values contribute to the large language models (LLMs) behind generative AI (GenAI) has yet to be done. Furthermore, with existing recommendations, such as Ethics Guidelines for Trustworthy AI (http://bit.ly/3IQ3gxj), and emerging research on software engineers' values [2], it's important to understand how values and ethics can align when AI systems are being developed. To outline this position, this article provides definitions of three areas: ethics, values, and gendered language in GenAI.
Values often refer to the fundamental guiding principles that exert a profound influence over the decision-making processes of individuals, groups, and even entire organizations [2]. These principles reflect deeply ingrained beliefs about what is important, meaningful, and desirable, shaping how we perceive the world and make choices. They serve as the compass that steers our behaviors, actions, and interactions and often form the foundation upon which our personal and collective identities are constructed. Values provide a framework for evaluating situations, opportunities, and challenges, allowing us to prioritize and align our actions with what we hold to be most valuable and significant. Other research, however, holds that values are not synonymous with principles. Rather, they denote what an individual or group considers vital in life [3] or represent "evaluative beliefs that synthesize affective and cognitive elements to orient people to the world in which they live" [4]. On these definitions, values are situated within the individual: They stem from social engagements and can be shared among people, but they remain a profoundly personal and individualized phenomenon.
→ Organizations are increasingly incorporating generative AI systems into their workflows to achieve efficiency and automation.
→ GenAI is built on large language models, but a comprehensive evaluation of their ethics and values is still lacking.
→ This article focuses on the interplay of ethics, values, and GenAI and presents actionable steps for those engaging with these technologies.
Ethics, according to most definitions, encompass a system of principles and guidelines that offer moral guidance and direction. The discipline of ethics is concerned with understanding what is morally right and wrong, just and unjust, acceptable and unacceptable. It often involves developing a codified, structured framework that assists individuals and communities in making decisions. Ethics go beyond personal beliefs and preferences, seeking to establish a shared understanding of what constitutes ethical behavior within a particular context or society. By adhering to ethical principles, individuals, groups, and organizations aim to uphold a sense of integrity, fairness, and responsibility in their actions, ensuring that their decisions are not only aligned with their values but also conform to established moral standards.
When comparing values and ethics, it becomes apparent that, while values provide the overarching framework that shapes our perspectives and priorities, ethics offer a structured set of guidelines that help us navigate the complexities of moral decision making. Values are deeply personal and can vary widely between individuals and cultures, influencing what we hold dear and meaningful. Ethics, however, provide a more standardized, shared foundation for evaluating behaviors, ensuring a level of consistency and fairness in our interactions. Both values and ethics play a crucial role in shaping our choices, actions, and interactions, with values serving as the underlying drivers and ethics serving as the actionable principles that guide us toward morally acceptable outcomes.
GenAI is developed using LLMs; at its core, it attempts to find patterns and meaning within language. This standardization of language appears driven by a motivation similar to that behind ethical patterns and codes, and it potentially holds equally powerful ramifications. In trying to model incredibly complex issues, including constructs of gender, race, and class, we must ask who is responsible for this modeling, and whether they are acting ethically and in line with the values they hold or those that society holds.
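To make this pattern-finding concrete, consider the deliberately simple sketch below, a toy next-word model of our own, not how any production LLM is built. It learns which word follows each two-word context in its training text and samples from those counts when generating, so whatever associations the corpus carries, biased or otherwise, are reproduced in new output.

```python
import random
from collections import defaultdict

# Toy next-word model: learn which word follows each two-word
# context in the training text, then sample from those counts
# when generating. Associations in the corpus (here, a biased
# pairing of roles and pronouns) reappear in the output.
corpus = ("the nurse said she was tired . "
          "the engineer said he was busy .")

def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    w = text.split()
    for a, b, c in zip(w, w[1:], w[2:]):
        counts[(a, b)][c] += 1
    return counts

def generate(counts, w1, w2, length=5):
    output = [w1, w2]
    for _ in range(length):
        followers = counts.get((w1, w2))
        if not followers:
            break
        # Sample the next word in proportion to how often it
        # followed this context in the training text.
        nxt = random.choices(list(followers),
                             weights=list(followers.values()))[0]
        output.append(nxt)
        w1, w2 = w2, nxt
    return " ".join(output)

model = train(corpus)
print(generate(model, "the", "nurse"))     # the nurse said she was tired .
print(generate(model, "the", "engineer"))  # the engineer said he was busy .
```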
The Case for the Embedding of Ethical Practices in GenAI
LLMs are now widely used and, though currently English-centered, are expected to soon be used in every context. One could argue that LLMs should structure language as it is used (e.g., the sexism and racism that exist in language should be included in the model), especially if they are being used for analyzing text, as those biases will be present in the text being analyzed. When the models are generating text with GenAI, however, they should present language as it should be used (e.g., with no sexism or racism) so that these practices are not perpetuated in new content. But who is to decide how language should be used? We argue that this exploration must involve area experts whose expertise lies outside LLM development, such as gender researchers deciding how gender is modeled. Involving subject experts in technology for a specific use is not a new suggestion; it has been proposed, for example, that marketing experts be involved in the design of sales chatbots [5].
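The claim that models trained on language as it is used absorb its biases can be probed directly. The sketch below is illustrative: It assumes the gensim library is installed, loads one of the pretrained GloVe models its downloader ships, and uses probe words of our choosing. It asks which words sit nearest a gendered analogy in the embedding space; stereotyped occupations, learned from the training corpus, tend to surface among the neighbors.

```python
# Probing gendered associations in pretrained word vectors.
# Assumes gensim is installed; "glove-wiki-gigaword-50" is one
# of the models its downloader ships (downloaded on first use).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Classic analogy probe: doctor - man + woman ~= ?
# Associations learned from the corpus surface in the
# nearest neighbors.
for word, score in vectors.most_similar(
        positive=["doctor", "woman"], negative=["man"], topn=5):
    print(f"{word:15s} {score:.3f}")
```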
Ensuring technology is suitable for everyone comes down to the values and ethics of those who commission and build the technologies. Is it the responsibility of the developers to engage with subject experts, or is it the user's responsibility to engage only with AI that they know has been developed ethically? Who gets to decide how this is determined? This question also needs to be considered from an intersectional perspective. True intersectionality, a product of Black feminist theory [6], is more than the mere overlapping of characteristics; it ensures that the AI created and the content generated are suitable for all. We present three actionable steps with which all those involved in the creation of LLMs and GenAI should engage.
In the rapidly advancing landscape of AI technology, particularly within the realm of generative AI driven by LLMs, the need to align technological advancements with ethical considerations and societal values has become increasingly evident. To address this imperative, we offer three key proposals.
Incorporate expert insights and diversity. The development of LLMs and GenAI should involve subject experts who have a deep understanding of the specific contexts in which the AI systems will be deployed. This is particularly crucial in sensitive areas, such as gender, race, and social constructs. Experts with knowledge in fields that include gender studies, linguistics, and ethics should play an active role in shaping how language patterns are modeled to ensure that AI systems do not perpetuate biases or reinforce harmful stereotypes. As mentioned earlier, including marketing experts in chatbot creation should enable better alignment between consumers and the brand and lead to increased affiliation, which is preferable to defaulting to designing chatbots as women [5]. Additionally, embracing diverse perspectives can help mitigate the risk of unintentional biases and promote fairness in the AI's outputs.
Transparent ethical frameworks. Developers and organizations working on GenAI should establish transparent ethical frameworks that guide its creation, deployment, and use. These frameworks should outline the core values that the AI upholds and the ethical considerations it accounts for, encompassing both the values that practitioners must uphold and those that get embedded in the AI systems. By openly communicating these principles, developers can ensure that their work aligns with societal values and expectations, and users can assess whether the AI aligns with their own ethical stance. Transparency about the decision-making processes can foster trust in AI technologies.
User empowerment and informed choices. To address the question of who decides how AI language models should be used, it is important to empower users to make informed choices. Companies that supply software should provide clear information about the ethical considerations and values that have guided the creation of a particular AI system. On an even more basic level, users should be made aware from the outset that they are interacting with AI; users who know they are engaging with a chatbot, for example, are more willing to enter into conversation with it. Transparency here can include disclosures about the expertise involved, the methodologies used, and the steps taken to address potential biases. Users should be able to make decisions about engaging with AI systems based on this information, allowing them to choose technologies that are in line with their ethical beliefs. It is crucial to note that some people affected by AI may never know that they are engaging with these technologies, and the implications of this should also be considered (e.g., some AI users might see only the output, while their data is also being processed). Exposing these processes is necessary for full transparency, and users should be empowered in AI development at every step of the process, including modeling.
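What such a disclosure might look like in practice is an open design question. As one hypothetical shape, ours rather than any established standard, the sketch below records the information named above, the expertise involved, the methodologies used, and the bias-mitigation steps taken, as a structured object a system could surface before a user chooses to engage.

```python
from dataclasses import dataclass

# Hypothetical, machine-readable transparency disclosure for a
# GenAI system. The field names mirror the information this
# article argues users should receive; they are illustrative,
# not a standard schema.
@dataclass
class TransparencyDisclosure:
    system_name: str
    is_ai: bool                 # users must know they face an AI
    core_values: list[str]      # values the framework commits to
    subject_experts: list[str]  # expertise consulted in development
    methodologies: list[str]    # how gender, race, etc. were modeled
    bias_mitigation: list[str]  # steps taken against known biases
    data_notice: str            # covers people who only see outputs

disclosure = TransparencyDisclosure(
    system_name="ExampleAssistant",  # placeholder name
    is_ai=True,
    core_values=["fairness", "transparency", "accountability"],
    subject_experts=["gender studies", "linguistics", "ethics"],
    methodologies=["curated training data",
                   "expert review of gender modeling"],
    bias_mitigation=["bias audits",
                     "red-teaming for stereotyped outputs"],
    data_notice="Your inputs are processed by an AI system.",
)

# Shown to users before they choose to engage.
print(f"You are interacting with an AI: {disclosure.system_name}")
print("Values:", ", ".join(disclosure.core_values))
```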
In summary, we call for the embedding of ethical practices in generative AI: involving subject experts, establishing transparent ethical frameworks, and empowering users with information. These recommendations emphasize collaborative decision making, transparency, and user autonomy, which will ensure that AI technologies align with societal values and individual ethical considerations.
The incorporation of AI systems into organizations prompts the exploration of ethics and values in generative AI, particularly when it comes to the perpetuation of gender and racial stereotypes, among others. Because values and ethics are deeply personal and have ramifications for various social groups, a set of guidelines for moral decision making by AI practitioners needs to be created.
In conclusion, it is key to incorporate expert insights and diverse perspectives into the development of large language models and generative AI systems, particularly in sensitive areas, such as protected characteristics. We advocate for transparent ethical frameworks that guide the creation, deployment, and use of GenAI, with a focus on aligning core values and ethical considerations with societal expectations. Empowering users to make informed choices is essential, and it can be achieved by clearly communicating the ethical considerations and values that steer AI system creation. This transparency extends to disclosing expertise, methodologies, and bias mitigation efforts. Ultimately, we emphasize collaborative decision making, transparency, and user autonomy as essential to ensuring that AI technologies align with both societal values and individual ethical beliefs, while addressing the challenges of awareness and exposure that full transparency entails.
1. Figueras, C., Verhagen, H., and Cerratto Pargman, T. Exploring tensions in responsible AI in practice. An interview study on AI practices in and for Swedish public organizations. Scandinavian Journal of Information Systems 34, 2 (2022), Article 6.
2. Ferrario, M.A. and Winter, E. Applying human values theory to software engineering practice: Lessons and implications. IEEE Transactions on Software Engineering 49, 3 (2023), 973–990.
3. Friedman, B., Kahn, P.H., Jr., Borning, A., and Huldtgren, A. Value sensitive design and information systems. In Early Engagement and New Technologies: Opening Up the Laboratory. N. Doorn, D. Schuurbiers, I. van de Poel, and M.E. Gorman, eds. Philosophy of Engineering and Technology, vol. 16. Springer, 2013.
4. Marini, M.M. Social values and norms. In Encyclopedia of Sociology. E.F. Borgatta and M.L. Borgatta, eds. Macmillan, 2000.
5. Ashcroft, A. and Ashcroft, A. The gendered nature of chatbots: Anthropomorphism and authenticity. In Trends, Applications, and Challenges of Chatbot Technology. M.A. Kuhail, B.A. Shawar, and R. Hammad, eds. IGI Global, 2023.
6. Crenshaw, K. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum 1989, 1 (1989), Article 8, 139–167.
Alice Ashcroft is a researcher at Lancaster University focusing on identity in HCI. She is also the founder of and lead consultant at Digital Heard. [email protected]
Clàudia Figueras Julián is a doctoral student in the Department of Computer and Systems Sciences at Stockholm University. Her research focuses on understanding the ethical and social consequences of the deployment of AI systems in the public sector, specifically how AI practitioners interpret and address ethical issues in their everyday work practices. [email protected]
This work is licensed under Creative Commons Attribution 4.0 International.