Forums

XXXII.5 September - October 2025
Page: 58

Moving from Fairness to Justice: Intentional Algorithmic Solutions Through an Intersectional Lens


Authors:
Kenya S. Andrews


After studying machine learning fairness for some time, I noticed that many of the experiences of Black women and Black men were not being captured as injustices or harms when they should have been. When ML tools are audited for bias, they are evaluated with fairness metrics—mathematical measures such as demographic parity, equalized odds, and equal opportunity. Many traditional or more commonly used metrics, however, assume that people from marginalized backgrounds have the same experiences as everyone else, and in practice require them to be more qualified than their counterparts from non-marginalized backgrounds in order to receive the same treatment. This causes their experiences and standings in life, shaped by historical injustices and systematic biases, to be repeatedly overlooked. I then considered whether these ML algorithms perceived people differently from how they actually are, due to a lack of understanding of historical biases, injustices, and harms. This misperception can cause people, especially those at the intersection of multiple demographic identities, to go unrecognized by these "fairness" metrics.
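For readers unfamiliar with these metrics, the following is a minimal sketch, not drawn from this article, of how the three named measures are commonly computed for a binary classifier and a single protected attribute; the array names are illustrative, and each function returns a gap, with zero indicating parity under that metric.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates (recall) between groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case gap across true-positive and false-positive rates."""
    tpr_gap = equal_opportunity_gap(y_true, y_pred, group)
    fprs = [y_pred[(group == g) & (y_true == 0)].mean() for g in np.unique(group)]
    return max(tpr_gap, max(fprs) - min(fprs))
```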

Insights

Historical biases can force individuals to present differently to decision-making algorithms to receive appropriate treatments.
Intersectionality serves as a lens to reveal hidden harms, uncovering injustices concealed beneath overlapping demographic features that interact in nuanced ways.
Compensatory justice addresses these injustices by restoring visibility for those most likely to face marginalization.

I aim to design, implement, and evaluate AI and ML tools that more accurately consider marginalized individuals through proper visibility, ensuring justice-oriented access to fair outcomes—algorithmic justice. Algorithmic justice is not only critical for mitigating biased outcomes but also necessary for promoting accurate and consistently trustworthy AI and ML tools—an urgently needed approach to help support members of marginalized communities. This requires understanding how historical biases and injustices can affect the nuanced pathways of someone's experience in an environment, and the extent to which their demographic features reveal those pathways.

A key form of injustice I focus on in my work is testimonial injustice. Testimonial injustice occurs when a person is not believed or appropriately perceived because of prejudices [1]. While being heard, understood, and acknowledged is important in any situation, it is especially important in life-critical circumstances, such as healthcare. Mary Catherine Beach and colleagues [2] show that clinicians are more likely to ignore or downplay the concerns of Black and female patients compared to those of white and male patients. However, little work has examined the nuanced experiences of groups such as younger Black females, younger Black males, senior white males, senior Latinas, and other intersecting identities. This led me to explore how intersectionality—the ways in which people's demographic features influence their experiences in the world based on attributes such as race, age, and gender—can contribute to someone experiencing different forms of injustice, such as testimonial injustice.

Our work revealed that applying an intersectional lens in medical settings is the only way to uncover certain cases of testimonial injustice [3]. Using clinical notes from electronic health records, we analyzed race and gender under three fairness metrics: demographic parity (a traditional approach that compares the marginal probability of a favorable outcome for those who differ on a single demographic feature, without intersectional considerations); subgroup fairness (an extension of demographic parity that allows for the assessment of multiple demographic features, namely intersectionality); and ϵ-intersectional differential fairness (a multiplicatively bounded metric that assesses the relative differences in the probabilities of an outcome across all demographic features).
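To make the contrast between these metrics concrete, here is a rough sketch of my own, not the authors' released code, that compares favorable-outcome rates along a single attribute, along intersectional subgroups, and under an ϵ-style differential-fairness bound over all subgroup pairs; the column names, outcome flag, and smoothing choice are assumptions for illustration.

```python
import itertools
import numpy as np
import pandas as pd

def parity_gap(df, outcome, attrs):
    """Largest gap in favorable-outcome rates across the groups
    defined by the given attribute(s)."""
    rates = df.groupby(list(attrs))[outcome].mean()
    return rates.max() - rates.min()

def epsilon_differential_fairness(df, outcome, attrs, smoothing=1.0):
    """Smallest epsilon such that the (smoothed) outcome rates of every
    pair of intersectional subgroups differ by at most a factor of
    e**epsilon -- a common formulation of differential fairness."""
    grouped = df.groupby(list(attrs))[outcome]
    probs = (grouped.sum() + smoothing) / (grouped.count() + 2 * smoothing)
    eps = 0.0
    for p_i, p_j in itertools.combinations(probs, 2):
        for a, b in ((p_i, p_j), (1 - p_i, 1 - p_j)):
            eps = max(eps, abs(np.log(a) - np.log(b)))
    return eps

# Hypothetical usage on a per-patient table with a binary injustice flag:
# parity_gap(df, "stigmatizing_term", ["race"])            # demographic parity
# parity_gap(df, "stigmatizing_term", ["race", "gender"])  # subgroup fairness
# epsilon_differential_fairness(df, "stigmatizing_term", ["race", "gender"])
```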

When looking at demographic parity, we saw no disparate treatment along race or gender. Applying intersectional methods through subgroup fairness, however, revealed 69 instances out of 112 group comparisons where there was disparate treatment related to testimonial injustice. When we allowed for an even less strict, more encompassing metric—ϵ-intersectional differential fairness—we saw many more instances of disparate treatment. The capacity to capture more cases demonstrates the necessity of applying intersectionality strategically rather than indiscriminately when seeking out instances of injustice. Our findings also showed that the most common form of testimonial injustice was the use of stigmatizing terms in clinician notes (e.g., addict, abuser, alcoholic, combative, and noncompliant). Black women were targeted by these stigmatizing descriptors at the highest rate among all groups in our dataset. These results led me to explore the causal relationships between intersecting demographic features and their contribution to experiences of testimonial injustice.

Building on prior work showing that age, race, and gender biases each independently affect quality of care [2], we structured our study, following our previous work [3], to capture likely marginalized individuals based on race (Black or Latino), gender (female), and/or age (children 15 or younger or adults 65 or older). Using the fast causal inference (FCI) algorithm, a constraint-based causal discovery algorithm, we were able to find the nuanced paths and the strength of contribution of someone's demographic features to their experiences with testimonial injustice in medical settings. There are several observable features beyond the ones we measured that may contribute to someone experiencing testimonial injustice, such as marital status, insurance type, and level of education [4,5,6]. FCI was particularly helpful since it allowed us to create structural causal models with unmeasured confounders. First, we uncovered that race was the strongest causal contributor to experiencing testimonial injustice, followed by gender and then age. The weaker connection of age may reflect the limited number of younger participants in the dataset. Our causal discovery also revealed that these features do not contribute equally or independently, as I had suspected from the previous analysis.
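As an illustration of this kind of analysis, the sketch below runs FCI on a synthetic, hypothetical feature table using the open-source causal-learn package; it assumes that package's fci interface and recovers only graph structure, not the contribution strengths reported in the study, which would require a separate estimation step.

```python
import numpy as np
import pandas as pd
# Assumes the open-source causal-learn package (pip install causal-learn).
from causallearn.search.ConstraintBased.FCI import fci

# Synthetic, illustrative feature table: binarized demographic features
# plus a testimonial-injustice score derived from clinical notes.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "race_marginalized": rng.integers(0, 2, 500),
    "gender_female": rng.integers(0, 2, 500),
    "age_marginalized": rng.integers(0, 2, 500),
    "testimonial_injustice": rng.random(500),
})

# FCI returns a partial ancestral graph (PAG), which tolerates unmeasured
# confounders such as insurance type or level of education.
graph, edges = fci(df.to_numpy(), alpha=0.05)

for edge in edges:
    print(edge)  # edge marks indicate direct, confounded, or undetermined links
```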

This directly challenges the prevailing assumption embedded in most ML fairness metrics—that demographic features can be treated as equal and independent contributors when defining and measuring injustice. Our work shows that some harms emerge only at the intersection of demographic features, meaning some experiences of injustice remain completely invisible unless an intersectional lens is applied. Without this lens, we continue to discount people's experiences, because we believe that they are just like everyone else or that different aspects of who they are can be ignored when it comes to examining instances of injustice.


This risk of exclusion is only amplified by underlying imbalances in the data itself. The data that we use is highly disproportionate in terms of race: 77 percent of the patients are white. I suspect the disparities we see would only become starker if we had a dataset more equally distributed along the demographic features we are analyzing. This limitation requires an even deeper, intentional intervention to engage marginalized communities in life-critical settings and in data collection. To empower communities and validate their experiences, communal, grassroots-based efforts are needed, alongside algorithmic design changes.


I call for compensatory justice to be used as a means of helping improve the visibility of marginalized individuals in decision-making systems.


There are also nonemergent settings in which we see the need to consider intersectional design, such as media. In our previous work, "Epistemological bias as a means for the automated detection of injustices in text," Lamogha Chiazor and I compared Meghan Markle's and Kate Middleton's representations in the media through our novel framework, which integrates three natural language processing models to automatically detect character, framing, and testimonial injustice. (We used our fine-tuned tagger model, which identifies words that cause epistemological bias, and two generative models that find stereotypes and their roots related to the input texts.) Our framework allows users to input text and automatically analyze it to detect injustices. It provides them with educational resources to help increase their knowledge of why and how a term might be causing a specific type of harm, and our interactive user interface allows them to work together to change it.
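To give a sense of the detect-then-explain flow described above, here is a deliberately simplified, runnable sketch; the keyword list and the stub explanation function are stand-ins for the fine-tuned tagger and the generative models, which are not reproduced here.

```python
# Stand-in lexicon for the fine-tuned tagger: terms cited in this article
# as stigmatizing in clinician notes.
BIAS_LEXICON = {"addict", "abuser", "alcoholic", "combative", "noncompliant"}

def tag_bias_terms(text: str) -> list[str]:
    """Flag words associated with epistemological bias (tagger stand-in)."""
    return [w.strip(".,") for w in text.lower().split() if w.strip(".,") in BIAS_LEXICON]

def explain_stereotypes(terms: list[str]) -> dict[str, str]:
    """Stand-in for the generative models that surface stereotypes and their
    roots; a real system would prompt a generative model here."""
    return {t: f"'{t}' can evoke stereotypes that undermine a speaker's credibility."
            for t in terms}

def analyze(text: str) -> dict:
    """Detect potential injustices in text and pair them with explanations."""
    flagged = tag_bias_terms(text)
    return {"flagged_terms": flagged, "explanations": explain_stereotypes(flagged)}

print(analyze("Patient was combative and noncompliant during intake."))
```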

Markle and Middleton are linked to the British royal family by marriage and are associated with high fashion, royal motherhood, philanthropic activity, and higher education in the arts. Despite sharing similar positions, interests, and abilities, they are often portrayed differently in the media, which can reflect the presence of injustices likely due to differences in their demographic features, namely race and nationality. We analyzed more than 1,600 articles on topics that Markle and Middleton share (e.g., family, avocados, charitable event attendance, wardrobe). Markle was the subject of many articles in which the media aimed sarcasm and criticism at her, thus character injustice. Such acts have led readers to have tainted perceptions of Markle and even to disbelieve her statements or actions, thus testimonial injustice. Yet, for the same or similar topics of concern, members of the media wrote positively about Middleton, thus framing injustice. We also observed through surveys that even for experts it is very difficult to find instances of injustice in text, likely due to their own biases and the universal acceptance of injustices (e.g., promoting stereotypes).

Our method of comparing these women validates the effectiveness of our approach and underscores the necessity of intersectional analysis. Although nationality played a significant role in the acquiescence shown toward Markle, her race served as a compounding factor in the injustices she experienced from both the media and the Firm (i.e., the business side of the royal family). The proliferation of the injustices she experienced was uniquely exacerbated by her position and economic status. By incorporating this context, we were able to identify the specific types of injustices she faced and validate her experience, as well as provide explainability through the model outputs, offering valuable insights for others aiming to prevent propagating injustices in their work.

My findings have implications for the design of computing systems and the interactions they support, suggesting intentional interventions that account for the causal strengths and nuanced pathways of testimonial injustice in medical settings, particularly as reflected in the construction of metrics and algorithms. It is necessary to consider how historical biases and injustices may force a patient to appear differently—younger or older, wealthier, less sick, of another race—to receive the same treatment as their non-marginalized counterparts. This is the reason I call for compensatory justice to be used as a means of helping improve the visibility of marginalized individuals in decision-making systems, in the hope it will lead to more just outcomes. It is the approach I use to promote algorithmic justice. Compensatory justice, in legal contexts, refers to providing restitution to individuals or groups to remedy the effects of harm and injustice. Applying this principle to algorithmic decision-making shifts the goal from minimal survival through equality or equity toward dismantling systemic barriers.

Applying an intersectional lens allows us to see the nuanced ways in which we can tackle such barriers by making visible the hidden biases that go uncaptured in various settings. Applying the intersectional lens and compensatory justice has the additional benefit of supporting instances in which patients felt discriminated against but did not have evidence of it. These approaches allow us to give language and validation to experiences that can make the difference between life and death and help someone surpass their current status.

The intentional shifts in algorithmic design that I am calling for are rooted in justice. It is imperative that we design tools that allow multiple aspects of a person's identity to be considered, along with the extent to which those features might be causing harm. Some argue that it is too much effort to try to encompass everything about a person, an argument that has resulted in no effort to capture even small portions. But reducing a person to a single data point with very little and sometimes ill-gotten information only pushes us further into solutions that are inaccurate, ineffective, and ultimately cause more harm due to apathy and negligence. By applying a lens of intersectionality and compensatory justice, my hope is that we will see that it's not necessary to sacrifice true justice in order to achieve accuracy. Instead, we can find more accurate ways to represent marginalized people, evaluate tools, design algorithms, and implement systems.

References

1. Fricker, M. Testimonial injustice. In Contemporary Epistemology: An Anthology. J. Fantl, M. McGrath, and E. Sosa, eds. Wiley-Blackwell, 2019.

2. Beach, M.C. et al. Testimonial injustice: Linguistic bias in the medical records of Black patients and women. Journal of General Internal Medicine 36, 6 (2021), 1708–1714.

3. Andrews, K., Shah, B., and Cheng, L. Intersectionality and testimonial injustice in medical records. Proc. of the 5th Clinical Natural Language Processing Workshop. Association for Computational Linguistics, 2023, 358–372; https://doi.org/10.18653/v1/2023.clinicalnlp-1.39

4. Short, P.F. Gaps and transitions in health insurance: What are the concerns of women? Journal of Women's Health 7, 6 (1998), 725–737.

5. Shi, L. Type of health insurance and the quality of primary care experience. American Journal of Public Health 90, 12 (2000), 1848–1855; https://doi.org/10.2105/ajph.90.12.1848

6. Zajacova, A. and Lawrence, E.M. The relationship between education and health: Reducing disparities through a contextual approach. Annual Review of Public Health 39 (2018), 273–289.

Author

Kenya S. Andrews is a Provost STEMJazz postdoctoral fellow at Brown University. She researches machine learning, just decision-making, and human-robot interaction, with a particular emphasis on improving the visibility and representation of marginalized people to algorithmic systems with primary applications in real-world settings, including healthcare, education, and public policy. [email protected]


This work is licensed under Creative Commons Attribution International 4.0.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2025 ACM, Inc.
