Dialogues: Cover Story

XXIX.3 May - June 2022
Page: 28

Racial segregation and data-driven society


Authors:
Rashida Richardson, Eric Corbett


At this juncture, pointing out the tendency of modern artificial intelligence (AI) and machine learning (ML) technologies to perpetuate various "-isms" (racism, sexism, colonialism, etc.) is uncontroversial. Among these issues, the intersection of ML and racism has received the most attention, in both the popular press and academic discourse. Yet while racist AI is increasingly well documented, its spatial dimension has not received the attention it deserves. Instead, the manifestation of racism in ML is often framed nonspatially—as a cultural, political, or philosophical phenomenon—with space serving merely as the backdrop against which these dynamics unfold. In contrast, legal scholar Rashida Richardson's recent essay "Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities" [1] reveals the spatial roots of this issue. In what follows, I discuss the essay with Richardson to tease out its implications for HCI scholars and practitioners working to advance fair and just AI/ML.
    — Eric Corbett


Eric Corbett: I have to begin by asking what motivated you to write this essay. I'm especially interested to hear this because a connection between segregation and AI seems so counterintuitive, even for someone working in this space.

Insights

Understanding segregation is necessary not only for identifying AI bias but also for identifying what types of solutions are needed.
There is too much technological solutionism in how society is currently trying to address issues of fair and socially just AI.
To combat unfair AI practices, it is important to expand and embed participatory design and other UX methods that include more people with diverse experiences.

Rashida Richardson: So first I think it is important to know that before moving into academia I worked on a range of civil rights and technology issues at public interest legal organizations, and from that work I viewed segregation as a root-cause issue underlying many of our contemporary civil rights problems. When I started to work on AI issues, I noticed the framing of bias concerns was also fairly limited, in that it lacked a deep understanding of how bias and difference are created and perpetuated in society. For instance, AI bias was often framed merely in terms of allocative and representational harms. But my frustration grew after I published the law review article "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice" [2]. As I did more interviews about that article and my advocacy work related to predictive policing, people would say that they get how person-based predictive policing is wrong and problematic, but that it was not as clear for place-based predictive policing. And every time I would scream internally, because the only way you can make that distinction is if you don't understand how race and place are inextricably linked in the U.S. Most of the time, when I would explain this connection between race and space, and how person-based and place-based predictive policing are similarly flawed, people would accept and agree with my argument. But given the frequency of this reaction, I knew I needed to write something that attempted to situate issues of AI bias within the history of racial inequities in the U.S.

At first, I attempted to do this with a law review article in which I planned to argue for a new civil rights framework to address AI and nontechnical issues of racial bias. I researched that article extensively for over a year and had a super-detailed outline, where I planned to go through key periods of U.S. history to demonstrate why technocratic solutions to AI bias cannot work. But by the time I got to the period known as the Gilded Age, I realized that what I was trying to do was more of a book project and could not be tackled in a law review article. So I left my draft and decided I'd revisit or recycle the work later on. Then the Berkeley Technology Law Journal invited me to participate in its symposium "Technology Law as a Vehicle for Anti-Racism" and contribute to the accompanying volume, and I decided that was the moment to write this essay.


To understand or study segregation, you have to overcome much of the miseducation about race, which can seem like an overwhelming or impossible task. — RASHIDA RICHARDSON


EC: Why do you think segregation is understudied in the space of fair AI?

RR: Segregation is understudied in this space for two reasons. First, to interrogate segregation and its legacy today, you need a deep understanding of your local, state, regional, and national history, which most people are not taught and do not actively seek out. The realities of chattel slavery, the failure of Reconstruction, the actual practices and policies of Jim Crow, and much of the history I tried to succinctly cover in the essay are barely part of formal education in the U.S., and in many parts of this country that history is intentionally taught in a distorted manner.

Second, people don't actively seek out this history because understanding it would mean having to accept or confront one's complicity in it. It is easier to relegate issues of oppression, exclusion, and racism to the past than to think about how they are embedded in our current structural conditions and practiced daily. The history of race and racism in this country is often taught as a matter of prejudice and interpersonal issues, rather than as a matter of power and property. To understand or study segregation, you have to overcome much of the miseducation about race, which can seem like an overwhelming or impossible task.

EC: What I really loved about this essay is how it explains segregation, specifically the different forms: de jure and de facto. Can you talk more about the differences between these two and why it's important to make this distinction?

RR: De jure and de facto segregation are legal distinctions and definitions that emerged from cases challenging segregation and its effects, devised to determine what required legal redress. De jure segregation is separation and exclusion that was sanctioned by law. This is what most people associate with the term segregation—they think of Black and white water fountains or waiting areas at a bus station. The novelist James Baldwin described de facto segregation as "Negroes are segregated but nobody did it," and this is because it is seen as naturally occurring, when in reality it is a form of segregation that is practiced and perpetuated by people and institutions. In the essay, I also note that some scholars disagree with this distinction, seeing it as unnecessary because all of these practices and customs are interrelated, and the binary it creates serves to absolve some actors and practices. And I agree… I understand how the distinction can help clarify that segregation is not one thing or action, but I don't think people, and in some cases the courts, actually understand de facto segregation. I also understand why that is the case.

EC: Why do you think it is that the de facto form of segregation is less known?

RR: If you benefit from a system of policies, practices, and customs, there is less urgency or no interest in interrogating or challenging that system. But we are also living in a society that has gone through multiple generations of segregation, so self-interest and generational normalization both act to reinforce segregation, and its by-products are not seen as such but rather as naturally occurring divisions or practices in society. Today, a large majority of white people grow up in and exist in very homogeneous circles, even when they live in very diverse cities or towns. And I think the only way you break that cycle, or even notice it is not ideal, is if you are exposed to the diversity of our society.

For example, I have a close white friend who grew up in an upper-class, predominantly white suburb. We met and became close in college. Her other close friends are primarily not white, mostly Black and Asian… and this was something she didn't even notice until her wedding. When she got pregnant, she and her husband spoke a lot about education and whether they should send their daughter to private or public school. She is a product of and a believer in public schools, her husband went to private schools, and we all lived in NYC, which has the most segregated school system in the country. She knew I worked on school desegregation issues and generally have strong opinions about education, so she asked me what they should do or consider, noting that she didn't want her daughter to end up in an all-white school because she wanted her to have friends and teachers who look like her aunties.

I raise this example because most parents don't even consider the diversity, or lack of diversity, of a school as a factor in their children's education. Coded words and descriptions are used in place of race, so the choice of sending your child to an all-white school is seen as a neutral or even "good" decision. My friend wasn't trying to perform her politics or virtue signal; she just knew what she valued in life from choosing to live in integrated social circles and wanted that for her child. I share this anecdote because I think it's important for people to understand that segregation is very much about these "personal" choices people make; there are no innocent bystanders.

EC: The main contribution of the essay is how it connects segregation to data-driven technologies. Can you briefly explain that connection and why it is important for HCI and AI researchers to understand?

RR: After providing a brief historical overview and an interdisciplinary review of how segregation affects white people in particular, I examine three aspects of AI design and evaluation and how each is shaped by segregation, or by the failure to understand segregation as a feature of American society. First, I discuss how segregation is reflected in and amplified by training data. Second, I discuss how segregation can inform much of the discretionary work developers do during problem formulation and how that can lead to unintended outcomes. Finally, I discuss how segregation affects how AI works, and is evaluated, in its real-world implementation. I use different examples to demonstrate my argument in each section, and I think they help reveal why understanding segregation is important generally, but particularly for HCI and AI researchers. Understanding segregation is necessary not only for identifying AI bias but also for identifying what types of solutions are needed, because if this root-cause problem is not acknowledged or addressed, then purely technical solutions will never work.

EC: The way segregation affects training data was the most compelling—and problematic—part of the essay. Can you talk more about that here?

RR: So that part is an extension of some of the issues I started to explore in my law review essay [2]. But what's different is that I focus on segregation as a specific source for understanding feedback loops. I detail how policing practices and policies are informed by segregation and are used to reinforce it. I then explain how these practices and policies generate much of the police data that is ultimately used to develop policing technologies.
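For readers who want a concrete sense of this feedback loop, here is a minimal illustrative simulation; everything in it (the two neighborhoods, the rates, the allocation rule) is invented for illustration and is not from Richardson's essay. Two neighborhoods with identical underlying incident rates diverge in recorded crime simply because the historical patrol allocation determines where incidents get recorded, and a naive place-based predictor then allocates patrols in proportion to that skewed record:

```python
import random

random.seed(0)

# Hypothetical setup: two neighborhoods with the SAME true incident rate,
# but neighborhood A starts with heavier patrolling (standing in for a
# legacy of segregation-era policing patterns).
TRUE_RATE = 0.1                 # identical for both neighborhoods
patrols = {"A": 10, "B": 2}     # initial patrol allocation (historical bias)
recorded = {"A": 0, "B": 0}     # cumulative recorded incidents ("dirty data")

for year in range(10):
    # Recorded incidents depend on how many patrols are watching,
    # not on the true rate alone: more patrols -> more records.
    for hood, n_patrols in patrols.items():
        recorded[hood] += sum(random.random() < TRUE_RATE
                              for _ in range(n_patrols))
    # A naive place-based predictor allocates next year's 12 patrols in
    # proportion to recorded incidents, feeding the loop.
    total = sum(recorded.values()) or 1
    patrols = {h: max(1, round(12 * recorded[h] / total)) for h in recorded}
    print(f"year {year}: recorded={recorded} -> next patrols={patrols}")
```

Running this, neighborhood A's recorded count, and hence its patrol share, pulls ahead even though the underlying rates are identical: the model "confirms" the historical allocation rather than measuring crime.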

EC: One takeaway of the essay that is especially important for technologists concerns the limitations of how society is currently trying to address issues of fair and socially just AI. Can you talk about that here?

RR: There is too much technological solutionism. Technology is seen as the solution both to longstanding issues and to technology-facilitated issues. There seems to be more interest in being able to say you did something than in whether it actually redressed the harm caused. This is not to say technology cannot be part of a solution, but if one is trying to address societal issues, then AI, or procedural fixes to AI's development and deployment, will not be enough on its own. There are political, economic, and social dimensions to our societal problems, so solutions must be multifaceted and substantive. In the essay's conclusion, I argue why the interventions proposed in existing scholarship are deficient and why AI needs a transformative justice framework and praxis. I outline what that framework is and why it is needed, but I hope to develop it further in future scholarship, building on my current experience with technology policy at all levels of government and my experience with restorative justice practices.

EC: Now that we know the limitations, let's talk about the opportunities. Lots of Interactions readers are practitioners and researchers active in the fair AI space. How can they have a meaningful impact?

RR: I think there are lots of opportunities in the fair AI space, from developing different business models to developing different design practices. Since many readers have HCI backgrounds, my first recommendation is to expand and embed participatory design and other UX methods that include more people with diverse experiences in the space. A problem in academic environments is that lived experience is not considered expertise, but in order to tackle deep systemic and structural issues like segregation, those who have experienced various facets of it must be included. More inclusive design practices create more opportunities for developers' assumptions to be challenged and addressed, and they can ensure that the lived experience of our society is considered in design, ideally as a corrective measure. A second recommendation is to expand the problem analysis. Often the conceptions of problems that AI is deemed appropriate to address are very narrow, and in some cases distorted. I hope that anyone who reads my essay leaves with an understanding that issues like segregation cannot be reduced to a simple problem with a simple solution—there are multiple actors with varying levels of responsibility, and harms are compounded. Finally, I would encourage readers to ask themselves what they individually and collectively think meaningful impact is, because the true answer may not be satisfying, or there may be conflict within this community.

References

1. Richardson, R. Racial segregation and the data-driven society: How our failure to reckon with root causes perpetuates separate and unequal realities. Berkeley Technology Law Journal 36, 3 (2022); https://ssrn.com/abstract=3850317

2. Richardson, R., Schultz, J.M., and Crawford, K. Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review Online 94, 15 (2019); https://www.nyulawreview.org/online-features/dirty-data-bad-predictions-how-civil-rights-violations-impact-police-data-predictive-policing-systems-and-justice/

Authors

Rashida Richardson is an assistant professor of law and political science at Northeastern University, where she specializes in race, emerging technologies, and the law. She is currently on leave, serving as an attorney advisor in the Federal Trade Commission's Office of the Chair. Prior to joining the FTC, she served as a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy. Richardson has previously worked on a range of civil rights and technology policy issues at the German Marshall Fund, Rutgers Law School, the AI Now Institute, the New York Civil Liberties Union (NYCLU), and the Center for HIV Law and Policy. r.richardson@northeastern.edu

Eric Corbett is a postdoctoral researcher at New York University's Center for Urban Science and Progress. His background is in computer science and human-computer interaction. He has worked on projects across various subjects, including resisting and countering gentrification; supporting trust in civic relationships between local government officials and marginalized communities; and, most recently, exploring participatory approaches to AI. Throughout his research, the overarching thread has been exploring the intersections among design, social justice, democracy, and technology. eric.corbett@nyu.edu


Copyright held by authors

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.
