Dialogues

XXVIII.1 January - February 2021

A conversation on AI activism


Authors:
Sarayu Natarajan, Smriti Parsheera


Note to readers: In this conversation we have attempted to bring up some conceptual and practical concerns for AI activism. We are both scholars from India, and the imagination of this piece is rooted there.


Sarayu Natarajan: I got interested in AI because the production and making of AI is as political and interesting as the technology itself, its effects, and its outcomes. Despite being called artificial, AI is deeply human! The ways in which, and the extent to which, humans are involved in the making of AI are not apparent from the AI we consume. Humans are involved at many levels, but what is less visible is the process of generating large quantities of data. The data we generate through our use of various devices, often while blissfully unaware of it, may be called data exhaust; it is critical to making AI. There is also the data that is labeled and classified, often by humans, for the specific commercial development and use of AI.

The involvement of humans in every stage means that AI carries within it some of our deepest problems and biases too. Social structures intersect to create marginalization, impacting what data is available and used to build AI, and humans enact their deepest biases and fears in their work—all of which makes AI deeply human.

The other thing that interests me is that AI development is dominated by those who have entrenched power and the ability to aggregate labor and capital, namely states and corporations. I think the big debates on justice will need to navigate this power. This is what is interesting to me: How are we going to frame and imagine ideas of fairness and equity here? In what ways could scholars, researchers, and writers such as ourselves intervene? In that sense, what is the space for activism, even?

I am also curious about how to situate these questions in the political and socioeconomic context of where AI is developed and used. What does it mean, for example, to have an Indian view of AI? (And the obvious corollary—is this an exceptionalist view?) I have explored this in a paper I co-wrote [1], but it is a theme that is increasingly important in the context of the pandemic.


What got you interested in thinking about AI, Smriti?

Smriti Parsheera: I couldn't agree more with the characterization of AI as being deeply human. We often hear statements like "AI does this" or "AI is progressing by leaps and bounds," and in the process people tend to attribute agency to the technology itself. But, in fact, it is the exercise of agency by the humans involved in the process, and their interaction with technical artifacts, that determine what AI can or indeed should do. Here, I am distinguishing agency from politics because, as Langdon Winner's seminal piece [2] tells us, technological artifacts can certainly have politics of their own. Taking a cue from Winner, this holds true in terms of both the social and power structures that you have rightly highlighted and the style of political life that might accompany certain AI applications. For instance, I have written about the rampant adoption of facial-recognition technologies by government and private actors, for purposes that range from tagging photos on social media to ubiquitous state surveillance [3]. Constant identifiability of this sort goes hand in hand with an authoritarian form of political life, even though the countries adopting the technology might otherwise be democratic in character.

It is also important to call out the flawed assumption that AI development can somehow be divorced from its application and outcomes. This stems from the logic that AI itself is value neutral and that it is up to the user to determine its fair or unfair applications. That argument is dispelled by our earlier discussion of the role of human agency in AI processes.

To me, AI activism plays a key role in bringing this sort of critical thinking, on issues of agency, politics, and outcomes, into the forefront of the AI discourse. This is already happening in many ways. A recent working paper on activism in the AI community, which the author defines as those "working on, with and in AI," highlights some interesting examples [4]. This includes the pushback from Google employees against the company's role in the militarization of AI and the emergence of multi-stakeholder initiatives like the Partnership on AI. The use of strategic litigation as a tool for seeking algorithmic accountability is another example.


It shouldn't be that the ideas of those in power, whether by wealth or the fact that they are elected, are the ones that bound and frame ideas of justice. — SARAYU NATARAJAN


You mentioned issues of fairness, equity, and justice as points of interest. What specific role do you see for AI activism on these fronts?

SN: Thank you for that question. I see your point on the broad roles for intervention around AI, but I believe we must consider specificity and relativism for the Indian context as well. I am wary of thinking about this through the lens of some form of Indian exceptionalism, but this idea of justice in AI is a hard one! Because justice is political. The question of whose ideas of justice prevail, and to what extent, must be considered. These questions are all the more central now, in a time of growing inequality. It shouldn't be that the ideas of those in power, whether by wealth or the fact that they are elected, are the ones that bound and frame ideas of justice.

And so, I believe we must think of activism in two ways. First, as a set of tools, and the ways in which political circumstances circumscribe and limit the avenues we have; I am thinking of Charles Tilly and Sidney Tarrow's Contentious Politics [5] here. The AI version of it, if you will! You mention employee pushback and strategic litigation as specific methods that have been used in the context of AI. These are available in liberal democracies throughout the world. Thinking hard about what tools are available for critical inquiry and activism is important, as deepening threats of surveillance, clampdowns on activists, and curbs on speech limit the avenues for both. The rise of authoritarianism is a global challenge, but it is also contextual in the specific ways in which it is enacted. So paying attention to this is relevant.

And second, I believe we need to think about entry points for AI activism as well, and that this must be done in a contextual way. This means talking about the ends and goals of activism. We need to pay attention to the social and economic structures at work, the specific patterns of marginalization, and who wields power. Paying attention to the intersection of caste, class, and gender in India, for example, will yield differentiated pathways for activism.

Certainly, we are in times when there is a growing number of critiques of power and privilege, and we must carry some of these discussions into talking about AI. I think that AI and activism in AI engender some unique opportunities and concerns. The fact that AI is made both by those who generate the data and, in some instances, by those who label it (I am referring to the AI data-labeling industry and workforce here) gives us new dimensions along which to challenge corporate and state power in the making of AI. These interests coalesce around questions of the futures of workers, the politics of AI design and development itself, and the implications and effects of AI.

How do you see AI activism? I would also like to learn more about your work, particularly on gender and representation.

SP: Thank you. In the work that you referred to, I look at the process of knowledge making in the field of AI from a gendered perspective [6]. When one starts looking into questions of who is making decisions about whether and how to develop AI, and which problems to solve, it is hard to miss the role of gender in these processes. In the paper, I point to the chronic underrepresentation of women in AI agenda setting, research, and deployment. Given the fluidity of AI as a field, which is sometimes simply described as what AI scientists do, the identity of these scientists and researchers (gender being a critical part of it) becomes very important. But the problem, of course, is not limited to issues of gender imbalance. Power structures manifest in a number of other ways, including along lines of race, class, caste, and nationality, and at the intersections of these.

Besides questions of representation, another ground for unfairness arises when the biases and stereotypes from the real world manage to find their way into artificial ones. This is often due to uncritical reliance on available data or human errors in its interpretation. Examples of this include translations that attribute particular pronouns to specific professions (he/his for engineers and she/her for nurses) and the weeding out of women candidates in automated job-screening processes. Privacy, security, transparency, and accountability are some of the other big areas of concern that have been raised by the various AI governance principles floated in the past couple of years. I have also been part of one such initiative led by the ITechLaw Association, which resulted in the formulation of eight principles for responsible AI [7]. In addition to the points mentioned above, our work identified the existence of an ethical purpose and societal benefit as being the foundational pillars on which any development or use of AI should be based.
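To make that mechanism concrete, here is a minimal sketch of how such occupational gender associations can be surfaced in pretrained word embeddings, one common way this kind of bias is probed. It is illustrative only, not a rigorous bias audit, and it assumes the gensim library and the publicly available GloVe vectors:

```python
# Probing gendered occupational associations in pretrained embeddings.
# A sketch for illustration; assumes gensim and the public GloVe vectors.
import gensim.downloader as api

# 50-dimensional GloVe vectors trained on Wikipedia and Gigaword text.
vectors = api.load("glove-wiki-gigaword-50")

for occupation in ["engineer", "nurse", "doctor", "receptionist"]:
    # Cosine similarity to "he" vs. "she" hints at the gendered
    # association a downstream system (say, a translator choosing a
    # pronoun) may inherit from its training data.
    sim_he = vectors.similarity(occupation, "he")
    sim_she = vectors.similarity(occupation, "she")
    lean = "he" if sim_he > sim_she else "she"
    print(f"{occupation}: he={sim_he:.3f}, she={sim_she:.3f} -> leans '{lean}'")
```

The point of such a probe is not the specific numbers but the pattern: associations absorbed from uncritically used training data surface downstream as the pronoun choices and screening decisions described above.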


I find it important to emphasize that just like the development and use of AI, the activism around it also needs to be responsible. — SMRITI PARSHEERA


While pointing to all of these concerns, I find it important to emphasize that just like the development and use of AI, the activism around it also needs to be responsible. Paying attention to both the successes and failures of different AI applications and the risks and opportunities arising from them can generate a more nuanced discourse on all sides. Perhaps you could share some insights on this based on your work on AI and the future of work?

SN: I agree entirely that we need to pay attention to the successes and failures of AI applications.

Building from there, we need to talk about the opportunities that AI allows. In the context of work, the debates have centered on job losses from automation. Job loss could be gendered in its impact as well, given the differential effects of AI on sectors with varying gender ratios. Not to belabor the point you have already made, but activism needs to emerge from a careful consideration of what is at play and cannot be framed in universal terms.

To dig into a specific way to consider the context: The pandemic has been associated with an economic crisis of unprecedented proportions, and unemployment is high. The economic effects of the pandemic have also been unequally distributed; the better off have stayed relatively protected from the consequences. So rebuilding and resilience will require at least some focus on job creation. As we discussed earlier, AI, particularly aspects like data labeling, creates opportunities for job creation. It is hard to get a sense of the scale; some predictions suggest data labeling will be a $1.2 billion industry by 2023. The big categories are labeling for retail, driverless cars, and content moderation, along with more specialized areas like medicine and radiology. This could generate jobs in the hundreds of thousands in the context of India, which could leverage its advantages of growing Internet access and English-language education.

Ongoing work at the Aapti Institute (www.aapti.in) is showing that a range of organizations are doing this work, serving both global and domestic clients. Business models, at least in India, seem to be shifting from platforms like Amazon Mechanical Turk to structured organizations with trained workforces. There is also a parallel effort to platformize this kind of work, allowing a common pool of workers to serve many different kinds of business needs. We are also seeing that many of these businesses are engaging women workers and finding ways to build capacities and strengthen workforces. And because the work is done online, the business model allows operations to be located in smaller cities. This offers a cost advantage to the business but also generates employment away from the big cities. These characteristics can be critical in thinking about economic resilience in the coming years. That means we need thinking (activism!) about what just, fair, and inclusive transitional processes must look like.

Equally, while the frame of employment generation is important (indeed, this is the future of work), we must not neglect to investigate what this means for the future of workers. At Aapti, we hope to build on the aforementioned work by speaking to some of those who are actually doing these jobs and learning more about their lived realities. And as we build the empirical knowledge about this domain of work, we must investigate what it means to regulate to protect workers while harnessing the potential for job creation. We also need to talk about the ways in which this intersects with deriving meaning from work. I have co-written a piece [8] that examines the enactment of transient ideas of productivity and their conflict with meaning in work. Mary Gray and Siddharth Suri describe how "ghost work" involves human discretion; it can also enable the exploitation of workers [9].

You spoke about AI governance principles; could you place this in the context of the broader debates on how AI is currently being regulated and governed?

SP: Sure. Understanding regulatory processes is an area of interest, and I have been looking at developments in the AI space through this lens. I would characterize the period we are currently in as an AI summer, in contrast with the many AI winters that have occurred in the past. As we have been discussing, the boom in the research and adoption of AI in recent years has also shone a light on some of the worrisome aspects of these technologies. As a result, there seems to be a growing interest in finding ways to shape AI technologies within the mold of a rights-respecting and ethical framework. Based on a review of 36 sets of AI principles issued between 2016 and 2019, researchers at Harvard's Berkman Klein Center found that many of the recent documents converge around key themes like privacy, security, accountability, fairness, and respect for human values [10]. While convergence in thinking across different stakeholders is a positive sign, one would not be wrong to assume that the proliferation of such principles, many of which advocate self-regulation, is linked to a desire to stave off more stringent regulation of AI.

Although a number of countries have adopted policies, strategies, and reports on AI, it seems unlikely that we will have hard laws governing the field of AI as a whole anytime soon. Perhaps rightly so, given that AI is such a broad subject, consisting of a number of different technologies with diverse applications. It is therefore more likely that regulatory debates in the near future will continue to focus on specific applications of AI, such as autonomous weapons, self-driving cars, and facial recognition. This would, of course, be in addition to non-AI-specific laws around data protection, anti-discrimination, and competition.

It would, however, be limiting on our part to restrict the AI governance debate to instruments like principles, norms, and laws. As noted by Lawrence Lessig (http://codev2.cc/) in the context of cyberspace regulation, markets and architecture (or what he calls code) also function as important modalities of regulation [11]. To circle back to the example of facial recognition, the technology is being regulated not only by decisions like the ban on public sector use in certain areas or the moratorium on adoption in others, but also by the architectural decisions made by the developers of such systems. For instance, Google recently decided to stop labeling photographs as man or woman, identifying them only as persons. This decision may have been prompted by ethical principles, but it is ultimately a code-based solution that sets an architectural constraint on how image-recognition services can be used.
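In spirit, such an architectural constraint is a small piece of code. Here is a minimal, entirely hypothetical sketch of what regulation by architecture could look like at this layer: a post-processing step that maps gendered labels from an image classifier to the neutral "person." The label set and function names are assumptions for illustration and do not depict Google's actual implementation:

```python
# A hypothetical sketch of "regulation by architecture": post-process an
# image classifier's labels so that gendered person labels become "person".
# The label set is an assumption for illustration, not Google's design.
from typing import List

GENDERED_LABELS = {"man", "woman", "boy", "girl"}

def neutralize_labels(labels: List[str]) -> List[str]:
    """Replace gendered person labels with 'person', dropping duplicates."""
    neutralized = []
    for label in labels:
        neutral = "person" if label.lower() in GENDERED_LABELS else label
        if neutral not in neutralized:
            neutralized.append(neutral)
    return neutralized

# Raw classifier output -> architecturally constrained output.
print(neutralize_labels(["woman", "bicycle", "man"]))  # ['person', 'bicycle']
```

However it is implemented in practice, the distinctive feature of such a constraint is that, once shipped inside the system, it binds every user by default, which is what sets code apart from principles or laws as a modality of regulation.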

Besides questions of governance of AI itself, we are also seeing an interesting trend in which AI ambitions might end up driving other aspects of national policies, particularly in the area of data governance. For instance, while studying India's data-localization policies, we found that "building an AI ecosystem" was noted as one of the justifications for suggesting the mandatory local storage of personal data in India. In a similar vein, a committee set up by the Indian government on governance of non-personal data emphasized the economic relevance of data for spurring AI growth. Based on this, they make a case for the mandatory sharing of non-personal data by data-based businesses. This sort of economic-value maximization approach, however, misses the broader point that we have been making—the need to pause and think about whether any and every application of AI is necessarily value enhancing from a societal perspective.

SN: Yes. There's a whole lot more work needed here. I think there are three big dimensions of inquiry. First, we need inquiry at a conceptual level, to understand how to think about AI. For example, do we accept that technological artifacts have agency, and if they do, how do we accommodate them conceptually in regulation? We also need to ask about corporate and platform responsibility here, and about the precise role of the state.

Second, we must develop vocabularies to talk about cross-national concerns, human rights, justice, and fairness in AI, without going into globalist or exceptionalist tropes.

And third, I believe we need to build a body of empirical evidence, as very many aspects of AI, especially the sheer humanity of it, are invisible. This empirical work must attempt to bring the voices of people—those who make and are most affected by AI—to the forefront. We need disaggregated empirics to tell us about the possible implications of actions and regulation. I know that's a whole lot, but I think these are critical questions for AI in the coming years!

Smriti, what do you think are productive areas for inquiry in AI?

SP: Since you mentioned cross-national concerns, I think it would be pertinent for us to also reflect on the formation of the Global Partnership on AI (GPAI), an international initiative to encourage the responsible development of AI. With the conspicuous exception of China, the GPAI initiative, supported by an OECD secretariat, involves most of the countries at the forefront of AI discussions. In terms of its areas of work, GPAI has so far announced four working groups: responsible AI, data governance, the future of work, and innovation and commercialization, in addition to a focus on Covid-19-related developments. These themes appear broad enough to cover the key issues pertinent to AI research, development, and adoption at this point. However, it remains to be seen whether GPAI will live up to its promise of being a multi-stakeholder initiative in terms of effective engagement with non-state actors, involvement of the broader public, and creating scope for AI activism in its decision making.

SN: Thank you Smriti! This has been a fascinating conversation. I know we've barely scratched the surface of what there is to cover, but I hope to pick many of these threads back up with you.

References

1. Kalyanakrishnan, S., Panicker, R.A., Natarajan, S., and Rao, S. Opportunities and challenges for artificial intelligence in India. Proc. of AIES '18; https://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_52.pdf

2. Winner, L. Do artifacts have politics? Daedalus 109, 1 (1980), 121–136; https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf

3. Parsheera, S. Adoption and regulation of facial recognition technologies in India. Data Governance Network Working Paper No. 5. Dec. 2019; https://datagovernance.org/report/adoption-and-regulation-of-facial-recognition-technologies-in-india

4. Belfield, H. Activism by the AI community: Analysing recent achievements and future prospects. Proc. of AIES '20; https://arxiv.org/abs/2001.06528

5. Tilly, C. and Tarrow, S. Contentious Politics. Oxford Univ. Press, Oxford, UK, 2015.

6. Parsheera, S. A gendered perspective on artificial intelligence. Proc. of ITU Kaleidoscope: Machine Learning for a 5G Future, 2018; DOI:10.23919/ITU-WT.2018.8597618

7. Responsible AI: A Global Policy Framework. ITechLaw Association, 2019; https://www.itechlaw.org/sites/default/files/ResponsibleAI_PolicyFramework.pdf

8. Kapor, A. and Natarajan, S. Productivity vs. well-being: The promise of tech mediated work and its implications on society. ORF Online. Oct. 25, 2019; https://www.orfonline.org/expert-speak/productivity-vs-well-being-the-promise-of-tech-mediated-work-and-its-implications-on-society-56962/

9. Gray, M. and Suri, S. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt, Boston, U.S., 2019.

10. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., and Srikumar, M. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society, 2020; https://ssrn.com/abstract=3518482

11. Lessig, L. Code Version 2.0. Basic Books, 2006; http://codev2.cc/download+remix/Lessig-Codev2.pdf

Authors

Sarayu Natarajan is founder of the Aapti Institute. She thinks about technology and society at Aapti, particularly state, citizenship, work, and AI—and about politics all the time. [email protected]

Smriti Parsheera is a researcher at the National Institute of Public Finance and Policy, New Delhi, and a fellow at the CyberBRICS Project hosted by the FGV Law School, Brazil. Her research focuses on digital rights and technology, and the policy processes shaping these fields. [email protected]


Copyright held by authors

