Columns

XXVIII.1 January - February 2021
Page: 26

Reflecting on AI activism


Author:
Elizabeth Churchill


When I heard that this issue of Interactions was focused on AI activism, it occurred to me that we have an opportunity for a multiperspectival approach to thinking about activism in this arena. Three areas came to mind:

  • A conceptual reframing of AI. Let's reclaim, unpack, and redefine the term artificial intelligence. Let's confront the connotations of media and advertising invocations of the acronym AI, which make it seem mystical and powerful.
  • A call to action for increased ethical consideration of technology development in general, with a particular focus on ethical, appropriate, and well-thought-out uses of machine learning techniques and deep engagement with dataset-quality assessments, especially around bias.
  • A focus on ethical and positive designs that are either explicitly or tacitly activist engagements focused on social good.

In turn…

Conceptually Reframing 'AI'

There's a form of linguistic activism we can drive in unpacking artificial intelligence. John McCarthy, who coined the term in 1956, defined AI as "the science and engineering of making intelligent machines." We have drifted away from this, of course; our current world focuses on algorithmic pattern identification from large datasets, rather than "intelligence."

So, I would like to argue that we should drop the word intelligence, which often connotes a reflective capability usually associated with human reasoning. Algorithms are not intelligent in any sense of the word. I suggest we talk more clearly about the techniques, tools, methods, and approaches that make up the field(s) gathered under the misleading umbrella term AI. Let's call AI what it is: a set of techniques and approaches that process data for specific purposes, behaving in specific contexts within specific bounds. As noted, much of what is called AI these days does not correspond closely to McCarthy's vision; most of it consists of statistical machine learning approaches without the symbolic reasoning that McCarthy and his colleagues emphasized.

Building on this insight, Gary Marcus, in a 2020 essay entitled "The Next Decade in AI" [1], calls for us to rethink AI in terms of what he calls robust AI: systems that have deep understanding rather than simply deep learning. He posits that most contemporary AI systems are idiosyncratic (focused on specific domains and unable to generalize and transfer across domains) and overly dependent on the exact details of specific training regimes. I am simplifying his argument, but in essence he invites us to bring back the spirit of classical AI and concern ourselves with knowledge, reasoning, and cognitive models.

Notably, even when applied to human reasoning abilities, the word intelligence has been found wanting. In an old paper from 2003 in the Psychological Record that I still think about, Henry Schlinger concludes that

a concept of intelligence as anything more than a label for various behaviors in their contexts is a myth and that a truly scientific understanding of the behaviors said to reflect intelligence can come only from a functional analysis of those behaviors in the contexts in which they are observed. A functional approach can lead to more productive methods for measuring and teaching intelligent behavior [2].


If this applies to humans (and I believe it does), then it surely applies to AI algorithms and the technological interactions that are powered by them.

So activist call #1 is to retire AI as a term. Let's be more transparent about our techniques and call things what they are: tools that are used for specific purposes and that may be more or less effective depending on the terrain and the skills of those using them. Let's design for machine-assisted human cognition, applying carefully supervised techniques.

Ethical Use of AI Techniques

In recent years, there has been an outcry within the technology industry and beyond about the unethical use of AI techniques. This outcry has been effective in a number of instances. Haydn Belfield from the University of Cambridge outlines how activism has reshaped AI engagement by "employers, other members of the community, and governments" [3]. Surveying activities over six years, his paper highlights several forms of activism, from choosing where to work and what to work on, to reframing definitions in the workplace and raising awareness of risks and surfacing failures. Drawing on two analytical frameworks (epistemic communities and worker organizing and bargaining), he concludes that "success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI 'talent'" [3].


While I wouldn't think of those working in AI across the board as a community (again, the subtribes are divided by techniques and domains), the actions listed do resonate. It is clear that a groundswell of people addressing issues from biased datasets to questionable military applications can raise awareness and potentially move the application of these techniques in a better direction.

The ACM's Code of Ethics in its Professional Leadership Principles states that a computing professional must "ensure that the public good is the central concern during all professional computing work." As laudable as that sounds, we know that workplace incentives and cultural dynamics can quiet even the most concerned of people if there is not a shared culture of raising concerns without risk. More mundanely, as I have written about before, people usually focus on their own daily moments, working expediently within the exigencies and pressures of their daily lives, sometimes on deeply unethical projects and with horrific results [4]. This is the critical reason why communities within organizations and across industries must act together systematically to raise questions at all levels, from small, short-term projects to large-scale multi-year investments. Movements empower people to ask questions. And we must invite hard, broad-scope questions that address bigger-picture goals.

So #2 in my AI activism list is to continue to build communities or at least networks of like-minded people to surface, address, and lobby for the ethical applications of techniques under the AI umbrella.

AI and Design for Good

If activism is defined as "the action of using vigorous campaigning to bring about political or social change," it can also be the sustained attempt to slowly change a culture, shift a landscape of experience, and support democratic change.

In the former case, two great examples of activism I recently came across are helping to create citizen-led public services using AI techniques (e.g., Bayes Impact; https://www.bayesimpact.org/en/) and using AI techniques to support human rights work (e.g., addressing war crimes [5]). Another example is the use of these approaches in addressing accessibility. For instance, Microsoft's Seeing AI mobile app (https://www.microsoft.com/en-us/ai/seeing-ai) uses the smartphone camera and computer vision to analyze the surroundings, helping people with visual impairments navigate more easily. Another favorite example is Hoobox Robotics' Wheelie 7 prototype kit, which allows electric wheelchairs to be controlled through facial expressions [6].
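To make the accessibility example concrete, here is a minimal sketch in Python of how a scene-description tool in the spirit of Seeing AI might be assembled from off-the-shelf components. This is not Microsoft's implementation; the pretrained captioning model and the Hugging Face pipeline API are assumptions chosen purely for illustration.

    # A minimal sketch of camera-to-text scene description in the spirit of
    # Seeing AI. NOT Microsoft's implementation: the model checkpoint and
    # the Hugging Face pipeline are illustrative assumptions.
    from transformers import pipeline

    # Load a pretrained image-captioning model (illustrative choice).
    captioner = pipeline("image-to-text",
                         model="Salesforce/blip-image-captioning-base")

    def describe_scene(image_path: str) -> str:
        """Return a short natural-language description of the image."""
        outputs = captioner(image_path)
        return outputs[0]["generated_text"]

    if __name__ == "__main__":
        # In a real assistive app, the frame would come from the phone
        # camera and the description would be read aloud via text-to-speech.
        print(describe_scene("street_scene.jpg"))

The point of the sketch is how little glue is needed: the activist work lies not in the model call but in choosing to point these components at an inclusive-design problem in the first place.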

So #3 in AI activism for me is harnessing the power of AI-related techniques to change who can participate in everyday society and be heard in new ways. This is perhaps the most powerful form of AI activism, moving toward more inclusive design and social justice work.

Summing up, I think we can be activist in three ways: reconceptualizing AI; calling for ethical engagement with companies, agencies, and governments; and focusing on positive design and social justice applications. Let's deconstruct AI as a concept into its component parts (i.e., methods and techniques), up-leveling people's understandings as well as our vocabularies and literacy. If we begin applying techniques to inclusive design and societal good more generally, while actively surfacing and stopping applications with negative societal impact, we will do ourselves in HCI and the world a big favor.

References

1. Marcus, G. The next decade in AI: Four steps towards robust artificial intelligence. 2020; https://arxiv.org/abs/2002.06177v1

2. Schlinger, H.D. The myth of intelligence. The Psychological Record 53, 1 (2003), Article 2; https://opensiuc.lib.siu.edu/tpr/vol53/iss1/2

3. Belfield, H. Activism by the AI community: Analysing recent achievements and future prospects. Proc. of the AAAI/ACM Conference on AI, Ethics, and Society. ACM, New York, 2020, 15–21; https://doi.org/10.1145/3375627.3375814

4. Churchill, E.F. Expedience, exigence and ethics. EPIC Perspectives. Sep. 27, 2018; https://www.epicpeople.org/expedience-exigence-ethics/

5. Hao, K. Human rights activists want to use AI to help prove war crimes in court. MIT Technology Review. Jun. 25, 2020; https://www.technologyreview.com/2020/06/25/1004466/ai-could-help-human-rights-activists-prove-war-crimes/

6. Aouf, R.S. Hoobox launches first wheelchair controlled by facial expressions. dezeen. Jan. 15, 2019; https://www.dezeen.com/2019/01/15/hoobox-wheelie-7-wheelchair-facial-expressions-design/

Author

Originally from the U.K., Elizabeth Churchill has been leading corporate research at top U.S. companies for over 20 years. Her research interests include designer and developer experiences, distributed collaboration, and ubiquitous/embedded computing applications. [email protected]


Copyright held by author

