XXX.3 May + June 2023

What’s Missing in the ACM Code of Ethics and Professional Conduct


Author:
Aaditeshwar Seth


Two questions that are often encountered when evaluating the ethics of a technology project are "Who is your product or service meant to benefit?" and "Is somebody being harmed by your product or service?"

Insights

The ACM Code of Ethics and Professional Conduct largely focuses on an ethics of the means to avoid harm, but does not clearly define ethical ends that computing systems should aim to achieve.
The code is ineffective in flagging unjust and undesirable goals for which technologies are built or used.
The code should embrace goals such as achieving equality and overturning unjust social and economic structures through technological inventions.

These questions require different frameworks to answer. The first requires clarity on the objectives of the technology system, and consequently helps us understand whose needs those objectives are meant to serve. Answering the second question, however, does not require clarity on the goals of the system: If harms being caused by the system can be identified, then mechanisms can possibly be built to avoid them, and understanding the goals of the system need not be a prerequisite for that.

The ACM Code of Ethics and Professional Conduct (CEPC) largely focuses on the second question—of uncovering harm, avoiding harm, and speaking out against harm—but does not say much about defining the goals of systems built by computing professionals. CEPC at best prescribes broad goals such as building systems for the "benefit of society," or slightly more specific goals such as "promoting fundamental human rights" or "protecting each individual's right to autonomy," but these are discussed only briefly.


Why is it a problem when ethical considerations are placed only on the means and not on the ends to which a technology project is deployed? To answer this question, I'll provide some examples of technology projects whose goals are ambiguous or incompletely stated, and that have clearly led to unjust and undesirable outcomes. Yet the current formulation of CEPC is unable to flag such projects as unethical.

For instance, consider Facebook's news feed algorithm. The goal of the algorithm is not disclosed to the public, but several external studies have revealed that the curation algorithm seems to maximize user engagement, which leads it to amplify sensational or fake news and consolidate echo chambers [1]. This of course does not imply that Facebook's goals are unclear: Its obvious goal is ad revenue maximization, which requires recommendation algorithms that have users spend more time on the platform. The ethical concern, however, arises from the conscious decision to pursue this goal rather than an alternative, such as the genuine "benefit of society," by showing diverse content that may foster pluralistic dialogue and strengthen democracy.
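
To make the ends-versus-means distinction concrete, consider a minimal sketch of a feed ranker. Everything here, including the Post fields, the scoring functions, and the diversity weight, is hypothetical and invented for illustration; it does not describe Facebook's actual system.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks, comments, shares
    viewpoint_novelty: float     # how far the post sits from the user's usual diet

def rank_for_engagement(posts):
    # End goal: maximize time on platform. Sensational content,
    # which scores high on predicted engagement, floats to the top.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_for_plurality(posts, diversity_weight=0.5):
    # Alternative end goal: expose users to diverse viewpoints.
    # The same machinery, steered toward a different destination.
    def score(p):
        return ((1 - diversity_weight) * p.predicted_engagement
                + diversity_weight * p.viewpoint_novelty)
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("Outrage headline", predicted_engagement=0.9, viewpoint_novelty=0.1),
    Post("Opposing-view op-ed", predicted_engagement=0.4, viewpoint_novelty=0.9),
]
print([p.text for p in rank_for_engagement(posts)])  # outrage headline first
print([p.text for p in rank_for_plurality(posts)])   # opposing view surfaces first

"Do no harm" guardrails, such as downranking posts that fact-checkers flag, would be bolted on after rank_for_engagement has done its work; the choice between the two ranking functions, which is the choice of ends, is made earlier and usually escapes ethical scrutiny.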

A system whose goals may in fact be unjust but are projected to be for the "benefit of society" is the Aadhaar biometric-based unique identity system in India. Its stated goal is to reduce leakage in the provisioning of social welfare benefits to the poor: Authentication through a biometric-based identity is meant to reduce corruption in the distribution of those benefits. Many researchers, however, have challenged this problem identification, arguing that leakages happen more prominently in ways other than identity fraud [2]. Further, such tightening through technology-based authentication, without accounting for the risks of biometric failures or the challenges citizens face in obtaining and maintaining an error-free digital identity, has resulted in many unfair denials of welfare benefits. These benefits are meant to be accessible to the poor as a right, yet precisely locating accountability within Aadhaar's complex sociotechnical system has remained elusive. The ethical concern again arises from the choice of objective: Should the priority rest on reducing inclusion errors or on eliminating exclusion errors?
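
The trade-off between the two kinds of errors can be sketched in a few lines. This is a toy model with a single match-score threshold and invented numbers; it is an assumption-laden illustration, not a description of Aadhaar's actual matching pipeline.

def error_rates(genuine_scores, impostor_scores, threshold):
    # A claim is accepted when the biometric match score clears the threshold.
    # Exclusion error: a genuine beneficiary is wrongly denied.
    exclusion = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # Inclusion error: a fraudulent claim is wrongly accepted.
    inclusion = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return exclusion, inclusion

# Invented match scores; low genuine scores might correspond to, say,
# manual laborers with worn fingerprints or faulty scanners.
genuine = [0.92, 0.85, 0.55, 0.97, 0.61]
impostor = [0.30, 0.48, 0.72]
for t in (0.5, 0.7, 0.9):
    excl, incl = error_rates(genuine, impostor, t)
    print(f"threshold={t}: exclusion={excl:.0%}, inclusion={incl:.0%}")

Raising the threshold drives inclusion errors (fraudulent claims accepted) toward zero but pushes more genuine beneficiaries into exclusion; lowering it does the reverse. Which error the system is built to minimize is a choice of ends, and it is precisely this choice that a harm-avoidance ethics leaves unexamined.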

Yet another example is the heavily VC-funded ecosystem of agritech start-ups. Many of these start-ups claim to exist to improve the livelihoods of farmers through increased productivity with better crop planning and precision agriculture, but they are also alleged to be data-grabbing agents of surveillance capitalism designed to shape farmer behavior in ways that can eventually dispossess them [3]. Nudges for monoculture cropping, commercialized production, and land consolidation increase the precarity of farmers rather than empowering them. Such underlying, unspoken goals of profiting off farmers are of course not highlighted, and are cloaked with goals that appear to be for the "benefit of society."

Not only do these systems fail to state their true goals clearly or completely, lest the goals be questioned on their ethical merits, but any harms that arise from their usage are dismissed as "unforeseen" and "unintended" problems or "teething" issues. A focus on these harms has led only to minor retrospective tweaks to the systems, such as the deployment of fact-checkers on Facebook; or, in the case of Aadhaar, the introduction of new intermediaries who help citizens cope with a complex technology infrastructure in return for a fee; or compliance with data-sharing guidelines by agritech companies. The goals of the systems, however, are hardly ever questioned.

Furthermore, operating in the realm of means without considering the ends of a technology project makes it hard to assign accountability for harms that arise when the technology is used in unexpected ways. Accountability requires the attribution of causation and fault, but intentionality to create fault can be evaded easily when the goals are not defined clearly; the technology designers and managers can claim innocence because they did not look far ahead. This leads to blaming users for the harms, creates moral buffers between the technology and technologists, makes it easy to deny deliberate wrongdoing, and enables the outsourcing of morality to regulatory institutions through simplistic compliance procedures.


This distinction between the ethics of ends and means is important to understand. Ethical principles focused only on the means, such as "do no harm" guardrails, are not sufficient; they leave us like a ship without a compass to point it in the right direction. Such a ship could arrive at many different destinations, not all of them desirable. Clear end goals provide that compass, a guiding light to aim toward and to continuously steer decisions by. This distinction has been highlighted in several domains. In the area of moral psychology and human values, Milton Rokeach in the book The Nature of Human Values distinguishes between terminal values and instrumental values: Terminal values refer to desirable end states of existence, such as equality, world peace, freedom, and the welfare of others, whereas instrumental values refer to preferable modes of behavior as a means to achieve the terminal values, including honesty, politeness, responsibility, and sustainability. Terminal values are therefore clearly consequentialist, arguably more so than the consequentialist considerations demanded by instrumental values.

Similar to Rokeach, Amartya Sen in the books Development as Freedom and The Idea of Justice distinguishes between constitutive freedoms and instrumental freedoms for development. Constitutive freedoms are those that need no further justification; they are constitutive of development itself and therefore are end goals, such as freedom from starvation, freedom from illiteracy, and freedom to participate politically. Instrumental freedoms are the means to achieve constitutive freedoms, such as the freedoms to participate in economic markets, to live a healthy life, and to scrutinize and criticize authorities.

This is also the basis of Sen's criticism of John Rawls's theory of justice as fairness. The Rawlsian framework is somewhat restrictive in maintaining a distinction between ends and means. It does allow some end goals to be specified as basic liberties that should be available equally to everybody, such as several human rights, but it demands equity-based fairness guarantees only for some specific aspects, mostly related to the possession of material resources. To this, Sen responds that ensuring fairness alone on some metrics is not sufficient to specify what outcomes or social realizations will finally emerge. The situation is similar to that of a market: Simply having the freedom to participate and transact on equal grounds, and even imposing equity measures such as progressive taxation on inequalities that emerge regardless, says nothing about what the market will be used for or where it will take the world. Further, markets, and the world, are not level playing fields, and Rawls's concept of the veil of ignorance, which asks decision makers to set aside their current position in the world, therefore imposes an unnecessary informational restriction on efforts to improve equity and justice. In the book Justice and the Politics of Difference, Iris Marion Young adds to these limitations, arguing that ensuring distributive equality of material resources is not sufficient to fix structural injustice in the world; the end goal that humanity should strive for is to remove the underlying processes of discrimination that create structural injustice in the first place.

Coming back to the subject of CEPC, it is divided into three sections: ethical principles, professional responsibilities, and leadership principles. All points in the first section, other than 1.1 ("contribute to society and to human well-being, acknowledging that all people are stakeholders in computing"), are clearly directed at the means: avoid harm, be honest, be fair, and respect privacy and confidentiality. The second section, on professional responsibilities, similarly addresses means: produce high-quality work, acknowledge the work of others, provide reviews, carefully evaluate performance and correctness, assess risks, and foster public awareness. The third section, for professionals in leadership positions, essentially builds upon the earlier sections by emphasizing the responsibility of leaders to create an environment in which their teams can adhere to the various principles. Principle 3.7 draws special attention to systems that become integrated into the infrastructure of society. Without a clear emphasis on the end goals of computing, however, and without identifying specific goals that computing professionals should work toward to define what is to the "benefit of society" and what is not, CEPC remains limited.

The importance of thinking about the end goals of technology is not a new observation. Norbert Wiener, in his open letter "A Scientist Rebels," refused to share details of his technology design with militarists for fear that they might use his work toward irresponsible ends [4]. He went further to illustrate how totalitarian governments and profit-seeking capitalists ignore genuine human welfare, and asked scientists not to be naive and to take responsibility for their inventions so as to keep them from being used for unethical private or political gain. W. Brian Arthur explains that technology rarely evolves from accidental or serendipitous discovery but rather is shaped by conceptualizations in the minds of the innovators, reflecting their values and beliefs and those of the funding bodies that support the research and development [5]. Similarly, there is wide-ranging literature by Marxists such as Harry Braverman [6], technology historians such as David Noble [7], and science and technology researchers such as Langdon Winner [8], documenting the processes through which technology is often developed to serve the agendas of the powerful. More recent movements such as ethical source licenses are similarly grounded in defining acceptable and unacceptable goals toward which free and open-source software may be used.

I argue in my recent book Technology and (Dis)Empowerment: A Call to Technologists that computing professionals need to clearly define the purpose of their innovations, and especially to determine which goals should be considered unambiguously as being for the benefit of society [9]. Not doing this runs the risk of having concepts such as social good co-opted by current systems of the state and markets, thereby losing their meaning and distinctiveness. I further argue, building upon the thinking of people such as Tim Unwin [10], that the purpose of technology should be to overturn unjust social structures and bring about power-based equality. When this is not the goal, technology often tends to reproduce inequalities, being wielded more easily by those who can gain access to it or design it for their own agendas.

Given the large and intersecting challenges that humanity faces today, including environmental collapse, inequality, exploitation, healthcare, and poverty, among others, and the double-edged nature of technology that often renders it a tool in the hands of the powerful to improve their situation at the cost of the poor and marginalized, it is imperative for computing as a discipline to move beyond narrow values of cost and time efficiency. Terminal values such as equality and the welfare of others and instrumental values such as plurality should be core elements of how computing professionals conceptualize research and development problems. Ethics codes such as CEPC can contribute toward building such an ethic for the entire computing discipline.

References

1. Bessi, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., Uzzi, B., and Quattrociocchi, W. Users polarization on Facebook and Youtube. PLoS ONE 11, 8 (2016).

2. Khera, R. Impact of Aadhaar in welfare programmes. Economic and Political Weekly 52, 50 (2017).

3. Stock, R. and Gardezi, M. Make bloom and let wither: Biopolitics of precision agriculture at the dawn of surveillance capitalism. Geoforum 122 (2021), 193–203.

4. Wiener, N. The Human Use of Human Beings: Cybernetics and Society. Houghton Mifflin, 1950.

5. Arthur, W.B. The Nature of Technology: What It Is and How It Evolves. Penguin Books, 2009.

6. Braverman, H. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. Monthly Review Press, 1974.

7. Noble, D.F. Forces of Production: A Social History of Industrial Automation. Transaction Publishers, 2011.

8. Winner, L. Do artifacts have politics? Daedalus 109, 1 (1980).

9. Seth, A. Technology and (Dis)Empowerment: A Call to Technologists. Emerald Publishing, 2022.

10. Unwin, T. Reclaiming Information and Communication Technologies for Development. Oxford Univ. Press, 2017.

Author

Aaditeshwar Seth is a faculty member in the Department of Computer Science and Engineering at the Indian Institute of Technology Delhi and cofounder of Gram Vaani, a social enterprise that uses voice-based technologies to empower rural and low-income communities to run their own participatory media platforms. [email protected]


Copyright held by author

