Dialogues

XXVIII.1 January - February 2021
Page: 34

HCI sustaining the rule of law and democracy: A European perspective


Authors:
Mireille Hildebrandt, Virginia Dignum


In a time in which many believe AI poses serious threats to democratic politics, democratic institutions, and our capacity and right to engage freely in democratic practices, AI activism is rising. This activism is geared not only toward the AI community itself and the giant tech organizations that are de facto defining the field, but also toward governments and their responsibility to shape the societal and ethical implications of AI (see [1] for a survey of AI activism over the past six years). As a result, human-centered AI is being advocated as the way forward, one to which the multidisciplinary methods and approaches long used by the HCI community are core.

The Legal Perspective

By Mireille Hildebrandt

Computational systems now interact with humans in myriad ways. They are often designed to mine behavioral data to nourish business models based on behavioral advertising while providing some sort of free service. This kind of design is optimized via, for example, A/B testing, and it privileges interaction that generates observable behavior: impressions, clicks, or conversions. Clearly, other objectives abound, such as successful medical interventions, coordination of public transport, enhancing educational performance in disadvantaged groups, improving food security, preventing unlawful police violence, and so on. Some of these systems are data driven, ultimately based on the brute force of complex statistical calculations; others are model driven, in the sense of decision trees based on logic rather than statistics. Both types of systems are often meant to engage users, and are designed in ways that invite intuitive interaction in line with the purpose for which the system was developed. This raises the question of the extent to which such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be "under the rule of law."


The General Data Protection Regulation (GDPR) and the force of law. As we all know, there are two types of users: those employing systems to initiate or sustain business models or tasks in the public interest, and those we usually call end users, such as consumers and citizens. The GDPR attributes responsibility and even liability to users in the first sense with regard to the processing of personal data relating to end users in the second sense. It tasks users with a series of legal obligations to protect the fundamental rights and freedoms of end users, coupled with transparency requirements, prior impact assessments, and data protection by design and by default. It provides end users with legal rights, such as transparency rights, the right to object, the right to have data erased, and the right not to be subject to fully automated decisions when these have a serious impact. These rights and obligations define the boundaries of the interaction space between users and end users of computational systems. It seems pivotal that experts in human-computer interaction become aware of that space, further supporting the emancipation of the end user, whose rights and freedoms are core to the GDPR.

As a lawyer who has been collaborating with computer scientists for over a decade, I am often surprised by their attempts to reinvent core concepts of human rights law. For instance, many researchers develop their own concept of privacy or fairness, usually one that best fits the kind of solutions they have on offer, ignoring or even debunking the relevant legal concepts. This is not only antidemocratic, where such concepts have been agreed upon by democratic legislatures, but also violates the core tenets of the rule of law, notably that nobody, not even a computer science researcher, is above the law. As a result, I believe it is crucial that more attention be paid to the meaning of these rights and to how interaction design can contribute to what I have called legal protection by design (not to be confused with legal by design, which would be a contradictio in terminis) [2].

Let me give one example [3]. Art. 7, paragraph 3 of the GDPR stipulates that consent to the processing of personal data (which can only be given for specified purposes) can be withdrawn at any time, and that withdrawing consent must be as easy as giving it. This has enormous consequences for the design of websites, as well as for the backend systems they serve. When engineering an interface that respects this aspect of interaction, developers should take into account the meaning of consent in the context of the GDPR. Apart from the fact that consent is valid only if given for specified purposes, paragraph 4 of art. 7 also says that consent must be given freely, meaning that "utmost account must be taken of whether, inter alia, performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract." This implies that consent to processing that is not necessary for the service deserves particular scrutiny. In other words, if an end user's arm is twisted into granting access to additional data that is not necessary, the consent is not valid. So merely pushing a button cannot in itself be equivalent to consent, and blocking access to a website when an end user refuses to share personal data that is not or no longer needed for the requested service (by way of a so-called cookie wall) is not allowed. On top of that, as indicated above, end users must be able to withdraw their consent as easily as they gave it. Just think of the many different ways this can be done, and of the many different designs that would violate these stipulations.
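
To make this design constraint concrete, here is a minimal sketch of how a backend might model such consent: per-purpose records, a withdraw operation exactly as simple as the grant operation, and service access that depends only on necessary purposes. All names here (ConsentLedger, grant, withdraw, may_serve) are hypothetical illustrations, not an established API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    purpose: str                          # consent is valid only for a specified purpose
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

class ConsentLedger:
    """Per-purpose consent where withdrawal mirrors the grant (Art. 7(3) GDPR)."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, purpose: str) -> None:
        # One call to grant consent for one specified purpose...
        self._records[purpose] = ConsentRecord(purpose, datetime.now(timezone.utc))

    def withdraw(self, purpose: str) -> None:
        # ...and one equally simple call to withdraw it: no extra steps,
        # no detours, because withdrawal must be as easy as giving consent.
        record = self._records.get(purpose)
        if record is not None:
            record.withdrawn_at = datetime.now(timezone.utc)

    def has_consent(self, purpose: str) -> bool:
        record = self._records.get(purpose)
        return record is not None and record.withdrawn_at is None

def may_serve(ledger: ConsentLedger, necessary_purposes: set[str]) -> bool:
    # Access may hinge only on purposes that are necessary for the requested
    # service; refusing optional tracking must never lock the end user out
    # (no cookie wall).
    return all(ledger.has_consent(p) for p in necessary_purposes)
```

Note how may_serve ignores purposes that are not necessary for the service: refusing them cannot affect access, which is precisely what the prohibition of cookie walls demands.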

Note that personal data may be processed if the processing is necessary under one of the other five legal bases—for example, the vital interest of the data subject, a legal obligation, or the legitimate interest of the controller (whoever determines the purpose and means of processing). The latter legal basis, the legitimate interest of the controller, however, requires a balancing act: considering whether the end user's interests or rights and freedoms override the interest of the controller. Try to imagine whether and how interaction design could help to clarify and support such a balancing act. Wouldn't it be great if these legal stipulations became part of the interactional frames used to create a website's or application's affordances?
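
As one speculative illustration of how that balancing act could be made explicit enough for an interface to surface it, a backend might refuse to rely on legitimate interest unless a documented assessment exists that the interface can render to the end user. The LegitimateInterestAssessment structure and its field names are hypothetical, chosen only for this sketch.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class LegalBasis(Enum):
    # The six legal bases for processing under Art. 6(1) GDPR
    CONSENT = auto()
    CONTRACT = auto()
    LEGAL_OBLIGATION = auto()
    VITAL_INTEREST = auto()
    PUBLIC_TASK = auto()
    LEGITIMATE_INTEREST = auto()

@dataclass
class LegitimateInterestAssessment:
    controller_interest: str       # what the controller stands to gain
    impact_on_data_subject: str    # what the end user stands to risk
    subject_rights_override: bool  # outcome of the balancing test

def processing_permitted(basis: LegalBasis,
                         assessment: Optional[LegitimateInterestAssessment] = None) -> bool:
    # Legitimate interest is a valid basis only when a balancing assessment
    # exists and the end user's rights and freedoms do not override the
    # controller's interest; an interface could render that assessment to
    # the end user instead of burying it.
    if basis is LegalBasis.LEGITIMATE_INTEREST:
        return assessment is not None and not assessment.subject_rights_override
    return True  # the other bases carry their own conditions, elided here
```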


I would challenge HCI researchers to invest in creating checks and balances between users and end users, making sure that end users are aware of how their data may be used against them at some point, instead of investing all creativity in computational nudging to lure people into whatever suits whoever pays for the design. Not because it is ethically "good," but rather because this is how we organize society in a constitutional democracy.

The reason I pick on consent is that legal protection by design aims to preserve the force of law while avoiding overdetermination by the force of technology. The latter would equate with legal by design; attempts to technologically enforce legal rights, freedoms, and obligations would run counter to all that law stands for. A law that cannot be disobeyed does not qualify as law, but rather as brute force or discipline. The idea is not to figure out how to use sentiment analysis, A/B testing, or cognitive biases to influence people as if they were pawns to be moved around, but rather how to develop interfaces and backend systems that enhance rather than decrease human agency.

The Computational Perspective

By Virginia Dignum

Consent, as highlighted above, is an issue of participation. You can consent only if you are somehow involved in the process. As such, Mireille's challenge to the HCI community is a timely and important one.

As the applications of AI increase, businesses and governments face a shortage of experts. Calls for education and training, for platforms for sharing data and algorithms, and in general for democratizing AI are growing in Europe and elsewhere. In fact, given Europe's geographically distributed approach to research, business, and government, many claim that the need for such platforms is greater in Europe: We don't have digital platforms the size of Google, nor do we have a centralized government with the power of the Chinese state. Making intelligent technologies, data, and computation accessible and affordable expands the possibilities of what each organization can achieve individually and fosters innovation. As an example, many datasets, models, and research results related to the coronavirus are available open source, enabling a large global community to collaborate. Nevertheless, designing an AI system requires deep knowledge of data modeling and computational techniques, and a commitment to developing and using AI responsibly [4]. Handling the challenge of democratizing AI while avoiding misuse, abuse, bias, and other problems is the focus of the many principles, requirements, and strategies launched by governments and organizations across the globe [5].

While law is a complex apparatus open to myriad interpretations, computers (and computer scientists) are pretty black and white: Things are or are not. There is little room for anything in between. I am particularly drawn to Mireille's comment that "a law that cannot be disobeyed does not qualify as law, but rather as brute force or discipline." This resonated with my work on multi-agent systems, where I have argued for the use of a social contract between the agent (an autonomous entity pursuing its own goals) and its societal role (whose objectives and norms are set by the organization). In this model, a norm can be violated and be subject to different interpretations, whereas the contract allows us to verify the overall system behavior and deviations from expectations [6].
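
To give a flavor of what "a norm can be violated" means computationally: the point is not to make violation impossible but to make deviation detectable. Below is a minimal sketch of a norm monitor in that spirit; the Norm structure and monitor function are hypothetical simplifications, not the formal organizational model of [6].

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    # A norm constrains behavior but does not make violation impossible:
    # the agent stays free to act, and the contract makes deviations visible.
    description: str
    complies: Callable[[dict], bool]  # predicate over an observed action

def monitor(actions: list[dict], norms: list[Norm]) -> list[tuple[dict, Norm]]:
    """Return the (action, norm) pairs where expectations were violated."""
    return [(a, n) for a in actions for n in norms if not n.complies(a)]

# Usage: a role-level norm that an agent may breach, and be held to account for.
no_night_ops = Norm("no operations between 00:00 and 06:00",
                    lambda a: not (0 <= a["hour"] < 6))
violations = monitor([{"hour": 3}, {"hour": 14}], [no_night_ops])
assert len(violations) == 1  # only the 03:00 action deviated from the contract
```

The design choice is that compliance checking lives outside the agent: the agent can still disobey, and accountability follows from observing the deviation, not from preventing it.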

Digital transformation, particularly the use of artificial intelligence, is challenging the social contract, creating both risks and opportunities for democracy. The current ideal of democracy is grounded in the individual's right to self-determination [7]. Important requirements are stable communication between policymakers and citizens [8] and the opportunity for citizens to freely discuss and voice their opinions, for which they need to acquire what Robert Dahl has called an enlightened understanding of public matters [7]. By affecting people's self-determination, AI thus potentially affects the democratic process, for better or for worse. On the one hand, it is becoming increasingly difficult to know and trust democratic institutions, as information manipulation, bots, and algorithmic filters distort the picture of society that reaches people through digital media. On the other hand, access to information through digital technologies can empower citizens and strengthen democratic accountability. Harnessing the potential of digital transformation as a force for the global democratic good requires strategic policy action. It falls to public powers to establish regulatory frameworks and policy measures that ensure transparency in the use of digital technologies and accountability for decisions guided by artificial intelligence systems. Here, advances in AI lead directly to a need to reassess legal frameworks and to collectively rethink current views on what constitutes the social contract.

Joint Perspective

In this dialogue, Virginia Dignum brings in the perspective of democracy and the idea of the social contract, while Mireille Hildebrandt focuses on the rule of law as a precondition of a viable democracy. The topic of consent is core, both because it asserts participation and individual self-determination as the ground and purpose of the social contract, and because both democracy and the rule of law assume that consent is not a panacea where the public interest is at stake. Jürgen Habermas [9] wrote that democracy is a system where the majority rules, but in such a way that minorities can become majorities. This has consequences for the new public spheres, run by big-tech platforms that may have no incentives to sustain unmanipulated public debate. This is where the use of AI for behavioral profiling becomes pivotal and human-computer interaction crucial.

Across the world, activists within the AI community are showing the need for AI governance approaches based on democratic practices and values, including inclusion, equality, participation, and accountability. Even though, at first glance, legal frameworks and regulation may seem antithetical to activist agendas, this article shows how important they can be to the activist cause, providing the practical and legal structures needed to ensure the kind of AI development and use that activists are demanding. As a result, several organizations are putting forward principles and guidelines [5]. In the case of the E.U., these guidelines constitute a first step toward legal regulation of AI research, practice, and use in Europe (see https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf) and strengthen the leading position of Europe in the efforts toward a human-centered, trustworthy approach to AI development and use. Another example of this European position is the highly successful public litigation based on existing legislation (the GDPR), which has, for example, resulted in the invalidation of the Privacy Shield agreement between the U.S. and the E.U. concerning the transfer of personal data to the U.S.

AI activists are exposing abuse of social media in the context of elections [10], bias in datasets and in techniques, and an overall lack of transparency. At the same time, activists are using AI applications and techniques to make basic services free for the most vulnerable in society, to support democratic processes and human rights, and to combat discrimination and bias. Despite successes in ethics and safety, in employee organizing, and in informing public opinion, corporate strategists have been slow to take up AI governance practices based on accountability and transparency that would ensure AI applications are aligned with the values of democracy. This demonstrates the importance of legal frameworks in supporting organized efforts by employees directed at AI corporations, and supports our stance that rather than opposing each other, activism and governance can together ensure human-centered and trustworthy AI.

References

1. Belfield, H. Activism by the AI community: Analysing recent achievements and future prospects. Proc. of the AAAI/ACM Conference on AI, Ethics, and Society. 2020, 15–21.

2. Hildebrandt, M. Chapter 10. Law for Computer Scientists and Other Folk. Oxford Univ. Press, Oxford, U.K., 2020.

3. Hildebrandt, M. Chapter 5. Law for Computer Scientists and Other Folk. Oxford Univ. Press, Oxford, U.K., 2020.

4. Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer Nature, 2019.

5. Jobin, A., Ienca, M., and Vayena, E. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 9 (2019), 389–399.

6. Dignum, V. A model for organizational interaction: Based on agents, founded in logic. Ph.D. dissertation, SIKS, 2004.

7. Dahl, R.A. Democracy and its Critics. Yale Univ. Press, New Haven, CT, 1989.

8. Benz, A. and Papadopoulos, Y. Introduction. Governance and democracy: Concepts and key issues. In Governance and Democracy: Comparing National, European and International Experiences. A. Benz and Y. Papadopoulos, eds. Routledge, London, U.K., 2006, 6.

9. Habermas, J. and Rehg, W. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (Reprint Edition). The MIT Press, Cambridge, MA, 1998, 179.

10. Sabbagh, D. Trump 2016 campaign 'targeted 3.5m black Americans to deter them from voting.' The Guardian. Sep. 28, 2020; https://www.theguardian.com/us-news/2020/sep/28/trump-2016-campaign-targeted-35m-black-americans-to-deter-them-from-voting

Authors

Mireille Hildebrandt is a research professor of interfacing law and technology at Vrije Universiteit Brussel, appointed by the VUB Research Council, and full professor at the Institute of Computing and Information Sciences at Radboud University in the Netherlands. She was awarded an ERC Advanced Grant for research into computational law. mireille.hildebrandt@vub.be

Virginia Dignum is professor of responsible artificial intelligence at Umeå University, Sweden, and is associated with TU Delft in the Netherlands. She is the director of WASP-HS, the Wallenberg Program on Humanities and Society for AI, Autonomous Systems and Software. She is a fellow of the European Artificial Intelligence Association (EURAI). Her book Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way was published by Springer Nature in 2019. virginia@cs.umu.se


©2021 ACM  1072-5520/21/01  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

