Digital technologies are essential for learning, health, politics, and commerce. We have innovative products and greater control over our world. Dangerous physical labor is done by robots. Computational medicine lengthens our life span. And tech supports collaboration and community, which proved essential during the pandemic. Ethical technology use has been advanced by digital pioneers such as Tim Berners-Lee, Doug Engelbart, Batya Friedman, Steve Jobs, Alan Kay, J.C.R. Licklider, Ada Lovelace, and Joe Weizenbaum.
Yet much is troubling. We depend upon software that nobody fully understands and that is vulnerable to cyberterrorism. Privacy has been overrun by governments and surveillance capitalism. Totalitarian control is beyond that envisioned by Bentham with his panopticon and Orwell in 1984. The Internet serves us news matching our prejudices, leaving us increasingly unable to tell true from false. Our children are addicted to their devices, while ubiquitous tech has helped us become workaholics. Jobs are demolished without social safety nets. Digital leviathans threaten to control all commerce.
I am most concerned about the hype for modern AI, and the risks to society stemming from its premature use. We are particularly vulnerable in medical diagnosis, criminal justice, senior care, driving, and warfare. Here, AI deployment has begun or is imminent. Yet much current AI is unreliable and inconsistent, without common sense; deceptive in hiding that it is an algorithm and not a person; unable to explain decisions and actions; unfair and unjust; free from accountability and responsibility; and used but not trusted.
Our digital dreams are now nightmares. Yet we are not helpless victims of external forces. There is time to assert human control. There is much that we can and must do to make the digital world a better place.
Thoughtful citizens must avoid tech addiction, develop technological self-confidence, and speak out when tech threatens their values. Society, typically acting via government, must insist that programs and practitioners be accredited and licensed, and tech firms be held accountable via laws and regulation.
I shall focus on what UX professionals must do: supporting citizens in achieving tech self-confidence and in taking action, pursuing public service careers, ensuring that students get a liberal education, insisting on tech usability and responsive customer support, designing while keeping values foremost, doing research toward reliable and safe AI, and speaking out in the face of tech injustice or danger.
Ordinary people should not be intimidated by technology and should not accept geek speak. You must do your part. Help your friends and neighbors when they are stuck and frustrated. Practice describing in plain English the tech work you are doing or the technology discussed in the media.
UX people should consider government careers or even running for office. Most elected officials are lawyers. Occasionally, we see legislators who were doctors or teachers or businesspeople. Computer professionals should also assume these offices, bringing their expertise to inform tech policy. Examples of those who have done so are Taavi Kotka and Tarvi Martens, who leveraged their tech backgrounds to create e-Estonia.
Before the 1960s, universities were committed to a liberal education. This was thrown out by many colleges during the Vietnam War. As a result, some students today have little idea of the world in which they will use their expertise, or of their responsibilities as citizens. UX education must not focus solely on computing and design, as is often the case. CS education must be broadened; HCI academics can play a key role here.
Know the word bloatware: systems with thousands of commands and features, most of which appeal to only a minority of users. Most apps are packed with more features than people need or can use. A study 20 years ago by my former Ph.D. student—and now University of British Columbia professor—Joanna McGrenere, along with Gale Moore, found that most users of Microsoft Word employed only about 50 of its thousands of commands and options. Bloatware makes systems forbidding. Campaign vociferously against bloatware and for the design of lean apps.
Speak out against systems that are cluttered, confusing, and inadequately tested, and that leave people unhappy and frustrated. Complain vociferously about such monsters of poor design and careless implementation. Insist on responsive customer support. Tell everyone about GetHuman.com, which discloses the phone numbers of company support departments staffed by personnel eager to help—numbers the companies themselves have hidden.
Consider a firm's ethical track record when deciding whether to work there. Facebook saw this after the Cambridge Analytica scandal, in which user data was harvested to influence the 2016 presidential election; many new grads shunned the company. One wonders how Facebook recruiting has fared recently.
Many aspects of CS R&D require UX talent to realize human dreams rather than to support the nightmares. You can help ensure that AI agents are identified as algorithms rather than as people. The decisions and actions of AI algorithms must be explainable; the explanations must be understandable to humans. Work on ensuring that algorithms make decisions that are fair and unbiased, and that the people behind the algorithms are held responsible and accountable for their actions.
Designing AI algorithms while keeping in mind goals such as honesty, openness, transparency, and fairness is an example of value sensitive design (VSD), as developed by Batya Friedman and her collaborators. Digital technologists in design roles can ensure that system functionality and UIs reflect our values. VSD is an example of technology motivated by social good. UX professionals can act for social good. They can look for applications of computers that address societal problems such as the environment, hate speech, and disinformation that pervade social media, as well as technology addiction. They can work on ethical AI.
Employees can speak up when they believe their firm's actions are immoral or evil. Start with private conversations with coworkers or managers, then engage in collective actions. If that doesn't work, go public and make the issues known, as has happened at Google. There are also two stronger kinds of employee actions.
A tech person can become a conscientious objector. Like soldiers refusing to serve in any war or a specific war, this is the refusal to work for a firm or on a project. The distinction between general and selective objection is key. My friend Louis Font, who in 1969 became the first West Point graduate to become a selective conscientious objector and refuse to serve in the Vietnam War, was not objecting to all war. One can object to a specific task or to all work at a firm.
The final action is whistleblowing. Whistleblowing occurs when an employee is so convinced of the immorality of a firm's confidential actions that he or she announces to the world what it is doing. A recent example is Frances Haugen's disclosures about Facebook. The U.S. government forbids retaliation against its employees who blow the whistle. Few such protections exist in private firms, so conscientious objectors and whistleblowers speak up at grave personal risk.
In summary, there is much we can do. We can work with people to insist that they can understand enough about tech to exercise their rights as citizens with respect to its use. We can choose careers in government service. We can ensure that technical education does not create narrow technicians. We can speak out when a company's actions are inconsistent with our values and direct our work to advancing the public good. We can think hard and feel deeply about what we believe, and then step forward and act.
I cover these issues at greater length in my recent book and in an article in the March 2022 issue of Communications of the ACM. My ideas complement approaches aimed at broadening and humanizing the discipline of user experience design discussed in books such as those of Jeffrey and Shaowen Bardzell on how the humanities can improve design; Batya Friedman and David Hendry suggesting that values need to drive design; Sasha Costanza-Chock on the relationship of communities and power to design; Susanne Bødker et al. on the Scandinavian school of participatory design; and Ben Shneiderman on human-centered AI.
2. Baecker, R. Digital Dreams Have Become Nightmares: What We Must Do. 2021; https://compsocietytexts.com
Ronald M. Baecker is professor emeritus of computer science at the University of Toronto, recipient of a 2020 CHI Social Impact Award, and an ACM Distinguished Speaker. His books include Computers and Society: Modern Perspectives (2019), The COVID-19 Solutions Guide (2020), and Digital Dreams Have Become Nightmares: What We Must Do (2021). He is the organizer of the virtual community Computers and Society (computers-society.org). firstname.lastname@example.org @ronbaecker
©2022 ACM 1072-5520/22/03 $15.00
The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.