
Technological determinism


Authors: Jonathan Grudin
Posted: Wed, March 09, 2016 - 12:21:43

Swords and arrows were doomed as weapons of war by the invention of a musket that anyone could load, point, and shoot. A well-trained archer was more accurate, but equipping a lot of farmers with muskets was more effective. Horse-mounted cavalry, feared for centuries, were also eliminated as a new technology swept across the globe, putting people out of work and prior technologies into museums.

Are we in control of technology, or at its mercy?

Concerns about technology predated computers, but they proliferate as digital technology spreads through workplaces and homes and into our clothing and bodies. We design technology. Do we shape how it is used?

Technological determinism, also called the technological imperative, became a computer science research focus when organizations began acquiring data processing systems half a century ago. In an excellent 1991 Communications of the ACM review titled “Examining the Computing and Centralization Debate,” Joey George and John King noted that the initial studies produced conflicting hypotheses: (i) computers lead to the centralization of decision-making in organizations, and (ii) computers lead to decentralization of decision-making. This contradiction led to two new hypotheses: (iii) computerization is unrelated to centralization of organizational decision-making; (iv) management uses computerization to achieve its goals. George and King found that a fifth theory best fit the results: (v) management tries to use computerization to achieve its goals; sometimes it succeeds, but environmental forces and imperfect predictions of cause and effect influence outcomes. They concluded, “the debate over computing and centralization is over.”

In a 1992 paper in Organization Science titled “The Duality of Technology: Rethinking the Concept of Technology in Organizations,” Wanda Orlikowski applied the structuration theory of sociologist Anthony Giddens to technology use and reached a similar conclusion. Giddens argued that human agency is constrained by the structures around us—technology and sociocultural conventions—and that we in turn shape those structures. Software, malleable and capable of representing rules, is especially conducive to such analysis.

These were guardedly optimistic views of the potential for human agency. Today, media articles that raise concerns such as oppressive surveillance, the erosion of privacy, excessive advertising, and unhealthy addiction to social media conclude with calls to action that assume we are in control and not in the grip of technological imperatives. How valid is this assumption? Where can we influence outcomes, and how?

It’s time to revisit the issue. Twenty-five years ago, digital technology was a puny critter. We had no Web, wireless, or mobile computing. Few people had home computers, much less Internet access. Hard drives were expensive, filled up quickly, and crashed often. The determinism debate in the early 1990s was confined to data and information processing in organizations. The conclusion—that installing a technology in different places yielded different outcomes—ruled out only the strongest determinism: an inevitable specific effect in a short time. That was never a reasonable test.

Since then, the semiconductor tsunami has grown about a million-fold. Technology is woven ever more deeply into the fabric of our lives. It is the water we swim in; we often don’t see it; we do not link effects to their causes. Whether our goal is to control outcomes or just influence them, we must understand the forces that are at work.

Technology is sometimes in control

The march of digital technology causes extinctions at a rate rivalling asteroid collisions and global warming—photographic film, record players, VCRs, rotary dial phones, slide carousels, road maps, and encyclopedias are pushed from the mainstream to the margins.

This isn’t new. The musket was not the first disruptive technology. Agriculture caused major social changes wherever it appeared. Walter Ong, in Orality and Literacy: The Technologizing of the Word, argued that embracing reading and writing always changes a society profoundly. The introduction of money shifted how people saw the world in fairly consistent ways. At the risk of a computer professional’s hubris, I would say that if any technology has an irresistible trajectory, digital technology does. Yet some scholars who accept historical analyses identifying widespread unanticipated consequences of telephony or the interstate highway system resist the idea that today we are swept in directions we cannot control.

Why it matters

Even the most beneficial technologies can have unintended side effects that are not wonderful. Greater awareness and transparency that enable efficiency and the detection of problems (“sunlight is the best disinfectant”) can erode privacy. Security cameras are everywhere because they serve a purpose. Cell phone cameras expose deviant behavior, such as that perpetrated by repressive regimes. But opinions differ as to what is deviant; your sunshine can be my privacy intrusion.

Our wonderful ability to collaborate over distances and with more people enables rapid progress in research, education, and commerce. The inescapable side effect is that we spend less time with people in our collocated or core communities. For millions of years our ancestors lived in such communities; our social and emotional behaviors are optimized for them. Could the erosion of personal and professional communities be subtle effects of highly valued technologies?

The typical response to these and other challenges is a call to “return to the good old days,” while of course keeping technology that is truly invaluable, without realizing that benefits and costs are intertwined. Use technology to enhance privacy? Restore journals to pre-eminence and return conferences to their community-building function? Easier said than done. Such proposals ignore the forces that brought us to where we are.

Resisting the tide

We smile at the story of King Canute placing his throne on the beach and commanding the incoming tide to halt. The technological tide that is sweeping in will never retreat. Can we command a halt to consequences for jobs, privacy, social connectedness, cybercrime, and terrorist networks? We struggle to control the undesirable effects of a much simpler technology—modern musketry.

An incoming tide won’t be arrested by policy statements or mass media exhortations. We can build a massive seawall, Netherlands-style, but only if we understand tidal forces, decide what to save and what to let go, budget for the costs, and accept that an unanticipated development, like a five-centimeter rise in ocean levels, could render our efforts futile.

An irresistible force—technology—meets an immovable object—our genetic constitution. Our inherited cognitive, emotional, and social behaviors do not stand in opposition to new technology; together they determine how we will tend to react. Can we control our tendencies, build seawalls to protect against the undesirable consequences of human nature interacting with technologies it did not evolve alongside? Perhaps, if we understand the forces deeply. To assert that we are masters of our destiny is to set thrones on the beach.

Examples of impacts noticed and unnoticed

Surveillance. Intelligence agencies vs. citizens, surveillance cameras vs. criminals, hackers vs. security analysts. We are familiar with these dilemmas. More subtly, the increased visibility of activity reveals ways that we routinely violate policies, procedures, regulations, laws, and cultural norms—often for good reason. Rules may be intended only as guidelines, or may not be flexible enough to be efficient in every situation.

Greater visibility also reveals a lack of uniform rule enforcement. A decade ago I wrote:

Sensors blanketing the planet will present us with a picture that is in a sense objective, but often in conflict with our beliefs about the world—beliefs about the behavior of our friends, neighbors, organizations, compatriots, and even our past selves—and in conflict with how we would like the world to be. We will discover inconsistencies that we had no idea were so prevalent, divergences between organizational policies and organizational behaviors, practices engaged in by others that seem distasteful to us.

How we as a society react to seeing mismatches between our beliefs and policies on the one hand and actual behavior on the other is key. Will we try to force the world to be the way we would like it to be? Will we come to accept people the way they are?

Community. Computer scientists and their professional organizations are canaries in the coal mine: early adopters of digital technology for distributed collaboration. Could this terrific capability undermine community? The canaries are chirping.

In “Technology, Conferences, and Community” and “Journal-Conference Interaction and the Competitive Exclusion Principle,” I described how digital document preparation and access slowly morphed conferences from community-building events into archival repositories, displacing journals. Technology enabled the quick production of high-quality proceedings and motivated a prohibition on “self-plagiarizing” by republishing conference results in journals. To position themselves as arbiters of quality, conferences rejected so many submissions that attendance growth stalled and membership in sponsoring technical groups fell, even as the number of professionals skyrocketed. Communities fragmented as additional publication outlets appeared.

Community can be diminished by wonderful technologies in other ways. Researchers collaborate with distant partners—a great benefit—but this reduces the cohesiveness of local labs, departments, and schools. This often yields impersonal, metrics-based performance assessment and an overall work speed-up, as described in a study now being reviewed.

Technology transformed my workplaces over the years. Secretarial support declined, an observation confirmed by national statistics. In my first job, a secretary was hired for every two or three entry-level computer programmers to type, photocopy, file, handle mail, and so on. (Programs were handwritten on code sheets that a secretary passed on to a keypunch operator who produced a stack of 80-column cards.) Later at UC Irvine, our department went from one secretary for each small faculty group to a few who worked across the department. Today, I share an admin with over 100 colleagues. I type, copy, file, book travel, handle mail, file my expense reports, and so forth. 

Office automation is a technology success, but there were indirect effects. Collocated with their small groups, secretaries maintained the social fabric. They said “good morning,” remembered birthdays, organized small celebrations, tracked illnesses and circulated get-well cards, noticed mood swings, shared gossip, and (usually) admired what we did. They turned a group into a small community, almost an extended family. Many in a group were focused on building reputations across the organization or externally; the professional life of a secretary was invested in the group. When an employer began sliding toward Chapter 11, I knew I could find work elsewhere, but I continued to work hard in part because the stressed support staff, whom I liked, had an emotional investment and few comparable job possibilities.

We read that lifetime employment is disappearing. It involved building and maintaining a community. We read less about why it is disappearing, and about the possible long-term consequences of eroding loyalties on the well-being of employees, their families, and their organizations.

The road ahead

Unplanned effects of digital technology are not unnoticed. Communications of the ACM publishes articles decrying our shift to a conference orientation and deficiencies in our approach to evaluating researchers. Usually, the proposed solutions follow the “stop, go back” King Canute approach. Revive journals! Evaluate faculty on quality, not quantity! No consideration is given to the forces that pushed us here and may hold us tight.

Some cultures resist a technology for a period of time, but globalization and Moore’s law give us little time to build seawalls today. We reason badly about exponential growth. An invention may take a long time to have even a tiny effect, but once it does, that tiny effect can build to a powerful one with startling speed.
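To make that intuition concrete, here is a minimal sketch of how a doubling process behaves; the starting level and the annual doubling rate are hypothetical illustrations, not figures from this post. An effect that doubles every year stays negligible for well over a decade, then climbs from about one percent to saturation in roughly seven more:

# Illustrative only: a hypothetical effect that doubles each year,
# starting at one millionth of saturation. It remains invisible for
# about fifteen years, then races from roughly 1% to saturation.
level = 1e-6  # hypothetical starting effect, as a fraction of saturation
for year in range(21):
    print(f"year {year:2d}: {min(level, 1.0):7.2%}")
    level *= 2  # rough Moore's-law-style doubling assumption

The point is not the particular numbers but the shape of the curve: by the time the effect is noticeable, only a few doublings remain before it dominates, which is why seawalls must be planned long before the water is visible.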

Not every perceived ill turns out to be bad. Socrates famously decried the invention of writing. He described its ill effects and never wrote anything, but despite his eloquence, he could not command the tide to stop. His student Plato mulled it over, sympathized—and wrote it all down! We will likely adjust to losing most privacy—our tribal ancestors did without it. Adapting to life without community could be more challenging. We may have to endure long enough for nature to select for people who can get by without it.

The good news is that at times we do make a difference. Muskets doomed swords and arrows, but today, different cultures manage guns differently, with significant differences in outcomes. Rather than trying to build a dike to stop a force coming at us, we might employ the martial art strategy of understanding the force and working with it to divert it in a safe direction.




Jonathan Grudin

Jonathan Grudin has been active in CHI and CSCW since each was founded. He has written about the history of HCI and challenges inherent in the field’s trajectory, the focus of a course given at CHI 2022. He is a member of the CHI Academy and an ACM Fellow. [email protected]





@ComputerWorld (2016 03 14)

I think technology makes us more human because we can control it; we are not at its mercy. I also do not think it will take over everything from humans.
