Old models no longer suffice

XV.5 September + October 2008
Page: 47

FEATURE
What should be automated?


Author:
Matti Tedre

One of the most influential figures in the early development of computer science, George Forsythe, argued in 1968 that “the question ‘What can be automated?’ is one of the most inspiring philosophical and practical questions of contemporary civilization [1].” Almost 20 years later, Peter Denning wrote that computer science is “the body of knowledge dealing with the design, analysis, implementation, efficiency, and application of processes that transform information” and suggested that “What can be automated?” is the fundamental question underlying all of computing [2]. That question emphasizes the very foundations of computing as a discipline—it asks, in a very general way, what in principle can be automated with any kind of machinery. Later Denning et al. [3] refined the question to “What can be (efficiently) automated?” Although the discipline of computing has diversified greatly since 1968, this “fundamental question” has rarely been challenged.

In addition to that one fundamental question, Denning listed 11 topic areas of computing and outlined a number of fundamental questions asked in each topic area [2]. In the end Denning listed 50 questions altogether. However, 19 of those 50 fundamental questions deal with how rather than what. Denning’s 50 fundamental questions include questions such as: “How can large databases be protected from inconsistencies generated by simultaneous access…?” “How can the fact that a system is made of components be hidden from users who do not wish to see that level of detail?” and “What basic models of intelligence are there, and how do we build machines that simulate them [2]?”

The theoretical question “What can be automated?” is the central question in computability theory, which studies what in principle can be computed with any kind of machinery. The theoretical question “What can be efficiently automated?” is one of the central questions in computational complexity theory, which studies the amount of resources, such as time and storage, that it takes to solve different kinds of computational problems. Researchers in those fields are often indifferent to the specific technologies that might be used to automate processes. Many computing researchers do not, however, stop at theories about automation; their work also includes implementing systems that automate various processes. “How can one automate things efficiently and reliably?” is the crucial issue for practically oriented computing researchers [1, 3, 4]. Although scientifically and practically oriented researchers have different aims from theoretically oriented researchers, the theoretical questions about the limits of computing are fundamental to empirical research on computation and computers, and crucial to the design and implementation of computing systems.
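
The boundary that computability theory draws is easiest to see in the classic halting problem, a standard textbook illustration rather than an example discussed in this article. The sketch below assumes a hypothetical decider halts(f, x) that answers whether program f halts on input x; the self-referential construction shows that no correct, total version of it can exist, so this particular process cannot be automated by any machinery.

    # A minimal sketch of the halting-problem argument. The function names are
    # illustrative; halts() stands for a hypothetical oracle that cannot exist.

    def halts(f, x):
        """Hypothetical total decider: True iff f(x) eventually halts."""
        raise NotImplementedError("no correct, total implementation is possible")

    def paradox(f):
        """Do the opposite of whatever halts() predicts about f(f)."""
        if halts(f, f):
            while True:        # f(f) predicted to halt: loop forever instead
                pass
        else:
            return             # f(f) predicted to loop: halt immediately

    # Feeding paradox to itself is contradictory:
    #   halts(paradox, paradox) == True   implies  paradox(paradox) never halts
    #   halts(paradox, paradox) == False  implies  paradox(paradox) halts
    # Hence no such halts() can be written: halting cannot be automated in general.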

Answers to questions that ask what are different from answers to questions that ask how. Generally speaking, theoretical and empirical scientists tend to ask what and why, and engineers tend to ask how. The theoretician’s questions—what can be automated? and what can be efficiently automated?—are concerned with finding out which processes can be automated and which cannot, and finding out which processes can be automated efficiently (according to some criterion of efficiency). For the theoretician’s questions there can be straightforward yes/no answers. For the practitioner’s question, “How can process p be automated?” there usually are many competing answers. For instance, implementations a, b, and c can all automate process p and be equally efficient in their use of space, time, and other resources, or they can all be non-optimal in different ways.
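
The contrast can be made concrete with sorting, used here purely as an illustrative sketch of my reading rather than an example from the article: the theoretician’s answer to “Can sorting be automated efficiently?” is simply yes, whereas the practitioner’s question “How?” admits many answers. The two implementations below (with invented names) automate the same process, both in O(n log n) time, yet differ in how they do it.

    # Two of the many possible answers to "How can sorting be automated?"
    # Both automate the same process and both run in O(n log n) time.
    import heapq

    def sort_by_merging(items):
        """Recursive merge sort."""
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left, right = sort_by_merging(items[:mid]), sort_by_merging(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    def sort_by_heap(items):
        """Heap sort built on the standard library's binary heap."""
        heap = list(items)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]

    # Equally correct answers to the same "how" question.
    assert sort_by_merging([3, 1, 2]) == sort_by_heap([3, 1, 2]) == [1, 2, 3]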

The theoretician’s and practitioner’s questions above imply that computing researchers deal with issues of computability, tractability, efficiency, operationality, usability, maintainability, and reliability. Those issues are concerned with what can be efficiently automated, why some things can be efficiently automated, how one can automate those things, and how to achieve efficient and reliable implementations [4]. Those questions are indeed central to the work of many computing fields; they probe the theoretical and technical limits of machine computability, and they focus on what machines can do. Those questions crystallize the tradition of computing research that has been called “machine-centered computing [5].”

However, since the 1980s the focus in computing research has been gradually broadening from the machine and automation toward how and where computers are used, the actual activities of end users, and how end users collaborate and interact [6]. For instance, Ben Shneiderman has argued that a clear shift from machine-centered computing toward human-centered computing has occurred [5]. The attention devoted to the social implications of computing has also continued to increase. Human-centered views of computing, implicitly or explicitly, incorporate the view that technology has no intrinsic value, but that the value of any technology is measured by the way people’s lives change when they adopt it or when it enters society. The shift of focus from the machine to the user of the machine seems small but has profound ramifications.

Neither the theoretician’s question “What can be efficiently automated?” nor the practitioner’s question “How can processes be automated reliably and efficiently?” includes, explicitly or implicitly, any questions about why processes should be automated at all, whether it is desirable to automate things or to introduce new technologies, or who decides what will be automated. Shneiderman argued that the key questions of human-centered computing are “not whether broadband wireless networks will be ubiquitous, but how your life will change as a result of them [5]”. But changing people’s lives is certainly not a mere technical matter.

Technological development has always entailed questions of whether certain technologies should be introduced in society or not, and new media and communication systems have always been especially subject to suspicion. However, many technologists exclude ethical and social concerns from their work by arguing that science, technologies, or technological development are neutral or value free. When the central concern is what machines can do, ethical issues can be tabled with the argument that machines have no conscience, that technologies are value free, and that theories are neither good nor evil. But when the central concern is what people can do or how people’s lives will be changed, the whole gamut of ethical and social questions cannot be ignored [4]. The questions of machine-centered computing are descriptive questions, questions about what is, but human-centered computing entails normative questions, questions about what ought to be.

The surfacing of human-centered computing alters some of the questions in computing and inevitably brings along new questions altogether. Those questions include:

  • What should be automated?
  • Should process p be automated or not?
  • Why should process p be automated?
  • When should process p be automated and when not?
  • What individual or societal consequences does automating process p have?
  • Are the changes that automation brings about desirable?
  • How can we know what should be automated?
  • Who gets to decide what will be automated?

In contrast to the fundamental question underlying all of machine-centered computing, “What can be efficiently automated?” the fundamental question underlying all of human-centered computing might be, “How can one efficiently automate processes that can and should be automated?” This question includes the practitioner’s question, “How can process p be automated?” the theoretician’s question, “What can be efficiently automated?” and the question of human-centered computing, “What should be automated?” All three aspects are necessary, but they lie in different domains of knowledge. The first belongs to the domains of the engineering-oriented and empirical computing disciplines; the second belongs also to the domain of theoretically oriented computer science; and the third belongs perhaps to the domains of the social sciences and applied philosophy.

The theoretician’s questions, the practitioner’s question, and the human-centered question are portrayed in Figure 1. All of those questions are important in their own right, and research in the nonintersecting areas is valuable as well. However, whereas in machine-centered computing the production of efficient and reliable technology takes place in the area delineated by the question “How do we automate things reliably and efficiently?” in human-centered computing the responsible production of useful and fair technology takes place in the intersection marked with light gray.

In order for human-centered computing to really respond to human needs, it is not enough to consider only technical and ethical questions; one must also consider the needs, wants, hopes, expectations, wishes, fears, concerns, and anxieties that people have regarding technology. Although human-centered computing does not eliminate the need for technological experts, the methodological, conceptual, and theoretical toolbox of technology experts is insufficient for dealing with the unique issues of human-centered computing. That is, the toolbox of the computing disciplines is for the most part insufficient for selecting, recording, understanding, explaining, analyzing, or predicting phenomena in the field of human affairs. We need to borrow tools from other disciplines.

Computing researchers in general need not become experts in sociocultural, ethical, economic, or other issues outside of the discipline of computing. Mastering computing topics is hard enough as it is. It is equally unreasonable to expect people from other disciplines to become experts in the discipline of computing. Experts, specialists, and professionals have their areas of expertise and they should do what they know best. Instead of scores of broadly trained bricoleurs, human-centered computing requires a working multidisciplinary combination of experts from different fields. Human-centered computing does not necessarily need to spawn new interdisciplinary fields, but it might best work as an eclectic, multidisciplinary umbrella term for different kinds of computing research that share the focus on the human.

The increasing interest in human-centered issues in the field of computing is especially evident in practically oriented branches of computing. Focusing on the human in computing research inevitably brings forth a number of ethical and social questions that have not been important in machine-centered computing. In human-centered computing, concerns about social and cultural responsibility, responsiveness to people’s needs, the consideration of individual and social consequences, and sensitivity to human expectations and anxieties limit the production of computing machinery as much as the machine-centered questions of efficiency and reliability do. The ethical questions of human-centered computing are certainly not any easier than the technical and theoretical questions of machine-centered computing. But no matter how one approaches the new problems that a human focus brings forth, a shift from machine-centered computing to human-centered computing inevitably shifts the question from “What can be automated?” to “What should be automated?”

References

1. Forsythe, George. “Computer Science and Education.” In Proceedings of IFIP Congress 1968, vol. 2, 92–106. Edinburgh, UK, August 5–10, 1968.

2. Denning, Peter J. “The Science of Computing: What is Computer Science?” American Scientist 73, no. 1 (1985): 16–19.

3. Denning, Peter J., Douglas E. Comer, David Gries, Michael C. Mulder, Allen Tucker, A. Joe Turner, and Paul R. Young. “Computing as a Discipline.” Communications of the ACM 32, no. 1 (1989): 9–23.

4. Raatikainen, Kimmo. “Issues in Essence of Computer Science.” An English translation of essays for Tietojenkäsittelytiede 2/3 (1991–1992). Available at https://www.cs.helsinki.fi/u/kraatika/Papers/IssuesInEssenceOfComputerScience.pdf

5. Shneiderman, Ben. Leonardo’s Laptop: Human Needs and the New Computing Technologies. Cambridge, Mass.: The MIT Press, 2002.

6. Grudin, Jonathan. “The Computer Reaches Out: The Historical Continuity of Interface Design.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People, 261–268. Seattle, Washington, April 1–5, 1990.

Author

Dr. Matti Tedre works as an associate professor and head of the B.Sc. program in IT at Tumaini University, Tanzania. His research interests include information technology education, social studies of computer science, the history of computer science, and the philosophy of computer science. Previously, he has worked and studied at the University of Joensuu in Finland, studied at Ajou University and Yonsei University in South Korea, visited the University of Pretoria in South Africa, and worked in the software industry.

Footnotes

DOI: http://doi.acm.org/10.1145/1390085.1390096

Figures

Figure 1. Fundamental questions in human-centered computing.

©2008 ACM  1072-5220/08/0900  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.


 
