Timelines

XIII.5 September + October 2006

Turing maturing


Author: Jonathan Grudin

"In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable."—Marvin Minsky, Life magazine interview, 1970

Why hasn’t HCI been closer to AI, the most colorful and controversial branch of computer science? Both fields explore the nexus of computing and intelligent behavior. Both claim Allen Newell as a founding figure. SIGCHI and SIGART cosponsor the Intelligent User Interfaces (IUI) conferences.

Early work spanning the fields included research by Henry Lieberman on programming by example and interface agents, by Fabio Paterno on automated design and evaluation methods, and by Gerhard Fischer and his students on coaching and critiquing systems. Recent efforts include recommender systems from Patti Maes, Paul Resnick, and the Minnesota group led by Joe Konstan, John Riedl, and Loren Terveen; Sharon Oviatt’s insightful speech recognition research; and applications of machine learning by Eric Horvitz (a frequent CHI and IUI author, current AAAI President, and my manager).

However, not many researchers have truly spanned HCI and AI. IUI annual meetings did not begin until 1997, and CHI researcher participation has been limited. Many AI topics are marginalized in CHI and vice versa. Several factors work to keep the fields apart.

AI’s focus on future possibilities has relied heavily on government funding. CHI, focused on applications that are or could be in widespread use, has received support primarily from industry. This results in different priorities, methods, and assessment criteria. In addition, HCI and AI have competed for students and researchers.

In the January interactions, I described different HCI research cultures. Government funding has aligned primarily with human factors and ergonomics research. True to this pattern, the first IUI conference in 1993 had considerable HF&E participation. When IUI resumed in 1997, it shifted away from HF&E, but not toward CHI so much as toward AI and participation from Europe and Asia.

But I sense a more fundamental divide between AI and HCI. I’ve worked as an HCI person in four AI groups and have had AI colleagues almost continuously for 30 years. And I might have special insight into this divide, due to an accident of birth.

Two Boomers

Artificial intelligence and I were born the same year. We were fathered by mathematicians and grew up surrounded by them. Each of us was mathematically inclined before turning toward engineering. Similarities continue. AI and I said and did things in the late 1960s and early 1970s that we prefer not to talk about today. After subsequent ups and downs, each of us is convinced that our past accomplishments are under-appreciated and that our best days lie ahead.

There were differences. My mother was a psychologist. Not so AI, according to John McCarthy [3], who coined the term "artificial intelligence" in 1955:

"As suggested by the term `artificial intelligence’ we weren’t considering human behavior except as a clue to possible effective ways of doing tasks. The only participants who studied human behavior were Newell and Simon. [The goal] was to get away from studying human behavior and consider the computer as a tool for solving certain classes of problems. Thus AI was created as a branch of computer science and not as a branch of psychology."

McCarthy and other mathematicians defined artificial intelligence. When you ask mathematicians to define intelligence, what do you get? Before the answer, some history…

AI Origins: A Rollercoaster Ride Begins

In 1949 the British mathematician Alan Turing created a sensation in the London press by writing, "I do not see why [the computer] should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms. I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine." A year later Turing’s "Computing Machinery and Intelligence," Claude Shannon’s "Programming a Computer for Playing Chess," and Isaac Asimov’s three laws of robotics were published.

The term "artificial intelligence" first appeared in McCarthy’s call for a 1956 workshop held at Dartmouth. Optimistic, eye-catching forecasts of the participants attracted attention but soon collided with reality. This established a pattern that was to play out repeatedly. Hans Moravec [4] wrote:

"In the 1950s, the pioneers of AI viewed computers as locomotives of thought, which might outperform humans in higher mental work as prodigiously as they outperformed them in arithmetic, if they were harnessed to the right programs… By 1960 the unspectacular performance of the first reasoning and translation programs had taken the bloom off the rose."

This downturn was short-lived. The Sputnik launch led to major funding. J.C.R. Licklider’s 1960 essay "Man-Computer Symbiosis" identified a major role for artificial intelligence. When Licklider became director of ARPA’s IPTO program in 1962, AI received extraordinary support. For example, MIT’s Project MAC funding began at $13 million for 1963 and rose to over $20 million annually in the late 1960s, in 2006 US dollars. IPTO steadily expanded the number of AI laboratories being funded, giving researchers financial independence in their departments and establishing AI as a field. Research also thrived abroad, especially in the UK.

Licklider, a psychologist, saw the uncertainties. Citing a 1960 Air Force study that predicted that intelligent machines might take 20 years to arrive, Licklider wrote, "That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind." Ten or five hundred? Recipients of Licklider’s funding were on the optimistic end of this spectrum.

In 1970, Nicholas Negroponte of MIT argued compellingly that for machines to understand the context in which they operate, speech understanding is necessary. Computers would be dangerous and unreliable without speech understanding, he concluded. Although this is false as long as computers remain tools controlled by people who understand the context, funding flowed to language processing. His colleague Marvin Minsky of Project MAC went further, declaring that these AI capabilities would very soon be realized. Other leading AI researchers interviewed by the Life reporter [1] agreed that the super-genius computer would arrive by 1985.

ARPA funding led to AI demonstration projects in robotics, speech understanding, and object recognition. In the words of Terry Winograd, whose landmark AI thesis in 1970 was for a mathematics degree, these efforts added an "engineering flavor" to AI research.

HCI Germinates during an AI Winter

As it became clear that AI had again been oversold, a longer downturn set in. The British government’s 1973 Lighthill Report on the prospects of AI was perceived as so negative that UK funding all but vanished. A similar process was underway in the US. Bold ARPA initiatives in timesharing and networking had paid off handsomely; AI ventures had not and were scaled back. The large speech-understanding program was terminated in 1975.

The ensuing winter coincided with the blossoming of human-computer interaction. Influential HCI research groups formed at PARC, IBM, Digital, the Medical Research Council Applied Psychology Unit, UCSD, and Bell Labs. These labs were active in building SIGCHI, which formed in 1982. The Human Factors Society’s Computer Systems Technical Group formed and thrived. HCI also thrived in management information systems.

Was it a coincidence that HCI waxed as AI waned? Perhaps not. I was a young mathematics student in 1970. The Minsky interview stunned me. I believed him: Super-genius machines would reorganize the world. But by 1973 it was clear they were not materializing on Minsky’s schedule. I left mathematics to engage with the real world. By the end of this AI winter I was committed to human-computer interaction.

What Is Intelligence?

Alan Turing, John McCarthy, Marvin Minsky, and other AI researchers (though not all) began as mathematicians, many of them logicians. Many worked in mathematics departments before computer science departments emerged in the mid-1960s.

What is intelligence to a mathematician or logician? Mathematics is a powerful system, much of it constructible from a small set of axioms by repeated application of a small number of rules. Theorem proving was a favored topic in early AI research.

Chess and checkers also involve the repeated application of a small set of rules to a small set of objects; board games were another favored topic of early AI research.

If intelligence is the accurate application of well-defined rules to a set of symbolic objects, then a computer is perfectly poised for intelligence. Tireless symbolic processing? No problem! The recurring notion in AI theorizing—that at some point computers will take over their own education and quickly surpass human intelligence—reveals a particular view of what intelligence is.
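
To make that view concrete, here is a minimal sketch in Python; the facts and rules are invented for illustration and drawn from no system mentioned in this column. A symbolic reasoner of this kind applies every rule whose premises are already known and stops only when nothing new can be derived.

    # A toy symbolic reasoner, purely illustrative: facts are strings and each
    # rule pairs a set of premises with a single conclusion.
    facts = {"human(socrates)", "human(plato)"}
    rules = [
        ({"human(socrates)"}, "mortal(socrates)"),
        ({"human(plato)"}, "mortal(plato)"),
        ({"mortal(socrates)", "mortal(plato)"}, "all_observed_humans_mortal"),
    ]

    # Apply the rules tirelessly until no new fact appears. There is no
    # ambiguity to resolve; every step is the exact application of a rule.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:  # subset test
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # the two starting facts plus three derived conclusions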

McCarthy’s goal of setting out to "solve certain classes of problems" was fine. But when skill at solving those classes of problems is equated with intelligence and seen as a small step from human intelligence, colorful predictions follow.

Mathematics can be complex, but ambiguity and uncertainty have no place (pace Gödel and Heisenberg). Ambiguity and uncertainty, to a logician, suggest error, not intelligence. Human intelligence and HCI focus on such distasteful topics. Our lives are spent overcoming our nervous systems’ skill at abstraction and consistency: learning how not to be logical and still be right.

AI Blossoms Again

In 1981, as in 1958, a foreign threat led to a resurgence of AI funding. Japan, brimming with success in manufacturing, launched the "Fifth Generation" AI effort, built upon the logic-based Prolog language. Reaction was swift. The US Congress amended antitrust laws to permit the founding of the AI-based consortium MCC in 1982. In 1983, US government funding for speech understanding and other AI technologies ramped up sharply, with DARPA again leading the charge. Annual funding for its Strategic Computing Initiative (SCI) rose to almost $400 million in 1988. The European ESPRIT and UK Alvey programs began in 1984 with annual AI expenditures of over $200 million, in 2006 dollars.

New bottles were found, even if some of the wine was of an old vintage. "AI" was used sparingly; favored were "intelligent knowledge-based systems," "expert systems," "knowledge engineering," "office automation," "multi-agent systems," and "machine learning." General problem solving was emphasized less, domain-specific problem solving more. For example, SCI researchers promised to deliver three things: an Autonomous Land Vehicle, a Pilot’s Associate, and a Battlefield Management System.

Other factors contributed to the AI boom. In 1982, Symbolics and LMI marketed "LISP machines," computers optimized for the western AI language of choice. Production systems reached maturity at CMU and were exported. "Neural nets" or parallel distributed processing models attracted considerable attention among researchers and in the press.

Once again, some of Turing’s descendants echoed his 1949 claim. Machines would soon rival human intelligence, then educate themselves 24/7 and leave homo sapiens in the dust. At MCC, Douglas Lenat’s CYC project began in 1984 promising to achieve machine self-education within ten years. In the US, DARPA, NSF, and intelligence agencies were joined by industry in pouring resources into speech recognition and natural language understanding (see [2] for a comprehensive account of unclassified work). Speech understanding was regularly claimed, by researchers and the popular press, to be on the verge of becoming the principal means of human-computer communication, which implied that competing HCI research was pointless. Investment in AI by 150 US companies was estimated to be $2 billion in 1985.

HCI During an AI Summer

AI researchers had limited interest in human-computer interface details of the platforms and prototypes they delivered to government agencies or industry sponsors. Consequently, considerable government funding in this period also went into the human factors of speech systems, expert systems, and knowledge engineering, in largely unsuccessful efforts to show that these tools could be useful. As the 1980s progressed, more CHI papers on modeling, adaptive interfaces, and advising systems appeared. Research into the perceptual-motor and cognitive challenges of newly arrived GUIs had to compete with the funding and glamorous promises of AI. In 1986, I left a GUI design group in a computer company for an AI research lab. (This time I didn’t believe the hype, but did tell friends "who knows, maybe, I’d like to find out…").

Another Winter Arrives

As the 1980s ended, DARPA was unhappy. There was no autonomous land vehicle. There was no pilot’s associate or battlefield management system. Natural-language-understanding products floundered in the marketplace. Moravec again: "By 1990, entire careers had passed in the frozen winter." He wrote off the boom of the 1980s as misguided efforts to use systems that were not powerful enough.

By then there were enough tenured AI professors to ensure that research would survive the freeze, but they had to scramble for funding. Much of NSF’s Human Interaction Program budget was redirected to AI topics; a program manager told me in the early 1990s that his proudest accomplishment was doubling the already sizable funding for natural-language understanding. Distributed AI ("multi-agent systems") researchers who had been dropped by DARPA convinced a congenial program manager to redirect a significant part of the NSF collaboration systems budget to them, to the dismay of the associate director when this was discovered. Nevertheless, support for AI graduate students and graduates dropped sharply through the 1990s, as did AI papers in CHI.

A Tentative Spring Brings Promise of Common Ground

The past ten years have seen notable AI achievements. Bolstered by raw power, Deep Blue defeated world chess champion Garry Kasparov in 1997. Robots controlled from a distance explore difficult terrestrial and extraterrestrial terrain. A few vehicles made it to the finish line in the 2005 DARPA Grand Challenge autonomous land vehicle race. More closely connected to HCI, machine learning research and application progressed steadily, with IUI established as a forum for this and other work. When the rising tide of the Internet boom floated all boats in the late 1990s, AI contributions such as recommender systems appeared in products. Some were swept away when the tide retreated, but others survived.

With the Internet boom came another wave of super-genius-computer predictions by prominent AI figures. In 1999, Ray Kurzweil published The Age of Spiritual Machines: When Computers Exceed Human Intelligence, and Hans Moravec published Robot: Mere Machine to Transcendent Mind. Unlike the 1970 predictions of five to 15 years, they estimated 30 to 40 years.

On the other hand, attendance at the AAAI conference peaked in 1986, declined steadily through 1996, then plateaued. Paper submissions peaked in 1990, then dropped before leveling off in 1997. They have risen in the past two years, but submissions to IUI have been flat and ACM SIGART membership has continued to decline.

With appropriately modest goals, machine learning is contributing to interaction design, often by focusing on specific user behaviors. Working AI applications strengthen the tie to HCI by requiring acceptable interfaces. Still, not many researchers contribute to both CHI and IUI, much less to CHI and mainstream AI venues. But AI researchers are acquiring basic HCI skills, and HCI researchers are employing AI techniques. Shared understanding may be indispensable for the next generation of researchers and system builders.

A tip of the hat… to Whitman Richards, Fred Wang, Ray Allard, and Eric Horvitz for employing me in AI groups over the years. Eric Horvitz, Henry Lieberman, Mike Pazzani, and Don Norman contributed useful observations, although blame for the views here is mine. Carol Hamilton and Maria Gini provided useful historical data.

References

1. Darrach, B. (1970). Meet Shaky: The first electronic person. Life Magazine, November 20.

2. Johnson, T. (1985). Natural language computing: The commercial applications. Ovum Ltd.

3. McCarthy, J. (1988). Book review of B.P. Bloomfield, The question of artificial intelligence: Philosophical and sociological perspectives. Annals of the History of Computing, 10, 3, 224-229.

4. Moravec, H. (1998). When will computer hardware match the human brain? Journal of Evolution and Technology, 1, 1.

Author

Jonathan Grudin
jgrudin@microsoft.com

About the Author

Jonathan Grudin is a principal researcher in the Adaptive Systems and Interaction group at Microsoft Research. He worked in AI groups at MIT, Wang Laboratories, and MCC in the 1970s and 1980s. His Web page is http://research.microsoft.com/~jgrudin.

Figures

Figure 1. Changing AI Seasons (climate for funding and public perception)

©2006 ACM  1072-5220/06/0900  $5.00

