At the World Economic Forum in Davos this year, Google executive chairman Eric Schmidt offered up his vision for the disappearing Internet: “Imagine you walk into a room, and the room is dynamic. And with your permission and all of that, you are interacting with the things going on in the room. A highly personalized, highly interactive and very, very interesting world emerges” [1]. This vision of the Internet of Things isn’t new. It’s 40, if not 50, years old. It relies on outmoded models of artificial intelligence. And it is terrifically naive.
Schmidt’s abstraction and oversimplification of the Internet of Things underscores the problems of digital interaction at the scale of rooms and buildings: the mesoscale. Most of our digital interactions are designed to take place at either the microscale or the macroscale. At the microscale, we navigate our daily lives with our fingers, our eyes on a screen, using our peripheral vision to move through the streets. At the macroscale, infrastructures managed by companies such as IBM, Cisco, and Siemens deliver gigabit broadband and urban Wi-Fi, manage electricity demand on the smart grid, or survey traffic, crime, and weather with always-on cameras whose feeds are beamed back to mission-control centers. The mesoscale, by contrast, is the scale of our immediate surroundings: our rooms, our houses. Designing for the mesoscale is tricky because all too frequently our computational models clash with the scale of our surroundings, with dubious and sometimes frightening results.
Some of the most important and illustrative experiments in this arena took place at MIT between 1967 and 1985 in the Architecture Machine Group (Arch Mac), founded by Nicholas Negroponte and Leon Groisser (the group was folded into the MIT Media Lab when the lab opened in 1985). Arch Mac’s projects brought architecture and AI together, with all of the successes and trappings that both fields entailed. The relationships that supported Arch Mac were born of Negroponte’s own experience as an architecture bachelor’s and master’s student at MIT in the mid-1960s, where he came to use computers under the tutelage of mechanical engineering professor Steven Coons, one of the fathers of computer-aided design. He also counted as friends and mentors MIT Artificial Intelligence Lab co-founder Marvin Minsky, as well as information-processing godfather J.C.R. Licklider, who coined the term “man-computer symbiosis” in 1960 and who both funded and put in place time-sharing computing and other foundational technologies that would later enable the Internet.
Negroponte theorized how AI might be applied to architecture in two of his books (The Architecture Machine and Soft Architecture Machines) and numerous papers, projects, and proposals. An architecture machine, as Negroponte wrote in his first book, would use AI to turn the design process and the traditional human-machine dynamic into a personalized dialogue with its user. This close, contextual, and all-surrounding relationship extended far beyond a user merely sitting at a terminal. Negroponte’s view of the architecture machine was far more expansive. In the “distant future,” architecture machines would be so pervasive that we would inhabit their worlds. “[T]hey won’t help us design; instead, we will live in them,” Negroponte wrote.
Just as Arch Mac collaborated with the AI Lab, it was funded in a similar manner, primarily by the same Department of Defense bodies that funded computational and AI research at MIT, and it followed the same strategies as AI projects. In the 1960s and early 1970s, AI worked within a paradigm called microworlds: simplified domains of inquiry that allowed researchers to focus on specific problems while abstracting away other details. Many microworlds were also called blocks worlds because they focused on piles of blocks, such as robotic arms manipulating stacks of blocks, or cameras and computers locating their edges. Arch Mac’s early projects were also blocks worlds. The lab’s 1970 URBAN 5 system took a graphical CAD program running on an IBM 360 computer, in which the user manipulated 10-by-10-foot cubes and assigned attributes to them, and added a textual question-and-answer exchange between user and computer. Negroponte and Groisser hoped this exchange would evolve into a personalized dialogue as user and computer interacted in context, but the aim proved nearly impossible. (Not surprisingly: it remains difficult even today, 45 years later.) Negroponte noted that the textual exchanges did not allow for the kinds of nonverbal and gestural cues that pepper a real-world dialogue. The system could not deliver a meaningful and rich interaction, he recalled in The Architecture Machine. He deemed URBAN 5 a failure. But if microworlds like URBAN 5 were failures, they succeeded precisely because they operated without regard for the real world. In a proposal to ARPA (which was renamed DARPA later that year), Marvin Minsky and Seymour Papert wrote, “Each model—or ‘micro-world’ as we shall call it—is very schematic; it talks about a fairyland in which things are so simplified that almost every statement about them would be literally false if asserted about the real world.
Nevertheless, we feel they [microworlds] are so important that we are assigning a large portion of our effort toward developing a collection of these micro-worlds and finding how to use the suggestive and predictive powers of the models without being overcome by their incompatibility with literal truth”. Blocks-world failures justified further experimentation and more funding. Likewise, URBAN 5’s shortcomings justified further blocks-world experimentation by Arch Mac.
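For readers unfamiliar with the paradigm, a blocks world in the URBAN 5 vein can be caricatured in a few lines of code. The sketch below is purely hypothetical (URBAN 5 ran as a graphical CAD program on an IBM 360, and every name here is an invention for illustration): a grid of uniform cubes carrying designer-assigned attributes, plus a rigid question-and-answer loop whose canned responses exhibit exactly the conversational poverty Negroponte lamented.

```python
# Hypothetical blocks-world sketch in the spirit of URBAN 5 (illustrative only).
# The "world" is a grid of uniform 10-foot cubes tagged with attributes;
# the "dialogue" understands only a few canned questions.

class BlocksWorld:
    def __init__(self):
        self.blocks = {}  # (x, y, z) grid position -> attribute dict

    def place(self, pos, **attrs):
        """Place a cube at a grid position and tag it with attributes."""
        self.blocks[pos] = dict(attrs)

    def ask(self, question):
        """A rigid Q&A loop: no gesture, no context, no adaptation."""
        if question == "how many blocks?":
            return str(len(self.blocks))
        if question == "which blocks get sunlight?":
            lit = [pos for pos, a in self.blocks.items() if a.get("sunlight")]
            return ", ".join(str(pos) for pos in sorted(lit)) or "none"
        return "I do not understand."

world = BlocksWorld()
world.place((0, 0, 0), use="housing", sunlight=False)
world.place((0, 0, 1), use="housing", sunlight=True)
print(world.ask("how many blocks?"))            # -> 2
print(world.ask("which blocks get sunlight?"))  # -> (0, 0, 1)
```

The fairyland quality is the point: every simplification that makes the model tractable is also a statement that would be false of the real world.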
SEEK, one of Arch Mac’s next projects, puts a fine point on the problems of microworlds. It was an urban microworld consisting of mirrored blocks, a robotic arm that tried to organize and stack them, and a clan of gerbils that lived among the blocks. SEEK, however, was not apprised of its rodent residents. Exhibited at the Software show at the Jewish Museum in New York in 1970, SEEK was meant to play up the misalignment between the computer’s memory, its programmed functions, and its physical environment: to demonstrate the discrepancy between machine models and the real world. The model failed for many reasons, and not just because it was a blocks world. Most strikingly, by anecdotal accounts, SEEK tended to kill the gerbils.
By the mid-1970s, the field of artificial intelligence saw its funding plummet for two key reasons. First, AI’s fields of inquiry were deemed too abstract to be practical. While the field of AI was successful in developing time-sharing platforms and networked computing, it did not succeed at answering the big, connectionist questions it posed in the 1950s and 1960s, when researchers claimed they would soon be able to model the human brain in software. Second, at the height of the Vietnam War in 1970, the U.S. Senate passed the Mansfield Amendment, which restricted Department of Defense (DoD) funding of academic research to direct military applications, whereas previously it had supported basic, unclassified research. Under this new model, basic research fell under the umbrella of the National Science Foundation (NSF). However, the NSF and DoD operated under different funding philosophies. The NSF used open peer review. The DoD preferred its tight military-industrial-academic network, of which MIT’s funding culture was a prime example. As a result of the shift in funding, Patrick Winston, director of the MIT AI Lab from 1972 to 1997, encouraged the development of technologies for military applications. In step with the AI Lab, Arch Mac’s efforts in the second half of its lifespan changed accordingly.
Arch Mac developed the Media Room, a simulation environment that turned the computer inside out. This experience of being inside the computer took place within a womblike, soundproofed space, 18 feet by 11 feet by 11 1/2 feet, with a six-foot-by-eight-foot screen built into the wall in front of the user, the other walls carpeted in dark pile fabric. The center of the room featured an iconic Eames lounge chair outfitted with joypads (touch-sensitive joysticks), with two smaller touchscreens within reach, and a 10-inch data tablet that the user could hold in the lap and operate with a stylus. Comfortably reclined in the Eames lounge chair, the Media Room denizen could zoom down the streets of Aspen, Colorado, in the Aspen Movie Map with street scenes delivered from videodisk, in a sort of predecessor to Google Street View. He could fly through layers of information using the Spatial Data Management System’s Dataland, a graphical user interface that served as a gateway to different textual and audio/visual content. Or she could use gestures and voice commands to manage a fleet of ships on a wall-size digital map in Put That There.
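The multimodal trick behind a command like “put that there” is binding each spoken deictic pronoun to whatever the pointing hand indicated at the moment the word was uttered. The sketch below is illustrative only; the actual system paired a speech recognizer with a magnetic position sensor, and these function and variable names are assumptions, not the original implementation.

```python
# Hypothetical sketch of deictic resolution in a "put that there" command.
# words:      the spoken tokens, in order
# pointed_at: what the pointing hand indicated when each token was spoken
#             (an object id or a map coordinate, or None)

def resolve_command(words, pointed_at):
    """Return (action, object, target) by binding 'that' and 'there'
    to the referents pointed at while those words were spoken."""
    action, obj, target = words[0], None, None
    for word, referent in zip(words, pointed_at):
        if word == "that" and referent is not None:
            obj = referent      # bind "that" to the pointed-at object
        elif word == "there" and referent is not None:
            target = referent   # bind "there" to the pointed-at spot
    return action, obj, target

# "Put that there," pointing first at ship #3, then at map cell (4, 7)
cmd = resolve_command(["put", "that", "there"],
                      [None, "ship-3", (4, 7)])
print(cmd)  # -> ('put', 'ship-3', (4, 7))
```

The design point is that neither channel alone carries the command: the speech is ambiguous without the gesture, and the gesture is meaningless without the speech, which is why the room, not the terminal, becomes the interface.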
The Media Room was a platform for military simulation. The idea for the Aspen Movie Map followed in the wake of the Israeli army’s successful 1976 rescue of hostages from a hijacked plane in Entebbe, Uganda, a raid that had been simulated and rehearsed in the Negev Desert. What if a digital simulation could provide the same benefits as the Entebbe rescue? Might it cost less? How many lives could it save? The Movie Map incorporated the digital information and interfaces that troops would need for such maneuvers, such as maps, images, and even interactions with potential characters on the street, a notion that Negroponte called supreme usability. Where usability relates to the ergonomics and affordances of user interfaces, supreme usability is ergonomics on a larger scale. Negroponte wrote, “We look upon this objective [supreme usability] as one which requires intimacy, redundancy, and parallelism of immersive modes and media of interaction. The image of a user perched in front of a monochromatic display with a keyboard is obscured by the vision of a Toscaniniesque, self-made surround with the effervescence of Star Wars”. When the MIT Media Lab absorbed the Architecture Machine Group, supreme usability also became the intimate experience of home entertainment.
The Media Room continued to be used at least into the 1990s by the MIT Media Lab, which advanced Arch Mac’s experiments with architecture and information. DARPA program director Craig Fields in the Cybernetics Technology Office (who funded Arch Mac and the group’s Spatial Data Management projects) supported a fly-through of a fictitious town called Dar El Marar. The concept fused the graphical user interface of Dataland with the urban form. Negroponte wrote in Being Digital that it was as though “you had built neighborhoods of information by storing data in particular buildings, like a squirrel storing nuts”. Swooping and helicoptering into a cityscape reinforced the combination of surveillance, information storage, and spatial memory, aligned once again in the form of a city. These spatial relations could also telescope inward. In 1987, Stewart Brand wrote that the Media Room’s experience had shrunk to the size of the body—to that of the nascent virtual reality headset. “Instead of arm for pointer, the pilot points with his eyes. ‘Fire’: the definitive piercing gaze,” he wrote.
The Architecture Machine Group produced projects that were prescient and uncanny. The group pushed the envelope of how humans could interact with computers, and in so doing, posed some uncomfortable questions about designing interactions for the built environment. Their work highlights several lessons about designing at the mesoscale.
First, our models matter. The logical space of computer representation is not the same as lived space, and microworld models are not the same as mesoscale reality. It is one thing to model the world using a computer. It is another to pinch and stretch that model to the architectural and urban scale. The mesoscale does not merely consist of layers of microscale moments—anyone who texts and bumps into someone on the street (or tries to text and drive) already knows how difficult shifting between scales can be.
Second, our clients matter. If your funding dictates that you design for military uses, then military logistics dictate the aims of your environment. For as bombastic and thought-provoking as Arch Mac’s simulation projects were, it is not difficult to see in them the seeds of contemporary surveillance and drone technologies. What might companies like Google stand to gain by promoting the Internet of Things widely while keeping its design considerations out of view?
Third, architects matter. It is vital yet challenging to include architects in the conversation. Part of this is the problem of architectural education. While architecture school teaches design in two and three dimensions through drawing, rendering, parametrics, hand-built models, and fabrication, it excludes interaction design practices. It is not enough for information architects or urban interaction designers to take on the mantle of mesoscale design. Architects need to approach their material practice as interactive.
We cannot pretend that the mesoscale is a microworld. We cannot make abstractions and simplifications where they affect our bodies and surroundings without glossing over very real constraints and considerations.
Just ask the gerbils.
1. Mlot, S. Eric Schmidt: ‘The Internet Will Disappear.’ PCMag. Jan. 23, 2015; http://www.pcmag.com/article2/0,2817,2475701,00.asp
Molly Wright Steenson will be an associate professor at the Carnegie Mellon School of Design starting fall 2015. From 2013 to 2015, she was a journalism professor at the University of Wisconsin-Madison. Steenson is writing a book on architecture and artificial intelligence and holds a Ph.D. in architecture from Princeton. email@example.com
Copyright held by author. Publication rights licensed to ACM.