Mark Gross, Keith Green
Information and communication technologies (ICT) extend a long line of emerging technologies that have reshaped our built environment and, consequently, society, over millennia. In antiquity, Roman arches afforded greater freedom of movement, physically and socially. In the Middle Ages, flying buttresses allowed light to magnificently penetrate once heavy walls. And in the Industrial Age, reinforced concrete, structural steel, and free-plan organizational systems accommodated mass gatherings of people at work and play. In our Information Age, ICT is increasingly embedded into the physical fabric of the built environment in order to intelligently control heating, air conditioning, and lighting, as well as to transform building facades into vast computer displays. But while ICT can intelligently move temperature-controlled air through building interiors, and digital bits across building surfaces, it also promises to move physical building elements to create intelligent, adaptive built environments responsive to the challenges and opportunities of a digital society.
Beyond operable windows and movable partitions (new technologies of past centuries) and centrally controlled heating and ventilation (an innovation of the 20th century), we have now arrived at a point that science fiction writers and futurists have long foreseen: architectural robotics, intelligent and adaptable built environments that sense, plan, and act.
The prospect of what we call architectural robotics was presciently anticipated by architect and MIT Media Lab founder Nicholas Negroponte 40 years ago in his vision of “...a man-made environment that responds to and is ‘meaningful’ for him or her.” Former Wired editor Kevin Kelly has since imagined a “world of mutating buildings” and “rooms stuffed with co-evolutionary furniture.” And while Bill Gates forecasts “a robot in every home,” the late William J. Mitchell, former dean of MIT’s School of Architecture and Planning and head of its Media Arts and Sciences Program, envisioned homes not as “machines for living” but “as robots for living in,” updating Le Corbusier’s well-known characterization of the domestic environment for the Modern Age.
It seems inevitable that robotics, embedded in our built environment, will support and augment everyday work, learning, healthcare, entertainment, and leisure activities. In medical facilities and in homes, architectural robotics can empower people of all ages to live more independently, adapting to their changing needs and capabilities. In work environments, architectural robotics can physically morph to support more and different physical and digital tasks and social, collaborative interactions. In learning environments, architectural robotics located inside or outside the formal classroom can afford interactive, creative exploration and inquiry. In urban environments, architectural robotics promises to respond effectively to a variety of disasters of natural and human origins, providing support for victims assembling, seeking treatment, and planning recovery operations. Architectural robotics might even manifest as reconfigurable monuments that reflect the dynamic character of society. All of these applications have the potential to affect large segments of an increasingly digital society.
We have each developed physical-digital environments suggesting the promise of architectural robotics. Realized by co-author Keith Evan Green and Ian Walker at Clemson’s iMSE lab (www.CU-iMSE.org), the Animated Work Environment (AWE; Figure 1) is a user-programmable robotic work environment that dynamically shapes and supports the working life of creative individuals working collaboratively with both new and old, digital and analog materials and tools. As working practices are transformed by ICT, so must physical work environments transform to support more and different physical and digital activities and social, collaborative interactions. AWE comprises six hinged panels that change the spatial characteristics of the work environment, affording work and play activities, such as collaborating, composing, presenting, viewing, lounging, and gaming. A whimsical project built in co-author Gross’s Computational Design Lab at Carnegie Mellon University, Luke Kambic’s Electric Staircase (http://code.arc.cmu.edu/projects/staircase) hides its steps in a wall and proffers them one by one as people walk up and down, retracting them as they pass (Figure 2). Both projects are early examples of intelligent, adaptable physical-digital environments aimed at augmenting everyday life.
The relatively simple Electric Staircase and the more elaborate AWE project share the principal components of a conventional robot. They sense signals in the environment, they plan a course of action, and they act to effect a change. Using geared (i.e., harmonic) motors, AWE moves its panels to assume one of six configurations that best matches one of eight predefined work activities likely to be taking place. Likewise, if a user of AWE abruptly stands up to attend to matters elsewhere in the world, AWE’s infrared sensors recognize this act and, based on signals from these sensors, AWE intelligently elevates its panels to allow the unobstructed exit of the user. The Electric Staircase comprises a sequence of proximity sensors mounted at foot level. As you step up or down into a sensor’s range, an associated motor projects a step forward; as your foot leaves the previous step, another motor retracts it.
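The staircase's tight sensor-to-motor coupling can be captured in a few lines. The following Python sketch is our own illustration, not the installation's actual control software; sensor readings are simulated as booleans, one per tread.

```python
# A minimal sketch of the Electric Staircase's sense-act loop, assuming
# one proximity sensor and one motorized tread per step. Hardware details
# are invented for illustration.

class Step:
    def __init__(self, index):
        self.index = index
        self.extended = False

    def extend(self):   # motor pushes the tread out of the wall
        self.extended = True

    def retract(self):  # motor pulls the tread back into the wall
        self.extended = False

class Staircase:
    def __init__(self, num_steps):
        self.steps = [Step(i) for i in range(num_steps)]

    def update(self, foot_near):
        """foot_near: one boolean per tread's proximity sensor.
        Extend a tread when a foot enters its sensor range; retract it
        once the foot has moved out of range."""
        for step, near in zip(self.steps, foot_near):
            if near and not step.extended:
                step.extend()
            elif not near and step.extended:
                step.retract()
        return [s.extended for s in self.steps]

# A person climbing: sensors trigger one after another.
stairs = Staircase(3)
print(stairs.update([True, False, False]))   # [True, False, False]
print(stairs.update([False, True, False]))   # [False, True, False]
```

Sensing and action are directly coupled here; there is no planning layer at all, which is precisely the point made above.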
In these examples of architectural robotics, sensing and action are relatively simple and there is almost no planning at all: Sensing and action are tightly coupled. More complex and sophisticated architectural robotics could not only change configuration directly in response to sensor inputs, but also operate with more subtle models of human activity. For example, a building might learn over time how its inhabitants tend to use it and anticipate activities that it expects are about to happen. If the building could move rooms, not just walls, then an office building might observe friction between two individuals and relocate their workspaces to minimize unnecessary conflicts.
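To suggest how such a model of human activity might work, here is a minimal Python sketch of a building that records which activity it observes at each hour and anticipates the most frequent one. The activity names and the hour-of-day feature are illustrative assumptions, not part of any deployed system.

```python
# A hedged sketch of a building learning usage patterns over time:
# a simple frequency model, not a real building-automation product.

from collections import Counter, defaultdict

class ActivityModel:
    def __init__(self):
        # hour of day -> counts of activities observed at that hour
        self.observations = defaultdict(Counter)

    def observe(self, hour, activity):
        self.observations[hour][activity] += 1

    def anticipate(self, hour):
        """Return the most frequently observed activity for this hour,
        or None if the building has seen nothing at that time."""
        counts = self.observations.get(hour)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = ActivityModel()
for _ in range(5):
    model.observe(9, "meeting")
model.observe(9, "solo work")
print(model.anticipate(9))   # prints "meeting": pre-configure the room
```

A real system would need far richer features than the hour of day, but even this toy model shows the shift from reacting to sensor signals toward anticipating inhabitants' activities.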
AWE and the Electric Staircase are early prototypes that look toward more fully developed and richer robotically enhanced built environments. Both prototypes exemplify a built environment that physically reconfigures itself in response to inhabitants’ needs or desires. AWE, more pragmatically, suggests that your physical environment could mold itself to your activities. The Electric Staircase would never stand up to rigorous user testing, but stepping out into space and finding a step appear beneath your foot is magical.
Our two examples are part of a growing trend to embed robotics into buildings; we have space here to mention only a few others. Goulthorpe’s Hyposurface (Figure 3; http://hyposurface.org) is a wall made of panels, each independently actuated. Its makers say, “The surface behaves like a precisely controlled liquid: Waves, patterns, logos, even text emerge and fade continually within its dynamic surface.” The Delft Hyperbody Group’s Musclebody (Figure 4; http://tinyurl.com/delft-musclebody) is a pneumatically actuated exoskeletal Lycra tent that changes shape and size, responding to people inside. And Tang’s Pixelbot (Figure 5), a furniture-like installation, responds to gesture with movement and color.
Architectural robotics takes place at the intersection of three complementary trajectories: reconfigurable buildings, pervasive computing, and embedded robotics.
“More than manual” reconfigurable buildings. Although they represent a small part of the built environment, reconfigurable buildings have been with us for centuries. Operable windows and doors, of course, reconfigure a building in small but important ways. Inhabitants of the traditional Japanese house shifted shoji screens to create rooms of varying sizes according to space requirements. And in the Maison de Verre, a Parisian masterpiece designed by 20th-century architect Pierre Chareau, inhabitants reconfigure space by means of cranks and levers. These unusual cases in our built environment are reconfigured by hand, but today’s technologies enable buildings to self-reconfigure. Yet despite the sophisticated robotics that pervades modern manufacturing, architectural robotics today still consists of little more than elevators and electric-eye supermarket doors.
Beyond sensors and screens: pervasive and ubiquitous computing. Efforts to integrate ICT into built environments (by any name: pervasive, ubiquitous, and so on) have largely been limited to interactive displays embedded in tiny mobile phones, small tabletops, and whole building facades, and to sensor networks that monitor building performance, occupancy, and use. So-called tangible interaction, if it happens at all, occurs mostly at the scale of the human hand, seldom at the scale of buildings.
Embedded robotics: beyond mobile and humanoid robots. We are becoming accustomed to sharing our work, home, and leisure environments with robots that move freely about, even if these robots are currently little more than smart vacuum cleaners. Surely, and soon, humanoid robotic assistants will cohabit our physical environments. Perhaps architectural robotics represents a middle ground between a smart vacuum cleaner and a humanoid robot: an intelligent physical-digital artifact that is more furniture or building envelope than discrete (mobile) object or surrogate servant. Whatever their form, these robots are collectively, like us, inhabitants of buildings.
Skeptics may well wonder: Why complicate our buildings with robotics? We’d be the first to agree, for example, that buildings should employ windows, not air conditioning and artificial lighting, to make the most of daylight and natural heating and cooling. Still, who would deny that electric lighting and central heating extend the usability of our buildings? We see architectural robotics the same way, as potentially extending and enhancing the use and usability of our built environments.
Failures, Prospects, and Challenges
Even a conceptually simple architectural robot can cause problems in practice. For example, automated shading of windows is becoming common, especially in hot climates where the added cost of a robotic building skin pays off in lowered cooling costs. Despite the promises of such intelligent designs, many of us have become acquainted with the surprising choreography of a meeting room equipped in this way. Pleasantly light filled and comfortable at one moment, the room is suddenly darkened by “intelligent” window shades lowering under the power of a low-pitched groaning motor. A light sensor on the building’s facade detected the bright sun, triggering a program to drop the shade. Annoyingly, 20 minutes later, again the groaning sound as the motor raises the shade: a cloud is passing over. Where is the switch to override the building’s robotic control? And why is every shade on the facade controlled by a single sensor?
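Both failings have straightforward remedies: a sensor and override switch per window, and a dwell time so that a single passing cloud does not cycle the motor. The following Python sketch is a hypothetical controller along those lines; the lux thresholds and dwell count are invented for illustration, not drawn from any shipping product.

```python
# Hypothetical per-window shade controller with a manual override and a
# dwell-time hysteresis to avoid reacting to momentary light changes.

class ShadeController:
    def __init__(self, lower_lux=50_000, raise_lux=20_000, dwell=3):
        self.lower_lux = lower_lux  # sustained brightness -> lower shade
        self.raise_lux = raise_lux  # sustained dimness -> raise shade
        self.dwell = dwell          # consecutive readings required to act
        self.lowered = False
        self.override = False       # the wall switch the meeting room lacked
        self._streak = 0

    def update(self, lux):
        """Feed one light-sensor reading; move the shade only after
        `dwell` consecutive readings agree, and never while overridden."""
        if self.override:
            return self.lowered
        if not self.lowered and lux >= self.lower_lux:
            change_wanted = True    # bright enough to warrant lowering
        elif self.lowered and lux <= self.raise_lux:
            change_wanted = True    # dim enough to warrant raising
        else:
            change_wanted = False
        if change_wanted:
            self._streak += 1
            if self._streak >= self.dwell:
                self.lowered = not self.lowered
                self._streak = 0
        else:
            self._streak = 0
        return self.lowered

shade = ShadeController()
print([shade.update(60_000) for _ in range(3)])  # [False, False, True]
print(shade.update(10_000))                      # True: one cloud, no flap
```

One controller per window, each with its own sensor and switch, would spare the meeting room its surprising choreography.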
Even experienced designers struggle with new technologies. In 2001, Prada captured the attention of architects and designers by opening a store in New York designed by architect Rem Koolhaas and industrial design firm IDEO. Among the high-tech innovations were dressing rooms outfitted with electrostatic-controlled glass doors that switched on demand from transparent to opaque. As Business 2.0 reported, “The execution of the vision was [however] disastrous. Customers didn’t understand the foot pedals that controlled the dressing-room doors and displays. Reports surfaced of fashionistas disrobing in full view, thinking the walls went opaque when they didn’t. Others got stuck in dressing rooms when pedals failed to work, or doors broke, unable to withstand the demands of the high-traffic tourist location.”
We can survive failures like the window shades and the Prada store in the early experimental days of adopting new technologies, but we had best get a grip on designing architectural robotics to come.
Automated building skins and shading are already commercial products, as are motorized movable partitions and operable windows. And as technologies like e-ink and flexible organic LEDs drop in price and improve in performance, building-scale displays are cropping up everywhere, as are new “transmaterials” whose physical properties (color, light transmission, flexibility, and texture) can be controlled by software. How shall we interact with architectural robotics, now that tangible, embedded, embodied, and ubiquitous interaction has freed interaction design from the limits of the computer screen? Gestures and body language will surely play a role. Speech, too.
The challenges of realizing architectural robotics only intensify when we consider that the built environment might one day be constituted by “programmable matter”: robotic, shape-shifting materials composed of billions or trillions of tiny robots that reconfigure themselves into any physical configuration you desire. A sofa made of self-reconfiguring robots becomes a coffee table. A wall creates an opening to walk through when you approach it, and closes up again afterward. Think of the interaction challenges that buildings made of self-reconfiguring robots will pose! What language shall we (dwellers and designers) use to program the behavior of programmable materials? Surely not C++.
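What might a dweller-friendly language look like? Purely as a thought experiment, here is a Python sketch of a tiny declarative "when/becomes" rule engine for a room's behavior. The rule syntax, event names, and shape vocabulary are entirely invented; no such language exists today.

```python
# A purely hypothetical rule engine for programmable-matter behavior:
# inhabitants declare what an element becomes when an event occurs,
# rather than scripting motor commands imperatively.

class Room:
    def __init__(self):
        self.rules = []
        self.shape = {"wall": "solid"}   # current state of each element

    def when(self, event, element, becomes):
        """Declare: when `event` occurs, `element` becomes `becomes`."""
        self.rules.append((event, element, becomes))

    def notify(self, event):
        """An event from the room's sensors fires any matching rules."""
        for ev, element, becomes in self.rules:
            if ev == event:
                self.shape[element] = becomes
        return self.shape

room = Room()
room.when("person_approaches", "wall", "doorway")
room.when("person_passed", "wall", "solid")
print(room.notify("person_approaches"))  # {'wall': 'doorway'}
print(room.notify("person_passed"))      # {'wall': 'solid'}
```

The point of the sketch is the register, not the implementation: dwellers would state desired ends, and the matter itself would be left to work out the means.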
Perhaps the greatest challenge for architectural robotics is defining its community. Who is cultivating this line of research? The human-robot interaction (HRI) community today is mostly interested in humanoid robots. The ubiquitous computing (ubicomp) community is focused on interpreting data from myriad sensors embedded everywhere. The human-computer interaction (HCI) community has not yet developed metrics rich enough for designing and evaluating such digital-physical artifacts at an environmental scale. Although a few architects and allied designers are pursuing architectural robotics research, the architectural profession and building industry will, at least as history suggests, await the maturation of the technology before embracing it. But architectural robotics, the inevitable next step for interaction design from computer screen to physical computing, clearly calls for us all.
5. Green, K.E., Walker, I.D., et al. The Animated Work Environment [AWE]; http://spectrum.ieee.org/automaton/robotics/robotics-software/awe-self-reconfigurable-robotic-wall. The AWE project was supported by the U.S. National Science Foundation (IIS-0534423).
7. Addressing this question of community were “Archibots,” a workshop at Ubicomp 2009 (http://www.archibots.org) co-convened by the authors, and a 2008 workshop in Aarhus, “Interactive and Adaptive Furniture” (http://www.interactivespaces.net/imagine/#) attended by the co-authors. See also Interactive Architecture by Michael Fox and Miles Kemp (http://www.interactive-architecture.com/).
Mark D. Gross is professor of computational design in the School of Architecture at Carnegie Mellon University. He is also research director at Modular Robotics in Boulder, Colorado.
Keith Evan Green is professor of architecture and electrical and computer engineering at Clemson University and director of iMSE, the Clemson University Institute for Intelligent Materials, Systems, and Environments.
Figure 1. The user-programmable robotic Animated Work Environment [AWE] shapes and supports the working life of creative collaborators dealing with new and old, digital and analog, materials and tools. Credit: Keith Evan Green, Clemson University
Figure 4. MuscleBody is a continuous tensile textile stretched inside a tubing structure, driven by pneumatic “muscles” that change its shape and thereby its skin’s transparency. Credit: Hyperbody, Faculty of Architecture, TU Delft, 2005
Figure 5. Pixelbot, an installation in a long tradition of interactive furniture, responds with movement and color to proximity and gesture. Shown at the 2011 Taiwan Design Week. Credit: Sheng Kai Tang, Adaptive Artifact
©2012 ACM 1072-5220/12/0100 $10.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.