"The Computer for the 21st Century" , the 1991 essay by Mark Weiser in which he introduced his vision of ubiquitous computing (ubicomp), has firmly emerged as one of the most impactful writings in computer science from the past several decades. Looking back, it is hard not to be in awe of the incredible accuracy with which he anticipated future technological advances—for example, the recent talk of a "post-PC era" is all but an exact echo of Weiser's words. Weiser offers the following succinct description of the essence of his vision: "The 'virtuality' of computer-readable data—all the different ways in which it can be altered, processed and analyzed—is brought into the physical world."
It's an eloquent statement that more or less summarizes human-computer interaction (HCI) innovations over the past 23 years. Indeed, the various research concepts/initiatives—mobile computing, tangible user interfaces, ambient displays, augmented reality, tabletop/surface computing, to name a few—have all contributed to the steady injection of "virtuality" into the physical world.
However, upon close inspection we find that instead of virtuality fully permeating the physical world, these developments have coalesced into new auxiliary layers of digital devices/services/information atop a primarily inert background layer of static architecture: the traditional built environment composed of ceilings, walls, floors, and windows. Digital convenience (interactivity, ease of modification, etc.) flourishes, but only on the auxiliary layers; it seldom, if ever, invades the architectural layer, which is safely assumed to be invariant. This attitude is implicit but prevalent. A typical augmented reality city guide merely overlays panels and arrows onto an otherwise unaltered city scene, and a typical interactive wall permits digital interaction only with 2D graphical elements displayed on its limited stationary surface.
The Achilles' heel of this attitude is that architecture is by no means a neutral, uninvolved background layer. Studies in environmental psychology show that our behaviors, thoughts, and even our emotional states are strongly influenced by the design of the environment, both indoors and outdoors. To use an example rooted in our everyday lives as researchers, we struggle to write academic papers—even if provided with (hypothetically) perfect word processing software on a perfect laptop/tablet—if the design of the environment is not conducive to such activities. (This article was mostly written at a Berlin cafe greatly conducive to writing; trying to write at the local Thai joint never yielded quite the same results.) Architecture, far from being an impartial bystander, acts in fact as an active and potent manipulator of myriad human behaviors.
What we are now witnessing is the spread of virtuality strictly under the reins of architecture. For us to truly spread the benefits of virtuality throughout the physical world, architectural space itself will need to be digitized (Figure 1)—in other words, imbued with the distinctive, interactive properties of digital media.
In recent HCI literature (including some of my own work), as well as in cutting-edge architectural practice, we can see a group of emerging, but still largely isolated, efforts targeting precisely that goal. Here, I am not talking about some far-fetched efforts to create entire buildings/cities out of ceaselessly shape-shifting programmable matter. Rather, what I am pointing to is a steady pace of innovations occurring in heterogeneous technical domains that are collectively chipping away at the longstanding stability of the architectural environment—eventually, it seems, leading to the experience that we are living within a world of habitable bits (or, to use a term from my earlier publication, living in synthetic space).
To be fair, there already exist a number of technologies that manipulate space. Calling someone on the phone instantly compresses space; portable music players surround us with auditory walls; and Airbnb instantly converts homes into hotels. The entire field of urban informatics is rife with many such examples. What separates the newly emerging efforts, however, is the explicit intent to inject the physical built environment—made of glass, concrete, and steel—with the plasticity/interactivity of digital bits.
Within such nascent efforts, I have identified responsive, augmented, and printable architecture as the three key technical approaches that I predict will serve as the main drivers of the gradual digitization of architectural space.
Responsive architecture refers to a class of architectural structures that can dynamically alter their shapes and/or appearances using kinetic transformation mechanisms, large-scale LED arrays, and other technologies. The label does not apply to input-only systems (the wide deployment of surveillance cameras does not qualify) and requires some form of digital output capability, often involving dynamic adaptation to input (Figure 2).
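The sense-and-respond loop at the heart of this definition can be sketched in a few lines. The sketch below is a minimal illustration, not drawn from any built system; the sensor semantics (ambient light in lux) and the aperture mapping are assumptions chosen for clarity.

```python
# Minimal sketch of responsive architecture's defining loop: sensed input
# is mapped to a dynamic change in the structure's output (here, how far
# a facade's apertures open). All parameters are hypothetical.

def aperture_openness(light_lux, low=100.0, high=1000.0):
    """Map an ambient light reading to an aperture setting in [0, 1].

    Dim conditions open the apertures fully (1.0); bright conditions
    close them (0.0); readings in between interpolate linearly.
    """
    if light_lux <= low:
        return 1.0
    if light_lux >= high:
        return 0.0
    return 1.0 - (light_lux - low) / (high - low)

def facade_step(sensor_readings):
    """One control tick: average the sensors, return the new setting."""
    avg = sum(sensor_readings) / len(sensor_readings)
    return aperture_openness(avg)
```

In a built installation the returned setting would drive kinetic actuators or an LED array; prototyping platforms like Arduino make wiring up exactly this kind of loop straightforward.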
Among architects, the concept of responsive architecture is not new (an early seminal work in this area can be found in Soft Architecture Machines by Nicholas Negroponte), and there already exist numerous built examples such as revolving restaurants, retractable stadium roofs, media facades, and so on. (A rare instance of responsive architecture built for domestic use can be found in Maison à Bordeaux by Rem Koolhaas.) Although fantastical visions such as Archigram's Walking City will likely remain fantasies for the foreseeable future due to the immense cost, recent advances in digital design/prototyping tools (e.g., Arduino, Grasshopper) are rapidly dismantling many of the traditional hurdles associated with this domain. This is leading to increased deployments of responsive architecture, particularly in exhibitions and other major events where its attention-grabbing potential is highly appreciated; Asif Khan's kinetic facade exhibited at the 2014 Sochi Olympics is one recent example.
On the other hand, while HCI's recent fascination with shape-shifting user interfaces (or radical atoms, as Hiroshi Ishii calls them) is making kinetic architecture/furniture an increasingly viable topic of research—the undulating coMotion bench by Grönvall et al. being one prominent example—the HCI community so far has not made many inroads into responsive architecture per se. (One of the few exceptions to this may be works exploring large-scale interactive surfaces, which have long been a staple in HCI research.)
Several of my own projects have dealt with responsive architecture. For example, in 2013 I teamed up with a group of architects to build MIMMI (Figure 3), a large-scale installation that responds to the current "mood" of the city (obtained by probing Twitter feeds of citizens) through light, mist, and sound. Another example is Whirlstools (Figure 4), an unbuilt concept for a kinetic furniture system that discreetly encourages communication between strangers through subtle alterations of physical forms (hence affordances). Moving forward, I plan to investigate whether responsive architecture can serve more practical ends, such as whether it can be used to design a safer, more efficient traffic system for pedestrians and cyclists in dense urban centers.
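The MIMMI pipeline described above can be summarized as: aggregate per-tweet sentiment into a single city "mood," then translate that mood into actuator parameters. The sketch below is an illustration of that idea only, not the actual MIMMI code; the sentiment scale, hue range, and mist mapping are all assumptions.

```python
# Hypothetical sketch of MIMMI-style mood sensing. Per-tweet sentiment
# scores (assumed already computed, each in [-1, 1]) are averaged into
# one city mood, which then drives the installation's light and mist.

def city_mood(sentiments):
    """Average recent tweet sentiments into one mood value in [-1, 1]."""
    if not sentiments:
        return 0.0  # neutral when no data is available
    return sum(sentiments) / len(sentiments)

def mood_to_output(mood):
    """Translate mood into (light hue, mist density).

    Happier moods -> warmer light and less mist; gloomier moods ->
    cooler light and denser mist. The exact mapping is illustrative.
    """
    hue = 30.0 + (1.0 - mood) / 2.0 * 210.0   # 30 deg (warm) .. 240 deg (cool)
    mist = (1.0 - mood) / 2.0                 # 0 (none) .. 1 (dense)
    return hue, mist
```

The design choice worth noting is the separation of sensing (`city_mood`) from actuation (`mood_to_output`): the same aggregated mood can drive light, mist, and sound channels independently.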
Augmented architecture does not involve actual technical interventions to architecture itself; rather, the label represents a class of technologies that employ augmented reality to "filter" users' perceptions of the surrounding built environment, producing equivalent experiences of digitized architectural space (Figure 5). Since the effects are illusory (produced by personal electronic devices), the environment can be radically transformed in ways free from real-world constraints, including the laws of physics.
HCI has long been supportive of augmented reality research, and a wealth of relevant work in this area has been carried out within the community. One of my own projects, ClayVision (Figure 6), is a good example. Whereas conventional vision-based augmented reality systems—due to technical limitations—were restricted to overlaying virtual graphics on top of real-world scenery, ClayVision demonstrates how, by using advanced computer vision/image processing techniques, we can treat the entire built environment as a collection of malleable 3D models. Users of the system can grow, transform, and remove buildings with ease; for example, when looking for restaurants, the heights of all buildings in sight may be adjusted to reflect Yelp ratings.
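The restaurant example above amounts to a simple mapping from ratings to vertical scale factors applied to the 3D building models. The following sketch illustrates that mapping under stated assumptions (a 1-5 rating scale and a linear 0.5x-2x scale range); it is not the actual ClayVision implementation.

```python
# Illustrative sketch (not ClayVision's code) of the restaurant example:
# rated buildings in the augmented scene are rescaled so that taller
# buildings signal higher ratings; unrated buildings keep their height.

def height_scale(rating, min_scale=0.5, max_scale=2.0):
    """Map a rating on a 1-5 scale linearly to a vertical scale factor."""
    t = (rating - 1.0) / 4.0
    return min_scale + t * (max_scale - min_scale)

def rescale_buildings(buildings, ratings):
    """Return new heights for a {name: height} dict given {name: rating}."""
    return {
        name: h * height_scale(ratings[name]) if name in ratings else h
        for name, h in buildings.items()
    }
```

For example, a five-star restaurant's building would double in apparent height while its unrated neighbors stay untouched, making the comparison legible at a glance.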
Our experience of space is a holistic one involving multiple sensory channels, and thus augmented architecture preferably should cater to a wide range of modalities, not just vision. Several of my projects were initiated to address this concern. Weightless Wall (Figure 7) uses aural augmented reality to produce sound-blocking walls that can be erected anywhere in a room (which is proposed as a key component of a future, hyper-flexible office environment), and Gilded Gait (Figure 8) uses haptic augmented reality to mechanically alter sensations of ground texture (potential uses include navigation and outdoor gaming).
Projection-based (or spatial) augmented reality represents another important variation of augmented architecture technology. The introduction of Microsoft Kinect has made a huge impact in this area, making projection onto large, complex environments easier and leading to works such as IllumiROOM by Jones et al. Provided there will be sufficient increases in the power output of compact projectors, we may eventually see wearable projectors (a concept popularized by MIT's SixthSense) capable of producing visual illusions that can fill up entire rooms.
The reliance on (relatively) cheap mobile/wearable devices makes augmented architecture perhaps the most cost-effective of the three key approaches. In the future, I expect innovations in both software and hardware to lead to a series of lightweight, always-on wearables (glasses, headsets, etc.) that collectively operate on the full range of human sensation to create compelling experiences of living within a freely transformable environment.
Printable architecture (Figure 9) refers to a class of technologies that automatically fabricate architectural structures from digital files using techniques such as additive manufacturing (aka 3D printing). Unlike the other two key approaches, printable architecture does not support actual transformations of environments; structures can merely be replaced with newly printed ones. Even with this limitation, however, the potential to bring the high degree and ease of consumer-level customization (a characteristic property of digital media) to architectural space makes printable architecture an important development toward the digitization of the built environment.
Printing out entire habitable buildings may well be the holy grail of printable architecture, but existing efforts (such as USC's Contour Crafting) have not quite reached a level where we can expect practical usage in the near term. At the furniture/interior scale, 3D printing is already fully feasible, and one can find many products on the market fabricated using this technology, as well as directions/files for DIY hobbyists with access to the necessary facilities to print their own. Technologies other than additive manufacturing are being investigated as well. For example, ETH Zurich's Flight Assembled Architecture explores the use of quadcopters as robotic construction workers. (Note: I am using the word printable as a catch-all term for digital automated fabrication.)
While HCI has embraced digital fabrication research in recent years, the central focus is on developing techniques for printing out functional objects (such as optics and electronic devices), and contributions to printable architecture itself have so far been minimal. However, there has been a long, respectable line of work on digital design tools aimed at making 3D design more accessible for non-professional users, which should constitute an indispensable part of the printable architecture ecosystem.
I have recently begun my own experiments with printable architecture. Contrary to the majority of existing work, my concern lies not in printing entire houses or even furniture, but rather in printing out gardens and natural landscapes. As a first step, I have built a printer that fabricates freeform hydroponic gardens (Figure 10). A huge potential of printable architecture, in my view, lies in its capacity to easily fabricate environmental elements that have not lent themselves well to conventional, large-scale industrial manufacturing processes.
So what will our future lives be like in a world built of habitable bits? As research into this area is still at an early stage, the prevailing atmosphere is that of youthful optimism. Sifting through the literature, we can uncover numerous statements elucidating seductive future possibilities in bold, graphic terms—collectively painting a techno-utopian vision similar to Weiser's famous vignette (starring "Sal") depicting everyday life filled with the technological wonders of ubicomp. Risking accusations of naivete, we can rely on simple extrapolation and imagine a future world saturated with new technological marvels, such as:
- a meticulously customized 3D-printed house with morphing interiors and automated control of light intake/air circulation
- a city scenery characterized by shape-shifting facades, printed gardens/farms, and smart traffic systems that anticipate and prevent accidents
- a suite of wearables that allow us to dynamically transform our surroundings as easily as changing the desktop wallpaper on a present-day PC.
A brave new world where individuals have the capacity to freely sculpt their surrounding environments, new aesthetic styles flourish, city streets boast impeccable safety/accessibility, and citizens play increased roles in the planning and construction of their city landscapes. A dynamic, diverse, and democratic city that goes well beyond William Mitchell's seminal "city of bits." Would you want to live in such a future? Who wouldn't?
Unfortunately, as we have learned from our experiences with ubicomp, such idealistic visions rarely survive the transition to reality entirely intact. While many of Weiser's predictions did come true, our lives in the year 2014 are still not quite like that of Sal. The real-world manifestation of ubicomp now unfolding before our eyes is a messy, imperfect system with limited built-in robustness (as famously argued by Bell and Dourish), far from the seamless, clockwork precision imagined by Weiser. The pursuit of habitable bits will inevitably suffer the same fate. Over time, as ideals collide with reality, this new vision will similarly need to be shepherded, fleshed out, and reshaped. This is not a reason to despair, though. Just like ubicomp, what will be lost in theoretical purity will be replaced by practical efficacy to benefit our lives. We will be better off for that.
Ever since the publication of Weiser's groundbreaking essay, the HCI community has been hard at work inventing novel ways to inject virtuality into the physical world. Nevertheless, the community has generally been hesitant to engage directly with architectural design, treating the practice as somewhat off-limits for HCI research. The emerging efforts described in this paper signal a radical departure from that longstanding indifference, an attitude that I hope will be adopted by the mainstream HCI community. There is no shortage of issues that need to be tackled. Aside from the obvious task of developing the actual technologies and applications, we will need to devise new design methodologies and evaluation frameworks applicable to architecture-scale systems—both tasks that should put the unique strengths of the HCI community to good use. The time is now ripe for HCI to evolve into a discipline that studies and designs the future environment in its totality, not only the niche subset that has preoccupied us for so long. GUIs, TUIs, and other interfaces have already been thoroughly studied; the new frontier for HCI now lies in the study and design of HUIs—habitable user interfaces.
Yuichiro Takeuchi is a Toronto-born, Tokyo-based computer scientist whose work explores the intersection of HCI and architecture/urban design. He is currently an associate researcher at Sony Computer Science Laboratories Inc. email@example.com
Figure 2. Responsive architecture. In this relatively small-scale example, the responsive wall's color changes to green and apertures appear on its surface, allowing passersby to see through to the other side.
Figure 3. MIMMI, an interactive installation built in Minneapolis. The piece acts as a large-scale ambient display that lets people be aware of the collective mood of their fellow citizens. Collaboration with Bradley Cantrell, Jack Cochran, Carl Koepcke, Peter Mabardi, Artem Melikyan, Allen Sayegh, and Ziyi Zhang (see http://minneapolis.org/mimmi/ for more details).
Figure 4. Whirlstools, a kinetic furniture system that fosters communication in public spaces. Seat angles of vacant stools are dynamically adjusted to steer people into sitting face-to-face with one another. Collaboration with Jean You.
Figure 6. ClayVision, a vision-based augmented reality system that enables freeform transformations of buildings. A new implementation using custom glasses (instead of a tablet as shown here) is under way. Collaboration with Ken Perlin.
©2014 ACM 1072-5220/14/11 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.