Cover story

XIX.1 January + February 2012
Page: 38

Radical Atoms


Authors:
Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, Jean-Baptiste Labrune

Graphical user interfaces (GUIs) let users see digital information only through a screen, as if looking into a pool of water, as depicted in Figure 1 on page 40. We interact with the forms below through remote controls, such as a mouse, a keyboard, or a touchscreen (Figure 1a). Now imagine an iceberg, a mass of ice that penetrates the surface of the water and provides a handle for the mass beneath. This metaphor describes tangible user interfaces: They act as physical manifestations of computation, allowing us to interact directly with the portion that is made tangible—the “tip of the iceberg” (Figure 1b).

Radical Atoms takes a leap beyond tangible interfaces by assuming a hypothetical generation of materials that can change form and appearance dynamically, so they are as reconfigurable as pixels on a screen. Radical Atoms is a vision for the future of human-material interactions, in which all digital information has physical manifestation so that we can interact directly with it—as if the iceberg had risen from the depths to reveal its sunken mass (Figure 1c).

From GUI to TUI

Humans have evolved a heightened ability to sense and manipulate the physical world, yet the digital world takes little advantage of our capacity for hand-eye coordination. A tangible user interface (TUI) builds upon our dexterity by embodying digital information in physical space. Tangible design expands the affordances of physical objects so they can support direct engagement with the digital world [1,2].

Graphical user interfaces represent information (bits) through pixels on bit-mapped displays. These graphical representations can be manipulated through generic remote controllers, such as mice, touchscreens, and keyboards. By decoupling representation (pixels) from control (input devices), GUIs provide the malleability to graphically mediate diverse digital information and operations. These graphical representations and “see, point, and click” interaction represented significant usability improvements over command user interfaces (their predecessor), which required the user to “remember and type” characters.

However powerful, GUIs are inconsistent with our interactions with the rest of the physical world. Tangible interfaces take advantage of our haptic sense and our peripheral attention to make information directly manipulable and intuitively perceived through our foreground and peripheral senses.

Tangible interfaces are at once an alternative and a complement to graphical interfaces, representing a new path to Mark Weiser’s vision of ubiquitous computing [3]. Weiser wrote of weaving digital technology into the fabric of physical environments and making computation invisible. Instead of melting pixels into the large and small screens of devices around us, tangible design seeks an amalgam of thoughtfully designed interfaces embodied in different materials and forms in the physical world—soft and hard, robust and fragile, wearable and architectural, transient and enduring.

Limitations of Tangibles

Although the tangible representation allows the physical embodiment to be directly coupled to digital information, it has limited ability to represent change in many material or physical properties. Unlike with pixels on screens, it is difficult to change the form, position, or properties (e.g., color, size, stiffness) of physical objects in real time. This constraint can make the physical state of TUIs inconsistent with underlying digital models.

Interactive surfaces are a promising approach to collaborative design and simulation, supporting a variety of spatial applications (e.g., Urp [4], profiled on page 41). This genre of TUI is also called the “tabletop TUI” or “tangible workbench.”

On an augmented workbench, discrete tangible objects are manipulated and their movements are sensed by the workbench. Visual feedback is provided on the surface of the workbench via video projection, maintaining coincidence of the input and output spaces.
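
As a concrete illustration, the sense-update-project cycle behind such workbenches can be sketched as a simple loop. The Python sketch below is ours, not the implementation behind Urp or any published system; the Tracker, Simulation, and Projector classes are hypothetical stand-ins for a real sensing, simulation, and projection stack.

```python
# A minimal, self-contained sketch of the tabletop TUI loop described above.
# All class and method names here are hypothetical illustrations.

import time
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    angle: float  # orientation of the tangible on the surface, in degrees

class Tracker:
    """Stub for the workbench's sensing layer (e.g., vision-tracked tags)."""
    def read_poses(self) -> dict[str, Pose]:
        return {"building_1": Pose(0.2, 0.5, 30.0)}  # canned data for the sketch

class Simulation:
    """Stub for the underlying digital model (shadows, wind, reflections)."""
    def update(self, poses: dict[str, Pose]) -> None:
        self.poses = poses
    def render_overlay(self) -> str:
        # In a real system this would be an image; here, a textual placeholder.
        return f"shadows for {sorted(self.poses)}"

class Projector:
    """Stub for the video projector that draws onto the same surface."""
    def draw(self, overlay: str) -> None:
        print("projecting:", overlay)

def run(tracker: Tracker, sim: Simulation, projector: Projector, frames: int = 3) -> None:
    for _ in range(frames):                   # a real loop runs continuously
        poses = tracker.read_poses()          # 1. sense tangibles on the workbench
        sim.update(poses)                     # 2. sync the digital model to them
        projector.draw(sim.render_overlay())  # 3. project the digital shadow
        time.sleep(1 / 30)                    # ~30 Hz keeps feedback feeling live

run(Tracker(), Simulation(), Projector())
```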

Tabletop TUIs utilize dynamic representations, such as video projections, that share the same physical space as the tangibles and give dynamic expression to the underlying digital information and computation. We call this the “digital shadow.” The graphically mediated shadows and reflections that accompany the physical building models in Urp are one example.

The success of a TUI often relies on a strong perceptual coupling between the tangible and intangible (dynamic) representations. In many successful TUIs, this coupling produces a seamless interface that actively mediates interaction with the underlying digital information and appropriately blurs the boundary between physical and digital. Coincidence of input and output spaces and real-time response are important elements in accomplishing this goal.

Actuated and Kinetic Tangibles: From TUI Toward Radical Atoms

Tangible interfaces are deeply concerned with representations that cross the physical and digital domains. If we rely too much on the digital shadow (the intangible representation, such as video projection onto a tabletop), the interface begins to look like an extended tabletop GUI with multiple pointing devices or controllers on the surface, and it loses the advantage of being tangible.

The central characteristic of tangible interfaces is the coupling of tangibles (as representation and control) to underlying digital information and computational models. One of the biggest challenges is keeping the physical and digital states in sync when the information changes dynamically, whether through users’ input (direct manipulation of tangibles) or through the results of underlying computation. Instead of relying on digital shadows (video projections on the flat surfaces beneath or behind tangibles), we have spent the past decade vigorously pushing atoms beyond their static, inert state into active and kinetic dimensions, prototyping with motors and gears, tiny robots, and shape-memory alloys (SMAs).

The evolution from static/inert to active/kinetic tangibles is illustrated in Figure 2 [5]. Here are highlights of the threads of this evolution, from static/passive to kinetic/active tangibles.

Embodied kinetic tangibles and kinetic tangible toolkits. Some TUIs employ actuation of tangibles as the central means of computational feedback. Examples include inTouch [6], curlybot [7], and Topobo [8]. This type of actuated TUI does not depend on intangible representation (i.e., video projection/digital shadowing), as active feedback throughout the tangible representation serves as the main display channel.

inTouch is one of the early examples of “tangible telepresence”: tangibles that map haptic input to haptic representations over a distance. The underlying mechanism is the synchronization of distributed objects and the gestural simulation of “presence” artifacts, such as movement or vibration. These gestures allow remote participants to convey haptic manipulations of shared, distributed physical objects. One outcome is giving remote users the sense of a ghostly presence, as if an invisible person were manipulating the shared object.

The use of kinesthetic gestures and movement is a promising application genre. For example, educational toys promoting constructionist learning concepts have been explored using actuation technology, taking advantage of tangible interfaces’ input/output coincidence. Gestures in physical space can illuminate symmetric mathematical relationships in nature, and kinetic motions can be used to teach children concepts ranging from programming and differential geometry to storytelling. Curlybot [7] and Topobo [8] are examples of toys that distill ideas relating to gesture, form, dynamic movement, physics, and storytelling.

Kinetic Sketchup is a toolkit to provide a language for motion prototyping featuring a series of actuated physical programmable modules that investigate the rich interplay of mechanical, behavioral, and material design parameters enabled by motion [9]. Bosu [10] is a design tool offering kinetic memory—the ability to record and play back motion in 3-D space—for soft materials. Both Kinetic Sketchup and Bosu were used by designers for motion prototyping to explore a variety of kinetic tangibles.

2-D tabletop discrete tangibles. A variety of tabletop TUIs (interactive surfaces), such as the Sensetable system, have been prototyped and partially commercialized over the past decade. One limitation of these systems, however, is the computer’s inability to move objects on the interactive surface. To address this problem, the Actuated Workbench [11] and PICO [12] provide a hardware and software infrastructure that lets a computer smoothly move objects on a table surface in two dimensions. This adds a feedback loop for computer output and helps resolve the inconsistencies that otherwise arise when the digital state changes but the objects on the table cannot follow.
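
To make that feedback loop concrete, the sketch below shows one simple way a computer might nudge a physical puck toward the position its digital counterpart has taken. This proportional-step controller is our illustration, not the published Actuated Workbench or PICO design.

```python
# Hedged sketch of the inconsistency-resolving loop on an actuated tabletop:
# when the digital model moves, actuate the puck toward its new target.

def step_toward(current: tuple[float, float],
                target: tuple[float, float],
                max_step: float = 0.01) -> tuple[float, float]:
    """Move at most max_step (in table units) from current toward target."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= max_step:
        return target                      # close enough: snap to target
    scale = max_step / dist
    return current[0] + dx * scale, current[1] + dy * scale

# Example: the digital model relocated a puck; actuate it there step by step.
puck, goal = (0.10, 0.10), (0.10, 0.25)
while puck != goal:
    puck = step_toward(puck, goal)         # one (e.g., electromagnet) actuation step
print("puck now matches its digital position:", puck)
```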

2.5-D deformable/transformable continuous tangibles (digital clay). A fundamental limitation of previous TUIs, such as Urp, was the lack of capability to change the forms of tangible representations during the interactions. Users had to use predefined, finite sets of fixed-form objects, changing only the spatial relationships among them, not the form of the individual objects themselves. Instead of using predefined, discrete objects with fixed forms, a new type of TUI system that utilizes continuous tangible materials, such as clay and sand, was developed for rapid form creation and sculpting for landscape design. Examples are Illuminating Clay [13] and SandScape. Later, this type of interface was applied to the browsing of 3-D volumetric data in the Phoxel-Space project.

Relief was created to explore direct and tangible interactions with an actuated 2.5-D shape display and to provide a kinetic memory of the forms and dynamic transformation capabilities. Recompose added midair gestures to direct touch to enable users to create forms (sculpt digital clay), and allowed us to explore a new form of interaction with 3-D tangible information using our bodies [14].

Antigravity tangibles. Actuated and transformable tangible interfaces have demonstrated a tangible world that is more dynamic and computer controllable, overcoming the rigidity of atoms. The most fundamental constraint on tangibles, however, comes from gravity, which governs our interaction with the physical world. The ZeroN project [15] explores how removing gravity from the physical world alters our interaction with it (see sidebar on page 46). Although this article focuses on the transformable capability of materials, we envision that the future of Radical Atoms will incorporate advanced capabilities such as antigravity levitation or the syncing of forms among distributed copies (as in inTouch [6]).

Concept of Radical Atoms

Radical Atoms is our vision for human interactions with dynamic physical materials that are computationally transformable and reconfigurable. It is based on a hypothetical, extremely malleable, dynamic physical material that is bidirectionally coupled with an underlying digital model (bits), so that dynamic changes of physical form are reflected in the digital states in real time, and vice versa.

To utilize dynamic affordance as a medium for representation while allowing bidirectional input to control the shape and thus the underlying computational model, we envision that Radical Atoms should fulfill the following three requirements:

  • Transform its shape to reflect underlying computational state and user input;
  • Conform to constraints imposed by the environment and user input; and
  • Inform users of its transformational capabilities (dynamic affordances).

Figure 3 illustrates the dynamic interactions between the user, material, and underlying digital (computational) model.

Transform. Radical Atoms couples the shape of a material to an underlying computational model. The interface must be able to transform its shape so that users can modify the model by reshaping the interface, and so that changes in the computational model are reflected and displayed by the interface changing shape in synchrony.

Users can reconfigure and transform the interface manually through direct manipulation, gestural commands, or conventional GUIs. The Radical Atoms material should be deformable and reconfigurable by human hands. We envision “digital clay” as a physically represented, malleable material that is synced with the coupled digital model: Deformation and transformation of sensor-rich objects and materials by users’ hands should be translated immediately into the underlying digital model to update its internal states. Likewise, we envision “digital and physical building blocks” that allow users to translate, rotate, and reconfigure a structure quickly, taking advantage of the dexterity of human hands, with the changes in configuration reflected in the underlying digital model. This capability adds a new modality of input from the physical world (users) to the digital world.

Material can also transform by itself to reflect and display changes in the underlying digital model, serving as a dynamic physical representation (shape display) of digital information. This extends the concept of graphical representation on a 2-D screen of pixels to 3-D physical representation with new dynamic matter that can transform its shape and state on command from the underlying digital model. This capability adds a new modality of output from the digital world to the physical world perceived by users.

Conform. Material transformation has to conform to programmed constraints. Since these interfaces can radically change their shapes in close proximity to humans, they need to obey a set of constraints, imposed both by physical laws (e.g., total volume must remain constant, absent phase changes) and by human common sense (e.g., user safety must be ensured at all times).

Inform. Material has to inform users of its transformational capabilities (affordances). In 1977, Gibson [16] proposed that we perceive the objects in our environment through the actions they offer, a property he named affordance. For example, through evolution we instinctively know that a cup affords storing volumes of liquid. Industrial design has established a wide set of design principles to inform the user of an object’s affordances—for example, a hammer’s handle tells the user where to grip the tool. In the case of dynamic materials, these affordances change as the interface’s shape alters. In order to interact with the interface, the user has to be continuously informed about the current state of the interface, and thus the function it can perform. An open interaction design question remains: How do we design for dynamic affordances?
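
To make the three requirements concrete, here is a minimal sketch, in Python, of how transform, conform, and inform might be expressed as an interface for a hypothetical Radical Atoms material. Nothing here is a real API; the class and method names are our illustration, and the volume check merely stands in for the conformance constraints described above.

```python
# Sketch of transform / conform / inform as an interface, with one toy
# constraint (volume conservation). Hypothetical names throughout.

from abc import ABC, abstractmethod

class RadicalAtomsMaterial(ABC):
    @abstractmethod
    def transform(self, target_shape: dict) -> None:
        """Change physical form to reflect the digital model (output) and,
        symmetrically, report manual deformation back to the model (input)."""

    @abstractmethod
    def conforms(self, candidate_shape: dict) -> bool:
        """Check a candidate shape against programmed constraints, e.g.,
        conservation of volume and user-safety envelopes."""

    @abstractmethod
    def inform(self) -> list[str]:
        """Expose the currently available transformations (dynamic
        affordances) so users know what the material can do next."""

class DigitalClay(RadicalAtomsMaterial):
    def __init__(self, volume: float):
        self.volume = volume  # physical law: total volume stays constant

    def transform(self, target_shape: dict) -> None:
        if not self.conforms(target_shape):
            raise ValueError("shape violates material constraints")
        self.shape = target_shape  # physical and digital state stay in sync

    def conforms(self, candidate_shape: dict) -> bool:
        # Illustrative constraint: volume is conserved (no phase change).
        return abs(candidate_shape["volume"] - self.volume) < 1e-6

    def inform(self) -> list[str]:
        return ["snap-to-sphere", "split-in-half", "extrude-face"]

clay = DigitalClay(volume=1.0)
clay.transform({"volume": 1.0, "form": "sphere"})  # conforms: accepted
print(clay.inform())                               # current dynamic affordances
```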

Interactions with Radical Atoms

Through Radical Atoms we focus on the interaction with a hypothetical dynamic material rather than on the technological difficulties in developing such a material. We have explored a variety of application scenarios in which we interact with Radical Atoms as digital clay for form giving and as a hand tool with the ability for context-aware, semi-automatic transformation.

Direct touch and gestural interaction. The oldest interaction technique for form giving is direct touch—humans have been forming clay with their hands for thousands of years. Direct touch offers high-precision manipulation with direct haptic feedback from the operand, but it constrains the user to reshape the material only at the scale of his or her hand. Later in history, we started using tools to shape other materials (e.g., wood and metal) to reach a desired form. Still, the scope of manipulation remained on the hand/body scale. Through the Recompose/Relief project [14], we explored how we can combine the low-precision but extensive gestural interaction with high-precision direct manipulation at a fixed scale. Gestures coupled with direct touch create an interaction appropriate for Radical Atoms, since users are able to rapidly reform dynamic materials at all scales.

Context-aware transformation. Hand tools operate on objects. For example, a hammer operates on a nail. We can also imagine writing tools that can change form and function between a pen, a brush, and a stylus, based on the type of surface being drawn or written upon: a sheet of paper, a canvas, or a touchscreen. The way a user holds the tool can also inform how the tool transforms: The way we hold a knife and a fork is distinct. If the soup bowl is the operand, then it is likely the operator should be a spoon, not a fork.

Having the ability to sense the grasping hands, surrounding operands, and the environment, tools might be able to transform to the most appropriate form using contextual knowledge. An umbrella may change its form and softness based on the direction and strength of wind and rain. A screwdriver could transform between a Phillips and flat head depending on the screw it is operating on.
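
As a toy illustration of this inference step, the sketch below maps a sensed operand to a tool form with a simple lookup table. A real context-aware tool would need far richer sensing and domain knowledge, as the next paragraph notes; every name here is hypothetical.

```python
# Deliberately simple stand-in for context-aware tool transformation:
# sense the operand, pick the form the dynamic tool should take.

TOOL_FOR_OPERAND = {
    "nail": "hammer",
    "phillips_screw": "Phillips screwdriver",
    "slotted_screw": "flat-head screwdriver",
    "soup_bowl": "spoon",
    "paper": "pen",
    "canvas": "brush",
    "touchscreen": "stylus",
}

def choose_form(sensed_operand: str, default: str = "neutral handle") -> str:
    """Return the form the tool should assume for the sensed operand."""
    return TOOL_FOR_OPERAND.get(sensed_operand, default)

print(choose_form("soup_bowl"))      # -> spoon
print(choose_form("slotted_screw"))  # -> flat-head screwdriver
```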

These context-aware transformations require a lot of domain knowledge and inference to disambiguate and identify the best solution. Though the implementation of these interfaces is beyond our current-day technological capabilities, this vision serves as a guiding principle for our future research explorations.

Shape-memory clay: Perfect Red. In 2008 the Tangible Media Group began to explore Radical Atoms through a storyboard exercise that asked us to imagine: What interactions are possible with a new kind of matter capable of changing form dynamically?

Perfect Red represents one such possible substance: a clay-like material preprogrammed to have many of the features of computer-aided design (CAD) software. Perfect Red is a fictional material that can be sculpted like clay—with hands and hand tools—and responds according to rules inspired by CAD operations, including snapping to primary geometries, Boolean operations, and parametric design. When Perfect Red is rolled into a ball, it snaps into the shape of a perfect sphere (primary solids). When two pieces are joined, Perfect Red adds the shapes to each other (Boolean addition). Perfect Red also has other behaviors inspired by parametric design tools: If you split a piece into two even halves, the operations performed on one part are mirrored in the other. And much like CAD software, Perfect Red can perform detailed operations using splines projected on the surfaces of solids. To cut an object in half, for example, all that’s needed is to draw a line along the cut and tap it with a knife. Spline and parametric behaviors can also be combined: If you want to drill 10 holes, you simply draw 10 dots and stick a pin into one of them (Figure 4).

The idea of snapping to primary geometries such as the sphere, cylinder, and cube was inspired by shape-memory alloys. We extended this notion of “shape memory” to 3-D primary forms that can be preprogrammed into the material: Users can invoke the constraints of primary volumes by approximating them and letting the material snap into the closest primary shape.
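
A hedged sketch of that snapping behavior follows: given a rough, hand-formed shape, choose the preprogrammed primitive it most resembles. The area-versus-volume scoring rule below is our stand-in; the storyboard leaves the actual matching rule unspecified.

```python
# Toy "snap to primary geometry": compare a shape's surface area to that of
# ideal primitives of the same volume and return the closest match.

import math

def snap_to_primitive(surface_area: float, volume: float) -> str:
    """Return the primary solid whose ideal area best matches the shape."""
    r = (3 * volume / (4 * math.pi)) ** (1 / 3)  # radius of equal-volume sphere
    s = volume ** (1 / 3)                        # edge of equal-volume cube
    candidates = {
        "sphere": 4 * math.pi * r ** 2,
        "cube": 6 * s ** 2,
    }
    return min(candidates, key=lambda name: abs(candidates[name] - surface_area))

# A hand-rolled ball is slightly lumpy, so its area is a bit above a sphere's.
print(snap_to_primitive(surface_area=5.1, volume=1.0))  # -> sphere
```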

Perfect Red is imagined as one of a number of new materials imbued with complex sets of responsive behaviors. The idea was intended to demonstrate the richness of Radical Atoms: how they can combine the intuition and improvisation inherent in physical prototyping with the parametric operations of computer-aided design tools. Perfect Red is only one of many possible materials that afford unique interactions—what other materials can you imagine?

Technology for Atom Hackers

Although actuated TUIs have been prototyped using available technologies such as servos, motors, tiny robots, electromagnets, and shape-memory alloys, we envision that Radical Atoms will provide sensing, actuation, and communication capabilities at the molecular level. Advances in material science, nanotechnology, and self-organizing micro-robotic technology are opening new possibilities for materializing the vision of Radical Atoms.

Nanotechnology. Our vision of Radical Atoms requires actuation at the nanoscale (nanoelectromechanical systems, or NEMS) and individual addressing of elements in the system (quantum computing). Material property changes at the molecular level can manifest as drastic property changes—optical, mechanical, electrical—at the macroscopic level. The nanosciences aim to scale technology down to the atomic level. In his legendary 1959 talk, Richard Feynman stated the potential of nanotechnology: “There is plenty of room at the bottom” [17]. Scientists have since made tremendous leaps toward atom-scale mechanics. Commonplace technology, including inkjet printers, already makes use of microscopic-scale microelectromechanical systems (MEMS). Scaling down from micrometers to nanometers brings its own set of difficulties; devices created at the nanometer scale experience extreme aging, for example. Light, durable materials like carbon nanotubes [18] are a promising alternative to the materials we use today. Once technology overcomes the difficulties at this level, a wide range of opportunities will emerge for hackers at the atomic scale.

Throughout the previously described projects, one of the key problems we faced was addressing individual parts of the system. Currently, tabletop actuation (e.g., PICO [12]) solves this problem by hiding actuation completely under the table, but the trade-off is half a dimension lost to machinery. With advances in quantum computers [19], in which information is stored by the state of individual atoms, system integration (actuation + addressing) is inherent in the design.

Computer science. Programmable, modular systems based on cellular automata or neural networks have been a focus of computer scientists. The theoretical framework for programmable matter was proposed by Toffoli and Margolus in 1991 [20], exploring control theories on how one should “program programmable matter.”

Later, in 1996, Abelson et al. [21] described how these systems can be thought of as complex biological systems with a myriad of parallel resources. Their theories related fundamental laws from material science to computational theory (Fick’s law of diffusion, information propagation, Fickian communication).
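
As a tiny, self-contained illustration (ours, not from the cited papers) of the diffusion-style communication these theories invoke, the sketch below lets each cell of a 1-D programmable-matter array repeatedly average with its neighbors, so a signal injected at one edge propagates through the ensemble.

```python
# Fick-like local update: each interior cell moves toward its neighbors'
# mean; the edge cells act as fixed boundaries (the left one as a source).

def diffuse(cells: list[float], steps: int, k: float = 0.25) -> list[float]:
    """Run `steps` rounds of neighbor-averaging over the cell array."""
    for _ in range(steps):
        nxt = cells[:]
        for i in range(1, len(cells) - 1):
            nxt[i] = cells[i] + k * (cells[i - 1] + cells[i + 1] - 2 * cells[i])
        cells = nxt
    return cells

signal = [1.0] + [0.0] * 9      # inject a signal at the left edge
print([round(c, 2) for c in diffuse(signal, steps=50)])
```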

Since these pioneering theories, a number of projects have realized basic principles of programmable matter. Modular self-assembly was explored in the Miche project [22], which showed how a distributed, modular system can create structures out of robotic cubes.

The Millibiology project from Neil Gershenfeld [23] explores how synthetic protein chains can reconfigure themselves over six orders of magnitude (from Å to dm). The project’s main objective is to explore folding chain configurations for programmable matter through both mechanical design and computer simulation.

Mechatronics. Robotics and mechatronics are among the most vibrant fields related to dynamic matter, self-reconfigurable robots, and assemblies. Adaptronics by Janocha [24] summarizes the sensor and actuator technologies needed to build systems that adapt to their environments. Self-reconfigurable materials were demonstrated by the Claytronics project, which explores how modular systems made up of many nodes can be built [25]. Large-scale prototypes have been built to explore the “ensemble effect,” in which multiple nodes interact with one another. As designs scale down, the number of nodes has to increase dramatically, and assembly becomes the bottleneck. Kinematic self-replicating machines can create nodes and establish an internal hierarchy among them, thus addressing the assembly problem. The work of Griffith and Jacobson [26] shows how principles of molecular self-assembly can be applied.

Material science. Research in material computation has explored novel intelligent materials that could assist in the actuation of shape. Such materials could augment computational actuation through material logic [27] by interpolating between actuated points. Today, TUIs are designed in a heterogeneous manner: Actuation (structure) and cover (skin) are designed and implemented separately. For Radical Atoms we need new material design principles that treat objects as homogeneous entities with the ability to change their properties.

A number of materials experience a shape-memory effect under external stimuli due to their molecular structures: Shape-memory alloys (SMAs) return to a preprogrammed shape when heated, and magnetic shape-memory alloys exhibit a memory effect under strong magnetic fields. SMAs inspired our imaginary material Perfect Red, around which we explored the interaction techniques for form giving detailed earlier. Other materials can be actuated by driving electric current through them: Electroactive polymers and polymer gels change their size, shape, and optical properties when exposed to high currents. Optical properties can also change rapidly under certain stimuli: Thermochromic materials change color in response to heat, while halochromic materials change color in response to acidity.

Vision-Driven Design Research

Looking back through the history of HCI, we see that quantum leaps have rarely resulted from studies on users’ needs or market research; they have come from the passion and dreams of visionaries such as Douglas Engelbart. We believe that vision-driven design is critical in fostering quantum leaps, and it complements needs-driven and technology-driven design by looking beyond current-day limits. Tangible Bits is an example of vision-driven research. With Radical Atoms we seek new guiding principles and concepts to help us see the world of bits and atoms with new eyes to blaze a new trail in interaction design.

Figure 5 illustrates the three approaches: technology-driven, needs-driven, and vision-driven design research. The reason we focus on the vision-driven approach is its life span: Technologies become obsolete in about a year, users’ needs change quickly, and applications become obsolete in about 10 years. A strong vision, however, can last beyond our lifetimes.

Conclusion

In 1965, Ivan Sutherland envisioned a room where “computers can control the existence of matter” [28], in order to create an immersive visual and haptic sensation. Radical Atoms is our vision of human interactions with dynamic physical materials that can transform their shape, conform to constraints, and inform the users of their affordances. Radical Atoms envisions a change in the design paradigm of human-computer/material interaction in which we no longer think of designing the interface, but rather of the interface itself as a material. We may call these human-material interactions (HMIs) or material user interfaces (MUIs), in which any object—no matter how complex, dynamic, or flexible its structure—can display, embody, and respond to digital information. Even though we may need to wait decades before atom hackers (material scientists, self-organizing nano-robot engineers, etc.) can invent the enabling technologies for Radical Atoms, we believe the exploration of interaction design techniques can begin today.

Acknowledgements

The vision of Radical Atoms has been developed through a series of design workshops since 2008. More than two dozen past and current members of the Tangible Media Group and colleagues in the MIT Media Lab have contributed to shaping the concept and articulating the interaction techniques and possible applications. We would like to thank all listed here for their contributions: Amanda Parkes, Hayes Raffle, James Patten, Gian Pangaro, Dan Maynes-Aminzade, Vincent Leclerc, Cati Vaucelle, Phil Frei, Victor Su, Scott Brave, Andrew Dahley, Rich Fletcher, Jamie Zigelbaum, Marcelo Coelho, Peter Schmitt, Adam Kumpf, Keywon Chung, Daniel Leithinger, Jinha Lee, Sean Follmer, Xiao Xiao, Samuel Luescher, Austin S. Lee, Anthony DeVincenzi, Lining Yao, Paula Aguilera, Jonathan Williams, and the students who took the Fall 2010 Tangible Interfaces course. We thank the TTT (Things That Think) and DL (Digital Life) consortia of the MIT Media Lab for their support of this ongoing project. Thanks are also due to Neal Stephenson, whose science fiction novels and talks inspired us, and Ainissa Ramirez of Yale University, who advised us about the state of the art of material science.

References

1. Ishii, H. and Ullmer, B. Tangible bits: Towards seamless interfaces between people, bits and atoms. Proc. of CHI’97. ACM Press, New York, 1997, 234–241.

2. Ishii, H. Tangible bits: Beyond pixels. Proc. of the 2nd International Conf. on Tangible and Embedded Interaction (TEI ‘08). ACM, New York, 2008, 15–25.

3. Weiser, M. The computer for the 21st century. Scientific American 265 (Sept. 1991), 94–104.

4. Underkoffler, J. and Ishii, H. Urp: A luminous-tangible workbench for urban planning and design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems: The CHI is the Limit (CHI ‘99). ACM, New York, 1999, 386–393.

5. Parkes, A., Poupyrev, I., and Ishii. H. Designing kinetic interactions for organic user interfaces. Commun. ACM 51, 6 (Jun. 2008), 58–65.

6. Brave, S., Ishii, H., and Dahley, A. Tangible interfaces for remote collaboration and communication. Proc. of the 1998 ACM Conference on Computer Supported Cooperative Work (CSCW ‘98). ACM, New York, 1998, 169–178.

7. Frei, P., Su, V., Mikhak, B., and Ishii, H. Curlybot: Designing a new class of computational toys. Proc. of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, Apr. 1–6). ACM Press, New York, 2000, 129–136.

8. Raffle, H.S., Parkes, A.J., and Ishii, H. Topobo: A constructive assembly system with kinetic memory. Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘04). ACM, New York, 2004, 647–654.

9. Parkes, A. and Ishii, H. Kinetic sketchup: Motion prototyping in the tangible design process. Proc. of the 3rd International Conference on Tangible and Embedded Interaction (TEI ‘09). ACM, New York, 2009, 367–372.

10. Parkes, A. and Ishii, H. Bosu: A physical programmable design tool for transformability with soft mechanics. Proc. of the 8th ACM Conference on Designing Interactive Systems (DIS ‘10). ACM, New York, 2010, 189–198.

11. Pangaro, G., Maynes-Aminzade, D., and Ishii, H. The actuated workbench: Computer-controlled actuation in tabletop tangible interfaces. Proc. of the 15th Annual ACM Symposium on User Interface Software and Technology (UIST ‘02). ACM, New York, 2002, 181–190.

12. Patten, J. and Ishii, H. Mechanical constraints as computational constraints in tabletop tangible interfaces. Proc. of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘07). ACM, New York, 2007, 809–818.

13. Piper, B., Ratti, C., and Ishii, H. Illuminating clay: A 3-D tangible interface for landscape analysis. Proc. of the SIGCHI conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (CHI ‘02). ACM, New York, 2002, 355–362.

14. Leithinger, D., Lakatos, D., DeVincenzi, A., Blackshaw, M., and Ishii, H. Direct and gestural interaction with relief: A 2.5D shape display. Proc. of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ‘11). ACM, New York, 2011, 541–548.

15. Lee, J., Post, R., and Ishii, H. ZeroN: Mid-air tangible interaction enabled by computer controlled magnetic levitation. Proc. of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ‘11). ACM, New York, 2011, 327–336.

16. Gibson, J.J. The theory of affordances. In Perceiving, Acting, and Knowing: Toward an Ecological Psychology. Wiley, Hoboken, 1977.

17. Transcript of a talk presented by Richard P. Feynman to the American Physical Society in Pasadena in December 1959; http://calteches.library.caltech.edu/1976/1/1960bottom.pdf

18. Saito, R., Dresselhaus, G., and Dresselhaus, M.S. Physical Properties of Carbon Nanotubes. Imperial College Press, London, 1998.

19. Nielsen, M.A. and Chuang, I. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, U.K., 2000.

20. Toffoli, T. and Margolus, N. Programmable matter: Concepts and realization. Physica D: Nonlinear Phenomena 47, 1–2 (Jan. 1991), 263–272.

21. Abelson, H., Allen, D., Coore, D., Hanson, C., Homsy, G., Knight, Jr., T.F., Nagpal, R., Rauch, E., Sussman, G.J., and Weiss, R. Amorphous computing. Commun. ACM 43, 5 (May 2000), 74–82.

22. Gilpin, K., Kotay, K., Rus, D., and Vasilescu, I. Miche: Modular shape formation by self-disassembly. The International Journal of Robotics Research 27, 3–4 (Mar. 2008), 345–372.

23. Gershenfeld, N. et al. The Milli Project; http://milli.cba.mit.edu/

24. Janocha, H., ed. Adaptronics and Smart Structures: Basics, Materials, Design, and Applications (2nd ed.). Springer, Berlin, Heidelberg, 2007.

25. Aksak, B. et al. Claytronics: Highly scalable communications, sensing, and actuation networks. Proc. of the 3rd International Conference on Embedded Networked Sensor Systems (SenSys ‘05). ACM, New York, 2005, 299.

26. Griffith, S., Goldwater, D., and Jacobson, J. Self-replication from random parts. Nature 437 (2005), 636.

27. Oxman, N. and Rosenberg, J.L. Material-based design computation: An inquiry into digital simulation of physical material properties as design generators. International Journal of Architectural Computing (IJAC) 5, 1 (2007), 26–44.

28. Sutherland, I.E. The ultimate display. Proc. of the IFIP Congress. 1965.

Authors

Hiroshi Ishii is the Jerome B. Wiesner Professor of Media Arts and Sciences at the MIT Media Lab. Ishii’s research focuses upon the design of seamless interfaces between humans, digital information, and the physical environment. His team seeks to change the “painted bits” of GUIs to “tangible bits” by giving physical form to digital information.

Dávid Lakatos is a graduate student in the Tangible Media Group at the MIT Media Lab. His research focuses on interaction design with future dynamic materials, at the intersection of mechanical engineering, material design, and philosophy.

Leonardo Bonanni completed his Ph.D. and postdoctoral work with the MIT Media Lab’s Tangible Media Group, where he taught sustainable design and developed interfaces for exploring the lives of objects. He is founder and CEO of Sourcemap.com, the crowdsourced directory of supply chains and carbon footprints.

Jean-Baptiste Labrune is a researcher at the MIT Media Lab specializing in human-machine creativity. He is also lab director at Bell Labs in their application domain, where he leads research projects involving interdisciplinary teams from the human sciences, design, and technology.

Figures

Figure 1. Iceberg metaphor—from (a) GUI (painted bits) to (b) TUI (tangible bits) to (c) Radical Atoms.

Figure 2. Evolution of tangibles from static/passive to kinetic/active.

Figure 3. Interactions with Radical Atoms.

Figure 4. Making an enclosure with Perfect Red (storyboard by Leonardo Bonanni).

Figure 5. What drives design research, and why we focus on vision-driven design.

Sidebar: URP: An Example of an Early TUI

Urp uses physical scale models of architectural buildings to configure and control an underlying urban simulation of shadow, light reflection, wind flow, and other properties. Urp also provides a variety of interactive tools for querying and controlling parameters of the urban simulation. These include a clock to change the position of the sun, a material wand to change building surfaces between brick and glass (thus reflecting light), a compass to change wind direction, and an anemometer to measure wind speed.

Urp’s building models cast digital shadows onto the workbench surface (via video projection). The sun’s position in the sky can be controlled by turning the hands of a clock on the tabletop. The building models can be repositioned and reoriented, with their solar shadows transforming according to their spatial and temporal configuration. Changing the compass direction alters the direction of a computational wind in the urban space. Urban planners can identify potential problems, such as areas of high pressure that may result in challenging walking environments or hard-to-open doors. Placing the anemometer on the tabletop shows the wind speed at that point in the model.

Figure. Urp: A workbench for urban planning and design. Physical building models cast digital shadows responding to a clock tool that controls the time of day (bottom left). Wind flow simulations are controlled with a wind tool (bottom right).

Sidebar: TOPOBO

Topobo is a 3-D constructive assembly system with kinetic memory—the ability to record and play back physical motion. Unique among modeling systems are Topobo’s coincident physical input and output behaviors. By snapping together a combination of passive (static) and active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons; animate those forms by pushing, pulling, and twisting them; and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements on its own. In the same way that people can learn about static structures by playing with building blocks, they can learn about dynamic structures by playing with Topobo.

Figure. Topobo: Kinetic 3-D constructive assembly system.

Sidebar: PICO

PICO is a tabletop interaction surface that can track and move small objects on top of it. It has been used for complex spatial layout problems, such as cellular telephone tower layout. PICO combines the usability advantages of mechanical systems with the abstract computational power of modern computers, merging software-based computation with dynamic physical processes that the user can see and modify in order to accomplish his or her task.

Objects on this surface are moved under software control using electromagnets, but also by users standing around the table. With this method, PICO users can physically intervene in the computational optimization process of determining cellphone-tower placement.

Figure. PICO: Mechanical intervention of computationally actuated pucks.

Sidebar: RELIEF/RECOMPOSE

The Recompose project explores how we can interact with a 2.5-D actuated surface through our gestures. The table consists of an array of 120 individually addressable pins, whose height can be actuated and read back simultaneously, thus allowing the user to utilize them as both input and output. Users can interact with the table using their gestures or through direct manipulation. Together, these two input types provide a full range of fidelity, from low to high precision and from hand- (direct manipulation) to body-scale (gestures) interaction.
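
As a minimal sketch (hypothetical, not the Relief codebase) of how a pin array can serve as both input and output, the code below reads back pin heights, detects where a user has pressed, and writes target heights for display.

```python
# Toy pin array: write() actuates pins (output), read() reports their
# heights (input), and a frame-to-frame comparison detects user presses.

class PinArray:
    def __init__(self, n: int = 120):
        self.heights = [0.0] * n           # current height of each pin, 0..1

    def read(self) -> list[float]:
        return self.heights[:]             # input: pins report their height

    def write(self, targets: list[float]) -> None:
        self.heights = targets[:]          # output: actuate pins to targets

def detect_press(before: list[float], after: list[float],
                 threshold: float = 0.05) -> list[int]:
    """Pins pushed down noticeably since the last frame count as user input."""
    return [i for i, (b, a) in enumerate(zip(before, after)) if b - a > threshold]

pins = PinArray()
pins.write([0.5] * 120)                    # display a flat raised surface
baseline = pins.read()
pins.heights[7] = 0.3                      # simulate a user pressing pin 7
print(detect_press(baseline, pins.read())) # -> [7]
```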

Figure. Recompose: Direct manipulation and gestural interaction with a 2.5-D shape display.

Sidebar: ZERON

ZeroN is an antigravity interaction element that can be levitated and moved freely by a computer in 3-D space, seemingly unconstrained by gravity. A ZeroN in motion can represent a sun that casts digital shadows of physical objects, or a planet orbiting according to a computer simulation. Users can place or move the ZeroN in midair 3-D space just as they place and interact with objects on surfaces. By removing gravity from tangible interaction, the ZeroN project explores how altering a fundamental rule of the physical world will transform interaction between humans and materials in the future.

Figure. ZeroN: Antigravity interaction element enabled by computer-controlled magnetic levitation.

©2012 ACM  1072-5220/12/0100  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2012 ACM, Inc.
