Peter Dalsgaard, Kim Halskov
There has recently been a dramatic increase in multitouch tabletop interfaces, but like most physical tabletops, these interfaces are flat and two-dimensional. In our research laboratory, CAVI, we have for the past four years examined the potential of extending the tabletop interface into the third dimension [2,3,4]. We have drawn inspiration from interfaces that combine tabletops with physical objects (tangibles) that function as input devices. In doing so, we have explored how tangible interface components can also become output devices by using 3D projection, also known as projection mapping.
A tangible 3D tabletop combines tangible tabletop interaction with 3D projection in such a way that any tangible on a table can be augmented with visual content corresponding to its physical shape, position, and orientation. The tangibles can simultaneously serve as input and output devices, and multiple users can collaborate by interacting with both the tangibles and the tabletop.
Here, we present three of the first examples of tangible 3D tabletops and discuss the potential and limitations of this novel type of interface.
3D Projection and Tangible 3D Tabletops
3D projection installations are based on having an accurate 3D model of the physical part of the installation. In the virtual 3D world, we can produce digital content corresponding to the shape of a physical object. By positioning and calibrating the projection system so that the relationship of the projection to the physical object corresponds to the virtual camera’s relationship to the 3D model, we can project the digital model onto the physical elements of the installation, thereby augmenting the physical object (Figure 1).
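The correspondence between the physical projector and the virtual camera can be sketched with a standard pinhole projection: a point on the 3D model of a physical object is mapped to the projector pixel that illuminates the matching point on the object. The intrinsic matrix and pose below are hypothetical calibration values for illustration, not those of our installation:

```python
import numpy as np

def project_point(point_3d, K, R, t):
    """Project a 3D point (model coordinates) into projector pixel
    coordinates using a pinhole model. K is the projector's intrinsic
    matrix; R and t describe its pose relative to the model."""
    p_cam = R @ point_3d + t          # model -> projector coordinates
    p_img = K @ p_cam                 # perspective projection
    return p_img[:2] / p_img[2]       # normalize by depth

# Hypothetical calibration: projector looking straight down at the table
# from 1 m above, with focal length 800 px and principal point (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

# A model point 0.1 m off the projector axis lands at the matching pixel.
pixel = project_point(np.array([0.1, 0.0, 0.0]), K, R, t)
```

Once the virtual camera uses the same intrinsics and pose, rendering the digital model from that camera and sending the image to the projector makes the content land precisely on the physical object.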
As shown in Figure 2, the tangible 3D tabletops developed by our research laboratory typically consist of a translucent table surface under which two cameras (1) and two projectors (2) are mounted. Above the table, two or more projectors (3) are mounted. The projectors beneath the table display content on the table surface, while the projectors mounted above the table project content onto the tangibles. The tangibles are fitted with visual markers beneath their bases, tracked by the cameras (1) and tracking software similar to that found in the well-known reacTIVision software. However, the software has been custom developed to provide increased precision and faster feedback so as to minimize pixel bleeding and interaction lag.
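The tracking step can be illustrated with a minimal sketch. Assuming the tracker reports two marker points under a tangible's base (a hypothetical output format; real fiducial trackers report richer data), the tangible's tabletop position and heading follow directly:

```python
import math

def tangible_pose(front_pt, back_pt):
    """Estimate a tangible's tabletop position and heading from two
    tracked marker points under its base (hypothetical tracker output,
    in table-surface pixel coordinates).
    Returns (center_x, center_y, heading_degrees)."""
    cx = (front_pt[0] + back_pt[0]) / 2
    cy = (front_pt[1] + back_pt[1]) / 2
    heading = math.degrees(math.atan2(front_pt[1] - back_pt[1],
                                      front_pt[0] - back_pt[0]))
    return cx, cy, heading

# A tangible whose front marker lies to the right of its back marker
# points along the positive x-axis (heading 0 degrees).
pose = tangible_pose((120.0, 80.0), (100.0, 80.0))
```

The resulting pose is what the projection side consumes: the overhead projectors re-render the tangible's texture at the new position and orientation each frame, which is why tracking precision and latency directly determine pixel bleeding and interaction lag.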
Tangible Urban Planning
Tangible Urban Planning explores how 3D tabletops can support collaborative activities in urban planning and development projects, including participatory activities [6,8]. On the tabletop surface we project a road map representing the neighborhood designated for the urban planning process. We operate with two kinds of tangibles: cuboids, which represent buildings, and cylinders, which function as controls. The cuboids are white, but when placed on the table, color and facade elements are projected onto them (Figure 3). As part of the installation, we have added an extra display that visualizes a first-person camera view into the virtual world. Cuboids can be placed on the table surface and moved around while the camera view is updated in real time on the separate display.
In addition to the cuboids, we have three types of cylinders. The first cylinder is a camera controller, which controls the position and direction of the first-person view shown on the extra display. By moving the camera controller, users can stroll through the fully rendered 3D world. The second cylinder is a color controller, which can change the color of the facade of a building when it is placed next to a building cuboid. The third cylinder is a time controller, which controls the time of day in the virtual world. The time controller also functions as a dial that controls the lighting conditions as the sun or moon moves across the virtual sky and illuminates the buildings, which in turn cast shadows. The tangibles can be employed simultaneously, which allows multiple users to collaborate on positioning buildings, try out different color variations, and see the first-person view at different times of the day.
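The dial behavior of the time controller can be sketched as a simple mapping from rotation angle to time of day and sun elevation. The one-turn-per-day mapping and the sinusoidal sun model below are illustrative assumptions, not our actual implementation:

```python
import math

def sun_from_dial(dial_angle_deg):
    """Map a time-controller dial angle to (hour_of_day, sun_elevation).
    Assumption: one full turn of the dial equals one 24-hour day, and
    the sun follows a simple sinusoidal arc rising at 6:00, peaking at
    noon, and setting at 18:00 (negative elevation means night)."""
    hour = (dial_angle_deg % 360.0) / 360.0 * 24.0
    elevation = math.sin((hour - 6.0) / 12.0 * math.pi) * 90.0
    return hour, elevation

hour, elevation = sun_from_dial(180.0)   # half a turn of the dial
```

In the installation, the resulting sun position drives the lighting of the building cuboids and the shadows they cast, so rotating the dial immediately changes what every other user sees on the table.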
In the case of Tangible Urban Planning, 3D projection was employed to visualize and explore large-scale urban planning. In a supplementary study, we developed the Tangible Blueprint installation to further examine how this form of interface could support architectural planning on a smaller scale. A blueprint of a planned building is displayed on the tabletop, while a rectangular tangible perpendicular to the tabletop offers a 3D view into the building. This enables users to experience the 3D model of the building, complete with surface textures and lighting, in real time (Figure 4). In contrast to the camera view in Tangible Urban Planning, which was displayed on an external display, the view into the 3D world here is displayed directly on the tangible. When the tangible is moved, the perspective changes correspondingly. The tangible thus acts as both an input device (controlling the camera view) and an output device (showing the viewpoint in the 3D world). This system also supports multi-user interaction. By placing multiple tangibles on the table, multiple users can explore different portions of the building at the same time, and different tangibles can represent different aspects, such as daytime and nighttime views and X-ray views of wiring and tubes hidden in the walls.
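The coupling between a tracked tangible and the virtual camera it controls can be sketched as follows. The coordinate scale, eye height, and pose format are hypothetical; the point is only that the camera pose is a direct function of the tangible's tracked pose, which is what makes the tangible feel like a window into the model:

```python
import math

def camera_from_tangible(x, y, heading_deg, eye_height=1.6):
    """Build a first-person camera pose in the virtual building from a
    tangible's tracked tabletop pose. Assumes table coordinates map 1:1
    to virtual-world meters and a fixed eye height of 1.6 m."""
    eye = (x, y, eye_height)
    look_at = (x + math.cos(math.radians(heading_deg)),
               y + math.sin(math.radians(heading_deg)),
               eye_height)
    return eye, look_at

# A tangible at (2, 3) facing along the x-axis looks one unit ahead.
eye, look_at = camera_from_tangible(2.0, 3.0, 0.0)
```

Rendering the scene from this camera and projecting the image onto the face of the tangible produces the lens effect described above: moving or rotating the tangible updates both inputs, so the view tracks the physical object in real time.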
Projected Play was custom-developed for the Lego World 2013 event. It was designed to showcase Lego in a multi-user walk-up-and-use installation, and was the first example of a tangible 3D tabletop installation put to use outside research labs. The installation made use of two forms of tangibles, cubes and stylized buildings, both constructed from Lego bricks (Figure 5). Users could fill the cubes with colors, then paint the buildings and push and rearrange virtual bricks displayed on the table. In addition, a couple of "Easter eggs" were implemented for special events; for example, when a building was fully painted in different colors, it emitted a strong pulse that repelled the surrounding virtual bricks. The novelty of 3D projection itself was intended to pique the curiosity of onlookers and users and invite them to explore the relations between tangibles and tabletop content.
The most intriguing potential of tangible 3D tabletops from an interaction research perspective is arguably that all physical objects in the system can become interface components, and that they can simultaneously be input and output devices. A tangible 3D tabletop can be characterized as a particular type of augmented reality. However, it offers an advantage over most other augmented reality systems in that digital content is projected immediately onto the physical objects, rather than requiring a mediating device such as a mobile phone or a head-mounted display to add extra layers of information to the physical surroundings. By nature it is also a less flexible interface; first, because it requires a tabletop and a set of carefully calibrated projectors; second, because all the physical objects must be modeled in the system before they can be employed. For example, in the case of Projected Play, users could not reconfigure the Lego buildings by adding or removing bricks, since this would create an incongruence between the virtual and physical models and cause the projections to “spill over.”
While a tangible 3D tabletop does not have to be a multi-user installation, it is straightforward to develop it so that it can be; the three installations presented here all support multiple users. This leads to the question of how the interface supports and hinders certain types of collaboration. While users may interact individually with Tangible Urban Planning, for instance, all operations are made visible to other users, and many of the potential interactions with the tangibles affect the rest of the system. It can be seen as an inherently collaborative installation, in that every action potentially affects everyone else using it. For instance, if one user is manipulating the position of the sun and time of day, it will affect other users who are simultaneously exploring the position and appearance of buildings. While this may be preferable in some situations—for example, when a group of citizens discuss the systemic effects of urban planning initiatives with architects and policy makers at a public hearing—it will hinder other types of work—for example, when architects jointly discuss a master plan and subsequently work individually on specific components of the plan.
This leads us to consider the type and complexity of information for which this interface is suited. Broadly speaking, single-user devices are at present better suited for handling complex content and interaction, whereas shared devices often present simpler or more abstract content and allow for more basic forms of interaction and manipulation. For example, when it comes to image manipulation, single-user software such as Photoshop is a highly sophisticated tool that enables expert users to carry out complex image manipulation, whereas large-scale multi-user installations typically support only very basic operations, such as resizing and adding predefined filters. While the three installations presented here are quite simple, more intricate controls and interdependencies could be implemented in the system to support complex multi-user interaction and content manipulation.
By extension, an issue for developing more complex installations is the balance between scripted and idiosyncratic practices and interfaces. By this we refer to the dilemma of supporting both shared and mutually accepted practices and interfaces, which are often required for collaborative work to function in practice, and the particular ways of working that competent practitioners develop over the course of time and the ways in which they modify or develop tools to support this work. As we develop more complex multi-user installations, we find it crucial to examine if and how systems can offer shared structures and modes of interaction that support joint work, but which also enable individual users to tailor parts of the system to support specific individual actions and preferences.
The setup currently required for a tangible 3D tabletop is arguably best suited for custom use cases. In addition to the ones presented here, we have developed concepts for how the system can be used for architectural renderings and presentations; for exploring maps and interrelations between different forms of geo-localized data; and for playful installations and games, for example, board games and puzzles. In our tests thus far, users have offered very positive feedback on tangibles that function as lenses or filters, offering additional information or views (for example, the tangible in the Tangible Blueprint installation, which offers a 3D view into the blueprint) and inviting them to further explore the content with an inquisitive mind.
In addition to developing custom installations to examine the potential of the interface in other contexts, we intend to develop a more generic setup to support experiments into novel interaction forms. With a standard selection of tangibles and a library of effects and simple interactions, the system can function as an environment for building and testing prototypes, which could be especially useful when designers explore interactions between multiple devices and displays. In the three cases presented here, we have emphasized the use of different physical shapes to show the potential of the system, but we have not used the simplest form, namely flat objects, which can just as easily be tracked and augmented with content. For instance, pieces of plain paper can be augmented and function as lenses or additional sources of information, provided they have a marker for the system to track. A further step we are exploring is to connect the system with mobile devices that people bring with them. Doing so means that we can move toward systems that afford both collaborative actions on shared content on the tabletop and tangibles, as well as individual and more complex interactions on tablets and mobile devices.
5. Jordà, S., Geiger, G., Alonso, M., and Kaltenbrunner, M. The reacTable: Exploring the synergy between live music performance and tabletop tangible interfaces. Proc. of TEI 2007. ACM, 2007, 139–146.
10. The presentation of Projected Play is based on an excerpt from .
Peter Dalsgaard is an associate professor of interaction design at Aarhus University. His work combines theoretical approaches and practice-based projects to explore creativity and innovation in design processes and the design of tools and spaces that meaningfully blend digital and physical properties. firstname.lastname@example.org
Kim Halskov is a professor in interaction design at Aarhus University, where, in addition to being director of the Centre for Advanced Visualization and Interaction (see CAVI.au.dk), he is also co-director of the Centre for Participatory IT (see PIT.au.dk). His research areas include innovation processes, design processes, and experience design. email@example.com
©2014 ACM 1072-5220/14/09 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.