Michael Cowling, Joshua Tanenbaum, James Birt, Karen Tanenbaum
There are two competing narratives for the future of computationally augmented spaces. On the one hand, we have the Internet of Things, where the narrative is one of making our environments more aware of us and of themselves, and generally making everything “smarter” through embedded computation, sensing, and actuation. On the other hand, we have current approaches to augmented or mixed reality, in which the space remains unchanged and instead we hack our perception of the space by superimposing a layer of media between us and the world [2,3]. In this article we present examples of three projects that seek to merge these two approaches by creating and fabricating playful material elements that can be integrated with camera-based AR systems but that are independently meaningful objects in their own right. We argue that this new wave of physically grounded AR technologies constitutes the first steps toward a hybridized digital/physical future that can transform our world.
The technology space of augmented reality (AR), sometimes characterized using the more general term mixed reality, is one of the more exciting frontiers to emerge from HCI research in recent years. However, like many frontiers, it is chaotic, overhyped, and misunderstood. In the absence of existing best practices to guide development of this new design space, a number of competing visions have taken hold, each of which seeks to colonize the future of our digitally augmented world. Microsoft’s version of this future is embodied by the HoloLens, an untethered standalone visor running Windows 10 that superimposes translucent “holographic” overlays into the center of a user’s field of view. HoloLens feels like the big brother to Google’s Glass project, which traded computing power and graphical fidelity for mobility. Glass was designed to be worn in daily life, a decision that didn’t factor in the social consequences of wearing a highly visible digital surveillance platform in public spaces, and significant pushback from the general public resulted. HoloLens, in contrast, is not being marketed as a mobile device: Microsoft’s vision of AR, currently confined to the home and office, has not aroused the same privacy concerns that plagued Glass. Google has since moved on to Tango, a project that removes the AR from the glasses and instead presents a depth-sensing camera attached to a traditional mobile phone or tablet. Despite their differences, however, HoloLens, Glass, and Tango all share a commitment to a future in which our physical surroundings are enhanced or overlaid with digital visualizations. On its own, the physical world has no augmentation at all.
A parallel track of research on computationally augmented environments is the Internet of Things (IoT). The IoT focuses on enhancing the physical world with technology, allowing objects and systems to communicate with one another and with user-accessible display terminals. The focus of the IoT is typically on tagging, tracking, and automating everyday objects and processes. The idea of the IoT has remained fundamentally connected to infrastructures for sensing, automating, and data processing. Most future visions of the IoT imagine a world where every physical object (and all of its associated data) has a digital presence and can connect with other objects. However, this vision often does not contain details on how people actually understand and use the augmented objects, or how to incorporate the meaning of things into the Internet of Things. This is best exemplified by products like Belkin’s WeMo line of home automation devices that include so-called smart power outlets, smart light bulbs, smart coffee makers, smart humidifiers, smart home surveillance security cameras, and many more smart devices for your hyper-intelligent home. This vision of the smart home that IoT applications enable hasn’t evolved much from the 1950s vision of the future of automation, domestic robots, and technological convenience for the modern home.
We believe the ideal path forward involves welding these two approaches together, providing an augmentation of the world where physical objects have values and meanings of their own and can be augmented using IoT or AR to add additional meaning and value through computation. Paul Milgram and Fumio Kishino described a model for this approach as a “reality-virtuality continuum.” Their continuum starts with the real environment on the left-hand side before moving through augmented reality and augmented virtuality to arrive at a purely virtual environment on the right-hand side, with the portion in the middle of the spectrum referred to as mixed reality. Several of the projects we have worked on in our labs fit into this space. These projects use small-scale fabrication technology such as laser cutting and 3D printing, as well as printed fiducial markers and radio frequency ID (RFID) tags, to materialize digital designs in the physical world, further blurring the line between the virtual and the real.
Phylactery is—among other things—a system that critiques the rhetorics of convenience, domestic automation, and utilitarianism that dominate current visions of the IoT. Phylactery asks: What would the world look like if the IoT wasn’t about rendering our physical world tractable to computational systems and was instead about the preservation of the unique personal meanings that accumulate around our material objects? We designed Phylactery to explore the possibility space for an Internet of Meaningful Things. It is technologically simple: A laser-cut wooden altar contains an RFID reader, a Raspberry Pi, and a pair of speakers. A connected microphone is activated whenever a tagged object is placed on the altar, allowing a user to narrate a memory or story about the object. Subsequent interactions allow the user to replay the stories associated with objects.
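The interaction logic at Phylactery’s core is simple enough to sketch. The following Python sketch is a hypothetical reconstruction based on the description above, not the project’s actual code; the hardware I/O for the RFID reader, microphone, and speakers is abstracted behind callables, and all names are our own:

```python
class Phylactery:
    """Sketch of the altar's record-or-replay behavior, keyed on RFID tags.

    record_audio and play_audio stand in for the real microphone and
    speaker I/O running on the Raspberry Pi.
    """

    def __init__(self, record_audio, play_audio):
        self.record_audio = record_audio  # () -> audio clip (e.g., bytes)
        self.play_audio = play_audio      # clip -> None
        self.stories = {}                 # RFID tag ID -> recorded story

    def on_object_placed(self, tag_id):
        """Called whenever a tagged object is set on the altar."""
        if tag_id not in self.stories:
            # First placement: activate the microphone so the user can
            # narrate a memory or story about the object.
            self.stories[tag_id] = self.record_audio()
            return "recorded"
        # Subsequent placements: replay the story bound to this object.
        self.play_audio(self.stories[tag_id])
        return "replayed"
```

The essential design point is that the tag ID, not any property of the audio, binds the story to the physical object: the object itself becomes the index into its own archive of meaning.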
Terraform is a civilization-building strategy game that uses 3D-printed objects, the printer bed, and AR to explore the playful use of personal fabrication technology (Figure 2). In Terraform, the player takes on the role of an AI system, offering advice and guidance to a collection of simulated colonists on a distant planet. Using a tablet PC, the player selects the priorities the colonists should address, and in response the game produces “facilities” using the 3D printer as a “construction bay.” Each facility includes a 3D-printed fiducial marker as its base, so it can be tracked, registered, and augmented by an AR system incorporated into the game engine. This allows us to combine game-state information, interface elements, and special effects with the physical printed objects.
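The marker-based registration step can be sketched as follows. This is a minimal illustration under assumed names, not Terraform’s implementation: in the real game an AR toolkit inside the game engine reports detections from the camera feed, and the marker-to-facility mapping and pose format here are invented for the example:

```python
# Assumed mapping from 3D-printed fiducial marker IDs to facility types.
FACILITY_BY_MARKER = {7: "habitat", 12: "solar array"}

def register_frame(detections, world):
    """Update game state from one camera frame.

    detections: iterable of (marker_id, pose) pairs from the AR tracker,
        where pose is whatever transform the tracker reports
        (e.g., a 4x4 camera-to-marker matrix).
    world: dict mapping facility name -> last known pose, mutated in place.
    """
    for marker_id, pose in detections:
        facility = FACILITY_BY_MARKER.get(marker_id)
        if facility is not None:
            # Anchor this facility's overlay (interface elements, special
            # effects) at the physical printed object's current pose.
            world[facility] = pose
    return world
```

Because each facility carries its fiducial marker in its printed base, the game can re-register a facility wherever the player physically moves it, which is what lets game-state visuals stay attached to the printed objects.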
In Laryngoscopy AR, distance-education paramedic students use an augmented-reality app combined with 3D-printed instruments to practice foreign-body removal with a laryngoscope and forceps prior to attending the compulsory residential school. The project stems from the need to give distance students more opportunities to practice skills that otherwise can be practiced only in a five-day, hands-on residential school. Students were provided with traditional 2D images and 3D-printed instruments, a mobile phone with an AR/VR simulation application, and a tutorial video to practice with prior to the residential session (Figure 3). To aid immersion and accuracy, the 3D-printed instruments were 1:1-scale replicas of the actual physical tools, which could be tracked and simulated virtually. The aim of the simulation is to follow the steps required to insert the laryngoscope correctly and then use the forceps to remove a foreign body lodged in the patient’s throat, with cues provided during the simulation to indicate whether the procedure has been successful.
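The cue logic for such a stepwise procedure can be sketched as an ordered-steps checker. The step names and cue strings below are hypothetical stand-ins for illustration, not the app’s actual procedure model:

```python
# Assumed ordering of the simulated procedure's steps.
PROCEDURE = [
    "insert laryngoscope",
    "visualize airway",
    "grasp foreign body",
    "remove foreign body",
]

def check_step(completed, attempted):
    """Return a cue for the step the student just attempted.

    completed: list of steps done so far, in order (mutated on success).
    attempted: the step the student is performing now.
    """
    expected = PROCEDURE[len(completed)] if len(completed) < len(PROCEDURE) else None
    if attempted != expected:
        return "retry"  # cue: wrong step at this point in the procedure
    completed.append(attempted)
    return "success" if completed == PROCEDURE else "continue"
```

A checker like this is what lets the simulation give immediate feedback at each step rather than only grading the final outcome.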
There are several themes that cut across these three examples. First, they all rely upon the current generation of fabrication technology to create material components for digital/physical hybrid systems that are specifically designed to be meaningful in the absence of AR information while affording easy digital augmentation. The materiality that they participate in is significant even without an AR overlay, but part of this significance is the extent to which they are clearly intended to be digitally augmented. These objects exist as points on a continuum that cycles between the physical and digital representations. Each prototype described here started life as a digital model, and so even after being fabricated it remains legible to digital systems as an instantiation of a set of digital instructions. By existing in both physical and computational spaces, these systems highlight the ways in which the physical world is already a substrate for information manipulation, and the ways in which AR technology can benefit from being situated in a context that is designed to be computationally tractable.
1. Gubbi, J., Buyya, R., Marusic, S., and Palaniswami, M. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems 29, 7 (2013), 1645–1660; https://doi.org/10.1016/j.future.2013.01.010
2. Dunleavy, M. and Dede, C. Augmented reality teaching and learning. In Handbook of Research on Educational Communications and Technology. J. Michael Spector, M. David Merrill, Jan Elen, and M.J. Bishop, eds. Springer, New York, 2014, 735–745; http://link.springer.com/chapter/10.1007/978-1-4614-3185-5_59
Michael Cowling is an information technologist with a keen interest in educational technology and technology in a digital age. He is currently a senior lecturer in the School of Engineering & Technology at CQUniversity in Australia. His work focuses on how technology can be used to enhance education and skills development. firstname.lastname@example.org
Joshua Tanenbaum is an assistant professor in the Department of Informatics at UC Irvine, where he runs the Transformative Play Lab. His work spans game studies and human-computer interaction, with an emphasis on creating “transformative experiences” and exploring possible futures for technology through the creation of hybrid digital/physical systems. email@example.com
James Birt is an assistant professor in the Faculty of Society and Design at Bond University, where he runs the Mixed Reality Research Lab. His research spans computer science and visual arts, with an emphasis on applied design and development of interactive mixed-reality experiences assisting discovery and learning. firstname.lastname@example.org
Karen Tanenbaum is a project scientist in UC Irvine’s Informatics Department in the Transformative Play Lab. Her research explores tangible and ubiquitous computing paradigms and the application of AI techniques to interactive storytelling and game design. She also studies Maker/DIY and Steampunk cultures, particularly their role in STEM education. email@example.com
©2017 ACM 1072-5520/17/01 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.