Cover story

XX.1 January + February 2013

Media studies, mobile augmented reality, and interaction design


Authors:
Jay Bolter, Maria Engberg, Blair MacIntyre


You are walking in the Sweetwater Creek State Park near Atlanta and using the Augmented Reality (AR) Trail Guide, a mobile application designed by Isaac Kulka for the Argon Browser (Figure 1). The application offers two views: a now familiar Google-style map, with points of interest marked on its surface, and an AR view, which shows these points located in space. You see the map view when you hold the screen parallel to the ground; when you turn the phone up to look at the world, you get the AR view with the points of interest floating in space in front of you. This simple gesture of raising the phone changes your relationship to the information. You pass from a fully symbolic form of representation to a form of perceiving symbolic information as part of your visual environment.
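
The Trail Guide itself runs in the Argon browser; purely as an illustration of how such an orientation-driven mode switch can work, here is a minimal TypeScript sketch using the standard Web DeviceOrientation API. The 45-degree threshold and the showMapView/showARView callbacks are assumptions made for the example, not details of the Trail Guide or of Argon.

```typescript
// Sketch of an orientation-driven view switch (illustrative only).
// Assumes a web AR context with access to the DeviceOrientation API.

type ViewMode = "map" | "ar";
let currentMode: ViewMode = "map";

const PITCH_THRESHOLD_DEG = 45; // below: device roughly flat (map); above: raised (AR)

function showMapView(): void { /* render the 2-D map with points of interest */ }
function showARView(): void { /* render points of interest over the camera's video stream */ }

window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  // beta is the front-to-back tilt in degrees: ~0 when the device lies flat,
  // ~90 when it is held upright, facing the world.
  const pitch = event.beta ?? 0;
  const nextMode: ViewMode = pitch > PITCH_THRESHOLD_DEG ? "ar" : "map";

  if (nextMode !== currentMode) {
    currentMode = nextMode;
    if (currentMode === "ar") {
      showARView();
    } else {
      showMapView();
    }
  }
});
```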

The AR Trail Guide, developed in the Augmented Environments Lab at Georgia Tech [1], illustrates a new realm in AR design that goes beyond current commercial applications. In this article, we discuss some of these new areas, such as designing for experiences in cultural heritage, personal expression, and entertainment. At the same time, we want to address a larger issue. ACM interactions has often been a place for exploring new paradigms and the relevance for interaction design of unusual approaches from other disciplines. In that spirit, we pose the question: Can the humanistic discipline of media studies play a useful role in interaction design? Media studies looks at the history of media and their relationship to culture, and we will focus here on digital media and their relationship to other media, both present and past. Looking at digital media in a historical context is relevant because of the dynamic relationship between "traditional" media (film, television, radio, print) and their digital remediations. How can media studies be made to contribute to the productive work of interaction design? We believe one answer lies in using the historical understanding gained through media studies to develop a kind of media aesthetics that can guide designers as they explore new forms of digital media such as the mobile augmented reality application described above.

Media Studies and Aesthetic Design

Media studies is a historical, humanities-based approach to understanding the role of media in our culture. Pioneers include the German scholar Walter Benjamin in the 1930s and the Canadian Harold Innis in the late 1940s and early 1950s. Influenced by Innis, Marshall McLuhan proposed a comprehensive theory in Understanding Media [2]. For McLuhan, media have always played a central role in determining what culture is and how it changes. Wired magazine named McLuhan as its patron saint in its debut issue in 1993, and his work remains influential among academic and popular writers on digital media today. Among those who advocate a "medium-specific" approach are Lev Manovich, with his "software studies," and Ian Bogost and Nick Montfort, with "platform studies" [3]. Their work is original and compelling, but it cannot be directly applied to improving interaction design.

The reason is that these approaches look at media with a view to analysis and critique rather than creative production. For decades, if not centuries, the task of humanists has been to explain certain cultural artifacts: first literature, then art and music, and much more recently, film and other forms. The task of humanists has not been to make such artifacts or to provide explanations that would help others make or improve them. Art historians don't write with a view to helping young artists produce new work; scholars of literature do not provide guidelines for novelists. In fact, scholars often disparage those who do write practical manuals for novel writing, painting, or script writing.

Most of those engaged in media studies approach digital media from this analytical perspective. The relationship between theory and practice is very different in HCI, where the goal is to apply a theoretical framework—from cognitive science, cognitive psychology, and other disciplines—in order to produce ideas and sometimes even specific guidelines for practitioners. Among the various approaches to interaction design, critical design, and experience design, the focus is on a close and productive relationship between theory and practice. We can see that everywhere we look in contemporary design literature, even in the recent interactions cover article, "Annotated Portfolios," in which Bill Gaver and John Bowers reverse the relationship to show that the design practice of annotated portfolios can constitute a kind of theoretical research [4].

Is it possible to reframe media studies to make it a productive theory, a theory that can be applied to practice? This is almost the same as asking whether history can be of any use—whether an understanding of media history can suggest to us better approaches to designing artifacts today. We believe that media studies can be productive, although not directly by providing design rules or guidelines. Instead, the study of media history and the computer's place in that history can offer a vocabulary that helps designers reflect creatively on classes of problems and their solutions.

One path for developing such a vocabulary is provided by the notion of aesthetics. As a branch of philosophy and art history that focuses on what makes things beautiful, aesthetics is centuries old. But in the past decade or so, there has been considerable research into affect or emotion in the design and HCI community, with widespread interest raised by Don Norman's "Emotion and Design," published in interactions in 2002 [5]. This research often focuses on aesthetics as the emotional response to something attractive or beautiful. However, Peter Wright and John McCarthy, among others, have been expanding the definition of the aesthetic in design to include a range of emotional, affective, and even tangible relationships with technology [6]. This research is giving us a vocabulary for thinking and talking about designs that includes such subjective terms as affect, empathy, and enchantment.

Aesthetics can have an even broader meaning, as it did among 18th-century philosophers, including Immanuel Kant. Based on the Greek word for perception (aisthesis), aesthetics can be defined as the study of our perception of our whole environment, not just objects of beauty. Media aesthetics, then, can focus on how we perceive the world in and through new technologies and new forms of media: for example, how we see the world through photographs or even through forms such as the novel or the Hollywood film. Defined in this way, aesthetics has in fact been part of media studies for decades, dating back to Walter Benjamin, who in his famous essay "The Work of Art in the Age of Mechanical Reproduction" argued that photography and film were changing the way 20th-century culture perceived not just art, but also the world at large [7]. McLuhan later expanded and popularized this idea when he called media "extensions of man." McLuhan claimed that print technology from the Renaissance to the 20th century had led people both to perceive and to categorize the world in certain ways. So-called typographic man categorized his world in a fashion that was linear, sequential, and analytic; now, McLuhan argued, television was leading to a fundamental change in our perceptual systems and ultimately to a new kind of "electronic man." Many digital writers have followed McLuhan in arguing that digital technologies remake our processes of perceiving and thinking. They have developed a digital media aesthetics (although they do not necessarily use that term).

McLuhan's idea is compelling, but media aesthetics is not as simple and singular as McLuhan suggests. It is not simply that technology changes and extends our perceptual systems, because we are not passive in this process. As individuals and as a whole culture, we create new technological forms and designs that define new relationships between us and our environment. There is a feedback loop in which our view of the world changes our designs, and our use of new artifacts and designs changes how we perceive the world. If we take a historical view, we can see these feedback processes at work. Media studies can then contribute to aesthetic design, which we can define as the practice of reconfiguring the way the user perceives her environment through technology.

We are concerned here with a specific technology and one specific path through media history: augmented and mixed reality (AR and MR) for mobile devices. Designers of mobile AR and MR have the opportunity to redefine the relationship between information and our physical and built environments. The AR Trail Guide is an example. As noted earlier, in interacting with the Guide, the user passes from one mode of perceiving to another—from a map to an AR display. A map is a symbolic representation of a portion of the world, and it gives the user a particular view. We could call this a cartographic view of space, which is abstract, regularized, and measurable. Reading a map requires us to decipher the symbols and relate them to our position in the world, and, indeed, some people find it difficult to relate their physical orientation and location to the abstract representation of the map. The AR display, on the other hand, is predominantly visual and immediate: The only symbolic information is the AR markers that float in the user's field of view. By combining the symbolic and the visual/immediate in a particular way, this application offers the user a different aesthetic experience—a different understanding of the park as a place where symbolic information from the Internet can be located and acted upon.

AR Browsers

Smartphones and tablets constitute a complex and still relatively unexplored design environment. They present designers with the opportunity to construct both new applications and a new set of media forms and genres [8]. Many mobile applications, including most games and productivity apps, are redesigned versions of laptop programs or console games that take place entirely on the opaque screen of the phone. Even these applications, however, configure the user for a somewhat different relationship to information—for one thing, because of the ease of access to information. A user can now, for example, carry her email around on her phone and respond almost as quickly as she can to an SMS. At the same time, the relatively greater difficulty in typing on the phone may result in emails that are terser, more like SMS in style. Such changes, however, are relatively minor in comparison with the more profound refashioning of our relationship to digital information that is possible with locative AR and MR applications. Mapping applications are among the most compelling current examples of this refashioning. Through the phone's connection to GPS, the map can become a record of our changing locations. Our aesthetic relationship to the phone changes because we can see our own location recorded on the map. The user sees herself reflected as an element (often as a pulsing and moving dot) in the map's symbolic information. This is like the familiar YOU ARE HERE dot on orientation maps in malls and parks, but because the smartphone map is both dynamic and personalized, the user feels more a part of the map itself. This sense of being part of the map is one new media aesthetic that is fostered by mobile applications.


A new genre of mobile applications, the AR browser (such as Layar, Aurasma, Junaio, and the AEL's own Argon), provides another—an aesthetic that is in some ways the opposite of seeing ourselves inserted into a screen-based map. In AR browsers we can see symbolic data positioned against the video stream provided by the phone's camera, so that symbolic information enters into our visible world. Again, the AR Trail Guide illustrates the shift. When the user flips the iPad from horizontal to upright, she lifts the symbolic information off the map and locates it in her visual environment. Commercial applications, such as location-aware advertising and AR games, seem likely to dominate the use of this technology. However, the new aesthetics of AR browsers is also suited to tours and cultural heritage experiences, as well as forms of location-based art.

Sets of common design features for these applications in AR browsers are already emerging, although they do not yet constitute a full-fledged design language.

For commercial applications, common features include:

  • text or images floating in the user's field of view
  • clickable elements to reveal more information or link to the WWW
  • QR codes and logo recognition
  • location-controlled delivery of information (e.g., coupons when you pass a store).

For games (where the interface is more varied, depending on the kind of game and its game mechanics):

  • the phone itself as the game instrument
  • markers to indicate physical surfaces for game play
  • 3-D images attached to physical surfaces.

For tours and cultural heritage experiences:

  • text, images, and clickable information as in the commercial applications
  • arrows and other guiding mechanisms located in the field of view
  • location-sensitive actions
  • content that emphasizes the "aura" (the sense of authenticity) of the place.

One element that is becoming pervasive in all mobile applications, including AR/MR ones, is the desire to connect to the ubiquitous social network site Facebook, to Twitter, and to image-aggregation sites.

Mobile AR/MR is linked to earlier media and media forms, and understanding its place among its predecessors can help us design more effectively for this new form. One reason is that users understand newer media by analogy with older ones [9]. The designers of AR/MR apps are also influenced by media forms in the current media environment. For these reasons, looking at the history and contemporary condition of media culture can provide us with useful analogies and contrasts so that we may understand what is new in the aesthetics of locative AR. Clearly, mobile AR/MR combines a set of technologies and constitutes a large design space. We could compare today's smartphones and tablets with a number of earlier media (including the telephone, the camera, radio, television, the printed book, and so on). We will focus here on just one comparison, with a media form whose long history is not well known to most designers today: the panorama.

Panoramas

Smartphones and tablets offer users new opportunities both to create and to view panoramas, and in the process they are giving new meaning to this visual form. Microsoft's Photosynth and other panorama apps (like the panorama feature on many digital cameras) enable users to make simple panoramas by rotating the phone to take multiple images, which are then stitched and transformed into a panoramic projection. The experience of creating a panorama is similar to that of taking other kinds of photographic images with the phone's camera. But the experience of viewing a panorama is particular to mobile devices, as an app called TourWrist (tourwrist.com) illustrates. TourWrist invites uploads both from professional photographers, who produce seamless panoramas, and from DIY amateurs, whose panoramas are often flawed and incomplete but reflect their immediate activities and interests. The name TourWrist is too cute, perhaps, but it is still descriptive, because with such a panoramic application, the user does use her wrist (or arms) to explore the image space. Viewing requires physical engagement, as she looks into the screen while she rotates the phone around her. The panorama appears to surround her.

For TourWrist and similar apps, the act of viewing is more or less the whole experience. But an AR browser such as Argon allows a designer to create a more elaborate experience that takes place for the user within such panoramas. The panoramas can provide the visual backdrops and contexts for images, text, and video. Added audio can create an acoustic space or provide information to bring the panorama to life. If the user is going to tour a college campus or visit a cultural heritage site, a panorama can preview the experience for her before she arrives. Our "Voices of Oakland" prototype is meant to serve this purpose. The experience is designed to take place in the Oakland Cemetery in Atlanta, where the visitor is directed to three graves and can hear the voice of the person buried in each grave, describing his or her life. The prototype has four panoramas (including a starting point), so that a user not in the cemetery can "visit" each site in turn and browse the panorama while hearing the audio. In this case, the panorama functions as a virtual reality. The user is asked to ignore her current real location and pretend that the panorama is in effect live video.

But there are other possible uses for panoramas in cultural heritage. A panorama can transport the user back in time. What did Atlanta look like in 1864, after it was burned by Sherman's troops? What did the Cathedral of St. Etienne in Metz, France, look like in 1207? The problem, of course, is that we do not have and cannot create a photographic 360-degree panorama of a medieval French cathedral. So our prototype, the "Lights of St. Etienne," used a panorama rendered from a 3-D model as a substitute (Figure 2). The cathedral was enlarged in the 14th century and incorporated another church (a so-called collegiate church, Notre Dame la Ronde) that had been built right on its east portal. With Argon, the visitor can stand in the section of the cathedral that was once the earlier church, and by choosing from a series of dated buttons, she can see a 3-D-graphic panorama that replicates the structure of the building at various periods. For the 12th century, the panorama shows her the old church walls; for the 14th century, the walls are extended to reveal the new combined structure.

Panoramas can also be deployed in museums to bring the outside world into the space of the museum. Museums often exhibit objects that have been taken from their original larger context: archaeological fragments or pieces of larger art installations. Curators may present photographs or architectural drawings to try to give visitors a sense of the original whole. An AR panorama could accomplish this in a different way. The visitor could experience the object in its museum setting and then access a panorama on her phone to see the object in its original context. If the context is available to be photographed, she could see a photographic panorama. Otherwise, a 3-D model or even a hand-drawn sketch could be employed. To date, the most ambitious attempt at incorporating panoramas to create a cultural experience at a distance is the Google Art Project. More than 150 museums and galleries collaborate with Google to make art collections available online in high-resolution images. One part of the project offers "Street View for museums," in which the user can explore the inside of featured museums in the now familiar 360-degree horizontal and 180-degree vertical panoramas of Google Street View. The Google Art Project inverts the idea of bringing the outside world into the museum; instead, it brings the museum to the viewer wherever she happens to be.

Panoramas in AR browsers are typically deployed as a "skybox," a cube that surrounds and encloses the user's point of view (Figures 3a and 3b); the technique is also used in videogames to create the sense of an expansive environment. To construct the skybox, an image is mapped onto the inner surface of each of its six faces; the edges of each image are distorted in such a way that the viewer cannot see the seams. The source panoramic image needs to be a two-dimensional projection of a sphere, typically an equirectangular projection.

Such a projection can be generated by Microsoft Photosynth or by a variety of stitching programs. But this method depends on having current technology to make the master panoramic image. Photographers in the 19th and 20th centuries produced a variety of long-form "panoramic" images that do not contain all the information of an equirectangular projection. Nevertheless, it is sometimes possible to convert these images into a suitable (although radically foreshortened) partial panorama, and these forms can be used for historical experiences. Because they are black and white and grainy, they are convincingly of the period and can evoke the same sense of authenticity or aura as historic daguerreotypes and other early photographs [10].
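
To make the geometry concrete, the following TypeScript sketch shows the sampling step that produces one face of such a skybox from an equirectangular panorama. It is an illustration only, not code from Argon or from any particular stitching tool: the pixel-buffer interface, the nearest-neighbor sampling, and the restriction to the front face are simplifying assumptions, and the remaining five faces differ only in their basis vectors.

```typescript
// Sketch: sample one cube face of a skybox from an equirectangular panorama.
// Assumes 8-bit RGBA pixel buffers (e.g., canvas ImageData) and uses
// nearest-neighbor sampling for brevity.

interface Pixels { width: number; height: number; data: Uint8ClampedArray; }

function renderFrontFace(pano: Pixels, faceSize: number): Pixels {
  const out = new Uint8ClampedArray(faceSize * faceSize * 4);

  for (let y = 0; y < faceSize; y++) {
    for (let x = 0; x < faceSize; x++) {
      // Map the face pixel to [-1, 1] and build a view ray through it.
      // For the front (+Z) face: right = +X, up = +Y, forward = +Z.
      const u = (2 * (x + 0.5)) / faceSize - 1;
      const v = 1 - (2 * (y + 0.5)) / faceSize;
      const len = Math.hypot(u, v, 1);
      const dx = u / len, dy = v / len, dz = 1 / len;

      // Convert the ray to longitude/latitude, then to equirectangular coordinates.
      const lon = Math.atan2(dx, dz); // -PI .. PI around the vertical axis
      const lat = Math.asin(dy);      // -PI/2 .. PI/2 above or below the horizon
      const px = Math.min(pano.width - 1, Math.floor((lon / (2 * Math.PI) + 0.5) * pano.width));
      const py = Math.min(pano.height - 1, Math.floor((0.5 - lat / Math.PI) * pano.height));

      // Copy the RGBA sample into the output face.
      const src = (py * pano.width + px) * 4;
      const dst = (y * faceSize + x) * 4;
      out.set(pano.data.subarray(src, src + 4), dst);
    }
  }
  return { width: faceSize, height: faceSize, data: out };
}
```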

Panoramas are part of a larger genre of mobile experiences that combine visual realities, present and past, live and recorded. Although we cannot have access to complete panoramic projections from the past, we do often have photographs that can be inserted in appropriate places against the video background provided by the phone's camera. And this juxtaposition can be effective in helping the user to contrast the past and the present, as a number of applications illustrate, such as HistoryPin (www.historypin.com) and WhatWasThere (whatwasthere.com). With these apps, the user can align the historical photograph with the video scene that she sees in her phone. She may then use a slider to make the historical image more or less opaque. This is a striking way to understand historical change—a way to see the past in the present.
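
The compositing behind that slider is straightforward. Here is a hedged TypeScript sketch, not taken from HistoryPin or WhatWasThere: it assumes the camera feed is playing in a video element, the historical photograph has already been aligned to the scene, and the element IDs are placeholders.

```typescript
// Sketch of the opacity-slider overlay: draw the live camera frame, then the
// aligned historical photograph on top, with alpha taken from a range input.
// Element IDs are hypothetical; alignment (position and scale) is assumed done.

const video = document.getElementById("camera") as HTMLVideoElement;
const photo = document.getElementById("historical-photo") as HTMLImageElement;
const slider = document.getElementById("opacity") as HTMLInputElement; // range 0..1
const canvas = document.getElementById("view") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function drawFrame(): void {
  ctx.globalAlpha = 1;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // the present
  ctx.globalAlpha = Number(slider.value);                  // the slider sets the past's opacity
  ctx.drawImage(photo, 0, 0, canvas.width, canvas.height); // the past, overlaid
  requestAnimationFrame(drawFrame);
}
requestAnimationFrame(drawFrame);
```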

AR panoramas and related applications that display images in place can give us a sense of the visual history of a place. But there is another sense in which AR panoramas are historical; they are the latest manifestations of a long tradition of panoramic technologies. Knowing about that history can help us understand the aesthetics of AR panoramas today.

As a form of exhibition, the panorama dates to the end of the 18th century, when in 1793 the painter Robert Barker built the first special-purpose panoramic exhibition space in London (see opening pages). Visitors entered a rotunda where the cylindrical painted surface was displayed. It was an image of London. Surprisingly, Londoners were willing to pay three shillings to view a painted representation of the city that they could see simply by walking outside. This panorama initiated a vogue for panoramas housed in temporary or quasi-permanent rotundas throughout Europe and, in some cases, America. Some of the panoramas were contemporary views; others were of historic battles, such as Waterloo. The vogue tapered off at the end of the 19th century, although a few such exhibits have survived or been reconstructed [11].

As Oliver Grau has pointed out, these 19th-century exhibitions were designed to make the viewer's experience of this painted scene as immersive as possible [12]. Grau argues that these panoramic exhibitions were in fact a form of virtual reality and belong to a long tradition of painted and photographic forms that are all predecessors to the computer-driven VR of the late 20th century. We could also put it this way: The panorama was an attempt to create a transparent medium, a medium that would become invisible and leave the viewer in the presence of the objects being represented. Transparency was and remains a powerful media aesthetic that dates back hundreds of years. Certain media are potentially quite effective in promoting the aesthetic of transparency: painting, photography, film, television, and computer graphics. All of these media can also be used in other ways, but they are often, perhaps usually, designed to be transparent and present viewers with an unmediated view of the world.

The new AR panoramas on phones and tablets, however, work according to a different aesthetic. Unlike the original 19th-century panoramic exhibitions, AR panoramas can never provide fully immersive experiences. It is true that an app like TourWrist seems to be aiming for immersion, at least when it offers the user flawless panoramas. Eliminating flaws is an obsession of professional panoramic photographers, whose websites offer advice about pano heads for tripods, lighting, and stitching programs. Good photography and stitching can result in an image that appears seamless and more or less free of distortion. However, if the viewer is using a phone or even a tablet, she still cannot experience anything close to full visual immersion. The viewer is always somewhat aware of her physical surroundings, even when the panorama transports her elsewhere. In that sense she is both "here and there."

What else can we say about the aesthetics of AR panoramas? Although the experience is never fully immersive, the interaction paradigm of AR panoramas can nevertheless be simple and direct. The viewer holds up the phone and rotates it around her to explore the surrounding image. TourWrist also lets the viewer pan with her finger to rotate the panorama while the phone or tablet remains still. This panning interface is equivalent to, but easier than, using the mouse to navigate in 3-D on a desktop computer. In WhatWasThere or HistoryPin, she can use the slider to control the opacity of the historical image overlay, revealing the present building behind the past image. Such interfaces are tactile. The user learns to "feel her way" around the panorama; she learns to summon the past by sliding her finger across the screen.
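
As a sketch of the finger-pan alternative (an illustration under assumptions, not TourWrist's implementation), pointer drag deltas can be mapped to yaw and pitch angles that drive the panorama view; the applyRotation hook that re-renders the skybox, and the "panorama" element ID, are hypothetical placeholders.

```typescript
// Sketch: convert pointer drags into yaw/pitch for a panorama viewer.
// The "panorama" element ID and the applyRotation renderer hook are assumptions.

let yaw = 0;                       // radians, rotation around the vertical axis
let pitch = 0;                     // radians, clamped so the view cannot flip over the poles
let dragging = false, lastX = 0, lastY = 0;
const RADIANS_PER_PIXEL = 0.005;   // drag sensitivity

function applyRotation(yawRad: number, pitchRad: number): void {
  /* hypothetical: re-render the skybox view from these angles */
}

const view = document.getElementById("panorama") as HTMLElement;

view.addEventListener("pointerdown", (e) => { dragging = true; lastX = e.clientX; lastY = e.clientY; });
view.addEventListener("pointerup", () => { dragging = false; });
view.addEventListener("pointermove", (e) => {
  if (!dragging) return;
  yaw -= (e.clientX - lastX) * RADIANS_PER_PIXEL;
  pitch -= (e.clientY - lastY) * RADIANS_PER_PIXEL;
  pitch = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, pitch)); // clamp to the poles
  lastX = e.clientX;
  lastY = e.clientY;
  applyRotation(yaw, pitch);
});
```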

These tactile-visual interfaces promote a particular aesthetic in precisely the sense we discussed above, a particular way of perceiving the world mediated through the phone or tablet. For many users, this active way of viewing is in itself a pleasure. Decades ago, McLuhan characterized television viewing as a tactile or synaesthetic experience, involving the sense of touch as well as those of sight and sound. But McLuhan was speaking metaphorically. The touchscreen interfaces on mobile devices (even more than Engelbart's classic tactile device, the mouse itself) are literally tactile and can involve more bodily motion, particularly when we spin around to construct a panorama in Microsoft Photosynth or to view one in Argon. These interactions promote an aesthetic that Maria Engberg has described as polyaesthetic [13].

AR panorama applications are polyaesthetic in two ways:

  • They combine the senses of sound, sight, and touch. We see and hear proprioceptively, by feeling our way around the visual world of the panorama.
  • They locate us "here and there." We see one world when we look beyond the phone and another when we look at the screen and move it around. With historical panoramas in the phone, we see the past and the present at the same time. Our experience of our environment is redefined through our interactions with the app.

The concept of polyaesthetics can in fact be applied not only to panoramas and AR/MR experiences but also to the whole range of digital design. Polyaesthetics describes a changed relationship between ourselves and our environments as defined through our multimodal interfaces, multiple simultaneous applications, and the combining and overlaying of virtual data with and onto the physical world.

Polyaesthetics: A Productive Theory from Media Studies

Media studies shows that mobile AR/MR applications are part of a significant shift in our culture's aesthetics. How can this media history now become productive and promote new or better designs? As we suggested earlier, media studies, unlike traditional usability studies, does not lead directly to design guidelines: lists of do's and don'ts. Media studies can provide a vocabulary for describing interfaces and applications and a way of thinking about how we interact with them. This vocabulary can aid designers in articulating their goals both to their clients and to themselves. In this case, we propose to add a new term, polyaesthetics, to the vocabulary of aesthetic design. As a term, polyaesthetics brings with it a historically based understanding of the way we experience the world through our current media forms; it helps to add a historical dimension to the work of Wright and McCarthy and others on technology as experience. Different kinds of applications suggest different design vocabularies. Mobile applications promote a polyaesthetic relationship to our environment. By realizing this, designers are encouraged to experiment with relationships between touch, sight, and sound: They come to see that they are designing not just a way of finding location-based information, but a way for the user to experience the world around her as a mixed and hybrid reality of information on the one hand and physical location and embodiment on the other.

Although there may be no such thing as a completely wrong design aesthetic, there are more or less appropriate paths for various technologies and media forms. A commitment to a less appropriate aesthetic can be limiting. For example, different aesthetics are appropriate for VR on the one hand and AR and MR on the other. Immersion is a good design aesthetic for VR, as it was historically for the medium of painting from the Renaissance to the 19th century and for the medium of film in the 20th. Viewing a film in a darkened theater is an immersive experience. The viewer's relationship to her physical world is blocked out, and she is encouraged to fall into the screen and become totally absorbed in the story. This remains the design aesthetic of most popular films, and it has been adopted by many in the VR community. As we have seen, however, it cannot be the dominant aesthetic of AR and MR, because AR and MR keep the user "here and there" at the same time. The polyaesthetics of AR and MR suggests too that mobile apps may not provide a good storytelling medium, at least if we think of stories as absorbing, immersive experiences like those of a compelling Hollywood-style film. The idea that we could follow a complex, unified and compelling story embedded in the world as an AR app is probably misguided. AR and MR are better utilized to tell "stories" like those found on Facebook and Twitter: small narrative units that are sampled by users. We prototyped the use of small narrative units for our AR application, "Voices of Oakland," in which the experience consists of separate stories with prerecorded audio presenting the person buried at each grave [14]. But complex interactive narratives with "believable" generated characters still face significant practical and theoretical hurdles even on desktop computers; they are not good candidates for AR on mobile devices.

In comparison with the aesthetics of seamless immersion, AR and MR experiences often seem messy, because they consist of hybrid layers of information and images that users may choose to read or disregard. Modernist design in the 20th century emphasized the perfect integration of elements into a single unified form. While this aesthetic is still evident, for example in product design from Dieter Rams to Steve Jobs and Jonathan Ive, it is not always suitable for AR and MR or for any mobile application with a social media component. AR applications sometimes have a unified design, but because they often draw on social media and user-driven content, they may also combine, rather awkwardly, the aesthetics of many users. The design aesthetics of AR and MR need to address the complexities of the amateur community as well as those of the professional design community.

Our example of the AR panorama suggests that media studies can offer historically informed aesthetics for designers today. A historical perspective helps us appreciate differences as well as similarities and develop a design vocabulary that is appropriate both to the affordances of the technology and to our current cultural moment. Finally, just as media studies can be used productively to contribute to design, it is also clear that design can contribute to media studies. Today's creative work in digital design will become part of the historical development of media. AR panoramas, for example, revive and redirect the history of panoramas by bringing this 19th-century form into line with the polyaesthetics of the early 21st century. In the 19th century, the panorama offered the viewer a single, immersive experience, which the viewer had to travel to a specific exhibition site to enjoy. Today, AR panoramas invite the user to a visual experience that is hybrid rather than immersive, and multiple panoramas can be linked together in the same application, along with video and audio. Equally important, AR panoramas, along with many other mobile applications, redefine the design space as a combination of sight, hearing, touch, and proprioception. This combination offers a rich and relatively unexplored field for aesthetic design.

References

1. The Augmented Environments Lab, directed by Blair MacIntyre, is a research group in the GVU Center at the Georgia Institute of Technology. The group has been working in the area of augmented reality since 1998, investigating how interactive computing environments can be used to directly augment the information and enhance the experience of users. One main current research initiative is the development of the Argon browser and related technologies and their application to the cultural heritage and other experiences discussed in this article. For descriptions of all the AEL projects, see the website (http://ael.gatech.edu/lab). Researchers and students who have contributed to Argon development and the experiences mentioned here include: Maribeth Gandy, Hafez Rouzati, Brian Davidson, Gheric Speiginer, Isaac Kulka, Evan Barba, Jeff Wilson, and Alex Hill. Jomis Kakozhayil and Jing Li worked with us on the 3-D models and the design of the Lights of St. Etienne project. Rebecca Rouse, Nachiketas Ramanujan, Simone Frassanito, and Sanika Mokashi helped specifically with the panorama projects. Rebecca Rouse is also conducting historical research into the panorama, which contributed to the work we present here.

2. McLuhan, M. Understanding Media: The Extensions of Man. New American Library, New York, 1964.

3. Manovich, L. The Language of New Media. MIT Press, Cambridge, MA, 2001; Bogost, I. and Montfort, N. Racing the Beam. MIT Press, Cambridge, MA, 2009.

4. Gaver, B. and Bowers, J. Annotated portfolios. interactions 19, 4 (July + August 2012), 42–49.

5. Norman, D.A. Emotion and design. interactions 9, 4 (2002), 36–42.

6. McCarthy, J. and Wright, P. Technology as Experience. MIT Press, Cambridge, MA, 2004.

7. Benjamin, W. The work of art in the age of mechanical reproduction (H. Zohn, trans.). Illuminations. Schocken Books, New York, 1968, 217–251.

8. Barba, E., MacIntyre, B., and Mynatt, E.D. Here we are! Where are we? Locating mixed reality in the age of the smartphone. Proc. of the IEEE 100, 4 (2012).

9. Bolter, J.D. and Grusin, R. Remediation: Understanding New Media. MIT Press, Cambridge, MA, 1999.

10. A history of photographic panoramas is given on the website of the American Memory Project of the Library of Congress: http://memory.loc.gov/ammem/collections/panoramic_photo/pnhist1.html. We have discussed the "aura" of AR experiences in Bolter, J.D., MacIntyre, B., Gandy, M., and Schweitzer, P. Benjamin's crisis of aura and digital media. In Media Encounters and Media Theories (J. Müller, ed.). Nodus Publikationen, Münster, 2008, 87–99.

11. Oettermann, S. The Panorama: History of a Mass Medium (D.L. Schneider, trans.). MIT Press, Cambridge, MA, 1997.

12. Grau, O. Virtual Art: From Illusion to Immersion (G. Custance, trans.). MIT Press, Cambridge, MA, 2003.

13. Engberg, M. Writing on the world: Augmented reading environments. Sprache und Literatur (special issue, P. Gendolla and J. Schäfer, eds.) 108, 42.2 (2011), 67–78.

14. See http://ael.gatech.edu/lab/research/design/voicesofoakland/

Authors

Jay David Bolter is Wesley Chair of New Media at the Georgia Institute of Technology. He is the author of Writing Space (1991, 2001); Remediation (1999), with Richard Grusin; and Windows and Mirrors (2003), with Diane Gromala. He currently works on augmented reality design for art, entertainment, and informal education.

Maria Engberg is an assistant professor (Universitetslektor) and deputy dean at the School of Planning and Media Design at Blekinge Institute of Technology in Sweden, and research affiliate at the Augmented Environments Lab. She received a Ph.D. from Uppsala University in 2007. She currently works in digital media theory and on AR/MR design for aesthetic experiences.

Blair MacIntyre is an associate professor in the School of Interactive Computing at the Georgia Institute of Technology, and director of the Augmented Environments Lab. He received a Ph.D. from Columbia University in 1998, and B.Math and M.Math degrees from the University of Waterloo in Canada in 1989 and 1991.

Figures

Figure 1. Augmented Reality Trail Guide. (Credit: Isaac Kulka)

Figure 2. Lights of St. Etienne: 3-D model of the cathedral in 1207. (Credit: Jomis Kakozhayil)

Figure 3a. Image panel for Père LaChaise cemetery. (Credit: Maria Engberg)

Figure 3b. Skybox for Père LaChaise cemetery (one side removed for clarity). (Credit: Maria Engberg)

Figure. Etching of London panorama by Robert Barker, c. 1792.


©2013 ACM  1072-5220/13/01  $15.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

