Sometimes dreams really do come true.
Back in 2000, Keith Cheverst and colleagues published a paper outlining the functions and interactions made possible by the Guide system [1]. This mobile navigation aid allowed users to read location-dependent content and look at maps of nearby places of interest; it also provided services such as hotel booking.
Unlike many predictions of the future, this one was rather prescient. Present-day commercial mobile services like Google and Nokia Maps integrate a range of content and features around a detailed, GPS-controlled view. Cheverst's insights have been acknowledged through hundreds of citations; for the mobile industry itself, the financial rewards of building such systems are likely to be great.
So, perhaps, the job is done. The infrastructure and interaction ideas are in place for a world that will be fully indexed, where every curb and every tree in every park will be mapped, documented, and accessible at the stroke of a seductively sleek smartphone. We will never be lost; we will be guided away from wandering or wondering and will no longer need to guess or take a risk.
While these services are undoubtedly useful and used by increasing numbers of people, as I gaze onto a "streetview," I am left a little cold. Where is the "streetlife"? I am left pondering the dangers these innovations pose to the joys, surprises, and even discomforts of exploring our cities, hills, or beaches.
As a father with young children, I revel in watching them discover places as we travel together, imaginatively seeing cities in playful and profound ways. The way they experience places has opened my eyes to an alternative direction for mobile and ubicomp place-based services: services that are about the performer, explorer, and storyteller in all of us. Ones that help us make sense of our personal journeys in and through places.
There is a powerful drive to gather up and index all of everyday life, to draw it into an omniscient, omnipresent computing cloud: a heavenly collection of content, presided over by a behind-the-scenes "god" of a system, for supplicants who pray by prodding touch screens.
It's the cloud computing of knowing, or, as one slide in a presentation at a recent Ambient Computing conference put it, systems that are "always beside us, to guard and guide us...all around us, all the time, not too obvious, a quiet supporter." Hymn-style, the presenter invoked a world made safe, calm, and solid by pervasive intelligence. Look through the lens: reality augmented to guide, guard, and provide for your every need.
Over the past few years, there have been persuasive critiques of these sorts of paternalistic design standpoints for pervasive systems. Genevieve Bell and Paul Dourish point out that while we expected "a world to be delivered by heroic engineering," everyday ubicomp arrived as we daydreamed: "The ubicomp world was meant to be clean and orderly; it turned out to be a messy one." Then Yvonne Rogers encouraged us to move on from the early visions of calm computing, in which we are immersed in a benign bath of experiences and ambient information: embrace the "excitement of interaction," she urged [3].
Before these voices, though, Brenda Laurel warned that thinking about computers and interaction as simply information-processing artifacts might lead to a less than colorful future way of living. As she puts it, "Intelligence without passion is simply rationality."
Are our place-based systems full of intelligence but lacking passion? Do we want our children to grow up with services that simply let them see the world in a logical, ordered, controlled way? In the rest of the article, we'll take a look at another direction for these systems, one that moves the user away from being a bystander to playing a central role as a performer.
Two things are required. First, we need to find interaction techniques that de-emphasize screen touches, in favor of ones that put us back in touch with our surroundings. Second, instead of just informing us, there are wide-open opportunities for systems that inspire us to see our places as stages where our imaginations can run riot, where our routines become rituals, and where we can feel we truly belong.
Crowds of commuters, heads down, shuffling along the street, tapping away at personal technologies: a common sight in most cities. Every day, millions of people are being drawn into the digital.
While much has been said about the danger of people bumping into lampposts, what we should really be concerned about is what they miss as they screen-walk past each other: an awareness of how different we all are in shape and dress, a reassuring smile, or a fleeting connection in the blink of an eye.
Researchers have worked hard to produce mobile systems that support heads-up interactions. Take, for example, the work of Enrico Costanza and colleagues on intimate interfaces [5]. Sensors strapped above the elbow, hidden under the user's clothes, can sense subtle muscle changes: A slight flex of the bicep and your mobile rejects an incoming call, without slowing you down or forcing you to take your eyes off the street.
While this concept is heading in the right direction, the tendency to focus on the hidden, personal, and subtle is at odds with a world of bustle and human expressiveness. A world in which many are comfortable walking and talking loudly into hands-free headsets, sometimes gesturing dramatically. A world where, as Fiona Candy notes, we have always used the things we wear and carry to say something about ourselves. The elaborate hat-doffing etiquette of Edwardian times has its counterpart in today's pack-walking performance of attitude and style. A world that has always seen us bring functional and expressive artifacts into the street: pipes to smoke, rolled-up newspapers to march purposefully with from the train station, boom boxes to amplify our identities. This is a world where technology is less in the woodwork, more in the face.
In our research group, we've been thinking about what a more visible, extravagant set of interactions might look like for location-based services. Take, for example, the "Sweep-Shake" prototype. The user brush-strokes their mobile in front of them. If they point toward a content hot spot, the device vibrates and they can explore and filter further by gesturing in different directions for various information types.
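The pointing interaction just described can be sketched as a simple bearing test. This is a minimal illustration only: the function names, the flat-plane geometry, and the 15-degree pointing width are my assumptions, not details of the Sweep-Shake implementation.

```python
import math

def bearing_to(user, spot):
    """Compass-style bearing (degrees) from the user to a hotspot,
    on an assumed flat x/y plane with +y as north."""
    dx, dy = spot[0] - user[0], spot[1] - user[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def angular_gap(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    return min((a - b) % 360, (b - a) % 360)

def sweep(user, heading, hotspots, width=15.0):
    """Return the hotspots the device currently points at; a non-empty
    result would trigger the vibration motor in a real prototype."""
    return [name for name, pos in hotspots.items()
            if angular_gap(heading, bearing_to(user, pos)) <= width]

hotspots = {"castle": (0, 100), "cafe": (100, 0)}
print(sweep((0, 0), 2.0, hotspots))  # pointing roughly north
```

Filtering by information type, as the prototype allows, could then be layered on by gesturing in different directions once a hotspot is "held."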
One of the motivations behind this sort of work is to make one user's interactions of potential benefit to others. Perhaps when someone sees a Sweep-Shake user pointing and gesturing, they'll be prompted to look around and also use their device to explore a location. A further spur for our efforts is the desire to bring digital-physical interactions out into the open, not always hidden in furtive device screen taps and button presses. In our view, interactive gestures and system responses should become a familiar, lively backdrop to our shared social experiences.
The U.K., where I live, rarely gets heavy snowfall. When it comes, it stops the trains, closes the schools, and makes front-page news. In February 2009 a deep fall prompted Stuart Jeffries, a national newspaper columnist, to write of London's transformation:
"For a day at least, Londoners returned to a forgotten innocence...We needed it to remind us of who we are...We needed something to lift our spirits, to give us the excuse to play to no discernible economic benefit. And yesterday here it came, free as air, falling on to my bare head as I walked down the canal towpath..." .
On my street, the morning after a snowfall, there were the delicately ridged prints of dogs, people, cats, and hopping birds. In the stillness, the street's life had taken on a new shape. This does not happen only with snow. On the same street, this summery week, winds have blown blossoms from overhanging trees. Suddenly, the usually sullen walkway is turned into a sunlit late-afternoon bridal pathway.
Can the digital clouds we build be as powerful as a snowfall or blossom-bearing wind, helping us see our worlds in completely new ways? How can we build place-based systems to provide enchanting platforms for everyday performance? To consider these questions, let me share three personal in-place experiences.
Two years ago, I spent a couple of months in Finland working as a visiting fellow at Nokia Research. My wife and three children moved with me and we all lived on a small lake island in Tampere.
To help navigate around our new city, we bought a car GPS navigation system. A few days into our adventure, Rosie, the youngest, asked, "Daddy, where are the bears?" Her older brothers, as brothers do, laughed and teased her. I tried to be a good father and humored her by saying that Finland did indeed have bears but perhaps not in built-up areas. She persisted, "No, daddy, there are bears; the machine says so." Even more laughter from my sons. "It says, 'bears to the left and bears to the right'..." From that point on, we began a new game. Whenever the TomTom said, "Bear to the left," one of us looked out and tried to spot a rock, tree, or cloud in the shape of a bear. Rosie and the GPS transformed our journeys.
Just recently, Rosie has started to respond to the TomTom's final message, "You have reached your destination," with this: "You have reached your destiny." Imagine that: a location-based system that is about destiny, not destination.
On the island, the children spent much of the day creating make-believe worlds where fallen leaves were dragons' tongues, acorn shells were fairy cups, and mobile phone ringtones helped them find each other's secret camps in a tech-enhanced variant of the familiar hide-and-seek. Returning from work, I was impressed by my children's appropriation of mobile technology to augment their environment.
But perhaps this is just child's play. When we grow up, do we really want to live in make-believe lands? To look for bears among the trees or clouds? Well, I think there is more than enough evidence that we want our grown-up imaginations filled as we encounter places.
Just outside of Waterloo railway station, one of London's busiest, there's a concrete tangle of subways that sprawl out under some of the capital's most traffic-heavy roads. The dismalness of this place was, until recently, ameliorated by a simple textual device. At the entrance, etched on a pillar, a poem begins: "I dream of a green garden / where the sun feathers my face / Like your once eager kiss." As the tunnels unfold, more of the poem is revealed, and the place is changed. Holding up an augmented-reality smartphone in that location, seeing there's a Starbucks half a mile away, isn't quite the same.
From text to music. Retailers have long known the power of in-store music to inspire or prompt purchases. Every day, people use music to complement or alter their mood as they briskly walk their daily commute, amble as a tourist, or power round a route during a lunchtime run.
Inspired by this, we developed a navigation-by-music system, Ontrack, which helped people reach a destination by panning the music they were listening to in the direction they should head. In trials, the system worked well, effectively getting people from start to end point. One participant, though, spent a long time wandering around in what seemed to us an aimless way, weaving around the campus. When we eventually stopped them, we asked why the system had failed. They didn't see it that way: they spoke of feeling as if they were being blown gently along by the prototype; they were enjoying the experience. Where are the navigation systems that help us drift or float around an area, even one we know well, with locations visited in unfamiliar orders and new connections made?
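The panning idea can be sketched in a few lines. This is a hypothetical illustration in the spirit of Ontrack, not its published design: the equal-power pan law and the sine mapping from angle to pan position are my assumptions.

```python
import math

def stereo_pan(heading_deg, bearing_deg):
    """Map the angle between the walker's heading and the bearing to the
    destination onto left/right channel gains. Destination dead ahead
    gives centered audio; a destination off to the right pans the music
    right, gently nudging the listener to turn that way."""
    # Signed angular difference, folded into (-180, 180] degrees.
    diff = math.radians((bearing_deg - heading_deg + 180) % 360 - 180)
    pan = math.sin(diff)  # -1 = hard left ... +1 = hard right
    # Equal-power pan law keeps perceived loudness roughly constant.
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right

left, right = stereo_pan(heading_deg=0, bearing_deg=0)
# target straight ahead: both channels roughly equal
```

A looser mapping (a wider "dead zone," or slower pan changes) is exactly the kind of knob that would let such a system blow its user gently rather than steer them strictly.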
Text, music... What about rich, visual transformations? Many a street artist makes a good summer living from sketching chalk trompe l'oeil that transform a pavement into, say, a breathtaking waterfall. Groups like NuFormer have been applying digital techniques to similar effect. Using high-definition laser mapping and projection systems, they have produced incredibly compelling animations that play with the architecture and surfaces of buildings as diverse as storefronts and medieval castles. As pico-projectors become more common, built into mobile phones, can we develop services that attempt similar adaptations on a smaller scale?
Let's return to the island in Finland. While I lived there, my daily routine involved rowing to the mainland, running through a forest, and then cycling six kilometers to work. After a week or so, the journey became less of an exhausting routine and more of a ritual full of sensations. The splash of the water; the feel of the oars as they dipped and resisted the water; the bounce of the soft forest turf; the clack-clack of the raised walkway at the edge of the wood.
As I close my eyes now and think about that commute, I have a deep sense of peace; my breathing slows, and a smile crosses my face. How wonderful it would be to relive that journey in all its richness.
Technologies that help capture a patina of places, particularly places we pass through repeatedly, hold much potential. Microsoft's SenseCam project points the way, as does the more recent Aided Eyes from Rekimoto Labs. SenseCam captures a sequence of pictures from a gadget worn via a lanyard around the neck, conference-badge style [8]. Aided Eyes builds on the idea by combining the outward-facing camera with another that tracks the wearer's gaze, detecting what the eye fixes on [9].
Such "lifelogging" often seems a mechanistic, prosaic, capture-all processsystems to bottle the sum of our human existence. That people want to liveand relivetheir lives more selectively and poetically can easily be seen in the imaginative, inventive, and witty use of status updates commonplace in social sites like Facebook. To meet such needs, researchers have recently argued for systems that are less memory substitutes and more sources of cues that trigger a person's own memories .
On the first day in a new city, I like to wake up early and go for a run while all is still quiet. It's my way of feeling that I am part of the place, a way of making the place my own.
As mobile place-based services become pervasive and more comprehensive, there is a danger that our sense of personal experience is weakened. How can we feel that we are explorers when every part of a place is mapped, commented on, photographed? We won't get the chance to own the city; the city will belong to the cloud. I want a system that will take me to places that no one has been to that day, not to the "must-see," highly rated ones. Sometimes I want to go to places that challenge me, rather than to those that seem to fit my profile.
I recently bought a beautifully crafted box in a Helsinki store. A highly finished wooden base supports a transparent blue-glass lid. But it wasn't the design that attracted me; rather, it was a small piece of paper inside that simply read, "What's inside makes it yours" (see Figure 3). For place-based UX designers, the challenge is to give users a sense that they define rather than are defined by the system. My colleagues and I have just begun to consider what this might mean for pedestrian navigation systems (see the following page).
Let's take another box, this time one that holds digital gadgetry. Early marketing for the iPod shuffle carried the tagline "Enjoy uncertainty." Many purchasers clearly did, and the simple device was a great success. People were enchanted by the juxtaposition of songs played at random. How might we introduce such randomness into place-based systems [11]? Users may feel in control if they are presented with perfect information about a place. However, they might be better served if the system's output were imperfect, ambiguous, and open to personal interpretation. Perhaps, then, location-based systems could take us on journeys at random (a modern-day mystery tour) where we try to make our own sense of what connects the places.
The Questions not Answers mobile search system uses such a "noisy," interpretable approach. Instead of offering a fixed, curated set of facts about a place, when people turn on their mobile, the system presents search queries being issued in that place. This live chatter of others lets users form a sense of the place and the people around them.
We have been thinking about place-based services as ones to promote imaginative, meaningful, and engaging performances in situ. If we are to think in terms of performance, what then of the audience?
Detailed mirror worlds are being created. Already, I can sit in my office in Swansea and call up a map of the Bay Area to watch real-time traffic flows, tap into a live view from webcams, and see when the next train will arrive. All very precise, all very detailed, yet no passion, no real human connection.
Perhaps we should think about breaking the glass between people watching the mirror world and those on the streets. In pervasive gaming this has already been experimented with extensively. In the "Uncle Roy All Around You" games, people in the physical place are directed toward targets by their Web-bound teammates [12]. How will our experiences of everyday spaces change if we apply these gaming ideas to mobile information services?
I've argued for a different set of priorities for future place-based computing, ones that will lead to systems that allow us to be fully alive, not deadened, as we are drawn into the digital. Ones that will see us extravagantly, visibly weaving the physical and digital together, not discreetly tapping on touch screens. Ones that will bring streets to life, rather than presenting sterile views.
Of course I've exaggerated and overlooked counterarguments to make my points. Like a dream, the future I've presented is sometimes sketchy and muddled. But we need new dreams to avoid the nightmare sleepwalk into a world we no longer own, into public places we no longer connect with, where people pass each other, digitally divided.
This article began life as a keynote for the Nokia User Experience Day 2009. Further details of our systems can be found at www.undofuture.com. Many great colleagues have been involved in the work, and much of it has been funded by the UK's EPSRC.
1. Cheverst, K., Davies, N., Mitchell, K., Friday, A., and Efstratiou, C. Developing a context-aware electronic tourist guide: Some issues and experiences. Proc. of the SIGCHI Conference on Human Factors in Computing Systems, CHI '00. (The Hague, Netherlands, Apr. 1-6). ACM, New York, 2000, 17-24.
3. Rogers, Y. Moving on from Weiser's vision of calm computing: Engaging ubicomp experiences. Ubiquitous Computing: 8th International Conference, UbiComp 2006. P. Dourish and A. Friday, eds. (Orange County, CA, Sept. 17-21). Springer-Verlag, 2006, 404-421.
5. Costanza, E., Inverso, S.A., Allen, R., and Maes, P. Intimate interfaces in action: Assessing the usability and subtlety of EMG-based motionless gestures. Proc. of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07. (San Jose, CA, Apr. 28-May 3). ACM, New York, 2007, 819-828.
7. Jeffries, S. London's day of innocence. The Guardian (London). February 3, 2009; http://www.guardian.co.uk/uk/2009/feb/03/londonsnow-weather. © Stuart Jeffries/Guardian News & Media Ltd 2009
8. Hodges, S., Williams, L., Berry, E., Izadi, S., Srinivasan, J., Butler, A., Smyth, G., Kapur, N., and Wood, K. SenseCam: A retrospective memory aid. Ubiquitous Computing: 8th International Conference, UbiComp 2006. P. Dourish and A. Friday, eds. (Orange County, CA, Sept. 17-21). Springer-Verlag, 2006, 177-193.
9. Ishiguro, Y., Mujibiya, A., Miyaki, T., and Rekimoto, J. Aided eyes: Eye activity sensing for daily life. Proc. of the 1st Augmented Human International Conference. (Megève, France, Apr. 2-3). ACM, New York, 2010, 1-7.
11. Leong, T.W., Vetere, F., and Howard, S. Randomness as a resource for design. Proc. of the 6th Conference on Designing Interactive Systems, DIS '06. (University Park, PA, June 26-28). ACM, New York, 2006, 132-139.
12. Benford, S., Flintham, M., Drozd, A., Anastasi, R., Rowland, D., Tandavanitj, N., Adams, M., Row-Farr, J., Oldroyd, A., and Sutton, J. Uncle Roy All Around You: Implicating the city in a location-based performance. Proc. of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology. (Singapore, June 3-4). ACM, New York, 2004.
13. Robinson, S., Jones, M., Eslambolchilar, P., Murray-Smith, R., and Lindborg, M. 'I did it my way': Moving away from the tyranny of turn-by-turn pedestrian navigation. To appear in Proc. of MobileHCI 2010.
Matt Jones is a professor of computer science at the FIT Lab, Swansea University, and the co-author of Mobile Interaction Design, Wiley & Sons. More at www.fitlab.eu.
John is visiting Rome for the first time and is looking forward to meeting some friends at a good local restaurant. Taking out his mobile, he sees the arranged meeting place is just about 2km away. It's a lovely spring day, so, with time to spare, he roams freely in the direction of his meet-up, taking in the maze of alleys and quirky shops all around him. After 10 minutes, he scans left to right; the device vibrates to reassure him he's still on course, and also indicates that there are many routes to his destination. It feels good finding his own way, so he continues to make his own choices, enjoying the area around him. A little later, he comes to a main junction. Should he turn left or right? He'd better get this right, he thinks. Scanning again, the vibration feedback is now more targeted, and he walks on with confidence... Our system [13] uses haptic feedback to guide the user to their destination. The system can vary the angle within which the device vibrates to give a sense of the range of possible routes: Where there is little choice (top image), the feedback region is smaller than when the user can be more adventurous (bottom image).
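The variable-width haptic corridor in the scenario above can be sketched as follows. This is an illustrative sketch only: the widths, the linear mapping from route choices to corridor size, and the function names are my assumptions, not the system described in the sidebar.

```python
def feedback_width(n_good_routes, base=10.0, per_route=15.0, cap=180.0):
    """More viable routes toward the destination widen the vibration
    region, leaving the walker freer to choose; at a critical junction
    with a single good turn, it narrows to a targeted cue."""
    return min(base + per_route * (n_good_routes - 1), cap)

def should_vibrate(scan_bearing, target_bearing, n_good_routes):
    """Vibrate when the direction the user scans toward falls inside
    the current feedback region around the target bearing (degrees)."""
    gap = min((scan_bearing - target_bearing) % 360,
              (target_bearing - scan_bearing) % 360)
    return gap <= feedback_width(n_good_routes) / 2
```

With one good route the corridor here is 10 degrees wide, so only a near-exact scan triggers feedback; with four good routes it opens to 55 degrees, rewarding a whole range of adventurous headings.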
©2011 ACM 1072-5220/11/0100 $10.00