Many complain that we are mesmerized and divided by technologies as they saturate our world. That we are becoming disembodied, fragmented, distracted from the present moment and from one another [1,2,3]. Take smartphones. It’s true that people frequently use them to socialize with non-present others during perceived lulls in real-world conversations, becoming quite distracted in the process. But it’s also true that people use smartphones to augment co-presence, sharing information to bolster discussion points and incorporating real-time texting and exchanges of photos and information into ongoing in-person conversations.
Whatever we might think of how smartphones affect co-presence, there is a steadily increasing technological infrastructure beyond these devices that is coming into play: ubiquitous surveillance cameras and sensors that detect movement, sound, and other human traces; sensors and communication capability built into everyday objects (Internet of Things); and increasingly powerful sensing and display technologies worn on and in the body (wearables and implantables). All of these will impact how we interact with one another in person. I believe we should be actively working as a community to anticipate and shape how these technologies augment our social engagements. Market imperatives (to sell more individual devices, to gather data from individuals about their consumption patterns, etc.) will not by themselves bring about artful technological support for rich, in-person sociality. We need to actively set guiding values for our work, articulating design spaces that demonstrate the potential and value of technologies to augment our everyday social interactions.
There is a tradition in HCI research of building and then living with prototypes, most famously framed by Mark Weiser in his article explaining Xerox PARC’s strategy of envisioning and then dwelling in a particular vision of a “ubiquitous” computing future [4,5]. This methodology enables people to personally and socially inhabit technological augmentations, experiencing unanticipated issues and opportunities and envisioning forward for the rest of us as these technologies are more broadly deployed.
My research group has been working in the spirit of this tradition, creating collocated technology-enhanced social games. These games explore near-future technological augmentations of our person and environment, such as wearables and surveillance cameras. Games are a rich terrain for experience design in their own right and will increasingly be part of our everyday sociotechnological experience. They can also serve as a test bed for experimenting with technological augmentation of social interaction, because of the “magic circle” that players step into when they engage with the rules and aims of the game [6,7]. Taking up unconventional or unusual social roles in the context of playing a game is something familiar from childhood. That familiarity allows researchers to put everyday people into a rapid-fire process of inhabiting technological augmentations with which they might otherwise be uncomfortable. Games also have the advantage of setting a social frame and shared expectations that take the focus away from the novelty of the technology, instead putting the spotlight on the immediate personal and social tasks and goals. Games therefore make a sort of end run around the novelty effect that gets in the way of examining the potential ongoing social impact of a given technology. So, we are able to somewhat accelerate the process of inhabiting future social augmentation prototypes while also putting them in the hands of people outside the technophilic and engineering-minded social milieu of the research lab.
In this article, I present a brief annotated portfolio of projects from the research-through-design process [9,10,11] in which we have been engaged, describing prototypes we have made that aim to challenge embedded assumptions in current technologies that can interfere with subtle, supple social augmentation. The overall vision of this cycle of research prototyping has been to explore enhancing social connection through technological augmentation of in-person social interaction. I use three games developed in the context of my lab—Yamove!, Pixel Motion, and Hotaru—as vehicles for discussing insights yielded by the research. Each example is framed with theory from HCI research that provides useful concepts and context for the design insights.
Supple, Spacious Sensing
In his landmark book Where the Action Is, Paul Dourish introduces the notion of “coupling” with technology, drawing upon Martin Heidegger’s notions of ready-to-hand and present-at-hand. Dourish points out that technologies should provide variable and flexible coupling opportunities for the people who use them. As he puts it, “The reason that variable coupling is so crucial is that the effective use of tools inherently involves a continual process of engagement, separation, and reengagement.” I would argue that this variability or flexibility in coupling becomes even more vital when we are designing to support rich in-person social interaction. Much of the cultural strife about the use of mobile phones in social interaction derives from the fracturing of mutual attention required to engage these devices. To look something up to answer a question that arises during our conversation, I need to break eye contact, pull out my phone, and engage in close work with the device for a bit. Mobile phones thus by their nature provide a kludgy form of co-present social augmentation. I believe we can open a much wider, more subtle and supple design space for the use of mobiles if we deconstruct our assumptions about which of their functions should be active and when, coupling the technology and the people involved in a more fluid and creative way.
We designed a mobile-based game that provides very different social support than most social apps. Yamove! (Figure 1) is a dance battle game for two teams of two players. Each person must use a smartphone to play, but the phone is worn on the wrist in a holster and rarely looked at. The game is modeled upon b-boy/b-girl dance battles, which originated in the 1970s alongside rap and MC-ing, and which still flourish today. In this dance form, teams (crews) of dancers compete against one another. The competition has a set of judges, and members of each crew take turns doing improvised moves to music played by a DJ. Yamove! interleaves technology into this interaction frame, with a core game mechanic of closely synchronized movement. The game pits one pair of dancers against another for three short rounds. Each pair improvises dance moves and performs them during its turn. The moves can be pretty much anything two people can do well in synchrony. Yamove! uses phone accelerometers to track player movement synchrony, as well as to monitor how vigorously players are moving and whether they keep changing up their moves. The final score in a round is based on all three of these criteria. The phones themselves provide only a few visual cues for players: a countdown to when a round begins and a summative score after each round on a scale of 1 to 100 (Figure 2a). Players learn how they are doing during a round by listening to an MC, who shouts out feedback and instructions, such as “Doing great!” or “Change it up! Pick up the pace! Stay in sync!” The MC is aware of the players’ performance stats because they have an eye on a large shared screen in the play space (Figure 2b, c; also Figure 1). Spectators also use this shared screen to understand how the game is evaluating the dance moves they are watching.
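The three scoring criteria can be sketched in code. This is purely illustrative: the game’s actual algorithm is not published in this article, so the signal processing, weighting, and function names below are assumptions, not Yamove!’s implementation.

```python
import numpy as np

def score_round(a, b, window=30):
    """Hypothetical scoring sketch for a Yamove!-style round.

    a, b: equal-rate 1-D arrays of accelerometer magnitudes, one per
    player. Returns a 0-100 score combining the game's three criteria
    (synchrony, vigor, variety). The formulas and equal weights here
    are illustrative assumptions.
    """
    n = min(len(a), len(b))
    a, b = np.asarray(a[:n], float), np.asarray(b[:n], float)

    # Synchrony: correlation of the two movement traces, clipped to 0..1.
    sync = max(0.0, float(np.corrcoef(a, b)[0, 1]))

    # Vigor: mean movement energy, squashed into 0..1.
    vigor = float(np.tanh((a + b).mean() / 2.0))

    # Variety: how much the joint movement pattern shifts between
    # successive windows ("changing it up"), squashed into 0..1.
    m = (a + b) / 2.0
    means = [m[i:i + window].mean() for i in range(0, n - window + 1, window)]
    diffs = np.abs(np.diff(means))
    variety = float(np.tanh(diffs.mean())) if len(diffs) else 0.0

    # Equal weighting of the three criteria (an assumption).
    return round(100.0 * (sync + vigor + variety) / 3.0)
```

Under this sketch, two players moving in lockstep score higher than two players moving independently, which matches the game’s emphasis on synchrony over any particular choreography.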
Though a big screen is present in the interaction, we’ve studied how the game is used in our lab and have found that spectators keep their eyes on the players far more than on the screen. Yamove! uses mobiles and a large screen but keeps the attention on the dancers as spectacle.
Partly this is due to the simplified feedback those devices provide, but it is also because the improvisational nature of the game allows players to come up with moves that are both fun to perform and fun to watch. Player pairs can size each other up and choose moves they can both execute well. This is quite different from commercial console-based dance games such as Just Dance, in which players closely copy dance moves presented on-screen. The flexibility of the coupling between the game and the players also means very expert dancers can compete against beginners and still have a good game. We’ve seen a tremendous range of dance moves used by players with very different skill levels (Figure 3).
Yamove! orchestrates the coupling of players with devices to make artful use of inputs and outputs from people and machines. Rather than taking the mobile phone’s capacities at face value, our game’s design makes strategic use of its sensing and display capacities in the service of the overall social interaction for players and spectators; the same is true of the large shared screen. The game incorporates a human mediator role (the MC) as a form of flexible and appropriate feedback that supplements that of the machines. All of this results in an engaging spectacle, as well as an enjoyable and flexible shared social experience for players. Yamove! was an IndieCade finalist (IndieCade is the premier venue for independent games) and has been displayed at public festivals such as the World Science Festival in New York City.
The room for improvisation designed into Yamove! invites ludic engagement, encouraging gestural excess that has emergent and local social meaning for participants, moving away from the automaton-like imitation of a mass-tailored dance instructor whose moves don’t quite fit anyone or any situation well enough. The sensors and computation are a relatively spare arbiter and framer of engagement between people, rather than a rigid taskmaster and pace setter. Social attention is managed strategically to bolster the interaction for both participants and spectators. This flexible, supple, and spacious coupling of people and machines is an approach that can better play to the strengths of each.
Shaping the Spaces Between Bodies in Public Play
Humans are highly attuned to one another’s physical signals and performance. When a group engages a public system, they are looking to learn what to do, but they are also incorporating what they see a player do into an overall impression of that person [17,18]. Thus, what designers ask people to do physically to engage a public system inevitably has social implications. Movement-based commercial sensors in social settings can require players to perform awkward movements that may invite a “mocking gaze.” For example, Microsoft Kinect–based games can lead a person to flail their arms and move about in an atypical manner in order to be recognized by the system and make “moves” in the game. The limitations of the Kinect’s camera-based sensors can also shift how a social group is arranged in front of the device: The active player moves in an exaggerated fashion in order to be recognized, while everyone else steps out of the field of view so as not to interfere with this coupling. The engineering-minded focus on maximizing the detailed, one-to-one coupling of sensors with individuals inadvertently creates a social necessity to emphasize spatial coupling with the machine, at the expense of more fluid and natural socio-spatial arrangements and performances. People cannot group themselves and move about intuitively and fluidly. Proxemics (the spaces between bodies in social interaction) is a powerful design material that is not being artfully shaped in the case of the Kinect.
In creating Pixel Motion, a public museum game, my research group took a different approach to sensing. Instead of attempting to track each individual player in a crowd, the game uses motion-tracking software designed by our research partners at Bell Labs to sense any motion in the field of view of the surveillance camera mounted in the play space. Pixel Motion uses this motion-flow data to drive a “pixel wiping” game mechanic that functions like scratching off the surface of a lottery ticket to see if you’ve won. Players wipe off pixels to reveal the video image of themselves as they play (Figure 4). Anyone can step into the field of view of the camera and begin wiping pixels away through free movement. All players work together to clear enough of the pixel field to win within a time limit. If they win, there is a chance to pose with virtual props to create a postcard image that goes into the game’s Flickr stream. There is also a leaderboard showing the best scores of the day, which uses the postcard images from the winning player groups.
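A minimal sketch of the wiping mechanic, using simple frame differencing as a stand-in for the Bell Labs motion-flow software (whose internals the article does not detail); the threshold, win goal, and function names are illustrative assumptions:

```python
import numpy as np

def wipe_step(prev_frame, frame, mask, threshold=15):
    """One update of a Pixel Motion-style 'pixel wiping' mechanic.

    prev_frame, frame: consecutive grayscale camera frames (2-D uint8
    arrays). mask: boolean array, True where pixels still cover the
    live video image. Wherever motion is detected between the two
    frames, the covering pixels are wiped away. Returns the updated
    mask and the fraction of the field cleared so far.
    """
    # Crude motion detection: per-pixel difference between frames.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    motion = diff > threshold

    mask = mask & ~motion          # wipe covered pixels where motion occurred
    cleared = 1.0 - mask.mean()    # fraction of the pixel field cleared
    return mask, cleared

def is_win(cleared, goal=0.8):
    """Players win if enough of the field is cleared within the time limit."""
    return cleared >= goal
```

Because any motion anywhere in the frame clears pixels, more bodies moving means faster progress, which is what makes the “more the merrier” strategy legible to players.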
We installed Pixel Motion as part of an exhibition about the role of surveillance cameras in everyday life at the Liberty Science Center in New Jersey, where we were able to record and study public interactions with the piece. Through conversations with museum staff and our own formative research in the museum, we had seen social patterns of use of the museum’s existing interactive exhibitions in which first one group, then another, would make use of a game or installation. Each group would wait their turn to engage the exhibit, rather than playing together. We hoped with Pixel Motion to encourage intergroup mingling during interaction. Our hypothesis was that museum visitors would quickly grasp the sensing properties of the game and understand that a “more the merrier” approach would result in better game scores. In practice, we observed this to be the case (Figure 5). School groups would freely commingle to play the game, diving in mid-game and posing together for the reward photos afterward [21,22].
The sensing strategy employed in Pixel Motion, combined with the game’s design, reconfigured the spatial arrangement of bodies, encouraging commingling of groups and allowing for a range of movement strategies without the worry of uncoupling from the sensors. This shifted the role of the technology from hobbling and constraining motion, and disrupting social spatial flow, to providing a reason to commingle and move freely with one another. We believe this project illustrates the potential for cultivating the engagement of publics through novel thinking about technical augmentation of bodily movement through shared spaces.
Inviting and Engaging Design Resistance
HCI research and design practice runs a perennial risk of groupthink and local social bias about what is desirable and viable. We try to counterbalance these tendencies through deep commitment to engaging real users in our work practice—identifying needs and closely engaging relevant people as we try to address those needs. But sometimes we want to do speculative work about where things might go in the future. Market researchers know from hard-won experience that people are not adept at communicating what they might think of a potential idea, or how willingly they would incorporate it into their everyday lives. Yet our envisionments and speculative prototypes need rigorous engagement by publics outside our labs and university spaces, to avoid overly narrow assumptions and dramatic missed opportunities.
Consider, for example, wearables. The failure of Google Glass to catch on among the general public and the low utilization of the Apple Watch’s range of features suggest that we still have a long way to go in conceiving of and delivering adept social (or even individual) augmentation with wearables. Taking on the challenge of envisioning and prototyping technologies meant to gracefully augment everyday social engagement should lead us to reach outside the research lab even earlier and more frequently, in as many partnerships with other skill sets and audiences as possible, toward increasing our chances of hitting upon genuinely insightful and helpful designs.
Bill Gaver is an HCI practitioner who has innovated fresh ways to conduct speculative and creative technology prototyping in areas such as smart homes, which can suffer from an insular and unimaginative approach. He has a longstanding partnership with professionally trained designers built into the makeup of his research studio, and has experimented with methods of installation and assessment of prototype systems that involve outside experts such as cultural commentators. The last project I’ll describe borrows from Gaver’s ways of working—it’s a close collaboration with a professional artist/designer that explores a speculative alternate future for wearables as social game controllers.
Hotaru is a collaborative game in which two players use wearables to engage in close physical coordination (including physical contact) toward a shared game objective (Figures 6 and 7). One player wears a custom-built backpack, the other a gauntlet. (Both wearables have embedded Android phones, which drive their functions.) The backpack wearer uses hand gestures to “gather” energy, which slowly illuminates the backpack. The gauntlet wearer must keep an eye on this gauge, and when there is enough energy, the two join hands to transfer it. Once the gauntlet is full, the second player raises their arm to release the energy. The game’s narrative frame is that the two are like lightning bugs fighting off smog with their own bodily illumination (based on a Japanese folk story). Players win by working closely together to gather, then fire off, energy toward projected smog. Most of the game’s feedback is embedded in the costumes’ illumination and in sound design, rather than in elaborate graphics, keeping the attention on the players’ bodies.
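The cooperative loop (gather, join hands, fire) can be sketched as a small state machine. The phases come from the description above; the capacity, increments, method names, and single full-pack transfer are simplifying assumptions rather than Hotaru’s actual logic.

```python
from enum import Enum, auto

class Phase(Enum):
    GATHERING = auto()  # backpack wearer gathers energy via hand gestures
    TRANSFER = auto()   # players join hands to pass energy to the gauntlet
    READY = auto()      # gauntlet full; second player may raise their arm
    FIRED = auto()      # energy released toward the projected smog

class HotaruLoop:
    """Illustrative state machine for Hotaru's two-player energy cycle."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.backpack = 0   # energy stored in the backpack (lights it up)
        self.gauntlet = 0   # energy stored in the gauntlet
        self.phase = Phase.GATHERING

    def gesture(self, amount=10):
        """Backpack wearer's gathering gesture adds energy."""
        if self.phase is Phase.GATHERING:
            self.backpack = min(self.capacity, self.backpack + amount)
            if self.backpack == self.capacity:
                self.phase = Phase.TRANSFER  # backpack fully illuminated

    def join_hands(self):
        """Joining hands moves the gathered energy into the gauntlet."""
        if self.phase is Phase.TRANSFER:
            self.gauntlet, self.backpack = self.backpack, 0
            self.phase = Phase.READY

    def raise_arm(self):
        """Gauntlet wearer raises their arm to release the energy."""
        if self.phase is Phase.READY:
            self.gauntlet = 0
            self.phase = Phase.FIRED
```

Note how each transition requires an action from a different player, so neither wearable is useful alone; the interdependence is built into the state machine itself.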
Kaho Abe, the game’s creator, is an accomplished independent game designer with a background in fashion; in designing Hotaru, she drew upon Japanese folk stories as well as popular culture. The game was inspired in part by the Japanese television franchise Kamen Rider, in which ordinary people transform into superheroes through a combination of wearables and special poses. Abe envisioned Hotaru as a way to transform a pair of players into an interdependent team of intrepid lightning bugs for a little while. Hotaru was partially funded by Eyebeam gallery, through a grant from the Rockefeller Cultural Innovation Fund. The project started through Eyebeam’s Computational Fashion initiative, which paired artists and researchers to bridge the divide between the computation and fashion communities, toward novel insights about mutually relevant areas such as wearables. The game was featured at IndieCade Night Games in 2015 and at CHI Interactivity in 2016.
Hotaru opens a very different design space from the usual terrain of wearable computing—a vision of social wearables that involves neither social media, nor the collection of data, nor attention upon a screen of any sort. Instead, the aim is transformational co-presence in a shared playful objective, with an intense focus upon one another. (In its latest implementation, all cues from Hotaru to players come from light and sound from the costumes themselves.)
Hotaru arose from a collaboration that reached outside the research lab, bridging toward a community with a dramatically different way of thinking about what it means for something to be an engaging wearable. The game was tested at Eyebeam gallery and iterated on through play sessions at several other public play festivals, as well as within HCI forums such as the TEI conference. It was shaped through feedback and resistance from design communities and audiences outside the realm of HCI and engineering practice, allowing a unique vision of wearable social augmentation to emerge. We believe Hotaru underscores the value of seeking alternative user communities, publics, and allies for speculative user experience research.
In this cultural moment, HCI researchers and designers are shaping interaction paradigms for near-future social-support technologies that make use of cameras and other sensors placed in our everyday environments, as well as on and even in our bodies. The work outlined in this annotated portfolio offers insights about specific ways in which we can fruitfully shift current practices—for example, aiming for more flexible coupling of devices and individuals, and considering the social space between bodies. The larger project described here also offers two strategies for helping to ensure we develop rich, nuanced, supple, subtle, vibrant visions and prototypes for this domain moving forward.
Playful, “out of the box” design through games. Research investigating the augmentation of social interaction and connection using near-future technologies can benefit from creative and playful thinking about the coupling of computation, sensing, and human social activity to create new insights and directions. Games are a useful vehicle for generating prototypes of sociotechnical configurations to reveal potential and pitfalls in ideas. They can offer us a deeply social orientation to viewing constellations of people, and sensing and output technologies. Game design emphasizes moment-to-moment engagements, aesthetics, and emotional outcomes. These can be helpful counterbalances to our field’s continuing tendencies toward task-, efficiency-, and data-driven social scenarios. Being social is not just about getting things done; it’s also about having a pleasurable and perhaps even transformative experience together. Using games and play in our design thinking can remind us of these opportunities for augmentation of experience that may otherwise tend to go missing from the ever-shifting landscape of devices and social relations. Games can also help us break up assumptions embedded in our current devices about how we might use computational sensing and output to shape social relations.
Engaging communities of play and resistance. Design of nuanced, subtle, and rich future social augmentation can be enhanced through cultivating resistance from communities outside our current technological and engineering practice. Fashion designers and independent game makers are two such communities of practice. Neither group shares HCI’s current dominant assumptions about wearables (e.g., product as data-mining opportunity, task and productivity emphasis, screen as primary output to end user). Shaping the future of social augmentation should send us far afield, into diverse communities with different values and visions for the future, and different experience with the intersection of technological materials and sociality. Our research team benefited from engagement with these communities, from start to finish in the design process.
Game making and game makers have a great deal to offer the HCI community in meeting the challenges of near-future interactive technologies and their impact on our daily lives and interactions. I hope this article has been instrumental in revealing some of that potential.
I am grateful to Yahoo! Research, the Rockefeller Cultural Innovation Fund through Eyebeam gallery, and Alcatel/Lucent Bell Labs for support of the research projects described in this article. Thanks to Paul Dourish, Bill Gaver, and Terry Winograd for reviewing the manuscript and giving helpful suggestions for revisions. Finally, thanks to the many students, artists, and designers who worked on the games and research described in this article. In particular, thanks to Holly Robbins and Elena Márquez Segura, whose research efforts were discussed here (other contributors can be found in the cited papers).
1. Bilton, N. Disruptions: More connected, yet more alone. The New York Times Bits Blog. Sept. 1, 2013; http://bits.blogs.nytimes.com/2013/09/01/disruptions-more-connected-yet-more-alone/
7. Zimmerman, E. Jerked around by the Magic Circle—clearing the air ten years later. Gamasutra. Feb. 7, 2012; http://www.gamasutra.com/view/feature/135063/jerked_around_by_the_magic_circle_.php
15. Gaver, W.W., Bowers, J., Boucher, A., Gellersen, H., Pennington, S., Schmidt, A., Steed, A., Villars, N., and Walker, B. The Drift Table: Designing for ludic engagement. Proc. of CHI EA 2004. 885–900.
20. Mueller, F., Stellmach, S., Greenberg, S., Dippon, A., Boll, S., Garner, J., Khot, R., Naseem, A., and Altimira, D. Proxemics play: Understanding proxemics for designing digital play experiences. Proc. of DIS 2014. 533–542.
Katherine Isbister (www.katherineinterface.com) is professor of computational media at the University of California, Santa Cruz, and part of the core faculty in the Center for Games and Playable Media. Her research is at the intersection of games and HCI. She is author of How Games Move Us: Emotion by Design, from MIT Press.
Figure 5. Groups of school children mingled to play Pixel Motion (typically the school group members would all wear a single shirt color, making it easy to see when different groups of children played together).
Copyright held by author. Publication rights licensed to ACM.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.