Looking ahead

XVI.2 March + April 2009
Page: 67

UNDER DEVELOPMENT
Electronic tablecloths and the developing world


Authors:
Gary Marsden

Within the HCI community, many of us are working on how to create appropriate technologies for people living in developing countries. Some people are involved in deep ethnographies—studying and living with users in remote communities—while more technology-driven projects seek to establish appropriate digital infrastructure (for example, WiFi networks). Reports on many of these projects have appeared in this forum over the past few years.

In doing this work, researchers can sometimes feel isolated or at odds with colleagues who are working on more mainstream problems. Certainly, mainstream HCI techniques do not always make a smooth transition to a developing-world context. Pure participatory design is an example: How do you get someone to suggest a design for a computer system when they have never seen a computer? We get locked into thinking that we need special methods to deal with users from the developing world, and we lose sight of the fact that it isn’t the users who are different; it’s the environment in which they reside.

This was brought home to me recently as I sat through Bill Gaver’s closing keynote at DIS 2008. For those of you not familiar with Gaver’s work (shame on you), he creates devices, such as the History Tablecloth, that explore new ways of interacting with digital technology. The History Tablecloth uses weight sensors to detect objects being placed on the cloth’s surface. Over time the cloth emits light around the objects, almost like a halo, the intensity of which increases with the length of time the object rests on the table.

Gaver’s main point was that his devices were not designed to serve a particular goal or solve a problem, but instead offered new opportunities and resources to people. That meant it was up to the users to reason about what the devices were for or how they might fit into their lives. When users eventually found a use for a product, it was usually not what the designers had expected. This led Gaver to conclude that he was unlikely ever to predict all the possible uses of his designs—and that this was a good thing. In the case of the History Tablecloth, for example, users deliberately placed objects on the cloth to create interesting patterns, whereas the research team had anticipated that it would be used as a history mechanism, recording household interactions as a pattern on the cloth. The best one could do, he concluded, was to recognize that users’ stories were as valid as those of the researchers. So, in his current work he ensures that some aspect of the design is flexible, allowing users to impose their own desires on it. His Plane Tracker system, for example, uses aircraft transponder signals to display a Google Earth visualization of a plane’s route onto a screen in a user’s living room as the plane flies overhead. However, no geopolitical data was layered on top of the visualization, in an effort to see how users would interact with the little data that was presented. Would they augment the experience by looking up the route in an atlas, or be content to see the trip as a purely aesthetic experience?

Coming back to users in the developing world, the issues are surprisingly similar. Here, too, we have users being exposed to novel technology, which makes it impossible for them to intuitively understand how that technology might fit into their lives. Many of the researchers I have talked to have stories of how their work failed because they had not properly understood the users’ needs or the social and economic impact of the context and culture in which they were working. Indeed, I have many of these stories myself. However, the message I took from Gaver’s talk is that failure (in the sense of accurately predicting usage) is inevitable and must be built into the design process.

This is subtly different from the argument advocating the use of prototypes. Prototypes support “fast fail”—they let you get the bad ideas out of the way so you can get on with refining the good ones. However, Gaver’s target users, like those in developing countries, find it hard to give feedback on how a technology could best be used in their lives when presented with a half-formed prototype. Hence the artifacts that Gaver designs are finished to a very high standard before they are deployed, so that the leap the users have to make is not so great.

Inspired by these insights, my colleagues and I decided to follow Gaver’s methodology in our most recent study. This involved the creation of an electronic notice-board system that allows users to download multimedia content for free onto their Bluetooth-enabled cellular handsets. The work was inspired by earlier research showing that many users in the townships around Cape Town had sophisticated cellular handsets, capable of playing MP3 or 3GP files, but lacked the finances to download such files.

Instead of interviewing these users and including them in design sessions to discover what they would do with a system that lets them download multimedia files for free, we went ahead and built the system. We tested it in many scenarios and with many users around our university to ensure that it was robust and reliable before deploying it in a township community hall. The final system that we deployed allowed users not only to download files but also to upload them. We felt this would provide the flexibility that Gaver had recommended, allowing the users to place their own interpretation on the technology.
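The article does not describe how the notice board was implemented, but to make the setup concrete, the following is a minimal, purely illustrative sketch of the kind of scanning loop such a Bluetooth kiosk might run. It assumes the PyBluez library; the file exchange itself (for example, via OBEX Object Push) is omitted, and all names are hypothetical rather than taken from the deployed system.

    # Illustrative sketch only: a kiosk-side loop that watches for nearby
    # Bluetooth handsets. Assumes PyBluez; the deployed system's actual
    # design and file-transfer mechanism are not described in the article.
    import time
    import bluetooth  # PyBluez

    def scan_for_handsets(seen_addresses):
        """Run one Bluetooth inquiry scan and report any newly seen handsets."""
        nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
        for address, name in nearby:
            if address not in seen_addresses:
                seen_addresses.add(address)
                print(f"New handset in range: {name} ({address})")

    if __name__ == "__main__":
        seen = set()
        while True:
            scan_for_handsets(seen)
            time.sleep(30)  # pause before the next inquiry scan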

Before deploying the system, we had to think about how we would evaluate it. What would count as success? Our experience in running evaluations in the developing world has shown a very strong Hawthorne effect, with the subjects keen to give experimenters the results they require. Clearly, observing what was going on at the community hall would preclude unbiased results. In Gaver’s work, he employed professional documentary makers who would interview participants and create a video presentation that drew out the points that seemed most relevant to the subjects. In this way, Gaver’s team could get an unbiased interpretation of the results from the intervention.

In our project, we felt nervous about unleashing a documentary team on our subjects, who would have had little experience in dealing with interviews. Having them followed around by a video team was unlikely to create an accurate reflection of the users’ true feelings. Instead, we adapted the documentary approach to our context. Rather than finding a professional team to do the interviews, we recruited two journalism students, who were from the same language group as our subjects. These students were able to interview the subjects in a nonthreatening way, but due to their training, they could report results to us in a way that allowed us to assess the technology’s impact. We told the students only that we wanted them to find out how some new technology had affected the lives of the people living in the target community; they didn’t know that we had placed the technology there in the first place.

So, after training some users on how to interact with the system and lining up the journalists to conduct the evaluation, we sat back to see what would happen.

The results from the intervention were both surprising and encouraging. As Gaver had predicted, no amount of ethnographic study or consultation would have predicted the ways in which people used the system. For example, the board became a venue for women in choirs to exchange local gospel music. On weekends they recorded their performances on the handset and then uploaded the recordings to share with ladies from other choirs. This usage was discoverable only by creating a complete, robust system that users could appropriate in ways that were truly complementary to their lifestyles.

So should we give up prototyping and just build complete systems and hope they work out fine? Definitely not. There are many instances in which low-fidelity prototypes are entirely appropriate and will help resolve design issues. However, when one is considering how technology is appropriated (as opposed to discovering if it is usable), it is important to remember that even the users themselves cannot predict how a given technology might fit into their lives.

Therefore, based on our recent experience, we recommend that you not despair if users do not appropriate your technology in the way you anticipated. Rather, embrace uncertainty and build it into the system so that users can modify the technology to meet their needs. In our case, we did that by not prescribing a use for the system—for example, using it to distribute health information to a user’s handset. By allowing users to contribute any form of information to the system, we created the space for them to explore ways of appropriating the technology. As a result, we found an application that was unlikely to have emerged through any other means. By using journalism students who were familiar with the users’ culture and language, we were also able to assess the impact of that application in a way that would not have been possible using direct observations or even questionnaires—creating a questionnaire would almost certainly have required us to anticipate all possible outcomes, narrowing our evaluation to only those we could foresee.

This experience has inspired me to think more about why doing interaction design with developing-world users differs from working with users in the developed world. Understanding the divergences will provide insight into which methods can be applied in both developed and developing domains.

Author

Gary Marsden is currently employed as an associate professor in the department of computer science at the University of Cape Town in South Africa. He was born in Ireland, studied in Scotland, and had his first job in London. Although his background is in computer science, moving to South Africa has forced him to reconsider his views about technology in general and HCI in particular.

Footnotes

DOI: http://doi.acm.org/10.1145/1487632.1487648

Figures

Figure. The “History Tablecloth,” developed by the Interaction Research Studio, is an example of embedding computing in everyday objects. When items are left on the cloth, it begins to glow beneath them, creating a slowly expanding halo. When the items are removed, the glow gradually fades.

©2009 ACM  1072-5220/09/0300  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.

 
