Emerging approaches to research and design practice

XV.6 November + December 2008

LIFELONG INTERACTIONS: Understanding children's interactions


Authors:
Janet Read, Panos Markopoulos


It was Peter Medawar who wrote, "Today the world changes so quickly that in growing up we take leave not just of youth but of the world we were young in..." The world of interactive technology changes so rapidly that for most adult observers, the interactive world inhabited by children is both unknown and, once entered, only partly understood.

Some interaction design researchers have tried to make sense of children's interactive technology by immersing themselves, as much as they are able, in children's worlds. In particular, these researchers place great emphasis on involving children in the design and evaluation of interactive technologies, both to learn about the technologies and to learn about children's interactions.

This immersion was, to some extent, a result of the considerable activity in the study of interactive technology for children that took place about 10 years ago. One product of this era was the seminal work by Hanna, Risden et al. on usability testing with children [1]. This work was published in interactions at a time when the emphasis on the design of interactive technology for children was shifting from a concern with educational aspects to a more general interest in designing for children [2].

As it was set in an industrial landscape (the authors of this work were usability engineers at Microsoft), the paper provided well-considered advice for "would-be" evaluators of children's interactive technology at a time when the inclusion of children in the testing and design of their own products was only just gaining ground.

So What Did We Know 10 Years Ago?

The paper offered practical advice regarding the setup and planning of a lab-based evaluation session with children. This advice included:

  • Make the lab child friendly
  • Use input devices that the children are familiar with
  • Use recording devices and one-way mirrors sympathetically
  • Give younger children shorter lab times than older ones
  • Be aware that children get tired; shuffle tasks around

The authors then made several suggestions for how to make children comfortable. They described methods for getting to know the children and making small talk with them. They emphasized the need to make children aware that the interactive technology, not the child, was what was being tested, and they stressed the need to ensure that the children's expectations were met (if they came expecting fun, they should have fun!). Instructions were included for would-be evaluators on making the children, and their parents and siblings, comfortable in the lab. Test task design was discussed as well, with a recommendation for test tasks that can be broken down into bite-size chunks. In particular, it was stressed that the evaluator needed to ensure that all children, including those with reading difficulties, could easily understand any instructions associated with the test.

What Has Changed?

If time stood still, and technology and children never changed, the original work by Hanna, Risden et al. would no doubt still be as valid now as it was then. But as we all know, nothing stays the same, and in the dynamic area of interactive technology and children, change is inevitable and rapid.

Technology has changed. In 1997 the iPod had not been invented, the Internet was primarily dial-up, and the phone tended to have a cable attached. "Mobile computing" referred to heavy laptops, ubiquitous computing was still not much more than an idea, and RFID technology was restricted to cows' ears! Many new technologies cannot easily be evaluated in usability labs, and as technology has changed, usability is no longer the only attribute of interest. There is now a much greater emphasis on fun, desirability, and user experience.

Social changes. The world now feels much smaller than it once did. Online communication has grown; children now communicate effortlessly in social networks and chat online in much the same way as they do face to face. Schools now emphasize collaboration within online learning environments, and children play online computer games with other children they will never meet. In these contexts, evaluations of interactive technology need to take account of children working together, over time and across locations. The lone child at a computer is becoming a rarity.

Political changes. No longer can children be regarded simply as subjects in user tests. Changes in legislation and in children's roles in communities mean that they now have greater social capital than ever before. Children expect to be included in the design of their worlds; they certainly know about technology and have a lot to say. This confidence can unnerve the less savvy evaluator. Twenty-first-century evaluators need to ensure that children's rights are observed while also giving them a voice.

New evaluation methods. As would be expected, the past 10 years have seen research into the use and usability of different evaluation methods with children. Notable examples include new methods such as peer tutoring [3], studies that validate existing methods [4, 5], and work that creates and validates new tools for use with children [6]. In planning an evaluation, as in all work with interactive technology and children, we need to take note of what is already known; failing to learn from the research of others can often result in a poor experience for children.

Giving the Original Guidelines a Makeover

The original guidelines remain highly relevant because, after all, a child is still a child. However, in light of the changing times, three areas appear to need some adjustment: timing, screening, and participation. There is also new knowledge about interactive technology and children, so some additional guidelines are worth being aware of.

In the original work, the researchers ran usability tests lasting between 30 minutes and an hour. In our experience, and that of many others, this now seems rather liberal. Maybe children have shorter attention spans than they used to, but modern young children can often concentrate for only very short periods—as short as 10 minutes—and even older children find sessions beyond 30 minutes problematic. A good rule is to keep evaluations as brief as possible. It is possible, with short breaks and a sufficiently engaging product (maybe a game), to keep children for longer, but this is more an exception than a recommendation.

The screening of children for participation in evaluations of interactive products might be a necessary evil in the time-poor world of commercial usability testing (in which you really might not want children who cannot read), but in a world where equality and inclusion are center stage, as many children as possible should be allowed to join evaluations, even if their contribution might not be useful to the researcher or test administrator. Nowadays, the mantra should be, "the child's experience matters as much as the evaluators' results!"

In designing for the child's interactive experience, whereas Hanna, Risden et al. advocated keeping a parent or adult with the child, it is now more common to pair children with a friend (which, after all, is what generally happens in the real world when children use interactive technology), with the parent playing a more disconnected role. At the same time, in an era of litigation and concern for the safety of children, it is necessary to warn against any situation that places a single tester in a room with a single child.

Some Extra Tips

The new advice given here comes in three parts: planning before the evaluation, tips for during the evaluation itself, and guidelines for wrapping up.

Planning the evaluation. Because children are at once predictable and unpredictable, it is important to plan well. In particular, carrying out a pilot evaluation that mimics the real evaluation as closely as possible is valuable. This pilot will demonstrate whether the chosen recording methods are sensible, whether the test tasks are doable, and whether any survey instruments are age-appropriate. Before the real evaluation of the technology, there are often some design-and-create activities to complete: Logging sheets for evaluators, survey instruments for use with the children, or diaries for the evaluation process might all need designing and piloting. Once these tasks are complete, there will also be a need to arrange transport, obtain consent from the children and their guardians, book rooms, arrange refreshments, and carry out a risk assessment.
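
Where logging is done electronically, even a very simple structure helps keep the pilot and the real sessions comparable. The following is a minimal sketch, not part of the original guidelines: a hypothetical evaluator's logging sheet kept as a CSV file in Python, with field names of our own invention.

```python
import csv
from datetime import datetime

# Hypothetical logging-sheet fields; adapt these to the study being piloted.
FIELDS = ["timestamp", "child_id", "task", "observation", "evaluator"]

def new_logging_sheet(path):
    """Create an empty logging sheet with a header row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(FIELDS)

def log_observation(path, child_id, task, observation, evaluator):
    """Append one timestamped observation. The child_id should be an
    anonymous code, never the child's name."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), child_id, task, observation, evaluator]
        )

# Piloting the sheet before the real evaluation:
new_logging_sheet("pilot_log.csv")
log_observation("pilot_log.csv", "C01", "task 1", "asked for help twice", "JR")
```

Piloting the sheet in this form quickly shows whether the fields capture what evaluators actually observe, which is exactly what the pilot evaluation is for.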

Different locations and different methods. As interactive technology has become more mobile, and as schools have become more open to it, evaluations in labs are now quite rare. When looking at technology in schools, it is necessary to work within the structure and confines of the school day. The lesson length, for instance, is often a fixed constraint around which the evaluator will need to plan. Outdoor evaluations are difficult to control; our advice is to keep the evaluation as simple as possible, rely as little as you can on supporting technology, and carry out a very careful risk assessment. In many locations there will need to be a bad-weather backup plan.

Four methods that have been studied in some depth over the past 10 years are diary methods, think-aloud methods, surveys, and the Wizard of Oz method. Diary methods are well suited to home evaluations and to those that take place over a length of time; think-aloud, previously assumed to be unusable with younger children, has been shown to work with children as young as seven and eight; and for surveys, many evaluators now use the Fun Toolkit, a validated method for gathering children's opinions of technology [6].

The location of an evaluation, and the maturity of the product being evaluated, can dictate the method used. Diaries, for example, can be a good choice for evaluations at home, and Wizard of Oz studies (in which children interact with a partially functional product that is in part "driven" by an unseen assistant) can be very handy when a fully working system is not available. Note that Wizard of Oz studies raise ethical issues, especially around the use of deception. After the event, researchers will need to tell children that deception has occurred and give them the opportunity to withdraw their consent; wherever possible, open configurations, in which the wizard can be seen, should be used.
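
To make the Wizard of Oz setup concrete, here is a minimal, hypothetical sketch of a text-only harness; it is not taken from any of the studies cited, and a real setup would put the wizard at a separate, hidden console rather than sharing one terminal as this illustration does. The child types into the program, the wizard types a reply, and the program presents that reply as if the system had produced it.

```python
def wizard_of_oz_session(log_path="woz_session.log"):
    """Run a text-only Wizard of Oz loop, logging every exchange so that
    the deception can be explained honestly when debriefing the child."""
    with open(log_path, "a") as log:
        while True:
            child = input("CHILD> ")               # what the child types
            if child.strip().lower() == "quit":
                break
            # In a real study this prompt appears only on the wizard's
            # hidden screen; the child never sees it.
            reply = input("WIZARD (hidden)> ")
            print(f"SYSTEM: {reply}")              # shown to the child
            log.write(f"child: {child}\nsystem: {reply}\n")

if __name__ == "__main__":
    wizard_of_oz_session()
```

Because every reply is logged as human-generated, a harness like this also supports the debriefing described above: the researcher can show the child exactly where the "system" was really a person.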

Wrapping up. After evaluations, researchers need to thank the children and tell them what they have contributed; as outlined earlier, a child who participates in an evaluation has some right to know what the point was. Alongside the children, the researcher will often need to thank teachers and parents, and the more information that can be shared about the nature and purpose of the evaluation, the better.

Back at the lab, the modern-day evaluator can breathe a huge sigh of relief (once the data has been anonymized, safely stored, and tagged) after what is often a noisy, but very enjoyable, day's work. The information gathered should inform better design of interactive products for children, but the time spent with children will have deeper, less tangible benefits: an understanding of the child's world, a moment to lapse back into a space long departed, and a gentle, much-needed confirmation that humanity still has possibilities.

References

1. Hanna, L., K. Risden, and K. Alexander. "Guidelines for Usability Testing with Children." interactions 4, no. 5 (1997): 9–14.

2. Druin, A., ed. The Design of Children's Technology. San Francisco: Morgan Kaufmann, 1999.

3. Höysniemi, J., P. Hämäläinen, and L. Turkki. "Using Peer Tutoring in Evaluating the Usability of a Physically Interactive Computer Game." Interacting with Computers 15, no. 2 (2003): 203–225.

4. Baauw, E., M.M. Bekker, and W. Barendregt. "A Structured Expert Evaluation Method for the Evaluation of Children's Computer Games." In Human-Computer Interaction–INTERACT 2005, edited by Maria Francesca Costabile and Fabio Paternò. New York: Springer, 2005.

5. Als, B.S., J.J. Jensen, and M.B. Skov. "Comparison of Think-Aloud and Constructive Interaction in Usability Testing with Children." In Proceedings of the 2005 Conference on Interaction Design and Children (IDC '05), Boulder, Colo., 2005.

6. Read, J.C., and S.J. MacFarlane. "Using the Fun Toolkit and Other Survey Methods to Gather Opinions in Child Computer Interaction." In Proceedings of the 2006 Conference on Interaction Design and Children (IDC '06), Tampere, Finland, 2006.

Authors

Dr. Janet Read is director of the Child Computer Interaction Group at the University of Central Lancashire (UCLan) in the UK, and Dr. Panos Markopoulos is an associate professor at Eindhoven University of Technology in the Netherlands. Dr. Read has a first degree in mathematics from the University of Manchester and a Ph.D. in child computer interaction from UCLan.

Dr. Markopoulos studied computer science as an undergraduate at the National Technical University of Athens and specialized in human-computer interaction at Queen Mary, University of London, where he also did his doctorate on formal methods in human-computer interaction. Both authors have been heavily involved in the Interaction Design and Children conference series: Dr. Markopoulos co-chaired, with Mathilde Bekker, the first Interaction Design and Children conference in 2002, and Dr. Read co-chaired the follow-up event in 2003. Together the authors have presented several tutorials and workshops on child computer interaction and interaction design for children, and have recently, with Stuart MacFarlane and Johanna Höysniemi, written a specialist book entitled Evaluating Children's Interactive Products: Principles and Practices for Interaction Designers, published by Morgan Kaufmann.

Footnotes

DOI: http://doi.acm.org/10.1145/1409040.1409047


©2008 ACM  1072-5220/08/1100  $5.00

