Forums

XXII.2 March + April 2015

Children as participants in design and evaluation


Author:
Janet Read

The HCI community has long advocated the use of user studies to test and evaluate interactive systems. There is much to be learned by watching users interact with systems, both as novices and as experienced users.

It is generally thought that an expert inspection of a system is a poor substitute for user testing. One main argument for having users test systems is that they will typically do things that experts would not predict. The expert inspector “guesses” what might happen based on his or her knowledge of the product and the intended users. This guesswork requires quite a mature understanding of the system’s context: its users, its uses, and the ways it is used.


The child computer interaction (CCI) community has a history of promoting children as active participants in research, design, and evaluation. The motivation for the involvement of children as evaluators has been that there is a considerable distance between children and any expert “guessers” evaluating the system on their behalf.

In our work in the ChiCI lab, we have engaged with children both as participants in design and as participants in evaluation. While a clear view on how children can contribute to design is still forming, we firmly believe there is no substitute, in usability testing and user studies, for having a product tested and experienced by a real child. This belief comes not only from having seen some of the crazy and wonderful things children do with technologies, but also from a logical analysis of how one might test something. In our practice, we have seen children act in ways that could not have been predicted by an expert. For instance, we’ve seen children deliberately choose to work with a tablet held upside down (before screens auto-rotated) and children shake digital pens (for more digital ink!) when an ink trace didn’t appear on the digital surface. The logical case for user involvement is that, provided there is no “risk,” it makes sense to have at least some users try out technologies and systems. It should be noted, however, that children do not unearth everything there is to be known about a system, especially when they are young, so there is still a need for a parallel expert evaluation.

While watching children in natural interactions with technology can teach us much, a further reason to involve children in evaluation is to gather their opinions of products and features, especially when variations in these might affect motivation, performance, or fun. Again, this is not something that can easily be predicted by an expert, and our own work on measuring fun has shown several interesting things about the expectations of children and their later opinions of technologies. Therefore, there is much enthusiasm for working directly with children in both usability and user experience testing, and there are many good reasons to do so.

However, the involvement of children in usability studies and in evaluation studies is not without some difficulties. There are practical concerns around arranging studies and recruiting children, there are methodological concerns in terms of ensuring that children can contribute in meaningful ways, and there are ethical concerns around the meaning of the children’s participation.

The first two of these have been reasonably well covered in the literature. In practical terms, the classic workaround is to carry out evaluations in schools or in afterschool clubs. Developing relationships with schools takes time: a research group has to invest in each school, ensure the work done does not “get in the way” of the curriculum, and fit around the school’s schedule. Our group has a set of guidelines for working with schools (see http://www.chici.org/schools). These guidelines include common-sense things such as ensuring there is a back-up activity for children who finish early, making sure there are adequate electrical sockets (many schools are woefully short of these), and making sure the Internet connection can be used if needed.

Methodologically, the CCI community has put considerable effort into designing and modifying methods to be used with children in usability and evaluation studies. Many of these are described in the literature [1]. Recommendations for think-aloud studies, modifications of survey methods, uses of Wizard of Oz, and new techniques including peer tutoring are all discussed. The most essential element in developing methods for use with children is to pilot any method with children of the appropriate age. Many studies fail because the adult evaluator is too far removed from the children in terms of understanding their vocabulary, their abilities, their context, and their motivations, so the experience at best is bad, and at worst is damaging for the children participating. These sorts of studies may well expose many problems with software and gather some half-useful opinions, but they damage the reputation of the CCI community and do little to encourage children to explore science and scientific inquiry, which one would hope might be a by-product of participating in a well-structured usability study.

This leads to the third issue when carrying out usability studies with children. There is understandable concern about the ethics of including children as contributors to software development or research. In the main, the ethics of child involvement center on consent. In university work, there is generally a requirement that work with children be cleared through an ethics review process in which adults are asked to explain how children’s consent will be gained and to detail what information will be given to parents and children ahead of, and during, the study. These processes are intended to protect the institution (in these cases, the university). Optimally, completing an ethics form would raise awareness of issues around informing children and gaining their consent, and would result in the children being more carefully considered before any usability study. In reality, individuals often complete the forms by rote, falling back on a set of known protocols.

Our work has sought to better manage and understand the ethics around children’s participation in HCI research, as both designers and evaluators. This has led us to develop a protocol for examining our work that goes over and above a standard ethics form. Our starting point has been what we tell the children. In earlier work, we would typically begin an evaluation session by telling the children that we needed their feedback for our software development; this was true, but it was a scant explanation of what we were doing. On examination, we realized that we were not being entirely honest and that we should perhaps talk more about research, about our university, about funding, about their participation, and about possible future uses of the data or information we were collecting. We have developed two checklists, CHECk1 and CHECk2, that help us examine these aspects; for each question we ask of ourselves, we consider the “honest” reason as well as the “excuse” reason. One such question is “Why are these children chosen?” An excuse answer might be “because we know they can give us great feedback,” but a more honest answer might be that their school was the first to offer us a chance to work with them. The first set of questions (CHECk1), as applied to evaluations, includes:


What are we aiming to evaluate?

  • Why this product? (Excuse answer)
  • Why this product? (Honest answer)

What methods are we using?

  • Why these methods? (Excuse answer)
  • Why these methods? (Honest answer)

Which children will we work with?

  • Why these children? (Excuse answer)
  • Why these children? (Honest answer)

The process of going through these questions helps us examine what we will say to the children so that in the second checklist (CHECk2) we are asking ourselves why we are doing things and what we will tell the children [2].

  • Why are we doing this project? What do we tell the children?
  • Who is funding the project? What do we tell the children?
  • What might happen in the long term? What do we tell the children?
  • What might we publish? What do we tell the children?
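
For groups that want to fold these checklists into routine study planning, the questions are simple enough to capture as structured data. The short Python sketch below is purely illustrative: the published CHECk tools are reflective checklists, not software, and the CheckItem structure and unanswered() helper are hypothetical names introduced only for this sketch. It pairs each CHECk1 question with an excuse answer and an honest answer, pairs each CHECk2 question with what the children will be told, and reports anything still missing before a study goes ahead.

    from dataclasses import dataclass

    # Hypothetical encoding of the CHECk questions. The published CHECk
    # tools are paper checklists, not software; this sketch only shows
    # how a study team might track its own answers.

    @dataclass
    class CheckItem:
        question: str
        excuse_answer: str = ""  # the convenient, surface reason
        honest_answer: str = ""  # the franker reason, for self-examination

    # CHECk1: the self-examination questions, as applied to evaluations.
    CHECK1 = [
        CheckItem("Why this product?"),
        CheckItem("Why these methods?"),
        CheckItem("Why these children?"),
    ]

    # CHECk2: each question is paired with what the children will be told.
    CHECK2 = {
        "Why are we doing this project?": "",
        "Who is funding the project?": "",
        "What might happen in the long term?": "",
        "What might we publish?": "",
    }

    def unanswered(check1, check2):
        """List the questions still lacking an honest answer (CHECk1)
        or a child-facing explanation (CHECk2), so the study does not
        go ahead before the self-examination is complete."""
        gaps = [item.question for item in check1
                if not (item.excuse_answer and item.honest_answer)]
        gaps += [q for q, telling in check2.items() if not telling]
        return gaps

Run on the empty checklists above, unanswered(CHECK1, CHECK2) returns all seven questions; a team might treat an empty return value as the precondition for recruiting children.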

Here is an example of how these checklists might be applied. It is a common idea in HCI and CCI to carry out a user evaluation after the development of a product. Recently our group developed a small pod device for use with teenagers. A logical process would be to evaluate this pod with teenagers from a school. In the examination of this evaluation, the questions could be answered as follows:

What are we aiming to evaluate? A pod that we have made.

  • Why this product? Excuse answer: Because it will eventually save energy, which will be great for the planet.
  • Why this product? Honest answer: Because we have made this and want to finish the project that funded it.

What methods are we using? The Fun Toolkit.

  • Why these methods? Excuse answer: They are specially designed for use with children.
  • Why these methods? Honest answer: So we can also use the data for a paper on the Fun Toolkit.

Which children will we work with? The teenagers in the local school.

  • Why these children? Excuse answer: They can be great evaluators.
  • Why these children? Honest answer: They were convenient to recruit.

This rather tongue-in-cheek completion of the checklist exposes a subplot of the user evaluation: the desire to get some data to check out the method.

Completing the second set of questions might look like this:

  • Why are we doing this project? Because we believe in it. What do we tell the children? The story about how we were attracted to the funding and about how excited we are to make a small difference.
  • Who is funding the project? Research councils, taxpayers, government. What do we tell the children? As above, but make it clear to them.
  • What might happen in the long term? The product might go on general sale. What do we tell the children? That they are contributing to scientific advancement.
  • What might we publish? We might publish about the methods used. What do we tell the children? Explain how publishing works and about methods being developed and about how we might evaluate them.

This second process highlights the need to clearly explain to children how research might be generated even from an evaluation study. Interestingly, it also brings up a by-product of working with children in this way: the opportunity to use evaluations as a means of exposing children to scientific thinking.

The development and use of the CHECk1 and CHECk2 tools have resulted in our taking a much more child-centered approach to consent, where we begin every study with an explanation of why these children are included, what research is, what the university does, who is funding the work, and where the work might end up. We have recently made it a requirement, wherever possible, to return to the children with results once these have appeared as academic papers or as products.

Where results might end up is always a difficult question to answer, and we have been looking at how to explain and justify to children where their contributions go in studies of children as participants in design [3]. In a usability study involving several children, our current view is that each child should be able to see the value of their contribution to the ultimate evaluation and development of any product. This raises questions about children being “used” simply to gather research data or test out a new product, so any adult evaluator has to be absolutely clear about each child’s individual, as well as collaborative, contribution. At this juncture, the would-be evaluator has an important question to add to the CHECk1 list: beyond “Why these children?” is the question “Why ALL these children?” That is: Can the inclusion of each child be justified and rationalized? Is each child clearly contributing, or are some children simply making up the numbers?

Children participating in usability studies and evaluation studies are not the same as adults. Their inclusion has to be justified, because they are less able to understand why they are participating. They need to be able to withdraw their consent, and to do that they have to be clearly informed. As participants in research (often, usability studies), they should understand the research as well as the evaluation activity. This should happen at the beginning of an evaluation study, because if these things cannot be reasonably justified, the evaluation should not take place. Children should be able to use methods they can relate to and understand, so their contributions can be meaningful to them as well as to the adult evaluator. These methods do need testing, but children need to be aware when that is the case. Individuals carrying out usability studies with children need to be trained in the ways of children and in the mechanisms around their contexts. Especially when studies take place in schools, adult evaluators must be familiar with how schools work and be sensitive to their needs.

Working with children is highly rewarding but comes with responsibilities. Opportunities to engage with children in evaluation and usability studies are also opportunities to introduce children to science and scientific thinking, to new technologies, and, in some cases, to higher education—for these reasons, it is important to get it right.

References

1. Markopoulos, P. et al. Evaluating Interactive Products for and with Children. Morgan Kaufmann, San Francisco, 2008.

2. Read, J.C. et al. CHECk: A tool to inform and encourage ethical practice in participatory design with children. CHI’13 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2013, 187–192.

3. Read, J.C., Fitton, D., and Horton, M. Giving ideas an equal chance: Inclusion and representation in participatory design with children. Proc. of the 2014 Conference on Interaction Design and Children. ACM, New York, 2014, 105–114.

Author

Janet Read is a professor of child computer interaction working in the U.K. She has been working in CCI for more than 15 years. She is the chair of the IFIP TC13 SIG on Interaction Design and Children and editor in chief of the International Journal of Child Computer Interaction. jcread@uclan.ac.uk

©2015 ACM  1072-5520/15/0300  $15.00
