Exploring the future

XVI.5 September + October 2009

LIFELONG INTERACTIONS

Data mining for educational “gold”


Authors:
Shalom Fisch, Richard Lesh, Elizabeth Motoki, Sandra Crespo, Vincent Melfi

Two eight-year-old girls are playing a computer game in which they have to fill gaps in a railroad track with different-size pieces of track:

“I think we’re supposed to use the 1 and then the 10.”

“Uh-oh. Can [we] subtract?”

“This is too confusing.”

They clear the pieces from the screen, then start again with a different strategy:

“This time, we’ll start with the mini-pieces…”

None of us is born with a separate part of the brain devoted exclusively to playing computer games. While playing games, we apply the same sorts of knowledge, inferences, and cognitive skills that we use in our offline lives too. Researchers who study human-computer interaction sometimes draw on broader theories of human cognition to explain users’ thinking while playing games, or note similarities between online and offline thinking and behavior [1, 2]. Indeed, research even shows that users’ interactions with machines are influenced by the same sorts of social rules that govern their interactions with other people—regardless of whether the machine is an animatronic doll or a desktop computer [3, 4].

When children play educational computer games, we might expect their reasoning to follow the same sorts of paths that they use to figure out similar educational content in real (offline) life. If so, this would not only help us understand children’s use of technology, but also present a significant opportunity for research. Successful educational games have a tremendous reach among children: For example, the mathematics-based Cyberchase website (www.pbskids.org/cyberchase) has logged more than one billion page views to date. Given the countless interactions generated while playing a game, data mining could yield a vast pool of evidence for investigating applied reasoning during naturalistic play.

As part of a major study of children’s learning from Cyberchase, our research team has been exploring the possibility of using online Cyberchase games—not only as instructional tools, but also as a means of simultaneously assessing children’s problem solving. Of course, the field of computer-assisted instruction (CAI) has long used games to teach and assess knowledge [5]. However, unlike traditional types of CAI, the Cyberchase games were not originally designed for assessment. In addition, whereas assessment in CAI frequently focuses on measuring the state of users’ knowledge or skills, we were more interested in observing the evolution of children’s strategies and mathematical thinking over the course of a game.

Piloting an Approach

Does gameplay reflect children’s understanding of educational content and strategies for problem solving? If so, are the data captured through online tracking rich enough to model the process of reasoning (as opposed to knowledge states or simply counting right answers)?

To find out, our researchers watched 74 third and fourth graders (27 girls and 47 boys) as they played three Cyberchase online games focused on decimals, quantity/volume, and proportional reasoning. For example, in the “Railroad Repair” game, players fill gaps in a train track by using pieces labeled with decimals between .1 and 1.0 (See Figure 1). Multiple correct solutions are possible. However, each length of track can be used only once per screen, so children must find multiple ways to create sums and plan ahead to ensure that all of the necessary pieces will be available when needed.
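To make the game’s planning demand concrete, here is a minimal sketch in Python (our own construction; the game’s actual logic is not published) of checking whether a screen is still solvable—that is, whether every remaining gap can still be filled, using each remaining piece at most once. Track lengths are written in tenths as integers to avoid floating-point error.

    from itertools import combinations

    def fills(pieces, gap):
        # All subsets of `pieces` whose lengths sum exactly to `gap`.
        return [c for r in range(1, len(pieces) + 1)
                for c in combinations(pieces, r) if sum(c) == gap]

    def solvable(pieces, gaps):
        # Can every gap be filled, using each piece at most once?
        # Brute-force backtracking; these screens are small enough.
        if not gaps:
            return True
        for combo in fills(pieces, gaps[0]):
            remaining = list(pieces)
            for p in combo:
                remaining.remove(p)
            if solvable(remaining, gaps[1:]):
                return True
        return False

    # Gaps of .5 and .9 with pieces .8, .5, .4, .1 (in tenths).
    # Filling the .5 gap with the .5 piece leaves .8 + .1 for the .9 gap;
    # filling it with .4 + .1 instead strands the screen.
    print(solvable([8, 5, 4, 1], [5, 9]))  # True
    print(solvable([8, 5], [9]))           # False: nothing left sums to .9

The second call illustrates the kind of dead end described below: once the wrong pieces are spent, no combination of the leftovers fits, and the player’s only recourse is to clear the screen and start over.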

Children played in pairs to facilitate conversation that could reveal ideas and strategies as they played. Simultaneously, custom-built tracking software automatically recorded their mouse clicks and keyboard input. Afterward, we interviewed the children about their strategies for playing each game and solving the problems.
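To give a flavor of what such logs contain, here is a hypothetical event record in Python (our illustration; the article does not specify the tracking software’s actual format):

    from dataclasses import dataclass

    @dataclass
    class TrackingEvent:
        session_id: str  # anonymous session code, not a child's identity
        screen: int      # which puzzle screen within the game
        time_ms: int     # milliseconds since the screen loaded
        action: str      # e.g. "piece_press", "piece_drop", "clear"
        piece: float     # decimal label on the piece involved, if any
        target: str      # gap or location where a piece was dropped
        correct: bool    # whether the drop exactly filled its gap

    # One logged interaction: the player drops a .4 piece into a .4 gap.
    event = TrackingEvent("s014", 1, 8320, "piece_drop", 0.4, "gap_1", True)

Even a schema this small turns out to be enough to support the strategy analyses described below.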

Just as in offline mathematical reasoning, the children’s mathematical strategies showed a range of sophistication. Moreover, just as in research on classroom mathematics [6], those children who used more sophisticated strategies often did not apply them immediately. Rather, they engaged in cycles of problem solving that began with less sophisticated strategies and progressed to more sophisticated approaches when necessary.

In “Railroad Repair,” many children began by using a matching strategy in which they matched the decimals shown (e.g., a .8 piece of track to fill a .8 gap). When this strategy later proved insufficient (e.g., they ran out of .8 pieces or needed to fill a larger gap), some switched to an additive strategy (e.g., combining .6 and .2 to fill a .8 gap). When this strategy, too, proved insufficient (e.g., the .2 piece was needed later), some adopted an advanced strategy in which they planned ahead, considered alternate ways to make sums, and reserved pieces they would need later.

Comparisons of observation, interview, and online tracking data revealed that these strategies could be detected via data mining too. Tracking data showed consistent patterns of online responses reflecting each strategy (matching, additive, or advanced), and clusters of errors when children’s strategies broke down and they shifted to new ones.
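As a sketch of how such coding might work (our own naming and simplifications, not the study’s actual analysis code), each completed gap can be labeled by how it was filled:

    def classify_fill(gap, pieces_used):
        # Label one completed gap with the strategy it reflects.
        # Lengths are in tenths (ints): 8 stands for a .8 gap or piece.
        if len(pieces_used) == 1 and pieces_used[0] == gap:
            return "matching"  # e.g., a .8 piece dropped into a .8 gap
        return "additive"      # e.g., .6 + .2 combined to fill a .8 gap

    print(classify_fill(8, [8]))     # matching
    print(classify_fill(8, [6, 2]))  # additive

The advanced strategy is not visible in any single fill; it shows up in the ordering of fills across a whole screen (smaller gaps first, pieces held in reserve), which is why the analysis must consider sequences of moves rather than isolated ones.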

An Example

Consider some partial tracking data (see Table 1) for one user playing “Railroad Repair.” On the first screen, the tracking software shows evidence of the player adopting a matching strategy, picking up a .4 piece (piece press) and placing it (piece drop) to fill a .4 gap. After accidentally putting the piece in the wrong location (row 2), the player then places it correctly (row 4).

On the next screen, the player continues the matching strategy, using a .8 piece to fill a .8 gap (rows 5–6). However, there is more than one .8 gap on this screen and only one .8 piece. Thus, after using the .8 piece, the player switches to an additive strategy, using two pieces (.7 and .1) to fill the second gap. After accidentally misplacing the .1 piece (rows 9–10), the player places it successfully (rows 11–12).

For the next several screens, the player keeps using the additive strategy until arriving at a screen where it is no longer sufficient. After filling several large gaps, the player combines a .5 piece and a .4 piece to fill a .9 gap (rows 13–15), only to find that all of the smaller pieces have been used up, which makes it impossible to fill the remaining small gaps on the screen. Recognizing this, the player hits the “clear” button to clear the screen (row 16). Then the player starts over with an advanced strategy, filling the smaller gaps on the screen first (rows 17–20), so that the smaller pieces are available when needed. Afterward, the player uses the remaining pieces to fill the larger gaps, which can be filled in a greater variety of ways.

Thus, children’s shifts in strategies were detectable, not only via in-person observations, but through online tracking data too. Changes in strategies were often associated with clusters of errors (when a player tried unsuccessfully to use different pieces to fill a gap), with use of the “clear” button (when a player recognized that a strategy wasn’t working), or with simply not having the necessary pieces available to fill the remaining gaps on the screen. We could identify—and differentiate among—instances when children either failed to progress beyond basic strategies, proceeded through more difficult problems via trial and error (without necessarily employing a fundamental change in their thinking), or shifted to more sophisticated strategies over the course of a game.
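A minimal sketch of how those signatures might be flagged automatically (again our own construction, reducing each logged event to an (action, correct) pair rather than the full record sketched earlier):

    def find_strategy_shifts(events, error_run=3):
        # Return indices in an event stream where a strategy likely
        # broke down: a run of `error_run` failed drops, or a "clear".
        shifts, errors = [], 0
        for i, (action, correct) in enumerate(events):
            if action == "clear":
                shifts.append(i)  # player abandoned the current approach
                errors = 0
            elif action == "piece_drop":
                errors = 0 if correct else errors + 1
                if errors == error_run:  # cluster of failed placements
                    shifts.append(i)
        return shifts

    # Mock session echoing the example above: failed placements pile up,
    # the player clears the screen, then succeeds under a new strategy.
    session = [("piece_drop", True), ("piece_drop", False),
               ("piece_drop", False), ("piece_drop", False),
               ("clear", None),
               ("piece_drop", True), ("piece_drop", True)]
    print(find_strategy_shifts(session))  # [3, 4]

Classifying what the player does after each flagged point (matching, additive, or advanced fills) is what lets the analysis distinguish trial and error from a genuine change in strategy.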

Opportunities and Challenges

Together, the results from our three games hold implications for researchers and practitioners interested in computer games, mathematics education, and/or assessment. For those interested in children’s use of educational games, the parallels between online and offline reasoning show that gameplay is influenced not only by players’ experience in playing games, but also by their understanding of the educational content embedded in such games. As in classroom math, children often do not display the same level of sophistication throughout a game (even if they are capable of relatively sophisticated reasoning). Rather, their mathematical reasoning may begin at a fairly basic level but become more sophisticated over the course of a game in response to its demands.

For educators, this similarity between online and offline reasoning also shows that games can provide a naturalistic, out-of-school context for assessing mathematical reasoning. Even in the absence of in-person observations, data mining can provide a window into rich processes of reasoning and problem solving. When recorded and coded appropriately, such data can reflect not only the outcomes of problem solving, but the process as well.

Our data also highlights several challenges that must be overcome to use tracking data effectively for measuring reasoning. First, as anyone who has analyzed online data knows, even a single user’s clicks produce massive amounts of data. The sample data presented here is from a single session—and even that one session produced a spreadsheet containing more than 120 rows of data. Multiplied across the thousands of users who might play a game in a single day, the volume of data can be staggering, posing challenges for both storage and analysis.
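The arithmetic is sobering even at modest scale. Taking our pilot’s 120-plus rows per session as a floor and assuming, hypothetically, 10,000 sessions per day:

    rows_per_session = 120      # lower bound observed in our pilot
    sessions_per_day = 10_000   # hypothetical figure for a popular game
    print(rows_per_session * sessions_per_day)        # 1,200,000 rows/day
    print(rows_per_session * sessions_per_day * 365)  # 438,000,000 rows/year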

Second, online tracking data must be limited to information that can be collected legally under the Children’s Online Privacy Protection Act (COPPA). To interpret data on gameplay or reasoning, researchers naturally look to characteristics such as players’ age, gender, or prior knowledge, but COPPA can make it difficult to gather this information online. Since our project was part of a larger research study (and parents gave written consent for their children’s participation), we could gather demographic information offline. However, outside such studies, researchers must either find alternate ways to obtain demographic data or do without.

Third, tracking data is effective only for behavior that players clearly and unambiguously perform on the screen. Tracking data was highly effective for “Railroad Repair,” but only partially successful for “Sleuths on the Loose”—another Cyberchase game about measurement and proportional reasoning (See Figure 2). In “Sleuths on the Loose” we could accurately record and code children’s answers, but it was harder to assess their use of measurement as a strategy for two reasons.

First, instead of using the on-screen “ruler” that served as a measuring tool, some children measured via alternate means, such as holding their fingers up to the screen; the software could not detect this sort of offline behavior. Second, because some children idly played with the on-screen ruler while thinking, movement of the ruler did not necessarily indicate an attempt to measure.

As our experience makes clear, educational games can provide a rich context for assessing and studying children’s naturalistic reasoning. Understanding children’s facility with educational content is an important part of understanding how they play educational games. Certainly, both games and tracking software must be designed carefully in order to produce useful data. But if designed properly, data mining can provide us with deep insight into children’s thinking and reasoning—without having to peek over children’s shoulders to do it.

Acknowledgements

This research was funded as part of a grant from the National Science Foundation (DRL-0723829). We gratefully acknowledge the staff, teachers, and students of the participating school. We also thank the Cyberchase production team (especially Sandra Sheppard, Frances Nankin, and Michael Templeton) for their support, and online producers David Hirmes and Brian Lee for building the tracking software used here. Finally, we are grateful to the field researchers who helped collect our pilot data: Meredith Bissu, Susan R.D. Fisch, Carmina Marcial, Jennifer Shulman, Nava Silton, Faith Smith, and Carolyn Volpe. Without them, this paper—and the development of this methodological approach—would have been impossible.

References

1. Mayer, R.E., and Moreno, R. “Nine ways to reduce cognitive load in multimedia learning.” Educational Psychologist 38 (2003): 43–52.

2. Moreno, R. “Learning in high-tech and multimedia environments.” Current Directions in Psychological Science 15 (2006): 63–67.

3. Reeves, B., and Nass, C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. New York: Cambridge University Press, 1996.

4. Strommen, E.F. “Interacting With People Versus Interacting With Machines: Is There a Meaningful Difference From the Point of View of Theory?” In Fisch, S.M., Theoretical Approaches Toward Integrating Cognitive and Social Processing of Media. Symposium presented at the biennial meeting of the Society for Research in Child Development, Tampa, FL., April 2003.

5. Rudestam, K.E., and Schoenholtz-Read, J., eds. Handbook of Online Learning: Innovations in Higher Education and Corporate Training. Thousand Oaks, CA: Sage, 2002.

6. Lesh, R.A., Hoover, M., Hole, B., Kelly, A., and Post, T. “Principles for developing thought-revealing activities for students and teachers.” In Handbook of Research Design in Mathematics and Science Education, edited by A.E. Kelly & R.A. Lesh, 591–646. Mahwah, NJ: Lawrence Erlbaum Associates, 2000.

Authors

Shalom Fisch is president of MediaKidz Research & Consulting. For more than 20 years, he has applied educational practice and empirical research to help create effective educational media for children.

Richard Lesh is the Rudy Distinguished Professor of Learning Sciences at Indiana University and president of PRISM Learning. His academic interests include research and assessment design in mathematics and science education, as well as learning and problem solving, mathematics teacher education, and computer-based curriculum development.

Elizabeth Motoki is a doctoral student in the Mathematics Education Program at Indiana University. She received her B.A. and secondary credential in math in BC times (before calculators/computers) from San Jose State University, and has taught math in grades 6–14 in Hawaii, California, and Malaysia.

Sandra Crespo is an associate professor in the Department of Teacher Education at Michigan State University. Her research interests include exploring learning environments and teaching practices that promote mathematical inquiry.

Vincent Melfi is associate director for the mathematical sciences in the Division of Science and Mathematics Education at Michigan State University.

Footnotes

DOI: http://doi.acm.org/10.1145/1572626.1572640

Figures

Figure 1. Sample screen from the Cyberchase “Railroad Repair” game.

Figure 2. Sample screen from the Cyberchase “Sleuths on the Loose” game.

Tables

Table 1. Partial tracking data for one user playing “Railroad Repair.”


 
