Features

XXXI.6 November - December 2024
Page: 28

Adventures in AI Wonderland: How Children Are Shaping the Future of AI


Authors:
Eliza Kosoy, Emily Rose Reagan, Soojin Jeong


It is nearly impossible to navigate the modern world without being bombarded with artificial intelligence. It has become pervasive, with a reach that crosses cultures and generations. My 89-year-old grandmother, who emigrated from the Soviet Union, called me recently and asked where she could get "an Elon Musk car to take [her] to the greatest store in the world, Kohl's." She then implored me to ask Musk if he was seeking elderly volunteers to fly to Mars. Whether we realize it or not, AI is being introduced into systems all around us: Internet search engines, self-driving cars, healthcare databases, education, and much more. As a result of this rapid technological integration, children today are growing up with AI. It is only natural to wonder how it is affecting them. What do children think about AI? Should they, or even we, be afraid?


While there are numerous risks, we believe the most productive way forward is to embrace this technology and create positive solutions with users, especially for the next generation. It is undeniable that AI has the power to change the world for millions of children across the globe. If leveraged correctly, AI has the potential to create personalized tutors, curated educational entertainment, and tools for encouraging curiosity, creativity, and imagination. Using AI could provide much-needed support at a fraction of the cost, with an especially important impact for low-income children globally.

Insights

Children approach AI with curiosity, offering fresh perspectives compared to adults' apprehension.
AI can revolutionize education, creating personalized learning tools and fostering creativity, especially for low-income children.
Early exposure to AI for girls can help close the gender gap in STEM fields.

Children have been interacting with technology—in classrooms, on their parents' phones, on their iPads, on YouTube when watching AI-generated content—and will continue to do so. To help the next generation, we need to know how adults and children think about and interact with AI today. We need to teach technological literacy from a young age so that solutions are designed for and used by a global and diverse population.

Focusing AI Research on Kids

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? —Alan Turing

For decades, developmental psychologists have studied children in hopes of demystifying human intelligence. The flexibility of the young mind, at certain stages free from many priors that cloud adult judgment, has shed light on several intricacies of human development. In a world being rapidly consumed by AI, a natural next step in understanding machines and their place in the human world is to apply developmental approaches.

One does not need to be a psychologist to understand that children and adolescents interact with technology differently from adults. Children are flexible learners, rapidly interweaving information from multiple sources to create rich models of the outside world. Young children have not yet absorbed the current cultural apprehension around AI. Most four-year-olds have not seen the Terminator movies, while their parents may be unable to keep themselves from thinking about Skynet at the mention of AI. Many adults have been bombarded by negative information about machines in the media, while children, not yet primed to think of AI as dangerous, are often more curious than apprehensive.


By having conversations with children of different ages about AI and collecting empirical data about how they conceptualize, interact with, and feel about these systems, we can enrich our own understanding. In doing so, we can inform those building future AI applications.

As AI is a nascent and continually evolving cultural technology, relatively little research has been dedicated to children's interactions with these systems. Existing work, which has focused on robots and computers, found an interesting trend in children's perceptions of different types of machines. Kimberly Brink et al. [1] found that older children (9 to 18 years old) in the study evaluated humanoid robots as being the creepiest type, while younger children (3 to 8 years old) did not evaluate them as creepy. This trend suggests that the negative evaluation of certain machines is something learned through development.

Existing work offers a peek into children's complex conceptualizations of machines as they grow up. The question that remains, however, is how they think and feel about advanced emerging AI models such as large language models (LLMs) and generative AI (GenAI). Our work aims to narrow this gap, investigating how school-aged children perceive, understand, and interact with different GenAI models.

Experimental Results

Our experimental design was simple: ask children ages 5 to 12 about their understanding and perception of AI, invite them to interact with two GenAI models, and then track any differences in their replies after using AI. Questions ranged from gauging children's base understanding of AI (What is artificial intelligence (AI)? Have you ever played with AI before?) to their qualitative perceptions of AI (Do you think AI is friendly or scary? Does AI have feelings, like happy or sad?). Each child was introduced to both models sequentially and told they could either ask the AI anything (ChatGPT) or draw anything (DALL-E), using four prompts. The questions ranged from perceptions of AI to the anthropomorphism of AI.

Our initial results show that children generally have very positive views of AI. The children in our sample overwhelmingly endorsed AI as being friendly, not scary. When asked what they would like to use AI for, the children had optimistic responses such as "help with homework" and "to put and carry [me] to bed." In terms of their beliefs about the nature of AI, the children were split on yes/no questions about AI's anthropomorphic qualities: whether AI is like a human, whether it has feelings, and whether it understands good and bad. When asked about their preference between the two models, children of all ages were more likely to prefer the visual one (i.e., DALL-E).

In terms of children's growing familiarity with AI, half of the children tested said they had used AI before and about one fifth were able to provide a definition of AI (Figure 1). Definitions varied across ages. A 10-year-old participant responded, "AI is machine learning where they have intelligence to communicate with humans." An 8-year-old said, "It understands what we do and allows humans to interact with them," and a 5-year-old said, "It's like robotics stuff." It is important to note that this study was conducted in the San Francisco Bay Area, whose tech hub status and proximity to Silicon Valley mean that the children interviewed may have a different experience with technology from those in another area.

Figure 1. The graph summarizes all binary choice responses. Children's evaluations are broadly positive, with AI rated overwhelmingly as being friendly, not scary. In terms of children's conceptualizations of AI's humanity, those in our sample did not believe AI to have these attributes.

We found that children's evaluations of AI were positive initially and more so after they prompted the models. Children rated the models as markedly less humanlike post-interaction, and were less likely to endorse AI as having feelings. The largest change between answers pre- and post-interaction was in response to "Can AI get upset?" Sixty percent of children answered yes pre-interaction. That number dropped to just 33 percent post-interaction, suggesting that something in the interaction led children to believe AI cannot easily be perturbed.

How children think about AI is a key piece of this puzzle, but the full picture must capture how children use AI. Parents may still fear that screen time leads to mindless scrolling and hinders their children's curiosity. We believe that utilizing GenAI could be a potential new way to promote curiosity and imagination, but it depends on which modality is being used and how. To truly take advantage of AI's potential to bolster creativity, we must first understand how children are intuitively using the models.

To approach this, we investigated how, specifically, the children were using each model. The children were very comfortable generating four prompts for each model and seemed to enjoy seeing the outputs, becoming more creative with every search.


These prompts are interesting and often amusing (Tables 1 and 2), most notably a 12-year-old's search for "My Little Pony meets Demon Slayer" in DALL-E (Figure 2). They also provide insight into what types of information children are seeking with AI, what creativity may look like in childhood HCI, and what distinguishes children's use of different GenAI modalities. We categorized the children's prompts into four categories across two dimensions to investigate these questions. One dimension looks at whether the concept they are asking about exists in the real world (e.g., as far as we know, unicorns do not exist, but horses do). The second dimension examines whether children have tangible access to explore the subject of their query (e.g., a child asks ChatGPT what the moon feels like [no access] versus what rain feels like [access]).
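The coding scheme described above can be sketched as a small classifier over two hand-labeled booleans. This is only our illustration of the idea: the helper name `code_prompt` and the example labels are invented for this sketch and are not the study's actual instrument or data.

```python
from collections import Counter

def code_prompt(exists: bool, access: bool) -> str:
    """Combine the two hand-labeled dimensions into a category code:
    E/D = the subject does / does not exist in the real world,
    A/N = the child does / does not have tangible access to it."""
    return ("E" if exists else "D") + ("A" if access else "N")

# Illustrative hand labels (invented examples, not the study's data).
labeled_prompts = [
    ("What does rain feel like?",       True,  True),   # EA: exists, accessible
    ("What does the moon feel like?",   True,  False),  # EN: exists, inaccessible
    ("A unicorn flying over a rainbow", False, False),  # DN: imaginary
]

# Tally each category's share of the labeled prompts.
counts = Counter(code_prompt(e, a) for _, e, a in labeled_prompts)
for category, n in sorted(counts.items()):
    print(category, f"{n / len(labeled_prompts):.0%}")
```

Coding each prompt this way lets the EA and DN shares reported below be computed as simple proportions per model.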

Table 1. Children's search prompts using DALL-E.
Table 2. Children's search prompts using ChatGPT.
Figure 2. Example of a 12-year-old's query for DALL-E: "My Little Pony meets Demon Slayer."

When interacting with a text-based LLM, children are most likely to ask about objects that exist and that they have access to (EA), with 63 percent of responses falling into this category (Figure 3). This pattern of use may mean that children are looking to a language-based model to reaffirm their own knowledge, or even to gauge the model's accuracy and ability to answer questions with known answers. The children's searches look drastically different when using a visual GenAI model. With DALL-E, 28 percent of the children's prompts fall into the category of concepts that do not exist in the real world and that they cannot access (DN), 18 percentage points higher than with ChatGPT. This suggests that, when using a visual GenAI model, children are diving into their imagination, craving to see something they normally wouldn't in the real world.

Figure 3. Categories of children's searches and the difference between ChatGPT and DALL-E.

In summary, we find that children are very comfortable using these models, and are eager to query them. Given children's apparent knack for interacting with AI and the genuine enjoyment they express, there may be many ways in which this technology will allow them to expand their knowledge and exercise creativity. Practically, children's EA preference with ChatGPT suggests that they would be interested in a tutoring or education-focused app that would let them work with a text-based GenAI model and explore their real-world curiosity and expand on concepts they are curious about. In parallel, visual GenAI could provide a novel tool for probing children's creative curiosity, allowing them to visually represent things that would normally exist only in their head or scribbled on a napkin. Perhaps it could also increase creative collaboration between children by allowing them to share concepts visually [5].

Diversity Is Key

We may be uniquely positioned to intervene on one issue plaguing not just tech but STEM fields across the board: gender diversity. According to the World Economic Forum's Global Gender Gap Report 2020 [6], just over a quarter of AI professionals are women. In 2022, only one in four AI researchers was a woman, and women contributed to only about half of all AI publications.

By taking advantage of the positive mental ideas children have about technology, we may be able to help close the gender gap in tech. Introducing young girls to tech and AI in an intentional way may provide them with a strong foundation to succeed in tech fields if they choose to pursue them later.

As early as second grade, girls already associate boys with STEM [2]. By age 6, girls believe that boys are "really, really smart," and they avoid activities they think are for "smart children" [3].

By age 8, girls report that if they are told other girls are not typically interested in an activity, they also show less interest in it [4]. Thus, intervening early may make a STEM pathway more accessible to girls and divert them from falling into gender stereotypes. If we introduce more diversity into technology from a young age, it will result in more diverse AI and tech solutions, system builders, and, ultimately, systems.

As founder of E-liza Dolls, I have already begun looking at the best ways to involve girls in tech from a young age. By inventing a doll with interactive hardware and software components that teaches girls to code, I've combined a toy many girls gravitate toward with innovative technology. The doll shows girls that they don't have to sacrifice girlhood or their interests to code. Life doesn't have to be dolls or tech—it can be both. Our research into children's preferences on a coding task when the goal is to program either a toy car or an E-liza Doll shows a clear gender divide—95 percent of boys chose the car, and 76 percent of girls picked the doll. Regardless of gender or toy choice, children across the board seemed to benefit from the coding exercise. In a survey conducted two weeks after the 10-minute coding challenge, many parents reported that their children expressed increased interest in STEM. One parent noted that their daughter checked out a coding book from the library; another girl inquired about a school coding club; and two others said they wished they had an E-liza Doll at home so they could code again.

To make systematic change and progress in the scientific field, diverse groups of people need to be involved. This research is a promising foundation for the future of women in tech, but adults have to do the work now to support girls' futures.

Conclusion

As researchers in the field of AI and child development, our dream is to see AI have a significant and positive impact in the world for children. We believe this is within reach, but it can only be accomplished if we invest in scientific research and focus on creating solutions rooted in evidence. From our own research, we can see how innovative search tools are able to harness the power of children's curious and creative nature and provide an interactive canvas for promoting children's imagination. The way forward is to create solutions that allow children to engage with this technology in a safe and efficient manner. To set the next generation up for success, we must foster technological literacy in children, teaching them technical concepts and basic coding skills from a young age. With strategic and timely intervention now, AI solutions can be designed for, and used by, a more global and diverse population.

The broad motivation of this research was to study children's mental models of AI. In this work, we investigated how children (ages 5 to 12) perceive, understand, and use generative AI models. We found that children generally have a very positive outlook toward AI and are excited about the ways AI may benefit them. We also illustrated how leveraging visual-based AI might serve as an effective tool to boost children's imagination and creativity. Finally, we provided an example of a promising intervention to bring girls into the AI space with the E-liza Doll. This work illuminates how exposing children to coding early on can have a significant impact. Promoting technological competency with girls and low-income children is especially important to encourage diversity in STEM.

There are many risks that come with a novel technology such as AI, but let's face it: Pandora's box has already been opened. We hope that, with hard work, AI can be firmly positioned on the right side of history and do more good than harm. Using these technologies, we can create novel ways for children around the world to lead better lives. That cannot happen without studying how humans in general, and children in particular, think about and leverage these technologies.

References

1. Brink, K., Gray, K., and Wellman, H.M. Creepiness creeps in: Uncanny valley feelings are acquired in childhood. Child Development 90, 4 (2019), 1202–14.

2. Cvencek, D., Meltzoff, A.N., and Greenwald, A.G. Math-gender stereotypes in elementary school children. Child Development 82, 3 (2011), 766–79.

3. Bian, L., Leslie, S.-J., and Cimpian, A. Gender stereotypes about intellectual ability emerge early and influence children's interests. Science 355, 6323 (2017), 389–91.

4. Mader, J. Researchers looked at how early STEM stereotypes begin for kids. They found them every step of the way. The Hechinger Report. Jun. 2, 2022; https://hechingerreport.org/researchers-looked-at-how-early-stem-stereotypes-begin-for-kids-they-found-them-every-step-of-the-way/

5. Kosoy, E., Jeong, S., Sinha, A., Gopnik, A., and Kraljic, T. Children's mental models of generative visual and text-based AI models. arXiv:2405.13081, May 21, 2024.

6. World Economic Forum. Global gender gap report 2020. Dec. 16, 2019; https://www.weforum.org/publications/gender-gap-2020-report-100-years-pay-equality/digest/.

Authors

Eliza Kosoy is a Ph.D. student at the University of California, Berkeley, working at the intersection of AI and child development with Professor Alison Gopnik. She is an intern at Google DeepMind and founder of E-liza Dolls, a startup focused on making dolls to teach young children how to code. [email protected]

Emily Rose Reagan is a Ph.D. student at the University of California, San Diego, working in developmental psychology with Professors Gail Heyman, Jamie Amemiya, and Caren Walker. [email protected]

Soojin Jeong is the head of insights at AIUX at Google DeepMind, where she is dedicated to making AI more human-centric. With a global perspective shaped by her work in Korea, Japan, and the U.S., she has spent her career at Intel, Samsung, and Meta, exploring future generation products and AI innovations. [email protected]


This work is licensed under a Creative Commons Attribution 4.0 International license.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2024 ACM, Inc.
