Authors:
Summer Schmuecker, Juan Pablo Hourcade, Meryl Alper, Jerry Alan Fails, Saba Kawas, Svetlana Yarosh
In an age where technology reaches every aspect of our lives, a particularly pressing concern emerges: the unique susceptibility of children to the consequences of digital experiences. As technology becomes a central part of young children's interactions with the world, our attention is drawn not only to the potential adverse effects, but also to the unique opportunities for positive development. These dual aspects prompt consideration of how digital experiences shape children's development, in what contexts these experiences occur, and what measures we can take to harness the benefits while mitigating potential harms.
As we stand on the edge of new technological advancements, including generative artificial intelligence and extended reality, our limited understanding of their potential implications urges us to pause. These emerging technologies, still developing quickly and novel for many contexts and populations, bring excitement and promise, but their risks can be difficult to predict. Swift adoption by children, as witnessed with smartphones and tablets, can open a Pandora's box of repercussions, with ethical considerations falling behind. When considering ethics, we are concerned with moral principles applied to the design, deployment, and use of technologies affecting children. These moral principles derive from social norms (e.g., privacy) and virtues (e.g., benevolence) and are therefore tied to specific sociocultural contexts. As emerging technologies become more accessible to the public, we must prioritize ethics in their development and implementation, while considering the broad range of societal perspectives on these issues.
→ Ethical considerations of emerging technologies tend to follow a top-down approach and pay little attention to children.
→ Participatory approaches focused on children can address limitations in current approaches.
→ The XR for Youth Ethics Consortium is applying these participatory approaches to extended reality technologies, with preliminary results often amplifying current concerns about technology but also adding XR-specific tensions between potential concerns and benefits.
Limitations of Current Approaches to the Ethics of Emerging Technologies
Current ethical evaluations of emerging technologies, if they occur, tend to have two limitations. The first is that they are typically top-down, primarily driven by experts and industry leaders [1], with a prominent leader recently stating that he was "pretty disconnected from the reality of life for most people" [2]. It is time to democratize the process, situating it within the lived experiences of representative participants and giving a voice to diverse stakeholders, including children, through participatory methods. By combining participatory approaches with other methods, we can cultivate a more-holistic exploration of ethical considerations, authentically represent stakeholders' values and priorities, and help bridge the gap in public discourse about technology and children. In doing so, we can strive to ensure that technology's impact on children is a force for good in their lives, centered on benefiting them rather than only those creating or profiting from these technologies.
A second limitation of current ethical considerations of emerging technologies is that they have mainly centered on adults, despite children being equally important and even more susceptible to risks. Systems perspectives on child development, which have been increasingly embraced in the past 30 years, suggest a bidirectional relationship between the environment, behavior, neural activity, and genetic expression [3]. Given the ubiquity of technology in many young children's lives, this perspective on development points to the potentially deep impact of widely used technology on children's development [4]. Additionally, children are particularly vulnerable to risks like cyberbullying, privacy violations, and exposure to harmful content. While ethical considerations of emerging technologies continue to center mainly on adults, on the rare occasions that children are the focus, the discourse tends to oversimplify by gravitating toward knee-jerk reactions like blanket restrictions on screen time for children. Such one-size-fits-all approaches fail to recognize the complexity of technology's role in children's lives.
The Challenge of Emerging Technologies
While emerging technologies like generative artificial intelligence and extended reality offer exciting possibilities, our limited understanding of their potential uses and consequences raises concerns. As with any new or existing technology, unease over issues like data collection, safety, and privacy is to be expected. The dynamic nature of emerging technologies, however, poses an additional challenge, as it is difficult to predict how they will be used, whether they will be widely adopted, and what unexpected and unintended uses and harms may arise.
In addition, changes in how novel technologies are used may occur quickly. For example, there could be technologies that are quickly adopted by children, even if they were originally designed for adults. A recent example is what happened with smartphones and tablets, which removed the cognitive and motor-skill barriers that had prevented young children from using computers. This space was quickly filled with apps of questionable benefit, with ethical considerations and studies on impacts lagging well behind. To prevent this from happening again with future technologies and devices, we must be proactive.
Additionally, there is a pressing need to ensure that relevant, reliable information is made available to parents, caregivers, and other stakeholders that outlines technology characteristics that may lead to benefits or risks; for example, clearly explaining the data-collection capabilities of devices or apps. Equipping the public with a well-rounded understanding of the implications of their choices in the digital landscape is vital. By implementing public access to knowledge and policy guardrails, we can help safeguard against potentially unethical and exploitative practices that might otherwise take advantage of uninformed consumers.
Furthermore, it is crucial to acknowledge the evolving landscape of data privacy and user exploitation in the digital realm, particularly in the context of social media. Many free apps and services now come at the cost of users' personal data, raising questions about informed consent and data security. This further emphasizes the need for policies and regulations and a heightened awareness among users about the value and potential risks of their data exchanges. While adults may find it difficult to comprehend the extent of information being collected about them, this becomes an even more pressing concern for children, who often lack the capacity to understand the magnitude of data collection. This concern is further amplified by educational technology platforms in schools, which are also used at home, posing more challenges in understanding and safeguarding the privacy of personal data for younger users.
One emerging technology gaining rapid popularity is generative artificial intelligence. Generative AI technologies are advancing swiftly and finding utility in numerous fields, from digital art and chatbots to music creation and virtual assistants. The growing influence of generative AI should prompt us to take preemptive steps to avoid the same pitfalls experienced with the proliferation of smartphone and tablet use by children.
Even in the early stages of generative AI development, a host of concerning traits and potential benefits have emerged. For instance, the technology's capability to create remarkably realistic models of people's voices, expressions, and physical appearances, often indistinguishable from reality, raises alarms about its potential for manipulation and misuse. These concerns also extend to complex questions surrounding the ownership of AI-generated likenesses. This includes considering how public figures might be treated differently from private citizens and whether individuals can sell the rights to their AI-generated likenesses. Another significant concern pertains to generative AI's potential to disseminate misinformation. While it is undoubtedly a valuable information tool, it can also present entirely false information in a convincing, authoritative manner. In other words, it can bring about a crisis of authenticity, underscoring the importance of addressing these ethical and practical challenges.
It is important to emphasize that generative AI also has the potential to provide enormous benefits to children and adults alike. Existing and future generative AI technology can serve as valuable tools that foster creativity, boost productivity, and enhance educational experiences. For example, it could personalize educational content to make it more relevant and interesting to children. These systems could also enable children to author content in ways not previously possible. While generative AI has several potentially positive applications and need not immediately evoke fear, the ethical ramifications of its context and use need to be considered.
The Value of Participatory Approaches
We believe that participatory approaches that involve stakeholders, including children, can fill a void when it comes to considering the ethics of emerging technologies with respect to children. These methods can consider the norms and virtues valued by stakeholders with respect to children, which may be different from those held by industry leaders and academics, and different from those that matter for adults. For example, autonomy is typically applied differently to children and adults (e.g., we expect parental consent for research activities in addition to children's assent), and industry may have a different view from the public (e.g., with respect to decision making about private information). Because participatory methods allow for more in-depth opinions from stakeholders, these norms and virtues can be better represented.
Involving a diverse set of stakeholders, including children, in participatory methods can also help us better understand how to communicate ethical concerns to a broader population. By more accurately representing their values, priorities, vocabulary, contexts of use, and lived experiences, we could enable and facilitate communication that helps bridge the gap in current public conversations about technology and children. Such communication would enable the child-computer interaction community to better contribute to the public conversation on the ethics of emerging technologies and children.
Another advantage is that participatory methods can be combined with other methods to further explore ethical considerations. Philip Brey, for example, argues for such an approach [5]. He states that addressing the ethics of emerging technologies is particularly challenging because of the overwhelming uncertainty involved. With this uncertainty comes a balancing act: While we should not put too much weight on imagined issues that may or may not actually occur, we also cannot renounce responsibility to act because of said uncertainty over what exact issues will arise.
With that in mind, methods have been proposed to consider emerging technologies' ethics while addressing uncertainty. The first is to look at the features of a technology and ascertain what intrinsic qualities of those features may lead to ethical concerns, regardless of the specific applications or uses that may emerge in the future. This approach would catch the most glaring ethical issues and could help safeguard against the development of uses that could exacerbate the issues found within those technological features.
The second method tries to anticipate future uses, applications, and consequences using techniques such as future studies, risk-benefit analyses, design fictions, imaginary futures, and design imaginaries. The benefit of this approach is that it considers a wider range of potential ethical issues than the feature method, which identifies only generic issues, given its limited scope. However, anticipation of future uses is still speculative and can be hit or miss.
Another option to combine with participatory methods is having participants try out existing devices that use emerging technology. This allows participants to get a more-tangible understanding of the technologies they are being asked to consider, and can aid in the evaluation of features generic to the device's emerging technology. While existing technologies can be used as a jumping-off point to imagine and discuss possible developments and uses, they may also hinder participants' ability to anticipate future uses by limiting the possibilities to what is within the scope of the device's current features.
What remains to be better understood is how to best integrate these methods with participatory approaches to address rapidly developing technologies, and how to implement them for technologies that may be used by children.
XR for Youth Ethics Consortium
We have recently formed the XR for Youth Ethics Consortium, through which we are putting these ideas into practice. The consortium comprises six universities in the U.S. (University of Iowa, Boise State University, University of Minnesota, Northeastern University, University of Baltimore, and University of Maryland) and is funded by an unrestricted gift from Reality Labs Research. It consists of a diverse group of child-computer interaction researchers with a demonstrated interest in ethics, whose primary motivation is advancing knowledge on the ethics of extended reality (XR) technologies for children through participatory methods. Each site has a different demographic focus for stakeholders, in an effort to achieve broad representation across geographies, income, gender, race, and ethnicity. For example, the University of Iowa works with rural stakeholders, the University of Baltimore focuses on low-income urban youth, and Northeastern University on neurodivergent children.
Participatory activities. During the first stage of research, each site conducts its own research sessions using a variety of participatory design techniques. Using different approaches across sites is encouraged, since this first stage is all about seeing what does and does not work and sharing these findings with the other sites. Some of the techniques that have been used across sites are surveys to assess hopes and fears with respect to technology, design sessions, illustrations of possible future scenarios, and having stakeholders test out existing technologies. Members of sites have also hosted meetings and workshops at multiple conferences, including CHI, IDC, and CSCW, over the past year to share preliminary findings and discuss new ideas and next steps.
Looking ahead, a second round of research will implement a more-standardized approach across sites. After the initial stage of trying out different techniques, the most useful methods will be adopted at all sites to gather comparable information across the broad group of stakeholders. Such a step will go beyond the reach of most participatory activities in our field.
Representative survey. Following this second round of participatory activities, the insights gleaned from these stakeholder sessions will be validated through a comprehensive, nationally representative survey in the U.S. While participatory sessions offer valuable depth of discussion, they have the limitation of including the views only of those who volunteer to participate, who are not necessarily representative of the broader population. And though these selection biases may not matter for general usability or interaction design purposes, the same cannot be said for ethics, which derive from values. Values, being at a much deeper and more personal level, have nuance that requires heightened consideration. It is crucial to prioritize representation for widespread issues with the potential to cause massive societal impacts. This survey will play a pivotal role in ensuring that the findings accurately represent a broader cross-section of the population, strengthening the consortium's ability to craft well-informed and universally applicable guidelines and policies. It will also help us understand any blind spots related to the selection of stakeholders during participatory activities.
Preliminary results. As of February 2024, we have preliminary results available from participatory activities conducted at the University of Iowa and the University of Minnesota. Research at the University of Iowa involved 24 one-hour sessions with 23 adult stakeholders, including parents of and professionals who work with 2- to 12-year-old children (e.g., teachers, therapists, pediatricians, and nurses). Each session involved an average of five participants. Almost half of the participants lived or worked in rural areas. During each session, the researchers conducted one of five types of activities: filling out and discussing items on a questionnaire on parental attitudes toward children's technologies, discussing characteristics of XR technologies, reacting to and discussing scenarios of use, experiencing XR technologies, and validating a preliminary summary of notes taken during sessions.
Preliminary results from an analysis of adult stakeholder feedback during sessions yielded the following themes: privacy and safety (e.g., collection and storage of sensitive data); managing content (e.g., protecting children from inappropriate content); balancing XR and reality (e.g., preference for real-world interactions); developmental, physical, and behavioral impacts (e.g., perceptual development, addiction); broad concerns about emerging technologies (e.g., intersection of XR and generative AI); and contextual considerations (e.g., context can turn what would otherwise not be acceptable uses into desirable uses). In general, stakeholders viewed positive XR uses to be those involving beneficial activities that are otherwise not reasonably available in the real world. For example, a playdate with a friend should ideally occur in person, but if a child is hospitalized, then using XR to have a playdate is viewed very positively. Other uses viewed favorably typically involved educational and health-related applications. Concerns mainly involved the fear that XR devices could amplify current issues with privacy and safety, inappropriate content, bullying, addiction, and so forth.
The research at the University of Minnesota involved a survey and interview study with 55 parents and 67 children ages 7 to 13 to identify ethical concerns and design considerations for XR use by children. The study took place at the Minnesota State Fair, in the Driven to Discover Research Facility on the fairgrounds. This research facility, known as a "lab in the wild," offers an opportunity to engage with a diverse group of participants, ensuring wide representation across different ethnicities, technologies, and socioeconomic backgrounds. The preliminary results from this study provide an empirical understanding of ethical concerns with respect to children's XR use from both children's and their parents' perspectives, directly derived from their perceived risks, benefits, and considerations associated with XR in a range of usage scenarios.
These preliminary results identified primary ethical concerns, including the erosion of physical community bonds and social connections, long-term negative effects on children's health and development, risks to children's safety and privacy, and disparities in equity and inclusion. Furthermore, the results highlighted key considerations for designing and implementing XR for youth, emphasizing the importance of promoting open communication to recognize the benefits and risks and the critical role of parental supervision and control.
The results at both sites had common threads, in many cases echoing concerns about technologies widely used by children, such as smartphones and tablets, but adding XR-specific topics, such as the tension between providing otherwise unavailable experiences and the desire to keep children connected to the social and physical world around them.
Amid the evolving technological landscape, we face a compelling need to strike a balance between innovation and ethical considerations. As we navigate the complexities of emerging technologies, the vulnerability of children to the effects of their digital experiences becomes increasingly evident. It is this unique susceptibility that calls for our attention and action. To ensure that technology remains a force for good in children's lives, our approach needs to be comprehensive, participatory, and inclusive of diverse perspectives. It is essential to empower parents, caregivers, children, and other stakeholders with the knowledge to make informed choices that weigh the advantages against the risks of technology use.
Through the use of participatory methods, we hope to help democratize the ethical evaluation process, allowing children and various stakeholders to actively shape the future of the digital landscape. The XR for Youth Ethics Consortium, a collaborative initiative, embodies this vision. With a focus on ethics and understanding of child-computer interactions, we seek to navigate the challenges and opportunities of emerging technologies for the betterment of children. Through this research, we can define clear guidelines to address technology's negative aspects, encourage further exploration of its benefits, and take a nuanced approach toward issues that fall in-between. Through our efforts, we aspire to contribute to a future where technology is not only safe but also harnessed as a tool to foster learning, growth, and creativity, while empowering children to form their own ethics and values and benefiting their peers.
We hope our ideas and work do not stop with extended reality technologies or children. We expect our approach could work well for other emerging technologies, as well as for other populations that are often forgotten in the ethical considerations of emerging technologies, such as people with disabilities, older adults, and people in lower-income regions of the world. The time to act is now!
We would like to thank Delaney Norris, Greg Walsh, Elizabeth Bonsignore, and Tamara Clegg for their contributions to this article.
1. Reijers, W. et al. Methods for practising ethics in research and innovation: A literature review, critical analysis and recommendations. Science and Engineering Ethics 24, 5 (2018), 1437–1481; https://doi.org/10.1007/s11948-017-9961-8
2. Swartz, J. Sam Altman thinks we should feel good, but not great, about him as our AI leader. Morningstar. Sep. 25, 2023; https://www.morningstar.com/news/marketwatch/20230925255/sam-altman-thinks-we-should-feel-good-but-not-great-about-him-as-our-ai-leader
3. Gottlieb, G., Wahlsten, D., and Lickliter, R. The significance of biology for human development: A developmental psychobiological systems view. In Handbook of Child Psychology. Wiley Online Library, 2007.
4. Anderson, D.R., Subrahmanyam, K., and the Cognitive Impacts of Digital Media Workgroup. Digital screen media and cognitive development. Pediatrics 140, Supplement 2 (2017), S57–S61; https://doi.org/10.1542/peds.2016-1758C
5. Brey, P. Ethics of emerging technology. In The Ethics of Technology: Methods and Approaches. Rowman & Littlefield, London, U.K., 2017, 175–191.
Summer Schmuecker is a student at the University of Iowa, where she is completing a master's degree in informatics under the human-computer interaction subprogram. [email protected]
Juan Pablo Hourcade is a professor in the Department of Computer Science at the University of Iowa and director of the interdisciplinary graduate program in informatics. [email protected]
Meryl Alper is an associate professor in the Department of Communication Studies and affiliate associate professor in the Department of Communication Sciences and Disorders at Northeastern University. She is the author of three MIT Press books on disability, youth, and technology. [email protected]
Jerry Alan Fails is a professor and the chair of the Department of Computer Science at Boise State University in Boise, Idaho. He has been researching and designing technologies with and for children using participatory design methods for more than 20 years. [email protected]
Saba Kawas is a Computing Innovation Fellow and a postdoctoral research associate in the Department of Computer Science and Engineering at the University of Minnesota, where she studies the interaction of technology, youth development, and well-being. She has a Ph.D. in human-computer interaction from the University of Washington. [email protected]
Svetlana "Lana" Yarosh is an associate professor in the Department of Computer Science & Engineering at the University of Minnesota. Her research in HCI focuses on embodied interaction in social computing systems, with her most recent work focusing on systems that improve relationships and help people recover from substance use disorders. [email protected]
Copyright held by authors. Publication rights licensed to ACM.