Features

XXXI.5 September - October 2024
Page: 28

AI Through Gen Z: Partnerships Toward New Research Agendas


Authors:
Nora McDonald, Karla Badillo-Urquiola, Afsaneh Razi, John S. Seberger, Denise E. Agosto, Pamela Wisniewski


Artificial intelligence (AI) is rapidly transforming almost every aspect of our daily lives, from how we tailor educational experiences to how we provide security and fight crime. It is, therefore, crucial that we begin to critically study the individual and societal impacts of AI systems, especially on young people.

Insights

For Gen Z there is no before AI, which raises epistemological challenges.
Youth are certain AI will force change, and while they want regulations to address it, they also want to be involved in shaping solutions.
The challenges and opportunities of AI demand a participatory approach.

While researchers and practitioners in HCI are increasingly investigating the impacts of technology on young people's digital well-being and safety [1] and information practices [2], there is still a large gap in understanding young people's perspectives and lived experiences concerning emerging technologies like AI.

Gen Z (those born between 1997 and 2012) is a generation of youth who have been raised on search and social media ("for you") algorithmic content that may have a generation-shaping impact [3]. They reached their adolescence when AI-encoded bias was revealed to be part of our everyday technologies [4]. And within the past few years, generative AI has rapidly taken hold to the point that Gen Zers may rightly expect it will radically shape their workplace. For Gen Zers, there is no before AI. We must work toward dramatically rethinking how we engage youth to center them and their lived experiences with AI.

To this end, we gathered a group of Gen Z individuals for a youth-led panel at the CSCW 2023 conference to try to understand their perspectives around AI and determine what the future research landscape should look like [5].


In this article, we reflect on the key themes that emerged from discussions among the researchers, practitioners, and Gen Z youth, as well as outline future directions for HCI research on AI.

AI Workforce Displacement and Other Negative Impacts

One of the most pressing issues identified by our Gen Z panel was workplace displacement. On the one hand, participants described ways in which AI can make people more productive in the workplace, underscoring its integration into novel tools aimed at augmenting human capabilities. On the other hand, they voiced concerns about the lack of support for upskilling and education to keep pace with rapid advancements in AI technology and evolving workforce requirements. They worried about the potential for AI to displace any number of jobs, from technology workers to artists.

Some of our young participants contended that individuals must be retrained to acquire new skills and adapt to the demands of this AI-driven employment landscape. Other participants argued, by contrast, that retraining is inherently challenging due to the breakneck pace at which AI evolves. They also emphasized that certain skills are inherently difficult to acquire, particularly for individuals with diverse backgrounds. Contrary to the idea that AI might lead to substantial workforce displacement, these older participants posited that complex human cognitive abilities remain indispensable, and therefore human professionals will continue to play a vital role, as AI cannot fully automate every facet of work from inception to completion.

A set of crucial questions remains, revolving around the nature of emerging job opportunities and the prospect that some individuals might find it exceedingly challenging to adapt to these changes in the workforce. It is imperative to contemplate the long-term implications of AI within various professions and how it will affect the lives of individuals.

The action items for future researchers, policymakers, and practitioners entail investigating strategies to support individuals who may resist the integration of AI or are reluctant to adapt their skill sets. Additional items include examination of emerging job markets and their potential impacts on the workforce; the identification of essential skills needed for individuals to remain competitive in an evolving job landscape; and the facilitation of wealth redistribution. The latter, although not precisely within HCI's purview, prompts further inquiry into how this redistribution should be managed, whether through governmental intervention or private sector initiatives.

AI Facial Recognition, Predictive Policing, and Social and Criminal Justice

Another topic that our youth panel focused on was the role of AI in policing and criminal justice. The potential of seemingly innocuous AI to be weaponized against marginalized populations was a strong concern among youth participants. Tools that are not advertised for predictive policing (e.g., GoGuardian, CLEAR, Ring) are continually used in different contexts for surveillance by civilians and police. When discussing why this continues to be an issue, the group concluded that existing technologies are still being developed and governed by people who have not experienced oppression, discrimination, or police surveillance. AI facial recognition technologists do not think about (and sometimes are even restricted from thinking about) how their inventions and technologies are affecting real people.

Another challenge discussed was the lack of knowledge people have about their individual rights, particularly when interacting with law enforcement. AI-driven technologies such as facial recognition are everywhere, and people interact with them constantly. Yet they do not understand how these technologies work, if or how they are regulated, or what their rights are. This unfamiliarity also fuels the spread of misinformation among communities.

An important step that can be taken to ensure AI-driven technologies are not used against minority people and marginalized communities is hosting workshops, panels, coffee meetups, and other community-focused events that bring everyday people into conversation with those working on technology development and policy creation. Discussions about developing tools that provide clarity and transparency around privacy policies and data aggregation must also be considered. Finally, we can use different mediums (e.g., information packets, PSAs, news media) to increase awareness and knowledge about how AI actually works and how a person's individual rights are related to these technologies.

AI Education and Applications for Youth Empowerment

Two of the most pressing questions that came up in our youth panel discussions about youth and AI education were: Who is responsible for youth AI literacy? And what does meaningful engagement and youth empowerment look like when it comes to AI education and literacy? We focused on a model for governance at the high school level, in which youth are regularly consulted about their experiences with AI and asked for their input. For example, perhaps school boards could regularly interact with the youth population in their districts. The role of librarians was also front and center in our discussions, since they are often involved in information literacy efforts, which now include AI. The question of how to involve them was a source of uncertainty.

An important next step will be to talk with groups of young people who have worked with institutions in the AI governance space and in other contexts about how they have functionally engaged with schools, administrators, government, parent-teacher associations, nongovernmental organizations, and other groups. What are they saying about AI literacy? What are they saying about AI and diversity, equity, and inclusion efforts? From there, we might consider where there are gaps and how the best formats might be broadly replicated. At each step, we must involve teens in making AI education accessible, valuable, and engaging.


They [Gen Z] worried about the potential for AI to displace any number of jobs, from technology workers to artists.


Of course, in all of this, we must ask what outcomes in AI literacy we are looking for, and how we are going to measure them. Educating people about how AI works and advocating on their behalf is important work. But the question of how to prepare people for living with AI is certainly not settled. There are implications beyond education, for self and personhood, that we don't yet understand.

AI Accountability and Online Safety for Youth

Three topics involving AI accountability and the safety and privacy of youth were raised by Gen Z researchers: data protection, disparate outcomes among youth, and amplifying diverse youth voices. While privacy regulations such as the Children's Online Privacy Protection Act (COPPA) and the General Data Protection Regulation (GDPR) make efforts to protect youth privacy online, they have limitations. COPPA only protects youth under the age of 13, and GDPR does not apply to youth in the U.S. Meanwhile, AI systems depend on collecting copious amounts of user data, including data from youth. Requiring additional age verification prior to data collection and expanding privacy regulations to protect youth up to the age of 18 from the use of their personal data could be a potential path forward, but such solutions would be difficult to implement across all contexts. However, preventing all data collection from youth could deny them the theoretical benefits of personalized AI-based systems (e.g., personalized recommendations, decision and language support).

Indeed, in general, we need to be mindful about whether and how AI-based systems could potentially disproportionately disadvantage underserved youth (e.g., based on socioeconomic status, race, digital literacy, or disability status) in a way that creates a wider digital divide and disparate outcomes. This is particularly true for youth in underprivileged countries, but it can also apply to youth in rural and urban areas in the U.S. who have limited access and resources. Assessment of disparate outcomes should include unequal exposure both to risk (e.g., fraud, discrimination) and to opportunities (e.g., education, public services). Thus, it is important that we intentionally include the voices of diverse and underserved youth in discussions around AI accountability and safety. Those currently at the forefront of these conversations, such as our participants from the youth-led organization Encode Justice, have diverse representation and are readily engaging with their communities. How can we further support them in their efforts to be inclusive and give voice to those youth who would otherwise go unheard? While we did not come up with an answer to this question, we identified it as an important goal for ensuring the digital inclusion of youth in conversations about the future of AI for the generations that succeed us.

Toward Participatory Research Led by Youth

The problems discussed above point toward participatory approaches to thinking about, setting agendas for, and conducting research. Participatory action research emphasizes the engagement and participation of the communities affected by the research, providing tools for studying their practices with the goal of transforming them. Somewhat adjacent to this is participatory design (PD). Perhaps the best-known type of PD is the Scandinavian approach, which emphasizes designing with a community and its values, particularly around areas of conflict, to transform environments through design and in so doing democratize them.

We take up the participatory concept here to emphasize the importance of perspective and transformation. Our workshop, whose planning and execution were led by youth, morphed as we went, and our research ideas became intertwined with theirs. With this participatory approach in mind, we expect that the research agenda, and the research itself, will have to become more malleable, more sensitive, and more open to youth-informed ways of thinking.

As a field, HCI leads in participatory work [6] and continues to break new ground [7]. We should leverage that history and continue to innovate through participatory work with youth at the helm, exploring how they live with AI, how they use it, and how they worry about it. Better equipping youth with AI literacies will require more embedded, long-term work, and that requires partnership.

Conclusion

AI pervades every facet of our lives. It shapes our social, political, and economic landscape, but for today's youth, that has always been the case. To better understand this terrain, we must place youth at the center. Participatory research seems to be a promising approach [6]. We urge the HCI community to create new research partnerships with youth and youth-led organizations, incorporating youth community leaders into our research more readily.

References

1. Freed, D. et al. Understanding digital-safety experiences of youth in the U.S. Proc. of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2023, 1–15.

2. Hassoun, A., Beacock, I., Consolvo, S., Goldberg, B., Kelley, P.G. and Russell, D.M. Practicing information sensibility: How Gen Z engages with online information. Proc. of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2023, 1–17.

3. Twenge, J.M. Generations: The Real Differences Between Gen Z, Millennials, Gen X, Boomers, and Silents—and What They Mean for America's Future. Atria Books, 2023.

4. Buolamwini, J. Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House, 2023.

5. McDonald, N., Razi, A., Badillo-Urquiola, K., Seberger, J.S., Agosto, D.E., and Wisniewski, P.J. AI through the eyes of Gen Z: Setting a research agenda for emerging technologies that empower our future generation. Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing. ACM, New York, 2023, 518–521.

6. Hayes, G.R. The relationship of action research to human-computer interaction. ACM Trans. on Computer-Human Interaction 18, 3 (Aug. 2011), 15:1–15:20; https://doi.org/10.1145/1993060.199306.

7. Harrington, C.N., Desai, A., Lewis, A., Moharana, S., Ross, A.S., and Mankoff, J. Working at the intersection of race, disability and accessibility. Proc. of the 25th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, 2023, 1–18.

Authors

Nora McDonald is an assistant professor in the Department of Information Sciences and Technology in the College of Engineering and Computing at George Mason University, where she focuses on providing safe and ethical technologies for vulnerable populations. She also explores the impact of new types of data relations on our identities and shifting norms around surveillance and privacy. [email protected]

Karla Badillo-Urquiola is a Clare Booth Luce Assistant Professor in the College of Computer Science and Engineering at the University of Notre Dame. She leverages her interdisciplinary background in HCI, psychology, and social computing to investigate online safety and privacy for teens in the foster care system. She is an active member of the ACM SIGCHI Latin American HCI Community. [email protected]

Afsaneh Razi is an assistant professor in the Department of Information Science in the College of Computing & Informatics at Drexel University, where she cofounded the Data, Ontology, Networks, Users, Text, and Safety (DONUTS) collaboratory. Her research expertise is at the intersection of HCI and AI to address sociotechnical issues. Her work strives to deeply understand societal issues and identify ways to mediate these challenges using technology. [email protected]

John S. Seberger is an assistant professor in the Department of Information Science in the College of Computing & Informatics at Drexel University. He conducts interdisciplinary research in humanistic HCI with foci on privacy, affect, and dignity. [email protected]

Denise E. Agosto is a professor in the College of Computing & Informatics at Drexel University. Her research investigates how young people use information and information technologies, and the role of social context in shaping youths' information practices. [email protected]

Pamela Wisniewski is an associate professor in HCI and a Flowers Family Faculty Fellow in Engineering at Vanderbilt University. Her work lies at the intersection of social computing and privacy. She is an expert in the interplay between social media, privacy, and online safety for adolescents. [email protected]


Copyright held by authors. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2024 ACM, Inc.
