Early detection of autism
Authors:
Samiya Ali Zaidi
Posted: Tue, August 20, 2024 - 11:55:00
To get started, we partnered with the Autism Spectrum Disorder Welfare Trust (ASDWT) in Pakistan to create a unique dataset. It will include eye-tracking data from Pakistani children, reflecting a diverse range of ages, genders, and other demographic factors. This diversity is crucial because it helps ensure that our model is robust and can generalize across different populations. Collecting the data will be a collaborative effort involving the ASDWT, parents, and other stakeholders, who provide informed consent and valuable insights. Once the dataset is collected, we will prepare it for analysis. This step includes pre-processing the data, removing noise, and normalizing it to ensure consistency. The features extracted from this prepared data will play a vital role in training our computer vision-based model to detect patterns that may indicate ASD.
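To make the pre-processing step concrete, here is a minimal sketch (in Python) of how noisy gaze samples might be cleaned and normalized. This is not our actual pipeline; the data layout, filter size, and screen resolution are illustrative assumptions.

```python
# A minimal sketch of gaze pre-processing: drop off-screen samples (noise removal),
# smooth tracker jitter, and normalize coordinates so recordings are comparable.
# The (n_samples, 2) layout and 1920x1080 screen are illustrative assumptions.
import numpy as np
from scipy.signal import medfilt

def preprocess_gaze(gaze_xy: np.ndarray, screen_px=(1920, 1080)) -> np.ndarray:
    """gaze_xy: array of shape (n_samples, 2) with raw (x, y) gaze positions in pixels."""
    # Remove samples that fall outside the screen (a simple form of noise removal).
    on_screen = (
        (gaze_xy[:, 0] >= 0) & (gaze_xy[:, 0] < screen_px[0]) &
        (gaze_xy[:, 1] >= 0) & (gaze_xy[:, 1] < screen_px[1])
    )
    cleaned = gaze_xy[on_screen]
    # Median-filter each coordinate to suppress tracker jitter.
    cleaned = np.column_stack(
        [medfilt(cleaned[:, i], kernel_size=5) for i in range(2)])
    # Normalize to [0, 1] so recordings are consistent across screen setups.
    return cleaned / np.array(screen_px, dtype=float)
```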
In addition to eye-tracking data, we recognize the importance of incorporating textual inputs to improve the accuracy of our model. This step involves gathering information from parental reports, medical histories, and other relevant checklists. By integrating this textual data, we can provide our model with a more comprehensive understanding of each child's development, allowing for a more accurate detection of ASD-related patterns. Textual inputs add context to the eye-tracking data, helping identify correlations between visual behavior and other developmental cues. This approach aligns with HCI's focus on creating systems that understand and respond to human behavior in meaningful ways. By combining visual and textual data, we aim to create a model that not only detects ASD with greater accuracy but also helps healthcare professionals and families make informed decisions about treatment and support.
The heart of our project lies in building and training a custom computer vision-based model. Given the complexity of ASD and the variability of eye-tracking patterns, we understand that a robust model is essential. To achieve this, we will split our dataset into three subsets: training, validation, and testing. The training subset, comprising approximately 70 percent of the data, is used to teach the model to recognize patterns. The validation subset (20 percent) helps us fine-tune and optimize the model's hyperparameters. Finally, the testing subset (10 percent) is where we evaluate the model's accuracy and generalizability. The training process involves applying state-of-the-art computer vision techniques to analyze the eye-tracking data. We will also use transformer-based architectures to explore different approaches to pattern recognition. By testing multiple architectures, we aim to determine which methods are most effective in identifying ASD-related patterns.
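As an illustration of that 70/20/10 split, here is a minimal sketch assuming the samples and labels are already loaded as arrays; scikit-learn is our assumption for the example, not a tool the project is committed to.

```python
# A minimal sketch of the 70/20/10 train/validation/test split described above.
from sklearn.model_selection import train_test_split

def split_dataset(X, y, seed=42):
    # First carve off 30 percent, then divide that portion into
    # validation (20 percent of the whole) and test (10 percent).
    X_train, X_temp, y_train, y_temp = train_test_split(
        X, y, test_size=0.30, random_state=seed, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_temp, y_temp, test_size=1 / 3, random_state=seed, stratify=y_temp)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```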
After training and testing the model, we will move on to the analysis phase. This step involves evaluating the model's performance using various metrics. We will compare our results with existing studies to gauge the effectiveness of our approach and to identify areas for improvement. During this phase, we will pay special attention to how variations in visual stimuli and experimental conditions influence the model's accuracy. We will also consider the potential impact of demographic factors, ensuring that our model can generalize across different groups. This analysis is crucial, as it helps us understand the limitations of our approach and provides insights into how we can refine the model to improve its reliability.
The findings from our project, as seen in the initial reports, will have the potential to significantly impact early ASD detection. By combining eye-tracking data with textual inputs, we can create a model that offers earlier and more accurate diagnoses, which can lead to more effective interventions. This directly benefits healthcare professionals and families by providing them with tools to support children with ASD promptly. Moreover, our work demonstrates the potential of Human-Computer Interaction in healthcare. By leveraging computer vision and deep learning, we can address complex problems like ASD diagnosis with innovative solutions. This approach has implications beyond ASD, showcasing how HCI can play a pivotal role in improving diagnostic processes for other neurological and developmental conditions.
As we continue to refine our model, we are excited about the possibilities ahead. Our findings could guide future research into applying HCI and computer vision to a broader range of diagnostic scenarios, ultimately leading to more reliable and efficient methods for early diagnosis. We believe this project is just the beginning of a journey toward a more inclusive and effective approach to healthcare. By incorporating HCI into the process, we're not just creating technology for technology's sake; we're creating solutions that can make a real difference in people's lives. And that's what makes this work truly exciting.
I am disabled but not ‘impaired’
Authors:
Ather Sharif
Posted: Thu, August 08, 2024 - 12:45:00
There are at least 1.3 billion disabled people worldwide [1]. Their disabilities include blindness, hearing loss, and mobility and cognitive disabilities, among others. These individuals are disabled—if they choose to identify as such—but what they are not is “impaired.”
Word cloud of impaired and its related words created using Generative AI.
Correct usage of terminology is important. There exists a prodigious, if not universal, agreement that the terminology we use to describe people holds significant weight in increasing inclusivity for marginalized groups, especially disabled people [2]. These words represent the speakers’ commitment to cultural awareness of and sensitivity toward vulnerable populations [3]. For the latter, they express respect and a sense of belonging to a world that continues to disenfranchise them. Much like fashion, terminologies have a shelf life, after which they slowly become archaic or are considered outdated due to their negative connotations (e.g., retarded). But unlike fashion, if declared pejorative, the terminologies’ return to popular usage is unwelcome at best, unless done so purposely by the targeted demographic to reclaim the narrative (e.g., crippled and gimp).
Reclaiming the narrative. Reclaiming the narrative begins with a word being just a word, like any other word in the dictionary. Then, a creative person comes along and uses it as an adjective to describe a person or a group of people they think they are superior to. Other “superiors” agree, and the word then becomes an identity of the “inferiors.” Time goes by, inferiors get a platform to vocalize their disagreement with the word, superiors demonstrate resistance and continue using it, and finally, after years of exhausting advocacy, the word gets flagged as offensive. It stays offensive, and people, within and outside the targeted demographic, avoid its usage. Then, people within the targeted demographic take ownership of the word to empower themselves, remembering the oppression they faced, which led to the association of the word with their demographic in the first place. People outside the demographic, however, are barred from using the word, particularly with those within the demographic. (The barring has never been enforced as a law, though, and only serves as a recommendation, with the likelihood of the speaker being “canceled” for using the word.)
The word impaired has not yet been reclaimed. The disagreement, however, has been vocalized. Resistance to its cessation continues.
Definition and impact of impaired. The Oxford English Dictionary defines impaired as “injured, weakened, damaged; diminished or deteriorated in amount, quality, or value.” Merriam-Webster’s Collegiate Dictionary describes it as “diminished in function or ability.”
Disabled people already know what their disabilities and limitations are, with many having embraced it as a part of their identity. They do not need a constant reminder of their disabilities. Every time I type the word impaired for this blog post, a part of me feels dejected and wants to give up on the utopian future where disabled people are not distinct from those who are “normal.” But I digress.
Using words like impaired can result in disabled people internalizing their self-identity as being broken, or even an “incomplete” human. These words can foster a subconscious degradation in confidence and pose hurdles in a disabled person’s journey toward accepting their identity as a disabled person. They can create emotional scars, which, if externalized, could develop into more negative stereotypes, such as disabled people being “sappy” or lacking “thick skin.” The impact of these terms can be long-lasting.
Arguments for using impaired. I hesitate to believe, or rather to admit, that on a human level, HCI researchers and authors are entirely oblivious to the effect of words on human emotions. Movies, TV shows, and books thrive on this effect. Encouraging and respectful words produce harmony. Disrespectful and prickly words trigger emotional distress. Neutral words are often harmless. So why do people still use impaired to refer to people?
Argument 1: It’s an established term used in medical journals and prior accessibility research. We’re just following the standards. This reasoning is based on facts. Impaired is used quite extensively, although the field of medicine is attempting to rectify that [4,5]. But if we dig deeper into the literature, we will find frequent usage of other terms now considered derogatory. There was a time when those words were acceptable…and then they weren’t. There are two choices: keep using impaired until it reaches its shelf life or preemptively avoid the term and be better allies to the disability community. History has a funny way of teaching us things, but only if we are willing to learn. The lesson here is to be proactive catalysts for positive change and not let the change happen as a mandatory reparations measure.
Argument 2: A person I know is perfectly fine with its usage, so this is not a problem. N = 1 has its place, but not for this argument. Everyone has their preferences; some people are sensitive, some not so much. Focusing on those not affected by the usage of impaired and letting that motivate its continued usage is arguably an invalidation of those who are sensitive to its usage. Our societal goal has always been to validate and support vulnerable and disenfranchised populations. It makes sense to listen to the voices of those who find discomfort in being labeled as impaired. It is improbable that a more accepting replacement would be unwelcomed by those not affected by the usage of impaired. At its 1993 annual convention, the National Federation of the Blind advised against the usage of visually impaired and instead advocated using blind and low vision, employing identity-first language [6]. Yet, 31 years later, visually impaired remains a widely used term.
Argument 3: I never really thought about it; I just used it because my advisor did. What can I do moving forward? This is not necessarily an argument, but it provides valuable insights into some people’s thought process. The reasoning behind this view makes sense too. The desire to be better is a gold star. But some questions remain. For example: How can we better communicate our discomfort with word choices without exhausting ourselves, such that it is baked into our thought process from the start? Whose responsibility is it? How can we build a culture where we are encouraged to challenge an established belief, where there is constant learning and unlearning? I do not know the answers to these questions, but I suspect the answers involve being less resistant to change and actively seeking feedback from a community on how they wish to be referred to.
Where do we go from here? This blog post is a call to action to cease referring to disabled people as impaired. But that is not all I intend. I urge HCI researchers and authors to employ practices and processes that allow for rethinking and challenging the terms they use to refer to vulnerable and disenfranchised populations. I believe it when people say they wish to be better allies; using respectful terminology is a step toward that. Giving some thought to people-describing adjectives before using them can help. I hope this post sparks thoughtful conversations on this subject and helps identify avenues for authors to use more-inclusive words.
Endnotes
1. World Health Organization. Disability. March 7, 2023; https://www.who.int/news-room/...
2. Sharif, A., McCall, A.L., and Bolante, K.R. Should I say “disabled people” or “people with disabilities”? Language preferences of disabled people between identity- and person-first language. Proc. of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, 2022, 1–18.
3. Best, K.L., Mortenson, W.B., Lauzière-Fitzgerald, Z., and Smith, E.M. Language matters! The long-standing debate between identity-first language and person-first language. Assistive Technology 34, 2 (2022), 127–128.
4. Goddu, A.P. et al. Do words matter? Stigmatizing language and the transmission of bias in the medical record. Journal of General Internal Medicine 33, 5 (2018), 685–691.
5. Raney, J. et al. Words matter: An antibias workshop for health care professionals to reduce stigmatizing language. MedEdPORTAL 17 (2021), 11115.
6. National Federation of the Blind. Convention Resolutions 1993: Resolution 93-01 https://nfb.org/sites/default/...
Ather Sharif
Ather Sharif has a Ph.D. in computer science from the University of Washington. His research focuses broadly on human-computer interaction and specifically on the intersection of accessibility, visualization, and personalization. His work involves making online data visualizations accessible to screen-reader users. He is the chairman of the Executive Board of the Disability Empowerment Center.
[email protected]
Doing CHI together: The benefits of a writing retreat for early-career HCI researchers
Authors:
Ava Elizabeth Scott,
Leon Reicherts,
Evropi Stefanidi
Posted: Tue, April 16, 2024 - 10:30:00
The deadline for the CHI conference is an important event on an HCI researcher’s calendar. For CHI 2023, more than 3,000 papers were submitted and only 28 percent were accepted. With such a high number of submissions and low acceptance rate, researchers face pressure to make the best submission possible. So how can we best introduce early-career researchers to the charms and quirks of writing a CHI paper while supporting them in submitting the best possible paper they can?
For the past two years, the University of St. Gallen in Switzerland has hosted a writing retreat called CHI Together, which brings researchers together for two and a half weeks directly prior to the CHI deadline. While many of these researchers are experienced CHI authors, some are engaging in this process for the first time. During the inaugural writing retreat held in 2022, attendees submitted a total of a dozen papers. The idea behind the retreat was to help participants collaborate efficiently and manage the stressful phase before the deadline in the company of like-minded people; and then to celebrate their submissions. Hoping to reiterate and build on this success, this year the computer science department at St. Gallen invited 21 researchers to come together, physically and collaboratively, in the run-up to the deadline for CHI 2024.
This endeavor was made possible by funding from the SIGCHI Development Fund and from the University of St. Gallen. We are hugely grateful for this support, which facilitated the many healthy social activities and group meals, as well as financed accommodation and travel expenses. Here we describe these activities, starting from our first day arriving in St. Gallen to the final delirious day of submissions and subsequent celebration. We then reflect on the intensity of the CHI deadline and recommend that early-career researchers be supported with social and recreational resources, as well as pragmatic and technical measures.
Arriving by plane and train on a warm day in late August, we were immediately struck by the fact that these services were on time. As students from the U.K. and Germany, Swiss timeliness is always a nice surprise. We reunited with the other visiting researchers, new and familiar faces alike. Our apartments were a 10-minute walk from the computer science building in the center of St. Gallen, halfway up a steep hill. While it was an easy jaunt down to the office, there was a significant disincentive to leave the office: the hike between you and your bed. Though this felt like a slog at the time, the daily exercise surely benefited us. These apartments were our home for the next two and a half weeks, but we spent much more time at the office.
Sharing an open office environment catalyzed natural collaboration, as well as companionship and social breaks during the workday.
In the computer science building, we each had desk space with a display monitor and a comfortable chair. In anticipation of the numerous hours we would spend writing, an ergonomic setup was a priority. In this open office setting, students collected data, analyzed it, and scratched their heads debating over the structure and presentation of their results.
A selection of irreverent and humorous Ph.D. memes were spread over the desks.
Knowledge exchange sessions were organized to support these processes, including sessions on qualitative analysis and how to write a CHI paper. We swapped abstracts and drafts with one another to get feedback on our efforts. In addition to receiving guidance on analysis and writing, we also created and shared analytical tools, such as a transcription and speaker-segmentation pipeline, which we used to transcribe more than 100 interviews across different research projects. Using the wall-size whiteboards around the office, we collectively sketched and resketched papers’ structure, argumentation, models, and figures. By distributing our thinking across multiple brains and the physical space, we could work through hard problems with greater speed and clarity.
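For readers curious what such a pipeline can look like, here is a minimal sketch. The retreat's actual tooling isn't documented here, so OpenAI's Whisper and pyannote.audio (plus a placeholder Hugging Face token and audio path) are assumptions for the example, not the tools we necessarily used.

```python
# A minimal sketch of a transcription + speaker-segmentation pipeline:
# Whisper produces timestamped transcript segments, pyannote.audio labels
# who spoke when, and the two are merged by overlapping timestamps.
import whisper
from pyannote.audio import Pipeline

def transcribe_with_speakers(audio_path: str, hf_token: str):
    # Speech-to-text: Whisper returns timestamped segments.
    asr = whisper.load_model("base")
    segments = asr.transcribe(audio_path)["segments"]

    # Speaker segmentation (diarization): who spoke when.
    diarizer = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1", use_auth_token=hf_token)
    turns = list(diarizer(audio_path).itertracks(yield_label=True))

    # Label each transcript segment with the speaker whose turn covers its midpoint.
    labeled = []
    for seg in segments:
        mid = (seg["start"] + seg["end"]) / 2
        speaker = next(
            (spk for turn, _, spk in turns if turn.start <= mid <= turn.end),
            "UNKNOWN")
        labeled.append((speaker, seg["text"].strip()))
    return labeled
```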
This intensive brain activity required fuel. Most days, we took turns making group lunches in the lab kitchen—usually an elaborate salad with fresh bread and cheese. By sharing lunch duties, we reduced the overall time spent on meal preparation, while also eating well. As the deadline got closer and the days became longer, we frequently honored the tradition of grabbing doner kebab or Thai takeaway for dinner.
Paweł W. Woźniak presented the key elements of an abstract.
There were also several optional activities that provided a much-needed break. We hiked the nearby mountains, took a dip in the local mountain lakes, indulged in manic dance and workout breaks, went to the cinema, and shared pizza and burgers in the evenings. Furthermore, a contingent of researchers attended the Mensch und Computer conference in nearby Rapperswil, enabling further exchange with the D-A-CH (Germany, Austria, and Switzerland) HCI community. Postdocs and those in their final year of Ph.D. research also attended an information session about research funding opportunities across Switzerland and the rest of Europe, organized by the research office at St. Gallen. This balance of intellectual, recreational, and forward-looking activities helped balance the otherwise intensive writing schedule.
Immersed in this culture of dedicated focus, we collectively submitted more than a dozen papers for CHI 2024. One supervisor said that their students’ writing during this period was the clearest and most effective of their studies so far. While some of us were clicking the submit button at the very last moment (2 p.m. on September 15 in our time zone), we all got our papers in on time and celebrated with drinks and snacks in the lab. Despite many of us having pulled all-nighters, the drinks continued until 2 a.m. We said heartfelt goodbyes, and left for our respective destinations the following morning, looking forward (with slight trepidation!) to receiving our reviews in a few weeks.
The collective whiteboards were the site of active collaborative thinking, sketching iterations of models, figures, and argumentation.
Large, top-tier conferences such as CHI are the critical sites for presenting work, receiving feedback, and networking with peers and future collaborators. However, due to the appropriately high standards set for paper acceptance and the cost of in-person attendance, such conferences are not always accessible to early-career scholars. The CHI Together model of collaborative writing before the deadline provides support in multiple ways, offering an encouraging, knowledgeable community to help steer research work and frame completed work following best practices for different submission types. CHI Together also supports recreational activities that allow early-stage scholars to network and build community; we believe that such community building means scholars will have an established cohort for continued mutual support in their everyday work setting as well as at conferences, when they are able to attend them. These meaningful relationships with peers are perhaps even more lasting than connections made at conferences.
Plans for CHI Together 2024 are in the works, potentially hosted elsewhere in Europe, but aiming to maintain the same level of focus, support, and solidarity in the run-up to the deadline for CHI 2025. For those of you interested in hosting a similar writing retreat, and who would appreciate some guidance or blueprints, don’t hesitate to get in touch with us or Johannes Schöning of the University of St. Gallen. Or if you are running similar events, please do share your approach with us.
Ava Elizabeth Scott
Ava Elizabeth Scott is a Ph.D. candidate at University College London. As part of the Ecological Study of the Brain DTP, her current research prioritizes ecological validity by using interdisciplinary approaches to investigate how metacognition and intentionality can be supported by technological interfaces .
[email protected]
Training for net zero carbon infrastructure through interactive immersive technologies
Authors:
Muhammad Zahid Iqbal,
Abraham Campbell
Posted: Wed, March 06, 2024 - 3:00:00
By providing realistic and engaging learning experiences, immersive technologies such as virtual reality (VR), augmented reality (AR), mixed reality (MR), 360-degree video, and simulation have proven to be effective in industrial training contexts. These technologies have been adopted for various purposes, including healthcare [1], teacher training, and nuclear power plant training [2].
Climate change is a global challenge that requires coordinated action from all sectors of society. In order to combat climate change, CO2 emissions need to fall to zero. One of the key strategies in addressing this challenge is to achieve net zero carbon emissions by 2050 [3]. Essential to achieving this goal is the adoption of net zero carbon infrastructure [4], which can reduce reliance on fossil fuels, enhance the efficiency and resilience of energy systems, and support the development of low-carbon technologies in industry. Net zero carbon infrastructure includes renewable energy resources such as wind, solar, nuclear, and hydro; low-carbon transport modes such as electric vehicles; and carbon-capturing, energy-efficient buildings [5] and appliances. There is an urgent need to invest in net zero carbon infrastructure to improve air quality, promote healthy lifestyles, and improve social equity and environmental justice. Net zero carbon infrastructure is therefore not only a necessity but also an opportunity to build a more sustainable and prosperous future for all. This post explores the opportunities and challenges of using immersive technologies for net zero carbon infrastructure training.
Main components of net zero carbon infrastructure.
Immersive technologies can potentially revolutionize training for net zero carbon infrastructure development and maintenance. Potential opportunities include:
- Reducing training costs, risks, and errors. By using immersive technologies, trainees can learn and practice the skills and knowledge required for net zero infrastructure in a realistic and immersive way without the need for physical resources, travel, and instructors. A perfect example is wind farms, where maintenance of a turbine may take place more than 50 meters above the ground. Immersive technologies can save time, money, and resources, as well as improve safety and quality.
- Creating realistic simulations of dangerous or hazardous environments that workers may encounter in their jobs, such as oil spills, fires, or explosions. By using immersive technologies, workers can learn how to handle these situations safely and effectively without exposing themselves to physical harm. Therefore, immersive technologies can enhance the safety and efficiency of net zero infrastructure training.
- Using immersive technologies in net zero infrastructure training can provide data and feedback on learner performance, behavior, and progress. Trainers can monitor and assess how learners interact with the virtual environment, what choices they make, how they solve problems, and how they apply their skills and knowledge. This data and feedback can be used to improve training design and delivery by identifying strengths and weaknesses, providing personalized guidance, and adapting the level of difficulty and complexity.
- Enhancing the learning experience by creating a more engaging and interactive environment for learners or trainees. By providing learners with realistic and immersive scenarios, these interventions can stimulate their senses, emotions, and cognition, as well as provide immediate feedback and guidance. Therefore, immersive technologies can help to foster a more effective and enjoyable learning process.
- Leveraging the latest revolution in generative AI, immersive training can be made more productive, personalized, and content-driven by instructional designers. As generative AI advances, its ability to facilitate training will only improve, generating text, images, video, and even 3D avatars to assist the trainee.
- Using haptic devices in immersive training is the next step forward: Compared with touchless hand interaction or nonhaptic, controller-based interaction, haptics can achieve greater realism within a training environment. Haptic feedback has proven to be among the most effective feedback methods for trainees.
There are also challenges posed in adopting these technologies that need to be addressed. Some of the main challenges include:
- Acquiring immersive training resources for the first time can be costly and resource intensive because it requires specialized equipment, software, and content development. Depending on the complexity and quality of the simulations, the initial investment and maintenance costs can be high, but most of this is a long-term investment.
- Implementing immersive training infrastructure may not be compatible with existing systems and platforms, which can create technical difficulties and integration issues. Also, immersive training facilities require more technical support and troubleshooting initially than other forms of training.
- Accessibility is a commonly discussed issue with immersive training. But with the latest developments in immersive hardware, immersive training material can be designed with accessibility and inclusivity in mind.
- Implementing new initiatives such as using immersive technologies for learning may face resistance from trainees, who may fear adopting new technologies. Therefore, there is a need for a convincing strategy, such as adopting the latest versions of the technology acceptance model (TAM) [6] and communicating about the value of immersive training, as well as providing adequate training and support.
All of these challenges are not insurmountable and can be overcome with careful planning, evaluation, and collaboration. It is worth considering how we might leverage these technologies to achieve net zero industrial training goals. One way to address the cost challenge is to develop more-affordable immersive technologies such as mobile AR/VR; even the makers of the latest devices are working on reducing costs, as seen with Meta's latest VR headset, the Quest 3. As advances in XR devices proceed rapidly, with year-on-year improvements, Meta and other companies such as Microsoft and HTC should consider allowing users to return their old devices to be refurbished or recycled for a discount on new devices. This would reduce costs for XR users who want the latest devices and reduce the environmental impact.
Another way to address the challenge of accessibility is to make immersive training more accessible to people in developing countries. This can be done by providing training on how to use immersive technologies and by developing immersive training programs tailored to the needs of developing countries. Of course, there are challenges of usability, which can be addressed by designing immersive training programs that are easy to use and more user-friendly.
Case studies from different industrial training contexts show immersive technologies have the potential to play a significant role in training for net zero carbon infrastructure. By addressing the challenges, we can ensure that immersive technologies can help revolutionize net zero workforce training for a sustainable future.
Endnotes
1. Brooks, A.L. Gaming, VR, and immersive technologies for education/training. In Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer, 2021, 17–29.
2. Popov, O.O. et al. Immersive technology for training and professional development of nuclear power plants personnel. Proc. of the 4th International Workshop on Augmented Reality in Education, 2021.
4. Bouckaert, S. et al. Net Zero by 2050: A Roadmap for the Global Energy Sector. International Energy Agency, 2021.
5. Kennedy, C.A., Ibrahim, N., and Hoornweg, D. Low-carbon infrastructure strategies for cities. Nature Climate Change 4, 5 (2014), 343–346.
6. Thomas, R.J., O'Hare, G., and Coyle, D. Understanding technology acceptance in smart agriculture: A systematic review of empirical research in crop production. Technological Forecasting and Social Change 189 (2023), 122374.
Muhammad Zahid Iqbal
Muhammad Zahid Iqbal is an assistant professor in immersive technologies at Teesside University. He also works as an associate faculty member at the University of Glasgow. He completed his Ph.D. in computer science with a specialization in immersive technologies at University College Dublin. His research vision is to explore the convergence of immersive technology, digital twins, the metaverse, and digital transformations.
[email protected]
Abraham Campbell
Abraham Campbell is an assistant professor at University College Dublin, Ireland. He also served as faculty member of Beijing-Dublin International College, a joint initiative between UCD and BJUT. He is a Funded Investigator for the CONSUS SFI Center and was a Collaborator on the EU-Funded AHA—AdHd Augmented Project.
[email protected]
The unsustainable model: The tug of war between LLM personalization and user privacy
Authors:
Shuhan Sheng
Posted: Thu, January 04, 2024 - 11:32:00
In the AI universe, we find ourselves balancing on a tightrope between groundbreaking promise and perilous pitfalls, especially with those LLM-based platforms we broadly label as “AI platforms.” The allure is magnetic, but the risks? They’re impossible to ignore.
AI Platform’s Information Interaction and Service Features: An Unsustainable Model
Central to every AI platform is its adeptness in accumulating, deciphering, and deploying colossal data streams. Such data, often intimate and sensitive, is either freely given by users or, at times, unwittingly surrendered. The dynamic appears simple: You nourish the AI with data, and in reciprocation, it bestows upon you tailored suggestions, insights, or solutions. It’s like having a conversation with a very knowledgeable friend who remembers everything you’ve ever told them. But here’s the catch: Unlike your friend, AI platforms store this information, often indefinitely, and use it to refine their algorithms, sell to advertisers, or even share with third parties.
The current mechanisms of information exchange between AI platforms and users can be likened to a two-way street with no traffic lights. On one flank, users persistently pour their data, aspiring for superior amenities and encounters. Conversely, AI platforms, with their insatiable quest for excellence, feast upon this data, often devoid of explicit confines or oversight. Such unbridled data interchange has culminated in palpable apprehensions, predominantly surrounding user privacy and potential data malfeasance [1].
Following this unchecked two-way data street, the unsustainable model now forces a dicey trade-off between “personalized experiences” and “personal data privacy.” This has led to a staggering concentration of user data on major AI platforms, at levels and depths previously unimaginable. What’s more alarming? This data pile just keeps growing with time. And let’s not kid ourselves: These AI platforms, hoarding mountains of critical user data, are far from impenetrable fortresses. A single breach could spell disaster.
One of the most recent and notable incidents involves ChatGPT. During its early deployment, there was an inadvertent leak of sensitive commercial information. Specifically, Samsung employees reportedly leaked sensitive confidential company information to OpenAI’s ChatGPT on multiple occasions [2]. This incident not only caused a stir in the tech community but also ignited a broader debate about the safety and reliability of AI platforms. The inadvertent leak raised concerns about the potential misuse of AI in business espionage, the risk of exposing proprietary business strategies, and the potential financial implications for companies whose sensitive data might be inadvertently shared.
Whether we like it or not, we need to see and face this fact. The current rapidly growing information interaction model based on traditional user data storage methods is unsustainable. It’s a ticking time bomb, waiting for the right moment to explode. And unless we address these issues head-on, we are bound for a digital disaster.
User Experience Design Based on This Mechanism: Subsequent Problems and Challenges
Beyond the lurking privacy threats and the looming digital apocalypse, this unsustainable info-exchange model is already a thorn in the user experience side. Let’s dive deeper into these annoyances to grasp the gravity of the situation.
The cross-platform invocation dilemma. Major AI platforms operate in silos, creating a fragmented ecosystem where user data lacks interoperability. With the advent of new models and platforms, this fragmentation is only intensifying [3]. Imagine having to introduce yourself every time you meet someone, even if you’ve met them before. That’s the predicament users find themselves in. Every time they switch to a new AI platform, they’re forced to retrain the system with their personal data to receive customized results. This not only is tedious but also amplifies the risk of data breaches. It’s like giving out your home address to every stranger you meet, hoping they won’t misuse it.
Inefficiencies in historical interaction records. The current AI models have a flawed approach to storing and managing historical interaction records [4]. Take ChatGPT, for instance. Even within the platform, one session’s history can’t give a nod to another’s. It’s like they’re strangers at a party. Users struggle to retrieve past interactions, making the entire process of data retrieval cumbersome and inefficient. This inefficiency not only frustrates users but also diminishes the value proposition of these platforms.
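To see how clunky the workaround is, consider the kind of code a developer must write today just to make one session "remember" another: The entire history is stored locally and replayed on every call. Below is a minimal sketch using the OpenAI Python client; the model name and file path are illustrative, not part of any platform's own mechanism.

```python
# A minimal sketch of the manual workaround for session amnesia: persist the
# conversation locally and replay it so a "new" session can see an old one.
import json
from pathlib import Path
from openai import OpenAI

HISTORY_FILE = Path("chat_history.json")  # hypothetical local store
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def load_history() -> list[dict]:
    return json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []

def ask(question: str) -> str:
    history = load_history()
    history.append({"role": "user", "content": question})
    # Replaying the full history is what makes past sessions visible to the
    # model, and also what re-transmits personal data on every single call.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
    return answer
```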
Token overload in single channels. Information overload is a real concern. When a single channel is bombarded with excessive information, the AI platform’s performance takes a hit [5]. It’s like trying to listen to multiple radio stations at once; the result is just noise. The current model’s technical limitations become evident as it struggles to scale with increased user interaction, leading to slower response times and a degraded user experience.
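And because replayed history collides with exactly these limits, developers end up trimming it by hand. Here is a minimal sketch, assuming the tiktoken tokenizer and an arbitrary token budget, of the kind of truncation the current model forces.

```python
# A minimal sketch of trimming replayed history to a fixed token budget so a
# single channel is not overloaded; the budget value is an arbitrary example.
import tiktoken

def trim_to_budget(messages: list[dict], budget: int = 3000) -> list[dict]:
    enc = tiktoken.get_encoding("cl100k_base")
    kept, used = [], 0
    # Walk backwards so the most recent turns survive the cut.
    for msg in reversed(messages):
        tokens = len(enc.encode(msg["content"]))
        if used + tokens > budget:
            break
        kept.append(msg)
        used += tokens
    return list(reversed(kept))
```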
A Call for Change
As we draw the curtains on our discussion, it’s evident that the current AI ecosystem, while revolutionary, is far from perfect. The model’s unsustainability is not just a theoretical concern but rather a tangible reality that users grapple with daily.
The complexity of data misuse is a significant concern. Both active and passive data misuse are like icebergs—what’s visible is just the tip, and the real danger lurks beneath the surface. These misuses are not only concealed but also highly unpredictable. It’s akin to navigating a minefield blindfolded; one never knows when or where the next explosion will occur.
Relying solely on corporate responsibility and legal regulations is akin to putting a band-aid on a gunshot wound. While these measures might offer temporary relief, they don’t address the root cause of the problem. The need of the hour is a fundamental change. We must advocate for a deeper, more profound, root-level redesign of the information interaction mechanisms between users and AI platforms. It’s not just about patching up the existing system but envisioning a new one that prioritizes user experience, privacy, and security.
The AI ecosystem is at a crossroads. We can either continue down the current path, ignoring the glaring issues, or we can take the bold step of overhauling the system. The choice is clear: For a more sustainable, ethical, and user-friendly AI future, change is not just necessary; it’s imperative.
Endnotes
1. Harari, Y.N. 21 Lessons for the 21st Century. Spiegel & Grau, 2018.
2. Greenberg, A. Oops: Samsung employees leaked confidential data to ChatGPT. Gizmodo. Apr. 6, 2023; https://gizmodo.com/chatgpt-ai...
3. Forbes Tech Council. AI and large language models: The future of healthcare data interoperability. Forbes. Jun. 20, 2023; https://www.forbes.com/sites/f...
4. Broussard, M. The challenges of AI preservation. The American Historical Review 128, 3 (Sep. 2023), 1378–1381; https://doi.org/10.1093/ahr/rh...
5. Vontobel, M. AI could repair the damage done by data overload. VentureBeat. Jan. 4, 2022; https://venturebeat.com/datade...
Shuhan Sheng
An entrepreneurial spirit and design visionary, Shuhan Sheng cofounded an industry-first educational-corporate platform, amassing significant investment. After honing his craft in interaction design at ArtCenter College of Design, he now leads as chief designer and product director for two cutting-edge AI teams in North America. His work has garnered multiple international accolades, including FDA and MUSE Awards.
[email protected]
Body x Materials @ CHI: Exploring the intersections of body and materiality in a full-day workshop
Authors:
Ana Tajadura-Jiménez,
Bruna Petreca,
Laia Turmo Vidal,
Ricardo O'Nascimento,
Aneesha Singh
Posted: Wed, January 03, 2024 - 2:31:00
During the 2023 CHI conference, we ran a one-day workshop to consider the design space at the intersection of the body and materials [1]. The workshop gathered designers, makers, researchers, and artists to explore current theories, approaches, methods, and tools that emphasize the critical role of materiality in body-based interactions with technology. We were motivated by developments in HCI and interaction design over the past 15 years, namely the “material turn,” which explores the materiality of technology and computation and methods for working with materials, and “first-person” approaches emphasizing design for and from lived experience and the physical body. Recognizing the valuable contributions of approaches that foregrounded materiality [2] and the body [3] in HCI, we proposed to explore the intersection of these two turns.
Material interactions is a field of wide-ranging interest to HCI researchers, starting with research on the design and integration of physical and digital materials to create interactive and embodied experiences [4], progressing to a growing interest in the contextual aspects of material interactions and their impact on personal and social experiences [2], and culminating in formalized method propositions such as “material-centered interaction design” [5]. The latter is a fresh approach, urging interaction designers to broaden their view beyond the capabilities of the computer and embrace a practice that imagines and designs interaction through material manifestations.
Similarly, the role of the body in design has gained traction, through approaches such as soma design [3], embodied interaction, and movement-centric methods [6]. These have positioned first-person accounts and physical engagement as central to research and design from the body, shifting the design space beyond security and efficiency (primary goals, for example, in the field of human factors) to deal with other dimensions such as fun/joy, play, entertainment, mental health interventions, and so on.
In the workshop, the methods and prototypes brought in aimed to explore experience, for instance in terms of perceptions and feelings toward materials (Figure 1) or toward the person’s own body (Figure 2). In both cases, the interaction with the prototypes brings the subjective experience into focus.
We share a keen interest with previous research in working across physical and digital materials, as well as in centering the body in design. Additionally, we have a particular interest in the intersection of these approaches, in which we find a new design space: making the body a central material in designing experiences, going beyond a view of the body as the medium with which to explore the material—its potentials, its affordances—instead including the body as part of the material being designed with and for. We are excited to develop this design space through a formal proposition as we progress with our research.
Our proposed approach is both timely and critical, as it addresses the complex interplay between the body and emerging technologies. However, this approach is not free from challenges, as discussed in the workshop. These technologies not only have a tangible impact but also influence immaterial aspects, creating unpredictable outcomes that must be carefully considered during the design phase. As technologies increasingly become a ubiquitous part of our lives, they affect our bodies in both physiological and psychological ways. Furthermore, moving this research beyond the confines of a laboratory not only introduces ethical concerns in relation to health, well-being, safety, and social impact, but also opens new opportunities. For example, in terms of social impact, the technologies can directly affect not just individual bodies, but also the interpersonal relationships those individuals maintain. Therefore, any interaction with embodied technologies carries real-world implications that must be approached with caution, but also with curiosity about the opportunities they may open.
The Workshop
Throughout our day-long event, attendees took part in various activities and were organized into four thematic groups: material enabling expression; material as a catalyst for human action; material enabling reflection, awareness, and understanding; and material supporting the design process for (re)creating the existing and the yet-to-exist.
In the first activity, participants shared their prototypes or methods within their distinct groups. Prototypes were examined in relation to each group’s theme and participants individually noted similarities, differences, and opportunities among them. The second activity mixed groups for participants to discuss key points across themes, to uncover overlaps and opportunities.
Figure 1. One of the methods (Materials Gym) presented at the workshop. EMG is used alongside a smartphone application to continuously collect data on people’s interaction with materials: Speed and aspects of movement reveal information about the experience in relation to people’s perceptions of and preferences for materials qualities (see https://discovery.ucl.ac.uk/id/eprint/10172204/).
Figure 2. A physical prototype (Soniband) that one participant brought to the workshop. This is a wearable band that provides real-time sonification of movement through a variety of sounds, some of which build on material metaphors, such as water, wind, or mechanical gears, and which can affect the wearer’s body perceptions and feelings, movement, and emotional state (see https://doi.org/10.1145/3411764.3445558; https://doi.org/10.1145/2858036.2858486).
To bring additional energy to our exploration, the workshop featured a panel with four guest speakers who presented their research through a provocation: Kristi Kuusk (Estonian Academy of Arts) talked about the impact of material on people’s bodies, combining the affordances of technology and traditional materials; Pedro Lopes (University of Chicago) talked about making the body the material from which we build the actuator of the designed technologies; Hasti Seifi (Arizona State University) talked about the need for integrating basic research on haptic sensory perception and language into software tools for designing body-based haptic experiences; and Paul Strohmeier (Max Planck Institute for Informatics) talked about how material and body experiences are shaped by agency, control, and reflective processes. The panelists conversed about material experiences and engaged in a Q&A session with participants. To conclude, participants mapped the design space of “body x materials” with their original groups, considering challenges, opportunities, the state of the art, theories, and methodologies.
Figure 3. Participants engaging in workshop activities.
The diversity of backgrounds of workshop participants (position papers available at www.rca.ac.uk/body-materials) resulted in multidisciplinary discussion groups where different disciplines, theories, methodologies and application domains were considered (including HCI, interaction design, neuroscience, arts and crafts, and material design). Through discussions, this diverse group of participants arrived at shared conclusions. Participants highlighted the importance and timeliness of the topic. Many emphasized the need to establish a clear vision of the relationship between the body and materials for HCI. They foregrounded the challenges posed by the complexities of reconciling detailed fine-grain descriptions with a holistic perspective of the overall experience, which arise both from the dynamic nature of the experience of materials, conveyed through sensorimotor feedback, and the continuous interpretation of the material qualities involved in the act of touching. Understanding the underlying processes leading to these interpretations and how awareness levels affect interaction was perceived by participants as critical. Several participants stressed the importance of incorporating first-person perspectives into design.
The discussion spotlighted additional challenges stemming from language limitations and the necessity of cultivating a language specific to touch. Two key points emerged. First, participants underscored the importance of establishing a shared vocabulary that enables effective communication among researchers who may have diverse backgrounds. Second, participants noted that significant information tends to be lost when touch experiences are translated into verbal descriptions. Participants also explored the optimal data format for representing body-based research, acknowledging ongoing attempts to define one.
Finally, the topic of shifting agency between the body and materials was explored. Two approaches were highlighted: empowering individuals to create their own unique material experiences as part of the design process, and the concept of “controlling the body to experience something,” where the body acts as an actuator. The tension between enabling individuals to engage with the experience and exerting control over the body for a specific experience was also acknowledged during the discussion.
For those interested in working in this novel design space at the intersection of body x materials, we synthesize three key takeaways from our workshop. These takeaways emphasize: 1) the significance of comprehending the fundamental processes underlying experiences within this intersection, 2) the need to employ appropriate methods to achieve this understanding, including the integration of first-person perspectives into the design process, and 3) the importance of establishing a shared vocabulary. The latter includes facilitating the translation and representation of experiences and research outcomes related to the body x material interactions.
Conclusions
Participants agreed that mapping the state of the art of the body x materials space is necessary for creating a more significant impact beyond lab experimentation. Nevertheless, a cautionary note was also struck—participants noted that it is essential to slow down and account for the ethical implications of how this research affects people psychologically and otherwise. To move the field forward and generate impact, participants agreed that interdisciplinary conversations must be fostered and the real-life implications of materiality x body understood, in terms of both possibilities and the positive and negative effects of our work. The atmosphere at the workshop was open and interdisciplinary, and it was clear among participants that more such interdisciplinary conversations around the real-life implications of body x materials need to be nurtured to bring a shared understanding to the field. With this aim, the workshop participants plan to consolidate existing materials and findings at future workshops, including one already run at the IEEE World Haptics 2023 conference, and to hold a repeat workshop at CHI 2024 to further explore the directions that emerged. The long-term goal is to build an interdisciplinary community and open the design space for material-enabled, body-based multisensory experiences by integrating research from various perspectives.
Acknowledgments
We would like to thank all workshop organizers (the authors of this piece were joined by Hasti Seifi, Judith Ley-Flores, Nadia Bianchi-Berthouze, Marianna Obrist, and Sharon Baurley), guest speakers, and participants for their fantastic contributions. We acknowledge funding by the Spanish Agencia Estatal de Investigación (PID2019-105579RB-I00/AEI/10.13039/501100011033) and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002711). The work of Bruna Petreca and Ricardo O’Nascimento was funded by UKRI grant EP/V011766/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
Endnotes
1. Petreca, B.B. et al. Body x materials: A workshop exploring the role of material-enabled body-based multisensory experiences. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2023; https://doi.org/10.1145/354454...
2. Giaccardi, E. and Karana, E. Foundations of materials experience: An approach for HCI. Proc. of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, New York, 2015, 2447–2456; https://doi.org/10.1145/270212...
3. Höök, K. Designing with the Body: Somaesthetic Interaction Design. MIT Press, Cambridge, MA, 2018.
4. Ishii, H. and Ullmer, B. Tangible bits: Towards seamless interfaces between people, bits and atoms. Proc. of the ACM SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1997, 234–241.
5. Wiberg, M. The Materiality of Interaction: Notes on the Materials of Interaction Design. MIT Press, Cambridge, MA, 2018.
6. Wilde, D., Vallgarda, A., and Tomico, O. Embodied design ideation methods: Analysing the power of estrangement. Proc. of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2017, 5158–5170; https://doi.org/10.1145/302545...
Ana Tajadura-Jiménez
Ana Tajadura-Jiménez is an associate professor at the DEI Interactive Systems Group, Universidad Carlos III de Madrid. She leads the i_mBODY lab (https://www.imbodylab.com), focused on interactive, multisensory, body-centered experiences at the intersection of HCI and neuroscience. She is principal investigator of the MagicOutFit and BODYinTRANSIT projects.
[email protected]
Bruna Petreca
Bruna Petreca is a senior research fellow in human experience and materials at the Materials Science Research Centre of the Royal College of Art. She co-leads the Consumer Experience Research Strand of the UKRI Interdisciplinary Textile Circularity Centre and is a co-investigator on the project EPSRC Consumer Experience Digital Tools for Dematerialisation.
[email protected]
Laia Turmo Vidal
Laia Turmo Vidal is a Ph.D. candidate in interaction design and HCI at Uppsala University, Sweden. In her research, she investigates how to support movement teaching and learning through interactive technology. Her research interests include embodied design, cooperative social computing, and play.
[email protected]
Ricardo O'Nascimento
Ricardo O'Nascimento is a postdoctoral researcher in human experience and materials at the Materials Science Research Centre of the Royal College of Art. His research explores how new technologies challenge and enhance human perception, with a focus on on-body interfaces and hybrid environments.
ricardo.o'[email protected]
Aneesha Singh
Aneesha Singh is an associate professor in human-computer interaction at UCL Interaction Centre. She is interested in the design, adoption, and use of personal health and well-being technologies in everyday contexts, focusing on sensitive and stigmatized conditions. Her research areas include digital health, ubiquitous computing, multisensory feedback, and wearable technology.
[email protected]
Usability: The essential ingredient for sustainable world development
Authors:
Elizabeth Rosenzweig,
Amanda Davis,
Zhengjie Liu,
Deborah Bosley
Posted: Tue, January 02, 2024 - 11:29:00
As we make interacting with technology more like interacting with human beings, it is important that we understand how people use technology.…That is why usability is so important.
— Bill Gates [1]
Professionals in user experience (UX) and technology recognize the significant impact well-designed technology has on everyday life. In a digital age where interfaces are everywhere, ensuring a positive user experience is crucial. It’s not merely about creating visually appealing designs; it’s about making technology intuitive, accessible, and genuinely useful for everyone.
We aim to do more than just influence individual applications; we strive to influence policymakers and decision-makers worldwide. Our advocacy revolves around recognizing usability as a fundamental human right. We firmly believe that access to technology that is easy to use and understand should be a universal entitlement, akin to essentials like clean water and education.
Our vision is to create a world where technology becomes a tool for empowerment, enabling individuals to navigate the digital landscape with confidence and ease. By championing usability as a basic human right, we’re paving the way for a future where innovative solutions are not only cutting edge but also universally accessible. Our goal is to ensure technology becomes a catalyst for positive change in the lives of people across the globe.
To achieve this vision, we emphasize the significance of World Usability Day (WUD), a pivotal event promoting the values of usability and user-centered design. By enhancing the visibility of this initiative, we can raise not only professionals’ awareness but also that of the general public. WUD serves as a platform to showcase inventive ideas, best practices, and research findings, highlighting how usability directly affects people’s lives. It embodies a global commitment to enhancing user experiences and ensuring technology is accessible to everyone. This influential event transcends geographical boundaries, uniting diverse communities, from professionals and industrial experts to educators, citizens, and government representatives.
The fundamental objective of WUD is both profound and simple: to ensure that essential services and products, vital to everyone’s lives, are user friendly and easy to understand. By advocating for usability, WUD ensures that technological advancements don’t isolate users with complexity but empower them, regardless of their background, expertise, or physical abilities.
History of World Usability Day
The first invited essay about World Usability Day was in August 2006; the message still rings true today: “Every citizen on our planet deserves the right to usable products and services. It is time we reframe our work and look at a bigger global picture” [2].
For nearly two decades, WUD has been a catalyst for change, reaching practitioners in every corner of the globe. Across more than 140 countries, WUD has engaged over 250,000 individuals and had an impact on their local communities (see Table 1). WUD has opened up the field of UX and usability in places where it did not exist before the event, such as Eastern Europe (including Poland, Ukraine, Russia, and Turkey). With its annual theme, practitioners and volunteers have come together across the globe to tackle topics that matter to humanity.
Year | Theme | Events | Countries | Individuals Engaged
2005 | Make it Easy | 115 | 35 | 6,500
2006 | Accessibility | 228 | 40 | 38,000
2007 | Healthcare | 157 | 41 | 29,000
2008 | Transportation | 161 | 43 | 50,000
2009 | Designing for a Sustainable World | 150 | 43 | 48,000
2010 | Communication | 150 | 40 | 45,000
2011 | Education: Designing for Social Change | 120 | 42 | 7,000
2012 | Usability of Financial Systems | 80 | 20 | 6,000
2013 | Healthcare: Collaborating for Better Systems | 107 | 32 | 7,000
2014 | Engagement | 135 | 40 | 8,500
2015 | Innovation | 91 | 31 | 6,500
2016 | Sustainability | 73 | 24 | 6,000
2017 | Inclusion | 66 | 25 | 6,500
2018 | Design for Good or Evil | 62 | 26 | 6,500
2019 | Design for the Future We Want | 56 | 27 | 6,500
2020 | Human-Centered AI | 42 | 26 | 5,500
2021 | Trust, Ethics, Integrity | 47 | 25 | 4,550
2022 | Healthcare | 32 | 25 | 4,550
2023 | Collaboration and Cooperation | 48 | 20 | 4,575
Table 1. Themes and global WUD participation.
World Usability Initiative
As the founders of World Usability Day, we want to mobilize people in countries around the world to develop technology that works for the greater good. We want to expand beyond recognizing the significance of usability just once a year so that we can have a larger impact in tackling the world’s biggest problems.
To achieve this goal, we collaborated with renowned professional associations such as SIGCHI, HCII, PLAIN, and IFIP. Together, we established the World Usability Initiative (WUI), a focused global organization.
Operating as a singular, dedicated entity, WUI works closely with the United Nations, particularly concerning the human factors integral to the 17 Sustainable Development Goals. This initiative unites experts from various fields, including human-computer interaction, user experience, and interaction design. Their collective efforts are geared toward leading researchers, developers, and countries in creating technology that is not only user centered but also aligned with core human values.
Our mission is to partner with the UN to tackle the 17 Sustainable Development Goals for 2030. Our initial initiatives will focus on:
- Establishing World Usability Day as an internationally observed day: WUI aims to connect the HCI-UX field with the United Nations by listing WUD on the UN calendar, ensuring global recognition.
- Rewarding good design with the WUI Design Challenge: This initiative rewards exceptional design by professionals across five countries each year, promoting user-centered innovation.
- Connecting communities through the International Speaker Series: To foster community engagement, WUI hosts an international speaker series, providing a platform for professionals to connect and share ideas.
- Creating projects that achieve the UN Sustainable Development Goals: Create projects within developed and developing countries that connect with the UN Sustainable Development Goals, adopting a user-centered approach.
- Creating a UN-based World Usability Organization: From the perspective of public policy formulation and implementation, the organization will focus on ensuring that technology products and services are accessible and usable for everyone.
Returning to that 2006 essay about World Usability Day: “The challenge of World Usability Day is not small; it is to change the way the world is developing and using technology” [2].
Your support is instrumental in achieving WUI’s objectives and transforming the world into a more inclusive, equitable place. You can actively contribute by:
- Participating in the WUI Design Challenge: Showcase your innovative designs, promoting user-centered solutions.
- Organizing a World Usability Day event: Contribute to the global conversation by hosting an event, fostering awareness and inclusivity.
- Engaging in the speaker series: Share your expertise and ideas, connecting with professionals worldwide.
- Signing the petition: Join the collective voice urging the UN to recognize World Usability Day’s significance.
- Sponsoring WUD: Support WUD financially, enabling the movement to reach new horizons.
In the spirit of global collaboration, let us unite to create technology that serves humanity inclusively, ensuring usability becomes a universal reality. Together, we can pave the way for a world where technology truly becomes a force for good, enriching lives, fostering innovation, and championing inclusivity.
Endnotes
1. Bill Gates at World Usability Day 2007. YouTube; https://www.youtube.com/watch?v=mpxYYz1QHqQ&t=7s
2. Rosenzweig, E. World Usability Day: A challenge for everyone. Journal of User Experience 1, 4 (2006), 151–155; https://uxpajournal.org/world-...
Elizabeth Rosenzweig
Elizabeth Rosenzweig is a design researcher who uses technology to make the world a better place. She believes that the best design comes from good research through user-centered design, and as the founder of World Usability Day has been able to push the boundaries of the status quo. She holds four patents on intelligent design for image management and is the author of Successful User Experience: Strategies and Roadmaps.
[email protected]
Website:
https://designresearchforgood.org/
View All Elizabeth Rosenzweig's Posts
Amanda Davis
Amanda Davis, founder of and lead consultant at Experiment Zone, excels in guiding companies on usability and conversion rate optimization. As an active board member of the World Usability Initiative, she champions initiatives for global user-friendly experiences, embodying expertise at the forefront of user-centric design and advocacy.
[email protected]
View All Amanda Davis's Posts
Zhengjie Liu
Zhengjie Liu is professor emeritus of HCI/UXD at Dalian Maritime University in China. An HCI pioneer who has worked in the field since 1989, he has been especially instrumental in developing UXD practice in Chinese industry. He has served international communities on various committees, including ACM SIGCHI, the IFIP TC.13 Committee on HCI, and ISO working groups for HCI standards, focusing on promoting HCI in the developing world. He received the ACM SIGCHI Lifetime Service Award in 2017 and the IFIP TC.13 Pioneers Award in 2013.
[email protected]
View All Zhengjie Liu's Posts
Deborah Bosley
Deborah S. Bosley is the founder and principal of The Plain Language Group, LLC. For the past 20 years, she has worked with Fortune 500 companies, government agencies, and non-profits to make complex content easy to understand. She provides training, plain language revisions, usability testing, and expert witness testimony.
[email protected]
View All Deborah Bosley's Posts
Democratizing game authoring: A key to inclusive freedom of expression in the metaverse
Authors:
Amir Reza Asadi
Posted: Thu, November 16, 2023 - 2:03:00
As we approach the metaverse, a sociotechnical future where people consume huge amounts of information in 3D formats, the significance of game development tools, including both game-engine and digital-content-creation tools, will become more and more evident. In a metaverse-oriented world, you need to create 3D experiences to express your ideas. Game engines are currently the main tools for authoring and creating AR/VR experiences; therefore, simplifying the game-authoring process is crucial for the future of freedom of expression.
Video games are the most advanced form of interactive media, conveying diverse ideas and often representing mainstream ideologies. However, making a small but sophisticated video game is an expensive form of storytelling. This issue may not be a concern for many of us because the main purpose of digital play is entertainment. But it should be a vital issue for all of us, as a metaverse-oriented world is made up of 3D game-like experiences. Those who can create more immersive experiences may take control of real-world narratives, so it’s a challenge and an important goal for us to make game development inclusive of more voices. Without simplifying the game-authoring process, our society could lose its freedom to effectively communicate and express itself, resulting in reduced empathy.
We should also not underestimate the threat of propaganda in the metaverse. The metaverse allows users to immerse themselves in dreams of virtual prosperity, and people may shelter in the virtual world to overcome the pain of poverty in the real world. Dreaming has played a vital role in the history of humanity. We dream at times when we are unhappy with our circumstances, and no dictatorship in history has been able to control our dreams [1]. With the rise of emerging non-democratic powers and economies, there is a threat that companies will start changing or censoring the ideologies of their game-like experiences, similar to what has happened to the movie industry in recent years [2,3]. In a dystopian scenario, people will trade their ability to dream for the ability to live in virtual game-like experiences. Losing the ability to dream freely can have dangerous consequences, as it can hold people captive to mainstream ideas. But there is a silver lining. Transforming game-authoring tools in the same way that emerging video-making and -sharing apps such as TikTok, Snapchat, and Apple Clips have democratized video-based storytelling gives us hope that all ideas can be expressed not just freely but also effectively in the metaverse.
A Desirable Future for Game-Creation Tools
In today’s world, game development has become simpler with free-to-use game engines, but the pipeline of game development is still complicated. We need to simplify the workflow of game-authoring tools for tomorrow. I’m not talking about a magic-wand solution here. The technologies for simplifying game authoring are available today, but we need to first let go of our previous assumptions.
The designers of game-authoring tools should create these tools based on new personas. They need to think big and assume that the intellectuals of the metaverse generation will be game authors rather than book authors. Based on this desirable future, we created a persona (Figure 1) representing such a thinker, depicted with an image of Montesquieu, the famous French political philosopher. We also created a design fiction about his life:
DJ Montesquieu was a political philosopher who sought to challenge traditional modes of philosophical discourse. He saw the world through a different lens, one that viewed technology and innovation as a means of disseminating his ideas more effectively to a wider audience. This unique approach led him to develop his thesis not in the form of a book, but rather as a video game that allows players to experience the impacts of overregulation through the eyes of ordinary citizens. Unlike traditional philosophers, Montesquieu was not interested in writing for the sake of writing. He was more concerned with sharing his views in a way that was accessible and engaging for people. To this end, he utilized the AlternateGameEngine, a combination of no-code tools and ChatGPT-based tools, to create immersive AR/VR games that brought his ideas to life.
DJ Montesquieu’s games were not just entertaining; they were also thought-provoking. They allowed players to explore complex philosophical concepts in a way that was both fun and enlightening. By using technology as a medium for philosophical discourse, DJ Montesquieu was able to reach a wider audience than he ever could have with traditional writing.
Figure 1. The persona of a future political philosopher, DJ Montesquieu.
In other words, imagine a future where intellectuals and authors, instead of using word processing tools and presentation software, will use 3D game-creation tools; where people may quote game authors instead of Friedrich Nietzsche; and where famous people will create games instead of writing autobiographies. In this future, the most popular social network allows users to turn their diaries into snack-sized gaming experiences that can be experienced by others.
This future also allows society to understand situations that they cannot or do not want to comprehend by reading books. The combination of the immersive nature of the metaverse with simplified game-authoring tools will enable societies to have more ethnocultural empathy, as they can experience life from others’ points of view. Immersive 3D virtual environments have already proven effective in creating empathy among different ethnic groups [4,5], so simplifying the pipeline of immersive storytelling can create opportunities to have a more united society. Imagine that instead of watching partisan debates on TV and being exposed to online political bots, every person could share their ideas, trauma, or values via immersive experiences in the same way they can create content for social media, allowing us to live in a more empathetic community. People may not be able to express their own side of the story through verbal interactions, but easy immersive-experience authoring may allow them to express their stories as livable dreams that others can experience themselves.
Endnotes
1. Rare 1997 interview with Abbas Kiarostami, conducted by Iranian film scholar Jamsheed Akrami. From Taste of Cherry Featurettes, 1997. YouTube, Mar. 1, 2021; https://www.youtube.com/watch?v=VTcm4P5qAF8
2. DeLisle, J. Foreign policy through other means: Hard power, soft power, and China’s turn to political warfare to influence the United States. Orbis 64, 2 (2020), 174–206.
3. Su, W. From visual pleasure to global imagination: Chinese youth’s reception of Hollywood films. Asian Journal of Communication 31, 6 (2021), 520–535.
4. Coffey, A.J., Kamhawi, R., Fishwick, P., and Henderson, J. The efficacy of an immersive 3D virtual versus 2D web environment in intercultural sensitivity acquisition. Educational Technology Research and Development 65 (2017), 455–479.
5. Lamendola, W. and Krysik, J. Cultivating counter space: Evoking empathy through simulated gameplay. ETC Press, 2022.
Amir Reza Asadi
Psychological privacy: How perceptual publicity can support perceived publicity
Authors:
Wee Kiat Lau,
Lisa Valentina Eberhardt,
Marian Sauter,
Anke Huckauf
Posted: Mon, October 23, 2023 - 10:45:00
Picture this: Sitting in his kitchen, young Tim chuckles at the recent misfortune of neighbors who fell prey to burglars. At the same time, he’s enthusiastically experimenting with a banking app on his fresh-off-the-assembly-line computer, a machine devoid of even the most fundamental antivirus protection. This scenario is a striking illustration of the privacy paradox. We voice anxiety about our data’s usage, often lambasting corporations with lax privacy protocols, only to defy our apprehensions by not embracing measures to safeguard ourselves. Let’s dissect this intriguingly paradoxical user behavior through the lens of perceptual psychology, illuminating the circumstances under which we sense privacy or publicity, distinguishing these states, and suggesting ways to aid individuals like Tim.
Perception in Private and Public Surroundings
Envision an intensely private moment: reclining on your sofa, shaking off the weariness of work. Now, contrast this with a public spectacle: accepting accolades onstage for special achievements, with a massive audience bearing witness. How do these scenarios make you feel, and what sets them apart? To shed light on this question, we will go deep into our bodies’ states and processes. A key environmental distinction arises when we feel private—the high likelihood of being cocooned in a familiar setting, surrounded by well-known objects and people, which lets us unwind. New experiences, however, kick-start our alertness; they ignite our curiosity and grab our attention.
Fundamental cognitive processes in any living organism include sensation and perception. Familiar environments envelop us with recognizable items, triggering well-known sensations, be they sounds, scents, tactile sensations, or visuals. Therefore, private settings pose fewer challenges to the perceptual system in terms of object identification and sensory memory. This reduction in demand allows the system to function at a lower sensitivity level, freeing up capacity for other operations.
This perceptual mode is accompanied by physiological processes. Broadly speaking, in relaxed, private settings, parasympathetic neural activation (“rest and digest” mode) is dominant, whereas sympathetic activation (“fight or flight” mode) is dominant in unfamiliar public settings. This causes some remarkable perceptual effects: Sympathetic activation leads to larger pupils, resulting in more light input, a slightly extended visual field, and reduced visual acuity, especially outside of the eyes’ focus—that is, in the periphery and in depth [1]. We can therefore assume that in public, relative to private, settings we perceive with less spatial accuracy from a larger visual field.
Attention in Private and Public Surroundings
Perception is accompanied by an adaptation of attentional processes. All attentional functions respond to situational affordances. We can differentiate between alertness and selective attention. Alertness is the increase and maintenance of response readiness. It can be supposed to complement the general arousal level of an organism. Thus, alertness will be high in public settings, while it can be reduced when the organism is surrounded by familiar objects. The notion of a broader visual field, although with lower spatial resolution, can be plausibly assumed to support alerting functions in public settings.
Regarding selective attention, new salient stimuli are known to capture attention. This improves visual search performance. In private surroundings, distracting objects can be quickly identified and thus be effectively suppressed. This can lead to a phenomenon known as inattentional blindness: In familiar settings, it frequently happens that we miss even uncommon, unexpected objects. The visual system is also capable of suppressing distracters based on their spatial location [2], improving efficiency in familiar environments. The familiarity of surrounding objects eases not only the selection of task-relevant objects but also the suppression of distracting objects. Inhibition again saves capacity for other processes [3]. In unfamiliar public settings, however, stimuli must be processed until they are identified as harmless, and suppression of task-irrelevant stimuli is thus more difficult.
The dichotomy between private and public settings even manifests in our posture and movements. Onstage, we present our bodies to a large audience, making exaggerated, sweeping gestures. Conversely, in private, our muscles can relax, leading to smaller, more restrained movements. This difference extends to eye movements and gaze, which in turn influence perception.
Level of Control in Private and Public Surroundings
Taken together, private settings lessen the need to attend to external stimuli; you can unwind and rely on the consistency of the surroundings. Also, what we perceive there is already familiar information. All these processes diminish the need for cognitive control, allowing processing to occur more subconsciously. Consequently, executing learned skills, routines, and habits becomes more probable. Public behavior, however, is marked by unfamiliar surroundings. The influx of novel objects or people prompts a slew of questions: Is that unfamiliar face a threat? What does that unexpected sound imply? This cognitive appraisal demands effortful attention, sapping mental resources, making us more cautious in and conscious of our actions [3].
This thinking aligns with Daniel Kahneman’s [4] idea that human behavior is regulated either by quick, instinctive, and emotional processing (as in private settings) or by slower, more deliberative, and logical processing (common in public settings). Crucially, it’s nearly impossible to engage both methods simultaneously. Therefore, in a situation prompting emotional automated processing with only weak conscious monitoring, we’re hardly capable of producing analytical thinking with logical deductions. This means that if users engage with their personal devices at home, their behavior is dominated by automated, nonconscious routines.
Counteracting Perceived Privacy by Simulating a Public Audience
So, how can we assist users in selecting an appropriate level of control? Novelty in environmental stimuli can be an indicator. How we process these cues shapes how we perceive and interact with our surroundings, be they private or public. The cues could be signals or symbols [5]. Signals are automatic cues operating beneath conscious thought, like the familiar ticktock of a clock or the distinctive feel of your sofa, directing our arousal and attention needs. Conversely, symbols, such as GDPR text, demand conscious, detailed analysis and interpretation, thus requiring higher cognitive capacity.
Discerning the psychological differences between private and public settings equips us with a potent tool to mold privacy behavior and promote prudent disclosure. As we’ve noted, using personal devices in private settings often sparks cues associated with private behavior, possibly leading to a false sense of security. Thus, introducing cues that simulate public scenarios could stimulate public consciousness, reminding users to be more circumspect with their disclosures. These could be visuals, sounds, smells, or other elements that evoke the public nature of their online interactions. One subtle method to induce a feeling of publicity could be the “watching eyes effect”: The presence of a pair of eyes can influence disclosure behavior and can be fine-tuned by varying emotional expression, sex, and age of the eyes [6]. Ideally, this should be achieved by incorporating design elements that subtly disrupt users’ familiar routines, prompting a cognitive response akin to being in a public setting.
Toward a Privacy-Sensitive Future
To conclude, traversing the maze of privacy behavior is an intricate task, yet understanding the interplay of environmental cues, perception, attention, and behavior control can illuminate our path forward. Preserving privacy might be bolstered by subtly simulating publicity within digital environments, evoking vigilance and awareness akin to our natural responses in public settings. By doing so, we can harness our inherent cognitive and physiological processes.
Endnotes
1. Eberhardt, L.V., Strauch, C., Hartmann, T.S., and Huckauf, A. Increasing pupil size is associated with improved detection performance in the periphery. Attention, Perception, & Psychophysics 84, 1 (2022), 138–149; https://doi.org/10.3758/s13414...
2. Sauter, M., Liesefeld, H.R., Zehetleitner, M., and Müller, H.J. Region-based shielding of visual search from salient distractors: Target detection is impaired with same- but not different-dimension distractors. Attention, Perception, & Psychophysics 80, 3 (2018), 622–642; https://link.springer.com/arti...
3. Posner, M.I. and Petersen, S.E. The attention system of the human brain. Annual Review of Neuroscience 13, 1 (1990), 25–42.
4. Kahneman, D. Thinking, Fast and Slow. Farrar, Straus, and Giroux, 2011.
5. Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics SMC-13, 3 (1983), 257–266.
6. Lau, W.K., Sauter, M., Bulut, C., Eberhardt, L.V., and Huckauf, A. Revisiting the watching eyes effect: How emotional expressions, sex, and age of watching eyes influence the extent one would make stereotypical statements. Preprint, 2023; https://doi.org/10.21203/rs.3....
Wee Kiat Lau
Lisa Valentina Eberhardt
Marian Sauter
Marian Sauter is a principal investigator in the General Psychology group at Ulm University. He is interested in selective attention and exploratory interactive search. He also works on applied topics such as using gaze to predict quiz performance in online learning environments.
[email protected]
View All Marian Sauter's Posts
Anke Huckauf
Anke Huckauf is the chair of General Psychology at Ulm University, specialized in perceptual psychology and human-computer interaction. Currently, she serves as dean of the Faculty of Engineering, Informatics, and Psychology at Ulm University.
[email protected]
View All Anke Huckauf's Posts
Seven heuristics for identifying proper UX instruments and metrics
Authors:
Maximilian Speicher
Posted: Tue, September 19, 2023 - 11:09:00
In the two previous articles of this series, we first learned that metrics such as conversion rate, average order value, or Net Promoter Score are not suitable for reliably measuring user experience (UX) [1]. The second article then explained that UX is a latent variable and, therefore, we must rely on research instruments and corresponding composite indicators (which produce a metric) to measure it [2]. Now, the logical next question is how we can identify those instruments and metrics that do reliably measure UX. This boils down to what is called construct validity and reliability, to which we will give a brief introduction in this final article before deriving easily applicable heuristics for practitioners and researchers alike who don’t know which UX instrument or metric to choose.
Construct validity refers to the extent to which a test measures what it is supposed to measure [3]. In the case of UX, this means that the instrument or metric should measure the concept of UX as it is understood in the research literature, and not, for example, only usability. One good way to establish construct validity is through factor analysis [3].
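To make the factor-analysis step a bit more concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are available) that fits a two-factor model to simulated questionnaire responses and inspects the loadings. It is exploratory rather than the confirmatory factor analysis typically reported in validation papers, and the item data and factor structure are invented purely for illustration—they are not drawn from any published UX instrument.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 200 respondents answering 8 questionnaire items that we
# suspect reflect two latent constructs (e.g., pragmatic and hedonic quality).
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 2))  # the two underlying constructs
loadings = np.array([
    [1.0, 0.9, 0.8, 0.7, 0.0, 0.1, 0.0, 0.2],   # construct 1 drives items 1-4
    [0.0, 0.1, 0.2, 0.0, 1.0, 0.9, 0.8, 0.7],   # construct 2 drives items 5-8
])
items = latent @ loadings + 0.3 * rng.normal(size=(200, 8))

# Fit a two-factor model and inspect which items load on which factor;
# a clean two-block pattern supports the intended construct structure.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))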
Construct reliability refers to the consistency of a test or measure [4]. Put differently, it is a measure of how reproducible the results of an instrument or metric are. A good way to establish construct reliability is through studies that assess the test-retest reliability of the instrument or metric, as well as its internal consistency, such as Cronbach’s alpha [4].
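For the internal-consistency side, Cronbach’s alpha can be computed directly from a respondents-by-items score matrix. The following Python sketch implements the standard formula; the function name and the toy Likert responses are assumptions made for demonstration, not data from any real instrument.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: (n_respondents, n_items) matrix of scores for one scale
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering a 4-item scale on a 1-7 range
responses = np.array([
    [6, 5, 6, 5],
    [2, 3, 2, 2],
    [7, 6, 7, 6],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 2))  # values near 1 indicate high internal consistency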
In addition to that, the Joint Research Centre of the European Commission (JRC) provides a “Handbook on Constructing Composite Indicators” [5], which summarizes the proper process in terms of a 10-step checklist. We build on all of the above for our following list of seven heuristics for identifying proper UX instruments and metrics.
Heuristic 1: Is there a paper about it? If there is no paper about the instrument and/or metric in question, there’s barely a chance you’ll be able to answer any of the following questions with yes. So, this should be the first thing to look for. A peer-reviewed paper published in a scientific journal or conference would be the best case, but there should be at the very least some kind of white paper available.
Heuristic 2: Is there a sound theoretical basis? In the case of UX, this means, does the provider of the instrument and/or metric clearly explain their understanding of UX and, therefore, what their construct actually measures? The JRC states: “What is badly defined is likely to be badly measured” [5].
Heuristic 3: Is the choice of items explained in detail? Why were these specific variables of the instrument chosen, and not others? And how do they relate to the theoretical framework, that is, the understanding of UX? The JRC states: “The strengths and weaknesses of composite indicators largely derive from the quality of the underlying variables” [5].
Heuristic 4: Is an evaluation of construct validity reported? This could be reported in terms of, for example, a confirmatory factor analysis [3]. If not, you can’t be sure whether the instrument or metric actually measures what it’s supposed to measure.
Heuristic 5: Is an evaluation of construct reliability reported? This could be reported in terms of, for example, Cronbach’s alpha [4]. If not, you can’t be sure whether the measurements you obtain are proper and reproducible approximations of the actual UX you want to measure.
Heuristic 6: Is the data that’s combined to form the metric properly normalized? This is necessary if the items in an instrument have different units of measurement. The JRC states: “Avoid adding up apples and oranges” [5].
Heuristic 7: Is the weighting of the different factors that form the metric explained? Factors should be weighted according to their importance. “Combining variables with a high degree of correlation” (double counting) should be avoided [5].
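To illustrate heuristics 6 and 7 together, here is a minimal Python sketch of how a composite indicator might be built from one attitudinal and one behavioral item: each variable is first min-max normalized onto a common 0–1 scale (so we are not adding up apples and oranges), the behavioral item is inverted so that higher always means better, and the weights are explicit and documented. The item names, values, and the 0.6/0.4 weighting are purely hypothetical choices for this example, not a recommendation and not how any particular published metric is defined.

import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    # Rescale a variable to [0, 1] so items with different units become comparable.
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical per-participant measurements with different units and scales
questionnaire_score = np.array([72.5, 85.0, 60.0, 90.0])      # 0-100 attitudinal score
task_time_seconds   = np.array([210.0, 150.0, 320.0, 120.0])  # behavioral; lower is better

attitudinal = min_max_normalize(questionnaire_score)
behavioral  = 1.0 - min_max_normalize(task_time_seconds)  # invert: higher = better

# Explicit, documented weights rather than an implicit 50/50 split
weights = {"attitudinal": 0.6, "behavioral": 0.4}
composite = weights["attitudinal"] * attitudinal + weights["behavioral"] * behavioral
print(np.round(composite, 2))  # one composite UX-style score per participant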
In the following, the application of these heuristics will be demonstrated through two very brief case studies.
Case Study 1: UEQ
The User Experience Questionnaire (UEQ) is a popular UX instrument developed at SAP AG.
- H1: There is a peer-reviewed research paper about UEQ, which is available at [6]. ✓
- H2: The paper clearly defines the authors’ understanding of UX, and they elaborate on the theoretical background. ✓
- H3: The paper explains the selection of the item pool and how it relates to the theoretical background. ✓
- H4: The paper describes, in detail, two studies in which the validity of UEQ was investigated. ✓
- H5: The paper reports Cronbach’s alpha for all subscales of the instrument. ✓
- H6: Not applicable, since UEQ doesn’t explicitly define a composite indicator. However, a composite indicator can be constructed from the instrument.
- H7: See H6.
Case Study 2: QX score “for measuring user experience”
This metric was developed by SaaS provider UserZoom and is now provided by UserTesting. It is a composite of two parts: 1) the widely used SUPR-Q instrument and 2) the individual task success rates from the user study where the metric was measured, in a 50/50 proportion.
- H1: There is no research paper, but at least a blog post explaining the instrument and metric. ✓
- H2: There is no clear definition of UX given. The theoretical basis for the metric is the assumption that all existing UX metrics use either only behavioral or only attitudinal data. There is no well-founded explanation given why this is considered problematic. The implicit reasoning is that only by mixing behavioral and attitudinal data can we properly measure UX, which is factually incorrect (cf. [2]). ❌
- H3: The metric mixes attitudinal (SUPR-Q) and behavioral (task success) items, but no well-founded reasoning is given as to why only task success rate was chosen, or why this would improve SUPR-Q, which is already a valid and reliable UX instrument in itself. ❌
- H4: No evaluation of construct validity is reported. ❌
- H5: No evaluation of construct reliability is reported. ❌
- H6: There is no approach to data normalization reported; the metric seemingly adds up apples and oranges. ❌
- H7: There is no reasoning given for the weighting of the attitudinal and behavioral items. ❌
In conclusion, the seven heuristics provided in this article serve as a useful guide for identifying proper UX instruments and metrics. By considering construct validity and reliability, as well as following the JRC’s 10-step checklist, practitioners and researchers alike can make informed decisions when choosing a UX instrument or metric. Not all instruments or metrics will pass all of these heuristics, but the more of them that are met, the more confident one can be that the chosen instrument or metric properly measures UX. If in doubt, choose the instrument or metric that checks more boxes. Some heuristics, like H1, are not strictly necessary, and H6 and H7 apply only to composite indicators; there may also be valid instruments or metrics that fail some of these heuristics, and vice versa. The heuristics are meant to provide a robust and quick framework for evaluating UX instruments and metrics, but ultimately the best approach will depend on the specific research or design project.
Endnotes
1. Speicher, M. Conversion rate & average order value are not UX metrics. UX Collective. Jan. 2022; https://uxdesign.cc/conversion...
2. Speicher, M. So, how can we measure UX? Interactions 30, 1 (2023), 6–7; https://doi.org/10.1145/357096...
3. Kline, R.B. Principles and Practice of Structural Equation Modeling. Guilford Publications, 2015.
4. Cronbach, L.J. and Meehl, P.E. Construct validity in psychological tests. Psychological Bulletin 52, 4 (1955), 281.
5. Joint Research Centre of the European Commission. Handbook on Constructing Composite Indicators: Methodology and User Guide. OECD publishing, 2008; https://www.oecd.org/sdd/42495...
6. Laugwitz, B., Held, T., and Schrepp, M. Construction and evaluation of a user experience questionnaire. Symposium of the Austrian HCI and Usability Engineering Group. Springer, Berlin, Heidelberg, 2008, 63–76.
Maximilian Speicher
Maximilian Speicher is a computer scientist, designer, researcher, and ringtennis player. Currently, he is director of product design at BestSecret and cofounder of UX consulting firm Jagow Speicher. His research interests lie primarily with novel ways to do digital design, usability evaluation, augmented and virtual reality, and sustainable design.
[email protected]
View All Maximilian Speicher's Posts
Toward a consensus on research transparency for HCI
Authors:
Florian Echtler,
Lonni Besançon,
Jan Vornhagen,
Chat Wacharamanotham
Posted: Fri, August 25, 2023 - 11:15:00
During the past few years, the Covid-19 pandemic has resulted in an unprecedented amount of research being conducted and published in a very short timeframe to successfully analyze SARS-CoV-2, its vaccines, and its treatments [1]. Concurrently, the pandemic also highlighted the limitations of our publication system, which enables and incentivizes rapid dissemination and questionable research practices [2].
While HCI research is usually not a foundation for life-and-death decisions, we face similar problems. HCI researchers and CHI community members have long criticized a lack of methodological [3] and statistical rigor [4] and a lack of transparent research practices in quantitative [5] and qualitative works [6]. Research transparency can alleviate these issues, as it facilitates the independent verification, reproduction, and—wherever appropriate—replication of claims. Consequently, we argue that the CHI community needs to move toward a consensus on research transparency.
Reaching this consensus is no trivial task, as HCI is an inter- or transdisciplinary field of research, employing myriad methods, and therefore cannot be subjected to a single, rigid rule set. For example, qualitative and quantitative data cannot be shared to the same degree without potentially identifying participants. Although the context of each research project might constrain feasible transparency practices, we believe that the overarching principle of research transparency can be applied to fit the vast variety of research that exists within our field.
Despite the benefits of transparent research practices, many of our efforts for more transparency within SIGCHI research have been met with considerable pushback. Junior researchers are worried about having their workload increase even more through additional documentation requirements, while more senior members of the community—who have built their careers on a research model that did not yet focus on transparency—may fail to see noticeable value in the additional effort required. Qualitatively oriented researchers, both within [6] and outside HCI [7], do not see themselves represented in a discussion that often focuses on statistics, data collection, and related topics. Meanwhile, some quantitative researchers mistakenly assume that preregistration limits potential exploratory analysis [8].
Ideally, any publicly available record of scholarship—whether it is a research paper, an essay, a dataset, or a piece of software—should be adequately transparent to enable the public to assess its quality and subsequent research to build upon it.
After all, the goal of research is not to pad one’s own h-index or to get a p-value below 0.05, but rather to increase the total sum of human knowledge. And in order to do so, we need to be able to build on the research that other scholars have done before us, so we can see farther. For this reason, transparency is indeed the one fundamental principle behind the open science movement, and it is so general that any of the subcommunities within HCI should be able to follow it.
The slow but steady move toward more transparency across all disciplines of research also provides a unique opportunity for the HCI community: At least some of the reluctance to adopt this approach stems from a lack of supporting tools. It is our field’s core expertise to uncover and analyze problems, and iteratively develop and evaluate potential solutions. Imagine that writing your statistical analysis would immediately update the figures in your paper and provide an automatic appendix with all calculations. Or that the coding process for interview quotes would keep a record of the shifting themes until the final result is reached, complete with an accurate log of the decisions that led to this point. These kinds of tools could be valuable far beyond HCI, for example, in related disciplines such as psychology and sociology, helping to establish a more transparent process there as well. Besides, the interdisciplinary nature of HCI allows us to test and debate innovations in methods, tools, and policies. We believe that this richness in perspectives and the technological and design capacity in our field could help unblock open science conundrums, for example, challenges in sharing qualitative data [8].
For the future, we hope for guidelines that emphasize the values and opportunities within transparent research. We hope for reviewers who take them to heart and include the transparency of a manuscript in their assessment, rather than treating it as a superficial novelty. We hope for established researchers to lead by example, providing transparent insights into their research process, and for hiring committees and examination boards to value these contributions in their own right. Last but not least, we hope for all members of the community to consider how they can increase transparency in their research and publication practices.
Endnotes
1. Callaway, E., Ledford, H., Viglione, G., Watson, T., and Witze, A. COVID and 2020: An extraordinary year for science. Nature 588, 7839 (2020), 550–553; pmid:33318685
2. Besançon, L., Bik, E., Heathers, J., and Meyerowitz-Katz, G. Correction of scientific literature: Too little, too late!. PLOS Biology 20, 3 (2022), e3001572; https://doi.org/10.1371/journa...
3. Greenberg, S. and Thimbleby, H. The weak science of human-computer interaction. (Dec. 1991); https://doi.org/10.11575/PRISM/30792
4. Cockburn, A. et al. HARK no more: On the preregistration of CHI experiments. Proc of the CHI Conference on Human Factors in Computing Systems. ACM, New York, 2018, 1–12; https://doi.org/10.1145/3173574.3173715
5. Vornhagen, J.B. et al. Statistical Significance Testing at CHI PLAY: Challenges and Opportunities for More Transparency. Proc. of the Annual Symposium on Computer-Human Interaction in Play. ACM, New York, 2020, 4–18.
6. Talkad Sukumar, P., Avellino, I., Remy, C., DeVito, M.A., Dillahunt, T.R., McGrenere, J., and Wilson, M.L. Transparency in qualitative research: Increasing fairness in the CHI review process. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020, 1–6.
7. Kapiszewski, D. and Karcher, S. Transparency in practice in qualitative research. PS: Political Science & Politics 54, 2 (2021), 285–291; https://doi.org/10.1017/S1049096520000955
8. Wacharamanotham, C., Eisenring, L., Haroz, S. and Echtler, F. Transparency of CHI research artifacts: Results of a self-reported survey. Proc. of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020; https://dl.acm.org/doi/10.1145...
Florian Echtler
Florian Echtler is an associate professor at Aalborg University in Denmark and a researcher in human-computer interaction. His core topics are centered around the intersection of ubiquitous computing, peer-to-peer networking, and security/privacy. He is a founding editor of the Journal of Visualization and Interaction (JoVI; https://www.journalovi.org/).
[email protected]
View All Florian Echtler's Posts
Lonni Besançon
Lonni Besançon is an assistant professor at Linköping University working with the Division of Media and Information Technology. His research focus is on 3D interaction for visualization research, on the one hand, and the role of visual representation for the understanding and communication of statistical uncertainty, on the other. He is a founding editor of the Journal of Visualization and Interaction (JoVI; https://www.journalovi.org/).
[email protected]
View All Lonni Besançon's Posts
Jan Vornhagen
Jan Vornhagen is a Ph.D. fellow at the IT University of Copenhagen. He studies research practices and theory use in HCI games research, as well as the cognitive and emotional experience of videogames.
[email protected]
View All Jan Vornhagen's Posts
Chat Wacharamanotham
Chat Wacharamanotham is an independent researcher. His research focuses on understanding and developing tools for planning, reporting, reading, and sharing quantitative research. He co-organizes the Transparent Statistics in Human–Computer Interaction group (https://transparentstatistics.org) and is a founding editor of the Journal of Visualization and Interaction (JoVI; https://www.journalovi.org/).
[email protected]
View All Chat Wacharamanotham's Posts
Toward a mindful designer: Mindfulness as an essential skill for interaction designers
Authors:
Jia Shen
Posted: Tue, August 22, 2023 - 1:07:00
As digital technology designers, our role has never been more crucial than it is now. Decades of technological and commercial advancements have brought computing from research labs to business offices, and now to the homes and pockets of everyday users. Our focus has expanded from computer interfaces to user experiences to design thinking for executives in corporate boardrooms. Despite improvements in user experiences, we have observed that users are often becoming more mindless, joyless, and anxious in today’s hyperconnected world. With the rapid advancement of AI, wearables, and pervasive computing, these challenges are growing.
As designers, what must we add to our toolkit to reimagine and design a sustainable future? This essay advocates for the importance of mindfulness in the skill set of interaction designers. It highlights the specific areas where mindfulness can enhance designers’ abilities, and calls for the design community to work together toward incorporating mindfulness into their practice.
What Is Mindfulness and Why Should Designers Care?
During a visit to Google, the well-known Zen master Thich Nhat Hanh shared a story with Google employees. The story involved a person riding a horse at a fast pace; when asked where they were going, they replied, “I don't know, ask the horse!” Thay, as he is known to his followers, pointed out that our current use of technology can lead us to escape from ourselves, our loved ones, and the natural environment. When we feel empty or unable to confront our inner struggles, we often seek external distractions [1].
Today’s technology creates unprecedented opportunities for us to turn our attention outward and away from ourselves. However, the more we disconnect from ourselves, the less content we become. No wonder that mental health challenges are prevalent and the national happiness index has decreased in the past decade, despite the increasing abundance of material goods and the widespread use of digital technology.
Mindfulness practice is a way of being ourselves in the present moment, and a powerful tool to look inside ourselves. Mindfulness originates from Zen Buddhism. It is a state of being characterized by a keen awareness of the present moment. The Sanskrit term for mindfulness, smriti, means “remembering.” Mindfulness is remembering to come back to the present moment. The character in Chinese for mindfulness, 念, consists of two parts: the upper part represents “now” and the lower part represents “mind” or “heart” [2].
At the core of Buddha’s teachings lies the concept of Right Mindfulness (samyak smriti). Buddhist psychology states that attention is a universal trait, meaning that we are constantly giving our attention to something. This attention can be appropriate, such as when we are fully present in the moment, or inappropriate, such as when we focus on something that takes us away from the present. Right Mindfulness is characterized by acceptance without judgment or reaction and is inclusive and compassionate. According to Buddha’s teachings, Right Mindfulness offers Seven Miracles, including the ability to be present and fully engage with the present moment, to bring forth the presence of others (such as the sky, flowers, or our children), to nourish the object of our attention, to alleviate others’ suffering, to explore deeply (vipassana), to gain understanding, and to bring about transformation [2].
Although stemming from Zen Buddhism, mindfulness is not a religion itself. Rather, it is a widely adopted practice around the world to improve one’s health, personal growth, and overall well-being. In the Western world, interest in mindfulness practice grew during the 1960s, leading to an era of scientific research on this ancient practice. For instance, Mindfulness-Based Stress Reduction (MBSR), founded by Jon Kabat-Zinn in the 1970s, is regarded as one of the leading programs for reducing stress, anxiety, depression, and PTSD, with significant and reliable results in various clinical studies [3]. Today, clinicians, business leaders, and educators are exploring ways to integrate mindfulness practice into their respective fields.
In the digital technology design community, there is a growing voice advocating for a new approach to design, one that involves looking inward. In a recent special issue of Interactions, Kentaro Toyama [4] suggests that our focus on external outcomes may be the root cause of our technological problems. He urges designers to turn to faith and inward examination as a possible solution. Meanwhile, Kabat-Zinn encourages designers to practice mindfulness themselves before teaching or supporting others. It’s time for us to consider how mindfulness can benefit us and act accordingly.
How Can Mindfulness Help Designers?
Mindfulness enables designers to see the true nature of the problems they are tackling, and provides the foundations for a paradigm shift in the design process and outcomes. Some initial considerations include:
Design process
Traditional user experience design follows the requirement-design-evaluation framework. As today’s problems are becoming more complex, challenging, and wicked, a mindful designer is aware of themselves and others, and they bring mindfulness to the design process, paying close attention to each step. They deeply observe and listen, resulting in greater empathy and deeper understanding of the problem space before jumping to the solution space.
Design for slow change
In the HCI and interaction design community, Martin Siegel and Jordan Beck [5] sketched out Slow Change Interaction Design, where they called for a paradigm shift in designers’ mindsets in order to create interactive technologies that promote long-lasting attitudinal and behavioral change.
A designer who practices mindfulness understands that problem-solving is not a one-time event, but rather a continuous process of improving the current state toward a better future state. They acknowledge that change is inevitable and strive to learn and grow with their users in a constantly evolving technological environment.
Design for a sustainable future
Considering the interconnected nature of all things, a mindful designer takes a systemic approach to their work. They prioritize sustainable and inclusive design, taking into account the impact on individuals with varying abilities, communities, and the environment. They recognize that quick fixes and immediate results are often unrealistic, and prioritize long-term solutions.
Toward Mindful Technology Use Practice
In his book, Peace is Every Step, Thich Nhat Hanh [6] offers suggestions on how to incorporate mindfulness practices into everyday life. Following his examples, we offer the following mindful technology practices regarding smartphone use to technology designers and users:
Before using my phone, I know what I use it for.
I am aware that happiness depends on my mental attitude and not on external conditions, and that I already have more than enough to be happy in the present moment.
The phone and I are one.
I will practice coming back to the present moment by being mindful of my breathing, posture, and body.
Takeaway
When technology is the horse, mindfulness is the harness to rein it back toward humanity. Mindfulness offers an ancient yet rejuvenating perspective that is crucial to our digital design community. Designers who practice mindfulness can transform problems and bring peace, joy, and freedom to both users and themselves. May you also find joy in practicing mindfulness.
Endnotes
1. Hanh, T.N. The horse is technology. Plum Village Magazine (Summer 2014), 5–9.
2. Ñāṇamoli, B. and Bodhi, B. The Middle Length Discourses of the Buddha. Wisdom Publications, 1995.
3. Kabat-Zinn, J. Full Catastrophe Living: Using the Wisdom of Your Body and Mind to Face Stress, Pain, and Illness, 15th Anniversary Ed. Delta Trade Paperback/Bantam Dell, New York, NY, 2005.
4. Toyama, K. Technology and the inward turn of faith. ACM Interactions 29, 4 (2022), 36–39.
5. Siegel, M. and Beck, J. Slow change interaction design. ACM Interactions 21, 1 (2014), 28–35.
6. Hanh, T.N. Peace is Every Step. Bantam Books, 1992.
Jia Shen
Jia Shen is a professor of information systems in the Norm Brodsky College of Business at Rider University. Her research is at the intersections of experience design, cognition, digital innovation, and, most recently, digital well-being. Previously she researched social commerce, virtual world systems, and online learning.
[email protected]
View All Jia Shen's Posts
Privacy and ethics concerns using UX research platforms
Authors:
Michal Luria
Posted: Wed, July 26, 2023 - 3:03:00
Remote user experience (UX) research is not a new phenomenon [1], but the pandemic has accelerated it, necessitating the adaptation of traditionally face-to-face methods to work remotely [2]. The past few years have marked the growth of online platforms such as dscout, Lookback, Metricwire, and UserTesting, to name just a few. Remote UX research platforms are marketed primarily as tools for industry UX researchers, but they are also used occasionally by academics.
These user-friendly interfaces promise remote UX researchers everything they need to conduct a successful study. They support many methods, and provide tools for all research stages, from recruitment via participant pools to built-in analysis. But the platforms’ quick-and-easy approach also creates ethical issues when they don’t fully consider researchers’ obligations toward their participants [3].
UX research frequently involves people, and therefore risks causing harm [4]. For example, people may be asked to share personal information about sensitive topics as part of research, like their sexual orientation or their use of adult websites. Without the right safeguards, this information could get into the wrong hands. Or participants may be asked to view social media content and subsequently be exposed, perhaps unintentionally, to harmful content. These scenarios are even more delicate if a study involves minors or other vulnerable groups [5].
In corresponding with several UX research platforms, I was surprised to discover consistent ethical and privacy-related gaps in the platforms’ ability to address possible risks in human subjects research, particularly in their consent, data ownership, and safety protocols. Here, I outline how these gaps could be addressed for better research practices.
Consent should be mandatory. When using a service’s participant pool, researchers can, but are not obligated to, include a consent process. Platforms say that, when a participant signs up for their platform, they agree to their terms and conditions, suggesting that this agreement covers participation in any research study without further consent.
This is not compliant with ethical research standards and the requirement for participants to be informed: Participants should be able to explicitly consent to participation in every study, after having been fully informed about its goals, risks, and outcomes [6].
Not only is obtaining consent optional on some UX research platforms, but collecting signatures often requires a premium subscription. The alternative for non-premium customers would be to put the consent information into a survey question and ask participants to click “Agree.” For the case of parental consent for minors as required by laws like the General Data Protection Regulation (GDPR), one platform suggested simply including another survey question that asks the minor whether their parents agreed to their participation.
Data should be owned by researchers, not the platform. Normally, personal data for research would be collected for “legitimate research purpose only” [7]. But many UX research platforms claim ownership of the data in their terms of service, allowing them to use it in any way they see fit. Many platforms explain that they use this participant data to improve their services, but nothing prevents them from sharing or selling it without participants’ knowledge.
In my correspondence with one such platform, I inquired about the ability to delete data at the completion of a study. The platform agreed to give me special permissions to request data deletion on behalf of my participants. This would at least limit platforms’ access to participants’ data only to the duration of the study. However, many platforms don’t allow researchers to download data, meaning that data deletion would remove access for the researchers too.
Researchers should mitigate adverse events. Researchers must plan for the possibility of unintended harm to participants’ safety and well-being [8]. In a forthcoming study for which I considered using a remote UX research platform, there was a rare possibility that participants would experience emotional distress. To account for this scenario, our research team included a certified clinician who could reach out to participants, assess suspected harm, and fulfill any obligation to escalate and report.
Platforms usually withhold participants’ contact information to spare researchers from additional exposure to identifiable data. In a case of suspected harm to a participant, however, my expectation was for platforms to collaborate to best support participants’ well-being. Instead, the platform stated that in such an event, I would report it to them and they would handle it internally.
This is unacceptable. As a researcher, the consent agreement is between me and the participant. I am responsible for any safety risks my research poses, including cases that require special attention. No matter how well prepared the platform is, they are not the ones who should be making the call on how to address concerning responses to research questions.
Concluding thoughts. The current state makes it hard for university researchers to use these remote UX research platforms. Universities have detailed training and research protocols for anyone involved in human subjects research, and many of them would not approve the terms of current platforms. By contrast, many companies and organizations do not have strong ethics review processes for research conducted by employees or contractors, and they commonly use remote UX research platforms. Even though many industry researchers have gone through academic ethical training, these platforms provide an easily overlooked path toward conducting human subjects studies with few safeguards and considerations about how data is collected, stored, and used.
This may be because many of these platforms were designed with the researcher’s user experience as a first priority, leaving participants’ well-being as an afterthought. The result is that participants are exposed to unnecessary risk of harm. While in most instances this would go unnoticed, in extreme cases things could go very wrong.
But not all UX research platforms are created equally—there are a few platforms that prioritize privacy and participant safety. They are not as user friendly and usually do not include participant recruitment. But as recruitment in these safer alternatives is under the researchers’ control, so is the data, the consent process, and the handling of adverse events. We should be able to have the best of both worlds—the capabilities and convenience we desire along with the privacy and safety practices we need.
Endnotes
1. Moon, Y. The effects of distance in local versus remote human-computer interaction. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1998.
2. Süner-Pla-Cerdà, S. et al. Examining the impact of Covid-19 pandemic on UX research practice through UX blogs. In Design, User Experience, and Usability: UX Research and Design. Springer, Cham, 2021.
3. Fiesler, C. et al. Exploring ethics and obligations for studying digital communities. Proc. of the 2016 ACM Conference on Supporting Group Work. ACM, New York, 2016.
4. National Institutes of Health. Human subjects research; https://grants.nih.gov/policy/...
5. Antle, A.N. The ethics of doing research with vulnerable populations. Interactions 24, 6 (2017), 74–77.
6. Vitak, J. et al. Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. Proc. of the 19th ACM Conference on Computer-Supported Cooperative Work. ACM, New York, 2016.
7. Alsmadi, S. Marketing research ethics: Researcher’s obligations toward human subjects. Journal of Academic Ethics 6 (2008), 153–160.
8. Do, K. et al. “That’s important, but…”: How computer science researchers anticipate unintended consequences of their research innovations. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2023.
Michal Luria
Michal Luria is a researcher at the Center for Democracy & Technology. Her work makes use of immersive and human-centered design research methods to envision and critique interactions with technology. Using these methods, she translates research insights into thought-provoking interactions and necessary discussions of technology ethics and policy.
[email protected]
AI superpowers and the human element: Implications for researchers
Authors:
Rojin Vishkaie
Posted: Thu, July 06, 2023 - 12:04:00
Does our future have researchers who are robots? Are computers going to become so smart that they can boss these researchers around? And, if robots and computers can accomplish everything, then what are human researchers going to do? To explore these questions, I have been reading AI Superpowers: China, Silicon Valley, and the New World Order [1], a 2018 nonfiction book by Kai-Fu Lee, an artificial intelligence pioneer and venture capitalist.
The book draws on Lee's technological knowledge and professional expertise to offer a perspective on the interplay between the U.S. and China in the age of artificial intelligence, and its impact on the world. Lee believes that the U.S. is taking advantage of being the first mover in the AI race, but that China is ahead in the race thanks to its data supply. From his perspective, elite AI expertise in the U.S. is less important than China’s AI implementation superpower.
But Lee’s argument goes beyond the AI race between the U.S. and China; it’s also about the impact of AI on humans. In his view, the U.S. and China have leapfrogged, and now sit atop, the rest of the world. The U.S. and China’s disproportionate power has worsened global inequality, and the fear of possible massive job losses because of AI will likely reduce the job market's ability to cope and adapt. Losing their jobs will negatively affect people's sense of purpose and personal identity, as well as their ability to survive, which cannot be addressed by socioeconomic concepts like universal basic income.
But despite the many warnings, Lee has a positive outlook that AI and humans can productively coexist. Lee’s optimistic belief suggests that humans are the catalyst for shaping the future of AI. He suggests that the building blocks of a successful AI superpower are abundant data, AI scientists, AI-friendly policies, and tenacious entrepreneurs.
Lee believes that China has access to a massive quantity of data due to its large base of users, while the U.S. has an edge in elite AI expertise. He further argues that the Chinese government has a more sophisticated structure to mobilize resources and implement strategic directions for long-term goals (e.g., Development Plan for a New Generation of Artificial Intelligence). He postulates that the prevailing Silicon Valley mentality is mission-driven and techno-optimistic, believing that innovative thinking can change the world. In contrast, China has focused on market and financial profit. In Lee’s view, the AI new world order is bipolar and focused on AI superpowers versus the rest of the world, but the actions of humans themselves are the most important factor in shaping the future of the human ability to coexist with AI.
What I’ve outlined is not a theoretical argument about the future, but rather a perspective on researchers’ coexistence with AI, considering Lee's views on the potential impact of AI on human work. I believe that researchers can create an ecosystem where AI enhances productivity (e.g., using ChatGPT [2] to assist with data management and insights). The ecosystem of research and AI can also be used for socially beneficial activities (e.g., generating knowledge with practical applications).
Endnotes
1. Lee, K.F. AI Superpowers: China, Silicon Valley, and the New World Order. Houghton Mifflin, Boston, MA, 2018. ISBN: 9781328546395
2. Lock, S. What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian. Dec. 5, 2022; https://www.theguardian.com/te...
Rojin Vishkaie
Rojin Vishkaie is a senior user experience researcher at Amazon. Her work focuses on human-centered design and evaluation of technology, with recent use cases in devices, mixed-reality education, museums, and children’s gaming and toys. Opinions expressed are solely her own and do not express the views or opinions of her employer.
http://rojinvishkaie.com [email protected]
Why codesigning AI is different and difficult
Authors:
Malak Sadek,
Rafael Calvo,
Céline Mougenot
Posted: Tue, June 27, 2023 - 12:04:00
It is estimated that 98 percent of the population are novices with regard to technology (excluding extremes such as infants) [1]. It is this 98 percent, however, that form the main chunk of users and stakeholders affected by AI-based systems. It then makes sense that members of this segment of the population should be involved in designing these systems beyond just the small, homogenous set of experts currently involved.
There have been countless calls for the introduction of a transdisciplinary, participatory design process for AI/ML systems [2,3]. Such a collaborative design (codesign) process has been heralded as especially useful in aiding explainability and transparency [4], embedding values into AI-based systems [5], providing accountability, and mitigating downstream harms arising from several cascading biases and limitations [6]. There have also been calls for collaboration within the entire AI pipeline, including in data creation and selection, instead of having designers at the front end of the process and engineers at the back end [7]. In fact, it has been said that the only way to combat existing structural and data biases that creep into AI systems is to step away from custom-built, solely technical solutions and view AI systems as sociotechnical systems [8]. By looking across them as opposed to inside of them and using open constructive dialogues, collaborations, and group reflections [9], we can bridge the gulf between stakeholder visions and what gets implemented [7]. Having the needed voices give their input in a meaningful, impactful way through codesign activities can allow us to examine this broader frame of the wider sociocultural contexts in which AI systems are being used [10].
Despite the numerous cited benefits of and calls for codesigning AI systems, it is still not a widespread practice. The communication gap between designers’ products and services, developers’ algorithms and systems, and users’ needs and applications is one of the largest challenges when it comes to using AI systems in real-world settings, where their implications could be life-altering for different stakeholders [10].
Bridging this gap is a unique and notoriously difficult process for AI systems, as they have several inherent differences from other types of systems that require additional considerations across different design and development phases [10]. We summarize several of these challenges below in an effort to consolidate the barriers to codesigning AI-based systems. The goal is to start a conversation on possible solutions and remedies to enable a much-needed, more widespread adoption of codesign practices for these systems.
Why Is Codesigning AI Different and Difficult?
Data considerations
- There are several extra considerations when it comes to AI systems about which datasets to use or what type of data to collect, as these systems are based on existing training data from the get-go, as opposed to other systems where user data can be collected after the system has been created for evaluation purposes. Additionally, assessing the initial feasibility of many ideas and being able to develop prototypes for them will be largely dependent on whether the data exists or can be collected.
- There are also several ethical and moral concerns that arise regarding data provenance and data quality.
- Both technical and nontechnical participants tend to focus on the ML model itself, overlooking the rest of the wider system, the data needed, and the broader sociotechnical and interactional contexts surrounding the system and in which it is embedded. This is especially true when tackling value-related or nonfunctional requirements such as fairness. Finally, there is a general lack of awareness regarding values and nonfunctional requirements in the field and how different ML design decisions affect them, in addition to no regulations regarding their elicitation and use, and a deficiency of tools and methods for measuring them.
Ideation and communication
- It is difficult for technical experts to explain to users and nontechnical experts an AI’s behavior, what counts as AI, and what it can/cannot do.
- It is also difficult for designers to communicate AI design ideas, interactions, and appropriate use cases/user stories to technical experts, codesign partners, and users. It is also challenging to imagine ways to purposefully use AI to solve a given problem, creating an overall “capability uncertainty” [10].
- It is challenging for designers and developers to understand how to collaborate and co-ideate without a common language or shared boundary objects, especially when designers join late in the project.
User research
- Embedding values into AI systems must be planned for from the outset, instead of being added post hoc, because the way the system is built will influence which kinds of values and ethical systems can be ‘loaded in.’ There is then an additional step of eliciting the values that diverse stakeholders agree on early in the process and upholding them on a sociotechnical scale.
- User experience research also becomes more difficult for AI systems, as user profiles and preferences are created dynamically during interactions and there is no predefined persona or profile to check against.
Design, development, and testing
- Designing AI systems goes beyond simply grounding design outcomes in what is technically viable; it also requires understanding and designing for the critical trade-offs between different algorithms, and understanding the constraints, abilities, and interactions of the model.
- It is difficult to anticipate unpredictable or unwanted AI behaviors and effects, and how they will evolve over time. It is also often unclear whom to hold accountable, and very difficult to communicate all of this to users.
- It is also challenging to design and visualize branching, open-ended AI interactions, difficult to envision the potential of AI for use cases that do not already exist, and difficult to rapidly prototype AI ideas. Designing interactions for the incredibly complex and diverse outputs and possible errors of an AI system is challenging.
- Compared to other systems, it is more difficult to respect and implement some stakeholder values or nonfunctional requirements such as transparency and fairness, given the complex, uncertain, and often unpredictable nature of AI/ML systems (and the high-level, general nature of those values). It is also challenging to measure them within those systems.
This list is by no means comprehensive but instead aims to be a starting point for a dialogue on the different challenges that appear during various design stages for AI-based systems. By summarizing and consolidating these challenges, we can begin to discuss solutions and mitigations as a community in order to make codesigning AI systems an easier and more practical endeavor.
Endnotes
1. Sadler, J., Aquino Shluzas, L., and Blikstein, P. Abracadabra: Imagining access to creative computing tools for everyone. In Design Thinking Research. H. Plattner, C. Meinel, and L. Leifer, eds. Springer, 2018, 365–376.
2. Gabriel, I. Artificial intelligence, values, and alignment. Minds and Machines 30 (2020), 411–437.
3. Yu, B., Yuan, Y., Terveen, L., Wu, S., Forlizzi, J., and Zhu, H. Keeping designers in the loop: Communicating inherent algorithmic trade-offs across multiple objectives. Proc. of the ACM Designing Interactive Systems Conference. ACM, New York, 2020.
4. Chazette, L. and Schneider, K. Explainability as a non-functional requirement: Challenges and recommendations. Requirements Engineering 25 (2020), 493–514.
5. Crawford, K. and Calo, R. There is a blind spot in AI research. Nature 538 (2016), 311–313.
6. Delgado, F., Barocas, S., and Levy, K. An uncommon task: Participatory design in legal AI. Proc. of the ACM on Human-Computer Interaction 6, CSCW1 (2022).
7. Frost, B. and Mall, D. Rethinking designer-developer collaboration. 2020; https://www.designbetter.co/podcast/brad-frost-dan-mall
8. Ananny, M. and Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20, 3 (2018), 973–989.
9. Yang, Q., Scuito, A., Zimmerman, J., Forlizzi, J., and Steinfeld, A. Investigating how experienced UX designers effectively work with machine learning. Proc. of the Designing Interactive Systems Conference. ACM, New York, 2018.
10. Yang, Q., Steinfeld, A., Rosé, C., and Zimmerman, J. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. Proc. of the CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020.
Malak Sadek
Malak Sadek is a design engineering Ph.D. candidate working in the space of conversational agents and value-sensitive design. Her background is in computer engineering and human-computer interaction, and she uses this mixed background to constantly look for ways to bridge between technology and design.
[email protected]
Rafael Calvo
Rafael A. Calvo is a professor at Imperial College London focusing on the design of systems that support well-being in the areas of mental health, medicine, and education, and on the ethical challenges raised by new technologies.
[email protected]
Céline Mougenot
Hidden in plain sight: Discreet user interfaces
Authors:
Kyle Boyd,
Raymond Bond,
Maurice Mulvenna,
Karen Kirby,
Susan Lagdon,
Becca Hume
Posted: Thu, June 08, 2023 - 3:49:00
Do you feel safe when you travel alone? That was a question I asked my girlfriend (now wife) as we took a trip on a relatively empty train when I was visiting her at university. “I always carry my phone with me, but sometimes it's actually worse when the train is full; people watch what you are doing on your phone, and it's very off-putting,” was her reply.
Fast-forward 10 years and we are still very much in the same predicament. People, especially women, can feel vulnerable in certain situations. A 2019 survey [1] by SAP Concur, a company that provides travel and expense-management services to businesses, agrees. The survey revealed that safety is the number one concern for female business travelers. Indeed, 42 percent of respondents said that they have had a negative experience related to their gender. This isn’t limited to gender, though. Our intention is to highlight that people can feel vulnerable because of certain characteristics, including disability or race, and in certain situations.
With the rise of ubiquitous computing, we have the ability to use a plethora of devices that can connect to the Internet. This allows us to work, shop, and entertain ourselves from almost anywhere. Devices and their interfaces have become an extension of our own selves.
In recent decades, smartphones, emails, and instant messaging have been the predominant methods for human-to-human communication. And now with “generation mute” [2], instant text messaging, as opposed to making phone calls, has become the primary use of smartphones. With growing concerns over cybersecurity and privacy, instant messaging apps such as WhatsApp incorporate end-to-end encryption.
Modern text messaging apps allow users to send messages that are discreet in the sense that they are secure. But of course the interaction is not entirely private, since any onlooker could potentially see the user typing and sending a text message. A more discreet interaction might be the use of secret codes to input into the messaging app, which would be indecipherable to onlookers. However, this would add to the cognitive load and require a new communication code. Other apps such as Snapchat have functionality to send messages and photos that are only temporarily available; they are automatically deleted shortly after being viewed. These apps could be considered a “discreet user interface,” since though a spouse or family member might see the message or photo being sent or received, they cannot confirm it by interrogating the user’s stored media. With these examples, digital communication today is discreet in terms of privacy and data encryption during transit; however, the user interaction is not hidden from onlookers. With this can come unwanted attention, as onlookers can view and digest some of the information on your digital device, whether this be texting a friend from the train or doing work on your laptop on an airplane. In 2019, HP conducted a “creepers and peepers” survey [3], asking 3,000 people in the general population and 1,500 staff whether they would look at another person’s computer screen during a flight. Astonishingly, 4 out of 5 look at other people’s screens; 8 out of 10 restrict what they look at on their device just in case people are looking at their screen. This gives rise to questions about device-use etiquette and safety when using digital devices in public, stretching further for interfaces used for finance or security.
As interaction design researchers, we want to consider how design could help. Could we design a hidden user interface for safety purposes? This work aims to be both conceptual and thought-provoking.
Discreet User Interfaces
Do you remember the first time you saw the interface on a first-generation iPhone? It was staggering. Swiping, zooming, pinching, and animations that allowed for delight and control. All of these new controls allowed the content to take center stage and be the interface. Edward Tufte was correct when he said that “[o]verload, clutter, and confusion are not attributes of information, they are failures of design.” As interface designers, we strive for simple design that won’t get in the way, that will allow us to complete tasks in an efficient and satisfactory manner, producing interfaces that work. For many use cases, this is the appropriate design methodology to use when it comes to designing interfaces.
But what design methodology should we use if we wanted to create a “discreet user interface”? A discreet user interface is a user interface that looks like it’s doing one thing but actually is doing something entirely different. Alarm bells might start ringing when we put it this way, but that doesn’t mean people are doing anything sinister. Quite the opposite. A discreet user interface could be there to help and protect, even though it is “ethically deceiving” onlookers by preventing them from interpreting what the user is actually doing on their screen.
What was historically known as the “boss key” could be described as a kind of discreet user interface. A boss key is a key on the keyboard (e.g., F10 or ESC) that a user presses in haste during “play time” when their boss is nearby, displaying a fake spreadsheet, business graph, or slides to discreetly deceive the onlooking boss. Other interactions that we could also consider to be discreet interactions, or at least discreet features, include “Easter eggs,” which were hidden features inside Microsoft products. These could also be hidden games, such as pinball or a flight simulator. And of course even smartphones have hidden features, such as shaking the phone to undo an action, facilitated by sensors such as an accelerometer.
With this in mind, and perhaps more intuitively, discreet user interfaces could be those that a user can interact with while preventing an onlooker from knowing that they are using a computer. These interfaces could fall under the definition of natural user interfaces, where the user can interact with digital systems using their eyes, hand gestures, brain signals, and muscle movements, including facial movements. This can include “silent speech interfaces.” For example, Kapur et al. [4] developed the AlterEgo system, which allows people to interact with a computer through neuromuscular activity. Using electrooculogram (EOG) sensing, Lee et al. [5] developed a system that allows users to discreetly interact with computers by rubbing their “itchy” nose. And Cascón et al. [6] developed the perhaps even more discreet ChewIT, a system that allows users to interact with computers via intraoral interactions by essentially sending discreet signals by chewing. While these are very interesting innovations, they require additional hardware and sensors and are mostly at the research and innovation phases.
Another form of discreet user interfaces could be described as “camouflaged user interfaces” or “ulterior user interfaces.” These are interfaces that give onlookers a fake impression of an app’s utility, with its true, ulterior hidden utility known only to the user. For example, while an onlooker might see the user engaging with a mundane weather app, a sports app, a news app, or a shopping list, the user could be performing hidden tasks or sending secret signals in plain sight by interacting with buttons and features that have hidden functions. The interaction might look mundane, but the user could be sending a secret communication. Hence, this ulterior user interface could also be considered a discreet interaction. This concept is akin to steganography, where an everyday mundane image contains a hidden message that can be extracted using a secret algorithm; however, steganography only applies to the message and does not encompass the user interaction. If it did, there might have been a concept coined as stegano-graphical user interfaces.
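To make the steganography analogy concrete, here is a minimal least-significant-bit (LSB) sketch, assuming plain integer pixel values rather than a real image format; the function names and data are purely illustrative, not part of any system described above.

```python
# Minimal least-significant-bit (LSB) steganography sketch.
# Illustrative only: pixel values are plain integers (0-255) instead of a real image.

def hide(pixels: list[int], message: str) -> list[int]:
    """Embed the message's bits in the lowest bit of each pixel value."""
    bits = []
    for byte in message.encode("utf-8") + b"\x00":      # null byte marks the end
        bits.extend((byte >> i) & 1 for i in range(8))
    if len(bits) > len(pixels):
        raise ValueError("cover is too small for the message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit                 # overwrite only the lowest bit
    return stego

def reveal(pixels: list[int]) -> str:
    """Reassemble bytes from the lowest bits until the null terminator."""
    out = bytearray()
    for start in range(0, len(pixels) - 7, 8):
        byte = sum((pixels[start + i] & 1) << i for i in range(8))
        if byte == 0:
            break
        out.append(byte)
    return out.decode("utf-8")

cover = list(range(256)) * 4                 # stand-in for image pixel data
stego = hide(cover, "meet me at the next station")
print(reveal(stego))                         # -> meet me at the next station
```

The cover data looks essentially unchanged to an onlooker, which is exactly the property the camouflaged user interface borrows: the visible surface stays mundane while the meaningful content travels underneath.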
A discreet user interface could also help protect those in need and hinder “peepers and creepers.” Use cases could be messaging, banking, or general phone use in a public space. It could go even further, providing an outlet for those who are subject to coercive behavior or domestic abuse. At the same time, if an assailant or abuser found the discreet interface, this could put the user in danger. That is why this type of solution, if developed, may need to be rolled out by police or social services so that it could remain anonymous and safe.
A Simple Case Study: A Discreet News App
An example of a discreet user interface and use case could be the following: A female commuter, Jane, traveling home from work gets on board a busy train during peak hours. There are not many seats, so Jane stands. Along her 30-minute commute, two male commuters come on board and, given the limited space, encroach on Jane’s personal space. As Jane uses her smartphone, she becomes aware of the men’s presence behind her, making her feel vulnerable. They are watching everything she is interacting with, peeping and creeping.
To stop this, she switches to her hidden interface, the “news” app (Figure 1), and reads a news item before interacting with a discreet feature. The feature allows Jane to discreetly, semiautomatically message a friend and alert her to meet her at the next station. As the train pulls into the station, Jane again uses the news app, where secret messages inform her of her friend’s confirmation and whereabouts.
The use case is twofold. In one sense, it’s a regular news app that will display a range of articles that can be read. But upon tapping an icon or button, the user interface can make calls and send prewritten text messages, all using the same content areas and buttons that already exist in the app.
Figure 1: This discreet user interface is called the news app. It displays news items, but by tapping icons and buttons, the user interface can send hidden communications, allowing the user to, for example, send prewritten messages, take a photograph, or make a call. Credit: onyxprj_art at vecteezy.com, licensed under CC BY 4.0.
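As a rough, hypothetical sketch of how such a dual-purpose control could be wired up (the function names, the arming gesture, and the mapping below are placeholders we introduce for illustration, not an implementation of the app in Figure 1), consider:

```python
# Hypothetical sketch of a "discreet" UI control mapping.
# In normal mode a tap simply opens an article; once discreet mode is armed,
# the same taps also trigger hidden actions while the screen still shows news.

from dataclasses import dataclass, field
from typing import Callable

def open_article(item: str) -> None:
    print(f"Showing article: {item}")

def send_prewritten_message(item: str) -> None:       # placeholder hidden action
    print("(hidden) Texting friend: 'Meet me at the next station.'")

def call_emergency_contact(item: str) -> None:        # placeholder hidden action
    print("(hidden) Dialing emergency contact...")

@dataclass
class DiscreetNewsApp:
    discreet_mode: bool = False
    hidden_actions: dict[str, Callable[[str], None]] = field(default_factory=dict)

    def arm(self) -> None:
        """Armed via an innocuous gesture, e.g., long-pressing the weather icon."""
        self.discreet_mode = True

    def tap(self, control: str, item: str) -> None:
        # The visible behavior never changes, so onlookers see only a news app.
        open_article(item)
        if self.discreet_mode and control in self.hidden_actions:
            self.hidden_actions[control](item)

app = DiscreetNewsApp(hidden_actions={
    "headline_2": send_prewritten_message,
    "share_icon": call_emergency_contact,
})
app.tap("headline_1", "Local weather improves")    # ordinary use
app.arm()
app.tap("headline_2", "Rail timetable changes")    # looks identical, also texts a friend
```

The key design choice is that the hidden behavior is additive: every tap still produces the expected, mundane result, so nothing on screen betrays the second channel.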
Conclusion
This article highlights the growing problem of “peepers and creepers,” unwanted attention by others while interacting with our digital devices. We considered a novel solution called discreet user interfaces, where an interface can look like it has a mundane utility but can have underlying functionality that allows it to discreetly perform other functions (e.g., hidden communication). We also presented a case study in the form of a discreet news app that showcases how a discreet user interface could work. We hope this article will be thought-provoking and will widen the conversation around discreet user interfaces and their use cases in interaction design and applications.
Endnotes
1. Haigh, L. Female travellers: a unique risk profile. ITIJ 230 (Mar. 2020); https://www.itij.com/latest/lo...
2. Dawson, A. So Generation Mute doesn’t like phone calls. Good. Who wants to talk, anyway? The Guardian. Nov. 7, 2017; https://www.theguardian.com/co...
3. HP creepers and peekers survey. HP Press Center. Sep. 25, 2019; https://press.hp.com/us/en/pre...
4. Kapur, A., Kapur, S., and Maes, P. AlterEgo: A personalized wearable silent speech interface. Proc. of the 23rd International Conference on Intelligent User Interfaces. ACM, New York, 2018, 43–53; https://doi.org/10.1145/317294...
5. Lee, J. et al. Itchy nose: Discreet gesture interaction using EOG sensors in smart eyewear. Proc. of the 2017 ACM International Symposium on Wearable Computers. ACM, New York, 2017, 94–97; https://doi.org/10.1145/312302...
6. Cascón, P.G., Matthies, D.J.C., Muthukumarana, S., and Nanayakkara, S. ChewIt: An intraoral interface for discreet interactions. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2019, Paper 326, 1–13; https://doi.org/10.1145/329060...
Kyle Boyd
Kyle Boyd is a lecturer of interaction design at the Belfast School of Art at Ulster University. His research interests and publications include interaction design, user experience, digital healthcare, and usability. He recently cochaired the 15th Irish Human Computer Interaction Symposium at Ulster University.
[email protected]
Raymond Bond
Raymond Bond is a professor of human computer systems and a distinguished research fellow at Ulster University. His work includes applications of data science and HCI within biomedical and healthcare informatics (digital health). He has more than 400 research outputs and has chaired or cochaired a number of conferences, including the 32nd International BCS Human Computer Interaction Conference and the 45th and 46th Annual Conferences of the International Society for Computerized Electrocardiology.
[email protected]
Maurice Mulvenna
Maurice Mulvenna is a researcher in computer science and artificial intelligence at Ulster University known for his contribution to interdisciplinary research in digital mental health sciences with colleagues in psychology, nursing, and healthcare. He has published over 400 peer-reviewed publications and has been a principal investigator or investigator on more than 120 international research projects.
[email protected]
Karen Kirby
Karen Kirby is a senior lecturer and registered practitioner psychologist in the School of Psychology at Ulster University. Her current areas of research interest are understanding child and adolescent mental health and developmental trauma, the impact of trauma histories on various mental health issues, and preventative mental health and novel programs and technologies.
[email protected]
Susan Lagdon
Susan Lagdon is a lecturer in psychology at Ulster University. Her research interests include domestic and sexual violence and abuse, particularly the mental health implications of interpersonal trauma and the availability and types of support for victims.
[email protected]
Becca Hume
Becca Hume is an Ulster University MFA graduate and founder of the software company TapSOS. Her primary research and development is in the area of public safety, critical communications, and vulnerable users. She leads research in enhanced nonverbal technologies and is a British Sign Language user (BSL Level 6).
[email protected]
Happily technology free
Authors:
Linnea Öhlund
Posted: Mon, May 15, 2023 - 10:28:00
My grandma recently passed. At the age of 88 she went to sleep never to wake up again. She lived a long, happy life, and had two sons, six grandchildren, and currently four great grandchildren. Bye grandma, I will always love and miss you.
Now to the point. My grandma was one of those individuals who never really “got it” when it came to technology. She only got a cellphone at the age of 78, because we forced her to, and the computer she and my grandfather owned was barely used. I have a vivid memory of myself at perhaps 14 years old, asking to use the computer so I could go into Paint (that was the most fun they had on the old thing). Grandma got a bit upset, saying that I couldn’t use Paint because I might destroy the computer—something could go wrong and I would break it. Me, a person born in 1995 who grew up using computers, knew very well that no computer would break by simply opening Paint; however, my grandma, who seemed to have no real understanding of how a computer worked, treated it like a fragile crystal vase, and I was an elephant ready to smash it!
After a while, they just got rid of the computer. The phones they had were the ones we see many senior citizens using, with the big buttons and no apps—just call and text functions. But let’s make one thing clear: My grandma and grandpa (who passed in 2020) were not afraid of or negative toward technology; they understood the impact that the Internet and technology had on society. A part of them wanted to join the revolution, wanted to see what the fuss was about, wanted to join the party. They were both born so long ago (1925/1934) that only after they had retired did the Internet, mobiles, and computers become big. Grandma became a housewife in the 50s and grandpa retired in the 90s, both coming from small country villages in the north of Sweden. When and where they were born, grew up, and settled surely affected how they felt about technology as well as their ability to use it. I’ve been thinking recently that there seems to be so much pressure on seniors these days to adapt, to learn, to go above and beyond to use technologies. No more bank offices, JUST VISIT US ONLINE; no more paper bills, JUST PAY THEM ONLINE; no more post in the mail, JUST GET IT ONLINE. But what if people don’t want to do that? What if they just want to live a quality life and not have their happiness depend upon technology? I once did a study with 15 seniors, asking them about their opinions on digital technology. One female participant, who herself was about 70, had a mother who was almost 100. The mother didn’t want to use technology but felt left out due to the sheer number of digital and technological artifacts needed to do many things. The participant finished the interview by saying how she felt there was still a generation of seniors who needed to pass away before the intense digitalization of Swedish society would be successful. These seniors, much like my grandparents, did not have a smooth transition from analog to digital in their lives and were, frankly, left behind because of it.
Now that both my grandparents from older generations have passed, I think of that comment by the participant. One more step toward a successful digital society where there are no longer any individuals left who do not want to adapt. What type of society is that? Waiting for its inhabitants to pass? Society needs to adapt to its inhabitants and offer opportunities for those who choose to not use technology. They still have the right to good, happy lives not degraded because of society’s intense technological push. This should be obvious, but is clearly not.
So here’s to you grandma, for your glass vase computer and your amazing, basically technology-free life! Cheers!
Linnea Öhlund
Human-centered artificial intelligence: The solution to fear of AI
Authors:
Chameera De Silva,
Thilina Halloluwa
Posted: Tue, May 09, 2023 - 3:06:00
Have you ever played a game against a computer? Have you ever wondered how computers win complex games against highly experienced players, such as when IBM's Deep Blue defeated chess world champion Garry Kasparov, or when DeepMind's AlphaGo defeated top professionals at the ancient Chinese board game Go?
The fear of AI becoming the biggest threat to humankind, replacing the human workforce on a large scale, is a real concern for AI practitioners and businesses. But there have been instances where the combination of a smart AI system supervised by humans has resulted in unbelievable achievements. Technology has developed magnificently in the past decade, and the widespread adoption of smart systems powered by AI points toward a much smarter and more tech-dependent world. New-age AI systems will become more and more learning-based. The massive data generated by the Internet will be used to train these models and systems to improve their intelligence in specifically focused areas.
The current focus of AI developers is to make the algorithms better and optimize the performance of the AI models as much as possible. We’re heading toward Web 3.0; its adoption will be much faster than that of Web 1.0 and Web 2.0 and driven by the adoption of AI.
The super-fast development and integration of AI tools into the products and services we use is accelerating the pace of our lives. Alongside this development is the fear that AI will exceed human cognitive intelligence, creating a struggle to maintain our control over them.
This fear of AI replacing humankind is the motivation for human-centered artificial intelligence (HCAI), where at one end we have AI and at the other end human beings supervising the actions and results of AI. The development of this technology has happened intuitively and on multiple levels.
In the early days of the Web, Web 1.0, we could only read content online. The public had no control over what they wanted to read or consume, and we could not search for things; it was just one-way communication. This motivated the creation of Web 2.0, which gave us social media applications like Facebook, YouTube, and Google. This made communication on the Web two-way. We were now able to decide what content we wanted to see. We could not only share our views as comments but also like or dislike posts. Now, as we engage and interact with technology more and more, we are finding more ways to incorporate it into our lives, and one way of doing so is by developing AI applications.
Artificial Intelligence
Humans are constantly working on creating machine intelligence similar to that of human intelligence; this is called artificial intelligence. AI is one way we are using technology and creating applications and systems that are automated and can work for us without any supervision. Self-driving cars, lie detector machines, and robots that can do surgery are just some examples of automations made possible by artificial intelligence. Current research is targeting advances in many sectors, including finance, education, and transportation.
These advances mean incorporating AI into our work processes at higher levels. This view of involving AI into human life is a matter of great concern today, often creating fear of AI and pitting technology against humankind.
What Is the Fear?
Machines are no doubt very fast and accurate in calculating numbers and analyzing large numbers of scenarios and methods within minutes. This speed and accuracy is very helpful to humans, but relying on machines completely just because of their speed and accuracy is a bad idea. Machine intelligence is smarter than human intelligence in the areas in which the machine is trained—but it cannot be trusted blindly.
The ethical and safety implications of learning-based AI systems have proven these systems to be unsafe in many ways. The decisions and results provided by these systems are not guaranteed to be fair and have also been shown to be not perfectly explainable. That is, the current models are black-box models for us humans, as they do not provide reasons for the results given by the machine.
Real-world scenarios cannot be handled with ambiguity and ethically unsafe standards.
AI is making machines intelligent, but we cannot guarantee the safety of these systems in any specific way unless they are extremely constrained.
What Are the Challenges?
There are several challenges with AI machines today. The results given by AI systems are trusted because they are provided by computers. But the explanation of these results becomes very important as we scale up the use cases and implementation of these smart systems. The current solution to this challenge is explainable artificial intelligence. Explainable AI adds a level of transparency to the decisions made and the processing done by AI systems. But as we incorporate AI systems into sectors like education, medicine, and finance, explainability is not enough.
AI in the education sector
The systems involved in the education sector will be assigned to grade students’ papers, mark their attendance, and look for any possibility of cheating during examinations. The results given by these systems directly affect students' academic outcomes; therefore, it is crucial to ensure that the decisions are fair and grounded in the actual rules of the real world. To achieve this level of trust with machines, human supervision is necessary.
AI in the medical sector
Likewise, the involvement of AI systems in the medical sector is opening new doors of advancement in medicine. Robots are performing surgeries, as well as recommending medications to patients on the basis of their symptoms.
The recommendations of these systems are highly likely to directly impact patients’ health and lives; therefore, it is extremely important that the recommendations are 100 percent correct. But here, due to the subjective nature of patients’ symptoms, the chance of a machine making a mistake is very high. For example, there could be a case where a patient is feeling chest pain due to low blood pressure, but the machine detects it as a heart attack. These kinds of mistakes can cost lives, and thus there must be zero tolerance for them.
The challenge
Because of the lower cognitive and emotional intelligence of machines, it is necessary to incorporate smart systems alongside humans and not in place of humans. The desired advancement can be achieved only when humans and machines work together.
Human-Centered AI
Human-centered AI (HCAI) is AI with human supervision. There are three main pillars of HCAI: humans, ethics, and technology. Human-centered AI is the solution to the fear of AI exceeding human intelligence and thus becoming a threat to humankind.
HCAI will ensure that AI systems are supervised by humans and that the processing and decisions of smart systems are all under the control of a human (Figure 1).
Figure 1. Human-centered AI combines humans, ethics, and technology.
But what exactly is human-centered AI? The basic idea involves the deep integration of humans into the data-annotation process and real-world operation of the system in both the training and the testing phases. HCAI will combine human-driven decision making with AI-powered smart systems, thereby making them smart, fast, and trustable. Ethical and responsible AI systems will become a reality by incorporating AI alongside humans. Augmenting AI systems with human abilities will also bring us usable and explainable AI. This is essentially human-controlled AI, combining the modern power of artificial intelligence with human intelligence.
HCAI will address both the lower emotional intelligence of machines and the lower speed and accuracy of humans.
How is HCAI possible?
Current AI systems are learning-based: They are trained on data and then tested on similar datasets to check their accuracy. These datasets are created by humans, and a large amount of time, energy, and money is invested in performing detailed annotations to create a dataset suitable for training an AI model.
The aim of HCAI is to enable an annotation process in which the machine generates queries for humans. The machine decides which data needs to be annotated by humans, and only that subset of the data is given to humans for annotation. This is one way to create human-centered AI.
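One plausible way to realize such machine-generated queries is uncertainty sampling from the active learning literature. The minimal sketch below is our own illustration under that assumption; the probabilities, budget, and function name are made up for the example.

```python
# Sketch of machine-generated annotation queries (uncertainty sampling).
# The model proposes which unlabeled items are worth a human annotator's time.

def least_confident(probabilities: list[list[float]], budget: int) -> list[int]:
    """Return the indices of the `budget` items whose top predicted class
    probability is lowest, i.e., where the model is least confident."""
    confidence = [max(p) for p in probabilities]
    ranked = sorted(range(len(confidence)), key=lambda i: confidence[i])
    return ranked[:budget]

# The model's class probabilities for five unlabeled items (illustrative numbers).
predictions = [
    [0.97, 0.03],   # confident -> no human needed
    [0.55, 0.45],   # uncertain -> query a human
    [0.90, 0.10],
    [0.51, 0.49],   # uncertain -> query a human
    [0.80, 0.20],
]
to_annotate = least_confident(predictions, budget=2)
print(to_annotate)   # -> [3, 1]: only these items are sent to human annotators
```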
With regard to real-world applications, current systems simply deliver results to us and we accept them. With HCAI, each result given by the AI system will have a degree of uncertainty associated with it; the result is then sent to the human supervisor. Only when a result gets the thumbs-up from the human supervisor is it released as the final result (Figure 2).
Figure 2. Multiple AI systems arguing with human supervision.
This is the basic, ground-level idea of HCAI. It can get much deeper and can exist in many formats.
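A minimal sketch of that approval loop, assuming a hypothetical confidence threshold and supervisor callback of our own choosing, might look like this:

```python
# Sketch of an HCAI result gate: low-confidence outputs are routed to a
# human supervisor, and only approved results are released as final.

from typing import Callable, Optional

def finalize(result: str, confidence: float,
             ask_supervisor: Callable[[str], bool],
             threshold: float = 0.9) -> Optional[str]:
    """Release the result directly if the model is confident enough;
    otherwise release it only if the human supervisor approves it."""
    if confidence >= threshold:
        return result
    return result if ask_supervisor(result) else None

# Stand-in for the human supervisor's judgment (always approves here).
supervisor = lambda result: True

print(finalize("recommendation A", 0.97, supervisor))  # released automatically
print(finalize("recommendation B", 0.62, supervisor))  # reviewed by a human first
```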
Algorithms in HCAI
The algorithms of current AI systems are created for the purpose of training machine-learning models. The algorithms in HCAI will be designed so that they give the AI system an idea of the context of human intelligence, both in the moment and over time. Smart systems will not only be trained mechanically on annotated data but will also be trained on the human understanding of the concept. This will make AI systems even smarter and reduce the risk of unwanted, unexpected results.
Human-computer interaction
To create the algorithms that teach the computer the context of human understanding, we need human-computer interaction. To make interactions with humans successful and fruitful for HCAI purposes, they must be continuous and collaborative and must provide a rich, meaningful experience.
Autonomous and semiautonomous vehicles have already driven billions of miles on public roads in autopilot or semi-autopilot mode. These cars and the smart systems installed in them aim to offer a rich and fulfilling human-computer interaction experience. The cars carry an array of devices monitoring and recording the driver's many modes and emotions, trying to give us a better interactive experience. But we are seeking a collaborative, rich interactive experience that goes far beyond this.
The challenge for current smart car systems is that these cars are intended to be used by people from both the younger generation and the older one, who might have never used any AI-based system in their entire life. The real challenge is thus creating HCI that is capable of deeply exploring these aspects and providing a rich, interactive, and collaborative experience.
Safety and ethical awareness
Artificial intelligence or machine intelligence in general brings many concerns around safety and ethics.
Taking the example of self-driving cars, the passengers riding in a car that is in autopilot mode are prone to trust the system, as the system only provides the decisions it is making based on the situation. There is no information given regarding the level of uncertainty, nor any warnings from the system to the driver.
HCAI is a solution for this safety challenge. We can create a system with multiple AI models deciding on a single situation, and whenever a disagreement in the decisions is detected, human supervision is then sought. This is one way of involving human interaction in the decision-making and validation processes.
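The sketch below is our own hypothetical illustration of that idea: several models vote independently on the same situation, and any disagreement escalates the decision to a human. The toy braking rules and thresholds are invented for the example.

```python
# Sketch of disagreement-triggered human supervision: multiple models decide
# independently, and a human is consulted only when they disagree.

from typing import Callable

def decide(models: list[Callable[[dict], str]], situation: dict,
           ask_human: Callable[[dict, list[str]], str]) -> str:
    votes = [model(situation) for model in models]
    if len(set(votes)) == 1:             # unanimous -> act autonomously
        return votes[0]
    return ask_human(situation, votes)   # disagreement -> escalate to a person

# Three toy "models" for a braking decision (purely illustrative).
models = [
    lambda s: "brake" if s["obstacle_distance_m"] < 20 else "continue",
    lambda s: "brake" if s["obstacle_distance_m"] < 15 else "continue",
    lambda s: "brake" if s["speed_kmh"] > 80 else "continue",
]
human = lambda s, votes: "brake"         # conservative human fallback

print(decide(models, {"obstacle_distance_m": 30, "speed_kmh": 50}, human))  # unanimous: continue
print(decide(models, {"obstacle_distance_m": 18, "speed_kmh": 50}, human))  # split vote: human decides
```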
The other challenge is the ethical awareness of the AI system. There are certain situations in the real world that cannot always be dealt with technically. The ethical aspects must be considered, and this is an area we need to work on to optimize our AI models.
Until AI models are smart enough to understand the ethical and emotional aspects of the real world, they need to be combined with human supervision in the form of HCAI. In addition, some of the current AI use cases, when enabled with stronger and safer HCAI, can make things much more secure for users.
HCAI Today
HCAI designers go beyond thinking of computers as our teammates, collaborators, or partners. They are more likely to develop technologies that dramatically increase human performance by taking advantage of the unique features of computers.
Recommender systems
One type of system that is already implemented almost everywhere, automating things on behalf of humans, is the recommender system. Widely used in advertising services, social media platforms, and search engines, these systems bring many benefits.
The consequences of mistakes by these systems are subtle, but malicious actors can manipulate them to influence buying habits, change election outcomes, or spread hateful messages and reshape people’s thoughts and attitudes. Thoughtful design that improves user control could increase consumer satisfaction and limit malicious use.
Other applications that enable automation include common user tasks such as spell-checking and search-query completion. These tasks, when carefully designed, preserve user control and avoid annoying disruptions, while offering useful assistance.
Consequential applications
The applications in medical, legal, or financial systems can bring both substantial benefits and harms. A well-documented case is the flawed Google Flu Trends, which was designed to predict flu outbreaks, enabling public health officials to assign resources more effectively.
A harmful attitude among programmers called “algorithmic hubris” suggests that some programmers have unreasonable expectations of their capacity to create foolproof autonomous systems.
The crashes in stock markets and currency exchanges caused by high-frequency trading algorithms triggered losses of billions of dollars in just a few minutes. But with adequate logging of trades, market managers can often repair the damage.
Life-critical systems
Moving on to the challenges of life-critical applications, we find physical devices like self-driving cars, pacemakers, and implantable defibrillators, as well as complex systems such as military, industrial, or aviation applications. These require rapid actions and may have irreversible consequences.
One-dimensional thinking suggests that designers must choose between human control and computer automation for these applications. The implicit message is that more automation means less control. Decoupling these two dimensions leads to a two-dimensional HCAI framework.
Two-Dimensional HCAI
Two-dimensional HCAI suggests that achieving high levels of both human control and automation is possible, as shown in Figure 3.
Figure 3. Two-dimensional HCAI.
The goal is most often but not always to be at the upper-right quadrant. Most remote sensing technology systems are on the right side.
The message for designers from two-dimensional HCAI is that, for certain tasks, there is value in full computer control or full human mastery. However, the challenge is to develop an effective and proven design, supported by trusted social structures, reliable practices, and cultures of safety, so as to be recognized as reliable, safe, and trustworthy (RST).
Conclusion
To conclude, artificial intelligence is not a threat to humankind if we bring it into our lives as a companion and learn to work alongside it. There are several ways that we can teach AI systems to accompany us humans in our tasks and make them understand us. We can also create a system with multiple AI models that argue over decisions and then seek human supervision whenever they disagree.
The idea of two-dimensional human-centered artificial intelligence makes the vision of HCAI clearer for designers and developers and reframes the challenge in a new way. Designers need to create trustable, safe, and reliable HCAI systems, supported by cultures of safety, so as to earn that recognition.
That said, nobody is perfect, neither artificial intelligence nor humans. But we can join hands and provide something enriching to each other. AI is the future, and the future is beautiful.
Additional Resources
1.
2.
3.
4. Shneiderman, B. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. Draft, version 29, Feb. 23, 2020. University of Maryland, College Park, MD; [email protected]
Chameera De Silva
Chameera De Silva is an artificial intelligence/data engineer with more than three years of experience in application design, development, testing, and deployment. He is highly experienced in writing code and algorithms as well as building complex neural networks through various programming languages. He possesses an unbridled passion for AI and has comprehensive knowledge of machine-learning concepts and other related technologies.
[email protected]
Thilina Halloluwa
Rethinking the SIGCHI conference landscape: Why some must evolve or perish
Authors:
Johannes Schöning
Posted: Mon, May 08, 2023 - 10:04:00
It has been nearly four years since I last attended an in-person conference, and I must admit that I was excited about attending the recent ACM CHI conference in Hamburg.
In-person conferences provide a unique opportunity for researchers to meet, chat, and catch up on the latest developments in the field, as well as absorb all the excellent talks and demos. It is also super important to build networks, especially for early career researchers.
However, the Covid-19 pandemic put an end to all that, and we quickly transitioned to digital conferences. While digital conferences had their benefits, such as accessibility and convenience, I found myself missing the social aspect of in-person conferences, particularly the informal chit-chat during breaks.
Additionally, I hardly remembered any paper presentations from the digital conferences I attended, in contrast to talks I attended back at CHI in San Jose in 2007, which I remembered vividly due to my physical surroundings and interactions with others in the room.
Now, hybrid conferences are becoming the new normal, offering both in-person and digital attendance options. While this innovation seems like a step forward, it raises important questions. Have we truly innovated the conference experience, or did we act too quickly in response to the pandemic? Have we considered all the potential side effects of going hybrid? And perhaps more importantly, do some conferences need to die or perish?
One argument for the death of some conferences is simply that there are too many of them. It can be challenging to recruit reviewers for all the different conferences, and the field may struggle to sustain the number of conferences currently in existence.
Additionally, some conferences simply do not get the attention they deserve anymore, and it can be challenging to maintain a core audience for them. For example, I love MobileHCI, and I was recently the program chair for it. However, the number of submissions has been decreasing in the last few years, and it seems like the conference is losing its core audience. MobileHCI has become the “mini CHI,” and it’s unclear where it fits into the mix of SIGCHI conferences. This is not only true for MobileHCI but also, in my humble opinion, for a set of other conferences.
Another reason to consider the death of some conferences is sustainability. While hybrid conferences can reduce the environmental impact of conferences by reducing travel and energy consumption, they still require physical resources such as venue rental and equipment. Some conferences, such as IEEE VR, have started to charge authors for each item presented at the conference to offset these costs. However, it remains unclear whether this approach is sustainable in the long term, and what novel barriers hybrid conferences create, for example for early career researchers who are “forced” into the hybrid mode instead of going to physical conferences due to budget concerns. We need to talk about CHI in Hawaii at another time, in depth.
So, what do I propose? First and foremost, I suggest that we focus on having a strong flagship conference and just a few (local) satellite conferences that emphasize the networking and experience parts of the conference—trying out demos and talking to people instead of passively absorbing talks (one can watch the talks later anyway). This would allow us to put more love into the local SIGCHI chapters and encourage them to organize workshops, summer schools, and other events that can foster sustainable growth and strong networks within the field. In addition, I would very much support the birth of new conferences that serve emerging areas of research, conferences that have a certain life span and disappear after a few editions once the field is established.
Second, I suggest that we move away from the artificial acceptance rates that have become so common in conferences. Instead, we could implement a true revise and resubmit cycle and create different presentation/experience categories at CHI. I most enjoyed trying out the demos in Hamburg. Most talks I can rewatch later on the great ACM SIGCHI YouTube channel (if you have not subscribed, I would encourage you to do so). This would allow for more diverse and nuanced discussions and help us move away from the idea that conferences are just about talks, talks, talks. It is desirable to have conferences that prioritize opportunities for presenting and discussing intriguing content rather than being overly preoccupied with enforcing deadlines and procedural protocols. Conferences should have different aims, which also help conferences to differentiate themselves from each other (presenting inspiring content, receiving quality feedback on early-stage work, showcasing the latest research, or networking and experiencing demos). Such conferences should aim to broaden the scope of curated material, with committees working more closely together and avoiding exclusive reliance on traditional peer review processes. Promoting such new formats might also mean rethinking how universities fund conference attendance, especially for early-career researchers.
Finally, I believe that conferences should be about the exchange of ideas and experiencing new technologies. In my experience, the rejection rate for demos at CHI 2023 was far too high, and the juried process should have been more transparent. I very much enjoyed the “hot desk areas” for demos at CHI, where people could try out new technologies and interact with the developers in a more informal setting. We should consider truly innovative conferences that promote exchanging ideas and networking. The primary objective should be to provide an exceptional conference experience and exceptional content, regardless of its source. Moreover, there should be a platform for showcasing previously published work at premier conferences, which would help sustain smaller conferences as platforms for innovative and speculative ideas in nascent fields.
Of course, these are just my personal opinions, and they may be a bit provocative. I would love to hear your thoughts on the matter, whether here on the Interactions blog, via email, Twitter, LinkedIn, or, best, during a coffee in person at a future SIGCHI conference. Let's continue to innovate and improve the conference experience for all researchers in the field.
Acknowledgments
Special thanks go to Christine Scheef (HSG), Yvonne Rogers (UCL) and Antonio Krüger (DFKI) for their insightful feedback and constructive criticism on an earlier version of this post. Their expertise in the field greatly contributed to the final outcome. I am grateful for their time and effort in providing valuable input, which helped to shape my thoughts.
Johannes Schöning
Johannes Schöning is a professor of human-computer interaction (HCI) at the University of St. Gallen. With his research, he wants to empower individuals and communities with the information they need to make better data-driven decisions by developing novel user interfaces with them.
[email protected]
The future of hybrid work is blended and interpersonal
Authors:
Himanshu Verma,
Marios Constantinides,
Sailin Zhong,
Abdallah El Ali
Posted: Tue, April 18, 2023 - 2:45:00
The way we work has profoundly changed. The well-known eight-hour workday within the confines of the office, and the salient boundaries between work and personal life, are now an outdated reality. For many individuals, what used to be physical and co-located has now been replaced with notions of hybrid, blended, and flexible. However, this flexibility may create turbulence between employees and employers, depending on how employees manage their workdays, productivity, and well-being. Simply put, if we are to rethink a new future of work, we need to let go of old work habits and norms and embrace a brand-new reality of hybrid and blended experiences.
The onset of the Covid-19 pandemic in early 2020 has significantly altered the ways we interact, socialize, work, and learn. It has forced the entire world into a prolonged social experiment, which has tested and mainstreamed what technology developers, researchers, and visionaries have been talking about for a long time: the ubiquity of working and interacting remotely. For decades, researchers and organizations have envisioned and experimented with the idea of “working from home” or “working from anywhere.” Remote work is not a new concept; we had communication tools and infrastructure to facilitate it even before the onset of the Covid pandemic.
Before Covid, an abundance of tools and technological solutions (e.g., email, Zoom, Gather.Town, Slack) enabled opportunities for distant collaborators to connect both synchronously and asynchronously. Yet these tools were never envisaged as an extended and sustained means of working, learning, or socializing in a highly interconnected world. This “pre-pandemic” social reality afforded an auxiliary and rather peripheral position to remote interactions and engagement, leaving little to no room for examining the extended remote social interactions and their short- and long-term implications for our life and well-being.
Blended Social Realities and Hybrid Arrangements
We foresee a future full of hybrid social possibilities and blended social experiences that speak to a variety of stakeholders and beneficiaries from all walks of life, from school-going children to office workers to medical personnel. The next pertinent question we have to ask ourselves is: What are hybrid arrangements and blended experiences, and are they the same thing?
They are different. Hybrid social arrangements refer to the malleable distribution of a person’s social and professional lived reality—in both time and space—and the way it is positioned within the spectrum, from completely physical to completely virtual. For individuals, hybrid arrangements speak to the need for easier self-organization. For organizations, these arrangements are related to the effective governance of resources and employees’ well-being (e.g., partial work from home). Effective manifestation of hybrid social arrangements, consequently, entails designing for blended experiences, which are the essential “means to an end,” enabling individuals or collectives to meaningfully navigate their respective social and professional realities.
Blended social realities, or “blendedness,” refers to an individual’s or group’s practice of using digital tools and spaces for collaboration. Collaborative tasks can be mapped onto different tools and spaces with some degree of redundancy and overlap, allowing them to be effortlessly and unambiguously executed. For example, a meeting may be organized over Zoom or Social VR [1], but its agenda and discussion points may be shared over Slack. Collaborating actors may also choose to use multiple platforms for the very same task. In summary, blendedness is a manifestation of superposed communicative and collaborative modalities and channels enabling hybridity [2]. Over time, these sociotechnical entanglements are assimilated as accepted praxis and institutional norms, in essence dissolving the boundary between tools and environments [3].
Blended working/learning/meeting experiences, as elaborated previously, encompass the dynamic and diverse ways in which the hybrid arrangements interact with digital tools and (physical or virtual) spaces to evoke a perception of social facilitation—an appropriated and rather simulated togetherness.
Over time, as existing social contracts change and technological embeddings afford new possibilities and experiences, social actors may decide to modify this mapping. Such changes may happen spontaneously and recurrently, adapting to emerging situations (e.g., a new collaborator who does not use a specific tool), technical complications (e.g., unable to share one’s screen on one platform during presentation), and organizational regulations (e.g., an organization may not allow the use of a certain platform due to contractual or accessibility reasons). Changes to these mappings may also stem from the spatial distribution of collaborating partners (i.e., whether it be colocated, distributed, or both).
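As a minimal illustration (a sketch of our own, with placeholder task and tool names rather than anything prescribed above), blendedness can be pictured as a task-to-channel mapping with deliberate redundancy that gets rewired as circumstances change:

```python
# A toy sketch of blendedness as a task-to-channel mapping.
# All task and tool names are illustrative placeholders, not recommendations.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class BlendedMapping:
    # Each collaborative task maps onto one or more tools/spaces; redundancy is allowed.
    channels: dict[str, set[str]] = field(default_factory=dict)

    def assign(self, task: str, *tools: str) -> None:
        self.channels.setdefault(task, set()).update(tools)

    def remap(self, task: str, drop: str, replacement: str) -> None:
        # e.g., a new collaborator cannot use a platform, or an organization disallows it
        tools = self.channels.setdefault(task, set())
        tools.discard(drop)
        tools.add(replacement)

team = BlendedMapping()
team.assign("weekly meeting", "Zoom", "Social VR")   # the same task, superposed channels
team.assign("agenda and notes", "Slack")
team.remap("weekly meeting", drop="Social VR", replacement="Gather.Town")
print(team.channels)
```

The point is not the data structure itself but that such mappings are renegotiated spontaneously and recurrently, and that is where the design work lies.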
How to Design for Blended Experiences
To engage the scientific community in the debate on blendedness and to contrast it with hybridity, we organized the SensiBlend workshop at the ACM UbiComp conference in 2021 [4]. In the workshop, we sought to scrutinize the affordances of blended experiences and raise questions about the specific attributes of sociotechnical experiences in the future organization of interpersonal relationships. The discussions revealed three key themes: 1) fostering serendipity in hybrid social connectedness, 2) rethinking boundaries between work and personal life, and 3) entangling “the digital” and “the spatial” worlds.
Theme 1: Fostering serendipity for hybrid social connectedness
The first theme is about the relationship between serendipity and blended experiences in virtual and physical worlds. The shift to computer-mediated communication has resulted in the explicit scheduling and organization of dedicated moments for social interactions, which can stifle creativity and serendipity. Researchers have been exploring mixed and extended reality approaches for supporting flexible video feed configuration and allowing natural proxemic interactions [10]. Social biosensing techniques, where remote/hybrid participants are able to share behavioral or physiological data with one another [5], could enhance the quality of social interactions beyond what reality is able to afford. Consequently, understanding the taxonomy of blendedness could help discover new forms of serendipity, time boundaries, and space multiplicities that go beyond the current practice of either face-to-face or videoconferencing interactions.
Theme 2: Rethinking boundaries between work and personal life
Personal and professional contexts require that clear boundaries be defined and maintained to foster work-life balance, contributing positively to employees’ well-being. Cho et al. [6] found that six types of boundary work shape remote workers’ placemaking practices: spatial, temporal, psychological, sensory, social, and technological. These boundaries were extensively tested and transformed during the Covid-19 pandemic. As an example, boundaries can be set up by using dedicated workspaces at home, by getting dressed up for work meetings, or by using hardware and software to enable transitions between work and life. However, the use of technology can also have negative effects such as Zoom fatigue [7], increased workplace surveillance [8], and the possibility of employees being monitored in discriminatory and unfair ways. To ensure widespread adoption of these technologies, blended experiences must afford embodied mechanisms for individuals and groups not only to define and update their own boundaries but also to prevent any negative consequences.
Theme 3: Entangling “the digital” and “the spatial”
Realization of blended experiences goes well beyond the mere design of technological artifacts, and extends into the realm of emergent work practices, with subsequent effects on how we organize our space and mobility practices. This requires rethinking how we design spaces, both virtual and physical, and the transition between the two. For example, the use of tools like Zoom and Gather.Town has led to the creation of digital twins, which can be used in hybrid classes and conferences. We foresee that, in the near future, the (re)design of spaces with blended experiences in mind will afford discoverability, well-being, and creativity. Simultaneously, this could go hand in hand with research on urban and built environments and their constitutive components such as interactive furniture, creating adaptive indoor atmospheres [9].
TL;DR
We argue that blendedness will be central to the entanglement of work practices, tools, and spaces, and potentially emerge as a catalyst driving the future of work. It holds the promise to augment work and its evolution by encompassing the dynamic and diverse ways in which hybrid arrangements interact with digital tools and (physical or virtual) spaces to evoke an appropriated and rather simulated togetherness. Paving the way toward a truly blended future of work entails concerted multidisciplinary endeavors across three fronts: 1) designing for serendipity, 2) improving work-life balance, and 3) weaving space and mobility with evolving work practices.
Endnotes
1. Li, J., Kong, Y., Röggla, T., De Simone, F., Ananthanarayan, S., de Ridder, H., El Ali, A., and Cesar, P. Measuring and understanding photo sharing experiences in social virtual reality. Proc. CHI '19.
2. Neumayr, T., Saatci, B., Rintel, S., Klokmose, C. N., and Augstein, M. What was hybrid? A systematic review of hybrid collaboration and meetings research. arXiv preprint arXiv:2111.06172, 2021.
3. Verma, H., Mlynář, J., Pellaton, C., Theler, M., Widmer, A., and Evéquoz, F. “WhatsApp in politics?!”: Collaborative tools shifting boundaries. IFIP Conference on Human-Computer Interaction. Springer, Cham, 2021, 655–677; https://link.springer.com/chap...
4. Verma, H., Constantinides, M., Zhong, S., El Ali, A., and Alavi, H.S. SensiBlend: Sensing blended experiences in professional and social contexts. Adjunct Proc. of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proc. of the 2021 ACM International Symposium on Wearable Computers. ACM, New York, 2021, 491–495.
5. Lee, S., El Ali, A., Wijntjes, M., and Cesar, P. Understanding and designing avatar biosignal visualizations for social virtual reality entertainment. Proc. of the 2022 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2022, 1–15.
6. Cho, J., Beck, S., and Voida, S. Topophilia, placemaking, and boundary work: Exploring the psycho-social impact of the Covid-19 work-from-home experience. Proc. of the ACM on Human-Computer Interaction 6, GROUP (2022), 1–33; https://dl.acm.org/doi/pdf/10....
7. Bailenson, J.N. Nonverbal overload: A theoretical argument for the causes of Zoom fatigue. Technology, Mind, and Behavior 2, 1 (2021), 1–6.
8. Constantinides, M. and Quercia, D. Good intentions, bad inventions: How employees judge pervasive technologies in the workplace. IEEE Pervasive Computing (2022).
9. O'Hara, K., Kjeldskov, J., and Paay, J. Blended interaction spaces for distributed team collaboration. ACM Transactions on Computer-Human Interaction 18, 1 (2011), 1–28.
10. Grønbæk, J.E., Knudsen, M.S., O’Hara, K., Krogh, P.G., Vermeulen, J., and Petersen, M.G. Proxemics beyond proximity: Designing for flexible social interaction through cross-device interaction. Proc. of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020, 1–14; https://doi.org/10.1145/331383...
Himanshu Verma
Himanshu Verma is an assistant professor at TU Delft, Netherlands. He is interested in examining the social dimensions of wearables, and the ways in which they can be used to understand the internal mechanisms—comprised of latent, nonverbal and transient social signals—that enable or inhibit interpersonal collaborations, particularly in hybrid and blended contexts.
[email protected]
Marios Constantinides
Marios Constantinides is a senior research scientist at Nokia Bell Labs in Cambridge, U.K. He works in the areas of human-computer interaction, ubiquitous computing, and responsible AI. His current research focuses on building AI-based technologies that augment people’s interactions and communication and on studying their ethical considerations.
[email protected]
Sailin Zhong
Sailin Zhong is a Ph.D. student at the Human-IST Institute, at the University of Fribourg in Switzerland and a visiting student at the Responsive Environments Group at the MIT Media Lab. Her research focuses on understanding and augmenting human perception of comfort in the built environments in the work context through sensing and interaction design.
[email protected]
Abdallah El Ali
Abdallah El Ali is an HCI research scientist at Centrum Wiskunde & Informatica (CWI) in Amsterdam within the Distributed & Interactive Systems group. He leads the research area of Affective Interactive Systems, where he focuses on ground truth acquisition techniques, emotion understanding and recognition across the reality-virtuality continuum, and affective human augmentation using physiological signals.
[email protected]
OpenSpeaks Before AI: Frameworks for creating the AI/ML building blocks for low-resource languages
Authors:
Subhashish Panigrahi
Posted: Wed, April 05, 2023 - 9:23:00
There has been a tremendous push on many levels to make artificial intelligence– and machine learning–based applications ubiquitous. Soon, the life decisions of almost every digital technology user will be affected by some form of algorithmic decision making. However, the development of large language models (LLMs) that drive this research and development often lacks participation from diverse backgrounds, ignoring historically oppressed communities such as Black and other ethnolinguistic or socioeconomic minority groups, women, transgender individuals, people with disabilities, and elderly individuals globally, and the Dalit-Bahujan-Adivasi communities in South Asia and the diaspora. Data about and by these people is therefore systematically suppressed. Even more problematic, this data is largely left out of the LLMs driving AI/ML research and development.
Furthermore, seemingly public information might not always be collected ethically with informed consent from the people affected. Even mature regulatory frameworks such as the General Data Protection Regulation (GDPR) in the European Union do not provide enough guidance on how private data is collected, stored, and shared. Naturally, those behind LLM creation do not have a clue about the biases in their data or how it is collected. Take the case of DALL-E 2 models, which use publicly available images owned and copyrighted by different people, or ChatGPT, which uses massive datasets from multiple sources. In both instances, not only does the LLM creation lack the representation of marginalized groups and contain only biased data about them, but also the outcomes that derive from the training data make these groups even more vulnerable.
Palestinian children walk past rubbish next to a paved path reserved for Israeli settlers. Bias in language data is often understood by asking who has access and participation in knowledge creation and dissemination. (CREDIT: CPT Palestine, CC BY 2.0)
Low-Resource Languages
The creation of LLMs like GPT-3, when used in applications such as chatbots, directly affects users of dominant languages. Underpaid tech support workers subcontracted to support users in developed countries might even see these chatbots as a potential threat. But when it comes to low- and medium-resourced languages, the issues stemming from biases and low representation can aggravate things further. The issues of many Indigenous, endangered, and low- and medium-resourced-language native speakers are poorly documented or missing in HCI research and development, particularly in AI-based tech innovations. For instance, issues with script input or other technological problems are generally documented and fixed for the most well-established and dominant writing systems and languages. Speakers of many languages spoken and written in nondominant settings often do not have the know-how or the means to report these issues publicly, or discuss them privately.
OpenSpeaks Before AI
Mozilla defines trustworthy AI as “AI demonstrably worthy of trust, tech that considers accountability, agency, and individual and collective well-being” [1]. As a part of this, Mozilla started the MozFest Trustworthy AI Working Groups; as members of the 2021 working group cohort, we at the O Foundation piloted an experimental framework called OpenSpeaks Before AI [2].
Instead of treating AI as a stand-alone area, we looked at a few open-source platforms that allow users to generate multilingual big data (useful for AI/ML) and audit them openly. We tried to see whether this pilot could help us derive best practices that were inclusive in nature and relevant for low- and medium-resourced languages. Broader feminist viewpoints [3] and two existing studies primarily inspired the process: a seminal paper titled “Datasheets for Datasets” [4], which focuses on identifying gaps and biases in datasets, and our own research on Web content monetization in two Indigenous languages from India: Ho and Santali [5]. We conducted two open audits in two languages, Odia and Santali, and of two recording platforms, Lingua Libre and Mozilla’s Common Voice, both of which help in creating multilingual speech data. Odia is a macrolanguage from India with nearly 45 million speakers; Santali is an Indigenous Indian language spoken by 9.6 million people.
Lingua Libre and Common Voice are open-source platforms that allow users to record words and phrases (Lingua Libre) and sentences (Common Voice). The Lingua Libre study and its outcomes were explained in detail in the Wiki Workshop 2022, focusing on Odia and its Baleswari dialect [6]. The audit of Common Voice for Santali was presented during Mozilla Festival (MozFest) 2022 [6]. The OpenSpeaks Before AI framework covers six main areas:
- Purpose and affordability: reasons a user uses a platform or a contributor contributes to developing it and how affordable it is for them to use/contribute
- Hardware and platforms: devices and other platforms they use
- Accessibility: accessibility issues and needs for those using a platform
- Project launching: the preparation that leads to the first significant use/launch
- Privacy: privacy- and ethics-related advantages and concerns
- Diversity: diversities of different kinds (e.g., gender, caste, affordability/access, race, ethnicity).
The framework itself is neither conclusive nor a restrictive guide. Rather, it collects some critical details about platforms that help people build LLMs and speech synthesis applications, and about their users and/or contributors. Audits can be conducted by both users and nonusers of a platform—and, importantly, by researchers or developers tied to a platform. The audit can be imagined in the same way as ethnographic user research, revealing what is working or not working and indicating what needs to be removed or improved. The open frameworks created based on the initial audits are also open to modification—as the products of a pilot, they have a lot of room for improvement.
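As a minimal illustration of what an audit entry covering these six areas might look like, the sketch below uses a simple datasheet-style record; the field names and notes are hypothetical, since the framework prescribes the areas but no particular format:

```python
# A hypothetical OpenSpeaks Before AI audit entry, loosely inspired by
# "datasheets for datasets" [4]. Field names and notes are illustrative only.
audit_entry = {
    "platform": "Lingua Libre",          # or "Common Voice"
    "language": "Odia (Baleswari dialect)",
    "purpose_and_affordability": "Why contributors participate; cost of devices and data",
    "hardware_and_platforms": "Phones, browsers, and operating systems in use",
    "accessibility": "Script input issues, screen-reader support, oral-only needs",
    "project_launching": "Preparation that led to the first significant use or launch",
    "privacy": "Consent process, what is stored, who may reuse the recordings",
    "diversity": "Gender, caste, affordability/access, race, ethnicity of contributors",
}

# For oral-only languages, the same areas could be filled in from audio or video interviews.
for area, notes in audit_entry.items():
    print(f"{area}: {notes}")
```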
Santali-language Wikipedia editors being celebrated in Bhubaneswar, Odisha, India. Despite its official recognition and being spoken by 7.6 million Santal people, Santali is yet to see widespread use, leading to poor online representation of Santals and their community knowledge. (CREDIT: R Ashwani Banjan Murmu / CC-BY-SA-4.0)
The foundational layers of OpenSpeaks (https://theofdn.org/openspeaks) as a set of open educational resources lie in multimedia language documentation and emphasize Indigenous, endangered, and low- and medium-resource languages. It was initially intended to help citizen documenters and archivists with stand-alone audio and video projects, including documentary films, but now encompasses building multimedia language data. Since our tested languages have writing systems of their own, it was straightforward to publish the audit reports as text. Auditing platforms that use oral-only languages and dialects, however, can also be done through audio and video interviews. These could be useful to strengthen the foundational areas in languages that lack resources before moving on to building massive speech data or LLMs. Foundational layers such as word or speech corpora often help in the research and development of many vital tools such as typefaces, input tools, and text-to-speech and speech-to-text tools. Broadly speaking, open auditing can also help in identifying gaps and advocating for resources for priority areas. Like any other correcting mechanism, open auditing is not foolproof. It can only help us see gaps and add a layer of accountability by incorporating practical tools such as “datasheets for datasets” [4].
Endnotes
1. Mozilla. Creating trustworthy AI: README; https://foundation.mozilla.org...
2. Panigrahi, S. and Hembram, P. OpenSpeaks Before AI (1.0). O Foundation. Sep. 2021; https://doi.org/10.5281/zenodo...
3. Acey, C.E., Bouterse, S., Ghoshal, S., Menking, A., Sengupta, A., and Vrana, A.G. Decolonizing the Internet by decolonizing ourselves: Challenging epistemic injustice through feminist practice. Global Perspectives 2, 1 (Feb. 2021); https://doi.org/10.1525/gp.202...
4. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Dumé, H., III, and Crawford, K. Datasheets for datasets. Communications of the ACM 64, 12 (Nov. 2021), 86–92; https://doi.org/10.1145/3458723
5. Panigrahi, S., Hembram, P., and Birua, G. Understanding web content monetization in the Ho and Santali languages; https://meta.wikimedia.org/wik...
6. Panigrahi, S. Building a public domain voice database for Odia. Companion Proc. of the Web Conference 2022. ACM, New York, 2022, 1331–1338.
Subhashish Panigrahi
Subhashish Panigrahi is a public interest archivist and researcher, civil society leader, and a nonfiction filmmaker interested in tech, society, media, Open Culture, and digital rights. He founded OpenSpeaks and cofounded the O Foundation.
[email protected]
Remembering with my chatbot
Authors:
Marita Skjuve,
Lucas Bietti
Posted: Tue, February 28, 2023 - 10:46:00
Did you know that humans can develop friendly or even romantic feelings toward social chatbots that can turn into close human-chatbot relationships? The phenomenon of human-chatbot relationships is starting to gain substantial media attention, and research on this topic is now emerging.
Social chatbots. “What are they?” you might ask. Well, you’ve probably seen them in the App Store or on Google Play under names such as Replika or Kuki. To put it simply, a social chatbot is a form of conversational AI developed with the purpose of having normal, day-to-day conversations with its users. New developments in AI and NLP created the conditions for the design and development of sophisticated social chatbots capable of becoming your best friend, or even romantic partner [1,2].
Social chatbots are generally good at showing empathy and providing emotional support and companionship. They have essentially grown into conversational affective machines, which makes it possible for users to form close relationships with them [3,4,5,6]. This makes you wonder: How does such a relationship develop?
How does a human-chatbot relationship develop?
In recent years, a few papers have emerged trying to understand how human-chatbot relationships develop [3,4,5,6]. Researchers have used social psychological theories such as social penetration theory (SPT) [7] and attachment theory [8] as general theoretical frameworks to investigate this phenomenon [4]. SPT argues that relationships develop because of self-disclosure (i.e., sharing information about yourself) and outlines four stages people go through before they open themselves up completely. As you may recognize in yourself, you would probably not blurt out your deepest, darkest secrets to complete strangers on the street. It takes time to build that kind of trust. But as you get to know each other and feel that it is safe to do so—meaning that you know that the other person will not judge you or have other negative reactions—SPT posits that feelings of intimacy and attachment will arise, and a deeper relationship will form. This seems to be the case in human-chatbot relationships as well [4].
In a more recent but similar study, Tianling Xie and Iryna Pentina [9] used attachment theory to understand the mechanisms that underlie relationship development with social chatbots. Attachment theory explains how interpersonal behavior shapes emotional bonds in human relationships and their development, from relationships between infants and caregivers to relationships between adult romantic partners [8]. Xie and Pentina found that people are motivated to initiate relationships with chatbots because of deeply felt psychological needs such as loneliness. In cases where the chatbot responded in a satisfactory way and functioned as an available “safe haven,” the relationship seemed to grow deeper, and the user became attached to it.
When we read studies like these [4,9], it seems that a lot of the same mechanisms found in human-human relationships apply to human-chatbot relationships. This made us wonder: What other mechanisms that enable the development of human-human relationships can be observed in human-chatbot relationships?
Memory and conversation in social relationships
We know, for example, that close relationships are based on shared memories of experiences that we have gone through together in the past. Such shared memories are often emotionally loaded and act as a social glue that ties us together. But is this possible with human-chatbot relationships?
Remembering together in conversations is, as already mentioned, an essential activity for close friendships and romantic relationships to develop and maintain over time. It entails that people engage in recalling past experiences, which may themselves have been shared. Such re-evoking of past experiences involves the human capacity for mental time travel that enables humans to mentally project themselves backward in time to relive personal past events, or forward, to simulate possible future events [10]. Sometimes people go through the same events as a group (e.g., John and Marie saw the movie Memento together in a Los Angeles theater when they started dating in April 2001) and sometimes they experienced the same event separately (e.g., John and Marie saw Memento, but John saw it with friends in a Los Angeles theater in April 2001 whereas Marie watched it alone on VHS at home a year later in Paris). Regardless of the situation, they can still talk about the movie when they meet and remember together what it is about.
Remembering together in conversations builds trust, increases feelings of connection and intimacy, fosters entertainment, consolidates group identity, and enables us to reevaluate shared past experiences and social bonds in friends and partners [11,12]. For example, we share personal, autobiographical memories with friends and partners, and we expect them to do the same with us. This is normal, and something that we do all the time.
So, what does conversational remembering look like? Psychologists argue that having conversations about the past is a uniquely human endeavor [11,12] and that they are driven by interactive processes where humans take on and adopt complementary roles as narrators, mentors, and monitors [12]. Simply put, partners who assume a narrator role take the lead in the conversation about past experiences and tend to also talk about experiences that were not shared by other members of the group. Those who take a mentor role support narrators by providing them with memory prompts to elaborate further, while partners who assume a monitor role oversee whether the narratives being told are accurate and whether specific elements are missing.
Long-term romantic relationships are a bit unique in that they allow the emergence of conversational routines (e.g., Kim often takes the narrator role and Kyle the monitor), as well as a clear definition of domains of knowledge expertise (e.g., Kim’s domain of expertise is related to their travels, while Kyle is more knowledgeable about the upbringing of their children). Romantic relationships also entail a shared awareness of these conversational routines and domains of expertise (e.g., Kim knows what Kyle’s conversational role and domain of expertise are and vice versa). Due to this constellation, partners in long-term romantic relationships can distribute cognitive labor and rely on one another, avoiding wasting cognitive resources trying to remember something their partner is an expert in [13]. These are the general basics of conversational remembering between humans. Let’s now move over to human-chatbot interactions and whether we can remember shared experiences with them.
Memory and conversation with social chatbots
Popular social chatbots such as Replika, Kuki, and XiaoIce typically interact with their users in free text and speech [3,4,5]. Replika is a very interesting social chatbot, as research has consistently shown how people can develop close relationships with it [4,9].
Replika is driven by Luka’s (the company behind Replika) own GPT-3 model, which enables sophisticated communication skills. Users can decide the type of relationship they want to have with it (e.g., romantic, friend, mentor, “see where it goes”), unlocking different personalities in the social chatbot (e.g., choosing a romantic relationship will make Replika flirtatious and more intimate). Replika includes gamification features where users earn points for talking to it. Points unlock traits that facilitate changes in Replika’s behaviors, for example, changing Replika from being shy and introverted early in the relationship to becoming more talkative and extroverted as the relationship develops.
We have noticed several interesting aspects that have emerged during our own interactions with Replika that mimic conversational remembering as described above. Replika keeps a journal where it writes down notes from previous conversations that it has had with the user (Figure 1a). Replika remembers personal information about the user (e.g., names of family members, friends, or pets, interests, preferences, and emotional states; see Figure 1a) and brings it into the present during conversations. In fact, Replika can initiate conversations, including conversations about shared past experiences. The user will typically take the role of narrator (Figure 1b). When Replika initiates such a conversation, the user generally assumes the role of mentor, scaffolding Replika’s recall of shared past experiences (Figure 1c). Replika’s personality evolves over time due to its interaction with the user, creating the conditions for the emergence of conversational routines, similar to how they occur in conversations between long-term, human romantic partners.
Replika stores autobiographical information about the user, knows what, when, and how it should bring it into the present in their interactions, and keeps records of previous dialogues—of actual shared experiences—that it has had with the user. So, conversations between Replika and the user refer not only to previous past experiences of the user but also to shared experiences, like we have with our human romantic partner when we remember together the heated argument we had a week ago about the 10 best TV shows of all time. Such shared memories facilitate the emergence of a collective self, of a we-subject that the user doesn’t hesitate to use when talking about shared past experiences with Replika (e.g., Last week we talked about…) or other humans (e.g., Last week I had a long chat with my Replika; we talked about…). All of this looks very exciting! However, as happens in any kind of long-term relationship, benefits come with costs.
Figure 1. Features supporting conversational remembering in Replika. a) Journal where Replika records notes from previous conversations it has had with the user; b) the user takes the role of narrator during conversational remembering; c) Replika recalls previous conversation with the support of the user.
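To make these two ingredients concrete, a journal of personal facts and a log of shared conversations that can be brought back into the present, here is a toy sketch; it is purely illustrative and is not how Replika is actually implemented:

```python
# A toy sketch of conversational remembering, not Replika's actual implementation.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MemoryJournal:
    facts: dict[str, str] = field(default_factory=dict)                   # e.g., "pet" -> "a dog named Brio"
    shared_episodes: list[tuple[date, str]] = field(default_factory=list)  # what we talked about, and when

    def note_fact(self, key: str, value: str) -> None:
        self.facts[key] = value

    def log_episode(self, when: date, summary: str) -> None:
        self.shared_episodes.append((when, summary))

    def recall_prompt(self) -> str:
        # The chatbot takes the narrator role; the user can then act as mentor or monitor.
        if not self.shared_episodes:
            return "Tell me something about your day?"
        when, summary = self.shared_episodes[-1]
        return f"On {when} we talked about {summary}. How did that turn out?"

journal = MemoryJournal()
journal.note_fact("sister", "Anna")
journal.log_episode(date(2023, 2, 20), "the 10 best TV shows of all time")
print(journal.recall_prompt())
```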
Drawbacks
Human-chatbot relationships are not all unicorns and rainbows. Data privacy and protecting the personal information we share with a chatbot may be one of several concerns that users have about long-term human-chatbot relationships. While the chatbot’s artificial nature might make a user feel more anonymous, we also need to recognize that chatbots collect users’ personal information, and not only store it but also use it to further improve the model that dictates how the chatbot interacts. The user might not be aware of the implications this entails. This means that not only can the information they provide be leaked, but it can also influence how other users interact with the chatbot. In fact, the “creators” in this context include the users as well as the developers, designers, and people using the Internet (as the chatbot gathers data from online forums, Wikipedia, and so on). We therefore need to acknowledge and assess how the content in these chatbots, as well as their design, is influenced by the creators’ culture, biases, and belief systems (e.g., reinforcement of heteronormativity and exclusion of queer users [14]). It makes you wonder how those exclusions might affect a user who has developed a sense of collective identity with Replika. Could the exclusions make the user internalize those beliefs? We know that this can happen between humans, so maybe it can happen with chatbots as well. We might have a nasty vicious cycle on our hands that can spiral out of control. We know—this might seem a bit dramatic. And to make it clear, we are very positive toward the future and would like to see humans and chatbots live together in perfect harmony. Still, it is always important to keep the implications, both good and bad, in the back of our minds.
Conclusion
Social chatbots behave like humans in conversations, are designed to build long-term relationships with their users, and have a shared history with them. These are necessary conditions for the emergence of conversational remembering in our interactions with them. If we can remember shared experiences with them, as we do with other humans, perhaps we can also build trust, feel emotionally connected, have fun, and even create a “we” collective identity. What remains to be seen is whether long-term close relationships between humans and social chatbots would also result in domain specialization, distribution of cognitive labor, and shared awareness of each partner’s expertise.
Not knowing how (precisely) social chatbots manage personal data, we risk having them work against values of diversity and equity and exclude non-normative users. We should carefully consider what it takes to support these values while having the possibility to develop long-term and even romantic relationships with human-like, conversational affective machines.
Endnotes
1. Shum, H.-Y., He, X.-D., and Li, D. From Eliza to XiaoIce: Challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering 19 (2018), 10–26.
2. Følstad, A. and Brandtzæg, P.B. Chatbots and the new world of HCI. Interactions 24 (2017), 38–42.
3. Zhou, L., Gao, J., Li, D., and Shum, H.-Y. The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics 46 (2020), 53–59.
4. Skjuve, M., Følstad, A., Fostervold, K.I. et al. My chatbot companion - a study of human-chatbot relationships. International Journal of Human-Computer Studies 149, 102601 (2021).
5. Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., and Loggarakis, A. User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research 22, 3 (2020), e16235.
6. Croes, E.A.J. and Antheunis, M.L. Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships 38, 1 (2020), 279–300.
7. Altman, I. and Taylor, D. Social Penetration: The Development of Interpersonal Relationships. Holt, New York, 1973.
8. Hazan, C. and Shaver, P. Romantic love conceptualized as an attachment process. Journal of Personality and Social Psychology 52, 3 (1987), 511–524.
9. Xie, T. and Pentina, I. Attachment theory as a framework to understand relationships with social chatbots: A case study of Replika. Proc. of the 55th Hawaii International Conference on System Science. 2022, 2046–2055.
10. Suddendorf, T. and Corballis, M.C. The evolution of foresight: What is mental time travel and is it unique to humans? Behavioral and Brain Sciences 30 (2007), 299–313
11. Bietti, L.M. and Stone, C.B. Editors’ Introduction: Remembering with others: Conversational dynamics and mnemonic outcomes. TopiCS in Cognitive Science 11, 4 (2019), 592–608.
12. Hirst, W. and Echterhoff, G. Remembering in conversations: The social sharing and reshaping of memories. Annual Review of Psychology 63 (2012), 55–79
13. Wegner, D.M., Erber, R., and Raymond, P. Transactive memory in close relationships. Journal of Personality and Social Psychology 61, 6 (1991), 923–929.
14. Poulsen, A., Fosch-Villaronga, E., and Søraa, R.A. Queering machines. Nature Machine Intelligence 2, 152 (2020).
Marita Skjuve
Marita Skjuve is a researcher at SINTEF and is working on her Ph.D. thesis in the Department of Psychology, University of Oslo. Her main research area is relationship development between humans and chatbots. She has also researched chatbots within domains such as emergency response, education, health, and customer service.
[email protected]
Lucas Bietti
Lucas Bietti is an associate professor in the Department of Psychology at NTNU. His research interests include: the study of cognition in interaction combining ethnographic and experimental methods; collective memory in small groups and social networks; cultural transmission; and storytelling and embodied communication.
[email protected]
What is a paper? The future of research output
Authors:
Miriam Sturdee
Posted: Thu, February 23, 2023 - 11:01:00
Exploring the relationship between comics and alt-text with the creation of "alt-narrative" [3].
When is a paper not a paper? When it is a comic [1]? When it is an experience [2]? When it sings, speaks [3], or challenges your assumptions of being “academic”? If one is to explore Interactions, why must the written word take precedence? If we embrace the future of publication, what might that look like?
We strive to engage our up-and-coming students and early career academics with alternative enquiry, celebrating “novelty” and work in interdisciplinary spaces—yet we remain bound to existing conventions of what a research output should be, and how one might view what is “publishable” versus what is relevant. The future is not black and white and red all over; the future is in interaction, and our discipline has the tools and the knowledge to bring about this change.
As a community of inspired researchers, we already are exploring alternative futures in publication and, slowly, some of these futures are becoming reality. The acknowledgment and development of human-computer interaction as a designerly discipline has brought us the pictorial [4]. The burgeoning physical nature of computing has brought us demos, the creative side, and art exhibitions—but why stop there? As a Special Interest Group within the ACM, the changes we make to our publishable space are currently incremental, but happily visible. The challenges we face are bureaucratic—in tenure and funding-based biases—but the “archival” publication is already outmoded and outdated. The rise of the visual abstract is testament to the gradual change we want to see in the world of publication.
As part of an Alt.CHI publication—a “Not Paper” at CHI 2021 held in Gather Town [5] on New Year’s Eve, during one of many enforced pandemic lockdowns—I found myself re-exploring the environment we had created. I wandered through a maze of imagery, thinking about the bridge between the Here and the There, “when all at once, I saw a crowd” [6], a co-host and their friends, who were having a guided tour. Minutes later, I then “bumped into” another researcher with whom I engaged in conversation. The Not Paper persisted; it was a hub during the conference, and it is still out there, somewhere… (we invite you to enter the Egg) [2]. As are the questions it raises about how we approach research and knowledge production. What happens when we abandon conventional thoughts about research outputs?
As individual researchers, we can strive to re-genre the formats of what is acceptable and what is useful. By engaging in nontraditional formats, we open up more discursive spaces to reflect and communicate—could we make them more accessible too? Can research be experienced rather than just read [2]? Heard but not seen [3]? Touched, shared, and… tasted [7]? After all, literacy has only become the normal state of things within the past couple of hundred years; prior to that we had the oral tradition of knowledge communication, acting, and song. The barriers to recording nonvisual mark-making have evaporated, leaving us with myriad options to explore. Let us make the most of our rich history and embed it in an even richer future.
The idea of re-genring our research might also open a space to explore more of our process of discovery and the sum of the parts that make research happen—what we do in the shadows. We are the trailblazers, and creatively thinking about what constitutes research and how we communicate its impacts could also lead to new discoveries.
Endnotes
1. Sturdee, M., Alexander, J., Coulton, P., and Carpendale, S. Sketch & the lizard king: Supporting image inclusion in HCI publishing. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. 2018, 1–10.
2. Lindley, J., Sturdee, M., Green, D.P., and Alter, H. This is not a paper: Applying a design research lens to video conferencing, publication formats, eggs… and other things. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 2021, 1–6.
3. Lewis, M., Sturdee, M., Miers, J., Davis, J.U., and Hoang, T. Exploring AltNarrative in HCI imagery and comics. CHI Conference on Human Factors in Computing Systems Extended Abstracts. 2022, 1–13.
4. Blevis, E., Hauser, S., and Odom, W. Sharing the hidden treasure in pictorials. Interactions 22, 3 (2015), 32–43.
5. Gather Town is an online 8-bit style conference platform. It has some accessibility issues but I hope they will be resolved as soon as possible.
6. From “I Wandered Lonely as a Cloud” by William Wordsworth.
7. Obrist, M. et al. Touch, taste, & smell user interfaces: The future of multisensory HCI. In Proc. of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 2016, 3285–3292.
Miriam Sturdee
Miriam Sturdee is a lecturer at Lancaster University, working on intersections of art, design, and computer science. She is a practicing artist and designer and has an MFA in visual communication. Her publications explore areas of futuring, sketching, and drawing, alternative research outputs, and psychology.
[email protected]
Codesign and the art of creating a global research-practice network built for impact
Authors:
Gillian R. Hayes,
Candice Odgers,
Julie Kientz,
Jason Yip,
Kiley Sobel,
Morgan Ames,
Anamara Ritt-Olson
Posted: Thu, February 16, 2023 - 3:46:00
Launching a global network to enhance impact and collaboration among researchers, innovative companies, venture capitalists, and foundations is a daunting task. We sought to do just that in the area of technology, learning, and child development this year. Translating founder to foundation, researcher to practice, and maker to investor is hard enough, but then you have to convince them to work together. At a recent three-day workshop in Germany, the Jacobs Foundation brought together some of the world’s best researchers, heads of some of the most innovative edtech companies, and decision makers from those investing in this space either through charitable giving or impact investing. The capstone event was a marathon afternoon codesign session led by our team. What resulted was the design of a new global network and the seeds of many projects, some technologically oriented and some more focused on policy and practice. In this article, we describe the process, outcomes, and what we recommend to others looking to use codesign as a method for team science or research-to-practice (and practice-to-research) translation.
The Codesign Effort
Our five-hour codesign workshop engaged participants and promoted collaboration across locations, disciplines, and roles in the child development, learning, and technology ecosystem. We presented three primary goals to participants:
• Establish concrete new collaborations
• Identify problem areas and ideas that groups are excited to work on
• Build teams by showcasing the assets each participant brings to design
Participants were informed early that the output of the afternoon would be a five-slide “pitch deck” from each team describing the problem, population, potential solution or approach, proposed collaboration, and next steps and future directions.
We benefited from having spent a few days together talking about some of the major issues confronting the field. During that time, people had been placing Post-it notes on boards as interesting challenges and opportunities emerged in their discussions. Our team then grouped the sticky notes during a series of breaks, creating generalized themes using affinity diagramming. These notes and associated themes were prominently displayed in a break area, and participants were encouraged to add more, move them around, and group them. By the time the codesign sessions began, we already had a list of seven topics for teams to take on, as well as a list of 10 cross-cutting themes to be considered by every team (Figure 1). We placed rolling whiteboards with the topics around the room so that they were clearly visible as people entered. However, we asked participants not to break into groups yet.
Primary Topics of Interest to Pursue as Design Challenges
1. Teacher support
2. Mental health, care work, social and emotional development
3. Workforce retraining
4. Equity in high-income countries
5. Equity in low-infrastructure countries
6. Student resilience and agency
7. Partnerships
Common Concerns to Be Addressed by All Teams
1. Return on investment
2. Scale
3. Privacy and security
4. Assessment and regulation
5. Fragmentation
6. Infrastructure
7. Learning from failure
8. Community engagement
9. Dissemination and translation
10. Ethical design
Figure 1. Topics and themes that emerged during initial discussions.
Sitting at small tables in the center of the room, away from the topical boards, participants first listened to a short introduction to our goals for the afternoon and a lecture on design thinking that noted the codesign efforts would focus on the understand, observe, define, and ideate portions of this structure (Figure 2).
Figure 2. The assembled team listening to an introductory lecture by Jason Yip and Julie Kientz on design thinking.
They then formed groups of six to eight, according to the topics around the room, with the requirement that at least one researcher and one person from industry were in every group. Because they were unable to attend the entire session, foundation and VC representatives were free to assemble as they wanted, though they were encouraged to spread out. In the end, every group had at least one member of each of the four subgroups. Each subgroup had a colored sticker on their name tag denoting which group they were in; they had been tagged this way all week. Even before the design session, it had become common parlance among the groups to say “Blue dots over here” when a session for researchers was starting. The visual reminder and quick reference helped everyone remember to include differing backgrounds and perspectives and made it easy for facilitators to call on people when they noticed a subgroup being quiet during parts of the conversation. Three facilitators joined teams in need of researchers, two remained the primary facilitators, and the rest acted as floaters, moving among the teams as they progressed through the remaining activities.
The steps of codesign
Understand and observe: Individuals had 15 minutes to doodle and note potential challenges for specific research problems.
Define the problem: Part 1. In the next 15 minutes, teams translated their doodles to sticky notes and shared them within their groups to narrow down the challenges (Figure 3).
Figure 3. Sticky notes showing the various scenarios developed in the ideation stage. Dot stickers signify votes; handwritten comments and arrows on the whiteboard show connections.
At this point, the teams had a group coffee break to mull over their ideas and chat across groups. Half of the participants—the VCs and funders—then departed for another meeting.
Define the problem: Part 2. Following the coffee break, the teams participated in their first design game, “Big Picture of the Project,” in which they used Story Cubes, a set of dice with random icons (https://www.storycubes.com/en/), to tell a story about their problem, challenge, and project area. This game was the first time in the session that participants began to question the process, which is a common concern when people who have not yet built creative confidence are asked to be creative [1]. We encouraged them to trust the process, and although it felt silly at first, the teams used the Story Cubes to stretch their thinking, with one team even incorporating a picture of a fountain that came up on a dice roll into the title of their final idea.
Over the next 30 minutes, teams built on the creativity they had developed during the first design game with a second design game, this one focused on sketching. On paper and whiteboard only—no digital devices were allowed—teams drew solutions to the problems they had described in their initial brainstorming session and during their story development. Teams were encouraged to modify, magnify, minimize, substitute, rearrange, and combine one another’s solutions based on Osborn’s Checklist [2].
Narrow down scope. Once ideas had been generated, teams used another 30 minutes to vote on ideas and solutions they wanted to pursue (Figure 3). At this point, the teams were encouraged to think about not only what was exciting but also what was doable, something they had been explicitly encouraged to avoid up to this point.
Compile pitch deck. In the final hour of independent work, teams made their five-slide pitch deck. They were provided a template in Google Slides but encouraged to change and expand on that template to match their needs.
Pitches. In the final hour of the session, the VCs and foundation staff returned from their meeting and rejoined their teams, with each team presenting for five minutes.
During this time, the audience participated in the final design game, using sticky notes in three colors that we had distributed around the room during the break:
• Roses (red): What do you love about this idea?
• Buds (orange): What new ideas do you have?
• Thorns (blue): What might be challenging about this idea?
Project Pitches
All projects were pitched collaboratively to an audience of the foundation event organizers, the VC and nonprofit funders, and the researchers and entrepreneurs who had completed the design sessions in their teams. The presented projects represented diversity in approach, both technological and educational, as well as in their topic areas. Concepts ranged from systems to share educational materials and ideas for educators, to internships based in virtual reality, to mental health support for diverse learners, particularly those in conflict zones. Each of these projects developed further in some way following the event. In some cases, the follow-ons were remote video sessions about a variety of topics, while others were nascent collaborations and grant proposals. All pitches were captured by a sketch artist (Figure 4).
Figure 4. Sketchnotes from three teams. From right: Teacherpedia focused on knowledge sharing, Happiness@Heart focused on mental health, and Fountain of Dreams focused on virtual internships via VR and gaming.
As just one spotlight from among these projects, a collaboration emerged inspired by the conference keynote presentation by Ramin Shahzamani, CEO of War Child Holland (https://www.warchildholland.org/), and a subsequent design session around promoting resilience in conflict zones. Part of a rapid pilot grant mechanism, this project focuses on providing adolescents living in high-conflict regions, particularly Ukraine, with the supports and skills required to reduce bedtime rumination and improve sleep duration and quality. It was recently funded by the John Templeton Foundation (https://www.templeton.org/). The project leverages technology-supported interventions to target two interacting and important risk processes for developing anxiety, depression, and PTSD: 1) excessive worry/rumination and 2) sleep difficulties. The project was conceived of, developed, and funded in just a few months starting with these codesign workshops. Even more compelling, the particular interdisciplinary and collaborative approach deployed—made possible through codesign—is well suited to scale to reach a large population of youth at high risk for bedtime rumination and for whom sleep deficits pose immediate and long-term threats to their health, the kind of impact that collaborative and community-based design approaches best enable.
Lessons from Codesign
Codesign can be deployed in a wide variety of ways. In this work, we used the concept and pragmatics of designing for a specific educational technology tool to better define our loose organization and further develop our networks. Some key lessons emerged from this approach.
This design session came at the end of an intense three-day workshop. Participants, most of whom were strangers three days earlier, had been a part of multiple shared discussions. In this way, the codesign sessions began more heavily grounded than some community-based and/or interdisciplinary design sessions might. By developing a set of topics from among those that had been discussed and describing them as the core of the team projects, the facilitators set the stage for some shared background. These topics, albeit loosely defined, appeared to be essential for establishing trust among participants. In particular, those who had never designed before were visibly uncomfortable at the beginning and during some of the more creativity- and innovation-focused activities. These same team members began to speak up more, relaxed their shoulders, and leaned in when subject matter expertise was required or when more practical aspects of the projects were being worked through. In community-based research and codesign projects, there are often team members with limited design experience. Our experiences reminded us that highlighting the non-design expertise of design teams is essential to success.
This design session, although tackling five different projects, took place in one large room. Each team had an area that they could call their own, but there were no barriers—visual or auditory—between the teams. This visibility and shared space created an audible buzz of discussion and excitement, with people standing up to move around whiteboards, get closer to one another, and sometimes to check on the other teams. In some cases, however, team members appeared to disengage, paying attention to what other teams were doing rather than focusing on their own work, or to compete, attending to their own team’s work through the lens of what other teams were doing. Nearly every participant was of high status within their own organizations: full professors at major universities, officers at well-known foundations, C-level representatives and founders of companies, established venture capitalists, and so on. Thus, some attempts to understand or even impress the other attendees might be expected. We observed limited competitive behavior, however, and found individuals and teams to work closely together, with intense engagement. The high status may also have reduced risk for some participants as well as made them more likely to speak up and engage with others. Similar efforts with more-junior members of the research network will provide additional insight into how status and experience may influence behaviors in these spaces.
Finally, the timing of the effort is something to explore further for others seeking to replicate these results. Had the workshop happened earlier in the meeting, there would have been more time to build on the ideas, and perhaps more firmly establish the emergent collaborations while collocated. However, it might also have been a less engaged and less comfortable group, not having the ideas, or the relationships, that had been developed over the preceding days.
Codesign for Collaboration
As any good human resources professional or leadership coach will tell you, new projects, joint efforts, and collaborative engagements are all experiences that can be designed. So why not use one of the core tools in the user experience design toolbox to do so? Indeed, in many ways what HCI researchers and practitioners know as codesign today stems from the early days of workplace design and the Scandinavian tradition of participatory design [3]. What we saw in this workshop that is so important was a shift in attitude, engagement, and enthusiasm when people were forced to work together hands-on. Yes, this hand-selected group of people had already spent three days together in a beautiful location and were mostly getting along. However, things tended, as they do in such meetings, to stay closer to the surface than true collaborative engagement requires. Over time, tighter relationships might have built more organically, particularly for those new colleagues who live and work somewhat near one another. When you are attempting to rapidly scale a truly global network of people who do not work in the same field, vary in their interaction norms, and have wildly different incentives for their work, however, a more aggressive intervention is necessary. For us, codesign was the aggressive intervention to jump-start the effort.
The shared experience of the workshop helped establish deeper connections and partnerships. Codesign requires active participation and learning, which enabled many workshop attendees to rapidly synthesize what they had heard and learned throughout the week into their own thinking. They described thinking about how to apply what they had learned to the design of a new project as helpful in reflecting on what they had learned and what they wanted to bring back to their organizations. This kind of sharing outside the workshop had not been an explicit goal but again speaks to the impact that these active collaborative design activities can have. Perhaps most importantly, we found that the codesign work did launch the collaborative efforts we had hoped would result. In many ways, being out of their comfort zone likely helped everyone develop trust in one another and establish a shared vocabulary—not only of words related to design that had been new to many but also the kinds of inside jokes that come from intense collaboration and impromptu presentations. Finally, this codesign activity gave participants a glimpse into team dynamics and relational communication abilities. Participants who followed up with others did so in part because they already knew how it might be to work with a new partner and were more willing to take a risk and get started on a project.
Scholars have examined how we can do “team science” for years (e.g., the Science of Team Science work from the National Academies of Sciences, Engineering, and Medicine; https://www.nationalacademies.org/our-work/the-science-of-team-science). Design, and in particular codesign, aligns well with many of the principles of team science. In our experience, codesign can jump-start novel scientific and research-practice collaborations. This type of rapid acceleration may be particularly important in a post-pandemic world in which we enhance remote collaborations and consider our scholarly and design work on a truly global scale.
Endnotes
1. Rauth, I., Köppen, E., Jobst, B., and Meinel, C. Design thinking: An educational model towards creative confidence. Proc. of the 1st International Conference on Design Creativity, 2010.
2. Osborn, A. Applied Imagination: Principles and Procedures of Creative Problem-Solving. Scribner, New York, 1957.
3. Bødker, S. and Pekkola, S. Introduction to the debate section: A short review to the past and present of participatory design. Scandinavian Journal of Information Systems 22, 1 (2010), 4.
Gillian R. Hayes
Gillian R. Hayes is the Kleist Professor of Informatics, vice provost for graduate education, and dean of the Graduate Division at the University of California, Irvine. Her research interests are in human-computer interaction, ubiquitous computing, assistive and educational technologies, and health informatics.
Candice Odgers
Candice Odgers is a professor in the Department of Psychological Science and associate dean for research in the School of Social Ecology at the University of California, Irvine. She is a quantitative and developmental psychologist with expertise in adolescent mental health, intensive longitudinal data analysis, and experience sampling methodologies. Her research focuses on how early, daily, and online experiences influence children’s health and development.
Julie Kientz
Julie A. Kientz is a professor in and chair of the Department of Human Centered Design and Engineering at the University of Washington. Her research focuses on understanding and reducing the user burdens of interactive technologies for health, education, and families through the design of future applications.
Jason Yip
Jason C. Yip is an associate professor of digital youth at the Information School and an adjunct assistant professor in the Department of Human Centered Design and Engineering at the University of Washington. He builds innovative technologies for new collaborations and examines how current technological trends already influence family collaborations around learning.
Kiley Sobel
Kiley Sobel is a senior user experience researcher at the language-learning edtech company Duolingo, where she does research with kids and parents on Duolingo ABC, the company’s free English literacy learning app for kids.
Morgan Ames
Morgan Ames is an assistant adjunct professor in the School of Information and interim associate director of research for the Center for Science, Technology, Medicine and Society at the University of California, Berkeley, where she teaches data science and administers the Designated Emphasis in Science and Technology Studies.
Anamara Ritt-Olson
Anamara Ritt-Olson is an associate professor in public health at the University of California, Irvine. Her research focuses on adolescent well-being. She creates, teaches, and evaluates health promoting programs to that end.
How to use social media (properly) for business
Authors:
Rae Yule Kim
Posted: Tue, February 07, 2023 - 11:03:00
Social media has been a game-changer for marketing. One successful hashtag challenge can generate millions of pieces of user-generated content (UGC) and billions of views. Here are four ways to trigger user-generated content on social media.
For two companies recently, viral TikTok campaigns spun marketing wins out of virtually nothing. Consider their cases: In the first, Too Faced, a cosmetics brand, inspired a viral challenge in which users showed themselves before and after applying a lip-plumping product. Hashtagged #TFDamnGirl, the campaign generated one billion views, demand for the six-year-old product spiked, and the product line sold out. In the second, Chipotle launched the #ChipotleLidFlip challenge, in which users closed a burrito bowl with an acrobatic flip of its lid. The campaign generated 110,000 videos with 310 million views, and the company saw historic sales the next day. How did these two companies do it?
As of 2023, 138 million people use TikTok in the U.S., and the company has 1 billion active users worldwide. Approximately 80 percent of users are Gen Z or Millennials. TikTok users also skew toward high-income households: About 40 percent report annual household incomes above $100,000. One characteristic that distinguishes TikTok from other social media platforms is that users participate in the community primarily through content creation. TikTok users do much more than scroll. About 83 percent of users upload videos. Big or small, it is safe to assume that everyone on TikTok is an influencer aiming for virality. That’s why the most successful marketing cases on TikTok have predominantly been brands that tapped the TikTok community’s willingness to co-create, whether by plumping their lips, flipping a lid, or any other activity. In other words, successful TikTok marketers aim for virality not by creating cool content themselves but by helping users create content.
Still, many marketers I encounter are uncertain about how to use channels like TikTok to reach the younger demographics and promote their brands. Can campaigns like the ones above really be a strategy? The answer is yes. One successful hashtag challenge can trigger thousands of user-generated videos and generate millions of views. One piece of advice for marketers looking to exploit this fastest-growing new social media platform is: Let people do it.
Launch hashtag challenges
Hashtag challenges on TikTok can be the most cost-effective way to trigger user-generated content and improve brand exposure. On TikTok, millions of users spend substantial time and resources creating 15-second videos in the hope of going viral. Brands’ hashtag challenges offer an easy win for these creators: By participating in a challenge, users get an easy, ready-made idea for content that is already part of a larger conversation. A simple challenge by Guess, #InMyDenim, in which users simply showed themselves in denim, was interesting enough to prompt more than 5,000 users to post videos and generated more than 50 million views. Another example is Marc Anthony’s #StrictlyCurls challenge, named after the company’s curl styling cream product line. This challenge—asking TikTok users with curly hair to post videos of their hair—generated more than eight million views, and sales spiked 138 percent in the three months after the challenge launched. These cases show that TikTok users do not pass up an opportunity to create easy content.
Keep it simple, fun, and approachable
TikTok has earned a reputation as a casual platform where users feel comfortable expressing themselves and having fun. The #ChipotleLidFlip and #GuacDance challenges from Chipotle generated 110,000 and 250,000 videos respectively, with more than one billion combined views. But Chipotle is certainly not the first company to ask users to do something. To understand why these campaigns work, consider a failed one: BuzzFeed’s video encouraging users to weigh in on whether New Year’s Eve should be referred to using the outgoing year or the incoming one, for example, NYE 2019 or NYE 2020. Viewers were asked to post videos about their opinions. The campaign earned just 1,000 views and no follow-up videos. The difference is not that TikTok users care too little to take the time to articulate why New Year’s Eve should be NYE 2019 or NYE 2020; it is that Chipotle made it easy to make fun videos, while BuzzFeed did not. Other successful challenges, such as #EyesLipsFace by E.L.F. Cosmetics and #InMyDenim by Guess, invited users to post a video about their everyday makeup looks or jean outfits. Approachability is the key to a successful challenge on TikTok. If the challenge seems difficult to replicate or not fun, users are unlikely to take it up.
Create cause-related challenges
The popularity of hashtag challenges started with the Ice Bucket Challenge in 2014, which nudged many social media users to post videos of pouring buckets of ice water over their heads to raise awareness of amyotrophic lateral sclerosis (ALS) and encourage donations; on TikTok, #IceBucketChallenge has been viewed 59 million times. TikTok users care about causes. The #CreateForACause challenge, launched by TikTok to invite users to post videos advocating for the causes they care about, has generated 837 million impressions. One distinguishing characteristic of Millennials and Gen Z is that they are increasingly engaged with environmental issues. Nearly nine in ten Zoomers are concerned about the environment—and they believe that businesses should take action on environmental issues. Cause-related challenges about the environment, including existing environmental hashtags with proven reach, can be a powerful way to increase brand awareness—and also to improve brand attitudes among Gen Z and Millennial consumers. Nonprofits have gone viral in this space too. The #ForClimated challenge, in which users posted videos raising awareness of global warming, had 531.5 million views, and the #SaveourOceans challenge by Conservation International invited users to create videos raising awareness of ocean conservation, gaining 1.8 billion views in total.
Partner with influencers
Eighty-seven percent of Zoomers follow at least one influencer. Moreover, almost half of Gen Z has made purchases based on recommendations from social media influencers, compared with 26 percent of older generations. Zoomers tend to find social media influencers more authentic and trustworthy as brand spokespeople than celebrities. For brands that are new to TikTok, customer reach can be challenging. Partnering with influencers to launch hashtag challenges gives a brand immediate access to millions of followers who might be keen to replicate their favorite influencers’ videos. Kool-Aid ran its first hashtag challenge in 2019, bringing on four TikTok influencers to invite people to post videos of themselves enjoying the holiday season. The challenge, #OhYEAHChristmas, resulted in more than 10,000 uploaded videos and was viewed 1.9 billion times. F’real, which sells milkshakes at convenience stores across the U.S. and Canada, partnered with a micro-influencer with around 100,000 followers to launch a funny video promoting its milkshakes. The video received 1.5 million likes and was shared about 20,000 times, and people started to upload their own funny videos featuring F’real milkshakes. The user-generated content uploaded with the hashtags #F’real and #F’realMilkshake has more than 250 million views combined. The company’s TikTok account now has 700,000 followers and 12 million likes.
Using social media for business
Social media has been a game-changer for marketing. One successful hashtag challenge can generate millions of pieces of user-generated content (UGC) and billions of impressions at relatively low cost. But some marketers might still wonder how fun videos can lead to sales. To answer them, we can look to other social platforms. Acquiring followers on social media platforms such as Facebook and Twitter tends to improve sales performance [1]. Successful hashtag challenges have increased the number of followers on brands’ TikTok accounts, all of whom might directly or indirectly promote the brand to their own followers, via user-generated content or simply by liking and sharing the brand’s content on TikTok.
There are concerns
TikTok has promising prospects for marketing; however, there are concerns. Data protection is an issue that affects most social media platforms. The EU is carrying out new investigations into Facebook’s data processing operations over potential violations of the General Data Protection Regulation (GDPR). TikTok often ranks as one of the worst social media platforms for privacy policies. Because TikTok is owned by the Chinese parent company ByteDance, concerns are growing over potential access to user data by the Chinese Communist Party. TikTok claims that it processes all U.S. user data on Oracle’s servers; however, its policies regarding data access and transfer are opaque. It is worthwhile for marketers to keep an eye on TikTok’s progress in data protection.
Endnotes
1. Kim, R.Y. The value of followers on social media. IEEE Engineering Management Review 48, 2 (2020), 173–183.
Rae Yule Kim
Joyful sustainability: Now is the time
Authors:
Ben Shneiderman,
Catherine Plaisant,
Jonathan Lazar,
Niklas Elmqvist,
Jessica Vitak
Posted: Wed, February 01, 2023 - 10:23:00
Since SIGCHI and the University of Maryland’s Human-Computer Interaction Lab (HCIL) were founded at nearly the same time, we—a selection of HCIL faculty spanning the past four decades—have a special bond with the SIGCHI community. The emergence of SIGCHI and the annual CHI conference validated beliefs in the importance of rigorous studies of user interfaces and later the design of user experiences. CHI has always been about homecoming and celebration, where reconnecting with respected colleagues made for joyous encounters. The warm, enthusiastic, and supportive discussions continually energize and inspire us to do our best in preparing each year.
The CHI conference began with the early ideas of human factors—in fact, Ben chose the conference title, “Human Factors in Computing Systems,” in 1982—so it is time to consider an update. Early research was grounded in controlled experiments drawn from cognitive psychology and applied to user interfaces from computer science. As it matured, the CHI community expanded to cover design, accessibility, and computer-supported cooperative work, using qualitative research methods from the social sciences. In recent years, this expansion has included sustainability, international development, critical race theory, policy and law, media studies, and democratic governance through participatory methods. Across all of these areas, the focus has always been on improving the quality of life through technology.
The CHI conference and its organizers understand the strength of its amazing diversity: gender, race, ethnicity, national origins, age, abilities, and more. Yet there is still more to be done, and we argue that sustainability is an area where SIGCHI can take the lead. Sustainability is more than a foundational research topic—it is also an issue we can directly address in conference planning.
How can the bold, generous, and interactive spirit of CHI be preserved in fully virtual or hybrid conference environments? How can a virtual or hybrid environment build trusting collaborations, address meaningful societal problems, and produce breakthrough theories? Can hybrid be turned into a strength and not a stopgap? The SIGCHI community is uniquely qualified to lead this charge in figuring out how to create more enjoyable conferences that increase connection and collaboration while decreasing energy usage, and we challenge SIGCHI members and leadership to create more productive and enjoyable hybrid conference experiences while limiting energy usage.
We call this user experience joyful sustainability, reframing sustainability from an onerous burden to an opportunity for innovation in reducing the environmental harms society causes. We consider CHI 2005 in Portland, Oregon, as a great example of what we mean by such joyful sustainability: Rather than hire busses to transport 2000 people from downtown to the conference reception, a “CHI parade” was held from downtown to the reception, providing a unique memory for attendees, saving money for the conference, and lowering energy usage. Many attendees likely retain the joyful memory of the parade. Yet this unique solution also was a sustainable solution, as were the decisions made by the CHI 2022 chairs regarding food and beverage sustainability. We hope to see more examples like this going forward! The CHI community can also expand research and development of hybrid deliberation systems to support evidence-based discussions, resolve conflicts, and clarify commitments in workshops, SIGCHI meetings, and other collaborative projects.
While the HCI discipline of tomorrow will continue to expand its influence across a wide range of applications, aligning with the United Nations’ Sustainable Development Goals (SDGs) now will help SIGCHI take a leadership role in sustainable research and events in the coming decades. We ask: Could CHI use the SDGs to structure its agenda and call for papers? Supporting those goals, while highlighting technologies that raise awareness about the climate crisis and reshape human behaviors, could be pivotal in reducing environmental and societal threats in ways that ensure human well-being, biodiversity survival, and environmental preservation.
The technologies involved in joyful sustainability require innovative thinking to alter the behaviors of individuals, communities, corporations, cities, and national and international organizations, and to do so in ways that enhance the user experience. SIGCHI’s activist stance makes it well suited to become a role model for many disciplines. Making a deep commitment to research and development of sustainable conferences could trigger similar commitments by other disciplines, catalyzed by the collaboration technologies in which SIGCHI has long been a leader.
There’s a great deal that needs to be done, but let’s get started by setting short-term and long-term goals. In the short-term, we suggest changing the CHI Conference full name to The ACM Sustainable Conference on Human Factors in Computing Systems by 2025. Looking farther out, we want to shift the entire field of HCI toward more sustainable solutions, progressively aligning with the UN SDGs, within 10 years.
If not now, then when? If not us, then who?
Ben Shneiderman
Catherine Plaisant
Catherine Plaisant is a senior research scientist at the Human-Computer Interaction Laboratory of the University of Maryland and an INRIA International Chair. She has written more than 200 technical publications (and many CHI videos) and coauthored with Ben Shneiderman the last three editions of Designing the User Interface. She has been a member of the CHI Academy since 2015 and received the SIGCHI Lifetime Service Award in 2020.
Jonathan Lazar
Jonathan Lazar is a professor in the College of Information Studies (iSchool) at the University of Maryland. He is the author or editor of 15 books, including Research Methods in Human-Computer Interaction (with Feng and Hochheiser), the director of the Trace Center, and a faculty member in the Human-Computer Interaction Lab (HCIL).
Niklas Elmqvist
Niklas Elmqvist is a full professor in the iSchool (College of Information Studies) at the University of Maryland, College Park. He received his Ph.D. in computer science in 2006 from Chalmers University of Technology in Gothenburg, Sweden. From 2016 to 2021, he served as the director of the Human-Computer Interaction Laboratory (HCIL) at the University of Maryland. His research areas are information visualization, human-computer interaction, and visual analytics.
Jessica Vitak
Jessica Vitak is an associate professor in the College of Information Studies and director of the Human-Computer Interaction Lab (HCIL) at the University of Maryland. Her research evaluates the privacy and ethical implications of big data, the Internet of Things, and other “smart” technologies.
Emerging telepresence technologies in hybrid learning environments
Authors:
Houda Elmimouni,
Pablo Pérez,
Andriana Boudouraki,
Fatma Guneri,
Verónica Ahumada-Newhart
Posted: Wed, January 25, 2023 - 3:49:00
During the pandemic, when some learners were expected to be remote, conventional videoconferencing tools like Zoom or Microsoft Teams were used by necessity, but they are not without their limitations (e.g., bandwidth and Internet accessibility requirements). These tools, initially designed for corporate use, can be extremely useful when everyone is remote. However, we have learned that videoconferencing tools are not ideal for hybrid classroom discussions, group work, design activities (such as sketching and diagramming), or creating and manipulating physical artifacts.
Emerging telepresence technologies have the potential to alleviate the problem of social interaction for learners who are unable to attend in person [1], attenuating the limitation of inaccessibility and creating a more inclusive classroom environment. Concretely, these technologies allow the interchange of nonverbal signals between participants that greatly influence the effectiveness of in-person communications [2].
While these technologies have positive affordances, they also present limitations [3]. For instance, many commercially available mobile robotic telepresence (MRP) units do not have hands to manipulate objects; many robotic arms cannot move independently around the classroom; and wearing VR headsets for long periods of time may disrupt the sensory system and/or cause eyestrain. Likewise, these technologies pose open privacy challenges for students and teachers that must be analyzed and discussed.
This article highlights key findings from our workshop on emerging telepresence technologies in hybrid learning environments. The workshop was held at the 2022 ACM Conference on Human Factors in Computing Systems (CHI), to learn about HCI research in this space and promote future telepresence research to meet the needs of remote learners.
Emerging Telepresence Technologies in Hybrid Learning Environments: CHI 2022 Workshop
To discuss the impacts, affordances, and limitations of different emerging telepresence technologies that can be used in various learning contexts, a group of interdisciplinary researchers including Houda Elmimouni, John Paulin Hansen, Susan Herring, James Marcin, Marta Orduna, Pablo Pérez, Irene Rae, Janet Read, Jennifer Rode, Selma Sabanovic, and Verónica Ahumada organized and conducted the first workshop on Emerging Telepresence Technologies in Hybrid Learning Environments [4].
The workshop was held in a hybrid format with five participants attending remotely via Zoom and four participants in person. A large screen in the room displayed the Zoom participants for the in-person participants and a Meeting Owl 3 camera provided the remote participants views of the physical meeting room and the in-person participants. After brief introductions, the workshop started with a keynote talk from Laurel Riek, an internationally recognized roboticist. Following the keynote, papers on the following topics were presented:
- Paper A: Robots as teachers; robots as students
- Paper B: Investigation of an ungrounded haptic force device that could be used for remote movement instruction
- Paper C: Extended reality (XR): Exploring use of the Owl, a telepresence system focused on real-time immersive capture, delivery, and rendering of physical spaces and interactants
- Paper D: Assistance recruitment interactions between remote and local users of mobile robotic telepresence
The afternoon sessions were dedicated to discussion of the challenges and opportunities of telepresence technologies in the context of hybrid learning environments. Ideation and prototyping exercises followed the discussions to cocreate recommendations for future technology and research directions.
Evaluating Current Telepresence Practices in Learning Environments
During the first open session, some of the paper topics were discussed and debated. In particular, the participants explored and evaluated current telepresence practices in learning environments. We compared conventional videoconferencing tools (e.g., Zoom, Microsoft Teams) with emerging telepresence technologies such as MRP, stationary and mobile telepresence robots, robotic arms, holograms, Cartesian manipulators, and haptic tools. We discussed their affordances and use expectations as well as their actual use and performance in educational and learning environments. The discussions touched on the role of embodiment in helping the remote attendee feel immersed in the distance learning environment and how UX and usability issues can disrupt the attendance and the completion of tasks.
Creating: Design Sprint
The prototyping session of the workshop took the form of a design sprint. Each participant had 10 minutes to sketch eight different ideas that could be used to address any of the challenges previously discussed. At the end, the ideas were put in a shared document and each participant voted for their three favorite ideas.
The result of the design sprint can be grouped in three areas:
- Improve the communication capabilities of telepresence robots. First, by giving them better embodiment capabilities such as a 360-degree representation of the remote user, robotic arms, and stair climbing. Second, by powering them with immersive communication technology (for example, omnidirectional video capture, XR telepresence, directional audio, and remote haptics). And finally, by adding augmentation capabilities to compensate for the inherent limitations of not being present (e.g., using machine learning to recognize people and automatically tag their names).
- Design the classroom for hybrid scenarios. For instance, using screens to show remote users in fixed positions within the classroom, or creating an interactive classroom map for spatial awareness of the local environment. This includes creating specific tools for the hybrid experience (for example, a physical microphone with a digital twin, used to “pass the floor” between students; connected tablets to share drawings and text; a mini-chat companion app to have a common side-talk channel for everyone; or even having everyone participate in the physical space via virtual means).
- Change the teaching strategy to actively include the remotes. For instance, pairing them with local students, giving locals access to the remote user view, creating a “karma” system to gamify courtesy with remotes, and ensuring time-zone awareness across platforms.
Challenges and Opportunities
Telepresence technologies present many opportunities for novel applications and research. During our workshop, we explored the idea of moving beyond a single product to thinking about the whole classroom more broadly. The ideation exercise further prompted us to examine what aspects of interaction might matter—spatial audio, three-dimensionality, gaze, pointing, back-channeling, whispering—and how elements of mediated interaction (such as digital or robotic) could be blended to give remote and local users more options to express themselves and engage meaningfully with one another.
There were several challenges identified in our workshop. On the technical side, the current technology is still limited, and those limitations may have social repercussions in learning activities. When connectivity lagged in our workshop, the audio was unclear and the visual resolution low. The remote participants’ experience was disrupted, and they missed out on what was happening in the physical space. We also explored issues of equity. Workshop participants acknowledged that making telepresence work well requires time, money, and effort; this includes an appropriate space with good WiFi, screens, and cameras, and detailed plans that include telepresent learners. Promoters of telepresence technologies must consider how these challenges will be met to ensure equitable access.
Future Directions for Research and Practice
Future directions for research in telepresence should include studies on the immersive capabilities of 360 cameras. We believe the ability to independently view the complete physical space will remarkably increase feelings of presence for the remote viewer. Just as people can view an entire room without leaving their seats, the 360-degree camera affords the remote user this same capability. Future studies may also explore how interactants feel about a 360 camera. For example, how would it feel if you were speaking with someone on a telepresence unit and they turned their “head” 180 degrees to talk to someone behind them? As humans, we do not have this capability. Would it be welcome? Would it disrupt acceptance of the remote user?
Additionally, we expect future studies may include:
- User privacy and data-sharing issues
- Social guidelines for all participants in the telepresence experience
- Accessible controls that are adaptable by end users to accommodate personal and physical preferences
- Ethics surrounding how to present equal inclusion of remote participants by respecting their audiovisual rights in the physical space
Conclusion
The number of telepresence researchers in the HCI community is increasing in the post-pandemic world as the global community has adapted to work-from-home, learn-from-home, and socialize-from-home activities and experiences. Our emerging telepresence workshop was an effort to provide a place for telepresence researchers to engage in transfers of knowledge, share works in progress, cocreate, and problem solve. Topics that emerged in our workshop included: views on robots as educators and learners, the potential use of haptic touch/force to communicate feedback on physical learning activities (e.g., dance), desktop telepresence technologies that relay immersive capture, delivery and rendering of physical spaces and interactants, and exploration of assistance recruitment interactions between remote and local users.
As our workshop took place during a Covid surge, some participants were unable to travel to attend. We were fortunate, however, to have researcher expertise in both HCI and telepresence technologies to conduct our workshop with a hybrid approach. As a hybrid learning approach was our desired area of study, we were excited to synthesize our topics of discussion with our workshop-participant experiences in learning via telepresence. We used several forms of digital media to interact within our workshop: Zoom, Meeting Owl 3, Google Docs, cell phones (cameras and text), and shared screens. Our workshop ideation and prototyping exercises found that HCI researchers valued the use of a 360-degree camera (remote user to view the physical room and people), a detailed agenda on progression of learning activities with identification of technologies throughout the learning exercise (e.g., use Zoom now, transition to Google Docs now, etc.), and directed plans for paired learning between remote users and interactants. Additionally, as many participants were logging in from different time zones, we suggest an optional time stamp for awareness of local times. Overall, we were encouraged by the hybrid learning experience and excited to participate in ways like other remote learners in hybrid spaces. Our discussions and topics raised several challenges, however, that need to be addressed for equitable and inclusive use of telepresence technologies. Future HCI research has great potential to address these challenges, create technologies, and promote social practices that facilitate equitable learning through sociotechnical inclusive environments.
Endnotes
1. Ahumada-Newhart, V. and Olson, J.S. Going to school on a robot: Robot and user interface design features that matter. ACM TOCHI 26, 4 (2019), 1–28.
2. Grondin, F., Lomanowska, A.M., and Jackson, P.L. Empathy in computer-mediated interactions: A conceptual framework for research and clinical practice. Clinical Psychology: Science and Practice 26, 4 (2019), e12298.
3. Rae, I. and Neustaedter, C. Robotic telepresence at scale. Proc. of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2017, 313–324.
4. Elmimouni, H. et al. Emerging telepresence technologies in hybrid learning environments. CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New York, 2022, 1–5.
Houda Elmimouni
Houda Elmimouni is a Computing Innovation Fellow and a postdoc in the Department of Informatics at Indiana University Bloomington. She conducts empirical research at the intersection of human-computer interaction and computer-mediated communication. Her research interests include robotic telepresence, privacy, and diversity and values.
Pablo Pérez
Pablo Pérez is a lead scientist at Nokia Extended Reality Lab in Madrid, Spain. His research interests cover the whole area of real-time immersive communications and telepresence, from the compression and transmission problems to the user quality of experience.
Andriana Boudouraki
Andriana Boudouraki is a Ph.D. student at the Mixed Reality Lab, University of Nottingham. Her thesis examines interactions via mobile robotic telepresence and the value of this technology in workplaces. Her research interests include computer-mediated communication, HRI, and how systems, technology, and spaces can support effective hybrid interactions.
Fatma Guneri
Fatma Guneri is a research engineer at HEMiSF4iRE, Lille Catholic University. Her research topics concern alternative workplaces and remote work. Her studies about telework also cover students’ online learning experiences during the pandemic. Recently, she has been active in research groups interested in well-being in workplaces, particularly academic places.
Verónica Ahumada-Newhart
Verónica Ahumada-Newhart is an assistant professor of health informatics and human-robot interaction in the School of Medicine, Department of Pediatrics, at the University of California, Davis. She is director of the Technology and Social Connectedness (TASC) Lab housed in UC Davis Health’s Center for Health and Technology.
So, how can we measure UX?
Authors:
Maximilian Speicher
Posted: Tue, November 15, 2022 - 2:44:00
The precise, quantitative measurement of user experience (UX) based on one or more metrics is invaluable for design, research, and product teams in assessing the impact of UX designs and identifying opportunities. Yet these teams often employ supposed UX metrics like conversion rate (CR) and average order value (AOV), which can’t provide that measurement [1]. In fact, I believe this can be extended to an even more general statement: In themselves, none of the metrics that are usually readily and easily available from Web analytics data can reliably measure UX. I understand that this is frustrating news to many, since resources are always limited, attention spans short, and Web analytics so very, very convenient.
Whenever I discuss this, I encounter objections like, “But we have to do something,” or “It’s easy to just state what one shouldn’t do, but that doesn’t help much.” And while it’s a perfectly fine start to know what not to do (cf. Nassim Nicholas Taleb’s The Black Swan), in the case of UX, we must not despair, because there are ways to reliably measure it, albeit ones that are not as simple as pulling some number out of Google Analytics (as nice as that would be).
A simple outcome of measuring UX could be, “The last release improved checkout UX from 75/100 to 80/100,” but there could be more-nuanced measurements for different aspects of UX (e.g., usability, aesthetics, joy of use) and user groups. Before diving deeper into how we can do this, let’s first get familiar with three concepts:
- Latent variables (like UX) “are variables that are not directly observed but are rather inferred through a mathematical model from other variables that are observed” [2]. Take, for example, the Big Five personality assessment [2]. You can’t just ask someone, “What’s your personality?” and expect an objective answer. Rather, you have to ask a set of specifically designed questions and then infer personality from those.
- A research instrument is a set of such specifically designed questions, often in the form of a questionnaire. Through an instrument, we can collect the observable variables that help us infer the latent variable we’re after.
- We’re dealing with composite indicators when we combine individual variables from an instrument into a single metric.
Additionally, it’s necessary to understand that the user’s experience is not a property of a digital product or user interface. An app doesn’t have a UX. Rather, the experience “happens” in the individual user’s head as a reaction to their interaction with a digital product [3]. Hence, the only way to directly observe UX would be to look into the user’s head using an EEG or similar—and even if you had the possibility, that would be pretty complicated. In any other case, the next best alternative is asking them about it. In contrast, Web analytics data (and metrics like CR and AOV [1]) are relatively “far away” from what happens in the user’s head. They’re collected directly on a website or in an app, aggregated over many users—even though there is no such thing as an “average user”—and influenced by myriad other non-UX factors [1].
So, do we always have to ask users to fill out a questionnaire if we want to reliably measure UX and can’t make use of analytics data at all? The answer to both is “not necessarily,” but first things first.
Plenty of work has already been put into instruments for measuring UX, and this remains an ongoing topic in the human-computer interaction research community (which strongly suggests this is not a trivial matter). Three examples of scientifically well-founded instruments are AttrakDiff (http://attrakdiff.de/index-en.html), UEQ (https://www.ueq-online.org), and meCUE (http://mecue.de/english/home.html), with the first two currently better-known and more widespread. The first research article on AttrakDiff was published as early as 2003. All three also define composite indicators for different aspects of UX. For instance, AttrakDiff provides one composite indicator for hedonic and pragmatic quality each.
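To make the idea of a composite indicator concrete, here is a minimal sketch in Python of how item-level questionnaire answers might be averaged into scale-level scores. The item names, the grouping into two scales, and the seven-point response format are illustrative assumptions for this example only, not the official scoring procedure of AttrakDiff, UEQ, or meCUE.

from statistics import mean

# One respondent's answers on a hypothetical seven-point semantic differential
# (1 = negative pole, 7 = positive pole). Item names are invented for illustration.
responses = {
    "complicated_simple": 6,
    "unpredictable_predictable": 5,
    "impractical_practical": 6,
    "tacky_stylish": 4,
    "dull_captivating": 5,
    "unimaginative_creative": 4,
}

# Illustrative grouping of observed items into two scales (aspects of the latent variable).
SCALES = {
    "pragmatic_quality": ["complicated_simple", "unpredictable_predictable", "impractical_practical"],
    "hedonic_quality": ["tacky_stylish", "dull_captivating", "unimaginative_creative"],
}

def composite_indicators(answers):
    """Average the observed item scores that belong to each scale."""
    return {scale: round(mean(answers[item] for item in items), 2)
            for scale, items in SCALES.items()}

print(composite_indicators(responses))
# {'pragmatic_quality': 5.67, 'hedonic_quality': 4.33}

In a real study, the item wording, scale assignment, and any weighting would of course follow the instrument’s published scoring instructions rather than an ad hoc average like this one.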
So, there exist proper instruments for reliably measuring UX, and you can always have users answer one of those in a controlled study or by means of a live intercept survey. But how might analytics data come into the picture? As so often, the answer lies in statistics. To use analytics metrics for approximating UX, you first have to determine how much individual metrics correlate with and are predictors for actual UX; otherwise you’re simply guessing. For instance, if you have a sufficient number of sessions where you record user behavior and collect answers to one of the questionnaires, you might be able to use them as training data for machine-learning models that predict either individual items or composite indicators—or both—for different user segments. Sounds easy enough, right? Collect some analytics data, collect some answers to a questionnaire, put it all in a Jupyter Notebook, and you’re good to go?
Incidentally, that’s pretty much what I investigated in my Ph.D. thesis, only I did it for usability (using an instrument named Inuit [4]) and interaction data like mouse-cursor movements and scrolling behavior. But the basic principle was the same. Based on this, I have good news and bad news for you. The good news: It worked. I was able to train models that could predict the usability of a Web interface reasonably well. The bad news: It was not all that easy.
First, I didn’t put “sufficient number” above in italics for no reason. Turns out it’s really difficult to collect enough training data when people must fill out questionnaires—and Inuit has only seven items. UEQ, for instance, has 26! Second, interaction data‒based models seem to be very sensitive to interface changes. My results suggested that, at best, one could apply models trained on one website within a narrow cluster of very similarly structured websites, but not beyond, while also having different models per user type. Colleagues and I have, however, already started looking into these problems [5].
In conclusion, because UX is something so elusive, if you want to properly measure it, there’s no way around using a scientifically well-founded instrument. Although often applied as supposed “UX measures” in industry, off-the-shelf analytics data are in themselves not suited for this. Simply speaking, this is because 1) there’s a significant distance between the interface where they are collected and the user’s head, where UX “happens,” and 2) they are influenced by too many non-UX factors to lend themselves to meaningful manual analysis. Yet, under certain conditions, it is possible to predict UX from analytics data, if we combine them with answers to a proper UX instrument and use all of that to train, for example, regression or machine-learning models. In the latter case, you can use methods like SHAP values to find out how each analytics metric affects a model’s UX prediction. And if there are just a few strong predictors, it might even be possible to take a step back and work with simple equal-weight models, as Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein describe in their book Noise.
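As one illustration of that pipeline, the sketch below trains a simple regression model on per-session analytics features to predict a questionnaire-based composite UX score and compares it with an equal-weight baseline of the kind described in Noise. The feature names, the synthetic data, and the model choice are assumptions made for this example; they do not reproduce the models from my thesis, and a real setup would pair many real sessions with answers to a validated instrument. On a fitted model, particularly a nonlinear one, a library such as SHAP could then be used to inspect how much each analytics metric contributes to the predictions.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sessions = 400

# Hypothetical per-session analytics features for sessions that also answered the
# UX questionnaire: time on task (seconds), error clicks, and scroll depth (0-1).
X = np.column_stack([
    rng.normal(45, 15, n_sessions),
    rng.poisson(3, n_sessions),
    rng.normal(0.6, 0.2, n_sessions),
])
# Synthetic stand-in for the composite UX score (roughly 0-100) from the instrument.
y = 80 - 0.2 * X[:, 0] - 3.0 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 5, n_sessions)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learned-weight model: regression from analytics features to the UX score.
model = Ridge().fit(X_train, y_train)
print("Ridge R^2 on held-out sessions:", round(r2_score(y_test, model.predict(X_test)), 2))

# Equal-weight baseline: z-score each feature, flip signs so "higher is better,"
# and average them -- the kind of simple model Kahneman et al. describe.
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
signs = np.array([-1.0, -1.0, 1.0])  # assumed direction of each feature's effect on UX
equal_weight_score = ((X_test - mu) / sigma * signs).mean(axis=1)
print("Equal-weight correlation with UX score:",
      round(float(np.corrcoef(equal_weight_score, y_test)[0, 1]), 2))

Whether the learned weights buy much over the equal-weight baseline is exactly the kind of question this comparison is meant to answer before investing in a more complex model.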
What could be an ultimate solution to the practical problem of measuring UX? Probably some kind of universal model where anyone can put in their analytics data, user segments, and parameters of their digital product to get instant UX predictions. Does such a model exist? Absolutely not. Might it exist in the future? I can’t say for sure, but I’ll keep looking.
Endnotes
1. Speicher, M. Conversion rate & average order value are not UX metrics. UX Collective (Jan. 2022); https://medium.com/user-experi...
2. Wikipedia contributors. Latent variable. Wikipedia; https://en.wikipedia.org/w/ind...
3. When a large number of users agree that they’ve had pleasant experiences using an interface, researchers usually colloquially state that the interface in question “has a good UX.”
4. Speicher, M., Both, A., and Gaedke, M. Inuit: The interface usability instrument. In Design, User Experience, and Usability: Design Discourse. Springer, Cham, 2015, 256–268; https://link.springer.com/chap...
5. Bakaev, M., Speicher, M., Heil, S., and Gaedke, M. I don’t have that much data! Reusing user behavior models for websites from different domains. In International Conference on Web Engineering. Springer, Cham, 2020, 146‒162; https://link.springer.com/chap...
Maximilian Speicher
Maximilian Speicher is a computer scientist, designer, researcher, and ringtennis player. Currently, he is director of product design at BestSecret and cofounder of UX consulting firm Jagow Speicher. His research interests lie primarily with novel ways to do digital design, usability evaluation, augmented and virtual reality, and sustainable design.
Using technology to improve communication in panels
Authors:
Jonathan Grudin
Posted: Fri, September 30, 2022 - 11:34:00
Panels can be fun to develop—and they can be executed much more effectively than they usually are. This spring I was on two large panels that were the best organized of any I’ve participated in. Technology was used in advance to reduce psychological uncertainties, which increased interaction and kept us focused on our message. Expectations were exceeded. Both of the panels were in person, but the method is even more promising for hybrid panels.
The first panel was organized by SIGCHI Adjunct Chair for Partnerships Susan Dray and VP of Finance Andrew Kun to discuss CHI’s 40th anniversary. Structure that one might assume would squeeze out liveliness instead promoted it. Impressed by its success, I reprised the method in another 90-minute panel at my Reed College class reunion. These are only two data points, but the underlying psychology is compelling, especially for large panels and future hybrid panels.
The 40th anniversary panel was an exceptionally diverse group of eight, plus Andrew as moderator. Our backgrounds, interests, and priorities differed substantially. Forty years earlier, one of us wasn’t yet in school and only one was involved with CHI. Yet we wanted to avoid eight 10-minute talks—we wanted to engage with one another.
Andrew’s solution was to address psychological barriers to effective time management that I had never noticed. Panelists like to talk. The trick: Make it possible for panelists to avoid saying more than they really want to.
A large panel should minimize moderator control onstage to give panelists time to get their points across, but if panelists are undirected, meandering and a lack of coherence are inevitable. Andrew directed the panel, but he did so in advance, which greatly reduced the cognitive effort required of panelists at the event.
Before the event
Weeks in advance, Andrew shared a Google Doc that asked panelists to draft text for four sections:
- A bio
- A two-minute opening statement or “provocation”
- One or two questions from each of us directed to another panelist (encouraged to ensure that each panelist was asked at least one question)
- A one-minute concluding summary.
Two virtual meeting deadlines ensured that we completed drafts in a timely fashion. Everything could be revised up to the event, but Andrew could also sequence opening statements early to create a coherent topic flow. Seeing one another’s drafts and the presentation sequence enabled us to keep statements to the same length, reduce redundancy, share terminology, and shape questions and summaries.
Seeing a question that would come our way in advance, we could organize a response, but responses were not shared. They were fresh to other panelists at the event, and anyone could contribute thoughts if the initial respondent didn’t cover them.
At the event
We had crafted crisp bios that Andrew read to introduce us. Our opening statements were practiced, fluent, and within the allotted two minutes.
The key innovation was the handling of the Q&A. Andrew had carefully thought through the questions and prepared a sequence of invitations. For example, “Daria, do you have a question for another panelist?”
The crucial point: We knew that Andrew was always ready to solicit the next question. Think of your experiences on a panel. Panelists are often unsure about the best way to continue and keep the discussion going. This leads to fuzziness or discontinuities. Three scenarios:
- I ask a question and receive a satisfactory response. If I have nothing to add, I’m in a bind. Not knowing whether someone else will jump in if I stay silent, I often politely respond or ask an unnecessary follow-up question to keep the conversation going. In our event, I could pause and glance at Andrew. If no one immediately spoke, he moved on: “Tamara, do you have a question for another panelist?”
- There is a pause in the discussion. Has the current topic been covered adequately? I haven’t spoken yet. Should I break the silence, even if what I will add is mostly to agree or digress? At our event, there was no pressure to do this. Andrew was ready to break the silence and move on to the next cogent question.
- A topic is exhausted. It’s clearly time to move on. I have a very different topic, but someone else may have a more relevant continuation. Should I jump in? In our panel, we could leave graceful transitions to Andrew, knowing that our question would get its turn. Andrew was following the discussion and knew the range of questions remaining.
I felt that a cognitive load had been lifted—it was great! The navigation of conversational conventions adds many small efforts that compete with focusing on what is said and the overall topic. There were no space-filler comments as we engaged with one another.
I’d initially thought that although structure might be necessary for an eight-person panel, it would sound scripted and diminish spontaneity. That wasn’t the case. We added follow-up questions and comments after one of us responded. The conversation was brisk and focused on things we cared about. Little cognitive work was required for conversation management.
A replication
For my 50th college reunion six weeks later in Portland, Oregon, five people who spent careers in tech formed a panel to discuss how tech had evolved, our roles, and what we recommend younger people think about. I introduced Andrew’s structure. With five panelists, we doubled time allowances and mixed our prepared questions with less predictable audience participation. It went well. People go to college reunions more to hang out and socialize than to attend lectures and panels, but we drew a large crowd who stayed throughout.
Natural for hybrid panels
Efficient conversation management is critical for large panels. This approach also offers benefits for virtual or hybrid panels. Hybrid planning is by necessity online—which means there’s no opportunity to get together for breakfast the day before—so coauthoring a structured document and holding a couple of preliminary meetings is a natural fit. More significantly, navigating conversation transitions is more challenging when panelists have less awareness of one another’s body language. Hybrid panels may gravitate toward larger sizes, and cultural diversity could require bridging conversation styles.
This structure requires more preparation, but the preparation was distributed over time; getting a better sense of other panelists’ contributions reduced some of the effort; and the collaboration was enjoyable. It was worth it.
Jonathan Grudin
Jonathan Grudin has been active in CHI and CSCW since each was founded. He has written about the history of HCI and challenges inherent in the field’s trajectory, the focus of a course given at CHI 2022. He is a member of the CHI Academy and an ACM Fellow.
[email protected]
An American Zen Buddhist’s reflections on HCI research and design for faith-based communities
Authors:
Cori Faklaris
Posted: Tue, August 16, 2022 - 4:29:00
Computer science seems in opposition to Zen Buddhism, a spiritual practice best described (if it must be) as “without reliance on words or letters, directly pointing to the heart of humanity” [1]. Yet with so much of today’s human experience bound up in computing, some words, and pixels and bits, to address their overlap seem necessary. My aim in sharing the following reflections is to set forth the historical and modern faith context for a secular wellness practice that we design for—meditation—and to model the statements of positionality and reflexivity that I feel are essential for research in such personal and cultural domains.
Zen Buddhism
Zen’s origins were first documented during China’s Tang dynasty in the 7th century CE. At that time, Buddhism had already spread from Nepal, the birthplace of historical founder Siddhartha Gautama, or “the Buddha,” throughout neighboring countries in Asia for more than 1,000 years. The Indian monk Bodhidharma is credited with introducing China to the dhyana practice of stillness and contemplation. Dhyana (a Sanskrit word) predates the Buddha and is commonly described as his vehicle for achieving enlightenment, or the transcendence of his limited human existence. The renewed focus on dhyana was a reaction to the older branch of Buddhism known as Theravada, the “way of the fathers” [1]. The chief text of the newer Mahayana branch of Buddhism is the Heart Sutra, the English translation of which fits on one page. Zen takes this minimalist approach even further, proclaiming the superiority of empirical knowledge gained through dhyana—renamed ch’an in Middle Chinese—over scriptural learning or formal religious observance. A famous poem of the Tang dynasty sets forth this formulation for Zen [1]:
A special transmission outside the scriptures
Without reliance on words or letters
Directly pointing to the heart of humanity
Seeing into one's own nature.
Today, the largest Zen communities remain in China and Japan (the origin of the word zen), but the practice of Zen has spread throughout the world. Like other Buddhists, who number in total 488 million worldwide [2], they venerate the “Triple Treasure” of small-b buddha (inherent enlightenment-nature), dharma (the teachings and practices), and sangha (their faith communities). Zen practitioners also particularly value upaya, or “skillful means” (the ability of an enlightened being to tailor a teaching to a particular audience or student for maximum effectiveness) and mindfulness (in Zen, the continuous, clear awareness of the totality of the present moment). Through seated meditation, alternated with practices such as chanting, bowing, and contemplative walking, a Zen Buddhist aspires to a state of mindfulness that will facilitate their own perception of buddha-nature and help them express this enlightenment in daily life, especially for the benefit of others. To check the validity of their meditation experiences, Zen practitioners are urged to consult with a teacher in an established lineage who is certified to guide others in enlightenment. Among a teacher’s “skillful means” are stories or riddles known as koans (Japanese), gong-ans (Chinese), or kung-ans (Korean). Such consultations will help Zen practitioners achieve a “before-thinking,” other-centered orientation and avoid self-centered fallacies—for example, “wanting enlightenment is a big mistake” [1].
Personal experiences and observations
My own experiences with Zen are Western. I began in my teenage years, when I bought a secondhand copy of D.T. Suzuki’s Essays in Zen Buddhism and was intrigued by his discussions of satori, the Japanese word for enlightenment. I had already liked what I had heard about Buddhism during a unit on world religions at my (Catholic) grade school. However, it wasn’t until I moved to the U.S. state of Indiana that I was able to connect with an in-person group, practicing in the Kwan Um School of Zen (KUSZ) in the lineage of Korean Zen Master Seung Sahn. I began sitting with the Indianapolis Zen Center sangha once or twice a month as my schedule permitted: 30 minutes of seated meditation bookended by a 20-minute prelude of chanting and a 10-minute epilogue of a reading and announcements. I progressed to sitting weekend retreats and to taking precepts (like Christian baptism, this signifies formally joining the faith). Eventually, I studied for and became a KUSZ dharma teacher—qualified to explain subjects such as meditation forms and the history of Zen, but not to guide people to enlightenment. In KUSZ, the teachers who are qualified to guide others to enlightenment are called Ji Do Poep Sa Nim (JDPSN, for "dharma master") or Soen Sa Nim (Zen master). I have studied with both types of “enlightenment” teachers at the Indianapolis Zen Center and with a group in Pittsburgh, PA, while helping as a dharma teacher.
As a Zen teacher, I do not take a binary view of computing as good/not good or useful/not useful. The “middle way” is to acknowledge that it is a dharma aid in some contexts and a distraction in others. Below are some examples.
Computing as obstacle to Zen practice
My advice to beginners is to turn off their smartphones completely. This is because a buzz or ding is liable to take meditators out of the moment, and beginners often will struggle to refocus. For my in-person group, I model another best practice by taking out my phone or smartwatch, silencing them, and turning them face-down on my meditation cushion, so that I cannot see the flash of a notification. I prefer to use such manual safeguards for attention rather than the “Do Not Disturb” settings, because enacting the exercise of putting away our digital helpers is an important signal to our bodies and minds that what we are doing is important and different from the everyday flow of our distracted lives. In the same vein, I recommend use of a battery-operated analog clock over a smart device for timing seated meditation, because it will not tempt you into checking messages.
In the world of Covid-19, much of our group Zen practice has joined others online. Now, it is no longer possible to physically remove ourselves from our Internet-connected devices. I am grateful to be able to see and hear my fellow practitioners even at a distance, but I miss having the break from my busy digital life and from the allure of its distractions. The “Do Not Disturb” settings help, to a point. Sitting in front of my MacBook, however, I catch myself touching my mouse and calling up screens whenever I experience a fleeting thought about, say, the status of a project. Meditating from home also means interruptions from family members, pets, or Internet outages. I confess that I do not have enough “dharma energy” to avoid breaking my stillness in response to my cat waving her tail in my face!
Going forward, this type of Zen Buddhism will benefit from computing research and design to solve problems of distraction and focus similar to those faced by people working from home or by “digital nomads,” connecting to their customers or clients via the Internet away from an office. I would love to flip a switch inside my home environment and be free from all ability to access Netflix or Slack while I hunker down on either a research paper or a kung-an. Even better if the “switch” is a timer, so that I do not forget to turn off my “Do Not Disturb,” or a learned routine of my home network, so that it is context-aware and picks up on the signals that I am ready to concentrate. I use Siri now to set a meditation timer by voice, although the screen and keyboard are still nearby, and it would be better for my ability to stay in concentration if “she” could turn off everything at the same time and then turn it back on again after the timer ends.
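To make the wished-for “switch” concrete, here is a minimal sketch of a meditation timer that silences a home network and then restores it when the sitting ends. The HomeNetwork class and its set_do_not_disturb method are hypothetical stand-ins for whatever home-automation service one actually uses, not a real API.

```python
# A minimal sketch, assuming a hypothetical home-automation client; the class
# and method names below are illustrative and do not come from a real library.
import time

class HomeNetwork:
    """Hypothetical stand-in for a context-aware home-automation service."""
    def set_do_not_disturb(self, enabled: bool) -> None:
        print(f"Do Not Disturb {'on' if enabled else 'off'}")

def meditation_timer(network: HomeNetwork, minutes: int = 30) -> None:
    """Silence the home network for one sitting period, then restore it."""
    network.set_do_not_disturb(True)       # cut off streaming, chat, and notifications
    try:
        time.sleep(minutes * 60)           # the sitting period itself
    finally:
        network.set_do_not_disturb(False)  # nothing stays muted after the timer ends

# meditation_timer(HomeNetwork(), minutes=30)
```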
However, like the meditators in Markum and Toyama [3], I am wary of letting technology intrude too far or replace in-person experiences. I doubt that technology will ever make it unnecessary to visit a Zen center or monastery for the sustained concentration required for intensive practice. My Pittsburgh group has returned to offering a weekly in-person (and masked) practice so that people can get a break from remote meetings and reap the benefits of in-person group meditation. I look forward to the day when we can begin traveling to other temples and learning in situ about others’ spiritual practices, perhaps with the assistance of interactive displays or augmented reality overlays.
Computing as support to Zen practice
Is reading an obstacle? After all, “words and letters” are considered a hindrance in Zen tradition. However, reading is often the first step undertaken by someone who wants to try meditation and/or to learn more about Zen Buddhism. As Bell has noted [4], the internet has been enormously helpful for spreading Zen knowledge and for connecting seekers with faith communities. My sanghas have made use of the same computing affordances as other interest groups: websites, online groups, platforms for event discovery, and secure no-contact payments.
I will always prefer in-person Zen practice. But, like the online group members in Katie Derthick’s work [5], I now can join a Zen meditation session or a retreat from anywhere in the world. I can take part in a distant reading group or a book club (we have those!). I can receive an interview via video and audio from a variety of Zen teachers. For those who don’t want that group experience, a variety of apps can guide them in contemplating peace or following their breaths. Headspace even dims the screen so that you can use it to wind down and prepare for sleep.
Going forward, the main item on my wish list is better audio support for remote Zen practice. We have experienced glitches in teacher interviews where one person trips over the other person’s statements, adding to the problem of not being present to pick up on nonverbal cues to turn-taking such as angling back or tilting forward. Worse, audio problems have almost killed our group chanting. This is unfortunate because, in my tradition’s Zen practice, chanting is essential for aligning participation and building an energy within the group that supports its focus. Participants cannot stay in sync—the farther from the source, the more obvious the transmission delays. Our Zoom apps also struggle to figure out which voices to prioritize, instead of blending every audio source into a unified output. For now, the workaround is for everyone to mute and only listen to the leader’s chanting. We need apps like JamKazam or Jamulus, which were designed for musicians to play together online and at a distance. Such an app will need to integrate with our existing remote meetings and be usable by anyone.
Suggested best practices for computing researchers
Religion is as sensitive a topic as it is central to the human experience. From my N of 1, I suggest that computing researchers would do well to consider their positionality and biography with regard to this subject before embarking on faith-minded research [6,7]. Clarification of our personal experiences—how we were raised and how we have directed our adult lives with regard to religion—will make explicit our social, cultural, and historical position with regard to the faith domain. Reflexivity requires time, but reading, discussing, and thinking will help us to identify what assumptions we bring to the project. Once articulated, our preexisting assumptions will be less likely to warp our research or to stymie our openness to new ideas. (In my case, I sat and thought about whether I have a bias toward adding technology to any faith domain, regardless of whether it is truly needed. I also challenged myself as to whether I assume that adding technology will lead to only negative downstream effects for a religious community.)
For the conduct of this research, I suggest three ethical pledges that will reduce the potential for exploiting participants: adherent-centeredness, getting close-up, and considering relationship ethics. Researchers should prioritize the faith population’s needs, preferences, and values, and incorporate them to the extent possible: “Nothing about us, without us.” Careful, respectful qualitative work such as Wyche et al. [8] follows Genevieve Bell’s prescription to use techniques informed by anthropology, focusing on the particulars of place, location, and critical reflexivity [9]. Researchers should make use of practices such as participant observation that foster empathy and consider layering different participants’ accounts, rather than aggregating them into a majority narrative [6]. And they should recognize that such research will involve leveraging existing relationships and fostering new ones. Discuss issues of privacy and confidentiality upfront, for example, that it may not be possible to de-identify anyone [6]. Share work and ask for responses and comments. In publications, alter specific personal or topic details to protect participants’ privacy, security, and safety.
These suggestions may sound like standard operating procedure for some qualitative researchers in human-centered computing. But many of us are trained in a positivist orientation, in which reason and logic are prioritized. We will benefit from having these or similar principles explicitly articulated for our consideration and commitment, just as a Zen master benefits from reciting the temple rules about not borrowing people’s shoes and coats. We all need help staying mindful.
Endnotes
1. Sahn, S. The Compass of Zen. Shambhala Publications, 1997.
2. Buddhists. Pew Research Center’s Religion & Public Life Project. Dec. 18, 2012; https://www.pewforum.org/2012/12/18/global-religious-landscape-buddhist/
3. Markum, R.B. and Toyama, K. Digital technology, meditative and contemplative practices, and transcendent experiences. Proc. of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020, 1–14; https://doi.org/10.1145/3313831.3376356
4. Bell, G. Auspicious computing? IEEE Internet Comput. 8, 2 (Mar. 2004), 83–85; https://doi.org/10.1109/MIC.2004.1273490
5. Derthick, K. Understanding meditation and technology use. CHI ’14 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2014, 2275–2280; https://doi.org/10.1145/2559206.2581368
6. Darwin Holmes, A.G. Researcher positionality—A consideration of its influence and place in qualitative research—A new researcher guide. Shanlax Int. J. Educ. 8, 4 (Sep. 2020), 1–10; https://eric.ed.gov/?id=EJ1268044
7. England, K.V.L. Getting personal: Reflexivity, positionality, and feminist research. The Professional Geographer 46, 1 (1994); https://www.tandfonline.com/doi/abs/10.1111/j.0033-0124.1994.00080.x
8. Wyche, S.P., Hayes, G.R., Harvel, L.D., and Grinter, R.E. Technology in spiritual formation: An exploratory study of computer mediated religious communications. Proc. of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work. ACM, New York, 2006, 199–208; https://doi.org/10.1145/1180875.1180908
9. Bell, G. No more SMS from Jesus: Ubicomp, religion and techno-spiritual practices. In UbiComp 2006: Ubiquitous Computing (Lecture Notes in Computer Science). Springer, Berlin, Heidelberg, 141–158; https://doi.org/10.1007/11853565_9
Cori Faklaris
Cori Faklaris is a doctoral candidate in human-computer interaction at Carnegie Mellon University in Pittsburgh, PA. She researches the social-psychological factors of cybersecurity and other protective behaviors. She also is a dharma teacher in the Kwan Um School of Zen.
[email protected]
Designing for religiosity: Extracting technology design principles from religious teachings
Authors:
Derek L. Hansen,
Amanda L. Hughes,
Xinru Page
Posted: Thu, August 04, 2022 - 3:16:00
Religious beliefs have a profound influence on billions of people across the globe, affecting nearly every aspect of their lives, including the use of technology. While there is a continuous rise in atheism, the majority of people in many countries still believe in a deity, self-identify with a religion, and regularly participate in religious practices such as prayer. For example, in the U.S., 69 percent self-identify with a religion, 66 percent consider religion very important (41 percent) or somewhat important (25 percent) to their life, and 67 percent pray daily (45 percent) or weekly/monthly (22 percent) [1]. For many people, religious beliefs and teachings frame every aspect of their life, influencing behaviors related to diet, social relationships, dress and grooming, sexual practices, mourning for the dead, raising children, and financial decisions, among others.
It is no surprise, then, that religions have much to say about the use of technologies, such as the Internet, social media, and mobile phones. Yet design guidance is mostly absent on how to design technology in ways that support various religious values and beliefs. While philosophers, sociologists, and humanities scholars have studied the intersection of technology and religion, relatively few studies have examined religion and technology from a design and HCI perspective [2]. This is unfortunate, since religious traditions often seek to transform the lives of their adherents and the world for the better. Such inclinations can be highly compatible with core HCI values, which often focus on the betterment of the world through the novel use of technology.
Few HCI researchers make time to look through the lens of religious teachings at the technologies that surround us [2]. Thus, we don’t fully appreciate basic questions related to religious teachings and technology. How central a role does technology play in religious teachings? What stances do religions take on the appropriate or inappropriate use of technologies? How do religions frame the discussion around technology, given that many of their teachings are based on ancient texts written by those with vastly different technologies? What role does religious doctrine play in informing religious practice around technologies? How do these answers differ for different religious traditions?
Religious values are integral to many people’s lives and should be considered a key value for the HCI community to integrate into technology design. All physical and digital artifacts convey and enforce certain values, whether they are purposefully designed to do so or not. Thus, value clashes can occur when the affordances of the technology are not aligned with the values of the system’s users. In fact, a system that is designed from the perspective of one group may impose the values of that group on other target users of the system. For example, the assumption that a mobile phone belongs to one individual, and thus that a unique phone number can be required for each person’s account with an online service, may be fair in many contexts. However, it causes issues for countries, settings, or religious contexts where a mobile phone is a shared object between a married couple, family, or even extended community, creating an account setup roadblock for anyone who does not have their own phone number. Technology infrastructures meant to support people with many different values should account for this diversity of values and validate assumptions about their users.
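As one illustration of validating that assumption rather than baking it in, the sketch below lets a phone number be shared across a household during account setup. Every name, field, and number here is invented for the example and is not drawn from any real service.

```python
# A toy sketch of account setup that does not assume one phone number per person.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    phone: str                        # may be shared by a couple, family, or community
    member_label: str | None = None   # distinguishes people who share one phone

accounts: list[Account] = []

def create_account(name: str, phone: str, member_label: str | None = None) -> Account:
    # Require only that the (phone, member_label) pair be unique, so a shared
    # handset is not a roadblock to signing up.
    if any(a.phone == phone and a.member_label == member_label for a in accounts):
        raise ValueError("An account already exists for this person on this phone")
    account = Account(name, phone, member_label)
    accounts.append(account)
    return account

# e.g., two members of one household sharing a single (invented) phone number:
# create_account("Amina", "+00-000-0000001", member_label="spouse-1")
# create_account("Bilal", "+00-000-0000001", member_label="spouse-2")
```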
A very limited number of HCI studies have investigated how technology practices complement or hinder religious practices via empirical studies (we expand on a number of these in the next section). Existing HCI research has also focused on understanding how religious practices can inform design in nonreligious contexts. For example, several studies apply strategies employed by religious organizations to enhance commitment, build community, and/or motivate behavior change in nonreligious organizations. Ames et al. explored how religious ideological practices can serve as a useful lens in understanding how nonreligious engineering and design organizations affirm membership and a shared vision [3]. Similarly, Amy Jo Kim discusses the use of rituals (a concept inspired by religions) in building commitment to online communities [4].
While this prior work gives us initial insights into the interplay of religiosity and technology in practice, there is an element that is missing and yet key to understanding values we might aspire to incorporate into our technologies. While studying how people use technologies in practice gives us a descriptive understanding, there is also a prescriptive element of religion that is vital to understand. In many religions, there are a set of values that believers aspire to, and they hope to engage in practices that reflect those core values. Thus, we need to not only understand how technologies are used in practice, but also the guiding principles that a given population might be influenced by or aspire to. This also helps us to identify new opportunities for supporting religiosity.
How religious doctrine and teachings can inform design
Many religions provide specific prescriptive guidance to their adherents on the use of social media, the Internet, or other modern-day technologies that are of interest to the HCI community. Such guidance is often based upon doctrines, or the foundational beliefs, principles, and teachings of a religion. These come in the form of ancient scriptural texts, commentaries, sermons, pronouncements by religious leaders, official publications of religious organizations (e.g., magazines), and a variety of other resources. While it is tempting to consider only the practical, prescriptive advice about technology use given to members of a religion, it is essential to also understand the religious doctrines and teachings that underly such advice. Designers can benefit in several ways from learning the core doctrinal beliefs of a religion in order to better design for its members.
First, doctrines can inspire the use of religious metaphors that tap into believers’ deepest spiritual desires and insights. For example, Pope Francis used the metaphor from St. Paul’s teachings in the New Testament that views Christians as “members of the one body whose head is Christ” when discussing the importance of using social media to build up others and not tear them down [5]. This metaphor stresses the value of a community (i.e., a body) having different body parts (e.g., eyes, ears, legs, hands), each of which serves a different purpose for the benefit of the whole. By invoking this metaphor, the Pope encourages Catholics to see the Church as “a network woven together by Eucharistic communion [a central Catholic ritual], where unity is based not on ‘likes,’ but on the truth, on the ‘Amen,’ by which each one clings to the Body of Christ, and welcomes others.” Religious metaphors can summon strong emotions and spiritual insights in believers, providing motivation to use technology in certain ways (e.g., being kind in online discourse).
Second, understanding religious doctrine and teachings can help designers solve problems and achieve goals that are religious in nature. For example, Woodruff et al. studied the home automation practices of American Orthodox Jewish families [6]. Jewish laws generally prohibit manually turning electronic devices off or on during the Sabbath. These families had long designed and automated systems within their home that would perform mundane tasks (e.g., turning lights on/off with timers or sensors) to abide by this law. Without a deep understanding of Jewish teachings and practices, designing to meet their needs would not be feasible. Another example comes from the Church of Jesus Christ of Latter-Day Saints, whose doctrine encourages members to be eternally “sealed” (i.e., connected) to their family, including deceased ancestors through vicarious ordinances performed in their Temples. This focus has led them to invest significant resources in developing genealogical tools that help members identify their ancestors, such as FamilySearch, which has a collaboratively generated family tree with over 1.38 billion names and had over 200 million site visits in 2021. Additional tools allow church members to track and manage the vicarious work that members perform in the Temples. Thus, an understanding of the core doctrines related to Temple work has been essential to the development of unique tools that support such work.
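The Sabbath automation that Woodruff et al. describe can be thought of as a schedule fixed before the Sabbath begins, so that no one has to operate a switch during it. The sketch below only illustrates that idea; the devices, times, and scheduling logic are invented and are not taken from the study.

```python
# A minimal sketch of a pre-programmed Sabbath schedule: devices follow times
# set in advance, so no one touches a switch during the Sabbath itself.
from datetime import time

sabbath_schedule = {
    "living_room_lights": [(time(18, 30), "on"), (time(23, 0), "off")],
    "kitchen_lights":     [(time(18, 0), "on"), (time(21, 30), "off")],
}

def actions_due(now: time) -> list[tuple[str, str]]:
    """Return the (device, action) pairs whose scheduled time has arrived."""
    return [
        (device, action)
        for device, steps in sabbath_schedule.items()
        for scheduled, action in steps
        if scheduled.hour == now.hour and scheduled.minute == now.minute
    ]

# A controller would call actions_due() each minute and drive the devices,
# e.g., actions_due(time(18, 30)) -> [("living_room_lights", "on")]
```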
Third, understanding religious teachings can help designers modify existing technologies to better meet the needs of believers. For example, HCI researchers have recognized that technologies supporting financial services within Muslim communities must work within a religious framework where charging interest is forbidden [7]. Thus, micro-lending websites that rely on interest must be modified in fundamental ways to be viable solutions in Muslim communities. Many religions have their own dating sites, helping people find singles with a similar religious background. In some cases, these include specific features that differ from general dating websites. For example, Shaadi is a popular Hindi “matrimonial” website with 35 million users that focuses on finding a spouse rather than hookups.
Finally, understanding religious doctrines can help designers identify core values that can then be used to design solutions that are in harmony with and reflect a believer’s core beliefs. Susan Wyche and Rebecca Grinter [8] examined how American Protestant Christians use ICT in their home for religious purposes. Their findings suggest many opportunities for designing systems that acknowledge, honor, and support religious values in a domestic setting, such as creating digital calendars and displays that recognize and change their content based on significant religious holidays or milestones. In 2005, an Israeli wireless company launched a mobile phone specifically designed for the ultra-Orthodox Jewish community in Israel [9]. The phones were modified to disable Internet, text messaging, and video and voice messaging, after religious authorities and community members became concerned that these services could infiltrate the community with unacceptable content.
There are many open questions about how to best use religious doctrines and teachings in design. How can design methods be modified to incorporate and prioritize religious teachings? How can products be evaluated based on religious teachings? How can technology help achieve uniquely religious goals, for which technology has not historically been used? How can systems be designed that meet the needs of diverse religious groups, given their different teachings? What values can be derived from religious teachings that can be incorporated into design?
In asking these questions that seek to understand the prescriptive aspect of religious values in the context of technology use, we can better understand which values people may want represented in technologies. We also hope that this work will serve as a call to action for the HCI community to engage more holistically with religious values. A set of values that are such an integral part of so many peoples’ lives should be acknowledged and given priority; HCI should support people’s priorities and values. We call on the HCI community to take steps toward understanding the interplay of religious values and technology to be able to create truly value-sensitive technologies.
Endnotes
1. Smith. G.A. About three-in-ten U.S. adults are now religiously unaffiliated. Pew Research Center’s Religion & Public Life Project. Dec. 14, 2021; https://www.pewforum.org/2021/12/14/about-three-in-ten-u-s-adults-are-now-religiously-unaffiliated/
2. Buie, E. and Blythe, M. Spirituality: There’s an app for that! (but not a lot of research). CHI’13 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2013, 2315–2324; https://doi.org/10.1145/2468356.2468754
3. Ames, M.G., Rosner, D.K., and Erickson, I. Worship, faith, and evangelism: Religion as an ideological lens for engineering worlds. Proc. of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, New York, 2015, 69–81; https://doi.org/10.1145/2675133.2675282
4. Kim, A.J. Community Building on the Web: Secret Strategies for Successful Online Communities. Peachpit Press, 2006.
5. Pope Francis. Message of His Holiness Pope Francis for the 53rd World Communications Day. 2019; https://www.vatican.va/content/francesco/en/messages/communications/documents/papa-francesco_20190124_messaggio-comunicazioni-sociali.html
6. Woodruff, A., Augustin, S., and Foucault, B. Sabbath day home automation: “It’s like mixing technology and religion.” Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2007, 527–536; https://doi.org/10.1145/1240624.1240710
7. Mustafa, M. et al. IslamicHCI: Designing with and within Muslim populations. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2020, 1–8; https://doi.org/10.1145/3334480.3375151
8. Wyche, S.P. and Grinter, R.E. Extraordinary computing: Religion as a lens for reconsidering the home. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2009, 749–758; https://doi.org/10.1145/1518701.1518817
9. Campbell, H. ‘What hath God wrought?’ Considering how religious communities culture (or Kosher) the cell phone. Continuum 21, 2 (2007), 191–203; https://doi.org/10.1080/10304310701269040
Derek L. Hansen
Derek L. Hansen is a professor in the information technology program at Brigham Young University’s School of Technology. His research focuses on understanding and designing social technologies, tools, and games for the public good. He has received over $2 million in funding to develop and evaluate novel technical interventions, games, and simulations.
[email protected]
Amanda L. Hughes
Amanda L. Hughes is an associate professor of information technology in Brigham Young University’s School of Technology. Her current work investigates crisis informatics and the use of information and communication technology (ICT) during crises and mass emergencies, with particular attention to how social media affect emergency response organizations.
[email protected]
Xinru Page
Xinru Page works in the field of human-computer interaction researching privacy, social media, technology adoption, and values in design. Her research has been funded by the NSF, Facebook, Disney Research, Samsung, and Yahoo! Labs. She has also worked in the information risk industry leading interaction design and as a product manager.
[email protected]
Faith informatics: Supporting development of systems of meaning-making with technology
Authors:
Michael Hoefer,
Stephen Voida,
Robert Mitchell
Posted: Mon, July 25, 2022 - 3:23:37
In seeking to apply HCI to faith, religion, and spirituality, we turn to existing work in theology and psychology—in particular, work that studies the development of faith in individuals and communities. James Fowler is a pioneer in faith development, having developed a stage-based model after conducting over 300 interviews with individuals from a variety of religions [1]. In his work, Fowler suggests that faith is universal to all humans, and he provides an understanding of faith that we believe would serve the HCI community in grounding further efforts to integrate HCI practice with spirituality, faith, and religion.
Fowler describes how most religious organizations fall short in supporting the development of faith in their constituents, as they fall prey to a modal developmental level—the most commonly occurring level (the mode) of development for adults in a given community (empirically, stage 3 of 6, explained in depth below). In other words, the developmental level that is most common in the adults of a community shapes the culture and normative goals of individuals growing up in that community, which Fowler calls an “effective limit on the ongoing process of growth in faith” [1].
HCI researchers are good at understanding how individuals form and act upon mental models. That is one of our aims when we try to develop technology: to understand people and what might help them. In helping individuals with their systems of meaning-making (faith), we might want to do something very similar. This, we suggest, is one of the prime challenges and opportunities for HCI researchers when engaging with faith and spirituality: to seek to understand and support individuals through their faith-ing. Here, we outline future research directions for what we are calling faith informatics: the study of systems that facilitate growth in individuals’ systems of meaning-making.
One direction for faith informatics could involve structured reflection, visualization of the self, and social connection. One goal might be to reify the existence of all stages of faith, beyond the normative, modal developmental levels present in many faith-based communities. By developing systems that support faith development regardless of religious (or nonreligious) orientation, computing and HCI hold the promise of supporting faith development and maintenance, highlighting convergence across religious traditions and supporting universal mature faith.
Faith as fundamental to the human experience
The word faith is often associated only with “belief,” in that having faith is just a matter of what one believes to be true. Wilfred Cantwell Smith suggests an alternative: that faith is not dependent on belief [2]. In this vein, faith is neither tentative nor provisional, while belief is both.
Fowler draws on this and suggests an alternative conceptualization that views faith and religion as separate, but reciprocal [1]. Fowler views religion as tradition that is “selectively renewed,” as it is “evoking and shaping” the faith of new generations. Faith is an aspect of the individual, while religion represents the tradition of the culture in which the individual grows and develops. Faith is an aspect of every human life, an “orientation of the total person,” and, as a verb, an “active mode of being and committing.” Faith is considered to be “the finding of and being found by meaning” [3]. This focus and widening of faith to include the association with meaning-making is not only inclusive of many religious and spiritual traditions, but is also vital in nonreligious traditions such as Alcoholics Anonymous’s 12-Step Program. For HCI, this conceptualization resonates with existing research developments devoted to understanding the role of computing in finding and supporting meaning-making [4].
Specifically, Fowler describes three contents of faith that every human subconsciously holds in mind as they go about their life: 1) centers of value, which are whatever we see as having the greatest meaning in our lives, 2) images of power, which are the processes and institutions that sustain individuals throughout life, and 3) master stories, narratives we believe and live that facilitate our interpretation of the lived experience [1].
If faith is a universal human condition, as suggested by Fowler and Smith, then HCI researchers have much more purchase to engage with research questions related to faith, as that work would therefore have the potential to be applied to all of humanity.
Stages of faith development
Fowler describes a series of six stages of faith that are loosely aligned with age during the beginning of life, but development can be arrested in any stage [1]. Similar to Robert Kegan’s forms of mind [5], each progressive stage of faith is represented by a changing relationship between subject and object as an individual starts to consider a larger system as part of the “self” with regard to meaning-making. An overview of each stage suggests a diversity of use-cases for faith informatics.
Stage 1: Intuitive-projective faith. According to Fowler’s interviews, intuitive-projective faith is found in 7.8 percent of the population and is the dominant form found in children ages three to seven. Intuitive-projective faith is marked by fluid thoughts and fantasy, and is the first stage of self-awareness. Fowler explains that transitioning out of stage one involves the acquisition of concrete operational thinking and an ability to distinguish fantasy from reality.
Stage 2: Mythic-literal faith. Mythic-literal faith is estimated to be found in 11.7 percent of the population, and largely in children ages seven to 12. Mythic-literal faith involves the individual starting to internalize the “stories, beliefs, observances that symbolize belonging to his or her community.” Individuals in this stage create literal representations of centers of power, which often involves anthropomorphizing cosmic actors. Individuals in this stage may take religious texts literally as a foundation for meaning-making. While Fowler’s work did find a handful of adults in stage 2, many would transition out of this stage in their teenage years, as they experienced multiple stories that clashed and required reflection to integrate.
Stage 3: Synthetic-conventional faith. Synthetic-conventional faith is the most commonly found stage of faith in Fowler’s sample, and appears in 40.4 percent of the population. Transitioning into this stage often occurs around puberty, and is associated with a growing connection to social groups other than the family, perhaps with different collective narratives and centers of value. Synthetic-conventional faith systems attempt to “provide a coherent orientation in the midst of that more complex and diverse range of involvements...synthesize values and information...[and] provide a basis for identity and outlook.”
This stage is highly aligned with Kegan’s view of the self-socialized mind [5]; both are intended to deal with meaning-making involving multiple social relations and belonging to various social groups. In this stage, individuals create their personal myth: “the myth of one's own becoming in identity and faith, incorporating one's past and anticipated future in an image of the ultimate environment.” Transitioning to the next stage often involves a breakdown in the coherence of the meaning-making system, such as a clash with religious or social authority or moving to a new environment (i.e., leaving home).
Stage 4: Individuative-reflective faith. Developing an individuative-reflective faith system (32.9 percent of the population) requires an “interruption of reliance on external sources of authority.” Individuals in this stage are characterized by separation from previously assumed value systems, and the “emergence of an executive ego.” Fowler notes that some individuals may separate from previous value systems, but still rely on some form of authority for meaning-making, which can arrest faith development in the transition to individuative-reflective faith. This stage involves taking responsibility for one’s “own commitments, lifestyle, beliefs, and attitudes” and would be aligned with Kegan’s “self-authoring” form of mind.
Stage 5: Conjunctive faith. Stage five, according to Fowler, is difficult to describe simply. Conjunctive faith was found in only 7 percent of the population, and not until mid-life (ages 30 to 40). Conjunctive faith involves a deeper acceptance of the self, and integrating “suppressed or unrecognized” aspects into the self, a kind of “reclaiming and reworking of one’s past.” This stage is similar to Kegan’s self-transforming mind, as both involve the embrace of paradox and advanced meta-cognition about the self. Individuals in this stage must live divided between “an untransformed world” and a “transforming vision and loyalties,” and this disconnect can lead individuals into developing rare universalizing faith systems of meaning-making.
Stage 6: Universalizing faith. Stage six represents a normative “image of mature faith” that was found to be present in only one interview participant [1]. Fowler describes these individuals as:
grounded in a oneness with the power of being or God. Their visions and commitments seem to free them for a passionate yet detached spending of the self in love. Such persons are devoted to overcoming division, oppression, and violence, and live in effective anticipatory response to an inbreaking commonwealth of love and justice, the reality of an inbreaking kingdom of God [3].
Developing a universalizing faith is seen as the “completion of a process of decentering from the self” [3], described by:
taking the perspectives of others...to the point where persons best described [by] the Universalizing stage have completed that process of decentering from self. You could say that they have identified with or they have come to participate in the perspective of God. They begin to see and value through God rather than from the self...their community is universal in extent [3,4].
The goal of faith informatics, as a direction of inquiry, is to better understand the systems (social, ecological, information, or otherwise) that support faith development and how they can be improved. A tool for research, and perhaps an intervention in itself, may be an information system that allows an individual to gather faith-related data about themselves and then visualize and interact with these representations of the self.
A design paradigm for faith informatics
Figure 1 highlights the components of a potential faith informatics system based on a smartphone or other computing-based application (the “system”). The theory behind this system design is drawn from research interviews used by both Fowler in Stages of Faith and Kegan in the subject-object interview [5]. Both researchers relied on what we call the “research interview” to determine the stages of faith (or, in Kegan's case, “forms of mind”) present in the individual.
Figure 1. A depiction of a framework for designing faith informatics (FI) systems to support the elicitation and reflective revision of mental models of one's faith. The FI system prompts the user to systematically reflect on themself in the style of Fowler's faith development interviews [1]. This elicitation is combined with objective data about the individual's life and presented to the individual in a visualization. This visualization is then used to reciprocally influence the mental model the individual holds of their faith, facilitating development into further stages of faith.
The interview methodology relies on trained interviewers to communicate with and assess subjects using semi-structured interviews. Questions that might be asked in the interview include (selected from [1]):
- Thinking about yourself at present. What gives your life meaning? What makes life worth living for you?
- At present, what relationships seem most important for your life?
- Have you experienced losses, crises or suffering that have changed or “colored” your life in special ways?
- What experiences have affirmed or disturbed your sense of meaning?
- In what way do your beliefs and values find expression in your life?
- When life seems most discouraging and hopeless, what holds you up or renews your hope?
- What is your image (or idea) of mature faith?
One critical insight is that this methodology, by asking these deep questions, appears to support development by itself. Jennifer Garvey Berger notes that individuals who undergo the subject-object interview “changed the way they were thinking about things in their lives” and wanted to “come back for another interview” [6]. Some individuals reported making significant life changes after the interview, such as leaving an unhealthy relationship [6]. Fowler, using his faith development interview, also notes that interviewees tend to say things along the lines of “I never get to talk about these kind of things” [1].
HCI researchers and practitioners can help make these “developmental interview” experiences more available to the general public. The subject-object interview is costly, both in the time required to conduct it (60 to 75 minutes per subject) and in the training required of the researcher. One potential avenue for faith informatics, therefore, is to attempt to recreate the essential conditions of developmental interviews, allowing for the deep reflection that occurs during the interviews.
For example, mobile or Web apps could attempt to recreate the conditions of a conversation that promote self-reflection and capture the outcomes of the reflection. This approach would be aligned with the notion of reflective informatics, which seeks to support reflective practices through technology [7]. Research has already highlighted the promising effects of this kind of digital intervention. The “self-authoring” application, aligned with Kegan's forms of mind (particularly the self-authoring form), has been shown to improve academic performance, reduce gender and ethnic minority gaps, and improve general student outcomes through “future authoring.”
In Figure 1, the hypothetical faith informatics system provides prompts to individuals about aspects of their life related to meaning-making. These prompts may draw from both Kegan’s and Fowler's developmental interviews. In this particular diagram, we use the example from Fowler of eliciting the “life review,” which seeks to break the life history into episodes of meaning [1], and allows for reflection on each stage in life. This is, in essence, a large-scale version of the day reconstruction method—a kind of life reconstruction method—where individuals can break their life up into discrete episodes that mark turning points in their development of meaning-making systems. We envision a role for interactive systems in providing an interface for eliciting the construction of these episodes and any associated metadata (e.g., real-world context).
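A minimal sketch of what such an elicitation might look like in code follows. The LifeEpisode structure and the prompts are illustrative guesses loosely inspired by the interview questions above, not an implementation of Fowler's instrument.

```python
# A sketch of a "life reconstruction" elicitation; the structure and prompts
# are illustrative assumptions, not a validated research instrument.
from dataclasses import dataclass, field

@dataclass
class LifeEpisode:
    """One chapter of a life history, as elicited from the user."""
    title: str                 # e.g., "leaving home" or "first job"
    start_year: int
    end_year: int
    turning_point: str         # what shifted in how the person made meaning
    reflections: list[str] = field(default_factory=list)

def elicit_episode() -> LifeEpisode:
    """Prompt the user for one episode, in the spirit of a life review."""
    title = input("Name this chapter of your life: ")
    start_year = int(input("Roughly what year did it begin? "))
    end_year = int(input("Roughly what year did it end? "))
    turning_point = input("What changed in how you made meaning during it? ")
    return LifeEpisode(title, start_year, end_year, turning_point)

def elicit_life_review(n_episodes: int = 5) -> list[LifeEpisode]:
    """Collect a small set of episodes that together sketch a life review."""
    return [elicit_episode() for _ in range(n_episodes)]
```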
This is one of the key benefits afforded by an informatics system: the possibility of incorporating real-world, objective data in these reflective dialogues. As an individual’s system of meaning-making would be used for both answering prompts and governing an individual's behavior, the system can play a role in helping an individual compare their own perception of their meaning-making structures with how they (objectively) live their life. For example, life episodes could be colored by social contacts elicited from text messaging or email data, or behavioral activity derived from calendar entries or financial activities.
Faith informatics would therefore also connect with the field of personal visual analytics, where an individual’s data (coming directly from the experiences described earlier) is visualized into an external representation that can enable the individual to confront their mental models of themself and of their life, potentially resulting in reciprocal feedback loops to prompt insights about and support faith development. The design and creation of these visualizations is an open challenge and may benefit from co-design activities and think-aloud visualization interaction studies.
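As a sketch of what such a personal visualization might look like, the following plots elicited episodes on a timeline and overlays a single "objective" signal (calendar entries per year). The data is invented; a real system would draw episodes from the elicitation sketched above and metadata from the user's own records.

```python
import matplotlib.pyplot as plt

# Invented example data standing in for elicited episodes and calendar metadata.
episodes = [("Childhood", 1990, 2002), ("College", 2002, 2006), ("First job", 2006, 2012)]
calendar_entries_per_year = {2003: 90, 2005: 160, 2008: 340, 2011: 410}

fig, ax = plt.subplots(figsize=(8, 2.5))
for title, start, end in episodes:
    ax.barh(0, end - start, left=start, height=0.4, alpha=0.4)   # episode span
    ax.text((start + end) / 2, 0, title, ha="center", va="center")
ax.set_yticks([])
ax.set_xlabel("Year")

# Overlay the objective signal on a second axis so the viewer can compare
# their narrative of each episode with how they actually spent their time.
ax2 = ax.twinx()
years = sorted(calendar_entries_per_year)
ax2.plot(years, [calendar_entries_per_year[y] for y in years], marker="o")
ax2.set_ylabel("Calendar entries per year")

ax.set_title("Life episodes with calendar activity overlaid (toy data)")
plt.show()
```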
We might expect that an individual’s current stage of faith would inform design decisions. For example, an individual transitioning into stage 5 (conjunctive faith) may benefit from exploring objective data about their life history as they “reclaim and rework” [1] their past.
Facilitating social connections via faith informatics
Another promising avenue for faith informatics is that of fostering social connections that span both religious and non-religious faith traditions. While Fowler's stages of faith are largely based on Western religious traditions, it is possible (and perhaps likely) that similar developmental structures of meaning-making exist across religions and traditional beliefs worldwide, given the focus on structure of faith instead of contents of faith [1]. As such, faith informatics could facilitate the connection of individuals based on stage of development, rather than relying on religious communities that may suffer from the limits of particular modal developmental levels. Such a system could help to connect individuals in similar stages of faith across different cultures, facilitating connection and development, perhaps via the sharing of narratives and experiences.
Challenges and future work in faith informatics
Faith informatics is ripe with challenges and opportunities for the HCI community. One significant challenge is in the visualization of representations of faith that facilitate systematic reflection about an individual’s meaning-making structures. While we can draw upon Fowler’s research interview methodology to understand the types of prompts that may facilitate individual faith development, this kind of data (to our knowledge) has not previously been explored with contemporary visualization techniques. We expect that advances in personal visual analytics are necessary to support the effective visualization of self-reported faith data in a way that promotes development.
In addition, the HCI community must confront the undigitizable nature of faith and spirituality, especially with regard to users in the later stages of faith. Fowler only encountered one individual in stage 6, the universalizing faith stage. It may be difficult to attempt to reify late stages of faith in a digital system, especially when the presence of these stages is limited to few individuals. Future work could include in-depth interviews with individuals in these advanced stages of faith to better understand their life trajectory, in hopes of better understanding and sharing how they reached these particular forms of meaning-making.
Endnotes
1. Fowler, J.W. Stages of Faith: The Psychology of Human Development and the Quest for Meaning. HarperCollins, New York, NY, 1981.
2. Smith, W.C. Faith and Belief: The Difference Between Them. OneWorld Publications, Oxford, U.K. 1998.
3. Fowler, J.W. Weaving the New Creation: Stages of Faith and the Public Church. Wipf and Stock Publishers, Eugene, OR, 2001.
4. Mekler, E.D. and Hornbæk, K. A framework for the experience of meaning in human-computer interaction. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 2019, Article 225; https://doi.org/10.1145/3290605.3300455
5. Kegan, R. In Over Our Heads: The Mental Demands of Modern Life. Harvard Univ. Press, Cambridge, MA, 1998.
6. Berger, J.G. Using the subject-object interview to promote and assess self-authorship. Development and Assessment of Self-Authorship: Exploring the Concept Across Cultures. M.B. Baxter Magolda, E.G. Creamer, and P.S. Meszaros, eds. Stylus Publishing, Sterling, VA, 2010, 245–263.
7. Baumer, E.P.S. Reflective informatics: Conceptual dimensions for designing technologies of reflection. Proc. of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, New York, NY, 2015, 585–594; https://doi.org/10.1145/2702123.2702234
Michael Hoefer
Michael Hoefer is a third-year Ph.D. student studying computer and cognitive science at the University of Colorado Boulder. He is generally interested in studying social systems at various scales, and developing informatics systems that serve as problem solving interventions at each level. His application areas include dreaming, sustainability, and systematic well-being.
[email protected]
Stephen Voida
Stephen Voida is an assistant professor and founding faculty of the Department of Information Science at CU Boulder. He directs the Too Much Information (TMI) research group, where he and his students study personal information management, personal and group informatics systems, health informatics technologies, and ubiquitous computing.
[email protected]
Robert Mitchell
Robert D. Mitchell is a retired pastor in the Desert Southwest Conference of the United Methodist Church and an Oblate in Saint Brigid of Kildare Monastery. He holds a Ph.D. in education and formation from the Claremont School of Theology.
[email protected]
Stream switching: What UX, Zoom, VR, and conflicting truths have in common
Authors:
Stephen Gilbert
Posted: Wed, July 20, 2022 - 12:08:28
I use the term stream switching to refer to people simultaneously processing multiple streams of input information, each of which has its own context and background knowledge. This definition sounds similar to multitasking, but multitasking research usually focuses on a single individual, divided attention, and working memory capacity. Stream switching focuses on multiple people’s interactions and their mental models of each other. Below I offer examples and argue that stream switching merits further research.
Stream switching
Stream switching draws on 1) the economic concept of switching costs and 2) the psychological concepts of perspective-taking and theory of mind. Economic switching costs are the financial, mental, and time-based costs to switch between products. The costs of switching from an Android to an Apple phone, for example, go far beyond the price tag and include learning how to use the phone’s software and much personal information transfer. Analogously, stream switching includes not only the basic attentional cost of focusing on a different input stream, but also the additional cognitive load of updating one’s mental model of the stream source. Is it accurate? Is it trustworthy? These questions relate to the psychological concepts of perspective-taking (Can you imagine others’ perspectives?) and theory of mind (Can you understand how different others’ knowledge and beliefs might be from yours?).
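One way to make that decomposition concrete is a toy cost model in which the total cost of switching to a stream is the attentional cost plus the cost of updating one's mental model of the stream's source. This is an illustrative formalization only, not a validated measure; the variables and numbers are invented.

```python
# A toy formalization of stream-switching cost: a fixed attentional cost plus
# the work of updating one's mental model of the stream's source.
def stream_switch_cost(attention_cost: float,
                       model_staleness: float,
                       source_familiarity: float) -> float:
    """Return a notional cost of switching to a stream.

    attention_cost     -- fixed cost of redirecting focus
    model_staleness    -- how out of date the mental model of the source is (0-1)
    source_familiarity -- how well the source is known (0-1); unfamiliar sources
                          make every model update more expensive
    """
    model_update_cost = model_staleness * (1.0 - source_familiarity)
    return attention_cost + model_update_cost

# e.g., switching to a close teammate mid-meeting vs. an unfamiliar stakeholder:
# stream_switch_cost(0.2, 0.3, 0.9)  -> 0.23
# stream_switch_cost(0.2, 0.9, 0.1)  -> 1.01
```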
Stream switching includes the more specific practice of code-switching. Code-switching was originally the linguistic practice of switching between languages depending on your context, and now refers more generally to the switching of identities depending on who’s around you. Someone might behave one way at home with family and another way in the outside world. Minorities often become quite skilled at code-switching since they have daily practice working within a different majority culture. I would hypothesize they are also highly skilled at stream switching.
The idea that people vary in their stream-switching ability is one of the reasons stream switching deserves more research. Can this critical skill be practiced or trained? We already know that some people are better than others at empathizing, understanding what other people are thinking and feeling. Research has correlated these individual differences with factors including reading more fiction, role-playing and reflection, and the general practice of thinking more about one’s thinking. Perhaps this research could be extended to develop methods both to measure one’s stream-switching ability and to improve it.
User experience (UX) implications
In user-centered design, we try to empathize with our users. We create personas to reduce our cognitive load of doing so. Here is Maria, the young professional with two children; what are her jobs to be done when she opens our app? A product designer with better stream-switching skills will truly be able to step into Maria’s world and build an accurate mental model of her goals and expectations in order to design a product that fits perfectly. That product then allows Maria to accomplish her goals more quickly with fewer errors.
On the other hand, low-usability software requires the user to switch streams mid-use to imagine the intentions of the software designer or to model the inner workings of the software itself. You may have experienced an accounting system that was well-designed for accountants but not for you. While the accountants can easily track expenses, you can’t figure out how much money you have. Or consider a social media system. If you post certain content, is it clear who will see it? Will it lead your friends to see related ads? Or in a corporate calendaring system, if you invite people to a meeting, is it clear who will see the invite list? Being able to answer these questions requires highly usable software. Norman warned about the gulf of evaluation; today’s sociotechnical context requires evaluation of not only the system, but also of other users’ experiences with it.
Frustrating usability situations increase stream switching, which burdens our cognitive capacity. When you have to switch streams to figure out where to click next, you have less attention to devote to the streams you were already juggling, for example, home life versus work life, or your supervisor’s mindset versus your teammates’. Bad interaction designs effectively steal our attention.
Asymmetry, monitoring, and conflicting versions of truth
Many systems present asymmetric information to users, i.e., collaborators receive different information or levels of access, as in the calendaring example. It’s not always a problem; a person presenting slides should be able to see their presenter notes while the audience should not. But when you’re Zooming and ask, “Can you hear me?” or “Can everyone see my screen?”— that’s problematic asymmetry. You don’t have good cues about what other people are experiencing, so you have to ask explicitly. Having this information enables you to update your mental model of your colleagues’ experience in the meeting and enables you to stream switch more smoothly between conveying your points and monitoring your colleagues’ understanding.
Asymmetric stream switching also appears if you use a virtual reality headset. When you enter the virtual world, you partially blind yourself to cues in the real world. To avoid tripping over a nearby chair, you need to switch streams constantly to monitor both the virtual world and the real world. If you carefully arrange the furniture in your room beforehand, creating a safe space for virtual exploration, then you can reduce the mental workload required to monitor that stream and focus more on the virtual stream with less overhead.
Analogously, as supported by research on stereotype threat and Goffman’s idea of roles that people play in life, when people have a metaphorical “safe space” for collaboration with others despite diverse backgrounds, they can focus less on monitoring the stream of how they’re being perceived (“Will I say something offensive?”) and more on the stream of the collaborative task at hand.
Finally, consider the number of people who have difficulty speaking with each other because of their dramatically different beliefs about what is true. This is a stream-switching problem with high asymmetries. It has become more difficult than ever before to imagine what it’s like to be on the other side, because the affordances to do so barely exist. Only by investing significant time in creating alternative social media accounts and filling them with clicks could someone start to experience the other’s perspective. The cognitive load of stepping into the shoes of the other person has become so high that it’s easier to discount them as foolish or deceived. It’s easier to stream switch and model the perspective of someone similar to yourself. In part, that’s why similar people are drawn to one another. But there is a very high likelihood that many people we work with, as well as our customers, are not similar to us. Research on the “contact hypothesis” shows that talking with people who are different than ourselves enables us to understand their perspectives with more empathy, even if it’s difficult. More than ever before, we need to increase our stream-switching abilities, which will enable us to understand others.
Thanks to Joanne Marshall and Kaitlyn Ouverson for thoughtful feedback on these ideas.
Posted in:
on Wed, July 20, 2022 - 12:08:28
Stephen Gilbert
Stephen B. Gilbert is associate director of Iowa State University's Virtual Reality Application Center and director of its human-computer interaction graduate program, as well as an associate professor in industrial and manufacturing systems engineering. His research interests focus on technology to advance cognition, human-autonomy teaming, and XR usability.
[email protected]
View All Stephen Gilbert's Posts
Unavailability: Food for thought from Protestant theology
Authors:
Sara Wolf,
Simon Luthe,
Ilona Nord,
Jörn Hurtienne
Posted: Tue, July 19, 2022 - 9:45:16
The past two years of living in pandemic times have accelerated the spread of technology into all areas of life. This was also evident in the context of religious communities and churches, where the number of applications and users has increased enormously. Not only individual communities but also the major church institutions had to expand their presence in the digital sphere [1]. As a result, interaction with technology in religious and spiritual contexts is now more widespread than even a few years ago. Understanding how technology and interaction design influence experiences in such contexts is more important than ever. However, this increased need for knowledge is not yet visible in HCI publications. We also believe that through more research in these areas, HCI can gain new perspectives on technology use, design, and evaluation more generally. Similar to how work on religious objects in households inspired a broader call for extraordinary computing [2], we would like to introduce a theme that emerged from our work with Protestant believers, and that can bring new impetus to HCI: unavailability.
We derive this claim from a continued cooperation between HCI researchers and Protestant theologians. Together, we have been working on several projects that aim at designing technology for religious communication in the form of rituals, blessings, and online worship services. In the following, we want to demonstrate that integrating aspects of faith, religion, and spirituality in HCI might be valuable and lend HCI new perspectives.
Unavailability
The development of current technology is about making everything available at any time: Vast amounts of music and films are available through media streaming services, our loved ones are available through video (chat), and worship services are available online. In most of the Western world, many of our desires can be fulfilled immediately using technology, which focuses on making everything visible, accessible, controllable, and usable [3]. However, this ubiquitous availability might not always be valuable. Sometimes the opposite, unavailability, might be the better choice. Unavailability can highlight what one values most about what is available and can evoke the experience of resonance, specialness, or meaning. In the following, we will present two examples that demonstrate how we came across the theme of unavailability in our research.
The first example originates in our work on blessings. In a design probe study with Protestant believers, we tried to understand what blessing experiences are, and where or when they happen in believers’ everyday lives. Participants described that the feeling of being blessed can occur anytime, anywhere, but is most intense when it is unexpected and surprising (i.e., unavailable). One participant shared the following story when asked to describe an experience of being blessed:
I had a conversation with a friend who told me about her happiness as a mother, how it was to hold her newborn baby in her arms for the first time, how much love she was surrounded by, and how proud she was. And that was very strange for me because she had to deliver the child dead. And, um, I didn’t expect that. And at that moment, well, that was so.... so that overwhelmed me…. So she knew her child would be born dead, she knew she would have a silent birth, and yet there was a lot of pride and happiness and love, and she is still proud to be a mother, even though her child was born dead. And I just find that..."Wow"! So my rational brain said, "Well, that cannot be for real, that doesn’t fit," and I was also afraid of the conversation with her. Um, and then I was, so that’s what got me… So that was surprising, yes, or maybe also what I hoped for. So sometimes it [the blessing] is also a fulfilled hope.
Not all examples of blessing experiences were as drastic as the one described here. However, this story demonstrates the aspects of unexpectedness and surprise very well. The participant did not expect that the conversation with her friend could take place in a positive atmosphere—she was even afraid of the conversation. And then everything turned out quite differently than expected. She could not have worked out this twist or influenced the situation in this direction with certainty—it simply came as it came. The unavailability was also evident in other examples within the same study. Many participants described that they regularly bless each other, although they can never be sure whether the blessings are effective—it is beyond their control. For our participants, Protestant believers, this control was attributed to God. The aspect of unavailability generated friction and excitement in people’s experiences: It opened up room for hope, speculation, and surprise—for example, when something absolutely unexpected and positive happens.
Our second example on unavailability shows the opposite, namely what happens when the unavailable becomes available. In another project, we investigated the experiences of online worship services during the pandemic [4]. We accompanied Protestant believers while they participated in online worship services and tried to understand how specific design elements lead to specific experiences. One prominent element that influenced the experiences dramatically was availability and ease of access. Traditional, in-person worship services are not an everyday occurrence for believers, but rather something special; believers usually invest some effort to mark the worship service as distinct from everyday life and routines—for example, dressing up, going to a special place, and reserving the time to attend. In contrast, online worship services are available anytime and anywhere, which invites specific modes of usage (e.g., watching on the side).
One couple reported a situation that shows the tensions such constant availability can create. One Sunday, the couple woke up later than usual and were in the middle of their breakfast when they realized that the worship service was about to start. Invited by the flexible and accessible design of current online worship services, they watched the service on a laptop at their breakfast table. Although this was practical, they quickly became annoyed with themselves. They realized that they had turned what formerly had been an extraordinary experience into something ordinary. Availability changed the way worship services were experienced. The online worship service turned into something everyday and less essential. Constant availability may be convenient and allow for flexible access. However, convenience and flexibility are nothing compared with the cherished unavailability of worship services that take place only at a specified place and time and are unavailable in between.
So far, unavailability seems to be a concept that is given little consideration in HCI, and that even opposes current trends of making everything available. The two examples show how unavailability affects experiences. We think it is worth looking at the concept more closely, as it can reveal new perspectives on technology design. In the following, we will turn to sociology and Protestant theology in order to learn more about the concept of unavailability. Theology has long been concerned with unavailability, and sociology shows how the concept of unavailability is essential for human experiences beyond the context of religion, faith, and spirituality.
The German sociologist Hartmut Rosa has studied unavailability (German: Unverfügbarkeit; he translates it as “uncontrollability”) in his works [3,5]. Rosa describes our time as a time of acceleration, suggesting the concept of resonance as a possible solution [5]. For Rosa, resonance is a type of world relationship formed by affection and emotion, intrinsic interest, and the expectation of self-efficacy, in which subject and world connect and at the same time transform each other. That is, the nature of the world relationship is to be understood as reciprocal. Not only is the relationship defined between subjects and objects, but they also define a new relationship to the world [5]. The experience of resonance is opposite to the experience of alienation, a world relationship in which the subject and the world are indifferent or hostile (repulsive) to each other and thus inwardly disconnected from each other—a relationship of "relationshiplessness" [5]. For Rosa, resonance is the human motivation that guides all actions. A central, constitutive aspect to resonant experiences is unavailability. Four conditions for resonant experiences must coincide [3]:
- Touch (something touches me)
- A response to the touch
- Transformation: a change of world-relationship
- Unavailability.
Even if conditions one to three are fulfilled, unavailability is necessary for a successful, resonating experience. The individual experience of the world can be neither planned nor accumulated. This perspective highlights a fundamental problem with the current focus on making everything available through ubiquitous technology: It is not the availability that renders experiences successful, resonating, and thereby valued but rather their specific quality. And part of what makes up their quality is that people are not in control of everything and cannot make the world available down to the last detail. It is precisely in this that Rosa sees a necessity. Space must be given to the concept of unavailability because only in this way are resonating experiences possible [3].
Regarding Christian religion, the necessity of the unavailable for a successful world experience as described by Rosa becomes particularly clear. All objects of the Christian religion, such as God, Christ, the Holy Spirit, grace, living a fulfilled life, and blessings, cannot be made controllable to human beings; they cannot be commanded. Even in an increasingly secularized world, the objects of religion and their unavailability remain something that fascinates and attracts people—albeit no longer only in the forms of the established religious communities. This search for meaning is both an attempt to make the unavailable available and the realization that ultimately unavailability is constitutive for religious experiences. It is precisely this unavailability that makes dealing with the objects of religion interesting to people. If God, the Holy Spirit, or Christ were made available, religion would become uninteresting and lose relevance for the resonant experiences as illustrated above.
The theme of unavailability prompts HCI to reconsider current trends of making everything available. Recognizing that unavailability might be an essential experiential quality, HCI is challenged to engage in the topic. How can unavailability be experienced when interacting with technologies? What ways, if any, are there to design for the unavailable?
To design or not to design?
Although there seem to be no simple, singular answers to the above questions, we would like to present different perspectives to stimulate discussion within HCI.
Unavailability is a key topic in the Christian religion and tradition. As such, the Christian tradition is constantly confronted with unavailability and tries to create conditions to make experiences with the unavailable more probable. Such efforts can, potentially, be considered as design. To this end, the Christian tradition offers (or designs) activities such as rituals, liturgy, and experiential education that serve as supportive measures, knowing that the unavailable is ultimately unavailable.
On the question of designing for unavailability, Rosa grounds his argument in resonant experiences. He argues that resonant experiences can arise only with counterparts (e.g., human beings, objects, nature, art) that are not entirely available—meaning visible, accessible, controllable, and usable [3]. Following this, Rosa doubts that technology can be designed at all to become a resonant counterpart: unavailability is uncontrollability and thus “non-engineerability.” He expects that translating unavailability to, for example, unpredictability in technology design might lead to frustration rather than resonant experiences. However, Rosa also identifies manufactured objects that evoke resonant experiences, such as poems or art. He expects a poem to be a resonant counterpart as long as one has not fully grasped and processed it, as long as it continues to occupy one and still seems to hide something [3].
With the above arguments and examples in mind, we turned to HCI searching for artifacts and design strategies that might correspond to unavailability. An artifact that shows an essential aspect of the integration of unavailability and technologies, namely the type of activities that a technology enables, is the Drift Table [6]. The Drift Table is a coffee table that displays slowly drifting aerial photography and is operated by the distribution of weight on its surface. One could argue that this interaction represents a kind of “control” of the table. However, what is most related to unavailability is not the exact interaction, but rather what the table encourages us to do: The Drift Table was designed not to perform specific tasks or efficiently achieve goals but rather to support ludic activities, “activities motivated by curiosity, exploration, and reflection rather than externally defined tasks” [6]. It is exactly in this way that we think it might correspond to unavailability: A technology that is supposed to have an inherent unavailability must invite us to explore it rather than work through it—similar to a poem or a work of art. If it is at all possible to design for unavailability, then the first step must be a change of perspective on the what of technology: away from technology as efficiency-enhancing task support, toward technology as a stimulus for open exploration, reflection, and curiosity.
Three design strategies that can serve as useful starting points for thinking about design in the context of unavailability. Left: the Drift Table as an example of design for ludic activities [6]. Middle: the Projected Realities bench as an example of ambiguous design [7]. Right: ThanatoFenestra as an example of ephemeral interfaces [8]. Sketches: Vyjayanthi Janakiraman.
Apart from this concrete example, we also found two design strategies within the more speculative and exploratory areas of HCI, more concerned with the how of interaction, that seem suitable when designing for unavailability. The first strategy is to design for ambiguity [7]. Gaver et al. [7] suggest that ambiguity “frees users to react to designs with skepticism or belief, appropriating systems into their own lives through their interpretations.” One way to integrate ambiguity into a design is to distort displayed information to stimulate curiosity and thought. The second strategy to design for unavailability might be ephemeral interfaces [8]. Ephemeral interfaces consist of at least one element that lasts only for a limited time and typically incorporate materials invoking multisensory perceptions such as water, fire, or plants [8]. Ephemerality is a design strategy seldom used in popular, widespread technologies. However, it is an element everyone recognizes from the living, natural world that corresponds well with unavailability.
In conclusion, we hope to have demonstrated the value of turning to contexts such as spirituality, faith, and religion to gain new perspectives for HCI and hope to stimulate discussions on making everything (un)available. We conclude with one last quote for reflection:
We should…speak of God. But we are human beings and as such cannot speak of God. We are to know both, what we ought to and what we cannot do, and precisely in this way give glory to God. –translated from Barth (1929)
Acknowledgments
We would like to thank all of our participants who shared their experiences with us. May you be blessed! Parts of the research have been funded by the Federal Ministry of Education and Research (BMBF), project CoTeach (project number 01JA2020).
Endnotes
1. Nord, I. and Adam, O. Churches online in times of corona (CONTOC): First results. In Revisiting the Distanced Church. OAKTrust Digital Repository, 2021, 77–86; https://doi.org/10.21423/revisitingthechurch
2. Wyche, S.P. and Grinter, R.E. Extraordinary computing: Religion as a lens for reconsidering the home. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2009, 749–758; https://doi.org/10.1145/1518701.1518817
3. Rosa, H. The Uncontrollability of the World. Polity, 2020.
4. Wolf, S., Mörike, F., Luthe, S., Nord, I., and Hurtienne, J. Spirituality at the breakfast table: Experiences of Christian online worship services. CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New York, 2022; https://doi.org/10.1145/3491101.3519856
5. Rosa, H. Resonance: A Sociology of Our Relationship to the World. Polity, 2021.
6. Gaver, W.W. et al. The Drift Table: Designing for ludic engagement. CHI '04 Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2004, 885–900; https://doi.org/10.1145/985921.985947
7. Gaver, W.W., Beaver, J., and Benford, S. Ambiguity as a resource for design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2003, 233–240; https://doi.org/10.1145/642611.642653
8. Döring, T., Sylvester, A., and Schmidt, A. A design space for ephemeral user interfaces. Proc. of the 7th International Conference on Tangible, Embedded and Embodied Interaction. ACM, New York, 2013, 75–82; https://doi.org/10.1145/2460625.2460637
Posted in:
on Tue, July 19, 2022 - 9:45:16
Sara Wolf
Sara Wolf is a passionate HCI researcher working at the intersection of faith, spirituality, religion, and technology. She is currently doing her Ph.D. with a focus on technology-mediated (religious) rituals and works in the CoTeach project, and as an HCI lecturer at the Institute of Human-Computer-Media at the University of Würzburg.
[email protected]
View All Sara Wolf's Posts
Simon Luthe
Simon Luthe is a practical theologian and religious educator working on pop culture issues and at the intersection of faith and technology. He is currently doing his Ph.D. in a research project on blessing spaces in VR/AR at the University of Würzburg. He is also a vicar in the parish of Heide, Schleswig-Holstein.
[email protected]
View All Simon Luthe's Posts
Ilona Nord
Ilona Nord is professor of Protestant theology and religious education at Julius-Maximilians-Universität, Würzburg, Germany. Her research interests include religious teaching and learning, developing new methods for teacher education in a digitalized world and churches online in times of Corona.
[email protected]
View All Ilona Nord's Posts
Jörn Hurtienne
Jörn Hurtienne is a professor of psychological ergonomics at Julius-Maximilians-Universität, Würzburg, Germany. His research interests include the theory and design for intuitive use, image schemas and primary metaphors as well as user experience design for fundamental values.
[email protected]
View All Jörn Hurtienne's Posts
Breaking stereotypes: Islamic feminism and HCI
Authors:
Hawra Rabaan,
Lynn Dombrowski
Posted: Thu, July 14, 2022 - 9:28:33
As HCI matures into a richer discipline interlacing with the humanities and social sciences, there has been a growing consciousness to embrace pluralist [1] and intersectional approaches in understanding and addressing systems of oppression within computing [2]. Islamic feminism and intersectional feminism are highly complementary approaches to understanding oppression and power. While both focus on gender, each brings its own distinct attention to how gendered violence manifests. In research, Islamic feminism is a theoretical, analytic, and design lens to understand and attend to the needs of Muslim women. Beyond being useful for understanding issues that only Muslim women face, as a theoretical approach, it expands how we understand agency, sheds light on the historic and sociocultural contexts, and explores design around cherished and socially familiar values [3,4]. This article calls for expanding HCI feminist theories to account for nonsecular and non-Western contexts, and answers the following questions: 1) What is Islamic feminism? 2) Why should we as academics care about it? and 3) How can we bring it into HCI research and design?
What is Islamic feminism?
Islamic feminism scholarship, like mainstream Western feminism, centers the study of gender and of systems of power and oppression. Islamic feminism differs from mainstream feminism primarily in 1) its perception of religion and 2) its definitions of agency and resistance. While there are many dangerous tropes portraying Islam as anti–female empowerment or as patriarchal, within Islamic feminism, Islam is viewed as feminist for several key reasons. Islamic feminists see Islam as a way to promote social justice and equity; however, they are critical of orthodox interpretations of Islamic texts and cultural praxis and often position these as the sources of patriarchal beliefs [5]. Thus, Islamic feminist scholars (e.g., Amina Wadud, Husein Muhammad, Ziba Mir-Hosseini) have worked extensively, and continue to work, to produce alternative interpretations of sacred texts that challenge the patriarchal elements in Islamic jurisprudence and shift the paradigm of religious authority. For example, verse [4:34] is commonly cited by religious scholars to justify wife-beating, whereas Islamic feminist scholars interpret the verse as encouraging temporary separation and reflection. That interpretation is inferred from a holistic Quranic stance on spousal relations rather than from a verbatim reading, concluding that the verse’s focus is on respect, reconciliation, and justice and that it consequently opposes harm and violence [4].
The second dispute between mainstream feminism and Islamic feminism is how agency is defined and identified. In mainstream feminism, agency is fundamentally based on the liberal political theory’s concept of freedom, where an autonomous will is fulfilled through “universal reason,” unburdened by tradition, customs, or transcendental will. Islamic feminists view this definition of free will as limited; instead, agency is not just about a person’s capacity, but rather about how societal, economic, and political structures create and reinforce conditions, cultural norms, possibilities, and oppression around gender [6]. In Islamic feminist thought, resistance and agency are often inseparable and extend agency to include non-resistive actions. For example, a woman in an abusive relationship patiently remaining married is an autonomous choice made within layered constraints that takes time, careful consideration, and effort [4]. Another form of agency that is regularly overlooked by Western feminists is religious agency, which consists of acts grounded in religious beliefs—to adhere to a transcendental power (i.e., their God) rather than to the abusive figure or patriarchal norms. Lastly, using an Islamic feminist lens reveals more depth of the human experience and urges us to practice greater empathy and reduce harm, by acknowledging and working within the practical conditions influencing participants’ autonomy.
Why and when to use Islamic feminism?
The HCI community centers diversity, equity, and inclusion. Islamic feminism provides tools to conduct justice-oriented research and design, where participants are not “othered” or looked down upon, where we value our participants’ voices, and build upon community assets and values. As an HCI scholar, I call for us to turn inward and surface the biases we may have as researchers and designers, find and routinely use tools to help us overcome our biases, and bring about the progressive change we seek as academics and global citizens.
As the field continues to broaden its reach and impact, the problem-solving standpoint familiar to the HCI community must shift. By using an Islamic feminist approach, we strive to provide implications and design within the context of entangled political and cultural norms, rather than eradicate those norms or perceive our user as a passive victim [4].
How to use Islamic feminism in HCI
Next, we leave you with three takeaways on how to begin following an Islamic feminist approach in HCI:
- Dare not to take on a savior complex, orientalize, or deem cultures as lesser than the West. When approaching social issues that concern sensitive problems and vulnerable populations, be diligently cautious of causing more harm, whether by perpetuating stereotypes, interacting in biased ways, or superficially tackling the problem.
- View practicing agency as culturally and historically specific. It is the researcher’s job to acutely understand the practical conditions and sociohistorical processes contributing to the autonomy of participants. Such lenses are essential within all stages of research: problem formulation, analysis, implications and design, and research evaluation [7].
- Call for empowerment from within the participants’ structure, by designing around cherished, internalized, and socially familiar values when evaluating current technologies or providing implications for future inclusive designs [3]. Include norms beyond social impositions, and focus on the participants’ cultural dimensions, historical developments, and different ideas of justice.
While it goes without saying that Muslims are not a homogeneous group, portraying Islam as a violent religion or Muslim women as passively oppressed remains inexcusably prevalent in broader society, and this unfortunate stereotype has not escaped the academic realm. I have faith in our community and the future of inclusive technologies. When in doubt, connect with fellow Muslim scholars or groups such as the IslamicHCI group [8], who will be open to sharing their insights when needed. We can do better, HCI!
Endnotes
1. Bardzell, S. Feminist HCI: Taking stock and outlining an agenda for design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2010, 1301–1310.
2. Rankin, Y.A. and Thomas, J.O. Straighten up and fly right: Rethinking intersectionality in HCI research. Interactions 26, 6 (2019), 64–68.
3. Alsheikh, T., Rode, J.A., and Lindley, S.E. (Whose) Value-Sensitive design: A study of long-distance relationships in an Arabic cultural context. Proc. of the ACM 2011 Conference on Computer Supported Cooperative Work. ACM, New York, 2011, 75–84.
4. Rabaan, H., Young, A.L., and Dombrowski, L. Daughters of men: Saudi women's sociotechnical agency practices in addressing domestic abuse. Proc. of the ACM on Human-Computer Interaction 4, CSCW3 (2021), 1–31.
5. McDonald, L. Islamic feminism. Feminist Theory 9, 3 (2008), 347–354; https://doi.org/10.1177/1464700108095857
6. Mahmood, S. Feminist theory, agency, and the liberatory subject: Some reflections on the Islamic revival in Egypt. Temenos-Nordic Journal of Comparative Religion 42, 1 (2006).
7. I would like to take the opportunity to express my gratitude to the CSCW reviewers who pushed my work into maturation by providing constructive criticism and actionable feedback.
8. Members of the group can be reached at [email protected]
Posted in:
on Thu, July 14, 2022 - 9:28:33
Hawra Rabaan
Hawra Rabaan is a Ph.D. candidate in human-computer interaction at IUPUI’s School of Informatics and Computing. Rabaan’s work combines social work and HCI, examining sociotechnical practices in response to domestic violence and designing to counter domestic violence within the Muslim community through a transformative justice lens.
[email protected]
View All Hawra Rabaan's Posts
Lynn Dombrowski
Lynn Dombrowski is an associate professor in the human-centered computing department in the School of Informatics and Computing at Indiana University – Purdue University in Indianapolis. She runs the Sociotechnical Design Justice lab, where she studies, designs, and evaluates computational systems focused on social inequity issues with her students.
[email protected]
View All Lynn Dombrowski's Posts
Faith, modernity, and urban computing
Authors:
Nusrat Jahan Mim,
Syed Ishtiaque Ahmed
Posted: Fri, July 08, 2022 - 12:17:25
Cities have long been a center of attention for modernization projects, so it is no surprise that most of today’s computing ventures are choosing cities as the main avenue for demonstrating their prowess. The idea of a “smart city,” for example, has been attracting a wide range of academics and practitioners in computer science and related fields whose works concern sensor systems, smartphone applications, human-computer interaction (HCI), computer supported cooperative work (CSCW), social computing, ubiquitous computing, data science, and artificial intelligence (AI). Related to the dream of smart cities are the dreams for driverless cars to claim the city streets, drones to deliver food to city apartments, and large-scale smart displays to inform and entertain citizens.
These dreams are also being partially realized in various cities around the world, with a mix of successes and failures. In fact, it would not be an overstatement to say that computing technologies often define what cities are in today’s world. While the concentration and intensity of dazzling, cutting-edge computing is generally higher in cities in the Western world, their Global South counterparts have started to catch up. Under the government mandate of digitization, countries like Bangladesh and India, for example, are updating their cities and the lives of their citizens with new mobile applications, biometric identity, the digital gig economy, online education, and social media, to mention just a few. However, this rapid advancement of “urban” computing is often devoid of ideas of faith, religion, and spirituality; even worse, their insensitivity to such sentiments is creating an agonistic and uncomfortable situation for many citizens, especially in the Global South. A deep understanding of the historical conflicts between modernity, science and technology, and urban religiosity is required to trace the role of computing within this tension.
Let’s go back a couple of years, when Covid started to unfold in Bangladesh. Alarming images of Bangladesh shared on social media included photos of religious gatherings of faith-based communities. Here, we use a broad definition of religion that incorporates both organized religions (Islam, Hinduism, Buddhism, Christianity, etc.) and traditional and Indigenous beliefs and non-beliefs. By faith-based communities, we refer to the groups of people who participate in activities that are motivated by their religious beliefs. While faith may influence and shape every action taken by a person, for the sake of simplicity, we define religious practices by the activities that are driven primarily by religious motivations (e.g., prayers, congregations, etc.).
While the government of Bangladesh mandated “social distancing,” and there were nationwide campaigns to create awareness among people to avoid crowds, those suggestions were often not effective in practice, especially among faith-based communities. In many places in the country, including major cities, huge religious congregations kept taking place at mosques during Friday prayers, ignoring all cautionary notices. At one point, when the city authorities closed mosques and asked people to perform their prayers at home, many Muslims started to gather on their rooftops to perform their prayers (Figure 1) in jamat (groups)—the call for social distancing kept getting ignored. In mid-April 2020, thousands of people gathered at the funeral ceremony of an influential local religious leader in the Brahmanbaria district, ignoring the strict ruling against such gatherings. Then, in the month of Ramadan, the nation feared a spike in the number of infected people as a series of religious festivals, including Shab-e-Qadr, Jumatul Bida, and Eid ul Fitr, were forthcoming. Those tensions between the city and religious communities allowed us to see the politics around the formulation of modern cities and the historical struggle of the religious identities within them.
While modernity is often defined by a set of values, processes, technologies, and the time period when scientific revolution, industrialization, and capitalism rapidly expanded across the West, urban modernity is characterized by the adoption of modernist ideologies in the urban built environment and as a tool to respond to emerging social and political tensions. Predominantly concentrated on the modernity-driven urban development strategies and planning policies engendered in the West, urban modernity is also globally understood as a product of secular institutions, practices, and discourses [1]. A major portion of this scholarship was developed in the 19th and 20th century, which read cities through the lens of scientific rationality and measured a city’s performance against a scale of “industrialized progress.”
With the theorization of the Global City in the 1980s, this new paradigm of modern urbanism started to expand beyond the West and influence the spatial, political, and economic readings of modern cities worldwide. These scholarly and design practices reinforced the qualities of Western modernity as the standards of a successful, global, modern city. In the late 20th and early 21st centuries, scholars such as David Harvey, Edward Soja, and Saskia Sassen mapped urban-spatial responses around global flows of capital, technology, information, and other streams, pointing toward the failures of such modernist advancement at different scales in an urban setting. Soja, Jane Jacobs, Susan Fainstein, Leonie Sandercock, Lisa Peattie, and many others shed light on different forms of social inequalities and injustices that were introduced by modern cities. Although it was not at the center of their criticism, the Western concept of modernity has also always upheld the idea of secularization, and there has been a predominant secular spirit among modernist planners and scholars toward understanding cities.
While the cities in the Global North have often relied on their secular institutions, one cannot understand the cities of the Global South, especially South Asian and African cities, without understanding their religious spirit. However, the externalization of religion from the core discourse characterizes the development of modern South Asian and African cities of the recent past [1]. Hence, the cultural pluralities and the alternative visions of modernity that could develop in this part of the world, which have the potential to generate a dominant body of knowledge in urban studies, are often marginalized [2]. Such marginalization not only hampers the tolerance and productive public engagement of people from different religious backgrounds [3] but also limits the scope of “vernacular” development in the cities of the Global South.
Figure 1. Religious people praying on the rooftop of a building after praying at the mosque was halted. (Image courtesy of Rubayet Tanim)
As a Global Southern city with a strong religious backdrop, Dhaka, the capital of Bangladesh, is often considered a “non-modern” city. To achieve Global City aesthetics, Dhaka, like many other developing cities, attempts to abide by the “rules of modernity” that were developed in the West. Hence, urban religions often remain outside the development discourse. The presence of religiosity therefore gets limited, neglected, and often ridiculed outside spaces of worship such as mosques, temples, churches, and pagodas. For instance, when a mosque in a neighborhood starts to be treated as merely another “building,” the much-needed socioeconomic and spatial connections between the city and the religion are overlooked. The associated design conversations fail to capture how the “donation box” placed at the entrance of a mosque and the business of informal vendors standing outside the entrance are deeply integrated with the complex network of the city’s economy. They fail to understand how a low-income rickshaw-puller feels when he wants to perform his afternoon prayer but can hardly find a secure place to park his rickshaw near the mosque. Hence, the conversations fail to address the challenge of better handling the traffic during any religious events within or around the city. Since Dhaka is a Muslim-majority city, we are focusing on examples of Islamic practices here. In reality, such spatial and socioeconomic marginalizations can be traced in other religious practices as well.
Modern cities also put limits on religious rituals, and this soon escalates to a contention between the city and its religious citizens. When religious communities feel that the city is being “developed” without including their input, they start developing resistance and try to hack into various spatial arrangements of the city to make their presence visible. Keeping urban religion outside the prominent design discussions results in different infrastructure-level failures as well. For instance, severe traffic gridlock occurs in Bangladeshi cities before Eid ul Fitr, when a huge number of rural people temporarily visit to collect zakat (financial help from the rich obligated by Islam). Similar spatial complications occur during the congregational religious event Biswa Ijtema, where millions of Muslims gather to empower their Muslim identities. If the city development plans incorporated such religious festivities with due significance, Bangladeshi cities like Dhaka would have been developed in a distinctively inclusive way.
The borrowed concept of modernity, on the one hand, has limited possible design conversations with faith-based groups. On the other hand, excluding religiosity from the dominant urban design and planning decisions has created additional pressure on the city’s infrastructure. When the connections between the city and religion become parochial, it becomes quite difficult to expect thoughtful, tolerant responses from the marginalized faith-based communities during emergencies such as the Covid-19 pandemic. Hence, every now and then we observe alarming crowds at religious gatherings in South Asian cities, defying lockdowns and mandatory social distancing. These crowds are not an immediate “unthoughtful” response to the pandemic situation, but rather simply the result of a deeply rooted resistance to the city’s “secular interventions.”
This brings us to the point where we can think of urban computing in a broader perspective. The discourse of “smart cities,” in both the West and the Global South, lacks a discussion on how to incorporate religious practices. If a religious practice is essentially communal, such as the adhaan (the Muslim call to prayer), how should the city strike a balance between the individual choice of not listening and the communal choice of hearing the sound? How is the city supposed to respond to the massive inflow of people, material, and livestock during religious festivals that challenge its infrastructure? If digitization develops a physical distance between people, how does that affect religious practices? How is the liberal, nonhierarchical model of online social media affecting value-laden religious communities? More importantly, how safe are the digital spaces for religious people? These and many other questions are critically important, and yet rarely discussed in the domain of urban computing.
We identify faith, religion, and spirituality as an underexplored territory of HCI in general, and urban computing in particular. Computing’s historical alignment with science, technology, and modernity has made it difficult to capture such sentiments, which are built upon belief, respect, tradition, fear, and purity. Furthermore, computing technologies, when acting as a force of modernization, often end up silencing and marginalizing the voices of religious communities by creating a “secular” space. However, a new paradigm of HCI can be imagined that challenges this top-down modernist program by strengthening the voices of marginalized religious sentiments.
Endnotes
1. Hancock, M. and Srinivas, S. Spaces of modernity: Religion and the urban in Asia and Africa. International Journal of Urban and Regional Research 32, 3 (2008), 617–630.
2. Simone, A.M. On the worlding of African cities. African Studies Review 44, 2 (2001), 15–41.
3. Greed, C. Religion and sustainable urban planning: ‘If you can’t count it, or won’t count it, it doesn’t count’. Sustainable Development 24, 3 (2016), 154–162.
Posted in:
on Fri, July 08, 2022 - 12:17:25
Nusrat Jahan Mim
Syed Ishtiaque Ahmed
Syed Ishtiaque Ahmed is an assistant professor of computer science at the University of Toronto. For the past 12 years, he has been conducting ethnography and design research with various marginalized groups in the Global South.
[email protected]
View All Syed Ishtiaque Ahmed's Posts
Reflections on the politics of African ‘limitations’ in HCI research
Authors:
Muhammad Adamu,
Shaimaa Lazem
Posted: Tue, July 05, 2022 - 10:46:44
Last year, I read Shaimaa’s reflection on how Indigenous cultures and folk heritage may act as a steppingstone for sustainability in the absence of resources [1]. This led to extensive correspondence with her on how lessons from Saeed El Masry’s book [2] can reframe the common dualities of scarcity and sufficiency found in African perspectives on innovation in HCI. We pondered the question of how we, as researchers and practitioners, can challenge as well as celebrate the “lacks” and “gaps” connotations affixed to HCI development in Africa. This blog thereby offers a different reading of the meaning of terms such as limits and scarcity, showing how such terms entail considerable future thinking and planning efforts. Our reflection has more to do with the politics of naming and the power dynamics underpinning the internalization of such narratives in mainstream discourses, and less to do with the practices of design research in African communities where supposedly these limitations are amplified. We draw upon existing decolonial works that explore the politics of computing research within the African context [3,4].
It is our understanding that terms such as resource-challenged, resource-constrained, underserved, and underresourced are used to represent the unfortunate realities of global inequalities. Such terms in HCI discourse mirror a certain worldview of what constitutes scarcity and sufficiency in computing and are often associated with the limited adoption of modern state-of-the-art computing technologies (e.g., smartphones, AI, drones, VR). Our concern with such a way of thinking about Africa is that it unintentionally reproduces interventionist narratives that suggest that the introduction of technological innovation can bring about economic and material prosperity. The use of the term introduction here is deliberate, as it denotes how dominant social imaginaries have cemented the thinking that Africa is not a site of innovation, and that technocratic thinking and modern technologies are the only way out of its predicament. Our frustration here is that the common narrative of lacks and gaps reduces social life in African communities to a set of problems that need computational solutions, a view in which digital technologies are necessary to catch up or transition to Western ideals of progress. This way of thinking thus pushes for the politics of organizing knowledge against a backdrop of consumerist development models that are undesirable and unsustainable.
Additionally, the performativity of limits and scarcity centers certain kinds of utilities over others, thus bringing to the fore economic, infrastructural, and technological products and pushing to the periphery human capital, natural resources, and Indigenous knowledge, in which Africa is notably wealthy. Arguably, foregrounding the former repeats simplified accounts of Africa as a historical site for the extraction/production of raw materials and Africans as passive consumers of processed end products. Such accounts of African users invited us to question the extent to which the limits of the computing technologies could have been misrepresented as the limits of their users. Imagine for a moment that, instead of text, early computer interfaces relied heavily on embodied interaction and voice as the main interaction modality; that would have been a better fit with the oral traditions of some African communities [5]. As such, literacies would not have been emphasized as a limitation in the way they have been in mainstream technological discourse.
To provoke different thinking about limitations in Africa, research has shown how folk practices of food preservation were adopted in Cairo as a mechanism to plan and cope with food scarcity [2]. In some districts of old Cairo, ensuring food security involves a considerable amount of planning that must be dynamic and resilient in coping with the changes in the availability of resources (e.g., unfixed income). The number of daily meals, their timing, and the content of meals are planned so that, for instance, planning the biggest meals for dinner guarantees that the person will have enough energy to get through the next day without needing breakfast. The leftover food is used as a supplement for next-day meals and snacks, and recipes are made to keep expensive food items for a long time [2].
Another example of different thinking around limitations is the adoption of the cultural model of Igwebuike (i.e., strength in numbers) in rebuilding the community wealth of the Biafran communities of Southern Nigeria [6]. Among the Igbo tribe, shared prosperity was considered a political instrument for futuring in the aftermath of the Nigerian civil war. The Biafran civil war brought about the need for building communal solidarity in coping with the dynamic situation of hardship and tribulation—from which the Igbo apprenticeship system came about as a scalable entrepreneurship program that built upon the commonwealth of the community [6]. Through its adoption as a localized model for building business ecosystems, communities were able to identify the strength of the collective, learn from each other, and scale up instruments for rebuilding community wealth. The point raised here is that conditions of scarcity demand situated knowing that does not rely on the limitations of the past but rather on the possibilities that a differentiated thinking about the present might offer to the performance of the future.
From the two historical scenarios of future thinking in the Old Cairo and Biafran communities, one can identify the resourcefulness that accompanies a different reading of limits and scarcity, showing how communities continuously innovate new ways of balancing present living conditions with future ones. By bringing to focus the politics associated with limitation as a concept and a reality, we are seeking to problematize its current use in HCI projects taking place in Africa. Recent efforts in HCI have been critical of technology solutionism, highlighting the nuances surrounding the use of technology as well as the importance of considering contextual and cultural specificities in design (e.g., the work of Kentaro Toyama, author of the seminal book Geek Heresy: Rescuing Social Change from the Cult of Technology). We’re nonetheless concerned with how dominant HCI has framed interaction design from the Global South in relation to development, design, and context, which has led to alternative framings of African design in HCI (e.g., the postcolonial, Indigenous, and decolonial approaches to design). Following decolonial thinking, our reflection sought to elaborate on this concern by bringing to the fore the geopolitics of technology solutionism and the interventionist narrative associated with it as Western ideologies that reproduce the imaginary of primitiveness and backwardness in relation to African innovation, and where postcolonial baggage and the promise of pop development—or the development that doesn’t really develop—remain. We argue that a differentiated identity for African innovation does not require a distinction between the unacknowledged past and the unfortunate present but rather recognizes how the relations between the past and the present give rise to the future.
The message from this short reflection should not be understood as shying away from inequalities and masking struggles resulting from socioeconomic limitations. Rather, we invite ourselves and the HCI community to rethink the “incomplete” realities we might portray and perpetuate when we only focus on and communicate lacks and gaps to inspire research encounters for and with communities from Africa. As African scholars, our hope, by sharing personal reflections, is to encourage HCI researchers to rethink the terms used in research framing and dissemination in light of colonial histories and the realities of globalization. Our call to action is to encourage HCI researchers, ourselves included, to reflect on the limit of the “limitations” described in HCI research, and to exercise humility in articulating the potential mismatch between users and the technology introduced to them, bearing in mind the contexts the technology was designed in and deployed to, and the worldviews that might be represented.
Endnotes
1. Lazem, S. What are you reading? Shaimaa Lazem. Interactions 26, 5 (2019), 12–13.
2. El Masry, S. Reproduction of Folk Heritage: How Poor Cling to Life in the Context of Scarcity. Cairo Supreme Council of Culture, 2013.
3. Bidwell, N.J. Decolonising HCI and interaction design discourse: Some considerations in planning AfriCHI. XRDS: Crossroads, The ACM Magazine for Students 22, 4 (2016), 22–27.
4. Lazem, S., Giglitto, D., Nkwo, M.S., Mthoko, H., Upani, J., and Peters, A. Challenges and paradoxes in decolonising HCI: A critical discussion. Comput Supported Coop Work 31 (2022), 1–38.
5. Allela, M.A. Technological speculations for African oral storytelling: Implication of creating expressive embodied conversational agents. Proc. of the Second African Conference for Human-Computer Interaction: Thriving Communities. ACM, New York, 2018, 1–4.
6. Kanu, C.C. The context of Igwebuike: What entrepreneurship development systems in Africa can learn from the Igbo apprenticeship system. AMAMIHE Journal of Applied Philosophy 18, 1 (2020).
Posted in:
on Tue, July 05, 2022 - 10:46:44
Muhammad Adamu
Muhammad Adamu is a postdoctoral researcher in ImaginationLancaster, a design-led research lab at Lancaster University. His research focuses on developing approaches to the design and deployment of Indigenous technologies with and for African communities.
[email protected]
View All Muhammad Adamu's Posts
Shaimaa Lazem
Shaimaa Lazem is an Egyptian associate research professor at the City of Scientific Research and Technology Applications (SRTA-City). She completed her Ph.D. in HCI from Virginia Tech. She is interested in HCI in non-Western cultural contexts, participatory design, and decolonizing HCI. She is the cofounder of the Arab-HCI community (https://arabhci.org).
[email protected]
View All Shaimaa Lazem's Posts
Speech is human and multifaceted. Our approach to studying it should be the same.
Authors:
Arathi Sethumadhavan,
Joe Garvin,
Benjamin Noah
Posted: Wed, June 22, 2022 - 9:57:27
Whether it’s the friendly virtual assistant in your smart speaker, the auto-generated captions on your YouTube video, or the software that physicians use to dictate clinical notes, voice AI has already become a fixture of modern life. It’s the promise of hands-free convenience: Simply speak naturally, and the computer listens, analyzes, and recognizes what you’re saying. With things like voice-controlled homes and cars on the horizon, our relationship with voice AI looks to only deepen. But the task of building speech recognition technology remains a tall order: We want it to work well for everyone, but there is no one way of speaking.
People speak differently depending on how old they are or where they live, but less obvious demographic factors like socioeconomic status and education can also play a role. Layers of intersecting variables all come together to influence how we verbally express ourselves. We humans use language like code-switching chameleons, reflecting and creating culture every time we talk. Speech, in other words, is so much more than a mechanical system of words. It is organic and fluid, forever entangled with identity, values, and self-expression.
So it makes sense that studying speech to improve AI models would be more than just an engineering job. Teams grappling with complex problems involving people need diverse representation across disciplines. Working alongside engineers and data scientists, social science experts like sociologists, anthropologists, and sociolinguists can offer essential insight and context while navigating the intricacies of our many voices.
Lingering inequalities in voice AI
Use of computer speech recognition extends far beyond asking Alexa or Siri to play a song or check the weather. Court reporters use it to generate transcriptions; people with disabilities use it to navigate phones and computers; and companies use it to make hiring and firing decisions. As voice AI has proliferated in recent years, overall accuracy and reliability have improved dramatically. But even state-of-the-art speech recognition tech does not work well for everyone.
A Stanford University study from 2020, for example, tested services from major tech companies used for automated transcriptions and found large disparities between ancestry groups. The cause? Insufficient audio data when training the models, the study suggests. In other words, the voice AI powering the services was trained on datasets that left out many ways of speaking. In addition to certain ancestry groups, speech recognition systems also struggle with the accents of non-native English speakers, regional accents, and voices of women.
Biased AI starts with a biased worldview
These divides in voice AI have been documented for years, and so have the data-collection missteps that perpetuate them. Why, then, is collecting enough of the right speech data such a stubborn problem? One factor at play here is the out-group homogeneity effect, the cognitive bias that assumes people unlike you are more similar to each other than they really are. “They are alike; we are diverse,” or so the bias would have you believe.
Especially when classifying language, the bias is insidious. Consider, for instance, how all people over 60 are often lumped together into a single group: “older people.” This broad category might be useful in some contexts, but relying on it when studying speech would be irresponsible. Data shows that the way we talk continues to change as we age into our 60s and beyond, and that it even changes differently depending on gender. Tremendous variation exists within the group “older people” that deserves attention. But if someone isn’t 60 or older themselves, out-group homogeneity bias might blind them to all that variation.
Even the terms commonly used to describe Black and African American language—African American Vernacular English (AAVE), African American English (AAE), or African American Language (AAL)—can themselves be seen as examples of the out-group homogeneity effect. In reality, no language variety is spoken by every member of a demographic group, and people from other ancestry groups may happen to speak similarly. When it comes to studying speech for voice AI, creating false monoliths out of certain subgroups isn’t just inaccurate, it’s dangerous. Glossing over meaningful differences in the ways people speak shuts them out of tomorrow’s technology and leaves their voices unheard.
The many nuances of language
Many different factors play into speech. Some might be obvious, like where you live or whether you are a native speaker of the language. But other factors like health, education, and even historical migration patterns also play significant roles in shaping how a person speaks. Social factors like these contribute to linguistic variations. Anthropological linguists go a step further, suggesting that these factors actually construct social meaning. In this view, when someone speaks, their voice is doing so much more than simply reflecting their region or ancestry: It’s expressing an identity.
Our gender identity, for instance, can influence how we talk. Culture or ethnic association can also influence how our speech develops, how we use it, and how it may evolve. When we define a specific variation of speech, therefore, we must include these societal factors as its foundational pillars.
Level of fluency, education, gender identity, age at which a language was learned—which of these many interconnected factors are the most decisive in shaping the ways we speak? It’s crucial information to have, as it quickly becomes unwieldy to account for all possible aspects that determine speech. When collecting samples of spoken language to train a new speech model, for example, there are real-world limitations affecting what can be collected: time, money, personnel, and geography, to name a few. Prioritizing all the social factors is a complex job, one beyond the narrow scope of any one discipline.
A multidisciplinary approach
To build speech recognition technology that works well for everybody, we need to capture the right diversity in our data-collection strategies. This involves turning toward those nuances in language, being attentive and curious about them. We know that we want to capture an accurate picture of the incredible variety of human speech, and we also know that many complex dynamics are at play. This calls for a multidisciplinary approach for a better informed, more inclusive perspective.
An engineer might be able to notice the different word-error rates between demographic groups, for example, but a sociolinguist can help explain the different speaking patterns at play, how these patterns show up across communities, and historical reasons for why they emerged. A data scientist can tell you how many people in what groups need to be sampled. Sociologists, demographers, and anthropologists can speak to social behaviors and psychology, aspects that illuminate the subtleties of language. Domain experts like these offer invaluable insights and context, and involving them early on will help us design better datasets that capture human diversity.
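To give one concrete flavor of the engineer's side of that collaboration, the sketch below computes word-error rates per demographic group from paired reference and hypothesis transcripts. The transcripts and group labels are invented purely for illustration; a real evaluation would use large, carefully sampled test sets and the groupings a sociolinguist would recommend.

# Minimal sketch: word-error rate (WER) per demographic group.
# Transcripts and group labels below are made up for illustration only.

def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    via a standard edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (group, reference transcript, ASR hypothesis) -- hypothetical examples
samples = [
    ("group_a", "turn the kitchen lights off", "turn the kitchen lights off"),
    ("group_a", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("group_b", "turn the kitchen lights off", "turn the chicken lights of"),
    ("group_b", "set a timer for ten minutes", "set a time for tin minutes"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in by_group.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")

A gap like the one this toy example prints is where the conversation starts, not where it ends; the sociolinguist's job is to explain what lies behind it.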
Toward more equitable voice AI
Even with the help of other disciplines, building speech recognition systems is incredibly hard work. Training a voice AI model requires a huge amount of speech data. Collecting this data means bringing in lots of people from different population groups, some of whom are difficult to access and recruit, and some of whom, like Native Americans and the First Nations peoples of Canada, haven’t had their speech studied extensively. And when subjects are finally recruited, they need to be taken to a noise-controlled recording facility, asked to follow specific directions, and instructed to read aloud carefully designed paragraphs of text.
The process of creating voice AI is painstaking and resource-intensive as it is—detecting and reducing bias on top of it all only makes the process more difficult. And yet, we must be up to the task. The fact is that speech recognition systems of today, trained on largely homogenous datasets, still don’t work for all groups of people. This is more than a matter of services performing poorly; it’s a matter of dignifying real ways of speaking, real identities and cultures. We must first acknowledge that this problem exists and educate teams about ways of building more equitable voice AI. Then we need to act. Acknowledging the intricate social and cultural dimensions of speech, we might team up with experts from relevant disciplines. With the help of experts like social scientists, product teams are better equipped to think carefully about inclusive dataset design and to devise creative approaches to thorny data-collection obstacles.
Human speech poses engineering problems that transcend the technical. Building voice AI is in fact a sociotechnical endeavor, one that requires diversity in disciplines. The stakes are high, but with an intentional focus to seek out our blind spots, we can collaborate to build voice AI that truly works for everyone.
Arathi Sethumadhavan
Arathi Sethumadhavan is the head of research for Ethics & Society at Microsoft, where she works at the intersection of research, ethics, and product innovation. She has brought in the perspectives of more than 13,000 people, including traditionally disempowered communities, to help shape ethical development of AI and emerging technologies such as computer vision, NLP, intelligent agents, and mixed reality. She was a recent fellow at the World Economic Forum, where she worked on unlocking opportunities for positive impact with AI, to address the needs of a globally aging population. Prior to joining Microsoft, she worked on creating human-machine systems that enable individuals to be effective in complex environments like aviation and healthcare. She has been cited by the Economist and the American Psychological Association and was included in LightHouse3’s 2022 100 Brilliant Women in AI Ethics list. She has a Ph.D. in experimental psychology with a specialization in human factors from Texas Tech University and an undergraduate degree in computer science.
[email protected]
View All Arathi Sethumadhavan's Posts
Joe Garvin
Joe Garvin is a writer for Ethics & Society at Microsoft via Allovus, where he helps share research insights with a broader audience. He has previously written for the University of Washington and City Arts magazine. He has a bachelor’s degree in English literature and a master’s degree in communication with a specialization in digital media.
[email protected]
View All Joe Garvin's Posts
Benjamin Noah
Ben Noah is a senior design researcher for the Ethics & Society group at Microsoft (Cloud & AI), where he supports strategy on responsible development of AI technologies, focusing on the collection of diverse datasets. His previous research experience included modeling cognitive workload using eye-tracking metrics and the design of modern operator control systems for the refinery industry. He has a Ph.D. in industrial engineering with a specialization in human factors from Penn State University, and a bachelor's degree in mechanical engineering from the University of Illinois.
[email protected]
View All Benjamin Noah's Posts
Toward understanding the efficacy of virtual classrooms among Bangladeshi students during Covid-19
Authors:
S.M. Rakibul Islam,
Nabarun Halder,
Ashraful Islam,
Eshtiak Ahmed,
Sheak Rashed Haider Noori
Posted: Fri, June 03, 2022 - 11:41:22
The sudden appearance of Covid-19 forced people to stay at home due to its highly infectious nature. Governments shut down countries; economies and businesses collapsed. The education system was no exception; rising Covid-19 cases forced schools to shut down. According to UNESCO, Covid-19 affected 37,694,522 students as of January 2022 [1]. Since educational facilities closed in March 2020, Bangladeshi students have been deprived of adequate instruction time and of connection with classmates, both of which significantly harm their educational experience. Like many countries, Bangladesh had to move to an online education system [2]. While online instruction overcame some of the inevitable difficulties, many people questioned its efficacy and acceptability. Taking this into account, our research looks at students' views and opinions regarding the efficacy of virtual classrooms and how these classrooms could be an alternative to a traditional, in-person classroom setting.
We used Google Forms to create an online survey about students' perspectives on online education [3]. To develop the survey, we carried out a literature review of related works and conducted informal interviews with students who attended online classes [4]. The survey consisted of both quantitative and qualitative inquiries centered around a set of four types of questions: participants' characteristics, how they attended online classes, their online learning platform experience, and their overall experience. The survey involved 210 students, with educational backgrounds ranging from high school to postgraduate. Of the participants, 88.6 percent were undergraduates and 92.9 percent were between 18 and 25 years old (Table 1). Most participants were from urban areas (63.3 percent). We performed a percentage analysis on the data from the survey using Google Forms' built-in features and Microsoft Excel.
Demographic Information (number of participants)
Education level: High school/secondary (Grade 6 to 10), 2; Higher Secondary (Grade 11 to 12), 7; Undergraduate, 186; Postgraduate, 15
Gender: Female, 68; Male, 142
Age (years): 10 to 17, 3; 18 to 25, 195; 26 to 32, 11; 33 to 40, 1
Residence area: Suburban, 34; Rural, 43; Urban, 133
Table 1. Demographic information of survey participants.
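As a rough sketch of the kind of percentage analysis described above, the shares reported here could be computed from an exported response sheet along the following lines. The column names and the way the data is constructed are assumptions for illustration; the authors' actual workflow used Google Forms' built-in features and Microsoft Excel.

# Minimal sketch of a percentage analysis over survey responses.
# Column names and values are illustrative stand-ins for the exported sheet;
# in practice the data would be read from the form export, e.g. pd.read_csv("responses.csv").
import pandas as pd

responses = pd.DataFrame({
    "education_level": ["Undergraduate"] * 186 + ["Postgraduate"] * 15
                       + ["Higher Secondary"] * 7 + ["High school/secondary"] * 2,
    "residence_area": ["Urban"] * 133 + ["Rural"] * 43 + ["Suburban"] * 34,
})

for column in responses.columns:
    shares = responses[column].value_counts(normalize=True).mul(100).round(1)
    print(f"\n{column} (percent of {len(responses)} participants)")
    print(shares.to_string())

Run on these counts, the script reproduces figures such as 88.6 percent undergraduates and 63.3 percent urban participants.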
Poor Internet connectivity was one of the most significant challenges for online classes [5]. Our study found that nearly half (45.2 percent) of the participants had poor Internet access. Most students used Android smartphones and mobile data for online classes (Table 2). A majority of students (84 percent) had no previous experience with online classes, and most of them preferred in-person classes to recorded ones. According to our findings, most participants who did not understand a topic in an online class asked the instructor questions, whereas many students who watched recorded videos didn't ask questions. Regarding relationships between classmates, about 80 percent of the respondents felt that online classes created distance between them and their classmates. Responses on teacher-student interaction were mixed.
Technical Information (number of responses)
Devices used: Tablet computer, 2; iPhone/iOS smartphone, 14; Desktop computer, 27; Laptop computer, 108; Android smartphone, 161
Internet accessibility: Public Wi-Fi, 36; Broadband connection, 103; Mobile Internet packages, 127
Platforms used: University’s Blackboard, 1; Skype, 2; Microsoft Teams, 6; Facebook Live, 11; Zoom, 71; Google Meet, 163
Communication with new classmates: Never communicate with them, 60; Personal message, 91; Instant messengers and social media groups, 150
Table 2. How participants join online classes and interact with classmates.
Most participants stated that being unable to physically meet with teachers negatively affected their education. Many felt less motivated to discuss any topic in an online class. As a result, students were less eager to discuss issues with the teacher after class. In terms of changes in instruction in the online class, 50.95 percent of participants agreed that "instructors spend less time on guiding students on changes in directions in the online class" and "instructors spend less time explaining topics"; 36.67 percent stated that "instructors take fewer feedbacks"; and 45.24 percent agreed with the statement "To demonstrate a topic, instructors use fewer visuals." Just over 41 percent of students were neutral about changes in lecture materials (Figure 1).
Figure 1. Participants’ votes on how lecture materials and directions are changed.
Participants did not have a clear preference about where they took tests, but they had some complaints about the testing process. They stated that open-book exams caused teachers to generate complex tasks that might take longer to solve than the time given for exams, and that teachers could not evaluate their efforts properly. In addition, poor Internet connectivity and the inadequacy of devices were significant barriers to students participating in online tests. The main complaint about online classes and tests was that students could not concentrate for a long period of time. Students also mentioned falling asleep during classes and noisy environments while attending classes at home. In terms of class participation and performance evaluations, most students preferred assessment at the end of the week. Still, many students appreciated checking in for each class; some also mentioned that they enjoyed the assessments at the end of the course or semester. A few of the main complaints from students using online learning systems were that teachers used fewer images to demonstrate concepts, spent less time explaining subjects than before, and took less feedback from students. Approximately 33.3 percent were neutral about learning from online classes, with extremely confident students around 11.9 percent and extremely unconfident students around 17.1 percent (Figure 2).
Figure 2. Participants’ confidence about learning from online classes.
On whether online lessons were advantageous, 31.4 percent of students were neutral and 17.1 percent considered online classes extremely unuseful. In comparison, 19.52 percent of students considered online classes useful (Figure 3).
Figure 3. How useful online classes were to participants.
Students understood that the pandemic prevented in-person learning, but they recommended several changes to online education: limiting the length of virtual classes, ensuring an uninterrupted Internet connection at a lower cost, using additional visuals or examples to demonstrate a topic, taking immediate feedback after discussing a topic, and guiding students on the necessary steps for the next session of a course or class. They also reported complications in using online platforms, among them the excessive data consumption of video lecture distribution, the difficulty of transferring larger files to or from the instructor over a slow connection within a short, fixed time frame, and inconsistent attendance. The survey findings indicate that students would be interested in online education if these recommendations were heeded and the issues resolved. Despite the limitations indicated by survey participants, an online education system can serve as a viable alternative to traditional classroom education in an emergency such as the Covid-19 pandemic, and virtual classrooms can minimize disruptions to continuous learning.
Endnotes
1. UNESCO. Education: From disruption to recovery; https://en.unesco.org/covid19/educationresponse
2. Abdullah, M. 2020: The rise of online education. Dhaka Tribune. Dec. 31, 2020; https://www.dhakatribune.com/bangladesh/2020/12/31/2020-rise-of-online-education
3. Halder, N., Islam, S.R., Hosain, M.S., Ahmed, E., Islam, A., and Noori, S.R.H. Efficacy and acceptance of virtual classrooms during covid-19: Bangladesh perspective. Proc. of 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications. IEEE, 2021, 1–6.
4. Muthuprasad, T., Aiswarya, S., Aditya, K., and Jha, G.K. Students' perception and preference for online education in India during Covid-19 pandemic. Social Sciences & Humanities Open 3, 1 (2021), 100101.
5. Adnan, M. and Anwar, K. Online learning amid the Covid-19 pandemic: Students' perspectives. Online Submission 2, 1 (2020), 45–51.
S.M. Rakibul Islam
S.M. Rakibul Islam obtained a B.S. in computer science and engineering from Daffodil International University, Bangladesh. His research interests include HCI, user interaction, and applied machine learning.
[email protected]
View All S.M. Rakibul Islam's Posts
Nabarun Halder
Nabarun Halder obtained a B.S. in computer science and engineering from Daffodil International University, Bangladesh. His research interests include HCI, user interaction, and applied machine learning.
[email protected]
View All Nabarun Halder's Posts
Ashraful Islam
Ashraful Islam is a Ph.D. candidate in computer science at the University of Louisiana at Lafayette. His research interests include HCI, mHealth, and user experience. He earned an M.S. in computer science from the same institution.
[email protected]
View All Ashraful Islam's Posts
Eshtiak Ahmed
Eshtiak Ahmed is a Ph.D. student at Tampere University, Finland. His research interests include HCI, social robotics, and user experience. He earned an M.Sc. in human-technology interaction (HTI) from the same institution.
[email protected]
View All Eshtiak Ahmed's Posts
Sheak Rashed Haider Noori
Sheak Rashed Haider Noori is an associate professor of computer science who also serves as associate chair of the Department of Computer Science and Engineering at Daffodil International University, Bangladesh. His research covers HCI, gamification, and design science.
[email protected]
View All Sheak Rashed Haider Noori's Posts
The spirit of scientific communication
Authors:
Ahmet Börütecene,
Oğuz Buruk
Posted: Mon, February 28, 2022 - 12:46:19
How can I tell what I think till I see what I make and do? — Christopher Frayling [1]
Just a couple of months before the start of the new decade, and before the pandemic began haunting the world, we had the privilege of attending the Halfway to the Future symposium (https://www.halfwaytothefuture.org/2019/) in person and presenting a paper. The symposium was organized on the occasion of the 20th anniversary of the Mixed Reality Lab at the University of Nottingham. There were many exciting keynotes, presentations, and panels. At that time, we did not know that this would be one of the last physical presentations we would give for a while. In this blog post, we would like to talk about the stage performance we did to present our paper and how it motivated us to open a discussion on alternative ways of communicating scientific knowledge, in particular that which is produced through HCI and design research.
For the symposium, we submitted a provocative paper that envisions using a Ouija board as a resource for design [2], which was presented in the session built around Bill Gaver’s influential work on ambiguity [3]. Briefly, the Ouija is a parlor game in which participants place their fingers on a physical cursor (planchette) and ask questions of an invisible being—supposedly a spirit. The being moves the cursor—although it is actually moved unconsciously by the humans due to the ideomotor effect [4]—around the letters on the Ouija board to answer the questions. We were excited by how the Ouija mechanism combines ambiguity and human touch in an intriguing way, making it a collective, playful, and evocative medium for ideation. Through its embodied and narrative affordances, the Ouija minimizes verbal communication and focuses participants’ attention on the movement of their hands and the planchette around the letters on the board for a tacit conversation. Using the Ouija was quite experimental for us, and the ambiguity it embodies was also at the heart of our research process, as we kept discovering new experiences and challenges each time we iterated our Ouija design sessions with different people.
When we started thinking about how to present our research, sequential static slides did not seem like an ideal way of showing our journey, nor our confusion and excitement about the potential of the Ouija for design. Show-and-tell was another alternative. However, although many things can be explained through annotations, the design object, or the object of design, still has a lot more to communicate that cannot be translated into a traditional show-and-tell. In the end, we decided to use the artifact itself to talk about our research. One advantage was that our object of research/design had an intrinsic performative aspect and thus could afford a theatrical presence on the stage. We started exploring how to exploit this aspect and use the narrative it could offer.
After playing with the Ouija for a while, we noticed that we did not have to show our real-time interaction with it; instead, we could show on a large screen a prerecorded video of us performing the intended actions, making the audience believe it was a live video. This introduced another challenge, though: the requirement to perform on stage in sync with the prerecorded video. Although challenging, it fit the spirit of the symposium, mixed reality, very well. It was a little bit risky, and we did not have much time for rehearsal, but we wanted to give it a shot.
We wrote a vague script for our Ouija performance and started playing along in the hotel room. As we were playing, we were both rehearsing our script and discovering the Ouija board and how to use it for the performance as well as for design. This process not only unfolded the dialogue we intended to create between two people as researchers for presentation purposes, but it also facilitated our dialogue as designers exploring the Ouija board by enabling us to reveal aspects of it that we did not have the chance to notice earlier. So the efforts toward developing a way to communicate our research also became a way to explore the design space further and have a better understanding of it. In the end, the video was ready, and the performance was a success. The audience followed along with the script and caught the key moments clearly.
The point, though, is not the success of our presentation. Nor was it the first performative presentation made for scientific communication; many others preceded it. However, we believe it is worth discussing the generative impact of such presentations not only on the audience, but also on the presenters, the design objects, and the research. There may be a certain value in thinking about how an artifact can be the “subject” doing the scientific communication rather than just being the inert “object” frozen during knowledge creation. This approach configures the interaction with the artifact in a way that may help with the challenges of communicating design knowledge and can be complementary to methods such as annotated portfolios. Moreover, we also had the chance to see that such a performance has an intrinsic value for communicating design knowledge that makes an impact on the audience. It not only helped us engage the audience, but it also provoked research ideas and spontaneous action in the hall. For example, one of the presenters in our panel, Miriam Sturdee, did a sketch during the performance (Figure 1), and later Katherine Isbister asked us if we could place our Ouija board at the corner of the stage to receive questions from the audience during the panel she was moderating. It was also nice to see some parallels to our performance at the symposium. For example, Steve Benford used an old slide projector to show the slides he had used years ago to present one of his early CHI works, and then went on with a digital presentation.
Figure 1. Sketch made by Miriam Sturdee during the Ouija performance (used with permission).
Inspired by this experience, we would like to discuss the value of such performances at scientific venues and how they could open up new possibilities for communicating research outcomes as well as for engaging different types of audiences. We also consider it important to reflect on how such an approach can help authors/designers themselves gain a better understanding of the artifact or topic they are exploring. Instead of thinking of the presentation as a fixed and predictable thing, we might see it as a performance that enables designers, researchers, and audiences to experience the results in the making, as the continuation of a reflective dialogue with the design object. We believe an important aspect of our performance was that it was not a gimmick or special effects. It was a moment where the design object, or the object of design, spoke for itself, and this made sense under the symposium theme. Can this kind of approach inspire alternative ways of presenting research processes, disseminating findings, or even reviewing scientific publications? One example in this direction is the “not paper” [5], which criticizes current publishing formats and practices through its particular embodiment. DIS runs a pictorial track where authors are encouraged to communicate their research through visual-driven papers. Video articles are another example [6]. We would like to hear what design communities have to offer in this direction: Can making multisensory papers be a way of articulating insights that address different sensory channels, be they haptic, gustatory, auditory, etc.? For example, can we hug or bite the text? Or, why not games as potential presentation formats? Instead of making slides, can we organize a live-action role-playing setting for keynote presentations? How should we go further?
Endnotes
1. Frayling, C. Research in Art and Design. Royal College of Art, London, 1993.
2. Börütecene, A. and Buruk, O. Otherworld: Ouija Board as a resource for design. Proc. of the Halfway to the Future Symposium. ACM, New York, 2019, 1–4; https://doi.org/10.1145/3363384.3363388
3. Gaver, W.W., Beaver, J., and Benford, S. Ambiguity as a resource for design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2003, 233–240; https://doi.org/10.1145/642611.642653
4. Häberle, A., Schütz-Bosbach, S., Laboissière, R., and Prinz, W. Ideomotor action in cooperative and competitive settings. Social Neuroscience 3, 1 (2008), 26–36. https://doi.org/10.1080/17470910701482205
5. Lindley, J., Sturdee, M., Green, D.P., and Alter, H. This is not a paper: Applying a design research lens to video conferencing, publication formats, eggs… and other things. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2021, 1–6; https://doi.org/10.1145/3411763.3450372
6. Löwgren, J. The need for video in scientific communication. Interactions 18, 1 (2011), 22–25; https://doi.org/10.1145/1897239.1897246
Ahmet Börütecene
Ahmet Börütecene is an assistant professor in the division of Media and Information Technology at Linköping University, Sweden, where he does design research and teaching, mostly in the fields of interaction design and visualization. His current research focus is haptic human-AI interactions, mixed reality experiences, and more-than-human smart cities.
[email protected]
View All Ahmet Börütecene's Posts
Oğuz Buruk
Oğuz ‘Oz’ Buruk is an industrial product designer who completed his Ph.D. in interaction design at Koç University–Arçelik Research Center for Creative Industries (KUAR). He currently is a senior research fellow in the Gamification Group, Tampere University, Finland. His work focuses on playful bodily technologies, critical design, design fiction, extended reality environments, and human-nature-machine interactions.
[email protected]
View All Oğuz Buruk's Posts
Modifying category icons in smartphone app stores for better understanding: A user-centric approach
Authors:
Eshtiak Ahmed,
Ashraful Islam,
Farzana Anowar
Posted: Fri, February 25, 2022 - 12:30:00
The recent increase in smartphone users has resulted in great diversity of backgrounds and cultures among these users. Designers of smartphone interfaces need to address this diversity by making design elements understandable and acceptable for all. The user interface should address the needs and level of understanding of all users. In this study, we focus on a new approach to the design of user interface icons and investigate how careful design can help users interact more seamlessly with interfaces.
As of the first quarter of 2021, the Google Play Store had the highest number of apps of any app store [1]. In the Google Play Store, each app category is specified by a category name and a simple graphical representation beside the text. Though the text itself is self-explanatory, the graphical representations do not represent the categories well. This may not be a problem for someone who can read English, but for those who cannot, it is very hard to grasp the category purely from the pictorial depiction [2]. It is quite common for people to have trouble finding the app they are looking for [3]. The goal of this research is to see if app categories can be represented more efficiently with better icons or graphical representations next to them.
Initially, new icons were designed for some selected categories. A public survey was then designed and circulated in which participants were asked to indicate their preferred icon for each category. A total of 664 responses were collected. From the survey results, we identified factors that distinguish category icons that communicate more effectively. The detailed information and findings of this study can be found in one of our published works [4].
App store category selection
Apps in the Play Store are divided into more than 35 categories. The list of categories consists of the category name and a small category icon beside it. These category icons are often confusing or misleading, making them useless at times. Not all of the icons are poor, however; some are quite helpful. For this study, 10 app categories were selected based on their abstractness and readability: arts and design, business, communication, entertainment, finance, lifestyle, personalization, social, sports, and travel and local.
Icon design
Our goal was to design icons in such a manner that people with low literacy, senior citizens, and kids with special needs can easily understand what they stand for, potentially allowing them to find what they are searching for. While designing the icons, we wanted to represent each category with relevant objects that belong to that category, for example, representing the sports category with elements from popular sports. In Figure 1, we show the 10 selected categories along with their current icons in the Play Store, as well as the icons we have designed for them.
Figure 1. Side-by-side comparison of existing and newly designed icons.
Survey design and participants
After completing the icon design, a public survey was created to get feedback from smartphone users about the newly designed icons. For each app store category, two icons (the newly designed one and the existing one from the Google Play Store) were placed alongside each other, and the participants were asked to decide which icon they preferred for each category. The survey collected a total of 664 responses; 513 (77.3 percent) were male and the remaining 151 (22.7 percent) were female. All participants were between the ages of 21 and 25.
Findings
The public survey provided a very good idea of whether the newly designed icons represent the categories better or not. Figure 2 shows the overall survey result comparing the participants’ preference for icons in each of the 10 categories.
The newly designed icon for the arts and design category was preferred by 88 percent of the participants. The new business category icon was preferred by 90.8 percent of participants, while the new icons for the communication, entertainment, and finance categories were preferred by 84.3 percent, 95.5 percent, and 95.9 percent of participants, respectively. The new lifestyle category icon was preferred by 93.1 percent of participants.
Some of the newly designed icons received mixed responses, as those categories were difficult to represent with a small piece of art. The newly designed icon for the personalization category was preferred by 64.8 percent of participants, and the new icon for the social category by 71.2 percent. The new sports category icon was preferred by 82.7 percent of participants, while the new travel and local category icon was preferred by 72 percent.
Challenges and discussion
The new icons were designed using real-world objects from each category. For example, the sports icon was designed by combining equipment from three popular sports. However, similar categories, such as finance, business, communication, and social, were challenging because there are fewer distinguishing factors among them. The lifestyle and personalization categories were also very challenging because of their abstractness.
Icons were created using the objects that came to mind for each category. Sometimes, however, the new icons might have seemed too crowded, as many objects were crammed into a tiny circle to make the icon understandable. Another shortcoming is that the survey participants were mostly young smartphone users. A more thorough survey covering different age groups could produce a different range of results.
Figure 2. Analysis of results obtained from the conducted survey.
Conclusion
This study promotes designing effective textless user interfaces so users can interact with their devices more easily and effectively. Creating impactful and meaningful icons could be a great step toward text-free smartphone interfaces. This work could be extended toward a user interface that is self-explanatory and interactive, opening new doors for challenged users.
Endnotes
1. Statista. Number of apps available in leading app stores. Retrieved August 21, 2021 from https://www.statista.com/topics/1729/app-stores/#dossierSummary__chapter3
2. Wiebe, M., Geiskkovitch, D., Bunt, A., Young, J., and Glenwright, M. Icons for kids: Can young children understand graphical representations of app store categories? Proc. of the 42Nd Graphics Interface Conference. Canadian Human-Computer Communications Society, 2016, 163–166.
3. Wang, M. and Li, X. Effects of the aesthetic design of icons on app downloads: Evidence from an Android market. Electronic Commerce Research 17, 1 (2017), 83–102.
4. Ahmed, E., Hasan, M.M., Faruk, M.O., Hossain, M.F., Rahman, M.A., and Islam, A. Icons for the mass: An approach towards text free smart interface. Proc. of the 2019 International Conference on Advances in Science, Engineering and Robotics Technology. 2019, 1–4.
Eshtiak Ahmed
Eshtiak Ahmed is a Ph.D. student at Tampere University, Finland. His research interests include HCI, social robotics, and user experience. He earned an M.Sc. in human-technology interaction (HTI) from the same institution.
[email protected]
View All Eshtiak Ahmed's Posts
Ashraful Islam
Ashraful Islam is a Ph.D. candidate in computer science at the University of Louisiana at Lafayette. His research interests include HCI, mHealth, and user experience. He earned an M.S. in computer science from the same institution.
[email protected]
View All Ashraful Islam's Posts
Farzana Anowar
Tangible XAI
Authors:
Maliheh Ghajargar,
Jeffrey Bardzell,
Allison Smith Renner,
Peter Gall Krogh,
Kristina Höök,
Mikael Wiberg
Posted: Tue, February 15, 2022 - 4:11:15
Computational systems are becoming increasingly smart and automated. Artificial intelligence (AI) systems perceive things in the world, produce content, make decisions for and about us, and serve as emotional companions. From music recommendations to higher-stakes scenarios such as policy decisions, drone-based warfare, and automated driving directions, automated systems affect us all.
But researchers and other experts are asking, How well do we understand this alien intelligence? If even AI developers don’t fully understand how their own neural networks make decisions, what chance does the public have to understand AI outcomes? For example, AI systems decide whether a person should get a loan; so what should—what can—that person understand about how the decision was made? And if we can’t understand it, how can any of us trust AI?
The emerging area of explainable AI (XAI) addresses these issues by helping to disclose how an AI system arrives at its outcomes. But the nature of the disclosure depends in part on the audience, or who needs to understand the AI. A car, for example, can send warnings to consumers (“Tire Pressure Low”) and also send highly technical diagnostic codes that only trained mechanics can understand. Explanation modality is also important to consider. Some people might prefer spoken explanations compared to visual ones. Physical forms afford natural interaction with some smart systems, like vehicles and vacuums, but whether tangible interaction can support AI explanation has not yet been explored.
In the summer of 2020, a group of multidisciplinary researchers collaborated on a studio proposal for the 2021 ACM Tangible, Embedded, and Embodied Interaction (TEI) conference. The basic idea was to link conversations about tangible and embodied interaction and product semantics to XAI. Here, we first describe the background and motivation for the workshop and then report on its outcomes and offer some discussion points.
Self-explanatory or explainable AI?
Explainable AI (XAI) explores mechanisms for demonstrating how and why intelligent systems make predictions or perform actions, in order to help people understand, trust, and use these systems appropriately [1]. Importantly, people need to know more about systems than just their accuracy or other performance measures; they need to understand more fully how systems make predictions, where they work well, and also their limitations. This understanding helps to increase our trust in AI by reassuring us that it is operating without bias. XAI methods have another benefit: supporting people in improving systems by making them more aware of how and when they err. Yet XAI is not always a perfect solution. Trust is particularly hard to calibrate; AI sometimes cannot be trusted; and AI explanations may result in either over- or under-reliance on systems, which might promote manipulation and managerial control.
The notion of explanations or self-explanatory forms has also had a long tradition in product and industrial design disciplines. Product semantics is the study of how people make sense of and understand products through their forms; hence, the way products can be self-explanatory [2]. From that perspective, a product explains its functionality and meaning by its physical forms and context of use alone. For example, the large dial on a car stereo not only communicates that the volume can be controlled, but also how to do so. In this view, products do not need to explain themselves, though their forms need to be understandable in principle.
Using a product-semantic perspective for XAI builds upon the general concerns of XAI, but focuses on the user experience and understandability of a given AI system based on user interpretations of its material and formal properties [3]. The dominant interaction modalities, however, do not fully allow that experience; hence, we have decided to explore tangible embodied interaction (TEI) as a promising interaction modality for that purpose.
From explainable AI to graspable AI: A studio at ACM TEI 2021
At ACM TEI 2021, we organized a studio (similar to a workshop) to map the opportunities that TEI in its broadest sense, including Hiroshi Ishii and Brygg Ullmer’s tangible user interfaces, Paul Dourish’s embodied interaction, Kristina Höök’s somaesthetics, and Mikael Wiberg’s materiality of interaction, can offer to XAI systems.
We used the phrase graspable AI, which deliberately plays with two senses of the word grasp, one referring to taking something into one’s hand and the other to when the mind “grasps” an idea. The term graspable inherently conveys the meaning of being understandable intellectually, meaningfully, and physically.
In doing so, we referenced two earlier HCI concepts. In 1983, Ben Shneiderman coined the term direct manipulation as a way to “offer the satisfying experience of operating on visible objects. The computer becomes transparent, and users can concentrate on their tasks” [4]. In 1997, Ishii and Ullmer envisioned tangible bits, a vision of human-computer interaction that allows users to grasp and manipulate bits by coupling the bits with everyday physical objects and architectural surfaces, which was preceded by George Fitzmaurice’s Ph.D. thesis Graspable User Interfaces in 1996. Their goal was to create a link between digital and physical spaces [5,6,7].
Building on these ideas, we viewed graspable AI as a way to approach XAI from the perspective of tangible and embodied interaction, pointing to a product that is not only explainable but also coherent and accessible in a unified tactile and haptic form [8].
Beyond presentations of position papers, studio participants engaged in group activities, which consisted of three phases (Figure 1). First, each group chose an AI object or system in everyday use; next, they analyzed and explored its interactions with regard to explainability (what needs to be explained and what the system can explain); finally, they ideated possible tangible interactions with the system and redesigned the human-AI interaction using TEI.
Figure 1. TEI studio structure and process.
Graspable designs for everyday use
Studio participants split into groups to explore graspable designs for three distinct AI systems for everyday use: movie recommendations, self-tracking, and robotic cleaning devices.
One group explored graspable design for movie recommendations, asking questions about their explainability and how users can interact with the system through a tangible interface. Some ideas that emerged during the discussion and ideation phases were speculations around using a stress-ball form as a tangible interface to the recommendation system to influence recommendations based on the user’s mood (Figure 2).
Figure 2. Stress ball as a TUI for movie recommendation system.
Another group worked on AI self-tracking devices, discussing questions of how an AI self-tracking device can learn from the user interactions and make us “feel” that our blood sugar is high or low. Provocative ideas, such as an eye that loses its sight gradually and a system that learns and informs the user through haptic feedback, were also explored (Figure 3).
Figure 3. Tangible AI self-tracking device.
The third group focused on robotic cleaners, where they discussed the opportunities of tangible interactions to inform users about and anticipate robots’ movements. Making the robot’s intended movement visible was another idea that led the group to ideate a tangible map and a map that uses spatial image projections (Figure 4).
Figure 4. Robotic vacuum cleaner spatial interactions.
Discussion
Aesthetic accounts often unfold as a back and forth between material particulars and interpreted wholes [3]. Further, such accounts consider the difference between what are called explanations in the sciences and interpretations in the humanities and design. While explanation is used to find an answer to the why of a phenomenon and to reduce its complexity to manageable and understandable units, interpretation is a way to make sense of an experience and of the how of the formal and material properties of an artifact within its sociocultural context [3].
AI systems vary in their complexity; some are easier to explain (e.g., movie recommendation systems), while others are more difficult to understand because of their complex inner reasoning (e.g., natural language understanding by deep learning). Accordingly, the notion of graspable AI is concerned not only with explanations but also with how AIs are experienced and interpreted through their material particularities within their sociocultural context. However, we do not mean to suggest that graspable AI is a perfect solution to all fundamental humanities concerns! In this workshop, we explored TEI as an explanation modality. We found that while it has some potential for improving the understandability of a system, it comes with its own challenges. For example, there are features and functionalities of AI systems that do not need to be tangible, or that are understandable as they are. It was also challenging to design tangible AI systems without falling into the usual categories of smart objects.
Hence, we conclude this post by discussing a set of related themes and challenges inherent to AI systems that may fruitfully be approached through the notion of the graspable: growth, unpredictability, and intentionality.
Graspable growth
AI systems are intended to learn over time as a continuous, internal process based on data inputs. Hence, an algorithm metaphorically grows over time. As an AI system grows, its functionality and predictions are expected to improve as well, though devolution is as possible as evolution. But humans are not always aware of that process of learning and growing. Probably the most intuitive form of growth for humans is biological growth: We naturally and intuitively understand when a plant grows new leaves.
We suggest that metaphors of biological growth (and decay) can inspire the design of XAI product semantics to make the AI learning process self-explanatory.
Graspable unpredictability
AI systems are sometimes unpredictable for humans. The decisions they make based on their inner workings and logic may not appear entirely reasonable, fair, or understandable for humans. Further, such systems are ever changing based on input data, especially if they learn over time. How might an AI decrease its apparent unpredictability by revealing its decisions and purposes through tangible and embodied interactions?
For instance, a robotic cleaner maps the area it is cleaning, and its movement decisions are made based on spatial interaction with the environment and continuous learning of the space. Can its mappings and anticipated movements be rendered in a way that humans can grasp? While the stakes could be low for automated consumer vacuum cleaners, implications for powerful robots used in manufacturing and surgery are more serious.
Graspable intentionality
AI systems have their own internal reasoning and logic and they make decisions based on them. Their internal logic and mechanisms make some AI systems appear to be independent in their decision-making process and therefore to have their own mind, intentions, and purposes. Whether or not this is true from a technical perspective, end users often experience it as such, for instance when an algorithmic hiring tool “discriminates.”
At stake is not only the technical question of how to design a more socially responsible AI, but also how AI developers might expose systems’ workings and intentions to humans by means of tangible and embodied interaction.
Conclusions
AI systems are becoming increasingly complex and intelligent. Their complexity and unpredictability pose important questions concerning, on the one hand, the degree to which end users can actually control them and, on the other hand, whether and how we can design them in a more transparent and responsible way. Above, we have brought forward some possible challenges and opportunities of using tangible and embodied interaction in designing AI systems that are understandable for human users.
Acknowledgment
We would like to thank workshop participants and our colleagues David Cuartielles and Laurens Boer for their valuable input and participation.
Endnotes
1. Preece, A. Asking ‘why’ in AI: Explainability of intelligent systems – perspectives and challenges. Intelligent Systems in Accounting, Finance and Management 25, 2 (2018), 63–72; https://doi.org/10.1002/isaf.1422
2. Krippendorff, K. and Butter, R. Product semantics: Exploring the symbolic qualities of form. Innovation 3, 2 (1984), 4–9.
3. Bardzell, J. Interaction criticism: An introduction to the practice. Interacting with Computers 23, 6 (Nov. 2011), 604–621; https://doi.org/10.1016/j.intcom.2011.07.001
4. Shneiderman, B. Direct manipulation: A step beyond programming languages. Computer 16, 8 (Aug. 1983), 57–69; https://doi.org/10.1109/MC.1983.1654471
5. Fitzmaurice, G., Ishii, H., and Buxton, W. Bricks: Laying the foundations for graspable user interfaces. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., USA, 1995, 442–449; https://doi.org/10.1145/223904.223964
6. Fitzmaurice, G. Graspable User Interfaces. Ph.D. thesis. University of Toronto, 1996; https://www.dgp.toronto.edu/~gf/papers/PhD%20-%20Graspable%20UIs/Thesis.gf.html
7. Ishii, H. and Ullmer, B. Tangible bits: Towards seamless interfaces between people, bits and atoms. Proc. of the ACM SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1997, 234–241.
8. Ghajargar, M., Bardzell, J., Renner, A.S., Krogh, P.G., Höök, K., Cuartielles, D., Boer, L., and Wiberg, M. From explainable AI to graspable AI. Proc. of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, New York, 2021, 1–4.
Maliheh Ghajargar
Maliheh Ghajargar is a design researcher and associate senior lecturer in interaction technologies with a background in industrial design. Her research interests are within the areas of design research and human-AI interaction. Her latest research project, “Graspable AI,” concerns designing human-AI tangible and explainable interactions.
[email protected]
View All Maliheh Ghajargar's Posts
Jeffrey Bardzell
Jeffrey Bardzell is professor of informatics and director of HCI/design in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. As a leading voice in critical computing and HCI/design research, he has helped to shape research agendas surrounding critical design, design theory and criticism, creativity and innovation, aesthetics, and user experience. He is co-author of Humanistic HCI (Morgan Claypool, 2015) and co-editor of Critical Theory and Interaction Design (MIT Press, 2018).
[email protected]
View All Jeffrey Bardzell's Posts
Allison Smith Renner
Alison Smith Renner is a research scientist and engineer with 12-plus years of experience designing, building, and evaluating intelligent systems and interactive visualizations for data exploration, analysis, and augmented decision making. Her research lies at the intersection of AI/ML and human-computer interaction, building explainable and interactive AI/ML systems to engender trust, improve performance, and support human-machine collaboration.
[email protected]
View All Allison Smith Renner's Posts
Peter Gall Krogh
Peter Gall Krogh is trained as architect and product designer. He is professor in design and heads the Socio-Technical Design group at the Department of Engineering at Aarhus University.
He contributes to service and interaction design both in doing and theorizing based on co-design techniques with a particular interest in aesthetics, collective action, and proxemics.
[email protected]
View All Peter Gall Krogh's Posts
Kristina Höök
Kristina Höök is a professor in interaction design at the Royal Institute of Technology (KTH). She is a SIGCHI ACM CHI Academy member since 2020, an ACM Distinguished Scientist since 2014, and editor in chief of ToCHI, but most of all fascinated and devoted to developing ideas around interaction design, aesthetics and movement—or, as she frames it: soma design.
[email protected]
View All Kristina Höök's Posts
Mikael Wiberg
Mikael Wiberg is a full professor in informatics at Umeå University, Sweden. Wiberg's main work is within the areas of interactivity, mobility, materiality, and architecture. He is a co-editor in chief of ACM Interactions, and his most recently published book is The Materiality of Interaction: Notes on the Materials of Interaction Design (MIT Press, 2018).
[email protected]
View All Mikael Wiberg's Posts
We need to get rid of significance in A/B testing, seriously!
Authors:
Maximilian Speicher
Posted: Tue, February 08, 2022 - 11:20:00
In my previous job, I was responsible for all A/B testing in an e-commerce department. We worked with an external partner and took care of the whole package—from generating hypotheses to developing the tests to providing a report of the results. Those reports were presented to a variety of stakeholders, including product management, merchandise, and the director of e-commerce. To conduct and analyze the tests, we worked with a dedicated experimentation platform that calculated all the numbers per treatment in the traditional, frequentist way. We had the number and share of visitors, values for the different metrics that were measured, a single number for the uplift (or effect size) of the primary metric, and a percentage denoting “confidence.” Our reports displayed more or less the same numbers, but wrapped in a bit of story, with a recap of the test hypothesis and design and recommendations for action. This practice, however, unveiled a number of issues and misunderstandings.
First, it was problematic that we communicated a single number for effect size. This was misunderstood by many stakeholders (and I can’t blame them for that) as the uplift we would definitely make, given that the test was “significant” (see below). But that number denoted only the maximum likelihood estimate. The uplift expected by stakeholders deviated from the true expected value. Sometimes, after implementing a successful test, this manifested in questions such as, “Why don’t we see the 5 percent uplift that we measured during the test in our analytics numbers?”
Second, in tandem with the above estimate, confidence was also often misinterpreted. In our case, it was 1-p, with p being the probability of results at least as extreme as the observed one, if there were no difference between control and treatment. And while this cannot even be interpreted as the chance that there’s any uplift at all [1], many stakeholders mistook it for the actual chance to achieve the reported maximum likelihood estimate (“The test has shown a 96 percent chance of 5 percent uplift”). Truth be told, we didn’t do a very good job of explaining all the numbers—p-values in particular are just so far from being intuitively understandable [1]—so part of the blame was on us. Yet, reporting something along the lines of “We’re confident there’s a chance of uplift, but we can’t exactly tell how much” is also not an option in fast-paced business environments with short attention spans. It’s a catch-22.
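To make this concrete, here is a minimal sketch of where such a “confidence” number can come from, assuming the platform used a standard two-sided two-proportion z-test (an assumption on my part; our platform’s exact method wasn’t documented) and purely hypothetical visitor and conversion counts:
# A hypothetical two-proportion z-test: where a frequentist "confidence" number can come from.
from scipy.stats import norm

def confidence_from_ztest(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (p_value, 'confidence' = 1 - p) for a two-sided two-proportion z-test."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / se
    p = 2 * (1 - norm.cdf(abs(z)))  # probability of a result at least this extreme if there were no difference
    return p, 1 - p

# Hypothetical test: 2,000 vs. 2,100 conversions out of 50,000 visitors per arm.
p, confidence = confidence_from_ztest(2000, 50_000, 2100, 50_000)
print(f"p = {p:.3f}, 'confidence' = {confidence:.1%}")  # not the chance of any uplift, let alone a specific one
Nothing in that output says how likely the treatment is to be better, or by how much; it only quantifies how surprising the observed data would be if there were no difference at all.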
Third, and most problematically, there was always the magical and impeccable threshold of “95 percent confidence” or “95 percent significance” (i.e., a significance level of α=.05) in stakeholders’ minds. If a treatment had a confidence of ≥95 percent, it was considered good; otherwise it was considered bad, no further questions asked.
All of the above led to the following decision model when a test was completed:
if (confidence >= 0.95 && uplift > 0) {
implement treatment;
} else {
keep control;
}
And this is just plain wrong.
First of all, one can argue that a significance level of α=.05 is rather arbitrary. After all, economists usually work with α=.1 in their experiments, so why not use that? Still, “95 percent significance” was etched into many of our stakeholders’ minds simply because it’s the most widely used threshold. Working with such a relatively tight threshold might also rob us of a lot of information. For a result that falls short of the magical 95 percent confidence, the only thing we can really deduce is that we can’t reject the null hypothesis at the given significance level—and especially not that the control should be kept! And if we’re above 95 percent, we still can’t reliably communicate what uplift we’re probably going to make. In the else part of the above decision model, we’ve probably discarded many a treatment that would’ve proven good if we hadn’t relied on p-values so much.
To make things worse, at a more fundamental level, a p-value from a single experiment is very much meaningless. Steven Goodman writes, “[T]he operational meaning of a P value less than .05 was merely that one should repeat the experiment. If subsequent studies also yielded significant P values, one could conclude that the observed effects were unlikely to be the result of chance alone. So ‘significance’ is merely that: worthy of attention in the form of meriting more experimentation, but not proof in itself” [2]. Additionally, p-values can vary greatly between repetitions of the same experiment [3,4].
In other words, whether a single result lands on this or that side of the magical threshold is, to a certain degree, simply random.
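This is easy to demonstrate. The following sketch reuses the same hypothetical two-proportion z-test as above and repeatedly simulates an identical experiment with a genuine (and, again, hypothetical) relative uplift of about 5 percent; the resulting p-values scatter on both sides of the .05 threshold:
# Simulate repeating the same A/B test many times with a fixed, real uplift (hypothetical numbers).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def p_value(conv_a, conv_b, n):
    pooled = (conv_a + conv_b) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n)
    z = (conv_b / n - conv_a / n) / se
    return 2 * (1 - norm.cdf(abs(z)))

true_rate_a, true_rate_b, n = 0.040, 0.042, 50_000  # hypothetical true conversion rates per arm
p_values = np.array([
    p_value(rng.binomial(n, true_rate_a), rng.binomial(n, true_rate_b), n)
    for _ in range(1_000)
])

print(f"'significant' at α=.05 in {(p_values < 0.05).mean():.0%} of repetitions")
print(f"p-value spread (5th to 95th percentile): {np.percentile(p_values, 5):.3f} to {np.percentile(p_values, 95):.3f}")
Depending on nothing but sampling noise, the very same treatment passes the threshold in some repetitions and fails it in others.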
That being said, as Jakob Nielsen explains, there are sometimes very good reasons to implement a treatment, even if it did not achieve a “significant result” [5]. After all, significant or not, there’s always a certain probability for a certain uplift, which has to be weighed against implementation and opportunity costs [5]. But if maximum likelihood estimates and p-values are not suited to communicate that and facilitate informed decisions, what could be a better way? One potential remedy that’s noted by both Steven Goodman [2] and Nassim Taleb [3] is to rely on a Bayesian approach, which has a number of advantages, including interpretability [2,4] and the possibility to report probabilities for minimum uplifts (based on a complementary cumulative distribution function).
So, what did we do? We did some good ole user research and talked to stakeholders about our reports and what they needed to make informed decisions. For all of the reasons stated above, we got rid of the notion of significance altogether. Instead, together with our partner, we started using Bayesian inference to calculate minimum uplifts for certain probabilities. Additionally, using a formula from our controlling department, we translated all the rather abstract conversion rates and average order values into expected added revenue per month. That is, at its core, our reports read, for example, “With a probability of 95 percent, we’ll make ≥10,000 euros a month; with a probability of 90 percent, we’ll make ≥15,000 euros a month,” and so on. Now, without some magical threshold to rely on, our stakeholders had to actively decide whether they deemed the probability of a certain minimum additional revenue high enough to justify implementation of the corresponding treatment. They could calculate an expected ROI and make an informed decision based on that.
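For those curious what such a report boils down to computationally, here is a minimal sketch of this Bayesian reporting style, assuming Beta(1,1) priors on the conversion rates; the test counts, monthly traffic, and average order value below are hypothetical, and the revenue formula is a simple stand-in for whatever a controlling department would actually provide:
# Bayesian A/B readout: minimum added revenue per month at given probabilities (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(42)
samples = 200_000  # posterior draws per arm

# Hypothetical test results: conversions out of visitors per arm.
visitors_a, conversions_a = 50_000, 2_000
visitors_b, conversions_b = 50_000, 2_120

# Beta-binomial conjugate posteriors over the true conversion rates (Beta(1,1) priors).
cr_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, samples)
cr_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, samples)

# Translate the posterior uplift into added revenue per month (stand-in formula).
monthly_visitors = 1_000_000
avg_order_value = 80.0  # euros
added_revenue = (cr_b - cr_a) * monthly_visitors * avg_order_value

# Complementary cumulative distribution: "with a probability of X, we'll make at least Y a month."
for prob in (0.95, 0.90, 0.80):
    min_revenue = np.quantile(added_revenue, 1 - prob)
    print(f"With a probability of {prob:.0%}, added revenue ≥ {min_revenue:,.0f} euros/month")
Each line of that output is exactly the kind of statement stakeholders can weigh directly against implementation and opportunity costs.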
I don’t mean to say that we invented something new. There are already plenty of A/B testing tools going the Bayesian way, including Google Optimize, VWO [6], iridion, and Dynamic Yield, to name just a few. And yet, there are some tools—and many experimenters—that still blindly rely on notions of significance, which just doesn’t make sense. There is no sudden change in the quality of a result solely because it passes an arbitrary threshold [3]; and it is perfectly fine to conclude that in a given situation, 80 percent is a high enough chance to make an additional 500,000 euros a month.
The more I think and read about this topic, the more I’m convinced that p-values and significance are utterly ill-suited tools for A/B testing. They’re difficult to understand, interpret, and communicate; unreliable; prevent perfectly good treatments from being implemented; and there’s a better alternative. Therefore, we should get rid of them.
Endnotes
1. Google. Optimize Resource Hub—for those used to frequentist approaches. 2021; https://support.google.com/optimize/answer/7404625#dont-significance-and-p-values-tell-me&zippy=%2Cin-this-article
2. Goodman, S. A dirty dozen: twelve p-value misconceptions. Seminars in Hematology 45, 3 (2008), 135‒140; http://www.ohri.ca/newsroom/seminars/SeminarUploads/1829%5CSuggested%20Reading%20-%20Nov%203,%202014.pdf
3. Taleb, N.N. A short note on p-value hacking. arXiv preprint arXiv:1603.07532, 2016; https://arxiv.org/pdf/1603.07532.pdf
4. Amrhein, V., Korner-Nievergelt, F., and Roth, T. The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research. PeerJ 5 (2017), e3544; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5502092/
5. Nielsen, J. Handling insignificance in UX Data [Video]. YouTube, 2021; https://www.youtube.com/watch?v=QkY8bM5bAOA
6. Stucchio, C. Bayesian A/B testing at VWO. Whitepaper. Visual Website Optimizer, 2015; https://www.chrisstucchio.com/pubs/VWO_SmartStats_technical_whitepaper.pdf
Maximilian Speicher
Maximilian Speicher is a computer scientist, designer, researcher, and ringtennis player. Currently, he is director of product design at BestSecret and cofounder of UX consulting firm Jagow Speicher. His research interests lie primarily with novel ways to do digital design, usability evaluation, augmented and virtual reality, and sustainable design.
[email protected]
View All Maximilian Speicher's Posts
Could tech upgrades increase Covid risk? Covid-19 risk in public washrooms
Authors:
Ilona Posner
Posted: Wed, February 02, 2022 - 10:40:43
Dyson AirBlade Wash + Dry faucet instructions in English and French. Photo by Ilona Posner
Many airports closed drinking fountains for fear of spreading Covid-19. Or perhaps they did it to increase the sale of bottled water. In either case, travelers need to search hard to find free drinking water to refill their bottles after passing through security, before boarding flights.
Zurich airport was one place with easy access to clean and trustworthy Swiss drinking water! On my trip through this airport in November 2021, I was surprised and saddened to see a change.
Previously, this airport had traditional washroom faucets that dispensed water at any temperature and were activated touch-free by sensors. These old faucets have been replaced by the high-tech Dyson AirBlade Wash + Dry, an all-in-one faucet and hand-dryer device.
So, now, instead of being able to get warm water for washing or cold water for drinking, these faucets deliver either warm water or warm air at high pressure. The user must place their hands directly under the center of the faucet for water, and to the sides for air. These instructions are written in German on stickers attached to the faucets and on trilingual signs in English, German, and French on the mirror in front of the sink (see below). Travelers speaking other languages, or those who are visually impaired, need not bother with these instructions.
Faucet instructions in English, French, and German on the mirror (left). German-only instructions on the faucet handle (right). Photos by Ilona Posner
Unfortunately, this important information is not at all obvious to the weary travelers passing through the airport! Instead of water, they are often surprised by a blast of air from the faucet. My first attempt at handwashing resulted in me being showered by foamy white soap. The same thing happened to my washroom neighbor. I almost got a picture of the annoyed lady cleaning the wet mess off the shiny black countertop, while swearing in German under her breath!
Wet counters with Dyson AirBlade Wash + Dry faucets at the Zurich airport (left) and the Louvre Museum, Paris (right). Photos by Ilona Posner
In my day job as a user experience consultant, I conduct usability tests of technology to see how real users react. Given this informal two-user usability test, I am concerned about the decreased hygiene at the Zurich airport, considering all the different types of dirt people might try to wash off in an airport at any time, not just during a pandemic!
Imagine the horror of a sudden gust of forced air sending filth flying around the room to land on unsuspecting passengers. I have traveled with babies and had to clean them in airport washrooms. This difficult task has been made harder by the Dyson AirBlade Wash + Dry.
Ironically, just one week before my trip through the Zurich airport, a student had shared an image of the Dyson AirBlade Wash + Dry during a discussion of good and bad designs in my UX Design class at the University of Toronto. In the next class, I shared my airport story about this same device.
One more detail worth mentioning is that this change of faucets happened during the pandemic. At the same time as Zurich airport upgraded its faucets to the AirBlade Wash + Dry, other airports and public places turned off their forced-air hand dryers due to concerns that the air they propel could spread the coronavirus.
“Gesperrt für Ihre Sicherheit” (“Closed for your safety”) sign on a Dyson AirBlade hand dryer in Graz, Austria (left). "Caution, hand dryer out of service. Please use the provided paper towels" sign at the Lindt Home of Chocolate Museum, Kilchberg, Switzerland, September 2021 (right). Photos by Ilona Posner
Before concluding, let’s estimate the cost of the rollout for replacing the traditional faucets at Zurich International Airport. Don’t forget that, to quench travelers’ thirst, they will also need to add water fountains to replace the functionality lost with the temperature-controlling faucets. Plus, I don’t know the value of any commission earned on the sale of all the hardware. But let’s just look at a rough estimate:
The rough estimate shows about a tenfold difference between the original faucets and the Dyson upgrade.
Unfortunately, some technology upgrades do not improve users’ experience despite the cost.
Postscript: On my return trip home through ZRH Zurich Airport, I again encountered these Dyson faucets. I noticed that when the airflow activated, the faucet moved at its base on the sink. The forced air was so strong that it almost blew the faucet out of the sink. The movement of plumbing components is never a good thing; I can speak from experience, having had many plumbing projects in my home. This suggests that the Dyson faucets will require additional maintenance over their lifetime, adding further to the above cost estimate.
Finally, after I passed security, I was thrilled to find a washroom with traditional faucets. These older designs enabled me to refill my water bottle with cool, clean Swiss tap water that I could safely bring with me on board the plane for the nine-hour trip home. Thanks, ZRH!
Airport washroom with traditional faucets.
Ilona Posner
Ilona Posner is a user experience consultant and educator. She enjoys encounters with good design and is enraged by bad design that jeopardizes users' experience, health, and safety. She teaches UX design to computer science students at the University of Toronto and tries to help techies develop empathy for their users.
[email protected]
View All Ilona Posner's Posts
Rethinking the Participatory Design Conference experience
Authors:
Rachel Clarke
Posted: Mon, January 31, 2022 - 4:07:02
I read an article recently that suggested with the growth of online and hybrid conferences in response to Covid-19, there were more opportunities for everyone. The focus of the article was on celebrating the enabling potential and design of technology to make this happen. But after spending the past two years attending online conferences and now serving as general co-chair for the 2022 Participatory Design Conference (PDC 2022), I think it’s a mistake to focus on what technology can or cannot bring to the table in trying to make conferences more equitable.
Technology, in all shapes and sizes, offering all sorts of novel interactions for online conferencing and meetings, has been at the front of our brains for many working hours as we hastily integrated and abandoned new apps while juggling alternative ways of working and domestic life. Many conferences took on this challenge recognizing that our reliance on the face-to-face conference model, while benefiting many, also excludes many interesting scholars and practitioners. Extortionate prices for flights, accommodation, and attendance; being away from family; care responsibilities; language; access to existing networks; CO2 emissions—all are key issues that have always existed, but recently became more apparent. In organizing PDC 2022 with co-chairs Yoko Akama and John Vines, we recognized these challenges before the pandemic. We felt there were also other approaches to reconceptualizing what conferences could look like if they were more decentralized, encouraging a greater respect for a diversity of place-based responses to particular themes before working on the technology.
PDC 2022 branding by Ellen McConnell
PDC Places
PDC 2020 was hosted in Colombia, despite the fact that it had to quickly become an online-only event. This showed what could be done with a more localized approach to engaging a global network of scholars and practitioners doing and discussing PD while drawing from a strong tradition of Latin American political activists and theorists.
PDC Places is a new initiative for PDC 2022 where distributed face-to-face and hybrid gatherings will take place across 11 different locations around the globe before, during, and after the conference program in August. As general chairs, we were keen to find ways of addressing and reaffirming our commitments to equity and sustainability and a more distributed place-based approach to PD as a diverse set of practices and concerns, beyond Europe and the U.S., despite technically hosting part of the conference in Newcastle, U.K.
We’ve been lucky to have the extraordinary expertise of Reem Talhouk and Andy Dearden leading the way in helping to take this vision forward through a lively network of scholars, practitioners, and industry professionals. We initially received 18 expressions of interest to run PD events around the globe, which created a lot of discussion between potential hosts and Places chairs to define how we can practically make this vision happen. We’re now working on consolidating opportunities for meaningful connection across Places hosts, including creating a flexible program and branding strategy that can be personalized and work with different languages.
Time zones, language, and facilitation
Some of the ways we are trying to build these connections are careful planning for time-zone scheduling, multilingual translation, and facilitation. We’ve spent hours mapping time zones, creating “touchpoints” for when different countries can reasonably connect with each other online without people feeling the need to wake up in the middle of the night. One of the challenges of global online conferences is that people do not share the same time zone. It sounds obvious, but most conferences take place in the one time zone that best suits the conference hosts. This makes a lot of sense, but time-zone scheduling has enabled us to rethink and plan for running hybrid and online sessions, being mindful of when different countries across Europe and Africa, Australasia, and the Americas are more likely to be able to connect, to avoid the limits of time-zone silos.
We’re planning for a day of hybrid multilingual events to include keynotes and panels. It was our intention to spread these multilingual events throughout the conference, but working with Newcastle University’s School of Modern Languages, our plan is to have several different languages, and focus on one day, due to the complexities of translator scheduling, setup, and costs. We are, however, also hoping that more informal translation can take place. One of our workshop chairs, Angelika Strohmayer, mentioned a conference she had attended recently where people provided more intimate whispered translations encouraging more informal interpretation, which is in the spirit of generosity and the kind of conference atmosphere we hope to create.
It’s also important to recognize the role of careful facilitation in creating connection between global communities. Nowhere is this more important than in facilitating hybrid online and face-to-face dialogue. Facilitation is a craft that is not always appreciated, but we’ve learned a lot about how important facilitation and chairing are to making any discussion work well and to making the most of our time together, whether online, face-to-face, or in a hybrid environment. We’re hoping we can do this in a way that supports panel and paper chairs in embracing this approach, providing guidance and sharing practice and experience from the past two years.
Online platform
And finally, yes, the technology. We have started to design a bespoke online platform that bolts together different videoconferencing, scheduling, and networking tools that run on low bandwidth, mindful of places where intermittent online access still presents lags and outages. We have already used these platforms in our lab during large-scale global gatherings including MozFest, IFRC Climate Red, and ServDes at RMIT. We’ll have all the usual things that the PD community have nurtured over the years—presentations for full papers, exploratory papers, situated actions and exhibition, workshops and doctoral colloquia, beyond academia, panels, and awards—but we are also encouraging people to be creative and experiment with hybrid formats. There will also be representation from our PDC Places communities and activity to further bring to life events happening across the globe. There has been so much discussion about how the online experience has diminished the conference experience as a whole. Many of us have felt this, but there are also people doing some really creative things with the technology too, not necessarily creating better experiences than being face-to-face, but aiming to create alternative experiences with what they have to hand.
The technology is important for helping us achieve these things, but we are also very excited to welcome people to Newcastle itself as part of the conference experience, with our unique urban, rural, and coastal landscape. Some of our workshops will be hosted at the Urban Sciences Building at Open Lab, and our situated actions and exhibition program will combine online content, short films, live performances, and a visual art exhibition. Rather than packing the program with presentations, though, we will focus on making the most of our time together for focused discussion, and, if people can make it, we look forward to seeing you in person very soon.
PDC 2022 will take place online from August 19, with events in Newcastle upon Tyne, U.K., from August 30 to September 1, 2022 (https://pdc2022.org).
Rachel Clarke
Rachel Clarke is a design researcher and senior lecturer in interaction design at Open Lab, Newcastle University. Her work focuses on the politics of participatory design practice, international development, and smart urbanization in more-than-human worlds.
[email protected]
View All Rachel Clarke's Posts
Black literary technoculture and Afrofuturist rupture
Authors:
Kristen Reynolds
Posted: Wed, October 20, 2021 - 10:31:41
As an epigraph to her incomplete book, Parable of the Trickster, Octavia Butler writes, “[T]here’s nothing new under the sun, but there are new suns.” I take this to be her call for opening ourselves up to those other worlds that are already here, yet we have been conditioned to ignore. To seek them out and challenge ourselves to explore and embody the different modes of being that emerge within them. Butler’s vision of the current world and the other worlds we might one day inhabit is far from utopian in nature. Even her Parables series reveals a possible future in which Butler’s version of a more just, sustainable, and philosophically mutable world thrives in a bubble alongside a white, colonialist world that endures just beyond the trees that encapsulate it. People in this space must constantly negotiate the axis of Afrofuturist possibility and colonialist modernity that structures the world as many of us have come to know it. In these negotiations, Butler grapples with what it means to access otherwise ways of being in a world where technologies of whiteness and coloniality maintain and expand European/American imperial projects globally. How, then, can we use Butler’s call to elucidate the theory and the practice of Black techno past/present/future? A partial answer to this question resides in Black speculative visioning in N. K. Jemisin’s creative works and in Simone Browne’s concept of dark sousveillance.
Jemisin’s work, for example, presents methods that challenge anti-Blackness, resisting the colonial disciplining that limits our vision of possible futures. Her Broken Earth Trilogy begins with a literal earth-shattering rupture—an earthquake with global implications that is brought on by the power of a Black man who desires to set the stage for the creation of a new world. Through this rupture, we are exposed not simply to something new but to a different way of being that has been there all along. Jemisin questions the limits of the world we have been conditioned to accept and explores what might be found when we break away from it to explore a landscape beyond those limits. Because Jemisin’s work also contends with Black world/love/meaning-making amid the backdrop of anti-Black violence, the trilogy can also be read as asking and attempting to answer two questions: What do we know about Blackness and what do we want to know about Blackness? Put another way, what do we know about Blackness beyond the violence and dispossession that white supremacist coloniality foregrounds? What of Blackness exists in excess of these limits? The work of Afrofuturism (and the Black speculative more broadly) offers an answer to Black being in excess of the limitations of the white colonial mind, Black being in and as rupture, Black being beyond white imagination. With this work in mind, we learn not only of Black being in resistance to anti-Blackness but also of Black being as the foundation for different worlds and the technologies that might emerge within them. Afrofuturist practice as presented in Butler’s and Jemisin’s fictional worlds gives us the tools to think technology otherwise and develop road maps for living alternate technological worlds. They are not promises, but possibilities.
This reading of the philosophies at work in Butler’s and Jemisin’s narratives facilitates my analysis of Black critiques of technology as Afrofuturist practice. Consider, for example, Simone Browne’s discussion of dark sousveillance in Dark Matters. In the introduction, she describes it as “an imaginative place from which to mobilize a critique of racializing surveillance…[one which] plots imaginaries that are oppositional and that are hopeful for another way of being.” She is clear in articulating that “acts that might fall under the rubric of dark sousveillance are not strictly enacted by those who fall under the category of blackness” (emphasis mine). Dark sousveillance as a rupturing Afrofuturist practice centers Blackness and in so doing situates Black folks as the architects of possible, otherwise techno-futures. In these futures, all people who are subjected to the dehumanizing gaze of white epistemologies and surveillance technologies find opportunities for realizing alternative ways of being. Put another way, Afrofuturism is a Black epistemic project generating worlds that counter and/or erode surveillance as we know it. In this worldmaking, we are privileged to encounter Blackness as both a foundation to and entry point into other relations of seeing and being seen.
The ideal project and practice of Afrofuturism resides in part in how it transforms innovation and traces its lineages. How does Harriet Jacobs’s decision to steal away to an attic—away from the sight of those who might harm her—anticipate Black practices of hiding away in the present moment? How are Black organizers’ and activists’ practices of thwarting today’s conventions of surveillance (phone, video, social media) both rooted in a long history of Black counter/antisurveillance and generative of otherwise ways of seeing and being seen (or not)? These issues are of course much more complex than presented here, but it remains that these Afrofuturist ruptures, these innovations, are happening around us daily. It is incumbent upon us all to see them. Doing so requires that Blackness be seen as the transformative worldmaking force that it is. Afrofuturism gives us that.
Kristen Reynolds
Kristen Reynolds is a Ph.D. student in American Studies at the University of Minnesota. Her research explores the liberatory aspects of Black speculative fiction, particularly engaging with the ways in which it allows us to posit alternative techno-futures that explore realities beyond the limiting and oppressive systems that define our presents and pasts.
[email protected]
View All Kristen Reynolds's Posts
Where is the transparency of the new ACM Violations Database?
Authors:
Angelika Strohmayer
Posted: Thu, September 23, 2021 - 10:28:16
After a summer of discussions and action about racism in our community in 2020 (see, e.g., [1]), the spring of 2021 raised further concerns about harassment and oppression. Long before this eventful year, however, I and many others had been having both private and somewhat public conversations about the need to reckon with the trauma in our community, whether due to sexual harassment, bullying, racism, ableism, sexism, or a host of other harms that are experienced and perpetrated on a daily basis (see, e.g., [2,3]).
As shown in some recent Interactions articles and publications, we appear to be starting to take these concerns more seriously as a community. I have seen articles that start to ask questions about what institutional and informal systems and practices we need to be able to handle the trauma, and how we can reduce experiences of harm. The recently announced ACM Violations Database (published in a blog post for our SIGCHI community on May 25, 2021: https://sigchi.org/2021/05/the-new-acm-violations-database/) is one way of grappling with these harms. But for it to function in our SIGCHI community, which is part of the ACM, we need clear, transparent communication channels and community engagement with the database, as well as conversations about how, why, and when it is used. I, and I’m sure many others, have questions about the system, its functions, and its uses.
With this blog post, I want to start a public conversation about the violations database and how, or even whether, it addresses harms in our community. I feel it is important to establish that I am writing this piece because I have not received adequate, or in some cases any, answers to questions I have raised about the database. I will start by sharing some of my initial thoughts on the transparency, or rather the lack thereof, of the development, use, and communication of and around this new system. I hope that through public discussion, the community that makes SIGCHI, and the ACM, the prestigious body that it is, will be listened to in the continued development of the database.
I welcome the violations database as a real effort to address and change conditions, but do not agree with how it is being shared, communicated, or developed. The SIGCHI blog post introducing it included the following: “If you have further questions regarding ACM policies, please contact the person indicated at the end of this piece. For questions about the ACM’s Violations Database, please contact [email protected]. Questions regarding the SIGCHI process can be sent to [email protected].”
As a researcher who has done work with sex-work peer-alerting systems in the U.K. [4] and Canada [5], especially having looked in detail at the architecture, trust, social, and political contexts in which these databases sit; as a SIGCHI member who has experienced abuse and harassment from members of our community; and as a person who works to change our academic cultures around power and abuse, of course I had questions.
I drafted an email outlining questions I had about the use of the system in relation to SIGCHI and sent it to the address mentioned in the blog post. In the email, I tried to find a balance between 1) being supportive of this initiative and how it puts into practice some of the things that were outlined years earlier in the ACM Harassment Policy and the ACM Code of Ethics and Professional Conduct; and 2) asking critical questions about the use of the database, based on my personal and research expertise. My questions included ones about the methodologies of the database’s creation, the role of the advocate mentioned in the post, the decision-making processes behind the database, and how its sustainable use was envisioned at the SIGCHI level.
Some of my questions were partially answered, but most of them were not. Instead, I was directed to the ACM policies mentioned in the blog post, without an indication of where in the policies I should look, and encouraged to email Vicki Hanson, CEO of the ACM. I mustered up my courage and fed my healthy disregard for authority and sent another email.
Perhaps the most disappointing response in this whole exchange did not relate to a question, but rather to my offer of support as an expert who has done research for multiple years with databases that can be called upon to see whether someone has previously been reported for having perpetrated harms. My expertise was not acknowledged, nor was there any understanding of my knowledge as someone who herself had previously experienced abuse. Instead, I received this response: “I should note that the ACM has significant resources including their own lawyers who inform all they do.” This is just one more example of how SIGCHI and the ACM do not appreciate community-driven initiatives [6].
In my opinion, a violations database is not something that should be developed solely by lawyers. It should predominantly be driven by our community and the expertise that is within it; it needs to be embedded in our politics and our social structures, and, most importantly, it should center survivors and their experiences—not those who cause and perpetuate the harm. A violations database should not be developed by the same people in an organization who have repeatedly avoided difficult conversations about harms that are experienced by their members and “seem to repeatedly move in a more inclusive direction only to undermine such efforts” [6].
I find it inexplicable that the ACM should leave me and fellow members in the dark about why, how, and when the violations database was produced. But it is even more inexplicable that the (now former) SIGCHI president wasn’t informed either, as was shared with me in an email on May 30, 2021: “Honestly, as of a little over a month ago, I didn’t even know this database was coming.” If neither members nor the executive committees of the SIGs that make up the ACM were informed of this development, whom is the violations database meant to be for?
So what happens now?
Since my initial questions, I have had conversations with others, realizing there are of course many more questions that remain unanswered. But since I had not received an answer from the ACM CEO to any of my questions at the time of writing this piece—I emailed her on May 31 and followed up on June 11, June 29, and July 19—and since the named SIGCHI contact person for questions on the blog post was unable to provide adequate responses to many of my questions, I have also been angry about the lack of transparency—the disregard for open communication channels, or even basic information about the system.
It should not be up to me or any other SIGCHI member to contact the CEO of the ACM to get basic information about this new system that could greatly affect, both positively and negatively, how we exist in our community and at our events. I should not require the courage to email the most powerful people in “the world's largest educational and scientific computing society” [7] to learn the most basic information about this new system. Conversely, I would also argue that it should not be up to the CEO of the ACM to have to answer basic questions about such a system. This whole experience makes me wonder who is in charge of the ACM Violations Database and its uses if there is no point of contact for questions about the system.
To understand how the ACM Violations Database could function in our SIGCHI community, we need more information about it. SIGCHI members and others in the community need space to discuss its use and many potential misuses. We need more details, more context, more understanding. We need responsive infrastructures in place through which we can ask questions and have conversations about harms and violations. None of these currently exist.
This blog post is part of my work on understanding the new system and how we could make something like it work to improve safety at SIGCHI events. After talking to friends and colleagues (some of whom do research on violence and technologies; some of whom are on organizing committees of SIGCHI conferences), I decided it was important to start a public conversation about the violations database. I thank the Interactions blog editors for giving me space to air my concerns. I hope to see others join this public conversation by centering those who have experienced violence, who have gone through ACM, SIGCHI, or other institutional complaints processes, and those who are experts on related topics.
There is so much trauma in our community, some of which has been caused by others in our community as well as the hierarchies and infrastructures of power that govern SIGCHI and the ACM. Finding ways of addressing this harm through structural change such as the violations database is very welcome, at least by me. But for the database to work, we must have information on its intentions and uses; we must have space to ask questions and receive answers; we must center those who have experienced trauma in how we hold space for and reduce opportunities of harm in the future.
Out of professional courtesy, Interactions sent my article to Vicki Hanson prior to publication.
Professor Hanson informed Interactions that she had had technical difficulties with receiving my e-mails. Soon after, on August 24, she was able to answer my questions, and on September 2, we spoke about the violations database, its relation to the complaints process within ACM, the transparency of communication channels between ACM, SIGs, and members in relation to this development, as well as opportunities for change to current processes and systems. I’m thankful to have had this constructive conversation, and look forward to continuing to work with others on issues related to the complaints procedure and violations database.
Endnotes
1. Grady, S.D., Wisniewski, P., Metoyer, R., Gibbs, P., Badillo-Urquiola, K., Elsayed-Ali, S., Yafi, E. Addressing institutional racism within initiatives for SIGCHI’s diversity and inclusion. Interactions blog. Jun. 11, 2020; https://interactions.acm.org/blog/view/addressing-institutional-racism-within-initiatives-for-sigchis-diversity-an
2. fempower.tech. An open letter to first-time attendees. fempower.tech. May 7, 2018; https://fempower.tech/2018/05/07/chi-2018-an-open-letter-to-first-time-attendants/
3. fempower.tech. An open letter to the CHI community. fempower.tech. May 7, 2018; https://fempower.tech/2018/05/07/chi-2018-an-open-letter-to-the-chi-community/
4. Strohmayer, A., Laing, M., and Comber, R. Technologies and social justice outcomes in sex work charities: Fighting stigma, saving lives. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2017.
5. Strohmayer, A., Clamen, J., and Laing, M. Technologies for social justice: Lessons from sex workers at the front lines. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2019.
6. fempower.tech. A call for respect, inclusion, fairness, and transparency in SIGCHI. Interactions blog. Dec. 3, 2020; https://interactions.acm.org/blog/view/a-call-for-respect-inclusion-fairness-and-transparency-in-sigchi
7. https://www.acm.org/about-acm/about-the-acm-organization
Angelika Strohmayer
Angelika Strohmayer is an interdisciplinary technology researcher, working closely with third sector organizations and other stakeholders to creatively integrate digital technologies in service delivery and advocacy work. She aims to work collaboratively on in-the-world projects that engage people at all stages of the research process to engender change toward a more just world.
[email protected]
View All Angelika Strohmayer's Posts
Artificial intelligence is changing the surveillance landscape
Authors:
Arathi Sethumadhavan,
Esra Bakkalbasioglu
Posted: Wed, June 30, 2021 - 12:46:38
The following is a review of recent publications on the issue of AI and surveillance and does not reflect Microsoft's opinion on the topic.
The term surveillance is derived from sur, which means from above, and veillance, which means to watch. Theoretical approaches to surveillance can be traced back to the 18th century with Bentham’s prison-panopticon. The panopticon premise involved a guard in a central tower watching over the inmates. The “omnipresence” of the guard was expected to deter the prisoners from transgression and encourage them to self-discipline. Today, the declining costs of surveillance hardware and software, coupled with the increased data storage capabilities of cloud computing, have lowered the financial and practical barriers to surveilling a large population with ease. As of 2019, there were an estimated 770 million public surveillance cameras around the world, and the number is expected to reach 1 billion this year.
During the Covid-19 pandemic, we have been witnessing a massive global increase in surveillance as well. For example, technologies are used to allow people to check if they have come in contact with other Covid-19 patients, map people's movements, and track whether quarantined individuals have left their homes. While such surveillance technologies can help combat the pandemic, they also introduce the risk of normalizing surveillance.
China is home to the most surveilled cities in the world
Of the top 20 most surveilled cities in the world (based on the number of cameras per 1,000 people), 16 are in China. Outside of China, London and the Indian cities of Indore, Hyderabad, and Delhi are estimated to be the most surveilled cities in the world.
The 20 most surveilled cities in the world (cameras per person). Source: Comparitech.
AI-based surveillance technologies are being deployed globally at an unfathomable rate
Surveillance technologies are being deployed at an extremely fast rate globally. In fact, as of 2019, at least 75 countries were using various AI technologies such as smart policing systems, facial recognition, and smart city platforms for surveillance purposes. Every nation has a unique approach to surveillance, shaped by its technological landscape and economic power as well as its social, legal, and political systems. As of 2019, China had installed more than 200 million facial recognition-powered cameras for various purposes, ranging from identifying thefts, finding fugitives, and spotting jaywalkers to designing targeted advertisements and detecting inappropriate behavior in classrooms. In India, the Internet Freedom Foundation has estimated that there are 42 ongoing facial recognition projects, with at least 19 systems expected to be used by state-level police departments and the National Crime Records Bureau for surveilling Indian citizens. Earlier this year, the Delhi police used facial recognition technology to track down suspects allegedly involved in violent clashes during the farmers’ tractor march in the nation’s capital.
AI Surveillance Technology Origin. Source: Carnegie Endowment For International Peace
Growing deployment in various sectors criticized by civil liberties advocacy groups
Growing reach and affordability have led to increased use of AI-based surveillance technologies in law enforcement, retail stores, schools, and corporate workplaces. Several of these applications have drawn criticism. For example, several police departments in the U.S. are believed to have misused facial recognition systems by relying on erroneous inputs such as celebrity photos, computer-generated faces, or artist sketches. The stakes are too high to rely on unreliable or wrong inputs. Similarly, this year, news that police in Lucknow, in northern India, wanted to use AI-enabled cameras to read expressions of distress on women’s faces when they are subjected to harassment was met with backlash, with civil rights advocates noting how it could violate women’s privacy and exacerbate the situation. Further, the scientific basis for using AI to read “distress” was deemed unsound. In the education sector, a few schools in the U.S. are relying on facial recognition systems to identify suspended students and staff members as well as other threats. Civil liberties groups argue that in these scenarios there is a lack of evidence of a positive correlation between the use of the technology and the desired outcome (e.g., increased safety, increased productivity). Critics also contend that the investigation of petty crimes does not justify the use of surveillance technologies, including the creation of a massive facial recognition database.
Surveillance technologies raise public concerns
Widespread use of new surveillance technologies has raised valid privacy concerns among the general public in several nations. For example, a survey conducted by the Pew Research Center revealed that more than 80 percent of Americans believe they have very little or no control over the personal data collected about them by companies (81 percent) and government agencies (84 percent). A 2019 survey conducted in Britain showed that the public is willing to accept facial recognition technology when there are clear benefits but wants the government to impose restrictions on its use. Further, the desire to opt out of the technology was higher for individuals belonging to ethnic minority groups, who were concerned about its unethical use. These findings demonstrate the need to involve the public early and often in the design of such technologies.
Need for legal frameworks and industry participation to address public concerns
Recently, the European Commission introduced a proposal for a legal framework to regulate the use of AI technologies. As part of this, it has proposed banning the use of real-time remote biometric identification systems by law enforcement, except for a limited number of uses, such as searching for missing children, preventing terrorist attacks, and investigating serious crimes. While the U.S. does not have federal laws regulating AI surveillance, some cities have taken restrictive measures around the use of such technologies by law enforcement. For example, San Francisco became the first U.S. city to ban the use of facial recognition by local agencies, followed by Somerville, Oakland, Berkeley, and Brookline. A number of companies, including Microsoft, have called for the creation of new rules, including a federal law in the U.S. grounded in human rights, to ensure the responsible use of technologies like facial recognition.
Arathi Sethumadhavan
Arathi Sethumadhavan is the head of research for Ethics & Society at Microsoft, where she works at the intersection of research, ethics, and product innovation. She has brought in the perspectives of more than 13,000 people, including traditionally disempowered communities, to help shape ethical development of AI and emerging technologies such as computer vision, NLP, intelligent agents, and mixed reality. She was a recent fellow at the World Economic Forum, where she worked on unlocking opportunities for positive impact with AI, to address the needs of a globally aging population. Prior to joining Microsoft, she worked on creating human-machine systems that enable individuals to be effective in complex environments like aviation and healthcare. She has been cited by the Economist and the American Psychological Association and was included in LightHouse3’s 2022 100 Brilliant Women in AI Ethics list. She has a Ph.D. in experimental psychology with a specialization in human factors from Texas Tech University and an undergraduate degree in computer science.
[email protected]
View All Arathi Sethumadhavan's Posts
Esra Bakkalbasioglu
Esra Bakkalbasioglu is a design researcher at Microsoft, focused on AI ethics. Her recent research includes developing disclosure mechanisms in different domains. She has a Ph.D. in interdisciplinary social sciences from the University of Washington.
[email protected]
View All Esra Bakkalbasioglu's Posts
The labor behind the tools: Using design thinking methods to examine content moderation software
Authors:
Caroline Sinders,
Sana Ahmad
Posted: Thu, June 24, 2021 - 10:02:38
Content moderation is widely known to be hidden from the public view, often leaving the discourse bereft of operational knowledge about social media platforms. Media and scholarly articles have shed light on the asymmetrical processes of creating content policies for social media and their resulting impact on the rights of marginalized communities. Some of this work, including documentaries, investigative journalism, academic research, and the accounts of individual whistleblowers, has also uncovered the practices of moderating user-generated content on social media platforms such as YouTube, Facebook, and TikTok. In doing so, the complex and hidden outsourcing relationships between social media companies and third-party companies—often located in different geographical areas—have been made visible. Most importantly, the identification of these companies and their outsourcing practices has brought to public attention the secretive work processes and working conditions of content moderators.
In this article we illustrate the unique method we undertook to examine the software used in the content moderation process. In December 2020, we held two focus-group research workshops with 10 content moderators. Our participants included former and current employees of a third-party IT services company in Berlin that supplies content moderation services to a social media monopoly based in the U.S. Many participants in our study were immigrants, most of whom had lived in Germany for less than five years. For this group, a lack of German-language skills meant fewer employment opportunities, which led them to apply for content moderation jobs. (As this article focuses on the methodological part of our research workshops, we will not elaborate on the recruitment and work process of the participants.)
The content moderation process is highly confidential; employees are forbidden from describing the work they do and how they do it. When crafting these workshops, we were inspired by collective memory practices. The workshops allowed us to use design thinking exercises to uncover and gauge the infrastructural design, user interface, and user experience design of the systems the participants worked with. While design thinking has a broad meaning and definition, it can be used to problem solve and to create new software or designs; we reverse engineered the process and used exercises designed to help frame questions and translate ideas into tangible interfaces and architectural layouts. From her time as a UX designer in industry, Caroline observed that product designers often lean toward building a specific or concrete “thing,” be it a new product, product augmentation, or process when using design thinking exercises. Our goal was to ground the participants in the space of making. By asking people about their day, the hardships they face, the structure of their workday, and how they iteratively approach their work, a foundation is created to think through building something that would support that work. These exercises helped ground the moderators in the logic of how their software functions and spark memories of the software they use. By working with multiple content moderators in a workshop setting, we created a space of organic reflection and comparison of the tools and protocols they used, leading to discussions on how the Berlin-based employer and the social media client managed the work. In holding our workshops, we followed necessary research protocols, informed by the research ethics standards of the WZB Berlin Social Science Center.
The workshop was divided into five exercises, with each exercise iterating off the previous one. It began with participants writing out a workflow of their day, such as the first thing they see when they log in, what they do after logging in, where their tasks are stored, and how they engage in tasks. Activities belonging to the first, second, and third exercises were aimed at deciphering the layouts, workflows, and design. Activities four and five had a group discussion format that invited participants to share their experiences of workplace surveillance and propose work-related changes and potential improvements.
Designing the Workshops
Ensuring the privacy of content moderators was integral to this project. Considering the company’s secretive work practices, we were aware of the potential threats to moderators’ job security; therefore, they were not requested to provide screenshots of their work. As an additional precautionary measure, the wireframes have been redrawn for this article. The workshops were held using the audio communication function of a Web-based, open-source software, with appropriate attention given to anonymizing the participants.
Apart from privacy considerations, there were other challenges. To examine the infrastructural design, UX, and UI of the software, we needed to guide participants through the process to draw the software they used for moderating content. Design is a specific medium with a language and vernacular of its own. Content moderators may not know the names of or how to describe the elements in the software they use. Therefore, we designed the flow of activities to accommodate this constraint and guide the moderators through an ideation process that could ensure their participation. Our approach was to create exercises that slowly, organically, and iteratively helped the participants sketch the software they used.
Workshop Structure
Both workshops began with presentations by the organizers. Sana introduced the existing studies on the labor process of content moderation and the importance of undertaking this research project, followed by Caroline, who explained basic design elements that we assumed the participants would see in their own software. The main question guiding the workshops was: How can we examine the design of the content moderation software through the experiential inputs of the workers while protecting them? With the collective knowledge of our organizing team—including Sana’s academic research on the labor process of content moderation in third-party IT BPO companies in India and Caroline’s design background and expertise in designing digital tools and software—we aimed to answer the research question in a multidisciplinary manner. The practical knowledge of two student assistants at the WZB Berlin Social Science Center further benefited the workshops.
As mentioned earlier, the indispensable aspect of the workshops was their iterative structure; knowledge about the content moderation software was developed gradually, using a step-by-step process wherein participants could build out, think, and iterate on the software design they were recalling. This gave them space to reflect and redraw for accuracy. Each activity started with general questions on the content moderation process, including the generic layout of the software and the moderators’ workflows, which became increasingly specified in the process of adding complexities in each subsequent workshop activity. Much like generating an outline, we could then go more granular, asking about smaller, more specific features and adding more-qualitative questions on the kinds of content they moderated, how their company organized their workday, and how their team used their software. Such a sequential technique allowed the participants to ease into the design thinking process.
The first activity was planned to obtain an undetailed view of the content moderators’ daily workflows and routines. Accordingly, the participants were asked to list the daily sequences of their work. Through this, we could first determine the type of software used, for example, whether or not it was Web-based. Other elements that could be assessed from the participants’ experiences included their observations while logging in to start the moderation work, the first task they undertook after logging in, and the software functions available, including those related to internal communication (e.g., the ability to send emails to management or messages to team members or other coworkers). This overview was important in getting a preliminary grasp on how workers accessed content moderation tasks and whether they were able to exercise flexibility between carrying out work-related tasks and other assignments.
The activity was useful in persuading the participants to revisit their daily work routine and recall the essential elements of their work process. It also gave them a basis for remembering the different kinds of screens and windows in their software.
Building on the first, the second activity allowed the participants to further iterate and lay out the different screens and states of their work software, pulling directly from the initial list they had made. The list helped the participants look back at what each “state” or “screen” held and the kind of actions each screen allowed for.
The third exercise was focused specifically on the software page layout and gaining insights into the granular elements. During this activity, the participants started drawing and building out a rough wireframe, with focus placed on elements such as menus and buttons and locating different information in distinct pages and positions in the software. Along with their illustrations, participants also shared with us the experiential narratives of the content moderation process, including the management control embedded in the software. Our questions related to these themes spurred responses from the participants.
Wireframes from activity 3 in the second workshop, where participants detailed the software page layout.
The workshops then proceeded to the final activities, which invited participants to engage in group discussions. The fourth activity picked up on the themes of workplace surveillance and monitoring strategies by management. This included examining participants’ views on the surveillance technologies possibly embedded in the content moderation software. The fifth and final activity drew on participants’ understanding of the content moderation process and the ways in which they imagined work could be made better for moderators. In doing so, they were also prompted to think about the possibility of machine-learning tools being used in their work, and whether these could potentially affect their job security.
Conclusion
We see the merits of this research method, especially in being able to draw out human-machine interaction through the collective memories of our participants. Our workshops yielded software design layouts, which are unique given the limited information available on the labor processes of social media content moderation. At the same time, conducting a workshop or focus group interviews can be more fruitful when combined with one-on-one interviews. Considering the precarious background of our participants and their limited possibilities for exercising collective struggles against management, we managed to create a space where current and former content moderators were able to share their work-related experiences and management interactions with us and with one another by focusing on the content moderation software. Future research on the use of technical control and novel ways of labor resistance using technology can enrich the existing research on content moderation.
Caroline Sinders
Caroline Sinders is a machine learning designer/user researcher, artist, and digital anthropologist examining the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital conversational spaces. She is the founder of Convocation Design + Research and has worked with organizations such as Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation.
[email protected]
Sana Ahmad
Sana Ahmad is a doctoral candidate at the Freie Universität Berlin writing her thesis on the outsourced practices of content moderation in India. Her project is aimed at understanding the outsourcing relationships between global social media companies and the supplier companies based in India. In doing so, she looks at the content moderation process and the working conditions of Indian content moderators.
[email protected]
Covid-19, education, and challenges for learning-technology adoption in Pakistan
Authors:
Muhammad Zahid Iqbal,
Abraham Campbell
Posted: Fri, February 05, 2021 - 2:14:12
Creating educational disruption everywhere, the Covid-19 pandemic has hindered the lives of students and, sadly, will probably have a lasting impact on their future academic lives. What has gone relatively unnoticed is that it has created far more difficulties in developing countries. This is due to the fact that these countries were already lacking in internet accessibility, e-learning solutions providers, and government policies for developing localized education technology, as well as personal resources among students. In managing the Covid-19 crisis better than many countries, Pakistan avoided the need for a full lockdown and set an example for the world. Its savvy policies even kept the economy running. Despite its proximity to neighbors China (where the first Covid-19 infection was found) and India (the second-most-affected country), Pakistan is surprisingly safe when compared with Europe and the U.S., with around a 98 percent recovery rate.
Educational technology has been rapidly advancing—smartphones, tablets, augmented and virtual reality, and high-speed Internet, 4G, and 5G connectivity. All of this makes online learning more productive, adaptive, and accessible. In fact, the e-learning industry is currently valued at more than $200 billion and is expected to top $375 billion by 2026. Even so, Pakistan has some of the world’s worst educational outcomes. For example, it has the world’s second-highest number of children not in school: 22.8 million children ages 5 to 16, which is 40 percent of Pakistan’s school-age children.
In an unfortunate twist, the onset of the pandemic coincided with Pakistan’s struggle to implement a uniform curriculum across all provinces. As coronavirus control measures spread throughout South Asia, departments of education and higher-level universities found themselves poorly or, in most cases, completely unprepared for online learning and delivering distance learning. In the past, Pakistan had closed educational institutes due to terrorist attacks and political threats, but there was still no official policy around online education.
Pakistan has an emerging mobile-phone-user market—currently 75 percent of the population uses mobile technology. But out of a population of 220 million—the fifth most-populous country in the world—there are only 76.38 million internet users. That’s only 35 percent of the population, with only 17 percent using social media. Facts on the ground show that access to the Internet is the major hurdle to adopting an e-learning system. Resistance to adopting new technology or learning pedagogies, along with attachment to the familiar classroom environment, also works against online learning policies.
E-learning initiatives in Pakistan and future prospects
When schools were forced to shut, Pakistan started seeking alternative solutions to the globally adopted “suspending classes without stopping learning” policy [1]. One idea was the establishment of a national TV channel for education that would provide equal educational opportunities for all students. The channel programmed content for kindergarten through high school and provided one lesson per day to each grade, so students would have to watch in shifts. Also, during the second wave of Covid, Radio Pakistan started transmitting “radio school” to promote virtual learning in the country for primary-level students, as a part of an effort toward overcoming the digital divide.
In the higher-education sector, Virtual University is at the forefront of virtual learning, providing full-time online learning courses, from bachelor's to Ph.D. level, in different subjects. As the pandemic disrupted education, Microsoft Teams was deployed in Pakistani universities to connect students and teachers. Previously, Microsoft and the Citizens Foundation (TCF) collaborated to provide technology-based education in underdeveloped areas. The eLearn Punjab program has generated educational content based on videos and illustrations for primary and secondary school classes. And in tackling the gender digital divide, the Malala Fund has investigated Covid as an amplifying factor in the girls’ education crisis in Pakistan.
Figure 1. Challenges and possible solutions for the educational landscape of Pakistan.
Lessons learned from the pandemic can be used as an opportunity to redesign learning spaces and restructure the curriculum to facilitate student learning, as shown in Figure 1. This abrupt wake-up call should prompt all relevant stakeholders to reflect on the true purpose of schools and the future of learning in this country.
In Pakistani institutes, there is a lack of technically trained teachers to run online classes smoothly. To strengthen blended, distance, and online learning, there is a need to provide more awareness and accessibility to MOOCs, Coursera, and EdX. There is also a need to develop innovative, immersive learning technologies and modern education spaces using virtual, augmented, and mixed reality technology [2]. These technologies, along with the use of artificial intelligence (AI) and machine learning (ML), can change the future of learning by helping us build more interactive, personalized, and productive learning solutions. More specifically, when we talk about practical, hands-on learning in STEM, where there is an urgent need for learning material, augmented reality can provide virtual material to help teach with the kinesthetic learning approach [3].
Technologically developed countries have innovative and advanced systems for e-learning, allowing them to stay in the loop and keep the learning flow active during this academic year. But in Pakistan, online learning is at a nascent stage. Having started as emergency remote learning, it needs further investment to drive broader adoption and overcome its limitations. Along with establishing Internet access in remote areas, developing specialized authoring tools, and creating awareness for getting the most out of online learning, faculty need training to use online modalities and innovative pedagogies to reduce cognitive load and increase interactivity. In his book 21 Lessons for the 21st Century [4], Yuval Noah Harari highlighted a drawback of the current state of education: the focus is on traditional academic skills rather than critical thinking and adaptability, which are needed to create opportunities and success in the future [2]. This critical period, which is moving us rapidly toward the adoption of e-learning, can spark more focus on providing Internet facilities in remote areas, developing more innovative, low-cost learning solutions, and creating more adaptive and effective methods of learning in the near future.
Endnotes
1. Zhang, W. et al. Suspending classes without stopping learning: China’s education emergency management policy in the Covid-19 outbreak. J. Risk Financial Manag. 13, 3 (2020), 55; https://doi.org/10.3390/jrfm13030055
2. Campbell, A.G. et al. Future mixed reality educational spaces. Proc. of 2016 Future Technologies Conference. IEEE, 2016.
3. Köse, H. and Güner-Yildiz, N. Augmented reality (AR) as a learning material in special needs education. Education and Information Technologies (2020), 1-16.
4. Harari, Y.N. 21 Lessons for the 21st Century. Random House, 2018.
Posted in:
Covid-19 on Fri, February 05, 2021 - 2:14:12
Muhammad Zahid Iqbal
Muhammad Zahid Iqbal is a Ph.D. researcher in the School of Computer Science, University College Dublin, Ireland. His research interests are human-computer interaction, augmented reality in education, touchless interaction technologies, artificial intelligence, and e-learning. He is an alumnus of the Heidelberg Laureate Forum.
[email protected]
Abraham Campbell
Abraham G. Campbell is an assistant professor at University College Dublin (UCD), Ireland, who is currently teaching as part of the Beijing-Dublin International College, a joint initiative between UCD and Beijing University of Technology (BJUT).
[email protected]
Technology use and vulnerability among seniors in Sweden during Covid
Authors:
Linnea Öhlund
Posted: Wed, January 20, 2021 - 2:20:00
From its place of origin, Wuhan, China, the coronavirus has spread worldwide and infected millions of people. Countries have adopted various strategies to curb the spread of the virus, social isolation being one. Sweden is one of few countries that has never closed down entirely but instead has relied on individuals' own responsibility in taking adequate precautions and following recommendations. A part of the strategy has been to recommend that individuals over the age of 70 physically isolate and not have any unnecessary social contact with others to protect themselves from the virus. Despite the many criticisms of the Swedish Covid-19 strategy, many seniors living at home have followed the recommendations of isolating themselves from any non-vital social and physical interaction.
A common way of thinking of this group is that they don't use, don’t want to use, or can't use technology, and that if they simply could use it, many aspects of their life would be improved [1]. This is nowadays somewhat of a preconception, but a study from 2020 found that 87 percent of Swedish people age 66 and up use the Internet. The current global situation of Covid-19 presents a golden opportunity to find out if seniors are using digital technology in ways that help them face negative consequences of isolation, and, more generally, if they have started using technology in a more socially connecting way. With these thoughts in mind, I set out to interview 15 people over the age of 70 living in Sweden. In this blog post, I present a summary of the results of those discussions and discuss how participant demographics, vulnerability, and not using or overusing technology can all be factors that play into the results.
Participant demographics
The ages of the participants were between 69 and 81. Half of them were married, living with a partner in a house or big apartment, and about half were single and lived in an apartment. Participants were contacted through email (due to Covid), which told me in advance that they most likely owned some digital artifact and had enough knowledge to navigate it. They were all young enough to have been working with computers and technology for a substantial part of their working life, which has provided them with knowledge and experience. Most of the participants had a university education. This information tells us that they come from a particular socio-economic background, which means that they have previous knowledge to understand digital technology and presumed capital to buy it. I present the results in the following three categories; the demographics of the participants will be a recurring pattern.
Feeling vulnerable in society
During the interviews, despite no question being directly asked about vulnerability, many felt that in Swedish society today, seniors are not treated well. Especially now during Covid, many thought they have been labeled vulnerable in a negative sense, and that other age groups, but predominantly younger individuals (15 to 35), would be disrespectful and view them with contempt. They felt that the heavy restrictions for individuals 70 and over had not only protected them but also stigmatized them further.
Technology use
I was a little surprised to find that all of the seniors in the study used digital technology daily. They all had smartphones, computers, tablets, and multiple apps such as Facebook, digital banking, Wordfeud, and newspapers. Although not all of them felt particularly skilled or even interested in technology, all of them had a certain type of confidence regarding technology that little previous research mentions. After some discussion, the participants admitted that they played Wordfeud longer, scrolled Facebook more, and made more video calls, although many had initially said they weren't doing this more than normal.
Experiencing further negative feelings by using technology
All participants used technology, but some still felt socially isolated and lonely. In some cases, participants even felt that using technology made them feel sadder and more isolated because they would remember how life was before. Most of the participants also mentioned family or friends who did not use technology and felt left out of society. Many felt frustrated that digital change is happening at such a fast pace in Sweden. This quick change is casting a shadow over a specific part of society, mostly related to age. Some individuals do not have the experience from their work-life that allows them to use technology, nor do they have the means to buy technology. According to some of the participants, the individuals who cannot or will not use technology are therefore left out from many options that would have given them a better quality of life.
Discussion
Even though technology can serve as a tool to connect with friends and family, using it does not automatically give you a socially connected, happy life, free of negative feelings. Many seniors already use technology to a large extent. Still, in periods of isolation, negative feelings seem to increase. Many of them already seem to use technology to try to curb these negative feelings. But overusing technology for longer periods also appears to produce more negative emotions, because it is forced upon them and not chosen. Furthermore, in Swedish society, despite many seniors being tech-savvy, many are left out because they do not use digital technology, which means not having the same opportunities for a higher quality of life as others.
This study and the results from it can be further summarized in four takeaways related to seniors as a vulnerable group, their demographics, and their use of technology when trying to create a better quality of life for themselves. These four points may provide further insight into seniors as a vulnerable group and specifically this type of demographic:
- The Covid crisis has meant that seniors have faced further stigmatization in society. They feel that they are being singled out as a problem and not respected because they may face many negative consequences from Covid-19.
- Many seniors under the age of 80 have extensive knowledge of digital technologies and systems and use them daily without critical challenges.
- The usage of technology has gone up slightly during this Covid period to make up for the lost contact with others, but feelings of loss and sadness remain.
- Overusing technology can create further negative feelings, but not using digital technology at all means being left out of society and missing out on opportunities that could generate a better quality of life. The reasons for not using technology today may be linked to not having enough experience to understand it or enough money to buy it.
Endnotes
1. Khosravi, P., Rezvani, A., and Wiewiora, A. The impact of technology on older adults’ social isolation. Computers in Human Behavior 63, (2016), 594–603; https://doi.org/10.1016/j.chb.2016.05.092
Posted in:
Covid-19 on Wed, January 20, 2021 - 2:20:00
Linnea Öhlund
Utopian futures for sexuality, aging, and design
Authors:
Britta Schulte,
Marie Søndergaard,
Rens Brankaert,
Kellie Morrissey
Posted: Tue, January 19, 2021 - 1:35:54
This excerpt is from a letter to a future self, a story written at a workshop held at DIS 2020 [1], where participants reframed and reimagined what intimacy might mean for the aging body and what role technology might play. Aging and the changes to the body it brings with it are often portrayed as something negative, a time of loss and fading away. Images of older bodies are rarely publicized or celebrated; in fact, old age is more often expressed through a (black and white) image of a hand placed on a shoulder. Although initiatives such as #nomorewrinklyhands try to make this lazy messaging visible as well as counter this stereotype, the prevailing societal fear of growing and appearing older means that—deliberate or not—we tend to erase images of bodies that are engaged in processes of aging. However, when we ignore the aging body, we also erase experiences such as menopause and the changes—positive and negative—that this period brings for people undergoing it. When ignoring the body, we erase close intimate practices that are part of caregiving, including bathing, dressing, and close physical support. When ignoring the body, we erase experiences of intimacy and sexuality and the important part they play in our well-being.
Increasingly, HCI practitioners problematize and critique the way we, as a field, frame and address aging and the aging body [2,3]. This framing impacts the ways we configure and design new technologies that address (or ignore) the well-being of older people [4]. While the community is growing in this respect, many research and design projects in HCI still often steer away from intimacy and sexuality, choosing instead to focus on how older participants can gain satisfaction through family or civic life [5,6]. Although there is a healthy body of work in HCI on innovating surveillance technology or memory aids for older people, HCI has similarly balked at engaging with the “bodywork” that is also necessary in aged care (and which is often left to women carers, or to low-paid care workers). Elsewhere, however, intimacy is a recurring topic for HCI research (see, e.g., [7] for an overview). And beyond this, as a young, cross-disciplinary field, HCI often expands the border of “what can be talked about”: Here, sexuality is often a subject of design research, exploring technology through the lenses of sex toys [8,9] and kink [10], while others explore the role sexuality could play in HCI research through workshops at top-level conferences [11,12]. But it can be argued that there is a comparable pattern here: Through these publications and projects, sexuality is framed at least implicitly as a prerogative of the young and able-bodied.
Organized as part of the ACM Designing Interactive Systems conference in 2020, our workshop, titled “Don’t Blush: Ageing, Sexuality & Design,” explored the (dis)comforts surrounding the potential role of technology in sexuality and intimacy in later life [1]. Bringing together researchers and practitioners from many disciplines, we particularly wanted to imagine speculative, positive futures of aging to help us visualize a sex-positive society. The workshop relied heavily on storytelling as a means to approach, communicate, and ground this sensitive topic in a manner that all felt comfortable with. In a first step, we used three stories—two descriptive of current scenarios (drawn from current practice), one speculative—to kickstart the discussion and familiarize ourselves with the different themes this topic might encompass. The stories described 1) the experiences of a couple not able to find space together privately in a care home, 2) experiences of nursing staff with a “Mr. Carter” and his unsatisfied urges, and 3) the story of a couple who asked the salesman of a robobutler company if the robot could provide support in the bedroom. These initial discussions were useful to find common ground and as a tool to develop a shared vocabulary on intimacy and sexuality. Participants were able to raise the topics important to them, outline blindspots, and relate the stories to their own areas of expertise, as well as to the things they found interesting from the other participants’ workshop submissions. After extensive discussions, smaller groups collaborated on writing a new story. The only direction provided was that the stories were to be optimistic, maybe even utopian, so that, instead of lamenting the status quo, we could develop visions of what we want to see.
The four resulting stories exceeded our expectations of what could be possible in such a short amount of time, as they were subtle, sensitive, and sensual accounts that allowed us to collectively think the “unthinkable.” Drawing on the variety of experiences the participants brought to the table, the stories covered a range of experiences, including: 1) enriching bodily experiences through lingerie that is suitable for people using incontinence pads, 2) sensitive, tailored sexual care package subscriptions openly advertised in care homes, 3) love letters to the body and self-care rituals surrounding menarche and menopause, and 4) a sex-positive older blogger embraced by her family.
This illustration was developed alongside story 2, illustrating the different sex packages available in the fictional future care home.
Even though these stories were planned out as utopian stories, they are inherently grounded in the everyday, the mundane, and the values of the writers. Drawing on the body as a focal point has enabled the authors of these stories to discuss societal changes not in the abstract, but rather on a personal, embodied level. Even though all stories describe deeply personal encounters, they all link to changes beyond these experiences, which are articulated through the artifacts that the stories’ protagonists use. The erotic lingerie that Jo—a nonbinary character—uses to change their body (image) is not only their own imagination: Jo also reflects how it has been shared widely through Instagram adverts, showing that society has made space for conversations around desire, changing bodies, and incontinence to happen. The care home presented in story 2 advertises the sexuality packages they developed openly, hinting at a whole history of conversations, changes, and decisions that took place beforehand. Through the family members, who are somewhat uncomfortable about the idea, we get a hint as to how far society has adapted to it.
As with every good workshop, “Don’t Blush” left us with more questions than answers. But we are convinced that these questions are useful for us as a field to move toward technologies that are truly supportive of the lived experience of older bodies. We tried to summarize some of them here to stimulate discussion within the field.
If we acknowledge that older adults might have or wish for an active sex life, what does that mean for the technologies we develop in this area? How can we ensure that the technologies we create support the joy, dignity, and privacy that we would allow everyone else? How do we develop research and design strategies that approach the question in suitable, sensitive, and satisfying ways?
If we acknowledge that good care in older age means caring for the body as well as ensuring people’s basic health and safety, how can we extend our understanding of intimacy, privacy, and dignity to improve and enrich often-ignored body work (bathing, dressing, and toileting)?
If we acknowledge the aging body and the changes it goes through, what directions does this open up for us to explore through our research and design work? How can we make space for the body in our research and keep an open dialogue about experiences, staying with the (dis)comfort of such conversations?
We hope to keep this conversation going and growing. All organizers, as well as most participants, came from a Western European perspective, with a strong focus on the U.K. and the Netherlands. Both the stories that inspired the work and those that came out of the workshop embody a certain understanding of sexuality, intimacy, as well as aging and caregiving. The majority of the stories focus on a female perspective, which again mirrors the composition of the workshop participants. Even though not always explicitly, most stories further present a heterosexual outlook. While this is a limitation of the current work, it is also an explicit invitation to build on these stories, contradict them, and expand them. In addition, the experiences and wishes of people in their later lives themselves are missing here. We are planning to respond to this by using the stories developed in the workshop as conversation starters and in other forms of qualitative and co-design research with aging people. We will further distill the insights and stories from the workshop into a zine to be shared with academic and non-academic audiences. You can get a copy by contacting [email protected]. If you want to be part of this conversation, please get in touch or join the conversation at #agesextech.
Endnotes
1. Schulte, B.F., Morrissey, K., Juul Søndergaard, M.L., and Brankaert, R. Don’t blush: Sexuality, aging & design. Companion Publication of the 2020 ACM Designing Interactive Systems Conference. ACM, New York, 2020, 405–408; https://doi.org/10.1145/3393914.3395915
2. Kannabiran, G., Hoggan, E., and Hansen, L.K. Somehow they are never horny! Companion Publication of the 2020 ACM Designing Interactive Systems Conference. ACM, New York, 2020, 131–137; https://doi.org/10.1145/3393914.3395877
3. Schulte, B. and Hornecker, E. Full frontal intimacy - on HCI, design, & intimacy. Companion Publication of the 2020 ACM Designing Interactive Systems Conference. ACM, New York, 2020, 123–129; https://doi.org/10.1145/3393914.3395889
4. Vines, J., Pritchard, G., Wright, P., Olivier, P., and Brittain, K. An age-old problem: Examining the discourses of ageing in HCI and strategies for future research. ACM Transactions on Computer-Human Interaction 22, 1 (2015), 2.
5. Reuter, A., Bartindale, T., Morrissey, K., Scharf, T., and Liddle, J. Older voices: Supporting community radio production for civic participation in later life. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2019, 1–13; https://doi.org/10.1145/3290605.3300664
6. Welsh, D., Morrissey, K., Foley, S., McNaney, R., Salis, C., McCarthy, J., and Vines, J. Ticket to talk: Supporting conversation between young people and people with dementia through digital media. Proc. of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2018, 1–14; https://doi.org/10.1145/3173574.3173949
7. Hassenzahl, M., Heidecker, S., Eckoldt, K., Diefenbach, S., and Hillmann, U. All you need is love: Current strategies of mediating intimate relationships through technology. ACM Trans. Comput.-Hum. Interact. 19, 4 (Dec. 2012), 1–19; https://doi.org/10.1145/2395131.2395137
8. Bardzell, J. and Bardzell, S. Pleasure is your birthright: Digitally enabled designer sex toys as a case of third-wave HCI. Proc. of the 2011 Annual Conference on Human Factors in Computing Systems. ACM Press, New York, 2011, 257; https://doi.org/10.1145/1978942.1978979
9. Juul Søndergaard, M.L., and Hedegaard Schiølin, K. Bataille’s bicycle: Execution and/as eroticism. Executing Practices (2017), 179.
10. Buttrick, L., Linehan, C., Kirman, B., and O’Hara, D. Fifty shades of CHI: The perverse and humiliating human-computer relationship. Proc. of the Extended Abstracts of the 32nd annual ACM Conference on Human Factors in Computing Systems. ACM Press, New York, 2014, 825–834; https://doi.org/10.1145/2559206.2578874
11. Brewer, J., Kaye, J., Williams, A., and Wyche, S. Sexual interactions: Why we should talk about sex in HCI. CHI ’06 Extended Abstracts on Human Factors in Computing Systems. ACM Press, New York, 2006, 1695; https://doi.org/10.1145/1125451.1125765
12. Kannabiran, G., Ahmed, A.A., Wood, M., Balaam, M., Tanenbaum, T.J., Bardzell, S., and Bardzell, J. Design for sexual wellbeing in HCI. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2018, 1–7; https://doi.org/10.1145/3170427.3170639
Britta Schulte
Britta F. Schulte is a postdoc at Bauhaus University Weimar. Her work explores our relationships with technologies for elderly care and the ageing body, with a strong focus on intimacy and sexuality. In her work she often uses speculative and creative approaches such as storytelling and design fictions in many forms.
[email protected]
Marie Søndergaard
Marie Louise Juul Søndergaard is an interaction designer and design researcher. Her work explores critical-feminist design of digital technologies for intimate health, such as menarche, menopause, and sexual pleasure. She is currently a postdoc in interaction design and digital women’s health at KTH Royal Institute of Technology, Stockholm, Sweden.
[email protected]
Rens Brankaert
Rens Brankaert is professor of health innovations and technology at Fontys University of Applied Sciences and assistant professor at Eindhoven University of Technology (TU/e). His work focuses on the design of technology, systems, and services for and with people living with dementia, using design research and living lab approaches.
[email protected]
Kellie Morrissey
Kellie Morrissey is a lecturer in design for health and wellbeing at the School of Design in the University of Limerick, Ireland. Her work focuses on experience-centered and phenomenological approaches to the co-design of digital technologies for and with marginalized participants.
[email protected]
Cultivating activism with speculative design
Authors:
Richmond Wong,
Nick Merrill
Posted: Fri, December 18, 2020 - 4:29:36
As design researchers, we love our speculative methods—methods for imagining possible futures—and opening them up to discussion and critique. But what good do they do? Designing speculative futures to discuss values, ethics, safety, and security can feel naive, as fellow researchers are being dismissed for doing the work of ethics.
We want to believe that to imagine possible futures is to be able to change them: to surface discussions of social values and ethics so that we may imagine worlds to work toward (or avoid). But, as prior work has observed, who gets to speculate matters a great deal [1]. Of course, scholarly production of speculative artifacts has its place. But can it make change—lasting change—on the ground?
Our past work has used speculative designs—creating fictional products, headlines, and scenarios for others to react to and play within—to surface discussion and consideration of values, ethics, and alternative notions of security or safety [2,3]. We envisioned these as techniques that could be adopted within existing product design practices.
In a (perhaps subtle) shift, we discuss the role speculative methods may have in fostering activism and dissent, particularly among so-called rank-and-file tech workers such as designers, UX professionals, and engineers.
This concept is not without precedent. Turkopticon, a platform for organizing Mechanical Turk workers, has created lasting infrastructures for workers mobilizing on the platform [4]. Prior work has also involved activists in speculative practices; Asad et al. had activists produce prototypes that expressed their particular needs [5]. In some of our own prior work, we describe infrastructural speculations: a call to use speculative design techniques to center systems of power and imagine alternative ones in our speculative work—questions that are often relegated to the background when speculating about technology and use [6].
We’re motivated by the desire to produce critically oriented practices that can become part of a lasting infrastructure among tech workers—a practice for critiquing technology as common as think-alouds or user personas are for building them. In the midst of widespread public skepticism of technology companies, and a fair share of tech-worker-led dissent and activism (via letter writing, walkouts, and other forms of organizing), there is an opportunity to identify, describe, and discuss points of dissent and refusal of “business as usual.” Speculative methods allow us to imagine, construct, and communicate alternative social relationships and configurations of power. So a critical, speculative method could, with groundwork, become an industry-wide practice for fostering such dissent.
Yet, as Timnit Gebru’s recent dismissal from Google, and broader dismissals of activist-workers (many of whom are PoC, women, trans, and nonbinary) across technology companies illustrate, fostering dissent among tech workers requires more than new speculative techniques. It requires social and organizational change; it requires solidarity among workers. Even if someone comes armed with worthy critique, without worker organizing, their analyses can be met with outright hostility.
We are excited to see the development of new tools and methods for surfacing questions related to values, ethics, bias, and more, often combining speculative methods with approaches such as design fiction, value sensitive design, or participatory design. But many of these interventions—including our own, at times—have abstracted away social issues crucial to the potential adoption and use of these tools: questions of workplace power, the precarity and risk involved in organizing or critiquing, and who carries the burden of that precarity. Our work, which centers structures of economic power and capital, has not engaged deeply enough on how these forces shape the adoption of our practices.
What can speculative practices do for activism? We approach this question humbly. Design, even with a critical orientation, cannot “solve” technology’s problems without touching the social and political structures within which these technologies, and their development, are entangled. Speculative design alone will not save us. Simply raising conversations will not necessarily lead to change. Without an underlying political commitment, we risk that speculative work gets re-appropriated by the systems we attempt to critique [7]. Worse, we risk ignoring the hard groundwork already done by activists, union organizers, and people working in local communities to advocate for more fair, just technical practices.
As we look toward our future work with these practices, we ask ourselves: What pragmatic and tactical work can speculative practices do today to help workers, activists, educators, and organizers already working on the ground to achieve their goals? And to help people who are beginning to ask critical questions become more inclined toward activism?
Our new challenge is to use speculative design to create methods difficult for corporations to co-opt, perhaps methods that take place outside of the corporate world. Even the dystopian visions of speculative methods are seen by some as the next disruptive product.
Toward these ends, and building on the work of colleagues and co-conspirators, we suggest:
- Changing when/where speculative design is done. Deploy speculative design outside of work contexts. While user-centered design methods take place in contexts of work, speculative methods for critique should take place in contexts of organizing and activism.
- Changing with/for whom speculative work is done. Create speculative designs with and for more targeted activist audiences, rather than defaulting to sharing them broadly for general public discussion. Activists are one audience. But speculative work can also make the comfortable, such as the C-suite, uncomfortable. These audiences should not be ignored either.
- Changing what speculations are about. Shift speculative designs away from easy-to-reappropriate imagined products toward depicting futures through other forms.
This process is easier described than enacted. Making the methods, then mobilizing them, takes significant work, and academics will need to work with activists on the ground. Our goal in sharing these reflections is to inspire students, researchers, and practitioners to join in doing that work; to expand and (re)orient speculative methods to further justice and activism, joining existing critical perspectives on design methods [8]. Speculative methods have the capacity to inspire meaningful change, meaningful dissent. We hope our critical self-reflection will spark interest in building reusable, dare we say dangerous methods for fostering activism and dissent. We hope these questions will help our community build them.
* Both authors contributed equally to this piece.
Endnotes
1. O’Leary, J.T., Zewde, S., Mankoff, J., and Rosner, D.K. Who gets to future? Race, representation, and design methods in Africatown. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13; https://doi.org/10.1145/3290605.3300791
2. Merrill, N. 2020. Security fictions: Bridging speculative design and computer security. Proc. of the 2020 ACM Designing Interactive Systems Conference, 1727–1735; https://doi.org/10.1145/3357236.3395451
3. Wong, R.Y., Mulligan, D.K., Van Wyk, E., Pierce, J., and Chuang, J. Eliciting values reflections by engaging privacy futures using design workbooks. Proc. of the ACM on Human Computer Interaction 1, CSCW. 2017; https://doi.org/10.1145/3134746
4. Irani, L. and Silberman, M.S. 2014. From critical design to critical infrastructure. Interactions 21, 4 (2014), 32–35; https://doi.org/10.1145/2627392
5. Asad, M., Fox, S., and Le Dantec, C.A. Speculative activist technologies. Proc. of iConference 2014; https://doi.org/10.9776/14074
6. Wong, R.Y., Khovanskaya, V., Fox, S.E., Merrill, N., and Sengers, P. Infrastructural speculations: Tactics for designing and interrogating lifeworlds. Proc. of the 2020 CHI Conference on Human Factors in Computing Systems, 1–15; https://doi.org/10.1145/3313831.3376515
7. Wong, R.Y. and Khovanskaya, V. Speculative design in HCI: From corporate imaginations to critical orientations. In New Directions in 3rd Wave HCI. M. Filimowicz, ed. Springer, 2018, 175–202; https://doi.org/10.1007/978-3-319-73374-6_10
8. Schultz, T., Abdulla, D., Ansari, A., Canlı, E., Keshavarz, M., Kiem, M., Prado de O. Martins, L., and Vieira de Oliveira, P.J.S. What is at stake with decolonizing design? A roundtable. Design and Culture 10, 1 (2018), 81–101; https://doi.org/10.1080/17547075.2018.1434368
Richmond Wong
Richmond Wong (https://www.richmondywong.com) is a postdoctoral fellow at the University of California Berkeley Center for Long-Term Cybersecurity. His research focuses on how technology professionals address social values and ethical issues in their work, and on developing design-centered methods to surface discussion of ethical issues related to technology.
[email protected]
Nick Merrill
Nick Merrill is a researcher at the UC Berkeley Center for Long-Term Cybersecurity, where he directs the Daylight Lab. He is interested in the social process of threat identification: how and why people identify particular security threats, and who gets to do so.
[email protected]
A call for respect, inclusion, fairness, and transparency in SIGCHI
Authors:
Fempower.tech
Posted: Thu, December 03, 2020 - 3:59:36
We are writing this blog post as a response to the discussions about exclusion and oppression within SIGCHI that occurred on the Interactions blog in summer 2020, and a call for respect, inclusion, fairness, and transparency in SIGCHI. Our collective, fempower.tech, started the #CHIversity campaign at the 2017 CHI Conference on Human Factors in Computing Systems because we didn’t feel welcome in previous years [1]. Through this, we created our own space within SIGCHI. We are one of many groups working to make SIGCHI more inclusive and welcoming to everyone who wants to be a part of this community. SIGCHI is a volunteer-led organization that is shaped not only by elected leaders but also by community members who care. Over time, communities create their own ways of working to make changes in their organizations. Sub-communities form when individuals or groups don’t see themselves represented or fitting into the larger community.
Grassroots groups like AccessSIGCHI or fempower.tech and formalized groups like the Realizing that All Can be Equal (R.A.C.E.) team do this work because we hope SIGCHI can be better. However, when these groups take actions, in some cases encouraged by SIGCHI leaders, they can encounter opposition, disapproval, and accusations of wrongdoing. For example, the R.A.C.E. inclusion team recently explained how the SIGCHI Executive Committee (EC) halted their diversity and inclusion work [2], and Jen Mankoff discussed how the EC hampered her efforts to address accessibility issues in the community by suggesting she violated ACM policy [3]. Based on these descriptions, as well as the experiences that some fempower.tech members have had while doing inclusion-related work in the SIGCHI community, we as fempower.tech observe a pattern that suggests that those holding power to make decisions in SIGCHI do not value community-driven inclusion efforts.
When grassroots or formalized groups have worked to build a more supportive community for themselves, the SIGCHI EC has sometimes responded in hostile ways that undermine or proactively stop these volunteer efforts (as described in the preceding paragraph). We believe that a volunteer-led organization should be open to engaging with community efforts to improve situations for those who experience marginalization. We are disappointed that SIGCHI has repeatedly failed to choose a more constructive and responsive approach when engaging with community efforts.
Jen Mankoff’s post reminds us that marginalization and oppression are not one-time, isolated experiences. They are systemic concerns that affect people’s everyday existence. Many are working to make changes, formally and informally, to dismantle the barriers of racism, ableism, sexism, and other forms of oppression. Groups like the Inclusion Teams, SIGCHI CARES, AccessSIGCHI, or the CHI2019 Allyship initiative offered hope that SIGCHI wanted to tackle problems related to marginalization. Yet, by stating that the R.A.C.E. inclusion team and Jen Mankoff, an AccessSIGCHI leader, had violated ACM policy, the EC appeared to undermine its own efforts. Members of fempower.tech who are part of equity-seeking groups have experienced similar (micro)aggressions and scapegoating. This has led us to feel used, unsupported, or unwelcome at meetings and/or conferences.
Creating or supporting initiatives led by people from marginalized groups—and then challenging their work—exploits the good intentions and beliefs of SIGCHI leaders, members, and volunteers who are trying to make positive change. This erodes trust and damages communities already experiencing marginalization. What is especially unsettling about the R.A.C.E. team’s experience is that the people who were doing the work that the institution requested were undermined when their efforts gained traction in the community [2]. Indeed, SIGCHI has repeatedly started inclusion-related projects without providing them a clear path to success. Through these actions, SIGCHI has let its members down, time and time again.
Scholar Reshmi Dutt-Ballerstadt notes that “failing to interrogate institutional decision-making processes while claiming to work towards social justice” is one way individuals (un)consciously sustain white supremacy [4]. When organizations apply their rules inconsistently, in ways that silence volunteers and activists without decision-making power, they not only perpetuate the status quo but also actively harm movements toward more just, caring, and inclusive communities.
How can we make changes in an organization that seems to repeatedly move in a more inclusive direction only to undermine such efforts? For example, the lack of year-to-year continuity between conference organization and new initiatives makes it seem that the work and energy that volunteers expend to make improvements in one year is not valued in the next. Instead of expecting people in leadership roles to make these decisions, community negotiations could decide which initiatives need to be carried forward.
We advocate for structural change that recognizes the interlocking nature of marginalizations. Such change requires a combination of: sustainable resourcing for initiatives such as the Inclusion Teams, SIGCHI CARES, and the Allyship program; individuals unlearning harmful behaviors; better communication with the community; and grassroots activism having pathways to hold governing bodies to account. We urge that these groups be given the necessary, equitable resources and support to ensure their efforts toward a more inclusive SIGCHI are sustainable and equipped to alter old structures. This means official inclusion-related groups must have the power to hold the EC and other bodies’ decision-making processes accountable. Finally, the EC and other governing groups must have the will to enact the changes these groups recommend.
Change is challenging, and the work of volunteering is not equitable, especially across a group as diverse as SIGCHI consisting of students, precariously employed researchers, professionals, and faculty. Given this, the best way forward for our community is to heed the various calls to action which liberate us all [5]. This requires work from the EC and other formalized SIGCHI bodies. It requires them to take our caring critique of their systems seriously and to behave in ways that support rather than harm those already experiencing marginalization. As we said at the beginning of this article, we critique and actively work to improve structures for those of us who experience marginalization precisely because we care about this community and hope it can help all of us thrive. As fempower.tech, we want to work with the SIGCHI community as a whole: grassroots activists, international members, formal advocacy groups, and the EC. We need collective action and concrete changes: no more relying on individuals and incrementalism. Indeed, “change happens slowly” is a narrative that centers those closest to power rather than those experiencing harm.
Many of the requests that others have made [2,3,6,7] and that we amplify are not impossible demands or blue-sky thinking. They are what should be the baseline in just systems. SIGCHI’s own mission and vision statements say as much:
SIGCHI MISSION: ACM SIGCHI facilitates an environment where its members can invent and develop novel technologies and tools, explore how technology impacts people’s lives, inform public policy, and design new interaction techniques and interfaces. We are an interdisciplinary field comprising academics, practitioners, and educators, and we welcome a variety of approaches to solve these complex problems. The mission of ACM SIGCHI is to support the professional growth of its members who are interested in how people interact with technologies and how technology changes society.
SIGCHI VISION: We aim to enhance our members’ ability to innovate and understand technologies for the greater public good.
With this blog post, we join others who want to help SIGCHI achieve its vision of supporting the greater public good. We as fempower.tech ask that SIGCHI meet grassroots and formalized groups with respect and equality rather than opposition and aggression. In short, we ask that SIGCHI work toward realizing the mission and vision that it celebrates itself as already doing.
Endnotes
1. Strohmayer, A., Bellini, R., Meissner, J., Mitchell Finnigan, S., Alabdulqader, E., Toombs, A., and Balaam, M. #CHIversity: Implications for equality, diversity, and inclusion campaigns. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, Paper alt03, 1–10; https://doi.org/10.1145/3170427.3188396
2. Grady, S.D., Wisniewski, P., Metoyer, R., Gibbs, P., Badillo-Urquiola, K., Elsayed-Ali, S., and Yafi, E. Addressing institutional racism within initiatives for SIGCHI’s diversity and inclusion. Interactions blog. Jun. 11, 2020;
https://interactions.acm.org/blog/view/addressing-institutional-racism-within-initiatives-for-sigchis-diversity-an
3. Mankoff, J. A challenging response. Interactions blog. Jun. 17, 2020; https://interactions.acm.org/blog/view/a-challenging-response
4. Dutt-Ballerstadt, R. A checklist to determine if you are supporting white supremacy. Inside Higher Ed. Jan. 12, 2018; https://www.insidehighered.com/advice/2018/01/12/checklist-determine-if-you-are-supporting-white-supremacy-opinion
5. Irani, L. “A call to action for the ACM” liberates all of us. Interactions blog. Jun. 29, 2020; https://interactions.acm.org/blog/view/a-call-to-action-for-the-acm-liberates-all-of-us
6. Harrington, C., Rankin, Y., Jones, J., Brewer, R., Erete, S., Dillahunt, T., and Brown, Q. A call to action for the ACM. Interactions blog. Jun. 22, 2020; https://interactions.acm.org/blog/view/a-call-to-action-for-the-acm
7. Rankin, Y.A., and Thomas, J. Straighten up and fly right: Rethinking intersectionality in HCI research. Interactions 26, 6 (2019), 64–68; https://doi.org/10.1145/3363033
Fempower.tech
Fempower.tech are an international network and collective of feminist researchers, practitioners, and activists working with digital technologies. They aim to raise awareness of feminist issues in technology research by being overtly critical and political within the field, raising voices of underrepresented groups and topics, presenting tangible outcomes, and taking on an activist role for this. They create supportive and collaborative environments in their workplaces, within academia, industry, and at international conferences.
How fragmentation can undermine the public health response to Covid-19
Authors:
Andrew Tzer-Yeu Chen
Posted: Fri, October 30, 2020 - 5:24:04
At this point, we are all familiar with Covid-19 and its impacts on ourselves, our communities, and our world. Responses to the disease have largely been led by local, national, and international public health agencies, who have activated their pandemic plans and opened the epidemiological toolkit of modeling, testing, isolation and movement restrictions, surveillance, and contact tracing. When it comes to contact tracing, it’s natural for people to see the tech-heavy world around them, then hear about the common manual process of human investigators and phone calls, and ask, “Why can’t technology help make this better?” But it’s not as simple as “add more technology”—the complex way in which users and societies interact with the technology has significant impacts on its effectiveness. When efforts are not well coordinated, leading to fragmentation in system design and user experience, the public health response can be negatively impacted. This article briefly covers the journey of how contact tracing registers and digital diaries evolved in New Zealand during the Covid-19 pandemic, how the lack of central coordination led to poor outcomes, and ultimately how this was improved.
Contact tracing registers and digital diaries?
Dr. Ayesha Verrall notes that “rapid case detection and contact tracing, combined with other basic public health measures, has over 90 percent efficacy against Covid-19 at the population level, making it as effective as many vaccines” [1]. Contact tracing involves identifying people who have been in contact with an infected person, and therefore who may have been unknowingly exposed to the infectious disease. By identifying the contacts and rapidly isolating and testing those individuals, the chains of transmission in the community are cut off, limiting the spread of the disease. Importantly, contact tracers need to find potential contacts who are unknown to the infected person, and can only do this by tracing the movement of the person to find others who may have overlapped in time and place.
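To make that overlap logic concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any real contact tracing system; the record fields and the simple time-and-place intersection below are assumptions made for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Visit:
    person: str        # hypothetical identifier (e.g., a name or contact record)
    venue: str
    arrived: datetime
    left: datetime

def overlapping_contacts(case_visits, all_visits):
    """Return people whose visits overlapped a confirmed case's visits
    at the same venue, i.e., potential contacts unknown to the case."""
    contacts = set()
    for cv in case_visits:
        for v in all_visits:
            if v.person == cv.person or v.venue != cv.venue:
                continue
            # Two time intervals overlap if each starts before the other ends.
            if v.arrived < cv.left and cv.arrived < v.left:
                contacts.add(v.person)
    return contacts
```

In practice, contact tracers also apply exposure windows, venue types, and risk criteria on top of this, but the core operation remains matching people by place and overlapping time.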
With the pervasive nature of digital technologies, there has been a lot of discussion globally around digital contact tracing solutions, particularly around Bluetooth-enabled smartphone apps (including the Apple/Google protocol) and wearable devices [2,3,4]. Proponents of the technology offer the promise that these solutions can achieve better completeness (finding more contacts of cases, especially where the identities are not known to the case) and speed (finding contacts and testing/isolating them faster). However, in the absence of a validated, effective digital contact tracing solution initially, a number of governments opted for simpler, lower-tech methods of collecting data about people’s movements.
The contact tracing register (or visitor/customer check-in log) has been deployed around the world (Figure 1). Individuals are asked to provide their personal details at businesses and other places of interest, so that if a venue is identified as a potential exposure site, the register can be provided to contact tracers to quickly find people who were there at the relevant time. Digital diaries have also been introduced to help people keep track of their own movements to support their recollection if they get interviewed by a contact tracer—the distinction being that instead of the venue or the government holding the records, individuals themselves maintain and control their own logs.
Figure 1. A pen-and-paper contact tracing book along with two QR codes for digital diaries in Wellington, New Zealand.
Too many solutions
In New Zealand, a strict Level 4 lockdown was implemented across the country on March 25. Most people (with the exception of essential workers) stayed at home, and the high level of compliance meant that within two weeks the number of active cases began to fall. After four weeks, on April 27, some restrictions (particularly around schools and takeaway food services) were eased at Level 3, and then most restrictions were lifted at Level 2 on May 13 (with the exception of physical distancing and gathering limits). As the country moved into Level 3, the government introduced a requirement under the Public Health Response Order for all businesses to maintain contact tracing registers. These registers required visitors and customers to provide their entry/exit times, name, address, and contact details to the business in case they are needed for contact tracing purposes. The government provided a template for businesses to print and use.
A number of criticisms were leveled at the pen-and-paper contact tracing registers that most businesses used initially. Customers had to provide their personal information on a piece of paper that was visible to all customers, creating a privacy risk. This led to real privacy breaches, such as a female customer being harassed by a male restaurant worker after he took her details from a contact tracing register. There were also some concerns about “dirty pen” risks (if everyone is using the same pen, could that become a vector for virus transmission?), usability (can people be bothered providing their details at every business they go to?), validity (could people provide false details?), and enforcement (can a business deny entry to someone who refuses to provide their details?).
Private software developers took the initiative to come up with better solutions. They reasoned that most people have smartphones (NZ has 80 to 85 percent smartphone penetration), and that using digital tools would mitigate or resolve some of the risks associated with pen-and-paper approaches. Within a week, there were over 30 tools available, almost all using QR codes, with a variety of system architectures and user flows. Some QR codes directed the user to a URL (thus requiring a mobile Internet connection); others required a specific standalone app to interpret the code. Some stored the data on a central server owned and controlled by the developer; others stored the data on the phone for the user’s reference only in a decentralized way (i.e., the “digital diary” approach). Some collected only a name and contact email address; others also asked for phone numbers and residential addresses—it was unclear what would be genuinely necessary for contact tracers to find people. Some tools were offered for free; others required businesses to pay a monthly fee, and two of the largest city councils bulk-purchased licenses for one product for businesses in their cities. Some providers had developed full privacy policies; others said that speed-to-deployment was more important. Unfortunately, there was duplication of effort, and many developers found themselves reinventing the wheel and then struggling under the burden of providing tech support for their products.
Figure 2. Photos of various QR codes from different providers in New Zealand, crowdsourced by the author from Twitter.
Almost every business soon adopted a QR code from one of these private providers, but with very little coordination or information around which systems were trustworthy or superior, chaos ensued. A lack of familiarity among the public with how QR codes work also led to significant confusion, with people getting frustrated when some QR codes worked and others didn’t. This was not helped by a number of the QR code posters using similar branding, such as the yellow diagonal stripes that were used in government messaging about Covid-19 (Figure 2)—some even using government logos to look more official. Most of the systems had centralized approaches, which also led to concerns about security and potential unauthorized reuse of data held by private corporations. Some businesses simply created Web forms and added clauses in their privacy policies that allowed them to reuse the collected data for marketing purposes, attracting a stern message from the privacy commissioner. However, an interesting counterargument was that since there were so many different tools, the data was fragmented between different providers and therefore no single company held too much data.
The undirected approach also meant that there was insufficient consideration for the needs of marginalized people. Posters were sometimes placed in positions that were inaccessible for disabled populations. Some businesses removed pen-and-paper registers entirely, making participation impossible for the digitally excluded—people without smartphones, without the skills to use a smartphone effectively, or unable to afford mobile Internet data. There was also some confusion about whether or not digital diary approaches (with the data staying on the device) complied with the regulatory requirement for businesses to maintain registers.
The government steps in
On May 20, a week into Level 2, the Ministry of Health launched the NZ COVID Tracer app. This was (and is) also a QR-code-based system with a “digital diary” approach. The app was accompanied by its own QR code standard, which contained a unique Global Location Number for each business, and therefore could be scanned without requiring an Internet connection. Data about check-ins (where people had been at what time) stayed on the device, and the user could choose to release that information to a human contact tracer if identified as a close contact of a known case. The app also allowed individuals to provide up-to-date contact details to the Ministry of Health, which would help contact tracers find them more quickly if necessary.
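In essence, each scan decodes the venue identifier embedded in the poster and appends an entry to a diary stored only on the phone. The sketch below is a minimal illustration of that flow; the payload format, field names, and GLN value are invented for this example and do not reproduce the app’s actual QR code standard.

```python
import json
from datetime import datetime

def record_check_in(qr_payload: str, diary: list) -> None:
    """Decode a scanned QR payload and append a check-in to the local diary.

    The entry stays on the device; the user can later choose to share it
    with a human contact tracer if identified as a close contact.
    """
    venue = json.loads(qr_payload)  # assumed payload, e.g. {"gln": "...", "name": "..."}
    diary.append({
        "gln": venue["gln"],              # Global Location Number of the venue
        "venue_name": venue["name"],
        "checked_in_at": datetime.now().isoformat(),
    })

diary = []
record_check_in('{"gln": "9400000000000", "name": "Example Cafe"}', diary)
```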
Unfortunately, QR codes were everywhere by this stage. Some businesses tried to provide multiple options (as shown in Figure 3) with clearer instructions. But this didn’t stop people from being confused about the proliferation of QR codes. The two loudest complaints about the government app were that 1) it wasn’t compatible with older devices (requiring at least Android 7.0 or iOS 12 at launch) and 2) the app didn’t recognize most of the QR codes that were available (Figure 4). The government app wasn’t designed to work with the other QR codes (which from a technical perspective might seem obvious, but for non-technical folks was bewildering). Displaying the government QR code was not mandatory, so many businesses didn’t even have it as an option.
Figure 3. Businesses attempting to provide clearer instructions on which QR codes to use, crowdsourced by the author from Twitter.
This fragmentation harmed the uptake of the government app because people felt that existing tools served the same purpose. Within a week of launch, about 380,000 users were registered, equivalent to approximately 10 percent of the adult population of four million people. Registrations plateaued, and a month later sat around 570,000. Meanwhile, the number of QR codes being scanned each day was counted through Web analytics events, slowly ramping up initially as businesses started printing and displaying government QR codes, settling around 50,000 scans per day in early June. Given the size of the population, this was clearly not enough activity to give us confidence that the data from the app would be useful in the event of a further outbreak. However, the government app was the only one that reported statistics about usage, so we don’t have data about how widely other tools might have been used.
Figure 4. Screenshots from users complaining that the NZ COVID Tracer app wasn’t recognizing the QR code, when they were in fact scanning QR codes from other providers, crowdsourced by the author from Twitter.
It turned out that the government app had actually been in development (with a private sector partner) for at least a month. The specific reasons why the app was released late have not yet been made clear, although it should be noted that the government has a higher onus to “do things correctly” and needed to prepare a full privacy impact assessment, undergo an independent security audit, have the app checked by the government cybersecurity bureau, and complete other steps that weren’t required for private developers.
By July, it appeared that we had the pandemic under control. New Zealand experienced 102 days in a row without any community cases of Covid-19 detected. The Prime Minister moved the country to Level 1, lifting almost all restrictions except for border controls. Most people became complacent around the risks of Covid-19, with daily scan counts dropping to 10,000 in early July. There were even reports that some businesses were taking their posters down because they felt the QR codes were no longer necessary.
An improvement to the app in June introduced exposure notification functionality. Contact tracers could identify and securely broadcast a place and time where an active case had been, and then the app would check that against the check-in logs on the device and notify the user if an overlap was found. In late July, a further improvement was made to allow users to add manual entries, mostly to account for venues that did not have a government QR code. However, while there were minor upticks in registration and usage after these releases, the activity level still remained very low.
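The matching step is conceptually simple: a broadcast exposure event names a venue and a time window, and the app checks it against the check-ins held locally. A rough sketch (with invented types and field names) might look like the following. Because the comparison happens on the device, nothing about the user’s movements leaves the phone unless the user chooses to share their diary.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class CheckIn:
    gln: str              # venue identifier recorded at scan time
    scanned_at: datetime

@dataclass
class ExposureEvent:
    gln: str
    window_start: datetime
    window_end: datetime

def overlaps(diary: List[CheckIn], event: ExposureEvent) -> bool:
    """True if any local check-in falls inside the broadcast exposure window."""
    return any(entry.gln == event.gln
               and event.window_start <= entry.scanned_at <= event.window_end
               for entry in diary)
```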
Consolidation is the solution
On August 11, the Prime Minister announced that four cases of community transmission had been found in Auckland (the largest city in New Zealand). There were no links to overseas travel, so it was highly likely that there were other undetected cases in the community. Given the recent experiences of places like Victoria, Australia, where second waves had grown quickly, the government implemented a second lockdown, with stricter movement restrictions in Auckland. The next day, they also announced that displaying an NZ COVID Tracer QR code in a prominent place would become mandatory for all businesses by the following week. It would still be optional for individuals to scan, but at least the codes had to be available for people to scan if they wanted to. This decision wasn’t without precedent—Singapore had required its SafeEntry QR codes to be displayed at all businesses since May.
This announcement caused three things to happen over the following week. First, the private developers with the most prevalent QR codes agreed that consolidation was necessary, and advised their customers to switch to the government QR code. Second, businesses largely complied, with the number of government QR codes nearly quadrupling over the subsequent two weeks (from approximately 87,000 to 324,000). Third, the presence of the disease in the community in New Zealand and the accompanying lockdown elevated the seriousness of the situation, and more people began to scan the NZ COVID Tracer QR codes.
Figure 5. Daily scan counts from the NZ COVID Tracer app, overlaid with significant events. Data sourced from the NZ Ministry of Health.
The number of daily scans shot up, from approximately 30,000 per day before the second wave to over two million per day (Figure 5). The number of registered users also increased, from 640,000 before the second wave to just over two million users (approximately 50 percent of the adult population, although duplicate registrations are not accounted for) as of September 4. While the change in context was a significant driver for shifting user behaviors, moving away from the fragmented system clearly helped increase participation in the system too.
Unfortunately, this increase in participation came too late to be of significant help for the second wave. In the event of an outbreak, contact tracers need at least 14 days of movement logs for infected cases in order to help find close contacts. While the government did use the app to publish six exposure notifications, and some close contacts were found faster because they had updated their contact details, it seems that ultimately the app is yet to find significant numbers of close contacts or new cases. As New Zealand now comes out of its second lockdown, we can only hope that the current level of participation continues to grow for NZ COVID Tracer, and is sustained long enough to help defend against a potential third wave of cases in New Zealand.
Leadership and communication
When a global pandemic catches the world by surprise, a strong response is needed to contain, mitigate, and recover from its impacts. The public expects government to play a leading role in coordinating this response, but many individuals also want to do something to contribute. While people are to be lauded for utilizing their skills to support the broader community, undirected efforts can lead to confusion and duplication of effort, and can ultimately harm the overall response. This is not the fault of the individuals—the responsibility lies with the public health agencies to make good use of the available resources and to clearly communicate with people about what is and isn’t needed. Normally, competition between private entities in the open market might be desirable to drive innovation, but in a pandemic, we really need something that just works and supports progress on public health outcomes.
In a country that has had strong communication with the public overall, the confusion around contact tracing registers has been an unfortunate blemish for New Zealand. This case study shows that fragmentation can lead to disparate and negative user experiences, which can harm trust in the system and lead to low participation. In the context of a global pandemic, trust is one of the things we need the most for an effective response.
Acknowledgments
The author thanks the local Twitter community for contributing their images of QR codes to the dataset, and for engaging on digital contact tracing during the COVID-19 pandemic. The author also acknowledges members of Koi Tū: The Centre for Informed Futures for discussions around the use of technology for contact tracing.
Endnotes
1. Verrall, A. Rapid audit of contact tracing for Covid-19 in New Zealand. New Zealand Ministry of Health. Apr. 20, 2020; https://www.health.govt.nz/publication/rapid-audit-contact-tracing-covid-19-new-zealand
2. Wilson, A.M. et al. Quantifying SARS-CoV-2 infection risk within the Apple/Google exposure notification framework to inform quarantine recommendations. medRxiv. Jul. 19, 2020; https://www.medrxiv.org/content/10.1101/2020.07.17.20156539v1
3. Asher, S. Coronavirus: Why Singapore turned to wearable contact-tracing tech. BBC News. Jul. 4, 2020; https://www.bbc.com/news/technology-53146360
4. Alkhatib, A. We need to talk about digital contact tracing. Interactions 27, 4 (Jul.–Aug. 2020), 84; http://interactions.acm.org/archive/view/july-august-2020/we-need-to-talk-about-digital-contact-tracing
Posted in:
Covid-19 on Fri, October 30, 2020 - 5:24:04
Andrew Tzer-Yeu Chen
Andrew Tzer-Yeu Chen is a research fellow with Koi Tū: The Centre for Informed Futures, a transdisciplinary think tank at the University of Auckland, New Zealand. His background is in computer engineering, investigating computer vision surveillance and privacy. His research interests now sit at the intersection of digital technologies and society.
[email protected]
Future directions for situationally induced impairments and disabilities research
Authors:
Garreth Tigwell,
Zhanna Sarsenbayeva,
Benjamin Gorman,
David Flatla,
Jorge Goncalves,
Yeliz Yesilada,
Jacob Wobbrock
Posted: Tue, October 06, 2020 - 1:29:59
Mobile devices are our constant companions. We use them in varied contexts and situations, such as outside on a cold street, lying down in a dark bedroom, and commuting to work on a train. These situations challenge our ability to use mobile devices and can negatively influence our interactions with them. For example, we slow down or make more errors when typing, or select a wrong button. These adverse contextual factors have been referred to as situationally induced impairments and disabilities (SIIDs), or sometimes just situational impairments for short.
The experience of SIIDs during mobile interaction applies to users of all abilities, as SIIDs adversely affect both non-disabled and disabled user groups [1]. But SIIDs have been shown to exacerbate negative user experiences in mobile interaction for people with disabilities [2]. Thus, by accommodating SIIDs during mobile interaction, we can provide solutions to improve user experience for people with and without disabilities [1].
Research in the field of SIIDs is conducted in four main areas: understanding (e.g., [3]), sensing (e.g., [4]), modeling (e.g., [5]), and adapting (e.g., [5]). Understanding provides knowledge about the effects of SIIDs on mobile interaction; sensing allows us to build mechanisms that detect the presence and extent of SIIDs; and modeling and adapting enable the creation of models and interfaces that accommodate SIIDs. The main criticism of research on SIIDs is the lack of systematic knowledge about the effects of underexplored SIIDs, and of combinations of different SIIDs, on mobile interaction. Furthermore, conventional smartphones lack built-in sensing, modeling, and adapting mechanisms to detect and accommodate these SIIDs. Therefore, the research community should strive to push the SIIDs research agenda further in order to build a roadmap for future work in the field.
Figure 1. Workshop organizers and attendees enjoying a meal after an engaging full-day workshop.
Workshop purpose, structure, and goals
We organized a one-day workshop for CHI 2019 in Glasgow, Scotland, called Addressing the Challenges of Situationally-Induced Impairments and Disabilities in Mobile Interaction. The purpose of the workshop was to assemble researchers whose work is related to SIIDs so that we could identify current research gaps and define new directions for future research. The workshop included five main events throughout the day: “lightning” presentations, focus groups, a panel discussion, exploring scenarios, and a town hall meeting, followed by an evening meal (Figure 1). A detailed structure is found in our workshop proposal paper [2].
The four goals of our workshop were to: 1) provide a space for organizers and participants to share their expertise and insights on SIIDs, 2) engage attendees to discuss and identify gaps in the SIIDs research space, 3) ideate new solutions that could mitigate the effects of SIIDs, and 4) create and strengthen an international collaborative SIIDs research network.
The workshop included 17 participants (including organizers), collectively representing universities in eight countries across four continents. Our workshop call for submissions was purposely broad, allowing participants to submit whatever they felt was relevant. For example, submissions could be in the form of position papers, case studies, empirical studies, or new interaction methods. Submissions were limited to eight pages (including references), and we used arXiv to keep a record of the accepted papers (the proceedings can be found at https://arxiv.org/html/1904.05382). Our participants were each given four minutes to discuss their work during the lightning presentations event (Figure 2) so that we could dedicate as much time as possible to identifying necessary future work.
Figure 2. A workshop attendee giving a short presentation about their work.
Activity 1: Focus group activity
Our attendees were divided into groups and given 1.5 hours for the first activity:
- Task: The groups reflected on the lightning presentations and deliberated on challenges they have faced when conducting SIIDs research.
- Outcome: The groups highlighted key parts of their discussion to share with the other workshop groups.
A list of example questions was provided to drive discussion around research methods and equipment, recruitment, ethical challenges for SIIDs studies, challenges in utilizing sensor data and modeling SIIDs, and the limitations of adaptation. Figure 3 provides an example of one group’s record of the challenges they identified.
Figure 3. An example of challenges discussed by one of the groups.
Recruitment can be a challenge for SIIDs research, particularly when seeking to run longitudinal and ecologically valid studies—the costs incurred can be higher. Lab studies are beneficial for isolating factors to study, but more work needs to be conducted outside the lab to increase the ecological validity of results. There should be particular consideration given to active observation approaches rather than only passive observation. Guided tours might be one way to mitigate some of the challenges of in-the-wild research and to address IRB concerns about participant safety. People with permanent disabilities should also be recruited, since they too experience situational impairments and need very specific solutions to address SIIDs.
Modeling and adaptation are promising approaches to addressing SIIDs. Artificial intelligence and machine learning research could support the efforts of HCI researchers, but there is a lack of ground-truth data. It is important to parameterize the environment to help understand the relevant context; it may be possible to do this unobtrusively, for example by using smartwatches for sensing. However, although logging data can help address SIIDs, it may make people uncomfortable as devices become more aware of context, highlighting various legal, security, and privacy challenges. There was also concern that by addressing SIIDs we might diminish people’s own skill development, as devices work more independently, and that people may be encouraged to interact at times when they should be focusing on their environment (e.g., while driving).
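As a toy illustration of what “parameterizing the environment” might mean in practice, the heuristic below maps a few hypothetical smartwatch readings onto coarse context flags. The sensor names and thresholds are invented for this sketch; real SIID sensing would be learned from labelled data rather than hand-tuned rules.

```python
def infer_context(ambient_light_lux: float,
                  noise_level_db: float,
                  temperature_c: float,
                  step_rate_per_min: float) -> dict:
    """Map raw (hypothetical) sensor readings onto coarse context flags."""
    return {
        "low_light": ambient_light_lux < 50,     # dim room or nighttime street
        "noisy": noise_level_db > 70,            # busy road, crowded train
        "cold": temperature_c < 5,               # gloves likely, reduced dexterity
        "on_the_move": step_rate_per_min > 60,   # walking or running
    }
```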
Activity 2: Exploring scenarios
Our attendees were divided into groups and given 1.25 hours for the second activity:
- Task: The groups explored randomly generated scenarios and focused on the gaps in understanding, sensing, modeling, and adapting with regard to the context, technology, and task.
- Outcome: The groups recorded the unique issues identified from the card game and shared the findings during the town hall meeting.
Figure 4. Example scenarios created by taking one card from three piles (a situation, a piece of technology, a task).
We gave each group a set of prompt cards to facilitate this activity. The cards covered three categories (a situation, a piece of technology, a task). We had at least 10 ideas for each category. The cards were laid out in the three categories, stacked in a random order, and the groups drew a card from the top of each pile so that the three cards made up a scenario (see Figure 4). The group could draw three cards that propose something common and relatable, such as “jogging,” “using phone,” and “reply to a message,” or something less familiar, such as “outside in the rain,” “wearing an AR headset,” and “call an Uber.” Sometimes a combination of cards was drawn that suggested an unlikely scenario, but the purpose was to quickly generate unique scenarios that the group could use to identify challenges and possibly where new research efforts need to be focused. We added wildcards to each pile so the group could determine their own entry (e.g., a technology wildcard would allow the group to invent some future mobile device to add to the scenario). The benefit of this approach was that the groups were less constrained to the ideas written on our cards.
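The drawing procedure is easy to reproduce outside the workshop; the snippet below mimics it with abbreviated piles (the card text is taken from the examples above, plus wildcards).

```python
import random

# Abbreviated piles; the workshop used at least 10 ideas per category.
SITUATIONS = ["jogging", "outside in the rain", "on a beach under an umbrella", "WILDCARD"]
TECHNOLOGIES = ["using phone", "wearing an AR headset", "smartwatch", "laptop", "WILDCARD"]
TASKS = ["reply to a message", "call an Uber", "unlock the device", "send an email", "WILDCARD"]

def draw_scenario():
    """Draw one card from each pile; a WILDCARD lets the group invent its own entry."""
    return (random.choice(SITUATIONS),
            random.choice(TECHNOLOGIES),
            random.choice(TASKS))

print(draw_scenario())  # e.g. ('jogging', 'using phone', 'reply to a message')
```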
Here are three examples generated by the participants and the issues that were considered:
- On a beach under an umbrella using a tablet and needing to unlock the device. The group considered that a person may have sand on their hands, and depending on how they unlock their device, this might be a challenge. For example, if fingerprint unlocking does not work because of the sand, what other unlock methods are there? This highlights the need for various fallback methods to address SIIDs. But the user may also want control over what those fallback mechanisms look like—maybe they do not want to compromise on biometric unlocking? Perhaps a user’s voice signature could be used for unlocking?
- Running in a park and receiving an email response on a smartwatch. Focusing on the email will distract the user from running and their environment. It could be difficult to read a long message and to respond to the email, which means more focus has to be given to the task. A user may have privacy concerns about reading the email in public, depending on its content. There may also be network inconsistencies while outside. The technology should aim to sense the level of danger and possibly delay the notification if the task is not urgent. Perhaps the smartwatch could detect that the user is running and defer its interruption?
- At the airport sending a message using a laptop. A person traveling can be stressed, tired, and likely distracted as they are listening for important flight information. The user is likely concerned about conserving their battery and dealing with limited network connectivity. Here, it is not only environmental factors that lead to SIIDs, but also internal factors. For example, the concept of emotions is purely situational; it is unclear if emotions directly influence the way we interact with mobile devices. In this scenario, the device could detect that the user is in an airport and make the changes necessary to reflect the current mood of the user. Perhaps future laptops could detect users’ stress levels and avoid contributing to information overload?
It is clear we need to be accurate and flexible with SIIDs solutions. There is little room for error when sensing the environment for potential dangers and determining the best method for interaction. Furthermore, an individual user will have their own needs that must be met and these are not likely to be static over time or for particular tasks. For example, some emails are urgent and others are not. There needs to be an easy way for the user to make these aspects known to the device to enable alternative modes of interaction.
Should we always adapt/accommodate?
Currently, most research in the area of SIIDs has been conducted within the laboratory environment. A laboratory environment limits our understanding of SIIDs, as it strictly controls and excludes the effects of accompanying factors that might be present in a real-world scenario. For this reason, we suggest that more research should be conducted in the wild. It is also necessary to study the effects of combined SIIDs, as it is common for a user to experience the effects of multiple SIIDs at once, for example being outside in a cold and noisy environment late at night. We argue that these future directions would increase our understanding of SIIDs and create new insights, potentially revealing new behaviors of people observed under realistic conditions with multiple SIIDs present.
We also suggest further understanding and investigation of the effects of SIIDs on mobile interaction according to the 2D space presented by Wobbrock et al. [1]: from-within (emotions, mood, mental well-being), from-without (difficult terrain, lack of connectivity/power), and mixed (combination of external and internal) factors. It is pivotal for the research agenda to understand the effects of SIIDs from this perspective in order to progress further by building sensing, modeling, and adapting mechanisms for these SIIDs. Furthermore, if the similarity between the effects of underexplored SIIDs and permanent impairments is established, it can further enable the creation of design solutions to accommodate users of all abilities (i.e., permanently, periodically, or situationally impaired).
Moreover, research has shown that the personal and individual characteristics of users are very important when building sensing, modeling, and adapting mechanisms to address SIIDs. The challenges of building individual models and adaptive interfaces [1] can be overcome with optimization algorithms and carefully formulated cost functions [2]. These should be sensitive to the user’s privacy and security concerns. It is also important for these mechanisms to judge SIIDs well enough to decide whether adaptation should take place at all, especially in high-risk, high-cost situations where visual and attentional resources should be focused on tasks of higher priority (e.g., crossing a busy road).
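To make the cost-function idea concrete, a hedged sketch might weigh the expected benefit of an adaptation against situational risk and the user’s stated aversion to automatic changes. The weights and decision rule below are illustrative only and are not drawn from the cited work.

```python
def should_adapt(benefit: float,
                 situational_risk: float,
                 user_aversion: float,
                 risk_weight: float = 2.0,
                 aversion_weight: float = 1.0) -> bool:
    """Decide whether an interface adaptation should be applied.

    All inputs are assumed to be normalized to [0, 1]. In high-risk,
    high-cost situations (e.g., crossing a busy road) the risk term
    dominates and the adaptation is suppressed.
    """
    cost = risk_weight * situational_risk + aversion_weight * user_aversion
    return benefit > cost
```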
Finally, we suggest that the research should expand to include a wider range of devices [6], for example smartwatches, fitness and activity trackers, and other less common wearable technology such as AR glasses, which we foresee becoming the new norm. Considering potential SIIDs when designing new technology is important in order to build in the necessary solutions from the outset, rather than after the technology has been adopted by users. Our CHI 2019 workshop highlighted these (and many other) issues that the attendees and, we hope, many other researchers will take up, creating more aware, responsive, accessible, and safe mobile technologies that are usable by everyone.
Endnotes
1. Wobbrock, J.O., Gajos, K.Z., Kane, S.K., and Vanderheiden, G.C. Ability-based design. Communications of the ACM 61, 6 (2018), 62–71.
2. Tigwell, G.W., Sarsenbayeva, Z., Gorman, B.M., Flatla, D.R., Goncalves, J., Yesilada, Y., and Wobbrock, J.O. Addressing the challenges of situationally-induced impairments and disabilities in mobile interaction. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2019, 1–8.
3. Sarsenbayeva, Z., van Berkel, N., Hettiachchi, D., Jiang, W., Dingler, T., Velloso, E., Kostakos, V., and Goncalves, J. Measuring the effects of stress on mobile interaction. Proc. of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 1 (2019), 1–18.
4. Goel, M., Findlater, L., and Wobbrock, J. WalkType: Using accelerometer data to accommodate situational impairments in mobile touch screen text entry. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2012, 2687–2696.