Marion Koelle, Thomas Olsson, Robb Mitchell, Julie Williamson, Susanne Boll
Technology is changing the way we experience our lives. Interactive and increasingly intelligent technologies allow us to tackle challenges previously considered unsolvable and to augment our capabilities of sensing, communication, and even creativity. Examples include head-mounted displays and smart personal services for ubiquitous information access, and autonomous vehicles for increased comfort and safety. Simultaneously, these very same technologies introduce new risks, raise new concerns, and can increase both social tension and inequality between users and non-users.
A natural focus for HCI is new interface technologies and how they are used in social situations, which is crucial to their acceptance. For example, unconventional interface technologies can face resistance from bystanders and cause embarrassment when used in public places. And the increasing autonomy of agents (e.g., vehicles, virtual assistants) can raise broader ethical and societal discussion on the roles and purposes of technology. The social context, as a broad notion, can set requirements and act as a catalyst or a hindrance for the adoption and appropriation of an interface. The actual or anticipated disapproval from other people can have a major impact on if, where, and how an interface might be used.
Such collectively constructed effects are typically captured by the terms social acceptance and social acceptability. Social acceptance can be considered the broader concept, referring to a cultural phenomenon, while social acceptability typically refers to a design having the quality or attribute of being socially acceptable. However, this distinction is hardly established; in practice, both terms are used to refer to how an interface is received, internally by the user as well as externally by others. A lack of social acceptability can render an interface, application, or technology practically unsuccessful. For the user, use of a technology might come at the cost of being socially disagreeable or stigmatized. At the same time, a user who refrains from using an interface to conform with the social context might ultimately lack access to particular applications, services, or information, which can impair social empowerment and social equity.
While social acceptability has been acknowledged as an essential part of overall system acceptability, there is limited empirical research on this topic thus far. Technology acceptance research (e.g., the Technology Acceptance Model, TAM) has been extended to incorporate social factors (e.g., by Malhotra et al. in 1999), but these extensions overlooked how potential negative feedback from peers or bystanders affects the user’s ongoing decision to continue or discontinue interacting.
We argue that the research field of HCI lacks more contemporary and actionable articulations for social acceptability and social acceptance, as well as design guidelines and agreed-upon evaluation methods for putting it into practice. Catching up is indeed a timely issue, as the spread of computing technologies beyond the desktop into all aspects of our lives dramatically increases the range and scale of potential issues with social acceptance.
In a one-day workshop at CHI 2018, 16 researchers and practitioners from research areas such as user experience, wearable computing, e-textiles, conversational agents, voice user interfaces, decision support systems, social networks, and gender studies assembled to discuss social acceptability in today’s HCI research and design practice. The workshop included 10 lightning talks drawn from the participants’ areas of research, along with demos and moderated discussion sessions. In this article, we share highlights from the workshop, including results of discussions and a pre-workshop survey conducted among the participants.
The variety of application areas covered by the workshop contributions indicates that social acceptability, and the lack thereof, is becoming increasingly relevant. However, anecdotal evidence also highlights that it is often encountered as a by-product of studies or discovered by accident, for example, when interfaces receive insufficient attention during in-the-wild studies or are not adopted upon entering the market. Consequently, social acceptability often becomes apparent only through its absence: Aspects of a design causing a lack of social acceptance (i.e., bad practices) are easier to identify than design strategies for increasing an interface’s social acceptability (i.e., good practices). At the same time, research explicitly targeting social acceptability issues is rare, and so far only a few authors (e.g., Montero et al.) have attempted to formally define social acceptability in HCI.
In our pre-workshop survey, we collected the participants’ personal understandings of both social acceptance and social acceptability. While HCI research has often used these two terms interchangeably, our collection of informal definitions consolidated our impression that the term social acceptance is mainly used to describe a phenomenon: “when a person can use or wear the technology around others without feeling uncomfortable, out of place, or judged. The other people around the user also do not feel uncomfortable by the presence and use of the technology” (P7). Social acceptance is subjective, dynamic, temporal, and contextual. It’s not a simple, binary decision but rather a continuum: Instead of being a one-time decision for either acceptable or unacceptable, it is a continuous decision process that evolves over time.
While these definitions tend to focus on an individual perspective, social acceptance could be understood as a collective judgement that is not only subjective (held by one person) but also compounded from the perceptions and opinions of multiple people, who might be influenced by media coverage or broader societal and cultural changes. “What is regarded as socially acceptable is highly contested; however, it differs depending on cultural or religious background and it changes throughout history. A lot of things we nowadays consider progressive were historically considered socially unacceptable” (P6). Nevertheless, social acceptance can also “be measured or at least empirically analyzed” (P6).
Although similar to social acceptance, the term social acceptability was used by participants to describe a product quality, for example, an interface’s appropriateness and suitability for a certain context or culture, as well as a design’s ability to respond to societal norms, values, and visions. They considered social acceptability as “the probability that a technology will be accepted by society and not only individuals” (P10). Furthermore, they also saw a clear connection between an interface’s social acceptability and its design: “Intuitively, I consider social acceptability to operate at a similar level as accessibility, that is, a practice for designing [or] evaluating the acceptability of technologies” (P9). This connection also recurred during the workshop, where participants considered social acceptability as an interface quality that could and should be influenced by design. They also considered it a design requirement related to a broader notion of techno-ethical risks: “I would define social acceptability in the context of existing or emerging technologies as a fundamental requirement to think about possible impairments or disturbances of an interactive system with regard to other people” (P4).
Based upon workshop discussions, we would argue that for future research it is helpful to clearly distinguish between acceptance as a descriptive concern with what is and what has been and acceptability as looking forward, relating to what could be. HCI can undoubtedly benefit from better understanding what has influenced the social acceptance of current and previous products. Insights gleaned from such efforts should feed into our field’s efforts to improve the future. Developing successful, novel systems and designing for new interactions requires attempting to predict, influence, and evaluate different options concerning possible social acceptability—hence we chose to use acceptability in the title of this article.
Social acceptability encapsulates the socially constructed factors that affect user experience and the acceptance of new interaction techniques. In practice, those are what make an interface more acceptable or unacceptable. However, those factors are hard to grasp, as they depend on context and perspective.
The social acceptability of an interface can be regarded on different levels. On a micro level, the social acceptability of a technology concerns whether an actual encounter with the technology (or a user thereof) affects the social comfort, status, reputation, moral convictions, and so on of participants or close witnesses in the encounter. Brewster et al. describe this as the internal (user) and external (bystander) view of technology usage. Montero et al. formalize this as the user’s social acceptance (how comfortable or relaxed does the user feel interacting with an interface?) and the spectator’s social acceptance (does interacting with an interface appear normal, or does it stand out?). On a macro level, social acceptability concerns the bigger picture of whether a technology is tolerated, accepted, or possibly even appreciated by a community or culture. Olshannikova et al.’s workshop contribution extends Brewster et al.’s and Montero et al.’s notions and suggests five perspectives:
- Internal perspective. How do I perceive myself, mindful of self-image and cultural norms, while interacting with a particular technology?
- Interpersonal perspective. How does using a technology affect my impressions of others and my interaction with them?
- Perspective of social structure. How does using a technology affect my professional image and my position in organizations and other social structures?
- Normative perspective. How is using an interface generally perceived in the cultures and communities I belong to?
- Ethics and regulations perspective. To what extent is using a technology in line with existing laws, regulations, and moral standards?
So far, these perspectives have not yet been combined in a generalized model or framework. In addition, factors influencing those perspectives, and thus shaping an interface’s social acceptability, have not yet been researched comprehensively.
During the workshop and in the pre-workshop survey, we collected a list of factors that link to the social acceptability of an interactive technology. While this list is not exhaustive, it can serve as a valuable starting point for future research:
- Aspects of a technology that cause fear, objections, or eerie emotions can be relevant to social acceptability. Potential causes of fear and anxiety include control loss, lack of situation awareness, and privacy infringements.
- Aspects concerning the user’s social image, such as perceived awkwardness, coolness, or publicity of interactions, relate to the user’s impression management. Consequently, strange form factors and unusual or ambiguous interactions can impair social acceptability, which may vary depending on usage scenario, location, and interface type or variant.
- Aspects that make a technology non-inclusive can trigger ethical concerns. These aspects might include poor availability, low accessibility, or a perceived or real lack of fairness (e.g., in algorithmic systems). Designs that neglect some standpoints, perspectives, circumstances, or contexts—i.e., technology that works only for a few or causes disadvantages to certain people—are likely to be considered not socially acceptable.
Evaluating interfaces in terms of social acceptability—or evaluating interfaces that have issues with social acceptability in terms of something else—is challenging. Issues with social acceptability might hinder usability testing, as users could be hesitant to interact. Moreover, research prototypes can be prone to a novelty effect (e.g., the wow factor) or include single aspects that are not (yet) acceptable, or not part of the to-be-evaluated concept (e.g., bulky hardware). Participants’ feedback might then strongly focus on these most evident issues and neglect others.
Thus, testing design concepts instead of functional prototypes has become a popular choice. Variants of such hypothetical designs are often evaluated using scenario-based or storytelling methods that require participants to imagine the use of an interface in a certain situation. These methods typically use sketches, illustrations, or video prototypes to depict how the envisioned interface would be used. While these methods have the advantage that they are highly controllable, mitigate bias, and allow researchers to investigate and compare individual factors and their effects (e.g., visibility of interactions), they may be challenged in terms of external validity. It has to be acknowledged that these methods do not provide absolute measures—that is, they cannot answer whether an interface would reach a certain acceptability threshold. On the other hand, they allow researchers to isolate certain features and compare variants against each other, which affords accurate relative comparisons.
In contrast, in-the-wild tests with actual prototypes, such as technology probes using on-street recruitment, are prone to self-selection bias and thus might attract only participants who already have a positive attitude toward the evaluated technology, or participants who feel the need to protest against it. Even viable, purposefully designed tools such as questionnaires measuring social acceptability (e.g., the WEAR Scale) may be prone to ceiling effects or attract answers that are out of scope. Nevertheless, research on technology adoption illustrates that actual user experience and perceived social acceptability often differ from what users would initially predict. For this reason, in-the-wild tests of prototypes may be necessary to obtain an estimation of an interface’s actual social acceptability.
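As a loose illustration of the ceiling effects mentioned above, the following sketch flags questionnaire items where most responses cluster at the top of a Likert scale. The data, item names, and the 50 percent threshold are hypothetical choices for demonstration, not taken from the WEAR Scale or any actual study:

```python
# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# for three made-up questionnaire items; NOT actual WEAR Scale items or data.
responses = {
    "item_1": [5, 5, 4, 5, 5, 5, 4, 5],
    "item_2": [3, 4, 2, 4, 3, 3, 4, 2],
    "item_3": [5, 4, 5, 5, 5, 5, 5, 4],
}

SCALE_MAX = 5
CEILING_THRESHOLD = 0.5  # flag items where >50% of answers hit the top of the scale

def ceiling_report(data, scale_max=SCALE_MAX, threshold=CEILING_THRESHOLD):
    """Return per-item mean and whether the item shows a likely ceiling effect."""
    report = {}
    for item, scores in data.items():
        mean = sum(scores) / len(scores)
        at_max = sum(1 for s in scores if s == scale_max) / len(scores)
        report[item] = {
            "mean": round(mean, 2),
            "at_max": at_max,
            "ceiling": at_max > threshold,
        }
    return report

for item, stats in ceiling_report(responses).items():
    print(item, stats)
```

An item flagged this way cannot discriminate between design variants near the top of the scale, which is one reason relative comparisons between variants can be more informative than absolute scores.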
Both approaches, testing visions and testing prototypes, have their strengths and weaknesses. Future research on social acceptability will require methods or processes that meaningfully combine both.
The workshop on social acceptability at CHI 2018 demonstrated that social-acceptability issues influence many areas of HCI. Although the term was conceptualized more than two decades ago, the social acceptability of human-computer interfaces has received little attention since.
Although there was a consensus among the workshop participants that purposefully designing for social acceptability was possible and necessary, it was also noted that dedicated design methods and best practices are sparse.
Future research should answer questions around what designers can do to mitigate fears, social rejection, and other issues causing a lack of social acceptability. A collection of good and bad practices from different application areas, which might later be distilled into guidelines or heuristics, could serve as a starting point and a good way of packaging the insights of current research efforts targeting social acceptability issues in HCI. With this in mind, we intend to evolve the CHI 2018 workshop on social acceptability into a workshop series.
2. Malhotra, Y. and Galletta, D.F. Extending the technology acceptance model to account for social influence: Theoretical bases and empirical validation. Proc. of the 32nd Annual Hawaii International Conference on System Sciences. IEEE, 1999, 14.
3. Koelle, M., Boll, S., Olsson, T., Williamson, J., Profita, H., Kane, S., and Mitchell, R. (Un)Acceptable!?! Re-thinking the social acceptability of emerging technologies. Proc. of Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, 2018, W03.
4. See https://socialacceptabilityworkshop.uol.de/#program for all workshop contributions; accessed Jan. 22, 2019.
5. The pre-workshop survey was conducted online, prior to CHI 2018. The survey link was distributed among the workshop attendees and their coauthors, and completed by 10 participants (denoted as P).
6. Montero, C.S., Alexander, J., Marshall, M.T., and Subramanian, S. Would you do that? Understanding social acceptance of gestural interfaces. Proc. of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services. ACM, New York, 2010, 275–278.
9. Kelly, N. and Gilbert, S. The WEAR scale: Developing a measure of the social acceptability of a wearable device. Proc. of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, New York, 2016, 2864–2871.
Marion Koelle is a research associate at the University of Oldenburg. She is currently pursuing her doctoral dissertation on designing body-worn cameras that intelligently adapt to social contexts. Her research on the social acceptability of emerging technologies and novel interaction paradigms was published at MobileHCI, CHI, and TEI. email@example.com
Thomas Olsson is an associate professor at Tampere University focusing on the experiential and social implications of information technology. His research interests include ethically and socially sustainable information technology, enhancing social interaction with the help of emerging ICT, big social data analytics, and extended reality technologies. firstname.lastname@example.org
Robb Mitchell is an associate professor at University of Southern Denmark specializing in how design may bring people closer together—socially, creatively, and professionally. His research and practice draw upon a diverse background that includes community development, market research, music promotion, cultural management, science communication, and new media curating. email@example.com
Julie Williamson is a lecturer in human-computer interaction in the Glasgow Interactive Systems Group at the University of Glasgow. Her research focuses on how new display form factors are used in public and social settings, including non-planar displays and head-mounted displays. firstname.lastname@example.org
Susanne Boll holds a full professorship for media informatics and multimedia systems at the University of Oldenburg and heads the Human Machine Cooperation Competence Center at the OFFIS-Institute for Information Technology. She is an active researcher in the field of human-computer interaction in pervasive environments and is a leader of competitive research projects in this area. email@example.com
Copyright held by authors. Publication rights licensed to ACM.