Features

XXV.5 September-October 2018

Avoiding bias in robot speech


Author:
Charles Hannon


We humans are teaching our robots how to speak. But when humans speak to each other, we can be pretty terrible. The words we use express our biases and prejudices, our stereotypes about each other, and the inequalities of our society. Do we want our robots to replicate all of that? Or should we hold them to a higher standard? If the latter, is there any chance that they can teach us, over time, to be better people?


We are at a middle point in AI speech development, where the computers are speaking to us in two different ways. In consumer-oriented digital assistants, natural language processing has developed enough for them to convert human speech to text with high accuracy. Statistical methods match that text to probable intents and the systems return programmed responses. This describes what’s happening with chatbots, and with Amazon Alexa or Google Home. If you speak to one of these systems today, it is most likely that another human has literally and quite mechanically typed out the words that the AI is saying back to you. The “intelligence” here is in understanding what we are saying, and in matching our words to a set of possible intents. The responses are “dumb” in the sense that they are not generated dynamically: They are scripted beforehand by a human.
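
To make that first pattern concrete, here is a minimal sketch in Python. The intent names, sample utterances, and replies are invented for illustration, and real systems use statistical or neural classifiers rather than simple word overlap, but the key point is the same: the response is a string a human wrote in advance.

    # Minimal sketch: match an utterance to an intent, return a scripted reply.
    # Intents, sample utterances, and responses are invented for illustration.
    INTENTS = {
        "GetWeather": {
            "samples": ["what is the weather", "will it rain today"],
            "response": "Here is today's forecast.",      # typed by a human
        },
        "PlayMusic": {
            "samples": ["play some music", "put on a song"],
            "response": "Playing your station now.",      # typed by a human
        },
    }

    def classify(utterance):
        """Pick the intent whose sample utterances share the most words."""
        words = set(utterance.lower().split())
        return max(
            INTENTS,
            key=lambda name: max(
                len(words & set(s.split())) for s in INTENTS[name]["samples"]
            ),
        )

    def respond(utterance):
        return INTENTS[classify(utterance)]["response"]

    print(respond("will it rain today"))   # -> Here is today's forecast.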

A little bit closer to artificial intelligence are systems that use machine learning to train themselves on huge corpora of text in order to “understand” the meaning of words, the different contexts in which those words can be used, and the syntax that can be used to order those words. Then, based upon this understanding, they answer our questions in a variety of contexts. Here, it is our questions that are more heavily scripted and structured: We ask for a report on earthquakes, and Quakebot at the L.A. Times delivers the story when one occurs. We ask for a Wikipedia article on an obscure animal or geographical detail, and Lsjbot obliges (nearly three million articles and counting).


Both types of systems are intelligent in the way that they process text and relatively dumb in the way that they communicate back to us. Artificial general intelligence remains a distant goal, especially the ability to generate natural language responses to unexpected input. The best-known attempt in recent years shows one reason why: Microsoft’s chatbot Tay had to be disabled within 16 hours because in that time she had learned from other Twitter users how to speak like a racist, sexist, homophobic bigot. But Tay only made very visible the problem of bias that is already endemic in machine-learning systems. The training data that machine-learning algorithms use to understand a domain, and to understand the language of that domain, often contains the same biases that already plague human-human interaction.

When our documented history is used as the basis of machine learning, our algorithms are going to replicate the biases contained in that data. A recent paper in Science reports on a word-embedding algorithm that learns the meaning of words by analyzing the contexts in which they appear, representing each word’s co-occurrence patterns along 300 semantic dimensions [1]. The algorithm was trained on a web crawl of more than 800 billion words and does a pretty good job of learning the meaning of the words in that corpus. For instance, asked to complete the word analogy “man is to king as woman is to x,” the computer can correctly answer “queen.” However, the same computer will say, “man is to computer programmer as woman is to homemaker” and “father is to doctor as mother is to nurse.” The machine “thinks” these things because of the co-occurrence of these semantic concepts in the 800-billion-word web corpus it used to learn English.
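
As a rough illustration of how these analogies are computed, the query is just vector arithmetic over pretrained embeddings. The sketch below uses gensim; the file path is a placeholder and the multiword token assumes a phrase-aware vocabulary, so treat it as a sketch rather than a reproduction of the Science study.

    # Sketch: word analogies with pretrained embeddings (gensim).
    # "vectors.bin" is a placeholder path; the original study used GloVe vectors
    # trained on a web-scale corpus, not necessarily this file or format.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    # "man is to king as woman is to ?"  ->  king - man + woman
    print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

    # The same arithmetic surfaces learned stereotypes.
    # (The underscore token assumes a phrase-aware vocabulary.)
    print(kv.most_similar(positive=["computer_programmer", "woman"],
                          negative=["man"], topn=1))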

What the algorithm found is in fact a very accurate reflection of human biases as measured by the well-known Implicit Association Test (IAT). Humans can take the IAT to see whether their brains are wired to prefer one group over another even when their conscious beliefs and principles say otherwise. The IAT measures the amount of time it takes for your brain to associate two or more ideas. If you can quickly associate the word man with the phrase computer programmer, but it takes you longer to draw the same connection between computer programmer and woman, the assumption is that those two concepts are less connected in your mind, implying bias. Caliskan et al., using strength of association between word pairs instead of cognitive lag time, found that their word-embedding algorithm replicated all of the human biases—about race, age, weight, sexuality, religion—commonly found with the IAT.
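
The measure Caliskan et al. use, the Word-Embedding Association Test, reduces to comparing cosine similarities between word vectors. The sketch below shows the core differential-association idea with short, hypothetical word lists; it is not their full test battery or effect-size calculation.

    # Sketch of a WEAT-style association score over word embeddings.
    # Word lists are short, hypothetical stand-ins; "vectors.bin" is a placeholder.
    import numpy as np
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def assoc(word, attrs_a, attrs_b):
        """Mean similarity to attribute set A minus mean similarity to set B."""
        v = kv[word]
        return (np.mean([cos(v, kv[a]) for a in attrs_a]) -
                np.mean([cos(v, kv[b]) for b in attrs_b]))

    career = ["career", "salary", "office"]
    family = ["family", "home", "children"]

    # Positive scores mean the word sits closer to the career terms.
    for w in ["man", "woman"]:
        print(w, assoc(w, career, family))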

Some forms of bias are subtler than others. Researchers in 2015 analyzed Wikipedia in six different languages and found two kinds of gender bias in all six: Structurally, entries about women in Wikipedia link to articles about men far more frequently than vice versa; lexically, entries about women comment on romantic relationships and family-related issues much more frequently than entries about men. As a result, “an article about a notable person that mentions that the person is divorced is 4.4 times more likely to be about a woman rather than a man” [2]. Wikipedia presents a gold mine for data scientists looking for structured data upon which to train their machine-learning algorithms. But to the degree that any corpus can contain biases, we need to be aware of the dangers of those biases being imported into the machine algorithm’s representation of the world.

These examples are about machine-learning algorithms, which need large amounts of text in order to “learn.” But interaction designers are giving voice and speech to AIs every day as we create Alexa “skills” and Google “actions” and deploy them. If you have ever developed one of these apps, you know that they are not at all smart in the way that they generate responses. Typically, the AI’s voiced response to a user’s utterance is a manually entered string of text. This is a brute-force method of literally feeding the digital assistants the words that they say in response to users’ questions. Among everything else they are doing with these products, Amazon and Google are amassing huge databases of utterances matched to intents and system responses, which will no doubt be used in future years as further training data for improved AI devices. So we should be thinking now about ways to populate these libraries with content that will not worsen problems of bias and inequality.
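
If you have built an Alexa skill in Python, the handler below will look familiar. The intent name and wording are invented, but the structure is typical: the spoken output is a literal string the designer typed, which is exactly where a review for biased phrasing can happen.

    # Minimal Alexa skill handler (Python ASK SDK) with a hand-scripted reply.
    # The intent name and response text are invented for illustration.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name

    class SurgeonFactHandler(AbstractRequestHandler):
        def can_handle(self, handler_input):
            return is_intent_name("SurgeonFactIntent")(handler_input)

        def handle(self, handler_input):
            # Scripted by a human; every word here is a design decision.
            speech = "Dr. Rivera is a surgeon at City Hospital."
            return handler_input.response_builder.speak(speech).response

    sb = SkillBuilder()
    sb.add_request_handler(SurgeonFactHandler())
    lambda_handler = sb.lambda_handler()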

This means being aware of the subtle ways in which human biases are embedded in everyday language. We should be aware, for instance, of differences in the ways men and women speak, and make conscious decisions about whether our AIs replicate, and thereby reinforce, these differences. Psychologist James Pennebaker has analyzed large bodies of text by male and female authors across a variety of genres and found that in general, women use I-words (I, me, my) more often than men. Men use articles (a, an, the) more than women. Women use “cognitive” words more than men (understand, know, think) as well as “social” words (“words that relate to other human beings”). Men use more nouns and women use more verbs [3].
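
Pennebaker’s findings rest on counting words in function-word categories (his LIWC tool). A stripped-down version of that counting, with abbreviated category lists rather than the real LIWC dictionaries, looks like this:

    # Sketch of function-word category counting, in the spirit of LIWC.
    # The category lists are abbreviated examples, not the real dictionaries.
    import re

    CATEGORIES = {
        "i_words": {"i", "me", "my", "mine"},
        "articles": {"a", "an", "the"},
        "cognitive": {"understand", "know", "think", "because"},
    }

    def category_rates(text):
        words = re.findall(r"[a-z']+", text.lower())
        total = len(words) or 1
        return {name: sum(w in vocab for w in words) / total
                for name, vocab in CATEGORIES.items()}

    print(category_rates("I think I know the answer because I checked my notes."))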

It also happens that when two or more people are in conversation, the person with higher status uses we more often, while lower-status individuals use I more often. Given that so many of our digital assistants, like Alexa and Siri, have female voices by default, we should be aware that if they use I more often than necessary, we are reinforcing their low status at the same time that we are personifying them as women. We are replicating patterns from the real world that align female speech patterns with low-status positioning. As AIs begin to converse about more than just the things we’ve asked them to do, we should be aware of these male and female speech patterns and consciously decide when and whether to reinforce them.

Also on the topic of gender: Let’s make sure our AIs refer to another person’s gender (or race, ethnicity, sexuality, or religion) only when it is relevant to the context or story. Otherwise we risk the phenomenon of marking, which occurs when something about a person doesn’t match our expectation—in other words, doesn’t match our stereotype about that person’s group. According to Camiel Beukeboom, we have a tendency to explicitly mark unexpected gender roles, like male nurse or female surgeon [4]. This is particularly true when describing women in roles that do not align with stereotypes: We are more likely to describe a man as a computer programmer but a woman as a female computer programmer. This marking communicates the unexpectedness of the subject’s role and thus reinforces the stereotype. It can also happen when we use adjectives to mark a subtype or subgroup that violates stereotypes about the larger category of person, for instance, peaceful Muslim or tough woman. Sometimes the marking is embedded in common phrases such as family man or career woman.
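
One practical safeguard is a simple lint pass over a skill’s scripted responses that flags marked role descriptions before they ship. The patterns below are a small, hypothetical starting point, not a complete inventory of marked language.

    # Sketch of a "marking" linter for scripted VUI responses.
    # The pattern list is a small, hypothetical starting point.
    import re

    MARKED_PATTERNS = [
        r"\b(female|male|woman|lady)\s+(surgeon|doctor|nurse|pilot|engineer|computer programmer)\b",
        r"\bcareer woman\b",
        r"\bfamily man\b",
    ]

    def flag_marked_phrases(response_text):
        hits = []
        for pattern in MARKED_PATTERNS:
            hits += [m.group(0) for m in re.finditer(pattern, response_text, re.IGNORECASE)]
        return hits

    print(flag_marked_phrases("She is a female computer programmer and a career woman."))
    # -> ['female computer programmer', 'career woman']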


How might bias markings find their way into AI systems? Since they emerge in everyday speech, they already populate large corpora like Wikipedia and the blogs and news sites that machine algorithms use to learn our language. They can also easily make their way into the responses that today’s voice user interface (VUI) designers give to Alexa or Google Home. And, really, bias markings may be transmitted anytime a human creates an image or video annotation that can be scooped up at a later date by AIs developing their language libraries in different semantic contexts. In the Flickr30k project, for example, humans provide descriptions of images, and then algorithms use these descriptions to build models that help them interpret further images. Many opportunities for the transmission of bias ensue, as Emiel van Miltenburg has demonstrated [5].

Here’s another, related way in which stereotypes can be subtly embedded in our everyday language and transmitted from one person to another, or from digital assistant to human. Beukeboom writes about negation bias, which explains why we sometimes say a person is smart, and other times, not stupid. It has to do with whether we expect the subject of our conversation to be smart. If we categorize someone as smart (a professor, for instance), and then that person does something stupid, we are more likely to say the act was not smart. The negation construct (“not smart”) seems more appropriate, and is easier to understand, when it is used to communicate expectancy-inconsistent information. If we expect someone to be not smart (stupid), and we want to communicate the idea that they did something smart, the phrase not stupid comes to mind more readily. The stereotype-related words (either smart or stupid, depending upon our preconception) are cognitively more available to us at the moment we are formulating our expression. As a result, the expected condition, smart or stupid, is reinforced and conveyed to the listener, even though at that particular moment, we are communicating expectation-inconsistent behavior.

On the sentence level, we should be aware of the positional power implied by subject-verb-object sentence construction in English [6]. “Teachers disagreed with the governor” is a very different sentence from “The governor disagreed with the teachers.” For the hearer of these two sentences, the swapping of subjects has a real effect on the matter of who is acting more disagreeably. This process is called initialization; as Sik Hung Ng writes, “The placement of one party but not the other in the initial position of a sentence makes a subtle difference of considerable psychological consequence”—namely, in this case, as the originating source of the disagreement. Similarly, Ng notes how the use of passive rather than active voice can substantially alter meaning. Compare “The police fired upon the fleeing man” to “The fleeing man was fired upon.” The implicit biases embedded in the syntax of these simple sentences are far from simple. The second sentence—by omitting the subject that performs the action of the verb—masks who is doing what and subtly places blame on the object of the police action. These sentences could quite plausibly come from a computer system that aggregates news sites and distributes news summaries for mass consumption, or from one of the increasingly prevalent robo-journalism systems such as Heliograf, Narrative Science, or Automated Insights.
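
For automatically generated news text, this kind of agentless passive can be caught with an off-the-shelf dependency parser. The sketch below uses spaCy (assuming the en_core_web_sm model is installed) to flag sentences that have a passive subject but no explicit agent; what to do with the flagged sentences is left to the designer.

    # Sketch: flag agentless passive sentences in generated text with spaCy.
    # Assumes the en_core_web_sm model is installed.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def agentless_passives(text):
        flagged = []
        for sent in nlp(text).sents:
            has_passive_subject = any(t.dep_ == "nsubjpass" for t in sent)
            has_agent = any(t.dep_ == "agent" for t in sent)  # the "by ..." phrase
            if has_passive_subject and not has_agent:
                flagged.append(sent.text)
        return flagged

    print(agentless_passives(
        "The police fired upon the fleeing man. The fleeing man was fired upon."))
    # -> ['The fleeing man was fired upon.']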

One final example: Language is used in a variety of ways to reinforce in-group identity and solidarity at the expense of out-group members. Researchers have discovered that, when describing negative behavior, people are more likely to use more concrete phrasing (A is punching B) if the bad actor is a member of the in-group, and more likely to use more abstract phrasing (A is aggressive) if the actor is a member of a competing out-group [7]. The opposite holds true for descriptions of positive behaviors. This is because it is easier to understand behavior as situational, based upon a specific context, when it is described concretely, whereas it seems more essential to the character of the individual when it is described abstractly. So, concrete language helps to describe negative behavior as out of character for the in-group, and abstract language describes negative behavior as an essential characteristic of the out-group member. And vice-versa for positive behavior. As we give our AIs language, and as they use this language to construct representations of the world, we should be aware of how simple differences in the level of abstraction in our phrasings can influence how different groups of people are portrayed. This will be especially important as AIs play a larger role in writing news summaries and social media postings.

Language implicitly and subtly embeds bias and prejudice. When we give that language to AIs, we risk replicating the problem. If we can find ways to remove bias and prejudice from our AIs’ speech, we solve one problem immediately—the problem of bias in robot speech. But over time, we might create a feedback loop that improves the way we humans use language ourselves. One way this can happen is through language style matching (LSM). LSM studies have shown that people in conversation tend to match each other across many different categories of words and word usage. One study even found that in speed-dating scenarios, the more that two people matched each other’s language patterns, the more likely they were to want to meet again; and among people already dating, the closer the LSM, the more likely they were to still be seeing each other three months later [8]. If humans tend to match each other in the ways they use language, and if language, as we have seen, carries the history of human bias and prejudice in a variety of subtle ways, then we have an opportunity to create a better version of ourselves in the ways we teach robots to speak to us. Over time, we might find ourselves matching—that is, adopting—their more enlightened ways of communicating about, and symbolically constructing, the world.
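
LSM itself is a simple calculation: for each function-word category, compare the two speakers’ usage rates, then average across categories. The sketch below follows that published formula, but with abbreviated category lists standing in for the nine LIWC function-word categories.

    # Sketch of language style matching (LSM) between two speakers.
    # Category lists are abbreviated stand-ins for the LIWC dictionaries.
    import re

    CATEGORIES = {
        "pronouns": {"i", "me", "my", "you", "we", "they"},
        "articles": {"a", "an", "the"},
        "negations": {"not", "no", "never"},
    }

    def rates(text):
        words = re.findall(r"[a-z']+", text.lower())
        total = len(words) or 1
        return {c: sum(w in vocab for w in words) / total
                for c, vocab in CATEGORIES.items()}

    def lsm(text_a, text_b):
        ra, rb = rates(text_a), rates(text_b)
        scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
                  for c in CATEGORIES]
        return sum(scores) / len(scores)   # 1.0 = perfect style match

    print(lsm("I never thought we would see the results so soon.",
              "We did not expect the data, but I think you will like it."))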

References

1. Caliskan, A., Bryson, J.J., and Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (Apr. 14, 2017), 183–186.

2. Wagner, C., Garcia, D., Jadidi, M., and Strohmaier, M. It’s a man’s Wikipedia? Assessing gender inequality in an online encyclopedia. arXiv:1501.06307. Jan. 26, 2015.

3. Pennebaker, J. The Secret Life of Pronouns: What Our Words Say About Us. Bloomsbury Press, New York, 2013.

4. Beukeboom, C.J. Mechanisms of linguistic bias: How words reflect and maintain stereotypic expectancies. In Social Cognition and Communication. J.P. Forgas, O. Vincze, and J. Laszlo, eds. Psychology Press, New York, 2014, 313–330.

5. van Miltenburg, E. Stereotyping and bias in the Flickr30K dataset. Proc. of the Workshop on Multimodal Corpora. J. Edlund, D. Heylen, and P. Paggio, eds. 2016, 1–4.

6. Ng, S.H. Language-based discrimination. Journal of Language and Social Psychology 26, 2 (2007), 106–122.

7. Maass, A. et al. Language use in intergroup contexts: The linguistic intergroup bias. Journal of Personality and Social Psychology 57, 6 (Dec. 1989), 981–993.

8. Ireland, M.E. et al. Language style matching predicts relationship initiation and stability. Psychological Science 22, 1 (Jan. 2011), 39–44.

Author

Charles Hannon is a professor and chair of computing and information studies at Washington & Jefferson College. He teaches courses in human-computer interaction, the history of information technology, information visualization, project management, and gender and women’s studies. He has published recently in Smashing Magazine and Interactions, and is the author of the 2005 book Faulkner and the Discourses of Culture. channon@washjeff.edu


Copyright held by author. Publication rights licensed to ACM.

