Jonathan Effrat, Lisa Chan, B. Fogg, Ling Kong
After presenting our research about sounds people love and hate at a recent Stanford Media X Conference, we were approached by a program manager from a large software company. She told us a story about how her company learned a hard lesson about sound. They were testing a new version of a popular software title. In this application, each time the user completed a certain task successfully, the software played a feedback sound. This particular sound had been chosen years ago and never questioned, until now. In the new user studies, they watched their users get confused. When users completed the task successfully, they heard the feedback sound, but the typical user reaction was, "What did I do wrong?" The company realized they had made a mistake.
In everyday life we rely on sounds to know when we’re doing well (e.g., clapping, laughter) and to know when we’ve made a mistake or are in danger (e.g., car horn, fire alarm). As described in Buxton, Gaver and Bly’s work on nonspeech audio, sounds also matter in software experiences, from computer games to Web sites to productivity applications. Two years ago our lab ran some controlled experiments to see if sound alone can change user behavior. Our conclusion: Yes, it can. Even during a short experiment, positive and negative sounds had the power to generate predictable user behaviors. We now hypothesize that over time these new behaviors can turn into habits. All it takes is playing the right sound at the right time. Seeing the power and appeal of sound, we decided to study people’s reactions to sounds more carefully.
In his dissertation, G.T. Fechner, the founder of experimental aesthetics, wrote that aesthetics could be studied "from above" or "from below," the latter being concerned with collecting empirical data of perception. Along these lines, we set out to study how people responded to over 400 sounds, to gather empirical data about their reactions, and to link this to participant demographics. We deployed the study over the Web (see the interface below in Figure 1).
After some trial and error, we found a way to make it work. We eventually had 788 participants in our study; they submitted a total of 15,234 sound ratings. The majority of our participants were 18- to 24-year-olds living in California; however, there were also pockets of diversity, including 262 participants from Asia and participants from 28 states and 20 countries.
So what sounds did people love and hate? In general, we found more agreement about what constitutes a bad sound than a good sound. For example, the loud car alarm and the long beep were both given the lowest possible rating by nearly everyone in the study. The sounds with more favorable ratings (the good sounds) had a broader spread of values. One highly rated sound, a baby’s murmur, had a very large distribution. An interesting note is that women rated the baby murmur sound 17 percent higher on average than men did. Associations with the sounds seemed to be a key factor influencing ratings. For example, after hearing the sound of a phone ring, one user wrote: "A good old-fashioned phone ring: a job offer or a call from a girlfriend." He rated it a five out of five. The variability in response to many of these sounds strongly suggests that the listener’s perceptions and context need to be taken into account, rather than assuming sounds elicit intrinsic or universal responses.
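The asymmetry described above — strong consensus on bad sounds, wide disagreement on good ones — is the kind of pattern that shows up in the per-sound mean and spread of ratings. A minimal sketch of that analysis, using invented 1-to-5 ratings (the study's raw data is not reproduced here):

```python
from statistics import mean, stdev

# Hypothetical ratings on a 1-5 scale; sound names and values are
# illustrative only, not the study's actual data.
ratings = {
    "car_alarm": [1, 1, 1, 2, 1, 1, 1, 1],      # hated: low mean, tight spread
    "baby_murmur": [5, 3, 4, 5, 2, 5, 4, 3],    # liked: higher mean, wide spread
}

# Summarize each sound: the mean orders sounds from loved to hated,
# while the standard deviation measures how much listeners agreed.
for name, scores in ratings.items():
    print(f"{name}: mean={mean(scores):.2f}, stdev={stdev(scores):.2f}")
```

On data like this, a hated sound would show both a low mean and a small standard deviation (near-unanimous dislike), while a liked sound would show a higher mean but a larger standard deviation — matching the broader spread the study observed for favorable sounds.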
Our highest rated sounds generally related to escapism (e.g., fantasy chimes, birds singing) and pleasure (children laughing). The sounds people hated most generally related to disruptions (e.g., alarms, beeps, car crashes) or sadness (e.g., a woman sobbing). Based on our data we created a ranking with hundreds of sounds, with the most favorable sounds at the top, ranging down to the most hated (see box).
We realize that our lab research doesn’t go the full distance in determining which sounds developers should use in a given setting in a specific application. That’s the next stage of research we leave to those who create commercial products. However, we believe this type of academic research can help companies with their practical projects. Information about sounds people love and hate can help developers determine which sounds they should consider for their product and which sounds they should avoid. In other words, by doing a first pass of research in an abstract setting or by drawing on research performed elsewhere, a company can save time by quickly identifying a small number of sounds to test further in the context of their application.
So as we think about the company that learned a hard lesson about research, we wonder: What was the cost of choosing the wrong feedback sound? We don’t know. There’s probably no accurate way to assess the impact. But one thing that seems clear: It’s risky business to rely on intuition alone when selecting feedback sounds.
©2004 ACM 1072-5220/04/0900 $5.00