Columns

XXVI.1 January–February 2019

Designing recommendations


Authors:
Elizabeth Churchill


Books, products, movies, music—every day, our digital lives are strewn with suggestions and recommendations that invite us to direct our attention to specific content and spend our money on specific products. Driving and transportation routes are suggested to us based on past traffic patterns and current traffic load. Search-query autocomplete options and autocorrect spell-checking reduce typing effort.

Such suggestions and recommendations are often extremely useful.

However, there is a lot of work to do to make such systems even more useful. We are surely all aware of how wildly off-base suggestions and recommendations can be—on occasion, hilariously so. Damn You Autocorrect (DYAC) [1] is just one site that documents ridiculous autocorrect suggestions.

Yesterday, my favorite online-shopping site recommended a truly ugly skull-shaped table lamp to me. A few days ago, I was recommended a pair of shoes that I had purchased weeks ago.

Such quirks range from entertaining to puzzling to irritating, and most do not have serious consequences. When off-base suggestions and recommendations are consequential, however, the examples are deeply troubling. Driving directions send drivers into lakes and onto not-yet-completed bridges. Apple's autocorrect was reported to be completing medication names incorrectly, substituting completely different drugs and resulting in the recommendation of life-threatening dosages [2]. "Smart" policing technologies that rely on racial profiling can proffer suggestions to authorities that result in high-consequence errors in perpetrator identification.

These are all examples of what the sociologist Robert K. Merton, writing in the 1930s, termed unanticipated consequences [3]—what we now commonly call unintended consequences.

I believe we can, and should, design better systems, using design approaches that focus on reducing the likelihood of such unintended consequences. The driving example reflects both a data failure and a lack of understanding of drivers' attentional state—people are often distracted. The second example shows that a lack of expertise may leave users unable to distinguish correct from incorrect suggestions—recognizing and/or disambiguating complicated, unfamiliar drug names is hard. The last example reminds us that suggestions and recommendations operate in a milieu of existing social and cultural biases. All the examples point to the fact that we, as users, overinvest in the "smarts" of suggestion/recommendation systems, over-trusting their output while neglecting to bring sufficient critical attention to the fallibility of our own reasoning. They also point to the fact that the designers of these systems did not pay enough attention to the fundamentals of human information processing and decision making "in the wild."

Where can HCI and design thinking help? Here are some concrete areas where we can collaborate with the architects and developers of suggestion/recommendation systems to prevent such errors:

Raise awareness of biases in the data that underlies suggestions and recommendations. We get recommendations for things we already own because there is a large gap in the system's knowledge of our habits beyond specific areas of interaction. Such missing data is an example of bias that could be addressed by designing systems to be more collaborative and conversational—seek input from the recipients of the suggestions and recommendations.
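As a minimal sketch of what "seeking input" could look like in code (all names and data here are hypothetical, for illustration only), a recommender might keep an explicit exclusion list that users fill in directly, rather than inferring everything from click logs:

class FeedbackAwareRecommender:
    def __init__(self):
        # Items the user has explicitly ruled out ("already own it", "not interested").
        self.excluded = set()

    def record_feedback(self, item, reason):
        # Store explicit user input instead of guessing from click logs.
        self.excluded.add(item)
        print(f"Noted: {item!r} excluded ({reason})")

    def recommend(self, candidates, k=3):
        # Rank by score, then drop anything the user has ruled out.
        ranked = sorted(candidates.items(), key=lambda kv: -kv[1])
        return [item for item, _ in ranked if item not in self.excluded][:k]

recommender = FeedbackAwareRecommender()
candidates = {"running shoes": 0.92, "skull lamp": 0.88, "desk chair": 0.75}
recommender.record_feedback("running shoes", "purchased elsewhere weeks ago")
print(recommender.recommend(candidates))  # ['skull lamp', 'desk chair']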

There are other biases in data; Ricardo Baeza-Yates laid out some of them in a recent CACM article [4]. Examples include activity bias (we tend to focus on activity, ignoring that inactivity is also a signal), data bias (what we decide to measure reflects a bias of its own), and sampling bias (how we sample from what we collect can also introduce bias). Baeza-Yates's article offers more examples and a worthwhile deeper dive into the different forms of bias [5].
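A toy calculation, with invented numbers, shows how activity bias distorts what the logs appear to say: a small, highly active minority dominates the event stream, so naive aggregates describe that minority rather than the whole population.

# Toy illustration of activity bias; every number here is invented.
population = {
    # group: (users, logged events per user, fraction who like the item)
    "highly_active": (1_000, 50, 0.80),
    "mostly_silent": (9_000, 1, 0.20),
}

# True population preference: weight each group by its number of users.
total_users = sum(n for n, _, _ in population.values())
true_rate = sum(n * like for n, _, like in population.values()) / total_users

# Naive log-based estimate: weight each group by how many events it emits,
# which is what aggregating raw interaction logs implicitly does.
total_events = sum(n * events for n, events, _ in population.values())
logged_rate = sum(n * events * like
                  for n, events, like in population.values()) / total_events

print(f"True preference rate: {true_rate:.2f}")    # 0.26
print(f"Log-derived estimate: {logged_rate:.2f}")  # 0.71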

Raise awareness of human cognition and information presentation. HCI researchers and designers are well aware that the way information is presented affects how people process it. It is no surprise to us that the presentation format of a recommendation affects how people perceive and receive it. The complexity of the information, the presentation format and appearance, the recipients' context at the time of presentation and their ability to attend to the information, the recipients' expertise, and the recipients' goals all affect the impact of the information.

Design reflective, interrogable, and conversational systems. Suggestions and recommendations need to be interrogable. We need to move away from black-box systems toward systems that can be asked about data provenance, reliability, and reasoning rationale. There has been a clamor to create explainable AI (XAI) systems. Judea Pearl, a recipient of the ACM A.M. Turing Award, has called for us to change the kinds of systems we build. His recent book with Dana Mackenzie [6] argues for systems that reason more like humans—systems able to reason with what-if structures, moving from inexplicable correlation to causal reasoning. We need causal reasoning, reflective systems, and suggestion/recommendation rationales presented in interpretable ways.
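As one rough sketch of what "interrogable" might mean in practice (the fields and values below are hypothetical), a recommendation could travel with its evidence and provenance, ready to answer "why am I seeing this?":

from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str       # data provenance: where the signal came from
    collected: str    # when it was collected
    reliability: str  # a frank statement of how trustworthy it is

@dataclass
class Recommendation:
    item: str
    rationale: str
    evidence: list = field(default_factory=list)

    def explain(self):
        # Answer "why am I seeing this?" instead of staying a black box.
        lines = [f"Recommended: {self.item}", f"Because: {self.rationale}"]
        for ev in self.evidence:
            lines.append(f"  - {ev.source} ({ev.collected}): {ev.reliability}")
        return "\n".join(lines)

rec = Recommendation(
    item="table lamp",
    rationale="you browsed desk lighting twice this month",
    evidence=[Evidence("browsing history", "this month", "direct observation"),
              Evidence("similar-shopper model", "last quarter", "inferred, noisy")],
)
print(rec.explain())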


More mundanely, and more readily within reach, let's build systems that tell us the certainty with which a recommendation is made. People work well with suggestions and recommendations that come with a rationale; given the evidence for a suggestion or recommendation, people can decide whether it is really for them. By contrast, when no argument is offered there is little to work with, so what is suggested or recommended must be taken on blind faith. Even offering a confidence level for the suggestion or recommendation would be helpful.
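A minimal sketch of what that could look like (the thresholds and wording are invented choices, not established guidelines): each recommendation carries a confidence score, and the phrasing changes with it instead of asserting everything with equal authority.

def present(item, confidence):
    # Phrase a recommendation according to how sure the system is.
    if confidence >= 0.9:
        return f"Recommended for you: {item} (confidence {confidence:.0%})"
    if confidence >= 0.6:
        return f"You might like {item}, but we're not sure (confidence {confidence:.0%})"
    # Below a floor, say so plainly or hold the suggestion back entirely.
    return f"Weak guess, shown for transparency: {item} (confidence {confidence:.0%})"

for item, conf in [("trail shoes", 0.95), ("desk chair", 0.70), ("skull lamp", 0.35)]:
    print(present(item, conf))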

Focus on failure—don't assume success. While designing and building systems, we should consider what is at stake and ask ourselves: What is the price of failure, and what would UNDO look like? We need to ask what it costs to undo the consequences of actions taken on the basis of algorithmic suggestions and recommendations. It is easy to dismiss or ignore a product recommendation, but far less easy to recover from the trauma of being wrongly apprehended by authorities convinced of your guilt, their certainty powered by uncritically accepted and little-understood computation.
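One way to make that question concrete, sketched here with invented actions and cost labels, is to gate high-consequence actions behind human review whenever the cost of undoing a mistake is severe:

# Hypothetical undo costs for acting on different kinds of suggestions.
UNDO_COST = {
    "show a product ad": "trivial",        # easy to dismiss or ignore
    "autocomplete a drug name": "severe",  # a wrong medication may be irreversible
    "flag a person to police": "severe",   # wrongful apprehension is hard to undo
}

def act_on_suggestion(action, confidence):
    # Gate high-consequence actions behind human review.
    cost = UNDO_COST.get(action, "unknown")
    if cost == "trivial":
        return f"{action}: act automatically; a wrong guess costs little"
    return f"{action}: defer to a human (undo cost: {cost}, confidence {confidence:.0%})"

for action, confidence in [("show a product ad", 0.50),
                           ("autocomplete a drug name", 0.80),
                           ("flag a person to police", 0.95)]:
    print(act_on_suggestion(action, confidence))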

Rethinking Recommendations

To avoid an escalation in the number of negative unintended consequences, we need to rethink algorithmic recommendation. We need to think about the why, where, and how of algorithmic suggestions and recommendations. We need to be more proactive in exploring the potential for high-consequence versus low-consequence errors. We need to ask: How trustworthy is the information presented? How is the information presented—what is present and what is missing? What is salient? What is the expertise of the person to whom the recommendation or filtered information is presented? We HCI researchers and practitioners have been grappling with these kinds of issues for a long time—perhaps having more influence on the design of recommendation systems would be a good thing.

References

1. http://www.damnyouautocorrect.com

2. https://twitter.com/slatestarcodex/status/944739157988974592

3. Merton, R.K. The unanticipated consequences of purposive social action. American Sociological Review 1, 6 (1936), 894–904; http://www.d.umn.edu/cla/faculty/jhamlin/4111/2111-home/CD/TheoryClass/Readings/MertonSocialAction.pdf

4. Baeza-Yates, R. Bias on the web. Communications of the ACM 61, 6 (Jun. 2018), 54–61.

5. Also see the November–December 2018 issue of Interactions, which featured some special topic articles that I curated with Phillip van Allen and Mike Kuniavsky.

6. Pearl, J. and Mackenzie, D. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018.

Author

Originally from the U.K., Elizabeth Churchill has been leading corporate research at top U.S. companies for the past 18 years. Her research interests include social media, distributed collaboration, mediated communication, and ubiquitous and embedded computing applications. [email protected]


Copyright held by authors

