Features
Special topic: Designing AI

XXV.6 November–December 2018
Page: 58

Assessing and addressing algorithmic bias in practice


Authors:
Henriette Cramer, Jean Garcia-Gathright, Aaron Springer, Sravana Reddy

Unfair algorithmic biases and unintended negative side effects of machine learning (ML) are gaining attention, and rightly so. We know that machine translation systems trained on historical data can make unfortunate errors that perpetuate gender stereotypes. We know that voice devices underperform for users with certain accents, since speech recognition models are not necessarily trained on all speakers' dialects. We know that recommendations can amplify existing inequalities. However, pragmatic challenges stand in the way of practitioners committed to addressing these issues: there are no clear guidelines or industry-standard processes that can be readily applied in practice, specifying which biases to assess…



