Authors:
John Carroll
AI is incorporated into recommendation services, driverless vehicles, surveillance and security, social media, and many other types of systems. Yet people who interact with such systems often do not understand what the AI does or how it works. This lack of transparency can create confusion, frustration, and mistrust. And indeed, specific socially untoward consequences of algorithmic interactions have been widely identified.

Insights
→ Explanatory AI is an important approach to make it possible for humans to trust AI.
→ Explanations are diverse; logical explanations are both more and less than humans usually need.
→ Pragmatic explanation is a…