Jeffrey Bigham, Walter Lasecki
Automatically providing access to information for people with disabilities when and where they need it requires solving some of the most difficult problems in computing. A characteristic example we have been exploring is the real-time conversion of speech to text. Real-time captioning allows deaf and hard of hearing (DHH) people to access the aural speech around them at interactive speeds (less than five seconds from spoken word to text caption). This is a vital accommodation in classrooms, where DHH students need access to mainstream education. Despite tremendous progress, automatic speech recognition (ASR) still cannot be used in…