Authors:
Jeffrey Kim, Arnie Lund, Caroline Dombrowski
Since at least 1995, researchers have worried about how to analyze ever-growing, overwhelming, gargantuan data. For much of big data research and analysis, computational feasibility is no longer the challenge. Peter Huber's taxonomy of dataset sizes classified datasets of 10^12 bytes as "ridiculous," with storage on "robotic magnetic tape" [1]. Times have changed. Complex computations still require far more processing power, but now the problem is more often that little is being done with big data, or that only rudimentary, surface-level analysis is performed, which is a shame, because larger, more complex datasets are also,…