Gadgets: We love them, we hate them, we can't live without them. They come in a dizzying variety of sizes, shapes, colors, and degrees of utility. Some gadgets just occupy space, fulfilling our need to have more stuff. Some, like cellular phones, have become indispensable. Some gadgets are fashion accessories or social statements, serving to start conversations or signal membership in an elite club. Some facilitate mobility; some engender couch potatoes. All of them present HCI challenges.
Some of us, the Early Adopters, feel the need to have the latest and greatest gadgets, no matter what need they fulfill. Others of us might better be termed Never Adopters: those who are not driven by fashion or marketing-generated needs, but who want to ensure that a new acquisition fills an actual requirement in our lives, a need not otherwise filled by an existing tool. I'm in the latter group. I didn't buy a cell phone until the need was acute: to be instantly available to my wife after our twins were born. My current cell phone is nearly five years old. It has no color display, no games, no camera. It sends and receives phone calls, which is what I need. That undoubtedly makes me part of an exclusive niche market hated by gadget vendors.
I sometimes wonder whether new features and gadgets are demanded more by marketing executives than by potential users. Is gadget design driven by user requirements, or by novelty? Are researchers and designers focused on usability and user-driven functionality? There's plenty of research on mobile interfaces, as evidenced by presentations at CHI and other conferences such as Mobile HCI. Does any of this research make it into real products that real users need or want? Or are devices being designed purely to push a brand, to fulfill a marketing requirement, or to be novel just because being different will drive sales? Isn't it important that devices fulfill user needs? Or is a few months' worth of sales enough to justify each minor design variation?
Apple's iPod may not have the ideal user interface, but it seems to perform well enough (to "satisfice") to be accepted by millions of purchasers. Each new variation drives more sales, and drives users to the Macintosh brand. Can "good enough" be any better than that? At IBM's New Paradigms in Using Computers conference in July 2005, former PalmSource CEO David Nagel described how it took him only a few minutes to learn the iPod's interface, but hours to learn to use the Palm LifeDrive. We've all had the experience of using only a small portion of the power of a software application (Microsoft Excel, for instance). Are we doomed to using only part of our gadgets' functionality as well? Do we pay for power we don't use and never wanted in the first place?
In his book Ambient Findability, Peter Morville discusses the "intertwingling" of gadgets and applications, and the creative uses their users find for them. GPSs (global positioning systems) were originally developed for mission-critical navigation of military vehicles and satellite tracking. Now GPS receivers are used for entertainment such as geocaching or the Degree Confluence Project (www.confluence.org). GPS-enabled cell phones help us to find friends and meet at a moment's notice. I doubt that the developers of the first GPS ever anticipated that their high-tech tools would one day facilitate pub-crawling. Nor, I expect, did the developers of text messaging (SMS, or short message service) expect to facilitate a president's resignation, forced by text-messaging smart mobs in the Philippines in 2001.
At DUX 2005 (www.dux2005.org), Dr. Edward Tenner, author of Why Things Bite Back, discussed the unintended consequences of technology. Many unintended consequences are negative. Some, however, are positive. Do designers of gadgets try to anticipate or control the uses to which users will put their devices? Do they design for unintended consequences, positive or negative? Should they? Can they encourage innovation after the device has left the manufacturer?
What's the best design for any gadget when you don't know the range of eventual creative uses? What does HCI have to say about designing for unintended consequences, and about encouraging modifications and novel uses? Do we design for control of user experiences, or for user input on the ultimate experience? Do we need a new branch of HCI: HGI, human-gadget interaction?
I assert that interaction designers must forgo control: Control is an illusion. Once your device is out in the world, humans (the uncontrollable, unpredictable element in human-gadget interaction) will put their collective creativity to work. Users breathe new life into tools that they find useful. The more system creators try to control usage, the less attractive their creations become to users. John Gilmore said that a virtue of the Internet is that it "interprets censorship as damage and routes around it." If censorship is a form of control, do we value gadgets that enable rerouting? If our gadgets don't allow repurposing, will we still use them, or toss them out and find devices that support our needs? How can applying what we learn from HCI encourage design for adaptation? Does packing more features into a smaller package enable innovative use, or is it better to devote gadgets to a single purpose?
The July-August 2006 issue of <interactions> will focus on gadgets in all their glory, and the challenges and consequences faced by HCI professionals in making ever-smaller and ever-more-powerful devices both usable and desirable.
©2006 ACM 1072-5220/06/0300 $5.00