Rethinking the fundamentals

XVI.3 May + June 2009

FEATURE: Is usability obsolete?


Author:
Katie Scott

Usability and HCD grew to prominence with the expansion of the Web: We’ve written the standards, developed the testing protocols, and had a hand in designing the leading systems. While the roots of the field are much older, growth has been significant in the past 15 years. We’ve gone from invisible wonks to key “technology untanglers” [1].

Unfortunately, our field has barely evolved in that time frame. While the computing universe has shifted dramatically, we’ve clung to the same methods, advice, and processes. Without significant changes, the growth and influence of our field are unlikely to continue. Although the New York Times named usability a hot “emerging career field” in 2007, in many ways the heyday of usability has already passed. Current usability work is a relic of the 1990s, an artifact of an earlier computer ecosystem, out of step with contemporary computing realities.

Usability can no longer keep up with computing: The products are too complex, too pervasive, and too easy to build. And in our absence, users and engineers are beginning to take over the design process. Five trends demonstrate the growing gap between usability theory and commercial practice—the “new realities” of computing haven’t been truly embraced by the usability community. The trends are, at a minimum, making traditional usability more difficult, if not irrelevant in the new paradigm.

Products Have Become Too Complex

First, the idea of a “computer system” has obviously changed significantly. Initially, usability and HCI research methods were developed to tackle stand-alone systems or individual devices or programs. As computers evolved, we tweaked our techniques to work with networked applications, linking a single provider to a group of consumers. Computing has continued to evolve, with users now interacting with a vast network of hosts, services, applications, and platforms. Most first-world inhabitants use this interconnected infrastructure on a daily basis, often without noticing it until the system breaks down.

Take the example of a location-based service, like the “Urban Spoon” application for the iPhone. The application provides real-time restaurant recommendations based on proximity to your current location. You launch the application on the handset, which identifies your location via GPS. The system returns potential restaurant ideas based on location and ratings in its database (i.e., highly rated restaurants are shown more often). With a shake of the phone, recommendations appear, cleverly, like reels in a slot machine. Urban Spoon collects ratings and reviews from users, as well as aggregated information from other sites like CitySearch and Yelp.

There are at least six pieces to the “enterprise” of Urban Spoon. First, there are three traditional interface components: the “store” interface for purchasing and downloading the application (managed by Apple), the slot-machine interface for requesting and reviewing restaurants (part of the iPhone application, from Urban Spoon), and the rating interface (part of the Urban Spoon website). But there are also three unseen infrastructure components that affect the application’s usability: the quality of the restaurant data (e.g., how many restaurants are available in any given area), the resolution of the GPS satellites (e.g., does it know I’m in Boston, or does it realize I’m in Somerville near Davis Square?), and the responsiveness of the system (e.g., do new restaurants appear in a timely fashion?).

Even if the system were perfect in every other way, a major breakdown in any of these six components would cripple the rest of the system in the eyes of users. Imagine a recommendation system with perfect GPS but with a very limited restaurant database; similarly, imagine if the system couldn’t resolve your location to within a mile, making the quality of the actual restaurant database irrelevant. In both cases, users would cancel the service because it just “didn’t work.”

The current suite of usability methods is inadequate for this new, enterprise-reliant context. Testing an enterprise application is vastly different from testing a standalone product. How do we accurately test the usability of an overall enterprise, like Urban Spoon, while it’s still in development? How do we ensure it passes both the ease-of-use and utility benchmarks that users require? We could easily test narrow portions of the application using traditional techniques: Lab testing could evaluate the process of entering constraints or purchasing the application. We could even test the latency or resolution alternatives in a simulation (e.g., is 100-foot resolution adequate for dense urban settings? Is the resolution requirement equal in both urban and suburban settings?). Unfortunately, the sum of these narrow tests would not be able to get at the true value of the service or its real-world usability.
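
To make the “narrow simulation” idea concrete, the sketch below shows one way such a resolution test might be set up. It is my illustration, not the author’s: the restaurant densities, GPS error levels, and top-10 overlap metric are assumptions chosen for the example.

# Minimal Monte Carlo sketch (illustrative only): does GPS error of a given
# size change which restaurants a nearest-N recommender would return?
# Densities, error levels, and the top-10 overlap metric are assumptions.
import math
import random

FEET_PER_MILE = 5280.0

def nearest(restaurants, x, y, n=10):
    """Return the n restaurants closest to the point (x, y), in miles."""
    return sorted(restaurants, key=lambda r: math.hypot(r[0] - x, r[1] - y))[:n]

def mean_overlap(density_per_sq_mile, gps_error_feet, trials=200, area=2.0):
    """Average overlap between the 'true' and 'GPS-perturbed' top-10 lists."""
    error_miles = gps_error_feet / FEET_PER_MILE
    count = int(density_per_sq_mile * area * area)
    total = 0.0
    for _ in range(trials):
        restaurants = [(random.uniform(0, area), random.uniform(0, area))
                       for _ in range(count)]
        x = y = area / 2.0                      # the user's true position
        gx = x + random.gauss(0, error_miles)   # position as the GPS reports it
        gy = y + random.gauss(0, error_miles)
        true_top = set(nearest(restaurants, x, y))
        gps_top = set(nearest(restaurants, gx, gy))
        total += len(true_top & gps_top) / 10.0
    return total / trials

if __name__ == "__main__":
    # Illustrative densities for a dense urban core vs. a suburb.
    for label, density in (("urban, ~300/sq mi", 300), ("suburban, ~30/sq mi", 30)):
        for error_ft in (100, 500, 2500):
            print(f"{label:18s}  error {error_ft:>4} ft  "
                  f"top-10 overlap {mean_overlap(density, error_ft):.0%}")

A simulation like this can answer the narrow question of whether 100-foot resolution changes the recommendations a user sees, but, as argued above, stitching together such piecewise results still misses the real-world usability of the whole service.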

We see sophisticated clients who still focus their design on lab testing and scripted usability, when it’s not really appropriate for their networked environment. Certainly, there is a narrow role for “sanity checking” the usability of each interface within the enterprise, and there are naive clients who see “usability testing” as shorthand for all user-centered design. But scripted usability testing forces the product into unnaturally small pieces, tested in series. Worse, it means you are setting yourself up to miss the larger design issues (e.g., does the latency make the core idea impossible? Does actual context of use make this feature meaningless, or does the size of the hardware make this product unattractive?). Lab testing, once regarded as the gold standard, isn’t meaningful for most current products. Products are no longer “rooted” in singular settings; they are collaborative, interwoven, and interdependent. To be accurate and relevant, the testing must be mobile, modular, and contextual (none of which can be accomplished in a lab).

With service-based, cooperative enterprise applications, we are limited to hacked-together usability methods (i.e., a series of narrow simulations) or rough design estimation (i.e., a combination of contextual studies and design research) to try to understand the benefits and pitfalls with the system. And these enterprise challenges are becoming ubiquitous: location-based services like Urban Spoon, social-networking applications like Facebook and Twitter, or third-party platform sellers like eBay and Amazon. We need to replace our narrow usability methods with rich design methods that can address these types of enterprise-design challenges.


Computing Has Become Too Pervasive

As computing devices get smaller and more pervasive, they are used in a broadening array of contexts. For example, company email accounts were once reserved for in-office communication during the workday. Then, in the 1990s, employees were supplied with laptops to work anytime, anywhere, introducing the world to the concept of telecommuting. Now, with email-enabled phones, employees can read and respond to work email at any hour of the day or night, from truly anywhere.

Beyond mobile phones, there are thousands of other mobile computing devices. There are satellite radios and Internet radios that can be used in the car, on a boat, out camping, or within the home. There are fitness systems like the Garmin Forerunner or the BodyMedia GoWearFit that sense and track physical characteristics like heart rate or skin temperature to calculate fitness metrics. The sensors can be used during athletic or day-to-day activities, or can be repurposed as health-reporting tools. There are complex chips and sensors in late-model cars that can track our driving behavior, monitor our safety, and improve our driving skills.

The point here is not that computing has moved into the domain of automobiles, fitness, or music. It’s that these computing applications themselves are pervasive: They live in a variety of different environments, scenarios, and contexts of the users’ choosing. While context of use has always been a part of usability, the variability is making our job much more difficult. In the past, we could study and then simulate the anticipated context of use: an office, a vehicle, a kitchen, or a school. Even if the context were outside the norm (e.g., an operating room rather than an office), we could reasonably estimate the context as a category. In general, the per-device variability was much lower.

Most contexts of use today, however, live in the “long tail.” The increasing mobility and ubiquity of devices make predicting the context of use, and thus its usability, more difficult. Sure, we can test the interface in an operating room, or in a yoga studio, or in the airport parking lot. Besides the obvious cost of repeating our tests, how can we even be sure we’ve chosen the right contexts as our benchmark? These three examples represent a raw sampling of the 1,529 potential contexts of use for the product. And as the context of use varies significantly from user to user, or day to day, the homogeneity of usability testing becomes a poor proxy for real use. And, again, perhaps the richest data comes from niche contexts within the long tail (e.g., ice rinks, car repair shops, hospital waiting rooms). We need methods that can anticipate and account for unexpected and continuously changing contexts of use. And we need to see the richness and variability as an opportunity for universal design.

Products Are Too Easy To Build

Introducing a product or service used to involve an inevitable production lag. With traditional hardware products, there were upfront design and development costs, plus huge production costs. The factory had to be built or reconfigured, new machinery or dies developed, and workers hired or trained. The initial delays were considerable, and they were repeated each time a change was made.

As we all know, cycle time has been shortened dramatically for software-based products and services. While the early design and development phase still exists, the deployment phase has shrunk to essentially nothing. Design and development can now be intensely iterative, with little additional cost. Downloaded applications are upgraded with weekly service patches; new features appear, often without notice, within Web applications; Web services are refined almost hourly as errors are fixed. In the growing open source community especially, the “invisible hand” of shared revision and correction has dramatically accelerated the pace of development. A study of intentional errors inserted in Wikipedia showed they were removed in less than two minutes [2]. While errors in Wikipedia are easier to fix than errors within programs, the fast pace is still indicative of a greater trend.

Certainly, the longer cycles still exist for products heavy in industrial design. Early vehicle GPS systems cost thousands of dollars and relied on embedded media. The early systems couldn’t be upgraded as available data improved, making the navigation system look embarrassingly obsolete long before the vehicle body did. Even the agile and vaunted Apple still releases its hardware on annual cycles. But as more physical products have embedded software and auxiliary services, they can essentially become self-correcting as well. We’re beginning to see this trend extending to other devices: television receivers that “reset” themselves to receive new channels, GPS systems that learn “shortcut” routes from other drivers, and gaming systems that update and patch in new features.

But this quick iteration cycle is a threat to traditional usability in two ways: Deployment begins to trump testing, and upgrades begin to overrule design.

With such short development cycles, deployment can easily replace even the cheapest or most realistic testing. Beta tools that would have once lived only as paper prototypes can be introduced as rough “products” and then refined to find a wider audience. The beta prototype can simply be reworked until it is a marketplace success, without any formal usability or design. While this may not be the most efficient development method, it often appears to be. Adding design tasks, usability tests, or contextual studies easily looks costly and unnecessary. While those tests could provide rich insight into why a service is failing for users or how it could be improved, that data is qualitative and formative. The pace of iteration reinforces an existing usability challenge: making a case for qualitative findings when tomes of quantitative data are available.

Second, the fast iteration cycles also reduce the focus on upfront design work—shifting the focus toward ongoing correction and revision. The lure of later revisions makes each individual deployment less crucial. Unpopular design and usability issues can be pushed off indefinitely to be part of a “big redesign” that never materializes. Usability annoyances can be ignored until they become entrenched parts of the product. If the system “works” on metrics that the owners care about, there is a decreasing pressure to “get it right.” Usability and design become add-on fixes or upgrades, rather than initial product drivers.

We need to make usability and design an integral part of the development process, at whatever rate it’s conducted. The speed of agile development and the constant deployment pressure must be embraced as an opportunity: for rich data, for iterative design cycles, and for immediate answers.

Users Can Design Their Own Products

An ever-expanding base of users is repurposing or reimagining how their products are used by modifying, tweaking, adding, building, etc. This work was once limited to a small population of hackers, but is expanding to a larger segment of the user base. According to research on user-driven innovation, these adaptations can represent up to 40 percent of the market [3]. The research also shows that while many of these modifications are small, their makers are often lead users. Their innovations signal future trends or unmet needs in the broader market. This adaptation also creates a “double usability challenge.” First, the system must provide users a straightforward way to make changes (e.g., APIs, help files, parts libraries). The second challenge is guiding new users to make their “new products” usable as well, for their own use or for potential customers.

The online multiplayer game World of Warcraft provides a great example of user-driven innovation with a low barrier to entry. Users download the main game, pay a subscription fee, and can interact with other players through an online metaverse. Users develop their own collaborative experiences within the WoW environment: joining guilds to team up with other players, choosing their character and style of play, and deciding their path through the game’s quests. A subset of highly engaged users also builds and maintains a complete infrastructure outside of WoW to support their guild and individual game play. There are loot-tracking systems, project-management tools for planning large-scale events, and a myriad of UI modifications that enhance game play. These complex modifications are built, downloaded, revised, enhanced, and shared through open source forums.

Similarly, less tech-savvy users also have the ability to make modifications and “produce” their own products. A plethora of online tools has allowed users with limited technical skills to publish, communicate, create, and organize. Even 10 years ago, publishing a blog or sharing photos required dedicated software and a certain level of expertise. Sites like Blogger, Facebook, and Flickr provide usable streamlined functionality, templates, workflows, and platforms that allow novice users to produce content and create their own sites. New parents provide baby blogs for extended friends and family, for example. While Facebook and Blogger pages may not have high “design style,” they do allow novice users to publish content and customize their pages in sophisticated ways.

These users are engaged, innovative, and creative, but they are unlikely to be understood or addressed using standard usability methods. Again, we face the variability problem and the long tail. Standard usability methods rely on the similarity between participants (e.g., 90 percent of users did X) rather than focusing on individual innovations or usage patterns. The novelty and insight of the lead users can get lost within the aggregate of the usability collection. And we can only expect modification barriers to fall further and the percentage of hackers to continue to grow. If user-driven innovation and content continue increasing, anecdotal evidence will begin to outweigh the generalized statistics of usability. We will need to shift toward design methods that can identify lead users, their unique characteristics, and homegrown innovations in order to remain relevant.

Engineers Can Design on Their Own

When the usability field exploded, the Web was a nascent tool with few standard paradigms. Usability’s rise (and potential fall) mirrors the Web closely. Early on, usability was needed to evaluate potential pitfalls with existing sites and propose guidelines to design against. The Nielsen Norman Group made a mint by providing detailed guidelines for specific contexts: for e-commerce, site maps, gift-certificate workflows, corporate intranets, etc.

And while these guides are useful, there is now a flood of successful examples to emulate and an archive of research to mine. We have discovered, tested, and refined the best ways to design basic tasks: organize a form, display a pull-down menu, define pagination, highlight items in a list, etc. Certainly, usability pioneers like Jakob Nielsen deserve a hat tip for laying the crucial groundwork. Looking forward, however, we can reasonably assume that many of the simple problems have been solved and we are working up the ladder of complexity. If many of the basic usability problems are “fixed,” are more complex assessment methods needed to address the more complex issues that remain?

Second, many of the traditional usability methods quantify data that we no longer care about. Lab tests, heuristic evaluations, and computational models focus solely on goals like efficiency, accuracy, and initial ease-of-use. While these metrics were relevant early on, they are rudimentary at best. Common system-design techniques like use cases and scenarios should make fast, straightforward, and learnable UI design a given.

Again, there are thousands of relevant, successful, timely examples to baseline against. And the “new” metrics like affect, stickiness, buy-in, loyalty, and engagement are nearly impossible to test within the confines of classic usability. How can we revise our core tool set toward the new metrics? How do we reprioritize the services we teach, use, and sell based on the current environment (rather than the past)?

Lastly, with the growth of the Web and usability, clients are likely to know the underlying usability principles, be familiar with the core heuristics, and have already solved the obvious “gotchas” in their products. They may even have in-house usability departments, labs, and protocols. Fewer and fewer clients need to be reminded of the basics. The heuristics we test for and baseline against are pervasive; at some level, we’ve put ourselves out of business. We need to provide more to clients than the same basic assessment from a decade ago. But how do we work with embedded usability departments and dated testing protocols to continue improving our clients’ products? Are different methods, different deliverables, or different workflows needed to address the “new usability”?

What Do We Do Now, Knowing It’s Terminal?

Given these five impending trends, the field of usability must change to survive. We can’t continue our practice on the current trajectory, pretending the environment around us is static. If our research methods don’t currently feel outmoded, they will within the next five years. These trends are not mere “blips” on the radar, but structural changes to which we must adapt to avoid being pushed aside. While the core principles of usability are universal—active user involvement, iterative and multidisciplinary design, appropriate pairing of users and technology—our techniques and methods need to catch up.

Looking more broadly, the usability community must find ways to embrace these trends, rather than hide from them. Certainly, there are obvious steps we can take to revise our methods, retire outdated models, and retool our own skills. But these trends also push us away from artificial testing and toward richer and more realistic data. The growing pains are hard, but if we capitalize on these trends, we can drastically improve the user experience. Even better, we can also increase our market influence, making us better predictors of user behavior, better advocates for true user needs, and better critics of design work.

To our credit, there are glimmers of hope, where usability has shifted to address the new computing environment.

  • Revising our work to fit with agile. There are significant efforts toward “agile usability” to address the challenges of rapid deployment. Online giants like Google and Amazon deploy design alternatives, review the results, and then revise the products (often without customers realizing they were part of a “user test”); a sketch of this deploy-and-measure pattern appears after this list. Groups like 37signals and A List Apart offer recommendations and guidelines for adapting user-centered design within fast engineering cycles.
  • Using ubiquity to our advantage. Researchers are beginning to reuse artifacts from the “always on” culture for design purposes. Public photo-sharing sites like Flickr can be mined for photographs (and timelines) surrounding specific events or topics. In our MAYA work, we used Flickr to identify what visitors photographed at trade shows (i.e., to determine what content was engaging and what was ignored). Flickr provided insight into many different trade shows, users, and patterns that would have been impossible with other methods.
  • Breaking our labs into pieces. There are significant efforts to break down usability labs into smaller, configurable components. Hardware costs have dropped significantly, and lightweight testing suites like Silverback have drastically reduced the price of data collection and analysis. Companies are shifting toward the “lab in a bag” model, where teams are dispatched with a prototype, an augmented laptop, and a video or still camera. The lab can travel to the participants and their context, rather than trying to squeeze participants (and their whole external life) into the lab.
  • Making our methods contextual. Guerrilla methods continue to evolve, improving the data while reducing the overhead. Many of those methods are contextual, helping to move research closer to the field. There are evolving methods for “quick turnaround testing” (with a focus on speeding up the analysis process), “listening labs” (which employ contextual, user-driven tasks), plus a ton of revisions and extensions to paper prototyping. These lightweight methods are designed to fit into smaller time frames, deal with looser requirements, or make the testing mobile.
  • Embracing users as designers. Lastly, there is a growing push on collaborative research and participatory design. The users work closely with the usability and design team through diary studies, repeated interviews, and in-home testing. At MAYA, we’ve used long-term participatory design to develop a home monitoring system. We sent users “product kits” that they used to augment their home with “sensors” (made from Post-it notes). We asked users to document the sensors with notes and photographs, and then we would page the users with contextual alarms (e.g., potential leak in the basement) and discuss their response (e.g., needing to call a plumber). This “usage data” helped us to identify unmet needs in the market, before we even began to prototype the system.
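
As referenced in the first item above, the online giants fold evaluation into deployment itself. The following sketch is a minimal, hypothetical illustration of that deploy-and-measure pattern, not code from the article or from any of the companies named: users are deterministically bucketed into a design variant, and logged conversion counts (the numbers here are invented) are compared with a simple two-proportion z-test.

# Hypothetical deploy-and-measure sketch: bucket users into design variants,
# then compare logged conversion rates. Variant names, counts, and the 1.96
# cutoff are illustrative assumptions, not data from the article.
import hashlib
import math

def assign_variant(user_id, variants=("current_design", "new_design")):
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n_a + 1.0 / n_b))
    return (p_b - p_a) / se

if __name__ == "__main__":
    print("user u123 sees:", assign_variant("u123"))
    # Made-up conversion counts for the two deployed designs.
    z = two_proportion_z(conv_a=412, n_a=10000, conv_b=468, n_b=10000)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"z = {z:.2f} ({verdict} at roughly the 95% level)")

Note that what such a pipeline produces is exactly the kind of quantitative, summative data that, as argued earlier, can crowd out qualitative, formative findings.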

References

1. Whitaker, B. “Technology Untanglers: They Really Make It Work.” The New York Times, 8 July 2007.

2. Viégas, F.B., Wattenberg, M., and Dave, K. “Studying Cooperation and Conflict between Authors with History Flow Visualizations.” In Proceedings of CHI 2004, Vienna, Austria, 2004.

3. von Hippel, E. Democratizing Innovation. Cambridge, MA: MIT Press, 2005, 19–22.

Author

Katie Minardo Scott is a designer and researcher at MAYA Design in Pittsburgh, Pennsylvania. Her work focuses on organizing complex information for user understanding in domains like intelligence analysis, situational awareness, medical diagnostics, and engineering research. Scott holds a B.F.A. in design and a master’s in human-computer interaction, both from Carnegie Mellon. She is also a contributing editor for this magazine.

Footnotes

DOI: http://doi.acm.org/10.1145/1516016.1516018

Figures

Figure UF1. The “Urban Spoon” application for the iPhone provides real-time restaurant recommendations based on proximity to your current location.

©2009 ACM  1072-5220/09/0500  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.

 
