People: the well-tempered practitioner

XIV.5 September + October 2007

The problem with usability problems


Author:
Chauncey Wilson

A major goal for usability practitioners is to discover and eliminate usability problems from a product or service (without introducing new problems) within budget, time, and quality constraints. Making products more usable is laudable, but as a field, we have many rancorous debates about the definition of “usability problem.” I’ve been in meetings where tempers have flared over different views on what constitutes a usability problem. Why do we argue so much about something as fundamental as a definition?

Many of the debates are the result of the contextual nature of usability: usability is not simply an absolute property of a product; it emerges from the interaction of a product or service with a particular context of use [2]. Context of use can include the types and frequencies of tasks; domain and product experience; the goals and characteristics of the users; the social, physical, and psychological environments; fatigue; safety; and many other factors. If you change the context of product use, what was a problem in one situation could become a delighter in another.

My goal for this article is to prompt usability practitioners to explicitly consider the contextual factors that affect what we label a usability problem or non-problem. Let’s examine some of the contextual issues with usability problems.

What Is a “Real” Usability Problem?

What makes a usability problem “real”? One definition states that a usability problem is real if it predicts a problem that users will experience in their own environment, which affects their progress toward goals and their satisfaction. Inherent in this definition is the importance of environment—that is, the context within which an activity is embedded. You could observe a problem in a paper prototype test, for example, that you might not consider a real problem—rather, it is an artifact of the paper prototype test procedure and not something that users would experience.

Can you have a real usability problem if no one complains? Yes, I think you can. I’ve tested websites that have a combination of text and background colors with relatively low contrast. In several lab studies, not a single person complained about the text, and we had no measures that were sensitive enough to capture whether their reading or target acquisition was impaired by low contrast. On the other hand, a user-interface inspection by a person trained in human factors principles would likely include the issue as one of “low contrast between text and background.”
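An automated check can make the same call an inspector would. As a purely illustrative sketch (not part of the studies described above), the code below computes a text/background contrast ratio from relative luminance, the approach used in the Web Content Accessibility Guidelines; the colors and the 4.5:1 threshold are assumed examples, not values from our tests.

    # Sketch: flag low text/background contrast automatically.
    # Colors and the 4.5:1 threshold are illustrative assumptions.

    def _linearize(channel):
        """Convert an sRGB channel in [0, 1] to linear light."""
        return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        """Relative luminance of an (R, G, B) color with 0-255 channels."""
        r, g, b = (_linearize(c / 255) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Hypothetical low-contrast palette: medium-gray text on a light-gray background.
    text, background = (140, 140, 140), (230, 230, 230)
    ratio = contrast_ratio(text, background)
    print(f"contrast {ratio:.2f}:1")
    if ratio < 4.5:  # 4.5:1 is the level commonly recommended for body text
        print("flag: low contrast between text and background")

This palette comes out around 2.7:1, so the check flags it even though, as in our lab studies, no participant would ever mention it.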

A change in perspective can affect what is considered a real usability problem. In a remote usability study in which my team and the developers could see only the screen, not the person, observers started to notice “menu hovering”: Participants would position the mouse pointer over a toolbar or menu item and hesitate before moving on to another item. Was there a problem here? The hovering lasted only three to five seconds, but after watching several sessions, we noticed a pattern of hovering over particular items; the group remotely viewing the session believed that the toolbar icon or menu name was unclear, though not a single participant voiced a complaint about ambiguous icons. Observers in the lab where the test was actually run, who could see the person working on the tasks, didn’t notice this subtle hovering behavior at all. The screen-only view of the usability session gave my remote group a different perspective that revealed what was invisible to others. We often discuss how the user’s context can affect what we view as usability problems, but observer context and perspective can also influence what we label as problems. Perspective-based and persona-based usability inspections, in which you examine a user interface from a particular viewpoint, have been proposed as a way to detect more problems [4].
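Hesitations like these can also be mined from cursor logs. Here is a minimal sketch, assuming a hypothetical log format (the time the pointer enters an element, the element's id, and whether it was clicked before the pointer left) and a three-second dwell threshold; neither reflects the instrumentation we actually used.

    # Sketch: mine a cursor log for "menu hovering" -- long dwells with no click.
    # Log format and threshold are assumptions for illustration.
    from collections import Counter

    # Each entry: (time pointer entered an element, element id, clicked before leaving?)
    log = [
        (0.0, "toolbar/undo", False),
        (1.2, "toolbar/merge", False),   # pointer parks here for ~4 s, no click
        (5.3, "menu/format", True),
    ]

    def hover_events(samples, min_dwell=3.0):
        """Yield (element, dwell) for un-clicked dwells of at least min_dwell seconds."""
        for (t0, elem, clicked), (t1, _, _) in zip(samples, samples[1:]):
            dwell = t1 - t0
            if dwell >= min_dwell and not clicked:
                yield elem, dwell

    hesitations = Counter(elem for elem, _ in hover_events(log))
    for elem, count in hesitations.most_common():
        print(f"{elem}: {count} hesitation(s)")  # repeat offenders suggest unclear icons or labels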

Usability Does Not Equal Simplicity

Usability is not a simple concept. The usability of a product is a function of multiple attributes. If you follow the definition of usability in ISO 9241-11 [1] as a guide, usability has three major attributes: effectiveness, efficiency, and satisfaction. These three attributes are a foundation for usability, but many other attributes can define usability (or perhaps, more accurately, the “user experience”). These include consistency, memorability (ease of remembering after a period of disuse), evolvability (the ease with which a system adapts to changes in user expertise), aesthetics, first impressions, flexibility, and error tolerance [3]. Which attributes are most critical to the usability of a system is highly contextual, so when we are trying to identify usability problems, we first have to decide which attributes matter so we can focus our observations and reporting. If memorability is not an explicit attribute of an evaluation, you could easily miss problems that face an intermittent (and even well-trained) user.

Take the case of backup-and-restore software. You could evaluate how easy it is to set up backup software to perform a daily or hourly backup, but the ability to restore data often hinges on memorability. Data restores often happen after rare events like power outages, perhaps months or even years after the well-trained technician set up the system. If you can’t remember the rules and procedures to restore your system, your company could be losing millions of dollars, or the person in the computerized hospital operating room could die while you are trying to remember how to get the system going manually. It is quite useful at the beginning of a project to hold a discussion with stakeholders about just which usability attributes are most critical to product success, since that will influence what is defined as a problem.
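One lightweight way to capture the outcome of that stakeholder discussion (my sketch, not a prescribed format) is a small per-group specification that an evaluation plan can be checked against. The groups, attributes, and targets below are hypothetical examples.

    # Sketch of a lightweight usability specification: which attributes matter
    # for which user group, and what counts as acceptable. All entries are
    # invented examples, not figures from a real project.
    spec = {
        "backup administrator": {
            "effectiveness": "completes scheduled backup setup unaided",
            "memorability":  "restores data after 6 months of disuse, without the manual",
        },
        "executive assistant": {
            "efficiency":    "daily financial snapshot in under 2 minutes",
            "satisfaction":  "mean rating of at least 4 of 5 on post-task questionnaire",
        },
    }

    planned_measures = {"effectiveness", "efficiency", "satisfaction"}
    for group, attributes in spec.items():
        missed = set(attributes) - planned_measures
        if missed:
            # Memorability, for example, is easy to miss unless the study tests it.
            print(f"{group}: study design does not measure {sorted(missed)}")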

Seeing Real Problems Might Require Real Data

Consider the following scenario: You have a highly functional alpha prototype ready for testing. You also have some sample data that your quality team put together for a usability test, consisting of a few thousand rows in a database. You conduct your usability test, find relatively few usability problems, and declare the product highly usable. You then take your team to a fine restaurant and celebrate with some haute cuisine and champagne.

Two weeks later you start to receive complaints from a limited release to your early-adopter customers. They use some ugly language to describe the alpha prototype. Why did it get ugly so quickly? The lack of usability problems that the team celebrated was the result of well-intentioned but unrealistic simulated data that had little in common, in quantity or quality, with the real data of the targeted customers.

What are the lessons here regarding “real” usability problems? First, if you are evaluating a design that uses data, learn about customer data and create a data profile, just as you create a user profile or task analysis. Ask questions about the size of the data sets used at customer sites; field studies that capture metadata about customer data can make lab studies more realistic. Second, acknowledge the limitations of your data samples and strive to use samples that approximate those of your users. Third, keep in mind how data quality and scalability might affect the emergence of usability problems. What problems might occur, for example, if your customer base grows tenfold, a hundredfold, a thousandfold?
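As a concrete (and entirely hypothetical) sketch of the first and third lessons: a data profile gathered in the field can drive a generator that seeds the test system at customer-realistic sizes, so the same tasks can be rerun at tenfold steps. The schema, value mix, and row counts below are invented for illustration.

    # Sketch: generate test data from a simple "data profile" so lab studies
    # can run at customer-realistic scales. Field names, value ranges, and row
    # counts are hypothetical; a real profile would come from field studies.
    import random
    import sqlite3

    def generate_rows(n_rows, seed=0):
        rng = random.Random(seed)
        for i in range(n_rows):
            # Messiness on purpose: real customer data has blanks and duplicates.
            name = "" if rng.random() < 0.02 else f"account-{rng.randrange(n_rows // 2)}"
            yield (i, name, rng.choice(["open", "closed", "overdue"]))

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT, status TEXT)")

    for scale in (2_000, 20_000, 200_000):  # rerun the evaluation at tenfold steps
        conn.execute("DELETE FROM accounts")
        conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", generate_rows(scale))
        (count,) = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()
        print(f"seeded {count} rows; now run the tasks that sort, search, and page this table")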

Consider Context When Using Usability Guidelines and Principles

We often generate usability problem lists based on guidelines, principles, patterns, and heuristics. While some (not all!) of these guidelines are research-based, usability practitioners still have to remain conscious of context. Aspects of a user interface that violate guidelines may or may not be problems for a given class of users, set of tasks, or environment.

Some time ago I worked on a decision-support system that would allow senior management (often through harried executive assistants) to get daily snapshots of their organization’s financial health. During design, the development team created a prototype with the ability to include up to 10 tiny charts on a single letter-size page. The charts were barely readable from our perspective (and by guidelines about information density), and we felt that a four-chart design would be more usable. We were wrong.

Senior executives liked (and demanded) the dense page of charts because they were looking for relationships and patterns rather than specific details. In our evaluations we considered the dense-pack design cluttered, confusing, and hard to read, based on general guidelines for data display; our target group, senior executives and their highly trained assistants, wanted as much data as possible so they could get a holistic view of financial health and spot trends and relationships between different metrics, which would be hard if they had to flip through multiple pages. Using guidelines (or heuristics or principles) to generate usability problems is risky because guidelines are often conflicting, ambiguous, and contextual.
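For readers who want to picture the layout, here is a minimal sketch of a “dense-pack” page, ten small charts on one letter-size sheet, assuming matplotlib and NumPy are available; random placeholder data stands in for the financial metrics.

    # Sketch of the "dense-pack" layout: ten small charts on one letter-size page.
    # Random placeholder data; the point is the layout trade-off, not the numbers.
    import matplotlib.pyplot as plt
    import numpy as np

    rng = np.random.default_rng(7)
    metrics = [f"metric {i + 1}" for i in range(10)]

    fig, axes = plt.subplots(5, 2, figsize=(8.5, 11))  # letter-size page, 10 panels
    for ax, name in zip(axes.flat, metrics):
        ax.plot(rng.normal(0, 1, 30).cumsum(), linewidth=1)
        ax.set_title(name, fontsize=8)
        ax.tick_params(labelsize=6)
    fig.tight_layout()
    fig.savefig("daily_snapshot.png", dpi=150)

By general data-display guidelines this page is “cluttered”; for an executive scanning for cross-metric patterns, it is exactly what the job requires.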

Summary

Context of use influences what we label as usability problems. Usability practitioners can take context of use into account by:

  • Accepting that usability is multifaceted and context dependent;
  • Gathering data about context of use and applying it to evaluation methods;
  • Making context explicit through personas and other context artifacts;
  • Creating usability specifications that define what usability attributes are most relevant for each major user group;
  • Using support tools (persona descriptions, for example) that provide reminders about the context of users;
  • Choosing data samples that match the “data context” of users; and
  • Using context as part of evaluation methods (perspective-based inspections, for example).

References

1. ISO 9241-11. Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 11: Guidance on Usability. ISO, 1997.

2. Karat, J. “User-Centered Software Evaluation Methodologies.” In Handbook of Human-Computer Interaction, edited by M. G. Helander, T. K. Landauer, and P. V. Prabhu, 689-704. Amsterdam: Elsevier Science, 1997.

3. Wixon, D., and C. Wilson. “The Usability Engineering Framework for Product Design and Evaluation.” Chap. 27 in Handbook of Human-Computer Interaction, 2nd ed., edited by M. G. Helander, T. K. Landauer, and P. V. Prabhu. Amsterdam: Elsevier Science, 1997.

4. Zhang, Z., V. Basili, and B. Shneiderman. “Perspective-Based Usability Inspection: An Empirical Validation of Efficacy.” Empirical Software Engineering 4, no. 1 (March 1999): 43-69.

Author

Chauncey E. Wilson
chauncey.wilson@gmail.com

About the Author

Chauncey Wilson is a usability manager at The MathWorks, instructor in the Human Factors and Information Design Program at Bentley College in Boston, and author of the forthcoming Handbook of Formal and Informal User-Centered Design Methods (Elsevier). Chauncey was the first full-time director of the Bentley College Design and Usability Testing Center and has spent more than 25 years as a usability practitioner, development manager, and mentor. In his limited spare time, Chauncey hones his culinary skills as an amateur (but very serious) chef.

Sidebar: Perspective-Based User Interface Inspections

A perspective-based user-interface inspection [4] requires individuals to evaluate a user interface from different perspectives. The use of several perspectives is meant to broaden the problem-finding ability of “inspectors,” especially inspectors with little or no background in UCD or usability.

In perspective-based inspections, inspectors generally get descriptions of one or more perspectives that they will focus on, a list of user tasks, a set of questions related to the perspective, and possibly a set of heuristics related to the perspective. Inspectors are asked to work through tasks from the assigned perspective. These assigned perspectives might be based on many considerations, including:

  • Experience levels (novice versus expert user),
  • User characteristics (blind, hard of hearing, elderly),
  • Personas, and
  • Usability attributes (error-prevention inspector, consistency czar, aesthetics judge, readability assessor).
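Concretely, one way to package such an assignment (my sketch, not the materials from [4]) is a small structure pairing a perspective with its tasks, questions, and heuristics:

    # Sketch: one inspector's packet for a perspective-based inspection.
    # The perspective, tasks, questions, and heuristics are invented examples,
    # not materials from Zhang, Basili, and Shneiderman [4].
    assignment = {
        "perspective": "novice user",
        "tasks": ["create a weekly backup schedule", "restore yesterday's files"],
        "questions": [
            "Can the user tell what each control does before trying it?",
            "Does the system confirm that the action succeeded?",
        ],
        "heuristics": ["match the user's vocabulary", "make system status visible"],
    }

    print(f"Inspect as: {assignment['perspective']}")
    for task in assignment["tasks"]:
        print(f"- Work through: {task}")
        for question in assignment["questions"]:
            print(f"    ask: {question}")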

©2007 ACM  1072-5220/07/0900  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2007 ACM, Inc.

 
