People: the well-tempered practitioner

XIV.1 January + February 2007
Page: 48

Taking usability practitioners to task


Authors:
Chauncey Wilson

As usability practitioners and interaction designers, we often have to choose tasks for user-centered design (UCD) activities, including storyboarding, paper and medium-fidelity prototyping, usability testing, and walkthroughs. That choice is critical and sometimes even a moral issue. It is critical, at least for complex products, because we often have to make usability and quality judgments based on evaluations that tap only a small set of possible tasks. It is a moral issue because our choice of tasks is a source of bias that can affect perceptions of the product during development and, in rare cases, result in harm to users. Following are two examples in which the choice of tasks for a UCD activity had a profound effect on the final product.

Example 1. A consultant is asked to conduct a usability test of an e-commerce Web site. The entire product team and the senior management team will be coming to observe for a full day, since the updates to the site will likely determine whether the company makes a profit over the next six months. The consultant creates a set of tasks for the usability test that represent "easy cases"; as a result, the participants in the test have almost no problems. The site goes live shortly after the usability test, and there are hundreds of complaints each week. An investigation reveals that the tasks were designed to make the site look good for the observers rather than give a realistic sense of how usable it would be for actual customers. Here there is an ethical dilemma: The consultant's livelihood was going to be affected by the results of the test, so he might, consciously or unconsciously, have chosen easy tasks to preserve a good relationship with the team. There is often a conflict of interest when a usability practitioner is asked to create tasks for a usability study. What is the lesson here? Have some colleagues who are not closely connected to your project review your tasks; that will guard against biases that may emerge from conflicts of interest.

Example 2. A usability group conducted field interviews with a large group of users at different customer sites, extracted important and frequent tasks from the field data, and used those tasks to design usability evaluations of early working prototypes. They tested the prototype using tasks based on what they had observed with real customers. A large-scale test (20 representative users) of the alpha version that would go out to customers found no major usability problems. Everyone was happy and felt quite good about the results, and the team decided to ship the alpha version to seven of the most important customers. Shortly after the alpha version was installed, there were reports that customers found the system totally unusable because of performance, something that hadn't come up at all with some of the same tasks in the usability lab. What happened? The usability tasks were run against 50,000 rows of data, but the customers had databases with ten million rows. The tasks used in the usability test were based on what real users did, but not on the amount of data involved in the real-world task. The product had to be substantially altered before it went out for beta testing because the usability-test tasks were not based on production-scale databases.
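The scale gap in Example 2 is easy to reproduce. The sketch below is a hypothetical illustration rather than the team's actual setup: the schema, row counts, and query are assumptions, but it shows how the same task can feel instant against lab-scale data and become painful at customer scale.

```python
# Hypothetical illustration only: schema, row counts, and query are
# assumptions, not the actual system from Example 2.
import sqlite3
import time

def timed_query(row_count):
    """Build an in-memory table of row_count rows and time one unindexed query."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        ((i, "east" if i % 2 else "west", i * 0.01) for i in range(row_count)),
    )
    start = time.perf_counter()
    # Full-table scan: barely noticeable at lab scale, painful at customer scale.
    conn.execute(
        "SELECT COUNT(*), SUM(total) FROM orders WHERE region = 'east'"
    ).fetchone()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

for rows in (50_000, 10_000_000):  # lab-scale vs. customer-scale data
    print(f"{rows:>10,} rows: {timed_query(rows):.3f} s")
```

Seeding evaluation data at realistic volumes, even crudely, is one cheap way to keep a lab test honest about performance.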

These two examples show how the choice of tasks can affect perceptions of product usability. There are both obvious and not-so-obvious criteria for choosing tasks for design and evaluation activities. Task frequency and criticality are perhaps the most common; several more subtle criteria follow.

Task Frequency and Criticality. The most common criterion for choosing tasks in user-centered design activities is to include frequent tasks. Tasks that a person performs many times a day or week are good candidates for design and evaluation activities because high-frequency tasks with poor usability can affect personal morale and corporate return on investment (ROI). However, we should also consider low-frequency tasks with high criticality. Criticality can be viewed as the impact a task has on the success or failure of a business or organization, or as the potential for harm to particular groups of users. For example, a database recovery after a power failure may be performed only a few times a year, but this critical task can have a profound effect on a company's business or a user's safety. In the design of medical devices that deliver drugs to patients automatically, the task of setting a dosage may be relatively low frequency but high criticality: Failure to set the proper dose could lead to patient harm and civil litigation.
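One way to make these two criteria explicit is a simple scoring matrix. The sketch below is purely illustrative: the tasks, the 1-5 ratings, and the weights are all assumptions you would replace with your own data and tune with your team.

```python
# Hypothetical task-selection matrix: the tasks, 1-5 ratings, and weights
# are made-up illustrations, not data from any real study.
FREQ_WEIGHT = 1.0
CRIT_WEIGHT = 2.0  # assumption: weight criticality more heavily than frequency

# (task, frequency rating 1-5, criticality rating 1-5)
candidate_tasks = [
    ("Run a daily sales report", 5, 2),
    ("Set a drug dosage", 2, 5),
    ("Recover database after a power failure", 1, 5),
    ("Change a password", 3, 1),
]

def score(frequency, criticality):
    """Weighted sum of the two criteria; higher-scoring tasks get tested first."""
    return FREQ_WEIGHT * frequency + CRIT_WEIGHT * criticality

for task, freq, crit in sorted(
    candidate_tasks, key=lambda t: score(t[1], t[2]), reverse=True
):
    print(f"{score(freq, crit):5.1f}  {task} (freq={freq}, crit={crit})")
```

A matrix like this will not make the decision for you, but it makes the trade-off visible and debatable in your UCD plan.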

Consider both frequency and criticality when choosing tasks for your UCD activities. When you are gathering data about user tasks, look to methods such as the critical incident technique, which can expose low-frequency but critical tasks.

Task Generality. When you decide on tasks for UCD activities such as inspections, walkthroughs, and usability testing, look for common tasks or task patterns that allow you to generalize your findings, and consider tasks that cut across user groups and across different components of the product. For example, if there are common query, filter, edit, or selection features throughout a product, tasks involving those common features in one area of the product should allow you to generalize the results.

First Impressions. First-impression tasks are those that shape users' initial perceptions of a Web site, consumer device, or software application; those perceptions can affect the credibility of the product and users' buying decisions. A common first-impression task on Web sites is filling out a registration form. If the form requires too much effort, asks for too much personal information, or fails to indicate what users will gain from registering, they might very well quit and find a similar Web site with a less onerous registration form.

Consider including tasks in your UCD activities that will color your customers’ first impressions of your product. Using first-impression tasks is especially important when users will have many choices about what products they can use. Bad first impressions could easily steer your customers to competitors.

Tasks Involving New Features. Consider using tasks that involve major new features that are heavily promoted in the marketing and sales literature. Most product announcements tout new features that make new products more usable than previous versions. Testing tasks that involve new (and highly publicized) features might keep a company from being embarrassed. Several years ago a company with a code-management system advertised major features and tasks that would "only take about an hour to learn." Attendees at a seminar on the tool subsequently learned from the salesperson that using the major features actually required a three-day course; pressed about the "one hour to learn" claim in the literature, the salesperson admitted that most people needed much more time just to learn the basics.

Edge-Case Tasks. Edge-case tasks that involve troubleshooting, large databases, slow performance, and other system or user extremes are often forgotten in UCD activities. Edge cases can reveal problems that may not be evident under "normal" task conditions and can be quite useful for finding problems that will tax your technical support lines.

When you are collecting task data, don’t forget to ask questions that can identify edge cases. For example, an edge case for a stock trading system might involve the number of trades during a market crash when the volume goes from 800 million to three billion or more shares. You might do some testing to see what happens when performance is poor or the world’s financial markets are collapsing, as they have at least four times in my lifetime!
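Edge cases can often be expressed as parameters of the same structured task. As a hedged sketch of that idea, the code below defines "typical day" and "market crash" load profiles (the share volumes echo the example above; the task and user counts are made up) and reuses one task scenario at both levels.

```python
# Hypothetical edge-case profiles for a trading-system evaluation. The share
# volumes echo the example in the text; the task and user counts are assumptions.
from dataclasses import dataclass

@dataclass
class LoadProfile:
    name: str
    daily_share_volume: int
    concurrent_users: int

PROFILES = [
    LoadProfile("typical day", 800_000_000, 50),
    LoadProfile("market crash", 3_000_000_000, 500),  # the edge case
]

def run_trade_task(profile: LoadProfile) -> None:
    # Stand-in for driving the prototype (or a simulator) under this load;
    # a real harness would seed data and launch sessions at this scale.
    print(f"'Place a trade' under {profile.name}: "
          f"{profile.daily_share_volume:,} shares/day, "
          f"{profile.concurrent_users} concurrent users")

for profile in PROFILES:
    run_trade_task(profile)
```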

User-Defined Tasks Versus Structured Tasks. For some studies, invite participants to bring their own tasks, or conduct a mini-interview about what each participant would like to do with your product and create "on-the-fly" user-defined tasks based on their interests and goals. You might even give users funds to purchase something they want rather than something that is imposed on them. The problems you find with user-defined tasks may differ from the problems you find when you impose a set of tasks on your users. It can be fruitful, especially for an e-commerce system, to include both user-defined and structured tasks in your design and evaluation activities.

Tasks That the Product Team Worries About. Some tasks are overly complicated or difficult to explain, yet they worry developers, quality engineers, and technical writers; these are strong candidates for formal evaluation. Consider the overt and covert concerns of the product team when you are developing tasks for UCD activities.

Final Thoughts

The process of choosing tasks for UCD activities involves multiple (and sometimes conflicting) criteria, as well as potential conflicts of interest. Make the criteria for choosing tasks explicit in your UCD plans (well, you might have to be somewhat politic about the criterion "my boss made me do it"). Consider what criteria you should use at different stages of the product-development process, given cost and task-coverage constraints. Any complex product probably has hundreds or thousands of potential tasks, so you need to think broadly about how to achieve the best coverage. You might start with the criterion "task generality" early in design; later use "frequency," "criticality," and "key new features"; and then move to edge cases and tasks that the team worries about late in development.

Good luck choosing tasks in your next project. If you have other ideas about how to choose tasks, please send me a note.

Author

Chauncey Wilson
The MathWorks and Bentley College in Boston
chauncey.wilson@gmail.com

About the author:

Chauncey Wilson is a usability manager at The MathWorks, instructor in the Human Factors and Information Design Program at Bentley College in Boston, and author of the forthcoming Handbook of Formal and Informal User-Centered Design Methods (Elsevier). Chauncey was the first full-time director of the Bentley College Design and Usability Testing Center and has spent over 25 years as a usability practitioner, development manager, and mentor. In his limited spare time, Chauncey hones his culinary skills as an amateur (but very serious) chef.

©2007 ACM  1072-5220/07/0100  $5.00


 
