Waits & measures

XIII.6 November + December 2006
Page: 20

Quantifying usability


Author:
Jeff Sauro

Numerical precision is the very soul of science [8]. The current practice of most usability measurement is not numerically precise. This does not mean that usability, or at least certain activities under its broad umbrella, is not a science or that it cannot be precise. It means that there is still much honing to be done.

Practitioners address the usability of everything from simple Web sites to mission-critical health care devices. Qualitative inspections may be sufficient for the former but are risky for the latter [6]. In general, as the consequences of an unusable event (e.g., unintended actions or excessively long task times) become more severe, numerical precision and quantitative maturity become increasingly important.

Any discipline that spans activities as diverse as artistically influenced design and engineering-influenced evaluation will inevitably prompt debate about whether the practice is an art or a science. While the current state of the profession is probably a discussion for another issue, the dichotomy between art and science may be unfounded. At least one of history's most influential artists, Leonardo da Vinci, saw art, a decidedly qualitative discipline, as inextricably tied to mathematics, the language of science [1].

Achieving quantitative maturity in usability first requires a solid definition of what is being measured, then a way to measure it. The construct of usability has not been easy to define, but there is a tenuous consensus (ISO 9241 part 11, among others [2]). Effectiveness, efficiency, and satisfaction have become the practitioner's measuring triumvirate. Task-completion rates, time on task, and questionnaires (at the task and/or test level) are the most common data-gathering techniques (and are encouraged by the CIF [3]). This narrowing of the construct still leaves open questions. Are all the measures necessary? What do they leave out? Do the measures correlate [5]? Can they be combined [7]? These questions have not yet been answered.
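
To give a flavor of what combining might look like, here is a minimal Python sketch loosely following the z-score idea in [7]: standardize each metric against a specification target, then average the standardized values. The specification targets and test results below are hypothetical, and the actual method in [7] involves more than this.

    # Minimal sketch: standardize three usability metrics into a single
    # score, loosely following the z-score idea in [7]. All specification
    # targets and test results below are hypothetical.
    from statistics import mean

    def z(value, spec_mean, spec_sd):
        """Standardize a raw metric against a specification target."""
        return (value - spec_mean) / spec_sd

    # Hypothetical task-level results:
    completion_rate = 0.85  # proportion of users completing the task
    task_time = 95.0        # mean time on task in seconds (lower is better)
    satisfaction = 5.6      # mean rating on a 7-point scale

    scores = [
        z(completion_rate, 0.78, 0.12),  # higher is better
        -z(task_time, 110.0, 25.0),      # negated: faster than spec is good
        z(satisfaction, 5.0, 0.8),       # higher is better
    ]

    print(f"Combined usability score: {mean(scores):.2f}")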

In this special issue on measuring usability, we revisit some old favorites and attempt to break some new ground.

What are we measuring? Niamh McNamara and Jurek Kirakowski present some new considerations for evaluating the user experience.

How many do we measure? It wouldn’t be an issue on measuring usability if we left out this perennial topic. Jim Lewis, an accomplished usability engineer, has been contributing to our understanding of problem discovery sample size for more than two decades. He reviews the history of what is at times a contentious issue and reminds us that there is math, not magic, behind sample-size computations for discovering UI problems.
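
The core of that math fits in a few lines: if a problem affects a proportion p of users, the chance of seeing it at least once with n participants is 1 - (1 - p)^n, and solving for n gives the sample size needed to reach a discovery goal. A minimal Python sketch follows; the values of p and the goal are purely illustrative.

    # Problem-discovery model: a problem affecting a proportion p of users
    # is seen at least once among n participants with probability
    # 1 - (1 - p)**n. The inputs below are illustrative.
    import math

    def discovery_probability(p, n):
        """Chance of observing the problem at least once in n users."""
        return 1 - (1 - p) ** n

    def users_needed(p, goal):
        """Smallest n whose discovery probability reaches the goal."""
        return math.ceil(math.log(1 - goal) / math.log(1 - p))

    print(f"{discovery_probability(0.31, 5):.2f}")  # 0.84 with five users
    print(users_needed(0.10, 0.90))                 # 22 users for a 90% goal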

How do we present the measures? The folks at NIST have provided a guide to the guide. Mary Theofanos, Brian Stanton, and Nigel Bevan walk through the nature of the Common Industry Format (CIF) and the newer Common Industry Specification for Usability-Requirements (CISU-R) and explain that a standard can be flexible.

How can the measures be used to justify the time and cost of usability activities? John Sorflaten has a new twist on the often-maligned ROI calculation [4]: using data to clear up some of the fuzzy aspects of return-on-investment claims.
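
As a reminder of the arithmetic underneath such claims, ROI is simply net benefit divided by cost. A minimal sketch, with every figure made up for illustration:

    # Basic return-on-investment arithmetic behind usability ROI claims.
    # All figures below are hypothetical.
    usability_cost = 20_000.0     # testing and redesign spend
    estimated_benefit = 65_000.0  # e.g., projected support-call savings

    roi = (estimated_benefit - usability_cost) / usability_cost
    print(f"ROI: {roi:.0%}")      # 225% on these made-up numbers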

What measures do we take? Nigel Bevan has some thoughts on his experience with writing standards and the practical constraints of measuring usability.

To get things started, I've attempted to provide a small refresher on all those things you learned in your statistics class, which were important at the time but have since been mostly forgotten.

References

1. da Vinci, L. Treatise on Painting. 1651.

2. ISO 9241-11:1998. Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability.

3. ISO/IEC 25062:2006. Software engineering. Software product Quality Requirements and Evaluation (SQuaRE). Common Industry Format (CIF) for usability test reports.

4. Rosenberg, D. "The Myths of Usability ROI." Interactions 11 (September-October 2004): 22-29.

5. Frøkjær, E., Hertzum, M., and Hornbæk, K. "Measuring Usability: Are Effectiveness, Efficiency, and Satisfaction Really Correlated?" In Proceedings of CHI 2000, 345-352. ACM Press, 2000.

6. Sauro, J. "Premium Usability: Getting the Discount without Paying the Price." Interactions 11 (July-August 2004).

7. Sauro, J., and Kindlund, E. "A Method to Standardize Usability Metrics into a Single Score." In Proceedings of CHI 2005, Portland, OR. ACM Press, 2005.

8. Thompson, D'Arcy Wentworth. On Growth and Form. 1917.

Author

Jeff Sauro
Oracle Corporation
jeff@measuringusability.com

About the Guest Editor

Jeff Sauro is a Six Sigma-trained statistician at Oracle in Denver, CO. Before Oracle, Jeff was a human factors engineer at PeopleSoft, Intuit, and General Electric. Jeff has presented and published on the topic of usability metrics at CHI, UPA, and HFES conferences and maintains the Web site measuringusability.com. He received bachelor’s degrees from Syracuse University and a master’s from Stanford University.

©2006 ACM  1072-5220/06/1100  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.


 
