
VIII.5 Sept./Oct. 2001
Page: 13

Whiteboard


Authors:
Avi Parush


"What does usability mean to you?," Avi Parush challenges us to ask ourselves. If this feels like a trick question, try asking it of any ten usability professionals— you'll probably get at least eleven answers. In this installment of The Whiteboard, Avi criticizes a view at one end of the spectrum, contrasts it with one at the other end, shows the weaknesses of both, and describes how they can integrate to achieve balance in the usability process. — Elizabeth Buie

What Does Usability Mean to You?

This is a true story.

Not too long ago, at a serious conference dealing with usability issues, I attended a lecture in which the presenters described how they had solved a specific usability problem. The problem was nowhere near unique, but how had the presenters solved it? By usability testing and retesting.

I was quite surprised by that approach. Several standards and basic textbooks offer well-established guidelines on the issue the presenters had faced, and I wondered why they had not started with those. (And, by the way, the published guidelines differ somewhat from the solution that was presented.) I asked the presenters about the mismatch between their usability test findings and the previously published guidelines, and especially why they had not used the guidelines in the first place. They justified their way of solving the user interface problem by claiming that it had been tested empirically, which made it more valid than relying on the standards and other published guidelines.

So what does usability mean to you? Truthfully, did your first association have something to do with some sort of testing? For many, the "test and retest" strategy has become the primary and most common approach to solving user interface problems. The answer to many design questions is more often in the style of "...when we tested it on some people, we found that...." Less and less do we see references to scientific research findings, task analysis conclusions, or the use of established guidelines and available standards. It seems that most of us have forgotten that usability is much more than just testing.

Iterative design calls for testing and redesign as we go, rather than waiting until the end of the design process to test the solutions and discover problems. The missing link, however, is this: the testing and redesign of what? How did we arrive at whatever we are testing? Nothing is wrong with solving problems by testing. It is problematic, however, when testing is used to the exclusion of other strategies and approaches to solving user interface problems. What are those other strategies? And what is the problem with employing testing exclusively as a strategy for solving user interface problems?

User interface design is a dynamic, time-spanning process. The "goodness" of the resulting design is reflected by the usability of the interface and is measured by usability evaluation and testing techniques. User interface problems should not, and cannot, be solved exclusively by testing. We need to balance good design practices and evaluation and testing approaches. In this article, I will explore the practices and limitations of each approach to identify how we can use the good practices and avoid the limits. Such a balance can help us to more efficiently create a much better user interface design.

But first, let's examine what one of the leading sources in user-centered design (UCD) has to say.

What Does the ISO Standard Offer?

The ISO standard for Human-Centered Design Processes for Interactive Systems (ISO 13407) recommends several interdependent activities in the human-centered design process. Those include user requirement specifications, the production of design solutions, and the evaluation of those designs. Does that mean that design and testing should be completely separate activities?

Not exactly, but the ISO standard does clearly distinguish between the production of design solutions and their evaluation. If we let testing and assessment techniques become THE way to generate design solutions, there is a good chance that solution assessment will become indistinguishable from solution generation; it may be omitted, or at the least be incomplete.

The leading UCD standard does not offer the desired balance. Next I will examine what's on each side of the scale.

Usability by D3: Design, Development, Deployment

The design solutions we generate have a primary target: usability. Throughout the process, we must ensure that usability is maintained, that is, that there is a good fit between the design solutions we offer and the user's requirements. So, first of all, define usability objectives (for instance, in terms of effectiveness, efficiency, and satisfaction) to guide the engineering process.
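To make this concrete, here is a minimal sketch (in Python) of how such objectives might be recorded as measurable targets. The attribute names, metrics, and thresholds are illustrative assumptions, not prescriptions from this article or from any standard.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UsabilityObjective:
        """One measurable usability target (all values below are illustrative)."""
        attribute: str    # e.g., "effectiveness", "efficiency", "satisfaction"
        metric: str       # how the attribute will be measured
        target: float     # threshold that counts as meeting the objective
        measured: Optional[float] = None  # filled in later by evaluation or testing

        def is_met(self) -> bool:
            return self.measured is not None and self.measured >= self.target

    # Hypothetical objectives for, say, a form-filling task:
    objectives = [
        UsabilityObjective("effectiveness", "task completion rate (%)", 90.0),
        UsabilityObjective("efficiency", "tasks completed per 10 minutes", 5.0),
        UsabilityObjective("satisfaction", "mean rating on a 1-to-7 scale", 5.5),
    ]

The point of writing objectives this way is that each one is checkable: evaluation later fills in the measured value, making the comparison between design intent and test outcome explicit.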

The engineering process is divided into several stages, during which we generate usable design solutions:

  1. Design, which includes analysis, conceptualization, and detailed design
  2. Development
  3. Deployment

Each of these stages has its set of tools and techniques, heuristics, guidelines, and standards. We use those primarily for generating solutions for the user interface problems. Table 1 lists some of the known strategies and tools that can help in generating user interface solutions at various stages.

Design: You Can't Always Get It Right the First Time

Using design practices exclusively has some important limitations:

  • Can we always find a solution with design strategies? Well, not always. Many user interface problems are unique, and the generic standards and available guidelines may not give us the unique answer we need. In addition, even thorough analyses may not be sufficient to crack the design problem.
  • Can we be confident of the solution we found? Not necessarily. This is another concern with the generality of design guidelines and analyses. The appropriateness of the solutions we find needs additional validation for the specific user interface problem.
  • Can we always count on design strategies? Not forever. Design needs are dynamic. Technology, culture, and fashion are constantly evolving. Conventions, guidelines, and standards that were once effective may become outdated. New tools and techniques must constantly be developed and employed.

Testing: Explore, Confirm, Compare

Usability evaluation and testing are used for several purposes:

  • Explore the user interface to detect usability problems
  • Confirm or reject suspected problems
  • Compare design and implementation alternatives

We evaluate and test usability at different stages in the life cycle of software. These stages can be as early as initial user interface conceptualization or as late as deployment of the product. We can also have different test objectives at each of those stages. Consequently, we may need to employ different testing approaches and techniques to derive maximum results for the objectives of each stage.

The variety of ways to perform a usability evaluation or test depends on many factors, such as the product's usability goals, the test's objectives, the timing of the test, and the resources available. The latter can include human resources; software mockups, simulations, or prototypes; testing facilities and capabilities; and financial means. Evaluation and testing have two basic approaches (a toy illustration of the empirical measures follows the list):

  1. Analytic strategies such as heuristic evaluation, cognitive walkthrough, and monitoring of compliance with standards and style guides
  2. Empirical strategies, such as laboratory tests, contextual inquiries, and remote data logging
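As a small illustration of the empirical side, the three measures named earlier (effectiveness, efficiency, and satisfaction) might be computed from test-session logs roughly as follows; the session data here are fabricated purely for illustration.

    # Hypothetical session logs: (task_completed, seconds_on_task, rating_1_to_7)
    sessions = [
        (True, 95, 6),
        (True, 120, 5),
        (False, 240, 3),
        (True, 88, 6),
    ]

    completed = [s for s in sessions if s[0]]
    effectiveness = len(completed) / len(sessions)              # completion rate
    efficiency = sum(s[1] for s in completed) / len(completed)  # mean time per completed task
    satisfaction = sum(s[2] for s in sessions) / len(sessions)  # mean subjective rating

    print(f"effectiveness = {effectiveness:.0%}")                    # 75%
    print(f"efficiency    = {efficiency:.0f} s per completed task")  # 101 s
    print(f"satisfaction  = {satisfaction:.1f} / 7")                 # 5.0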

Exclusive Testing: A View Too Narrow

Usability evaluation and testing approaches are highly effective and efficient in detecting usability problems. However, finding design solutions exclusively through testing can be problematic. Here are some reasons.

  • What do we test or assess? Some initial solution must be tested. That solution is prone to being influenced primarily by factors such as recent solutions, frequent solutions, solutions borrowed from competing systems, or previous versions. In other words, the initial solution may have been retrieved by the shortest possible route. This can limit the solution space a priori and thereby bias the solutions generated.
  • Can testing limit our solution process? Well, yes, sometimes. Testing is usually performed on a given product, within a given context, and with a given user profile. In addition, it is usually performed with few participants. The limited settings of the test may further limit the generated solutions. Contrast this with the broad, general foundation of well-established analysis techniques, human performance data banks, guidelines, and standards, which can open up a much wider range of possible solutions.

The Power of Balance

Relying exclusively on either design practices or testing approaches is limited and problematic. I would like to propose a framework within which the two strategies can be balanced.

The process by which we design user interfaces or construct user experiences can be thought of as a problem-solving and decision-making process. The general steps in problem solving are as follows (a minimal sketch in code follows the list):

  1. Define and scope the problem space. This step identifies the problem to be solved.
  2. Hypothesize which possible solutions may solve the problems identified in the previous step.
  3. Evaluate and test the solutions identified in the previous step.
  4. Decide on the appropriate solution for the problem. This may result in a revision in the problem space and possibly another iteration of solution exploration, evaluation, and decision.
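The following is a minimal sketch of that loop in Python. The problem statement, candidate solutions, scores, and threshold are all deliberately toy stand-ins; in practice, each step would draw on the tools of Table 1 and the testing strategies described above.

    def define_problem():
        # Step 1: define and scope the problem space.
        return "users cannot find the search function"

    def hypothesize_solutions(problem):
        # Step 2: generate candidates from analysis, guidelines, and standards.
        return ["persistent search box in the header",
                "search entry in the main menu"]

    def evaluate_solutions(candidates):
        # Step 3: evaluate or test each candidate. These scores are placeholders
        # standing in for real usability-test or heuristic-evaluation results.
        placeholder_scores = {"persistent search box in the header": 0.8,
                              "search entry in the main menu": 0.5}
        return {c: placeholder_scores.get(c, 0.0) for c in candidates}

    def decide(scores, threshold=0.7):
        # Step 4: adopt a solution if one is good enough; otherwise return None,
        # which would trigger a revision of the problem space and another pass.
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None

    problem = define_problem()
    solution = decide(evaluate_solutions(hypothesize_solutions(problem)))
    print(f"For '{problem}': adopt '{solution}'")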

The sequence described is somewhat reminiscent of the process suggested in ISO 13407. The big difference, however, is that this framework proposes those four activities as general steps nested within each phase of the design process. In other words, since each phase confronts us with user interface problems, we should approach each phase with problem-solving and decision-making strategies. Schematically, it would look like Figure 1.

In this framework, the design practices, expressed as the Define and Hypothesize activities, are balanced by the testing practices, expressed as the Evaluate and Decide activities. So my answer to the question "Design vs. testing, or design and testing?" is that design and testing can clearly coexist. In any of these activities, the known and established variety of tools and techniques can be employed. Moreover, if the established practices are not relevant, designers can employ their own heuristics for working through the problem-solving and decision-making process.

A Last, But Not Final, Word

Although it may sound like it, I am not promoting a clear distinction and separation between design and testing. What I emphasize is balance between the two. Balancing or finding the middle of the road is usually easier said than done. This balance may not necessarily eliminate the limits and problems of design practices and testing approaches. However, by applying the approach of the typical human problem solving and decision making process to the user interface design phases, the roadmap to balance can be charted.

The story at the beginning of this article holds another important lesson for us: Testing exclusively requires a lot of reinventing the wheel, rediscovering rules and guidelines that are already known (or can be derived analytically). It also brings the risk of being tempted to extend and overgeneralize the test findings. That can do damage if findings are inappropriately added to the usability body of knowledge, or waste if findings that could be generalized are not. Another roadmap waiting to be charted is the one between usability test findings and a dynamic, robust, and scientific body of knowledge for usability engineering professionals.

But that is for another article...

Author

Avi Parush has been practicing human factors and usability engineering for the past 18 years. With a background in cognitive experimental psychology, he works as a senior usability consultant to many international high-tech corporations, doing both design and usability testing. He is the co-founder of LaHIT, a leading Israeli usability consulting firm. Avi is presently a visiting scientist at the Israel Institute of Technology, where he teaches and conducts HCI research, in addition to teaching HCI at Tel Aviv University. He is a private pilot and an avid reader of sci-fi books while listening to Celtic music.

Avi Parush
3 HaRotem Lane
Ra'anana, Israel 43515
Tel. 972-9-7718583
Fax 972-9-7413266
aviphit@netvision.net.il

Whiteboard Column Editor
Elizabeth Buie
Senior Principal Engineer
Computer Sciences Corporation
15245 Shady Grove Road
Rockville, MD 20850
+1-301-921-3326
fax: +1-301-921-2069
ebuie@csc.com

Figures

Figure 1. Framework for balancing design and testing throughout the engineering process.

Tables

Table 1. Strategies and tools for generating user interface solutions.


©2001 ACM  1072-5220/01/0900  $5.00

