UX management

XIV.3 May + June 2007

Working with standards organizations


Author:
Anna Wichansky

In the late 1990s, I had the good fortune to be part of a significant joint effort between industry and government to create standards for usability testing. This was the Industry Usability Reporting (IUsR) Project, run by the National Institute of Standards and Technology (NIST). My interest in the project stemmed from a request from Boeing, a major enterprise-software customer of Oracle. Boeing had productivity goals in place for its use of software; it wanted users to be able to come up to speed on commercial off-the-shelf software quickly, without excessive learning curves or help-desk support. Before it would buy the software, the company wanted usability test results from software vendors. And of course it wanted results that were comparable in methodology and reporting, making it easier to compare vendors. In individual conversations, major software vendors were favorable to the idea of reporting such results to customer companies under the right nondisclosure agreements.

This idea led to the formation of a steering committee, including NIST members and industry representatives, and the organization of the first IUsR workshop in March 1998. The meeting was held at NIST in Gaithersburg, Maryland, and there were 25 key attendees representing a number of large software vendors, customer organizations, consultants, and academics in the usability engineering discipline. Several of us were invited to make presentations concerning what usability test data we actually collected on products and what we could propose as common denominators among our methods that would allow a common industry reporting format to be developed.

As a result of this meeting, working groups were formed to deal with general management issues, methodology, results and product descriptions, and pilot-test planning. As a member of the methodology working group, I focused mainly on identifying reliable and valid ways of conducting and reporting usability testing on which we could all agree. Although this sounded like a tall order, it was remarkable how much consensus we reached in that first meeting about how testing was done among the large vendors and the types of data we would be willing to provide customers. Some of the items people felt strongly about were:

  • We should not be prescriptive about testing methods, but rather concentrate on the reporting format to emphasize the types of information customers want in order to make procurement decisions.
  • There should be empirical data collected with users. Checklists and other analytical techniques conducted by vendors were of lesser value to customers than data collected from actual users.
  • There should be some quantitative, human performance data and some qualitative, subjective data collected. Customers were interested not only in how well people performed with the software, but also in how well they liked it.
  • There should be a minimum number of users tested (based on the literature, we recommended eight per user type).
  • There should be a template for reporting purposes that was accessible to procurement and executive audiences as well as usability professionals in the customer organizations.
  • We should recruit pairs of vendor and customer organizations to perform trials of the new reporting format.

Following our initial meeting, NIST promptly set up a website where we could all communicate about the progress of our working groups. It also helped organize conference calls and an email distribution list for updates and discussions. In the first year, an informational white paper and the basic guideline for the common industry reporting format were written, along with a document template and an example test report based on a fictitious product. Participating companies were recruited for pilot tests.

A second workshop was held in September 1998, where the draft Common Industry Format (CIF) for usability reporting was proposed. The third meeting was hosted by Oracle in Redwood Shores, California, in September 1999, and attended by additional customer and vendor organizations interested in hearing about the pilot tests planned between customer-vendor pairs, which included Oracle, Boeing, and Microsoft. A fourth meeting was held in 2000 in Gaithersburg to discuss the results of these trials. A fifth meeting took place in 2002 to form new working groups for hardware testing and user-requirements CIFs. In 2004, a workshop was hosted by Fidelity Investments in Boston to discuss CIF standards for formative user testing. And in 2005, a workshop was held at the Usability Professionals' Association conference to follow up on formative test reporting.

One of the missions of NIST is to facilitate industry efforts to set their own voluntary standards. After sufficient drafts, reviews, and feedback, NIST submitted the CIF to the InterNational Committee for Information Technology Standards (INCITS), an organization that facilitated acceptance of the document by the American National Standards Institute (ANSI). The CIF became ANSI/INCITS 354-2001. Over the next five years, NIST also facilitated its acceptance by the International Organization for Standardization (ISO), as ISO/IEC 25062:2006.

Creating an international standard is a long and arduous process. A document has to be sponsored and proposed by a nation's standards organization (e.g., DIN for Germany, ANSI for the U.S.). Then it is reviewed by the representatives of other nations participating in various ISO technical committees, in this case ISO/IEC JTC1/SC7, on software and systems engineering. There is travel to international meetings to discuss and vote on new standards, so representatives have to be funded and dedicated to this purpose. Some members of our industry working groups actually sit on standards technical committees, but in general, we were all grateful for the willingness, expertise, and time of the NIST project managers in shepherding these standards through the various hurdles.

Standards-making requires a long-term perspective, keeping in mind that standards written today may not reflect the technologies of tomorrow. The best candidates for standardization are well-researched topics that are not likely to change between product generations, such as human factors, methodologies, and reporting parameters. Standards also tend to reflect well-established and generally accepted knowledge about a domain. Sometimes the latest research is not reflected in a standard because it does not yet have sufficient verification or a track record of application. Still, it is reassuring that experienced managers in the usability community have a great deal to agree upon that can figure in standards such as the CIF.

More than 300 participants have attended workshops and registered on the IUsR website, representing 100-plus organizations in more than 30 countries. A wide variety of vendor and customer organizations have used these standards in many ways. Companies like Oracle have revised their usability testing report templates and procedures to provide data in the CIF format for summative usability tests. This enables them to quickly and easily provide such reports when requested by customers in the sales cycle. Customer organizations such as the Italian government have based requirements for new systems on the CIF document, and this has spawned such efforts as the Common Industry Format Specification for Usability Requirements (CISU-R) working group. NIST and the Usability Professionals' Association have held workshops on the formative CIF, which would enable rapid usability test reporting in a standardized document format.

Overall, I consider my experience on the IUsR project to be one of the highlights of my career in human factors in general, and software usability engineering in particular. It definitely broadened my efforts to understand customer needs and provide what customers wanted. It enabled me to learn and access the best practices of other industry representatives in a cooperative, rather than a competitive, way. It was a pleasure to work with such experienced and qualified people as the other members of the steering committee and workshop subcommittees. It exemplifies what can happen when managers collaborate at the highest levels to improve the lot of users.

Further Readings on CIF

Scholtz, J., Morse, E., Laskowski, S., Wichansky, A., Butler, K., and Sullivan, K. The Common Industry Format: A way for vendors and customers to talk about software usability. In Jacko, J. and Stephanidis, C. (eds.), Proceedings of HCI International 2003, Vol. 1 (Human-Computer Interaction). Mahwah, NJ: Lawrence Erlbaum Associates, 2003, 554-558.

Scholtz, J., Laskowski, S., Morse, E., Wichansky, A.M., and Butler, K. Quantifying usability: The Industry USability Reporting Project. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, 2002, 1930-1934.

Author

Anna M. Wichansky, Ph.D., CPE
Oracle
anna.wichansky@oracle.com

About the Author

Anna Wichansky is senior director of the Advanced UI group at Oracle Corporation, where she founded the company's usability labs and initiated the usability engineering process. She is an experimental psychologist and certified professional ergonomist with experience simplifying technology for users in a wide array of applications, including telecommunications, transportation, computing, media, entertainment, medicine, and space travel. Anna has worked for the U.S. Department of Transportation, Bell Labs, Hewlett-Packard, and Silicon Graphics, and has consulted for federal government agencies and nonprofits on usability-related issues.

©2007 ACM  1072-5220/07/0500  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
