Forum: timelines

XIV.3 May + June 2007
Page: 18

An unlikely HCI frontier


Authors:
Richard Pew

I left the University of Michigan in 1974, in part because I wanted to begin a new research thrust in HCI. At the time, BBN Technologies (then Bolt Beranek and Newman) was an ideal place to accomplish this. It had a renowned staff in both computer science and cognitive psychology and was pushing the frontiers in interactive computing, due in part to the original stimulus of J. C. R. Licklider, who was at BBN from 1957 until 1962. My first HCI project at BBN, at least the first for which I was the principal investigator, was developing what today would be called a style guide for the Department of Agriculture from 1975 to 1976. My second major project, for the Social Security Administration in 1978 and 1979, was a pioneering effort in the development of user-interface prototyping, participatory design, usability evaluation, and cost-benefit assessment.

In the ‘70s the software world was transitioning from punched cards and batch processing to time-sharing with half-duplex and full-duplex terminals, interactive programming languages, and, slightly later, incremental compilers. Government and commercial organizations were beginning to think that cost savings and improved information quality would result from moving to source data entry at the analyst’s desk. The Social Security Administration was mired in early centrally located batch-processing systems and magnetic-tape data repositories. To the credit of its senior managers, the Social Security Administration began to see the potential for putting computer terminals on the desks of claims representatives and service representatives around the country. At the time its vision was for regional data concentrators and central management of the disk-based data repositories. It started a major system-development thrust referred to as “The Future Process.” The Social Security Administration was proposing to throw out the old and replace it with the new future process. Its advisory board, which included Licklider, convinced the agency that “human factors” was going to be an important thread of the project.

In 1976 the agency sent two of its systems analysts to the University of Michigan Human Factors Summer Conference, of which I was chair. During the next year we at BBN collaborated with SSA on a small consulting project to survey Social Security agents concerning their attitudes toward using computers. Surprisingly, their attitudes were largely positive. Most thought computers would help them with their jobs, while a few commented that they were glad they would be retired before it came to pass. This project was really an invitation to see if we could “dance.”

Not too much later, in the fall of 1977, a request for proposals (RFP) was issued for a major project to address the human factors issues of the future process. Needless to say, we—Duncan Miller, Dan Massey, Mario Grignetti, and I (to mention just a few)—leaped at the chance to bid. We proposed to build a Test and Evaluation Facility (TEF) at Social Security headquarters in Baltimore. They were expecting that we would propose dumb terminals and a computer in each district office. Instead, we contracted with Xerox PARC to supply us with three Alto bit-map graphic terminals (a predecessor to the Xerox Star workstation), Xerox-associated INTERLISP software and DLISP graphics language, and a Digital Equipment Corp. (DEC) System 20/20 (Serial #3, I believe) as a laboratory server-machine. We proposed to design and build prototype interface software, simulate client interviews, and evaluate the prototypes with actual claims and service representatives imported from around the country on temporary one-week assignments to Baltimore. We won the proposal competition (we learned later that we were the only bidder).

Figure: Three SSA claims representatives, circa 1978, simulate a client interview on a Xerox Alto computer.

This was, for us, a big project. I moved to Baltimore in the summer of 1978, set up the laboratory, and assembled a staff of seven people, including Douglas Hoecker, a newly minted psychology Ph.D. from Alphonse Chapanis’ laboratory at Johns Hopkins. We had about 12 people in Cambridge supporting us with software development and other ancillary activities, such as exploring the prospects for training the agents in the use of the to-be-developed systems (led by Wally Feurzeig).

We were given space and the full cooperation of the SSA staff. David Bernstein was our program manager, assisted by Richard Gonzales and about five senior staff analysts, most of whom were former field agents; they became our subject-matter experts and more. We worked collaboratively with them to evolve candidate interface designs through task analyses, scenario generation, and iterative prototyping using the Alto terminals, an early Hewlett-Packard “smart terminal,” and an early color terminal from Tektronix.

Hoecker developed an activity-sampling evaluation methodology in which an observer used a carefully constructed coding scheme to record what each participant/SSA claims representative was doing during simulated interviews at a 15-second sampling interval. Categories included reading, talking, listening to the client, writing, keyboarding, waiting for a computer response, manipulating paper forms, and a miscellaneous category. In addition, we coded the aspect of whichever Social Security process was taking place—addressing a current or future issue, a general SSA inquiry, or just social interaction.

After a day of training in the use of the coding methodology, using samples of previously videotaped client interviews, the imported claims representatives worked in teams of three. One served as the SSA agent, doing the job as she or he normally would. One took on the persona of a potential client, supported by a one-page summary of the client’s characteristics. (They loved taking this role and adding client idiosyncrasies on the basis of their experience.) The third served as the observer, collecting the activity data on a scoring sheet. After each interview they rotated positions so that all agents (12 per week) served in all positions in the course of the evaluation. No agent interviewed any client persona more than once; all three prototypes plus the paper-and-pencil interviews, which served as the control condition, were tested on all personas and with all agents. At the end of the week, the agents assembled in a conference room for a videotaped debriefing session in which they critiqued the process, but especially the interface designs, providing rich participatory-design data. The team showed samples of these critiques to the interface developers, a much more influential medium than HCI professionals’ verbal criticisms. In the course of six months, we collected and analyzed a total of 461 client interviews, conducted by 67 SSA agents, assessing the performance of the three different prototypes and the paper-and-pencil interviews.
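
None of our tabulation code survives (and in 1978 we certainly were not writing Python), but the arithmetic behind activity sampling is simple enough to sketch. The fragment below uses invented category names and counts, not our data, to show how the observer's 15-second samples convert into estimated minutes per activity for a single interview:

    from collections import Counter

    SAMPLE_INTERVAL_SEC = 15  # the observer recorded one activity code every 15 seconds

    def minutes_by_activity(samples):
        # Each recorded code stands for roughly 15 seconds of the agent's time,
        # so the count of codes per activity converts directly into minutes.
        counts = Counter(samples)
        return {activity: n * SAMPLE_INTERVAL_SEC / 60.0
                for activity, n in counts.items()}

    # Invented observation record for one simulated interview:
    samples = (["talking"] * 40 + ["keyboarding"] * 25 +
               ["waiting_for_computer"] * 30 + ["manipulating_forms"] * 10)

    print(minutes_by_activity(samples))
    # {'talking': 10.0, 'keyboarding': 6.25, 'waiting_for_computer': 7.5, 'manipulating_forms': 2.5}

Averaged over interviews, separately for each prototype and for the paper-and-pencil control, such estimates gave us the comparisons we needed.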

The project was not without its stresses and challenges. Our DEC 20/20 was delivered late. Our software deliveries were always late (we convinced our sponsors of the value of the “waffle theory” of software development—you always throw away the first prototype). Our program manager or his manager kept thinking of new reports or new task assignments for us. Most critically, our Lisp software turned out to be unbelievably cumbersome and slow, causing long waits by the agents for the computer to respond. It was also subject to “garbage collection,” a euphemistic term for times when the system paused for several seconds, sometimes minutes, to “clean up” temporary storage locations to make room for new input.


In addition to system development and evaluation, we were charged with deriving an estimate of the cost-effectiveness of converting a district office to computer support. How in the world, we asked, were we going to do that? It turned out that SSA already had a work-sampling methodology in place. Once per week an employee in every district office went around to every other employee in that office and asked, “What are you doing right now?” The findings were summarized once a month and coded in terms of the employee’s job category, the SSA process they were engaged in, and the actual activity (reading forms, looking up files, talking with a client). With about 1,300 district offices and 43,000 employees, this produced a pretty reliable sample of workload distribution—how the agents were spending their time. Periodically, special categories of particular interest to management were added. We were permitted to construct one such set of special categories for a single month’s assessment. We designed it specifically to get at the activities that were likely to change or go away if computers were introduced into district offices. This became our baseline assessment from which to extrapolate to the future process. We then used the results of our experimental evaluations to estimate the differences in time that would be spent on the various processes and activities. However, we had to pay special attention to how we handled the long waiting times caused by our software, which we hoped would not be present in the system as finally delivered. Fortunately, we had recorded a category for wait time.
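
The extrapolation itself was straightforward bookkeeping, and a short sketch may make the logic concrete. The shares and ratios below are invented for illustration, not the figures from our report: baseline work-sampling shares are multiplied by the time ratios measured in the laboratory, and because waiting was coded as its own activity, the projection can be run both with and without the response-time waits.

    # Illustrative sketch only; the shares and ratios below are assumptions,
    # not the figures from the 1979 report.
    baseline_share = {            # fraction of agent time, from district-office work sampling
        "interviewing": 0.30,
        "post_interview_processing": 0.40,
        "other": 0.30,
    }
    prototype_ratio = {           # prototype time / paper-and-pencil time, from the TEF evaluations
        "interviewing": 1.15,                # interviews took somewhat longer at the terminal
        "post_interview_processing": 0.55,   # large savings once the data were already electronic
        "other": 1.00,
    }
    wait_share_of_interview = 0.20  # assumed fraction of prototype interview time spent waiting on the software

    projected = {k: baseline_share[k] * prototype_ratio[k] for k in baseline_share}
    total = sum(projected.values())  # still includes response-time waits

    # Because waiting was coded as its own activity, the waits can also be stripped out
    # to estimate what faster (e.g., workstation-based) hardware would buy:
    total_no_wait = total - (baseline_share["interviewing"] *
                             prototype_ratio["interviewing"] * wait_share_of_interview)

    print(f"workload per claim, waits included: {total:.2f} of baseline "
          f"(capacity gain of about {1.0 / total - 1.0:.0%})")
    print(f"workload per claim, waits removed:  {total_no_wait:.2f} of baseline "
          f"(capacity gain of about {1.0 / total_no_wait - 1.0:.0%})")

Run with our actual baseline shares and measured ratios, the same kind of arithmetic produced the capacity estimate described next.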

The study concluded that the time spent in interviews would actually increase, but it would be more than offset by reductions in claims-processing activities after the interview, given that the data would already be in electronic form. As a result, the same work force would be able to handle a 33 percent increase in claims volume. These estimates were made assuming there would still be substantial response-time waits. The report provided a further analysis of time-shared support employing a single district office computer versus personal-computer-type workstations, at the time in their infancy. The conclusion was that we could justify investing in PC workstations on the basis of the potential savings in waiting time and increased control in the hands of the agents.

Then came Black Friday! After we had been at work on all this for almost a year, a new administrator of the SSA was appointed, who then hired a new manager of the systems division, of which our project was a part. He, in turn, reorganized the division. Management decided it was no longer going to pursue the future process but would instead gradually and incrementally evolve the system to place computers in district offices. The future-process cadre was disbanded. The new manager of the user-requirements branch (a layer above our program manager) came from a background in Navy system acquisition. He appeared on a Thursday for his briefing on our work. By the time we had completed presenting our year of accomplishments, it was Friday afternoon. He rose from his seat and said something like, “You all have accomplished a great deal in the course of the year. It is clear that you have done very fine work. However, this kind of iterative testing is just not compatible with the way we acquire and build systems. A management team will define the system requirements. A request for proposals will be generated and put out for bid. The winning contractor will then design and build the system. We have no use for the kind of work BBN has been doing.” Aghast, we then asked, “Well, we have built you a very useful laboratory with hardware and software already in place; surely, there are some other purposes at SSA for which this would be useful. What do you intend to do with it?” He replied, “I intend to declare it government surplus.”


He did not declare the hardware government surplus. He transferred it to a unit that was involved in training SSA agents, but, as far as we know, it was quietly abandoned. We were given six months to complete and document what we had done. Years later I had SSA personnel again attend the Michigan summer course. They were charged with assuring the usability of computers in district offices, but they were not familiar with our original work. It took 15 more years for computers to begin to have a significant impact in SSA district offices.

So there you have it. In 1978 we established a “usability laboratory” at SSA—maybe not the first, but certainly one of the earliest. We undertook a spiral development process in which we did analysis, built prototypes, and evaluated their usability. We instituted several features of participatory design involving current and former SSA claims agents, conducting task analyses, developing scenarios, and participating in the assessment and critique of the interface software. Finally, because we were asked, we undertook a rather innovative approach to assessing the business case—the potential cost-effectiveness of introducing computers into SSA district offices. We, of course, were disappointed that it all went for naught as far as SSA was concerned, but for those of us involved, it was a rich, rewarding experience that has shaped much of our continued efforts in the HCI field.

References

1. I want to thank Jonathan Grudin for inviting (and encouraging) me to contribute to his column.

2. The only accessible documentation of this work, aside from the original technical reports, is a symposium that was held at the 1979 Annual Meeting of the Human Factors Society: “Human factors studies for interactive systems development at the Social Security Administration.” In Proceedings of the Human Factors Society 23rd Annual Meeting. Santa Monica, CA: Human Factors Society, 1979, pp. 576-590.

Author

Richard W. Pew
BBN Technologies
pew@bbn.com

ABOUT THE AUTHOR

Dick Pew spent 11 years at the University of Michigan and the past 33 years at BBN Technologies in Cambridge, MA, where his current employment status is “part-time irregular.” His interests have spanned a range of human factors activities, from HCI to human performance modeling. At this stage of his career, history becomes an attractive topic.

Figures

Figure. Three SSA claims representatives, circa 1978, simulate a client interview on a Xerox Alto computer.

©2007 ACM  1072-5220/07/0500  $5.00


 
