Authors: Jonathan Grudin
Posted: Fri, October 09, 2015 - 6:01:55
Three of my favorite research projects at Microsoft were never written up: automated email deletion, an asynchronous game to crowdsource answers to consulting questions, and a K-12 education tool. I expected they would be, as were the projects that led to my most-cited Microsoft work: persona use in development, social networking in enterprises, and multiple-monitor use. What happened to them?
The unpublished projects had research goals, but they differed in also having immediate applied goals. Achieving the applied goal was a higher priority than collecting and organizing data to satisfy reviewers. There were research findings, but completing the project itself provided a sense of closure. In my other projects, closure comes only with publication, because they aimed to influence practice indirectly: Publishing was the way to reach designers and developers.
Projects with immediate goals can involve sensitive or confidential information, but that did not prevent me from publishing these studies. Over my career, the professional research community has tried to block publication, and sometimes succeeded, far more often than industry management has.
I hadn’t thought carefully about why only some of my work was published. It seems worthwhile to examine the relationships among research, action, and our motives for publishing.
A spectrum of research goals
Research can be driven by curiosity, by a theoretical puzzle, by a desire to address a specific problem, or by a desire to contribute to a body of knowledge. The first HCI publication, Brian Shackel’s 1959 study of the EMIac interface, addressed a specific problem. Many early CHI studies fell into the last category: By examining people carrying out specific tasks, cognitive psychologists sought to construct a general theory of cognition that could forever be used in designing systems and applications. For example, the “thinking-aloud” protocol was invented in the 1970s to obtain insight into human thought processes; only in the 1980s did Clayton Lewis and John Gould apply it to improve designs by identifying interface flaws.
When GUIs became commercially viable, the dramatically larger space of interaction design possibilities shattered the dream of a comprehensive cognitive model. Theory retreated. Observations or experiments could seek either to improve a specific interface or to yield results that generalize across systems. The former was less often motivated by publication, and as the field became more academic, it was less likely to be judged as meriting publication.
Action research is an approach [1] that combines specific and general research goals. Whereas conventional research sets out to understand, support, or incrementally improve the current state, action research aims to change behavior substantially. Action researchers intervene in existing practice, often by introducing a system, and then study the reaction. Responses can reveal aspects of the culture and the effectiveness of the intervention. This is a good option for phenomena that can’t be studied in a lab and for which small pilot studies won’t be informative. A drawback is that action research is often undertaken with value-laden hypotheses, which can undermine objectivity and lower defenses against the implacable enemy of qualitative researchers: confirmation bias. Action research is also often employed in cultures not shared by the researchers. It sits at one end of a continuum that reaches to conventional research. My own research was not intervention-driven; it sought to understand first, then improve.
Publication goals
The goals of publication partly mirror the goals of research. Publication can contribute to a model, framework, or theory. It can help readers who face situations or classes of problems that the researchers encountered. Publication can enable authors to earn academic or industry positions, gain promotion or tenure, attract collaborators or students, astonish those who thought we would never amount to much, or become rich and famous. All worthy goals, and not mutually exclusive. Many are in play simultaneously—it’s nice when a single undertaking contributes to diverse goals.
An examined life
Returning to the question of why my favorite work is unpublished: Aiming for an immediate effect introduces constraints and alters priorities, but it doesn’t preclude careful planning for subsequent analysis. Steve Benford and his colleagues staged ambitious public mixed-reality events under exacting time and technology pressure, yet they collected data that supported publication.
Early in my career, my research was motivated by mysteries and roadblocks encountered while doing other things. There were a few exceptions—projects undertaken to correct a false conclusion in a respected journal or to apply a novel technique that impressed me. Arguably I was too driven by my context, afflicted by a professional ADHD that distracted me from building a coherent body of work. On the other hand, it ensured a degree of relevance in a dynamic field: Some who worked on building a large coherent structure found that the river had changed course and no longer flowed nearby.
My graduate work was in cognitive psychology, not HCI. I took a neuropsychology postdoc. My first HCI experiment was a side project, using a cool technique and a cool interface widget in a novel way, described below. The second aimed to counter an outrageous claim in the literature. The third explored a curious observation in one of the several conditions of the second experiment.
I left research to return to my first career, software development. There, challenges arose: Why did no one adopt our multi-user features and applications? Why were the software development practices of the mid-1980s so inappropriate for interactive software? Not finding answers in the literature, I gravitated back to research.
I persevered in researching these topics, but distractions came along. As a developer I first exhorted my colleagues to embrace consistency in interface design, but before long found myself often arguing against consistency of a wrong sort. I published three papers sorting this out. I was lured into studying HCI history by nagging questions, such as why people managing government HCI funding never attended CHI, and why professional groups engaged in related work collaborated so little.
Some computer-use challenges led to research projects. My Macs and PCs were abysmal at exploiting two-monitor setups; what could be done to fix that? As social media came into my workplace in the 2000s, would the irrational fear of email that organizations exhibited in the 1980s recur? Another intriguing method came to my attention: design teams were investing time in creating fictional characters, called personas. Could this really be worthwhile? If so, when and why? And each of the three favorite projects arose from a local disturbance.
Pressure to publish. Although it is typically a significant motivation for research, I escaped it my entire career. My first week of graduate school, my advisor said, “Be in no hurry. Sit on it. If it’s worth publishing, it will be worth publishing a year later.” Four years later, too late to affect me, he changed his mind, having noticed that job candidates were distinguished by their publications. No telling how publication pressure would have directed me, but the pressure to finish a dissertation seemed enough.
I published one paper as a student. My first lab assignment was to report on a Psychological Review article that proposed an exotic theory of verbal analogy solution. Graduate applications had required the Miller Analogies Test, and I saw a simpler, more plausible explanation for the data. I carried out a few studies, and an editor accepted my first draft over the Psych Review author’s heated objection. This early success had an unfortunate consequence—for some time thereafter, I assumed that “revise and resubmit” was a polite “go away” rejection.
Publication practices pushed me away from neuropsychology. I had been inspired by A. R. Luria’s monographs on individuals with unusual brain function. I loved obtaining a holistic view of a patient—cognitive, social, emotional, and motivational. The standard research approach was to form a conjecture about the function of a brain region, devise a short test, and administer it to a large set of patients; based on the outcome, the researcher modifies the conjecture and devises another test. This facilitates a publication stream, but it didn’t interest me.
The early CHI and INTERACT conferences had no prestige. Proceedings were not archived; only journals were respected. There was not yet an academic field; most participants were from industry. Conferences served my goal of sharing results with other practitioners who faced similar problems. It was not difficult to get published. Management tolerated publishing but exerted no pressure to do it.
When I returned to academia years later, conferences had become prestigious in U.S. computer science. Like an early investment in a successful startup, my first HCI publications had grown sharply in value. I continued to publish, but not under pressure—I had already published enough to become a full professor.
I did however encounter some pressure not to publish results along the way.
“Sometimes the larger enterprise requires sacrificing a small study.”
My first HCI study adapted an ingenious Y-maze that I saw Tony Deutsch use to enlist rats as co-experimenters when I was in grad school. It measures performance and preferences and enables the rapid identification of optimal designs and individual differences. I saw an opportunity to use it to test whether a cool UI feature designed by Allan MacLean would lure some people away from optimally efficient performance. It did.
A senior colleague was unhappy with our study. The dominant HCI paradigm at the time modeled optimal performance for a standard human operator. If visual design could trump efficiency and significant individual differences existed, confidence in the modeling endeavor might be undermined. He asked us not to publish. I have the typical first-born sibling’s desire to please authority figures, so it was stressful to ignore him. We did.
A second case involves the observations from software development that design consistency is not always a virtue. I presented them at a workshop and a small conference, expecting a positive reception. To my dismay, senior HCI peers condemned me for “attacking one of the few things we have to offer developers.” My essay was excluded from a book drawn from the workshop and a journal issue drawn from the conference. I was told that it had been discussed and decided that I could publish this work elsewhere with a different title. I didn’t change the title. “The case against user interface consistency” was the October 1989 cover article of Communications of the ACM.
Most obstacles to publishing my work came from conference reviewers conforming to acceptance quotas of 10% to 25%, as though 75% to 90% of our colleagues’ work were unfit to be seen. It is no secret that chance plays a major role in acceptances; review processes are inevitably imprecise. Was your paper assigned to generous Santas or to annihilators? Was it discussed before lunch or after lunch? And so on. A few may deny the primacy of chance, just as a few deny human involvement in climate change, and probably for similar reasons. One colleague argued that chance is OK because one can resubmit: “Noise is reduced by repeated sampling. I think nearly every good piece of work eventually appears, so that the only long-term effect of noise is to sprinkle in some bad work (and to introduce some latency in the system)” [2].
Buy enough lottery tickets and you will win. In recent years, few of my first submissions have been accepted, but no paper has been rejected three times. There are consequences, however. Rejection saps energy and goodwill. It discourages students. It keeps away people in related fields. Resubmission increases the reviewing burden.
The status quo satisfies academics. Fresh cannon fodder, in the form of new students and assistant professors, replaces the disheartened. But for research with an action goal, such as the work I haven’t published, the long-latency, time-consuming publication process has less of a point.
This is an intellectual assessment. There is also an emotional angle. Some may regard a completed project that strove for immediate impact dispassionately as they turn to the next project. I don’t. Whatever our balance of success and failure, I treasure the memories and the lessons learned. Do I hand this child over to reviewers tasked with killing 75% of everything that crosses their path? Or do I instead let her mingle with friends and acquaintances in friendly settings?
Endnotes
1. Or set of approaches: Different adherents define it differently.
2. David Karger, email sent June 24, 2015.