The Challenge at the Interface

XV.3 May + June 2008
Page: 45

THAT’S ENTERTAINMENT: HCI impact and uncitedness


Authors:
John Hopson


As an HCI researcher working in the games industry, the sweetest words I have ever heard from my clients have been "Okay, we'll fix that." At the end of the day, research is only as good as the amount of impact it has on the experience of our users. The heart of HCI is about understanding and measuring the user experience, about putting metrics around the impact of a design. It would be profoundly hypocritical of us not to apply that same sort of hard-nosed evaluation to our own work products and the impact we have on our users, the designers, and other members of our product teams.

Sadly, most standard methods of communicating research results don't work very well. Academics have struggled for years to come to terms with the phenomenon of "uncitedness," the proportion of published works that are never subsequently cited by other works. Depending on how it is measured and the particular field of research, estimates of uncitedness range from a mere 24 percent for some scientific fields up to a startling 93 percent for the arts and humanities. Other studies have tried to gauge how many people actually read the average published journal article and have come up with estimates of 10 to 20 readers. That estimate is particularly disheartening when one considers that mere readership doesn't imply those readers went on to act on the information conveyed in the article.

The academic journal article format is an old if cantankerous friend to anyone who has been trained in the halls of academia, but it is clearly nothing like an optimal format for motivating the reader to action. If anything, these uncitedness numbers underestimate the problem because they represent the best-case scenario, in which the writer and readers are both experts in the same field. Cross-discipline communication, such as that between an HCI researcher and a designer, can be assumed to be even more difficult.

There is a simple underlying truth here: Some research has more impact than other research. And once that dichotomy exists, once we have a split between the ignored and the effective, I know which side of that divide I want my life's work to be on. All of an HCI researcher's outputs, the publications, presentations, reports, meetings, and so on, are merely intermediaries between the research and its impact on the final design. We should evaluate them the same way we evaluate any other piece of design work, examining how they're used by the actual end user and reshaping them to produce the experience we want.

Working in the games industry, I've had the mixed blessing of being the first HCI researcher many of my clients have ever worked with. This means I have the extra burden of convincing these newcomers of the value of HCI work, but it also means that I constantly have the opportunity to reinvent (and hopefully improve) the way I present my data. I'm forced to engage very closely with my clients, walking them through research results step by step, and I've had a chance to observe the impact the data has had on their subsequent work. The game design teams I've collaborated with put their heart and soul into what they do, and their designs often reflect their best and most passionately held ideas about how games should work. In many cases they've put in extra nights and weekends of work to create particular features, fought for those features in meetings, and spent internal political capital to make sure the features made it into the game. They have a strong and completely understandable bias toward rejecting research results that require them to change their designs, no matter how true or important those results are. It falls to me and my team to act as advocates for the data, to present the results in the way most likely to get past that bias and jumpstart a serious collaboration about how to address the HCI problems revealed by the data. We're certainly not always successful, but I have seen several consistent themes in the presentations that have struck home.

The first theme is that successful presentations tend to establish the audience's motivation before moving into the detailed analysis. The client's first unspoken question is always, "Why do I care about this?" Answering that question early on increases the odds that the audience will stay with you until the end of the article, meeting, or presentation. Even if the audience is ostensibly listening, the presentation is wasted if the answer to that question is not clear in their minds. In many cases, this is about establishing the users' pain, the cost of an HCI failure. During the production of Halo 3, I got in the habit of playing a video of a sample user from the study under discussion in the conference room before my research debrief meetings. As people filed in, found their seats, went looking for missing attendees, etc., there would be video playing of a study participant wandering lost through a poorly designed mission or struggling with an overly difficult combat encounter. It provided something for my attendees to watch while they waited and started things off with a concrete example of why we were there. It certainly can be difficult to measure the return on investment of HCI work, but a good example of the cost of not doing the work can be profoundly motivating for your audience.

Second, my research data has the most impact when presented as knowledge about the product, rather than evaluation of the product. As I said earlier, my designers are passionate about their work, and the idea that their favorite feature received a failing grade can add another layer of resistance to their preexisting biases. Telling a designer that a particular game mission is too hard is often counterproductive, but telling them that usability participants took an average of five hours to beat what should have been a one-hour mission tends to work much better. When they perceive their work as being under attack, a skeptical and intelligent audience can generally find a reason to disregard any research finding. The difference between positioning the research findings as "information to make the product better" rather than "information about what's wrong with the product" can be subtle, but it can save hours of argument. Furthermore, involving the product team in the creation of the metrics ("How long should this mission take?") can also help avoid the perception that their work is being graded on some arbitrary scale. Reframing the debate from "How did the mission score?" to "How did this mission match up with the designer's intent?" puts the researcher and the designer on the same side, a much better starting place for collaboration.

Finally, the presentation should reflect the goals of the audience, not the presenter. Material should be organized according to whatever schema the audience uses, not according to how the topics were addressed in the research. It doesn't matter whether the presentation matches our logical model of the topic; it matters whether the audience understands the results and goes on to act on them. The presentation is a tool, a machine for making an impression on the minds of the audience, and the depth and accuracy of that impression is the only metric that counts.

One special case of this principle is when there are several audiences for a given piece of research. This can demand multiple separate treatments of the same data, emphasizing different aspects of the work. For example, I've begun producing two distinct documents from my usability work: a producer report and a designer report. The producers tend to care more about issues of overall project progress ("Are the usability issues severe enough to delay the project milestone?"), while the designers tend to care about fine details ("How did the players react to that ambush in the first mission?"). I've found it more effective to address those audiences separately than to produce a single set of findings that is only partially relevant to any given reader. Both documents accurately represent the same usability study, and both serve the needs of their users.

This is not to say our presentations to our clients should look like marketing fluff, or that we should be telling them what they want to hear. But within the general constraints of sound research and accurate reporting, there is a broad spectrum of ways to convey the results, some of which are more effective than others. As a field, HCI claims to analyze and understand the way users interact with products, and we have no excuse for not analyzing and understanding the impact of our own work. There is no reason we should ever go uncited.

Author

John Hopson
Microsoft Game Studios
[email protected]

About the Author

John Hopson is a user researcher at Microsoft Game Studios and has worked on several bestselling game franchises, including Halo and Age of Empires. John holds a doctorate in experimental psychology and is the author of a number of articles on the interaction of research and game design.

EDITOR

Dennis Wixon
[email protected]

Footnotes

DOI: http://doi.acm.org/10.1145/1353782.1353794


©2008 ACM  1072-5220/08/0500  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

