XXII.6 November - December 2015

We must be more wrong in HCI research

Kasper Hornbæk

The secret to being wrong isn't to avoid being wrong! The secret is being willing to be wrong. The secret is realizing that wrong isn't fatal. — Seth Godin, Tribes (2008)

For the past few years I have pondered how to be wrong in HCI research. By "being wrong" I do not mean blundering in experimental design, being irresponsible with participants' well-being, or generating false findings. Rather, I think of situations where a researcher discovers that an empirical study does not justify a strong belief based on existing HCI knowledge. Or when HCI as a field of research is shown to be mistaken in the concepts we use or the design guidelines we promote. Or when existing theories in our field provide inadequate descriptions of behavior around a new technology.

In these situations, being wrong is informative. It allows for learning and the revision of faulty assumptions or ingrown beliefs. More generally, being wrong and making mistakes are at the heart of science; they play a great role in design and engineering, too. Being wrong is also prominent in the history of science, as well as in some normative accounts of science, such as falsificationism.

In my view, HCI as a field is terrible at being wrong. Our conferences report findings on new behavior around technology; often there is no sense in which those findings can be wrong because they are new and rarely compared to behavior with older technology. Our journals show new interaction techniques evaluated in lab experiments using inferior alternative UIs and contrived tasks—not much can go wrong there either. Stated differently: When did you last read an in-depth study of a failed interaction technology? Or read a field study that showed a presupposed theoretical framework from HCI as being wrong?

I probably judge our field harshly. Nevertheless, I wonder if we can get better at being wrong in HCI research. Let me discuss two approaches: (a) agreeing on what we know and (b) doing daring empirical work.

For (a), a precondition for making mistakes is that there are results in HCI that we agree on as correct. If that is not the case, it is impossible, as an individual researcher or as a field, to be wrong about anything. It is an interesting (and hard!) exercise to find such agreed-upon results in the literature. I see a couple of reasons why they are so scarce.

First, little work in HCI takes stock of what we know about interaction with computers. For instance, even by a liberal count, the CHI proceedings contain fewer than 10 meta-analyses, and I would argue the HCI literature as a whole contains few good reviews. The consequence is that there are few agreed-upon results in the literature.

What to do? I think some ways forward are solid reviews, meta-analyses, and related work sections that synthesize insights (rather than justify a particular study or system). If we agree on what we know, it is much easier to challenge that knowledge.

Second, in many fields, researchers agree on problems that are crucial to solve (e.g., in algorithms, or the Millennium Prize Problems); sometimes these problems make it onto websites or into reviews (e.g., papers titled "Open Problems in X"). Not so for HCI. I know of no recognized list of open problems in HCI, and few papers synthesize problems in subfields of HCI. Kostakos's argument that HCI lacks motor themes (themes with strong centrality and high density in a cluster analysis) also suggests a lack of agreed-upon problems [1].

Again, I think some ways forward are clear: We must attempt to describe and develop problems central to HCI (e.g., What is interaction? What makes an interface good?). Such problems would of course be different from the well-specified problems in algorithms. Nevertheless, those problems seem to be worth working on conceptually and empirically, and the results seem worth being wrong about.

The second approach (b) starts from the observation that much empirical work in HCI is done in a manner that does not allow being wrong in relation to what is already known. Again, there are a couple of prominent reasons.

First, there is little daring empirical work in HCI. Saul Greenberg and Bill Buxton pointed out that usability evaluations often produce existence proof by showing only a single case where a new interaction technique works better than an older technique [2]. They contrast this to risky hypothesis testing where a study could go either way. This echoes John Platt [3], who argued that research can be seen as climbing a tree; occasionally we get to a fork where each branch represents a feasible possibility or hypothesis. We should do the empirical work at those forks; it is daring yet informative. Such work is rare in HCI.

What are the ways forward? In one paper, I used Platt's ideas to suggest ways of designing more informative experiments in HCI [4]. For instance, we can improve experiments through strong baseline interfaces, varied tasks, and clearer concepts about the interfaces being compared. We can also look at boundary conditions, that is, manipulate variables to find where one or another interface performs best. And we must not set up, or pay too much attention to, win/lose studies that merely provide existence proofs.


Second, top HCI outlets focus on novelty and originality. With some colleagues, I wrote a paper on replications and discussed how journals require originality (e.g., that submissions must be "original in some way," HCI Journal) [5]. The insistence on originality contrasts with incremental research, a label sometimes used to reject papers at program committee meetings. This focus works against building on earlier work and against replications (we found a replication rate of three percent in a sample of 891 HCI papers). It means that HCI rarely challenges earlier results.

As a way forward, I think HCI researchers, and HCI outlets as well, need to emphasize incremental research more and need to build on what we already know to a much larger extent. Being novel is overrated compared with being wrong in a daring study about a fundamental question in HCI.

With this column I want to call for us to be more ambitious, both as individual researchers and as a community. I believe we can set up research in HCI to be wrong more frequently and in more informative ways: We can change our outlets to value negative results, and we can try to do empirical studies that stand a chance of failing in an interesting way. It is all about attitude: Let's try to be more willing to be wrong in HCI research.

References

1. Kostakos, V. The big hole in HCI research. Interactions 22, 2 (2015), 48–51.

2. Greenberg, S. and Buxton, B. Usability evaluation considered harmful (some of the time). Proc. of CHI 2008, 111–120.

3. Platt, J.R. Strong inference. Science 146, 3642 (1964), 347–353.

4. Hornbæk, K. Some whys and hows of experiments in human-computer interaction. Foundations and Trends in Human–Computer Interaction 5, 4 (2013), 299–373.

5. Hornbæk, K., Sander, S.S., Bargas-Avila, J., and Simonsen, J.G. Is once enough? On the extent and content of replications in human-computer interaction. Proc. of CHI 2014, 3523–3532.

Author

Kasper Hornbæk is a professor in computer science at the University of Copenhagen. He works on user experience, shape-changing interfaces, large displays, and body-based interaction. He is also interested in the methodology of HCI, including the role of replications, measures of usability, and solid experimental work.

Copyright held by author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2015 ACM, Inc.
