Features

XXII.4 July - August 2015

Locating the ‘big hole’ in HCI research


Authors:
Stuart Reeves

In a recent Interactions article, “The Big Hole in HCI Research,” [1] Vassilis Kostakos argued that HCI lacks persistent “motor themes,” based on a co-word analysis [2] of keywords sections from the past 20 years of CHI papers [3]. HCI as a discipline, it is argued, “simply roll[s] from topic to topic, year after year, without developing any of them substantially.”

In this analysis, motor themes—based on clusters of recurring keywords over time—are described as a critical feature of healthy disciplines. Motor themes represent commonly addressed topics that constitute the research mainstream and therefore are essential to creating a disciplinary core. After summarizing his work from a recent CHI paper, Kostakos characterizes the absence of these themes from HCI as “a very worrying prospect for a scientific community.”


These concerns seem to be echoed by events at recent CHI conferences, such as the appearance since 2011 of yearly panels or workshops on “replication” (RepliCHI), and the Interaction Science SIG at CHI 2014. While my view contrasts with those of the proponents of what one might label the “scientific program” [4], the emergence of increased debate about the very idea of HCI—what its work does, could, or should look like academically—feels like a valuable activity and is probably long overdue.

Here, I want to talk about two matters that are core to the discussion: the relationship between science and HCI, and, more broadly, the disciplinarity of HCI [5].

HCI and Science

Claims about the big hole in HCI Research invoke science quite directly, with Kostakos warning that HCI’s apparent lack of motor themes should concern a scientific community. Yet all is not lost, Kostakos argues, for “new initiatives have sprung up in our field to make it more scientific in the sense of repeating studies, incremental research, and reusable findings.”

But this invocation of science and the desire to make HCI “more scientific” is not a new concern. HCI’s early emergence was strongly shaped by researchers from the cognitive and broader psychological sciences, academic communities that have themselves often been at pains to demonstrate scientific credentials by applying methods from the natural sciences. So, I would argue that the cultural foundations for HCI’s desire to “become scientific” have always been present.

As with the programs of Interaction Science and RepliCHI, the expressed desire for a scientific disciplinarity is typically thought to be achievable through adherence to a set of signature scientific qualities that are seen as gold standards of being a science. I summarize these qualities as follows:

  • Accumulation: Science’s work is that of cumulative progress.
  • Replication: Science’s work gains rigor from its replicability.
  • Generalization: Science’s cumulative work builds toward transcendent knowledge.

Attempts to reorder HCI back into accord with these scientific qualities have been suggested before. For instance, in 2000, Whittaker et al. called for a similar program, arguing that developing various standardized “reference tasks” could establish accumulation and generalization in HCI, and therefore scientific legitimacy [6].

However, I see two problems with these appeals to science and scientific qualities. The first is that science as a term is problematic. The second is that it is mistaken to imply—or to fail to guard against the implication, even if unintended—that these qualities are properties of science itself.

On the first point, science is a linguistic chimera. Within HCI alone the term is diversely and nebulously applied. This is unhelpful, for science is often employed to do very different kinds of things that have little to do with pursuing scientific qualities. For instance, it is often used as a rhetorical strategy to assert epistemic or moral authority (science as “good work” and/or “transcendent truth”). Employed in this way, science is a linguistic method for legitimizing certain kinds of work—a matter of categorizing things as science and not science. Science is also often used as an aspirational label to request peer recognition—for example, computer science rather than informatics. This means that the term science can create significant confusion: it comes to be deployed in place of appropriate, relevant assessments of the rigor of research work.

This leads to my second point: that accumulation, replication, and generalization of findings are not intrinsic properties of some broad domain of science, but rather are specific methodical practices conducted by specific communities of researchers. Hence, I would argue that the standards of what counts as a generalization, what is a relevant process of accumulation (which I take to be the establishment of particular motor themes within researchers’ discourse), and what motivates the conduct of replications should be decided upon as a matter of agreement between relevant researchers. I do not think these things can be determined through adherence to an external and nebulous set of “scientific standards” adopted from a textbook of formal descriptions of the approaches of the natural sciences.

I have a great deal of sympathy for proponents of the scientific program in HCI, especially where they seek to increase the rigor of the HCI community—such a motivation can only be encouraged. Yet I feel that this cannot come by way of demands for singular yet unspecifiable approaches such as making HCI “more scientific.” In other words, this veneration of the classic tropes of the scientific method will not assist us in firming up how researchers engage in various shared practices to establish agreement and disagreement over findings. If by science we mean “demonstrating a rigor agreed upon by practitioners of the relevant and particular genre of reasoning the work pertains to,” then I might consider it a useful term. But this meaning seems unlikely (it’s definitely very unwieldy!).

HCI as a Discipline

The discussions of scientific models relate, I think, to broader debates about HCI’s disciplinarity that have recently emerged. Kostakos’s work provides an account of CHI’s—and therefore (perhaps tenuously) HCI’s—disciplinary architecture according to co-word analysis. The organization of knowledge in HCI is described by various quadrants. These quadrants (Figure 1) trace themes as they are born (“Quadrant III: Emerging or declining themes”), begin to stabilize (“Quadrant IV: Basic and transversal themes”), “go mainstream” as motor themes (“Quadrant I: Motor themes”), and then die off (back to Quadrant III) or perhaps decline (“Quadrant II: Developed but isolated themes”). Themes may never reach Quadrant I, may go straight to Quadrant II, or may get stuck in Quadrant IV or never make it past Quadrant III. But the basic idea is that of charting a disciplinary lifecycle: detecting the movement of themes across the graph and, in doing so, performing a kind of health check on HCI’s disciplinary coherence.
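To make the mechanics concrete, the following is a minimal sketch of Callon-style co-word analysis, written in Python against an invented toy set of keyword lists rather than the CHI corpus analyzed in [3]. It assumes the usual strategic-diagram convention in which a cluster’s density (the strength of its internal keyword links) measures how developed a theme is, its centrality (the strength of its links to keywords outside the cluster) measures how connected it is to the rest of the field, and the two together determine its quadrant. The cluster memberships are hard-coded here, whereas a real analysis would derive them by clustering the co-occurrence network.

from itertools import combinations
from collections import Counter
from statistics import median

# Toy corpus: each entry stands in for one paper's author-keyword list.
# (Invented data; the analysis in [3] uses 20 years of CHI keywords.)
papers = [
    ["crowdsourcing", "social computing", "incentives"],
    ["crowdsourcing", "social computing", "wikipedia"],
    ["crowdsourcing", "social computing", "usability"],
    ["eye tracking", "visual attention", "usability"],
    ["eye tracking", "usability", "crowdsourcing"],
    ["tangible interaction", "children", "education"],
    ["tangible interaction", "museums"],
]

# 1. Co-word counts: how often each pair of keywords co-occurs in a paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

def weight(a, b):
    return cooc.get(tuple(sorted((a, b))), 0)

# 2. Keyword clusters, hard-coded for illustration. In practice these would
#    come from a clustering step over the co-occurrence network.
clusters = {
    "crowd work": {"crowdsourcing", "social computing", "incentives", "wikipedia"},
    "gaze": {"eye tracking", "visual attention", "usability"},
    "tangibles": {"tangible interaction", "children", "education", "museums"},
}
all_keywords = set().union(*clusters.values())

def density(members):
    # Mean link weight among a cluster's own keywords (internal cohesion).
    pairs = list(combinations(members, 2))
    return sum(weight(a, b) for a, b in pairs) / len(pairs)

def centrality(members):
    # Mean link weight between a cluster's keywords and all other keywords.
    outside = all_keywords - members
    return sum(weight(a, b) for a in members for b in outside) / len(members)

# 3. Strategic diagram: place each cluster in a quadrant relative to the
#    median centrality and density across clusters.
stats = {name: (centrality(m), density(m)) for name, m in clusters.items()}
c_med = median(c for c, _ in stats.values())
d_med = median(d for _, d in stats.values())

for name, (c, d) in stats.items():
    if c >= c_med and d >= d_med:
        quad = "I (motor theme)"
    elif c < c_med and d >= d_med:
        quad = "II (developed but isolated)"
    elif c < c_med and d < d_med:
        quad = "III (emerging or declining)"
    else:
        quad = "IV (basic and transversal)"
    print(f"{name}: centrality={c:.2f}, density={d:.2f} -> Quadrant {quad}")

Run as given, the script prints each toy cluster’s centrality, density, and quadrant. The analysis in [3] operates at a far larger scale and over time-sliced data, but the underlying bookkeeping is broadly of this kind.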


It seems that HCI is often referred to as a discipline. For Kostakos’s arguments in “The Big Hole in HCI Research,” conceptualizing HCI as a discipline is necessary to ensure that it becomes comparable with the other disciplinary objects held as reference points. Thus, the co-word analysis of HCI is placed alongside extant analyses of psychology, consumer behavior, software engineering, and stem cell research. These reference points are then used to show the absence of Quadrant I keyword clusters in HCI. When compared with these other disciplines, the scientific deficiencies of HCI are revealed.

But I think there is something wrong with this picture. Assertions—both explicit and implicit—of HCI’s disciplinarity are frequently made, often alongside notes about its interdisciplinary and multidisciplinary character. Yet the core concept of HCI as a discipline (never mind as a scientific discipline) raises serious questions. The unstated assumption driving the assertions and implications of Kostakos’s work is that disciplinary objects are transcendentally comparable. However, it is hard to see how, say, the activities of stem cell researchers have any bearing on the activities of HCI researchers, and it is not clear whether it is reasonable to assume that their paper-writing practices, let alone their everyday research work practices, are in some way similar. It is also unclear why it might be that specialisms like stem cell research should be compared with all of psychology—a broad church to say the least. Why not social psychology or cognitive psychology?

My alternative view would be that each discipline works with phenomena particular to it, and has methods of reasoning and research practices that are likewise particular to it. What counts as a relevant research question in one discipline may have nothing necessarily to do with what counts in another. At its most basic, the notion of a discipline is an attempt at finding a way of ordering knowledge [7]. In this sense, it is not a “natural fact”; instead, disciplinary order is an epiphenomenon of the particular practices of a particular community of researchers.

The very idea that HCI is a discipline is also itself contentious. Yvonne Rogers has suggested that HCI is an “eclectic interdiscipline” [8]. By implication, I take this to mean that in being an interdiscipline, HCI should, indeed, have the big hole Kostakos identifies, because the very nature of an interdiscipline would be centered on an absence of a disciplinary core. Alan Blackwell has recently advanced this argument, asking whether HCI is actually best suited to occupying a catalytic role between disciplines as opposed to engaging in the development and maintenance of a stable body of knowledge [9]. If there were some essential disciplinary core to HCI, it would struggle in this role as an interdiscipline.

Being an Interdiscipline

It seems trite to point out how much has been written about implications for design, or to highlight how they are seen as a problematic nervous tic of the HCI paper-writing genre. Kostakos, drawing out the “skewed ways in which our community values research,” pins the absence of HCI’s engagement with scientific qualities (i.e., generalization, replication, and accumulation) on this drive to embed implications for design in HCI papers.

While I have sympathy for this argument, I also think a reassessment has to be made as to why implications for design have even emerged in the first place. What work is being done in writing implications for design? I would argue that they may be read as implicit recognition that gestures toward being an interdiscipline are normative in HCI. Hence, it would be a mistake to assume that we need not be accountable to the “interdisciplinary other” in HCI. At the same time, implications for design are also an effort to answer the question “why should I (the reader) care about this work?”—a question that is in no way unique to HCI. Thus, although they are often deficient in their form, as rightly pointed out by Kostakos and others, implications sections nevertheless can be an attempt to meet others at the interface of disciplines.

In closing, I think if we take Rogers’s and Blackwell’s challenge of being an interdiscipline seriously, we could be looking for two characteristic kinds of rigor in HCI research work.

First, I feel we should expect a rigor commensurate with the research’s own disciplinary wellsprings, whether this is (cognitive, social, etc.) psychology, anthropology, software engineering, or, more recently, the designerly disciplines. Rare examples of such internal rigor being held to account are found in the “damaged merchandise” [10] and “ethnography considered harmful” [11] debates. What this means is that the adoption of materials, approaches, and perspectives from disciplines that contribute to HCI’s interdisciplinary interface should not result in lax implementations of those imports within the HCI community. The “magpie-ism” of HCI research is a double-edged sword: It increases vigor and research creativity, yet often does violence to the origins of imported approaches and concepts. And without specialist attention, weak strains are sustained and incubated within HCI; the controversies outlined above are manifestations of this problem.

Second, we should expect a rigor in HCI research’s engagement with the very notion of being an interdiscipline. The tenor of implications for design sections is often problematic, a point picked up by Kostakos. Instead, I would suggest that we should perhaps start considering implications for HCI, rather than implications for design, as better suited for working at the interface of disciplines.

Acknowledgments

This work is supported by EPSRC (EP/K025848/1). This article is based on a longer blogpost: http://tinyurl.com/locating-the-big-hole

References

1. Kostakos, V. The big hole in HCI research. Interactions 22, 2 (2015), 48–51.

2. Co-word analysis was popularized by Michel Callon and colleagues in the 1980s to develop scientometric analyses of the natural sciences. Co-word analysis examines how frequently pairs of words or phrases co-occur within a given text.

3. Liu, Y., Goncalves, J., Ferreira, D., Xiao, B., Hosio, S., and Kostakos, V. CHI 1994-2013: Mapping two decades of intellectual progress through co-word analysis. Proc. CHI’14. ACM Press, 2014.

4. My use of this label does not imply a unity to the “scientific program.” For instance, there is a difference between advocating science in HCI and HCI as science.

5. Reeves, S. Human-computer interaction as science. Proc. 5th Decennial Aarhus Conference (Critical Alternatives). ACM, New York, 2015.

6. Whittaker, S., Terveen, L., and Nardi, B.A. Let’s stop pushing the envelope and start addressing it: A reference task agenda for HCI. Human Computer Interaction 15, 2 (Sept. 2000), 75–106.

7. Weingart, P. A short history of knowledge formations. In The Oxford Handbook of Interdisciplinarity. J.T. Klein and C. Mitcham, eds. Oxford Univ. Press, Oxford, 2010, 3–14.

8. Rogers, Y. HCI theory: Classical, modern and contemporary. Synthesis Lectures on Human-Centered Informatics 5, 2 (2012), 1–129.

9. Blackwell, A.F. HCI as an inter-discipline. Proc. CHI’15 Extended Abstracts. ACM, New York, 2015, 503–516.

10. Gray, W.D. and Salzman, M.C. Damaged merchandise? A review of experiments that compare usability evaluation methods. Human Computer Interaction 13, 3 (Sept. 1998), 203–261.

11. Crabtree, A., Rodden, T., Tolmie, P., and Button, G. Ethnography considered harmful. Proc. CHI’09. ACM, New York, 2009, 879–888.

Author

Stuart Reeves researches the design and deployment of interactive technologies for a range of cultural, performance, and public settings. He holds an EPSRC Fellowship and is investigating the relationships between theory and practice in HCI and UX. He is also author of the book Designing Interfaces in Public Settings. stuart@tropic.org.uk

Figures

Figure 1. Mapping disciplinary knowledge production as quadrants. (Diagram from [1])

Copyright held by author. Publication rights licensed to ACM.

