Features

XXXI.1 January - February 2024

Evaluating Interpretive Research in HCI


Authors:
Robert Soden, Austin Toombs, Michaelanne Thomas


Over the past few review cycles at CHI, CSCW, and other HCI venues, there has been a significant increase in the demands that reviewers and associate chairs (ACs) place on methods reporting for qualitative research. Some of this is appropriate, and to be expected as the community continues to grow and our engagement with a broad range of disciplinary and theoretical perspectives matures. However, this has not come without problems. In this article, we highlight what appears to be a growing misunderstanding of interpretive research practices. We discuss how to evaluate their methods and claims, and the vital contributions they make to HCI research and practice.


As an example of recent changes in reviewing practice, we, along with many of our colleagues, have noticed that now nearly every paper that draws on qualitative methods is asked to include information such as the demographics of research participants. There are some fairly obvious problems with these kinds of blanket requests. For starters, uncritical reliance on the demographic characteristics of study participants to evaluate research may serve to essentialize socially constructed categories like gender and race, or reinforce stereotypes such as the idea that older people are less familiar with emerging technologies. Often the claims being made are not differentiated by, or bear no actual relation to, any of the requested information about participants. This forces authors to engage in a sort of methods theater, whereby shallow performative gestures toward "rigor" are substituted for careful and considered drawing together of data, theory, and argument.

Insights

HCI reviewers, often trained in positivist traditions, struggle to evaluate interpretive research, leading to inappropriate recommendations.
There are multiple varieties of interpretive research, each of which demands careful consideration on its own merits.
We can do more to educate our community on the unique demands and contributions of interpretive methods.

There is, however, a deeper challenge here that we want to discuss. We are concerned that increasingly all qualitative research, including that which draws on interpretive traditions of research design, is being evaluated from the perspective of positivism. Interpretive research is a distinctly different approach, and deals in types of research practices, epistemological theory, and contributions that a reviewer assuming a positivist orientation may find strange, off-putting, or incoherent. These misunderstandings may appear during the reviewing process as inappropriate demands for participant demographics, requests for participant IDs alongside individual quotes to link them to demographics, requirements to include codebooks and code counts in the supplementary materials, or requests to explain how research conducted in the Global South or with marginalized communities "generalizes" or applies to other settings. For those familiar with interpretive research, this is akin to being asked to include a positionality statement within a large-N quantitative study. It simply does not make sense within the context of the specific method.

This is not just a problem of the occasional Reviewer 2. In recent cycles of CHI and CSCW, numerous ACs have begun demanding such things, and these errors often slip past editors and others in the process whose role it is to catch them. Authors, especially junior scholars, including Ph.D. students, postdocs, and pre-tenure faculty who face career pressures to get their work published, often acquiesce to these demands rather than fight them, even though they know that these kinds of reporting practices aren't appropriate to the epistemological traditions the work is drawing from. This perpetuates the problem by normalizing reporting practices that are meaningless, even counterproductive, within interpretive approaches to research.


As a community of scholars, we are, in effect, creating a new and fundamentally incoherent set of research practices. This is changing expectations about how critical, qualitative, and design research is done in HCI and what we are teaching our students, and it is probably further confusing our peers who were not trained in interpretive methods in the first place. On the whole, it is making our papers worse. These changes are symptomatic of a pervasive and unfortunate lack of understanding of how interpretive research works, what this tradition offers to HCI research, and how to evaluate it, a lack of understanding that likely affects other aspects of paper reviewing, citation, and even hiring processes in HCI faculty searches. If it is left unaddressed, we may be robbing HCI of the very forms of research and insights that are arguably best suited to help address many of our most pressing concerns around the relationship between technology and society.

Of course, reviewers are not entirely to blame here. Researchers in HCI draw on a wide range of traditions and perspectives in their work, making it difficult for any single reviewer, associate chair, or editor to engage knowledgeably with all of the possible forms of research they may encounter during peer review. As authors working in interpretive traditions, we could and probably should be doing more to articulate our own positions and approaches. In the next section we offer a general overview of interpretive research. We then highlight some common mistakes in reviewing this type of work and offer guidance on how to avoid them.

What Is Interpretive Research and How Does It Differ from Positivism?

To be sure, not all qualitative research is in the interpretive tradition. But much of it is, and it draws broadly on a wide range of humanist, sociological, and anthropological traditions. Though it is a wide-ranging and diverse category of approaches, within HCI interpretivist research is frequently associated with ethnography. Anthropologist Clifford Geertz is widely considered one of the developers of contemporary ethnography, and of interpretive research more broadly. His articulation of the concept of "thick description" is instructive here. Breaking with prior assumptions about ethnography as a tool for producing systematic or universal accounts of culture, Geertz instead sought a form of anthropology that served as "an enlargement of the universe of human discourse" [1]. By this he meant detailed, particular, and specific—thick—accounts of social life that could bring previously disconnected communities into legibility and understanding. Such accounts were necessarily incomplete and resisted any effort at generalization that would rob them of their thickness.

Though not inevitable, it's understandable why ethnography gained acceptance in HCI, given our efforts to understand and develop "empathy" with the wide variety of people and social contexts our work touches. Interpretive research has certainly proved to be useful in design settings, but the richly detailed and context-specific character of this approach has also been valuable for thinking through many more fundamental concerns that arise when humans and computers interact [2]. Beyond ethnography, interpretive approaches have also been used within HCI by practitioners of ethnomethodology and grounded theory. Virginia Braun and Victoria Clarke's reflexive thematic analysis, a toolset for qualitative data analysis that is being increasingly taken up by qualitative researchers in many disciplines owing to its flexibility, pragmatism, and extensive documentation, is also aligned with these commitments [3]. So are the contributions made by speculative and critical design, some varieties of historicism, and other forms of work that draw more on HCI's connections to the arts and humanities than its more commonly noted roots in engineering and psychology.

What these different research traditions share are a set of epistemological commitments quite separate from how positivists think about topics such as universality, generalizability, and rigor. Instead, we look for rich descriptions of participant experiences, along with the context specificity and depth associated with how participants interpret and understand those experiences. We often approach obtaining this data by thinking of the researcher as a kind of "human instrument" [4], leveraging their unique identities and experiences as well as their human ability to navigate the subtleties of a social setting and collective practices of meaning-making with their participants in a way that a strictly structured survey or other kind of tool may not. Within the frame of these commitments, codebooks, participant demographic tables, and other tools more characteristic of positivist research practice may not make much sense. In some cases they may communicate something valuable to the reader, but even this depends heavily on the specific ways in which they are introduced and the kinds of contributions a given paper is seeking to make. In the next section we offer some general guidance for how to approach reviewing interpretive research in HCI.

Mistakes in Reviewing Interpretive Research and Some Alternatives

To develop the material for this section, we reached out to the HCI community on social media and requested that colleagues who conduct qualitative research in critical and interpretive traditions share their experience with inappropriate requests from reviewers. Drawing on our own experience as well as responses to our call, we selected a number of examples that illustrate some of the most important differences between positivist and interpretive research. For each, we provide some context for why this request might be made in error, and offer alternative approaches for reviewers to consider. This list is not exhaustive, but instead is meant to shed further insight on some of the factors to consider when reviewing interpretive research.

"Please provide a table including demographic information of study participants." This request is perhaps the most common one we encounter, and many qualitative papers in CHI, CSCW, and other venues in recent years (even some of our own!) have included tables with participant age, gender, and other questionably relevant information. We suspect the source of this request is the reviewers' desire to ensure the findings of the paper are "generalizable," with study participants being to some degree representative of a broader population. This is wrong. As noted above, interpretive research does not seek to be generalizable in this way. Most authors of interpretive work will tell you that generalizability, in this sense, is neither possible nor desirable. In some cases, it may be important for authors to provide information about the background of the participants, but this should be in close relationship to the site of study, the phenomena in question, and the claims being made. Most of the time, careful narrative description of these aspects of the research will be preferable to a table that flattens participants' identities to a collection of superficial or easily reported traits.

"This study is not representative and/or the findings aren't generalizable because the authors only conducted X interviews." This is another problematic piece of feedback that authors of interpretive research may receive. There are several things that reviewers should consider if this question arises. Most importantly, what was the role of the interviews in arriving at the claims made in the paper? Did the authors collect other forms of data through participant observation or other means? What contexts, topics, or groups of people are the authors claiming that their findings are meaningful to? No forms of interpretive research prioritize generalizability. However, some use the concept of transferability to indicate what settings or contexts the findings of the paper would be expected to be relevant to. Here, reviewers can and should ask authors to be specific about the reach of their findings, and look for qualified statements like "users similar to those we interviewed would benefit…" or "platforms claiming to provide such services should."

"The study could not have achieved theoretical saturation with only X interviews or Y amount of time doing observation." Saturation is an important concept in interpretive research, and it comes in several varieties. Theoretical saturation is specific to grounded theory, an approach in which data collection, analysis, and sampling decisions are conducted iteratively. More generally, interpretive researchers achieve data saturation when additional participant observation in the field site, additional interviews, or other forms of new data collection are no longer yielding further insights into the phenomena in question. Clearly, this depends on the specific field site and phenomena in question; thus, "saturation" is not properly thought of as a kind of measurement or quantifiable metric. Therefore, it is likely disadvantageous to place strict requirements on the number of interviews needed for a project or paper to be acceptable.

"Include the interview script." Many forms of interviewing do not rely on a script. In semi-structured and unstructured interviews, predetermined interview questions are often used as prompts to help interviewers keep track of the most important topics they want to cover or to provide suggested phrasing to help elicit desired information. Importantly, these questions often change as the study progresses and interviewers learn how to ask them in ways their participants respond to best. As data collection progresses, interviewers often shift away from open-ended, exploratory questions and instead aim to probe or confirm details about recurring topics. Instead of asking for the interview script, ask authors what topics were covered in interviews and how these related to the overall goals of the research and the specific contributions of the paper.

"Provide codebooks that include code counts." Reviewers who request this forget that much of the analysis process that occurs in interpretive research happens as part of the writing process itself. Listings of codes, if they are used at all, frequently are not finalized; rather, they are intermediate products that are revised and discarded as authors make iterative passes through the data, write, and reorganize their themes. As with saturation during data collection, some approaches rely on code or thematic saturation when conducting analysis. Thematic saturation can be thought of as a measurement of the "breadth" of a project. When analyzing qualitative data, we stop coding for a particular instance in a dataset once we notice it occurring enough that we know it's not an anomaly. We then tend to code only instances that further add to the breadth or depth of the concept/phenomenon of concern. Simply counting the number of times codes appear in our data is therefore not meaningful at all. Instead, it may be helpful to provide a narrative description of the coding process, the kinds of codes used, and how they were then used in theme development.

"The Methods section should report interrater reliability scores." Here, reviewers seem to want to ensure some degree of replicability and consistency when qualitative data analysis is conducted by teams of researchers, as represented by high rates of agreement among coders. However, as noted above, in interpretive studies researchers draw on their individual perspectives, backgrounds, and positionalities as assets during data analysis. When multiple coders are involved, these divergent perspectives can be a strength, in that they offer fresh or novel insights into the topic of study. Keep in mind that the goal of theme development in interpretive research is not uniformity of coding but rather breadth, depth, and the team reaching some kind of consensus through discussion, so interrater reliability is often not a relevant metric [5]. Instead, data-analysis descriptions should explain how authors consulted others, using approaches as varied as peer debriefing, group discussions, or member checking; how they iterated on their claims throughout the process; and how they settled on their final arguments.

"Add participant IDs or pseudonyms with each quote." Reviewers seem to ask for this on the assumption that the inclusion of a research participant's own words in a paper is somehow evidentiary, or "proof" of data; on this view, quotes need to be evenly distributed across participants to justify authors' claims or to avoid cherry-picking. This is not the case in interpretive research. Instead, we often include quotes to add rich detail and understanding of the phenomena under study. Quotes can add depth and character to interpretive papers and facilitate "[capturing] participants in their own terms" [6], one of the primary goals of interpretive inquiry.

Where to Go from Here

It is not surprising that our research community is experiencing the problems we have described. HCI is a multidisciplinary and rapidly growing field. ACs and reviewers are expected to carefully evaluate many styles of research across a wide range of domains, theories, and methods. The particular epistemological commitments and styles of argumentation that characterize interpretive research are further from popular imaginaries of scientific method and often devalued in the contemporary academic environment, which places so much emphasis on quantification and "hard facts." Finally, it can be challenging, even for those of us who do interpretive research, to stay abreast of the many ongoing debates over method that are typical of any sophisticated and widely adopted epistemological tradition.

There is clearly much to be done. Along the way, we suggest that reviewers feel comfortable not commenting on issues related to methods that are outside their realm of expertise; there are likely many other aspects of a paper they can provide insights into. Appropriate, constructive feedback has the potential to improve the quality of interpretive research and, by extension, contribute to the development of HCI as a field. As authors, we can do a better job of communicating our approach in ways that help readers understand the unique contributions that interpretive work can make. As a community, we can do more to foster discussions about various styles of research and the specific considerations of method that arise, particularly with regard to how methods pair with the strengths of those research styles. This article is one part of a larger effort to do so. We hope it contributes to conversations about the important role that interpretive scholarship can play in the HCI community.

References

1. Geertz, C. Thick description: Toward an interpretive theory of culture. In The Cultural Geography Reader. Routledge, 2008, 41–51.

2. Dourish, P. Reading and interpreting ethnography. In Ways of Knowing in HCI. Springer, New York, 2014, 1–23.

3. Braun, V. and Clarke, V. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.

4. Lincoln, Y.S. and Guba, E.G. Naturalistic Inquiry. Sage, 1985.

5. McDonald, N., Schoenebeck, S., and Forte, A. Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proc. of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–23.

6. Lofland, J., Snow, D., Anderson, L., and Lofland, L.H. Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. Waveland Press, 2022.

Authors

Robert Soden is an assistant professor in computer science and the School of the Environment at the University of Toronto. His research draws on design, social sciences, and the humanities to evaluate and improve the information systems used to respond to environmental challenges such as disasters and climate change. [email protected]

Austin Toombs is an associate professor at Indiana University. He explores how digital tools affect our sense of community, identity, and connections with others. By seeking to deeply understand people's experiences through interpretivist and constructivist approaches, he sheds light on technology's role in shaping our communities and relationships. [email protected]

Michaelanne Thomas is an assistant professor in the School of Information at the University of Michigan, where she leads the Anthropology and Technology Lab using ethnographic methods to explore how communities leverage technology to address critical infrastructure gaps. Her research has earned recognition at CHI, CSCW, WWW, and 4S and has appeared in The Atlantic, New Scientist, CNN, Reuters, and Vice. [email protected]


Copyright held by authors. Publication rights licensed to ACM.

