Features

XXXII.1 January - February 2025

CHI’s Greatest Hits: Analyzing the 100 Most-Cited Papers in 43 Years of Research at ACM CHI


Authors:
Annika Kaltenhauser, Gian-Luca Savino, Nick von Felten, Johannes Schöning


The CHI conference is the premier venue for publishing research in human-computer interaction. As a result, CHI attracts a diverse and interdisciplinary group of scholars and practitioners, including those from computer science, psychology, design, engineering, and the social sciences. It is ranked as one of the top computer science conferences, with an h5-index of 129 and an h5-median of 183, and it has been collectively cited 996,352 times, according to Google Scholar. The impact of HCI research extends far beyond the academic world and citations, shaping how millions interact with new technologies and influencing the design of everyday digital experiences. The research has transformed user interfaces, making technology more accessible and intuitive for a global audience.

Insights

  • Two-thirds of the top 100 CHI papers have empirical results as part of their contribution.
  • While older CHI papers do not generally yield higher citation counts (especially 1982–89), papers from the past decade tend to be cited less, showing a downward trend.
  • None of the top 100 CHI papers received a Best Paper award, and only two received an honorable mention.

One of many examples is the development of the "direct manipulation" interface concept identified and named by Ben Shneiderman in the early 1980s [1]. This concept, which emphasizes the importance of graphical user interfaces that allow users to interact directly with on-screen objects (e.g., dragging and dropping files), has significantly influenced the design of modern operating systems, such as Windows and macOS, affecting how millions of people interact with computers daily. ACM SIGCHI also oversees 26 sponsored and cosponsored conferences. Against this backdrop, CHI and its community have produced a wealth of HCI research in the past four decades.


Given this large volume of publications, CHI and its knowledge production have also become the focus of research, as demonstrated by various meta-analyses and literature reviews. In this article, we critically reflect on the field of HCI by examining its research and reporting practices and its authorship and collaboration patterns, and by shedding light on the demographic and cultural diversity of empirical findings within the CHI community. Many researchers, reflecting on the impact of HCI and Interactions magazine, have long questioned what it means to affect technology, design, and human experience. Elizabeth Churchill's 2012 column "Impact!" (http://bit.ly/4f1rqit), in which she uses her childhood fascination with dinosaurs to unpack the word, provides a joyful example. She considers impact as both literal ("forceful contact" or collision) and metaphorical (a subtle yet transformative influence).

By examining CHI's 100 most-cited papers over the past 43 years, we aim to identify key trends and recurring themes, comprehend the pivotal shifts that have defined and reshaped HCI research evolution within the conference, and understand the characteristics that contribute to these papers being so frequently cited.

A Closer Look at Citations as a Measure of Impact

Citations provide a valuable perspective on the reach of research within and beyond the CHI community, serving as a measure of a paper's impact. This is highlighted in the Nature article "The Top 100 Papers," by Richard Van Noorden et al., which explores the most-cited papers from 1900 to 2014 [2]. It is important to acknowledge that using citations as an indicator of a paper's recognition in the scientific community has limitations. Some papers are cited as negative examples, and citation practices differ between HCI's subfields and adjacent fields of research (e.g., artificial intelligence, communication science, psychology, and social sciences). Our qualitative literature review was guided by a quantitative metric, which we used as a filter to identify relevant papers. We examined these papers' characteristics, such as types of contributions, methods, and topics, and used this analysis as grounds for further discussion.

We created a dataset of the 100 most-cited CHI full papers and qualitatively analyzed them in a systematic literature review. By synthesizing contribution types, methods, topics, and technology types and identifying other characteristics, the analysis revealed that most of these papers made empirical contributions, with two-thirds presenting empirical results often complemented with artifacts or theoretical insights. Key themes include the dominance of implementation as a research method, a balanced use of qualitative and quantitative approaches, and the introduction of influential theories and frameworks that have opened new research directions, such as feminist HCI. This literature review provides a comprehensive overview of the developments that have shaped the CHI community over the past 43 years.

For the coding process, the first author created an initial set of codes based on our research questions. Drawing on prior research and the submission categories of the Precision Conference system, all authors reviewed the initial codebook, and three authors independently coded a small sample, noting key insights. After two rounds of discussion, the team refined the code categories, ultimately adopting a deductive approach for contribution types and an open coding approach for the remaining categories, with ongoing notes to capture additional insights. This iterative process led to the final definitions and detailed approaches for coding the seven overarching categories as follows:

  • Contribution type: Refers to the specific category of contribution made by a publication (e.g., theoretical, empirical, or artifact, as described by Jacob Wobbrock and Julie Kientz [3]).
  • Topic: Classifies the primary research topics.
  • Users: Refers to empirical work that presents findings on a specific user population or artifacts that are designed for a specific user population (e.g., office workers, designers and developers, individuals with disabilities).
  • Technology type: Categorizes the type of technology subject to investigation, if any.
  • Environment: Describes the setting or context in which the technology or study is applied.
  • Research methods: Refers to the methods used. Coded only for studies that the authors explicitly reported conducting themselves, as opposed to studies described in prior work.
  • Innovation: Identifies and classifies papers' innovative, novel, or groundbreaking aspects.

Trends and Characteristics in Top-Cited Papers

To provide context for some of our findings, we briefly describe the general numbers of CHI proceedings from 1982 to 2024. Figure 1 presents the average citation counts per year, suggesting that while older papers do not generally yield higher citation counts (especially 1982–89), newer works from the past decade tend to be cited less, showing a downward trend. In 2009, Christoph Bartneck and Jun Hu [4] concluded that the mean citation count in the main proceedings (1981–2008) was 40.17. We, however, found a mean citation count of 88.25 for all papers in our corpus (1982–2024), not only the highly cited full papers. More overviews of the number of research articles published in CHI proceedings until 2024 can be found in this analysis, although it is worth noting that the visualizations are based on a dataset slightly different from ours (1981–2024).
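The per-year averages underlying Figure 1 amount to a simple grouping of papers by publication year. A minimal sketch in Python, using a small hypothetical list of (year, citation_count) records rather than the authors' actual dataset:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (publication_year, citation_count) records; the real analysis
# covers the full corpus of CHI full papers, 1982-2024.
papers = [(1985, 120), (1985, 40), (2010, 300), (2010, 90), (2022, 15)]

# Group citation counts by year, then average within each year.
by_year = defaultdict(list)
for year, cites in papers:
    by_year[year].append(cites)

avg_per_year = {year: mean(counts) for year, counts in sorted(by_year.items())}
overall_mean = mean(cites for _, cites in papers)

print(avg_per_year)   # per-year averages, as plotted in Figure 1
print(overall_mean)   # corpus-wide mean citation count
```

Averaging within years (rather than pooling all papers) is what makes the downward trend of the recent decade visible despite the growing number of publications per year.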

Figure 1. Average citation counts per paper per year for CHI full papers, 1982–2024.

The distribution of the top 100 cited papers over time (Figure 2) reveals an increase in top-cited works from the early 1980s, peaking in 2010, with a significant concentration of highly cited papers emerging from 1991 to 2011. In recent years, there has been a decline in the number of highly cited papers, suggesting that the impact of newly published papers has not yet reached the levels seen in earlier decades. The top 100 CHI papers were cited 157,693 times collectively (mean: 1,576.93; SD: 918.01), making up 15 percent of all citations for full papers published at CHI. In contrast, the top 100 papers made up only 0.9 percent of all full papers. The most-cited paper in our corpus was cited 6,019 times.
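The reported summary statistics can be checked with back-of-the-envelope arithmetic. A sketch using only the totals stated above; the 15 percent share is as reported, while the implied overall citation total is our own derivation:

```python
# Figures taken from the article's reported totals.
top100_total = 157_693          # collective citations of the top 100 papers
top100_count = 100

top100_mean = top100_total / top100_count   # reported mean: 1,576.93

# The top 100 account for about 15 percent of all full-paper citations,
# so the implied total across all CHI full papers is roughly:
implied_all_citations = top100_total / 0.15

print(top100_mean)
print(round(implied_all_citations))
```

The implied total of roughly one million full-paper citations is consistent in magnitude with the Google Scholar figure quoted in the introduction.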

Figure 2. Counts of papers per year for our corpus of the 100 most-cited CHI papers, 1982–2024.

A total of 256 authors contributed to the corpus. Among them are 51 members of the SIGCHI Academy, who contributed to 70 of the 100 most-cited papers, indicating their significant presence and influential role. Our analysis of the number of authors per paper shows a mean of 2.89 authors (SD: 2.3) per paper. Figure 3 illustrates the variation, with the number of authors per paper ranging from one to 13.

Figure 3. Counts of authors per paper for our corpus of the 100 most-cited CHI papers, 1982–2024.

Fifty-one articles were authored or coauthored by female researchers, including 26 publications with female first authors. For two publications, we were unable to determine the gender of all authors. No contributions were identified from authors of other gender identities. By the 2000s, the frequency of female-authored or female-coauthored papers began to rise, with notable increases in female lead authorship. This trend continued, with contributions from female authors and coauthors, including female-led papers, appearing more consistently.

Among the papers in our corpus, 52 were affiliated exclusively with academic institutions, 26 originated from industry institutions, and 22 were affiliated with both industry and academia. For example, Xerox PARC contributed 12 papers between 1982 and 2014, Bellcore produced seven between 1985 and 2000, and Microsoft Research contributed seven from 1999 to 2019. Some of these contributions were made in collaboration with academic partners.

None of the top 100 papers received a Best Paper award, and only two received an honorable mention, both of which were recent papers (2019 and 2020) on AI, suggesting quickly increasing interest in this area. Nevertheless, the fact that only two papers in our dataset received an award aligns with the results of Bartneck and Hu's study [4], which showed that the citation counts of non-nominated CHI papers did not significantly differ from those of nominated or award-winning papers, indicating that awards do not always correlate with long-term academic impact or influence.

Contribution type. Two-thirds of the top 100 papers (66) had empirical results as part of their contribution. Of those 66, 18 had empirical results as their sole contribution, while 22 had empirical results paired with an artifact contribution and 20 with a theoretical contribution. With 38 occurrences, artifact was the second most common contribution, followed by theoretical contributions (37). Methodological contributions had 24 occurrences, opinions five, and surveys four. Interestingly, no dataset contribution made it into the top 100 cited CHI papers.

Topic. Among the top 100 papers, we identified 12 topic clusters, with many of these topics highlighting specific domains within the CHI community that have often sparked a new research field or have become their own conferences. We found the most prominent cluster among papers written about information visualization and information retrieval (15). These publications represent papers presenting novel visualization techniques and were mostly published in the 1980s and 1990s—the most recent paper in this subcategory was published in 2005. Within this cluster, we observed a progression of works, showcasing a continuous evolution of techniques that improve how users interact with and understand complex datasets.

The second most frequent topic was social media (13). As Liu et al. [5] reported in their analysis describing the thematic evolution of HCI, social media and online communities were "a completely new research theme" that only came up between 2004 and 2013. Works in our corpus on this topic were mostly published around 2010, with the rise of Facebook and other social networking sites. The papers looked into why people use social networking sites and contributed analyses of these newly formed networks.

With 10 occurrences, evaluation methods was the third most frequent topic. As expected, this topic is not tied to a certain time, with publications ranging from 1982 to 2010. Within this theme, we found works that introduced a new evaluation method or adapted it to the HCI domain, such as the heuristic evaluation by Jakob Nielsen and Rolf Molich [6]. Additional clusters with fewer than 10 occurrences included topics such as crowdsourcing and human computation, design research, methods and practices, input techniques and touch/gestures, tangible user interfaces, self-tracking and personal informatics, user experience, privacy and security, collaboration and computer-supported cooperative work, and context-aware computing.

Users. While most papers did not specify a particular user group (65), the largest specific group we identified within the corpus was social media and online platform users (10). Similar to the topic of social media and online communities, these papers were mostly published between 2000 and 2011. They investigated the relationship between the use of social networking sites and mental well-being, cooperation among users of online communities, and even the social dynamics of online game communities. The second-largest user group was information and office workers (7), with papers ranging from 1991 to 2004. These papers focused on establishing and improving early graphical user interfaces through interaction techniques, such as direct manipulation, using tools like email, and the design of hardware and physical space for office workers. Five papers explicitly focused on designers, developers, and researchers, and three on quantified-selfers.

Technology type. The papers in our dataset cover a wide range of technologies. Forty papers, however, did not mention a particular technology and thus applied generally to technology systems or were theory focused. The largest cluster we identified was desktop and laptop computers (10). These papers ranged from 1986 to 2004 and focused largely on the novelty of the devices. While we now distinguish mobile and stationary systems, the papers were mostly published before mobile computing became a dominant topic. We identified two other technology clusters: context-aware systems (three) and tangible user interfaces (three).

Environment. CHI publications occasionally focus on a specific environment, either serving as the context for the study itself or having implications that are specifically relevant to a certain environment. We found that the majority of papers did not specify a particular environment (73). Eleven papers focused on online platforms and communities. In addition to a few papers focused on social networking platforms, we found ones that studied various online platforms, including crowdworking platforms like Mechanical Turk, and investigated how people work with data-driven platforms like Uber. Another notable environment is the office environment (five). Contributions were published between 1992 and 2004; thus, their focus is on the early computer-supported office environments. The last cluster comprised papers presenting research in the educational and academic environments (three).

Research methods. Today, the HCI field is known for its mixed-methods approach. As an interdisciplinary field by nature, HCI researchers use the full arsenal of methods from multiple fields, including psychology, computer science, social studies, ethnography, and many more. It comes as no surprise, then, that the top 100 CHI papers feature 30 different methods, some of which were frequently combined in a single paper. Many of the methodological approaches within these papers could be clustered into a collection of methods.

One-third (33) of all papers reported on studies that used a prototyping/implementation method, meaning the authors built a functioning prototype of a system that was used either as the subject of the study itself or as an artifact that users interacted with during a study. This is followed by case studies, which were used as a method in 20 papers. Qualitative and quantitative research methods were used in equal measure. It is important to differentiate between measurement and analysis: with questionnaires/surveys (19) and interviews (18), we found two predominantly qualitative measures, compared with predominantly quantitative approaches such as user and usability studies (15) and instrumentation and usage logs (13). We also discovered that qualitative analysis (14) and quantitative analysis (13) occur almost the same number of times. Methods occurring fewer than 10 times are not listed.


Citations capture the long-term influence of research, which may take years to emerge, or reflect the popularity of a topic rather than its true quality or impact.


Innovation. In our corpus, many papers were particularly innovative in certain areas, sometimes introducing a novel interaction technique (25), empirical insights (23), theory/framework (18), or methodological innovation (17). Within the contributions that introduced methodological innovations, we observed different focuses, including method development, methodological enhancements, validation of methods, and platform utilization. The same goes for contributions that introduced novel theories or frameworks. Theoretical extensions and groundbreaking perspectives aimed at advancing theoretical understanding in HCI are prevalent. For instance, the work on computers as social actors [7] profoundly affected how researchers viewed interactions between humans and computers.

How Do We Want to Measure Research Impact?

Citations often come with a delay, as confirmed by our analysis, making them more suitable for retrospective assessments. They capture the long-term influence of research, which may take years to emerge, or reflect the popularity of a topic rather than its true quality or impact. Initiatives such as the San Francisco Declaration on Research Assessment (DORA) aim to take a broader approach to evaluating research impact. DORA seeks to improve research assessment by challenging the reliance on metrics such as the journal impact factor as a measure of quality. It advocates evaluating research based on its intrinsic merit rather than the reputation of the journal in which it is published.

Furthermore, DORA discourages using impact factors in funding, hiring, and promotion decisions. It also supports a more flexible use of online publication formats by recommending the relaxation of restrictions on word count, citations, and illustrations in research articles—a practice already adopted by the CHI community. As a community, CHI can also intensify the use of alternative metrics, which consider user engagement on the Internet, including mentions on blog posts or social media platforms. Consequently, altmetrics can provide a more immediate measure of a publication's popularity compared to the slower accumulation of citation counts.

Implications for CHI and Community Practices

CHI has become the premier venue for HCI research, but its roots lie in more-interactive meetings centered on innovations in human-technology interaction. While senior members may well understand this historical context, it is crucial to acknowledge that these early interactive sessions laid the groundwork for CHI's transformation into one of the field's most significant events. With the growing number of submissions, however, focusing on innovation and impact is increasingly challenging.

Many submissions are incremental, making it harder for transformative ideas to gain attention. This issue is linked to maintaining review quality, as the high volume of submissions makes it difficult to apply consistent evaluation criteria across different subcommittees. Controversial or groundbreaking ideas face hurdles, particularly as Program Committee meetings become more dispersed (and also virtual). Additionally, the Work in Progress and Interactivity tracks, which could serve as outlets for innovative research, are often filled with papers rejected from the main paper track. The logistics of organizing demo sessions, especially the costs, also pose challenges.

One obvious potential solution is to focus more on significant, groundbreaking research, admittedly a tall order. This could involve creating a dedicated track for highly innovative and relevant work. Such a track would align with the original vision of CHI while also catering to the evolving needs of the HCI community. The SIGCHI community and the SIGCHI Academy have the expertise needed to review and engage with such work and this could provide a valuable opportunity to reinvigorate the conference. We believe that this direction is worth exploring to maintain CHI's leadership in the field.

Where to Go from Here

As noted by Paul Dourish, scientific disciplines, including HCI, are normative enterprises, and peer review plays a significant role in shaping the field's core values and methodologies. Dourish also notes that the peer review process in HCI encourages adherence to established norms and conventions, reinforcing a shared set of values and approaches among researchers [8]. This dynamic is particularly evident as HCI has matured and established its disciplinary identity. In high-status venues like CHI, peer review produces disciplinary genres, setting valid and valuable research standards.

The rigidity of these standards, however, can pose a challenge for researchers working on pioneering or unconventional ideas. Even as requirements such as mandatory usability studies maintain rigor, they can hinder innovation in the review process, which may resist novel contributions that deviate from established empirical frameworks, especially during periods of rapid disciplinary consolidation.

CHI's historical emphasis on empirical studies, especially those grounded in quantitative methods, has made it more difficult to recognize qualitative work. Although qualitative research has gained acceptance over time, these difficulties persist, reflecting the ongoing tension between methodological diversity and the normative expectations of the field. Despite these challenges identified in our corpus of the most-cited papers, we observed a notable shift toward more qualitative methods and approaches. This shift underscores the growing relevance and recognition of qualitative research within the CHI community, as evidenced by the frequent citation of these works.

As CHI research evolves, adapting to new challenges and insights, we look forward to engaging with the community to shape its future direction at CHI 2025 in Yokohama.

References

1. Shneiderman, B. Direct manipulation: A step beyond programming languages. Computer 16, 8 (1983), 57–69.

2. Van Noorden, R., Maher, B., and Nuzzo, R. The top 100 papers. Nature 514, 7524 (2014), 550–53.

3. Wobbrock, J.O. and Kientz, J. A. Research contributions in human-computer interaction. Interactions 23, 3 (2016), 38–44.

4. Bartneck, C. and Hu, J. Scientometric analysis of the CHI proceedings. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2009, 699–708.

5. Liu, Y., Goncalves, J., Ferreira, D., Xiao, B., Hosio, S., and Kostakos, V. CHI 1994–2013: Mapping two decades of intellectual progress through co-word analysis. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2014, 3553–62.

6. Nielsen, J. and Molich, R. Heuristic evaluation of user interfaces. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1990, 249–56.

7. Nass, C., Steuer, J., and Tauber, E.R. Computers are social actors. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 1994, 72–78.

8. Dourish, P. Implications for design. Proc. of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, 2006, 541–50.

Authors

Annika Kaltenhauser is a Ph.D. candidate in HCI at the University of St. Gallen in Switzerland. [email protected]

Gian-Luca Savino is a senior researcher in the HCI Lab at the University of St. Gallen. His work explores the societal and personal impacts of mobile navigation technology, examining how it shapes individual behavior and broader social dynamics. [email protected]

Nick von Felten is a psychologist and doctoral researcher in HCI at the University of St. Gallen. His research explores human-centered AI, questionnaire design, user experience, and meta science to advance understanding of how people interact with and experience AI systems. [email protected]

Johannes Schöning is a professor of HCI at the University of St. Gallen. He aims to empower individuals and communities with the information they need to make better data-driven decisions by developing novel user interfaces with them. [email protected]


Copyright 2025 held by owners/authors

