Authors: Jonas Oppenlaender
Posted: Thu, April 10, 2025 - 11:51:00
Many scientists assume that public trust in science is a given and will remain stable. Funding institutions, even amid fierce competition in academia, still keep the research engine running. The incentives in academia, however, do not always align with funding priorities: funding is awarded primarily for innovation and the renewal of science, while careers are built on publication counts. The prevailing “publish or perish” culture does not just motivate people to produce more research; it also encourages some to take shortcuts, sometimes crossing ethical lines. Paper mills have sprung up to serve those willing to pay for fake data and fraudulent studies [1], and the rise of generative AI makes such misconduct easier than ever. At the same time, science denialism continues to grow, making it all the more important to uphold both rigor and transparency.
We should remind ourselves that public trust in science is not unbreakable; it can crumble. As Marc Edwards and Siddhartha Roy warn, “if a critical mass of scientists become untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity” [2].
We may be far from this tipping point, but it is still pertinent to consider how we can strengthen the current landscape. HCI should be at the forefront of this effort. The field is increasingly centered on how humans interact with AI, a timely focus with the potential to drive meaningful policy recommendations. Yet HCI remains a relatively “soft” discipline. Virginia Braun and Victoria Clarke’s thematic analysis, for instance, is now the most widely used qualitative method in HCI [3]. While valuable, this qualitative emphasis sometimes opens the door for authors to make unsubstantiated claims.
Rigor is lacking in other areas of HCI as well. Literature reviews, for example, often lack transparency: it is not uncommon to encounter a “systematic literature review” reported in an HCI research article with few methodological details provided. Standards are also missing in emerging areas such as prompt engineering, where practices are still evolving. And small sample sizes (N < 12) are common in HCI studies, yet results are frequently generalized to larger populations.
To address these potential risks to public trust in HCI research, or ideally to prevent them from arising in the first place, the next logical step for HCI is the multilaboratory experiment. These are studies conducted by multiple independent labs, often spanning different institutions and even countries. By designing experiments with common procedures that can be run and verified across diverse settings, these collaborative studies bring a much higher level of rigor and reproducibility to HCI research. Multilab studies help ensure that findings are not just anomalies or artifacts of one lab’s methods, but instead stand up to scrutiny from a broader scientific community.
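To make the idea of a common, verifiable procedure concrete, here is a minimal sketch in Python of what a shared, machine-readable study protocol might look like. Everything in it, from the hypothetical StudyProtocol class to the fingerprint check, is an illustrative assumption rather than an existing HCI framework.

```python
# A hypothetical shared protocol that every participating lab runs verbatim.
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class StudyProtocol:
    study_id: str      # identifier agreed on by all labs (made up here)
    conditions: tuple  # experimental conditions, identical in every lab
    n_per_lab: int     # preregistered per-lab sample size
    analysis: str      # analysis plan, fixed before data collection

    def fingerprint(self) -> str:
        """Short hash of the protocol so labs can verify they ran the same study."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

protocol = StudyProtocol(
    study_id="multilab-hci-demo",
    conditions=("baseline_ui", "adaptive_ui"),
    n_per_lab=40,
    analysis="two-sample t-test on task completion time, alpha = 0.05",
)

# Each lab publishes this fingerprint alongside its data, so silent
# deviations from the shared procedure become detectable.
print(f"Protocol fingerprint: {protocol.fingerprint()}")
```

Committing to one fingerprinted protocol before data collection is what allows results from different labs to be compared and verified on an equal footing.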
Multilab experiments have made a significant impact in other disciplines. In psychology, for example, Martin Hagger and colleagues conducted a landmark multilab, preregistered replication of the ego-depletion effect, a theory suggesting that self-control is a finite resource that can be “used up” [4]. Their findings, drawn from contributions across many labs, cast doubt on the robustness of this effect and sparked widespread discussions about reproducibility in psychology. This type of collaborative research has proved invaluable for verifying—or challenging—previous findings.
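A core mechanism behind such multilab replications is pooling the effect estimates reported by each lab. The sketch below shows the simplest form of that idea, inverse-variance (fixed-effect) weighting; the per-lab numbers are made up purely for illustration and are not taken from the Hagger et al. study.

```python
# Pool lab-level effect estimates by inverse-variance weighting:
# labs with more precise estimates (smaller standard errors) count more.
import math

# (effect size d, standard error) from each hypothetical lab
lab_estimates = [(0.10, 0.18), (-0.05, 0.20), (0.22, 0.15), (0.03, 0.17)]

weights = [1 / se**2 for _, se in lab_estimates]
pooled_d = sum(w * d for (d, _), w in zip(lab_estimates, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled_d - 1.96 * pooled_se, pooled_d + 1.96 * pooled_se
print(f"Pooled effect: d = {pooled_d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

When the pooled confidence interval spans zero, as it does for these invented numbers, the multilab evidence fails to support the original effect, which is exactly the kind of conclusion no single lab could reach on its own.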
In physics, large-scale, multilab experiments are even more routine. Projects such as those at CERN require collaborations that span continents, with research teams contributing resources and expertise to answer fundamental questions about the universe. These collaborations also bring transparency, as methodologies and data are often shared across institutions, creating a high level of trust in the findings.
Multilaboratory experiments offer several compelling benefits for HCI. First, by pooling participants across labs, multilab studies can achieve larger sample sizes, which in turn improve statistical power: the ability to detect effects that are real but modest. With larger, more diverse datasets, findings are not only more robust but also more representative, leading to higher trust in results.
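The gain in power is easy to quantify. Below is a back-of-the-envelope sketch using a normal approximation to the two-sample t-test; the effect size, per-lab sample size, and number of labs are illustrative assumptions, not figures from any real study.

```python
# Approximate power of a two-sided two-sample t-test:
# power ~= 1 - Phi(z_crit - d * sqrt(n / 2)), with n participants per group.
from scipy.stats import norm

def approx_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

effect_size = 0.4   # a modest effect (Cohen's d), assumed for illustration
n_single_lab = 12   # per-group N typical of a small HCI study
n_labs = 10         # hypothetical number of collaborating labs

print(f"One lab (n = {n_single_lab}): "
      f"power = {approx_power(effect_size, n_single_lab):.2f}")
print(f"{n_labs} labs (n = {n_single_lab * n_labs}): "
      f"power = {approx_power(effect_size, n_single_lab * n_labs):.2f}")
```

Under these assumptions, a single small lab would detect the effect less than one time in five, while ten pooled labs exceed the conventional 80 percent power threshold.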
Multilaboratory experiments also have the potential to expand HCI’s reach beyond its traditional boundaries. By involving diverse labs, researchers can tailor studies to specific cultural or technological contexts, capturing insights that a single-lab study might miss. This makes findings not only more reliable but also more inclusive, providing a fuller picture of how HCI systems perform across different user groups, devices, and environments.
Moreover, multilab research fosters interdisciplinary work, inviting collaborations with fields such as psychology, data science, machine learning, and engineering. These interactions don’t just enrich HCI research—they help create tools and frameworks that can be shared across disciplines, strengthening HCI’s role within the broader scientific community.
Multilaboratory studies can also serve as training grounds for early-career researchers, who benefit from exposure to diverse methodologies, collaboration across different academic cultures, and learning about best practices in research transparency. This builds a foundation for the next generation of HCI researchers who are skilled in both collaborative work and rigorous experimentation.
Involving multiple independent parties also deters data falsification: because each lab’s findings can be checked against those of the others within the broader study, fabricated results are likely to stand out. This shared responsibility adds a layer of accountability, making it more difficult for any one party to manipulate outcomes undetected.
Additionally, multilab experiments align with the growing trend toward collaboration in research. As science becomes more interdisciplinary and collaborative, multilab studies help HCI researchers break down institutional silos and foster open knowledge-sharing across the field. This creates a more resilient research environment, where findings are both stronger and more transparent. Finally, multilaboratory experiments can push the boundaries of HCI, creating opportunities to test theories and systems in varied settings that no single lab could cover alone.
HCI already has a foundation to build on when it comes to multilab research. The RepliCHI initiative [5], launched in response to concerns about reproducibility in HCI, encouraged replication studies within the community. This effort emphasized the importance of validating findings across different labs, laying the groundwork for collaborative replication efforts. A few HCI studies have also been run across countries to compare findings in different cultural settings; these projects are multilab by nature, involving several institutions. Overall, however, multilab studies remain rare in HCI.
In an era where trust in science can no longer be taken for granted and science denialism is growing, the move toward multilaboratory experiments represents an essential step forward for HCI. By embracing this collaborative approach, HCI researchers can address many of the challenges we face today—ensuring transparency, building on rigorous methodologies, and fostering trust in our findings.
Multilaboratory studies offer a way to transcend the limitations of single-lab experiments, such as small sample sizes and narrow contexts, which often weaken the validity of our conclusions. They allow us to test HCI principles and systems across different environments, cultures, and user groups, paving the way for findings that are more generalizable and impactful. This is critical for areas such as human-AI interaction, where broader validation of findings is needed to guide ethical and effective design choices and policies.
This approach also signals a shift in how HCI sees itself. Multilab experiments are common in fields that have historically led the way in scientific rigor, and adopting them gives HCI a chance to contribute to this movement. As we bridge the gap between “soft” and “hard” scientific practices, we can position HCI as a leader in methodological rigor, ultimately enhancing our credibility and influence in the research community.
To realize this vision, HCI will need to make multilab collaborations more accessible. This may involve new funding models, open data initiatives, and development of shared frameworks and procedures that make it easier for labs worldwide to collaborate on large-scale studies. But the rewards—more robust, trustworthy, and impactful research—are well worth it. In the end, multilaboratory experiments are not just a method; they’re a statement. They show that HCI is ready to tackle its challenges head-on, setting a standard for transparency and trust that the field can carry into the future.
Endnotes
1. Else, H. and Van Noorden, R. The fight against fake-paper factories that churn out sham science. Nature 591, 7851 (2021), 516–519; https://doi.org/10.1038/d41586-021-00733-5
2. Edwards, M.A. and Roy, S. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34, 1 (2017), 51–61; https://doi.org/10.1089/ees.2016.0223
3. Oppenlaender, J. and Hosio, S. Keeping score: A quantitative analysis of how the CHI community appreciates its milestones. Proc. of the 2025 CHI Conference on Human Factors in Computing Systems. ACM, 2025; https://doi.org/10.1145/3706598.3713464
4. Hagger, M.S. et al. A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science 11, 4 (2016), 546–573; https://doi.org/10.1177/1745691616652873
5. Wilson, M.L., Mackay, W., Chi, E., Bernstein, M., Russell, D. and Thimbleby, H. RepliCHI – CHI should be replicating and validating results more: discuss. CHI ’11 Extended Abstracts on Human Factors in Computing Systems. ACM, 2011, 463–466; https://doi.org/10.1145/1979742.1979491