Forums

XXVI.1 January–February 2019

Lessons from working with researchers and practitioners in healthcare


Author:
Ann Blandford


Conducting HCI research in healthcare is exciting, worthwhile, and essential: Medicine and healthcare are becoming increasingly reliant on interactive health technologies and data analytics. HCI researchers and practitioners have a key role in ensuring that healthcare systems are fit for purpose (safe, effective, engaging, etc.) and in discovering new interactions that can enhance health and well-being. However, working with health professionals and biomedical researchers can pose challenges—some obvious, some subtle.


Every discipline evolves from earlier work and community consensus on its appropriate goals, values, and methods. Jonathan Grudin [1] traces the roots of HCI to sources including ergonomics and information processing. HCI has readily adopted and adapted methods from many disciplines (psychology, computer science, design, social sciences, etc.), with goals such as developing theories of how people interact with technologies and creating novel interaction paradigms that transform people’s experiences. Arguably, HCI is not a discipline but rather a community with a common interest in people’s interactions with digital technologies, recognizing that each person is an individual. In contrast, health research spans many scales of interest, from biological sciences that investigate the causes of disease to implementation sciences that investigate how to deploy health interventions at scale, but it has developed a narrower repertoire of methods, of which the gold standard is the randomized controlled trial (RCT). More general clinical trials, working to slightly looser constraints, are the common approach to evaluating digital health technologies. Much health research focuses on populations rather than individuals, and evaluations of digital health interventions have—at least until very recently—been based on an implicit assumption that an intervention will work equally well for everyone as long as they engage with it as intended.

Whereas the significant advances in healthcare in the 20th century were delivered through improved understanding of disease and new pharmaceuticals, those of the early 21st century are being enabled by technological developments, such as:

  • discoveries from big data (which rely on advances in data gathering, data linkage, algorithms, imaging, and interaction design to support sensemaking);
  • interactive technologies for diagnosis and therapy (e.g., robotics, telemedicine, closed-loop therapy systems—all of which change the roles of the professionals managing the therapies);
  • technologies to support health behavior change (e.g., apps and wearables that support goal setting, behavior tracking, and information interpretation); and
  • technologies for communication and collaboration (e.g., finding and learning with “people like me,” telemedicine, citizen science).

In other words, the transformational tools for 21st-century healthcare are digital, and most of them are interactive. This creates challenges for HCI in terms of understanding people's behaviors around these tools and designing tools that are fit for purpose, engaging, and safe. Yet HCI, and the value of person-centered and context-sensitive design, are often invisible to health researchers, who focus on clinicians' expertise in defining requirements for and evaluating interventions (see, for example, [2], where HCI is notable only by its absence from the discussion). This limited understanding and awareness of HCI poses challenges, which have been summarized by various authors [3]. Here we focus on digital interventions and develop two themes: the end goals of research, and how the methods of different disciplines can complement each other.

The Goals of Research

The primary question for health research around any intervention, digital or otherwise, is: Does it improve outcomes? This encompasses questions about safety (What are the risks of patient harm, and are they proportionate?) and economics (Is the intervention cost-effective?). The question regarding health outcomes is commonly based on the assumption that the intervention will be used as intended; in the world of medications, this is discussed as compliance, adherence, or conformance, but for clinical trials, compliance is assumed (and is a requirement of participants in the trial). Questions of engagement, ease of use, or how people fit the intervention into their lives are rarely reported. The limitations of clinical trials for digital interventions are being recognized [4] and variants that address some of the limitations are being developed [5], but the focus remains on quantitative (summative) evaluation of interventions.


Whereas health research tends to focus on outcome measures, HCI research tends to focus on user needs, design rationale, and process measures. Except where measures of success relate to task-completion times or error rates, they are often qualitative, based on user reports or observations of people's experiences with the technology. Thus, the focus is commonly on qualitative (formative) evaluations and on the ways in which technology is used in practice by different people, including workarounds and other unexpected behaviors. These formative evaluations are typically cost-effective but do not address the questions that matter to a health audience—namely, questions about long-term engagement and health outcomes.

A concern, even with health research on digital interventions, is a tendency toward "pilotitis," where interventions are evaluated as being effective enough—and safe enough and cost-effective enough—but there is no clear strategy for scaling them up and making them sustainable. Arguably, HCI research is even more prone to pilotitis than health research, a tendency exacerbated by current funding regimes and publication cultures. Thus, the long-term impact of both HCI and health research can be limited because there is no pipeline of evaluation measures (Figure 1).

Figure 1. Proximal and distal evaluation measures, and their disciplines (simplified).

Complementary Methods

Within healthcare, with its focus on clinical effectiveness, digital health interventions are widely regarded as complex interventions. One widely cited framework for the development and evaluation of complex interventions is that of Craig et al. [6], comprising four key stages: development, feasibility and piloting, evaluation, and implementation. In this framework, feasibility and piloting and evaluation involve smaller and larger clinical trials, respectively, while implementation refers to the rollout of the intervention at scale (e.g., regionally or nationally). All digital-tool development is located within the development phase, but even there scant attention is paid to it: It is simply assumed to happen. In recent years, a few papers offering guidance on how to design digital health interventions have emerged (e.g., [7]). These typically start from the assumption that the main source of expertise for digital health interventions is clinicians or other subject-matter experts (such as those with expertise in behavior change), who define requirements for an intervention. In their person-based approach, Yardley et al. [7] go beyond this to focus attention on the intended users of a digital health intervention and on what might motivate them to engage with (and get value from) that intervention, but the starting point remains a clinical view of desirable health outcomes. Summarized in Figure 2, the person-based approach has four stages:

Figure 2. Person-based approach to digital health intervention development [7].
  • Planning is similar to requirements gathering. It includes an explicit focus on reviewing existing evidence (e.g., about user needs and existing interventions) as well as conducting qualitative research with the intended users of the intervention.
  • Design covers requirements synthesis and possible features to address those requirements (sometimes referred to as “mechanisms of action”).
  • Development and evaluation emphasize the iterative nature of digital intervention development but make no reference to prototyping or the creative aspects of interaction design.
  • Implementation and trialing focus on larger-scale testing and deployment.

There are well-established HCI development lifecycles, such as the human-centered design process codified in ISO 9241-210; these typically start with user needs and focus on the iterative development and testing of design concepts. The end user is typically regarded as the expert who, explicitly or tacitly, understands their own activities and may be able to identify design opportunities. An example lifecycle that intentionally mirrors the person-based model but focuses on HCI activities is presented in Figure 3. According to this view:

Figure 3. A design lifecycle for digital health based on the principles and practices of HCI.
  • The first stage involves identifying the problem to be addressed and the intended users, and understanding the contexts in which the digital intervention might be used. The output is likely to be design representations, such as personas and scenarios, that inform later stages of development.
  • Conceptual design involves creating design representations that address the identified user needs in ways that can be implemented within an interactive digital system.
  • Detailed design involves iterative prototyping and testing, where prototypes increase in fidelity through successive cycles, or where new features are added in each cycle of an agile approach.
  • Implementation and testing involve deploying in naturalistic settings and iterating as necessary.

These lifecycles have a lot in common. Probably the most important differences are the person-based approach's emphases on prior evidence at the early stage and on effectiveness (i.e., outcomes) at the late stage, and the HCI lifecycle's emphases on creative design representations and interaction-focused evaluation through conceptual and detailed design. Clearly both are important, and a richer design lifecycle for digital health technologies will incorporate all of these features.

Summary

Anyone who has been a patient, or has visited one, will have experienced poorly designed health technologies; clinicians make do with some terrible interfaces in their work, as do patients with complex chronic conditions. There are many reasons for this, including ignorance of computer science and HCI; the complexity of healthcare systems and of people's lives into which new technology is being introduced; population-, outcomes-, and RCT-oriented thinking that undervalues individual qualities and experiences; and the sheer magnitude of the challenge of coordinating all the necessary disciplines. Even the idea that the process is all one way (as implied in Figures 1–3 for simplicity of presentation) limits our imaginations in terms of what is needed and what is possible. HCI research and practice have an essential role to play in the development of health technologies that are safe, creative, engaging, and truly fit for purpose. Conversely, health presents fascinating challenges for HCI research, and there are many things that HCI researchers can usefully learn—for example, about rigor, about thinking in terms of outcomes as well as experiences, and about scaling solutions to populations. To make a real difference, we need to bring our complementary perspectives together, working with and learning from the expertise and resources available to different disciplines.

Acknowledgments

These reflections are based on extensive work with health researchers and students—many of whom have had to forge a transdisciplinary research identity—and with HCI colleagues who work in similar spaces. I have learned from you all, but any remaining errors or unclear points are my own.

References

1. Grudin, J. From Tool to Partner: The Evolution of Human-Computer Interaction. Synthesis Lectures on Human-Centered Informatics 10, 1 (2017), i–183.

2. Wachter, R. The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine's Computer Age. McGraw-Hill, 2015.

3. Blandford, A., Gibbs, J., Newhouse, N., Perski, O., Singh, A., and Murray, E. Seven lessons for interdisciplinary research on interactive digital health interventions. Digital Health 4 (2018). DOI: 10.1177/2055207618770325.

4. Pham, Q., Wiljer, D., and Cafazzo, J.A. Beyond the randomized controlled trial: A review of alternatives in mHealth clinical trial methods. JMIR mHealth and uHealth 4, 3 (2016).

5. Collins, L.M., Murphy, S.A., Nair, V.N., and Strecher, V.J. A strategy for optimizing and evaluating behavioral interventions. Annals of Behavioral Medicine 30, 1 (2005), 65–73.

6. Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., and Petticrew, M. Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ 337 (2008), a1655.

7. Yardley, L., Morrison, L., Bradbury, K., and Muller, I. The person-based approach to intervention development: Application to digital health-related behavior change interventions. Journal of Medical Internet Research 17, 1 (2015), e30.

Author

Ann Blandford is professor of human-computer interaction in the Department of Computer Science at University College London and director of the UCL Institute of Digital Health. a.blandford@ucl.ac.uk


Copyright held by author. Publication rights licensed to ACM.
