Maybe you heard a horrible statistic about a health-related issue and thought, "Technology could help with this." Or perhaps an event in your own life prompted you to think about how technology could improve the health and wellness of others. Whenever I have an experience that highlights a potential new avenue of research, I begin by learning more about the health matter, reviewing current health informatics applications, and then investigating methods for learning about the needs of the target population. Then I collaborate with a multidisciplinary team to create a health intervention prototype that people use for a short period of time to validate its feasibility and effectiveness. It is this latter part that gives me pause about what I do.
I occasionally receive email messages from people at risk of an illness, or already battling a chronic illness, asking if they could use some part of our prototype. People are free to use what we create, straight from our open source repository, but its utility for the general population is often questionable, since it is a research project that does not come with much support after the funding ceases. Based on these experiences, I started thinking more deeply about what our responsibilities are when we design sociotechnical health interventions. Here, I briefly outline some of the responsibilities and questions the community should think about before thrusting technology on the next population that could benefit from it.
When we design sociotechnical health interventions, there must be some consideration of the accepted or correct way to achieve good health. Indeed, there are many guidelines from various federal agencies to help us understand everything from what a balanced diet is to what movements our child should make at three months old. But as soon as we decide on which guideline to use in our intervention, we are implicitly applying a set of values to the population who will use that intervention. Is this what we should be doing through our design work?
One argument is that this is exactly what we should be doing: if it is a health intervention, it must be backed by evidence-based research illustrating that if people take these steps, they should live healthier lives. Indeed, healthcare providers traditionally treat patients in this manner; they provide a list of dos and don'ts based on well-established research. One of the projects I worked on followed this argument because we were designing a nutrition-monitoring application for people with chronic kidney disease. These patients must follow a restrictive diet; if they don't, serious health problems can result. We worked with nurses, dietitians, and a nephrologist to identify the ideal diet for this population [1]. But one of our design considerations was to include foods the population was encouraged not to eat (for example, potassium-loaded strawberries). We wanted to respect the person as an individual and provide them with the ability to consider non-recommended foods to test how these foods affected their restricted dietary limits for certain nutrients. This prompted discussions with the research team about how we should talk to users who routinely record foods that put them over their dietary limits. There is a fine line between lecturing or admonishing and having an open discussion about the implications of personal dietary choices.
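At its core, the design consideration above is bookkeeping: summing the nutrients in the foods a person logs and comparing those totals against per-nutrient daily limits, so the application can open a discussion rather than forbid a food outright. A minimal sketch of that idea follows; the food database, nutrient values, and limits are invented placeholders for illustration, not the clinical figures our team used.

```python
# Illustrative sketch: check a day's food log against per-nutrient daily limits.
# All names, nutrient values, and limits are placeholders, not clinical guidance.

DAILY_LIMITS = {"potassium_mg": 2000, "sodium_mg": 1500}

FOOD_DB = {
    "strawberries": {"potassium_mg": 254, "sodium_mg": 2},
    "white rice": {"potassium_mg": 55, "sodium_mg": 5},
}

def nutrient_totals(food_log):
    """Sum each tracked nutrient over the logged foods."""
    totals = {nutrient: 0 for nutrient in DAILY_LIMITS}
    for food in food_log:
        for nutrient, amount in FOOD_DB[food].items():
            totals[nutrient] += amount
    return totals

def over_limit(food_log):
    """Return only the nutrients whose daily limit the log exceeds."""
    totals = nutrient_totals(food_log)
    return {n: t for n, t in totals.items() if t > DAILY_LIMITS[n]}
```

Note that non-recommended foods stay in the database; the design choice is that `over_limit` reports an exceeded limit for reflection instead of blocking the entry.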
The other side of this argument is that we should not be pushing a specific set of values on a population, because the values change periodically (for example, the recently updated food pyramid) and they may not resonate with the target population's culture, context, or social status. Choosing a set of values also implies a need for "corrective technologies" [2] whose goal is to change an individual's actions to conform to some gold standard.
But how many of us can really conform to all of these guidelines? Did you eat your five servings of fruits and vegetables today (a reference to the U.K. National Health Service and former U.S. Centers for Disease Control and Prevention nutrition campaigns)? Interestingly, the U.S. has replaced its "5 a Day" campaign with "Fruits & Veggies–More Matters," which almost makes it sound more flexible. How many vegetables should I eat? More [3]. But if there is too much flexibility in the value set chosen, then how can we measure the outcomes of an intervention and show that it does improve health? Do small steps in health improvement suffice (for example, I went from eating zero servings of vegetables to one serving a day)?
Overall, when it comes to what type of intervention is needed and with which value sets, it depends on whom we are collaborating with and the type of population we are targeting. Currently, I am inclined toward the latter argument, in which we try not to impose a value set on a population but instead encourage users to improve their health through incremental change and personal reflection. For example, asking people who are not physically active to walk 10,000 steps a day seems unreasonable. However, if we ask them to slowly increase their step counts in certain increments, they can figure out how to integrate these small goals into their lives and perhaps even reach the gold standard of 10,000 steps a day. This idea respects people as individuals who may deviate from the ideal standard, but would still like to improve their health in some way.
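The incremental approach above can be made concrete as a goal schedule that starts from the user's own baseline rather than the gold standard. The sketch below is a hypothetical illustration, with the function name, the 10 percent weekly increase, and the 10,000-step target all chosen by me as assumptions, not taken from any particular intervention:

```python
def incremental_goals(baseline_steps, target=10000, weekly_increase=0.10):
    """Yield a weekly step goal that grows roughly 10 percent per week
    from the user's own baseline, capped at the gold-standard target.

    Both the growth rate and the target are illustrative assumptions."""
    if baseline_steps <= 0:
        raise ValueError("baseline must be a positive step count")
    goal = baseline_steps
    while goal < target:
        goal = min(round(goal * (1 + weekly_increase)), target)
        yield goal
```

A user starting at 3,000 steps a day would see goals of 3,300, then 3,630, and so on, reaching 10,000 only after many weeks; the point of the design is that each intermediate goal is achievable relative to where the person actually is.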
As noted in Elizabeth Mynatt's previous contribution to this forum [4], some health technologies have the ability to collect data 24 hours a day. But just because we can, does it mean we should? When we think about designing interventions for dietary or physical-activity monitoring, it would be ideal to have information about each time members of the population partake in these activities. Yet we know from personal experience and previous research that people do not use interventions in this way. When employing sensors, people can cover them or take them off. In the case of manual input, people can backfill data by documenting all of their activities over a given period of time (a day, a week) in one data entry. Some people are "parking-lot compliant," meaning they fill in their data right before they meet with a researcher or health practitioner. Still others forget to fill out the application or simply reject it. When regular usage is needed, we typically send reminders and prompts to users. But should we embrace users' true usage patterns or try to cajole them into using the intervention regularly?
A chronic kidney disease participant I worked with during a study session had the most compelling argument for embracing users' usage patterns. When I inquired about his decreased usage of the dietary-intake application, he told me he had not used it because he had found the diet that "would not kill him." He did not have to input his diet anymore since he ate the same things every day. He said he would use the application again when he got bored of his current diet. True to his word, he asked for the application again after the study ended to help him during a holiday season, when he would be eating at social events and could not anticipate his diet. This made me wonder if we were thinking about usage expectations incorrectly: Could episodic usage be acceptable? Would a user learn enough about their personal health during that short-lived New Year's get-in-shape resolution period to make small changes to their health for the rest of the year? If so, we are again facing the challenge of how to measure outcomes when people are not regularly using the application for a health-related activity in which they may participate daily. (See [6] for a great discussion of evaluating health technologies in HCI.)
On the other hand, if people do not get enough feedback on their activities, they most likely will not continue pursuing those activities. For example, some women of low socioeconomic status who have had ongoing battles with obesity have noted in interviews that diet affects their health the most [7]. When they crash diet, they immediately see the effects: looser clothing and noticeable weight loss. When they exercise, however, the only immediate side effect they notice is feeling sore and tired. But without sustained exercise and good nutrition, the women would have difficulty maintaining a healthy weight. Thus, in this example, we could argue for continuous usage through a wearable sensor system and ambient display that could illustrate how all of the physical activities done throughout the day add up to improve health. The ambient display could act as a virtual self, showing how the physical activity is making an impact, even if it does not result in immediately noticeable physical signs. In this situation, without continuous usage we would not be able to gather enough data to reveal the effects of their activities on their health.
Independent of what usage patterns we have in mind for our interventions, we must accurately report on participants' usage of the intervention, not just aggregate data that leaves the math to the reader (for example, "We recruited 20 participants for the six-week study... We collected 90 physical activities..." When we do the math, we are left wondering if people exercised less than once a week or if some participants input more data than others). When we look at actual usage of our health interventions, we can begin to explore why certain populations used the application more than others, which could improve future designs.
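The difference between an aggregate count and per-participant reporting is easy to make concrete. The sketch below is a hypothetical illustration (the participant IDs and function name are invented): given one log entry per recorded activity, it breaks the aggregate down by participant, which is exactly the information the "20 participants, 90 activities" style of reporting hides.

```python
from collections import Counter

def usage_report(activity_log, participants, weeks):
    """Report per-participant entry counts rather than one aggregate total.

    `activity_log` is a list of participant IDs, one per logged activity.
    Participants with zero entries are reported explicitly, since dropouts
    are part of the usage story."""
    counts = Counter(activity_log)
    return {
        p: {"entries": counts.get(p, 0),
            "per_week": round(counts.get(p, 0) / weeks, 2)}
        for p in participants
    }
```

With a log of 90 activities concentrated in two heavy users, for instance, the report would immediately show that the remaining participants barely used the application at all, something the aggregate number conceals.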
After a great deal of research, design, collaboration, and evaluation, a team should have a sociotechnical health intervention that can help improve the lives of a population. The team can easily disseminate their knowledge through papers and presentations. They can even share their designs and software through freely available code repositories. But what about the people who would like to continue to use the application? What about the people who see the press release and think this is the application they have been looking for? We built it, they came, now what?
This is a particularly challenging topic for sociotechnical health interventions. Paper-based interventions are easier to disseminate to the broader population because the research team can publish the intervention tool and results in one neat bundle (either paper or Web accessible). Updates could be disseminated in similar ways. The technical aspects of the interventions we design need more consideration, because specific hardware, software, and telecommunications infrastructure may be necessary to make the intervention work. In addition, technology changes so quickly that by the time an intervention is designed, implemented, and evaluated, the technology may already be outdated in relation to current market standards. When upgrades are made, especially to mobile health applications, the upgraded application may require completely different hardware. These issues make it difficult to disseminate and update sociotechnical interventions on a broader population scale.
For example, the dietary-intake monitoring application for chronic kidney disease patients was designed and developed for a personal digital assistant (PDA), because when we wrote the grant, PDAs were state-of-the-art. By the time the application was deployed, smartphones were the newest mobile technology. Thus, after the grant ended, we discussed porting the application to a smartphone. But we knew it would be unethical to take a sociotechnical intervention away from participants who wanted to continue using the application to manage their diet. Thus, we gave participants the option to keep using the PDA-based application, explaining that we could provide only limited support if they encountered a problem; our funding had finished and we had limited resources. This situation is far from ideal. By doing the "right thing" and allowing participants to keep the sociotechnical intervention, we had to address other issues regarding maintenance and technical support. In addition, when we deploy the smartphone version of the intervention, we have to think about our study design and decide whether we should restrict it to people who have never used the application before or include previous users.
One may argue that we are doing research, and thus sustainability should not be a consideration, so that more innovative research can be undertaken. The ideal research situation is one in which good ideas that are proven effective are carefully selected for the mass market, similar to the idea of multiphase clinical trials or translational informatics, in which the goal is to push innovations from the lab bench to the hospital bedside to improve patient care. But then we must face the ethical issues, as people may begin to rely on these interventions for the sake of their health and wellness. The link between these interventions and the health and well-being of living, breathing people makes it even more important that we figure out how to better sustain interventions as part of our research agenda. This goes beyond funding agencies' requirements of making data public (for example, the U.S. National Science Foundation's Data Management Plan requirement) by ensuring the intervention itself can be publicly obtained and used. (Now that would bolster the argument for public funding of scientific research!)
This article covers some of the most pressing challenges and debates about responsibilities that my research group regularly encounters in our own work. There are no easy answers to the questions posed here (and undoubtedly there are more questions and responsibilities that remain to be discussed). But if we are cognizant about these responsibilities, we can design better sociotechnical health interventions that respect their current and future users.
I would like to thank the researchers in the Wellness Innovation and Interaction Lab and Julie Maitland for their input. In addition, I would like to thank Dr. Patricia Brennan and her ISyE 961 class for their interesting discussions about values, sustainability, and nutrition-monitoring applications.
1. Welch, J.L., Siek, K.A., Connelly, K.H., Astroth, K.S., McManus, M.S., Scott, L., Heo, S., and Kraus, M.A. Merging health literacy with computer technology: Self-managing diet and fluid intake among adult hemodialysis patients. Patient Education and Counseling 79, 2 (2009), 192-198. DOI = 10.1016/j.pec.2009.08.016; http://dx.doi.org/10.1016/j.pec.2009.08.016
2. Grimes, A. and Harper, R. Celebratory technology: New directions for food research in HCI. In Proc. of the 26th Annual SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, 2008, 467-476. DOI = 10.1145/1357054.1357130; http://doi.acm.org/10.1145/1357054.1357130
3. This is until one goes to the website and enters some personal information to find out specifically how many servings are necessary; http://www.fruitsandveggiesmatter.gov/
4. Mynatt, E.D. IT in healthcare: A body of work. interactions 18, 3 (May 2011), 22-25. DOI = 10.1145/1962438.1962446; http://doi.acm.org/10.1145/1962438.1962446
6. Klasnja, P., Consolvo, S., and Pratt, W. How to evaluate technologies for health behavior change in HCI research. In Proc. of the 2011 Annual Conference on Human Factors in Computing Systems (Vancouver, BC, May 7-12). ACM, New York, 2011, 3063-3072. DOI = 10.1145/1978942.1979396; http://doi.acm.org/10.1145/1978942.1979396
7. Maitland, J. and Siek, K.A. Technological approaches to promoting physical activity. In Proc. of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group. ACM, New York, 2009, 277-280. DOI = 10.1145/1738826.1738873; http://doi.acm.org/10.1145/1738826.1738873
Katie A. Siek is an assistant professor in computer science at the University of Colorado, Boulder, where she leads the Wellness Innovation and Interaction Lab. Her research focuses on how sociotechnical interventions affect personal health and well-being and is supported by the National Institutes of Health, the Robert Wood Johnson Foundation, and the National Science Foundation, including a five-year NSF CAREER award.
©2011 ACM 1072-5220/11/0900 $10.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.