Features

XX.3 May + June 2013

Effective approaches to user-interface design with ACASI in the developing world


Authors:
Stan Mierzwa, Samir Souidi, Irene Friedland, Lauren Katzen, Sarah Littlefield

Several years ago, researchers at the Population Council approached the organization's Information Technology (IT) group with a problem: they knew that some solutions existed, but none of the available options met their needs. These investigators asked whether a technology solution could help obtain more accurate responses to survey questions on sensitive subject matter. In response, the IT group built a software/hardware combination that both met the needs of the researchers and was accepted by the people who used the system—an illiterate or semi-literate population that perhaps had never seen or used a computer before.

There were three goals in mind: enhance the research by obtaining the best data possible; operate successfully in the local environment and be welcomed by the local populations; and increase efficiency and cost-effectiveness.

The solution was the development of a fully customized ACASI (audio computer-assisted self-interviewing) module. ACASI is an effective technology for obtaining more honest responses to sensitive questions than those asked face-to-face [1]. With ACASI, a respondent listens to prerecorded audio questions through headphones that are connected to a computer or handheld device and records his or her responses using a touchscreen or a keypad. Literate respondents can also simultaneously read the questions on the computer screen. Using ACASI and other ICT solutions allows for standardization, confidentiality, minimized bias, and more effective collection of survey data. Our ACASI module has been used almost exclusively in quantitative research.

Here, we will outline some examples from experience (see sidebar) and some approaches to consider when implementing self-administered ICT questionnaires in developing countries.

User Interface Design

When providing a technology solution for an illiterate or semi-literate population, certain key steps must be taken to ensure the system is acceptable to the participants. Designing the participant or user interface was one of the main efforts during the creation of our ACASI solution. One of the key principles is to keep the screen design very simple, using elements such as color-coded buttons or shapes; simple, locally recognizable graphics; and number/dial pads as ways of allowing participants with varying levels of literacy to answer questions.

We presented one question per screen; this was also found to be an essential design element in a U.S.-based evaluation [2]. We provide an audio version of each question in the user's chosen language and ask participants to use a touchscreen system to key in their responses. Navigating a screen regardless of a user's level of literacy requires graphic controls that allow replaying a question, going back to the previous question, and proceeding to the next question. A study on developing optimal audio-visual interfaces for illiterate computer users demonstrated that audio or voice annotation generally improves speed of comprehension [3]. In addition, accuracy increases, and the average time taken to complete tasks on the computer decreases, when audio is introduced.
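The one-question-per-screen flow with replay, previous, and next controls can be sketched as a minimal state machine. This is an illustrative sketch only; the class and method names are hypothetical, not the Population Council's actual implementation.

```python
class Survey:
    """Minimal sketch: one question per screen, with replay/previous/next controls."""

    def __init__(self, questions):
        # Each question is a (screen_text, audio_file) pair.
        self.questions = questions
        self.index = 0        # which screen the participant is on
        self.answers = {}     # question index -> recorded response

    def replay_audio(self):
        # A real system would restart audio playback of the current question;
        # here we just return the audio file that would be played.
        return self.questions[self.index][1]

    def answer_and_next(self, response):
        # Record the response, then advance to the next screen if one exists.
        self.answers[self.index] = response
        if self.index < len(self.questions) - 1:
            self.index += 1

    def previous(self):
        # Allow the participant to revisit (and change) an earlier answer.
        if self.index > 0:
            self.index -= 1
```

The point of the sketch is that every control a non-literate participant needs—replay, back, forward—is available on every screen, matching the standard layout described above.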

If graphics or pictorial representations are to be provided to participants, it is beneficial for local collaborators to help create or review the graphics to ensure they are culturally appropriate. For a clinical trial in Nellore, India (MTN-005), we initially created a graphic in New York depicting a couple in a bed to be used in questions regarding sexual activity. Local collaborators suggested changes to the graphic, such as covering the couple with a blanket for discretion. We ultimately hired a local illustrator to ensure that all images were consistent and culturally appropriate. This strategy was also effective in earlier research showing that hand-drawn representations with audio were most clearly recognized [3].

Another important lesson was learned during the pre-test and training for a study in Pune, India, in which one of the question types required Hindu-Arabic numeral recognition. The number choices were arranged in the shape of a phone dial pad, and the participant was required to respond to a question by tapping one or more of these numbers. When a local collaborator commented that approximately 30 percent of the participant population would not be able to recognize the numbers, we redesigned the screens to include graphics rather than relying on a numeric dial pad. Moving forward, planning for new studies should include consulting literacy maps and working more closely with local collaborators when deciding whether to introduce certain types of self-administered questions.

In another example, during the implementation of an ACASI solution in South Africa, a question type was introduced that used color-coded boxes, including blue and green. During pre-testing and training at the sites, we were informed that the colors blue and green are used interchangeably in the local languages and could confuse participants. In this situation, having a system that permitted us to adjust the choice of colors proved instrumental in correcting a problem that could have produced flawed data. The experience also taught us that local staff should be asked to confirm that any colors used in the survey translate unambiguously into the local language prior to implementation.
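The fix above was possible because response-option colors were adjustable rather than hard-coded. One way to achieve that is to treat colors as per-study configuration data; the function and variable names below are hypothetical, illustrating the design choice rather than our actual codebase.

```python
# Default palette for color-coded response boxes (a hypothetical example).
DEFAULT_COLORS = ["blue", "green", "red", "yellow"]

def load_option_colors(overrides=None):
    """Return the response-option colors, applying any site-specific overrides.

    `overrides` maps an option index to a replacement color, so local teams
    can swap out a problematic color without touching the survey code.
    """
    colors = list(DEFAULT_COLORS)
    for index, replacement in (overrides or {}).items():
        colors[index] = replacement
    return colors

# A South African site could replace green, which is confusable with blue
# in the local languages, with a visually and linguistically distinct color:
site_colors = load_option_colors({1: "orange"})
```

Keeping such choices in configuration means a problem found during pre-testing can be corrected at the site, without redeploying software.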

In implementing combined audio and customized screens, there are times during the flow of the questionnaire when an audio script introduces a series of questions to help outline what is to come; this is common when a section of related questions is introduced. During pre-testing of such introductions for one clinical trial (MTN-003), participants thought something was wrong, as nothing was displayed on the screen. We decided to display a simple graphic in the center of the screen so participants would have a visible signal during the audio that the survey was progressing, rather than the original blank white screen. We also added a graphic to be displayed during the audio introduction that explains how to use the system and manipulate the screens. The sample screens in Figure 1 show the buttons that are available on every screen, including the top blue bar for repeating the question, the grey box that provides the question text, the circle that allows a participant to skip the question, and the Previous and Next buttons. We have reused this method in numerous studies using electronic self-report with our ACASI solution, and it has become part of our standard.

Translations and Audio Recording in ACASI User Interfaces

To date, we have implemented ICT and ACASI systems in 21 languages. However, as we are a U.S.-based research organization, our practice is to establish the study questionnaire (what appears on the screen) and the accompanying audio script (what participants hear) in English. Further, the majority of our IT staff do not speak the languages spoken by local staff and participants, so a well-organized and well-documented translation process is integral to ensuring the solution is acceptable and provides researchers with accurate data. Technically, our ACASI system can accommodate numerous languages in one questionnaire setup. In some studies, for example, starting from an English base version of the setup, we have programmed 10 different languages into one survey.
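A questionnaire setup that carries several languages alongside an English base version can be modeled as a per-question record keyed by language, with English as the fallback. The data layout and function below are hypothetical, sketched to illustrate the idea rather than our actual schema.

```python
# Hypothetical data model: one question record carrying multiple languages,
# each pairing the on-screen text with its prerecorded audio file.
question = {
    "id": "q1",
    "translations": {
        "en": {"text": "How often did you use the product?", "audio": "q1_en.wav"},
        "hi": {"text": "(Hindi translation of q1)", "audio": "q1_hi.wav"},
    },
}

def render(question, language, fallback="en"):
    """Return the text/audio pair for the requested language.

    Falls back to the English base version if the language is missing,
    so a gap in translation never produces a blank screen.
    """
    translations = question["translations"]
    return translations.get(language, translations[fallback])
```

Because every language variant of a question shares one record, the survey logic (branching, skips, validation) is written once and reused across all languages in the setup.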

Generally, our translation process has included sending English questionnaires and audio scripts to a local translator or local collaborators for translation into the local language(s) and "back translation" by a second, independent translator back into English. The English back translation should allow English-speaking researchers to find potential errors in the translation; errors often stem from the use of non-researcher translators or from colloquial or complex English terms. After refining this translation/back-translation process over multiple studies, we found that the involvement of local collaborators is key. For example, participants in one study were asked to rate their ability to adhere to the dosing regimen; response options ranged from "very poor" to "excellent." During a review of the documents, the team recognized that the word used for poor referred to poverty rather than inferiority. For some languages, we have also found that a word that is appropriate for the on-screen text can sound awkward in the audio script.

Accordingly, a more refined translation process would involve local bi- and multilingual collaborators during the development of the English questionnaires and audio scripts. Local collaborators who were involved in developing the English questionnaires should then review the translated documents. Once the translation is satisfactory, the questionnaire would be back-translated for U.S.-based researcher review. Alternatively, if time and resources permit, the translation and back-translation documents might be reviewed as a team: when U.S.-based researchers, local bi- and multilingual collaborators, and translators sit together, clarifications can be made about the intended meaning of a question, and the group can come to a consensus about how to adjust problematic questions.

Given the sensitive nature of many questionnaires, we have found that participants are more comfortable when audio files are recorded by a person of their gender (i.e., a male voice for male participants and a female voice for female participants). Further, audio files should be recorded by a local person or staff member with a locally recognized accent. Standard Windows- and Apple-based computers provide utilities in the operating system to allow for recording audio in either 8-bit or 16-bit audio. These utilities can generally create .WAV, .MP3, .MP4, or .WMA files. If a utility is not available in the operating system, free or open source software options, such as Audacity, could be used.
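Whichever recording utility is used, it is worth verifying that the resulting files match the intended parameters (for example, 16-bit samples at an adequate sample rate) before loading them into the survey. The quality-control check below is an illustrative sketch using Python's standard-library `wave` module for .WAV files; it is not part of the ACASI software itself, and the thresholds are assumptions.

```python
import wave

def check_recording(path, expected_sample_width=2, min_rate=16000):
    """Check that a .WAV recording is 16-bit and at an adequate sample rate.

    Returns a list of problem descriptions; an empty list means the file
    passes. `expected_sample_width` is in bytes (2 bytes == 16-bit), and
    `min_rate` (in Hz) is an assumed minimum for clear speech playback.
    """
    with wave.open(path, "rb") as wav:
        width = wav.getsampwidth()   # bytes per sample
        rate = wav.getframerate()    # samples per second

    problems = []
    if width != expected_sample_width:
        problems.append(
            f"expected {expected_sample_width * 8}-bit audio, got {width * 8}-bit"
        )
    if rate < min_rate:
        problems.append(f"sample rate {rate} Hz is below {min_rate} Hz")
    return problems
```

Running a check like this over every translated audio file catches files that were accidentally recorded at low quality before they reach a study site, where re-recording is far more costly.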

When creating audio files, you will want to provide a high-quality headset that includes a filtered microphone, usually with a foam cover, as this will help to eliminate unwanted ambient sound. In addition, when recording, it is most important to find a truly quiet location. This may be a challenge if recording occurs at busy sites where staff come in and out of rooms, or at sites located in urban and peri-urban areas where traffic and other sounds might disrupt recording. It is especially challenging if recording takes place in warmer climates, where windows remain open for ventilation or air conditioning units create further background noise. Identification of a remote or private room well in advance of recording sessions is key; we have found that it is often easiest for recording to take place off-site. Alternatively, simply posting a “do not disturb” sign on the door indicating that recording is in progress can significantly reduce interruptions that might result in the need to re-record files.

The audio scripts should include tips and visual cues, such as underlining text that should be emphasized, or when it is important to pause for two to three seconds between phrases. Most important, those whose voices will be used for the audio recording should practice with the audio script ahead of time, to become comfortable with the source material and to get into the flow of the audio script. It is not uncommon to need to re-record audio files due to interruptions or pauses in the audio file; sufficient time for quality recording should be allowed.

Prior to the implementation of any study, we recommend that local collaborators officially sign off on the final translated on-screen text, audio scripts, and audio files.

Conclusion

The movement toward ICT solutions to enhance or support innovative research has continued to expand since we began developing technology-driven self-report systems. Preference for ACASI over face-to-face interviews has been demonstrated in some studies (MTN-035B), where 85 percent of 585 women responded favorably [4]. In many less-developed areas, mobile broadband use on smartphones and tablet-based computers is growing, and more stable electricity is becoming available, but connectivity still presents challenges and needs to be considered when building an ICT solution for use in these areas. Self-report systems such as ACASI can be implemented in these locations, but one should not assume that what is built far from the local environment will work successfully there. We have outlined some efforts worth considering, such as giving special attention to user interface design and obtaining optimal audio for self-report systems used with illiterate or semi-literate populations.

Acknowledgments

We thank the many participants who have utilized our custom-developed ACASI solution and provided useful feedback on the user interface design during pre-tests. This work was supported in part by the Microbicides Trial Network (MTN), which has been funded by the United States National Institutes of Health and the United States Agency for International Development (USAID).

References

1. Hewett, P.C., Mensch, B.S., and Erulkar, A.S. Consistency in the reporting of sexual behavior by adolescent girls in Kenya: A comparison of interviewing methods. Sexually Transmitted Infections 80 (suppl 2), 2004.

2. Basch, E., Artz, D., Iasonos, A., Speakman, J., Shannon, K., Lin, K., Pun, C., Yong, H., Fearn, P., Barz, A., Scher, H.I., McCabe, M., and Schrag, D. Evaluation of an online platform for cancer patient self-reporting of chemotherapy toxicities. Journal of the American Medical Informatics Association. 2007.

3. Medhi, I., Prasad, A., and Toyama, K. Optimal audio-visual representations for illiterate users of computers. Proc. of the 16th International Conference on World Wide Web. ACM, New York, 2007.

4. Gorbach, P.M., Mensch, B., Husnik, M., Coly, A., Young, A., Masse, B., Makanani, B., Nkhoma, C., Chinula, L., Tembo, T., Mierzwa, S., Reynolds, K., Hurst, S., Coletti, A., and Forsyth, A. The Microbicide Trials Network: Effect of computer-assisted interviewing on self-reported sexual behavior data in a microbicide clinical trial. AIDS and Behavior. 2012.

Authors

Stan Mierzwa is the director of information technology at the Population Council and has collaborated with research scientists for more than ten years to help integrate technology use in studies in the developing world. He received an M.S. in information systems management from the New Jersey Institute of Technology.

Samir Souidi is a senior application programmer/database developer at the Population Council and has designed and developed custom modules for use in the Council’s ACASI software platform. He received an M.S. in Information Systems from Pace University.

Irene Friedland is an applications/support specialist at the Population Council. She has designed the technical documentation and contributed to the structured methods of recording audio files for ACASI, as well as the creation of audio scripts. She has a master’s degree in social work from Fordham University.

Lauren Katzen is portfolio manager at the Population Council’s Center for Biomedical Research. In her previous role as a clinical trial specialist, she was involved in several ACASI study implementations involving clinical trials, providing input on questionnaire design, managing the translation process, site training, and study flow. She has an M.P.A. in health policy and management from New York University.

Sarah Littlefield is a clinical research associate at CONRAD. Previously with the Population Council, she has been involved in several ACASI implementations involving clinical trials, providing input on questionnaire design, managing the translation process, and site training. Sarah has an M.P.H. in social and behavioral science from Yale University.

Figures

Figure 1. Sample touchscreen design examples. From top: color-coded multiple choice, dial pad, and graphic that increments with touch.

Sidebar: Breakdown of Solution Experiences—Methodology

We use experiences from the projects listed below, all of which used our customized ACASI solution, to illustrate the approaches we suggest for improving outcomes when implementing ICT with illiterate or semi-literate populations. Across the studies discussed in this article, 21 languages were used and approximately 35,000 electronic surveys were completed via ACASI.

Sidebar: Clinic-Based Studies

  • Carraguard Phase III Trial (South Africa—3 clinics)
  • NES/EE CVR (U.S., Australia, Brazil, Dominican Republic—13 clinics)
  • MTN-035B (Malawi—2 clinics)
  • MTN-003 (Zimbabwe, South Africa, Uganda—14 clinics)
  • MTN-005 (U.S., India—3 clinics)
  • MTN-009 (South Africa—7 clinics)
  • MTN-020 (Zimbabwe, South Africa, Uganda—15+ clinics)
  • Assessing the reporting of sensitive behaviors in microbicide trials (South Africa—3 clinics)
  • A simulated clinical trial to explore willingness to participate (India—1 clinic, 3 satellite sites)

Sidebar: Non-Clinic-Based Surveys

  • Malawi Adolescent Schooling Study—Malawi
  • Assessing and Improving the Measurement of Sexual Behaviors—Multiple
  • Male Circumcision Partnership: Achieving Scale—Zambia, Swaziland

©2013 ACM  1072-5220/13/05  $15.00

