Fast-moving software development. International adoption of your product. Smaller travel budgets. There are many reasons why we are driven to first try, and then rely on, technology to help us connect with users.
How do we manage the challenges of remote sessions while maintaining the level of quality we need to answer our research questions? This article describes the benefits and primary challenges of remote research and proposes methods to support strong research.
Remote research and testing have many benefits, both to participants and to the study facilitators. The most obvious benefits are logistical. The participants do not need to go to a lab, saving them time and effort, and enabling them to maintain their normal schedules. With automated (non-moderated) studies, they can participate at their leisure, at the hours that are most convenient to them.
As facilitators, we gain logistical benefits through reduced travel and less concern about providing correct wayfinding to participants. In addition, when observing participants using their own system, we can see how they set up their desktop, how and when they navigate between programs, and how they use browser tabs (and sometimes their Favorites). This insight into how people work with their own machines can be extremely helpful and is often interesting as well.
Remote studies enable people with disabilities to participate fully. Those with visual disabilities can use the systems they are most comfortable with, such as screen readers, specialized keyboards, larger screens, and other personal devices. People with disabilities that reduce their ease of movement are not frustrated by having to arrange transit or by concerns about the study location's accessibility.
The challenges and limitations of remote studies are well established. It's hard enough for participants to focus during in-person studies—focusing during a remote study is even more challenging. This is particularly true if the participant has not been able to secure a private space. Often participants are at their normal desk, at work or at home. As a result, their normal interruptions are likely to continue (urgent requests from their boss, a sleeping baby waking up, etc.).
In moderated and automated studies, our work is "just" on a screen and will be deprioritized against these other situations. These interruptions can be informative and helpful to research but can also mean a major disruption of a usability study. In addition, there is often background noise that can distract participants and disrupt our ability to understand their vocalizations.
Limited or completely absent facial and body-language feedback can cause researchers to miss sarcasm, frustration, distress, boredom, and the full range of other "readable" emotions. It can therefore be difficult to perceive when it is appropriate to ask the next question. Even five seconds after asking a question can seem like an eternity. However, it's also important to make sure that participants do not feel abandoned. In some studies, I've observed participants asking "Are you still there?" when a facilitator is being too quiet. Making audible listening noises ("mmm") that encourage them to continue, while avoiding words that may be perceived as confirming, can help to maintain a conversational tone without intruding into their thoughts. This is a difficult balance.
A pilot study is crucial for any usability study; when the study is conducted remotely, it is even more important. Because there is no moderator to provide clarification, automated remote studies require pilot studies to ensure success. There is no safety net if participants don't understand the task. They will simply fail, with no option for either side to ask questions. Until you watch the study, it is impossible to know whether it was successful. Additionally, while some solutions remind participants to think out loud, participants are typically quieter than in in-person or moderated studies.
To adequately prepare for an automated study, I recommend rewriting all the tasks in the context of the tool you are using. Don't take anything for granted. The written instructions must convey all meanings and provide all details to the participant (user IDs, passwords, etc.).
For moderated sessions, making a text version of the tasks available to participants will increase comprehension. The tasks can be pasted into the session chat window or written out in any word-processing or note-taking document and screen-shared.
The chat feature in the system can also be used to communicate during the study to compensate for technology or language issues. Practice using the chat with someone else, as well as copying and pasting into the windows as appropriate.
In general, practice using what is being tested and familiarize yourself with the most common browsers and operating systems. It is helpful to know some basic troubleshooting for what you are testing, and for the systems as well. Test the technology you'll be using while working from home or remotely to ensure you understand how it works from the participant's perspective.
For moderated studies, practice the script more than you would for in-person studies; try to limit the use of humor in your communication, as it can be difficult to interpret remotely.
Do a pilot study with one participant, and if they fail due to the study design, fix it and do another pilot, again with only one participant. Don't risk your budget on a bad study—save time and money by running a small number of participants at any given time. Small automated studies can be completed within a few days, depending on how participants are recruited.
Consent for any remote study should be digitized, preferably using an online form that is easy to set up and manage. If the study is automated, the consent form can be integrated into the study; if the study is moderated, the consent form should be signed prior to beginning the study.
Non-disclosure agreements (NDAs) should be handled in the same manner. Participants are always able to do screen captures, so protecting intellectual property is challenging (at best). Be aware of the risk and keep things simple for everyone with a verbal agreement from the participant. If you must have documentation, create a plain-language agreement and collect consent via an online form.
Participant recruiting can be a major challenge for remote studies, and automated tools make this task easier. Some remote-study solutions offer recruiting by asking potential participants a few simple questions. These automated recruitment methods enable you to quickly create and run a study. As practitioners, our priority is to catch the biggest problems in the UI—the ones that are hidden to us, and that will confound an average person who encounters them. The ability to easily identify major issues and then incorporate changes into our work quickly far outweighs the need to identify insignificant issues.
In 2000, Steve Krug recommended that we "recruit loosely," finding participants who reflect our personas/audience but making allowances for disparities. Having run hundreds of studies, I find this approach works in most situations. With a few participants whom we've recruited rather loosely, we are able to identify big issues. Once those issues are addressed, another small round of testing helps us identify additional issues.
As with everything in our profession, context matters. You need to determine what works for your situation. There will be cases when you will need to recruit more tightly to match a particular type of user, and what may be considered loose recruiting to one organization may be overly stringent to another.
With some solutions, you can provide a list of potential participants to the company hosting the study. In other situations, you can invite people to the study yourself. The key is to ensure that participants have the technology and the ability to participate. This includes both being able to install the software and having the basic technical knowledge to use it. In addition, they will need to agree to enable screen sharing.
Mismatched participants can be a source of frustration in any study. Whatever the reason for the mismatch, you will need to determine how much of their feedback is helpful and how much should be dismissed.
Planning moderated sessions can be time consuming. Sessions that span multiple time zones are always my biggest personal challenge. I'm thankful for the World Clock meeting planner (http://www.timeanddate.com/worldclock/meeting.html). I use it at least once a week to ensure that I am planning meetings that make sense for most, if not all, attendees.
Doodle (http://doodle.com/) is another tool that I have found to be irreplaceable in my quest to solve the planning challenge. Doodle allows for time-zone specification, so that when I'm offering time blocks, people see them in their local time zone.
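For readers who prefer to script this check rather than use a website, the same "what time is it for everyone?" question can be answered in a few lines of Python with the standard-library zoneinfo module (Python 3.9+). This is a minimal sketch, not part of either tool mentioned above; the zone names and session time are illustrative.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

def local_times(session_utc, zones):
    """Return the wall-clock time of a proposed session in each IANA zone."""
    return {z: session_utc.astimezone(ZoneInfo(z)).strftime("%H:%M") for z in zones}

# Example: a session proposed for 15:00 UTC, checked against three zones.
proposed = datetime(2017, 3, 15, 15, 0, tzinfo=ZoneInfo("UTC"))
print(local_times(proposed, ["America/New_York", "Europe/Berlin", "Asia/Kolkata"]))
```

Because zoneinfo applies daylight-saving rules automatically, a slot that looks reasonable in your own zone can be flagged as too early or too late for a participant before you ever send the invitation.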
Build time into the study to compensate for any technical issues. I typically add 5 to 15 minutes to my study for technical considerations. In some cases that means I will have very little time for the actual study. Always end the study when you said you would. Participants have planned this time for you; you need to respect their schedule as well.
Send participants a calendar invitation with the pertinent details included in the invite, as well as an email with additional details and a link to the consent form. If possible, to further improve attendance, ask participants to communicate their plan to participate in the remote study (location, Internet access, admin rights, etc.).
Part of what makes usability studies so effective is the ability for teams to observe and learn firsthand about the issues inherent in the system. Observers who are collocated can be in the same room as the moderator, though background noise can be challenging to minimize. For new collocated observers, I typically recommend a separate room and that they stay on mute.
When the observers are themselves remote, I always ask them to mute themselves and to make sure that the conference-line software has muted the noise that accompanies people joining and leaving the meeting. I also ask the observers not to announce themselves. I refer to a "colleague who is listening," and I avoid enumerating how many people are watching or who is watching.
While there are ethical issues with this approach, in most cases introducing all the observers will make the participant nervous and eat up limited study time. As with other studies, we provide the observers with a guide and tips on note-taking, and then hold a debrief session with them. Frequently one or more observers will take notes in a shared online document.
It is not a question of if but rather when technology issues will arise. While I do ask participants to join the meeting early and/or download the software ahead of the session, it is the rare participant who does so. Trouble installing the minimal software necessary for an online meeting is typically the first issue.
Assisting participants is always a challenge, as their system setup is typically unique to them, making it difficult to troubleshoot problems. When screen-sharing software is used, we can attempt to troubleshoot by walking them through various tasks and have a chance of salvaging the session. However, in some cases they may not be able to activate the screen share, or there is none to use. In those situations, a tool like Copilot (https://www.copilot.com/) can enable a direct line into their computer to diagnose and address the issues.
Always have a backup plan to use another type of software. Use screen-sharing software that enables you to give participants control in case they cannot (or will not) share their screen. Skype and/or similar social media/screen-sharing solutions can be used as a last-minute backup. The key is to make sure it is something you are familiar with. For moderated studies, have a note taker ready to take screenshots if you are unable to record within the software.
Finally, please keep in mind that mistakes will be made. People will miss their sessions and technology will fail. Empathize with your participants, apologize, and try to salvage the session and/or reschedule when appropriate.
Special thanks to the UX Designers Pittsburgh MeetUp May 2016 workshop participants, who informed this work through their participation.
Carol Smith conducted her first remote usability studies in 2004. She has a master's degree in human-computer interaction from DePaul University and is an active UX community organizer. She is currently senior design manager for IBM Watson and lives in Pittsburgh, PA. firstname.lastname@example.org
Copyright held by author. Publication rights licensed to ACM.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.