People: the well-tempered practitioner

XIV.3 May + June 2007
Page: 48

Designing useful and usable questionnaires


Author:
Chauncey Wilson

Asking good questions and designing useful and usable questionnaires are core skills for usability practitioners. I often find myself disappointed by the poor quality of online, paper, and telephone questionnaires. Part of the problem may stem from a lack of training in questionnaire design—a complex topic—and part from the assumption that questionnaires are a quick-and-dirty data-collection method that can be thrown together. In reality, questionnaire design is a complicated process that involves many, often conflicting, considerations [1, 2]. Designing a solid questionnaire requires attention to many issues, including:

  • Clear objectives—A questionnaire designer must be clear on the purpose of the questionnaire as a whole and of each question in the questionnaire.
  • Persuasion—What will persuade people to answer the questionnaire carefully and completely?
  • Efficiency—What can the questionnaire designer do to improve the efficiency of the questions and questionnaire?
  • Clear wording of questions and responses—How will the language of the question and any response categories influence the results?
  • Question order—What is the impact of the order of the questions and the responses? What should one consider about the first question? Where should sensitive questions go?
  • Bias—What are the common biases in the design of questions, responses, and scales that can affect how you interpret the data?

This is a big topic, so in this column I will highlight some of the common faux pas in the design of questions and questionnaires and suggest some best practices.

Relate Each Question to a Business and User-Experience Goal. Karen Donoghue [3] suggested this idea in her book Built for Use: Driving Profitability Through the User Experience. As part of the question-definition process, try to connect each question to a business and user-experience goal. For example, a business goal to “get more people on our system” might have an associated user-experience goal of “high learnability,” which would support a rating question related to ease of learning. As part of your questionnaire design, you could create a matrix of questions, question types, and the associated business and user-experience goals.
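
To make the idea concrete, here is a minimal sketch of such a matrix in Python (the language choice, question text, and goals are all hypothetical; any tabular format would serve just as well):

    # Hypothetical traceability matrix linking each survey question to the
    # business and user-experience goals it is meant to inform.
    questions = [
        {
            "question": "How easy was it to complete your first task with Product X?",
            "type": "rating (1-7)",
            "business_goal": "Get more people on our system",
            "ux_goal": "High learnability",
        },
        {
            "question": "How often did you use Product X in the past month?",
            "type": "frequency (fixed categories)",
            "business_goal": "Increase repeat usage",
            "ux_goal": "High engagement",
        },
    ]

    # Any question that cannot be traced to both kinds of goal is a
    # candidate for rewriting or removal.
    for q in questions:
        if not (q["business_goal"] and q["ux_goal"]):
            print("No clear goal -- cut or rewrite:", q["question"])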

Make Your Questionnaire Persuasive and Trustworthy. Why do you fill out questionnaires? There are many reasons, including compulsion (you are required to fill out the corporate satisfaction survey or risk your job), potential future benefits (your ideas about ways to improve the product stand a better chance of being adopted), rewards (there is a raffle for an iPod), and altruism (you want to help the person or group soliciting your input). When designing a questionnaire, there are a number of things you can do to make it persuasive and trustworthy:

  • Personalize the questionnaire. Too many questionnaires have a brief introduction that identifies the owner of the questionnaire as “Customer Satisfaction Committee.” If you list a real person’s name, rather than a vague, impersonal “committee” or “team” designation, people are likely to view your questionnaire as a bit more trustworthy.
  • Indicate how the data will be used (for example, input into requirements for the next version), and also indicate that the group can do something with the results, like use them to prioritize what will be considered in future revisions (though don’t promise any specifics or your lawyers will pull you into their offices and punish you!).
  • Consider all the costs and benefits to respondents, and design the survey to minimize the costs and maximize the benefits for respondents. Costs and benefits can be a mix of tangible and intangible factors. Time to fill out a questionnaire, the usability of individual questions, the amount of personal information, the effort required to submit the questionnaire, and any potentially threatening or embarrassing questions can be considered costs. Benefits can include inclusion in a select group of people who are giving feedback, monetary benefits, access to the results of the questionnaire, and the simple reward of participating in an interesting survey.

Always Prepare a Data-Analysis Plan. A step often skipped in the design and implementation of a questionnaire is a detailed data-analysis plan. The plan should spell out how answers will be coded (for example, how you will handle nonresponses, unusual responses, or ratings on paper questionnaires where people circle two numbers when you want only one), what analyses you will perform on single questions and sets of questions, any hypotheses you have, and which questions you will use to test those hypotheses. It should also describe how you will analyze open-ended data (for example, will you use an affinity diagram to organize the data, or employ multiple raters and assess inter-rater reliability?) and list the descriptive and inferential statistics you plan to use.
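
As a sketch of what one slice of such a plan might look like, the fragment below defines a hypothetical coding scheme for problem responses and computes Cohen's kappa, one common measure of inter-rater reliability (the codes, categories, and rater data here are invented for illustration):

    from collections import Counter

    # Hypothetical codes for problem responses on a paper questionnaire.
    NONRESPONSE = "NR"      # question left blank
    MULTIPLE_MARKS = "MM"   # two numbers circled where one was expected
    OUT_OF_RANGE = "OOR"    # answer outside the printed scale

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters' codes, corrected for chance."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two raters assigning categories to the same five open-ended comments.
    rater_1 = ["navigation", "performance", "content", "navigation", "performance"]
    rater_2 = ["navigation", "performance", "navigation", "navigation", "performance"]
    print(round(cohens_kappa(rater_1, rater_2), 2))  # prints 0.67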

Revise and Pilot Test the Questionnaire with an Audience As Close to Your Real Audience As Possible. Pretesting questionnaires is essential for discovering flaws and usability issues in cover letters, in the questionnaire itself, and in the method of administration (for example, paper or online). The best way to pretest a questionnaire is to have potential respondents work through it individually, reading each question and its response categories aloud and choosing their answers. Encourage respondents to think aloud and comment on any aspect of the questionnaire, including unclear or ambiguous questions, the completeness and clarity of the response categories, biased questions, terminology, legibility (for example, is the text large enough for older respondents?), sentence structure, and potentially threatening questions. If you are administering the questionnaire online, also focus the pretest on the navigation and error-correction features of the survey tool. If necessary, conduct remote telephone interviews with your pilot participants to get feedback on how well they can understand and navigate the questionnaire.


Choose the First Question Wisely. Dillman [1] and Bailey [4] stress that the first question is the most crucial one and is likely to determine whether a person responds. Make the first question:

  • Easy to understand. Do not require complex instructions.
  • Easy to answer. Consider a factual question with limited response categories that everyone can answer.
  • Interesting and relevant. The respondent should feel that the first question is worth answering. An interesting first question can increase the reward value of your survey.
  • Clearly connected to the purpose of the questionnaire. Dillman [1] notes that many neophyte questionnaire designers want to start with a question about education, age, or job title because such questions are easy to understand and answer, but they may not be connected to the primary purpose of the survey (for example, understanding how the usability of a product affects respondents' productivity). In general, put demographic questions (other than screening questions) at the end of the questionnaire, since they hold little interest for respondents.
  • Nonthreatening. Don’t start out by asking a question that might threaten the respondent. Even a question as simple as “How often do you use Product X?” could be threatening if it is a product they are supposed to use every day but don’t. The degree of threat may depend on how clearly you explain how a person’s identifying information will be protected.

Be Careful About Two Questions Disguised As One. A common flaw in question design is to have two questions posing as a single question. Double questions are difficult to answer and can yield ambiguous data. One example of a double question: “How satisfied were you with the performance and usability of the XYZ website?” In this example, the performance of the site could have been great, but the usability poor, or vice versa. This type of mistake renders the data uninterpretable because it is not clear which “question” (performance or usability) the participant is answering. Split this question into one question about performance and another question about usability.

Fowler and Mangione [5] describe another category of double question, the “hidden question,” in which one question is implied by another. For example, “Who will you vote for in the next Usability Professionals’ Association (UPA) election?” combines the explicit question “Who will you vote for?” with the implied question “Will you vote in the next election?” (It also assumes that you are a voting member of UPA.)

Avoid Vague Response Quantifiers When More Precise Quantifiers Can Be Used. This problem is so common in question design that it scares me; I received a survey just today that was nearly identical to the “Poor” example below. When response categories are vague, respondents interpret them differently, and the data from the question become nearly impossible to interpret.

Poor: How often did you use Product X? (Check one answer)

  ☐ Never
  ☐ Rarely
  ☐ Occasionally
  ☐ Regularly

What does “occasionally” mean, and how does it differ from “regularly” or even “rarely”? This question is also missing the time frame the person should consider (a day, a week, or a month).

The “Better” example below provides a time frame that fits the question (“in the past month”) and more specific response categories. The time frame you use in questions calling for quantitative answers may require some pretesting or input from other data sources, and you may need to account for the limits of human memory and tailor the period appropriately.

Better: How often did you use Product X in the past month? (Check one answer)

  ☐ Not at all
  ☐ 1-3 times a month
  ☐ Once a week
  ☐ 2-4 times a week
  ☐ Once a day
  ☐ More than once a day
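
One payoff of precise categories is that the answers summarize directly into frequencies. A minimal sketch (with invented responses, purely for illustration) of how such data might be tallied:

    from collections import Counter

    # Fixed response categories from the "Better" question above.
    CATEGORIES = [
        "Not at all", "1-3 times a month", "Once a week",
        "2-4 times a week", "Once a day", "More than once a day",
    ]

    # Invented responses for illustration.
    responses = ["Once a week", "Once a day", "Once a week", "Not at all"]

    counts = Counter(responses)
    for category in CATEGORIES:
        print(f"{category:>20}: {counts[category]}")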

Summary

More than 75 percent of the questionnaires that I receive (as reviewer or respondent) have major design or usability flaws, which surprises me, since asking questions is so fundamental to our profession. I’ve listed just a few of the issues and problems that are common, and I encourage colleagues to apply the same UCD methods to the design of questionnaires that they do to the design of products and processes.

References

1. Dillman, D. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New York, NY: Wiley, 2000.

2. Robson, C. Real World Research. 2nd ed. Malden, MA: Blackwell Publishing, 2002.

3. Donoghue, K. Built for Use: Driving Profitability Through the User Experience. New York, NY: McGraw-Hill, 2002.

4. Bailey, K. D. Methods of Social Research. 4th ed. New York, NY: The Free Press, 1994.

5. Fowler, F. J., and Mangione, T. W. Standardized Survey Interviewing: Minimizing Interviewer-Related Error. Newbury Park, CA: Sage, 1990.

Author

Chauncey E. Wilson
chauncey.wilson@gmail.com

ABOUT THE AUTHOR

Chauncey Wilson is a usability manager at The MathWorks, instructor in the Human Factors and Information Design Program at Bentley College in Boston, and author of the forthcoming Handbook of Formal and Informal User-Centered Design Methods (Elsevier). Chauncey was the first full-time director of the Bentley College Design and Usability Testing Center and has spent over 25 years as a usability practitioner, development manager, and mentor. In his limited spare time, Chauncey hones his culinary skills as an amateur (but very serious) chef.

©2007 ACM  1072-5220/07/0500  $5.00
