People: the well-tempered practitioner

XIII.6 November + December 2006
Page: 46


Chauncey Wilson

Triangulation is an approach to data collection and analysis that uses multiple methods, measures, or approaches to look for convergence on product requirements or problem areas. While the term "triangulation" may not trip off the tongues of HCI practitioners, we often employ triangulation, implicitly or explicitly, to bolster our recommendations and be more persuasive to our colleagues. Consider how convincing you might be if the results you obtain independently from usability tests, field interviews, and customer support data all indicate similar problems. This convergence of results across different data collection methods can help you convince a team to focus on core problems—the things that tend to emerge across methods.

Triangulation can be used to determine core problems with a system or reduce the "inappropriate certainty" that sometimes comes when a single evaluation method or approach indicates that not much is wrong with a product. For example, if you run a single usability test and find that participants don't have a serious problem, your product team may feel so confident that they think they can forgo further usability work. However, if you use multiple methods, say a large-scale customer survey and face-to-face interviews in addition to a usability test, you might discover usability problems that were not evident in your usability test. Triangulating data from the test, survey, and interviews could help you convince the team that, contrary to what the usability test alone seemed to indicate, not all is right with the product (the inappropriate certainty to which I referred earlier).

Between-Methods and Within-Methods Triangulation

The phrase "multiple methods" appears in many discussions of triangulation, and it can be confusing: It can literally mean "different methods," or it can mean using different variations of the same method. Jick [5] and Kaulio and Karlsson [6] use the metaphor of analysis of variance (ANOVA) to describe between-methods triangulation and within-methods triangulation. As with ANOVAs, you can have a mixed triangulation approach that combines the between and within approaches. Here are some examples of this distinction:

Research Method Triangulation. Using explicitly different research methods like questionnaires, focus groups, informal testing, and event logging to understand the user experience [8]. This is an example of between-methods triangulation.

Facilitator Triangulation. Using different facilitators or evaluators with the same research method. For example, different facilitators with different styles might run two usability tests with the same tasks. You would be interested in seeing if the same problems occurred across facilitators. This is an example of within-method triangulation.

Observer Triangulation. Using different observers to record data from a given method. Here you are interested in how the observations converge or diverge. This is an example of within-method triangulation.

User Group Triangulation. Using multiple user groups with a single method. You might test novices and expert users. Triangulation could point out convergent problems that everyone has, as well as problems that are specific to each group. This is within-method triangulation.

Geographic Triangulation. Using participants from different locations and comparing results to see where there is convergence and divergence that can be attributed to location. I consulted on a project where there was a strong belief that East Coast users were quite different from West Coast users; as it turned out, they were mostly the same, but several differences did emerge. This is within-method triangulation.

Qualitative-Quantitative Triangulation. Combining qualitative and quantitative approaches. This type of triangulation can occur in a single usability test where you have quantitative data (time and errors) and qualitative data (think-aloud verbalizations) [3]. Another example would be the use of usability testing to explain data from server-side or client-side event logging. Logging data provide "how" information, and usability testing data can provide "why" information. Triangulating these two approaches can help you understand what may be causing problems for users.
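As a minimal sketch of this qualitative-quantitative pairing, the following Python joins event-log counts (the "how") with usability-test notes (the "why") to rank and explain problem areas. All feature names, counts, and observations here are invented for illustration, not drawn from any real study:

```python
# Hypothetical sketch: triangulate quantitative event-log counts with
# qualitative usability-test observations. All data below are invented.

error_counts = {            # from server-side event logging ("how often?")
    "save_dialog": 412,
    "search": 37,
}
test_observations = {       # from think-aloud usability sessions ("why?")
    "save_dialog": "Participants expected Ctrl+S to work; the dialog steals focus.",
    "search": "Two participants missed the advanced-search link.",
}

# Rank problem areas by logged frequency, then attach the qualitative explanation.
ranked = sorted(error_counts, key=error_counts.get, reverse=True)
for feature in ranked:
    note = test_observations.get(feature, "(no qualitative data yet)")
    print(f"{feature}: {error_counts[feature]} logged errors -> {note}")
```

Even this toy pairing shows the idea: Neither source alone tells you both that the save dialog is the biggest problem and why it is a problem.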

Using Triangulation for Prioritizing Product Requirements

Triangulation can be especially useful early in product development for rough prioritizing of requirements [6]. In product development, there are many sources of requirements including:

  • Various internal lists of use cases
  • Features that had been deferred earlier and tracked in a bug system
  • Usability evaluation reports
  • Tech-support databases
  • Site visits

Requirements triangulation involves looking across all these methods and sources of data for common patterns that you can use to decide what requirements are critical for success. Each method will elicit requirements at different levels with different users in different environments and the assumption behind triangulation is that the core requirements will tend to show up across methods, users, and environments. You can create a triangulation matrix that lists issues, problems, or requirements on one dimension and the various data collection methods on another dimension.

Table 1 shows a simple example of a triangulation matrix where requirements were obtained from multiple methods.

This simple example indicates only whether a requirement was mentioned at least once for each method; in reality, you might indicate how many times the same request came up from different users within each method. From this simple example, you can see that Requirements 2 and 6 emerged from four different methods, which might indicate that they are core requirements. The matrix could instead be based on a single method but involve multiple user groups (or industries, or countries), which would show you what requirements are important to all users versus those that are important for only a subset of users. The triangulation matrix can be a persuasive document for those difficult meetings where you are trying to decide which product requirements give you the most business value.
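The convergence counting behind such a matrix is easy to sketch in code. The following Python builds a small requirements-by-methods matrix and flags requirements that emerged from every method; the requirement names, methods, and "mentioned at least once" data are all invented for illustration, not taken from Table 1:

```python
# A sketch of a triangulation matrix: requirements x data-collection methods.
# All requirements, methods, and mention data below are hypothetical.

methods = ["Usability test", "Survey", "Site visits", "Support database"]

# Which methods surfaced each requirement at least once.
mentions = {
    "Req 1: faster search": {"Usability test"},
    "Req 2: undo support":  {"Usability test", "Survey", "Site visits", "Support database"},
    "Req 3: export data":   {"Survey", "Support database"},
    "Req 6: offline mode":  {"Usability test", "Survey", "Site visits", "Support database"},
}

# Print the matrix: an X marks a requirement mentioned by that method.
print(f"{'Requirement':<24}" + "".join(f"{m:<18}" for m in methods))
for req, found_in in mentions.items():
    cells = "".join(("X" if m in found_in else "-").ljust(18) for m in methods)
    print(f"{req:<24}{cells}")

# Requirements that emerged from every method are candidate core requirements.
core = [req for req, found_in in mentions.items() if len(found_in) == len(methods)]
```

Replacing the sets of method names with per-method mention counts would give the richer matrix described above, where convergence is weighted by how often a request recurred within each method.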

Best Practices

My discussion of using multiple methods might frighten some already overburdened HCI practitioners, but with some sleuthing, I've found that there are often many sources of data already available that can be triangulated. You might have trip reports, bugs, articles in PC magazines, user blogs, or user-group discussions that highlight product issues. You could pull the issues together into a matrix similar to Table 1 and look for places where the data converge. I think we sometimes seek out new data when substantial data are already lurking in the labyrinth of corporate archives.

Take a neutral view of methods [6]. Don't favor one method over all others and engage in "methodolatry" [4], the practice of viewing one method as the definitive way to answer usability questions. Methodolatry is detrimental to our field.

Choose sources of triangulation data that have different biases and complementary strengths [7]. For example, you might want to consider both lab and field studies. The use of these two methods is likely to reveal some core problems (convergence), as well as problems that are unique to each test setting (divergence).

Create a triangulation plan for each phase of development that lists methods and data sources. For example, during the requirements phase, you might conduct site visits with customers from different market segments, review a database, conduct some paper prototype sessions, and do some online surveys. Your choice of these multiple methods will increase the odds that you are capturing core requirements.

Become a data aggregator. One important role that an HCI practitioner can play in development environments is that of data aggregator—the person or team who pulls together data from different sources and looks for patterns across all the sources. This role fits right into the triangulation process. The problem with this role is that it can be politically sensitive since data can be power for some individuals. I’ve worked at some companies where everyone is delighted to have a person willing to pull data together; I’ve also worked in environments where getting access to data required extreme political savvy.

The next time you start a project, consider triangulation when you are planning how you will work with groups. Applying concepts of triangulation can bolster the credibility of your recommendations, highlight core versus niche issues, and overcome the limitations of a single method.


1. Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

2. Domagk, S., Hessel, S. & Niegemann, H. M. (2004). How Do You Get the Information You Need? Triangulation in Usability-Testing: Two Explorative Studies.

3. Dumas, J. S., & Redish, J. C. (1993). A practical guide to usability testing. Norwood, NJ: Ablex Publishing Corporation.

4. Janesick, V. (2000). The Choreography of Qualitative Research Design: Minuets, Improvisations and Crystallization. In Norman K. Denzin and Y. S. Lincoln (eds). Handbook of Qualitative Research, 2nd edn, pp. 379-99. Thousand Oaks, CA: Sage.

5. Jick, T. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24, 602-611.

6. Kaulio, M. A., & Karlsson, I. C. M. (1998). Triangulation strategies in user requirements investigations: A case study on the development of an IT-mediated service. Behaviour & Information Technology, 17(2), 103-112.

7. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.

8. Sullivan, P. (1991). Multiple methods and the usability of interface prototypes: The complementarity of laboratory observation and focus groups. In Proceedings of the 9th Annual International Conference on Systems Documentation (SIGDOC '91), Chicago, Illinois. New York: ACM Press, 106-112.

9. Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1966). Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally.



About the Author:

Chauncey Wilson is a usability manager at The MathWorks, instructor in the Human Factors and Information Design Program at Bentley College in Boston, and author of the forthcoming Handbook of Formal and Informal User-Centered Design Methods (Elsevier). Chauncey was the first full-time director of the Bentley College Design and Usability Testing Center and has spent over 25 years as a usability practitioner, development manager, and mentor. In his limited spare time, Chauncey hones his culinary skills as an amateur (but very serious) chef.


Table 1. Triangulation matrix based on four different data collection methods

Sidebar: Origins of "Triangulation"

For me, the concept of triangulation is loosely based on a classic 1959 article titled "Convergent and discriminant validation by the multitrait-multimethod matrix" [1]. The article described a technique for assessing convergent validity of psychological constructs like self-esteem and locus of control. Miles and Huberman [7] suggest that the term "triangulation" became part of social science jargon with the publication of Unobtrusive Measures [9], an entertaining book about how various nontraditional measures, like nose prints on glass or the amount of wear on floor tiles, can reveal clues about the behaviors of groups that might be hard to obtain with obtrusive measures like surveys. In the context of unobtrusive measures, triangulation involved the use of multiple rough measures of a particular variable, like public interest in museum displays, to corroborate particular research hypotheses.—CW

©2006 ACM  1072-5220/06/1100  $5.00


