We’ve all seen them. Colleagues taking turns tapping Post-its of different colors onto the walls of meeting rooms, standing back pensively, then grouping and regrouping them in seemingly arbitrary ways. Over time, those Post-its evolve into tidy little rows and broad columns. Later, they take on a more even form, as neatly drawn lines and block-type headings set them off into groups. Sometimes they even incorporate drawings and icons. They invite us in because they are so easily comprehensible in telling the user’s story; yet, on closer inspection, they become overwhelming in detail. Customer journey maps (CJMs) are fantastically useful tools, but they often fail to live up to their potential.
In this article I will discuss why CJMs are often frustratingly limited in their usefulness. There’s a long continuum in the quality and depth of CJMs. I’m going to show why taking shortcuts can lead to many risks and issues in the processes, output, and follow-up to CJMs. I will also explain how to overcome these limitations to ensure that CJMs are as powerful a tool as they should be.
Across companies throughout the world, you can find wall-size posters hanging in conference rooms, design spaces, and user labs. Many teams and functions, from marketing, sales, and product to, of course, customer experience and user experience, have embraced CJMs as part of their core capabilities and key deliverables. They proliferate in companies because they—or at least their output—are so comprehensible. They tell the story of your products and services through your customers’ eyes.
Ideally, CJMs involve multiple components: 1) a methodology and a process for collecting and analyzing data across customer touch points with a company’s products and services, 2) an artifact (often wall-size posters) to help drive a common view and shared discussion among different stakeholders responsible for those touch points, and 3) a set of insights that lead to a plan for product and service enhancements (e.g., a customer service blueprint).
From my experience, CJMs often fail to live up to expectations in the following interdependent ways:
- They are not based on sufficient data and/or there is a lack of rigor in the data collection or the analysis and synthesis of the collected data.
- There is no consistent definition of what they are in terms of vocabulary and content.
- They do not involve the stakeholders responsible for the different CJM touch points.
- The implications or actions are either unachievable or overly focused on the short term.
- The implications are not validated with customers and the fixes are not monitored over time to ensure against unintended negative effects.
Data collection. CJMs require data. The more complicated the customer’s journey, the more data must be collected. Touch points, channels, and back-office processes all add complexity and require qualitative and quantitative inputs to understand them. We’ve seen too many CJMs that are based on a few limited data points—customer care logs, a few satisfaction surveys, analytics on website traffic. Sometimes a team will also collect qualitative data by interviewing a handful of users or internal stakeholders, conducting some guerilla testing, or running a focus group or two. All of these qualitative and quantitative inputs can be valuable, but they can’t be applied reflexively. Careful planning is necessary to determine which methodologies make sense for a given context.
The point isn’t to imply that every CJM requires weeks or months of research using X or Y methodology. The point is to state unequivocally that unless you have training in using different quantitative and qualitative methods, you risk introducing tremendous bias into your CJM work. Trained researchers have experience in defining the appropriate methodologies and numbers of data points for an effective research plan.
Another challenge comes with the task of analyzing and synthesizing the collected data. This requires expertise in guiding multiple stakeholders through explorations of the data and revealing the key findings at each level. It is a tremendous waste to collect a rich set of data but then take shortcuts when analyzing it, only to rush through a few implications that are the low-hanging fruit. Good CJMs should result in short-term implications and opportunities for fixes to small isolated problems, but also medium- and longer-term fixes to bigger problems by connecting the dots between customer pain points. These medium- and longer-term fixes often lead to a much bigger ROI. It’s the difference between using a quick fix to shave a few tenths of a percentage point off the churn rate and investing in a more impactful fix that helps to win and keep loyal customers and grow market share.
CJM definitions. CJMs include different horizontal tracks that capture and describe different aspects of the customers’ interactions, generally related to what a customer is doing, thinking, or feeling for a given touch point in the journey. Some CJMs embed customer quotes, the general customer expectation at a particular step, the duration of the interaction with the touch point, and so on. I’m not a purist in thinking there is a single correct CJM template with a set of predefined horizontal tracks. There are plenty of useful definitions; what matters is that CJMs be adapted to the specific business context, project goals, and stakeholder expectations.
However, we do insist on a common definition when there are multiple teams or third-party consultants conducting their CJM work at different times and for different products and services. I’ve seen CJMs with dramatically different forms, using different vocabulary and visuals and wildly different channels, even within the same department. This Tower of Babel makes it very difficult for teams to share and reuse insights across project boundaries. It makes it impossible to see how customer journeys evolve over time, and it squanders a rich source of corporate memory.
To solve this, a company should agree on a general process and methodology for every CJM. Among other details, this implies specifying best practices for defining the CJM research plan, how to describe the output, the required level of stakeholder participation, and how to follow up. While this process should not try to prescribe every detail, every new CJM project team should have clear expectations about the level of commitment, the implied process, and the quality of output when they follow the process. Having this shared understanding of the CJM project helps to align expectations and resources.
Multidisciplinary participation. Another common pitfall with CJMs is that the process fails to involve the stakeholders responsible for or supporting major touch points. Obviously, since every product and service needs customer touch points to create awareness and consideration (in the context of competitors), marketing and sales should be represented among the project stakeholders. The stakeholders should also include product management and product engineering, someone from information technology responsible for the back office, customer support, and of course user experience. While the exact profiles required depend on the context of the journeys being captured, the entire CJM exercise requires broad continuous participation in data analysis and synthesis workshops, and, where appropriate, in data-collection activities.
There are two main reasons why this multidisciplinary participation is critical. First, without broad involvement, the understanding of the pain points and the possible solutions becomes superficial and often fails to identify opportunities to make a deep impact across multiple pain points and touch points. A multidisciplinary team enables its members to connect the dots across the journey.
Take, for example, customer support. Call logs or analysis of customer support incidents might reveal a certain set of pain points and possible solutions. A team involved in the CJM might identify weak home Wi-Fi signals as a frequent pain point. The CJM team, without customer support team participation, might make certain recommendations, such as training call center staff to better guide home users through reconfiguring the router. On the other hand, the involvement of customer support representatives in the CJM process could provide valuable context (the team’s incentives, technical training, script constraints) that would cast light on the feasibility of such solutions. Moreover, the customer support team’s input could provide valuable insights that might in fact identify a simpler fix with a higher return on investment further upstream in the customer journey (e.g., in the initial set-up process, or self-help troubleshooting materials).
The second reason for multidisciplinary involvement is that to be most useful, CJMs require ownership across stakeholders. Too many stakeholders expect to be told at the end of a CJM project the implications on their areas (“If there are issues in my area, I hope they’re not too expensive to fix!”). It’s long been known that the likelihood of a team buying into a change is dramatically increased when they understand the rationale for that proposed change. Nothing increases understanding of the rationale better than connecting with customers and other CJM stakeholders during the data-collection process and the subsequent analysis and synthesis of that data. And the best way to ensure this is to make sure there is multidisciplinary participation.
The need for multidisciplinary participation implies another need that people in CJM projects often overlook. Coordinating the input from disparate profiles requires someone with experience to prepare for, facilitate, and synthesize output from workshops, analysis, and other activities. In this sense, every CJM project needs at least one person who enables each of these disparate profiles to provide context to and shape the customer journey. Coordination and facilitation are the glue that connects the contexts between different CJM steps and stakeholders. To succeed, CJM project teams must involve someone with these skills.
Addressing customer pain points. CJMs should always reveal opportunities to fix the pain points identified in the process. Often, however, CJMs yield results that are too narrow in scope to effect significant improvements to the customer journey. For example, imagine a pain point revealed by the CJM process where online shoppers were confused—because of the vocabulary used, the presentation, etc.—by the ecommerce store choices. A narrow fix might redefine the visual presentation and descriptions of those product choices in the ecommerce store. This might even shave a few tenths of a percentage point off the ecommerce site’s abandoned carts.
Validation and monitoring. The final challenge with many CJM projects involves the delivery of solutions to solve different customer pain points. Far too often, CJM projects generate fixes without involving the company’s actual customers in validating those fixes. Validation is crucial: As all CJM projects reveal, customer journeys are full of interdependencies. Any given step in the customer journey is linked to the steps before and after it in terms of client satisfaction, understanding, and expectations. For this reason, implementing a solution to a pain point can often backfire, leading to cascading effects in other parts of the customer journey.
To address this, CJM recommendations and solutions should be coupled with plans for validating and monitoring them over time to ensure there are no hidden second-order effects, where a pain point solved in one area leads to new pains in another. Just as an agile product team should demand validation as part of releasing product improvements, fixes to the customer journey must identify the relevant hypotheses and steps necessary to validate that the fixes are in fact making the desired improvements without detracting from other customer touch points in other parts of the journey.
For example, let’s say a CJM reveals that a certain profile of customers struggled with the vocabulary describing the value and use of the key products and services the company offers. Connecting multiple pain points and developing a coordinated solution, the team plans a project to introduce a new set of vocabulary across marketing, product, and support materials. The CJM team should define clear hypotheses to test (e.g., “Each touch point will see a 25 percent increase in response to a Likert-scale survey of understandability”) as well as the logistics to actually collect the data.
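To make such a hypothesis concrete, here is a minimal sketch, in Python, of how a team might check the claimed lift per touch point. All survey data, the 25 percent target, and the function name are illustrative assumptions, not part of any standard CJM toolkit; a real analysis would also check statistical significance and sample sizes.

```python
# Hypothetical check of the "25 percent increase in understandability"
# hypothesis, using before/after Likert-scale responses (1-5) for one
# touch point. Data and threshold are illustrative only.
from statistics import mean


def hypothesis_met(before, after, target_lift=0.25):
    """Return (lift, met): relative change in mean score and whether
    it reaches the target lift defined in the hypothesis."""
    lift = (mean(after) - mean(before)) / mean(before)
    return lift, lift >= target_lift


# Simulated survey responses gathered before and after the vocabulary fix
before = [2, 3, 2, 3, 3, 2, 4, 2]  # mean 2.625
after = [3, 4, 3, 4, 4, 3, 4, 3]   # mean 3.5
lift, met = hypothesis_met(before, after)
print(f"lift = {lift:.0%}, hypothesis met: {met}")  # lift = 33%, hypothesis met: True
```

Running the same check per touch point, over successive survey waves, is one way to turn a CJM recommendation into a monitored, falsifiable claim rather than a one-off deliverable.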
To borrow a cliché, “Customer journey maps are like pizza—even when they’re bad, they’re still pretty good.” Every activity that brings teams closer to customers and data about their activities, satisfaction levels, and mental models about products and services moves a company in the right direction. In general, while they introduce risk and limitations in the output and follow-up activities, even shortcuts to the CJM process can help improve products in small ways.
However, such shortcuts make CJMs much less effective than they should or could be. The entire rationale for starting a CJM project is to improve the overall customer experience, not to resolve a handful of pain points. Making the right level of investment in the team, the process, and the validation can mean the difference between saving a few customers from churn or nudging sales up by a tenth of a percent, and instead creating a loyal and profitable customer base. In today’s hypercompetitive world, this is the goal all of our customer activities should strive for.
Michael Thompson has spent 25 years solving customer problems by leading teams in UX design, product management, and marketing. The companies he’s worked for include Apple, several startups, an Environmental NGO, BusinessObjects, and SAP. He’s lived in Europe for more than 20 years and currently leads Telefonica’s Global UX discipline. email@example.com
©2017 ACM 1072-5520/17/01 $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.