Having recently made the transition from a 30-year career leading industry UX teams to becoming a UX strategy consultant and professor, I have a natural opportunity to refine my perspective on how companies should best utilize UX data to inform business-level design investments in products and services. This forum provides an opportune time and place to reflect on this "context of practice" topic regarding the adequacy of our professional approaches to collecting and communicating critical information.
By UX data, I mean in the broadest possible sense all forms of qualitative and quantitative information regarding user experience, collected irrespective of methodology. This includes data from classic usability lab sessions, field studies, real-time event logging, and eye-tracking, as well as information collected from Web-feedback tools such as OpinionLab and UserZoom, click-stream analytics, formal BI tools, conventional surveys, and focus groups. Even some of the latest cutting-edge operational intelligence tools, such as Splunk, can generate relevant information for understanding user patterns of navigation and action.
I spent the last 17 years of my corporate career at the VP and senior VP level of two of the world's largest software companies, SAP and Oracle. This required a significant amount of time in the boardroom attempting to drive the corporate UX agenda forward. When not presenting the latest product design or the next-generation look-and-feel standards, I spent a significant portion of this boardroom time communicating the current state of UX quality as reflected through the aggregate of available data sources. Investment options and related user metrics that would indicate when commercially significant UX improvements actually had been achieved were also a common thread of discussion. These latter topics require a reasonable degree of UX metrics literacy among executive participants. Unfortunately, boardroom UX literacy does not develop by itself. It is the role of UX leaders to create an environment in which it can develop within their companies' leadership teams and to provide meaningful data to which it can be applied.
Today, in a consulting capacity through my new venture rCDO UX LLC (rCDOUX.com), I work with CEOs, chief product officers, chief engineers, and the leaders of in-house corporate UX teams to improve both their products and their design practices. These client companies create products and services in the B2B, B2C, and medical-software domains. They range in size from a 20-person startup to a multibillion-dollar technical giant with more than 5,000 employees in its R&D division alone.
The CEOs of these companies and my former employers, along with their executive staff, all share a common goal: They want their products and services to embody world-class user experience. This desire is not altruistic. Nor is it just lip service to a passing trend, or an emotional reaction to the death of Steve Jobs. They must do this to survive in today's highly consumer-oriented marketplace, and they know it.
And, for the most part, they all share a common problem. The usability data and evaluation information available to help CEOs guide investment decisions related to product or service design, if they exist at all, are frequently inadequate to be meaningfully applied.
This leads to three interrelated questions that we should reflect on:
- Why are C-level executives chronically underserved by user-research data?
- How has this shaped their impression of the value of usability professionals?
- What are the side effects of these executive impressions and frustrations on our professional effectiveness as practitioners, as well as on our corporate career growth options?
What C-level executives need to know is actually not complicated in the abstract. First, whatever the issue being considered, they need to know if things are getting better or worse, plus where they rank relative to the competition. Their need to quantify trends is simple to understand and easily satisfied with common business measures such as sales, revenue, and profit. It can also be satisfied by some common market-research measures. For example, even when there is questionable confidence in the absolute numeric values produced by marketing surveys, such as those that use the ubiquitous Net Promoter Score methodology, the CEO can still make a before-and-after judgment based solely on the direction of change in the data from quarter to quarter.
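The quarter-over-quarter judgment described above rests on very simple arithmetic. As a minimal sketch, using the standard Net Promoter Score formula (percentage of promoters minus percentage of detractors) and invented survey responses purely for illustration:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical 0-10 likelihood-to-recommend responses from two quarters.
q1 = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
q2 = [10, 9, 9, 8, 7, 9, 10, 6, 9, 9]

# Even if the absolute values are suspect, the direction of change is usable.
trend = "improving" if nps(q2) > nps(q1) else "declining or flat"
```

The point matches the article's: an executive can act on the sign of the change even while distrusting the absolute numbers.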
In this context, the typical usability-related questions a CEO would ask would resemble the following:
- Is product A's user experience improving or declining with this release?
- How does it compare with other similar products we sell?
- Does it cause usability problems if packaged in a suite of applications?
- How does it compare with the competition?
However, giving executives a longitudinal view of a trend requires consistency and repeatability in the measures collected. Some companies with very mature usability programs have continuous benchmarking programs that provide longitudinally valid information. This is an emerging trend, but it is far from standard practice outside of companies with a very quantitative culture. In those companies that do track usability longitudinally, it is just one of many benchmark indicators; it is rare for usability to be the sole focus of a corporate benchmarking program. And in most companies, UX practitioners are failing to provide this kind of information at all, either longitudinally or across product lines. This leads to what I call the CEO credibility gap. I have observed that over time this gap has limited both the influence of UX departments and the career growth of many UX leaders. It is something I have had to wrestle with throughout my own career.
Interestingly, one would think the leader of a UX team would need the same kind of information in order to manage the sustainability of his or her own department. If you can't show repeated value, it is difficult to secure next year's budget or have a basis from which to argue for expansion. However, I have not encountered many UX leaders who consider themselves primary consumers of the usability data acquired under their budgets, nor who insist that it be collected in a fashion that makes it valuable in the boardroom. More typically, UX management considers software engineers or lower-level product management (not even their bosses) to be the primary customer of user-research results. In this case, the focus is typically not on the results themselves, but on the recommendations for UI design changes that follow from the data's synthesis. This is understandable, of course, because the company's goal is to build good products, not to sell user-research data. Yet the incremental cost of collecting more complete data that can bridge the CEO gap is marginal.
Placing the sole focus on the software engineer as the usability-data consumer leads to each evaluation being treated as a one-off exercise. In my opinion, it is also one of several outcomes resulting from the overuse of quick-and-dirty and often experimentally questionable "discount" usability methods for both lab and field work. Unfortunately, these discount methods have one thing in common: They are not repeatable in a fashion that generates longitudinally valid or useful data.
I would suggest that the root cause of CEOs remaining underserved by the typical usability data available to them is a continued lack of business-leadership focus and business-practice understanding within the UX community. This is not a new observation; it first surfaced as a larger theme in the May 2007 issue of interactions on UX business leadership, which I had the privilege to organize and guest edit.
If, as noted, the incremental cost of collecting more complete data is marginal, why don't we do it more often? It is easy to recognize that this is a chicken-and-egg problem, because if you don't have data that is relevant to business strategy and investment decisions, you are not likely to be invited into the boardroom in the first place. My experience is that if you have meaningful, business-relevant data to share, you can eventually work your way up the corporate management chain and get access to the CEO to share it, because everyone is struggling to gain an advantage through additional insight, regardless of corporate scale or type of industry. However, you will not get invited back on a regular basis to share usability and evaluation data unless it meets that C-level requirement for longitudinal validity.
One successful example from my tenure at Oracle is worth sharing. It demonstrates both the way data can be harnessed to motivate investment, and, incidentally, a clever interpretation of UX metrics by Oracle's famous CEO and founder, Larry Ellison.
In the late 1990s, Oracle had an active and highly quantitative usability measurement program in place that was executing between 80 and 100 user research evaluations per year. After some initial harsh customer and press feedback on a new CRM application, I was asked to present any usability information we had prior to product launch. This, of course, put the GM of this division under the microscope, because the data was not flattering. However, the boardroom discussion quickly turned to how this product compared with other products in the company portfolio. Fortunately, we had two years of consistent data spanning most product lines and ranking them on average task completion, error rates, and satisfaction. The data was in what today would be recognized as a precursor to the ANSI/NIST Common Industry Format because the architect of Oracle's usability measurement program, Anna Wichansky, was one of the original leaders of the NIST CIF effort. This international standardization effort for usability data was completed several years after the incident I am describing.
In addition to success and time-on-task data, several approaches to measuring satisfaction were used, which allowed for comparison and trend tracking. One of them was SUMMI, a favorite in the boardroom because of its ability to reference an anonymous industry average. SUMMI is an extensive questionnaire-based approach used in the 1990s that yielded a score between one and 50, where 50 was the best quality value.
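The kind of cross-product ranking described above can be sketched as a simple aggregation of CIF-style per-task results. The product names, session data, and the two-key ranking rule below are all invented for illustration; the real Oracle program tracked far richer data.

```python
from statistics import mean

# Hypothetical lab sessions per product: (task completed?, error count,
# satisfaction score on a 1-50 scale, as with SUMMI).
results = {
    "CRM":        [(True, 2, 28), (False, 4, 22), (True, 1, 30)],
    "Financials": [(True, 0, 41), (True, 1, 38), (True, 2, 40)],
}

def summarize(sessions):
    """Reduce raw sessions to CIF-style summary measures."""
    return {
        "completion_%": 100.0 * sum(ok for ok, _, _ in sessions) / len(sessions),
        "mean_errors": mean(e for _, e, _ in sessions),
        "mean_satisfaction": mean(s for _, _, s in sessions),
    }

# Rank products best-first: completion rate, then satisfaction as tiebreaker.
ranking = sorted(
    results,
    key=lambda p: (summarize(results[p])["completion_%"],
                   summarize(results[p])["mean_satisfaction"]),
    reverse=True,
)
```

Because the same summary measures are computed the same way for every product and every release, the output supports exactly the longitudinal and cross-portfolio comparisons a boardroom audience asks for.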
For the following two years, we reported in person in the boardroom each week's usability lab data. Larry often demanded retests of products before each release as a method to drive usability decisions deep into the planning process. He also turned this data into an internal competitive motivator for direct reports and frequently threatened to base their bonuses on it. (I don't know if he actually ever followed through.)
What was most interesting was how SUMMI was internalized intellectually in the organization. In the second year of these weekly boardroom usability evaluation reviews, Charles Phillips joined the company as co-president and attended his first review session. He was not familiar with usability metrics at all, and I was not expecting him to be in the meeting. Generally, it is best to start someone's introduction to usability with a lab tour, not in a high-stakes board meeting. As I mentally prepared to give an impromptu (short) metrics-definition lecture with the CEO and a room full of development EVPs, Larry interrupted me and explained all the metrics, one by one, himself. Having a background in mathematics, he not only understood them but had also internalized their relevance to the business and the models behind them. When he began to explain SUMMI, he looked at Charles and said, "This one is easy: Just double the number and think of it as the IQ of the product. It has to be near 100, because we are not going to ship any dumb products. Doing so is detrimental to our brand."
While this particular incident offers a glimpse of hope, what happens when the UX leader does not have an ongoing usability and evaluation program that meets the CEO litmus test? The answers to questions 2 and 3 raised earlier are not pretty. When the information needs of executives are neglected, usability often comes to be viewed, from their perspective, as a soft, non-scientific data source loosely equivalent to the anecdotal viewpoints executives trade every day as fact; they are, by nature, comfortable making important business decisions with some degree of ambiguity.
In the end, this trickles down to limit the business effectiveness of both the UX team and its leadership. As we look in the historical mirror of our evaluation practice, we need to remain cognizant of this CEO gap and make an effort to close it for the long-term benefit of both our profession and our companies' shareholders.
The potential upside if the gap can be closed is huge. Conversely, the potential downside is never reaching the professional status of more mature disciplines in the business world, such as marketing and engineering, whose practitioners bring their own supporting evaluation data and metrics into the boardroom to do battle for their point of view every day.
Dan Rosenberg recently founded rCDO UX LLC, a UX strategy firm serving C-level executives and industry UX leaders in defining competitive design strategies and executing them. He led global UX design at SAP, Oracle, Borland, and Ashton-Tate over the previous decades and has authored many well-known publications in the HCI field.
©2013 ACM 1072-5220/13/03 $15.00