Cover story

XX.5 September + October 2013

Fine-tuning user research to drive innovation


Authors:
David Siegel, Alex Sorin, Michael Thompson, Susan Dray


User research that attempts to discover market-changing innovations faces many challenges. The more ambitious the innovation goal, the more difficult it can be to decide whom to study, what to look for, and how to make sense of the findings. Our reflections here are based on our experience collaborating on an ambitious project, in which we conducted in-depth contextual research with 54 people in eight enterprises. Its mission was to generate concepts for innovative solutions that would engage a large, new audience whose needs were not being addressed by existing products. In many respects, this was a dream project for researchers who wanted to introduce user-centered design into the product development process as early as possible. However, in planning the research we had to confront particular dilemmas, stemming from the combined goals of innovation and major market expansion, which we suspected were generalizable to projects with similar goals. We call this work market-changing innovation research (MCIR). Here we discuss two dilemmas that confront this type of work. Then we turn to four research challenges these dilemmas give rise to, discuss the limitations of common research practices in dealing with them, and describe our own approach. We also describe some of our findings to give an idea of what our approach yielded.

Our project's goal was to make business intelligence (BI) information more available, relevant, and useful to a large population of so-called casual users. The focus was on usage of quantitative information, typically but not necessarily created by others using specialized BI tools. For those not familiar with the term, BI is any information captured in the course of business operations and made available to support business decision-making. BI can also include externally generated information such as partner-generated data and competitive intelligence.

Unlike BI users who are actively engaged with quantitative data as a primary part of their role, casual users were assumed to use BI in a limited way for decision-making, meaning they consume it only occasionally or with little variety or depth of exploration. For example, they would use BI only in reports defined and structured by others, without examining it on their own from different angles.

A number of things supported the belief that there was a significant population of such users who were eager to do more with BI. Anecdotes about people's frustration when trying to access BI and complaints about problems with integrating data across systems and organizational silos were common. In addition, recent reports by McKinsey [1] and Forrester [2] claimed that demand for and adoption of BI were rapidly spreading beyond the ranks of professional BI analysts.

Furthermore, longstanding advocacy of data-based decision-making practices seems to call for richer use of BI. For example, consider deciding which products to keep in a product catalog and which to discontinue, a decision always affected by strong vested interests and intangibles. BI should enable a company to use its own data to examine a product's sales trends in markets segmented flexibly on many dimensions, to examine its statistical relationships with other products in the catalog, and to bring some quantitative rigor to a risk/benefit analysis. Figure 1 shows the working assumptions about the structure of the hypothesized audience.

[Figure 1. Working assumptions of the opportunity to address the casual user segment.]

Dilemmas of MCIR

Underlying the research challenges in MCIR are two basic dilemmas. The first affects decisions about how to target the research to optimize your "bets" about where the richest clues for major innovation opportunities lie. The second explains why it is particularly difficult to turn up these clues in relation to workplace tools and systems.

The circle of unknowns. In MCIR, you inevitably assume the existence of an audience experiencing unmet, addressable, and often latent problems, as well as the technological possibility of addressing them. This requires finding the intersection of an audience definition, a set of needs, and a product concept when, at the beginning, you may have only a vague idea of where this intersection lies. However, because you cannot possibly study every permutation of these interlocking unknowns, you must make some decisions about where to focus. Complicating things further, any decision about one factor will influence the others, potentially biasing the research in ways you may not recognize. If any one of these factors could be taken as a given—for example, a given attribute of the audience, a known "pain point," or an identified technology you are looking to apply—the one area of certainty could help narrow the remaining unknowns in your research plans. Thus, while addressing an untapped market potentially has huge payoffs, the challenge is much greater than when doing research aimed at finding an innovative solution for your existing users.

Co-evolution of human behavior and technology. Another thing that complicates MCIR is the fact that people's work practices and the tools they use to carry them out have co-evolved. This can create a conservative bias. When you study people using their habitual tools for their habitual practices, it often seems that parts of the work context fit together in ways that seriously constrain the possibilities for change. Similarly, if you already have a highly innovative concept and you go into field research trying to validate it, you have a high probability of finding that it does not appear to "fit" because work practices are adapted to the previous tool or process. Certainly, people can point you to annoyances and inefficiencies; however, paradoxically, the large problems built into the structure of processes and systems may not be perceived by people whose job is defined around them.

If research is too biased in the conservative direction, it's natural to think the solution is more design creativity and imagination. This is part of the answer, of course. On the other hand, imagination can easily become self-deception. Our ability to imagine future users happily adopting our products is not a very sound basis for product development decisions, although too often it seems to be used that way.

Research Challenges

Despite these dilemmas, we know that change does happen. Innovative products do take root, and when they do, they drive change in the ecosystem that surrounds them. When we see examples of successful innovation, we can always retrospectively identify how the various factors lined up to enable it. The question is, how can we maximize the contribution of user research to prospectively improve the odds? The answer requires us to address four basic research challenges that arise from the dilemmas we have just discussed.

Whom do you study? Commonly, the population within which you hope to find a rich, new opportunity is very heterogeneous. If you try to cover the full spectrum of non-users in search of your opportunity, the research will be either very costly or very shallow. To increase the odds of success, you need a rational way of optimizing your bets, which means neither spreading them too thinly nor concentrating them prematurely.

In our case, the definition of casual users of BI was quite abstract and could be interpreted so broadly that it could seem to cover most workers except those with the most narrowly defined or routinized jobs. We did not assume that people of interest were concentrated in particular roles or levels of hierarchy, or that they would preferentially be found in corporate as opposed to operational roles. We believed they could be found in a wide range of occupations and industries. Our clearest criteria were ones of exclusion: We did not want BI professionals; people in analyst roles that focused on providing information to support the decision-making of others, as opposed to applying the information themselves to help guide their own decisions; or executives who had heavy analytical support. We also excluded the financial industry, thinking it would be so heavily quantitative in culture and so focused on BI (which could include any information about a customer's finances) that it would not contribute much to our efforts to understand casual users.

We did believe that the kinds of people we were looking for would be within the broad category of "knowledge workers," a term originally introduced by Peter Drucker [3]. While there are many interpretations of this concept, our working definition can be summarized as: people who do not simply follow procedures designed by others, but who use judgment in applying principles to specific complex cases, and who evaluate, modify, develop, or establish processes or policy. Their jobs are typically defined in terms of goals rather than tasks, and they have relative freedom to decide how to approach their work. Unfortunately, this was still very broad and abstract.

There are a number of common ways that companies try to learn about new audiences that we will not consider here, because we are focusing on in-depth user research. These include conducting surveys, interviewing domain experts and "thought leaders," and gathering "requirements" from business stakeholders. All of these may generate hypotheses about where to start, but they generally do not provide information that is detailed or contextualized enough to guide the design of solutions. However, there are some common user research sampling practices that are used in pursuit of innovation:

  • Studying existing users. If you want to understand how a tool or system currently works in practice, there is no better way than to study actual users in their own context doing what they normally do. Understanding their confusions, frustrations, errors, and the inefficiencies they experience may be relevant if you are trying to deepen their engagement. Researchers and product teams often seem to assume that increasing the satisfaction of current users will tend to expand the market, at least into the population of people on the edge of adoption. However, this may not tell you much about how to serve a new group that you believe has different needs, work patterns, and perspectives than the audience with whom you are familiar.
  • Relying on early adopters. Research for new products often relies heavily on studying so-called early adopters [4]. Identifying actual early adopters of a new technology retrospectively is a very different proposition from identifying likely ones prospectively. The latter requires you to make some assumptions about indicators of early adopter status relevant to predicting adoption of your future innovation. Often, the concept is used as if it refers to a personality trait implying generalized fascination with technological innovation for its own sake, either across or within domains. What was more relevant to us was the desire for more usable and useful BI because of its business value. We did not believe that a history of past technology adoption per se would predict this.
  • Lead users. The Lead User approach to innovation research [5] assumes that innovation is not necessarily driven from the top down; it may even more commonly and effectively arise among end users themselves. Some end users have the latitude, resources, capability, and initiative to modify their own processes. The assumption is that the solutions they arrive at, based on their personal experience, may be applicable to others. The problem is that these people may have idiosyncratic attributes, and the very attributes that qualify them as lead users may make them unrepresentative of the larger audience. Their solutions may work for them in the context where they personally experience the problem without being generalizable to others. Furthermore, one is likely to be more tolerant of limitations in a solution one designs for one's own use than one would be for a commercial solution.
  • Studying dissatisfied current users or ex-users. Studying those who have experience with a solution, having used it and abandoned it, can be extremely useful, whether their dissatisfaction is with your product or a different one in the same genre. However, it can be dangerous to assume that these people, who have already gone through the early stages of adoption and abandonment, are similar to non-adopters or to people who have never been exposed to products like the one you envision; this is a trap that researchers and innovators often fall into. It is especially easy to fall into in the workplace, where end users rarely have free choice about what tools are made available to them. Also, abandonment implies limited trial use followed by almost total rejection. In our case, we were not concerned about people who had completely given up efforts to find value in BI, but rather those who extract less value from BI than they could in principle, given greater engagement. As discussed, these people might not expect more from their tools, and therefore may not identify themselves as dissatisfied.

Because all of the above approaches are flawed and because of the limitations in our current knowledge, we did not provide tight criteria to use for recruiting; we used a flexible approach for which fuzzy logic is a good metaphor. As is common in user studies within businesses, we had to work through contacts within companies who had a broad view of their organizations and could lead us on a path toward appropriate participants. Without stating "tight" criteria, we needed to give them guidance about what we were looking for so they would not default to the people who were the easiest to recruit, perhaps for the wrong reasons (e.g., they were the most available).

In our conversations with them, we intentionally avoided identifying our target users based on job title, role, or position. Instead we shared our highly conceptual description of casual users of BI and of knowledge workers, and then discussed with them their rationales for people they suggested. This was extremely informative for us, because the rationales were essentially hypotheses about what could indicate a person was a candidate for support in doing more with quantitative data. The process also let us see how they translated our concepts into descriptions that were more concrete and meaningful in the particular contexts of their companies—something that was more difficult in some companies than in others. Finally, it allowed us to correct the mistaken assumption that we were looking for power users of quantitative data or for technical people who produced data reports for consumption by others, as opposed to people who tried to extract meaning from the data to apply it to their work.

To help our contacts identify likely candidates, we suggested they consider people whose work involved:

  • Managing or evaluating processes, performance, resources, etc.
  • Leading current in-the-trenches change initiatives
  • Serving temporarily on a task force that uses data to describe the current situation and to support its recommendations
  • Using data to help them make business decisions, make recommendations, or contribute to the decisions of others.

We also suggested some behavioral indicators to help our recruiting contacts nominate specific participants:

  • Requesting custom reports
  • Expressing frustration with data that is available to them
  • Bringing data questions to identified local experts
  • Challenging generally accepted interpretations of existing quantitative data, or introducing alternative data to present a different picture.


This resulted in a diverse sample of participants from a range of jobs that seemed very consistent with our concept of target users. The sample also included people who were in a gray zone in our minds, either because their jobs were already so inherently quantitative that they might be active rather than casual users, or because their jobs might be routine enough in following prescribed procedures to stretch the definition of knowledge worker. This was exactly what we had hoped for: a sample that bracketed the boundaries of our concept.

What should you look for? In addition to deciding where in the vast universe of the potential audience you will aim your MCIR microscope, you need to decide what to look at in the vast universe of content you could potentially explore. In any complex domain, you can't possibly study all activities, tasks, and workflows in search of the most revealing use cases or the ripest opportunities for change. Nor would it be useful. The common scenarios most central to people's roles, their modal tasks, are the ones most likely to already be supported by job designs and tools perceived as being in relative balance, because this is where jobs are most likely to have been designed around the limitations of tools. What is needed is a way to prioritize cases that do not fit comfortably with routine approaches.

User researchers tend to prefer approaches to data gathering (where feasible) in which users' perspectives and concerns drive the exploration. This is consistent with the concept of user-centered design. However, when we are addressing non-users comfortably adapted to their current tools and processes, these approaches may need adjustment. Here are some of the common ones:

  • Relying on known pain points or user-identified pain points. User research is often portrayed as a process of looking for so-called pain points and unmet needs. These clichés seem to imply collecting conscious grievances. However, in seeking opportunities for major innovations, this is of limited value because of the co-evolution issue described earlier. People are not necessarily aware of the ways in which they have adapted to the existing context and perceive it as normal. Their perceptions of problems are anchored by the capabilities and context in which they currently use their tools. This is not to deny that people can identify frustrations or inefficiencies, but even if they call for creative solutions, they rarely point to opportunities for quantum leaps.
  • Contextual inquiry and contextual design. In theory, contextual design looks for opportunities for innovation. In practice, much user research that claims to be in this tradition seems to aim at capturing detailed but neutral descriptions of how people do their work, including ways that work processes vary depending on other factors. However, looking for opportunities for positive change requires more than just description; it requires an evaluative, diagnostic, and predictive orientation. While contextual design advocates looking for places where people's work breaks down and their tools do not serve them well, the fact that it focuses on studying experienced workers doing their typical work while they explicate it seems to make it difficult for many practitioners to move from describing their work to critiquing it. And, of course, if you study users of your existing tools, this is not the market you are interested in.
  • Ethnography. Design ethnographers strive to gain insight into fundamental dynamics of behavior and experience that have implications for design, rather than focusing on how people work given their current tasks and tools. The intent is to make it easier to envision fundamentally new approaches by removing a preoccupation with the tools themselves. There is certainly merit in this, but there are also some challenges. People's observed behavior, artifacts, and ways of expressing their experience are the windows into these fundamental dynamics. However, these are all shaped by existing processes and structured by existing tools. Of course, you are also interested in people's deep purposes, but these are abstractions that have to be inferred from broad patterns of behavior and from the rationales they express for them. That makes them several layers removed from observable behavior. But, ultimately, proposing an innovation in tools that you think people will adopt means predicting behavior with tools. Because of this, ethnography can be vulnerable to diffuseness. What can look like depth to an ethnographer sometimes looks to a product planner like overly vague information, in terms of its implications for the product.

Our basic approach resembled contextual inquiry. It began with semi-structured interviews regarding the participant's role and function vis-à-vis larger business processes, the range of things they did to fulfill these, their motivations, and their current challenges. This enabled us to frame the context in which BI was used. Participants naturally provided brief examples of work scenarios, enabling us to choose ones for deeper exploration. This exploration usually involved observation or walkthroughs of current or upcoming tasks, while in other cases it focused on recent work examples anchored by exploring artifacts from that work.

During this exploration, we were able to probe the "leading edge" of their current uses of quantitative data. That is, we looked for points where people seemed to put aside quantitative analysis and shift to other forms of thinking. A critical success factor was that our understanding of the business and the person's role and tasks enabled us to generate realistic, contextually relevant "what if" scenarios in which we guided the participant in envisioning taking their quantitative thinking one or two steps further.

To further clarify the leading edge of the participant's data usage, we also looked for work scenarios where the person was most highly motivated to muster data to support an argument. Examples included:

  • Internal controversies
  • Determining when exceptions should be elevated to rules
  • Internal and external accountability (e.g., audits)
  • Balancing trade-offs in allocating limited resources
  • Supporting and evaluating proposals and high-stakes decisions
  • Disputes over the meaning of metrics (e.g., performance evaluations, setting quotas, tracking productivity, etc.).

As a result, we ended up with a collection of more than 120 fully contextualized case studies of real work challenges in which people were pushed to the edge of their current practice.

Making sense: The challenge of analysis. Our research required us to "cast a wide net" in terms of both participants and the variety of use cases we explored. Each two-hour interview yielded many pages of notes and collected artifacts. The sheer volume and heterogeneity of unstructured narrative data significantly increased the challenge of identifying themes and opportunities at a strategically significant and actionable level. This required many iterative passes through the data.

Another challenge was the need to understand and group specific cases and instances across participants while preserving the team's ability to contextualize them into the larger narrative of each user's story. Our approach enabled us to group each case on multiple dimensions, while keeping the "story" intact. Without retaining this context for each instance, a team may be tempted to group superficially similar observations. We have seen this happen all too often with the affinity diagramming process, which is often the only form of analysis used for contextual field research data. It involves decomposition of observations into molecular "interpretive" comments, typically captured during debriefings, and then clustering these thematically. Often, clusters are developed by people who do not know the context of the comment. Decisions about how to group items can too easily become an exercise in semantic associations and lead to superficial insights. And although affinity diagramming can be as iterative as you like, resistance to breaking an evolving structure and revising it can be high, especially since many people on the team contribute to it and naturally become invested in it.

As in affinity diagramming, we used a clustering approach, but a key difference was in the type of elements we clustered: We began with the more than 120 work scenarios we had gathered. The process was iterative because each story was rich in implications, relevant to multiple topics, and indexed in many ways (by company, role, task type, business goal, etc.). Because they required complex, interrelated categories, these outputs would have been very difficult to document and discuss with traditional affinity diagramming exercises alone.
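
To make this concrete, here is a minimal sketch, in Python, of the kind of multi-indexed record this implies. It is not the tooling we actually used, and the field names and example scenarios are invented; the point is simply that each scenario keeps its full narrative attached while carrying tags on several dimensions, so the collection can be regrouped repeatedly without losing context.

    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class WorkScenario:
        # The intact "story": narrative notes and artifacts stay attached to the case.
        participant: str
        narrative: str
        artifacts: list = field(default_factory=list)
        # Index dimensions (values here are purely illustrative).
        company: str = ""
        role: str = ""
        task_type: str = ""
        business_goal: str = ""

    def group_by(scenarios, dimension):
        """Regroup the same scenarios along any one dimension."""
        groups = defaultdict(list)
        for s in scenarios:
            groups[getattr(s, dimension)].append(s)
        return groups

    scenarios = [
        WorkScenario("P12", "Monitored weekly defect rates to decide when to escalate...",
                     company="RetailerA", role="operations manager",
                     task_type="threshold monitoring", business_goal="quality"),
        WorkScenario("P31", "Compared regional sales trends to justify discontinuing a product...",
                     company="ManufacturerB", role="product manager",
                     task_type="trend comparison", business_goal="catalog decisions"),
    ]

    # Each analysis pass can cluster on a different dimension; every scenario keeps
    # its narrative, so a cluster can always be re-read in context.
    for task_type, cases in group_by(scenarios, "task_type").items():
        print(task_type, [c.participant for c in cases])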

For example, one output of the analysis was a set of detailed descriptions of dozens of basic quantitative tasks we observed, grouped into business areas. Cases were consolidated into task groupings structured in the following way (a minimal data sketch follows the list):

  • A definition of the quantitative task type (e.g., monitoring the data from a process metric and changes to it over time)
  • Identification of the common quantitative thinking challenges that emerge in this task (e.g., how users assessed where a particular threshold in the data should be set)
  • A collection of thumbnail scenario examples, based on our data, that showed the applicability of the task type to different business functions (e.g., observed examples from product management, merchandising, budgeting, etc.).
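
As a rough illustration, not our actual documentation format, the skeleton of one such grouping might look like the following; the field names and example values are invented.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TaskGrouping:
        """One consolidated task grouping; field names are our own illustration."""
        task_type: str                  # definition of the quantitative task type
        thinking_challenges: List[str]  # common quantitative thinking challenges
        thumbnail_examples: List[str]   # short observed scenarios from different business functions

    example = TaskGrouping(
        task_type="Monitoring a process metric and its changes over time",
        thinking_challenges=["Assessing where a particular threshold in the data should be set"],
        thumbnail_examples=[
            "Merchandising: tracking weekly stock-out rates per store",
            "Budgeting: tracking monthly variance from planned spend",
        ],
    )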

Turning data into findings with strategic and design impact. Our research resulted in two conflicting findings that together painted a surprising picture of the audience and the opportunity. First, many casual users of business information demonstrated a lack of motivation to go beyond their surprisingly rudimentary quantitative practices. Second, through our explorations with participants about what might be possible with their data, we observed countless opportunities where organizations could seemingly have benefited from the simplest of next steps in understanding their data.

That casual users did not display expected data curiosity was almost universally true across the participants we saw—even with people who frequently worked with numerical data as part of their jobs. Our data revealed several clues as to why users might lack curiosity about their data. We saw many examples where users showed limitations in their fundamental quantitative thinking about everyday business questions and tasks, and so had difficulty seeing the potential value of going somewhat deeper in their analyses. For example, they did not take variability into account in making projections. They relied on unvalidated rules of thumb and other crude quantitative assumptions. They did not evaluate their estimating practices by seeing if the data showed they were consistently over- or underestimating. As a result of limitations like these, they did not take full advantage of the potential value in the quantitative information that was available to them. Instead, they resorted to impressionistic thinking surprisingly quickly and had difficulty adopting the data-based decision-making practices that so many businesses try to promote.
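
To give a sense of how small the missing steps were, here is a minimal sketch, with invented numbers rather than data from the study, of two of the checks described above: whether past estimates were consistently over or under the actual values, and how a single-number projection could be widened to reflect observed variability.

    import statistics

    # Hypothetical monthly estimates vs. actuals (e.g., units shipped).
    estimates = [120, 135, 110, 150, 140, 125]
    actuals = [132, 140, 128, 149, 152, 138]

    # 1. Are we consistently over- or underestimating?
    errors = [e - a for e, a in zip(estimates, actuals)]
    bias = statistics.mean(errors)
    print(f"Mean estimation error: {bias:+.1f} (negative means consistent underestimation)")

    # 2. A projection that acknowledges variability instead of a single number.
    mean_actual = statistics.mean(actuals)
    sd_actual = statistics.stdev(actuals)
    print(f"Next month: roughly {mean_actual:.0f}, give or take {2 * sd_actual:.0f} "
          "(about a 95% range if variation stays similar)")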

In addition, casual users of BI had difficulty integrating qualitative with quantitative thinking. For example, they tended to overly discount the quantitative data when evaluating an outlier in a quantitative trend if they could think of a qualitative fact that might partially explain it (e.g., "Our sales were down this Thanksgiving compared to last year, but the weather was bad this year"). They had difficulty thinking of a way to use related data to assess their qualitative explanations. Qualitative information that was relevant to interpreting quantitative patterns tended to live in individuals' heads, rather than being shared.
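
As one hedged illustration (all numbers invented) of how related data could be used to test a qualitative explanation rather than simply accept it, a casual user could compare the size of the Thanksgiving dip with how sales have typically moved on other bad-weather days:

    import statistics

    # Hypothetical daily sales, split by a simple weather flag from a related dataset.
    good_weather_sales = [980, 1010, 995, 1030, 1005, 990]
    bad_weather_sales = [940, 955, 930, 948]

    this_thanksgiving = 820   # the outlier being explained by "bad weather"
    last_thanksgiving = 1050

    typical_weather_penalty = (statistics.mean(good_weather_sales)
                               - statistics.mean(bad_weather_sales))
    observed_drop = last_thanksgiving - this_thanksgiving

    print(f"Typical bad-weather penalty: about {typical_weather_penalty:.0f}")
    print(f"Observed year-over-year drop: {observed_drop:.0f}")
    if observed_drop > 2 * typical_weather_penalty:
        print("Weather alone probably does not explain the whole drop; look for other factors.")
    else:
        print("The drop is roughly consistent with the usual effect of bad weather.")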

Our research showed that these challenges occur within a wide cross section of industries and user roles. Though this might be interpreted as a huge barrier to wider usage of BI tools, it in fact suggests that solutions addressing these challenges could be generalizable to a broad audience. Likewise, our detailed categorization of challenges, some of which are described here, created a list of issues that can be overcome. Because we observed opportunities for improved business thinking in even the most incrementally deeper engagement by BI users, we are confident that tools that address the quantitative thinking challenges we identified have the potential to guide users in modest but targeted steps to discover new value in their data.

Conclusion

We have argued that the challenge in innovation research we have focused on is fairly ubiquitous where systems and jobs have co-evolved. Our strategy for addressing this was to systematically look for edge cases, both within the experience of individuals and across individuals. Although UX professionals often talk about the importance of understanding edge cases, it seems we often use the term as synonymous with exceptions. In our case, we were looking for something more specific than parts of a job that are not well supported in the existing system, creating extra work. Rather, we systematically looked for situations where there is some motivation for individuals to push against the limits (i.e., the "edges") of their jobs. This is where you will find the intersection between the evolution of technology and the evolution of job and organizational design. It is at that intersection that true innovation always brings change.

References

1. Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., and Byers, A. Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute, 2011.

2. Schadler, T. The state of workforce technology adoption: US benchmark 2009. Forrester, 2009.

3. Drucker, P. Landmarks of Tomorrow. Harper, 1959.

4. Rogers, E. Diffusion of Innovations. Free Press of Glencoe, 1962.

5. von Hippel, E. The Sources of Innovation. Oxford University Press, 1994.

Authors

David Siegel is a senior user experience researcher at Google. This work was done while he was a user experience researcher and consultant with Dray & Associates, Inc.

Alex Sorin has more than 20 years of experience designing innovative and strategic software products for leading software companies. He holds several U.S. and E.U. design patents. Currently, he is a director and user experience architect at SAP.

Michael Thompson has been working in product management, product marketing, and product design for more than 20 years in companies such as Apple, Business Objects, SAP, and several startups. He currently leads the user experience function at Emailvision, a provider of cloud-based digital marketing tools. This work was done while he was director of product management at SAP.

Susan Dray, president of Dray & Associates, is a practitioner and consultant carrying out both generative and evaluative field research, and has taught many practitioners how to design, conduct, and interpret field research, among other things.

Figures

Figure 1. Working assumptions of the opportunity to address the casual user segment.


Copyright held by authors

