“How can we make ourselves indispensable in the development process?” a young user experience (UX) researcher asked me during a class break. It’s a question I get quite a lot when I teach. Most of us want to know that we are having a positive impact on the products and designs we work on. The first step in achieving this goal is to identify the things that are likely to matter and target our evaluations at them.
UX research is essential to making sure that our products fit their users, but it is not possible to research all aspects of a product concept or design idea. All too often, we make knee-jerk decisions about what to look for in the field or evaluate in the lab based on “obvious” things, such as what tasks people are currently doing and how easy an existing task is to complete. These aspects of design are interesting, and may even be the right strategic targets for evaluation, but they may not get at the core design issues that determine how well a particular product meets—or fails to meet—user goals once it is fielded.
Luckily, UX people are sensitized to issues different from what many other team members professionally focus on. Schooled in how people process information, approach new tasks, and recover from errors, and in the challenges of getting meaningful evidence about human behavior, we have a different perspective than many, or even most, developers and designers.
This can be a huge asset to us as we set about becoming indispensable because we almost automatically see issues that warrant evaluation but that may not be on the radar screens of other team members. As a result, we can often identify these areas proactively, help other team members recognize why they are priorities, and demonstrate how to approach them differently, ultimately helping to guide design in new directions. But how do we identify issues that others are not even thinking about and turn them into a research agenda that others will buy into? We can often get to the heart of the matter by helping to articulate the core challenges of a design brief and by identifying the team’s assumptions—both those that are explicit, and potentially more important, the implicit ones.
Helping teams concentrate on the “right” issues focuses their curiosity, motivation, and creativity. Product teams can spin their wheels over endless questions that are at the wrong level of abstraction and importance to the users’ experience. Raising the right questions to help teams focus their efforts on what will really matter can be one of our main contributions, and a way of adding value quickly. We can do this by identifying inherent challenges, which is relatively straightforward, and making assumptions explicit, which often is not.
Identifying inherent challenges. There are always trade-offs and hurdles in any problem space. If you focus on these, you can begin by showing the team that the things they are debating are actually common design issues, or central issues in the domain.
For instance, a common decision design teams must make is how to make the information architecture shallow enough that it can be navigated without too many clicks while at the same time keeping each branch at a manageable size. Another is how to make features easily discoverable (especially when the user may not even be aware they exist) without having too many things vying for the user’s attention.
We worked on a website for a company that had grown by acquisition, as many have. There were complex partial overlaps among products and services offered by each business unit; the company had to wrestle with the trade-offs of whether to maintain the multiple separate brands, each of which had an identity and coherent product catalog, or to merge them into a single identity for the advantages of cross-selling and overall corporate branding. The downside is that this would render the overall catalog much more complex and heterogeneous. We were able to provide user data to show how confusing their existing website was to customers, which then helped to force the high-level strategic corporate discussion about what to do. This was, after all, an organizational decision, but the technology design process is what helped to raise it to an appropriate level of discourse.
Making assumptions explicit. Assumptions are often internalized and implicit. Simply surfacing them explicitly as hypotheses can be useful to the team.
A stakeholder review in which we begin by asking the team to identify their assumptions is a good way to start. This will elicit assumptions that people are aware of, and sometimes leads people to identify ones that have been taken as givens and never thought of as assumptions or hypotheses. These can have a big impact on product design decisions. Some of the key questions are:
- What are the key requirements you are thinking of for the product?
- What are those beliefs based on?
- What is the quality of the evidence? Has it been evaluated?
- What is their impact on the product direction?
- What is the risk if these assumptions are wrong?
- What are plausible alternatives?
- How might the direction of the product change, or the business risks/opportunity equation change, if those ideas were changed?
The team’s answers to these questions can help get assumptions on the table and recast them as hypotheses for evaluation.
Finding Implicit Assumptions
Sometimes this discussion can go only so far, because some assumptions really are implicit and unconscious. And sometimes these are the most crucial. We can find clues to key assumptions by probing in a number of places, but we tend to find them most easily in user typologies, value proposition stories, and interaction design thinking.
User typologies. Typologies are the sets of distinctions that companies adopt to describe different categories of users. These, in turn, can fundamentally define who the company sees itself as designing for, and should guide design.
Sometimes there is an illusion of consensus about who the users are. In one company, the team described a product in depth. It was aimed at the “power users.” However, when we asked the team to describe who the power user was, and how we would recognize one when we saw one, it turned out that no one on the team could actually define power users. Probing like this sometimes yields competing definitions, or even pitched battles between camps describing the users differently. Helping the team recognize that there is no consensus about whom they are developing for is critical.
Teams can sometimes articulate qualitative differences among segments of their audience, but these typologies can lack behavioral evidence of their validity. Sometimes, these typologies are based on the team’s imagination, or general sense of familiarity with the audience, but turn out to be misleading stereotypes.

Many times teams are designing based on user typologies that have been developed for marketing purposes. They may address differences in product messaging that will matter to people, or even different value propositions, but they may have a messy relationship with segmentations based on behavioral usage patterns. Often, the data driving these segmentations is quantitative data from marketing or data purchased from an outside firm. Segmentations from surveys are based on self-report and miss both context and actual behavior; demographics are not much better. Again, they are not behavioral and are often not validated as predictors of usage behavior. Personality-based, characterological segmentations are typically no better than “pop psychology”—simplistic and static. The problem is that until these distinctions among users are validated as predicting usage behavior, they may lead to designing for an illusion of who the users are.
Value proposition stories. Vivid descriptions of how people will benefit from a product are necessary for selling a product concept and have a significant influence on design, from choice of functionality and features to interaction and visual design. However, these stories are often imagined and are not validated behaviorally. In my experience, the more vivid a description and the more elaborate the story, the more likely it is to be “wrong,” at least in part.
Teams will almost inevitably tell stories about (future) user scenarios in which people are benefiting from the product. When evaluating value proposition stories, pay attention to how enthusiastically the team embraces the entire story. It is a dilemma because for a product to gain acceptance, a team must believe in it. Such belief may even be a prerequisite for membership on the team. However, this also puts critical thinking at risk. It is easy for a team norm—or even an organizational norm—to develop that makes skepticism unacceptable. In these cases there is no little boy saying that “the Emperor has no clothes,” to borrow an image from the children’s tale. No one can question the value proposition and the stories of product usage. This is a dangerous situation.
Another clue is when the value proposition stories and envisioned scenarios fit suspiciously well with the lives of the team. For instance, we worked with a team that was developing a product for people in India. As the team discussed their image of how people would discover the product, it became clear they were imagining a shopping experience similar to what they were familiar with. For instance, they expected that people would research their purchase ahead of time online (possibly at an Internet cafe) and then would go to a so-called big-box store to buy the product, which they would then drive home in their car (because this was a large product) to set up. We did ethnographic research in India, and not surprisingly, we found out that none of these characteristics was true for the ultimate users—people living just above the margin of poverty (the “top of the bottom of the pyramid”). Not only were there no big-box stores, but people in this demographic did not own cars (although some had “two-wheelers,” or motorcycles), and the way they purchased technology products was totally different.
Another clue that can indicate hidden assumptions is when the team, in describing their product, focuses on all the cool things that people “can do” with it. I imagine all of us have seen this. Think of all the times you have asked someone to describe their product and they list its features, capabilities, and what it enables people to do. Of course, the assumption is that people want to and need to do these things—and in the particular way the product supports them. This focus on technical capability in principle is understandable but risky. Instead, it’s important to question whether anyone would want to do something, rather than simply specifying that they can do it.
For instance, one team we worked with was creating an application to help companies manage their sales leads and contact database. They regularly talked about how the technology would let individual users share leads and keep track of all contacts with clients, regardless of which agent initiated the contact. The team thought this was a great idea, but it required a behavior change that the team failed to account for or even to recognize. It required the sales department to collaborate, which is something that most of the salespeople with whom we worked there were resistant to doing.
Design concept and direction. Preparing a completely coherent whole is impossible in practice. There are always compromises and rough edges—those things that don’t fit quite as well as others. Every design has some basic choices that constrain other choices. Despite the fact that design thinking promotes the constant generation and consideration of alternatives, these pivotal decisions are sometimes taken for granted. Or by the time they are identified as choices, their ripple effects have already influenced so many aspects of the design that they acquire a lot of inertia. All of this makes such decisions particularly important to identify in advance.
For instance, a good analogy is that any two-story house needs an entrance and a way to access the second story. When homes are being built, there is a decision about where to put the entryway and where to put the stairway. A classic distinction is whether the house will have a side entrance or a center entrance, and whether there will be an entry hall or a passage directly into a main room. These decisions, which are often made early on, shape all subsequent decisions in the structure and floor plan of the house.
Another clue about where constraining assumptions that should be evaluated might be hiding is when you hear design consistency invoked to justify a design direction. Consistency with a previous design or to fill in a suite of related products can be a design virtue, but it can also be applied too simplistically, without balancing other design virtues. If being consistent leads to a design that “breaks” or is overly cumbersome for users, it may be time to invent a new pattern.
Designers do espouse looking at alternatives, but we must ask the question: Are these alternatives different at a deep or a superficial level? For instance, are they simply different looks (e.g., “mild to wild”), or do they represent fundamentally different organizing concepts or interaction models? Changing a car dashboard to consolidate a wide variety of controls into modally displayed menus has huge implications for driving behavior in a way that changing the design of dials does not. Anyone who has struggled to change the radio station using such controls in an unfamiliar rental car can attest that this kind of change can be very challenging for new users, especially casual users such as rental car drivers, and that the changes can border on dangerous depending on traffic conditions and other factors. By surfacing deep assumptions for evaluation, we can help put them on the table for design thinking to generate alternatives at a more than superficial level. These deep assumptions are also important in their own right to put on the research agenda.
Raising questions such as these should be helpful to the design process you are working on and can demonstrate the value of UX to the company’s product design process.
Essentially, our role in UX is often to add value by questioning assumptions, identifying untested hypotheses, and helping to find ways to fix problems elegantly to move the design ahead. As such, it is our role to identify things that cause risk to the design if they are not addressed or addressed differently. This gives us a whole new way of thinking of ourselves. You might say we’re adding Risk Avoidance Manager to the already large collection of hats we wear.
Risk arguments are generally much more compelling to decision makers than the kinds of return on investment (ROI) arguments that UX has tried to use to garner resources. As Daniel Kahneman and Amos Tversky point out, losing something you have feels worse than failing to win something of equal value [1]. People willingly pay more in insurance premiums than what they would, statistically, be likely to lose, because reducing the risk of catastrophic loss buys peace of mind, even though it means losing money on average. Losing market share because a false, untested assumption guided the design in the wrong direction is probably a more compelling argument than failing to optimize something.
The kind of questioning of assumptions advocated here effectively helps a team avoid risk. Therefore, it becomes much easier to use risk-avoidance arguments based on the actual issues the team is encountering. Effectively, we must make the UX-related risks of current design ideas, directions, or products explicit. We need to identify issues in the domain itself as well—what things in domain X could lead to catastrophic problems that UX could help avoid.
Identifying assumptions can help us make an impact quickly, and helping teams realize that UX actually helps reduce risk can help us get the seat at the table we have long sought. Not only that: It also makes us indispensable.
1. Tversky, A. and Kahneman, D. Judgment under uncertainty: Heuristics and biases. In Judgment Under Uncertainty: Heuristics and Biases. D. Kahneman, P. Slovic, and A. Tversky, eds. Cambridge University Press, Cambridge, UK, 2003, 3–20.
Susan Dray, president of Dray & Associates, is a practitioner and consultant carrying out both generative and evaluative field research. She has taught many practitioners how to design, conduct, and interpret field research, among other things. email@example.com
Copyright held by author
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.