The role of marketing research is to work with product or service consumers to understand their needs and desires, while designers are often asked to use the marketing research results. So why have I never seen a marketing study targeting designers to find out how marketing research can benefit them? The answer may be because it can't.
Meanwhile, designers' continued willingness to put up with these methods is curtailing the evolution of design research, and perhaps of design itself.
Market research evolved from a need to understand consumers' reactions to products, measuring responses to products already on the market, after they've been designed, not before. Enter the toothpaste aisle of a supermarket and you are faced with many choices. Buy one and leave, and you are playing out an action that formed the genesis of marketing research. Given choices on a shelf, which one would you choose? A bit oversimplified, but that's the basis. In contrast, the goal of design research is to make us smart about our creative directions and opportunities.
Unlike marketing research, design research demands an understanding of things that do not yet exist. The difference between the two is so fundamental that it often isn't obvious. Pointing this out typically yields a deer-caught-in-the-headlights look. Yet this difference explains why, in many cases, innovation seems to be a casualty of marketing research, not a result of it.
Historically, designers have concerned themselves with the "thing": the product they have been hired to design. The initial description of this product, the design brief (as well as funding for the project), typically came from elsewhere. Marketing's job was to interface with consumers. Designers rarely ventured outward. And, certainly, design education never suggested that consumer interactions were part of the program. Few, if any, courses in social sciences, biomechanics, or psychology were a typical part of the design curriculum. At the inception of industrial design in the 1930s, designers appeared happy to draw sketches or shape clay models in an ultimate effort to cull aesthetically pleasing forms from the manufacturer's factory machinery. Verification of these design efforts came afterward.
While there were design heroes and heroines with more humanistic or socially responsible goals prior to that time, for the most part designers were about objects. But by the late 1970s, several people were pioneering the idea of design research. Usually this meant enlisting experts from other fields. And often their expert background in ergonomics or cognitive psychology stemmed from the military, where everything from jet fighters to submarines needed to be controlled or inhabited by people. Henry Dreyfuss's Measure of Man, and later Niels Diffrient's Humanscale, served as standard reference materials for product designers for decades. They were the best sources available at the time, but they contained information gathered largely from studies performed on relatively fit military populations, making application to consumer products dubious, if not inappropriate. Even well into the 1980s, when designers said they were doing "research," it often meant they were going to a library to look through various regurgitations of that same military information. Either that or they were browsing design magazines.
For most, then, design research is relatively new. To get their feet wet, designers initially were invited (or forced) to participate in marketing studies. Usually this meant sitting behind a glass window in a focus-group facility, sucking down M&Ms, and saying rude things under their breath. The bad attitude was often justified: these sessions could be more aggravating than helpful. Perfectly good companies, it seemed, were trying to drive forward by looking out the back window; many sessions focused on ways to position existing products, or methods of "spinning" in order to paint a negative in a positive light. Designers usually reacted by abhorring research, labeling it as meaningless. But in a few cases, a more proactive reaction took place. Designers retaliated by conceiving and conducting their own research.
These proactive steps were tempered by a lack of resources; with little training (an understanding of statistics, for example, is rarely part of a designer's tool kit), designers lifted many techniques from marketing or worked with marketing in a new form of relationship. But did that bridge prevent design research from coming into its own? Many design efforts are funded through marketing, encouraging polite interrelationships (in which marketing needs to understand or approve every design move), with the unfortunate consequence that this may be slowing the evolution of design research.
What separates design research from marketing research is a core but elusive principle: There is a fundamental distinction between evaluating a product before it is finalized, the focus of design research, and evaluating consumer response after a product is finalized. The logic and statistical methods used in studies can be misapplied, often by prestigious (and expensive) marketing consultants. These misapplications affect even seemingly simple questions that, while appropriate for finished products, can be completely inappropriate for products under development. While many, many differences exist, here are three rules outlining basic things to avoid:
1. Never ask people which one they like best. While this is probably the most often asked question in marketing and design, it's a faulty question, and not simply because it's closed-ended. Putting two or more items in front of someone and asking "Which one do you like best?" can yield a meaningless response.
Surprising? Here's why. Put three shapes in front of 100 people (a circle, a hexagon, and an oval) and ask them to choose the one they like best. The results: The circle gets 28 percent of the votes, the hexagon gets 40 percent, and the oval gets 32 percent. Go with the hexagon, right?
What you may have found is that 60 percent of people like soft, roundish shapes such as the circle or oval. Either one may have been fine with your participants, but in the results the hexagon looked falsely popular. The problem is right in front of us. The "which one do you like best" question contains two traps. First, you told the person he or she had to pick one (eliminating any knowledge on your part about ties or close calls). This means that the combination of soft, roundish shapes (picking both circle and oval) is not an option. The participant must pick one or the other, dividing the vote. The second trap is that the question assumes the person does, in fact, like the one they chose. They may not have liked any of them, but given their simplistic answer there is no way to differentiate that response from that of someone who liked them a lot. The question forced them to choose in a manner that did not reflect their actual opinion.
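The vote-splitting trap can be sketched numerically. The shape names and the 28/40/32 split are taken straight from the example above; the code simply reproduces the arithmetic:

```python
from collections import Counter

# The article's hypothetical panel of 100 people: 60 prefer soft,
# roundish shapes, but their votes are forced to split between
# the circle and the oval.
votes = Counter({"circle": 28, "oval": 32, "hexagon": 40})

winner, _ = votes.most_common(1)[0]
print("Forced-choice winner:", winner)  # the hexagon "wins" the plurality

# Grouping by the underlying preference tells a different story:
roundish = votes["circle"] + votes["oval"]
print(f"Roundish preference: {roundish} vs hexagon: {votes['hexagon']}")
```

The forced choice reports the hexagon as most popular even though a majority of participants preferred a roundish shape; nothing in the tally distinguishes that majority, which is exactly the information the question destroys.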
Note that the problem exists even when other words replace "like." For example, "Which one of these hats looks good on me?" embodies the same traps. This form of questioning presents a fundamental problem. Yet while the results are devoid of meaningful information, it's surprising how many people will tabulate and act on them.
The "like it best" question may have some toothpaste-purchasing merit when we are trying to find out how something would sell, assuming all of the options would be shown simultaneously on the shelf and consumers would be purchasing just one. But that's marketing, not design. Even then there are far more insightful ways to get to that information.
2. Don't force people to rank things in sequence. Even more disturbing (perhaps because I've seen the results turned into important-looking charts by otherwise respectable marketing firms) is the "forced hierarchy." Respondents are shown a group of products and asked to place them in order of preference: first, second, third, and so on. Similar to the "like it best" question, the forced hierarchy makes no more sense than asking two Democrats to run for president against one Republican. It can split the vote. A political party would never entertain two candidates simultaneously, let alone three, four, or more. Most forced-hierarchy results really just cloud the issue. A forced hierarchy provides no indication of how or why the rankings were split. The responses from a person who hates them all look the same as results from someone who loves them all. Yet the results make it to top-line reports and deceptively interesting charts, and can mislead major design and product development programs.
I saw this happen recently in a package design study for a line of sports equipment. Three out of four concepts showed a woman running, each using a different photograph. The fourth concept did not. When ranked in sequence, the "popular" vote was split three ways among the photo versions, so the version without the photo came up in first place more often, even though a photo helped convey the message. Luckily, in this case the pattern was easy to spot. When we condensed the offerings to one package with a photograph and the one without, the one with a photograph received three times the votes.
Perhaps an even more obvious problem with the ranking technique, as well as the "like it best" question, is that while a design is still in development, what people choose can be far less important than why. The design isn't final; it can be changed. Understanding why provides more food for the creative process and information that a design team can act on. Unfortunately, the why issues are often ignored or poorly addressed.
3. Don't worry about the average person. The average, although a basic statistical concept widely used in marketing, does not in itself contain any information about another basic statistical concept: probability.
Occurrences in nature, even chance games like coin tossing, center on probability. While the chance of tossing heads four times in a row is small (1 in 16), toss enough coins and it will definitely happen. Scientists typically employ the convention of a 95 percent confidence level for acceptance of a finding, or conversely, 1 out of 20 for rejection. A hypothesis will be considered "proven" if the chance of seeing a specific result in a study by accident is 5 percent or less (typically notated as p < .05). Analyses in marketing studies follow suit, looking for 95 percent confidence in the results.
In essence this means that if you toss heads four times in a row, science still considers the coin "fair." But toss heads five times in a row (where the chance is 1 in 32, or just over 3 percent) and that coin, or the person tossing it, has just been placed under suspicion. Although we know it could happen, science sets its threshold at 5 percent, which translates to science wanting to be correct at least 19 out of 20 times. In doing so, it accepts a range of possibilities.
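The arithmetic behind the "fair until five heads" verdict is a one-liner per streak; the 1-in-16 and 1-in-32 figures above fall directly out of it:

```python
# Probability of tossing heads k times in a row with a fair coin,
# compared against the conventional 5 percent threshold (p < .05).
ALPHA = 0.05

streak_p = {k: 0.5 ** k for k in (4, 5)}
for k, p in streak_p.items():
    verdict = "under suspicion" if p < ALPHA else "still considered fair"
    print(f"{k} heads in a row: p = {p:.5f} -> {verdict}")
```

Four heads gives p = 0.0625 (1 in 16), above the threshold, so the coin passes; five heads gives p = 0.03125 (1 in 32), below it, so the coin is flagged.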
An "average" set of coin tosses therefore means little; many different combinations are normal and expected. Why, then, is there continued emphasis on the "typical" or "average" consumer? In past projects I've been handed detailed descriptions of a company's average user, a statistical distillation of thousands of data points. But designers don't care about the average person. We need to care about everyone. People will vary. Design a doorway for the average person and half the people will bump their heads. Designers must understand the spectrum: tallest and shortest, fastest and slowest, or any number of corner cases that seem not to fall into typical marketing discussions. While we may hear from a marketing viewpoint that "that's not our consumer," in actuality, he or she is. In fact, understanding the complete spectrum may present our biggest opportunity for innovation.
A reliance on a "persona" (a fictional distillation of a consumer that's assigned a name) is equally limiting, even when more than one persona is identified. There are plenty of real people in the world; there's no need to design for a fictional one. Our PowerPoint tendency to oversimplify for purposes of consensus in a group hurts design and our understanding of people, and probably underestimates the group.
Is There a Solution?
The solution can be discussed in a "little picture/big picture" way. A simple solution to the first two issues is to think in analog terms: ask people to rate the products on a scale. Rate, not rank. And be sure to ask why. The results will be infinitely more informative. Love, hate, indifference, and ties will all be spelled out. The design team will get a sense of what they are up against in terms of usability, product performance, or consumer perception, which is the ultimate purpose. Designers don't need closed-ended questions and answers; they need to set direction and cultivate a point of view.
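A small sketch shows what rating preserves that ranking throws away. The two respondents and their scores are hypothetical, but the pattern is the one described above: identical rank orders can hide love, hate, and near-ties.

```python
# Two hypothetical respondents rating three concepts on a 1-5 scale.
# Their rank orders are identical, but the ratings reveal that one
# likes everything and the other dislikes everything.
ratings = {
    "respondent_A": {"concept_1": 5, "concept_2": 4, "concept_3": 4},
    "respondent_B": {"concept_1": 2, "concept_2": 1, "concept_3": 1},
}

rankings = {}
for person, scores in ratings.items():
    # Derive the forced hierarchy each respondent would have reported.
    rankings[person] = sorted(scores, key=scores.get, reverse=True)
    print(person, "ranking:", rankings[person], "ratings:", scores)
```

A forced hierarchy would report these two respondents as identical; the scale exposes the near-tie between concepts 2 and 3 and the gulf between the two people's overall enthusiasm, which is the information a design team can actually act on.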
As for the average person, one homogenized fictional person will make a lovely presentation slide but in reality will get us nowhere. It takes a much more thorough understanding of people's diversity in needs and desires to develop design parameters.
In the bigger picture, design research needs to expand its techniques to more fully understand the potential of design. It's bad enough that some of these marketing-based methods continue to be practiced in a rote manner. (Delving into technical discussions involving both logic and statistics can take many people, in marketing and design, far from their comfort level.) But blindly applying marketing methods to design creates a double whammy that should be avoided at all costs.
There is hope. In light of the fall of some major product-producing U.S. corporations, it may be obvious that the problem was not lack of design or engineering expertise, but an over-reliance on marketing practices that substituted for a strong, meaningful direction. It seems silly to hear a vice president of failed automobile maker GM say that they were just giving people the vehicles that they wanted, when clearly this was not the case. The "positive change" that needs to emerge will require us to abandon antiquated methods for determining product and design directions.
We're certainly not there yet. I was recently in a presentation to a health care company in which the design research team presented findings from interviews conducted with seven potential consumers, the result of in-depth home visits. Each person was individually profiled.
At the end of the presentation a key person on the client's team asked, "Which one is our target consumer, or should we rank them in order of importance?" All that research work, and all three rules instantly broken with one seemingly innocent question!
Dan Formosa is a consultant in design and research. He has received a variety of awards and his work has been selected for national and international exhibits. Formosa was a member of the design team for IBM's first personal computer, OXO Good Grips kitchen tools, XM Satellite Radio, Ford's SmartGauge instrument cluster, and was a founding member of Smart Design. His work is included in the permanent collection of the Museum of Modern Art. He lectures worldwide on design and innovation. On a different note, Formosa recently co-authored the book Baseball Field Guide, explaining the intricate rules of Major League Baseball.
Why is it that user experience design, often hailed on the covers of major contemporary business magazines as the creative savior of everything from product innovation to business operations, seems to prefer to paint a picture of itself as a misunderstood, misapplied, and unrecognized profession: a victim of ruthless market forces and incompetent business managers?
I asked myself that question once again after reading Dan Formosa's article about design and market research.
Questions about how to better integrate marketing functions and design functions (and by extension, market research and design research functions) are valid and worthwhile. But unfortunately, we remain a long way from the constructive discussions needed for these disciplines to better leverage each other.
For many years I have wondered why so many companies seem to struggle with effective integration (or even simply effective collaboration) between marketing and design. Why? The overall objectives of the two functions are largely complementary, and both apply a customer-centric view of business management.
Yes, marketing typically has a broader view and provides essential input to a business strategy, while user experience design typically has a narrower view that focuses on product design and use. However, the responsibilities and capabilities of these two functions have changed over the last two decades.
First, let's consider the marketing function. Marketing has traditionally focused on what is referred to as the "marketing mix" (the four Ps: product, price, placement, and promotion). In some industries, "product" has become relatively more important for many reasons (one being that the customer relationship is considered much more fragile and vulnerable in an online world where the competition is only a click away). As a result, a specific product marketing discipline has emerged, focused on defining product strategy and positioning, market demand, customer needs and requirements, and making sure that products are designed and positioned to meet said needs. In effect, product marketers are stepping into well-known terrain for user experience designers.
Second, user experience design has established itself as a professional discipline in its own right, although apparently lacking the self-confidence of its older siblings. Growing out of the human factors discipline, it has embraced new techniques and insights (participatory design, ethnographic techniques, etc.) and new technologies and interaction paradigms. Today, in what some people call an "experience economy," many companies' strategic planning and innovation practices have user experience planning and design front and center. In effect, designers are stepping into well-known terrain for marketers.
With increasingly overlapping objectives, why is there so much finger pointing between the disciplines? Unfortunately, I have found no satisfactory answer to this question, other than that it is often based on a lack of understanding of the other discipline and very active stereotyping. Unfortunately, that seems to be the case in Dan Formosa's article as well, which focuses specifically on design research vis-à-vis market research. Although I would agree the two are far from interchangeable, I do claim they are complementary and their ground rules are not as different as Formosa would have us think. They are both built on the scientific method and thus should comply with principles and requirements related to data collection and analysis. Understanding "independence of events" and "levels of measurement" is no different for a design researcher than for a market researcher, despite what Formosa seems to argue (although I admit to having a difficult time understanding the coin-toss metaphor, as it supposedly relates to different underlying statistical principles of design research and market research). It is also simply not true that market research focuses only on evaluating a product after it is finalized, whereas design research focuses only on evaluating a product before it is finalized.
There is no doubt that Formosa has been exposed to a lot of bad market research in his career. So have I. But I have also been exposed to a lot of bad design research, whether dealing with qualitative data or quantitative data. I cringe at both. And while we should point out when the emperor has no clothes in our daily work situations, it is not the bad research that defines a discipline. I have been exposed to both good market research and good design research as well and, more important, some of the most compelling and impactful research combined different research techniques for a more comprehensive and insightful outcome. That, I suppose, leads me to my conclusion.
First, as professionals who rely on research insights, we must take responsibility for shaping the quality and usefulness of the research outcomes. Step out of the victim role and demand representation if you don't have it, or proactively define your research needs with your research teams (Formosa lists some dos and don'ts to be aware of; hopefully your research team is no stranger to such considerations). Better yet, show that you understand the research process and that you understand how to make informed decisions based on a range of data, both qualitative and quantitative.
Second, find ways to bring your research teams closer together. For example, have you considered merging your market research team with your design research team to actively leverage their different perspectives and skills for the better of the whole? Formosa refers to "failed attempts" at bridging the gap, and while it is certainly possible to point to examples of such, I have seen only very few attempts at close collaboration based on a shared understanding of underlying principles as well as respect for differences in research needs. One such example is the research organization and practice at Yahoo, which actively seeks to leverage the perspectives and competencies of both disciplines for a better combined outcome.
Third, create opportunities for your product marketing and design teams to improve collaboration on product strategy, planning, positioning, and innovation. Marketing and design have more in common than we seem to acknowledge today. It is time to define where and how that overlap (which, in my mind, is complementary) can lead to more positive outcomes. Surely, it would not hurt many design professionals to better understand market analyses, ROI, NPV, and customer value analyses, just as it would benefit many marketing professionals to better understand persuasive design, affordances, and the role of design in user behavior. Cringing at examples of poor practices and stereotyping other disciplines will not move either of them forward.
Klaus Kaasgaard | Telstra
About the author: Klaus Kaasgaard is the executive director of customer experience at Telstra in Sydney, Australia. He leads the user experience design team responsible for bringing to life the company vision of "1 click, 1 touch" simplicity. Before joining Telstra in April 2009, Kaasgaard held the position of Vice President, Customer Insights, at Yahoo! where he was responsible for user experience and market research across the business. He spent more than six years at Yahoo! in a range of roles including Vice President, User Experience Design, and Director, User Experience Research. Prior to this, he held user research roles at MSN Hotmail and KMD in Copenhagen, Denmark. Klaus has a Ph.D. in sociology of technology from Aalborg University in Denmark and an M.A. in human-computer interaction and philosophy from Aalborg University and Aarhus University, Denmark.
©2010 ACM 1072-5220/10/0100 $10.00