The mystery of product development

XVII.4 July + August 2010

Visible synthesis


Author:
Katie Scott

In the words of Brenda Laurel, research makes design a “more muscular profession.” [1] Design research provides the details that define the problem structure: whom a product is for, what purpose it serves, where it fits in a given context, when it is necessary, and why it is preferable. Formative contextual research provides the starting point for developing a solution.

But design research is a messy domain, with data coming from all directions, in all forms, and at all levels of fidelity. The data derives from a wealth of different research methods—interviews, ethnography, latent video capture, and participatory design methods—all with their strengths and weaknesses. Some of the data comes from primary sources: first-person testimony and sessions we can shape to fit our current needs and questions. Other data comes from secondary sources, culled from other interviews and earlier work, from clients themselves or the contractors they’ve hired. The resulting evidence takes just as many forms: graphs of demographic data, Post-it notes of customer anecdotes, tables of customer history, pages of interview transcripts, collections of photographs, drives full of video clips, drawings of work spaces, and so on.

As designers we’re constantly filling information gaps to better understand the problem (deductive reasoning) and define potential solutions (generative reasoning). But there is very little information about how to make the leap from deductive ideas to generative ones, to convert the design-research questions into potential solutions. Said differently, “...because of the very nature of design problems, there is very often very little information about the problem, even less information about the goal and absolutely no information about the transformation function.” [2]

That transformation function is the root of design synthesis: the conversion from design research to design ideas. Currently, in professional practice, design synthesis is treated like the magical step that happens between structured research protocols and organized design iterations. There’s no explicit step in the project plan, just an expectation that the magic happens. Somewhere between the last interview and when the findings are written, the synthesis occurs. Up until now, the “magic” of design synthesis has hinged solely on the cleverness of the researchers—on their ability to collect, learn, organize, and synthesize the data in their heads and then blurt out meaningful design ideas. That weakness—the lack of an explicit step in which research becomes ideas—hinders us as practitioners and has a negative effect on the value we bring to clients.

Why Synthesis Matters

Currently design synthesis is an invisible aspect of the design process. We pay lip service to the idea of synthesis, but without clear processes or methods for actually doing it. We write interview summaries, we create highlight reels and develop storytelling artifacts, but we don’t provide actionable synthesis artifacts. By ignoring synthesis or cutting it short, we undercut the value of our design research. Certainly the research team that conducted the study has the deepest firsthand knowledge of the data. They remember the nuances of individual stories, the inflection in a participant’s voice, or the particular hot buttons each participant cared about. Those insights have to be transferable, both for efficiency within the design team and for impact beyond it.

As a consultancy, we help clients understand how to use design services to affect their projects and their bottom line. In a mechanical sense, this means defining a project plan for design research, design synthesis, and generative work within the feasible budget and helping clients understand the proposed approach. The projects are often driven by the need for generative work, but the clients now tend to recognize the importance of effective design research. Few clients, however, understand the need for synthesis to bridge the gap in between.

Without synthesis, interested stakeholders have to immerse themselves in the research to understand its implications. Studying the data might suffice for other user experience (UX) professionals and perhaps even the broader design team. The ultimate audience of design research, however, is not the UX team at all. It’s the engineers, VPs, or program managers who must make decisions based on the research and recommendations we provide. The synthesized research needs to be presented in an actionable way to enable design decisions.

To go a step further, skipping synthesis is a waste of money. The rough rule of thumb for design research is equal time for preparation, execution, and analysis. Without adequate synthesis time, the time spent preparing and executing the research itself is wasted. Without capturing those findings for later reuse, in a format in which they can be expanded and built upon, the other two-thirds of the budget is misspent. The research won’t provide a clear goal for the design team, a true target to build toward. As consultants, we must make the case for why synthesis is needed, why it provides value to the rest of the research process, and why it’s “worth” the time and money.

Why the Current Methods Fail

Occasionally clients mistake the research itself for its output. We worked with a consumer retailer to explore the shopping behaviors of their target demographic through a large set of interviews. The client stakeholders observed the research, identifying potential topics, discussing how they fit into other research efforts, etc. Once the interviews were finished, however, the client wanted to shift the project schedule and hold the wrap-up meeting the next day.

To them the research was already complete. We had to make a strong case for continuing as planned, with the next few weeks devoted to a detailed study of the data collected: flagging commonalities, quantifying observed behaviors, validating early findings, and identifying new ones. We had to explain what the output would look like, and that our value consisted of more than rough notes and a set of tapes. The resulting synthesis was presented in a visual report that organized the key findings by how they affected the existing product. It allowed the research team to compose recommendations for the client and, in turn, gave the client an actionable way to share the detailed insights with the product teams for implementation.

In this case, the report provided a synthesized representation of the research findings that was useful to the clients. It was not, however, an established process or known deliverable. Unlike a research task or a generative one, the client couldn’t know what the output would be until it was created. Similarly, the research team was forced to define the deliverable anew each time, based on the situation, the topic and, often, the findings themselves. At worst, everything we do is a one-off: a mix of abstract models that fit the domain, a set of findings that work in the current situation, but no clear method that’s focused and repeatable across clients, across groups, across industries. Currently we have very limited methods for making all the abstract work of design research visible to our clients. We are experts at the research and analysis phases, but we don’t do the synthesis and representation work that makes the research shine.

The Problems Are Clear

  • How could we share the synthesis work in a meaningful way, to pass the insights from the eyes of researchers into the heads of designers? What would make the designers as versed in the data as the researchers?
  • How do we generate an actionable summary without losing all the potential idiosyncrasies? How do we synthesize to find “important” parts of the data, without losing the rest?
  • What should meaningful design research outputs be? What should synthesized artifacts look like, and what makes the research insights visible?
  • How do we make the conversion process, from the research to the representation of it, repeatable? Can it be repeatable across industries, across projects, and across people?

A Lack of Methodology

The goal of design synthesis is to translate the wealth of data into a meaningful framework that can guide design work. This is not as simple as an interview summary, a compilation of findings, or a highlight reel from the user sessions. Great research work and user empathy can get lost in the torrent of data. We can focus too narrowly during our analysis: counting the wrong utterances or fixating on the obvious solutions. Without a better methodology, we can easily focus our reporting on only the details we remember and miss the bigger picture. If we don’t synthesize correctly, sensitively, with an eye on the eventual design questions, we can lose the significant findings in a pile of merely interesting ones.

As a profession, we’ve formalized the design-development process and the research methods into a recognizable process that can be taught, adapted, and repeated. The steps themselves have an accepted set of inputs and deliverables. But the crossover between the two domains of design research and design development is still a no-man’s-land. Our current best practice is to swim in the river of available data and generate models as well as we can.

Design synthesis relies on making the subtle patterns in the data set visible, in a format that the research team, the design team, and the client can understand, discuss, debate, and act on. In most cases the synthesis hinges on a set of abstract models that accurately represent the design space and provide hints at the detail below. The models can take many forms: concept maps, work-flow diagrams, personas, bull’s-eye diagrams, or infographics. Those models become the artifacts for design discussion: understanding, revision, and improvement. Rendering that knowledge visible makes it comprehensible and actionable for the rest of the team.

Developing a Process

In the most basic form, we post all the data we’ve found and “walk the wall” [3] of artifacts to generate a set of conclusions. The artifacts are still available as primary sources, to avoid losing the details, but they are often left in their primitive form. Even the vaunted affinity diagram gets us only so far, providing a basic categorization of topics without context or details. In most cases, we rely on the memory of the design researchers to knit together the key concepts and spark a set of findings.
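
As one way to picture what keeping the primary sources available could mean in practice, consider a minimal sketch in Python. The class names and the toy labeler function are hypothetical, not a prescribed tool; the point is only that an affinity-style grouping can carry pointers back to the raw artifacts rather than discarding them.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Artifact:
    """A single piece of raw research data (quote, photo, observation)."""
    artifact_id: str  # e.g., "P01-q1"
    source: str       # the participant or session it came from
    kind: str         # "quote", "photo", "observation", ...
    content: str

@dataclass
class AffinityGroup:
    """A named cluster from walking the wall; members stay traceable."""
    label: str
    members: list = field(default_factory=list)

def walk_the_wall(artifacts, labeler):
    """Group artifacts under labels without discarding the originals."""
    groups = defaultdict(list)
    for artifact in artifacts:
        groups[labeler(artifact)].append(artifact)
    return [AffinityGroup(label, members) for label, members in groups.items()]

# Toy usage: cluster two quotes under a hand-assigned topic,
# then drill back from the cluster to its primary sources.
artifacts = [
    Artifact("P01-q1", "P01", "quote", "I never read the manual."),
    Artifact("P02-q4", "P02", "quote", "The manual is my last resort."),
]
for group in walk_the_wall(artifacts, labeler=lambda a: "avoids documentation"):
    print(group.label, "<-", [a.artifact_id for a in group.members])
```

The labels still come from human judgment (the labeler); the structure simply guarantees that any group can always be unpacked back into the details it was built from.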

There are five key aspects lacking in the current approach to design synthesis. These “requirements” must be resolved so that design synthesis can become a full partner in the design process.

  1. The current state of design synthesis is not collaborative. Current working models basically require that the researchers be the designers, or that there is a clear carryover from one team to the next. The current approach relies on each member of the research team and the design team getting up to speed using the raw data. The artifacts from design research must be accessible to the broader team: not just to the researchers who conducted the interviews, but also to the interaction designers, developers, content strategists, and visual designers who must translate those findings into a final product. That team must understand the details of the research at its core in order to infer requirements, understand gaps, and outline potential solutions. If we intend to practice true human-centered design, the synthesis needs to be both interdisciplinary and collaborative.
  2. Similarly, design synthesis must be iterative. As we continue to assimilate new data, new understanding, and new ideas, the synthesis must likewise evolve. New data must revise the findings, adjusting our overall understanding of the domain in an ongoing process of active learning. Design synthesis must support iterative problem structuring, continually defining requirements and evolving our own design brief to address the problem. Likewise, the output of design synthesis must be iterative, built on the assumption that the model will flex and grow.
  3. As our knowledge of the problem expands, we must be able to trace that synthesis back to the source data. This isn’t solely for issues of pedigree and credibility, but also to ensure that we’re adequately accounting for the idiosyncrasies and nuances in the raw data. That richness is critical to maintain—it’s the reason we design toward a set of varied personas rather than the nonexistent “average user.” To be clear, the synthesis can’t simply average the data set into a homogeneous, undifferentiated mass. We need a clear metric for making sure design ideas jibe with the real variability and nuance in the source findings (one illustration follows this list). We also need to ensure that research findings are appropriately weighted, accounted for, or addressed in the design iterations.
  4. With our ever-growing data set, how do we address scale? Manual synthesis is manageable when you have a small set of researchers, a homogeneous set of interviews, or a narrowly defined topic. But that circle can quickly widen as you approach a well-trod domain, a large suite of products, or a longitudinal effort. We need to develop design synthesis to handle large-scale problems, broad data sets, and large teams, to ensure the methods are robust. How would synthesis work for “wicked problems,” or for longitudinal studies like the U.S. Census and the National Children’s Study, where the notion of “relevant research” expands exponentially? How could we scale up our synthesis to work at the far end of the spectrum?
  5. Finally, our current approach to design synthesis isn’t accessible and credible to clients. The methods aren’t standardized, repeatable, visible, or quantifiable. Certainly, this is a high bar to set, given the level of variability in design problems. At a minimum, design synthesis should have a set of repeatable processes that can be completed and verified. And we should have an accessible language for describing the synthesis process and its value.
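
To make the traceability requirement (number 3 above) concrete, here is a small hypothetical check in Python. The names and the coverage measure are invented for illustration; this is not an established metric or a MAYA tool, just one picture of what “appropriately weighted and traceable” could look like as a mechanical test.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A synthesized claim, linked to the participants whose data supports it."""
    statement: str
    evidence: dict = field(default_factory=dict)  # source id -> supporting quotes

def support_report(finding, all_sources):
    """Report which sources back a finding, and who is unaccounted for."""
    supporting = set(finding.evidence)
    return {
        "statement": finding.statement,
        "supported_by": sorted(supporting),
        "coverage": round(len(supporting) / len(all_sources), 2),
        "unheard_from": sorted(set(all_sources) - supporting),
    }

finding = Finding(
    "Participants treat documentation as a last resort.",
    evidence={"P01": ["I never read the manual."],
              "P02": ["The manual is my last resort."]},
)
print(support_report(finding, all_sources={"P01", "P02", "P03"}))
# -> supported_by ['P01', 'P02'], coverage 0.67, unheard_from ['P03']
```

A finding with low coverage isn’t necessarily wrong, but the gap is now visible and discussable rather than hidden in the researchers’ memories.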

Initial Steps Toward a Solution

While the problems and requirements can be outlined, the steps toward a solution are less clear. If we continue in the current process, we will slowly etch our own paths as individual practitioners, evolving our process toward synthesis artifacts that have served us in the past. We may find that a bull’s-eye diagram or concept map has worked in similar situations, so we begin to rely on them, without explicitly codifying why those representations are needed or what problem they solve. While this allows synthesis artifacts to emerge organically, the process is slow, fragmented, and full of potential dead ends. As a community we’re taking a generative approach to the problem, continuing to define new potential synthesis artifacts for each situation and hoping that some of the solutions stick more broadly.

I’d suggest we take the opposite tack: looking at the synthesis artifacts that do meet the requirements and working our way backward, to deductively identify the parameters of a solution. For better or worse, I would argue that the most established synthesis artifacts are personas; personas are an anomaly among research methods, a labeled output rather than a named process. While personas are equally lauded and demonized, a critique is peripheral here. At a minimum, they are a known, repeatable, recognized form of design synthesis. There are established methods of generating, leveraging, and extending personas. Personas have generated enough buzz since Cooper introduced them [4] that they are accepted by a broader group of stakeholders. Regardless of their defects, personas have succeeded as a method of making design research visible to a broader community.

To provide an example, we recently worked with a large consumer company to help them understand the details of their user base of several million U.S. customers. The team had already conducted a series of contextual interviews, generating a lot of great anecdotes, quotes, and photos from the users, but the work failed to get any traction within the organization. The content was truly interesting, but nobody knew how to share and use it—it was too complex.

Without a data reference model for customer experience, the team often used their own personal and family habits as a lens for understanding the user community as a whole. There are problems with this: the lens is limited, personal bias looms large, and it is not shared across team members or other teams within the organization. The UX team had much richer data available but had limited ways to leverage it in their design meetings or to expand its influence to other teams and other meetings.

When they came to MAYA, they explicitly asked for a set of personas. They were not as concerned about the number of participants or the type of research we needed to conduct, but they were adamant that the results be a digestible set of personas. We interviewed 20 people and synthesized the results into a single persona document. The persona set included photographs, needs, goals, and habits, plus charts that placed all the personas on a couple of spectrums to quickly show their similarities and differences across relevant axes. The personas were a wild success internally, allowing the UX team to lead a conversation with their stakeholders about the details of their user base, their needs, and potential trade-offs. The teams could discuss why a particular feature was critical for a certain persona, even if it was ignorable for the others, with clear data to back up their assertions. The accessibility of the personas gave the members of the research team and their stakeholders equal footing on which to make inferences from the data and weigh the implications of design ideas.
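
Read as structured data, a persona set like the one described above might look something like the following sketch. The fields, axis names, and personas here are invented for illustration; this is not the actual MAYA deliverable.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """One archetype from the persona set, with positions on shared axes."""
    name: str
    needs: list
    goals: list
    habits: list
    spectrums: dict  # axis name -> position, 0.0 to 1.0 (axis names invented)

personas = [
    Persona("Deal Hunter", ["price transparency"], ["stretch the budget"],
            ["compares across stores"],
            {"price sensitivity": 0.9, "brand loyalty": 0.2}),
    Persona("Loyalist", ["consistency"], ["one-stop shopping"],
            ["shops the same store weekly"],
            {"price sensitivity": 0.3, "brand loyalty": 0.9}),
]

# A text rendering of the kind of chart described above:
# every persona placed on one shared axis at a glance.
axis = "price sensitivity"
for p in sorted(personas, key=lambda p: p.spectrums[axis]):
    print(f"{p.name:>12} |{'-' * int(p.spectrums[axis] * 20)}o")
```

Nothing about this particular format is essential; the point is that the same structure that renders the at-a-glance spectrum chart also keeps each persona’s details addressable for deeper discussion.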

As this example illustrates, personas are a concrete, visual synthesis artifact that summarizes the available data in a shareable format. They are successful because they meet the aforementioned “requirements” of synthesis: They’re collaborative and interdisciplinary, allowing the rest of the team to understand the research findings and the nuances discovered. Personas can be iterative, either by evolving the existing personas in light of new data or by adding personas as new groups emerge. The personas are traceable through the quotes, stories, and behaviors drawn from the primary research. While the personas form an archetype, the details are rooted in the data collected. The personas also address issues of scale by condensing a large set of data into a much smaller set of personas.

The personas aggregate the characteristics of a much larger group; working from a larger data set yields better, richer individual personas rather than a larger set of personas. Lastly, personas are accessible and credible to a larger team of stakeholders. They’re short, clever, and understandable to the extended team with limited explanation or training. The personas take the research out of the domain of the researchers and make it visible to the broader teams in management, marketing, or technology that need to act on it.

While personas are not the perfect method, they are an example of what an established synthesis process could look like. They reinforce the model we want to impart to our clients of what design research should be—research, synthesis, and representation—and how it fits in the larger design and development process. Similarly, personas provide a way to understand the client’s needs and concerns: If clients ask for personas by name, we know they want to identify synthesized archetypes from a larger set of representative interviews.

To make research valuable, we need to make the synthesis process as visible as the research phase and make the synthesis output visible to stakeholders. Personas accomplish both of these goals, recognized or not. By working deductively, we can understand what works about personas for researchers and their clients and develop other synthesis artifacts that follow that paradigm. If personas meet those aforementioned requirements, that solution can be used as a model for others. And, conversely, personas can outline the potential faults of synthesis artifacts and provide an outline for improvements.

References

1. Laurel, B. and Lunenfeld, P., eds. Design Research. MIT Press, 2003.

2. Restrepo, J. and Christiaans, H. “Problem Structuring and Information Access in Design.” Journal of Design Research 4, 2 (2003): 1551–1569.

3. Beyer, H. and Holtzblatt, K. Contextual Design: Defining Customer-Centered Systems. San Diego: Academic Press, 1998.

4. Cooper, A. The Inmates Are Running the Asylum. Sams, 1999.

Author

Katie Minardo Scott is a designer and researcher at MAYA Design in Pittsburgh. Her work focuses on organizing complex information for user understanding in domains like consumer products, medical devices, situational awareness, and engineering research. Scott holds a BFA in design and a master’s in human-computer interaction, both from Carnegie Mellon. She is also a contributing editor for this magazine.

Footnotes

DOI: http://doi.acm.org/10.1145/1806491.1806495


 
