Research basics every designer should know
Leveraging design research fundamentals to improve product experiences
I’m Lirra, a researcher focusing mostly on Adobe Lightroom. My job is to ask real customers, like you, questions about how they work and what they think about ideas we have in development. It’s important to know that I’m not a designer, engineer, or product manager—you don’t have to worry about my feelings; I’m here for your honest and open feedback. It’s only through conversations like this that we have any hope of achieving our goals of making Lightroom as good as it can possibly be. Thank you so much for your time; we could not do what we do or make what we make without you.
To build better products for our customers, we must first understand (through research) who we’re creating for and how we can best meet, and exceed, their needs and expectations. When research is conducted in an interesting and creative way, our product design work becomes exponentially better.
In addition to my day job helping the team bring new features to life, I also teach a course on research methods to design and marketing students. Although few of my students will end up doing research for a living, one of the first things I teach them is that research is not just delivering a pile of information and moving on to the next project; it’s about helping teams understand what the information means and what they can do with it.
My course is centered around tools and tricks my students can employ in their future work to make even scrappy, informal research efforts highly impactful. For designers who don’t have ten weeks to spend learning the basics, three focus areas—the building blocks of research design, the art and science of asking questions, and uncovering and presenting insights—will start them down the path toward research success.
The building blocks of research design
The first step in any research project is understanding your overall goals—research objectives. A typical UI study may have an objective like “explore the overall usability of Feature X, including aspects of delight and friction.” Whatever your objectives are, crystallizing them will light the path for everything you’ll need to decide later, including what you need to learn (key questions), who you need to collect data from (target audience), and how you’re going to conduct your research (methodology).
How specialized your research objectives are will help determine which branch of research to employ. As questions and audiences become broader, so, too, can your information sources. In industry, we tend to encounter four main types of research:
- Product analytics: Large scale usage data specific to your product (e.g., the amount of time users spend on a web page or the average number of Generative Remove spots Lightroom Mobile users place on a single image).
- Primary: Custom research you conduct for your unique purposes (e.g., a survey I write and send out).
- Secondary: Research, articles, or data from other sources (e.g., academic and news articles).
- Syndicated: Research conducted by an agency without a particular client in mind (e.g., customer purchase behaviors).
We have many options and sometimes more than one “right way” to do research. To determine the best path forward, think of it like a decision tree, with the first question being, “How unique are my objectives?” followed by, “How much time, money, and expertise do I have at my disposal?” Each of these factors can point you in the right direction for accomplishing your goals within a set of constraints.
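To make that decision tree concrete, here’s a minimal sketch in Python. The function name, branching, and outputs are my own illustration of the two questions above, not a formal framework:

```python
# A hypothetical sketch of the decision tree described above: two questions
# that help narrow down which type(s) of research fit your project.

def suggest_research_types(objectives_are_unique: bool,
                           has_time_money_expertise: bool) -> list[str]:
    """Map the two decision-tree questions to likely research types."""
    if not objectives_are_unique:
        # Broad questions can often be answered from existing sources.
        return ["secondary", "syndicated", "product analytics"]
    if has_time_money_expertise:
        # Niche questions plus resources warrant custom (primary) work,
        # usually layered on top of a quick secondary scan.
        return ["primary", "secondary"]
    # Niche questions on a tight budget: lean on what already exists first.
    return ["secondary", "product analytics"]

print(suggest_research_types(objectives_are_unique=True,
                             has_time_money_expertise=True))
# ['primary', 'secondary']
```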
Let’s say I’m trying to answer a question that’s on the niche end, like “What items do Lightroom Mobile users want to remove from their family portraits?” Since there might be some outside sources (photo retouching trends, blogs, forums, tutorials, etc.) relevant to the category or types of behaviors I’m interested in, I’d want to conduct a combination of primary and secondary research. Starting with secondary research, which is less specialized, helps get the general questions out of the way so you can focus on primary research (more on that soon).
With so much of the world’s information at our fingertips, the sheer volume of what’s available can make secondary research overwhelming. Where should you start? The world around us, the things we’re readily seeing and passively consuming daily—the category and cultural trawl, if you will. There’s a strategy for organizing secondary research often employed by marketers called The 4 Cs: Consumer, Category, Culture, and Client. Using this framework helps bucket pieces of information as you come across them, making it easier to recognize themes and larger trends. I also recommend always starting with a short list of favorite sources—places with trustworthy, fact-checked information, as well as nuanced explanations of cultural phenomena. (Yes, I looked up “brat summer” in the New York Times.)
Let’s say I’ve done my secondary scour, but now I need to think about what my primary research is going to look like. Primary research has two big arms: qualitative and quantitative (or qual and quant, as most of us say):
Qualitative research is exploratory in nature and best employed when you’re trying to discover something you didn’t know before (“How do new users approach editing their first image in Lightroom?”). It centers on gathering data through “conversations,” which can mean anything from a live interview, to written or video diaries, to simple surveys (yes, surveys can be qualitative!), with many things in between. The sample sizes tend to be smaller and more time is spent with each participant, gathering deep and rich inputs, but you don’t always know how prevalent these behaviors or opinions are.
Quantitative research, on the other hand, is about measuring the scale of something you know is happening (like the frequency of the top five friction points for new users). You gather data en masse through surveys and product analytics, where you have hundreds, even thousands of participants. With quant, we can make what are called statistical inferences to help us better understand a group of people by analyzing the data of a subset of that population (e.g., analyzing the data from 1,000 people to draw conclusions about a group of 100,000). But sometimes your understanding of why those data spikes or trends are happening is limited.
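To make the sample-to-population idea tangible, here’s a minimal sketch in Python of the margin-of-error math behind that kind of inference (the 40% figure and 1,000-person sample are invented for illustration, and the formula assumes a simple random sample):

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion observed in n survey responses."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Suppose 40% of 1,000 surveyed users hit a particular friction point.
me = margin_of_error(0.40, 1000)
print(f"Estimated rate: 40% ± {me:.1%}")  # ≈ ±3.0 percentage points
```

Notably, the margin depends on the sample size rather than the population size, which is why 1,000 well-chosen respondents can stand in for 100,000 users.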
Some of the richest insight comes from the combination of qual and quant data, so you have a deep understanding of the reasons behind repeated trends and behaviors. I’m a qualitative specialist—my skill set is not in running detailed statistical analysis. Although I work regularly with this kind of data, my heart (and focus) is on getting to know small numbers of our customers and talking to them deeply about certain design stimuli or topics through a few common methodologies:
Common qualitative studies
Moderated
- One-on-one interviews (this is one of the bread-and-butter methodologies for many UX researchers)
- Dyads, triads, and focus groups (two, three, and four or more participants, respectively)
- In-situation ethnographies (shop-a-long, in-homes, in-studio, etc.)
- Intercepts (sometimes referred to as “person on the street”)
Unmoderated
- User testing (a standardized set of questions participants complete on their own, most often capturing their audio and screen as they react to certain stimuli)
- Online diaries (often a multi-day series of tasks that combine written, video, and image responses)
- Qual surveys (series of multiple-choice and open-ended questions)
Common quantitative studies
- Profiling (going deep on a particular user type or segment)
- Segmentations (dividing a broad audience into more specific groups)
- Prioritization (comparing key concepts against one another or rating them along impactful criteria)
- Brand trackers (looking at the same metrics over time at regular intervals)
Each methodology has superpowers and trade-offs; determining which to employ on a particular project often comes down to whether you need self-reported or observational information and how many data points you and your team need to feel confident in your decision-making. Once you determine the best methodology or combination of methodologies for your project, you can determine who you’re going to talk to (your target audience) and begin defining them by certain demographic (who they are), psychographic (what they believe), or behavioral (what they do) factors.
Then comes one of the most fun and creative parts of conducting research: talking to humans.
The art and science of asking questions
At the very start of my career, one of my mentors told me: “I don’t expect you to have all the answers, but I want you to be asking the right questions.” It was extremely liberating to hear so early on in my professional development and it’s something that has rung true in a variety of contexts, none more so than research. At the risk of stating the obvious, if you’re not asking the right questions, your research will be doomed to the irrelevant pile of unread emails and too-long decks (the deliverable equivalent of “remind me tomorrow”).
It’s worth remembering here that research is so much more than a conversation, as my colleague Sharma Hendel explains in an article reminding us that asking direct questions doesn’t always result in accurate answers. Whether you’re conducting an interview or focus group, or writing a detailed survey, having natural curiosity about the topic at hand certainly gives you a leg up, but remembering a few simple guidelines can help you along the way.
Guideline 1: Get comfortable with the basics
One of the most common mistakes my students make when first crafting questions is misusing the basic question types:
- Close-ended: Provides a set of options for participants to choose from (often through multiple choice, yes/no, or true/false constructs). Example: Did you drink coffee this morning?
- Open-ended: Cannot be answered with a single word and requires more explanation. Example: Tell me about your breakfast routine.
- Leading: Encourages the desired answer, often by including a subtle bias. Example: Tell me how delicious your coffee was this morning.
Most qualitative interviews should largely employ open-ended questions. The conversation will be quite dull and lacking in texture if every single question elicits a yes or no answer. Most quantitative surveys, by contrast, should emphasize close-ended questions. My students will sometimes rely too heavily on open-ended questions in surveys, making the responses difficult and time-consuming to analyze and the survey itself more taxing for respondents to complete (depending on the length and purpose of your survey, as well as your sample size, two or three open-ended questions should suffice).
Leading questions are quicksand that many non-researchers accidentally fall into, and one of the biggest areas to watch out for. As humans, we carry biases that are ingrained, implicit, and difficult to extricate from our natural speaking patterns. For example, when conducting UX research, you must be very aware of how you frame even the simplest of requests. Something as seemingly innocuous as “How easy was it for you to complete this task?” is a leading question. Instead, try the open-ended “Describe your experience completing this task” or the close-ended “On a scale of 1–5, where 1 is very easy and 5 is very difficult, how would you rate your experience completing this task?”
Guideline 2: Start with an end in mind
Remember those research objectives that helped you define your methodology? Start there. Every question you ask should have a reason for being—a direct tie to one of your core objectives. Think about each objective, break it into components, and write down questions that capture each angle or way in. Don’t worry about being too thorough and having too many questions at the start; it’s much easier to edit down once you’ve got all your objectives covered.
Speaking of editing, often when we work with cross-functional teams, stakeholders get so excited to have feedback that they come to the table with too many questions across a variety of detailed topics. If left unchecked, discussion guides (test plans, questionnaires, etc.) will bloat, respondents will fatigue, and findings will muddy. Referencing objectives to whittle down the team’s disparate queries to a streamlined, reasonable set of core questions will make your analysis easier and your recommendations even more impactful.
Guideline 3: Broad to narrow
When you’re meeting someone for the first time, do you start with specific, personal questions, like “What is your biggest regret?” or do you start with something easy and neutral, like “Where are you from?” or “What do you do for work?” (there are some cultural differences here, but that’s a topic for a different article). I imagine most of us would start with something neutral, easy, and broad. The same goes for research.
I start almost every Lightroom interview with “Please share a little about how you spend your days and how photography fits into your life.” Sure, I could ask people what their favorite food is, but I work on a photography app, so that’s not relevant to the task at hand. Broad but pertinent starting points also provide critical context for our specific topics as we move through the interview. The same goes for unmoderated and written tasks: before asking someone for their hot takes on the last season of Ted Lasso, you’d want to find out about their TV viewing habits and whether they even watched the series.
Guideline 4: Add just a little whimsy
At the start of this article, I mentioned that research conducted in an interesting and creative way makes final outputs even more innovative. Creativity absolutely plays a role at each stage of research design and synthesis, but it’s perhaps most evident when framing and writing questions. There are so many techniques you can use to elicit engaging answers, but some of my favorites are:
- Projection: Abstracting and personifying to elicit emotions and opinions. Examples: “What item in the kitchen would Product X be?” Or, famously, “If Product X and Product Y were throwing a party, what would they be like, and which would you want to attend?”
- Association: Using stimulus to create meaningful links. Example: Choose a picture (or emoji) that fits with your experience using Feature X.
- Completion: Allowing your participants to finish a “story.” Examples: Fill in the blank or mad-lib type sentences, or category mapping.
- Arrangement: Ordering or grouping stimulus against given criteria. Example: Arranging a group of concepts from least to most interesting.
There are myriad possibilities within these techniques and the richness of feedback will more than make up for the extra time it takes to think of new ways to construct and frame these types of questions. However, be mindful that these are “special” questions and should be employed strategically and sparingly to maintain the integrity of responses. Creative questions require creative answers, so if you’re exhausted from thinking through how to ask all these questions, just think of how fatigued your participants will be when answering them!
Guideline 5: Last, but most important... Don’t forget to be a human
You are a human talking to and learning from other humans. Write surveys that your grandmother could understand. Don’t try to cram 90 minutes of questions into a 60-minute interview. Don’t use so much jargon that only an expert could unpack what you’re really asking. One of the best and easiest ways to keep yourself in check is to answer your own questions. If you can’t do it reasonably and thoughtfully, then you certainly have no business forcing someone else to try.
Good research is about uncovering (and presenting) good insights
Uncovering insights is, simultaneously, the hardest and most important work researchers do. In my classes, I introduce insights early. Then, week after week, I reinforce them to give my students maximum opportunities to practice and hone the muscles of uncovering and crafting the things we find to be true but that no one has yet managed to articulate.
The definition of an insight is almost as nebulous as the art of identifying one. Ask three researchers and you will get three different answers (basically what amounts to shades of the same color separated by a few color swatches): One colleague says an insight is “seeing something that everyone sees, but no one has thought.” Another says you should think of it “like the punchline of a joke—it resonates in a way that feels intuitive yet unfamiliar.” For me, an insight is a truth that unlocks and articulates the reasons behind specific behaviors, often taking multiple facts, observations, and data points into account.
As with many things, seeing is believing when it comes to insights. But they are often confused with data (facts and statistics, usually but not always quantitative in nature), findings (direct results without a point of view), and observations (the intangibles of interacting with humans, the things they do and how they do them)—all of which have an important place in the research and reporting process, but none of which are truly insights. Some examples illustrate how data, findings, observations, and insights differ:
- This is data: As reported in Travel and Leisure, San Jose, California, averages 300 days of sunshine a year.
- This is also data: According to the Census Bureau, employees in San Jose, California, have a longer commute time (29.8 minutes) than the typical American worker.
- This is a finding: 9/10 people in San Jose rate the weather as great on a scale of poor to great.
- This is an observation: People who live in colder climates with frequent precipitation appear happiest when the sky is blue, the sun is shining, and there are no clouds in the sky.
- This is an insight: San Jose residents take good weather for granted; a lighter traffic day may make them happier than 70-degree sun.
There is no secret recipe or template for uncovering great insights, but one of the simplest tactics for exposing them is to keep asking yourself “Why?” and not stop until you’ve found something new, but still true (a logical leap is okay, great even, but flat-out fabrication won’t help you or your team).
In my course, I also talk to students about the importance of using verbal and visual storytelling to bring their insights to life. It’s one thing to diligently design and conduct studies, ask the right questions, and augment takeaways with keen observations, but your impact will be severely diminished if the eyes of even your most invested stakeholders glaze over as you’re presenting hundreds of slides filled with tables, percentages, and tiny customer quotes.
The Lightroom team will tell you that I’m constantly thinking of new ways to both involve them in the research process and share my findings along the way: I send Slack updates with top takeaways from individual interviews so team members who couldn’t listen live can keep up with the study as it’s progressing. I also spend time thinking about seasonal color palettes for my note-taking boards, so even my notes and “behind the scenes” efforts are visually pleasing.
When it comes to the final synthesis and share-out of insights, takeaways, and data, presentations (slide decks) reign, though I also experiment with infographics and shortened formats that can facilitate conversation and interaction. But my main goal, when sharing my work with the Lightroom team, is to explain complex findings in the simplest ways possible—by minimizing words and maximizing visuals. To get your point across, sometimes a big bold sentence, in white type on a black slide, is all you need. Other times, a simple graphic best explains a person’s emotional journey. And certainly, a representative video clip of a customer’s struggles with a UI can say it better than any of us.
Good research is about interesting insights, mindful methodologies, and creative curiosity (apparently even annoying alliteration). It’s also true that research can help teams drive engagement, create new revenue streams, embody a user-first mindset, and make confident decisions. But at the end of the day, the best research comes from caring about the best research. By merely showing up to class for ten weeks straight, my students will be stronger, smarter leaders (within their project teams and beyond). Set yourself up for success by simply stopping to ask yourself and your team whether it’s the right time for research, what you’re hoping to learn, and how it will help you make decisions.
Special thanks to Sean Vidal, who was my collaborator on a Research Basics presentation given to product marketing manager interns this summer during which some of the above concepts and guidelines were crystallized.