Five research questions that provide the foundation for good design
An evidence-based approach for determining the viability of product ideas
Illustration by Rebecca Dunlap
Even when decisions feel compelling and well-formed, it’s important to pause and test them against the world by gathering evidence that supports or rejects them. Evidence means systematically testing what we think against what’s true in people’s lives, so we can evaluate whether a solution fits real needs and defend design decisions from a place of knowledge rather than assumption.
Making space for these five straightforward questions introduces evidence early. The answers to them can provide a simple checkpoint between “I think” and “I know” that helps teams align quickly, spot new possibilities, and determine whether a product idea is well supported and likely to succeed. Once asked, they can transform how designers think about their ideas, and the products they’re shaping.
1. What evidence do we have that we’re building for the right audience (user)?
Anyone who’s ever asked, “Who are we building for?” is familiar with answers like, “Mary the Marketer is our persona. She represents the real people we want to target.” But personas shouldn’t be accepted at face value. It’s crucial to understand where they came from, what data informed them, and whether they reflect a market opportunity worth pursuing.
Another common response is, “This product is for everyone.” But when we design for everyone, we risk designing for no one. Solutions intended for seasoned creative professionals with decades of experience should look fundamentally different from those meant for high school students. These audiences don’t share the same mental models, expectations, or understanding of how tools work.
Designers must clearly identify their primary audience to design focused, distinctive, and genuinely valuable experiences that meet real needs. Even if a product will eventually serve many people, start by researching specifically who those people are. Without that clarity, it becomes nearly impossible to gather meaningful supporting evidence.
Review market research, ask for evidence-based descriptions of the target user, and uncover the business justification for pursuing a specific audience. And if the answers aren’t distinct enough to guide design decisions, ask more questions until they are.
2. What evidence do we have that we’re designing for real use cases?
It’s almost impossible to design products without understanding what users are trying to accomplish. When considering these needs, many teams rely on a “We think” framework (“We think users want to do X so they can achieve Y”). While well‑intended, this approach often substitutes assumptions for understanding, and oversimplifies the problem. Early in the process, imagined use cases are fine, but must be treated as hypotheses—the first step is clarifying whether a use case is real.
Consider pricing as an example: If someone doesn’t pay for a service because it feels expensive, the problem is affordability, not dissatisfaction. Jumping straight from a common user‑shaped concern (“This is too expensive for me.”) to a company‑centered solution, based on offering something “more” or “better” to increase value, bypasses the actual problem faced by a user and limits design’s ability to create a solution that will genuinely help.
Grounding decisions in evidence creates a more solid foundation. For example, we know this audience uses the product. We know they are price‑sensitive. We know this because of usage data, interviews, surveys, and market analysis. When problems are solved using this type of “We know” framework, it provides the clarity that allows designers to focus on where users are actually struggling by using evidence to support their solutions.
3. What evidence do we have that we’re designing for the unmet needs of our users?
Are we giving users a reason to choose our product? If a competitor’s product already satisfies a need, or satisfies it better, why would users switch? More importantly, what evidence shows that what we’re building truly matters to them?
If someone else did it first, or is doing it better, then we’re not solving an unmet need. And just because a competitor is doing something doesn’t mean it works. Features, capabilities, and design patterns are often copied simply because other companies use them, but that doesn’t make them good or mean they satisfy an unmet need. Wouldn’t it be better to gather some evidence before committing to a roadmap?
To gather evidence, teams can use gap analysis, review market research, or run unmoderated studies to see if people like a particular feature. One particularly effective technique is to have users narrate their thinking as they complete tasks using a tool of their choice. These types of sessions reveal workarounds, multi‑tool workflows, creative misuse of features, and frustrations that signal unmet needs. Individually, these moments can seem anecdotal, but when patterns repeat across users, they reveal deeper opportunities worth designing for.
4. What evidence do we have that our solution will work for our users?
Once the problem is clear, the next step is to determine how well a solution will work for users (how “good” it is), and how that will be measured. “Good” is, of course, relative to users’ unmet needs and capabilities, so they should be the final arbiters of whether something meets that mark.
Designers often share internal stories to build empathy and excitement among stakeholders. These narratives are a nice way to bring people along, and to get them to care about the customer and excited about an idea. But without real customer stories, excitement can turn into a risky, “I love this. Please build it.” If the solution fails with actual users, teams lose trust and time.
Instead of constructing narratives based on guesses, talk to customers. Run cognitive walkthroughs, unmoderated studies, and concept tests to see whether a proposed solution will support them. In everything that’s shared, make the evidence behind decisions explicit.
5. What evidence do we have that we’re the right team (or company) to solve this problem?
This final question may be the hardest one to ask: questioning whether our team, or even our company, is the right one to solve a problem demands genuine self-reflection. Some ideas simply don’t align with a team’s strengths or a company’s identity. Brands that stretch into mismatched categories can confuse customers and damage trust (just ask the toothpaste brand that launched a line of frozen meals).
Teams (and companies) are better off when they play to their strengths. That doesn’t mean they need to avoid new directions or innovation. But for every product innovation or new idea, they must ask:
- What would have to be true for us to design, deliver, and support this successfully?
- Does it play to our strengths?
- Do we have the capabilities, infrastructure, trust, and operational foundation to support it?
If success depends on capabilities that don’t (yet) exist, or on building entirely new ecosystems, the idea may be chasing someone else’s strengths.
In the end, shifting from “I think” to “I know” strengthens designers’ confidence in their decisions and helps teams advocate for users with clarity and rigor. By building these five questions into their practice, designers gain the ability to say not just “we think this is right,” but “we know this will make a difference.”