Growth design and the art of failing successfully

A four-step process for overcoming a failed design experiment

A colorful three-column illustration. In the left blue-and-red-checkered panel are four stacked green, oval emojis: smiling, neutral, disappointed, and amazed. In the center is a stylized woman's head in profile with a mushroom cloud for an eye and hair comprised of leafy vines from which a flower, being watered by a watering can, extends against a background of orange and purple rays. In front of her are three stacked circles, one with a sun rising from a cloud, another with punctuation marks, and a third with fire. In the right panel, on a background of yellow with red polka dots, a blue hand with white fingernails extends to hold the watering can above the woman's head.

Illustration by Eirian Chapman

My biggest fear is failure. It's what contributes to my meticulous meeting prep, my anxiety-ridden overachievement, and my imposter syndrome. But I fail a lot. About 40 percent of the time.

You might even say that failure is part of my job as a growth designer.

Growth design is a specialization of UX design that, instead of focusing on feature development and implementation, iterates repeatedly on feature optimization and workflows. Growth designers understand user needs, obstacles, and goals through the lens of gathered data. Specifics of the role vary from team to team, but there’s always an emphasis on data-driven design. A typical growth design experiment compares an “optimized” design against a design that’s already in the market. An experiment succeeds when it confirms a design hypothesis by hitting a desired metric, like 20 percent more users returning after their first visit. But sometimes hypotheses are wrong, and growth experiments fail.
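To make that success criterion concrete, here's a minimal sketch of how a team might check whether a variant hit a retention target and whether the difference is statistically meaningful. The numbers and the `retention_lift` helper are invented for illustration; it uses a standard pooled two-proportion z-test, not any particular team's tooling:

```python
from math import sqrt, erf

def retention_lift(control_returns, control_n, variant_returns, variant_n):
    """Compare day-1 retention between the live design (control)
    and the experimental design (variant)."""
    p1 = control_returns / control_n
    p2 = variant_returns / variant_n
    # Pooled two-proportion z-test, a common way to judge significance
    pooled = (control_returns + variant_returns) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p2 / p1 - 1, p_value  # relative lift, significance

# Hypothetical numbers: the variant needs a +20% relative lift to "win"
lift, p = retention_lift(control_returns=1000, control_n=10000,
                         variant_returns=1230, variant_n=10000)
print(f"lift={lift:.1%}, p={p:.4f}")
```

In this made-up run the variant clears the 20 percent bar; in a real experiment, both the target metric and the significance threshold would be agreed on with data science before launch.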

I've become more adept at recovering from failure. "Failing successfully" is a recovery process that begins with dealing with the immediate aftermath and ends with long-term planning and strategizing.

When a growth design experiment goes wrong

My first big growth experiment failure was a full makeover of a paywall design based on hypotheses and knowledge from previous experiments. We deviated from our usual optimization path because the design had already been refined so much that we wanted to try something different. We called our experiment “Paywall 1 vs. Paywall 2.”

Paywall 1 was the existing design and Paywall 2 was our experimental design. Paywall 1 had a series of carousel cards showing premium features, and Paywall 2 had a bulleted list. Our hypothesis was that users clicking through a paywall want to get to premium features as fast as possible, and our design was a clean and minimal UI that wouldn't bog down that journey.

Our hypothesis overlooked one very important component that the existing design harnessed: imagery. Our experiment failed. Because it was my design that wasn’t successful, it felt like the failure fell on my shoulders, and every imposter syndrome thought I'd ever had kept replaying itself in my mind:

“Am I even a good designer?"

“How did I not see this?”

“Will my colleagues ever trust me or my designs again?"

I questioned myself and my abilities endlessly. Then I came up with a four-step process for overcoming failure.

Step 1: Boost team morale

Implementing an experiment takes a village. And, as it turned out, the rest of the team was feeling just as bad as I was.

Stakeholders in a growth experiment include designers, data scientists, product managers, product marketers, and engineers. The number of stakeholders multiplies when implementing a paywall because it's an essential touchpoint that drives revenue. When an entire village's assumptions, expectations, and instincts are so off-base that an experiment spectacularly fails, it's tough on everyone.

We learned about the failure of Paywall 2 in a large meeting when our data scientist read the results to the room. Our product partners knew immediately that it was time to take Paywall 2 offline. At that very moment, all the work it took for the team to implement that design felt like a complete waste of time. To me, it felt like the burden was mine to bear because my design was at the center of the experiment. But I wasn’t alone: Everyone gave feedback on the design, everyone was excited about the design, and we all did our due diligence to make the experiment a success. Experiments, by their very nature, are unpredictable (if we could predict their results, we wouldn’t need to experiment at all).

Adobe has a culture that embraces trying, failing, and learning, so we didn't fear for our livelihoods when our experiment didn't go as expected.

On the heels of a loss, it's important to keep the team optimistic, build camaraderie, and remind ourselves why we do the work we do. Since the experiment failed while our team was working remotely, we set up virtual team bonding time to play games, eat, talk about our loss, and remind each other of our strength as a team. After we were done licking our wounds, we got back to work.

Step 2: Understand the “why”

After meeting with the team and boosting our collective morale, we started to look at our next move. Everyone thought Paywall 2 was going to be a winner, so we had to determine which assumptions we'd made about our users were incorrect and why our design didn't elicit the response we expected. I used a few methods to understand the “why” behind the outcome of our experiment:

First, I reached out to data science. It can be helpful to work with data science to slice the data in new ways: Do people in different regions respond differently? Do different age groups? Or user types? If a particular segment favors a design, it's important to know that.

Next, I talked to the stakeholders. Not only are stakeholders part of the team, they also have unique marketing, design, and product insights that can guide optimization. Marketing partners can be especially insightful because they often know what will be successful with new users. I wanted to understand, from their perspective, why the paywall failed.

Finally, and most importantly, I listened to users. I put the designs in front of users in a guerrilla-style, qualitative A/B test to learn firsthand what they preferred and disliked in both designs. This isn't deep research, but it can offer alternate points of view to guide the next iteration and lead to “aha!” moments. In this case, we learned that a lack of imagery was the culprit, but we also discovered that a dark UI we'd created as part of the experiment was well-received.

This part of the recovery process is invaluable. Learning the "why" behind the numbers not only helps to determine how to optimize and iterate a failed design, it can provide insights about structuring future growth experiments.
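As a rough illustration, the kind of segment slicing I asked data science for can be sketched in a few lines of Python. The event log, field names, and `conversion_by` helper below are all hypothetical, not a real analytics pipeline:

```python
from collections import defaultdict

# Hypothetical event log: one row per user who saw the paywall
events = [
    {"region": "NA", "user_type": "new",       "converted": True},
    {"region": "NA", "user_type": "returning", "converted": False},
    {"region": "EU", "user_type": "new",       "converted": False},
    {"region": "EU", "user_type": "new",       "converted": True},
    {"region": "EU", "user_type": "returning", "converted": True},
]

def conversion_by(events, key):
    """Slice conversion rate by a single dimension (region, user type, ...)."""
    seen, converted = defaultdict(int), defaultdict(int)
    for e in events:
        seen[e[key]] += 1
        converted[e[key]] += e["converted"]  # True counts as 1
    return {k: converted[k] / seen[k] for k in seen}

print(conversion_by(events, "region"))
print(conversion_by(events, "user_type"))
```

If one cut (say, new users in one region) clearly favors the losing design, that's exactly the kind of signal worth carrying into the next iteration.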

Step 3: Make a plan to move forward

After collecting every nugget of information possible, we presented our findings to our product partners and looked at options for a new experiment:

Option 1: Pivot. Stop working on optimizing the paywall and focus on a different part of the app for a better win—like a recently released feature that isn't gaining popularity.

Option 2: Drop the losing variant. Try to understand what could be improved in Paywall 1 and retest it in incremental optimizations, like implementing the dark mode from Paywall 2 that people responded to favorably.

Option 3: Optimize the losing variant. Try to understand exactly why Paywall 2 failed and optimize those parts for retesting. A future optimization could include adding the imagery that was missing.

Ultimately we chose a mix of Options 2 and 3, using the best parts of the winning variant to continue fine-tuning the losing variant.

It's important to understand that Paywall 1 had already been optimized more than 100 times, so the challenge lies in making it better than it already is. By continually updating and testing it, we keep learning about our users at a crucial moment—that exact point when someone is delighted enough with the product to wade through the friction of a purchase. By continuing to focus on this point in time, we can understand what might compel someone to pay for additional features or drop off and never return.

Step 4: Document and share

Once our plan to move forward was cemented, it was time to document and share what we’d learned. Growth design is like research: Much of what's learned can be disseminated and reused by other teams. When other teams can benefit from our mistakes, we help the rest of Adobe, and a failed experiment becomes the silver lining in the cloud (no pun intended).

I share my experiments as stories with clear beginnings, middles, and ends. For this project, I began with a brief overview of the influential experiments that led to the current paywall. Next, to help people understand why the experiment was important, I added qualitative evidence and quantitative data (the number of users who see the paywall) that showed its potential impact. The body of my story focused on our hypothesis—we tested a paywall with less imagery because we believed users wanted a streamlined checkout—but I also included our testing methodology, the technical effort, and design modifications that resulted from technical and legal constraints. Finally, I spoke about numbers, the error of our hypothesis, and how we're implementing new insights in our next experiment.

I shared this story with my network of growth designers at Adobe, from Creative Cloud to Experience Cloud, and it sparked a larger conversation about paywalls and the freemium model.

Winning is easy, overcoming failure is hard

Everyone knows how to celebrate a success, but since failure isn’t something we want to dwell on, it can be difficult to move forward when it inevitably happens.

For me, getting things wrong is still difficult. But it's not as scary as it once was. Although I know that all feedback is essential to success, it's often the negative feedback that helps me improve on my work and grow as a designer… and no negative feedback is quite as profound as a failure.