What makes a learning experience good?

A design research framework for evaluating learning outcomes


[Header illustration: slices of bread go through a toaster and come out as toast.]

Illustration by Shanti Sparrow

Think back to the last time you tried to learn something new, like a language, a recipe, or a craft. It’s likely you had a lot of questions and looked online for videos and articles to answer them. But how many were helpful? How often did you think, “Wow, I learned something new,” versus “Wow, that was a waste of time”?

Software companies invest a lot of time and resources to help people develop their skills and teach them how to use their products. It’s a wide-reaching topic at Adobe, where teams are not only looking at how to help people learn to use our applications but also how to help them grow their skills in creative domains like photography and illustration. To help people unlock their creative potential, our learning experiences range from in-app tooltips to hands-on tutorials and expert-led livestreams. Although there are a variety of ways to teach creative skills, not all are equally effective, and one of our jobs as design researchers is to investigate learning behaviors and needs so we can decide how best to support people trying to grow their creative skills.

To better understand the types of learning experiences that would benefit people most, Victoria spearheaded internal interviews with several teams across Adobe to identify the big problems that needed to be tackled. At first, teams seemed to have different, disconnected issues (e.g., “This is inspirational, but do people actually learn from it?,” “Is this learning intervention working?,” “How do we design better tutorials?”). But when we looked closely, we realized there was a shared underlying question: What’s actually “good” when it comes to learning, and how can we make more effective decisions when producing that content? To help our teams make better decisions, we needed a standardized and rigorous method for assessing the effectiveness of learning interventions.

We leveraged internal workshops and synthesized existing research to create a framework of learning outcomes, then piloted and refined that instrument in qualitative research. The result is the Learning Outcomes Assessment Framework (LOAF). It’s proven so useful in the short time we’ve used it at Adobe that we believe it could help other researchers and designers tackling the complexities of creating “good” learning interventions.

What is the LOAF?

The LOAF (Vega Villar & Hollis, 2022) provides a framework and tools to objectively evaluate the success of learning experiences across Adobe’s product ecosystem. It’s based on the idea that for a learning experience to be good, it must have a positive impact on both functional and emotional outcomes. The LOAF includes assessment criteria for those two dimensions:

Emotional outcomes: enjoyment, excitement, confidence, and sense of control.
Functional outcomes: engagement, perceived difficulty, concept knowledge, and transfer of skills.

Learning experiences must motivate, inspire, engage, and challenge.

How LOAF research is structured

To assess the functional and emotional outcomes of a learning experience, a typical LOAF study uses a combination of rating scales, open-ended questions, and a transfer exercise (where we watch people as they try to apply their new skills to complete a new task). We use pre- and post-learning experience questions, a hands-on exercise, and closing questions to get at different dimensions of functional and emotional outcomes:

Baseline questions → Learning experience → Questions after the learning experience → Hands-on exercise → Closing questions
The LOAF process. Illustrations by Shanti Sparrow.

Questions to ask before and after the learning experience

Ideally, we want our learning experiences to demystify our products and nurture people’s motivation to use them. To capture possible changes in attitudes toward an application, we ask participants to indicate how much they agree with a set of statements both before and after completing the learning experience, then compare the two sets of ratings to see how their emotional state shifted.
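As a rough illustration of how those before-and-after ratings could be compared, here is a minimal sketch in Python. The statement wording, the 1–5 agreement scale, and the function name are hypothetical assumptions for illustration, not the actual LOAF questionnaire items.

```python
# Minimal sketch: comparing pre- vs. post-experience agreement ratings.
# The statements, the 1-5 scale, and all names here are hypothetical,
# not the actual LOAF questionnaire items.

PRE = {  # ratings before the learning experience (1 = strongly disagree, 5 = strongly agree)
    "I feel confident using this application": 2,
    "I am excited to create something with this application": 3,
}

POST = {  # the same statements, rated after the learning experience
    "I feel confident using this application": 4,
    "I am excited to create something with this application": 5,
}

def attitude_changes(pre: dict[str, int], post: dict[str, int]) -> dict[str, int]:
    """Per-statement change in agreement (positive = attitude improved)."""
    return {statement: post[statement] - pre[statement] for statement in pre}

if __name__ == "__main__":
    for statement, delta in attitude_changes(PRE, POST).items():
        print(f"{delta:+d}  {statement}")
```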

Questions to ask after the learning experience

After the learning experience, we ask participants a few questions. The items in this questionnaire were adapted from various peer-reviewed scales (Ryan and Deci, 2000; O’Brien and Toms, 2013; Tisza and Markopoulos, 2021; Webster and Ahuja, 2006) and are designed to capture essential dimensions of the different learning outcomes. The questions can vary depending on the focus of the study, but they should cover each concept as thoroughly as possible.

Aside from assessing learning outcomes, we also assess other dimensions, like a tutorial’s difficulty or usability, which can profoundly impact the overall experience. To do this, we combine participants’ self-reported responses with observations of key behaviors such as completion time, number of errors, and repeated attempts.
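To make that combination concrete, the snippet below is a minimal sketch that logs those behavioral signals next to a self-reported difficulty rating and flags sessions worth a closer look. The field names and thresholds are illustrative assumptions, not values the LOAF prescribes.

```python
from dataclasses import dataclass

# Minimal sketch of per-participant difficulty/usability signals.
# Field names and the flagging thresholds are illustrative assumptions,
# not values prescribed by the LOAF.

@dataclass
class DifficultySignals:
    completion_time_s: float       # observed: time to finish the tutorial
    error_count: int               # observed: number of errors made
    repeated_attempts: int         # observed: steps retried before succeeding
    self_reported_difficulty: int  # asked afterward, 1 (very easy) to 5 (very hard)

def flag_for_review(s: DifficultySignals, time_budget_s: float = 600.0) -> bool:
    """Flag sessions where observed struggle or self-report suggests the tutorial was too hard."""
    observed_struggle = (
        s.error_count >= 3
        or s.repeated_attempts >= 2
        or s.completion_time_s > time_budget_s
    )
    return observed_struggle or s.self_reported_difficulty >= 4

print(flag_for_review(DifficultySignals(720.0, 4, 1, 3)))  # True: slow and error-prone
```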

The hands-on exercise

A common way of assessing learning success involves asking quiz-like questions (e.g., “The tutorial prompted you to ‘group’ shapes. Please describe what ‘grouping’ means and why it might be important”). In early applications of the LOAF, we found that these questions weren’t great predictors of participants’ ability to apply their new skills in a hands-on exercise. That is, participants might be able to define grouping properly but fail to group objects, or they might successfully group objects while struggling to put into words what they were doing or why.

An alternative way to assess how well participants understood the gist of a tutorial (and to identify which parts resonated most and least) is to ask an open-ended question like, “Describe, in your own words, what you learned in this tutorial.” But the ultimate test of success is asking participants to solve a new problem by applying the skills taught in the learning experience.

If, for instance, the tutorial taught participants to arrange and group shapes in Illustrator, the hands-on exercise will ask them to replicate a new target image by arranging and grouping shapes in a file. If the tutorial showed participants how to remove an element from an image in Photoshop, the hands-on exercise might ask them to remove a person from the background in a new photo.

[Illustration of the hands-on exercise: a “Your work area” artboard and a “Final artwork” artboard showing the same ice cream sundae scene with its elements arranged differently.]
A sample prompt for a hands-on exercise: make the artboard on the left look as much like the one on the right as possible. Asset designed by Mercedes Vega Villar for Illustrator research.

As we do when assessing learning outcomes, we evaluate performance on the hands-on exercise using a combination of observed behaviors (like completion time, repeated attempts, the percentage of tutorial tools and actions used, or off-task actions) and self-reported measures (e.g., “I think the tutorial prepared me well for this exercise”).
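Here is a minimal sketch of what such a summary could look like for one participant. The tool names, event log, and “prepared me well” rating are hypothetical stand-ins for the observed and self-reported data a LOAF study would actually record.

```python
# Minimal sketch: summarizing one participant's hands-on exercise.
# The tool names, event log, and rating below are hypothetical stand-ins
# for the observed and self-reported data a LOAF study would record.

TUTORIAL_TOOLS = {"select", "align", "group"}  # tools/actions taught in the tutorial

events = [                                     # observed actions during the exercise
    {"tool": "select", "outcome": "success"},
    {"tool": "group",  "outcome": "retry"},
    {"tool": "group",  "outcome": "success"},
    {"tool": "zoom",   "outcome": "success"},  # off-task: not part of the tutorial
]

completion_time_s = 240.0
prepared_me_well = 4                           # self-report, 1 (strongly disagree) to 5 (strongly agree)

tools_used = {e["tool"] for e in events}
summary = {
    "completion_time_s": completion_time_s,
    "repeated_attempts": sum(e["outcome"] == "retry" for e in events),
    "pct_tutorial_tools_used": 100 * len(tools_used & TUTORIAL_TOOLS) / len(TUTORIAL_TOOLS),
    "off_task_actions": sum(e["tool"] not in TUTORIAL_TOOLS for e in events),
    "prepared_me_well": prepared_me_well,
}
print(summary)
```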

How we’ve been using the LOAF at Adobe

The LOAF is for anyone interested in measuring learning experiences. So far, we’ve used it to assess various tutorial formats (like video or step-by-step, hands-on instruction) in small moderated studies as well as larger unmoderated ones. We’ve tested live tutorials that are widely available inside our applications as well as tutorial prototypes. Being able to study the effectiveness of tutorials while they’re still prototypes has proven extremely useful because it lets stakeholders choose among various designs early in the process and move forward with more confidence.

[Illustration: a heart-shaped loaf of bread on one plate, and two half-circle slices of bread on another.]
The LOAF is based on the idea that for a learning experience to be “good,” it must have a positive impact on emotional and functional outcomes. Illustration by Shanti Sparrow.

Mercedes has partnered with different teams across Adobe to help them use the LOAF independently. Throughout these partnerships, we’ve continued to refine the method, streamline its application, adjust it for different use cases, and, of course, gather information that’s helping us create better learning experiences.

When’s the best time to use this assessment approach? Based on sharing it internally, we’ve found that designers and researchers are particularly interested in it. And because the framework is adjustable, the LOAF is suitable for evaluating the quality and impact of many kinds of learning experiences in an objective and systematic manner, from in-app tooltips to hands-on tutorials and expert-led livestreams.

The LOAF is relatively new, but it’s a good start toward a systematic evaluation of our learning experiences so we can understand what works and why. It’s one more tool to help us design better experiences for humans: the utility of an experience is important, but how learning makes people feel is also critical.

This project would not have been possible without our wonderful collaborators: Amanda Dowd, Katie Wilson, Jenna Melnyk, Melissa Guitierrez, Jessie Smith, Tyler Somers, Alex Hunt, Shanti Sparrow, James Slaton, Sumanth Shiva Prakash, Jan Kabili, Brian Wood, Erin Wittkop-Coffey, Ron Lopez Ramirez, and Sara Kang.
