Behind the design: Adobe Firefly Boards

The challenges and rewards of designing, from scratch, a product for the nonlinear start of the creative process

A digital collage featuring various elements related to image editing and design. The central focus is a marble bust, with several floating icons and images around it (including a rocket launch, a cat's face, and an abstract art piece). Text labels indicate actions like Make a marble carving, Describe the image, Use as style reference, Use as composition reference, Edit, and Replace with a cat's face. Various arrows and lines connect these elements, suggesting different editing options or steps in the creative process.

Every design solution starts with an idea. But concepting, the important early stage of the creative process, has traditionally been constrained by resources and time, which means many ideas are left unexplored. What if every creative project could start with powerful ideation for discovering, creating, and sharing ideas? And what if that ideation were backed by Adobe’s generative AI capabilities and collaboration tools?

That vision inspired Adobe Firefly Boards, a modern concepting application with an infinite canvas and multi-creator collaboration that transforms the start of every creative project with unparalleled creative control and direction. It helps people rapidly explore a range of artistic directions, transform regions of assets, and remix styles and ingredients.

Danielle Morimoto, group design manager for generative AI on the Machine Intelligence & New Technology team, has led the design team for Firefly Boards since its early incubation. She discusses the challenges of designing, from scratch, a product for the nonlinear start of the creative process.

What was the primary design goal when you set out to design Firefly Boards?

Danielle Morimoto: One of the first things that any creative typically does in their process is gather—pull together bits of inspiration to help add definition to their ideas. It’s a euphoric stage where anything goes if it can be squeezed into the (usually) constrained amount of time allotted for concepting.

Our primary goal when starting Boards was to design a UI that would enable the two thought processes that people move in and out of throughout the early stage of the creative process: divergent thinking, the process of coming up with many ideas to explore possibilities, and convergent thinking, which requires homing in on ideas to find a solution. When we first began thinking about key concepts, two ideas kept rising to the top. Our challenge was to build an intuitive and simple product experience that supported both.

Exploration. Divergent thinking requires variety. During the creative process there are times when a user may not know exactly what they’re looking for, so they’re open to spontaneity as it may lead to a totally new idea or thread of thought. We wanted to infuse into our product that idea of “I’m feeling lucky” or “surprise me” to offer moments where someone can discover something delightfully unexpected. When someone selects multiple assets and remixes them, the technology combines elements from each selection, so they don’t have to explicitly define an idea; instead, they’re led to variations they may not have considered.

A collage of three smaller images. The image on the left shows a person wearing a blue fur coat and holding a perfume bottle. The middle image depicts a hand moving a chess piece on a chessboard with other pieces visible. The image on the right shows another person, also in a blue fur coat, holding a perfume bottle against a red circular background.
By combining elements from each selection, the Remix feature in Boards leads people to variations they may not have considered.

Precision. Convergent thinking requires control. Users want the ability to remove, add, and change parts of an asset with a high level of precision and predictability, and to perform these editing actions in the context of their workflows, without leaving the application. Boards enables a basic level of editing (adding a style or composition reference to influence generation), but since there are times when people want to do production-level refinement, there are also easy pathways to open assets in Adobe’s pro tools.

Two still lifes of perfume bottles. The bottle on the left is on a white pedestal against a blue background, alongside green plant leaves and a disco ball. The bottle on the right is on a gold pedestal against an orange background, alongside green plant leaves and a disco ball. Both setups feature a hand reaching towards the perfume bottle from the top right corner. An overlay menu in the center of the image includes options: Use as style reference, Use as composition reference, Solid background, Pink color, Even lighting, Minimalistic style, Hand and disco ball.
Composition reference in Boards makes it easy to use any part of an image or artboard, optionally coupled with style selections, to influence a new generation.

What user insights did you leverage to help inform the design solution?

Danielle: This was a new product, and a design-prototype-research-design loop was integral. A lot of the time, the user feedback that our research partners Victoria Hollis and Rob Adams brought back to the team from this loop highlighted where people were struggling. One feature that changed significantly because of user testing sessions was the Eyedropper.

We were aware of a shift from the prompting era to the controls era. The prompting era was powerful because it was an intuitive way for people to get started. The UI for typing a prompt into a text bar was as familiar as starting a web search or writing a text message. But since every model requires a different way of “speaking” to generate output, learning to craft those instructions can be like learning a new language. In the controls era people want to get quality outputs that match their expectations without having to know how to write prompts.

During a team offsite in New York, we put a prototype of Boards in front of creatives and realized we’d gone too far with text prompting versus visual prompting. People wanted more gestural, on-canvas interactions. For example, they wanted to be able to pull aspects from images and apply them to others—and they wanted to do that without ever having to type a prompt. That was when the team first began exploring an eyedropper tool to sample elements of assets.

What was the most unique aspect of the design process?

Danielle: The early stage of the creative process is messy. It’s not a straight line but a wild squiggle. And AI walks a fine line between predictability and unpredictability. Our team needed to find ways to support both divergent thinking and convergent thinking and enable these two very different creation processes to live simultaneously on a single canvas.

There are times during the creative process where visual surprises can lead to new ideas or threads of thought. Other times, people are pursuing a concept and need a level of precision and control. We needed to build a product that supported both these moments of serendipity and predictability—all in an intuitive and simple experience—so to get started we separated our work into five pillars:

  1. Start and explore: Onboarding, and demonstrating early on the fast, delightful experience of exercising artistic direction
  2. Canvas core: Creating a foundation to support the messy, nonlinear creative process
  3. Remix and variations: Enabling the endless exploration of different and often unexpected ideas
  4. Editing: Providing precision tools to help users refine and express concepts and to help them move back-and-forth between exploration and refinement
  5. Creative workflow: Exporting, presenting, and seamlessly integrating into broader creative workflows across Adobe applications

We broke everyone into smaller squads (each combining stakeholders such as an engineer, a designer, a prototyper, and a researcher) to sort through the key questions and problems we’d need to solve within each of these different areas. As an example, the Canvas core squad addressed questions about the application’s overall framework and canvas organization:

App framework: Can a simple chrome UI communicate the core value of the app? Can we empower people right away with a “wow” moment? How do we maintain clear navigation so people always know where they are, how they got there, and how to return? How might we be context-aware, showing people the right things at the right time in-flow? How do we progressively show complexity? What should we consider supporting with future generative AI?

Canvas organization: How do we organize the various “states” of content (active, archived, ephemeral)? To what extent should an asset’s history be surfaced and editable? How do we allow for chaos while also helping users contain it? What tools should we provide to group content and visualize larger concepts?

What was the biggest design hurdle?

Danielle: Starting a product from scratch under such a short time constraint. We came together as a team less than one year ago. There are so many different choices, the technology is complicated, and the decisions you make at the start about the application’s foundation will affect what you can do later.

For example, during early versions of the application, we realized our navigation was getting cluttered and covered too much of the canvas space. Since the canvas was the area where users most often viewed and worked with content, it needed as much room as we could give it. We decided to iterate, and the result was a big UI shift, led by Jeremy Joachim and Kyeung sub Yeom, in which they reassessed the application framework while simultaneously making room for new technologies and design paradigms.

Two images stacked. Top: A computer screen displaying a dark-themed interface with images arranged in two columns and three rows. The first column contains three images of a person wearing blue and red clothing; the middle image is partially obscured by an overlay. The second column contains three different images: on top an abstract red and white pattern, in the middle a silhouette of a person against a red background, and on the bottom another image of a person wearing blue clothing with their face obscured by an object. Various icons and options are visible on the left sidebar, including Generate, Upscale, Variations, and Favorites. Bottom: A collage of various smaller images grouped into five distinct categories on a black background: FLORALS?, MERMAID?, PALM BEACH?, CACTUS COWBOY, and NEON?. Each category contains numerous images that share a common theme or visual style.
In an early version of Boards (top), the navigation covered too much of the canvas where people were viewing and working with content. Our final navigation framework (bottom) created more space for pulling together and organizing disparate bits of inspiration.

In the beginning, finding a way forward through ambiguity and large problems can be challenging, but by adapting and iterating we created a foundation for the app (navigation, tooling, generative history, canvas primitives) that we could put in front of users for feedback and continue to refine. I’m proud of the team for working through challenges under short time constraints, listening to our users, and always holding a high bar for quality.

How did the solution improve the in-product experience?

Danielle: Besides the shift in the overall app framework, there are two capabilities that were either drastically changed or added to the experience.

The Eyedropper. We knew that users wanted to incorporate different aspects of images into their ideas. But with a lot of these new technologies, designers aren’t necessarily going to nail the experiences and interactions right away—they need to be tested and iterated on. This proved true for the Eyedropper. After Victoria Hollis completed a few rounds of user research, it became clear that there was confusion around how to use the parameters of the existing UX. In addition, we heard that people wanted more gestural, on-canvas interactions—like being able to sample aspects of certain images and apply them to others. Jeremy Joachim, Effie Jia, and Veronica Peitong Chen reimagined the same technology but with new UX based on these user insights, which resulted in the Eyedropper design we have now.

Three photographs side by side. The photograph on the left features a person wearing a colorful outfit and holding an ice cream cone, against a vibrant background of geometric shapes. The middle photograph depicts a person standing under numerous disco balls with vibrant pink lighting. The photograph on the right shows another person in an extravagant outfit, surrounded by floating disco balls and an abstract background. An overlay menu in the middle of the image includes options such as Use as style reference, Use as composition reference, Disco balls, Vibrant pinks, Dynamic lighting, Glamorous style, and Woman and disco balls.
Instead of sampling colors, the Eyedropper in Boards samples visual attributes of an image, for a single asset or entire artboards.

If you’ve used the color picker in Adobe Photoshop or Adobe Illustrator, it’s similar, but instead of sampling a color, users sample visual attributes of an image (for example, as a style reference) or an entire artboard (for example, as a composition reference). By simplifying the UI into something familiar, we’ve started to see more user understanding of the value of this capability.

Organization. When you start the creative process, you want to bring in inspiration from everywhere to a single place. We quickly realized there would be a lot of content on the canvas and we’d need to provide an easy way for people to go from a cluttered mess to perfect alignment, without it being tedious. To address that, we revisited one of our pillar questions: How do we allow for chaos while also helping users contain it?

Two images stacked. Top: A collage of five different images randomly arranged on an artboard. Each image features a person dressed in vibrant, futuristic clothing with bold colors and patterns against an equally colorful and abstract background of geometric shapes and patterns. Bottom: The same five images, but this time inside a bounding box and arranged in a formal columned grid. An overlay menu in the middle of the image includes the option to arrange in a grid of Rows, Columns, or a Mosaic, with Columns selected.
The organization functionality in Boards provides an easy way for people to go from clutter (top) to perfect alignment (bottom).

The team also recognized that how and when people want to organize differs, so in addition to foundational tools like padding, alignment, and snapping to grids, the team thought about new methods for organization and tidying up. For this, Kelly Hurlburt partnered directly with our amazing engineering partners to come up with a solution that allows users to “Collect items” in a single click so they can easily get started with their boards. Users can also select multiple assets scattered across the canvas and arrange them by row, column, or mosaic. It’s extremely satisfying to see the canvas decluttered in less than a second.

What did you learn from this design process?

Danielle: What’s been interesting about this project, and with generative AI in general, is that we’re constantly designing new paradigms for interaction. With all this new technology, and all the ways we’re constantly pushing new experiences forward, more than ever this is an iterative loop of designing, prototyping, testing. Designers need to keep a systems mind as they navigate the unknown with flexibility. Thinking holistically and not forgetting about the larger vision we’re designing for—how each piece has an implication on other aspects of the design and experience—helps make sure we create cohesive and seamless experiences.

What’s next for Firefly Boards?

Danielle: Three weeks ago at MAX London, we announced that Project Concept was moving from incubation to a private beta as Firefly Boards. Next, we’re looking to open it up to everyone with general availability, so we’re continuing to build out fit-and-finish improvements and new feature capabilities, with a focus on four main pillars:

  1. Professional creative surface: Onboarding and getting started, expanded media types, presentation tooling, and advanced commenting and collaboration needs
  2. Fast, tactile, art-directed AI surface: Additional generation controls
  3. Connected workflows: Further connecting Boards with Adobe’s product ecosystem. Madeline Hsia is working on capabilities like linked documents, CC Libraries, and file management
  4. Technical foundation: Overall performance, quality, and stability improvements

Overall, we’ll continue iterating and partnering closely with research to listen to our users and what they’re looking for from a tool that supports and enables their creative ideation.

A special thanks to Anumeha Bansal, Evan Shimizu, Ross McKegney, Joe Reisinger, Karthik Shanmugasundaram, and so many others, without whom building this product would not have been possible.
