Ask Adobe Design: How are you using Adobe Firefly?

How our team is using our generative AI model in their personal and professional projects

An AI-generated illustration of a scuba diver swimming alongside two large fish. Light is streaming into the blue-green water from the surface.

Digital illustration created in Firefly

We’ve watched excitedly as the creative community has generated more than half a billion images with Adobe Firefly.

With generative workflows in Adobe Photoshop (Generative Fill), Adobe Illustrator (Generative Recolor), and Adobe Express (text-to-image and text effects), Firefly has been changing the way our designers brainstorm, create concept art, and complete repetitive tasks. Adobe Design helped shape the Firefly experience, and our team members are also using the technology in a range of projects, both professional and personal. We spoke with a handful of Firefly power users from Adobe Design to hear how they're using generative AI in their workflows:

“The ability to create unique Photoshop textures (and generate multiple versions) using Firefly seems exponential.”

Lee Brimelow, Software Development Engineer, Design Engineering

“Since Firefly was released in beta, I’ve spent countless hours creating a wide variety of content, ranging from cute and funny images to compelling and beautiful illustrations (creative minds can run wild with it). But I’ve also been trying to create more useful assets that designers and photographers commonly use to augment their work: Photoshop overlay textures, which usually live in a layer above the main content and are blended onto it using a blend mode. These textures can be used to adjust a photo’s lighting, to add effects like grunge or grain, and for a whole host of other purposes. In the example below, I used a rainbow light leak texture to recolor an existing photograph.

“Finding just the right overlay texture can be time-consuming, so the ability to generate them in Firefly is great, and the ability to create unique textures and generate multiple versions seems exponential. The work I’ve been doing only scratches the surface of what’s possible, but I still don’t envision generative AI ever replacing creative professionals; I see it as empowering them to bring their designs and photographs more easily to the level that’s in their mind’s eye.”

An AI-generated photograph of an abandoned, wrecked car (with a dented hood and front end) rotting into the earth in a fogged-in grove of trees.
The original image of the abandoned car was something I’d generated previously in Firefly using the prompt, “shot of an abandoned car that crashed into the woods, cloudy, foggy, and rainy.”
Four AI-generated rainbow-hued gradients in various patterns.
The next step was to try to generate a colorful light leak texture which, when applied to the image, would change its color and feel. I started with the prompt “colorful light leak overlay,” but prompts can be tricky: sometimes you get what you envisioned on the first attempt, but it often takes time and patience to get what you’re looking for. I was looking for a colorful blurred texture, but initially nothing I generated worked.
A single AI-generated rainbow-hued, diagonal gradient. Colors from top left to lower right: indigo, teal, green, yellow, orange, red, pink, violet.
I usually start with short prompts and add to them as needed. I added “on a black background,” since it would make things much easier when trying to overlay it onto the image, and after several failed attempts I finally got what I was looking for.
An AI-generated photograph of an abandoned, wrecked car (with a dented hood and front end) rotting into the earth in a fogged-in grove of trees overlaid with rainbow-hued, diagonal gradient.
The last step was to bring both images into Photoshop to create the final piece: I placed the texture onto a layer directly above the photo of the abandoned car, applied an “overlay” blend mode to the texture, and set its opacity to 60%. (When applying an overlay texture, it’s good to cycle through all the blend modes and find the one that works best for that texture.)
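For readers curious what that compositing step does numerically, here is a minimal sketch of the standard “overlay” blend formula followed by an opacity mix, using NumPy on pixel values normalized to 0–1. This illustrates the general technique, not Adobe’s implementation; the 60% opacity matches the setting described above, and the sample pixel values are invented for illustration.

```python
import numpy as np

def overlay_blend(base, texture):
    # Standard "overlay" formula: multiply in the shadows,
    # screen in the highlights (values normalized to 0..1).
    return np.where(
        base < 0.5,
        2.0 * base * texture,
        1.0 - 2.0 * (1.0 - base) * (1.0 - texture),
    )

def composite(base, texture, opacity=0.6):
    # Mix the blended layer back over the base at the layer's opacity.
    return base * (1.0 - opacity) + overlay_blend(base, texture) * opacity

# Two sample pixels: one dark (0.25), one light (0.75),
# both under a bright texture value of 0.8.
base = np.array([0.25, 0.75])
texture = np.array([0.8, 0.8])
result = composite(base, texture, opacity=0.6)  # dark pixel brightens a little, light pixel more
```

Because overlay pushes dark values darker and light values lighter, a colorful texture like a light leak tints the image while largely preserving its contrast, which is why it works well for recoloring.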

"The ability for Firefly to produce convincing portraiture from scratch is extremely practical, especially when I need unique assets for demos."

Davis Brown, Experience Designer, Digital Imaging

“Working with Firefly to generate photorealistic portraits is of particular interest to me in my creative process. The ability for it to produce convincing, lifelike images from scratch is extremely practical, especially when I need unique assets for demos. The reactions of people when they realize that these convincing images are entirely AI-generated never cease to amaze me. It's a powerful testament to the advancements in this technology and its potential in the world of art. My process is driven by the ongoing progression of AI so it’s constantly evolving. Every day I'm learning and exploring new creative territories—it's an exhilarating part of my journey as an artist.”

A screenshot of Adobe Firefly with four AI-generated portraits of (from left to right): an Asian man, a white man, a white woman, and a Black man. All of them are wearing white T-shirts and black leather jackets.
Creating photorealistic portraits with Firefly involves a precise setup of the prompt. I start by outlining the camera view and the shot's positioning. Then, I detail the subject's appearance and surroundings, including their clothing and the setting (including specifics about the environment and lighting). An example of a prompt I might use is, "a medium-shot portrait of a person in a patterned chore jacket, in an art studio, with monstera plants, sunlight streaming through windows during the golden hour DSLR telephoto HD photo backlit." This detailed approach helps guide the AI model to craft a realistic portrait that aligns with my vision.
A screenshot of Adobe Lightroom with four rows (of ten images each) of AI-generated portraits of women and men of various ethnicities and styles of dress.
The next phase is organization: curating my favorite creations in Adobe Lightroom. I import the photos, compare them side by side, rate them, make basic color adjustments, and upscale the image resolution for quality.
Four AI-generated portraits (from left to right): two of a Black man (one cropped with a heavy black frame around it, the other showing a full interior background) and two of a white woman (one cropped with a heavy black frame around it, the other showing a full interior background).
The refinement process occurs in Photoshop. Using Generative Fill I expand the images, tweak the backgrounds, regenerate parts of clothing, and generate new objects or scenery to tell a new story.
Two rows of three AI-generated portraits. Top row left to right: an Asian woman wearing a white T-shirt and a leather jacket; a white man wearing a gray hoodie with asymmetrical black stripes; and an Asian man with blond hair wearing an open button-down shirt over a T-shirt. Bottom row left to right: a white woman wearing a white T-shirt and a leather jacket; a Black man wearing a plaid blazer over a white T-shirt; and a Latina woman wearing a sequined blazer over a black T-shirt.
All these portraits were generated entirely with Adobe Firefly and edited in Photoshop with Generative Fill.

“Firefly has a style engine that’s extremely useful for achieving a consistent aesthetic throughout a collection of images.”

Veronica Peitong Chen, Experience Designer, Machine Intelligence & New Technology

“Imagine you’re asked to paint a portrait of your life story. How would you captivate the audience and make every moment come alive? That was the challenge I faced when preparing for Pivotal Moments, one of Adobe Design’s internal speaker series that gives people an opportunity to share transformative moments from their lives and careers.

“I’d crafted my speech around three pivotal moments that shaped my academic and career journey. As part of the core team building and testing Firefly, it occurred to me that it would be the perfect way to create a visual theme to thread through my story. Firefly empowers storytellers to experiment, refine, and enhance their visuals until they capture the essence of the story they want to tell. With each iteration, I had the opportunity to fine-tune the results, adjust the composition, and experiment with visual elements—a process that ensured the results closely matched my memory while effectively illustrating the story.”

A screenshot of Adobe Firefly. Top row slide text. Story: As we approached a narrow doorway on the road, the muddy terrain caused our bike to get stuck. Second row text and images. Prompt: Rainy day, Asian woman sitting at the front seat of a bike with a kid sitting at the back seat. Four AI-generated images showing a woman riding a bicycle with a child on the back (the image on the left is marked Selected). Third row text and images. Prompts: Muddy road in front of iron gate and fence. Rainy day, a single narrow door in front of muddy road, a fence and bushes. Four AI-generated images showing variations of a road leading to a gate (the image on the left is marked Selected). Fourth row text and images. Prompt: Rainy day, bike wheel with mud close-up. Four AI-generated images showing variations of a muddy bicycle tire (the image on the left is marked Selected).
Using my written narrative as a starting point, I fed it into Firefly as prompts to generate highly customized visuals for a personalized story. One of the prompts was based on a story where my mom took me to a class on a bicycle on a rainy day. With just a few adjustments, Firefly conjured an image that so authentically captured the environment, the mood, and the essence of the story itself that even I felt transported back in time.
A screenshot of Adobe Firefly. Top row slide text. Style: Palette knife, Oil painting. Second row text and images. Prompt: Beijing China city streets and classroom. Three AI-generated images showing streetscapes and waterscapes and a fourth of children in a classroom (the image on the left is marked Selected). Third row text and images. Prompt: A detailed oil painting of a long-hair Asian woman programming. Four AI-generated images showing variations of a dark-haired Asian woman seated at a computer, one of them with her back to the viewer (the image on the left is marked Selected). Fourth row text and images. Prompt: Abstract architecture. Four AI-generated images showing variations of modern architecture (the image on the left is marked Selected).
Firefly has a style engine that was extremely useful for achieving a consistent aesthetic throughout the collection of images in my presentation. With "palette knife" and "oil painting" styles selected across all generations, I was easily able to illustrate different subjects and moments with a consistent visual language.
A split-screen screenshot. On the left are 83 thumbnail images of AI-generated illustrations and on the right are 24 thumbnail images of AI-generated illustrations.
I generated 83 images in total and ended up using 24.
Three rows of four images on the diagonal, all in a similar oil paint style. Top row from left to right: a woman riding a bicycle with a child on the back, the back of a woman's head facing a computer screen, a muddy bicycle tire in a puddle, a woman seated at a computer using a mouse. Middle row from left to right: a tree-filled courtyard surrounded by buildings, five women standing in front of a painting on an easel, a cluster of houses, a modern office building. Bottom row from left to right: a world map, an open-air street market, an Asian-inspired interior, two women facing each other.
Firefly-generated images helped me thread a visual theme through my Pivotal Moments presentation deck.

“The ability to create any scene I can dream up has really opened possibilities for the types of stories and analogies I use in internal demos.”

Kelsey Mah Smith, Experience Designer, Machine Intelligence & New Technology

“Part of my job is to shed light on the unknown, to tell stories rooted in research, trends, and an understanding of user needs. I often create decks and designs to share abstract and broad concepts with other teams, which requires visual analogies to set the stage and strengthen my narrative. Standard deck templates felt stale and didn’t do much to support the variety of stories and concepts I wanted to convey. Whenever I needed to customize a deck, I would search stock sites or create illustrations on my own. It was time-consuming to get the right images, textures, and illustrations I needed.

“I’ve been using Firefly for almost all my projects since the beta launched. It’s where I start ideating concepts for what I want my story or visual theme to be. I’ll generate everything from textures and icons to full-on visual scenes to help support my stories. From there I download the assets and bring them directly into a design tool where I’ll collage, mask, and layer them. Having the ability to create any scene I can dream up without having to search across stock sites or public domain images has really opened possibilities for the types of stories and analogies I can use. In addition, the time it takes to produce custom assets has decreased while the variety of assets and themes has increased. Essentially, it's faster to be even more creative than I was before, which ultimately helps me get back to designing strategies to make our products and features easy for our customers to use.

“For this particular deck I wanted to use a deep space analogy—to evoke the vastness of space exploration and the excitement of the unknown.”

A screenshot of Adobe Firefly with four AI-generated illustrations of the solar system with a close-up of one planet.
I started with a simple prompt “planets in space,” but it was generating images that were much too stylized and fantastical.
A screenshot of Adobe Firefly with four AI-generated illustrations (in more of a photo-illustrative style) of close-up views of planets.
From there, I continued to refine the prompt by adding the words “high resolution” and “black space” until I got the look I had in mind.
A screenshot of Adobe Firefly with four AI-generated black-and-white photographs of clouds against a black sky.
I still needed other images with textures like smoke and dust so I could layer them onto the planets to make the whole composition a bit more abstract—without detracting from the planets and the emptiness of space. I created the prompt “floating mist over black background,” which generated exactly what I needed.
A screenshot of a slide from a presentation deck with "Title slide here" placeholder text on the left and an AI-generated composite illustration of the solar system and close-up views of planets, against a clouded black sky.
I ultimately generated three images. None of them required editing so I dropped them in a design tool where I collaged and layered, using different effects and masks, to create my final title screen.

“I can quickly generate images on a specific theme or concept which not only sparks ideas but helps me explore visuals I hadn’t even considered.”

Tomasz Opasinski, Creative Technologist, Machine Intelligence & New Technology

“One of the most exciting possibilities of Firefly is its ability to aid in the ideation process. As a creative (prior to Adobe I was a poster designer and was part of more than 560 theatrical, streaming, TV and video game campaigns) I always feel a need to be generating fresh ideas, so I experiment a lot. Firefly quickly generates images on a specific theme or concept which not only sparks ideas but helps me explore visuals I hadn’t even considered—it’s like having a design assistant tirelessly generating concepts and visual references, freeing up my time and mental energy to focus on refining and executing my vision.

“I don’t think I’ll ever see generative AI as a replacement for human creativity and ability; for me it’s a tool for exploring ideas, speeding up workflows, and generating high-quality assets. I recently used Firefly to assist in the creation of a poster for a Halloween party, with themed characters, pumpkins, haunted houses, and appropriately scary background images.”

Two images. On the left is a vertical list of words: Halloween, Trees, Pumpkins, Zombies, Dead, Cemetary, Skeleton, Creature, Eyes, Birds, Haunted house, Purple, Orange, Graffiti. On the right are rough-sketched shapes.
I started with a blank page for ideas and words that I associate with Halloween, then used a second blank page to block placement for characters and content as I gathered assets.
A screenshot divided into three columns. On the left are tiny thumbnail images of more than 100 AI-generated illustrations. In the center is a single AI-generated illustration (of a fiendish rabbit) with a colorful background, and next to that is the same rabbit with the background removed. On the right is the completed Halloween poster composed of 44 AI-generated illustrations.
For each prompt, Firefly generates four candidates, each with slightly different characteristics. Selecting one image from many is an iterative process that involves choosing what’s closest to what I have in mind, then altering the prompt as necessary to get closer to my final asset. It’s a classic narrowing-down process that started broadly with a search for a particular “look”—a colorful, Halloween-esque graffiti style. The center image shows my final selection and the masking in preparation for final compositing in Photoshop.
A screenshot divided into two columns. On the left is the completed Halloween poster composed of 44 AI-generated illustrations. On the right are tiny thumbnail images alongside the prompts that generated them.
My final poster consisted of 44 images. On the right are the thumbnails with the associated prompts: The more you tell the computer, the more precise the output will be; simple two- and three-word prompts may not work as well when you have a particular concept in mind.
A Halloween poster composed of 44 AI-generated illustrations including pumpkins with cat eyes, skeletons, a fiendish rabbit, a zombie, bats, a haunted house, a graveyard, and an appropriately dark (purple, black, and orange) graffiti background.
With so many images in one project, not having to create them myself freed up a lot of time to focus on the poster design.

“Before Firefly, if my content required variations, I would have to manually modify them—each pattern, color, alteration was a separate task.”

Heather Waroff, Senior Experience Designer, Extensibility

“As a designer on the Extensibility team (where we explore how to add new functionality to our applications without changing core functionality and how to embed Adobe tools and assets into third-party applications) I’m often asked to envision how experience interoperability works across Adobe. Doing this requires telling a workflow story that encompasses a user’s entire journey—from first interaction to accomplishing a goal—when interacting with our products or services.

“By telling a visual story of a user’s end-to-end journey, we can help others visualize what an experience will look like before building it. My workflow for these experience stories includes the creation of supporting content so that wireframes and digital prototypes feel real enough that they capture the essence of what a user will be doing. When figuring out what content is needed, I always ask myself the same set of questions:

  1. What’s the result/outcome they’re looking to get to?
  2. How will they shape their content to get to the result?
  3. What are the steps to getting there?
  4. What content is the user starting with?

“Prior to Firefly, I was illustrating stories through template manipulation and manual content creation using Express, and my process started with looking for a template that could communicate what a user would be doing. For example, if I was exploring what a small business owner would be creating, I would create a fake company to illustrate that. If the content required variations, I would have to manually modify them—each pattern, color, and alteration was a separate task. Since the beta launch of Firefly, I’ve been using it to generate content to show user journeys and options—and it’s exponentially sped up my work.

“This example workflow tells the story of how the new Dropbox, Google Drive, and OneDrive add-ons would work in Express: The fictional user is a freelance book cover illustrator using Express to create book covers, and the process shows how they would pull in images from various cloud storage services. It required a variety of images to make the experience feel realistic.”

A screenshot showing 14 rows of thumbnails of AI-generated illustrations.
My first step was to create the book cover art for my persona. That began by creating multiple variations of each cover illustration—a process that takes minutes with Firefly—to find the one I wanted to use for the final output.
A multi-panel image showing the process of importing an AI-generated illustration into Adobe Express and adding text to it.
Next, I’d select a single illustration from each grouping to create the cover art. Because my persona would be working in Express, I imported the Firefly-generated images into the app to create the covers (this workflow has since been built into the application to make it even easier).
Two rows of three AI-generated illustrations with text applied. Top row (left to right): a lunar campsite that reads Mars 0001; a modern home built on a cliff that reads Cliffside; a scuba diver swimming in the ocean that reads Deep Dive. Bottom row (left to right): a bonfire outside a tent that reads Blazing Fire; a tropical waterfall scene that reads Into The Deep; a man standing inside a rock formation that reads Redrocks.
For this presentation, I created six covers. The different outputs help show the progression of an imagined workflow and what a user is doing with content.
A multi-panel image showing the process of uploading AI-generated illustrations into a cloud storage service (Dropbox).
Next, using a design tool, I filled in wireframes showing the folders of content in the cloud storage services. I started with greyed-out boxes, then filled them with the Firefly-generated images to show how the add-ons feature would work in Express.
A multi-panel image showing AI-generated illustrations saved to four different cloud storage services (Google Drive, Dropbox, OneDrive, and Box).
I needed four cloud storage examples—each showing how an illustration would scale—so I used a different image for each.
Since my persona would need to find an image file, I showed how the user would get to nested folders using a different series of images. Firefly was helpful with creating these variations: I began with the prompt, “modern cliff buildings” and reused that base prompt to capture those same images during different times of the day by generating nighttime and sunset variations.
A multi-panel image showing the process of uploading AI-generated illustrations from a cloud storage service (Dropbox) to Adobe Express using a new feature called add-ons.
In the end I showed the progression of a user’s workflow to create content in Express by uploading their work through a storage cloud add-on.

The Firefly beta is open to everyone. Visit the site and experiment with how to use it in your work.

Ask Adobe Design is a recurring series that shows the range of perspectives and paths across our global design organization. In it we ask the talented, versatile members of our team about their craft and their careers.
