Designing for generative AI experiences

Shaping interfaces for artificial intelligence requires leaning into specific design skills

An illustration in curvilinear perspective. A person, with their back to the viewer, stands in the foreground of a never-ending field of pixels in shades of green, pink, purple, and black.

Illustration by Karan Singh

I became fascinated with artificial intelligence in graduate school when I began using AttnGAN (a machine learning method that generates detailed, realistic images from text) and poetry to generate images. The output was hard to decipher but I was hooked. Fast forward five years and everything—from image quality and representation to stylistic treatment and level of detail—has changed.

I'm now a senior experience designer on Adobe Design's Machine Intelligence New Technology (MINT) team, and alongside the quantum leap in generative AI there's been a paradigm shift in user expectations for the digital products that incorporate it. People want seamless experiences, personalized recommendations, and adaptive systems that cater to their unique needs and understanding.

Those new expectations are creating an expanding need, and a compelling opportunity, for UX designers to evolve beyond designing static interfaces and begin crafting more natural, intuitive, personalized, human-centered experiences. The shift won't require developing new or different sets of skills, but it will require leaning into those that are most useful.

The role of design in artificial intelligence

AI models are great at detection and pattern recognition (like spotting faces in a crowd), at classification (neatly organizing a jumble of data), at prediction (forecasting from historical data), and at recommendation (learning from previous selections and preferences). They can also synthesize and generate text, images, video, and audio by weaving fragments of information into new creations.

But even with its impressive capabilities, AI walks a fine line between predictability and unpredictability. While it can unfailingly recognize patterns and classify data, there's a degree of nuance it often misses (rendering human anatomy, for example). Because models can only make sense of what they've seen before, their ability to grasp the idiosyncrasies of new data is limited. Finally, and most importantly, AI algorithms may not accurately capture or portray human emotion, cultural context, or the depth of personal experience.

When designing interfaces that incorporate AI, designers must never overlook the needs of the people behind each touch and interaction. The foundational goal should be a reciprocal relationship between the technology and the people using it, one that evolves in tandem with each technological leap. When there's balance between users, design, and technology, it fosters a cyclical ecosystem where each enhances the other.

Key responsibilities of AI designers

Designing experiences is as important as designing the models' underlying algorithms, infrastructure, and data. Designers not only have the power to shape the future of AI, they also have the responsibility to ensure that the technology isn't just intelligent but also approachable, useful, and aligned with human values. They can do that by facilitating control, personalizing digital experiences, and building understanding and trust.

Facilitating agency and control

When designing for AI, designers must consider how to amplify human agency in the AI experience and make room for people's choices and decisions. Designing the experience for Text to image in Adobe Firefly began simply: generate images by typing in a prompt. But when we put ourselves in the shoes of users, we realized how intimidating an empty prompt field can be—crafting the right input can demand professional knowledge of multiple creative fields and art movements.

To help people better understand and adopt the new task of prompt-writing, the prompt bar on the Firefly homepage uses a familiar search-like interface, where users can enter a prompt in much the same way they would start a web search. But unlike a search bar, it's automatically filled with a sample prompt describing the background image, to educate first-time visitors.
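To make the pattern concrete, here's a minimal sketch of how a prefilled prompt bar might be modeled, assuming a simple first-visit flag. The names (PromptBarState, SAMPLE_PROMPT) are hypothetical, not Firefly's actual implementation:

```typescript
// A minimal sketch of the prefilled-prompt pattern described above.
// All names are illustrative; none come from Firefly's codebase.

interface PromptBarState {
  value: string;     // current text in the prompt bar
  isSample: boolean; // true while the auto-filled sample is showing
}

// A sample prompt describing the page's background image gives
// first-time visitors a concrete model of what a prompt looks like.
const SAMPLE_PROMPT =
  "a never-ending field of glowing pixels in green, pink, and purple";

function initialPromptBar(isFirstVisit: boolean): PromptBarState {
  return isFirstVisit
    ? { value: SAMPLE_PROMPT, isSample: true }
    : { value: "", isSample: false };
}

// The moment the user types, the sample yields to their own words.
function onUserInput(_state: PromptBarState, text: string): PromptBarState {
  return { value: text, isSample: false };
}
```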

Additionally, to reduce the creative blocks that can arise during prompt writing, and to give people more agency over generated images, we created a panel with controls, presets, and parameters that people can use to refine prompts and generated results. Displaying style options, with corresponding categories and thumbnails, fosters a sense of ownership, builds a deeper connection between the user and AI, and encourages people of all skill levels to experiment.

Two screenshots. On the left, a panel from Adobe Firefly titled "Effects." The content of the panel is two rows of thumbnail images of a hot air balloon with style descriptions beneath them. The top row reads (from left): "Digital art," "Synthwave," "Painting." The bottom row reads: "Layered paper," "Hyper realistic," "Bokeh effect." Above the thumbnails are two rows of buttons labeled "All," "Popular," "Movements," "Themes," "Techniques," "Effects," "Concepts," "Materials." Below the thumbnails are three dropdown menus titled "Color and tone," "Lighting," "Camera angle." The screenshot on the right is of a forest scene, in muted shades of peach and teal, composed of cut paper. On the bottom left of the image are two tabs labeled "Layered paper" and "Surreal lighting."
The Style effects panel in Firefly, with categories, controls, presets, and parameters to adjust things like art movements, color and tone, lighting, and camera angle, helps people experiment with refining prompts and generated results.
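A rough sketch of how such a panel's state could be modeled, with the category names taken from the screenshot above; the types and the togglePreset helper are hypothetical, not Adobe's code:

```typescript
// A hypothetical model of the presets panel (illustrative names only):
// a chosen style refines the request instead of replacing the prompt.

interface StylePreset {
  label: string; // e.g. "Layered paper" or "Synthwave"
  category: "Movements" | "Themes" | "Techniques" | "Effects" | "Concepts" | "Materials";
}

interface GenerationRequest {
  prompt: string;
  presets: StylePreset[];
  colorAndTone?: string; // the dropdowns below the thumbnails
  lighting?: string;
  cameraAngle?: string;
}

// Toggling a preset layers it onto (or removes it from) the request,
// so people can experiment without losing what they've written.
function togglePreset(req: GenerationRequest, preset: StylePreset): GenerationRequest {
  const isActive = req.presets.some((p) => p.label === preset.label);
  return {
    ...req,
    presets: isActive
      ? req.presets.filter((p) => p.label !== preset.label)
      : [...req.presets, preset],
  };
}
```

Keeping presets separate from the prompt text means a style can be layered on, compared, or removed without ever mangling what the user wrote, which is where the sense of ownership comes from.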

Personalizing and contextualizing experiences

In today's digital landscape, designers must understand that users want tailored and contextualized experiences (those that are maximally relevant to their individual needs). In UX design, contextual understanding means recognizing a user's environment, needs, and situation and adapting experiences to fit them.

Contextual understanding is at the heart of delivering personalized experiences, even those deeply embedded in workflows. On the one hand, the contextual task bar in Photoshop reveals relevant information, actions, and features during a specific user journey. On the other, AI features like Generative Fill understand the image and the user's input in context to update a section of an image in a consistent style (behind the scenes, AI predicts and fills in missing information based on contextual cues, minimizing the need for manual input).
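As an illustration of that division of labor, here's a conceptual sketch of what a generative-fill request might carry. The generateFill function and its types are placeholders invented for this example, not a real Adobe API:

```typescript
// A conceptual sketch of a generative-fill request. The key idea: the
// model receives the whole image plus a mask, so the unmasked pixels act
// as the contextual cues that keep the filled region stylistically
// consistent, minimizing manual input.

interface GenerativeFillRequest {
  image: Uint8Array; // the entire image, not just the selection
  mask: Uint8Array;  // marks the region to regenerate
  prompt?: string;   // optional; omitted means "fill in something plausible"
}

async function generateFill(req: GenerativeFillRequest): Promise<Uint8Array> {
  // A real implementation would call a hosted inpainting model that
  // conditions on both the unmasked pixels and the (optional) prompt.
  throw new Error("illustrative placeholder only");
}
```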

Multi-modality (the ability to interpret multiple types of input, like text, voice, or images) and dynamic interaction (continuous, responsive interaction that adapts to user input in real time) create rich, flexible, personalized, and contextual experiences. In Project Neo, users can easily create and modify shapes using multiple modes of interaction (3D shapes or multiple camera angles). It's a multifaceted approach that caters to diverse user preferences and skill levels by enabling people to engage with content in personalized and intuitive ways.
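One way to picture multi-modal input is sketched below, under the assumption of a simple shared scene state; these types are illustrative, not Project Neo's actual code:

```typescript
// A hypothetical model of multi-modal, dynamic input: a discriminated
// union lets a single update pipeline accept typed prompts, pointer
// drags, and camera moves alike.

interface SceneState {
  promptHistory: string[];
  selection: { x: number; y: number };
  camera: { azimuth: number; elevation: number };
}

type ModalInput =
  | { kind: "text"; prompt: string }
  | { kind: "pointer"; x: number; y: number }
  | { kind: "camera"; azimuth: number; elevation: number };

// Every modality funnels into the same state update, so users can mix
// typing, dragging, and orbiting mid-task without switching contexts.
function applyInput(scene: SceneState, input: ModalInput): SceneState {
  switch (input.kind) {
    case "text":
      return { ...scene, promptHistory: [...scene.promptHistory, input.prompt] };
    case "pointer":
      return { ...scene, selection: { x: input.x, y: input.y } };
    case "camera":
      return { ...scene, camera: { azimuth: input.azimuth, elevation: input.elevation } };
  }
}
```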

Building understanding and trust

A large part of designing for AI experiences is building empathy between the user and the model to mitigate potential risk. A big step toward doing that is improving explainability and interpretability (providing clear rationale for a model's decisions and outcomes) so people can make sense of what's happening and why.

To improve the explainability of Firefly's Style reference feature, we introduced several design elements to inform people about how AI could enhance the experience. An onboarding tour provides a quick look at the feature and how data is stored. Later in the upload flow, a pop-up modal communicates "You should own this uploaded image" to help people understand the guidelines for uploading reference images.

Three screenshots on a blue-to-green-to-purple gradient background. On the left is a panel titled "Reference" with a dropdown button in the upper right with the letter "i" in it. Beneath that are an image upload field alongside a black oblong button labeled "Upload image," above a white oblong button labeled "Browse gallery." Beneath the buttons are two rows of thumbnail images. The upper right screenshot is a pop-up notice that reads: "About uploading images. Style reference helps users apply a particular style to the images you generate. To use this feature, you must have the rights to use any third-party images." Beneath the text is a white button labeled "Cancel" and a blue button labeled "Continue." The lower right screenshot is the start of a pop-up tutorial. At the top is a pug in a pink chair, wearing a pink and teal long-sleeved T-shirt. Beneath the image it reads: "Match image style. Choose a reference image from our gallery or upload your own to match its style. Learn more." Beneath the text are two white buttons labeled "Skip tour" and "Next."
Onboarding for Firefly's Style reference feature (left) includes a description of the feature and guidelines for uploading third-party images (top right), and a feature tour (bottom right).

Design can also make experiences more participatory by cutting jargon when explaining AI's abilities and potential issues. By designing clear feedback systems (and inserting them at the right moments in a user journey) and aligning with people's mental models of AI's abilities and limitations, designers can keep users from being caught off guard by incorrect or unexpected outputs. And by predicting and learning from error conditions, designers can build trust and confidence in AI systems. In Firefly, we've made it easy for people to provide feedback and report issues. That explicit feedback lets teams continually refine model quality to enhance the overall experience, and it helps build trust by letting users decide what data they provide.

Two screenshots of feedback forms on a blue-to-green-to-purple gradient background. The form on the left reads: "Your feedback is appreciated. Generated content and prompt information will be included with your feedback." To the right of the text is a column of selections under the heading: "What went wrong? Select all that apply." The selections are: "Harmful stereotype or bias," "Copyright or trademark violation," "Nudity or sexual content," "Violence or gore," "Not aesthetically pleasing," "Errors or poor quality," "Not relevant to prompt." Across the bottom is a blank field with the heading "Note," above a white button labeled "Cancel" and a blue button labeled "Continue." The form on the right reads: "Your feedback is appreciated. Generated content and prompt information will be included with your feedback." To the right of the text is a column of selections under the heading: "What worked well? Select all that apply." The selections are: "Aesthetically pleasing," "High quality or production ready," "Prompt accurately interpreted," "Closely matches requested style or theme," "Exceeds expectations, impressive," "Great for inspiration," "Other." Across the bottom is a blank field with the heading "Note," above a white button labeled "Cancel" and a blue button labeled "Continue."
Firefly has an easily discoverable feedback mechanism for output quality (negative feedback left, positive right) that allows people to share observations—along with the generated content and prompts—that can help tailor future model output.
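For illustration, the forms above imply a structured payload roughly like the following; the field names are guesses for this sketch, not Firefly's actual schema:

```typescript
// A hypothetical shape for the feedback payload implied by the forms
// above; field names are illustrative, not Firefly's actual schema.

interface GenerationFeedback {
  sentiment: "positive" | "negative";
  // The checked "select all that apply" options, e.g.
  // "Errors or poor quality" or "Great for inspiration".
  reasons: string[];
  note?: string;              // the free-text field at the bottom
  prompt: string;             // included with feedback, per the form's notice
  generatedContentId: string; // ties the report to a specific output
}

// Structured feedback like this lets teams trace a complaint (or a
// compliment) back to the exact prompt and generated result.
const example: GenerationFeedback = {
  sentiment: "negative",
  reasons: ["Not relevant to prompt"],
  note: "The balloon is missing from the scene.",
  prompt: "a hot air balloon over a paper-cut forest",
  generatedContentId: "gen-0001",
};
```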

The soft skills UX designers need to design for AI

The practice of experience design is built on understanding our users. Empathy, a core skill of UX designers, is undeniably valuable in designing for AI, but other qualities also make people particularly well-suited to this type of design work.

Ways to expand an experience design practice

AI is maturing rapidly, and alongside that growth there's increasing demand for skilled designers in a field that didn't exist three years ago. UX design skills can serve as a base, but it's helpful for designers new to the field to understand fundamental concepts about artificial intelligence, as well as a bit about what's under the hood. Fortunately, there are multiple avenues for learning, collaboration, and innovation, and steps designers can take to begin a career transition into this specialty.

As AI is incorporated into more digital products, the need for designers who understand the nuances of designing for those experiences will only grow. For experience designers, already accustomed to adapting their processes and approaches as technology advances, crafting natural, intuitive, personalized, human-centered experiences for AI will simply require leaning into and sharpening a set of skills they already have.
