How to adapt your design practice for the age of generative technology
The new three-way interaction model changing the product design landscape
For the launch of GenStudio, we used established tools like Figma and proven processes for handing designs over to engineering, all while prototyping countless interactions. The result was a beautifully designed application. But given the rapid pace of technological change, our "beautiful design" needed to evolve not years after the initial release, but within months. The reason is clear: generative AI, and the way we design for it, has completely upended interaction patterns, definitions of quality, and the needs of both AI models and users.
With this change, generative AI is altering how people interact with computer interfaces, which have long provided the foundation of UX design. Designers can no longer consider only relatively static human-computer interactions. They must consider more fluid exchanges between humans, interfaces, and models. We’ve moved from a two-way conversation to a three-way discussion. Furthermore, agent-to-agent interactions require the orchestration of multiple models and countless visible and invisible exchanges, all within a single interface.
Designers must adopt a new mindset centered on Human-Model-Interface experiences (HMIx), where interface design is inseparable from the behavior, possibilities, and limitations of generative models, to deliver trusted, outcome-oriented user experiences.
Redefining design practice
Design teams must adopt new ways of designing and collaborating that keep pace with this new paradigm. They now need a deep understanding of the underlying technologies: how those technologies interact with the user interface, the user’s intent and end goals, and the results and quality of the generative model. To create a successful interface, designers must understand:
- Technical capabilities of the underlying model, agent, or orchestrator
- Input design, and what information or context is required to deliver value
- User output expectations for individual actions and workflows
This can seem messy or chaotic. Functional proofs of concept must now combine ideal experience visualizations, product requirements, and model expectations, and must be created and iterated on in parallel with model development. Without this collaboration, there is no way to ensure that the ideal interface yields the ideal outcome. To understand the dynamics between these elements, designers must prototype early and evaluate outputs, because model behaviors redefine entry points, failure states, and success criteria. Model outputs can also change the fundamental design and information architecture of products.
A great recent example in GenStudio was discovering that many brand guidelines were too subjective to support effective validation. By deeply understanding what on-brand generative creation and validation required, the team developed two key experiences:
- A way to show enterprise users, through generative feedback, how to make their brand guidelines more objective
- The ability for enterprise users to request annotations (like hyperlinks) to verify specific brand application guidelines
These new interactions, combined with the improved output quality of the large language model, enhanced the experience of both creating and using GenStudio’s Unified Brand Service feature.
Designing for outcomes
Since these experiences will be judged on the quality of the generated output and the journey taken to reach it, designers must relentlessly consider complete end-to-end workflows and use cases. Some of the questions designers must be able to answer as AI-driven workflows are defined and designed:
- Do you know the user's intent and expectations up front?
- Can the intelligence reliably deliver on expectations?
- Do you know what it will take to “get it right”?
- Are you building trust with your communications?
- Does your UI support iteration when results fall short?
A framework for generative interactions that build trust and confidence
To go beyond a simple conversational overlay, designers must think outside the two-way, call-and-response patterns of traditional human-computer interaction. Complex conversational UX experiences typically have four key phases that work together to build confidence, trust, and comfort throughout an interaction: prompt, plan, show, next.
Prompt (or ask)
This is how interaction begins. It can be any selection, question, contextual suggestion, or request that helps users articulate intent and provide context to augment the model.
- Push for new and innovative ways of gathering context so that even minimal or no user input still yields a strong first response (see the sketch after this list). By leveraging metadata and prior interactions to infer intent, GenStudio generates strong first outputs even after a simple image upload.
- Smart prompt support is expected and can take many forms, from text to visual assistance. The parameters that ensure better output can range from a library of successful prompts to, in GenStudio’s case, selecting which components of brand imagery or information will best shape the result.
- Elevate common or necessary inputs within the prompt to increase user success. In Custom Models within GenStudio, when someone uploads an image, they’re prompted to select keywords and tags for it. When the model references those tags, the process becomes less subjective and the output aligns more closely with the user’s intent.
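To make this concrete, here is a minimal sketch of a context-enriched request, in TypeScript. Every name in it (AssetMetadata, GenerationRequest, buildRequest) is hypothetical and not part of any GenStudio API; the point is the shape of the idea: the explicit prompt is only one of several context sources, so even an empty prompt can still yield a strong first response.

```typescript
// Hypothetical types; a sketch of combining explicit input with inferred context.
interface AssetMetadata {
  fileName: string;
  dimensions: { width: number; height: number };
  dominantColors: string[];
}

interface GenerationRequest {
  userPrompt: string;         // may be empty; "no input" should still work
  inferredIntent: string[];   // guessed from metadata and prior interactions
  selectedTags: string[];     // explicit, structured input (less subjective)
  brandComponents: string[];  // which brand imagery or info should shape output
}

function buildRequest(
  userPrompt: string,
  asset: AssetMetadata,
  priorPrompts: string[],
  selectedTags: string[],
  brandComponents: string[]
): GenerationRequest {
  // Infer intent even when the prompt is empty, so the first output is strong.
  const inferredIntent: string[] = [];
  if (asset.dimensions.width > asset.dimensions.height) {
    inferredIntent.push("landscape banner");
  }
  if (priorPrompts.some((p) => p.toLowerCase().includes("holiday"))) {
    inferredIntent.push("seasonal campaign");
  }
  return { userPrompt, inferredIntent, selectedTags, brandComponents };
}
```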
Plan
As interaction continues, users need to understand what happens next. This is especially important as interactions with large language models move from simple conversations to agentic, multi-step orchestration.
- Reduce surprises and increase confidence by allowing users to refine or edit the approach and assumptions before execution. Multi-step agentic tasks often require several pieces of information for the output to be useful. Create ways for users to edit and refine what they provide the model.
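A minimal sketch of what a reviewable plan might look like, using hypothetical types (PlanStep, Plan, approvePlan) rather than any real agent API: the model proposes steps and surfaces its assumptions, and nothing executes until the user has approved or edited them.

```typescript
// Hypothetical plan structures; the model proposes, the user disposes.
interface PlanStep {
  description: string;   // e.g. "Pull the brand color palette"
  assumptions: string[]; // surfaced so users can correct them before execution
}

interface Plan {
  goal: string;
  steps: PlanStep[];
  status: "draft" | "approved" | "executing";
}

function approvePlan(plan: Plan, edits: Map<number, PlanStep>): Plan {
  // Apply the user's edits to individual steps, then mark the plan ready to run.
  const steps = plan.steps.map((step, i) => edits.get(i) ?? step);
  return { ...plan, steps, status: "approved" };
}
```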
Show
Within the interaction, provide visibility into the model’s actions. That means offering details about how the model will arrive at its output and making results traceable to how they were generated.
- Show thinking and key steps to build trust and confidence. If an output will require deep research, let users know in advance so they don’t mistake the wait for unresponsiveness. In GenStudio, when a task involves pulling in brand products and persona data for validation, users are notified step by step.
- Allow users to see the result along with the reasoning and rationale behind it. Show your references. Even though the references in GenStudio are based on proprietary information, after generating an output, GenStudio displays the sources (like which brand guidelines were applied), so people can understand and have confidence in the result.
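One way to structure this, sketched with hypothetical types (ProgressEvent, GenerationResult, runWithVisibility) rather than any real GenStudio interface: emit a named event before each step so a long-running task reads as progress instead of latency, and attach sources to the final result so users can trace how it was produced.

```typescript
// Hypothetical progress and result types for step-by-step visibility.
type ProgressEvent =
  | { kind: "step"; label: string } // e.g. "Fetching brand guidelines"
  | { kind: "done"; result: GenerationResult };

interface GenerationResult {
  output: string;
  sources: { title: string; reference: string }[]; // e.g. applied guidelines
}

async function runWithVisibility(
  steps: { label: string; run: () => Promise<void> }[],
  finish: () => Promise<GenerationResult>,
  onEvent: (e: ProgressEvent) => void
): Promise<GenerationResult> {
  for (const step of steps) {
    onEvent({ kind: "step", label: step.label }); // notify before each step runs
    await step.run();
  }
  const result = await finish();
  onEvent({ kind: "done", result }); // the result arrives with its sources
  return result;
}
```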
Next
Follow up with actions that are proactive and relevant; recommendations that extend a great output build a feeling of collaboration.
- Add trust and growth loops by making the content actionable, with ideas on how to iterate or take the next step. In Adobe Firefly, after generating an image, users can view their generation history alongside a list of possible next steps—like adjusting the style, changing a setting, or adding new details.
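A minimal sketch of actionable next steps, using hypothetical names (NextAction, suggestNextActions) rather than any Firefly API. Each suggestion carries the follow-up prompt it produces, so a good output becomes the start of the next iteration instead of a dead end.

```typescript
// Hypothetical next-step suggestions attached to a completed generation.
interface NextAction {
  label: string;                              // shown as a button or chip
  apply: (previousPrompt: string) => string;  // builds the follow-up prompt
}

function suggestNextActions(previousPrompt: string): NextAction[] {
  return [
    { label: "Adjust the style", apply: (p) => `${p}, in a minimalist style` },
    { label: "Change a setting", apply: (p) => `${p}, at golden hour` },
    { label: "Add new details", apply: (p) => `${p}, with richer background detail` },
  ];
}

// Clicking a suggestion re-runs generation with action.apply(previousPrompt)
// and appends the result to the generation history.
```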
This prompt-plan-show-next framework helps users feel supported throughout simple and complex interactions with generative models, and it increases success and confidence through each phase.
We’ve entered a new era. With the rise of AI and agent-driven experiences, design’s role in understanding users’ expectations has evolved. By considering the full end-to-end experience of generative workflows, not only will products be more resilient, but designed experiences will function as the connected and contextual collaborators that people need.