Behind the design: Adobe MAX Sneaks 2024

A peek at three exciting new feature explorations from Adobe

A photo of a stage set with a tropical theme. Ten pinball machines are lined up in front of a man speaking at a podium. At the front and center of the stage, MAX is spelled out in yellow, all-cap, sans serif type. Among palm trees and purple lighting, the word SNEAKS is writ large in neon lights.

Adobe MAX Sneaks, a must-see session every year at Adobe MAX, showcases experimental product features and offers a glimpse into the innovation that often shapes the future of Adobe’s products. Teams across Adobe collaborate to bring these visionary concepts to life, providing a first look into these emerging technologies.

This year, nine Sneaks spotlighted groundbreaking developments in AI and how it’s transforming the future of creativity across 3D, photo and video editing, vector illustration, audio engineering, and content authenticity.

We're looking at the designs behind three of these exciting concepts with the Adobe Design teams that contributed to them. Their insights into the processes behind these experimental projects (that may or may not make it into upcoming versions of Adobe products) show how these advancements might empower creatives in the future.

Project Hi-Fi

By combining composition references with text-to-image prompts, Project Hi-Fi transforms early-stage collages, mockups, and sketches into high-fidelity, polished visualizations. Envisioned as a plug-in for Photoshop, with the goal of shortening the time between idea generation and polished composition, it renders stylized, high-fidelity representations using real-time generative AI.

<iframe width="560" height="315" src="https://www.youtube.com/embed/iM8ejIpaqF8?si=ayqXaMaQ8HB1ElTQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

What was the primary design goal for this feature exploration?

Veronica Peitong Chen (Senior Designer, Machine Intelligence & New Technology): Creators deeply enjoy the messy, sometimes chaotic nature of the ideation and conceptualization phases of the creative process—where raw ideas take shape. But these stages can be incredibly time-consuming, and that can slow down momentum.

When we set out to design an experience that turns mockups and rough sketches into polished, high-fidelity visuals, our goal was to make that process feel smoother, faster, and more intuitive. The intent of Project Hi-Fi was to help creators quickly transform rough ideas into something more tangible without losing the magic of the creative journey. Learning from our experience designing Adobe Firefly, as well as the countless interviews we’ve done with Firefly users, we observed that AI is reshaping how people approach design. With the help of generative technology, the creative process can be greatly accelerated, becoming a continuous dialogue between human ideas and machine-driven possibilities.

Creators want to tap into AI’s potential to push boundaries, but they also need to feel in control. By creating a seamless bridge from low-fidelity mockups to high-fidelity visuals, we aimed to empower users with flexible control options so they could guide adherence to their original concepts. The process had to be fluid, allowing designers to move effortlessly across multiple apps without missing a beat.

What user insights did you leverage to help inform the design solution?

Veronica: One major takeaway came from an Adobe Design Research & Strategy user study that highlighted a fascinating shift in how AI transforms the user’s role. Rather than merely executing tasks, users are increasingly becoming curators, guiding and refining the outputs generated by AI. Our interpretation of this shift meant that the front end of the creative process—ideation—had become more demanding and required creatives to spend more time experimenting and iterating to get the results they wanted.

Beyond this, we also observed how users were currently engaging with Firefly, Photoshop, and other creative apps during the ideation process. Many creative professionals use Photoshop to quickly put together mood boards or composites, and Firefly is helping users realize their visions more efficiently through prompting, composition, and style references. These insights highlighted how users want tools that blend AI with natural workflows to enhance their ability to experiment and visualize ideas without the fear of failing.

What was the biggest design hurdle in completing it?

Veronica: Navigating the complexities of cross-app integration while maintaining a cohesive system that could scale over time. Since every app has its own set of requirements, limitations, and user expectations, it was a challenge to design a consistent and seamless experience.

We not only had to ensure the design worked on different platforms, but also had to understand the nuances of each one so the design didn’t lose functionality or consistency in different contexts. For example, we crafted source-picker solutions that let users easily bring generations from Adobe Substance 3D or Adobe Illustrator into Photoshop layers. At the same time, the design needed to be future-proof. We had to think systematically to create a flexible, scalable architecture that could adapt to new features, integrations, and evolving technologies.
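
The architecture itself isn’t public, but a minimal TypeScript sketch can illustrate the idea of abstracting per-app differences behind one interface. Every name below (GenerationSource, SourceRegistry, the capability fields) is hypothetical, not a real Adobe API:

```ts
// Hypothetical sketch: one interface hides per-app differences so the
// picker UI never needs app-specific code, and new sources can be
// registered without touching existing ones.
interface GenerationSource {
  id: string;                 // e.g. "substance-3d", "illustrator"
  displayName: string;
  // Each source reports its own constraints so the UI can adapt.
  capabilities: { maxWidth: number; maxHeight: number; supportsVector: boolean };
  // Fetch the source app's current document or render as a bitmap.
  fetchSnapshot(): Promise<ImageData>;
}

class SourceRegistry {
  private sources = new Map<string, GenerationSource>();

  register(source: GenerationSource): void {
    this.sources.set(source.id, source);
  }

  list(): GenerationSource[] {
    return [...this.sources.values()];
  }

  get(id: string): GenerationSource | undefined {
    return this.sources.get(id);
  }
}
```

Adding a new integration then means writing one adapter that implements the interface, which is the kind of flexibility a scalable cross-app design depends on.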

Those hurdles became far less challenging thanks to the incredible collaboration with our prototyping team and the open lines of communication throughout the process. Their input and expertise helped us transform these complexities into opportunities, making the process much smoother than expected.

What did you learn from the design process?

Veronica: Given the fast pace and dynamic environment of this project, we had to embrace temporary solutions so we could constantly iterate and look for areas to improve. This agile approach allowed us to gather feedback more quickly and adapt the design based on real usage. Some of the integration constraints became catalysts for exploring a more layered user experience: what users encounter during their first interaction should offer deeper value as they grow more familiar with the feature.

The process of preparing for MAX Sneaks gave us the luxury of constantly testing the build and experiencing the project from a user’s perspective. For instance, we noticed that when the source switched, the prompt remained unchanged, but users would likely want to start a new prompt to align with the new file. This insight led us to update the build to remember prompts based on file choice. This kind of invisible complexity can be overlooked in the design process, but solving for it is what makes good projects great.
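
As a concrete illustration of that fix, here is a minimal sketch of per-file prompt memory in TypeScript. The storage scheme in the actual build isn’t public; the class and its file ids are assumptions for illustration only:

```ts
// Remembers the last prompt a user wrote for each source file, so
// switching sources restores that file's prompt instead of carrying
// the previous file's prompt along.
class PromptMemory {
  private lastPromptByFile = new Map<string, string>();

  save(fileId: string, prompt: string): void {
    this.lastPromptByFile.set(fileId, prompt);
  }

  // On a source switch: restore the remembered prompt, or start fresh.
  restore(fileId: string): string {
    return this.lastPromptByFile.get(fileId) ?? "";
  }
}

// Usage: switching to a new file starts a fresh prompt; switching back
// restores what the user had written.
const memory = new PromptMemory();
memory.save("fileA", "moody beach resort at dusk");
console.log(memory.restore("fileB")); // "" (new file, new prompt)
console.log(memory.restore("fileA")); // "moody beach resort at dusk"
```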

During this time, we also learned the significance of crafting a cohesive narrative. By presenting the challenges, the design decisions, and how each iteration moved us closer to the final aha moment, we practiced guiding the audience through the evolution of the project, which required striking a balance between conveying its complexity and keeping the message digestible and engaging.

Design: Daniela Caicedo, Veronica Peitong Chen, Shannon Rubes, Kelsey Smith, Vikas Yadav; Design Prototyping: Yaniv De Ridder, CJ Gammon, Tim Kukulski, Greg Muscolino; Research & Strategy: Laura Herman

Project Perfect Blend

Project Perfect Blend, a generative model for image harmonization in Adobe Photoshop, automatically creates visual consistency in composite images. It adjusts color, relights the foreground, and casts shadows in the background to create beautifully blended photographic compositions.
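
Perfect Blend’s harmonization comes from a generative model, which can’t be reproduced here. For intuition only, the sketch below shows the classical baseline such a model improves on: shifting the foreground’s per-channel color statistics to match the background (a Reinhard-style color transfer, simplified to work directly in RGB and assuming an opaque foreground):

```ts
// Compute mean and standard deviation of one RGBA channel (0=R, 1=G, 2=B).
function channelStats(pixels: Uint8ClampedArray, channel: number) {
  let sum = 0, sumSq = 0;
  const n = pixels.length / 4;
  for (let i = channel; i < pixels.length; i += 4) {
    sum += pixels[i];
    sumSq += pixels[i] * pixels[i];
  }
  const mean = sum / n;
  return { mean, std: Math.sqrt(Math.max(sumSq / n - mean * mean, 1e-6)) };
}

// Shift the foreground's color distribution toward the background's:
// normalize each channel by the foreground stats, then re-scale to the
// background stats. Uint8ClampedArray clamps results to 0..255.
function harmonizeColors(fg: ImageData, bg: ImageData): void {
  for (let c = 0; c < 3; c++) {            // R, G, B; alpha untouched
    const f = channelStats(fg.data, c);
    const b = channelStats(bg.data, c);
    for (let i = c; i < fg.data.length; i += 4) {
      fg.data[i] = (fg.data[i] - f.mean) * (b.std / f.std) + b.mean;
    }
  }
}
```

A statistics match like this only explains the color-adjustment part; relighting the foreground and casting plausible shadows is exactly where the generative model goes beyond what classical transfers can do.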

<iframe width="560" height="315" src="https://www.youtube.com/embed/xuPd0ZZa164?si=fajh9NkF5xHO9c0O" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

What was the primary design goal for this feature exploration?

Avalon Hu (Staff Designer, Photoshop): Whether for product photography, posters, or social media, compositing and harmonization are important workflows for professionals and casual creatives—and one of the main reasons people use Adobe Photoshop. But that task can be extremely time-consuming. It involves tedious masking and endless trial and error to get the correct color, lighting, and shadows.

Although this feature offered a great opportunity to remove the tedium of those tasks, our challenge was to integrate it into Photoshop in a way that was easy to use, streamlined the task, and made sense in the overall workflow. And since generative AI, and the features we develop with it, is continually evolving, we had to think about how that evolution might affect a future version of this feature—and the features that might one day be created alongside it.

What user insights did you leverage to help inform the design solution?

Avalon: At the time, we were exploring two AI technologies: one focused on harmonizing two images using color and lighting without changing the perspective; the other could fill a selected area with a generated image based on a reference photo. Though developed separately, we’d been looking at these two features in tandem because they were remarkably similar.

After two rounds of user studies, we learned that what people really wanted was functionality to help them harmonize assets (blending two images). We eventually decoupled the features to focus on that standalone functionality. That ultimately became Project Perfect Blend, and we designed the experience as a simple, one-click function that could be accessed contextually via an action menu on the selected layer. Its simplicity conveys how powerful this technology is, both for people without much color and lighting knowledge and for professionals who want to cut time from a workflow.

What was the biggest design hurdle in completing it?

Avalon: Since we were initially working with two similar AI explorations, one of our earliest struggles was designing a way to toggle between them. That was resolved after a first round of user studies showed that treating them as independent generative explorations was the right way to move forward.

We worked with Adobe Research to understand the capabilities and limitations of the technology and how it could best work alongside Photoshop’s existing technology and features. We addressed questions about whether the harmonized results should show up as a pixel layer or an adjustment layer, whether variations should be generated, and what type of controls should be exposed to users. Once our learning and our ideas crystallized, the project came together quickly and smoothly.

What did you learn from the design process?

Avalon: When bringing new technology to a product like Photoshop, design is the bridge between users and research scientists. Understanding the capabilities of the technology was instrumental in translating it, through design, correctly and respectfully to users. We also needed to advocate for users and how they would use it in the real world, so that we could design a UI that was genuinely valuable. Finding that balance was key, and it was possible because of the incredible speed at which our prototyping team was able to put together a working prototype.

Working on the Sneaks storyline helped us think hard and clearly about the purpose and the execution of the project. Because the feature is exploratory, storytelling plays a huge role in conveying its value to a broader audience.

Design: Avalon Hu; Design Prototyping: Evan Shimizu, CJ Gammon; Adobe Design Research & Strategy: Roxanne Rashedi; Adobe Research: Mengwei Ren.

Project Super Sonic

This new exploration for sound design in Adobe’s video and audio tools bypasses the need to own and search large sound libraries. Project Super Sonic streamlines the creation workflow by leveraging generative AI to make adding sound effects to video as easy as typing in a text prompt or selecting objects from the editor timeline.

<iframe width="560" height="315" src="https://www.youtube.com/embed/RddSWodgX5w?si=jW6bCI04eW6AlBpW" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

What was the primary design goal for this feature exploration?

Adolfo Hernandez Santisteban (Staff Designer, Digital Video & Audio): Our primary goals were to validate and advance our audio research initiative for generative AI sound effects, to address pain points in the video editing process, and to ensure that the quality of the AI model met professional standards and could seamlessly integrate with Adobe’s audio and video product ecosystem. We also wanted to democratize sound design for all video editors, regardless of expertise, by making advanced sound design tools accessible and easy to use—so that even beginners could enhance their projects with professional-quality sound effects. Finally, we had to create a scalable foundation that could accommodate future enhancements as the model and user experience evolve.

What user insights did you leverage to help inform the design solution?

Adolfo: Video editors often spend considerable time searching for the right sound effects to match their visuals. Knowing that those sourcing pain points can interrupt creative flow, we aimed to streamline the process by enabling on-demand generative AI sound effects.

Users prefer tools that fit into their existing workflows, and feedback showed that editors wanted to integrate sound effects without a steep learning curve. We aligned our design solutions with those mental models and, since editors want the ability to customize and fine-tune AI-generated sounds to match their vision, we also addressed the need for creative control over sound effect outputs.

Finally, early research revealed the necessity for AI-generated sound effects to meet the high standards of professional editors so they could be used confidently in professional productions.

What was the biggest design hurdle in completing it?

Adolfo: The feature needed to balance advanced AI capabilities with an intuitive user experience. That meant designing an interface that allowed video editors to maintain creative control without overwhelming them with complexity. Making it both powerful and accessible—technologically advanced but natural and effortless to use—meant including enough adjustable parameters and options to allow for personalized tweaking, so editors could achieve the exact AI-generated results they wanted.
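
As a hedged sketch of what “enough adjustable parameters” might look like, the TypeScript below pairs a one-line prompt with a few tuning controls. The endpoint URL, field names, and defaults are all hypothetical, not Adobe’s API:

```ts
// Hypothetical request shape: a text prompt plus a few exposed controls,
// keeping the default path one-click simple while allowing fine-tuning.
interface SfxRequest {
  prompt: string;          // e.g. "footsteps on gravel, slow pace"
  durationSeconds: number; // clip length, matched to the timeline selection
  intensity: number;       // 0..1: how literally to follow the prompt
}

// Placeholder endpoint for illustration; assumes the service streams
// back a single WAV clip for the request.
async function generateSfx(req: SfxRequest): Promise<Blob> {
  const res = await fetch("https://example.invalid/generate-sfx", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`generation failed: ${res.status}`);
  return res.blob();
}
```

The design point is in the interface, not the call: a small set of named parameters lets beginners ignore the controls entirely while professionals tweak them to hit an exact result.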

What did you learn from the design process?

Adolfo: Since traditional design tools can fall short due to limited integration with audio functionality, it was difficult to convey the full user experience through static designs or standard prototypes. Having a dedicated design prototyping team that developed live prototypes connected to a working AI model was instrumental.

Their expertise enabled us to validate our designs more quickly and get prototypes into the hands of users sooner, so they could interact with the feature in a realistic setting and provide feedback on both the audio output and the user interface. That rapid validation was crucial because it accelerated the overall development timeline by providing immediate insights that could be acted on.

The process also emphasized the importance of the iteration, prototyping, and testing cycle. Engaging in continuous cycles of designing, prototyping, and testing allowed us to align the feature more closely with the actual needs and expectations of video editors. That user feedback highlighted what worked well and what needed improvement, and allowed us to test assumptions and make informed decisions based on real user interactions.

Design: Gabi Duncombe, Adolfo Hernandez Santisteban, Mary Tran; Design Prototyping: Lee Brimelow, Yaniv De Ridder; Research: Hugo Flores Garcia, Oriol Nieto, Justin Salamon, Prem Seetharaman.

Join Adobe’s celebration of innovation: Watch all ten Adobe MAX Sneaks 2024.
