AI/ML: The good, the bad, and the buzz

A look at technology's influence on illustration and design

An anthropomorphized rabbit wearing a long coat, boots, gloves, and a VR headset stands against a glowing background of peach, pink, blue, and purple.

Illustration by Artranq / Adobe Stock

Up until about a year ago, Artificial Intelligence (AI) and Machine Learning (ML) weren’t top of mind for most people—even those in creative industries. Enter Generative AI. Now everyone from students to morning news anchors is talking about them and, as can be expected, not all that information is correct. Misinformation or not, AI/ML is a topic that generates strong feelings.

Because of my deep involvement in drawing and painting at Adobe, my interest in AI/ML centers on the digital artists and illustrators who were among the first to raise concerns about text-to-image AI generators impacting their professions. I understand and empathize with their concerns. But when I think back on how technology has affected artists and their art throughout history, I can’t help but see AI/ML as a “tool” that is creating a massive shift in how people make and consume art and design.

As I step outside the buzz of these technologies and take a closer look at some of the impact (good, bad, and speculative) that they’re having on the illustrative arts, it’s important to remember that progress has never marched backward. AI/ML has been present in many tools—not just creative ones—for more than a decade, and Generative AI is not going away. That said, it’s imperative that creatives everywhere step in, share their voices, and help shape its evolution.

A few definitions

These technologies are changing rapidly, but there is some terminology that will survive even as they do:

Artificial Intelligence is simulated human intelligence in machines. Algorithms find patterns in massive amounts of data, and those patterns are used to make predictions. Many people in creative industries are concerned about AI because of its generative potential, but it can be extremely useful for repetitive work; the most recognizable example is the customer service chatbot, which can replicate human-like conversation.

Machine Learning is a subset of AI that allows computers, programmed with specific learning algorithms (models), to learn from data. There are multiple learning models that can be applied to the data; in the case of Generative AI, the data can be text, images, audio, or video, and the output is generative—that is, existing data is used to create new data. (A small code sketch after these definitions illustrates the basic learn-and-predict loop.)

Generative AI is programming algorithms used to create new content. There are many generative products (DALL-E, Stable Diffusion, Midjourney), but the best known is ChatGPT, which gained worldwide attention with its text generation model. With all Generative AI, data accessed through text and image prompts is used to create new text and images.

Web3, not to be confused with Web 3.0, is the third generation of the World Wide Web. The most important aspects of Web3 are decentralization and ownership: People will have complete control over their works, their digital assets, their online identities, and where that information is stored.
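To ground the definitions above, here is a minimal Python sketch of the learn-from-data loop using scikit-learn. The scenario and the numbers are invented purely for illustration and aren’t tied to any real product or dataset.

```python
from sklearn.linear_model import LogisticRegression

# Made-up data, purely for illustration: hours spent on a piece and whether
# a client accepted it. The learning algorithm (the "model") finds a pattern
# in the existing data...
hours_spent = [[1], [2], [3], [8], [9], [10]]
was_accepted = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_spent, was_accepted)

# ...and uses that pattern to make a prediction about a new, unseen case.
print(model.predict([[7]]))  # likely predicts acceptance (1) for 7 hours
```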

Technology’s impact on creative output throughout history

Feeling uncomfortable about new technologies is not without precedent. Understanding how they’ve impacted society in the past can help us focus on how we can affect and shape the evolution of AI, ML, and generative technologies:

Moveable type & the printing press

It’s difficult to imagine that the invention of the printing press—and its contribution to widespread knowledge—had anything other than a positive impact. At the time, though, there was widespread concern that broad access to printed materials would cause an epidemic of misinformation.

Cameras & digital photography

While not initially a form of artistic expression, some Renaissance painters used very early camera technology to aid traditional painting but were hesitant to divulge it as part of their process for fear of being accused of “cheating.” Similarly, anyone who watched the advent and adoption of digital photography can probably remember the stigma faced by artists who embraced it: their work wasn’t considered “serious.”

Templates & automation

The rise of templated website design (from companies like Squarespace and Wix) is recent enough that it’s easy to remember the job-security concerns of designer/developers whose professions included coding websites from scratch. What was once considered an industry-killing technology has since been adopted by those same creative professionals so they can spend less time on development and more time designing and creating content.

The struggles we’re facing with these new technologies

I want to quickly make a distinction between inspiration and sampling: People can and do reference artists' work for inspiration; AI can and does reference artists' work as samples. Both benefit from artistic reference, but people interpret art to visualize new ideas and AI uses it to generate new ideas. Since in certain circumstances Generative AI models work by scraping the Internet to “learn,” artists and illustrators are concerned and angry that their work is potentially being used, broken apart, rearranged, and “regenerated” without permission or attribution. With this type of content generation, artists and illustrators are primarily concerned about attribution and copyright, but there are other issues worth mentioning:

1. Attribution

Generative AI models are sometimes trained on web data to create images. I’ve talked to many artists who’ve seen remnants of their signatures in the art outputs of certain Generative AI products. It’s not hard to understand why those artists believe that not only is their work being used (taken), but they’re also not being credited for their intellectual property.

2. Copyright

With Generative AI, copyright is a double-edged sword that serves neither AI content creators nor the artists whose work has been used to train it.

Since only humans can be “authors” under the Copyright Act, there is no copyright protection for works generated solely by machines. There was headway on that front last year when the U.S. Copyright Office granted protection to Kris Kashtanova for the comic book Zarya of the Dawn, but that protection has since been partially revoked: the AI-generated images themselves are no longer covered.

3. Style imitation

Any artist with work on the Internet may have had their work used to train AI without their knowledge. In turn, a generative AI model could potentially imitate the style of a freelance illustrator who makes their living off their personal style. This could understandably feel like a violation.

Nine digital portrait paintings in three rows. Top row: Lionel Messi, Tupac Shakur, Giannis Antetokounmpo. Middle row: a Generative AI portrait of a white female, Billie Eilish, a Generative AI portrait of a Black male. Bottom row: a Generative AI portrait of a Black female, Finneas O'Connell, a Generative AI portrait of a Black male.
Adobe designer Dana Jefferson trained an AI model on her illustration style. She generated four images and posted them, alongside five originals, in a nine-square grid. It’s nearly impossible to determine which is which.

4. Site scraping

Site scraping involves extracting content and data from websites using a programming script. In generative visual technology, most AI models use diffusion, which destroys that content and data by adding “noise.” The model then learns to reverse the process, hallucinating new, coherent visuals out of the noise. Simply put, Generative AI collects visual information from the web, destroys it, then references its “knowledge” to remove that noise and create new visuals.
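To make the noise-and-denoise idea a bit more concrete, here is a toy Python sketch. Everything in it is invented for illustration: the “image” is random data, and the trained neural network that would do the actual denoising is only described in a comment, not implemented.

```python
import numpy as np

# Toy sketch of the diffusion idea: the forward process destroys an image by
# repeatedly adding noise. (Real systems train a neural network to reverse
# this process; that part is only described below.)
rng = np.random.default_rng(seed=0)

image = rng.random((64, 64, 3))     # stand-in for a scraped training image
noisy = image.copy()

for step in range(200):             # forward process: keep adding noise
    noisy += rng.normal(scale=0.1, size=noisy.shape)

# The correlation with the original drops toward zero as noise accumulates.
print(np.corrcoef(image.ravel(), noisy.ravel())[0, 1])

# Generation runs the other way: starting from pure noise, the trained model
# removes a little noise at each step until a new, coherent image emerges
# that matches the text prompt.
```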

5. Consistency

One of the criticisms of Generative AI is that the results aren’t consistent. In part, results vary because of differences in how AI systems are trained, but it’s also because these systems are always learning and because generation typically starts from random noise, so each run samples a different outcome. Typing the same prompt into an AI system multiple times will produce different results each time. I recently tested this with the prompt Frida Kahlo Self Portrait with Thorn Necklace and Hummingbird, and it returned 16 images, each an unsettling iteration of the original.

Sixteen AI-generated images in two rows of eight of Frida Kahlo's painting "Self Portrait with Thorn Necklace and Hummingbird."
Results of the prompt Frida Kahlo Self Portrait with Thorn Necklace and Hummingbird.

It’s interesting to note how AI interprets prompts (words). I had my four-year-old daughter help me with a few to test my assumptions about the randomness of output. Her first was bones of dragons on the grass with plants, animals, and people, which created four Surrealist landscapes. Her second, a train this big with persons and balls that is all red, returned a series of round(ed) trains. After seeing the results, she said that not only were they not what she expected, they weren’t what she wanted.

Eight digital images in two groups of four: On the left are four very similar AI-generated landscapes of monstrous-looking green plants against a blue sky. And on the right are four AI-generated red trains on railroad tracks. The top two are ball-shaped with no train characteristics and the bottom two are train-shaped with no ball characteristics.
The results of two generative AI prompts. On the left, bones of dragons on the grass with plants, animals, and people, and on the right, a train this big with persons and balls that is all red.

In the end, AI-generated art is fun to play with, and quite probably useful for reference or inspiration, but if consistency or “personal style” is what people are looking for, there’s no certainty that they’ll find either.

6. Bias & lack of diversity

I recently used generative software to create a family portrait. I’m white, and my husband and children are also white. After a few attempts, I generated portraits that resembled the four of us. There was one glaring problem: bias. Not once had I used white as a descriptor, but each time my results came back with white likenesses. I decided to test this bias with a prompt unrelated to me or my family and input woman in her twenties with curly black hair and brown eyes. It should have generated women of many ethnicities. Unfortunately, that’s not what happened: every output was a white woman with dark hair. AI models are continually learning and training on similar data sets; in theory, any bias that exists will only amplify over time unless corrected. I returned to the same prompt recently and had a bit more success, which suggests that humans are working to address the biases in this technology… but it’s a complex problem for the entire industry, without an easy solution.

Twelve AI-generated images in two rows of six: The four on the far left are cartoon-like portraits and the eight on the right are photo-realistic.
Results of the generative AI prompt woman in her twenties with curly black hair and brown eyes repeated over a six-month period, with some improvement in output.

On the bright side

From a creative perspective, part of the animosity toward AI and ML is a result of their newness and the uncertainty they create: Am I going to lose my career? Will my skills become irrelevant? Are we all just going to end up being data curators? Or will AI and ML simply become the latest generation of tools for the creative industry? We might not have those answers for quite some time. I know it can feel like we're on a bullet train to dystopia, but history, and the forever optimist in me, have urged me to look at how AI and ML could help artists and illustrators, and at what could come from embracing and helping to shape these technologies.

1. Detail work

AI and ML are already making artists' lives easier, and Generative AI could help illustrators skip right over time-consuming, painstaking detail work to focus on the parts of the creative process they enjoy. There are tedious, repetitive parts of the creative process that don’t require creative or artistic skill, but they must be done. AI will be able to complete those tasks, make composition suggestions, and handle last-minute additions and color changes.

A massive crowd of people all facing a stage in the far off distance. Behind the stage are large trees, a cityscape, and a blue and pink skyline.
With AI tools, an illustrator could create and execute the concept and draw the main parts of the image, and AI could generate the rest in their illustration style. Crowd at the Concert illustration by Ramjana / Adobe Stock.
Five yellow blob-like cartoon characters with orange stick-like arms against a midnight blue background each have different facial expressions while doing different yoga poses on light blue yoga mats.
AI could also generate alternate poses for a character for animation or storyboarding. An illustrator could create three or four versions of a character, and AI could generate every in-between pose. Cute monsters in different yoga poses illustration by Roi_and_Roi / Adobe Stock.

2. New output models (from 2D > 3D)

There's a steep learning curve to make the transition from 2D to 3D; it takes a lot of time and practice, particularly for 2D artists who want to create 3D work in a digital space. But for 2D artists to be successful in an immersive three-dimensional future, they’ll either need to learn a new skill or allow technology to help them with it. AI technology can already detect form and shape from two-dimensional information, so it’s not a stretch to think that in the future it could enable 2D artists to create dimension by rendering their art in 3D.

3. Provenance

The good news is that the U.S. Copyright Office is actively evaluating the role of AI in creativity. Still, there is currently no way to prove the provenance of AI-generated art. But a few years ago, to fight misinformation and add a layer of trust to digital content (primarily to address deepfakes), Adobe partnered with the New York Times and Twitter to create the Content Authenticity Initiative. If that initiative is taken just one step further, and AI is trained ethically, I can imagine a future where artists, who’ve always had to take steps to ensure their work isn’t used without their permission, could have the freedom to create without worrying that someone will steal their intellectual property.

Unfortunately, creative communities aren’t strangers to other people or entities trying to pass off art as their own. Over the years, several fast-fashion brands have been called out for the unauthorized, for-profit appropriation of artwork for T-shirt designs, graphic patterns, and even ceramics. In 2020, Urban Outfitters was accused of stealing Watiya Tjuta—a design based on the sacred art of spear-making by Australian Aboriginal artist Mitjili Napurrula—and using it on rugs for sale in its stores. Many artists have claimed in the past to have had their work plagiarized by the company, and within days of the comparison being called out on social media, the company removed the rug from its stores.

Imagine the same scenario in a digital future where authenticity and provenance are built into our tools: If a company tried to use an existing original work on its products without permission, it would be flagged as such, with the artist’s name attached to the piece. Skeptics will say that there are always ways to steal, appropriate, and plagiarize, but digital tokens or signatures, connected to something as simple as a screenshot, could go a long way toward minimizing those infringements.
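As a thought experiment, here is a minimal Python sketch of that kind of check, assuming a hypothetical fingerprint registry. It is not how the Content Authenticity Initiative actually records credentials, and a real system would need perceptual hashing and signed metadata to survive crops, edits, and screenshots.

```python
import hashlib

# Minimal sketch of the provenance idea: an artist registers a fingerprint
# (hash) of the original file, and any later copy can be checked against it.
def fingerprint(file_bytes: bytes) -> str:
    return hashlib.sha256(file_bytes).hexdigest()

registry = {}  # hypothetical registry mapping fingerprints to artists

original = b"...bytes of the original artwork file..."     # placeholder bytes
registry[fingerprint(original)] = "the original artist"

suspect = b"...bytes of a product photo or screenshot..."  # placeholder bytes
match = registry.get(fingerprint(suspect))
print(match or "no exact match; edited copies would need perceptual hashing")
```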

4. Collaboration

If our future is all about bridging physical and digital connections, you can't really talk about the future of creativity without talking about collaboration. They'll be synonymous. Illustrators, artists, and designers could work in tandem, generating variations and extending the work on the fly, even inviting clients to take part. Tools will become more accessible, and the digital canvas will become dimensional and social. Media types and mediums could be mixed, and anything could become a canvas.

5. Accessibility

Currently, there is no good digital drawing solution for artists with low to no vision. The current process relies on touch: the artist feels where they want to draw and draws there. But these artists’ ability to paint digitally by the same means they paint traditionally today may come through AI-generated audio and haptics, or semantic tags for screen readers. Generative AI (in the reverse of the text-to-image way it’s being used now) could also generate alt text for images on the fly, name and label every tool and action in an application, or produce all screen reader-only content.
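As a rough sketch of that reverse, image-to-text direction, the snippet below assumes the Hugging Face transformers library, a publicly available BLIP captioning model, and a hypothetical local file name. It is meant only to show the shape of the idea, not a production accessibility solution.

```python
from transformers import pipeline

# Generate alt text for an artwork on the fly so a screen reader can describe it.
# The model choice and the file name are assumptions for this sketch.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("artwork.png")          # hypothetical local image file
alt_text = result[0]["generated_text"]
print(f'<img src="artwork.png" alt="{alt_text}">')
```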

It’s impossible to predict the ultimate impact of these new technologies on artists and artistic output, but history has taught us that new technologies don’t go away, they evolve. Since each of us has a voice and role in our individual and collective creative futures, I’m optimistic that we can use them to begin shaping these technologies in ways that benefit everyone.
