Artists are terrified. Or they're ecstatic. Or, honestly, they're just tired of the noise. If you’ve spent any time on the internet lately, you’ve seen the images—those hyper-detailed, slightly "too perfect" portraits or swirling psychedelic landscapes generated by a few lines of text.
The conversation around artificial intelligence and art usually devolves into a binary screaming match about whether a machine can "feel" or whether prompt engineers are "real" artists. But that's kinda missing the point. The real story isn't whether a computer can paint a sunset; it's how the very definition of creative labor and value is shifting under our feet.
Art has always been a mirror of our tools. When the camera first appeared, painters thought they were out of a job. Instead, they got Impressionism. Now, we’re staring at diffusion models like Midjourney and Stable Diffusion, wondering if we’re at the end of the line or just starting a weird new chapter.
The Myth of the "Magic Button"
Most people think using AI to make art is like hitting a "win" button. You type "cat in a tuxedo" and—boom—art.
It’s not really like that for the people actually doing it professionally.
The reality of artificial intelligence and art in a studio setting is a gritty, iterative process of failure. It’s about "in-painting" a specific hand twelve times because the model gave the subject seven fingers. It's about using ControlNet to force a specific pose. It is, in many ways, more like digital puppetry than traditional painting.
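That "in-painting" step is, at its core, a masked composite: the model regenerates only the pixels under a mask (the botched hand), and the result is blended back into the untouched original. Here's a minimal NumPy sketch of just the compositing step, with images as float arrays; this is a toy illustration, not any specific tool's API:

```python
import numpy as np

def inpaint_blend(original, generated, mask):
    """Composite a regenerated region back into the original image.

    original, generated: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W, 1); 1.0 where the model repaints
          (e.g. the seven-fingered hand), 0.0 where the original is kept.
    """
    return mask * generated + (1.0 - mask) * original

# Toy example: keep the left half, repaint the right half.
original = np.zeros((4, 4, 3))   # stand-in for the artist's image
generated = np.ones((4, 4, 3))   # stand-in for the model's new pixels
mask = np.zeros((4, 4, 1))
mask[:, 2:] = 1.0
result = inpaint_blend(original, generated, mask)
```

The generative model does the hard part (filling the masked region plausibly); the blend is what keeps the other eleven fingers you already fixed from changing underneath you.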
Take a look at Jason Allen’s Théâtre D’opéra Spatial. You probably remember it. It won first place in the digital art category at the Colorado State Fair in 2022. The internet exploded. People were livid. But Allen didn't just type a sentence and walk away. He spent over 80 hours refining 900 iterations, using Photoshop to clean up the mess the AI left behind.
Is it "art"?
The judges said yes. The US Copyright Office, currently, says no. That’s where the friction is. Under current US law (specifically the rulings regarding the Zarya of the Dawn comic book), you can’t copyright an image that was purely generated by AI. The human has to have "creative control." We’re currently in a legal limbo where the tech is moving at 100 miles per hour and the courts are still trying to find their shoes.
Where the Data Actually Comes From
We have to talk about the "scraping" problem.
Models like Stable Diffusion weren't born knowing what a brushstroke looks like. They were trained on billions of images, often without the consent of the original creators. The LAION-5B dataset is basically a giant vacuum that sucked up everything on the public internet.
This is why artists like Kelly McKernan and Karla Ortiz are suing. They found their names being used as prompts. People were literally typing "in the style of Kelly McKernan" to get art that looked like hers for free. It feels like a heist.
But there’s a nuance here that gets lost.
Generative AI doesn't "collage" images. It doesn't store pieces of photos. It learns mathematical patterns. It’s more like a student going to a museum, looking at a thousand Van Goghs, and then going home to paint something in that style. Does a human student owe Van Gogh’s estate? Usually, no. But when a machine does it a billion times a second, the scale changes the ethics.
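The "patterns, not collage" point can be made concrete with the core equation behind diffusion models. Below is a toy NumPy sketch of the forward-noising step and its training target, simplified to a single timestep (real models use noise schedules over hundreds of steps and a neural network as the predictor):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "training image": 16 pixel values in [0, 1].
x0 = rng.random(16)

# Forward diffusion: blend the image toward pure Gaussian noise.
alpha_bar = 0.5                        # fraction of the signal that survives
noise = rng.standard_normal(16)
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * noise

# The network is trained to predict `noise` given x_t. If its guess is
# right, the clean image can be recovered algebraically -- but what the
# model stores is the *mapping* from noisy to clean statistics, not the
# training pixels themselves. That's the sense in which it isn't collage.
noise_pred = noise                     # pretend we have a perfect predictor
x0_hat = (x_t - np.sqrt(1 - alpha_bar) * noise_pred) / np.sqrt(alpha_bar)

assert np.allclose(x0_hat, x0)
```

Whether learning those statistics from a billion scraped images is fair use is exactly what the lawsuits will decide; the math just explains why "it copy-pastes pieces of my painting" isn't the right mental model.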
The Rise of Ethical Models
Because of this backlash, we’re seeing a shift. Adobe Firefly, for instance, claims to be trained only on Adobe Stock and public domain content. It’s "commercially safe."
Companies are realizing that if they want big brands to use artificial intelligence and art tools, they can't have a lawsuit lurking in the pixels. This creates a two-tier system: the "wild west" open-source models and the "clean" corporate ones.
It’s Not Just Pretty Pictures
Art isn't just JPGs. It’s music. It’s film. It’s the way we interact with space.
In the music world, AI is doing things that are actually quite helpful, if less "scary" than deepfaking Drake. Look at what Peter Jackson’s team did for the Beatles' Now and Then. They used "source separation" technology—basically AI-powered ears—to pull John Lennon’s voice out of a muddy, low-quality cassette recording from the 70s.
That’s AI as a scalpel.
Then you have the gaming industry. Developers are using AI to generate textures or "bake" lighting. What used to take a human artist three weeks of clicking on bricks can now happen in minutes. Does this put people out of work? Maybe. Or maybe it just lets an indie dev with three employees make something that looks like God of War.
The middle ground is disappearing. The "low-level" production work—the stuff that pays the bills for many junior artists—is being automated. The "high-level" conceptual work is where humans are still holding the line. For now.
The Weird Side: AI as an Occult Tool
Some artists aren't trying to make "good" art with AI. They’re trying to find the "glitch in the matrix."
There’s this concept of "Loab." If you haven't heard of her, she’s a disturbing, recurring figure that appeared in AI-generated images when users used "negatively weighted" prompts. She’s like a digital ghost. Artists are using these weird, unintentional outputs to explore the "subconscious" of the latent space.

It’s less about "make me a cool dragon" and more about "what does the machine think sadness looks like?"
Refik Anadol is a great example here. His work at the MoMA, Unsupervised, uses a machine learning model to interpret the museum's entire collection. It’s not a static image. It’s a shifting, dreaming flow of color on a massive screen. It feels alive. It’s an installation that wouldn't exist without AI, but it requires a human architect to frame it.
How to Actually Navigate This (The Survival Guide)
If you’re a creator, or just someone interested in the intersection of artificial intelligence and art, sitting on the sidelines isn't an option anymore. The "I’ll never use it" stance is becoming as practical as saying "I’ll never use a computer."
You have to find the "Human Plus" workflow.
1. Treat AI as a Sketchpad, Not a Finished Product
Use it for "vibe checks." If you're designing a character, generate 50 versions to find the color palette that works, then draw the final version yourself. This keeps the copyright in your hands and the soul in the work.
2. Learn the Tech to Break the Tech
The most interesting art happens when you push a tool past its intended limit. Don't just use the standard prompts. Learn about LoRAs, ComfyUI, and how to train small, "personal" models on your own hand-drawn art. That way, the AI is literally an extension of your own hand.
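The LoRA idea itself is small enough to sketch. A frozen base-model weight matrix gets a trainable low-rank correction bolted on, so "training on your own hand-drawn art" only touches a tiny number of parameters. A minimal NumPy illustration (names and sizes are made up for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8        # layer width (tiny for illustration; real layers are huge)
rank = 2     # LoRA rank: far fewer trainable numbers than d * d

W = rng.standard_normal((d, d))      # frozen base-model weight
A = rng.standard_normal((rank, d))   # small trainable matrix
B = np.zeros((d, rank))              # B starts at zero: no change at first
scale = 1.0

def forward(x):
    # Base output plus the low-rank "personal style" correction.
    return x @ (W + scale * B @ A).T

x = rng.standard_normal(d)
assert np.allclose(forward(x), x @ W.T)  # an untrained LoRA is a no-op

# Training updates only A and B: 2 * rank * d numbers instead of d * d.
# That's why a "personal" model trained on your sketches fits in megabytes.
```

The practical upshot: you're not retraining Stable Diffusion, you're steering it with a few megabytes of your own style.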
3. Lean Into What Machines Suck At
AI is terrible at consistency. It struggles with complex spatial logic—like how a hand actually grips a specific tool. It also can't do "intent." It doesn't know why a character is crying; it just knows what tears look like. Double down on storytelling, narrative, and specific, weird human experiences that a dataset can't replicate.
4. Protect Your Work
Use tools like "Glaze" or "Nightshade." These are programs developed by researchers at the University of Chicago that add "invisible" changes to your pixels. To a human, the art looks normal. To an AI scraper, it looks like a mess of static, effectively "poisoning" the data and protecting your style from being mimicked without permission.
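To give a feel for what an "invisible" change means, here's a toy sketch of a bounded pixel perturbation. To be clear: Glaze and Nightshade compute their perturbations adversarially against a model's feature extractor; this snippet only shows the "small enough that humans can't see it" constraint, with random noise standing in for the optimized attack:

```python
import numpy as np

rng = np.random.default_rng(2)

def cloak(image, budget=2.0 / 255.0):
    """Add a tiny, bounded perturbation to an image in [0, 1].

    Toy illustration only: real cloaking tools optimize `delta` so it
    shifts what a scraper's model "sees," not random noise. `budget`
    caps the per-pixel change below the threshold humans notice.
    """
    delta = rng.uniform(-budget, budget, size=image.shape)
    return np.clip(image + delta, 0.0, 1.0)

image = rng.random((8, 8, 3))
cloaked = cloak(image)

# Visually identical: every channel moved by at most ~2/255 of full scale.
assert np.max(np.abs(cloaked - image)) <= 2.0 / 255.0
```

The real tools' perturbations are crafted so that, in the model's feature space, your watercolors read as something else entirely, which is what makes "in the style of" prompts stop working.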
The future of artificial intelligence and art isn't a robot holding a paintbrush. It's a human holding a tool that’s smarter than a hammer but dumber than a collaborator. We’re moving toward a world where the "craft" of making something—the hours spent cross-hatching—is becoming less valuable than the "vision" of what should be made.
That's a scary trade-off.
But it's the one we're making. The most successful artists of the next decade won't be the ones who fought the machine, but the ones who learned how to steer it without losing their own voice in the process.
To get started, don't just play with web-based generators. Look into the open-source community on sites like Civitai (with caution) or Hugging Face. Understand the architecture. If you're going to live in a world reshaped by these algorithms, you might as well learn how the gears turn. Start by experimenting with "hybrid" pieces: take a photo, use an AI to "expand" the background, then paint back over it. See where the machine helps and where it gets in your way.