The Ghibli-Style AI Wake-Up Call
The viral Studio Ghibli-style images trend could strengthen arguments that OpenAI’s 4o image model is infringing copyright, according to several IP litigation lawyers who shared legal perspectives with VIP+.
The release of OpenAI’s 4o image generation model in ChatGPT on March 25 unleashed a viral storm of AI-generated images in the style of Studio Ghibli animation, as users uploaded personal photographs and other images and asked the system to remake them in the studio’s style.
Other famous IP and artists with highly recognizable styles were repurposed as well, including Pixar, The Simpsons, Lego, Muppets and Dr. Seuss. The likenesses of numerous public figures were replicated in Ghibli style too, a double whammy of unauthorized uses in cases such as Snoop Dogg.
Making a picture with ChatGPT inserted millions of unwitting users into legal and ethical debates that have been ongoing for over two years about generative AI and copyright, training data, the theft of human creators’ labor and the value and meaning of art.
By no means is replicating exact art styles a brand-new capability for AI image models. Visual artists with recognizable styles were among the first victims of AI mimicry, starting in late 2022 immediately after the launch of Stable Diffusion. The first copyright infringement lawsuit against AI companies was brought by visual artists in Andersen v. Stability AI in January 2023.
But while replicating style with AI isn’t new, the use of the capability in the 4o model during the Ghibli trend has gone further toward proving the threat to IP is real, supercharging and dispersing the damage across numerous IPs.
A tsunami of personalized, algorithmic derivatives swarming social media feeds is a predictable outcome when millions of internet users are given easy-to-use tools powered by models capable of producing precise, diverse and infinite replicas of any IP that was used to train them. Just a prompt away is a growing category of synthetic content that pollutes the digital commons with misleading simulacra of IP.
For their part, studio IP owners have undoubtedly been aware the threat exists, though studios are notably absent from content owner lawsuits against AI companies. As IP-style replicas proliferate in massive numbers online, public furor around 4o image generation could encourage new lawsuits focused more directly on AI outputs as evidence of infringement. At the very least, the 4o model’s ability to reproduce these styles proves that copyrighted works from the IP in question were used to train it.
GPT-4o is in fact more capable of directly producing these kinds of outputs than previous model releases, as described in OpenAI’s blog post.
But its greater ability to replicate specific IP styles also stems from OpenAI slackening the content moderation policies, and therefore the guardrails, applied to the model, which previously prevented such outputs from being generated. Typically, AI systems refuse to output material that doesn’t comply with moderation policies, often using classifiers tasked with identifying and blocking problematic content.
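To make that mechanism concrete, here is a minimal sketch of how such a guardrail gate can work, with a toy keyword check standing in for a trained classifier. This is purely illustrative and assumes nothing about OpenAI’s actual implementation; the block list, function names and refusal message are all hypothetical.

```python
# Illustrative sketch of a classifier-style moderation gate. All names are
# hypothetical; real systems use trained classifiers, not keyword lists.

BLOCKED_LIVING_ARTISTS = {"some living artist"}  # hypothetical block list

def violates_policy(prompt: str) -> bool:
    """Toy stand-in for a content classifier: flag prompts that name
    a blocked living artist."""
    lowered = prompt.lower()
    return any(name in lowered for name in BLOCKED_LIVING_ARTISTS)

def generate_image(prompt: str) -> str:
    """Run the policy check before generation; refuse if it fails."""
    if violates_policy(prompt):
        return "Refused: this prompt doesn't comply with content policy."
    return f"<image for: {prompt}>"  # stand-in for the actual model call

print(generate_image("a landscape in the style of some living artist"))
print(generate_image("a cozy hand-drawn fantasy landscape"))
```

Under this framing, relaxing a policy is as simple as shrinking the block list or raising a classifier’s refusal threshold: the underlying model’s capabilities never change, only what the gate lets through.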
OpenAI’s 4o can do more, including generating or manipulating images of IPs and famous people, because it simply complies more often than previous model versions. The model still refuses requests for violent or sexual content, but the policies applied to 4o are overall “more permissive,” as it “shifted from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” the company’s head of product and model behavior, Joanne Jang, wrote in her follow-up Substack post.
OpenAI’s expanded addendum on the safety risks and controls applied to 4o image generation notes the model “can generate images that resemble the aesthetics of some artists’ work when their name is used in the prompt,” a capability blocked only for living artists. The addendum also says OpenAI is “not blocking the capability to generate adult public figures,” though any who don’t want to be depicted can opt out.
Relaxing guardrails signals a more brazen disregard for protecting copyrighted properties from appearing in model outputs. Since OpenAI and other AI developers argue training on copyrighted material is fair use, allowing them to scrape and use content, guardrails filtering model outputs are currently the easiest line of defense IP owners and artists have against being generated by anyone. When restrictiveness is reduced, AI-enabled misappropriation gets worse.
“I don’t think OpenAI cares at this point,” said Pierre-Carl Langlais, cofounder of the France-based ethical AI lab Pleias, who observed after testing the model with multiple adversarial prompts that the “vast majority of known and less-known intellectual properties can now be generated on the fly.”
“Right now, the only criteria for blocking generation is whether IP raises a potential wellbeing risk [to individuals or society],” he said. “Some IPs from big studios like Disney are still totally blocked from creation, but they don’t see a big issue with many others.”
Langlais interpreted relaxing guardrails as a sign OpenAI is emboldened by the new administration’s pro-AI stance and the strength of the AI industry’s lobbying efforts.
Yet looser guardrails also increase the risk of AI models outputting content that infringes copyright, such as something that looks substantially similar to an original work used in training. Guardrails for IP exist to prevent models from outputting potentially infringing content. Absent any guardrails, a model will conceivably output anything requested based on its training, and it’s been demonstrated that AI models can memorize training data and “regurgitate” specific works verbatim.
“The fact that they are putting guardrails on tells you that the content that comes out of an AI before you apply the guardrails may well be infringing content. We saw that early on with AI models generating verbatim entire chapters of books and other creative works,” said Lance Koonce, an IP litigation partner at Klaris.
Even though style can’t be copyrighted in the abstract, some of these outputs “in the style of” IP or specific creators could still infringe the original work. An AI model’s interpretation of style might not correspond with how it’s been defined by the law, such that an AI output replicating style could still contain substantially similar elements of protected expression. Training an AI model on large numbers of creative works from a specific IP is not the same as what happens when someone is influenced by a style in their own creative process.
“It’s not always clear what is meant by ‘style’ when someone requests it from the model,” said Josh Weigensberg, an IP litigation partner at Pryor Cashman. “Generative AI is not just being influenced in the abstract by a studio’s style or an artist’s style; it’s training on their works. The output may well contain copies of expressive elements of the original works. The question is, are the resulting images substantially similar to or the same as the expressive elements of what the plaintiff creator owns.”
“At some point, you cross the line between style and expression if you generate a work that looks close enough to an existing piece,” said Koonce. “When you’re too close to that expression, you’re going to be infringing.”
The capability of 4o to replicate style also might undercut the argument that training on the copyrighted works that enabled it qualifies as fair use. There’s a strong argument that the capability directly competes in the market with the original creator and undermines demand for the original work by serving as a replacement, market harm that weighs against fair use under one of the four statutory factors. OpenAI promotes image generation from the 4o model as more “useful and valuable,” though it stops short of naming uses.
“What are you setting up your AI product to do? Are you training on visual art and then enabling your AI product to generate similar art?” said Weigensberg, who argued this type of use wouldn’t be fair use because “copying expressive works without permission and then allowing your users to generate knockoffs is substitutional” and may be “damaging to artists or IP owners.”
“This creates a serious risk, and I’m sure reality, that the demand for your services and your art is going to be diminished because people can just get knockoffs for free through generative AI,” he added.
Content owner lawsuits against AI companies will take years to unfold. During that time, AI companies will continue to claim fair use and proceed with scraping, deploying new models and changing their content moderation policies.
Shifting guardrails could present challenges of its own as IP owners try to prove infringement. OpenAI notes its approach to content policymaking is “iterative” depending on “real-world use.” Changing the guardrails, whether they’re slack or strict, applied or not, changes the outputs AI models are capable of generating.
“While courts will look at the case presented to them and not extrapolate to facts not at issue, in these cases, it’s essential that IP owners and creators are able to demonstrate to the courts the way the technology is shifting. AI companies can clearly flip the switch and make changes all along the way,” said Koonce. “One of the real issues is what happens if they start changing those guardrails in ways that aren’t as helpful to creators and IP owners? Do the creators and owners have to keep suing every time someone changes a guardrail?”
Regardless, the presence or absence of guardrails doesn’t change the main contention that has been raised against AI companies: that training models on copyrighted works is infringement. “Guardrails may theoretically help protect artists and copyright owners from having infringing works generated as output, but that has no bearing on what happened on the acquisition and input side, and in fact is revealing as to what materials are in the models,” said Scott Sholder, copyright litigator and partner at Cowan, DeBaets, Abrahams & Sheppard.
More immediate methods for creators to address scraping without permission or payment will be technical, as described in the March 2025 VIP+ special report “AI Training: Consent & Content” and December 2024 report “Generative AI: Celebrity Deepfakes & Digital Replicas.” Licensing is also an option, though not for those who want to avoid ingestion altogether.