Who Owns a Vibe? The Ghibli Effect and the Rise of Aesthetic Appropriation
This cover image was AI-generated in a visual style reminiscent of Studio Ghibli. Its inclusion is intentional — a mirror of the very questions raised in this piece about ownership, legacy, and the commercialization of creative expression.
If you haven’t yet, check out the first article in the Owning Innovation series: “Owning Innovation: Why IP and Commercialization Matter for Ethical AI.”
Introduction
This article is the second piece in our Owning Innovation series, which examines the evolving relationship between intellectual property, innovation, and commercialization in the age of AI.
In this piece, we explore a compelling case study: OpenAI’s ability to generate imagery in the recognizable style of Studio Ghibli. This case is particularly important because it highlights the way creative works can be imitated and commercialized without consent or compensation. It also illustrates how existing intellectual property (IP) frameworks struggle to address intangible elements like style, tone, and emotional resonance.
We break the case down through three key lenses:
Innovation – What makes AI-generated art impressive—and why replicating style is not the same as creating it
Intellectual Property – Why current copyright law doesn’t protect artistic style, and what that means for creators
Commercialization – How platforms profit from aesthetic mimicry while original artists are excluded from the economy their work enables
Ghibli as a Creative & Commercial Powerhouse
Founded in 1985 by Hayao Miyazaki, Isao Takahata, and Toshio Suzuki, Studio Ghibli is one of the most influential animation studios in the world, and has created a deeply recognizable visual identity. It is known for its hand-drawn artistry, emotionally rich storytelling, and imaginative world-building. Its films include Spirited Away (2001), Howl’s Moving Castle (2004), and Ponyo (2009). These films are not only critically acclaimed, but have stood the test of time across generations.
What sets Studio Ghibli apart is its distinct aesthetic: soft lines, muted color palettes, natural, whimsical landscapes, and expressions of wonder or melancholy. These stylistic choices have become instantly identifiable as part of the studio’s brand.
Over the years, Ghibli has become a global cultural phenomenon. This commercial success is built not just on storytelling, but on the emotional and artistic signature that defines the Ghibli “vibe.” It is precisely this aesthetic, which comes from decades of creative labor, that AI technologies are now learning to mimic.
This four-second scene took Hayao Miyazaki over a year to complete—a testament to the meticulous artistry behind every Ghibli frame. Today, an AI can mimic that style in seconds.
The Case: OpenAI, Ghibli, and the Viral Trend
In March 2025, OpenAI introduced a new image feature that allows users to generate visuals in the style of Studio Ghibli, among other well-known aesthetics. The function quickly went viral (Reuters). Within days, social media platforms were overwhelmed with Ghibli-style images, as users uploaded their own photos and prompted ChatGPT to return dreamy, animated renditions in seconds.
At first glance, this might seem like just another playful AI feature tapping into cultural nostalgia. But this trend raises serious concerns. To create outputs that so closely resemble Ghibli’s signature look, it is highly likely that OpenAI’s models were trained, at least in part, on Studio Ghibli’s original films or related content. This may have involved scraping copyrighted material or pulling from a long history of fan art and media influenced by Ghibli’s work (Business Insider).
The ChatGPT image feature has sparked mixed reactions online. On one hand, many users have embraced it, transforming their personal photos into Ghibli-style images and sharing them widely. Some companies have even adopted the feature for marketing purposes, taking advantage of its viral appeal. On the other hand, critics argue that this is yet another example of creative labor being repackaged and commercialized without consent (Business Insider).
These are original stills from Studio Ghibli films.
The top image is a frame from Arrietty (2010), also known as The Secret World of Arrietty in the United States. The bottom image is a frame from Kiki’s Delivery Service (1989), directed by Hayao Miyazaki.
They are included here to illustrate the distinct aesthetic that AI models are now trained to imitate. These images are not AI-generated.
Copyright Studio Ghibli. Used here under fair use for educational and critical commentary.
Innovation: When Innovation Imitates
In the first article of the Owning Innovation series, we defined innovation as “the algorithm, model, or system that solves a problem or introduces a new way of doing something.” But how does that definition apply in a case where technology does not necessarily invent, but simply imitates?
OpenAI’s image generation feature is certainly a technical achievement: the model learned to mimic highly specific visual cues, such as color palettes, line work, lighting, and emotional tone, to match a prompted aesthetic (e.g., Studio Ghibli). It can be considered innovative because generating images at this level of quality, speed, and scale was not possible before.
However, OpenAI did not invent the Studio Ghibli style; it learned it. Can this feature truly be “innovative” when it mimics rather than invents?
This version of innovation is concerning because it rewards the technical system (OpenAI), and not the original creator (Studio Ghibli). It makes creative labor cheap and replicable. It changes the meaning of innovation from creating something new to automating something that already exists.
At its core, the Ghibli case asks us to rethink how AI generates images, and what limits should exist to ensure that aesthetic replication does not become artistic appropriation. Right now, those limits are still undefined.
Intellectual Property: When Style Has No Rights
The Ghibli case also exposes significant gaps in the application of IP law to AI.
What the Law Protects—and What It Doesn’t
In most jurisdictions, copyright law protects specific expressions of ideas, such as a film, a drawing, or a character design (ScienceDirect). However, copyright law does not protect general styles or aesthetic choices (ScienceDirect). This is a fundamental limitation, and it becomes especially problematic in the context of generative AI.
In the United States (where OpenAI is headquartered), copyright law does not extend to “ideas, procedures, or methods of operation,” which arguably includes artistic style (Copyright Laws). As Evan Brown, partner at law firm Neal & McDevitt, explains: “Copyright law has generally protected only specific expressions rather than artistic styles themselves.” As a result, if an AI output mimics the look and feel of Ghibli’s work without copying specific scenes or characters, it could fall outside legal enforcement.
The Grey Area
Studio Ghibli’s globally recognizable identity alone does not grant it legal protection under existing copyright regimes. This has enabled OpenAI to replicate the “vibe” of Ghibli’s aesthetic without technically violating copyright law.
Notably, Ghibli could potentially pursue a claim under the Lanham Act. The Lanham Act is the law that governs trademark protection and false association. According to IP lawyer Jeff Rosenberg, “OpenAI is trading off the goodwill of Ghibli’s trademarks, using its recognizable artistic identity in a way that may confuse consumers into believing the function is endorsed or licensed by the studio.” In other words, if the outputs appear so authentic that users assume an official partnership between OpenAI and Studio Ghibli, Ghibli might be able to argue that its brand is being misused.
Is Fair Use Enough?
One of the biggest legal grey zones in generative AI is the question of fair use.
Fair use is a legal doctrine in the U.S. that lets people use limited portions of someone else’s copyrighted work without permission for purposes like education, news reporting, or parody. Whether a use qualifies as fair depends on factors such as how much of the original work was used, the purpose of the use, and whether it harms the original creator’s ability to earn money from the work (US Copyright Office).
If OpenAI claims that its use of training data qualifies as fair use, recent case law suggests that this defense may not hold in court. In the recent Thomson Reuters v. Ross Intelligence decision, a US court ruled that using copyrighted material to train an AI system did not constitute fair use, particularly when the AI (1) competes with the original work and (2) adds nothing transformative, undermining the creator’s ability to earn from it. While the case involved non-generative AI, it sets an important precedent that could influence how courts evaluate similar claims in the generative AI space.
As the legal landscape evolves, fair use may no longer be a guaranteed defense, especially when creative work is used without permission to build tools that compete with the original.
What This Means
As generative AI continues to shape the creative sphere, IP law needs to evolve beyond its traditional definitions. Protecting the essence of creative work may be the next legal challenge. Until then, artists will remain on the losing side of this imbalance.
Commercialization: Profits Without Credit
The Ghibli case raises numerous commercialization concerns for the original creators. OpenAI’s Ghibli-style image generation feature resulted in major user engagement and viral exposure. It led to increased usage of ChatGPT, and positioned OpenAI as an innovator at the intersection of AI and creativity. But while OpenAI gained visibility and growth, Studio Ghibli, the original source of the aesthetic, received no credit, control, or compensation.
What Ghibli Lost: Missed Business and Branding Opportunities
The harm here is not only about lost profits for Studio Ghibli. It is also about lost possibilities for commercialization on their own terms.
As IP lawyer Jeff Rosenberg explains, “if Studio Ghibli ever wanted to launch its own tool allowing fans to transform photos into its signature style, OpenAI’s update has essentially taken that business opportunity away.” In other words, the studio may now have a harder time developing its own AI-based tools or licensing deals, because the market has already moved on. What could have been a new creative product or fan experience for Ghibli is now something users expect for free from a tech company.
This also affects how Ghibli’s brand is shaped going forward. The more AI tools copy their look, the harder it becomes to tell what is official and what is not. This kind of imitation not only affects revenue but also weakens the studio’s control over its brand.
Homage or Appropriation?
Even more concerning is the potential for audience confusion. As AI-generated visuals become more advanced, there is a growing risk that users will mistake them for real Ghibli content. As Rosenberg warns, this “blurs the line between homage and outright misrepresentation.”
That confusion has real consequences. When AI systems use an artist’s style to generate content without permission, it becomes more than a creative concern. The artist gets no say, no credit, and no share of the value their work helped create. Their style turns into a tool for someone else’s profit. The legal system has not caught up, but the impact is already being felt by artists, audiences, and the culture that connects them.
What This Means
The rise of generative AI has created a growing imbalance: companies benefit, users are entertained, but creators are left out. When style becomes a product, artists can be pushed out of the picture. Without stronger protections, AI-driven mimicry will not just change markets; it will also change who gets to profit from creativity.
Takeaways by Stakeholder
This case highlights how vulnerable creatives are in the face of rapid AI advancements, and how urgently we need more responsible innovation. The path forward must include clear roles for different actors. Below is a non-exhaustive list of practical takeaways for key stakeholder groups, grounded in current developments and real-world examples.
For Artists & Creatives
Artists and creatives can start by pushing for stronger attribution standards in AI tools. The Santa Clara Principles offer a helpful framework here, emphasizing transparency, accountability, and due process when automated systems affect individuals and communities. This becomes especially important when an artist’s unique style is being mimicked without consent. Artists should also look into collective licensing options and opt-out tools where available. The EU AI Act now gives rights holders the ability to opt out of having their work used to train foundation models, which sets an important precedent (Morgan Lewis). Platforms like HaveIBeenTrained.com offer an early way to check if your work is being used and to submit opt-out requests. These tools hint at a future where stronger registries and licensing systems may become the norm.
For AI Developers/Platforms
AI developers and platforms have a responsibility to build with transparency. That means clearly disclosing what kind of data was used to train a model, especially if it includes copyrighted works or particular artistic styles. Developers should also prioritize creating guardrails around stylistic prompting. Adobe Firefly offers a strong example by limiting training data to licensed and public domain sources. Further, developers should collaborate with artists to build better data practices, including strategies such as opt-in models or fair use guidance. The Authors Guild has already laid the groundwork for this kind of advocacy on behalf of writers, and similar initiatives are needed in the visual arts space.
For Policymakers
Policymakers need to revisit what creative ownership actually means in the age of AI. Right now, copyright law mostly protects specific works, not the style that defines an artist’s body of work. That gap has become a major blind spot. Updating these frameworks is key, and consultations like the UK Intellectual Property Office’s review on AI and copyright are a strong step forward.
Policymakers also need to think globally. As AI tools scale across borders, creators face a serious challenge: the work they created in one country can be scraped, mimicked, and commercialized in another. This lack of international alignment makes enforcement nearly impossible. Global bodies like WIPO and UNESCO should take the lead on building cross-border protections that reflect cultural and creative values.
For Platform Users & Consumers
Users and consumers also play a role in shaping how generative AI tools are used. Users should stay informed about how these systems work and be mindful of the ethical impacts. Supporting artists, ethical AI tools, and responsible platforms can make a difference.
This list is not comprehensive, but it is a meaningful starting point. The choices we make today will shape the future of creativity, innovation, and artistic integrity. Whether you're an artist, policymaker, or AI developer, now is the time to help build systems that don’t just protect creative work, but honor it.
Closing Reflection – Protecting the Magic
This frame from Arrietty (2010) captures the beauty of Studio Ghibli’s hand-drawn world.
Copyright Studio Ghibli. Used here under fair use for educational and critical commentary.
The Studio Ghibli controversy is more than a viral moment. It shows us how far we still have to go in aligning innovation with ethics and accountability. It reveals the deep tensions in our current frameworks for intellectual property and commercialization, particularly when it comes to artistic style.
Style is powerful precisely because it is both intangible and instantly recognizable. That paradox is what makes it so culturally valuable, and simultaneously so easy to exploit. As AI grows ever more adept at mimicking creative expression, we need stronger, more imaginative forms of governance to protect not just copyrighted works, but the emotional and cultural legacies they carry.
This case has been a timely addition to the Owning Innovation series because it is unfolding in real time. It reminds us that these are not just theoretical debates; they are urgent questions with real-world consequences. The choices we make now will shape the future of creativity: who gets credited, who gets paid, and whose vision gets preserved.
If AI can recreate the feeling of a Ghibli film, the real question becomes: how do we protect the people, cultures, and creative labor that made that feeling possible?
Written By: Maryama Elmi
Founder, What If AI
References
Adobe. (n.d.). Adobe Firefly FAQ. Adobe. https://helpx.adobe.com/firefly/get-set-up/learn-the-basics/adobe-firefly-faq.html
Authors Guild. https://authorsguild.org/
Business Insider. (2025, March). Studio Ghibli has few legal options to stop OpenAI from ripping off its style. https://www.businessinsider.com/studio-ghibli-openai-chatgpt-image-feature-copyright-law-2025-3
Cornell Law School. (n.d.). Lanham Act. Legal Information Institute. https://www.law.cornell.edu/wex/lanham_act
Digital Media Law Blog. (2024, December). Court rules AI training on copyrighted works is not fair use: What it means for generative AI. Davis & Gilbert LLP. https://www.dglaw.com/court-rules-ai-training-on-copyrighted-works-is-not-fair-use-what-it-means-for-generative-ai/
EU AI Act. (n.d.). Key issue #5: Transparency requirements. https://www.euaiact.com/key-issue/5
Have I Been Trained. (n.d.). https://haveibeentrained.com/
Morgan Lewis. (2025, April). EU AI Office publishes third draft of EU AI Act: Key copyright issues. https://www.morganlewis.com/pubs/2025/04/eu-ai-office-publishes-third-draft-of-eu-ai-act-related-general-purpose-ai-code-of-practice-key-copyright-issues
NDTV. (2025, April). Can Studio Ghibli Sue OpenAI Over AI-Generated Images? What Lawyer Says. https://www.ndtv.com/world-news/studio-ghibli-may-have-grounds-to-sue-openai-over-ai-art-lawyer-8067470
Reuters. (2025, April 1). ‘Ghibli Effect’: ChatGPT usage hits record after rollout of viral feature. https://www.reuters.com/technology/artificial-intelligence/ghibli-effect-chatgpt-usage-hits-record-after-rollout-viral-feature-2025-04-01/
Santa Clara Principles. (n.d.). The Santa Clara Principles on transparency and accountability in content moderation. https://santaclaraprinciples.org/
ScienceDirect. (n.d.). Copyright protection in computer science. https://www.sciencedirect.com/topics/computer-science/copyright-protection
Stanford University Libraries. (n.d.). The four factors of fair use. Stanford Copyright and Fair Use Center. https://fairuse.stanford.edu/overview/fair-use/four-factors/
UK Government. (2023). Copyright and artificial intelligence: Consultation. UK Intellectual Property Office. https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence
U.S. Copyright Office. (n.d.). More information on fair use. https://www.copyright.gov/fair-use/