Owning Innovation: Why IP and Commercialization Matter for Ethical AI
The first piece in our “Owning Innovation” series on AI, intellectual property, and ethical commercialization.
Introduction
Innovation, commercialization, intellectual property (IP), and AI governance are deeply interconnected. Innovation is what people create, IP determines who owns it, commercialization decides how it is shared or scaled, and governance shapes what is allowed, what is fair, and who benefits.
AI systems are advancing rapidly, and as a result, the line between human-generated and AI-generated outputs is becoming increasingly blurred. This raises important questions about ownership and commercialization — because who gets to profit from innovation, and under what terms, is ultimately a question of governance.
This tension is already playing out across the AI landscape. Breakthroughs in AI are emerging globally, from university labs and startups to public research institutions and even hobbyist communities. Some are developed by humans using AI tools; others are generated entirely by AI. Yet many of these innovations never make it past the early development stage. Some are shelved, others are absorbed by large companies, and many simply disappear, not because they lack value, but because there is no clear path to protection or scaling.
Consider this:
What happens when a student trains a model, but the university claims the rights?
What if a small research lab’s breakthrough gets quietly absorbed into a corporate product — with no credit or compensation?
What if an artist’s work is used to train an AI model, and the outputs mimic their style — without their permission, attribution, or involvement?
In all of these cases, we’re left with more questions than answers. Specifically:
Who owns the innovation?
How is it protected?
How will it be used — and by whom?
These aren’t just legal questions — they’re governance questions, because they reveal the gap between how innovation is currently handled and how it ought to be.
If we want AI development and deployment to be ethical, inclusive, and sustainable, we need stronger systems for ownership, attribution, and responsible commercialization — especially for those working outside the world’s largest platforms.
In this piece, we explore why IP and commercialization matter in AI governance, what happens when we ignore them, and how we can build better pathways to protect innovation and the people behind it.
What Do We Mean by Innovation, IP & Commercialization?
These terms can sound abstract, but at their core they’re about something simple: turning meaningful ideas into action while protecting the people who created them. Below is an explanation of each term in the context of AI.
Innovation is the idea — the algorithm, model, or system that solves a problem or introduces a new way of doing something.
Example: A university research team builds a machine learning model that can predict disease outbreaks using local environmental and health data.
Intellectual property (IP) is the safety net: the legal framework that says, “This belongs to you, and this is how you can control who uses it.”
Example: The research team registers its copyright and considers filing a patent on the methodology to prevent unauthorized use.
Commercialization is the bridge between innovation and real-world impact: the process of turning an idea or research breakthrough into something that can be used, shared, or scaled.
Example: A nonprofit health tech organization licenses the model and deploys it in rural clinics, while the university earns licensing revenue and retains rights for research use.
This chain ensures that innovation doesn’t remain just an idea — it becomes a contribution with longevity. When done right, this pipeline protects creators, promotes innovation, and ensures that technology serves the public good.
Why This Gap Is an AI Governance Issue
AI governance is often treated as a reactive process — something addressed only after systems are deployed or cause harm. But governance should begin much earlier, with how innovation is protected, shared, and scaled from the start.
Global frameworks like the UNESCO Recommendation on the Ethics of AI and the NIST AI Risk Management Framework emphasize principles such as transparency, accountability, and fairness. But those values are hard to uphold without clear systems for determining who owns, governs, and benefits from innovation — especially during the early stages of development, when direction and intent are still being shaped.
Real-World Consequences When Governance Is Missing
As the World Intellectual Property Organization notes, even when AI drives invention, these innovations still rely on human effort, funding, and time to move from idea to impact. When IP and commercialization are treated as afterthoughts, creators lose agency, institutions lose value, and communities lose access to technologies that could serve the public good. Without strong protection and clear strategies for scaling, even the most promising innovations can be stalled, co-opted, or quietly absorbed.
Here’s how that plays out across the AI landscape:
Researchers’ efforts go to waste.
Example: The UK government dropped multiple AI prototypes for welfare services due to scalability challenges (The Guardian).
Smaller players get pushed out.
Example: DeepSeek’s rise reshaped China’s AI landscape, forcing smaller startups to pivot or disappear (Financial Times).
Institutions struggle to build pipelines from research to application.
Example: The RAND Corporation reports that over 80% of AI projects fail due to poor data quality, misaligned incentives, or a lack of practical integration (Pure AI).
Public-interest projects disappear.
Example: Experts warn that cutting funding for AI cancer technology in the UK will worsen wait times and reduce access, despite the technology’s promise (The Guardian).
These aren’t isolated incidents — they are the consequence of missing governance at the foundation of innovation.
This gap is even more pronounced in the Global South, where innovation is often grassroots, under-resourced, and overlooked. A recent Carnegie Endowment article highlights the work of African Natural Language Processing (NLP) communities — local networks building AI tools for underrepresented languages using open models and shared resources. While openness has enabled progress, it has also exposed deep tensions. Without supportive IP frameworks or ethical commercialization pathways, these communities risk losing control over the very tools they have created. As the article states, “openness must be practiced in a manner that considers the communities directly or indirectly providing the data.”
This is why we should see IP and commercialization as core pillars of responsible AI governance. Because ethical AI is not only about what gets built — it is about who gets to build it, own it, and use it.
Innovation with Integrity in Practice
Hypothetical #1: Scaling Public Health Innovation Responsibly
A university professor develops an AI tool that can detect early signs of skin cancer using photos taken on a smartphone. It’s trained on diverse, locally sourced images and shows promising results in communities with limited access to dermatologists.
Now What?
The university helps the professor protect the tool through its internal IP policy.
An ethics team reviews how the image data was collected and helps ensure it’s safe for clinical use.
A local health-tech hub helps bridge the gap between research and real-world use by connecting the project with developers and funders.
Together, they release the tool for free in low-income regions — and license it commercially in high-income markets.
In this example, the AI breakthrough is not limited to a research paper. Rather, it is protected, used responsibly, and scaled in a way that supports both public health and the creator’s rights.
This example is inspired by real AI health tools — particularly skin cancer detection apps and smartphone-based diagnostics.
Hypothetical #2: Sharing Power Through Language Technology
A linguistics professor at a public university creates an AI-powered tool that automatically translates educational materials into underrepresented languages. The tool is trained using texts contributed by community members, and is designed to support language preservation and access to learning resources.
Now What?
The university has a clear IP policy that allows the professor to retain rights over the tool while ensuring that the community partners who contributed data are acknowledged and involved in decision-making.
An ethics review panel helps the team build a consent-based data sharing model and advises on how to avoid cultural or linguistic misrepresentation in outputs.
A regional nonprofit helps turn the project into real impact by linking it with schools and translation networks for hands-on testing and pilot use.
The professor and university co-develop an open-source version of the tool tailored to nonprofit use — while also licensing it for commercial use to major publishers and edtech platforms interested in inclusive content distribution.
In this scenario, the AI tool does not just stay in the lab. It becomes a bridge between innovation and cultural preservation — expanding access, supporting community goals, and creating value across sectors.
The creator shares control. The community shapes the outcome. And the innovation scales with integrity.
This scenario is inspired by real initiatives like Masakhane, Ghana NLP, and other AI + language tech projects in the Global South.
“Innovation in itself is insufficient. To be successful, commercialization is needed too — along with IP protection.”
The Bigger Picture
The examples above are not only illustrations of responsible innovation; they also highlight the space where intellectual property, innovation, commercialization, and AI governance overlap.
Today, how we protect and scale innovation is a governance question — one that shapes who gets to build, who gets to benefit, and whose values are embedded in the systems we deploy. And yet, this intersection remains under-explored. It is not something most institutions are equipped to navigate — but it could be, with the right tools, intentional study, and clearer frameworks designed for this new area.
Why We’re Digging Into This at What If AI
What If AI is about asking thoughtful questions about technology, power, and possibility. Lately, one question keeps coming up:
What if the systems to protect and scale AI innovation were actually built for equity, ethics, and public benefit?
In other words, what if IP and commercialization were not just about maximizing profit — but about supporting the people who are solving real problems through AI?
That’s why we are launching a new series on IP, commercialization, and innovation governance. Here’s what you can expect in the coming weeks:
Insightful pieces on who owns innovation in the age of generative AI
Case studies on how researchers, artists, and public institutions navigate IP, licensing, and commercialization — and what gets lost when governance is not built in
Practical toolkits for institutions seeking to support ethical commercialization
Because governance doesn’t end at regulation. It includes imagination, infrastructure, and the systems we build to protect what matters.
We’re building this series in public, and we’d love to hear from you: contact@whatifai.org
Written by: Maryama Elmi
Founder, What If AI
References
Bender, M. (2020). Ethical concerns mount as AI takes bigger decision-making role. Harvard Gazette. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role
Esteva, A., Kuprel, B., Novoa, R.A. et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542, 115–118. https://www.nature.com/articles/nature21056
Ghana NLP. (2023). https://translation.ghananlp.org/
Hern, A. (2025). UK Abandons AI Welfare Prototypes Amid Ethics and Scalability Concerns. The Guardian. https://www.theguardian.com/technology/2025/jan/27/ai-prototypes-uk-welfare-system-dropped
Kuo, L. (2024). DeepSeek’s Dominance Shakes China’s AI Startup Scene. Financial Times. https://www.ft.com/content/c19f3988-45d7-4a81-854d-9ba0d71812fe
Masakhane Research Foundation. (n.d.). https://www.masakhane.io/
McKinsey & Company. (2023). What Is Innovation? https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-innovation
MIT Department of Materials Science and Engineering. (2023). The Spark of Innovation and the Commercialization Journey. https://dmse.mit.edu/news/the-spark-of-innovation-and-the-commercialization-journey
National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
Okorie, C. & Marivate, V. (2024). How African NLP Experts Are Navigating the Challenges of Copyright, Innovation, and Access. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/04/how-african-nlp-experts-are-navigating-the-challenges-of-copyright-innovation-and-access?lang=en
Pure AI. (2024). Why 80% of AI Projects Fail: Insights on Abandoned AI Initiatives. https://pureai.com/Articles/2024/08/02/Abandoned-AI-Projects.aspx
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137
World Intellectual Property Organization. (n.d.). About Intellectual Property (IP). https://www.wipo.int/en/web/about-ip
World Intellectual Property Organization. (2023). Artificial Intelligence and Intellectual Property: An Economic Perspective. https://www.wipo.int/publications/en/details.jsp?id=4715