Innovation Isn’t Just About Technology. It’s About the Systems Behind It

WRITTEN BY: MARYAMA ELMI, JD

This series has applied a systems thinking lens to AI innovation, asking not just what's broken, but what patterns are shaping the outcomes we see in the age of AI.

Introduction

Over the past few weeks, we’ve explored what happens when AI innovation outpaces the systems meant to govern it. From grassroots projects like Masakhane to recent controversies like Studio Ghibli vs. OpenAI, each case revealed the same underlying truth: our current innovation frameworks were not built with AI in mind. And it shows.

It’s not just that AI innovation is moving fast. It’s that the infrastructures we rely on weren’t built for how innovation actually works today. Two things make this shift especially urgent:

  • First, AI is breaking open who can innovate. Open-source models, decentralized communities, and collective research are making it easier than ever for people outside of traditional institutions to contribute to major breakthroughs. But our systems haven't caught up. They still reward closed, credentialed, ownership-driven models, not the open, distributed, and collaborative ones AI is helping unlock.

  • Second, AI innovation impacts far more people than it includes. Creatives, communities, and everyday users often shape the data these systems rely on, yet they're excluded from the decisions, benefits, and protections that follow. Our current governance models treat them as inputs, not stakeholders — even when the outputs wouldn't exist without them.

In essence, AI is widening the gap between how innovation actually happens and how it’s governed. And if we don’t address that gap, we risk building a future where innovation is faster, but also more extractive, inequitable, and unaccountable.

This series has taken a systems-level view of that problem. It doesn't just ask what's broken; it asks:

What patterns, incentives, and infrastructures are shaping the outcomes we see in the age of AI, and what would it take to build better ones?

In the sections that follow, we'll explore the core infrastructure gaps shaping today's innovation failures and outline a new model for building systems that actually work.

Where the Infrastructure Breaks 

When we talk about infrastructure in the context of innovation, we're talking about the systems that make innovation possible. These systems were never built with AI in mind, and in case after case, the cracks are showing. But the deeper problem isn't just an oversight; it's reproduction.

Undesirable behaviors [are] characteristic of the system structures that produce them.
— Donella Meadows, Thinking in Systems

In other words, the problem isn’t just what the ‘system structures’ allow; it’s what the systems create, over and over again. If we want different outcomes, we have to change the structures themselves.

Across this series, we've identified four main infrastructure gaps that show up consistently: legal, cultural, commercial, and institutional.

Together, these gaps leave both grassroots and commercial projects without the recognition, protection, and support they need.

Early Warnings: How AI Innovation Is Stress-Testing Governance

AI innovation tests the limits of systems built for a different era. The four infrastructure gaps we've mapped — legal, cultural, commercial, and institutional — are not theoretical. They’re unfolding in real time across different sectors, communities, and national strategies.

Here’s an overview of how these challenges are unfolding around the world:

Community Innovation: Indigenous Data Sovereignty Movements

Across Indigenous communities, initiatives like the Māori Data Futures program and frameworks like the CARE Principles for Indigenous Data Governance (endorsed by the Global Indigenous Data Alliance) are redefining what ethical innovation can look like. They emphasize governance rights and collective benefit over extractive ownership.

But while these frameworks offer critical alternatives, they remain largely unrecognized by mainstream innovation systems. As AI development accelerates, the failure to integrate Indigenous governance models raises a pressing question: whose systems will shape the future of innovation, and who will be left out?

📍 UNESCO has explicitly called for Indigenous peoples' full participation in AI development and for governance frameworks that uphold their data sovereignty and self-determination. It warns that exclusion from these processes risks cultural misappropriation and deepening systemic inequities.

Collective Innovation: The Case of EleutherAI

Open research collectives like EleutherAI have contributed foundational work to the AI landscape by building models, datasets, and tools that have inspired broader innovation. Yet much of this work has been commercially repurposed without clear attribution, compensation, or protections for the collective.

This unfolding challenge reveals a deeper governance gap: our current IP and recognition systems were built for individual inventors and formal pipelines, not for distributed, collective innovation.

As open, collaborative research models become more central to AI development, the inability of existing systems to recognize, credit, or protect them exposes innovation itself to systemic extraction and erasure.

📍 As highlighted by researchers from EleutherAI, the collective contributions of open-source communities remain structurally unrecognized and economically unrewarded. This challenge reflects a broader governance failure to protect and credit distributed innovation in AI (Chan, Bradley, & Rajkumar, 2023, "Reclaiming the Digital Commons").

National Innovation Models: Qatar’s Vision 2030

State-led initiatives like Qatar's Vision 2030 represent bold efforts to build new innovation ecosystems, investing heavily in research, technology, ethics frameworks, and commercialization infrastructure. But without rethinking the underlying governance structures that determine who is recognized, who benefits, and how innovation is scaled, these ambitious strategies risk replicating the same exclusionary models that have long dominated global innovation systems.

The challenge is not building new innovation hubs; it is ensuring they aren't governed by outdated rules. As the OECD puts it, "existing approaches have proven ill-suited to maintain an equilibrium between exploitation (of current systems) and exploration (of future possibilities) amid complexity and uncertainty" (OECD, 2020a). In other words, investment without governance reform isn't innovation; it's replication.

📍 Reports from the OECD emphasize that ethical, inclusive governance models are critical to ensure that national innovation strategies deliver long-term, shared public value — not just private gains.

Why This Matters

Across the community, collective, and national levels, the message is clear: Innovation is evolving. Governance is not.

Without updated infrastructure:

  • Community-driven innovation will continue to be marginalized.

  • Open research will be extracted and repackaged without fair recognition.

  • National innovation strategies will replicate old inequities under new banners.

The cost isn't just individual. It's systemic. Entire fields of innovation could be shaped by exclusion, extraction, and short-term gain rather than sustainability, equity, and shared public good. That's why we need more than new technologies. We need infrastructures that are adaptive, inclusive, and built to govern innovation as it actually happens today.

What a Better System Looks Like

If the patterns we’ve seen across grassroots, collective, and national innovation efforts reveal anything, it is that our current systems weren’t built for the complexity, scale, and speed of AI innovation.

We don’t just need better rules. We need better foundations. That’s where the Pillars of AI Innovation come in.

Built around three essential pillars, this model offers a practical approach to rebuilding the infrastructure of innovation governance for the AI era. It’s not about managing risk alone. It’s about designing systems that reflect how innovation actually happens, and who it should serve.

The Pillars of AI Innovation

A Framework for Rethinking Recognition, Pathways, and Protection in AI Innovation

This framework surfaces the systems that must be considered if we want innovation in AI to be ethical, inclusive, and built for the public good.

1. Recognition – Who counts as a contributor?

At the heart of recognition are questions of credit and attribution, such as who gets named and rewarded in the innovation process. Today, these questions are largely governed by intellectual property law, which was designed around the lone inventor, the formal institution, and the idea of innovation as an isolated act. But that model no longer fits the way AI innovation actually happens.

These days, AI systems are often built collaboratively, globally, and informally, through open-source communities, scraped datasets, shared models, and remixable tools. Current systems struggle to recognize or support this kind of collective, distributed labor.

Recognition also matters beyond the act of creation. As AI systems scale, they affect a much broader set of people, such as those whose data is used, whose styles are mimicked, and whose communities are impacted by the technologies that emerge. However, these groups are rarely seen as contributors, let alone stakeholders.

2. Pathways – How is AI innovation moved, funded, and scaled?

The question of pathways is about what kinds of innovation are able to grow and under what conditions.

Today, the dominant route from research to real-world impact is commercialization through formal channels. That model may work for profit-driven ventures, but it leaves little space for open, collaborative, or community-centered innovation, especially the kind that serves the public good.

Too often, important work stalls not because it lacks value, but because it lacks a viable path to scale outside of the market.

3. Protection – How are rights, risks, and responsibilities shared?

Protection isn’t just about safeguarding innovation; it’s also about deciding whose interests are defended, and whose risks are ignored. In today’s system, protection often defaults to intellectual property hoarding, restrictive licensing, or litigation. But in an AI landscape shaped by scraped data, appropriation, and model outputs built on collective knowledge, exclusion can’t be the only form of defense.

We need protection frameworks that reflect a different logic, one rooted in ethical boundaries, enforceable rights, and shared accountability, especially for the communities and creators most affected by how AI systems are built and deployed.

We will go into more depth on this proposed model in our upcoming toolkit.

We Don’t Just Need Innovation. We Need Infrastructure.

We often talk about innovation as if it emerges fully formed. But innovation is never just a tool; it is a process, and the outcome of systems. When those systems fail to evolve, even the most promising innovations stall, get extracted, or disappear.

That’s what this series has shown.

From Masakhane’s quiet erasure to the aesthetic extraction of Studio Ghibli, these cases make one thing clear: the fractures are in the foundations of how we attribute, scale, and govern innovation itself.

That’s why we’re building a toolkit — not just to name what’s broken, but to help change it. Designed for researchers, institutions, and policymakers, it will offer practical tools for governing AI innovation in ways that are ethical, inclusive, and grounded in the realities of how innovation actually happens. Not someday, but now.

Because the future of AI won’t be shaped by better tools alone. It will be shaped by the systems we build around them.

This article is part of our What Then series — a collection of future-facing ideas about how AI governance systems can be redesigned.

References

  1. Chan, A., Bradley, H., & Rajkumar, N. (2023). Reclaiming the digital commons: A public data trust for training data. arXiv. https://arxiv.org/abs/2303.09001

  2. Meadows, D. H. (2008). Thinking in systems: A primer. Sustainability Institute. https://research.fit.edu/media/site-specific/researchfitedu/coast-climate-adaptation-library/climate-communications/psychology-amp-behavior/Meadows-2008.-Thinking-in-Systems.pdf

  3. OECD. (2020a). Anticipatory innovation governance: Shaping the future through proactive policy making. OECD Publishing. https://www.oecd.org/content/dam/oecd/en/publications/reports/2020/12/anticipatory-innovation-governance_d1aded4e/cce14d80-en.pdf

  4. OECD. (2020b). A broad-based innovation policy for all regions and cities. OECD Publishing. https://www.oecd.org/content/dam/oecd/en/publications/reports/2020/10/broad-based-innovation-policy-for-all-regions-and-cities_1ce6985d/299731d2-en.pdf

  5. UNESCO. (2023, December 11). New report and guidelines for Indigenous data sovereignty in artificial intelligence developments. https://www.unesco.org/en/articles/new-report-and-guidelines-indigenous-data-sovereignty-artificial-intelligence-developments

What If AI

Maryama Elmi is the founder of What If AI, a platform exploring the future of AI governance through global perspectives, cultural analysis, and public-interest storytelling. A lawyer and policy strategist, she writes about the systems, gaps, and ideas shaping how AI shows up in the world.
