The Fractured Map of AI Governance: How the World is Dividing Over AI Laws
AI’s impact is global, but its rules are not.
Right now, nations are making high-stakes decisions about artificial intelligence—who controls it, who benefits, and who gets left behind. But instead of a unified approach, the world is splintering into competing regulatory models, each with its own vision for the future of AI.
This fractured landscape isn’t just about laws—it’s about power, influence, and the future of global technology. Here’s where AI governance stands today and what it means for all of us.
1. The Three Competing Models of AI Governance
The European Union: The Global Rule-Setter
Approach: Strict, risk-based regulation (e.g., the AI Act, GDPR).
Stance: AI should be transparent, accountable, and safe—even at the cost of slowing down innovation.
Impact:
The EU is setting global AI standards, just as it did with GDPR.
Companies that operate in Europe must comply with EU laws, influencing AI policies worldwide.
Critics argue that heavy regulation could stifle AI development within Europe.
The United States: The Wild West of AI Innovation
Approach: Market-driven, fragmented regulation (state-level laws, voluntary AI safety pledges, Big Tech self-regulation; see this article for more).
Stance: AI should be led by private companies, with minimal government interference.
Impact:
Home to AI powerhouses like OpenAI, Google DeepMind, and Anthropic.
At present, there is no federal AI law in the U.S. Trump recently revoked Biden’s Executive Order (EO) on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and issued a new order aimed at “decreasing barriers to innovation and positioning the United States to be a global leader in AI” (see this article for more).
The Trump administration is likely to adopt a light regulatory approach to AI development and deployment.
China: The State-Controlled AI Powerhouse
Approach: Government-led, strict regulations on AI usage.
Stance: AI is a strategic national asset—it must be tightly controlled and aligned with state interests. China’s emphasis is on social control and economic planning.
Impact:
China has some of the world’s strictest AI laws, requiring companies to submit AI models for government approval.
Heavy investment in AI-powered surveillance and military applications.
Unlike the U.S. and EU, China’s AI policy prioritizes state control over corporate freedom.
2. Who’s Getting Left Out? The Global South’s Fight for AI Equity
While the EU, U.S., and China dominate AI governance discussions, the Global South is often sidelined. But AI is shaping economies and societies everywhere—including in regions where regulations are still developing.
Africa: Emerging AI hubs (e.g., Kenya, Nigeria, South Africa) are pushing for AI policies that reflect local priorities, such as preventing algorithmic bias against African languages.
Latin America: Countries like Brazil and Chile are exploring AI ethics frameworks, but struggle with tech dependence on the U.S. and China.
Southeast Asia: Balancing AI-driven economic growth with concerns about privacy and authoritarian misuse (e.g., government surveillance in Singapore and Vietnam).
The Big Question: How can we ensure these regions have a voice in shaping global AI rules, or will they be forced to adopt laws set by the EU, U.S., and China?
3. The Real Battle: Who Gets to Set the Global AI Standards?
What happens when a company develops an AI model in California, trains it on data from Europe, deploys it in India, and sells it in China? Who’s responsible if something goes wrong?
Right now, nations are competing to set the global AI standard—but no one can fully control how AI spreads.
The Three Competing Forces:
The EU wants the world to follow its AI Act—and many countries are adopting similar laws to stay compliant.
The U.S. wants to keep AI regulation flexible—allowing its tech industry to lead, while avoiding strict government oversight.
China is exporting its AI governance models—offering its AI surveillance technologies to developing nations, especially in Africa and Latin America (see this article for more).
The result is a fractured AI governance map, where the “rules of AI” depend on where you are in the world.
4. What Happens Next? The Key AI Governance Battles to Watch in 2025
The global AI landscape is shifting fast. These are the key developments to watch:
1. The EU’s AI Act Goes Into Effect
Will companies outside Europe comply, or will they push back?
Will the AI Act become the global standard, like the GDPR?
2. The U.S. Debates Federal AI Regulation
Will Congress pass federal AI laws—or will regulation remain fragmented at the state level?
How much influence will Big Tech have in shaping the rules?
3. China’s AI Expansion
Will China’s strict AI laws inspire other countries to take similar approaches?
How will its AI regulations impact global trade and national security?
4. The Global South’s Growing Voice
Will developing nations get a seat at the table in global AI policymaking?
Can they build their own AI ecosystems—or will they remain dependent on AI from the U.S., China, and the EU?
Final Thoughts: AI Governance is a Global Power Struggle
The battle over AI governance isn’t just about tech; it’s about power. The question is: who gets to shape the future of the digital world?
Whether AI becomes a tool for empowerment or control depends on the rules we set today.
The Takeaway: AI governance is happening now, and it’s shaping the world far beyond just tech companies and policymakers. Will we allow a handful of nations to dictate the future, or will the world demand a more inclusive, global approach to AI governance?
References
APA Services. (n.d.). AI executive orders: What psychologists need to know. American Psychological Association. https://www.apaservices.org/practice/business/technology/on-the-horizon/ai-executive-orders
Arise News. (2024). Vance warns Europe that strict regulations could stifle AI development. https://www.arise.tv/vance-warns-europe-that-strict-regulations-could-stifle-ai-development/
Asia Times. (2024, August). China’s AI strategy: All about serving the state. https://asiatimes.com/2024/08/chinas-ai-strategy-all-about-serving-the-state/
Carnegie Endowment for International Peace. (2019, January). We need to get smart about how governments use AI. https://carnegieendowment.org/posts/2019/01/we-need-to-get-smart-about-how-governments-use-ai?lang=en
CIO. (2024). EU AI Act: Sensible guardrail or innovation killer? https://www.cio.com/article/3480307/eu-ai-act-sensible-guardrail-or-innovation-killer.html
Federal Register. (2023, November 1). Safe, secure, and trustworthy development and use of artificial intelligence. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
MIT Technology Review. (2024, July 22). AI companies promised the White House to self-regulate one year ago. What’s changed? https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/
Skadden. (2025, January). U.S. federal regulation of AI is likely to be lighter. 2025 Insights. https://www.skadden.com/insights/publications/2025/01/2025-insights-sections/revisiting-regulations-and-policies/us-federal-regulation-of-ai-is-likely-to-be-lighter