Fifteen months ago, at the first global AI summit, the focus was on cooperation, mitigating risks, and ensuring AI developed in a safe, transparent manner. Fast forward to the latest AI summit in Paris, and the tone has dramatically shifted—geopolitical competition now takes center stage. The AI arms race is in full swing, with the US and China battling for supremacy, and Europe striving to carve out its role. In this rapidly evolving landscape, one idea is gaining momentum: open-source AI might just be the way forward.
The US-China AI Battle and the Open-Source Debate
A key inflection point in this race has been the emergence of China’s DeepSeek, an AI model developed with significantly lower costs and computing power than its Western rivals. DeepSeek’s launch has rattled US confidence, underscoring that innovation doesn’t have to be locked behind expensive, closed-source models.
Former Google CEO Eric Schmidt recently warned that Western countries must champion open-source AI or risk ceding leadership to China. With most US models—OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini—remaining closed, the danger lies in a scenario where China emerges as the global leader in open AI, while the US and Europe operate behind corporate walled gardens. Schmidt’s concern is not just about competitiveness but also about scientific progress: he argues that closed AI models will hinder universities and independent researchers who lack the capital to access them.
This fear is compounded by a recent shift in US policy, signaled by Vice President JD Vance, who has championed a more aggressive approach that prioritizes AI dominance over regulatory oversight. With the Trump administration revoking Biden’s executive order requiring AI companies to share developments with the government, the US is betting that unrestricted competition will deliver an advantage. But this approach raises an important question: is a winner-takes-all strategy really the best way forward?
Open-Source AI: A Path to Innovation and Security
Open-source AI could provide the answer to some of these pressing concerns. Unlike closed models that are controlled by a handful of tech giants, open-source AI democratizes access, enabling a broader spectrum of developers, businesses, and researchers to innovate. The success of Meta’s Llama model—a rare example of an open-source US initiative—demonstrates the benefits of a more inclusive AI ecosystem.
One of the strongest arguments for open-source AI is its role in fostering innovation. By allowing developers worldwide to build on existing models, new applications can emerge more rapidly, enhancing everything from healthcare to financial services. Europe, which lacks the capital to compete head-on with US and Chinese tech giants, could particularly benefit from this approach. French President Emmanuel Macron has already embraced this idea, advocating for shared AI platforms and significant investments in AI infrastructure.
But beyond innovation, there’s another crucial factor: security. Open-source AI allows for greater transparency, making it easier to detect biases, security vulnerabilities, and potential misuses. In contrast, closed AI models operate as black boxes, with limited external scrutiny. In an era where AI safety is a top concern, ensuring that AI development remains open and accessible could be a key safeguard against unintended consequences.
The Challenges of an Open AI Future
Despite its promise, open-source AI comes with challenges. Critics argue that making AI models freely available could lead to misuse, from deepfake manipulation to cyber threats. The balance between openness and security must be carefully managed.
Moreover, the business case for open-source AI remains complex. AI development requires significant investment, and companies like OpenAI, Google, and Amazon have poured billions into proprietary models, expecting returns. If AI becomes fully open-source, monetization strategies will have to shift—possibly towards service-based models rather than one-time product sales.
There’s also the question of regulation. Europe has taken a proactive stance with its AI Act, while the US is now opting for a hands-off approach. China, meanwhile, is playing both sides—engaging with Europe on regulation while aggressively scaling its AI capabilities. A global consensus on how to manage AI openness versus control remains elusive.
The Way Forward: A Hybrid Approach
Rather than an all-or-nothing approach, the future of AI might lie in a hybrid model, combining both open- and closed-source elements. As Schmidt suggests, leveraging the strengths of both approaches could provide the best path forward. Open models can drive research and transparency, while closed models ensure commercialization remains viable.
The AI race is far from over, and the stakes couldn’t be higher. While the US accelerates its AI ambitions with fewer guardrails, China is proving that efficiency and cost-effectiveness matter just as much as raw power. Europe, caught between these giants, has an opportunity to redefine the landscape by championing open-source AI as a force for democratization and innovation.
As AI continues to shape our world, the question remains: will the future be defined by a handful of corporate-controlled models, or will an open, collaborative AI ecosystem emerge? The answer may determine not just who wins the AI race, but how the technology ultimately benefits humanity.