What happens when the biggest names in AI — like OpenAI and Google — urge Washington to ease up on regulation, just as fears of a fragmented, 50-state patchwork grow and China’s DeepSeek makes global headlines? We may soon find out. In recent weeks, both public- and private-sector leaders have intensified efforts to accelerate U.S. AI development, spurred in part by a wave of comprehensive state-level regulations in places like California, Colorado, and Utah.
Among the key voices is OpenAI, which—just before the March 15 deadline—submitted a detailed proposal in response to the call from the White House Office of Science and Technology Policy (OSTP) for input on a national AI action plan, alongside many thousands of other public comments. In its submission, OpenAI urged the government to reduce regulatory burdens, arguing that a more flexible approach would better position the U.S. to lead the global AI race.
AI innovation and regulation are colliding across borders—will the U.S. keep up?
Billed as a “freedom-focused policy proposal,” OpenAI’s submission calls for lightening the existing regulatory burden while increasing voluntary collaboration between industry and the federal government. According to OpenAI, such an approach would strengthen the nation’s lead in AI by enabling the technology to develop faster. The proposal also urges the government to create a framework for this voluntary partnership while providing the private sector relief from the AI-related bills introduced in various U.S. states. Such relief, according to the document, would advance innovation and prevent other players in the global AI race, especially China, from benefiting from the regulatory arbitrage created by a patchwork of individual state rules.
The AI arms race is on—and so is the race to regulate it
While the global AI race may be relatively new, the tension between innovation and regulation is not. Scholars have debated this tension for years, exploring whether and to what extent regulation stifles innovation. And while innovation-related concerns frequently accompany regulatory proposals, they appear to resonate most powerfully in the context of data management and information privacy—highly relevant aspects of AI governance—where the balance between progress and protection remains deeply contested.
While it is essential to protect consumers, businesses, and institutions from the unintended—and at times even intentional—harms of emerging technologies, the mere possibility that regulation could slow innovation should not serve as a blanket excuse to reject reasonable safeguards. To be sure, overly burdensome regulation in fast-moving sectors like technology may hinder the pace of innovation—a dynamic often referred to as “RegLag,” where regulation inevitably lags behind technological advancement. But that reality underscores the need for smarter, more adaptive governance and regulatory technology tools—not their absence.
Compared with other technologies, regulators’ balancing task in AI is even more delicate. For one, the safety risks associated with AI have been described as more alarming than those of other innovations: from existential risks to humanity, with ‘Godfather of AI’ Geoffrey Hinton warning of up to a 20% chance of such an outcome within 30 years; through the risk that humans develop an irreversible dependency on AI, eroding their abilities and exposing them to manipulation; to an unprecedented transformation of the job market; to privacy breaches and biased, unfair AI-driven decisions that affect individuals in various ways. AI today challenges regulators around the world in ways never considered before.
From Brussels to Beijing, global powers are rewriting the rules of the AI race
Dr. Karni Chagal-Feferkorn, an expert in comparative law and AI, explains that different jurisdictions are navigating the tension between protecting the public interest through regulation and maintaining a competitive edge in the global AI race. The European Union, she notes, is the first major power to implement comprehensive, binding AI regulations that apply to both European and non-European entities. This positions the EU as a global standard-setter and could give European companies an advantage, as their systems are already designed for compliance. However, the regulatory burden may also deter innovation within the EU and discourage foreign companies from engaging with the European market due to costly compliance obligations.
Building on this perspective, Jan Czarnocki, co-founder of White Bison, a Swiss-based consultancy for AI and compliance, cautions that the EU AI Act has introduced significant uncertainty and is likely contributing to delays in the rollout of downstream AI systems. He attributes this, in part, to the challenge of translating complex technical AI concepts into clear, actionable legal norms—and then effectively communicating those norms to the engineers and developers responsible for implementation. The disconnect between legal and technical teams, each operating with different incentives and priorities, adds another layer of complexity. As legal professionals gain increasing authority over AI deployment decisions, new internal stakeholders are introduced, making the organizational process of adopting AI systems more cumbersome.
In contrast to the EU’s binding regulatory approach, Israel—often dubbed the “start-up nation”—has opted for a lighter-touch, voluntary framework aimed at fostering innovation. According to Josef Gedalyahu, Director of the AI Policy & Regulation Center at Israel’s Ministry of Innovation, Science & Technology, regulators have been explicitly instructed to avoid mandatory rules whenever possible, with recent news reports noting the government’s decision to opt instead for tools like self-regulation and regulatory sandboxes. This approach, now echoed by proposals from OpenAI in the U.S., reflects Israel’s strategy to maintain its status as a global tech leader by positioning regulation as a facilitator rather than an obstacle to innovation.
Finally, China, arguably the United States’ primary rival in the global AI race, has taken a markedly different regulatory path. According to Tehila Levi, an attorney at Sullivan & Worcester specializing in Asia, China’s approach combines elements seen in other jurisdictions. Since 2021, it has introduced several regulatory measures, most notably the Cyberspace Administration of China’s (CAC) algorithm filing framework, which requires AI companies to register their algorithms in a national database. This filing requirement enhances government oversight and transparency but imposes few additional compliance burdens. While China does regulate AI—primarily to protect national security—its framework remains comparatively light-touch, helping to sustain its momentum in AI development.
Striking the balance between safety and speed in the age of AI
Earlier this year, the Trump administration revoked President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Meanwhile, several U.S. states continue to enforce—or plan to introduce—AI regulations, often targeting specific sectors such as insurance or consumer protection. Rolling back these state-level laws would not only reduce governmental oversight of AI systems but also remove key incentives for developers and users to adopt precautionary measures—a potentially troubling prospect given the high-stakes risks associated with AI. At the same time, heeding OpenAI’s recent proposal and reducing the regulatory burden—if paired with strong public-private collaboration and robust voluntary safeguards—could preserve innovation without sacrificing safety.
Therefore, as the global AI race accelerates, the real challenge isn’t choosing between innovation and regulation—it’s designing systems that deliver both, before the future arrives faster than we’re ready for it.