The Window Is Closing for the EU AI Act to Have a Brussels Effect

Summary
- High hopes: European policymakers are hoping the EU’s economic clout can pressure the world’s AI developers to adopt European standards.
- Not so fast: The GDPR showed how European standards can quickly become the global norm – but the ‘Brussels Effect’ won’t be so straightforward for AI.
- So what? The longer the EU AI Act is left with unclear or unenforced guidelines, the less likely companies are to adopt European rules as their global baseline.
- Recommendations: The EU AI Office should fast-track implementation guidance and start enforcement dialogues now with major AI companies.
The EU is implementing the most comprehensive AI regulation in the world. But whether it will matter beyond Europe is an open question – and the window for impact is closing.
The EU AI Act is entering a critical implementation period. Rules on prohibited practices have been in force since February 2025. The General Purpose AI (GPAI) Code of Practice, signed by major AI companies including Anthropic, Google, and OpenAI, has been guiding compliance since August 2025. But the European AI Office has not yet had the power to penalise non-compliance.
That changes this August, when full enforcement powers activate. Also in August, the bulk of remaining obligations are due to come into force, including requirements for high-risk AI systems. (Recent legislative developments may push back these high-risk obligations to as late as December 2027.)
Meanwhile, other countries are developing their own AI governance frameworks, and companies are already building compliance systems. The longer the EU takes to specify its rules clearly, the less likely those rules are to become the global default.
The Act’s primary purpose is inward-looking: protecting Europeans from harmful AI. But the most powerful AI systems deployed in Europe are being developed elsewhere.
If AI companies can cheaply maintain separate compliance systems for different markets, they will follow EU rules only when serving EU customers. Europe would then bear the full price of regulation without the benefit of shaping how AI is developed beyond its borders.
Why the Brussels Effect is not guaranteed
The ‘Brussels Effect’ is the tendency for EU rules to become the global standard, thanks to the size of the European market. The EU’s General Data Protection Regulation (GDPR) has become the best-known example. When it took effect, companies everywhere found it easier to apply EU privacy standards worldwide than to run separate systems for each jurisdiction.
Many have assumed the AI Act will follow the same pattern. But for AI, compliance is more divisible.
The GDPR’s Brussels Effect operated primarily at the infrastructure level: firms had to rebuild their data-processing pipelines, consent management systems, and data storage architectures to comply. Once those systems were rebuilt to EU standards, it made no economic sense to maintain a parallel, weaker infrastructure for non-EU markets.
AI compliance works differently. The core product, the trained model, can remain identical across markets. What changes is the compliance layer around it: documentation, risk assessments, and disclosure obligations.
An AI model provider can offer full EU-grade documentation to European customers while providing only the minimal disclosures required elsewhere. The cost of maintaining two tiers is low because the expensive part, building the model, is already done.
For frontier GPAI providers, the Act's risk mitigation requirements could eventually require changes to the models themselves. But here, the timing matters. These companies' safety practices are evolving, and EU requirements are most likely to shape them if made concrete early.
How the AI Act might reach beyond Europe
The Act could have international reach through several mechanisms.
First, market-access compliance. Any provider placing an AI system on the EU market must comply with the AI Act. The EU market is large enough that major providers cannot afford to exit it – they will develop frameworks that meet EU requirements. Whether those frameworks become the global default depends on how precisely the requirements are specified.
Second, supply-chain pressure. The Act requires AI providers to share information across the AI value chain. European companies deploying high-risk AI systems will need technical documentation from their upstream model providers to meet EU obligations. That procurement requirement will apply whether the provider is in Paris or San Francisco.
Third, standards. If the EU’s technical standards for AI Act compliance (currently behind schedule) align with those of the International Organization for Standardization (ISO), companies are more likely to treat them as a global baseline. If the standards diverge, market segmentation is more likely.
Why speed matters as much as market size
If the AI Office publishes precise, stable guidelines promptly, companies will build their global systems around them because redesigning later is expensive.
If the guidelines are delayed, firms will build interim compliance packages tailored to their own interpretation, and those packages will harden into jurisdiction-specific systems that are costly to unify later. First-mover rules tend to stick.
The Act can still achieve a Brussels Effect, but only if implementation keeps pace. Every month of ambiguity is a month in which firms are more likely to invest in segmented compliance rather than portable global compliance.
What the AI Office should prioritise
The AI Office is significantly under-resourced for the mandate it has been given. With a small team overseeing compliance for a technology that spans every sector of the economy, prioritisation is essential.
The Office should focus on the channels it can most directly influence: market-access requirements and supply-chain pressure.
First, accelerate implementation guidelines. Three provisions in particular will shape how firms build their compliance processes: high-risk classification, value chain responsibilities, and transparency.
If the guidelines for these provisions are precise enough, firms will treat them as global templates because building one system is cheaper than building several. If they are vague, firms will develop a minimal compliance package tailored narrowly to EU requirements – too thin to satisfy regulators in other jurisdictions, and therefore never a candidate for a global baseline.
The European Commission has already published guidelines on prohibited practices; extending this approach to the above three provisions is the logical next step.
Second, launch enforcement dialogues now. The AI Office has the power to request information from GPAI providers and investigate compliance. The Office should begin structured dialogues with major providers on Code adherence before fines become available.
Early enforcement signals reduce the incentive to segment by raising the expected cost of non-compliance. This is the approach the European Commission took with the Digital Services Act, launching early investigations before imposing penalties to establish institutional credibility.
The channels exist for the EU AI Act to have global influence. But a delayed Brussels Effect is a diminished one. Whether the Act can shape international AI governance depends on whether the AI Office treats the coming months as an implementation sprint or a waiting period.
