Article published in Sciences Po Conference, 2026
On 2 August 2026, the European regulation on artificial intelligence (the AI Act) was due to enter into full application[1]. The date had been set to give businesses two years of preparation following the publication of the text in August 2024, marking the conclusion of a three-year legislative process. Six months before the deadline, the European Commission proposed to postpone it: a move presented as purely technical, but one that also deserves to be read through a political lens.
The most ambitious text in the digital world
The AI Act is, by the admission of its own architects, an unprecedented construct. No democracy had ever attempted to regulate artificial intelligence so comprehensively. The text classifies AI systems according to their level of risk (unacceptable, high, limited, minimal) and imposes graduated obligations: an outright ban on Chinese-style social scoring, rigorous compliance for systems used in recruitment or justice, transparency for synthetic content. On paper, the architecture appears simple and well-designed. In practice, it reveals considerable complexity.
This is no trivial matter. According to an analysis published in The Regulatory Review, the University of Pennsylvania's regulatory affairs publication, the text — with its recitals, articles, annexes and delegated provisions — constitutes the most extensive regulatory framework in the European digital ecosystem[2]. This density is no accident: it reflects the ambition to govern a constantly evolving technological domain, from language models to autonomous vehicles and facial recognition. But it raises a structural problem of readability and legal predictability — two qualities that innovative companies need in order to invest.
The Draghi Report: an unsparing diagnosis
A similarly stark assessment came from an unexpected source: one of Europe's most respected former leaders. In September 2024, Mario Draghi presented to the European Parliament a report on European competitiveness whose candour took many by surprise[3]. The former president of the European Central Bank painted a troubling picture: the continent lags behind technologically, struggles to create digital platforms of global scale, and remains heavily dependent on American and Chinese infrastructure.
The figures bear this out. In 2023, the European Union attracted $8 billion in AI venture capital, compared with $68 billion for the United States and $15 billion for China[4]. Of the fifty largest technology companies in the world, only four are European. And nearly 30% of European unicorns relocated their headquarters to the United States between 2008 and 2021[5].
Draghi draws a clear conclusion: despite good intentions, European regulation has often replaced investment rather than fostering it. He therefore recommends simplifying the rules, harmonising their application across member states, and resolving the overlaps between the GDPR and the AI Act that complicate life for European developers.
Draghi also proposes the creation of a regulatory sandbox regime allowing companies to test their AI systems within a lighter framework. Finally, he calls for an investment effort of €750 to €800 billion per year, a figure which, as a share of European GDP, exceeds that of the Marshall Plan[6].
The Digital Omnibus: the Commission’s response
On 19 November 2025, the European Commission responded to the Draghi diagnosis with a legislative package dubbed the “Digital Omnibus”[7]. The term, borrowed from the parliamentary practice of bundling several laws into a single text, aptly reflects the stated objective: to simplify without reopening the political debates that led to the adoption of the AI Act.
The proposal focuses on three main areas. First, the postponement of obligations for high-risk systems: the date of 2 August 2026 is now contingent on the effective availability of harmonised standards, with a maximum deadline of 2 December 2027 for systems listed in Annex III and 2 August 2028 for those integrated into already-regulated products[8]. Second, documentation requirements will be eased for SMEs and mid-cap companies, a category recently defined to limit the asymmetry of compliance costs vis-à-vis dominant market players. Third, the role of the European AI Office will be strengthened: it will become the central supervisory authority for systems built on general-purpose AI models.
The Commission insists that this amounts to simplification rather than deregulation. The distinction matters: the substance of the obligations remains unchanged, but their timeline and enforcement mechanisms are being adapted.
The EDPB and the EDPS, in their joint opinion of January 2026, endorsed the distinction while cautioning that the postponement could weaken the protection of fundamental rights in the most sensitive use cases[9].
The real problem: the compliance ecosystem does not exist
The Commission’s move would be unremarkable if it were merely a calendar adjustment. But it reveals a deeper problem: at the point when the text was supposed to take effect, the tools needed to implement it do not yet exist. The harmonised standards being developed by CEN-CENELEC’s Joint Technical Committee 21 — which are essential for translating legal obligations into technical specifications for developers — will not be finalised before the end of 2026[10]. Moreover, notified bodies, the certification gatekeepers required to authorise the placing on the market of high-risk systems, are still being designated in several member states. The Commission’s guidelines on questions as fundamental as the definition of an “AI system” under the regulation remain under consultation.
In practice, Europe has adopted the world’s most ambitious regulation on artificial intelligence, but has not yet put in place the institutional structures needed to enforce it. It is somewhat akin to a country enacting a highway code without training inspectors, installing road signs, or painting lane markings.
This gap is far from trivial. It reflects a political choice: to move quickly on the legislative front in order to shape the global debate — what is often called the “Brussels Effect,” a concept championed by Anu Bradford — even if it means deferring the practicalities of implementation.
This approach already proved successful with the GDPR, which established itself as an international benchmark. But the GDPR dealt with practices that were already well-identified and widely understood. The AI Act, by contrast, targets a technology whose capabilities, risks and uses evolve faster than the drafting of implementing measures.
Washington deregulates, Beijing adapts, Brussels legislates
The geopolitical context of 2026 makes the tension between regulation and competitiveness more acute than ever. Under the Trump administration, AI has been approached through the lens of deregulation, with the repeal of Biden-era executive orders aimed at governing AI safety[11]. Major American companies continue to invest tens of billions in high-performance computing and foundation models, free from the compliance constraints imposed by Europe. China, for its part, aligns its regulation with the industrial objectives of its “Made in China 2025” plan.
Europe thus finds itself caught between two logics: by legislating swiftly, it asserts its values on the world stage; by investing too late, it risks becoming the regulator of a game in which it no longer truly participates. As the Draghi report warned, the EU could end up being more of a “consumer” than a genuine “producer” of advanced AI technologies.
Some initiatives, however, suggest that awareness is growing. The InvestAI programme plans to mobilise €200 billion to develop European AI infrastructure[12]. The Chips Act 2.0 envisages €80 billion in public and private funding to support the semiconductor sector[13]. Finally, the EURO-3C project, announced in March 2026 at the Mobile World Congress, allocates €75 million for a federated cloud-edge infrastructure[14]. These efforts are significant, but they remain smaller than American and Chinese investments and, above all, they arrive several years behind the pace of innovation they are meant to match.
The false dilemma of regulation and innovation
The European debate has for too long been structured around a binary opposition: regulate or innovate. Anu Bradford, in a 2024 article, showed that this opposition is largely artificial[15]. The absence of regulation in Europe before 2010 did not prevent the continent from missing the first wave of digital platforms; conversely, the GDPR did not hamper the growth of the European data market. The problem lies not in regulation itself, but in its fragmentation, its inconsistencies, and its occasional overlaps with existing legislation.
In this context, the Digital Omnibus gives the impression of adding a layer of complexity rather than delivering genuine simplification. That a law needs to be amended before it even takes effect suggests that something in the process did not entirely work.
Ultimately, the question is not whether to choose between strict regulation and laissez-faire, but how to strike a balance: setting robust principles without making their application too burdensome or impractical on the ground.
What Europe owes itself
The AI Act remains, in its conception, a major political project. It affirms that artificial intelligence cannot develop outside any democratic framework. It holds that fundamental rights are not obstacles to innovation but preconditions for the trust without which no technology can take lasting root in a society. It offers an alternative model to American laissez-faire and Chinese state dirigisme.
A political project can only truly be measured once it is put into practice. Europe has devised what it calls a “third way” in regulation, but it remains to be demonstrated that this approach will not lead to paralysis. The Digital Omnibus is a first step — necessary but limited. The real question is not merely whether the AI Act will be simplified, but whether the Union will invest at a level commensurate with its regulatory ambitions. To regulate without investing is to set the rules for a world that others are building. And in the geopolitics of AI, whoever builds the infrastructure ultimately decides who plays and who watches.
Footnotes:
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Official Journal of the European Union, L 2024/1689, 12 July 2024.
[2] Nicoletta Rangone, “The Paradoxes of the European Union’s AI Regulation,” The Regulatory Review, University of Pennsylvania, 10 March 2026.
[3] Mario Draghi, The Future of European Competitiveness, report presented to the European Commission, September 2024. Available at ec.europa.eu.
[4] Draghi, op. cit., Part B, Chapter on Digital Technologies and AI.
[5] Justyna Lisinska, “Draghi’s Competitiveness Report Shows Why the EU Needs a Pro-Innovation Approach Towards AI,” Center for Data Innovation, 25 September 2024. Unicorn figure: 30% of European unicorns relocated their headquarters to the United States between 2008 and 2021.
[6] Draghi, op. cit., Part A, pp. 2–6. Investment recommendation of €750 to €800 billion per year.
[7] European Commission, Digital Omnibus on AI Regulation Proposal, COM(2025) 836, 19 November 2025. Available at digital-strategy.ec.europa.eu.
[8] Müge Fazlioglu and Joe Jones, “EU Digital Omnibus: Analysis of Key Changes,” IAPP, December 2025.
[9] EDPB and EDPS, Joint Opinion 1/2026 on the Proposal for a Regulation as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI), 21 January 2026. Available at edpb.europa.eu.
[10] CEN-CENELEC, Joint Technical Committee 21 on Artificial Intelligence. Harmonised standardisation work for the AI Act has been coordinated by JTC 21 since 2023.
[11] Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” 23 January 2025. Executive Order 14110 from the Biden era on AI safety had been revoked on 20 January 2025.
[12] European Commission, InvestAI Initiative, launched on 11 February 2025 at the Summit for Action on Artificial Intelligence in Paris. Press release IP/25/467.
[13] Regulation (EU) 2023/1781 of 13 September 2023 establishing a framework of measures for strengthening the European semiconductor ecosystem (European Chips Act). The Chips Act 2.0, announced in 2025, provides for €80 billion in public and private funding.
[14] EURO-3C Project, announced in March 2026 at the Mobile World Congress in Barcelona. €75 million allocated for a federated European cloud-edge infrastructure.
[15] Anu Bradford, “The False Choice between Digital Regulation and Innovation,” ProMarket (Stigler Center, University of Chicago Booth), 11 December 2024.