EU rejects calls from Alphabet, Meta and others to stall AI regulation
The European Union has dismissed calls from major technology firms, including Alphabet and Meta, to delay the implementation of its Artificial Intelligence Act, reaffirming its commitment to enforce the world’s first comprehensive AI regulatory framework on schedule.
In recent weeks, several tech giants and European firms, among them ASML and Mistral, urged the European Commission to postpone the law, citing high compliance costs and complex regulatory demands. The Commission, however, has made clear that the timeline will not change.
“I’ve seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible: there is no stop the clock. There is no grace period. There is no pause,” said Commission spokesperson Thomas Regnier, as quoted by Reuters. He added that the Act has legally binding deadlines designed to ensure responsible AI development, which will be enforced accordingly.
Why tech companies sought a delay
Both U.S. and European AI companies have expressed concerns that the law could increase costs and potentially stifle innovation in the sector. Alphabet, Meta, and others argued for more time to adapt, warning that the obligations could hurt competitiveness and slow progress in AI development.
But the Commission remains firm, saying that delays would compromise public trust and safety. “We have legal deadlines established in a legal text,” Regnier noted, explaining that the law’s requirements will take effect in a phased manner beginning this year.
What is the EU AI Act?
The Artificial Intelligence Act, which entered into force on 1 August 2024, is the first legislation of its kind globally. It takes a risk-based approach, classifying AI systems according to the level of risk they pose. The most hazardous applications, such as untargeted scraping of facial images and systems designed to manipulate behavior, are banned outright under the Act.
Enforcement timeline
The Act’s provisions are staggered:
- From 2 February 2025: The most harmful AI practices are prohibited.
- From 2 August 2025: Obligations for general-purpose AI (GPAI) models come into effect.
- From 2 August 2026: Rules for high-risk AI systems, such as those used in employment, healthcare, education, and critical infrastructure, will be enforced.
- By August 2027: Existing GPAI models must also comply.
The EU is also planning to simplify certain digital rules for smaller firms later in 2025, though this will not impact the rollout of the AI Act itself. The Commission emphasized that the law is intended to create safeguards around AI as it becomes an increasingly integral part of economies and societies.
With this firm stance, the EU signals its determination to balance innovation with public accountability, positioning itself at the forefront of global AI governance.