45 European Companies Call for a "Pause" in the Application of the AI Act

Forty-five major European technology groups and companies have requested a two-year “pause” of the AI Act in an open letter to the European Commission dated July 3. The signatories ask for a suspension – or “clock-stop” – of “the main obligations of the European regulation on artificial intelligence (AI)” and for its application to be postponed until technical standards and compliance guides are in place.

The collective, named EU AI Champions Initiative, includes major French companies such as Axa, Airbus, Total, BNP Paribas, Carrefour, and Publicis, as well as German companies like Lufthansa and Siemens, the Dutch company ASML, and AI and digital players such as Mistral AI, Dassault Systèmes, Pigment, Owkin, and the Association of German Start-ups.

Concerns Over Regulation

“Europe has long distinguished itself by its ability to strike a balance between regulation and innovation (…) Unfortunately, this balance is undermined by vague and increasingly complex European regulations, which sometimes overlap,” the authors of the letter write. “This endangers the EU’s ambitions in AI, weakening its ability to foster European champions, as well as the possibility for all its sectors to deploy AI on the scale necessary to face international competition.”

Key Deadlines and Demands

The collective is requesting the postponement of two major deadlines in the text. Obligations for manufacturers of “general purpose” AI models, such as large text or image generation models that serve as the basis for business uses or assistants like ChatGPT (OpenAI), Gemini (Google), or Le Chat (Mistral), are scheduled to come into force on August 2, 2025. These companies would be required to conduct risk assessments associated with their software and provide technical documentation and a summary of the data used to train them.

Furthermore, obligations to assess the risks (errors, biases, etc.) of “high-risk” AI systems are to be applied from the summer of 2026. These systems are used in critical infrastructure such as electricity, water, and roads, as well as in education and training, employment and business (algorithmic management), banking and insurance (loan or contract granting), justice, police, and immigration management.



By Thibault Helle