How to Comply With the EU’s AI Act: EU issues code of practice to help AI developers follow AI Act regulations
The European Union published guidelines to help builders of AI models to comply with the AI Act, which was enacted last year.
What’s new: The General Purpose AI Code of Practice outlines voluntary procedures to comply with provisions of the AI Act that govern general-purpose models. Companies that follow the guidelines will benefit from simplified compliance, greater legal certainty, and potentially lower administrative costs, according to EU officials. Those that don’t must comply with the law nonetheless, which may prove more costly. While Microsoft, Mistral, and OpenAI said they would follow the guidelines, Meta declined, saying that Europe is “heading down the wrong path on AI.”
How it works: The code focuses on “general-purpose AI models” that are capable of performing a wide range of tasks.
- Stricter rules apply to models deemed to pose “systemic risk,” that is, “a risk that is specific to the high-impact capabilities” of a model owing to its reach or clear potential to produce negative effects. Developers of such models must assess and mitigate those risks continuously, including identifying and analyzing systemic risks and evaluating whether they are acceptable. They must also protect their models against unauthorized access and insider threats.
- Developers who build models that pose systemic risk must maintain a variety of documentation. They must disclose training data and sources, how they obtained rights to the data, the resulting model’s properties, their testing methods, and the computational resources and energy consumed. They must file updates when they make significant changes or upon request from parties that use the model. They must also report mishaps and model misbehavior: within 2 days of becoming aware of an event that caused a serious and irreversible disruption of critical infrastructure, within 5 days of a cybersecurity breach, and within 10 days of a model’s involvement in a human death. (One possible way to organize such a documentation record is sketched after this list.)
- The code doesn’t mention penalties for noncompliance or violations. Further, the code doesn’t discuss the cost of compliance except to say that assessing and mitigating systemic risks “merits significant investment of time and resources.” In 2024, Germany’s Federal Statistical Office estimated that the cost of compliance for a high-risk system would come to roughly $600,000 up front and another $100,000 annually.
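For illustration only, the minimal Python sketch below shows one way a developer might structure a documentation record covering the items listed above, along with the reporting deadlines. The class names, fields, and values are our own assumptions, not an official EU template or schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical sketch of a record a developer might keep to track the
# documentation the code of practice calls for. Field names are illustrative,
# not drawn from any official EU form.

@dataclass
class TrainingDataSource:
    name: str          # e.g., a public dataset or licensed corpus
    rights_basis: str  # how the developer obtained rights to use the data

@dataclass
class ModelDocumentation:
    model_name: str
    release_date: date
    data_sources: List[TrainingDataSource] = field(default_factory=list)
    model_properties: str = ""   # architecture, size, supported modalities
    testing_methods: str = ""    # evaluations and red-teaming performed
    compute_flops: float = 0.0   # training compute consumed
    energy_kwh: float = 0.0      # energy consumed during training

# Reporting deadlines described in the code, in days from becoming aware of the event.
INCIDENT_REPORT_DEADLINES_DAYS = {
    "critical_infrastructure_disruption": 2,
    "cybersecurity_breach": 5,
    "human_death": 10,
}

# Example usage with hypothetical values:
doc = ModelDocumentation(
    model_name="example-gpai-model",
    release_date=date(2025, 8, 1),
    data_sources=[TrainingDataSource("public web crawl", "publicly available / licensed")],
    model_properties="decoder-only transformer, text input and output",
    testing_methods="capability benchmarks, adversarial red-teaming",
    compute_flops=1e25,
    energy_kwh=2.5e6,
)
```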
Behind the news: The AI Act is the product of years of debate and lobbying among scores of stakeholders. EU technology official Henna Virkkunen called the AI Act “an important step” in making cutting-edge models “not only innovative but also safe and transparent.” However, companies and governments on both sides of the Atlantic have asserted that the law goes too far. In May, the EU moved to relax some provisions, including language that would allow users to sue AI companies for damages caused by their systems. Earlier this month, 44 chief executives at top European companies asked European Commission President Ursula von der Leyen to postpone the AI Act’s rules that govern general-purpose models for two years.
Why it matters: The AI Act is the most comprehensive and far-reaching set of AI regulations enacted to date, yet it remains highly contentious and in flux. The commitments by Microsoft, Mistral, and OpenAI to follow the code mark a significant step in the act’s circuitous path toward implementation, though they also signal an increase in bureaucracy and the potential for regulatory capture. Their endorsement could persuade other big companies to sign on and undercut further efforts to loosen the act’s requirements.
We’re thinking: From a regulatory point of view, the notion of systemic risk is misguided. Limiting the inherent risk of AI models is as helpful as limiting the inherent risk of electric motors, which would result only in relatively useless motors. We hope for further revisions in the AI Act that relieve burdens on builders of foundation models, especially open source projects, and address practical risks of specific applications rather than theoretical risks of their underlying technology.