EU presents draft regulatory guidance for AI models


The draft “Code of Good Practice for General-Purpose AI” marks an essential step in the European Union’s (EU) efforts to regulate artificial intelligence models. This ambitious initiative aims to define comprehensive regulatory guidance that meets the challenges of AI while promoting innovation.
Multi-sector collaboration
Developed through collaborative work involving various sectors — industry, academia and civil society — this project demonstrates the importance of a collective approach. Four specialized working groups have been set up to address the different facets of AI governance:
- Working Group 1: Transparency and copyright rules
- Working Group 2: Identification and assessment of systemic risks
- Working Group 3: Technical risk mitigation
- Working Group 4: Governance and risk management
Strategic objectives of the project
The project’s ambitions go beyond simple recommendations. It aims to:
- Clarify compliance methods for providers of general-purpose AI models.
- Facilitate understanding throughout the AI value chain.
- Ensure that copyright is respected when training AI models.
- Continually assess and mitigate systemic risks associated with AI models.
Recognize and mitigate systemic risks
One of the project’s major innovations is its taxonomy of systemic risks, which helps classify potential AI-related threats. These include:
- Cybercrimes
- Biological risks
- Loss of control over autonomous AI models
- Large-scale disinformation
The code is being drafted in a constantly evolving technological environment, and it acknowledges that this taxonomy will require regular updates to remain relevant.
Security frameworks and collaboration
As AI models posing systemic risk proliferate, the project calls for robust Safety and Security Frameworks (SSFs). It provides a hierarchy of metrics, supported by key performance indicators (KPIs), for appropriate risk management throughout a model’s lifecycle.
Providers should have dedicated processes in place to identify and report serious incidents related to their AI models. Additionally, the project encourages collaboration with independent experts for rigorous risk assessments.
A proactive position for the future
The EU AI Act, which entered into force on August 1, 2024, sets a deadline of May 1, 2025 for the finalization of this code. This timeline reflects the EU’s proactive commitment to AI regulation that ensures security, transparency and accountability.
As the project evolves, the working groups are soliciting input from stakeholders. This collaboration is essential to shaping a regulatory framework that balances innovation and societal protection.
Conclusion: a global reference in the making
Although the EU’s Code of Practice for general-purpose AI models is still under development, it is already positioning itself as a potential benchmark for responsible AI development on a global scale. By addressing crucial issues such as transparency and risk management, this code aims to create a regulated environment that promotes innovation while protecting the fundamental rights of consumers.
