European Commission Releases First Draft of General-Purpose AI Code of Practice
The EU AI Act entered into force on 1 August 2024, and the obligations under it will come into effect on a staggered basis over the coming years (for more information, see this Latham article). Under Article 56, the European AI Office (AI Office) — which was established within the European Commission (EC) to oversee the implementation of the EU AI Act — is required to encourage and facilitate the drawing up of codes of practice at EU level. Such codes need to be ready by 2 May 2025, ahead of the obligations for providers of general-purpose AI models becoming applicable on 2 August 2025.
In advance of these deadlines, the EC on 14 November 2024 released the first draft of the General-Purpose AI Code of Practice (Code), which is intended to guide the future development and deployment of general-purpose AI (GPAI) models in the EU by providing clear objectives, measures, and KPIs in alignment with EU principles and values. As discussed below, the draft Code addresses matters such as transparency and copyright-related rules, as well as technical and governance-related risk mitigation for systemic risk.
The first draft of the Code, which was prepared by the Chairs and Vice-Chairs of four working groups, will be developed and refined pursuant to an iterative drafting process. Plenary sessions were scheduled during the week commencing 18 November 2024, during which stakeholders, EU Member State representatives, and observers had the opportunity to provide feedback on the Code.
Key Features of the Draft Code
1. Transparency
Transparency is a cornerstone of the Code, reflecting the EU’s commitment to fostering trust in AI technologies. The Code outlines obligations for providers of GPAI models to produce and maintain certain specified information and documentation about their models: (1) for provision, upon request, to the AI Office and national competent authorities, and/or (2) to be made available to providers of AI systems who intend to integrate the model into their AI systems. This includes information relating to (1) the model, (2) the intended tasks and type and nature of AI systems into which the model can be integrated, (3) acceptable use policies, (4) the core elements of the model licence, (5) design specification and training process, and (6) testing and validation.
While the primary focus is on documentation for regulatory and downstream purposes, the Code also encourages providers to consider disclosing information to the public where possible.
2. Copyright Compliance
The Code places significant emphasis on compliance with EU law on copyright and related rights. Key obligations include:
- Copyright Policy: Providers of GPAI models are required to adopt a policy to comply with EU law on copyright and related rights. As part of this obligation, providers must implement a comprehensive internal copyright policy that covers the entire life cycle of their AI models. In addition, they are required to undertake reasonable copyright due diligence before entering into a contract with a third party about the use of data sets for the development of a GPAI model, including with respect to third-party compliance with rights reservations under Article 4(3) of the Directive on Copyright in the Digital Single Market (2019/790) through state-of-the-art technologies. Further, GPAI model providers (other than SMEs) must establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including by avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or continued validity of the contract for provision of the model conditional upon that entity's commitment to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
- Text and Data Mining (TDM): When engaging in TDM, providers of GPAI models must commit to ensuring that they have lawful access to copyright-protected content. They are also required to identify and comply with rights reservations, using state-of-the-art technologies to do so. Among other points, providers must:
- only employ crawlers that respect the Robot Exclusion Protocol;
- ensure that crawler exclusions do not negatively impact content findability (to the extent the provider also provides an online search engine);
- make best efforts to identify and comply with other machine-readable rights reservations in the case of content made publicly available online;
- commit to collaborative development of rights reservation standards; and
- take reasonable measures to exclude pirated sources from their crawling activities.
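The first of these commitments, respecting the Robot Exclusion Protocol (robots.txt), can be illustrated with a minimal sketch using Python's standard library. The user agent name, the robots.txt content, and the URLs below are purely hypothetical; in practice a crawler would fetch the live robots.txt from each site it visits.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, as a site operator might publish it
# to reserve rights over part of its content.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks each URL against the site's rules
# before fetching it.
print(parser.can_fetch("MyGPAIBot", "https://example.com/private/data.html"))  # False
print(parser.can_fetch("MyGPAIBot", "https://example.com/public/page.html"))   # True
```

This only covers the basic protocol check; identifying other machine-readable rights reservations (as the Code also requires) would need additional, format-specific handling.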
- Transparency in Copyright Compliance: Providers of GPAI models must be transparent about the measures they adopt to comply with copyright obligations. This obligation includes: (1) publishing information about their rights reservation compliance, (2) providing a single point of contact for rightsholders to lodge complaints, and (3) maintaining and providing the AI Office upon its request with information about data sources used for training, testing, and validation and about authorisations to access and use protected content for the development of a GPAI model.
3. Taxonomy of Systemic Risks
The Code sets out a taxonomy of systemic risks that providers of GPAI models must draw from as a basis for their systemic risk assessment and mitigation. The taxonomy covers the following types of systemic risk:
- cyber offences;
- chemical, biological, radiological, and nuclear risks;
- loss of control of GPAI models;
- automated use of models for AI research and development;
- the facilitation of large-scale persuasion and manipulation; and
- large-scale discrimination against individuals, communities, or societies.
Providers are encouraged to assess these risks continuously and implement appropriate mitigation strategies.
4. Models with Systemic Risk
The Code establishes a comprehensive framework for providers of GPAI models with systemic risks to manage such risks. Among other points, it emphasises the implementation of safety and security frameworks that are proportional to the identified risks, supported by detailed documentation to ensure transparency and accountability. It further outlines governance structures that include clear decision-making processes and independent expert assessments to effectively oversee the development and deployment of AI models. Additionally, it addresses operational aspects such as incident reporting, whistleblowing protections, and public transparency, all aimed at fostering trust and ensuring compliance with regulatory standards.
Next Steps
The release of the Code marks an important first step in the EC’s efforts to facilitate the implementation of the EU AI Act. As the iterative drafting process continues, stakeholders wishing to shape the final version of the Code should engage actively and provide constructive feedback.