Upcoming EU AI Act Obligations: Mandatory Training and Prohibited Practices

January 31, 2025
From 2 February 2025, AI system providers and deployers will need to ensure AI literacy among their workforces, as well as eliminate prohibited AI practices.

The EU AI Act came into effect on 1 August 2024, and the first obligations under the Act will become applicable from 2 February 2025. Providers and deployers of AI systems must ensure that their employees and contractors using AI have an adequate degree of AI literacy, for example by implementing training. Certain AI practices will also be prohibited from that date onwards, and must be avoided or removed from the EU market. In this article, we provide an overview of the impact these new AI obligations will have, and suggest some practical solutions for companies providing or deploying AI.

AI LITERACY

What AI training obligations does my company have to comply with?

Article 4 of the AI Act imposes a high-level requirement for AI literacy: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.

Companies therefore have significant flexibility in devising the content and format of AI training for their staff in order to meet this obligation by 2 February. Countering allegations of non-compliance will be challenging if a company has implemented no AI training or learning resources at all; by contrast, defending against regulators or civil claimants who argue that the training provided was inadequate will be much easier. In this context, base-level training is certainly better than doing nothing.

What are the consequences of non-compliance with the obligations around mandatory AI literacy?

No direct fines or other sanctions will apply for violating the AI literacy requirements under Article 4 of the AI Act. However, from 2 August 2026 (when the sanctions provisions of the AI Act become applicable), providers and deployers of AI systems may face civil liability, for instance if harm to consumers, business partners, or other third parties is caused by staff using AI systems without adequate training. Furthermore, regulators will likely criticise obvious non-compliance with the AI literacy requirements in any later inquiries and investigations.

What can companies do now?

A useful first step is to analyse what trainings or other AI literacy resources the company has provided to its workforce in the past, and to document these measures in order to evidence compliance and defend against future enquiries from regulators or claims from third parties.

If an analysis is either not feasible or indicates gaps in AI training, companies may opt to quickly implement AI literacy measures to close existing gaps in an efficient and effective manner before 2 February 2025.

This AI Fundamentals by LathamTECH video explains the core components and functions of AI, and can be deployed as part of your AI literacy training. For further assistance with developing a more comprehensive AI literacy programme or otherwise considering your obligations with regards to the deployment of AI within your organisation, please contact one of our authors below, or the Latham lawyer with whom you normally consult.

Failing to ensure compliance with AI literacy obligations ahead of 2 February comes with particular risks in all EU Member State jurisdictions where director or managerial liability regimes apply.

Practical Solutions to Mitigate Risk

  • Assess AI training needs: Evaluate current training programmes to identify gaps in AI literacy.
  • Document existing measures: Keep records of all training initiatives to demonstrate compliance.
  • Choose a layered approach: Not all business areas or employees will need the same degree of AI literacy. Offer general basic training to all employees and roll out more sophisticated or role-specific trainings as needed, in a phased approach.
  • Consult with your legal advisors: We can help you to implement AI literacy workshops or online courses tailored to the needs of your various teams.

PROHIBITED AI PRACTICES

What AI practices does the AI Act prohibit?

Article 5 of the AI Act lists a number of prohibited AI practices and use cases. These prohibited practices are described in relatively general terms, which leaves room for interpretation and requires nuanced analysis to determine their application in practice. Prohibited practices include:

  • Manipulative AI systems: AI systems that use subliminal techniques beyond a person’s consciousness to significantly alter behaviour, potentially causing physical or psychological harm. Companies must avoid deploying AI that manipulates consumer behaviour in harmful ways.
  • Exploitative AI systems: AI systems that exploit vulnerabilities of specific groups, such as children or individuals with disabilities, to significantly alter behaviour in a harmful manner. This includes marketing practices that take advantage of such vulnerabilities.
  • Emotion analysis in the workplace: AI systems that perform emotion analysis or biometric categorisation of employees in the workplace, which can lead to privacy infringements and discriminatory practices. Companies should refrain from using AI to monitor or evaluate employees’ emotional states.
  • Social scoring for unfair commercial purposes: AI systems used for social scoring that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to unfavourable or unjustified treatment.

What are the consequences of non-compliance with the prohibitions of Article 5 of the AI Act?

Article 99 of the AI Act provides for severe sanctions for engaging in prohibited AI practices, including fines of up to the greater of €35 million or 7% of the previous financial year's total worldwide annual turnover per violation. As previously mentioned, the sanctions regime under the AI Act will apply from 2 August 2026, and EU law does not generally provide for retrospective fines. However, engaging in prohibited AI practices in violation of Article 5 from 2 February 2025 (when Article 5 becomes applicable) may lead to civil, administrative, or criminal law exposure under other EU or EU Member State laws, such as product liability or general tort law. EU data protection authorities may also argue that processing personal data in the course of a prohibited AI practice constitutes a sanctionable violation of the GDPR.
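To make the fine cap concrete, the "greater of €35 million or 7% of worldwide turnover" rule can be sketched as a simple calculation. The function name and the turnover figures below are purely illustrative, not drawn from the Act:

```python
def max_article_99_fine(global_turnover_eur: float) -> float:
    """Illustrative upper bound of a fine for a prohibited AI practice
    under Article 99: the greater of EUR 35 million or 7% of the
    previous financial year's total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Hypothetical company with EUR 2 billion in global turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(max_article_99_fine(2_000_000_000))  # 140000000.0

# Hypothetical smaller company with EUR 100 million in turnover:
# 7% would be only EUR 7 million, so the EUR 35 million floor applies.
print(max_article_99_fine(100_000_000))  # 35000000.0
```

As the second example shows, the fixed €35 million floor means the cap can far exceed 7% of turnover for smaller companies.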

What can companies do now?

Companies have various practical ways to mitigate the risk of engaging in AI practices prohibited by Article 5 of the AI Act. While the usual course of business poses low risk for many companies, potential issues are more likely to arise from the ambiguous descriptions of these prohibited practices in the AI Act. For instance, the definitions of AI-powered emotion inference in the workplace and of manipulative techniques in business processes are highly ambiguous in practice.

Hence, in the process of achieving robust compliance structures in accordance with the AI Act, companies should identify, assess, and document the AI systems they use in their business processes. In particular, they should analyse systems that may potentially fall within the categories of prohibited AI practices. If the scope of Article 5 seems unclear regarding a particular AI practice, companies should document their rationale for determining that the relevant AI use does not fall under the prohibition.

For efficiency, this identification and assessment step may also be used to identify potential high-risk AI systems and AI systems with transparency obligations under Article 50 of the AI Act.

Practical Solutions to Mitigate Risk

  • Identify and assess AI systems: Review AI systems to ensure they do not fall under prohibited categories.
  • Document compliance efforts: Document rationale for why certain AI uses do not violate the prohibitions under Article 5.
  • Other possible objectives: Analyse AI systems for potential high-risk activities and ensure transparency obligations are met.

Endnotes

    This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice. See our Attorney Advertising and Terms of Use.