The AI Act entered into force on 1 August 2024, 20 days after its publication in the Official Journal of the European Union, marking the beginning of the preparation period for organisations subject to its requirements.
How long organisations have to comply with specific provisions depends on their role under the AI Act, as well as the risk level and functionality of their AI systems. For instance, while the general compliance date is set for 2 August 2026, the prohibitions on certain AI practices and the obligations for providers of general-purpose AI models apply earlier, whereas the classification rules for high-risk AI systems under Article 6(1) apply later.
2 February 2025 Prohibitions on AI practices posing an unacceptable risk start to apply
The regulation strictly bans specific AI applications that may cause significant harm to individuals or groups. The prohibited practices include:
- Psychological manipulation: AI systems that use subliminal techniques to distort individuals’ behaviour without their awareness.
- Exploitation of vulnerabilities: AI systems that exploit a person’s or group’s vulnerabilities based on age, disability, or specific economic/social situations.
- Social scoring: Systems that evaluate or classify individuals/groups based on social behaviour, leading to discrimination or unjustified unfavourable treatment.
- Crime prediction: AI systems that assess the likelihood of someone committing a crime based solely on profiling or personal characteristics, except to support human-led evaluations based on objective facts.
- Mass facial recognition databases: Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
- Emotion inference: Use of AI to infer emotions in workplaces or educational institutions, except for medical or safety purposes.
- Sensitive biometric categorisation: Systems using biometric data to deduce race, political affiliation, religion, sexual orientation, etc.
- Real-time remote biometric identification: Use of such systems in public spaces for law enforcement purposes, except under strictly regulated exceptional circumstances.
2 August 2025 Obligations for Providers of General-Purpose AI Models start to apply
From this date, providers of general-purpose AI models must:
- Maintain technical documentation: Keep updated records about the model’s training, testing, and evaluation, which can be shared with regulatory authorities upon request.
- Share information with system providers: Provide clear documentation to those integrating the AI model, enabling them to understand its capabilities and limitations while complying with the regulation, without compromising intellectual property or trade secrets.
- Comply with copyright law: Put in place a policy to comply with EU copyright law, including identifying and respecting rights reservations, using state-of-the-art technologies where necessary.
- Publish a summary of training data: Make a detailed summary of the training content publicly available, using a standard template provided by the AI Office.
2 August 2026 The rest of the AI Act begins to take effect, except Article 6(1)
Member States must notify the Commission of their rules on penalties before the Regulation’s application date. Engaging in the explicitly prohibited AI practices is subject to administrative fines of up to €35 million or 7% of the company’s annual global turnover, whichever is higher.
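As a rough illustration of how that cap works, the following sketch (in Python, with an illustrative turnover figure not taken from the Act) computes the maximum possible fine as the higher of the two thresholds.

```python
def max_fine_for_prohibited_practices(annual_global_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for prohibited AI practices:
    the higher of EUR 35 million or 7% of annual global turnover."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Illustrative example: a company with EUR 2 billion annual global turnover
# faces a cap of EUR 140 million, since 7% of turnover exceeds EUR 35 million.
print(max_fine_for_prohibited_practices(2_000_000_000))  # 140000000.0
```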
2 August 2027 Article 6(1) and the corresponding obligations in the Regulation start to apply.
Article 6 establishes the classification rules for high-risk AI systems. Its first paragraph specifies that an AI system is classified as high-risk if it simultaneously meets both of the following conditions:
- (a) It functions as a safety component of a product or is itself a product regulated by EU harmonisation legislation.
- (b) The product or system requires a third-party conformity assessment before being placed on the market.
For example, consider an AI system that controls a car’s braking mechanism. Under condition (a), the AI system qualifies as a safety component essential to the car’s functionality. Under condition (b), EU regulations mandate that braking systems must undergo inspection by an independent body before the vehicle can be sold. This scenario illustrates how the AI system meets both criteria to be classified as high-risk.
AI systems listed in Annex III, such as those intended for emotion recognition, are automatically deemed high-risk unless they meet specific exceptions. These exceptions apply when the system poses no significant risk to health, safety, or fundamental rights and is designed to perform a narrow procedural task, improve the result of a previous human activity, or detect patterns without replacing or influencing human judgment. However, systems involving the profiling of individuals are always considered high-risk.
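To make this decision logic easier to follow, here is a minimal sketch in Python of the classification rules described above; the class, field names, and boolean inputs are illustrative assumptions, not terminology from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Illustrative fields, not official terminology from the Regulation.
    is_safety_component_of_regulated_product: bool   # condition (a)
    requires_third_party_conformity_assessment: bool  # condition (b)
    listed_in_annex_iii: bool
    performs_profiling_of_individuals: bool
    qualifies_for_annex_iii_exception: bool           # e.g. narrow procedural task

def is_high_risk(system: AISystemProfile) -> bool:
    """Sketch of the high-risk classification logic described above."""
    # Article 6(1): both conditions (a) and (b) must hold simultaneously.
    if (system.is_safety_component_of_regulated_product
            and system.requires_third_party_conformity_assessment):
        return True
    # Annex III systems are high-risk by default ...
    if system.listed_in_annex_iii:
        # ... but profiling of individuals is always high-risk,
        # and otherwise an exception may remove the classification.
        if system.performs_profiling_of_individuals:
            return True
        return not system.qualifies_for_annex_iii_exception
    return False

# Example from the text: an AI system controlling a car's braking mechanism.
braking_ai = AISystemProfile(
    is_safety_component_of_regulated_product=True,
    requires_third_party_conformity_assessment=True,
    listed_in_annex_iii=False,
    performs_profiling_of_individuals=False,
    qualifies_for_annex_iii_exception=False,
)
print(is_high_risk(braking_ai))  # True
```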