Be EU AI Act Ready
Our platform automates evidence collection and audit tracking, making European regulatory compliance effortless.


What is the EU AI Act?
The EU AI Act is a regulatory framework governing the development, commercialisation, and use of artificial intelligence systems in the European Union. Its primary goal is to ensure that AI operates safely and ethically, balancing the protection of fundamental rights with the promotion of innovation.
The Act classifies AI systems by risk level and assigns obligations accordingly. For example, high-risk applications, such as medical diagnostics or workplace performance monitoring, require providers to establish a risk management system. Enforcement is handled by national regulators, so supervisory practice may vary across member states.

When Does the EU AI Act Apply?
The AI Act applies to AI systems with a connection to the EU, whether through development, use, or market presence.
The presence of an AI system
The Act defines an “AI System” as a machine-based system that can analyze inputs and generate outputs like predictions or decisions, with some adaptability or learning capability. Examples include chatbots, autonomous robots, and predictive models in sectors like finance and healthcare.
Geographic connection with the EU
The AI Act is an EU regulation primarily tied to the region, but it also has extraterritorial reach, similar to the GDPR. This means it can apply to entities and activities outside the EU if they have a significant connection to the region. It applies to AI systems that:
- Are developed or used by entities within the EU.
- Are placed on or made available in the EU market.
- Produce an output that is used within the EU.
- Affect people within the EU.

Who is Subject to the Requirements of the EU AI Act?
The obligations under the EU AI Act depend on the role each party plays in the AI lifecycle.
Here are the types of stakeholders recognized in the EU AI Act:
Providers
This includes developers, companies, and organisations that create, modify, or offer AI systems to the EU market.
Deployers
Organisations or individuals using AI systems in the EU (e.g., businesses integrating AI into operations or services).
Importers
Companies that import AI systems into the EU market on behalf of non-EU providers.
Authorized Representatives
Non-EU providers of AI systems must appoint authorised representatives within the EU to oversee compliance with the Act’s obligations.
General Users
While individuals using AI in a personal, non-professional capacity aren’t directly regulated, the Act indirectly impacts them by setting standards for safety, transparency, and rights protections on AI products they may use.
Discover How Our Platform Simplifies AI Governance

What Are the Four Levels of Risk in the EU AI Act?
The EU AI Act categorises AI systems into four risk levels, focusing on those that may pose significant risks to health, safety, and fundamental rights. Each level carries specific regulatory requirements.
1. Unacceptable Risk
These systems are completely banned due to the high risks they present to public safety and citizens’ rights. They include applications such as subliminal manipulation intended to alter behavior, social scoring by public authorities, and remote biometric identification in real-time in public spaces, except in specific security situations.
2. High-Risk AI Systems
These systems may be components within safety-regulated products under EU harmonization legislation, meaning they are already subject to stringent EU safety and quality standards. Examples include systems embedded in medical devices and automobiles.
They can also be critical products in themselves. For example, an AI system used to screen job candidates, determine credit scores, or even manage traffic in cities is considered a “critical product in itself.”
High-risk applications include areas such as critical infrastructure (water, gas, electricity), education, employment and worker management, judicial administration, access to public services, and credit scoring assessments.
3. Limited-Risk AI Systems
These systems require transparency measures, such as notifying users when interacting with AI, like chatbots. Providers and deployers of limited-risk AI must ensure users are informed they are interacting with AI rather than a human.
4. Minimal/No-Risk AI Systems
All other AI systems that do not fall into the above categories are considered safe in their current design and intended use. They are not subject to specific requirements under the AI Act, although providers may choose to adopt voluntary codes of conduct to enhance safety and transparency. Examples include AI-driven video games and spam filters. However, as AI technology, particularly generative AI, evolves, these minimal-risk applications may be reviewed and could require additional transparency measures in the future.
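In practice, this tiering works like a lookup from use case to headline obligation. The sketch below is our own illustration, not a legal classification tool — the tier names and example mappings are drawn from the examples given in this section, and real classification requires legal analysis of the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """Headline consequence of each tier, per the descriptions above."""
    UNACCEPTABLE = "banned outright"
    HIGH = "risk management system and strict compliance obligations"
    LIMITED = "transparency duties (inform users they are interacting with AI)"
    MINIMAL = "no specific obligations; voluntary codes of conduct"

# Illustrative mapping of the examples named in this section to their tiers.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "AI screening of job candidates": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```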
What Are the Penalties for Non-Compliance?
The EU AI Act outlines a penalty structure to ensure compliance, where the severity of the penalty depends on the type of infringement:
Prohibited AI Practices
Non-compliance with banned practices.
Up to €35 million or 7% of global annual turnover, whichever is higher.
High-Risk AI Obligations
Violations of provisions related to operators or notified bodies.
Up to €15 million or 3% of global annual turnover, whichever is higher.
Fines for Incorrect Information
Incorrect, incomplete, or misleading information to national authorities.
Up to €7.5 million or 1% of global annual turnover, whichever is higher.
SME Penalty Reductions
For SMEs, including start-ups, each fine is capped at the lower of the applicable fixed amount and percentage of turnover.
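The cap arithmetic above can be sketched in a few lines. This is a rough illustration, not legal advice: the `penalty_cap` helper and its tier table are our own, using the figures listed in this section, with the higher of the two figures applying to most organisations and the lower to SMEs and start-ups:

```python
def penalty_cap(infringement: str, global_turnover_eur: float,
                is_sme: bool = False) -> float:
    """Illustrative maximum fine for a given infringement type."""
    # (fixed amount in EUR, share of global annual turnover), per the tiers above
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[infringement]
    turnover_based = pct * global_turnover_eur
    # SMEs are capped at the lower figure; others at the higher.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with €1bn global turnover facing a prohibited-practice fine:
print(penalty_cap("prohibited_practice", 1_000_000_000))  # 70000000.0 (7% exceeds €35m)
```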
How Can Traze Ensure Your Compliance with the EU AI Act?
Conducting AI risk assessments
Identify and address risks to ensure fairness, transparency, explainability and accountability.
Performing impact assessments
Evaluate risk impacts and apply bias mitigation to protect your AI systems.
Managing documentation
Automate and organize compliance records, keeping you audit-ready.
Ensuring transparency
Provide clear, explainable AI decision-making for regulators and stakeholders.
Maintaining traceability
Continuously track data and model changes to stay compliant.
Frequently Asked Questions
How can Traze help automate compliance with the EU AI Act?
Traze automates compliance processes by streamlining risk assessments, managing AI system documentation, ensuring transparency, and providing real-time compliance tracking to meet the EU AI Act’s requirements.
What steps should organizations take to prepare for the implementation of the EU AI Act?
Organizations should conduct AI system risk assessments, align internal processes with compliance standards, implement transparency and accountability measures, and ensure ongoing monitoring for adherence to the Act.
Are AI systems that fall outside the EU required to comply with the EU AI Act?
Yes, AI systems outside the EU must comply if they are marketed, used, or affect individuals within the EU, similar to how the GDPR applies to international entities with EU ties.
What is the process for certifying AI systems under the EU AI Act?
The certification process involves assessing AI systems for risk categorization, ensuring they meet regulatory standards, documenting compliance measures, and potentially undergoing third-party audits to verify adherence to the EU AI Act’s requirements.