⚠️ EU AI Act enforcement live · Up to €35M / 7% of turnover fines

EU AI Act Risk Assessment

Answer 9 quick questions and get your AI system's risk classification under Article 5, Article 6, and Annex III. Plus a checklist of what you need to do next. Free, no signup.


Need a compliant AI disclosure label too?

Article 50 requires labeling for AI-generated content. Generate the HTML, JSON-LD, and image overlay snippets in 30 seconds.

Open AI Disclosure Generator →

How the EU AI Act risk classification works

The EU AI Act, fully applicable from August 2, 2026, sorts every AI system into one of four risk tiers. The tier determines what you have to do, who you have to notify, and how big the fine gets if you skip your obligations. This EU AI Act risk assessment tool walks you through the classification logic in the same order a Notified Body would.
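The tier-checking order described above can be sketched in a few lines. This is a simplified illustration of the sequence, not this tool's actual questionnaire logic; all field names are assumptions:

```python
# Hypothetical sketch of the classification order: prohibited first,
# then high-risk, then transparency-only, then minimal. Field names
# are illustrative assumptions, not terms from the Regulation.
def classify(system: dict) -> str:
    """Return the risk tier, checked in the order the Act implies."""
    if system.get("prohibited_practice"):          # Article 5: banned outright
        return "prohibited"
    in_scope = system.get("annex_iii_area") or system.get("annex_i_safety_component")
    if in_scope and not system.get("article_6_3_exemption"):
        return "high-risk"                         # Article 6 + Annex III
    if system.get("interacts_with_humans") or system.get("generates_synthetic_media"):
        return "limited-risk"                      # Article 50 transparency
    return "minimal-risk"                          # Recital 165

print(classify({"annex_iii_area": "employment"}))  # → high-risk
```

The ordering matters: a chatbot used for recruitment screening is high-risk, not merely limited-risk, because the Annex III check runs before the transparency check.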

Prohibited AI practices (Article 5)

Eight specific practices are flat-out banned in the EU. You cannot place these systems on the market, put them into service, or use them; there is no compliance path, only the off switch. The list includes social scoring of natural persons (by public and private actors alike), untargeted scraping of facial images from the internet or CCTV footage to build face databases, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), emotion inference in workplaces and schools, predictive policing based solely on profiling, and any AI that exploits vulnerabilities due to age, disability, or social or economic situation.

High-risk AI systems (Article 6 + Annex III)

High-risk is the heaviest compliance tier short of an outright ban. Article 6 routes a system here through two doors. Door one: the AI is a safety component of a product covered by EU harmonization law (Annex I) — think medical devices, vehicles, toys, lifts, machinery. Door two: the AI is used in one of the eight Annex III areas — biometric identification, critical infrastructure, education, employment, access to essential services (credit, insurance, public benefits), law enforcement, migration and border control, or administration of justice and democratic processes.

Article 6(3) carves out an exemption: even if your system operates in an Annex III area, it may avoid high-risk classification if it performs only a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human review, or performs a preparatory task. Systems that profile natural persons never qualify for this exemption. You still have to document your assessment and register the system in the EU database.
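The Article 6(3) test reduces to a simple rule: any one of the four conditions rebuts the high-risk presumption, unless the system profiles natural persons. A minimal sketch, with assumed field names:

```python
# Illustrative check of the four Article 6(3) carve-out conditions.
# The keys are informal labels for this sketch, not legal terms of art.
EXEMPT_CONDITIONS = (
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_replacing_review",
    "preparatory_task_only",
)

def article_6_3_exempt(answers: dict) -> bool:
    """One condition is enough, but profiling of natural persons
    always keeps the system high-risk under Article 6(3)."""
    if answers.get("profiles_natural_persons"):
        return False
    return any(answers.get(c) for c in EXEMPT_CONDITIONS)
```

Note the asymmetry: claiming the exemption still requires documentation and registration, so the cheap branch in code is not the cheap branch in practice.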

Limited-risk AI (Article 50)

Limited-risk systems trigger transparency duties only. If your system interacts with humans (chatbots, voice agents), generates synthetic media (text, image, audio, video), or recognizes emotions or biometric categories, users must be told they are dealing with AI or AI-generated output. Deepfakes get a specific labeling rule. Failure to disclose falls under the €15M / 3% of global turnover penalty cap — smaller than the prohibited-practice tier, but far from trivial.
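What a machine-readable disclosure can look like: the sketch below emits a JSON-LD fragment using schema.org vocabulary. The choice of `CreativeWork` and `creativeWorkStatus` is an assumption for illustration; the Act mandates disclosure, not any particular schema:

```python
import json

# Hypothetical Article 50 disclosure snippet generator. The schema.org
# property choices are assumptions of this sketch, not an official format.
def disclosure_jsonld(tool_name: str) -> str:
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "creativeWorkStatus": "AI-generated",
        "description": f"This content was generated with {tool_name}.",
    }, indent=2)

print(disclosure_jsonld("ExampleModel"))
```

Embedding the output in a `<script type="application/ld+json">` tag makes the disclosure readable by crawlers as well as humans.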

Minimal-risk AI (Recital 165)

Everything else — spam filters, AI-enabled video games, inventory optimization, recommendation engines that don't fall under the DSA — is minimal-risk. No mandatory obligations, just a recommended voluntary code of conduct. Most consumer-facing SaaS AI features land here.

General-purpose AI models (GPAI, Articles 51–55)

If you're a model provider rather than a deployer of an application, you may also fall under the GPAI rules. Standard GPAI obligations include technical documentation, copyright policy, and a public training data summary. GPAI models with systemic risk — currently any model trained with more than 10^25 FLOPs of compute, which captures GPT-4-class models and above — get a heavier set: model evaluation, adversarial testing, cybersecurity safeguards, and serious incident reporting to the AI Office.
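The systemic-risk presumption is a single numeric threshold on cumulative training compute, which makes it easy to express directly:

```python
# The Act's presumption threshold for GPAI systemic risk:
# cumulative training compute greater than 10^25 floating point operations.
SYSTEMIC_RISK_FLOP = 10**25

def presumed_systemic_risk(training_flop: float) -> bool:
    """True if the model is presumed to carry systemic risk."""
    return training_flop > SYSTEMIC_RISK_FLOP

print(presumed_systemic_risk(2e25))  # → True
print(presumed_systemic_risk(3e24))  # → False
```

The presumption is rebuttable, and the Commission can also designate models below the threshold, so crossing (or staying under) 10^25 FLOPs is a starting point, not the whole analysis.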

Penalties

The fine ladder runs €7.5M / 1.5% of turnover for supplying incorrect information to authorities, €15M / 3% for non-compliance with high-risk system obligations, and €35M / 7% for prohibited practices — whichever amount is higher in each tier. For SMEs and startups, Article 99(6) flips the rule: each cap applies as whichever of the two amounts is lower. The Act applies extraterritorially: a US or UK company with EU users is on the hook the same as an EU one.
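The higher-of / lower-of mechanics are easy to get backwards, so here is the arithmetic spelled out (tier labels are informal shorthand for this sketch):

```python
# Fine ceilings per tier: (fixed amount in euros, share of worldwide turnover).
TIERS = {
    "incorrect_information": (7_500_000, 0.015),
    "high_risk_non_compliance": (15_000_000, 0.03),
    "prohibited_practice": (35_000_000, 0.07),
}

def max_fine(violation: str, annual_turnover: float, sme: bool = False) -> float:
    """Ceiling on the fine: higher of the two amounts normally,
    lower of the two for SMEs and startups (Article 99(6))."""
    fixed, pct = TIERS[violation]
    pick = min if sme else max
    return pick(fixed, pct * annual_turnover)

# A €1B-turnover company running a prohibited practice:
print(max_fine("prohibited_practice", 1_000_000_000))  # → 70000000.0
```

At €1B turnover the percentage dominates (7% is €70M versus the €35M fixed cap); for an SME the same violation caps at the €35M fixed amount instead.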


FAQ

Is this EU AI Act risk assessment legally binding? No. This is an informational classifier built from the public text of Regulation (EU) 2024/1689. A binding classification comes from a formal conformity assessment: via a Notified Body for certain high-risk categories, or your own documented self-assessment for the rest. Use this tool to find out where you stand before you spend lawyer hours.

I'm a US-only company. Does this matter? Yes if any output is used in the EU. Article 2 makes the regulation extraterritorial — same logic as GDPR. The only safe assumption for any web product is that you have EU users.

What about open-source AI? Open-source GPAI models get carve-outs from some technical documentation duties, but providers of open-source models with systemic risk still face the full obligation set. Open-source application-level systems still go through the Article 6 / Annex III test like any other.

When does this all kick in? Prohibitions: February 2, 2025. GPAI rules: August 2, 2025. High-risk obligations and most of the rest: August 2, 2026. High-risk systems already covered by EU product law (Annex I): August 2, 2027.

How do I document the result of this classifier? Click "Copy report" or "Print / Save PDF" at the end of the assessment. Keep it with your conformity assessment file. It does not replace legal review but it gives counsel a starting structure.