Annex III High-Risk AI: Plain-English Checklist
Determine if your AI system falls under Annex III high-risk categories and see exactly what compliance steps are required under the EU AI Act.
What counts as high-risk (quick list of areas)
AI systems for recruitment, screening, ranking, interviewing, or evaluating candidates.
AI systems for creditworthiness assessment, insurance decisions, or financial risk evaluation.
AI systems for grading, admissions, or determining access to educational institutions.
AI systems for patient triage, diagnostics, or treatment decisions in healthcare.
AI systems for migration control, border management, or asylum assessments.
AI systems for crime prevention, investigation, detection, or prosecution.
AI systems for energy grid management, transportation networks, or utility operations.
AI systems for access to essential private and public services.
Note on Biometrics
Some biometric uses are prohibited outright under the AI Act rather than classified as high-risk: real-time remote biometric identification in publicly accessible spaces is banned, subject to narrow law-enforcement exceptions. Social scoring is also prohibited, although it is not a biometric practice.
Decision steps
Describe your AI's purpose and users
Clearly define what your AI system does and who it affects. Be specific about the use case and target users.
Map against the Annex III categories
Compare your AI system against the high-risk categories listed above. Check if it falls into any of these areas.
If matched → treat as high-risk; otherwise check limited/minimal
If your AI matches any Annex III category, treat it as high-risk by default; a narrow exemption (Article 6(3)) covers systems that only perform procedural or preparatory tasks without materially influencing decisions, and relying on it must be documented. If nothing matches, check whether your system falls under limited risk (transparency requirements) or minimal risk (no specific obligations). A minimal code sketch of this mapping logic follows the steps below.
Run the 60-second scan to confirm and get tasks
Use our free scanner to get a provisional risk classification and specific action items for your AI system.
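To make steps 2 and 3 concrete, here is a minimal Python sketch of the mapping logic. The keyword buckets and the provisional_risk helper are illustrative assumptions based on the areas listed above, not text from the AI Act, and a real classification still needs legal review.

```python
# Illustrative sketch only: keyword buckets loosely based on the Annex III
# areas listed above. Real classification requires legal review, not a lookup.
ANNEX_III_AREAS = {
    "employment": {"recruitment", "screening", "ranking", "interviewing"},
    "essential_services": {"creditworthiness", "insurance", "benefits"},
    "education": {"grading", "admissions"},
    "healthcare_triage": {"triage", "diagnostics", "treatment"},
    "migration": {"border", "asylum", "migration"},
    "law_enforcement": {"crime", "investigation", "prosecution"},
    "critical_infrastructure": {"energy", "transport", "utilities"},
}

def provisional_risk(use_case_tags: set[str]) -> str:
    """Return a *provisional* risk label for a set of use-case tags."""
    matched = [area for area, kws in ANNEX_III_AREAS.items() if kws & use_case_tags]
    if matched:
        return f"high-risk (Annex III areas: {', '.join(matched)})"
    # Not matched: still check limited-risk transparency duties before assuming minimal risk.
    return "check limited-risk / minimal-risk"

print(provisional_risk({"recruitment", "ranking"}))  # high-risk (employment area)
print(provisional_risk({"spam", "filtering"}))       # no match: check limited/minimal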
Checklist (if you're high-risk)
Risk Management System (RMS)
Identify and analyse hazards and harms, implement mitigation measures, and establish testing procedures, maintaining the process throughout the system's lifecycle.
Data governance
Ensure representative, relevant, and documented datasets with bias testing procedures.
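One concrete bias-testing step is comparing selection rates across demographic groups. The sketch below is a minimal illustration with made-up data; the column names, group labels, and any threshold you apply to the disparity ratio are assumptions for your own policy, not figures taken from the Act.

```python
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="selected"):
    """Selection rate (share of positive outcomes) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Toy data with hypothetical column names.
data = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]
rates = selection_rates(data)
print(rates, "disparity ratio:", round(disparity_ratio(rates), 2))
```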
Technical documentation
Create comprehensive documentation covering architecture, training data, evaluation methods, and controls.
Human oversight
Define who can intervene and how override mechanisms work in practice.
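In practice, intervention and override usually come down to routing rules in the decision path. The following sketch shows one assumed pattern: low-confidence outputs are held for human review rather than applied automatically, and a reviewer can always replace the model's outcome. The threshold and names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str            # what the model recommends
    confidence: float       # model's confidence in [0, 1]
    needs_human_review: bool

REVIEW_THRESHOLD = 0.85  # assumed policy value, set by the deployer

def route_decision(outcome: str, confidence: float) -> Decision:
    """Hold low-confidence outputs for a human reviewer instead of auto-applying them."""
    return Decision(outcome, confidence, needs_human_review=confidence < REVIEW_THRESHOLD)

def human_override(decision: Decision, reviewer_outcome: str) -> Decision:
    """A reviewer can always replace the model's outcome."""
    return Decision(reviewer_outcome, decision.confidence, needs_human_review=False)

d = route_decision("reject", 0.62)
print(d)                             # flagged for human review
print(human_override(d, "approve"))  # reviewer overrides the model
```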
Accuracy/robustness/security
Set accuracy targets, test robustness against errors and misuse, and secure the system against attacks such as data poisoning or adversarial inputs.
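Targets only bite if they are enforced by automated checks before release. A minimal sketch, assuming a model object with a predict method and a small held-out test set; the 0.90 target and the malformed-input probe are stand-ins for whatever your own documentation declares.

```python
def accuracy(predictions, labels):
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def check_release_gates(model, X_test, y_test, target=0.90):
    """Assumed release gate: declared accuracy target plus a basic robustness probe."""
    acc = accuracy([model.predict(x) for x in X_test], y_test)
    assert acc >= target, f"accuracy {acc:.2f} below declared target {target}"
    # Robustness probe: malformed input must be rejected in a controlled way.
    try:
        model.predict(None)
        raise AssertionError("malformed input was accepted instead of rejected")
    except ValueError:
        pass  # expected: controlled rejection of malformed input
    return acc

class DummyModel:  # stand-in for the real system under test
    def predict(self, x):
        if x is None:
            raise ValueError("malformed input")
        return int(x > 0)

print(check_release_gates(DummyModel(), X_test=[-1, 2, 3], y_test=[0, 1, 1]))
```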
Logging
Implement comprehensive logging systems to track AI system operations and decisions.
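A common way to make logs auditable is one structured, timestamped record per automated decision. The sketch below uses only the Python standard library; which fields you record (model version, hashed input, output, confidence) is an assumption to adapt to your system and your data-retention rules.

```python
import json, hashlib, logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_decisions")

def log_decision(model_version: str, input_payload: dict, output: str, confidence: float):
    """Write one structured, timestamped record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the record is traceable without storing personal data.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    log.info(json.dumps(record))

log_decision("credit-scorer-1.4.2", {"applicant_id": "12345"}, "approve", 0.91)
```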
Post-market monitoring
Monitor for incidents, collect feedback, and implement corrective actions.
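Part of post-market monitoring is noticing when behaviour in production drifts from what was validated. A minimal sketch, assuming you track the share of positive outcomes over a rolling window and compare it with the rate observed at validation time; the window size and tolerance are assumed policy values.

```python
from collections import deque

class OutcomeRateMonitor:
    """Alert when the recent positive-outcome rate drifts from the validation baseline."""

    def __init__(self, baseline_rate: float, window: int = 500, max_drift: float = 0.10):
        self.baseline_rate = baseline_rate
        self.max_drift = max_drift          # assumed tolerance, set per system
        self.recent = deque(maxlen=window)

    def record(self, positive: bool) -> bool:
        """Record one outcome; return True if the drift alert should fire."""
        self.recent.append(int(positive))
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline_rate) > self.max_drift

monitor = OutcomeRateMonitor(baseline_rate=0.30, window=10)
alerts = [monitor.record(positive=True) for _ in range(10)]
print(alerts[-1])  # True: recent rate 1.0 is far above the 0.30 baseline
```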
Incident reporting
Establish procedures for reporting serious incidents to authorities.
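Reporting is faster if the facts authorities will ask for are captured in one place as soon as an incident is identified. The structure below is an assumed internal record, not an official reporting template; check your market surveillance authority's current guidance for the required content and deadlines.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class SeriousIncidentReport:
    """Assumed internal record; not an official template from any authority."""
    incident_id: str
    date_became_aware: date
    system_name: str
    system_version: str
    description: str            # what happened and who was affected
    suspected_cause: str
    immediate_measures: str     # corrective or mitigating actions taken
    reported_to_authority: bool = False
    attachments: list[str] = field(default_factory=list)

report = SeriousIncidentReport(
    incident_id="INC-2025-001",
    date_became_aware=date(2025, 3, 14),
    system_name="credit-scorer",
    system_version="1.4.2",
    description="Batch of applications auto-rejected after a faulty model update.",
    suspected_cause="Training data pipeline regression",
    immediate_measures="Model rolled back; affected applications re-reviewed manually.",
)
print(json.dumps(asdict(report), default=str, indent=2))
```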
Conformity assessment & CE marking (if you're the provider)
Complete the conformity assessment, draw up the EU declaration of conformity, and affix the CE marking before placing high-risk AI on the market or putting it into service.
Ready to check your AI risk level?
Run our free 60-second scan to get a provisional risk classification and specific action items for your AI system.