EU AI Act Tool
Identify your likely EU AI Act risk category and get a one-page action plan with an evidence checklist. Instant PDF after checkout.
Determine your risk
High risk
AI systems that pose significant risks to health, safety, or fundamental rights.
- Healthcare AI
- Biometric identification
- Critical infrastructure
- Employment decisions
Limited risk
AI systems subject to transparency obligations and specific requirements.
- Chatbots
- Content generation
- Emotion recognition
- Deep fakes
Minimal risk
AI systems with no specific obligations under the AI Act.
- Video games
- Spam filters
- Basic automation
- Non-EU deployment
Next steps
Get a clear, actionable roadmap for achieving EU AI Act compliance.
- Immediate actions (this week)
- Short-term goals (this month)
- Medium-term objectives (3-6 months)
- Long-term compliance (6-12 months)
Understand your specific obligations based on your risk category.
- Documentation requirements
- Human oversight procedures
- Transparency obligations
- Monitoring and reporting
Evidence to prepare
- System documentation
- Risk assessment reports
- Data processing agreements
- Technical specifications
- Human oversight procedures
- Incident response plans
- Monitoring and logging
- Training and awareness
Sample report
Risk Classification
Your AI system is classified as [Risk Level] based on the following factors...
Compliance Obligations
Based on your risk category, you must comply with the following requirements...
Evidence Checklist
Prepare the following documentation and evidence to demonstrate compliance...
Implementation Roadmap
Follow this timeline to achieve full compliance by the relevant deadlines...
Ready to check your EU AI Act compliance?
Get your risk classification and action plan in 60 seconds
Check Your Risk Now