EU AI Act Art.6 High-Risk Classification: The Decision Tree Every SaaS Developer Needs in 2026
The question that keeps AI product teams up at night: "Is our system high-risk under the EU AI Act?"
Get it wrong in either direction and you pay. Call yourself high-risk when you're not, and you sink €400k+ into conformity assessments for a feature that didn't need them. Miss that you are high-risk, and you face fines of up to €15 million or 3% of global annual turnover (Art.99(4)), plus potential market withdrawal. Prohibited practices go higher still: up to €35 million or 7%.
Article 6 of the EU AI Act defines high-risk classification through a two-tier system that combines a product safety track (Annex I) and a use-case track (Annex III). This guide gives you the complete decision tree, walks through every Annex III category with real SaaS examples, and explains the Art.6(3) narrow exception that lets some systems escape high-risk status even when they appear to qualify.
Timeline note: High-risk obligations for Annex III systems apply from 2 August 2026, 24 months after the Act entered into force on 1 August 2024. Annex I product-safety systems get until 2 August 2027. Either way: you have months, not years, to complete your classification and begin conformity work.
The Art.6 Classification Framework
EU AI Act Article 6 creates two independent pathways to high-risk status:
Pathway A — Annex I (Product Safety Track): Your AI system is used as a safety component in a product covered by EU harmonization legislation listed in Annex I, or your AI system itself is such a product. Examples: AI in medical devices, AI in aviation equipment, AI in machinery.
Pathway B — Annex III (Use-Case Track): Your AI system falls into one of eight high-risk use-case categories listed in Annex III, regardless of the product it's embedded in.
Critically: these are independent. You only need to hit one pathway for high-risk status. Most SaaS developers reach the question through Pathway B.
The Complete Classification Decision Tree
Work through these steps sequentially. The first "yes" that applies determines your track.
Step 0: Prohibited System Check (Art.5)
Before asking whether your system is high-risk, confirm it isn't prohibited:
- Subliminal manipulation techniques causing harm
- Exploitation of vulnerable groups' weaknesses
- Social scoring leading to detrimental or unfavourable treatment (the final Act covers private actors too, not only public authorities)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- Emotion recognition in workplace or educational settings (except for medical or safety reasons)
- Biometric categorization to infer sensitive characteristics (race, political opinions, sexual orientation, etc.)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Predicting criminal behavior based solely on profiling or personality traits
If any of these apply: stop development. Prohibited systems cannot be remediated into compliance. No conformity assessment exists for them.
Step 1: GPAI Model Check (Art.51)
Is your system a General Purpose AI Model — a model trained on broad data, designed for a wide range of tasks, that you make available via API or embed in other products?
If yes: You follow the GPAI track (Arts. 51-56, Chapter V), not the high-risk track. GPAI models have their own obligation set: technical documentation, a policy to comply with EU copyright law, a publicly available summary of training content, and additional obligations for models with systemic risk. Note that a GPAI model can be used in a high-risk AI system — in that case both tracks apply.
If no (you're building an AI application, not a foundation model): continue to Step 2.
Step 2: Annex I Safety Component Check
Does your AI system function as a safety component in one of these product categories (Annex I, as updated by the AI Act):
| Product Category | Relevant EU Legislation |
|---|---|
| Machinery | Machinery Regulation (EU) 2023/1230 |
| Toys | Toy Safety Directive 2009/48/EC |
| Recreational craft and personal watercraft | Directive 2013/53/EU |
| Lifts | Directive 2014/33/EU |
| Equipment for explosive atmospheres | Directive 2014/34/EU |
| Radio equipment | Directive 2014/53/EU |
| Pressure equipment | Directive 2014/68/EU |
| Civil aviation | Regulation (EU) 2018/1139 and derived acts |
| Agricultural and forestry vehicles | Regulation (EU) 167/2013 |
| Marine equipment | Directive 2014/90/EU |
| Rail systems (interoperability) | Directive (EU) 2016/797 |
| Motor vehicles | Regulation (EU) 2019/2144 |
| Medical devices | Regulation (EU) 2017/745 |
| In vitro diagnostic medical devices | Regulation (EU) 2017/746 |
A safety component is defined as a component performing a safety function whose failure would endanger the health and safety of persons or property. An AI-powered collision avoidance system in a car qualifies. An AI-powered infotainment recommendation engine does not. One more condition from Art.6(1): the product must also be required to undergo third-party conformity assessment under the listed legislation, which is why low-risk product classes (e.g. Class I medical devices) generally fall outside Pathway A.
For most SaaS developers: this step is "no." Continue to Step 3.
Step 3: Annex III Use-Case Match
This is where most SaaS products face the real question. Annex III lists eight categories of high-risk AI use cases. Does your system:
- Perform or contribute to remote biometric identification of natural persons?
- Operate as a safety component of critical infrastructure (critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity)?
- Determine access to, or outcomes in, education or vocational training?
- Make or influence employment, worker management, or HR decisions?
- Determine access to essential private or public services (credit, life/health insurance, public benefits, emergency dispatch)?
- Support law enforcement activities (risk assessment, polygraphs, profiling)?
- Support migration, asylum, or border control decisions?
- Support administration of justice or democratic processes?
We'll examine each category in detail below.
If none apply: your AI system is NOT high-risk. Document this conclusion in your risk management records (good practice even when not legally required).
If one or more apply: proceed to Step 4.
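If it helps your review process, here is the Step 3 screen as a minimal TypeScript sketch. The category keys are our own shorthand (they match the classification helper at the end of this guide); the question phrasings paraphrase Annex III rather than quote it:

```typescript
// Sketch: the eight Step-3 screening questions as data, so a review meeting
// can walk them in order and record answers. Wording paraphrases Annex III.
const annexIIIScreen = [
  { category: 'biometric-identification', question: 'Remote biometric identification of natural persons?' },
  { category: 'critical-infrastructure', question: 'Safety component of critical infrastructure?' },
  { category: 'education-vocational', question: 'Determines access to or outcomes in education or training?' },
  { category: 'employment-hr', question: 'Makes or influences employment or worker-management decisions?' },
  { category: 'essential-services', question: 'Gates access to credit, life/health insurance, or public benefits?' },
  { category: 'law-enforcement', question: 'Supports law-enforcement risk assessment or profiling?' },
  { category: 'migration-border', question: 'Supports migration, asylum, or border-control decisions?' },
  { category: 'administration-justice', question: 'Supports courts, tribunals, or democratic processes?' },
] as const

// Collect matched categories from yes/no answers given in the same order.
function matchedCategories(answers: boolean[]): string[] {
  return annexIIIScreen.filter((_, i) => answers[i]).map((e) => e.category)
}
```

An empty result means "not high-risk via Annex III"; one or more matches sends you to Step 4.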
Step 4: Art.6(3) Narrow Exception
Art.6(3) creates an escape hatch: even if your system falls under Annex III, it is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making.
The Act lists four conditions; at least one must apply:
- The AI system performs a narrow procedural task (Art.6(3)(a))
- The AI system is intended to improve the result of a previously completed human activity (Art.6(3)(b))
- The AI system detects decision-making patterns or deviations from prior patterns, without replacing or influencing a completed human assessment absent proper human review (Art.6(3)(c))
- The AI system performs a preparatory task to an assessment relevant to an Annex III use case (Art.6(3)(d))
Exception to the exception: a system that performs profiling of natural persons is always high-risk if it falls under Annex III. Art.6(3) cannot rescue it.
Critical: If you invoke Art.6(3), you must document your assessment before placing the system on the market and register the system in the EU database (Art.6(4), Art.49(2)). The burden of proof lies with you. Market surveillance authorities can request this documentation. A self-declaration that "we're not high-risk because we said so" without analysis is not defensible.
Art.6(5) requires the Commission to publish guidelines on the practical implementation of Art.6(3), with a list of practical examples, by 2 February 2026. The European AI Office website (digital-strategy.ec.europa.eu/en/policies/eu-ai-act) maintains updated guidance.
If Art.6(3) applies: document, document, document. Then you're clear.
If Art.6(3) does not apply: your system is high-risk. Proceed to Step 5.
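The Art.6(3) logic is easy to get wrong in prose, so here it is as a minimal TypeScript sketch. The field names are our own shorthand for Art.6(3)(a)-(d); this is a documentation aid, not legal advice:

```typescript
// Sketch of the Art.6(3) test: the exception applies when at least one of
// conditions (a)-(d) holds, but never when the system profiles natural persons.
interface Art63Assessment {
  narrowProceduralTask: boolean                  // Art.6(3)(a)
  improvesCompletedHumanActivity: boolean        // Art.6(3)(b)
  detectsPatternsWithoutReplacingHumans: boolean // Art.6(3)(c)
  preparatoryTaskOnly: boolean                   // Art.6(3)(d)
  profilesNaturalPersons: boolean                // hard override, last subparagraph
}

function art63ExceptionApplies(a: Art63Assessment): boolean {
  // Profiling of natural persons keeps the system high-risk regardless
  if (a.profilesNaturalPersons) return false
  // Otherwise any single condition (a)-(d) suffices
  return (
    a.narrowProceduralTask ||
    a.improvesCompletedHumanActivity ||
    a.detectsPatternsWithoutReplacingHumans ||
    a.preparatoryTaskOnly
  )
}
```

Note that the profiling check runs first: no combination of conditions (a)-(d) can rescue a profiling system.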
Step 5: High-Risk — Identify Your Obligations
You're in the high-risk zone. The core obligations under Chapter III, Section 2:
| Obligation | Article | Key Requirement |
|---|---|---|
| Risk management system | Art.9 | Documented iterative process throughout lifecycle |
| Data governance | Art.10 | Training data quality, bias assessment, data management |
| Technical documentation | Art.11 + Annex IV | Detailed system documentation before market placement |
| Record-keeping/logging | Art.12 | Automatic logs for lifecycle tracing |
| Transparency | Art.13 | Instructions for use, capabilities and limitations disclosed |
| Human oversight | Art.14 | Measures enabling humans to understand, monitor, override |
| Accuracy/robustness | Art.15 | Defined performance metrics, cybersecurity measures |
| Conformity assessment | Art.43 | Internal control (most Annex III) or notified body (Annex I; biometrics) |
| EU database registration | Art.49 + Annex VIII | Register before market placement |
| Declaration of conformity | Art.47 | Signed declaration with specific content |
| CE marking | Art.48 | Affix CE mark |
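If you track these obligations alongside your codebase, a minimal sketch of the table above as data (status values and field names are our own convention, not from the Act):

```typescript
// Sketch: the core Arts. 9-15 obligations as a trackable checklist.
interface Obligation {
  article: string
  requirement: string
  status: 'not-started' | 'in-progress' | 'done'
}

const coreObligations: Obligation[] = [
  { article: 'Art.9', requirement: 'Risk management system', status: 'not-started' },
  { article: 'Art.10', requirement: 'Data governance', status: 'not-started' },
  { article: 'Art.11', requirement: 'Technical documentation (Annex IV)', status: 'not-started' },
  { article: 'Art.12', requirement: 'Record-keeping and logging', status: 'not-started' },
  { article: 'Art.13', requirement: 'Transparency and instructions for use', status: 'not-started' },
  { article: 'Art.14', requirement: 'Human oversight', status: 'not-started' },
  { article: 'Art.15', requirement: 'Accuracy, robustness, cybersecurity', status: 'not-started' },
]

// Ready for conformity assessment only when every obligation is done.
const readyForAssessment = coreObligations.every((o) => o.status === 'done')
```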
Annex III Deep Dive: Eight Categories for SaaS Developers
Category 1: Biometric Identification (Annex III point 1)
High-risk if: Your system performs remote biometric identification of natural persons, whether real-time or post. Pure biometric verification, whose sole purpose is to confirm that a person is who they claim to be, is excluded.
Real-time: facial recognition of people in a crowd while it's happening. Post-remote: facial recognition in recorded video.
SaaS examples — likely high-risk:
- CCTV analytics platform that identifies individuals across cameras
- Event access control that matches faces against a database
- Workplace attendance system using facial recognition
SaaS examples — likely NOT high-risk:
- Login biometrics (fingerprint/face ID) — user-initiated, single person, consent-based
- Age estimation for content access (no identification of a specific person)
- Liveness detection for fraud prevention (not identification)
Note: Real-time biometric ID in public spaces by law enforcement is prohibited under Art.5. For commercial operators (not law enforcement), real-time remote biometric ID is high-risk, not prohibited. The distinction matters for compliance path.
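The distinction in that note, as a deliberately simplified TypeScript sketch (real deployments need case-by-case analysis of the Art.5 exceptions):

```typescript
// Deliberately simplified: who operates real-time remote biometric ID
// determines whether you are in prohibited or high-risk territory.
type Operator = 'law-enforcement' | 'commercial'

function remoteBiometricIdPath(operator: Operator, realTime: boolean): string {
  if (operator === 'law-enforcement' && realTime) {
    return 'prohibited (Art.5, narrow exceptions for serious crime)'
  }
  return 'high-risk (Annex III point 1)' // post-RBI and commercial real-time use
}
```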
Category 2: Critical Infrastructure (Annex III point 2)
High-risk if: Your AI system is a safety component in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity (Annex III point 2). Banking and finance are not in this category; credit and insurance sit under point 5 below.
SaaS examples — likely high-risk:
- AI-powered grid load balancing for a power utility
- Automated traffic signal optimization for a city transport network
- Automated pressure and dosing control for a water treatment network
SaaS examples — likely NOT high-risk:
- Energy monitoring dashboard (read-only analytics, no control function)
- Route optimization for logistics (not safety function)
- Monitoring tools that alert human operators without taking control actions
The key test: does failure of the AI system endanger safety of the infrastructure? Recommendation systems and dashboards rarely qualify. Control systems often do.
Category 3: Education and Vocational Training (Annex III point 3)
High-risk if: Your AI system determines access to, or assigns individuals to, educational institutions; evaluates learning outcomes with effects on students' futures; monitors/detects prohibited behavior during testing; assesses the appropriate level of education for students.
SaaS examples — likely high-risk:
- Admissions scoring system that ranks applicants for university admission
- AI-powered proctoring that flags "prohibited behavior" and leads to disqualification
- Automated essay grading used to determine pass/fail in accredited programs
- Skills assessment platform used by vocational training bodies for certification
SaaS examples — likely NOT high-risk:
- Study assistant / tutoring chatbot (supportive, not determinative)
- Flashcard recommendation engine (user-controlled, no qualification consequence)
- Learning analytics dashboard for teachers (human interprets, human decides)
The test: does the AI system's output directly determine or substantially influence a formal educational outcome? If educators use it as one factor among many, Art.6(3) may apply. If the system makes the call, high-risk.
Category 4: Employment and HR (Annex III point 4)
This is the category that catches most SaaS AI tools.
High-risk if: Your AI system performs recruitment/selection tasks (CV screening, filtering, interview scoring), takes decisions affecting terms/conditions of employment, allocates tasks (automated work scheduling), manages or evaluates performance, promotes or terminates employment, or performs automated profiling relevant to promotion/termination.
SaaS examples — almost certainly high-risk:
- ATS with AI CV screening that filters candidates before human review
- Video interview analysis tool scoring candidates on verbal/non-verbal cues
- Automated work scheduler that assigns gig economy workers to tasks
- Performance management platform that generates "performance scores" affecting bonuses
- Workforce analytics that predicts employee flight risk and influences retention decisions
SaaS examples — likely NOT high-risk (but verify with Art.6(3)):
- Job posting optimization (no decision about individuals)
- Salary benchmarking tool (aggregate data, no individual decisions)
- Interview scheduling tool (logistics only, no assessment)
- Skills gap analyzer purely for L&D planning with no employment consequence
Reality check: If you're selling an "AI-powered HR tool" to businesses, you almost certainly need to run through the Annex III Category 4 analysis carefully. The phrase "AI for HR" is nearly synonymous with high-risk under the EU AI Act.
Category 5: Essential Private Services (Annex III point 5)
High-risk if: Your AI system evaluates eligibility for essential public assistance benefits, evaluates creditworthiness or establishes credit scores (with an explicit carve-out for financial fraud detection), performs risk assessment and pricing for life and health insurance, or classifies and dispatches emergency calls.
SaaS examples — likely high-risk:
- Credit scoring model for a fintech lending platform
- Health or life insurance pricing algorithm that sets individual premiums based on behavioral data
- Buy-now-pay-later approval engine
- Social benefit eligibility assessment tool for a government contractor
SaaS examples — likely NOT high-risk:
- Fraud detection for payment processing (explicitly carved out of the creditworthiness category)
- AML transaction monitoring (compliance function, not credit decision)
- Financial health dashboard (informational, user-facing)
Watch for sector-specific guidance from the European Banking Authority (EBA) and the European Insurance and Occupational Pensions Authority (EIOPA) on AI Act compliance for financial services; check their portals for current publications.
Category 6: Law Enforcement (Annex III point 6)
High-risk if: Your AI system makes individual risk assessments for crime prevention, functions as a polygraph or similar tool, assesses the reliability of evidence, predicts offending or re-offending, or profiles people in the course of criminal investigations. (Real-time remote biometric ID in public spaces by law enforcement is prohibited, as noted in Step 0.)
Most SaaS developers don't build law enforcement tools. If you do (GovTech, LegalTech for prosecutors), specialist legal advice is essential.
Fringe cases:
- Fraud risk scoring sold to financial institutions as "law enforcement support" — analyze carefully
- Corporate investigation platforms analyzing employee communications — likely not law enforcement
- Threat intelligence platforms — depends on whether they feed into enforcement decisions
Category 7: Migration, Asylum, Border Control (Annex III point 7)
High-risk if: Your AI system performs lie detection for immigration authorities, assesses risks of persons seeking asylum or crossing borders, assists with asylum applications, or identifies/verifies documents for border purposes.
Again, most SaaS developers don't operate here. GovTech providers serving immigration authorities: this category applies to you.
Category 8: Administration of Justice (Annex III point 8)
High-risk if: Your AI system assists courts or tribunals (or alternative dispute resolution bodies) in researching and interpreting facts and law, or in applying the law to the facts of a specific case.
LegalTech implications:
- Case outcome prediction tools sold to courts — high-risk
- Contract analysis tools sold to law firms — likely NOT (advisory, not adjudicatory)
- Legal research assistants — likely NOT (human lawyer decides, AI supports)
- Sentencing recommendation tools — high-risk
Also covered: AI intended to influence the outcome of an election or referendum or voting behaviour. Purely administrative or logistical campaign tools are excluded, but AI-powered political ad targeting and voter profiling are squarely in scope.
Applying the Art.6(3) Exception: The Self-Assessment Template
If you believe your Annex III system escapes high-risk via Art.6(3), document this analysis:
```markdown
## Art.6(3) Self-Assessment — [System Name] — [Date]
### Annex III Category
[State which category(ies) appeared to apply and why]
### Reason for Art.6(3) Exclusion
Check all that apply (at least one of (a)-(d), plus the profiling confirmation, is required):
☐ The system performs a narrow procedural task only (Art.6(3)(a))
☐ The system improves the result of a previously completed human activity (Art.6(3)(b))
☐ The system detects decision-making patterns without replacing or improperly influencing human assessment (Art.6(3)(c))
☐ The system performs only a preparatory task to a relevant assessment (Art.6(3)(d))
☐ Confirmed: the system does NOT perform profiling of natural persons
### Evidence of Limitation
[Describe specifically how the system's scope is limited. Cite functional
specifications. Describe human oversight mechanisms. Explain what decisions
the system does NOT make.]
### Affected Populations
[Who is impacted if the system produces incorrect outputs? What is the
blast radius of a wrong result?]
### Conclusion
[Not high-risk because: concrete reasoning. Document who reviewed this
assessment and when. Schedule reassessment if functionality changes.]
```
Store this document with your AI system's technical documentation. Review it whenever you add features that could expand the system's scope.
Conformity Obligations if High-Risk
High-risk AI systems in Annex III categories 2 through 8 use internal conformity assessment (the Annex VI procedure). You do not need a notified body, with two caveats:
- Remote biometric identification systems (Annex III point 1) require a notified body unless you have applied harmonised standards or common specifications in full (Art.43(1))
- Annex I systems follow the conformity assessment procedure of their sectoral product legislation, which typically does involve a notified body
Self-assessment requires:
- Complete technical documentation per Annex IV
- Implement all Arts. 9-15 obligations
- Sign EU Declaration of Conformity (Annex V format)
- Affix CE marking
- Register in the EU database for high-risk AI systems (Art.49)
The EU database is set up and maintained by the European Commission (Art.71). Check the EU AI Office website for the registration portal; the database opens for registration before 2 August 2026.
Code Example: Risk Classification Helper (TypeScript)
A typed helper for documenting your classification decision in code:
```typescript
type AnnexIIICategory =
| 'biometric-identification'
| 'critical-infrastructure'
| 'education-vocational'
| 'employment-hr'
| 'essential-services'
| 'law-enforcement'
| 'migration-border'
| 'administration-justice'
interface RiskClassificationResult {
isProhibited: boolean
isGPAI: boolean
isAnnexI: boolean
annexIIICategories: AnnexIIICategory[]
art6_3ExceptionApplies: boolean
art6_3ExceptionRationale: string
finalClassification: 'prohibited' | 'gpai' | 'high-risk' | 'limited-risk' | 'minimal-risk'
assessmentDate: string
reviewDue: string
assessedBy: string
}
// Example: HR screening tool — no Art.6(3) escape
const hrScreeningTool: RiskClassificationResult = {
isProhibited: false,
isGPAI: false,
isAnnexI: false,
annexIIICategories: ['employment-hr'],
art6_3ExceptionApplies: false,
art6_3ExceptionRationale:
'System ranks candidates for human reviewers but output directly determines shortlist composition. ' +
'Not narrowly limited procedural task — influences individual employment outcomes.',
finalClassification: 'high-risk',
assessmentDate: '2026-05-05',
reviewDue: '2026-08-02', // Before obligations apply
assessedBy: 'Product Legal + Engineering Lead',
}
// Example: Interview scheduler — Art.6(3) applies
const interviewScheduler: RiskClassificationResult = {
isProhibited: false,
isGPAI: false,
isAnnexI: false,
annexIIICategories: ['employment-hr'], // Appears to qualify...
art6_3ExceptionApplies: true,
art6_3ExceptionRationale:
'System performs narrowly limited procedural task (calendar slot optimization). ' +
'No assessment of candidate quality. No influence on who is selected. ' +
'Purely logistical function with no individual employment consequence.',
finalClassification: 'minimal-risk',
assessmentDate: '2026-05-05',
reviewDue: '2027-05-05',
assessedBy: 'Product Legal + Engineering Lead',
}
```
Store this structured record in your technical documentation. It creates an audit trail and forces a concrete analysis rather than a vague "we checked and we're fine."
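To keep these records honest, a small validator can derive the classification implied by a record's own fields and flag contradictions. A sketch using the types above; the rationale-length threshold is our own convention:

```typescript
// Sketch: flag records whose stated finalClassification contradicts their
// own fields before they enter your technical documentation.
function validateClassification(r: RiskClassificationResult): string[] {
  const problems: string[] = []
  const expected = r.isProhibited
    ? 'prohibited'
    : r.isGPAI
      ? 'gpai'
      : r.isAnnexI || (r.annexIIICategories.length > 0 && !r.art6_3ExceptionApplies)
        ? 'high-risk'
        : r.finalClassification // limited vs. minimal risk needs human judgment
  if (expected !== r.finalClassification) {
    problems.push(`Stated '${r.finalClassification}' but fields imply '${expected}'`)
  }
  if (r.art6_3ExceptionApplies && r.art6_3ExceptionRationale.trim().length < 50) {
    problems.push('Art.6(3) claimed but the rationale is too thin to defend')
  }
  return problems
}

// validateClassification(hrScreeningTool) returns [] (the record is consistent).
```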
SaaS Compliance Infrastructure and EU Sovereignty
If you're building a high-risk AI system, your compliance infrastructure itself becomes a regulated asset. This includes:
- Audit logs (Art.12): Who queried the system, what outputs were produced, when
- Training data records (Art.10): Provenance, bias assessments, quality measures
- Model monitoring (Art.9): Ongoing risk management throughout the system lifecycle
- Incident records: When the system produces harmful outputs
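What does an Art.12 log record actually look like? A minimal sketch; the Act requires logging appropriate to the system's purpose, and these specific fields are our suggestion, not a prescribed schema:

```typescript
// Illustrative Art.12-style log record for a high-risk AI system.
interface HighRiskAILogEntry {
  timestamp: string         // ISO 8601 time of the inference
  systemVersion: string     // model/system version that produced the output
  inputReference: string    // pointer to the input, not the raw personal data
  outputSummary: string     // what the system produced or recommended
  humanReviewer?: string    // who reviewed or overrode the output (Art.14)
  overrideApplied: boolean  // whether a human changed the outcome
}
```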
All of this data is sensitive. GDPR applies to any personal data within it. Under Art.28 GDPR, the processors that store your compliance records must be bound by data processing agreements. If you route that data through US cloud infrastructure, you inherit CLOUD Act exposure: US authorities can compel access to the compliance records for your high-risk AI systems.
Running your AI compliance stack on EU-sovereign infrastructure (processing in the EU, no US parent entity exposure) is increasingly the default choice for enterprise customers who themselves face regulatory audit. The compliance officer who signed off on your high-risk AI system does not want to explain to the DPA why the audit logs sit on a US server.
Summary: The Decision Tree at a Glance
```text
START
│
├─ Is system PROHIBITED (Art.5)? ──YES──▶ STOP. Do not build.
│
├─ Is system a GPAI MODEL (Art.51)? ──YES──▶ GPAI track (Chapter V obligations)
│
├─ Is system an ANNEX I safety component? ──YES──▶ HIGH-RISK (Pathway A)
│
├─ Does system fall under ANNEX III categories?
│ ├─ Biometric identification
│ ├─ Critical infrastructure safety
│ ├─ Education / vocational training
│ ├─ Employment / HR decisions
│ ├─ Essential services (credit, insurance, benefits)
│ ├─ Law enforcement
│ ├─ Migration / border control
│ └─ Administration of justice / democratic processes
│
│ ──NONE──▶ NOT HIGH-RISK (document conclusion)
│
│ ──ONE OR MORE──▶
│ │
│ ├─ Does Art.6(3) narrow exception apply?
│ │ (narrowly limited task, no significant risk)
│ │ ──YES──▶ NOT HIGH-RISK (document analysis, review on feature change)
│ │ ──NO ──▶ HIGH-RISK (Pathway B)
│
HIGH-RISK: Implement Arts. 9-15, conformity assessment, EU database registration,
CE marking, Declaration of Conformity. Deadline: 2 August 2026 (Annex III) / 2027 (Annex I).
```
What To Do Now
If you haven't done the classification: Schedule it for this sprint. The analysis takes 2-4 hours for a typical SaaS product. The documentation you produce has direct value in customer conversations, DPA inquiries, and eventual conformity assessments. Start with the Annex III Category 4 (employment/HR) and Category 5 (essential services) checks — these catch the most products.
If you're likely high-risk: Don't panic. Annex III self-assessment (Arts. 43-47) is achievable without a notified body for most categories. Begin with Art.9 risk management documentation and Art.11 technical documentation. These form the backbone of conformity.
If you're not high-risk: Document it. The analysis is a one-page justification that protects you if a customer, partner, or supervisory authority later asks. The answer "we analyzed it and here's why it doesn't qualify" is vastly better than "we didn't check."
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.