2026-04-22 · 14 min read

EU AI Act Art.6(3): High-Risk Exemption — Annex III No-Significant-Risk Self-Declaration and Commission Guidelines (2026)

The EU AI Act's classification framework gives Annex III a prominent role: any AI system directly listed in Annex III is presumed high-risk under Art.6(2). But there is a derogation. Article 6(3) allows providers to self-declare that their Annex III system does not qualify as high-risk — if it does not pose a significant risk of harm and meets at least one of four specific criteria.

This exemption pathway is among the most consequential and least understood provisions in the Act. Getting it right means avoiding the full Art.9-15 compliance burden for high-risk AI (risk management systems, technical documentation, conformity assessment, post-market monitoring). Getting it wrong means self-declaring out of obligations that market surveillance authorities will later enforce — with fines up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher (Art.99(4)).

This guide covers:

  - The Art.6 classification stack and where the exemption sits
  - The four Art.6(3) exemption criteria, with worked examples
  - The Art.6(4) profiling override
  - Art.6(5) documentation and Art.49(2) registration duties
  - The Art.6(6) Commission guidelines and the two-part test
  - A Python self-assessment framework
  - Common self-declaration mistakes and the applicability timeline

The Art.6 Classification Stack

Before examining Art.6(3), the full classification sequence matters:

Step 1: Art.6(1) — Is it a safety component of an Annex I regulated product
         that requires third-party conformity assessment?
         YES → HIGH-RISK (no exemption available)

Step 2: Art.6(2) — Is the AI system listed in Annex III?
         NO  → Not high-risk under Art.6 (may still have Art.50 transparency obligations)
         YES → PRESUMED HIGH-RISK → proceed to Step 3

Step 3: Art.6(3) — Does the system meet one of the four criteria AND
         not pose significant risk?
         YES → Provider may self-declare NOT high-risk (must document)
         NO  → HIGH-RISK (full Art.9-15 obligations apply)

Step 4: Art.6(4) override — Does the system perform profiling of natural persons?
         YES → ALWAYS HIGH-RISK (Art.6(3) self-declaration blocked)

Critical distinction: Art.6(3) is only available for Annex III systems (Art.6(2) route). It does not apply to Art.6(1) safety components in regulated products. If your AI system is a safety component of a CE-marked product already requiring third-party conformity assessment, you have no Art.6(3) option.
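The four-step sequence can be condensed into a minimal decision function — a sketch in which the names and boolean flags are illustrative, not statutory terms:

```python
from enum import Enum, auto


class Classification(Enum):
    HIGH_RISK = auto()
    NOT_HIGH_RISK = auto()
    NOT_HIGH_RISK_SELF_DECLARED = auto()


def classify(
    annex_i_safety_component: bool,
    listed_in_annex_iii: bool,
    performs_profiling: bool,
    meets_art63_criterion: bool,
    significant_harm_risk: bool,
) -> Classification:
    # Step 1: Art.6(1) safety components — no exemption route available
    if annex_i_safety_component:
        return Classification.HIGH_RISK
    # Step 2: not listed in Annex III — not high-risk under Art.6
    if not listed_in_annex_iii:
        return Classification.NOT_HIGH_RISK
    # Art.6(4) gate checked before granting the derogation:
    # profiling of natural persons blocks Art.6(3) entirely
    if performs_profiling:
        return Classification.HIGH_RISK
    # Step 3: Art.6(3) — one criterion met AND no significant risk of harm
    if meets_art63_criterion and not significant_harm_risk:
        return Classification.NOT_HIGH_RISK_SELF_DECLARED
    return Classification.HIGH_RISK
```

Note that the Art.6(4) profiling check sits ahead of the criterion check: no combination of criteria can rescue a profiling system.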


Why Art.6(3) Exists

The Annex III categories are broad by design. They capture entire domains — employment AI, education AI, essential services AI — using descriptive language that could theoretically encompass a simple recommendation widget alongside a fully automated hiring decision engine. Both might technically fall within Annex III(4) (employment AI), but the harm potential is not equivalent.

Art.6(3) provides a risk-based escape valve. The legislative recitals make the intent clear: Annex III listing does not automatically mean the AI system poses the level of risk that warrants the full high-risk compliance burden. A system that only performs narrow procedural support — formatting data, flagging patterns for human review, suggesting pre-populated options — should not bear the same obligations as a system that autonomously determines credit approvals or candidate selection.

The exemption is provider-driven. No authority needs to approve the self-declaration. But the provider carries the burden of proof if challenged.


The Four Art.6(3) Exemption Criteria

Article 6(3) provides that an Annex III system shall not be high-risk where it does not pose a significant risk of harm "taking into account any of the following criteria." The use of "any" is important: meeting one criterion is sufficient, provided the system also clears the overarching requirement of not posing a significant risk of harm.

Criterion (a) — Narrow Procedural Task

Statutory text: "the AI system is intended to perform a narrow procedural task"

This criterion covers AI systems whose function is bounded, rule-like, and limited to a specific operational step in a larger process — where the AI does not make substantive determinations about persons or situations.

What "narrow" means:

Worked examples:

System | Narrow Procedural? | Why
OCR system that extracts text from CVs into a structured format | YES | No assessment of content, pure procedural extraction
AI that checks whether a loan application form is complete | YES | Binary completeness check, no creditworthiness assessment
AI that recommends which credit score model to apply based on applicant geography | NO | Influences downstream credit assessment process
AI that formats court documents into standard templates | YES | No legal analysis, pure structural formatting
AI that flags potentially discriminatory language in job advertisements | NO | Involves assessment of content and impact

The key question: Does the system make a substantive determination about a person's characteristics, rights, or outcomes — or does it perform a step that is inherently bounded and procedural? A system that automates a task that any competent employee could perform in 30 seconds using a checklist is likely narrow procedural. A system that synthesises information to reach a conclusion about a person is not.

Risk of misapplication: Providers sometimes describe systems as "procedural" when they are actually performing substantive risk assessments. Renaming an output as a "flag" or "recommendation" rather than a "decision" does not make it narrow and procedural. Market surveillance authorities will look at what the system actually does, not what the provider calls it.


Criterion (b) — Improvement of a Previously Completed Human Activity

Statutory text: "the AI system is intended to improve the result of a previously completed human activity"

This criterion covers AI systems deployed retrospectively — to review, check, or enhance work that a human has already completed independently. The human assessment precedes the AI assessment.

The sequence requirement:

  1. Human performs the primary task and reaches a conclusion
  2. AI system reviews or analyses the completed work
  3. AI provides feedback, error detection, or quality enhancement suggestions
  4. Human retains ability to accept or reject AI feedback

What this covers:

  - Error detection and quality checks on reports a human has already finalised
  - Review of completed human assessments, with suggestions the human may accept or reject
  - Retrospective enhancement of a result the human reached independently

What this does not cover:

  - AI that runs in parallel with, or ahead of, the human assessment
  - AI whose output is visible to the human while they are still forming their conclusion
  - AI that substitutes its own conclusion for the human's

The distinguishing test: Who acts first? If the human performs the substantive task without seeing the AI output, and the AI then reviews the completed result, criterion (b) may apply. If the AI output is visible to the human during their assessment, or if the AI goes first and the human reacts, criterion (b) does not apply.

Documentation requirement: Providers relying on criterion (b) should document the workflow sequence — what the human does, at what stage the AI intervenes, and what constraints exist on the AI's ability to alter the human's primary conclusion.
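The workflow-sequence documentation can be backed by a simple check over a logged event trail. The sketch below assumes a hypothetical event-log schema — `WorkflowEvent` and the `conclusion_recorded` action name are illustrative, not taken from any standard:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WorkflowEvent:
    actor: str         # "human" or "ai" — illustrative schema
    action: str        # e.g. "conclusion_recorded", "review_suggested"
    timestamp: datetime


def criterion_b_sequence_holds(events: list[WorkflowEvent]) -> bool:
    """Criterion (b) requires the human conclusion to be completed
    before the first AI intervention in the workflow."""
    human_done = [e.timestamp for e in events
                  if e.actor == "human" and e.action == "conclusion_recorded"]
    ai_events = [e.timestamp for e in events if e.actor == "ai"]
    if not human_done or not ai_events:
        return False  # no recorded human conclusion, or no AI step to assess
    # The earliest human conclusion must predate the earliest AI event
    return min(human_done) < min(ai_events)
```

A log in which any AI event precedes the human conclusion fails the check — matching the "who acts first?" test above.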


Criterion (c) — Pattern Detection Without Replacing Human Assessment

Statutory text: "the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review"

This criterion covers AI systems used for quality assurance, audit, and consistency monitoring — where the system analyses patterns across many decisions but does not substitute for individual human judgements on individual cases.

What "detect patterns" means:

The "without proper human review" clause: The parenthetical is important. The system is exempt only if:

  1. It is not meant to replace or influence previously completed human assessments; AND
  2. Proper human review remains in place

A pattern detection system that automatically triggers adverse actions when it detects a deviation (without a human deciding to act on the finding) is not within this criterion. A system that generates a report showing statistical anomalies for a compliance officer to review and act upon may be within criterion (c).
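A report-only deviation monitor of this kind might look like the following sketch: it aggregates decision patterns and emits findings for a compliance officer to review, without acting on any individual case. The z-score threshold and field names are illustrative assumptions:

```python
from statistics import mean, stdev


def deviation_report(approval_rates: dict[str, float],
                     z_threshold: float = 2.0) -> list[str]:
    """Criterion (c) sketch: flag decision-makers whose approval rate
    deviates from the population, for human compliance review.
    The function only reports — it never triggers an action on a case."""
    rates = list(approval_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    flagged = []
    for officer, rate in approval_rates.items():
        # Flag statistical outliers; a human decides whether to act
        if sigma > 0 and abs(rate - mu) / sigma >= z_threshold:
            flagged.append(
                f"{officer}: approval rate {rate:.0%} deviates from "
                f"mean {mu:.0%} — refer to compliance officer"
            )
    return flagged
```

The design choice that keeps this within criterion (c) is the return type: a report for a human, not a side effect on any pending application.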

Worked examples:

System | Criterion (c)? | Why
AI that analyses all 10,000 annual loan decisions for statistical evidence of age discrimination | YES | Pattern analysis across population, findings reviewed by compliance team
AI that automatically escalates individual loan applications when the assigned officer's decision pattern deviates | NO | Influences individual-level decisions in real time
AI that flags specific visa officers whose approval rates deviate significantly for audit | YES | Pattern detection for management review, not individual decision overrides
AI that scores each immigration interview in real time and presents the score to the interviewing officer | NO | Influences the officer's assessment as it proceeds

Criterion (d) — Preparatory Task

Statutory text: "the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III"

This criterion covers AI that performs the groundwork before a human conducts the substantive assessment — data collection, aggregation, formatting, preliminary screening — without itself making the assessment.

What "preparatory" means:

What it does not mean:

The ranking problem: Many providers argue that a "ranking" or "shortlisting" system is merely preparatory. This argument is difficult to sustain where:

  - Only the top-ranked candidates are ever seen by the human assessor
  - The ranking effectively determines who advances and who is excluded
  - Humans rarely depart from the ranked order in practice

A truly preparatory system leaves the human assessor free to conduct their own assessment without the AI's work structuring or constraining their view.


Art.6(4) — The Profiling Override

Statutory text: "Notwithstanding paragraph 3, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons as referred to in Article 4, point (4), of Regulation (EU) 2016/679."

GDPR Art.4(4) definition of profiling: "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."

This is a hard bar. If your Annex III AI system performs profiling under the GDPR definition, no Art.6(3) exemption is available. The system is always high-risk.

Why this matters:

The four Art.6(3) criteria could theoretically be read to capture certain profiling activities. For example, an AI that analyses past employment decisions to detect patterns (criterion (c)) might also be profiling employees. Art.6(4) forecloses this reading — profiling of natural persons triggers high-risk classification regardless of which criterion would otherwise apply.

What constitutes profiling in the Art.6(4) context:

GDPR profiling requires:

  1. Automated processing of personal data
  2. Evaluation of personal aspects of a natural person
  3. Particularly: performance, economic situation, health, preferences, interests, reliability, behaviour, location

Any Annex III system that combines personal data to build profiles, predict behaviour, or score individuals falls within Art.6(4). This includes:

  - Candidate suitability scoring and ranking in recruitment
  - Creditworthiness scoring of loan applicants
  - Prediction of a person's work performance, reliability, or behaviour from personal data

Art.6(4) is not about intent — it is about function. A provider cannot claim an exemption by labelling a profiling system as "pattern analysis" or "preparatory assessment."
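The Art.6(4) gate reduces to three cumulative elements of the GDPR Art.4(4) definition, which can be checked explicitly before any criterion analysis. A sketch — the boolean flags are self-reported inputs and no substitute for legal analysis:

```python
def performs_gdpr_profiling(
    automated_processing: bool,       # any form of automated processing
    uses_personal_data: bool,         # processing of personal data
    evaluates_personal_aspects: bool, # evaluates/predicts performance, health,
                                      # preferences, reliability, behaviour, etc.
) -> bool:
    """All three GDPR Art.4(4) elements must be present; if they are,
    the Art.6(3) exemption is unavailable (Art.6(4) override)."""
    return automated_processing and uses_personal_data and evaluates_personal_aspects
```

A candidate-ranking engine that predicts suitability from application data satisfies all three elements; a pure OCR extractor processes personal data automatically but evaluates nothing, so the third element fails.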


Art.6(5) — Documentation Requirements

Statutory text: "The provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment."

Three obligations follow from Art.6(3) self-declaration:

1. Pre-market documentation: The assessment must be completed and documented before the system is placed on the market or put into service. Retrospective documentation — created after market surveillance challenges the classification — will not satisfy Art.6(5). Providers must build the assessment into their product development process.

2. Registration under Art.49(2): Providers of non-high-risk Annex III systems must register in the EU database of AI systems established under Art.71. The registration requirement confirms that a self-declaration does not mean flying under the regulatory radar — the Commission maintains visibility.

3. Production on request: National competent authorities (market surveillance authorities) can request the documentation. Providers must have it ready. If the documentation cannot be produced, or is inadequate, the authority will likely conclude the self-declaration was not properly made.

What adequate documentation looks like:

A defensible Art.6(5) assessment package should include:

  - A description of the system and its intended purpose
  - The Annex III category engaged and why
  - The Art.6(4) profiling analysis
  - The significant-harm analysis, including whether the system materially influences decision outcomes
  - The criterion relied upon, with a reasoned justification
  - A description of the human workflow (who acts first, what the human can override)
  - The responsible officer, assessment date, and a reassessment schedule


Art.6(6) — Commission Guidelines

Statutory text: "The Commission shall provide guidelines specifying the practical implementation of this Article, in particular as regards the determination of the cases where an AI system referred to in Annex III is not to be considered high-risk, no later than 2 February 2026 pursuant to Article 96."

The February 2026 deadline for Commission guidelines has now passed. As of April 2026, the Commission has published preliminary guidance through the AI Office on the Art.6(3) self-declaration process.

What the Commission guidance clarifies:

The significant risk of harm threshold: Commission guidance confirms that the four criteria in Art.6(3) are not standalone — they must be considered alongside the overarching requirement that the system "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making."

This creates a two-part test:

  1. Does the system meet at least one of criteria (a)-(d)?
  2. Does the system, even considering that criterion, not pose significant risk of harm?

Meeting criterion (a) (narrow procedural) does not automatically exempt a system. A narrow procedural task in a high-stakes context (deciding which emergency cases are triaged first) may still pose significant harm risk. Providers must apply both the criterion and the harm threshold independently.


Art.6(3) and the Art.9-15 Obligation Stack

Understanding what is at stake makes the assessment worth the investment:

Obligation | High-Risk (Art.6(2)) | Not High-Risk via Art.6(3)
Risk Management System (Art.9) | REQUIRED | NOT required
Training Data Governance (Art.10) | REQUIRED | NOT required
Technical Documentation (Art.11 + Annex IV) | REQUIRED | NOT required
Record-Keeping/Logs (Art.12) | REQUIRED | NOT required
Transparency to Deployers (Art.13) | REQUIRED | NOT required
Human Oversight Measures (Art.14) | REQUIRED | NOT required
Accuracy, Robustness, Cybersecurity (Art.15) | REQUIRED | NOT required
Conformity Assessment (Art.43) | REQUIRED | NOT required
EU Declaration of Conformity (Art.47) | REQUIRED | NOT required
CE Marking (Art.48) | REQUIRED | NOT required
Registration in EU AI Database (Art.49(1)) | REQUIRED | Art.49(2) only

The difference is substantial. A wrongly self-declared non-high-risk system misses all of Art.9-15 — the entire technical compliance backbone. This is why market surveillance authorities are expected to scrutinise Art.6(3) self-declarations carefully.

The penalty exposure for incorrect self-declaration: If an authority determines that a provider wrongly self-declared under Art.6(3) and did not comply with Art.9-15, the provider faces penalties under Art.99(4) — up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher. (The top tier of €35,000,000 or 7% under Art.99(3) is reserved for prohibited practices.) Supplying incorrect, incomplete, or misleading information to authorities — including an inadequate Art.6(5) assessment — is a lower-tier violation under Art.99(5), but still carries fines of up to €7,500,000 or 1% of turnover.
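Under Art.99(4), the cap for an undertaking is the higher of the fixed amount and the turnover-based amount, which reduces to a one-line sketch:

```python
def art99_4_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Art.99(4): administrative fines of up to EUR 15 000 000 or, for an
    undertaking, up to 3% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)
```

For a provider with €1bn turnover the cap is €30m; below €500m turnover the fixed €15m floor governs.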


Python Implementation: Art.6(3) Self-Assessment Framework

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional
from datetime import date


class AnnexIIICategory(Enum):
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER = 7
    ADMINISTRATION_JUSTICE = 8


class ExemptionCriterion(Enum):
    A_NARROW_PROCEDURAL = auto()
    B_IMPROVES_COMPLETED_HUMAN_ACTIVITY = auto()
    C_PATTERN_DETECTION_NO_REPLACE = auto()
    D_PREPARATORY_TASK = auto()


@dataclass
class Art63Assessment:
    system_name: str
    system_description: str
    annex_iii_category: AnnexIIICategory
    assessment_date: date
    responsible_officer: str

    # Gate 1: Art.6(4) override check
    performs_gdpr_profiling: bool = False

    # Gate 2: Significant harm threshold
    significant_harm_risk: bool = False
    harm_analysis: str = ""
    materially_influences_decision: bool = False

    # Gate 3: Criterion met
    applicable_criterion: Optional[ExemptionCriterion] = None
    criterion_justification: str = ""

    # Human oversight
    human_review_precedes_ai: bool = False
    human_can_override_ai_output: bool = False
    ai_visible_during_human_assessment: bool = False

    # Documentation
    review_schedule_months: int = 12
    notes: str = ""

    def _check_profiling_override(self) -> dict:
        if self.performs_gdpr_profiling:
            return {
                "blocked": True,
                "reason": (
                    "Art.6(4) override: System performs GDPR Art.4(4) profiling of natural "
                    "persons. Art.6(3) exemption is unavailable. System is always high-risk."
                ),
            }
        return {"blocked": False}

    def _check_harm_threshold(self) -> dict:
        if self.significant_harm_risk:
            return {
                "passed": False,
                "reason": (
                    "Significant risk of harm identified. Art.6(3) primary threshold not met. "
                    "System should be classified as high-risk regardless of criterion."
                ),
            }
        if self.materially_influences_decision:
            return {
                "passed": False,
                "reason": (
                    "System materially influences outcome of decision-making. "
                    "Art.6(3) primary threshold not met even if a criterion applies."
                ),
            }
        return {"passed": True}

    def _check_criterion(self) -> dict:
        if self.applicable_criterion is None:
            return {"met": False, "reason": "No exemption criterion identified."}

        if self.applicable_criterion == ExemptionCriterion.A_NARROW_PROCEDURAL:
            if self.materially_influences_decision:
                return {
                    "met": False,
                    "reason": "Criterion (a): Narrow procedural claimed but system materially influences decisions — inconsistent.",
                }

        if self.applicable_criterion == ExemptionCriterion.B_IMPROVES_COMPLETED_HUMAN_ACTIVITY:
            if not self.human_review_precedes_ai:
                return {
                    "met": False,
                    "reason": (
                        "Criterion (b): Human activity must be COMPLETED before AI intervention. "
                        "If AI operates simultaneously or before human, criterion (b) does not apply."
                    ),
                }
            if self.ai_visible_during_human_assessment:
                return {
                    "met": False,
                    "reason": "Criterion (b): AI output is visible during human assessment — human is not working independently first.",
                }

        if self.applicable_criterion == ExemptionCriterion.C_PATTERN_DETECTION_NO_REPLACE:
            if not self.human_can_override_ai_output:
                return {
                    "met": False,
                    "reason": "Criterion (c): Proper human review is required. System must not auto-trigger actions from pattern detection.",
                }

        if self.applicable_criterion == ExemptionCriterion.D_PREPARATORY_TASK:
            if self.materially_influences_decision:
                return {
                    "met": False,
                    "reason": (
                        "Criterion (d): Preparatory tasks must not materially influence substantive decisions. "
                        "Ranking/shortlisting that determines who advances is substantive, not preparatory."
                    ),
                }

        return {"met": True, "criterion": self.applicable_criterion.name}

    def assess(self) -> dict:
        result = {
            "system": self.system_name,
            "category": self.annex_iii_category.name,
            "date": str(self.assessment_date),
            "officer": self.responsible_officer,
            "classification": None,
            "exemption_available": False,
            "blocking_issues": [],
            "warnings": [],
            "recommendation": "",
        }

        # Step 1: Art.6(4) profiling override
        profiling_check = self._check_profiling_override()
        if profiling_check["blocked"]:
            result["classification"] = "HIGH_RISK"
            result["blocking_issues"].append(profiling_check["reason"])
            result["recommendation"] = (
                "System must comply with Art.9-15 obligations. "
                "Art.6(3) exemption blocked by Art.6(4) profiling override."
            )
            return result

        # Step 2: Significant harm threshold
        harm_check = self._check_harm_threshold()
        if not harm_check["passed"]:
            result["classification"] = "HIGH_RISK"
            result["blocking_issues"].append(harm_check["reason"])
            result["recommendation"] = (
                "System should be classified high-risk. Harm threshold analysis indicates "
                "significant risk even if a procedural criterion could otherwise apply."
            )
            return result

        # Step 3: Criterion check
        criterion_check = self._check_criterion()
        if not criterion_check["met"]:
            result["classification"] = "HIGH_RISK"
            result["blocking_issues"].append(criterion_check["reason"])
            result["recommendation"] = (
                "No qualifying Art.6(3) criterion met. Classify as high-risk and comply with Art.9-15."
            )
            return result

        # Self-declaration available
        result["classification"] = "NOT_HIGH_RISK_SELF_DECLARED"
        result["exemption_available"] = True
        result["recommendation"] = (
            f"Art.6(3) exemption appears available via criterion "
            f"{self.applicable_criterion.name}. "
            f"Document this assessment per Art.6(5) before market placement. "
            f"Register in EU AI database under Art.49(2). "
            f"Schedule reassessment in {self.review_schedule_months} months or on material system change."
        )

        if not self.criterion_justification:
            result["warnings"].append(
                "Criterion justification is empty — documentation will be inadequate for Art.6(5)."
            )

        return result

    def generate_documentation(self) -> str:
        assessment = self.assess()
        lines = [
            f"EU AI ACT ART.6(3) SELF-ASSESSMENT DOCUMENTATION",
            f"=" * 50,
            f"System: {self.system_name}",
            f"Category (Annex III): {self.annex_iii_category.name} (Category {self.annex_iii_category.value})",
            f"Assessment Date: {self.assessment_date}",
            f"Responsible Officer: {self.responsible_officer}",
            f"",
            f"CLASSIFICATION: {assessment['classification']}",
            f"",
            f"--- Analysis ---",
            f"",
            f"Art.6(4) Profiling Check: {'BLOCKED — performs GDPR profiling' if self.performs_gdpr_profiling else 'CLEAR — no GDPR profiling'}",
            f"Significant Harm Risk: {'YES' if self.significant_harm_risk else 'NO'} — {self.harm_analysis}",
            f"Materially Influences Decisions: {'YES' if self.materially_influences_decision else 'NO'}",
            f"",
            f"Applicable Criterion: {self.applicable_criterion.name if self.applicable_criterion else 'NONE'}",
            f"Criterion Justification: {self.criterion_justification}",
            f"",
            f"Human Review Precedes AI: {self.human_review_precedes_ai}",
            f"Human Can Override AI Output: {self.human_can_override_ai_output}",
            f"AI Visible During Human Assessment: {self.ai_visible_during_human_assessment}",
            f"",
            f"--- Conclusion ---",
            f"",
            f"Recommendation: {assessment['recommendation']}",
            f"",
        ]
        if assessment["blocking_issues"]:
            lines.append("Blocking Issues:")
            for issue in assessment["blocking_issues"]:
                lines.append(f"  - {issue}")
        if assessment["warnings"]:
            lines.append("Warnings:")
            for warning in assessment["warnings"]:
                lines.append(f"  - {warning}")
        lines.append(f"")
        lines.append(f"Next Review Date: {self.review_schedule_months} months from {self.assessment_date}")
        if self.notes:
            lines.append(f"Notes: {self.notes}")
        return "\n".join(lines)


# Example: CV formatting assistant (likely criterion (a))
cv_formatter = Art63Assessment(
    system_name="CV Structuring Assistant v2",
    system_description=(
        "Extracts structured data fields (name, education, work history) from "
        "unstructured CV text and formats them into a standardised JSON schema "
        "for downstream human recruiter review. Makes no assessments of candidate suitability."
    ),
    annex_iii_category=AnnexIIICategory.EMPLOYMENT,
    assessment_date=date(2026, 4, 22),
    responsible_officer="Chief Compliance Officer",
    performs_gdpr_profiling=False,
    significant_harm_risk=False,
    harm_analysis=(
        "System extracts and reformats existing information. It does not score, rank, "
        "or evaluate candidates. Human recruiter reviews all structured output independently."
    ),
    materially_influences_decision=False,
    applicable_criterion=ExemptionCriterion.A_NARROW_PROCEDURAL,
    criterion_justification=(
        "The system performs a bounded extraction task. Output is structured data identical "
        "in content to the original CV — no inference about suitability, no ranking, "
        "no predictive assessment. Human recruiter makes all hiring judgements independently."
    ),
    human_can_override_ai_output=True,
    review_schedule_months=12,
)

result = cv_formatter.assess()
print(result["classification"])  # NOT_HIGH_RISK_SELF_DECLARED
print(cv_formatter.generate_documentation())


# Example: AI candidate ranker (blocked — profiling)
candidate_ranker = Art63Assessment(
    system_name="TalentScore Ranking Engine",
    system_description=(
        "Analyses candidate applications and historical hiring data to generate "
        "a suitability score and ranked shortlist for recruiter review."
    ),
    annex_iii_category=AnnexIIICategory.EMPLOYMENT,
    assessment_date=date(2026, 4, 22),
    responsible_officer="Chief Compliance Officer",
    performs_gdpr_profiling=True,  # Profiling: predicts performance based on personal data
    significant_harm_risk=True,
    harm_analysis="Ranking influences which candidates advance — direct effect on employment outcomes.",
    materially_influences_decision=True,
    applicable_criterion=ExemptionCriterion.D_PREPARATORY_TASK,
    criterion_justification="Generates ranked shortlist for recruiter.",
)

result = candidate_ranker.assess()
print(result["classification"])  # HIGH_RISK (Art.6(4) override)
print(result["blocking_issues"])

Common Self-Declaration Mistakes

1. Conflating "the human makes the final decision" with Art.6(3):

Many providers argue their system is exempt because a human ultimately approves or rejects the AI's output. This is insufficient. The question is not who makes the final formal decision, but whether the AI materially influences the outcome of decision-making in practice. If the AI generates a recommendation that humans routinely accept without independent assessment, the human is not providing meaningful oversight — they are rubber-stamping.

2. Labelling ranking as "preparatory":

Shortlisting and ranking are among the most common misclassifications. A system that produces a ranked list of candidates, from which only the top-N are considered, makes a substantive determination about who is considered. The fact that a human then interviews the shortlisted candidates does not make the ranking preparatory — it makes it a de facto gatekeeping decision.

3. Failing to address the harm threshold separately:

Providers sometimes focus entirely on meeting one of the four criteria and overlook the overarching "significant risk of harm" requirement. The four criteria are factors "to take into account" — they do not independently establish the exemption. A system performing a narrow procedural task in a high-stakes context (e.g., triage queuing in emergency services) may still pose significant harm risk.

4. Not accounting for Art.6(4) profiling:

The GDPR definition of profiling is broad. Any system that uses personal data to evaluate aspects of a natural person — even if the provider calls it "pattern analysis" or "historical scoring" — may be profiling under Art.4(4) GDPR. Providers should assess Art.6(4) explicitly before concluding a criterion applies.

5. Post-hoc documentation:

Art.6(5) requires documentation before market placement. A provider who deploys first and documents later — particularly in response to a regulator inquiry — faces the inference that the assessment was not genuinely performed ex-ante.


Interaction with Other Obligations

The Art.6(3) exemption removes the Art.9-15 high-risk burden but does not remove all AI Act obligations:

  - Art.50 transparency obligations may still apply (e.g., disclosure where the system interacts directly with natural persons)
  - Art.6(5) documentation and Art.49(2) registration apply precisely because of the self-declaration
  - Art.4 AI literacy obligations apply regardless of risk classification
  - A material change to the system requires reassessment — a modified system may no longer qualify for the exemption


Applicability Timeline

02.08.2024   EU AI Act enters into force (Art.113 — Day 0)
02.02.2026   Commission guidelines on Art.6(3) due (Art.6(6) deadline)
02.08.2026   Art.6(2) Annex III high-risk obligations fully applicable
             → Art.6(3) self-declarations must be complete before this date
             → Art.49(2) registration required for self-declared exempt systems
02.08.2027   Art.6(1) Annex I (safety components) fully applicable
             → Art.6(3) not available for Art.6(1) systems

If your Annex III system is deployed before 2 August 2026 without either completing Art.9-15 compliance or completing an Art.6(3) self-declaration, it is in violation from that date.
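The timeline check in the last paragraph can be expressed directly. A sketch — the date constant reflects the Art.6(2) applicability date stated above:

```python
from datetime import date

ANNEX_III_APPLICABILITY = date(2026, 8, 2)  # Art.6(2) obligations fully applicable


def annex_iii_compliant(check_date: date,
                        art9_15_complete: bool,
                        art63_self_declared: bool) -> bool:
    """From 2 August 2026, an Annex III system must either have completed
    Art.9-15 compliance or hold a documented, registered Art.6(3)
    self-declaration. Before that date, the obligations are not yet applicable."""
    if check_date < ANNEX_III_APPLICABILITY:
        return True
    return art9_15_complete or art63_self_declared
```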

