EU AI Act (Regulation (EU) 2024/1689) operational compliance for compliance teams. Three Article-level decisions: (1) What's the risk tier of this AI system — prohibited (Art. 5), high-risk (Art. 6 + Annex III), limited-risk (Art. 50), or minimal-risk? (2) For high-risk systems, what's the Article 43 conformity assessment route (Module A internal control vs Module H full QMS + notified body) and what goes in the Annex IV technical documentation? (3) Per organizational role (provider / deployer / importer / distributor / authorized representative), what are the active obligations and deadlines? Use during AI system intake review, when planning conformity assessment, or when scoping deployer obligations. Cites Articles + Annexes for every output. NOT executive AI strategy (see chief-ai-officer-advisor). NOT a legal substitute.
```bash
npx claudepluginhub ciciliaeth/claude-skills --plugin ra-qm-skills
```

How this skill is triggered: by the user, by Claude, or both.

Slash command:

`/ra-qm-skills:eu-ai-act-specialist`

The summary Claude sees in its skill listing (used to decide when to auto-load this skill):

Article-cited operational skill for Regulation (EU) 2024/1689. **Three decisions, no executive AI strategy:**
This skill is NOT chief-ai-officer-advisor. CAIO decides whether to ship the AI feature at all and accepts business risk. This skill operates the conformity work that turns "we'll ship it" into Article-compliant artefacts.
This skill is NOT a legal substitute. The Act is binding regulation. For novel cases (Is this a GPAI model? Does Article 6(2) carve-out apply? Is fine-tuning a foundation model "substantial modification"?), engage qualified outside counsel. The skill cites Articles + Annexes and uses Commission/EDPB published interpretation but does not provide binding legal opinion.
This skill is NOT GDPR. Many AI systems also trigger GDPR (training data, output processing). See ra-qm-team/skills/gdpr-dsgvo-expert/ for DPIA + lawful basis work. The Acts interact (Recital 10, Article 10 for high-risk training data).
EU AI Act, EU AI Regulation, Regulation 2024/1689, AI Act, AI regulation Europe, high-risk AI, prohibited AI, Article 5 AI Act, Article 6 AI Act, Article 9 AI Act, Article 50 AI Act, Annex III, Annex IV, conformity assessment, CE marking AI, notified body AI, Module A, Module H, technical documentation AI, post-market monitoring AI, fundamental rights impact assessment, FRIA, GPAI, general-purpose AI model, systemic risk GPAI, AI Office, ENISA AI, EDPB AI, AI Act timeline, AI Act penalties, EU AI Act provider, EU AI Act deployer, EU AI Act importer, EU AI Act distributor, EU AI Act fines, AI literacy
```bash
# Decision A: Classify an AI system per the Act
python scripts/ai_system_risk_classifier.py                     # embedded 5-system sample
python scripts/ai_system_risk_classifier.py path/to/systems.json

# Decision B: Conformity assessment plan for a high-risk system
python scripts/conformity_assessment_planner.py                 # embedded high-risk sample
python scripts/conformity_assessment_planner.py path/to/system.json

# Decision C: Obligation tracker per organizational role
python scripts/ai_act_obligation_tracker.py                     # embedded sample (provider + deployer)
python scripts/ai_act_obligation_tracker.py path/to/roles.json
```
The framework: The Act takes a risk-based approach (Recital 26). Each AI system falls into exactly one of four tiers:
| Tier | Source | Examples | Obligations |
|---|---|---|---|
| Prohibited | Article 5 | Social scoring; emotion recognition in workplace/education; subliminal manipulation; real-time public biometrics by law enforcement (with narrow exceptions) | Cannot be placed on market or used (penalties up to EUR 35M / 7% turnover) |
| High-risk | Article 6(2) + Annex III; Article 6(1) + Annex I | CV-screening, credit scoring, biometric categorisation, safety components of regulated products | Articles 8–17 (provider) + Article 26 (deployer); conformity assessment; CE marking |
| Limited-risk (transparency) | Article 50 | Chatbots, deepfakes, emotion recognition outside Article 5 contexts | Transparency disclosures to natural persons |
| Minimal-risk | Default | Spam filters, video-game AI, inventory forecasters | None under the Act (voluntary codes of conduct, Article 95) |
Critical carve-outs (Article 6(3)): an Annex III system is NOT high-risk if it (a) performs a narrow procedural task, (b) improves the result of previously completed human activity, (c) detects decision-making patterns without replacing human assessment, (d) performs a preparatory task. Caveat: profiling of natural persons is always Annex III high-risk regardless of carve-outs.
Run ai_system_risk_classifier.py with system characteristics. The tool checks Article 5 prohibitions first, then Annex III categories, then Article 6(3) carve-outs, then Article 50 transparency, then minimal-risk default.
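The check order can be sketched in a few lines of Python. This is a hedged simplification, not the actual logic of ai_system_risk_classifier.py: the field names (`practice`, `area`, `carve_out`, `profiling`, `kind`) and the category sets are illustrative, and the real Article 5 / Annex III lists are far longer.

```python
# Sketch of the tiering order: Article 5 prohibitions first, then Annex III
# minus the Article 6(3) carve-outs, then Article 50 transparency, then the
# minimal-risk default. All field names and category sets are illustrative.

PROHIBITED_PRACTICES = {"social_scoring", "workplace_emotion_recognition",
                        "subliminal_manipulation"}            # Article 5 (non-exhaustive)
ANNEX_III_AREAS = {"biometrics", "employment", "credit_scoring",
                   "law_enforcement", "education"}            # Annex III (non-exhaustive)
CARVE_OUTS = {"narrow_procedural", "improves_prior_human_work",
              "pattern_detection_only", "preparatory_task"}   # Article 6(3)(a)-(d)
TRANSPARENCY_ONLY = {"chatbot", "deepfake_generator"}         # Article 50 examples

def classify(system: dict) -> str:
    if system.get("practice") in PROHIBITED_PRACTICES:
        return "prohibited (Art. 5)"
    if system.get("area") in ANNEX_III_AREAS:
        # Article 6(3) carve-outs never apply when the system profiles natural persons
        if system.get("carve_out") in CARVE_OUTS and not system.get("profiling", False):
            pass  # carved out: fall through to the lower tiers
        else:
            return "high-risk (Art. 6(2) + Annex III)"
    if system.get("kind") in TRANSPARENCY_ONLY:
        return "limited-risk (Art. 50)"
    return "minimal-risk (default)"

print(classify({"practice": "social_scoring"}))               # prohibited (Art. 5)
print(classify({"area": "credit_scoring", "profiling": True}))
print(classify({"kind": "chatbot"}))
```

Note the profiling override on the carve-out branch: a credit-scoring system that profiles natural persons stays high-risk even if it also looks like a preparatory task.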
See references/eu_ai_act_titles.md for the full Article-by-Article walkthrough.
The framework (Article 43 + Annexes VI/VII): for high-risk AI systems, the provider must demonstrate conformity before placing the system on the market. Two routes: internal control per Annex VI (the Module A-style route, no notified body) or full quality-management-system assessment per Annex VII (the Module H-style route, involving a notified body).
The required artifacts follow Annex IV (technical documentation). Run conformity_assessment_planner.py to select the Module and produce the Annex IV checklist for a given high-risk system.
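The route choice itself is a small decision table. The sketch below is a hedged reading of Article 43(1): biometric systems (Annex III point 1) go to a notified body under Annex VII unless harmonised standards or common specifications were applied in full, in which case the provider may choose; the other Annex III points use internal control. It is a simplification of whatever conformity_assessment_planner.py actually does, and the function name and signature are assumptions.

```python
# Hedged sketch of the Article 43(1) route choice for Annex III systems.
# Point numbers refer to Annex III; consult the Act's text for edge cases
# (e.g. Annex I products follow their sectoral procedure per Article 43(3)).

def conformity_route(annex_iii_point: int, harmonised_standards_applied: bool) -> str:
    if annex_iii_point == 1:  # biometrics (Annex III point 1)
        if harmonised_standards_applied:
            # Provider may choose either route when standards were applied in full
            return "Annex VI (internal control) or Annex VII (notified body), provider's choice"
        return "Annex VII (QMS assessment by a notified body)"
    # Annex III points 2-8: internal control
    return "Annex VI (internal control)"

print(conformity_route(4, True))   # e.g. employment / CV-screening -> internal control
```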
See references/high_risk_systems_annex_iii.md for which systems require which conformity route.
The framework (Articles 16, 22, 23, 24, 25, 26): the Act distinguishes provider obligations (most) from downstream-actor obligations (deployer, importer, distributor, authorized representative). A single company can play multiple roles simultaneously.
| Role | Primary Articles | Key obligations |
|---|---|---|
| Provider (Article 3(3)) | 8–17, 47, 49, 72 | Conformity assessment; CE marking; risk management; data governance; technical documentation; post-market monitoring; serious incident reporting (Article 73) |
| Deployer (Article 3(4)) | 26 | Use according to instructions; human oversight; input data quality; log retention (Article 26(6)); inform workers (Article 26(7)); FRIA if public-sector/essential-services body (Article 27) |
| Importer (Article 3(6)) | 23 | Verify conformity assessment was carried out, CE marking affixed, and technical documentation available |
| Distributor (Article 3(7)) | 24 | Verify CE marking + documentation before making available |
| Authorized representative (Article 3(5)) | 22 | Non-EU providers must appoint one; the representative performs the provider tasks specified in the mandate (verify and keep documentation available, cooperate with authorities) |
Important: under Article 25, a deployer who substantially modifies a high-risk AI system, or places it on the market under their own name, becomes a provider and inherits provider obligations.
Run ai_act_obligation_tracker.py with the roles JSON to produce a deadline-sorted obligation matrix.
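A deadline-sorted obligation matrix is essentially a filter-and-sort over (role, article, deadline) tuples. The sketch below illustrates the idea with a handful of obligations and the Act's published application dates; the real roles.json schema and obligation list of ai_act_obligation_tracker.py may differ.

```python
# Illustrative deadline-sorted obligation matrix. The obligation list is a
# tiny sample; dates follow the Act's phase-in (Article 113).
from datetime import date

OBLIGATIONS = [
    {"role": "provider", "article": "Art. 5",     "task": "cease prohibited practices",        "deadline": date(2025, 2, 2)},
    {"role": "provider", "article": "Art. 4",     "task": "AI literacy measures",              "deadline": date(2025, 2, 2)},
    {"role": "provider", "article": "Arts. 8-17", "task": "high-risk requirements + CE marking", "deadline": date(2026, 8, 2)},
    {"role": "deployer", "article": "Art. 26",    "task": "deployer duties (oversight, logs)", "deadline": date(2026, 8, 2)},
]

def matrix_for(roles: set) -> list:
    """Filter obligations to the roles a company plays, earliest deadline first."""
    return sorted((o for o in OBLIGATIONS if o["role"] in roles),
                  key=lambda o: o["deadline"])

for row in matrix_for({"provider", "deployer"}):
    print(row["deadline"], row["role"], row["article"], row["task"])
```

Because a single company can play multiple roles (see above), the tracker takes a set of roles, not a single one.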
See references/gpai_obligations.md for the separate GPAI Articles 51–55 track.
Goal: classify, identify obligations, scope the conformity work.
```bash
# 1. Document system characteristics: purpose, users, data, autonomy, deployment context
# 2. Run classifier
python scripts/ai_system_risk_classifier.py systems.json
# 3. If high-risk: run planner
python scripts/conformity_assessment_planner.py system.json
# 4. Identify org roles played (provider / deployer / both)
python scripts/ai_act_obligation_tracker.py roles.json
# 5. Cross-check with GDPR DPIA (gdpr-dsgvo-expert) if personal data
# 6. Cross-check with ISO 42001 AIMS evidence (compliance-team-iso42001)
# 7. Output: classification memo + conformity plan + obligation list
```
Goal: assemble the Annex IV pack before conformity assessment.
```bash
# 1. Run conformity assessment planner to get the checklist
python scripts/conformity_assessment_planner.py system.json
# 2. Assemble: system description, architecture, training data, validation, risk management
# 3. Reference ISO 42001 evidence where it satisfies Annex IV items
# 4. Reference ISO 27001 evidence for security controls
# 5. Run Article 9 risk management lifecycle
# 6. Sign EU declaration of conformity (Article 47) AFTER assessment passes
# 7. Affix CE marking (Article 48)
# 8. Register in EU database (Article 71) — high-risk Annex III systems
```
Goal: confirm all active obligations are in place before EU placement.
```bash
# 1. Confirm classification still correct (re-run classifier if system changed)
# 2. Confirm conformity assessment completed (if high-risk)
# 3. Confirm transparency requirements (Article 50) — for chatbots, deepfakes, emotion detection
# 4. Confirm post-market monitoring system (Article 72) is live
# 5. Confirm serious-incident reporting procedure (Article 73) is documented
# 6. For deployers: FRIA done (Article 27, if applicable); workers informed (Article 26(7))
# 7. For GPAI: Articles 51-55 obligations met if applicable
```
Goal: re-verify classifications + obligations as the Act phases in.
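The phase-in schedule can be kept as a small lookup table for re-verification runs. The dates below are the Act's published application dates per Article 113 (confirm against the Official Journal text); the helper function is an illustrative sketch, not part of the skill's scripts.

```python
# Key application dates of Regulation (EU) 2024/1689 (Article 113),
# as an illustrative re-verification schedule.
from datetime import date

PHASE_IN = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions (Art. 5) + AI literacy (Art. 4) apply",
    date(2025, 8, 2): "GPAI obligations (Arts. 51-55), governance, and penalties apply",
    date(2026, 8, 2): "General application, incl. Annex III high-risk systems",
    date(2027, 8, 2): "High-risk systems under Article 6(1) / Annex I apply",
}

def next_milestones(today: date) -> list:
    """Return the phase-in milestones that are still ahead of (or on) a given date."""
    return [f"{d.isoformat()}: {label}"
            for d, label in sorted(PHASE_IN.items()) if d >= today]

for line in next_milestones(date(2026, 1, 1)):
    print(line)
```

Re-running the classifier and obligation tracker shortly before each remaining milestone keeps the obligation matrix aligned with what is actually enforceable on that date.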
**Bottom Line:** [one sentence — classification + most-significant obligation]
**Article Citation:** [Article + paragraph number; do not paraphrase without cite]
**The Decision:** [one of: classify | conformity-route | obligation-scope]
**The Evidence:** [Article + Annex references; classification confidence]
**How to Act:** [3 concrete next steps with owner + deadline aligned to phasing]
**Your Decision:** [the call for compliance officer or legal counsel — risk-class disputes, novel cases, GPAI threshold determinations]
- ../../skills/gdpr-dsgvo-expert/ — GDPR DPIA + lawful basis (most AI systems also trigger GDPR)
- ../../../compliance-team-iso42001/ — ISO 42001 AIMS (voluntary management system that satisfies parts of the Article 17 QMS for providers)
- ../../skills/information-security-manager-iso27001/ — ISO 27001 for cybersecurity requirements (Article 15)
- ../../skills/risk-management-specialist/ — ISO 14971 risk management (referenced for safety-component AI under Article 6(1))
- ../../skills/mdr-745-specialist/ — MDR 2017/745 (medical-device AI overlap)
- ../../../../compliance-os/ — Meta-orchestrator for multi-framework programs
- ../../../../c-level-advisor/chief-ai-officer-advisor/ — Executive AI strategy

Version: 1.0.0 · Status: Production Ready