EU AI Act organizational compliance reference. Use for deadline-based checklists, AI literacy (Art. 4), AI inventory, high-risk QMS requirements, and conformity assessment readiness.
This skill uses the workspace's default tool permissions.
You help an organization build and assess compliance with the EU AI Act (Regulation (EU) 2024/1689). You focus on practical, evidence-based governance that can withstand audit and regulatory inspection.
Important: You support compliance work but do not provide legal advice. Confirm obligations and interpretations with qualified counsel.
1. Complete compliance checklist organized by deadline
Use this as a posture checklist. Mark each item as COMPLIANT, PARTIALLY COMPLIANT, NON-COMPLIANT, or NOT APPLICABLE and capture evidence.
Tier A. Feb 2025 (already passed)
Prohibited practices screening and governance basics:
Identify and document all AI use cases and screen for prohibited practices
Implement a blocking and escalation process for suspected prohibited practices
Establish governance ownership, steering group, and escalation path
Implement an AI literacy program (Art. 4) with role-based training and a refresh cadence
Update policies: acceptable use, procurement, secure use of AI, and human oversight expectations
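The posture-tracking approach above can be sketched as a simple data structure for an internal tracker. This is a minimal illustration, not a mandated format; all names (ChecklistItem, open_gaps, the example requirements) are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    COMPLIANT = "compliant"
    PARTIALLY_COMPLIANT = "partially compliant"
    NON_COMPLIANT = "non-compliant"
    NOT_APPLICABLE = "not applicable"

@dataclass
class ChecklistItem:
    tier: str                  # e.g. "A" for the Feb 2025 obligations
    requirement: str           # checklist wording from the tier lists above
    status: Status = Status.NON_COMPLIANT
    evidence: list[str] = field(default_factory=list)  # links to policies, tickets, minutes

def open_gaps(items: list[ChecklistItem]) -> list[ChecklistItem]:
    """Items that still need remediation (partially or fully non-compliant)."""
    gaps = {Status.NON_COMPLIANT, Status.PARTIALLY_COMPLIANT}
    return [i for i in items if i.status in gaps]
```

Recording evidence alongside each status keeps the checklist audit-ready rather than self-declared.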
Tier B. Aug 2025 (already passed)
GPAI- and foundation-model-related controls, where applicable:
Identify all GPAI or foundation model usage (API, hosted, open-weights, in-house)
Procurement controls: require provider transparency on capabilities, limitations, and intended use
Contractual controls: incident notification, security measures, data use restrictions, sub-processor transparency
Integration controls: input and output safeguards, logging, evaluation, and monitoring
Copyright and data sourcing assurance where relevant
Internal guidance on prompt and data handling (no sensitive data unless approved)
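The procurement and contractual controls above can be tracked per GPAI usage. The sketch below is illustrative only, assuming a hypothetical internal record (GpaiUsage and its fields are not defined by the Act):

```python
from dataclasses import dataclass

# Deployment modes from the inventory step above
DEPLOYMENT_MODES = {"api", "hosted", "open-weights", "in-house"}

@dataclass
class GpaiUsage:
    system: str               # hypothetical internal system name
    provider: str
    mode: str                 # one of DEPLOYMENT_MODES
    transparency_docs: bool   # provider capability/limitation docs on file
    incident_clause: bool     # contractual incident-notification clause in place

    def procurement_gaps(self) -> list[str]:
        """Controls from the Tier B checklist that are not yet evidenced."""
        gaps = []
        if self.mode not in DEPLOYMENT_MODES:
            gaps.append("unknown deployment mode")
        if not self.transparency_docs:
            gaps.append("missing provider transparency documentation")
        if not self.incident_clause:
            gaps.append("missing incident-notification clause")
        return gaps
```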
Tier C. Aug 2026 (upcoming)
High-risk AI under Annex III readiness. Focus on deployer obligations, and on provider obligations if you build or substantially modify systems.
Inventory and classification:
AI inventory complete, updated, and owned
Risk classification process documented (prohibited, high-risk, limited-risk, minimal-risk)
Each system mapped to role: provider, deployer, importer, distributor
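The classification step above can be supported by a first-pass triage helper. This is a crude sketch with hypothetical field names; actual classification against Art. 5, Annex I, and Annex III requires legal review, not a lookup:

```python
def classify(system: dict) -> str:
    """First-pass risk-tier triage for inventory records (illustrative only).
    Field names are hypothetical; outcomes must be confirmed by counsel."""
    if system.get("prohibited_practice"):
        return "prohibited"
    if system.get("annex_iii_use_case") or system.get("annex_i_product"):
        return "high-risk"
    if system.get("interacts_with_people"):  # e.g. chatbots: transparency duties
        return "limited-risk"
    return "minimal-risk"
```

A helper like this is useful for flagging systems for review, not for issuing final classifications.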
Risk management and monitoring:
Risk management process for AI, including fundamental rights considerations
Post-deployment monitoring plan, drift monitoring, and incident reporting
Change management for model updates and prompts
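One simple form of the drift monitoring mentioned above is a threshold on how far a recent metric window departs from its baseline. A minimal sketch, assuming you log a per-period quality score (the z-score approach is one choice among many):

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float], threshold: float = 3.0) -> bool:
    """Flag when the recent metric mean departs from the baseline mean by more
    than `threshold` baseline standard deviations (a crude z-score check)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > threshold
```

Alerts like this feed the incident-reporting and change-management processes rather than replacing them.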
Data governance:
Data governance for training, validation, and testing datasets
Data quality checks, representativeness assessment, bias testing and documentation
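The representativeness assessment above can start from a comparison of group shares in the dataset against a reference population. A minimal sketch with hypothetical group labels and a simple absolute tolerance (real bias testing needs domain-appropriate metrics):

```python
def representativeness_gaps(dataset_counts: dict[str, int],
                            population_share: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Groups whose share in the dataset deviates from the reference
    population share by more than `tolerance` (absolute difference)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, expected in population_share.items():
        observed = dataset_counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = observed - expected
    return gaps
```

Documented outputs of checks like this become the evidence trail the checklist asks for.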
Technical documentation and logging:
Technical documentation and system documentation available and versioned
Logging and traceability sufficient for incident investigation and audit
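For the logging and traceability item above, the core property is structured, append-only records that can reconstruct what happened. A minimal sketch; the field names are illustrative, not mandated by the Act:

```python
import json
import time
import uuid

def audit_record(system_id: str, event: str, actor: str, payload: dict) -> str:
    """One traceable log line per AI-relevant event, serialized as JSON
    for an append-only store (field names are illustrative)."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "event": event,   # e.g. "model_update", "human_override", "incident"
        "actor": actor,
        "payload": payload,
    }
    return json.dumps(record, sort_keys=True)
```

Consistent event names and stable system identifiers are what make these logs usable in an incident investigation.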
Human oversight and instructions:
Human oversight measures designed, documented, and tested
Instructions for use and user training for operators
Quality management system (QMS) and provider readiness:
If acting as provider, QMS aligned to AI Act requirements
Conformity assessment approach defined and scheduled
Supplier and component governance (data, models, tools)
Tier D. Aug 2027
High-risk AI under Annex I (regulated products) readiness:
Integration with sector product compliance (medical device, machinery, vehicles, aviation)
Coordination with notified bodies and conformity assessment bodies
QMS integration with existing ISO and product quality systems
Traceability from requirements to tests and post-market monitoring
2. AI literacy requirements (Art. 4) with practical implementation guidance
Art. 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy among staff and other persons operating or using AI systems on their behalf, accounting for their technical knowledge, experience, and the context of use. Implementation guidance:
Define roles: general staff, power users, developers, reviewers, compliance, procurement
Create role-based training:
General awareness: what AI can and cannot do, data handling, policy dos and don'ts