EU AI Act — Plain-English Snapshot
Last updated: 2025-09-04
What the EU AI Act is
- The world's first horizontal AI law. Risk-based tiers: prohibited, high-risk, limited-risk, and minimal-risk.
- General-purpose AI (GPAI) has its own transparency obligations.
- Applies to providers and deployers (you, if you use AI in your products/processes in the EU).
Key timelines (at a glance)
- Prohibited practices apply 6 months after entry into force (from 2 February 2025).
- GPAI transparency obligations apply at 12 months (from 2 August 2025).
- Most high-risk obligations apply at 24 months (mid-2026); high-risk systems embedded in products covered by existing EU product legislation get an extended 36 months (mid-2027).
What this means for you
- Classify your use cases (are they high-risk?). If so, you'll need risk management, data governance, technical documentation, monitoring, and human oversight.
- Keep technical + organisational evidence, not just policy PDFs.
- Expect requests for logs and exportable audit packs.
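The "exportable audit pack" expectation above can be sketched as a small bundler that collects decision logs and system metadata into one reviewable document. Everything here (the `build_audit_pack` name, the record fields, the JSON layout) is an illustrative assumption, not a format the Act prescribes:

```python
import json
from datetime import datetime, timezone

def build_audit_pack(system_name, risk_tier, decisions):
    """Bundle decision logs and metadata into a single exportable
    JSON document (illustrative structure, not a legal template)."""
    return json.dumps({
        "system": system_name,
        "risk_tier": risk_tier,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "decision_count": len(decisions),
        "decisions": decisions,  # e.g. one record per automated decision
    }, indent=2)

# Example: two logged decisions for a hypothetical screening tool
pack = build_audit_pack(
    "cv-screening-v2", "high-risk",
    [{"id": 1, "outcome": "shortlisted", "human_reviewed": True},
     {"id": 2, "outcome": "rejected", "human_reviewed": True}],
)
```

The point is less the format than the habit: if each system can emit something like this on demand, a regulator request becomes an export, not a scramble.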
Practical checklist (EU)
- Map your AI systems and classify risk; identify GPAI dependencies.
- Stand up risk management and post-market monitoring for high-risk uses.
- Maintain datasets + model documentation (traceability).
- Ensure meaningful human oversight (human-in-the-loop where the use case warrants it).
- Prepare conformity assessments where required; keep your technical file ready.
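As a first pass at the "map and classify" step, a team might keep a machine-readable inventory like the sketch below. The tier names follow the Act's four categories; the system names and the `needs_conformity_assessment` helper are hypothetical, and real classification is a legal judgment, not a lookup:

```python
# Minimal AI-system inventory: map each use case to a risk tier
# and flag GPAI dependencies (illustrative, not legal advice).
TIERS = ("prohibited", "high-risk", "limited-risk", "minimal-risk")

inventory = [
    {"system": "cv-screening-v2", "tier": "high-risk",    "uses_gpai": True},
    {"system": "support-chatbot", "tier": "limited-risk", "uses_gpai": True},
    {"system": "spam-filter",     "tier": "minimal-risk", "uses_gpai": False},
]

def needs_conformity_assessment(entry):
    """High-risk systems carry the heavy obligations
    (risk management, documentation, human oversight)."""
    assert entry["tier"] in TIERS, f"unknown tier: {entry['tier']}"
    return entry["tier"] == "high-risk"

high_risk = [e["system"] for e in inventory if needs_conformity_assessment(e)]
# high_risk -> ["cv-screening-v2"]
```

Even a list this crude answers the first questions an auditor (or your board) will ask: what AI do we run, which of it is high-risk, and where do GPAI models sit in the stack.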
🍋 Use compliance to your advantage: the documentation you create feeds real-time governance dashboards and board reporting later.
(These snapshots reflect the current direction: UK regulator-led principles; EU AI Act phased application. We'll refresh copy as rules evolve.)
Sources you can cite in your blog/resources: EU AI Office timeline; UK White Paper & ICO guidance; BSI 42006:2025.