The EU AI Act — What It Means for Your Organization
The EU AI Act is the world's first comprehensive AI regulation. It applies to any organization operating in the EU that develops, deploys, or uses AI systems. Most obligations take effect August 2, 2026.
The world's first comprehensive AI law.
The EU AI Act (Regulation 2024/1689) establishes a harmonized legal framework for artificial intelligence across the European Union. It entered into force on August 1, 2024, with obligations phasing in over a three-year period. Unlike sector-specific guidelines, the AI Act applies horizontally to any organization that develops, deploys, imports, or distributes AI systems within the EU market.
The Act takes a risk-based approach: AI systems are classified into four tiers based on the potential harm they pose. Unacceptable-risk systems are banned outright. High-risk systems face strict requirements including conformity assessments, human oversight, and documentation. Limited and minimal-risk systems have lighter obligations, primarily around transparency.
For most organizations, the practical impact centers on deployer obligations. If your employees use AI tools like ChatGPT, Copilot, Claude, or Midjourney in their work, your organization is an AI deployer. That means you have concrete obligations around AI literacy, transparency, risk management, and record keeping — regardless of whether you built the AI yourself.
A risk-based framework.
Unacceptable Risk
Social scoring by governments, real-time biometric surveillance in public spaces, AI that manipulates people's behavior to cause harm, and emotion recognition in workplaces and schools.
High Risk
AI used in hiring and recruitment, credit scoring, law enforcement, critical infrastructure, education assessment, and migration management. Requires conformity assessments, human oversight, and full documentation.
Limited Risk
Chatbots, AI-generated content, deepfakes, and emotion recognition systems. Must disclose AI involvement to users and clearly label AI-generated outputs.
Minimal Risk
Spam filters, AI in video games, inventory management systems. Free use with no specific requirements, though general monitoring and AI literacy are recommended.
Obligations that apply to most organizations.
AI Literacy
Ensure all staff who use or oversee AI tools have sufficient understanding of AI capabilities, limitations, and risks. This applies to every organization using AI, regardless of risk tier.
Transparency
Inform people when they interact with AI. Label AI-generated content. Ensure employees and external parties know when AI is involved in decisions that affect them.
Risk Management
Assess and document AI risks. Maintain an inventory of AI systems in use across your organization. Classify tools by risk tier and ensure appropriate controls are in place.
Human Oversight
Ensure human review of AI decisions, especially in high-risk contexts. Humans must be able to understand, interpret, and override AI outputs when necessary.
Incident Reporting
Report serious incidents involving high-risk AI systems to the relevant market surveillance authority. Maintain internal processes for detecting, documenting, and escalating AI-related incidents.
Record Keeping
Maintain logs of AI system usage for at least six months. Keep records of policy decisions, risk assessments, and any changes to your AI tool inventory available for regulatory inspection.
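To make the risk-management and record-keeping duties above concrete, an AI tool register can be modeled as a simple inventory keyed by risk tier. A minimal sketch — the tier names mirror the Act's four categories; the tool names, fields, and entries are purely illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment, oversight, docs
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements

@dataclass
class AITool:
    name: str
    vendor: str
    tier: RiskTier
    approved: bool
    assessed_on: str  # ISO date of the last documented risk assessment

# Hypothetical inventory entries for common workplace tools.
inventory = [
    AITool("ChatGPT", "OpenAI", RiskTier.LIMITED, approved=True,
           assessed_on="2025-11-01"),
    AITool("CV-Screener", "ExampleCorp", RiskTier.HIGH, approved=False,
           assessed_on="2025-10-15"),
]

# High-risk tools need the full control set before use is approved.
needs_review = [t.name for t in inventory
                if t.tier is RiskTier.HIGH and not t.approved]
print(needs_review)  # ['CV-Screener']
```

Keeping the tier and assessment date on each entry is what turns a tool list into auditable evidence: a regulator can see not just what is in use, but when it was last classified.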
Key dates you need to know.
The main compliance deadline arrives on August 2, 2026, when high-risk AI obligations, deployer duties, and penalties all take effect. The question isn't whether you'll comply — it's whether you can prove it when someone asks.
The fines are real.
Prohibited AI practices — social scoring, manipulative AI, unauthorized biometric surveillance. Fines of up to EUR 35M or 7% of global annual turnover.
Other AI Act violations — failing to meet deployer obligations, inadequate risk management, missing documentation. Fines of up to EUR 15M or 3% of global annual turnover.
Supplying incorrect, incomplete, or misleading information to national authorities or notified bodies. Fines of up to EUR 7.5M or 1% of global annual turnover.
GDPR fines (up to EUR 20M / 4% turnover) apply independently and stack on top of AI Act penalties. A single incident involving AI and personal data can trigger both frameworks simultaneously.
Built for AI Act compliance from day one.
AI Tool Inventory
Know which AI tools your organization actually uses. Automatic discovery across 55+ AI services — no surveys, no guesswork. A living inventory updated in real time.
Policy Enforcement
Set allow, warn, or block per tool based on your risk classification. New tools default to a coaching notice. Central policy management for the whole organization.
Usage Evidence
Automatic audit trail of all AI tool interactions — timestamps, policy decisions, pseudonymized user IDs. Retained for at least six months, ready for inspection.
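Pseudonymizing user IDs in an audit trail is commonly done with a keyed hash, so the same user always maps to the same opaque token without the log exposing their identity. A minimal sketch of this pattern — the secret key and the log-entry shape are assumptions for illustration, not the product's actual scheme:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: an org-held secret

def pseudonymize(user_id: str) -> str:
    """Stable, non-reversible token for a user ID (HMAC-SHA256, truncated)."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

# A hypothetical audit-log entry: no raw identity, but consistent per user,
# so usage patterns remain analyzable across the retention window.
entry = {
    "timestamp": "2026-08-02T09:15:00Z",
    "tool": "ChatGPT",
    "decision": "allow",
    "user": pseudonymize("alice@example.com"),
}
print(json.dumps(entry))
```

A keyed hash (rather than a plain one) matters here: without the secret, an attacker cannot confirm a guessed email address by hashing it and matching it against the log.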
Compliance Export
One-click evidence bundles with SHA-256 checksums. CSV and JSON exports of your full AI governance record — inventory, policies, usage logs, and attestations.
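A checksum lets an auditor verify that an evidence bundle was not altered after export. A sketch of how a SHA-256 checksum for a JSON export could be produced and re-verified — the record contents are illustrative:

```python
import hashlib
import json

def sha256_checksum(data: bytes) -> str:
    """Hex digest used to verify export integrity."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical export: serialize records deterministically, then hash.
# sort_keys and fixed separators ensure the same records always
# produce byte-identical output, and therefore the same digest.
records = [{"tool": "ChatGPT", "decision": "allow", "count": 42}]
payload = json.dumps(records, sort_keys=True, separators=(",", ":")).encode()
digest = sha256_checksum(payload)

# Verification on the auditor's side: recompute and compare.
assert sha256_checksum(payload) == digest
print(digest)
```

Deterministic serialization is the key design choice: if key order or whitespace varied between exports, identical records would yield different digests and the checksum would be useless as evidence.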
Employee Coaching
Inform employees about approved AI tools and policies. Friendly coaching notices guide users to approved alternatives instead of silently blocking access.
Real-time Monitoring
Dashboard visibility into AI usage patterns across your organization. See which tools are used, how often, and whether employees follow your policies — all in real time.