On August 2, 2026, the EU AI Act's high-risk system obligations become enforceable. Penalties: up to 35 million EUR or 7% of global annual revenue — whichever is higher. If you're a security leader at a company that uses AI in employment, credit, education, or critical infrastructure, this deadline is yours to own.
The proposed Digital Omnibus delay to December 2027 is not a safe bet. The proposal still has to clear the European Parliament and the Council, and the general-purpose AI obligations (Articles 51 and following) have applied since August 2025. Betting on a postponement is betting your compliance posture on legislation that may never pass.
The question isn't whether the EU AI Act applies to you. It's whether you're a provider or a deployer — and the obligations are very different.
Does the AI Act Apply to Your Organization?
Most mid-market companies are deployers, not providers. A provider builds the AI system. A deployer uses it. If your company uses ChatGPT Enterprise for customer service drafting, you're a deployer. If you built a custom model that scores credit applications, you might be a provider. The distinction matters because deployer obligations are lighter — but they're not optional.
The high-risk classification covers specific use cases, not specific tools. Using GPT-4 to write marketing copy is not high-risk. Using GPT-4 to screen job applicants is. Same model, different risk classification. Your compliance obligation depends on how you use AI, not which AI you use. The high-risk categories most relevant to mid-market deployers:
- Employment: AI-assisted hiring, performance evaluation, task allocation, promotion decisions
- Credit: AI-driven creditworthiness assessment, loan pricing, insurance risk scoring
- Education: AI-based student assessment, admission decisions, proctoring
- Law enforcement: AI for risk assessment, evidence analysis, behavioral profiling
- Critical infrastructure: AI managing energy grids, water supply, transportation networks
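To make the use-case test concrete, here's a minimal classification sketch in Python. The tier names, area keywords, and the `classify` helper are illustrative assumptions, not language from the Act; a real determination needs legal review against Annex III.

```python
# Minimal sketch: classify AI *use cases* (not tools) by AI Act risk tier.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Abbreviated stand-ins for the Annex III areas listed above.
HIGH_RISK_AREAS = {
    "employment", "credit", "education",
    "law_enforcement", "critical_infrastructure",
}

@dataclass
class AIUseCase:
    system: str   # e.g. "GPT-4 via ChatGPT Enterprise"
    purpose: str  # what it is actually used for
    area: str     # business function, lowercase

def classify(use_case: AIUseCase) -> RiskTier:
    """The same model lands in different tiers depending on use."""
    if use_case.area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if "chatbot" in use_case.purpose or "generation" in use_case.purpose:
        return RiskTier.LIMITED  # transparency duties still apply
    return RiskTier.MINIMAL

# Same model, different classification:
print(classify(AIUseCase("GPT-4", "marketing copy generation", "marketing")))
print(classify(AIUseCase("GPT-4", "applicant screening", "employment")))
```

Run it and the first call comes back limited-risk, the second high-risk: the model is identical, only the use changed.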
The Four Things Deployers Must Do
If you deploy high-risk AI systems, even third-party ones like an AI-powered HR screening tool, the Act requires four specific actions. Here's what they mean in practice (a combined logging sketch follows the four items):
1. Human oversight (Article 14). Every high-risk AI decision must have a human in the loop. Not a rubber stamp — meaningful review by someone who understands the system's limitations. For HR screening tools, this means a recruiter reviews every AI-ranked candidate list before anyone is rejected. Document who reviewed, when, and what action they took.
2. Record-keeping (Articles 12 and 26). Maintain logs of the AI system's operation: inputs, outputs, and decisions. For deployers, this means keeping records of every interaction with the high-risk system for as long as the system is in use. Your AI governance platform should be generating these logs automatically.
3. Transparency (Articles 13 and 52). Individuals affected by AI decisions have the right to know. If an AI system contributed to a hiring rejection, credit denial, or insurance pricing decision, you must be able to explain that AI was involved and how it influenced the outcome.
4. Input data governance (Article 10). The data going into AI systems must be relevant, representative, and free from errors. For deployers, this means understanding what data you're feeding into third-party AI systems and ensuring it meets quality standards. Garbage in, biased decisions out — and now that's a legal liability.
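To show how those four duties can land in one audit trail, here's a sketch of a per-interaction record. The schema, field names, and `log_interaction` helper are assumptions for illustration; the Act mandates the substance (oversight, logs, transparency, input quality), not this shape, and a production system should write to an append-only store rather than a local file.

```python
# Sketch: one record per interaction with a deployed high-risk AI system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HighRiskInteraction:
    system_id: str             # which system, keyed to your AI inventory
    timestamp: str
    inputs_ref: str            # pointer to stored input data (Arts. 10, 12)
    output_summary: str        # what the system produced (Art. 12)
    reviewer: str              # human oversight (Art. 14): who reviewed...
    review_action: str         # ...and what they did: accepted/overridden/escalated
    disclosure_given: bool     # transparency (Art. 13): was the person told?
    input_checks_passed: bool  # data quality gate (Art. 10)

def log_interaction(record: HighRiskInteraction) -> None:
    # Append-only JSON lines; swap in your SIEM or governance platform.
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_interaction(HighRiskInteraction(
    system_id="hr-screening-tool",           # hypothetical system name
    timestamp=datetime.now(timezone.utc).isoformat(),
    inputs_ref="s3://hr-inputs/batch-0142",  # hypothetical storage path
    output_summary="ranked 38 candidates; top 10 forwarded",
    reviewer="recruiter@example.com",
    review_action="overridden",
    disclosure_given=True,
    input_checks_passed=True,
))
```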
Article 4: The Inventory Requirement Nobody Talks About
Buried in the Act is Article 4, the AI literacy obligation, which has applied since February 2025. Organizations must ensure that staff operating AI systems have sufficient understanding of how those systems work. But there's a prerequisite everyone overlooks: you need to know what AI systems your organization is operating in the first place.
This is the shadow AI discovery problem restated as a regulatory mandate. If your marketing team connected an AI writing assistant via OAuth and your HR team is using an AI-powered interview scheduler, those are AI systems your organization is deploying. If they fall into high-risk categories, you have compliance obligations you don't even know about.
You cannot comply with the AI Act if you don't have an inventory of the AI systems your organization uses. This is step zero.
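As a starting point, here's a rough discovery sketch using the Google Workspace Admin SDK Directory API, which can list the third-party OAuth grants each user has approved. It assumes a service account with domain-wide delegation; the credential file, admin address, and `AI_HINTS` keyword filter are illustrative placeholders, pagination is omitted, and matching on token display names is a heuristic, not a complete inventory.

```python
# Sketch: surface AI-looking OAuth grants across a Google Workspace domain.
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=SCOPES, subject="admin@example.com")  # hypothetical
directory = build("admin", "directory_v1", credentials=creds)

AI_HINTS = ("gpt", "openai", "claude", "anthropic", "copilot", "gemini")

inventory: dict[str, dict] = {}
users = directory.users().list(customer="my_customer").execute()
for user in users.get("users", []):
    email = user["primaryEmail"]
    tokens = directory.tokens().list(userKey=email).execute()
    for t in tokens.get("items", []):
        name = t.get("displayText", "")
        if any(h in name.lower() for h in AI_HINTS):
            entry = inventory.setdefault(name, {"users": [], "scopes": set()})
            entry["users"].append(email)
            entry["scopes"].update(t.get("scopes", []))

for app, info in inventory.items():
    print(f"{app}: {len(info['users'])} user(s), scopes: {sorted(info['scopes'])}")
```

Each hit is a candidate entry for your AI system inventory; the next step is classifying it by use case, as above.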
Mapping to What You Already Have
The good news: if you have SOC 2, ISO 27001, or NIST cybersecurity controls in place, you're not starting from zero. Many AI Act requirements map to existing frameworks:
- SOC 2 CC6.1 (access controls) → AI Act Article 14 (human oversight)
- ISO 27001 A.12.4 (logging and monitoring) → AI Act Article 12 (record-keeping)
- NIST CSF ID.AM (asset management) → AI Act Article 4 (AI system inventory)
- SOC 2 CC7.2 (monitoring) → AI Act Article 26 (deployer monitoring obligations)
- ISO 27001 A.18.1 (compliance with laws) → AI Act risk assessment requirements
The NIST AI Risk Management Framework (AI RMF) provides the closest structural alignment. Its GOVERN function maps directly to the AI Act's governance requirements. If you build your AI governance program around NIST AI RMF, you'll satisfy both U.S. voluntary frameworks and EU mandatory requirements with one structure.
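One way to make that reuse operational is to encode the mapping as data, so evidence collected for existing audits can be pulled against AI Act requests and gaps surface automatically. A minimal sketch using the mappings from the list above (the `gaps` helper and identifier formats are illustrative):

```python
# Sketch: the control mapping above as data, plus a simple gap check.
CONTROL_MAP = {
    "AI Act Art. 14 (human oversight)":     ["SOC2 CC6.1"],
    "AI Act Art. 12 (record-keeping)":      ["ISO27001 A.12.4"],
    "AI Act Art. 4 (AI system inventory)":  ["NIST CSF ID.AM"],
    "AI Act Art. 26 (deployer monitoring)": ["SOC2 CC7.2"],
    "AI Act risk assessment":               ["ISO27001 A.18.1"],
}

def gaps(implemented: set[str]) -> list[str]:
    """AI Act requirements with no implemented supporting control."""
    return [req for req, controls in CONTROL_MAP.items()
            if not any(c in implemented for c in controls)]

# Example: a SOC 2 shop with an asset inventory but no ISO 27001 program.
print(gaps({"SOC2 CC6.1", "SOC2 CC7.2", "NIST CSF ID.AM"}))
# -> record-keeping and risk assessment still need dedicated coverage
```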
The 15-Item Compliance Checklist
Here's what your security team should be doing right now, prioritized by enforcement risk (a small tracking sketch follows the list):
- Inventory all AI systems in use across the organization (workspace API audit + browser detection)
- Classify each system: high-risk, limited-risk, or minimal-risk based on use case
- Identify whether you're the provider or the deployer for each system
- Document human oversight procedures for every high-risk system
- Implement automated logging for high-risk AI interactions
- Establish transparency disclosures for AI-affected decisions
- Review and document data quality processes for AI system inputs
- Create an AI incident response procedure
- Train all AI system operators on the Act's literacy requirements (Article 4)
- Map existing SOC 2/ISO 27001 controls to AI Act requirements
- Engage legal counsel for Article 6 high-risk assessment
- Register high-risk systems in the EU database (when registration portal opens)
- Establish a quarterly AI governance review cadence
- Document vendor AI agreements and data processing terms
- Prepare Board/executive briefing on AI Act exposure
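If it helps to track these outside a spreadsheet, the checklist reduces to a small data structure that can feed the board briefing in the last item. A minimal sketch (owners, statuses, and the `progress` helper are illustrative):

```python
# Sketch: checklist items as trackable records for status reporting.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    task: str
    owner: str
    done: bool = False

CHECKLIST = [
    ChecklistItem("Inventory all AI systems in use", "security"),
    ChecklistItem("Classify each system by risk tier", "security"),
    ChecklistItem("Document human oversight procedures", "compliance"),
    ChecklistItem("Implement automated logging", "engineering"),
    # ...remaining items from the list above
]

def progress(items: list[ChecklistItem]) -> str:
    done = sum(i.done for i in items)
    return f"{done}/{len(items)} complete ({done / len(items):.0%})"

print(progress(CHECKLIST))  # "0/4 complete (0%)"
```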
What Comes Next in the U.S.
The EU AI Act is the most comprehensive AI regulation globally, but the U.S. isn't standing still. Colorado's AI Act (now set to take effect June 30, 2026, after a legislative delay from its original February date) requires deployers of high-risk AI to complete impact assessments and provide consumer disclosures. Multiple states are considering similar legislation. The NIST AI RMF is becoming the de facto voluntary standard.
Companies that build governance programs for EU compliance will find themselves ahead when U.S. regulations inevitably expand. The frameworks overlap significantly. This isn't compliance for one jurisdiction — it's building the governance infrastructure your organization will need everywhere.
Start With Discovery
The compliance deadline is five months away. The biggest risk isn't failing to meet every requirement — it's not knowing what AI systems you're responsible for. Every other compliance action depends on that inventory.
Vloex discovers every AI tool your organization uses in minutes — via Google Workspace and Microsoft 365 OAuth. No agents, no network changes. See your AI landscape, classify by risk, and build the audit trail the EU AI Act demands. Get started free.