When auditors ask "what AI systems does your organization use?" the answer is almost never complete. The median enterprise has 40+ AI tools in active use across the organization. Security teams typically know about 12. That gap of more than 3x isn't a minor discrepancy: it's an entire attack surface you haven't assessed, a compliance obligation you haven't addressed, and a cost center you haven't measured.
And auditors are about to start asking. The EU AI Act requires AI system inventories. SOC 2 auditors are adding AI usage questions. Cyber insurance applications now include AI governance sections. If your answer to any of these is "we don't have a complete inventory," you're about to have a very uncomfortable conversation.
You can't govern what you can't see. An AI asset inventory isn't a nice-to-have — it's the foundation every other governance control depends on.
What Belongs in an AI Asset Inventory
Your AI inventory isn't a spreadsheet of tool names. It's a structured registry with enough context to make governance decisions. For each AI tool or system, capture:
- Tool name and vendor — the AI product and who provides it
- Discovery source — how you found it (workspace audit, browser detection, self-reported)
- Data access level — what data can this tool access? Read-only or read-write?
- Authentication method — SSO, personal account, API key, OAuth consent grant?
- Training-on-input policy — does the vendor train on user input? Can it be disabled?
- Department and team — who's using it and for what purpose?
- Approval status — sanctioned, tolerated, or prohibited?
- Risk classification — high, medium, or low based on data sensitivity and use case
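As a sketch, the registry fields above can be captured in a simple record type. The field names and allowed values here are illustrative, not a fixed standard:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class AIAssetRecord:
    """One row in the AI asset inventory. Field names are illustrative."""
    tool_name: str
    vendor: str
    discovery_source: Literal["workspace_audit", "browser_detection", "self_reported"]
    data_access: Literal["read_only", "read_write"]
    auth_method: Literal["sso", "personal_account", "api_key", "oauth_grant"]
    trains_on_input: bool          # does the vendor train on user input?
    training_opt_out_available: bool
    department: str
    approval_status: Literal["sanctioned", "tolerated", "prohibited"]
    risk_class: Literal["high", "medium", "low"]

# Example entry: an OAuth-connected ChatGPT grant discovered via workspace audit
record = AIAssetRecord(
    tool_name="ChatGPT", vendor="OpenAI",
    discovery_source="workspace_audit", data_access="read_write",
    auth_method="oauth_grant", trains_on_input=True,
    training_opt_out_available=True, department="Marketing",
    approval_status="tolerated", risk_class="medium",
)
```

Keeping the schema this explicit means the same record answers an auditor's questionnaire, feeds a policy engine, and rolls up into a board report without translation.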
This schema maps directly to what auditors, regulators, and insurance providers will ask. Build it right the first time and you won't need to scramble when the questions come.
Three Discovery Channels
No single discovery method catches everything. The most complete inventories combine three channels, each with different strengths:
Channel 1: Workspace API Audit
Connect Google Workspace Admin SDK or Microsoft 365 enterprise app registrations. This instantly reveals every OAuth-connected AI app across your entire organization — no agents to install, no network changes. You'll see which AI tools employees have authorized, what permissions they granted, and when the connection was made.
Strengths: instant, complete coverage of OAuth-connected apps, zero deployment effort. Limitations: only catches tools connected via OAuth. Won't detect browser-based usage of AI tools where the employee just visits a website.
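As a minimal sketch of what you do with the audit output: the record shape below loosely follows what an OAuth-token listing from a workspace admin API returns, and the AI vendor keyword list is an illustrative assumption, not a complete catalog:

```python
# Flag AI apps among a user's OAuth grants. Both the record shape and the
# keyword list are assumptions for illustration.
AI_APP_KEYWORDS = {"openai", "chatgpt", "anthropic", "claude", "midjourney", "perplexity"}

def find_ai_grants(tokens: list[dict]) -> list[dict]:
    """Return OAuth grants whose display name matches an AI vendor keyword."""
    hits = []
    for token in tokens:
        name = token.get("displayText", "").lower()
        if any(keyword in name for keyword in AI_APP_KEYWORDS):
            hits.append(token)
    return hits

grants = [
    {"displayText": "ChatGPT", "scopes": ["openid", "email", "drive.readonly"]},
    {"displayText": "Slack", "scopes": ["openid"]},
]
ai_grants = find_ai_grants(grants)
# ai_grants keeps only the ChatGPT grant, including the scopes it was given
```

The scopes on each surviving grant tell you the data access level for your inventory: a grant with read access to Drive is a very different risk than one that only requested an email address.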
Channel 2: Browser-Level Detection
A browser extension that recognizes AI provider domains and captures interaction metadata. This catches the tools that workspace audits miss — the employee who visits ChatGPT directly, the engineer using Claude in a browser tab, the designer generating images on Midjourney.
Strengths: catches browser-based AI usage, provides interaction-level detail (prompts, responses, data flows). Limitations: requires extension deployment, only covers managed browsers.
Channel 3: Network/DNS Analysis
Monitor DNS queries and network traffic for AI provider domains. This casts the widest net of the three channels: it catches API-level usage from code, CLI tools, and applications that don't run in browsers.
Strengths: catches non-browser AI usage (API calls, CLI tools). Limitations: requires network infrastructure changes, higher deployment complexity, less interaction-level detail.
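The core of DNS-based discovery is a domain match. A minimal sketch, where both the domain set and the log format (timestamp, client IP, queried name) are illustrative assumptions:

```python
# Match DNS query logs against known AI provider domains.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai",
              "api.anthropic.com", "gemini.google.com"}

def ai_queries(log_lines: list[str]) -> list[tuple[str, str]]:
    """Return (client_ip, domain) pairs for queries to AI provider domains."""
    matches = []
    for line in log_lines:
        _timestamp, client, qname = line.split()
        # Match the domain itself or any subdomain of it
        if any(qname == d or qname.endswith("." + d) for d in AI_DOMAINS):
            matches.append((client, qname))
    return matches

logs = [
    "2025-06-01T09:14:02 10.0.4.17 api.openai.com",
    "2025-06-01T09:14:05 10.0.4.17 example.com",
    "2025-06-01T09:15:11 10.0.7.42 claude.ai",
]
hits = ai_queries(logs)
# → [("10.0.4.17", "api.openai.com"), ("10.0.7.42", "claude.ai")]
```

Note what this gives you and what it doesn't: you learn that a host reached an AI provider, but not what was sent, which is why the browser channel still matters for interaction-level detail.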
Start with workspace API audit (5 minutes, zero deployment) to get immediate coverage. Add browser detection next. Add network analysis only if you need to catch API-level usage.
Risk Scoring Framework
Once you have your inventory, every tool needs a risk score. This isn't a subjective judgment call — it's a two-axis matrix that produces a consistent, defensible classification:
Axis 1: Data sensitivity. What's the most sensitive data type this tool can access? Restricted (PII, credentials, health data) = high. Confidential (internal strategy, financial projections) = medium. Public (marketing content, public research) = low.
Axis 2: Authentication type. How is the tool accessed? Personal free-tier account = high risk. Personal paid account = medium risk. Corporate SSO with enterprise agreement = low risk. The authentication method determines data handling guarantees.
Cross these two axes and you get nine cells. The top-right corner (high sensitivity + personal free-tier account) holds your immediate action items. The bottom-left (low sensitivity + corporate SSO) is your lowest priority. Everything else falls in between.
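The matrix reduces to a few lines of code. One reasonable convention, sketched here, is to score each axis 1 to 3 and classify by the worse of the two; the value names are illustrative:

```python
# Two-axis risk matrix: data sensitivity x authentication type.
SENSITIVITY = {"restricted": 3, "confidential": 2, "public": 1}
AUTH_RISK = {"personal_free": 3, "personal_paid": 2, "corporate_sso": 1}

def risk_class(sensitivity: str, auth: str) -> str:
    """Classify a tool as high/medium/low from its worst axis."""
    score = max(SENSITIVITY[sensitivity], AUTH_RISK[auth])
    return {3: "high", 2: "medium", 1: "low"}[score]

risk_class("restricted", "personal_free")  # "high": immediate action
risk_class("public", "corporate_sso")      # "low": lowest priority
risk_class("confidential", "personal_paid")  # "medium"
```

Because the rule is mechanical, two reviewers scoring the same tool get the same answer, which is what makes the classification defensible in front of an auditor.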
The Personal Account Problem
Harmonic Security's analysis of 22.4 million enterprise AI prompts revealed that 16.9% of sensitive data exposure happens on personal free-tier accounts invisible to IT. ChatGPT Free accounts are responsible for 87% of these exposures. These accounts have the weakest data handling protections — most train on user input by default.
SSO-only policies don't solve this. An SSO mandate means employees use the corporate ChatGPT Enterprise account when they're being deliberate. But the quick question they paste into their personal ChatGPT tab? That's the one containing the customer list or the source code. The personal account usage is where the risk concentrates, and it's exactly the usage that SSO mandates don't cover.
From Inventory to Action
An inventory sitting in a spreadsheet is documentation, not governance. Your inventory should drive three downstream actions:
Policy enforcement. Feed your inventory classifications into your policy engine. Sanctioned tools get monitored. Tolerated tools get coaching. Prohibited tools get blocked. The policy engine references your inventory to determine which action to take.
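That status-to-action mapping is deliberately simple. As a sketch, with illustrative action names:

```python
# Approval status from the inventory -> enforcement action in the policy engine.
POLICY_ACTIONS = {
    "sanctioned": "monitor",   # approved: observe usage, no friction
    "tolerated": "coach",      # allowed for now: nudge users toward approved tools
    "prohibited": "block",     # not allowed: prevent access
}

def enforcement_action(approval_status: str) -> str:
    """Look up the enforcement action for a tool's approval status."""
    return POLICY_ACTIONS[approval_status]

enforcement_action("prohibited")  # "block"
```

The point is that the policy engine holds no tool knowledge of its own; the inventory is the single source of truth, so reclassifying a tool changes enforcement everywhere at once.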
License optimization. Your inventory reveals duplicate tools (three different teams paying for three different AI writing assistants), unused licenses (20 Copilot seats but only 8 active users), and consolidation opportunities. Most organizations save 20-30% on AI spend just from the visibility.
Regulatory compliance. The EU AI Act requires high-risk AI systems to be registered in an EU database. SOC 2 auditors want to see your AI tool inventory. Your CISO needs to present AI exposure to the board. All of these start with the same data: what AI is your organization using, and how?
Continuous Discovery vs Point-in-Time Audits
A quarterly spreadsheet audit fails because AI adoption moves too fast. In the time between audits, employees will connect 5-10 new AI tools. Pricing changes will make previously expensive tools accessible. New AI capabilities will create entirely new categories of tools. By the time you complete one audit, it's already outdated.
Always-on discovery solves this. Your workspace API connection continuously monitors new OAuth grants. Your browser extension detects new AI provider domains as employees visit them. New tools appear in your inventory within minutes of first use — not months later during the next quarterly audit.
Vloex builds your AI asset inventory automatically. Connect Google Workspace or Microsoft 365 for instant OAuth app discovery. Add the browser extension for real-time interaction monitoring. See your complete AI landscape in minutes — continuously updated, always current. Get started free.