
The Delve Scandal: $300M in Fake Compliance — and What It Means for AI Governance

Satya Vegulla·Founder, Vloex·March 28, 2026·9 min read
$300M: the valuation of a compliance startup accused of fabricating audit reports

On March 22, 2026, an anonymous whistleblower calling themselves "DeepDelver" published a Substack post that detonated the compliance industry. The target: Delve, a Y Combinator-backed compliance automation startup valued at $300 million after a $32 million Series A led by Insight Partners. The accusation: Delve had been fabricating SOC 2, HIPAA, ISO 27001, and GDPR compliance reports for over 1,000 customers across 50 countries.

Within 48 hours, Insight Partners scrubbed their investment announcement from their website. Within a week, TechCrunch revealed that LiteLLM — an open-source AI proxy project that was itself hit by malware — had been certified as SOC 2 and ISO 27001 compliant by Delve. The compliance certifications that were supposed to guarantee security had been rubber-stamped by the very system that failed to detect a supply chain attack.

The company that sold compliance as a product couldn't deliver compliance as a practice. If that doesn't make you question every SOC 2 badge you've ever trusted, it should.

What the Whistleblower Actually Said

DeepDelver's allegations are specific and structural, not vague grievances. According to the Substack post, Delve "achieves its claim of being the fastest platform by producing fake evidence, generating auditor conclusions on behalf of certification mills that rubber stamp reports, and skipping major framework requirements while telling clients they have achieved 100% compliance." The whistleblower claimed this included fabricated evidence of board meetings, tests, and processes that never happened.

The most damning detail: DeepDelver published analysis of a leaked Google spreadsheet containing hundreds of Delve clients' draft audit reports. Nearly all clients were funneled through two audit firms — Accorp and Gradient — which the whistleblower described as "part of the same operation" with a nominal US presence but primarily operating out of India. The allegation is that these firms rubber-stamped reports generated by Delve itself, inverting the entire compliance model.

The LiteLLM Connection Makes This an AI Security Story

This would be a compliance industry scandal on its own. But the LiteLLM connection turns it into an AI governance story. LiteLLM is a popular open-source AI proxy used by developers to route API calls between different LLM providers. It was hit by a malware attack — and at the time, it proudly displayed SOC 2 and ISO 27001 certifications on its website. Those certifications had been issued through Delve.

Think about what this means for AI governance: a tool that sits between your application and your AI provider, handling every prompt and response, was certified as secure by a company accused of fabricating that very certification. The compliance badge that was supposed to provide assurance actually provided a false sense of security that may have contributed to slower detection of the compromise.

A SOC 2 badge on an AI tool tells you nothing about what data is flowing through it right now. It tells you what an auditor — or an automation platform impersonating one — said about the tool's controls at a single point in time.

The Structural Problem: Compliance-at-Speed

Delve's pitch was speed. Get SOC 2 certified in weeks, not months. Automate the evidence collection. Streamline the audit. This is appealing because traditional compliance is genuinely painful — slow, expensive, and often disconnected from actual security posture. But the Delve scandal reveals what happens when you optimize compliance for speed without preserving its substance.

Compliance automation isn't the problem. Automating evidence collection, policy templates, and audit coordination is legitimate and valuable. The problem is when automation replaces verification rather than supporting it.

Speed without independence is theater. When the same platform generates the evidence, writes the auditor conclusions, and selects the audit firm, there is no independent check. The entire trust chain collapses into a single point of failure.

Annual certifications can't cover AI. Even legitimate SOC 2 audits examine controls at a point in time. AI tools change weekly — new models, new features, new data handling. A certification from six months ago says nothing about today's risk.

What This Means for Your AI Compliance Strategy

If you're a security leader, the Delve scandal should trigger an immediate review of two things: which of your vendors' compliance certifications were issued by Delve (or similar rapid-certification platforms), and whether your AI governance depends on point-in-time certifications rather than continuous monitoring. If you haven't already, review our AI compliance checklist for 2026 for a practical framework.

  • Audit your vendor certifications. Ask which compliance platform and audit firm were used. If you get vague answers or recognize the firms named in the Delve investigation, dig deeper.
  • Stop treating SOC 2 badges as security signals. A SOC 2 Type II report is a starting point for vendor evaluation, not the finish line. Ask for the full report, read the scope exclusions, and verify the audit firm's independence.
  • Demand continuous evidence for AI tools. Any AI tool processing your data should provide real-time visibility into what data flows through it, not an annual attestation that it has controls in place.
  • Separate compliance from governance. Compliance tells you whether boxes were checked. Governance tells you whether sensitive data is actually protected. You need both, and they are not the same thing.
  • Monitor AI data flows directly. If a tool claims not to store prompts, verify it. If a vendor claims SOC 2 compliance, ask who certified them. Trust but verify is dead — verify, then decide whether to trust.

From Checkbox to Continuous: What Real AI Governance Looks Like

The Delve scandal accelerates a shift that was already underway: from periodic compliance assessments to continuous security monitoring. For AI specifically, this means three things.

Real-time data flow visibility. You need to see what data is entering and leaving every AI tool in your stack — not what a compliance report says should be happening, but what is actually happening right now. This means monitoring at the point of interaction: the browser, the API call, the MCP server connection.
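As a rough sketch of what "monitoring at the point of interaction" means in practice, the snippet below records each AI call as a structured event before it leaves the organization. The function name, fields, and tool names are illustrative assumptions, not Vloex's actual schema; note that the log stores a hash of the prompt rather than the prompt itself, so records can be correlated with provider-side logs without the audit log becoming a second copy of sensitive data.

```python
import hashlib
import time

def log_ai_interaction(log, user, tool, prompt, response):
    """Record one AI interaction at the point of use.

    Captures who sent what to which tool, plus a content hash so the
    record can be matched against provider-side logs without storing
    the raw prompt in the audit log itself. (Illustrative schema.)
    """
    entry = {
        "ts": time.time(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }
    log.append(entry)
    return entry

# Hypothetical usage: one prompt routed through an AI proxy.
log = []
entry = log_ai_interaction(log, "alice@example.com", "litellm-proxy",
                           "Summarize Q3 revenue figures",
                           "Q3 revenue grew 12% year over year.")
```

The same hook can sit in a browser extension, an API gateway, or an MCP server wrapper; the point is that the record is produced by the data path itself, not reconstructed later for an auditor.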

Policy enforcement at the edge. When an employee is about to paste customer PII into a tool whose compliance certification may be worthless, you need enforcement that works in real time — not a policy document that nobody reads and an annual audit that nobody verifies.
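A minimal sketch of edge enforcement, assuming a regex-based detector (real deployments use tuned classifiers and far broader pattern sets): scan outbound text for PII categories and block the interaction before anything reaches the AI tool.

```python
import re

# Illustrative patterns only; production detectors are far more robust.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_outbound(text):
    """Return the PII categories found in text about to leave the org."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))

def enforce(text):
    """Block the interaction if any PII category matches; allow otherwise."""
    hits = check_outbound(text)
    return ("block", hits) if hits else ("allow", [])
```

The decision happens in real time at the point of use, regardless of what any compliance report claims about the destination tool.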

An audit trail that can't be fabricated. Every AI interaction captured, every policy decision logged, every data flow recorded. Not because an auditor asked for it, but because continuous evidence is the only evidence that matters.
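One standard way to make a log tamper-evident (a sketch of the general technique, not a claim about any vendor's implementation) is hash chaining: each entry embeds the hash of the previous entry, so altering or deleting any record invalidates every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, event):
    """Append an event to a hash-chained audit log."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; return False if any entry was tampered with."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

# Hypothetical events captured at the point of use.
chain = []
append_entry(chain, {"user": "alice", "action": "prompt_sent", "tool": "litellm"})
append_entry(chain, {"user": "bob", "action": "policy_block", "tool": "chatgpt"})
```

Contrast this with a spreadsheet of draft audit reports: a chained log can be checked by anyone, after the fact, without trusting the party that produced it.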

Delve's Response — and Why It Doesn't Matter

Delve published a blog post calling the allegations "misleading" and clarifying that it is an "automation platform" that provides auditors with access to compliance information, not an audit firm itself. This may be technically accurate. But it sidesteps the core allegation: that the platform was designed in a way that allowed compliance reports to be generated without meaningful independent verification, and that customers believed they were compliant when critical requirements had been skipped.

For security teams, the lesson isn't about Delve specifically. It's about the model. Any system that promises compliance-at-speed through automation and pre-selected audit firms creates the conditions for exactly this kind of failure. The question every CISO should be asking isn't "was my vendor certified by Delve?" It's "would I know the difference between a real certification and a fabricated one?"

The Bottom Line

The Delve scandal is not an anomaly. It's the logical endpoint of an industry that optimized for the appearance of security rather than its substance. As AI tools proliferate across every department and workflow, the gap between checkbox compliance and actual governance will only widen. IBM data shows shadow AI breaches cost $4.63M on average — and fake compliance only increases that exposure. Companies that rely on badges and annual audits to manage AI risk are building on sand.

Vloex replaces checkbox compliance with continuous AI governance. Real-time visibility into every AI interaction, policy enforcement at the point of use, and an audit trail built from actual data — not fabricated evidence. See what's really happening in your AI stack.

Get started free
