98%
of organisations report unsanctioned AI use. 49% expect a shadow AI incident within 12 months — Acuvity 2025 State of AI Security Report
71%
of workers have used unapproved AI tools at work — Microsoft Work Trend Index 2025. This is not a junior-employee problem.
86%
of organisations lack visibility into how data flows to and from AI tools — Reco State of Shadow AI Report 2025
63%
of organisations have no formal AI governance framework — IBM 2025. Only 37% have any AI policy at all.
The C-Suite Imperative
The Problem Is No Longer Coming. It Is Already Inside Your Network.
Shadow AI is not the AI governance problem of 2027. It is operating inside your organisation today, at scale, without your knowledge. The employee entering customer data into ChatGPT to draft support emails, the finance analyst uploading contract terms to Claude for clause extraction, the engineer pasting source code into Copilot for debugging — none of them is acting maliciously. They are filling a governance vacuum that leadership has not yet closed.
Shadow AI is categorically different from its predecessor, shadow IT. When employees used unapproved file storage, data sat in a bucket somewhere. When employees use unapproved AI, data flows into models that can learn from it, store it, surface it in responses to other users, and be exploited by adversaries who have access to those models. The data does not just sit somewhere — it becomes part of something larger and far less controllable. The average enterprise now uploads 8.2 GB of data per month to AI applications, according to Netskope’s 2025 Cloud and Threat Report. Most of it flows through tools the CISO has never reviewed.
And crucially: leadership is modelling the behaviour it needs to stop. Microsoft’s 2026 Data Security Index found that 69% of C-suite executives openly prioritise speed over data privacy when adopting new AI tools. The exposure created by a CEO pasting a board presentation into a public LLM for summarisation is identical to the exposure created by an intern doing the same thing — and far more consequential if it contains material non-public information.
The incidents that define the stakes: Amazon employees found pasting confidential internal data into ChatGPT, with model outputs later resembling proprietary internal documents. Slack’s AI summarisation feature leaked data from private channels across user boundaries. A CVSS 9.3-rated zero-click prompt injection vulnerability disclosed in Microsoft Copilot — a tool deployed inside thousands of enterprise environments. Shadow AI is not theoretical. It is the source of documented, material incidents at organisations that believed their AI governance was adequate.
// Documented Real-World Incidents — 2023–2026
Amazon (2023): Employees pasting confidential internal data into ChatGPT — model outputs later resembling proprietary documents. Enterprise-wide warnings issued after the fact.
Slack AI (Aug 2024): Prompt injection vulnerability in AI summarisation leaked data from private channels across user boundaries. Patched, but standard security review never detected the original exposure.
Microsoft Copilot (June 2025): CVSS 9.3 zero-click prompt injection disclosed — deployed inside thousands of enterprise environments without adequate guardrails.
Moltbook (Feb 2026): 1.5 million API keys and 35,000 user emails exposed — allowing anyone to hijack AI agents and access OpenAI and AWS third-party services.
s1ngularity / Shai-Hulud (late 2025): AI-powered malware hijacked developer command-line tools to exfiltrate GitHub and npm tokens, creating a self-propagating worm through enterprise code packages.
Threat Taxonomy
Understanding the Four Types of Shadow AI
Shadow AI is not a single category — it is four distinct threat classes that require different detection approaches and different governance responses. Misidentifying the class means deploying the wrong control.
Type 01 — Highest Volume
Public LLMs (BYOAI)
Employees accessing ChatGPT, Claude, Gemini, and Perplexity through personal accounts or consumer-tier access — bypassing enterprise data handling controls entirely. 47% of GenAI platform users access tools through personal, unmonitored accounts (Netskope 2025). 68% use free tools like ChatGPT, with 57% entering sensitive data. The exposure is direct and immediate: data enters the model, OpenAI may use it for training unless the user opted out, and the enterprise has zero visibility or recourse.
ChatGPT (personal)
Claude.ai
Gemini
Perplexity
Type 02 — Hidden in Plain Sight
Embedded SaaS AI Features
AI capabilities built directly into enterprise tools that appear to be IT-approved — Slack’s summarisation, Notion AI, Salesforce Einstein, Microsoft 365 Copilot, Google Workspace AI. These tools are particularly dangerous because they inherit the trust of the approved platform while introducing AI risk that was never independently evaluated. Slack’s 2024 cross-channel data leak emerged from exactly this category: a tool every enterprise assumed was reviewed because the underlying platform was approved. Gartner’s November 2025 analysis found this is now the fastest-growing source of AI compliance incidents.
Salesforce Einstein
Notion AI
Slack AI
Copilot in M365
Type 03 — Endpoint Invisible
Browser Extensions & Copilots
AI tools installed at the browser layer — reading page content, email subjects, clipboard data, and web form inputs without any network-level detection. Browser extensions with AI capabilities represent the most difficult shadow AI category to detect through traditional network security tools because their traffic appears as normal HTTPS from an approved browser. The number of distinct GenAI SaaS applications tracked surged past 1,550 by late 2025, up from 317 at the start of the year — most entering enterprises via browser extension installs that IT never approved or reviewed.
Grammarly AI
Otter.ai
Monica AI
Sider.ai
Type 04 — Highest Risk
API-Driven Custom Workflows
Developer or analyst-built agentic workflows connecting LLMs directly to enterprise APIs, databases, and internal systems via frameworks like LangChain, AutoGPT, or CrewAI — without any security review, architecture approval, or compliance sign-off. These are the “shadow agents” in the truest sense: autonomous systems executing workloads against enterprise systems, operating with no documentation, no governance integration, and no visibility for security teams. They can query databases, interact with APIs, manage workflows, and submit content — entirely unmonitored. MCP servers that expose internal APIs are particularly dangerous: they create persistent, authenticated tool access that propagates beyond the original developer’s use case.
LangChain Agents
AutoGPT
MCP Servers
CrewAI Workflows
Detection Architecture
The 5 Pillars of Shadow AI Detection
No single detection layer provides complete shadow AI visibility. The four shadow AI types each require different detection approaches — which is why five complementary pillars are required for comprehensive coverage.
Pillar 1 · Network Signals
Firewalls, Proxy Logs & DNS
Traffic analysis to identify AI platform connections from enterprise infrastructure
Foundational
Types: BYOAI · APIs
Network-level detection is the most foundational shadow AI visibility layer — and the one most enterprises already have partial infrastructure for. Firewall logs, web proxy traffic, and DNS query data contain the raw signal that reveals which AI platforms are being accessed from enterprise networks. Analysis of traffic to domains like openai.com, api.anthropic.com, generativelanguage.googleapis.com, and the rapidly expanding catalogue of GenAI SaaS platforms (over 1,550 distinct applications as of 2025) provides the baseline inventory of AI tool usage across the organisation.
The limitation is traffic visibility — network-level detection catches traffic from managed devices on corporate networks but is blind to personal devices, mobile networks, and home Wi-Fi use. It also cannot inspect the content of HTTPS traffic without SSL inspection, limiting it to detection rather than data classification. Used as the foundational layer, network signals provide the breadth map; other pillars provide the depth.
What to Monitor
→DNS queries to known AI platform domains — maintain a live catalogue of GenAI endpoints
→Proxy logs showing traffic volume and frequency to AI API endpoints
→Firewall rules blocking prohibited AI platforms — verify compliance with weekly DNS review
→Data upload volumes to AI platforms — flag anomalous upload events exceeding baseline
→New domain first-seen alerts for emerging AI services not yet in the catalogue
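The first two monitoring items above can be sketched as a simple log-matching pass: compare DNS query names against a catalogue of known GenAI domains and aggregate hits per source host to produce the baseline inventory. The log record shape and the catalogue entries below are illustrative assumptions; in practice the catalogue should be a live feed covering the 1,500+ tracked applications.

```python
# Sketch: build a baseline AI-traffic inventory from DNS query logs.
# The catalogue is a small illustrative subset, not a complete feed.
from collections import Counter

AI_DOMAIN_CATALOGUE = {
    "openai.com", "chatgpt.com", "api.anthropic.com", "claude.ai",
    "gemini.google.com", "generativelanguage.googleapis.com", "perplexity.ai",
}

def matches_catalogue(qname: str) -> bool:
    """True if the query name is a catalogue domain or a subdomain of one."""
    qname = qname.rstrip(".").lower()
    return any(qname == d or qname.endswith("." + d) for d in AI_DOMAIN_CATALOGUE)

def ai_traffic_inventory(dns_log: list[dict]) -> Counter:
    """Count queries per (source host, AI domain) to form the baseline map."""
    hits = Counter()
    # assumed record shape: {"src": "10.0.4.17", "qname": "chatgpt.com"}
    for rec in dns_log:
        if matches_catalogue(rec["qname"]):
            hits[(rec["src"], rec["qname"].rstrip(".").lower())] += 1
    return hits
```

Suffix matching matters here: blocking only exact domains misses subdomains such as API and CDN endpoints, which is where most programmatic AI traffic actually appears.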
Pillar 2 · CASB & SSE
SaaS Application Security
Cloud Access Security Brokers and Security Service Edge for unsanctioned cloud AI use
SaaS Layer
Types: BYOAI · SaaS
Cloud Access Security Brokers (CASBs) and Security Service Edge (SSE) platforms sit between enterprise users and cloud applications, providing the application-level visibility that network monitoring alone cannot deliver. Where proxy logs show traffic to openai.com, a CASB can identify which specific feature was used, what data classification was uploaded, whether the account is personal or enterprise-managed, and whether the session involved sensitive data patterns.
CASBs are particularly effective for the Embedded SaaS AI category — detecting when approved SaaS platforms enable unapproved AI features that inherit the platform’s trusted status. The 2024 Slack AI cross-channel data leak would have been visible in a CASB that tracked AI feature usage within the Slack application, even though Slack itself was an approved platform. In 2026, leading CASBs include AI-specific risk ratings for over 1,000 GenAI applications — enabling rapid policy decisions against a pre-assessed risk catalogue rather than requiring manual analysis of each new tool.
What to Monitor
→Application-level AI feature usage within approved SaaS platforms
→Personal account vs. enterprise account usage of AI platforms
→AI risk ratings for every SaaS application in use — classify unsanctioned tools on first detection
→Data classification of content uploaded to AI platforms
→Enforce real-time DLP coaching — warn rather than hard-block where appropriate
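A minimal sketch of the policy logic the monitoring list implies, assuming a hypothetical pre-assessed risk catalogue and a simplified usage-event shape. Real CASB engines evaluate much richer context (user, device posture, data classification); this only shows the tier-plus-account-type decision, including the coach-rather-than-block pattern.

```python
# Sketch: tier-based policy decision for a CASB usage event.
# Catalogue contents, tier names, and event fields are illustrative assumptions.
RISK_CATALOGUE = {
    "chatgpt": "restricted",        # enterprise accounts only
    "notion-ai": "approved",
    "unknown-gpt-wrapper": "prohibited",
}

def classify_event(app: str, account_type: str) -> str:
    """Return the policy action for one observed AI-app usage event."""
    tier = RISK_CATALOGUE.get(app, "unreviewed")  # unknown apps need triage
    if tier == "prohibited":
        return "block"
    if tier == "restricted" and account_type == "personal":
        return "coach"  # real-time DLP coaching: warn rather than hard-block
    if tier == "unreviewed":
        return "flag-for-review"
    return "allow"
```

The "flag-for-review" branch is what feeds the time-to-visibility metric discussed later: every unknown application gets a triage decision rather than silently passing.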
Pillar 3 · Endpoint Telemetry
Browser Activity & Extensions
Local copilot and browser extension detection where network monitoring is blind
Endpoint
Types: Extensions · SaaS
Endpoint Detection and Response (EDR) telemetry provides the visibility layer for shadow AI activity that occurs at the device level — particularly browser extensions with AI capabilities that are invisible to network monitoring because their traffic appears as normal HTTPS from an approved browser. Endpoint telemetry can identify installed browser extensions, track which extensions interact with page content or clipboard data, and flag extensions that communicate with AI model APIs.
This pillar is also the detection surface for locally-running AI models — small language models like Mistral or LLaMA running directly on developer workstations without any network traffic at all. As local AI inference becomes more accessible, this category of shadow AI will grow. Endpoint telemetry combined with application inventory management provides the foundation for detecting all four shadow AI types at the device layer, filling the gaps that network-level and cloud-level monitoring cannot reach.
What to Monitor
→Browser extension inventory — audit all installed extensions against an approved list
→Extension permissions review — flag extensions requesting clipboard, page content, or API access
→Locally-installed AI tools — LM Studio, Ollama, local Copilot instances
→Process-level telemetry — AI framework processes (python, node.js) accessing sensitive file paths
→Clipboard and screen access by AI applications — flag when AI extensions access sensitive form fields
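The first two monitoring items can be sketched as a fleet-wide comparison of installed extensions against an approved list, with risky permission grants surfaced per finding. The extension IDs, names, and permission strings below are illustrative assumptions, not a specific EDR or browser-management schema.

```python
# Sketch: audit browser-extension inventory against an approved list and
# flag risky permission grants. All identifiers here are illustrative.
APPROVED_EXTENSIONS = {"ext-password-manager"}
RISKY_PERMISSIONS = {"clipboardRead", "tabs", "webRequest", "<all_urls>"}

def audit_extensions(inventory: list[dict]) -> list[dict]:
    """Return findings for unapproved extensions, noting risky permissions."""
    findings = []
    # assumed record shape: {"id": ..., "name": ..., "permissions": [...]}
    for ext in inventory:
        if ext["id"] in APPROVED_EXTENSIONS:
            continue
        findings.append({
            "id": ext["id"],
            "name": ext["name"],
            "risky_permissions": sorted(
                set(ext["permissions"]) & RISKY_PERMISSIONS
            ),
        })
    return findings
```

An allowlist is deliberately the right shape here: a blocklist of known-bad AI extensions cannot keep pace with a catalogue growing past 1,550 applications.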
Pillar 4 · IAM & OAuth
Machine Identities & Delegated Access
Non-human identity governance for unmanaged AI agent workflows and OAuth grants
Identity
Types: Custom APIs · Agents
IAM and OAuth monitoring is the detection surface for the most dangerous shadow AI category: API-driven custom workflows and agentic systems built by employees without security review. When an employee builds a LangChain agent connecting to the company’s CRM, they typically authenticate that agent using their own OAuth credentials — granting it the same permissions they personally hold. If they are a CRM administrator, the agent has administrator access. This permission inheritance creates an unmonitored privileged agent that security teams cannot see through traditional identity management tools.
Only 10% of organisations have a strategy for managing non-human identities (NHIs) — Okta AI at Work 2025. The UNC6395 attack of August 2025, in which stolen OAuth tokens from a trusted SaaS integration provided access to 700+ organisations’ Salesforce environments, demonstrated exactly what unmonitored OAuth delegation enables at scale. AI agent workflows depend on OAuth grants, API tokens, and service account credentials that must be inventoried and governed with the same rigour as human identities.
What to Monitor
→OAuth grant inventory — every third-party application with delegated access to enterprise systems
→Service account and API token audit — identify tokens with AI model API scope
→Permission inheritance — flag agent workflows running under human admin credentials
→Non-human identity (NHI) lifecycle — creation, rotation, and revocation of AI agent credentials
→Token anomaly detection — API token usage outside normal hours or volumes suggesting agent activity
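The grant-sweep logic above can be sketched as follows: flag OAuth grants whose scopes suggest AI model access, and raise severity where the grant inherits admin privilege from its human owner, the permission-inheritance pattern described earlier. The scope markers and grant record shape are assumptions for illustration, not any identity provider's API.

```python
# Sketch: sweep an OAuth grant registry for shadow-agent indicators.
# Scope markers and record fields are illustrative assumptions.
AI_SCOPE_MARKERS = ("openai", "anthropic", "model.invoke", "assistants")

def flag_grants(grants: list[dict]) -> list[dict]:
    """Flag AI-scoped grants; escalate those inheriting admin privilege."""
    findings = []
    # assumed shape: {"app": ..., "scopes": [...], "owner_is_admin": bool}
    for g in grants:
        ai_scoped = any(
            marker in scope.lower()
            for scope in g["scopes"]
            for marker in AI_SCOPE_MARKERS
        )
        if not ai_scoped:
            continue
        findings.append({
            "app": g["app"],
            "severity": "high" if g["owner_is_admin"] else "medium",
        })
    return findings
```

The severity split mirrors the governance point: an AI-scoped grant is a visibility gap, but an AI-scoped grant running under a human administrator's credentials is an unmonitored privileged agent.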
Pillar 5 · Data Security Context
DLP & Data Classification
Sensitive content exposure detection when AI tools process protected enterprise data
Data Layer
All Types
Data Loss Prevention (DLP) and data classification provide the content-aware context layer that answers the most critical governance question: not just which AI tools are being used, but what data is entering them. Cisco’s 2025 findings indicate that 46% of organisations have already experienced internal data leaks through GenAI — but without DLP integration, those organisations would have had no mechanism to detect the leak at the moment it occurred. Data classification systems that label documents as Confidential, PII, Regulated, or Customer Data enable DLP systems to intercept AI-bound uploads before they leave the enterprise.
The compliance dimension makes this pillar particularly urgent. HIPAA violations carry fines up to $1.5 million per violation category per year regardless of intent. GDPR carries fines up to €20 million or 4% of global turnover. EU AI Act penalties reach €35 million for prohibited practice violations. An employee sending protected health information to an unapproved LLM creates a direct HIPAA violation — whether or not their manager approved the workflow. DLP with AI-specific awareness is the technical control that closes the intent-blind compliance exposure.
What to Monitor
→Sensitive data classification events involving AI platforms — PII, PHI, financial, confidential
→DLP policy violations on AI-bound traffic — block or warn based on data classification tier
→Structured data patterns (SSNs, card numbers, patient IDs) in AI API requests
→File upload events to AI platforms — flag document types (contracts, financial reports, source code)
→Regulatory data category mapping — track which AI tools have received each data classification tier
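The structured-pattern item above can be sketched with two minimal detectors: an SSN pattern and a Luhn-validated card-number pattern applied to an AI-bound request body. Production DLP relies on validated detector libraries with context scoring and far lower false-positive rates; these regexes are deliberately simplified illustrations.

```python
# Sketch: minimal content-aware check for structured sensitive data in an
# outbound AI request payload. Patterns are simplified illustrations only.
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to filter random digit runs from real card numbers."""
    digits = [int(c) for c in re.sub(r"\D", "", number)][::-1]
    total = sum(
        d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
        for i, d in enumerate(digits)
    )
    return total % 10 == 0

def classify_payload(text: str) -> list[str]:
    """Return the sensitive-data categories detected in an outbound payload."""
    hits = []
    if SSN_RE.search(text):
        hits.append("ssn")
    if any(luhn_ok(m.group()) for m in CARD_RE.finditer(text)):
        hits.append("card_number")
    return hits
```

A non-empty result maps to the tiered response in the list above: block or coach depending on the data classification tier the category belongs to.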
“Blocking shadow AI is counterproductive. If you block things, you are just blocking visibility. The landscape of AI is evolving so fast that people will use AI on a day-to-day basis — and blocking it is just encouraging them not to be visible around using it. The goal is governed adoption, not enforced prohibition.”
Amar Akshat, SVP Technology & Chief Architect, Paysafe — Okta C-Suite AI Survey, February 2026
Implementation Roadmap
The Practical 30-Day Implementation Playbook
Shadow AI governance is not a multi-year transformation programme. The first 30 days can establish the visibility, decision frameworks, and executive baseline that make the rest of the programme executable. Research indicates that providing enterprise-grade AI alternatives reduces unauthorised use by 89% — making the 30-day window the highest-leverage governance investment available.
Week 1 · Discover
Pull firewall, proxy, and DNS logs — build initial AI platform traffic inventory across all managed devices
Run CASB discovery report across all SaaS applications — identify AI feature usage in approved platforms
Audit OAuth grant registry — catalogue every third-party application with delegated enterprise access
Browser extension inventory across managed endpoints — flag AI-capable extensions
Create a heat map: AI usage frequency × data sensitivity × regulatory risk by business unit
Week 2 · Classify & Enforce
Classify every discovered AI tool into three tiers: Approved / Restricted / Prohibited — and publish the register
Define data handling rules per tier: what data may enter each AI category
Establish fast-track approval process for low-risk tools — 48-hour review SLA to reduce bypass incentive
Implement CASB policy enforcement — DLP coaching warnings on AI-bound sensitive data uploads
Revoke OAuth grants for AI applications in the Prohibited tier — document and communicate
Week 3 · Enable & Train
Launch role-specific training using real-work scenarios — not generic compliance tick-boxes
Publish AI usage policy — approved tools, request process, data classification rules, prohibited uses
Designate AI Stewards per business unit — champions who support governed adoption rather than policing
Create a safe reporting channel for employees to disclose AI tools they are using — remove fear of reprisal
Announce enterprise-grade AI alternatives for the top 3 use cases discovered in Week 1
Week 4 · Report & Commit
Produce the first Board AI Risk Narrative — total apps detected, % sanctioned, sensitive data events
Establish board reporting cadence — monthly KPI dashboard tied to the five board metrics below
Define control priorities for Quarter 2 — what the heat map revealed requires investment
Executive attestation — CEO and CISO formally acknowledge the risk posture and accept the governance roadmap
Set the 90-day review cycle — reassess quarterly at minimum; immediately after any trigger event
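The heat map step in the playbook (usage frequency × data sensitivity × regulatory risk, aggregated by business unit) can be sketched as a simple weighted score. The weights and record shape below are illustrative assumptions; the point is the prioritisation logic, not the specific numbers.

```python
# Sketch: rank business units by shadow-AI risk for the Week 1 heat map.
# Weights and observation fields are illustrative assumptions.
SENSITIVITY_WEIGHT = {"public": 1, "internal": 2, "confidential": 4, "regulated": 8}
REGULATORY_WEIGHT = {"none": 1, "gdpr": 3, "hipaa": 4}

def heat_map(observations: list[dict]) -> list[tuple[str, int]]:
    """Aggregate a risk score per business unit, highest first."""
    scores: dict[str, int] = {}
    # assumed shape: {"unit": ..., "events_per_week": int,
    #                 "sensitivity": ..., "regime": ...}
    for o in observations:
        score = (o["events_per_week"]
                 * SENSITIVITY_WEIGHT[o["sensitivity"]]
                 * REGULATORY_WEIGHT[o["regime"]])
        scores[o["unit"]] = scores.get(o["unit"], 0) + score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Multiplicative weighting captures the governance intuition: high-volume use of public data matters less than occasional use against regulated data, and the heat map should say so.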
Board Reporting
Executive Oversight: 5 Board-Level AI Risk Metrics
These five metrics constitute the minimum viable AI risk reporting framework for board-level oversight. Each metric answers a governance question the board is now accountable for — and provides the evidence base for demonstrating responsible AI governance to regulators, insurers, and auditors.
📊
Total AI Apps Detected
Complete inventory of every AI application accessed from enterprise infrastructure, including personal account usage tracked by CASB and network monitoring.
Target: 100% of known categories catalogued within 30 days. Zero unknown-application blind spots.
⚖️
Sanctioned vs. Unsanctioned %
Percentage of total AI tool usage accounted for by approved, restricted, and prohibited tier applications. Trend direction matters more than the absolute number.
Target: Month-over-month increase in sanctioned %. Alert if unsanctioned % is rising.
🔓
Sensitive Data Events
Number of DLP events where classified data — PII, PHI, financial records, confidential IP — was detected in AI-bound traffic. Severity-weighted by data classification tier.
Target: Zero unmitigated Tier 1 (Restricted) data events. All Tier 2 events coached in real-time.
⚡
High-Risk Workflows & Access Grants
Count of agentic AI workflows and OAuth grants identified outside the approved AI governance process. Includes machine identities with elevated privileges inherited from human accounts.
Target: All agentic workflows formally registered. Zero unreviewed admin-privileged AI agents in production.
⏱️
Time-to-Visibility
Average elapsed time between first detection of a new AI application in the environment and classification by the governance team. Measures programme responsiveness to the rapidly-expanding AI landscape.
Target: <48 hours from detection to Approved / Restricted / Prohibited classification. Track weekly.
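Two of the five metrics, the sanctioned-usage share and mean time-to-visibility, can be computed directly from the AI tool register. The field names below are illustrative assumptions rather than any reporting product's schema; the sketch shows how the dashboard numbers fall out of the inventory.

```python
# Sketch: derive two board KPIs from an AI tool register.
# Record fields are illustrative assumptions.
from datetime import datetime

def board_kpis(register: list[dict]) -> dict:
    """Sanctioned-usage percentage and mean detection-to-classification hours."""
    sanctioned = sum(r["events"] for r in register if r["tier"] == "approved")
    total = sum(r["events"] for r in register)
    ttv_hours = [
        (r["classified_at"] - r["first_seen"]).total_seconds() / 3600
        for r in register
        if r.get("classified_at")
    ]
    return {
        "sanctioned_pct": round(100 * sanctioned / total, 1) if total else 0.0,
        "mean_ttv_hours": (round(sum(ttv_hours) / len(ttv_hours), 1)
                           if ttv_hours else None),
    }
```

Tools detected but not yet classified simply drop out of the time-to-visibility average here; a production dashboard would also report their count, since a growing unclassified backlog is itself the alert condition.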
Executive Mandate
The Governance Vacuum Is the Board’s Risk to Own
Shadow AI is not a technology problem that CISOs can solve in isolation from executive leadership. It is a behaviour problem that persists because governance frameworks have not kept pace with the accessibility of AI tools — and because, in many organisations, leadership is modelling the very behaviour that creates the exposure. Microsoft’s 2026 Data Security Index found that 69% of C-suite executives prioritise speed over data privacy when adopting AI. That preference, expressed from the top of the organisation, signals to every employee that compliance is optional when productivity is at stake.
The regulatory environment of 2026 will not accept this posture. The EU AI Act’s high-risk system obligations are fully enforceable from August 2026. Gartner predicts that by 2030, more than 40% of enterprises will experience security or compliance incidents due to unauthorised AI use. Insurance underwriters are already requiring evidence of AI governance maturity for D&O coverage. The board is accountable not only for what the organisation knowingly does with AI — but for what AI systems operating within its network do without explicit authorisation.
The practical response is not prohibition. Research consistently shows that banning AI tools drives shadow usage underground rather than eliminating it — and that providing enterprise-grade alternatives reduces unauthorised use by 89%. The response is governed adoption: build the visibility infrastructure to see what is happening, establish the classification framework to make rapid decisions about each tool, and provide legitimate alternatives that meet the productivity needs driving unsanctioned adoption in the first place.
The board that governs shadow AI effectively in 2026 is not the board that said “no.” It is the board that said “yes — and here is the framework within which the answer is yes.” Visibility first. Classification second. Sanctioned alternatives third. That sequence, executed with urgency, converts the shadow AI risk register into an enterprise AI competitive advantage.
Sources: CIO — Shadow AI: The Hidden Agents Beyond Traditional Governance (November 2025) · Vectra AI — Shadow AI Explained: Risks, Costs, and Enterprise Governance (March 2026) · Noma Security — Shadow AI Agents: The New Enterprise Security Threat (December 2025) · OffSec — Shadow AI: How Unsanctioned Tools Create Invisible Risk · Proofpoint — What Is Shadow AI · Wiz — What Is Shadow AI (March 2026) · Okta — How C-Suite Leaders Are Taming Shadow AI (February 2026) · IP Consulting — Shadow AI Breaches Are Here: The $670,000 Problem (April 2026) · ISACA — The Rise of Shadow AI: Auditing Unauthorized AI Tools (September 2025) · Microsoft Work Trend Index 2025 (71% workers use unapproved AI) · Microsoft 2026 Data Security Index (69% C-suite prioritise speed over privacy) · Netskope 2025 Cloud and Threat Report (8.2 GB/month uploads; 1,550+ AI SaaS apps) · Reco State of Shadow AI Report 2025 (86% lack visibility) · Gartner (69% suspect prohibited GenAI use; 40% will face incidents by 2030) · Komprise 2025 IT Survey: AI, Data & Enterprise Risk (90% concerned; 80% experienced incidents) · IBM 2025 Cost of Data Breach Report (63% lack governance) · Menlo 2025 Report (68% use personal accounts; 57% share sensitive data) · Acuvity 2025 State of AI Security Report (49% expect shadow AI incident within 12 months) · CrowdStrike 2026 Global Threat Report · Healthcare Brew 2026 (89% reduction with approved alternatives)