Shadow AI Is the New Shadow IT

AI agents are proliferating across enterprises faster than security teams can track. Sound familiar?


AI agents are everywhere now. Not just in labs and demos — in production, doing real work, accessing real systems. And if you're in security, you've probably noticed something uncomfortable: you have no idea how many are running in your organization or what they're doing.

Welcome to Shadow AI.

The Pattern We've Seen Before

If you've been in tech long enough, this feels familiar. Remember when employees started using Dropbox because IT was too slow to provision file sharing? Or when marketing spun up their own Salesforce instance because the official CRM approval process took six months?

Shadow IT happened because people needed tools faster than governance could provide them. The tools worked. They were useful. And by the time security noticed, they were everywhere.

AI agents are following the exact same trajectory — except faster, and with higher stakes.

The Numbers Don't Lie

According to IBM's latest research, 79% of organizations are already deploying AI agents. That's not experimenting. That's deploying.

But here's what that number doesn't tell you: how many of those deployments went through proper security review? How many have documented permissions models? How many have audit trails?

In my experience, the answer is "almost none." People install OpenClaw or similar frameworks, connect them to whatever APIs they need, and start getting work done. The security team finds out when something breaks or when compliance asks questions nobody can answer.

Why Agents Are Worse Than Shadow SaaS

Traditional shadow IT was bad, but it had natural limits. A rogue Dropbox folder could only do so much. A marketing team's unauthorized CRM was isolated to marketing data.

AI agents don't have those limits.

An agent with access to your email can read every message, send on your behalf, and interact with external parties. An agent with filesystem access can traverse your entire directory structure. An agent with API credentials can take actions in production systems — and it will, because that's what you asked it to do.

The threat isn't "someone might read our files." It's "an autonomous system with broad permissions is making decisions at machine speed."

Moving Faster Than Security Can Track

This is the part that should worry you.

Operant AI launched something this week called "Agent Protector," specifically targeting what they call the "shadow AI" problem. Their pitch: agents are creating security blind spots because they're autonomous systems with access to sensitive data that move faster than security teams can track.

They're not wrong.

Traditional security models assume human-speed operations. You can review logs from yesterday. You can audit access requests from last week. You can investigate incidents after someone notices something wrong.

Agents operate in real-time. By the time your SIEM flags unusual activity, an agent might have already processed thousands of files, made dozens of API calls, and sent a handful of messages. The blast radius expands at machine speed.

The Visibility Problem

Here's the fundamental issue: most organizations have no central view of their AI agent activity.

They might know which agents are officially sanctioned. They probably don't know which ones employees are running on their personal devices with company credentials. They definitely don't have unified logs of what those agents are doing, what they're accessing, or what decisions they're making.

This isn't hypothetical. I've talked to security leaders at mid-sized companies who discovered agents running in their environment only because an API rate limit got triggered. Not through monitoring. Not through policy. By accident.

What Cisco Gets Right (And What's Still Missing)

Cisco just announced their AI Defense expansion at Cisco Live EMEA — a suite focused on "agent protection, interaction governance, and resilient connectivity for AI-driven workflows."

Credit where it's due: the big players are recognizing that agentic AI needs different security treatment than traditional software. Governance and visibility are finally on the radar.

But enterprise security suites solve enterprise problems. What about the startup with 50 employees and a dozen different agents running across engineering, sales, and ops? What about the solo practitioner whose agent has access to client data? What about the consultants bringing their personal AI agents into client environments?

The gap isn't just tooling. It's mindset. We're still thinking about AI agents as applications to be secured rather than autonomous actors to be governed.

What a Control Plane Actually Looks Like

Here's what I think the industry is missing: agents need a control plane, not just security tools.

A control plane means:

  • Centralized policy management. One place to define what agents can and can't do, regardless of which framework they're built on.
  • Real-time visibility. Not logs you review tomorrow — dashboards showing what's happening now.
  • Approval workflows. Automatic escalation for high-risk actions, automatic approval for routine ones.
  • Audit trails by default. Every action logged, every decision traceable, every capability usage recorded.
  • Instant revocation. The ability to kill an agent's access in seconds when something goes wrong.

This is infrastructure that sits between agents and their capabilities. It doesn't replace the security tools you already have — it gives them something to work with.
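To make that concrete, here's a minimal sketch of the gate such a control plane puts between an agent and its capabilities. Everything in it is illustrative: the capability names, the policy table, and the in-memory stores are assumptions, not any particular product's API. The point is the shape: default-deny policy lookup, an audit record for every decision, and revocation that takes effect on the very next call.

```python
import time
import uuid

# Hypothetical policy table: which capabilities are auto-approved
# and which escalate to a human. Unknown capabilities are denied.
POLICY = {
    "email.read": "allow",
    "email.send_external": "escalate",  # high-risk: needs human approval
    "fs.read": "allow",
    "prod_api.write": "escalate",
}

AUDIT_LOG = []   # in practice: an append-only store, not a Python list
REVOKED = set()  # agent IDs whose access has been killed


def check(agent_id: str, capability: str) -> str:
    """Gate a capability call: return 'allow', 'escalate', or 'deny',
    and record the decision in the audit trail either way."""
    if agent_id in REVOKED:
        decision = "deny"
    else:
        decision = POLICY.get(capability, "deny")  # default-deny
    AUDIT_LOG.append({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "agent": agent_id,
        "capability": capability,
        "decision": decision,
    })
    return decision


def revoke(agent_id: str) -> None:
    """Instant revocation: every subsequent check for this agent denies."""
    REVOKED.add(agent_id)
```

Usage follows directly: `check("agent-7", "email.read")` returns `"allow"`, `check("agent-7", "email.send_external")` returns `"escalate"`, and after `revoke("agent-7")` every call for that agent returns `"deny"` — with all four decisions sitting in the audit log.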

The Enterprise Mandate

If you're running agents in an enterprise context, you need this infrastructure yesterday. Not because regulators are demanding it (though they will), but because the alternative is unacceptable risk.

Ask yourself:

  • Can you enumerate every AI agent with access to company systems?
  • Do you have logs of what those agents have done in the last 24 hours?
  • Could you revoke an agent's access in under 60 seconds if needed?
  • Do you know which agents have sent external communications on behalf of employees?

If the answer to any of these is "no" or "I'm not sure," you have a shadow AI problem. The question is whether you acknowledge it before or after the incident.

The Path Forward

Shadow IT eventually got absorbed into legitimate IT governance. It took years, and plenty of breaches along the way, but organizations figured out how to provide useful tools fast enough that employees didn't need to go rogue.

Shadow AI needs to follow the same path, but faster. The stakes are higher and the speed of proliferation is greater.

That means:

  1. Acknowledge the reality. Agents are already running. Pretending otherwise is denial, not strategy.
  2. Get visibility first. You can't govern what you can't see. Start with discovery.
  3. Build the control plane. Whether you build, buy, or cobble it together — you need centralized governance.
  4. Make compliant agents easier than shadow agents. If your approved agent workflow takes two weeks and running your own takes two minutes, guess which one wins?
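Step 2 — discovery — can start smaller than people expect. One low-effort approach is mining the logs you already have, such as API gateway traffic, for client identifiers that aren't on the sanctioned list. A rough sketch, assuming a made-up log format and a hypothetical sanctioned-agent list:

```python
# Hypothetical allowlist of sanctioned agent user-agent strings.
SANCTIONED = {"openclaw/1.4", "internal-reporting-bot"}


def find_shadow_agents(log_lines):
    """Return unsanctioned user-agents seen in gateway logs,
    with a count of how many requests each one made."""
    counts = {}
    for line in log_lines:
        # Assumed log format: "<timestamp> <user_agent> <endpoint>"
        _, user_agent, _ = line.split(" ", 2)
        if user_agent.lower() not in SANCTIONED:
            counts[user_agent] = counts.get(user_agent, 0) + 1
    return counts
```

This won't catch agents that never touch your gateway, but it's the kind of cheap first pass that surfaces the "we found it when the rate limit tripped" cases before the rate limit does.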

The companies that figure this out will deploy agents confidently at scale. The ones that don't will either ban agents entirely (losing the productivity gains) or stumble into incidents they could have prevented.

Shadow AI is here. The question is what you're going to do about it.

P.S. This is exactly why we built Molten.Bot as a control plane, not just a hosting service. Visibility, governance, and audit trails aren't optional extras — they're the foundation. If you're deploying agents and want to actually know what they're doing, we should talk.