AI Agents Are the Third Kind of Identity
Your IAM system knows humans and services. Agents are neither — and that's breaking everything.
Your IAM system knows two kinds of principals: humans and services. Humans get usernames, passwords, MFA, and session timeouts. Services get API keys, service accounts, and certificate-based auth. The distinction is clear and the tooling is mature.
AI agents fit into neither category. And that's breaking everything.
The Identity Problem Nobody Planned For
The Cloud Security Alliance just published a report that should concern anyone deploying AI agents: Securing Autonomous AI Agents. The findings are stark. Organizations have low confidence that their existing IAM tools can manage agent identity. Responsibility for agent identity is undefined — security, IT, DevOps, and AI teams all share accountability, which means nobody owns it.
Most enterprises are retrofitting existing tools rather than building purpose-built systems for agent discovery and governance. The result? Partial, delayed, and siloed visibility into what agents are actually doing.
This isn't a future problem. It's happening right now, at scale. Microsoft's Cyber Pulse report found that over 80% of Fortune 500 companies are deploying AI agents built using low-code or no-code tools. That's a lot of autonomous systems operating with identity frameworks that weren't designed for them.
Humans vs. Services vs. Agents
Here's why the traditional model breaks down.
Humans are interactive. They authenticate once, maintain a session, and their actions are bounded by attention and working hours. You can challenge them with MFA. You can revoke their access and expect them to notice immediately. They're slow, predictable, and auditable.
Services are automated but deterministic. A cron job does the same thing at the same time. A microservice handles requests within well-defined parameters. You provision their credentials once, rotate on a schedule, and monitor for anomalies against a known baseline.
Agents are neither. They're autonomous but non-deterministic. They make decisions based on context. They might send zero emails one day and a hundred the next, depending on what you asked them to do. They don't have sessions in the traditional sense — they're always running, always capable, always one prompt away from taking action.
The fundamental mismatch: we're giving agents human-style credentials (API keys, OAuth tokens) while expecting service-style predictability. That combination doesn't work.
Static Credentials Were Never Designed For This
Here's what the CSA report highlights about how organizations are credentialing agents today: API keys, usernames and passwords, and shared service accounts remain common. More sophisticated approaches like OIDC, OAuth PKCE, or SPIFFE workload identities are far less adopted.
The problem with static credentials is that they're static. Once issued, they grant the same access regardless of context. An API key doesn't know if the agent is performing routine maintenance or exfiltrating your customer database. It just authenticates the request.
This matters because agents don't operate in bounded sessions. A human logs in, does work, logs out. An agent might run continuously for days, making decisions across thousands of contexts. Static credentials can't distinguish between "agent doing its job" and "agent doing something it shouldn't."
The CSA puts it directly: "Static credentials and periodic policy checks cannot support the continuous authentication and context-aware authorization required for autonomous agents."
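The gap can be made concrete with a small sketch. Everything here is illustrative, not a real IAM API: the key, the agent names, `AGENT_POLICY`, and the scopes are all invented. A static key check answers only "is this key valid?", while a context-aware check re-evaluates each action against the agent's declared scope and an expected activity rate.

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    action: str  # e.g. "db.read", "email.send"

# Static model: the key either matches or it doesn't. It grants the
# same access for routine maintenance and for bulk exfiltration.
VALID_KEYS = {"sk-agent-123": "billing-agent"}

def static_authorize(api_key: str) -> bool:
    return api_key in VALID_KEYS

# Context-aware model: every significant action is re-evaluated
# against the agent's allowed scope and an expected activity rate.
AGENT_POLICY = {
    "billing-agent": {
        "allowed_actions": {"db.read", "invoice.create"},
        "max_actions_per_hour": 100,
    },
}

def contextual_authorize(req: AgentRequest, actions_this_hour: int) -> bool:
    policy = AGENT_POLICY.get(req.agent_id)
    if policy is None:
        return False
    if req.action not in policy["allowed_actions"]:
        return False  # out-of-scope action, even with a valid credential
    if actions_this_hour >= policy["max_actions_per_hour"]:
        return False  # anomalous volume triggers a deny
    return True
```

Note what the static check cannot express: an agent holding a valid key is authorized to do anything the key allows, at any volume, forever.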
The Traceability Gap
Even if you've got credentials sorted, there's a deeper problem: traceability.
According to the report, most organizations cannot determine what agents did, what they accessed, under which authorization, or on whose request. That's not a minor gap — it's a fundamental blind spot.
When something goes wrong with a human user, you check the logs. You see their session, their actions, their access patterns. When something goes wrong with an agent, what do you check?
Agent registries are fragmented across identity providers, custom databases, internal service registries, and third-party platforms. There's no canonical answer to "show me what this agent has done in the last 24 hours." The data exists in pieces, scattered across systems that weren't designed to correlate agent activity.
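A minimal sketch of what a unified record could look like, assuming an in-memory list stands in for a real store (the field names are illustrative). Each entry captures the four things the report says organizations can't answer: what the agent did, what it accessed, under which authorization, and on whose request.

```python
import time

# Illustrative in-memory store; a real deployment would write to a
# centralized, queryable audit system.
ACTION_LOG: list[dict] = []

def record_action(agent_id: str, action: str, resource: str,
                  credential_id: str, requested_by: str) -> None:
    ACTION_LOG.append({
        "ts": time.time(),
        "agent_id": agent_id,            # which agent
        "action": action,                # what it did
        "resource": resource,            # what it accessed
        "credential_id": credential_id,  # under which authorization
        "requested_by": requested_by,    # on whose request
    })

def actions_in_last(agent_id: str, seconds: float) -> list[dict]:
    """Answer 'show me what this agent has done in the last N seconds'."""
    cutoff = time.time() - seconds
    return [e for e in ACTION_LOG
            if e["agent_id"] == agent_id and e["ts"] >= cutoff]
```

The hard part isn't the schema; it's getting every system an agent touches to emit records into one correlatable place.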
Why "Treat Agents Like Humans" Doesn't Work
Some organizations try to solve this by treating agents as human users. Give them accounts, give them credentials, log their activity the same way.
This fails for three reasons:
Scale. Humans don't scale. Your security team can review anomalies in human user behavior because there are maybe thousands of users taking thousands of actions. Agents can take millions of actions per day. Human-oriented tools can't keep up.
Intent. Human actions have human context. You can ask someone why they did something. Agents don't have intent in the same way — they have instructions, prompts, and decision chains. Understanding what an agent did requires understanding the reasoning process, not just the action.
Response time. When a human account gets compromised, you have time. The attacker is human too — they move at human speed. When an agent goes wrong, the blast radius expands at machine speed. By the time you notice, the damage might already be done.
Why "Treat Agents Like Services" Doesn't Work Either
The other approach is to treat agents like traditional service accounts. Narrow permissions, defined scopes, predictable behavior.
This fails because agents aren't predictable. The whole point of an agent is that it figures out what to do based on context. If you could define exactly what it would do ahead of time, you wouldn't need an agent — you'd write a script.
Service accounts work because you know what the service does. You can write precise IAM policies because the behavior is deterministic. Agent behavior is inherently non-deterministic. You can guide it with guardrails and policies, but you can't predict every action.
That's not a bug; it's the whole point of agents. It's also why service-oriented identity models break down.
Agents Need Their Own Identity Category
Here's what I think the industry needs to accept: AI agents are a third category of identity principal. Not human, not service. Agent.
This new category requires:
- Continuous authentication. Not "authenticate once, trust forever." Validate agent identity on every significant action.
- Context-aware authorization. Permissions that adapt based on what the agent is trying to do and why, not just static role assignments.
- Real-time behavior monitoring. Not log analysis after the fact — live observation of agent actions with the ability to intervene.
- Attribution chains. For every agent action, a clear trail back to the human who requested it and the policy that authorized it.
- Instant revocation. Kill switches that work in seconds, not minutes or hours.
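As one illustration of the attribution-chain idea (all names here are hypothetical), each action can carry a pointer to the action that spawned it, so any step can be traced back to the originating human request and the policy that authorized it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentAction:
    action_id: str
    agent_id: str
    human_principal: str  # who ultimately requested this
    policy_id: str        # which policy authorized it
    parent_action_id: Optional[str] = None  # upstream step, if any

def trace(action_id: str, index: dict[str, AgentAction]) -> list[str]:
    """Walk from any action back to the root human-initiated request."""
    chain = []
    current: Optional[str] = action_id
    while current is not None:
        chain.append(current)
        current = index[current].parent_action_id
    return chain
```

The root of the chain is always a human-initiated request, which is exactly what an auditor will ask for.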
None of this exists in standard IAM tooling today. That's why the CSA report found such low confidence — security teams are trying to solve a new problem with old tools.
The Enterprise Wake-Up Call
If you're deploying agents in an enterprise context, this should be keeping you up at night.
The CSA report notes that respondents express uncertainty about their ability to pass compliance audits related to AI agent activity and access controls. That's not surprising — most audit frameworks assume human actors or deterministic services. Agents fit neither model.
When your auditor asks "who accessed this data?" and the answer is "an AI agent acting on behalf of this user, under this policy, using this credential, following this reasoning chain" — do you have the logs to prove it? Can you reconstruct the decision process? Can you demonstrate that appropriate controls were in place?
If not, you have a compliance problem. And compliance problems become business problems fast.
What Comes Next
The industry is starting to wake up. NIST is developing an agent-specific security framework. The Coalition for Secure AI is working on standards. Enterprise vendors are adding agent-aware features to their platforms.
But frameworks and standards take time. The agents are already deployed. The gap between capability and governance is widening every day.
Here's what you can do now:
- Inventory your agents. You can't secure what you can't see. Start with discovery.
- Centralize agent credentials. Get them out of environment variables and into proper secrets management with rotation and audit trails.
- Implement action logging. Every agent action should be logged in a way that can be correlated and queried.
- Define escalation paths. Which actions require human approval? Build the workflows now.
- Test your revocation. Can you kill an agent's access in under 60 seconds? If not, fix that.
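The last step can be turned into a drill. A toy sketch, assuming an in-memory revocation set; in a real system, revocation has to propagate to every enforcement point, and that propagation is where the latency lives:

```python
import time

REVOKED: set[str] = set()  # stand-in for a distributed revocation list

def revoke(agent_id: str) -> None:
    REVOKED.add(agent_id)

def is_authorized(agent_id: str) -> bool:
    return agent_id not in REVOKED

def revocation_drill(agent_id: str, deadline_s: float = 60.0) -> bool:
    """Revoke, then measure how long until the check point observes it."""
    start = time.monotonic()
    revoke(agent_id)
    while is_authorized(agent_id):
        if time.monotonic() - start > deadline_s:
            return False  # revocation didn't take effect in time
        time.sleep(0.01)
    return True
```

Run the drill on a schedule, not just once: revocation paths rot quietly as infrastructure changes around them.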
The organizations that figure out agent identity will deploy agents confidently and at scale. The ones that don't will either avoid agents entirely (and lose the competitive advantage) or deploy them blindly (and accept the risk).
There's a third category of identity now. It's time to treat it like one.
P.S. This is exactly why Molten.Bot exists. We built agent identity and governance into the foundation — continuous monitoring, action-level audit trails, real-time visibility, and instant revocation. If you're deploying agents and need to actually know what they're doing, we should talk.