Executive brief
AI agents are being deployed faster than the frameworks designed to govern them can evolve. The result is an already flawed identity and access management (IAM) model that is fast approaching its breaking point.
The simple reality is that the IAM model determining who gets access to what was designed for two types of actors — humans and machines. Agent AI is neither. And that gap has consequences.

The Original IAM Model — Built for Humans
The Foundation
The first version of identity and access management was straightforward. Authenticate an authorized person. Grant access. The underlying assumption was simple: an identity belonged to a human, a human had a defined role, and that role determined what they could touch.
Over time, the model matured. Role-based access control (RBAC) replaced flat access grants with tiered permissions — different roles, different levels of reach. Then regulatory pressure pushed organizations toward "least privilege": granting only the minimum access needed, and auditing it regularly.
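As a concrete illustration of the RBAC and least-privilege model described above, access reduces to a lookup: roles map to explicit permission sets, and anything not granted is denied. This is a minimal sketch; the role and permission names are hypothetical, not drawn from any specific product.

```python
# Minimal RBAC sketch: each role maps to an explicit set of permissions,
# and an access check is simple set membership. Role and permission
# names below are illustrative placeholders.

ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "engineer": {"reports:read", "pipelines:run"},
    "admin": {"reports:read", "pipelines:run", "users:manage"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Least-privilege check: allow only what the role explicitly grants."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "pipelines:run"))   # False: not granted to analysts
print(is_allowed("engineer", "pipelines:run"))  # True
```

Note the default-deny posture: an unknown role, or a permission never granted, simply returns `False` — the "minimum access needed" principle in code form.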
A change management process was established to manage privilege over time — periodic reviews, access certifications, provisioning and de-provisioning tied to organizational changes, and even manual request handling when human creativity justified a change.
The least privilege model was designed for actors who work 9-to-5, move at human speed, and take direction from managers. That world is changing.
IAM Expands for Machines — But Stays Simple
A New Class of Actor
The explosion of cloud infrastructure, APIs, and IoT devices forced a significant expansion of the IAM model. A second class of actor entered the picture: non-human identities — machines, services, APIs, and automated pipelines.
These identities don't log in with passwords. They authenticate via tokens, certificates, or API keys. And because their function is fixed and repetitive — a service that pulls data, an API that processes transactions, a bot that returns pre-defined answers — they were granted broad standing privileges aligned to that function, so as not to slow down their relentless pace.
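The token-and-key pattern described above can be sketched in a few lines: the server stores only a hash of each machine credential and maps it to a fixed, standing scope set. The key and scope values here are illustrative assumptions, not a real registry.

```python
# Sketch of machine-identity authentication: a service presents a
# long-lived API key; the server stores only a hash of each key, tied
# to a fixed, standing set of scopes. Key and scope names are
# illustrative placeholders.
import hashlib
import hmac

KEY_REGISTRY = {
    hashlib.sha256(b"svc-data-puller-key").hexdigest(): {"data:read"},
}

def authenticate(api_key: str) -> set:
    """Return the standing scopes for a valid key, or an empty set."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    for stored_digest, scopes in KEY_REGISTRY.items():
        # Constant-time comparison avoids leaking digest prefixes.
        if hmac.compare_digest(stored_digest, digest):
            return scopes
    return set()
```

The point of the sketch is the standing grant: once the key validates, the same broad scopes apply on every call, with no per-request judgment — exactly the assumption that agentic AI breaks.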
The key assumption: non-human actors are deterministic. They do the same thing, in the same way, on a loop. Change management barely applies because the function never changes.
Machines get standing privileges because they behave predictably. That assumption breaks entirely with agentic AI.
Enter Agent AI — The Actor That Fits Nowhere
The Hybrid Problem
AI agents combine what has always been kept separate.
They have the creativity and adaptability of human actors — interpreting instructions, making decisions, taking novel paths to reach a goal. But they operate at the speed and relentlessness of non-human systems — running continuously, without fatigue, without the friction of human judgment.
That combination creates three distinct failure modes under the current IAM model:
Failure Mode 1: Treated as non-human → Runs wild. Grant an AI agent the broad standing privileges typically assigned to machine identities, and it will exploit every accessible resource at machine speed, with human-level creativity. The blast radius is unpredictable. News stories abound.
Failure Mode 2: Treated as human → Breaks at scale. Constrain an AI agent under the least-privilege model — manual change management — and the volume of tasks and permission changes will immediately overwhelm the process.
Failure Mode 3: Ignored entirely → Becomes dark matter. The most common outcome today. AI agents are deployed without being enrolled in any identity governance framework. They become unmanaged identities — operating inside the organization, accumulating access, taking action — completely invisible, like much of identity today.
Agentic dark matter isn't a theoretical risk. It's the default outcome when Agent AI is deployed faster than governance frameworks can keep up.

The Need — IAM Must Evolve Again
The history of IAM is a history of evolution. It expanded once to accommodate the changing roles of humans. Then again to accommodate non-human identities. It must expand again, to accommodate a hybrid actor that combines the powerful (but incompatible) characteristics of both. And if that were not difficult enough, there are two additional complications:
- Agent AI is not simply a third independent class of actor. It is a delegated actor — one that operates on behalf of a human, machine, bot or service, inheriting intent and, in some cases, privilege from such actors. The IAM model must account for that delegation chain: who authorized the agent, what they were permitted to do, and whether the agent is operating within those boundaries.
- AI agents will exploit any available shortcut to accomplish their goal — regardless of their own privilege constraints. That makes IAM hygiene — well-managed identity across the entire environment — a prerequisite, not a nice-to-have. No more unmanaged applications, accounts, authentication paths, overpermissioning, insufficient controls or theory-based audit.
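The delegation chain described above can be made concrete: an agent's effective privilege is the intersection of what the agent was granted for a task and what its delegator actually holds, so the agent can never exceed its authorizer's own access, and every decision records who authorized it. This is a sketch under those assumptions; all names and scopes are hypothetical.

```python
# Sketch of a delegated-identity check: the agent's effective privilege
# is the intersection of the agent's task grant and the delegator's own
# permissions, preserving the delegation chain for audit. All names and
# scopes are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Delegation:
    delegator: str           # who authorized the agent
    delegator_scopes: set    # what the delegator may do
    agent_scopes: set        # what the agent was granted for this task

def effective_scopes(d: Delegation) -> set:
    """An agent never exceeds its authorizer's own access."""
    return d.agent_scopes & d.delegator_scopes

def agent_may(d: Delegation, action: str) -> bool:
    allowed = action in effective_scopes(d)
    # Audit line preserves the delegation chain for later review.
    print(f"agent acting for {d.delegator}: {action} -> {allowed}")
    return allowed
```

For example, if the delegator holds ticket access but not billing access, an agent granted both ends up with ticket access only — the boundary check answers all three questions at once: who authorized the agent, what they were permitted to do, and whether the agent stayed inside those limits.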
What's needed:
- Identification of all actors, including Agent AI
- A delegated identity model for Agent AI that captures the relationship between prompter and agent
- Dynamic, context-aware privilege tied to the prompter — not broad standing permissions for the agent
- Continuous observability and audit across all user (including agent) activity, in real time
- Strong identity hygiene as the foundation — because agents will find and exploit any gap
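The "dynamic, context-aware privilege" requirement above contrasts with the standing grants machines receive: instead of a permanent scope, the agent gets a short-lived grant bound to a specific prompter and task, re-checked on every use. A minimal sketch, assuming hypothetical field names and a five-minute lifetime:

```python
# Sketch of dynamic, task-scoped privilege: a grant is tied to a
# prompter and a single task, expires quickly, and is validated on
# every use. Field names and the TTL are illustrative assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class TaskGrant:
    prompter: str
    task_id: str
    scopes: set
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: float = 300.0  # assumption: five-minute grants

    def permits(self, scope: str, task_id: str) -> bool:
        """Valid only while fresh, for the original task, within scope."""
        fresh = (time.time() - self.issued_at) < self.ttl_seconds
        return fresh and task_id == self.task_id and scope in self.scopes
```

A grant used for a different task, or after expiry, fails the check even if the scope itself matches — privilege follows the prompter's intent rather than standing with the agent.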
The question isn't whether IAM needs to evolve. It's whether your organization will evolve it proactively — or wait for an agent to make the case for you.
Three Eras of Identity: A Comparison of Requirements
What This Means — And Where Orchid Fits
Most identity governance tools operate at the access layer — they manage what identities are declared to have access to. But declaration and reality are not the same thing.
Orchid Security works at the source of identity, the application binary layer, where identity behavior actually happens. Not from documentation or declared configurations — from the ground truth of what is executing, what is being accessed, and what authorization logic is actually running.
For organizations deploying AI agents, that distinction matters enormously:
- Orchid sees all users, including agents, and their activity in real time — not after the fact, and not from logs, but from the source.
- Orchid provides the continuous observability layer that agentic IAM requires — surfacing what agents are doing, on whose behalf, and whether it aligns with intended privilege.
- Orchid feeds that real context back into your existing IAM stack — it's not a rip-and-replace. It's the missing layer that makes your current investment work for a world that includes agents.
You can't govern what you can't see. Orchid makes agent identities visible — and governable — before they become dark matter.
If you are deploying AI agents — or plan to — here is what the current state of IAM means for you:
- Your existing human identity governance will not scale to agent volumes or speed.
- Your existing non-human identity practices create unacceptable blast radius when applied to agents with creative, goal-directed behavior.
- Agents deployed without enrollment in any identity framework become a new category of unmanaged identity — invisible to governance and compliance.
The organizations that build the right identity foundation now will be the ones that deploy agents safely at scale. The others will discover their agentic dark matter the hard way.
Understanding, let alone maintaining, identity security posture across any large organization — with its diverse and always evolving application estate — is a constant challenge.
Remember, that estate includes applications created by different developers, at different times — when technology, regulations, and cyber risk were different — and even by different organizations, if acquisitions were part of the growth strategy.
Any approach, but especially an automated one, that provides a comprehensive and accurate view into the true state of identity, is hugely valuable to CISOs. Especially when it can surface all of the identity flows coded in each application. We know that many threat actors are adept at finding the alternate or forgotten ways into our organizations, and this report highlights the most common exposures we need to look out for (and address).
The insights shared here are instructive for every cyber security professional.
- 48% of applications store hard-coded, cleartext credentials or use weak hashing
- 44% include authentication paths that bypass the corporate Identity Provider
- 40% lack baseline controls like rate limiting, account lockout, and password complexity
- 37% rely on outdated or non-standard authentication protocols
- 37% fail to enforce access controls fully or at all
Checklist to Identify the Top Missing Identity Controls
Discovery and Gap Analysis: Continuous Visibility Beyond the Known
Orchid delivers continuous, telemetry-driven visibility into identity implementations across all automatically discovered applications regardless of geography, technology stack, or existing compliance knowledge. This capability empowers organizations to uncover both commonly missed controls and hidden identity mechanisms that conventional audits and reviews often fail to detect.
No Prior Context or Manual Input Required
Unlike traditional assessment and onboarding processes that rely on interviews, documentation, or involvement from app owners or developers, Orchid's analysis is entirely autonomous. It requires no prior data points, tribal knowledge, or manual onboarding, making it ideal for large, fast-changing environments.
Save Time, Save Money — Harness Your True Identity Landscape
By eliminating the need for human-led discovery, context-gathering, or code walkthroughs, Orchid significantly reduces the time and cost of identity posture management. It accelerates discovery, gap analysis, and remediation cycles — including onboarding — freeing security teams and engineering resources to focus on higher-impact work while making better use of the organization's existing, siloed identity tools.
Checklist, Fully Covered
Our platform aligns directly with the Checklist to Identify the Top Missing Identity Controls (and many more controls), providing instant, actionable insights into where your applications stand and what needs attention.
- January 2025
PowerSchool Breach
Cybercriminals reportedly used stolen credentials to access a support portal that lacked MFA, exposing sensitive student and parent data.
- March 2025
Jaguar Land Rover Incident
A threat actor used stolen credentials to infiltrate the company’s Jira system, allegedly stealing over 700 internal documents.
- April 2025
Verizon Data Breach Investigations Report
In its latest report, Verizon identifies stolen credentials as the top breach entry point.

