Presented by 1Password
Adding agentic capabilities to enterprise environments is fundamentally reshaping the threat model by introducing a new class of actor into identity systems. The problem: AI agents are taking action within sensitive enterprise systems, logging in, fetching data, calling LLM tools, and executing workflows often without the visibility or control that traditional identity and access systems were designed to enforce.
AI tools and autonomous agents are proliferating across enterprises faster than security teams can instrument or govern them. At the same time, most identity systems still assume static users, long-lived service accounts, and coarse role assignments. They were not designed to represent delegated human authority, short-lived execution contexts, or agents operating in tight decision loops.
As a result, IT leaders need to step back and rethink the trust layer itself. This shift isn’t theoretical. NIST’s Zero Trust Architecture (SP 800-207) explicitly states that “all subjects — including applications and non-human entities — are considered untrusted until authenticated and authorized.”
In an agentic world, that means AI systems must have explicit, verifiable identities of their own, not operate through inherited or shared credentials.
“Enterprise IAM architectures are built to assume all system identities are human, which means that they count on consistent behavior, clear intent, and direct human accountability to enforce trust,” says Nancy Wang, CTO at 1Password and Venture Partner at Felicis. “Agentic systems break those assumptions. An AI agent is not a user you can train or periodically review. It is software that can be copied, forked, scaled horizontally, and left running in tight execution loops across multiple systems. If we continue to treat agents like humans or static service accounts, we lose the ability to clearly represent who they are acting for, what authority they hold, and how long that authority should last.”
How AI agents turn development environments into security risk zones
One of the first places these identity assumptions break down is the modern development environment. The integrated developer environment (IDE) has evolved beyond a simple editor into an orchestrator capable of reading, writing, executing, fetching, and configuring systems. With an AI agent at the heart of this process, prompt injection transitions aren’t just an abstract possibility; they become a concrete risk.
Because traditional IDEs weren’t designed with AI agents as a core component, adding aftermarket AI capabilities introduces new kinds of risks that traditional security models weren’t built to account for.
For instance, AI agents inadvertently breach trust boundaries. A seemingly harmless README might contain concealed directives that trick an assistant into exposing credentials during standard analysis. Project content from untrusted sources can alter agent behavior in unintended ways, even when that content bears no obvious resemblance to a prompt.
Input sources now extend beyond files that are deliberately run. Documentation, configuration files, filenames, and tool metadata are all ingested by agents as part of their decision-making processes, influencing how they interpret a project.
Trust erodes when agents act without intent or accountability
When you add highly autonomous, deterministic agents operating with elevated privileges, with the capability to read, write, execute, or reconfigure systems, the threat grows. These agents have no context, no ability to determine whether a request for authentication is legitimate, who delegated that request, or the boundaries that should be placed around that action.
“With agents, you can’t assume that they have the ability to make accurate judgments, and they certainly lack a moral code,” Wang says. “Every one of their actions needs to be constrained properly, and access to sensitive systems and what they can do within them needs to be more clearly defined. The tricky part is that they’re continuously taking actions, so they also need to be continuously constrained.”
Where traditional IAM fail with agents
Traditional identity and access management systems operate on several core assumptions that agentic AI violates:
Static privilege models fail with autonomous agent workflows: Conventional IAM grants permissions based on roles that remain relatively stable over time. But agents execute chains of actions that require different privilege levels at different moments. Least privilege can no longer be a set-it-and-forget-it configuration. Now it must be scoped dynamically with each action, with automatic expiration and refresh mechanisms.
Human accountability breaks down for software agents: Legacy systems assume every identity traces back to a specific person who can be held responsible for actions taken, but agents completely blur this line. Now it’s unclear when an agent acts, under whose authority it is operating, which is already a tremendous vulnerability. But when that agent is duplicated, modified, or left running long after its original purpose has been fulfilled, the risk multiplies.
Behavior-based detection fails with continuous agent activity: While human users follow recognizable patterns, such as logging in during business hours, accessing familiar systems, and taking actions that align with their job functions, agents operate continuously, across multiple systems simultaneously. That not only multiplies the potential for damage to a system but also causes legitimate workflows to be flagged as suspicious to traditional anomaly detection systems.
Agent identities are often invisible to traditional IAM systems: Traditionally, IT teams can more or less configure and manage identities operating within their environment. But agents can spin up new identities dynamically, operate through existing service accounts, or leverage credentials in ways that make them invisible to conventional IAM tools.
“It’s the whole context piece, the intent behind an agent, and traditional IAM systems don’t have any ability to manage that,” Wang says. “This convergence of different systems makes the challenge broader than identity alone, requiring context and observability to understand not just who acted, but why and how.”
Rethinking security architecture for agentic systems
Securing agentic AI requires rethinking the enterprise security architecture from the ground up. Several key shifts are necessary:
Identity as the control plane for AI agents: Rather than treating identity as one security component among many, organizations must recognize it as the fundamental control plane for AI agents. Major security vendors are already moving in this direction, with identity becoming integrated into every security solution and stack.
Context-aware access as a requirement for agentic AI: Policies must become far more granular and specific, defining not just what an agent can access, but under what conditions. This means considering who invoked the agent, what device it’s running on, what time constraints apply, and what specific actions are permitted within each system.
Zero-knowledge credential handling for autonomous agents: One promising approach is to keep credentials entirely out of agents’ view. Using techniques like agentic autofill, credentials can be injected into authentication flows without agents ever seeing them in plain text, similar to how password managers work for humans, but extended to software agents.
Auditability requirements for AI agents: Traditional audit logs that track API calls and authentication events are insufficient. Agent auditability requires capturing who the agent is, whose authority it operates under, what scope of authority was granted, and the complete chain of actions taken to accomplish a workflow. This mirrors the detailed activity logging used for human employees, but must adapt for software entities executing hundreds of actions per minute.
Enforcing trust boundaries across humans, agents, and systems: Organizations need clear, enforceable boundaries that define what an agent can do when invoked by a specific person on a particular device. This requires separating intent from execution: understanding what a user wants an agent to accomplish from what the agent actually does.
The future of enterprise security in an agentic world
As agentic AI becomes embedded in everyday enterprise workflows, the security challenge isn’t whether organizations will adopt agents; it’s whether the systems that govern access can evolve to keep pace.
Blocking AI at the perimeter is unlikely to scale, but neither will extending legacy identity models. What’s required is a shift toward identity systems that can account for context, delegation, and accountability in real time, across both humans, machines, and AI agents.
“The step function for agents in production will not come from smarter models alone,” Wang says. “It will come from predictable authority and enforceable trust boundaries. Enterprises need identity systems that can clearly represent who an agent is acting for, what it is allowed to do, and when that authority expires. Without that, autonomy becomes unmanaged risk. With it, agents become governable.”
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].
Credit: Source link























