Product News
Identity Security for AI Agents

Aakash Bhardwaj
Aug 4, 2025
Introduction
The emergence of autonomous AI agents is not just a technological shift; it represents a fundamental change in how enterprises operate, build software, and interact with data. From copilots enhancing human workflows to multi-agent systems managing complex infrastructure, these agents are no longer just tools. They make decisions, access sensitive data, chain API calls, and even create sub-agents to complete tasks.
Unfortunately, while organizations rush to embrace AI, most are attempting to retrofit traditional Identity and Access Management (IAM) systems that were never designed for this new class of workloads. These systems assume human users with predictable behavior, long-lived identities, and static roles. AI agents break these assumptions and the illusion of control that comes with them.
In this blog, we’ll explore why AI agents represent a fundamentally different identity security challenge, propose what a secure agent identity framework should look like, and explain how ReShield is building scalable, policy-driven, and auditable agent ecosystems.
Emerging Challenges with AI Agent Access
Traditional IAM systems like OAuth, SAML, OIDC, and role-based access control were designed for people and static machine identities. They share three core assumptions:
Identities are long-lived
Roles are mostly static
Access is granted at provisioning time
These assumptions fail in the context of AI agents due to the following:
Ephemeral Nature: Agents are created and destroyed within seconds, rendering static provisioning models ineffective.
Autonomy: Agents can independently request new resources or delegate tasks to sub-agents.
Shared or Static Credentials: Many agents use long-lived tokens or shared service accounts, undermining auditability and traceability.
Unbounded Delegation: Agents often act on behalf of users or systems, making delegation chains opaque and difficult to track.
These gaps create massive blind spots and break traditional models of authentication, authorization, and governance.
Real-World Risks of Unmanaged Agent Access
These risks are not hypothetical. Organizations are already experiencing real attack vectors through poorly governed agents:
Prompt Injection: Malicious prompts can manipulate agents into exfiltrating data or modifying workflows.
Credential Sprawl: Hard-coded secrets and shared tokens can lead to privilege escalation and lateral movement.
Shadow Agents: Agents created without security approval operate without oversight or visibility.
Broken Chain of Custody: When agents invoke other agents, there are no clear boundaries to assign accountability.
These risks mirror traditional access challenges such as overprovisioning, lack of auditing, and broken trust, but they are amplified by AI agents’ speed, autonomy, and scale.
Industry Case Studies
Case Study 1: LLM Integration Gone Wrong
A global SaaS company integrated a third-party LLM to summarize support tickets. They used a shared service account token with read/write/delete access to every ticket. The LLM retained context from previous prompts, leading to sensitive internal data being leaked in error logs. The absence of scoped, per-agent identity made accountability impossible. Rotating the shared key required a full system redeployment.
Case Study 2: Orchestrator Abuse in CI/CD
A fintech firm deployed an AI orchestrator to manage CI/CD pipelines. It spawned sub-agents for testing, deployment, and rollback. Due to overly permissive scopes and no identity isolation, a test agent triggered an unintentional production rollback. Postmortem analysis found no way to distinguish the orchestrator’s actions from the sub-agents in audit logs.
Case Study 3: Shadow Agent in Internal Applications
A dev tools team embedded a prompt-based agent in their internal issue tracker. It used a generic service account with elevated privileges. A broken prompt caused the bot to auto-close dozens of high-priority tickets. The security team discovered the issue only after an outage. The agent had been deployed without approval: classic shadow IT.
These examples show how poorly governed agents cause downtime, data leaks, and compliance violations.
Modern AI Agent Identity
We need to rethink identity for agents from the ground up. A secure agent identity should be:
Ephemeral: Scoped to a single task or limited time frame
Verifiable: Based on cryptographic primitives like Decentralized Identifiers and Verifiable Credentials
Contextual: Tied to runtime attributes like task type, environment, and initiator
Traceable: Able to establish a full, auditable chain of delegation
At ReShield, we call this an AgentID, which includes:
Task origin (e.g., CI/CD pipeline, user action)
Time-to-live (e.g., 15 minutes)
Scope of action and accessible tools
Owner/controller (e.g., orchestrator or team)
This AgentID becomes the core unit of control, observability, and governance.
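To make this concrete, here is a minimal sketch of what an AgentID record could look like, written as a Python dataclass. The field names and the 15-minute default TTL are illustrative assumptions, not ReShield’s actual schema.

```python
# A minimal sketch of an AgentID record. Field names are illustrative,
# not ReShield's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentID:
    agent_id: str      # unique, per-task identifier
    task_origin: str   # e.g. "ci-cd:deploy-pipeline" or "user:alice"
    owner: str         # controlling orchestrator or team
    scopes: list[str]  # actions and tools the agent may use
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta = timedelta(minutes=15)

    def is_expired(self) -> bool:
        """Credentials tied to this identity are invalid once the TTL lapses."""
        return datetime.now(timezone.utc) >= self.issued_at + self.ttl
```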
The Role of Agent Discovery and Naming
As proposed by the Cloud Security Alliance, organizations will benefit from an Agent Naming Service (ANS) similar to DNS, but for AI agents.
With ANS, teams can:
Discover agents based on their purpose and compliance state
Verify runtime identity, credentials, and versions
Enable secure, policy-compliant agent-to-agent interactions
ANS creates a secure discovery and trust layer for agent ecosystems.
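As a rough illustration, an ANS lookup might work like the sketch below. The ans_client object and its resolve and verify_credential methods are hypothetical, standing in for whatever discovery API an ANS implementation exposes.

```python
# Hypothetical ANS lookup flow: resolve agents by purpose, then verify
# runtime identity and compliance state before trusting one.
# The ans_client API is an illustrative assumption.

def find_trusted_agent(ans_client, purpose: str):
    # Discover candidate agents registered for this purpose
    candidates = ans_client.resolve(purpose=purpose)

    for record in candidates:
        # Verify the agent's credential, version, and compliance state
        if not ans_client.verify_credential(record.agent_id):
            continue
        if record.compliance_state != "compliant":
            continue
        return record  # first agent that passes verification

    raise LookupError(f"No trusted agent found for purpose: {purpose}")
```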
Fine-Grained Access in Practice
Imagine a task orchestrator needs to summarize support tickets:
It spawns an agent and issues an AgentID
The agent requests access to a semantic search API
ReShield checks context, metadata, and policies
It issues a time-limited credential (e.g., a VC or token)
The agent completes the task and retires
No standing access. No credential reuse. Full accountability.
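Here is one way that flow could look in code. This is a sketch only: the orchestrator, reshield, and search_api objects and their methods are illustrative assumptions, not a documented SDK.

```python
# A sketch of the ticket-summarization flow above. The client objects and
# method names are assumptions for illustration.

def summarize_tickets(orchestrator, reshield, search_api):
    # 1. Orchestrator spawns an agent and requests an AgentID for this task
    agent = orchestrator.spawn_agent(task="summarize-support-tickets")
    agent_id = reshield.issue_agent_id(agent, ttl_minutes=15,
                                       scopes=["tickets:read", "search:query"])

    # 2-4. Agent requests access; the control plane evaluates context and
    # policy, then returns a short-lived credential
    credential = reshield.authorize(agent_id, resource=search_api,
                                    action="search:query")

    # 5. Agent does its work and retires; nothing outlives the task
    try:
        results = search_api.query("open support tickets", credential=credential)
        return agent.summarize(results)
    finally:
        reshield.revoke(agent_id)  # no standing access left behind
```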
No Standing Access: From NHIs to AI Agents
ReShield was built to manage non-human identities (NHIs), from service accounts to automation. Here’s how that foundation extends to agents:
AgentID issuance and metadata enrichment
Dynamic trust scoring using behavioral analytics
Policy-as-code for real-time, attribute-based access control
Support for protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent)
ReShield connects to your IdPs, cloud providers, SaaS apps, and internal tools, giving you one access control layer for both humans and agents.
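The policy-as-code piece is where attribute-based decisions get enforced. Below is a minimal, illustrative attribute-based check written in plain Python; in practice this logic would live in a policy engine such as OPA rather than application code, and the attribute names and trust-score threshold are assumptions.

```python
# A minimal attribute-based access check, sketched for illustration only.
# Attribute names and the trust-score threshold are assumptions.

def allow_access(agent_attrs: dict, resource_attrs: dict, context: dict) -> bool:
    """Grant only if scope, environment, and trust score all line up."""
    return (
        resource_attrs["required_scope"] in agent_attrs.get("scopes", [])
        and agent_attrs.get("environment") == resource_attrs.get("environment")
        and context.get("trust_score", 0.0) >= 0.7   # illustrative threshold
        and not agent_attrs.get("expired", True)
    )
```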
Zero Standing Privilege (ZSP) for Agents
Just like humans, agents should not have standing access. The secure workflow looks like:
Discover → Verify → Authorize → Execute → Revoke
ReShield enforces ZSP for agents by:
Granting access only when needed
Automatically expiring credentials
Supporting session control for real-time revocation
This aligns with Zero Trust and Attribute-Based Access Control (ABAC).
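Mechanically, zero standing privilege means credentials are minted per task, expire on their own, and can be revoked mid-session. The class and method names in this sketch are illustrative, not a real ReShield API.

```python
# A sketch of an ephemeral, revocable credential. Names are illustrative.
import secrets
from datetime import datetime, timedelta, timezone

class EphemeralCredential:
    def __init__(self, agent_id: str, scope: str, ttl_minutes: int = 15):
        self.agent_id = agent_id
        self.scope = scope
        self.token = secrets.token_urlsafe(32)
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
        self.revoked = False

    def is_valid(self) -> bool:
        # Access survives only as long as the task: not revoked, not expired
        return not self.revoked and datetime.now(timezone.utc) < self.expires_at

    def revoke(self) -> None:
        # Real-time session control: invalidate immediately, before expiry
        self.revoked = True
```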
Agent Identity Lifecycle Management
Inspired by CSA guidance, a complete agent identity lifecycle includes:
ID Generation: Created at task initiation
VC Issuance: Verifiable claims for identity, scope, origin
Execution: With scoped, auditable permissions
Logging: AgentID-based logs for traceability
Deactivation: Revoking access and archiving context
ReShield supports this full lifecycle to ensure visibility and governance.
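Traceability falls out of keying every log record on the AgentID. The snippet below sketches one way to do that; the stage names and log shape are assumptions for illustration.

```python
# A sketch of AgentID-keyed lifecycle logging, so every action traces back
# to a specific agent instance. Stage names and log shape are illustrative.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("agent-audit")

def log_lifecycle_event(agent_id: str, stage: str, detail: dict) -> None:
    """Stages: generated, credential_issued, executing, deactivated."""
    audit_log.info(json.dumps({
        "agent_id": agent_id,  # every record ties back to one AgentID
        "stage": stage,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }))
```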
Built for the Agentic Future
ReShield already supports:
Just-in-time access for NHIs across AWS, Azure, GCP, GitHub, Snowflake, etc.
Slack-based access requests for ephemeral roles
Policy-as-code via OPA, Sentinel, and more
Risk insights for human and machine identities
Now, we’re extending support to ephemeral agents, integrating with MCP, and incorporating ANS-based discovery.
We believe:
Every agent should be named, scoped, and governed
Access should be just-in-time, never always-on
All access decisions must be context- and policy-driven
Conclusion
AI agents are redefining how enterprises build and operate software, and they break the assumptions traditional IAM was built on: long-lived identities, static roles, and provision-time access. Securing them requires ephemeral, verifiable, and traceable identities, zero standing privilege, and governance across the full agent lifecycle. That is the foundation ReShield provides today for non-human identities, and the one we are extending to the agentic future: every agent named, scoped, and governed; access granted just-in-time, never always-on; and every decision driven by context and policy.