Why AWS Bedrock Agents Are a Unique IAM Risk
AWS Bedrock makes it easy for cloud teams to build and deploy generative AI applications. With just a few clicks, developers can stand up agents that query company data, automate workflows, and interact with AWS services.
But these new capabilities introduce new risks. The moment AI agents gain access to your infrastructure, you need to consider more than just how people use AI. You need to control what the AI itself can do.
Unlike human users who authenticate with credentials, Bedrock agents execute actions through IAM roles — making every agent a non-human identity: a machine principal that holds permissions and interacts with AWS services autonomously. Non-human identities are already one of the fastest-growing sources of cloud risk, and AI agents represent their most dynamic form yet.
Bedrock agents aren’t fully autonomous – they respond to user prompts and execute actions within the scope of a conversation. But within that interaction, they can chain together multiple API calls across services like S3, Lambda, and DynamoDB in rapid succession. A single prompt can trigger a sequence of actions that would take a human several manual steps to replicate. A misconfigured agent role isn’t just a policy gap. It’s an automated pathway into your infrastructure.
AI Agent Behavior in AWS Is Defined by IAM Permissions
AWS Bedrock acts as a front-end interface to large language models (LLMs) from providers like Anthropic, Meta, and Amazon. When you deploy an agent or use a workflow, you give it IAM permissions to perform tasks on your behalf. That could be reading from data sources, invoking Lambda functions, or querying other AWS services. What an agent can do depends entirely on how its execution role is scoped, and that scoping is where things can quietly go wrong.
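That execution role starts with a trust policy defining who can assume it. A minimal sketch, expressed as a Python dict in the standard IAM JSON shape (the account ID and agent ARN pattern are placeholders), showing the source conditions that keep other principals from assuming the role:

```python
# Sketch of a Bedrock agent execution-role trust policy as a Python dict.
# The account ID and agent ARN below are placeholders.
# The aws:SourceAccount / AWS:SourceArn conditions ensure only agents in
# your own account can assume this role (a confused-deputy mitigation).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "bedrock.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": "123456789012"},
                "ArnLike": {
                    "AWS:SourceArn": "arn:aws:bedrock:us-east-1:123456789012:agent/*"
                },
            },
        }
    ],
}
```

The trust policy controls who assumes the role; the permission policies attached to it control what the agent can then do, which is where the scoping below matters.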
Consider a common setup: a Bedrock agent with an execution role that grants broad S3 read permissions – something like s3:GetObject scoped to * rather than a specific bucket or prefix. The intent might be narrow – to read from a single data source – but overly broad permissions give the agent the ability to list buckets across the account and read objects it was never meant to touch. If the agent is manipulated, that over-provisioning becomes a blast radius that spans your entire S3 environment.
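The difference is easy to see side by side. A sketch contrasting the over-broad grant with one scoped to a single prefix (the bucket name and prefix are hypothetical):

```python
# Over-broad: s3:GetObject on every object in every bucket in the account.
broad_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}
    ],
}

# Scoped: read access limited to one prefix in one bucket.
# "kb-docs" and "policies/" are hypothetical names for the agent's
# actual data source.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::kb-docs/policies/*",
        }
    ],
}
```

Both policies satisfy the original use case; only the second one limits the blast radius if the agent is manipulated.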
Where agents can cause more significant damage is through the Lambda functions they’re permitted to invoke. A Lambda function attached to an agent’s workflow might write to S3 or modify configuration, and the agent inherits that capability indirectly.
Most teams focus on access control for human users. But AI agents can also become privileged identities. Once granted the ability to invoke powerful Lambda functions or read broadly across storage, they can be leveraged to perform actions far beyond their intended purpose.
A Real-World IAM Risk Scenario: The Confused Deputy Problem
Your DevOps team builds a shared Bedrock agent to answer internal policy questions. It connects to a knowledge base in S3 and uses Lambda functions to process responses.
To speed up development, the team grants the agent’s execution role broad permissions, including access to multiple services.
Weeks later, another team repurposes the same agent to automate cloud infrastructure tasks. They add new Lambda integrations and expand its permissions to include VPC provisioning APIs.
What no one notices:
- The agent still has access to sensitive HR content
- It now holds permissions to modify core network infrastructure
- Any developer in the environment can invoke the agent, whether or not they should
This is not flagged. No alert is triggered. The agent becomes a confused deputy: a service that can be coerced into taking unauthorized actions on behalf of a user with limited or no direct permissions.
The blast radius is significant. A bad actor could use the agent to exfiltrate employee records, financial data, or proprietary documents stored in S3. On the infrastructure side, the same agent has enough access to spin up or tear down VPC resources, potentially disrupting production environments or opening new network exposure. All of it traced back to a single over-permissioned execution role that no one reviewed after the first deployment.
The Cloud Permissions Firewall closes this gap from both sides:
- It restricts who can configure Bedrock agents
- It limits what those agents can do once active
- It enforces least privilege across identities, services, and resources
This is how you prevent unintended access paths and protect both your data and your cloud infrastructure.
Warning Signs Your Bedrock Agent May Already Be Over-Permissioned
- The execution role uses wildcard (*) actions. Policies like s3:* or lambda:* grant far more than any single agent needs.
- The same IAM role is shared across multiple agents or teams. Shared roles make it nearly impossible to scope permissions appropriately.
- No SCPs restrict Bedrock API access by account or region. Without Service Control Policies in place, there’s nothing preventing agents from being deployed or invoked in accounts or regions outside your intended scope.
- bedrock:InvokeAgent calls are not logged in CloudTrail. If agent invocations aren’t being captured, you have no audit trail of what actions were taken, when, or by whom.
- Guardrails are not attached to any production agent. Guardrails are Bedrock’s built-in layer of behavioral controls; running agents without them means IAM is your only line of defense.
Why AWS Bedrock Guardrails Alone Aren’t Enough to Secure AI
AWS Bedrock includes a security feature called guardrails. These guardrails can filter what the agent can say, redact sensitive data, and enforce restrictions on certain topics. You can configure them to protect input and output flows.
However, these protections only work when properly configured. AWS has made progress in making guardrails easier to enforce at scale. You can now use IAM policy conditions to mandate specific guardrails on every model inference call, and through AWS Organizations, push those requirements across multiple accounts without configuring each one individually.
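One way to express that mandate is an identity policy that denies model inference unless the request carries an approved guardrail. A sketch, assuming the `bedrock:GuardrailIdentifier` condition key that AWS provides for this purpose (the guardrail ARN is a placeholder, and the exact match operator you need may vary with guardrail versioning):

```python
# Deny model invocation unless the request specifies the approved guardrail.
# The guardrail ARN is a placeholder; bedrock:GuardrailIdentifier is the
# condition key that ties an inference call to a specific guardrail.
require_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireApprovedGuardrail",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "bedrock:GuardrailIdentifier":
                        "arn:aws:bedrock:us-east-1:123456789012:guardrail/abc123"
                }
            },
        }
    ],
}
```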
But easier enforcement isn’t the same as complete protection. Guardrails themselves are easy to remove: any identity with sufficient permissions can modify, disable, or swap them out entirely. If your IAM posture is weak, a well-designed guardrail policy is just a permission change away from being gone. Privileged AWS permissions are the foundation; guardrails are only as effective as the policies that protect them.
Why AI IAM Risk Is Harder to Detect Than Human Access Risk
Security teams often assume their main job is to manage access to the AI tools. But in reality, you must also manage what those AI tools can access.
There are two sides to AI risk in the cloud:
- What users can do to AI services
- What AI services can do to your environment
If you ignore either part, you are vulnerable to privilege escalation, data exposure, and operational disruption. These issues are difficult to detect and often happen without malicious intent. A single over-permissioned role can create a massive blast radius.
Part of what makes this so hard is that AI agents don’t create the signals security tools are built to look for. There are no login events, no MFA prompts, no session tokens tied to a person. When an agent assumes a role and starts making API calls, it looks like normal service activity, even when it isn’t.
How The Cloud Permissions Firewall Solves the Problem
Sonrai’s Cloud Permissions Firewall protects your environment by enforcing least privilege. It does this on both sides of the AI risk equation.
1. Control Who Can Use AI Agents
The Cloud Permissions Firewall prevents unauthorized users from invoking AI agents, modifying configurations, or bypassing governance.
It uses real-time access data to identify which identities are actually using AI services. Based on that, it automatically generates a single, unified policy that removes unused permissions and blocks risky access paths.
2. Control What AI Agents Can Do
It also restricts what AI agents can do inside your environment. It scopes their execution roles, blocks access to sensitive resources, and ensures AI agents cannot perform actions they were not intended to handle.
This includes:
- Preventing agents from gaining access to privileged Lambda functions
- Restricting access to sensitive S3 buckets or databases
- Blocking actions in unauthorized AWS regions
- Securing the guardrails themselves from tampering
These protections are essential for any team using AWS Bedrock, Amazon Q, or Rekognition.
Read the full AI use case for Cloud Permissions Firewall
IAM Hardening Checklist for AWS AI Workloads
Securing Bedrock agents starts with applying the same least-privilege discipline to non-human identities that you would to any privileged human user — with a few AI-specific additions.
- Use dedicated IAM roles per agent, not shared ones. Shared roles make it impossible to scope permissions to a single agent’s purpose, and mean one misconfiguration affects every agent attached to that role.
- Apply SCPs to restrict Bedrock API actions to approved accounts and regions. Service Control Policies act as a guardrail at the organization level, ensuring agents can’t be deployed or invoked outside your intended scope regardless of what individual role policies allow.
- Enable CloudTrail logging for all bedrock:* actions. Without this, you have no record of what your agents invoked, when, or in response to what — making audit and incident response nearly impossible.
- Review execution role permissions any time an agent is repurposed or handed off. Roles scoped for one use case rarely stay appropriate when an agent’s function changes, and this is one of the most common sources of unintentional privilege creep.
- Use resource-based policies on S3 and Lambda as a second layer of control. Even if an execution role is over-permissioned, resource-based policies on the target services can limit which identities are actually allowed to interact with sensitive resources.
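The SCP item in the checklist above can be sketched concretely. A minimal example denying all Bedrock actions outside approved regions, using the `aws:RequestedRegion` global condition key (the region list is illustrative; adjust it to your own footprint):

```python
# SCP sketch: deny every Bedrock action outside the approved regions.
# aws:RequestedRegion is a global condition key evaluated on each request;
# the region list below is illustrative.
region_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "bedrock:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}
```

Because SCPs are evaluated before identity policies, this ceiling holds even if an individual execution role is later over-permissioned.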
Lock Down IAM Before Your AI Agents Become a Liability
AI isn’t inherently dangerous. But like any system in the cloud, its behavior is governed by identity and access.
If you give an AI agent permission to change infrastructure or access sensitive data, you need to be absolutely sure those permissions are locked down. Otherwise, that agent can become a liability.
Sonrai helps you secure the permissions layer so your AI workflows operate safely, predictably, and under control.
Start a free trial or request a demo to see how Cloud Permissions Firewall secures AI services in AWS.

Frequently Asked Questions
How do Bedrock guardrails differ from IAM controls?
Bedrock guardrails govern what an agent can say or do within a conversation — filtering topics, inputs, and outputs — while IAM controls govern what AWS resources and actions the agent’s execution role can actually access.
Can a repurposed agent access resources beyond its original use case?
Yes — if the execution role’s permissions are broader than the original use case, the agent can interact with any Lambda function or knowledge base those permissions allow.
Are Bedrock agents covered by standard access reviews?
Typically not — most access review processes are designed around human users and service accounts, and non-human identities like Bedrock agents are frequently overlooked or excluded.
Does CloudTrail log Bedrock agent invocations by default?
CloudTrail captures most Bedrock API calls by default, but data event logging must be explicitly enabled for some actions like bedrock:InvokeAgent.
Can SCPs restrict what Bedrock agents are allowed to do?
SCPs can restrict which Bedrock API actions are permitted within an account or organizational unit, acting as an organization-wide ceiling that individual role policies cannot override.
