Most Bedrock agents in production are running on the same IAM role they were built with. That role is now a standing identity with access to whatever services got attached during testing, and it exercises those permissions automatically on every execution, with no human in the loop. The risk should be palpable.
It’s a concern teams tend to underestimate. Bedrock agents chain API calls across services, trigger Lambda functions through action groups, and query knowledge bases, all under IAM roles that were configured once during a sprint and never revisited. The role becomes a fossil of the build process. Sonrai research shows that over 90% of cloud permissions across human and non-human identities aren’t used, and the boom of AI agents is only pushing that number higher. The permissions just sit there, attached, with the potential to be exploited in ways you never intended.
Let’s review what needs to happen before that agent handles live traffic.
Why do Bedrock agents end up with overprivileged IAM roles?
Bedrock agents require an execution role at creation. That’s the starting point. From there it’s easy for the role to grow.
A knowledge base gets wired in, so the permissions get added. A Lambda function gets attached via an action group, so the blast radius expands further. Then someone hits an AccessDenied during testing on a Friday afternoon, so AmazonBedrockFullAccess gets bolted on “just to unblock things.” It almost never gets unbolted.
The pattern is familiar to anyone who’s watched IAM roles age in production: permissions accumulate during development and almost never get pruned. The agent service role and the roles backing its Lambda functions both expand the same way. The difference with Bedrock agents is that the thing exercising those permissions isn’t a deterministic workload. It’s a model deciding what to do next, which makes this one of the more consequential AWS IAM mistakes a team can carry into production.
What permissions does a Bedrock agent actually need at runtime?
This is the question worth sitting down and answering explicitly.
At runtime, a Bedrock agent should only have access to:
- The specific foundation model it invokes (scoped to model ARN, not bedrock:*)
- The knowledge base it queries, and only that one
- The Lambda functions backing its action groups, by ARN
- The S3 location holding its action group schemas
- Any guardrail it references
- The KMS key, if customer-managed encryption is in play
That’s the working set. Everything else is excess: unrelated S3 buckets, unrelated Lambda functions, DynamoDB tables it doesn’t touch, Secrets Manager entries it never reads. Excess permissions aren’t neutral. They’re latent capability waiting to be exercised by a prompt no one anticipated. If the agent hasn’t exercised a permission within a 60-90 day window, it doesn’t need it. A minimal sketch of a policy scoped to this working set follows.
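Here is one way that policy could look, built as a Python dict so it can be rendered with `json.dumps` and dropped into IaC or a `create_policy` call. Every ARN and Sid below is an illustrative placeholder, and the exact action list depends on how your agent is configured; treat this as a sketch to adapt, not the definitive policy.

```python
import json

# All ARNs below are placeholders -- substitute the resources your agent actually uses.
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"
KNOWLEDGE_BASE_ARN = "arn:aws:bedrock:us-east-1:111122223333:knowledge-base/KBEXAMPLE01"
ACTION_GROUP_LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:agent-orders-lookup"
SCHEMA_OBJECTS_ARN = "arn:aws:s3:::my-agent-schemas/action-groups/*"
GUARDRAIL_ARN = "arn:aws:bedrock:us-east-1:111122223333:guardrail/GREXAMPLE01"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# One statement per item in the runtime working set, each pinned to a specific ARN.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "InvokeOnlyThisModel", "Effect": "Allow",
         "Action": "bedrock:InvokeModel", "Resource": MODEL_ARN},
        {"Sid": "QueryOnlyThisKnowledgeBase", "Effect": "Allow",
         "Action": "bedrock:Retrieve", "Resource": KNOWLEDGE_BASE_ARN},
        # Depending on how the action group is wired, invocation may be authorized
        # by the Lambda function's resource-based policy instead (covered later).
        {"Sid": "InvokeOnlyActionGroupFunctions", "Effect": "Allow",
         "Action": "lambda:InvokeFunction", "Resource": ACTION_GROUP_LAMBDA_ARN},
        {"Sid": "ReadActionGroupSchemas", "Effect": "Allow",
         "Action": "s3:GetObject", "Resource": SCHEMA_OBJECTS_ARN},
        {"Sid": "ApplyOnlyThisGuardrail", "Effect": "Allow",
         "Action": "bedrock:ApplyGuardrail", "Resource": GUARDRAIL_ARN},
        {"Sid": "DecryptWithCustomerManagedKey", "Effect": "Allow",
         "Action": "kms:Decrypt", "Resource": KMS_KEY_ARN},
    ],
}

print(json.dumps(execution_role_policy, indent=2))
```

Note what isn’t in there: no wildcards on actions, no `Resource: "*"`, and nothing the agent doesn’t exercise at runtime.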
Bedrock agent roles carry risks that standard workload roles do not
A standard application role executes predefined code paths. The blast radius is whatever the developer wrote. You can read the code and know what calls are possible.
Bedrock agents don’t work that way. They chain actions dynamically based on model output, which means a manipulated prompt can walk the full surface of whatever the role permits, without a human approving any individual step. This is the mechanism behind most Bedrock privilege escalation scenarios worth taking seriously: the permission surface is the threat surface, in a much more direct way than with traditional workloads.
A few specific things worth calling out:
- Multi-agent patterns compound the blast radius. Supervisor-and-sub-agent architectures mean every sub-agent role contributes to what a single prompt injection can reach.
- Knowledge base service roles deserve their own review. They typically carry read access to the underlying data sources and vector store components. Separate permission surface, separate review (a sketch follows this list).
- Cross-account action groups need explicit attention. If a Lambda function in another account is invoked, that introduces a cross-account trust relationship in the resource-based policy, and it deserves the same scrutiny as the role policies themselves.
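On the knowledge base point, the role’s shape is different but the principle is identical. A hedged sketch, assuming an S3 data source and an OpenSearch Serverless vector store (swap the vector-store statement for whatever backend you actually run); the bucket name, collection ID, and embedding model are placeholders:

```python
# Placeholder ARNs -- a knowledge base service role touching one data source,
# one vector store collection, and one embedding model, nothing else.
knowledge_base_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ReadOnlyThisDataSource", "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:ListBucket"],
         "Resource": ["arn:aws:s3:::my-kb-source-docs",
                      "arn:aws:s3:::my-kb-source-docs/*"]},
        {"Sid": "EmbedWithOnlyThisModel", "Effect": "Allow",
         "Action": "bedrock:InvokeModel",
         "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0"},
        {"Sid": "AccessOnlyThisCollection", "Effect": "Allow",
         "Action": "aoss:APIAccessAll",
         "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/abcdef0123456789"},
    ],
}
```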
Why AWS-native tooling leaves Bedrock permission gaps in place
The native tools help, but none of them close the loop on their own.
IAM Access Analyzer’s policy generation works from CloudTrail history over a date range. If your CloudTrail window includes development and testing activity, the generated policy reflects how the agent was built, not how it should run. Useful as a starting point, not enough for a final answer.
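If you do use policy generation, at least constrain the CloudTrail window to post-launch traffic so development activity doesn’t leak into the draft. A rough boto3 sketch; the trail ARN, role ARNs, and the date window are all assumptions, and the parameter shapes should be checked against the current accessanalyzer API before relying on this:

```python
from datetime import datetime, timezone

import boto3

analyzer = boto3.client("accessanalyzer")

# Placeholders: the agent execution role under review, the trail to mine,
# and a role Access Analyzer can assume to read that trail's logs.
AGENT_ROLE_ARN = "arn:aws:iam::111122223333:role/bedrock-agent-prod-execution-role"
TRAIL_ARN = "arn:aws:cloudtrail:us-east-1:111122223333:trail/prod-trail"
ACCESS_ROLE_ARN = "arn:aws:iam::111122223333:role/AccessAnalyzerTrailReadRole"

# Constrain the window to production traffic only, so development and testing
# activity doesn't shape the generated policy.
job = analyzer.start_policy_generation(
    policyGenerationDetails={"principalArn": AGENT_ROLE_ARN},
    cloudTrailDetails={
        "trails": [{"cloudTrailArn": TRAIL_ARN, "allRegions": True}],
        "accessRole": ACCESS_ROLE_ARN,
        "startTime": datetime(2024, 6, 1, tzinfo=timezone.utc),  # go-live date (placeholder)
        "endTime": datetime(2024, 8, 30, tzinfo=timezone.utc),
    },
)

print("Policy generation job:", job["jobId"])
# Once the job completes, retrieve the draft with
# analyzer.get_generated_policy(jobId=job["jobId"]) and prune it by hand.
```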
Bedrock guardrails are a separate control entirely. They operate on model inputs and outputs: topics, content filters, sensitive information patterns. They do nothing about IAM scope. Teams conflate these constantly. “We set up guardrails” gets treated as a security posture statement when it covers maybe half of the actual surface. Both controls are required. Neither substitutes for the other.
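For contrast, here is roughly what a guardrail configuration touches; note that nothing in it mentions roles, ARNs, or IAM actions. A sketch using boto3, with illustrative topic and filter choices; check the request shape against the current bedrock CreateGuardrail API before using it:

```python
import boto3

bedrock = boto3.client("bedrock")

# A guardrail constrains model inputs and outputs: topics, content filters,
# canned refusal messages. It has no bearing on what the agent's role can reach.
guardrail = bedrock.create_guardrail(
    name="agent-prod-guardrail",
    description="Illustrative guardrail: blocks one topic and filters prompt attacks.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InternalPricing",
                "definition": "Questions about internal pricing, margins, or discount authority.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
```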
CloudTrail itself logs the calls, but it doesn’t produce a least-privilege policy for you. Deriving one manually is the kind of work that gets postponed when a launch date is looming, but that’s exactly when it’s most needed.
What the pre-launch permission lockdown covers
If you’re going to lock down IAM before deploying, there are four control points to walk through:
- The agent execution role. Replace any managed policy with a custom policy scoped to specific ARNs.
- The knowledge base service role. Scoped to the specific data source and vector store, nothing more.
- Each action group’s Lambda execution role. Independently scoped per function.
- The Lambda resource-based policies. These are what permit Bedrock to invoke the function. Confirm the Principal and SourceArn conditions are tight, not wildcarded (sketched below).
The rule is the same across all four: remove any permission you can’t map to a verified runtime call. Do it before the agent sees production traffic, not after.
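The fourth control point is the easiest to get wrong, because a wildcarded principal or a missing SourceArn lets more than your one agent invoke the function. A hedged sketch of adding a tight statement with boto3; the function name, agent ARN, and account ID are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholders -- the action group function and the one agent allowed to call it.
FUNCTION_NAME = "agent-orders-lookup"
AGENT_ARN = "arn:aws:bedrock:us-east-1:111122223333:agent/AGENTID123"

# Grant invoke to the Bedrock service, but only on behalf of this specific agent
# in this specific account. No wildcards.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="AllowOnlyThisBedrockAgent",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceAccount="111122223333",
    SourceArn=AGENT_ARN,
)
```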
Enforcing the permission boundary after go-live
Security isn’t a one-time event. Agent roles drift the moment a new action group is added or a new integration is wired in. Treat every one of those changes as a permission review, not a default approval. The muscle memory in most teams is the opposite.
A couple of structural controls that hold up over time:
- SCPs at the account or OU level put a ceiling on what can ever appear in a role, regardless of who attaches it. If a Bedrock agent has no business calling certain services in production accounts, deny it at the org level and stop worrying about individual policy drift (a sketch follows this list).
- Just-in-time elevation for necessary tasks, instead of permanently widening the role. Standing access is convenient on a Wednesday, but becomes a liability on a Saturday.
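As a sketch of that first control, an SCP can deny everything outside the agent’s expected service set for any principal matching your agent-role naming convention. The role-name pattern and the allowed-action list below are assumptions; adapt both to your own conventions before attaching anything at the OU level:

```python
# Illustrative SCP: for principals that look like Bedrock agent roles, deny every
# action outside the small set those roles are expected to use. Expressed as a
# Python dict so it can be serialized and attached via Organizations APIs or IaC.
agent_role_ceiling_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CeilingForBedrockAgentRoles",
            "Effect": "Deny",
            "NotAction": [
                "bedrock:InvokeModel",
                "bedrock:Retrieve",
                "bedrock:ApplyGuardrail",
                "lambda:InvokeFunction",
                "s3:GetObject",
                "kms:Decrypt",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
            "Condition": {
                "ArnLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/bedrock-agent-*"}
            },
        }
    ],
}
```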
The takeaway
The execution role on your Bedrock agent reflects how the agent was built, not how it should run. Those are two different permission surfaces, and reconciling them is deliberate work, not something that happens by default.
Three things to walk away with:
- Replace every managed policy attached during development with a custom policy scoped to specific ARNs. Do this for the execution role, the knowledge base service role, and each Lambda role independently.
- Bedrock guardrails and IAM permissions lockdown solve different problems. Configuring one does not reduce the requirement for the other.
- Drift starts the day after launch. Build the review process before you need it, or adopt a solution that sustains it long term (a minimal drift check is sketched below).
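That review process can start as a scheduled job that pulls IAM’s service last-accessed data for each agent role and flags anything the role hasn’t touched in the 60-90 day window mentioned earlier. A minimal sketch, assuming a single placeholder role ARN and a 90-day threshold:

```python
import time
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")

# Placeholder -- the agent execution role under review.
ROLE_ARN = "arn:aws:iam::111122223333:role/bedrock-agent-prod-execution-role"
STALE_AFTER = timedelta(days=90)

# Kick off a service-last-accessed report for the role.
job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]

# Poll until the report is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Flag services the role is allowed to call but hasn't touched recently.
now = datetime.now(timezone.utc)
for svc in report.get("ServicesLastAccessed", []):
    last = svc.get("LastAuthenticated")
    if last is None or now - last > STALE_AFTER:
        print(f"Review candidate: {svc['ServiceNamespace']} "
              f"(last authenticated: {last or 'never'})")
```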

FAQ
What IAM permissions does an AWS Bedrock agent need?
At runtime, a Bedrock agent needs access to the specific foundation model it invokes, the knowledge base it queries, the Lambda functions behind its action groups, the S3 location holding any action group schemas, and any referenced guardrail or KMS key. Scope each by ARN. Wildcards on any of these expand the surface beyond what the agent actually exercises.
How do I apply least privilege to a Bedrock agent execution role?
Start from the runtime working set (the model, knowledge base, Lambda functions, schema storage, guardrail, and KMS key the agent actually uses) and write a custom policy scoped to those exact ARNs. Replace any managed policies attached during development, particularly AmazonBedrockFullAccess, and audit the knowledge base service role and each Lambda execution role separately rather than treating them as one bundle. The easiest way to do this at scale is with the Cloud Permissions Firewall.
Do AWS Bedrock guardrails replace IAM permission controls?
No. Guardrails govern what the model can say and what topics or content patterns are filtered in inputs and outputs. IAM controls govern what AWS resources and actions the agent’s role can reach. They cover different layers. A guardrail won’t stop an over-permissioned role from being walked by a manipulated prompt, and a tight IAM policy won’t stop a model from generating undesirable content. Both are required.
What is the risk of overprivileged Bedrock agent roles?
Because Bedrock agents chain API calls dynamically based on model output, the role’s permission surface is effectively the attack surface. A manipulated prompt, or an unexpected interaction between an agent and its tools, can exercise any permission the role holds without human approval. Excess permissions that would sit harmlessly on a traditional workload role can be reached by an agent in ways no one anticipated when the role was scoped.
How do SCPs help secure Bedrock agents?
Service Control Policies set an organization-wide ceiling on what API actions are permitted in an account or OU, and that ceiling can’t be overridden by an individual role policy. For Bedrock agents, SCPs are the structural backstop against role drift. If certain Bedrock actions or target services shouldn’t exist in a production account at all, denying them at the SCP level removes the question of whether a future role change might surface them. The Cloud Permissions Firewall uses SCPs to restrict excessive permissions without taking away access development needs.
