AWS Bedrock makes it easy for cloud teams to build and deploy generative AI applications. With just a few clicks, developers can stand up agents that query company data, automate workflows, and interact with AWS services.
But these new capabilities introduce new risks. The moment AI agents gain access to your infrastructure, you need to consider more than just how people use AI. You need to control what the AI itself can do.
AI Behavior is Governed by Permissions
AWS Bedrock acts as a front-end interface to large language models (LLMs) from providers like Anthropic, Meta, and Amazon. When you deploy an agent or use a workflow, you give it IAM permissions to perform tasks on your behalf.
This means AI agents could access S3 buckets, invoke Lambda functions, or write to DynamoDB. In some cases, they may even gain the ability to modify infrastructure, depending on how their execution role is scoped.
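To make the contrast concrete, here is a minimal sketch of what a tightly scoped execution-role policy can look like: read access to one knowledge-base bucket and invoke rights on one Lambda function, nothing else. The bucket, function, and account identifiers are hypothetical placeholders, not recommendations.

```python
import json

# Hypothetical resource names for illustration only.
KB_BUCKET = "arn:aws:s3:::example-policy-kb"
RESPONSE_FN = "arn:aws:lambda:us-east-1:123456789012:function:example-format-response"

# A least-privilege execution-role policy: the agent can read its
# knowledge-base bucket and invoke a single Lambda function.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadKnowledgeBase",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [KB_BUCKET, f"{KB_BUCKET}/*"],
        },
        {
            "Sid": "InvokeResponseProcessor",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": RESPONSE_FN,
        },
    ],
}

print(json.dumps(execution_role_policy, indent=2))
```

The key property is that every statement names specific resources; there are no `*` resources or service-wide actions for an attacker (or a later team) to inherit.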
Most teams focus on access control for human users. But AI agents can also become privileged identities. Once granted access to sensitive services, they can be used—intentionally or accidentally—to perform actions far beyond their intended purpose.
This is where things go wrong.
A Realistic Risk Scenario
Your DevOps team builds a shared Bedrock agent to answer internal policy questions. It connects to a knowledge base in S3 and uses Lambda functions to process responses.
To speed up development, the team grants the agent’s execution role broad permissions, including access to multiple services.
Weeks later, another team repurposes the same agent to automate cloud infrastructure tasks. They add new Lambda integrations and expand its permissions to include VPC provisioning APIs.
What no one notices:
- The agent still has access to sensitive HR content
- It now holds permissions to modify core network infrastructure
- Any developer in the environment can invoke the agent, whether or not they should
This is not flagged. No alert is triggered. The agent becomes a confused deputy: a service that can be coerced into taking unauthorized actions on behalf of a user with limited or no direct permissions.
The Cloud Permissions Firewall closes this gap from both sides:
- It restricts who can configure Bedrock agents
- It limits what those agents can do once active
- It enforces least privilege across identities, services, and resources
This is how you prevent unintended access paths and protect both your data and your cloud infrastructure.
Bedrock Guardrails Help, But Only If IAM is Secure
AWS Bedrock includes a security feature called guardrails. These guardrails can filter what the agent can say, redact sensitive data, and enforce restrictions on certain topics. You can configure them to protect input and output flows.
However, these protections only work when properly configured. By default, Bedrock does not apply guardrails globally. You must attach them to specific agents or workflows. They are optional and easy to remove.
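Because guardrails are opt-in, a request is only filtered when it carries a guardrail configuration. The sketch below builds the kind of parameters a boto3 `bedrock-runtime` `converse()` call would take; the guardrail identifier, version, and model ID are placeholder assumptions, and no AWS call is made here.

```python
import json

# Placeholder identifiers; in a real account these come from a
# guardrail already created via the Bedrock console or API.
GUARDRAIL_ID = "example-guardrail-id"
GUARDRAIL_VERSION = "1"

# Guardrails apply only when the request includes a configuration
# like this. In real use, these kwargs would be passed to a boto3
# bedrock-runtime client's converse() call.
request_kwargs = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "guardrailConfig": {
        "guardrailIdentifier": GUARDRAIL_ID,
        "guardrailVersion": GUARDRAIL_VERSION,
    },
    "messages": [
        {"role": "user", "content": [{"text": "What is our PTO policy?"}]}
    ],
}

print(json.dumps(request_kwargs, indent=2))
```

If a caller simply omits `guardrailConfig`, the guardrail never runs, which is exactly why the surrounding IAM controls matter.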
This creates a second layer of risk. Even if you establish strong guardrails, those controls can be removed if IAM is not tightly locked down.
Permissions are the foundation. Guardrails are only as effective as the policies that protect them.
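One way to lock that foundation down is to deny the guardrail-mutating Bedrock actions to every principal except a designated admin role, for example in a service control policy. This is a hedged sketch of that pattern; the admin role ARN is a hypothetical placeholder.

```python
import json

# Hypothetical admin role permitted to manage guardrails.
GUARDRAIL_ADMIN = "arn:aws:iam::123456789012:role/bedrock-guardrail-admin"

# SCP-style statement: deny guardrail changes to all principals
# except the designated admin role.
protect_guardrails = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGuardrailTampering",
            "Effect": "Deny",
            "Action": [
                "bedrock:UpdateGuardrail",
                "bedrock:DeleteGuardrail",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotLike": {"aws:PrincipalArn": GUARDRAIL_ADMIN}
            },
        }
    ],
}

print(json.dumps(protect_guardrails, indent=2))
```

An explicit deny like this cannot be overridden by any allow a developer attaches later, which is what makes it suitable for protecting the guardrails themselves.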
Why This Matters More Than You Think
Security teams often assume their main job is to manage access to the AI tools. But in reality, you must also manage what those AI tools can access.
There are two sides to AI risk in the cloud:
- What users can do to AI services
- What AI services can do to your environment
If you ignore either part, you are vulnerable to privilege escalation, data exposure, and operational disruption. These issues are difficult to detect and often happen without malicious intent. A single over-permissioned role can create a massive blast radius.
How The Cloud Permissions Firewall Solves the Problem
Sonrai’s Cloud Permissions Firewall protects your environment by enforcing least privilege. It does this on both sides of the AI risk equation.
Control Who Can Use AI Agents
The Cloud Permissions Firewall prevents unauthorized users from invoking AI agents, modifying configurations, or bypassing governance.
It uses real-time access data to identify which identities are actually using AI services. Based on that, it automatically generates a single, unified policy that removes unused permissions and blocks risky access paths.
Control What AI Agents Can Do
It also restricts what AI agents can do inside your environment. It scopes their execution roles, blocks access to sensitive resources, and ensures AI agents cannot perform actions they were not intended to handle.
This includes:
- Preventing agents from gaining access to privileged Lambda functions
- Restricting access to sensitive S3 buckets or databases
- Blocking actions in unauthorized AWS regions
- Securing the guardrails themselves from tampering
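As one illustration of the region restriction above, a deny statement can block Bedrock actions outside an approved region using the `aws:RequestedRegion` condition key. The approved region here is an assumed example, not a recommendation.

```python
import json

# Deny all Bedrock actions outside the approved region list
# (us-east-1 is an assumed example).
region_lockdown = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "bedrock:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1"]}
            },
        }
    ],
}

print(json.dumps(region_lockdown, indent=2))
```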
These protections are essential for any team using AWS Bedrock, Amazon Q, or Rekognition.
Read the full AI use case for Cloud Permissions Firewall
Take Control Before AI Does
AI isn’t inherently dangerous. But like any system in the cloud, its behavior is governed by identity and access.
If you give an AI agent permission to change infrastructure or access sensitive data, you need to be absolutely sure those permissions are locked down. Otherwise, that agent can become a liability.
Sonrai helps you secure the permissions layer so your AI workflows operate safely, predictably, and under control.
Start a free trial or request a demo to see how Cloud Permissions Firewall secures AI services in AWS.