Published : 10.27.2020
Last Updated : 03.08.2023
Just about every process in Google Cloud Platform (GCP) is governed by identity and access management (IAM). Yet many users today are making major IAM security mistakes in GCP, and these missteps create critical gaps that open pathways to your data.
So, what do these misconfigurations look like?
Let’s take a look.
There are two special member types on GCP: allUsers, which matches anyone on the internet (authenticated or anonymous), and allAuthenticatedUsers, which matches anyone signed in with a Google account or a service account.
Granting allUsers access to one of your buckets is like leaving your front door unlocked. Basically, anyone who obtains the link will be able to access it freely — creating an opportunity for public data exposure.
Granting allAuthenticatedUsers access is slightly more restricted than allUsers, but not by much. It limits access to anyone authenticated with a Google account or a service account, excluding only anonymous users.
All team members should be aware of these permissions and avoid granting them wherever possible. Alternatively, restrictions can be applied to prevent their use across GCP.
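To make this concrete, a bucket's IAM policy can be scanned for these two principals. The sketch below models the policy as a plain dict in the shape returned by the Cloud Storage JSON API's `buckets.getIamPolicy`; the policy contents are hypothetical.

```python
# Sketch: flag public principals in a bucket IAM policy (modeled as a dict).
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def find_public_bindings(policy):
    """Return (role, member) pairs that expose the bucket publicly."""
    exposed = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member in PUBLIC_MEMBERS:
                exposed.append((binding["role"], member))
    return exposed

# Hypothetical policy with one public binding.
policy = {
    "bindings": [
        {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
        {"role": "roles/storage.admin", "members": ["user:alice@example.com"]},
    ]
}
print(find_public_bindings(policy))  # [('roles/storage.objectViewer', 'allUsers')]
```

In a real audit you would fetch the live policy per bucket and run this check across every project, rather than against a hand-written dict.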
If an Identity is over-privileged, it can directly or indirectly promote itself to the ownership level of a bucket. With that level of privilege, it has the authority to make administrative decisions that could compromise your entire operation.
Let’s look at an example of what an Identity with excessive permissions could achieve:
Thus, it is very important to understand the Effective Permissions (that is, end-to-end permissions) of all your GCP Identities, whether human or non-human.
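One way to make "over-privileged" concrete: any identity holding a role that includes the `storage.buckets.setIamPolicy` permission can rewrite a bucket's policy and grant itself (or anyone else) full control. The sketch below uses a hypothetical, trimmed-down role-to-permission map; in practice you would expand roles via the IAM API rather than hard-code them.

```python
# Sketch: flag identities whose roles let them rewrite a bucket's IAM policy.
# ROLE_PERMISSIONS is a hypothetical, heavily trimmed role map for illustration.
ESCALATION_PERMISSION = "storage.buckets.setIamPolicy"

ROLE_PERMISSIONS = {
    "roles/storage.admin": {"storage.buckets.setIamPolicy", "storage.objects.get"},
    "roles/storage.objectViewer": {"storage.objects.get", "storage.objects.list"},
}

def can_escalate(bindings):
    """Return the members who hold a role carrying the escalation permission."""
    risky = set()
    for binding in bindings:
        perms = ROLE_PERMISSIONS.get(binding["role"], set())
        if ESCALATION_PERMISSION in perms:
            risky.update(binding["members"])
    return risky

bindings = [
    {"role": "roles/storage.admin",
     "members": ["serviceAccount:ci@example.iam.gserviceaccount.com"]},
    {"role": "roles/storage.objectViewer", "members": ["user:bob@example.com"]},
]
print(can_escalate(bindings))  # {'serviceAccount:ci@example.iam.gserviceaccount.com'}
```

Note that the flagged member here is a service account: non-human identities are just as capable of escalation as human ones, which is why effective permissions need to be reviewed for both.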
The constraint most relevant to this misconfiguration is domain restricted sharing (constraints/iam.allowedPolicyMemberDomains).
If you place storage buckets holding sensitive data under a particular project or folder, you can apply this constraint at the project or folder level to specify that no IAM permissions be granted to anyone outside of your organization.
If you are a G Suite (now Google Workspace) customer, you can grant access to the G Suite customer ID for your domain. This will prevent any user who has not authenticated to your domain from being granted IAM permissions to any resources in your project.
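Applied as a list-type org policy, the constraint might look like the following sketch. The customer ID shown is a placeholder; you would substitute the ID for your own domain.

```yaml
# policy.yaml — hypothetical org policy restricting IAM grants to one domain.
constraint: constraints/iam.allowedPolicyMemberDomains
listPolicy:
  allowedValues:
    - C01abc234   # placeholder Workspace / Cloud Identity customer ID
```

A file like this can then be applied at the project or folder level, for example with `gcloud resource-manager org-policies set-policy`.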
The issues with using this constraint are as follows:
With this in mind, Cloud Ops teams need to think through the design of their data storage architecture with Identities and access in mind. This ensures that, once implemented, these issues don't arise and then go unnoticed until it is too late.
Virtual Private Cloud (VPC) Service Controls can also help mitigate the misconfiguration of storage buckets.
GCP allows you to make resources private, which means they can’t be accessed via the Internet even if the IAM policy allows it. This control allows you to set up a VPC service perimeter around projects and then control access to that perimeter based on things like your IP address, geographic location, and conditions on the device requesting the access, among other things.
The issues with using a VPC Service Control are as follows:
While this is a great control, it comes back to the importance of fully understanding your data and identity access requirements. You need to know where your data is, which Identities require access, and from where they need it. Furthermore, you need to continuously monitor for potential changes to ensure that things stay locked down.
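The perimeter decision described above can be modeled in a few lines. This is a toy, purely illustrative sketch of the idea (protected projects reachable only from approved networks); it is not the actual VPC Service Controls evaluation logic or API, and the project and CIDR values are hypothetical.

```python
# Toy model of a service-perimeter decision: access to projects inside the
# perimeter is allowed only from approved source networks. Illustrative only.
import ipaddress

def request_allowed(perimeter_projects, allowed_cidrs, project, source_ip):
    """Return True if the request should be permitted under the perimeter."""
    if project not in perimeter_projects:
        return True  # resource is not protected by this perimeter
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

perimeter = {"prod-data-project"}          # hypothetical protected project
corp_nets = ["203.0.113.0/24"]             # hypothetical corporate network
print(request_allowed(perimeter, corp_nets, "prod-data-project", "203.0.113.7"))   # True
print(request_allowed(perimeter, corp_nets, "prod-data-project", "198.51.100.9"))  # False
```

The real control also factors in geographic location, device posture, and other access-level conditions, which is exactly why you need to know up front where your data lives and who needs to reach it.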
Users often want to know whether encryption will prevent the exposure of their files.
GCP encrypts stored objects by default with Google-managed keys. You might think that this would reduce data exposure, but it doesn't.
When you apply IAM permissions that allow the public to read objects in buckets, Google is obliged to decrypt the data, the same as it would for your internal users. This also applies to customer-managed encryption keys (CMEKs), which you provision and control in the Key Management Service (KMS).
The one case in which encryption prevents data exposure is customer-supplied encryption keys (CSEKs), because Google never stores those; you store and manage your own keys. For storage buckets, you supply your AES-256 key with each request so that Google can encrypt (and later decrypt) the object, holding the key only in memory. Without that key, GCP has no way to decrypt the data for you.
So, if an unauthorized party gains access to the encrypted objects, they would need to separately obtain access to your keys in order to decrypt the data.
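With CSEK, the Cloud Storage JSON and XML APIs take the key via the documented `x-goog-encryption-*` request headers. The sketch below builds those headers from a locally generated key; it constructs the headers only and does not make any API calls.

```python
# Sketch: build the request headers GCS expects for a customer-supplied key.
import base64
import hashlib
import os

def csek_headers(key: bytes) -> dict:
    """Return the x-goog-encryption-* headers for a 32-byte AES-256 key."""
    assert len(key) == 32, "CSEK must be a 256-bit (32-byte) key"
    return {
        "x-goog-encryption-algorithm": "AES256",
        "x-goog-encryption-key": base64.b64encode(key).decode(),
        "x-goog-encryption-key-sha256":
            base64.b64encode(hashlib.sha256(key).digest()).decode(),
    }

headers = csek_headers(os.urandom(32))  # the caller keeps this key; Google does not store it
print(sorted(headers))
```

Because the key travels with each request and is never persisted by Google, losing it means the objects are unrecoverable, which is the flip side of the protection this mode provides.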
API keys in GCP are a form of authentication and authorization that can be used when calling specific API endpoints in the cloud. Because these keys are tied directly to GCP projects rather than to individual users, and do not expire by default, they are considered less secure than OAuth 2.0 client credentials or user-managed service account keys.
In a secure cloud environment, all assets and resources should be monitored for when they are created, updated, or deleted. This makes sensitive credentials like API keys especially important to track.
Unfortunately, GCP does not currently support a native way to programmatically inventory API keys across an entire GCP organization.
While this is listed as #7, it is one of the most important things you need to avoid.
It's surprising how many organizations don't enable, configure, or even review the logs and telemetry data that public clouds provide, which in many cases can be extremely rich. Someone on your enterprise cloud team should be responsible for regularly reviewing this data and flagging security-related events. Be sure to enable the logging and monitoring functionality that supports these efforts.
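As one example of what "reviewing this data" can mean in practice, Cloud Audit Logs record admin activity such as IAM policy changes, and a review script can surface those entries. The sketch below models log entries as simplified dicts; real audit log entries are much richer, and the exact method name varies by service, so `SetIamPolicy` here is a placeholder.

```python
# Sketch: surface IAM-policy changes in a batch of audit log entries.
# Entries are simplified dicts; real Cloud Audit Log entries are richer,
# and the exact methodName differs between services.
def iam_change_events(entries, method="SetIamPolicy"):
    """Return entries recording an IAM policy rewrite."""
    return [e for e in entries if e.get("methodName") == method]

entries = [
    {"methodName": "SetIamPolicy", "principal": "user:eve@example.com"},
    {"methodName": "storage.objects.get", "principal": "user:bob@example.com"},
]
print(iam_change_events(entries))
```

Routine reads are noise in this context; policy rewrites are exactly the security-related events someone on the team should be flagging.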
Now that you have a better idea of some of the more common GCP security misconfigurations, here are some tips that can help you avoid them:
The rate at which cloud providers ship new features and functionality is exciting and promising. At the same time, however, it adds complexity to our cloud environments, making it harder to protect against misconfigurations and compliance risks while keeping your data secure.
This is why it is so important to consider an intelligent cloud security posture management (CSPM) solution. Intelligent CSPM helps with many important processes, including real-time misconfiguration monitoring, a consolidated view across multi-cloud environments, and standards and compliance reporting (NIST, HIPAA, GDPR, PCI-DSS, and the AWS Well-Architected Framework, among others). The real differentiator between next-gen CSPM and traditional tools is the ability to provide actionable insights based on risk context. That context comes from understanding how misconfigurations tie back to identities, permissions, and ultimately access to data, which helps your team prioritize the risks with the most severe business impact. Once you know which risks to address, next comes remediation: an intelligent CSPM facilitates smart workflows to route issues to the right teams for resolution, and leverages automation to fix problems before they become incidents.