Powerful identity and access management (IAM) models from public cloud providers like AWS, Azure, and GCP enable applications and data to be deployed with far greater protection than is possible in traditional data centers. However, these IAM solutions are not without risk when used incorrectly, and that risk is very different from (and sometimes much greater than) old-world enterprise IAM.
Whether you call them microservices, micro-frontends, or 12-factor apps, “cloud-native” is the current best paradigm for developing applications that take advantage of the latest trends in technology, including public clouds and containers. The cloud-native approach is not just for new applications, either; it is also the foundation for projects that decompose the large monolithic applications enterprises rely on.
Traditional approaches to maintaining compliance simply do not work when applied to a cloud-native application deployment or its underlying infrastructure. The traditional approach often depends on physically knowing where things are deployed, then relying on network security monitoring and perimeter access controls like VPNs to handle identification. In the world of clouds – even private clouds within a company’s own datacenter – the exact location can shift dynamically based on load. And with the number of cloud-native apps growing exponentially, tracking who is calling which service is no longer possible manually.
Here are four considerations for maintaining compliance in a cloud-native world.
|Consideration 1: Labeling and Grouping|
Organizations experienced a significant increase in the number of VMs in their infrastructure once they enabled virtualization to increase server density. Cloud-native applications running inside containers multiply that count again exponentially, and some instances have lifetimes measured in minutes. Because of the dynamic and ephemeral nature of cloud-native infrastructure, it is extremely important to group application assets together wherever possible and to label them consistently.
An example of grouping is keeping all of an application’s assets in a single resource group in Azure, or a single project in Google Cloud. Consistent labeling ties those assets together and is vital to providing full visibility and traceability across every part of an application and the infrastructure it has provisioned: from infrastructure components, to groups and roles, to the artifacts produced by the CI/CD pipeline.
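As a minimal sketch of what enforcing a labeling taxonomy might look like, the snippet below checks a resource’s labels against a required set. The label keys (`app`, `env`, `owner`, `data-classification`) are purely illustrative, not drawn from any provider’s standard.

```python
# Hypothetical label taxonomy: every resource must carry these keys.
REQUIRED_LABELS = {"app", "env", "owner", "data-classification"}

def missing_labels(resource_labels: dict) -> set:
    """Return the required label keys absent from a resource's labels."""
    return REQUIRED_LABELS - resource_labels.keys()

# A VM that was tagged by the DevOps team but missed the compliance key.
vm = {"app": "billing", "env": "prod", "owner": "team-payments"}
print(missing_labels(vm))  # → {'data-classification'}
```

A check like this can run in the provisioning pipeline so that unlabeled resources are caught before they reach production.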
Beyond the application components, proper labeling allows an organization to quickly identify every data store involved with each application, which in turn lets taxonomy and data-compliance processes track the data’s movement and the systems it interacts with.
Defining the strategy and the specific taxonomy for the labels requires involving any existing security and compliance organizations within the company. They will provide insight into how things are currently tracked and categorized, and they serve as a solid checkpoint to ensure the new cloud labeling goes beyond the basic needs of the DevOps and SRE teams.
|Consideration 2: Application Audit Trail|
Checking all developed code into a source code management system is the first step toward building an audit-friendly pipeline for the application. Once code is committed, the subsequent steps do not need to happen in the same tool – or even in an interconnected tool – but they do need to follow the same naming standards to provide a clear line of traceability. If the tools are not interconnected, then activities like peer reviews, merges, pull requests, and updates to change logs and affected defects must reference the tags and assets being impacted.
Once an application moves into the build, test, and release stages, the tools involved need to follow the same naming standard and use labels wherever possible to maintain that visibility.
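One lightweight way to enforce such a naming standard is a validation step in the pipeline itself. The sketch below checks artifact names against a hypothetical `<app>-<env>-<component>` convention; the pattern and names are assumptions for illustration only.

```python
import re

# Hypothetical naming standard: <app>-<env>-<component>,
# e.g. "billing-prod-api". Lowercase alphanumerics only.
NAME_PATTERN = re.compile(r"[a-z0-9]+-(dev|staging|prod)-[a-z0-9]+")

def conforms(name: str) -> bool:
    """True if an artifact name matches the naming standard."""
    return NAME_PATTERN.fullmatch(name) is not None

for artifact in ["billing-prod-api", "Billing_API_v2"]:
    print(artifact, conforms(artifact))
# → billing-prod-api True
# → Billing_API_v2 False
```

Failing the build on a non-conforming name keeps the audit trail consistent across otherwise disconnected tools.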
When designing and reviewing the CI/CD pipelines with the various stakeholders – including the security, compliance, and infrastructure teams – each will want their application checklists accounted for, executed, and the results logged as the application moves through the stages of the pipeline. With the information those checklists provide, refined through years of experience, informed decisions can be made about timing releases and ensuring new applications will not put compliance at risk.
|Consideration 3: Access Tracing|
Access tracing is the process of recording every call to every external function, service, or data store an application needs to function. A more formal name for it would be “authentication and authorization auditing.”
With regulatory compliance in some industries going as far as requiring a record of who updated, or even viewed, every single record, tracing each unique call to every service allows that information to be retrieved if and when regulators formally request it.
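A minimal sketch of this kind of per-call audit logging, assuming an in-memory log and hypothetical function and user names, could look like the decorator below.

```python
import functools
import time

def access_trace(audit_log: list):
    """Decorator sketch: record who called what, when, on which record."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(user, record_id, *args, **kwargs):
            audit_log.append({
                "ts": time.time(),       # when the access happened
                "user": user,            # who made the call
                "action": fn.__name__,   # what operation was invoked
                "record": record_id,     # which record was touched
            })
            return fn(user, record_id, *args, **kwargs)
        return inner
    return wrap

AUDIT_LOG = []

@access_trace(AUDIT_LOG)
def view_balance(user, record_id):
    # Stand-in for a real data-store read.
    return f"balance for {record_id}"

view_balance("clerk-17", "acct-42")
print(AUDIT_LOG[0]["user"], AUDIT_LOG[0]["action"])  # → clerk-17 view_balance
```

In production the log would go to an append-only store rather than a Python list, but the principle is the same: every read and write leaves a retrievable trail.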
This level of transaction logging within every application provides more than enough information to build usage patterns and even apply ML or AI to the dataset to find anomalies before external auditors discover them. These anomalies could be anything from a clerk in one department requesting results from another department to an ex-husband trying to look up his ex-wife’s bank balance.
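Even without ML, a crude rule can surface the cross-department example above. The sketch below flags audit entries where the caller’s department differs from the record’s owning department; every name and mapping here is hypothetical.

```python
# Hypothetical department mappings for users and records.
USER_DEPT = {"clerk-17": "claims", "clerk-22": "billing"}
RECORD_DEPT = {"rec-1": "claims", "rec-2": "billing"}

def anomalies(entries: list) -> list:
    """Flag accesses where the user's department doesn't own the record."""
    return [e for e in entries
            if USER_DEPT.get(e["user"]) != RECORD_DEPT.get(e["record"])]

log = [
    {"user": "clerk-17", "record": "rec-1"},  # same department: fine
    {"user": "clerk-17", "record": "rec-2"},  # cross-department: flagged
]
print(anomalies(log))  # → [{'user': 'clerk-17', 'record': 'rec-2'}]
```

An ML-based approach would learn such rules from the usage patterns instead of hard-coding them, but the input is the same transaction log.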
This type of tracing is invaluable to security and compliance teams when it comes to passing routine audits. There are other types of tracing that apply in a security context, but those are much more focused on defect and performance tracking.
|Consideration 4: Principle of Least Privilege|
The basic idea behind the principle of least privilege is to grant only the exact access a user or service needs to accomplish its prescribed function. One way to do this is to assign permissions to every user and service individually, but that approach does not scale and is an absolute nightmare to maintain and audit.
The better approach is for every application to define a set of roles corresponding to the personas it supports (such as individual roles for select, update, and delete) and then assign those roles to users as required. With this role-based approach, it can be quickly determined who has been granted each role; on the flip side, it is just as easy to see what access a single user has across the entire enterprise.
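A minimal sketch of that role-based check, with role names and permissions invented for illustration rather than taken from any specific cloud provider:

```python
# Hypothetical roles, each granting a set of actions.
ROLE_PERMS = {
    "reader": {"select"},
    "editor": {"select", "update"},
    "admin":  {"select", "update", "delete"},
}

# Users are assigned roles, never raw permissions.
USER_ROLES = {"alice": {"reader"}, "bob": {"editor"}}

def allowed(user: str, action: str) -> bool:
    """True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

print(allowed("alice", "update"))  # → False
print(allowed("bob", "update"))    # → True
```

Auditing "who can delete?" then reduces to listing users holding a role that includes `delete`, instead of scanning per-user permission grants.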
The principle of least privilege is the best way to ensure data is manipulated and viewed only by the specific people who should. Combined with transaction tracing, it lets you see exactly who made each request and when.
Compliance is serious business. By extending these practices and principles to all areas of the application stack, the decentralized nature of a cloud-native application and its underpinning data can be managed, tracked, and controlled as required. It is best to work with vendors designed for the cloud, like SonraiSecurity, to help automate the tracking of all your cloud resources – from configuration, to account access, to activity monitoring – across all the public clouds in your organization.