2 Months After Log4j: AWS, Azure, GCP and More Remediations


Log4j: The Recap

There is nothing in the cloud security world like having a severe remote code execution (RCE) zero-day drop on a Friday afternoon before the holidays. On December 10, 2021, that is exactly what happened, introducing the security world to a critical vulnerability in the popular Apache Log4j logging library. Because so many organizations use this library and develop in the public cloud, organizations of every size and industry, including Cloud Service Providers (CSPs) like AWS, Azure and Google Cloud, were affected. The vulnerability was tracked as CVE-2021-44228 and given the name “Log4Shell.” Due to the ease of exploitation and the significant impact it caused, the Log4j flaws, tracked as CVE-2021-44228, CVE-2021-45046, CVE-2021-4104 and CVE-2021-45105, have become the most significant vulnerabilities of the past 12 months.

Exploitation of this vulnerability allows unauthenticated remote code execution (RCE) and can give an attacker complete control of affected systems. Headlines from around the world continue to cover exploitation of this risk. For example, an Iran-linked threat actor has been targeting VMware Horizon servers by exploiting the Log4j vulnerability to run malicious PowerShell commands, deploy backdoors, harvest credentials and perform lateral movement. These actors follow in the footsteps of the attackers behind the Log4Shell campaigns observed in January.

Log4j Two Months Later

Google recently reported as many as 400,000 scans for Log4j vulnerabilities against Google Cloud each day, a sign that IT professionals need to remain vigilant and ensure that vulnerable systems are remediated. It is clear that this vulnerability is not going away any time soon, as bad actors continue to scan for vulnerable Log4j instances.

There have already been reports of organizations seeing attacks in the wild for weeks, and there will likely be more throughout 2022 if security teams don’t take appropriate measures.

Due to the ubiquity of the Log4j package, the possible attack surface for threat actors to target is far-reaching. Adversaries and researchers alike continue to scour the web for vulnerable Log4j versions. As a result, CSPs have been working, and continue to work, with their cloud customers to ensure the underlying infrastructure is secure and to check whether customer-installed tools and third-party dependencies in their environments are affected. While adversaries continue to knock on this door, observations show that they are opting to use known open-source tools, native cloud services, and previously established domains for persistence in their attacks.

All CSPs are affected in some way, and each provider has published helpful actions that an organization can take today if it hasn’t already done so.

Log4j Remediations: AWS, GCP & Azure

  • Google points Google Cloud admins to several Google Cloud-specific mitigations, including the company’s Cloud Armor solution, Java scanning feature, and threat hunting tools.
  • AWS Log4j remediations point admins to several AWS-specific mitigations, including the company’s AWS WAF, Amazon Route 53 Resolver DNS Firewall, IMDSv2, and other AWS security detection and response tools (one of these, enforcing IMDSv2, is sketched after this list).
  • Azure points admins to Azure-specific mitigations like Arc-enabled Data Services, Azure Application Gateway, Azure Front Door, Azure WAF, and other workarounds, response, and mitigation tools.
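
As a concrete illustration of one item on the AWS list, requiring IMDSv2 helps keep a Log4Shell-style payload from trivially harvesting instance credentials through the EC2 metadata service. The snippet below is a minimal sketch using boto3; the region and instance ID are placeholders, and it assumes your AWS credentials are already configured.

```python
# Minimal sketch: enforce IMDSv2 on an existing EC2 instance with boto3.
# The region and instance ID below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    HttpTokens="required",              # require session tokens (IMDSv2 only)
    HttpEndpoint="enabled",             # keep the metadata endpoint available
)
print(response["InstanceMetadataOptions"])
```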

With Log4j, AWS, Azure, and GCP each provide specific guidance to help their customers improve the security posture of their cloud environments; however, this has been overwhelming for those running multi-cloud environments. Since researchers discovered the Apache Log4j vulnerabilities, the security workforce has been stretched thin trying to patch systems, de-escalate network intrusions, and manage other data access priorities simultaneously. According to (ISC)², the magnitude of these vulnerabilities and the tedious remediation process are taking a toll on an already short-handed security workforce.

Because Log4j is a common library, used by many enterprise and cloud applications, it is hard to manually review each environment. To better detect malicious activity stemming from known and unknown threats in multi-cloud organizations, and to determine the magnitude of the risk each threat poses, security teams need a solution that can continuously monitor environments and analyze the information to spot malicious behavior right away.

This approach helps organizations identify anomalous behavior across AWS, Azure, and GCP in multi-cloud environments. It not only helps organizations better prioritize remediation efforts, but also lets them actively watch for exploits targeting Log4j-vulnerable systems and stop those exploits before they succeed.

First, your organization should find out whether it has already been compromised; malicious actors might already be inside your environment. Because most applications use Log4j indirectly, that is, they are Java applications that bundle the Log4j library rather than running Log4j as a standalone application, it is difficult to detect whether your organization is running a vulnerable application and even harder to remediate it. The combination of Log4j being hard to find and a remote code execution (RCE) vulnerability that lets a bad actor remotely access and control devices means your organization could have been affected before you even realized it.
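
One common way defenders have tackled the “hard to find” problem is to scan filesystems for archives that bundle the vulnerable JndiLookup class. The sketch below is a simplified illustration of that technique, not a full scanner: real tools also unpack nested JARs inside fat JARs and WARs and check the actual Log4j version.

```python
# Simplified sketch of filesystem-level Log4j detection: walk a directory tree and
# flag Java archives that bundle the JndiLookup class targeted by Log4Shell.
import os
import sys
import zipfile

MARKER = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan(root):
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith((".jar", ".war", ".ear")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as archive:
                    if any(entry.endswith(MARKER) for entry in archive.namelist()):
                        hits.append(path)
            except (zipfile.BadZipFile, OSError):
                continue  # unreadable or corrupt archive; skip it
    return hits

if __name__ == "__main__":
    for hit in scan(sys.argv[1] if len(sys.argv) > 1 else "/"):
        print(f"Possible Log4j (JndiLookup) found in: {hit}")
```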

So how do you find out if you are already compromised?

Adopt Continuous Security Monitoring

When approaching AWS, Azure, and GCP environments, you need to continuously monitor two types of identities. Understanding the types of identities that need to be managed, and the appropriate level of access they require, helps ensure the right identities have access to the right resources under the right conditions.

Person Identities: Your administrators, developers, operators, and end-users require an identity to access your environments and applications. These are members of your organization or external users with whom you collaborate and who interact with your resources via a web browser, client application, or interactive command-line tools. 

Non-Person Identities: These are your service accounts, operational tools, serverless functions, and workloads that require an identity (role, service principal, etc.) to make requests to your public cloud services. These identities include machines running in your environment, such as Amazon EC2 instances or AWS Lambda functions. You may also manage non-person identities for external parties that need to access your cloud resources.
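
In AWS terms, for example, IAM users roughly correspond to person identities while IAM roles typically back non-person identities. The sketch below is a minimal, illustrative inventory using boto3; a complete picture would also cover federated identities, groups, and service-linked roles.

```python
# Minimal sketch of an identity inventory in AWS with boto3: IAM users roughly map
# to person identities, while IAM roles typically back non-person identities such as
# EC2 instances, Lambda functions, and cross-account services.
import boto3

iam = boto3.client("iam")

def list_person_identities():
    users = []
    for page in iam.get_paginator("list_users").paginate():
        users.extend(u["UserName"] for u in page["Users"])
    return users

def list_non_person_identities():
    roles = []
    for page in iam.get_paginator("list_roles").paginate():
        roles.extend(r["RoleName"] for r in page["Roles"])
    return roles

if __name__ == "__main__":
    print("Person identities (IAM users):", list_person_identities())
    print("Non-person identities (IAM roles):", list_non_person_identities())
```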

Now that you understand the identities involved, your first move should be discovery. Start by inventorying your identities and their effective, end-to-end permissions and entitlements. With that inventory in hand, your organization can determine what data each identity can access, how it can access that data, and what it can potentially do with it. With this continuous visibility, you can spot security drifts and alert on them.
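
A first pass at permission discovery in AWS might look like the sketch below, which dumps each identity’s attached managed policies. Note that truly effective, end-to-end permissions also depend on inline policies, resource policies, SCPs, and permission boundaries, which is the evaluation dedicated CIEM tooling performs for you.

```python
# Illustrative first pass at permission discovery in AWS: list each identity's
# attached managed policies via get_account_authorization_details. Effective
# permissions additionally require evaluating inline and resource policies, SCPs,
# and permission boundaries.
import boto3

iam = boto3.client("iam")
paginator = iam.get_paginator("get_account_authorization_details")

for page in paginator.paginate(Filter=["User", "Role"]):
    for user in page.get("UserDetailList", []):
        attached = [p["PolicyName"] for p in user.get("AttachedManagedPolicies", [])]
        print(f"User {user['UserName']}: {attached}")
    for role in page.get("RoleDetailList", []):
        attached = [p["PolicyName"] for p in role.get("AttachedManagedPolicies", [])]
        print(f"Role {role['RoleName']}: {attached}")
```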

Next, you should determine the resources, such as VMs, where those identities are in use. With this visibility, teams can effectively determine where they have risks and configuration changes and then, in turn, manage those risks to ensure that the resources and the data within them stay secure.
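
As a hedged illustration of this step in AWS, the sketch below maps role-backed identities to the EC2 instances that actually use them, via instance profiles. Similar lookups apply to other services that assume roles, such as Lambda functions.

```python
# Illustrative sketch: find where role-backed identities are in use by grouping EC2
# instances under their attached instance profiles. The region is a placeholder.
import boto3
from collections import defaultdict

ec2 = boto3.client("ec2", region_name="us-east-1")

instances_by_profile = defaultdict(list)
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            profile = instance.get("IamInstanceProfile", {}).get("Arn", "<no profile>")
            instances_by_profile[profile].append(instance["InstanceId"])

for profile, instance_ids in instances_by_profile.items():
    print(f"{profile}: {instance_ids}")
```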

Last, your CIEM tool should automatically map out and visualize your cloud’s identity-to-data relationships to find potential attack vectors. For example, Sonrai Dig’s graph, with patented analysis, provides comprehensive identity-to-data risk mapping, enabling you to set the security baseline you will monitor against for continuous audit.


Your next move should be to classify your data. As you already know, not all data is created equal. To manage your data risk, you need to know what is critical to protect now and what can be managed later. For this, you need to classify your data so you know where your crown jewels are. Using Sonrai Dig’s patented data classification features, you can identify data based on criteria such as sensitivity (credit card numbers) or PII (names, addresses, phone numbers). You should also be able to classify data based on organizational needs with custom classifiers. Once you establish where your crown jewels live in your environment, and which identities have access to them, you can take decisive action to protect your most valuable assets.
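
To make the idea concrete, the sketch below is a deliberately simplified, pattern-based classifier for credit-card-like and phone-number-like values. It is only an illustration of the concept; production classifiers are far more sophisticated, but the goal is the same: label data so you know where your crown jewels live.

```python
# Deliberately simplified illustration of data classification using regular
# expressions. Real classifiers handle many more data types, validation, and context.
import re

CLASSIFIERS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels whose patterns match this text."""
    return {label for label, pattern in CLASSIFIERS.items() if pattern.search(text)}

print(classify("Card on file: 4111 1111 1111 1111, call 555-867-5309"))
# Labels returned include 'credit_card' and 'us_phone'.
```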

Just as you would put your most valuable possessions in a safe, secure your crown jewel data, such as sensitive PII, through lockdown. Locking down highly sensitive data means implementing least privilege: setting security controls that limit identities, and the permissions they hold, to only those that are required. This is where most teams stop, but you can’t stop here. Once you’ve established least privilege, you need to maintain it.
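
One way lockdown can look in practice, sketched below under assumed names, is an S3 bucket policy that denies access to every principal except a single approved role. The bucket name and role ARN are hypothetical, and a broad Deny like this should be tested carefully before it is applied in production.

```python
# Minimal lockdown sketch for crown-jewel data in S3: deny all access except for
# one approved role. Bucket name and role ARN are hypothetical placeholders.
import json
import boto3

BUCKET = "example-crown-jewel-data"                       # hypothetical bucket
APPROVED_ROLE = "arn:aws:iam::111122223333:role/DataApp"  # hypothetical role

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptApprovedRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"StringNotEquals": {"aws:PrincipalArn": APPROVED_ROLE}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```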

Getting to least privilege establishes your cloud security baseline. You need to maintain that baseline by continuously auditing your cloud environment for drift from it. Again, a tool like Sonrai Security’s cloud platform can help you monitor your environment 24/7/365 and, when a deviation is detected, alert the team(s) responsible for protecting the data so that they can take immediate action.
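
The core of drift detection can be reduced to comparing current state against an approved baseline, as in the hedged sketch below. The baseline and role name are hypothetical; a real platform performs this comparison continuously across every identity and resource rather than for a single role.

```python
# Minimal sketch of drift detection against a security baseline: compare the managed
# policies attached to a role today with a previously approved set and alert on any
# difference. Baseline contents and role names are hypothetical.
import boto3

BASELINE = {"DataApp": {"AmazonS3ReadOnlyAccess"}}  # hypothetical approved baseline

iam = boto3.client("iam")

def current_policies(role_name):
    policies = set()
    for page in iam.get_paginator("list_attached_role_policies").paginate(RoleName=role_name):
        policies.update(p["PolicyName"] for p in page["AttachedPolicies"])
    return policies

for role_name, approved in BASELINE.items():
    drift = current_policies(role_name) ^ approved  # symmetric difference = any change
    if drift:
        print(f"ALERT: drift detected on role {role_name}: {sorted(drift)}")
```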

While you work through this next phase with Log4j, you may have many teams working around the clock. However, this is not a long-term, scalable solution. Teams need intelligent workflows and automation to keep the environment secure while the team is asleep. With continuous monitoring, if a deviation occurs and something goes ‘bump in the night’, your workflows and automation react to mitigate the risk at the speed and scale of your cloud. Then, it’s game over for the bad guys.

We understand the stakes for remediation couldn’t be higher for security teams. On January 4, 2022, following the public disclosure of the Log4Shell vulnerability, the Federal Trade Commission published a blog post reminding companies that “the duty to take reasonable steps to mitigate known software vulnerabilities implicates laws including, among others, the Federal Trade Commission Act and the Gramm Leach Bliley Act.” The post calls on companies to take immediate steps to reduce the likelihood of harm to consumers that could result from the exposure of consumer data through Log4j or similar known vulnerabilities. Whether you are a Sonrai Security customer or not, we are here to assist you with any cloud security questions or solutions related to Log4j.

You can continue to follow this vulnerability on the US Cybersecurity and Infrastructure Security Agency’s webpage with links to various vendor blogs, along with a list of affected applications.