Determining Cloud Exploitation – Indicators of Compromise

July 28, 2023 | By Chris Vernooy


Indicators of compromise differ for every piece of malware and every exploit, but there are common denominators. This article covers determining cloud exploitation: the most common indicators that a set of cloud resources, accounts, or services may have been compromised, along with some discovery techniques and mitigation strategies.

Unknown or Anomalous Outbound Network Traffic 

Firewalls are not a vestige of a long-ago era, the unevolved… the datacenter. OK, not really: datacenters are what all cloud platforms still rely on, but that's how it feels when people talk about "The Cloud". Firewalls are required in cloud deployments just as they are on premises. Cloud-based firewalls offer some benefits, and at other times a traditional (virtualized) security appliance is more appropriate. Cloud network logging, when available, is much harder to consume and understand than firewall logs.

Firewalls offer much greater visibility into the network traffic leaving your network for places on the internet. Some of the more interesting details emerge when you leverage the inherent ability of firewalls to block certain destinations such as C&C/C2 (Command & Control) servers, Tor and other proxies, and other malicious hosts. Aggregating firewall data into a SIEM or XDR tooling lets you visualize it and highlights when these connections happen. Connections to known-bad or suspicious external IPs are a strong indicator of a compromised system somewhere in the cloud or hybrid network.
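As a minimal sketch of that idea, the script below checks exported outbound connection records against a threat-intelligence blocklist. The file names and CSV columns (src_ip, dst_ip) are illustrative assumptions; in practice this matching usually happens inside the SIEM or XDR platform itself.

```python
# Minimal sketch: flag outbound connections whose destination appears on a
# threat-intel blocklist. Assumes firewall/flow records exported to CSV with
# "src_ip" and "dst_ip" columns; file names and formats are illustrative.
import csv

def load_blocklist(path: str) -> set[str]:
    """Load one known-bad IP per line, ignoring blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

def flag_bad_destinations(flow_csv: str, blocklist: set[str]) -> list[dict]:
    """Return flow records whose destination IP is on the blocklist."""
    hits = []
    with open(flow_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("dst_ip") in blocklist:
                hits.append(row)
    return hits

if __name__ == "__main__":
    bad_ips = load_blocklist("threat_intel_ips.txt")   # hypothetical feed export
    for hit in flag_bad_destinations("outbound_flows.csv", bad_ips):
        print(f"ALERT: {hit['src_ip']} connected to known-bad host {hit['dst_ip']}")
```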

If you run containers, a perimeter firewall can reveal traffic that would otherwise be hidden by encrypted container-to-container communication when using a service mesh. For a more detailed explanation of this last point, please check out my article "5 antipatterns in container security". Consistent traffic to hosts associated with active threats, or to countries commonly associated with threat actors (China, North Korea, Iran, etc.), can indicate a compromise, as can strange traffic patterns such as traffic during off hours. Large amounts of data or traffic leaving the network, or connections to other suspicious hosts such as cryptocurrency endpoints or blockchain addresses, are further indicators.

Of course, to know what looks anomalous inbound, internally, and outbound, you need data flow diagrams (DFDs) that show what normal network behavior looks like. One way to determine this is to set up network monitoring and establish a baseline of what is normal. Deviations from that baseline should be alerted on and investigated. From that model, security and network operations can be better informed about what to look for and what "anomalous" means in the context of the environment. DFDs are also required by certain compliance frameworks. Not having the tools and models in place to detect and compare outbound traffic patterns is a recipe for being breached unknowingly and letting that breach continue undetected. That in turn puts your intellectual property, your customer information (some of which may be protected data), and ultimately your reputation at risk. How many organizations actively monitor or restrict outbound communications? It's safe to assume it's not built into most standard operating procedures. Monitoring and restricting outbound communications should become an industry standard.
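Here is a minimal baselining sketch to make that concrete: it learns typical outbound volume per hour of day from historical records and flags hours that deviate sharply. The record format, the three-sigma threshold, and the tiny sample data are illustrative assumptions; a SIEM or purpose-built detection tool would do this with far more nuance.

```python
# Minimal baselining sketch: learn typical outbound bytes per hour of day from
# historical flow records, then flag observations that deviate sharply.
import statistics
from collections import defaultdict
from datetime import datetime

def build_baseline(records: list[tuple[str, int]]) -> dict[int, tuple[float, float]]:
    """records: (ISO-8601 timestamp, bytes). Returns {hour: (mean, stdev)}."""
    by_hour = defaultdict(list)
    for ts, nbytes in records:
        by_hour[datetime.fromisoformat(ts).hour].append(nbytes)
    return {h: (statistics.mean(v), statistics.pstdev(v)) for h, v in by_hour.items()}

def is_anomalous(ts: str, nbytes: int, baseline, sigmas: float = 3.0) -> bool:
    """Flag traffic more than `sigmas` standard deviations above the hourly mean."""
    hour = datetime.fromisoformat(ts).hour
    if hour not in baseline:
        return True  # no history at all for this hour of day
    mean, stdev = baseline[hour]
    return nbytes > mean + sigmas * max(stdev, 1.0)

# Illustrative sample data: quiet overnight traffic, then a huge off-hours spike.
baseline = build_baseline([("2023-07-01T02:00:00", 1200), ("2023-07-02T02:00:00", 1400)])
print(is_anomalous("2023-07-03T02:00:00", 9_000_000, baseline))  # True
```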

The Perimeter is Dead; Long Live the Perimeter.

Strange Binaries or Processes

Generate software bills of materials (SBOMs) for virtual machines, network devices, containers, and applications on a regular basis. An SBOM details all of the libraries, software, components, and executables present on a host. Some malware is completely fileless, but that is not as common, especially when dealing with APTs (Advanced Persistent Threats). There are few places to hide that are not on the filesystem, though they do exist, such as unused storage regions or firmware. If cloud firmware is exploited, the problem is much larger, so we won't focus on those mechanisms in this article. If a threat actor gains persistent access to your cloud-deployed resources, it is likely that they have installed a backdoor, such as in the campaigns TeamTNT is known for (darkreading.com). If you notice binaries you are not familiar with, whether actively running, sitting on storage, or appearing in reports from scanners or SBOMs, that is an indication the resource has been compromised. Additionally, on an EC2 instance, for example, you can see all of the programs that are listening on sockets. If you find something listening that you are unaware of, especially on unusual ports, it may be an issue. Rootkits are malware written to hide themselves, so they won't necessarily show up in any userland or root tools on a Linux or Windows instance or cloud VM.
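One hedged way to check for unexpected listeners on a Linux host, assuming the third-party psutil library is available; the allowlisted ports below are purely illustrative:

```python
# Minimal sketch: list listening sockets and flag any port not on an expected
# allowlist. Requires psutil (pip install psutil); run with enough privileges
# to see process names for other users' sockets.
import psutil

EXPECTED_PORTS = {22, 80, 443}  # hypothetical: SSH plus the web tier

def unexpected_listeners() -> list[str]:
    findings = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in EXPECTED_PORTS:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            findings.append(f"port {conn.laddr.port} opened by {proc} (pid {conn.pid})")
    return findings

if __name__ == "__main__":
    for finding in unexpected_listeners():
        print("Investigate:", finding)
```

Keep the rootkit caveat in mind: a compromised kernel can lie to userland tools like this one, so cross-check against out-of-band sources such as VPC flow logs or the perimeter firewall.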

I often hear "well, we use Lambdas so we are safe". OK, let's address this. If I have a Lambda that imports packages and one of those packages is malware or has been maliciously edited, sure, it won't run beyond the time the Lambda runs. BUT during that time it has access to all of the memory, data, and systems the code needs to do whatever I have the Lambda doing: secrets (keys, certs, passwords, tokens, etc.), data from any sources the code interacts with, and read/write access to cloud-specific resources with whatever permissions it has been granted. No persistent threat? That functionally malicious Lambda can still use every permission granted to it – install things on EC2 servers, read and write to databases (think: create stored procedures, or a more fun option, dynamically load code from a saved zip in S3 – do you know every file in your S3 buckets?), and so on. There are A TON of fun things an attacker can do to your cloud resources "just from a Lambda". This brings us to our next indicator and how to find malicious behavior – logging!
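Before we get to logging, though, it is worth enumerating exactly what each function's execution role can do, since that defines the blast radius of a compromised Lambda. A hedged sketch using boto3, assuming credentials that allow lambda:ListFunctions and iam:List* on the roles in question:

```python
# Hedged sketch: list Lambda functions and the IAM policies attached to their
# execution roles, so you can review what a compromised function could do.
import boto3

lambda_client = boto3.client("lambda")
iam = boto3.client("iam")

for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].split("/")[-1]  # role name from the role ARN
        attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
        inline = iam.list_role_policies(RoleName=role_name)["PolicyNames"]
        print(fn["FunctionName"])
        print("  managed:", [p["PolicyArn"] for p in attached] or "none")
        print("  inline: ", inline or "none")
```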

Strange Behavior in Logs

CloudTrail (for AWS) and Azure Monitor (for Azure) are gold mines for monitoring malicious access because all access to your cloud account is logged there. Whether it is console access, API access, or something else, the user and their actions are logged in CloudTrail. It is a lot to parse and analyze, but aggregating it is a great way to apply automated searches and filters that look for common malicious behavior. Oteemo has used CloudTrail in AWS to monitor user behavior, flag suspicious actions, find anomalous logins, and help ensure we aren't being targeted by threat actors. Logins from unexpected places such as other countries, Tor or VPN nodes, or unexpected states and cities can be a big tip-off that a cloud account has been compromised. Another giveaway is new users set up in the account that don't correspond to a legitimate, auditable addition. Multiple failed authentication attempts against the cloud account are another pretty big indicator of a threat actor trying to access it. Recent unauthorized access to KMS or other encryption keys is another indicator; even access by an account that should have access should be followed up on to ensure it was actually the user in question and not stolen cloud credentials. CloudTrail gives you all of this information. From an Ops perspective, CloudTrail can also be used to identify user actions that caused a bad outcome, such as an outage or interruption in service, with a little digging and piecing together of the timeline.
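As one concrete starting point, the sketch below pulls recent ConsoleLogin events from CloudTrail and surfaces the failed attempts with their source IPs. It assumes boto3 credentials with cloudtrail:LookupEvents; the 24-hour window is an arbitrary illustrative choice.

```python
# Hedged sketch: surface failed AWS console logins from CloudTrail.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)

paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=start,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        outcome = detail.get("responseElements", {}).get("ConsoleLogin")
        if outcome == "Failure":
            print(f"Failed console login from {detail.get('sourceIPAddress')} "
                  f"as {detail.get('userIdentity', {}).get('arn')}")
```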

It is imperative to ensure you are logging all queries to, and usage of, all of your cloud resources and services. Database logs will tell you what queries were run and by whom, so you can tell whether a piece of code, or a Lambda, was trying to insert, read, or do something funky in your database. Set up filters for logs that contain SQL queries with things like `where 1=1` or `where 1 = 0`; these are a common indication of a SQL injection attack.
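A minimal sketch of that kind of filter, run against an exported log file. The patterns and file name are illustrative, and simple pattern matching like this will produce false positives that need human review:

```python
# Minimal sketch: scan exported database/application logs for classic SQL
# injection tells such as "where 1=1" / "where 1 = 0" or UNION SELECT.
import re

SQLI_PATTERNS = [
    re.compile(r"where\s+1\s*=\s*[01]", re.IGNORECASE),
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"--|/\*"),  # inline SQL comment sequences
]

def suspicious_lines(log_path: str):
    """Yield (line number, line) for log lines matching any SQLi pattern."""
    with open(log_path) as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SQLI_PATTERNS):
                yield lineno, line.strip()

for lineno, line in suspicious_lines("db_query.log"):   # hypothetical log export
    print(f"line {lineno}: {line}")
```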

Strange Internal Network or Inbound Connections

Seeing weird connections in your VPC flow logs is another indication that something has been exploited. If you don't use internal firewalls, then VPC flow logs should at least let you capture some of the malicious or anomalous traffic. Azure logging is a bit different and supports a more basic set of metrics. It should be well defined which resources are allowed to communicate with which other resources, and over what protocols and ports. This should be enforced technically with security groups and documented in a data flow diagram (DFD). Finding something that breaks the expected traffic pattern, or is attempting to, even if it's blocked by a security group, is an indicator of a compromised system. It could also turn out to be an undocumented requirement that was missed, so it is important to investigate. Just using security groups (in AWS and Azure) or perimeter firewalls for your cloud isn't enough; monitoring exploit attempts is just as important for finding and rooting out a compromised system. Another key indicator is a user principal repeatedly attempting to access data or resources it is not provisioned to access or otherwise shouldn't be accessing.
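A hedged sketch of that comparison, checking default-format VPC Flow Log records (exported to a text file) against the flows your DFD says should exist. The ALLOWED_FLOWS table and file name are illustrative assumptions:

```python
# Hedged sketch: flag VPC Flow Log records that fall outside the flows
# documented in the data flow diagram. Expects default-format records.
import ipaddress

# (source CIDR, destination CIDR, destination port) tuples from the DFD
ALLOWED_FLOWS = [
    ("10.0.1.0/24", "10.0.2.0/24", 5432),   # app tier -> database
    ("10.0.0.0/16", "0.0.0.0/0", 443),      # anything -> HTTPS egress
]

def flow_is_expected(src: str, dst: str, dst_port: int) -> bool:
    for src_cidr, dst_cidr, port in ALLOWED_FLOWS:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_cidr)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_cidr)
                and dst_port == port):
            return True
    return False

with open("vpc_flow_logs.txt") as f:          # one default-format record per line
    for line in f:
        fields = line.split()
        if len(fields) < 13 or fields[0] == "version":
            continue                          # skip header or malformed lines
        if "-" in (fields[3], fields[4], fields[6]):
            continue                          # NODATA/SKIPDATA records
        src, dst, dst_port, action = fields[3], fields[4], int(fields[6]), fields[12]
        if not flow_is_expected(src, dst, dst_port):
            print(f"Unexpected ({action}) flow: {src} -> {dst}:{dst_port}")
```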

Unauthorized Changes to Configurations or Data

Any unexpected changes to cloud configurations, other resources, or cloud VM instances are a marker of a threat actor having access to your cloud account. You can use CloudTrail to backtrack and figure out how the change was made and which user account made it. CloudTrail won't tell you whether something was modified maliciously or what the impact of the change is. Examples of such changes include modified security groups, changes to which keys are used to encrypt certain data or the removal of keys, and cloud storage that has been opened to the world or downloaded. Deletion of data, encryption of data with keys administrators don't have access to, or removal of administrators is another key indicator of a compromised cloud account.

Similarly, if cloud logging was turned off or flow log configurations were deleted, that is an indication of someone not wanting their activities to be recorded or monitored.
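A hedged sketch for hunting both kinds of activity: it queries CloudTrail for event names commonly associated with covering tracks or loosening controls. It assumes boto3 credentials with cloudtrail:LookupEvents, and the event list and seven-day window are illustrative choices, not an exhaustive detection set.

```python
# Hedged sketch: look up CloudTrail events often associated with tampering
# (trail/flow-log changes, security group changes, key disabling).
from datetime import datetime, timedelta, timezone

import boto3

SUSPICIOUS_EVENTS = [
    "StopLogging", "DeleteTrail", "UpdateTrail",
    "DeleteFlowLogs",
    "AuthorizeSecurityGroupIngress", "RevokeSecurityGroupEgress",
    "ScheduleKeyDeletion", "DisableKey",
]

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=7)

for event_name in SUSPICIOUS_EVENTS:
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            print(f"{event['EventTime']}  {event_name}  by {event.get('Username', 'unknown')}")
```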

Hopefully this helps you, the reader, know what to look for when investigating possibly compromised cloud accounts. There is never a shortage of threat actors trying to leverage their way into your systems and steal keys, data, intellectual property, or other sensitive information, and we have to remain diligent. Sometimes these attacks are more widespread and successful than others. Complacency on the defender's side, paired with threat actors who keep trying after repeated failures, increases the chances of their success later, so we as defenders have to stay vigilant.

Oteemo is a services company that can help you assess your current security posture, perform vulnerability and penetration testing, and help you architect and engineer a more secure design for your cloud and hybrid platforms. 
