A major security breach of a Capital One database compromised 140,000 Social Security numbers, 80,000 bank account numbers and one million Canadian Social Insurance numbers. The breach's costs are expected to reach up to $150 million.

 

The person responsible for the hack is Ms. Thompson, a software engineer from Seattle. She broke into a server holding sensitive customer information, carrying out one of the largest data thefts from a bank. Thompson is a former employee of Amazon Web Services, which hosted the Capital One database that was breached. The FBI investigation revealed that Thompson gained access to the sensitive data through a misconfigured Web Application Firewall (WAF).

 

Understanding the Breach's Mechanism:

 

Before pointing a finger at either Capital One or AWS, it's important to understand the mechanism of the breach. First of all, it's important to note that although data was stolen from an AWS system, no AWS system was compromised in the incident. This is an important point to understand, as questioning cloud security in general following this breach would mean pointing that finger in the wrong direction.

 

The incident does highlight the complexity of maintaining a secure cloud infrastructure, which comprises many moving parts and changes continuously according to business needs. So eventually, despite the security measures offered by the cloud provider, cloud servers remain exposed to risk and human error stemming from lack of knowledge and expertise.

 

In Capital One's case, the main issue was a misconfiguration of the WAF. For whatever reason, it was assigned too many permissions, allowing it to list all of the files in any data bucket and to read the contents of those files.
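
To make the risk concrete, here is a minimal Python sketch (assuming the boto3 library and whatever AWS credentials the over-permissioned role exposes; the bucket contents are simply whatever the account holds) of what such broad permissions allow: enumerating every bucket and reading every object key.

```python
# A minimal sketch of what an over-permissioned role allows: listing every
# bucket in the account and every object inside it. Requires boto3 and valid
# AWS credentials; bucket names are whatever the account happens to hold.
import boto3

session = boto3.Session()      # picks up whatever credentials the role exposes
s3 = session.client("s3")

# If the role's policy grants s3:ListAllMyBuckets / s3:ListBucket / s3:GetObject
# on "*", nothing below is blocked.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    print(f"Bucket: {name}")
    for obj in s3.list_objects_v2(Bucket=name).get("Contents", []):
        print("  ", obj["Key"])   # with GetObject on "*", each key is also downloadable
```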

 

The hack used a well-known method called 'Server Side Request Forgery' (SSRF), which basically means that a server can be tricked into making requests that it shouldn't be permitted to make. The compromise happens on the server side when the server fetches a resource on behalf of the client, i.e. the server acts as an HTTP client, fetches the resource and returns it to the original client.
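
The following is a minimal, deliberately vulnerable Python sketch of the SSRF pattern described above, not Capital One's actual code; the /fetch endpoint and the url parameter are illustrative. Pointing the url parameter at an internal-only address, such as the EC2 instance metadata service, makes the server return data the outside client could never reach directly.

```python
# A deliberately vulnerable sketch of the SSRF pattern: the server fetches
# whatever URL the client supplies. Pointing "url" at an internal address
# (e.g. the EC2 metadata service at http://169.254.169.254/) returns data
# the client could never reach directly. Assumes Flask and requests.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    target = request.args.get("url", "")
    # Vulnerable: no validation of the target, so the server will happily
    # act as an HTTP client against internal-only endpoints.
    resp = requests.get(target, timeout=5)
    return resp.text

if __name__ == "__main__":
    app.run()
```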

 

 

To prevent this, the WAF should have been able to access only the specific domains of the required web server. An allow-list (whitelist) of permitted destinations should have been created, blocking requests to any web pages that are not required.
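
A minimal sketch of that allow-list idea in Python (the host names are placeholders, not Capital One's or AWS's): the fetching component refuses any host that isn't explicitly permitted.

```python
# A minimal allow-list sketch: only explicitly permitted hosts may be fetched.
# The host names below are hypothetical placeholders.
from urllib.parse import urlparse
import requests

ALLOWED_HOSTS = {"api.example.com", "static.example.com"}   # hypothetical allow-list

def safe_fetch(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"Host '{host}' is not on the allow-list")
    return requests.get(url, timeout=5).text
```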

 

Frequent firewall audits are extremely important for preventing a hack. Rules need to be checked to verify that they are as tight as they need to be, that there are no unnecessary rules, and that services and access points are limited to what the business actually requires. Beyond checking what already exists, firewall rule sets are dynamic and can change on a weekly basis. Whenever there's a change in the rules, a 'what if' risk check must be performed to make sure that those changes don't introduce new risks or outages into the system.
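
As an illustration of what such an automated audit can look like, here is a minimal Python sketch using boto3 that scans AWS EC2 security group rules and flags anything open to the entire internet; the criterion used (0.0.0.0/0) is just one example of a rule wider than the business likely needs.

```python
# A minimal rule-audit sketch, assuming boto3 and AWS EC2 security groups as
# the rule source: flag any ingress rule open to the whole internet.
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"Overly broad rule in {group['GroupId']} "
                      f"({group['GroupName']}): ports "
                      f"{rule.get('FromPort')}-{rule.get('ToPort')} open to the world")
```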

 

This breach raises questions about the 'shared responsibility' cloud security model. Shared responsibility means that the cloud provider is responsible for updating, patching and securing its infrastructure, while the customer is responsible for protecting whatever they run on that infrastructure.

 

In practice, enterprise security teams need to establish perimeters, define security policies and implement controls in order to manage connectivity to cloud servers and protect the data stored on them. These tasks add significant complexity to security management and are extremely prone to human error when performed manually.
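
One way to reduce that manual error-proneness is to express such controls as code. Below is a minimal sketch (assuming boto3; the group name, VPC ID and CIDR range are placeholders) of defining a single connectivity control, inbound HTTPS from a known corporate range only, programmatically rather than by hand.

```python
# A minimal sketch of expressing one connectivity control as code instead of a
# manual console change, assuming boto3; the name, VPC ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

group = ec2.create_security_group(
    GroupName="app-servers-restricted",          # hypothetical group name
    Description="Inbound HTTPS from corporate range only",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",   # documentation range as a stand-in
                      "Description": "Corporate egress range"}],
    }],
)
```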

 

Without an automated tool, such as CHS by CalCom, that controls the entire configuration process, there's a real risk of misconfigurations that will lead to security holes and, eventually, to breaches like the one Capital One has just experienced.

 
