The Equifax Hack – A Huge Deal
We’ve certainly seen our fair share of big-news hacks over the last few years, with household names like Home Depot, Target, HBO, and Anthem Health, to name a few. Even as big as some of these data breaches were – Anthem was 80 million records – they affected just one or two key areas of our lives, typically one credit card or a subset of our personal information. While still a big deal, the worst part for most people was an inconvenient afternoon updating auto payments on multiple vendor sites or monitoring bank statements a little more closely. Though serious in their own right, they simply pale in comparison to the Equifax Hack – this is a very big deal.
The Equifax Hack affects virtually every aspect of your personal data protection. Most of the data used to uniquely identify 143 MILLION people is out there: DOB, SSN, driver’s license number, and potentially current and previous addresses, employment history, you name it. Combine that with the personal life information we now share so regularly on social media, and you get pretty much everything needed to access accounts, perform password resets, or open new accounts in your name. Every account is at risk for 143 million Americans – nearly everyone living in the United States with a credit history. This is no joke, this is not hyperbole; it affects you and it affects me. It is very much a HUGE DEAL!
We are not going to regurgitate the top 10 things everyone should do as a result of the Equifax Hack as there are at least 100 articles published on that already. We are not going to scream for removal of the SSN as the unique identifier for medical services. Put simply, it doesn’t matter anymore. Just spend a few minutes thinking about the logic surrounding this – regardless of what is being used to prove your identity, this hack would have spilled it along with all necessary supporting documentation. It’s all out there and ripe for the picking.
So, how could this happen?
PCI-DSS has strict guidelines to protect our credit cards and related transactions. HIPAA has stringent regulations on PHI/ePHI and requires regular assessments. Most industries have regulations requiring steps that should, could or would have prevented this monumental breach. So, how did it happen? Did Equifax just not care? Was this gross negligence? The reality is we may never know the truth about how it actually happened. We can, however, use what we have been told to hypothesize a scenario (example below) where this could happen even with all the best intentions of creating a secure web application.
Example of Web App Dev Gone Wrong
Equifax stated there was a vulnerability in a web application. They did not claim a zero-day or a complex, well-funded attack. The following is a possible scenario; Spohn Consulting has no direct knowledge that this is how it happened. That being said, we find vulnerabilities in web applications all the time. The narrative we present explains how and why web applications that have gone through a deep and thorough code review can still end up online with critical known vulnerabilities. All names have been changed to protect the innocent.
There is a validated need to provide several retailers access to a shared database.
- The software development team builds the application. The web app goes through several code reviews throughout the software development lifecycle, failing repeatedly and being sent back for modification to correct memory leaks, excess CPU utilization, and slow network access.
- After multiple revisions, the app finally makes it through a functional code review and is given an in-depth security code review. The team is months behind, but it is critical to perform all the necessary checks. Several vulnerabilities in the form of escape sequences are identified and removed during this step, and the app finally passes all internal code reviews.
- The insurance company demands an independent 3rd party code review. They are now 67 days behind schedule. The web app passes and they submit the results to be filed with the insurance company.
- At the same time, integration beta testing of the compiled code is completed. It is determined that the web form feeding the web app does not allow passwords that meet the new password policy, which is based on the latest NIST password guidelines and now requires special characters and passphrases up to 60 characters in length. The form template is modified so that all input forms built from it allow passphrases of that length and complexity. The minimum password length is left unchanged. The final acceptance test shows the app meets or exceeds performance requirements.
- The app goes into production. 120 days later, it is discovered that 700k+ unique accounts have been compromised.
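The password-policy change in the scenario above could have been made without relaxing validation on every field. A minimal sketch in Python, with illustrative limits loosely following NIST-style guidance (not Equifax’s actual policy):

```python
import re

# Illustrative limits only; not Equifax's actual policy.
MIN_LENGTH = 8
MAX_LENGTH = 60  # matches the 60-character passphrases in the scenario

def password_meets_policy(password: str) -> bool:
    """Long passphrases with special characters are allowed."""
    return MIN_LENGTH <= len(password) <= MAX_LENGTH

def username_is_valid(username: str) -> bool:
    """Other input fields keep strict whitelist validation; the scenario's
    mistake was relaxing the shared form template for every field."""
    return re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", username) is not None
```

The point is that only the password field needed the looser rules; reusing one permissive template everywhere is what widened the attack surface.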
Failure 1 – The web app was compiled using vulnerable versions of OpenSSL and PHP. The source code was sound, but the compiled (installable, executable) application contained the OpenSSL and PHP vulnerabilities.
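A build-time gate that compares bundled dependency versions against their first patched releases is one way to catch this class of failure before release. A hypothetical sketch; the version floors below are illustrative, not the actual vulnerable releases:

```python
# Hypothetical CI gate: fail the build if a bundled dependency is older
# than its first patched release. Version floors are illustrative only.
MINIMUM_SAFE = {
    "openssl": (1, 0, 2),
    "php": (5, 6, 30),
}

def parse_version(text: str) -> tuple:
    """Turn a string like '1.0.1f' into a comparable tuple,
    ignoring letter suffixes."""
    parts = []
    for piece in text.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def is_vulnerable(name: str, version: str) -> bool:
    """True if the dependency is tracked and below its patched floor."""
    floor = MINIMUM_SAFE.get(name.lower())
    return floor is not None and parse_version(version) < floor
```

In practice this job belongs to a software-composition-analysis step in the pipeline, but even a crude check like this would flag a build linked against a known-bad library.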
Failure 2 – The web form template was modified so that all input fields on all forms accepted special characters and up to 60 characters, allowing for both SQL injection and cross-site scripting attacks.
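The injection risk in Failure 2 comes from concatenating form input directly into queries; parameterized queries neutralize the same input. A minimal sketch using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name: str):
    # Vulnerable: form input is pasted straight into the SQL string.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # Parameterized: the driver treats input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload now permitted by the relaxed field rules:
payload = "nobody' OR '1'='1"
```

With the payload, the unsafe lookup returns every row’s secret, while the parameterized version correctly matches nothing.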
Failure 3 – The compiled application was never tested, internally or externally, with free tools like Burp Suite, OWASP ZAP, or sqlmap, or with commercial tools from Acunetix, Rapid7, or Tenable.
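Dynamic testing of the running application, the step Failure 3 says was skipped, can start with something as simple as checking whether a marker payload comes back unescaped. A toy sketch of the idea behind scanners like OWASP ZAP or sqlmap (not their actual logic):

```python
import html

# Marker payload a scanner might submit through a form field.
XSS_PROBE = "<script>alert('xss-probe')</script>"

def reflects_unescaped(response_body: str) -> bool:
    """True if the probe appears verbatim in the response body --
    a strong hint the page is vulnerable to reflected XSS."""
    return XSS_PROBE in response_body

# Simulated responses from a search form that echoes the term back:
vulnerable_page = f"<p>No results for {XSS_PROBE}</p>"
patched_page = f"<p>No results for {html.escape(XSS_PROBE)}</p>"
```

Real scanners crawl the whole site and try hundreds of payload variants per parameter, but this is the core check: does the deployed, compiled application do something the source-code review said it couldn’t?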
It is not uncommon for app developers to believe that secure source code makes for a secure application. It is as wrong a belief as a network administrator believing that, so long as his servers are patched, the network is secure. Both are widely held beliefs that are simply wrong. The final web application, with its end-user interface, is a mixture of source code and all software hooks compiled and integrated into the website. Vulnerabilities can be, and frequently are, introduced through accidental oversight during the lengthy development process. We regularly present findings to customers who are shocked at the critical vulnerabilities in their code when they were wholeheartedly expecting a rubber stamp of approval. They come to us to validate what they believe to be true – a functional and secure product that passed a complete code review and is now in production.
Equifax had to know they were a target and that all their external-facing applications were constantly being probed for weaknesses of any kind. They were most likely hacked because they believed and trusted in their code review while failing to perform a complete web application assessment of the full production system before opening it up to the Internet.
Before someone points it out, yes, there was a full session at DEF CON 24 that advocated full web application testing in lieu of a security code review. I am pretty sure the presenter used a Shellshock vulnerability introduced in the compiling process as his key example. That was two years ago, so if anyone knows the name of the presenter, please let me know so we can credit him by name.
Post your best theory and lesson learned – we would love to hear it!