Bleeding Hearts - Don't Just Blame OpenSSL

sanjayuniken

Charles Perrow, the celebrated author and academic, has long argued that as the complexity of modern technological systems increases, so does the risk of unforeseen interactions among minor issues resulting in larger, catastrophic accidents. To make his point, Mr Perrow describes the accident at the Three Mile Island nuclear reactor, where a chain of minor incidents and failures - a faulty valve, a tag obstructing the view of a monitoring instrument, a faulty gauge, among other things - led to the disaster. In such “normal accidents”, he concludes, each minor failure by itself doesn’t necessarily have a large impact. Put together, however, they can lead to catastrophe.

In more recent times, the Heartbleed OpenSSL flaw was a classic “normal accident”!

Information security is not an easy problem. Ever-growing features, architectures and customer expectations, together with the interdependence of software and services, inevitably inflate the surface area that security has to cover.

When discussing security, we seem to fixate on encryption algorithms, passwords, access controls and the like. We obsess about the NSA’s ability to crack present-day encryption, the presence of backdoors and deliberately introduced weaknesses in the algorithms. We condemn the nexus between software/service providers and governments that gives the latter access to the data, systems and communications of customers. Because, frankly, that makes the discussion more exciting and newsworthy.

We want to assign blame to one clearly identifiable villain. In our zeal to do so, we all but ignore the large number of minor flaws in the decisions made by “standards” bodies, in the non-security-related code we write and in the organizational culture that operates the systems.

Heartbleed - What REALLY led to it?

It is convenient for us to blame the now-obviously-sloppy work of some overworked programmer for the Heartbleed flaw in OpenSSL. However, there is more to it than a single line of code at fault here. That line of code was just the straw that broke the camel’s back.

Protocol flaws

The SSL/TLS protocol is both sizable and overly complicated. The Heartbleed flaw happens to be in a rarely used extension of TLS. Specifically, the heartbeat mechanism is meant to keep an otherwise inactive connection alive. There are two issues here:

a) The heartbeat mechanism is an extension to the protocol - i.e. it is optional and, in reality, rarely used. Code added for a feature that was not really required in the first place ends up with lower in-practice test coverage. That is life in the software domain - features that seemed like a good idea during standards discussions turn out to be of little practical use to the majority of users.

b) The other, arguably bigger, flaw in the protocol is that the server must echo back, in its heartbeat response message, a client-supplied arbitrary data payload of a client-specified length. This is, in fact, a VERY poor design choice by the protocol designers, since it can be exploited and abused in a variety of ways.

Consider a few:

A malicious client can send a very large buffer and possibly crash the application, or mount a denial-of-service attack by consuming valuable memory and TCP resources on the server. Moreover, a specially crafted payload may even trigger other, unknown bugs in the SSL stack, the webserver or even the OS in the wake of such a crash.

Another attack vector is to send a smaller payload than the length field claims. This could lead to a crash or unexpected behavior in the SSL stack or web server. This, in fact, was the specific approach employed in the widely reported attacks on Heartbleed in OpenSSL.

What should be clear beyond a doubt is that the very nature of the protocol raises the possibility of similar security flaws in implementations other than OpenSSL.
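
For the curious, here is a rough C-style sketch of the heartbeat message as RFC 6520 defines it - simplified, and omitting the outer TLS record header. The point to notice is that both the payload and its claimed length are entirely sender-controlled.

#include <stdint.h>

/* Simplified C-style view of the RFC 6520 heartbeat message.
 * Field names follow the RFC; the outer TLS record header is omitted. */
typedef struct {
    uint8_t  type;            /* 1 = heartbeat_request, 2 = heartbeat_response */
    uint16_t payload_length;  /* length the sender CLAIMS for the payload      */
    /* uint8_t payload[payload_length];  arbitrary bytes chosen by the sender  */
    /* uint8_t padding[16 or more];      random padding, ignored by the peer   */
} heartbeat_header;

/* The receiver is required to echo payload_length bytes of payload back in its
 * response. Nothing in the protocol ties payload_length to the number of bytes
 * actually received - that sanity check is left entirely to the implementation. */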

Programming Error

The reason OpenSSL is in the news is that it does not anticipate and handle the above-mentioned attack. The library, widely used across websites on the internet, simply assumes that the supplied payload is at least as large as the length stated in the message. As a result, it reads an arbitrary chunk of adjacent memory while responding to malformed requests. As luck would have it, under the right conditions, it ends up reading and returning very sensitive data such as the server’s private key, authentication tokens and so on.
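
To make the failure concrete, here is a minimal C sketch of the bug class - not the literal OpenSSL source, just the same flawed pattern in simplified form, with hypothetical names (build_heartbeat_response, record, record_len) standing in for the real routine and the decrypted heartbeat record:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the Heartbleed bug class - NOT the actual OpenSSL code. */
unsigned char *build_heartbeat_response(const unsigned char *record,
                                        size_t record_len, size_t *out_len)
{
    const size_t padding = 16;

    if (record_len < 3 + padding)              /* too short to be a valid heartbeat */
        return NULL;

    uint16_t payload = (uint16_t)((record[1] << 8) | record[2]); /* claimed length */
    const unsigned char *pl = record + 3;                        /* payload start  */

    /* THE FIX (added after disclosure): discard requests whose claimed length
     * exceeds what was actually received. The vulnerable code had no such check. */
    if ((size_t)3 + payload + padding > record_len)
        return NULL;

    unsigned char *resp = calloc(1, 3 + payload + padding);
    if (resp == NULL)
        return NULL;

    resp[0] = 2;                 /* heartbeat_response           */
    resp[1] = payload >> 8;      /* echo the claimed length back */
    resp[2] = payload & 0xff;

    /* THE BUG: without the check above, this copies `payload` bytes starting at
     * `pl` even if the client sent only one byte - leaking up to ~64 KB of
     * whatever happens to sit next to the record in memory. */
    memcpy(resp + 3, pl, payload);

    /* (The real code fills the trailing padding with random bytes; zeroed here.) */
    *out_len = 3 + payload + padding;
    return resp;
}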

Wow! All other security measures by the site admin fall by the wayside. She can apply all recommended firewall rules, turn off unwanted ports, patch her webserver, ensure her code is safeguarded against SQL injection, password-protect her private keys, salt user passwords, change her admin password frequently - all of it and more. Obviously, the server has to decrypt and hold the private keys in memory in order to use them. To steal the private key, a client on the internet simply has to send a TLS heartbeat request with a small payload while claiming it to be larger than it really is, as illustrated below.
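
As an illustration (the byte values here are chosen purely for the example), the body of such a malicious heartbeat message can be as small as four bytes:

/* Illustrative only: the body of a malicious heartbeat request.
 * It carries a single payload byte but claims a 16,384-byte payload,
 * so an unpatched server echoes back ~16 KB of its own memory. */
static const unsigned char malicious_heartbeat[] = {
    0x01,        /* type: heartbeat_request                            */
    0x40, 0x00,  /* payload_length: 0x4000 = 16384, far more than sent */
    0x42         /* the only payload byte actually transmitted         */
};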

Can you blame just OpenSSL? Sure, it failed to apply a basic bounds check in code, but the protocol provided the means for that failure to result in a much larger disaster.

Lessons from Heartbleed

While the OpenSSL flaw has undeniably led to major damage on the internet, there are bigger lessons to draw from Heartbleed.

1. Not just encryption algorithms - seemingly unrelated code such as memory management, file handling and data structures can have huge security implications.

2. Application protocols must be designed with a security focus.

3. On balance, though it may be easier to exploit open-source software, that very openness also helps weed out security issues.

What does this mean for Organizations?

This bug has opened up user credentials and private keys to potential abuse by hackers. Enterprises can potentially see unauthorized access to their servers. Hackers can also exploit the compromised keys to gain access to enterprise IP and steal enterprise data. One of the biggest worries, however, is future abuse: hackers can plant Advanced Persistent Threats on network devices now and exploit enterprise resources even after the organization has patched its servers and renewed its security certificates.

The Internet has evolved to support banking, commercial and corporate transactions, and this evolution has exposed various security vulnerabilities. We have seen strategies such as SSL, digital certificates and IPSec used to patch the security weaknesses of the public internet. Enterprises have also bought into the idea of multi-factor authentication with one-time passwords and biometrics. But these solutions are either not scalable or are cost-intensive, and they have not seen widespread use. Moreover, technologies like server-side digital certificates cannot prevent phishing attacks, since the end user does not know what to check in a digital certificate or how to check it (certificates are validated by the browser, not authenticated by the user).

As Heartbleed has demonstrated, these measures are not foolproof in the absence of mutual authentication and remain open to MITM attacks. Even two-way SSL, with certificates exchanged on either side, is not true mutual authentication. The need of the hour is true mutual authentication that ensures the end user and the server authenticate each other every time they start an interaction, putting enterprises back in control of whom they are interacting with.

Finally, it goes without saying that enterprises need to re-evaluate their security infrastructure and strategy periodically and keep their software deployments up to date.