Linked by Thom Holwerda on Tue 8th Apr 2014 22:06 UTC
Privacy, Security, Encryption

Heartbleed, a long-undiscovered bug in cryptographic software called OpenSSL that secures Web communications, may have left roughly two-thirds of the Web vulnerable to eavesdropping for the past two years. Heartbleed isn't your garden-variety vulnerability, so here's a quick guide to what it is, why it's so serious, and what you can do to keep your data safe.

Serious.

Thread beginning with comment 586727
Comment by Gone fishing
by Gone fishing on Wed 9th Apr 2014 03:31 UTC
Gone fishing
Member since:
2006-02-22

I'm guessing this is going to be a pain, as patching will not be enough on its own. If the SSL keys have been compromised they will need to be regenerated, and since exploitation leaves no trace in the logs, you have no way of knowing whether your system was compromised - so you will need to regenerate the SSL keys regardless.
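The regeneration step might look something like this with the openssl CLI (a sketch only - the file names and the CN are placeholders, and the issuing/revocation steps depend on your CA):

```shell
# Hypothetical paths; adjust to your server's actual key/cert locations.
# 1. Generate a fresh 2048-bit RSA private key (the old one must be
#    assumed stolen, since Heartbleed exploitation leaves no log trace).
openssl genrsa -out server.key.new 2048

# 2. Create a new certificate signing request to submit to the CA.
openssl req -new -key server.key.new -subj "/CN=example.com" -out server.csr

# 3. After the CA issues the new certificate, swap it in and revoke the
#    old one. (Revocation happens through your CA's interface, not locally.)
```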

Reply Score: 4

RE: Comment by Gone fishing
by wigry on Wed 9th Apr 2014 10:53 in reply to "Comment by Gone fishing"
wigry Member since:
2008-10-09

Generating a new key is not the problem. Blacklisting the existing key is not the problem. Distributing the blacklists and making sure that EVERY application/appliance consults the blacklist before trusting the other party is the problem. There can be millions of keys that must be blacklisted. Perhaps it would be easier to blacklist the intermediate CA keys? But that would require all the previously issued certificates to be reissued as well. This is a massive issue, and many appliances use offline blacklists and might not have (easy) means to update the blacklist, or might not have enough storage available to hold a blacklist covering two-thirds of the internet's keys.

Also, I've seen programs where the CRL is not consulted due to performance requirements.

Therefore the man-in-the-middle attack has become an everyday reality, with stolen keys used to create fake sites.

Edited 2014-04-09 10:54 UTC

Reply Parent Score: 4

RE[2]: Comment by Gone fishing
by Alfman on Wed 9th Apr 2014 14:47 in reply to "RE: Comment by Gone fishing"
Alfman Member since:
2011-01-28

wigry,

"Generating new key is not the problem. blacklisting the existing key is not the problem. Distributing the blacklists and making sure that EVERY application/appliance consults the blacklist before trusting the other party is the problem."


Unfortunately it is a big problem. Public key encryption scales extremely well precisely because no communication is needed to validate a certificate; that improves not only validation speed but also reliability. The revocation protocol (OCSP), however, lacks this scalability, and because it is slow and unreliable, many browsers intentionally ignore non-responses.

http://news.netcraft.com/archives/2013/04/16/certificate-revocation...
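For the curious, here's roughly what an OCSP lookup looks like from the client side with the openssl CLI. This is a sketch: the certificate is a throwaway self-signed one fabricated solely to show where the responder URL lives, and ocsp.example.com is obviously not a real responder.

```shell
# The OCSP responder URL is embedded in the certificate itself (the
# authorityInfoAccess extension). Fabricate a demo cert carrying one:
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
    -subj "/CN=demo" -days 1 \
    -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.com"

# Extract the responder URL, as a client would:
openssl x509 -in demo.pem -noout -ocsp_uri

# A real revocation check would then query that responder (needs network
# access and the issuer's certificate):
#   openssl ocsp -issuer issuer.pem -cert cert.pem \
#       -url "http://ocsp.example.com" -noout
```

It's that extra round trip to the responder - per certificate, per client - that creates the scalability and reliability problem described above.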

Something I learned is that many browsers change their revocation-checking behavior depending on whether a cert is an EV cert or not. Some browsers don't even bother checking revocation on non-EV certs.

http://blog.spiderlabs.com/2011/04/certificate-revocation-behavior-...


Closing this hole completely means each and every SSL certificate needs to be untrusted until it has been validated against the CA's revocation database. It's doable, but it mostly negates the benefits of using PKI in the first place. And if every certificate has to be checked anyway, it logically makes more sense for the client to ask the CA whether the certificate is on the "good list" rather than whether it's on the "bad list".


There's a poorly supported hack in place today called "OCSP stapling", which lets the web server cache the (negative) revocation status and hand it to clients itself, reducing the need for clients to individually look up certificate revocation; that should dramatically lessen the bottleneck on a CA's servers. At this point, however, I'd argue we have a system of hacks built on top of hacks, and we'd be better off ditching the revocation infrastructure altogether and just using short-lived certificates. Not only is this approach simpler, it would be more secure too.
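The short-lived-certificate idea is easy to illustrate: if a cert is only valid for a few days, a stolen key expires on its own and no CRL/OCSP machinery is needed. A sketch with a self-signed throwaway cert (real deployments would automate reissuance through a CA):

```shell
# Issue a certificate valid for only 3 days. If the private key leaks,
# the exposure window closes by itself - no revocation check required.
openssl req -x509 -newkey rsa:2048 -nodes -keyout short.key -out short.pem \
    -subj "/CN=shortlived.example" -days 3

# Inspect the validity window:
openssl x509 -in short.pem -noout -dates
```

The obvious trade-off is operational: certs must be reissued and deployed every few days, so issuance has to be fully automated.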

Ultimately I will be glad if/when DNSSEC reaches a point where we will no longer have to deal with CAs at all.

Edited 2014-04-09 14:51 UTC

Reply Parent Score: 3