Linked by Thom Holwerda on Tue 8th Apr 2014 22:06 UTC
Privacy, Security, Encryption

Heartbleed, a long-undiscovered bug in cryptographic software called OpenSSL that secures Web communications, may have left roughly two-thirds of the Web vulnerable to eavesdropping for the past two years. Heartbleed isn't your garden-variety vulnerability, so here's a quick guide to what it is, why it's so serious, and what you can do to keep your data safe.

Serious.

The wonders of buffer overruns in C
by moondevil on Wed 9th Apr 2014 06:57 UTC

PL/I and Mesa had bounds checking, but the world decided to go with C instead, so keep having fun.

http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-...
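
To make the point concrete, here is a minimal hypothetical reduction of the bug class (invented names and sizes, not the actual OpenSSL code; the link above walks through the real thing):

/* Hypothetical sketch of a Heartbleed-style over-read: C will copy
 * however many bytes the attacker-supplied length field claims,
 * because nothing in the language checks it against the buffer. */
#include <stdio.h>
#include <string.h>

struct record {
    unsigned short claimed_len;  /* attacker-controlled length field */
    char payload[16];            /* the buffer is really only 16 bytes */
};

static void echo_payload(const struct record *r, char *out)
{
    /* BUG: trusts claimed_len. The missing check C never forces:
     * if (r->claimed_len > sizeof r->payload) return; */
    memcpy(out, r->payload, r->claimed_len);
}

int main(void)
{
    struct record r = { .claimed_len = 64, .payload = "hello" };
    char out[256] = { 0 };

    echo_payload(&r, out);  /* quietly reads 48 bytes past the payload */
    printf("copied %u bytes from a 16-byte field\n", (unsigned)r.claimed_len);
    return 0;
}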


henderson101

Borland Pascal did too... a lot of non-C compilers do.


moondevil

Sure, you're right.

I just wanted to mention one programming language older than C and another of the same age, to point out that the C language designers ought to have known better.


acobar

Perhaps they went with "fast" over "code correctness"? The problem is that, in very complex scenarios, not having bounds checking intrinsic to the language proved to be a bad choice for general development.


Soulbender

It's more that the OpenSSL team made some terminally stupid decisions:
http://thread.gmane.org/gmane.os.openbsd.misc/211952/focus=211963


cfgr

Indeed, I was about to post the same link. Some systems have these checks in place but when people actively work around them...

If someone really wants to shoot himself in the foot, there's only so much you can do to stop him. Unfortunately, they shot pretty much everyone in the foot with this one. Hopefully this will cause them to reconsider some of their practices.


Alfman

Soulbender,

Having profiled the standard malloc myself on Linux+GCC, I actually believe both points are correct. There are very real performance gains to be had by putting a caching mechanism in front of the allocator, especially when the cache lives in thread-local storage.
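
A minimal sketch of the kind of cache I mean (all names and sizes invented for illustration; OpenSSL's real freelist code is more involved):

/* Minimal sketch of a thread-local freelist cache in front of malloc(). */
#include <stdlib.h>

#define BLOCK_SIZE 1024               /* a single size class, for brevity */

struct node { struct node *next; };

static _Thread_local struct node *freelist = NULL;  /* no locking needed */

void *cached_alloc(void)
{
    if (freelist) {                   /* fast path: pop a recycled block */
        struct node *n = freelist;
        freelist = n->next;
        return n;
    }
    return malloc(BLOCK_SIZE);        /* slow path: hit the real allocator */
}

void cached_free(void *p)
{
    /* Recycled blocks never reach free(), so the system allocator's
     * sanity checks (double-free detection, poisoning, etc.) never
     * run on them; that is the trade-off described below. */
    struct node *n = p;
    n->next = freelist;
    freelist = n;
}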


For a project like OpenSSL, which is both performance sensitive and security sensitive, these can be divergent goals. Optimizing malloc wouldn't ordinarily cause bugs on its own; however, if it eliminated some of the sanity checks, it might allow alloc/free/memory-leak bugs elsewhere in the code to go unnoticed and keep running instead of crashing.

In this case the ideal solution seems to be to compile these optimizations conditionally, and run both versions through a comprehensive barrage of unit test cases. I don't know much about OpenSSL's testing procedures, but the source code does reveal that the caching can be enabled/disabled conditionally by defining "OPENSSL_NO_BUF_FREELISTS", so it's possible OpenSSL is already doing the right thing.
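
Conceptually the switch looks something like this (the macro name is OpenSSL's; the helpers are the hypothetical ones from the sketch above, not OpenSSL's actual functions):

#ifdef OPENSSL_NO_BUF_FREELISTS
/* Plain allocator: slower, but every allocation passes through
 * malloc()'s own sanity checks. */
# define BUF_ALLOC()  malloc(BLOCK_SIZE)
# define BUF_FREE(p)  free(p)
#else
/* Freelist cache: faster, but recycled blocks bypass those checks. */
# define BUF_ALLOC()  cached_alloc()
# define BUF_FREE(p)  cached_free(p)
#endif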

This reminds me of the range checking directives in Pascal programs, {$R+} and {$R-}. The idea was a good one: if you develop your program with range checking turned on and it works, that increases your confidence that the code will still be correct with range checking turned off. The main caveat is that you get no assurances for code paths that were not adequately exercised while range checking was on.
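
The closest everyday C analogue is assert() with NDEBUG, which has exactly the same trade-off:

/* assert() is active during development and compiled out entirely by
 * building with -DNDEBUG, with the same caveat about untested paths. */
#include <assert.h>
#include <stdio.h>

static int get_elem(const int *buf, size_t len, size_t i)
{
    assert(i < len);   /* checked in debug builds; gone under -DNDEBUG */
    return buf[i];
}

int main(void)
{
    int data[4] = { 1, 2, 3, 4 };
    printf("%d\n", get_elem(data, 4, 2));  /* prints 3 */
    /* get_elem(data, 4, 9) would abort here in a debug build, but is
     * silent undefined behaviour once NDEBUG turns the check off. */
    return 0;
}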

