Linked by Thom Holwerda on Tue 8th Apr 2014 22:06 UTC
Privacy, Security, Encryption

Heartbleed, a long-undiscovered bug in cryptographic software called OpenSSL that secures Web communications, may have left roughly two-thirds of the Web vulnerable to eavesdropping for the past two years. Heartbleed isn't your garden-variety vulnerability, so here's a quick guide to what it is, why it's so serious, and what you can do to keep your data safe.


Thread beginning with comment 586871

The post on the mailing list mentions that it doesn't seem to be that easy. Also, here's a blog post of his; it seems to be worse than previously thought:

Ah, yes that would be another bug then.

The impact of malloc guarding on performance is minimal (according to the OpenBSD developers*); however, a lot of software misbehaves and would crash with it enabled, which is why it's disabled by default.

I don't know much about OpenBSD, so I'll try reading more on that (your link is interesting!). On Linux, freelists do improve on the performance of the standard glibc malloc even without guard pages. It was particularly slow in multithreaded processes, but that was around 2012-2013, so I should probably test it again.

However, it's not just about performance; it's also about granularity. Most allocated objects range from a few dozen bytes to a few hundred, and adding a guard page to each object would require one page for the data plus one page to trigger the fault. So a 100-byte structure would still require 8K of RAM (with 4K pages). This is way too inefficient for a production system, IMHO.

libgmalloc -- (Guard Malloc), an aggressive debugging malloc library
libgmalloc doesn't come without some weaknesses. First, because each allocation requires at least two pages of virtual memory, in 32-bit processes only about 500,000 malloc allocations could conceivably exist before the process runs out of virtual memory. The extravagant use of virtual memory will also cause much more swapping, so the program will run much slower than usual -- usually two orders of magnitude (100x).

Let's say OpenSSL did ship with guard pages by default. The person developing an exploit would have discovered that requests > 4K trigger the guard-page fault and tuned the exploit accordingly so as not to trigger it. I'll concede that <4K reads are less useful than 64K ones, but the attack might still be carried out without raising a red flag.

Edited 2014-04-10 21:35 UTC
