Linked by Thom Holwerda on Tue 14th Dec 2010 23:55 UTC, submitted by Oliver
OpenBSD Okay, this is potentially very big news that really needs all the exposure it can get. OpenBSD's Theo de Raadt has received an email in which it was revealed to him that ten years ago, the FBI paid several open source developers to implement hidden backdoors in OpenBSD's IPSEC stack. De Raadt decided to publish the email for all to see, so that the code in question can be reviewed. Insane stuff.
Thread beginning with comment 453590
james_parker
Member since:
2005-06-29

If you are scared or concerned by this article and what may be lurking in Open Source software, you must read this and think about its ramifications:

http://cm.bell-labs.com/who/ken/trust.html

This technique, which was actually implemented in early Unix systems (which were distributed with all the source code), allows for back doors to be included in source-based projects, while being invisible in the source code itself!

That means, for example, that your gcc compiler may have this embedded in it; you can thoroughly review every line of that source code and find nothing, but if you recompile it from source (to make sure that nothing is hidden in the binary), the back door is re-inserted into the new executable.
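The mechanism can be sketched in miniature. This is a hypothetical toy model (all names and strings are illustrative, not anything from a real compiler): a "binary" is just a Python function mapping source text to object text.

```python
# Toy model of the Thompson "trusting trust" trick.
LOGIN_SRC = 'int login(char *u, char *p) { return check(u, p); }'
COMPILER_SRC = '/* clean compiler source: nothing suspicious here */'

def clean_compile(src):
    """An honest compiler: a straight translation of its input."""
    return 'OBJ:' + src

def trojaned_compile(src):
    """The backdoor lives only in this binary, never in COMPILER_SRC."""
    if src == LOGIN_SRC:
        # Recognise the login program and implant a master password.
        return clean_compile(src.replace(
            'check(u, p)', 'check(u, p) || !strcmp(p, "magic")'))
    if src == COMPILER_SRC:
        # Recognise the compiler's own source and re-insert the trojan,
        # so rebuilding from clean source changes nothing.
        return 'OBJ+TROJAN:' + src
    return clean_compile(src)

# Reviewing COMPILER_SRC finds nothing, yet recompiling it with the
# trojaned binary reproduces the trojan, and login gets the backdoor:
assert 'TROJAN' in trojaned_compile(COMPILER_SRC)
assert 'magic' in trojaned_compile(LOGIN_SRC)
```

The key point the toy captures: the clean source text never contains the backdoor, so source review alone can never find it.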

BTW, the original Unix hack was finally discovered by a student who was using dbx (which wasn't part of the original Unix system -- it was "brand new code") to debug the login command, and discovered an odd "jump" in the program counter.

This technique may be made more complicated (and harder to find) by incorporating the same form of "hiding hack" in the available debuggers and assemblers as well.

Given today's revelation, I would not be surprised to find this has already been done in common Open Source software.

Reply Score: 9

Eugenia Member since:
2005-06-28

Agreed. I first heard of the compiler-as-spyware trick about six years ago. Ultimately, even if your code is clean, the compiler you're using to compile it might not be. This is the TRUE threat.

And while I do love me some conspiracy theories, if I were the FBI/NSA/CIA, I'd do just that too. It just makes sense from the point of view of an insecure agency...

Edited 2010-12-15 00:50 UTC

Reply Parent Score: 4

JoeBuck Member since:
2006-01-11

It is easy to prove that gcc does not have the Thompson hack. (Technically, the proof shows either that gcc doesn't have the hack or else all C compilers have the identical hack).

gcc is built using a bootstrapping process. First, gcc is built from its source code (written in C) using whatever compiler you have. Then the compiler is built again, using itself. As a check, the compiler is built a third time with itself and the object code is compared between the stage 2 build and the stage 3 build. It must be byte-for-byte identical or the test fails.
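The fixed-point idea behind that check can be shown with a hypothetical toy model (names are illustrative): represent a compiler binary by the bytes its builder's code generator emitted, while an honest binary's *behaviour* depends only on the source it was built from.

```python
# Toy sketch of the three-stage bootstrap comparison.
GCC_SRC = 'gcc-source'

def cc_codegen(src):
    """Code generation of the pre-existing system compiler."""
    return 'cc-bytes:' + src

def gcc_codegen(src):
    """gcc's own code generation, as specified by GCC_SRC."""
    return 'gcc-bytes:' + src

# Stage 1: gcc built by the system compiler. Its bytes bear the system
# compiler's code generation, but it behaves as GCC_SRC dictates.
stage1 = cc_codegen(GCC_SRC)
# Stage 2: gcc rebuilt by stage 1, which behaves like gcc_codegen.
stage2 = gcc_codegen(GCC_SRC)
# Stage 3: gcc rebuilt by stage 2 -- the self-hosting fixed point.
stage3 = gcc_codegen(GCC_SRC)

assert stage1 != stage2   # different code generators produced the bytes
assert stage2 == stage3   # byte-for-byte identical, or the build fails
```

Stage 1 may legitimately differ from stage 2, because a different code generator produced it; the check is that stage 2 and stage 3 converge.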

Furthermore, you can show (and people have shown) that you get identical results whether you start from Sun's compiler, various older versions of gcc, or a number of other compilers. If the Thompson hack were present, building from source with a compiler containing the hack would give different results than building with one that doesn't.

Reply Parent Score: 9

Eugenia Member since:
2005-06-28

I'm sure the hack has evolved since 1984, the year Thompson described the trick in his Turing Award lecture.

Edited 2010-12-15 01:19 UTC

Reply Parent Score: 2

reez Member since:
2006-06-28

Technically, the proof shows either that gcc doesn't have the hack or else all C compilers have the identical hack.

Oh no, all compilers are malicious! ;)

What about hardware or firmware backdoors?
The US has already been wary of Asian hardware for exactly this reason, which could be telling. If the US believes hardware from China could contain backdoors, that's presumably because the US has at least been thinking along the same lines, right?

Reply Parent Score: 4

james_parker Member since:
2005-06-29

It is easy to prove that gcc does not have the Thompson hack. (Technically, the proof shows either that gcc doesn't have the hack or else all C compilers have the identical hack).


Actually, the proof is not nearly that strong. Rather than requiring all C compilers to have it, only the set of C compilers on which this test was tried and passed must have it. Now, if a new C compiler with a clean-room design and test suite were written and the test passed, that would dramatically increase confidence (though imperfectly, since there may be some structural indication that this is a C compiler, which an infected "bootstrapping" compiler could detect in order to propagate the hack). Also, libraries, assemblers, parser generators, etc., must be checked as well.

Given sufficient resources it could be made increasingly difficult to detect; however, the US federal government (FBI, CIA, NSA) would be one of the very few entities -- if not the only one -- with the resources to do it. Further, the cost of doing so would be far higher than the cost of detecting it.

Edited 2010-12-15 01:26 UTC

Reply Parent Score: 2

Kebabbert Member since:
2007-07-27

It is easy to prove that gcc does not have the Thompson hack. (Technically, the proof shows either that gcc doesn't have the hack or else all C compilers have the identical hack).

Interesting, do you have links on this? I want to learn more. Who showed this? Where can I read?

Reply Parent Score: 3

vivainio Member since:
2008-12-26

It is easy to prove that gcc does not have the Thompson hack. (Technically, the proof shows either that gcc doesn't have the hack or else all C compilers have the identical hack).


I don't see the proof.

The example hack shows how the compiler injects malicious code into the "login" program. If gcc is not the "login" program, nothing would be detected.
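The objection can be made concrete with a hypothetical toy sketch (names are illustrative; a compiler binary is modeled by its code generation): a trojan that propagates itself perfectly still yields byte-identical stage 2 and stage 3 builds, so the bootstrap comparison alone cannot see it.

```python
GCC_SRC = 'gcc-source'
LOGIN_SRC = 'login-source'

def trojaned_codegen(src):
    """Code generation of an infected gcc binary."""
    if src == GCC_SRC:
        return 'gcc-bytes:' + src + '+trojan'    # re-insert itself
    if src == LOGIN_SRC:
        return 'gcc-bytes:' + src + '+backdoor'  # the real payload
    return 'gcc-bytes:' + src

# Stage 2 and stage 3 are both built by infected binaries that behave
# identically, so the byte-for-byte comparison still passes:
stage2 = trojaned_codegen(GCC_SRC)
stage3 = trojaned_codegen(GCC_SRC)
assert stage2 == stage3
```

This is why the stronger part of the argument rests on starting the bootstrap from several independent compilers, not on the three-stage check by itself.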

Reply Parent Score: 2

Delgarde Member since:
2008-08-19

If you are scared or concerned about this article and what may be in Open Source software, you must read this, and think about its ramifications:

http://cm.bell-labs.com/who/ken/trust.html


A well-known story, but not as easy to pull off as you might think. The original relies on special code in the compiler binary which a) recognises when it's compiling itself (to re-inject the special code), and b) recognises when it's compiling the login program (to implant the back door).

Thing is, this *does* rely on that code being terribly clever. To work reliably, the compiler must recognise not only itself but also future versions of itself. It needs to handle cross-compilation, e.g. an x86 compiler producing x86_64 output. And it needs to recognise when it's compiling other compilers, e.g. gcc compiling Clang/LLVM or vice versa. It also needs to contain no bugs, lest it attract attention when it goes wrong.

Now, how many people do you think there are who could write code that clever to start with, *and* do so in such a way that it would never be noticed by any of the other smart people?

Reply Parent Score: 2

Bill Shooter of Bul Member since:
2006-07-14

Now, how many people do you think there are who could write code that clever to start with, *and* do so in such a way that it would never be noticed by any of the other smart people?


More than zero.

Reply Parent Score: 4

jrincayc Member since:
2007-07-24

David A. Wheeler shows how this can be countered if you have a second, independent compiler available. The full details are in "Fully Countering Trusting Trust through Diverse Double-Compiling":
http://www.dwheeler.com/trusting-trust/dissertation/html/wheeler-tr...
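The core of the technique can be sketched as a hypothetical toy (names are illustrative; a compiler binary is modeled by its code generation): rebuild the suspect compiler's source with an independent trusted compiler, then compare the self-regenerated binaries byte for byte.

```python
GCC_SRC = 'gcc-source'

def honest_gcc(src):
    """The behaviour GCC_SRC actually specifies."""
    return 'gcc-bytes:' + src

def trojaned_gcc(src):
    """A suspect binary that re-inserts a trojan when compiling itself."""
    if src == GCC_SRC:
        return honest_gcc(src) + '+trojan'
    return honest_gcc(src)

def ddc_passes(suspect):
    # Step 1: compile GCC_SRC with the independent trusted compiler.
    # The trusted compiler has no trigger for GCC_SRC, so the result
    # behaves exactly as GCC_SRC specifies: honest_gcc.
    stage1 = honest_gcc
    # Step 2: use that binary to rebuild gcc from the same source, and
    # compare the bytes with what the suspect binary produces.
    return stage1(GCC_SRC) == suspect(GCC_SRC)

assert ddc_passes(honest_gcc)        # clean binary: the bytes match
assert not ddc_passes(trojaned_gcc)  # trojan exposed by the mismatch
```

The trusted compiler need not be bug-free or fast, only free of a trigger for this particular source; any mismatch then points at the suspect binary.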

Reply Parent Score: 5