This paper was written by Ken Thompson around August 1984. Ken Thompson is the co-creator of UNIX: “You can’t trust code that you did not totally create yourself. No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.”
Since this was written back in 1984, how do we know that Ken Thompson still expresses the same level of paranoia today as he did back then? People change their minds. It’s quite possible that he has developed a level of faith in his source code, compiler, linker-loader, binaries and the systems they run on that he didn’t have back in 1984. Perhaps he has become completely distrustful of computers, now lives in Sri Lanka, and couldn’t care less about his or anybody else’s code. How reflective should we be?
The software model back then was entirely different. There were “many eyes” that occasionally peered into the code, but there were not “many eyes” that constantly laboured on all facets of the code itself.
Basically, this just hearkens back to the Cathedral-style coding days. We see an entirely different picture with the Bazaar model.
You can’t trust code that you did not totally create yourself.
I’ve actually seen this argument used to attack the security of FOSS software. Funny, isn’t it?
It will get on all your disks
It will infiltrate your chips
Yes it’s Cloner!
It will stick to you like glue
It will modify ram too
Send in the Cloner!
Funny, isn’t it?
I don’t think so. It still amounts to who you can trust, source or not. Having the source without auditing it or writing it yourself is no different from having only binaries.
Basically, this just hearkens back to the Cathedral-style coding days. We see an entirely different picture with the Bazaar model.
It still amounts to whether you can trust the “Bazaar”.
Anyways, this is interesting:
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0635.html
I know that I can trust Open Source software because I review every line that is compiled, as well as monitoring the assembly output of the initial compilation of GCC. I don’t know why everyone else doesn’t do the same, otherwise you can’t claim to have a secure computer. I guess a lot of people use Linux just because it is a “fad”, not to take advantage of the state of the art security and reduced ping times made available by such an amazing operating system.
I know that I can trust Open Source software because I review every line that is compiled, as well as monitoring the assembly output of the initial compilation of GCC.
lmao
I know that I can trust Open Source software because I review every line that is compiled
Really? That’s an awful lot of lines!
“how do we know that Ken Thompson still expresses the same level of paranoia today as he did back in 1984?”
Paranoid is right. If a person is going to think like this, how could they ever trust any software? ’Cause you never know when THEY are out to get you….
If you _do not_ trust code out in the open, How can you ever trust code you _cannot_ see?!?
If it is more profitable for a company to break the law they will do so.
> If you _do not_ trust code out in the open, How can
> you ever trust code you _cannot_ see?!?
You cannot trust code you did not write yourself. Wasn’t that what Ken said? (Sorry, I can’t RTFA because of the $&%&% firewall here…)
This is classic “FUD” people… Wake Up…
If you _do not_ trust code out in the open, How can you ever trust code you _cannot_ see?!?
If you’re going to make this about open vs closed source and saying “FUD”, I’m afraid you’ve missed the point. It’s just saying there are no guarantees, and ultimately, you’re on your own, whatever route you take. Security is Obscurity, DIY, or Blind Faith.
…I decided to ditch my computer and become Amish. I guess it’s a revelation that there’s no absolute security in anything that you do, including computing.
You people missed the whole point altogether; it went right over your heads. The point is that you cannot assume code written by a third party is harmless. The Linux community takes code on blind faith, and not everyone looks at security. In 2003 Linux was the most hacked OS; that’s a far cry from 1996, when Linux advocates were stating that Linux was “hacker proof”. You cannot assume that everyone who contributes to any Open Source project is a 100% honest person, just as Microsoft, SCO, Sun and HP cannot assume that all the developers they hire are trustworthy.
I’d rather trust the unread code from the open source community than code I’ve written myself, really, since most of them are a lot better at writing secure software than I am.
I could not read the article because of the firewall here at work. I did find a Ken Thompson article at http://www.acm.org/classics/sep95/ that has the same remarks about how you can’t trust code, and that I could read here at work. Is this the same article?
http://www.acm.org/classics/sep95/
Is this the same article?
Yep. It’s the same article.
In 2003 Linux was the most hacked OS, thats a far cry from 1996 when Linux advocates were stating that Linux was “hacker proof”.
Oh please .. if you’re going to march out this drivel again, clarify whether you are talking about kernel exploits, 3rd-party apps, or maybe even distro security advisories (some don’t even have known exploits; they just find something awry and post a fix).
Until you show your so-called proof, please let the grown-ups talk, eh?
As for the article .. it’s all about common sense. Would you trust a complete stranger to come into your house and play on your computer for a few hours, without any reservations?
This thing about trust and security is not black and white. It’s not that you either trust some system completely or not at all.
Like some wise people have said, security is not so much something you can reach once and then forget, but security is a continuing process.
Any system can be cracked, at least if the administrators are not careful enough. But that doesn’t mean some systems, software and IT practices aren’t much more secure than others. A good default OpenBSD system is much, much more secure than a default installation of Windows 95, for example. The same goes for source code, package management, CVS, etc. Security and trust are good goals even though there may always be weak points. The smaller those weak points are, the more difficult things get for a potential cracker. And if the weak points are small enough compared to the cracking effort needed and the possible gains, it becomes less likely that anyone will even try to crack the system, let alone pose a serious threat. So I see no reason for pessimism or fatalism here.
“Anyways, this is interesting:
http://www.ussg.iu.edu/hypermail/linux/kernel/0311.0/0635.html“
Someone tried to inject a trojan and tripped the alarm system. Very interesting? If you wanna give examples, I can do so too, with even more spectacular trojans. Borland, gotta love ’em. Hmmm, NSA and Windows 98? At least SELinux can be audited.
Commenting my own text…=)
[i]A good default OpenBSD system is much, much more secure than a default installation of Windows 95, for example.[/i]
Actually I should have written that even the default installation of a version of OpenBSD several releases old is much, much more secure than Windows 95 even with all its official security and other updates…
Now that Karig pointed out the mirror, a message to those here claiming that Open Source is better because you can review the source code: read the article again!
This is the (in-)famous Thompson Trojan (often misattributed to Kernighan). You could look at the source code all day and would not find a trace of it!
Did you build your Linux from source? Did you bootstrap the gcc with which you did so? Did you bootstrap the compiler with which you bootstrapped your gcc? Did you bootstrap the assembler with which you bootstrapped that compiler, typing in hex codes? What editor did you use to do so?
There is no security, period.
On the other hand, you can cross-compile compilers with other compilers, and then use that compiler to cross-compile another compiler. And so on. It is very unlikely that the bug will be able to deal with all that.
Here’s how the trojan works:
The login program is trojaned with a hidden account with root access. If you find out about it, it’s a matter of fixing the code to login and recompiling, right?
But the compiler is bugged too, and if it finds out it’s compiling login, it will reinsert the hidden account. Say you find this out too, and decide to recompile the compiler. As it turns out, the compiler knows when it’s compiling itself, and can insert the bug that inserts the bug in login.
The only way out is to write a compiler from scratch, in executable form, in hex.
This is the (in-)famous Thompson Trojan. You could look at source code all day and would not find a trace of it!
There is no security, period.
As to the Kernighan/Thompson Trojan:
Linux Weekly News has a good comment thread on the subject and some examples presenting how even that sort of security and Trojan problems can be fixed:
http://lwn.net/Articles/79801/
There may be no 100 % sure security, but so what? There sure is always more security or less security. All security issues can mostly be fixed, more or less, if there’s enough will, work and wisdom.
Security is an illusion brought about by raising the cost of something beyond its value.
Often today’s free software code is digitally signed using PGP or GPG. These signatures provide very strong protection from tampering.
This fact needs to be put before the public more often.
Here’s what the document “Attacking Malicious Code” http://www.cs.cornell.edu/Info/People/jgm/lang-based-security/malic… has to say about the Thompson Trojan issue:
“Thompson’s compiler trick (a famous Trojan Horse to be built into the C compiler that made use of the login program [Tho84]) is an example of incorrect enforcement of safety policy. In this case, implicit policy assumes both that the compiler properly produces object code from source code and that the login program requires a correct password to be entered. In fact, the developer of the compiler can circumvent this implicit policy (which is not enforced technologically).”
Some people here are missing the point. His point had nothing to do with open versus closed source. His point was that really, you can never fully trust code.
The reason is that at any point of the chain from your program all the way back to the CPU microcode could have, for example, a trojan.
Even with Gentoo and building everything from source you are trusting the first compiler. If it is introducing trojans into the code as it builds things, now you have code with trojans. You may know the source for what you’re building forwards and backwards, but the compiler itself could be interjecting things you aren’t aware of. Of course, you could disassemble the resultant code, but now you’re trusting the disassembler. Did you write the disassembler? What compiler did you use to compile it?
It is indeed a vicious cycle of paranoia, but it is a valid concern.
Funny you should mention the Amish. They don’t directly use technology, but they commonly pay other people to do so. So by one level of indirection they are trusting untrusted code.
As a side note, it is rather interesting that an Amish person won’t touch a computer to type a document into MS Word, but they will pay someone to do that.
I would argue that Ken wasn’t speaking about network security so much as about bugged code in general. The fear is bugs that trace back to code you don’t know how to write. Take his example: you are writing a C compiler in C and must add a new escape character, but you have no clue how to code in any language lower than C, so how will you tell the compiler what to do with this new escape ‘p’? (Dunno if it exists, but we’ll call it the “pee” character; it causes a BSOD.)
I don’t think this article was meant as a rebuttal of open source. It’s a good article, thanks OS news.
“Since this was written back in 1984, how do we know that Ken Thompson still expresses the same level of paranoia today as he did back in 1984? People change their minds.”
http://www.cs.bell-labs.com/cm/cs/who/ken/
A level of faith?
Re-read.
Understand.
Further:
http://www.sans.org/rr/catindex.php?cat_id=36
http://research.lumeta.com/ches/secure/index.html
http://research.lumeta.com/ches/talks/index.html
http://www.tracking-hackers.com/papers/berferd.pdf
http://www.daedalus.co.nz/~don/cuckoo.html
Often today’s free software code is digitally signed using PGP or GPG. These signatures provide very strong protection from tampering.
And what if the trojans are in the original code? And what if the compiler adds the trojan?
[i]Someone tried to inject a trojan and hit the alarm system. Very interesting? If you wanna give examples, i can do so too, with even more spectacular trojans. Borland — gotta love ’em. Hmmm. NSA and Windows 98? At least SELinux can be audited.[/i]
Yes, I know all about ’em. But tit-for-tat doesn’t solve or address anything. For some reason you’ve interpreted this as an attack on open source, and if that’s the case, you’ve missed the point.