Trusted operating systems have been used for some time to lock down the most sensitive of information in the most sensitive of organizations. But with security concerns rising and changing by the hour, it’s now a matter of trust for any organization looking to tighten its computing ship. Several vendors, including Red Hat, Sun Microsystems and Novell, are responding by adding and/or improving trusted elements in their operating system offerings.
I trust open source software only. How can I know that a closed-source software provider doesn’t ship some kind of backdoor or logic bomb in its products to control my information? So I can only trust the people who provide me the source code for scrutiny. That, along with good security defaults, rootkit detection, good firewall rules, chroot-jailed services, MD5 checking of binaries, intrusion detection and hardware data execution prevention, should work for most cases. The real challenge is configuring all of this the right way.
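Since the poster mentions MD5 checking of binaries, here is a minimal sketch of what that amounts to in practice (the expected digest and file path are placeholders; in reality you would take the digest from the vendor’s published checksum list, obtained over a separate channel):

```python
import hashlib
import sys

# Placeholder digest: in practice, take this from the vendor's published
# checksum list, fetched over a channel separate from the download itself.
EXPECTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"

def md5_of(path, chunk_size=65536):
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = md5_of(sys.argv[1])
    if actual == EXPECTED_MD5:
        print("OK: checksum matches")
    else:
        print("MISMATCH: got %s" % actual)
        sys.exit(1)
```

The weak link, of course, is where the expected checksum comes from; if attackers control the download mirror, they may well control the checksum file too.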
And how do you know that the Open Source software you trust so much isn’t equipped with a Trojan Horse as part of the product? Do you perform a code analysis of all of your software to make sure it has not been modified in any way?
Solaris (including Trusted Solaris) has been used by Governments and corporations for years in high security environments, and I have yet to hear about an exploit for Trusted Solaris.
“And how do you know that the Open Source software you trust so much isn’t equipped with a Trojan Horse as part of the product? Do you perform a code analysis of all of your software to make sure it has not been modified in any way?”
Trust. Code availability makes one trust software significantly more than closed-source software. Where there are eyes looking at every piece of software, there is some kind of trust that people are not doing evil. It doesn’t matter at all if you actually look at it or someone else really does. The source code is there, and nobody wants their code to be banned forever. The community process allows for this. Perhaps you don’t usually trust people’s intentions when they do something laudable like publishing their work for free. But I certainly do.
“Solaris (including Trusted Solaris) has been used by Governments and corporations for years in high security environments, and I have yet to hear about an exploit for Trusted Solaris.”
That’s like saying there isn’t any exploitable bug in Trusted Solaris. How many people are actually looking at TS’s code, for sure?
By the way, I would trust trust. If there is an environment that is well known for people trusting it, then I could trust it too. But I prefer it if the source is available.
“It doesn’t matter at all if you actually look at it or someone else really does. The source code is there, and nobody wants their code to be banned forever. The community process allows for this.”
In other words, for every piece of open source software you own, you assume somebody else is looking at the code. Of course, everybody else probably assumes the same thing.
“Perhaps you don’t usually trust people’s intentions when they do something laudable like publishing their work for free. But I certainly do.”
Well, Opera is free, is it not? And more secure than Firefox anyway. I trust Opera. Not saying it’s any better or worse than Firefox, but I’ve never been afraid to use it.
Opera is free as in gratis; Opera is not free as in libre, which is the important one as far as code goes.
It does not matter if one person or a million people look at and analyse the code; if that one person finds a piece of malware in it, it will ruin the reputation of the author or the author’s firm.
You might trust all F/OSS programmers; I do not. I also don’t buy into the “more eyes” argument about F/OSS software, since you are depending on the skills of a variety of programmers with varying levels of emphasis on security. Not every F/OSS product is developed from the ground up with security in mind, nor is all of the code rigorously checked for potential vulnerabilities before release.
Considering who uses Trusted Solaris, I am sure Sun goes over the code very carefully. A Red Hat employee made a snide comment in a presentation to a group of people about the “slow” development cycle of Trusted Solaris, as if the development cycle made it an inferior product because Trusted Solaris was “only” at version 8 while Solaris 10 was in public beta. Unfortunately he made those comments in the presence of two experienced Solaris administrators (I was one of them; the other was an experienced Trusted Solaris administrator). Speed and security are not synonymous. We raised the bullshit flag and had the guy backpedalling in short order. What I thought was interesting is that he didn’t make the same distinction about Trusted AIX (which, compared to the shipping versions of non-Trusted AIX, was far behind). Rapid code changes do not contribute to enhanced security. And yes, we use Trusted Solaris.
I do trust reputation, a solid track record and a commitment to security. That is what my employers pay for when they purchase and use Trusted Solaris (and this would be no different if we were using Trusted AIX or IRIX).
I don’t recall seeing such fallacies being used to justify using open source software before. Let’s look at the statements:
“Trust. Code availability makes one trust software significantly more than closed-source software. Where there are eyes looking at every piece of software, there is some kind of trust that people are not doing evil.”
Code availability means that whoever wrote the code wants other people to have the chance to read it and possibly change it. It does not mean that trust is increased. Having more pairs of eyes look at a product’s codebase in no way means that you can trust that the authors of the code, or of changes to it, are not out to create problems (or “do evil”). More pairs of eyes, where those eyes are sufficiently trained and experienced, means that you have a better opportunity of getting meaningful code review.
“It doesn’t matter at all if you actually look at it or someone else really does.”
Actually, it does matter. If you’re going to play the “well, it’s more secure because more pairs of eyes look at it” card, then you must take some responsibility for doing so. That means that you, personally, have to back up your words with concrete code-review action.
“The source code is there, and nobody wants their code to be banned forever. The community process allows for this.”
True. There are several ways that you can have your code rejected by an Open Source community. Writing deliberately dangerous code is one of those ways. However, your point does not enhance your argument.
“Perhaps you don’t usually trust people’s intentions when they do something laudable like publishing their work for free. But I certainly do.”
Another red herring. I’ve seen plenty of code which has been made available under GPL or BSD or *mumble* verified open source licenses, and yet the intentions were not backed up by the actions. Unless you back up your intent with concrete actions, and actually audit the code which you say you trust, you have no basis for your claims.
For me, though, I trust three organisations to actually deliver a certifiably trusted OS: Sun, IBM and OpenBSD. Why? Because over the years those organisations have been proven through their actions to actually deliver on their promises. OpenBSD I can verify for myself. As for Sun and IBM… well, I have to trust that if they can achieve a US DoD certification level, then they are actually on the level.
========
To summarise: there are no angels and no faith when it comes to secure OS design and implementation; there is only proof through good works.
So what if you have a small open source project with very few eyes actually looking at the code vs. a large proprietary software product where there are tons of employees who see the code?
Which do you trust more? Surely the latter, because there are more eyes?
Do you think any remotely large closed source software is developed solely by employees who would turn a blind eye to something bad being put in the software?
I don’t. It’s a slight risk, but I haven’t had any problems yet.
I would trust a small open source project with a few eyes looking at it… except that it is open source, which means anyone and everyone can look at the code; therefore, in theory, billions of eyes can look at it.
I do not trust software from a closed source, even if they have hundreds of employees looking at the code, because, in effect, they are employees, and employees will turn a blind eye. Especially if they work for a malware company!
Those folks who say “I only trust code that I can see the source to” should have a serious read of Ken Thompson’s ACM Turing Award lecture titled “Reflections on Trusting Trust”. You can find a copy at http://www.acm.org/classics/sep95/
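For those who haven’t read it, the core of Thompson’s point can be sketched as a toy (this is an illustration of the idea only, not real compiler code): a trojaned compiler recognises the login program and inserts a backdoor, and recognises its own source and re-inserts the trojan.

```python
# Toy illustration of the "trusting trust" attack; not real compiler code.
# A trojaned compiler recognises two special inputs and sabotages both.

BACKDOOR = '\n    if user == "ken": return True  # silently injected backdoor'
TROJAN_NOTE = "\n# (trojan logic re-inserted into the new compiler binary)"

def trojaned_compile(source):
    """A pretend 'compiler': returns the 'binary' (here, transformed text)."""
    if "def check_password" in source:
        # Case 1: compiling the login program -> splice in a backdoor.
        return source + BACKDOOR
    if "def trojaned_compile" in source:
        # Case 2: compiling the compiler itself -> reproduce the trojan, so
        # a fully audited, clean compiler source still yields a dirty binary.
        return source + TROJAN_NOTE
    return source  # everything else compiles clean

clean_login = "def check_password(user, password):\n    return password_ok(password)"
print(trojaned_compile(clean_login))  # backdoor appears, yet no source file contains it
```

The punch line of the lecture is that case 2 lets the trojan survive even after every line of the compiler’s source has been audited, which is exactly why “I can read the source” is not the end of the trust story.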
Trusting software is a much bigger picture than checking for malicious code. There are whole swags of issues that various operating systems have done right and wrong over the years.
OK, I have a Solaris bias (they pay my paycheck), but some of the things that I am starting to see with least privilege are really worthwhile.
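To make “least privilege” concrete, here is a rough generic POSIX sketch (an illustration only, not Solaris’s privileges(5) mechanism; UID/GID 65534 for “nobody” is an assumption, and a real service would use a dedicated account):

```python
import os
import socket

# Generic POSIX sketch of privilege dropping; UID/GID 65534 ("nobody") is
# an assumption, and a real service would use a dedicated account instead.
UNPRIVILEGED_UID = 65534
UNPRIVILEGED_GID = 65534

def start_server():
    # Do the one operation that actually needs root: bind a port below 1024.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))
    sock.listen(5)

    # Then permanently shed root before handling any untrusted input.
    if os.getuid() == 0:
        os.setgroups([])                 # clear supplementary groups
        os.setgid(UNPRIVILEGED_GID)      # group first, then user
        os.setuid(UNPRIVILEGED_UID)

    # From here on, a compromise of the server code no longer yields root.
    return sock
```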
There’s also the non-software concept of trusted environments, document classification/protection/auditing etc.
Alan.
It’s easy for me, since I’m so simple-minded :), and it sounds like this: I don’t trust an OS if it comes from an untrustworthy company (and that judgement is made after some years). The word “trust” is used too much and too often, which smells like pure marketing more often than not. It’s not an appended word that will make a product trustworthy, and it’s not a PR campaign that will make the parent company trustworthy, but practice, experience and long-term actions. Also, I won’t trust companies which try to feed us chipped hardware or rights-crippled software under names like trusted hardware, trusted software or trusted computing, since “trusted” means something other than what they want it to mean.
Having source code available is proof that the developers do not have anything to hide.
Why don’t big corporations open up their source code? They have piles of patents, they have copyright over the code, and they have the money to sue anybody who would dare to abuse it. So why not?
However, trust is one thing for people and another for corporations, because everybody should know that they have different interests guided by different values (in the case of corporations, actually only one value: money).
–sadyc
> Having source code available is proof that the developers do not have anything to hide.
In times past, this was the main reason HIS (hospital information system) OSs and apps included full sources.
Hiscom (later Torex, now iSOFT) did so with their BOS.
“Do you think any remotely large closed source software is developed solely by employees who would turn a blind eye to something bad being put in the software?”
Yes, actually, there’s enough spyware out there to make me think that a lot of employees are just doing what they’re told.
No, I don’t think there’s anything as blatant as that coming from IBM or Sun (or even Microsoft), but I do think that a lot of the time the noble motive is ignored in favour of what the stuffed shirts order. And if the choice is losing your job, well….
In order for software to be completely trustworthy to me, the source code has to be available. I can’t think of any companies that I trust to take their word on it, especially if they won’t put their money where their mouth is. If the source isn’t available and nobody else has scrutinised it, then it’s just so much hot air – there could be all manner of flaws in it, just that nobody’s seen them. Whereas if they’re willing to make the source available, it shows that they trust that the product is secure.
> No, I don’t think there’s anything as blatant as that coming from IBM or Sun (or even Microsoft),
Right:
http://news.com.com/2100-1001-239273.html
That could happen in any software. Most people use binaries, so what’s stopping someone (even someone official) from modifying the source locally, compiling that and putting that up for distribution?
The thing is, there are eyes watching closed source software too. Take a look at something like the Sony Music Player that certain CDs installed. It installed a rootkit and it was exposed.
Software used by a lot of people will get exposed if it does anything such as tamper with your system or phone home, and that I trust.
I’m a programmer and I wouldn’t bother to look at the source code for almost any application, let alone *all* of it for any application. So I’m trusting the word of *other* people.
I’m not saying I wouldn’t trust open source software, but I don’t make judgement of whether I’m comfortable with certain software based on whether the source is available or not.
I make my judgement based on many things: logic (how likely is it that the company is hiding something I wouldn’t like in the software?), the Internet (people are scrutinizing closed-source software all across the world, and will say something if they find anything remotely bad; word of mouth is killer on the net), etc.
Is it a risk? Yes. Have I been screwed by it yet? Not even close. Might it happen one day? Yep.
But by that same token, someone could infect some open source software. The source? No, because of version control systems. But the binaries. Someone could infect the binaries of open source software, since a lot of people (ESPECIALLY on Windows) will just use the binaries.
It’s a risk no matter what, and I think the idea of basing your trust level solely on availability of source code is myopic.
Some posters here suggest that the only software they trust is freely available. None of the products reviewed in the eWeek article are free as in beer. And although you can get Fedora Core and openSUSE, there is no support other than what is available on the Internet. That is a pretty risky proposition for people who want to protect their data, considering the beta nature of Fedora Core (I will not comment on openSUSE since I have never used it). None of the operating systems mentioned here were developed in a vacuum, nor were the developers all “giving their work away”. I believe developers should be compensated for their work, and if they choose to give their work away, fine. I somehow don’t think that’s quite what the National Security Agency, Red Hat and Novell had in mind.
What I am hearing is not so much that F/OSS is more secure; it’s just that some of the people who use it simply don’t want to pay for it. I believe in paying for software because I want support and continued development if I like the product. How many F/OSS projects simply “dried up and blew away” because the people developing them got tired of people just taking their work and not giving them anything in return? If you feel that Red Hat and Novell products are superior to those of Sun, then open your wallet instead of your mouth! I’m sure the developers who have car payments and mortgages would appreciate it.
And while the source code is readily available and can be used for a project, I would not go to a paying customer and build a solution using F/OSS just because it is free as in beer. The first time you get into trouble and cannot find an immediate solution to your problem online will be the point where you wish you had actually paid for support.
And finally, I think the whole Trusted OS thing is getting bastardized to solve problems that a Trusted OS was never designed to solve. A Trusted OS (at least as far as Trusted Solaris is concerned) was designed to work in an environment where Multilevel Security is necessary (MLS, see http://en.wikipedia.org/wiki/Multi-Level_Security). With SELinux and AppArmor, the idea is now to wrap insecure applications in an advanced “jail” where, through MAC controls, specific actions are either authorized or denied. While I am sure this can be done, I think that (1) this is not a “silver bullet” security solution, and (2) if configured incorrectly, it can cause more problems than using a non-Trusted OS. My concern would be how these systems “fail” in certain situations, since those actions could cause significant problems in a production environment. In many cases this is overkill, but that is just my opinion.
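For what it’s worth, the basic MAC idea described above is easy to sketch as a toy model (deny-by-default; the “httpd_t” domain name merely echoes SELinux’s naming style, and the paths are made up):

```python
# Toy model of mandatory access control: a central policy decides what a
# confined program may touch, deny-by-default, no matter what the program
# (or the user running it) asks for. Domain names and paths are made up.
POLICY = {
    ("httpd_t", "/var/www"): {"read"},
    ("httpd_t", "/var/log/httpd"): {"read", "write"},
}

def mac_allows(domain, path, action):
    """Permit an action only if the policy explicitly grants it."""
    return action in POLICY.get((domain, path), set())

assert mac_allows("httpd_t", "/var/www", "read")
assert not mac_allows("httpd_t", "/etc/shadow", "read")  # denied by default
assert not mac_allows("httpd_t", "/var/www", "write")    # never granted
```

Which also shows where the configuration worry comes from: every entry missing from the policy table is a denial, and in production a wrongly denied action can fail an application in ways that are hard to diagnose.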
There are two types of trust here: trust that code is not intentionally malicious or insecure, and trust that code is reliable and secure from unintended consequences.
Open source is superior in the former case, because the potential exists that someone will look at it, the invitation is there, and if you really cared to, you could validate it yourself according to your own criteria. Closed source code cannot be validated. The only basis for trust you have is the reputation of the provider, and historically that’s been a poor metric (you wouldn’t buy something if you already thought it deficient).
In the latter case, that code is reliable and secure from unintended consequences, open and closed source projects are fairly matched. Companies can typically afford process and tools to enhance their development, and potentially they can afford to put the best and brightest on a task of importance. Open source has a broader range of contributors, but there’s variance between projects in management, priorities, quality, etc. In the best cases, open-source projects can out-resource and out-develop commercial rivals. In the worst cases, you get a cold plate of spaghetti. Ultimately, your “trust” here will be based on a combination of objective and subjective measures of the integrity of the design and implementation process, the methods used, the amount of attention to bugs, and so on. In this case, trust does not depend on the openness of the source, but rather on the manner of development of the code base. Your ability to evaluate that depends on your ability to obtain sufficient information about it.
I’d say that, in general I have more “trust” overall of open-source software than commercial software. I work in the biosciences and closed-source software is of notoriously low quality despite having better documentation. However, in the broader sense, I would say that I would trust mature commercial software to be more reliable than their open-source counterparts and that I’d trust open-source software to be more secure and implement standards properly.
Trust is a very personal issue. I personally have a bias towards trusting F/OSS, IBM, Sun and Mac OS X. I also (sort of) trust Microsoft. But I don’t trust Microsoft to produce solid applications. I don’t have trust in their OS and I don’t have trust in MS Exchange. I don’t completely trust their Office suite either: I don’t trust it not to include data in its file format that I don’t want included. Anyway, the question/topic is not about trust in products or companies. It was about trust in your OS.
So my main statement stands: I trust F/OSS, and I do trust IBM, Sun and Mac OS X (but I don’t have big trust in Apple).
But I don’t have trust in this Trusted Computing stuff which is found in some OSes.
There is a short animated film by Benjamin Stephan and Lutz Vogel. You can search Google for it:
http://video.google.com/videosearch?q=%22Trusted+Computing%…
This small video exactly reflects my feelings about this technological “trust”.
Trust is, in my view, something social and personal (I trust), and technology can help me have greater trust in something else (in this case an OS), but I don’t want the technology to dictate to me what trust is and what it is not.
All this enforced technological stuff about trust turns me exactly to the opposite side: I start not to trust, or at least I start to be suspicious. I don’t always have an exact reason for that; it is just a personal feeling of not trusting. (If they provided me that technology and allowed me to choose whether to use it or not, then I would probably try it and then judge it.)
What also helps me to trust an OS is if someone who is very close to me (again: something personal and social) trusts it; then I automatically have a bias towards (to a certain degree) blindly trusting that OS as well.
cheers
SteveB
Please, all you Linux people, don’t release the flame bombs.
1. Linux is a pain to set up for gaming, installing/compiling; if it had a setup like Windows it would be better, or maybe a .PBI-type system.
2. The software issue: there is plenty of open source, but that runs into the install issue too.
3. If you want to game you need Windows. There’s TransGaming, but again, install & setup.
Now PC-BSD might be perfect for this: it installs basically like Windows, and setup may be the same. If someone just made an app to let PC-BSD play Windows games, a lot of people might ditch Windows.
You, sir, win the award for off-topic post of the day.