During the roundtable discussion at LinuxCon this year, Linus Torvalds made some pretty harsh remarks about the current state of the Linux kernel, calling it “huge and bloated” and saying that there is no plan in sight to solve the problem. At the same time, he explained that he is very happy with the current development process of the kernel, and that his job has become much easier.
“We’re getting bloated, yes it’s a problem,” Torvalds said, “I’d love to say we have a plan. I mean, sometimes it’s a bit sad and we’re definitely not the streamlined hyper-efficient kernel that I had envisioned 15 years ago. The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.”
Over the course of the last ten releases, the Linux kernel saw a cumulative 12% drop in performance, according to a report by Intel. Stability, however, is not a problem, according to Torvalds. “I think we’ve been pretty stable,” he said, “We are finding the bugs as fast as we’re adding them – even though we’re adding more code.”
Torvalds is also very happy with the improvements made in the kernel development process. “The one feature that is most important to me is how the development model seems to be working and it’s working better than it did even six months ago, where I beat up a lot of people over how they did things because it made it more difficult for me,” he said, “It took a while but they seem to have all gotten it.”
“I don’t spend all my time just hating people for sending me merge requests that are hard to merge,” Torvalds added, “For me, I need to have a happy feeling inside that I know what I’m merging. Whether it works or not is a different issue.”
The father of the Linux kernel also explained that his motivation has changed over the years. It used to be all about the technology aspect of it all, but now it is more about the community and the fame. Oh, and the flamewars, of course. “I really enjoy arguing, it’s a big part of my life are these occasional flame threads that I love getting into and telling people they are idiots,” Torvalds said, “All my technical problems were solved so long ago, that I don’t even care. I don’t do it for my own needs on my machine, I do it because it’s interesting and I feel like I’m doing something worthwhile.”
I saw the whole webcast and I think Linus has a very specific point of view.
Saying that an 11-million-line kernel is small would be very hypocritical, but if you look at where Linux is used and what kind of capabilities it has, I think the “bloat” is understandable.
Linus probably does not need SELinux, tracing, realtime, support for 4096 CPUs, etc. Nonetheless it is all in the kernel and contributes to its size (bloat).
And before people start with the driver and microkernel nonsense: everything that can be built as a module (drivers, filesystems, etc.) does not really add much bloat, and as Linus said yesterday, microkernels are just not working IRL (with good performance and stability and a lot of features). So they are not a solution.
And let’s not forget there is still uLinux, which runs just fine on machines with just 8 MB of RAM and no MMU.
What you don’t want is ‘linux’ (the kernal) having to include everything for everyone by default. It’s sort of moving that way from what I understand. More accurately, it’s not removing features/support that are well past their ‘mainstream usage’ date.
Linus still has stewardship of the kernal and it is he who gets the flak.
Because Linux is so many things to so many companies/users, rather than making everyone happy, it makes no one happy!
And what is ‘kernal’ anyway? the only ‘kernal’ I know of is this one:
http://en.wikipedia.org/wiki/KERNAL
You were probably trying to talk about KERNEL …
I like it. And no OS makes me happy tbh. People make me happy.
And once people acknowledge this problem, people will work on it, and recent advancements in kernel diagnostics will help big time (timechart, ftrace, mutrace; a small example follows below).
It will be fixed or mitigated.
And let’s be real: while Linux got 12% slower in one specific benchmark, CPUs probably got 360% faster.
(And the whole BFS discussion led devs to find a big scheduler regression. Maybe that is the culprit.)
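To make the ftrace mention a bit more concrete, here is a minimal sketch (my own illustration, not from the comment above; the file name, program name, and messages are made up, and it assumes debugfs is mounted at /sys/kernel/debug with tracing enabled). A userspace program can drop markers into the ftrace buffer so its own phases line up with the kernel events that ftrace records and timechart visualizes:

/* trace_mark.c - sketch: annotate application phases via ftrace's trace_marker.
 * Assumes debugfs is mounted at /sys/kernel/debug and the process has
 * permission to write to the tracing directory (usually root).
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int marker_fd = -1;

/* Open the trace_marker file once. */
static int trace_init(void)
{
    marker_fd = open("/sys/kernel/debug/tracing/trace_marker", O_WRONLY);
    return marker_fd < 0 ? -1 : 0;
}

/* Write a short text marker; it appears interleaved with kernel trace events. */
static void trace_mark(const char *msg)
{
    if (marker_fd >= 0 && write(marker_fd, msg, strlen(msg)) < 0)
        perror("write trace_marker");
}

int main(void)
{
    if (trace_init() != 0) {
        perror("open trace_marker");
        return 1;
    }
    trace_mark("myapp: starting expensive phase\n");
    sleep(1); /* stand-in for real work */
    trace_mark("myapp: expensive phase done\n");
    close(marker_fd);
    return 0;
}

The markers then show up in /sys/kernel/debug/tracing/trace next to the scheduler and interrupt events, which is the kind of data these diagnostic tools turn into something readable.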
Jack of all trades, master of none.
Also, there’s “uClinux”: http://www.uclinux.org/
Because Linus said this, we should take it for granted? Right.
Yes it does, but the question is how much of the “bloat” affects those keen to strip down the kernel for such systems. As I see it, the benefits vis-à-vis the costs of doing so have diminished over time (especially cf. MMU-less systems). IMO. (This is one reason why systems like VxWorks remain competitive.)
Well, show me the microkernel desktops and supercomputers and you have proven Linus wrong.
I was gonna say “Haiku”. But no. That’s a hybrid. Then I was gonna fall back to Windows NT and beyond. But that’s hybrid, too. So I guess I’d have to say “Minix”! Now, you’ve got to admit I have you there. The microkernel concept is enjoying runaway popularity with the ubiquitous Minix 3 desktops and supercomputers. Who’d ever have thought it would be Andrew Tanenbaum who eclipsed Gates in the computing arena? I ask you that.
Hi,
The only OSs that are really “working” (in a market share sense) in real life are well funded proprietary OSs (Windows, OS X) and Unix clones. It has a lot to do with history and momentum, and very little to do with technical superiority.
For example, for anything that’s “close enough” to Unix, once you’ve got the kernel you can slap a large amount of existing open source stuff on top and find thousands of programmers who are familiar with how it works before they’ve even seen it. For something that’s actually innovative you’re screwed twice (no easily ported software and no developers to write native software); and it makes no difference whether it’s (for e.g.) a “non-POSIX” monolithic kernel or a “non-POSIX” micro-kernel.
-Brendan
Who is he really and why should I care what he says?
Why does he want to be known as the biggest online troll?
What is his role in the Linux community? And why is he always talking shit? He needs to take a nap.
As for Linux: I thank him for making it. I like it. I can live w/o some of the liberties I give up to use it.
But I have more fun with it when I use it and get help from online communities. To use it and read into what this guy says is dumb. There are millions of ppl working on it. What more does he want?
A good, reasonable, and honest comment from the lead developer. Hopefully a good place for self-reflection and evaluation of the project in itself.
In my opinion, the difficult things are related to such questions as: how reasonable and sustainable is it, in the long term, to try to support every conceivable device? Why are all device drivers in one project to begin with? Are userland drivers really a dead end (cf. Xorg)? How many old device drivers are actually deprecated?
The word “bloat” has many faces.
How sustainable is it to try to shove everything into the kernel? How does it affect maintainability, one of the central criteria of quality software engineering?
Even better: how much extra work is required to maintain and refactor such an amount of kernel code? How does it affect the stability of kernel APIs/ABIs (or the lack thereof)?
How does it affect complexity (certainly one of the most dangerous side-effects of “bloat”)? How many dark corners are there in that large amount of kernel code? How does the amount of code correlate with the amount of bugs? How does it affect such activities as reading code, analyzing code, and understanding the kernel?
Does it make the kernel less robust? (No, according to Linus.)
How serious is the effect of “bloat” on kernel security?
Et cetera.
Make no mistake: so far the Linux kernel developers have done an amazing job in keeping this huge amount of code under control.
Why? First of all, to keep them all current despite changes in the kernel.
There are tons of drivers in the kernel that have had no maintainer worthy of that name for ages. Yet, they still work today. This is possible only because anyone who changes the kernel in such a way that drivers are affected is responsible for changing them accordingly.
This is why you can still have ReiserFS 3 filesystems even though the internal filesystem workings of the kernel have changed a bit and the original developers have long since completely abandoned it (around the time Reiser 4 started).
If you strip the kernel of everything that is not needed (for the desktop/specific hardware), then it does not matter whether things are built into the kernel or as modules; the only gains are faster boot and a smaller memory footprint, but overall performance is the same. In fact, performance is deteriorating with each kernel release.
It is possible that the recent discussion about the CPU scheduler spurred Linus’ comments to some extent. Of course the problem does not only concern the CPU scheduler.
What options are there? (Serious question). Can the kernel be redesigned in some way to improve size and efficiency without sacrificing the continual growth of functionality? That certainly would be no small task.
That is the million dollar question.
To wit, my opinion is only worth $0.02. And everyone knows pennies just clog the coin acceptor.
This (size and functionality) has nothing to do with the performance problem.
1) Apply specialty tools wherever possible:
The CPU scheduler is a good example of what options there are.
CFS performance has degraded seriously over time. It is big, it tries to be “smart”(er than…), and in the end we get crippled performance.
Now there is BFS, small and efficient, for desktop use only.
Why not have two schedulers?
Trying to do everything is the worst idea ever: get one CPU scheduler for servers and another for desktops. Nowadays the kernel is trying to be a jack of all trades, with mediocre results.
2) Code reuse would also help (including with size)
Somehow I very much doubt the scheduler is the problem with desktop performance. X performance and application efficiency probably play a much bigger role.
The scheduler was only an example. I don’t think the scheduler alone is responsible for the problem.
The kernel is trying to please all possible users: desktop, server, and so on. This does not work very well.
And yes, the CPU scheduler does have a lot to do with performance, desktop performance included. If you check recent LKML discussions, you will clearly see that.
Kernels should be highly modularized, not the opposite. The kernel should contain only basic functions, while everything else [drivers, etc.] should stay outside this ring.
I recently did a small review of the Linux kernel source – 700 MB of kernel source. Wow.
*BSD for example takes the same 700 MB for both the system and the kernel …
I’m curious, which Linux kernel version did you review?
While it’s still huge, my current working copy is ~400 MB, which is quite a bit smaller than 700 MB.
Well, it was the most recent stable branch of 2.6.x, but I currently have no patience to redownload it to paste the actual evidence.
and stable ABIs. When Linus says that they fix the bugs as fast as they add new code, so what? The code that gets bug-fixed will soon be swapped out for new code. Hence, Linux will always have lower quality than it could. This lack of stable ABIs and constant swapping of old code for new code prevents stability. As even Linux kernel dev Andrew Morton said about the quality of Linux code:
http://lwn.net/Articles/285088/
Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there’s a difference of opinion here, where do you think it comes from? How can we resolve it?
A: I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.
Linux can never be stable when facing this situation.
And regarding Linux on the Top500: that doesn’t prove anything. No. 5 on that list has dual-core 750 MHz PowerPC CPUs. Does that prove that this CPU is the best, or what? No. The naive Linux has a simple structure and is good for stripping everything out and doing just one specialized task: number crunching. The Solaris kernel is highly complex and it is not easy to strip out parts you don’t need. Linux scales well on large clusters; that is scaling horizontally.
But Linux sucks at scaling vertically, on one big machine with lots of CPUs. That is much harder to do. Sure, Linux runs on SGI’s 4096-CPU machine, but how well does it run? That machine is only used for some special tasks. It is not Big Iron, where one Solaris kernel is used for general-purpose work. That is much harder to do.
Supercomputers are not Big Iron, where lots of users log in and do work. Supercomputers are simple in structure and only compute. That is easy to do. They are basically a bunch of computers in a network, running a stripped-down Linux kernel. That is hardly scalability; the kernel is tailored.
The Solaris install DVD is the same from small laptops like the EeePC up to Big Iron. THAT is scalability. The same kernel, not redesigned or retailored.
Very good points.
These themes, in addition to the open-sourcing of Solaris, are why I switched my C++ software development to OpenSolaris in February 2009 and have never looked back.
For me, the 1990s were C/C++ on MS-DOS/Linux and the 2000s were C++ on MS-Win2K. Sun Studio on OpenSolaris is very fine, and the platforms I used before cannot compare.
Dear potential unix users/developers,
except for the Linux/Android combination, you should consider the (Open)Solaris option due to its mature design.
Solaris has been in use in many sectors of the enterprise world for many years and has had the benefit of being coupled with the Scalable Processor Architecture (SPARC), stimulating the need for Solaris to be designed to handle big iron-type (vertical) scalability.
Linux is more x86-centric (AMD, Intel), and these CPUs have never had the same scalability potential as the SPARC platform (the x86 design goals are much different from the SPARC design goals). This has allowed Solaris’ design to be more advanced than Linux’s, since Solaris had access to an advanced CPU platform (i.e. SPARC) for so many years.
Like the previous poster mentioned, it is impressive that a single Solaris distribution can run on a wide range of hardware in an efficient manner.
Consider the scenarios:
– Sun have a consistent message for (Open)Solaris, an operating system that has first class support for all of their hardware, i.e. try (Open)Solaris …
– IBM/HP flip-flop between Linux and their proprietary unix systems.
Who would you consider?
I’ve tried for years to like Solaris and I’ve downloaded and tried just about every version in the last 7 years.
Before that, I paid for the x86 kit from Sun for Solaris 2.6 and 7.
The upshot? If you’re on x86, Solaris is still painfully slow and needs a lot more disk space compared to Linux, xBSD, and Windows XP.
The one time I had prolonged use of an UltraSPARC, Solaris 8 performed well, but I still installed Red Hat after several months’ use.
Why? Because I wanted to see how good SPARC Linux would be, and guess what – it was faster and had more apps/tools available at hand (although Blastwave has long done a good job of providing packages of the common useful *nix tools that were missing).
But, most important of all, the compiler environment back then on Linux (around 2001-2) just freakin’ worked!!
Solaris had me digging for the right version of CC, setting LD_LIBRARY_PATH to arcane directories, and forever worrying about which version of AR I needed.
On Red Hat, 95% of the time, I just typed “make”.
“Consistent message”, whatever that means, doesn’t matter one bit when you don’t know whether the OS will be around 5-10 years from now. It’s anyone’s guess what will happen with OpenSolaris, but Linux, not being dependent on the revenue stream of any one company, just won’t go away.
A scalable kernel as you describe is not really what you want, since there will always be limits to its efficiency. The Linux approach, where you can choose to compile things in or not, makes more sense.
No matter how well designed a kernel is, having all the code to support a system with thousands of processors and terabytes of memory is not necessary when you have one CPU and a few megabytes.
“A scalable kernel as you describe is not really what you want, since there will always be limits to its efficiency. The Linux approach, where you can choose to compile things in or not, makes more sense.
No matter how well designed a kernel is, having all the code to support a system with thousands of processors and terabytes of memory is not necessary when you have one CPU and a few megabytes.”
But if you have to recompile and strip down the Linux kernel, how scalable is it then? Not at all. It is flexible and customizable, yes. But the Solaris kernel is the same; it runs from laptops up to Big Iron with many CPUs, serving tasks such as number crunching, serving many users doing office work, etc. The same kernel. To me, that is scalability. Otherwise I could take a C64, reprogram it, and call it scalable, which is not true.
And also, I want one single kernel, not many different kernels. I know the behavior of this one kernel. If there are many versions, I have to relearn.
I prefer one official distro to many different versions all having different characteristics. If SUN can create one distro that serves all purposes well, it is better than having many distros tailored to different purposes – in my opinion. YMMV.
I prefer a robust kernel that is good at everything to highly specialized kernels doing different things. To me, specialization is proof of not being general: bad design. It is better to have one grand theorem that encompasses all special cases than many different theorems with slight modifications. That is an ugly hack, I think. Better to generalize. But that is difficult and requires much thinking and planning.
You confuse ABIs with APIs apparently.
Trying to keep a stable ABI actually goes against stability. Since everything you do can break the ABI (even more so since we are talking about an _internal_ ABI), you are forced to do all kinds of hacks to avoid it, and that leads to terrible code and, in the long run, bugs and stability problems.
You have bought Linus T’s propaganda. It IS possible to have a stable API and ABI and retain backwards compatibility AND also develop new functionality without the code going bonkers. Solaris is proof of that. Solaris has had a frozen ABI and API for many years, and old binaries run on the largest Enterprise Big Iron without a recompile. Just copy from your desktop and off you go. If something is not compatible, you should file a bug.
And no one can argue against the rapid development of Solaris high tech: ZFS, DTrace, Zones, etc. And they STILL keep a stable API and ABI. It IS possible to do. You just need to know what you are doing and design well.
Solaris’ first version, called SunOS, was not that good. It took several years and tries, and then SUN released Solaris, which is far better. SunOS was the first iteration and not good. Just like Linux. Linux should ditch everything and make a good design, because Linus has lots of experience now. He can create a good kernel at last. I hope.
Sorry, but regarding userland, Linux has a very stable ABI!
Most of the time you don’t need it because most apps are packaged by the distributor anyway.
Proof?
I can still run a game (Heroes of Might and Magic 3) which I bought sometime in 2001/2002. At that time the 2.4 kernels were still in use.
What you might be referring to is library binary compatibility. That is not an issue of the kernel, but of distributors and proprietary software vendors. Proprietary software should bring ALL needed libraries with it, ideally compiled in statically. Then there is NO issue at all in running a 1999 program on 2009 Linux machines.
Linux’s userspace ABI _is_ stable. You can get a binary from 10 years ago and run it on a current system, provided it was compiled statically. You cannot run an old dynamically compiled binary, even a simple one, because of misc glibc ABI breakages and gcc changes over the years, not because of the kernel ABI.
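A minimal illustration of that point (my own sketch, not part of the original comment; the file name and build commands are only examples): the program below talks to the kernel through ordinary libc wrappers, and it is the libc side, not the kernel side, that decides whether the binary keeps working years later.

/* hello_abi.c - trivial program to illustrate static vs. dynamic linking.
 * Example build commands (illustrative):
 *   gcc hello_abi.c -o hello_dyn             dynamic: depends on the glibc installed at run time
 *   gcc -static hello_abi.c -o hello_static  static: carries its libc along, so only the
 *                                            kernel's stable userspace syscall ABI is needed
 */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    /* uname() is a plain syscall wrapper; its userspace interface has stayed stable. */
    if (uname(&u) == 0)
        printf("Hello from a binary running on %s %s\n", u.sysname, u.release);

    return 0;
}

Running ldd on the two binaries shows the difference: the dynamic one lists libc.so.6 and friends, while the static one reports “not a dynamic executable”, which is why decade-old static binaries tend to keep running.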
Any one of those technologies was probably implemented at the expense of internal API changes. Solaris does not have a stable internal API/ABI any more than Linux does.
Now, Solaris does have an external stable ABI/API for drivers, while Linux does not. But what good is it doing for Solaris? Where’s the thriving third-party driver community for Solaris?
If you think there are no advantages in avoiding the commitment on a stable API/ABI, then you should think about why Linux is now way more successful than Solaris.
Oh no! The legendary rewrite from scratch!…
As for making a good design: Linux has no design. At least on any kernel-wide scale. And picking up what I said above, this is what makes Linux so successful.
The kernel evolves according to the current requirements. There is no over-engineering or over-design. But there is also no restrictions on change, so the faults and shortcomings can be fixed when they’re found.
Because of this, Linux now supports almost every architecture under the Sun and has a large community of people and companies working on it, each one following their own interests.
As long as the current model keeps working as well as it has been working up until now, it won’t be changed just to please the people using Linux on the desktop.
The desktop is not a priority concern for most people developing the kernel; servers, embedded systems, and single-purpose appliances are. It is also a difficult market to enter because it is already locked up by Microsoft and Apple, and also because having a consistent desktop experience requires a unifying vision of what that experience should be, which goes against the very model of multi-project open source software. The kernel just follows the path that allows it to maximize quality and reach, and that means flexibility and no stable internal ABI.
Graphics card companies just don’t play along because the desktop Linux market is too small and filled with people that don’t give them much profit. Which sends us back to the previous paragraph.
“Now, Solaris does have an external stable ABI/API for drivers, while Linux does not. But what good is it doing for Solaris? Where’s the thriving third-party driver community for Solaris?
If you think there are no advantages in avoiding the commitment on a stable API/ABI, then you should think about why Linux is now way more successful than Solaris.”
Do you really believe Linux is more successful than Solaris because Linux has no stable ABIs? Then Solaris should ditch its stable ABI and at once we will see Solaris being more widespread again! I will tell Larry Ellison this! Thank you for your analysis. AIX and Windows should follow. But… heck, Windows has no stable ABIs, and it is successful. Maybe you are right? Backwards compatibility must be ditched to get successful? Huh? Maybe you have found the solution!?
“As for making a good design: Linux has no design. At least on any kernel-wide scale. And picking up what I said above, this is what makes Linux so successful.
The kernel evolves according to the current requirements. There is no over-engineering or over-design. But there is also no restrictions on change, so the faults and shortcomings can be fixed when they’re found.”
Yes, that is the problem with Linux. Linux has no design. Everything changes all the time. New code swaps in all the time. This introduces new bugs. It is said that Windows requires SP1 before the bugs get ironed out. It takes time. What happens if the code gets swapped out for new code all the time? Then new bugs will be introduced all the time. Which makes Linux unstable. This constantly moving target is not good for Linux.
Linux is more successful than Solaris because it has a development model that allows for the participation of individuals and companies each with different goals for the final product. Flexibility is a part of that formula, not the only reason, no.
It isn’t? It looks like it is working to me. I have had absolutely zero problems on my Linux servers.
And having no overall design does not mean that individual components have no design, or are badly designed or badly programmed.
Linux is more successful than Solaris, not because of the technology, because Linux sucks tech-wise. Look at BTRFS, what is that? A ZFS wannabe. And then you have DTrace wannabes, etc. I saw a list of SUN tech that Linux had copied and it was huge. Things like NFS(?) and whatnot. Can’t remember that huge list.
Windows has worse tech than Linux and Solaris, and Windows is more successful than both.
Linux is more successful than Solaris because of politics and because you can found a huge Linux company and be a USD billionaire. Look at Red Hat. No one can found a Solaris company and become a billionaire, because SUN owns Solaris, the official distro. But no one owns Linux; there is no official distro. If Linus T released an official distro, his distro, then all other distros would die. And Linux would lose momentum. You cannot found a huge FreeBSD company and become a dollar billionaire, because someone “owns” FreeBSD.
Linux, no one owns. Anyone can get rich. There are volunteers that do all the hard work of coding, and your company just packages the stuff and sells it. Of course everyone goes there. No work, and you just get the money. For free. Like selling air. Where money is, people and companies go. If Linus T said “this Linux distro is the official one”, then everyone would lose interest in Linux. Just like FreeBSD. Or Solaris. You cannot become a Solaris billionaire.
And good that you don’t have problems with upgrading your Linux servers. Others do:
“I have been grumbling for the last week about breaking compatibility. That needs to be avoided. Just last saturday, I had to roll back a new Centos environment from 2.6.30 to 2.6.24 because the kernel dropped a function call after 2.6.27 that a library I use needs. The developers of the library have not made an upgrade available, and I can’t switch at this time.
I personally have not noticed any significant improvements in the kernel for quite a few releases now. I actually try to avoid upgrading my kernel on my workstation, but my package manager makes this into a royal PITA because I don’t have an option to say “don’t bug me about this…I don’t want to.”
Ok, I’ll bite.
I don’t care about BTRFS, I want a reliable (as in “does not eat my data”) and stable (as in “fsck will not eat my data”) filesystem, like “ext3”.
In the server space most people think like me. That’s why nobody cares about ZFS but Solaris fanboys. In the real world, either RAID+LVM is (more than) good enough, or you have an EMC storage array that does all that for you.
I’m not saying that these technologies aren’t nice. I’m saying that being good doesn’t necessarily mean having the technologies that geeks drool over. Solaris is a good OS, but Linux is also a good OS, and no less good than Solaris.
Windows is more successful than both because Microsoft uses its dominance on the desktop as leverage. People go the Microsoft route because it’s familiar to them, and because, in many cases, they really have no choice.
Building a company around a Linux distribution is no recipe for success. Red Hat is the only case where a company successfully based its business on a Linux distro, but they never became that big from it, and today most of their revenue probably comes from other services above the OS layer.
Companies are attracted to Linux because of three things:
1. It’s free: they can take it and do stuff without caring about royalties (and on embedded systems, small royalties amount to lots of money when the unit volume is large);
2. Licensing: the GPL means they can contribute without fearing their competition will just pick those contributions and stick them in their own proprietary product;
3. Flexibility: like I said, the Linux development model allows contributors to try new stuff, and to modify existing stuff to better accommodate their needs. This does not mean bad code, and the examples of stuff rejected because of it are numerous (you can start with Reiser4, for example).
And contributions to the kernel are not done by individuals any more. Individuals are few, and most of them have some company backing them up indirectly.
And yet if you asked the same developer if Linux was better than Solaris, his answer would be an unqualified “f–k yes it is”.
Quoting the developer of an OS that is responsible for driving Sun into near bankruptcy is not going to help your case much.
heh. Well I guess that means you have no clue what you’re talking about.
It proves that IBM knows its shit when it comes to designing computer architecture. You’re talking about Blue Gene there, which is a huge step above anything else out there in terms of high performance computing.
Which leads to another reason why Sun Microsystems is getting bought out by a database company: the Sun SPARC chips are not competitive against the IBM POWER platform.
Yes.
So… A high degree of complexity, opaqueness, and unwieldiness is now a GOOD thing?? Did you work on the team that designed Vista?
Huh? It runs on computers that are much “higher end” than anything Sun ever sold.
Yes. That’s why major Wall Street firms use Solaris for their business-critical realtime trading systems…
Oh, shit. That’s right. They don’t… they use Linux.
The answer, independent of any metrics, would be:
A hell of a lot better than Solaris ever could. Nobody has bothered to port that OS to Itanium.
Yes. That’s right. Maintaining some database backend for Excel spreadsheets is going to be oh-so-much more difficult than nuclear simulations and whole-earth environmental simulators that process, generate, and move terabytes of data per second.
Well… you’re acting like the inability to adapt Solaris to a wide variety of workloads is a good thing. If all of this is so “easy”, then why does Solaris suck at it so much?
And you’re also acting like Linux isn’t used in “big iron”. Why has Linux driven Sun out of business then?
Solaris sales are measured in hundreds of thousands; Linux server sales are measured in millions.
Yes. Now even Eee users can know the joy of running like shit with Solaris chugging away.
First of all, Linux did not bankrupt SUN and destroy Solaris. You apparently believe that Linux is so much better that it bankrupted SUN. That is a weird conclusion to draw. All enterprise OSs have suffered, not just Solaris. Even AIX, HP-UX, and the mainframe. The point is that the enterprise OSs are far more stable than Linux, but Linux catches up quickly and is good ENOUGH. Even Windows catches up quickly and is soon good enough, but that doesn’t prove anything about Windows’ merits.
The reason many switch to Linux is politics. If the CEO says we are going for Linux, then that is what happens. You don’t know where I work, but it is one of the most well-known (if not THE most well-known) firms on Wall Street. We have several LARGE, famous enterprise systems on Solaris. But suddenly, we have to port them to Linux. I personally doubt Linux will do as well as Solaris, but an order is an order.
I’ve read about many companies starting with Linux, and then later, when their workload increases, they have to switch to a real Unix such as Solaris. I can post several such articles if you wish to see for yourself. The articles say that Linux becomes unstable under high load, scales badly, etc. Don’t you believe me? Should I post those articles?
Regarding the Top500 and No. 5 on the ranking, Blue Gene, which uses lots of 750 MHz PowerPC CPUs: this ranking doesn’t prove that the 750 MHz PowerPC is the fifth fastest CPU on earth. Do you really believe so? The thing is, supercomputers face different problems than Big Iron. Watt consumption and cooling are among the greatest hurdles, therefore you want to keep the wattage down, e.g. by using slow CPUs. The Linux kernel running on supercomputers IS a stripped-down version. It can only do one thing: number crunching. And that is actually far easier to do than a general-purpose kernel like Solaris, with lots of complex locking, etc. It is like comparing a GPU to a CPU. The GPU can only do one thing, but fast. A GPU can never replace a CPU. It is far more difficult to create a fast CPU than to create a fast GPU. A GPU is simple in structure. Specialization is always simpler to achieve. Generalization is harder.
Regarding your comment that the IBM PowerPC is faster than the SUN Niagara: well, that is not correct either. There are many cases where a Niagara outclasses Power CPUs. For instance, three IBM P570 Power servers with a total of 12 Power6+ CPUs running at 4.7 GHz get 7000 in the Siebel 8.0 benchmark. One SUN T5440 using 4 Niagaras at 1.4 GHz gets double that score. In Siebel 8.0, one Niagara at 1.4 GHz is six times as fast as one Power6+ at 4.7 GHz, according to official white papers from IBM and SUN. Do you want to see them? I can paste them so you can see that I am not lying. There are also lots of other cases where one 1.4 GHz Niagara crushes one 5 GHz Power6. That is ridiculous. How can a 5 GHz CPU be many times slower than a 1.4 GHz CPU? Bad legacy design. Above all, how can people say that the Power6 is faster than the Niagara? Say you want to transport 10,000 people to a place. You have a Porsche, called Power6, which has two seats and makes the trip in 5 minutes. You also have a large bus, called Niagara, with 100 seats, which makes the trip in 15 minutes. Which finishes first? Is the Porsche faster than the bus when we talk about large loads? For small loads the Porsche is obviously faster, but then you could just as well use an x86 instead. For large loads the Power6 chokes.
And also, there are lots of people saying that Linux code quality is so-so, including Linus Torvalds. There are many testimonies saying that Linux doesn’t cut it when the load increases and you do large stuff; it becomes unstable, etc. On the other hand, there are not many testimonies from companies saying: “Solaris doesn’t cut it when the load increases, it becomes unstable under load”. Why is that? Has SUN threatened Solaris customers into shutting up? Or are there no such testimonies? Are there many testimonies saying IBM AIX becomes unstable under load? No? Why not?
Solaris code IS mature and complex. It scales well on Big Iron and also well on large clusters, which is easier to do. Linux is simple and naive. Heck, it doesn’t even have a design.
100 per cent sure the comma stays outside the quotes. Add this to your list of “10 most annoying things in Internet news”.
That reads like a joke but I’m afraid he was serious.
Linus should hang out here on OSNews, he would fit right in.
I’ve been taught the Dutch ELDA rules for these matters. There’s somewhat of a debate in Dutch linguistic circles about this. Nowadays, you are only supposed to put punctuation marks within the quotes if they are part of the quote.
Why use Dutch rules on an English website? Because I can. It’s my subtle way of promoting my native tongue .
Punctuation marks have always been supposed to be left inside the quotes, and only when they are part of the quote. That is not a Dutch thing, but an English language thing. Normally it is something that is learned in English classes in grade school. What you used was proper English.
I’ve always found the English rules, at least, to be a bit silly. To me, delimiters in natural languages shouldn’t really be all that different than in programming languages. What happens inside them happens within its own syntactical scope. If you choose to quote a comma in a quote, inside the quotes, then fine. If not, then just leave it out completely. If you choose to leave it in, and then find that your chosen sentence structure also requires a comma just past the end-quote… then use it, even if that means doing:
,”,
The goal of writing, in most cases, should be clarity. And if silly linguistic rules get in the way of that, then ignore the rules. Of course, a decision to ignore the standard rules can have consequences with respect to clarity in and of itself. So you have to use your best judgment, taking into account your intended audience, etc. But when in doubt, do what feels right… and *clear*.
I tend to agree. When I started out with Linux it was small and workable. Now it’s getting a little big. Maybe it’s time to go through it and get rid of legacy code and code which won’t be supported (a.k.a. 386/486 processors, the Motorola 68x000 series, legacy hardware, etc.). I think the 3.0 (or 2.8) series should look at addressing this issue.
No stable ABI, please. When the in-kernel ABI changes, there is a good reason. All we would get from freezing it is badly maintained or abandoned binary out-of-tree drivers that don’t get modified for months/years to support, e.g., power management improvements when a new kernel revision is released.
It’s not hard to keep up with Linux driver ABI changes because improvements are incremental and tested each iteration, and that way few if any new bugs are introduced. For the binary blobs that do exist, it’s a minor problem to port their code to a new kernel release. In fact, the lack of a stable ABI forces their vendors to maintain them to support new OS features. Fortunately we have in-kernel drivers for most relevant hardware, minus some graphics cards.
And who really needs graphics cards anyway? When Linux is such a roaring success at 1% of the desktop market I can’t believe that people still question the unstable ABI approach. Hardware companies just absolutely love it and consumers don’t mind waiting months for a driver to appear in the kernel. Did I mention how open source drivers are always high quality compared to proprietary drivers? There’s never any missing functionality.
Now excuse me while I go fly my magical pony around the block.
Actually, who cares about Linux on the desktop? No matter what it’s still stuck at that 1%. The real deal for Linux is, well, everything _but_ the desktop.
The kernel development model reflects this, and the reliability and performance on servers, and the flexibility on embedded devices, are the result. It is a success, really.
A driver shall be updated (meaning: the user shall have to update his installation of a certain driver) only when it exhibits stability, functional, or performance deficiencies on the system it’s designed for (assuming the same system doesn’t show deficiencies with other drivers and devices; in that case the driver is not the culprit).
That’s a tenet of sane OS design (and SW design in general, since it has to do with correct code and data encapsulation and isolation), and that’s how things are done on every OS except Linux.
It’s not hard to grasp the concept that the kernel project’s license and the way it’s designed and structured are ORTHOGONAL matters, and having a stable in-kernel ABI does NOT inherently keep drivers from remaining open source, either…
Or maybe one should assume that, since the system itself is (admittedly!) more a SW freedom promotion vehicle that only needs to “work” somehow than a masterpiece of elegant top-down SW design which by the way happens to be free and open source, the above is too difficult a concept to grasp for advocates of such a system?
(paraphrase) In fact, having no ABI forces users to upgrade the whole kernel, including externally compiled drivers, when a new kernel release with new features (related or unrelated to the I/O model and subsystem) appears and is shoved down users’ throats downstream.
The driver code base could just be a separate project, or just a separate subfolder of the tree, apart from the kernel proper’s code base.
And although this may seem like a minor feat, it would make a whole world of difference for users and sysadmins (who could install kernel updates without fear that already installed and working drivers need to be updated too) and also for distributions (which wouldn’t have to backport drivers and features to the relatively stable kernel they em- and de-ploy).
It is *exactly* what Theo was saying 5 years ago, but he was considered a troll (sure, when you start to criticize Linux).
http://www.forbes.com/2005/06/16/linux-bsd-unix-cz_dl_0616theo.html
Now Linus is saying it …
Yadda, yadda.
Andrew Tanenbaum said it 18 years ago.
The problem is that Andrew and Theo never programmed a kernel that would do what people wanted, so people used Linux.
Theo does. My firewall is solid as a rock …
Great. I’ve done that. I agree that OpenBSD makes a great firewall.
But try doing other things with it. Oh, I’m sure you can give examples where you can. You can use it as a web server. You can even use it as a desktop. But it is going to be missing desired features.
Look. For the things I need an OS to do in my personal and professional life, Linux is my OS of choice. But I am a posix fan before being a Linux fan. And I was a Unix fan years before there was such a thing as Linux.
But Linux and FreeBSD and OpenBSD are complementary. The posix world is stronger for having all of them rather than just one. (I’m not exactly sure where NetBSD fits in, but maybe it does, or maybe it’s a fifth wheel. Not sure.)
I don’t think that rock-throwing from the OpenBSD side is wise. Especially since Linux could subsume OpenBSD’s duties more easily than OpenBSD could subsume Linux’s. Even if not optimally.
Linus is not stating some new-found personal revelation. We’ve *all* known that complexity vs functionality was becoming a more major issue. And for some time.
See Linus’ comment in one of the links regarding the relationship between “unacceptable” and “probably unavoidable”.
That’s a matter of taste, REALLY. I use OpenBSD for almost everything – from embedded systems to the desktops, so please – don’t judge only upon your own experience.
Regards,
marc
Precisely. But capability and features do figure in. I could make do with OpenBSD on a desktop, too. But how many other people would be at all happy with it in that capacity? It won’t run the software that my business users need, that’s for sure. And I prefer Linux’s features to OpenBSD’s more spartan environment, myself. A matter of taste? Sure. But where the rubber meets the road (my business desktop XDMCP servers) OpenBSD is a complete nonstarter. That doesn’t mean that OpenBSD is bad, or that determined and smart people can’t make it work on a desktop. (And I should mention here that for certain uses it is unparalleled.) But your personal tastes (and requirements) lay outside the norm, as do mine.
Hell, even with Linux’s functionality and application breadth, there are times that I have to grin, bear it, and deploy a Windows workstation. Not for myself. But for my users. Someone tells me, personally, that I need a particular OS to run their app, and I tell ’em to eat shit. As likely you would. But it never does any good.
Well, I still do think that this is a matter of taste. Why? Because I don’t give a damn about typical users, and I wasn’t talking about typical users at all. I don’t think the typical user should run OpenBSD; that would be completely illogical, wouldn’t it? Obviously most people will probably choose something more suitable for their personal needs and taste – it all depends on the situation.
All I said was that, say, a security-conscious or security-oriented person might want to (and is able to) run everything on OpenBSD, and this is actually happening in the wild, as the world is wide and open, although it is good to have the sense to choose the right thing for the right task.
I am just opposed to your statement that OpenBSD is good for only one purpose, firewalls. I think that it is good for various purposes, unless you prefer something else.
Regards
Then I was not clear enough on that point. (I kinda felt like maybe I wasn’t.) I think that any application which is especially security sensitive, and for which the features of the Linux kernel are not of great value, is a good target for OpenBSD. And within those parameters, there is a great deal of leeway for personal taste.
It’s just that as an admin, I do have to be concerned about keeping my business desktop customers happy. And I need everything that Linux’s current “bloat” can provide me, and more.
Don’t be so gullible, OpenBSD (Theo’s work) is not much more than a forked version of NetBSD.
Theo didn’t write a kernel, he forked an OS.
I don’t want to upset you, but you’re clearly wrong. I’ve compiled both NetBSD and OpenBSD kernels and I know what lies in the sources. They are definitely not the *same*, and the fact that OpenBSD is forked from NetBSD doesn’t mean that it’s a 1:1 copy of NetBSD.
As for my personal experience – I find OpenBSD much clearer and more coherent than NetBSD.
Again – that’s just my personal opinion. I also advise you not to judge upon your own experience.
Say what? Instead, people should simply adopt your views, ignoring their own personal experience?
“Say what? Instead, people should simply adopt your views, ignoring their own personal experience? ”
Absolutely not. I’d rather see some serious discussion, not only ‘likes’ vs ‘dislikes’.
I like OpenBSD, but that doesn’t make me think that NetBSD is somehow “worse” than OpenBSD just because it is not OpenBSD system. That’s all.
I think I miscommunicated in another post, and you pounced on it. And then you miscommunicated in a separate post and I pounced on it. All without conscious intention. Funny, isn’t it. 🙂
I think we’ve got it sorted out now, though.
“Theo didn’t write a kernel, he forked an OS”
Isn’t a kernel an OS? I thought he forked a platform which included a kernel.
(sorry, I couldn’t resist.. cheap shot I know but it amused me for five minutes)
You didn’t understand my point. My point is that Theo de Raadt didn’t write OpenBSD from scratch; instead, he forked the NetBSD code base.
So comparing him with Linus Torvalds, a person who started his own kernel, is kind of silly.
Don’t get me wrong, Theo is a cool guy and he has made very good things such as SSH, OpenBSD being one of them, but I wouldn’t compare a person who forks a code base with someone who creates one, like Linus Torvalds, who started his own kernel and has been leading one of the biggest and greatest open source projects out there, which is Linux.
Well, I’m a fan of Linus. And I think that, as an OSS personality, Theo is a bit of a turd. But I don’t really think that the start is as important as how one actually runs the race. Sure, Theo had a spat with the NetBSD guys (Imagine that!) and forked off OpenBSD. But OpenBSD came into its own long ago. And there is certainly nothing wrong with borrowing and sharing.
Linus started out on his own. But, like Theo, his greatest contribution has turned out to be his performance in his role as the leader. (He certainly cannot be credited with the current Linux kernel in any capacity other than as the BDFL.)
So I certainly think that the two can be compared. And depending upon the criteria selected, either Linus or Theo might “win”. Though if one views things in that competitive way… it’s much easier to find criteria such that Linus wins than such that Theo wins. I’m just not sure that that is the best way to look at things, since OpenBSD has advantages of its own.
It was just a joke in passing, based on your suggesting that “kernel” and “OS” were different objects, while the OS is by definition the kernel, the bit of software between userland and hardware.
More seriously though, I thought Linus started with Minix code and built from that.
OpenSSH. SSH was originally developed by somebody else.
Dude, you should learn history. Theo is one of the founders of NetBSD. It’s his baby; he wrote it. The name NetBSD even came from him.
So he wrote the kernel (at least a part of it, as Linus did for Linux…). What is your argument now?
http://en.wikipedia.org/wiki/Theo_de_Raadt
The NetBSD project was founded in 1993 by Chris Demetriou, Adam Glass, Charles Hannum, and de Raadt.
Because of the importance of networks such as the Internet in the distributed, collaborative nature of its development, de Raadt suggested the name “NetBSD”, which the three other founders agreed upon.
If we want to talk about operating systems, we need to first understand what we are about to talk about.
People call Linux only a kernel, with the operating system being Linux + something. They are totally lost on how operating systems actually work, so whatever they then try to explain about the OS is wrong, because they do not know the basics.
First, some history. There were no operating systems. Programs ran on bare hardware. All programs included all the hardware control code: how to move the disk drive head, how to read from memory, how to execute code on the CPU, how to print data to the printer and monitor, and so on. All programs became bloated and very difficult to run on other computers (newer, older, different) because the programs did not have the correct control code for those devices.
So, engineers invented the kernel. It got other fancy names as well, like operating system, supervisor, core, master program, controller, and so on.
The OS came between the hardware and the programs. Later you could have multiple programs running at the “same time” because the OS managed the hardware resources for the programs. Programs became easier to write because you no longer needed to care about device control code; the operating system (kernel) took care of that for the programs. Then, when more programs started to be run on the same hardware, someone got the great idea to share code between them, and software libraries were born. So you had libraries that multiple programs could use; this way you got more advanced programs and saved some storage space and lots of programming time, because you did not need to invent everything yourself, you just used already existing libraries.
At that time, almost all software was free (as in beer and speech). And no, GNU did not exist at all back then, in the early 60s. RMS started GNU after the big companies had stepped into the arena and started to close software source code, trying to control it and demanding lots of money for computer time in different places.
Time flew by and all were happy, because the OS (kernel, supervisor, core, and so on) could have only a very limited amount of memory to run in. But memory got cheaper and you got more of it. So OSs started to grow bigger and bigger, until they broke the limit of hundreds of thousands of lines of code. They became very hard to program and maintain.
Then came a totally new idea of how to build an operating system. They sliced the operating system up into slices that were protected from each other, and the most important part was left to control all of them. Thus was born the idea of the microkernel. With a microkernel you can build a client-server or layered architecture operating system, in addition to the older monolithic structure.
Soon, in the 90s, the microkernel architecture was the most popular; it seemed that all OSs were being built with it. Everyone was touting how the monolithic OS structure was bad, unstable, and insecure, and how, even though it was faster, it was obsolete. So they said.
At that time, over 95% of the operating systems on the market had a microkernel structure. Only big servers were running the old UNIX OS, the monolithic kernel.
Then Linus started his own OS. He designed it from the start to use a monolithic structure, instead of the microkernel structure that Minix used. He got a lot of criticism for it from Andrew S. Tanenbaum.
After a few years, RMS, the founder of the GNU project, started to demand fame from Linux, because Linux, and not GNU, got all the attention as the free, open source operating system. So RMS started his own propaganda and began demanding that everyone call Linux GNU/Linux, because you cannot do anything with a computer with just Linux: you would need GNU tools to compile Linux and your programs. The fact remains that you do not need GNU tools at all to run them, while all GNU tools need an operating system to work.
So GNU continued developing their own operating system, called Hurd. To protect their propaganda, they called it GNU/Hurd.
Many believe that Hurd is a kernel like Linux. Well, they are wrong, because Hurd is not a kernel; it is an operating system like Linux. Hurd is a microkernel-structured operating system. Hurd’s microkernel is GNU Mach. Mach was the most popular microkernel used in client-server architecture operating systems; it was used in almost 99% of them.
It was normal, when speaking about operating systems, to treat the kernel as the most important part of them.
Linux was a monolithic kernel, the old structure where the whole operating system is built alone in kernel space, in supervisor mode and in one binary (until the 2.2 release).
So RMS and GNU started to speak about Hurd as if it were just the kernel, even though it was a microkernel + operating system servers/modules.
And that is still so even these days. The Linux kernel is the operating system called Linux. Linux does not mean anything other than monolithic kernel == operating system. It does not mean distribution, which is a totally different term. It does not mean a software system (Ubuntu, Fedora, and so on) or a development platform (GNU/Linux). Linux only means the operating system.
The problem is that people do not understand the monolithic and microkernel structures. They believe they are just different ways to build a kernel, but they are different ways to build an operating system: the software that operates the hardware and software and shares data between them, offering a nice interface to each of them and to the user via I/O devices. The OS is not the fancy software you see on screen; those are different pieces of software. The OS does not include the C library or the compiler; those need the operating system itself to work on the computer. Without an operating system, software libraries, compilers, text editors, and so on have no way of working on the computer, unless you code them to control the devices, which would be stepping far back in history, to the time before operating systems existed.
So when we are talking about an operating system, we need to understand what kind of structure it has. If it is monolithic, the kernel is the whole operating system.
If it has a microkernel, the kernel is not the operating system but part of it. The operating system is then the microkernel + the operating system servers/modules (do not mistake them for daemons or services, which are totally different software again).
There are a few famous microkernel-based operating systems. Today the most used is NT, Microsoft’s own operating system, which has a microkernel structure. Today’s newest NT release is 6.1, which runs in Windows 7. And do not fall for the marketing lies about “hybrid kernels” being operating systems that have all the good sides of monolithic kernels and microkernels. They are pure microkernel-structured operating systems, just implemented a little differently. The modularity and structure are still the same as in a pure microkernel-based operating system. Even MS does not call NT a hybrid anymore, only a microkernel-based operating system. The same goes for XNU, which some people know as “Darwin” (Darwin is the XNU operating system + development tools, which makes Darwin a development platform, not a kernel or an operating system; XNU’s microkernel is Mach).
Then there is Minix, the famous OS that Linus used before he started to build the Linux OS.
And then there is GNU’s own operating system, the Hurd, which has the GNU Mach microkernel in it.
On the other hand, there are monolithic OSs like Linux, SunOS (the operating system of Solaris/OpenSolaris), FreeBSD, NetBSD, OpenBSD, DragonFly BSD, and many others.
These days, monolithic operating systems are used more on supercomputers, servers, and embedded systems than microkernel-structured operating systems are. But microkernel-structured operating systems run all the desktops: Mac OS X Snow Leopard and Windows 7.
OK. How about getting back to actual facts? The NT kernel started out giving a sort of half-nod, with lip service, to microkernel concepts, but it is now a huge chunk of code that all runs in the same memory address space. And that is what micro vs monolithic is about: what code shares a memory address space, and what code does not. The current Windows kernels are monolithic. Calling them even “hybrid” is a stretch, really – in the same sort of way that calling Linux a hybrid kernel, because it uses dynamically loadable modules, is. And they are both that way (monolithic) because when the engineers got down to the nuts and bolts… that’s what made sense. It’s the difference between beautiful theory and real world practice.
Ivory towers are nice. And academically speaking, tenured professors like Tanenbaum can afford to live in them. The rest of us settle for nice practical houses, apartments, and condos.
It is not a matter of the same/shared address space or call space. You do not have only two address spaces on a microkernel or a monolithic OS; you have multiple ones. Every process is in its own address space, but it can work in whatever spaces it needs.
The definition is about structure: is the kernel alone, with all other operating system parts dispatched from it into their own processes? That is always a microkernel. A monolithic kernel is _never_ a “hybrid”. That is just pure marketing propaganda that sticks with people.
Not even Microsoft uses the hybrid term anymore. They speak about the purely microkernel-structured NT operating system. Why? Because it was just an attempt to market a “new” OS structure to the academic world, and it failed totally.
Even Microsoft's next research operating system, called Singularity, has a pure microkernel structure, even though it has no more than one address space.
That is just twisting technology for marketing purposes: “We have the best sides of all operating systems, this is flawless and the best you can get.”
Theory is always theory, but implementing the theory and tweaking it a little bit to get it working in reality does not change the technology, how it was designed and how it is implemented. So far none of the “hybrid kernels” have anything so different from a server-client architecture that they could not be called server-client architectures. They cannot be called monolithic either, because their modularity is at the binary and architecture level.
Even Linux has modules, but that is not at the architecture level, only at the binary level. When you load a module in Linux, it works just as if it had been compiled into the kernel, not as a separate piece. The module is attached the same way; it is just loadable to save some memory and gain some stability.
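To make that concrete, here is a minimal sketch of a loadable module (the names are purely illustrative, and it assumes the usual kernel headers and kbuild setup). Once loaded with insmod, it runs in the same kernel address space, with the same privileges, as code compiled into the kernel image; being a module only changes how and when the binary is linked in, not where it runs.

    /* Minimal illustrative Linux module: after insmod, this code runs in
     * the same kernel address space as built-in code. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
            pr_info("hello: loaded into the kernel address space\n");
            return 0;
    }

    static void __exit hello_exit(void)
    {
            pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");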
Now you are just saying that the architecture differences do not immediately override the binary differences.
You can move drivers or other parts of the operating system between address spaces as much as you like, so that they sit with the microkernel in kernel space or among the other processes in userland, but they still do not become part of the microkernel, however much you would like them to. The microkernel is even then only a microkernel; it does not grow to include device drivers, memory management or process management, which are their own modules. You are almost speaking as if kernel space were the same thing as the kernel, which it is not.
Many people make this mistake with the different address spaces, as if they were what decides the architecture. The idea of the microkernel's server-client architecture is that every operating system feature that is not strictly needed is removed from the kernel itself. It does not matter whether those sliced-off parts live in kernel space or user space; they do not belong to the microkernel itself. You can always keep the microkernel and swap out all the other parts of the OS, building up different OSes while using the same kernel. You can always move the OS servers between kernel space and user space without changing the architecture or even the binary structure, only their position.
On monolithic operating systems you cannot do that. You do not get any operating system parts outside of kernel space. You can change the binary structure, but at the architecture level all the modules are still the same as they would be without the separation at the binary level.
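As an illustration only, and not any real microkernel's API, the server-client idea can be sketched in plain user-space C: a “driver” is just a separate process that answers request messages over an IPC channel, and nothing in the request/reply format cares which address space that server happens to sit in.

    /* Toy server-client sketch (illustrative, not a real microkernel API). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    struct request { int op; int arg; };
    struct reply   { int status; int value; };

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        if (fork() == 0) {              /* the "device server" process */
            struct request rq;
            struct reply rp = { 0, 0 };
            read(sv[1], &rq, sizeof rq);
            rp.value = rq.arg * 2;      /* pretend to service the request */
            write(sv[1], &rp, sizeof rp);
            _exit(0);
        }

        /* the "client": any other OS component or application */
        struct request rq = { 1, 21 };
        struct reply rp;
        write(sv[0], &rq, sizeof rq);
        read(sv[0], &rp, sizeof rp);
        printf("server answered %d\n", rp.value);
        wait(NULL);
        return 0;
    }

Whether such a server is placed in user space or linked into kernel space changes its privileges and the impact of its failures, not the shape of the architecture, which is the distinction being drawn here.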
Example: http://en.wikipedia.org/wiki/File:Windows_2000_architecture.svg
Can you find the microkernel separated from the other parts of the OS, or are process management, memory management, drivers and so on part of it? Why does Microsoft even call NT a microkernel-based operating system? Why do they say that NT uses a microkernel? Because they cannot hide the fact that it has one. When we talk about a microkernel, we talk only about the kernel, not about the servers/modules.
It is just funny, because even though not even Microsoft calls NT a hybrid kernel but microkernel-based, many people want there to be some kind of “better”, undefinable operating system model beyond server-client, as if that would somehow make these systems better. And just for the record, the server-client architecture that even Minix uses allows the servers/modules to be moved between address spaces; there is nothing special there.
Man, you are confused. A kernel is not an OS. An OS consists of several parts. A car is not an engine; a car consists of several parts.
But if you really believe that a kernel is an OS, then I understand why you think RMS is falsely trying to take credit for the Linux kernel. Which he doesn't.
RMS believes an OS is a kernel plus much, much more. RMS thinks that GNU should get credit for the “much, much more” part. RMS doesn't want credit for the kernel part.
If you really believe that a kernel is an OS, then you are wrong.
If your premise that the kernel is the entire OS were correct, then you would be right. But not many support that view.
Mmmm… I think you're confused. OpenBSD (and all the BSDs) is a complete OS/system, that is, a kernel plus a userland. Linux is a kernel. Just that.
Have a nice day 😉
TooManySecrets
Right, and there is not much else you can do with it at an acceptable speed. I guess it still does not have a unified buffer cache, does it? Or fine-grained locking for SMP systems?
You could just as well download the UNIX V7 source code and tell us how slim it is. But the world has changed, and the 8-core, 64 GB RAM world (which we will be running at home before long) is just more complex. And Linux (or NetBSD/FreeBSD) runs far better on my quad-core home machine than OpenBSD ever will.
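For what it's worth, “fine-grained locking” here just means trading one big lock for many small ones, so that the cores of an SMP machine stop queueing up behind each other. A rough user-space sketch of the idea (illustrative only, not code from any of these kernels):

    /* Coarse vs. fine-grained locking on a hashed counter table. */
    #include <pthread.h>

    #define NBUCKETS 64

    struct bucket {
        pthread_mutex_t lock;          /* fine-grained: one lock per bucket */
        long counter;
    };

    static pthread_mutex_t giant = PTHREAD_MUTEX_INITIALIZER;  /* coarse */
    static struct bucket table[NBUCKETS];

    void table_init(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&table[i].lock, NULL);
    }

    void update_coarse(int key)
    {
        pthread_mutex_lock(&giant);    /* every CPU serializes here */
        table[key % NBUCKETS].counter++;
        pthread_mutex_unlock(&giant);
    }

    void update_fine(int key)
    {
        struct bucket *b = &table[key % NBUCKETS];
        pthread_mutex_lock(&b->lock);  /* only same-bucket updates contend */
        b->counter++;
        pthread_mutex_unlock(&b->lock);
    }

A kernel that only has the coarse variant, one giant lock, scales poorly past a couple of cores, which is exactly the SMP complaint being made about OpenBSD here.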
And OpenBSD has far fewer security holes than Linux.
You can bash OpenBSD for its SMP performance, but they can do the same to Linux over security. OpenBSD is a pioneer, with a lot of security features that were only implemented in other OSes a few years later.
Security is all about what applications you run, and how well the applications can be sandboxed.
Does OpenBSD have anything even close to SELinux, or is it just about shipping old, well-bugfixed versions of cherry-picked “secure” applications?
Security? You mean the absence of a mandatory access control framework? Or even of a standardized kernel authorization framework like the ones Linux and NetBSD (kauth) have had for years? I am a whole lot happier running my web server in a sandboxed SELinux or AppArmor environment than on OpenBSD.
You know, security is not only about disabling every service in the default install and doing a proper audit. Those things help, but other UNIXes have far more preventive security measures. And companies like Red Hat have been pushing the envelope a lot.
Yes, that's why OpenBSD is used at Defcon for the network infrastructure: because Linux and NetBSD are so much more secure…
You can add all the security features you want, like MAC, but if your OS is full of security holes it won't change anything. Anyway, features like MAC are usually so hard to put in place that they are never used.
OpenBSD implements things that make the OS less vulnerable to attack by design.
You know, security is not only about adding some crazy new security feature that nobody uses. Those things *can* help, but OpenBSD has far more preventive security measures, like auditing, W^X, a modified malloc, randomization in the network stack, ProPolice, etc. And projects like OpenBSD and its security gurus have been pushing the envelope a lot.
Some reading for you http://kerneltrap.org/OpenBSD/SELinux_vs_OpenBSDs_Default_Security
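On the W^X point, the restriction means a page may be writable or executable, but not both at once. A small test program (illustrative; how strictly the request is refused depends on the platform and its configuration) shows the kind of mapping a W^X-enforcing system can deny:

    /* Ask for a writable-and-executable mapping; W^X-enforcing systems
     * such as OpenBSD can refuse the request outright. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        void *p = mmap(NULL, 4096,
                       PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap W|X");        /* expected where W^X is enforced */
            return 1;
        }
        printf("got a writable+executable page at %p\n", p);
        munmap(p, 4096);
        return 0;
    }

That default makes the classic “write shellcode into a buffer, then jump to it” pattern much harder, regardless of which applications you happen to run.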
The gurus, too, Brian Kernighan among them, have said the same thing about the Linux code: the quality is so-so. So do the Linux kernel developers themselves, Andrew Morton for instance. When will people wake up and realize that Linux code is so-so? It is not the best in the world, as they think.
There has always been a discrepancy in mindset. Linus has always been pragmatic, and Linux has often been shipped when it is ‘good enough’. And apparently it was good enough, since it runs fine on both low-end and high-end systems.
There may be a day when it is not good enough, e.g. when organizations start to demand proven-good kernel subsets for security. But those systems, if they ever come, will not look like C-based UNIX kernels either; they will probably look closer to something like Singularity/Midori.
Elephant – a mouse designed by a committee. See Linux.
Elephant crossed with a Rhino – elephino, but it runs NetBSD!
The problem with that analysis is this: when you increase scalability, there is always going to be a performance hit somewhere along the line, either perceived (‘where is teh snappier’) or real (via fine-grained benchmarking). Linux has increased massively in scalability over the last decade, but that comes at a price.
I do think the term ‘bloat’ is widely abused, given that this usage ignores what the root of the word actually implies. Bloat implies a disproportionate increase in size, in disk or memory usage, compared to the features it brings to the given piece of software. Windows is bloated because of unnecessary backwards compatibility, not because it is ‘feature rich’ (a term I loathe when used to justify bloat). Linux isn't bloated, because if you went through the kernel with a fine-tooth comb you would be very hard pressed to find something sitting in there that can't be justified.
In terms of the larger picture relating to distributions: if Linux keeps developing in the direction it is, it will reach a plateau where the differences between Linux and Windows are so minute in terms of hardware support and software availability (in terms of quality software) that it'll be a viable alternative. My parents both run Arch Linux on their machines; my mum has an unsupported printer, but for NZ$50 I bought TurboPrint and it now works. If one takes into account NZ$50 to support a piece of hardware, and that the operating system is free, that is cheaper than Windows.
Believe me, if things keep going the way they are and Apple keeps staying static in its development, 2011 might be the year of the desktop for me.
Yeah, but people said the same thing 10 years ago.
Everyone in 1999 thought that by now Linux would have at least 20% of the desktop market. After all, it was free and always getting better, right?
Did he claim such a thing? One person is not 20%.
10 years ago there was a HUGE gap between Windows and Linux; each year that gap has become smaller and smaller. Windows 7 has been released, and Windows Vista was released 2-3 years ago. If you read what I wrote, I made a set of assumptions which placed conditions on success or failure. Those conditions included that GNOME 3.0/2.30, based on the roadmap, will deliver a comparable, if not superior, experience to Windows 7 when installed on a machine where all the hardware is compatible with Linux.