Max Bruning has posted a brief comparison of how these three operating systems approach a number of basic kernel tasks. He takes a brief look at scheduling and schedulers, memory management and paging, and file systems. Max is an instructor who spends his time teaching Solaris internals, device drivers, and kernel crash dump analysis and debugging.
Linux divides machine-dependent layers from machine-independent layers at a much higher level in the software. On Solaris and FreeBSD, much of the code dealing with, for instance, page fault handling is machine-independent. On Linux, the code to handle page faults is pretty much machine-dependent from the beginning of the fault handling. A consequence of this is that Linux can handle much of the paging code more quickly because there is less data abstraction (layering) in the code. However, the cost is that a change in the underlying hardware or model requires more changes to the code. Solaris and FreeBSD isolate such changes to the HAT and pmap layers respectively.
Just one example of having to choose between better portability or better performance. I’d bet that your typical user isn’t generally going to notice the difference in performance between the three, and IMO portability is always a good thing.
I found that particularly interesting considering Linux runs on just about every architecture imaginable, but Solaris only supports x86, amd64, and sparc variants, with ppc and xen in progress.
“I found that particularly interesting considering Linux runs on just about every architecture imaginable, but Solaris only supports x86, amd64, and sparc variants, with ppc and xen in progress.”
You are correct that Linux has been more widely ported; however, having a larger number of ports does not mean it’s more portable. The less machine-independent code one has in a codebase, the more machine-*dependent* code there is that needs to be rewritten in order for a port to be done. The article gave one (of many) possible examples of a deficiency in Linux WRT the amount of machine-independent code.
More MI code == less work to port.
More MD code == more work to port.
You are correct that Linux has been more widely ported, however, having a larger number of ports does not mean it’s more portable.
That’s a fantastic piece of double dutch. What’s the point of more portable code if you’re simply not using it? As it stands Linux is the most portable based on actual ports, and that’s the be-all and end-all.
“That’s a fantastic piece of double dutch. What’s the point of more portable code if you’re simply not using it? As it stands Linux is the most portable based on actual ports, and that’s the be-all and end-all.”
Way to miss the obvious. See my second post in this thread and do try to understand. You might be aided in that endeavor by not thinking in such a simplistic fashion.
Well, even if Linux is ported to more platforms, the discussion was about the amount of coding needed to port it to one, which is higher than for the other two systems.
However, Linux has more manpower and can be less portable but more ported at the same time!
“Well, even if Linux is ported to more platforms, the discussion was about the amount of coding needed to port it to one, which is higher than for the other two systems.
However, Linux has more manpower and can be less portable but more ported at the same time! ”
Absolutely correct, as that appears to be the situation. Thank God there are people here that can understand simple concepts =D
Way to miss the obvious. See my second post in this thread and do try to understand.
Way to go to paint over the bleeding obvious.
You might be aided in that endeavor by not thinking in such a simplistic fashion.
The simplistic view is that if Solaris et al are supposedly more portable then their efforts haven’t ended up with very many ports. Ergo, the more portable code is completely useless to everyone unless it actually produces more ports to different systems.
I find it really funny how many people simply cannot think in a logical fashion.
The simplistic view is that if Solaris et al are supposedly more portable then their efforts haven’t ended up with very many ports. Ergo, the more portable code is completely useless to everyone unless it actually produces more ports to different systems.
That is a very naive interpretation. Just because a codebase is more portable doesn’t mean that there should have been more ports. You are missing a key factor here: motive. There simply hasn’t been a need for anyone to port the code to anything else.
Sun had a need to port Solaris to x64 and they did, very quickly mind you. And unlike Linux, Solaris wasn’t open-sourced, so third parties couldn’t actually port it to anything else. Only time will tell if OpenSolaris can garner enough developer community support to encourage multiple ports.
Let’s recap. The portability of code does not affect the number of ports there are; motivation does. Portability of code only affects the time a port takes. Any code can be ported to any platform; how long it takes is another matter. The only conclusion you can make with Linux as an example is that the motivation to port Linux was quite high.
That is a very naive interpretation.
Well, it’s the simple straightforward one.
Just because a codebase is more portable doesn’t mean that there should have been more ports.
Stone the crows. Really? What’s the point of that portable code then?
You are missing a key factor here: motive. There simply hasn’t been a need for anyone to port the code to anything else.
The portable code is useless then. They might as well come up with code that is slightly less portable but will run faster and better on SPARC or x86. Certainly for Solaris, they are the only two real architectures.
Sun had a need to port Solaris to x64 and they did, very quickly mind you.
In terms of porting and hardware support, Solaris is still a very long way behind Linux. Porting something that mostly works to a new hardware architecture, given that they’d done some work on it years before, isn’t that hard. I doubt whether their portable code helped them that much.
The portability of code does not affect the number of ports there are; motivation does.
My point exactly. Which means there is little advantage, if any whatsoever, in Solaris having more portable code, since there is always going to be little motivation. The two primary architectures for Solaris will always be x86 and SPARC.
The only conclusion you can make with Linux as an example is that the motivation to port Linux was quite high.
The bleeding obvious again.
The portable code is useless then.
Useless?! The simple fact that they can support a great number of platforms with far fewer resources has everything to do with portability. Without that advantage, I imagine it’d be much more difficult to stay competitive.
They might as well come up with code that is slightly less portable but will run faster and better on SPARC or x86.
In other words, build Linux all over again. What’s the point?
I’ve been using Linux rather than a BSD or Solaris, and even I have to say your arguments are complete hogwash.
The simplistic view is that if Solaris et al are supposedly more portable then their efforts haven’t ended up with very many ports. Ergo, the more portable code is completely useless to everyone unless it actually produces more ports to different systems.
Maintaining ports takes resources. Even if we put them together, Solaris and the BSDs probably don’t have half the resources (human, material and financial) that Linux has.
So while you are right, I would not dismiss the efforts of the Linux alternatives so quickly. If NetBSD can manage to run on 85% of the platforms that Linux supports while supporting a complete userland, I wonder what they could do with those resources. While I don’t use it (yet), I respect them.
I don’t know which is funnier, that you spend most of your posts pretending that you have a deeper understanding of commercial development only to make an argument demonstrating that your understanding of ‘portability’ is lacking, or that you demonstrated without any doubt your own hypocrisy with respect to logical reasoning.
I don’t know which is funnier, that you spend most of your posts pretending that you have a deeper understanding of commercial development only to make an argument demonstrating that your understanding of ‘portability’ is lacking, or that you demonstrated without any doubt your own hypocrisy with respect to logical reasoning.
Thanks for that wonderful piece of insightfulness. When you’re ready to respond to an actual comment with an actual answer, let us all know.
Do I really need to repeat that your valuation of portability is comically inept? It seems pretty self-evident. Your own ability to reason logically is truly a sight to behold.
Sorry japail, Linux is more portable than either NetBSD or Solaris for the simple fact that it runs on more architectures, and it runs on architectures that the others simply cannot handle at all (eg. ones without memory management units).
If you want to try to argue it about size of code, let’s look at Alpha, which is an architecture with relatively few obscure platforms, and will be relatively stable:
Linux 2.6 HEAD: 1,642,094 bytes of machine-dependent code
NetBSD HEAD: 2,316,107 bytes of machine-dependent code
Bzzt.
The last thing you may resort to is some rubbish about ‘cleanliness’ or something. Unless you can back up your claims with a proper analysis and comparison of porting efforts required, you’ll get a BZZT here too.
Do I really need to repeat that your valuation of portability is comically inept? It seems pretty self-evident. Your own ability to reason logically is truly a sight to behold.
Well, in theory Solaris in many ways should be more portable. Unfortunately, since they will realistically only ever support two hardware architectures fully, that portability is useless, so it doesn’t work out in practice. That’s called making a sensible design decision based on what you’re actually going to be doing.
However, since Linux has been ported effortlessly to many, many platforms, including those used in embedded systems, the point about portability is a moot one. All you have to do is look at the platforms Linux and Solaris have been ported to and do a comparison to see whether Solaris’s portable code is of any practical real-world use. You’ll see that it isn’t.
Like I said, when you have an actual answer to an actual comment here give us a call.
NetBSD uses the same MMU abstraction as Solaris (the PMAP/HAT layer), and has been ported to about as many architectures as Linux. Yet, the developer base of NetBSD is probably a minute fraction of the size of the developer base of Linux.
Portability is a good thing, all else being equal
All else is not equal. First off, paging is slow, so the performance improvements probably are noticeable to a desktop user.
Second, Linux doesn’t just run on desktops. It runs on mainframes, where a few slowdowns here and there pile up into a major loss, and on highly resource-constrained machines where the extra layers not only introduce a CPU-bound performance penalty, but the additional memory required to maintain the various levels of indirection is a major burden.
That said, I believe that NetBSD’s model is the best – NetBSD runs in a surprising number of places because it pushes the machine dependent parts down to the lowest part of the stack, yet it still runs in embedded environments.
You could say the opposite thing just as well – Your typical user will be using a platform which linux supports, and is not going to notice the difference in portability. Speed he could always use.
Sure, getting linux to another platform is a bit of work, but there are so many more people working on it, which is why linux runs on so many more platforms than the other two.
Linus Torvalds was always one for practicality over elegance, after all.
With all the security talk, I didn’t see anything about SELinux. So does anyone include this when talking about Linux security?
There was no mention of security in the article. Did you read it?
The “Solaris vs. Linux” link on that page did.
As the person who submitted the link, I found it refreshing that Max was willing to have a simply technical look at how the three operating systems performed these three types of operations; looking at the similarities and differences in the approach.
I hope that the discussion can continue in the same vein and not descend into any kind of zealotry. The discussion so far has been pretty good and reminiscent of the osnews of old.
Alan.
—
http://blogs.sun.com/tpenta
Too bad you had to be a zealot … and had to ruin it.
Yep, I can only agree with you – it is nice, technical, for geeks who dig those things. It is nice to see such articles and I would like to see more of them – instead of the Microsoft/Linux/OSX/BSD/whatever/real-life zealotry that has plagued OSNews for the last few years.
Maybe we can drop that ‘in the end, there will be the one’ meme, get on with our lives, and work more on cooperation, making IT really matter for the common crowd.
It should be surprising for many that Linux actually has been ported to and runs on more platforms than any other OS, including NetBSD
http://www.kroah.com/log/2004/09/29/
It should be surprising for many that Linux actually has been ported to and runs on more platforms than any other OS, including NetBSD
But not from one source tree, including equal userland on each platform. On NetBSD you can cross-compile to any supported platform with only one command, and you will get a fully functional kernel and userland without additional patches, etc.
>> It should be surprising for many that Linux actually has been ported to and runs on more platforms than any other OS, including NetBSD
>
> But not from one source tree, including equal userland on each platform. On NetBSD you can cross-compile to any supported platform with only one command, and you will get a fully functional kernel and userland without additional patches, etc.
Yes, from a single source tree. Last time I counted, Linux supports nearly 2 dozen CPU architectures, and NetBSD about 16 or 17. Exact numbers depend on exactly how you define a CPU architecture (eg. bit-ness, endian-ness, mmu capabilities).
However, Linux is definitely more widely ported than the NetBSD kernel.
Yes, from a single source tree.
But it is useless to have nothing but a kernel. You will have a much harder time getting a uniform source tree for the standard userland (UNIX-like) programs, and having that compile on all platforms resulting in a system that is equal on all platforms.
NetBSD has all that, and it can be done with the build.sh command, without having to fiddle with cross-platform compilers (the build infrastructure takes care of that).
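For reference, the one-command cross build mentioned here looks like this from the top of a NetBSD source tree (the target machine below is just an example):

```shell
# Build a cross toolchain plus a full kernel and userland release for
# another architecture; -U means an unprivileged (non-root) build,
# -m selects the target machine (sparc64 here is just an example).
./build.sh -U -m sparc64 release
```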
I am not saying that it is not possible with GNU/Linux. But no one that I know of has created such a build infrastructure. I guess no one is interested in doing it either, since most obscure older platforms are barely used (pc532, cesfic et al).
Last time I counted, Linux supports nearly 2 dozen CPU architectures, and NetBSD about 16 or 17. Exact numbers depend on exactly how you define a CPU architecture (eg. bit-ness, endian-ness, mmu capabilities).
I won’t argue with that. I guess that it is a difference of using ‘portability’ in a narrow or wide sense .
> But it is useless to have nothing but a kernel.
That’s not true at all. It is a great building block for everything from mainframes, to firmware (yes, Linux does run some service controllers and firmware in servers), to any number of embedded applications and appliances.
> You will have a much harder time getting a uniform source tree for the standard userland (UNIX-like) programs, and having that compile on all platforms resulting in a system that is equal on all platforms.
No harder than changing the source tree of your NetBSD userland, or having a program portable between versions of BSD, or other versions of UNIX, etc.
> NetBSD has all that
So does an equivalent Linux system, e.g. Debian.
> and it can be done with the build.sh command, without having to fiddle with cross-platform compilers (the build infrastructure takes care of that).
The build infrastructure rebuilds the toolchain. This is really not a lot different from installing the correct cross compiler and build environment on a system like Debian.
> I won’t argue with that. I guess that it is a difference of using ‘portability’ in a narrow or wide sense.
A properly written application has basically zero portability problems between architectures when compared with a kernel. I don’t consider application level ‘portability’ to even be worth mentioning, seeing as it is a solved problem since the 70s.
Actually the one userland thing that is needed is a compiler. NetBSD and Linux both use gcc to meet their portability requirements.
Which is a pointless observation, given that Linux probably has 50 times as many users as NetBSD! The question is, if both systems had the same amount of developer resources, which would be easier to port?
As a programmer, I like the NetBSD code better. It’s cleaner, has better separation of components, and its abstractions are more well-defined.
IIRC, paging is almost entirely dominated by I/O performance. Spending time reducing computing cycles isn’t going to do a lot unless the implementation/design is abysmal in the first place.
A really good read. Nice to see OSNews finally reaching some of the standard from the good old days.
Also, to write an exploit for SPARC you need to shell out at least $500 to get an UltraSPARC box.
Perhaps the author didn’t know you can get one from eBay for much less than $500.
Solaris is stable, and in its default configuration not any more secure than Linux (look at all the services that are enabled by default on Solaris). The author didn’t mention DTrace as a tool for shellcode (exploit) writers. Solaris is a good and stable OS without a doubt, but there has been a lot more software written for Linux. The Solaris userland is still very prehistoric compared to Linux and the *BSDs.
You could also root a Solaris box with a pre-existing exploit and set up shop that way. Or sign up for an account with one of the various Solaris shell account services. A lot of ISPs and universities also use Solaris and definitely have an UltraSPARC or two lying around. Finding a SPARC environment to work in isn’t a problem these days.
Actually, in a normally running system, page faults can be very important for performance and often have nothing to do with IO.
The thing which is interesting to me is that he says page fault handling in Linux is machine-dependent.
However in Linux, the low level code simply has to find the address of the fault, and whether or not it is a write access, then pass that info to the generic fault handler (handle_mm_fault).
The architecture must then implement the set_pte routine in order to set up a given physical translation (paddr, protections) at a given virtual address.
How much higher can you make the abstraction than that?!?
But I do agree that Linux’s page fault performance outstrips Solaris and FreeBSD.
Actually, in a normally running system, page faults can be very important for performance and often have nothing to do with IO.
There are multiple types of faults that affect performance. On processors that implement TLBs, a TLB miss needs to be handled fairly fast. Processors like Pentiums walk the page table in hardware to resolve the miss, while processors like some UltraSPARCs resolve misses in software using temporary software caches of translations.
However, at a higher layer, those misses that can’t be resolved are translated into a page-level miss, and likewise a protection violation that came from the CPU’s MMU hardware. The handling of the above in Solaris is platform-independent. I haven’t done a thorough analysis of all the page fault handling code in Linux and Solaris, but my guess would be, from the author’s conclusions, that the machine-dependent code in Linux also exists in the layer Solaris has abstracted into an independent layer.
How much higher can you make the abstraction than that?!?
The question should be how much lower can you make that abstraction. That is what the author was implying: Solaris and FreeBSD lower the abstraction and have more machine-independent code.
But I do agree that Linux’s page fault performance outstrips Solaris and FreeBSD
Could you post the benchmark results that led you to that statement? I would love to see how much cost Solaris and FreeBSD incur due to their design decision.
However, at a higher layer, those misses that can’t be resolved are translated into a page-level miss, and likewise a protection violation that came from the CPU’s MMU hardware. The handling of the above in Solaris is platform-independent.
As it is in Linux.
The question should be how much lower can you make that abstraction. That is what the author was implying: Solaris and FreeBSD lower the abstraction and have more machine-independent code.
Err yes, that’s the question. I wonder what the answer is?
Could you post the benchmark results that led you to that statement? I would love to see how much cost Solaris and FreeBSD incur due to their design decision.
Linux has basically always been faster in every benchmark I have ever seen.
http://www.ussg.iu.edu/hypermail/linux/kernel/9904.2/0127.html
Is an old one, showing Linux is 7-8 times faster than Solaris at ‘protection faults’ on the same UltraSPARC hardware.
http://seclists.org/lists/linux-kernel/2003/Apr/1701.html
Fast forward, and Solaris 8 has just about caught up to Linux’s 4-year-old numbers on a system running more than triple the MHz and forward a processor generation or two.
I don’t have up to the minute results on the same hardware, however I have no reason to believe the situation has changed. Do you?
As it is in Linux.
Care to go in more detail.
I don’t have up to the minute results on the same hardware, however I have no reason to believe the situation has changed. Do you?
You actually have no real numbers then, nor do you have any information on Solaris 10. Solaris is now on version 10, and every benchmark I have seen to date has Solaris as fast if not faster than Linux on the same box.
There is so much change in Solaris 10 when compared to Solaris 8 that your statement just means you haven’t kept up to date, and so your claim is unfounded at the very least.
I didn’t make claims that I couldn’t back up. You shouldn’t either.
Solaris is now on version 10, and every benchmark I have seen to date has Solaris as fast if not faster than Linux on the same box.
Could you post the benchmark results?
Maybe he is referring to this:
http://www.anandtech.com/systems/showdoc.aspx?i=2530&p=5
or some of the stuff on the Sun’s website.
Yeah, my brain wasn’t working last night. Page FAULTS, yeah, those are IO bound.
God dammit, they are NOT.
Why isn’t NetBSD included in this comparison? I’m interested to see how the various *BSDs differ.
Because you didn’t write it.
Could have fooled me…
Also, he says that memory zones are very much an artifact of x86 architecture. x86 ISA DMA may have been one of the initial drivers of the zoned memory allocator, however it is used for “high memory” on architectures like x86 PAE and PPC32, and also to provide NUMA awareness.
He says FreeBSD performs NUMA aware memory allocation, however I’m not sure this is true.
Does Solaris or FreeBSD have anything like the recent experimental Linux patches, e.g. adaptive readahead (http://lwn.net/Articles/152279/) or the patch that prevents a huge swap-thrashing task from stopping others from doing disk I/O? And I don’t understand why XFS and Reiser4, the fastest and most cutting-edge filesystems, weren’t mentioned in the comparison.
Well, Reiser4 is still not part of the official Linux kernel. Plus, it’s designed to work with multiple OSes, not just Linux.
http://slashdot.org/~Moulinneuf/journal/94302
I have tried different Linux and BSD flavours, pure Linux or BSD, no X. I always have the same problem: my keyboard (non-US) is not working.
I do not know if this is a kernel issue or a something like a “driver issue”.
On Linux, loadkeys is only doing its job partly. Some keys are remapped “one to one”. Dead keys are not working (hence the driver/kernel idea). I spent hours googling, but never succeeded in finding a complete and satisfying answer.
I can only tell how FreeBSD works for me. I use a non-US keyboard (Swedish) and it works excellently. The default setup is US though, so you have to change that.
Here is how you set it up. http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/lang-setu… Again the excellent Handbook provides the answer.
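For completeness, the Handbook recipe boils down to one rc.conf line (the keymap name below is an assumption based on that era's Swedish ISO map; check /usr/share/syscons/keymaps on your system for the exact name):

```shell
# /etc/rc.conf -- console keymap for syscons (FreeBSD 4.x/5.x era)
keymap="swedish.iso"
```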
A lot of the discussion in the article keys on the hacker culture that spawned Linux vs. the BSD-derived designs of Solaris and FreeBSD. While Solaris and FreeBSD tend to have a calculated, layered approach with nice naming conventions for structures, Linux subsystems jump up and down through conceptual layers. Solaris and FreeBSD seem driven by design ideals, while Linux development seems driven by practicality and performance. In Solaris and FreeBSD, it starts with a design and improves with extension. In Linux it starts with extension and improves through hacks.
However, at the end of the day, it comes down to what works best for you. The Linux kernel I’m running right now is very responsive and stable, the drivers support all of my hardware, and the packaging system provided by my distributor makes it easy to install a tremendous amount of software. The technical merits of the three kernels are debatable (and the differences aren’t very significant), but I think there is a particular Linux distribution that offers a better overall experience for most users (home, office, IT, etc.) than any of the other commodity UNIX systems.
While Solaris and FreeBSD tend to have a calculated, layered approach with nice naming conventions for structures, Linux subsystems jump up and down through conceptual layers. Solaris and FreeBSD seem driven by design ideals, while Linux development seems driven by practicality and performance. In Solaris and FreeBSD, it starts with a design and improves with extension. In Linux it starts with extension and improves through hacks.
I’m not sure what you’re saying here, if you are asserting these points or you’re saying these myths are a product of a bygone age.
But seriously, Linux is very well layered and designed, and features do not ‘start with extension and improve through hacks’.
Linux really is far more complex a system than FreeBSD for example, in terms of synchronization (very big issue), scalability, portability and yet it hangs together and runs on systems ranging from 512 CPUs and 4TB RAM, to IBM mainframes and virtual machines, to 1MB embedded chips without memory management units.
This would simply not be possible (nor would the sheer number and diversity of developers) if Linux was just hacked together.
Contrast this with FreeBSD, which was about 3 years late with their ‘SMP scalable’ big-ticket version 5 release, which utterly failed its primary objective (they have it barely running on a 12-CPU system, and benchmarks from there show that the kernel itself scales to between 1 and 4 CPUs, depending on the workload).
Not to try to rubbish the FreeBSD project, but I’m simply trying to squash this myth.
What I would like to see is a comparison with the Plan 9 and Inferno kernels. That would be much more interesting, as they are designed from scratch rather than based on the 30-year-old Unix design.
plan 9 information can be found here
http://cm.bell-labs.com/sys/doc/9.html
inferno information can be found here
http://www.vitanuova.com/inferno/index.html
I’ve not used either myself so I cannot comment on their differences. The first Fortran compiler was developed for the IBM 704 in 1954–57, and Fortran is still used today.
Just because something is a old design doesn’t mean anything is wrong with it.
Why reinvent the wheel?
> Why reinvent the wheel?
Not reinventing but improving the wheel
As Plan 9 was designed by the same guys at the Bell Labs who designed Unix, they probably had a good reason to do so.
true, from the looks of it, the basic idea of “everything is a file” that made unix strong has been extended under plan 9 from a near truth into a full truth. everything really is (at least in representation) a file under plan 9.
thing is though that this ability looks like it will be worked into linux in the near future too, with pluggable user-space file system support.
from what i read, the rest of plan 9’s special features were not so much in kernel space as in user space…
still, today people that are not that much in the know think of the os as everything from the desktop down, and when asking for change most often think about changes to the gui and other user interfaces.
thats one of the true features of a linux based system: there is no single “true” linux system, as its a stack of tools on top of the linux kernel. this stack can and will be changed at will, but its still the linux kernel at heart.
the bsds on the other hand are presented as a single vertical package from one end to the other. this makes it look like its harder to change anything in the stack as its not supported by the bsd of choice.
is there a bsd from scratch?
> the bsds on the other hand are presented as a single vertical package from one end to the other. this makes it look like its harder to change anything in the stack as its not supported by the bsd of choice.
Actually I think this is a strong point of BSD, *if done right* (and I do not know if they did it right). Why would you want to change anything in the stack unless it doesn’t work? And if it doesn’t work, one should rather fix it than clone it.
Can you give a good example where the “single vertical package” idea fails? I’m assuming bug-freeness (otherwise just fix them) and a user who is not interested in tinkering with the system just for the sake of it.
– Morin
im not saying its a strong point or a weak point, but linux is more likely to see new stuff tried while bsd will keep seeing orthodoxy repeated over and over.
at least there is a theoretical chance of that. lately there have been some new bsd based creations out there, but linux based creations outnumber them at least 10 to 1.
basically its different os’s for different tasks. nothing bad about that at all.
Fast forward, and Solaris 8 has just about caught up to Linux’s 4-year-old numbers on a system running more than triple the MHz and forward a processor generation or two.
now you’re just trying to start arguments here
> now you’re just trying to start arguments here
No, the parent poster asked for my basis for stating that I thought Linux page faults were faster than Solaris, and I provided evidence up to 2003 (unless you have some good reason to think the 900 MHz USIII tested should have slower fault performance than the 270 MHz USI).
Nobody ever seems surprised by these numbers or to care much, as such it is not really something I thought of as a sore spot for Solaris people.
If you would like to provide some info to correct me then I’m all ears.
No, the parent poster asked for my basis for stating that I thought Linux page faults were faster than Solaris, and I provided evidence up to 2003 (unless you have some good reason to think the 900 MHz USIII tested should have slower fault performance than the 270 MHz USI).
How did you get that from the data you posted? There are so many variables in those two benchmarks that you can’t draw a reasonable conclusion from them.
Nobody ever seems surprised by these numbers or to care much, as such it is not really something I thought of as a sore spot for Solaris people.
Funny, I count at least two people who have.
The only benchmark that would be interesting in the year 2005 would be Solaris 10 vs FreeBSD 6 vs Linux 2.6.
How did you get that from the data you posted?
By reading it.
There are so many variables in those two benchmarks that you can't draw a reasonable conclusion from them.
Err, no.
1999, Ultra 5 270 MHz, lmbench prot fault
Solaris: 21 us
Linux: 4 us
2003, Sun V480, 900 MHz USIII, 8 Mb
Solaris 8: 3.7 us
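Numbers this old are hard to verify, but the flavor of the measurement is easy to reproduce. Here is a rough first-touch fault timing, sketched in Python rather than lmbench's C, so the per-fault figure includes interpreter overhead and is only an upper bound; `NPAGES` and the mapping size are arbitrary choices for illustration:

```python
import mmap
import resource
import time

PAGE = resource.getpagesize()
NPAGES = 4096  # 16 MB on a 4 KB-page system

# A private anonymous mapping is demand-paged: no physical page is
# allocated until it is first written to.
buf = mmap.mmap(-1, NPAGES * PAGE, flags=mmap.MAP_PRIVATE)

start = time.perf_counter()
for i in range(NPAGES):
    buf[i * PAGE] = 1  # first touch of each page takes a minor fault
elapsed = time.perf_counter() - start

us_per_touch = elapsed / NPAGES * 1e6
print(f"~{us_per_touch:.2f} us per first-touch write (incl. interpreter overhead)")
```

This is not lmbench's prot-fault latency (which isolates the trap and handler path much more tightly), but it gives the same order-of-magnitude feel for fault handling cost on whatever kernel you run it on.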
Funny, I count at least two people who have.
From people who have no numbers to refute mine.
Solaris 10 is still very much the Solaris codebase, and while it has gotten faster, I think it is a safe bet to say that Linux's fault handler performance is better than Solaris's. You asked me why I said that; I gave you numbers.
Show me your numbers – why are you so surprised about this?
Err, no.
1999, Ultra 5 270 MHz, lmbench prot fault
Solaris: 21 us
Linux: 4 us
2003, Sun V480, 900 MHz USIII, 8 Mb
Solaris 8: 3.7 us
Think. Don't just read; think. You are comparing a particular Linux version, 2.2.x to be precise, and projecting performance numbers. You are not interpreting data. Past performance of a codebase doesn't apply to future versions. Just because 2.2.x performed better doesn't mean 2.4 and 2.6 do, especially when the whole VM layer was rewritten midway through 2.4.
You haven't really produced any valid data to support your general claim that Linux (any version of it) has better page fault handling performance than Solaris.
Solaris 10 is still very much the Solaris codebase, and while it has gotten faster, I think it is a safe bet to say that Linux's fault handler performance is better than Solaris's. You asked me why I said that; I gave you numbers.
That may well be. But Linux has changed many, many times since then and has become more complex, if anything. Without current numbers, all you have said is meaningless.
1999, Ultra 5 270 MHz, lmbench prot fault
Solaris: 21 us
Linux: 4 us
2003, Sun V480, 900 MHz USIII, 8 Mb
Solaris 8: 3.7 us
Oh, and one more thing. Protection fault performance is about as useless a benchmark as it gets. Most people don't care how quickly their apps get SIGSEGV or SIGBUS, pretty much the two most common outcomes if a userland program does take a protection fault.
Err yeah right, idiot.
A protection fault measures the trap handler and the OS-dependent translation lookup mechanism. It is probably the best measure of page fault performance there is.
And to your comment that nobody cares: actually, protection faults are among the most common types of page fault. Any COW activity is driven by protection faults, as are many demand memory allocations on Linux systems.
So, how about you stop skirting around the issue and come up with your numbers, huh?
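The claim that COW activity is driven by protection faults is easy to see from userspace. A minimal sketch (Python, Unix-only since it uses `fork`; the minor-fault counter `ru_minflt` from `getrusage` is used as a proxy, so a little background noise is expected on top of the COW faults themselves):

```python
import mmap
import os
import resource
import struct

PAGE = resource.getpagesize()
NPAGES = 128

# Private anonymous mapping; fault every page in before forking.
buf = mmap.mmap(-1, NPAGES * PAGE, flags=mmap.MAP_PRIVATE)
for i in range(NPAGES):
    buf[i * PAGE] = 1

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: after fork these pages are shared copy-on-write, so the
    # first write to each one takes a protection fault and a page copy.
    before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
    for i in range(NPAGES):
        buf[i * PAGE] = 2
    after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
    os.write(w, struct.pack("q", after - before))
    os._exit(0)

os.close(w)
cow_faults = struct.unpack("q", os.read(r, 8))[0]
os.waitpid(pid, 0)
print(f"child took {cow_faults} minor faults writing {NPAGES} COW pages")
```

The fault count reported by the child comes out at least `NPAGES`, one COW fault per page, plus whatever the interpreter itself faults in after the fork.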
A protection fault measures the trap handler and the OS-dependent translation lookup mechanism. It is probably the best measure of page fault performance there is.
I think translation misses and pages that have been swapped out are more important for most apps.
And to your comment that nobody cares: actually, protection faults are among the most common types of pagefault. Any COW activity is driven by protection faults, as are many demand memory allocations on Linux systems.
Right. COW pages only matter for fork performance and not much else.
Unless an app is doing lots of forks and then writing to COW pages, which wouldn't be the normal case for performance-critical apps, protection fault handling performance should be in the noise. Translation misses that slow down a running process are more meaningful. Look at the last column of the lmbench output.
So, how about you stop skirting around the issue and come up with your numbers, huh?
I am still waiting for real numbers from you. You made a ridiculous general claim and failed to back it up. The onus is on you to provide numbers, so where are they?
I think translation misses and pages that have been swapped out are more important for most apps.
Well, you think wrong, because the I/O cost of swapping far outweighs the page fault cost. A healthy running system should not be continually swapping.
Shows how much you know.
Right. COW pages only matter for fork performance and not much else.
Err, what do you mean "right"? You just tried to claim that protection faults are only triggered on an illegal memory access from the application. Don't try to say "right" as if you meant something else.
And no, they don't only matter for fork performance. As I said, they matter for some types of memory allocation in Linux, and also for virtual machines and garbage collectors.
But this is ridiculous; the basic page fault measurement is basically the same whether it is a protection fault or a not-present fault. Unless Solaris does something incredibly stupid and has wildly different fault handling paths than Linux, that is.
Unless an app is doing lots of forks and then writing to COW pages, which wouldn't be the normal case for performance-critical apps, protection fault handling performance should be in the noise. Translation misses that slow down a running process are more meaningful. Look at the last column of the lmbench output.
‘page fault’? Yeah Linux beats Solaris hands down on that too.
I am still waiting for real numbers from you. You made a ridiculous general claim and failed to back it up. The onus is on you to provide numbers, so where are they?
Err no, see the URLs I provided. They are real numbers, and the only claim I made was that Solaris running on a machine 3 times faster only just managed to catch up to Linux, which was shown in the numbers.
You’ve done nothing but spout a load of drivel and pretend to know something about operating systems that you don’t.
Well, you think wrong, because the I/O cost of swapping far outweighs the page fault cost. A healthy running system should not be continually swapping.
Shows how much you know.
Hmm... no, it actually shows how incredibly stupid you are. Most reasonable people would chase down big performance bottlenecks, not little ones. Therefore, I said those matter most.
Err, what do you mean "right"? You just tried to claim that protection faults are only triggered on an illegal memory access from the application. Don't try to say "right" as if you meant something else.
No... I meant to say "right" to imply the inconsequentiality of your absurd claim. You are too caught up with yourself to know it.
And no, they don't only matter for fork performance. As I said, they matter for some types of memory allocation in Linux, and also for virtual machines and garbage collectors.
Huh? Say what? What type of memory allocator and garbage collector relies on protection faults?
‘page fault’? Yeah Linux beats Solaris hands down on that too.
Data to support that? Your data and URLs are outdated and useless. Move on.
Err no, see the URLs I provided. They are real numbers and the only claim I made was that Solaris running on a machine 3 times faster only just managed to catch up to Linux, which was shown in the numbers.
No you dufus.. your data is worthless.
You’ve done nothing but spout a load of drivel and pretend to know something about operating systems that you don’t.
I happen to write kernel code for a living and get paid very well for it thank you.
hmm… no it actully shows how incredibly stupid you are. Most reasonable people would chase down big performance bottlenecks not little ones. Therefore, I said those matter most.
We're talking about fault performance, dimwit, remember?
Huh? Say what? What type of memory allocator and garbage collector relies on protection faults?
A zero-page-based, COW-on-demand memory allocator like you would find in Linux.
As for garbage collectors, try Google, my happy little idiot.
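On the zero-page point: Linux maps an untouched private anonymous page to a shared zero page on first read, and only allocates a real page, via a COW protection fault, on first write. A small sketch (assumes Linux-style demand-paging semantics; counts minor faults with `ru_minflt`, so exact counts carry some noise):

```python
import mmap
import resource

PAGE = resource.getpagesize()
NPAGES = 256

def minflt():
    """Current cumulative minor (soft) page fault count for this process."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

buf = mmap.mmap(-1, NPAGES * PAGE, flags=mmap.MAP_PRIVATE)

m0 = minflt()
for i in range(NPAGES):
    _ = buf[i * PAGE]   # first read: fault maps the shared zero page
m1 = minflt()
for i in range(NPAGES):
    buf[i * PAGE] = 1   # first write: protection fault, COW off the zero page
m2 = minflt()

print("read faults:", m1 - m0, "write (COW) faults:", m2 - m1)
```

Both loops take roughly one minor fault per page, which is exactly the "demand allocation driven by protection faults" behavior being argued about above.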
Data to support that? Your data and URLs are outdated and useless. Move on.
No, they're not. They show Solaris from 2 years ago just barely catching up with Linux from 6 years ago on a machine that was at least triple the speed.
I happen to write kernel code for a living and get paid very well for it thank you.
That’s very surprising because you don’t seem to know jack shit. You must be the reason why the Solaris kernel is so slow, eh? Ha ha ha!
Hi man, why don’t you use a registered name? Please be fair…
OK, I will register a username and start posting with it just as soon as anyone posts a single half credible reason why they defend Solaris’s page fault performance so strongly without any real numbers.
Otherwise there is no point in arguing with imbeciles who simply can't accept some points of view whether evidence exists or not (oops, what have I been doing for the last 10 posts?).
Anonymous (IP: 203.173.3.—):
OK, I will register a username and start posting with it just as soon as anyone posts a single half credible reason why they defend Solaris’s page fault performance so strongly without any real numbers.
Dude.. the guy you’re insulting is somebody who puts his name behind what he says.
Since not everybody here is a kernel hacker, IMHO this fact alone gives him quite a lot of credibility, especially compared to people who hide behind anonymity – i.e., what you're doing right now.
It's true that in the Watergate scandal the declarations of an anonymous person made a difference… but kernel issues are *not* the Watergate scandal. When talking about kernel issues, I really can't see any reason to remain anonymous if somebody is in good faith and knows what he's talking about.
I don’t give a rat’s arse. I put up *numbers*. No names, no waffle, plain numbers.
When that courtesy is returned to me, I will register a username.
Since not everybody here is a kernel hacker, IMHO this fact alone gives him quite a lot of credibility, especially compared to people who hide behind anonymity – i.e., what you're doing right now.
Err no, since the thread is about page fault performance, and I put up real numbers. Meanwhile, everyone else was changing the subject, making incorrect statements about how virtual memory managers work, and getting very worried about my anonymity.
In which case, they’re the ones with no credibility. Names, no names, doesn’t matter. We’re talking about numbers.
It's true that in the Watergate scandal the declarations of an anonymous person made a difference… but kernel issues are *not* the Watergate scandal. When talking about kernel issues, I really can't see any reason to remain anonymous if somebody is in good faith and knows what he's talking about.
I really can’t see a reason why this should matter at all if it is kernel issues being discussed and I know what I’m talking about.
I have an idea: stop changing the subject and spouting all this irrelevant waffle and concentrate on the point at hand.
“I don’t give a rat’s arse. I put up *numbers*. No names, no waffle, plain numbers.”
Meaningful numbers would come from a comparison of all the OSes involved (modern, up-to-date versions) running on the very same hardware. I don't believe I've seen such a comparison from you, or anyone else here, on this particular topic.
Overall a very good article.
Besides, if you want to know some of the reasons why Linux is better and faster you only need to read some of Larry McVoy’s comments:
http://www.ultralinux.org/faq.html#q_1_15
It is time for everyone to stop kidding themselves and to stop asking, rather stupidly, for benchmarks and results from umpteen different kinds of scenarios (which you could go on forever doing) just because they don’t like the figures that have been presented to them.
Solaris and other Unices are based on code and design that is thirty years old. As Larry says, one would hope we have learned something during that time, and Linux is the result. If Sun truly want a modern Unix system that performs well then all they need to do is dump Solaris and contribute to Linux. Cost-wise, it would also be a heck of a lot less expensive.
>Solaris and other Unices are based on code and design that is thirty years old. As Larry says, one
>would hope we have learned something during that time, and Linux is the result.
Bwahahahaha… Oh you poor silly Linux trolls make me laugh…
Bwahahahaha… Oh you poor silly Linux trolls make me laugh…
I know you mod down comments you can’t reply to. That….doesn’t make me laugh.
Besides, if you want to know some of the reasons why Linux is better and faster you only need to read some of Larry McVoy’s comments:
The comments are from 2003 (prior to the release of Solaris 10), so they may have been valid for older Solaris releases. He argues that Linux is “better” because they care about performance. Performance is clearly a priority for Solaris 10:
http://opensolaris.org/os/community/performance/
Their motto is rather interesting anyway
If another major system is faster than OpenSolaris, it is a bug.
There are many opinions about Solaris vs Linux vs BSD etc., but numbers are worth a lot more than opinions. I only wish there were more numbers comparing the 2.6 kernel vs S10 vs BSD. In the numbers I have seen published on the net, S10 and 2.6 are very close. In some benchmarks Solaris 10 is ahead, in others Linux, so I doubt that either one is inherently faster than the other.
The comments are from 2003 (prior to the release of Solaris 10), so they may have been valid for older Solaris releases.
Since the lineage of Solaris is still very much intact (i.e. it isn’t a completely new system) I’m afraid his comments still hold true. Solaris 10 is simply not that different. You only need to look at the people who’ve moved to Linux because of Solaris’ historical threading problems (Python users will know that well), particularly on uniprocessor machines, to see that.
Performance is clearly a priority for Solaris 10:
http://opensolaris.org/os/community/performance/
Their motto is rather interesting anyway
You’re going to need more than a link to a webpage. Larry McVoy’s comments about a cache here and there being lost still hold true. Additionally, the layers of inconsequential crap that supposedly make Solaris more portable, for which there is no demonstrable practical use, still hold true as well. Just goes to show what can happen when you delude yourself with theory.
I’m afraid all the above article was doing was rummaging around trying to find some excuses for Solaris.
particularly on uniprocessor machines
Correction. That should be multi-processor machines.
Since the lineage of Solaris is still very much intact (i.e. it isn’t a completely new system) I’m afraid his comments still hold true. Solaris 10 is simply not that different. You only need to look at the people who’ve moved to Linux because of Solaris’ historical threading problems (Python users will know that well), particularly on uniprocessor machines, to see that.
Sure, point out the source files and relevant code that Larry mentions from the OpenSolaris site. Extrapolating from outdated information is a bad idea and does nothing to prove your point.
You’re going to need more than a link to a webpage. Larry McVoy’s comments about a cache here and there being lost still hold true. Additionally, the layers of inconsequential crap that supposedly make Solaris more portable, for which there is no demonstrable practical use, still hold true as well. Just goes to show what can happen when you delude yourself with theory.
No, you need more than a link to an old, outdated quote by Larry McVoy, who left Sun well before Solaris 10 was even conceived.
By the way, the proof is in the proverbial pudding. Solaris 10 benchmarks very well against the latest Linux releases by SUSE and Red Hat. So those layers of "inconsequential crap" don't really make much of a difference.
The only person deluding themselves is you. You obviously have no formal training, and I bet you couldn't look at a line of Solaris kernel code, pick out the pieces that thrash the cache, and build reasonable models and test cases to prove the performance problems McVoy talks about.
Sun and the Solaris team are very, very serious about performance. DTrace is a prime example of just how serious they are; in fact, the Linux guys still have two or three different toolkits (not yet mainstream, mind you) that together don't implement all the features of DTrace. Whereas Sun engineers, developers, sysadmins, and customers can shoot down performance bottlenecks on production systems today. And guess what: that head start, coupled with the motto and drive that a previous poster referred to, will make sure Solaris performs better release after release.
Meanwhile the Linux developers can debate binary drivers, and Linus can decline patches for the tracing toolkits, debuggers, and crash dumps. They can also rewrite a few different subsystems mid-release and debate exactly how many sub-version numbers make up a development tree and an unstable tree; in other words, figure out their development methodology yet again for the 100th time.
Don't confuse the issue with facts! The very benchmarks the trolls are quoting are so old they are irrelevant and, like most of the benchmarks quoted by the Linux zealots, compare apples and oranges. When was the last time you compared context switch rates when choosing an operating system? And isn't it funny how they gloss over the AnandTech results (because Linux lost)?
I have been reading the discussion for this article since it was posted and basically knew it was too good to be true that an article talking about Solaris and Linux in the same breath would go for long without some trolling (sorry Alan). But seeing the ferocity of the trolls defending Linux I wonder who they are trying to convince of the “superiority” of Linux, us or themselves?
Sure, point out the source files and relevant code that Larry mentions from the OpenSolaris site.
I’m afraid you can only get so far continually running around in circles asking other people to produce evidence. The figures are there, you go off and produce a sensible response, someone else comes back…..
Extrapolating from some outdated information is a bad idea and does nothing to prove your point.
You can only get so far by screaming outdated. What exactly have Sun done with Solaris 10 to make those figures outdated? Feel free to tell us.
No, you need more than a link to an old, outdated quote by Larry McVoy, who left Sun well before Solaris 10 was even conceived.
If you’re going to reply and tie yourself in knots you’re going to need to tell us, and prove to us, what Sun has done to make those figures outdated.
Solaris 10 benchmarks very well against the latest linux releases by SUSE and Redhat.
What Solaris benchmarks? Christ, sonny. You're screaming your head off asking for figures from other people, and the most you've got are crap benchmarks you haven't even linked to.
Oh right, those ones Sun carefully controlled to dump on Red Hat. What I want to know is can I do a simple lmbench test and have those simple results, mentioned before, disproved.
So those layers of “inconsequential crap” don’t really make much of difference.
Make much difference? So the layers of inconsequential crap are there then?
You obviously have no formal training
I really don’t think you want to be replying with that considering all the comments that have gone before, but of course, liars can’t be read. I really do believe you’re that sincere. Oh well. If you’ve been through formal training I think we should all learn to skip it.
and I bet you couldn't look at a line of Solaris kernel code, pick out the pieces that thrash the cache, and build reasonable models and test cases to prove the performance problems McVoy talks about.
The problems are proved by the figures – the be-all and end-all. You prove to me why they're wrong, I or someone else comes back with a response… I read that in a manual somewhere. Formal training kicking in there.
That would be your job anyway, if you are what you say you are (role reversal, eh – practiced by you throughout this thread!). First of all, I don't want to look at (and have never needed to look at) a line of Solaris kernel code (especially since the thing is closed!). Please don't preach to me about OpenSolaris – it isn't open, OK, and you've got a long way to go to understand what an open source project actually is.
From my own experience of actually using the thing it isn’t good enough, and if it isn’t good enough I use something else – as simple as that. Sun’s ex-customers have felt that way consistently for about five years, and no amount of pulpit preaching is going to change it.
DTrace is a prime example of just how serious they are; in fact, the Linux guys still have two or three different toolkits (not yet mainstream, mind you) that together don't implement all the features of DTrace.
This is where your silliness shines through, and it is a silly trait all Sun engineers seem to have. It must be something in the water. Just because you have a half-decent debugging tool (and that's what DTrace is), it doesn't mean you're serious about performance – some would argue far from it.
Performance thinking is about making design decisions, making sensible architectural decisions, and having a plan for the most sensible and straightforward way forward. That's my formal training talking there. When Larry McVoy wrote that piece, that's what he was talking about. You notice, of course, Larry didn't come out and say "Linux is better because they can debug their own shit better than ever before!" He actually listed architectural and design reasons why. Small wonder he left Sun.
Whereas Sun engineers, developers, sysadmins, and customers can shoot down performance bottlenecks on production systems today.
Well, they tend to have to do it more than anyone else. That's another serious issue Sun people tend to have that I touched on above. "Yay, let's run round like headless chickens and track down problems and performance bottlenecks on production systems!" How about you just design the thing right; then there'd be fewer problems? Mind you, people wouldn't be spending shitloads of money on Solaris support and engineers they don't need, would they ;-)?
And guess what: that head start, coupled with the motto and drive that a previous poster referred to, will make sure Solaris performs better release after release.
Trademarked marketing slogan, Sun Microsystems 2005 :-). ROTFL.
Meanwhile the Linux developers can debate binary drivers,
Debate over. There aren't any, and the hardware support of Linux is still better than Solaris's ;-). Have a look at what actually is supported on Linux, especially on x86, or try running a Solaris desktop some time. It's pot luck whether you get the thing installed, if you don't die of old age before you get it completed :-).
It’s a far worse situation than Linux was in around 2000.
Linus can decline patches for the tracing toolkits, debuggers, and crash dumps.
It's a question of philosophy, I suppose. It's not that Linus doesn't find that stuff useful, nor that he hates tools, but that he doesn't feel kernel devs spending 80% of their time farting about with debugging tools and debugging stuff is particularly productive. There are times when it's necessary, no question, but there's a balance. Claiming that DTrace turns water into wine, as Sun is doing, is just ridiculous and shows the folly of their philosophy on things. Linus would rather have a sensible structure and architecture that fits the purpose before going down that road. It's been working for quite a while now, so he's doing something right.
They can also rewrite a few different subsystems mid-release
A bit melodramatic, but even if they have, no one's noticed when they've installed a new kernel or upgraded their system. The thing just works, which would be a problem for Sun if that ever happened with Solaris.
how many different sub-version numbers exactly make up a development tree
No, they don’t use Subversion and you can’t on a project like that.
in other words, figure out their development methodology yet again for the 100th time.
It's called evolution and active development, and it's why Linux has been, and is, crapping all over Solaris. The figures bear that out as well – not just raw benchmarks but the number of people who've jumped ship. In other words, cold hard revenue which Sun has lost.
You can’t beat that.
Benchmarks like these:
http://www.sun.com/software/solaris/benchmarks.jsp
And if half of what you said was true, then I would be finding large Sun, IBM, and HP hardware for pennies on the dollar at eBay. That isn’t the case, within a two hour drive from my house I can visit a large bank, a major ship builder and a number of military facilities and all of them have one thing in common, they run Solaris (as well as AIX and HP-UX).
I don’t see people tossing out multi-million dollar investments in hardware and software to go to Linux. Despite what you say it just simply isn’t happening to the degree you think it is. Just because you managed to convince one person to go to Linux doesn’t mean squat.
You can believe whatever you want; the facts from where I sit say otherwise. It is why we have people from Red Hat visit us on a regular basis, because they hope that they can convince us to drop Solaris for Linux. And based on what we see, there is no real benefit for us to switch. And I am sure a lot of other people see it the same way. It is a choice, isn't that what FOSS is about, or is it only a choice when you "choose" Linux?
If you’re going to reply and tie yourself in knots you’re going to need to tell us, and prove to us, what Sun has done to make those figures outdated.
Improved performance to beat linux.
What Solaris benchmarks? Christ, sonny. You're screaming your head off asking for figures from other people, and the most you've got are crap benchmarks you haven't even linked to.
Scroll through this thread they were posted.
http://www.anandtech.com/systems/showdoc.aspx?i=2530&p=5
Here is more:
http://www.anandtech.com/systems/showdoc.aspx?i=2458&p=6
http://www.anandtech.com/systems/showdoc.aspx?i=2458&p=7
http://www.anandtech.com/systems/showdoc.aspx?i=2458&p=9
Oh right, those ones Sun carefully controlled to dump on Red Hat. What I want to know is can I do a simple lmbench test and have those simple results, mentioned before, disproved.
Not the ones AnandTech, a pro PC site, did. Sure, you can run lmbench and provide results – just don't provide outdated results.
This is where your silliness shines through, and it is a silly trait all Sun engineers seem to have. It must be something in the water. Just because you have half-decent debugging tool (and that’s what DTrace is), it doesn’t mean you’re serious about performance – some would argue far from it.
No, your obvious stupidity shows with that statement.
Performance thinking is about making design decisions, making sensible architectural decisions and having a plan for the most sensible and straightforward way of going. That’s my formal training talking there. When Larry McVoy wrote that piece, that’s what he was talking about. You notice, of course, Larry didn’t come out and say “Linux is better because they can debug their own shit better than ever before!” He actually listed architecural and design reasons as to why. Small wonder he left Sun.
Larry McVoy was at Sun in a different era, when Linux was in a different era. Linux then had crap SMP support and was optimized for single-CPU machines. Compile the Linux kernel with SMP and one without, run lmbench, and watch the numbers change. Solaris has SMP support built in. And 2.0.x is so old it's no longer practical to call it anything close to the Linux we have today. Move on.
Well they tend to have to do it more than anyone else. That’s another serious issue Sun people tend to have that I touched on above. “Yay, let’s run round like headless chickens and track down problems and performance bottlenecks on production systems!” How about you just design the thing right, then there’d be less problems? Mind you, people wouldn’t be spending shitloads of money on Solaris support and engineers they don’t need, would they ;-)?
Another idiotic comment….
Debate over. There aren’t any, and the hardware support of Linux is still better than Solaris ;-). Have a look at what actually is supported on Linux, especially on x86, or try running a Solaris desktop some time. It’s pot luck whether you get the thing installed, if you don’t die of old age before you get it completed :-).
Another useless comment…
It’s a question of philosophy I suppose.
Yeah, a philosophy companies like SGI and IBM et al. that have the experience are trying to change.
It’s not that Linus doesn’t find that stuff useful, nor that he hates tools, but because he doesn’t feel that kernel devs spending 80% of their time farting about with debugging tools and debugging stuff is particularly productive. There are times when it’s necessary, no question, but there’s a balance. Claiming that DTrace turns water into wine, as Sun is doing, is just ridiculous and shows the folly of their philosophy on things. Linus would rather have a sensible structure and architecture that fits the purpose before going down that road. Its been working for quite a while now, so he’s doing something right.
What??? Linus has no experience in enterprise computing, and he admits that he is happy with Linux on the desktop. His myopic view of the world is not great architecture; it is architecture that works on desktops. Which, for fools like you, is all that matters. Whereas IBM, SGI, HP, and Intel are breathing life into Linux in the enterprise… and fixing Linus's boneheaded mistakes. Linux would be nowhere without them.
No, they don’t use Subversion and you can’t on a project like that.
Not that Subversion. Sub-version numbers like 2.7.x.y. Geez, what a tool.
It's called evolution and active development, and it's why Linux has been, and is, crapping all over Solaris. The figures bear that out as well – not just raw benchmarks but the number of people who've jumped ship. In other words, cold hard revenue which Sun has lost.
No it’s called “Oh crap.. We need to change something”
Name one Linux company that is successful and makes the amount of revenue Sun does. VA Linux is bust… SGI is bust… Red Hat makes 60 million a quarter; Sun makes 2.9 billion… IBM and HP make more revenue off other things than they do with Linux. Linux is a marketing gimmick to sell hardware, that's it.
Besides, if you want to know some of the reasons why Linux is better and faster you only need to read some of Larry McVoy’s comments:
McVoy left Sun more than a decade ago. His numbers and that website are as outdated as it gets. They are comparing Linux 2.0.x to Solaris 2.5.1 (SunOS 5.5.1)… Solaris is at 10 (SunOS 5.10) and Linux at 2.6. A lot has changed since then in both OSes, and those numbers are about as relevant today as the results of an election held a decade ago.
BTW, his comments about Solaris' syscall interface are so old that interface was obsoleted years ago.
Solaris and other Unices are based on code and design that is thirty years old. As Larry says, one would hope we have learned something during that time, and Linux is the result. If Sun truly want a modern Unix system that performs well then all they need to do is dump Solaris and contribute to Linux. Cost-wise, it would also be a heck of a lot less expensive.
They have learned quite a bit and are giving Linux a run for its money. Look at the recent benchmark results (some from AnandTech were posted above). Not only that, Linux is playing catch-up on many features Solaris has had for decades. Oh, and Solaris is now also open source.
They are comparing Linux 2.0.x to Solaris 2.5.1 (SunOS 5.5.1)… Solaris is at 10 (SunOS 5.10) and Linux at 2.6.
Nevertheless, it just showed how much Sun was trying to pull the wool over peoples’ eyes. The lineage of both Linux and Solaris is very much still there.
They have learned quite a bit and are giving Linux a run for its money.
Well that’s just an admission of defeat right there.
Not only that, Linux is playing catch-up on many features Solaris has had for decades
I don't see where. Certainly most educational establishments I've seen have moved from Solaris to Linux, saved an absolute bundle on hardware costs, and experienced a performance boost to boot. I know; I worked in one, and the head tech guy there was a die-hard Solaris and Mac fan. Unfortunately, at some stage he just had to face reality as he realised that Solaris just didn't matter to the wider community writing software for *nix systems. They're not missing anything they had with Solaris – quite the opposite.
Oh, and Solaris is now also open source.
I think there’s this perception amongst Sun people that open sourcing Solaris will magically give them what the Linux (and to a lesser extent BSD) communities have. Certainly, it will help with troubleshooting the problems that Solaris seems to have with a lot of third-party open source software. That’s if anyone actually cares that some things don’t work on Solaris of course, which is something my friend came up against.
Nevertheless, it just showed how much Sun was trying to pull the wool over people’s eyes. The lineage of both Linux and Solaris is very much still there.
That statement is so absurd I don’t even want to delve into the absurdities to give you the benefit of the doubt.
Well that’s just an admission of defeat right there.
So what is the problem? Solaris developers are real men and can accept that they were wrong or misguided. Solaris developers picked maintainability over hacky performance. Now they are improving performance; nothing wrong with that. They have improved performance to the point that they have surpassed Linux, and have developed the tools to surpass it even more.
I don’t see where. Certainly most educational establishments I’ve seen have moved from Solaris to Linux, saved an absolute bundle on hardware costs and experienced a performance boost to boot. I know, I worked in one and the head tech guy there was a die-hard Solaris and Mac fan. Unfortunately, at some stage he just had to face reality as he realised that Solaris just didn’t matter to the wider community writing software for *nix systems. They’re not missing anything they had with Solaris – quite the opposite.
I didn’t expect you to see either. Your paragraph is absolutely absurd. People switched from SPARC hardware to x86 and saved money. They chose to run Linux because it was the Unix flavor of the day on x86. Guess what: Sun sells x86 boxes, and Solaris trounces Linux on them and they are cheaper than the competitors to boot, are more efficient and have better remote management than your average assembled white box. Time will tell.
I think there’s this perception amongst Sun people that open sourcing Solaris will magically give them what the Linux (and to a lesser extent BSD) communities have. Certainly, it will help with troubleshooting the problems that Solaris seems to have with a lot of third-party open source software. That’s if anyone actually cares that some things don’t work on Solaris of course, which is something my friend came up against.
The community may or may not erupt; only time will tell. I have so much anecdotal evidence of Linux-to-Mac and Linux-to-Solaris switchers that we could go back and forth all day. But in the end a lot of pro-Linux plus points are soon vanishing into the ether with Sun’s new hardware strategies and Solaris 10. Even the Linux poster boy Google is partnering with Sun.
That statement is so absurd I don’t even want to delve into the absurdities to give you the benefit of the doubt.
That’s just stamping your feet I’m afraid.
Given that you’ve pulled the wool over your own eyes, I find that funny. You’ve been confronted by evidence borne out by actual figures that proves that Solaris has a heritage of just not being plain good enough. Your response? “But, but, but, but…..we have Solaris 10 and this benchmark is so old!!” Well no, that doesn’t disprove the actual figures, does it? The reason why there are no hard benchmarks surrounding Solaris 10 is basically because no one has got around to it. Why? Well you could argue no one is really using it for the wide variety of stuff where they find problems yet. Yep, all the Python users and other open source people have jumped off the boat and they ain’t getting back on – not that they ever were on.
However, lack of Solaris 10 benchmarks is your problem, especially if you’re going to use that as a standard response for everything. The fact is, from the evidence produced, Linux has a proven track record of kicking Solaris all over. Your attempts to prove Solaris 10 as this uber, completely new OS where you can just ignore any previous Solaris problems are extremely delusional and rather typical of the response you’ll get from most Sun people.
Both Linux and Solaris have a proven lineage. Yep, things have improved, things may have got faster in certain places but the traits remain the same. Things don’t get re-written in a few years, as much as the Sun people would really dearly love to wish otherwise with Solaris 10.
Your paragraph is absolutely absurd. People switched from SPARC hardware to x86 and saved money. They chose to run Linux because it was the Unix flavor of the day on x86.
Well no, that’s not the whole story. People chose it because it was cheaper and they realised they could get just as good, if not far better, performance from a really decent x86 box running Linux. Unfortunately Sun likes to keep the myth going that people only moved because Linux and x86 was cheap. From first-hand experience I can tell you that’s bollocks. The Linux stuff they used to replace Solaris with was so adequate people wondered what the hell they’d been doing all these years.
Guess what: Sun sells x86 boxes
Yay, excitement. Why did Sun start selling x86 boxes then?
and Solaris trounces Linux on them and they are cheaper than the competitors to boot
From the evidence presented here in this thread I wouldn’t say that’s likely, would you? Linux was born on x86, and if you’d read around a bit you’d see it exterminated Solaris on its own hardware as well. That’s not mouthing off like some silly exec – there are figures that bear that out.
As for cheaper. Well, they might be if you want to spend all of the money you save and far more on subscriptions and expensive Sun support for needless and pointless maintenance.
are more efficient and have better remote management than your average assembled white box.
There are no figures to bear out that efficiency claim – quite the opposite. I’ve also experienced remote management (or any kind of management) on a Sun box; it’s arcane and unfriendly as hell – unnecessarily so. It’s small wonder you need a full-time Sun engineer to run it, which of course, is the whole point isn’t it? 😉
Time will tell.
Time has told.
I have so much anecdotal evidence of Linux-to-Mac and Linux-to-Solaris switchers that we could go back and forth all day.
Where is this evidence then? It’s funny how I’ve told you about a Solaris and Mac person who’s gone in the other direction – and I’ve told you why. The reason why is not because of Linus Torvalds jumping on stage and shouting “Linux!”, or dressing in a penguin suit. It wasn’t even because Linux and the hardware was cheaper. It was for purely pragmatic reasons.
You’ve spent the last thirty or forty comments mouthing off, demanding evidence and figures left, right and centre from people and you’re coming up with non-existent anecdotal evidence. Errr, do you not include yourself in that?
But in the end a lot of pro linux plus points are soon vanishing into the ether with Sun’s new hardware strategies and Solaris 10.
Mmmmm, not seeing it. I don’t see any mass exodus from Linux that bears that out either, nor has anyone else noticed it. How many articles have you seen (apart from that GM one where Sun gave the stuff away and paid them) which have described a genuine move to Solaris 10 from Linux? There ain’t many, are there? I don’t doubt many existing Solaris users have started using it, but that’s hardly the same thing.
At best Sun are going to save some of what they already have. Their break-even figures at the moment are basically down to a mass cost-cutting exercise.
Even the Linux poster boy Google is partnering with Sun.
I hardly think they’re going to start using Solaris 10. Google actually have some clever people inside their four walls. What they’ve announced is a Google toolbar thing with Java, and despite speculation, I doubt whether they have much of a clue where they’re going.
Given that you’ve pulled the wool over your own eyes, I find that funny. You’ve been confronted by evidence borne out by actual figures that proves that Solaris has a heritage of just not being plain good enough.
You call that garbage evidence? Evidence in terms of recent independent benchmarks has been provided to you that shows Solaris 10 beating Linux on the same hardware. You have blinders on if you are still hanging on to the past. The only people who still have wool over their eyes are you and the other Linux zealots.
Your response? “But, but, but, but…..we have Solaris 10 and this benchmark is so old!!” Well no, that doesn’t disprove the actual figures, does it? The reason why there are no hard benchmarks surrounding Solaris 10 is basically because no one has got around to it.
I guess you can’t read, let alone comprehend. Scroll up and look at the benchmarks from AnandTech. Solaris 10 beat Linux; that is a fact. Those benchmarks are more valid than the ones the blind Linux zealots have posted so far.
Why? Well you could argue no one is really using it for the wide variety of stuff where they find problems yet. Yep, all the Python users and other open source people have jumped off the boat and they ain’t getting back on – not that they ever were on.
Good for them…
Well no, that’s not the whole story. People chose it because it was cheaper and they realised they could get just as good, if not far better, performance from a really decent x86 box running Linux.
You might want to read the other Sun-related article and see how many big datacenters don’t just trust x86 running Linux. A small college is no datacenter.
Unfortunately Sun likes to keep the myth going that people only moved because Linux and x86 was cheap. From first-hand experience I can tell you that’s bollocks. The Linux stuff they used to replace Solaris with was so adequate people wondered what the hell they’d been doing all these years.
Yeah, a college moving student labs to Linux… oooh, big loss there. Colleges buy the crap that they buy on heavy educational discounts. I used to manage my college labs and was privy to negotiations. The HP and Sun Unix boxes were rock solid and the cheap assembled Linux boxes were dying left, right and center and constantly needed babysitting. I was horrendous.
From the evidence presented here in this thread I wouldn’t say that’s likely, would you? Linux was born on x86, and if you’d read around a bit you’d see it exterminated Solaris on its own hardware as well. That’s not mouthing off like some silly exec – there are figures that bear that out.
Any sane person would gather the obvious, and as many sane people have pointed out to you and the other Linux zealots, your data is meaningless. The only thing demonstrated by fact and data is the most recent benchmarks showing Solaris beating Linux.
Where is this evidence then? It’s funny how I’ve told you about a Solaris and Mac person who’s gone in the other direction – and I’ve told you why. The reason why is not because of Linus Torvalds jumping on stage and shouting “Linux!”, or dressing in a penguin suit. It wasn’t even because Linux and the hardware was cheaper. It was for purely pragmatic reasons.
Let’s see: hearsay is evidence??? Since when? He-says-she-says never stands up as evidence in any court. Your evidence is utter garbage. One person switched.
I used to be a Linux evangelist in college; I have since grown up and moved to Mac and Solaris. I stopped playing with Linux after a nightmarish weekend trying to get ALSA and Video4Linux to work with my ATI All-in-Wonder card on the 2.4 kernel, and trying to get XVideo to work on a Toshiba that used the Cyber-something Aladdin chipset. I installed XP and never looked back. The X code for XVideo had parameters that were total guesswork and XVideo just wouldn’t work.
Mmmmm, not seeing it. I don’t see any mass exodus from Linux that bears that out either nor has anyone else noticed it.
Hmmm… We have already established you are blind to the obvious. Solaris 10 was released this year. Big companies aren’t college campuses and actually evaluate stuff for 9–10 months before migrating production environments.
Linux didn’t become popular overnight. It took Linux a good 13 years (and many, many corporate donations) to get where it is today. Only a fool or an ignorant college kid (which you seem to be) expects massive change overnight.
I hardly think they’re going to start using Solaris 10. Google actually have some clever people inside their four walls. What they’ve announced is a Google toolbar thing with Java, and despite speculation, I doubt whether they have much of a clue where they’re going.
There was not much meat in the announcement. But the announcement mentioned that a hardware deal was coming. So still money in the pocket for Sun.
This is getting boring…I have had enough of it.
Evidence in terms of recent independent benchmarks has been provided to you that shows Solaris 10 beating Linux on the same hardware.
I’ll make this one extremely simple for you. W-H-E-R-E?? Where are they?
and the other Linux zealots.
Zealotry, yada, yada, yada.
I guess you can’t read, let alone comprehend. Scroll up and look at the benchmarks from Anandtech.
Those benchmarks were conducted on a Sun machine (not exactly my idea of impartial), and they don’t look great considering that Sun must have squeezed the life out of the whole setup to make Solaris look good.
When I see a simple lmbench result that Solaris can’t get right I have to ask myself why. It’s been common knowledge for several years.
Good for them…
Since most of them are ex-Sun customers in educational establishments, yeah, it is. But of course, telling customers to sod off is part of Sun’s repertoire.
You might want to read the other Sun-related article and see how many big datacenters don’t just trust x86 running Linux.
That’s just a statement out of fear.
Yeah, a college moving student labs to Linux… oooh, big loss there.
It’s more than that. It’s a whole medical school first, and then the whole university of tens of thousands of students. This place was a big Sun customer.
The HP and Sun Unix boxes were rock solid and the cheap assembled Linux boxes were dying left, right and center and constantly needed babysitting.
These have been in for over three years. They’ve never budged and replacing anything is infinitely less expensive than all of that incredible and mythical Sun support.
I was horrendous.
I’m sure you were.
The only thing demonstrated by fact and data is the most recent benchmarks showing Solaris beating Linux.
Wish, wish, wish. I’d trust an obviously abnormal lmbench result over a daft bar graph of Solaris running on Sun’s own hardware any day. It’s what you see day to day that matters. You still can’t come up with an explanation for it.
I stopped playing with Linux after a nightmarish weekend trying to get ALSA and Video4Linux to work with my ATI All-in-Wonder card on the 2.4 kernel, and trying to get XVideo to work on a Toshiba that used the Cyber-something Aladdin chipset. I installed XP and never looked back. The X code for XVideo had parameters that were total guesswork and XVideo just wouldn’t work.
I think you need to get some therapy now. You obviously haven’t tried Solaris as a desktop. For a supposed kernel hacker you were a bit at sea there, weren’t you? Well, I and countless others did get it working, and from my recent experiences, a hell of a lot more trouble free than Solaris. You lot seriously don’t think that’s a desktop or an easy to set up server, do you?
Only a fool or an ignorant college kid (which you seem to be) expects massive change overnight.
Have no idea what you’re talking about there. The change has happened, and people moved from Solaris to Linux. The fact that you’re talking the way you are proves this, because you have some vain “Rome wasn’t built in a day” hope that people will move back.
But the announcement mentioned that a hardware deal was coming.
No it didn’t.
So still money in the pocket for Sun.
Well no, because Sun are barely breaking even, and that was after a massive culling of costs. Considering you’re doing some wishful thinking there, are things worse at Sun than anyone has suspected?
This is getting boring…I have had enough of it.
It was boring when you wandered off screaming Solaris 10 and proclaiming all these benchmarks rather than answering one guy’s question about why many people had been consistently, for years, getting disappointing results out of Solaris. I take it you don’t know. That’s all you needed to tell us.
I was horrendous.
I’m sure you were.
That was an obvious typo, you ignoramus: “it was horrendous”. Now I know you have no reason left in you. Most tend to start pointing out typos and grammar mistakes when they run out of steam, as an inferior mind usually does.
I think you need to get some therapy now. You obviously haven’t tried Solaris as a desktop. For a supposed kernel hacker you were a bit at sea there, weren’t you? Well, I and countless others did get it working, and from my recent experiences, a hell of a lot more trouble free than Solaris. You lot seriously don’t think that’s a desktop or an easy to set up server, do you?
You got XVideo working on the Trident CyberBlade X1 chip. Point me to the diffs/patch or shut up…
That should be Trident CyberBlade Ai1.
Well you could argue no one is really using it for the wide variety of stuff where they find problems yet. Yep, all the Python users and other open source people have jumped off the boat and they ain’t getting back on – not that they ever were on.
OK, enough of this mass Python exodus to Linux conspiracy theory. Where is the data to back this claim?
From all I have seen, Python has had bugs that prevented it from working on Solaris. All of those seem to have been fixed in newer versions of Python. Also, that goes to show that writing portable code is crucial.
Writing to only GCC and Linux causes problems like the ones the Python folks faced. Every piece of software has bugs, but it takes ignorant Linux zealots to blow things out of proportion.
DTrace is being improved to provide visibility into Python programs. A mass exodus away from Solaris would have had the opposite effect.
Another bunch of meaningless drivel. Please grow up.
OK, enough of this mass Python exodus to Linux conspiracy theory. Where is the data to back this claim?
Total lack of interest in the problem from anyone. Standard response? “Well, it works fine on Linux, Solaris provides no advantages over Linux, no one uses it anyway, why bother?” Result? People move to Linux.
Where’s your data by the way? Still working on it?
From all I have seen, Python has had bugs that prevented it from working on Solaris.
No, it was Solaris. If it works on Linux, BSD and Windows then it doesn’t have bugs that prevent it running on Solaris. How pretentious can you get?
Also, that goes to show that writing portable code is crucial.
You’re not winning any open source converts to Solaris.
Writing to only GCC and Linux causes problems like the ones the Python folks faced. Every piece of software has bugs, but it takes ignorant Linux zealots to blow things out of proportion.
GCC is the compiler today. If you don’t pay attention to it first and foremost, before anything else in the Unix and open source world today, you are history. Tell that to the OpenSolaris people.
If it doesn’t run, Solaris doesn’t get used – that’s the real world. You’ll find that out when you start trying to proclaim OpenSolaris as an open source friendly project, i.e. no one uses it. If it doesn’t do adequately what Linux and the BSDs do today, you can simply forget anyone wanting to help you out with troubleshooting their own software.
Every piece of software has bugs, but it takes ignorant Linux zealots to blow things out of proportion.
Not working is not blowing things out of all proportion. I must remember to buy me some of that Sun wishful thinking.
DTrace is being improved to provide visibility into Python programs.
Why should they need to troubleshoot their software that works great on Linux, BSD and Windows?
A mass exodus away from Solaris would have had the opposite effect.
Eh?
Another bunch of meaningless drivel. Please grow up.
Please do. I’m still waiting.
It works easily, works with my phone and runs under Linux or BSD.
http://www.gprsec.hu/modules/index/
Without wishing to take sides (I have zero kernel programming experience), surely this is just a misunderstanding about terminology.
“Most Portable” – this should apply to the OS that would take least effort to port to another platform/architecture. What you define as the OS is open to debate though, I agree.
“Most Ported” – the OS that runs on most platforms/architecture.
Which is best is a worthwhile subject for debate, assuredly, but I would have thought that the two terms above could be used to gain an objective decision on which OS “wins” either category. And again, which “wins” with a user depends on that user’s needs.
In any case, congratulations to the author for an interesting article, even if I understood very little of it!
All this talk on performance, yet not a word on reliability, maintainability, backward compatibility… And what’s up with old code, anyway?
A small anecdote: I work in the ECE department of my university. Traditionally, our UNIX rooms were powered by Sun. Then Linux came and Sun workstations were phased out. Until last year, we were in the middle of phasing out all these old Sun Ultra 5/10/40s for Linux servers powered by IBM. Since Solaris 10 came out, we are looking once again at Sun hardware. It’s quite likely that the workstations will stay with Linux, but the Linux servers will coexist with some new Sun servers. There is no “winner takes all”, we just use the best tool depending on the situation (performance, nature of the tasks, ease of migration, etc).
I believe this discussion is getting a bit ridiculous. What is funny is that there is practically no evidence out there supporting anybody’s point. Forget antique or incomplete benchmarks: it’s like claiming that smoking doesn’t cause cancer based on partial or 60-year-old studies. I am pointing at both sides.
We would definitely need a fair and comprehensive benchmark roundup. Some people would inevitably fanboy their favourite, but it doesn’t matter. It would certainly interest those who are seeking a solution or, ultimately, the developers, those who are making improvements. Competition is good.
Thanks for the voice of reason.
I didn’t claim Solaris 10 was faster than Linux, only that all the benchmarks I have seen to date have shown it to be.
Like I said before, I haven’t made any claim that I couldn’t back up. If I did, it was in anger and haste and probably unintentional.