Apart from Linux and the various BSD operating systems, there’s another open source UNIX-like operating system (actually, it’s a certified UNIX): OpenSolaris. There are a few key differences between Linux and OpenSolaris, and TuxRadar lists some of them so that Linux users can dive right into Solaris.
One of the first issues that a Linux user will run into when installing OpenSolaris is the fact that it does not support partitions inside an extended partition; if you install OpenSolaris on an extended partition, it will erase all the partitions inside it. This is important to mention, as many Linux distributions install inside extended partitions by default.
Related to this is OpenSolaris’ file system, ZFS. The Linux kernel does not have support for ZFS, and as such, if you want to exchange data between the two on the same machine, you’ll have to use FUSE. The article erroneously states that this is because the FSF does not consider the CDDL “free enough”, but the actual reason is that the CDDL is simply not compatible with the GPL. If the CDDL is not free enough, then neither is the GPL, as you could argue that the CDDL is less restrictive than the GPL.
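For reference, a rough sketch of the FUSE route on the Linux side, assuming the zfs-fuse package is installed and provides the usual zpool/zfs userland commands (the pool name is made up):
# zfs-fuse
# zpool import -f tank
# zfs list
# ls /tank
The daemon has to be running before the pool commands work, and performance through FUSE is of course nowhere near an in-kernel file system.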
The version of GRUB shipped with OpenSolaris has been modified to support booting ZFS partitions (obviously), but this support is not present in the GRUB versions shipped with Linux distributions. As such, it is important to note that if you install Linux after installing OpenSolaris, you need to make sure not to erase OpenSolaris’ GRUB.
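One hedged way to handle that (partition numbers here are purely hypothetical): have the Linux installer put its GRUB in the Linux partition’s boot sector rather than the MBR, and add a chainload entry for it to OpenSolaris’ menu.lst, along the lines of:
title Linux
rootnoverify (hd0,2)
chainloader +1
That way the ZFS-aware GRUB in the MBR stays in charge and both systems remain bootable.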
While Linux’s hardware support is wider than OpenSolaris’, the latter does benefit from having a stable driver interface. Whereas in Linux hardware support might actually break as time goes by, 10-year-old Solaris drivers will still work today. There’s also a Device Detection Tool which will tell you whether your hardware is compatible with OpenSolaris.
The article also details that OpenSolaris will run slower than comparable Linux distributions, and that the number of applications to choose from is quite limited compared to what Linux distributions generally have to offer.
The article is an interesting read, and further details a number of OpenSolaris-specific features, such as ZFS and Zones.
Robert, I am pretty sure that OpenSolaris is not a certified UNIX system. Also note the UNIX is a registered trademark.
Solaris is, but Sun hasn’t registered OpenSolaris.
Interestingly (for me) BSD isn’t either – I’d always assumed it was UNIX certified.
http://en.wikipedia.org/wiki/Single_UNIX_Specification#Compliance
^ list of UNIX certified OSs in case anyone wants a more detailed breakdown.
I could have sworn that BSD was certified too given its history.
It’s really interesting that while BSD isn’t certified OSX is.
If I remember correctly that’s a money thing. Getting the UNIX certified stamp of approval isn’t cheap and the BSD Foundation couldn’t / can’t afford it. However, it definitely is a very UNIXy UNIX.
More specifically, OS X 10.5 Leopard and OS X Server 10.5 are. Snow Leopard variations have not been certified UNIX yet. Not sure if they ever will be.
AFAIK you don’t have to certify just the OS, but each and every version of it. I’d assume that the costs (both in time and, I’d guess, money) wouldn’t be worth whatever advantage a UNIX certification brings.
The reason BSD UNIX systems like FreeBSD or NetBSD do not have the certification mark is probably the price you have to pay for it:
Shipments (Units)      Annual Fee
Up to 1 000            $25 000
1 000 to 30 000        $50 000
More than 30 000       $110 000
source: http://www.opengroup.org/openbrand/Brandfees.htm
… and generally, why pay if you already are UNIX?
The funniest part is that you have to pay all this money for every architecture you want your UNIX to be certified on … imagine the price for NetBSD …
The price might be one reason. Another one is that they do not comply to POSIX and Single Unix Specification. By current standards *BSD is Unix as much as Linux is – it is not.
It’s also because nobody gives a flying fart about Unix certification anymore, and never really did.
Unix certification means about as much as the color of the icons on your installation cdroms.
FFS, both AIX and OS X are ‘Unix certified’ and they have almost nothing in common. It’s completely worthless, and its sole purpose of existence as a certification is to generate revenue for the trademark holders and to fill in bullet points in the advertising media of the OS vendors.
Nobody with half a brain has given a shit for about 10 years now.
Yeah, that must be the reason why companies spend piles of money for it.
-1 inaccurate. From a developer’s point of view, a well-behaved program should compile and work on all certified Unices in the same way. Sure, if you look at the icons on the desktop, they seem to have nothing in common.
Again, why would companies invest significant effort (and money) to get it then?
I did not write the summary, ask Thom.
To get the UNIX label, OpenSolaris would have to be certified POSIX compliant. Solaris is, but OpenSolaris is not, mostly for financial reasons I imagine. So while OpenSolaris is a UNIX derivative, I don’t think it can be called UNIX for legal reasons.
Ah, thanks for the clarification. I looked this one up, and it said Solaris was certified, so I assumed OpenSolaris was too.
I’ll fix it right away!
There’s more to being UNIX certified than just being POSIX compliant (check the link I posted earlier in this discussion).
He said it wasn’t meant to be a comparison, but he breezed by the “performance is much slower than ubuntu on the same hardware” comment. I wonder how much slower … and why. Compiler differences? Layered architecture/kernel differences? I assume file system access times should be competitive?
Anyway, I can always surf the web and [probably] find an actual performance comparison between some version of linux and OpenSolaris.
Just a note, I like the Gnome look in the screenshots…
http://www.phoronix.com/scan.php?page=article&item=os_threeway_2008…
While I did not read the article in its entirety before submitting it, I would have liked to see how they made the determination as to what is “slow”.
My research into improving disk performance on Solaris and OpenSolaris with IDE disks by setting maxphys may or may not help. I am still testing that mod.
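In case anyone wants to try it: the tunable goes in /etc/system and takes effect after a reboot. The value below is only an example, not a recommendation:
* /etc/system – raise the maximum size of a single physical I/O request
set maxphys=1048576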
This is not correct. There have only been a few comparisons of Linux, Windows, and Solaris on the same hardware. For all three operating systems, it’s the hardware that determines performance, at least to the first approximation. In the example I saw, with many benchmarks run on the three operating systems, each of them came out on top on some of the benchmarks. In general, all of them were about equal in performance.
Unix partitioning, which was around way before DOS was ever a glimmer in anyone’s eyes, is much nicer/simpler than DOS/Linux partitioning. In essence, *all* partitions are extended partitions (although they take up primary partition slots in the PC MBR).
You create a single partition, then sub-divide that partition up into the filesystems you want to use.
This is one of the biggest hangups that people coming from the DOS/Linux world have.
Once you drop your Linuxisms, though, you come to realise that the Unix way is the better way. You only need 1 partition per OS, instead of 1 partition per filesystem.
Linux has had LVM for 11 years now.
WTF does LVM have to do with this?
Ummm… everything? Is this a trick question?
With traditional Unix partitioning, you create one partition and then divide it up into different filesystems. With LVM, you create one partition and then divide it up into different filesystems.
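If it helps, a minimal sketch of the LVM flavour of that (device names and sizes are made up):
# pvcreate /dev/sda2
# vgcreate vg0 /dev/sda2
# lvcreate -L 10G -n root vg0
# lvcreate -L 20G -n home vg0
# mkfs.ext3 /dev/vg0/root
# mkfs.ext3 /dev/vg0/home
One partition handed to LVM, then carved into as many filesystems as you like.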
Oh lord.
Don’t know, ability to dynamically create any number of sub partitions within a single partition.
… Bad hair day?
– Gilboa
I’d like to add that mostly misunderstandings arise from different terminology. One example is the FreeBSD OS which uses the term “slices” for “DOS primary partitions”, and “partition” for the real partitions inside the slice.
You mentioned:
There are artificial restrictions brought into the PC area by DOS, for example the limitation of 4 DOS primary partitions. If you needed more, you would have to use DOS extended partitions containing logical volumes (“subpartitions”).
I completely agree with the concept of having to use one partition per OS, instead of spreading the OS’s components among different primary partitions. This is what the subpartitioning can be used for.
To give an example, a common FreeBSD layout looks like this: ad0 refers to the first disk; ad0s1 is the first DOS primary partition (first slice) on this disk; ad0s1a = /, ad0s1b = swap partition, ad0s1d = /tmp, ad0s1e = /var, ad0s1f = /usr and ad0s1g = /home. If you wanted to put everything into one partition, you would let ad0s1a be / and let it cover the whole of ad0s1. If you’re running FreeBSD exclusively, ad0s1 covers the whole of ad0; if not, ad0s2 would be the next DOS primary partition, for example to install OpenSolaris in it, and Linux into ad0s3.
Regarding OpenSolaris or any other Unix OS that uses ZFS, the partitioning has become much more versatile and dynamic. You can’t get the “partition is full” problem, and you can easily add new disks to your layout without having to add extra mountpoints.
As I said, there are terminology barriers, as well as misunderstandings of concepts.
Let me add a comment to the article: There’s a reason – a well intended reason brought by Unix principles – that there’s /tmp and /var/tmp; the content of /tmp can be erased at system startup (to start with an empty /tmp), and /var/tmp keeps its content between reboots. The /export/home directory has its roots in NFS exports, because other things than home/ could be exported via NFS.
Finally, the article inspired me to give OpenSolaris a new try. I’ve worked with Solaris for many years and always considered it a very stable and reliable operating system.
UNIX “partitioning” is one of the last remaining things that I don’t fully understand when it comes to the basics of the BSDs and Solaris. Well, that and disklabels. Sure, I understand the difference between slices and partitions. But when it comes to the naming scheme and how to properly use the fdisk and disklabel tools… I’m screwed.
Eventually, I might try installing a BSD again and try to get a better understanding of the tools and naming system.
Partitioning is a pretty minor nit here. It’s not something that’s in your face every day, just during installation and can be an issue if space is getting tight. But in no way can the differences be a deciding factor.
[quote]
If the CDDL is not free enough, then neither is the GPL, as you could argue that the CDDL is less restrictive than the GPL.[/quote]
freedom is not about lack of restrictions.
look up the 4 freedoms as defined by the FSF. One’s freedom stops where the freedom of others begins. The FSF also considers the freedom of the people using derivative work.
Oh God no! Does every freaking thread on this board have to devolve into this stupid and dogmatic “discussion”?
Sorry, I was not trying to bring up a pointless discussion. I was pointing out that the article is correct that the FSF does not consider the CDDL free enough. But anyway, I was wrong:
http://www.fsf.org/licensing/licenses/
Also note I have nothing against the CDDL or OpenSolaris. I was just trying to correct the article.
If I remember correctly, Sun did consider the GPL at one point anyway.
YES.
FSF, RMS, and everybody else in the world acknowledges that CDDL is a Free software license and did so from just about the beginning of all it.
People who say that RMS says that CDDL is non-free are just illustrating their own closed-minded moronity. AND they are merely expressing, in an indirect way, their irrational and obviously mistaken thoughts about RMS/FSF/GNU/GPL, etc etc etc. In other words, extreme fanboyism about an alternative license or non-Linux operating system or something like that.
And it’s ALSO obvious that Sun created the CDDL intentionally to be incompatible with the GPL, right from the beginning.
I ALSO have documentation on the whole sordid affair from the _main_original_author_ of the CDDL that this was, in fact, intentional, because a lot of the core Solaris developers were having a hissy fit that their code would end up in something like Linux.
See here:
http://saimei.acc.umu.se/pub/debian-meetings/2006/debconf6/theora-s…
Yes, it’s poor quality. It’s a very old version of the Ogg Theora codec.
Listen for Denise. She was the one that wrote the license.
So here are the FACTS:
1. CDDL was intentionally created to be incompatible with the GPLv2. While Sun says that they have no objections to people incorporating ZFS, it is still under an incompatible license.
2. RMS/FSF/GNU/etc say that CDDL is a FREE SOFTWARE license, out of their own mouths.
Funny that the ones complaining the most are usually the same people who use apache httpd and php every day.
Correct. Stallman’s Freedoms have nothing to do with freedom in the conventional sense of the word.
You have to read about Stallman’s carefully defined version of the word freedom before making a comparison. Only then will you understand the Freedoms that He hath provided for you so ye shall be Free.
Then I guess your freedom, as human, is also not freedom in that conventional sense of the word, since it’s for the most part based on restrictions imposed on everyone else just so you can enjoy it.
And not only you, also your derived wor… children
Unlike Stallman I haven’t provided my own definition of the word freedom. I’m perfectly fine with using Webster’s and if I ever made my own license I wouldn’t use my own definitions as a way of pushing my license. I’d use clear english and define the license parameters in technical terms and let people decide if they wanted to use it or not. I wouldn’t go on a crusade about how competing licenses are against my own carefully defined freedoms.
If you want to compare Stallman’s freedoms with common political freedoms then go ahead, but most people would think it is quite loony to compare being free to speak your mind with being free to look at someone else’s source, secret recipe or proprietary schematics.
My Grandma doesn’t let me know the secret to her meatloaf. I guess that means she is a freedom hater.
Care to explain how FSF licenses redefine the term “freedom”, and how that “new” definition is so utterly different from Webster’s?
To some people
Free = The ability for lazy and exploitative programmers to take code other people have written under an open license and then sell it to a third party by adding some indeterminate amount of functionality combined with a restrictive license.
That’s Webster’s definition of Freedom, I guess. Don’t you know?!
We could go into a lot of reasons as to why Sun chose to use the CDDL, most of them strategic and probably true, but the fact is that they chose it. It’s pretty irrelevant really.
I love that logic. By GNU’s definition its own license which it created, uses, and endorses is the best.
Well, I should certainly hope so. Otherwise they need to change one of them.
In related news, I declare myself the best “Bill Shooter of Bul” in history. A far superior one to “Carl shooter of Bul”. That guy’s a jackass who couldn’t bill shoot of bull if his life depended upon it. Everything he does ends up carl shot of bull.
After reading comments on that site, and even times past on this site, it would seem Linux users get really defensive when Solaris comes up. What’s the deal? Why do they feel so threatened by Solaris?
Because they know, that Solaris has the better technology, and they can’t “steal” it.
I have to agree with you there, for years I read comments by Linux fanboys saying “competition is good”. I guess it is good as long as Linux doesn’t have to compete against it.
And in no time at all, somebody had to mod this comment down (I know it was modded up to 3). I guess a couple of us hit a nerve with the Linux crowd.
And instead of engaging me in a discussion, just mod me down, how lame! Modding me down doesn’t change a thing, it doesn’t change my opinion of Linux or its rabid supporters, it just solidifies it.
Were you making any type of argument? (As opposed to simply throwing statements into the air *, **)
– Gilboa
* I didn’t mod you down, as I’ve already posted a message in this thread. But I would have modded this sub-thread down in a heartbeat if I could. This sub-thread is pure FUD. (Read: Not a single fact to go around)
** And yes, I use Linux, BSD, Solaris and Windows to develop code.
Alas, Linux’s device drivers can’t be ‘stolen’. However, it appears that OpenSolaris has finally stolen all of the important userspace software people actually care about. 🙂
And I thought Linux was just a Kernel…
Or were you talking about GNU/Linux?
Here is the great power, freedom, and liberation at work.
The acronym “FOSS” is a big f–kin’ joke.
It’s no joke, but it’s often misunderstood [points up]
One of the reasons might be that Sun, while being very generous with their own Open Source contributions, spoiled their image by backing SCO financially, and did a sort of mini-crusade against Linux before ramping up their OpenSolaris project. Cooking up the CDDL license that is intentionally not GPL compatible didn’t help either.
So, for better or worse, even though Sun could be a poster child for open computing (and all of us owe them a lot for contributions like OO.o), it is still perceived to be a bit too eager to succumb to the temptation of the dark side if it could be in their financial interests. Which makes sense, they are not a charity and can’t pay their personnel with goodwill alone.
It’s even worse:
Linux on X86 is a threat for the only CPU architecture available under GPL (SPARC).
It shouldn’t be a shock that they went after Linux when cheap x86 boxes and Linux destroyed their profit margins.
Of course it was a poor strategy but they were threatened by Linux at the time and McNealy just wanted it to go away.
Anyways those days are over and McNealy/Sun have been thoroughly punished for their actions.
That isn’t limited to just Sun, it also hit HP and IBM hard. And just because they now drink the Linux kool-aid doesn’t mean it didn’t cost them as much if not more than Sun.
Both HP-UX and AIX suffered because they pulled development talent from their respective UNIX variants to satisfy the demand for “cheap x86 servers running Linux”. That is why HP still produces the Integrity line of servers, with starting prices by request (the last time I checked it was $300,000.00), and IBM still produces the pSeries. For all the capabilities that Linux doesn’t have, like the ability to create a vPar (HP-UX) or an LPAR or WPAR (AIX).
Sorry, but this is completely untrue. At the time, SCO was shaking their legal fist at everyone involved with UNIX or GNU/Linux. Sun was *legally required* to license certain rights from SCO. Remember that Sun is a publicly traded company, so unlike most GNU/Linux distributions has a lot more legal requirements to worry about.
They did not “fund SCO” anymore than customers of SCO that licensed copies of SCO Unix did.
Actually, nobody was legally required to give SCO anything. They got all their money from Microsoft and Sun. Coincidence, no? Why not SGI, HP…?
Yeah, if some company suddenly found need for 50000 SCO licenses right when the scam started, it would have raised some eyebrows as well.
SGI and HP weren’t launching a new operating system based on UNIX in 2005 either were they?
Think about it. Then think about all of the items Sun has open sourced since then. Then make the connection between licensing and software.
Your speculation is nothing more than that; speculation.
Maybe because of the fair share of “linsucks” or “Linux is for bitches” posted?
On that site alone I see more Solaris users being offensive than anything.
Or maybe you are imagining things? Or maybe you are mistaking the attitude of a few people for that of “Linux Users” in general? Or maybe you are trying to stir up a conflict? Haven’t we had enough of that?
Exactly!! Let’s see if we can have a discussion of the pros and cons for a change!
A conversation based on “to each their own” would be better.
Or just the differences. Not all differences are pros or cons. I’m mostly a Linux guy, these days. But I was a Unix guy for years before there was any such thing as Linux. If Solaris and/or OpenSolaris can get POSIX in the door where Linux would be less likely to, even if only because of the Sun name, I’m perfectly fine with that. It’s time that all of us in the unix-like OS community came to terms with the fact that the real enemy is not each other.
Well, pros and cons from the article:
I fail to see how having a fixed ABI will help you when you have far, far less device support. The whole point of having a stable ABI is to get more hardware support and drivers – and there isn’t more.
ZFS is certainly an advantage, without doubt. I remain to be convinced and will need to see it really tested on a lot of hardware like little NAS RAID systems with little memory (and I apply that equally to Btrfs I might add), but being able to do things like snapshots in an integrated system, with a GUI no less, is certainly an advantage. Linux has LVM but it isn’t guaranteed to be there nor do you have any good userland tools or GUIs for it as a result. You can’t underestimate just having something ‘there’.
Definitely an advantage, but not by much. The only thing you can compare it with in the Linux world is OpenVZ but that generally needs a different patched kernel. For those who know they want it, probably not much of an issue but then I do say that KVM being kernel integrated will be important. The userland tools are comparable, but it’s probably easier to manage resources with Zones. Beancounters in OpenVZ take some getting used to. You can also ‘group’ processes with resource limits that you can’t do [well] with Linux systems.
Not surprised. The hardware support just isn’t there, nor is the optimisation that comes from developing such a system for many years as Linux has been. There’s a big legacy there. I’m not convinced about ZFS there because it has redundancy that you simply don’t need on many systems.
Definitely a disadvantage. It was clear that the GNU userland tools were better, were seeing more development and more usable years ago (I cannot stand tar on Solaris) but Sun insisted on not migrating to them or not improving their own userland tools for reasons best known to them.
All in all, OpenSolaris is something that Solaris desperately needed for its own well-being. However, given that a lot of things that are relevant to ‘Unix’ desktop users and developers are almost ten years behind – when the Unix workstation world largely moved to Linux on commodity hardware because Unix systems like Solaris wouldn’t – then it’s a very tough sell now.
Personally I think about the stable ABI interface most of the times I update opensolaris, and *every* time I update my linux boxes (due to the Nvidia drivers)
Correction: I think this also has to do with the binary blob clause … which Solaris doesn’t have, and I appreciate.
The problem that I have with the article is that it equates a stable ABI to having better driver support. It just doesn’t.
It stands to reason that over time, and all other things being equal, it would be a positive factor. Of course, all things are not equal, and Linux has the jump on late-comer OpenSolaris for drivers. We’ve all heard the kernel devs’ rationale for not having a stable internal abi. OpenSolaris, if we watch it carefully, eschewing personal bias as much as possible on the topic, has the potential to indicate something about how much that lack of a stable internal abi is costing us.
We should pay attention to this experiment.
It will help you the next time you upgrade the system. I have kernel modules here that were compiled with Solaris 10, and they still work on the newest OpenSolaris build.
If the driver situation is so bad, why do I have OpenSolaris working on an EeePC and several other laptops? The only thing not working is Bluetooth.
We have a production environment with tens of TB on ZFS. Works like a charm.
There were lots of fixes in ZFS to make it run better on all kinds of hardware.
I can’t understand your complaining about RAM usage. RAM is dirt cheap today and getting even cheaper tomorrow. And it’s not like the money is wasted: all free RAM is used for the ARC cache.
The interesting part of (Open)Solaris Containers is the resource management capability; you can tune almost every setting. Combined with Crossbow, I wonder why more hosters aren’t using Solaris.
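For the curious, a minimal sketch of what that resource management looks like with zonecfg/zoneadm – zone name, paths and limits below are made up, and capped memory needs rcapd running:
# zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> add capped-memory
zonecfg:webzone:capped-memory> set physical=512m
zonecfg:webzone:capped-memory> end
zonecfg:webzone> set cpu-shares=20
zonecfg:webzone> commit
zonecfg:webzone> exit
# zoneadm -z webzone install
# zoneadm -z webzone boot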
In Linux there are many performance tweaks that put your data at risk. Avoiding that, adds of course overhead.
Some GNU Tools offer more options. But most of them are not POSIX compliant, and should not be used for scripting. Some are even broken (e.g. GNU tar).
From a user and developer point of view, the gaps are closing. From a server admin’s view OpenSolaris is miles ahead.
Why does a storage device have to have so little memory? In an age where gigs of RAM can be put into a small space, why only 128 MB? Isn’t that like pointing a double barreled shotgun at both feet?
Price. The consumer market is cut-throat competitive. Linksys dumped Linux for vxWorks in the WRT54G to save less memory than that.
While I am not generally a detractor of ZFS, I will say that the idea of a *filesystem* having significant *memory requirements* has always given me pause.
ZFS has some nice features in the context of consumer NAS devices. But memory footprint is not one of them.
The “memory requirements” thing is getting out of hand. I have an Enterprise 5120 that has 8 GB of memory and has a 1.3 TB volume attached to it. ZFS uses memory as long as it is not required by either the OS or applications.
This particular machine is a development box and runs Java applications without any problems. The developers would be all over me if there was a performance problem (perceived or otherwise).
Nagios and sar show varying states of system memory, but at no point has the system gone down due to a memory condition.
I don’t think memory is that expensive that a consumer grade device cannot have a GB of memory in it. For that matter why would a geek want a device with so little memory in it in the first place knowing that the performance will suck?
ZFS doesn’t require gobs of RAM. It uses it if it’s available. It can be tuned to use as little as 64 MB of RAM via sysctl (haven’t personally tried anything lower than 64 MB).
Several people on the FreeBSD mailing lists are using ZFS on FreeBSD with as little as 512 MB of RAM. With the right tuning and optimising of the OS and services, you should be able to get things running with less.
However, why would you want to? A storage device (even an el-cheapo home NAS) should have lots of RAM for caching. Otherwise, performance is just going to be crappy. You don’t need 4 GB of RAM in a home NAS box, but 256-512 MB would be doable without adding much to the cost.
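For anyone curious what that tuning looks like, these are the sort of knobs people were setting in /boot/loader.conf on FreeBSD 7.x – the values are illustrative, not recommendations:
# /boot/loader.conf – cap the ZFS ARC on a low-memory box
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="64M"
vfs.zfs.prefetch_disable=1
After a reboot the ARC should stay within the configured cap.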
I was not trying to stir up conflict I was asking a legitimate question.
Just look at this thread and answer to the question with a straight face.
in general on OSNews, I mainly see evidence to support the second possibility I mentioned: mistaking the attitudes of a few vocal people as representing that of a community in general. This happens all the time, on a variety of topics. And I would guess that you, yourself, have fallen into that trap. Just think a little harder about the concept of communities to free yourself from it.
Ya ya.
As a reasonable person you can probably understand that these vocal people capture the attention of readers and ruin any kind of intelligent discussion. I am just waiting for the “paid Microsoft shill” comments to appear here.
If I’ve fallen into some trap, then fine, but I believe these comments beyond reason are part of the Linux culture. I do not like Slashdot and it would be sad if this place turned into one. Even LWN is nowadays polluted by these idiots.
Like it or not, these are the people who you encounter first in any public Internet forum related to Linux.
http://www.osnews.com/user/strcpy
Date Joined: 2009-05-20
Status: Active
Bio: One of my hobbies is to troll here and make fun at the expense of Linux fanboys.
…….
And your point is?
I can assure you that no ‘Linux user’ feels threatened by Solaris, especially considering that all the important software is shared with OpenSolaris. However, when you get articles comparing Linux distributions and ‘OpenSolaris’ and effectively telling us that it is a comparative alternative at times then you’re going to get some comments questioning it and asking why.
When I installed Indiana I was transported back to installing Linux in 2000, together with its hardware support. If Sun had started OpenSolaris then, it might certainly have been relevant to people now, but it just isn’t. Sorry.
Then stop posting idiotic BS to every story featuring Sun and/or Solaris.
It is a comparative alternative to Linux. A good one too.
For me it was like most operating system installations. But then again, I do not rate operating systems by the pretty pictures shown during the installer. Actually, installing Linux was pretty much exactly the same procedure in 2000 as it is in 2009.
OpenSolaris is Unix, but as has been noted, not UNIX(tm). (Like it would matter.) Already this implies that certain knowledge is required; surely the target audience is not the same as Ubuntu’s.
As for hardware support, I feel that it is quite adequate. But I find it annoying that this has become a straw-man argument when Linux is compared to anything. Maybe you are so eager to use it as an argument because it is often used as an argument against Linux?
For those with open mind and spare hardware: test it out and see it for yourself. Bizarre gadgets won’t work for sure, but “standard” PCs as well as quite a few laptops work without problems from my experience. (If you are doing servers, you already know what you are doing, or at least should know.)
EDIT: grammar.
I read through a few of the comments and all I saw is that opensolaris has its share of problems and zfs isn’t really ready for prime time yet.
Given that linux has a huge install base, that opensolaris isn’t some magical perfect os, and the uncertain issue of sun/oracle’s control, it’s not difficult to understand why it’s not that exciting.
That being said, if a company has some issue with running linux or bsd (due to lack of a desirable single corporate backer), I’d prefer them deploying opensolaris over any windows product any day.
Don’t let the Linux fanboys fool you on ZFS, I have been using it in production since the 6/06 Release of Solaris 10 came out on multiple 1 TB disk arrays (StorEdge 3510 and 2540). And while it is not perfect for every task, it works extremely well.
And while I am sure any number of people can point to “horror stories” about ZFS disasters, that can be said about any filesystem. The only problem I have with ZFS is that when you build a system with ZFSroot, you cannot perform a Solaris Flash installation of that machine.
We’ve also been using ZFS since August 08 (on FreeBSD). The only major issues we had were due to pilot-error on the initial setup (don’t use 24 drives to create a single raidz2 vdev!!). Since then, our two backup servers have been doing remote backups for over 100 Linux and FreeBSD servers without issues. Having daily snapshots is so much simpler to manage than full/incremental backup sets.
Been using it at home for network storage as well, on a 32-bit system. The occasional lockup while compiling KDE updates, but otherwise good. No data corruption or data loss, which is nice. Again, having daily snapshots is so convenient.
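In case it’s useful to anyone, the daily-snapshot workflow is roughly this (pool/dataset names and the remote host are made up):
# zfs snapshot tank/backups@2009-09-16
# zfs list -t snapshot
# zfs send tank/backups@2009-09-16 | ssh backuphost zfs receive dpool/backups
Incrementals (zfs send -i) keep the transfer small once the first full copy is on the other side.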
I used ZFS on a 1 GB RAM Pentium 4 for over a year without problems. Granted, ZFS being 128 bit, it doesn’t like a 32 bit CPU, as I only got 20-30MB/sec speed. Now I have a quad core with 4 GB RAM (which must be considered a non-exceptional computer) and I have never had a ZFS problem. In fact I have done weird things to my ZFS raid. Shut down the computer in the middle of different things, because I trust ZFS. I have talked to other people on forums etc, and they never had ZFS problems either. Maybe some sysadmins had problems with their high-load, large installations – but for a normal average user – nope. ZFS is problem free. You only hear about those having problems, but you never hear about those where ZFS runs without problems.
I can not see myself going away from ZFS. Maybe btrfs if it is as easy to manage as ZFS. But ZFS has been out for five years and still has some bugs. And SUN has very good engineers. Btrfs will have some bugs even 5 years after. Then ZFS will have improved still. I believe I have to stick to ZFS for quite some time. At least 5 years.
You guys just can’t help bringing political BS and fanboyism to every single thread right? I couldn’t care less about Stallman or how offensive / defensive Linux / Solaris users are!
By the way, the article itself isn’t very noteworthy: Anyone with a live cd / usb will find out soon enough, I was expecting some more actually useful tips, because the last time I checked out, there were quite a few sensible differences in the userland apps (not just the custom GNOME, I’m also talking about cli tools).
That depends what software repository you use …
You are not limited to IPS, but even IPS has a lot of custom repositories.
You can also use openpkg or NetBSD’s crossplatform pkgsrc, here is simple way to add pkgsrc to OpenSolaris installation:
% pfexec su -
# pkg install SUNWgcc
# pkg install SUNWcvs
# cd /usr
# cvs -z9 -d [email protected]:/cvsroot co pkgsrc
# cd /usr/pkgsrc/bootstrap
# export PATH=/usr/ccs/bin:${PATH}
# ./bootstrap
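Once the bootstrap finishes (default prefix /usr/pkg, if memory serves), building something from pkgsrc is just, for example:
# cd /usr/pkgsrc/misc/figlet
# /usr/pkg/bin/bmake install
(The package path above is picked at random; substitute whatever you actually want to build.)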
I can bet money they would consider Unix 32v on a VAX, no longer a ‘Unix’.
The Open Group & their ilk have done their best to keep Unix fragmented, fought over, and trampled by Windows NT, while Linux busily ate their lunch and they charged insane prices for the honor of being called a “unix”.
I’m sure Microsoft thanks them every day for letting Windows NT soak up most of their market.
This is exactly the same type of joke as all the certifications that plague the IT industry.
Sometimes to get a certain contract, you or your products need to be certified otherwise you won’t get the deal. This has created a type of Mafia industry around all these types of certifications.
I do know that some certification is needed and many are indeed quite good, unfortunately most of them are a joke.
Not to mention the attitude that *BSD is not a UNIX anymore, when it is CLEARLY derived from the 32v core which is UNIX.
IMHO these ‘unix’ people have destroyed what was the very meaning and core of the REAL Unix.
Considering how many people think that Linux is UNIX, they have already become a kleenex.
If they wanted to ‘help’ the industry they would encourage openness (real OPEN software) and maybe propose something to bring UNIX out of the 1970’s… But I know, why fix what isn’t broken….
The fact that OS/400 even gets listed as a “unix” just tells you how bogus “UNIX” is now.
OpenBSD is far more a UNIX then OS/400.
O_O Is OS/400 really considered UNIX?
I know I’ve seen it somewhere…
But in the meantime here is z/os listed as a unix:
http://www.opengroup.org/openbrand/register/brand3470.htm
I bet if I had $150,000 lying around I could get CP/M branded a UNIX.
The whole thing is a joke.
I’m a young, inexperienced Unix user, but I have observed that OpenSolaris development is heading in the right direction.
This Unix is becoming more user friendly and easier to use for the average user.
For example, in the 2008.11 release the system had no multimedia support, and Package Manager usually crashed when too many packages and repositories were installed.
In the 2009.06 release Package Manager is upgraded and works without problems, multimedia players are in the repos, and more programs are being ported.
Developers have added more graphical configuration tools, and booting is faster.
So for me that’s a good prognosis for OpenSolaris’ future.
Well, I couldn’t get the GUI to start on the installation CD – I have both an integrated and a regular video card (GF6100 and GF9600).
And the installer is graphical only.
I had to run vncserver and connect to it from another box. Only then I could perform install.
Updates are perfect – you can always boot in the older version, if you want. ZFS snapshots are great too, but too RAM-consuming.
Nearly all hardware I use was diagnosed out-of-box. Though, SUN GNOME is really, really slow on A64 2800+ / 1Gb of RAM / PATA 20Gb HDD I dedicated to OpenSolaris.
Yep, I am cheap. And I am used to Linux distros working well on a Celeron 400 / 192Mb. I mean, browser, vnc connection, some consoles, xchat and openvpn tunnels.
Maybe it’s zfs to blame, maybe something else. But when you try to change window manager – oops, community is still working on KDE and XFCE, all tools are gnome-only, etc.
When I compiled and installed fluxbox – it started to fly, though.
As a desktop it does not differ much from Linux distros – except for a much more sane (and limited) package management. Though, gnome-centrism and some other glitches (I could not make layout switching work, firefox didn’t start from user, etc) are somewhat distracting from work.
I had troubles compiling some software, namely wbar and openvpn (I’m using 121 milestone of OpenSolaris).
Overall it is definitely not a choice for reviving an old typewriter.
I’d advise to use it as your main OS if you’re an OS geek, or if you need to get a grip on OpenSolaris/SunOS for your work.
Well, it is a fact that if you want ZFS you need RAM. ZFS consumes quite a lot of RAM; in fact, the same system installed with UFS versus ZFS (Nevada build) shows a lot of difference. But ZFS is far faster than UFS because it uses RAM as cache.
For personal use it’s not a big gain if we want to run it on older computers, but in the real enterprise world … well … let me say this … VxVM and SVM take a long time to sync raids … with ZFS it’s blazing fast. In the case that I saw we went from 1 hour down to 15 minutes … that is how fast it syncs.
ZFS snapshots don’t consume RAM. They consume disk space, and only for the files that have changed since the snapshot was made.
“[…] to make sure that it became a usable home system rather than just a server OS, Sun hired Ian Murdock, founder of the Debian project, to produce OpenSolaris.”
And that was the worst thing they could possibly do. Murdock is responsible for all of the bloat in Debian, he probably has a different perception of the world, his thinking goes too slow, so he needs a slower OS I suppose.
And Gnome … ? I mean, c’mon! Why not let the end-user decide what GUI [if any!] he wants for himself?! What a selfish dictate. Some people find Gnome completely obtrusive and unusable, and I count myself in this group of people. [I know I can build OSOL myself, but I usually don’t have time for such things. I also know I can use Schillix or anything else, but it’s not the point]
I tried Solaris and then OpenSolaris in the past and I was equally disappointed. The system runs terribly slow, and the startup scripts are completely not to my personal taste [they’re unnecessarily overcomplicated] as I prefer BSD-like startup scripts. I won’t mention all the “auto-tools” implemented in the system – I find them unintuitive, as you’ll probably get further by simply editing the scripts by hand.
Anyway – maybe OpenSolaris is innovative in some points [ZFS, DTrace, etc] and you can’t argue with that, but I can imagine similar tools on other platforms as well.
I consider OpenSolaris merely a test field for Solaris – nothing else. Sun wants to have a free ride on OSS community and they certainly get it.
“Nearly all hardware I use was diagnosed out-of-box. Though, SUN GNOME is really, really slow on A64 2800+ / 1Gb of RAM / PATA 20Gb HDD I dedicated to OpenSolaris.”
I used the 2008.11 version on 3 GB of 800 MHz RAM, a 250 GB ATA HDD and a Core 2 Duo E6550, and I HAD lots of problems with IPS and TimeSlider. In the current version these problems are resolved.
So you have enough RAM; probably 2 GB is needed.
Tar in the Solaris console is a real nightmare and I’m not able to use it, so I always use GNOME’s archive manager. A similar problem is deleting non-empty folders through the console.
Yes, I know I don’t have enough RAM.
With fluxbox I have ~170Mbs free (~850Mb occupied), with gnome I swap constantly.
So, yes, 2Gb is a must, and 512Mb will thrash the HDD even without Xorg.
That means no-go for most pre-2007 machines. A real disadvantage for a geek, because most of us test new OSes on second or third machine, keeping the main (the most powerful one) for gaming^Wwork.
tar – oh, yeah. bzip2 -d and then tar xvf… Even AIX has better tar.
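For the record, the usual workaround, since the stock Solaris tar has no -j/-z flags (the gtar path assumes the bundled GNU tools package is installed, which may not be the case):
# bzcat archive.tar.bz2 | tar xvf -
# gzcat archive.tar.gz | tar xvf -
# /usr/sfw/bin/gtar xjvf archive.tar.bz2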
And, yes, translation to Russian is half-finished and sometimes pathetic – you need to switch on English again to understand what they meant.
Considering that I have had two comments modded down to -1 for nothing more than pointing out how thin skinned members of the Linux community here are, I have an idea that might limit the “mod the %^(*&% down” thing.
Before you can mod a post down, you have to submit a 1,000 word essay as to why the post should be modded down. This, of course is subject to the approval of the OSNews staff. And once the approval is granted (based on how fast the staff can read all those essays), the post is modded down.
With the lack of meta-moderation here, I am sure a number of users would like some degree of sanity checking of the use of the moderation system, particularly in the fashion it is being used with this article.
Wow, it’s getting bad here when suggestions are now getting modded down. Another idea would be to take away the ability to mod down posts altogether!
Just like a Slashdot, each passing day.
I’d go for a little more “moderate moderation”. Say, you would need ten points to mod a comment by a single point down.
A few years ago I used to meta-moderate for Slashdot. While I had a bullet-proof Slashdot karma, I could never use it because I was always meta-moderating (no good deed goes unpunished). I mentioned it to Cmdr Taco more than once and he laughed it off, until I logged out for the last time.
From what I saw at that time if you got modded as a troll, you pretty much deserved it. The majority of the posts I looked over were modded well. In the six or seven months I meta-moderated I only changed one post.
This place is still like the Wild Wild West!
In my opinion an OpenSolaris port to ARM may reduce its demand for RAM; embedded systems don’t have a lot of memory, so OSOL in such a port would have to use less RAM.
Maybe Sun engineers will optimise RAM usage in the next release in February? Who knows, they’ve got 4 months for it.
Embedded OpenSolaris sounds like a very bad idea. Ditto for file systems like ZFS.
Have you ever used it?
I don’t need to test solaris to determine that it’s unsuitable for embedded systems. It’s a server OS, with optimizations relevant to servers, and very little prior art in embedded space.
I know that ZFS on ARM is a bad idea and it’s not going to be accepted on embedded devices, but it would be a milestone for ZFS optimisation; the fact of launching OSOL on ARM would mean a lot of work on RAM-consumption optimisation.
It’s a shame really. In 1999, the chances of your hitting a Solaris/SunOS machine between your browser and the web installation of the site you were going to was probably near 70%. The chances of hitting a Linux machine? 10%, maybe. You were probably more likely to hit a FreeBSD machine (Yahoo).
Now, the chances of hitting a machine running Linux are almost 100%. Between the Cisco routers, wireless access points, load balancers, not to mention web servers, app servers, and DB servers. Linux is the Solaris of the 90s. Well respected, well trusted, and carrier-grade.
The chances of hitting a Solaris system now? I’d guess 5% maybe. Even less likely for OpenSolaris (despite Schwartz’s claims). Most likely somewhere in the storage stack, but certainly not in the network devices along the way, and most people replaced their outrageously expensive Sun web/app servers in 2000-2002 with x86 Linux servers, and didn’t bother going back when Solaris re-x86’d.
It’s sad, because I really liked Solaris a lot. It just got a lot less relevant. Not irrelevant, mind you, just a lot less relevant.
The most shameful part is that when companies saw their operational costs go down, they thought they were getting a bargain. Unless Linus himself created a business and the applications along with it, there is *no* company that can claim to give the same level of “enterprise support” for Linux that can be claimed by Sun for Solaris on SPARC or IBM for AIX.
I would disagree. While it’s certainly logical to argue that, in theory, Linux doesn’t have support the same way that Sun and AIX do, in actuality there’s not a Linux support problem.
That is, people who actually use Linux don’t complain about a lack of support. Between the many Linux vendors such as RedHat and SuSE, to the OEMs who provide their own support such as F5 and Cisco, as well as just doing simple Google searches (is Google the world’s leading support resource?) you don’t see people complaining. I’d say Linux support is on par with Sun, at the very least.
Not to say Linux doesn’t have its challenges, just like any other ecosystem. It’s just that support isn’t really one of them.
AIX, HP-UX, Mac OS X, SCO OpenServer, Solaris, and Tru64 UNIX (maybe a few others) are UNIX®.
BSD Unix, OPENSTEP and numerous others old and new are Unix.
Linux, MINIX, Plan 9, and QNX are Unix-like.