Linked by Thom Holwerda on Mon 20th Jul 2009 19:16 UTC
The Linux desktop has come a long way. It's a fully usable, stable, and secure operating system that can be used quite easily by the masses. Not too long ago, Sun figured they could do the same by starting Project Indiana, which is supposed to deliver a complete distribution of OpenSolaris in a manner similar to GNU/Linux. After using the latest version for a while, I'm wondering: why?
I have a different experience
by Kebabbert on Tue 21st Jul 2009 12:18 UTC

For me, OpenSolaris 2009.06 installed in 15 minutes. It boots in 30 seconds and powers off in 5-10 seconds. It is well known that OpenSolaris has limited hardware support and will not even install on some configurations. I suspect the reviewer has hardware that OpenSolaris doesn't like, and that is why it behaves strangely. For me OpenSolaris is fast and snappy, and I suspect that if the reviewer tried it on other hardware, he would find that it behaves just like an ordinary Linux. I promise you that if OpenSolaris sucked on my hardware, I wouldn't use it either. I'm not a masochist; I would have preferred Ubuntu then.

I have now automatically upgraded OpenSolaris 2009.06 to build 117 via the "Update Manager" and everything is smooth and fast. One of the things I love about ZFS is boot environments (BEs). Before upgrading or installing a package via IPS, the system automatically takes a ZFS snapshot, i.e. a new BE. All these BEs show up in GRUB. If a new update or installation breaks something, you simply boot from your old BE in GRUB. You can have many BEs to choose from: one with Oracle vXXX, another with vYYY, and so on.
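
For anyone who hasn't seen it, this is roughly what BEs look like from the command line (a minimal sketch; the BE name here is made up, and on 2009.06 the same thing happens behind the GUI Update Manager):

    # List existing boot environments; the active one is flagged in the output.
    beadm list

    # IPS creates a new BE automatically on 'pkg image-update', but you can
    # also make a copy of the current environment by hand before experimenting.
    beadm create before-oracle-upgrade

    # If the running environment then breaks, pick the old BE from the GRUB
    # menu at boot, or activate the copy explicitly and reboot.
    beadm activate before-oracle-upgrade
    init 6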

When ZFS takes a snapshot, new data is simply written to another place on the hard drive; all the old data stays intact and untouched. You then choose via GRUB which BE to boot from. This makes it safe to try out anything and to roll back with a reboot, from GRUB. This is a killer feature. Imagine a server where you have to do a rollback: just reboot and everything is exactly as it was.
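
The same copy-on-write mechanics are exposed directly through the zfs command, not just through BEs. A small sketch, with a hypothetical dataset rpool/export/home:

    # Take a snapshot before doing anything risky; this is nearly instant,
    # because nothing is copied -- new writes just go to fresh blocks.
    zfs snapshot rpool/export/home@before-change

    # See existing snapshots and how much space the changes have consumed.
    zfs list -t snapshot

    # Roll the dataset back to the snapshot, discarding everything written since.
    zfs rollback rpool/export/home@before-change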

As for ZFS being a memory hog: it grabs all the RAM it can get as a cache, but immediately releases it when asked. On a smaller system with 8GB RAM there is surprisingly little disk activity. ZFS is an enterprise filesystem, and it would not work if it refused to release RAM on demand; stating otherwise is FUD. But it is nice to see that SEGEDUNUM has now stopped claiming that ZFS requires several GB of RAM. I, and others here, have told him many times that 512MB RAM suffices, but to no avail. Now, at last, he seems to have understood. Better late than never.
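
If you want to watch that behaviour yourself, the cache in question is the ARC, and on Solaris-type systems you can observe it via kstat and, if you really insist, cap it. A sketch; the 2GB figure is just an example value:

    # Current ARC size and its configured ceiling, in bytes.
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max

    # To cap the ARC (normally unnecessary, since it shrinks under memory
    # pressure), add a line like this to /etc/system and reboot:
    #   set zfs:zfs_arc_max = 0x80000000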

Anyway, there is ONE main reason to use ZFS: it protects against silent corruption. ECC RAM corrects bits flipped by radiation, power spikes, bugs, etc.; ZFS does the same thing for the data on disk. No other filesystem or solution does that. fsck checks only the metadata; the data itself is never checked. Hardware RAID doesn't fix this problem either. For instance, hardware RAID would never have fixed the problem described in this short text:
http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
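
To make that concrete: in a redundant ZFS pool every block is verified against its checksum on read, and a scrub walks the whole pool on demand and repairs anything that fails the check from the good copy. A minimal sketch with a hypothetical pool called tank:

    # Verify every block in the pool against its checksum and repair
    # silently corrupted blocks from the mirror/raidz redundancy.
    zpool scrub tank

    # Show scrub progress and any checksum errors that were found and fixed.
    zpool status -v tank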

In fact, a modern hard disk dedicates 20% of its surface to error-correcting codes, and still there are errors it cannot repair or even detect. The chance of getting an erroneous bit is constant, albeit very low: approximately 1 bit in 3 x 10^7 bits will be wrong, according to research at the physics centre CERN:
http://storagemojo.com/2007/09/19/cerns-data-corruption-research/

The larger the drives, the more bits you read, and soon you will have read many batches of 3 x 10^7 bits. The more bits you read, the more often you get silent corruption, which the hardware neither notices nor reports. In today's large RAID arrays you will always get flipped bits, and it will only get worse as disks grow in size. That is why RAID-5 will soon be obsolete (there are simply too many bits involved):
http://blogs.zdnet.com/storage/?p=162

However, ZFS fixes all of these problems; it was designed to handle exactly this kind of error, among others. As the ZFS chief architect puts it:
http://blogs.sun.com/bonwick/entry/zfs_end_to_end_data
"The job of any filesystem boils down to this: when asked to read a block, it should return the same data that was previously written to that block. If it can't do that -- because the disk is offline or the data has been damaged or tampered with -- it should detect this and return an error.

Incredibly, most filesystems fail this test. They depend on the underlying hardware to detect and report errors. If a disk simply returns bad data, the average filesystem won't even detect it."

This is THE text on ZFS to read, by the ZFS chief architect:
http://queue.acm.org/detail.cfm?id=1317400
"one of the design principles we set for ZFS was: never, ever trust the underlying hardware."

We also have to remember that OpenSolaris is very young and evolves quickly; I speculate it will soon be on par with Linux. If you had tried a brand new Linux distribution that was only two years old, maybe you would have had problems too.

But when everything works and the hardware is supported, OpenSolaris is a very pleasant experience.
