Sun has recently released Solaris 10. It is currently free, as in beer, and most of it is promised to be released under an OSI approved license in the second quarter of 2005. Most everyone reading this probably knows all of that. The release and subsequent open sourcing of Solaris 10 has caused quite an uproar in the Open Source community and the IT industry as a whole. Read the review here.
I tried to install it on an Athlon XP 1800+ and it took freaking forever. I ended up stopping it and installed Ubuntu in 10 minutes.
For a general server setup to run a web server or whatever, Solaris 10 is overkill.
I am sure this OS has features that are not available in linux…and I am sure very few people care. Those that do are probably already Solaris users. Sun is too late getting into the community development game (not to be confused with opening their codebases), and the window of opportunity for a Microsoft alternative closed after the linux 2.6 kernel.
Does anyone know if the x86 version can run in Virtual PC 2004? I would love to try it out.
Thanks
Despite the trolls:
1) Too late
2) Took forever to install
Solaris does have some impressive features, such as DTrace and the new Service Management Facility (aka the SysV init replacement). I personally have no use for Zones; however, it is impressive that the Zones still use only one running kernel.
I have downloaded Solaris 10 and all I need is a few spare moments to install it. For the record, I am a *nix user (*BSD, Linux).
It's worth a look.
I had a prerelease running on VPC and am installing it right now…
How do Zones differ from BSD jails?
When can we expect Sun to be shipping this on CD or DVD?
I liked the way they put the ISOs on their site.
I run it on VMware Workstation because it got stuck on a normal boot…
The installation was really slow (I used the minimum components).
And the most annoying thing was it couldn't detect my video card (GeForce4 MX440), so I had to stay in 640×480@60Hz mode.
Overall it detected all the rest of the hardware automatically, including the EMU10K DSP on my sound card.
I'm seriously waiting for the final release!
Can I throw Solaris 10 on a 2nd IDE drive with little hassle? I have a 10GB disk I use for testing that I could throw it on…
Anyone who thinks Solaris 10 is unimportant is either uninformed or has never really administered anything.
The fact is, much of what Solaris 10 is and does is pretty cool and will end up appearing in Linux, and possibly Windows, at some point. This is enterprise stuff, not for little 10-workstation LANs or your home web server. The Sun servers I've worked with serve hundreds of transactions every minute of every day. They run applications for thousands of concurrent users. This is big iron. This is enterprise level, high quality, high availability machinery. One Sun server we had acted as the mail relay for over 65,000 users for a US Department of Defense organization.
Solaris isn’t built for your desktop, although it could be used that way. Solaris is big time UNIX for serious administrators. So, unless you understand the kind of environment Solaris is tuned for, or want to, move along please.
"Anyone who thinks Solaris 10 is unimportant is either uninformed or has never really administered anything."
<P>
If this is true, then why do you say this later on:
<P>
"Solaris isn't built for your desktop, although it could be used that way. Solaris is big time UNIX for serious administrators. So, unless you understand the kind of environment Solaris is tuned for, or want to, move along please."
<P>
Doesn’t that reinforce why Solaris is unimportant to so many of us?
<P>
Yes, I appreciate some of its attributes in an enterprise server environment, but from the point of view of a home user or even a corporate desktop user, Solaris has very little to offer that I didn't already have, and it's missing some features that would make it much more useful (to me).
That's right, I forgot. This forum supports some markup, but not the same subset as several other forums I frequent (where <P> tags are required to enforce paragraph breaks).
And people wonder why I hate web-based forum software.
I wish this was comp.os.general.discussion instead… 🙁
Does this guy have any idea what SVR4 Unix is? The reason why Solaris is laid out in this fashion is for compatibility with SunOS (BSD based). There might be some operating systems that can be installed without a reboot, but most I have come in contact with (Solaris and Windows) require a reboot. I really don’t know if that is a “big” feature or not, after you install a Patch Cluster (standard Solaris practice) you have to reboot.
The slow response time is more than likely due to Solaris’ “safety factor” use of PIO mode for IDE transfers with CD/DVD ROM devices. During the beta testing there were some people who had problems using IDE CD/DVD drives in Ultra DMA Mode:
Feb 11 23:25:39 robert4 genunix: [ID 846691 kern.info] model DVD-ROM DDU1621
Feb 11 23:25:39 robert4 pci: [ID 370704 kern.info] PCI-device: ide@1, ata1
Feb 11 23:25:39 robert4 genunix: [ID 936769 kern.info] ata1 is /pci@0,0/pci-ide@11,1/ide@1
Feb 11 23:25:39 robert4 genunix: [ID 935449 kern.info] ATA DMA off: disabled. Control with "atapi-cd-dma-enabled" property
Feb 11 23:25:39 robert4 genunix: [ID 882269 kern.info] PIO mode 4 selected
Feb 11 23:25:39 robert4 genunix: [ID 935449 kern.info] ATA DMA off: disabled. Control with "atapi-cd-dma-enabled" property
Feb 11 23:25:39 robert4 genunix: [ID 882269 kern.info] PIO mode 4 selected
This is easy enough to fix: run eeprom atapi-cd-dma-enabled=1 and reboot (as root). The reason the root shell is /sbin/sh is that binaries in /sbin are statically linked, and in case of a "data disaster" the last thing you need is to find out you cannot log in as root because /usr/lib was not mounted (for those of us who use a separate partition for /usr).
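For anyone who wants to try it, the whole fix is a two-liner as root (the query output below is just illustrative; on x86 the setting lands in /boot/solaris/bootenv.rc, if memory serves, and init 6 reboots the box):
# eeprom atapi-cd-dma-enabled
atapi-cd-dma-enabled=0
# eeprom atapi-cd-dma-enabled=1
# init 6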
Solaris 10 is being positioned as an enterprise desktop, not an individual's machine. Most people could use it as a desktop machine if they chose to (with some customization). I hope to have my review of Solaris 10 here soon, which should answer some questions from an administrator's point of view.
"web-based forum software."
I think it's more like a comment system… UBB or vBulletin is a forum.
If it’s ‘Big Time UNIX for serious administrators’, why are they giving it away for free download? Surely the ‘Big Time serious administrators’ aren’t averse to paying for tools that do their ‘serious Big Time administration’ work?
Most UNIX-savvy people that I know have already 'moved on' – they've moved on from Solaris 7/8/9 to Linux or FreeBSD, which actually do cater to desktop, low-end and non-expert users, as well as the high end with the appropriate vendor (SGI, IBM etc.) extensions and support.
Sun dropped its workstation market, and with it its ability to acquire new users on the platform. Now Solaris admins are simply a dying breed. If OpenSolaris doesn't offer an opportunity to pick up new users at the low end, well, I don't see a future for Sun's OS at all.
I mean, if Solaris's one and only strength is the high-end server market, then yeah, it is pretty irrelevant and unimportant to me (I have managed many low-end Linux and Windows servers professionally but simply don't own or need any 'big iron'), and I fail to see how Sun is going to capture any new users if this is the focus of their efforts. They really do need to widen the appeal of Solaris if this OpenSolaris effort is going to make any kind of impact.
If OpenSolaris truly has zero to offer anybody except those already employed in a multi-million dollar data center, then it's doomed to fail when the current 'high end' gets eaten by the increasing capability of the next-generation 'low end' – which, in various specific areas such as film rendering, web server farms and scientific computing, it already has.
Solaris x86 installs are painfully slow in my experience. Things go a hundred times faster on this ancient Ultra 10 I've got sitting here. Perhaps my hardware just wasn't supported, although it seems to me I have nothing exotic. A bare-bones Athlon system and a bare-bones Xeon system are what I tried it on, and on both the installation took ages. And by ages I mean it must have been 3 hours into the install before the disk partitioning tool finally came up. I just couldn't handle it. I guess I'll give it a year or so and see where they're at at that point. But I'm not going to get my hopes up about Solaris running well on anything other than Sun hardware.
If you are running on a desktop, I understand no use for zones.
If you are running a server, the only thing you should be running is zones (if you can get away with it). If someone breaks into a zone, they don’t hose the whole host. You simply shut down the zone, debug the problem and re-boot the zone (or boot another zone …).
Run all applications you can in a zone, and lock down the global zone. Only provide console access to the global zone.
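For the curious, creating and booting a zone really is only a handful of commands. A rough sketch (the zone name, zonepath, NIC and address are just examples for illustration):
# zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> add net
zonecfg:webzone:net> set address=192.168.1.10
zonecfg:webzone:net> set physical=hme0
zonecfg:webzone:net> end
zonecfg:webzone> commit
zonecfg:webzone> exit
# zoneadm -z webzone install
# zoneadm -z webzone boot
# zlogin -C webzone
The last command attaches to the zone console so you can answer the first-boot questions.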
My 2 cents worth.
John Clingan
Sun Microsystems
http://blogs.sun.com/jclingan
“If it’s ‘Big Time UNIX for serious administrators’, why are they giving it away for free download? Surely the ‘Big Time serious administrators’ aren’t averse to paying for tools that do their ‘serious Big Time administration’ work?”
Erm, dude, you pay for support, not for the OS. Have you been living under a rock the past 4 years?
Well, until Solaris was released free as in beer, you paid for it.
Were the Sun management team living under rocks for the last 4 years, do you think?
"Well, until Solaris was released free as in beer, you paid for it."
Again wrong. Solaris for SPARC has been available as a free download for ages. It just wasn't free for x86 (you had to pay bandwidth costs). Do some research.
The free download version was not licensed for commercial usage, only evaluation as I understand it.
You had to pay for Solaris licenses if you actually wanted to use it in the course of your business activities.
“The reason for the choice of root shell being /sbin/sh is because binaries in /sbin are statically linked […]”
While this was true for Solaris versions < 10 it is not true anymore. Solaris 10 does not provide static linking of system libraries.
And even in Solaris 9, /sbin/sh was a fallback option in case the shell listed in /etc/passwd for root could not be executed.
I assume they stick with /sbin/sh for backwards compatibility (something Solaris is famous for, in strong contrast to Linux).
Well, it can't be as bad as a Longhorn alpha installation; now that's a painful installation.
“I am sure this OS has features that are not available in linux…and I am sure very few people care. Those that do are probably already Solaris users. Sun is too late getting into the community development game (not to be confused with opening their codebases), and the window of opportunity for a Microsoft alternative closed after the linux 2.6 kernel.”
That's just bullshit. Why would it be too late? It's not like two OSes are "enough" and the end of it all. Or would you think the same if there was just one? There are actually TOO FEW alternatives nowadays if you ask me. Also, I don't enjoy the Linux community, thank you.
Well you are right:
robert4:/sbin $ldd sh
libgen.so.1 => /usr/lib/libgen.so.1
libsecdb.so.1 => /usr/lib/libsecdb.so.1
libc.so.1 => /usr/lib/libc.so.1
libnsl.so.1 => /usr/lib/libnsl.so.1
libcmd.so.1 => /usr/lib/libcmd.so.1
libmp.so.2 => /usr/lib/libmp.so.2
libmd5.so.1 => /usr/lib/libmd5.so.1
libscf.so.1 => /usr/lib/libscf.so.1
libdoor.so.1 => /usr/lib/libdoor.so.1
libuutil.so.1 => /usr/lib/libuutil.so.1
libm.so.2 => /usr/lib/libm.so.2
That could make recovery of a trashed system real interesting.
For Mathman, how much RAM and disk were in those machines you used to install Solaris on?
And for Anonymous (IP: —.paradise.net.nz), I don't know about you, but I do most of my administration from the command line (dtterm or PuTTY sessions). Solaris has never been strong on graphical administration tools (except Solstice Enterprise Volume Manager, now Veritas Foundation Suite). Most Solaris admins I know only use X to allow them to have more term windows open. And from the Solaris admin point of view, if you need GUI tools to admin a box, you need to go back to school or stop using Windows! The environments I work in (US Armed Forces) do not allow X to run on most machines for security reasons. And what if you were working on a machine in a lights-out data center?
The services (daemons) have been completely overhauled, and unless you have time to go to http://www.sun.com/bigadmin and read about it, you will be lost for days as to why changing /etc/inetd.conf had no effect, how to turn off sendmail, how to reconfigure runlevels, etc. The single largest production-impacting change in the Solaris 10 release is the new services subsystem. You would be wise to learn all about it. I think it's a step in the right direction, but it's unbelievably complex and not upward-compatible with the "older" way of doing things. You really need to read the "Services Quickstart" over on BigAdmin or you *will* fail with Solaris 10.
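To give people a head start, the handful of commands below cover most of the old inetd.conf/rc-script tasks; the service names are from memory, so double-check them against svcs -a on your own box:
# svcs -a
(lists every service and its state)
# svcadm disable svc:/network/smtp:sendmail
(the new way to turn off sendmail)
# svcadm enable -r svc:/network/ssh:default
(enables ssh plus everything it depends on)
# svcs -x
(explains why a service isn't running)
# inetadm
(inetd-managed services; replaces editing /etc/inetd.conf)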
Other than that, it takes 2 hours to install on a PC, 2 hours to install on a Sun Blade 2500, and 6 hours to install on an Ultra-60. Enjoy.
Solaris is attractive for many reasons. We have bona fide support, not internet mailing lists; Sun backs them up, and Sun support is much better than Red Hat support. I'm still waiting for a callback from Red Hat on an RHEL problem I had about 4 months ago that they couldn't help me with. Sun offers indemnification and protects against IP claims that anyone could bring up, so I don't have to spend thousands of dollars on an insurance policy. Sun doesn't demand that I open up my code if I don't wish to, nor do Sun reps talk crap to me if I don't want to port my apps to Linux. Solaris is a trusted brand of UNIX and, aside from BSD, probably the best. The x86 version is getting better, and I expect it will get much better as they attract more developers. Sun has done great work on Solaris 10; having beta tested it, I wouldn't even consider the Linux 2.6 kernel, and once Solaris x86 improves enough, I wouldn't consider Linux for my desktops either if I wanted a desktop with a UNIX-type approach.
Why should it matter if the install takes 1-2 hours? Eat dinner, watch TV, or do some work while it installs.
Installation is usually done only once. If time were important, no one would be using Gentoo.
I am currently using Solaris 10 as my main desktop machine. Shoot me! All this power and all I am doing is streaming shoutcast and browsing the web. The desktop is really sexy. It browses my Windows Shares flawlessly. I had to install some utilities like the Gimp and a *good* music player but all is fine here.
I can really see OpenSolaris taking off big time. Anyone else up for this? Now all we need is embedded Solaris to run my Set Top Box. Talk about wonderful software, this is it. Solid and Strong.
2 gigs in one machine and 1 gig in the other if I remember right. So that certainly wasn’t the problem.
This is just Sun's first effort in OSS. Give them time to understand the market and adapt. There is plenty of room for everybody. I even expect Microsoft to get into the market eventually, but not until they put up a HECK of a fight.
Please…. a few moments to install? Give me a break.
Also, just because someone says it takes forever to install does not make them a troll.
Not exactly: http://www.sunsource.net
We’ve been opening code longer than most if you go back to NFS, NIS, etc …
Yeah, I too couldn’t help myself and downloaded Solaris 10.
Even though I'm a long time Slackware user I was hopelessly lost and frustrated in Solaris 10, from the install to using the desktops.
The install is really bad. It doesn't tell you what it is doing; if you walk away for 5 minutes it just continues and you have no idea what's happening.
The X configuration, where it asks you about video card, monitor etc., is what one would expect to see 20 years ago, not in 2005.
There is no access to any help and no clues where to get help anywhere: during install, after install, and worst of all, no links to documentation on either desktop.
It's like they want you to hate the OS and make it difficult on purpose.
On the command line there's no tab completion and the backspace key doesn't work; I found the delete key works as backspace (why have a backspace key on the keyboard then?).
And when I started just X without a desktop, Ctrl+Alt+Backspace wouldn't kill X. I had to cold reboot.
After that the Gnome desktop wouldn't work anymore.
Some Nautilus error and the thing was hosed as easily as Windows 98. What a joke.
The Alt+Fx keys wouldn't switch to different consoles either.
I'm willing to learn, but Solaris is just so obscure and unfriendly that I can't see it going anywhere as a desktop.
Doesn't matter how good it is under the hood; if it looks and feels like a Yugo, people will go with something else, like Windows or Linux, especially if they're already familiar with them.
Shows that Sun is a dinosaur with dinosaur-like-attitude toward computer users.
We all know what happened to dinosaurs.
You work with some little systems. LOL
Quote: “Solaris isn’t built for your desktop, although it could be used that way. Solaris is big time UNIX for serious administrators. So, unless you understand the kind of environment Solaris is tuned for, or want to, move along please.”
Yeah, and it suits the 0.00000001% of people who need its features and performance. The average business doesn't need a server with DTrace or zones. It's overkill. It's features for features' sake.
I've installed Solaris 10 on one of my hard drives here (and yes, I noted to myself that it's a server OS, not a desktop OS) and I can't honestly say it impressed me one iota, for a huge variety of reasons that I've already posted on a previous osnews.com article.
Most of the servers on the web are running either FreeBSD or NetBSD anyway, not Solaris or Windows. In fact it's most probably FreeBSD/NetBSD, then Linux, then Windows, then Solaris in order of most usage. As for uptimes, BSD wins nicely as well.
The main issue with Solaris 10 is the license. Sun can keep Solaris and its CDDL license, and I'll keep my Linux and my GPL and my morals. I'm positive Sun is the type of company for which this is a nice honeypot, so that Sun (and its new girlfriend Microsoft) can have lots of fun with patent lawsuits. A two-faced leopard like Sun doesn't change its spots or habits.
Dave
PS I don’t use bsd (although I have used freebsd in the past), I happily use GNU Linux.
If he had installed the Solaris Software Companion as well (i.e. http://www.sun.com/software/solaris/freeware/index.xml), then he would have had xmms too, plus KDE, CUPS and lots of other familiar tools.
I'll reiterate my earlier post. If you're running Solaris as a server, you should run your apps in Zones if at all possible. Zones are *not* overkill. Zones are lightweight. *EVERY* enterprise should run Zones if they value security. Running applications in local zones should be the rule; running applications in the global zone should be the exception. Period.
*Every* sysadmin (desktop or server) and Solaris developer should leverage the heck out of DTrace. You'll never regret it. The time to diagnose problems collapses big time. If you ever find yourself running sar, top, or any of the "stat" tools, you are heavily advised to learn DTrace.
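To give a flavor, here is the canonical starter one-liner; run as root, it counts system calls by program name until you press Ctrl-C:
# dtrace -n 'syscall:::entry { @[execname] = count(); }'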
On the earlier zones rap comment: z-z-z-z-zones, $#*%@ zones, z-z-z-z-zones
RE: Rich Steiner (IP: —.sita.aero) – Posted on 2005-02-25 22:08:59
"That's right, I forgot. This forum supports some markup, but not the same subset as several other forums I frequent (where <P> tags are required to enforce paragraph breaks)."
Yes, but how about actually CLOSING those tags. All very well opening, but if you don’t close them, then they’re a waste of time.
Also, down below it clearly states: “The only HTML/UBB tags allowed are for bold and italics.”
"So, unless you understand the kind of environment Solaris is tuned for, or want to, move along please."
I've been doing UNIX systems administration for like 7 years. Lots of Solaris and IRIX and HP-UX and AIX and Linux and BSD and stuff. So, yeah, I understand the kind of environment Solaris is tuned for. And, frankly, I think it's amateurish crap.
I mean, don't they have a QA department? Their install system has been borked from day one. I've been installing Solaris since back when it was called SunOS 4.x. I swear they didn't update their install system that whole time. Eventually they did put a web browser in the thing. But damn. Does it take an open source movement like Linux to light a fire under Sun's butt, or what?
I sure hope they have improved Solaris 10. Solaris 9 wasn’t much to talk about from any end user’s perspective.
Yeah, we all know how UNIX is rock solid stable and efficient. What’s your point?
Sun must be doing something right, because every time they come up in a discussion here or at Slashdot, the emotions run high and the flamewars run long. Best of all, the people trying to debunk Sun come off sounding like children. This is really great stuff, folks!
Okay, first, Solaris is not overkill for a webserver. It has minimal install modes that are fine for this, and its new TCP/IP stack should do nicely. Solaris also has the potential to be extremely secure when set up properly. After all, Trusted Solaris will soon be based on the same code base as regular Solaris.
Second, for desktop use, that is exactly what JDS is targeted to. JDS is common across Linux x86, Solaris SPARC, and Solaris x86, so you can pick the underpinnings you want but get the same desktop environment.
Third, Sun is giving it away for free because the market dictates that now. The market price for an operating system is effectively zero now (sorry, Microsoft), so Sun has to leverage Solaris as a platform for selling systems and services. They have effectively reorganized their business to do this.
Fourth, this is not Sun’s first effort in OSS. Sun has been doing OSS since they were founded, back when they used Berkeley UNIX in their first workstations (circa 1982). OSS did not start with GNU, nor Linux–OSS essentially began when programming began.
Fifth, it takes like ten seconds to find out that http://docs.sun.com and http://sunsolve.sun.com are Sun’s main documentation repositories. They are well organized and will have everything those “willing to learn” need.
Sixth, regular businesses _do_ have uses for zones and dtrace. Just because those uses are outside the view of your blinders doesn’t make them not useful. Also, there are no moral problems with the CDDL. Sun’s lawyers took the software patent landscape and made a genius move with the patent grant in OpenSolaris. OpenSolaris is well protected from “patent terrorists,” which actually puts it on a moral highground, IMO. The days of some sort of idyllic patent-free world are decades away. The CDDL works with the world we live in, today.
Quote: “*EVERY* enterprise should run Zones if they value security.”
So, you’re saying that Solaris wasn’t secure prior to the introduction of zones? mmm? Please elaborate. Solaris has had a relatively low cracking rate (when patched of course) for quite a while, even allowing for the very small percent of users involved.
Quote: “The time to diagnose problems is collapsed big time.”
If the system is that good, why are you getting problems? Servers generally have minimal hardware to worry about: no sound card, no flashy graphics card, generally no onboard LAN/sound/video. Generally just a NIC and an average video card. The main things with servers are the CPU, RAM and hard disks. Secondly, server systems, especially those of the Unix breed, generally don't run X, which means less load on the system, and they generally don't have much more than a minimal install of the operating system and whatever minimum amount of software the admin can get away with to do the job. So, are you saying that in these circumstances Solaris is so dodgy that it may stuff up and needs stuff like dtrace to help sort the problem out? mmm?
Sorry, I don’t buy the zones, dtrace crap. It hasn’t been needed in 25 years of computing, and I really doubt it’s needed now. Sure, it’s cool, but there’s a big difference between cool, and *needed*.
Dave
Since you were so kind and polite as to ask those questions, I shall reply in kind.
1) Zones will allow enhanced security: by isolating parts of the system, you will be able to use one big server to provide multiple servers whilst maintaining security. That is, if one 'service' is compromised, it's just the particular zone in which the service resides that is vulnerable; without zones, if all services were running in a single 'zone' and one is vulnerable, the whole system and the other services are subject to being exploited.
Zones, however, aren't a new concept. How they've implemented them is very smart, but in terms of the base technology, mainframes have been doing it for years.
2) Problems to diagnose arise when there are conflicts between software, hardware and the operating system.
Are you here to claim that your operating system of choice has never had any one of those issues? Are you so bold as to claim that Solaris is unique and that your chosen operating system would *never* have a problem?
3) DTrace is useful for companies who wish to push as much system performance out of the machine as possible.
If their server is 1 year old, and yet they find that its speed has dropped, they may wish to investigate before either replacing it or upgrading a component. They might actually find that what is causing the problem is merely a poorly written script, a misbehaving application, a memory leak in a custom-written application, etc. If it costs them $100 in labour to solve the problem, but it saves them $900 because they don't need to upgrade their server, the move to Solaris and the use of DTrace has already justified itself.
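As a rough sketch of that kind of hunt (just an illustrative one-liner, run as root), you could tally disk I/O requests by process to spot the misbehaving application:
# dtrace -n 'io:::start { @[execname] = count(); }'
Let it run for a while under load, press Ctrl-C, and the heaviest I/O consumers fall out of the list.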
"3) DTrace is useful for companies who wish to push as much system performance out of the machine as possible."
This technology isn't new either; boosting strace is good enough.
I downloaded the images and burned the 4 CDs. Installed Solaris base install on a second drive. What bugs me is that you have to re-partition your boot drive in order to install a bootloader. I mean, this reminds me of dual-booting OS/2 during the early days. There seems to be no LILO/GRUB equivalent for Solaris, and I can’t even figure out how to make a boot disk. Guess I am just ignorant, but I can’t even boot the system installed unless I do some partition reconfiguration. That sucks.
Dano.
/sbin linking to libraries on /usr … talk about nonsensical decisions
I've appreciated this review, but it's a pity the author doesn't say a word about ZFS, which is an important feature of Solaris 10.
I see a trend here. People expect Solaris to behave in the same way as Linux. It's similar to Windows users getting frustrated with Linux because it does not behave like Windows.
These arguments can be used against any new feature in any OS in any time frame. Is this slashdot?
“So, you’re saying that Solaris wasn’t secure prior to the introduction of zones? mmm?”
Solaris is very secure but not impervious (hence security alerts, like the SSH ones…). Plus, all it takes is a third-party application (like Apache) or misconfiguration by a sysadmin (we're not perfect) to compromise a system. We have added *another* layer of security, at no cost to you, to contain a breach in security.
“If the system is that good, why are you getting problems? ”
The "system" is much larger than Solaris. It's storage, networking, applications, etc. If you want to better understand how these interact (with the glue being Solaris), then DTrace is your friend. If you don't find value in DTrace, don't use it. I personally think it would be a mistake not to learn DTrace.
DTrace is more than a tool for performance tuning, as brought up in another thread. It makes Solaris more transparent so you can diagnose non-performance-related problems more quickly as well.
Seems like Solaris 10 didn’t address the dreaded USB malloc() bug. urrgggg.
Solaris users want webcams, too!
Because ZFS, like "Project Janus", did not ship with Solaris 10 GA, both are supposed to come in the first update.
You have to give Sun credit; they are trying, a bit late for mind share perhaps, to start something big here. If Sun can trim the company a bit to make it more nimble, I think they have a fighting chance. I don't think Sun is worried about Joe Sixpack; they want the banks and heavy hitters. At the same time, Joe can give S10 a spin if he wants, and for that I commend Sun; hopefully programmers around the world will be intrigued enough to get an ecosystem going that mirrors GNU's. If that happens, S10 within 3 years will be as big as ever when compared to Linux and BSD.
I've been beta testing Solaris 10 for almost a year now. It's a huge step forward for Solaris. It now has features not found in Linux, Windows, etc. While people here may say that Zones, DTrace, etc. have counterparts in Linux, this is only partly true. While Linux may have similar features, they are not out of the box or polished. They often require patching the kernel, which means a recompile of the kernel (talk about backward, just to enable virtualization). They also don't come with easy tools or documentation. DTrace is dynamic, has over 30k points of instrumentation in Solaris, has an extensive guide and its own scripting language, doesn't crash your system, plus a lot more. In Linux, there simply isn't an equivalent. Same goes for Zones (N1 Containers). There are several projects out there for doing virtual servers in Linux, but they are not out of the box, require patching, cost more in overhead because they boot another kernel image, and simply aren't ready for business use… you'd be better off running vmware. With Zones in Solaris 10, you have simple commands to create and administer your zones, the overhead is only ~40MB, it uses a jailed environment at the kernel level, boots in seconds, and is extremely secure. There simply isn't a Linux equivalent! Then you pile on features like FireEngine, ZFS, NFSv4, the Cryptographic Framework, SMF, etc., and you have a product that is ahead of the game.
Now for all the people that were complaining about installing Solaris 10. I'm sure there are some no-name PCs out there that will have issues with installing it because of things like IRQ conflicts, bad APIC implementations, and funky PnP features. I've found that if you remove IRQ conflicts and turn off PnP, Solaris 10 is happy. I've installed it on a number of no-name PCs that use mobos from companies like Sanyo, Gigabyte, etc., and I've installed it on desktops from Compaq/HP. I've installed it on laptops as well, with no issues. If you disable those features in the BIOS, things just work. If the install is taking a long time, it's probably something causing conflicts. I've installed Solaris 10 on Ultra 60s, Ultra 2s, Ultra 10s, V240s, V890s, and the PCs above. It should only take 2-3 hours tops. The installer is not bad at all. The great part is that it can be done with the X11/Java installer or in text mode. If you go with the Java installer, it's very easy to install. You even get a browser that comes up in kiosk mode while it's installing. Yes, you do have to reboot after the first disc if you are installing via CDs. It will boot from the hdd and proceed. If you are installing from DVD, this does not happen; it will install everything then do a reboot. Installing from DVD is faster, because you don't have to swap CDs ;) Of course, if you install via JumpStart, no interaction is required and it's faster :)
As for userland utils, everything that people have listed here is either in /usr/sfw or on the companion CD that installs software into /opt/sfw. In /usr/sfw you'll find all sorts of goodies (Samba, Webmin, GCC, MySQL, ncftp, net-snmp, gimp, gphoto, OpenSSL, tons of GNOME apps, etc.). In /opt/sfw you get KDE, Xfce, xmms, emacs, etc. So there is plenty of freeware installed. Even Apache 1.3.x and 2.x are included. All of the components under /usr/sfw are part of the Solaris OE, which means that Sun supports them and does a code review for security flaws, something you don't get with Red Hat or SuSE. If you need more stuff, look at sunfreeware.com or blastwave.org. Pkg-get is awesome!
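For example, once pkg-get is set up and pointed at a blastwave.org mirror, grabbing extra software is a one-liner (package name from memory, so treat it as an example):
# pkg-get -i xmms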
Now for desktop stuff… there is JDS. If you don't like the theme… you can change it! It was geared for enterprise desktop usage, which it certainly fits. It includes StarOffice 7, Mozilla, Evolution, Gaim, the GIMP, and even some games and apps like jdiskreporter. Through APOC, you can lock down the desktop and give a consistent corporate desktop to all your employees. APOC is backed by gconf and LDAP and has a pretty web interface. This is good stuff for the corporate desktop. Was it meant for home use or people who like flashy desktops? Well, no! If you want that, get a Mac!
I’ve been using Solaris 10 at home without any issues. If I want to do multimedia stuff or play games.. I use my Mac or a gaming console. I’m working on a pilot project at work to get Solaris 10 deployed. The big wins for us are:
1. It scales on large hardware with 8+ CPUs (something Linux and Windows can't do… SGI Altix does not count in the business world).
2. It's very secure and auditable (Zones, process privileges, BART, Crypto Framework, IP Filter, etc.).
3. Resource management for high utilization (CPU shares, memory, application groupings with projects, etc.).
4. Debugging utils for everything (DTrace… need I say more?).
5. Consolidation (Zones and resource management). No one wants to manage a zillion Linux boxes. Sorry, it's really not cost-effective.
6. Has features that reduce dependency on third-party vendors (ZFS replaces the need for VxVM/VxFS).
7. Big vendors and ISVs support it! (Oracle, Sybase, Veritas, BEA, etc.)
8. It runs on multiple platforms (SPARC, x86, Opteron).
9. If we choose to drop PCs for desktops and use Sun Rays, there is a corporate desktop included (JDS).
9. If we choose to drop PC’s for desktops and use Sun Rays, there is a corporate desktop included (JDS).
So for a business, Solaris 10 has a lot of uses in the datacenter. All of those features are well suited for the challenges governments, financial institutions, military, health, science, telecom, etc face today.
For the home user… watch out. People like Comcast are working on next generation set-top boxes that use SunRay technology to deliver desktop sessions over cable to your TV. Don't be surprised if you get a customized JDS desktop from your cable provider in the near future. Now think about that: your grandmother could be using Solaris 10 and JDS from her TV without even knowing it! There is a lot of development in the cable and ISP industry to deploy such technology. That will change the need for PCs at home and will ultimately help computers become more transparent to everyday people.
“People like Comcast are working on next generation set-top boxes that use SunRay technology to deliver desktop sessions over cable to your TV.”
Sun really has a lot of tech that Microsoft can only dream about right now. SunRay is supposed to work over broadband now, which is exactly what cable modems are.
I think the fact that Sun always stood behind their R&D spending, even in hard times, will pay off over the next few years.
You should also get up to date; Linux scales just fine now. http://news.zdnet.co.uk/0,39020330,39184546,00.htm <– Linux running on a 64-processor Superdome with a standard 2.6 kernel. Since you couldn't even get that right, why should we believe anything else you have to say comparing Solaris to other systems?
“The free download version was not licensed for commercial usage, only evaluation as I understand it.
You had to pay for Solaris licenses if you actually wanted to use it in the course of your business activities.”
I seriously can't understand comments like these. My first reaction was:
"WHAT?! You have to pay for software!" The more interesting one, however, is this:
this guy thinks the OS should be free, why I don't know, probably because he wants free software and doesn't want to pay anything. Still he complains because the free version wasn't for commercial use. But wasn't earning money an evil thing just two seconds ago?
There is a difference between scaling and scaling well. FreeBSD can scale up to that number of CPUs, but whether or not it actually properly utilises them, well, that's a different story entirely.
How fine-grained components are dictates how well they utilise those extra CPUs, and even Linus has admitted there are still large parts of the kernel that are under the control of the 'huge f*cking lock'.
Solaris, on the other hand, is incredibly fine-grained, hence its ability not only to scale up to and beyond 128 CPUs, but also to utilise the grunt of the machine to its full potential.
Actually, IIRC, Solaris could be used for commercial purposes on machines with up to 2 CPUs; after that, you had to pay for a license.
Did you read the article? HP found nearly linear scaling of the 2.6 kernel through 64 processors with both CPU utilization and I/O throughput. It's spelled out for you right there in the article. If you'd like to find me some quotes from Linus saying Linux still has SMP scalability problems, why don't you post them. I posted an article showing that it's not true, and since you didn't even bother to read the article, I'm not about to believe you just because you say so.
Simple truth is, if Solaris still has an SMP scalability advantage over Linux, it's not much of one. And Linux definitely scales smaller (PDAs, cell phones, watches) and bigger (the biggest supercomputer in the world).
"Simple truth is, if Solaris still has an SMP scalability advantage over Linux, it's not much of one."
One difference is that Solaris has had good SMP performance since about 1996, when the first big Enterprise servers started to ship. Also, Solaris has very stable kernel APIs, meaning a program written for Solaris on an Enterprise server in 1996 still has a great chance of running today on Solaris 10. It would be fairly accurate to say that the Linux kernel, overall, is where UNIX (not just Solaris) was about eight to ten years ago. Before anyone flames back, consider, also, how much this says about how far Linux has come.
HP has to do something to justify the Integrity line of servers, since Itanium is crashing around them. The SuperDome can be expanded to 128 CPUs; why didn't they "max it out" for the Linux tests? Besides, who in their right mind is going to spend that kind of money (six to seven digits) to run a "free" OS?
And just because Linux can run on a PDA doesn't mean it scales better than any other OS. In fact the PDA, cell phone, and watch are irrelevant; they are not SMP machines. That does not demonstrate scalability; it just means people are going to take the time and trouble to make something work on a device. The same can be done with BSD, Windows, and other operating systems. And I am sure the Linux used on a PDA is far different from what is used on the SuperDome or Altix.
And what does this have to do with Solaris? Unixconsole is right: to have either a Linux or Windows solution with more than 8 CPUs you have to cluster, or spend big dollars on big iron. Not all applications are cluster-aware or work well in a cluster environment. Plus there's the additional overhead of managing the switch fabric to ensure good connectivity between nodes, and additional staff to support the cluster. Big iron works out to be a lot cheaper, whether it be a SuperDome, pSeries, or a SunFire.
The Linux "benchmark" was a test; now how about showing us some production use of Linux on a SuperDome!
Last I checked, Linux could scale well up to 512 procs. Frankly, there's nothing impressive about Solaris anymore. You look at all the supposedly "innovative" features Solaris has, and you realize Linux has had many of them for years. I mean, when did virtualization technologies become innovative? Or tracing and debugging tools? Pfft…
It used to be that Solaris was all-powerful and scalable and stable. But today, some of the world's most powerful and scalable supercomputers run Linux. Take a peek at the TOP500 most powerful computers in the world. How many of them run Solaris? How many run Linux?
Case closed.
"And just because Linux can run on a PDA doesn't mean it scales better than any other OS."
Also consider that the weakest PDAs are still 32-bit computers more powerful than the first 386 PCs that Linux originally ran on. The only difference is that a 386 PC weighed more than 50lb, including monitor, and didn’t fit in my pocket.
“You look at all the supposedly “innovative” features Solaris has, and you realize Linux has had many of them for years. I mean, when did virtualization technologies become innovative? Or tracing and debugging tools? Pfft…”
This post displays the same lack of education about the issues that has been debunked a dozen times already. Containers run with basically no performance penalty out of the box on both SPARC and x86, unlike IBM mainframes (performance penalty) or anything on Linux (patching and recompiling kernels).
DTrace is unlike any debugging tool before it. All you have to do is go read any article about it. Just two paragraphs in should be enough.
“Case closed.”
What is SGI supposed to use? AIX? Solaris? HP-UX? Or Linux, which is not a competitor's OS? SGI, then, had to take it upon themselves to adapt Linux to work for them (writing drivers, patching the kernel, etc.). I'd bet the Altix isn't running a stock kernel from Red Hat, and that the resulting kernel will run only on Altix. Sorry, no SGI kernel for you.
Who cares? I can do all that junk on Linux. That’s all that matters.
“Who cares? I can do all that junk on Linux.”
The problem is that you can’t. Linux can do a few things via kernel patches that kinda sorta tangentially approach approximations of dtrace and containers, but they really are far short on completeness, maturity, integration, usefulness, ease of use, performance, and documentation. If that suits you, fine.
Quote: “Linux can do a few things via kernel patches that kinda sorta tangentially approach approximations of dtrace and containers, but they really are far short on completeness, maturity, integration, usefulness, ease of use, performance, and documentation. If that suits you, fine. ”
Yes, some of the tools or things that you’re talking about are relatively new in the Linux land. Give it a few years and they’ll be very nicely matured. Open Source can do everything that closed, proprietary source can do, it might take us a bit longer, but we’ll get it.
The thing you and others are not understanding is that, for most of us, dtrace and zones are just not needed. Even in the enterprise. Laws of complexity generally mean that the more you have, the more that can go wrong, and the more that can go wrong, the more that actually goes wrong, and generally in a larger way. I still think that the old K.I.S.S. principle when building servers is the best way of keeping uptime and general reliability.
Dave
“Open Source can do everything that closed, proprietary source can do, it might take us a bit longer, but we’ll get it.”
Often OSS has come up with stuff that's better (even Sun's JDS is based on GNOME), and that's not in dispute. What is getting so tiresome in these forums is that people keep saying things like DTrace and Containers are old hat, which they are not. They go down a list of features, checking off things to troll on from their dorm rooms, and they take all the projects tangentially related to Linux and lump them all under the "Linux" brand. It's as if they think people at home running Linspire will ever see what makes an Altix an Altix, or what makes Linux-on-mainframe work, or have the expertise to configure SELinux or apply custom patches to their kernels for virtualization. They claim the mere existence of something even remotely related, requiring extra downloads, configuration, compilation, testing, and wishful thinking, is enough to check off that row on their feature list. They consistently fail in their "arguments" to acknowledge that the Solaris 10 CD-ROMs contain everything Sun advertises, in place, ready to use, whether a person is running some whitebox Opteron PC or a 24-CPU server. There's no custom patching, and all the documentation is there. Anyone who tries to claim equivalence is just missing the point. Or did software patch configuration management and basic operating system integration testing become part of system administrators' job descriptions?
Honey, I can actually come up with two (although from the same company):
Linux reaches 2.6 milestone – http://news.zdnet.co.uk/software/linuxunix/0,39020390,39118636,00.h…
“Whereas the 2.4 kernel works on servers with four or sometimes eight processors, the 2.6 kernels will stretch to 32-processor systems, Morton said.”
Although it's from Morton, Linus's second-in-command, rather than Linus himself, his input is still valid.
“Not everything is better, though. Garloff said the part of 2.6 that communicates with memory is less efficient, imposing a practical limit of 24GB of memory to the 32GB that 2.4 could handle. However, he believes that programmers will address the problem.”
“In addition, 2.6 requires somewhat more memory to run and shows worse performance when it has to use hard drives as extra memory under heavy loads, Morton said.”
I'm not trashing Linux, but let's not try to make out that Linux is in the same league as Solaris – it's like saying that Windows is in the same league as z/OS.
You do realize that your quotes are directly contradicted by the fact that it does run on a 64-proc system and it does scale linearly? And even if you want to go with your quotes, they more or less completely debunk your assertion that Linux can't scale past 8 CPUs. As for the memory quotes, those sound like they are talking about x86 with PAE. Linux has been able to address more than 24GB of memory on 64-bit systems since the Alpha port was first done (the 2.0 timeframe).
You are trashing Linux with faint praise. Fact is, Linux is in the same league as Solaris. The article I posted shows it. The fact that it runs the biggest supercomputers in the world shows it. You can wish it away with all your might, but the world has passed Solaris by.
"And what does this have to do with Solaris? Unixconsole is right: to have either a Linux or Windows solution with more than 8 CPUs you have to cluster, or spend big dollars on big iron. Not all applications are cluster-aware or work well in a cluster environment. Plus there's the additional overhead of managing the switch fabric to ensure good connectivity between nodes, and additional staff to support the cluster. Big iron works out to be a lot cheaper, whether it be a SuperDome, pSeries, or a SunFire."
No, Unixconsole is wrong, because Linux does run on big iron. As a matter of fact, it runs on 2 of the 3 types you listed (SuperDome and pSeries). It also ran on 32+ processor AlphaServers. That most people have decided that big iron is unnecessary for damn near everything is a different story. There's a reason that Sun is so big on Opteron these days, and it's not because SunFires are flying out the door. As for why HP didn't run Linux on a maxed-out SuperDome, who knows, ask them. Whatever their motivation, it doesn't change the fact that it demonstrates Linux linearly scaling to 64 procs; something that the Solaris crowd continually claims Linux can't do.
"The Linux "benchmark" was a test; now how about showing us some production use of Linux on a SuperDome!"
The test proves the point, that unixconsole is wrong. Whether or not there is a system like that in production is irrelevant. It just means that the Solaris crowd is going to need to find something new that Linux supposedly can’t do.
Obviously you choose to ignore whatever you want in order to support your pro-Linux position. So HP tested Linux on a SuperDome; that doesn't mean that ANYONE is going to spend millions of dollars on a machine to run mission-critical applications just to prove the point for the Linux fan club that Linux works. There is a huge difference between benchmarks and production.
For the record, I have stopped Linux deployments in my previous and present jobs for technical reasons. The first case was where the environment had to be certified EAL4, and at that time Linux (regardless of distro) was nowhere near being certified. The seven machines using RedHat Linux were replaced with Solaris x86.
The second case is in my present position, where there were some serious concerns about the performance of our webfarms, using Solaris on V100's and shared content stored on a SunFire 3800. The response time was slow for dynamic content, and the decision was tentatively made to replace the entire webfarm with SMP servers running RHEL. That is, until I ran sar and process accounting and determined the cause of the problem: most of the machines were connected to switch ports set either to 10 Mbit or 100 Mbit half duplex. The webfarm machines were for the most part at idle. Linux would not have solved this performance issue; in fact, in this particular case we would have been introducing an application with a recent port to Linux (read: ALPHA software) and would still have had the same network issues. When you troubleshoot problems, you don't add variables, you eliminate them. And in this case, lots of new variables would have been added.
The environment I work in is a mix of Windows 2000, Solaris, and RedHat Linux. Even our resident RedHat "fanboi" is starting to have doubts about the direction RedHat is going, particularly when it comes to cost. Our management likes support contracts (just as most businesses do).
The acid test of Linux deployments is whether or not they are being used in five years. When that goal is reached, I think there will be cause for celebration by the Linux crowd.
Haha, so Linux isn’t being used enough.
And how are DTrace and Containers mature? I guess they've been in the market for 7 years now.
"And how are DTrace and Containers mature?"
Sun felt that it was appropriate to include them in the retail Solaris 10 kernel, after a long beta test period. Also, containers have been around several years as a separate product up until now. There is an enormous difference between Sun integrating a feature into their kernel for sale to banks, defense agencies, and telecom companies, and someone applying some relatively untested third-party patch themselves. Software is more complex than you might think, and you might ask one of your instructors (if you’re in CS/CE) about how complexity scales in software. If he/she can’t answer your question, you might want to drop that class.
I see. So your instructors taught you that complex software becomes mature after a beta test program. Fascinating!
And it’s okay for SUN to patch containers into Solaris, but it’s not okay if a third party does the same on Linux, right?
“I see. So your instructors taught you that complex software becomes mature after a beta test program. Fascinating!”
You are amazing. Sun integrates these things into their stable production kernel after a testing program and will stand by them. Can you guarantee the performance of a third-party patch across point releases of the mainstream Linux kernel source? Can you? Really? Are you serious? Are you really sure nothing important changed? Have you tested it? How much time did it take? Was it worth the cost to your employer’s bottom line? What if you need to apply another patch to the kernel? Can you guarantee it won’t break the other patches? Can you properly test their interactions? Can you? Really? Are you serious?
Try advocating a patched-up custom Linux kernel to an employer one day, and enjoy watching their subsequent facial expressions. Linux is good stuff, but there are a lot of things not apparent in a University setting, where academic interests are vastly different than business interests. Sun fully understands what this all is.
One day, if IBM takes all these patches, rolls them into a single kernel, tests it, packages it, and puts their name behind it, then you can say something. Not one day before.
"Obviously you choose to ignore whatever you want in order to support your pro-Linux position. So HP tested Linux on a SuperDome; that doesn't mean that ANYONE is going to spend millions of dollars on a machine to run mission-critical applications just to prove the point for the Linux fan club that Linux works. There is a huge difference between benchmarks and production."
What am I ignoring? unixconsole blatantly stated that Linux can't run on systems with 8+ CPUs. I showed that not only does it, but HP benchmarked it and it scales linearly. Thus I proved him 100% wrong. Now you're trying to divert the topic and are asking for "deployments". Well, honestly, I don't know whether or not anyone has deployed that type of configuration. The number of places where big iron makes sense gets smaller and smaller every year. I can tell you that people will pay millions of dollars for systems and run Linux on them. Go look at the list of the top 500 supercomputers and find out how many run Linux (at least 150 from my quick glance). Or go read about the zSeries deployments running Linux. You can hide your head in the sand if you want; the facts are still the same: Linux scales.
And honestly speaking, what do your two examples prove about anything? Linux didn’t support the security level you wanted so you didn’t use it. OK, makes perfect sense. And Linux even supports it today so you have more choices. For the second, your software vendor doesn’t support Linux yet. OK, so you don’t use Linux. Neither of those issues are really Linux issues though, now are they?
Finally, if you want to see Linux installations which have lasted over 5 years in big-name locations, go look at Wall Street (started switching in 99/2000) and the big CG studios (switched over their renderfarms in 97/98 and desktops in 01/02). If you haven't realized that businesses take Linux seriously and that it's here to stay, you need to take off the blinders. Linux may not be capable of every task today, but if you think that everyone is going to suddenly stop using it and start using Solaris, you're out of your mind. Solaris really does not have any advantages for me, or for many other people from what I've seen on these forums. And I can tell you that where I work, we have absolutely no desire to replace our Linux infrastructure with Solaris.
“or many other people from what I’ve seen on these forums.”
I wouldn’t take forums like Slashdot and OSNews too seriously. The quality of the arguments is adolescent, at best, and embryonic, at worst. Linux is good and useful and here to stay, for sure, but read what some people have said above. It’s breathtaking.
If you are going to quote things then you should quote whole facts. Yes, you can run Linux in an LPAR on a zSeries mainframe, but what is the PRIMARY OS? It takes 77,000 lines of proprietary IBM code to make Linux run in the LPAR created by z/OS.
Comparing supercomputers to SMP machines is like comparing apples and oranges; again, ignoring the facts to support your position. Now show me a system (not a cluster) running Oracle and SAP R/3 with Linux using more than 16 CPUs. It is also a tired argument; Solaris was not designed for supercomputers, so why compare it to a supercomputer?
And you should be careful about quoting "facts": no version or distro of Linux has been certified at EAL4; if you don't believe me, check http://niap.nist.gov. RedHat Enterprise Linux 4 is supposed to be EAL4, which is hard to do when it hasn't been evaluated yet. If I am evaluating an operating system for Government use, it had better meet the security requirements; if it doesn't, it is not deployed. And that is most certainly a Linux issue. That is why RedHat is spending the time and money to get RHEL 4 certified at EAL4.
If Linux software was so widespread then I should have unlimited choices; well, guess what, I don't. I am sure there are a lot of software companies waiting to see how Linux shakes out before they commit various resources to development of Linux products.
And that is your choice, to use Linux; unfortunately I don't see it that way. And I am not sticking my head in the sand, I just don't see Linux the same way as you do. Oh yeah, you mentioned things that Linux can do that Solaris can't: can Linux create multipathed network interfaces, and can you install it over a WAN with SSL certificates?
Haha, some of you are really in denial.
EAL4 Cert -> http://linux.slashdot.org/article.pl?sid=05/02/20/1820218&from=rss
Can Solaris run on my PDA? How about my toaster? Haha…
Keep gagging on Sun’s marketing residue.
If you are going to quote things then you should quote whole facts. Yes, you can run Linux in an LPAR on a zSeries mainframe, but what is the PRIMARY OS? It takes 77,000 lines of proprietary IBM code to make Linux run on the LPAR created by z/OS.
The only way to make AIX run on a zSeries is in a z/OS LPAR. Are you now going to say that AIX can’t scale? And where is this IBM proprietary code? Last time I looked, it was all available in the standard kernel download from kernel.org. Or are you going with the standard Sun definition of “proprietary”?
So what are you trying to argue? Linux scales to at least 64 processors; I posted benchmarks to prove it. People have chosen to run Linux on multimillion-dollar hardware, something you claimed people wouldn’t want to do. You’re flat-out wrong; just admit it and stop trying to divert people’s attention.
Comparing supercomputers to SMP machines is like comparing apples and oranges; again you are ignoring the facts to support your position. Now show me a system (not a cluster) running Oracle and SAP R/3 with Linux using more than 16 CPUs. It is also a tired argument: Solaris was not designed for supercomputers, so why compare it to a supercomputer?
Linux wasn’t designed for supercomputers either, but it now runs the biggest in the world. Guess what: Linux is going to run on any type of hardware that people find it desirable to run it on, whether that be a watch, cell phone or PDA, a massive SMP system, or a supercomputer. Why are you arguing this? Linux scales, period. Linux runs on vastly more varied types of hardware than Solaris. And people will run Linux on everything from a $200 PDA to a supercomputer that costs tens of millions of dollars.
And if you go over to http://www.tpc.org, you’ll find that the #9 non-clustered TPC-C system by performance runs Oracle 10g and SuSE Linux. You should note that it’s a 32 processor Itanium system. Whether or not someone is running that configuration in production with SAP is neither here nor there (besides the fact that I have no access to every company in the world to check whether they are running that configuration). The argument was that Linux doesn’t scale. That argument has been proven 100% wrong. Or are you trying to say that the results I’m posting are not true?
And you should be careful about quoting “facts”: no version or distro of Linux has been certified at EAL4; if you don’t believe me, check http://niap.nist.gov. Red Hat Enterprise Linux 4 is supposed to be EAL4 certified, which is hard to do when it hasn’t been evaluated yet. If I am evaluating an operating system for Government use, it had better meet the security requirements; if it doesn’t, it is not deployed. And that is most certainly a Linux issue. That is why Red Hat is spending the time and money to get RHEL 4 certified at EAL4.
Ah, “facts”. You should look for facts in places other than Sun marketing literature: http://www.novell.com/coolsolutions/tip/11688.html. SuSE is EAL4 certified. Red Hat will be soon. Where are your facts that Linux doesn’t scale? Where have you posted anything other than opinions this entire thread? And certification is not a Linux issue. Linux is perfectly capable of being certified. It is certified. Thus, it’s not a Linux issue; it’s a vendor issue to pay to get it certified.
If Linux software were so widespread then I should have unlimited choices. Well, guess what, I don’t. I am sure there are a lot of software companies waiting to see how Linux shakes out before they commit resources to developing Linux products.
Your first comment is laughable. You don’t have unlimited choices on any platform. Show me the software to run a CG studio on Solaris. Last time I checked, RenderMan, Shake, Maya, etc. aren’t available on Solaris. So, using your words, if Solaris software were so widespread, I should have unlimited choices. Well, guess what, I don’t. There are a lot of software companies waiting to see how Solaris x86 shakes out before they commit resources to it. What is your point? I think it’s that you don’t really have an argument to make and are reaching for something, anything, to make Solaris look more desirable than Linux. Why not point out some of those facts you seem to like? You know, real facts, not made-up ones like the ones you’re posting.
And that is your choice to use Linux; unfortunately, I don’t see it that way. And I am not sticking my head in the sand, I just don’t see Linux the same way you do. Oh yeah, you mentioned things that Linux can do that Solaris can’t: can Linux create multipathed network interfaces, and can you install it over a WAN with SSL certificates?
You are sticking your head in the sand. Maybe if you didn’t attempt to divert the conversation every time you were shown to be wrong, I could agree with you. As for your last two questions: if you’re asking whether you can install Linux over a WAN with SSL certificates, I’m going to ask why you’re now talking about installers. Short answer: yes. Long answer: it depends on the installer. Red Hat ES and Fedora will both happily install from a web server secured with SSL. They’ll also install via hard drive, NFS, serial console, FTP and regular HTTP. As for multipathed network interfaces, here is a tutorial for setting up load balancing across multiple links using multipath routing with weighting: http://linux.com.lb/wiki/index.pl?node=Load%20Balancing%20A….
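And if you’re curious roughly what that setup looks like, here’s a minimal sketch of my own (not taken from that tutorial; the interface names and gateway addresses are hypothetical) that installs a weighted multipath default route with iproute2:

import subprocess

def set_multipath_default(routes):
    # routes: list of (gateway_ip, device, weight) tuples.
    # Builds: ip route replace default nexthop via GW dev DEV weight W ...
    cmd = ["ip", "route", "replace", "default"]
    for gw, dev, weight in routes:
        cmd += ["nexthop", "via", gw, "dev", dev, "weight", str(weight)]
    subprocess.run(cmd, check=True)  # needs root and multipath routing in the kernel

# Spread outbound traffic 2:1 across two uplinks (made-up addresses):
set_multipath_default([("192.0.2.1", "eth0", 2), ("198.51.100.1", "eth1", 1)])

New outbound routes then get balanced across the two gateways in roughly a 2:1 ratio.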
How about this, before posting again and making yourself look even more foolish, do some research. And try to stay on topic. It’s tiring having to prove entirely new things wrong every time you post. It’d be easier if you’d either a) give up or b) try and show how something I said is wrong.
How about this, before posting again and making yourself look even more foolish, do some research. And try to stay on topic. It’s tiring having to prove entirely new things wrong every time you post. It’d be easier if you’d either a) give up or b) try and show how something I said is wrong.
Maybe you should heed your own advice. How is talking about Linux on topic in an article about Solaris 10?
You were off topic. Keep your Linux arguments out of Solaris articles.
I was responding to someone who explicitly stated that Linux doesn’t support 8+ CPUs. I made one simple post with a link showing that Linux does support 8+ CPUs out of the box, and it does so quite well. Since then I have merely responded to people who have been trying to dispute that fact. The sooner you Sun fanboys stop lying about Linux’s capabilities, the sooner we can stop posting on Solaris threads to correct those lies.
Or, in big bold letters, since you couldn’t even keep track of the thread up to this point …
I didn’t post any Linux arguments on a Solaris thread, I responded to the Linux arguments that Solaris fanboys posted.
Isn’t he hot under the collar. Let’s see: say something negative about Linux and the tirade begins. I had a long response to his last post; I’m no longer going to worry about it. I think Chris just showed his true colors to us all.
“1. It scales on large hardware with 8+ CPU’s (Something Linux and Windows can’t do.. SGI altix does not count in the business world).”
Where does it say that Linux does not support more than 8 CPUs? Unixconsole states that Linux does not scale in the same fashion as Solaris. The SuperDome test is just that, a test. So who is wrong here?
I didn’t post any Linux arguments on a Solaris thread, I responded to the Linux arguments that Solaris fanboys posted.
Why don’t you post that on a Linux article? Let Solaris fanboys say whatever they want.
Here is what you linked to:
“The 2.6 kernel is NUMA aware,” said Cabaniols. Some patching was necessary, he said, but “all patches developed for the BigTux project are going into the mainstream Linux kernel and are included in standard distributions.”
So, to answer your question: Linux doesn’t scale to 64 CPUs without the BigTux patches. No, it wasn’t the standard kernel that scaled to 64 CPUs. Eventually, when the patches go in, it will, but not vanilla Linux from kernel.org.
To close the issue.
Solaris regularly scales to systems larger than 64 CPUs. The SunFire 15K had a standard configuration of 72 CPUs and a maximum of 106. The SF25K has 72 dual-core CPUs, meaning 144 CPUs; I am not sure if the same trick that got 106 CPUs is possible on the SF25K, but if it is, you can easily have a 212 CPU system and Solaris will scale.
Talking about scaling to 64 CPUs is so ’90s in Solaris land. Solaris has been scaling to 64-way systems since the mid-’90s.
Also, Fujitsu has had 128-way systems running Solaris for a while.
http://www.b2net.co.uk/primepower/fujitsu_primepower_2500_xa_server…
There’s a difference between someone running Linux on a large Itanium box in a lab and someone running commercial applications on an SF25K. The difference is that one configuration is supported and doesn’t require any custom tweaking, and the other has no business use and requires custom tweaking. So from a business perspective, Linux running on a large Superdome or an Altix has little if any use.
Oh, and for those who think Solaris is not used in any render farms: look at the credits for films like “Finding Nemo”. You’ll see a line that says “Rendering provided by Sun Microsystems, Inc.” Solaris is used in a wide range of industries every day! Don’t kid yourself! Just because Sun doesn’t sell hardware to everyone on a regular basis doesn’t mean it’s not being used. Solaris systems have a rep for running for years without issues. There are countless companies still running old SPARCstations and old Ultra Enterprise servers. The point being that Sun makes a solid product that doesn’t need to be upgraded every year. Comparing the PC world and the enterprise Unix world is a joke; they are completely different. It’s like comparing Windows running on a PC to a z/OS machine running multiple OSes. Speaking of mainframes, Solaris 10 is perfect for replacing them. And Sun has been helping companies cart mainframes out of datacenters over the past few years. I know, I’ve done it myself :)
Chris (IP: —.dslextreme.com) just exposed how wrong some Sun fanboys are concerning Linux scalability. If you had bothered to read the beginning of the topic, you wouldn’t need to argue with him.
Isn’t he hot under the collar. Let’s see: say something negative about Linux and the tirade begins. I had a long response to his last post; I’m no longer going to worry about it. I think Chris just showed his true colors to us all.
“1. It scales on large hardware with 8+ CPU’s (Something Linux and Windows can’t do.. SGI altix does not count in the business world).”
Where does it say that Linux does not support more than 8 CPUs? Unixconsole states that Linux does not scale in the same fashion as Solaris. The SuperDome test is just that, a test. So who is wrong here?
Did you read what you quoted in your own post? Here, in bold:
“1. It scales on large hardware with 8+ CPU’s (Something Linux and Windows can’t do.. SGI altix does not count in the business world).”
It says it right there, in the part you posted. And the SuperDome test proves 100% that it can. Thus, unixconsole and you are wrong. My true colors are that I can’t stand Sun fanboys flat-out lying about Linux’s capabilities. Go ahead and try to badmouth me; I think that anyone with the slightest bit of reading comprehension will see how laughable your commentary is at this point.
From the article:
“Running three different benchmarks on a standard Linux distribution based on the 2.6 kernel, the Superdome showed linear improvements for kernel compiling, memory bandwidth, and the HPL common supercomputer benchmark.” (emphasis added)
Standard Linux distro. It says it right there. The article does later talk about them adding some patches, but notice that in your own quote he says those necessary patches went into the mainstream kernel and are in standard Linux distros. Do Sun fanboys have reading comprehension problems? I mean, seriously, your own quotes completely contradict what you’re saying. One more time: “standard Linux distro”.
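And since some people seem confused about what “linear improvements” means: the speedup keeps pace with the CPU count as you add processors. Here’s a quick sketch of how you’d check that; the timings are made up for illustration, not HP’s numbers:

# Hypothetical benchmark wall-clock times in seconds, keyed by CPU count.
timings = {1: 640.0, 8: 81.0, 16: 41.0, 32: 20.8, 64: 10.5}

base = timings[1]
for cpus in sorted(timings):
    speedup = base / timings[cpus]   # how much faster than a single CPU
    efficiency = speedup / cpus      # 1.0 would be perfectly linear
    print(f"{cpus:3d} CPUs: {speedup:6.1f}x speedup, efficiency {efficiency:.2f}")

Linear scaling means that efficiency number stays near 1.0 as CPUs are added, which is what the Superdome tests showed.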
And finally, I never said Solaris couldn’t scale to more than 64 processors. And I don’t know whether Linux can scale to 128 processors. All I know is that a Sun fanboy said Linux can’t scale past 8 CPUs and I proved that 100% wrong. If you want to turn this into a pissing match then you’re going to get Linux fanboys asking why Solaris can’t scale to supercomputers and then you’re screwed.
Standard Linux distro. It says it right there. The article does later talk about them adding some patches, but notice that in your own quote he says those necessary patches went into the mainstream kernel and are in standard Linux distros. Do Sun fanboys have reading comprehension problems? I mean, seriously, your own quotes completely contradict what you’re saying. One more time: “standard Linux distro”.
Learn to read. Apparently Linux fanboys do have a reading comprehension problem!!!
“The 2.6 kernel is NUMA aware,” said Cabaniols. Some patching was necessary, he said, but “all patches developed for the BigTux project are going into the mainstream Linux kernel and are included in standard distributions.”
“Are going into” is not “already in”.
If you want to turn this into a pissing match then you’re going to get Linux fanboys asking why Solaris can’t scale to supercomputers and then you’re screwed.
Oooh I’m scared.
http://www.serverwatch.com/news/article.php/1137251
http://www.unipers.com.sg/Pages/param.html
http://www.hoise.com/primeur/01/articles/corner/AE-PR-03-01-19.html
“As usual these days for supercomputers, the machine will consist of a cluster of SMP-nodes coupled by a ccNUMA switch. The final configuration will consist of 768 new UltraSparcIII processors.”
“standard Linux distro”.
BTW, Linux distros are notorious for shipping kernels with patches that are not in the mainline tree. “Standard Linux distro” is a misnomer, especially in terms of kernel features.
If it ain’t in the mainline tree, it’s not in Linux, period. Like I said, scaling to >128 CPUs isn’t a problem for Solaris. Solaris solved the scaling problem a decade ago. It’s old news; get over it.
There’s a difference between someone running Linux on a large Itanium box in a lab and someone running commercial applications on an SF25K.
Ah, you’re going to go with the same lame cop-out as the other Sun fanboys. You got caught saying something blatantly untrue and now you’re going to try and back out of it.
The difference is that one configuration is supported and doesn’t require any custom tweaking, and the other has no business use and requires custom tweaking. So from a business perspective, Linux running on a large Superdome or an Altix has little if any use.
Well, you can go buy a SuperDome from HP with 64 processors running Linux right now. That seems to be a supported configuration and, since they’re selling it, someone must have a business use for it. Do you have some sick fetish for being wrong? Link: http://www.hp.com/products1/servers/integrity/superdome_high_end/sp… (I’m sure you’ll note that they list both Red Hat ES and SuSE ES as supported OSes).
Oh, and for those who think Solaris is not used in any render farms: look at the credits for films like “Finding Nemo”. You’ll see a line that says “Rendering provided by Sun Microsystems, Inc.”
I’m not sure if it does say that in the credits for Finding Nemo, but they sure didn’t use Solaris in their render farm. See, Pixar develops, uses and sells its own rendering software. It’s called RenderMan. And if you head on over to their website (https://renderman.pixar.com/) and click on Technical Specs, they list the supported platforms. You’ll see that Solaris is not among them. So there is absolutely no chance that Finding Nemo was rendered on Solaris. If you google for information about Pixar’s render farm, you will find that they used to use an SGI Irix farm and then switched to Intel and Linux. They are slowly adding Mac OS X, but considering that Steve Jobs is the CEO of Pixar, that’s not surprising. Still, no Solaris there. Maybe Pixar bought some x86 systems running Linux from Sun?
Finally, I never said no one uses Solaris. I never said there is anything wrong with Solaris. If you read every one of my posts, I never said anything derogatory about Solaris. I have, however, debunked every one of the “facts” that you Solaris fanboys are parading around. Really, just admit you’re wrong so this thread can end. You are the one that posted that Linux can’t scale past 8 CPUs. Are you still going to try and continue with that?
Are you guys really so dumb as to continue this? It says, in your own post, that the patches are already in standard Linux distros. They also said that the patches have been submitted upstream. This is an out-of-the-box, standard Linux install. No need to add any patches whatsoever. HP will sell you the box right now; it’s listed on their website. I posted a link to the actual system.
If it ain’t in the mainline tree, it’s not in Linux, period.
ROFL. Seriously, you guys are funny. Now you are putting your fingers in your ears and yelling “nya, nya, I can’t hear you”. You can buy Red Hat ES or SuSE ES and you get the kernel that HP used. Supported, out of the box. You’re really making yourself look silly now by continuing this argument.
Like I said, scaling to >128 CPUs isn’t a problem for Solaris. Solaris solved the scaling problem a decade ago. It’s old news; get over it.
Where did I say Solaris can’t scale? Unixconsole said Linux can’t scale past 8 CPUs. Do you deny that? Do you deny that I have completely, utterly, 100% shown that to be untrue?
“As usual these days for supercomputers, the machine will consist of a cluster of SMP-nodes coupled by a ccNUMA switch. The final configuration will consist of 768 new UltraSparcIII processors.”
See, now you turned it into a pissing match. So don’t go and try to pretend that I brought this up.
BlueGene/L DD2 beta-System (0.7 GHz PowerPC 440) … link: http://www.top500.org/sublist/System.php?id=7101. That system has 32,768 processors, it runs Linux and it’s the fastest supercomputer in the world. IBM alone has 161 Linux-based supercomputers in the top 500. Do you want to continue that pissing contest now? I mean, you’re not going to beat #1.
Well, you can go buy a SuperDome from HP with 64 processors running Linux right now.
No, you can’t. From the link you posted, further research reveals that you can only have 8 CPUs per nPar with RHEL and 16 with SUSE.
Oooh, “standard Linux distros” can’t scale to more than 8 CPUs in one case and 16 in the other. That too from a link you posted, from the company you made the poster child for Linux scalability. Look, the company isn’t selling boxes that can run 64 CPUs in one partition. What HP did is a lab experiment; it isn’t ready for primetime. And it isn’t in any standard Linux distro HP sells and supports.
http://h18000.www1.hp.com/products/quickspecs/11717_div/11717_div.H…
The table is hard to copy/paste, but here is the data:
“The table below shows the maximum size of nPars per operating system:
Red Hat RHEL AS 3: maximum nPar of 8 CPUs, 128 GB RAM
SUSE SLES 9: maximum nPar of 16 CPUs, 256 GB RAM”
BlueGene/L DD2 beta-System (0.7 GHz PowerPC 440) … link: http://www.top500.org/sublist/System.php?id=7101. That system has 32,768 processors, it runs Linux and it’s the fastest supercomputer in the world.
Who told you BlueGene runs Linux? You Linux fanboys never cease to amaze me. While BlueGene/L runs Linux on the front end, the main compute nodes don’t run Linux.
Sorry, you are way out of your league here. I don’t want to waste my time getting into silly arguments with you.
http://www.linuxdevices.com/articles/AT7249538153.html
The role of Linux
According to Manish Gupta, manager of emerging system software at IBM, Linux played a fundamental role in the design of BlueGene/L. “We wanted to provide a familiar Linux-based environment, so we support Linux on the front-end host nodes and service nodes.” The Linux-based host nodes manage user interaction functions, while the Linux-based service nodes provide control and monitoring capabilities.
Linux is also used in I/O nodes, which provide a gigabit Ethernet connection to the outside world for each group of 64 compute nodes, or every 128 processors. Thus, the full BlueGene/L system will have 1024 I/O nodes, which essentially form a Linux cluster. “Thousand-way Linux clusters are becoming fairly standard,” notes Gupta.
The actual compute nodes — the 128,000 processors — do not run Linux, but instead run a very simple operating system written from scratch by the Project’s scientists. “The kernels are so simple that, in many cases, the kernel doesn’t even handle I/O. The I/O nodes service I/O requests from the application.”
ROFL. Seriously, you guys are funny. Now you are putting your fingers in your ears and yelling “nya, nya, I can’t hear you”. You can buy Red Hat ES or SuSE ES and you get the kernel that HP used. Supported, out of the box. You’re really making yourself look silly now by continuing this argument.
Please try to maintain a less schoolboyish attitude. I posted enough data to prove you 100% wrong, as you would say.
You can’t buy a 64-way Superdome running Linux. Well, you can, but you can only have 8-way or 16-way partitions, with 8 or 4 separate nPars running on that Superdome, not one 64-way nPar. Sorry, I just threw your “standard Linux distro” argument out the window.
BlueGene being the number one supercomputer has more to do with the hardware and I/O technologies and not much to do with Linux. Linux is only used in parts of BlueGene, in a systems management role. The compute nodes (the ones that do the real work) are running an IBM proprietary OS.
I think you should stop now.
No, you can’t. From the link you posted, further research reveals that you can only have 8 CPUs per nPar with RHEL and 16 with SUSE.
OK, so you can’t buy the system today. Thank you for correcting me. 16 CPUs is still more than 8, though, so you’re still proving Unixconsole wrong. And shipping system or no, HP still went on record saying that standard Linux distributions supported a 64 processor SuperDome with no modifications. Or are you saying that HP lied?
Regarding BlueGene, did you read your article? BlueGene runs Linux on all the front-end and I/O nodes. The compute nodes run an embedded OS that doesn’t even support I/O. Those nodes are controlled by the I/O nodes, which run Linux. You program in a Linux environment, you execute the program in a Linux environment, and then Linux routes your program to compute nodes which are controlled by ASICs with an extremely lightweight embedded OS. Are you going to say that Linux doesn’t run my computer because the nVidia GPU, the Northbridge, the Ethernet adapter, etc. all have embedded OSes in ROM or ASICs? Having an ASIC or ROM with an embedded OS to control dedicated hardware is very common when a full-blown OS is overkill.
Please try to maintain a less schoolboyish attitude. I posted enough data to prove you 100% wrong, as you would say.
No, you provided enough data to prove that I was incorrect about the availability of a 64 processor single-partition Superdome. But you have done nothing to dispute Linux’s ability to run on large systems. If you’d like another example, here’s an IBM system: http://www-1.ibm.com/servers/eserver/pseries/hardware/highend/595.h…. That’s an up-to-64-way Power5 system which runs AIX or Linux. If you look at their Performance data page, you can see them specifically talking about a 32-way Linux system. They don’t appear to specifically mention any other Linux configuration.
I’ll stop as soon as you Solaris fanboys admit that you’re wrong and Linux can scale past 8 CPUs. (Btw, unixconsole also said Windows doesn’t scale past 8 CPUs, yet the Superdome does run Windows 2003 Datacenter on a single 64-way partition.) I’ve posted an article showing that HP proved it in the lab. I’ve just added a shipping IBM system which has performance data for a 32 processor Linux Power5 system running SuSE and Red Hat ES.
I’m seriously tired of this. You’ve all been proven wrong: Linux scales past 8 CPUs. How many times do I need to repeat this? I admitted I was wrong about the availability of that Superdome; are you all incapable of admitting you were wrong about Linux’s scalability?
And finally … Linux is only used in parts of BlueGene, in a systems management role. The compute nodes (the ones that do the real work) are running an IBM proprietary OS.
Yes, Linux is only used on some of the nodes of BlueGene … the ones that control the entire system. You do realize that on a supercomputer, the system management nodes are the OS nodes, right? The compute nodes are simply handed a packet of code and data to work on, and when they’re done they send the result back to the management nodes. Linux isn’t run on the compute nodes because it’d be a waste; the compute nodes don’t need a full-blown OS. And the compute nodes “do the real work” in the same way a GPU does the real work of rendering graphics. Without the “management node”, aka the CPU and OS, the GPU is useless. The same is true of the BlueGene architecture. And Linux is, according to that article, still running on 1024 of the processors. Even without the compute nodes, the Linux part of BlueGene is bigger than the Sun system you pointed out.
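If the GPU analogy doesn’t do it for you, here’s a toy sketch of that division of labor (my own illustration, nothing to do with IBM’s actual code): a “management” process prepares self-contained work packets and farms them out to minimal workers that only compute and hand results back.

from multiprocessing import Pool

def compute_node(packet):
    # Stand-in for a compute node's minimal kernel: it does no I/O of its
    # own; it just evaluates the packet it was handed and returns the result.
    data, exponent = packet
    return sum(x ** exponent for x in data)

if __name__ == "__main__":
    # The "management node" role: prepare packets, dispatch them, gather results.
    packets = [(list(range(1000)), e) for e in range(2, 6)]
    with Pool(processes=4) as pool:
        results = pool.map(compute_node, packets)
    print(results)

The workers do the arithmetic, but the scheduling, the I/O and the gathering of results all live in the management process, and that’s the role Linux plays in BlueGene/L.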
In conclusion, if you yourself are going to link to pages showing Linux scaling past 8 CPUs, it’d be nice if you simply admitted the original point (which was that unixconsole was wrong and Linux scales past 8 CPUs, if anyone has forgotten) and let it be done with. I’ve still never said one bad thing about Solaris and yet I’m getting slammed repeatedly. This entire thread would have been all of two posts long if you had all just said, “oh look, unixconsole is wrong, Linux (and Windows for that matter) does scale past 8 CPUs”. Whether or not that type of system is common is irrelevant. Hell, large SunFire systems are hardly common.

And it doesn’t make you guys look good when someone you’re calling a “Linux fanboy” is pointing out facts and figures to show that saying Linux doesn’t scale past 8 CPUs is wrong. Rather than say, “oh yeah, it does scale past 8 CPUs, my information may be wrong/old/whatever”, you go on the attack. Are Solaris fanboys really that afraid of Linux getting into the “last bastion of Sun”, aka big iron?

Solaris does have some nice features Linux doesn’t have; I never said otherwise. All I’ve been trying to do is dispute the misconceptions/lies/whatever which Solaris fanboys have been dishing out. I think we can all see who the real fanboys are: those who need to prove that their “opponent” has something wrong with it no matter what the facts or evidence. I mean, if you weren’t actually simple fanboys/astroturfers/whatever, you’d have been able to admit that Linux scales past 8 CPUs, right?