“George Weiss, Gartner’s open-source analyst, recently said that Microsoft Windows will not suffer irreparable damage on the server side at the hands of Linux over the next five years. He’s right. Microsoft will fall flat on its face all by itself, and Linux will pick up afterwards. It’s very simple.”
lol, we saw you moulinneuf.
-5 in just 5 seconds …
If Linux continues to lack powerful integrated tools like ISA, or all the things that AD can do, it'll be Windows that dominates the server space.
That includes GUI tools for the most common operations. Windows Server is the only server OS (besides Mac OS X Server, which can't really compare with Windows Server in too many situations) that uses the GUI as a way to simplify the administrator's life. I don't understand what's so wrong with administrators using GUIs – what administrators want is to automate all they can so their lives get simpler. Scripting can help to automate things and make them easier. But GUIs and usability can also help to build tools that let you do some operations with a single click. That just makes your life easier, just like scripts do.
Administrators are *also* users – users of administration tools, but users – and hence the usability field applies to those tools just as it does to desktop apps (which doesn't mean that administrators shouldn't learn, master and use the command line and configuration files for many other things). And in this field, just as on the desktop, Linux is really behind Windows, and it's increasingly becoming an issue for Linux servers. When you install a Debian server and log in, you get a '#'; when you install Windows Server, you get http://www.zdnet.de/techupdate/artikel/2003/04/Windows2003_07.jpg . Regardless of how much administrators should know, there's a difference.
Why should one use the shell commands when administering a Linux server?
– Scripting is based on connecting the shell commands in a usable, automated way. Using the shell commands every day means that you think in terms of the tools required to do the scripting.
This makes for an easy transition from manually entering commands to creating scripts to do your work for you.
The GUI offers a generic solution (imagine having tick boxes for every possible setting in Apache), which is nice and easy until you have a problem that isn't generic. Then you end up with no way to solve the problem, because you lack the understanding.
Once the commands are known, they are just as easy to use as a GUI, but much more flexible.
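To make that concrete, here is a minimal sketch, with invented paths and checks, of how the commands you type by hand every day turn into a script:

    #!/bin/sh
    # Hypothetical nightly check: the same commands an admin would
    # otherwise type by hand, collected into one reusable script.
    LOG=/var/log/nightly-check.log

    date >> "$LOG"                                      # timestamp the run
    df -h >> "$LOG"                                     # disk usage, human readable
    grep -i error /var/log/syslog | tail -20 >> "$LOG"  # recent logged errors
    find /tmp -type f -mtime +7 -delete                 # purge week-old temp files

Drop that in cron and the daily routine runs itself.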
It’s even simpler, why would one use a gui, that uses resources when you have a more than capable CLI? Guis also tend to contain more bugs, simply because it uses more code. More bugs = more downtime/problems = more loss of money.
On the other hand, if you want a GUI for your servers go Novell, Yast does a wonderfull thing when it comes to what you asked for. So there you have both, the choice to simple administration (GUI), or the choice of going for the real stuff (CLI).
Probably not that simple.
Linux may have a head start in virtualization, but there is no reason to believe that Microsoft won't manage to get on that train as well. As the author points out, they are working with XenSource just like most Linux distros.
Then there is VMware, which in its more advanced versions runs directly on the bare iron and allows you to run whatever OS you like on top, giving no particular advantage to either Linux or Microsoft.
Besides, virtualization is not the solution to all the problems in the world, as you might believe if you read the article. Virtualization also adds more complexity to the system. This complexity needs to be managed, and the company that makes it manageable will be the winner.
True, so true…. :p
You can almost reinstall an entire Windows machine in the time it takes to spell your way through troubleshooting a broken Linux machine.
(Before flaming: I only use Linux for servers myself, but I still think it's troublesome in many ways.)
Besides the time spent setting up a Kickstart server (only done once), it takes us 15 minutes to install a new Linux system once it is racked. That includes the time spent on any per-host configuration of the kickstart files (hostname, IP, etc.), which we have scripted.
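The scripted part can be as small as this sketch (the @PLACEHOLDER@ convention and the paths are our own, not anything standard):

    #!/bin/sh
    # mkks.sh - stamp out a per-host kickstart file from a shared template.
    # Usage: ./mkks.sh web01 10.0.0.21
    HOST=$1
    IP=$2
    sed -e "s/@HOSTNAME@/$HOST/g" \
        -e "s/@IPADDR@/$IP/g" \
        ks-template.cfg > "/var/www/ks/$HOST.cfg"   # served to the installer over HTTP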
If you *really* must, what's stopping you from reinstalling a broken Linux machine?
You can almost reinstall an entire Windows machine in the time it takes to spell your way through troubleshooting a broken Linux machine.
Maybe it wouldn’t take so long if you set the mouse down and learned how to type with more than two fingers.
Where in those two racks of 198 blades that make up your web site do you plan on plugging in a mouse? What is your GUI worth over a serial cable? Heck, you're at home in your shorts, dialed into the DSU/CSU; hop over to the router and telnet to the host, because the net is down and it's not failing over. Real bummer when all you know is click.
“Where in those two racks of 198 blades that make up your web site do you plan on plugging in a mouse?”
Mouse? How about a TCP/IP KVM?
Or better yet, use RDP.
Windows is damned easy to administer remotely.
Mouse? How about a TCP/IP KVM?
> You can do that with Linux. We have an IP KVM but never use it, because SSH over the same connection is much more convenient.
Or better yet, use RDP.
> You can do that with Linux; you can set up VNC in Ubuntu/Debian in about 2 minutes… except you lose all the performance benefits of not running a GUI on the machine. Of course, with Windows, you can't turn off the GUI anyway. Anyway, RDP is slow; scrolling lists of error messages, for example, is particularly exasperating.
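For reference, the two-minute setup is roughly this (package names are from the Debian/Ubuntu of this era and may differ on yours):

    # On the server:
    sudo apt-get install vnc4server
    vncserver :1 -geometry 1024x768    # starts a desktop on display :1

    # From the client:
    vncviewer server.example.com:1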
Windows is damned easy to administer remotely.
> Ever tried adding 200+ new users to an AD? Guess what: Microsoft provided an interface to do that. It's called the command line. GUIs are great for blundering into server setups without needing to actually understand what you are doing. Once things get more complex or something goes wrong, actually knowing what you are doing / what the OS is doing is worth more than all the time you've 'saved' by not working it through from the start.
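On Windows that command line means tools like csvde or dsadd; the Unix-side equivalent of the 200-user job is a few lines of shell. A sketch, where the colon-separated input format is made up for the example:

    #!/bin/sh
    # Create accounts in bulk from a file of lines like  jdoe:Jane Doe
    while IFS=: read -r user gecos; do
        useradd -m -c "$gecos" "$user"
    done < newusers.txt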
"GUIs are great for blundering into server setups without needing to actually understand what you are doing. Once things get more complex or something goes wrong, actually knowing what you are doing / what the OS is doing is worth more than all the time you've 'saved' by not working it through from the start."
An alternative view is: What a load of bull you are spewing. Only a moron would spend time writing scripts if a couple of mouse clicks would get the same info.
The beauty of Windows is that you get to choose. Linux is all about no choice at all.
ROTFL.
<gasp> now *that* was hilarious <wheeze>
“The beauty of Windows is that you get to choose. Linux is all about no choice at all.”
See… I think you’ve got a bit confused there.
Good one! /+5 Funny!
Oh wait, you’re not kidding.
Oh well..
/Moves along.
– Gilboa
“The beauty of Windows is that you get to choose. Linux is all about no choice at all.”
…I wouldn’t like to live in your dimension.. seriously
Hey, I used Windows just the other day; I had my choice between "Classic" and "XP", so there.
Take a (real-world) example. You have 50 workstations sharing 30 printers in various configurations. You want to change a printer's address across the network.
Windows: your only 'GUI' option is to go around to every machine individually, log in and configure printers, then modify every user profile to do the same. You can script it, but try to find good documentation. Printers can be defined in any one of four locations in the Registry. Modifying users' registries is a pain in the ass to script.
Linux: one word: CUPS (http://www.cups.org/book/index.php).
When you're actually dealing with a real network, 'a couple of clicks' x 1000 or more turns out to be quite a lot of work.
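To spell out the CUPS side: with one central print server, the address change is a single command, and clients pointed at that server never need touching. A sketch, with an example queue name and address:

    # On the CUPS server, repoint the queue at the printer's new address:
    lpadmin -p FrontOffice -v socket://192.168.1.77:9100

    # Clients carry one line in /etc/cups/client.conf:
    #   ServerName printserver.example.com
    # so they pick up the change automatically.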
Stephen
Basically I see a lot of comments. One is about AD. It's well known that Novell has NDS, while AD is a weak copy of that stuff. ISA – since when is that a powerful integration tool?
Also, it's well known that a GUI generally doesn't have the options needed to work on servers, as noted with the blade servers, for instance.
Note that Unix servers in general are easier to resurrect, mostly without loss of other services, while Windows is a complete disaster on these points.
Yes, to administer Unix, you need to understand things. With most Windows administration chores, you only need to be careful not to drool too much on the system while clicking on pictures.
Reinstalling a Windows machine is also easy, damned easy, but it's even easier to get a piece of hardware going under Linux than under Windows – especially if you add a few non-standard things W2K3 doesn't even know about. And yes, you can load those drivers – not via CD but floppy disk. With Linux/Unix, this kind of troublesome setting up, restarting and playing disk-jockey is pretty rare.
Anyway, the Windows people will fall all over me. People who professionally deploy different systems in the real world know that the Windows stories are generally good for a horror story.
At our place, the Windows specialists quite often ask the Unix people for assistance with troubleshooting. Why should that be…
GUIs are good, but editing human-readable config files has its advantages too. For one thing, it makes it simple to document who made a certain change, why, and when. The admin can do this directly in place. It is also easy to add comments like "don't change this setting, system X depends on it" directly in the config file, where it is hard for a fellow administrator to miss.
It is also very simple to go back to previous settings: just make it a habit to keep a copy of each config file with the date added as a suffix, and you can easily roll back. You could even use a versioning system like Subversion.
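Concretely (the paths are examples):

    # Keep a dated copy before every edit:
    cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.$(date +%Y%m%d)

    # Or put the whole config directory under Subversion once:
    svnadmin create /var/svnrepo
    svn import /etc/apache2 file:///var/svnrepo/apache2 -m "initial import"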
BTW, what makes you believe that there are no central directory services for Unix/Linux? Such tools have been available for a very long time, starting with NIS and now relational databases and various LDAP servers such as OpenLDAP and Fedora DS. The latter even has a nice GUI for you.
I will disagree with you. The shell on Linux is your scripting language. So the same exact commands and processes you use on a day-to-day basis, you can easily throw into a script. With Windows, the GUI you master is not easily scriptable — you need to learn a completely different process to script the system.
On *nix, it is trivial to configure hundreds or thousands of similar servers, thanks to the ability to copy and paste text-based configuration files and use standard search-and-replace techniques to adjust as necessary (i.e. IP addresses, system names, etc.). In addition, tapping into SSH makes it a breeze to do this automatically and over a secure channel.
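A minimal sketch of that technique – the placeholder, file names and service are assumptions for the example:

    #!/bin/sh
    # Push a per-host Samba config to every machine in hosts.txt (one name per line).
    for h in $(cat hosts.txt); do
        sed "s/@HOSTNAME@/$h/g" smb.conf.template | \
            ssh "root@$h" 'cat > /etc/samba/smb.conf && /etc/init.d/samba reload'
    done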
Then there is the resource benefit: a GUI takes up more system resources and has much more overhead, so a significant portion of a lower-powered system is tied up by non-essential processes (the GUI).
With *nix, it is rather easy to determine what configuration changes took place (those are the only ones in the config file). With Windows, some options might be enabled by default, but looking at the GUI you can't tell which options are defaults and which were customized. In addition, in a well-maintained network, versioning is in place to allow an administrator to go back in time to see changes to a config (and even who made the changes and why). With Windows, not so much: when something breaks, unless you're writing down changes elsewhere in a consistent manner, your troubleshooting is significantly more guesswork than analysis.
I find that the relative ease of getting a cookie-cutter Windows server up and going (as per your screenshot) has fostered many self-proclaimed "gurus" who ultimately cause a significant amount of harm to their customers through poorly configured and secured networks. Perhaps initial ease of setup is NOT a good metric for an administrator. I'll be honest with you: I almost think that some of the overly long and cryptic commands of *nix force an admin to be more efficient by scripting and automating their systems.
Now see how easy it is to point, click, administer 5,000 servers. Unix-based operating systems, including Linux, are much better in this regard because of their superior scripting abilities. This is my job…
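At that scale, one hedged sketch (assuming key-based SSH logins and a hosts file; real shops often wrap this in a tool like cfengine):

    # Run one command on thousands of hosts, 20 connections at a time:
    xargs -P 20 -I{} ssh -o BatchMode=yes {} 'uptime' < hosts.txt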
“Now see how easy it is to point, click, administer 5,000 servers.”
Well, with group policy its pretty simple.
And I happen to think there are more choices for scripting Windows.
So you are saying that a Junior level windows systems administrator can script all of his day to day tasks?
Didn’t think so.
So you are saying that a Junior level windows systems administrator can script all of his day to day tasks?
Sure. If he wanted to. What task don’t you think he could script?
Don’t write about systems you know nothing about.
# is fine for us techies cos we know the system and can do anything we want from #.
If you want an OS to be as simple as a VCR menu then I've got news: a server can do about a million different things, whereas a VCR plays tapes. Therefore you have to *study*.
I can tell you one reason it won't:
OpenSolaris.
If I got paid to write overly simplistic and nonsensical tripe like the author, I would be very happy indeed.
I’m sure you could get a job at it if you really applied yourself. Practice, practice, practice.
Considering that Xen as delivered with SuSE is limited to SuSE-only virtualization (for now), and that Red Hat has not yet delivered the version of their product that includes Xen, I don't see much of a threat to Microsoft at all.
There is some concern (including in comments posted here) that Xen is not "ready for prime time", and if I were evaluating OS virtualization, that would be a prime consideration. People who are seriously looking at virtualization want stability as well as functionality.
I noticed Steven forgot to mention VMware, which works with both Linux and Windows and is "Enterprise Ready" out of the box. One should not limit oneself to just what the OS provides, considering most of the players are late to the game compared to Sun, IBM, and HP (Zones/Containers for Sun, LPARs for IBM, and nPars/vPars for HP).
“submitted by anonymous”
By Duffman * (1.26) on 2006-10-04 07:43:58
lol, we saw you moulinneuf.
I don't post or submit as anonymous; when I can't log in or sign in, I sign my text. I ain't a coward or ashamed of who I am and what I say.
It was a joke …
The article has a good point: the cost of OS licenses may be a problem for people deploying virtualized systems. But for many Windows users, moving to a virtualized Windows solution may still be cheaper than migrating to Linux. And MS can always decide to change its licensing model when they see Linux and others starting to pick up. This way they can charge a higher price to those who want to move to a virtualized solution fast and can afford it, and later charge a smaller amount to those who couldn't afford the greater price.
Also, I don’t see any justification for the assumption that people abandoning Windows will move to Linux. There are plenty of alternatives, including the BSDs and the OpenSolaris based ones.
Also, I don’t see any justification for the assumption that people abandoning Windows will move to Linux. There are plenty of alternatives, including the BSDs and the OpenSolaris based ones.
Many Windows-mainly shops already use Linux for some purposes. Take where I work: it was a Windows-only shop when I started. First they were persuaded to use Linux for a new, important, mission-critical Oracle database (RHEL).
Now, even though IT has adopted AD, Linux Samba servers (Novell/SUSE) belonging to IT have started appearing on the network. There are probably many shops like this in medium to large enterprises that are predominantly Windows but are increasingly using Linux for various purposes. I work in R&D, not IT, so I don't fully keep up with what IT is up to.
When a Gartner analyst speaks, businesses listen to decide where to head. Sure, they're not always right, but they're mostly right, and their opinions are based not on personal liking but on solid business facts. That's why businesses pay Gartner for their help.
Then a fanboy replies with an article based on personal liking, with no evidence at all, predicts Microsoft will tumble (not because of facts but because of his own feelings), and ignores all the numbers – and does he really expect someone to decide ANYTHING based on this short bag of nothing?
Hello, Mr. Vaughan-Nichols, did you EVER hear about analysis? I could listen to someone who tries to put some meat in his articles, bringing some facts, showing some numbers, some trends… you know… facts.
You should always raise your eyebrow when someone states that “it’s very simple”…
The nub of the article is that server virtualization using Xen or similar will favour Linux over the next few years. The reason given is that with Windows, running extra iterations will soon attract hefty extra licence fees and so price Windows out of the market.
The trouble with this line is that Microsoft is nothing if not pragmatic. If they see that their presence on the server will head south unless they change their business model, then they’ll change their business model and offer new licence arrangements. A company as competitive as Microsoft is not going to sit on its hands.
In addition there are other considerations, as others have mentioned, such as integration with the Windows product family, ease of use, GUI administration and so forth, which may mean that folks are still happy to pay a "premium" for Microsoft over Linux.
The initial cost of a license only comprises 5% of the total TCO for an operating system product. There are lots of reasons which promote Linux as a solid server product; however, equally important is the fact that Windows is highly integrated, has excellent tools, and has comparable TCO. Given that, it’s difficult to see how Linux or any other OS is going to take significant market share away from Windows server anytime soon. I would argue that OpenSolaris will pose a serious challenge to the presupposed supremacy of Linux.
Say it some more and look more foolish.
TCO is a JOKE… anyone who believes in TCOs is clueless.
The total cost is everything. Only looking at the license cost is totally clueless.
Make sure when you are talking about TCO that you include the cost of attorneys and software audits for when the MicroSoft police come knocking at your door.
Make sure when you are talking about TCO that you include the cost of attorneys and software audits for when the MicroSoft police come knocking at your door.
Hmmmm… that’s odd. Neither I nor anyone that I know has ever been audited by these so-called “MicroSoft police”. So, I guess that cost for me is zero.
Actually, one cost that you do have to account for is license management. With open source, you can freely create as many copies as you like. With proprietary software, you need to keep track of all copies or ration licenses to ensure that you're in compliance. This is a nontrivial bureaucratic and cost burden.
Actually, one cost that you do have to account for is license management. With open source, you can freely create as many copies as you like. With proprietary software, you need to keep track of all copies or ration licenses to ensure that you're in compliance. This is a nontrivial bureaucratic and cost burden.
Unfortunately, I’ve heard that RHEL and SLES have the same compliance burden.
Yeah, that's a burden, but keep in mind that you don't have to keep track of copies that you aren't going to ask RHEL/SLES for support, so it's less of a burden.
Personally, I prefer the modified per-incident model, whereby you pay for an expected number of support calls and pay a slight penalty if you go over what you expected. This allows problem-free users to have large deployments without much of a cost issue, and troublesome users who have only a few deployments to get the dedicated support they need. No license tracking required.
One of the good things with going with Debian or Ubuntu is that you usually have a choice of service providers so you can shop around and pick the one that suits you best.
http://news.zdnet.com/2100-3513_22-6104891.html
http://www.debian.org/consultants/
http://www.ubuntu.com/support
Yeah, that's a burden, but keep in mind that you don't have to keep track of copies that you aren't going to ask RHEL/SLES for support, so it's less of a burden.
Not true. You still have to distinguish between copies used for support and those which aren't supported. So it doesn't appear that you save anything.
What does virtualization actually save you? I would say it’s a way of better utilizing your hardware so that it is not running near 0 percent most of the time.
In reality, unless you use a free OS on standard/generic PCs, maybe with dual cores, you end up having to spend more money on specialized hardware specifically set up for virtualizing. For example, "blade" servers compact a lot of machines into one chassis, but they are expensive.
Also, unless you run a free OS, you have to spend more money on the OS, as well as on all the other software.
In the end, the hardware vendors and software vendors make more money off of you when you try to virtualize your real-world apps.
Basically, virtualization allows you to save datacenter space and probably power. Other than that, if you are not careful, you end up spending much more money for specialized hardware and software to do the job.
Go with cheap hardware and a free OS.
> Go with cheap hardware and a free OS.
Unfortunately, the realities of the way that datacentres are run suggest that Linux isn't a free OS any more than Windows is, and you can be sure that either a) Red Hat will want payment for each instance, or b) they won't, and *if they need to* Microsoft will adjust their pricing accordingly.
If you want the support, or at least the offer of support, then Red Hat is far from free – in fact, it's quite costly compared to Windows and Solaris.
It's not clear that many businesses will go for virtualisation as much as some people seem to think anyway. My experience has been that project teams and support groups aligned to business users tend to get a private stack of kit, including their own DBMS and fileserver, ultimately so that change management is easier to organise. If you have a dozen business sign-offs with different agendas and priorities involved whenever you have a super-server upgrade or patch to deploy, it's going to be harder to get those signatures. It's been possible to consolidate services onto fewer boxes for years, but we haven't been good at doing it.
Actually, Intel won't be making any processors without virtualization any more, and it does not need expensive hardware.
My Intel dual-core Centrino has VT support and it is not expensive at all. So virtualization is becoming a commodity, not a specialized thing.
Basically, virtualization allows you to save datacenter space and probably power. Other than that, if you are not careful, you end up spending much more money for specialized hardware and software to do the job.
Go with cheap hardware and a free OS.
There is no doubt that there are a lot of situations where the increased cost of more specialized hardware outweighs the benefit of buying fewer physical boxes. However, there are also a lot of situations where such an investment makes sense.
And in reality, the cost of managing an infrastructure is often much, much larger than the cost of the OS and the HW.
The real advantage of virtualization is that we will be able to abstract our datacenter resources from the physical boxes. This gives us a lot of opportunities to manage our infrastructure more efficiently.
Some of the advantages are:
– much faster provisioning of new OS instances (for development, testing or production)
– the possibility to create test clones of live production systems (to test configuration changes & patches)
– the ability to move virtual instances between physical boxes (the OS is no longer tied directly to the HW – see the sketch below)
– the ability to reallocate resources between instances (e.g. at peak loads, such as a month-end batch job running on one instance)
While this certainly isn’t something for everyone, I think that virtualization offers huge benefits for companies running datacenters.
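With today's Xen, for instance, the provisioning and migration items above reduce to something like this (assuming shared storage and relocation enabled between the hosts; names are examples):

    # Provision a new guest from a prepared config file:
    xm create /etc/xen/vm01.cfg

    # Move a running guest to another physical box, live:
    xm migrate --live vm01 otherbox.example.com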
I can tell you one reason it won't:
OpenSolaris.
Yer. Everyone is just so excited about OpenSolaris (where’s the ISO by the way?) that has so much more to offer than Linux and has way better hardware support on x86.
There is huge interest in OpenSolaris. This weekend at the Ohio LinuxFest, I gave out all 55 OpenSolaris starter kits that I had (Sun was just nearing the end of a batch when I requested) within the first 2.5 hours simply by having them on the corner of my booth and people asking about them. Since I was there from 8AM-5PM (9h), if I can assume straightline extrapolation, I could have given out 200 if I’d had them on hand.
Which ISO would you like? Would that be the Commercial Version (Solaris), that you can download from Sun for free at http://www.sun.com/software/solaris/get.jsp ? Or the advanced release (Solaris Express) that you can download from Sun for free http://www.sun.com/software/solaris/solaris-express/get.jsp ? Or the developer version (Solaris Express Community Edition/Nevada) that you can download through the OpenSolaris.org website http://javashoplm.sun.com/ECom/docs/Welcome.jsp?StoreId=7&PartDetai… ? Or perhaps one of the four community distributions (Belenix, Shillix, Nexenta, MarTux; see http://www.opensolaris.org/os/about/distributions/) that you can download from various sites?
Solaris does have a lot more to offer in the server space than Linux does. Solaris has DTrace, Zones/Containers, ZFS, a useful threading model, scalability, SMF, FMA/PSH, more available applications, a better support model, and lower support costs than Red Hat or SuSE.
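A taste of two of those as shell one-liners (device names are examples):

    # ZFS: a mirrored pool and a filesystem in two commands:
    zpool create tank mirror c1t0d0 c1t1d0
    zfs create tank/home

    # DTrace: count system calls per process, live, on a production box:
    dtrace -n 'syscall:::entry { @[execname] = count(); }'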
Linux is a good operating system, and it is great that it is helping people to get better computing than they could get previously. However, it isn’t the end all and be all of operating systems. It isn’t going to crush all competition until it is the only thing. And, even if it does, then by necessity the various distributions/sellers will continue to drift further and further apart until it is as fragmented as UNIX used to be.
spotter,
I'm partial to the BSDs, but Solaris is some killer stuff. Dang, I would love to get my hands on one of those Niagara boxes! That's a beastly amount of power.
I'm partial to the BSDs, but Solaris is some killer stuff. Dang, I would love to get my hands on one of those Niagara boxes! That's a beastly amount of power.
Ubuntu runs faster on Niagara than Solaris according to Canonical’s initial testing.
> However, it isn't the end all and be all of operating systems. It isn't going to crush all competition until it is the only thing.
Agreed. In a healthy ecosystem (i.e. one promoted by open source) there is rich diversity. Linux is a lot like the human race: one of the most adaptable and very good general-purpose species (operating systems) out there. However, just because humans are successful doesn't mean that other species have to inevitably go extinct. There are many special-purpose animals (e.g. insects and mice, which greatly outnumber us), general-purpose animals that don't compete with us (e.g. deep-water fish), and even ones that benefit from our existence (pigeons, squirrels).
> drift further and further apart until it is as fragmented as UNIX used to be.
Unlikely. One of the key reasons Unix fragmented is that the Unixes *wanted* to fragment, so they could use vendor lock-in, planned obsolescence, and ruthless nickel-and-diming unbundling to squeeze as much money as they could from customers. Another key reason is that licensing and trade secrets prevented code from being shared. Open source is a lot less liable to these sorts of flaws. Forks can happen, but inevitably one of two things tends to happen: either one project dies off and its useful features get absorbed into the surviving one, or both projects agree on some sort of standardization (e.g. Freedesktop, LSB, etc.).
Yes, indeed – Solaris is so great that I simply won’t use it.
I have tried it over a period of time and have come to the conclusion that Sun has a very good marketing team (certainly not as good as Apple's, but they're doing their job well). No matter how strongly you insist that I use ZFS, DTrace, zones, etc. – it's all very nice, and some aspects are indeed worth looking at. In the end it is not enough, and I am sure most people will react that way. Why? Quite simple: there may be some interesting aspects, but these are certainly not enough to make me change a bunch of servers – in fact, not even my box at home.
Believe me: people will try it, notice that it doesn't really have much to offer them, and return to their Linux distribution.
Funny you should say that. I just bought a used Sun Blade 2000 and I figure I'll leave Solaris on it. If I don't like it, then I'll install Ubuntu.
Pretty much exactly what you described.
The author showed ignorance to such an extent that it wasn’t even funny.
1. Longhorn Server, or whatever it is called when it ships around year-end 2007, will have built-in virtualization for Intel VT and AMD SVM hardware. The new virtualization will implement a very thin hypervisor layer. So his assertion that Microsoft doesn't have a virtualization story is completely wrong. For low-end machines which don't support VT/SVM, there will be Microsoft Virtual Server, which is free.
2. I was at PDC and talked to some of their virtualization folks. Microsoft is adding really cool new management tools built around virtualization, and I believe that is the area where Linux really lacks. Enterprises need good management tools for their infrastructure, so I don't see how Microsoft can fall flat with their management tools backed by their hypervisor and the Longhorn kernel modified to run really fast on that hypervisor.
3. For really high-end virtualization needs there will always be VMware's ESX. ESX has the hardware drivers built into the hypervisor itself, which gives it really high performance, though at the expense of a slightly larger attack surface.
4. Xen is good, but it is unstable and it lacks good management tools.
Overall I think the author has ignored many important facts and probably just wrote what he wishes were true.
If Linux continues to lack powerful integrated tools like ISA
We’ll just paint over ISA, OK? 😉
or all the things that AD can do, it'll be Windows that dominates the server space.
Linux really needs a distributable infrastructure alternative to AD, but unless Microsoft makes its licensing of AD, Windows and other software scale to the large numbers required, people will just simply live without it.
> Linux really needs a distributable infrastructure alternative to AD
It’s called Kerberos, and it’s what MS based AD on.
LDAP and Kerberos.
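For anyone who hasn't used them together, the moving parts look like this (hostnames, base DN and principal are made-up examples):

    # LDAP holds the directory data - look up one account on an OpenLDAP server:
    ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jdoe)"

    # Kerberos handles authentication - get a ticket once, use it everywhere:
    kinit jdoe@EXAMPLE.COM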
They shouldn’t be permitted on servers…period.
Good thing that your generation soon will be retired.
As long as there are humans using a computer, usability is top priority.
What does the lack of a GUI have to do with usability? I can do anything and everything from a CLI – things you could not imagine doing from a GUI. A GUI can only provide what the designer assumed you would need to do. If that is not the case, then what?
Remote administration? Way, way too slow, cumbersome and insecure.
In time “your generation” will grow up. Who knows, you might be asking “my generation” for a job.
“Good thing that your generation soon will be retired. ”
You must own a lot of stock in computer hardware. I get calls from people because they took their computer to the shop for repair and were told it couldn't be fixed and they would have to reformat and reinstall Windows. Or that it was a bad motherboard and would be very expensive to repair. In most cases, a quick trip to the CLI and the system was up and running in just a few minutes.
When my generation retires, your generation will be sitting there scratching your heads trying to figure out how to fix the problem, because you don't know what the command line looks like, let alone how to use it.
Windows will eventually lose out because it tries to eliminate access to the very functions that allow one to quickly and easily recover from a disaster. GUIs have their place, but there are times when you have to roll up your sleeves and get dirty. The reason GUIs are popular is that they tend to remove the need for any real knowledge of the system. Which is the problem, because without knowledge of the system all you can do is remove and replace. That will work most of the time, but there will come a time when that philosophy fails you.
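For what it's worth, the "quick trip to the CLI" described above is often no more than this (boot from a rescue CD first; device names are examples):

    fsck -y /dev/hda1                   # repair the root filesystem
    mount /dev/hda1 /mnt
    chroot /mnt grub-install /dev/hda   # put the boot loader back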
protagonst…
Well crafted comment and spot on.
Every time we've hired a so-called Windows admin, the moment he/she gets out of that sterile GUI environment, they flounder, because they don't seem to understand the "concepts" behind what it is they are doing. If they see something in that GUI they haven't seen or don't understand, they're lost. It's kind of like your English teacher forcing you to memorize a Shakespearean passage: easy enough, but you need to understand the words, the concepts, the reasons, in order to get the most out of it.
I think GUIs tend to allow and foster that behavior.
You can easily run a web-based GUI on another machine while the server itself doesn't run a GUI at all. This is what AS/400 and OS/390 (or whatever IBM is calling them now) administrators do all the time. Worst case, you can bring a Unix GUI up when you need it and shut it off when you've finished. You still can't do that with Windows.
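A couple of concrete forms of that (the tools named are just examples):

    # Run a single graphical admin tool over SSH; no GUI stays running on the server:
    ssh -X root@server.example.com yast2

    # Or use a web-based front end such as Webmin and keep the server itself headless.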
** Marty McFly:
Listen Doc,
Have you tried openSUSE or SLES? Did you know that it has YaST? YaST has many features geared toward securely networking your DeLorean to your home in Hill Valley. These features can be found in the "Network Services", "Novell AppArmor" and "Security and Users" categories.
On SUSE, YaST comes in two flavors: ncurses (text-mode) and X (GUI). But of course you can use command-line tools if you please.
** Dr. Emmett Brown:
Great Scott! Do you know what this means?
** Marty McFly:
“This damn thing doesn’t work” ?
** Dr. Emmett Brown:
No No.
I can harness this power of SUSE and channel it into the flux capacitor displacer unit! We only need to invent a "flux capacitor" YaST module.
What do you think Einstein?
Woof… Woof…
** Marty McFly:
Uhm yea I suppose you could do that too.
> Good thing that your generation soon will be retired.
Just wait until you get old(er) and some kiddy blurts that kind of wisdom at you…
Microsoft's stated roadmap in that regard is to have future tools based on PowerShell and .NET, so anything exposed in the GUI tools will be scriptable – in addition to the scripting and config files exposing even more.
What's so hard about that? To each his own; Windows scripting languages are pretty powerful, and you can do some crazy things if you're good at scripting and are the "build your own tools" type of admin.
Like WMI, which lets you do not only hardware and software queries, but also things like WMI pings, and you can add it to scripting and program logic.
I can have simple scripts that query all the hardware in the domain, or look for machines with certain software installed. Sure, you can do the same things in the Linux/Unix world, but that's not the point.
The tools for shell/CLI admin exist in the Windows world, and they are pretty powerful if you use them; you can even use Perl if you want.
In the end, just use the best tool for the job. Period.
-Nex6
Great example: WMI. What a steaming pile of facaecious matter.
What's wrong with WMI?
It does allow you to do cool stuff quickly. Sure, its syntax and usage are not exactly the greatest, but that's not the point.
The WMI UR(I|N|L) syntax is great, except that no one bothers to document it properly. It's so extensible that it can't be properly documented.
WMI Service on Windows 2000 Server seems to stop working after one connection.
Error management is another layer of complexity on top.
None of the WMI scripts that I've written have worked reliably in the manner the documentation states. Every shell-script-over-SSH script that I've written has worked fine.
The thing about virtualization of Windows is that you need a license for each instance after the initial 4. That’s the cost factor he’s talking about.
Very naive. Microsoft isn't rigid like Novell or IBM. Needs change all the time. Microsoft is the one that made cheap commodity servers commonplace. Unlike Novell and IBM, Microsoft listens to its customers.
Very true. As I already said, the author is quite ignorant, and a Linux fanboy it seems; otherwise he would have spent some time looking at Microsoft's upcoming server virtualization, with its fresh new hypervisor and performance-critical devices…
SJVN should stop being so "optimistic". Yes, I like Linux, but Windows' decline is going to be slow and imperceptible.
Linux sales used to grow at 60% a year, then 40%, then 20%… now it's down to 6% per year.
It's now growing more slowly than Windows.
This is just a panic-driven whistling-past-the-graveyard article.
What this proves is not that Linux growth is minuscule, but that companies who sell Linux servers can't make as much revenue from that as from selling UNIX or Windows servers.
In the IDC report, it’s actually stated:
—
After fifteen consecutive quarters of double-digit, year-over-year revenue growth, spending on Linux server moderated significantly, growing 6.1% to $1.5 billion when compared with 2Q05. Linux servers now represent 12.0% of all server revenue, up slightly from 2Q05. Linux server shipments grew 9.7% with the volume server segment representing the majority of both revenue and units.
Microsoft Windows servers showed positive growth as revenues grew 3.1% and unit shipments grew 11.0% year over year. Significantly, quarterly revenue of $4.2 billion for Windows servers represented 34.2% of overall quarterly factory revenue, as customers deploy more fully configured Windows servers as part of server consolidation and virtualization initiatives.
—
So now Linux has reached the billions, up from the millions range of revenue, growing at 6%. Windows is at 3%, but Windows revenue is currently 4 times that of Linux, whereas we'd have been talking tens of times a few years ago.
Unit shipments of Linux servers were primarily composed of volume server shipments, which is nowadays the only healthy and growing part of the server market as a whole. I believe these are mostly blade servers.
Unit shipments of Windows servers were up 11% compared to Linux's 9.7%, which makes Windows unit growth appear faster than Linux's, but this is actually due to the way most Windows servers are sold: with the OS installed, one server per service. With Linux, you usually buy OS-less servers (in volume), then do your own mass installation and have the servers do several things at once.
Obviously when buying servers without an OS installed, you can’t count them towards either Linux or Windows. Also, if Linux’s unit growth is coming mostly from lower-cost volume blades, whereas Windows is on more powerful machines (there was no mention in the report that Windows is being sold primarily on volume servers) then of course its revenue will be larger.
Also add in the fact that MS encourages buying one server per service, and you see another reason why so many Windows servers are sold.
Don’t get me wrong, I don’t *hate* MS, but the idea that Linux is threatened by them is silly. MS, however, should feel threatened by Linux in the long run, and given MS’s improvement in quality these past three years, I think they agree.
I think open source will never generate the revenues that closed source can. But does that mean open source is an inferior product, or not as widely deployed? Of course not. It offers a more modest business model, but a more robust development model.
Linux only needs a .4% share for every year to be called the "Year of the Linux Desktop". I don't know how much is needed for it to be said to "dominate the future of servers". Many prefer *BSD or Solaris to Linux, and there are even some ISPs that still use FreeBSD 2.1 for their servers.
Microsoft is usually late. This has worked to their advantage: they let others invest in R&D and take the risks, then Microsoft steals the business. It's that simple.
Microsoft's competitors would have to open-source everything, provide support, and continue to innovate to give Microsoft a run for its money. They aren't competing with Win98 anymore. And IT staffs are being dumbed down to the point where a clean, easy-to-use GUI is a necessity. It's cheaper for management to pay for a fairly reliable, easy-to-use Microsoft product and hire zombies than to hire real IT people who can manage Linux.
It's cheaper for management to pay for a fairly reliable, easy-to-use Microsoft product and hire zombies than to hire real IT people who can manage Linux.
An alternative view: Linux is too complicated to waste your time on, especially when its sales will start dropping soon. On top of that, RedHat is too damned expensive.
Pick Windows. Lower TCO. It will be around in a couple of years. Way less security holes in IIS6 than Apache (which is only good for hosting parked domains anyway).
An alternative, alternative view. Don’t listen to this drivel-spewing fanboyish twit.
The article makes a simple point: virtualization is the future in today's mega-core world. The *pricing model* of Windows makes this very expensive per machine compared to *nix. The only argument I see against this is that Microsoft will have to adjust its pricing model. If that is really true, then people are arguing that *nix will make Windows cheaper.
I feel somewhat embarrassed by all the TCO arguments in this thread, because most of them are made from a technical administrator's perspective. I even saw someone quote a percentage, which I found somewhat embarrassing. TCO is really, really hard to measure, and as a result really easy to massage toward your own conclusions depending on how you measure it. TCO can change with the revenue lost when your website is down. TCO can change when a fileserver is down from a virus, leaving thousands of workers without access to their files. TCO can be hiring a specialist for installation. TCO can change in a very variable way depending on the cost of hardware vs. people – remember, we don't all live in the developed world. Last time I looked, the average worker is meant to work about 25% of the time, and 40% is considered high; how hard something is to administer is a non-issue except to the administrator.
GUI vs. CLI. This is a silly argument, especially here. GUIs *can* allow a shorter learning curve and make some tasks *easier* to administer, but you can do some awfully clever things with the command line – that's why Microsoft wanted Monad. The reality is that if the functionality is already available in an OS, the GUIs are relatively trivial to write… and the GUI is the area where *nix has been getting better, and fast.
With Windows Xen-enabled, companies will be running virtualized instances of Windows within Linux.
Now, what has become clear, even to Microsoft, is that Linux is a superior product for delivering IT infrastructure, such as network servers, grid computing, and the like.
Where Windows is superior, however, is in application support — for no real technical reason, but for a multitude of business/marketplace reasons, which is just as much a reality as the tech side of things.
So where I see the future is in using Windows and Linux in tandem: Windows runs virtualized within Linux, getting all the benefits of security and stability from that platform, while Linux gets the benefits of Windows applications.
There are also products emerging that allow Windows admins to use GUIs to manage large numbers of Linux servers.
SJVN is a fanboy, but I think he does have a point. If virtualization represents the commoditization of IT reaching a new watershed up the stack, from hardware into the OS, then how can a non-commodity OS survive?
The only option would be to price Windows licenses at virtually nothing, no pun intended.
Is it possible that MS could even make as much money at a service-based model instead of a product-based one?
Not at the OS level … the “gold rush” in IT has moved much higher up the stack. There’s some money to be made in commodity OS, but it’s more on par with what we’re seeing out of Red Hat, not MS.
If virtualization represents the commoditization of IT reaching a new watershed up the stack, from hardware into the OS, then how can a non-commodity OS survive?
On the other hand … Windows 2003 R2 Enterprise licenses allow 4 free virtual servers of any kind to be run concurrently with the main OS.
http://www.inacom.com/newsletter/dec05/whatsnewinwindowsserver.aspx
That's what we plan to do: buy one W2K3 R2 Enterprise license, use VMware Server (free) and virtualize 4 servers on each box. VMware is free; the virtualized guests are free. We get 4 VMs per box, which gives us some leeway in terms of resources to bring up more than 4 VMs for disaster recovery.
Plus, if you buy DataCenter Edition, you get unlimited virtualization rights:
http://www.microsoft.com/windowsserver2003/evaluation/news/bulletin…
No doubt MS is adjusting, I’ve seen the pricing, but I wonder how they will be able to keep up their revenue expectations if they start allowing you to get four (or unlimited) copies of Windows for the price of one.
Virtualization is leading many people to create more OS images on the same number of servers; only a few companies are actually getting rid of servers.
Before: MS sells one Windows license per server.
After: MS sells one Windows license per server.
MS’s revenue is probably the same.
Now, what has become clear, even to Microsoft, is that Linux is a superior product for delivering IT infrastructure, such as network servers, grid computing, and the like.
How, then, do you explain the growth of Windows server market share?
So where I see the future is in using Windows and Linux in tandem: Windows runs virtualized within Linux, getting all the benefits of security and stability from that platform, while Linux gets the benefits of Windows applications.
It could work the other way around, too. TCO is essentially the same for both OSes.
The only option would be to price Windows licenses at virtually nothing, no pun intended.
You're looking at this in a very one-dimensional way. Virtualization allows companies to consolidate a lot of their existing infrastructure (i.e. reduce the number of physical servers while maintaining compatibility with existing applications). And while virtualization favors lower-cost licensing, the comparative TCO for Windows and Linux is essentially a wash either way, so it depends on how much investment companies are willing to make to migrate from existing infrastructure. That isn't a slam dunk for Linux.
Not at the OS level … the “gold rush” in IT has moved much higher up the stack. There’s some money to be made in commodity OS, but it’s more on par with what we’re seeing out of Red Hat, not MS.
Nah. Look at revenue growth for both MS and Red Hat. I think you’ll see that the OS hasn’t been commoditized to linear returns. Far from it.
The reason for Linux's increasing popularity is not just that it's free; Linux gives the user freedom from corporate greed and more control over the operating system. In terms of quality (at least on the server side), Linux is far superior to what Micr$oft has to offer in terms of value, stability and security. Most Windows administrators don't want to switch to Linux because they would have to start from scratch. Ignorance, if self-imposed, has no cure.
I don't think that's a completely true statement.
I admin both Windows and Linux, and most of the Windows admins I know are either playing with Linux or have set up a dev Linux box, either in VMware or on test boxes.
But many Windows shops can't just dump Windows wholesale and jump on the bandwagon; CIOs and corporations do not work like that.
It starts off slowly, with *nix servers doing the things they do well and Windows servers doing the things they do well. Personally I think that's the new model: hybrid networks, where both Windows and Unix-based machines exist and are integrated.
-Nex6
The reason for Linux's increasing popularity is not just that it's free; Linux gives the user freedom from corporate greed
True, in fact Linux could not exist without Corporate stewardship.
True, if you would call a loose bunch of consultants, hobbyists and two-guys-in-a-garage outfits "corporate".
Oh, c’mon. There are a ton of corporate-paid developers working on Linux. Look at IBM. Novell. Etc.
LDAP isn't the same thing as AD. LDAP is a protocol; its counterpart within AD would be one subservice of AD. Kerberos is along the same lines. Novell's eDirectory is the real model that AD is based on: a complete package for network identity and management, as AD is. Moreover, eDirectory is still available. It even scales better than AD. Plus you can run it on NetWare, Windows, Linux, and maybe even Solaris.
Wow. The usual furball commences. Fun!
I predict that…
Microsoft will make lots more money from licencing multiple copies of their operating system and server applications and CALs to run multiple times on the same chunk of hardware.
I predict that there will be lots more installs of Linux. And BSD. And Solaris. And whatever else.
—
I predict that everyone (both various distros, *nix version, and Microsoft) will be forced to be more standards compliant as heterogeneous networks become more common.
Drawing on a real-world example: let's say I work in a company of fewer than 50 people, primarily software developers, with half a dozen non-technical support staff. I run primarily Debian on my servers, with a mixture of Windows and Linux (and maybe Mac OS X) clients. I decide I would like to make use of Novell's GroupWise calendaring/scheduling software. For this, Novell is fairly adamant that I run it on SuSE.
Our financial department asks for some networked payroll software that demands to be run on Windows. Moreover, the vendor refuses to support it if anything else is run on that copy of Windows.
Some of my clients are demanding that the main product run on OpenSolaris, and I need another couple of copies of Solaris to run Oracle and DB2 on, which for some weird* reason just will not co-operate.
My developers require fresh copies of Windows XP, 2000, and 98 in three different languages to test their software on, and try out various patches.**
—
Virtualization obviously makes deploying and managing all these instances a lot cheaper. The 'winner', if you want to call it that, will be the OS that all these machines run on. It will be the infrastructure that everyone depends on to talk to the others. It won't matter if you have the best management tools on the market (although that will certainly help): if you refuse to let anything except Windows run on your virtual platforms, nobody will care. If you try to lock your virtual platforms into only running 'pre-approved' vendor operating systems, companies will look elsewhere.
There’s a good chance that the platforms that host your virtual platforms will also form the backbone of your infrastructure. The most successful infrastructure will be the most standards compliant and accessible to other platforms. That – in my opinion – is the key to ‘winning’.
* This is a completely made up example. I don’t actually know of any incompatibilities between DB2 and Oracle.
** This is real. We do this where I work every day!
keep in mind that you don’t have to keep track of copies that you aren’t going to ask RHEL/SLES for support
I've read that Red Hat requires customers to buy support for every copy of RHEL that they have. They used to allow customers to have unsupported copies, but then customers would have hundreds of machines and buy support for only one.
I think Microsoft and Windows are being squeezed at the edges. Someone recently made the point that the easiest Linux deployments are made by people with lots of tech savvy OR no attachment to Windows. I think that's an oversimplification, but correct in essentials. The problem with this is that it's a slow process, but every day I see less-technically-oriented people moving to Linux and liking it.
There is so much more to a desirable server OS than the ability to do virtualization. I’m not saying Linux doesn’t have many other desirable characteristics, but companies will not be discarding Windows any time soon solely because it isn’t ideal for virtualization.
“Good thing that your generation soon will be retired.”
Actually, I have another 24 years of productivity left. Guess what I will be replacing along the journey.
A GUI, with the exception of MicroSnap products, is nothing more than a shiny little front-end for the powerful commands that are available at the shell.
While I watch Windull systems admins – a classification that is more of a joke than a reality – go through pain (maybe they are into that type of thing) when they administer their pieces of crap, I generally keep the business moving forward with my Linux, *nix, BSD and OS X computers.
Also, while they hop from one GUI to another, I put together super-commands that generally do more in one press of a key than several clicks of the mouse in a GUI.
Have fun with your WinToy. Here’s a dime for your collection for the next upgrade.