IBM has been championing Linux for a long time now, but the company has never clearly explained why it prefers Linux to Windows. I have written an opinion piece that explains the motives behind the commercial backing Linux is receiving. The article also details the impact commercial interests are likely to have on the future development of Linux. The URL of the article is here.
It’s like Pirates of Silicon Valley, Part 2.
SGI really wants to port XFS to Windows? Why?
I would disagree with your pricing/technology argument. Both are huge advantages for Linux. Pricing is flexible (it's a competitive market, whereas Microsoft holds a monopoly over Windows, so prices can vary much more for Linux). Technology-wise, Linux has the advantage of being modifiable by corporations without a huge licensing deal, and new features are constantly being added. If the Linux kernel continues to evolve at its current rate, it will easily surpass Server 2003 and future server variants in the next few years (it has already surpassed the desktop versions), and probably match AIX/IRIX etc. by 2008 or so (a conservative estimate), probably around kernel 3.0.
Couldn't agree more with the main points. However, two _small_ criticisms:
1) You are contradictory about Sun. You rightly claim that Linux is not good for "everything", but then you criticize Sun for recognizing this? I agree, though, that Sun is myopic about Linux and needs a more open stance.
2) You downplay forking, which will become much more visible in the near future; this is inevitable. The _reason_ for forks in Unix was hardware specialization and ROI for the hardware vendor, and this is coming to Linux "big time" in the next couple of years. Already Oracle and others are "standardizing" on one or two kernel trees and _not allowing_ personal customization. This will continue. It will strengthen RH and Novell and be good for everyone except the customers who pay the proprietary bill :-). Again, though, this goes back to Sun's perspective: they rightly see that Linux will cost money for anything involving "serious" computing, and will push Solaris instead.
Overall, though, you get an A for the article. (It would be an A+ without the contradiction on Sun.)
>SGI really wants to port xfs to windows? why?
Maybe because SGI wants to sell Itaniums (something they already do) with Windows Server, in which case they would need to offer powerful tools like XFS to those customers? Don't forget, SGI is primarily a hardware company, not an OS advocacy group.
Linux does have some technological advantages over Windows: its modular design, support for more types of hardware, and an open source design that makes it easy to customize. The author is correct, however, about the cost of ownership. I find it amusing to see companies that were much more proprietary in nature than Microsoft (IBM, Novell, Sun, Apple) jumping on the open source bandwagon. IBM, Sun, and Novell sell services, and the fact is that Linux requires much more support and service than Windows. Shoot, they will probably give you the OS for free because they are betting you are going to be calling them up for support every other day. I support Linux completely, but unless an organization has Linux expertise on staff (at a salary about twice as high as Windows expertise), it is going to end up spending a lot more money running Linux.
Linux will support very diverse hardware suitable for many different uses. Windows will continue to run only on mainstream hardware, and this will become a liability. Every large corporation needs some fancy hardware, and Linux will integrate with it much better than Windows will.
I liked the article, and it makes its points clearly.
I don't agree that hardware vendors (even for a piece of fancy hardware) will dare to give preference to Linux in the driver and tools department.
Linux will never run better on this hardware with only community-developed open source drivers. The vendor always knows its hardware better.
___________________________
Those considerations about Sun and their desktop make a lot of sense. Sun doesn't fully understand that Linux is an advantage for the tech business. Anyway, thanks to Sun for making the nice OpenOffice/StarOffice for Linux.
Many years will go by before a true Linux desktop starts receiving professional native applications.
______________________________
SGI really wants to port xfs to windows? why?
Maybe because SGI wants to sell Itaniums
They wanted to in the past, and it was a huge waste of time and development effort; Microsoft didn't want to integrate the XFS filesystem into an XFS-specialized version of the NT4 workstation kernel. SGI wasted time trying to get a Windows NT workstation for their graphics/movie technologies (render farms, OpenGL, ...).
I think they have lost interest completely by now.
“Linux will support very diverse hardware suitable for very many uses. Windows will continue to run only on mainstream hardware and this will become a liability. Every large corporation needs some fancy hardware. Linux will integrate much better with the fancy hardware while Windows will not.”
I fail to see how Windows will fall behind when it comes to support for 'fancy' hardware. I guess what it comes down to is: from whose perspective is this hardware fancy? Because as it stands now, most of the hardware that I would consider fancy is supported by Windows: high-end sound cards, high-end video cards, the latest MP3 players, etc. Unless I am misunderstanding the article's conclusion, is it saying that Linux will gain the upper hand in this area because Microsoft is not evolving its OS? I don't see any evidence in the article to support that conclusion.
I also don't see how running on mainstream hardware will become a liability for Microsoft. Mainstream = majority = money. And even then, Microsoft is not incapable of upgrading its own OS to support the newest 'fancy' hardware out there, especially when the newest and fanciest hardware is made directly for Windows. If the hardware this author considers fancy lies in a niche market, then I doubt it would matter to Microsoft anyway. Overall, I think it was a nice article; just the conclusion was a bit confusing given the direction of the body.
The reason accessor methods have a prefix (instead of just `var(x)` vs. `x = var()`) is that C can't use the same name for the variable and the two methods. Java can, but unfortunately chose to keep using the (in that case pointless) prefixes.
It surely doesn’t explain why IBM went for Linux rather than BSD to be honest.
I'd say the reason here is indeed commercial and is related to the complexity of Linux. As most have figured out by now, it takes heavy maintenance to build something on top of the Linux kernel, such as a distro. This suits IBM very well, as they can sell a huge amount of consultancy to make things work.
Big Blue might look like a nice player, but after all, they don't want Linux to be user friendly; then they'd be out of a job..
I think they mean fancy hardware for servers.
Then "high-end sound cards, high-end video cards, latest MP3 players" are not fancy hardware, just toys. Of course some fancy graphics could be needed for rendering farms, like SGI's thing with 16 ATI Radeon GPUs per head. It's more fancy stuff like
large storage arrays and fast interconnects for clusters.
>>It surely doesn’t explain why IBM went for Linux rather than BSD to be honest.
I think that is clear enough. Remember that the BSD groups are responsible for the _whole_ OS, not just the kernel. IBM and others can leave the kernel to Linus and his lieutenants, then tweak the rest for their "supported" and "unsupported" versions. They can claim that they support Linux (and they do!), yet develop Linux versions that are their own.
Not so with *BSD. Those guys have a development cycle that revolves around the whole enchilada – they are much harder to herd in that kernel and drivers are all their responsibility.
I remember reading something a few years ago saying that when IBM initially looked into an open source operating system, Linux was ahead of BSD in terms of multithreading/scalability, and that ended up being more of a deciding factor than anything else.
So MS is going to be made obsolete by Linux's forkless ability to focus on commoditized and standardized PC hardware at the low end, and by its own lack of flexibility at the high end. How long until server hardware becomes commoditized and standardized? IBM etc. will profit by creating those standards, but what then?
New markets are emerging in India and China but even they can’t outpace Moore’s Law.
> As a technology Linux is not radical. It does not offer anything noticeably different from what Microsoft is offering.
Oh yes it does; it's just not obvious to some people.
IBM never said it prefers Linux over Windows.
A large part of IBM is services, and that includes providing service for whatever the customer wants.
IBM supports Linux? You must be kidding me. I'm tired of reading articles about how IBM, Dell, or other big vendors support Linux. People who continue to spread this belief should go to those companies' websites and try to buy a system configured with Linux: they'll understand the meaning of "chasing your own tail".
IBM and co. put up a nice face but (in reality) when customers want to get a PC with Linux, they’re told that all systems come with Windows XP and the choices are either XP Pro or Home. I even read a piece in which somebody (the headmaster of a school) who insisted on getting dozens of systems equipped with the penguin was told by IBM to get lost.
Have you read the disparaging comments made by IBM, Novell, HP et al. about Linux, the "hobby OS produced by a loose bunch of amateurs"? How many times have you heard criticism from these guys about Microsoft's inability to produce a version of Windows that is not a malware carrier? That's not what I'd call supporting Linux.
It is a sad state of affairs when a corporation like IBM or HP poses as the Linux champion, only to turn around and kneel in front of Microsoft.
>>its going to end up spending a lot more money running linux.
What an oversimplification.
I'll set up a Debian/Postfix/Cyrus/Horde mail server,
or I'll set up an Exchange server.
I'll set up a FreeBSD/Samba3/NFS server,
or I'll set up Win2K Advanced Server.
Your choice.
I'll charge you the same.
My buddies who are fluent in setting up sites on IIS, Apache, PHP, ASP, CGI, MSSQL, MySQL, PostgreSQL, whatever... your choice,
and they'll charge you the same.
If you're paying a lot less for your MCSEs, then someone's getting taken for a ride.
A good techie costs the same: Windows, Linux, or other.
The only reason you might pay less for an MS admin vs. a *nix admin is that you paid for an administrator, not an engineer.
Microsoft admins (or pseudo-admins) are a dime a dozen.
A Microsoft techie who is on the level of an engineer and truly understands what's going on is going to cost a lot.
People like to toss around TCO, ROI, SYNERGY, and other bullshit terms.
They are trying to make it look like they are "in the know" and can see the big picture.
Fact is, you and 99% of middle management couldn't calculate a correct TCO for a lemonade stand.
With the billions of dollars being poured into Linux, it really makes me appreciate Mac OS X even more. I wish IBM and SGI luck with their Linux investment. With both OS X and Linux combating MS, maybe the monopoly will change.
> IBM supports Linux ? You must be kidding me. I’m tired of reading articles about how IBM, Dell or other big vendors support Linux. People who continue to spread this belief should go to those companies websites and try to buy a system configured with Linux
1. http://www.dell.com/linux
http://hardware.redhat.com/hcl/?pagename=hcl&view=certified&vendor=…
2. IBM has at least 20 engineers working on the Linux kernel. I *think* HP has a few. As do Intel, SGI, NSA, Dell…
One down.
> I even read a piece in which somebody <snip>
One swallow (?) does not a winter make. And I suspect that ‘somebody’ exists solely in your imagination.
Two down.
> It is a sad state of affairs when a corporation like IBM or HP poses as the Linux champion, only to turn around and kneel in front of Microsoft.
After loudly proclaiming this, perhaps you’d like to spend some time in a cage with an IBM lawyer?
Lawyer vs troll – now that’s a deathmatch I’d pay to watch.
>could be neded for rendering farms like sgis thing with 16ati
>radeon gpus a head. it more fancy stuff like
>large storage arrays and fast communication for clusters
Render farms do not use any fancy graphics cards; it's all about
processor speed, memory, hard disk speed, and network speed.
Most professional render farms in 3D graphics use Linux.
Many people believe that if Microsoft went out of business, there would be no such thing as a computer anymore. That's what happens when a monopoly takes over the market with an idea, 'the 2D desktop GUI', and now Microsoft wants to race into 'the 3D desktop GUI', which seems natural, don't you think? It's all about sales. Fortunately I am saved from buying into that hype-based market, because I use Linux and older x86 hardware. It's awesome being able to stay away from all the commercialism, and I get to learn how computers actually work from the programmer/hobbyist perspective using the Linux (open source) platform.
When you use a platform like Linux and write applications for yourself, the whole commercial side doesn't matter anymore. You have complete and total control over your system, and you can be confident that you could tune out for a decade or two and never upgrade anything; there is that much content and that much x86 hardware out there. You should never spend one penny on a damn thing until your hardware falls apart; then pick up a dirt-cheap replacement part that is a year or two old. It's incredibly interesting learning the science of software, and joining the development of new technologies once you have enough knowledge, but in the meantime put your money into investments such as real estate, and forget what company X is doing; to hell with them.
It surely doesn’t explain why IBM went for Linux rather than BSD to be honest.
BSD lost because of a licensing scheme that does not encourage contributions (but rather allows companies to keep them proprietary). It's proof that requiring people to contribute back works better than giving them more freedom at the potential cost of fewer contributions. And, especially in the case of private contributors, knowing that their work cannot be turned into a proprietary product makes them more comfortable volunteering.
SGI really wants to port xfs to windows? why?
RTFA: this is in the past tense in the article and refers to the period when SGI had an MS agent as CEO and was trying to move SGI over to Win NT workstations.
Hardware I would consider fancy are supported by Windows: High-end sound cards, high-end video cards, latest MP3 players, etc.
The article is not referring to hardware goodies for home users; it is talking about specialized hardware for the enterprise.
It surely doesn’t explain why IBM went for Linux rather than BSD to be honest.
Answer: the GPL. There is a lot of code in the Linux kernel with the statement "Copyright IBM" on it. The GPL means that an IBM competitor cannot take the code that IBM hackers created, turn it proprietary, and then do closed development on it to compete against IBM. That scenario would be possible under the BSD license. Under the GPL, if an IBM competitor wants to build on IBM-contributed code, it has to return the derivative code to the community (which includes IBM).
IBM and co. put up a nice face but (in reality) when customers want to get a PC with Linux, they’re told that all systems come with Windows XP and the choices are either XP Pro or Home
Again, the article is referring to servers and enterprise systems, and it explains why IBM isn't really supporting Linux on the desktop (yet).
Listen, guys, stop looking at the article from your home desktop perspective, whether you are a "Windows fanboy" or a "Linux zealot". Look at it from the perspective of how enterprise computing will develop in the future.
My thoughts exactly. Thank you.
The article first says there's no cost advantage for Linux over Windows, then later says IBM will save on new technology development costs with Linux. It also first says there's no technology advantage to Linux over Windows, then later points out Linux's advantages regarding technology development. (At least regarding enterprise-level servers, I think the balance of opinion would say that Linux does have a technology advantage over Windows at this point.)
Good explanation as to why there’s no point in IBM pushing Linux for the desktop. Makes one wonder how Sun sees this as a viable strategy.
Re: chemicalscum, since under the BSD license IBM wouldn’t have to reveal any enhancements it made to BSD, keeping others from taking advantage of IBM’s enhancements can’t be the reason for choosing GPL’d Linux over BSD. It also doesn’t account for IBM’s choice of an open development model vs. putting more resources into their own proprietary Unix, AIX. Multithreading/scalability mentioned above by Anonymous sounds more likely as the reason for choosing Linux over BSD.
>>since under the BSD license IBM wouldn’t have to reveal any enhancements it made to BSD, keeping others from taking advantage of IBM’s enhancements can’t be the reason for choosing GPL’d Linux over BSD.
Yes, but who would use this IBM closed-source version of BSD? That is the question. By contributing to Linux they are at least guaranteed a growing user base, while if they were offering their own closed-source version of BSD, they would have to start from a user base of zero.
It seems this guy got it totally wrong, specifically in the case of Sun. I don't know when this article was actually written, or the time difference between its writing and its posting.
Yep, dubhthach, it certainly couldn’t have hurt IBM to push an OS that already had some word-of-mouth behind it (though I’m guessing in the enterprise server market IBM lent Linux name recognition more than the other way round). An interesting question in this regard is why IBM chose Linux rather than adapting AIX, its own proprietary Unix, which does have name recognition and a user base in the enterprise server market.
Re: chemicalscum, since under the BSD license IBM wouldn’t have to reveal any enhancements it made to BSD, keeping others from taking advantage of IBM’s enhancements can’t be the reason for choosing GPL’d Linux over BSD. It also doesn’t account for IBM’s choice of an open development model vs. putting more resources into their own proprietary Unix, AIX. Multithreading/scalability mentioned above by Anonymous sounds more likely as the reason for choosing Linux over BSD.
My reasoning explains why IBM wouldn't choose a BSD-based open source route if it decided to go open source, and why a GPL-based OS is preferable for them. Pretty self-evident, really, when you think about it.
Why IBM decided to go open source is a different question. There are, I think, three main reasons:
1. To leverage all the free programming capability in the FLOSS world.
2. To provide a basis of synergy in software development between it and its competitors that use open source, such as HP. IBM heavily supports work on Apache, HP does the same with Samba, and they both gain. Both, together with Intel, have heavily supported IA64 kernel development and sped up the process.
3. Though not the same thing, open software and open standards in a way support each other, and open standards are the way to develop enterprise computing in the era of the web and B2B.
But they are just guesses.
There is another possible interpretation: about a quarter century ago a group of anarchists took over the centres of policy making at IBM. First they introduced open hardware (the PC, effectively against their own proprietary interests), then they supported open global networking (TCP/IP, thus helping lay the foundation for the WWW), and finally they adopted FLOSS with its anarchic development process. But then, I guess you would need a tinfoil hat to believe that one.
Yes, four main reasons:
4. The one mentioned in the article: the ability to customize the software, either by IBM or by its customers. (Neither can do that with Windows; only IBM can do it with AIX.)
No one expects the Spanish Inquisition !
Yes I am teaching myself to program in Python at the moment.
aye it’s an interesting question
From what I remember, at the time IBM was working with SCO on porting AIX to IA64 (Project Monterey?).
Now, back in 1998 everyone was talking about how Itanium was going to kill RISC (something it definitely hasn't done), so maybe IBM saw it as cheaper to push Linux instead of porting AIX to IA64; result: cancellation of Project Monterey.
Linux does has some tehnological advantages over windows. Its modular design, support for more types of hardware
Well, if you're comparing Windows with Linux, then you have to compare them by their respective kernels, as Linux is only a kernel. Then you need to compare them in technical terms; the problem is that this usually leads to useless "M$ is evil" flamewars.
Then you need to choose which kernels you want to compare. OK, there is only one Linux kernel (albeit with some variations), but there are at least three different Windows kernel codebases in use (Win9x, WinCE, WinNT, or whatever the MS marketing machine currently calls them), and all of them are completely different architectures. I take it you were referring to NT, which is used in Windows NT/2000/XP/2003.
The NT kernel has been ported to x86, Alpha, Itanium, and PowerPC, and in fact the first version was developed on Alpha (or some other RISC; can't really remember). The kernel architecture was designed to be portable; if there aren't more ports, blame it on MS's business department and the crappy Win32 subsystem.
I want to see anyone prove that the Linux kernel is more modular. Loading one big binary image at boot is not what I would call modular. Yes, Linux also has modules, but a lot of stuff is still built into the (monolithic) kernel; take a look here: http://tutorials.findtutorials.com/read/category/97/id/379 .
And there are more things: NT had threading from day 1, whereas Linux only recently got decent threading, and AFAIK it still has nothing that compares to NT fibers; a Linux hardware abstraction layer and a coherent driver model are still a mirage.
In short, you can say what you will about Microsoft, but the fact is that the NT kernel has a beautiful design. The people who built it knew what they were doing, and if anyone has to be blamed for the shortcomings of Windows, it's the people who make Microsoft's business and marketing decisions.
PS: For info on both the Windows and Linux kernels, check this: http://cs.uml.edu/~cgould/
Once in a while someone here makes some intelligent comments. Thanks, Sagres!
>The NT kernel has been ported to x86, alpha, itanium,
>powerpc
Please provide me a link where I can download those kernels.
>And there are more things, NT had threading from from day 1
More in-depth info here: http://linas.org/linux/threads-faq.html
>I want to see anyone prove that the linux kernel is more
>modular.
http://www.kernel.org. Download it and use it before you comment.
>if anyone has to be blamed on the shortcomings of windows >it’s the people that make microsofts business and marketing
>decisions.
Right, so the people who developed NT are not to blame
if NT freezes again or produces a BSOD? No, it's all the fault
of the marketing guys..
Please provide me a link where i can download that kernels
I can’t, you’ll have to ask Dave Cutler to mail them to you 😛
>www.kernel.org. Download it and use it before you comment.
And how would downloading the kernel prove it’s more modular?
See http://cs.nmu.edu/~randy/Research/Papers/Scheduler/ and notice that even after they did that, NT still had the lead with more than 20 threads queued. Like I said, this only changed recently with 2.4 and now with 2.6. You should also consider that had they used fibers instead of kernel threads in NT, Linux would be toast.
Right so the people who developed NT are not to blame
if NT again freezes or produces a BSOD? no it all the fault
of the marketing guys..
If the marketing guys make their software engineers add compatibility with Win9x drivers to 2000/XP, then yes.
And BTW, if you don't believe me, then ask yourself this: have you ever heard Linus Torvalds say anything bad about the NT kernel or Dave Cutler's work, the way he has about Mach kernels?
From what i remember at the time IBM was working with SCO regarding porting AIX to IA64 (project monterey?)
now back in 1998 everyone was talking about how Itanium was going to kill RISC (something it def. hasn’t) so maybe IBM saw it as cheaper to push Linux instead of porting AIX to IA64, result cancellation of project monterey.
nzheretic has posted an article (I can't remember the URL) arguing that it was HP and Intel who first ported Linux to IA64 (setting up OSDL as part of the process), and that this was a major factor in causing IBM to drop Project Monterey and move over to Linux.
With regard to Sagres: AFAIK NT was never ported to PPC, and of course, as we all know, the NT kernel was a rip-off of VMS, from when Dave Cutler moved his team across to MS from DEC after DEC decided to abandon VMS in favour of Unix.
I second the above opinion; thanks, Sagres, for making my day! A bright commentator is very rare indeed.
I'm not knowledgeable about the inner workings of the NT kernel, but I've read some histories of MS and the coming of NT. You're correct, AFAIK: most of the problems in NT early on came from the ABSOLUTE requirement that NT run Windows binaries without change to either the binaries or the OS. This was Gates' 'had to have' feature. Cutler delivered, but with the instabilities that Cutler himself predicted. Cutler is _the bomb_ in the OS world, second to none.
Look, I use BSD mostly and am a *NIX freak. But NT was a major accomplishment in CS history, since it provided FOR THE FIRST TIME the backward compatibility that no other system had ever done. (Even the trade press was very pessimistic that this could be done when NT was first Beta tested – the history was that bad!) IMO Gates is STILL demanding this as well, which is why Longhorn is going to take so long to do.
I'll remind everyone how little backward compatibility Linux is able to offer (even my favorite Linux programmer, Icaza, has written a lot about this problem). Slam MS all you want, but they've done some interesting things along with the crap that we all rightly slam. And hiring Cutler to do NT was freakin' genius.
“Then you need to choose wich kernels you want to compare. Ok, there is only one linux kernel (albeit with some variations) but there are at least three diferent windows kernel codebases in use (win9x, winCE, winNT or whatever the MS maketing machine currently calls them) and all of them are completely diferent architectures. I take it you were refering to NT wich is used on windows NT/2000/XP/2003.”
If you count three Windows kernel codebases, I suggest you also distinguish between the stable Linux kernels: 2.0, 2.2, 2.4, 2.6.
Comparing kernel vs. kernel is also unrealistic under various circumstances, since Windows is not only a kernel and cannot be obtained as one, and because a kernel alone is a no-go for, say, a server. Besides that technical aspect, this is IMO especially important when TCO is being discussed.
“In short, you can say what you will about microsoft but the fact is that the NT kernel has a beautiful design, the people who did it knew what they were doing and if anyone has to be blamed on the shortcomings of windows it’s the people that make microsofts business and marketing decisions.”
Regarding your last sentence: so it is clearly impossible to blame the programmer(s) when their program crashes? That was clearly beyond their control; it's all because of the business and marketing decisions. Do you think those who decide business and marketing also decide on every piece (down to every character) of code a programmer creates? Do you think they review it in detail (and, if so, do it themselves)? Sounds rather funny, doesn't it?
Please prove that only the VMS team worked on the NT kernel, even for its latest versions. Please prove that the 2K/XP/2K3 versions are still as portable as the NT 3.x/NT 4.x versions. Yes, I know there's a beta of Win2K for Alpha; that doesn't prove much (it's a beta), nor does it say much about the newer versions. Only speculation.
“I want to see anyone prove that the linux kernel is more modular. Loading one big binary image on boot is not what i would call modular, yes, linux also has modules but lot of stuff is still built into the (monolithic) kernel”
Hello, brightness: it is a _monolithic_ kernel, not a microkernel. Actually, Linux 2.6 has been made even more modular: virtually everything can be compiled as a module. You can check that out yourself; you can download the source. It even has an embedded option which makes the kernel less bloated.
“And how would downloading the kernel prove it’s more modular?”
Well, you could download kernel 2.4, do `make menuconfig`, compile as much as you can as modules, then check how many modules you have and the size of bzImage. Then you do the same with 2.6 and compare them, with the embedded option on or off. You have the source. Now, can I check the portability and modularity of Windows 2003 Server myself (for $0, please)? TIA. Answer: no, impossible.
Another advantage regarding freedom is the fact that other people can use the Linux kernel for a specific purpose (e.g. embedded) and distribute it freely. It means less work for others, and it is both cost-free and free/libre (though a fee can be charged for GPL code). Both are impossible with Windows: you either have to buy it as a complete package or DIY, which would take time and would not allow you to distribute your work, and you'd still have to buy licenses anyway.
Furthermore, your source about the scheduler is, just like your article about Windows _2000_, out of date:
About Linux 2.0.30
Made with Netscape 4.03 on Linux 2.0.33 (*giggles*)
Last modified in October 2001
Sounds ‘slightly’ old, doesn’t it?
We are living in late December 2003 (and Netscape's browser is quite obsolete on Linux). The current kernel version is 2.6.0 for the 2.6 stable branch and 2.4.23 for the 2.4 stable branch. Do you have an analysis of these versions too?
Wow, that was fun (:
Always comes down to "more freedom", yes? You completely ignore the points about backward compatibility and the scheduler. Just the same old song: "I can't see the source, so I can't know the real internals."
"Fun" is watching the GPL freaks run the same tape over and over again. Hey, when you pull your head out: there has been quite a bit of comment on the re-implementation of old ideas. Linux has already gotten _this_ far because it had a PROTOTYPE (i.e., Unix) to shoot for. NT was written from scratch with a completely "free hand" for Cutler; it is the clone of NOTHING. Cutler himself always downplayed NT as "new technology", since everything "new" is eventually "old". As Linux already is (indeed was, when Linus first started the project!). I use it, but it is hardly groundbreaking.
Unless, of course, we want to restart the "freedom" tape. But I wonder, I really do, how much of your precious source you have actually read. I'd wager that the real answer is very little. But no matter! Your answer is predictable and lamentable. PRESS REWIND NOW!
Backward compatibility is something Windows isn't brilliant at either, AFAIK; I've read a lot of angry articles about this regarding games. I don't care much about it either, since I always work around it (this includes BSD, for example when OpenBSD switched from a.out to ELF). Eventually there's this funny thing called a "static binary", and "recompile". For me it isn't an issue, therefore I don't discuss it.
The scheduler part is addressed because his/her source is old, which is thoroughly proven; his/her analysis isn’t about a recent 2.4 release or the 2.6.0 version. Only a wild claim regarding these, but no analysis is provided for the latter two trees, which is what actually matters to me. If you’re gonna discuss Linux 2.0, I suggest you also discuss Windows 9x, but the point is that both are history.
What makes it even more complicated is that there are multiple schedulers available for 2.4: http://www.google.nl/search?q=scheduler+patch+Linux
IIRC, Red Hat Linux ships with a non-standard one.
And there is the fact that the scheduler was improved in Linux 2.6: http://bulk.fefe.de/scalability/
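For anyone who wants to poke at the scheduler themselves: the policy a process is scheduled under is visible from user space. A small sketch, assuming a Linux box with util-linux installed; ordinary processes run under the default time-sharing policy (SCHED_OTHER), while the real-time policies (SCHED_FIFO, SCHED_RR) are opt-in:

```shell
# Ask the kernel which scheduling policy the current shell runs under.
# chrt is part of util-linux; $$ expands to this shell's own PID.
chrt -p $$
# Typical output:
#   pid 1234's current scheduling policy: SCHED_OTHER
#   pid 1234's current scheduling priority: 0
```

The scheduler improvements being argued about here concern how the kernel picks among such processes, not the policy interface itself, which has stayed stable across 2.4 and 2.6.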
“Linux has already gotten _this_ far because it already had a PROTOTYPE (ie UNIX) to shoot for. NT was written from scratch with a completely “free hand” for Cutler.”
Surely, Cutler and friends forgot all their VMS knowledge when they wrote the NT kernel.
Now, where’s your source about the portability of Windows 2003? Or your source that Cutler’s team wrote only the NT kernel, even in the newer versions? Because if you claim “NT is portable”, that includes important parts of the Windows OS, since the OS is bound together with the kernel (as explained earlier), and if you claim “Cutler and friends wrote the NT kernel”, that includes all NT versions (also explained earlier).
No, instead of that, you throw in some claims about me, and some “GPL freak” ad hominem. I don’t know you either, but come on, show us these brilliant sources! Why hide behind fallacies? I, for one, am eager to read articles about this. Then again, not some _out of date_ articles about old versions which are (soon to be) unsupported, years old, no longer important, or irrelevant in a discussion about the future. Therefore I suggest we discuss versions which do matter – supported versions from 2002 and 2003: Windows XP, Windows 2003 Server, the newer Linux 2.4 releases, and Linux 2.6.
Linux has already gotten _this_ far because it already had a PROTOTYPE (ie UNIX) to shoot for. NT was written from scratch with a completely “free hand” for Cutler. It is the clone of NOTHING. Cutler himself always downplayed NT as “new technology”, since everything “new” is eventually “old”
The team which created NT mostly came from DEC; Cutler made that a condition of his joining Microsoft. These people created VMS, and they created NT. There are more than passing similarities between VMS and NT. Microsoft was actually sued by Digital, and they paid up and were forced to port to Alpha, IIRC. The only difference is that there is a specification for Unix-like systems in general, and none for NT – none which is publicly available, at least.
So NT was not new in the sense that it had not been done before, but in the sense that it was not DOS.
So please take your uninformed opinion elsewhere. There was no free hand for Cutler. He was hired from DEC for a reason. It wasn’t to start thinking anew about operating system design.
From what I remember, Dave Cutler was working on a microkernel replacement for VMS at DEC called Project PRISM; I believe new hardware was involved as well. The whole thing ended up being shelved, and DEC went with the Alpha and ported VMS to it.
Most of the PRISM team went to MS and took their ideas with them; the result was NT.
As for the name, I’ve also heard that it came about because NT was originally being written for the Intel i860 RISC chip, which was codenamed N10 (hence NT). When MS realised the i860 wasn’t going to work out, they moved development over to MIPS. After that it was ported to x86 and Alpha; I believe there were Sparc and PPC ports out there too, but I’m not altogether sure.
>>
Backwards compatibility is something Windows isn’t brilliant at either afaik. I’ve read a lot of angry articles about this regarding games.
>>
I don’t play games, so I can’t speak there. But if you think that 2003 Server “broke” compatibility with 2000 or NT4, you are mistaken. The older binaries work fine, even with enterprise-level software running to millions of lines of code.
With _no_ kernel recompiles, no OS changes.
You couldn’t say that for (as an example) RH7.3 to RH8. Hell, I still have to run the unsupported RH7.3 because of these exact problems.
You are simply misinformed. And you also “hide” behind the charge of ad hominem fallacy. The previous person was making a point about the NT kernel, and _you_, my friend, introduced (and I quote) “…the importance of freedom….” You know perfectly well what the introduction of _those_ terms into the discussion means (and if you don’t, you are doubly misinformed). So, either you are ignorant, or my characterization of you stands. You opened the door to show your flag – when I call that flag by its true color, I’m guilty of ad hominem.
Sure, nothing is perfect on this earth. The Linux kernel is far from perfect, as is the NT kernel, but the Linux kernel is maturing quickly and it’s a very good alternative to a kernel built by a money-hungry, power-hungry company called Microsoft. So even if NT were twice as good as the Linux kernel, I would still choose the Linux kernel because of its nature and freedom.
Do not forget this article is about the future of the Linux kernel; the Windows/NT kernel is doomed to die anyway.
Can we keep things technical and not go into the philosophy of freedom? I’ve read enough “my license is better than your license” threads to do me for another year *roll-eyes*
As for RH7.3 to 8: well, don’t take my word on it, but when going through the list of packages I could install in Fedora using synaptic, I remember seeing a number of “compat” packages for Red Hat 7.3 backward compatibility. The problem there, in truth, wasn’t kernel related; it was more that Red Hat went playing with compilers and libraries.
You could always try a Fedora test box with those installed to see if it works for you; wouldn’t hurt to try anyway =)
Thanks, man. I’m short of machines just now, but I’ll indeed follow your advice and try Fedora when a machine is free.
I was rather speaking about playing games on Windows XP. DirectX is another problem all by itself, and not only with old games, btw. For example, GTA2 isn’t very old, unlike the older games from the DOS/W95 era. Recompiling is not even an option in that situation, whereas it is an option with Red Hat. You do know how to make a backup of important files on RH7.3, install RH8.0, and copy the important files over, don’t you? Not much of an issue, if you ask me.
Therefore, like I said, I don’t care much anyway. If Microsoft did this like you state, which I believe, they did a good job in that regard. But not regarding compatibility as a whole, for example in the case of DirectX and 9x.
For one, you cannot quote me regarding
“…the importance of freedom….”
since I nowhere wrote exactly that. So don’t claim you are quoting me, since what you state is not a quote.
http://dictionary.reference.com/search?q=quote
Second, it is only a small part of the post, and imo it is an important part; therefore I added it. Can’t stand the heat? Don’t read it.
Now, where are the articles? Oh yeah, I forgot. You ignore those parts. The posts by anonymous/*.zw and dubhthach about Cutler, though not provided with sources, were interesting to read, imo. My questions regarding the CURRENT state of portability and WHAT Cutler exactly wrote are still unaddressed.
The Alpha line has been discontinued, as have the others that Cutler wanted to port the OS to. It’s a moot point. That MS _is_ porting it to AMD’s Opteron means nothing of course. Or that the big evil MS is going to use IBM’s PPC chips in the next Xbox. There are several good general books on the history of NT, but you’re not really interested. You like to argue, that’s all. Pointlessness cubed.
I’m wrong about it all, oh great one. And Cutler sucks too. We bow to your great OS powers. OohmataboomoohmaOPI. OohmataboomoohmaOPI. Bless us, we implore. The ignorant unwashed and anonymous cowards with jobs are in need of your wisdom. OohmataboomoohmaOPI.
Please grace us with your code! OohmataboomoohmaOPI.
If Dave Cutler talked as much as 1% of what Steve Ballmer blabs out, I would have no problem giving you links to his interviews, but alas, the guy keeps a low profile, so this was all I could find:
http://www.microsoft.com/windows2000/server/evaluation/news/fromms/…
As for the current state of portability, I can only guess, but taking into consideration that 2000 ran on Alpha and XP is almost identical (a minor version update), and also that there are betas of XP64 for Itanium and AMD64, I guess it’s still portable (the kernel, anyway).
My theory is that if you can port something to Itanium, then you can pretty much port it to anything.
Look, I use BSD mostly and am a *NIX freak. But NT was a major accomplishment in CS history, since it provided FOR THE FIRST TIME the backward compatibility that no other system had ever achieved. (Even the trade press was very pessimistic that this could be done when NT was first beta tested – the history was that bad!) IMO Gates is STILL demanding this as well, which is why Longhorn is going to take so long to do.
That is exactly the reason why Windows is so broken. Design flaws from Windows 3.1 are still making their way into XP and beyond just for compatibility’s sake. There comes a point when it just becomes too unwieldy, and a total overhaul is necessary.
I also have to take issue with the poster who claims NT is more modular than Linux. First of all, NT hasn’t been a microkernel since before NT4. It was way too slow, and the design was abandoned. NT actually has much more in kernel space than Linux does. Linux can be almost completely modularized at this point. NT still crashes due to programs “outside” of the kernel. How modular is that?