The breakdown for the various editions of Windows Server 2008 was revealed this morning by Microsoft, and the big news there is the almost total lack of change: Retail server software editions for the next Windows Server will fall right in line with the current Windows Server 2003 R2 editions, including the number of client access licenses provided in the basic package.
I really hope it's based on MinWin with Win32/Unix/Linux compatibility.
It will be based on “Minwin” (they’ll probably have an ‘edgier’ marketing name for the technology soon) and Windows has been able to emulate *nix for those few apps needing it since Windows 98.
… albeit with the world’s slowest fork(2) implementation.
MinWin *is* marketing. Microsoft’s marketing and HR teams realized that the perception among college-age developers that Windows is huge, bloated, and unwieldy was driving prospective developers away from the platform. MinWin is an attempt to demonstrate publicly that, although Windows is large and complex, the codebase is more structured and manageable than one might think.
They try to get away with calling MinWin a microkernel, but in reality it’s just a logical subset of their existing monolithic NT-based kernel. They managed to split out the source code, make it separately buildable, and jazz it up for demonstration purposes.
I’m sure it was a somewhat useful engineering exercise internally, but it was primarily targeted at people like us here at OSNews. They need to sell us the idea that Windows development is sustainable, that they have a plan to mitigate code complexity and to combat “Brooksian” communications overhead.
It’s a belated response to the success of bazaar-style development models such as Linux and KDE. See, our software is made up of parts, too. Vista was a fluke. We can scale. We can hack on this codebase for decades to come. No dead ends here. It’s not a mess, we know what we’re doing, and we’ve got it under control.
That’s the message that underlies MinWin.
MinWin *is* marketing. Microsoft’s marketing and HR teams realized that the perception among college-age developers that Windows is huge, bloated, and unwieldy was driving prospective developers away from the platform.
That’s pretty much it. Honestly. How many articles and stories have we had over the last eight years or so, especially in the run-up to Windows 2000 and beyond, that helpfully told us how Windows was being redesigned, made more modular, more object-oriented and less of an ‘all-in-one’ pig?
All crap.
Geez. Usually I respect your opinion, but what you’re saying is absolutely idiotic. MinWin was not public until Eric Traut mentioned it and all the ‘marketing’ you’re seeing is various people in the tech press picking up on this because it looks like a “good story.”
I’ve been working on Windows for a few months now. There are hairy pieces here and there, but it’s really not as unmaintainable as you imply.
MinWin was not public until Eric Traut mentioned it and all the ‘marketing’ you’re seeing is various people in the tech press picking up on this because it looks like a “good story.”
Just like all the other stories.
Minwin is just a stripped-down kernel. I assume this means most of the kernel device drivers and subsystems have been removed.
I think Minwin is a proof of concept more than anything, because no practical OS can be made from such a stripped-down design. Well, not if you want any decent hardware support, that is. 😉
I am not an MS fan. But I was actually pretty excited about the Minwin concept… at first. I really liked the sentiment. But then I took a look at what MS’s idea of “min” actually is:
http://www4.osnews.com/permalink?280016
61MB, and it can… just… barely… run an in-kernel web server. The video in the story that post is attached to emphasizes just how limited MinWin’s capabilities actually are. So what is it doing with all that virtual memory?
As another poster pointed out, NT nominally had a POSIX personality, for all the use it was without comprehensive support from MS. POSIX compatibility is and always was a checkbox for the marketing department.
The article you’re linking to states 40MB in runtime memory, 25 MB on disc (divided among 100 files).
Besides, Minwin is not meant to run on its lonesome — although it would be a neat challenge to wrap the smallest possible wrapper around it while keeping it functional and create a “D*** Small Windows.”
Yes. I know it does. But the video clearly shows 61MB of virtual memory in use. View it and watch carefully. The virtual machine is allocated 39MB of *RAM*. But with plenty of swap to fill in. And MinWin is eating generous portions of that.
Of *course* MinWin was not meant to run on its own. But as a very limited “minimal” core, it is consuming resources that should allow for a full-featured server. What’s up with that?
I note with amusement your other post criticizing the Unix model’s use of memory. I can run a monolithic Linux kernel and lighttpd on a little over 8MB total.
Though it is in multiuser environments, with many users running multiple copies of the same applications, that the Unix model really shines.
After our discussion last time, I spent some time playing around with the MinWin ISO image. You have to understand that MinWin is not a product and it is not really an attempt to squeeze Windows or to make an embedded system. For instance, many of the files in the MinWin image are actually language files. MinWin is more of a division of the existing Windows into a minimal bootable component for internal organizational purposes. I would forget about comparing MinWin with your minimalistic Linux router stack.
“””
I would forget about comparing MinWin with your minimalistic Linux router stack.
“””
Well, in the context of a thread entitled “Hope its based on Minwin”, give me some rope^Wslack. 😉
I can do the same on my desktop with 24MB. But not less. It won’t boot on 23MB. With 24MB it boots and runs the webserver and has plenty of free/cache/buffer memory. As an efficiency fiend, I find that disappointing. It’s a 2.6 thing, I guess.
“A new Windows OS is revealed to be sold like a Windows OS.”
Kind of silly news.
Now, if it was “A new non-Windows OS is revealed to be sold like a Windows OS,” that might be interesting…
Windows Server Enterprise (25 CAL): $3999
RHEL Advanced Platform (Unlimited Client): $1499 (for support)
Mac OS X Server (Unlimited Client): $999
Windows is expensive. A 25-CAL Windows licence costs more than twice RHEL UNLIMITED CLIENT!!! And 4 times OS X UNLIMITED CLIENT!!! Businesses would save a lot of money if they didn’t use Windows. Forget about the FUD and lies. Windows has an ENORMOUS TCO. Linux isn’t always crashing, and it doesn’t need a PhD to run it. In fact, it’s quite the opposite. Why are companies switching to Windows Server when a much cheaper alternative that would be easier to transition to (UNIX to Linux) is available? Reply if you have any idea why.
People pay for Windows Server because it is (gasp) very good and it runs the software they want, just fyi
Linux costs me $1, with support.
The OS is free but it costs me about one dollar worth of my time configuring it. The support is also free because I figure it’s not a great investment charging myself for my services.
Those aren’t the only costs. What about software development costs? What about interoperability? How can you say it is across-the-board ‘expensive’? Yes, the fees you point out are clearly higher, but those aren’t the only costs in the entire scheme of things for a business.
If you look at *all* the IT expenses a business makes, those fees might not be substantial, especially for large businesses that have lots of Windows PCs and often hire or subcontract Windows developers. The same developers who built the client application may well, as a matter of convenience or skillset, also build the server-side app as a Windows-based one.
Which ignores vendor lock-in, whose costs can range from the thousands to the hundreds of thousands; it might be cheap to enter, but once you’re there you’ll be screwed for all you’re worth. Why? Because what is the alternative?
You’ve already paid a huge amount moving to the platform, and now you’re stuck using proprietary technology that is platform-dependent, all because some penny pincher on the 5th floor was counting the beans of today rather than estimating the costs of the future.
How many of these so-called ‘Windows integrators’ are going to end up creating a solution which glues all Microsoft technologies together into a proprietary ball of pain? Every last one of them. Sure, I can understand running Windows on the desktop, that makes sense, but for the server, something that will never be directly interacted with by an end user, whether it runs Windows or Linux or Mac OS X shouldn’t even be an issue.
The server offers Active Directory, which is superior to any alternative on Linux or OSX. LDAP doesn’t even come close to the rich features of Active Directory.
Windows XP (and Vista) clients are written to take advantage of the integration.
Actually dude… hate to burst your happy bubble, but Novell’s eDirectory (formerly NDS) blows the pants off Active Directory. Not only that, but feature for feature it blew the pants off of AD before AD was even released. It is one of the very few things Novell did exceedingly well. http://www.novell.com/products/edirectory/
The only killer feature of AD is group policy. Now *that* is sweet for desktop admins. Also, Microsoft bundles AD with their server offering. That makes it hard to not have a large marketshare.
Hello? ZENworks anyone?
ZENworks makes group policies look like a joke. It allows you to manage Windows, Linux, Netware, Blackberry, Windows CE and Palm workstations/servers/devices from a single console, deploy software, apply policies, etc.
Given the features it offers, the price is actually cheap. Novell Open Enterprise Server 2.0 includes ZENworks (those “Desktop Management”, “Linux Management”, etc in the product flyer are ZENworks).
By the way, it also supports Active Directory group policies.
Samba 4.
http://searchenterpriselinux.techtarget.com/originalContent/0,28914…
Just add Samba 4 to this:
http://www.howtoforge.com/fedora-8-server-lamp-email-dns-ftp-ispcon…
… or maybe this …
http://www.howtoforge.com/perfect_setup_centos5.0
… and away you go … for $0 … even if you are stuck with^h^h^h^h have Windows clients. Even if you have more than 25 Windows clients … it still won’t cost you!
Enjoy!
Glad to be of help!
http://www.linuxformat.co.uk/modules.php?op=modload&name=News&file=…
http://wiki.samba.org/index.php/Samba4
http://lwn.net/Articles/248246/
(OK, so it still isn’t ready for a large group of servers, but it is on its way).
The server offers Active Directory, which is superior to any alternative on Linux or OSX. LDAP doesn’t even come close to the rich features of Active Directory.
OS X’s directory server is pretty good, and in the open source world we now have Red Hat and Fedora Directory Server which has been around far longer than AD in the form of iPlanet and SunONE. It’s more mature, has far bigger installations and it can replicate parts of the directory read-only where AD simply can’t.
Windows XP (and Vista) clients are written to take advantage of the integration.
True, and Linux desktops lack that overall integration, but if it’s centralised management that you want then most people tend to just centralise applications and settings by mounting over NFS or something. Far easier, and far less error prone than trying to use some brain-damaged system to distribute applications and settings to all desktops.
What do you do when your NFS share goes down or you have a network partition?
What do you do when your NFS share goes down or you have a network partition?
It doesn’t ;-).
If you have any desktop that relies on the network in any way, for network drives or directory information through LDAP, then loss of the network or the servers will be a problem. Nothing else is going to help you.
And you do realise that Microsoft has admitted that the vast majority of its customers do not use the enhanced features of Active Directory – hence the big education push to get people to integrate their server and desktop together.
Also, Novell offers an alternative called Novell eDirectory along with Novell ZENworks; Sun offer their directory server along with their Tarantella software, which does the same thing as ZENworks. TWO products that offer the SAME functionality without vendor lock-in.
Try again with the examples because I’ve yet to see someone who can come up with a legitimate example of how they *MUST* use Windows over an alternative operating system on the server.
How many of these so-called ‘Windows integrators’ are going to end up creating a solution which glues all Microsoft technologies together into a proprietary ball of pain?
In the absence of anything easier and more straightforward to use and program for, yes, that’s what many people are going to do. Doing network IPC is still way, way, way easier with Windows and Microsoft programming technology than it is with anything else.
How so? I’ve been in situations where companies have purchased software they don’t *truly* need; it’s the old story that when these managers are given money they spend it, but if it were their own money, I can assure you, they certainly wouldn’t be going out and saying ‘integrate everything together, because according to xyz article from xyz research group, it improves productivity by 0.0001%’.
” Doing network IPC is stil way, way, way easier with Windows and Microsoft programming technology than it is with anything else.”
Strange, you’ve never heard of Java, since IPC is extremely easy with it (and if you use something like JAX-WS or GWT there are even easier ways of achieving the same thing). Strange you haven’t heard of it … especially since, according to TIOBE, it is the programming language with the highest popularity (http://www.tiobe.com/tpci.htm).
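To put some flesh on the “extremely easy” claim: on a Java 6-era JDK (which bundles JAX-WS), publishing a SOAP endpoint really is about this much code. This is only a minimal sketch; the class name and URL below are placeholders, not anything from a real project:

```java
package demo;

import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Any plain class annotated like this becomes a SOAP service; the name is made up.
@WebService
public class HelloService {

    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publish on the JDK's built-in lightweight HTTP server;
        // the generated WSDL shows up at ...?wsdl
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
        System.out.println("Listening on http://localhost:8080/hello?wsdl");
    }
}
```

Run it with a plain `java demo.HelloService` and point any SOAP client (Java, .Net, whatever) at the WSDL.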
Strange, you’ve never heard of Java, since IPC is extremely easy with it (and if you use something like JAX-WS or GWT there are even easier ways of achieving the same thing).
Nope, I have heard of it, and DCOM is still the easiest way of achieving some decent, simple network RPC, especially if you want rich objects like recordsets passed. You also get half-decent tools like Component Services. Google Web Toolkit doesn’t compare, because it’s built for something different. JAX, again, is a web service component, and is a different usage from using DCOM, where you want to pass rich, native objects over the network (although marshalling limits what you can feasibly pass). JAX-WS is not particularly simple, and it is grossly over-engineered in a lot of ways, as a lot of web service APIs are. Way, way, way too much XML config going on there. Too many interop issues as well, which is one big advantage of DCOM.
However, that probably isn’t the case now, because Microsoft are determined to ruin all that with Indigo or WCF. Again, there is simply too much pre-config stuff going on with Indigo for my liking, like JAX.
Say what?
http://en.wikipedia.org/wiki/Distributed_Component_Object_Model
“Distributed Component Object Model (DCOM) is a Microsoft proprietary technology”
Case closed, brother. DCOM is exactly the opposite of “interop”. No soup for YOU!
It is deprecated anyway. Hardly “advantageous”, is it?
Case closed, brother. DCOM is exactly the opposite of “interop”. No soup for YOU!
You don’t understand what interop means in this case, that’s why. It means that the client and the server actually understand what is being passed, which is where web services take a huge step backwards over a precipice.
It also isn’t proprietary, as other implementations of DCOM have actually been made – but of course, there are Microsoft only bits in their implementation ;-).
It is deprecated anyway. Hardly “advantageous”, is it?
It isn’t deprecated at all, and it hasn’t been deprecated in favour of Microsoft .Net, as that article ridiculously tries to claim. .Net still uses DCOM, although you now have other options as well, which include .Net remoting and WCF. It is still widely supported in all versions of Windows.
But this is the very problem, it is not widely and fully supported outside of Windows.
Hence no interop.
If you think otherwise, then you do not understand “interop”.
“Interop” does not mean “works with different Windows machines”.
If as a developer you thought DCOM had good “interop”, then you are cutting yourself out of a large and growing market.
There are, apparently, 70,000 Microsoft employees … but there are also the equivalent of an estimated 1.5 million FOSS full-time developers.
I’ll leave it as an exercise for the reader to figure out which group will outpace the other over time. I’d also leave it up to readers who are also developers to ponder which “ecosystem” they should be thinking about making their future applications work with, to reach the widest market through the best “interop”.
Or maybe, just maybe, it is possible that developers really should think about a good cross-platform development system.
But this is the very problem, it is not widely and fully supported outside of Windows. Hence no interop.
You didn’t understand the context, end of story. What people want is easy to use, easy and fast to deploy, accurate network IPC. Nothing outside of DCOM has been, and DCOM is by no means perfect at all. Draw your own conclusions.
I’d love to be able to say that is not the case, but it is.
“Interop” does not mean “works with different Windows machines”.
Again, you simply don’t understand what interop means and the context here. Interop means the client knowing what the server is sending, which is a pretty big window of failure in web services. Whining over something cross-platform is going to get you nowhere. We have that already, and it’s supported even by .Net. It’s called web services, and it’s crap.
There are, apparently, 70,000 Microsoft employees … but there are also the equivalent of an estimated 1.5 million FOSS full-time developers.
You really don’t understand what is being talked about here.
Or maybe, just maybe, it is possible that developers really should think about a good cross-platform development system.
They certainly will do if the tools are there ;-).
It’s pretty obvious in the ‘server space’ (that is, related to this thread) you can’t have worked with big enterprise (banks etc) or national government. You have to do interop with machines other than Windows, since you don’t get to choose their hardware, which means apart from the special case of Windows-to-Windows, DCOM is useless.
Of course DCOM is easy going Windows-to-Windows, since you’re limited to an architecture that is exactly the same. It is even easier if you were to serialize Java objects and squirt them across the wire to another Java VM, in the limited case of interop between Java servers. But this isn’t a general interop solution (and you want ‘general’ since, as I said before, you can’t control what architecture your customers have).
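For what it’s worth, the Java-to-Java case really is that small. A rough sketch of squirting a serialized object between two VMs over a plain socket; the payload class and port number are made up, and both sides need the same class on their classpath:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

// A made-up payload class; both VMs need this exact class available.
class Order implements Serializable {
    String item;
    int quantity;
    Order(String item, int quantity) { this.item = item; this.quantity = quantity; }
}

public class SquirtDemo {
    public static void main(String[] args) throws Exception {
        if (args.length > 0 && args[0].equals("server")) {
            // Receiving VM: accept one connection and deserialize the object.
            ServerSocket server = new ServerSocket(9090);
            Socket s = server.accept();
            ObjectInputStream in = new ObjectInputStream(s.getInputStream());
            Order o = (Order) in.readObject();
            System.out.println("Received " + o.quantity + " x " + o.item);
            in.close();
            s.close();
            server.close();
        } else {
            // Sending VM: serialize the object straight onto the wire.
            Socket s = new Socket("localhost", 9090);
            ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
            out.writeObject(new Order("widgets", 12));
            out.close();
            s.close();
        }
    }
}
```

Which is exactly why it is not general interop: nothing but another JVM with that class on hand can make sense of the bytes.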
An earlier poster mentioned that DCOM is deprecated (that is, it works but is considered a legacy solution that could be replaced by Microsoft one day). It’s legacy especially since its wire protocol is based on a (little endian) i386 with the vtable layout of an early MS C++ compiler. Hmmm, building a system on that is a great decision for future proofing, not! See how all the Visual Basic 6 aficionados got shafted to see how well relying on legacy development technologies is gonna work out. Therefore I couldn’t recommend DCOM for new stuff. Just because DCOM works now doesn’t mean it’ll get priority support from Microsoft in the future (just look at how the technologies DCOM replaces, DDE, OLE, COM and COM+, still work but are not priority maintenance items). Dot NET will become the center of gravity for Windows (interop) development whether you like it or not (that’s the Microsoft Way after all, so you’d better be planning for it).
Recently did a project where a Java webservice on Unix gear (a Sun box) was accessible through JAX-WS by a Dot Net client on PC gear (as well as Java clients). Now that is true interop! Not quite so easy with DCOM. I completely agree with you that there is far too much XML floating about to configure these things, but unfortunately that’s how everyone [else] wants to do their config these days.
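For the curious, the Java-client side of consuming such a service is roughly the sketch below. The interface, QName and URL are placeholders; in practice wsimport generates all of this from the service’s real WSDL:

```java
package demo;

import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Hypothetical service endpoint interface; normally generated by wsimport.
@WebService(targetNamespace = "http://demo/")
interface Hello {
    String greet(String name);
}

public class HelloClient {
    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://localhost:8080/hello?wsdl");
        // Service and namespace names must match what the WSDL declares.
        QName serviceName = new QName("http://demo/", "HelloServiceService");
        Hello port = Service.create(wsdl, serviceName).getPort(Hello.class);
        System.out.println(port.greet("interop"));
    }
}
```

The .Net client does the same dance with svcutil instead of wsimport, which is the whole point of going through WSDL rather than a binary wire protocol.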
Thank you.
That is precisely the point, but you made the case so much more eloquently than I.
On this very site is another topic about supercomputers:
http://www.osnews.com/story.php/18913/30th-Edition-of-TOP500-List-o…
It turns out that Windows is not a player in “serious grunt” computer architectures. And, exactly as you said, apart from the special case of Windows-to-Windows, DCOM is useless.
If you constrain yourself to using a deprecated protocol useful only for the special case of Windows-to-Windows, you are cutting yourself off from the future and from a large and growing part of the overall computing environment. Your applications, as a result, will very likely be short-lived, with an ever-decreasing “target audience”.
On this very site is another topic about supercomputers
This has nothing to do with supercomputers, so no, you don’t know what you’re talking about. HPC is entirely different.
It turns out that Windows is not a player in “serious grunt” computer architectures.
You’re going to need an awful lot more grunt to run web services – needlessly ;-).
If you constrain yourself to using a deprecated protocol useful only for the special case of Windows-to-Windows
Like I said, it isn’t deprecated, is still a part of .Net and there are other solutions now from .Net remoting to Indigo.
Let’s just come to a right understanding that web services are an awful way to do networked IPC that doesn’t work as much as some people pretend they do. We need a better way of doing things, and outside of the Windows world no one has come up with anything of very great significance.
Let’s just not mention RMI either ;-).
It’s pretty obvious in the ‘server space’ (that is, related to this thread) you can’t have worked with big enterprise (banks etc) or national government. You have to do interop with machines other than Windows…
I’ve worked in many banks and large institutions sweetheart, and seen web services crash, burn and fall over many times. It is not a solution for networked IPC, end of story. Cross-platform support comes a distant second to things actually working.
Hmmm, building a system on that is a great decision for future proofing, not! See how all the Visual Basic 6 aficionados got shafted to see how well relying on legacy development technologies is gonna work out. Therefore I couldn’t recommend DCOM for new stuff.
The point has sailed right over your head. DCOM is simply far easier to do and deal with, which is why it is in widespread usage. Sadly, web services are not a replacement.
Dot NET will become the center of gravity for Windows (interop) development whether you like it or not (that’s the Microsoft Way after all, so you’d better be planning for it).
Like I said, DCOM is still a part of .Net, and there is .Net remoting and the stuff now in Indigo. .Net is meaningless here.
Recently did a project where a Java webservice on Unix gear (Sun box) was accessible through JAX-WS by a Dot Net client on PC gear (as well as Java clients). Now that is true interop!
JAX-WS is over-engineered shite, as are all web services APIs, which you’d have picked up on had you read what I wrote. I’ve seen many web services projects canned for DCOM and other approaches because they were just too complex, didn’t scale and were simply way too heavy.
Now that is true interop! Not quite so easy with DCOM.
Well, I hope you matched up your data types and I hope it can handle data types that don’t perfectly fit your model ;-).
Network IPC is a contradiction. You mean network RPC 😛
Can you give an example of what makes it easier in Windows? The winsock API leaves MUCH to be desired last I looked at it.
Yeah, so that’s the problem with IT and Windows. The problem is that the majority of Windows sysadmins in smaller companies are merely clueless Windows users (if we talk worldwide anyway), and the majority of the developers doing half-assed implementations of databases or whatever based on FoxPro couldn’t care less about standards or the future. Everyone just wants a quick buck, and if the software crashes, works one day a week, and looks like a flaccid arse, it is accepted as “normal”. Coincidentally the very same people have a huge amount of inertia, because they are passionless people and limited in knowledge even in their own fields, and they certainly abhor any type of learning. Tell the guy about the alternative or the standards and you get the same kind of talk in return, where they defend their own piece of smelly turd. So I’d say this is a general problem. That’s why the apps for, say, MacOS are generally better – because most of its users have migrated from Windows for some reason and they needed to learn new things and they certainly don’t have an aversion to doing that.
Open source developers will collaborate with you for free.
Yes, what about it? Windows server doesn’t have any.
$3999 plus CAL fees for numbers of clients above 25. How can you say it’s not across-the-board expensive?
Total cost of operation = (install) + (runtime costs * things you can’t do).
The Unix model is flawed; even very basic operations can use up to twice as much memory and CPU cycles as a similar NT operation. But can this ever change? No. Unix is set in its ways, being based on “standards” that calcified back in the ’70s. Windows, however, has had several notable overhauls, because applications are written against an external interface (taken to an extreme in .NET) instead of even simple programs having to interact with the operating system directly.
Not to mention the power of a brand name. When you see a program that runs in Windows, how do you tell if your system can run it? While certain exceptions can be made for high-grade resource-grubby software, for the most part, you look at the OS it’s designed to run on — Windows release date N or greater, and it runs (at least, for ten years or so; you might have to fiddle with compatibility settings by right-clicking after that point). Unix, on the other hand, is today more of an idea than an actual OS; even with those calcified standards to adhere to, you have no idea how it’ll actually be implemented, and whether or not your program will get the results you are expecting.
This goes for programmers, too. Let’s have a simple example: Gnome or KDE? No, I’m not trying to restart that flamewar, but I will point this out: Gnome and KDE are virtually identical in what they do — and completely different in implementation. If you choose only one or the other, you are effectively leaving the other group cold (or worse, leaving it up to Gnome/KDE’s imperfect ability to run the other’s programs, adding in bugs you have to test for); if you choose to build for both, congratulations, you’ve just doubled your codebase and the number of bugs that could crop up. And that’s just getting something to display; I won’t go into things like the several competing (and virtually identical in purpose but totally incompatible in execution) file systems. Windows, on the other hand — my programs coded back in Windows 95 on Borland C++ still work today on XP and Vista, despite the fact that the kernel has changed twice, from a GUI over MS-DOS (95) to Windows NT (XP) to Minwin (Vista). That ability to provide a consistent interface to programmers, and to continue to run old code years after the Unix model tells you to “rewrite and recompile,” is something that the closed-source model is uniquely capable of, and something the Unix model will probably never do.
I’m not saying Windows is infallible; there are some major problems in its OS model. The fact that the kernel has changed three times in ten years proves that, and within ten years, I’m sure another one or two changes will crop up. But Windows has the ability to internally change to a superior model — to undo the problems caused by a lack of foresight — without forcing coders or users to change as well. Unless some miraculous series of events grants control of Unix to a central entity, which can create code so tight and so standardized that all parties in the Unix world agree to it, Unix will never again have this ability.
And vice versa.
Let’s just say NT and Linux (as a modern variant of the Unix model) are based on different design decisions that lead to different problems.
Unless you provide real world examples (microbenchmarks can be used to prove anything) your statements are fairy tales at best.
Furthermore, maintaining backwards compatibility is, first and foremost, a business decision. This issue is mostly orthogonal to the underlying type of OS.
And who will fix your insecure app that has worked unchanged since Win98? At the very least apps should be recompiled with new security features. Times are changing and no one really wants the legacy baggage anymore; why have an app from Win98 working in Vista when you can have an app newly compiled and updated?
You’re talking about Unix being so old, so why are companies still using Unix in their operations? It cuts both ways, you know. At least companies can take charge of their own open source apps without waiting for the vendor to update them.
We have all seen what those “updated apps” do.
They munch infinite memory and most often just overwrite perfectly good Win98-era apps. Open source stuff is slightly better: while MSN Messenger has gone through versions 1–7, each overwriting the last, Miranda does quite well without slowing down; the same can surely be said about Norton pro-bloat, while ClamWin is a joy to use.
1) This is a debate about servers; the issue of desktops doesn’t enter into the equation. When they do enter the equation, one has to realise that Red Hat/Fedora, OpenSuSE/SLED, Indiana/Solaris and Ubuntu have all standardised on GNOME for their desktop.
So let’s assume we go out into left field, and when you say desktop you mean the server being used to serve desktops, as with something like Sun Secure Global Desktop (the old Tarantella software) – even then, everything is still standardised on GNOME. If you as a programmer ignore the fact that the majority use GNOME, and get hung up on the KDE vs. GNOME debate, the issue is with you, not the UNIX community.
2) How does UNIX use ‘twice as much memory’ as Windows? Can you substantiate such a claim? Why do you make such stupid claims about UNIX when what you’re actually referring to is Linux? Do you actually know the difference between *BSD, UNIX and Linux? Do you realise that Solaris is UNIX and you can run ancient software on Solaris without any problems?
3) Calcified? last time I looked Mac OS X is UNIX 03 compliant – I’d hardly call that a calcified operating system.
4) You mentioned .NET – you realise that there is .NET and Java on UNIX as well?
I’m wondering if your long winded post was actually serious or whether it was just one big giant piss take – like Mike Cox off ZDNet’s online forums.
When they do enter the equation one has to realise that Red Hat/Fedora, OpenSuSE/SLED, Indiana/Solaris and Ubuntu have all standardised on GNOME for their desktop.
Alas, UNIX’s history is littered with such bold ‘standardisation’ claims, but those claims were always out of touch with what people actually wanted from a desktop and what people actually ended up using.
…even then, everything is still standardised on GNOME. If you as a programmer ignore the fact that the majority use GNOME, and get hung up on the KDE vs. GNOME debate, the issue is with you, not the UNIX community.
I’m afraid that is simply not the case – although some people wish that it were. This doesn’t go away because a few people are telling everyone that a couple of enterprise vendors have standardised on x and y. The history of UNIX is littered with such bold statements – “a, b and c are standards” – and it’s what killed its usage as a desktop and hampered it as a server against Windows. Windows was never a ‘standard’, but it got widely used because people just picked it up and used it.
Just because a couple of vendors have standardised their enterprise distributions on Gnome, which very, very, very few people use when compared with their free variants, that simply doesn’t translate into a critical mass of real world usage.
On servers, the vast majority don’t use a GUI so none of that matters, and for those that do, it is merely a shell to run some, pretty inferior if I may say so, graphical administration tools. That is all that it is.
The “O” in TCO stands for “Ownership” not “Operation” and you just pulled that formula out of your arse.
Wow, this is almost as stupid as the OP’s post.
Oh yeah, because Unix today is *exactly* the same as in 1975. Yeah, nothing has changed since. We’re still running on PDP-11’s here.
Meanwhile, Windows has had many overhauls… 3.x -> Win95, Win95 -> NT. Wow, that’s two.
I’d write more but the incorrectness and stupidity of your post was giving me a headache.
So, about this horrible thing you call POSIX that is so bad, and I quote: “No. Unix is set in its ways, being based on ‘standards’ that calcified back in the ’70s.”
From http://www.opengroup.org/austin/papers/posix_faq.html
“””
Q3. What is the latest version of POSIX.1?
The 2004 edition of the 1003.1 standard was published on April 30th 2004, and updates the 2001 edition of the standard to include Technical Corrigendum 1 (TC1) and Technical Corrigendum 2 (TC2).
“””
Talking smack is fine. Next time do it about something you actually understand. I happen to be a member of the Austin Common Standards Revision Group that updates and modernizes these so called ‘calcified’ standards.
*thwaps Almafeta with cluebat*
Insight + 1
And if you have the in-house expertise already, and don’t need third party support or software certification, CentOS gives you the top of the line RHEL version for free.
Not necessarily. It’s a bit more involved than just using a different OS with a cheaper license.
It does? How? Where’s your evidence? (Not saying it doesnt but simply saying it does doesn’t make it so)
Because the application they require only run on Windows? Because they have in-house Windows skills already? Because the CTO got a nice kickback?
Seriously, not every situation is suitable for Linux (or Windows) and ignoring that fact is a nice recipe for disaster.
Windows has an ENORMOUS TCO.
It does? How? Where’s your evidence? (Not saying it doesnt but simply saying it does doesn’t make it so)
For openers, it’s more expensive to buy. Secondly, the cost of Client Access Licenses is absolutely unreal. You need CALs for Active Directory where none are needed for Samba, you need CALs for Exchange where none are needed in the open source world and CALs for SQL Server depending on usage.
If you’re implementing Terminal Services then you’d assume that centralising your applications would save you money and administration. Not so. You need the requisite number of TS CALs, as well as a licensing server (yes, a server purely for licensing) which can hand out licenses on a per-connection basis. I can say from experience that this is a pain to administrate. In the Unix/Linux world we just forward an application through X, or we use something like FreeNX or one of the other remote desktop options or we run applications over NFS or something.
The hoops you have to jump through are incredible sometimes, and the only reason why anyone uses Windows is because X application is written for Windows. Unfortunately, that doesn’t make Windows any cheaper or cost effective.
Ubuntu server : $0
http://www.ubuntu.com/products/WhatIsUbuntu/serveredition
Windows Server Enterprise (25 CAL): $3999
RHEL Advanced Platform (Unlimited Client): $1499 (for support)
Mac OS X Server (Unlimited Client): $999
Ubuntu server : $0
or Mandriva server 0$
or debian server 0$
or centOS server 0$
or freeBSD 0$
or OpenBSD 0$
or …
or …
or …
Sorry if this seems off topic but it made me giggle a little that someone would point out only Ubuntu server for 0$ as if it was the only 0$ server available. That’s pretty much the same for any of the Linux true distributions (ie. DVD includes both workstation and server repositories) or the BSDs, possibly even OpenSolaris depending on how it evolves.
First of all, $3999 might sound like a lot, but it really is very cheap compared to the rest of the expenses. Second of all, can some PhD tell me what the cost of owning a RedHat Server System or a Windows Server System is after seven years? I’m not being hard on RedHat or anything, they are very cheap and affordable for an enterprise too. I am just saying that OS cost is not a major expense when buying any of the systems. I would not know anything about Mac OS X Server because even Steve Jobs does not take it seriously.
Steve Jobs’ mindset has always been far away from nuts and bolts architecture of server infrastructure. He’s always been about the “look and feel” and form over function(although he’s gotten better with that) than anything else. His hardware choices are what drove NeXT into the ground. The engineers at Pixar were able to succeed in spite of Jobs’ attempts to meddle. He’s finally been able to make some hardware decisions not based on poor criteria(like Apple III’s fans anyone?). In short, just because Jobs isn’t introducing a directory service at WWDC with glitz and glamour, doesn’t mean Apple’s back end hardware and software won’t make the grade.
Also, on the comment about Linux not having any proper directory server stacks, that’s like saying no company other than Toyota can manufacture trucks. By the way, LDAP isn’t a complete stack, it’s a protocol. By the way, comparing LDAP to Active Directory is like comparing apples to pie.
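To make the protocol-versus-product point concrete: LDAP is just the wire protocol that all of these directory products speak, and a plain JNDI search from Java looks the same whether the other end is AD, eDirectory, Fedora Directory Server or OpenLDAP. A rough sketch, with the server, base DN and filter entirely made up:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapQuery {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Hypothetical server; an anonymous bind is assumed here for brevity.
        env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Look up one user entry; the base DN and filter are placeholders.
        NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(uid=jsmith)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}
```

What AD adds on top (Kerberos, group policy, the management tooling) is the “pie”, and that is the fair thing to argue about; the protocol itself isn’t.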
“can some PhD tell me what the cost of owning a RedHat Server System or a Windows Server System is after seven years”
Justifying any pricing based on a single purchase lasting *7 years* I think highlights a major difference between the platforms: on Windows you’re running a server plus patches that are 7 years old, while on Linux you’re running a current OS. That’s ignoring issues like malware, exploits in the wild, etc. BTW, $3999 doesn’t sound like a lot, though of course there are many places where the cost of the OS outweighs the cost of people several times over.
“Justifying any pricing based on a single purchase lasting *7 years* I think highlights a major difference between the platforms: on Windows you’re running a server plus patches that are 7 years old, while on Linux you’re running a current OS.”
Huh??? WTF??? The less you touch a server, the better it is. If it works, why fix it? That is the Unix way of thinking so I don’t get where you are coming from. That is the reason why RHEL is so popular, because it works and works great for 7 years. Now let’s go back, and see the TCO of running RHEL for 10 years versus running Windows for 10 years, assuming that there will be an initial installation, and later an upgrade 7 years later:
RHEL: $1500 * 10 = 15000
Windows: $3999 * 2 = 8000
Now, where is your cost saving? As I said, I have nothing against RHEL nor Linux so I don’t know why someone would mod me down for speaking the truth. In fact, we do use RHEL in the enterprise, and there are good reasons to do so. However, we have never deployed RHEL or Solaris with the primary motivation that it is “cheaper” than Windows.
“BTW, $3999 doesn’t sound like a lot, though of course there are many places where the cost of the OS outweighs the cost of people several times over.”
Hey, if you’re willing to come and work for me for $3999 over seven years, I have a position to offer you. And just because there is a lot of labor exploitation in third world countries does not mean something is overpriced. I do not follow your logic.
RHEL: $1500 * 10 = 15000
Windows: $3999 * 2 = 8000
http://www.microsoft.com/windowsserver2003/howtobuy/licensing/calov…
Sorry, OK, first up check the article above… then check out the little *unlimited* after the Red Hat one. Essentially the maths goes a little like this:
RHEL $1500 x years
Windows $4000 x (number of devices/25) + corporate disadvantage of running legacy software.
If you seriously plan on running Windows Server for 7 years, good luck with that. Enjoy the incredible maintenance costs, downtime due to malware, etc.
The thing is, TCO costs are complicated, but you don’t seem to get the basics. If your point is that in the wealthy West the initial outlay is negligible compared to *other costs*, the reverse is true for less affluent parts of the world… the majority. But that wasn’t your point; it was based on some bizarre notion that the computing world remains static, which is a very Microsoft-centric view of the world. Many companies live in the fast-evolving world of GNU.
Now do the same sums, but include a third option … RHEL for 1 year, then replace it with the exact same Centos code after that.
RHEL: $1500 * 10 = $15000
Windows: $3999 * 2 = $8000
RHEL + CENTOS: ($1500 * 1) + ($0 * 9) = $1500.
There is your cost saving.
As you said yourself, if it works great, why touch it?
I don’t get where you are coming from, thinking that Windows + lock-in is cheaper. Anybody with even half a brain can work out that that just isn’t so.
Oh … BTW, speaking of half-brained accounting, how come you forgot about the CALs?
I don’t know why I keep getting marked down, I guess someone hates the truth and can only resort to childish behavior. Anyways, I don’t care much, so go on and mark down. And let me state my view point again, there are many reasons why someone would choose to deploy Linux. Such reasons include performance, scalability, stability, configurability, vendor lock-in protection, philosophical viewpoint, to whatever. However, I would not consider “being cheaper” a valid reason to deploy RHEL.
“If you seriously plan on running windows server for 7 years good luck with that. Enjoy the incredible maintenance costs, downtime due to malware etc etc”
One would have to just look around his own datacenter and see how many instances of Windows 2000 server are still there. I still count many many instances. We will probably be retiring these servers during the next two years but they are currently alive and kicking and have been for seven years. In addition, in most of the companies I have worked for (enterprises between 200 – 1000 people), we have had little to no problems with malware/spyware.
“Now do the same sums, but include a third option … RHEL for 1 year, then replace it with the exact same Centos code after that.
RHEL: $1500 * 10 = $15000
Windows: $3999 * 2 = $8000
RHEL + CENTOS: ($1500 * 1) + ($0 * 9) = $1500.
There is your cost saving. ”
Good luck explaining to your boss why you cannot call Oracle, SAP, BEA, Sybase, NetApp, EMC, Dell, HP, etc. to bring your mission critical server back up. Sure, it might be a bug in their system/hardware, but I am sure you will be able to hack CentOS to just work. Oh yeah, you are losing a million dollars an hour during your downtime, but I bet the CEO is going to be really happy you saved them $1500.00 on licensing and whatever amount on salary when they fire you.
“I don’t get where you are coming from, thinking that Windows + lock-in is cheaper. Anybody with even half a brain can work out that that just isn’t so.”
And buying alternatives just for the sake of alternatives is not cheaper either. You need to do an analysis of any solution you deploy, part of the analysis would include the dangers of “lock-in” to one vendor and potential risk/costs. But only a person with half a brain will go to his CEO/CFO and try to say RHEL is cheaper and leave it at that. For example, how difficult is it to migrate DB2 from Windows to Linux?
“Oh … BTW, speaking of half-brained accounting, how come you forgot about the CALs? ”
I did not forget about CALs. I just did not want to do an overly complex calculation of all possible scenarios. For example, how many CALs do you need, what exactly is accessing the server, do you need per server or per client CALs, how many servers are you using, how many workstations are you using, are you using RedHat workstation on the clients or are you running Windows, is it a fully supported environment, is it a completely Microsoft free environment, if not then do you already own per client CALs, do you have to pay extra for antivirus in the enterprise, do you have to pay extra for software by choosing one solution, how about support costs, how about training, planning, upgrade costs, how about employee costs, etc?
Firstly – you had a full year of full support, how come you didn’t test it then? Why did you muck with it after that year, causing it to come down? Didn’t you already say – If it ain’t broke, don’t fix it?
If you are going to count downtime and associated costs versus support costs … then you need to count it for Windows as well. Your $3999 for Windows doesn’t include support, or downtime from viruses (which are in the vast majority WINDOWS viruses), or virus protection software, or, as I pointed out, CALs, or whatever.
If you have a million an hour riding on downtime, then you are certain to be up for a huge cost in paying CALs, and paying RedHat for support for the full seven years is way, way cheaper than trying to keep Windows server licensed and supported for the same period.
You just simply aren’t comparing apples with apples. If you did that and were honest with yourself and your employers, then any viable Linux or BSD or OpenSolaris server option would win on TCO every time, by a huge margin, over Windows server.
If your CEO were at all clued in, he would be far more likely to fire you for buying Windows: exposing the company to security risks, costing the company money and time removing viruses, rootkits, botnet zombies and the like, costing the company money by requiring staff to keep track of licenses, costing the company money for downtime every “patch Tuesday”, exposing the company to risk through leaking of its data, exposing the company to unnecessary legal risks arising from licensing audits, and costing the company money bigtime by locking it in to a single and abusive supplier.
You are stupid, aren’t you? The $4000 for Windows 2003 Enterprise is WITHOUT support. Please compare Windows 2003 Enterprise with unlimited 24×7 support against RHEL with unlimited 24×7 support.
Without support, RHEL is *free*. It costs $0.
OS and software licenses are the largest chunk of the cost if you don’t change/add to your support staff. Even if you count hardware costs, software licenses (in the non-OSS world) are still the largest chunk.
Our local school district moved from Windows on the desktop to Linux on the desktop (originally via thin-client setups using LTSP, now using diskless setups using generic Debian).
There were upfront hardware costs for servers as we had to purchase a new $3000 server for each school (originally dual-P3 1 GHz w/ 4 GB RAM and 2x 200 GB HD; now dual-Opteron 2 GHz w/ 8 GB RAM and 4x 500 GB in RAID5). However, these replaced dedicated Novell servers, so the cost isn’t new, and is actually a lot lower as we built our own servers with redundant parts instead of going with an overpriced Dell/HP/IBM system.
There were upfront hardware costs for clients, as we had to purchase new systems for each of the schools. However, we were able to use used systems from Computers for Schools at only $50 per system instead of $1500+. And we have since been able to build our own $200 systems (AMD Sempron 1.8 GHz, 512 MB RAM, onboard nVidia graphics, sound, and NIC; no CD, no floppy, no HD). Since we would have to replace the hardware anyway, this cost is not new.
We have saved over $30,000/year on Novell server licenses alone.
We have saved over $50,000/year on AV licences.
We have saved over $100,000/year on Windows XP, MS Office, and Windows Server licenses.
We have saved over $30,000/year on DeepFreeze licenses.
We have saved over $20,000/year on AutoCAD licenses.
And there’s probably some more yearly licenses in there that we no longer have to pay for. Don’t forget that yearly license fees also tend to increase each year, so our current yearly savings are much higher (the above is using the costs of the last year we paid the fees, no increases have been taken into account).
We are able to centrally configure and manage each school’s desktops as there’s only 1 server per school to manage (all the software is installed on the server, there are no HDs in the clients, but all software is run on the clients). We offer a full KDE desktop to each student and staff member, along with all the standard KDE software, a tonne of OSS educational software, a full office suite, a tonne of educational games, CAD apps, and more. All for $0 in licensing fees.
We have been able to provide more software, more hardware, more service, and expand our department from 5 to 15 … all while our departmental budget has shrunk from $1.8 million/year to $200,000/year (over the past 7 years).
We have even been able to update the software on a yearly basis for $0 in licensing fees (starting with RedHat 6.2, through to 7.2, then Debian 3.0, 3.1, and now 4.0).
Don’t try to tell me that running Windows software is cheaper than running non-Windows software. I know for a fact (at least in the education sector) that it is not. Even when you add in all the non-software, non-hardware costs (like staff, training, certifications, etc), it’s still cheaper to migrate to non-Windows environments.
In case anyone is wondering, google for “School District 73 Kamloops” for more information. We’re able to do more with fewer people and smaller budgets, than any other district in the province. Several other districts are starting to follow our lead, now that we’ve worked out the kinks (and abandoned the thin-client model).
Precisely.
Here is further backup (from a different context) to your point.
http://www.itwire.com/content/view/15298/1023/1/0/
The third page is particularly relevant to this thread:
http://www.itwire.com/content/view/15298/1023/1/2/
I really can’t trust a closed-source operating system such as MS Windows.
I run FreeBSD for all servers I own and run.
You can test the RC version of Windows 2008 Server for free, until April 2008:
http://www.microsoft.com/windowsserver2008/audsel.mspx
I think they also send you a DVD.
I know you like to criticize LoseThos for having no Active Directory, but tell me… does the Linux “ls” command support an Active Directory? Of course not. Now, lay off LoseThos.
LoseThos has no file sharing locks. It’s unregulated and leaves it to you not to shoot yourself in the foot. One cool thing is you can open the same file multiple times with the editor and the last one to save wins.
The Borland integrated development environment used to support opening the same file twice. It’s useful when you want to view two parts of a document. I was pissed when Microsoft didn’t allow that.
Some people prefer regulation. It means less freedom. Regulation is when you need to be protected from yourself.
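For contrast with the no-locks approach described above, here is roughly what opting into an advisory file lock looks like on the JVM (java.nio has had this since 1.4). The file name and the “don’t clobber” behaviour are just an illustration, not anything LoseThos or Windows actually does:

```java
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LastWriterDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical shared document that two editors are racing to save.
        RandomAccessFile file = new RandomAccessFile("shared.txt", "rw");
        FileChannel channel = file.getChannel();

        // tryLock() returns null if another process already holds the lock,
        // instead of silently letting the last writer win.
        FileLock lock = channel.tryLock();
        if (lock == null) {
            System.out.println("Someone else is saving; not clobbering their work.");
        } else {
            try {
                file.setLength(0);
                file.writeBytes("my version of the document\n");
            } finally {
                lock.release();
            }
        }
        channel.close();
        file.close();
    }
}
```

Whether that counts as useful protection or unwanted regulation is exactly the disagreement above.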
Has anyone tried SFU (Services for UNIX)? It has UNIX command-line tools, an X server and Motif libraries. One can even compile some UNIX programs. It runs on top of Windows Server. I haven’t had the opportunity to play with it.
How close is it to the UNIX experience?
It’s a pain in the ass. Try Cygwin if you need to use or build Unix software on Windows.