Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are insecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Windows for supercomputers is not based on the sucky Win9x kernel, you know.
I am sure many will argue here that this is done just fine with the existing, and field-tested, Linux solutions available today, and they will not see the need. The interesting thing I see Microsoft bringing to the table here isn’t a solution that is technically faster or more efficient than a highly specialized Linux-based one, but instead a solution that will enable people to write software for clusters much more quickly and easily. If a company can throw together a small to medium sized cluster/supercomputer and write software with a ClusterFC or a Cluster.Net (you know what I mean), where much of the tedious code has already been written and can be easily implemented in a Visual Studio type of environment, with MS Visual Studio/programming support a phone call away for any issues, that is where it would be worth something. Computers to throw in the cluster are comparatively cheap – programmers and engineers to make it do what you want are comparatively expensive in a non-academic environment.
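The “tedious code has already been written” idea is basically what task-farming frameworks already do. A rough analogy in Python (Cluster.Net above is a hypothetical name, so this just shows the shape of the idea, with a stock thread pool standing in for the cluster nodes):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(x):
    """Stand-in for an expensive per-node computation."""
    return x * x

# The framework owns the tedious part: splitting the work, dispatching
# it to workers (here threads; in a real cluster, nodes), and
# collecting the results in order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The programmer writes only `simulate`; everything about scheduling and collection comes from the library, which is exactly the value a Cluster.Net would be selling.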
I know from first-hand experience right now that clustering on Windows, specifically Windows 2003 at the moment, is an absolutely unbelievable joke. I’ve always managed to steer clear of Windows and MTS in the past on this, but unfortunately on this occasion I got roped into doing things with Windows 2003 and COM+ (Enterprise Services *snigger*). It is nowhere near up to the task of servicing one site with a couple of thousand people on it – we’re talking basic here. No, adding more servers doesn’t help, because the hard and fast rule is that you need to saturate all of the machines in your cluster before adding more hardware benefits you. If you don’t, you’re wasting money and not getting better performance at all. Windows is shockingly bad at this, however stable it appears to be for people around here.
They’re taking this to a supercomputing conference?! I bet they’re drawing lots at Microsoft as to who will represent them.
Sun seems at the forefront of the grid computing revolution, selling applications that provide fault tolerant grid services and even selling access to their grid itself…
Good joke, people. Windows can’t manage resources right on home computers.
Think about all the resources that will be wasted if it runs on a cluster. It’s better that you don’t.
We all know that Windows is the most secure operating system in the world. A cluster running on Win will be a delight for crackers. Would you risk your business? I wouldn’t.
A clustered arch would need a fast file system. NTFS anyone?
Honestly, I would be at peace with myself if I knew that it was on Reiserfs, XFS, CXFS (amazing speed, super stability).
Oh, I forgot about the fact that Windows runs only on iX86.
My cluster, a personal project, has 4 G4’s, 7 iX86 boxes, and a PlayStation 2. Try running Windows on that.
With Linux and NetBSD it works flawlessly.
No offence, but this story should be moved to Fun and Entertainment.
>>Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are insecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Please, whoever has a comment, at least comment only if you have experience in supercomputing/clustering. I did research for a National Laboratory, and I can assure you they will not put Windows in charge of managing physics data or anything else.
Windows 2003 Server hasn’t been that bad for me in stability. But then again, the servers that I run here are not that taxed when it comes to people connecting to them, nor are they clustered. But would I use Windows in clustering? Probably not, since there are other OSs out there that are more stripped down and ready to run on all types of hardware with better performance than what MS can offer right now. I’m sure they will show numbers saying they are better than everyone else, but with some very fugged up numbers, or only in the areas where they beat others.
Catalin Nicolescu, I would love to see a write-up on that mixed bag of a cluster.
If Microsoft offers this in a slimmed down version like they had said about specialized versions of 200X Server, then this makes perfect sense. My only beef with MS after Win2000 is business practices, specifically Product Activation.
If I can *really* turn off all the extra crap like IE and WMP, it would be a Good Thing[tm].
Price/Service is always going to be an issue, of course.
I’ve been running Windows, Linux, and BSD since ’92. It can’t manage resources. I’ve worked with lots of archs and OSes over the years, believe me. A cluster (for business) is a serious thing, and I would run Windows on one only if I was working for M$.
This is going to be a stripped down (or at least disabled according to the article) distro to keep users from running Windows Cluster Ed. as a regular server.
The question is, how stripped down will it be? Will they be able to extricate IE? If not, then I don’t think the distro will be stripped down enough.
>>Windows can’t manage resources right on home computers.
>I think it manages just fine.
I run Windows 2003 Server Web Edition. No modifications (as in extra software). It doesn’t get used for anything other than serving ASP websites for my college homework.
When I first started it up and was browsing around, I was shocked! It was performing faster than my 2.4GHz Linux box, and it’s only a 400MHz machine.
Now, two weeks later, it’s starting to act like a 400MHz machine: so slow that pages just about time out before they are sent.
Tell me how that is managing its resources just fine?
The NT core WAS ported to Alpha and PPC several years ago, with the last version being a beta of NT4.
And it was an utter commercial failure: firms that spend money on alternative architectures know what they want. They want unix.
MS has not even managed to get the 64-bit versions for Itanium and Opteron right: are you really so sure they’d be able to pull off versions for PPC, SPARC, RISC, and so on?
Security doesn’t matter that much in most clusters, they tend to be isolated from external networks. Physical security and other paranoias apart, of course.
Fast file system in a clustered arch? Nodes don’t even need a hard disk. The master node that keeps the results is another issue, and it can use the most expensive SCSI disks if necessary.
Cluster.Net. Hmmm. Sounds interesting. Except that cluster programmers are especially careful about getting bang for the buck. Running a VM in every node on top of a commercial OS sounds neither cheap nor attractive. Same for easy languages that take away the freedom to do hardcore optimizing. Think about it: parallel programmers are expensive no matter which languages they know. The algorithms they know and apply are what’s valued, not the number of lines per hour they can type. It’s not business software, it’s scientific software. The real money saver is in the hardware and software licenses, not the people. And there Windows can’t stand against Linux or FreeBSD.
Still, it will be interesting to see the modifications to Windows for clusters (headless windows machines) and the license pricing.
Also in the security sphere, will normal AV tools run on Windows Cluster? One rogue laptop plugged into the wrong network might infect the whole cluster, which would be a Bad Thing.
I for one really don’t see the problem with using Windows as a multi-node OS. For one thing, clustering across a business network is completely different than clustering across a scientific process grid. You need vastly fewer of the services, and beefed-up inter-process communication.
If it weren’t for the cost of licensing and the overhead of admin tasks, you could run coarse-grained scientific computing today on a whole bunch of WinXP boxes. I’ve done this with SETI@Home for years. Other people have set up render farms for 3D programs like 3D Studio Max. Doing grid computing in Java is the same on Windows as it is on Linux. Maybe not the most efficient, but it’s good enough, and Java is very easy to do distributed computing with.
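The coarse-grained model described above fits in a few lines. A toy sketch (the `node_work` function and the chunking scheme are made up for illustration), with plain function calls standing in for the remote nodes, so the OS underneath genuinely doesn’t matter:

```python
def node_work(chunk):
    """Stand-in for a work unit crunched independently on one node."""
    return sum(x * x for x in chunk)

def run_grid(data, n_nodes):
    # The master splits the work into independent chunks...
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    # ...each "node" crunches its chunk (remote calls in a real grid)...
    partials = [node_work(c) for c in chunks]
    # ...and the master merges the partial results.
    return sum(partials)

print(run_grid(list(range(100)), 4))  # 328350
```

Real grids like SETI@Home add scheduling, retries, and result verification on top of exactly this pattern, which is why losing a node mid-run is cheap: you just reissue its chunk.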
I think what’s missing under Windows is a centralized admin capability to configure many boxes at once. With a nice GUI.
What you get with Windows is a lot of great development tools and a very well known development environment. The whole point of grid computing is that the nodes are so cheap you don’t care if you lose a few during computation. Does Google care if they lose a few servers? Hell no. They have thousands of beige boxes and they lose dozens at a time without impacting their basic services.
Grid computing under Windows is very pragmatic. The fact is that anything that runs well on a Beowulf cluster will run just as well on a Windows cluster. And if you give a physics department a Windows cluster, don’t think for a second they won’t run with it.
Okay folks, let’s first see how this new branch of the Windows family evolves before putting it down. That’s a bit cheap, ain’t it?
Windows XP and especially 2003 (I concur with Eugenia there) are extremely stable. Also, the WinNT kernel has proven itself to be highly portable, since it runs on MIPS, PPC, Alpha, Itanium, and x86 (-64). That’s not bad.
Also, I’d like to draw a comparison between MS and Apple here. MacOS wasn’t exactly known for its stability up to version 10.2; especially the pre-OSX versions were hell when it came to stability. Now, OSX is rock solid and never dies on me. Why can’t MS do the same trick?
Actually– they have already done it. 9x or NT. World of difference.
Well, truth be told, XP is a lot more stable than previous versions of Windows. Where NT 4.0 presented you with a BSOD every so often, in XP it’s only the shell that crashes and restarts. Big improvement? Think not.
As for 2K3, an OS that asks me WHY I want to shut it down or restart it and after reboot kindly asks me if it can phone home about it? Please. No matter how stable or unstable it may be, for me, this kind of behaviour is enough to send it straight to the trashcan.
I would really like to know the changes MS have made to their server platform to make it suitable for clusters, seeing how server 2k and 2k3 gobble up system resources like there’s no tomorrow.
>>Security doesn’t matter that much in most clusters, they tend to be isolated from external networks. Physical security and other paranoias apart, of course.
What are you talking about? That is the purpose of supercomputing. Many labs invest so they can use it and take advantage of it. Supercomputers here are accessed by computers in Europe and vice versa. Security not important? Don’t make me laugh. That is the number one priority, together with the ability to do millions of calculations to solve problems. You’ve never seen a supercomputer, I can see.
>>they’ll have to give it away? i don’t doubt that for a second.
They will not use it. I am talking about real physics labs. Tell that to the people at CERN. This is a joke. People, one more time: supercomputing is not for Windows.
I concur with you wholeheartedly. Unfortunately, it’s not just businesses that are run by managers who know exactly d*ck about the field they work in; it’s also universities that are run by management nitwits. Therefore, unfortunately, I’m afraid that Windows will actually appear on university supercomputers. Especially if you look at the ties MS has with universities (think sponsorships etc.)
Well, if the MS cluster edition will offer to compensate for the damage of one of the nodes going down on a 256-node computation just 5 minutes before the end of the job, why not?
After all, before, you could always find the reason for the crash; now you cannot. Time to hire a legal firm, with all the money they will charge…
If Windows XP is so stable why can’t I install it?
Windows 98se –check
Windows ME- check
Debian — check
suse 9.2 — check
Windows XP is the only OS that I can’t install on my Pentium 4 Dell desktop. It tries, it loads up the software, and as soon as it tries to find the mouse PS/2 port it crashes so fscking hard that the only option is a hard reset.
That makes me 6 for 6: every time I have tried to use or install Windows XP, it has ended in either a complete reinstall or another OS to use the machine.
This isn’t a tech support forum. You would be better off ringing Dell. Do you really think that Dell sells Pentium 4s that don’t work with XP? It’s what they sell with the machines. Obviously there is something wrong / incompatible with your hardware somewhere. If it works in 999 out of 1000 Dells then the problem is not with the software.
We don’t have an official product announcement from MS yet and people are already aggressively taking sides. We all know that MS won’t all of a sudden end up with a huge grant from DARPA to develop cluster technology (i.e. HPCS). I think the big question is who is their target market. Not the target market from the marketing people’s point of view, but what the developers expect to be able to support come next fall. For example, maybe 15 or 20 dual processor Xeon boxes and what interconnects will be supported. If they are in this range, they certainly will not be a contender for an ASCI contract or weather modeling. But they would be suitable for some types of databases, rendering, etc. There are many uses for a cluster and we just have to wait until the details come out.
“What do you mean I have to activate each CPU separately to make sure that I benefit from Genuine Windows?!?!?!”
“My God, look at this… each virus / spyware is running on its own CPU!!!!”
I have a dual boot with Win2k3 and Xandros (the boot default). Funny thing is often when I boot Win2k3, later I will see my machine running Xandros. If Win2k3 is sooooo stable, then why does it keep rebooting?
There are Linux zealots on this blog who will always shout at Windows. Times have changed and Windows 2003 is very stable, I mean it. I don’t say it’s secure, but it has nothing to do with Win98, people.
Stop criticizing something you don’t know just because it’s Microsoft. At least give it a try for a month and then we talk about stability.
When I first glanced at the article summary, I thought they were retargeting Longhorn for supercomputers so they wouldn’t have to optimize code.
Heck, that still makes more sense, now that I’ve actually read the article. Windows? Clusters? This is like Military Intelligence, only with more BSODs.
Windows has been rock solid stable for 5 years. It’s been solid since Windows 2000 came out, several years before WinXP.
As someone who grew up on BSD Unix, has a copy of Redhat V2 autographed by Linus, has compiled, optimized, and run code on a Cray, has 45K SETI units, and who has administered a domain of stable PC’s, I say “Choice is good” in the supercomputer cluster OS business.
I think it’s a shame that a lot of people have lost their objectivity. I don’t see C# as a big development language for quantum chromodynamics calculations, finite element modelling, or big bang simulations. That might be a bit of a stretch… Maybe Fortran# 😎
About you, Eugenia… sometimes I really truly wonder. No offense of course, but XP cannot be considered stable by any means compared to, well, every other multi-user OS in production (i.e., Win2k, Win2k3, RHEL, MacOS X, even F/Net/OpenBSD and Debian). XP drives me nuts.
I have in over a year of hard use crashed my archlinux machine maybe once. I’ve seen a lot of linux lockups, and 99% of them have been hardware/driver related. A couple were user impatience from bad code.
That said, bad forking is a good way to crash any OS.
Yes, XP is usably stable; but from my experience it was a definite step back from 2k. It consistently starts considering applications crashed when they don’t respond every couple of seconds; this is annoying. It likes to let apps lock up their window controls; this is annoying. But it’s definitely a lot better than 9x!! And it’s just weirdly slow in some spots on some machines, like right-clicking on the desktop. On some machines it’s instantaneous, on some it’s doggedly slow. I never noticed that on 2k.
The people who buy clusters don’t use Windows on their desktops. I don’t think this will succeed except in the business world. In the research world (where clusters are really useful), this is going to flop. Microsoft will have to get behind this like Apple does, and shell out ridiculous resources on marketing and sales to get it to go; and Microsoft is bad at marketing and worse at sales (unless you wanna buy over ten thousand units).
I’d be more concerned about Apple monopolizing the small cluster world. They’ve got their eye on the prize, and they’re selling hard. They have some compelling features and equipment. And in the end, you want a cluster that works, not one you put together manually. The people who buy small clusters would not want to do the work Microsoft has you do (for example, Windows Domain setup).
This is a troll. Why do you say XP is unstable? Do you get BSODs? Here we have no BSODs, no frozen screens, and no automatic reboots with WinXP. It is not secure, but it is very stable. I don’t think you use WinXP. You seem to be criticizing something you have only heard about.
> In the research world (where clusters are really useful), this is going to flop
Well if you think researchers only use Unix workstations
Go visit the universities
> Microsoft is bad at marketing and worse at sales
Oh yeah, they are so bad at marketing that you never see ads in newspapers, nor on major web sites. They’re so bad at marketing that Linux now owns 90% of desktops and servers. They are also very bad at sales. This year Microsoft will be buried in debt and owe money to all the banks.
XP can be plenty stable. As in never crashing, never getting a BSOD stable. It’s very easy to tweak XP to get rid of all the extra crap and make it very stable… and this is coming from a person who uses Linux and doesn’t really care for Windows all that much. I can make XP very stable. If you don’t know how and it crashes on you, then boohoo, it’s your problem. Stop bashing the OS.
The Cornell Theory Center has been using Windows for HPC for ages. Yes, universities really do use Windows (at least this one does, a lot). Look (I did the google for you)…
I think they will have more success with this than the desktop in the long run. People will always pay for high security on big iron and MS will be able to write this as well as anybody else eventually if they choose to. But, on the desktop price will be everything and you can’t beat OSS FREE.
Seriously, a cluster is NOT a supercomputer. But the work of clusters is about as important as the work being completed by BG/L,
and I don’t trust Microsoft anywhere near any of that. While Windows has supported clustering, it NEVER got a lot of use. See, Microsoft has gotten stupid, and we should let them be that way.
They saw the TeraGrid. Microsoft, go away, we’ve got serious work to do. I’m going to play in Oberon. Hehe… later.
I think a lot of trolls came out on this thread. Seriously, get over it, WinXP and 2003 are stable already, saying you have had some issue does not make this not true, it’s just your setup.
Furthermore, anything MS has ever decided to go into, they have become king at; maybe not at first, but in time. They created solid home and pro desktops in time, took over the handheld market, made a big splash in home theatre PCs, and are steadily marching forward in servers. They have a massive amount of resources behind them; they will succeed.
Processor support really doesn’t matter here. All they need to do is come out with an x86-64 setup and they have done what they need to do. They are going to be able to target those running commodity hardware with this. They aren’t going to try to get you to install it on your SGI cluster.
Secondly, I think many of you are missing the target market for this. It’s not so much the huge national labs that build highly specialized supercomputers and such. It’s going to be universities and companies that need good number crunchers. It’s going to be for running things like Matlab/Simulink models. Right now, most companies just run models on a bunch of WinXP boxes stuffed under people’s desks and stagger run times and such. This is what MS will target. Let them take a bunch of off-the-shelf parts, or a pile of Dells, and turn them into one big computer, and let the same engineers run their models from their office just like they did before in Windows, but now even faster. Same goes for lots of fluid modeling and FEA software and so forth. These are things that largely live in the Windows world, and some even have built-in support for using multiple Windows boxes via their own special means.
A second thing is that MS could use what they work out on this to leverage themselves in the future. Eventually they could fold this into the normal desktop version and have developers make their apps using these tools; then, in a few years, in a typical home where every member has a computer on a network, they can leverage them and use them as one big computer.
Other groups like SETI@home and the other @homes may be able to utilize this too for improving their clients, if MS releases a package to let regular desktops run apps that work better with programs like SETI. The computing power of all the Windows boxes in the world combined outweighs anything else right now.
Ok, I’m nowhere near a clustering expert, but I fail to see how an operating system kernel that was designed first and foremost for GUI-centric, desktop computing can be an advantage in a cluster setting without a near-total rewrite. This is almost as strange as someone wanting to use the BeOS or MacOS 9 for clustering. Granted, Windows Server 2003 is a halfway decent server OS, but it still has the heart and soul of a desktop OS. In my limited experience, desktop computing and cluster computing are worlds apart in what is expected from the OS kernel.
Of course, this is slightly similar to the “Desktop Linux” debate that has been raging for years. There is an exception though; over the years Linux has evolved into many forms: Hobby, server, embedded, PDA, desktop, etc. whereas Windows has been mainly desktop and server until this announcement.
I forgot to mention that I do indeed believe Windows 2k and up is very stable, especially compared to 9x. The only times I’ve had crashes from 2k/XP have been either hardware issues or buggy 3rd-party software. However, stability is only one part of a good cluster OS.
Also, if they scale Windows down far enough to be an efficient and capable cluster OS, won’t it cease to be Windows? There won’t be a need for a GUI, a browser, or a media subsystem. Why call it Windows at all if there are no “windows”?
All things considered though, I will applaud Microsoft if they manage to do this one right.
Being bad at marketing and doing lots of it are different things. If Microsoft weren’t bad at marketing, they wouldn’t be despised by half their customers; they’d be able to work in some decent PR. But they suck at PR.
“Well if you think researchers only use Unix workstations
Go visit the universities ”
I live at one, attend school at one, and work with researchers at one. I won’t say more because I don’t think it’s necessary, but we have a few Windows machines and they are despised because they lack a decent CLI and many tools that people get attached to.
I’m sure a lot of researchers happily use Windows machines; however, you won’t see it much on clusters (right now you hardly ever will), and you often see researchers running other things on their desktops.
And I stated why I consider XP unstable; it has nothing to do with BSOD. It has to do with stupid random slow downs and application lockups.
You can take your foot out of your mouth now if you like.
chris: I live at one, attend school at one, and work with researchers at one. I won’t say more because I don’t think it’s necessary, but we have a few Windows machines and they are despised because they lack a decent CLI and many tools that people get attached to.
I’ve attended a few different ones (I have a few degrees) and I’ve worked at a few different ones. I can say this…
Some use Windows… some use Unix. My GUESS is, though, that most use Unix. The reason I say that is that I’ve noticed at almost all of them there was some kind of Unix presence, even if it wasn’t the dominant platform, while at the others it seemed like you couldn’t find Windows to save your life.
Because it will include a lot of big Microsoft code, such as .Net and the NT kernel. It’s still Windows; coding for it will be coding for Windows to some extent.
And stability is absolutely important in a cluster, because you run simulations that take days to weeks and possibly longer, and you don’t want them crashing because of anything other than your own bad code. Efficiency is great, but unless you can sustain a program for days it won’t gain you anything. I think NT can do this, but my thought is it will flop.
But who knows! Maybe Microsoft will pull a rabbit out of a hat. But let’s face it, their record on breaking into markets is baaaaad: Xbox, Media Center (although this one isn’t doing sooo bad, but TiVo, who is bankrupt, may have more share), Tablet/PDA (OK, but still losing to Palm, right? in a waning market), and maybe someone else has a better memory.
Chris:And I stated why I consider XP unstable; it has nothing to do with BSOD. It has to do with stupid random slow downs and application lockups.
Well… on my Windows machines, I have “random slow downs”, but that’s actually due to an app I have running in the background that does something every few minutes. If I kill that, then I don’t get any random slowdowns. (I also don’t get application lockups either.)
My point being… Maybe there’s something going on with the Windows computers you use?
Frankly… (I’ve mentioned this before in other places) most people I know who set up Windows don’t do a very good job. (I don’t claim to be the best, BTW.) And they really hate it when someone tells them what they’re supposed to do.
Of course… I also don’t usually leave my computers running for weeks on end either. Which might explain this too.
Yeah, I wasn’t trying to say they only used *nix. But for clusters, well, I’ve never seen a Windows cluster or high-speed computing system for research. I’m sure they exist, but I see more Linux and now Mac. And desktops, well, I’m sure it goes both ways there.
The thing with a lot of researchers is that they were probably already heavily using workstations back when Windows couldn’t be used as a viable platform for number crunching (due to low-end Intel hardware). Not because *nix is better, but because Windows doesn’t offer them something to replace the applications and interfaces they got used to. Same complaint you hear from Windows users trying to move to Linux/Mac. Where’s the start menu?! || There’s no grep?!
Anyway, time will tell. I’m warming up to the idea of it happening; but I just don’t see it.
I still stand by my view that XP is obnoxious to use. That said, 2k was usable!
Chris:But let’s face it, their record on breaking into markets is baaaaad
Well… Part of the problem is that a lot of people hate their guts and so they won’t even look at what they produce unless they absolutely have to.
For example, a lot of people I know swore they were never going to get an Xbox even before they knew what games it had or anything else about it besides the fact that it was made by Microsoft. (Me? I like my Xbox. My one gripe being that some games only allow their saved-games to be stored on the harddrive and that’s a real problem in my opinion.)
If you ask me… Microsoft would do better trying to improve their PR. Of course… At this point that’s going to be hard.
Chris:But for clusters, well I’ve never seen a Windows cluster or high speed computing system for research.
Neither have I. Haven’t heard of one either.
Chris: Same complaint you hear from Windows users trying to move to Linux/Mac. Where’s the start menu?! || There’s no grep?!
Heh. Ya… I agree. People tend to get used to the tools they use.
Also… some people have had bad experiences with one platform or another that weren’t really that platform’s fault, but afterwards they feel it must have been that platform’s fault.
When I say random slowdowns, I mean tasks in explorer that take more time than makes sense (I mentioned right clicking on the desktop earlier). And no, mine is setup quite well. Mine runs ok, because I spent time making it usable. I’m proud to say my XP install is over 2 years old (although the use has been light, mostly gaming).
An OS should not slow down due to running it for weeks and weeks; if anything it should speed up slightly. I don’t leave anything but my servers running for months and months; they run until I move (about every three months).
The biggest problem I had with XP was (and I’m not sure if I remember right) that the IP stack became corrupted. I had to run some strange command that I googled forever for, and it fixed it. The symptom was that it took something like 20 minutes to download a webpage.
Linux with X Windows running, take your pick of Gnome or KDE, is unstable. You can claim it is not, but it locks up hard, to where the only option is the power button.
Actually, I’m pretty sure it’s X itself that locks up (I don’t think KDE or Gnome run with the rights to take the system down). This used to happen to me occasionally about 2 years ago and it ended when I upgraded my card firmware (I have an early MSI GeForce4 Ti4600).
NO keys work or anything; how it got that way is most likely a stupid ass sitting in front of the keyboard.
I’ll give you a little hint: if X locks up, you don’t have to hit the power button. Try Ctrl-Alt-Backspace. If that doesn’t work, and it looks as if the keyboard isn’t working, try Alt-SysRq-K (SysRq is also known as PrintScreen). This will kill the current VT in use, and will override X (if it’s enabled in the kernel, which it should be by default; it’s sometimes disabled for servers, as it may pose a security risk). You can then use Alt-F1 to switch to the first console and restart X from there.
Chris:When I say random slowdowns, I mean tasks in explorer that take more time than makes sense (I mentioned right clicking on the desktop earlier). And no, mine is setup quite well. Mine runs ok, because I spent time making it usable. I’m proud to say my XP install is over 2 years old (although the use has been light, mostly gaming).
Hmmm… interesting… I had that problem on two of my machines. (Not sure why.) As far as being set up well goes, though: everyone I know thinks theirs is set up well, including the ones that are obviously set up poorly. I’m not saying yours is, but it is a possibility. (Of course, it’s also a possibility I missed something as well.)
Rock solid is an OS/software that does NOT reboot on its own, does NOT lose memory or any other resources, _AND_ does not require patches (that require a reboot) every couple of weeks!!
The reason that people who run Windows say that Win2k/XP/2k3 is rock solid is that it’s a bit better than Win9x, which was practically unusable!
Will Win2k/XP/2k3 run for a few months (IF you do NOT apply CRITICAL updates… WTF)?
->yes
will it be down on resources?
->yes
will it be vulnerable?
-> yes
will it last for years on end without rebooting ?
-> NO
This is not my opinion, but my experience.
The longest time I have run an x86 Linux server without a reboot is 5 years. The reason for the shutdown was a RAM/CPU upgrade to accommodate more users. This system was also exposed to the internet, and while it saw numerous hack attacks, none of them succeeded.
During the time I was working with that server, it was zero maintenance, other than some software upgrades, which did not cause a service outage during the upgrade procedure.
Now let’s see ANY version of windows do that !
Currently where I work win 2k3 after 6 months is unusable, and it’s used on an internal ONLY network where only truseted clients have access to it !
You don’t have to register to have a name. So guys, the Anonymous name is getting overused; that, or someone has no life! Just put in a name other than anonymous, Anonymous, or anon!
I can understand, although I wonder about the 0 maintenance. We have some Digital Unix boxes that are zero maintenance, but that’s because they load half their software remotely and because they are fairly low traffic. But one is almost up to a year of uptime!
Clustering in Windows is not really anything new. Has anyone here actually used Win2k in cluster mode? I hope that Microsoft has made much progress since releasing clustering services for Win2k. At the time we had it running (almost 3 years ago) it was a complete and utter disaster with a lot of ‘praying’ going on.
Also, the_rock, thanks for your *opinion*. A stock Win2k install with the right drivers and hardware *will* run for years. You need to use hardware that has been specced to go the distance (HP used to be the only manufacturer willing to guarantee 95%+ reliability, I don’t know if this is still the case).
If you are in the position where you are running your own custom-developed apps, it’s not a problem. I ran a multicast delivery system for 3 years, at which point our lab moved and we had to power the server down.
When you have to start integrating with DB systems, AD, etc., that’s where the trouble starts.
He was the lead developer of both VMS and the NT kernel. And since, when it comes to clustering, VMS can wipe the floor with any *nix pile of crap around, I think all the *nix trolls around here should drink a nice big cup of STFU.
Wow, MS has some guts, I must say, going into the supercomputer market, but hats off if they can pull it off. Personally I would wait and see what they are able to offer their clients and find out more technical details about the OS before I comment on its reliability, etc. And as for those people who say XP crashes or is unusable, well, I am sorry to hear that, but frankly I think that is a bunch of baloney. I have run XP ever since it came out, and right now the current XP’s status as an OS is quite good. Sure, there are security issues, but they can be taken care of. You just gotta know how to use it to see what it can offer you at its max potential. That includes tweaking it, configuring it, and spending as much time on tightening up the OS as you Linux users spend on tinkering with Linux.
“Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are unsecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Windows for supercomputers is not based on the sucky Win9x kernel you know.”
Can I conclude from this that you run only Windows machines??
Chris, Dave is working on Windows 64; Michelle was pointing out that the VAX/VMS implementation of clustering is still probably the best there has been (I worked with more than a few, and it was/is absolutely fantastic).
Dave Cutler worked on that clustering tech whilst he was at DEC; now he is working on Windows 64. MS wishes to introduce a clustering technology, and the only question left is: do the numbers add up?
I still don’t get why everybody is discussing the stability of XP as a desktop workstation, or of Win2k3 as a server. They are much more stable than Win9x, of course. They are not, in my experience, as stable as most *nix systems, though they can be tailored to be stable enough for most non-critical uses if you really think they offer some advantage. But still, this doesn’t say anything about cluster computing performance and stability. It is just data that doesn’t apply to this new case.
What I would like to know are things like: is every node getting a whole NT kernel, GUI and everything, or is it being stripped down to the essential? Is it going to be node-fail safe in some way?
Also note that they are targeting business and smallish R&D… big number crunching won’t give a damn about .Net or integration with a Windows infrastructure; it’s only about writing efficient algorithms (C, Fortran and C++ are the lingua franca of the scientific environment) and having the cluster managed easily and quickly with some remote admin tool.
Most of the people here are teenagers playing with a server in their parents’ basement, helpdesk folks, programmers, and maybe the occasional sysadmin who manages simple tasks.
I use Win2k3 in the datacenter along with HP-UX and Solaris. If I were to show anyone the yearly outage report, our Unix boxes have the same amount of downtime as our Windows ones do. I find most outages were due to the following reasons:
1. hardware failure, which happened on both sides of the house with our HP/Dell/Sun/IBM hardware.
2. application errors not directly related to the OS (e.g. Lotus, ArcServe, Tivoli or some other app).
3. someone running cables and accidentally unplugging something in the process (you know those cable monkeys)
People need to quit bashing MS because it’s trendy. Especially since most here probably don’t have a clue.
I think one of the misunderstandings about windows and clustering is that there are different types of clustering. One type happens at the os level and another type happens at the programming level.
For operating-system-level clustering, Windows is fine, probably fairly good. But for programming-level clustering, Linux is probably better because of its flexibility and openness. I don’t know how Mac OS X fares in this matter.
Whilst the lot of you keep babbling about it, I am actually testing an experimental system at my work. We have about 130 CPUs, P4s @ 2.8GHz running XP Pro SP1, interconnected via a third-party package. It is not the most powerful, generating about 100 Gflops at peak performance; hopefully, if the trials go well, we will have 1,000 CPUs. Stability is superb, and so is load balancing/management!
We also have a 1,000-CPU Linux system, which is comparable to this, but the Windows one is cheaper, as it is multifunctional and truly distributed. It is admittedly a bit slower due to the higher overhead of making it multifunctional.
All these have had clustering with service management, high availability, fault-tolerance and load balancing for decades!
And are we supposed to cheer for MS for bringing us what has been the “näkkileppä” (Finnish, loosely translated as “everyday bread”) on the other enterprise platforms forever?
Shit, on Solaris you have the choice of at least THREE different clustering software solutions: Sun Cluster, Veritas Cluster and Oracle Cluster Aware Services.
I would think most clusters used in research would not be on the internet, thus the security problems of Windows wouldn’t be a real issue. How would it be attacked?
Please correct me if I am wrong…I don’t claim to know that much about clustering. I will agree though – it will likely have crappy resource management like every other version of Windows.
It’s all about support. The first people to jump on the MS bandwagon will be unis/colleges that are Windows-only shops (believe me, I go to one). And the reason they will go with a system that might not be 100% reliable, or able to take full advantage of the hardware, is that they have a perception that they will get support from MS if anything goes wrong, or that support will be cheaper because it’s based on Windows. That’s what I see as attracting people to it.
NT (now XP) lacks good memory/resource management; just open your NT machine, keep using it for all kinds of work, and you will see the memory leak problem. Even among the top long-running web servers, few are NT-based.
If you know this, you are FUDing.
If you do not know this, how can you talk about it to people like an MS salesman does?
> NT (now XP) lacks good memory/resource management; just open your NT machine, keep using it for all kinds of work, and you will see the memory leak problem. Even among the top long-running web servers, few are NT-based.
Strange, we have web servers running for around 100 days right now with no memory leaks or resource problems.
Applications that are poorly written have memory leaks and that can happen on any platform.
I look forward to Windows for clusters. While I won’t be using it, additions/enhancements/changes that come from it could benefit other versions of Windows. I also tire of the relentless and usually unfounded bashing of Windows (and other OSes, too). Several specific misconceptions really bother me. I’m up late, so I’ll take a stab at a few.
1: Windows is bloated
How so? Sure, its initial footprint has increased considerably since NT4, but hardware capabilities have increased more. Games are even having a hard time keeping up. And many of the additions are reusable by third-party apps, with good results. Avant is a fine web browser, but it is less than 2 MB because it uses IE’s rendering engine. KDE has the same kind of mass reusability, with good results (though it is button overkill).
2: Windows is unstable
I’ve found most crashes to be caused by either failing hardware or lousy drivers. Explorer is more stable than Gnome/KDE on most distros (Debian and FreeBSD are notable exceptions (yes, I know FreeBSD is not Linux (boy, I wish nVidia made amd64 drivers for FreeBSD))). Programs crash, but not the system. Just don’t install any ol’ system utility, and don’t run with administrator privileges (ick, I’m breaking that rule right now). The same applies to my use of Linux: failing hardware, buggy drivers, or crap programs with too much access bring the system down.
3: Windows is slow
At least as a UI, Windows is much faster. Sound never skips, graphics are snappy. There are exceptions, of course, like how Windows pauses when I pop a CD in; Linux doesn’t do that. However, X is slower (for a local user; it is much better over a network than Remote Desktop). Programs for both feel faster on Windows.
4: Windows is insecure
Spend 10 minutes downloading the latest security patches, enable the firewall, be discriminating in what you install, and you’re good to go.
One final thought: If you use Linux, use the command line. If you don’t, you’re missing one of the best things of *nix compared to windows. Otherwise, you’re using a not-quite-as-good UI with not-quite-as-good apps. Once you get the hang of it, it’s much faster and more flexible than any GUI.
Stability is not reliability. The two are somewhat related, though reliability is a much harder goal to achieve.
Windows is stable, but definitely not 100% reliable. I’d say 80%, which is good enough for about 95% of us. For example, say the video drivers fail. That should just mean Windows can’t give me good output on the screen. Why should video drivers affect the whole OS and cause it to fail? It could use different drivers in the meantime; Windows does have generic drivers it boots with if the appropriate ones aren’t found.
Even then, Windows should be able to handle bad essential hardware, occasionally. Minor errors resulting from bad memory or bad output from the CPU should not cause Windows to fail. Say an extremely rare error came up because, just by chance, one bit was incorrectly read from memory. Windows should not crash because of one bit. Basically, Windows should expect to get bad output every once in a while and deal with it appropriately when it receives it.
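The idea of tolerating a flipped bit can be sketched in a few lines: keep a checksum next to the data and treat a mismatch as a recoverable read error rather than blindly trusting the value. This is a toy illustration of the principle, not how any real OS handles memory errors; `store`, `load`, and the bit-flip simulation are all made up for the example.

```python
import zlib

def store(value: bytes):
    # Keep a checksum alongside the data so corruption is detectable.
    return (value, zlib.crc32(value))

def load(record, flip_bit=None):
    data, checksum = record
    if flip_bit is not None:
        # Simulate a single-bit memory error at the given bit offset.
        corrupted = bytearray(data)
        corrupted[flip_bit // 8] ^= 1 << (flip_bit % 8)
        data = bytes(corrupted)
    if zlib.crc32(data) != checksum:
        # A robust system would retry, remap, or fall back here
        # instead of crashing outright.
        raise ValueError("corrupted read detected")
    return data

record = store(b"result block")
assert load(record) == b"result block"   # a clean read succeeds

caught = False
try:
    load(record, flip_bit=3)             # one flipped bit is caught, not silently used
except ValueError:
    caught = True
assert caught
```

Real hardware does this with ECC memory rather than software checksums, but the principle is the same: detect the bad read and recover instead of failing.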
The unpredictable nature of windows is also why microsoft takes too long to release a security patch. The only way to make windows completely stable and reliable in my eyes is a rewrite, and that will never happen.
Yeah, Microsoft is the first thing that comes to mind when stability is critical.
Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are unsecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Windows for supercomputers is not based on the sucky Win9x kernel you know.
I am sure many here will argue that this is done just fine with the existing, field-tested Linux solutions available today, and they will not see the need. The interesting thing I see Microsoft bringing to the table here isn’t a solution that is technically faster or more efficient than a highly specialized Linux-based one, but instead a solution that will enable people to write software for clusters much more quickly and easily. If a company can throw together a small to medium-sized cluster/supercomputer and write software with a ClusterMFC or a Cluster.Net (you know what I mean), where much of the tedious code has already been written and can be easily used in a Visual Studio type of environment, with MS Visual Studio/programming support a phone call away for any issues, that is where it would be worth something. Computers to throw in the cluster are comparatively cheap; programmers and engineers to make it do what you want are comparatively expensive in a non-academic environment.
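The appeal described here is essentially a map-style API where the framework hides the plumbing. As a rough single-machine analogy (Python’s standard thread pool standing in for the hypothetical “Cluster.Net” scheduler; `simulate` is a made-up workload):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Stand-in for an expensive per-node job (e.g. one Monte Carlo run).
    return seed * seed

# A cluster framework would fan these jobs out to nodes; here, worker
# threads play that role and the pool hides the tedious plumbing.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The selling point is exactly that the calling code never touches sockets, scheduling, or node bookkeeping; whether that convenience is worth the licensing cost is the open question.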
I know from first-hand experience right now that clustering of Windows, specifically Windows 2003 at the moment, is an absolutely unbelievable joke. I’ve always managed to steer clear of Windows and MTS in the past on this, but unfortunately on this occasion I got roped into doing things with Windows 2003 and COM+ (Enterprise Services *snigger*). It is nowhere near up to the task of servicing one site with a couple of thousand people on it, and we’re talking basic here. No, adding more servers doesn’t help, because the hard and fast rule is that you need to saturate all of your machines in the cluster before adding more hardware benefits you. If you don’t, you’re wasting money and not getting better performance at all. Windows is shockingly bad at this, however stable it appears to be for people around here.
They’re taking this to a supercomputing conference?! I bet they’re drawing lots at Microsoft as to who will represent them.
Sun seems at the forefront of the grid computing revolution, selling applications that provide fault tolerant grid services and even selling access to their grid itself…
Good joke, people. Windows can’t manage resources right even on home computers.
Think about all the resources that will be wasted if it runs on a cluster. It’s better that you don’t.
We all know that Windows is the most secure operating system in the world. A cluster running on Windows will be a delight for crackers. Would you risk your business? I ain’t.
A clustered arch would need a fast file system. NTFS anyone?
Honestly, I would be at peace with myself if I knew that it was on Reiserfs, XFS, CXFS (amazing speed, super stability).
Oh, forgot about the fact that Windows runs only on iX86.
My cluster, a personal project, has 4 G4s, 7 iX86 boxes, and a PlayStation 2. Try running Windows on that.
With Linux and NetBSD it works flawlessly.
No offence, but this story should be moved to Fun and Entertainment.
>Oh, forgot about the fact that Windows runs only on iX86.
You are very wrong. The Windows NT kernel has been ported to a number of architectures, including Alpha, Itanium, and PowerPC.
>Windows can’t manage right resources on home computers.
I think it manages just fine.
>>Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are unsecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Please, whoever has a comment, at least comment if you have experience in supercomputing/clustering. I did research for a National Laboratory, and I can assure you they will not put Windows in charge of physics data or anything else.
Supercomputing is not the same as having two or three servers serving your company. It is not the same, period.
Windows 2003 Server hasn’t been that bad for me in stability. But then again, the servers I run here are not that taxed when it comes to people connecting to them, nor are they clustered. But would I use Windows for clustering? Probably not, since there are other OSes out there that are more stripped down and ready to run on all types of hardware with better performance than what MS can offer right now. I’m sure they will show numbers claiming they are better than everyone else, but with some very fudged-up numbers, or only in the areas where they beat others.
Catalin Nicolescu, I would love to see a write-up on that mixed bag of a cluster!
If Microsoft offers this in a slimmed down version like they had said about specialized versions of 200X Server, then this makes perfect sense. My only beef with MS after Win2000 is business practices, specifically Product Activation.
If I can *really* turn off all the extra crap like IE and WMP, it would be a Good Thing[tm].
Price/Service is always going to be an issue, of course.
True, but it works so well… it’s a nightmare.
I’ve been running Windows, Linux, and BSD since ’92. It can’t manage resources. I’ve worked with lots of archs and OSes over the years, believe me. A cluster (for business) is a serious thing, and I would run Windows on one only if I were working for M$.
This is going to be a stripped down (or at least disabled according to the article) distro to keep users from running Windows Cluster Ed. as a regular server.
The question is, how stripped down will it be? Will they be able to extricate IE? If not, then I don’t think the distro will be stripped down enough.
>>Windows can’t manage right resources on home computers.
>I think it manages just fine.
I run Windows 2003 Server Web Edition. No modifications (as in extra software). It doesn’t get used other than to serve ASP websites for my college homework.
When I first started it up and was browsing around, I was shocked: it was performing faster than my 2.4GHz Linux box, and it’s only a 400MHz machine.
Now, 2 weeks later, it’s starting to act like a 400MHz machine: so slow that pages just about time out before they are sent.
Tell me how that is managing its resources just fine?
Don’t joke, please.
The NT core WAS ported to Alpha and PPC several years ago, the last version being a beta of NT4.
And it was an utter commercial failure: firms that spend money on alternative architectures know what they want. They want unix.
MS has not even managed to get the 64-bit version right for Itanium and Opteron: are you really so sure they’d be able to pull off versions for PPC, SPARC, RISC and so on?
Security doesn’t matter that much in most clusters, they tend to be isolated from external networks. Physical security and other paranoias apart, of course.
Fast file system in a clustered arch? Nodes don’t even need a hard disk. The master node that keeps the results is another issue, and it can use the most expensive SCSI disks if necessary.
Cluster.Net. Hmmm. Sounds interesting, except that cluster programmers are especially careful about getting bang for the buck. Running a VM on every node on top of a commercial OS sounds neither cheap nor attractive. The same goes for easy languages that take away the freedom to do hardcore optimizing. Think about it: parallel programmers are expensive no matter which languages they know. The algorithms they know and apply are what’s valued, not the number of lines per hour they can type. It’s not business software, it’s scientific software. The real money saver is in the hardware and software licenses, not the people. And there Windows can’t stand against Linux or FreeBSD.
Still, it will be interesting to see the modifications to Windows for clusters (headless windows machines) and the license pricing.
I wonder how Cluster Windows handles licensing.
Also, in the security sphere, will normal AV tools run on Windows Cluster? One rogue laptop plugged into the wrong network might infect the whole cluster, which would be a Bad Thing.
I for one really don’t see the problem with using Windows as a multi-node OS. For one thing, clustering across a business network is completely different from clustering across a scientific process grid. You need vastly fewer services, and beefed-up inter-process communication.
If it weren’t for the cost of licensing and the overhead of admin tasks, you could run coarse-grained scientific computing today on a whole bunch of WinXP boxes. I’ve done this with SETI@Home for years. Other people have set up render farms for 3D programs like 3D Studio Max. Doing grid computing in Java is the same on Windows as it is on Linux. Maybe not the most efficient but it’s good enough and Java is very easy to do distributed computing with.
I think what’s missing under Windows is a centralized admin capability to configure many boxes at once. With a nice GUI.
What you get with Windows is a lot of great development tools and a very well-known development environment. The whole point of grid computing is that the nodes are so cheap you don’t care if you lose a few during computation. Does Google care if they lose a few servers? Hell no. They have thousands of beige boxes, and they lose dozens at a time without impacting their basic services.
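That lose-a-node-and-keep-going model is easy to sketch: treat each work unit as idempotent and simply re-issue any unit whose node disappears. This is a toy simulation; `crunch` and its failure rate are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the simulated failures are reproducible

def crunch(unit):
    # Stand-in for a coarse-grained work unit; a node "dies" sometimes.
    if random.random() < 0.2:
        raise RuntimeError("node lost")
    return unit * 2

def run_grid(units):
    pending = list(units)
    results = {}
    while pending:
        unit = pending.pop()
        try:
            results[unit] = crunch(unit)
        except RuntimeError:
            pending.append(unit)  # just re-issue the unit to another node
    return results

out = run_grid(range(5))
assert out == {u: u * 2 for u in range(5)}  # every unit completes despite lost nodes
```

This is exactly the SETI@Home/render-farm pattern mentioned above: as long as the coordinator can re-issue units, individual node failures never threaten the overall result.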
Grid computing under Windows is very pragmatic. The fact is that anything that runs well on a Beowulf cluster will run just as well on a Windows cluster. And if you give a physics department a Windows cluster, don’t think for a second they won’t run with it.
Okay folks, let’s first see how this new branch of the Windows family evolves before putting it down. That’s a bit cheap, ain’t it?
Windows XP and especially 2003 (I concur with Eugenia there) are extremely stable. Also, the WinNT kernel has proven itself to be highly portable, since it runs on MIPS, PPC, Alpha, Itanium, and x86 (-64). That’s not bad.
Also, I’d like to draw a comparison between MS and Apple here. MacOS wasn’t exactly known for its stability up to version 10.2; the pre-OSX versions especially were hell when it came to stability. Now OSX is rock solid and never dies on me. Why can’t MS do the same trick?
Actually, they have already done it: 9x vs. NT. World of difference.
So is Microsoft going to charge its customers on a per-node basis? How much is it going to cost? $100 per node? $200 per node?
There will be some senior MS execs pontificating about the innovation of private enterprise, when the reality is that MOSIX has had no comparables in the MS world…
And of course, if you have the publicity budget to say it often enough and loud enough, then history suggests you will be believed…
Well, truth be told, XP is a lot more stable than previous versions of Windows. Where NT 4.0 presented you with a BSOD every so often, in XP it’s only the shell that crashes and restarts. Big improvement? I think not.
As for 2K3: an OS that asks me WHY I want to shut it down or restart it, and after reboot kindly asks if it can phone home about it? Please. No matter how stable or unstable it may be, for me this kind of behaviour is enough to send it straight to the trashcan.
I would really like to know the changes MS have made to their server platform to make it suitable for clusters, seeing how server 2k and 2k3 gobble up system resources like there’s no tomorrow.
> And if you give a physics department a Windows cluster don’t think for a second they won’t run with it.
They’ll have to give it away? I don’t doubt that for a second.
>>Security doesn’t matter that much in most clusters, they tend to be isolated from external networks. Physical security and other paranoias apart, of course.
What are you talking about? That is the purpose of supercomputing. Many labs invest so they can use it and take advantage of it. Supercomputers here are accessed by computers in Europe, and vice versa. Security not important? Don’t make me laugh. That is the number one priority, together with the ability to do millions of calculations to solve problems. You’ve never seen a supercomputer, I can tell.
>>they’ll have to give it away? i don’t doubt that for a second.
They will not use it. I am talking about real physics labs. Tell that to the people at CERN. This is a joke. People, one more time: supercomputing is not for Windows.
>>Supercomputing is not for windows
I concur with you wholeheartedly. Unfortunately, it’s not just businesses that are run by managers who know exactly d*ck about the field they work in; it’s also universities that are run by management nitwits. Therefore, unfortunately, I’m afraid that Windows will actually appear on university supercomputers, especially if you look at the ties MS has with universities (think sponsorships, etc.).
Now you can process errors faster than anyone in the world with your very own Windows cluster!
Well, if the MS cluster edition will offer to compensate for the damages of one of the nodes going down on a 256-node computation just 5 minutes before the end of the job, why not?
After all, before, you could always find the reason for the crash; now you cannot. Time to hire a legal firm, with all the money they will charge…
Is this a joke or something?
If Windows XP is so stable why can’t I install it?
Windows 98se –check
Windows ME- check
Debian — check
suse 9.2 — check
Windows XP is the only OS that I can’t install on my Pentium 4 Dell desktop. It tries; it loads up the software; then, as soon as it tries to find the PS/2 mouse port, it crashes so fscking hard that the only option is a hard reset.
That means I am 6 for 6: every time I have tried to use or install Windows XP, it has needed a complete reinstall, or another OS, to make the machine usable.
Does it come with MSN, MSIE, WMP, Paint, games, and the other included programs?
This isn’t a tech support forum. You would be better off ringing Dell. Do you really think Dell sells Pentium 4s that don’t work with XP? It’s what they ship with the machines. Obviously there is something wrong or incompatible with your hardware somewhere. If it works in 999 out of 1000 Dells, then the problem is not with the software.
We don’t have an official product announcement from MS yet and people are already aggressively taking sides. We all know that MS won’t all of a sudden end up with a huge grant from DARPA to develop cluster technology (i.e. HPCS). I think the big question is who is their target market. Not the target market from the marketing people’s point of view, but what the developers expect to be able to support come next fall. For example, maybe 15 or 20 dual processor Xeon boxes and what interconnects will be supported. If they are in this range, they certainly will not be a contender for an ASCI contract or weather modeling. But they would be suitable for some types of databases, rendering, etc. There are many uses for a cluster and we just have to wait until the details come out.
“What do you mean I have activate each CPU seperately to make sure that I benefit from Genuine Windows?!?!?!”
“My God, look at this… each virus / spyware is running on its own CPU!!!!”
I have a dual boot with Win2k3 and Xandros (the boot default). Funny thing is often when I boot Win2k3, later I will see my machine running Xandros. If Win2k3 is sooooo stable, then why does it keep rebooting?
I am a Linux fan, and I have no doubt this will succeed eventually, even though it may take years.
There are Linux zealots on this blog who will always shout at Windows. Times have changed, and Windows 2003 is very stable, I mean it. I don’t say it’s secure, but it has nothing to do with Win98, people.
Stop criticizing something you don’t know just because it’s Microsoft. At least give it a try for a month and then we talk about stability.
When I first glanced at the article summary, I thought they were retargeting Longhorn for supercomputers so they wouldn’t have to optimize code.
Heck, that still makes more sense, now that I’ve actually read the article. Windows? Clusters? This is like Military Intelligence, only with more BSODs.
Windows has been rock solid stable for 5 years. It’s been solid since Windows2000 came out, several years before WinXP.
As someone who grew up on BSD Unix, has a copy of Redhat V2 autographed by Linus, has compiled, optimized, and run code on a Cray, has 45K SETI units, and who has administered a domain of stable PC’s, I say “Choice is good” in the supercomputer cluster OS business.
I think it’s a shame that a lot of people have lost their objectivity. I don’t see C# as a big development language for quantum chromodynamics calculations, finite element modelling, or big bang simulations. That might be a bit of a stretch… Maybe Fortran# 😎
About you, Eugenia… sometimes I really truly wonder. No offense, of course, but XP cannot be considered stable by any means compared to, well, every other multi-user OS in production (i.e. Win2k, Win2k3, RHEL, Mac OS X, even F/Net/OpenBSD and Debian). XP drives me nuts.
I have, in over a year of hard use, crashed my Arch Linux machine maybe once. I’ve seen a lot of Linux lockups, and 99% of them have been hardware/driver related. A couple were user impatience with bad code.
That said, runaway forking is a good way to crash any OS.
Yes, XP is usably stable, but from my experience it was a definite step back from 2k. It consistently starts treating applications as crashed when they don’t respond for a couple of seconds; this is annoying. It likes to let apps lock up their window controls; this is annoying. But it’s definitely a lot better than 9x! And it’s just weirdly slow in some spots on some machines, like right-clicking on the desktop. On some machines it’s instantaneous, on some it’s doggedly slow. I never noticed that on 2k.
The people who buy clusters don’t use Windows on their desktops. I don’t think this will succeed except in the business world. In the research world (where clusters are really useful), this is going to flop. Microsoft will have to get behind this like Apple does, and shell out ridiculous resources on marketing and sales to make it go; and Microsoft is bad at marketing and worse at sales (unless you want to buy over ten thousand units).
I’d be more concerned about Apple monopolizing the small-cluster world. They have their eye on the prize, and they’re selling hard. They have some compelling features and equipment. And in the end, you want a cluster that works, not one you put together manually. The people who buy small clusters would not want to do the work Microsoft has you do (for example, Windows domain setup).
> Yes, XP is usably stable
This is a troll. Why do you say XP is unstable? Do you get BSODs? Here we have no BSODs, no frozen screens, and no automatic reboots with WinXP. It is not secure, but it is very stable. I don’t think you use WinXP; you seem to be criticizing something you have only heard about.
> In the research world (where clusters are really useful), this is going to flop
Well, if you think researchers only use Unix workstations, go visit the universities.
> Microsoft is bad at marketing and worse at sales
Oh yeah, they are so bad at marketing that you never see ads in newspapers, nor on major web sites. So bad at marketing that Linux now owns 90% of desktops and servers. They are also very bad at sales; this year Microsoft will be buried in debt and owe money to all the banks.
XP can be plenty stable, as in never-crashing, never-getting-a-BSOD stable. It’s very easy to tweak XP to get rid of all the extra crap and make it very stable… and this is coming from a person who uses Linux and doesn’t really care for Windows all that much. I can make XP very stable. If you don’t know how and it crashes on you, then boohoo, it’s your problem. Stop bashing the OS.
The Cornell Theory Center has been using Windows for HPC for ages. Yes, universities really do use Windows (at least this one does, a lot). Look (I did the Google search for you)…
http://www.tc.cornell.edu/CTC-Main/News/2005/050112.htm
http://www.ctc-hpc.com/services.html
http://www.force10networks.com/applications/profiles-ctc.asp
http://www.vni.com/company/press/ctc_release.html
http://www.top500.org/sublist/Site.php?id=401
I recognize that this is *nix’ market, but if more business users are going to be using HPC, then MSFT definitely has a target.
I think they will have more success with this than the desktop in the long run. People will always pay for high security on big iron and MS will be able to write this as well as anybody else eventually if they choose to. But, on the desktop price will be everything and you can’t beat OSS FREE.
It just hurts Bill Gates KBE to think there is any market where Microsoft is not breaking laws or pushing over old ladies to get to the top.
Yeah, and there was xenix..
Seriously, a cluster is NOT a supercomputer. But the work of clusters is about as important as the work being done by BG/L,
and I don’t trust Microsoft anywhere near any of that. While Windows has supported clustering, it NEVER got a lot of use. See, Microsoft has gotten stupid, and we should let them stay that way.
They saw the TeraGrid. Microsoft, go away, we’ve got serious work to do. I’m going to play in Oberon. hehe.. later.
they’ll have to give it away? i don’t doubt that for a second.
I would think, that its more likely that they pay some high profile customer to run it than that they give it away.
I think a lot of trolls came out on this thread. Seriously, get over it: WinXP and 2003 are stable already. Having had some issue yourself does not make that untrue; it’s just your setup.
Furthermore, anything MS has ever decided to go into, they have become king at, maybe not at first but in time. They created solid home and pro desktops, took over the handheld market, made a big splash in home theatre PCs, and are steadily marching forward in servers. They have a massive amount of resources behind them; they will succeed.
Processor support really doesn’t matter here. All they need to do is come out with an x86-64 build and they have done what they need to do. They are going to target people running commodity hardware with this; they aren’t going to try to get you to install it on your SGI cluster.
Secondly, I think many of you are missing the target market for this. It’s not so much the huge national labs that build highly specialized supercomputers. It’s going to be universities and companies who need good number crunchers, for running things like Matlab/Simulink models. Right now most companies just run models on a bunch of WinXP boxes stuffed under people’s desks and stagger the run times. This is what MS will target: let them take a bunch of off-the-shelf parts, or a pile of Dells, turn them into one big computer, and let the same engineers run their models from their office just like they did before in Windows, but now even faster. The same goes for a lot of fluid modeling and FEA software. These are things that largely live in the Windows world, and some even have built-in support for using multiple Windows boxes via their own special means.
A second thing is that MS could use what they work out here to leverage themselves in the future. Eventually they could fold this into the normal desktop version and have developers build their apps using these tools; then, in a typical home a few years from now where every member has a networked computer, they can leverage those machines and use them as one big computer.
Other groups like Seti@home and the other @homes may be able to utilize this too, for improving their clients, if MS releases a package that lets regular desktops run apps that work well with programs like Seti. The computing power of all the Windows boxes in the world combined outweighs anything else right now.
Ok, I’m nowhere near a clustering expert, but I fail to see how an operating system kernel that was designed first and foremost for GUI-centric, desktop computing can be an advantage in a cluster setting without a near-total rewrite. This is almost as strange as someone wanting to use the BeOS or MacOS 9 for clustering. Granted, Windows Server 2003 is a halfway decent server OS, but it still has the heart and soul of a desktop OS. In my limited experience, desktop computing and cluster computing are worlds apart in what is expected from the OS kernel.
Of course, this is slightly similar to the “Desktop Linux” debate that has been raging for years. There is an exception though; over the years Linux has evolved into many forms: Hobby, server, embedded, PDA, desktop, etc. whereas Windows has been mainly desktop and server until this announcement.
The FACT no one can run away from is, ‘Servers have a thousand points of failure’…
Reboots, network timeouts, daemons that hose up, kernel panics: you name it, servers take the cake for being the downtime kings.
Now, if you want an operating system that can handle a serious load and mission-critical work: mainframe.
Servers = 1000 points of failure. No matter what operating system or who administers it, they are always tinkering with them, CONSTANTLY.
I thought I got nasty responses. Eugenia, you are a brave woman.
I forgot to mention that I do indeed believe Windows 2k and up is very stable, especially compared to 9x. The only times I’ve had crashes from 2k/XP have been either hardware issues or buggy 3rd-party software. However, stability is only one part of a good cluster OS.
Also, if they scale Windows down far enough to be an efficient and capable cluster OS, won’t it cease to be Windows? There won’t be a need for a GUI, a browser, or a media subsystem. Why call it Windows at all if there are no “windows”?
All things considered though, I will applaud Microsoft if they manage to do this one right.
And if you give a physics department a Windows cluster don’t think for a second they won’t run with it.
they’ll have to give it away? i don’t doubt that for a second.
————————————————————–
That’s what MS will do, give it away free of charge. Once people start using it, then they will charge.
microsoft supercomputers? how much would it cost to post a story about my toe cheese curing warts?
Sandra Williams: Eugenia is about as qualified to post this article written by someone else as a criminal is to a saint.
Interesting…
Sandra Williams: Eugenia if you even have a job, I bet it is one flipping burgers or cleaning toilets.
Please, do not attempt to get in the Technology field it is already filled full of your types.
Interesting… You go from talking about Eugenia to talking about yourself.
Yes… Yes… We all know the industry is filled with ridiculously arrogant types who think they know everything and that nobody else knows anything.
And yes you’re right… We don’t need you. Thank you for telling us.
Chris: The people who buy clusters, don’t use windows on their desktops.
In general, I think you’re right.
But… I do know at least one person who has bought a cluster and uses Windows as his desktop.
*sigh*
Bad at marketing, and doing lots of it, are different things. If Microsoft weren’t bad at marketing they wouldn’t be despised by half their customers; they’d be able to work in some decent PR. But they suck at PR.
“Well if you think researchers only use Unix workstations
Go visit the universities ”
I live at one, attend school at one, and work with researchers at one. I won’t say more because I don’t think it’s necessary, but we have a few Windows machines and they are despised because they lack a decent CLI and many of the tools people get attached to.
I’m sure a lot of researchers happily use Windows machines, however you won’t see it much on clusters (right now you won’t ever); and you often see researchers running other things on their desktops.
And I stated why I consider XP unstable; it has nothing to do with BSOD. It has to do with stupid random slow downs and application lockups.
You can take your foot out of your mouth now if you like.
peragrin: If Windows XP is so stable why can’t I install it?
Windows 98se –check
Windows ME- check
Debian — check
suse 9.2 — check
If Linux is so stable, why can’t I install it on my Gateway laptop?
Windows XP — check
BeOS — check (Note, this OS isn’t even supported any more)
DOS — check
Red Hat Linux ?.? (Can’t remember) — No go.
But… Oh wait… SUSE 9.2 works!
Ok. So I can get a version of Linux working 🙂
Anyway… My point is… The fact that it doesn’t work on your machine does not mean a thing.
chris: I live at one, attend school at one, and work with researchers at one. I won’t say more because I don’t think it’s necessary, but we have a few windows machines and they are despised because they lack a decent CLI interface and many tools that people get attached to.
I’ve attended a few different ones (I have a few degrees) and I’ve worked at a few different ones. I can say this…
Some use Windows… Some use Unix. My GUESS, though, is that most use Unix. The reason I say that is that at almost all of them there was some kind of Unix presence, even where it wasn’t the dominant platform, while at some of them it seemed like you couldn’t find Windows to save your life.
Because it will include a lot of big Microsoft code, such as .Net and the NT kernel. It’s still Windows; coding for it will be coding for Windows to some extent.
And stability is absolutely important in a cluster. Because you run simulations that take days to weeks and possibly longer; and you don’t want them crashing because of anything other than your own bad code. Efficiency is great, but unless you can sustain a program for days it won’t gain you anything. I think NT can do this, but my thought is it will flop.
But who knows! Maybe Microsoft will pull a rabbit out of a hat. But let’s face it, their record on breaking into markets is baaaaad: Xbox, Media Center (although that one isn’t doing sooo bad, though TiVo, which is bankrupt, may have more share), tablet/palm (OK, but still losing to Palm, right? in a waning market), and maybe someone else has a better memory.
Chris: And I stated why I consider XP unstable; it has nothing to do with BSOD. It has to do with stupid random slow downs and application lockups.
Well… On my Windows machines I get “random slowdowns,” but that’s actually due to an app I have running in the background that does something every few minutes. If I kill that, I don’t get any random slowdowns. (I don’t get application lockups either.)
My point being… Maybe there’s something going on with the Windows computers you use?
Frankly… (I’ve mentioned this before in other places) most people I know who set up Windows don’t do a very good job. (I don’t claim to be the best, BTW.) And they really hate it when someone tells them what they’re supposed to do.
Of course… I also don’t usually leave my computers running for weeks on end either. Which might explain this too.
Yea, I wasn’t trying to say they only use *nix. But for clusters, well, I’ve never seen a Windows cluster or high-speed computing system for research. I’m sure they exist, but I see more Linux and now Mac. And desktops, well, I’m sure it goes both ways there.
The thing with a lot of researchers is that they were probably already heavy workstation users back when Windows couldn’t be used as a viable platform for number crunching (due to low-end Intel hardware). It’s not that *nix is better, but that Windows doesn’t offer them replacements for the applications and interfaces they got used to. It’s the same complaint you hear from Windows users trying to move to Linux/Mac: Where’s the start menu?! || There’s no grep?!
Anyway, time will tell. I’m warming up to the idea of it happening; but I just don’t see it.
I still maintain that XP is obnoxious to use. That said, 2k was usable!
Chris: But let’s face it, their record on breaking into markets is baaaaad
Well… Part of the problem is that a lot of people hate their guts and so they won’t even look at what they produce unless they absolutely have to.
For example, a lot of people I know swore they were never going to get an Xbox even before they knew what games it had or anything else about it besides the fact that it was made by Microsoft. (Me? I like my Xbox. My one gripe being that some games only allow their saved-games to be stored on the harddrive and that’s a real problem in my opinion.)
If you ask me… Microsoft would do better trying to improve their PR. Of course… At this point that’s going to be hard.
Chris: But for clusters, well I’ve never seen a Windows cluster or high speed computing system for research.
Neither have I. Haven’t heard of one either.
Chris: Same complaint you hear from Windows users trying to move to Linux/Mac. Where’s the start menu?! || There’s no grep?!
Heh. Ya… I agree. People tend to get used to the tools they use.
Also… Some people have had bad experiences with one platform or another that weren’t really that platform’s fault, but afterwards they feel it must indeed have been that platform’s fault.
When I say random slowdowns, I mean tasks in Explorer that take more time than makes sense (I mentioned right-clicking on the desktop earlier). And no, mine is set up quite well. Mine runs OK because I spent time making it usable. I’m proud to say my XP install is over 2 years old (although the use has been light, mostly gaming).
An OS should not slow down due to running it for weeks and weeks; if anything it should speed up slightly. I don’t leave anything but my servers running for months and months; they run until I move (about every three months).
The biggest problem I had with XP was (and I’m not sure if I remember right) that the IP stack became corrupted. I had to run some strange command that I googled for forever, and it fixed it. The symptom was taking something like 20 minutes to download a webpage.
stop talking about Longhorn, yes we know it will take a lot of CPU
Talk about being off topic and stirring things up; was there even any discussion about Longhorn?
“stop talking about Longhorn, yes we know it will take alot of cpu “
Linux with X Windows running (take your pick, Gnome or KDE) is unstable; you can claim it is not, but it locks up hard to where the only option is the power button.
Actually, I’m pretty sure it’s X itself that locks up (I don’t think KDE or Gnome run with the rights to take the system down). This used to happen to me occasionally about 2 years ago and it ended when I upgraded my card firmware (I have an early MSI GeForce4 Ti4600).
NO keys work or anything; how it got that way is most likely a stupid ass sitting in front of the keyboard.
I’ll give you a little hint: if X locks up, you don’t have to hit the power button. Try Ctrl-Alt-Backspace. If that doesn’t work, and it looks as if the keyboard isn’t working, try Alt-SysRq-K (SysRq is also known as PrintScreen). This will kill the current VT in use and will override X (if it’s enabled in the kernel, which it should be by default; it’s sometimes disabled on servers, as it may pose a security risk). You can then use Alt-F1 to switch to the first console and restart X from there.
Chris: When I say random slowdowns, I mean tasks in explorer that take more time than makes sense (I mentioned right clicking on the desktop earlier). And no, mine is setup quite well. Mine runs ok, because I spent time making it usable. I’m proud to say my XP install is over 2 years old (although the use has been light, mostly gaming).
Hmmm… Interesting… I had that problem on two of my machines. (Not sure why.) As far as being set up well goes, though: everyone I know thinks theirs is set up well, including the ones that are obviously set up poorly. I’m not saying yours is, but it’s a possibility. (Of course, it’s also a possibility I missed something as well.)
But I can understand your complaint 🙂
Rock solid is an OS/software that does NOT reboot on its own, does NOT lose memory or any other resources, _AND_ does not require patches (that require a reboot) every couple of weeks!!
The reason people who run Windows say that Win2k/XP/2k3 is rock solid is that it’s a bit better than Win9x, which was practically unusable!
Will Win2k/XP/2k3 run for a few months (IF you do NOT apply CRITICAL updates… WTF)?
->yes
will it be down on resources?
->yes
will it be vulnerable?
-> yes
will it last for years on end without rebooting ?
-> NO
This is not my opinion, but my experience.
The longest I have run an x86 Linux server without a reboot is 5 years. The reason for the shutdown was a RAM/CPU upgrade to accommodate more users. This system was also exposed to the internet, and while it saw numerous hack attacks, none of them succeeded.
During the time I was working with that server it was zero maintenance, other than some software upgrades, which did not cause a service outage during the upgrade procedure.
Now let’s see ANY version of windows do that !
Currently where I work, Win2k3 after 6 months is unusable, and it’s used on an internal-ONLY network where only trusted clients have access to it!
just my 2c
How about XP randomly rebooting? It’s becoming a common occurrence on this box.
You don’t have to register to have a name. So guys, the Anonymous name is getting overused; that, or someone has no life! Just put in a name other than anonymous, Anonymous, or anon!
I can understand, although I wonder about the 0 maintenance. We have some Digital Unix boxes that are zero maintenance, but that’s because they load half their software remotely and because they are fairly low traffic. But one is almost up to a year of uptime!
Some days I despise every OS.
Clustering in Windows is not really anything new. Has anyone here actually used Win2k in cluster mode? I hope Microsoft has made a lot of progress since releasing clustering services for Win2k. At the time we had it running (almost 3 years ago) it was a complete and utter disaster, with a lot of ‘praying’ going on.
Also, the_rock, thanks for your *opinion*. A stock Win2k install with the right drivers and hardware *will* run for years. You need to use hardware that has been specced to go the distance (HP used to be the only manufacturer willing to guarantee 95%+ reliability, I don’t know if this is still the case).
If you are in the position where you are running your own custom-developed apps, it’s not a problem. I’ve run a multicast delivery system for 3 years, at which point our lab moved and we had to power the server down.
When you have to start integrating with DB systems, AD, etc., that’s where the trouble starts.
He was the lead developer of both VMS and the NT kernel. And since, when it comes to clustering, VMS can wipe the floor with any *nix pile of crap around, I think all the *nix trolls around here should drink a nice big cup of STFU.
That’s nice. Back it up or buzz off, Michelle. According to what I’m reading, Cutler is working on Windows 64, not clustering.
Wow, MS has some guts, I must say, going into the supercomputer market, but hats off if they can pull it off. Personally I would wait and see what they are able to offer their clients, and find out more technical details about the OS, before I comment on its reliability, etc. And as for those people who say XP crashes or is unusable, well, I am sorry to hear that, but frankly I think that is a bunch of baloney. I have run XP ever since it came out, and right now the current XP’s status as an OS is quite good. Sure, there are security issues, but they can be taken care of. You just gotta know how to use it to see what it can offer at its max potential. That includes tweaking it, configuring it, and spending as much time tightening up the OS as you Linux users spend tinkering with Linux.
“Actually, it does come to my mind. Windows 2k3 and XP are the most stable OSes I run. They are unsecure and they look like crap, but they are MIGHTY stable on my machines. And you know, I have run A LOT of OSes/distros/you-name-it in the past 10 years.
Windows for supercomputers is not based on the sucky Win9x kernel you know.”
Can I conclude from this that you run only Windows machines??
Chris, Dave is working on Windows 64; Michelle was pointing out that the VAX/VMS implementation of clustering is still probably the best there has been (I worked with more than a few, and it was/is absolutely fantastic).
Dave Cutler worked on that clustering tech while he was at DEC; now he is working on Windows 64. MS wishes to introduce a clustering technology, and the only question left is: do the numbers add up?
Well, on my machines XP is not really stable.
I still don’t get why everybody is discussing the stability of XP as a desktop workstation, or of Win2k3 as a server. They are much more stable than Win9x, of course. They are not, in my experience, as stable as most *nix systems, though they can be tailored to be stable enough for most non-critical uses if you really think they offer some advantage. But still, this says nothing about cluster computing performance and stability. It is just data that doesn’t apply to this new case.
What I would like to know are things like: is every node getting a whole NT kernel, GUI and all, or is it being stripped down to the essentials? Is it going to be node-failure-safe in some way?
Also note that they are targeting business and smallish R&D… big number crunching won’t give a damn about .Net or integration with a Windows infrastructure; it’s only about writing efficient algorithms (C, Fortran and C++ are the lingua franca of the scientific environment) and having the cluster managed easily and quickly with some remote admin tool.
Funniest thing I’ve read in years!
Most of the people here are teenagers playing with a server in their parents’ basement, helpdesk folks, programmers, and maybe the occasional sysadmin who manages simple tasks.
I use Win2k3 in the datacenter along with HP-UX and Solaris. If I were to show anyone the yearly outage report, our Unix boxes have the same amount of downtime as our Windows ones do. I find most outages were due to the following reasons:
1. hardware failure, which happened on both sides of the house with our HP/Dell/Sun/IBM hardware.
2. application errors not directly related to the OS (i.e. Lotus, ArcServe, Tivoli or some other app).
3. someone running cables and accidentally unplugging something in the process (you know those cable monkeys).
People need to quit bashing MS because it’s trendy. Especially since most here probably don’t have a clue.
Anyways, the article was interesting.
I think one of the misunderstandings about Windows and clustering is that there are different types of clustering. One type happens at the OS level and another at the programming level.
For operating-system-level clustering, Windows is fine, if not fairly good. But for programming-level clustering, Linux is probably better because of its flexibility and openness. I don’t know how Mac OS X does in this regard.
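To make the distinction concrete: programming-level clustering in practice usually means message passing (MPI and the like), where the application itself splits work across nodes. Here is a rough single-machine sketch of that scatter/compute/gather pattern, using Python’s multiprocessing as a stand-in for a real cluster library; the function names and the sum-of-squares workload are purely illustrative, not anyone’s actual API:

```python
# Toy sketch of programming-level "clustering": a coordinator scatters
# work units to worker processes and gathers the partial results.
# multiprocessing stands in for a real message-passing library (MPI);
# on an actual cluster each worker would be a process on another node.
from multiprocessing import Pool

def work_unit(chunk):
    # Stand-in for a real computation (e.g. one slice of a simulation).
    return sum(x * x for x in chunk)

def run_cluster_job(data, n_workers=4):
    # Scatter: split the data into one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    # Compute in parallel, then gather and reduce the partial results.
    with Pool(n_workers) as pool:
        partials = pool.map(work_unit, chunks)
    return sum(partials)

if __name__ == "__main__":
    # Same answer as the serial sum, computed in parallel.
    print(run_cluster_job(list(range(1000))))
```

The same shape (split the data, farm out chunks, reduce the partials) is what an MPI program does across real nodes; the flexibility argument above is largely about how much of that plumbing the platform lets you see and tune.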
While the lot of you keep babbling about it, I am actually testing an experimental system at my work. We have about 130 CPUs, P4s @ 2.8GHz running XP Pro SP1, interconnected via a third-party package. It is not the most powerful, generating about 100 Gflops at peak; hopefully, if the trials go well, we will have 1000 CPUs. Stability is superb, and so is the load balancing/management!
We also have a 1000-CPU Linux system, which is comparable to this, but the Windows one is cheaper, as it is multifunctional and truly distributed. It is admittedly a bit slower due to the higher overheads of making it multifunctional.
Spammers have been using clustered Windows systems for years.
AIX
VMS
HP-UX
Solaris
Tru64
NetWare
All these have had clustering with service management, high availability, fault-tolerance and load balancing for decades!
And are we supposed to cheer for MS for bringing us what has been the “näkkileipä” (Finnish, loosely translated as “everyday bread”) on the other enterprise platforms forever?
Shit, on Solaris you have the choice of at least THREE different clustering software solutions: Sun Cluster, Veritas Cluster and Oracle Cluster Aware Services.
Well, I really don’t think Windows is up to the task of running on supercomputers.
More on this at http://bitsofnews.com/modules.php?name=News&file=article&sid=198
“I can make xp very stable. If you dont know how and it crashes on you then boohoo. its your problem. Stop bashing the OS.”
Spoken like a true a**hole. One shouldn’t have to “know how” to make an OS stable. It should be stable out of the box.
I’m fully expecting the first *cluster-aware* virus or denial-of-service exploit for this bad boy.
New feature: Spyware and viruses that use the MS clustering service to deliver exploit payloads in record time. w00t!
I would think most clusters used in research would not be on the internet, thus the security problems of Windows wouldn’t be a real issue. How would it be attacked?
Please correct me if I am wrong… I don’t claim to know that much about clustering. I will agree, though: it will likely have crappy resource management like every other version of Windows.
I’ll add my 2 cents,
It’s all about support. The first people to jump on the MS bandwagon will be unis/colleges that are Windows-only shops (believe me, I go to one). And the reason they will go with a system that might not be 100% reliable, or able to take full advantage of the hardware, is that they have a perception that they will get support from MS if anything goes wrong, or that support will be cheaper because it’s based on Windows. That’s what I see attracting people to it.
Have you ever heard of the blue screen of death?
That WAS and IS for your beloved NT!!!!
NT (now as XP) lacks good memory/resource management. Just keep an NT machine open and use it for all kinds of work, and you will see the memory-leak problem. Even among the top long-running web servers, few are NT-based.
If you know this, you are FUDing.
If you do not know this, how can you talk about it to people, like an MS salesman?
“NT(now as XP) lacks good mem/resource support, just try open your NT machine, keep using it for all kinds work, you will see the mem leak problem. Even in those top long-lasting web servers, seldom are NT based.”
Strange; we have web servers running around 100 days right now with no memory leaks or resource problems.
Applications that are poorly written have memory leaks and that can happen on any platform.
I look forward to Windows for clusters. While I won’t be using it, additions/enhancements/changes that come from it could benefit other versions of Windows. I also tire of the relentless and usually unfounded bashing of Windows (and other OSes, too). Several specific misconceptions really bother me. I’m up late, so I’ll take a stab at a couple.
1: Windows is bloated
How so? Sure, its initial footprint has increased considerably since NT4, but hardware capabilities have increased more. Games are even having a hard time keeping up. And many of the additions are reusable by third-party apps, with good results. Avant is a fine web browser, yet is less than 2MB because it uses IE’s rendering engine. KDE has the same kind of mass reusability with good results (though it is buttons overkill).
2: Windows is unstable
I’ve found most crashes to be either failing hardware or lousy drivers. Explorer is more stable than Gnome/KDE on most distros (Debian and FreeBSD are notable exceptions (yes, I know FreeBSD is not Linux (boy, I wish nVidia made amd64 drivers for FreeBSD))). Programs crash, but not the system. Just don’t install any ol’ system utility, and don’t run with administrator privileges (ick, I’m breaking that rule right now). The same applies to my use of Linux: failing hardware, buggy drivers, or crap programs with too much access bring the system down.
3: Windows is slow
At least as a UI, Windows is much faster. Sound never skips; graphics are snappy. There are exceptions, of course, like how Windows pauses when I pop a CD in; Linux doesn’t do that. However, X is slower (for a local user; it’s much better over a network than Remote Desktop). Programs for both feel faster on Windows.
4: Windows is insecure
Spend 10 minutes downloading the latest security patches, enable the firewall, and be discriminating in what you install, and you’re good to go.
One final thought: if you use Linux, use the command line. If you don’t, you’re missing one of the best things about *nix compared to Windows; otherwise you’re using a not-quite-as-good UI with not-quite-as-good apps. Once you get the hang of it, it’s much faster and more flexible than any GUI.
Windows Server 2003 Cluster Edition =
A concerted effort to crank out as many BSODs in as short a time as possible.
I can’t wait to play Freecell on it!
Stability is not reliability. The two are somewhat related, though reliability is a much harder goal to achieve.
Windows is stable, but definitely not 100% reliable. I’d say 80%, which is good enough for about 95% of us. For example, say the video drivers fail. That should just mean Windows can’t give me good output on the screen. Why should a video driver failure affect the whole OS and bring it down? It could fall back to different drivers in the meantime; Windows does have generic drivers it boots with if the appropriate ones aren’t found.
Even then, Windows should be able to handle bad essential hardware, occasionally. Minor errors resulting from bad memory or bad output from the CPU should not cause Windows to fail. Say an extremely rare error came up because, just by chance, one bit was incorrectly read from memory: Windows should not crash because of one bit. Basically, Windows should expect to get bad output every once in a while and deal with it appropriately when it receives it.
The unpredictable nature of Windows is also why Microsoft takes so long to release security patches. The only way to make Windows completely stable and reliable, in my eyes, is a rewrite, and that will never happen.
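The “expect bad output every once in a while” idea above boils down to verify-and-retry at the hardware boundary. Here is a minimal sketch of that control flow, with an invented flaky device for illustration (Python stands in for what an OS would do inside its driver or memory layers; nothing here is a real Windows API):

```python
# Toy sketch of tolerating an occasional bad read: verify each read
# against a checksum and retry, instead of letting one bad result
# take the whole system down. FlakyDevice is invented for the demo.
import zlib

def checked_read(read_fn, expected_crc, retries=3):
    """Retry a read whose checksum does not match, up to `retries` times."""
    for _ in range(retries):
        data = read_fn()
        if zlib.crc32(data) == expected_crc:
            return data
    raise IOError("data still corrupt after %d attempts" % retries)

# Simulated device: returns a corrupted buffer on the first call only.
class FlakyDevice:
    def __init__(self, good):
        self.good = good
        self.calls = 0
    def read(self):
        self.calls += 1
        return b"\x00garbage" if self.calls == 1 else self.good

if __name__ == "__main__":
    payload = b"simulation state"
    dev = FlakyDevice(payload)
    print(checked_read(dev.read, zlib.crc32(payload)))  # recovers on retry
```

The point is only the control flow: a single bad read gets retried and absorbed instead of propagating into a crash, which is exactly the behavior the comment above is asking of an OS.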
“What are you talking about. That is the purpose of super computing. Many Labs invest so they can use it. And take ”
Which doesn’t involve placing them directly on the internet.
“advantage of it. Supercomputers here are accessed by computers en Europe and vice-versa. Security not ”
Generally through some sort of gatekeeper machine.
“important. Dont make me laugh. That is the number one priority, together with the ability to do millions of ”
NO. Security is not the number one priority of a supercomputer operating system.
“calculations to solve problems. You never seem a supercomputer I can see.”
That would be you.