“Microsoft is building a high-end feature into Windows for speeding up data access in multiprocessor servers–a feature that to date has been available only in high-end Unix servers, the company said Tuesday. The Redmond, Wash.-based software company is working on support for a technology called non-uniform memory access, or NUMA, one method for designing large servers crammed with processors, said Sean McGrane, program manager for Microsoft’s top-end Datacenter server. The support will be available in two versions of the next edition of Windows for servers, .Net Enterprise Server and .Net Datacenter.” The report can be found at Yahoo! News.
Erm, someone should tell Microsoft that NUMA is (mostly) a hardware issue. Plus, I don’t know what kind of idiot who can afford the kind of hardware NUMA is used in would use that shit of an operating system. This sounds like the MS FUD department going into overdrive as part of that whole “unix is bad” MS campaign.
<OT>
Which I find to be hilarious. Their partner in crime, Unisys if I’m not mistaken, has failed miserably in the Unix/large-scale computing market. The only reason they’re playing with MS is ’cause they’re sad, sniffle, that no one took their 64-way Intel boxes seriously. Whaa. They’re taking their ball and they’re going home.
</OT>
When will people learn that it does not pay to play with MS? As soon as MS gets what they want, they destroy the other party.
Silly monkeys.
BTW
if anyone is into techno/ebm, the new apop album is fantastic!
Linux is running on high end servers and mainframes…
It’s gonna be a very interesting OS market.
ciao
yc
I wonder how the marketing ppl are thinking… they promote an anti-UNIX campaign, yet they say they are building a feature that is as yet available only in high-end Unix servers…
so… high-end Unix servers got something desirable that Windows still doesn’t have, and it will most likely require three versions before they get it running all right… hrm..
Unisys is doing OK, but not great with their ES7000’s (they have sold 600 of those boxes so far, according to their website – at 16 processors or more each). They think they can do better, and they are by no means giving up. IBM has also become a believer in the concept of Windows Datacenter and they will soon be releasing some new boxes that will run it aimed at competing in that same space with Unisys (of course, those boxes will also run Linux).
You’re pretty dopey to think that Microsoft is doing this purely for FUD… they must have specific plans in the works with somebody (not necessarily just Unisys, but probably NEC and IBM also). Just you watch… many companies that laughed their balls off at Microsoft and felt secure in their market segments are now virtually history (Novell was once considered to be unbeatable in enterprise networking, remember – oh, I forgot, you’re too young and you weren’t around then).
That’s what I like about Microsoft – totally unintimidated by anything and always taking the initiative.
Before anyone takes my head off for not supporting my point with specifics, these xSeries servers are the “new” IBM servers that I was referring to in my above post:
http://www.pc.ibm.com/us/eserver/xseries/x440.html
They can be up to 16-way Intel processor setup and will run Red Hat/SuSE Linux, Windows Datacenter or Novell Netware 6. Yes, it is a relatively ‘small’ rack-mount unit, not as big as a mainframe, but once you add appropriate storage and failover components, etc., etc., it probably starts to get close in size to a Unisys ES7000.
*troll*
The initiative to copy functionality that’s been in Unix for years?
“They can be up to 16-way Intel processor setup and will run Red Hat/SuSE Linux, Windows Datacenter or Novell Netware 6.”
This isn’t really very exciting. After all, AIX and Solaris have been running on 64-way processor setups for years. And in fact, Sun has 128-way boxes now. So what is the big deal about a 16-way Intel processor setup running Linux?
This is why Solaris and AIX will continue to hang around for quite some time. Linux can’t scale well past a 16-way setup.
I’ve always associated NUMA with SGI boxes (not sure why…), and I thought it was a hardware issue, so I’m not sure what they are ‘building’ into Windows…
If you get infected by a malicious virus it no longer just takes out your mail system, it takes out your entire company.
Microsoft, enabling technology for virus writers everywhere. ™
…for my new computer architecture name… it sucks to find a name in use already when you were just growing attached to it…
This is why Solaris and AIX will continue to hang around for quite some time. Linux can’t scale well past a 16-way setup.
<= 2.4 can’t, true. However – the new scheduler, BIO and bounce buffer stuff (and numa specific stuff in some instances) that has already gone into 2.5 (which will become 2.6 or 3.0 in six months or so) has already been shown to scale extremely well on 32 and 64 processor IBM Power4 boxes. RedHat have actually backported some of this stuff and it can be found in the kernels shipped with the RedHat advanced server and Skipjack betas.
True, Simba, big AIX and Solaris boxes can have up to 128 procs, but hardly anyone ever buys those configurations.
If you look at the TPC or SAP benchmarking tests that these vendors run, you will find that they generally use configurations that are more normal in most big companies, which means no more than 64-way systems at most.
Even with only a 32-bit 32-way architecture, a Unisys ES7000 is able to annihilate much bigger 64-bit 64-way Unix boxes in benchmarking tests. Read this and weep:
http://www.unisys.es/news/releases/2001/oct/10228070.asp
“These are 32-bit, 32-processor systems outperforming 64-bit servers with many more processors. The potential for gaining substantial price-performance advantages over UNIX/RISC-based machines is enormous.”
Unix architecture is just so olde (yes, I really do mean o-l-d-e ‘olde’) it’s pathetic sometimes – and just wait till ES7000’s stuffed with 64-bit 32-way McKinleys start coming out. BTW, I believe I read somewhere that Unisys said it will be making Linux available for its machines, but I can’t remember for sure, sorry – I’m too lazy to go look it up just now.
Read this and weep
You’ve quoted PR material from the Unisys website. What the hell do you expect them to say – “Our machines are unreliable, expensive piles of crap!”?
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=no…
The top ten TPC-C results for single machines – no clusters. Guess what? The Unisys box is seventh. Shall we compare it with the IBM box in joint fifth?
Unisys: 32 900MHz Xeon processors, 165,218 tpmC
IBM: 24 600MHz RS64-IV processors, 220,807 tpmC
The IBM box is roughly a third faster with 25% fewer processors, each one clocked 33% slower than those in the Unisys box. So “Unix architecture” might be “olde” but it can still handily whip x86 and Windows.
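The arithmetic behind that comparison is easy to check from the figures quoted above (a quick sketch; the only inputs are the numbers already posted):

```python
# TPC-C figures quoted above (tpmC = transactions per minute, type C).
unisys_tpmc, unisys_procs, unisys_mhz = 165_218, 32, 900
ibm_tpmc, ibm_procs, ibm_mhz = 220_807, 24, 600

speedup = ibm_tpmc / unisys_tpmc - 1        # IBM throughput advantage
fewer_procs = 1 - ibm_procs / unisys_procs  # fraction fewer processors
slower_clock = 1 - ibm_mhz / unisys_mhz     # fraction lower clock speed

print(f"{speedup:.1%} faster, {fewer_procs:.0%} fewer CPUs, {slower_clock:.0%} slower clocks")
```

The throughput advantage comes out at roughly a third, with the processor-count and clock-speed gaps as stated.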
just wait till ES7000’s stuffed with 64-bit 32-way McKinleys start coming out. BTW, I believe I read somewhere that Unisys said it will be making Linux available for its machines
I’m sure they will, given that it is actually Linux that is replacing traditional Unix boxes in the enterprise, not Windows. Curious that.
I have just audited 56 servers for a telco. One of those was a MS box, the rest were Solaris 5, 6, 7 & 8, HPUX, Dynix & Linux. Guess which server died while I was there?
Now these guys have decades of experience with servers and this kind of talk brings roars of laughter when mentioned. It’s just MS trying to find new revenues, the Xbox was just another excellent venture, wasn’t it?
Phil, here’s the first fact: comparing clock speeds of an IBM RISC processor to an Intel CISC processor is not an apples-to-apples comparison, as you try to make it. A RISC processor of equivalent MHz will always benchmark faster. So the fact that the RISC is 33% ‘slower’ doesn’t really say much at all.
Here’s the second fact, the benchmark that I am referring to is the SAP SD benchmark which measures simulated loads on an actual real-world application, SAP R/3. This benchmark is highly regarded in the industry as being a likely indicator of real-world performance. The benchmark you are referring to is a TPC benchmark that measures raw processing power and does not take into account how efficiently the OS/hardware actually runs a real application.
You won’t take Unisys’ word for it, (or Aberdeen Group’s) fair enough. Let’s go over to SAP’s website and see what they have to say:
http://www.sap.com/benchmark/sd3tier.asp
Unisys ES7000, 32-bit 32-way Pentium III 900Mhz
Windows Datacenter 2000, SQL Server 2000
Fully processed line items per hour: 2,606,000
IBM pServer p680, 64-bit 24-way RS64-IV 600MHz
AIX, DB2
Fully processed line items per hour: 2,573,330
Sun Enterprise 10000, 64-bit 64-way UltraSparc II 400MHz
Solaris, Oracle
Fully processed line items per hour: 1,954,670
Well Phil, I’ll admit that the IBM does relatively well with only 24 RISC processors (for a Unix box), but it is not even able to beat the Unisys ES7000 in this SAP benchmark, is it? And yet, in your TPC-C benchmark, it is well ahead of Unisys in raw processing power. What’s the problem?
As for that Sun Starfire 10000 box, just look at the performance of that POS, and it’s got 64 processors in it. What’s the problem there? Would you like to compare the cost of a Sun Starfire 10000 versus a Unisys ES7000 as configured above and then tell me who is more expensive?
Like I said, RISC based Unix boxes will have a real problem when Unisys and IBM upgrade to 64-bit McKinley and we go through these benchmarks again. Do you seriously think that Linux can beat Windows Datacenter in this SAP benchmark on the same hardware? I don’t think so. Like Simba said, Linux can’t even scale beyond 16-way at this point so it won’t run worth a damn on a 32-way Unisys or IBM box. In fact, I’m not sure anyone even trusts Linux to scale well beyond 8-way, actually. Linux SMP has always been shitty.
Just keep on laughing if you want to, SimBad. Microsoft can afford it:
Marathon Assured Availability solutions guarantee 99.999% uptime with Windows 2000
http://www.marathontechnologies.com/
“Marathon Assured Availability™ solutions provide the fault- and disaster-tolerance companies need to run their core applications on Windows® platforms with confidence.
Go the Distance with Marathon
Marathon Assured Availability-based products extend and enhance the benefits of the Windows operating environment:
Uptime. 99.999% uptime with no failover and full protection from component failures, OS faults, network connectivity failures, and disasters.
Choice of Windows OS and Intel based server brands (Compaq, Dell, HP, and IBM)
Value. Standard hardware, software, and minimal ongoing maintenance requirements mean low total cost of ownership.
World-Class Partners
Marathon has formed partnerships with a wide range of hardware and software providers to deliver Marathon products worldwide.”
Hmm,
Marathon Assured Availability solutions guarantee 99.999% uptime with Windows 2000
Marathon does but Microsoft doesn’t. That says something right there.
Cheers
David
David, in case you haven’t noticed, Microsoft is not a hardware vendor, so they can’t guarantee that you’ll get that uptime on absolutely any old hardware someone feels like trying. Linux/BSD can’t make that claim either, BTW.
Phil, here’s the first fact: comparing clock speeds of an IBM RISC processor to an Intel CISC processor is not an apples-to-apples comparison, as you try to make it. A RISC processor of equivalent MHz will always benchmark faster. So the fact that the RISC is 33% ‘slower’ doesn’t really say much at all.
Of course you can’t compare processors on clock speed alone – any first-year CS student knows that. You misunderstand the significance though – I was actually making a reference to the fact that IBM have POWER4 processors which are actually double the clock speed of those found in that particular box.
Actually, the latest POWER chips aren’t that different from the latest x86 chips at the logic level (now, if we were comparing a traditional RISC ISA like SPARC you might have a point). It’s painfully obvious that you need to brush up on your uArch knowledge a bit.
A RISC processor of equivalent MHz will always benchmark faster.
Blatantly false. The G4 is slower clock for clock than an Athlon, for example. The UltraSPARC III isn’t exactly a stellar performer either (but then individual MPU performance isn’t what the Scalable Processor ARChitecture is about) and, as I said above, if you think there hasn’t been a convergence of RISC and CISC (e.g. OOOE and SE in POWER4) you know far less than you think.
Here’s the second fact, the benchmark that I am referring to is the SAP SD benchmark which measures simulated loads on an actual real-world application, SAP R/3. This benchmark is highly regarded in the industry as being a likely indicator of real-world performance.
Oh well done. Guess what? So are the various TPC benchmarks.
it is well ahead of Unisys in raw processing power
Neither benchmark tests “raw processing power”. For that you need to look at something like SPECINT and SPECFP.
What’s the problem?
The benchmarks test different kinds of workload, obviously. Perhaps TPC-C is bandwidth limited whilst SAP-SD is cycle limited?
As for that Sun Starfire 10000 box, just look at the performance of that POS, and it’s got 64 processors in it. What’s the problem there?
the E10K is EOL and uses the previous generation of UltraSPARC 2 processors and crossbar interconnects. How about comparing it to one of the current generation Star(cat/fire) boxes?
Do you seriously think that Linux can beat Windows Datacenter in this SAP benchmark on the same hardware?
With the new scheduler and bio work? Yes, I do.
Linux SMP has always been shitty.
Linux’s SMP capability has matured as Linux has expanded into new market areas. When Linux was nothing more than a useful file/print server OS, SMP scalability was fairly unimportant. Now that Linux is being used in the high-end enterprise market, there is a push to improve SMP scalability even more – and this has paid off, as you will see shortly.
One of the servers at the telco has been running on Solaris 6 (SPARC) for two years with around 10,000 users. The longest I’ve had a MS server up is 3 months. The longest server uptime I’ve had is three years with a Novell box, and the only reason it had to come down was because the client was moving offices. MS will probably release half a dozen OSs by then for revenue’s sake. When MS-based servers can do this then we might think about putting some in, ’till then it’s Unix/Linux, mostly Linux ’cos it’s free!!!!
I’m assuming that the “too young” stab was taken at me. While probably not as senile as you, I was working on Novell servers at the age of 15. I’m 24 now. If you’d like to compare credentials, bring it on, old man.
The difference in clock speed between a RISC and a CISC translates into a difference in electricity usage and special air-conditioning requirements. If I have to spend 40-50K to install a special separate A/C system on that particular office floor in order to use the CISC server, I might as well have spent that money on more RISC servers.
Of course you can’t compare processors on clock speed alone – any first-year CS student knows that. You misunderstand the significance though – I was actually making a reference to the fact that IBM have POWER4 processors which are actually double the clock speed of those found in that particular box.
I think if you were actually making a reference to something else you would have just made it, wouldn’t you? How could I possibly know that you were referring to an unrelated point other than what you said? Yes, there are faster POWER4 processors, but there are also faster Pentium III Xeon processors. Yes, I know that modern Intel chips are RISC-like, but I don’t imagine them to be so similar in architecture to a SPARC or a Motorola that you can even remotely compare them on clock speed and assume that all else is equal, and I don’t know that this assumption is correct, thank you.
Blatantly false. The G4 is slower clock for clock than an Athlon for example. The UltraSPARC 3 isn’t exactly a stellar performer either (but then individual MPU performance isn’t what the Scalable Processor ArchiteCture is about) and, as I said above, if you think there hasn’t been a convergence of RISC and CISC (eg. OOOE and SE in POWER4) you know far less than you think.
So a PowerPC G4 running at 1.2GHz won’t outperform (in SPECINT) an Athlon running at 1.2GHz? I don’t think so, Phil.
Oh well done. Guess what? So are the various TPC benchmarks.
Not quite Phil. I don’t consider the TPC-C benchmarking suite to be a real-world application, like an actual installation of SAP R/3 is. I know which of these applications my clients are more likely to be running on their hardware. There is a big difference between how a real application is developed to optimize transactions as opposed to the implementation of a benchmarking suite that is much more abstract. I consider the TPC-C to be far more generic (and less useful) because fewer specialized advantages of the target platform (OS+DB+HW) are likely to be in play in the implementation, IMO.
What’s the problem? The benchmarks test different kinds of workload, obviously. Perhaps TPC-C is bandwidth limited whilst SAP-SD is cycle limited?
Well, no, that isn’t why SAP runs much faster on Unisys than it does on any Unix RISC box of equivalent raw processing power. The real reason is that Windows Datacenter is an advanced object-based operating system with a fully standardized runtime environment, whereas Unix is an aging dinosaur that was never designed with object-based software or runtime environments in mind, and this is where that shows up. In Win2000, COM+ is integrated throughout the entire OS and development tools, whereas in Unix CORBA is an add-on feature. COM+ (unlike CORBA) includes a standard runtime environment that provides developers with optimized object-pooling, thread-pooling and connection-pooling that they can code their apps to exploit, whereas there is no standard runtime environment in the Unix world at all for developers to write to; it does not exist. The huge performance efficiency advantage that you are seeing on Unisys is due to this runtime environment and its component-pooling algorithms. This is Unix’s problem and it is never going to go away; it is only going to keep getting worse, and worse, and worse. Like I said before, the Unix approach to every application performance problem is to just keep throwing more and more hardware at it and hope that it just goes away, whereas Windows actually uses advanced technology to allow you to get the most out of your hardware instead.
the E10K is EOL and uses the previous generation of UltraSPARC 2 processors and crossbar interconnects. How about comparing it to one of the current generation Star(cat/fire) boxes?
Well, if Sun would bother to submit another recent SAP SD benchmark, I would have used it. In my example, I used the very latest certified Sun benchmark that was available in the table, so don’t blame me, blame Sun because they have been too chicken-shit to submit their scores for this benchmark for the last two years (unlike IBM and Unisys). It is not hard to understand why, is it? I can tell you that if I bought a top-of-the-line Sun Starfire two years ago with SAP on it and found out today that a Unisys ES7000 outperforms it by +25% transactions for what, 75% less money (???), then I would be very…disappointed.
Do (I) seriously think that Linux can beat Windows Datacenter in this SAP benchmark on the same hardware? With the new scheduler and bio work? Yes, I do.
Ya think so eh? Well, we’ll just have to see about that, won’t we? Microsoft is comin’ for ya! Linux is just cannibalizing proprietary Unixes and isn’t really affecting Microsoft’s growth that much, IMO. If Linux cannot beat Windows Enterprise/Datacenter at 8-way configurations and higher, Unix is going to have big, big problems from Microsoft in the not-so-distant future because they never quit.
I think if you were actually making a reference to something else you would have just made it, wouldn’t you?
Looks like I’ll have to be less subtle in the future. I also don’t think you’re in any position to make any assumption about how I think or write, hmmm?
Yes, there are faster POWER4 processors, but there are also faster Pentium III Xeon processors.
There aren’t 1.8GHz (P3) Xeons though, are there? Do you see, or am I still being too subtle for you? Perhaps a simple graph would make things easier for you to comprehend?
Yes, I know that modern Intel chips are RISC-like, but I don’t imagine them to be so similar in architecture to a SPARC or a Motorola that you can even remotely compare them on clock speed and assume that all else is equal, and I don’t know that this assumption is correct, thank you.
The really bizarre thing here is that nobody is actually comparing the various architectures clock for clock; rather, that one particular machine at ~50% of its current maximum ramp is faster than another architecture at x% of its maximum ramp.
Do you see the significance or are you still confused?
So a Power G4 running at 1.2GHz won’t outperform (in SPECINT) an Athlon running at 1.2Ghz? I don’t think so Phil.
Can you please point out where I said that? Oh, wait, you can’t. Nice try though, I’ll give you that. Shame the “oh no, he’s got me – quick, make something semi-plausible up to cover my mistake!” routine is so obvious.
Here, let me remind you:
You said:
A RISC processor of equivalent MHz will always benchmark faster.
This is wrong. Any way you spin it, it’s wrong.
I don’t consider
I don’t care what you do or don’t ‘consider’ given your predisposition for being wrong.
Well, no, that isn’t why SAP runs much faster on Unisys than it does on any Unix RISC box of equivelent raw processing power. The real reason is that Windows Datacenter is an advanced object-based operating system with a fully standardized runtime environment whereas Unix is an aging dinosaur that was never designed with object-based software or runtime environments in mind, and this is where that shows up.
rofl. This is hysterical – did you paste that straight from the Microsoft Impressive Sounding but Meaningless PR rubbish website?
There is nothing inherent in the design of ‘Unix’ to make ‘object based software’ magically less efficient than on Windows. Or maybe I’m wrong? Perhaps you can give me some specific examples of how design decisions by eg. the AIX team make that OS suddenly less suitable for “object-based software”.
…
oh, wait, do you mean component based software? As in Component Object Model(+)?
In Win2000, COM+ is integrated throughout the entire OS and development tools, whereas in Unix CORBA is an add-on feature.
Only insofar as something like CORBA isn’t a part of the AIX or Solaris or Linux kernels (although some poor misguided souls did write an ORB as a Linux kernel module a while back).
COM+ (unlike CORBA) includes a standard runtime environment that provides developers with optimized object-pooling, thread-pooling and connection-pooling that they can code their apps to exploit whereas there is no standard runtime environment in the Unix world at all for developers to write to, it does not exist.
Haha. You’ve never actually written any significant componentised Unix software have you? You do realise that you sound like just another bitter Microsoftie who is scared of this strange hybrid new-fangled-old-fangled Unix-Linux thing, right?
The huge performance efficiency advantage that you are seeing on Unisys is due to this runtime environment and its component-pooling algorithms. This is Unix’s problem and it is never going to go away; it is only going to keep getting worse, and worse, and worse. Like I said before, the Unix approach to every application performance problem is to just keep throwing more and more hardware at it and hope that it just goes away, whereas Windows actually uses advanced technology to allow you to get the most out of your hardware instead.
Again, this is just so much marketing drivel. I can’t even address any points because… well… you didn’t make any. There are plenty of buzzwords, I’ll give you that, but precious little in the way of substance.
Windows actually uses advanced technology to allow you to get the most out of your hardware instead.
Take a look at Solaris 8 and 9 and the new COMA machines from Sun (the Star series) and then say that with a straight face. @_@
I can tell you that if I bought a top-of-the-line Sun Starfire two years ago with SAP on it and found out today that a Unisys ES7000 outperforms it by +25% transactions for what, 75% less money (???), then I would be very…disappointed.
Hah, this is getting funnier and funnier. The industry is continually moving forwards – nobody can stand around waiting for the next big thing if they’ve got a pressing business need. If an E10K fulfilled your needs two years ago (i.e. its expected lifetime was matched to capacity planning as well as the needs of the moment) then it was obviously the right purchase.
Ya think so eh? Well, we’ll just have to see about that, won’t we? Microsoft is comin’ for ya!
*YAAWN* Microsoft has been “comin’ for me” for ten years. They still don’t appear to be any closer to actually catching me.
Linux is just cannibalizing proprietary Unixes and isn’t really affecting Microsoft’s growth that much, IMO.
Linux is generally the OS replacing enterprise Unix installations. Windows isn’t, Linux is. Am I being too subtle for you again or can you see the significance in that?
If Linux cannot beat Windows Enterprise/Datacenter at 8-way configurations and higher, Unix is going to have big, big problems from Microsoft in the not-so-distant future because they never quit.
I fully expect that Linux can and will beat Windows Datacenter with no difficulties whatsoever.
I’m just curious Phil. Please tell us all what you know about managed runtime environments on the Unix/Linux platform. Can you name even one? Go ahead, I’m all ears.
http://www.entmag.com/news/article.asp?EditorialsID=5296
“Unisys’ roadmap calls for a second generation of its [ES7000] CMP servers later this year to coincide with Intel’s delivery of its second generation of 64-bit Itanium processors — codenamed McKinley.
Unisys’ initial plans call for starting with a 16-processor version of the McKinley generation CMP system with the boxes eventually containing up to 128 processors. The second-generation CMP systems are supposed to support both 32-bit and 64-bit processors.”
I’m just curious Phil. Please tell us all what you know about managed runtime environments on the Unix/Linux platform. Can you name even one? Go ahead, I’m all ears
Is that the best you can do? Hah.
One? IBM’s TXSeries.
Pssst. I think you’re still getting your terminology confused. I guess that’s what happens when you suck down the marketing whole, eh?
C’mon Phil. Tell us more. What does TXSeries do? How does it work? Can you name me a single Unix application in the world (either commercial or open-source) that actually uses a managed runtime environment? Bonus points if it’s an actual line-of-business program deployed by corporations on large servers.
This should be good.
These people here who claim NUMA is a hardware-only thing aren’t too bright. The OS still needs to SUPPORT it (the article says ‘support’), meaning NUMA-aware process/thread scheduling and memory management.
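For what it’s worth, what “NUMA support” means for a scheduler can be sketched in a few lines (a purely illustrative toy model; the node layout, the loads and the saturation threshold are all made up): keep a task on the node that holds its memory, and accept the remote-access penalty only when that node is saturated.

```python
# Toy NUMA topology: two nodes, two CPUs each. Illustrative only.
NODES = {0: {"cpus": [0, 1]}, 1: {"cpus": [2, 3]}}

def pick_cpu(task_memory_node, load):
    """Prefer a lightly loaded CPU on the task's home node; spill to a
    remote node only when the local node is saturated."""
    local = NODES[task_memory_node]["cpus"]
    remote = [c for n in NODES.values() for c in n["cpus"] if c not in local]
    # A NUMA-oblivious scheduler would just take the least-loaded CPU
    # anywhere; a NUMA-aware one first tries the memory's home node.
    for cpu in sorted(local, key=lambda c: load[c]):
        if load[cpu] < 2:           # arbitrary saturation threshold
            return cpu
    # Local node saturated: accept the remote-access penalty.
    return min(remote, key=lambda c: load[c])

load = {0: 0, 1: 2, 2: 0, 3: 0}
print(pick_cpu(0, load))  # CPU 0: idle and on the task's home node 0
print(pick_cpu(1, load))  # CPU 2: on the task's home node 1
```

Real kernels do the equivalent with per-node runqueues and node-local page allocation, which is exactly the OS-level work the article is describing.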
I went over to IBM and took a look at this TXSeries product you mentioned.
Sorry Phil, that is a transaction processor for crying out loud, not a managed runtime environment. You honestly don’t have a clue what the difference is, do you?
A managed runtime environment (like COM+) is an OS layer that manages the execution of component-based executables and DLLs by controlling object instantiation and termination, and also by managing how that code runs in a compartmentalized way (thread/connection/object pooling).
I expect that you will find precious few Unix applications that use one, because there is no standard. SAP R/3 obviously doesn’t. Prove me wrong, if you can.
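For readers wondering what the pooling being argued about actually looks like, here is a generic sketch (not COM+, not any specific product; all names are illustrative): a pool hands out pre-built objects and reclaims them, instead of constructing and destroying one per request.

```python
import queue

class ObjectPool:
    """Minimal object/connection pool: reuse instances instead of
    paying construction cost on every request. Generic illustration."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())   # pre-build the pooled objects

    def acquire(self):
        return self._pool.get()         # blocks if the pool is exhausted

    def release(self, obj):
        self._pool.put(obj)             # return the object for reuse

# Usage: the "connection" here stands in for anything expensive to build.
made = []
pool = ObjectPool(lambda: made.append(1) or object(), size=2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()                      # a recycled instance, nothing new built
print(len(made))  # 2: only the initial objects were ever constructed
```

The same shape covers connection pooling and thread pooling: the expensive resource is built once and recycled across requests.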
All of your cackling and spitting is not going to help you worth a damn here, Phil. This is OSNews and this crowd likes to deal in real facts. Maybe you should think about heading over to Slashdot, where you obviously belong.
It’s really been a slice talking to you, Phil. Bye.
A managed runtime environment (like COM+) is an OS layer that manages the execution of component-based executables and DLLs by controlling object instantiation and termination, and also by managing how that code runs in a compartmentalized way (thread/connection/object pooling).
You’re still getting confused with your terminology, but something directly analogous to the above drivel is a J2EE server such as WebSphere or BEA WebLogic.
All of your cackling and spitting is not going to help you worth a damn here, Phil. This is OSNews and this crowd likes to deal in real facts. Maybe you should think about heading over to Slashdot, where you obviously belong.
“Cackling and spitting”? Hardly – I’ve refuted most of your claims with hard facts which are easily verifiable. You, on the other hand, have had to resort to semantic tricks and flat-out lies to divert the conversation away from the huge gaps in your knowledge. As for the “crowd”… most are more rabid than the average Slashdotter, and facts are rarer here than an iceberg in the Sahara.
It’s really been a slice talking to you, Phil. Bye.
Oh, the pleasure was all mine. It’s fun to confuse the winzealots on a slow day.