HP has demonstrated Linux running on 64 Itanium 2 processors without any loss of efficiency, and says it’s seeing growing interest in open source from financial institutions.
This is great news for Linux. Now if they could only improve the supportability of Linux and slow down the kernel release cycle (go into patch mode for a few years), then they could very well have an enterprise operating system.
Now HP needs to try it on a computer they can sell. It would be better to demonstrate scaling on Opteron, POWER, and UltraSPARC.
>>>This is great news for Linux. Now if they could only improve the supportability of Linux and slow down the kernel release cycle (go into patch mode for a few years), then they could very well have an enterprise operating system.<<<
Sure, if all you want to do is run servers.
I agree that this is good news, but the current kernel development model is making great strides and has steady momentum. Don’t stop a good thing!
At what point do the Solaris apologists start calling Linux a real OS?
And at what point, given the abundance of quality desktop applications, plus the Microsoft ones you can already run reliably on Linux today, does Linux become ready for the desktop?
Linux is making a lot of sense in the enterprise. Polish up the directory services a bit more and make them easier to use, and I don’t think many people or companies will be willing to forgo the time and financial savings.
SGI has been using Itanium 2 CPUs in its Altix machines running Linux for some time now, scaling up to 2048 CPUs. So what is HP trying to tell us? I wouldn’t even be surprised if HP is building on top of SGI’s contributions to Linux.
Kaya
then they could very well have an enterprise operating system.
----
Linux has been an enterprise OS for a long, long time already.
Sure, now everybody should RUN to buy a SuperDome with N overpriced Itaniums – to compile kernels and access memory via synthetic benchmarks. That’s exactly what huge N-way servers do most of the time.
Still waiting for REAL benchmarks (i.e. Oracle, MySQL, whatever, plus some real application server performance) to see how scalable Linux is compared to, say, Solaris. “Surprisingly”, they are testing on a platform where there is no other OS to compare it to. (Opteron/IA32/SPARC, anyone?)
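For what it’s worth, here is a toy illustration (in Python, purely hypothetical and not from the article) of the kind of synthetic memory-access test the comment above is dismissing: it times sequential page touches over a large buffer, which says almost nothing about how Oracle or an application server would scale on the same box.

```python
import time

PAGE = 4096  # typical page size in bytes (an assumption for this sketch)

def touch_pages(n_bytes: int) -> float:
    """Touch one byte per page of a large buffer; return elapsed seconds."""
    buf = bytearray(n_bytes)           # allocate a zero-filled buffer
    start = time.perf_counter()
    sink = 0
    for i in range(0, n_bytes, PAGE):  # sequential reads, one per page
        sink += buf[i]
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 256 * 1024 * 1024
    print(f"touched {n // 2**20} MiB of pages in {touch_pages(n):.3f} s")
```

A number like this scales almost trivially with CPU count because there is no lock contention, no I/O, and no transaction logic – exactly why it flatters big N-way boxes in a way a database benchmark would not.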
Run a vendor-supplied kernel (see the sketch at the end of this comment for a quick way to tell what you’re running). QA is rightfully shifting to the vendors. While it has been true for a long time that most users are better off with a vendor kernel, this is now becoming more of an official position.
It allows for a more efficient division of labor. The developers get to do what they do best: develop. And the vendors have the resources and incentive to do the QA. While this results in some duplication of effort on the vendors’ part, that duplication is actually good for the final product. (More QA issues found and patched, overall.) The results get fed back upstream to kernel.org. Note that it is in the best interest of the distros to send patches upstream, as it minimizes their workload over the long term.
I like this new way of doing things very much, although many still seem to be clinging to the idea that people should be able to just download the latest source from kernel.org and slap it on their production servers. Time to adapt to reality. 😉
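As a rough, assumption-laden sketch of the “am I on a vendor kernel?” check mentioned above: vendor kernels usually advertise themselves in the release string, while a kernel built straight from kernel.org sources typically does not. The suffix-to-vendor mapping below is a heuristic based on common naming conventions, not an authoritative list.

```python
import platform

# Heuristic hints only; real vendor naming schemes vary by release.
VENDOR_HINTS = {
    ".EL": "Red Hat Enterprise Linux",  # e.g. 2.4.21-4.EL
    "mdk": "Mandrake",                  # e.g. 2.4.22-10mdk
    "-default": "SUSE",                 # e.g. 2.4.21-99-default
}

def kernel_origin(release: str) -> str:
    """Guess whether a kernel release string looks vendor-built."""
    for hint, vendor in VENDOR_HINTS.items():
        if hint in release:
            return f"vendor kernel ({vendor})"
    return "probably a stock kernel.org build"

if __name__ == "__main__":
    rel = platform.release()  # the running kernel's release string
    print(f"{rel}: {kernel_origin(rel)}")
```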