As part of a painfully slow and vague striptease, Sun has started to describe a couple of techniques it will use to improve processor performance in its soon-to-be-released Niagara chip and future Rock processor line. Despite hinting a couple of years back that Niagara would have special technology for handling TCP/IP and SSL loads, Sun has stayed largely quiet on the subject. Recently, however, Sun confirmed that its Niagara processors and Solaris 10 operating system have been tweaked to handle these specialized tasks.
There is zero demand for a new server processor architecture. Even Intel couldn’t replace x86. IBM has had to go to the embedded market to make POWER profitable. How does Sun intend to even break even on these chips?
There is zero demand for a new server processor architecture.
This isn’t a new architecture. It’s still SPARC.
Intel couldn’t replace x86 with Itanium because
a) they didn’t want to replace x86 – that would be just stupid given it’s their largest market
and
b) Itanium was designed as a specialist processor that required massive changes in how programs are written and compiled. It was actually a good architecture in technical terms, but a bad one in practice, in the sense that the world wasn’t ready for it yet.
Sun already makes a lot of money from its SPARC business – oh yes – it’s the core of their business, in fact. I think you’ll find the Niagara and Rock processors will do very well.
That, and the fact that Intel played the elitist card of not selling components to smaller companies besides HP, IBM and Dell. The simple fact is, it was a nice processor on paper, but in reality it was overpriced, underperformed, and lacked the software and hardware support required – couple that with the rise of Opteron and its awesome price/performance compared to Itanium, and it’s little wonder it was mothballed.
One also needs to realise that this isn’t Intel’s first foray into the world of VLIW processors; the i860 was meant to deliver 60 MFLOPS, but in reality it reached at best 40 using hand-written assembly, and about 10 MFLOPS using standard languages like C/C++.
The fact is, they would have been better off either resurrecting Alpha OR (double underline) bolting the SPARC ISA (along with VIS) onto a processor and using that as the next ISA.
One also needs to realise that this isn’t Intel’s first foray into the world of VLIW processors; the i860 was meant to deliver 60 MFLOPS, but in reality it reached at best 40 using hand-written assembly, and about 10 MFLOPS using standard languages like C/C++.
—
Not even a horrible C compiler would produce a program that ran at a quarter the speed of the same program written in assembler.
Vector processors or streaming processors are the king of the hill in my book.
We just need to make better compilers for them.
They are very energy efficient.
See Merrimac, a streaming-supercomputer project led by a Stanford professor who previously worked for Cray.
http://radio.weblogs.com/0105910/2003/12/01.html
What are vector processors?
http://www.answers.com/topic/vector-processor
Vector processors are for throughput workloads, for example graphics and scientific computations. General purpose processors like Opteron are good for a balance of throughput and latency. Aggressively multithreaded processors with TCP offloads are for latency-limited workloads like transaction servers.
We just need to make better compilers for them.
They are very energy efficient.
There is this unfortunate thing called reality. Making better compilers is, well, harder than making CPUs better fit existing compilers. Engineering is full of ideas that would be just awesome, if only the laws of physics allowed them to work at all.