Sun this week is unveiling its long-touted “Niagara” processor, the third major rollout in the past two months for the company, which is aggressively trying to separate itself from its past as a vendor focused solely on its SPARC-Solaris platform for high-end customers. The chip offers eight cores per chip running up to four instruction threads each and addresses the growing issues of energy consumption and heat generation by using only 70 watts of power.
Only 70 watts? Someone call Intel and tell them they have a problem (2-core Xeon 400 Watts).
In a way it’s funny, if you remember Intel proclaiming half a year back that “In the future, chip performance will be measured by performance per watt, instead of number of processor cycles”, right before they announced their advances in low-power computing.
And now they’re being ‘owned’ by everyone else: P.A. Semi coming out with low-power multicores, ARM designing higher-powered chips, AMD beating every high-power server chip in performance per watt (especially with the dual-core Xeons running overly hot), and now Sun offering 32 simultaneous instruction threads for 70 watts.
Good point. Let’s not forget, the P4 and the Xeons are not the only “heater elements” Intel has produced of late. The Itanium (and Itanium 2) is no less of a power sink.
Intel went down the wrong path, and I don’t think they’ve realized it yet.
Only the Haifa guys (the Mobility Microprocessor Group) can save Intel’s butt at this point. Maybe.
Intel’s hottest processor is 130W.
That’s heat output. Is this talking about heat output or energy consumption?
All the power is converted to heat, so a 130W processor converts 130W of electricity into 130W of heat.
As for 130W vs. 150W, whatever. It’s not 400W.
Intel’s hottest processor is 130W.
Wrong…
http://news.com.com/2061-10791_3-5940478.html
The single-core “Irwindale” Xeon for dual-processor servers has a TDP of 110 watts and a maximum of 120 watts, according to Intel data sheets. But Paxville for dual-processor servers runs at 135 watts and 150 watts for the comparable figures.
In Xeon MP models for servers with four or more processors, the gap is significantly larger. The single-core “Cranford” has a TDP of 110 watts and a maximum power of 120 watts, but the dual-core Paxville Xeon figures increase to 165 and 173 watts, respectively.
How long until it ends up in a server?
Between Niagara and Fujitsu’s SPARC64, I’d say the SPARC architecture continues to have a bright future.
I am very curious how the T1 will turn out, because so far the Sun SPARCs have been overpriced, slow, and completely behind the competition. Another interesting fact is that the T1 will be built on a 90-nanometer process, as opposed to Intel’s recent chips, which are 65 nanometers. 65-nanometer manufacturing would have brought the power consumption down even more. I wonder if there are real benchmarks anywhere on how the T1 compares with a dual-core Xeon.
while i agree its an advantage to have a low-power server, i wonder if this will really change pricing at colos. if not, then who really cares?
i just don’t think the market cares that much about an “energystar” server. what the market cares about is price/performance and the availability of code and support.
for price/performance, the new sun boxes may compare favorably, i have not seen many definitive comparisons so far.
for the size of the “ecosystem” and availability of support, obviously SPARC has been in a long-term decline with even sun grudgingly pushing x86, and to be certain, x86 servers are already quite cheap.
i predict this will be a benefit for long-term SPARC users but irrelevant to everyone else.
I don’t know man. If I ran a huge giant room full of servers, and one chip required 70 Watts of power, and the other required 400 Watts, I would take the time to figure out electricity savings. Electricity costs money. More Watts == More heat == More cooling == More Money as well.
I imagine you can safely cram more of these into a smaller space as well. Floor space costs money too.
Now, If I was a small business and just needed one teeny server, I wouldn’t notice any of the savings. Especially, since I’m sure Sun isn’t going to price these things cheaply. However, large scale computing rollouts bring lots more factors to consider into the fold.
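To put a rough number on the electricity argument above, here is a back-of-envelope sketch. The electricity price and the cooling factor are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope: yearly electricity cost of CPU power draw.
# Assumptions (not facts from this thread): US$0.10/kWh, and that every
# watt of chip heat needs roughly another watt of cooling power.
RATE_PER_KWH = 0.10    # assumed electricity price, US$/kWh
COOLING_FACTOR = 2.0   # assumed: 1W of heat needs ~1W of cooling
HOURS_PER_YEAR = 24 * 365

def yearly_cost(chip_watts, num_servers=1):
    """Yearly electricity cost (US$) for chip power plus assumed cooling."""
    total_kw = chip_watts * COOLING_FACTOR * num_servers / 1000.0
    return total_kw * HOURS_PER_YEAR * RATE_PER_KWH

for watts in (70, 400):
    print(f"{watts}W chip x 1000 servers: ${yearly_cost(watts, 1000):,.0f}/year")
```

Under those assumptions, the gap between a 70W chip and a 400W chip across a thousand servers runs to hundreds of thousands of dollars a year, before counting cooling maintenance or floor space.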
Well, I work for a company that develops turnkey systems of a certain type, and because of an Order From Above, we gotta consider using an IBM Xeon blade server. Well, everybody who works with it and develops for it, hates it with passion, and one of the reasons is, it’s a f*cking furnace! So, there’s this huge power wasted on heat, and then there’s this huge power wasted on aircooling, and then there’s costs in air-cooling system maintenance, and then there’s the hidden cost of not being able to put anything near that Hell’s Oven.
because Sun has been so inept and limp over the past five years. Other than Java, which I love, they have just sucked at doing anything else… they just seem like they have no focus and don’t know what to do with themselves.
I am very curious how the T1 will turn out, because so far the Sun SPARCs have been overpriced, slow, and completely behind the competition.
Except for the Niagara line, all future SPARC development for Sun is being done by Fujitsu and will be based on their SPARC64 line. For example, Fujitsu’s quad-core SPARC64 VI+ (“Jupiter”) will use a 65nm process.
I don’t know what competition you’re really talking about in terms of the Niagara. Sun will be able to pack more CPU power per U (and per watt) than anyone else on the planet with this chip. The only thing that comes close at this point is Cell, but Cell’s SPEs are a far cry from the complete SPARC cores featured in Niagara.
i predict this will be a benefit for long-term SPARC users but irrelevant to everyone else.
Niagara is great for applications which depend primarily on vertical scalability (e.g. databases)
Except for the Niagara line, all future SPARC development for Sun is being done by Fujitsu and will be based on their SPARC64 line. For example, Fujitsu’s quad-core SPARC64 VI+ (“Jupiter”) will use a 65nm process.
That’s absolutely not true. Sun has collaborated with Fujitsu on the APL line to plug a void left by the termination of the USV program. This will be Fujitsu-designed chips in Sun-designed systems. Fujitsu will also sell Niagara 1. Sun is developing SPARC even more aggressively now with the Niagara and Rock CMT designs, which should appear in the 2007–2008 timeframe. Sun has the second-largest processor design team in the world – they are doing something 🙂
This is great for people who are building their own datacenter and want to keep operating costs down. When you can kick a 400W monster like the dual core Xeon’s ass with an 8 core chip that draws 70 watts, you certainly have the potential to massively decrease operating costs when deploying large app clusters in your own datacenter.
I’ll finally be able to run a server on solar power without problems.
And Oracle tells you that you will NEED 8 licences of their database product (10g Enterprise Edition) because the Niagara has 8 cores.
Oracle seems to be the only database company that still wants to be paid per core.
Darn, upgrading to those Suns will profit Oracle!
Solaris zones are a valid partitioning scheme for Oracle. You can allocate a zone a resource pool containing the number of cores for which you are licensed.
The point is that if you want to use all 8 cores to run Oracle, you’ll have to buy an 8-processor license. But if the resulting performance is similar to a 4-core Opteron system, then it starts to look like a total ripoff.
Not only that, but both IBM (DB2) and Microsoft (SQL Server) have said they will licence their products based on the number of physical CPUs, not by core.
So a 4-core Opteron setup (2 x dual-core CPUs) would only cost you 2 CPU licences, but with Oracle it costs you 4. I believe they will cave soon. Even big business won’t pay that kind of money. It’s crazy.
So just imagine having 2 Niagara CPUs inside your Sun Fire server. That’s 16 cores! BIG BIG money for Oracle!
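The licensing arithmetic being argued about here is easy to sketch. This is an illustration of the per-core vs. per-socket counting schemes discussed above, not any vendor’s actual price list:

```python
# Illustrative comparison of "per core" vs "per socket" database licensing.
# The counting schemes are from the discussion above; no real prices are used.
def licences_needed(sockets, cores_per_socket, per_core):
    """Licence count under a per-core scheme (Oracle-style, as described
    in the thread) or a per-socket scheme (DB2/SQL Server-style)."""
    return sockets * cores_per_socket if per_core else sockets

configs = {
    "2 x dual-core Opteron": (2, 2),
    "1 x Niagara T1":        (1, 8),
    "2 x Niagara T1":        (2, 8),
}
for name, (sockets, cores) in configs.items():
    print(f"{name}: per-core = {licences_needed(sockets, cores, True)}, "
          f"per-socket = {licences_needed(sockets, cores, False)}")
```

On a per-core scheme, a two-socket Niagara box needs 16 licences against 2 on a per-socket scheme, which is exactly the gap the posters are complaining about.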
Then don’t bloody well use Oracle! DB2 is available for Solaris SPARC AND soon Solaris x86 – don’t like Oracle, then don’t use it! It’s a pretty simple proposition.
If you are an oracle shop and a member of OAUG you can always ask Oracle to review their licencing policy
http://www.oaug.org/cgi-bin/WebObjects/oaug
OAUG Pricing Council Provides OAUG Members the Opportunity to Speak Directly with Oracle About Pricing and Licensing
December 8
The UltraSPARC T1 looks impressive… Everyone knew about 32 threads, but four dual-channel DDR2 memory controllers on chip? That’s far more memory bandwidth (~34GB/sec) than you’ll find in any x86 server right now.
I can’t wait to see how they bench for web servers and MTAs.
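The ~34GB/sec figure falls out of simple arithmetic on those four dual-channel controllers. Here is the calculation, assuming DDR2-533 with 64-bit channels (the exact memory speed is my assumption, not stated in the post):

```python
# Rough arithmetic behind the ~34GB/sec bandwidth figure quoted above.
# Assumption: DDR2-533 (533 million transfers/sec) on 64-bit channels.
controllers = 4
channels_per_controller = 2
transfers_per_sec = 533e6   # DDR2-533, assumed
bytes_per_transfer = 8      # 64-bit channel = 8 bytes per transfer

bandwidth = (controllers * channels_per_controller
             * transfers_per_sec * bytes_per_transfer)
print(f"peak bandwidth ~= {bandwidth / 1e9:.1f} GB/s")
```

Four controllers times two channels times ~4.3GB/s per channel lands right around the 34GB/s quoted.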
And what is the performance of this chip? Sun is hardly known for producing high-performance processors. Without that information it’s impossible to know whether this chip will be worth using instead of, for example:
http://issj.sys-con.com/read/117897.htm
A peak of 145W with “two dual-core 1.8 GHz AMD Opteron HE processors (four total cores), 8 GB DDR400 RAM and one terabyte of SATA storage leveraging Hitachi 500 GB hard drives, with a mere 500 BTU/hour of peak thermal output.”
Using Opteron EE processors (30W peak) you could get that down even more.
70W fully loaded. The N1 systems will be smaller, have higher throughput, produce less heat and draw less power than the system you have quoted.
I believe Sun’s designation of a core is as below:
1 core = 2 processors
1 processor = 2 threads
Therefore = 1 core = 4 threads
Niagara = 8 core = 32 threads
All other vendors do not use this designation. To them a core is a processor by itself.
This is going to play havoc with all the software licensing.
P.S. Do correct me if I am wrong.
fluffybunny’s mappings are incorrect.
There are 8 cores on the UltraSPARC-T1. These are kinda-sorta equivalent to a cpu in the pre-CMT definition.
Each of those 8 cores can run 4 threads simultaneously (compare with Intel’s Xeon HT at 2 threads simultaneously).
Thus 8 cores per physical-processor-socket * 4 threads per core gives 32 threads.
Yes, it does appear to be playing havoc with software that gets licensed on a “per-cpu” basis where the definition of “cpu” is somewhat fluid.
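The corrected mapping is just multiplication, which makes it easy to see why “per-cpu” licensing gets fuzzy. A minimal sketch of the arithmetic from the post above:

```python
# sockets x cores-per-socket x hardware-threads-per-core, as laid out
# in the correction above.
def os_visible_cpus(sockets, cores_per_socket, threads_per_core):
    """Number of processor entities the OS scheduler sees."""
    return sockets * cores_per_socket * threads_per_core

print(os_visible_cpus(1, 8, 4))  # one UltraSPARC T1 socket
print(os_visible_cpus(1, 1, 2))  # a HyperThreading Xeon, for comparison
```

One T1 socket yields 32 schedulable entities; whether that counts as 1, 8, or 32 “CPUs” for licensing is exactly the open question.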
Licensing shouldn’t be a problem with OpenSolaris…
IIRC, this first one will be aimed at webfarms and 1U servers, but isn’t the next Niagara going to be SMP-enabled? Can someone correct me on that?
Niagara 1 will be single-socket only. Each socket has 8 SPARC V9-compatible cores, each capable of running 4 strands (hardware threads). This is presented to the operating system as 32 processor entities for the scheduler and processor utilities such as psradm and psrset. The scheduler is CMT-aware. The part will be available in various ‘bins’ (based on frequency and enabled cores).
As someone has already stated, N1 has 4 on-die DDR2 memory controllers giving high system bandwidth. Memory is also relatively close due to the single-socket design. It also has a 12-way set-associative level-2 cache. In terms of performance, you won’t see many benchmarks until the end of the calendar year. However, the target was to outperform 4-way x86 boxes. This becomes very compelling when you factor in TDP, HVAC and real estate costs. (Some companies’ yearly power costs are in the double-digit millions of dollars.)
Niagara 2 will be an evolution of this. It will increase performance and strand count and move more functions onto the die (system on a chip). It will also improve FPU performance. As for multi-socket Niagara, you will have to wait and see. AFAIK, nothing has been announced.
Thank you for the heads-up. Regarding the comparison with the 4-way boxes: from what I have heard, the cost of these Niagara boxes will be very competitive from a price/performance standpoint – I have even heard rumours that they’ll undercut similarly configured 4-way x86 servers being sold by competitors.
What will also be interesting is the improvement in FPU performance: whether Sun will be inclined to bring Niagara to the point where one can use it in a server hosting applications for those accessing services via Sun Ray clients, or even the possibility of seeing these Niagara chips put into workstations for tasks requiring massive throughput.
What will also be interesting is the improvement in FPU performance: whether Sun will be inclined to bring Niagara to the point where one can use it in a server hosting applications for those accessing services via Sun Ray clients, or even the possibility of seeing these Niagara chips put into workstations for tasks requiring massive throughput.
FPU on N1 is limited. If your code has > 2-3% FP instructions it’s going to suffer.
However, it does have a hardware encryption pipe which will be accessible via the PKCS#11 framework.
Niagara 2 will have a better FPU – one per core – but it will only exist to mitigate ‘death by flops’ rather than to provide any compelling FPU performance. The FP design is Rock, which is due in 2008 on the latest roadmaps.
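The “> 2–3% FP instructions and it suffers” claim can be made concrete with a crude Amdahl-style model. The 40x penalty factor below is purely an illustrative assumption, not a measured figure for the T1:

```python
# Crude model of why a small fraction of FP instructions hurts badly on
# a chip with one shared, slow FPU. The 40x penalty is an illustrative
# assumption, not a measured T1 number.
def slowdown(fp_fraction, fp_penalty=40):
    """Relative execution time vs. an all-integer instruction mix."""
    return (1 - fp_fraction) + fp_fraction * fp_penalty

for pct in (0.01, 0.03, 0.10):
    print(f"{pct:.0%} FP -> {slowdown(pct):.2f}x slower")
```

Even at 3% FP instructions the modelled runtime more than doubles, which is why the poster’s 2–3% threshold is plausible for a shared-FPU design.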
FPU on N1 is limited. If your code has > 2-3% FP instructions it’s going to suffer.
However, it does have a hardware encryption pipe which will be accessible via the PKCS#11 framework.
That should be good; however, IIRC, doesn’t Sun sell encryption off-loading boards which simply slip into a PCI slot?
Niagara 2 will have a better FPU – one per core – but it will only exist to mitigate ‘death by flops’ rather than to provide any compelling FPU performance. The FP design is Rock, which is due in 2008 on the latest roadmaps.
Regarding the FPU performance, I don’t think a mega FPU is required, but decent FPU performance would allow it to be used either as a workstation processor on steroids or as a processor in a server for a thin-client/centralised-processing setup.
I’d love to see a workstation by Sun using this processor.
Sun has gone on record as saying they are porting PostgreSQL to their high end SPARC servers. This should create a great alternative to Oracle, DB2, and SQL Server.
It will be interesting to see how it ports to the new architecture.
Sun has gone on record as saying they are porting PostgreSQL to their high end SPARC servers. This should create a great alternative to Oracle, DB2, and SQL Server.
Well, there is Sybase, one of the best-performing databases on the SPARC platform – whilst Oracle seems to roll along like an obese man with constipation, Sybase seems to fly.
It will be interesting to see how it ports to the new architecture.
Goodness gracious me; the Niagara/N1 is a SPARC V9-compliant processor. Hell, in layman’s terms, it’s 8 SPARC II processors bolted to a cartridge, and each processor has SMT to allow it to run 4 threads, making a grand total of 32 threads.
First to address the point made by rhavyn:
Sun is hardly known for producing high performance processors…
Yes and no. It is true that “Sun is not known” for this, but the second part of the statement is not entirely correct. Many high-end visualisation and simulation (nuclear/weather/etc.) organisations use Sun machines.
RISC vs. CISC aside, there is something about the SPARC platform/architecture that “causes” – if you will – Sun to be one of the top candidates in these application categories, most of which push computing to its limits.
Maybe someone with a more technical understanding of SPARC (JonAnderson???) could make a stab at answering this without going into the worn-out RISC vs. CISC arguments. I am sure there is more to the picture than that, and I am sure everyone here has beaten to death (or at least witnessed the beating to death of) that argument.
One thing I can personally add to this (my technical/engineering knowledge is not that advanced) is that from a high level (mine) it seems that Sun has some pretty genius/unique ways of setting up their motherboards and other chip-level electronics (see http://research.sun.com/people/mybio.php?uid=14675 for some info).
Reading some of the stuff this guy (and others) do one could declare “all motherboards are equal – but some are more equal than others”.
… onto the future (before i forget)
=====================================
Bascule mentioned that Niagara is great for applications which depend primarily on vertical scalability (e.g. databases).
With:
BeFS – BeOS
Storage – Linux (open source)
WinFS – Windows
Reiser4 – Unix/Linux
Spotlight – Mac OS, etc.
… I would say (or rather, suggest) that the biggest “market” for the above mentioned abilities (as stated by Bascule) is the computer sitting right in front of you.
It may not be now, but give it 5 years and you will have the beginnings of applications (regular applications, that is, not just “niche” solutions) leveraging a decent portion of database filesystems’ abilities.
If history stays true to this “sentiment”, software will arise that deliberately exploits the database abilities of these filesystems – not as an afterthought, or as an alternative/fancy way to store file attributes, which is probably what many developers are doing at this “infancy” stage of what I will term “the database application years” OR “the data wars” [forgive me – weak attempt at humour].
I believe that the future of computing is massively parallel everything (yes, including desktop computing), and having these “database-friendly” microprocessors now puts Sun and Fujitsu in a strategic position going forward.
It’s a T2000, and it’s in the sexy Galaxy chassis. The thing feels cold compared to the computers around it.