While the x86 world hops from one to two processing cores, startup Azul Systems plans to integrate 48 cores into its second-generation Vega chip, expected next year. The first-generation Vega processor it designed has 24 cores, but the firm expects to double that level of integration in systems generally available next year with the Vega 2, built on TSMC’s 90nm process and squeezing in 812 million transistors. That progress means Azul’s Compute Appliances will offer up to 768-way symmetric multiprocessing.
But what, besides pure number-crunching (which isn’t much of a Java forte just yet, as far as I know), would really benefit from 768-way SMP? Just musing: in a web application, you’d probably need at least twice that many threads going all the time to get anywhere close to full CPU usage. How many web sites see that kind of traffic and wouldn’t scale far more cheaply with DNS round-robin, content distribution, and the like?
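The "twice that many threads" guess can be made a little more concrete with the usual back-of-the-envelope sizing rule for blocking request handlers: threads needed ≈ cores × (1 + wait time / compute time). This is my own illustrative arithmetic, not anything from Azul; the 10ms/5ms split below is a made-up workload.

```java
// Rough thread-sizing sketch for blocking handlers (hypothetical numbers):
// to keep C hardware threads busy, you need roughly
//   C * (1 + waitTime / computeTime)
// application threads, since each thread spends part of its life blocked.
public class ThreadSizing {
    static long threadsToSaturate(long cores, double waitMs, double computeMs) {
        return Math.round(cores * (1 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        // 768 hardware threads, handlers blocking ~10ms on I/O per ~5ms of CPU
        System.out.println(threadsToSaturate(768, 10, 5)); // prints 2304
    }
}
```

With those assumed numbers you'd need roughly 2,300 threads to saturate a 768-way box, which is in the same ballpark as the guess above.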
Of course there are bigger things than webapps that I’m not familiar with, and there is a general trend toward more cores rather than ever-more-powerful single cores, but this seems a bit extreme.
Also, I wonder what the memory requirements (size) are for a Java app running on the order of 1,000 threads. I’d also be curious whether there is any sort of segregated memory between the running apps and the not-so-virtual virtual machine. (Do they both use generic memory in the same physical chips? The same address space?)
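One piece of the memory question is easy to estimate: on a stock HotSpot JVM, every Java thread reserves a fixed-size stack (the `-Xss` option, commonly 512 KB to 1 MB by default depending on platform), so stacks alone add up quickly before you even count the heap. A minimal sketch, assuming a 512 KB per-thread stack:

```java
// Back-of-envelope stack budget for a many-threaded Java app.
// Assumes a fixed per-thread stack (HotSpot's -Xss), here 512 KB.
public class StackBudget {
    static long stackBytes(int threads, long stackKb) {
        return threads * stackKb * 1024L;
    }

    public static void main(String[] args) {
        long bytes = stackBytes(1000, 512);
        System.out.println(bytes / (1024 * 1024) + " MB"); // prints "500 MB"
    }
}
```

So roughly half a gigabyte just for thread stacks at 1,000 threads, which gives a feel for why appliances like this ship with a lot of RAM.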
That is a lot of cores.
I could really use that to write my Design Specifications. It might speed up MS Word. Oh, but I’d have to have an OS that could run on that many processors. :/
could probably utilize that power.
Then you’d probably climb pretty fast up the ranking list for crunched units. 🙂
hehehe
To be a little bit serious, I think 3D animation studios would be more than happy to get hold of a few of these systems to crunch frames for movies and so on.
Also, think about all the clusters and computer systems fighting for a place on the list of fastest systems. A few blades with these CPUs would cut cost, the number of units needed to build the cluster, and energy use, and still reach a top position.
Sure, ’cept all that software is currently written in C (or some other natively compiled language), isn’t it?
Oh yeah? Well, I’m going to make a processor that has 1,000 cores!! Take that!
Azul sells a large “Java Appliance” designed to basically run application servers, or more specifically, anything worth the network hop to offload to one of these devices.
For example, the CPUs they’re running aren’t x86s. In fact, I don’t know WHAT they are, but they run the JVM (at the moment).
Anyone who’s been to any large colo has seen the gazillions of 1U servers, the Blade server racks, etc. The new Sun T1000 and T2000 servers are going to put some serious dents in those kinds of deployments, and this Azul system should have a similar effect.
You’re putting more eggs in one basket, but the hardware is SO powerful and getting so cheap both in $$$ and resources, you end up clustering these things instead of racks of machines, and come out ahead in the process.
The Sun machines are more for back-office applications, moving bits around, versus actual hard-core 3D rendering or scientific simulations that need the math. I imagine the Azul machines are very similar.
So, we’re just getting more consolidation going here. Nobody cares about MIPS when it’s context switches and networks that are killing you. These machines work on addressing that problem.
As I understand it, the chips themselves are the JVM. That is, they aren’t some generic processor running somebody’s software virtual machine; they are the ‘virtual’ machine.
And all that talk about crunching: like I said before, that’s useless if you don’t have apps in Java to run on them. Can you name something written in Java that is used for such things?
I can only imagine that garbage collection on an app with that many threads would be something ridiculous. :S
They have a paper called “Pauseless Garbage Collection” on their site.
http://www.azulsystems.com/products/whitepaper_abstracts.html
“This paper describes the Azul garbage collection algorithm and its benefits as compared to traditional garbage collection approaches.”
There are also some intriguing papers like:
“Optimistic Thread Concurrency: Breaking the Scale Barrier”
“optimistic concurrency improves throughput by allowing code that would otherwise be serialized to be safely executed concurrently.”
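For what it's worth, the general optimistic pattern that quote describes can be sketched in plain Java with a compare-and-set loop: read without locking, compute off to the side, and publish only if nobody else changed the value in the meantime. This is just an illustration of the idea, not Azul's actual algorithm (their paper describes hardware-assisted speculation over whole synchronized blocks).

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative optimistic-concurrency sketch (NOT Azul's algorithm):
// instead of serializing updaters behind a lock, each thread reads,
// computes, and retries on conflict.
public class OptimisticCounter {
    private final AtomicLong value = new AtomicLong();

    long addOptimistically(long delta) {
        while (true) {
            long seen = value.get();                   // optimistic read, no lock
            long next = seen + delta;                  // compute off to the side
            if (value.compareAndSet(seen, next)) {     // publish iff unchanged
                return next;
            }
            // conflict: another thread won the race; loop and retry
        }
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        System.out.println(c.addOptimistically(5)); // prints 5
    }
}
```

Under low contention the CAS almost always succeeds on the first try, so otherwise-serialized code runs concurrently; under heavy contention the retries eat the gain, which is why the hard part is deciding when to be optimistic.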
Sweet, thanks
Just so everyone knows, “azul” means “blue” in Portuguese.
A CPU with multiple cores is ideal for pure functional programming languages like Haskell and ML: since a pure functional program has no destructive updates, it can be greatly parallelised.
pure functional programming languages like Haskell and ML
ML isn’t purely functional, because it has mutable storage (“ref”) and because I/O is usually based on side effects.
ML as a family isn’t purely functional because it doesn’t enforce referential transparency. Purely functional programs do afford more flexibility in scheduling by making data dependencies explicit, but the job of automatically providing multiprocessor parallelism without annotations in a performant manner is nontrivial. It’s even more difficult to implement for primarily lazy languages.
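The explicit-data-dependency point can be sketched even in Java terms: when the function being mapped is side-effect-free, a map/reduce can be split across cores safely because no thread mutates state another thread reads. A minimal sketch, assuming a modern JVM with the streams API (which postdates this discussion):

```java
import java.util.stream.LongStream;

// Sketch: a pure (side-effect-free) map/reduce parallelises safely,
// since there are no destructive updates for the scheduler to order.
public class PureParallel {
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()       // safe only because the lambda is pure
                         .map(x -> x * x)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000)); // prints 333833500
    }
}
```

The result is identical however the work is split, which is exactly the freedom referential transparency buys; the unsolved part, as noted above, is getting that split to happen automatically and performantly without annotations.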
In bioinformatics and phylogenetics we can clog pretty much any cluster or SMP computer with our computational apps. I can easily imagine running one MrBayes Bayesian analysis with 48 MCMC chains for WEEKS, evaluating some gazillion tree topologies with TNT using simple parsimony, or exploring a thousand different parameter combinations for direct optimization with POY, and STILL looking at only a tiny fraction of all possible solutions to our biological problems. Sounds like nice tech compared to what we have at the moment, but how much will it cost? If there is one thing we do not have in the academic sphere, it’s heaps of cash.
…it reminds me of that Bill Cosby skit…
(Bill talking to Carroll Shelby) “but I NEED a car that does 180 or better so I can get to work on time!”