Intel has built a prototype of a processor with 80 cores that can perform a trillion floating-point operations per second. CEO Paul Otellini held up a silicon wafer with the prototype chips before several thousand attendees at the Intel Developer Forum here Tuesday. The chips are capable of exchanging data at a terabyte a second, Otellini said during a keynote speech. The company hopes to have these chips ready for commercial production within a five-year window.
I have been expecting this ever since they launched the first dual-core chip. Parallelism is challenging in programming, but if nothing else works, I'm more than happy to have the hardware multitask ordinary sequential programs.
Will we see the day when software multitasking becomes obsolete?
Probably not. If you have a big enough machine, there can be thousands of threads running concurrently, many more than there are available CPU cores in the machine.
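To make that concrete, here's a little sketch (in Go, purely as illustration; nothing here is from the article): ten thousand concurrent tasks on however few cores the machine has, and the runtime's scheduler multiplexes them. Software multitasking is doing the work no matter how many cores exist.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	fmt.Println("hardware cores:", runtime.NumCPU())

	// Spawn far more concurrent tasks than cores; the runtime
	// scheduler time-slices them onto whatever CPUs exist.
	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			_ = id * id // stand-in for real work
		}(i)
	}
	wg.Wait()
	fmt.Println("10000 tasks done; the core count never mattered for correctness")
}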
Also, every time a news item on multicore CPUs comes up there are a lot of comments asking "what good is this for?". Well, did we need 650 MB of storage when the CD-ROM came out? Did we need more than 640 KB of memory originally? Do we need 64-bit CPUs? Does the average person need Gigabit Ethernet?
Don't worry. Software makers will figure out ways around this and provide APIs so that developers can take advantage of multicore CPUs as easily as a developer can use threading today. You might not even notice that you are using several cores for your program. Parallelization is only an obstacle until it's solved.
And that is just what Intel wants: http://www.theregister.co.uk/2006/09/27/intel_terafied/
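I'd expect those APIs to look something like a parallel map. A rough sketch (in Go; parallelMap is a made-up name, not any real API): the caller maps a function over a slice and never sees the core count, so the same code rides along unchanged from 1 core to 80.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelMap is a hypothetical helper: the caller never touches
// threads or cores; the worker count comes from runtime.NumCPU.
func parallelMap(in []int, f func(int) int) []int {
	out := make([]int, len(in))
	workers := runtime.NumCPU() // scales from 1 to 80 cores with no source change
	chunk := (len(in) + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(in) {
			hi = len(in)
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				out[i] = f(in[i]) // each worker owns a disjoint slice range
			}
		}(lo, hi)
	}
	wg.Wait()
	return out
}

func main() {
	squares := parallelMap([]int{1, 2, 3, 4}, func(x int) int { return x * x })
	fmt.Println(squares) // [1 4 9 16]
}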
2014 may look like this: 12 nm, 100 W, 96 MB of cache, 8 billion transistors, and up to 288 cores per chip.
http://regmedia.co.uk/2006/09/27/2014.jpg
The link suggests that Intel is finally conceding that the x86 instruction set is not the right way to go for massively parallel processor arrays, while a few x86 cores on the side would still be quite useful. That's a relief if it comes to pass. If it does, computer architecture could get quite interesting again.
It's the Memory Wall that is the real problem, and I fear Intel will only add a Thread Wall on top of it. With good multithreading on both the CPU and the memory side, the Memory Wall can actually be replaced entirely by a Thread Wall, but it requires some inside-out thinking: design the memory system first so it has all the throughput you want for many concurrent issues, and the processor array design then falls into place.
Getting there will also require replacing the huge SRAM caches demanded by latency-intolerant superscalar, out-of-order, branch-predicting (SS OoO BP) designs with latency-hiding designs that can use multiple interleaved DRAM arrays, which leak orders of magnitude less than SRAM.
my 2c
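To illustrate the latency-hiding point above, a toy sketch (in Go; the DRAM access is faked with a sleep, and the numbers are invented): keep many independent accesses in flight at once, so total wall time is about one latency deep rather than N latencies, i.e. throughput is the limit, not latency.

package main

import (
	"fmt"
	"sync"
	"time"
)

// fetch fakes a slow memory access; the latency is invented and
// wildly exaggerated so the overlap is visible at human scale.
func fetch(addr int) int {
	time.Sleep(10 * time.Millisecond)
	return addr * 2
}

func main() {
	const n = 64
	results := make([]int, n)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // one lightweight thread per in-flight access
			defer wg.Done()
			results[i] = fetch(i)
		}(i)
	}
	wg.Wait()
	// All 64 accesses overlap: roughly 10 ms wall time, not ~640 ms.
	fmt.Printf("64 overlapped fetches took %v\n", time.Since(start))
}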
That's a lot of cores. I want one of those under my desk!
wonder how quick that will rip a song to iTunes from a CD :-p
Your CD/DVD-ROM is your bottleneck even now. With an optimized SSE/MMX Windows build of Ogg Vorbis, it took just under 5 seconds to encode a 4-minute track at quality level 8.
That was on a single-core, non-overclocked Athlon 64 3200+ (Venice).
YMMV
//wonder how quick that will rip a song to iTunes from a CD :-p//
Well, today's single-core CPUs can do about 5 GFLOPS… You do the math :-)
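Fine, the math: 1 TFLOPS / 5 GFLOPS = 200, so the prototype is roughly 200 of today's cores at peak, which works out to about 12.5 GFLOPS per core (1000 / 80). Peak figures on both sides, of course.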
And how many kilowatts will the power supply need to be rated at?
Better yet, they had better find a decent way of transporting liquid nitrogen. If a dual-core MacBook already melts, you can imagine what an 80-core MacBook would do.
According to the plan all reproduction will take place in a petri dish by then so you won’t need testicles or have to worry about all that heat, rf and emf radiation in your lap.
I think a bed would still be more comfortable than a huge petri dish.
That would depend on what’s in the dish.
You guys are sick!
A number of commercial software houses are going to have to change the way they license software.
Imagine a server system where the CPU costs say $20-30K and the software for that server costs well in excess of $2M.
Some software can cost up to $50K per dual core, regardless of how many actually run the S/W via virtualisation or, in IBM terms, LPARs.
The numbers don't make commercial sense.
At the moment, with say a 4-CPU Xeon, the software costs are manageable, but here? No way could I go to the FD and ask for that much.
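To put numbers on it: at $50K per dual core, an 80-core chip is 40 dual-core licenses, i.e. 40 × $50K = $2M of software sitting on a $20-30K CPU, which is exactly the mismatch above. It's all license, no hardware.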
//A number of commercial software houses are going to have to change the way they license software.
Imagine a server system where the CPU costs say $20-30K and the software for that server costs well in excess of $2M.
Some software can cost up to $50K per dual core, regardless of how many actually run the S/W via virtualisation or, in IBM terms, LPARs.
The numbers don't make commercial sense.
At the moment, with say a 4-CPU Xeon, the software costs are manageable, but here? No way could I go to the FD and ask for that much.//
Linux rules in this domain.
http://hardware.newsforge.com/article.pl?sid=05/11/15/1443249
http://www.vnunet.com/vnunet/news/2123736/linux-clusters-join-super…
80 times $0 still equals $0.
Supercomputing software and applications cost $0?
So I can go out and get top-notch software to do biology, medicine, and weather prediction for free?
Interesting!
I was not talking about OS costs.
I was talking about commercial applications and tools made by people like
ORACLE
Microsoft (Non O/S products)
IBM
Siebel
etc etc etc
Some companies charge on a per-CPU basis, at say $50K per CPU for the whole system, regardless of how many CPUs actually run the application.
This business model HAS to change with systems like this. If these companies continue this pricing model, they will be as dead as the dodo in no time at all.
Heh, I doubt Oracle will change their pricing scheme. I think we already pay $10000000000000000000000000000 for our database servers as it is, what’s another $100000000000000000000000000000000000000?
What does that mean for Cell though?
//And how many kilowatts will the power supply need to be rated at?//
I guess those 80 cores are not full-featured x86 cores. It is probably more like IBM's Cell; at least the SRAM seems to indicate that.
Core wars, it’s played just like liar’s dice.
Core War is actually a pretty cool old game based on an ASM-like language.
Besides running realistic games that take so long to make that 80 cores will seem old, and playing hi-def video, what are 80 cores good for?
Can you hear that sound… that's Gentoo users howling in joy; with one of those, their "emerge -e world" would be limited by I/O bandwidth.
But there are plenty of uses for multiple cores. Personally, I have a dual core right now and I would love a quad core. Sure, 80 cores is a bit of overkill for a desktop now, but in 5 years things are likely to be different, and we would be able to do new and exciting things with that sort of technology… who knows? Or, as they say, don't knock it till you try it.
80 cores, and I'm sure MS will release a version of Windows that makes the whole thing feel about as fast as a 33 MHz 486.
I was thinking something like a 386SX, and after many months of fine-tuning the system it would be on par with a 386DX :-)
With FOUR megs of RAM!
Funnily enough, despite how long Vista has taken, the Singularity OS would be a far better fit for such a massively parallel CPU. Singularity is based on the simple idea that every object is a protected process, and it draws on RMoX, in turn based on occam. Mind you, it's already been done years ago and is now long forgotten too.
Once upon a time there was a processor that natively supported concurrency on one, a few, or many discrete processors, and had the scheduler, message passing, and point-to-point processor and peripheral links in hardware, but shamefully had no MMU. At the time memory was so tiny that you needed lots of these expensive CPU+memory chips to do anything useful, but it did include IEEE floating point long before Intel did. It ran a *nix-like operating system called Helios, distributed across the processors.
The native programming language, occam, directly supported concurrent programming in a straightforward way that even sequential-software guys could master: communicating, nestable processes that model hardware blocks. By the time Singularity comes out on an N-core x86, it will only be repeating ancient history, using far more hardware to do so.
It's a shame: take n thousand superb chip designers who don't do OS design and m thousand OS builders who don't do CPU design (Windows, Linux, whatever), and what do you get? Two entirely different worlds colliding. On the other hand, a large room full of engineers once built the CPU, the OS, and a parallel language all together and solved most of these problems, just two decades ago. Now imagine a bigger room of engineers using today's fab technology to do the same; it would be a joy to behold.
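For anyone who never saw occam: its rendezvous-style channels live on in modern CSP descendants, and here is the same producer/consumer idea sketched in Go (illustrative only; Go is a descendant of the CSP model, not the original occam):

package main

import "fmt"

// Two communicating processes in the occam/CSP style:
// a producer and a consumer joined by a synchronous channel.
func producer(out chan<- int) {
	for i := 0; i < 5; i++ {
		out <- i // blocks until the consumer is ready: a rendezvous
	}
	close(out)
}

func main() {
	ch := make(chan int) // unbuffered, like an occam channel
	go producer(ch)
	for v := range ch {
		fmt.Println("consumed", v)
	}
}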
If you are talking about transputers, I think they are still used in signal processing.
//80 cores, and I'm sure MS will release a version of Windows that makes the whole thing feel about as fast as a 33 MHz 486.//
As I said above, Linux rules in this domain.
http://hardware.newsforge.com/article.pl?sid=05/11/15/1443249
Windows doesn’t even register, it is off the radar screen entirely.
“The last few Top500 Supercomputer Site lists left little doubt that Linux is the operating system of choice for these bleeding edge systems, but the latest list highlights the popularity of Linux in supercomputing and cites it as the OS of choice for 78% of the world’s fastest machines. 391 of the systems rely on Linux of one flavor or another — far more than Unix (yesterday’s supercomputing king), Mac OS X, Solaris, or any others. Microsoft Windows didn’t even turn up on the list.
Erich Strohmaier, list co-founder and editor, said that although 64-bit and multi-core processors are playing a larger role in the evolution of the supersystems, there are no signs that Linux will be dropping down the list. “Linux is the dominating OS in the supercomputing community and will keep this role,” he said. “If anything, it will only enlarge its prevalence.””
There will be some powerful and *fun* supercomputers that come from this, for sure!
//That would depend on what's in the dish.//
Well, I don't know this Petri dish, but it definitely depends on the dish :-)
Guess it's time to start using larger keys :-)
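Napkin math: 80 cores give a brute-force attacker a speedup of 80 ≈ 2^6.3, so adding about 7 bits to a symmetric key cancels the entire chip out. The exponent wins every time.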