Linked by Thom Holwerda on Thu 3rd Dec 2009 22:25 UTC
Intel "Intel's experimental 48-core 'single-chip cloud computer' is the latest step in multicore processing. Intel officials say such a chip will greatly improve performance and efficiency in future data centers, and will lead to new applications and interfaces between users and computers. Intel plans to have 100 or more chips in the hands of IT companies and research institutes next year to foster research into new software. Intel, AMD and others are rapidly growing the number of cores on a single silicon chip, with both companies looking to put eight or more cores on a chip in 2010."
Flash
by Cody Evans on Thu 3rd Dec 2009 23:33 UTC
Cody Evans
Member since:
2009-08-14

...and flash video will still be slow as hell...

Reply Score: 5

RE: Flash
by flanque on Fri 4th Dec 2009 07:10 UTC in reply to "Flash"
flanque Member since:
2005-12-15

Only on Linux.

Reply Score: 1

RE: Flash
by cerbie on Fri 4th Dec 2009 07:11 UTC in reply to "Flash"
cerbie Member since:
2006-01-02

Intel can't fix a problem created and prolonged by Adobe.

Reply Score: 2

RE[2]: Flash
by kaiwai on Fri 4th Dec 2009 13:34 UTC in reply to "RE: Flash"
kaiwai Member since:
2005-07-06

Intel can't fix a problem created and prolonged by Adobe.


He wasn't blaming Intel; read the damn post. It was pretty obvious he was talking about the fact that no matter how much power is thrown at Flash, it still sucks.

Reply Score: 4

RE[3]: Flash
by cerbie on Sat 5th Dec 2009 04:07 UTC in reply to "RE[2]: Flash"
cerbie Member since:
2006-01-02

He said Flash video, which works fine for nVidia users, and Adobe could make it work fine for everyone using Flash if they so desired.

If it were about throwing power at it, that would make no sense, as these 48-core CPUs would be pitiful at that kind of work even if Flash didn't suck.

Reply Score: 2

27 million transistors per core
by sbergman27 on Fri 4th Dec 2009 01:25 UTC
sbergman27
Member since:
2005-07-24

Note that each core is roughly comparable to a Pentium III, and that the whole thing has about twice as many transistors as Nehalem (48 x 27 million comes to about 1.3 billion, versus roughly 730 million for a quad-core Nehalem).

It's flexible, but not all that powerful.

Reply Score: 3

flanque Member since:
2005-12-15

It's about parallel processing rather than brute force.

Reply Score: 2

RE: 27 million transistors per core
by Cytor on Fri 4th Dec 2009 08:18 UTC in reply to "27 million transistors per core"
Cytor Member since:
2005-07-08

I've read that it's based on the P55C architecture, so that would make it more like a massively parallel Pentium MMX.

Isn't that P5 architecture essentially what the Atom is based on?

Reply Score: 1

viton Member since:
2005-08-09

I've read that it's based on the P55C architecture,

You couldn't have read about this chip, because there is no public info apart from this announcement. It is not Larrabee but a different CPU, although they could reuse the core.

Isn't that P5 architecture essentially what the Atom is based on?

Atom has nothing to do with P5. It is a totally different design.

Reply Score: 1

Cytor Member since:
2005-07-08

Well, the German online magazine heise.de says these are similar to P55C cores. They do not name sources, though.
http://www.heise.de/newsticker/meldung/Intel-stellt-Single-Chip-Clo...

Reply Score: 2

The problem is the software
by ggeldenhuys on Fri 4th Dec 2009 08:49 UTC
ggeldenhuys
Member since:
2006-11-13

The major problem now is the software. Most software out there today does not take multi-core CPUs into account, so it runs as if on a single core anyway (mostly single-threaded applications).

Yes, the OS can do some scheduling, but it's down to the applications themselves to take charge of multiple cores and multi-threading.
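
As a rough sketch of what "taking charge" means (invented for illustration, not from any particular application), here is a C program that explicitly divides its work across cores with POSIX threads; left single-threaded, the same sum would occupy exactly one core no matter how many the chip offers:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4            /* say, one worker per core */
    #define N 1000000

    static double data[N];
    static double partial[NTHREADS];

    /* Each worker sums its own slice of the array. */
    static void *worker(void *arg) {
        long id = (long)arg;
        long lo = id * (N / NTHREADS);
        long hi = lo + N / NTHREADS;
        double sum = 0.0;
        for (long i = lo; i < hi; i++)
            sum += data[i];
        partial[id] = sum;
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < N; i++)
            data[i] = 1.0;
        /* The application, not the OS scheduler, decides how the work splits. */
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);
        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("sum = %.0f\n", total);   /* expect 1000000 */
        return 0;
    }

(Compile with gcc -std=c99 -pthread. Raising NTHREADS is the only change needed to spread the same work over more cores.)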

Reply Score: 1

RE: The problem is the software
by Kochise on Fri 4th Dec 2009 09:16 UTC in reply to "The problem is the software"
Kochise Member since:
2006-03-03

Erlang @ http://www.erlang.org/ does exactly that kind of thing - massive multi-core programming - with ease and elegance ;)

Kochise

Reply Score: 1

RE[2]: The problem is the software
by Lennie on Sat 5th Dec 2009 00:09 UTC in reply to "RE: The problem is the software"
Lennie Member since:
2007-09-22

I've just not been able to get used to the syntax yet.

Reply Score: 2

RE: The problem is the software
by Laurence on Fri 4th Dec 2009 14:25 UTC in reply to "The problem is the software"
Laurence Member since:
2007-03-26

The major problem now is the software. Most software out there today does not take multi-core CPUs into account, so it runs as if on a single core anyway (mostly single-threaded applications).

Yes, the OS can do some scheduling, but it's down to the applications themselves to take charge of multiple cores and multi-threading.


This won't be targeted at general consumers (for whom, for now, most new computers are sufficiently powerful).

This will be targeted at mainframe users who already do complicated distributed processing, or at server admins who want to consolidate several hardware resources into one host running several virtual machines (each assigned two or more cores; see the sketch below).

For these users, this type of CPU is the future.
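
As a loose illustration of what "assigning cores" means (a minimal Linux-specific sketch; a real hypervisor pins virtual CPUs through its own mechanisms, but the idea is the same), a process can restrict itself to a subset of cores with sched_setaffinity:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);   /* allow core 0 */
        CPU_SET(1, &mask);   /* allow core 1 */
        /* pid 0 means the calling process: pin ourselves to cores 0 and 1. */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to 2 of the machine's cores\n");
        return 0;
    }

On a 48-core part, a host could carve out two such cores per guest and still have room for two dozen guests.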

Edited 2009-12-04 14:25 UTC

Reply Score: 2

RE: The problem is the software
by sbenitezb on Fri 4th Dec 2009 15:33 UTC in reply to "The problem is the software"
sbenitezb Member since:
2005-07-22

The major problem now is the software. Most software out there today does not take multi-core CPUs into account, so it runs as if on a single core anyway (mostly single-threaded applications).


Depends on what software you're talking about. I guess a 48-core CPU isn't intended for the flash-playing/web-browsing user, but for servers and hardcore desktops (designers, CAD, gaming). Most desktop and single-threaded applications won't ever need so many cores.

Reply Score: 2

umccullough Member since:
2006-01-26

Most desktop and single-threaded applications won't ever need so many cores.


Except when they're loaded with malware - in which case the more cores there are to run the malware in the background, the better the user experience will be ;)

Reply Score: 3

Lennie Member since:
2007-09-22

I have a feeling more malware does not make for a better user experience.

Reply Score: 2

tylerdurden Member since:
2009-03-17

I take it Intel branding this as a "research chip for cloud computing" was not enough of a hint that this was not intended to be a desktop chip?

Reply Score: 1

RE: The problem is the software
by tylerdurden on Sun 6th Dec 2009 23:29 UTC in reply to "The problem is the software"
tylerdurden Member since:
2009-03-17

... and that is why this is a research vehicle. Jesus, do some of you even bother to read the article before playing Capt. Obvious?

Reply Score: 1

I wonder what Intel could come up with...
by Tuishimi on Fri 4th Dec 2009 15:08 UTC
Tuishimi
Member since:
2005-07-06

...if they took a year off from their current chip designs and tried to innovate a little - maybe a chip that is superconductive, or perhaps optical or chemical in nature, instead of using electricity as the driving force.

Reply Score: 2

One word: Stream
by deathshadow on Sat 5th Dec 2009 22:16 UTC
deathshadow
Member since:
2005-07-12

As someone who's writing C code that compiles to CUDA, you'll excuse me if the idea of a mere 48 cores on one die isn't exactly blowing my skirt up... The 'crappy' little GeForce 8800 GTS driving my secondary displays has more cores than that; my primary GTX 260 has 216 stream cores, and that new ATI 5970 everyone's raving about has what, 1600 stream processor cores per die? Sure, they are "stream processor cores", but each is still basically a processor unto itself (albeit a very data-oriented one), as evidenced by how much can be done with them from CUDA.

I really see this as Intel's response to losing share in the high-performance arena to things like CUDA - much like ATI's "Stream", if that ever becomes anything more than stillborn (I don't know ANYONE actually writing code to support ATI Stream) - or, more specifically, nVidia's Tesla.

Hell, look at the C2050 Tesla: 512 thread processors on one die. GPGPUs are threatening Intel's relevance. Increasingly I suspect that, as more and more computing is offloaded via technologies like CUDA, we will see x86 and even normal RISC CPUs relegated to little more than legacy support and glorified traffic-cop and I/O duty, with parallel-processing GPGPUs handling the real grunt work.
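
For anyone who hasn't seen it, this is the shape of the "C code that compiles to CUDA" being described - a generic textbook vector-add, not the poster's actual code. Note how the CPU plays exactly the traffic-cop role argued for above: it stages data and launches the kernel, while the stream cores each handle one element:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Runs once per element, spread across the GPU's stream cores. */
    __global__ void vadd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        /* CPU as traffic cop: launch and collect; the GPU does the arithmetic. */
        vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("hc[0] = %.1f\n", hc[0]);   /* expect 3.0 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

(Build with nvcc. One launch fans a million additions out across however many stream cores the card has.)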

We have historical precedent for this approach, too - look at the move from 8-bit to 16-bit with the Z80 and 68K. The Trash-80 Model 16 used a Z80 to handle the keyboard, floppy, and ports, as well as to run older Model II software, while the included 68K handled the grunt work of running actual userspace programs. The Lisa and original Mac were similarly divided between a main CPU and support processors, though theirs weren't used for legacy software... Or look a few years later at the game-console world, where the Sega Genesis was a 68K plus a Z80: the Z80 drove the sound chips and provided legacy support back to the SMS/SG-1000, while new games used the 68K for their userland.

As such, Intel is late to the party on using lots of simpler processors on one die - time will tell whether that's too little, too late. If they work it out so it can be mounted alongside x86 and given a compatibility layer for CUDA code (which nVidia more than approves of other companies doing), they might make a showing.

If not, it's going to end up just as stillborn as ATI's "Stream".

Reply Score: 2