Linked by Thom Holwerda on Tue 26th Sep 2006 23:14 UTC
Quad-core processors are only the beginning of what a revitalized Intel has to offer, the company's top executives said on Sept. 26. The chip maker will deliver its first quad-core processors - chips that incorporate four processor cores each - in November for both desktops and servers, CEO Paul Otellini said in the opening keynote of the Intel Developer Forum. The quad-core chips themselves will offer up to 70 percent greater performance on desktops and 50 percent on servers.
improvements ?
by kevinlb on Wed 27th Sep 2006 11:02 UTC
kevinlb
Member since:
2006-08-09

You win something with X cores if you run at least X-1 heavy applications, because your system will still stay responsive. But for today's use, one core is sufficient for most users; dual core is useful for geeks (ripping a DVD while playing a game) and for anybody who needs a very responsive system. Four cores and beyond, without a specific workload, is just a waste of CPU and nothing else. It will be useful for servers, though.

(This comment assumes that most applications are not efficiently threaded.)

Edited 2006-09-27 11:03

Reply Score: 2

RE: improvements ?
by borat on Wed 27th Sep 2006 11:48 in reply to "improvements ?"
borat Member since:
2005-11-11

I agree. I think dual core is a pretty great thing even for standard desktops, as it can make your system much more responsive when multitasking. However, with four cores (and fast cores at that) in today's desktops, I think a lot of people are going to fire up their multithreaded HD video encoder and wonder why each core is only running at 60% load. They will be hitting other bottlenecks: the bus will become saturated with disk I/O traffic and memory accesses. I'm sure someone can crunch the numbers and get a rough estimate of the impact of this phenomenon.

The new CSI bus from Intel is supposed to be their answer to this problem, but it doesn't appear to be anywhere near desktops yet; it's slated to first be available for Xeons and Itaniums sometime in 2007.
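To put very rough numbers on the 60%-load point, Amdahl's law gives a quick estimate: if some fraction of the encode is serialized on the bus or on disk, the speedup from four cores is capped accordingly. A minimal sketch in Python, with made-up serial fractions rather than measured ones:

# Rough Amdahl's-law estimate of quad-core speedup when part of the work
# is serialized on a shared resource (front-side bus, disk, memory).
# The serial fractions are illustrative guesses, not measurements.

def amdahl_speedup(serial_fraction, cores):
    # speedup over one core when serial_fraction of the work cannot be parallelized
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial in (0.05, 0.20, 0.40):
    s = amdahl_speedup(serial, cores=4)
    print("serial %.0f%%: speedup %.2fx, average core load about %.0f%%"
          % (serial * 100, s, s / 4 * 100))

A workload that is only 20 percent serialized already drops four cores to roughly 60 percent average utilization, which is the kind of figure mentioned above.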

Reply Parent Score: 1

RE[2]: improvements ?
by stare on Wed 27th Sep 2006 16:59 in reply to "RE: improvements ?"
stare Member since:
2005-07-06

"However, with four cores (and fast cores at that) in today's desktops, I think a lot of people are going to fire up their multithreaded HD video encoder and wonder why each core is only running at 60% load."

"Test results with the software packages Main Concept with H.264 encoding and the WMV-HD conversion make this very clear. We noticed performance jumps of up to 80% when compared to the Core 2 Duo at the same clock speed (2.66 GHz)."

http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/pa...

"They will be hitting other bottlenecks: the bus will become saturated with disk I/O traffic and memory accesses. I'm sure someone can crunch the numbers and get a rough estimate of the impact of this phenomenon."

"Our test results reveal that a FSB1333 (true 333 MHz) does not entail advantages - at least not based on the tests at a CPU clock speed of 2.66 GHz. At CPU clock speeds of 3.0 GHz and above, and memory speeds beyond the DDR2-1000 mark (true 500 MHz), the FSB1333 shows what it is capable of.

One should not forget - viewed from the perspective of the Pentium 4 - that the Core 2 micro-architecture offers a few features to ease the strain on memory access, whereby higher FSB or memory speeds barely register any speed advantages."

http://www.tomshardware.com/2006/09/10/four_cores_on_the_rampage/pa...

Reply Parent Score: 1

RE[2]: improvements ?
by Earl Colby pottinger on Wed 27th Sep 2006 21:37 in reply to "RE: improvements ?"
Earl Colby pottinger Member since:
2005-07-06

The problem is that you are assuming large data flows to each CPU; there are programs where the more CPUs you have, the less data needs to be sent to each CPU.

Example: The other day I wanted to resize a collection of pictures I have (using TAR on BeOS). A quick rough resize is done as fast as I move the mouse, but the smooth resize takes a few seconds per picture. There is no reason the picture could not be broken up into 80 or more overlapping pieces and each piece sent to a separate CPU for processing. When you look at the amount of data moved into the CPUs, there is not that much; most of the pictures were less than a meg in size.
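A minimal sketch of how that split could look, using Python's standard multiprocessing pool to stand in for the many CPUs; smooth_resize_strip is a hypothetical placeholder for the real resize kernel, and the worker count and overlap are just example values:

# Split one picture into horizontal strips (with a small overlap so the
# smoothing filter can blend across strip borders) and process each strip
# on its own worker. smooth_resize_strip is a stand-in for the real filter.
from multiprocessing import Pool

def smooth_resize_strip(strip):
    return strip  # placeholder for the actual per-strip smooth-resize work

def parallel_resize(image_rows, workers=80, overlap=2):
    strip_height = max(1, len(image_rows) // workers)
    strips = []
    for top in range(0, len(image_rows), strip_height):
        lo = max(0, top - overlap)
        hi = min(len(image_rows), top + strip_height + overlap)
        strips.append(image_rows[lo:hi])
    with Pool(processes=workers) as pool:
        return pool.map(smooth_resize_strip, strips)

if __name__ == "__main__":
    picture = [[0] * 1000 for _ in range(1000)]  # stand-in for a roughly 1 MB picture
    resized_strips = parallel_resize(picture, workers=8)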

Note: the following numbers are just examples, not real-world measurements.

If a modern single CPU processes a picture a second, we have to move 2 MBytes/sec on the bus. If we have 80 CPUs of the same power, we have to move 160 MBytes/sec to keep up. That is not even close to what a modern bus can do. For image processing, the code for any one function is relatively small, so it will run entirely out of the local cache.
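Worked out explicitly (same example figures as above, not measurements):

# Back-of-the-envelope bus-traffic check using the example figures above.
picture_mbytes   = 2      # data moved per picture (in plus out), roughly
pictures_per_sec = 1      # what one CPU can smooth-resize per second
cpus             = 80

print(picture_mbytes * pictures_per_sec * cpus, "MBytes/sec")  # 160 MBytes/sec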

I don't mean to say the above are true real-world figures, but I do know that in graphics and photo work, doing a lot of simple functions in parallel on chunks of data makes a lot of sense.

And there are a lot of graphics and photo people out there.

Reply Parent Score: 1