Linked by Thom Holwerda on Mon 28th Sep 2009 23:15 UTC, submitted by poundsmack
Microsoft It seems like Microsoft Research is really busy these days with research operating systems. We had Singularity, a microkernel operating system written in managed code, and late last week we were acquainted with Barrelfish, a "multikernel" system which treats a multicore system as a network of independent cores, using ideas from distributed systems. Now, we have a third contestant, and it's called Helios.
Permalink for comment 386832

This sentence isn't correct: a RISC is a kind of instruction set (visible to the compiler) which allows efficient hardware usage by the compiler.
An x86 compiler cannot access the 'core RISC' inside an x86 CPU, so it's not a 'core RISC'; it's just that RISC CPUs and x86 CPUs share a lot of silicon.

You're splitting hairs. To paraphrase what you're saying:

The core of the modern x86 is not "RISC"; it just uses the same design as "RISC" CPUs. It's a design philosophy in my eyes: instead of a lot of complex instructions, you use a CPU that has a small set of fast instructions, and you're depending on your compiler to get it right.

Intel and AMD processors have logic that takes x86 instructions and breaks them down into RISC-like micro-operations that are then executed by the rest of the processor. You can think of it as a hardware just-in-time compiler, or something like that.
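To make the idea concrete, here is a toy sketch of that kind of translation. The instruction format, the "load"/"add" micro-op names, and the temporary register are all illustrative inventions, not Intel's or AMD's actual internal encoding:

```python
# Toy CISC-to-micro-op decoder: a single x86-style instruction that both
# reads memory and computes gets split into simpler RISC-like steps.
# Instruction format and micro-op names are hypothetical.
def decode(cisc_instr):
    op, dst, src = cisc_instr
    if op == "add" and src.startswith("["):
        addr = src.strip("[]")
        return [
            ("load", "tmp", addr),   # memory read becomes its own micro-op
            ("add", dst, "tmp"),     # followed by a register-register add
        ]
    return [cisc_instr]             # simple instructions pass through as-is

# 'add eax, [rbx]' turns into two micro-ops
print(decode(("add", "eax", "[rbx]")))
```

A real decoder handles the full instruction set and also fuses micro-ops back together in places, but the basic shape is the same: one visible CISC instruction, several simple internal operations.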

Uh? You can also use multiple cores to accelerate a single piece of software, but it's difficult to program, yes.

So you're agreeing with me then.

Probably, but note that GPUs now have cheap, huge memory bandwidth (thanks to their fixed on-board memory configuration) that the GCPU won't have at first.
It's possible to use different algorithms that need less memory bandwidth, but first-generation GCPUs won't be competitive with high-end GPUs.

Yes, memory bandwidth is an issue with IGPs.

But the problem with the current design is that with more and more applications using the GPU as a "GPGPU", you will never really have enough dedicated memory on it. On a modern composited desktop you're looking at massive amounts of video RAM needed to cache all those application window textures and whatnot.

It's the same reason why, on a modern system with 8 GB of RAM, OSes still insist on having swap files and swap partitions: to make things go faster you want to use as much RAM as possible.

So all that latency stuff adds up.

So instead of burning hundreds of thousands of cycles on BOTH your CPU and GPU shoveling megabytes' worth of data back and forth over PCI Express during normal operation, you end up with all the cores sharing the same cache.

Then, instead of spending 200 dollars or whatever on a dedicated external video card, they can spend that money on increasing the memory bandwidth from main memory to the processor, making all that fast dedicated video RAM part of your normal main memory.


Imagine an application that uses GPGPU instructions and CPU instructions in the same execution loop.

Since the GPGPU is only fast at certain things, it would be desirable to be able to easily program using both the GPU and the CPU.

So with a dedicated separate video card, each time you execute a loop in that program you're burning through many more cycles just moving data back and forth over the PCI Express bus than what it actually costs to execute it.

By integrating the GPU and the CPU into the same processor as separate cores, and then using the same memory and cache for both, a much slower CPU and GPU could massively outperform an otherwise faster dedicated video card for that sort of thing.

And be much easier to program for...
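The round-trip cost above can be sketched with a back-of-envelope model. The bandwidth numbers below are rough, made-up illustrations (PCIe-era and shared-memory figures), not measurements of any real hardware:

```python
# Toy model: time spent per loop iteration just moving data, comparing a
# discrete card (copy to GPU and back over PCIe) against an integrated
# design (read once from shared memory). Bandwidths are illustrative.
PCIE_GBPS = 8.0          # hypothetical PCIe link, one direction, GB/s
SHARED_MEM_GBPS = 25.0   # hypothetical shared CPU/GPU memory, GB/s

def transfer_seconds(megabytes, gbps):
    """Seconds to move the given number of megabytes at the given GB/s."""
    return (megabytes / 1024.0) / gbps

mb = 64  # working set shuttled each loop iteration

discrete = 2 * transfer_seconds(mb, PCIE_GBPS)      # to the card and back
integrated = transfer_seconds(mb, SHARED_MEM_GBPS)  # one pass over shared memory

print(f"discrete:   {discrete * 1e3:.2f} ms per iteration")
print(f"integrated: {integrated * 1e3:.2f} ms per iteration")
```

Even with these generous assumptions the discrete path spends several times longer per iteration on data movement alone, which is the point the comment is making: for tightly coupled CPU/GPU loops, the bus is the bottleneck, not raw compute.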

Edited 2009-09-29 15:32 UTC
