Linked by Thom Holwerda on Mon 1st May 2006 19:56 UTC, submitted by Nicholas Blachford
Hardware, Embedded Systems "Can you imagine getting a new PC and finding your software runs no faster than before? You probably can't imagine it running slower. For some types of software however, that is exactly what is going to happen in the not too distant future. Faster processors running slower may sound bizarre but if you're using certain types of data structures or code on large scale (8+) multicore processors it might actually happen. In fact it might already happen today if you were to run legacy software on one of today's processors which feature a large number of cores."
Thread beginning with comment 120037
virtualization
by looncraz on Mon 1st May 2006 20:42 UTC
looncraz
Member since:
2005-07-24

AMD is essentially preparing reverse Hyper-Threading.

That is, the CPU will feature an ultra-high-speed, complex scheduler (much like a super-micro-kernel) that will use Out-of-Order execution techniques to spread single threads over multiple cores within the processor.

For some odd reason many people think this is too demanding a task and will cause less speedup than you might expect (say, a 50% improvement, factoring in some basic sensibility). While there is some credibility to those arguments, I have found that similar technologies (albeit playing slightly different roles) are already in play in nearly every computer in the world. CPUs already have OOE (Out-of-Order Execution), meaning instructions are not executed in the order they were received. So what can you do with the instructions not currently being executed, the older ones that we know cannot depend on later instructions for proper execution within the processor? Simple: throw those older instructions onto other cores to get done instead of having them sit around.

BUT, we can take it one step further: when instructions come in, send them to the first available core, re-order as usual, and be done with it. Externally the CPU would appear to be a single core executing mighty quickly. No need for special compilers or software modifications to see benefits from multiple cores.
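The idea above can be sketched as a toy scheduler: an instruction whose source registers are already computed is independent and can be issued to any free core in the current cycle, while dependent instructions wait. The `Instr` format, register names, and core model here are all invented for illustration; real hardware does this at the micro-op level, not in software.

```python
from collections import namedtuple

# A toy instruction: writes `dest`, reads the registers in `srcs`.
Instr = namedtuple("Instr", ["dest", "srcs"])

def schedule(instrs, num_cores):
    """Greedily pack independent instructions into per-cycle core slots."""
    cycles = []          # each entry is the list of instructions issued that cycle
    done = set()         # registers whose values are already computed
    pending = list(instrs)
    while pending:
        issued, produced = [], []
        for ins in list(pending):
            # Issue only if every source is ready and a core is still free.
            if len(issued) < num_cores and all(s in done for s in ins.srcs):
                issued.append(ins)
                produced.append(ins.dest)
                pending.remove(ins)
        if not issued:
            raise RuntimeError("unsatisfiable dependence")
        done.update(produced)   # results become visible next cycle
        cycles.append(issued)
    return cycles

# r1 and r2 are independent, so they issue together; r3 needs both.
prog = [Instr("r1", []), Instr("r2", []), Instr("r3", ["r1", "r2"])]
print(len(schedule(prog, num_cores=2)))  # 2 cycles instead of 3
```

With one core the same program takes three cycles; with two it takes two, which is the whole pitch: extract parallelism from a single thread without recompiling anything.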

However, complexity is only added when you provide programmable access to each core individually. This can be handled much like SMP, except you now have to drop the single-CPU virtualization so that SMP kernels and the like work as intended.

To get around this, you blend the two together: still throw instructions at all cores for simultaneous (or nearly simultaneous, more like overlapping) execution, but also allow direct access to those cores. There will be some hardware locking involved, I'm sure, but the benefits would be worth the relatively small effort.

In fact, a few minor tweaks could create processors for different purposes. Use full virtualization (maybe a BIOS switch?) when you're running outdated single-threaded software and need the best performance you can muster; use JUST virtual SMP for file servers and other applications where the instructions, while simple, are wildly threaded and optimized for SMP due to the nature of the work being done. And, of course, have both enabled for a good mixture of single-threaded and threaded performance.

Now, think of a CPU with 8 cores. Say you're running Windows Vista and imagine a 2-CPU limit: if you virtualize 8 CPUs, it will not help you out much, since you will effectively be using just two. BUT, why not allow the virtualization of 2 CPUs with 4 cores each?
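The arithmetic of that last idea is easy to sketch: present the 8 physical cores to the OS as 2 virtual packages of 4 cores each, so a 2-CPU license limit still sees all the silicon. The function name and the grouping scheme are assumptions for illustration, not anything AMD announced.

```python
def virtualize(cores, visible_cpus):
    """Split a list of physical core IDs into `visible_cpus` virtual packages."""
    per_package = len(cores) // visible_cpus
    return [cores[i * per_package:(i + 1) * per_package]
            for i in range(visible_cpus)]

# 8 physical cores shown to the OS as 2 CPUs of 4 cores each.
print(virtualize(list(range(8)), visible_cpus=2))
# [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The OS counts 2 CPUs for licensing, yet every core is still schedulable underneath.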

See where I'm going? Can't wait for the fun to begin.

--The loon

(With my fifty cents' worth)

Reply Score: 5

RE: virtualization
by snozzberry on Tue 2nd May 2006 18:17 in reply to "virtualization"
snozzberry Member since:
2005-11-14

I wanna mod that down just so I can mod it back up again to 5. Nifty points, sir.

Reply Parent Score: 1