Linked by Thom Holwerda on Thu 18th Aug 2005 16:46 UTC, submitted by Nicholas Blachford
Intel "At next week's Intel developer forum, the firm is due to announce a next generation x86 processor core. The current speculation is this new core is going too be based on one of the existing Pentium M cores. I think it's going to be something completely different."
Blachford on Crack again, OSNews as pusher!
by on Fri 19th Aug 2005 07:15 UTC

1. Expecting existing systems to get efficiency out of many smaller cores running in a hyperthreading-style mode with standard programs requires that the OS is aware of all those "processors" and can and will use them effectively. Sorry, there are too many systems that don't use virtual processors, and too darned many applications that can't reasonably be SMP-enabled, for such an architecture to make sense for a single process. Now, if you actually WANT a bunch of non-interacting things running all at once, sure, you could do that; the cost is that you'd need a HUGE amount of cache to make any single running thread even remotely worthwhile. Sure...
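
Just to make the point concrete, here is a minimal sketch (POSIX threads; the array and the summing worker are made up for illustration) of what "SMP-enabling" an application actually means: the work has to be explicitly carved up into threads before extra cores or virtual processors buy you anything.

```c
/* Minimal sketch, assuming POSIX threads and a hypothetical summing workload:
 * the program only benefits from extra cores or virtual processors because
 * the work is explicitly split across threads. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];

struct slice { int begin, end; double sum; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    s->sum = 0.0;
    for (int i = s->begin; i < s->end; i++)
        s->sum += data[i];               /* independent work, easy to split */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];
    double total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    for (int t = 0; t < NTHREADS; t++) {
        sl[t].begin = t * (N / NTHREADS);
        sl[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, worker, &sl[t]);
    }
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].sum;
    }
    printf("sum = %f, from %d threads\n", total, NTHREADS);
    return 0;
}
```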

2. While Transmeta did make their CPUs translate x86 code in software into another ISA, doing that on the fly requires a huge amount of cache to be worthwhile: such a critter would not be reasonable as a multicore device, because of the cache demands and the huge translation latencies. It doesn't matter how fast the translated interrupt handler runs if it can't be translated before the interrupt response is already too late, and where are you going to put the translated code for everything else? In more cache?
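
For what it's worth, translate-on-demand with a translation cache looks roughly like the sketch below. The names and structures are hypothetical (this is not Transmeta's actual Code Morphing Software); the point is that any guest address not already in the cache, a cold interrupt handler for instance, eats the full software translation latency before a single native instruction runs.

```c
/* Conceptual sketch of translate-on-demand with a translation cache.
 * All names and structures here are hypothetical, not Transmeta's CMS. */
#include <stdint.h>
#include <stddef.h>

#define TCACHE_SLOTS 4096

typedef void (*native_block)(void);

struct tc_entry {
    uint32_t     guest_pc;   /* original x86 address of the block     */
    native_block code;       /* translated native code, NULL if empty */
};

static struct tc_entry tcache[TCACHE_SLOTS];

static void dummy_block(void) { /* stand-in for emitted native code */ }

/* Hypothetical translator stub: a real one decodes x86 and emits native
 * code in software, which is the expensive step. */
static native_block translate_block(uint32_t guest_pc)
{
    (void)guest_pc;
    return dummy_block;
}

static native_block lookup_or_translate(uint32_t guest_pc)
{
    struct tc_entry *e = &tcache[guest_pc % TCACHE_SLOTS];

    if (e->code != NULL && e->guest_pc == guest_pc)
        return e->code;               /* hit: jump straight to native code */

    /* Miss: the slow path. For a cold interrupt handler the latency of
     * translate_block() is added to the interrupt response time. */
    e->guest_pc = guest_pc;
    e->code = translate_block(guest_pc);
    return e->code;
}

int main(void)
{
    lookup_or_translate(0x00401000u)();   /* first call translates, then runs */
    lookup_or_translate(0x00401000u)();   /* second call hits the cache       */
    return 0;
}
```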

3. Once again, while you could (in theory) store all the translated code out to main memory, that would require either doppelganger memory systems (one for data and for original x86 code that hasn't been translated, another for the translated code), which would be a horrible mess, or an OS that knows the processor does this, perhaps handling the translated code through a special type of device driver. No, that doesn't seem like a wise move for something that's supposed to be backwards compatible with what's on the shelf now.
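
As a rough illustration of how ugly that gets, here is a toy shadow-table sketch (all names and sizes are invented) for keeping a second, translated copy of each guest code page in main memory. Somebody, whether hardware or the OS, has to own this table, keep it coherent when the original pages get written, and find room in memory for all the copies.

```c
/* Toy sketch of the "doppelganger" layout dismissed above: a shadow table
 * mapping each guest code page to a second, translated copy in main memory.
 * Everything here (names, sizes, the memcpy standing in for translation)
 * is invented for illustration. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE   4096u
#define GUEST_PAGES 1024u            /* toy-sized guest code area */

struct shadow_entry {
    uint8_t *translated;             /* translated copy of the page, or NULL    */
    int      dirty;                  /* original page written since translation */
};

static struct shadow_entry shadow[GUEST_PAGES];

/* The OS or hardware must call this on every write to a guest code page,
 * or the translated copy silently goes stale. */
static void invalidate_page(uint32_t guest_page)
{
    shadow[guest_page].dirty = 1;
}

/* Return the translated copy of a guest code page, (re)building it lazily. */
static uint8_t *translated_page(uint32_t guest_page, const uint8_t *guest_mem)
{
    struct shadow_entry *e = &shadow[guest_page];

    if (e->translated == NULL) {
        e->translated = malloc(PAGE_SIZE);   /* a second copy in main memory */
        e->dirty = 1;                        /* force the first translation  */
    }
    if (e->translated != NULL && e->dirty) {
        /* Stand-in for retranslation: just copy the original bytes. */
        memcpy(e->translated,
               guest_mem + (size_t)guest_page * PAGE_SIZE, PAGE_SIZE);
        e->dirty = 0;
    }
    return e->translated;
}

int main(void)
{
    uint8_t *guest_mem = calloc(GUEST_PAGES, PAGE_SIZE);
    if (guest_mem == NULL)
        return 1;

    translated_page(3, guest_mem);   /* first use: translate page 3 */
    invalidate_page(3);              /* guest writes page 3...      */
    translated_page(3, guest_mem);   /* ...so it has to be redone   */

    free(guest_mem);
    return 0;
}
```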

4. Nicholas should spend some time actually writing heavily threaded code and prove that what he's created is *correct* as well as efficient. While the future processor Sun has announced supports a huge number of threads, it isn't likely to get decent performance unless those threads don't interfere with each other. A huge number of algorithms simply can't be made super-parallel, and those that can (in theory) may require so much locking that the overhead makes it impractical.
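
Here is a toy pthreads example of that last point (thread and iteration counts are arbitrary): every thread funnels through a single mutex, so the "parallel" loop is effectively serialized plus lock traffic, and it can easily end up slower than just doing the work in one thread.

```c
/* Toy illustration of locking overhead: all threads contend on one mutex,
 * so the work is serialized anyway and the locks only add cost. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *contended_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);    /* every thread serializes here */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, contended_worker, NULL);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("counter = %ld (expected %d)\n", shared_counter, NTHREADS * ITERS);
    return 0;
}
```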

I think I'll stop there: his article is pure entertainment, much like his Cell article was with its wild claims. He needs to stop taking Star Trek technology as science and start treating it as fiction until proven otherwise ;)
