Linked by Thom Holwerda on Thu 18th Aug 2005 16:46 UTC, submitted by Nicholas Blachford
Intel "At next week's Intel developer forum, the firm is due to announce a next generation x86 processor core. The current speculation is this new core is going to be based on one of the existing Pentium M cores. I think it's going to be something completely different."
Thread beginning with comment 19769
hw/sw dialogue
by butters on Thu 18th Aug 2005 21:48 UTC

In the beginning, software was tough to write, mostly because the hardware was tough to program. Then in the 90s hardware got really good in a hurry. Memory became plentiful, execution units became blazing fast, and compilers for higher-level object-oriented languages became efficient on these platforms.

Software became very easy to write. Anyone could learn to program, even business and creative writing majors. In most cases, you didn't need to know anything about the underlying architecture or the limitations of the hardware.

Then transistors got really small, frequency went sky high, and everyone went out and got 350W power supplies. However, this was not helping the burgeoning pervasive computing and mobile computing markets.

So hardware companies, looking to satisfy these new markets while pulling chip yields and profit margins out of the danger zone, take a different approach. They say, ok, software developers, you've had it pretty easy for a while, but we're going to send you a series of warnings that the era of multithreading is upon us. You'll have 5-10 years to understand that there are limits to how fast you can do a single task, and unless you divide your code into multiple simpler tasks, your software will not be able to scale.

Software developers say, that's fine, hardware designers, as long as your chips can figure out how to split our code up for us, we'll be happy. One hardware designer came out with Hyperthreading, an attempt to appease the simple-minded software developers by automagically extracting thread-level parallelism from their code. The results were mixed, because most code was written with the expectation that stuff will run serially, the way normal people think in their heads. Sometimes this HT technology actually made code slower, and the software developers were not pleased, instead preferring another hardware designer who figured out how to reduce memory latency a little.

Hardware designers all over the world continued singing the praises of multithreading, talking about multicore processors, virtualization, and distributed computing. Software designers continued to say that they didn't want multithreading. They wanted fast singlethreaded performance.

At some point in time, software guys are going to have to realize that there is no such thing as dramatically faster singlethreaded performance. Not within a reasonable power envelope and die area, at least. Multithread or get left behind.
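For illustration, here is a minimal Python sketch of what "divide your code into multiple simpler tasks" means in practice (the function and names are just illustrative, not from any particular library; note that in CPython the GIL means threads like these only give a real speedup for I/O-bound or GIL-releasing work, so CPU-bound code would use processes instead, but the structure is the same):

```python
import threading

def parallel_sum(data, n_workers=2):
    """Sum a list by splitting it into chunks, one per worker thread."""
    chunk = (len(data) + n_workers - 1) // n_workers
    results = [0] * n_workers  # one slot per worker, no sharing between them

    def worker(i):
        # Each thread sums its own slice independently.
        results[i] = sum(data[i * chunk:(i + 1) * chunk])

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for all workers before combining
    return sum(results)

print(parallel_sum(list(range(100))))  # → 4950, same as a serial sum
```

The point of the structure is that each worker touches only its own slice and its own result slot, so the tasks really are "multiple simpler tasks" with no locking needed until the final combine step.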

Reply Score: 2

RE: hw/sw dialogue
by nimble on Fri 19th Aug 2005 06:12 in reply to "hw/sw dialogue"

"One hardware designer came out with Hyperthreading, an attempt to appease the simple-minded software developers by automagically extracting thread-level parallelism from their code."

I think you've misunderstood hyper-threading: it's simply a way to run two threads on a single core, invented to utilise the ridiculously long Netburst pipeline a bit better.

To the software developer it looks much the same as a two processor machine.

Extracting thread-level parallelism is up to the programmer, very occasionally with a bit of help from a clever compiler.

Reply Parent Score: 2