Linked by Thom Holwerda on Fri 28th Oct 2005 11:17 UTC
Herb Sutter, a software architect from Microsoft, gave a speech yesterday at In-Stat/MDR's Fall Processor Forum. Addressing a crowd mostly consisting of hardware engineers, he talked about how the software world was ill-prepared to make use of the new multicore CPUs coming from Intel and AMD.
Thread beginning with comment 52696
rayiner (Member since: 2005-07-06)

I must respectfully disagree about the difficulty in writing concurrent code; although more difficult than writing linear code

I would say it's much more difficult than writing linear code. Take a look at the effort that has been put into making Linux, BSD, etc., highly scalable on SMP machines. You need sophisticated locking strategies even to do relatively simple things. Moreover, the resulting code is not very amenable to closed-form analysis, since the theory regarding concurrent computations is lacking.
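To make that concrete, here is the sort of "relatively simple thing" I mean, sketched in modern C++ for brevity (the Account type and transfer function are hypothetical examples of mine, not code from any of those kernels). Even moving money between two accounts forces a lock-ordering discipline, because one thread locking a-then-b while another locks b-then-a deadlocks:

    #include <mutex>

    // Hypothetical account type; illustrative only.
    struct Account {
        std::mutex m;
        long balance = 0;
    };

    // Naive code takes from.m then to.m, which deadlocks against a
    // concurrent transfer(to, from, ...). std::scoped_lock (C++17)
    // acquires both mutexes with a deadlock-avoidance algorithm.
    // Assumes &from != &to; locking the same mutex twice is an error.
    void transfer(Account& from, Account& to, long amount) {
        std::scoped_lock lock(from.m, to.m);
        from.balance -= amount;
        to.balance += amount;
    }

And that is just the two-object case; scaling the same discipline to a kernel's worth of shared structures is where the sophistication comes in.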

With a few exceptions, C++ is also not a bad language to write concurrent code in.

It's no worse for writing concurrent code than most existing languages. I don't think that's good enough.

If it is approached with the idea that no more than one thread "owns" a piece of memory (generally exclusive right to update or delete it), the management is relatively straightforward and far more efficient than the two alternatives given. The greatest problem I have found is managing thread exit; threads must not exit while owning any memory.

How easy is it to impose that discipline on an entire program without language support? Conceptually, manual memory management is simple too: remember to free every object you've allocated. The fact that almost all major software has memory leaks suggests that it's not as easy as it sounds.
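To see why, here is roughly what such an ownership hand-off looks like, sketched in modern C++ (the HandoffQueue type and its names are mine, purely illustrative). std::unique_ptr at least documents the transfer, but nothing stops a careless producer from stashing a raw pointer and touching the message after giving it away; the compiler will not catch that, which is exactly the discipline problem:

    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <utility>

    // One-way hand-off: the producer gives up ownership on push(); the
    // consumer becomes the sole owner on pop() and must free the message.
    template <typename T>
    class HandoffQueue {
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::unique_ptr<T>> q;
    public:
        void push(std::unique_ptr<T> msg) {          // ownership moves in
            {
                std::lock_guard<std::mutex> lock(m);
                q.push(std::move(msg));
            }
            cv.notify_one();
        }
        std::unique_ptr<T> pop() {                   // ownership moves out
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });
            auto msg = std::move(q.front());
            q.pop();
            return msg;
        }
    };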

If a reasonable threading model is started with, most problems yield relatively easily.

Some problems yield more easily than others. Our simulation software at work is highly concurrent, and enforcing concurrency in it isn't particularly complicated. However, that's because the problem domain is naturally concurrent (i.e., simulating 10,000 portable radios on a battlefield). Even then, I've still spent a few weekends debugging race conditions. With other things, compiler algorithms for example, there is just so much interaction between threads that enforcing any kind of discipline is complicated.
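The races that eat those weekends are rarely exotic, either. Something as small as this sketch (modern C++, the counter is purely illustrative, and the lack of synchronization is deliberate) passes most test runs and is still wrong:

    #include <cstdio>
    #include <thread>

    long hits = 0; // shared and deliberately unsynchronized: a data race

    // Each ++hits is a read-modify-write. Two threads can read the same
    // value and both store value+1, silently losing an update.
    int main() {
        auto work = [] { for (int i = 0; i < 1000000; ++i) ++hits; };
        std::thread a(work), b(work);
        a.join();
        b.join();
        std::printf("%ld\n", hits); // usually less than 2000000
        return 0;
    }

Worse, it can come out right on any given run, which is exactly why this class of bug survives testing.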

I have worked extensively with highly concurrent software for over a decade on Unix platforms, using shared-memory concurrency in interprocess and intraprocess contexts simultaneously; much of that effort was part of a DBMS engine, and I learned to do it without any specialized training other than reading and experimenting.

With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.


I think the gaming industry is interesting to consider in this context. They've clearly got some good programmers, and they certainly have a need to make maximum possible use of people's hardware. Yet even Carmack didn't think it was worth the trouble to make Doom III multithreaded, even though he clearly foresaw the need and tried supporting SMP in Quake III. If six months were enough for someone to learn to write good multithreaded code, wouldn't developers be less frightened of the multicore future than they seem to be?
