Linked by Thom Holwerda on Fri 28th Oct 2005 11:17 UTC
Hardware, Embedded Systems
Herb Sutter, a software architect from Microsoft, gave a speech yesterday at In-Stat/MDR's Fall Processor Forum. Addressing a crowd mostly consisting of hardware engineers, he talked about how the software world was ill-prepared to make use of the new multicore CPUs coming from Intel and AMD.
Thread beginning with comment 52631
rayiner Member since:
2005-07-06

Funny, those are the issues Sutter (and others) are bringing to the C++ committee--that the language will have to address threads explicitly, even if it harms the single-threaded case.

I'm not talking about threads. Native thread support in C++ won't make parallel code any easier to write than it is in Java. I'm talking about basing the language on an explicitly concurrent model of computing. The precise computing model to use is still an area of intense research, but the join calculus (as used in Polyphonic C#) seems to be a popular one.
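To give a flavor of what I mean, here is a crude imitation of one Polyphonic C# "chord" in plain C++ with pthreads (a sketch only --- the names are invented, and the real join calculus is far more general): the body runs only when a message has arrived on *both* channels.

    #include <pthread.h>
    #include <queue>
    #include <cstdio>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static std::queue<int> args;     // channel 1: carries data
    static int ready_tokens = 0;     // channel 2: carries bare signals

    void send_arg(int v) {
        pthread_mutex_lock(&lock);
        args.push(v);
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    void send_ready() {
        pthread_mutex_lock(&lock);
        ++ready_tokens;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    void* chord_body(void*) {
        pthread_mutex_lock(&lock);
        while (args.empty() || ready_tokens == 0)  // the "join": wait for both
            pthread_cond_wait(&cond, &lock);
        int v = args.front(); args.pop();
        --ready_tokens;
        pthread_mutex_unlock(&lock);
        std::printf("chord fired with %d\n", v);
        return 0;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, 0, chord_body, 0);
        send_arg(42);     // one message alone does not fire the body...
        send_ready();     // ...both together do
        pthread_join(t, 0);
    }

In a language actually built on this model, that matching is a declaration the compiler understands and can optimize, not another hand-rolled lock protocol.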

And excuse me if I don't trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.

Additionally, his arguments finally convinced me that GC in C++ is a good thing.

But you'll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you'll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.
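A contrived sketch of the problem (names invented for illustration):

    #include <cstdio>

    int main() {
        int* p = new int(42);

        // Legal C++: the pointer survives only as an integer. A precise
        // collector tracking typed roots would see no reference to *p and
        // could reclaim it, so a collector for C++ must treat every word
        // that *looks* like a heap address as a possible pointer.
        unsigned long hidden = reinterpret_cast<unsigned long>(p);
        p = 0;

        int* q = reinterpret_cast<int*>(hidden);  // the pointer comes back
        std::printf("%d\n", *q);
        delete q;
    }

Because any integer anywhere might be a disguised pointer like hidden, the collector can never prove an object unreachable; it can only fail to find a reference to it. That conservative scanning is exactly what rules out the precise, moving collectors that make modern GCs fast.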

And I'm confident he's familiar with the C# efforts.

Polyphonic C# is a separate research project funded by Microsoft --- not a part of their main C# effort.

Since C# has some problems in the MT arena (all languages do), I don't see why it has a better chance at addressing MT issues than C++ does.

The standards committee is likely to be the primary reason why any concurrency concepts in C++ will suck (just as the metaprogramming concepts suck). They refuse to break compatibility with "classical" C++ and C, and are thus stuck with inferior solutions. Moreover, pointers and guarantees of byte-level object layout will likely become another issue. For the overhead of a truly parallel language to be tolerable, you'll likely need powerful optimizers, which are exceedingly difficult to write for a language that makes as many guarantees about memory layout as C++ does. Hell, C++ can't even support a compacting GC!
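For instance (another contrived sketch, names invented):

    #include <cstring>
    #include <cstdio>

    struct Packet { char tag; int len; };

    int main() {
        Packet* pkt = new Packet();
        pkt->tag = 'A';
        pkt->len = 7;

        // Both of these are guaranteed to work in C++, and both break
        // the moment a compacting collector relocates *pkt behind our back:
        int* interior = &pkt->len;              // interior pointer into the object
        char buf[sizeof(Packet)];
        std::memcpy(buf, pkt, sizeof(Packet));  // the raw bytes of the object

        std::printf("%c %d\n", pkt->tag, *interior);
        delete pkt;
        return 0;
    }

A compacting collector would have to find and fix up interior, and would have no way to know that buf holds a stale byte-image of the object. C++ guarantees both tricks work, so objects can never safely be moved.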


emarkp Member since:
2005-09-10

I'm talking about basing the language on an explicitly concurrent model of computing.

Fair enough. I disagree on this point.

And excuse me if I don't trust the same people who came up with auto_ptr (and encouraged shared_ptr) to come up with a proper parallel C++.

Both of those have their place. If you're just knee-jerk objecting to them, then I'm afraid your credibility drops to near zero.

But you'll never be able to have a good GC in C++! As long as you can convert an integer into a pointer, you'll need a conservative GC, and that makes a lot of the really high-performance GC algorithms unusable.

Hell, C++ can't even support a compacting GC!

Now you're just exposing your ignorance. Read up on what has been done in Managed C++. Part of that work is the ability to include or exclude objects from the GC heap, and different rules apply to objects on the GC heap. The CLI GC is compacting, and the (just released) VS2005 includes Managed C++ with GC.
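Roughly, in the C++/CLI syntax that ships with VS2005 (a sketch from memory, not the complete rules --- compile with /clr):

    ref class Managed {             // lives on the compacting CLI GC heap
    public:
        int value;
    };

    class Native {                  // ordinary C++ object; the GC never moves it
    public:
        int value;
    };

    int main() {
        Managed^ m = gcnew Managed();   // handle: the GC may relocate the object
        m->value = 1;

        pin_ptr<int> p = &m->value;     // pinning: temporarily block relocation
        *p = 2;                         // so a raw address can be used safely

        Native* n = new Native();       // raw pointer: excluded from the GC heap
        n->value = 3;
        delete n;                       // still managed manually
        return 0;
    }

Objects created with gcnew can move during collection; native objects never do; pin_ptr bridges the two when you must hand a raw pointer to native code.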


rayiner Member since:
2005-07-06

Both of those have their place. If you're just knee-jerk objecting to them, then I'm afraid your credibility drops to near zero.

It's hardly knee-jerk opposition. I speak as someone who's been programming in C++ for years. I think the whole "modern C++" thing is great, but smart pointers are just plain dumb. Even the Boost guys admit they are slow. They don't fill a niche that needs to be filled --- they are slower than GC, and more cumbersome to use.
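Here's roughly where the cost comes from (a contrived sketch using boost::shared_ptr; the names are made up):

    #include <boost/shared_ptr.hpp>

    struct Node { int value; };

    // Every copy of a shared_ptr bumps the reference count, and every
    // destruction drops it; in a threaded build those are atomic
    // (bus-locked) operations, paid on each and every copy.
    int sum(boost::shared_ptr<Node> n, long iterations) {
        int total = 0;
        for (long i = 0; i < iterations; ++i) {
            boost::shared_ptr<Node> copy = n;   // atomic increment
            total += copy->value;
        }                                       // atomic decrement, each pass
        return total;
    }

    int main() {
        boost::shared_ptr<Node> node(new Node());
        node->value = 1;
        return sum(node, 1000000) == 1000000 ? 0 : 1;
    }

A tracing GC charges nothing for the copies and does its bookkeeping in batches, which is exactly why refcounting loses.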

Now you're just exposing your ignorance. Read up on what has been done in Managed C++.

Managed C++ isn't C++. It's C# dolled up to look like C++.


james_parker Member since:
2005-06-29

I must respectfully disagree about the difficulty of writing concurrent code; it is more difficult than writing linear code, but it is mostly a matter of learning a few additional concepts and adjusting one's perceptions slightly.

In contrast, it is far more difficult to introduce concurrency into existing linear code without dramatically curtailing the concurrency. In many cases this cannot be done without significant rearchitecture.

With a few exceptions, C++ is also not a bad language to write concurrent code in. Neither garbage collection nor smart pointers are needed. If it is approached with the idea that no more than one thread "owns" a piece of memory (generally exclusive right to update or delete it), the management is relatively straightforward and far more efficient than the two alternatives given. The greatest problem I have found is managing thread exit; threads must not exit while owning any memory.
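In rough outline (a sketch with invented names, using pthreads):

    #include <pthread.h>
    #include <queue>
    #include <cstdio>

    struct Job { int id; };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static std::queue<Job*> inbox;

    // Ownership travels with the raw pointer: after hand_off() the
    // producer never touches the Job again; whoever pops it owns it
    // and is the one thread allowed to update or delete it.
    void hand_off(Job* j) {
        pthread_mutex_lock(&lock);
        inbox.push(j);                    // ownership transferred here
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    void* consumer(void*) {
        for (;;) {
            pthread_mutex_lock(&lock);
            while (inbox.empty())
                pthread_cond_wait(&cond, &lock);
            Job* j = inbox.front(); inbox.pop();
            pthread_mutex_unlock(&lock);

            if (!j) return 0;             // null = shutdown; we own nothing
            std::printf("job %d\n", j->id);
            delete j;                     // sole owner frees it; no refcounts
        }
    }

    int main() {
        pthread_t t;
        pthread_create(&t, 0, consumer, 0);
        for (int i = 0; i < 3; ++i) {
            Job* j = new Job();
            j->id = i;
            hand_off(j);                  // producer gives up ownership
        }
        hand_off(0);                      // and must exit owning nothing
        pthread_join(t, 0);
    }

The pointer itself is the token of ownership; as long as each object has exactly one owner at any instant, neither garbage collection nor reference counting is needed.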

If you start with a reasonable threading model, most problems yield relatively easily. Tracking down problems is more difficult, but the basic debugging tools are getting far better, and the techniques, although nonlinear, are reasonably straightforward.

I have worked extensively with highly concurrent software on Unix platforms for over a decade, using shared addressing in interprocess and intraprocess contexts simultaneously; much of that effort was part of a DBMS engine, and I learned to do it without any specialized training other than reading and experimenting.

With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.


rayiner Member since:
2005-07-06

I must respectfully disagree about the difficulty of writing concurrent code; it is more difficult than writing linear code

I would say it's much more difficult than writing linear code. Take a look at the effort that has gone into making Linux, BSD, etc., highly scalable on SMP machines. You need sophisticated locking strategies even to do relatively simple things. Moreover, the resulting code is not very amenable to closed-form analysis, since the theory regarding concurrent computations is lacking.
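As a toy illustration of what "sophisticated" can mean even for a shared counter (a sketch with invented names; real kernels go further, with per-CPU data, RCU, and so on):

    #include <pthread.h>

    // Naive version: every CPU serializes on one lock and bounces
    // its cache line between processors.
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    void naive_increment() {
        pthread_mutex_lock(&big_lock);
        ++counter;
        pthread_mutex_unlock(&big_lock);
    }

    // Scalable version: shard the counter so CPUs rarely contend,
    // at the price of a slower read that must visit every shard.
    const int SHARDS = 16;
    struct Shard { pthread_mutex_t lock; long count; char pad[64]; };  // pad: avoid false sharing
    static Shard shards[SHARDS];

    void sharded_increment(int thread_id) {
        Shard& s = shards[thread_id % SHARDS];
        pthread_mutex_lock(&s.lock);
        ++s.count;
        pthread_mutex_unlock(&s.lock);
    }

    long sharded_read() {
        long total = 0;
        for (int i = 0; i < SHARDS; ++i) {
            pthread_mutex_lock(&shards[i].lock);
            total += shards[i].count;
            pthread_mutex_unlock(&shards[i].lock);
        }
        return total;
    }

    int main() {
        for (int i = 0; i < SHARDS; ++i) {
            pthread_mutex_init(&shards[i].lock, 0);
            shards[i].count = 0;
        }
        naive_increment();
        sharded_increment(0);
        sharded_increment(5);
        return (counter + sharded_read() == 3) ? 0 : 1;
    }

Updates now rarely contend, but reads got slower and the code got subtler --- and this is just a counter.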

With a few exceptions, C++ is also not a bad language to write concurrent code in.

It's no worse for writing concurrent code than most existing languages. I don't think that's good enough.

If it is approached with the idea that no more than one thread "owns" a piece of memory (generally exclusive right to update or delete it), the management is relatively straightforward and far more efficient than the two alternatives given. The greatest problem I have found is managing thread exit; threads must not exit while owning any memory.

How easy is it to impose that discipline on an entire program without language support? Conceptually, manual memory management is simple too --- remember to free every object you've allocated. The fact that almost all major software has memory leaks suggests that it's not as easy as it sounds.

If you start with a reasonable threading model, most problems yield relatively easily.

Some problems yield more easily than others. Our simulation software at work is highly concurrent, and enforcing concurrency in it isn't particularly complicated. However, that's because the problem domain is naturally concurrent (i.e., simulating 10,000 portable radios on a battlefield). Even then, I've still spent a few weekends debugging race conditions. With other things, compiler algorithms for example, there is just so much interaction between threads that enforcing any kind of discipline is complicated.

I have worked extensively with highly concurrent software on Unix platforms for over a decade, using shared addressing in interprocess and intraprocess contexts simultaneously; much of that effort was part of a DBMS engine, and I learned to do it without any specialized training other than reading and experimenting.

With some training, any reasonable developer ought to be able to learn and use the techniques in six months, sufficient for most application work.


I think the gaming industry is interesting to consider in this context. They've clearly got some good programmers, and they certainly have a need to make maximum possible use of people's hardware. Yet even Carmack didn't think it was worth the trouble to make Doom III multithreaded, even though he clearly foresaw the need and tried supporting SMP in Quake III. If six months were enough for someone to learn to write good multithreaded code, wouldn't developers be less frightened of the multicore future than they seem to be?
