In the conclusion of his parallel programming series, David Chisnall looks at using threads. Threads are not a traditional way of achieving parallelism on UNIX platforms, but the newer POSIX standards come with a comprehensive set of functions to support them.
to those who know something about it.
and C programming, that too.
The very first example in this article returns a pointer to an item on the stack. It’s the sort of subroutine we give to potential interns as part of the interview process. “Can you see the mistake?” We don’t hire interns who don’t.
Chisnall claims that threads make programs more difficult to debug. While this is true for the way most people write code (especially those who return pointers to items on the stack), on a historical note, the originators of the proposal that became pthreads brought it to us at POSIX because they were in a camp that found threaded programming for UI-based code easier and more bug-free than the then-popular event model.
Edited 2007-01-22 08:17
The very first example of this article has a function that returns a pointer to a heap-allocated memory block. I cannot see what’s wrong with that.
Anyway, I would like to see any example code include range checks and/or basic failure tests. At the very least to educate and entice newbie programmers to do the same.
My mistake.
Note to self: Never review code late at night
to those who know something about it.
and C programming, that too
You mean, not people like you then.
The very first example in this article returns a pointer to an item on the stack. It’s the sort of subroutine we give to potential interns as part of the interview process. “Can you see the mistake?” We don’t hire interns who don’t.
No, it doesn’t; it returns a pointer to an item on the heap. So obviously you avoid all the good programmers out there and keep the bad ones. Good job!
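For anyone following along, the difference the two posters above are arguing about looks something like this. A minimal sketch (function names are mine, not the article’s): returning a pointer to a stack variable is the classic interview bug, while a calloc()ed block lives on the heap and survives the return.

```c
#include <stdlib.h>
#include <string.h>

/* Buggy version: buf lives on the stack and dies when the function
   returns, so the caller receives a dangling pointer.

   char *bad_greeting(void)
   {
       char buf[16];
       strcpy(buf, "hello");
       return buf;   // WRONG: pointer to dead stack memory
   }
*/

/* Correct version: calloc() allocates from the heap, so the pointer
   stays valid until the caller free()s it. Includes the basic
   failure check requested earlier in the thread. */
char *good_greeting(void)
{
    char *buf = calloc(16, 1);
    if (buf == NULL)
        return NULL;              /* allocation failure */
    strcpy(buf, "hello");
    return buf;                   /* caller must free() this */
}
```

Whether the article’s example does the former or the latter is exactly what is being disputed here.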
Chisnell claims that threads make programs more difficult to debug
They are. Concurrency brings all kind of problems. Shouldn’t be news to you if you really took classes in this domain (which was my favorite). Perhaps you’re a genius though.
they were in a camp that found threaded programming for UI based code easier and more bug free then the then-popular event model
But … the event model uses threads too. And yet it avoids them like the plague, and rightly so: nearly every piece of threaded code has had race conditions.
they were in a camp that found threaded programming for UI based code easier and more bug free then the then-popular event model
But … the event model uses threads too. And yet it avoids them like the plague, and rightly so: nearly every piece of threaded code has had race conditions.
When you comment on a historical observation, it is nice to comment on the actual history.
The then-popular event model (circa 1985) did not expose threads to the programmer and its concurrency was often implemented without any threads at all.
Garret Schwart (sp? — it’s been a long time) brought the first thread proposal to the Posix committee and argued the advantage of threads over events for programming UI code.
The very first example in this article returns a pointer to an item on the stack. It’s the sort of subroutine we give to potential interns as part of the interview process.
What happens when the intern points out that you are using calloc, and it is actually heap allocated?
they were in a camp that found threaded programming for UI based code easier and more bug free then the then-popular event model.
Well, what exactly is an event model? It covers a wide area, and it actually uses threads. The only caveat is that you want to keep that thread use down to an absolute minimum, because the last thing you want is lots of awful race conditions in user-interface code that you’ll have to try and debug.
User interface programming opens a can of worms in terms of the possibilities of things to go wrong, so I can’t see thread based programming being more bug free there.
Edited 2007-01-22 12:36
The very first example in this article returns a pointer to an item on the stack. It’s the sort of subroutine we give to potential interns as part of the interview process.
What happens when the intern points out that you are using calloc, and it is actually heap allocated?
I look around very sheepishly and then hand them the program I meant for them to look at.
Well, what exactly is an event model? It covers a wide area, and it actually uses threads. The only caveat is that you want to keep that thread use down to an absolute minimum, because the last thing you want is lots of awful race conditions in user-interface code that you’ll have to try and debug.
The then-current event model was much simpler. It was often implemented without threads. The underlying system used interrupt service routines to serialize I/O events (and thus the name ‘event model’) and place them on a queue, from which they were removed by the ‘event handler’ and processed one at a time.
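To make the historical point concrete, the queue-and-dispatch model described above can be sketched in a few lines of C. This is an illustrative toy, not any particular system’s API: events are serialized onto a queue (in the original systems, by interrupt service routines) and a single loop removes and handles them one at a time, with no threads anywhere.

```c
/* Minimal single-threaded event model: a fixed-size queue plus a
   dispatch loop. All names here are made up for illustration. */

enum event_type { EV_KEY, EV_MOUSE, EV_QUIT };

struct event {
    enum event_type type;
    int data;
};

#define QUEUE_SIZE 32
static struct event queue[QUEUE_SIZE];
static int head, tail;

/* In the original systems this ran at interrupt time; here it is
   just a function. Returns -1 if the queue is full. */
static int post_event(enum event_type type, int data)
{
    int next = (tail + 1) % QUEUE_SIZE;
    if (next == head)
        return -1;
    queue[tail].type = type;
    queue[tail].data = data;
    tail = next;
    return 0;
}

/* The 'event handler': drain the queue, one event at a time.
   Returns how many events were handled before EV_QUIT. */
static int run_event_loop(void)
{
    int handled = 0;
    while (head != tail) {
        struct event ev = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        if (ev.type == EV_QUIT)
            break;
        handled++;   /* a real system would dispatch per type here */
    }
    return handled;
}
```

Since everything happens sequentially in one loop, no event handler can ever race with another, which is why this model needed no locking at all.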
Looks ok to me.
I guess it’s time to add a “Yes, the comment is built on factual errors” option for modding down.
Only if you can mod by half points, for comments that are half wrong and half right.
> The very first example in this article returns
> a pointer to an item on the stack.
Since when does calloc() allocate a memory block from the stack rather than the heap???
…especially if there are lots of changes, lots of critical sections, and points of multiple locking of objects.
Here is a relevant page from people with much more experience in it:
http://www.algorithm.com.au/talks/concurrency-erlang/
One problem writing an article by expanding a bullet list of function interfaces is that one too often winds up with a lot of trees, but no forest.
What makes multi-threaded programming an order of magnitude more difficult than linear programming is controlling access to data shared among the threads. The article pays some lip service to this problem by talking about mutex facilities, but appears to focus on the bogosity of “critical regions”: wrapping a particular piece of code while it does something.
What should I do when I write code in another part of the program? Does it need a wrapper too? Just what the heck am I protecting here, anyway?
Instead, one should focus on protecting a shared data item whether it’s an “int” or something more complicated. The point of serializing access to data is to protect the DATA, not the CODE, right?
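To illustrate the point (a sketch of my own, not code from the article): the mutex belongs next to the data it guards, and every access, from anywhere in the program, goes through accessors that take the lock. Then there is no question of which code needs a “wrapper”.

```c
#include <pthread.h>

/* Protect the DATA, not the code: the lock lives in the same
   struct as the value it guards. Names are illustrative. */
struct shared_counter {
    pthread_mutex_t lock;
    int value;
};

static void counter_init(struct shared_counter *c)
{
    pthread_mutex_init(&c->lock, NULL);
    c->value = 0;
}

/* The ONLY functions that touch c->value; any thread, in any part
   of the program, must go through them. */
static void counter_add(struct shared_counter *c, int n)
{
    pthread_mutex_lock(&c->lock);
    c->value += n;
    pthread_mutex_unlock(&c->lock);
}

static int counter_get(struct shared_counter *c)
{
    pthread_mutex_lock(&c->lock);
    int v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}
```

With this discipline, “just what am I protecting here?” always has the same answer: the struct the mutex lives in.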
Perhaps the ultimate indignity is the CAUTION blurb at the end of part 3, “ON ONE CONDITION”, which takes pains to point out that one should AVOID BOTHERING WITH SYNCHRONIZATION if the overhead is more than that of just accessing the data without it. That is what it says, isn’t it? Therein lies madness.
Perhaps the author was attempting to segue into the next section. The place for that was in normal paragraph text, not in a highlighted CAUTION saying it’s all too much trouble to bother with anyway.
Perhaps this will be covered in part 4 of the article. No wait, it’s over.
Love the metaphors.
“Perhaps the ultimate indignity is the CAUTION blurb at the end of part 3, “ON ONE CONDITION”, which takes pains to point out that one should AVOID BOTHERING WITH SYNCHRONIZATION if the overhead is more than that of just accessing the data without it. That is what it says, isn’t it? Therein lies madness.”
Well said. Thread synchronization is about correctness, NOT speed. It’s a shame advice like this is still given out.