Linked by Thom Holwerda on Thu 31st Aug 2006 22:53 UTC
General Development "Concurrent programming is difficult, yet many technologists predict the end of Moore's law will be answered with increasingly parallel computer architectures - multicore or chip multiprocessors. If we hope to achieve continued performance gains, programs must be able to exploit this parallelism. Automatic exploitation of parallelism in sequential programs, through either computer architecture techniques such as dynamic dispatch or automatic parallelization of sequential programs, offers one possible technical solution. However, many researchers agree that these automatic techniques have been pushed to their limits and can exploit only modest parallelism. Thus, programs themselves must become more concurrent."
Thread beginning with comment 157721
Parallel difficult? Why?
by luzr on Fri 1st Sep 2006 06:23 UTC
luzr Member since:
2005-11-20

I must say I am starting to get a bit tired of all these "threads are difficult, we need a new paradigm" articles.

IME, this is all hogwash. Programming with threads is not that difficult once you learn some basic principles. Just because somebody somewhere was caught out by a race condition or something similar does not justify introducing "new principles".
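
For illustration, a minimal Java sketch (mine, not the commenter's code) of one such basic principle: guard every access to shared mutable state with the same lock, here via synchronized methods.

    // Sketch: a counter whose shared state is only touched while holding
    // this object's monitor, so concurrent increments cannot interleave badly.
    public class LockedCounter {
        private long count = 0;

        public synchronized void increment() {
            count++; // read-modify-write happens atomically under the lock
        }

        public synchronized long get() {
            return count;
        }
    }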

Reply Score: 2

RE: Parallel difficult? Why?
by corentin on Fri 1st Sep 2006 09:22 in reply to "Parallel difficult? Why?"
corentin Member since:
2005-08-08

> IME, this is all hogwash. Programming with threads is not that difficult once you learn some basic principles. Just because somebody somewhere was caught out by a race condition or something similar does not justify introducing "new principles".

When "somebody somewhere" translates to "nearly everybody, everywhere" it does.

Just like manual memory management, synchronization is NOT easy, and it is very error-prone in non-trivial programs. And it doesn't scale well, anyway.
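
To make the "error-prone" point concrete, here is a minimal Java sketch (mine, not the commenter's) of the classic lost-update race: two threads bump a shared counter with no synchronization, and increments are silently dropped.

    // Sketch: an unsynchronized shared counter. counter++ is a non-atomic
    // read-modify-write, so concurrent increments overwrite each other and
    // the final total usually comes out well below the expected 200000.
    public class RaceSketch {
        static int counter = 0; // shared mutable state, no lock

        public static void main(String[] args) throws InterruptedException {
            Runnable work = new Runnable() {
                public void run() {
                    for (int i = 0; i < 100000; i++) {
                        counter++;
                    }
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            System.out.println("counter = " + counter); // expected 200000
        }
    }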

Reply Parent Score: 1

FunkyELF Member since:
2006-07-26

Just like manual memory management, synchronization is NOT easy, and it is very error-prone in non-trivial programs. And it doesn't scale well, anyway.

Well put. I like the comparison to memory management. I am not a parallel programmer, but I'd like to be. Memory management was hard to master in C and C++, but I'm glad I did it before I learned Java with its garbage collection. I feel parallel programming is the same way: I'd better learn it at a lower level before a new paradigm emerges that makes it easier.

Reply Parent Score: 1

BryanFeeney Member since:
2005-07-06

Indeed, and what's more, it requires a whole lot more forethought than memory management. I usually end up modelling things out on paper, with UML and, occasionally, state diagrams, before splitting up a program. The sort of gotchas that can occur with concurrent programming (especially using the current crop of popular languages) are nightmarish.

The solution is to create a completely new paradigm. One example is the use of futures (http://en.wikipedia.org/wiki/Future_%28programming%29) and "promise pipelining", currently seen in languages such as E and Alice.
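
For illustration, a minimal Java sketch (mine, not the commenter's, and using java.util.concurrent rather than E or Alice) of the basic futures idea: the caller receives a placeholder for a result still being computed on another thread and only blocks when it actually needs the value. Promise pipelining goes further by chaining further calls onto an unresolved promise; that part is not shown here.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Sketch: submit a computation and keep working; Future.get() blocks
    // only at the point where the result is actually needed.
    public class FutureSketch {
        public static void main(String[] args)
                throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newSingleThreadExecutor();

            Future<Long> sum = pool.submit(new Callable<Long>() {
                public Long call() {
                    long total = 0;
                    for (long i = 1; i <= 10000000L; i++) {
                        total += i;
                    }
                    return total;
                }
            });

            System.out.println("Doing other work while the sum is computed...");

            System.out.println("Sum = " + sum.get()); // blocks here only if not done yet

            pool.shutdown();
        }
    }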

Of course, given that it's taken 30 years for techniques like total OOP and lambdas to arrive in a mainstream language (C# 3), I'm not optimistic about any of these techniques gaining mainstream acceptance in the near future.

Edited 2006-09-01 15:44

Reply Parent Score: 1