Linked by Thom Holwerda on Thu 31st Aug 2006 22:53 UTC
General Development "Concurrent programming is difficult, yet many technologists predict the end of Moore's law will be answered with increasingly parallel computer architectures - multicore or chip multiprocessors. If we hope to achieve continued performance gains, programs must be able to exploit this parallelism. Automatic exploitation of parallelism in sequential programs, through either computer architecture techniques such as dynamic dispatch or automatic parallelization of sequential programs, offers one possible technical solution. However, many researchers agree that these automatic techniques have been pushed to their limits and can exploit only modest parallelism. Thus, programs themselves must become more concurrent."
Thread beginning with comment 157935
RE[2]: Parallel difficult? Why?
by BryanFeeney on Fri 1st Sep 2006 15:43 UTC in reply to "RE: Parallel difficult? Why?"
Indeed, and what's more, it requires far more forethought than memory management. I usually end up modelling things out on paper, with UML and, occasionally, state diagrams, before splitting up a program. The gotchas that can occur with concurrent programming (especially in the current crop of popular languages) are nightmarish.
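A minimal sketch of the classic gotcha in today's languages (Python here, purely for illustration): a shared counter incremented from several threads. The increment is a read-modify-write, so without the lock two threads can interleave and silently lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # "counter += 1" is a read-modify-write; without the lock,
        # two threads can interleave here and lose updates (a data race).
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; anything up to that without it
```

The race is invisible in small tests and appears only under load, which is exactly why this class of bug is so nightmarish to find.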

The solution is to create a completely new paradigm. One example is the use of futures and "promise pipelining", currently seen in languages like E and Alice.
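The idea can be sketched in Python's `concurrent.futures` (an assumption for illustration; E and Alice bake this into the language, Python merely approximates it): a future stands in for a result still being computed, and you can queue dependent work against it instead of blocking.

```python
from concurrent.futures import ThreadPoolExecutor

# A future is a placeholder for a value that is still being computed.
# Rough stand-in for promise pipelining: the dependent task is queued
# immediately and only the worker blocks on the intermediate result.
with ThreadPoolExecutor(max_workers=2) as pool:
    price = pool.submit(lambda: 40 + 2)                      # runs in the background
    doubled = pool.submit(lambda f: f.result() * 2, price)   # depends on `price`
    result = doubled.result()
print(result)  # 84
```

In E-style pipelining the dependency graph is sent to wherever the values live, so no caller ever blocks; the executor version above is only a local approximation of that idea.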

Of course, given that it's taken 30 years for techniques like total OOP and lambdas to arrive in a mainstream language (C# 3), I'm not optimistic about any of these techniques gaining mainstream acceptance in the near future.

Edited 2006-09-01 15:44

Reply Parent Score: 1

Sphinx

Damn, actually plotting and planning a program before coding, good form there.

Reply Parent Score: 1