Linked by Thom Holwerda on Thu 31st Aug 2006 22:53 UTC
General Development "Concurrent programming is difficult, yet many technologists predict the end of Moore's law will be answered with increasingly parallel computer architectures - multicore or chip multiprocessors. If we hope to achieve continued performance gains, programs must be able to exploit this parallelism. Automatic exploitation of parallelism in sequential programs, through either computer architecture techniques such as dynamic dispatch or automatic parallelization of sequential programs, offers one possible technical solution. However, many researchers agree that these automatic techniques have been pushed to their limits and can exploit only modest parallelism. Thus, programs themselves must become more concurrent."
Thread beginning with comment 157905
RE[2]: Parallel difficult? Why?
by FunkyELF on Fri 1st Sep 2006 14:21 UTC in reply to "RE: Parallel difficult? Why?"
FunkyELF
Member since:
2006-07-26

"Just like manual memory management, synchronization is NOT easy and very error-prone in non-trivial programs. And it doesn't scale well, anyway."

Well put. I like the comparison to memory management. I'm not a parallel programmer, but I'd like to be. Manual memory management was hard to master in C and C++, but I'm glad I learned it before I moved on to Java and garbage collection. I feel parallel programming is the same way: I'd better learn it at a lower level before a new paradigm emerges that makes it easier.
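
To make the analogy concrete, here is a minimal sketch of the "lower level" in question, assuming POSIX threads in C (compiled with -pthread): a shared counter guarded by an explicit mutex, where pairing every lock with an unlock is entirely the programmer's responsibility, much like pairing every malloc with a free.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared counter protected by an explicit mutex. The lock/unlock
     * pairing is manual, just like malloc/free in C and C++. */
    static long counter = 0;
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&counter_lock);
            counter++;                            /* critical section */
            pthread_mutex_unlock(&counter_lock);  /* forget this and the program deadlocks */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* 200000 only if the locking is correct */
        return 0;
    }

Miss one unlock and the program hangs; skip the lock entirely and the final count is silently wrong. That is the same class of silent, non-local bug that manual memory management invites.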

Reply Parent Score: 1