In a classic case of kicking in open doors, research firm Gartner has concluded that software will not be able to keep up with the ever-increasing number of cores in modern processors. “By Gartner’s reckoning, the doubling of core or thread counts every two years or so – and for some architectures, the thread growth is even larger with each jump – will affect all layers of the software stack. Operating systems, middleware, virtualization hypervisors, and applications riding atop this code all have their own limitations when it comes to exploiting threads and cores, and Gartner is warning IT shops to look carefully at their software before knee-jerking a purchase order to get the latest server as a means of boosting performance for their applications.”
Parallelism is one of the hardest things to do (well) in traditional OO languages, but it doesn’t matter that much when you are talking about single-core machines. With the i7, we are now at the point where unless software takes advantage of multiple threads, it will not be able to take advantage of a machine’s hardware.
Functional languages like Erlang and F# are getting a lot of buzz right now in the commercial software world for just that reason. We aren’t even fully transitioned into the managed world yet, and it is already easy to imagine a time when things like C# and Java will be considered obsolete.
Share nothing is easy.
You can do share nothing in C: make a bunch of threads, make copies of some data structures, and off you go.
The hard problems in parallelism are the ones that require shared data structures, and therefore locking of some kind.
We can make use of multicore processors today. How people expect to make use of them is what’s being debated.
Imagine painting a room. With 1 guy painting, it may take an hour. With 2 guys, you halve the time to half an hour. With 4 guys painting, you might get it down to 20 minutes (the room is starting to get a little crowded, people get in each other’s way, etc). With 8 guys, you’re just not going to get any painting done. It’s the same with increasing processor counts: you hit a limit at some point where a particular problem just cannot be broken down into smaller problems that can be solved concurrently.
On the other hand, instead of shoving more painters into a room and expecting them to complete the task in a shorter period of time, how about making them paint more rooms at once? With 8 painters, you could end up painting 8 rooms in an hour. So instead of trying to break a task down into n smaller tasks that can be executed concurrently, you could just do n full-sized tasks concurrently.
The problem with this approach is that you stress the memory subsystem. Imagine all 8 painters rushing to the van to get new paint all at once ….
Get more vans.
Economists call that the Law of Diminishing Returns.
I love the painting analogy – until more multi-core software comes along, I’m doing the “painting more rooms” approach as much as possible on my computer.
From the article:
“The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it.”
Ahh whatever I’m all wet already, like to burn some rubber on that thing.
How about a sound way of reasoning about all of that parallel fuss? Something akin to modern trigonometry.
Such that you know what the software/hardware system does before you flesh it out?
MORE GLITCHY CORES == MORE GLITCHY SOFTWARE
Reminds me of FreeBSD 5.x, when there was a move away from the Giant lock. As they pulled parts out from under the Giant lock and fine-grained the locking, the result was a huge number of bugs and issues that were suddenly exposed.
It is going to be interesting to see whether, in the future, some code bases turn out to be of such poor quality that the software company ends up having to throw out large swaths of them, because retrofitting them to a multicore environment would be too costly and time consuming.