Linked by Thom Holwerda on Fri 19th Mar 2010 13:00 UTC, submitted by Jim Lynch
General Development "With chip makers continuing to increase the number of cores they include on each new generation of their processors, perhaps it's time to rethink the basic architecture of today's operating systems, suggested Dave Probert, a kernel architect within the Windows core operating systems division at Microsoft."
Thread beginning with comment 414450
Doesn't address the main problem
by butters on Sat 20th Mar 2010 12:00 UTC

This doesn't address the main problem, which is that user software is not pervasively threaded.

Assigning each processor to a process doesn't fix the single-thread performance ceiling, and it doesn't let single-threaded processes utilize more than one processor when those resources are available.

I don't see what problem this approach intends to solve. Next to user programs, modern OS kernels tend to be comparatively brilliant at multi-threading (not that there's no room for improvement), and nothing in this proposal endows existing user programs with shiny new powers of parallelism.
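A minimal sketch (in Python; the function and chunking scheme are illustrative assumptions, not anything from the article) of the ceiling described above: the serial version can never occupy more than one core no matter how the OS assigns processors, while the parallel version only gains because the programmer restructured the work by hand.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """Naive CPU-bound work: count primes in [lo, hi)."""
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_serial(limit):
    # A single thread of execution: the kernel can only ever run
    # this on one core at a time, however many are idle.
    return count_primes(0, limit)

def count_primes_parallel(limit, workers=4):
    # The programmer must restructure the work explicitly: split
    # the range into chunks and farm them out to worker processes.
    step = limit // workers
    bounds = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, *zip(*bounds)))

if __name__ == "__main__":
    assert count_primes_serial(10_000) == count_primes_parallel(10_000)
```

Handing a whole core to the serial version changes nothing about its throughput; only the restructured version can use the extra hardware, which is the point about parallelism having to come from the application.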

Reply Score: 2

PlatformAgnostic

Welcome back, after a long hiatus! I agree pretty strongly with you.

I think the problem isn't even the apps, though. You want the typical app to run on as few cores as possible anyway, because that uses less power. And you still have to deliver acceptable performance on today's machines (often netbooks), so why code for the super-fancy 4-core as well? What do you get out of it as an ISV?

To really make use of the extra CPU, we need to change the vision of what we do with computers. Multicore isn't worth it if it doesn't improve someone's life. For instance, if we had a highly parallel application that could do image processing, or voice recognition, or machine learning, and save someone time or entertain them in some way, that compute power would be worth something. But you also have to factor in the power consumed to achieve it.

Large parallelism is an obvious win in the server space, where there are usually lots of independent pieces of work from many users, but it has thus far been hard to translate down to the client, where there's only one user, except in gaming graphics applications.

Reply Parent Score: 2

cerbie

Programs aren't pervasively multithreaded because the hardware and software platforms yield diminishing returns for the hours a programmer puts into the work.

So, we shouldn't make it easier to use resources in a more parallel fashion, because programmers aren't already doing it well.

Isn't that a bit of a chicken-and-egg situation?

By making it easier for developers to use those resources, writing parallel applications becomes easier, and we'd see more of them being made. This idea is one of many ways to tackle that problem, this time by simply getting out of the way.

Ideally, you won't end up coding for a specific number of cores. Instead, the whole dev stack, from the bottom up, will make it easier to use many processes and threads than not to, so that gains from having more cores become as automatic as gains from a faster CPU are today... but without having to use functional languages everywhere to do it.
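One illustrative shape of "the platform getting out of the way" (a Python sketch using its standard executor API as an assumed stand-in; contemporary analogues like Grand Central Dispatch and .NET's Task Parallel Library aimed at the same thing): the parallel call is a one-substitution change from the familiar serial idiom, and the runtime decides how to spread the work.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for per-item work that might block or compute.
    return x * x

values = list(range(10))

# Serial: the familiar idiom.
serial = list(map(slow_square, values))

# Parallel: same idiom, one substitution. The calling code does not
# change; the pool decides how calls are spread across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(slow_square, values))

assert serial == parallel
```

When the parallel spelling is no harder than the serial one, extra cores start to pay off without the programmer designing for any particular core count, which is roughly the "automatic gains" being argued for above.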

Edited 2010-03-21 06:46 UTC

Reply Parent Score: 2