Linked by Thom Holwerda on Sat 11th May 2013 21:41 UTC
Windows "Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening." That's one way to start an insider explanation of why Windows' performance isn't up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand's blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you'll know the answer.
RE[8]: makes sense
by Valhalla on Mon 13th May 2013 11:46 UTC in reply to "RE[7]: makes sense"
Valhalla Member since:
2006-01-24

"I'm not going to explain anything anymore"

Huh? From what I can tell you haven't explained anything. You made a blanket statement that 'stable APIs/ABIs are good engineering, like it or not', which totally disregards reality: a stable API/ABI is NOT automatically 'good engineering' if it turns out to suck and you're then stuck with it.

I doubt there is a single long-term 'stable' API/ABI in use today that would look the way it does (in many cases probably not even remotely) had the original developers been able to go back in time and benefit from what they know now.

So it's a balance: do you keep code from 'yesterday' that 'today' turned out to be a crappy solution, in order to maintain compatibility, or do you allow yourself to break it and force improved changes upon those interfacing with your solution?

The kernel devs chose the balance of being able to break anything inside the kernel, while keeping userland interfaces intact.

This is not a perfect solution, because there simply is no perfect solution, but it means that their changes have practically zero impact on user space (which certainly limits the improvements they can make) while allowing them free rein to improve anything within kernel space.
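
To make that concrete, here's a minimal sketch (my own illustration, not from any kernel documentation) of what the stable userland interface means in practice; uname(2) has kept the same ABI for ages, so a binary built against it keeps working across kernel upgrades no matter how much the internals are rewritten underneath:

    /* Minimal illustration: uname(2) is part of the stable userland
     * ABI, so a program using it keeps running across kernel upgrades
     * even while the kernel internals change. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        printf("Running on %s %s\n", u.sysname, u.release);
        return 0;
    }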

And since the kernel is monolithic, there's a ton of in-kernel functionality that can be enhanced without breaking compatibility; the exceptions in practice are those few proprietary drivers residing outside the kernel tree, which need to be maintained by their vendors against in-kernel ABI changes.
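
As a sketch of what that vendor maintenance looks like (a hypothetical module with made-up 'demo' names, not any real driver), out-of-tree code typically carries version guards like this; for instance, create_proc_entry() was dropped in favour of proc_create() around kernel 3.10:

    /* Hypothetical out-of-tree module fragment showing the version
     * guards vendors maintain against in-kernel API changes. */
    #include <linux/fs.h>
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/proc_fs.h>
    #include <linux/version.h>

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
    };

    static int __init demo_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 10, 0)
        /* newer in-kernel API: fops are passed at creation time */
        if (!proc_create("demo", 0444, NULL, &demo_fops))
            return -ENOMEM;
    #else
        /* older in-kernel API: create the entry, then attach fops */
        struct proc_dir_entry *e = create_proc_entry("demo", 0444, NULL);
        if (!e)
            return -ENOMEM;
        e->proc_fops = &demo_fops;
    #endif
        return 0;
    }

    static void __exit demo_exit(void)
    {
        remove_proc_entry("demo", NULL);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");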

"because your counter argument is 'it works for me' so far."

Oh, I think I've fleshed out my argument a lot more than that; meanwhile yours still seems to be very little beyond 'it doesn't work for me'.

Reply Parent Score: 4

RE[9]: makes sense
by lucas_maximus on Mon 13th May 2013 17:59 in reply to "RE[8]: makes sense"
lucas_maximus Member since:
2009-08-18

At the end of the day, I think a simple Google search for "my wireless is not working anymore in <distro x>" proves the point.

While it is better than it used to be, it still sucks. YET Linux advocates make up all sorts of excuses for why that is okay. They could have, I dunno, supported a legacy interface until kernel version X, which wouldn't have been much effort if the internal structures are as good as you say they are. That would have given hardware companies and OEMs a roadmap they could follow.
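
To be clear about what I mean by a legacy interface (entirely made-up names below, just the shape of the idea): keep the old entry point around as a deprecated wrapper over the new one until that announced version, so vendors get a migration window instead of an instant build break:

    /* Sketch of a transitional shim with hypothetical names: the old
     * function survives as a deprecated wrapper around the new one,
     * warning at compile time until it is finally removed. */
    #include <linux/compiler.h>

    int register_widget_v2(int id, unsigned int flags);  /* hypothetical new API */

    /* hypothetical old API, kept temporarily for out-of-tree callers */
    static inline int __deprecated register_widget(int id)
    {
        return register_widget_v2(id, 0);
    }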

There have always been problems, because they made a bad choice for desktop users and the OEMs that might have wanted to support them. It might be a good choice for the progression of the kernel itself, but it is a shit choice for desktop users, which is one of the many reasons Linux is and will always be a failure on the desktop.

The fact of the matter is that while you get up-votes on here, changing interfaces and APIs tends to piss off third-party developers. If there hadn't been the legal problems with the BSDs at the time, Linux wouldn't even have got off the ground, because it wouldn't have been picked up as a "free" *nix-like alternative.

Reply Parent Score: 2