Linked by Thom Holwerda on Sat 11th May 2013 21:41 UTC
Windows "Windows is indeed slower than other operating systems in many scenarios, and the gap is worsening." That's one way to start an insider explanation of why Windows' performance isn't up to snuff. Written by someone who actually contributes code to the Windows NT kernel, the comment on Hacker News, later deleted but reposted with permission on Marc Bevand's blog, paints a very dreary picture of the state of Windows development. The root issue? Think of how Linux is developed, and you'll know the answer.
RE[6]: makes sense
by Valhalla on Sun 12th May 2013 19:29 UTC in reply to "RE[5]: makes sense"
Valhalla Member since:
2006-01-24

"Except I actually do run Linux."

Then tell us what distro you are using and what these kernel breakages you keep suffering are.

As I am running a bleeding-edge distro, I am fully aware and even expect that things may break; yet it's been amazingly stable (granted, I don't enable testing; I'm very thankful for those who do and report bugs, which are then fixed before I upgrade).

"I don't pretend it is perfect"

Neither do I. As I said, I had to downgrade due to a network driver instability; still, that's one showstopper in a five-year period on a bleeding-edge distro. Vista didn't make it past its first service pack before it started breaking drivers; so much for the 'stable' ABI.

"and I don't make out that poor decisions are good when they blatantly aren't."

How are they 'blatantly' poor decisions? You offer no arguments; please explain.

Now let's see: more hardware device support out of the box than any other system, ported to just about every architecture known to man, used in everything from giant compute clusters, supercomputers, servers and HPC to 3D/SFX work, embedded devices, mobile phones, tablets, fridges, etc.

But yeah, according to you and bassbeast these areas don't need driver stability; they obviously can't have it, since according to you guys Linux drivers 'just keep breaking': update the kernel and whoosh, there goes the stability.

Reply Parent Score: 3

RE[7]: makes sense
by lucas_maximus on Mon 13th May 2013 00:34 in reply to "RE[6]: makes sense"
lucas_maximus Member since:
2009-08-18

I'm not going to explain anything anymore, because so far your counter-argument is 'it works for me'.

Reply Parent Score: 0

RE[8]: makes sense
by yoshi314@gmail.com on Mon 13th May 2013 09:09 in reply to "RE[7]: makes sense"
yoshi314@gmail.com Member since:
2009-12-14

From the looks of it, it works for a lot of purposes, not just for him.

Reply Parent Score: 1

RE[8]: makes sense
by Valhalla on Mon 13th May 2013 11:46 in reply to "RE[7]: makes sense"
Valhalla Member since:
2006-01-24

"I'm not going to explain anything anymore"

Huh? From what I can tell you haven't explained anything. You made a blanket statement, 'stable API/ABIs are good engineering, like it or not', which totally disregards reality: stable API/ABIs are NOT automatically 'good engineering' if those stable APIs/ABIs turn out to suck and you are then stuck with them.

I doubt there is a single long-term 'stable' API/ABI today that would look the way it does (in many cases probably not even remotely) had the original developers been able to go back in time and benefit from what they know now.

So it's a balance: will you keep code from 'yesterday' that turned out 'today' to be a crappy solution in order to maintain compatibility, or will you allow yourself to break it and force improved changes upon those interfacing with your solution?

The kernel devs chose the balance of being able to break anything inside the kernel, while keeping userland interfaces intact.

This is not a perfect solution, because there simply is no perfect solution, but it means that their changes have practically zero impact on user space (which certainly limits the improvements they can make) while allowing them free rein to improve within kernel space.

And since the kernel is monolithic, there's a ton of in-kernel functionality that can be enhanced without breaking compatibility; the exceptions in practice are the few proprietary drivers residing outside the kernel, which need to be maintained by their vendors against kernel ABI changes.
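As a concrete illustration (the module name and version here are hypothetical), such out-of-tree modules are commonly wrapped in something like DKMS, which registers the vendor's source and rebuilds it against each new kernel:

    # dkms.conf for a hypothetical out-of-tree module "examplemod"
    PACKAGE_NAME="examplemod"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="examplemod"
    DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
    AUTOINSTALL="yes"    # rebuild automatically when a new kernel lands

    # with the source under /usr/src/examplemod-1.0, register and build it:
    sudo dkms add -m examplemod -v 1.0
    sudo dkms build -m examplemod -v 1.0
    sudo dkms install -m examplemod -v 1.0

In-tree drivers need none of this: whoever changes an internal interface is expected to fix every in-tree user of it in the same change.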

"because your counter argument is 'it works for me' so far."

Oh, I think I've fleshed out my argument a lot more than that; meanwhile, yours still seems to be little more than 'it doesn't work for me'.

Reply Parent Score: 4

RE[8]: makes sense
by bassbeast on Thu 16th May 2013 18:29 in reply to "RE[7]: makes sense"
bassbeast Member since:
2007-11-11

Sadly, friend, that is ALL you will get, because ultimately the broken driver model has become a religious element, a way to "prove the faithful" by how much they will get behind an obviously and demonstrably bad design.

Quick, how many OSes OTHER than Linux use Torvalds' driver model? NONE. How many use stable ABIs? BSD, Solaris, OS X, iOS, Android, Windows; even OS/2 has a stable driver ABI.

I'm a retailer; I have access to more hardware than most, and I can tell you the Linux driver model is BROKEN. I can take ANY mainstream distro, download the version from five years ago and update to current (thus simulating exactly HALF the lifetime of a Windows OS), and drivers that worked in the beginning will NOT work at the end.


And before anybody says "use LTS": that argument doesn't hold water, because thanks to the again-broken design by Torvalds most software in Linux is tied to the kernel, so if you want more than a browser and OpenOffice you WILL be forced to upgrade because "this software requires kernel x.xx", or be left behind with older, unsupported software. With Windows, with the exception of games that require a newer version of DirectX (which is rare; most have a DX9 mode for this very reason), you can install the latest and greatest on that 10-year-old XP machine and it JUST WORKS.

Again, let me end with the simple fact that after NINE YEARS I'm retiring the shop netbox. That is TWO service packs and at LEAST 3000 patches and not a single broken driver, NOT ONE. If Linux wants to compete then it actually HAS to compete, not give us excuses which, frankly, math can prove don't work. Look at the "let the kernel devs handle drivers" excuse: you have 150,000+ drivers for Linux, with a couple of hundred new devices released WEEKLY. How many Linux kernel devs are there again? If you pumped them full of speed and made them work 24/7/365 the numbers still wouldn't add up; the devs simply cannot keep up... which is of course one of the reasons to HAVE a stable ABI in the first place, so that the kernel devs can work on the kernel while the OEMs concentrate on drivers.
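Do the back-of-the-envelope math yourself; a sketch using the figures claimed above (none of them are verified counts):

    # figures as claimed above, not verified
    existing_drivers=150000   # drivers already in the Linux tree
    new_per_week=200          # new devices released weekly
    echo "$existing_drivers existing in-tree drivers to keep working"
    echo "$(( new_per_week * 52 )) new devices a year on top of that"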

Sorry for the length, but this one really irks me. If you like running an OS that is rough, for whatever reasons? Go right ahead; I wish you nothing but luck. But when you compare that broken mess to either OS X or Windows, I gotta throw down the red flag and call bullshit; it's not even in the same league. Oh, and do NOT bring up embedded or servers, as that is "moving the goalposts"; honestly, I don't care how cool your OS is at web serving. I'm not selling web servers and that isn't the subject at hand. Linux is broken ON THE DESKTOP and that is what we are discussing, so try to stay on topic.

Reply Parent Score: 2

RE[7]: makes sense
by Morgan on Mon 13th May 2013 10:45 in reply to "RE[6]: makes sense"
Morgan Member since:
2005-06-29

"Then tell us what distro you are using and what these kernel breakages you keep suffering are."


To my knowledge he runs Ubuntu but that's irrelevant; this discussion is about NT kernel vs Linux kernel performance, not a dick measuring contest.

I can see where you're both coming from because I live in both stable LTS and bleeding edge testing worlds at the same time. On my workstation it's Windows 7 Pro and Ubuntu LTS because I like to actually be able to get work done without wasting time fiddling around with broken crap. On my laptop it's "anything goes", as that device is my testbed for all the shiny new distro releases. Any given week I might have a Debian based distro, an Arch based one, a Slackware derivative or even Haiku booting on it. I greatly enjoy having both the stability of my work machine and the minefield that is my portable.

I feel like I say this every week lately, but if you're running a production machine for anything other than OS development, you should stick to a stable (preferably LTS) release of your favorite OS. If you're a kernel hacker, a distro contributor or you're just batshit insane, go ahead and use a bleeding edge distro as your production machine, and enjoy all the extra work that goes into maintaining it.

Reply Parent Score: 3

RE[8]: makes sense
by Valhalla on Mon 13th May 2013 12:40 in reply to "RE[7]: makes sense"
Valhalla Member since:
2006-01-24

"this discussion is about NT kernel vs Linux kernel performance, not a dick measuring contest."

First off, I haven't even glanced at my dick during this entire argumentation (I need to keep my eyes on the damn keyboard) ;)

And while the original discussion was about Linux vs NT performance, this offshoot is about bassbeast's claims that the lack of a stable driver ABI is causing user space office software to crash and holding Linux back on the desktop.

He was painting a picture of this 'company' which must either be running a bleeding-edge distro or downloading and installing new kernel versions off git, and which then goes:

-'shit! this new untested kernel release broke the proprietary driver and as such cost us $30k of products! we're screwed! oh, man if only this could have been avoided, like with one of them stable driver ABI's like I hear windows has, bummer!'

And then he used this 'very likely' scenario as the reason why Linux has not made it big on the end-user desktop.


"I can see where you're both coming from because I live in both stable LTS and bleeding edge testing worlds at the same time."

Same here, stable for work, batshit crazy bleeding edge for leisure.

Reply Parent Score: 2

RE[7]: makes sense
by acobar on Mon 13th May 2013 15:10 in reply to "RE[6]: makes sense"
acobar Member since:
2005-11-15

OK, there is more to this thread of arguments than I would like to discuss, but to put things in perspective I will describe some possibly inaccurate, anecdotal cases that apply to me.

First, X drivers are NOT part of the Linux kernel, but frankly, does it matter where they fit in the system stack if you just want to boot to a nice GUI? Yeah, I thought so.

Some disclosures:

d.1) I have openSUSE 12.3 installed on all the computers I really use (so I am discounting the very old ones I just play with out of pure nostalgia). There are four of them;

d.2) Three of them also have Windows installed (one XP, one Vista, and one 7);

d.3) One is a laptop, the others are all desktops;

d.4) I prefer nVidia graphics cards because of their drivers on Linux. Two of the machines have such cards.

Now, this is what I can tell:

1) I compile some programs on my machines to add features that are left out of the regular repos due to some restrictions (mainly because they are not "totally" free additions), or because I want to use specific patches that are not applied by default. If you use scientific packages you will find this is a common case;

2) I don't use the openSUSE nVidia driver repo because of a very simple problem: when I compile programs that depend on the X devel libraries and generate the appropriate RPMs, they will declare a dependency on those nVidia drivers, which I try to avoid because not all my machines (or the ones I supply the packages to) have an nVidia card;

3) Because of '1' and '2', when a kernel upgrade happens (only once since 12.3 or the 12.3 RCs; I am really not sure when it happened), I am thrown to the shell prompt (which I enjoy, but that is a completely different subject). On MS Windows this does not happen, but I did experience BSODs from time to time (rare on XP, very rare on newer versions), and they were never related to video driver issues. Both systems developed their own ways to cope with driver problems: rollback on MS products, and shell + update/revert/blacklist on the Linux side (a rough sketch of that routine follows after this list). I prefer the latter, but I am very aware that the former, at least to me, seems like a more foolproof solution for inexperienced users; and yes, I always install the "recovery prompt" and the like on Windows "just in case" (not only on my machines but on any I may have to support);

4) Things "break" on linux hardware support. With openSUSE 11.4, I was able to use an old nVidia FX 5200 PCI card to happily coexist with an integrated intel video driver through xinerama. It all stopped to work on 12.3 no matter how hard I tried by editing xorg.conf (after be sure it was properly picked), had to bite the bullet and buy a new card. The same thing worked flawlessly on xp, only after hacking with vista; it was "no way" with windows 7. As explained on second paragraph, I am aware that X is not part of linux kernel;

5) The common argument that if something is supported by the Linux kernel then it is going to work properly on newer versions is bollocks. For network devices, storage devices and other very important server subsystems it may be true, but I had lots of problems with video capture devices, first and foremost because the kernel drivers are only part of the "integrated" solution: you also need a proper interface working in user space, and some of them just stop working because GTK or KDE got updated and the old application is not compatible with them. To be fair, this also happens on Windows, but for this specific case I do find there is better support, not only in driver options but in applications as well, even though they (the hardware/software manufacturers) push the user to buy a new kit when a new MS Windows version rolls out.

6) Some of the "pro" arguments towards use of MS Office and Adobe Creative Suite are also ridiculous. The former is only needed for some quite few cases but if you ask, almost all say that they "need!" it, press them to tell what specific feature they may miss and watch things go funny. Photoshop and Illustrator can be very successfully replaced by gimp and inkscape on web development on most cases also. Problems start to pop if you need to work on press industry though. As a side note, I like more Scribus than InDesign for small projects that requires only PDF generation (probably because of years of Pagemaker use).

Where I think Linux really shines is in process automation. It is really hard to beat when you properly assemble a string of operations to be performed by scripts (bash, make and all). Perhaps something equivalent could be accomplished in the MS camp with PowerShell, but I don't see "scriptable" imprinted in the DNA of the MS application stack, which limits what can be accomplished. So, for people like myself who just love to make most things "set and forget" (or almost, of course), there is probably a kind of leaning towards Linux, I guess.
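A trivial sketch of the kind of thing I mean, assuming a made-up project path and backup host:

    #!/bin/bash
    # nightly "set and forget" pipeline: pull sources, build, package, archive
    set -euo pipefail
    cd /srv/projects/mytool                  # hypothetical project directory
    git pull --quiet
    make clean all
    rpmbuild -bb packaging/mytool.spec       # produce the RPM
    rsync -a ~/rpmbuild/RPMS/ backuphost:/srv/rpms/
    echo "$(date -Iseconds) build OK" >> /var/log/mytool-build.log

Drop that in a cron entry and it just runs, night after night.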

Reply Parent Score: 6