Linked by Thom Holwerda on Mon 14th Mar 2011 18:59 UTC
Talk, Rumors, X Versus Y And over the weekend, the saga regarding Canonical, GNOME, and KDE has continued. Lots of comments all over the web, some heated, some well-argued, some wholly indifferent. Most interestingly, Jeff Waugh and Dave Neary have elaborated on GNOME's position after the initial blog posts by Shuttleworth and Seigo, providing a more coherent look at GNOME's side of the story.
Thread beginning with comment 466326
RE[6]: F**k this shit!
by oiaohm on Wed 16th Mar 2011 03:38 UTC in reply to "RE[5]: F**k this shit!"
oiaohm
Member since:
2009-05-30

"A userspace driver frameworks its far more stable.


So all drivers are better off in userspace?
"

nt_jerkface, good question.

Some drivers will currently perform badly if done in userspace, mostly due to context-switch overhead. But if there were enough demand, merging Kernel Mode Linux would become important; that basically fixes those performance issues completely.
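To give a rough feel for the overhead being discussed: every request to a userspace driver crosses the kernel boundary at least once, and you can see that cost by timing a trivial syscall against trivial in-process work. This is just an illustrative microbenchmark, not a measurement of any real driver path:

```python
import os
import time

def bench(fn, n=100_000):
    """Average seconds per call of fn over n calls."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Trivial in-process work: no kernel boundary crossed.
in_process = bench(lambda: len(b"x"))

# A trivial syscall: every call enters and leaves the kernel.
fd = os.open("/dev/null", os.O_WRONLY)
syscall = bench(lambda: os.write(fd, b"x"))
os.close(fd)

print(f"in-process call: {in_process * 1e9:.0f} ns")
print(f"syscall:         {syscall * 1e9:.0f} ns")
```

On a typical machine the syscall costs several times the plain call; a userspace driver pays that toll on every crossing, which is exactly what an in-kernel merge like Kernel Mode Linux avoids.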

There is a small number, like memory management, CPU initialisation and the initial video card bring-up, that simply cannot be done in userspace: basically the ones a base Linux kernel image will not run without. They are still drivers, even though they live inside the single-file kernel image.

All the loadable-module drivers, apart from a very small percentage (the ones that have to perform operations from ring 0, like virtualisation; ring 0 is not on offer to user-space for very good reasons), would in most cases be better off using the userspace APIs.

In fact, some drivers in the Linux kernel are being tagged to be ported to the userspace API simply to get them out of kernel space.

The most important thing about being built on the userspace API is this: if you are having kernel crashes and you suspect a driver, and your drivers use the userspace API, you can basically switch to a microkernel model and run the suspect driver in userspace. An application or driver crashing in userspace normally does not mean a complete system stop, which makes it a little simpler to find that one suspect driver.
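The recovery model described above can be sketched with an ordinary child process standing in for a userspace driver. The "driver" here is a made-up stub that crashes twice and then comes good; the point is that the supervisor (and the rest of the system) survives every crash:

```python
import os
import subprocess
import sys
import tempfile

def run_supervised(cmd, max_restarts=5):
    """Run a userspace 'driver' process, restarting it on crash.

    A crash takes down only the child process, never the
    supervisor or the rest of the system."""
    for attempt in range(1, max_restarts + 1):
        if subprocess.run(cmd).returncode == 0:
            return attempt  # how many runs it took to succeed
    raise RuntimeError("driver kept crashing; giving up")

# Stand-in "driver": crashes on its first two runs, then succeeds.
# It tracks its run count in a temp file.
counter = os.path.join(tempfile.mkdtemp(), "runs")
fake_driver = [sys.executable, "-c", (
    "import os, sys; "
    f"p = {counter!r}; "
    "n = int(open(p).read()) if os.path.exists(p) else 0; "
    "open(p, 'w').write(str(n + 1)); "
    "sys.exit(0 if n >= 2 else 1)")]

print(run_supervised(fake_driver))  # prints 3: two crashes, then success
```

A kernel-space driver offers no equivalent: its first "crash" is a kernel panic, and there is no supervisor left to restart anything.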

So why are most drivers not in userspace already? FUSE, CUSE and BUSE have only existed for the last 8 years. Drivers written before that were done in kernel space because there really was no other way that worked.

Next, kernel space does have some advantages, and those advantages explain why the API in there is unstable.

The main reason for using kernel space over userspace is speed. That speed has a price: a kernel-space driver can bring the system to a grinding halt with a minor error. There is no such thing as a free lunch.

Because kernel space exists for speed, any design error has to be removable at any time, so the APIs in kernel space are in flux. The BKL (Big Kernel Lock) was a classic example: a good idea at the time, but many years later it had to go. A stable kernel ABI based on internal kernel structs would have prevented removing it as quickly as it was removed. The very reason you use kernel mode is the reason Linux kernel mode is in flux.

So the deal you are choosing between with user-space and kernel mode is basically this.

Userspace: highly stable, no issues with future versions of the kernel, and unless something really rare happens it never crashes your computer (the driver just gets restarted). It is slightly slower, though depending on the device this may even be undetectable, and it can be cross-platform and cross-architecture at basically the same speed.

Kernel space: fast, but it can crash your computer with even the smallest error, and it will have issues with future versions of the kernel at some point due to ABI/API/locking changes. It is normally not cross-architecture or cross-platform; when it is, it is usually as slow as calling the userspace API from kernel mode, or worse, slower than using the userspace interfaces in the first place (at which point, what is the point?).

Note that these trade-offs apply to Windows, Linux and Solaris to different degrees. Linux, with its faster major-version cycle, shows the "issues with future kernel versions" problem more often. A lot of people remember getting Windows 7, or XP before it, and finding a stack of devices that no longer worked safely, i.e. adding the driver upset the computer.

The risks of kernel space are why Linux people want the source code in there, so it can be fully audited. Basically: do you like the blue/red screen of death, or the Linux kernel panic? If not, you really should agree with what the Linux developers want. Even MS is giving up on the idea; most of the gains of kernel space are not worth the loss in stability.

The way I put it: closed-source driver makers wanting to use kernel space are like someone carrying possible drug gear into an airport while refusing to have their private areas inspected. Basically, inspection should be expected.

Do I expect driver makers, or any poor person who has to be inspected at an airport, to be happy about it? No, that would be asking too much. But they should understand why they got what they did.

Reply Parent Score: 2

RE[7]: F**k this shit!
by nt_jerkface on Wed 16th Mar 2011 18:12 in reply to "RE[6]: F**k this shit!"
nt_jerkface Member since:
2009-08-26

nt_jerkface good question.


It was a rhetorical question that both you and I know the answer to. GPU drivers are the big stinking elephant in the room and there is also the reality that most hardware companies would prefer to write a proprietary binary driver for a stable interface.

The way I put it: closed-source driver makers wanting to use kernel space are like someone carrying possible drug gear into an airport while refusing to have their private areas inspected. Basically, inspection should be expected.


Why not let users decide? The security paranoid can use basic open source drivers and everyone else can use proprietary drivers.

Like most defenses of the Linux driver model you ignore problems resulting from that model that users continually face.

The user-space interface is only mostly stable. Every year you defend the Linux driver model, and every year a new Linux user gets some trivial device broken by an update and goes back to Windows or OSX.

There is no Linux distro that can be trusted to auto-update itself along with a typical desktop application suite and a basic set of peripherals. No Linux distro has a reliable record. Linux would have far more than 1% market share if the people at the top were concerned with building a system that balances the needs of users, open source advocates and hardware companies. But Linux is designed by open source advocates with little regard for users or hardware companies. Linus doesn't want proprietary drivers in his precious kernel and will gladly sacrifice market share to achieve this goal.

Reply Parent Score: 2

RE[8]: F**k this shit!
by smitty on Wed 16th Mar 2011 18:42 in reply to "RE[7]: F**k this shit!"
smitty Member since:
2005-10-13

Why not let users decide?

They do decide. They decide to update to a new kernel/distro every 6 months rather than sticking with LTS/Enterprise distros that freeze the API for years and test binary drivers extensively. Users decide that's not important to them.

Reply Parent Score: 2

RE[8]: F**k this shit!
by oiaohm on Wed 16th Mar 2011 23:45 in reply to "RE[7]: F**k this shit!"
oiaohm Member since:
2009-05-30

"nt_jerkface good question.


It was a rhetorical question that both you and I know the answer to. GPU drivers are the big stinking elephant in the room and there is also the reality that most hardware companies would prefer to write a proprietary binary driver for a stable interface.
"

This in fact shows how little you know. A GPU driver is itself two halves: one that preps code for the GPU, and one that controls where the GPU reads and writes memory.

The code prep is in most cases actually a speed boost when done in userspace. But because the GPU can be made to write back anywhere in memory, it is critical that the memory-management controls stay in kernel space.

Nvidia, of all things, has both the code prep and the memory management in kernel space. The result is a more unstable, more harmful driver than it should be, since bugs in processing instructions for the GPU can crash the complete kernel. Worse, that instruction processing can receive anything from user-space. Processing instructions for a GPU is a highly complex operation; it is very much like running a compiler in kernel space, which is not wise at all.
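A toy illustration of why the memory-management half has to be auditable: the userspace half can emit arbitrary commands, so the kernel-side half must validate every write target against the buffers actually mapped for that client before the GPU touches memory. The command format and address ranges here are invented for the sketch, not any real driver's interface:

```python
# Toy model: userspace submits (write_addr, length) commands;
# the kernel-side half rejects anything outside the GPU buffers
# it has actually mapped for this client.
ALLOWED = [(0x1000, 0x2000), (0x8000, 0x9000)]  # mapped buffer ranges

def validate(cmd):
    """True if the command's write stays inside one mapped buffer."""
    addr, length = cmd
    return any(start <= addr and addr + length <= end
               for start, end in ALLOWED)

def submit(cmds):
    """Kernel-side gatekeeper: drop commands that would let the
    GPU scribble outside the client's own mappings."""
    return [c for c in cmds if validate(c)]

batch = [(0x1000, 0x100),   # fine: inside the first buffer
         (0x7ff0, 0x100),   # rejected: straddles unmapped memory
         (0x8800, 0x800)]   # fine: fills the rest of the second buffer
print(submit(batch))        # keeps only the first and last command
```

If this gatekeeping logic were a closed-source blob, nobody could audit whether it actually rejects the second command, and a single missed check hands the GPU a write into arbitrary kernel memory.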

Would giving up the secret of memory management for their GPU really expose their trade secrets? No, it would not, since that information is already known.

Remember I said there are some items that are not suitable. Closed-source memory management is one of them, since it can create security flaws so easily. What needs to be in kernel space is normally exactly the stuff that must be 100 percent audited for a secure system.

ATI/AMD, VIA and all the other video card makers have accepted that the memory management for video cards and the instruction processing for video cards should be split. Nvidia is the only holdout. Nvidia's GPU drivers on Windows Vista and 7 are also not designed to the MS specs requesting that the two parts be split.

The elephant in the room is not GPU drivers. It is Nvidia, and that is a problem for all OS makers, not just Linux. Instructions given for the stability of the OS are not being obeyed by Nvidia.

Very little really needs to be in kernel space, and whatever is not required there, driver makers can keep as closed source as much as they like. "Mostly stable" is not true at all.

Besides, the mainline Linux kernel is only lacking distribution support to have the longterm kernels provide a kernel ABI that closed-source drivers could use. Yes: distributions not messing with longterm kernels, and agreeing to use the same compiler.

The userspace API provided by the Linux kernel is a workaround for the lack of cooperation from lots of distributions. It also provides increased stability in many cases.

The simple fact of the matter, nt_jerkface, is that you don't know the topic, and every argument you have made is a dead end, not based on the facts of the situation. Each time, you end up leading particular parties toward doing the wrong thing on all platforms.

Reply Parent Score: 3