Linked by Thom Holwerda on Fri 16th Dec 2005 14:57 UTC, submitted by mlauzon
Windows "Microsoft will move the graphics for its next version of Windows outside of the operating system's kernel to improve reliability, the software giant has told Techworld. Vista's graphics subsystem, codenamed Avalon and formally known as the Windows Presentation Foundation, will be pulled out the kernel because many lock-ups are the result of the GUI freezing, Microsoft infrastructure architect Giovanni Marchetti told us exclusively yesterday."
Thread beginning with comment 74154
RE: It's about time
by prismX on Fri 16th Dec 2005 15:36 UTC in reply to "It's about time"

It looks like people have short memories. Graphics drivers did not run in kernel mode until Windows NT 4. They were moved there to boost performance, since graphics were rendered in software at the time. Now graphics are rendered by hardware, so there is no need to run these drivers in kernel mode.
Btw, in Linux, graphics drivers do not run in user mode either, as a monolithic kernel requires drivers to be compiled against the specific kernel.

Reply Parent Score: 5

RE[2]: It's about time
by on Fri 16th Dec 2005 15:42 in reply to "RE: It's about time"

Actually, with Linux and X, only a small part is in the kernel: basically just enough to talk to the video card and to the X driver. I'm sure Microsoft has the same kind of thing, only standardized for all drivers.

Reply Parent Score: 0

RE[3]: It's about time
by oxygene on Sat 17th Dec 2005 08:53 in reply to "RE[2]: It's about time"

Though XFree86 (and its derivatives) needs root privileges to gain I/O port access (and various other things), which is basically the same as a barely restricted kernel level: the video driver could just as well flood the IDE I/O ports, since no protection is enforced there.

The only reason they need kernel drivers every now and then is for interrupt handling, DMA and the like: stuff that ends up in a kernel trap and needs handling from there on.

There are designs that make this more secure: miniports on Windows NT (around for a decade or so, and many drivers actually run in userland already, so it's not that much of a change for Microsoft), BeOS's accelerants, and KGI on Linux (shot to death before it was really born).

Reply Parent Score: 1

RE[2]: It's about time
by on Fri 16th Dec 2005 19:23 in reply to "RE: It's about time"

For clarification: graphics *drivers* have always run in the kernel on Win32 systems. The real questions are how much of GDI and the other GUI machinery has run in kernel space, and how much of what a graphics driver currently does in kernel space used to be done in user space.

BeOS has always had a very sane way of doing this: a minimal amount of driver code sits in kernel space, and an "accelerant" called from the App_Server (the user-space program that controls the GUI and does all drawing) does the rest, which simplifies writing graphics drivers while adding stability. The more you can remove from kernel space, the better the stability and the easier it is to create drivers, and to create correct drivers. It may have some performance overhead, but in BeOS it doesn't appear to be a major performance eater.

Now, what will be very interesting to see is how Microsoft's move of putting more back into user space affects 3D acceleration performance. But if you think about it, when 3D rendering is GPU-limited, even this move won't make a significant difference on modern hardware.

Reply Parent Score: 0

RE[3]: It's about time
by rayiner on Fri 16th Dec 2005 19:40 in reply to "RE[2]: It's about time"

The performance implications of user-space graphics depend a lot on the underlying hardware. As hardware has gotten more advanced, the effective latency to it has gone up, making more indirect methods of access relatively less expensive.

Consider a primitive 2D accelerator. It has some memory-mapped I/O registers that applications write to directly to draw primitives. Now, in a protected system, you cannot have the graphics driver running in the client's address space, so what you do is have the client call into the kernel and let the kernel driver do the I/O writes. Putting this in the kernel performs better in this scenario than having a server, because you avoid a context switch.

Now consider a modern 3D accelerator. You can program it via MMIO, but to achieve the best performance you have to use the DMA engine and control the GPU via command packets. In this model you can put most of the graphics driver, the part that constructs the actual command packets, in user space. This is very fast (a local library call is faster than either IPC or a kernel call) and still secure (the user-space driver cannot actually bang registers and cause a system crash). Then you have a kernel call (or an IPC; it doesn't matter a whole lot, because command packets are big enough that they don't get sent very frequently) that verifies the command packet and programs the DMA engine to upload it to the GPU. This overall mechanism is both secure and quite fast: once you're constructing command packets anyway, a little overhead in moving them around doesn't matter so much.

Reply Parent Score: 2