Linked by Thom Holwerda on Sat 23rd Dec 2006 00:48 UTC, submitted by dumbkiwi
KDE This is a response to the article yesterday on the progress of GNOME and KDE. Aaron Seigo, a lead KDE developer, sets out the current state of progress with KDE 4, pointing out that KDE 4 is on track for a release in mid 2007. "Thom points to a quote from me that our goal is to have a 4.0 ready sometime in the first half of next year. That gives us until sometime in June and I'm still thinking we can make it."
Thread beginning with comment 195691
RE[2]: A lot of crap on that blog
by rayiner on Sat 23rd Dec 2006 18:35 UTC in reply to "RE: A lot of crap on that blog"
rayiner Member since:
2005-07-06

KWin is built to be able to use XRENDER or OpenGL, so hardware acceleration will be there for all. Many effects can't be done with XRENDER, I suppose, but even with Linux you can't expect Vista-like effects on Win '95-era hardware...

As I said, composited windowing is 2002-level technology. The technologies offered by Vista and Leopard (and even Tiger today) go way beyond that.

Arthur, the rendering engine in Qt 4, is much more capable (and mature) than Cairo, so KDE doesn't have to wait for Cairo to get decent performance.

Arthur has the same problem Cairo does. If it goes through XRender, it can't do all of Vista's and OS X's pixel-shader-based tricks. If it goes through OpenGL directly, it'll hit the DRI stack's limitations on context switching and concurrent rendering.
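For what it's worth, the backend question is invisible at the API level, which is part of the problem. A minimal Qt 4 sketch (the widget and the path are made up purely for illustration):

// Minimal Qt 4 sketch: the app just talks to QPainter. Whether Arthur
// rasterizes this in software, goes through XRender, or uses GL is
// decided entirely below this API.
#include <QApplication>
#include <QPainter>
#include <QPainterPath>
#include <QWidget>

class PathWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent *) {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing); // coverage-based AA
        QPainterPath path;
        path.moveTo(20, 80);
        path.cubicTo(40, 5, 110, 5, 130, 80);    // one cubic bezier segment
        p.fillPath(path, Qt::darkBlue);          // backend chosen under the hood
    }
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    PathWidget w;
    w.resize(150, 100);
    w.show();
    return app.exec();
}

Everything that limits the backend lives below that fillPath() call, so fixing Arthur alone can't route around the XRender/DRI limitations.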

Reply Parent Score: 3

superstoned Member since:
2005-07-07

Well, we're getting more technical than my knowledge can support, but AFAIK Arthur can have several rendering backends. As a lot of work is going into X.org lately, couldn't that fix this deficit? After all, Zack Rusin has been put to work on X.org technology and its integration with Arthur. In time, they may be able to extend things like XRENDER, AIGLX or other extensions to enable the stuff Vista and Mac OS X have as well.

Reply Parent Score: 2

rayiner Member since:
2005-07-06

The problem is that Arthur or X.org is just one piece of the stack, while the solution to accelerated drawing and compositing cuts through the whole stack.

Consider how Vista handles drawing a path in a window: the app calls into Avalon to draw and fill a path. Avalon decomposes the mathematical form of the path and stores it into a texture. It then instructs D3D to draw a polygon encompassing the projection of that path onto the window, and associates with this polygon a special pixel shader that will later handle the actual rendering of the path. This data is packaged up and handed off to the D3D driver, which effectively virtualizes the GPU, managing the render command streams from the various apps and the textures to which those apps render. Once the driver dispatches the app's command packet, the GPU loads the shader and draws the polygon; the shader reads the geometric data of the path from the texture and fills in the correct pixels on the polygon to render the path.

Much of this technology is not there yet on the Linux side. Consider how Cairo handles rendering a path: the app instructs Cairo to stroke and fill a path. Cairo tessellates the path into trapezoids and sends the data to the X server via the RENDER extension. Then RENDER rasterizes the data, in software, to the window's pixmap, and Compiz comes along and uses GLX_EXT_texture_from_pixmap and OpenGL to composite the window to the front buffer. The only OpenGL client in this scenario is Compiz, and the DRI handles a single client just fine.
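To make the client side of that concrete: everything after the cairo_fill() call below (the tessellation into trapezoids, the RENDER request, the software rasterization) happens behind the API. A minimal sketch, with the window setup and coordinates made up for illustration:

// Minimal sketch of the client side of the pipeline described above.
#include <cairo.h>
#include <cairo-xlib.h>
#include <X11/Xlib.h>
#include <unistd.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, 150, 100, 0, 0,
                                     WhitePixel(dpy, scr));
    XMapWindow(dpy, win);
    XSync(dpy, False);

    cairo_surface_t *surf =
        cairo_xlib_surface_create(dpy, win, DefaultVisual(dpy, scr), 150, 100);
    cairo_t *cr = cairo_create(surf);

    cairo_move_to(cr, 20, 80);
    cairo_curve_to(cr, 40, 5, 110, 5, 130, 80); // one cubic bezier segment
    cairo_close_path(cr);
    cairo_set_source_rgb(cr, 0.1, 0.1, 0.5);
    cairo_fill(cr); // tessellated to trapezoids, handed to RENDER from here on

    cairo_surface_flush(surf);
    XFlush(dpy);
    sleep(5); // keep the window up long enough to look at it

    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    XCloseDisplay(dpy);
    return 0;
}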

Note that there is nothing particularly wrong with this model. You can achieve very high-quality and adequately fast rendering in software. You can make a very nice desktop on this model (it's basically exactly what Tiger does if you don't consider CoreImage/CoreVideo). However, it's not in the same league as Vista. You're still drawing vector graphics on the CPU, like NeXTStep did in the early 1990s.

In order to use Blinn's vector texture technique the way Vista does (really the only practical way that exists right now to do high-quality, anti-aliased, accelerated vector graphics on existing GPUs without comically high levels of multi-sampling), several pieces of this stack need to change.

1) Cairo needs a back-end that preserves the bezier information in the path and doesn't tessellate it before sending it forward. IIRC, there is a back-end interface in Cairo (or at least one being tested) that allows this.

2) RENDER needs to be able to pass the full geometric data from Cairo on to the GPU. There are a number of potential solutions. First would be extending RENDER to expose pixel shaders and more general textures. Second would be extending RENDER to let Cairo send it full bezier paths and the associated fill information (a hypothetical sketch of what such a request might look like follows this list). Third would be to ditch RENDER and use GL directly.

3) The DRM needs to be able to efficiently manage the memory of all of these window textures, which will be used in ways that differ from how textures are used traditionally. For example, when windows are resized, textures will be allocated and freed much more rapidly than a system designed for more conventional applications might expect.

4) Depending on the solution to (2), the DRM might need to handle large numbers of GL clients more efficiently. Specifically, if (2) is solved by ditching RENDER and having Cairo render via GL directly, it will need to deal with the fact that instead of a few GL contexts, you're suddenly dealing with one for every window. You might get around this by using indirect GLX, and then multiplexing the multiple GLX streams onto a single GL context owned by the X server.
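To make option (2) concrete, here is a purely hypothetical sketch -- RENDER has no such request today -- modeled on the real XRenderCompositeTrapezoids() call, which is how pre-tessellated geometry reaches the server now. The type and function names are made up:

#include <X11/extensions/Xrender.h>

/* HYPOTHETICAL ONLY: what a bezier-path request for RENDER might look
 * like, by analogy with the real XTrapezoid/XRenderCompositeTrapezoids().
 * Instead of pre-tessellated trapezoids, the client hands the server the
 * un-tessellated curves and lets the server (and eventually the GPU)
 * rasterize them. */
typedef struct {
    XFixed x0, y0;   /* start point          */
    XFixed x1, y1;   /* first control point  */
    XFixed x2, y2;   /* second control point */
    XFixed x3, y3;   /* end point            */
} XBezierSegment;    /* made-up type         */

/* Made-up request: fill one closed path given as a list of cubic
 * segments, with the fill rule and AA quality taken from the Picture. */
void XRenderCompositeBeziers(Display *dpy, int op /* PictOpOver, ... */,
                             Picture src, Picture dst, int xSrc, int ySrc,
                             const XBezierSegment *segs, int nsegs);

The hard part isn't the protocol; it's that the server then needs a path to the GPU for rasterizing those curves, which is exactly points (3) and (4).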

Reply Parent Score: 5

segedunum Member since:
2005-07-06

As I said, composited windowing is 2002-level technology. The technologies offered by Vista and Leopard (and even Tiger today) go way beyond that.

I suppose the root question, as an end user, would be: what would the more advanced approach buy me, and would I be able to tell the difference?

Reply Parent Score: 3

sbergman27 Member since:
2005-07-24

"""I suppose the root question, as an end user, would be what would the more advanced approach buy me,"""

As admin of about 50 Linux desktops, I've been asking myself the same thing while reading through this thread. The conclusion I have come to is "absolutely nothing". My users need to browse the Web, send and receive email, do word processing and spreadsheets, and run a curses-based accounting package. The local users do this via regular old X protocols from thin clients; the remote users come in over NX.

I'm not saying that these technologies don't have their uses. But I don't see where they mean diddly to me and my business users. And I think that the kind of users I support are probably the more common case.

Reply Parent Score: 3

rayiner Member since:
2005-07-06

I think hardware-accelerated compositing technology is going to be one of the biggest steps forward in user interfaces in the last decade. Apple is just scratching the surface of what's possible with the technology. The benefits range from aesthetics to efficiency. First of all, not-ugly is generally better than ugly, all else being equal.

Second, things like animation can provide subtle cues that reduce the cognitive load on the user. For example, many people find multiple desktops in VirtueDesktops useful in a way they never found multiple desktops on other systems to be. That's because the animated transitions help them keep track of the spatial arrangement of the various desktops.

Third, scalable graphics offer substantial potential for compressing large amounts of information; things like Exposé are just the start. Imagine an IDE that used Exposé-like techniques for browsing source files, used vector graphics to display complex class and call-graph hierarchies, and automatically scaled the most relevant data up to be readable while scaling less relevant data down to fit more information on the screen.

Fourth, even if you don't consider these things useful, you can't argue that the lower-tech approach is comparable to Vista's. It might be inferior along dimensions that aren't important to you, but it is still objectively inferior in those dimensions.

Reply Parent Score: 5

archiesteel Member since:
2005-07-02

Arthur has the same problem Cairo does. If it goes through XRender, it can't do all of Vista's and OS X's pixel-shader-based tricks.

I followed this discussion, and while I understand that Vista has access to some pixel shaders that are trickier to use under Linux, I do wonder how much those pixel shaders are actually used by Vista, and how much more they bring.

In my experience as an end user, I'm not sure I would notice much difference. What can't easily be noticed won't be missed much, even if it takes a year more to implement...

Seriously, there's stuff in Beryl now that is well ahead of what will be in Vista when it ships. Sure, the technology is better and there'll be third-party add-ons for all kinds of cool effects, but those won't be available right away, and they won't be part of the default package. If anything, it'll be like a more advanced version of WindowBlinds: cool stuff, but try convincing your IT department to install it on your workstation.

Unless I'm missing something fundamental, it doesn't seem to me that this slight advantage for Vista will make that much of a difference.

What do I know, anyway... I'm just happy to have a hardware-accelerated exposé effect on my Kubuntu laptop. That's more than enough for me! :-)

(I have to admit I'm starting to like the exposé-like function even *more* than virtual desktops...)

Reply Parent Score: 5

rayiner Member since:
2005-07-06

I followed this discussion, and while I understand that Vista has access to some pixel shaders that are trickier to use under Linux, I do wonder how much those pixel shaders are actually used by Vista, and how much more they bring.

Pixel shaders are at the core of Vista's 2D rendering architecture. When Avalon renders a path on the screen, it does not tessellate it into triangles and then have the GPU render the triangles. Instead, it draws a polygon that covers the projection of the path, and uses a pixel shader to fill in the regions that are inside the path, ignore the pixels that are outside the path, and anti-alias the regions that are on the edge of the path.
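The standard reference for this trick is Loop and Blinn's GPU curve-rendering paper, and the per-pixel test is tiny. Here it is written out in plain C++ for illustration; on the GPU it runs as a pixel shader with (u, v) interpolated by the hardware and the derivatives coming from ddx()/ddy(). The function and parameter names are mine, not Avalon's:

#include <algorithm>
#include <cmath>

// Per-pixel test for a quadratic bezier edge: each vertex of the polygon
// carries texture coordinates (u, v) chosen so the curve maps onto
// u^2 = v. Inside the path, f = u^2 - v is negative; outside, positive.
// Returns coverage in [0,1]: 1 inside, 0 outside, fractional on the
// edge -- and that fraction is the anti-aliasing.
float curve_coverage(float u, float v,
                     float dudx, float dudy,  // screen-space derivatives of u
                     float dvdx, float dvdy)  // screen-space derivatives of v
{
    float f = u * u - v;                      // implicit form of the curve
    // gradient of f in screen space, via the chain rule
    float fx = 2.0f * u * dudx - dvdx;
    float fy = 2.0f * u * dudy - dvdy;
    // approximate signed distance to the curve, in pixels
    float sd = f / std::sqrt(fx * fx + fy * fy);
    // map a one-pixel band straddling the curve onto a 0..1 coverage ramp
    return std::min(1.0f, std::max(0.0f, 0.5f - sd));
}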

This allows them to achieve very high-quality coverage-based anti-aliasing without using any full-scene anti-aliasing on the scene itself. That's a huge win, because even 16x FSAA (which incurs a huge memory cost and is only supported on the latest cards) can't touch the quality of a good coverage-based anti-aliasing system like the one in Cairo's software renderer.
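The quality gap is easy to see in miniature: 16 samples can only ever produce 17 distinct alpha levels along an edge, while analytic coverage is exact. A toy comparison (the edge and sample grid are made up for illustration):

#include <cstdio>

// Exact coverage of the unit-square pixel by the region below the edge
// y = slope * x + offset (valid while the edge stays inside the pixel):
// the integral of the line's height over x in [0,1].
double exact_coverage(double slope, double offset) {
    return offset + slope * 0.5;
}

// 16x supersampled estimate on a regular 4x4 grid: count sample points
// below the edge. Only 17 distinct results are possible (0/16 .. 16/16).
double fsaa16_coverage(double slope, double offset) {
    int hits = 0;
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i) {
            double x = (i + 0.5) / 4.0, y = (j + 0.5) / 4.0;
            if (y < slope * x + offset)
                ++hits;
        }
    return hits / 16.0;
}

int main() {
    // Sweep the edge upward through the pixel: exact coverage ramps
    // smoothly, while the 16x estimate climbs in visible 1/16 steps.
    for (int k = 0; k <= 10; ++k) {
        double off = k * 0.05;
        std::printf("offset %.2f  exact %.3f  16x %.3f\n",
                    off, exact_coverage(0.25, off), fsaa16_coverage(0.25, off));
    }
    return 0;
}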

Without pixel shaders, using the GPU for rendering while maintaining quality becomes a rather difficult exercise. Basically, you either just punt and do everything in software (what Cairo currently does on AIGLX), or you use OpenGL only very late in the pipeline (what XGL does), thus giving up a lot of the potential benefit of using the GPU.

Note that this point doesn't just apply to pixel shaders. What happens when GPUs get advanced enough that you can just up and run most of pixman (Cairo's software renderer) on the coprocessor? How are you going to facilitate THAT through XRENDER?

The difference between Vista and Linux's stack as it stands today is like the difference between OS X Jaguar and Vista: it's the difference between using the GPU just for some desktop effects and leveraging the GPU for the whole graphics pipeline. You can get a very good desktop with just the former (GNOME + Compiz is a VERY good desktop), but you're also giving up a ton of potential (not to mention losing the feature war).

Edited 2006-12-24 16:29

Reply Parent Score: 4