Linked by Thom Holwerda on Sat 23rd Dec 2006 00:48 UTC, submitted by dumbkiwi
KDE This is a response to the article yesterday on the progress of GNOME and KDE. Aaron Seigo, a lead KDE developer, sets out the current state of progress with KDE 4, pointing out that KDE 4 is on track for a release in mid 2007. "Thom points to a quote from me that our goal is to have a 4.0 ready sometime in the first half of next year. That gives us until sometime in June and I'm still thinking we can make it."
Thread beginning with comment 195719

well, we're getting more technical than my knowledge can support, but afaik Arthur can have several rendering backends, and a lot of work has been going into those lately. couldn't that fix this deficit? after all, Zack Rusin has been put to work on graphics technology and its integration with Arthur. in time, they may be able to extend things like XRENDER, AIGLX or other extensions to enable the stuff Vista and Mac OS X have as well.

Score: 2

rayiner

The problem is that Arthur is just one piece of the stack, while the solution to accelerated drawing and compositing cuts through the whole stack.

Consider how Vista handles drawing a path in a window: the app calls into Avalon to stroke and fill the path. Avalon decomposes the mathematical form of the path and stores it into a texture. It then instructs D3D to draw a polygon encompassing the projection of that path onto the window, and associates with this polygon a special pixel shader that will later handle the actual rendering of the path. This data is packaged up and handed off to the D3D driver, which virtualizes the GPU, managing the render command streams from the various apps and the textures to which those apps render. Once the driver dispatches the app's command packet, the GPU loads the shader and draws the polygon; the shader reads the geometric data of the path from the texture and fills in the correct pixels on the polygon to render the path.

Much of this technology is not there yet on the Linux side. Consider how Cairo handles rendering a path: the app instructs Cairo to stroke and fill a path. Cairo tessellates the path into trapezoids and sends the data to the X server via the RENDER extension. RENDER then rasterizes the data, in software, to the window's pixmap, and Compiz comes along and uses the GLX texture_from_pixmap extension and OpenGL to composite the window to the front buffer. The only OpenGL client in this scenario is Compiz, and the DRI handles that single client just fine.

Note that there is nothing particularly wrong with this model. You can achieve very high-quality and adequately fast rendering in software, and you can build a very nice desktop on this model (it's basically exactly what Tiger does if you don't count CoreImage/CoreVideo). However, it's not in the same league as Vista: you're still drawing vector graphics on the CPU, like NeXTStep did in the early 1990s.

In order to use Blinn's vector texture technique like Vista does (which is really the only practical way that exists right now to do high-quality anti-aliased accelerated vector graphics on existing GPUs that doesn't involve comically high levels of multi-sampling), several pieces of this stack need to be changed.

1) Cairo needs a back-end that preserves the bezier information in the path, rather than tessellating it before sending it forward. IIRC, there is a back-end interface in Cairo, or at least one being tested, that allows this.

2) RENDER needs to be able to pass the full geometric data from Cairo through to the GPU. There are a number of potential solutions. The first would be to extend RENDER to expose pixel shaders and more general textures. The second would be to extend RENDER so Cairo can send it full bezier paths and the associated fill information. The third would be to ditch RENDER and use GL directly.

3) The DRM needs to be able to efficiently handle managing the memory of all of these window textures, which will be used in ways that are different to how textures are used traditionally. For example, when windows are resized, textures will be allocated and freed much more rapidly than a system designed for more conventional applications might expect.

4) Depending on the solution to (2), the DRM might need to handle large numbers of GL clients more efficiently. Specifically, if (2) is solved by ditching RENDER and having Cairo render via GL directly, it will need to deal with the fact that instead of a few GL contexts, you're suddenly dealing with one for every window. You might get around this by using indirect GLX and multiplexing the GLX streams onto a single GL context owned by the X server.

Score: 5

superstoned

sounds like a lot of work, but it also sounds doable.

i don't think it's weird that a new Windows release (after 5 years...) puts the linux desktop behind on some stuff... but not on everything; there are areas in which linux has the lead.

no idea how long it'll take to catch up where linux is behind, but i think it will be before the next Windows version. and by that time the areas in which linux is ahead will have been improved as well. overall, as i said before, the doom scenario Thom painted for us is imho overdone.

btw, interesting write-up. do you have any idea how this stands with Arthur? for Cairo, it sounds like it "just" needs another painting backend...

Score: 5