Trolltech’s Zack Rusin
today introduced Glucose on the freedesktop.org Xorg mailing list. Glucose is a new “OpenGL based acceleration architecture” that “uses XGL code”. It does not require any changes on the driver side, except a call to glucoseDriverInit.
With this addition X is basically fully 3d accelerated for capable drivers!
Also see Zack’s blog post: http://zrusin.blogspot.com/2006/08/glucose-and-graphics.html
Now, if only nvidia would supply some compatible drivers…
You mean ATI; Nvidia drivers work flawlessly with Linux and XGL. They have even reached performance parity with their Windows counterparts.
ATI drivers are not good performers for the latest games, but they sure work great with XGL. I’m using a “probably unsupported” Mobility X1600 with SLED10 and the 8.26 or 8.27 drivers.
EDIT: with 8.26 XGL takes ages to start; with 8.27 that’s fixed.
With XGL I’m unable to use all OpenGL extensions for games and Celestia, so is Glucose the solution for that?
It would be great if desktop effects could be disabled on the fly, when some OpenGL app is started, or if they could coexist at full speed and with all extensions.
Edited 2006-08-16 21:58
But this is an extension to aiglx, not XGL.
That, and Nvidia still hasn’t gotten around to supporting Xorg 7.1, which I’m assuming this is part of.
I read this as an acceleration architecture that bridges the AIGLX server to use the XGL rendering paths. Is this right? I thought that AIGLX already takes care of OpenGL acceleration. If it doesn’t, then what does it do?
So, can anyone make this situation any clearer?
The way I understand it, AIGLX doesn’t accelerate the entire desktop. Glucose will provide acceleration for legacy applications.
AIGLX allows the server to do OpenGL rendering, which means that window managers can use OpenGL to do cool effects without pulling the images down from the server, playing around with them, then copying them into DRI memory as textures and displaying them. Instead, window managers can use an X pixmap as a GL texture directly, and tell the X server to do all the GL manipulations on it.
As you can imagine, this makes GL drawing using pixmaps a hell of a lot more efficient, but doesn’t actually do too much to accelerate the GL drawing.
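To make the saving concrete, here is a rough back-of-the-envelope model of the per-frame traffic a compositor avoids once it can bind a pixmap as a texture directly. The window size and the two-copy assumption are mine for illustration, not from the thread or any driver documentation:

```python
# Hypothetical model: bytes a compositor moves per window update, with and
# without something like GLX_EXT_texture_from_pixmap.

def bytes_without_tfp(width, height, bpp=4):
    """Old path: read the window image out of the server, then upload it
    back into video memory as a texture -- two full copies per update."""
    window_bytes = width * height * bpp
    return 2 * window_bytes  # readback + texture upload

def bytes_with_tfp(width, height, bpp=4):
    """Pixmap-as-texture path: the pixmap already lives in video memory
    and is bound as a texture directly, so no per-update copy is needed."""
    return 0

# A 1280x1024 window at 32 bpp:
print(bytes_without_tfp(1280, 1024))  # 10485760 bytes (~10 MB) per update
print(bytes_with_tfp(1280, 1024))     # 0
```

At even a handful of updates per second on a few large windows, the old path pushes tens of megabytes a second over the bus, which is roughly why effects felt so heavy before this extension existed.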
Then again, Glucose won’t necessarily make things all that much faster, since 2d drawing isn’t actually all that much slower than 3d (and, in fact, on many cards it might be faster, since you don’t have to handle all those pesky 3d transformations). But it _does_ mean that you don’t have to do context switching between 2d and 3d, and it also means you don’t need 2d drivers anymore. That means the X developers can work much harder on good 3d drivers, with less code to worry about, which means more performance and fewer bugs.
Good for all of us =)
…necessary all too soon, since I have to say, the 3d acceleration situation doesn’t seem to be too good on Linux in my eyes.
I have an HP notebook with a Radeon 9000 GPU. The OSS drivers work, but aren’t supported by XGL (all they do is produce pretty static images and an X11 hangup), and the official ATI drivers (if you dare call that software junk a driver) do nothing but generate very pretty plasma effects all the time.
I should have bought something with an Nvidia GPU.
When XGL came out, it didn’t work on my system with a GF 4800 either. Video playback was just unusable, even with mplayer’s gl2 renderer, the animations froze from time to time, and so on.
Now, months later, it all works flawlessly. This software just has to mature, and until it works right with every “popular” graphics chip, it won’t become mandatory anyway.
Additionally, there is to date no software that relies, or will rely, on these techniques. Take Cairo as a prominent example: though it is the rendering backend of GTK, it supports multiple backends itself and runs fine without any GL, too.
Edited 2006-08-16 15:37
Highly unlikely.
None of these technologies negate the possibility of doing things the good old-fashioned way; they simply add compositing “on top” of it.
And when it comes to rendering, both Qt and GTK still support software drawing better than GL-based drawing, and likely will for a very long time.
I wouldn’t expect to need GL to run X within the next 5-10 years. Even Vista is going to be backwards compatible to work without 3d capabilities.
And the proprietary Nvidia drivers don’t work at all with AIGLX, which is the way forward, since it doesn’t toss out decades of workingness by writing a new server, breaking everything on the way.
You see, the Nvidia drivers are closed source, which means that nobody but Nvidia can add support for cool new GL extensions that make all these fun effects possible. Hell, Nvidia still hasn’t done the work (i.e., the single recompile) needed to get their drivers working with the new ABI for Xorg 7.1. (The ABI break was to allow much, much faster text rendering, and to unbreak video with Xgl and AIGLX. If Xorg hadn’t broken driver ABI, you’d be stuck with slightly glitchy video and slower text.)
AFAIK we are supposed to see a release fairly soon with proper 7.1 support, but GLX_EXT_texture_from_pixmap isn’t going to be supported until a 9XXX release (whenever that happens)
Yeah. Fast service – 3 months to release a recompiled driver. An indeterminate future date before we get AIGLX support, which has also been around – and built by default – for at least 3 months. Don’t closed, un-updatable modules rock your world?
And the proprietary Nvidia drivers don’t work at all with AIGLX, which is the way forward, since it doesn’t toss out decades of workingness by writing a new server, breaking everything on the way.
Nvidia supports AIGLX; they were behind AIGLX from the start, preferring its approach over Xgl.
What’s lacking is the support for some of the new GL features, but nVidia was also clear that they wouldn’t provide support until the standards were finalized.
So what we have here are people throwing standards out and expecting nvidia to jump, when all they said was make up your minds on how you want it addressed and we’ll address it.
As for the 7.1 ABI issues, they’ll be releasing another version in the 87xx series to address that rather than keeping everyone waiting until 9xxx is ready with full support for all the goodies.
They’ve been very open in working with the Xorg folks, and have made no commitments they haven’t fulfilled. If the fact that they haven’t dropped everything and rewritten their driver to follow a previously moving target is an issue, you’d be better off learning how to reverse-engineer and code your own drivers, or finding another alternative.
AIGLX is part of Xorg 7.1, which has been available for months. The Test 2 version of FC6 already has it. AFAIR, the Xorg 7.1 change was also made with input from some Nvidia developers.
Nvidia supports AIGLX, they were behind AIGLX from the start, preferring its approach over Xgl
They don’t support it if it doesn’t work.
What’s lacking is the support for some of the new GL features, but nVidia was also clear that they wouldn’t provide support until the standards were finalized
So you are telling us that you can’t test the new features before having a full theoretical spec? So no evaluation of corner cases or design errors; everything is done “in theory”?
Welcome to the world of proprietary closed drivers …
Fortunately, they had Intel chips to test all of this on and to create some standards with. If it hadn’t been for Intel cards, we would still be far behind.
Notice how everything is tested and works on Intel chips as soon as an alpha is released.
Sorry, but NVidia could have been crystal clear about everything; that still doesn’t make them look any good to me.
So what we have here are people throwing standards out and expecting nvidia to jump, when all they said was make up your minds on how you want it addressed and we’ll address it
Yeah right; again, you’re trying to make others look bad and NVidia look good. Sorry, they’re still the bad ones here.
As for the 7.1 ABI issues, they’ll be releasing another version in the 87xx series to address that rather than keeping everyone waiting until 9xxx is ready with full support for all the goodies
We’ve been waiting for months already, you know! I’ve had render acceleration disabled for some time now.
They’ve been very open in working with the xorg folks, and made no commitments they haven’t fulfilled
Oh my! You even managed to use the word “open”. If I were a conspiracy theorist, I would say you’re an astroturfer for NVidia.
But you’re right, the problems we have now with the closed binary NVidia drivers are the same ones we had earlier with these drivers, when they crashed constantly for months, for example.
If the fact that they haven’t dropped everything and rewritten their driver to follow a previously moving target is an issue
I still fail to see what has moved in the target since a year ago.
The Mesa project implemented this in two months’ time, and no, they’re not on it full time.
AIGLX is basically just an implementation of hardware accelerated remote X rendering. This is different from direct rendering using DRI or similar mechanisms, as those speak directly to the card. AIGLX basically makes the X server send the different GL requests to the driver on behalf of the remote application.
XGL is an X server that uses OpenGL for all drawing.
Now, combining those two using glucose, what happens is that you have the advantages of AIGLX, where you can redirect the GL rendering requests to render to a texture on which you can then operate to achieve various effects, and the advantages of XGL, which accelerates normal X drawing using OpenGL.
What you get finally is an X server that draws absolutely everything using OpenGL, with working local and remote OpenGL applications like games and screensavers, that is also able to redirect everything to textures in order for compositing managers to operate on them.
I hope I got all that right.
Edited 2006-08-16 15:45
Sounds very interesting – and hopefully optional, heh.
The beauty in this design is that if implemented correctly, it should all be optional. The point of glucose is to add GL accelerated render paths using XGL code to the X server in addition to the traditional 2d graphics path. So both should be available.
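A toy sketch of that idea (all names here are invented for illustration; this is not the real Glucose code, which lives inside the X server in C): the server keeps both render paths available and simply picks the GL one when the driver offers it, falling back to the traditional 2D path otherwise:

```python
# Hypothetical sketch of "optional" GL-accelerated render paths living
# alongside a traditional 2D path -- the design goal described above.

def fill_rect_software(op):
    # Traditional 2D path: always available.
    return f"2d: fill {op['w']}x{op['h']} rect"

def fill_rect_gl(op):
    # GL-accelerated path: only usable if the driver initialised GL support.
    return f"gl: fill {op['w']}x{op['h']} rect"

def make_renderer(gl_available):
    """Pick the accelerated path when the driver offers it, otherwise keep
    the traditional path -- both remain present, neither is mandatory."""
    return fill_rect_gl if gl_available else fill_rect_software

render = make_renderer(gl_available=False)
print(render({"w": 64, "h": 32}))  # falls back to the 2D path
```

The point of the sketch is only the dispatch: nothing in this design forces GL on drivers that don’t opt in, which is why it can be “all optional” as the poster says.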
AIGLX doesn’t really add any baggage, if simplified a lot it is just a bridge between glx requests and direct accelerated 3d rendering à la DRI and co.
There’s no support for 3dfx cards for any of this stuff, right? It’d be kinda nice to be able to run XGL or whatnot on my Voodoo 5…
It’s probably too old to ever be supported. Also, I think this kind of technology requires quite a lot of memory for textures.
It works on my Radeon 7000 16 MB card, and my Voodoo 5 is a 64 MB card, so memory wouldn’t be an issue I guess… oh well.
How little XFree86 did, and how much they delayed technologies like this.
Too bad they weren’t marginalized years ago. I believe all of these new enhancements and capabilities resulted from Xorg starting a new project while telling XFree86 to take a hike.
Why they even bother with new releases is beyond me.
Competing implementations will continue for a while, I am sure, and will only fuel the designs and possibilities. Eventually everything will come back to one system, I think, but it will be an exciting time for sure.
Glue everything together and you get yet another abstraction layer and bloated software. I think the best way is revolution: a new X server based on OpenGL. I don’t understand what AIGLX and this glue-something are for. Why not write a new server and base it on Xgl? Everyone now preaches “don’t rewrite”, but rewriting can bring better ideas, optimizations and a new view.
Why does Mac OS X have a well-working composite engine even using the VESA driver (tried with MacOSx86 – I know it’s illegal, I don’t have it installed now), while Linux can’t? Maybe it’s caused by the portability of Xorg and its support for things like the ATI Mach64 (if someone wants to use one, they can stay with Xorg, not a problem). Maybe it’s time to create an OS-specific X server that would perform best.
“why don’t write new server and base it on Xgl?”
AFAIK, XGL *IS* the new server…
But Xgl must run on top of Xorg. I mean, make it work independently of Xorg.
Xegl is the answer to your question.
Yeah, how is Xegl going btw? Any progress?
I’m one of those who really think Xegl is the way to go, instead of glue on top of glue on top of glue on top of old X.
I have nothing against old X and the work put into AIGLX and Glucose etc, on the contrary, it’s just that I think for bleeding edge machines it would be far better to just simply replace all that stuff with Xegl.. But I don’t have all the info, hell I hardly have any, so all I have to go on is what I’ve read about the different approaches. So I may very well be completely wrong..
As long as David Reveman is working on Xgl, he’s indirectly working on Xegl.
And no, I have no idea about the progress; as far as I know some other guys are working on it (but there’s no way to know how much time they’re spending on it).
why don’t write new server and base it on Xgl?
The problem with XGL is that it is a new X server, and as Zack pointed out in his posting we already have an X server that works pretty well.
Given the state of hardware acceleration in Linux and the wide range of hardware, I’ve always thought that a solution would have to be sought based on the current X server. It looks as though with AIGLX and Glucose, people will be able to have their cake and eat it – if they want to.
OK, but the current server doesn’t perform well, so maybe something should be changed or corrected. Why can Mac OS X perform so well, even without hardware acceleration (VESA driver)? I can’t understand it. I was shocked after installing Mac OS X.
This won’t be stable before Vista ships.
Linux falls behind again.
OS X performs so well, even on Vesa, because all windows are double buffered independently. Xorg doesn’t do this, and probably won’t ever do this properly without the addition of AIGLX or similar. It ‘just feels smoother’ when your windows don’t leave trails of stuff all over the screen, and cannot be seen to visibly repaint themselves, regardless of how slow the underlying system is.
Double buffering is expensive in terms of memory when you have lots of windows, and with non-accelerated compositing everything has to be stored in main memory and blitted to the framebuffer.
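As a rough illustration of that memory cost (the window count and sizes below are hypothetical, chosen only to make the arithmetic concrete):

```python
# Back-of-the-envelope cost of per-window double buffering: every
# redirected window needs its own off-screen buffer in addition to
# whatever ends up on screen.

def backing_store_bytes(windows, bpp=4):
    """Total off-screen memory for one extra buffer per window.

    `windows` is a list of (width, height) tuples; `bpp` is bytes per
    pixel (4 for 32-bit colour)."""
    return sum(w * h * bpp for (w, h) in windows)

# Ten 1024x768 windows at 32 bpp:
desktop = [(1024, 768)] * 10
mb = backing_store_bytes(desktop) / (1024 * 1024)
print(f"{mb:.0f} MB of backing store")  # 30 MB
```

Thirty megabytes was a large fraction of total RAM on the machines X originally targeted, which goes some way to explaining why per-window double buffering was never the default; on a 2006-era desktop it is much less of a concern.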
Accessing video card memory outside the framebuffer, when not using 3D textures, requires root privileges under Linux, so nobody has bothered to implement DGA for normal window operations in Xorg AFAIK – it’s only used in apps such as MPlayer with the correct drivers specified.
Complete lack of window manager/toolkit/X server synchronisation for repaints etc. means that X applications feel slower than their OS X counterparts, even if pixel fill rate is faster under the Xorg server. You see applications lag behind resize operations etc.
If you want a double buffered X display on Vesa today, try XDirectFB. There has actually been a solution for a double buffered, ‘true transparency’ X server for years, but it has failed to gain traction in a world dominated by Xorg. There are a few glitches, and CPU usage is also high with XDirectFB, but it works.
In short, Xorg is crap for 2D, has been crap for 2D for years, and will continue to be crap for 2D into the foreseeable future. Using a 3D accelerator and something like AIGLX (still far from ready for primetime) is the only way to get acceptable window drawing and compositing under Xorg/XFree86, otherwise you simply need another X server, such as XDirectFB or XGL.
I don’t know why people keep claiming Xorg is a decent solution, it’s not, and hasn’t been since we left the days of 8-bit colour and monochrome fonts behind. It largely works for most people, but it works poorly when it comes to providing the illusion of smoothness and speed.
There are drawbacks people can point to when considering implementing anything else, such as increased memory or CPU usage, and these reasons are constantly used to justify the current (crappy) state of the Xorg desktop, and to ensure that no real progress for users is made without forcing everybody to upgrade to 3D accelerators, which are neat toys but are largely unsupported under the Free/Open Source Software model.
I don’t really get the politics, or even the deepest technical details, but this is the situation as I understand it.
Well, you are probably misinformed, but I will tell you some things; I don’t know what you mean by saying 2D is crap under X.
There is already a Composite extension in Xorg, and there is a Render extension too, for 2D operations, and it works pretty well. There is EXA too, which accelerates and improves Render. Composite can use Render or OpenGL for compositing and double buffering, and in the end, when EXA, AIGLX and Glucose mature, it will be nice. Remember, those guys work for free (some of them get paid, too), and you are not making progress by acting like a troll, so please save your negative comments next time.
Hear, hear!
OS X performs so well, even on Vesa, because all windows are double buffered independently. Xorg doesn’t do this, and probably won’t ever do this properly without the addition of AIGLX or similar
Of course Xorg doesn’t do this. But GTK does. I don’t know if Qt does, but I’ve seen no problem in KDE nor in Gnome.
It ‘just feels smoother’ when your windows don’t leave trails of stuff all over the screen, and cannot be seen to visibly repaint themselves, regardless of how slow the underlying system is
I see, you’re one of those people who move complicated OOo document windows all day on top of a Firefox window full of dynamic content …
We don’t give a damn if this leaves trails on a 100 MHz PC; you just shouldn’t have this kind of unproductive behaviour with a slow PC.
When X came out, people weren’t doing stupid things like that.
Accessing video card memory outside the framebuffer when not using 3D texture requires root privileges under Linux so nobody has bothered to implement DGA for normal window operations in Xorg AFAIK – its only used in apps such as MPlayer with the correct drivers specified
What is this nonsense? What kind of troll is that? DGA always meant the app using it had to be suid root to work, was used mainly for fullscreen (in apps needing fullscreen, like, yes, MPlayer, but not only), and has been deprecated for a LONG time. NVidia actually removed DGA support from its closed drivers some time ago, leaving all kinds of emulators unable to go fullscreen anymore with those drivers. So what’s this BS about “nobody has bothered to implement DGA for normal window operations in Xorg”? Of course “nobody” is stupid enough to implement things with a deprecated technology.
Complete lack of window manager/toolkit/X server synchronisation for repaints etc. means that X applications feel slower than their OS X counterparts, even if pixel fill rate is faster under the Xorg server. You see applications lag behind resize operations etc.
You see that in extreme conditions, yes. You never or rarely see it in normal operation, though. That’s why, for years, I wondered how come some people had so many redraw problems. Then I understood that only the very vocal trolls had these problems, or zealots promoting OS X.
Even my users with an old 1 GHz Duron and 256 MB RAM tell me their PCs fly (with Mandriva, not the fastest distro around), which I find amazing.
In short, Xorg is crap for 2D, has been crap for 2D for years, and will continue to be crap for 2D into the foreseeable future
Define crap, please. Man, would you believe this crap has been used to make films, with CinePaint …
Using a 3D accelerator and something like AIGLX is the only way to get acceptable window drawing and compositing under Xorg/XFree86, otherwise you simply need another X server, such as XDirectFB or XGL
Which is complete BS. The problem is caused by hardware architecture. And no, it’s not limited to Xorg/XFree86, Windows has this problem too.
Compositing is the only way to eliminate this problem with current heavy 3D texture oriented graphic cards.
I don’t know why people keep claiming Xorg is a decent solution, it’s not, and hasn’t been since we left the days of 8-bit colour and monochrome fonts behind. It largely works for most people, but it works poorly when it comes to providing the illusion of smoothness and speed
That’s because you’re a troll, so you can’t understand. Of course, you’re not an authority on the matter, so the fact that you don’t understand doesn’t mean that Xorg is not decent or not right. The fact that you don’t understand could mean a lot of other things though, like that you’re an idiot, or that you should just really use something before spouting crap.
There are drawbacks people can point to when considering implementing anything else, such as increased memory or CPU usage, and these reasons are constantly used to justify the current (crappy) state of the Xorg desktop, and ensure that no real progress for users is made without forcing everybody to upgrade to 3D accelerators which are neat toys but are largely unsupported under the Free/Open Source Software model
Your talk sure calls for real progress! I don’t know what Xorg did to you; it’s just a display server!
While you talk though, people provide the lacking support you talk about, and provide us with good solutions. Too bad you find them crappy.
>>OS X performs so well, even on Vesa, because all
>>windows are double buffered independently. Xorg doesn’t
>>do this, and probably won’t ever do this properly
>>without the addition of AIGLX or similar
>Of course Xorg doesn’t do this. But GTK does. I don’t
>know if Qt does, but I’ve seen no problem in KDE nor in
>Gnome.
That’s completely wrong. Neither GTK nor Qt can do anything about the lack of per-window double-buffered surfaces in the X server. Care to back this up?
>I see, you’re one of these people that move
>complicated OOo document windows all day on top of a
>Firefox window full of dynamic content …
>We don’t give a damn if this leave trails on a 100 MHz
>PC, you should just not have these kind of
>unproductive behaviour with a slow PC.
>When X came out, people weren’t doing stupid things
>like that.
Oh, I see, so the problem exists, but it’s my fault as a user for seeing it? Of course! Why didn’t I think of that! Now, could you explain just why I shouldn’t expect to be able to move any window on my desktop around smoothly? And why I’m not justified in thinking that is suboptimal?
>You see that in extreme conditions yes. You never or
>rarely see it in normal operation though. That’s why
>for years, I wondered how come some people had so much
>redraw problems. Then I understood that only the very
>vocal trolls had these problems, or zealots promoting
>OS X.
Right, so this problem exists too, but you can’t admit it’s a real problem because… ?? I don’t know? I don’t get how you can admit all the problems I pointed out and still tell me I’m wrong.
>>Using a 3D accelerator and something like AIGLX is
>>the only way to get acceptable window drawing and
>>compositing under Xorg/XFree86, otherwise you simply
>>need another X server, such as XDirectFB or XGL
>Which is complete BS. The problem is caused by
>hardware architecture. And no, it’s not limited to
>Xorg/XFree86, Windows has this problem too.
>Compositing is the only way to eliminate this problem
>with current heavy 3D texture oriented graphic cards.
What did I say there that was BS? There is no reason an X server can’t provide fast double-buffered, true-transparency graphics, even with ancient 1990s-era 2D accelerators. What is the hardware limitation you speak of? You then go on to say exactly what I said re: fixing Xorg by using a system like AIGLX. You haven’t done a very good job of justifying your ‘BS’ label on this one.
>>I don’t know why people keep claiming Xorg is a
>>decent solution, it’s not, and hasn’t been since we
>>left the days of 8-bit colour and monochrome fonts
>>behind. It largely works for most people, but it
>>works poorly when it comes to providing the illusion
>>of smoothness and speed
>That’s because you’re a troll, so you can’t
>understand. Of course, you’re not an authority on the
>matter, so the fact that you don’t understand doesn’t
>mean that Xorg is not decent or not right. The fact
>that you don’t understand could mean a lot of other
>things though, like that you’re an idiot, or that you
>should just really use something before spouting crap.
Hey, I was just responding to a poster who asked why Mac OS X could do what Xorg can’t on a VESA framebuffer.
You’ve called my claims BS, you’ve called me an idiot, and you’ve called me a troll, told me not to spout crap, and indicated that the only people who have a right to express an opinion on this subject are ‘the authorities’.
After admitting that Xorg has the problems I specified, and with the exception of the DGA issue (which I really used to illustrate the total lack of coherent design with regard to accessing video card resources under Linux, but I’ll give you that one), you’ve totally failed to make any kind of useful refutation at all.
Maybe you’re happy with a windowing system that feels sluggish and can’t support moving and resizing windows without visual artefacts; maybe you just have really low standards. That’s fine, but there’s no need to go round slinging abuse. You’re as much of a troll as I am, and your burying your head in the sand re: Xorg’s deficiencies doesn’t help any more than me pointing them out the way I did.
So, let me step back a bit and say this.
I’m sorry if I came across as a troll or caused offence. My characterisation of Xorg as ‘crap’ was wrong; that was a poor choice of words and I shouldn’t have put it that way.
I realise people smarter than myself are working on it, and I actually do use Xorg every day at work, and I’m very grateful that there is any free software solution at all for graphical display.
However, I think the facts presented in my post re: Xorg’s current limitations around compositing, synchronisation and double buffering support are quite valid, and I’d be interested in anyone who can directly and honestly refute them, as I would quite like to go back to using Xorg on my primary desktop OS.
I’ve been thinking about it for a while now, and I’ve come to the conclusion that XGL, Xegl, AIGLX, etc. all fundamentally have the wrong architecture. They all still layer XRender on top of OpenGL, and that’s not a long-term solution. XRender is not scalable. It’s based on a 1980s model of rendering that hides the underlying programmability of the hardware. Sure, you can add to XRender to start exposing that programmability, but then you’re basically maintaining a secondary API to OpenGL. That’s not scalable.
XRender, as it is, is not sufficient for *today’s* requirements, much less tomorrow’s. The foremost problem is anti-aliasing. Anti-aliasing is just plain expensive on today’s cards, which use multi-sample AA. It increases your memory usage (as a result of the larger Z-buffer) enormously. Compare the screenshots of Cairo’s software AA and Glitz’s 4x MSAA. Cairo’s is much, much better. It’s not until 16x MSAA that it starts to look comparable (though even 16x MSAA still shows fewer gray levels), and even many high-end GeForce cards don’t do that, period.
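A simplified model of why multisampling is so memory-hungry (buffer layouts vary by card; the per-sample storage assumption below is a common approximation, not any vendor’s spec): with MSAA, colour and depth are stored per sample, so the framebuffer grows roughly linearly with the sample count:

```python
# Rough framebuffer-size model for multisample anti-aliasing.

def framebuffer_bytes(width, height, samples, color_bpp=4, depth_bpp=4):
    """Bytes for a framebuffer where each pixel stores `samples`
    colour samples and `samples` depth samples."""
    per_pixel = samples * (color_bpp + depth_bpp)
    return width * height * per_pixel

base = framebuffer_bytes(1600, 1200, samples=1)
aa16 = framebuffer_bytes(1600, 1200, samples=16)
print(base // (1024 * 1024), "MB without AA")   # 14 MB
print(aa16 // (1024 * 1024), "MB at 16x MSAA")  # 234 MB
```

A sixteen-fold jump like that would not even fit on most 2006-era 128 MB or 256 MB cards, which is the practical force behind the poster’s point that 16x MSAA is out of reach for a desktop.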
The proper solution is the one Microsoft chose with Avalon. Get the windowing system out of the business of graphics APIs, and expose the underlying 3D API to the application. Then you can use techniques like vector texturing to get relatively cheap anti-aliasing for vector graphics without having to use MSAA. More importantly, you can have the OpenGL folks do the whole graphics API design thing, instead of having the window server do it. It’s a long-term solution, yes, but that’s okay. In the near term, we really don’t need hardware-accelerated vector graphics. Hardware-accelerated compositing of software-rendered vector graphics is still quite usable (OS X Tiger still does that), and application developers are still getting a handle on vector graphics in UIs, so it’ll be a while before graphics become so rich that software can’t handle them adequately.
I’ve been thinking about it for a while now, and I’ve come to the conclusion that XGL, Xegl, AIGLX, etc. all fundamentally have the wrong architecture. They all still layer XRender on top of OpenGL, and that’s not a long-term solution. XRender is not scalable
OK, good. So you have the theory. Meanwhile, we use what exists today, waiting for your practical solution.
XRender, as it is, is not sufficient for *today’s* requirements, much less tomorrow’s
Which doesn’t mean XRender is not useful or can’t be improved …
The proper solution is the one Microsoft chose with Avalon. Get the windowing system out of the business of graphics APIs, and expose the underlying 3D API to the application
What BS is that? Exposing the underlying API (be it X11 or OpenGL) is already something GTK+ and Qt allow, and always have.
Why would you take the windowing system out of the graphics API business? This is nonsense and a huge step backward.
Then you can use techniques like vector texturing to get relatively cheap anti-aliasing for vector-graphics without having to use MSAA. More importantly, you can have the OpenGL folks do the whole graphics API design thing, instead of having the window server do it. It’s a long-term solution, yes, but that’s okay
How is this even going to work? Like, for a text editor?
In the near term, we really don’t need hardware accelerated vector-graphics. Hardware-accelerated compositing of software-rendered vector-graphics is still quite usable (OS X Tiger still does that), and application developers are still getting a handle on vector graphics in UIs, so it’ll be awhile before graphics become so rich that software can’t handle them adequately
But why not have EVERYTHING accelerated? Why isn’t that better? Especially since it’s backward compatible.
For the time being, most of the clever effects we’re seeing through the Compiz engine work equally well under AIGLX and Xgl. I have Xgl running on my Nvidia 6600GT, and AIGLX running on my laptop’s Intel GMA910.
Both handle Compiz well, both run very smoothly.
So in the short term, both solutions can co-exist peacefully.
When I first heard about AIGLX, I thought the Xgl approach was better, but for the time being, Xgl still relies on a stock Xorg server. Until Xgl breaks with Xorg completely, the value of having a “brand new” X server won’t be realized.
It’s not all that “brand new”. The DIX is still the same.
EDIT: Which is a good thing really. Most of what the DIX does is connection handling, authentication, resource sharing and management, etc, and it’s not particularly clear that there are better ways to do it than what X is doing.
Edited 2006-08-17 02:30
Can someone help me out here? I’m a little confused by all of this.
I get that Xgl(x) is a new XServer that uses OpenGL to draw everything. I get that AIGLX is an extension to the “old” XServer to let the Window/Compositing Manager use OpenGL to do funky stuff.
I also understand that Cairo is a library for drawing vector graphics that is used by GTK, and will eventually be used by Mozilla for rendering everything. Cairo can use standard Xlib calls, or instead use something called Glitz to talk to OpenGL directly.
(I have no idea what Qt does, but I expect it’s changed in Qt4.)
So now we have this thing called Glucose, which uses OpenGL to accelerate regular 2D drawing functions. Does that mean that Glitz is now unnecessary? Or that you’d get 2D-drawing-via-OpenGL whether you used Glitz or not?
I understand the way it’s done on OS X, I think, so maybe somebody can give this analogy some marks out of 10: the X Composite extension, together with something like XCompMgr, is roughly like the original Quartz compositing — it looks nice and smooth, but pretty much everything is actually done via the CPU, or the 2D parts of the graphics card. Xgl/AIGLX are like Quartz Extreme, in that they use OpenGl to do all the compositing. And Glucose (and Glitz?) are like the new Quartz 2D Extreme that will probably be released properly with Leopard, that uses OpenGL to draw the contents of the window as well as for just moving the windows around.
*Tristan wanders away before his brain explodes*
(I have no idea what Qt does, but I expect it’s changed in Qt4.)
Qt uses the layer underneath it to draw buttons and send user input back to the application.
kde
qt
xlib (client side)
xserver
opengl
gfx card
I believe that’s the correct stack.
“I don’t know why people keep claiming Xorg is a decent solution,”…
“It largely works for most people”
Welcome to the definition of decent.
Hey, mono AM radios and 1-megapixel cameras largely work for most people too, but given the alternatives available, I wouldn’t describe them as ‘decent’.
So can somebody explain the distinctions between Xorg, AIGLX, XGL and now Glucose?
From what I understand, Xorg is the basic 2D X server, which supports OpenGL, but only for direct rendering within each application’s windows… so in order for X itself to be rendered using OpenGL, we now have…
a) AIGLX. A new level of bloat, as it’s another X server running essentially to handle indirect OpenGL rendering requests.
b) XGL. A replacement Xorg which renders everything using OpenGL *directly.
.. and now
c) Glucose. A new level of uber-bloat? Sounds like you need to have AIGLX and XGL installed?
wtf? I’m confused.
You are confused.
AIGLX is an extension to the existing X server, not a separate X server — it’s the one you’re already running, with a new feature: the capability of GL rendering through GLX. As it’s an extra feature, programs (like compositing window managers) need to take advantage of it.
Xglx is an EXTRA X server that runs on top of another X server (like Xorg). It’s a hackish and rather bloated way to do it: you’re running a program on top of Xorg that behaves like a fullscreen OpenGL application. It draws everything through OpenGL, no matter what.
Xegl is a complete X server implemented on top of EGL, a rewritten window-system API for OpenGL. EGL is much improved over the GLX that is in Xorg, but has almost no driver support. It also draws everything through OpenGL, no matter what.
Glucose forces everything to render through OpenGL, like Xglx and Xegl, but doesn’t require the new EGL API or an X server running on top of Xorg. It’s basically Xglx without having to run a second X server.
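For reference, the key mechanism AIGLX exposes to compositing managers is the GLX_EXT_texture_from_pixmap extension: a window’s server-side pixmap gets bound directly as a GL texture, with no copy to the client. This is only a sketch of my own (fbconfig selection and error handling elided, and it obviously needs a running X server with the extension), not code from any of these projects:

```c
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glxext.h>

/* Bind an X window's backing pixmap as a GL texture -- the core of
 * what an AIGLX-style compositing manager does per window. */
void bind_window_as_texture(Display *dpy, GLXFBConfig fbconfig,
                            Pixmap pixmap, GLuint texture)
{
    const int attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    GLXPixmap glxpixmap = glXCreatePixmap(dpy, fbconfig, pixmap, attribs);

    /* Extension entry points must be resolved at runtime. */
    PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
        (PFNGLXBINDTEXIMAGEEXTPROC)
        glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");

    glBindTexture(GL_TEXTURE_2D, texture);
    glXBindTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT, NULL);
    /* ...then draw a textured quad with the window's contents... */
}
```

That’s the difference from the old Composite-only approach: the texture never leaves the server/driver, so no round-trip through client memory.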
I’m wondering: given the limited DPI of our current screens, font hinting helps a lot in getting nice fonts on screen.
But for font hinting to work, the rasterizer must know where the pixels are — so do these 3D-based architectures still allow font hinting to work?
Discussions like these are why I think X needs a big cleaning session. As in, a redesign from the ground up…
This all seems completely chaotic and redundant.
… Which is the reason OSS works -so- well.
Like any ecosystem, all entities undergo constant evolution; the strong thrive and the weak die out and get replaced.
-If- I wanted to live in a monolithic world, where only a single solution to each problem is allowed, I’d use Windows instead of Linux.
Yes, having Konqueror, Firefox and Opera all doing essentially the same thing is a waste of valuable resources, but on the other hand, we will never know how many amazing ideas died out at IBM, Microsoft and Sun just because some steering committee deemed them useless or ineffective…
Think about it: why develop OpenBSD when FreeBSD is being developed? Why bother spending resources on Linux when Solaris is open?
Simple answer? Because you can.
For some reason, especially for things like X, the “weak” don’t seem to die out and get replaced. If that happened, I guess X would drop more old stuff.
I think the XFree86→Xorg fork is helping this happen… especially the effort at modularization. IMO the biggest pain-in-the-ass compile on a Gentoo system used to be X. It was one huge monolithic build; if it failed halfway through you were screwed and had to recompile from scratch. It has since been separated out into neat, modularized little packages.
The other day I realized my modular X system didn’t have the program xsetroot. At first I was outraged that I didn’t have this simple little program as part of a default install. I realized, though, that most people probably don’t use xsetroot much any more (in favor of fancy DEs that do this using other methods), and if you don’t want xsetroot and a bunch of other little redundant programs that look like they’re from the ’80s, you don’t have to have them! Better yet, if someone comes up with (just an example) a better/faster/prettier xsetroot, they can implement it, and it doesn’t have to be some kludgy hack onto a monolithic X build.
It’s getting there… I think there will be a bit of a swell as everyone makes little glue layers to make things work, but the nice part of the modularity is that excess parts can be trimmed out, and the X server will eventually get the parts it needs to work AND be efficient. I hope.
.. It’s not that easy.
Due to the nature of X, you cannot just drop compatibility with other X implementations (like the 12-year-old HP-UX machine I connected to two days ago).
So in order to maintain that compatibility, you add things on top of X instead of adding them inside X.
BTW, if you have ever had the displeasure of building XFree86, you’ll appreciate the huge effort that was sunk into X.org 7.0 in order to make it modular.
AIGLX, Xorg, Xegl, Glucose… many people ask what these are, which is better, which is the right one. Those are not the right questions. Better to ask:
“Why do these projects _exist_?”
Imagine the time has come and AIGLX and Glucose are integrated into Xorg, enabled by default and running stable. So let’s call all of that simply the Xorg server. Forgetting about the individual projects makes life easier (that’s where I am leading you). And so we have an Xorg server rendering *everything* in OpenGL. Recall what Zack said:
“Furthermore my plan is to provide a smooth transition for apps that would like to mix Xrender with GL, with Glucose it’s a rather simple thing to do.”
If Xorg uses OpenGL for everything, then developers will be able to support only the 3D drivers and users will still be able to run Xorg. The 3D drivers will get stable.
But we will still be using the 20-year-old, huge X server full of bad code. Once the hardware works (drivers) and users like the eye candy, it’s time to make developers happy too…
…and introduce Xegl. Again, an X server drawing everything through 3D, but brand new, with better code, using the EGL API, and *not including* 2D acceleration (XAA, EXA) *at all*.
Use the existing Xorg server to:
1. make 3D drivers work, proprietary ones also (required by Xegl),
2. then make people get excited about what X can do (spread the gossip of an OpenGL X server),
3. then make the DEs integrate the visual effects (which depend on an OpenGL X server),
and as the last step: change to Xegl when everything else is ready and waiting for it (and Xegl itself is also ready).
That is the story for graphics, but let’s not forget that X controls the hardware (mice, keyboards, video cards, monitors) too. Input handling and modeline setting (input hotplug — mice, keyboards; and output hotplug — adding monitors, projectors) are being worked on in the existing X server so they can be well tested and widely used, and the solutions will eventually migrate to Xegl.
Please listen to http://tllts.info/archives/tllts_125-03-01-06.ogg, where Zack explains everything, and where I, by the way, got all of this. You don’t need the whole show — [28:15-72:30] will be enough.
Edited 2006-08-17 16:09
Hey.
The link http://tllts.info/archives/tllts_125-03-01-06.ogg doesn’t work unfortunately. I couldn’t find any other link to the file either. Could you somehow host the file somewhere for a short period of time?
Yes :) ftp://212.7.222.172 — part of the show, at 38 KB/s.
Don’t be nervous or greedy.
thanks