David Reveman of Novell shares his thoughts on Xgl, Metacity, and more. “I’ve been getting a lot of mail from people asking me about my thoughts on AIGLX, the GL compositing work being done on Metacity and nVidia’s XDevConf paper. Instead of replying to everyone individually, I thought I’d send a mail to the Xorg list.”
That was a good read, considering that everyone was bashing XGL after AIGLX was released. It also clears up why Nvidia prefers AIGLX: AIGLX requires high quality drivers, which Nvidia provides, although ironically Nvidia’s drivers do not yet work with AIGLX. XGL, on the other hand, works with a wider variety of drivers, and it already works with Nvidia’s.
And it works fine with ATI drivers.
Great post.
I hope people would stop seeing this as an XGL vs AIGLX dispute and look beyond it.
I hope people would stop seeing this as an XGL vs AIGLX dispute and look beyond it.
True. I think like many things in the OSS world, competition will ultimately make both projects better and the technology will eventually converge. That is the best part of OSS.
I do appreciate the comment about how Compiz is desktop agnostic. This is definitely the way to go, rather than trying to work on separate KWin/Metacity patches and structural changes.
I appreciate the need for choice in the OSS environment, but having a common core for the window manager is really the only sensible way to go. It’s fresh, new, fit-for-purpose, GL-by-design code. Definitely re-use what can be reused from Metacity et al., but a fresh start is often required to drive forward innovative and fast-evolving software.
PS – David, thank you for Compiz and XGL. Also thanks to the people who packaged it for Ubuntu Dapper – it’s lovely.
It’s depressing that an opportunity like this comes along, one that gets a lot of community support and motivates everyone, and yet if all we have to show for it in six months’ time is a more fragmented desktop, then it’s not worth the price.
For God’s sake, we need to stop fighting each other and start fighting the enemy.
The appropriate people in Novell and Red Hat have done very well so far at explaining why each is better without resorting to bitterness or flaming; do you know why? Because each is so certain of its approach that it will not budge from its current development path.
p.s. I completely love the OSS development model and how we are able to do this. Diversity is great. But a premise of the OSS model is that if something better comes up, then development could just as well change direction and move there, as decided by the community. This is Red Hat vs Novell, not OSS vs OSS; neither is budging, and the sooner we realise that, the sooner we can come up with a third, truly OSS option.
For God’s sake, we need to stop fighting each other and start fighting the enemy.
You seem to miss the whole point of open source. Start a company if you want to “fight” other people.
No, you seem to miss the whole point of open source.
I take offence at that. What is the point of OSS to you? To me it is freedom.
Start a company if you want to “fight” other people.
A company like Red Hat or Novell, who are now fighting for the best compositing infrastructure for the desktop?
OSS should have nothing to do with “fighting” or “enemies”. The point I think you are missing is that neither Red Hat nor Novell has any more influence over “the community” than any other developer. If they develop something and it works well, then “the community” will gravitate towards it. People will gravitate to the solution that works best for them. If that happens to be two solutions, then so be it. If one turns out to be better, then the community will gravitate to that. The not-so-good one will lose users and developer support, and disappear or be merged back into the better project.
You forget that this is open source, and that both projects can take stuff that they like and think is good from the other. Competition breeds innovation. The benefit that open source brings to that principle is that competitors can build easily on each other’s innovations, thus accelerating the pace of innovation. Also don’t forget that development resources aren’t a zero-sum game.
That’s why I think you’ve missed the point of OSS.
The point I think you are missing is that neither Red Hat nor Novell has any more influence over “the community” than any other developer. If they develop something and it works well, then “the community” will gravitate towards it.
Among the developers who currently work on core Gnome there is a good split between Red Hat, Novell, and a number of other companies whose employees actively work on Gnome, plus an amazing hobbyist developer community. Good work is done and Gnome benefits from it.
Then there is this interesting situation where there is a Gnome Foundation and there are Gnome releases, and the community decides (to some extent) which things are called core Gnome and make it into the platform and desktop releases, etc. Gnome makes releases to distributors. Gnome-based distributions have a level of consistency, thanks in part to a good community consensus that what’s in Gnome belongs in Gnome. Distributors customize things how they see fit, but a core exists, and this is what makes Gnome great.
Take a scenario where Novell goes with Compiz/Xgl and Red Hat goes with Metacity/libcm/AIGLX. There is immediately one clear element of Gnome which is different between distributions – the WM. Yes, I am all for choice and competitive advantage, but where will it stop? Forking Nautilus for greater integration with a CM? Who knows.
I guess my point is that the community is, in complex ways, influenced by the few. Gnome is not as neutral anymore as it wants to be. If your Gnome distribution uses some component X and a community is built around X, then whoever put X in there has influenced you. X could be the CM/WM or whatever.
That’s why I think you’ve missed the point of OSS.
I still disagree. The point of open source, which you explained as “…both projects can take stuff that they like … thus accelerating the pace of innovation…”, I agree with completely.
But I believe that OSS will not be this way forever without work. Keeping OSS the marvel that it is now takes more than sitting round patting our collective backs for what we have created. We must predict and observe and react. We must notice how OSS has changed and is changing and be aware. We must not be so arrogant as to think that OSS (the model, the development, the communities, the values and beliefs) is and will remain perfect this way till the end of time.
Take a scenario where Novell goes with Compiz/Xgl and Red Hat goes with Metacity/libcm/AIGLX. There is immediately one clear element of Gnome which is different between distributions – the WM. Yes, I am all for choice and competitive advantage, but where will it stop? Forking Nautilus for greater integration with a CM? Who knows.
But that is the very “freedom” you say we need to “fight” for. This is the core strength of open source. Anyone can take a project and fork it to their needs. You can’t have it both ways. You can’t say you need to “fight” for this freedom and then complain when people actually use it.
But I believe that OSS will not be this way forever without work. Keeping OSS the marvel that it is now takes more than sitting round patting our collective backs for what we have created.
Correct, but it involves people exercising the freedoms they have been given, which is exactly what Novell, Red Hat and ultimately the community will do. Freedoms only exist if they are exercised. You seem to want to control the exercise of the very freedoms you claim need to be “fought” for.
But that is the very “freedom” you say we need to “fight” for.
Incorrect. My comments regarding freedom in my other posts were about a bigger freedom. The freedom that we ultimately need to fight for is the freedom that is being taken from us without our knowledge: the freedom to use that which we own as we like (threatened by DRM) and the freedom to innovate and endeavour (threatened by copyright and patent laws). Those are the disappearing freedoms on the horizon.
Now, when I say that the OSS model is something worth protecting, that is with the mindset that OSS can help save us from those eroding freedoms. And when you say that I am in opposition to choice, that is incorrect. I only suggest that we must consider any given choice (e.g. fork vs no fork) in the context of “is this the best thing we can do to protect our disappearing freedom?”
FWIW I don’t know what the right choice is in this situation, but I will argue that unnecessary fragmentation is *sometimes* a poor choice. Time will tell if this is an example of “unnecessary fragmentation”.
For God’s sake, we need to stop fighting each other and start fighting the enemy.
And who might that be? I see no enemy. I see a great many friends though…
I see a great many friends though…
I agree. You are correct and I should have worded that better. I have been waiting for a post to the Xorg list for a long time, always hoping it would bring better news and when this post came I was frustrated and let down.
if all we have to show for it in six months’ time is a more fragmented desktop, then it’s not worth the price.
First of all none of this is true. I don’t see how this will create a more fragmented desktop at all. It’s not like only certain applications will be able to use one technology and other applications will only be able to use the other technology. Second, what price are you talking about? It’s free software.
For God’s sake, we need to stop fighting each other and start fighting the enemy.
There is no enemy. OSS is about creating free software, not about fighting anyone.
Because each is so certain of its approach that it will not budge from its current development path.
If you had actually read the article, you would know that this is not true. Work on DRI drivers will benefit both projects. The article simply states that the XGL creator found problems with the AIGLX team’s approach, before they released their code, while researching the best method himself. In the end, whoever is right will come out on top. It’s definitely possible that you will be able to pick either approach, and just as XFree was pretty much phased out of most distros, I’m sure the inferior approach to accelerated X will be phased out.
Diversity is great. But a premise of the OSS model is that if something better comes up, then development could just as well change direction and move there, as decided by the community. This is Red Hat vs Novell, not OSS vs OSS; neither is budging, and the sooner we realise that, the sooner we can come up with a third, truly OSS option.
It’s way too early to say this. Both projects have just been released. Time will tell which project is ultimately adopted. You need people and distros to try this out before the best method is determined. I also take issue with what you consider a “truly OSS” option. What is not open source about either approach?
Second, what price are you talking about? It’s free software.
I meant price figuratively speaking. Put it another way: what is the opportunity cost of the two? Also see the next point re: free as in beer vs free as in freedom.
There is no enemy. OSS is about creating free software, not about fighting anyone.
Do not think that OSS is JUST about free $. For many it is about freedom, and freedom needs fighting for.
If you had actually read the article
I have read every piece of literature on XGL/AIGLX, Compiz and Metacity/libcm around. I have tried both. I could have commented that “this will result in a better Xorg and DRI”, but I chose not to. Those are benefits of either solution, but what is the cost?
It’s way too early to say this. Both projects have just been released
Do you think it will become easier to change a project’s direction as the project gets older?
Do you think it will become easier to change a project’s direction as the project gets older?
I think it is impossible to choose the right implementation before it is widely tested. Your solution to implement yet another accelerated X goes against what you said previously anyway. OSS is the ultimate free market: prices are competitive (free) and the best implementation will win out. I don’t see the point in creating just one of everything; then there is no competition and improvements are slow and limited. Because of Gnome, KDE has improved rapidly, and vice versa. Both have taken elements from each other that were popular and both have integrated themselves much better with each other. Also take a look at the XFree/Xorg situation: XFree had been around forever and was the default installation for years, and then it was only a matter of months before Xorg took its place. There always seems to be a big fuss about competition in the OSS world but in the end it always benefits the end user.
I think it is impossible to choose the right implementation before it is widely tested
Agreed.
Your solution to implement yet another accelerated X goes against what you said previously anyway.
I do not suggest another accelerated X; I should have made that clearer. I am talking about the compositing side of it. As a Gnomer I see this as bad for Metacity and bad for compositing on Gnome. I agree with you completely regarding X. As the article and others suggest, both compositors should eventually be able to run on either accelerated X architecture anyway.
Also take a look at the XFree/Xorg situation: XFree had been around forever and was the default installation for years, and then it was only a matter of months before Xorg took its place
There is a lot more to it than that. The story of the X fork is very interesting. But had both X servers existed for the same amount of time, which distros would be using which?
There always seems to be a big fuss about competition in the OSS world but in the end it always benefits the end user.
Yes, in the long term. But in the short-to-medium term? A lot of hard-core Linux users evangelising/flaming about which is better while they tweak their config files and build from source, all while still trying to convince their mum that Linux is easy to use.
Don’t get me wrong, I look forward to 2 years’ time when this storm has passed, but until then? Batten down the hatches.
I do not suggest another accelerated X; I should have made that clearer. I am talking about the compositing side of it. As a Gnomer I see this as bad for Metacity and bad for compositing on Gnome. I agree with you completely regarding X. As the article and others suggest, both compositors should eventually be able to run on either accelerated X architecture anyway.
I don’t see how. Compiz works with both X servers. It works with KDE also. It seems like it is bringing things together more than anything.
There is a lot more to it than that. The story of the X fork is very interesting. But had both X servers existed for the same amount of time, which distros would be using which?
That would never happen, though. The reason it was only a matter of months is that almost everyone saw the benefits of Xorg right away, not only the license but also the direction they wanted to take.
Yes, in the long term. But in the short-to-medium term? A lot of hard-core Linux users evangelising/flaming about which is better while they tweak their config files and build from source, all while still trying to convince their mum that Linux is easy to use.
You’re speaking too soon. This could be over very soon; who knows. Even if it is not, who cares? Let them have it out. We need debate to find out which project is better, and with debate you’re always going to get some people who are out of control, especially on the internet. Gnome and KDE still endure this, but both of them are getting better all the time and it hasn’t done anything to hurt Linux. In fact, if there hadn’t been a split in desktops you wouldn’t even have Gnome; KDE has been around longer. Everything else you say has absolutely nothing to do with the discussion.
For God’s sake, we need to stop fighting each other and start fighting the enemy.
If anything will ever kill OSS, it’s this perception that it is some kind of battle. OSS is about freedom, not about eliminating other people’s freedom.
p.s. I completely love the OSS development model and how we are able to do this. Diversity is great. But a premise of the OSS model is that if something better comes up, then development could just as well change direction and move there, as decided by the community.
OSS development is about chaos, and that’s the way it should be. Developers should develop wherever they are inspired. Communities can try to add some structure and order to that chaos, but at the end of the day the community should not dictate anything; it should just choose the best of what’s available for whatever its goals are. I’d rather see 100 developers produce 99 useless projects and 1 single brilliant one than 100 developers harnessed to something 99 of them don’t completely support.
This is Red Hat vs Novell, not OSS vs OSS; neither is budging, and the sooner we realise that, the sooner we can come up with a third, truly OSS option.
So, you’re arguing that because the “community” is split between Red Hat and Novell sponsoring divergent projects, the solution is to create a third to somehow unify everything?
Bah, maybe I’m naive, but I like the way OSS works. It’s barely hit puberty relative to the rest of the industry; it needs some rebellion, it needs to stretch its limits, it needs to believe it can conquer the world, and it needs to be slapped in the head with a cold smack of reality from time to time. That’s the only way to truly grow.
Monocultures and structured behavior are great for stability, lousy for growth and innovation. That’s where you need diversity and sometimes reckless abandon. The slightest divergence, the most minor possible mutation in a single gene, can alter the course of evolution in unimaginable ways.
Let the developers keep doing what they do best and have faith that things will fall into place.
I’m afraid I am a little technically challenged, so please correct me if I’m wrong, but from my understanding Xgl is a replacement X server and AIGLX is a plugin for X.
If I’m right in my understanding then both seem like logical paths to me as long as support is there.
The sad part is that not all hardware vendors make high quality drivers. If the author is correct and AIGLX will only support high quality video drivers (only Nvidia’s), then the AIGLX model has little support for alternate hardware, and they need to write their plugin to support the non-high-quality video card drivers. Either way, both still sound like good solutions, but I just don’t like being limited in what hardware or desktop environment I can use.
The sad part is that not all hardware vendors make high quality drivers. If the author is correct and AIGLX will only support high quality video drivers (only Nvidia’s), then the AIGLX model has little support for alternate hardware, and they need to write their plugin to support the non-high-quality video card drivers
What is funny is that Nvidia’s drivers are not yet supported by AIGLX, but some DRI drivers are (namely ATI and some Intel drivers). There seem to be more non-working drivers and fewer features for AIGLX at the present time. This will most certainly change, but when?
http://fedoraproject.org/wiki/RenderingProject/aiglx
Yeah, I saw that. AIGLX sounds good, but where is the support? Without support, what good is it? The author might be making an extreme case in stating that currently nothing supports it; the project might be planning to make it more flexible so that more drivers can support it in the future. Or maybe they are extreme Nvidia fans?
Don’t get me wrong, I am an Nvidia customer myself, but I wouldn’t write something like this if only one brand of hardware could utilize it. Either we are missing some more info about AIGLX’s future support plans, or Red Hat has decided Nvidia will be the only choice.
Actually, since Nvidia actually makes decent drivers for their products for Linux, I guess they kind of are the only choice for Linux desktop users.
“Either we are missing some more info about AIGLX’s future support plans, or Red Hat has decided Nvidia will be the only choice.”
“Actually, since Nvidia actually makes decent drivers for their products for Linux, I guess they kind of are the only choice for Linux desktop users.”
Xorg runs on more platforms than Linux, which is why a vendor- and platform-agnostic solution is the only acceptable choice.
Short Term = Xgl
Medium Term = AIGLX
Long Term = Xegl
Right now, we have an X desktop that is (for the first time in history) “pretty” and capable of very nice 3D effects. Although I agree an X server (Xgl) running on top of a normal X server is somewhat hackish, it works excellently. Without an interim solution like Xgl, people will think Linux is “lagging behind” Windows/Mac OS in the graphics department and development will be stagnant.
Remember, you have to crawl before you can run.
-shameless plug from a happy Xgl/compiz/Ubuntu user.
Xglx does run on top of a normal X server.
Xgl is the more generic term.
My take on the Nvidia paper is that they are in the lead and they know it. They like the position they are in, and they want to maintain the status quo and not change anything. I did not feel that their technical arguments made much sense, especially the parts about TwinView and Quad Stereo – the Mesa EGL extensions were explicitly designed to support that type of feature. To me, the technical complaints felt like a smoke screen obscuring the “don’t upset the apple cart” true motivation.
As for duplication, I would much rather have a single bug-free, GL-based window manager than two three-quarters-finished, bug-ridden ones — finish a common one first and then go compete. Even though it has caused me to receive a great deal of personal criticism, I still stand by the quote from my original paper.
“My experience with the failure of Xegl has taught me that building a graphics subsystem is a large and complicated job, far too large for one or two people to tackle. As a whole, the X.org community barely has enough resources to build a single server. Splitting these resources over many paths only results in piles of half finished projects. I know developers prefer working on whatever interests them, but given the resources available to X.org, this approach will not yield a new server or even a fully-competitive desktop based on the old server in the near term. Maybe it is time for X.org to work out a roadmap for all to follow.”
I hope all of this recent progress and attention will help motivate you to reconsider your decision to stop working on Xegl. I’m just a distant observer, but I don’t think I’m the only one looking to people like you, David Reveman, Keith Packard, and Carl Worth to update the X architecture for the future. Xegl is very important and there are a lot of users hoping it succeeds.
Do you guys realize that Microsoft will handicap OpenGL in Windows Vista?
This is an antitrust violation and should be taken seriously by OpenGL developers.
I believe that OpenGL will not run natively on Vista; it will have to go through Vista’s DirectX layer, reducing OpenGL performance and forcing vendors to use DirectX code instead of one base of OpenGL code for multiple platforms.
However, correct me if I’m wrong.
-B
I believe that OpenGL will not run natively on Vista; it will have to go through Vista’s DirectX layer, reducing OpenGL performance and forcing vendors to use DirectX code instead of one base of OpenGL code for multiple platforms.
However, correct me if I’m wrong.
You’re wrong. AFAIK, there will be no difference as long as you have a graphics driver (from ATI or NVIDIA) that supports OpenGL. It is only the default driver that MS supplies which will be crippled, and no one should expect good performance from that anyway.
I remember hearing about this a while back.
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cg…
It involves using OpenGL while using the composited Vista desktop. I don’t think it would affect full screen games, but would affect things like 3D modeling apps.
> but would affect things like 3D modeling apps.
That is the only remaining OpenGL turf on Windows that matters.
It is only the default driver that MS supplies which will be crippled, and no one should expect good performance from that anyway.
Oh, but people will. New kids just getting into gaming will be coming faster than a “You need to upgrade your drivers for full OpenGL performance” campaign can educate them.
The crippled drivers will perpetuate a myth that OpenGL is slow and then pretty soon you’ll be hearing “OpenGL games suck!” a lot from people who don’t know what they’re talking about.
Game developers will see that DirectX games just work, while OpenGL games cause tech support issues (having to explain to newb gamers complaining about slowness that they need decent drivers).
And so OpenGL will slowly die. And with it, game portability and 3D on Linux as well.
OpenGL isn’t going to die just because games move to D3D. OpenGL existed before 3D gaming, and it will continue to exist after games abandon it.
Remember the reason NVIDIA makes OpenGL drivers for *NIX at all. It’s not to support the total of 3 games that run on Linux. It’s to support the very high end 3D modeling programs that run on Linux. It’s to support customers like ILM, who buy tons of Linux boxes with NVIDIA cards to run Softimage.
Remember the reason NVIDIA makes OpenGL drivers for *NIX at all. It’s not to support the total of 3 games that run on Linux. It’s to support the very high end 3D modeling programs that run on Linux.
Ok, maybe it won’t hurt Linux. But lack of out-of-the-box support will still kill OpenGL as far as Windows games are concerned.
It’ll kill OpenGL in terms of Windows games, but that was never the primary OpenGL market anyway. In the OpenGL universe, games are relatively unimportant.
But it will cripple windowed OpenGL apps’ performance on Windows regardless of whether you use the MS or the vendor driver (with Avalon enabled, that is; but why pay for Vista if you have to keep its greatest features disabled?).
As the existence of those apps is really the reason why GPU companies do any serious OGL development, and the upcoming support issues (coupled with DX10 niceties) will make app producers reconsider their 3D framework decision, with Vista MS can seriously undermine OGL’s relevance on Windows. As Windows is the dominant platform, it could kill OGL altogether.
Wow, quite the prediction. You do realize that Microsoft’s default drivers in XP have OpenGL support that is kind of subpar too, right? And has it hurt OpenGL? No.
Why should they be forced to actively write support into their product for a competitor?
It’s Microsoft’s perception of open standards as competition that makes it such a fscked up company.
Of course, I wouldn’t care if their monstrous leverage didn’t drag the whole industry into sickness.
Yes, and it’s an issue only for windowed applications, because they will have problems running at the same time as the Aero engine, i.e. Aero will shut down (in fullscreen it isn’t visible anyway).
I feel a great sense of trust in David’s comments.
Having already used Xgl on both Nvidia hardware and the Intel integrated board on my laptop, I’m surprised by how well the system performs.
It seems to me that AIGLX is sort of a waste of time, since Red Hat described it as more of an incremental step. Why use a stopgap solution when the “final” solution already works reasonably well?
As for Compiz/Metacity, I think David is right on. An agnostic solution is important to OSS in general, even though I happen to be a Gnome user.
Xgl isn’t the “final” solution because it still has to run on top of another X server. Xegl is seen as the goal, and that is still a ways off.
The problem, as I understand it, is that Xorg includes too much code which talks directly to hardware. It basically has its own drivers for input and video. These drivers can conflict with kernel-level drivers (e.g. fbdev on Linux) because they both try to manage mode setting, etc. The idea behind Xegl is to move drivers out of X by allowing the X server to use the EGL APIs for mode setting, multi-head, etc. Drivers then just implement EGL. Mesa apparently makes this a lot less work. I believe this is what Jon Smirl meant by “leveling the playing field” in his comment above.
I’ve read arguments against this approach describing it as Linux centric. Others worry that vendors (Nvidia, ATI) won’t want to rewrite their drivers to use EGL. I view this as something X needed to have done yesterday (honestly, who wants to continue hand-editing a *.conf file?), and effort spent giving Xorg more life (AIGLX) is just making it that much harder for Xegl.
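To give an idea of what “drivers then just implement EGL” means, here is a minimal sketch of an EGL bring-up; the helper name and the native window handle are my own placeholders, but the entry points are the real Khronos EGL API:

    /* Minimal EGL bring-up sketch. "native_win" stands in for whatever
     * window/surface handle the platform provides; error handling is
     * reduced to the bare minimum. */
    #include <EGL/egl.h>

    int egl_bringup(EGLNativeWindowType native_win)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major, minor;
        if (!eglInitialize(dpy, &major, &minor))
            return -1;                        /* no usable EGL stack */

        /* Ask EGL for a suitable config instead of hand-editing a
         * *.conf file and driving the hardware from the X server. */
        static const EGLint attribs[] = {
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n;
        if (!eglChooseConfig(dpy, attribs, &cfg, 1, &n) || n == 0)
            return -1;

        EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_win, NULL);
        EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        return eglMakeCurrent(dpy, surf, surf, ctx) ? 0 : -1;
    }

Everything below those few entry points is the vendor’s problem, which is exactly why an Xegl-style server would no longer need to carry its own video drivers.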
I’ve read arguments against this approach describing it as Linux centric.
So far none of the shipping versions of EGL have been on Linux. EGL is a Khronos API (http://www.khronos.org/egl/). It is totally platform independent. Current implementations are on proprietary embedded systems and cell phones. It is likely that EGL will be used in the PS3.
Quote from the Khronos page: EGL can be implemented on multiple operating systems (such as Symbian, embedded Linux, Unix, and Windows) and native window systems (such as X and Microsoft Windows).
Just because I built a non-released implementation of the EGL API by using the Linux fbdev drivers, the entire EGL API has now been tagged as Linux centric.
For anyone wanting context, read the “Xegl lives!” thread:
http://lists.freedesktop.org/archives/dri-egl/2005-May/thread.html
Alan Coopersmith from Sun was the person raising concerns about Xegl being Linux centric. Several people point out that he is mistaken and that the OpenGL/EGL API, as well as the input subsystem, are platform agnostic; it is only the OpenGL/EGL stack implementations and the input drivers that are platform specific.
Me too.
Xegl may be a better way to take advantage of modern graphics than AIGLX, and Xgl is paving the way straight to it. So before the drivers for Xegl are ready, we can adopt Xgl with current drivers; then why AIGLX? Choosing AIGLX because of one vendor’s high quality drivers is not a good idea.
From what I read, Xgl is capable of working with a wider array of hardware, since it doesn’t rely on features (like texture-from-pixmap) being in the drivers, but can make do with putting them in Mesa. Hardware support for Xgl is likely to be much more widespread.
Further, Xgl is a stepping stone to Xegl, an X server running entirely on OpenGL. Nvidia is opposed to that, preferring to work within the current framework (as AIGLX does), though that’s understandable coming from the company with the best Linux drivers.
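For readers wondering what texture-from-pixmap actually does: it lets a compositor bind a window’s off-screen pixmap as an OpenGL texture, which is what makes effects like wobbling cheap. A rough sketch follows, using the real GLX_EXT_texture_from_pixmap tokens but with made-up helper and variable names, and with extension checks and error handling omitted:

    /* Sketch: turn a redirected window's backing pixmap into a GL
     * texture. Real code would first verify that the GLX extension
     * string contains GLX_EXT_texture_from_pixmap. */
    #include <GL/gl.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    void bind_window_texture(Display *dpy, Pixmap win_pixmap,
                             GLXFBConfig fbc, GLuint texture)
    {
        static const int attrs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
            None
        };
        GLXPixmap glxpix = glXCreatePixmap(dpy, fbc, win_pixmap, attrs);

        /* The EXT entry point has to be fetched at run time. */
        PFNGLXBINDTEXIMAGEEXTPROC pBindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte *) "glXBindTexImageEXT");

        glBindTexture(GL_TEXTURE_2D, texture);
        pBindTexImageEXT(dpy, glxpix, GLX_FRONT_EXT, NULL);
        /* ...now draw a textured quad: wobble it, scale it, fade it... */
    }

The difference described above is where this support lives: AIGLX needs the driver itself to implement it, while Xgl can make do with an implementation in Mesa.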
While the X-on-OpenGL model demonstrates what the graphics hardware is capable of, everything that the X-on-OpenGL model can achieve is equally possible with the current framework. Furthermore, the current framework offers flexibility to driver developers to expose vendor-specific features that may not be possible through the X-on-OpenGL model.
It seems they like the AIGLX method better since they can add things to their drivers that make AIGLX work better on their cards. That makes business sense for them, but it isn’t as nice for owners of other cards. Xgl avoids that, letting a wider array of cards work.
On the other hand, if Xgl does lead (as planned) to Xegl, we’d be in the same situation of being at the mercy of the driver writers, no different from the AIGLX route. At least then, owners of lesser cards would have Xgl to fall back on.
Furthermore, the current framework offers flexibility to driver developers to expose vendor-specific features that may not be possible through the X-on-OpenGL model.
I was wondering about this. If you implement X on top of OpenGL, it sounds like you’d be limited to the OpenGL feature set. Is this true? Would this be sufficient for things like accelerated HD video?
Furthermore, the current framework offers flexibility to driver developers to expose vendor-specific features that may not be possible through the X-on-OpenGL model.
That statement is utter nonsense. There is an established, standardized way of extending the OpenGL API via the OpenGL ARB. Nvidia obviously knows this, since half of the extensions approved by the ARB have been written by Nvidia.
What I believe Nvidia is really concerned about is that the proposed architecture requires them to do OpenGL-over-OpenGL layering:
XWindow + OGL
      |
     XGL
      |
OGL(3DApp1)   OGL(3DApp2)   OGL(3DApp3)
And this will hurt OGL performance the same way it will on Vista.
AFAIK there is no standardized way to virtualize the OpenGL API, and especially its extensions, so their fears about vendor-specific features being lost in the process have some merit imo.
You are forgetting about direct rendering with XGL. With direct rendering the apps talk straight to the OGL stack and bypass the server.
The main argument for an indirection layer comes from systems with mixed hardware, one Nvidia card and one ATI. Without adding an indirection layer, apps have to bind to one stack or the other. What do you do when a window spans two screens? You need a small layer to control this. This is an experimental feature with no working implementations. I suspect Nvidia has an internal indirection layer of their own for when multiple non-SLI Nvidia cards are used.
Another argument for indirection is to allow XGL to modify the behavior of uncooperative vendors’ OGL stacks. That indirection can be removed if the vendor cooperates.
Neither of these will impact performance the way MS is handling Vista. Vista sucks because the OpenGL calls are being translated to DirectX. Vista has no direct OpenGL implementation unless the vendor (NVidia/ATI) supplies it.
Neither of these will impact performance the way MS is handling Vista. Vista sucks because the OpenGL calls are being translated to DirectX. Vista has no direct OpenGL implementation unless the vendor (NVidia/ATI) supplies it.
Windows has never had hardware accelerated OpenGL without vendor-supplied drivers. The only difference with Vista is that you get acceleration whether or not you have an ICD, which is no different than how it works under XP. Prior to XP, OpenGL rendering was handled all in software if you didn’t have a vendor driver.
Yes, but two stacks utilizing the same hardware cannot coexist unless some arbitration layer sits below them.
And MS is unwilling to provide one, while making DX a requirement for even basic desktop usage.
This effectively makes OGL a second-class citizen on Windows, whether you have a dedicated driver or not.
Actually, the proposed way of implementing OGL is simply an effect of the chosen architecture.
Wouldn’t it be better if all of you were to start
using Xgl/compiz (at least those lucky ones that Xgl works for),
enjoy,
meanwhile keep quiet and ….
wait for stable AIGLX,
decide which is better,
start to use the chosen one (now with even more lucky ones),
enjoy even more,
meanwhile keep quiet and ….
wait for stable Xegl,
pick the best of the three (working for everybody),
and enjoy even more than more,
instead of this pissing contest over which is better? They are all completely different, and each one will suit best some specific group of users, who are directed by their needs and hardware. What you like and what you don’t like will be the least important topic here, and what works best will rule them all.
It is becoming the same as the Gnome/KDE flaming contests over which one is better.
The best option would be for all those projects to decide on a common effect plugin system. Not a common rendering solution (one can’t work best for everybody).
I think a common plugin system is a great idea and have spent a great deal of time lately looking into how to do just that. I was heartened by David’s comments in the article saying how he will take a look into libcm.
In my ideal world libcm would contain the effects and a plugin system: a common standard saying how things will call them, and perhaps some kind of common points in a window/screen’s life at which effects can be applied (see the sketch after this list), e.g.:
Minimise/Max
Show
Hide
Stack
Keypress
On Rx of some WM Hint
etc.
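Purely as a sketch of what such a common standard could look like in C; none of these names exist in libcm or Compiz, they are made up for illustration:

    /* Hypothetical common effect-plugin interface: the compositor
     * calls whichever hooks a plugin fills in, at the points in a
     * window's life listed above. */
    typedef struct effect_plugin {
        const char *name;
        void (*on_minimize)(void *window);
        void (*on_show)(void *window);
        void (*on_hide)(void *window);
        void (*on_stack)(void *window);
        void (*on_keypress)(void *window, int keycode);
        void (*on_wm_hint)(void *window, long hint);
    } effect_plugin;

    /* A wobble effect would fill in only the hooks it cares about. */
    static void wobble_show(void *window) { (void)window; /* animate in */ }

    static effect_plugin wobble_plugin = {
        .name    = "wobble",
        .on_show = wobble_show,
    };

Any compositor (Compiz, Metacity/libcm, KWin) that agreed on the hook points could then load the same effect plugins.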
Is this 2D-to-3D switch going to feel a _lot_ like the 16-to-32-bit switch for x86?
Certainly 32-bit to 64-bit isn’t a big deal (same memory management features, mostly, so it’s not like a new programming paradigm). But going to 3D sort of is:
You no longer have to think: how long will this GUI operation take? Should I hold off popping up the window unless I precalculate that processing will take more than X, because it stinks to have it spend _most_ of its time drawing the window rather than processing? Oh, I could make this spin and do this and that, and it won’t slow down my program! Do I want to just run a video here? That’s easy enough to support. Sort of like the 386-to-486 FPU addition, when you would have had to think: can I do this without a float unit? Now you won’t have to think: can I do this without GL?
And for users.. “So what’s a gfx card again?” “So, how many megs do I need?” “My computer doesn’t have a what, and that’s why I can’t use what, which makes it so I can’t see the pretties you have?”
And the other similarity is to the DOS-to-Win32 switch. Support for those 2D’ers will have to exist for *nix people for a good 6 years or more, simply because many *nix users love using unbelievably old hardware. And similarly, back in the day, many people took over a decade to get their DOS software ported. I just talked to a guy today who is contracted to port a DOS program to NT. I told him good luck, and to find a good memory debugger.
>Compiz will work fine on Xorg with the nvidia driver once they
>release a version with the texture-from-pixmap support Xgl
>already got.
When Nvidia releases their new drivers, Compiz will work on Xorg without Xgl. That’s bad news for Xgl, because Nvidia users will stop using it.
When AIGLX was ready to be tested, Xgl was ready to be used.
I read some comments suggesting AIGLX is the next step, but isn’t there more commonality between XGL and XEGL?
I don’t fully understand all this, but I think I get some parts. First of all, isn’t XGL just an OpenGL X server running on the existing Xorg server? Won’t XEGL be an OpenGL-based X server running without Xorg? So where does AIGLX fit in with all of this?
If I understand AIGLX at all (and I probably don’t), it depends on drivers that support the neat eye-candy features that Compiz can do. So it works by going driver->eyecandy instead of opengl->eyecandy. Because most stuff has worked like this before, it’s closer to the current generation of technology than XGL, which runs everything through OpenGL.
So I come back to my first question. Assuming I have a clue what I’m talking about, isn’t XGL closer to XEGL, and isn’t AIGLX therefore just a small upgrade to the current Xorg that would probably even distract from getting something like XEGL?
If true, then it’s sad to see AIGLX being trumpeted above XGL. I’m absolutely sure Red Hat has good reasons for their direction, as they are a smart crew, but it appears to me that, no matter how hackish XGL might be right now, it is a step towards a better X future. It just seems like basing the desktop on a common API like OpenGL (à la Apple) makes so much more sense (especially since ATI and others have decent OpenGL support).
I currently have XGL/Compiz on my machine too and it’s amazing. The best thing is it’s stable and it works now.
This article provides a lot of the background information you need to understand the debate.
http://dri.freedesktop.org/~jonsmirl/graphics.html
It was written before AIGLX existed, so AIGLX is not covered.
Does the latest ATI FireGL driver support the texture_from_pixmap extension? Xgl is nice, but sometimes it behaves jerkily (while wobbling big windows, for example). I have a Radeon 9500 Pro with 128MB, which should have decent performance even for such big textures.
There are also scheduling/priority problems (haven’t tried with nice, though). Some background processes cause small glitches (maybe processor load is high because of the missing extension?), another thing to iron out.
Apart from that, it seems almost ready for regular usage. Compiz certainly needs a bit more polishing to feel at least like the Gnome/KDE WMs.
Big pieces missing are direct rendering support and multihead capabilities (Xinerama, etc.).
Anyway, I’m just waiting for some breathtaking Compiz pixel shader plugins.
You should specify Mesa’s libGL path for Compiz to work. I’m using a 9500 with 128MB, and it works fairly well!