There are many possible software strategies for the Linux desktop to take advantage of GPU off-loading. I still don't think a consensus has been reached on this point.
1) Fork - the current X server is mature and supports 2D hardware just fine. It's not too hard to add new 2D drivers to it. For OGL-capable hardware, build a new server like Xegl with an xlib compatibility layer. The new server depends entirely on OGL and drops all of the ancient 2D support.
2) New server only - Build a new server like Xegl, but support all of the old 2D hardware in it by using software Mesa to emulate OGL hardware. A single code base, but a lot of effort has to go into software Mesa.
3) Build both a 2D and an OGL-based server in the same app. That's what we are doing right now.
There are many diverse opinions for and against each of these strategies. I personally favor the first one, since I believe it requires the least coding effort. The third one requires the most effort in the long run, but it can be built incrementally.
I also think it is futile to try to support both 2D and OGL hardware from a single server. It makes the server too big for the low-end 2D users and forces unnecessary constraints onto the high-end OGL users. It's a simple observation that these OGL-based composition manager demos are never going to run on old 2D hardware.
I do get upset when I hear the political arguments that we shouldn't "screw the low end". Looks to me like the low end is getting all the support right now and it's the high end that is getting screwed. That argument is like telling owners of SCSI hardware that they have to run it in IDE emulation mode since it would be "unfair" to the IDE owners.