Linked by Thom Holwerda on Mon 10th Sep 2007 20:24 UTC, submitted by hechacker1
AMD "This morning at the X Developer Summit in the United Kingdom, Matthew Tippett and John Bridgman of AMD have announced that they will be releasing their ATI GPU specifications without any Non-Disclosure Agreements needed by the developers! In other words, their GPU specifications will be given to developers in the open. Therefore you shouldn't need to worry about another R200 incident taking place. The 2D specifications will be released very soon and the 3D ones will follow shortly."

Maybe I wasn't too clear in my question. The last possibility you mentioned, the GLX extension, is one option: you run your applications on the (headless) application server and display the output on your client, which has a powerful 3D card, using the client's locally DRI/OpenGL-accelerated X server.
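If I understand it right, that first option is just classic remote X with indirect GLX: the GL commands travel over the network and the client's GPU does the actual rendering. A rough sketch of how that might look (the host names "appserver" and "laptop" are made up for the example):

```shell
# On the laptop (the machine with the capable GPU):
# allow the application server to open windows on the local X server.
xhost +appserver

# On the headless application server:
# point the application at the laptop's display and force GLX protocol
# (indirect rendering) instead of local DRI.
export DISPLAY=laptop:0
export LIBGL_ALWAYS_INDIRECT=1
glxgears   # the rendering happens on the laptop's graphics card
```

The drawback is that every GL call crosses the network, so it only works well for light workloads or fast LANs.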

The other possibility is to have the 3D card in the (headless) application server, do all the OpenGL rendering there, and somehow send the finished frames back to a relatively weak client whose older 3D card is unsuitable for playing modern games, even with a locally DRI/OpenGL-accelerated X server.
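As far as I know, this second setup is roughly what the VirtualGL project does: it intercepts the application's GLX calls, renders on the server's GPU, then reads back and compresses the frames for transport to the client. A sketch of a typical session (the host name "appserver" is an assumption):

```shell
# On the client: open a VirtualGL-aware SSH connection to the server.
vglconnect user@appserver

# On the server: launch the application through the VirtualGL wrapper.
# vglrun redirects the 3D rendering to the server's GPU and ships the
# resulting images back to the client for display.
vglrun glxgears
```

Here only compressed images cross the network, not GL commands, so the client's own 3D hardware no longer matters.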

For instance, I have a Dell C600 laptop with an ATI Rage 128 Pro graphics card, and it works fine for everyday tasks. But if I wanted to play newer games I would have to buy a new laptop, since the graphics chip in this machine can't be upgraded.

My question is: if I put a modern AGP 3D card in my server (a repurposed old desktop machine), ran the 3D games and applications there, and only displayed the output on my laptop in an efficient way, would that work, so I wouldn't have to buy a newer laptop just because of its old graphics card?

Edit: I am probably thinking of something like what the Fusion project is going to do with the integration of the CPU and GPU, but that's still a few years away.

Edited 2007-09-11 09:51 UTC
