Linked by Thom Holwerda on Mon 10th Sep 2007 20:24 UTC, submitted by hechacker1
AMD "This morning at the X Developer Summit in the United Kingdom, Matthew Tippett and John Bridgman of AMD have announced that they will be releasing their ATI GPU specifications without any Non-Disclosure Agreements needed by the developers! In other words, their GPU specifications will be given to developers in the open. Therefore you shouldn't need to worry about another R200 incident taking place. The 2D specifications will be released very soon and the 3D ones will follow shortly."
Thread beginning with comment 270225
psychicist Member since:
2007-01-27

Maybe I wasn't too clear in my question. The last possibility you mentioned, the GLX extension, is one option: you run your applications on the (headless) application server and display the output on your client, which has a massive 3D card and a locally DRI/OpenGL-accelerated X server.
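
As I understand it, that option in GLX terms looks roughly like this; a minimal sketch, not tested, with error handling omitted:

#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    /* Connects to whatever X server DISPLAY points at; with e.g.
       "ssh -X" from the client, that is the client's X server, so
       the GL commands travel over the wire and are executed by the
       client's card. */
    Display *dpy = XOpenDisplay(NULL);

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Passing False as the 'direct' flag asks for an indirect
       context: rendering commands are serialised into the GLX
       protocol instead of going straight to local hardware. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, False);

    /* ... create a window, glXMakeCurrent(), draw as usual ... */
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}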

The other possibility is to have a 3D card in the (headless) application server, do all the OpenGL computations there, and somehow return the results to a relatively weak client device whose older 3D card is unsuitable for playing modern games, even with a local DRI/OpenGL-accelerated X server.
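
Concretely, I imagine the server side of that second option would have to render each frame, read the pixels back, and push them to the client over the network. A rough sketch of the readback step (send_to_client() is a made-up placeholder, not a real API):

#include <stdlib.h>
#include <GL/gl.h>

/* Placeholder: ship 'len' bytes to the thin client somehow
   (socket, compression, ...) -- not a real library call. */
void send_to_client(const void *buf, size_t len);

void ship_frame(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);

    /* Read the just-rendered frame back from the server's GPU;
       this readback plus the network transfer is what makes the
       approach expensive. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    send_to_client(pixels, (size_t)width * height * 4);
    free(pixels);
}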

For instance, I have a Dell C600 laptop with an ATI Rage 128 Pro graphics card, and it works fine for normal things. But when I want to play newer games I would have to buy a new laptop, since I can't upgrade the graphics on this machine.

My question is: if I put a modern AGP 3D card in my server (a repurposed old desktop machine), ran the 3D games and applications there, and only displayed the output on my laptop in an efficient way, would that work, so that I wouldn't have to buy a newer laptop just because of its old graphics card?

Edit: I am probably thinking of what the Fusion project is going to do with the integration of the CPU and GPU, but that's still a few years away.

Edited 2007-09-11 09:51 UTC

Reply Parent Score: 1

jdub Member since:
2005-08-19

You're optimising for the wrong thing. Sending relatively tiny GL commands and textures to a cheap, 3D-enabled thin client is going to give you a much better experience (and bang for buck) than trying to pump a very rapidly changing framebuffer across the network.
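
To put rough numbers on it, assuming a modest 1024x768 display at 4 bytes per pixel and 30 uncompressed frames per second:

#include <stdio.h>

int main(void)
{
    /* Assumed figures: 1024x768, 32-bit pixels, 30 fps, no compression. */
    double bytes_per_frame = 1024.0 * 768.0 * 4.0;
    double bytes_per_sec   = bytes_per_frame * 30.0;

    /* Prints 90.0 MB/s -- far beyond 100 Mbit ethernet, while a GL
       command stream is typically a small fraction of that. */
    printf("%.1f MB/s\n", bytes_per_sec / (1024.0 * 1024.0));
    return 0;
}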

You mention "massive 3D card", which is where I think your thought process goes wrong. It's just not the case now that these things are inappropriate or too expensive for thin client hardware.

Reply Parent Score: 4

Kokopelli Member since:
2005-07-06

This is not a "plug and play" solution, but if you have a bit of patience you could try the Chromium project:

http://chromium.sourceforge.net/doc/index.html

I used it for a while in combination with Xdmx to power my main workstation setup (six monitors hooked up to four computers), but the configuration was non-trivial. I eventually tore that system down in favor of a single workstation with larger monitors and Compiz. While it was up, though, the system did do GL acceleration across the entire wall.

Reply Parent Score: 2