Slashdot reported that “3DLabs has posted a series of white papers on OpenGL 2.0 covering topics such as improving parallelism, timing control, minimizing data movement, programmable pixel pack and unpack, and (most notably) a proposal for a hardware independent shading language.”
Would it be possible for an OpenGL lib to use a graphics card’s processor and the main CPU of a machine at the same time to render the same scene?
Well, some cards sort of do already. Cards that don’t have hardware transform and lighting have to do the transform and lighting on the CPU, because all the graphics card does is draw the polygons and textures. Well, it’s something like that.
Here’s a better explanation from the website for FlightGear, a GNU flight sim:
> Here is a bit of general background information on OpenGL and 3D hardware acceleration contributed by Steve Baker ([email protected])
> Updated by Curt Olson (9/25/2000)
> When you are drawing graphics in 3D, there is generally a hierarchy of things to do:
>
> 1. Stuff you do per-frame (like reading the mouse, doing flight dynamics)
> 2. Stuff you do per-object (like coarse culling, level-of-detail)
> 3. Stuff you do per-polygon or per-vertex (like rotate/translate/clip/illuminate)
> 4. Stuff you do per-pixel (shading, texturing, Z-buffering, alpha-blend)
>
> On a $1M full-scale flight simulator visual system, you do step (1) in the main CPU, and the hardware takes care of (2), (3) and (4).
> On a $100k SGI workstation, you do (1) and (2) and the hardware takes care of (3) and (4).
>
> On a $100 PC 3D card, you (or your OpenGL library software – which runs on the main CPU) do (1), (2) and (3) and the hardware takes care of (4).
>
> On a machine without 3D hardware, the main CPU has to do everything.
>
> The amount of work to do each of these operations goes up by one or two orders of magnitude at each step. One eyepoint, perhaps a hundred objects, tens of polygons per object, hundreds of pixels per polygon.
>
> Hence, putting step (4) into hardware is vital – you could easily need to draw a million pixels for each time you read the mouse. Putting step (3) into hardware is also very nice – cards like the nVidia GeForce are now doing this.