OpenGL 2.0 was formally launched today and with it the completion of the graphics API’s Shading Language specification for vertex- and pixel-shader programming. OpenGL Shading Language was approved by the OpenGL Architecture Review Board (ARB) in June 2003 as an extension to OpenGL 1.4, which has since been updated to version 1.5.
Are there any libs + API implementations available yet? Looks like it's time to brush up…
I have read somewhere that OpenGL was designed to last for many years to come. Is it easily extendible? What are the most interesting new features? I read things about shader support, but that’s pretty old already…
I wonder if the new cards supporting OpenGL 2.0 will allow for some really neat never-seen-before effects.
See http://www.opengl.org/about/arb/notes/meeting_note_2004-03-02.html#… for a good summary of new features.
It is supposed to lessen the pain of using modern graphics-card features in OpenGL. Right now you have to use many vendor-specific extensions, so you will not see anything you haven't seen before.
As always, this all sounds fascinating, but I’m not one of God’s chosen graphics-technology-understanding people, so I’ll have to make do with not really having a clue what they’re talking about.
I’ll just look forward to enjoying the even cooler graphics and games which will doubtless come about as a result of this 😉
My understanding is that video drivers have included multitudes of OpenGL plugins with support for most of this stuff for a while now, and the game makers have been using them. It's like Linux distro makers who backport new development kernel features and patch up their kernels to add extra features before they are included in the stable vanilla kernel. This 2.0 release only marks the point where all those plugins get included in the official "vanilla" OpenGL.
That's my understanding anyway. Feel free to correct me if I'm wrong…
I read in some old proposal for OpenGL 2.0 that it would unify the hardware programming interface as well (like VESA, but supporting acceleration and 3D). What happened to that? I really hate it that every piece of hardware is completely incompatible with everything else, even if the feature sets are largely the same.
— “I really hate it that every piece of hardware is completely incompatible with everything else…”
At the same time, if you try to force them all to have completely compatible hardware interfaces, then you run the risk of stifling development of new technologies due to unforeseen limitations in the standards. Personally, I think that the card makers getting every last possible frame per second out of their GPUs is well worth the annoyance of relying on them for drivers.
— "At the same time, if you try to force them all to have completely compatible hardware interfaces, then you run the risk of stifling development of new technologies due to unforeseen limitations in the standards."
True, but they can always make extensions. Backward compatibility with VGA didn’t stop SVGA cards from improving in every possible way, either. Besides, if you have a problem working with the standard, then so will everybody else, meaning that the standard should be revised.
My preference is to have a card that Just Works, rather than a faster card that doesn’t work. And the truth is that hardware manufacturers only write drivers for a limited range of platforms, and they are the only ones who can write drivers, because the rest of the world doesn’t get to know the specs (well, unless they reverse engineer them, but that takes time).
— “True, but they can always make extensions.”
But isn't that essentially the current state of things? They've been going beyond the VESA standards for so long that they've completely surpassed them. On modern video cards, the VESA interface is really just translated into the card's own interface, like Transmeta Crusoes do for x86 instructions.
In other words, your solution appears to in fact be what you are complaining about in the first place.
— “…and they are the only ones who can write drivers, because the rest of the world doesn’t get to know the specs”
Ain't that the truth! Seems like the best solution is just to get them to open up about the existing hardware interfaces, though. Frankly, the reasons why they don't are the exact same reasons they don't use a shared standard hardware interface like you propose either… competitive advantage (well, perceived advantage, anyway).
I guess the real challenge then is getting them to realize that the benefits of opening up and letting their users give them free driver development outweigh any disadvantage in giving their competitors the same open access to their technology.
Ok, men! Let’s port to BeOS!
Michael Vinícius de Oliveira
~ BlueEyedOS.com Webmaster ~
— "True, but they can always make extensions."

— "But isn't that essentially the current state of things? They've been going beyond the VESA standards for so long that they've completely surpassed them."
Well… if you recall, my original post was about OpenGL providing ABI compatibility across cards. VESA gives you access to the framebuffer of your card – but nothing more. No acceleration, no 3D. OpenGL is supposedly a standard for graphics programming that does provide access to acceleration and 3D. However, if every card still has to be programmed differently, even for the basic functionality – and I think the XFree86 drivers and their lack of support for accelerated 3D are evidence enough of this – then the standard is a lot less valuable than it could have been.
I read (a long time ago) that OpenGL 2.0 was going to standardize the hardware interface. From all I can tell now, that hasn't happened. This means that we still need to write drivers specific to one model of video card, even to get the basics to work. What I meant by "they can always make extensions" is that although the interface to the standard OpenGL functions can be standardized and also used for most extensions to OpenGL, if there is some functionality that is better accessed through a different interface, such an interface can be supported in addition to the standard one. That way, the card works just like any other (meaning it works with a standard driver), and also offers the advantages that its own interface brings.
VESA allows for a common interface across all graphics cards, but it doesn't allow for anything chip-specific. In order to get 2D acceleration, you still need to write card-specific drivers. If you were to have a VESA-2DAccelerated standard, you would essentially need identical 2D chips on all cards. The same would be true of 3D: in order to have a magic universal driver, all cards would need to run the same chip.