The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures were promoted from the ARB_texture_non_power_of_two extension.
Whoohooo! I had to write a dirty resizing algorithm to rescale video frames on the fly when using them as textures, but now OpenGL will accept any size texture! Alright!
I'm sure there are many better features than that, but that's the first one that caught my eye.
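For the curious, here is a minimal sketch of what that means in practice, assuming a hypothetical 720x480 RGB video frame (the function name and pixel buffer are made up for illustration); the point is simply that glTexImage2D now accepts the odd size directly instead of requiring a resize to, say, 1024x512:

    /* Minimal sketch (hypothetical helper): upload a 720x480 RGB video
     * frame as a texture. Before ARB_texture_non_power_of_two this size
     * had to be padded or rescaled to a power-of-two size; with
     * OpenGL 2.0 the odd size is accepted as-is. */
    #include <GL/gl.h>

    void upload_video_frame(const unsigned char *pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Rows of a 720-pixel-wide RGB image are not 4-byte aligned. */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        /* width/height can now be any size; no power-of-two resize needed. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }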
There is nothing new in OpenGL 2 compared to OpenGL 1.0 + extensions. The only thing they do is promote extensions into the core. This time it was mostly new shading stuff that got promoted.
That's kinda true. I liked 3Dlabs' proposal for an OpenGL overhaul much better, but I think they figured out that all of that would be too much for one release. I'm looking forward to the uber-buffers proposal appearing in a hopefully not-too-far-away OpenGL release.
I’m glad to see, anyway, that the ARB is serious about keeping the speed of OpenGL-releases up. OpenGL 1.3, 1.4, 1.5, and 2.0 were all released within the space of 3 years. Compare that to the 9 years it took to get from OpenGL 1.0 to OpenGL 1.3.
What about Mesa, and how is it doing in the 2.0 compliance department? Will we see an accelerated Mesa supporting OpenGL 2.0?
Better specs in the core lead to more hardware compliance with those specs. Now if you want to claim OpenGL compliance, you either have to say it's only OpenGL 1.x or you have to add this stuff in. This will force hardware manufacturers and driver writers to adopt the newer standards.
Lived under a rock in a cave? You’ve been able to do that for years with extensions.
Kinda true, by using GL_EXT_texture_rectangle or the NV equivalent, for example.
But the new OpenGL 2.0 'GL_ARB_texture_non_power_of_two' functionality is superior and removes a lot of hassle.
You don't need to use a specific 'target' (glEnable(GL_TEXTURE_RECTANGLE_EXT)); now you can just use GL_TEXTURE_2D, for example.
Also, texture coordinates are no longer required to run from (0,0)-(w,h); they stay in the normalized (0,0)-(1,1) range.
It's basically the same feature as the D3DPTEXTURECAPS_NONPOW2CONDITIONAL capability in DirectX 8.
This also fixes the problem on ATI (once they support it; only the GeForce 6800 has it at the moment) where render-to-texture cannot be performed with non-power-of-two textures, due to a lack of features in the WGL interface (there is no WGL_BIND_TO_TEXTURE_RECTANGLE_RGBA_NV equivalent for GL_EXT_texture_rectangle).
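To make the target and coordinate difference concrete, here is a rough sketch contrasting the two approaches; the 640x480 size, the quad, and the two texture objects are assumed purely for illustration:

    #include <GL/gl.h>
    #include <GL/glext.h>   /* for GL_TEXTURE_RECTANGLE_EXT */

    /* rect_tex and npot_tex are assumed to hold the same 640x480 image,
     * one created with the rectangle target, the other with GL_TEXTURE_2D
     * under OpenGL 2.0. */
    void draw_both_ways(GLuint rect_tex, GLuint npot_tex)
    {
        /* Old way: rectangle target, texel-space coordinates 0..640 / 0..480. */
        glEnable(GL_TEXTURE_RECTANGLE_EXT);
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, rect_tex);
        glBegin(GL_QUADS);
            glTexCoord2f(  0.0f,   0.0f); glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(640.0f,   0.0f); glVertex2f( 0.0f, -1.0f);
            glTexCoord2f(640.0f, 480.0f); glVertex2f( 0.0f,  1.0f);
            glTexCoord2f(  0.0f, 480.0f); glVertex2f(-1.0f,  1.0f);
        glEnd();
        glDisable(GL_TEXTURE_RECTANGLE_EXT);

        /* New way: plain GL_TEXTURE_2D with normalized (0,0)-(1,1) coordinates,
         * even though the texture is still 640x480. */
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, npot_tex);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, -1.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, -1.0f);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f,  1.0f);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,  1.0f);
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }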
Btw, I think Doom 3 is using that new extension.
Maybe by the time Longhorn is released, Microsoft will finally ship OpenGL 1.2. Slackers.
– chrish
Chris,
I'm curious: as a Mac developer and novice OpenGL programmer using it for simulation visualization, I've been pretty disappointed with Apple's OpenGL (admittedly I only recently discovered GLEW, and that at least makes the extensions feel "real" rather than tacked on).
Anyway, all the sample code I see/examine is for Windows, and it seems to me, as an outsider, that OpenGL support on Windows is excellent.
So, what I'm wondering is whether this is just because NVIDIA and ATI provide good developer support? I assumed Windows just had a better OpenGL implementation…
Though of course, I’m not surprised, since DirectX is MS’s baby.
The OpenGL implementation on Mac OS X is actually quite good.
Also, you don't need to do the 'weak link' dance (the GetProcAddress hell on Windows, calling it for every entry point and keeping your own header file with the extension declarations for ARB_multitexture, for example). It's all already provided in the OpenGL framework, so there's no need for GLEW or an equivalent.
You just need to test that the extension is supported, that's all. It simplifies writing an OpenGL program a lot.
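A rough sketch of what that looks like in practice on the Mac side (the naive strstr check is a simplification; real code should match complete extension names):

    #include <string.h>
    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>

    /* Naive check: does the GL_EXTENSIONS string mention the extension? */
    static int has_extension(const char *name)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, name) != NULL;
    }

    void setup_multitexture(void)
    {
        if (!has_extension("GL_ARB_multitexture"))
            return;  /* fall back to single texturing */

        /* On Mac OS X the entry point is exported by the OpenGL framework,
         * so it can be called directly; no wglGetProcAddress-style loading. */
        glActiveTextureARB(GL_TEXTURE0_ARB);
    }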
The downside is that you need to wait for the next revision of Mac OS X to get new extensions (the OpenGL shading language extensions will be in Tiger, I guess).
There are also some Apple extensions that work on both ATI and nVidia and provide what GL_NV_vertex_array_range does on Windows. The intent is to keep people from writing nVidia- or ATI-specific code.
So overall, it's better supported than on Windows. The catch is that drivers are updated more frequently on Windows, so the newest features arrive there sooner than on Mac OS X.
Btw, nVidia gives its driver source code to Apple, and it's Apple that does the implementation.
Nvidia gives the source code to Apple? That doesn't sound like them at all…