nVidia Corporation today introduced the Cg Language Specification – C for Graphics. Cg is a high-level programming language that enables content developers to create cinematic-quality real-time graphics more easily and quickly. Developed in close collaboration with Microsoft Corporation, Cg gives developers a new level of abstraction, removing the need to program directly to the graphics hardware. Its common, familiar C-like syntax enables rapid development of stunning real-time shaders and visual effects for graphics platforms, and it is compatible with Microsoft’s recently announced High Level Shading Language for DirectX 9.0.
Another useless language to abstract away the real issues and make programs more bloated and resource-hungry.
Aren’t C + OpenGL + other APIs (e.g. Crystal Space) enough?
Why do we actually need a new language?
I’ll be very happy if there are good reasons for it.
Did either one of you actually read anything about it? There’s a very good preliminary introduction over on ExtremeTech.com
Anyway …
Anything that will make more powerful features more readily accessible to programmers is a pretty good thing in my book. Expecting programmers to be able to “program to the metal” is a little unrealistic, especially given that the video card landscape changes every 6-8 months.
If Cg can help take some of the pain & suffering out of applying better graphics f/x, then more power to them!
I’m just hoping that the newer cards coming from Matrox and the like will also adopt this new language. It might be too much to hope, but I’d also like to see the video card companies get together and hammer out the details of Cg formally.
I read the article. I don’t understand whether this is meant to compete with OpenGL, or what, exactly.
Though it should be clear that this is not a complete API, or even a programming language meant for (or capable of) developing anything resembling a complete graphics engine, I’ll see if I can’t help clarify matters for you. This language is for the creation of pixel and vertex shader programs: tiny programs that sit on your video card and get executed very frequently (once per pixel, in the case of pixel shaders). Until now, developers had to write these programs in assembly language. Now, nVidia has provided us with a language to ease the creation of these shaders. It depends on the graphics library (Direct3D); it does not replace it.
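To give a concrete sense of how tiny these programs are, here’s a minimal pixel shader sketch in Cg-style syntax. The texture parameter and names are illustrative, not taken from nVidia’s spec:

    // Illustrative Cg-style pixel shader: modulate a texture sample
    // by the interpolated vertex color. Runs once per pixel.
    float4 main(float2 texCoord : TEXCOORD0,
                float4 color    : COLOR,
                uniform sampler2D decal) : COLOR
    {
        // tex2D samples the bound 2D texture at the given coordinates.
        return color * tex2D(decal, texCoord);
    }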
My old website, GameDev.net, should be covering this issue in considerable detail, I’m assuming. They also have volumes of information and tutorials regarding shaders.
Just to elaborate on njm’s comment: Cg is not a language in competition with C or OpenGL or anything else. It is specifically for use with vertex and pixel shaders.

Some background info: in a traditional graphics card, the graphics pipeline is fixed-function. A program generates object data in the form of vertices, color values, normals, and texture coordinates. Then the geometry portion of the 3D pipeline takes over: it applies rotations/translations/scaling to the vertex data and projects the scene onto a 2D window. Next, in a step called triangle setup, the projected 3D data is turned into 2D triangles, which are then sent to the rasterizer portion of the pipeline. There, the rasterizer uses the color and lighting information to draw shaded triangles on screen.

In a modern GPU, the process is very different. The vertex and normal data from the application is sent to a processor called the vertex shader. Here (this is where Cg comes in) the vertex shader runs a program that analyzes the input vertex, along with some other program state, and generates an output vertex. So, for example, you could write a vertex shader program (in Cg!) that rotates input vertices around a certain point; there’s a sketch of one after this post. The key benefit of vertex shaders is that cool effects requiring dynamic geometry (like cloth waving in the breeze) can be handled easily.

After the vertex shader, part of the traditional geometry pipeline is carried out, specifically culling (getting rid of unseen vertices), projection, and triangle setup. The rasterization hardware then goes over each of the resulting triangles and generates pixels to fill them. For each pixel, it sends data to the pixel shader, which runs a program (again in Cg) that takes input data (such as color values, texture data, light-maps, bump-maps, etc.) and generates an output pixel value.

The major advantages of pixel shaders are flexibility and generality. Instead of fixed effects like texturing or bump-mapping, pixel shader programs can implement a huge variety of effects. Plus, it makes much more sense (to me anyway) to replace all the unrelated fixed-function effects with one general mechanism.
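Here’s a rough sketch of the rotation example mentioned above, in Cg-style syntax. The pivot and angle uniforms are assumptions made up for the example, not anything from the actual spec:

    // Illustrative Cg-style vertex shader: rotate each input vertex
    // about the Y axis around a given pivot point, then apply the
    // usual model-view-projection transform.
    struct VertOut {
        float4 position : POSITION;
        float4 color    : COLOR;
    };

    VertOut main(float4 position : POSITION,
                 float4 color    : COLOR,
                 uniform float4x4 modelViewProj, // combined transform (assumed uniform)
                 uniform float3   pivot,         // point to rotate around (assumed)
                 uniform float    angle)         // rotation angle in radians (assumed)
    {
        VertOut OUT;

        // Move the pivot to the origin, rotate about Y, move back.
        float3 p = position.xyz - pivot;
        float s, c;
        sincos(angle, s, c);
        float3 r = float3(c * p.x + s * p.z, p.y, -s * p.x + c * p.z);

        OUT.position = mul(modelViewProj, float4(r + pivot, 1.0));
        OUT.color    = color;
        return OUT;
    }

Running this once per vertex each frame with a changing angle gives you dynamic geometry without the CPU ever touching the vertex data.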
As always, there are some other interesting articles about nVidia and the GeForce 4 in the latest EETimes & EEDesign concerning the chip side of things: how they handle a design of about 10M gates, modelled with 400k lines of C & 800k lines of Verilog.
http://www.eedesign.com/story/OEG20020612S0051
Apart from Intel & AMD, this is where the Si action is, just wish I had taken a job there instead of ……..
Would this be a .NET language?
It was said that Cg would be an open standard that other chipset manufacturers like ATI could implement. Even if Cg doesn’t make sense to some people, we should realize that having a common development language across different chipsets would make it easier for game developers to take advantage of each chipset’s features. Right now, if a developer wanted to write pixel/vertex shaders for a particular chipset, they would have to go to the manufacturer’s website and get the docs for that specific chipset.
I sure hope Cg makes its way to the Mac, because I’ve been looking for Mac-specific docs from both ATI and Nvidia. I emailed Nvidia and got nothing back (2 months ago).
I’ve seen this before. Looks like everyone and their dog has their own version of C.
I can just see thousands of developers working into the wee hours trying to integrate Cg into their current code base, and cursing nVidia in the process.
Couldn’t a well-designed, well-documented C/C++ library do the same thing?