“It’s been a while since the last update on the Open Graphics Project, so I’ve put together this article to fill in the community on what’s been happening, what’s going to happen, and how we can make what happens happen faster.”
This will probably get modded down, but it has always bothered me that one of the worst icons is the one for 3D Graphics articles. Heck, even I'd be willing to bang something together that looks better than that. Someone from OS News, contact me if you want it to happen, or post here with contact info if you can't access mine.
Adrian
PS: Or, if someone else wants to, they could probably do a better job than I would; I'm just offering.
How good will the 3D pipeline be?
This will be a low-end card, so obviously I'm not expecting the latest games to work on it or anything, but I was wondering whether this card will be enough for the 3D-accelerated desktops that are coming (Xgl, Vista's Aero, Tiger's Quartz, etc.).
It’s intended that it will support full acceleration for all sorts of things not supported by other cards, including accelerated composited desktops. It just won’t be fast enough to play Doom 3.
Given that the graphics card is expected to be priced comparably to current graphics cards, are there really any benefits to switching from NVIDIA or ATI? For example, with the open-source drivers, will the user be able to enable features such as hardware overlay plane support in 3D programs such as Maya, XSI, etc.? Currently, hardware overlay planes are only supported on DCC cards like the NVIDIA Quadro, which are costly to consumers.
Well, I'm looking at the wiki. Expect something along the lines of a much faster Matrox G550: disgustingly good 2D performance, but very basic (though usable) 3D (NO programmable shaders, NO hardware "transform and lighting," AFAICT). This will be a solid, DirectX 7 era card.
This would actually be a great card no matter what operating system you used, and seems to be able to accelerate most of Xlib/XRender/Cairo/Arthur/whatever. It could probably do a fair job on Quartz Extreme and WGF as well. Hardware accelerated vector editing, SVG, PDF/Postscript rendering and display, etc. sound good.
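To make that concrete, here's a rough sketch (my own illustration, not code from the project) of the kind of Cairo work such a card could take off the CPU. It draws to a plain software surface so it compiles stand-alone; with an accelerated Xlib/Glitz backend, the same alpha-blended fills and antialiased strokes are exactly what the hardware would be asked to do:

    /* Illustrative Cairo sketch: alpha compositing, antialiased fills and
     * strokes, i.e. the 2D operations an OGC-class card aims to accelerate.
     * Build with: cc sketch.c $(pkg-config --cflags --libs cairo) */
    #include <cairo.h>

    int main(void)
    {
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 640, 480);
        cairo_t *cr = cairo_create(surface);

        /* translucent disc: alpha compositing plus an antialiased fill */
        cairo_set_source_rgba(cr, 0.2, 0.4, 0.8, 0.6);
        cairo_arc(cr, 320, 240, 150, 0, 2 * 3.14159265358979);
        cairo_fill(cr);

        /* a stroked curve, the kind of path SVG/PDF viewers emit constantly */
        cairo_set_source_rgb(cr, 0, 0, 0);
        cairo_set_line_width(cr, 4.0);
        cairo_move_to(cr, 100, 100);
        cairo_curve_to(cr, 200, 50, 400, 400, 540, 380);
        cairo_stroke(cr);

        cairo_surface_write_to_png(surface, "out.png");
        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }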
Basically, you don’t want this card if:
1. Your professional work involves complex 3D. Note that the card as-is will likely be good enough for those staring at wireframes and basic models all day.
2. You like recent 3D games running fast.
3. You are preparing for future accelerated desktops that will require Stupid Pixel Shader Tricks (Core Image, Aero Glass).
4. You aren't the adventurous sort willing to invest in something risky that could be good for everyone in the future: an unkillable and unbuyable player in the graphics hardware market. Early versions of this card will basically be an FPGA, and may not even have a PCI interface, much less AGP or PCIe. The retail, mass-market ASIC version comes much later. Of course, having a reasonably priced FPGA isn't too bad, now is it?
–JM
Sounds like a good alternative to workstations at first, and it could evolve into a full-blown competitor for ATI and nVidia (though nVidia is a little more OSS-friendly than depicted on the Open Graphics site).
I really hope that more projects like this are started. It is not so much the competition with ATI and nVidia that I value (and honestly, they're going to have a hard time challenging those two). It is the possibility of using such hardware as components in custom devices. The hurdle to create such custom devices is much smaller if 90% of the hardware design can be downloaded from the net, and the ability to customize the design (as advertised for free software) is much more important for hardware than for software.
– Morin
What's to stop NVIDIA/ATI from just releasing their code/drivers/architecture for a previous-generation product as FOSS and effectively killing this off?
Dead on Arrival
It’ll be slow as hell and ATI and Nvidia will have cards that are not only much faster, but cheaper.
They have a market of about 7 deranged FOSS political ideologues.
A whole lot more people than political ideologues are going to find this useful. Right now, alternative OSs are all in a bind because there is no well-documented open 3D hardware. With OS X and all its fancy tricks, users will come to expect the features provided by HW-accelerated UIs, but alternative OSs won't be able to offer them. Reverse-engineering things (like what is being done with NVIDIA hardware on BeOS) is extremely complicated. The Open Graphics project greatly lowers the barriers to HW-accelerated GUIs on alternative OSs.
Looks like a few commenters are shills for ATI and nVidia.
The point isn’t to compete with ATI or nVidia. The point is to do something they’re unwilling and unable to do, which is to release “internal secrets” of the way their chips work.
nVidia has never released register specs and never will. They won't even release them under non-disclosure. Although nVidia has a driver for Linux, they break a lot of driver development rules and are always behind everything else; it's a known stability concern.
ATI used to release specs, but as a result of competition with nVidia, have stopped releasing specs for their newer-generation GPUs.
These two companies are too busy locking horns over the much more lucrative Windows market to ever pay attention to the OGP. And they're too afraid of each other to want to release critical internal secrets just to compete with the OGP. It'll never happen.
We’re not a threat to them (we’ll just make Linux users stop bugging them to waste time developing drivers), and they’re not a threat to us, because they simply cannot release their specs without hurting their business.
Slow as hell? Someone hasn’t looked at the specs. For a desktop card, this will scream. And for those of you who can’t separate “3D graphics” from “games”, get a clue. There are more uses for 3D graphics than first-person shooters. Ever heard of CAD? How about simple stuff that is yet-unaccelerated for X11 like alpha compositing of windows?
Let's do a little thought experiment and compare X11 support for the Radeon to X11 support for the OGC card. Putting aside the fact that Radeon support is generally buggy, not every capability of the Radeon is supported by the drivers. The driver developers have contented themselves with accelerating only the most critical functions, like bitblts and solid fills. But in many drivers, lots of other things are still unaccelerated, like lines, stipples, tiles, text, etc. So, run your "x11perf -copywinwin500" on a Radeon 9000 and compare it to what you'd expect from OGC, and the Radeon will win. But do a full x11perf run and feed the results through Xmarks, and you'll see a much more even match.
As a graphics driver engineer for Tech Source, I have time and time again beaten competitors who were using the same GPU for one simple reason: I accelerated EVERYTHING. As a chip designer and driver developer at Tech Source, I am very well aware of just what is critical and what isn’t. I know how to make the right compromises to get the job done right.
Oh, one other thing the other guys don’t do right: DMA. I’ve worked with some of these other chips and they simply DO NOT have the right interrupt signals to do DMA efficiently, where GPU usage is maximized and CPU usage is minimized. So while a Radeon may beat us at one large bitblt, we’ll beat them at lots of small ones, which is by far the more common case, PLUS we’ll use less CPU time in the process.
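To illustrate what I mean (this is a made-up sketch, not OGC or Radeon code, and every register and structure name in it is invented), interrupt-driven DMA looks roughly like this from the driver's side: the CPU drops commands into a ring, rings a doorbell, and gets on with its life, while a completion interrupt reclaims the ring. Without the right interrupt signals you end up polling instead, and that's where the CPU time goes:

    /* Illustrative only: a hypothetical blit-command ring fed to a GPU by DMA.
     * The driver queues many small blits without waiting; a completion IRQ
     * frees ring space.  No real OGC or Radeon registers appear here. */
    #include <stdint.h>

    #define RING_SIZE 256

    struct blit_cmd {            /* one packed command the engine understands */
        uint32_t src, dst;       /* framebuffer offsets */
        uint16_t w, h;
    };

    struct ring {
        struct blit_cmd cmds[RING_SIZE];
        volatile uint32_t head;  /* written by the driver */
        volatile uint32_t tail;  /* advanced by the IRQ handler as blits finish */
    };

    static struct ring ring;
    static volatile uint32_t *gpu_doorbell;  /* mapped MMIO register (hypothetical) */

    /* Queue a blit; only fails if the ring is completely full. */
    int queue_blit(uint32_t src, uint32_t dst, uint16_t w, uint16_t h)
    {
        uint32_t next = (ring.head + 1) % RING_SIZE;
        if (next == ring.tail)
            return -1;                        /* ring full: caller may retry */

        ring.cmds[ring.head] = (struct blit_cmd){ src, dst, w, h };
        ring.head = next;
        *gpu_doorbell = ring.head;            /* tell the chip new work exists */
        return 0;                             /* CPU is free to do other work */
    }

    /* Called from the interrupt handler when the engine raises "done". */
    void blit_complete_irq(uint32_t completed_index)
    {
        ring.tail = completed_index;          /* reclaim ring space */
    }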
For OGC, this may sound like speculation, since OGC doesn’t exist yet. But it’s not speculation in the sense that I have done this before with other GPUs. Our secret sauce is in the drivers.
I will go you a step further.
Did you use some of the first graphics computers with graphics hardware you could program? I am thinking of the Amiga and Atari computers, but there were others. Remember the speed of the computers and graphics chips at that time? 7-8 MHz CPUs, 14-28 MHz graphics chipsets.
Now look at the speed of modern CPUs, 3-4 GHz, and graphics chips clocked to who knows where. But often we don't see the full performance gains.
One main reason for this is that early computers were programmed right down to the metal; today, drivers act as a buffer between hardware and programs, slowing things down. And there is no documentation to program at that level even if you wanted to.
What really matters here is that we can't see how much of the delay is caused by the closed driver code versus the graphics card's hardware.
If we had an open-hardware graphics card, it would be possible to keep tuning the software drivers to make them faster. It would be possible to make a lightweight driver that doesn't give you all the functions but gives you the best speed a driver can. And last but not least, for some custom software jobs the program can hit the hardware directly for maximum speed, which is very useful for dedicated machines. It also helps, in the case of dedicated machines, that since you know all the APIs of the hardware, porting to new hardware becomes easier if the old hardware is no longer available.
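As a purely hypothetical sketch of what "hitting the hardware directly" means once the registers are documented (the device path is just an example, and the register offsets below are invented), a dedicated application could map the card's MMIO region and poke it with no driver in between:

    /* Illustrative only: with full register documentation, a dedicated
     * application could map the card's MMIO BAR and drive it directly,
     * bypassing the driver stack.  All register offsets here are invented. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_FILL_COLOR 0x0040   /* hypothetical */
    #define REG_FILL_RECT  0x0044   /* hypothetical: write triggers a fill */

    int main(void)
    {
        /* BAR 0 of a hypothetical card, exposed by Linux sysfs */
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        regs[REG_FILL_COLOR / 4] = 0x00336699;        /* pack a color       */
        regs[REG_FILL_RECT  / 4] = (100 << 16) | 100; /* x,y of an imagined fill */

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }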
I fully expect that a 3D graphics card with all its internals documented can be made to run faster than a closed card with equal specs.
For general usage the OGC definitely sounds like a nice card. Most folks confuse *stable enough that apps don't crash ~often~* with *STABLE*. Many folks also do not realize how underutilized X is by commercially available, garden-variety graphics cards due to very poorly implemented drivers. Personally, I would love to have a fully stable Linux graphics card that *JUST WORKS*, thank you very much. Sure, the first-gen card may not be a gamer's delight, but looking at the specs I think most semi-new Linux-based games will play acceptably on it. Furthermore, most folks do not realize that many "crappy video cards" are actually the result of adequately designed hardware shackled by badly implemented drivers. Fact is, I would rather have a so-so hardware design with terrifically engineered drivers than the best hardware on earth with lousy drivers.
Peace
Linux isn't the only free/open operating system, and while Linux has some level of support for most hardware out there, it doesn't have complete support. And still, Linux users should consider themselves lucky compared to what other non-Windows users have to make do with.
There definitely is a market. Embedded computing is huge. Sloppy, closed, Linux-only video drivers simply won’t do here.
FPGAs have so much potential in the mass-market. Stuff some in your PC and you get a supercomputer.
I just want to clarify that what I mean is that Linux users shouldn’t settle for what they have (closed drivers by ATI/nVidia) but instead support the Open Graphics project and open-source device driver projects, as it will benefit both future Linux, on whatever CPU architecture*, and other** open-source platforms.
* ATI/nVidia can not be expected to support Linux (with drivers) on every possible hardware architecture. Open-source drivers fill that spot, but without hardware documentation they are bound to be incomplete.
** I hope Linux doesn’t kill all the FOSS competition. That would be a sad day for computing, regardless of how good Linux is.
The performance specs are not unimpressive. 400 Mpixels/sec of fill rate and 6.4 GB/sec of memory bandwidth should be enough to support rather graphically rich UIs using Cairo/Glitz. The relatively high memory bandwidth is an important feature, because it'll come into play for composited UIs.
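A quick back-of-the-envelope (my assumptions, not an official OGP figure: a 1280x1024, 32-bit desktop, one source read, one destination read, and one write per composited pixel, 60 recomposites a second) shows that full-screen compositing only eats a modest slice of that bandwidth:

    /* Back-of-the-envelope: how much of 6.4 GB/s does compositing consume?
     * Assumptions (mine, illustrative only): 1280x1024 at 32 bpp, one source
     * read + one destination read + one write per pixel, 60 recomposites/sec. */
    #include <stdio.h>

    int main(void)
    {
        const double pixels    = 1280.0 * 1024.0;
        const double bytes_per = 4.0 * 3.0;   /* src read + dst read + write */
        const double fps       = 60.0;
        const double need      = pixels * bytes_per * fps;  /* bytes per second */
        const double have      = 6.4e9;

        printf("compositing needs ~%.2f GB/s of %.1f GB/s available (%.0f%%)\n",
               need / 1e9, have / 1e9, 100.0 * need / have);
        return 0;
    }

That works out to roughly 0.9 GB/s, about 15% of the quoted bandwidth, and by the same rough math 400 Mpixels/sec of fill is about five full-screen passes per frame at 60 Hz at that resolution, which is plenty for a layered desktop.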
Why do we always consider Linux the only OS that needs drivers?
We have SkyOS, MenuetOS, Clicker, Syllable, BeOS, AROS, ReactOS, and many other operating systems in development that would like to have better graphics support than plain VESA!
I know they would all love to have a graphics card with full specs provided.
Believe me, the change this project will spark is HUGE!
“the ASIC will cost about $2 million to fabricate”
WTF?? That's not a good reason not to move forward. Tape out on a shuttle to build your first couple hundred demos and prototypes, and then you can get some investment capital.
Offer them in limited supply for a few months, and if demand picks up and you get even minimal investors, TSMC will extend the credit for a full mask set.
I can't imagine that you'd have to come up with the full $2M before taping out your ASIC. If your plan actually has legs, you will be able to move those "demos" at like $500 each pretty fast.
Anyway, is a full mask run really going to cost $2M? In my group we always go on a shuttle, so I'm not really familiar with those costs, but I know other groups in my company have done full mask runs and we're pretty small. I don't think we committed to that much money at one time for anything (except of course when we bent over to get raped by Cadence).
The idea behind doing the ASIC is to reduce parts cost. We want to do a run of 100,000 so that we can sell chips at $30 or more (depending on volume), and this will keep the OGC price down. OGD is a lot more expensive because it’s lower-volume and has way more chips on it (less integrated). People are welcome to buy OGD to run the OGA core (when it’s ready) all they want. It’s just a lot more expensive, and not many people are going to be willing to pay that much for a slower card. And yes, the FPGA is that much slower.
Unless Andy and Howard run out of time, OGD will be dual-head. The first OGC may not be. It’s a cost issue.
We are developing OGD without the use of any Tech Source equipment. It’s important that there not be any conflict of interest. Some of that is of our choosing so that no one else can claim ownership of our work. It’s a blessing to have an employer who doesn’t try to put restrictions on what you do on your own time, even if there’s a chance it could compete with them. You should try working at one of the major graphics companies. They own you. (Unless you live in California.)
And to be clear, once more, OGD is NOT a GPU. It’s an FPGA experiment board with RAM and video hardware on it that will be used as the development platform for a GPU. A GPU is a complex beast, and all we have on that part is a complete spec. OGD is intended to be an enabler, in multiple ways, for us to develop OGC. Not only is it a development platform, but it’s also a revenue stream we can use to possibly work full-time on the rest of it. That’s why I say OGD is due THIS year, while OGC is due NEXT year.
If you think this project isn’t going fast enough, please, by all means, get involved. We could use the help.
“If you think this project isn’t going fast enough, please, by all means, get involved. We could use the help.”
I wasn’t trying to say you guys weren’t going fast enough. I was just saying that I thought your schedule was really aggressive. I guess I understood that you would have a fully functional OGD by the year’s end with all the RTL that would ultimately be on the OGC completed. If the goal is just to have the board built and ready for sale then I guess that seems more likely.
As for helping you guys out, maybe I will. I hack Verilog every day, all day. I work at a small startup and I work a lot, so I'm sort of busy. However, my boss is toying with the idea of having us spend half a day every week working on pet projects (like Google does). If that happens, maybe I will help you guys out. It looks like a really fun project.
If you could just put the FPGA onto the graphics card, so the PROM could be updated on a live system and then loaded just by rebooting, I would buy it even if it cost me $100 more.
I looked at the Xilinx website and I don't even see your Spartan III 4000. Anyway, their best speed grade is only $130. Is that really not fast enough? If so, that is too bad.
The idea of an open graphics card is cool, but it only really becomes powerful when the end user, or anyone, can pop open the hood, implement their own favorite tweak or update, resynthesize (hmm, that could be a problem for an end user, I guess), and then update.
That would garner you a much larger user and developer community. People from Xorg could push implementation specifics back into the design, which, once updated, could be rolled out to users in deployed systems. You could add better OpenGL support and update people later. It would also enable you to start selling much sooner. People would buy now just on the promise of future releases, even with just a barebones 2D-only implementation.
I guess the speed thing probably kills it huh?
This would actually be really cool, because then I could resynthesize with a smaller set of effects to create room, or just use existing space, and add all kinds of random stuff in there. An MPEG decoder? Uhh, uhh, well, I'm sure it would be incredibly cool. I don't know about talking to it, since AGP/PCI would presumably be busy, but you know, it would still be cool.
I really can only work with dual head. This is also an area where most graphics cards have really shitty support in Linux. Part of the problem is X, of course, since Render doesn't work in dual head unless you have an nVidia card and driver, which tricks Xorg into thinking it is a single head (yet still exports the Xinerama variables; I really don't know much about it except that it works, and ATI doesn't, except as one big nasty head).
Oh my god! You guys are using f–king Icarus and GTKWave? I thought you said your company was blessing your project? I guess that doesn't mean using their tools, huh. Painful, man. Really. iverilog is OK I guess (slow), but using GTKWave!? Pain, really deep searing pain. Ouch.
I just checked out your code. You guys haven’t done jack so far. You need help. You’re really going to have demos by the end of the year?
Wow. I guess I've never worked on a GPU before, but I imagine that, working full time at another job, it would take until the end of the year just to have first-cut RTL that you can even simulate, and maybe synthesize if you don't really verify it and just hack it in. Then getting your board to actually work (since it won't, or at least you won't know until you first program the FPGA and try it out) will take at least another month of part-time work. If you want to do full-on rigorous verification, you can expect another month at least.
I don't know. Maybe you guys are freaking badass veri-gurus. Maybe a GPU is a lot simpler than I know. It still seems crazy.