“Excitement in the Open Graphics community is quite high as it approaches its first production run of the FPGA-based ‘Open Graphics Development’ board, known as ‘OGD1’. It will be available for pre-sale this month with the first units expected to ship soon thereafter. As an insider in this group, I had a unique opportunity to interview several of its members, including: Timothy Miller, the experienced hardware engineer who first started the project (as well as the company, Traversal Technology, which will produce and sell OGP designs), and Patrick McNamara, an interested amateur tinkerer who founded the Open Hardware Foundation.”
I know nothing about FPGA or driver programming [yet], but I would like to get such a card nonetheless.
The first ones are probably going to be really expensive, but once later versions reach an acceptable price range, this is going to be a really nice thing.
Imagine hardware acceleration for Dirac and Theora (something AMD/NVIDIA/Intel won’t do, I guess) and a hardware-accelerated framebuffer, because there are finally chances to get decent framebuffer drivers.
If you don’t know anything about FPGA hardware development, you really don’t want to start on such an expensive card just yet. Instead, trawl through the web and the comp.arch.fpga newsgroup and get an edu board for closer to $100. Both Xilinx and Altera work with a number of educational companies (Digilent, etc.) to deliver many choices of low-cost boards that often have many of the interfaces found on a PC: VGA, serial, USB, Ethernet, and so on; you find or invent your own IP. The downside is that the specs for these, especially the VGA, are very limited, maybe 640×480 and 4-bit color, but the cost is low. The really nice thing about these low-end boards is that the design software is a free download from the FPGA vendor’s website, and there isn’t much to lose except your time. For high-end boards like this open graphics card, I strongly suspect the size of the FPGA is past the free point, but I would need to check that.
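To give a flavor of what you end up doing on one of those edu boards, here is a back-of-the-envelope check of the classic 640×480@60 Hz VGA timing, the mode those boards typically support. The timing numbers are the standard industry values; the script itself is just an illustrative sketch, not anyone’s project code.

# Sanity-check the standard 640x480@60 Hz VGA timing you would
# implement as two counters on a low-cost FPGA edu board.

PIXEL_CLOCK_HZ = 25.175e6  # the standard 640x480@60 pixel clock

# Horizontal timing in pixel clocks: visible, front porch, sync, back porch.
H_TIMING = (640, 16, 96, 48)
# Vertical timing in scanlines: visible, front porch, sync, back porch.
V_TIMING = (480, 10, 2, 33)

h_total = sum(H_TIMING)  # 800 clocks per scanline
v_total = sum(V_TIMING)  # 525 lines per frame

refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)
print(f"clocks per frame: {h_total} x {v_total} = {h_total * v_total}")
print(f"refresh rate: {refresh_hz:.2f} Hz")  # ~59.94 Hz

# With 4-bit color, one frame of framebuffer is tiny:
fb_bytes = 640 * 480 * 4 // 8
print(f"framebuffer: {fb_bytes / 1024:.0f} KiB")  # 150 KiB

Counting those 800×525 clock ticks and asserting the sync pulses at the right offsets is pretty much the whole VGA job on such a board.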
The card photo looks pretty impressive to me (I am an FPGA EE), so they finally appear to have done a dual-DVI design, with the PCI interface in the Lattice chip and the graphics hosted in the Spartan 4000 chip. I didn’t see the price, but I wouldn’t mind getting one, both for graphics work and for hosting other FPGA computing engines, maybe even my own processor.
The beauty of an FPGA is that it is “programmable,” which means that if any bugs come up in the hardware, the company just releases new firmware and the card can be reprogrammed. It also means that new optimizations or features can be added to the hardware at a later time.
The only downside I see (not in this product) is that FLOSS doesn’t have many (any?) tools that let you program an FPGA. I understand that we can download the Xilinx/Altera tools, but they are only supported on RHEL/CentOS and only available for the i386 architecture. I use PPC, so I can’t do my FPGA work on this machine.
Regarding free open source tools for FPGA design.
The issue has been pretty thoroughly hashed out in comp.arch.fpga and other places over many years. There was once the possibility of some open tools back when FPGAs really were just buckets of LUTs (a few thousand actually) and the internal wiring fabric wasn’t that difficult to figure out from the bit files.
Once FPGAs added multipliers, BlockRAMs, SerDes, DLLs, and dozens of other must-have features, and reached 100K LUTs, it became practically impossible for outsiders to understand how the tools really do place & route. Only a handful of people outside the vendors know the deep stuff well enough to comprehend how such tools could be written, and they generally get hired to work for those vendors.
The vendors don’t really charge that much for the tools anyway: free for FPGA parts that might cost $20 or less, but when the parts get bigger you need more support anyway, so the cost goes up to a few $K. Even Intel and others charge top dollar for their C++ compilers. gcc exists because the market for compilers is huge but mostly low-dollar, and it coexists nicely with high-end compilers.
There will never be free open source tools for hardware, and I know most of what’s out there (gEDA, VHDL simulators, etc.); none of it is very useful given the rapid rate of development and the ever-changing architectures of FPGAs.
Also, the last time I used them, they ran fine under W2K on up, and usually with some issues under RH Linux; there is lots of help on comp.arch.fpga for that.
This is one of the big gripes in the article. The Windows-based tools come free, but those same tools for Linux cost an arm and a leg.
It exists because the GNU project needed a compiler. It evolved into the most widely used compiler today because it is free, flexible, and damn good. Why pay for a compiler when you can have one for free that does just as well, if not better?
They don’t have to be open sourced. Going back to my first reply: the article points out that the chip vendors provide the Windows tools for nothing, so why not the Linux tools as well? If it is development & maintenance costs, then geez, why not develop the tools cross-platform so a single source base runs on both? Not exactly rocket science… well, maybe it is for Windows developers.
Almost all embedded/device development will be Linux-centric within the next 3 years. This means that low-level software and hardware engineers will be using Linux as their everyday OS. Chip vendors and hardware engineers need to get off the MS nipple and start producing and using tools that provide for efficient end-to-end product development, not just what is convenient for them.
One of the biggest headaches in product development is that once the hardware engineers toss the project over the wall, the software developers have to fiddle around between two systems, because the only tool to program the board is a Windows tool, while the OS that is going to run on the damn thing is eCos, uClinux, or some other *nix-based image. Having observed this first hand, I can say it is nothing short of a total cluster $%#@.
I don’t think we disagree on anything serious here.
For the most part the FPGA tools are cross-platform, although not always in a very even manner, and the vendors got stuck with some bad choices of cross-platform API kits in the past. There is a lot of complexity in these tools, and much of the tool chain came from other vendors in bits and pieces, bringing along many OS and GUI integration issues. Most of the guts of the synthesis code base is largely independent of the OS & GUI layer.
Once RH became mature, I remember the endless requests in the newsgroups some years ago for more and better Linux support, and for getting off Wine hosting the Windows version. One advantage Linux/Unix had over Windows was that 64-bit had been available in workstation land for years, and the larger FPGAs were really starting to bust the 32-bit address limit of Windows, as ASICs had done a decade before.
Since Linux/Unix were more capable of running the higher-end FPGA versions, just as almost all ASIC work is done under *nix, it is not surprising that they are pay-for-use. Maybe you know the prices of ASIC tools: usually 6 figures for each tool. I now regard the use of Windows in the ASIC/FPGA workstation space as a temporary, shortsighted mistake, but it made sense for lots of smaller companies with no *nix infrastructure, and maybe it still does for some.
Note I never used any of the FPGA Linux versions; they weren’t ready back then and I wasn’t doing uber-large projects, but all my ASIC work was *nix-based.
Another issue with Linux, of course, was which version & flavor to use & support. The majority of FPGA users, freeloading at the bottom, have used Windows forever, so the vendors paid more attention to fixing Windows users’ gripes.
I guess with Vista now being 64-bit, Windows will hang in there forever, even if it looks pretty poor in comparison to any *nix command-script-based tool flow. Hey, I wonder how these tools fare under Vista.
Regarding the biggest headache in hardware-to-software projects on different OSes: I hear you, been there, done that in a different sense. If the entire project is done by multiple EE/CS people who all mostly work on both hardware & software, then you can truly do things in a more straightforward way, but as long as hardware and software engineers are literally speaking different tongues, it’s going to be messy.
On a side note, whenever I see MS adverts for their embedded stuff in my EE embedded mags, I rip ’em out right away; they tell the grandest lies. MS doesn’t even show up at the embedded hardware shows anymore.
It depends on your definition. Because it uses FPGAs, OGD will always cost 10x the price of a comparable ASIC-based graphics card. Is a $200 OGD “reasonable” if it provides capabilities similar to those of a Rage 128?
ATI/Nvidia/Intel GPUs are programmable, so their functionality is not limited to what the vendor gives you. Nvidia GPUs can accelerate Dirac because someone in the community wrote the GPU code to do it. Now that some GPUs are programmable and have open specs, there is no reason for OGD to exist.
Well, “open specs” is debatable.
Maybe the specs are open enough to be able to write an open source 3D driver for it, but you would probably not be able to make the graphics card invert a matrix for a finite element program, or other crazy stuff.
And even if the specs of the graphics cards are open NOW, what about 10 years from now?
I think we mostly have to thank the OGD crowd for the sudden flexibility of ATI and NVIDIA. They made the commercial guys think about real customer demand.
It is true that OGD will likely cost 10x or more than an old card, so what’s the point, one might ask.
Personally, I’d say that if you want to do in hardware what can be as easily done in software, on a platform that is open enough and persistent, then go the software route (and I am a hardware jock).
The only credible reason to go the hardware route is that sometimes it is easier to do in hardware than in software, for super bit-twiddly stuff like the old crypto math and many other apps. If an application is already an ASIC built in real logic and gates, and not just hidden firmware, then moving it to an FPGA is just a downhill operation, with FPGA vs. ASIC cost the main issue. If the ASIC can be turned into general-purpose software, that squeezes out the FPGA option altogether.
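To make “bit twiddly” concrete: reversing the bit order of a word is literally free in an FPGA or ASIC (it is just crossed wires, zero gates), while software has to earn it with masks and shifts. Here is a small illustrative Python sketch of the software side, hypothetical code rather than anything from the OGD project:

# Bit-reverse a 32-bit word with the classic log-step shuffle:
# swap halves at widths 16, 8, 4, 2, then 1. In hardware this whole
# function is nothing but a wiring permutation.
def bit_reverse_32(w: int) -> int:
    w &= 0xFFFFFFFF
    w = ((w >> 16) | (w << 16)) & 0xFFFFFFFF               # swap 16-bit halves
    w = ((w & 0xFF00FF00) >> 8) | ((w & 0x00FF00FF) << 8)  # swap bytes
    w = ((w & 0xF0F0F0F0) >> 4) | ((w & 0x0F0F0F0F) << 4)  # swap nibbles
    w = ((w & 0xCCCCCCCC) >> 2) | ((w & 0x33333333) << 2)  # swap bit pairs
    w = ((w & 0xAAAAAAAA) >> 1) | ((w & 0x55555555) << 1)  # swap single bits
    return w

assert bit_reverse_32(0x00000001) == 0x80000000
assert bit_reverse_32(bit_reverse_32(0xDEADBEEF)) == 0xDEADBEEF

Crypto and DSP inner loops are full of permutations like this, which is exactly why they map so well onto gates.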
If you go hardware graphics and can double-justify the effort by timesharing the FPGA fabric to do some serious compute work that has nothing to do with graphics, then it gets interesting again. nVidia now allows GPUs to be used for non-graphics work too, though, so both approaches need to be compared.
Since GPUs and CPUs are always going to be built on leading-edge silicon (say 45 or 65 nm today) while FPGAs are built on 65 or 90 nm and are really virtual hardware, the FPGA has a fundamental disadvantage in clock frequency and in the amount of useful logic you can fit on it. The Spartan 4000 is already only mid or low end.
FPGAs, though, can be looked at as collections of systolic BlockRAM engines. If you can build a computation around these dual-port RAMs, you can get upwards of 1000 engines each running at 300-400 MHz (on a Virtex-2 Pro, that is), and that could mean maybe 100B ops per second at the high end, something that might be impossible to do in software except on multiple CPUs.
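Putting rough numbers on that claim, under assumptions I am guessing at (the engine count, clock, and the fraction of cycles doing useful work are illustrative, not datasheet figures):

# Back-of-the-envelope throughput for the systolic BlockRAM-engine idea.
ENGINES = 1000        # dual-port BlockRAM compute engines (assumed)
CLOCK_HZ = 350e6      # ~300-400 MHz on a Virtex-2 Pro class part
UTILIZATION = 0.3     # assumed fraction of cycles doing useful ops

raw_ops = ENGINES * CLOCK_HZ
useful_ops = raw_ops * UTILIZATION
print(f"raw peak: {raw_ops / 1e9:.0f} Gops/s")     # ~350 Gops/s
print(f"derated:  {useful_ops / 1e9:.0f} Gops/s")  # ~105 Gops/s

The raw product is 300-400 billion ops per second; derating for stalls and data movement lands in the neighborhood of the 100B figure above.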
It really all depends; unfortunately there aren’t that many hardware guys around, versus software guys, to try it both ways.
Also, I doubt anybody at nVidia or ATI has ever heard of OGD, so it wouldn’t have had any impact; that came from Intel’s and AMD’s change of heart on open source, IMHO.
Once this is ready I’ll likely be buying one for my project trying to resurrect the old Atari GPU. It works in simulation; I need to get it into a card, though, to make sure.
I was really looking forward to this, but it seems like it won’t make much of a difference now.
Intel has opened its specs. So has ATI/AMD.
I saw the advantage when they were the only ones out there (potentially) offering something with fully open specs, but now they’ve lost their edge.
Pay more for less performance when others have open specs… why?
Hardware source code? (There isn’t much now though)
… but, frankly, this effort will always lag behind the serious reach of commercial competitors such as nVidia, ATI, and others. Let’s face it: The primary thing that gives these video card vendors an advantage is that their intellectual property is cloaked in secrecy. As soon as you demand that the driver architecture be completely opened up, it has a chilling impact on commercial development. These companies aren’t going to feed their competitors; more likely, they’ll demand patent licensing revenue, as a precursor to opening up.
Nonetheless, this effort could still be useful in industrial applications where openness is more important than raw performance/throughput; for example, bidding on vertical applications that have special requirements and a need for driver and firmware source code.