Efforts to implement NVIDIA’s Video Decode and Presentation API for Unix (VDPAU) on the open source Radeon Gallium3D drivers (for AMD/ATI chipsets) are reportedly just beginning to work. Being Gallium3D-based means this new VDPAU state tracker uses GPU shaders rather than the dedicated Unified Video Decoder (UVD) engine found on modern Radeon HD graphics processors, but using shaders is still a big performance win for HD video playback compared to pegging the CPU constantly. Also, MPEG-2 is the only codec known to work at this time. Once the basic state tracker functionality works, support for other video codecs, such as VP8 and H.264, should be relatively easy to add.
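For readers curious what this looks like from the application side, here is a rough, untested sketch in C (error handling mostly omitted) of how a player might ask whatever VDPAU backend libvdpau loads whether it can decode MPEG-2 Main profile; the function names come from the public libvdpau headers, and a Gallium3D-based backend would in principle answer this query the same way NVIDIA's proprietary driver does.

/* Rough sketch: probe VDPAU for MPEG-2 Main profile decode support.
 * Build (roughly): gcc vdpau_probe.c -o vdpau_probe -lvdpau -lX11 */
#include <stdio.h>
#include <stdint.h>
#include <X11/Xlib.h>
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open X display\n");
        return 1;
    }

    /* libvdpau loads the vendor backend (NVIDIA, Gallium3D, ...) for us. */
    VdpDevice dev;
    VdpGetProcAddress *get_proc;
    if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev, &get_proc)
            != VDP_STATUS_OK) {
        fprintf(stderr, "no VDPAU driver available\n");
        return 1;
    }

    /* Every VDPAU entry point is fetched through get_proc_address. */
    VdpDecoderQueryCapabilities *query_caps;
    get_proc(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
             (void **)&query_caps);

    VdpBool ok;
    uint32_t max_level, max_macroblocks, max_width, max_height;
    query_caps(dev, VDP_DECODER_PROFILE_MPEG2_MAIN, &ok,
               &max_level, &max_macroblocks, &max_width, &max_height);

    printf("MPEG-2 Main decode: %s (up to %ux%u)\n",
           ok ? "supported" : "not supported", max_width, max_height);
    return 0;
}

A real player obviously does much more (creating a VdpDecoder, feeding it bitstream buffers, presenting surfaces), but the point is that the same query works regardless of which vendor's backend libvdpau happens to load.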
I feel that the developers are wasting their effort, though. What we need is standardized hardware. This effort is good only for Linux, and it hides the fact that there are only 2 players in the GPU industry. In my understanding we need a hardware interface like USB/USB mass storage for example in order to make use of accelerator cards and leave once and for all the hardware dark ages. Even in cloud computing people are trying to avoid vendor lock-in. The above work is Linux/ATI-NVIDIA lock-in, and though I love Linux, this is not something I am very happy about. We need standardized accelerator cards that could work out of the box with every OS that chooses to support them. Kudos to the developers, but this is not how things are supposed to work. Unfortunately I see the same attitude in ZiiLabs and VIA. Where are the standard bodies?
As if that is ever going to happen.
The standard bodies have tried to do something…
http://en.wikipedia.org/wiki/VBE#VBE.2FAccelerator_Functions_.28VBE…
…it failed miserably.
It was a good idea, and back in ’96 it was quite advanced too, but it simply wasn’t flexible enough and thus didn’t gain enough traction. I actually had a card with a subset of VBE implemented and was toying around with it for a while, just out of curiosity.
Sadly, hardware vendors have absolutely no interest in implementing a modern version of such a standard; user lock-in is so much more profitable.
USB hardware is ridiculously simple and has an incredibly limited scope compared to what a GPU does. It was also built on the back of the serial standards that came before it (UART), which have had over 30 years to mature. The main thing that makes USB appropriate for hardware standardization is that it has a very simple and narrow external interface to software – it does nothing more than move bytes back and forth.
Pointing at USB as an example is silly – a GPU is at least a few orders of magnitude more complex on the outside (let alone internally).
We had standardized graphics hardware once, remember the VESA BIOS Extensions (VBE)? There are only about 20 or so actual functions available in VBE 3.0 – it's useful to a point, but it only implements a very tiny subset of what a video card can actually do, even in the basic world of 2D. Regardless, the vast majority of real-world video drivers did not use the VESA BIOS; it was OK as a fallback when nothing else was available, but it was much slower than using drivers written against native hardware functionality. And it never even thought about dealing with 3D…
Even fixed-function 3D is insanely complex. 3D with unified shaders is even more complex, but at least the external-facing stuff is gradually converging on like-minded interfaces. Regardless, we haven't even reached a point where the basic rendering method used by 3D hardware has converged – you have some cards that use tile-based rendering, some that use z-occlusion, and some have even proposed doing ray-tracing. Some development is better done using immediate-mode APIs, other development using deferred rendering. I have no idea how you could possibly standardize all this in hardware when there are so many non-trivial differences that have to be exposed to make writing a functional API layer workable.
VDPAU is vendor lock-in? How? Sure, Nvidia originally wrote it, but it is wide open. S3 supported it in their Chrome GPUs (for what that is worth), so other third parties could have done it. It is no longer an Nvidia API – it is a Linux API. All Intel, ATI, etc. need to do to support it is write support for it in their drivers.
We have that now with OpenGL, and it doesn't require standardized hardware. If you specifically mean video decode acceleration, remember we are talking about GPUs – there is no standard for that because the video card for the most part doesn't actually implement anything that looks like video decode acceleration… VDPAU (ideally) uses general-purpose shader features on the GPU to do its work – there is no "special" hardware and therefore no "special" hardware interfaces.
If you want fixed function video decode accelerators there are many available, and standardizing those might even make sense – but that has nothing to do with VDPAU or GPUs in general.
Only two players for GPUs? I wonder which two those would be…
I guess you meant just AMD and Nvidia, though you are totally wrong on that. There are a lot more players, and some of them are major, like Intel.
And yeah, the “crappy” Intel GPUs are good enough to accelerate video.
Btw, Gallium3D is cross-platform, so not just Linux.
What he meant is there are only two manufacturers who matter.
Intel matters a lot. I really wonder how people could think otherwise.
And that is just on the desktop. On embedded, which is also targeted by Linux, it is a totally different story.
Sorry for not mentioning Intel or Imagination, but my comments still apply. Gallium is not a hardware interface anyway. I believe in simple standardized 2D-accelerated framebuffers and standardized accelerator cards, so I can buy them like USB/IEEE1394/PATA or PCI/PCI Express cards and add them to whatever system I have. The lock-in kills small businesses, kills small OSes like Syllable/Haiku, kills research (like Genode/Fiasco) and kills the fun of computing. One standard driver for all framebuffers, one driver for OpenCL accelerator cards, one driver for all sound cards, and so on, each encapsulating a standard. This is what I require as a consumer.
Finally, Intel G45 VA-API Support Is Available
http://www.phoronix.com/scan.php?page=news_item&px=OTQ1NA
“The Intel G45 chipset was released in the summer of 2008, but only this week is it now possible to take advantage of VA-API video playback acceleration for this Intel integrated graphics processor.”
As far as I know, there is also code available to provide a translation layer between the VA-API and VDPAU APIs.
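For comparison, probing VA-API looks much the same from the application side. Below is a rough, untested sketch (function names taken from the public libva headers) that lists the decode profiles a VA-API driver advertises; as far as I understand, the VA-API/VDPAU translation layer mentioned above essentially maps calls like these onto their VDPAU equivalents.

/* Rough sketch: list the profiles a VA-API driver advertises.
 * Build (roughly): gcc vaapi_probe.c -o vaapi_probe -lva -lva-x11 -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <va/va.h>
#include <va/va_x11.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    VADisplay va_dpy = vaGetDisplay(dpy);
    int major, minor;
    if (vaInitialize(va_dpy, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "no VA-API driver available\n");
        return 1;
    }
    printf("VA-API %d.%d, vendor: %s\n",
           major, minor, vaQueryVendorString(va_dpy));

    /* Ask the backend which codec profiles it supports. */
    int num = vaMaxNumProfiles(va_dpy);
    VAProfile profiles[num];
    vaQueryConfigProfiles(va_dpy, profiles, &num);
    for (int i = 0; i < num; i++)
        printf("supported profile id: %d\n", profiles[i]);

    vaTerminate(va_dpy);
    return 0;
}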
Like the Radeon drivers for AMD/ATI, Intel graphics drivers for Linux are also open source. The only difference is that Intel’s drivers are written by Intel, whereas the Radeon drivers for AMD/ATI chipsets are written by the open source community from programming specification documents provided by AMD.
http://www.x.org/docs/AMD/
Keen observers will note that these programming documents do not cover the UVD video acceleration hardware features of AMD/ATI chips. I believe this is due to the fact that video DRM functionality is inextricably embedded in the UVD hardware, and AMD have agreed to not disclose this functionality to open source drivers. It is for this reason that open source video decode acceleration for AMD/ATI chips, which this thread topic is about, must be done via GPU shaders.
On this page:
http://wiki.x.org/wiki/RadeonFeature
Video decode (XvMC/VDPAU/VA-API) using the 3D engine is a work-in-progress for the Gallium3D drivers. Video decode using UVD, on the other hand, is either listed as not available for older cards (which do not have the requisite hardware) or as TODO (not started, through lack of information) for the newer cards which do have UVD hardware.
Intel’s drivers do not have this problem. The dedicated video acceleration hardware features of Intel graphics chips are directly utilised by the Intel drivers.
The easiest thing would be to put part of the driver in firmware and provide a standard interface to any OS, like an EFI or “BIOS” for graphics cards.
You’d still have to standardize on an interface to this, though – which would either kill innovation, or require updates frequently enough that you end up not having much of a standard anyway.
This should work for any hardware that has a gallium driver for it, and that framework was carefully designed to be portable across hardware and operating systems. Although at the moment, Linux + ATI/NVidia is pretty accurate.
In my understanding we need a hardware interface like USB/USB mass storage for example in order to make use of accelerator cards and leave once and for all the hardware dark ages.
USB Mass Storage is, AFAIK, in simple terms a proxy between USB and PATA/SATA. It merely transfers an already-standardized hard drive protocol. Do you install a different driver for each different hard drive? I don't think so.
Now look at all the other USB devices. Unless there is a standard class protocol for that kind of device, each one needs its own driver to work properly.
Therefore, as someone else pointed out, you have come up with an inappropriate example.
Actually, OpenGL should be the de-facto standard, and except for Microsoft devices (Windows machines and the Xbox) and a few devices with their own proprietary APIs, OpenGL actually is the standard for doing graphics on most computing devices.
Unfortunately, this post is not about video decoding but about graphics drivers, so… may I vote you down as off-topic? :p
Please, no.
We wouldn’t have the kick-ass GPUs we have today if a hardware compatibility requirement had been enforced.
Yes, it would make the life of hobbyist OS developers easier, but it would be at the expense of the majority, who’d rather have *fast* hardware acceleration than simply have access to basic functionality across a wide range of devices.
I’d rather have two major vendors with decent products than a whole bunch of vendors with semi-sucky products, like we had back in the dark old DOS days.
On the OS-software interface side, OpenGL and DirectX are standardized, with relatively infrequent changes.
Why couldn’t this happen on the hardware-OS interface side?
It’s not as if GPU vendors innovate so much that they need new standards all the time anyway. From time to time we see a new feature like unified shaders or tessellation, but most of the time it’s really just stacking more and more shaders on a single chip and making them run faster.
Remember how slow the OpenGL consortium used to be?
That was just for agreeing on a software standard. Now care to wager how long it would take to get people to agree on hardware standards? We'd end up with too little or too much, after taking way too long.
And if you want to be able to support new tech, then the standard can't be set in stone, and what good is it then?
A minimalist interface, say modesetting and basic 2D acceleration plus compositing, would go a long way, and could probably be done in UEFI. I don't believe in anything more than that.
I do wish hardware vendors would be more open and release specifications, but OTOH there’s the whole point about R&D costs and patented tech that might even have been licensed from other companies.
They were “slow” just as much because of old-school fixed-function GPUs as because every company had to reinvent the wheel by coming up with its own proprietary method of doing the same thing, which did nothing but cause massive compatibility problems with accelerated apps and games back in the bad old days.
These days, with unified shaders, they can implement a lot more features a lot faster, hence the *.1 releases backporting new features to older GPUs that can still handle the new extensions.