Linked by Kroc Camen on Fri 1st Jan 2010 15:36 UTC
HTML5 Video is coming to Opera 10.5. Yesterday (or technically, last year--happy new year, readers!) Opera released a new alpha build containing a preview of their HTML5 video support. There are a number of details to note, not least that this is still an early alpha...
RE[5]: Comment by cerbie
by lemur2 on Sun 3rd Jan 2010 11:06 UTC in reply to "RE[4]: Comment by cerbie"

It depends on what one means by the word "performance." If you mean video quality and encoding speed, then yes I'd say Theora 1.1 and H.264 seem to be just about even. When H.264 jumps ahead though is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. It's sort of a catch 22 situation: there are no accelerator chips for Theora so we won't see any major content producers use it, but until one of them does start using it there won't be any demand for accelerators. Right now, all Theora decoding is done in software by the CPU. It's not an issue on desktops or set top boxes, but on laptops and portable devices it's a mother of a battery guzzler. To be fair, so is H.264 without hardware video acceleration, but that makes little difference to content producers and providers. Currently, H.264 delivers in a huge area where Theora does not, and if I had to bet on a codec that eventually replaces H.264 over licensing I'd bet on VP8 or whatever Google ends up naming it and not Theora. I can only say one thing for certain: next year is going to be very interesting in this area.


It depends on what you mean by decoding performance. Certainly it is true to say that decoding and rendering video in hardware is far faster than doing it in software, but that is NOT by any means a codec-dependent observation.

As I said in the grandparent post:
Even if a given video card does not have a hardware decoder for a particular codec, once the bitstream is decoded from the codec format by the CPU, the rest of the video rendering functions can still be handed over to the video card hardware.


Decoding the video data from the codec-compressed format into raw video frames is only a small part of the job of rendering video. That part of the task requires only a few seconds of CPU time for every minute of video, leaving over 50 seconds of spare CPU time per minute. If the decompressed frames from a Theora-encoded bitstream are handed over to the graphics hardware at that point, the CPU need be no more taxed than that.
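As a back-of-the-envelope check of that claim (the 5-seconds-per-minute decode cost below is an assumed illustrative figure, not a measurement):

```python
# Illustrative CPU-time budget for software decoding, per minute of video.
# The decode cost is an assumption for the sake of the example.
video_seconds = 60        # one minute of playback
decode_cpu_seconds = 5    # assumed CPU time to software-decode that minute

spare_cpu_seconds = video_seconds - decode_cpu_seconds
print(spare_cpu_seconds)   # → 55, CPU seconds left free per minute of video

# A hardware decoder can only reclaim the decode cost itself:
print(decode_cpu_seconds)  # → 5, the most that hardware decode can save
```

Even if the assumed decode cost is off by a factor of two or three, the conclusion is the same: hardware decode reclaims only the decode step's share of the CPU, not the rest of the rendering work.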

The actual saving from also having the codec's decoder function implemented in the graphics hardware is therefore not much at all ... all that one saves is those few seconds of CPU time per minute of video.

As you say:
If you mean video quality and encoding speed, then yes I'd say Theora 1.1 and H.264 seem to be just about even.


That is true. And when it comes to playing video, if the decoding is done in hardware (say for H.264, which some graphics cards do have a decoder for), then the only saving is a few seconds per minute of the client's CPU time.

However, be advised that a number of programs that play Theora perform the entire rendering of the video stream in software (i.e. on the CPU rather than the GPU), and so do not use the graphics hardware at all. These programs tend to do the same for H.264, so once again playing performance will be no better for one codec than the other.
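For a concrete sense of the "rest of the rendering" that can be handed to the GPU regardless of codec: once the bitstream is decoded, each frame is raw YUV data that still needs per-pixel colorspace conversion (and usually scaling) before display. A minimal single-pixel sketch using the standard BT.601 limited-range coefficients (the function names are my own, for illustration):

```python
def _clamp(x):
    """Clamp a computed channel value to the displayable 0-255 range."""
    return max(0, min(255, int(round(x))))

def yuv_to_rgb(y, u, v):
    """Convert one BT.601 limited-range YUV pixel to RGB.

    This arithmetic must run for every pixel of every frame when
    rendering is done in software; it is exactly the kind of
    codec-independent work a GPU can take over.
    """
    c, d, e = y - 16, u - 128, v - 128
    r = _clamp(1.164 * c + 1.596 * e)
    g = _clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = _clamp(1.164 * c + 2.017 * d)
    return r, g, b

print(yuv_to_rgb(16, 128, 128))    # → (0, 0, 0), video black
print(yuv_to_rgb(235, 128, 128))   # → (255, 255, 255), video white
```

Run for every pixel of every frame, this is the bulk of software rendering's CPU cost, and it is identical whether the decoded frame came from a Theora or an H.264 decoder.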

Edited 2010-01-03 11:15 UTC
