Linked by Kroc Camen on Fri 1st Jan 2010 15:36 UTC
Opera Software HTML5 Video is coming to Opera 10.5. Yesterday (or technically, last year--happy new year, readers!) Opera released a new alpha build containing a preview of their HTML5 Video support. There are a number of details to note, not least that this is still an early alpha...
Thread beginning with comment 402187
RE[4]: Comment by cerbie
by darknexus on Sun 3rd Jan 2010 09:49 UTC in reply to "RE[3]: Comment by cerbie"
darknexus
Member since:
2008-07-15

It depends on what one means by the word "performance." If you mean video quality and encoding speed, then yes, I'd say Theora 1.1 and H.264 seem to be just about even. Where H.264 jumps ahead, though, is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. It's sort of a catch-22 situation: there are no accelerator chips for Theora so we won't see any major content producers use it, but until one of them does start using it there won't be any demand for accelerators. Right now, all Theora decoding is done in software by the CPU. That's not an issue on desktops or set-top boxes, but on laptops and portable devices it's a mother of a battery guzzler. To be fair, so is H.264 without hardware video acceleration, but that makes little difference to content producers and providers. Currently, H.264 delivers in a huge area where Theora does not, and if I had to bet on a codec eventually replacing H.264 over its licensing, I'd bet on VP8 (or whatever Google ends up naming it), not Theora. I can only say one thing for certain: next year is going to be very interesting in this area.

Reply Parent Score: 2

RE[5]: Comment by cerbie
by lemur2 on Sun 3rd Jan 2010 11:06 in reply to "RE[4]: Comment by cerbie"
lemur2 Member since:
2007-02-17

It depends on what one means by the word "performance." If you mean video quality and encoding speed, then yes, I'd say Theora 1.1 and H.264 seem to be just about even. Where H.264 jumps ahead, though, is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. It's sort of a catch-22 situation: there are no accelerator chips for Theora so we won't see any major content producers use it, but until one of them does start using it there won't be any demand for accelerators. Right now, all Theora decoding is done in software by the CPU. That's not an issue on desktops or set-top boxes, but on laptops and portable devices it's a mother of a battery guzzler. To be fair, so is H.264 without hardware video acceleration, but that makes little difference to content producers and providers. Currently, H.264 delivers in a huge area where Theora does not, and if I had to bet on a codec eventually replacing H.264 over its licensing, I'd bet on VP8 (or whatever Google ends up naming it), not Theora. I can only say one thing for certain: next year is going to be very interesting in this area.


It depends on what you mean by decoding performance. Certainly it is true that decoding and rendering video in hardware is far faster than doing it in software, but that is NOT by any means a codec-dependent observation.

As I said in the grandparent post:
Even if a given video card does not have a hardware decoder for a particular codec, once the bitstream is decoded from the codec format by the CPU, the rest of the video rendering functions can still be handed over to the video card hardware.


Decoding the codec-compressed bitstream into raw video data is but a small part of the problem of rendering video. That part of the task requires only a few seconds of CPU time for every minute of video, leaving over 50 seconds of spare CPU time per minute. If the decompressed video from a Theora-encoded bitstream is handed to the graphics hardware at that point, the CPU need be no more taxed than that.

The actual saving from having the codec's decoder function also implemented in the graphics hardware is not much at all ... all one saves is those few seconds of CPU time per minute of video.
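
As a rough illustration of that split (a hypothetical sketch, not anything from the post itself, assuming libtheora 1.1 and SDL 1.2): the CPU decodes each Theora packet into YCbCr planes, and a YUV overlay (typically backed by Xv) hands the colourspace conversion and scaling over to the video card. Ogg demuxing, decoder setup and error handling are omitted, and the names are illustrative only.

```c
/*
 * Hypothetical sketch (libtheora 1.1 + SDL 1.2 assumed): the CPU decodes a
 * Theora packet into YCbCr planes, then the planes are handed to a YUV
 * overlay so the video card does the YUV->RGB conversion and scaling.
 * Ogg demuxing, header parsing and error handling are left out.
 */
#include <theora/theoradec.h>
#include <SDL.h>
#include <string.h>

/* Copy one freshly decoded frame into a YV12 overlay and display it. */
static void show_frame(th_dec_ctx *dec, SDL_Overlay *ov, SDL_Rect *dst)
{
    th_ycbcr_buffer ycbcr;                 /* [0]=Y, [1]=Cb, [2]=Cr        */
    static const int remap[3] = {0, 2, 1}; /* YV12 stores Y, Cr, Cb        */
    int plane, row;

    th_decode_ycbcr_out(dec, ycbcr);       /* cheap: frame already decoded */

    SDL_LockYUVOverlay(ov);
    for (plane = 0; plane < 3; plane++) {
        const th_img_plane *src = &ycbcr[plane];
        Uint8 *out = ov->pixels[remap[plane]];
        for (row = 0; row < src->height; row++)
            memcpy(out + row * ov->pitches[remap[plane]],
                   src->data + row * src->stride, src->width);
    }
    SDL_UnlockYUVOverlay(ov);

    /* From here on the graphics hardware does the colourspace conversion
     * and any scaling to the window; the CPU is finished with this frame. */
    SDL_DisplayYUVOverlay(ov, dst);
}

/*
 * Per decoded packet (libogg demuxing not shown):
 *     th_decode_packetin(dec, &packet, &granulepos);
 *     show_frame(dec, overlay, &dest_rect);
 * The overlay itself comes from SDL_CreateYUVOverlay(frame_width,
 * frame_height, SDL_YV12_OVERLAY, screen) after SDL_SetVideoMode().
 */
```

SDL's overlay is just one convenient way to reach Xv or OpenGL from C; the same handoff applies to whatever backend a player uses.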

As you say:
If you mean video quality and encoding speed, then yes I'd say Theora 1.1 and H.264 seem to be just about even.


That is true. And when it comes to playing video, if the decoding is done in hardware (say for h264, which some graphics cards do have a decoder for), then the only saving is those few seconds per minute of the client's CPU time.

However, be advised that a number of programs that play Theora perform the entire rendering of the video stream in software (i.e. on the CPU rather than the GPU), and so do not use the graphics hardware at all. These programs tend to do the same for h264, though, so once again playback performance will be no better for one codec than the other.

Edited 2010-01-03 11:15 UTC

Reply Parent Score: 2

RE[6]: Comment by cerbie
by cerbie on Mon 4th Jan 2010 01:09 in reply to "RE[5]: Comment by cerbie"
cerbie Member since:
2006-01-02

H.264 offloading from AMD and nVidia offers 1/2 to 1/8 the CPU use of doing it in software, allowing playback on machines that are otherwise not capable, and allowing post-processing on machines that are. If codec Y can use that kind of hardware, and get better playback than codec Z, which can't, then it is absolutely codec-dependent, regardless of how much of the time is spent just decoding the video stream.

Meanwhile, if they had all had to settle on a common toolset to program their chips, we might have software-based helper programs in Brook or OpenCL by now, and not be worrying about it so much.
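
To make the OpenCL point concrete, here is a minimal, hypothetical sketch of such a "helper program": a BT.601 YCbCr-to-RGB conversion kernel run on the GPU after the CPU has decoded a frame. The tiny synthetic 4x4 test frame, the names and the complete lack of error checking are all assumptions for illustration, not taken from any real player.

```c
/* Hypothetical sketch: a GPU "helper" in OpenCL that converts a decoded
 * 4:2:0 YCbCr frame to RGB.  Error checking and resource releases omitted. */
#include <CL/cl.h>
#include <stdio.h>

#define W 4
#define H 4

static const char *src =
"__kernel void ycbcr_to_rgb(__global const uchar *y,\n"
"                           __global const uchar *cb,\n"
"                           __global const uchar *cr,\n"
"                           __global uchar *rgb, int width)\n"
"{\n"
"    int gx = get_global_id(0), gy = get_global_id(1);\n"
"    float Y  = (float)y[gy * width + gx] - 16.0f;\n"
"    float Cb = (float)cb[(gy / 2) * (width / 2) + gx / 2] - 128.0f;\n"
"    float Cr = (float)cr[(gy / 2) * (width / 2) + gx / 2] - 128.0f;\n"
"    int o = (gy * width + gx) * 3;\n"
"    rgb[o + 0] = convert_uchar_sat(1.164f * Y + 1.596f * Cr);\n"
"    rgb[o + 1] = convert_uchar_sat(1.164f * Y - 0.392f * Cb - 0.813f * Cr);\n"
"    rgb[o + 2] = convert_uchar_sat(1.164f * Y + 2.017f * Cb);\n"
"}\n";

int main(void)
{
    /* A tiny synthetic 4:2:0 frame standing in for a decoder's output. */
    unsigned char yp[W * H], cbp[W * H / 4], crp[W * H / 4], out[W * H * 3];
    size_t i, global[2] = { W, H };
    int width = W;

    for (i = 0; i < sizeof yp; i++)  yp[i]  = 128;
    for (i = 0; i < sizeof cbp; i++) cbp[i] = 128;
    for (i = 0; i < sizeof crp; i++) crp[i] = 200;   /* reddish test frame */

    cl_platform_id plat;  cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "ycbcr_to_rgb", NULL);

    cl_mem by  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof yp, yp, NULL);
    cl_mem bcb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof cbp, cbp, NULL);
    cl_mem bcr = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof crp, crp, NULL);
    cl_mem brgb = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof out, NULL, NULL);

    clSetKernelArg(k, 0, sizeof by,   &by);
    clSetKernelArg(k, 1, sizeof bcb,  &bcb);
    clSetKernelArg(k, 2, sizeof bcr,  &bcr);
    clSetKernelArg(k, 3, sizeof brgb, &brgb);
    clSetKernelArg(k, 4, sizeof width, &width);

    /* One work-item per pixel; the GPU does the whole conversion. */
    clEnqueueNDRangeKernel(q, k, 2, NULL, global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, brgb, CL_TRUE, 0, sizeof out, out, 0, NULL, NULL);

    printf("first pixel: R=%d G=%d B=%d\n", out[0], out[1], out[2]);
    return 0;
}
```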

Reply Parent Score: 2

RE[5]: Comment by cerbie
by Ed W. Cogburn on Sun 3rd Jan 2010 13:10 in reply to "RE[4]: Comment by cerbie"
Ed W. Cogburn Member since:
2009-07-24

and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora.


If MPEG-LA decides to get greedy this year, then this could and almost certainly will change rapidly.

Note that one can make a hardware Theora encoder/decoder completely royalty-free, so for hardware makers, building such a thing would be cheaper than building the H264 equivalent. Then it's just a matter of baking the silicon. The only thing stopping this is the current lack of demand.

At this point we seem to be repeating the mistakes of history over and over. Remember the GIF fiasco of the early Internet? If MPEG-LA were smart, they'd lie low, not push their license fees higher later this year, and give the Internet & gadget makers more time to get hooked on their drug. They should wait 3 or 4 years, and *then* stiff everyone using H264 with a huge bill.

I personally hope they're more greedy than smart, so this would-be 'GIF fiasco 2.0' repeat doesn't even get off the ground.

Reply Parent Score: 1

RE[5]: Comment by cerbie
by wumip on Sun 3rd Jan 2010 15:14 in reply to "RE[4]: Comment by cerbie"
wumip Member since:
2009-08-20

When H.264 jumps ahead though is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora.

That's the crappiest argument ever. There WILL be acceleration for Theora if Opera, Chrome and Firefox all support it on all platforms. I can promise you that.

Reply Parent Score: 1

RE[6]: Comment by cerbie
by Damnshock on Sun 3rd Jan 2010 16:42 in reply to "RE[5]: Comment by cerbie"
Damnshock Member since:
2006-09-15

There WILL be acceleration for Theora if Opera, Chrome and Firefox all support it on all platforms. I can promise you that.


Well, I believe you need more than "support" for the acceleration to happen: you need the *CONTENT* to be in that format.

My bet is that if YouTube encodes its videos in an open format, that's the one that will win the "battle".

Just an opinion though ;)

Reply Parent Score: 1