Linked by Kroc Camen on Fri 1st Jan 2010 15:36 UTC
HTML5 Video is coming to Opera 10.5. Yesterday (or technically, last year--happy new year, readers!) Opera released a new alpha build containing a preview of their HTML5 Video support. There are a number of details to note, not least that this is still an early alpha...
Thread beginning with comment 402175
RE[3]: Comment by cerbie
by lemur2 on Sun 3rd Jan 2010 07:34 UTC in reply to "RE[2]: Comment by cerbie"

"Theora 1.1 (previously codenamed Thusnelda) achieves virtually the same performance as h264, but it is utterly free to use by anyone, anytime, for encoding, decoding or streaming, forever."

"This, though, is still a bit of an issue, and will remain one. That statement is 100% false. Any modern nVidia card can even show that to be false under Linux+X with common software (AMD as well, in Windows). I'm not sure how that hurdle will be handled, in the future (GPGPU decoder programs?)."


It is not false. It used to be the case that h264 was well ahead of Theora in performance, but recent advances in Theora have seen it catch up.

This is why I specifically mentioned Theora 1.1. H264 is well ahead of Theora 1.0 or earlier, but it is only marginally different in performance from Theora 1.1.

http://tech.slashdot.org/story/09/06/14/1649237/YouTube-HTML5-and-C...

Check it out for yourself here:
http://people.xiph.org/~greg/video/ytcompare/comparison.html

Same bitrate, same filesize, imperceptible difference in quality.
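
If you want to check the arithmetic behind "same bitrate, same filesize" yourself, here is a trivial sketch (the clip length and file sizes below are placeholders, not the figures from the comparison page):

# Quick arithmetic behind "same bitrate, same filesize". The clip length
# and file sizes below are placeholders, not figures from the xiph.org page.
def avg_bitrate_kbps(file_size_bytes, duration_seconds):
    # average bitrate implied by a file size and a clip duration
    return file_size_bytes * 8.0 / duration_seconds / 1000.0

duration = 30.0            # seconds (placeholder)
theora_size = 1900000      # bytes (placeholder)
h264_size = 1900000        # bytes (placeholder)

print("Theora: %.0f kbit/s" % avg_bitrate_kbps(theora_size, duration))
print("H.264:  %.0f kbit/s" % avg_bitrate_kbps(h264_size, duration))
# Identical file sizes over the same duration necessarily mean identical
# average bitrates; the only remaining question is perceived quality.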

As for video cards playing the videos ... there are several stages in video rendering. Decoding the data stream is merely the first step. It takes perhaps two or three seconds for a CPU to decode a minute's worth of video data, so the codec decoding function is NOT the determining factor in replay performance.

Even if a given video card does not have a hardware decoder for a particular codec, once the bitstream is decoded from the codec format by the CPU, the rest of the video rendering functions can still be handed over to the video card hardware.
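
For a concrete picture of that split, here is a minimal sketch assuming GStreamer 0.10 with its Python bindings and an Xv-capable graphics driver (the file name is only a placeholder): theoradec does the decode on the CPU, while xvimagesink hands colourspace conversion and scaling over to the graphics hardware.

import gobject
import pygst
pygst.require("0.10")
import gst

gobject.threads_init()

# "video.ogv" is a placeholder path to any Ogg/Theora file.
pipeline = gst.parse_launch(
    "filesrc location=video.ogv ! oggdemux ! theoradec "  # decode on the CPU
    "! xvimagesink"  # Xv overlay: the GPU does YUV->RGB conversion and scaling
)

pipeline.set_state(gst.STATE_PLAYING)
loop = gobject.MainLoop()
try:
    loop.run()          # keep playing until interrupted
finally:
    pipeline.set_state(gst.STATE_NULL)

The Xv part of that pipeline is codec-independent, which is exactly the point: swapping theoradec for an H.264 decoder element changes only the software decode step, not the hardware-assisted rendering.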

Edited 2010-01-03 07:41 UTC


RE[4]: Comment by cerbie
by darknexus on Sun 3rd Jan 2010 09:49 in reply to "RE[3]: Comment by cerbie"

It depends on what one means by the word "performance." If you mean video quality and encoding speed, then yes, I'd say Theora 1.1 and H.264 seem to be just about even. Where H.264 jumps ahead, though, is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. It's sort of a catch-22 situation: there are no accelerator chips for Theora so we won't see any major content producers use it, but until one of them does start using it there won't be any demand for accelerators.

Right now, all Theora decoding is done in software by the CPU. It's not an issue on desktops or set-top boxes, but on laptops and portable devices it's a mother of a battery guzzler. To be fair, so is H.264 without hardware video acceleration, but that makes little difference to content producers and providers. Currently, H.264 delivers in a huge area where Theora does not, and if I had to bet on a codec that eventually replaces H.264 over licensing concerns, I'd bet on VP8 (or whatever Google ends up naming it), not Theora. I can only say one thing for certain: next year is going to be very interesting in this area.
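
To put a purely illustrative number on that battery point (the wattage and battery-capacity figures below are assumptions, not measurements):

# Illustrative arithmetic only: the wattage and battery figures are assumed,
# not measured, but they show the scale of the problem on a portable device.
extra_cpu_watts = 2.5    # assumed extra draw for software decode vs. a dedicated chip
movie_hours = 2.0        # playback time
battery_wh = 25.0        # assumed netbook/handheld battery capacity

extra_energy_wh = extra_cpu_watts * movie_hours
print("Extra energy for software decode: %.1f Wh" % extra_energy_wh)
print("Share of the battery: %.0f%%" % (100.0 * extra_energy_wh / battery_wh))
# -> 5.0 Wh, roughly 20% of the battery spent on decoding alone.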


RE[5]: Comment by cerbie
by lemur2 on Sun 3rd Jan 2010 11:06 in reply to "RE[4]: Comment by cerbie"

"Where H.264 jumps ahead, though, is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. [...] Right now, all Theora decoding is done in software by the CPU."


It depends on what you mean by decoding performance. Certainly it is true to say that video decoding and rendering in hardware is far faster than doing it in software, but that is NOT by any means a codec-dependent observation.

As I said in the grandparent post:
"Even if a given video card does not have a hardware decoder for a particular codec, once the bitstream is decoded from the codec format by the CPU, the rest of the video rendering functions can still be handed over to the video card hardware."


Decoding the video data from the codec-compressed format into raw video data is but a small part of the problem of rendering video. This part of the task requires only a few seconds of CPU time for every minute of video, so the amount of spare CPU time is over 50 seconds per minute. If the decompressed video from a Theora-encoded bitstream is passed on at that point to the graphics hardware, the CPU need be taxed no more than that.

The actual saving from having the codec decoder function also implemented in the graphics hardware is not much at all ... all that one saves is those few seconds of CPU time per minute of video.
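
To put rough numbers on that, using the "two or three seconds per minute" figure above (which is an estimate, not a measurement):

# Back-of-the-envelope check using the estimate above (the "two or three
# seconds of CPU time per minute of video" is an estimate, not a measurement).
decode_seconds_per_minute = 3.0   # assumed CPU time to decode 60 s of video
cpu_share = decode_seconds_per_minute / 60.0

print("Decode step keeps the CPU busy about %.0f%% of the time" % (100 * cpu_share))
print("Spare CPU time per minute of video: %.0f s" % (60 - decode_seconds_per_minute))
# -> about 5% of one core and 57 s/min spare, so a hardware decoder only
#    recovers those few seconds per minute.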

As you say:
"If you mean video quality and encoding speed, then yes, I'd say Theora 1.1 and H.264 seem to be just about even."


That is true. And when it comes to playing video, if the decoding is done in hardware (say for h264, which some graphics cards do have a decoder for), then the only saving is a few seconds per minute of the client's CPU time.

Be advised, however, that a number of programs that play Theora perform the entire rendering of the video stream in software (i.e. on the CPU rather than the GPU), and so do not use the graphics hardware at all. These programs also tend to do the same for h264, though, so once again playback performance will be no better for one than the other.

Edited 2010-01-03 11:15 UTC


RE[5]: Comment by cerbie
by Ed W. Cogburn on Sun 3rd Jan 2010 13:10 in reply to "RE[4]: Comment by cerbie"

"... and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora."


If MPEG-LA decides to get greedy this year, then this could, and almost certainly will, change rapidly.

Note that one can make a hardware Theora encoder/decoder completely royalty-free, so for hardware makers, making such a thing would be cheaper than making the H264 equivalent. Then it's just a matter of baking the silicon. The only thing stopping this is the current lack of demand.

At this point we seem to be repeating the same mistakes of history over and over. Remember the GIF fiasco of the early Internet? If MPEG-LA were smart, they'd lay low, not push their license fees higher later this year, and give the Internet & gadget makers more time to get hooked on their drug. They should wait 3 or 4 years, and *then* stiff everyone using H264 with a huge bill.

I personally hope they're more greedy than smart, so this would-be 'GIF fiasco 2.0' doesn't even get off the ground.


RE[5]: Comment by cerbie
by wumip on Sun 3rd Jan 2010 15:14 in reply to "RE[4]: Comment by cerbie"

"Where H.264 jumps ahead, though, is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora."

That's the crappiest argument ever. There WILL be acceleration for Theora if Opera, Chrome and Firefox all support it on all platforms. I can promise you that.


RE[4]: Comment by cerbie
by cerbie on Mon 4th Jan 2010 00:52 in reply to "RE[3]: Comment by cerbie"

"It is not false. It used to be the case that h264 was well ahead of Theora in performance, but recent advances in Theora have seen it catch up."

It is false. Nvidia, Intel, AMD, Imagination, and Broadcom, just off the top of my head, have H.264 offloading in designs with their names on them. You can give a nice GPU (and drivers, and playback software) to an Atom to get flawless 1080p H.264 playback. Theora has some Google Summer of Code entries doing it on an individual level.

It's not an easy hurdle to get over, though if an "everyone pays for every product, and every use of that product" license system goes into effect, it may have a chance.

Edited 2010-01-04 00:54 UTC


RE[5]: Comment by cerbie
by wumip on Mon 4th Jan 2010 06:02 in reply to "RE[4]: Comment by cerbie"

"Nvidia, Intel, AMD, Imagination, and Broadcom, just off the top of my head, have H.264 offloading in designs with their names on them."

Good for H.264! Since doing the same for Theora is absolutely free, and since it is in the interest of some of the biggest tech companies in the world (Google, Vodafone, Sony, etc.), Theora will be supported by hardware soon enough.
