Linked by Preston5 on Sat 27th Mar 2010 11:46 UTC
Multimedia, AV
In January, we read the various arguments regarding Mozilla's decision not to get an H.264 license. This generated a lot of discussion about the future of video on the web. With YouTube, Dailymotion, Hulu, and Vimeo having adopted H.264 for HD video, Mozilla and Opera should use the codecs installed on a user's system to determine what the browser can play, rather than force other vendors to adopt Ogg. Refusing to support a superior codec would be a disservice to your users in years to come. Why hold back the majority of your users because 2% of them are on niche OSes?
Thread beginning with comment 415596
J. M. Member since:
2005-07-24

This comparison is quite useless. It does not really say anything.

Firstly, H.264 and Theora are formats, not encoders, and quality largely depends on the encoder. Even if you use a vastly superior format, you can still get a much worse result if you use a bad encoder. A crappy H.264 encoder can indeed give worse results than a good Theora encoder. And even if you use the best encoder for the format available, you can still get an extremely low-quality result, because good encoders are highly configurable and there are many settings that can totally destroy the quality (for example, in an H.264 encoder, you can turn off all advanced features that help the compression tremendously). So yes, it is perfectly possible to encode H.264 video with a vastly inferior quality/size ratio compared to a Theora video. What exactly does it say about the quality of the two formats? Nothing.
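
To make that concrete, here is a rough sketch of how the same format can be encoded well or badly. It assumes an ffmpeg build with libx264 on the PATH; the file names are made up.

import subprocess

SOURCE = "source.y4m"  # hypothetical uncompressed source clip

# A reasonable H.264 encode: slow preset, full High-profile feature set.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
                "-preset", "veryslow", "-crf", "20", "-an", "good.mp4"], check=True)

# A deliberately crippled H.264 encode: Baseline profile (no CABAC, no
# B-frames, no 8x8 transform) and the fastest preset, at the same CRF.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
                "-profile:v", "baseline", "-preset", "ultrafast",
                "-crf", "20", "-an", "crippled.mp4"], check=True)

Both outputs are "H.264", yet the second can easily lose to a competently made Theora encode of the same clip.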

Secondly, re-encoding an already encoded H.264 video is not fair, because the lossy H.264 compression has already "cleaned up" the original video: spatial and temporal detail has been lost, which makes the job easier for the Theora encoder.
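
A sketch of the generational loss being described; it assumes an ffmpeg build with libx264, libtheora and the psnr filter, and all file names are invented.

import subprocess

def psnr_vs(reference, candidate):
    # ffmpeg's psnr filter prints the average PSNR on stderr.
    proc = subprocess.run(["ffmpeg", "-i", candidate, "-i", reference,
                           "-lavfi", "psnr", "-f", "null", "-"],
                          capture_output=True, text=True)
    psnr_lines = [line for line in proc.stderr.splitlines() if "PSNR" in line]
    return psnr_lines[-1] if psnr_lines else "(no PSNR line found)"

MASTER = "master.y4m"  # hypothetical uncompressed master

# First lossy generation: H.264 straight from the master.
subprocess.run(["ffmpeg", "-y", "-i", MASTER, "-c:v", "libx264",
                "-crf", "23", "-an", "gen1_h264.mp4"], check=True)

# Second lossy generation: Theora encoded from the already-compressed H.264,
# which is the unfair setup described above.
subprocess.run(["ffmpeg", "-y", "-i", "gen1_h264.mp4", "-c:v", "libtheora",
                "-q:v", "7", "-an", "gen2_theora.ogg"], check=True)

print("H.264 vs. master: ", psnr_vs(MASTER, "gen1_h264.mp4"))
print("Theora vs. master:", psnr_vs(MASTER, "gen2_theora.ogg"))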

Thirdly, compressibility is yet another factor. For example, you can encode H.264 video at 8 megabits per second, and it will look great. Especially if the video is highly compressible. So, then you re-encode the video to Theora using a bitrate of 6 megabits per second, and it will still look great. So what? Does it mean Theora is better, because it needs less bits per second for the same quality? Of course that's total nonsense. It only means 6 megabits per second is superfluous for most videos, and you could probably encode the original H.264 video at 1 megabit per second or less and it would still look great.
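
Just to put numbers on that (the 150-second duration is an assumption; a trailer is typically a couple of minutes):

# Back-of-the-envelope file sizes for the bitrates mentioned above.
duration_s = 150  # assumed clip length, for illustration only
for mbps in (8, 6, 1):
    size_mb = mbps * duration_s / 8  # megabits/s * seconds / 8 bits per byte
    print(f"{mbps} Mbit/s over {duration_s}s is roughly {size_mb:.0f} MB")
# prints roughly 150 MB, 112 MB and 19 MB respectively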

That's why all these "tests" showing how good Theora is are completely bogus, and why all serious audio/video quality tests are made by encoding files from the same source, using encoder settings that are generally considered optimal. Only then can you make a meaningful comparison.
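
A minimal version of such a test might look like this. Again, only a sketch: it assumes ffmpeg with libx264 and libtheora, an uncompressed master, and settings chosen for illustration rather than as "generally considered optimal".

import subprocess

MASTER = "master.y4m"  # hypothetical uncompressed source, used for BOTH encodes

def encode(codec_args, out_name):
    subprocess.run(["ffmpeg", "-y", "-i", MASTER, "-an"] + codec_args + [out_name],
                   check=True)

# Same source material and comparable effort on each side.
encode(["-c:v", "libx264", "-preset", "veryslow", "-crf", "22"], "test_h264.mp4")
encode(["-c:v", "libtheora", "-q:v", "7"], "test_theora.ogg")

# Each result would then be compared against MASTER at matched file sizes,
# e.g. with ffmpeg's psnr or ssim filter, before drawing any conclusions.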

Edited 2010-03-28 07:34 UTC

Reply Parent Score: 6

lemur2 Member since:
2007-02-17

> This comparison is quite useless. It does not really say anything.
>
> Firstly, H.264 and Theora are formats, not encoders, and quality largely depends on the encoder. Even if you use a vastly superior format, you can still get a much worse result if you use a bad encoder.

But that is not my point. My point was that if you have a good result, neither the format nor the encoder can be bad.

Therefore, given a good result, Theora is not a bad format, and Firefogg is not a bad encoder.

> A crappy H.264 encoder can indeed give worse results than a good Theora encoder. And even if you use the best encoder for the format available, you can still get an extremely low-quality result, because good encoders are highly configurable and there are many settings that can totally destroy the quality (for example, in an H.264 encoder, you can turn off all advanced features that help the compression tremendously). So yes, it is perfectly possible to encode H.264 video with a vastly inferior quality/size ratio compared to a Theora video. What exactly does it say about the quality of the two formats? Nothing.

Actually, it does. I used a professionally-made source video. If even professionals cannot get H.264 looking decent compared to the efforts of a novice (namely, me), then H.264 can't be a good format. I can hear your protests now, but the fact remains that if it had been the other way around, that would have been taken as proof positive that Theora was no good.

> Secondly, re-encoding an already encoded H.264 video is not fair, because the lossy H.264 compression has already "cleaned up" the original video: spatial and temporal detail has been lost, which makes the job easier for the Theora encoder.

Oh dear. Oh dear, oh dear.

1. Lossy codecs do NOT "clean up" original videos.
2. It is the easiest part of compression to throw data away. Whatever H.264 threw away was still available to the original H.264 encode, but that data was not available to Theora in my test.

The test that I did penalised Theora (as the SECOND lossy codec applied to the video), not H.264.

> Thirdly, compressibility is yet another factor. For example, you can encode H.264 video at 8 megabits per second, and it will look great. Especially if the video is highly compressible. So, then you re-encode the video to Theora using a bitrate of 6 megabits per second, and it will still look great. So what? Does it mean Theora is better, because it needs less bits per second for the same quality? Of course that's total nonsense. It only means 6 megabits per second is superfluous for most videos, and you could probably encode the original H.264 video at 1 megabit per second or less and it would still look great.

I'll say it again so that you might be able to understand it: the H.264 was produced by professionals (it was a trailer for Avatar), and the Theora video was produced by a novice (me).

> That's why all these "tests" showing how good Theora is are completely bogus, and why all serious audio/video quality tests are made by encoding files from the same source, using encoder settings that are generally considered optimal. Only then can you make a meaningful comparison.

Well, that is true, but I didn't have an uncompressed source. In any case, such a test would only give Theora more of an advantage than my test gave it.

Reply Parent Score: 0

J. M. Member since:
2005-07-24

> But that is not my point. My point was that if you have a good result, neither the format nor the encoder can be bad.

My point is that with a better format and encoder, you can get an even better result. Possibly much better.

> Actually, it does. I used a professionally-made source video. If even professionals cannot get H.264 looking decent compared to the efforts of a novice (namely, me), then H.264 can't be a good format.

You probably don't know much about H.264. Professionals could simply do what they could given the restrictions they had to work with. H.264 has many profiles. When you're encoding H.264 video for Blu-ray players, for example, you can only use a subset of the H.264 features. When you want it to be playable in QuickTime (and the Avatar trailer is a good example, as trailers are traditionally made for QuickTime), you may have to use an even smaller subset of features, as the H.264 support in QuickTime is atrocious. So you have to use crippled H.264 with an inferior quality/size ratio. When you want the H.264 video to be playable on mobile devices, you have to throw away basically everything that makes H.264 worthwhile. So the compression ratio will be very sub-par, even if the video was "made by professionals". This, again, does not say anything about the quality of the H.264 format.
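
To illustrate how much the profile restrictions can matter, here is a sketch that encodes the same clip under the standard Baseline, Main and High profiles at an identical quality target. It assumes ffmpeg with libx264; file names are made up.

import os
import subprocess

SOURCE = "master.y4m"  # hypothetical source clip

for profile in ("baseline", "main", "high"):
    out = f"profile_{profile}.mp4"
    # Same CRF everywhere; only the permitted feature set changes.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264",
                    "-profile:v", profile, "-crf", "22", "-an", out], check=True)
    print(profile, os.path.getsize(out), "bytes")

# Baseline forbids CABAC, B-frames and the 8x8 transform, so at the same CRF
# it typically needs a noticeably larger file than High for similar quality.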

"Professionally-made" video is simply an empty phrase, just like "digital quality" etc. It does not say anything about the quality at all.

> 1. Lossy codecs do NOT "clean up" original videos.

Yes, they do. They reduce spatial and temporal details, which is basically what spatio-temporal denoisers do. People use spatio-temporal denoisers as pre-processing to increase compressibility when they're encoding videos. Video with reduced spatial and temporal details (especially temporal, because of P and B frames) is more compressible.
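
For example, a sketch of that kind of pre-processing, assuming ffmpeg with the hqdn3d spatio-temporal denoiser and libx264; file names and settings are purely illustrative.

import os
import subprocess

SOURCE = "noisy_master.y4m"  # hypothetical noisy source

# Encode the clip once as-is and once through a spatio-temporal denoiser
# (hqdn3d), at the same quality target, to see the effect on compressibility.
for label, filter_args in (("plain", []), ("denoised", ["-vf", "hqdn3d"])):
    out = f"{label}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE] + filter_args +
                   ["-c:v", "libx264", "-crf", "22", "-an", out], check=True)
    print(label, os.path.getsize(out), "bytes")

# The denoised encode usually comes out smaller at the same CRF, because less
# spatial and temporal detail has to be coded.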

> 2. It is the easiest part of compression to throw data away. Whatever H.264 threw away was still available to the original H.264 encode, but that data was not available to Theora in my test.

That's why the test is bogus.

> The test that I did penalised Theora (as the SECOND lossy codec applied to the video), not H.264.

The test penalised anyone who would like to know anything. The test simply does not say anything. I could perhaps agree with you that it says Theora is not extremely bad. But that's pretty much the only thing it can say.

> Well, that is true, but I didn't have an uncompressed source. In any case, such a test would only give Theora more of an advantage than my test gave it.

This is highly questionable, for many reasons. But until someone makes a real, serious test, any further discussion is useless.

Edited 2010-03-28 09:54 UTC

Reply Parent Score: 7

saynte Member since:
2007-12-10

Thanks for this well-formed response to a lot of the ridiculous assertions.

Reply Parent Score: 3