Only a few days ago, we discussed the most popular YouTube feature request: HTML5 video support. Apparently, a lot of people want a version of YouTube that doesn’t depend on Flash (me being one of them), and now Google has honoured their request with the HTML5 YouTube beta. Sadly, video quality needs a lot of work, and in spite of the original feature request, it’s using h264 instead of Theora.
As most of you will know by now, HTML5 includes the video and audio tags, which allow you to embed video and audio files in your HTML document as if they were images. All modern browsers support the HTML5 audio and video tags (except Internet Explorer), but sadly, that doesn’t mean the new HTML5 YouTube beta will work on all of those browsers.
The problem is the codec. HTML5 doesn’t specify the codec to be used with the video tag, leading to a situation where everybody’s debating either Theora or h264. Theora supposedly isn’t as good as h264 (note the supposedly, I hear conflicting statements on that one), but h264 is a licensing nightmare, so not all browsers support it. Chrome and Safari both do, but Firefox and Opera only support Theora. Since the original feature request stressed using open standards (i.e., Theora), Google still has some way to go.
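The fragmentation can be made concrete with a tiny sketch. The support table below simply encodes the claims in the paragraph above (as of early 2010) and is illustrative, not authoritative:

```python
# Illustrative sketch: which HTML5 browsers each codec reaches, based on
# the support situation described in this article (early 2010).
SUPPORT = {
    "Chrome":  {"h264", "theora"},
    "Safari":  {"h264"},
    "Firefox": {"theora"},
    "Opera":   {"theora"},
}

def coverage(codec):
    """Return the set of browsers that can play the given codec."""
    return {browser for browser, codecs in SUPPORT.items() if codec in codecs}

print(sorted(coverage("h264")))    # browsers reached by h264
print(sorted(coverage("theora")))  # browsers reached by Theora
```

Whichever codec a site picks, it locks out part of the HTML5-capable audience; that is the whole debate in four dictionary entries.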
The use of h264 is just one of the limitations of the HTML5 version of YouTube. It also won’t play videos with annotations, advertisements, or captions; for those videos, it will switch back to the Flash player. Since this is a beta, these are acceptable shortcomings.
What I do not find acceptable, however, is the abysmal video quality of videos played using the HTML5 version. They are ridiculously blocky on my setup (Chrome 4.0 on Ubuntu 9.10), giving me the feeling I’m back in 1999. It’s really quite startling to compare the Flash version of a video to the HTML5/h264 version (I suggest trying it out on this one – isn’t it great I can use OSNews to subtly promote my favourite music?).
Of course, there is also a major, major advantage in using the HTML5 version: processor usage. Playing the A Night Like This video in Flash 10 on Ubuntu eats about 68% (!) of my quad-core processor, whereas the HTML5 version sticks at about 12-20%.
In order to join the beta, simply visit this page and click “Join the HTML5 beta”; you can click it again to leave the beta version.
It almost appears the HTML5 version doesn’t do any AA yet, odd.
FWIW, on my Windows box, using Chrome, there’s a negligible CPU utilization difference between the Flash and HTML5 versions of the video you linked (in fact, the Flash version almost seems to use slightly less CPU at the equivalent video size).
Still, it’s exciting to start seeing support for HTML5 video on YouTube.
Edit: clarified difference being evaluated
Edited 2010-01-21 19:42 UTC
Windows users will get favourable benchmarks with Flash.
The problem is that Adobe neglects everyone else.
I don’t know about you, but Gnash has far worse performance for me under Linux (on YouTube). For that reason I’m still using the official Flash plugin.
Edited 2010-01-22 18:40 UTC
iMac here. 2.66GHz Core2Duo. Safari. The video you linked to uses 100% cpu and doesn’t play back smoothly. The flash version is also very high (60%) but looks way better and plays smoothly.
Wow, seriously?
On my 2.4GHz C2D running Windows XP they both only use ~20% (at the “larger” size, but not fullscreen)… weird.
Edited 2010-01-21 19:55 UTC
That all seems like an awful lot.
On my trusty old AMD X2 3800 (2GHz) the Flash version uses “only” between 11% and 13%.
Maybe Debian (Squeeze) is that good.
Yeah, seems high doesn’t it. I just restarted Safari with the same results. Same on Firefox and Chrome too. Not sure exactly what’s going on.
Odd, for me it’s using about 15-20% CPU with Safari on a two-year-old MacBook (2.1GHz Core2Duo). I’m using Snow Leopard, if that makes any difference?
*edit*
And the Flash version uses ~20% (Safari) + ~60% (Flash plugin). And it runs less smoothly…
Edited 2010-01-21 20:53 UTC
That’s odd. I am getting 10% usage with Safari on my 2.0 GHz C2D iMac using HTML5. I got 30% for the Flash version.
Weird..
On a fresh Fedora 12 install with the latest Google Chrome, that video uses 10% CPU. Can’t check the performance of Flash, since it’s not installed.
…and that is without the Nvidia driver installed.
Edited 2010-01-21 21:31 UTC
I’m curious, are people expanding the video to the larger size when they test? Otherwise the video only uses ~4% on windows for me.
When was Apple ever able to program good video codecs?
When Apple introduced MPEG-4 support, they were never even able to implement Advanced Simple Profile — just a slow, low-quality Simple Profile codec.
With QuickTime 7, AVC support was added — only Baseline Profile. No Main Profile, let alone High Profile. Apple’s decoder is also quite slow (it improved with QTX, though) and their encoder isn’t the best quality either.
How can I test it under Ubuntu? I’m getting “Your browser does not currently recognize any of the video formats available.”. Tried with Firefox 3.5 and with Chrome..
Well, let’s see if this improves.. It’s pretty sad seeing my dual core mac (hackintosh) going 100% cpu when playing a simple 720p flash video on youtube…
Sounds like a Firefox problem to me. They could have linked to the media frameworks on each platform (GStreamer on Linux, DirectShow on Windows, QuickTime on OS X) to show videos for which they have no built-in codec. Yet they did not. Related question: why the hell are they building codecs into the browser anyway? Let the OS handle that via a media framework. Embedding codecs into browsers is just going to be one giant mess in the end, and we’ll have the codec mess all over again like we did ten years ago. They need to either come up with a graceful means of handling this, or make the bloody codec part of the standard so there’s no doubt which codec must be used in order to be compliant. As things stand at the moment, we’re headed for 1999 again.
Tapping third party software around the browser seems to be what we have now. If it’s Flash media content then one has to have Flash player available for the browser to tap into. Quicktime is not native to Windows systems so you’ll need that extra bit installed for QT video. There may be an advantage to each platform’s specific media framework if it renders faster with less resource hit but on first uneducated inspection, it would seem to be a different description of the plugin mess we suffer now.
Theora supposedly isn’t as good as h264 (note the supposedly, I hear conflicting statements on that one), but h264 is a licensing nightmare, so not all browsers support it. Chrome and Safari both do, but Firefox and Opera only support Theora.
Hmmmmmm, it’s certainly suspicious that it supports the codec which Chrome does but Firefox doesn’t. Hope they fix this ASAP and it’s not a marketing strategy disguised behind “technical” reasons.
I suspect, this only being a beta, that Google have just opted for the same compression standard that their Flash site uses.
Hopefully they’ll offer Theora in later betas or, at the very least, when “HTML5-tube” goes live.
Here is my question,
Why does it not work in FF or Opera on Windows and OS X? Both platforms support h.264 out of the box. Any application that runs on the OS should be able to tie into that and then… OMG… not worry about the playback costs of the codec, since it would be using the host OS’s capabilities.
Opera is adopting GStreamer, so it should work once Opera releases a version of its browser with HTML5 support.
Mozilla is giving contradictory statements. On one hand, Mozilla people say that using external media frameworks would increase video format fragmentation. On the other hand, Mozilla is working on exactly that for Firefox Mobile (= Fennec): https://bugzilla.mozilla.org/show_bug.cgi?id=422540
Mobile platforms are a different story. They are fragmented per se, and mobile Mozillas will probably be delivered in part with the OS.
This way the responsibility for providing a proper experience lies with the device producer, not the foundation.
Besides, Mozilla’s mobile devs can’t hook into all the video acceleration methods that various mobile devices may require.
The desktop experience, however, is absolutely at their discretion, so limiting OS-specific dependencies works in favour of their users.
What am I supposed to do with Firefox or Opera? I will not download Chrome just to see if it works. It would be the same as downloading IE under Wine to get Silverlight working… Come on Google, why do you disregard ~40% of the users on the web? Or worse, why is the supposed standard not standardized?
To convert them to Chrome users, of course.
MPEG-4 AVC, aka h.264, is standardized by ISO and ITU-T. Theora is not standardized by any standards organization.
You forget the purpose of a standard: to let everybody use it. Patents have destroyed that option for h264, so the best option everybody can use right now is Theora.
Edited 2010-01-22 21:00 UTC
1.) The patents only apply in the US. I’m sick of the USA’s legal imperialism. I’m not bound to US laws and neither is 99% of the world’s population. If Americans are unwilling to change their patent laws, they should be the only ones to suffer from inferior video codecs.
2.) Dirac > Theora.
Unlike Theora, Dirac is actually targeted at HD resolutions.
Dirac is widely used in a production environment within the BBC.
Hardware codecs are available.
A subset of Dirac is standardized as VC-2.
It is not the same thing. Chrome works on Linux and Mac natively. Wine is more like a hack which does not work well most of the time.
The reason that YouTube didn’t release the “open” codec is because they already have all their video in h264 format.
It’s just easier to support, since they have zero re-encoding work to do and it’s all front-end changes. If it catches on and management supports it, then more codecs will be supported, you can bet.
The CPU time to re-encode all the videos on YouTube is probably in the billions of hours by now. Not to mention the storage.
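The scale argument can be made concrete with a back-of-envelope calculation. Every number below is an assumption picked purely for illustration, not a real YouTube figure:

```python
# Back-of-envelope estimate of re-encoding an entire video library.
# All inputs are made-up assumptions for illustration only.
videos          = 150_000_000   # assumed library size
avg_minutes     = 4             # assumed average video length
encode_realtime = 0.5           # assumed encode speed: 0.5x real time per core

hours_of_video = videos * avg_minutes / 60
core_hours     = hours_of_video / encode_realtime

print(f"{hours_of_video:,.0f} hours of video")
print(f"{core_hours:,.0f} core-hours to re-encode")
```

Depending on the assumed library size and encoder speed, the answer swings by orders of magnitude, which is rather the point: it is a huge, open-ended cost either way, before storage is even considered.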
Which is, for that matter, probably the reason Chrome supports 264 and not Theora. Assuming browser support for HTML5 video to be irrelevant until YouTube supported it, Google would logically build in support for the format they’re using elsewhere, and not bother with anything else…
Except that Chrome also has OGG built in too.
Oh, that’s right… it’s Safari that supports only 264, isn’t it?
Yup. Using Theora would cover all HTML5-capable browsers. Only Apple is being annoying here. Their arguments for h264 (hardware chips, for instance) are sound, but that’s no reason to not support Theora also.
We don’t know Microsoft’s stance yet either, though, and they’re the 800 lb. gorilla when it comes to Web market share. I do doubt Microsoft are going to choose OGG, but hey, who knows. Stranger things have happened.
MS has been backing h.264 as is evident with their strong support for its playback on their current OS and media platforms. I am sure they will choose h.264 support…But that choice would be moot since I think they will dump the trident engine in favor of webkit anyway. There are more interesting things happening in the rest of the browser, why bother with your own rendering engine?
Wrong. Safari delegates video requests to QuickTime. Apple does not prohibit Xiph or anybody else from offering streaming-capable Ogg Theora codecs for QuickTime.
(Last time I checked the available codecs could not stream, but that’s not Apple’s fault.)
Indeed. I made the same point above.
Assuming that Google aren’t already in the middle of a mass re-encode (hence why Theora videos are not available yet), I do wonder if it makes more sense for Google to “hot-encode” their media as and when people request it, rather than try to re-encode everything in one huge batch.
There’s 3 ways they could do this:
* encode-on-demand and save the new output as well as stream it out.
This would require approximately double the storage space in their YouTube data centres, but would require less CPU overhead for popular videos.
* encode-on-demand but only stream the output.
This would have huge storage space savings for Google. However, the processing overhead of repeatedly re-encoding popular videos would be immense.
* and finally, they could also do a mixture of the previous two:
Encode-on-demand everything at least once – saving copies of popular videos (to save repeat encoding) and destroying unpopular videos (to save storage space).
So, should Google support Theora in a later version of YouTube, they will have to weigh up which costs more (CPU cycles or storage space) and then work out their plan of action from there.
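The hybrid strategy above can be sketched in a few lines. The popularity threshold and the cache are simplified stand-ins invented for illustration, not anything YouTube actually runs:

```python
# Sketch of the "hybrid" strategy: encode on demand, cache the result for
# popular videos, skip caching for unpopular ones. Illustrative only.
class TranscodeCache:
    def __init__(self, popularity_threshold=100):
        self.threshold = popularity_threshold
        self.views = {}    # video_id -> request count
        self.cache = {}    # video_id -> encoded output

    def request(self, video_id, encode):
        """Return the encoded stream for video_id, encoding only if needed."""
        self.views[video_id] = self.views.get(video_id, 0) + 1
        if video_id in self.cache:
            return self.cache[video_id]      # popular video: served from storage
        blob = encode(video_id)              # CPU cost paid here
        if self.views[video_id] >= self.threshold:
            self.cache[video_id] = blob      # popular enough: spend the storage
        return blob

cache = TranscodeCache(popularity_threshold=2)
encodes = []
fake_encode = lambda vid: encodes.append(vid) or f"theora:{vid}"
for _ in range(3):
    cache.request("abc", fake_encode)
print(len(encodes))  # encoded twice; the third request hits the cache
```

The threshold is exactly the knob Google would tune: raise it and you trade storage for CPU, lower it and you trade CPU for storage.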
Look at YouTube’s setup though. It’s already taking submitted video content and transcoding it into a standard codec, then wrapping it in Flash. All they do is tell the system to use a different codec and leave it to transcode new video on submission, with existing video transcoded in the (probably already existing) nightly batch processing.
Storing and re-encoding video is what Youtube does already.
Netbooks and smartphones will all gain from being able to watch YouTube at acceptable quality without requiring high-powered, battery-consuming processors.
Sounds like a plan: Atom or ARM based Google netbook/phone with Chrome and html5 video.
Except, YouTube runs just fine on my MSI Wind U100, which is all stock (1.6GHz Atom, 160GB hard drive, 6-cell battery) except that I put in another 500MB of RAM (1.5GB total) from my MacBook (as I upped that to 4GB and had the two 500s going spare). Yes, it runs said YouTube video under OS X, and only extremely Flash-intensive sites fail to render properly or play back choppily. For what it is – a very cheap run-around MacBook “saver” (as in, I then don’t have to risk travelling with my MacBook) – I think it works extremely well.
Thom,
I agree with you that it is very disappointing to see Google–which is normally quite good at promoting open standards–pushing a patent-encumbered codec when there are non-patented and documented alternatives that even have free implementations available (e.g. Ogg Theora).
As a member of an online content provider yourself, hopefully you can help to steer OSNews to provide all of its content (including audio and video) in open, patent-free formats so that all readers of OSNews can access it on a level playing field.
One such area of improvement would be to offer the OSNews podcasts in an open format instead of only MP3 since MP3–like H264–is a patented file format.
Yea, because being open is more important than using the best format overall. Sorry, but Theora is crap compared to h264, and MP3 is the only universal format that can be played almost anywhere.
I don’t agree that everything should be proprietary, but (in this case) the better formats won.
h264 used to be significantly better than Theora, but late last year a new version of Theora was released, and that is no longer the case.
The current version of Theora (version 1.1) is almost the same quality as h264. There is a link already provided earlier on this thread where you can check it out if you like.
If we are going to have a discussion on “better formats” for video on the web … lets please just stick to the facts and leave the FUD and mudslinging out of it, OK?
Except that excludes most people, who patently don’t give a stuff about MP3 being patented, but also don’t want to have to re-encode or buy a new MP3 player to support Ogg Vorbis.
As an aside: maybe if Ogg Vorbis hadn’t used floating-point operations in the initial codec it would have gained traction, but it did, and it’s hard to do those operations when you don’t have a hardware floating-point capable processor – like most ARM processors used in such devices.
Firstly there is no “open standard”. That term is an oxymoron. If the whole “open” community agreed that Theora is the “standard” for video encoding then you could claim it is a standard.
H.264, on the other hand, IS an internationally recognised standard, irrespective of its licensing requirements.
And why would you expect Google to create an HTML5 version of YouTube that doesn’t work by default in their own browser?
As for video quality, currently it’s pretty ordinary. Processor usage, however, is minimal – it hardly caused the gauge to move.
Generally, “open” used in the context of software refers to being documented, free of royalties and free of patents. A “standard” could be a specific API, network protocol or a file format. So, there can be open standards (e.g. JPEG, PNG, SVG, OpenDocument) and closed standards (e.g. WMV, WMA, DOC).
Edited 2010-01-21 21:46 UTC
There is still the ‘standard’ part of open standard. And Theora can hardly be called a standard; it’s neither a formal standard (ISO, IEC, ECMA or even IETF or W3C), nor a de facto standard. Quite frankly it doesn’t compare favorably to H.264 – its base, the VP3 codec, wasn’t up to the standard of pre-AVC MPEG-4, and there haven’t been that many changes since then.
OTOH the lack of Theora support does look stupid, especially considering that Google now owns On2, the company that provided the base for Theora by open-sourcing their VP3 codec.
I really think that we should not criticize them at this point for using H.264. Their entire video collection was already encoded this way so it was easy to just create a new front-end using HTML 5 video.
Re-encoding all of YouTube’s videos with Theora would take months or years.
I really hope though that eventually Google will open-source VP8 and use that for YouTube. Supposedly it is as good as H.264 (but so is Theora, according to some people). They could make all the videos available with both H.264 (for phones and other embedded devices that only have H.264 acceleration) and VP8 or Theora (for real computers). This would add storage costs, but no extra bandwidth costs.
Note: Is anyone else having really odd issues with OSNews right now? Earlier the background of all the articles was green, and the background of much of this comment submission page is orange (EDIT: now it is green, like the article was earlier; REEDIT: now the article is green again! I have restarted Chrome several times…). Is this some sort of server issue? Or just my computer being strange? (It normally works fine; I am using Chrome on KDE 4 on Ubuntu 9.10.)
Edited 2010-01-21 21:13 UTC
Happening here on Windows XP/Chrome as well.
I’m also having it with Firefox on Ubuntu, though not with Konqueror or Arora (which is odd, since they, along with Chrome, are WebKit-ish browsers).
Maybe it’s an issue with AdBlock Plus? I’m using it in both Firefox and Chrome (and I am also using Arora’s built in ad-blocking, which uses AdBlock Plus’s rules, but Arora has no color issues, but can’t seem to view comment pages at all!).
It’s a code issue, but we have no idea how or why. Adam is working on it.
Google has more computing horsepower available to it than just about any other organisation on the planet. Dailymotion has converted its video to Theora on the openvideo site:
http://openvideo.dailymotion.com/
So I can’t see why that should be a problem for Google.
If anyone still doubts that Theora has effectively caught up to h264, people can check it out for themselves at 720p resolution on the website tinyvid.tv by looking for recent videos posted there entitled Sherlock Holmes Trailer 1 and Sherlock Holmes Trailer 2.
Here is the link to one of them:
Warning, this is a very large file, 87.2 MBytes, as it is a movie trailer (Sherlock Holmes Trailer 2) from tinyvid.tv
http://media.tinyvid.tv/1s2sj33y1ozc3.ogg
You might be able to see it via HTML5 directly in your (non-IE) browser here:
http://tinyvid.tv/show/1s2sj33y1ozc3
Those two movie trailer clips encoded in Theora at 720p video resolution are both excellent illustrations of the current state of the art of the Theora encoder.
Given those two examples, the Theora codec seems to be on par in terms of filesize and image quality with h264.
VP8 is a patented proprietary codec. There are no open implementations of this codec, it is no better than h264 in this respect.
Finally, some aspects of hardware video acceleration (such as scaling and motion compensation) are independent of the codec used, as they are done after video decoding. AFAIK there is no reason why players cannot use the video post-processing part of the hardware acceleration support in any given device to assist in playing Theora-encoded videos.
http://en.wikipedia.org/wiki/Video_acceleration#GPU_accelerated_vid…
http://en.wikipedia.org/wiki/Video_decoding
http://en.wikipedia.org/wiki/Video_post-processing
Edited 2010-01-21 22:31 UTC
Promoting free standards is one thing, but the open source community needs to get its act together. I just tried playing that with Firefox & the Xine browser plugin… audio plays, but the video window remains gray…
This message says a lot:
demux_ogg: Theora stream support is highly alpha at the moment
Needless to say, Xine handles h264 inside a .ogg container just fine. The Xine browser plugin generally plays everything; it even handles (yuck!) Silverlight sites…
If Xine can’t handle it, maybe MPlayer can? Don’t think so:
Ogg file format detected.
VIDEO: [theo] 1280×720 24bpp 29.970 fps 0.0 kbps ( 0.0 kbyte/s)
[VO_XV] Could not grab port 131.
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg’s libavcodec codec family
[theora @ 0x889ee60]Missing extradata!
Could not open codec.
VDecoder init failed
Again, MPlayer has no issue at all with h264 inside .ogg.
Let’s not complain to Google before all common open source players handle Theora fine. h264 is the best choice.
Edited 2010-01-22 05:00 UTC
Works absolutely fine for me.
Plays in VLC, MPlayer and SMPlayer, and in Firefox without any plugin.
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_720p&Qif=SMplayer.pn…
MPlayer has no issue at all with Theora inside .ogg
Edited 2010-01-22 05:15 UTC
I misspoke here, I’m afraid. It was the Chromium browser that played this file without a plugin, not Firefox.
Firefox loaded the video file and showed the opening frame, but would not play it.
Works fine in Firefox on Arch.
Maybe your distro’s packages are still a few versions behind?
I’m running firefox on Arch. Perhaps you have a plugin installed?
I have only Flash and Java.
I will try it with the VLC plugin.
PS: No, still didn’t work. I must have something misconfigured or missing.
PPS: Still didn’t work with the plugin recommended on the Arch wiki, which was gecko-mediaplayer.
I must have really broken something, I suppose.
PPPS: Other video files, including another Theora-encoded video, work fine. It is just that one file that won’t play in Firefox. Strange.
Edited 2010-01-22 10:38 UTC
AFAIK I don’t have any extras installed besides what came with Firefox.
I’m just running whatever was set up by pacman when I installed KDEmod.
Indeed. Blurry and blocky output. No match for good AVC encoders that achieve better quality at lower bitrates.
Pffft.
The only “blurry” bits are the out-of-focus background. That is blurry because of the original camera, not because of the video encoding.
The in-focus foreground is perfectly sharp, and not at all blocky.
http://ourlan.homelinux.net/qdig/?Qwd=./Theora_720p&Qif=shot0006.pn…
Your utter irrational bias against Theora is showing.
Heaven knows why you should be biased, since you are being offered the use of this very competitive technology at absolutely no cost.
Dude, go to an eye doctor and get a prescription for glasses.
If you seriously can’t see the blockiness and blurriness in faster moving scenes, your eyes are damaged.
Yes, it looks great but do you also have a link to a h.264 encode of the same video at the same bitrate so I can compare side-by-side?
It’s worth noting that although Opera has had experimental builds with <video> support since 2007, they don’t have it in their current release (10.10). They only support <video> and <audio> on the 10.5 pre-alpha.
I’m glad they chose h264. At least for now it’s more convenient, and that’s all users really care about.
How exactly is h264 more convenient?
Theora is a free and open codec and it should be easily available to anyone in almost any browser/media player on any platform. At the very least it should be available to anyone who isn’t going to allow Apple to dictate to them.
h264 won’t be more convenient for anyone next year, when MPEG LA start charging for its use.
Last I heard, there are more hardware acceleration chips for h264 than there are for Theora… meaning less CPU usage on devices which already have them integrated.
Last I heard, there are more hardware acceleration chips for h264 than there are for Theora… meaning less CPU usage on devices which already have them integrated.
Where devices have h264 chips integrated, at least the video post-processing functions of the chip (the functions that follow the decoding) should still be useable for playing Theora videos, even though the actual decoder function is obviously not applicable.
The decoder function is the most CPU-intensive portion!
Do you think? I’m not an expert, but AFAIK the encoder/decoder function is merely digital video data compression/decompression.
A direct analogy would be to observe that it doesn’t take a lot of CPU to zip/unzip a file.
This is not a direct analogy. Video decoding (and encoding if live streaming) is a real time activity. When decoding can’t be done fast enough, frames are dropped… and when enough frames are dropped, the video turns into a slide show.
H.264 requires significant CPU time to decode on a general purpose CPU, especially at high resolutions. This is why there was such a big effort to introduce H.264 hardware decoding in GPUs. Generally, an Intel Core CPU of 2 GHz or more is required to decode 1080p H.264 encoded to Blu-ray specifications, and with a decoder that has been optimized.
With hardware decoding of the entire H.264 bitstream, playback of a 1080p H.264 video takes only a few percent of a modern CPU (with the CPU running at a low clock speed, too).
It would be possible to use something else that required less CPU time to decode – but at the expense of using significantly more bandwidth.
ZIP uses relatively simple algorithms that work on any binary data. It isn’t nearly as effective for moving pictures or audio as a lossy encoding like AAC or H.264.
Furthermore, ZIP is hardly state-of-the-art – compare ZIP to RAR or 7-zip or bzip2. In these cases, superior compression comes at the expense of more computation time.
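The real-time constraint described above can be sketched numerically: each frame has a fixed time budget (1/fps), and when decoding falls too far behind, frames get dropped and the video degrades toward a slide show. The per-frame decode times below are invented for illustration:

```python
# Sketch of real-time playback: frames that cannot be decoded within
# budget force the player to drop frames to catch up. Illustrative only.
def play(decode_times_ms, fps=25):
    """Return (shown, dropped) frame counts for a stream of decode times."""
    budget_ms = 1000.0 / fps       # 40 ms per frame at 25 fps
    shown = dropped = 0
    backlog_ms = 0.0               # how far behind schedule we are
    for t in decode_times_ms:
        backlog_ms = max(0.0, backlog_ms + t - budget_ms)
        if backlog_ms > budget_ms: # too far behind: drop this frame
            dropped += 1
            backlog_ms -= budget_ms
        else:
            shown += 1
    return shown, dropped

print(play([30] * 90))  # fast decoder: (90, 0) — nothing dropped
print(play([60] * 90))  # slow decoder: drops frames to keep up
```

This is why ZIP is a poor analogy: unzipping a file a bit slowly just takes longer, while decoding a frame a bit slowly means the frame never gets shown at all.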
The functions of streaming the video data, transporting it to the video framebuffer, syncing with the frame rate, motion compensation
http://en.wikipedia.org/wiki/Motion_compensation
inverse discrete cosine transform,
http://en.wikipedia.org/wiki/Inverse_discrete_cosine_transform
etc, etc
are all done after decompression, and AFAIK they are all functions independent of the codec used.
http://en.wikipedia.org/wiki/Video_post-processing
The functions above are the heavy lifting of video replay, AFAIK.
Decompression of the encoded video data is but a small part of the computation required. The hardware decoding in GPUs is AFAIK mostly dedicated to all the computation, timing and synchronisation that is required post-decompression.
I must admit, however, that the point in the pipeline at which the codec-specific computation stops, and the generic video post-processing begins is more than a little vaguely defined.
http://en.wikipedia.org/wiki/DirectX_Video_Acceleration#Overview
http://en.wikipedia.org/wiki/VDPAU
The video APIs do, however, allow for the data to be injected at any of several points along this pipeline, they do not always require injection of video data before the codec.
No, these tasks are all considered to be part of the video compression (just read the pages you linked…). The specific methods/algorithms used vary.
The H.264 decoder typically takes in a H.264 bitstream (which could come from a file or streaming container), and its output is typically a series of uncompressed frames. Some hardware might accelerate only part of the decoding process, but recent ATI and nVidia GPUs and some Intel GPUs contain full H.264 bitstream decoding.
Video postprocessing can require heavy lifting, but H.264 decoding still requires plenty of CPU time if the H.264 decoding process is not accelerated. The things listed on the postprocessing Wikipedia page are universal to many video compression techniques, but they are not required for video playback in many situations.
For example, outputting a 1920×1080@24 fps H.264 video to a 1920×1080@24 Hz display requires no scaling, no telecine or inverse telecine, and no deinterlacing. Deblocking is performed because it is a required decoder feature in H.264. (Although some software H.264 decoders do allow users to disable deblocking to increase the framerate on slower systems, it does not reduce the CPU load significantly.)
It’s not vaguely defined at all. Different video compression methods often use the same basic techniques, but they do not necessarily do everything in the same way. The only “universal” processing tasks are things like color space conversion, scaling, deinterlacing, etc. And H.264 decoding is still CPU intensive when these processes are accelerated or not performed.
This was the case three years ago. But now, most new GPUs have full H.264 bitstream decoding.
The post-decompression tasks aren’t all that CPU intensive. H.264 decoding is by far the most intensive part of playback.
When Theora works in the MKV container, then we can talk.
Seriously… the convenient standard is what you see on the download sites… every video for download is in h264. Even the pirates know what is easiest to use.
I was using HandBrake just last night to convert some downloaded .flv files from YouTube into Theora/Vorbis in an MKV container. I know it is lossy to convert from one compressed format into another, and that to really compare format quality one should encode from the uncompressed video data into each format one is trying to compare, but I don’t actually have any uncompressed video to test with.
Anyway, the MKV files made by HandBrake with Theora/Vorbis encoding all played just fine in SMPlayer and in VLC.
I didn’t see any problem.
There is a thread here:
http://forum.handbrake.fr/viewtopic.php?f=12&t=12424
but the last post in the topic was Sep 26, 2009. The issue didn’t seem to be with Theora anyway.
Ubuntu 9.10, Atom netbook, Latest Chrome dev build
For the linked video in the article I get around 50-60% CPU with HTML5 and 90%+ with Flash. The Flash is better quality, though. Looking around the site a bit, most Flash videos tax my machine equally, while the load varies a bit using HTML5; some were as low as 30%.
I don’t know why people are surprised at the codec choice; YouTube currently uses h.264 for their Flash videos and they have always said that they prefer h.264.
Will they change their mind? I don’t know. But I won’t be making any guesses until Google’s acquisition of On2 is complete. They will get access to some new technologies and I don’t think anyone outside Google knows how they plan to use them.
Regardless of any quality discussion, h.264 is definitely the better codec for YouTube.
The big problem is the transmission of the content configuration of the data stream. Where h.264 defines a clear data stream in which the configuration (width, height, decompression tables) is transported with a network abstraction layer defined in the spec, Theora takes a different approach: you need to configure your decoder before you can feed it the data stream.
Just at the moment I am in the middle of implementing a video chat system, and I want to have Theora support next to h.264 as well. The h.264 implementation was pretty straightforward: you just feed the decoder the data stream and eventually you get decoded pictures. With Theora you need to make sure that the receiver has all the necessary configuration parameters of the codec, so that it can start decoding the data stream. So I decided to send the setup data separately through a TCP connection, where I could be sure the receiver got it correctly. That was really a pain, since the system wasn’t designed to communicate like that.
You can read about that here: http://tools.ietf.org/html/draft-barbato-avt-rtp-theora-01
If I were YouTube I would have chosen x264 as well. It’s just simpler to implement.
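The out-of-band setup problem described above can be sketched like this: the Theora decoder needs its three setup headers (info, comment, setup) delivered reliably before it can touch the stream. Below is a minimal sketch of framing them for a one-off TCP handshake, using a length-prefix convention invented here for illustration (the RTP draft linked above defines its own, different packing):

```python
import struct

def pack_headers(info: bytes, comment: bytes, setup: bytes) -> bytes:
    """Frame the three Theora setup headers with 32-bit length prefixes."""
    out = b""
    for header in (info, comment, setup):
        out += struct.pack(">I", len(header)) + header
    return out

def unpack_headers(buf: bytes) -> list:
    """Recover the headers on the receiving side of the TCP connection."""
    headers = []
    while buf:
        (n,) = struct.unpack(">I", buf[:4])
        headers.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return headers

# Round-trip with dummy payloads (real headers come from the encoder).
blob = pack_headers(b"info", b"comment", b"setup")
print(unpack_headers(blob))  # [b'info', b'comment', b'setup']
```

With h.264, by contrast, the equivalent parameters ride inside the stream itself (as NAL units), so no such side channel is needed, which is exactly the implementation difference the commenter ran into.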
Because of course, slapping the VIDEO tag into a document that still has an HTML 4.01 Transitional doctype and endless needless DIVs and classes is an “HTML 5” beta.
I guess 280+ validation errors wasn’t enough for them; they needed to add one more and prove they REALLY know nothing about writing HTML.
If you’re going to launch an HTML5 beta, it helps to maybe, oh, I dunno… WRITE THE **** PAGE IN HTML 5!
Edited 2010-01-22 04:02 UTC
Google has been supporting h.264 YouTube for a while now, at least via the iPhone.
It’s admirable that they are trying to support the HTML5 video tag et al., but they already made the investment for h.264 availability versus flash because of Apple. Isn’t this simply offering up a public web interface to the media format they’ve had available for some time now?
Not that there’s anything wrong with that, I’m just surprised it took this long. I’d have seen it as a bigger moment if they offered Ogg support.
Or am I missing something different here? I’m not that familiar with the nuances of codec support, so I could very well be. I’m just asking as a genuine question.
I think the bit you might be missing is that using h264 will likely cost a site like YouTube a fortune in royalties starting next year.
http://www.streaminglearningcenter.com/articles/h264-royalties-what…
It won’t cost any end users, and it won’t cost a normal site (with the odd video or two) very much, but it will probably significantly cost bulk video content suppliers.
Therefore, companies who have as part of their business model the supply of large amounts of video content over the web to large numbers of users will be VERY interested in finding a royalty-free way of continuing that business by the end of this year, I would imagine.
YouTube’s existing content is encoded in h264, and right now it doesn’t cost them anything, and the ability to play high-fidelity Theora content is not yet widespread.
This could all change right around after December 2010.
Edited 2010-01-22 06:48 UTC
That’s why they chose H.264: they already have most of their videos encoded in H.264. So implementing the “HTML5 Beta” required no costly encoding, just a change to their video player code.
They’re testing the HTML5 capability to iron out the bugs before they re-encode millions of videos.
Edited 2010-01-22 14:48 UTC
No, the fees are capped at 5 million US dollars. Peanuts for Google — hardly a “fortune” for them.
Stop spreading those lies, please.
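The cap mentioned above is what settles this argument either way; a toy calculation shows how a cap turns a per-stream fee into a flat cost at YouTube scale. The per-stream fee and the stream counts below are invented numbers; only the 5-million-dollar annual cap is taken from the comment above:

```python
# Toy illustration of why a royalty cap matters for a bulk video provider.
# The fee and stream counts are made up; only the cap comes from the thread.
def annual_royalty(streams, fee_per_stream, cap):
    """Royalty owed for a year: per-stream fees, limited by the cap."""
    return min(streams * fee_per_stream, cap)

print(annual_royalty(200_000, 0.002, cap=5_000_000))          # small site: trivial
print(annual_royalty(100_000_000_000, 0.002, cap=5_000_000))  # YouTube-scale: hits the cap
```

A small site pays next to nothing, while a bulk provider pays exactly the cap, which is why whether the cap is "peanuts" depends entirely on whose balance sheet you hold it against.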
I see negligible CPU consumption with the HTML5 vid. Just idle I get 3%-6% usage (with all the crap I am always running), and when playing, 8%-12% usage. But this is a quad-core with loads of memory, a very nice graphics card, etc.