Linked by Thom Holwerda on Thu 20th May 2010 23:22 UTC
Multimedia, AV
There's an incredible amount of momentum behind Google's WebM Project. Opera, Mozilla, and of course Google will all include it in their browsers by default, meaning about 35% of web users will be able to use it with a minimal amount of fuss. On top of that, Microsoft has changed its previously announced plan to make HTML5 video in Internet Explorer 9 H264-only, and will include VP8 as well. Only Apple's opinion was unclear - until now.
Thread beginning with comment 425788
rexstuff Member since:
2007-04-06

So, if I understand you right, you're suggesting that we create video streams out of a high-level programming language, much as documents are rendered from something like PostScript or LaTeX, or as video games are rendered using GL shaders?

If so, then I must regretfully say: Sorry dude, I don't think that will work.

Sure, glorious HD scenes can be rendered in as little as 64k, but a) it would be incredibly lossy, in the sense that shortcomings in the rendering engine would make the difference between the originally captured stream and the rendered video as seen by the viewer more than a little... significant. And b) a realistic encoder (one that converts a captured stream into textures and geometry) is virtually impossible, barring major paradigm shifts in current video processing.

Reply Parent Score: 2

voracity Member since:
2010-05-22

Actually, I think something like this would work. It wouldn't solve any patent issues, but it would mean that 'freezing the bitstream' wouldn't be such a big deal.

You just need to make the bitstream Turing-complete and efficient, and give it access to GL primitives and the like (maybe something like WebGL). The idea is that you would embed the codec into the bitstream itself, which is really easy to do, since codecs are pure functions and not that big code-wise.
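
Just to sketch the idea (the container layout, field names, and VM interface below are all made up for illustration, not any real format): the stream carries its own decoder program up front, and the player just runs it in a sandboxed VM.

import struct

# Hypothetical self-describing container: the file carries its own decoder.
# Layout (invented for this example): magic, decoder-program length,
# decoder program (bytecode for some sandboxed VM), then the coded frames.
MAGIC = b"SDV1"

def pack(decoder_program: bytes, coded_frames: bytes) -> bytes:
    return MAGIC + struct.pack("<I", len(decoder_program)) + decoder_program + coded_frames

def unpack(blob: bytes):
    assert blob[:4] == MAGIC, "not a self-describing video stream"
    (prog_len,) = struct.unpack_from("<I", blob, 4)
    decoder_program = blob[8:8 + prog_len]
    coded_frames = blob[8 + prog_len:]
    return decoder_program, coded_frames

def play(blob, vm):
    # The player never hard-codes a codec: it loads whatever decoder the
    # stream ships with and runs it under resource limits. 'vm' is an
    # assumed sandbox interface, purely illustrative.
    decoder_program, coded_frames = unpack(blob)
    decode = vm.load(decoder_program)    # turn bytecode into a callable decoder
    for frame in decode(coded_frames):   # the decoder is a pure function of its input
        vm.present(frame)                # e.g. hand pixels or GL commands to the display

'Freezing the bitstream' then only means freezing the tiny container and the VM, not any particular compression scheme.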

Like I said, doesn't solve patent problems, but (like any turing machine/VM) solves all flexibility issues.

Reply Parent Score: 1

thesunnyk Member since:
2010-05-21

I think it definitely shifts the patent issues. It's no longer something that either the spec or the decoder has to worry about. If someone wants to build an encoder using these techniques and pay the MPEG-LA, then sure, but that's where this stuff gets relegated to. I'd be interested to hear more on why you think it doesn't solve the patent problems.

There are actually machines simpler than Turing machines (I've no idea what they're called) which can't do everything, but for the purposes of "things that can be put on a DSP" they're actually what we're looking for. The spec itself would target that machine instead, then down-convert to GL. Sort of like the distinction between Java applets and Java applications (but obviously both are real Turing machines; one is just a limited form of the other).
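
To make that concrete, here's a toy sketch (the opcodes are invented, nothing from any real spec): a machine whose only loop construct is a repeat with a literal count can never run forever, which is roughly the guarantee a DSP or hardware implementer wants.

# Toy "less than Turing" evaluator: the only loop is "repeat" with a constant
# count, so every program is guaranteed to terminate and its worst-case cost
# can be read straight off the program text. Opcodes are invented here.
def run(program, regs=None):
    regs = regs if regs is not None else [0] * 8
    for op in program:
        kind = op[0]
        if kind == "set":            # ("set", reg, value)
            regs[op[1]] = op[2]
        elif kind == "add":          # ("add", dst, src)
            regs[op[1]] += regs[op[2]]
        elif kind == "repeat":       # ("repeat", count, body): count is a literal,
            for _ in range(op[1]):   # so there is no way to loop unboundedly
                run(op[2], regs)
        else:
            raise ValueError("unknown opcode: " + str(kind))
    return regs

# Example: add register 1 into register 0 ten times.
print(run([("set", 1, 1), ("repeat", 10, [("add", 0, 1)])])[0])  # prints 10

A spec along these lines would target the restricted machine and down-convert to GL from there, as described above.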

Like you say, part of the intent here is to make the codec really flexible while still being hardware-supported.

Reply Parent Score: 1

thesunnyk Member since:
2010-05-21

I don't think that's true. All I was saying with the 64k thing is that there's no floor on how small the stream can theoretically get. In addition, if you can make a demo in 64k, then theoretically you can make a video of that demo in 64k. If you were capturing a movie, then sure, you wouldn't be able to do it, but then again, maybe one day someone will build an incredible encoder that reads the data straight from the matrix. In either case your efficiency can be incredibly good, which is one of the touted features of h264.

The paradigm shift is actually exactly what I'm suggesting, but like I said, if you squint, it's not a paradigm shift at all; h264 is already a step in this direction compared to MPEG-2. All we're doing is coming at it from completely the other end, which means no patent issues. Frankly, if you don't shift the paradigm, someone is going to come along and tell you you're infringing on their patent.

Reply Parent Score: 1