Despite the recent interest in adopting HTML5’s video tag, there is still one major problem: there is no mandated standard video codec for it. The two main contestants are the proprietary and patented h264, and the open and free Theora. In a comment on an LWN.net article about this problematic situation, LWN reader Trelane posted an email exchange he had with MPEG-LA, which should further cement Theora as the obvious choice.
As most of you are probably aware, there’s a bit of a codec war going on over the HTML5 video tag. The struggle is between the patented and proprietary h264 codec and the open and free Ogg Theora. While I’m still not entirely sure which one is better from a pure quality standpoint (there are a lot of contradictory reports on that one, and I’m not skilled enough to perform my own tests), Theora has the benefit of being open and free, whereas h264 has the advantage of better hardware support.
The big disadvantage with h264 is that you need to pay lots and lots of money for a license to be able to ship and have your users use it. You need a license for encoding, distributing, and even decoding – which is obviously quite problematic for open source projects. Apple and Google have both paid for the license, and as such, both Safari and Chrome support h264; however, since the license does not extend downstream, Linux distributors cannot ship Google Chrome. Chromium, the non-Google variant of the browser, does not have h264 support either. Both Mozilla and Opera refuse to pay for the h264 license.
Thanks to Trelane, we now have the word straight from the horse’s mouth with regards to the downstream possibilities (or, better put, lack thereof) of h264. He contacted the MPEG-LA and asked them if Free and open source developers and products need to pay for the license as well, since he couldn’t find said information in the FAQ. The answers were clear.
“In response to your specific question, under the Licenses royalties are paid on all MPEG-4 Visual/AVC products of like functionality, and the Licenses do not make any distinction for products offered for free (whether open source or otherwise),” explains Allen Harkness, MPEG-LA’s Director of Global Licensing, “But, I do note that the Licenses addresses this issue by including annual minimum thresholds below which no royalties are payable in order to encourage adoption and minimize the impact on lower volume users.”
In other words, h264 is simply not an option for Free and open source software. It is not compatible with “Free”, and the licensing costs are prohibitive for most Free and open source software projects. This means that if the web were to standardise on this encumbered codec, we’d be falling into the same trap as we did with Flash, GIF, and Internet Explorer 6.
The MPEG-LA further reiterated that individual users are just as liable as distributors or companies. “I would also like to mention that while our Licenses are not concluded by End Users, anyone in the product chain has liability if an end product is unlicensed,” Harkness explains, “Therefore, a royalty paid for an end product by the end product supplier would render the product licensed in the hands of the End User, but where a royalty has not been paid, such a product remains unlicensed and any downstream users/distributors would have liability.”
I’m not sure how this would extend to countries outside of the United States, since software patents aren’t a sure thing all around the world. This perceived unencumberedness of h264 is actually a threat, though, according to Christopher Blizzard.
“[…] These heavily patented formats gain much of the same advantage as free formats – lots of free tools and tons of ad-hoc support from free software people – but with the ability to still enforce and monetize in parts of the world where patents are enforced,” Blizzard warns, “It’s actually a brilliant strategy, even though the outcome is that the true costs of patents are hidden from the view of most people.”
I’ll be blunt and direct, and this most likely won’t be appreciated by everyone, but I don’t care: using h264 for HTML5 video is short-sighted, ignorant, and simply plain stupid. Every web developer choosing to use h264 has learned nothing from GIF, Flash, and Internet Explorer 6. I find it almost delightfully hypocritical that someone like John Gruber, who rails and rails against Flash, finds h264 totally acceptable, telling Mozilla to “get with the program”. A cynical me would say this is somehow related to Apple hating Flash but loving h264, but I’m not in a cynical mood today.
Despite all this talk about h264 vs. Theora, the fact of the matter is that the latter is the best-supported HTML5 video codec. Every browser maker that has implemented HTML5 video has included support for Theora – Chrome, Firefox, and Opera all support it – except for Apple. In other words, the most compatible choice right now for HTML5 video is Theora – not h264.
Yes, h264 has the better hardware support, and yes, it is claimed its quality is higher than Theora’s – but is the short term really all you care about? Are web developers really that ignorant? History suggests yes – Flash, IE6 – but I had hoped we would’ve learned from all this by now. I’d rather face some technological difficulties now that we can actually overcome (hardware, quality) than face a patent and lock-in mess down the road.
Web developers, the choice is yours. Are you ignorant and short-sighted, or are you willing to make a stand for keeping the web open, and finally breaking video loose from its proprietary shackles?
Even I was one of the ignorant masses (well, mass of a small group, but…). Useful info about H.264’s encumbrances did not come up readily when searching, and tended to be glossed over.
We like to have things, implying having control over said things, and paying per codec lets that work out. You pay to have a thing. Your use of the thing is then largely unencumbered.
H.264’s licensing reads like they are trying to apply rent-to-own thinking to a video codec!
Theora is the obvious choice – if not for the reasons already stated, then for the fact that the people making use of it don’t have to pay for it!
No, it’s not. Dirac is the obvious choice when you want patent-freeness. Its quality per datarate is much better at higher resolutions than Theora.
It’s completely illogical that Mozilla refuses to add Dirac support. It doesn’t need to be Dirac exclusively — it could be Theora and Dirac.
That’s another reason why Mozilla should’ve opted for a GStreamer-based solution right from the start. Even if you have no interest in supporting the MPEG-4 codec family, you wouldn’t need to maintain your own set of patent-free codecs yourself. Dirac, just like Theora, already has its own set of GStreamer codecs, and since Songbird (a Firefox-based media player) uses GStreamer anyway, Mozilla could also share the workload of maintaining the GStreamer integration.
Instead Mozilla decided to use OggPlay — software that wasn’t even maintained when Mozilla picked it up. I don’t know if OggPlay is currently maintained.
It’s almost as if Mozilla has some hidden anti-Dirac agenda….
Actually, Mozilla is not refusing to support Dirac. Their stance is that Dirac, while great for archiving purposes for which it was developed, is currently unsuitable for streaming but they’ll consider implementing it once the technical hurdles are out of the way.
That’s bullsh*t. Mozilla had no problem adopting Ogg Theora, even though it was hardly usable for streaming when Mozilla adopted it. Firefox needs to download both the beginning and the end of an Ogg Theora file, because there isn’t even length info in the file header. How braindead is that?
BTW, it still doesn’t change the fact that by adopting GStreamer right from the start, Mozilla would’ve gotten Dirac support for free.
Will you ever stop talking crap about GStreamer? d’oh!
It is the same type of misunderstanding and chaos as OSS, HAL, PulseAudio, etc. A multitude of crap instead of two GOOD things.
GStreamer is plain EVIL.
Ogg was designed for streaming. Specifically, it was designed for streaming radio using Vorbis. It’s worked properly for years, including storing the length of a stream without having to read the whole file.
Perhaps you’re thinking of AVI, which stores a separate index at the end of the file. That certainly can’t be streamed (easily) without reading the beginning and end of the file.
1.) Learn to read. I never wrote that Firefox needs to download the whole file, just the beginning and the end.
2.) No, I’m not confusing Ogg with AVI.
To Quote Christopher Blizzard from Mozilla:
http://hacks.mozilla.org/2009/12/autobuffering-video-in-firefox/
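The mechanism behind that claim can be sketched in a few lines: an indexless Ogg file carries timing as per-page “granule positions”, so a demuxer that wants the total duration has to read the granule position of the file’s *last* page, which is why the tail of the file gets fetched. A minimal sketch, not a real demuxer (it skips CRC checks, stream serials, and granule-to-seconds conversion, and the sample page bytes are made up):

```python
import struct

def last_granule_position(data: bytes) -> int:
    """Find the last Ogg page in a byte buffer and return its granule
    position (a sample/frame count the demuxer converts to a duration).
    This is why a player wanting the total length must fetch the end of
    the file as well as the start."""
    pos = data.rfind(b"OggS")  # capture pattern of an Ogg page
    if pos < 0 or pos + 14 > len(data):
        raise ValueError("no complete Ogg page header found")
    # Page header layout: "OggS" (4) + version (1) + header type (1)
    # + granule position (8 bytes, little-endian signed).
    (granule,) = struct.unpack_from("<q", data, pos + 6)
    return granule

# Toy page: header fields only, granule position = 480000
# (e.g. 10 s of 48 kHz audio).
page = b"OggS" + bytes([0, 4]) + struct.pack("<q", 480000) + bytes(12)
print(last_granule_position(page))  # 480000
```

A real player additionally converts the granule value to seconds using the codec’s rate from the stream headers, which is why it also needs the start of the file.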
Surely for a static file, the webserver will send a size header?
Read the quote. There is no size information in Ogg headers — at least not yet.
The size information would be in the HTTP header.
I miswrote. Not size. Length. HTTP doesn’t tell you whether a clip is 3 or 5 minutes long.
The ogg wrapper is indeed designed as a streaming format, so that an in-progress stream can be picked up and played at any point, even in mid-transmission.
If there is indeed no size information in Theora packets as a consequence of this design, then why can’t this length information (that being the length of the video clip) simply be encoded as additional optional information as part of the HTML5 video tag?
With the potential to lie about the length? Why?
To make my statement clear: I’m not saying that this limitation is set in stone. It’s not, and it is in the process of being fixed.
My point was just: When Mozilla adopted Ogg, it had flaws for the purpose Mozilla adopted it.
Refusing to adopt Dirac because of some minor flaw it might have is a double standard.
Either wait until both formats’ flaws are worked out or adopt both at the same time.
If a supplier of a video wants to “lie” about the length of the video on a website (possibly in the hope of exploiting a security bug in the client’s player), then they can do this either as embedded information in the video data itself or as information encoded in the HTML5 tag. Either way makes no difference … either way the length information can be sent to the client player, and either way the source of the video can lie to the client about the length. Either way the client’s player has to be hardened against incorrect (possibly deliberately incorrect) data it may be processing.
Dirac is still not ready: it is not stable enough, and it is only a reasonable performer for very high-end video. Dirac is perhaps useful for video archiving, but right now Theora is far more suitable for video over the web.
I wouldn’t have any problem if Dirac were a lot more competitive, but right now quality-wise Theora is the only open codec on equal terms with h264.
The only issue with Theora is that hardware-accelerated video decoding of Theora (perhaps using pixel shaders or GPGPU) is not generally available yet; it is still in the works.
However, because most video on the web today is at lower resolutions, and because decoding Theora is less demanding, this isn’t an issue yet for most web clients, and it won’t be until 720p or higher video over the web becomes commonplace.
False. That was true before “Schrödinger” was released. Dirac is stable now.
You don’t have to confirm over and over again that you have no clue about that matter.
I didn’t talk about Theora. My post was about Ogg.
Right now Ogg is not suitable at all as a container.
He meant HTTP headers, not Ogg headers.
For a static file, the web server *will* send the Content-Length header.
In bytes, not in duration.
Hence, the obvious solution is to still use Theora as the codec for HTML5 video and, because the Ogg data stream does not include the length (duration) information, to include the length (duration) of the video clip as a parameter within the HTML5 tag.
There could be other solutions as well, but the fact that there is a patently obvious solution such as this makes one believe that people who bring up this type of objection are just desperately trying to find things wrong with using Theora – and not succeeding, BTW.
Probably the most idiotic and error-prone solution in history. Clearly this sort of information should be inside the file itself. Think about it in terms of content development: if I have a publishing system, I would first need to read the length of the file and then put that in the HTML tag – what a waste of time (as a resource). But why bother arguing, since Microsoft will support h264 and WMV only, and that will mean over 75% of users are potential customers; all freetards can just cry.
A streaming live feed doesn’t have an END OF FILE. FAIL.
I get the feeling you’ve never had to write code that uses GStreamer.
By many people’s standards, it’s not mature, and not only do the devs have an over-inflated view of its quality, they threw a tantrum when a once-burned, twice-shy KDE wrote Phonon with the original intent of it being an API wrapper for GStreamer, to ensure API stability for the entire KDE 4.x release cycle.
Here’s one of the places where I read about the original reasoning behind Phonon: http://lwn.net/Articles/183462/
Unfortunately, I was only able to find that because I remember it being on LWN. I can’t get Google to narrow things down enough to find the other two or three pieces which cemented my view of GStreamer developers as immature and with too much ego. (I think it was a post-reply conversation between KDE and GStreamer dev blogs.)
That doesn’t change the fact that:
1.) Songbird is using GStreamer and Mozilla could share the workload with the Songbird team.
2.) Mozilla is using GStreamer for Firefox Mobile (aka. Fennec): https://bugzilla.mozilla.org/show_bug.cgi?id=422540
So why should Mozilla support LibPlayOgg on desktop and then support a separate back-end for Fennec if Mozilla could just as well use the same code for both?
Can you send a length as part of the http Content-Length or whatever field it is?
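Not via Content-Length, which counts bytes, but there was a server-side workaround in this era: Firefox honored a non-standard X-Content-Duration response header carrying the clip length in seconds, sparing it the seek to the end of the Ogg file. A rough sketch of such a server, with a hypothetical path and a made-up duration (a real site would extract the duration once at upload time):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical lookup table: path -> duration in seconds.
DURATIONS = {"/clip.ogv": 183.04}

class OggHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"OggS..."  # placeholder; a real server streams the file from disk
        self.send_response(200)
        self.send_header("Content-Type", "video/ogg")
        self.send_header("Content-Length", str(len(body)))  # size in *bytes*
        if self.path in DURATIONS:
            # Duration in *seconds* -- the thing Content-Length cannot express.
            self.send_header("X-Content-Duration", str(DURATIONS[self.path]))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8080), OggHandler).serve_forever()
```

The header is purely advisory: a lying or missing value only affects the displayed length, and the player still has to cope with whatever the actual stream contains.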
You must be joking. GStreamer is a terrible mess … please.
So are most of the Mozilla projects. A well-maintained mess, though, tends to be better than an unmaintained one.
If GStreamer actually is a terrible mess, that is a problem, but a browser should not implement this kind of thing in its own code. It should use whatever services are provided by the OS it runs on. That way we don’t reinvent the wheel, and if there is a problem in the OS-level services, it is better to fix it at that level, as this will benefit all software using them, not just the browser. Using services provided by the OS is also much more future-proof: if better codecs show up in the future, the browser could use them automagically, without any upgrades.
GStreamer also runs on Windows and Mac OS X. Providing different back-ends is not needed.
As an alternative to GStreamer, Mozilla could also use Xine (compiled without patented codecs — perfectly possible). Dirac would still be part of it, as far as I know Xine.
Obvious choice??
Remember that “web” videos must be viewable on as much hardware as possible. I’d be interested to know whether an iPhone (for example) is able to play a Dirac video.
Here are some samples:
http://samples.mplayerhq.hu/V-codecs/Dirac/
I haven’t found anything yet which can play these files, including VLC.
The design quality/performance of Dirac at high resolutions is not the problem:
http://en.wikipedia.org/wiki/Dirac_codec
Performance of Dirac at low resolutions is perhaps a problem.
However, a much bigger problem with Dirac is that it is not patented at all. It has no apparent “patent cover”. It is also newer than proprietary, heavily patented codecs such as h264 and VC1.
Dirac is a sitting duck as far as attack by patent trolls goes.
What you need (in this day and age of patent trolls) is a codec which you are allowed/licensed to use in open source that is itself covered by patents so old that there are none likely to be older.
PS: As far as practicality for web video goes, the Wikipedia page for Theora has an example small video:
http://en.wikipedia.org/wiki/Theora
but the Wikipedia page for Dirac doesn’t.
http://en.wikipedia.org/wiki/Dirac_%28codec%29
This page also notes the following:
When we look at the whole controversy:
http://en.wikipedia.org/wiki/Use_of_Ogg_formats_in_HTML5
At this time, Theora is far closer to satisfying all of the W3C requirements for the HTML5 codec than Dirac is, in particular the part about risk of exposure to submarine patents.
Don’t spread FUD about Dirac’s patent situation. The codec only uses very old techniques that have been around for decades. Every troll attack can be easily dismissed on the grounds of prior art or expired patents.
it may be the obvious choice to you, but the industry has already chosen h.264. you can blame apple for this, and it’s pretty much too late to switch to theora. so mozilla needs to just add the h.264 codec to their package.
besides, if everyone switched to theora, theora would most likely get sued by the h.264 consortium. h.264 covers hundreds of video patents, and there’s no way theora doesn’t infringe on at least some of them. not only that, everyone’s phones would stop working with youtube, and it would cost more money and use more bandwidth to send the videos over the internet.
ideally it would be nice if it were unencumbered by patents, but it’s just not going to happen, unfortunately. the standard is already here, and if mozilla doesn’t add it by the time youtube goes html5-only, then firefox is done. microsoft’s going to add it. you can count on that.
I totally agree! Who cares about quality if there is no freedom?
I really appreciate your commitment to theora and freedom!
Most people, actually. People forget that IE and Flash succeeded because there were no better alternatives. h264 has technical advantages, and as a user I cannot really oppose its adoption.
As a developer, I can say that it is short-sighted, but in the end the only thing that really matters is user satisfaction, because that’s where the profits are.
I very much doubt that most users (i.e. the ignorant masses) could tell the difference between an H.264 and a Theora encoded video on their run-of-the-mill hardware under normal circumstances, which are usually less than optimal (e.g. lighting, monitor settings, insufficient resolution, etc.).
The encoding problem is as bad as the decoding one, because it pretty much limits content providers to the ones with deep enough pockets to pay for the license. Normal users will always risk the wrath of the MPEG-LA if they dare to upload their own videos to the web.
Am I the only person who prefers quality to Freedom then?
* I’ll take NVidia’s proprietary drivers over the open nv driver because it works better.
* I’ll take Nero AAC encoder over FAAC because it works better.
* I’ll take h.264 over Theora/Dirac because it works better.
After recent significant improvements in Theora, h.264 no longer works appreciably better than Theora.
If someone is feeding you a story and telling you otherwise, just check that they aren’t in fact speaking for one of these companies:
http://www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx
BTW, the recently available open source drivers for ATI video cards, now included in Linux kernel 2.6.32 or later, work just great. 3D-capable drivers out of the box, nothing more to install, automatically updated along with kernel upgrades.
I have no comment on AAC, other than to observe that Vorbis is also a lossy audio codec that achieves significantly better sound quality than mp3, but it has the added advantage of having no possibility of being encumbered with DRM.
Unfortunately, the choice has already been made by Google and Vimeo. h264 will be the codec of choice for maybe 95% of watched videos for years to come. And, most importantly, because of this (partly) it will be the best codec to accelerate in hardware.
Really hope Theora will be hardware accelerated well enough in the future.
I’m not up to date with the area, but what’s happening with Google and their On2 technology? Why buy it and not use it to maximum effect?
I think people are forgetting about Google buying ON2 for 100+ million dollars.
I think Google would love to use a free codec for their money-losing YouTube.com as much as any of us.
And I think you are forgetting that On2 has a bunch of embedded products that fit Android phones better than YouTube.
BTW, VP8 isn’t a great codec for high-quality, high-res video. It blurs the image for better compression (something similar also works on JPEGs).
This is just a theory but:
All the videos YouTube offers in HTML5 were already encoded in H.264 for Flash. So, testing the video tag functionality on the YouTube beta doesn’t require re-encoding all of the videos on YouTube – a massive effort. I believe that once Google gets all the bugs worked out of their implementation, they will switch to a more open codec.
Ok, Theora has inferior quality with respect to h264? It needs further development; not so much of a problem after all.
h264 is hardware accelerated? Hey, we have GPGPU, so we can accelerate Theora too.
After all, Theora has a few things less than h264, but h264 gives us a lot of legal headaches compared to Theora.
So the way to go is Theora… they don’t want Theora? Ok, I’ll use Adobe Flash then.
The potential for Theora improvement is limited. Theora is based on the ten-year-old VP3, its bitstream format was frozen in 2004, and it is missing features found in more modern video coding standards (VC-1, H.264, even newer On2 codecs), so any further improvement can only come from improving the encoder. But H.264 encoders improve too; in fact, there are enough users of H.264 that there’s real competition among encoders.
This does not work on the millions of already-deployed embedded devices that have no 3D acceleration but do have an H.264 decoder.
Short-sightedness abound.
Yes, there are problems now – but I’d rather face those solvable problems NOW than having to deal with lock-in down the line.
you could accelerate it on a computer. but who cares at that point, computers have the cpu power to spare.
what this choice came down to was the phone support.
the hardware acceleration is on the phones. most phone cpus would not be able to do video if not for special video-only chips that enable them to decode 1-3 different types of video. so no, most phones would not be able to do theora or wma, because those are not hardware accelerated. yeah, they could have been, but it’s too little, too late. the iphone came with h.264 hardware support so that it could do video, and now pretty much every smart phone comes with h.264. the motorola droid has h.264 hardware support also. and now html5 is going to need at least h.264 support to support the phones. the phones are where this html5 push came from. flash wasn’t and still isn’t ready, phones came with internet but no video support, and people want that. so video support started rolling in.
I wouldn’t call them fanboys, but most web developers will blindly follow Apple’s and Google’s lead.
Safari can use a QuickTime plugin for Theora, so while most of the web will be using h264, Theora has better browser support (Opera + Chrome + FF).
I really hope Wikipedia will get lots of videos, but that does not seem to be happening.
That said, if _I_ ever have to put video on the web, _I_ wouldn’t touch anything but Theora (in Germany you never know who might sue you in the future).
wait….
You are afraid of legal liability, but you choose to go with Theora which has an unknown patent status over h.264 which is very well defined?
That makes no sense.
You know, if people are so concerned with Theora’s patent status, let’s just go after it and put that statement to rest one way or another. We already know that whatever patents VP3 had (which Theora is based on) are a non-issue, and, odds are, if there really were patent issues, someone would’ve called them out by now. Even so, these days you can’t even develop a keyboard-based UI without treading on some ridiculous patent or other. Try to prove that H.264 infringes on nobody’s patents. Go ahead. It has the MPEG-LA behind it, which means that patent trolls probably wouldn’t succeed given the patent pool, but I bet you could find instances where even your precious H.264 treads liberally on someone’s patents. It’s not possible to develop any software today without treading on one or more, usually many more.
It is not about proving that H.264 does not tread on someone’s patents; it is the fact that MPEG-LA has a giant patent catalog with many, many very large companies behind it. The likelihood that they infringe on a patent is very small, and there is an even smaller chance that, if they do, someone will go after them for anything but a settlement.
Theora’s patent status is perfectly well known.
On2 Technologies holds a patent specifically for the VP3 video codec. Xiph.org has negotiated with On2 to obtain an irrevocable royalty-free license to develop a codec (which Xiph.org has named Theora) based on VP3. The agreement includes the rights for Xiph.org to redistribute the codec to end users under any license that Xiph.org wants, including open source licenses.
What is there that is “unknown” about it?
Well, Theora is *not* VP3. The VP3 patent situation is well and clearly defined. Theora is *based on* VP3, so the VP3 portions and technologies are in the clear, but what about anything developed in addition to them? VP3 is where Theora came from, but it differs considerably from baseline VP3 now.
Still, I think that needs to get resolved one way or another. I would have thought that if Theora did infringe on any patents, the infringed party would’ve gone after it by now, if for no other reason than to get some of the publicity Theora is getting around the HTML5 issue. I doubt Theora infringes on any more patents than any software program does these days. If you look hard enough, you can find a patent for *any* common convention used in software today.
VP3 is an older codec technology.
If the USPTO made a mistake, and granted a patent to a member of MPEG LA for something that was also already covered by the VP3 technology … then in all likelihood the VP3 patent grant would be the earlier one, and therefore the valid one.
Also, patents are granted based on “inventions”. The fact that the Theora code is now considerably advanced over the original VP3 codec is not important, as long as Theora still implements the same “invention”.
Wow, after a little research, the problem is worse than I thought. According to MPEG LA, here’s the list of patents you get a license for with your purchase of an h.264 license…
Oh, and apparently OSNews truncates… the entire list runs 43 pages.
http://www.mpegla.com/main/programs/avc/Documents/avc-att1.pdf
The last expiry date is in 2028, apparently.
This is, as Steve Jobs likes to say, a “bag of hurt”… for human civilization, IMHO.
Apple Inc.
US 7,292,636
The Trustees of Columbia University in the City of New York
US Re. 35,093
CA 2,096,431
DE 69129595
DE 69130329
FR 564,597
FR 630,157
GB 564,597
GB 630,157
JP 2,746,749
DAEWOO Electronics Corporation
KR 174,441
KR 229,783
Dolby Laboratories Licensing Corporation
US 6,728,317
CA 2,406,459
US 7,266,150
AU 2003247759
MX 252716
SG 108,377
TW I231,711
Electronics and Telecommunications Research Institute
US 7,388,916
November 1, 2009
KR 353,851
CN 200410089708.5
CN 01814313.X
France Télécom, société anonyme
US 4,796,087
DE 3767919†
FI 86,241
FR 2,599,577
GB 248,711†
IT 248,711†
SE 248,711†
Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
US 6,894,628
US 6,900,748
US 7,088,271
US 6,943,710
JP 3,989,485
US 7,286,710
US 7,379,608
US 7,496,143
US 7,586,924
US 7,599,435
AT 352,826
BE 1,467,491
BG 1,467,491
CH/LI 1,467,491
CZ 1,467,491
DE 50306371.1
DK 1,467,491
ES 1,467,491
FI 1,467,491
FR 1,467,491
GB 1,467,491
GR 3,060,957
HK 1,070,480
HU 1,467,491
IE 1,467,491
IT 1,467,491
NL 1,467,491
PT 1,467,491
RO 1,467,491
SE 1,467,491
TR 1,467,491
AT 343,302
BE 1,487,113
CH/LI 1,487,113
CZ 1,487,113
DE 50305419
DK 1,487,113
ES 1,487,113
FI 1,487,113
FR 1,487,113
GB 1,487,113
GR 1,487,113
HK 1,070,992
HU 1,487,113
IE 1,487,113
IT 1,487,113
NL 1,487,113
PT 1,487,113
RO 1,487,113
SE 1,487,113
AT 1,550,219
BE 1,550,219
BG 1,550,219
CH/LI 1,550,219
CZ 1,550,219
DE 50311129.5
DK 1,550,219
ES 1,550,219
FI 1,550,219
FR 1,550,219
GB 1,550,219
HU 1,550,219
IE 1,550,219
IT 1,550,219
LU 1,550,219
NL 1,550,219
PT 1,550,219
RO 1,550,219
SE 1,550,219
SK 1,550,219
SI 1,550,219
TR 1,550,219
JP 4,054,345
JP 4,057,595
JP 4,295,356
JP 4,313,757
JP 4,313,771
KR 729,270
KR 733,795
Fujitsu Limited
US 5,235,618
US 7,272,182
JP 3,791,922
CA 2,439,886
CN 03156610.3
JP 3,946,721
CN 200510130221.1
JP 3,946,758
JP 3,946,759
JP 4,142,180
JP 4,184,389
KR 788,567
KR 788,569
KR 788,570
Hitachi, Ltd
JP 3,191,935
JP 3,303,869
Koninklijke Philips Electronics N.V.
US 4,849,812
US 5,021,879
US 5,128,758
CA 2,018,031
US 5,179,442
CA 2,304,917
JP 2,791,822
US 5,606,539
US 5,608,697
US 5,844,867
AT 157,830
BE 0460751
DE 69127504
DK 0460751
FR 0460751
GB 0460751
IT 52497 BE/97
JP 3,162,110
JP 3,280,999
NL 0460751
SE 0460751
US 5,699,476
CA 2,036,585
DE 69109346.6
DK 0443676
FI 101,442
FR 0443676
GB 0443676
IT 0443676
JP 3,174,586
NL 0443676
SE 0443676
AU 641,726
HK 96â€615
SG 9690467.7
US 6,594,400 of which it is a reissue.
US 5,740,310
CA 2,043,670
US 6,909,450
CN 100375520
KR 114,697
KR 239,837
CN 100380983
LG Electronics Inc.
US Re. 40,177
US Re. 40,178
US Re. 40,179
US Re. 40,180
US 6,912,351
US 7,403,663
US 7,424,159
US 7,433,525
US 7,440,627
US 7,447,367
US 7,463,776
US 7,233,621
US 7,262,886
US 7,277,593
US 7,289,682
US 7,359,569
US 7,391,921
US 7,391,922
US 7,394,945
US 7,397,965
US 7,403,667
US 7,437,015
US 7,463,786
US 7,486,832
US 7,492,959
US 7,492,960
US 7,496,239
US 7,499,598
US 7,558,321
DE 10300528.5
DE 10300529.3
FI 1,359,769
FR 1,359,769
IT 1,359,769
RU 2,338,332
SE 1,359,769
FR 1,383,338
AT 354,259
BE 1,406,453
BG 1,406,453
CH/LI 1,406,453
CY 1,406,453
CZ 1,406,453
DE 60311720.1
DK 1,406,453
EE E001131
ES 1,406,453
FI 1,406,453
FR 1,406,453
GR 3,060,991
HU 1,406,453
IE 1,406,453
IT 1,406,453
LU 1,406,453
MC 1,406,453
NL 1,406,453
PT 1,406,453
SE 1,406,453
SI 1,406,453
SK 1,406,453
TR 1,406,453
GB 2,387,498
CN 200410104547.2
RU 2,297,109
TW I 259,412
TW I 280,806
GB 2,408,889
HK 1,073,043
RU 2,333,616
GB 2,388,267
CN 03101444.5
CN 200510055870.X
CN 200510055871.4
NL 1,022,331
RU 2,282,948
TW 221,076
GB 2,405,549
HK 1,073,557
RU 2,335,861
GB 2,416,455
HK 1,079,647
HK 1,091,634
GB 2,422,263
GB 2,393,873
CN 03101623.5
DE 10300533
GB 2,406,459
GB 2,406,460
RU 2,287,908
RU 2,319,317
RU 2,319,318
GB 2,412,032
GB 2,436,981
AU 2005203096
CA 2,512,671
CA 2,522,835
MX 262,980
RU 2,326,506
RU 2,328,090
SG 113,888
VN 6,782
VN 6,783
GB 2,430,325
JP 3,958,690
CN 200510116129.X
ES 1,359,767
FR 1,359,767
HU E002547
IT 1,359,767
JP 4,020,789
KR 237,636
KR 251,549
KR 491,530
KR 494,828
HK 1,073,555
KR 494,829
HK 1,073,556
KR 494,830
RU 2,273,113
KR 494,831
KR 505,319
KR 505,320
KR 506,864
RU 2,264,049
TW 229,825
KR 506,865
KR 506,866
KR 507,917
KR 508,798
CN 03101657.X
KR 508,799
KR 508,800
KR 523,551
KR 525,785
KR 602,149
KR 617,598
KR 619,716
KR 626,234
KR 626,235
KR 640,937
KR 655,546
RU 2,282,947
KR 674,027
KR 687,845
KR 693,669
KR 693,670
KR 693,671
KR 721,022
KR 748,512
KR 757,829
KR 757,830
KR 757,831
KR 757,832
KR 857,383
KR 857,384
KR 857,385
AU 2004217197
KR 864,790
KR 864,791
KR 865,034
CN 03101620.0
KR 865,039
KR 881,554
KR 881,555
KR 881,556
KR 881,557
KR 881,558
KR 881,565
KR 881,566
KR 881,567
KR 881,568
KR 881,569
KR 881,570
KR 881,571
KR 883,025
KR 886,192
KR 890,504
KR 890,505
KR 890,514
KR 901,642
KR 901,643
NL 1,022,332
NL 1,022,333
NL 1,029,485
NL 1,029,486
Microsoft Corporation
US 6,563,953
US 6,735,345
US 7,289,673
AT 1,135,934
BE 1,135,934
CH/LI 1,135,934
DE 69937462.6
DK 1,135,934
ES 1,135,934
FI 1,135,934
FR 1,135,934
GB 1,135,934
IT 1,135,934
NL 1,135,934
PT 1,135,934
SE 1,135,934
US 6,882,685
US 7,106,797
CN 02143205.8
CN 100463522
JP 3,964,765
JP 3,964,925
TW I 221,388
US 6,912,584
US 7,116,830
US 7,120,197
US 7,263,232
US 7,577,305
US 7,149,247
US 7,155,055
US 7,162,091
US 7,242,437
US 7,248,779
US 7,266,149
US 7,280,700
US 7,286,189
US 7,505,485
DE 60311231.5
DE 60310368.5
FR 1,468,566
FR 1,468,567
GB 1,468,566
GB 1,468,567
HK 1,069,700
HK 1,069,701
IT 1,468,566
IT 1,468,567
JP 4,102,841
JP 4,159,400
JP 4,199,973
KR 578,432
CN 03124169.7
KR 839,308
KR 839,309
KR 839,311
Mitsubishi Electric Corporation
US 5,926,225
US 6,097,759
CA 2,327,489
NO 310,850
US 7,095,344
SG 101,613
TW I 222,834
US 7,388,526
CA 2,554,143
US 7,408,488
US 7,518,537
JP 1,869,9409
JP 3,347,954
JP 3,807,342
JP 4,211,780
JP 4,326,758
KR 740,381
Nippon Telegraph and Telephone Corporation
JP 2,938,412
JP 3,866,628
NTT DOCOMO, INC.
US 7,190,289
US 7,346,216
JP 3,491,001
JP 3,513,148
KR 583,552
JP 3,534,742
JP 3,679,083
KR 603,175
KR 701,810
Panasonic Corporation†
US Re. 35,910
US 5,223,949
US 6,608,939
US 6,658,152
US 6,681,048
CN 97191584.9
JP 3,208,101
JP 3,222,876
JP 3,527,454
KR 354,799
KR 425,613
SG 54,716
Name change from Matsushita Electric Industrial Co., Ltd., effective October 1, 2008.
US 6,373,856
DE 69822751.4
FR 0909099
GB 0909099
IT 2004B69800
US 6,445,739
CN 98800434.8
ID 10,293
IN 213,894
MX 220,076
SG 56,895
TH 25,132
US 6,501,793
CN 01118860.X
US 6,954,156
US 6,954,157
AU 2002357584
ID 18,606
MX 245,670
MX 259,376
US 6,967,600
US 7,312,731
US 6,992,605
CN 02813282.3
US 7,109,898
US 7,088,269
AU 2003221378
ID 19,093
US 7,095,896
US 7,305,134
US 7,308,143
US 7,308,144
US 7,308,149
US 7,126,989
US 7,167,591
AU 2003236032
CN 03800235.3
ID 19,906
MX 249,459
US 7,218,787
TW I 262,024
US 7,184,598
AU 2006203176
US 7,206,347
CN 3801021.6
ID 21,001
US 7,20
and again the W3C shows how incompetent they are
they repeatedly put out “standards” that are not precise enough, and they already burned their fingers on GIF
what are they doing now? putting out an imprecise standard to dodge a patent problem
now would be the time for the W3C to stand up and tell the world that for a free web, only a free codec must be chosen
if the MPEG-LA agreed that h264 will be free to implement and use for web services, it would be OK, but I doubt it will come to that…
The committees are made up of companies whose best interest doesn’t always match up with ours. They want to make money. Promoting an open and free web is really not big on their agenda. Sad but true.
But minimizing their own costs in the long run is most definitely in their best interest. It seems that promoting H.264 may save them some money in the short term, but in the long term is going to end up costing them a lot more.
If minimizing costs means lowering barriers for entry for competitors, it is sometimes more useful to be in a market where those costs are higher.
but that’s the whole purpose of the W3C
the last time they didn’t do their job, we ended up with a complete mess, with IE6 as a result of it (not the cause)
chair is from google
vice chair is from apple
yeah right.
As usual, Thom puts his own spin on the subject.
Here are the real facts, corrected…
I will now call this way of doing news the “reality distortion field of Thom”.
Technical matters are not really the issue here.
The issue is that h.264 is entirely unsuitable for web video, because of the way it’s licensed.
The issue in question is the requirement that anyone who distributes an h.264 decoder is required to pay a royalty to the MPEG-LA (currently, up to US$0.20 per unit).
Since this requires the ability to monitor distribution, and to restrict redistribution (otherwise, you could get one copy from Mozilla and give it to thousands of end-users), this is entirely incompatible with any free software.
The license is not transferable. If Mozilla had a license, it would only cover copies distributed by Mozilla. Linux distributors could not include Firefox. Developers of other software that uses Gecko could not distribute Gecko without getting their own license. OEMs could not include Firefox on a machine (PCs, netbooks, phones, whatever) without getting their own license. No forking. No modified versions. No developing new software based on it. No incorporating it into something else.
The h.264 decoder licenses would also seem to limit distribution of source code, which makes matters even worse. Notice that Chrome has an h.264 decoder, but Chromium (the open-source version) does not.
Opera has a similar problem as Mozilla – they couldn’t include a licensed h.264 decoder in their free desktop browser for the same reasons that Mozilla can’t include one in Firefox. They also won’t, because it’s too expensive.
That’s not even getting into licenses for encoders (same as the decoders, plus the possibility that you may have to pay $2,500 directly to the MPEG-LA for each copy you use for distributing video). Or worse – royalty payments for streaming video. Unless the MPEG-LA are willing to forgo those royalty payments until 2015, they’ll start charging those royalties in 2011.
Basically, using h.264 prices everyone except the really big players right out of the market.
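The tiered structure described above (an annual threshold below which no royalty is payable, a per-unit fee of up to US$0.20, and an annual cap of roughly $5M as mentioned elsewhere in this thread) can be sketched as a quick back-of-the-envelope calculation. The 100,000-unit free threshold here is an illustrative assumption, not a figure from the license itself:

```javascript
// Sketch of the AVC decoder royalty tiers as discussed in this thread.
// ASSUMPTION: the 100,000-unit free threshold is illustrative only;
// US$0.20/unit and the ~$5M cap are figures quoted in the discussion.
function avcRoyalty(unitsPerYear) {
  const FREE_THRESHOLD = 100000; // annual volume below which no royalty is payable
  const PER_UNIT = 0.20;         // US$ per unit beyond the threshold
  const ANNUAL_CAP = 5000000;    // US$ maximum payable per licensee per year
  const billable = Math.max(0, unitsPerYear - FREE_THRESHOLD);
  return Math.min(billable * PER_UNIT, ANNUAL_CAP);
}

// A small distributor stays under the threshold and pays nothing,
// a mid-size one pays per unit, and a giant simply hits the cap.
console.log(avcRoyalty(50000));     // small distributor
console.log(avcRoyalty(1100000));   // mid-size distributor
console.log(avcRoyalty(500000000)); // Adobe/Google scale: capped
```

This is exactly why the cap matters so much in the arguments below: for a Google or an Adobe the fee converges to a fixed, budgetable number, while for everyone in between it scales with every copy shipped.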
Safari or Google could build a browser that supports h.264, because they have lots of money to spend on it, and can control the distribution of their browser. Nobody else can. Worse – you wouldn’t even be able to build one of these web browsers into another application, or into some piece of hardware, without paying royalties to the MPEG-LA.
The likes of YouTube can afford the royalty payments on video encoders, and on the videos themselves, but could a smaller site? Could YouTube itself have been able to afford it before they were bought by Google? Would YouTube have even existed if they had to pay those royalties back when it was three guys in a garage?
How about the current trend to try to use video content to generate ad revenue? Remember that ads are typically pay-per-click, while those royalties on the video are payable for each view. What’s the chance that the ad revenue would even come close to covering the royalties?
For that matter, what about all the other uses for web video that nobody’s thought of yet?
That guy from Mozilla was right – the web grew up on, and thrives on, royalty free.
The technical problems with Theora are solvable. The licensing problems with h.264 are not solvable. Seems like a simple choice to me.
You make a false argument to support your desire and demand that the web use something unpatented for video.
The Content Producers decide, not the free consumer.
Sure they decide, but end users also have to use a licensed product. You’d start to see websites stating that they only work with Safari or Chrome. It is possible… but so very silly; it would be the second coming of “site designed for Internet Explorer”.
At this time, Vimeo has started an experiment to supply video via HTML5, but only using an h264 decoder. This will ONLY work in Safari and Chrome, not Firefox.
Despite the fact that Firefox is a few times more prevalent than Chrome and Safari combined, in a recent announcement Vimeo still somehow made the obviously incorrect claim that using h264 lets them get video to more clients.
Clearly, in order to get open video on the web, users simply cannot leave this decision up to providers who are so obviously willing to lie.
Clearly users will have to start demanding open web video (HTML5/Theora) before we get anywhere with this.
“The Content Producers decide, not the free consumer.”
No dear, they don’t. At least theoretically, the consumer is the one who pays for all this.
If we all decide not to use H264 content, we will have an awful lot of sellers with no buyers.
Of course the mass of consumers at large hasn’t exercised voting with their wallets for a long, long time. So naturally it seems that Corporations run the show and for now they do.
The question here is: how much pressure do you need to apply to a frog before it croaks?
When pigs fly will this fantasy of boycott occur.
This isn’t the Beta vs. VHS debacle.
The Appliance is open for all platforms. The HTML5 standard is open. Today’s web browser is open all working to use the same standard.
The standard is so open it doesn’t declare any one specific codec for the standard.
Content Producers [Those who make the streaming media] can choose to rip in any codec they choose and provide a variety of options for the client [free consumer] to use.
Linux, Windows and OS X all have the necessary tools to process a large set of video/audio codecs without the W3C demanding the HTML 5 compliance mean this video codec must be the One.
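The “variety of options” approach is already expressible in plain HTML5 markup: the browser walks the list of sources and plays the first format it can decode. A minimal sketch (file names and codec strings are illustrative, not from the article):

```html
<!-- The browser tries each <source> in order and plays the first
     format it can decode; entries it cannot handle are skipped. -->
<video controls width="640" height="360">
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  <p>Your browser does not support the HTML5 video tag.</p>
</video>
```

Of course, this flexibility is precisely the cost the thread is arguing about: the producer has to encode, store, and serve the same clip once per codec.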
The license doesn’t have to be transferable, the product itself carries the license.
Mozilla could provide h.264 with every firefox download, but it would involve having to provide a binary blob and paying the royalty cap ($5M or something like that). Mozilla could certainly afford it if they wanted to go that route, with the pot of gold that Google has been providing these many years.
It wouldn’t impact re-distribution, either. The distros could still include the blob.
Not that this is an ideal solution, and certainly flies in the face of Mozilla’s OSS roots, but it’s not un-doable. Mozilla could provide an optional binary blob if they choose to.
Opera could easily include h.264, the fact that it is given for free is irrelevant, they would just have to pay the royalty cap. Flash utilizes h.264, and it’s a safe bet that they’re not paying a per download fee, they’re paying the cap.
Whether Opera should, or not, is a business choice for them to make.
Ahhhhh…. now you’re hitting the nail on the head. Anyone that wants to stream h.264 over the web next year is going to have to pay an undetermined royalty fee. That point rarely gets brought up in these discussions.
Google is the largest purveyor of online video streaming. They are in a unique position that they alone could potentially force the standard of their choice.
It is in the consortium’s best interests to keep Google on board, and it’s in Google’s best interests to keep their dominant position for online streaming.
A cynical person could theorize that perhaps Google is leveraging their position, as well as their recent acquisition, as a bargaining chip for the upcoming royalty fees in 2011+. They could do an end run around the consortium, and negotiate with each patent holder separately, ultimately negotiating a cumulative fee that could be below whatever schedule of fees MPEGLA will be issuing at the end of the year.
With the widespread availability of h.264 compatibility in mobile handsets and computers, there’s very little downside to them supporting the standard as the codec for HTML5 video. Any licensing fees paid are much more easily absorbed by Google than their smaller competitors, or any future upstarts, who will now face a large barrier of entry to compete in online video.
Good thing for the web that Google isn’t evil.
This, ironically, could wind up hurting h.264. It’s one thing to license for the encoding and decoding, but to license for web streaming as well is a bit of an over-reach.
This could potentially hinder innovation as the burgeoning streaming market risks becoming under the control of a group of well-heeled companies with the money to pay, while blocking out the underfunded competition.
We saw what happened when Microsoft tried to extend their desktop dominance to the internet by the imposition of proprietary technologies, simply by leveraging that desktop dominance.
We’re still cleaning up the mess to a certain extent a decade later, but ultimately they failed. Smaller upstarts came at them, and in large part succeeded by simply supporting the open standards that Microsoft was trying to undermine.
If the h.264 consortium tries to become a gateway to internet-based multimedia, then it’s inevitable that an alternate solution will start to appear.
h.264 won’t disappear, but it’s entirely plausible that web services could wind up having to support closed and open media standards, much as they spent the last decade having to support IE and open browsers, in order to ensure maximum viewers.
There’s no reason h.264 shouldn’t be an option for streaming media, it offers a number of advantages, but it shouldn’t become the only option.
This doesn’t make any sense. What exactly is supposed to be achieved by having a “binary blob”? MPEG-LA themselves, as I understand it, publish source code for a reference codec. It isn’t the open-source-versus-closed-binary nature of distributing an h264 codec that is the problem vis-à-vis licensing; it is the third-party redistribution that is the problem.
In short, if Mozilla pay for a license, and distribute their software, and their license allows recipients to re-distribute the software, then those who re-distribute it haven’t paid any license fee to MPEG-LA. Mozilla could “get away” with paying only a small license fee by distributing their browser to only a very few recipients (such as SourceForge and ibiblio perhaps). Once SourceForge and ibiblio re-distribute it, Mozilla’s browser still ends up on as many end users systems.
This problem of unpaid-for re-distribution still occurs regardless if Mozilla distribute the codec as an included binary blob or as open source.
It is the patents on h264, and the expectation of MPEG LA that they should get paid for every copy (and for every encoding, and for every transmission of data), that is the problem, and not the fact that Mozilla Firefox is open source.
MPEGLA caps the royalty fee; they don’t demand an infinite per-product charge (as opposed to some other patent pools that demand an absolute charge for every implementation). You pay per implementation, up to a fixed ceiling; beyond that, it is gratis.
Mozilla (or Opera or anyone else) could simply pay that cap fee, and not worry about it. Granted, it amounts to several million dollars, but hey, it’s only money.
How do you think Adobe et al. are able to distribute h.264-compatible players, even allowing re-distribution? Do you think Adobe is paying a fee for every download of Flash everywhere in the world? openSUSE, Ubuntu et al. aren’t paying a fee for providing the player as a download separate from Adobe’s site. HP, Dell et al. aren’t paying a fee for providing Flash pre-installed on their systems. Do you think Google is paying a fee for each download of Chrome? No, they simply pay the cap and don’t worry about having to account for individual downloads.
The MPEGLA license applies to an implementation within a specific product. If you distribute a binary blob, that’s an implementation within a single product, so there’s no problem.
The model doesn’t work for source code, because source code can be modified to become an infinite number of products.
The model sucks, and I’m not justifying it, but it’s just the way it is.
I know that the license fee is capped … it doesn’t matter, it still applies.
Nope. Nice try, but no.
Your explanation doesn’t account for the observation that MPEG LA themselves distribute an open source reference implementation.
It is not being open source that attracts a fee … it is the mere act of distribution of a h264 codec that attracts the fee.
So why isn’t there demand to change the patents of h.264?
Patents != licensing. There are many formats with current patents that are not licensed the way MPEG LA is proposing licensing H.264.
edit: nevermind, guy before me said it a lot better
This is what happened to the HTML5 committee on the topic of the <video> tag. While I agree that Theora is, as of this day, the only logical choice for the standard, there are people who will not allow it to become one. Apple (a committee member) will simply never support Theora; their official position is that this is due to lack of hardware support and what they call an uncertain patent landscape (read: FUD). While Google supports Theora in Chrome, they do not support it on YouTube because of concerns about quality per bit on such a large site.
Where would the internet be if HTML and JPEG required distributors and readers to pay some fee? In order to really bring video to the web, it needs to be in a format that follows the web’s open spirit. It cannot be something that encumbers users in any manner.
The best that one can hope for in a non-open video format is a broken implementation that kinda works some of the time (à la Flash).
I guess it would be on PNG, just like it now is on JPEG, as a result of GIF once having unacceptable licensing terms. The same thing will happen with video. People don’t like to pay if they don’t have to, and there are more people who have money to save by using an unencumbered format than there are people making money from it.
Agreed. Momentum should push the web to standards that anyone can implement. Websites are what they are today because the images and text are free. You can display them to the users without worrying whether their browser is able to display JPEG or PNG. I fear the same cannot be said about the video tag unless the landscape drastically changes.
“but where a royalty has not been paid, such a product remains unlicensed and any downstream users/distributors would have liability.”
Yeah, not problematic at all. Class act that MPEG-LA.
And on a related note, screw you Apple and anyone else for trying to get this horrendously expensive proprietary shit in an open standard.
Of course, we’re all supposed to know if a video we’re watching is properly licensed. Didn’t you know?
I don’t think they’d actually get away with going after an end user; even as screwed up as the U.S. legal system is, that wouldn’t fly… at least not yet. Still, the fact that they’re allowed to put that condition in there speaks volumes about what a lot of lawyers are up to these days. Scary thought, isn’t it?
A system that could award millions in damages for downloading a few MP3s is also capable of granting damages when a mere user inadvertently consumes h264 content without a license.
it isn’t the content, it is the tool used to play the content…. VLC for instance.
That’s your interpretation of it. Trouble is the statement is sufficiently vague as to be interpreted several different ways (typical legal doublespeak, of course). You interpret it to mean the toolchain, but to me it sounds a lot more like they could go after the end users too. It does say anyone in the content chain. Anyone. That’s the all important word.
that is a pretty obvious interpretation of it. It is kind of like interpreting traffic law to say “go on green”.
“it isn’t the content, it is the tool used to play the content…. VLC for instance.”
Isn’t that academic? If I download H264 encoded video, I’m not merely going to store it as hard disk filler. I’m going to view it.
So consuming content is inevitably going to mean using a H264 decoder.
Given that the world and his dog seems to have a patent in H264, if they really press it, it seems that unlicensed use could run into the millions in damages (especially if proven willful –> triple damages).
as long as you view it with a licensed tool (WMP for instance) then you are covered.
Not quite.
Here is what MPEG LA actually claimed:
“Our MPEG-4 Visual Patent Portfolio License includes 29 patent owners contributing more than 900 patents that are essential for use of the MPEG-4 Visual (Part 2) Standard. Our AVC Patent Portfolio License includes 25 patent owners contributing more than 1,000 patents that are essential for use of AVC/H.264 Standard (“MPEG-4 Part 10”).
Under the Licenses, coverage is provided and rights are granted for (a) manufacturers to make and sell MPEG-4 Visual/AVC Products and (b) for such MPEG-4 Visual/AVC Products to be used to deliver MPEG-4 Visual/AVC Video content. The Licenses were set up this way so as to apportion the royalty at points in the product/service chain where value is received, and also to not place the full royalty burden on one party in the chain (e.g., an encoder maker).
In response to your specific question, under the Licenses royalties are paid on all MPEG-4 Visual/AVC products of like functionality, and the Licenses do not make any distinction for products offered for free (whether open source or otherwise). But, I do note that the Licenses addresses this issue by including annual minimum thresholds below which no royalties are payable in order to encourage adoption and minimize the impact on lower volume users. In addition, the Licenses also include maximum annual royalty caps to provide more cost predictability for larger volume users.
I would also like to mention that while our Licenses are not concluded by End Users, anyone in the product chain has liability if an end product is unlicensed. Therefore, a royalty paid for an end product by the end product supplier would render the product licensed in the hands of the End User, but where a royalty has not been paid, such a product remains unlicensed and any downstream users/distributors would have liability.
Therefore, we suggest that all End Users deal with products only from licensed suppliers.”
The important bits are as follows: “anyone in the product chain has liability if an end product is unlicensed” and “but where a royalty has not been paid, such a product remains unlicensed and any downstream users/distributors would have liability”, and finally “Under the Licenses, coverage is provided and rights are granted for (a) manufacturers to make and sell MPEG-4 Visual/AVC Products and (b) for such MPEG-4 Visual/AVC Products to be used to deliver MPEG-4 Visual/AVC Video content”.
So, presumably, if you use a licensed tool (such as WMP as you suggest) then MPEG LA claim you have met their license requirements for (a) above. However, you still aren’t guaranteed anything about (b) above. The people who made some h264-encoded video that you are watching, or the people who delivered it to you, may not have had a license from MPEG-LA to do so. According to MPEG-LA, you are liable.
Could you please point to just 1 case where the person got fined millions of dollars for downloading?? What color is the sky in your world??
How many IMG tags have you seen that use XBM?
http://diveintomark.org/archives/2009/11/02/why-do-we-have-an-img-e…
I’m sorry, but I don’t understand this reference. Could you please explain how this relates to the video situation?
edit Removed. Stupid comment, not noticing the answer was already in the linked article.
Yes. That’s what the email explains.
“I would also like to mention that while our Licenses are not concluded by End Users, anyone in the product chain has liability if an end product is unlicensed.”
Pretty clear, people. Why are we even debating this fact?
…because that is lawyer-speak, which takes some real effort to parse, even for those of us for whom English is our native language. It’s like vulgar British English vs. US English. I can read it, and wonder what was just said, then go back and read more carefully, logically working it out.
“(…) not concluded by End Users,” is not a phrase we normally read, nor is the grammar common. On top of that, what it really does mean does not make much sense, at first read, given that the idea is rather absurd.
Agreed, I had to crack open a dictionary to decipher the legalese. Sort of like wording a lay-off notice as “You have been found to be the incumbent of a surplus position.”
It means that if anybody “upstream” from you (as an end user) did not pay for an h264 license, then you (as an end user) are liable if you watch an h264 video.
By “anybody upstream” I mean: people who encoded the video, people who put the video on the web, people who wrote the OS you are using, people who wrote the video driver you are using, and people who made your video card. If any of them have not paid up on all appropriate license fees (where appropriate is very ill-defined, given the number of patents that are claimed to apply), then you are liable for patent infringement.
Can you vouch for all of them, that they have all paid for proper licenses on your behalf?
For every web video you have ever watched?
How did you check?
MP3 is also patented and proprietary. Nevertheless, I have never had any problems with MP3, because the companies which produce the software I use have paid all license fees. I’m sure Google, Adobe, Apple and others will make sure that everyone can encode, decode and upload h.264 files without (the users) paying any money.
Ok, there may be some problems with free software and there have been completely free Linux distributions without an MP3 codec, but this hasn’t been a real problem because it’s so easy to install one on Linux. After all, there are free and open-source MP3 codecs, because MP3 isn’t patented in countries without software patents. And there are free and open-source h.264 codecs, too.
Most FOSS project are explicitly exempt, in the case of MP3, there aren’t encoding/decoding licenses, and with the exception of online music stores, distribution is royalty-free.
MP3’s licensing terms are good for FOSS, good for wide adoption in general, and for the most part, exceptionally fair.
The most popular Linux distribution, Ubuntu, does not have MP3 support.
If MP3 caused such a big problem with its relatively loose restrictions, imagine what the h264 monster will be like.
Not installed by default, but the first time you attempt to play an MP3 file it will gladly install one for you.
Not really the point though. You are not free to implement mp3 in your software or device.
Yes, but to the best of my knowledge, MP3 licenses are to be paid for encoders and decoders and only the manufacturers are liable.
MPEG LA just said that, with H.264, end users are just as liable as providers for unlicensed video fees.
And you believe them? Of course they say that, because they want to make more money. Whether users are liable or not depends on the jurisdiction, not on what MPEG LA members say. MPEG LA are not above the law.
Because we all know the joy of having to battle a multi-billion dollar industry giant before court for years to come as a regular user independent of the outcome.
I don’t know about your jurisdiction, but here in Germany court cases have to be accepted first. Such a case wouldn’t even be accepted here.
If MPEG-LA are indeed unable to somehow charge German end users viewing unlicensed h264 video, regardless they will still be able to stop any websites providing any unlicensed h264 video for Germans to view.
The reality is that developers will use the format that has the biggest support. So, for the moment it can go either way … as always it depends on which codec(s) Internet Explorer will support.
What terrifies me is that Microsoft will probably support whatever codecs are installed on each system. This is bad news – it reminds me of the times before Flash video, when you could barely see any video because the browsers would embed an ugly WMP control which barely played anything.
The result? The video tag will go nowhere, and 10 years from now we will still be using Flash video. Developers will use something only if they can be sure that it’s supported on 90%+ of the machines. Realistically speaking, I can’t imagine how that could happen…
someone like John Gruber, who rails and rails against Flash, finds h264 totally acceptable, telling Mozilla to “get with the program”. A cynical me would say this is somehow related to Apple hating Flash but loving h264, but I’m not in a cynical mood today.
it’s nothing cynical. John Gruber is a well-known Mac fanboi. I am not quite sure how he really became well known; most of his articles are trolls, like this one really.
it’s thanks to such people that we have patent issues, or people thinking h264 is the best thing since sliced bread (without knowing *why*, blindness ftw)
one thing I’ve seen missing is why Google supports h264:
they re-encoded all their videos for the iPhone in h264 some years ago. they probably have contracts with Apple for this.
they do not want to have to store both h264 and Theora versions. it takes a while to convert, but it works (they keep the raws). But keeping everything twice has a cost as well. Overall it’s a bad choice for them.
it has nothing to do with the technical merits of h264
I really doubt it is a cost issue with Google. With storage costs so low they could store everything in triplicate and not break a sweat. The reason mentioned back when Theora was abandoned as part of the HTML5 standard was because Google felt that Theora does not meet the quality-per-bit requirement of a site like youtube.
This has since been debated back and forth. And I am not saying they are right, just saying that was their official rationale for using h264.
It’s not only pure storage costs, as in the price of hard disks. It’s also more servers, energy costs for those servers, and computing power for encoding videos into additional formats.
“Web developers, the choice is yours. Are you ignorant and short-sighted, or are you willing to make a stand for keeping the web open, and finally breaking video loose from its proprietary shackles?”
It is just as important to avoid being short-sighted or stubborn when it comes to providing audio content on the web. MP3 is also in proprietary shackles. Each time audio is only provided in a closed format, it gives less incentive for hardware manufacturers to support non-patent-encumbered audio formats and it puts users in a compromised position. For this reason, web developers who wish to supply users with audio content should consider using an open codec (e.g. Ogg Vorbis, Ogg Speex, FLAC) in lieu of or in addition to anything patented like MP3.
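The “in addition to” option is cheap to offer in markup: list the open codec first so capable browsers prefer it, and let the rest fall through to MP3. A hypothetical sketch (file names are illustrative):

```html
<!-- Browsers with Vorbis support pick the open format;
     others fall through to the patented MP3 source. -->
<audio controls>
  <source src="track.oga" type="audio/ogg">
  <source src="track.mp3" type="audio/mpeg">
  <p>Your browser does not support the HTML5 audio tag.</p>
</audio>
```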
I believe that audio is in a slightly better situation than video.
I think that, for example, the iPhone easily has the power to decode Vorbis/Speex/FLAC in software. This means that the hardware argument for video doesn’t matter.
Vorbis is also very high quality, definitely better than MP3 (and Speex obviously beats them all for speech, and FLAC obviously has the best quality, but larger file sizes). This invalidates the quality argument.
I don’t know about patents, but I doubt they are much of a problem for Vorbis (it is fairly well known). Also, audio codecs are quite a bit simpler than video codecs, so there are fewer patents on them.
This means that the only thing stopping adoption of free audio formats is online store support and player support. Both of these are political issues, rather than technical issues like with video.
The iPhone certainly has the power to decode Ogg Vorbis. I use a music streaming service called Spotify on my Mac and iPhone and it uses Ogg Vorbis as its audio format (probably due to the lack of licensing fees).
From TFA:
Why is “breaking video loose from its proprietary shackles” the responsibility of web developers? That’s like claiming that it’s the responsibility of plumbers to ensure that their customers use low-flow shower heads or water-efficient toilets – when that’s really the responsibility of the manufacturers of said devices, the standards organizations that govern building codes & the standards that the devices are built to, etc.
And in most cases, it’s not the individual web developers who are making the decisions – it’s the people who pay the web developers, who in turn make their decisions based on factors like cost & out-of-the-box compatibility (factors which are, in turn, largely determined by decisions made by browser and OS makers).
So what unique power do most web developers really have to change the situation? Not much, at least not unless the day comes when you can implement codecs in Javascript. So should web devs refuse to work for clients who request the use of Flash or h.264? Provide every video in 2 or 3 different formats anyway & eat the additional cost of doing so? Evangelize to their clients about the evils of proprietary technology on the web?
I can see the logic of this, but… with that thinking it becomes: “It’s not my responsibility, it’s the responsibility of everybody else.” Meanwhile the MPEG-LA rakes in billions and the rest of the world curses H264.
While that is a danger in the general sense, it doesn’t really apply in this situation. The responsible party is clearly-defined (the W3C), yet that hasn’t stopped them from shirking their responsibility by caving in and accommodating Nokia & Apple’s self-interests.
Although we would all love to see something that is completely open and free used, it simply can’t happen. H.264 is a standard, developed by industry and standards groups…
Theora is NOT a recognised standard and was not developed by any internationally recognised standards organisation.
And while this is a noble undertaking that continues to produce some very good products, those products are not internationally recognised industry standards, so how can they be adopted as the default by something that is (going to be) an internationally recognised standard (HTML5)? SUPPORTED by HTML5, sure; as the default codec, it won’t happen.
As has also been pointed out by others here, as well as some companies, it has not yet been tested whether Theora in some way infringes on patents encompassed by h.264, so adopting it as part of a standard could open a whole new can of worms.
And to top it off, there are numerous companies out there who are already using hardware based h.264 acceleration in devices – because it IS the standard – who I’m sure would use whatever means they have at their disposal to prevent the adoption of something as part of one standard that itself has not been ratified by any internationally recognised standards organisation. Like it or not, that’s just the way it is.
The reverse is more likely. Theora is an implementation of VP3, and VP3 is a patented codec of some vintage now.
If the USPTO made a mistake and there is common ground between VP3 and some part of h264, then VP3 is in all probability the earlier patent.
Standards organisations don’t develop code, they merely standardise on methods. The primary considerations for making something a standard are: (1) it can be implemented by anybody, in order to promote interoperability and competition between and amongst suppliers, and (2) any terms for its use are at least RAND, but preferably royalty-free.
H264 fails on both. Theora passes on both.
Therefore, Theora should be submitted for consideration as the standard for a digital video codec, because it is clearly more suited than any current standard.
1. Devices are coming with more and more power; phones with over 1 GHz, for example. Theora is less CPU-intensive, so we may not need the hardware acceleration.
2. So what can we do for now? Let’s nail them: fill the W3C mailboxes, blogs, and personal Facebook/Twitter accounts of ALL reasonable members.
Apple isn’t worth trying, but maybe we can reach Google? At least they somewhat listen to their customers.
3. Don’t let people spread the LIES about h264 anymore.
When they spout such nonsense on blogs and boards, we have to CALL THEM ON IT!
We need to use Theora! We want a free and open web.
The content of the biggest video sites is already Flash and/or h.264. The browsers will support them. Next question.
Browsers don’t support either h264 or flash without a plugin.
As of next year, the biggest video sites will receive the biggest bills for using h264.
Firefox, Google Chrome and Opera all support HTML5/Theora without a plugin.
Next question.
In video hardware, programmable pixel shaders can be used to support hardware acceleration of video.
http://xbmc.org/wiki/?title=Hardware_Accelerated_Video_Decoding
http://xbmc.org/wiki/?title=Hardware_Accelerated_Video_Decoding#Alt…
Another possible method is using GPGPU (General-Purpose Computing on Graphics Processing Units).
Hardware accelerated video decoding using the Theora codec is therefore quite possible on some GPUs.
It is certainly possible on many desktop GPUs, but that doesn’t matter much. Desktop CPUs can already do it in software just fine. The real problem is mobile devices. The iPhone simply can’t handle video decoding in software. And the iPhone’s GPU doesn’t have nearly the generic computing capability that a desktop GPU has.
This is nowhere near enough of a reason to impose a patented codec on the web and thereby rip off everybody who does not own an iPhone.
Fortunately, phones are a very short-life item. Perhaps the unlucky iPhone owners who have been dudded by Apple’s attempted lock-in will be aware enough to buy a Nokia smartphone with Maemo, or an Android phone, next time.
Yes, I know this. I am just saying that your argument doesn’t matter, because desktop computers can already do it fine.
Well, I really hope that the next generation of smart phones includes hardware support for Theora. That would totally solve the problem.
Ask @w3c and @whatwg these questions on twitter
It’s time to get rid of this proprietary codec, once and for all. The amount of money that MPEG-LA is charging for the use of h.264 is ridiculous.
Theora offers a great way to do an “end-run” around h.264.
If the Theora approach can be widely promoted and adopted (and Google will have a crucial role to play there), then that would truly be a big win for a more open and standards-based web. If there is one thing that I utterly despise, it is **vendor lock-in**.
As a comparison, even though Adobe’s Flash isn’t fully “open”, at least there is no charge for using Flash, and there are at least two open Flash players that I know of (Gnash and GameSWF). Adobe doesn’t seem to be too bothered by them either.
Compared to this MPEG-LA outfit, Adobe is an angel.
Let’s dump h.264, and dump it *now*.
That’s a good point. It occurs to me that a possible intermediate solution would be to implement support for decoding/displaying Theora video with Flash. While that idea may seem blasphemous to free software purists, I think it could be an effective Trojan horse approach to spurring Theora adoption (in that it would give Theora the ability to take advantage of Flash’s ubiquity). It is possible for 3rd-parties to add support for new codecs in Flash – someone has already implemented Vorbis playback in Flash ( http://toolserver.org/~dispenser/view/Flash_Vorbis ).
That would also be an attractive option for web devs/content distributors (a way around h.264 licensing issues, without having to ditch Flash “cold turkey”), and it would allow “Video for Everyone” solutions that would only require a single video file. And all of that could help Theora overcome the chicken-or-the-egg problem (browser/OS support for Theora is limited because there is relatively little Theora content available, because browser/OS support is limited… ad infinitum) by leading to the existence of a large amount of content using the codec.
Just a minor correction, but Gnash is derived from GameSWF, and GameSWF is meant for custom applications, not for general web browsers. Swfdec is the other major open source Flash implementation that is meant for web use.
On a side note, I would really like to see Gnash and Swfdec combine forces. I think a lot of effort is wasted. But it is their choice, and I respect that. Fortunately, Flash is becoming less and less important anyway. The fact that a company like Apple would not include it in a product like the iPad signifies that it is not entirely necessary anymore.
Web developers, the choice is yours. Are you ignorant and short-sighted, or are you willing to make a stand for keeping the web open, and finally breaking video loose from its proprietary shackles?
That was a ridiculous bit of moralizing.
Google sets the standard through youtube, not web developers. That’s the main reason people install Flash in the first place. People will install whatever is needed to watch youtube. Portable devices will have hardware decoders designed around whatever codec youtube uses. It won’t matter if web developers use Theora on their own sites.
Web developers are the last people that should be blamed in all this. Try Google first, and then the W3C for wussing out by not defining a codec.
Even if Google would like to opt for HTML5 video in Youtube, they could still keep everything in H.264/AVC. They can serve this directly to Chrome, Safari and likely IE in the future. Once somebody comes in with Firefox or Opera, then they just serve the Flash version.
This is the most pragmatic solution I can imagine, though staying entirely with Flash would be even more pragmatic.
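The serving strategy described above can be sketched roughly like this. This is a hypothetical illustration only; the function name and the crude user-agent checks are assumptions for the sake of the example, not anybody's actual server logic.

```python
# Hypothetical sketch of the pragmatic serving strategy discussed above:
# serve native HTML5/H.264 to browsers that decode it (Safari, Chrome,
# circa 2010), and fall back to the Flash player for everything else.

def pick_video_variant(user_agent: str) -> str:
    """Return which player/format to serve for a given browser UA string."""
    ua = user_agent.lower()
    # Chrome's UA contains "Safari" too, so check for Chrome first.
    if "chrome" in ua:
        return "html5-h264"
    if "safari" in ua:
        return "html5-h264"
    # Firefox and Opera support <video> but only with Theora/Vorbis,
    # so under this strategy they get the Flash fallback instead.
    return "flash-h264"

print(pick_video_variant("Mozilla/5.0 Chrome/4.0 Safari/532.5"))  # → html5-h264
print(pick_video_variant("Mozilla/5.0 Firefox/3.6"))              # → flash-h264
```

Note that this is exactly why the strategy is "pragmatic" rather than open: no branch ever serves Theora, so the H.264 license bill never goes away.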
I kind of thought half the point of HTML5 was to have an open source alternative to Flash so the web wasn’t dependent upon Adobe for video. Now we get another entity to be dependent upon?
Google really doesn’t care about getting rid of Flash and that is the lesson in all this. They control YouTube which sets the video standard.
All the FOSS advocates here can stop cheerleading Google now. Open source is just a pet hobby to them.
Web standards should be free and open so that all people, no matter what their economic conditions are, can access the content.
What will they try and close off next?
Fix it now or we will all lose
It didn’t take long for the popular press to get it completely wrong.
http://www.nytimes.com/2010/02/01/technology/01flash.html
Utterly backwards. Completely incorrect.
There are no patents surrounding HTML5, it is a completely open public-access standard. Apple has no ownership interest over HTML5.
Wow… That is definitely the most inaccurate article I’ve seen in a long time. And it’s coming from the New York Times! They should be ashamed of themselves…
Most users just want to be able to see the video without having to pay for it. Google (and YouTube) know that, so they will give the end user a choice. That might change to h264 or theora, but it could just as easily stay the same with flash.
If MPEG-LA asks too much money there will appear other options that might take over. Never forget users can leave youtube for something else when that works better for them. You never know what the future will be.
I’m just a user that wants to view the video I like without having to pay for it.
With H.264 you know how much you will have to pay. With Theora, there is always the risk that some obscure company attacks you for patent violation. Since Theora is not developed and marketed by a large company, you will be on your own to pay the lawyers. In the case of H.264, MPEG-LA would probably take care of the issue, because they would want to continue selling licenses.
So it’s not plain stupid… it’s just a choice. Do you want to pay a known fee, or to gamble? Just because something is open doesn’t mean that some part of it wasn’t patented earlier. The only real solution is to ban software patents.
You’d think that if that were the case, Google would’ve been sued already for including Theora support in Chrome.
Google is kind of a big company, you see.
Thom, maybe you haven’t noticed, but usually the patent trolls emerge when a technology is already in wide use and popular.
In order for there to be a patent troll against Theora, there has to be a patent that was awarded before the patents for VP3 were awarded to On2. In addition, the USPTO has to have made a mistake, and awarded that earlier patent, and also On2’s patent(s) for VP3, so that they covered the same methods for video compression.
That is highly unlikely.
If a patent troll turns up with a patent that was awarded after the patent(s) for VP3 … big deal. The patent(s) for VP3, being earlier, trump any later ones covering the same video compression methods.
That would only apply if On2’s patents covered each and every part of the codec. It is far more likely that On2’s patents cover some major parts of the codec, but not each and every algorithm used in it.
The fact that On2 has some patents absolutely does not guarantee that there couldn’t be any older patents covering some parts of the codec.
If there were older valid patents, prior to circa 1997, belonging to someone else other than On2, which both On2 and the USPTO missed entirely, and which are still valid to this very day, covering some aspect of video compression technology as used in VP3, for which in all this time the owner has never made any demands at all, I would be utterly astounded.
Considering that VP3 patents were seen as obsolete, and handed over to open source in late 2001, that would be the timeframe. Perhaps earlier.
Highly unlikely. Extremely unlikely.
No. When On2 patents part X of VP3, the USPTO checks whether there is anything prior related to that patent. If there is another part of VP3, let’s call it Y, which On2 has no intention to patent, then the USPTO doesn’t even look at that.
The fact that On2 has some patents does prove that those parts of the codec shouldn’t be covered by any other patents. But this doesn’t tell us anything about the rest of the codec.
To reiterate, USPTO has only checked those parts of the codec that On2 has patented, not the other parts. On the other hand, On2 couldn’t possibly have patented the complete codec because even video codecs circa 2000 are very advanced and based on decades’ worth of research and prior art. In the technical sense, VP3 isn’t very different from recent MPEG codecs.
I must admit that it is unlikely that VP3 has any unknown patents or at least that it isn’t any more likely than some other video codec. But yet the patent concern was great enough to make Apple and Nokia vote it out of the HTML5 spec.
You have swallowed the FUD.
There are no parts of VP3 that are patented by someone other than On2. If there were, then On2 would have had to be paying a license to that other party when On2 were flogging VP3 (circa 2000). They weren’t paying any such license fee to anyone (this is the whole reason why they were able to give VP3 to open source). After 2001, On2 went on to VP4 … so anyone with a claim to parts of VP3 would have lost their chance to squeeze On2 way back then.
http://en.wikipedia.org/wiki/On2
http://en.wikipedia.org/wiki/VP3#History
http://en.wikipedia.org/wiki/VP4
What made Apple vote Theora out of the HTML5 spec was the simple fact that Apple are a member of MPEG LA. This move, and the subsequent (and still ongoing) FUD against Theora, was born of a phenomenon known as “self-interest”.
I have got no idea what Nokia were thinking.
No. There are no *known* patents other than those held by On2. The only way to prove that there are no other patents would be to read all valid patents and consider whether those could apply to VP3.
No, there is another way to tell.
If there were any patents at all that could be held in some way against Theora, no matter how useless in and of themselves as technology, such patents could be sold now for an absolute fortune to almost any of these companies:
http://www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx
The fact that there has been no such a transaction, and that the companies on the above list have been reduced to vague FUD spreading about mythical possible patents (such as your own posts on this sub-thread), tells us without a shadow of doubt that no such patents exist.
There is the point, as has already been mentioned more than once on this thread, that Theora is based on the VP3 codec, and that Xiph.org have obtained an irrevocable royalty-free license for the VP3 patents so that they can develop and distribute Theora.
You read it right … Theora itself is based on patented technology. On2 are the owners of that patent.
http://en.wikipedia.org/wiki/On2
The thing is, VP3 is an older codec (late 2001) than h264. That means that the USPTO has granted different patents firstly to VP3, and then later to h264.
http://en.wikipedia.org/wiki/H264
So, if the USPTO is correct, and the technology they awarded patent(s) for in h264 was indeed new and innovative technology, then VP3 and hence Theora does not use any technology in h264.
Also, if the USPTO is correct, the VP3 was new and innovative technology some time before 2001. Patent trolls are going to have a hard time attacking it.
If the USPTO is incorrect, and they have granted a patent firstly to VP3 and then over two years later a patent for the same technology to h264, then that would be a “my bad” for the USPTO … but the winning patent would be the earlier one, not the more recent one.
Hence it is more likely in a patent war that Theora/VP3 would prevail over h264 rather than the other way around, as many people apparently mistakenly imagine.
Well, On2 is the only known owner of Theora related patents. You can’t patent a video codec as is and get a stamp of approval from USPTO that this is the one and only patent covering the codec.
What you can do is patent parts of a codec, and maybe some relations between the parts, but you can’t patent it all as such. Besides, a video codec is very likely to include some well-known parts, such as DCT and motion estimation, which you can’t even patent because of prior art or existing patents.
This makes it quite likely (not in this specific case, but generally) that even if you had some patents on your product, that doesn’t mean it couldn’t violate somebody else’s older patents. Besides, even your own patents could be invalid, because the USPTO generally does a lousy job and many patents are invalidated later by a court.
Well, actually, as it turns out, that is exactly what the grant of a patent by the USPTO is supposed to be a stamp of approval for.
No. Firstly, what is being patented is the method of compression of digital video data. The idea of “codec” (as in encoder/decoder) itself is quite old and therefore un-patentable. Heaps of prior art.
However, a few years before 2001, the methods of compression of digital video were all new. If there are earlier patents that made the same claims as to the method of compression as were made with On2’s patent application for VP3, then the USPTO would not have granted the patent to On2.
Therefore, there were no parts of the compression methods of VP3 that On2 could not patent because of prior patents. These would have turned up in a patent search, and On2 would have had to be paying someone a license for those technologies. They weren’t. The fact that they weren’t paying anyone a license for earlier applicable patents is precisely what allowed On2 late in 2001 to give the VP3 patents to open source in the first place.
If there were previous applicable patents, and parts of VP3 were licensed from some other party, then VP3 wouldn’t have been of any use to open source in the first place.
So there are no applicable prior patents.
As for the other parts of the method in VP3 that On2 could not patent because of prior art … it follows that no-one else could have a prior patent for that either, also because of the selfsame prior art.
If anything, on the balance of probabilities, given the age of VP3, and the time-frame when digital video compression suddenly became something one would want to do, On2 are far more likely to be in the position of the ones to become a patent troll, rather than be the ones who are trolled against.
Remember the vintage … On2 gave over the VP3 technology, including applicable patents, to open source in late 2001. That meant that in late 2001, VP3 was already seen by On2 as “obsolete technology”. That in turn meant that the timeframe for On2 getting these patents in the first place is back in 1995-1998 period. Windows 95 did not even ship with a TCP/IP stack … so much for the scope and presence of the Internet in that timeframe.
I strongly disagree. All modern video codecs trace their technology back to the eighties. The fact that people didn’t download Divx movies before circa 2001 doesn’t mean that the technology wasn’t there.
The first somewhat popular video coding standard is H.261, from 1990. It features, among others:
* blocks
* DCT
* quantized transform
* zig-zag scanning
* motion vectors
* entropy coding
All of these are included in Theora; I just checked the Theora spec.
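Those H.261-era building blocks really are simple, generic machinery. As one illustration (a sketch of the general technique, not Theora's actual code), here is the classic zig-zag scan: after the DCT and quantization, the 8×8 block of coefficients is read out along anti-diagonals so that the many trailing zeros clump together for entropy coding.

```python
# Illustrative sketch: the serpentine "zig-zag" readout order used by
# H.261-era codecs (and Theora alike) to serialize an 8x8 DCT block.

def zigzag_order(n=8):
    """Return the (row, col) visiting order for an n x n block."""
    order = []
    # Group cells by anti-diagonal (row + col == s), and alternate the
    # direction on each diagonal to get the serpentine pattern.
    for s in range(2 * n - 1):
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

order = zigzag_order(8)
print(order[:4])  # → [(0, 0), (0, 1), (1, 0), (2, 0)]
```

The point being made above stands: the technique itself predates both VP3 and h264 by a decade or more, which is exactly the kind of prior art that makes broad patent claims on it implausible.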
Nothing prevents other companies from holding some obscure implementation patents on these technologies, and even if On2 had done their best to check that they don’t violate anybody’s patents, that does not guarantee that there are no patents.
You seem to constantly imply that there exists an On2 patent which covers VP3 as a whole. That doesn’t hold. On2 has just guaranteed that if they have some patents related to VP3, then they won’t use those against Theora.
AFAIK there are a group of patents held by On2 that cover VP3. These were all licensed to open source in late 2001.
http://en.wikipedia.org/wiki/On2
The technology that became VP3 was developed some time prior to 1995.
AFAIK, at no time in On2’s history have they paid anyone else for patented technology in VP3 belonging to someone else. Neither has any patent claim ever been raised by anyone against Theora.
It is exceedingly unlikely that there is a “submarine patent” out there, older than VP3, which can bring down Theora now.
Outlandishly unlikely. Astronomically unlikely.
If someone could have brought down Theora, they certainly would have tried it on by now, just as they tried (unsuccessfully) with Vorbis.
http://www.xiph.org/about/
This is the whole reason why Apple et al are so much against Theora in the first place.
Completely agreed. They’ve already wreaked such havoc on the internet generally that I have little hope of them solving anything.
Regardless which codec it will be, it will be a mess.
One more thing, maybe you should just fix your patent system, USA?
We wouldn’t have this mess and the world would be a better place.
Don’t ask the developers – though I’m sure some of them fall into that category. Most probably get tied up by their managers who think that H.264 is the way to go, etc. When that happens, the dev’s hands are tied.
This page has a list of companies who are licensors of h264:
http://www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx
Because of a strong conflict of interest, any statement at all about Theora made by any of these companies should be viewed with intense suspicion.
It would be much more expensive for Google to stream Theora video, because it is just not as good as h264.
There is a reason Google is going to buy On2. Just wait: VP8 will be released as open source.