In line with the current torrent of articles on the H.264 and Theora debate, I feel that it is unfair for the “pragmatists” to talk about Theora as if it were a stupid ideal that is useless to consumers. This article will focus on defining the terms of the debate and make the case that Theora has a reason, if not a chance.
Before we can even dive into the actual discussion, I seriously need to define what is meant by “idealistic” and “pragmatic”, simply because the current, colloquial usage of these words in this context is actually detrimental to the discussion.
From Wiktionary, “idealistic” is defined in terms of idealism, and we can infer that it refers to “having high ideals that are usually unrealisable or at odds with practical life.” Similarly, “pragmatic” is defined as “practical, concerned with making decisions and actions that are useful in practice, not just theory.” I am now going to argue that the “idealists” have been so concerned with making the case for Theora that we have forgotten that the “pragmatists” they have been arguing with are, in fact, “idealists” too.
Basically, I do not see an “idealist” camp and a “pragmatist” camp here. I see a “get it done fast” camp and a “get it done right” camp. It is obvious that the “get it done right” camp can be called idealistic, but so can the “get it done fast” camp. They follow the ideals of being “first to market”, “getting the job done and over with” and “providing the least hassle to the people”. The problem with getting the names wrong is that the “get it done fast” camp gets to think that it is in touch with reality, simply trying to get its work done. This is not true, and I am going to show why.
Let us step away from the current debate and look into history (our best textbook). Before Newton’s time, there was no such thing as science. Galileo’s Dialogue is more of a poor man’s guide to engineering than science. In fact, up until thermodynamics, it had always been the engineers who found things out for science to explain. This is a well-understood situation, and things proceeded slowly, much as evolution would predict (slowly, but with inventions delivering progress in small leaps, rather like quantum jumps).
Then came Newton. Back in a time when people were busy with Zeno’s paradox and questions like “What is time?”, “What do you know about time?”, “What do you mean by know?” and the brain-space deadlock, Newton was the practical man, writing that he would consider only a mathematical approach to solving the problems of mechanics and the motion of the stars. This was a huge departure from what came before: instead of talking in useless loops, he was able to predict and explain the motion of things that had previously been left to squabbling.
But notice, he simply used mathematics to do it. I shall not venture to say that the mathematics he used was in any way rigorous, but you get what I mean: it deals only with ideal rigid bodies. Surely you must agree that this belongs in the “get it done right” camp, not the “get it done fast” camp, simply because there was no need for anything to be done fast; engineering was far ahead of science.
Perhaps that is not a good enough example, so let us talk about Sadi Carnot (yes, I’m a physicist). Carnot came along before people had even formulated the conservation of energy. It was a time when people were furiously trying to improve the steam engine. Steam engines had already been in use for over a century. The engineers were having lots of trouble: steam corroded the engines they were making, and they never did understand why increasing the compression ratio helped the machine perform better. These questions were at the back of the minds of the people involved.
Then came Carnot, who used his ideal Carnot engine to show that water was irrelevant to the making of a heat engine (shock and awe) and that the compression ratio is much more important than keeping the steam engine well lubricated. What happened next was monumental: Europe, already powerful from the work of “those ideal scientists (academics)” (I’m including the work of everybody under their influence, i.e. people under the influence of Adam Smith and all the workers and employers, as would be seen by a person outside Europe), began to explode in development. It might have been inevitable, but the next point will show that the effect of the academics was not trivial.
What point? Transistors. The transistor was the first thing ever to have been made from scientific knowledge rather than from the huge base of engineering knowledge. Given infinite time, someone would probably have stumbled upon the creation of the first transistor anyway, but they would have been extremely puzzled, and completely unable to exploit it.
I suppose this would be a good foundation to say that the “get it done right” camp is credible. I’m not saying that the “get it done fast” camp is completely wrong, but at least acknowledge the reason why the “get it done right” camp is so furious.
In fact, I will now show how the “get it done fast” camp is also correct. The story of Unix and Linux itself is an example. Unix embodied two conflicting ideals. One is “do one thing well”, which is compatible with “get it done right”. However, any reader of the “Unix-Haters Handbook” will know that the other Unix power is “get it done 80% now, 100% later” (and usually the remaining 20% never appears). Solutions to problems that take too long in gestation never make it past the evolutionary struggle for existence.
However, we must also examine the pitfalls of the “get it done fast” mentality. People with the “get it done fast” mentality have a tendency to rely on intuition, common sense, experience and brute force to find solutions to a problem. Common sense and intuition, however, are what gave Aristotle his “moving things, when left alone, naturally come to a stop”. This blatant violation of Newton’s First Law is a very simple, everyday observation that nevertheless let practitioners of Aristotelian mechanics find solutions to their problems (by letting them do whatever they pleased, bounded by evolution).
If the GIF scandal is not a good enough example, let us examine the monstrosity that is Flash itself. Do remember that the H.264 and Theora debate centres on the dethronement of Flash. We chose Flash as the de facto standard because it was the earliest solution to the web animation problem. It was clearly a win for the “get it done fast” camp.
Or maybe we have to talk about Skype.
As the genius psychologist who predicted WWII said, the solutions to these problems are clear and well known. It is just that people do not want to accept them. In hindsight, it is clear that we would not have had to deal with Flash if we had all had a bit of patience and not eagerly jumped onto proprietary bandwagons.
So, what do we really have? We have a scale between the “get it done fast” camp and the “get it done right” camp, and since both are ideals, I shall redefine “idealistic” as describing a person who holds extreme views on the topic at hand, adhering to a chosen ideal. That is, anybody at either end of the scale is “idealistic” because they truly and closely believe in the ideal they have chosen, and we know from history that neither ideal is conducive to progress, let alone practical.
Additionally, I shall argue that fence-sitters, those irritating fellows who think that being exactly in the middle is “pragmatic”, are equally idealistic, because their ideal is “I shall just be in the middle in any debate”. If you project the scale onto a U-shaped parabola (as these things usually tend to be), then the middle is the bottom, the extreme lowest point. They too are “idealistic”. These people tend largely to be useless in solving problems (this sentence is pure opinion, with no facts backing it), but they include many brilliant people who are very good at utilising solutions from both ends (and across many different fields) for any problem.
What, then, is “pragmatic”? I would suggest that anybody who calls himself pragmatic has to reconcile both camps, landing close to the middle, but never exactly in the middle.
For example, Linux is a kernel in the “get it done fast” camp, and it has succeeded in migrating towards platform independence, something that Linus Torvalds himself admitted he did not foresee (kudos to his sensible leadership).
Another example would be William Kahan, father of floating point, who found a sensible-yet-fast solution. (I mean to say that the solution is as clean as it could be, yet can be executed fast, thereby conveying priorities. Also note that his work is so advanced that even James Gosling and Stephen Wolfram got it wrong in Java and Mathematica.)
So, how does this tie into the H.264 and Theora debate? I will venture to say that the “get it done right” camp is shouting loudly at the “get it done fast” camp: “We should not make the same mistakes again! This is GIF yet again! We just want a solution that always works no matter what, so give us Theora! No matter how rubbish it is right now, it is the only competitor that currently has remotely enough momentum to fight H.264, so let it be!” and the other is shouting back: “Oh come on, you are hindering us again! Quit bothering us, you nay-sayers!”
I am on the Theora side right now, simply because we are asking for survival. We are not asking for the exclusion of H.264 (Mozilla, being tri-licensed including GPL, will not be able to include H.264). Hey, we have been listening to you, can’t you listen to us for once?
I mean, this is like cells. Biologically, it took so long for prehistoric micro-life to finally organise itself into multi-cellular organisms because there was no real standard. No organism had the right chemical makeup, and all of them had different energy sources. It was not until ATP became dominant, and then oxygen became really abundant, that carbon-based life took over. Hence the cell includes that peculiarity known as the mitochondrion: our cells simply swallowed mitochondria instead of producing their own optimised oxygen power chain.
Then, the cells themselves became part of organisms. Notice how close it is to the evolution of computer “life”; compiled programming languages became the power chain of everything else, and operating systems took advantage of the similarity of everything to form multi-tasking computers.
Thus, I beseech the “get it done fast” camp to realise the massive and real utility of the free software movement and allow Theora to become the fundamental baseline of video on the web. We don’t mean to deliberately destroy H.264; any harm to it is just a side effect of allowing the other formats an actual chance to compete. Net neutrality, anyone? Hello, Google?
But in any competition based purely on technical merit, H.264 destroys Theora. This isn’t surprising due to the fact that H.264 is designed based on the current state of the art of mathematical compression techniques. Unfortunately, those techniques are heavily patented so that one group effectively controls the application of the latest mathematical algorithms to video compression.
There really isn’t any simple right or wrong answer here. In fact, from a different perspective, your “get it done fast” and “get it done right” camps can be completely reversed in that the idealists could be those that favor the technologically superior solution and the pragmatists those that favor the open solution. It simply depends on how each individual defines their ideals.
Regardless, I do agree with you that Theora should be standardized as the baseline codec for the web with others being optionally supported. The quality improvements that have been made in the latest Theora encoders have been impressive and I see no reason why it wouldn’t be a suitable codec for user generated content sites like Youtube or for videos embedded in blog posts or personal sites. I would still prefer Silverlight with H.264 and adaptive streaming for things like Netflix where I may be spending longer periods of time watching HD video. But I believe that these technologies can exist together in harmony and that having Theora as a baseline means that everyone has the opportunity to participate in web video.
Thanks for the interesting article.
Then find a group of mathematicians who will work with you to produce better solutions.
I think it’s a shame for the US legal system that someone can actually patent math. What if someone patents the algorithm for solving second-degree equations?
You’re right (I assume, not being an expert in these things) that H.264 is technically superior right now, but I think it bears repeating that the nature of a standard demands an open, collaborative solution. Such standards are inherently more agile because everybody is focusing their energy on making their implementation the smoothest, the fastest, and the most user-friendly.
Theora may not be the best there is, but I’m confident it will be if it’s made the standard.
As the author said (in a very roundabout way): we need only look at history and avoid most of this debate. The web itself is an excellent example of how open standards create ecosystems and marketplaces. Where would we be if we all settled for the proprietary superiority of HyperCard instead of Tim Berners-Lee’s simple, open implementation of hypermedia?
Support both video codecs. Game over.
That’s always been my problem! Sure, Theora is inferior now, but its license allows it to be improved or adapted. The real issue is that half the browser makers specifically REFUSE to include Theora or Vorbis… it has something to do with technology, but mostly to do with licensing and money. They have paid off the patent cartels, and shifting everybody to FATEX, h.264, MP3Pro and JPEG 2000 puts a nice “glass ceiling” between the deep pockets and the garage kids.
Effectively, we’re returning to the 1980s, where each system was locked down to playing only “approved” software (locked at the time to physical chips), and while hackers were tolerated then, they’ve been legally locked out now. Apple is making a killing on locked-down iTunes and iPhones. Microsoft’s only interesting stuff is Windows Mobile 7, Zune, and Xbox… all locked down too. Nintendo is locked down, Sony is locked down as much as they can be without going out of business. Give it another 2-3 years and Linux/Open Source Software will be so cornered by the DMCA, patents, and ACTA that it will be its own “closed system”, because the ability to interface with any other system will be legally locked down.
This whole thing is about big players choosing to lock out “free” competition. This is about true colors: the “club” doesn’t mind using Open Source all day, but if they have an edge like h.264 they’re not going to budge.
No, it isn’t. Not for use on the web.
Precisely so.
What gives with that? Since including Theora would cost nothing, and benefit people, why not?
I’d suggest this list covers the entire reason:
http://www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx
You will note that “the people” are not on that list. It turns out, strangely enough, that a HUGE majority of interests are not on that list.
The real question is, why is this even a question? By far the majority of interests are better served by having Theora as the codec for the web.
Supporting both codecs would definitely be the best solution.
But, as H264 is incompatible with free software licenses like the MPL, it will have to be supported as a downloadable plug-in, available only in countries which don’t allow math to be patented.
In countries which allow patents on math (like the USA), H264 plugins will need to be paid for.
On the other hand, providing support for Theora in a browser costs next to NOTHING, so I as a user want to know why so many browser makers outright refuse to include it. If a feature is there, and users actually don’t use it, so what? This feature makes the browser better, why not include it?
So at the end of the day, MS and Apple couldn’t care less about what mozilla does. Neither have major web properties that they need to support, and both of them are major browser vendors. It doesn’t matter what mozilla does, it doesn’t matter what the w3c say, it doesn’t matter how adobe reacts.
One thing that would matter is what way google goes. If IE/Safari do not support youtube, they become completely irrelevant very fast. That is something that matters, and something Apple and MS care about.
But again, that is just a side note, because google has stated that Theora performance to quality is unacceptable for youtube, so again, at the end of the day even this is irrelevant.
What is going to happen is that the only way you can encode once, get great performance/quality, and deploy once is Flash. So people will use h.264 video when they want to support iDevices, and Flash when they want to support everything.
“Get it done fast” can be replaced with “Not completely ignoring the reality of the situation”. If you could wave a magic wand and change reality, that would be great. But at the end of the day, we have to deal with things the way they are, not the way we want them to be. If you don’t do that, you will sit in a room with the rest of the “Get it done right” people, while the rest of the world promptly forgets about you and moves on.
Browser makers should include out of the box Theora support.
Why? Because it’s gratis.
If web sites then decide to use h264 instead, their situation is exactly as it would be without the browsers including Theora support.
But it would at least leave the way open for all other web developers to display video that everyone can see.
Here is the actual pragmatic angle you mean to talk about:
The supply of cheap and powerful h.264 is plentiful. The demand for it is plentiful. Google may push a competitive alternative, and h.264 cost may go up. Nobody important cares about Theora (yet).
I am still hoping every day to see google drop the bomb that they are putting out a royalty free license for VP-7 for use on the web, and that will be what youtube uses.
http://x264dev.multimedia.cx/?p=292
When the iPhone was introduced, Google (who back then had a good relationship with Apple) re-encoded all its YouTube videos with h.264 so that the iPhone could play them without Flash support. So it will be much easier for Google to support h.264 (because the files are already h.264), while supporting Theora would cost them much money and effort… and I don’t think they will, since YouTube isn’t profitable (yet).
Not sure if April Fools.
But if so, well done!
Can you explain how Mathematica got floating-point wrong? I use it regularly and was not aware of any problems with its representation of numbers.
I applaud the article and agree with much of your argument but the key factor being overlooked is getting the thing done at all, or at least in a timely fashion that doesn’t stifle momentum.
Let’s look at Linux for a minute. I’ve been installing linux-based servers as my preferred server platform for over ten years. During that time we’ve constantly heard that linux is gaining momentum on the desktop, that linux is now as easy to use as (or easier than) Windows and OSX. Yet anyone who looks at all the components of owning and using a computer and is then totally honest with themselves has to acknowledge that linux is nowhere near close to being ready for the mainstream desktop. Driver issues, dependency issues and software availability issues all prevent this from being the case.
I know at this point some will target OSX and cite the same issues. The difference with OSX is that it doesn’t claim to be an OS for generic hardware, linux does.
So although linux is a fantastic product in its varied forms, it hasn’t EARNED the right to be a mainstream (or standard) player. And I believe Theora falls into the same category. LETTING Theora become the standard rather than it EARNING that position would be like putting a kid on your track team for this weekend’s meet who has raw talent but hasn’t learnt how to utilise that talent, ahead of a kid who maybe doesn’t have quite the raw talent but has worked hard and is currently outperforming the other kid.
There is no debating that in an ideal world we would wait for the “right” solution to come along, but this isn’t an ideal world and never will be. Consumers want things NOW and therefore businesses have to deliver NOW or disappear into oblivion, so they will use the tools that allow them to deliver in a timely fashion. The reality is that Theora is not ready to be a “standard” now. The Ogg container format has issues, you just about have to be a video expert to get any decent quality using it, there is no hardware acceleration for it, and there is nothing compelling about using it over H.264. “It’s free” just doesn’t cut it in the real world, unfortunately.
However, the HTML5 standard is still off in the distance; for the moment we have only pseudo-standards and proposed standards. So Theora has time. But let there be no doubt, nobody is going to just LET Theora be the standard. That is a position it has to earn. If Theora wants to be the standard it has to step up to the plate. And unfortunately it’s going to have to be BETTER than H.264, not just as good, because there is already a significant investment in H.264 technology, so Theora needs to be a VERY attractive proposition if it’s going to pull companies away from that existing investment.
We all know Theora has a lot of potential, but there’s an old saying that goes “lots of potential and 50 cents will buy you a cup of coffee” (shows how old it is, $3.50 for a coffee these days, ain’t inflation great). There is no doubting it would be great to see Theora topple H.264, but it isn’t going to just happen.
This entire argument hinges on the reader accepting the idea that there is a competition for Theora to win… This is completely and utterly false at the moment. There is no competition at all, it is currently a one horse race.
H.264 is a very good codec, it is arguably better than Theora, it is heavily used currently, AND it is completely unsuitable for standardization by the W3C. It was disqualified long ago and has absolutely no chance of ever becoming part of any W3C standard. It simply CANNOT be standardized, it is quite explicitly not allowed by W3C rules.
Standardization of Theora is not LETTING it become a standard, it is judging it on its merits and comparing it to its competition – and H.264 is NOT part of its competition… It has already, for the most part, won this competition.
The issue at hand is whether or not there is ANY standardization on codecs, not WHICH one is chosen. There is no point even talking about H.264, it is no longer part of the real debate. If the W3C ever decides to standardize on a required codec for user agents, it WILL be Theora – the only way it could possibly be H.264 would be if they signed the patents over to the W3C, and that will never happen.
H.264 is a defacto standard NOW – it will likely remain so no matter what the W3C does. No one with any real understanding of the issues involved sees that changing. Everyone keeps bringing up this straw man argument about “whatever Google uses for Youtube is all that matters”. That is bull. This is not an issue of mindshare.
It really doesn’t matter if 90% of the internet decides H.264 is better and uses it. There is no loser between H.264 and Theora, they are fighting different battles. Standardizing on Theora would be a benefit to all users who want to publish a video on the internet, whether they use it or not. It would offer a choice that could never exist if it was left to market forces alone. THAT is why it should be standardized.
IMHO Theora does nothing to fight patents. On the contrary.
It’s a tool to confirm patents by treating them as legitimate road blocks.
IMO that’s wrong. Patents are wrong. They need to be abolished altogether. Patents serve only a few people, but harm many (with patents on life-saving medicine and even food being the most inhumane use of patents).
Instead of fighting flame wars which format is “better”, we all should use that energy to fight the patent system as a whole.
Once patents are abolished, we can just use ffmpeg in every browser to just support all video formats.
Patents serve as an incentive to develop those medical technologies in the first place. People don’t want to spend years developing an idea if some corp can come along and just take that idea without any compensation.
If you eliminated patents you would see a drop-off in new technologies. People have better things to do than provide free R&D for large companies.
Is there any law that mandates that medicine (at least the kind that saves lives) has to be researched by for-profit corporations?
Medicine could just as well be researched by the UN and everyone could then produce it.
The rise of Free Software already proves you wrong. Only a few years ago many would have argued that it’s stupid to invest time and possibly even money into FOSS, because everybody can just take it (NVidia still thinks this way).
Aspirin is still a huge money maker for Bayer, even though the patents expired many years ago.
Competing corporations cooperate in FOSS projects. They can still make money.
Pharmaceutical companies could just as well do joint research.
Whatever a world without patents might look like (non-profit research by the UN or joint research by for-profit corporations), the world won’t end.
More correct would be “USA citizens should fight patents”. Last time I checked, we didn’t have software patents in the EU.
No. I meant ALL patents.
Human civilization advanced for millennia without any patents at all — the wheel, paper, or lenses have been invented in a world without patents.
In the scope of human history, patents and copyrights are a relatively recent invention.
It is true that all those items were invented without patents. It is also true that times are very different now than what they were then.
Take for example prescription drugs. To develop a new drug and jump through all of the FDA hoops to bring it to market costs over a billion dollars. No company is going to make that kind of investment without patent protection.
Software patents are a different story. That is a case of patent law being applied to something that should be covered by copyright law.
What on Earth? I could disagree with any number of the examples you used in your attempt to make some kind of point.
But… in the end, I’d like to see Theora go somewhere.
To MechR, I wrote this and submitted it long before, and I don’t know why it took so long for it to be published.
Nonetheless, in reply to cb_osn’s post, I agree that the idealist and pragmatists can be reversed as you have said. Which is my point — both sides are just as extreme in views as each other, and in that I mean both are idealistic and unrealistic.
To google_ninja, would you be happy for the main engine of growth on the internet (by that I mean Mozilla Firefox, for its advances over the past 6+ years) to be left out of the game? I mean, we already know that the GPL is incompatible with H.264, and we know Mozilla is tri-licensed including the GPL, so Mozilla will be adversely affected if everyone goes H.264.
Also, other than Google, who has a big impact, the W3C also has a big impact. If they do not choose to make Theora the baseline, we run a small risk of fragmentation or monopolisation. We are talking about a baseline, not a default standard, which are two very different concepts. For example, nVidia graphics on non-x86 platforms have to make do with VESA. Although it is a shame to lose 2D acceleration, it is better than not having any output at all. We are arguing that all websites should be required to offer at least Theora output, despite its lousiness, so as to provide a minimum level of service.
To foobaz, William Kahan, the father of floating point, deliberately introduced directed roundings so that anybody really interested in precision maths can use them to form 100% confidence intervals on errors. Mathematica and the like do not use directed rounding (which may still be excusable), but they sometimes also use numerically unstable algorithms, which is yet another esoteric concept. Basically, some algorithms can veer wildly when given infinitesimally different inputs; our derivative-based maths really does not work well when quantised. Please do read his work for the details, because he explains it much better; I am not really an expert on the matter and may well have got it wrong.
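To make the instability concrete (my own toy example, not one of Kahan’s): in IEEE 754 double precision, evaluating (1 + 1e-16) - 1 gives exactly 0 rather than the true value of 1e-16, because 1e-16 is smaller than half the machine epsilon and the intermediate sum rounds straight back to 1. An algorithm that repeatedly subtracts nearly equal quantities like this throws away its significant digits, which is how results end up veering wildly. Directed rounding is the remedy Kahan provided: run the computation once with every operation rounded towards minus infinity and once towards plus infinity, and the two results bracket an interval that is guaranteed to contain the exact answer.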
To Sergio, I don’t even know what to say in reply to that extremely terse sentence.
To Luminair, that is missing out on what the “get it done right” camp is talking about in the first place. We mean to ask “Will we even survive if we chose H.264?” and we get unnecessarily scary answers at the worst, or “history repeats itself”-style of answers. I would refer you to the “read more” button if you want the details.
EDIT: The replies! They are like a tsunami!
To mrhasbeen, I actually very much agree that a position needs to be earned, and that H.264 is better than Theora right now. However, my priority lies in fortifying my position in the eyes of the law, and definitely not in the rat race of the cutting edge. We are merely asking whether it is sustainable to always neglect safety in the pursuit of instant gratification. I mean, the US handling of swine flu was so bad that the Chinese, who were scolded by the US for mishandling SARS, were disgusted. I do not have an optimum solution, nor do I have consistent answers (my views on such things contradict each other), but I think my concern needs some answering.
In fact, like the GIF patent scandal, we are faced with a de-facto standard that is possibly better than all the other competitors in performance, but of questionable legality. Why should we relive the GIF scandal? Are humans damned to repeat history? If we are, why isn’t WW3 already under way? Should we not try our very best in mitigating the problem? I am saying that possibly, if we accept a slightly less optimal solution of Theora, we may allow other, more worthy competitors like Dirac/Matroska to get a foothold in the end.
To Fettarme H-Milch, that is another problem. However, you do not challenge the legal system by violating it. You are generally encouraged to abide by the system but instigate change (not instigate crime, though). (I don’t understand why we bother to follow this route, though.) Also, we do know that the evil parties involved have a lot of patience, so waiting for them to bomb us with the patent problems will take too long. We need to actively defend every position we have until we have already destroyed their position once and for all.
To Tuishimi, can you please not just state opinions and sign off? That is not contributing to the discussion, and opinion sharing, though necessary, is the lowest form of discussion we can engage in. Please at least state what you disagree with.
To let everyone cool off for a bit. We had a few of these in a row, so it made sense to wait a while before putting yours out there.
If the web is supposed to be open, then no standard should be applied that can’t be included in any browser for what ever reason.
I consider myself a realist. The idea of having an open codec is very appealing. If there was a ballot box where I could check one then Theora would be it.
Unfortunately if you bought a smart phone recently then like me you already voted with your wallet. My Droid doesn’t support theora, same as the iPhone, N1, Pre, etc. Web developers are very aware of this fact and it’s obvious what direction they’re going to go.
The h264 vs theora war was lost before it was even started.
I now see the situation as this; h264 should be made compatible with any browser in terms of licensing.
Whoever oversees web standards should really grow a pair and say that if h264 is going to become a standard then it should be compatible enough to be allowed in any browser, including open source browsers; if not, then Theora should be declared the standard.
This whole argument is because someone, somewhere is too pussy to say how standards on the web will work and instead we have companies with billions of monies to be earned making decisions on how it’s going to be done.
if you run android, chances are it does support theora.
http://code.google.com/p/android/issues/detail?id=4026
HTML, SVG, hyperlinks, TCP/IP etc … these are all technologies that are able to be used by, and implemented by, anyone for any purpose at any time on any platform for no cost. No royalties apply.
This fact is the very basis for the success of the internet.
The video codec for use on the internet should be the same. It is no different from any other part of the web technologies.
Theora is a codec that can meet these criteria. For use on the web, it currently performs as well as h.264, despite the incessant claims by many to the contrary.
Because it meets requirements in both aspects, insofar as its performance AND its terms of use, there is no realistic alternative.
Theora is the only sensible choice for the video codec for the web.
Pragmatism AND realism.
In prior posts you have talked about the improvements, but where is the information? Where are the regular updates about what the developers are working on? For all intents and purposes, the lack of communication from developers to the great unwashed masses tells me that the project is dead, dying, or a rotting carcass. If you want people to support your arguments, how about passing that along to the developers behind Theora, so that people like me know that there is actually some development occurring? So far, from the blogs, the last bit of information was from over 3 months ago. Now what? What has happened in 3 months? What direction are you taking the project?
The one thing open source projects suck at is communication – the communication sucks so hard it isn’t funny. There are blogs – how about using them as a communication platform to show the masses that the technology is there and being developed, with a brief synopsis for each development of how it benefits *YOU*, the end user? So far I have seen no communication along those lines.
http://openvideoalliance.org/
http://videoonwikipedia.org/
http://corp.kaltura.com/
http://getmiro.com/
http://www.mirovideoconverter.com/
https://www.drumbeat.org/projects
https://www.drumbeat.org/project/open-video-60-seconds
https://www.drumbeat.org/project/webmademovie
http://www.open-video.org/
http://www.webmonkey.com/2009/06/how_firefox_is_pushing_open_video_…
For Mac users in particular, I have found a file that may be of interest:
http://people.xiph.org/~j/
The file is named:
Xiph QuickTime Components.zip 15-Dec-2009
This site is a preview, but it doesn’t seem to be actively pursued right now:
http://people.xiph.org/~j/apple/preview/
With limited resources, Xiph probably have actual work to do further improving the codec, and so they are leaving the PR up to other organisations such as the Open Video Alliance, Wikipedia, Miro and Mozilla Drumbeat.
But none of those tell me a single thing about what is being worked on, what will be happening in the future, or what is planned for Theora 1.2/1.3. It’s all flashy graphics, with nothing said about the development of Theora in terms of features being added, encoder improvements and so on.
I agree, the sort of communication you’re talking about is incredibly important. Projects can’t live without a community, and you can’t have a community without some kind of line between developers and people.
Especially when you’re as late to the game as Theora in the context of h264. h264 already has hardware support in many devices, meaning it draws less resources at similar quality levels. This is big for small devices like my little Atom box that runs my files/videos.
As a late-starter, Theora has a lot more to do if they want to catch up.
What would you expect from the other side of the fence? Some interesting news on how h.264 licensors were planning to fleece their next batch of victims, perhaps? Exciting new lawsuits in the making?
PR is PR. When it comes to PR, expect flashy websites with no hard tech details.
There are a number of sites opening up about how to put open video on the net:
http://blog.gingertech.net/2010/01/26/tutorial-on-html5-open-video-…
http://dev.opera.com/articles/view/introduction-html5-video/
http://www.slideshare.net/silviapfeiffer/html5-open-video-tutorial
http://www.html5video.org/
http://hacks.mozilla.org/2009/09/theora-1-1-released/
I’m continually surprised at how many people either utterly fail to understand this, or just completely ignore it.
Here is a fairly reasonable discussion:
http://diveintohtml5.org/video.html
For the most part, people don’t know the detailed tech, but one has to have a particular tech blindness to fail to realise that suddenly this year Theora is a challenge to h.264 where it never was before.
Even though it recommends encoding your video twice (once in h.264 and once in Ogg Theora), sites like
http://diveintohtml5.org/video.html
do give you a fair warning of the legal hazards of encoding in h.264.
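For anyone wondering what “encoding twice” actually looks like in the markup, here is a minimal sketch (the file names are made up for illustration): a single video element listing one source per encoding, so each browser plays whichever codec it supports, and the text inside is the fallback for browsers without HTML5 video support:

<video controls width="640" height="360">
  <source src="clip.ogv" type='video/ogg; codecs="theora, vorbis"'>
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
  Your browser does not support the video element.
</video>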
Theora remains the ONLY sensible, pragmatic AND idealistic way to have video on the web.
The minority holdouts who support HTML5 but refuse to support Theora (that being Safari, iPhone and Android) are just going to have to bite the bullet and conform to the majority (being Firefox, Chrome and Opera) who support HTML5 and Theora.
For the low power hand-held devices (Android and iPhone), this will probably mean a need to implement hardware acceleration of Theora decoding via GPU shaders.
The other side of the fence is the status quo; it is for you, as the underdog, to make the case. Again, you’ve provided me with a page of waffle that doesn’t address any of the questions I raised: What is planned for Theora 1.2/1.3? What enhancements are going to be made to the encoder? Will there be work to bring it to DirectShow and QuickTime so that people can encode in those applications? (If you say it is Microsoft’s/Apple’s responsibility to provide it, I’ll throttle you. You’re the underdog; it is up to you to provide the codec and win mindshare, then the vendor will take over when they see it worth their while.) Comparisons done on a regular basis, discussion of encoding features being added and how you as the end user benefit from them, explanations of why certain techniques cannot be used due to patent issues, and so on.
Again, give me links to websites addressing such issues, because all you have provided me, post after post, are links to websites that give me none of that information. You’ve failed to provide a single blog that is regularly updated (I define regularly as weekly) on Theora development. If you want to build hype, build momentum and build mindshare, then you need to get your act together and keep the end users and enthusiasts in the loop on what is happening in the world of Theora.
There is no such information published for proprietary code. You are unreasonably holding the two to different standards. However, in order to oblige:
http://people.xiph.org/~j/apple/preview/
(This is a preview site only; it is not ready yet.)
http://planet.xiph.org/
I found another one for you.
http://xiphmont.livejournal.com/
Why is it that you cannot do your own searching on the net?
PS: Here is a pre-release of Xiph Quicktime components with Thusnelda:
http://people.xiph.org/~j/Xiph%20QuickTime%20Components.zip
This is one of the first articles (here, at least) on this topic that has touched on a point which has bothered me for a while now: the idea that idealism and pragmatism have to be mutually-exclusive.
One fact in particular is often overlooked: pragmatism and idealism often have the same results. There are pragmatic reasons for web developers to (at the VERY least) push for more widespread support of Theora, keep abreast of the situation with H.264 licensing, etc.
Most of those who do commercial work with video on the web probably use H.264 (not to mention Flash) for pragmatic reasons. But I’d argue that any decent web dev also, for pragmatic reasons, makes damn sure that he’s familiar with creating video using Theora, publishing it on the web via the video tag, testing out things like Video for Everybody, ensuring his current workflow and toolset can handle Theora, etc.
I’d also argue that any web dev who doesn’t do at least that much is doing a disservice to his clients. And indirectly to himself, since paying customers probably wouldn’t be very happy about being led into a cul-de-sac.
There’s also one other BIG pragmatic reason to root for Theora: its existence seems to be just about the only thing keeping the MPEG-LA honest. Or, in their case, *less dishonest* than they would be otherwise.
You are missing the point of my article. I’m actually stating a few points, and concluding with a reason why I’m supporting Theora.
My points are:
1. The words “idealistic” and “pragmatic” are too abused! In this debate, each faction is calling the other “idealistic” while being idealistic itself. The word “ideal” has connotations of being unrealistic, as the dictionary definition suggests.
2. Because of (1), stop using those words. All sides must recognise that both Theora and H.264 are here to stay, just like how Israel will continue its existence in a land filled with Palestinians. Hence, both ideals are unrealistic.
3. I’m only urging people to allow Theora to survive, not to overtake. We _know_ Theora is lousy; we merely need an open standard to keep the door open for some other competitor to come in.
4. We even _know_ what is really pragmatic, and that is the acceptance of both views and coming to a middle ground. In fact, there are 2 ways, one is coming to the middle ground from Theora, and the other is coming to the middle ground from H.264. The middle ground the Theora group is proposing is just to allow Theora be the baseline standard. The H.264 group proposes Mozilla supporting H.264, which the Theora group says is difficult.
Are you responding to my comment, or just to its title? First, I wasn’t specifically addressing your article for the most part – but rather the general sentiment/attitude towards pragmatism here (in comments, in at least two articles that Thom has posted, etc).
And to the extent that I was specifically addressing your article, I was actually praising it for not repeating the idealism-or-pragmatism false-dichotomy.
Which I agree with. Much of the discussion here has ignored the fact that not everyone has the same set of ideals – to a usability expert, the ideal of “fostering freedom and open standards” will probably be less important than the ideal of “provide the best possible experience to end-users.”
If that standard were applied widely, then the majority of the English language would be off-limits. That also strikes me as a defeatist approach, ceding control over the usage of a language to that language’s least-literate users.
Which is not substantially different from the point at the end of my post (even if Theora had no other virtues, its mere existence is beneficial just by ensuring that H.264 isn’t the “only game in town”.)
A point which I don’t (and didn’t) contend.
I actually think this post is so good that I should make a reply.
I was talking about both the title and the post. We agree on quite a lot of things, but the next bit is where I shall have to disagree (a bit, anyway, not with everything).
Actually, this is a real problem, and my solution is not defeatist. Let me elaborate.
Ideal, whether or not the dictionary includes it, has connotations of being unrealistic, as opposed to pragmatic, which has connotations of being realistic (with which the dictionary also agrees).
From your original argument, if we replace the words, then “Pragmatism and idealism aren’t mutually exclusive” becomes “Realistic-ism and unrealistic-ism aren’t mutually exclusive”, which is really funny and should be immediately and obviously troublesome.
Of course, in many uses of the words, the dictionary will never be able to catch up. For example, our ideals can sometimes be achievable, like we can achieve “zero death and large scale construction”, in which case I really agree with you. Similarly, as you have said, pragmatic and idealistic approaches to a problem can sometimes provide the same answer, and people who realise that will wonder to no end why the world would have chosen another alternative and suffer.
But I will argue that my point still stands. Words should be unambiguously used in most cases, and where conflict arises, we need to tread extremely carefully, and not using the conflicting words is a very careful move to take.
Why should words be unambiguously used? Let us look into history. In Newton’s time, people used “momentum” for the ideas of mass, momentum, velocity, speed and whatever rubbish. Some other people also confused momentum with energy, or the “vis viva”. Galileo used nothing of sine and cosine, and wrote x/l and y/l for them instead. Then he would interchange the x and y, because he was not thinking in terms of cosines and sines. Reading his work is difficult until you have replaced them with the appropriate trigonometric functions. I suppose this shows why it is important to follow unambiguous conventions: Newton originally disapproved of the conservation of energy because, as he contended, the “vis viva” is momentum as he defined it (mv) rather than mv², not realising that his second law, F = ma, implies the conservation of energy via the proof known as the work-energy theorem.
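To spell out the step being referred to (a standard textbook derivation, not anything from Newton’s own text): integrating the second law along the path of motion gives W = ∫ F dx = ∫ m (dv/dt) dx = ∫ m v dv = (1/2) m v², so the quantity that applied work builds up is proportional to m v², not m v. That is the work-energy theorem.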
I have been reading the LessWrong blog, where Yudkowsky has been posting ideas about words. Where words have disputed connotations, replacing the words with the connotations often lets the dispute be resolved easily. Of course, some conflicts of definition cannot be resolved this way; for example, “abortion is murder because it is an act of taking away a human life” depends on “a human fetus is the start of human life”, which is not even agreed upon (i.e. if this were an axiom, the entire argument would become moot because the axiom itself is shaky). I mean, people are killing newborn baby girls right at birth! How do you even talk about abortion when such atrocities are happening?
Should I expand on this extremely long point? If you want me to, send me a message.
Otherwise, I shall argue why this is not defeatist. It definitely is not ceding control of language. Ceding control of language is when people simply shrug off the use of “their” instead of “they’re” when it is wrongly used. That is, being tolerant of the error and keeping quiet about it is why basic English is increasingly lost on Americans; the fault does not lie with people being so careful as to set aside the use of the word in question until we agree on its definition. When we are careful, we can first agree on what we agree on (that a falling tree with no observer generates acoustic vibrations but no auditory experiences, for the example on LessWrong at
http://lesswrong.com/lw/nr/the_argument_from_common_usage/
and in fact the entire sequence of blog posts on the subject is so good, and so relevant to this discussion, that you probably should read them, including the comments, which sometimes give more ideas than the blog posts themselves) and then we can happily decide whether “sound” should mean acoustic vibrations or auditory experiences or both, or whether we should simply coin two new words instead of the original mess. Now, surely, this is steering the use of the language, so how can that be “ceding control” or “defeatist”? I suppose the people who go “Oh, come off it!”, “We don’t need no grammar nazis” or “This ain’t no English class” exhibit exactly the type of behaviour that allows sloppy use of words, then creates the conflict of usage in the first place, and then allows the lousy usage to gain so much ground that it goes into the dictionary; that is exactly ceding control and would be, in your use of the word, “defeatist”.
Which is a point we obviously agree on. I mentioned it because I was in the middle of summarising my post.
Now, I would have to ask you to expand on this. Do you dispute the idea of working a solution from one end to a possible middle ground or do you dispute the solution proposed? Whatever you dispute, please give an alternative solution.
I would think that a solution that tries to incorporate the ideas from both sides is pragmatic in the realistically-possible sense (although if stated only this way, it is itself a middle ideal).
Of course, if you disagree with the actual solution I gave, that is fine, but at least please state your version, so we can compare.
EDIT: What is with the blockquotes disappearing?
First they take the time to trash Theora, then they buy out On2, and then… nothing? If an open internet is so damned important to them, why are they not stepping up to the plate and opening VP8 or something like that?
If everyone who has a different opinion about any subject is going to make their own article…
…but looking at the comments, I wonder if some people actually read the article before copy-pasting their previous comment
Seconded. I’m deliberately missing out the other chunk of debate in my replies because this is not an article to induce debate. It is a summary/philosophical discussion piece.
Theora as if it is a stupid ideal that is useless to consumers
Well, if Theora is useless, so is Linux and every FOSS.
http://www.petitionspot.com/petitions/oggandyoutube/
This petition needs your support. Why should you sign it?
http://lockshot.wordpress.com/2009/07/30/whats-the-problem-with-ogg…
H.264 AVC does not comply. Theora does.
Theora is the only competitive codec that is suitable for use as the web standard video codec.
Be realistic. Youtube was founded on the proprietary Flash, along with the formats Flash supported at the time: H.263. Guess what? They made millions (off of Google) from it!
Would YouTube have become popular if they started out using Cortado applet+Theora? Nope. If they weren’t afraid of proprietary formats 5 years ago, what incentive do they have to try Theora now? To look good in the eyes of the community? Riiiiiiight.
http://www.streamingmedia.com/r/printerfriendly.asp?id=11011
Apart from the mere money, there is the matter of CONTROL. If h.264 becomes the only standard for web video, then the MPEG LA licensors will have control over which parties may, and which may not, serve video on the web.
Theoretically, MPEG LA already has control over who may, and may not serve h.264 video over the web, and last year MPEG LA were making moves to begin to really exert that control in 2011. However, h.264 is not yet established enough as the web standard. There is a lot of strong pushing and lobbying going on to try to force the issue, and a lot of people are quite fooled by it all (as apparently you yourself are), but MPEG LA have not yet established a stranglehold of control over video on the web.
If they eventually do gain such control, then this list of companies:
http://www.mpegla.com/main/programs/AVC/Pages/Licensors.aspx
will gain control over who may or may not compete in the arena of serving video over the web.
That is not exactly a list of “friends of Google”.
Google cannot afford to let this group get a position where they could put Google out of business.
That is why Google are looking for an alternative to h.264.
Right now, Theora is the ONLY competitive codec that meets the terms of use required for a video codec over the web.
http://lockshot.wordpress.com/2009/07/30/whats-the-problem-with-ogg…..
H.264 does not comply with these requirements. BTW: Why are YOU unable to get realistic, and acknowledge this point?
YouTube will come around, re-encode their videos in Theora, and serve them via HTML5/Theora. In order to keep IE users, they will offer a HTML5/Theora player plugin, just as they do now offer a plugin for Flash.
In fact, Google already have just such a plugin available:
http://www.google.com/chromeframe
http://code.google.com/chrome/chromeframe/
Google may, or may not, reduce the scope of this plugin to just HTML5, but really, IE users would be better served by the entire plugin as it now stands.
Apart from the ‘mandated video tag’ codec debate, how is YouTube’s choice of video format now any different from 5 years ago? They didn’t care who controlled H.263+MP3: they paid their dues and made money from the formats. Some people like reinventing the wheel, whilst others make use of the existing technology to meet their goals. ‘Control’ sounds a bit egotistical.
You are making it seem as though the MPEG-LA is a cabal of companies waiting for websites to use H.264 so they can then jump on them with heavy licensing fees. Whoever owned the patents on H.263 didn’t do that when flv (H.263+MP3) became the most popular video format at the time. Even when flv became popular, there were video sites using other formats, most notably DivX, which meant that flv wasn’t the only video format on the web. Just as websites used H.263, VP6 and even DivX to stream video, nothing is stopping them from using other (cheaper) formats in the future. Plugins haven’t been removed from the HTML standard.
I keep seeing you use the term ‘friends of Google’. Streaming video online is a business. Companies aren’t out to make friends: they are out to make money. As long as Google is paying its dues, rival companies don’t care. A quote attributed to John D. Rockefeller:
And spent $124.6M on On2? I think that is a bit of wishful thinking on your part.
Exactamundo. Precisely.
2011 was going to be the year, but MPEG LA had to back off from that plan when Theora became competitive.
Plugins are not any part of the standards in the first place.
Rockefeller is sadly out of date. Patents these days are used to put other companies out of business, and out of the competitive market, more than they are used to extract royalties.
Google bought On2 when Theora wasn’t competitive. It is now.
We ARE talking about “pragmatic” in this thread, are we not? HTML5/Theora, in conjunction with Google Chrome Frame (or some part of it, anyway), is an ENTIRELY pragmatic solution for Google/YouTube. TODAY.
As I point out, Google (who own YouTube) already have the plugin for IE. They don’t even need a third-party plugin (such as Adobe Flash); Google already have their own plugin ready to roll. Google’s plugin gives IE far more additional capability than Adobe’s Flash plugin does.
This gets Google entirely out from under the threat of any possibility of other companies being able to force Google out of the web video market. From Google’s point of view, it is worth doing.
As for the cost of converting the videos, that isn’t a biggy. openvideo.dailymotion.com have already converted over 300,000 videos to Theora. All it takes is compute power and a bit of time, and Google have plenty of the former and by now have had plenty of the latter as well.
So why is the embed tag a part of the HTML5 spec?
Embed was deprecated in HTML4, but mandated for HTML5. So any browser claiming to support HTML5 must support the embed tag. What is the embed tag used for?
It is used to embed multimedia in a webpage.
Like this for example:
http://people.mozilla.com/~prouget/demos/DynamicContentInjection/pl…
It is not used to embed plugins in a browser.
You obviously don’t know HTML. Embed is used for plug-in content, such as Flash, Java, PDF, and any other format that isn’t native to the browser. The source code of the page you provided does not even use the embed tag once!
embed is not deprecated in html 4, it isn’t in html 4 at all, nor is it in html 3.2 or html 3. The only html doctype it appears in is html 2.1e from the IETF.
object exists in html 4 for the purpose of embedding plugins.
Embed may have disappeared from the HTML 3.x spec, but it had better support (fewer bugs) among browsers at the time of the introduction of HTML4. It is still widely used today, especially when targeting non-IE browsers.
Check out YouTube’s Embed Video code, for example: it uses an outer object tag for IE, and an inner embed tag for other browsers.
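Roughly, that double-wrapped pattern looks like the sketch below (the URL, VIDEO_ID and dimensions are placeholders for illustration, not YouTube’s exact snippet): the outer object element and its param children carry the movie details for IE, while the nested embed covers the other browsers, exactly as described above:

<object width="480" height="385">
  <param name="movie" value="http://www.youtube.com/v/VIDEO_ID" />
  <param name="allowFullScreen" value="true" />
  <embed src="http://www.youtube.com/v/VIDEO_ID"
         type="application/x-shockwave-flash"
         allowfullscreen="true" width="480" height="385">
  </embed>
</object>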
The threat of that happening with H.264 will always loom over the internet. You give an example where it did not happen, but I can give examples of where it did. No one is saying there WILL be a money grab, but the more popular H.264 is, the more likely a money grab becomes. Going with a royalty-free codec is simply a guarantee that it WON’T happen… Theora merely existing as a choice is insurance against that happening, even if it is not heavily used. THAT is why I support its adoption in the standard: Theora being made a default codec would keep the MPEG-LA from getting too greedy.
Not for everyone. For some people being able to post a video on their website is merely an incidental activity. Consider the following scenario:
You operate a website that offers paid-for content, and within that content you happen to offer a few videos in H.264 format that are only available to your subscribers. Video is not your primary content, but you do have a few videos… You have 150,000 subscribers; your site is very popular. Under the CURRENT (not future – this is now) MPEG-LA terms you could be required to pay $25,000 per year for posting those videos (even if no one viewed them). If you have a million subscribers it is $100,000 a year. You can read the terms here if you want:
http://www.mpegla.com/main/programs/AVC/Documents/AVC_TermsSummary….
It’s right there in black and white on page 3:
Is that a completely contrived example, very likely outside the realm of possibility? Yes. But the terms DO allow them to pursue royalties in such a scenario, and they arguably could win such a lawsuit. What does “pay directly for video services” mean legally? User pays subscription, site offers video, 1+1=2, Bingo! You have a video service… Pay up, buddy.
And going back to the whole “we promise not to screw everyone” bullcrap… The point is that the promise of not charging royalties for publicly available content is just that – a promise. It is not legally binding in any way. They can change their mind whenever they want to. I have no intention of charging for my content, so my content would be “safe” using H.264 – for now… But I’d much rather KNOW it WILL be safe FOREVER. And all the technical bullsh*t as to which codec is _better_ is completely meaningless to me – I care about the video actually working in my visitors’ browsers well enough that they can enjoy it; that is all that matters in the end.
Think about it…
Maybe it is a bad example, but from the quotation you took from their license agreement, the end users are paying for video services. But regardless, if you have 150,000 subscribers, I’m assuming they are paying for something (besides video). Considering that internet bandwidth is not cheap, and you are only using a few videos, I can rationalize a situation where you use some other codec.
However, from looking at those figures, the most they demand is $0.25 per subscriber per year – roughly 2 cents per subscriber per month. A simple ‘fix’ would be to add 3 cents to your monthly subscription fee. I don’t see how that is draconian.
It was merely meant as an example of how a website could unintentionally violate the H.264 license without realizing it. No, the fee isn’t draconian, but that isn’t the point. The point is that a website such as this can easily not realize they were supposed to be paying royalties…
What if five years go by, and the site has been basically breaking even the whole time? All of a sudden, for some reason, MPEG-LA notices them and decides to take them for a ride. A lawsuit could mean $125,000 in damages, for nothing more than the existence of a few videos that are not even core content. Many sites start out free and convert to subscriptions – maybe they don’t realize the legal implications until it is too late…
My point was not that H.264 is too expensive. If you are primarily a video content site, then it may be worth the price to you to use it – I’m not arguing that. But there is a VAST number of sites where it isn’t and will never be worth it to even have the possibility of being sued, because the technical differences between the two codecs just don’t matter AT ALL. Theora is very attractive for those sites, which mostly consist of the very kinds of websites that MPEG-LA would never make any money on anyway (they’re too small to monetize). That is why I keep saying this isn’t a competition – Theora represents a REAL and USEFUL alternative, if only the W3C would standardize it to give it equal footing.
I don’t know how it works in your country, but in mine they say: ignorance of the law is no excuse. If you are running an operation which has over 100,000 subscribers, and you don’t have lawyers making sure that all of your I’s are dotted and your T’s are crossed, and consultants to advise you (on operating costs), you really shouldn’t be running a business.
I think it would be totally unrealistic for someone to build a website with 100,000+ subscribers, and not make sure that their operation is legit.
Wow. I generally got the impression that you were making a rational argument up to this point. What law are you referring to? The MPEG-LA licensing agreement is not a law; it is merely some legal mumbo-jumbo buried in a PDF file on their website. You do not have to read it or even know it exists in order to buy one of the many products that encode video using it. It is entirely conceivable that a well-meaning user would simply not know it exists! I would venture to say that most people are completely oblivious to it. I’m from the US, if it matters.
You are very naive about how the internet actually works, I think. There are hundreds if not thousands of sites on the internet with well over that many subscribers that charge money for no other reason than to cover operating costs – it isn’t a money machine to them – they charge money because they have to pay the bills to run the site they love, and their users would rather pay a few dollars a year than deal with advertising. It’s called micropayments – lots of sites use them, and it is not even remotely a new concept.
Your point of view requires an obscene profit motive to make sense. If you have 100k subscribers and you charge $5 a year, after you pay 3 or 4 staffers and the hosting bills there would not be much left… I hate to break it to you, but just because you charge money for something does not mean you are in it only for the money. A business with $500k in gross revenue and, say, a 10% profit margin ends up with what, $50k? You think they want to spend it on bloodsucking lawyers? Please… You are completely out of touch with reality; a site with 100k subscribers in no way implies a big business with money to burn nowadays.
And I think it is a helluva lot cheaper to use Theora than to hire lawyers…
Actually, I was thinking of sites like Netflix when making my point. Your example consists of users who make voluntary donations; I wouldn’t consider them to be subscribers if the same content is available to everyone else for free.
The only video websites I know of which would fit into your category are anime websites; you can enlighten me about others. About 5 years ago, they hosted files on their own servers. Currently, the majority of them have outsourced video hosting to websites such as YouTube, Megavideo or even Facebook. Why? Because it’s cheaper. So unless the MPEG-LA comes knocking at your door for streaming YouTube, you have nothing to fear.
Most websites/companies I know which hosted ‘premium content’ on private servers have migrated to Content Delivery Networks (CDNs), such as Akamai or Limelight Networks, and even so they charge ~$5 per month. Why? That’s cheaper than the old model of private hosting and you have redundancy. From what I saw at cdnpricing.com, the most you’ll pay is $0.45/GB if you use <50TB; it gets cheaper the more terabytes you use. Those companies outsource their ‘free content’ to websites like YouTube.
Whilst CDNs are cheaper, especially for high-volume traffic (100,000+ subscribers?), you can’t seriously run that operation (video hosting) on a $5/yr subscription fee, which is why most sites have a recurring monthly fee option. Consider adding it if you run such an operation.
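To put rough numbers on that (a back-of-the-envelope illustration only; the 2 GB per subscriber per month viewing figure is simply made up for the sake of the arithmetic): 100,000 subscribers streaming 2 GB each is 200,000 GB a month, which at the quoted $0.45/GB is $90,000 a month – call it $45,000 with a generous bulk discount. A $5/yr fee from those same 100,000 subscribers brings in $500,000 a year, or roughly $42,000 a month. In other words, delivery alone can eat the entire subscription income before you pay a single staffer.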
Five years ago there was no HTML5, and in order to get the user experience Google wanted, a Flash video player was the only choice. Now that there are better options that give a comparable user experience, of course Google wants to change.
What HTML5 development tools are available that can compete with those available for Flash now? How many times has the YouTube Player’s interface evolved over the years? Any chance that can be done with HTML5 now?
Using a video tag on a website doesn’t automatically give the user a better experience.
http://newteevee.com/2010/03/18/kaltura-launches-html5video-org-pub…
The combination of HTML5/Canvas/SVG and a fast Javascript JIT compiler is entirely as capable as Flash.
If you have a decent browser, check out some demos:
http://www.html5video.org/demos/
If your concern is the UI, try the themeable player demo:
http://www.kaltura.org/apis/html5lib/kplayer-examples/Player_Themab…
or the video mashup demo:
http://people.mozilla.com/~prouget/demos/mashup/video.xhtml
or the dynamic content injection demo:
http://people.mozilla.com/~prouget/demos/DynamicContentInjection/pl…
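For anyone who doesn’t want to click through, here is a minimal sketch of the idea behind the themeable player (the file name is made up, and a real player would add seeking, volume, buffering feedback and proper styling):

<video id="clip" width="480" height="270"
       src="http://example.com/movie.ogv"></video>
<button onclick="toggle()">Play / Pause</button>

<script type="text/javascript">
  // The control is an ordinary HTML button driven by the HTML5 media API,
  // so it can be styled with CSS like any other element on the page.
  var clip = document.getElementById('clip');
  function toggle() {
    if (clip.paused) {
      clip.play();
    } else {
      clip.pause();
    }
  }
</script>

That is the whole point: the player chrome is just markup, CSS and a little JavaScript, which is essentially what the demos above dress up.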
I am a Theora supporter in this debate, but I would like to request that in the future the author of this editorial refrain from writing further on the subject. Nothing novel was said, and many things that were said are exploitable by the other side (i.e., can be used to prove the opposite point). This long, confusing ramble of a rant can only serve to muddy the waters further, and that will only harm the cause.
I’m trying to get to the middle ground, so naturally the points are exploitable. However, I can see the main posters in the comments section do not even bother to read the article.
Also, I’m trying to say that both sides of the debate are abusing the words “idealistic” and “pragmatic”, and to suggest that people simply state their point instead of hiding their intentions behind these words.
It is long, but not intended to be confusing. I supposed a bit of philosophy would not be too confusing, but apparently it can be difficult to grasp. I mean, I’m relating the situation to existential risks, which is not novel, but is not always reiterated.
And, this is also a summary piece, so I wasn’t trying too hard to add novel ideas.
I’m not going to pick apart your article and point out the places where it is weak to the point of being hard to follow, but I will say a few things on this point. A writeup such as this should have a structure that eases the reader through it. It should know the point or points that are being made and make sure that each statement is used in furtherance of a specific goal. Perfectly true, factual and useful information, analogies and insights might be better omitted if inclusion distracts the reader from the apparent inevitable progression from premise to conclusion.
I second that. Personally, I had to wade through this article, but I persevered.
Much of the debate comes down to how you answer some questions. I don’t think any of these have definitive answers, but how you answer them changes which side you think is right.
1. What are the chances that Ogg Theora and Vorbis can become widely available (as widely available as say flash, or maybe even JPEGs)?
2. How much does it matter if a legal open source implementation can be created in countries where software patents exist?
3. Does a 20% or so increase in bandwidth for a given quality of video and audio stream matter?
4. Does paying for a license for H.264 matter? (i.e. Fluendo, as part of Windows or as part of Mac OS)
5. Is Ogg Theora and Vorbis support already a lost cause?
6. Does using the same codec on Blu-ray players as on the computer matter?
7. What is the chance that there are valid patents that read on either Ogg Theora or Vorbis?
I’ll try to offer some answers to your questions.
1. 300,000 Theora videos via openvideo.dailymotion.com, and Theora being the video codec for Wikipedia and Wikimedia, mean that Theora is already widely available.
2. It doesn’t matter nearly as much as the CONTROL some companies would gain over others, in countries where software patents exist, if H.264 were the only codec for video over the web. To some large companies, having other companies in control of their business would be unacceptable.
3. Perhaps it would matter if it were the case. Fortunately, it isn’t.
4. Once again, it is not so much a matter of the money as it is of the control, and the lack of control, over one’s ability to host video over the net. Why should anyone accept a bunch of companies having control over them, especially other companies?
5. No. Ogg Theora and Vorbis are supported natively (without needing plugins) in a good percentage of browsers. In other browsers, any multimedia support at all requires a plugin, and there is just as good a plugin for Vorbis and Theora for popular-but-less-capable browsers as there is for Flash multimedia.
6. No. Why should it matter?
7. The USPTO is not supposed to grant more than one patent for the same functionality. In the event that there was a mistake, and the USPTO did grant a patent twice for the same functionality, then since Theora is older than H.264, the VP3 patents for Theora would prevail.
I think most of your other answers are arguably valid. I disagree with your answer to 7.
7. What is the chance that there are valid patents that read on either Ogg Theora or Vorbis?
>The USPTO is not supposed to grant more than one patent for the same functionality.
>In the event that there was a mistake, and the USPTO did grant a patent twice for the same functionality, then since Theora is older than H.264, the VP3 patents for Theora would prevail.
Some of the patents were filed before Ogg Vorbis and VP3 were published. Glancing through the US patents list there certainly are patents filed during that time.
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/02073…
If, say, a patent from 1996 read on both Ogg Vorbis and H.264, it doesn’t matter that Ogg Vorbis came before H.264, just that the patent was filed before Ogg Vorbis. This doesn’t mean that Ogg Vorbis is patented; it just means that your logic cannot be used to demonstrate that Ogg Vorbis is not patented (at least not until Ogg Vorbis or something very similar is 21 years old).
Members of MPEG LA have been trying to harp on this point for donkey’s years. There is an incessant hopeful claim of “there could be a patent troll against Theora out there”, which is repeated so often that it is obviously actually a plea. MPEG LA have been pleading for ages for someone to come forward with a patent claim against Theora (one that is either not covered by the VP3 patents, or is older than the VP3 patents).
So far, no such claim has been made. There is not even a hint of one.
I agree that if MPEG-LA wants to imply that there are H.264 patents that also read on Vorbis, they should tell us the patent numbers. Not doing so makes their claim very weak.
Something I’ve realized in the past few years is that “get it done right” and “get it done now” are not mutually exclusive.
“Get it done now” often proves whether an idea is worth pursuing at all. The “get it done now” attitude of sites like YouTube, Hulu, etc. led to an explosion of video on the web. The reason there’s a video tag in HTML5 is that so many sites using Flash showed that video on the web is a good idea. Imagine if YouTube or Hulu had waited to “do it right” — wait for the W3C to come up with the spec, wait for browser vendors to implement it, wait for a codec to be standardized on… if they had done that, they’d still be waiting.
The other thing to remember in this debate is that none of this is forever. The web is an ever-changing place. A decision on a codec will not be etched in stone for eternity; it will change. And as it changes, sites will change too. They did it when Flash changed from the Spark to the On2 codec, they did it when Flash started supporting H.264, and they’re doing it now to migrate from Flash players to native players.
Certainly express your opinion on the proper codec to whoever will listen, but dragging your feet if you don’t get your way in the name of “right” and “wrong” isn’t going to get you anywhere.
The “get it done right” camp is actually the camp of the H.264 folks, and that’s simply because H.264 is technically better.
As for Mozilla not being able to use H.264 because of the GPL, Mozilla can do a number of things:
1. Drop the GPL
2. Use whatever codecs the user has installed on their system
3. Use system provided codecs
1. I think that’s not possible being that most FOSS developers are fanatics
2. Mozilla already does that with flash and other media, so why can’t it do the same with H.264?
3. About 95% of Mozilla users are Windows users, too. And Windows 7 comes with a hardware-accelerated H.264 codec. So, using system-provided codecs is very reasonable.
In practice, a combination of 2. and 3. will help Mozilla keep its place in the rapidly changing world of browsers. If not, too bad for Mozilla; there are other browsers out there, and an increasing number of people are starting to like the new Chrome browser.
They could also provide it as an extension.
I tried hard to grant that the first sentence is anywhere near how most people would see it, but this last sentence simply stopped any form of thought from continuing.
Let us not walk on this path. Here be dragons. (I shall not even begin to debunk this statement. It shows intent to be irrational anyway.)
And you cannot even spell “quite” right in your title? Come on. I’m not even a native speaker! Twitter must have damaged your typing ability.
Oh no, I’ve fed the troll. Why not like this: I mention the Nazis, and let this thread of discussion die?
It was obviously a typo, and his overall point – that Mozilla has options – is correct.
You agree that they could provide it as an extension and not violate the GPL, right?
Your opinion is not universal, and it is based purely on technical observation. “Right” means different things in different contexts; do you even remember the point of the article?
due to the ease with which videos can be saved to the hard drive.
Anyway, IE9 and the iPad have sealed the deal for H.264. Theora barely had a chance in the first place, with Google having zero interest in it for YouTube.
… another Theora debate, another total fanboy flag-waving extravaganza of support from you! You put me off liking Theora because you seem obnoxious and ready to trot out the same damn overzealous list of cruddy links I have no interest in viewing.
On topic: Theora only has a good chance of making it as a codec IF and ONLY IF someone divorces it from the trainwreck that is the Ogg container. Theora is a fine codec; Ogg is utter crud on a stick*. Ogg does not work for Theora; it holds Theora back in pretty much every way with regard to streaming. Until this is addressed, stating that Theora should be the new standard is a TOTAL joke.
* to clarify – Ogg as a streaming container for anything other than Vorbis.
No point whatsoever in any of that, apart from the weak attempt at ad hominem attack.
Ogg works as a container for Theora and as a streaming format… it does not need to work for anything else at all.
However, if it is necessary, then there is always Matroska. Why would you ignore that?
Google has the power to get it done right, so why don’t they? Software patents are anti-social, and in standards patents are useless and dangerous.
Happy Easter!
First, I’d like to note that the reason I registered on OSNews is the Theora vs. H.264 discussions (and this would be my first post). And the articles are really, really great.
OK, there are some thoughts running in my head.
With quite a few exceptions, I’ve seen Theora OR H.264 arguments. Why not both of them?
The two real shortcomings I can see in each codec are:
– Theora lacks hardware acceleration;
– H.264 requires licensing.
Why these are not such big issues, IMO:
Hardware acceleration is an issue when we have to deal with lower-end CPUs, high-resolution videos, and battery life. Proprietary codecs are unsuitable for the Open Web (I’m a defender of open standards and closed implementations, btw). And mostly – H.264 would be inappropriate for delivering free videos.
It might be just my imagination, but I think I can see through users’ perspective (not just developer’s one).
So, here are my daily Joe Average User scenarios:
On my desktop, I watch 5-minute long videos on YouTube (mostly low-quality). I watch 2-hour long movies in fullscreen. On my mobile device, I watch low-resolution videos (iPhone resolution, that is), be it 5-minute or 2-hour long.
AFAIK, most video providers already encode each video (at least) two times – HD and low-res.
So, enough with the prerequisites; here’s the point I’m trying to make. The HD movies I watch, I already have to pay for. So some minor fee for the H.264 codec is a non-issue for me. For 5-minute-long HD video clips, I don’t mind having my CPU usage hit 90%, and it reduces battery life by… (less than) 5 minutes. The videos themselves are free, and I’m not paying any fee. So Theora seems to have no shortcomings here.
What about the low-res videos on the mobile device – well, if I’m targeting iPhone users with my YouTube-like site, I’d better be making some money already, so the H.264 encoding license should be something I can afford (and something that makes an economical sense). Users have already paid for H.264 decoding in their mobile devices (it’s in the price of the iPhone, you know). So, here’s my dumb-head’s proposal in short:
Proprietary (HD) videos should be encoded in H.264;
Free HD videos should be encoded in Theora;
Low-res videos – well – it depends. If iPhone’s CPU can decode Theora – let it be Theora. If it fails to decode them flawlessly – H.264 (assuming I’m making money).
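As a rough illustration of serving “both” (a sketch only – the file names are made up, and the type attributes can also carry explicit codec strings), a page can simply list the two encodings in one video element and let each browser take the first one it can decode:

<video controls width="480" height="270">
  <!-- Browsers that decode H.264 (Safari, the iPhone, IE9) can take this one -->
  <source src="clip.mp4" type="video/mp4">
  <!-- Browsers that only do Theora (Firefox, Opera) fall through to this one -->
  <source src="clip.ogv" type="video/ogg">
  <!-- Anything without the video tag falls through to whatever is put here,
       e.g. the old Flash object/embed markup -->
</video>

The selection happens entirely in the browser, so the only ongoing cost on the provider’s side is keeping both files around.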
As I’ve stated above, YouTube already maintains several versions of the same video. And in my mind there’s an assumption that the higher disk space and bandwidth costs (for Theora) should be compensated for by the licensing fee saved (on H.264).
This is not future-proof, but it will help Theora adoption up to the point where demand for Theora hardware acceleration has come, and the chips (Broadcom “Diamond-HD” for example ) will become mainstream.
I know, there might be lots of unaddressed issues in this “Plan”, but it addresses the main issue – how to adopt an open standard (i.e. Theora), so it becomes as mainstream as H.264; the transition being transparent for end-users and not much of an effort for Video-content providers.
I believe that OSNews’ intelligent readers are the majority, so instead of pointing out shortcomings and saying “LMAO, you’re an ignorant idealist”, this might become something of a “draft”, and you’d be the workforce to clear all the problems it will face (if you’re feeling me, that is).
Excuse me for the bad English; it’s not my native language. I’m from Bulgaria, where today we greet each other with “Христос Воскресе!” (“Christ is Risen!”) and “Воистина Воскресе!” (“Truly He is Risen!”).
First of all, welcome to OSnews!
We actually agree that we should have a mix of both H.264 and Theora (which, in a horrible and extreme sense, is what I am actually saying).
Now, the sentence
Is hugely problematic, because the Theora camp basically exists because H.264 requires licensing, so in some sense the problem is rather big.
Otherwise, your post (a bit too late into the game to be read by most) is generally alright. However, the others would probably ask a “What is new in this solution?” type of question (as they have already mercilessly done about this post itself).
There is one shortcoming that matters a lot here, and that is the fact that your method requires human effort to sort out which videos to encode in H.264 and which not to. This is important because sites like YouTube have so many videos that they cannot choose by hand, and computers currently cannot come anywhere near guessing how the human eye will perceive the difference in quality.
Nonetheless, you can join in next time with an updated solution. (They always come and go; it never ends.)
I’m very happy that, although we’re at the end of the discussion, my opinion isn’t left unnoticed. You’re great, and if I weren’t already in a good mood, you’d have made my day.
So, the “new” here – it’s the Acceptance. I believe, that there should be some things, that we must pay for – that’s why I’m into Open standards and closed implementations, as I stated in the original post.
And, the answer to the creative critique (the human effort):
Is the quality of free (HD) videos so much of an issue? Really – that’s the concern of a “get it done right” person, IMO. Joe Average just wants to watch something; he has already accepted the fact that there is nothing free, except for the cheese in the mousetrap (and someone has already paid for that).
The other shortcoming (OK, I’ll look up synonyms in the dictionary, I promise) – the Mozilla license. IIRC, Opera has got to the point where it uses the OS media framework to render videos (on Linux, at least – I’m using Firefox on Windows 7, so I might be completely wrong).
This way it would become Microsoft’s and Apple’s responsibility to provide a Theora implementation for the browsers. And I’m pretty sure that someone who already uses Firefox/Opera/Chrome would never go back to IE8.
/off OK, I kinda did – in the “dark days” of the Windows 7 Beta, when Firefox did not have jump-list support and IE8 did. It lasted two days, and, believe me, I am a Microsoft fanboy.
I’m willing to attend a “clear talk” course (since I easily get side-tracked), so here it all is:
Only free HD videos should be (at first) encoded in Theora – the users don’t care about the quality of free stuff that much (*I’ll elaborate in a minute, since I see how you could prove me wrong).
The OS should provide a media-platform and well-defined media APIs, and browsers should use them (idealistically – the APIs would be the same across Windows, Linux, Mac OS X, etc).
I hope it’s better now.
When users care about free stuff’s quality (Elaborated):
I already see some people pointing at FOSS projects that have failed because of bad quality.
There are two major reasons why something free but of bad quality would fail – 1) high expectations; 2) lack of spirituality.
And in my mind, users don’t have expectations of high-quality free videos; and there IS spirituality (and idealism) in Theora-encoded videos.
I hope this time I said something new.
/off I’m still not comfortable with commenting here, mostly with using quotations. I’m kind of sceptical about quoting posts, but if it’s necessary, I’d change.