In a recent interview, Wikimedia deputy director Erik Moller talks about the site’s upcoming suite of editing tools and sharing options. “Although videos have been part of the Wikimedia stable for a couple years through the open-source Ogg Theora format, the offering has been limited. Now, however, a Firefox 3.5 plugin called Firefogg will allow for server-side transcoding to the Ogg format. In addition to allowing for downloading and editing, the Ogg format also consumes significantly fewer resources during video playback. The linked article also indicates that there are other video sites (apart from Wikimedia and Dailymotion) that are moving to the open standards format for video, noting that “hundreds of thousands of public domain videos from sources such as the Internet Archive and Metavid will be available in the new format”.
http://www.theregister.co.uk/2009/07/08/html_5_media_spec/
“… for Ogg Theora to improve to a level where it’s considered mature enough by Apple and Google, who opposed its inclusion in HTML5”
The Theora decoder is version 1.0, and has been for a while. The Theora bitstream format (which is what must be decoded) has been stable since 2004. All versions of Theora since 2004 have been fully able to decode the same bitstream format … you can still use a five-year-old version of the Theora decoder to decode Theora video made today with the very latest encoder.
The decoder is the piece that is required as far as HTML5 is concerned.
It is perfectly stable (as a decoder, which is all it needs to be in browsers). Why the FUD about Theora?
Lack of hardware support might be brought up as an issue, but there is a lot of hardware that also lacks support for H264, so why exactly is it a problem for Theora in particular?
As for the “uncertain patent landscape” … Theora is based upon the VP3 codec, which is patented by On2. Theora has an irrevocable license to use this patent, and to distribute the resulting Theora code without requiring any royalty payments. What is uncertain about that? H264 is also patented, so why would it be any different to Theora, other than that it will cost money for anyone to use H264?
Theora is just as covered by its own patent as H264 is. Both are equally subject to attack via other patents. Why the FUD about Theora?
Well and truly debunked. Using a H264 video from YouTube itself, the same video was encoded in Theora to exactly the same bitrate and filesize. There is virtually no perceptible difference in quality. There are a number of such side-by-side comparisons available that show this. Again, one must ask, why the FUD and outright lies about Theora?
Too right. The web is supposed to be open access for all. Why is there a supposed push to hand over pots of money to MPEGLA when there is a perfectly viable non-discriminatory alternative available? Whose interest is served by that outcome? Certainly not the interests of the overwhelmingly vast majority of people on the planet.
If it came to a vote amongst all parties with an interest, the answer would be absolutely clear cut … open free codecs would be specified without any doubt. Why has there been no consideration of that plain and obvious fact?
Edited 2009-07-21 02:35 UTC
I’m with you on the benefits of a common standard that can be freely implemented.
The suggestion I’d heard is that *Apple’s* devices have H264 and so they don’t want to use anything else
Great.
Why do Apple get to have a say, and yet billions of people (who might actually have to pay for the one option but not for the other) have no say over which is chosen as a public web standard?
GPUs are programmable. Why should the fact that Apple’s drivers include ONLY the software for the expensive proprietary decoder option be used to ransom the freedoms of the entire world’s populace? Why is it supposed to be impossible for Apple to do the right thing by their customers and include driver software for their hardware GPUs to decode Theora as well as H264?
Edited 2009-07-21 02:55 UTC
Apple wasn’t the only one to have a say; there were others, with Microsoft not even commenting.
Apple has a say because they develop an OS and a web browser with a market share which is noticeable (not grand like FF and IE, but enough to register on the comparison graph).
I think the real killer was patents. It’s OK now whilst the codec is not really in use, but what about when it becomes widespread? À la the whole MP3 thing: it started off being OK, but as soon as people started using it, patents etc. came flying out of the woodwork.
(1) Theora legally implements a patented codec. The patent for the VP3 codec (that Theora implements) is just as valid as any other video codec patent.
http://en.wikipedia.org/wiki/Theora#History
(2) None of what you said applies to Theora any more than it applies to any other codec. If this is your reason for choosing one codec over another, then it could equally well be used to choose Theora and reject H.264.
(3) Thinking that Theora in particular is susceptible to patent threats is pure FUD, similar to that which was used against Vorbis.
http://xiphmont.livejournal.com/tag/xiph
(4)
There are currently approximately 6.7 billion people who might one day be potential users of video codecs. A noticeable percentage of these people are already users of video codecs. Being stakeholders in the choice of video codecs for the web, these people also have a say.
http://en.wikipedia.org/wiki/World_population
They out-vote Apple by hundreds of millions to one.
Apple’s interest does not register at all on this comparison graph.
Edited 2009-07-21 10:58 UTC
I would contend that Apple has a say primarily because of the iPod / iPhone. There are many who want their video content to be playable on those devices – and h.264 is the best (only?) way to do that right now.
Here was Microsoft’s opportunity to demonstrate their willingness to be open and transparent by offering royalty free playback for their CODECs by third parties such as web browser vendors. They could have stepped forward but decided not to.
What Apple did or didn’t do has very little weight when one considers that the vendor who would have the most impact would be Microsoft releasing a browser with full standards compliance, implementing HTML5 video tags and offering WMA/WMV/ASF playback royalty free. Had Microsoft done that, we wouldn’t even be debating the issue.
As for patents, I have no problems when it comes to patents for encoding, but why should a person have to pay royalties simply to play back a video? It seems like double dipping to me – charge the creator via the encoder, then charge the viewer.
Edited 2009-07-22 04:38 UTC
Don’t get me wrong, I’m not with Apple on this one – I want the web to work well on everyone’s devices. I don’t have any of their kit, so I’m not terribly sympathetic to their needs!
Trouble is that this whole debate is being participated in by a tiny minority of people who actually have the resources to make and market popular browsers, rather than the people who use the web.
But the situation seems to be that we have to give the browser people priority in this debate, since they are the ones with the power to render the standard relevant or irrelevant, depending on whether they like it. The other problem is that, infuriatingly, if you let the user base of the internet decide, it’d probably – even in this day and age – come out as “Whatever IE does is right”.
I think with Firefox, Chrome, etc plus real competition from Safari, things are moving in the right direction for browsers and the web. But we are still some way from things being as they *should* be.
GPUs are programmable but conceivably some hardware has H264-specific decoders built in, no? For instance, the iTouch / iPhone. If that’s the case, you can expect Apple to fight tooth and nail against another codec that’ll require more computing power and battery drain whilst rendering their hardware investments less valuable.
Apple will be all over the ideals of openness as and when they make it easier for them to compete with the rest of the world, particularly with MS.
They don’t have full decoders, but accelerators for common operations. Unfortunately, most H.264 operations that are interesting to accelerate (spatial transform, entropy encoding/decoding, motion compensation, deblocking) are different from Theora’s. That said, Theora is close to MPEG-2 in many aspects. Depending on the hardware chip, some parts of it could be accelerated.
This is a consideration only if one is using a dedicated chip to do the decoder maths. A general-purpose video processing chip (such as a GPU) is equally able to decode Theora as any other encoding. Math is math, after all, and a transformation function is just math.
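To put a toy example behind “a transformation function is just math”: here is a naive 8×8 DCT-II in Python, the transform family at the heart of both VP3/Theora and MPEG-2 style codecs. This is purely an illustrative sketch (real decoders use the inverse transform, fast factorisations and fixed-point arithmetic), but any hardware that can run this math for one codec can in principle run it for another.

```python
import math

def dct_2d(block):
    """Naive 8x8 2D DCT-II. Illustrative only: real codecs use fast,
    fixed-point factorisations of this same transform."""
    n = len(block)

    def c(k):  # orthonormal scaling factor
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat block of identical samples transforms to a single DC coefficient,
# which is what makes blocks of smooth video so cheap to encode:
flat = [[100.0] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
```

For the flat block, all the energy lands in `coeffs[0][0]` and every other coefficient is (numerically) zero, regardless of which codec’s bitstream the coefficients came from.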
If Apple were foolish enough to include decoding hardware that was not generally programmable (ie the decoder maths is fixed) … then more fool Apple.
Apple’s mistakes resulting in Apple’s vote against Theora should not carry the day against millions of times as many stakeholders whose vote would go the other way (if they were consulted).
Most math operations can be computed faster by fixing some parameters and creating an optimized circuit for the specific operation. That’s what I call an “accelerator”.
Sure, you can implement a DCT in your GPU, but the fixed DCT operation from the manufacturer is likely to be faster. These paths are not flexible, but good enough for targeting standards that don’t change often.
Intel, AMD, nVidia and thousands of hardware manufacturers around the world are including fixed paths in their hardware…
There is no doubt that portable devices from Apple can decode Theora. However, it would require more power to decode than H.264, as their H.264 decoder can use these accelerators.
That is only because Apple put the cart before the horse, and apparently put fixed h.264 dependencies into their hardware. If they had built general support for maths, there would be no power penalty for decoding one format versus the other.
There are way, way too many stakeholders whose best interest is served by ignoring Apple’s self-caused dilemma. Apple’s mistake cannot be allowed to hold the public web standard for video codecs to ransom.
Period.
You are completely missing the point.
Their devices have no problem with math operations. However, a DCT or a motion compensation operation hardly qualifies as general maths.
These functions/algorithms can be decomposed into smaller operations, but the sheer number of operations might surpass the designed capacity of the video chip. For a standard NTSC DVD feed, an iDCT is computed around 242757 times per second. A standard-definition iDCT needs around 200 adds, 400 multiplies, 200 divides and 128 cosines. It can be simplified, but the numbers of operations still pile up.
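The 242757 figure above is easy to check, assuming the usual interpretation: a 720×480 luma plane plus two quarter-size chroma planes (4:2:0), split into 8×8 blocks, at 29.97 frames per second.

```python
# Back-of-envelope check of the iDCT rate for an NTSC DVD feed:
# 720x480 luma plus two 360x240 chroma planes (4:2:0), in 8x8 blocks,
# at 29.97 frames per second.
luma_blocks = (720 // 8) * (480 // 8)           # 90 * 60 = 5400 blocks
chroma_blocks = 2 * (360 // 8) * (240 // 8)     # 2 * 45 * 30 = 2700 blocks
blocks_per_frame = luma_blocks + chroma_blocks  # 8100 blocks per frame
idcts_per_second = blocks_per_frame * 29.97     # ~242757 iDCTs per second
```

Which matches the number quoted, and gives a feel for why a fixed-function path for this one operation is attractive on low-power hardware.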
Instead of raising the clock (which raises power consumption and heat dissipation), hardware manufacturers are implementing special paths for these common operations used by well-established standards.
As for putting the cart before the horse: H.264 was established in 2003 by two well-renowned standardization committees, while only a handful of people had heard of Theora before its 1.0 release. If you are waiting for the next standard, you are going to wait forever.
As seen from these links:
http://en.wikipedia.org/wiki/VDPAU
http://en.wikipedia.org/wiki/X-Video_Bitstream_Acceleration
VLD, iDCT and motion compensation operations are indeed common operations used by well-established standards … that do not appear to depend in particular on the exact video codec used. The same resources are currently used for video acceleration for a number of different codecs.
That makes the maths general enough.
Therefore, at least these parts of the GPU used for accelerating part of the task of video decompression and rendering of same should be equally well able to be employed for the same purposes for Theora.
Actually, VLC/VLD isn’t a specific algorithm, but a whole class of entropy coding techniques. See [1]. That is what is called “entropy encoding” in the chart I’ve presented. As for motion estimation/compensation, most codecs are using similar techniques, although some implementation details are different.
There’s no doubt about it. In the worst case, you could use the programmable pipeline of the GPU to implement the whole decoder.
That said, most mobile devices that are currently in the market don’t have a programmable GPU, so most operations have to be done on the CPU. There is nothing preventing you from using the hardware assistance support for some parts of Theora, but you would need to address them directly!
[1]: http://en.wikipedia.org/wiki/Variable-length_code
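To make the “class of techniques” point concrete, here is a minimal prefix (variable-length) decoder in Python. The codebook is invented for illustration; real codecs carry tables of this shape (or build them adaptively) for their entropy-coding stage, whether that stage belongs to Theora or to H.264’s CAVLC.

```python
# Minimal variable-length (prefix) code decoder. The codebook below is
# made up for illustration; the prefix property (no codeword is a prefix
# of another) is what lets the decoder emit a symbol at the first match.
CODEBOOK = {"0": "A", "10": "B", "110": "C", "111": "D"}

def vlc_decode(bits):
    """Walk the bitstream, emitting a symbol each time a codeword matches."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in CODEBOOK:
            out.append(CODEBOOK[buf])
            buf = ""
    if buf:
        raise ValueError("truncated bitstream")
    return "".join(out)

vlc_decode("0101100")  # -> "ABCA": codewords 0, 10, 110, 0
```

Frequent symbols get the short codewords, which is the whole trick; hardware “VLD” blocks accelerate exactly this kind of table walk.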
I have been doing a bit of research myself on this topic.
Here you go:
http://en.wikipedia.org/wiki/Video_Acceleration_API
My bold.
The critical bit is the text “at various entry points”. I would take this to mean various “starting points” along the chain can be handed over, via this API, to the GPU and driver.
I would take it that, in order to implement Theora decoding and rendering, all that is needed is to do the first step … decompress the Theora data stream … and then pass on the rest of the processing chain via this API to the GPU/driver.
Voila.
Support for hardware accelerated Theora.
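A sketch of that idea (purely illustrative: the stage names mirror the usual decode order, not the real VA-API symbols or structures) is that the “entry point” is simply where the CPU stops and the GPU/driver takes over:

```python
# Illustrative model of a decode pipeline with selectable entry points.
# None of this is the real VA-API; it only shows the division of labour
# that the "entry points" idea implies.
PIPELINE = ["bitstream", "idct", "motion_comp", "deblock", "render"]

def split_work(entry_point):
    """Everything before the entry point runs on the CPU; the entry
    point and everything after it is handed to the GPU/driver."""
    cut = PIPELINE.index(entry_point)
    return {"cpu": PIPELINE[:cut], "gpu": PIPELINE[cut:]}

# Full bitstream offload, as with hardware H.264 decoding:
h264_style = split_work("bitstream")
# The Theora arrangement described above: the CPU decompresses the
# stream, then hands the rest of the chain over at the iDCT stage:
theora_style = split_work("idct")
```

Under this model, supporting Theora does not require new silicon, only that the driver accept data at a later entry point.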
Edited 2009-07-22 05:16 UTC
On Windows, the DXVA API supports H.264 bitstream decoding with compatible hardware. This means the entire bitstream is handed off to the GPU.
Yes, but so what?
On any OS, using the exact same hardware but a different entry point, a Theora bitstream can be decompressed by the CPU and the rest of the video handling and rendering can be handed off to the GPU.
Much the same result … the CPU is marginally more utilised, and the GPU has a bit less to do. Given that the required processing is shared around a bit more, this should mean that the same CPU/GPU combination of hardware can process more video bandwidth.
On any OS (including Windows), the video graphics hardware allows data to be injected at any point in the chain of processes. To handle Theora using the exact same hardware, all that is required is for the CPU to perform the decompression of the bitstream, with the resulting output data passed to the same hardware just a bit further along the chain.
Using a CPU/GPU combination in such a manner should provide a bit more bandwidth capability (because of shared load) than using the GPU hardware alone.
None of this necessarily implies a desktop system or a Windows OS. Indeed, the “Blu-ray players, DVRs, phones and PDAs” of which you speak most often would use embedded Linux as the OS.
Edited 2009-07-22 14:10 UTC
Maybe so … but that is NO reason to let Apple dictate the public interest standards for video on the web.
As has been pointed out before, Apple would be out-voted by hundreds of millions to one against other stakeholders on this question.
Why should .00000001% of the votes carry the day?
Edited 2009-07-21 14:50 UTC
No, it’s not. It’s just the reason that Apple themselves are going to be opposed to it even if it would be to the overall benefit of an infrastructure they benefit from.
They shouldn’t, it’s the wrong way to do things. However, perhaps if Google (and therefore YouTube) were more solidly behind Theora then Apple’s vote would have been overruled. It’s not just that Apple are vetoing it, it also seems that the standard has insufficient support from other major players (Google, who won’t use Theora on YouTube and MS who’ll doubtless do something really weird when they get round to it). Since YouTube aren’t going to use it, there’s also not a pressing reason for Apple to support it so they probably figure it’s better to try and block it and save themselves the effort of supporting it.
I imagine Apple would probably cause the standard a certain amount of harm if they said “nope, it’s a bad standard, we won’t implement it”, especially as that would create a precedent for other browsers (i.e. MSIE) to also state that it’s not a proper standard and that they’re not just ignoring it unilaterally.
I think it sucks that we can’t manage to standardise video on the web. I guess I prefer we standardise *some* HTML5 stuff and let the video stuff slide for now if we can’t figure out a way to do it.
If the W3C really wanted to motivate the browser makers towards standards, it’d be nice to see them say “Right, you can’t agree on a way of doing the <video> tag so we’re removing it entirely from this version of the standard”. It seems like a video tag where everyone uses a different encoding format could actually be *worse* than just using Flash
I’m pretty sure that YouTube are going to re-think this. The infamous wrong comment about Theora’s efficiency came from just one (apparently misinformed) person representing YouTube.
Meanwhile, Google themselves are putting HTML5 and Theora support into their Chrome browser …
http://www.nabble.com/Google-Chrome-to-support-Ogg-Theora-video-nat…
Draw your own conclusions.
Edited 2009-07-21 23:50 UTC
I think if YouTube goes to Theora then we basically don’t need to worry about what Apple *say* – it’d be pretty insane for any browser maker to ignore the format YouTube is using.
If YouTube goes for something else it’ll be much more murky which standard wins in the end.
I *really* hope for some sanity here.
Your misconception is that the W3C can force anyone to do anything, and that it’s just a matter of overruling somebody in some obscure vote to shape the market.
The W3C is simply a table at which each of the important players can sit together and resolve their differences before they go straight to market and engage in destructive wars which surely benefit the collective customer the least. Think of it as a kind of web UN, meant precisely to escape the dreadful prisoner’s dilemma. It’s only relevant as long as everybody’s voice is heard, and it took years of blood, sweat and tears for the W3C to gain this.
But if the players are not ready/mature enough to let a free ecosystem spring up in this domain and to start seeking revenue at a higher level … well, that’s a pity, but neither the W3C nor any other committee can do anything about that; it can only hurt itself.
Well, quite! I certainly don’t think the W3C can force anything on anyone; all they can do is exert pressure to try to get a consensus, and they’re in a reasonable place from which to do that. We can just hope for the W3C to introduce some coherence, getting the de-facto standards as close as possible to the *real* standards.
That’s why I’m not suggesting the W3C mandate a particular codec if it can’t get the co-operation of the browser makers – it’s not going to be able to strong-arm them. They could, however, have refused to include a tag in the absence of sufficient agreement about how it should work across multiple browsers.
Making the presence of the video tag in their spec predicated on some agreement as to format seems like something they could do to focus the browser people’s minds – a motivation to agree on something, or lose the opportunity to include a useful tag in the spec. As it is, the browser makers have been left to fight their vested-interest corners and have chipped away at the semantics until they have scope to do what they like. It’s their right to do this, but I’m not relishing installing random codecs to make the video tag work on everyone’s sites – Flash sucks, but at least there’s a Linux version I can install once and be done with.
Unfortunately, if they were going to play hardball by refusing to standardise it and yet still avoid the browsers going off and making proprietary/incompatible copies of it, they probably should have stated an ultimatum up front. At this point the browser manufacturers have already started and are probably going to go ahead and implement the video tag whatever happens.
On the other hand, without a standardised format the implementations will be de-facto incompatible, so maybe that’s not actually much different!
This is why public perception is important. There is an “Internet meme” being pushed at this time that:
– there is no consensus on W3C standards,
– that W3C standards are unstable and evolving,
– that W3C standards are somehow insufficient, and extra browser plugins (such as Silverlight or Flash) are necessary in order to have rich content on the web, and
– that it is necessary to have code to cater for the different types of browsers that users may have.
None of this is true.
If there had been consensus to support open web standards as they became recommended, and a common, unencumbered, royalty-free set of multimedia codecs (and right now, that means Vorbis and Theora) had been agreed, neither Silverlight nor Flash would have been required. We could all have enjoyed low-cost, well-performing rich media content on the web, delivered to and rendered equally well by any (Acid3-compliant) browser of a user’s choosing, up to five years ago, with no need for webmasters to jump through ridiculous hoops trying to cater to different browsers.
Fortunately, there are at least three very good browser clients that will behave in the correct manner: Firefox, Opera and Google Chrome. Safari will also come close, except that it won’t support Theora out of the box, but it can easily be made to do so with an extra download.
There are at least five video websites identified who are going to be supporting HTML5 and Theora-encoded video in the immediate future: Dailymotion, Internet Archive, Wikimedia, The Video Bay and Metavid. This alone represents over half a million videos.
Since Google Chrome also will soon support HTML5 and Theora-encoded video, and H264 is going to cost a bomb in the near future, there is at least a good chance that YouTube will also go this way.
Given that momentum, it is likely that this could become a de facto standard, and only IE users will be unable to participate.
That should push it into universal acceptance … even though, as you state, W3C alone has no power to force anyone to do anything.
Edited 2009-07-22 12:03 UTC
Actually, not to worry, even IE users may be OK.
http://www.theora.org/cortado/
Enjoy.
PS: Another smallish site:
http://tinyvid.tv/
Have fun.
Edited 2009-07-22 12:20 UTC
The market as a whole has selected H.264 (and to a lesser extent VC-1), not just Apple. H.264 decoding is present on most new ATI, Intel and nVidia GPUs. H.264 is used on Blu-ray and is commonly used in high definition terrestrial and satellite broadcast. H.264 is also used in videoconferencing. H.264 is supported by devices like the PS3, PSP, and Xbox 360. Mobile phones (other than the iPhone) support H.264 video, including Symbian and Windows Mobile devices. Camcorders and digital cameras that shoot video increasingly encode to H.264.
Videoconferencing and satellite television are specific bandwidth constrained applications that benefit greatly from using H.264.
Some of these applications may seem to have little to do with internet streaming video, but that’s not quite true. For example, some television DVRs and Blu-ray players are capable of streaming video over the Internet – and they already have hardware decoding for H.264, not Theora. Many of these devices have low-power CPUs and perform all video decoding and processing in dedicated hardware.
GPUs are programmable. Update the driver and these same video cards can decode Theora.
None of the points raised about where H264 is currently used have much to do with selecting a codec for the web. Most emphatically, none of the points raised address the enormous, outrageous and unnecessary costs to the vast majority of people, and undeserved windfall to MPEGLA, that would result from selecting h264 as the video codec for the web.
None of these video cards are reprogrammable in the way you suggest. My ATI card in this machine has hardware VC-1 and H.264 decoding – the work of decoding H.264 is done entirely in the GPU using a dedicated circuit (found in all of ATI’s GPUs since the 2000-series, which came out two years ago). Intel and nVidia’s H.264 decoding hardware works in the same way.
Nevertheless, the GPU itself can be used for this purpose. The GPU can render any image you desire, so it should not matter in principle if the original data for what is meant to be displayed comes from an encoded video data stream or if instead it comes from a 3D game engine … the GPU can still apply the required computations to the data and render the result.
That is what GPUs are for.
Furthermore, I would strongly suggest to you that the rendering of h264-encoded videos on the web (say, those served by YouTube) is not achieved in client browsers today via the h264 dedicated hardware decoder circuits on graphics cards.
I say this because my own system is perfectly able to show h264 videos from YouTube, yet the open source Linux graphics driver for the R6xx ATI card that I have in the machine is not at all able to use said codec circuits (even though, in paying for the graphics card, I should have acquired a licensed right to do so, regardless of whether I choose to run a Linux OS).
Edited 2009-07-22 02:30 UTC
Some information concerning the possibilities of a GPU and its driver handing off part of the video decoding task to graphics card hardware can be read here:
http://en.wikipedia.org/wiki/VDPAU
http://en.wikipedia.org/wiki/X-Video_Bitstream_Acceleration
These lists strongly suggest that the hardware circuits involved are nowhere near as dedicated to particular codecs as you suggest. I see no reason why support for Theora could not be easily added to these lists.
Having said that, features such as this:
http://en.wikipedia.org/wiki/Unified_Video_Decoder
appear to be more constrained to particular codecs. Maybe; it is hard to tell how much of this is strict, immutable hardware, and how much is actually firmware or driver software running on the GPU rather than the CPU (and hence called “hardware acceleration”).
In any event, there is a considerable amount of Theora decoding and rendering that could easily be supported by this type of functionality of GPUs.
More general information about video compression can be read here:
http://en.wikipedia.org/wiki/Video_compression
Edited 2009-07-22 03:29 UTC
I am talking about UVD, PureVideo, and Clear Video. Windows has an API supported by the three major GPU vendors called DXVA. GPUs with these features can (and do) perform H.264 bitstream decoding in hardware – CPU usage is nearly zero when this feature is in use.
Many hardware devices have similar hardware video decoding capabilities – Blu-ray players, DVRs, phones and PDAs, etc., and these devices often stream from the Internet as well. With H.264, the same encoded video can be used for streaming from a web browser or one of these devices.
(emphasis mine)
Please correct me if I’m wrong, but if I interpret the content of http://www.firefogg.org correctly, then the actual encoding happens client-side, not server-side.
(Yeah, I know that this piece of information is a verbatim copy from the interview, and the article is on page 2, hence no modifications, and there is precious little information about the whole plugin on the official project homepage save for some examples for developers, etc. …,
but perhaps somebody could investigate and if necessary update the article.)
Thanks in advance
Edited 2009-07-21 09:45 UTC
http://www.xiph.org/press/2009/thusnelda-alpha-2/
More improvements in both compression efficiency and execution speed.
Coming along nicely. I wonder if it has surpassed h.264 yet?
PS: Just a note … this project is to improve the encoder only. The Theora decoder (which is what one would use in a web browser) is stable, and not affected by this project.
Edited 2009-07-21 11:04 UTC
It’s a bit awkward to compare a specific implementation (Theora) to a whole standard (H.264).
As I’ve already mentioned, H.264 is a complex standard that has much headroom for improvement. There is a good comparison chart between Theora and other standards in this post from the Doom9 forums:
http://forum.doom9.org/showthread.php?p=674819#post674819
Just as DVDs got better, there is no doubt that H.264 will improve.
Unless the licensing improves, ie, it goes away, how do any improvements make any difference?
I merely described the situation from a technical standpoint. I’m already in favor of Theora in HTML5, so I won’t repeat myself.
Edited 2009-07-21 16:17 UTC
Your link points to a post from 2004.
My link points to an announcement made by xiph.org on May 26, 2009.
http://www.xiph.org/press/
http://www.xiph.org/press/2009/thusnelda-alpha-2/
Which do you imagine might be closer to the current state of play?
For your interest, here are a number of recent links discussing comparisons of videos from the earlier alpha-1 version of Thusnelda making Theora videos versus h264 on YouTube. It was then neck-and-neck. Thusnelda has improved since, while H.264 is static.
http://people.xiph.org/~greg/video/ytcompare/comparison.html
http://people.xiph.org/~maikmerten/youtube/
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-June/02054…
http://osdir.com/ml/quicktime-users/2009-07/msg00027.html
http://weblogs.mozillazine.org/asa/archives/2009/06/theora_video_vs…
YouTube is known to use a low-quality but fast H.264 encoder, and the encoder YouTube uses is one of many hardware and software encoders available.
Mine.
I wasn’t talking about the post, but the chart. That one: http://img485.imageshack.us/img485/166/compchart2fana8.png
The H.264 bitstream was frozen around 2003, while the Theora bitstream was frozen in mid-2004. The post was made in late March 2004. The feature set for Theora hasn’t really changed, though.
Even if it’s old, the chart is accurate. Don’t believe me? Have fun comparing the chart with the feature set from XiphWiki: http://wiki.xiph.org/Theora#Features
Again, you are comparing an implementation with a standard…
The folks at Xiph cannot add bells and whistles without breaking compatibility with the reference 1.0 decoder. However, they can improve their encoder given the current feature set… just like the folks working with H.264 encoders.
Exactly so.
The encoder is the function which determines the compression efficiency and therefore the required bitrate and filesize for a given video quality.
Up until recently, h264 encoders have been a lot better than Theora encoders.
Just recently, the Theora encoders under development had all but caught up.
I would speculate that by right now (ie today), the developmental Thusnelda encoder for Theora (which is at some point past alpha 2 stage) is just about on parity with any h264 encoder.
Why is this apparently so difficult for you to grasp?
Edited 2009-07-22 04:26 UTC
Well, I make sure that my posts are well-written, but English isn’t my primary language… So, in case my last post wasn’t clear enough, let’s try again:
H.264 implementations can improve, just like H.262/MPEG-2 implementations greatly improved over ten years, and Theora is improving with Thusnelda. It’s just wrong to compare a standard (H.264) with an implementation (Thusnelda), unless you believe that comparing apples with shuttles can make sense!
I never said that Theora implementations cannot get better than the best H.264 implementation, etc. Don’t get too defensive!
By the way, there’s no need to give me a lecture on video compression … I am familiar enough with the basics.
Edited 2009-07-22 05:20 UTC
Agree with all of this.
The only other point I would make in addition is that right now Theora is making startling progress, to the point where it appears to have caught up to h264. This is no doubt due to some financial help donated by Mozilla.
http://www.hydrogenaudio.org/forums/index.php?showtopic=68976
H264 appears to be asleep on the job, resting on its laurels, as it were.
If you are truly up with the news on the topic of video decompression, I would have thought you would have been aware of all this.
This is debatable. Keep in mind that H.264 is quite complex; it might take years before encoders exploit it to its full potential. In addition, the industry is known for moving slowly. Last, but not least: there are many different implementations. While this promotes competition, developers are doomed to reinvent the wheel.
I don’t think they are sitting on their laurels; they just cannot keep up with the pace of open-source development. x264 is GPL’d, yet doesn’t benefit from funded developers. Money is always a great way to focus development!
What is Ogg Theora and Should We Care: Mike Hudack Explains
Mike Hudack, CEO of Blip.tv, talks about Ogg Theora and what it can mean.
http://tinyvid.tv/show/10mtinxqpzmju
For those of you stuck with a Flash plugin, here is a YouTube link.
http://www.youtube.com/watch?v=ifl8qT9UOVk