Ever since it became clear that Google was not going to push WebM as hard as it should have, it was inevitable that the day would come when Mozilla would be forced to abandon its ideals, because the large technology companies don't care about an open, unencumbered web. No decision has been made just yet, but Mozilla is taking its first strides toward adding support for the native H.264 codecs installed on users' mobile systems. See it as a thank you to Mozilla for all they've done for the web.
In a way, this saddens me greatly. It's pretty clear that the Mozilla Foundation's unrelenting loyalty to a truly open and unencumbered web has been a driving force behind the web as we use it today. Without Mozilla's insistence on openness and interoperability through its open source Firefox web browser, the web wouldn't be in nearly as good a state as it is in today.
Even though I'm no longer a Firefox user, I have the utmost respect for Mozilla's work, its ethics, and the products it has been putting out. We're enjoying a better web because of them. Without them, we'd still be stuck with Internet Explorer 6. Without them, Opera would still be for-pay or ad-supported. And without them, WebKit would not exist in the way that it does today, because a lower-quality web would not have forced Apple to fork KHTML and turn it into the vastly superior WebKit.
We owe a great deal to the men and women at Mozilla.
And now, the rest of the technology industry, which owes so much to Mozilla, is going to force them to abandon their ideals. Heaven forbid we suffer a bit of inconvenience today for a better tomorrow. I find this tragically sad. We’re seeing the technology industry ruined by Apple, Microsoft, and others using their unethical software patents to stifle competition, but you know what, let’s inject another patent-encumbered, non-free technology into the web. What could possibly go wrong?
I do, however, understand Mozilla’s changing position. They simply have no choice. They want to remain relevant and be a part of the future web, and as such, they’ll have to support this patent-encumbered non-free technology. They’re debating it for mobile devices only right now, but with both Mac OS X and Windows turning into desktop versions of closed-off smartphone operating systems, they’ll have to capitulate there, too.
Apple supporters like M.G. Siegler and John Gruber love that this is being portrayed as a pragmatism vs. idealism debate, because of the negative connotations associated with each of these two extremes. However, because of their Apple-induced short-sightedness, they’re mixing up which -ism belongs to which camp.
In this debate, H.264 supporters are the idealists. Even though the patent lawsuits are dropping all around them – especially in the mobile industry – they seem to naively believe that injecting a very patent-encumbered technology into the very fabric of the web will somehow magically not result in loads of patent lawsuits and failed business ventures because not everyone has the cash to pay for a patent license. That’s idealism, right there.
People who believe this is a monumentally stupid idea are the pragmatists, because we are grounded enough to realise that yes, this will open the web up to all sorts of nastiness, and we want to prevent that – even if that means we have to live with a slightly inferior codec for a while. We all know what happens when we become too dependent on a single, closed technology (Internet Explorer 6, Flash, Windows).
We believe that the web should be free and open, no exceptions, no ifs, no buts, from top to bottom, from root name server to client. Everybody must be able to create a device or application that accesses the entirety of the web, whether you’ve got 100 billion (in foreign bank accounts to dodge taxes), or you’re a 17 year old programmer dreaming up the next big thing. Not everyone lives in Silicon Valley where bored Vulture Capitalists have millions to throw around, you know.
The web needs to be 100% open and free to flourish. The web is more than cat pictures and porn – it plays a vital role in society, and is already changing the very destinies of entire countries. Messing with this concept, opening it up to lawsuits, is stupid. There is no other way to put this.
Hopefully they make at least one public attempt to get Google to commit to a specific date for dropping H.264 support in Chrome. If they don't hear an answer or don't like the answer, they should just push ahead with this.
I agree with you, but I hope that horrendous x264 gets killed off soon in favour of a less CPU/GPU-intensive standard. Damn, even a dual-core Atom cannot play 480p MP4s in that format, and neither can the PowerBooks at 1.67 GHz. Bad sign of things to come… BTW, a 1080p XviD runs fine on both machines, and file size is no longer an issue as disks are dirt cheap, so why insist on smaller file sizes when space and internet speed are less of a problem?
Edit: also, x264 is not and probably will not be supported in the near future (or ever, due to licensing fees) on home theater systems, and even if it were, so far none of them have the CPU power to play it well.
One of the selling points of H.264 (sorry for being a Poindexter, but x264 is the free H.264 encoder/tool done by the VLC lads) compared to VP8 is that basically everything has hardware acceleration for H.264. If you don't get it, it could be that you're using extravagant settings that aren't hardware-supported. E.g. the iPhone only deals with baseline H.264.
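For anyone who wants to check whether a clip actually stays within what their device's hardware decoder can handle, here is a minimal sketch (assuming FFmpeg's ffprobe is installed; the file name is just an example) that reads the profile and level out of the first video stream:

import json, subprocess

def h264_profile(path):
    # Ask ffprobe for the codec, profile and level of the first video stream.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,level",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    stream = json.loads(out)["streams"][0]
    return stream.get("profile"), stream.get("level")

# e.g. ("High", 41) would be out of reach for a decoder limited to Baseline.
print(h264_profile("clip.mp4"))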
Not true, it has been able to do high profile for quite some time now. While the Apple docs may say only main profile is supported, the iPhone and the iPad will play high profile video.
Android phones will play high profile as well. This “devices only do baseline” thing is outdated.
@judgen: I don't know what you're doing wrong, but the single-core Atom N270, thanks to hyperthreading, can play 720p high profile almost fluidly; there are only a few hiccups at bitrate spikes. So 480p shouldn't be a problem whatsoever.
I also have no idea what you mean by "will not be supported on home theater systems". What is such a system? If it's a standalone player that can play Blu-ray, you've got h264 support right there, because it's required for Blu-ray. If it's an HTPC you've put together yourself, either the integrated Intel graphics has a hardware h264 decoder, or you add a low-end Nvidia card which also has a hardware h264 decoder.
Not entirely; my 10-inch tablet, for example, can do high profile up to 720p, but at 1080p it can only handle baseline. NVIDIA really screwed up there with Tegra 2.
But yes, almost all devices these days can handle high-profile 720p, and most of them can even do 1080p high-profile.
It depends on the GPU a lot more than the CPU. H.264 really needs hardware decode functions.
My lowly HTPC (AMD Athlon64 1.something GHz, 1.5 GB RAM, Radeon AIW 9800) can play SD x264 videos without issues. Haven’t tried 720p x264 videos as we don’t have an HDTV of any kind.
You’d think that Intel would have some kind of hardware decode support for H.264 in their IGPs by now.
And why would that be? I have a meager Pentium T4500 in my run-of-the-mill sub-$400 notebook. MPlayer2 and MPC-HC can decode H.264-encoded 1080p @ Hi10p without a single frame dropped.
Also, if you have videos which aren’t supported by DXVA or whatever hardware decoding API because they don’t adhere to the respective limits you’re SOL.
Ahem, they have! Again, my lowly notebook has a cheap-ass GMA 4500MHD. And MPC-HC decodes H.264 via DXVA just fine.
A Pentium T4500 is not “meagre” by any stretch of the imagination. Especially when compared to an Atom CPU or a (truly meagre/ancient) Athlon64 CPU.
Again, the HD4000-series of integrated GPUs are not “cheap-ass”. Well, okay, they do suck compared to Radeon HD4000+ or nVidia 200+, but they are much better than what’s integrated into an Atom or a Radeon 9800.
Yes, it is meager. Relative to the kind of CPU power that’s available today. As I said, I can play back 1080p @ Hi10p, but the cores are maxed out.
And I also have something with an Atom N570 lying around, which plays 720p just fine.
And where did I write that they were? I wrote that MY integrated GPU — being the age-old, but ubiquitously used 4500MHD — is cheap-ass. Also, to my knowledge Intel’s new integrated graphics solutions work for HD decoding, both on Windows via DXVA and on Linux via VA-API.
Somehow I don’t get your problem.
You do know, of course, that every damned H.264 encoder has many, many options that allow one to produce videos which can or can not be computing-intensive.
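To make that concrete, a hedged sketch (assuming the x264 command-line tool is installed; file names are illustrative) of encoding the same source as an easy-to-decode Baseline stream and as a much heavier High-profile stream:

import subprocess

SRC = "source.y4m"  # hypothetical raw Y4M input

# Cheap to decode: Baseline profile (no CABAC, no B-frames), fast preset.
subprocess.run(["x264", "--profile", "baseline", "--preset", "fast",
                "--crf", "23", "-o", "easy.264", SRC], check=True)

# Expensive to decode: High profile, slow preset, many reference frames.
subprocess.run(["x264", "--profile", "high", "--preset", "veryslow",
                "--crf", "23", "--ref", "8", "-o", "heavy.264", SRC], check=True)

Both outputs are "H.264", yet they place very different demands on the decoder.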
That’s pure bull, and you know it.
Are you frakkin’ kidding me!? The eye-cancer-inducing XviD at 1080p? You have to be a masochist to watch XviD-encoded 1080p.
And your logic has gone bye-bye by this point. It’s totally in reverse, actually. Because disk space is cheap we should accept a vastly inferior codec? What the hell are you smoking? As novel as the concept may be to you: smaller file size actually often enough means better quality. Why do you think x264 has introduced 10-bit encoding? Because it improves the quality. And said quality improvement — meaning better compressibility — brings down the file size.
What the HELL are you talking about? First, x264 is an ENCODER, not a CODEC. Second, H.264 IS SUPPORTED. Every damned multimedia playback device — like the Western Digital TV Live — can play back H.264. Open Source be thanked.
Either you’re doing something very wrong, or you’re making things up.
A 480p H264 file on a decade-old Athlon XP 1.46 GHz that I keep around hovers close to 50% CPU usage in software decoding via SMPlayer, with a 2-3 year old build of MPlayer underneath – so while quite efficient, not exactly the fastest decoder around.
When checking out, half a decade ago, the presumably then-fastest software decoder (CoreAVC), this machine could actually borderline play 720p (some files not really, some it managed but with 90+% CPU usage – the differences probably came from the various H264 profiles).
Then there's my buddy's single-core Atom netbook, which can play 480p H264 similarly fine.
Space and bandwidth matter more with the push for mobile (plus your view about internet speeds might very well be a very Swedish perspective).
What? Virtually every current home theater supports H264: every Blu-ray player does – and those are virtually all the current home theater systems that matter (and it's not like DIY HTPCs have any issues with H264 either). And "DVDs formatted like, and using the codecs of, Blu-ray" (aka AVCHD or AVCREC) give very nice results on those.
Google should give up H264. Moreover, if hardware is VP8-friendly we will also see more open drivers.
There's no such thing as a patent-proof video or audio codec.
Don’t believe for a second that if OGG were magically adopted everywhere that suddenly it wouldn’t have patent claims against it, because it would.
With Apple and Google behind H.264, it's not going to be the little guys suffering. With OGG, we'll trade all of our current universality for a subpar codec that will eventually be patent-tested anyway.
Yes it will be.
I can't make a browser without paying a license or using the OS-supplied codecs.
I can’t make an OS.
Did you know that the world’s largest server OS today started out as a hobby project? Without funding. Couldn’t happen if all browsers need to get licensed codecs from the OS.
I'm a small indie developer without any cash in the bank. I can't continue my work if I'm required to start producing shitloads of money just to get the right to give you my free, open source, made-in-my-free-time software.
</rant>
If Apple and MS distribute the codecs with their respective OSes, then they have paid the licensing fees, and therefore the little guy is protected, just like with DVD codecs.
This has the nasty side effect of leaving alternative OSes out in the cold, but that is less than 5%, so even in that case, most of the little guys are protected.
Fun fact, did you know that both Mac OS and Windows once had a 5% share on desktop computers?
By limiting the ability to create a desktop OS to those who can cough up a bucket of money we are limiting future innovations.
Apple started in a garage. Just sayin’.
Well, it’s not like you could create an OS from scratch that could seriously compete in today’s market without a shitload of funding anyway. Joe Coder and a group of his friends are not going to build something that rivals Windows or OSX in their spare time.
30 years ago it might’ve been possible, but the tech has gotten immensely more complicated since then. Even Linux wouldn’t be where it is today without the financial backing of large companies.
Look, I’m not saying that patents are a good thing, or that someone shouldn’t be able to create something and put it out in the wild with an investment of $0, but most things cost money to get off the ground. Seriously… if you have a good idea, you can probably get somebody with a lot of cash to invest in it. And anyway, this is a patent on a specific codec – not exactly the same thing as patenting ‘swipe to unlock’.
It wasn’t long ago humanity believed we couldn’t learn more about physics. If there’s something we know for sure it’s that we never know for sure what we may come up with in the future. Any limitations put upon our own innovative capacity should always be carefully considered. For our own good.
When you seek funding from VCs and business angels you have to look at potential costs. This will only make it harder to get that funding.
Just because most people are poor doesn’t mean more people should be. It’s not logical to make things worse because they are bad.
It always takes more than just a good idea. Putting more limitations on potential upstarts will be costly. Sure, it may be a minor inconvenience on the whole, but it still isn't worth the costs IMO. It's simply too little benefit for too high a price.
Just because it’s a codec doesn’t mean it should be patentable. I have yet to see any clear and confirmed evidence that software patents have increased software innovations, or that lack of software patents hinder software innovations. It is not logical to impose mechanisms which have clear costs but unproven benefits.
Hey, I’m not saying building a competitive OS from scratch can’t be done, just that you’re gonna need an army of developers to pull it off, and probably working full-time.
Why does it have to be built from scratch?
None of the current mainstream OSes were built from scratch…
Derivatives suffer the same plight – if I create a new Linux distro, am I licensed to distribute a working h.264 implementation?
I'm sure it works the same way it does for other codecs like MP3.
So no, I’m not licensed, and I just release it hoping that nobody will come after me even though the software itself is FOSS and every attempt has been made to work around the patents, thus potentially producing a sub-standard implementation that runs like shit on even the fastest processors.
Thanks for clearing that up…
Well, now you’re sort of off-topic. The original suggestion about H264 in open source browsers was to use the codec that’s built into the OS, and then somebody came along and said this would prevent somebody from innovating with a new OS. To which I said you’re not going to build a new OS that a great number of people care about without a serious influx of cash, so you could afford to license the codec anyway.
But what you’re talking about is YALD (Yet Another Linux Distro), not a new OS. I assume that Linux users have access to this codec already (if not built in, then readily available), so whatever method they’re currently using to get it is obviously working.
Not really, it’s all related when it comes to FOSS…
Right, I didn't mention Haiku, ReactOS, AROS, etc., because I thought you would just mock them – but these are OSes that a relatively large number of people care about, which do not have a "serious influx of cash", and which also suffer from this same problem. Haiku is currently distributing ffmpeg as its implementation for codecs, but ffmpeg doesn't provide a patent license (nor does VideoLAN for that matter…).
I think you’re under the impression that all of the solutions that are “obviously working” are legal – and you’d be wrong. Unless you mean “obviously working” == “ignoring the problem and hoping it blows over”… which is basically what most FOSS users are doing these days. The only licensed Linux (patented) codecs I know of are distributed by Fluendo, which cost money of course, and generally cause problems.
So again, you seem to assume that somehow all of this is magically taken care of, and that only some huge organization with unlimited funds and resources is ever going to have to deal with it…
Even if Mozilla were somehow miraculously able to obtain a license for all their Firefox users, it would likely never extend to unofficial Firefox-based browsers such as the ones ported to AmigaOS and Haiku (well… when it's finally updated again). The same goes for WebKit and Chromium – Apple and Google cannot relicense these patents to other developers who re-use the codebase.
I guess that’s the price that we pay for building our own software, either infringe the patents, or do without.
And these operating systems are also never going to be taken seriously by anybody but hobbyists. That’s why I stressed an OS that was competitive, not one designed by a bunch of nerds, for a bunch of nerds.
And I assume that actually paying a license to use these codecs is out of the question? Is that against people’s religion to pay money for stuff they find useful enough to bitch endlessly about if they don’t have it? Like, ‘Hey… here’s a free OS, but you might have to cough up a few bucks to play these media files …’ At least that might be an option if ‘they’ ever come after you, which they probably won’t if they know you don’t have any money to begin with.
Tbh, ALL OSes start without being competitive and in the beginning they’re all designed by a bunch of nerds for a bunch of nerds. There is no magic wand to wave around so that you can just suddenly come up with a whole OS that even happens to come with a massive userbase, too.
There are plenty of cases where it indeed is out of the question. Linux, for example, is often used in all kinds of gratis charity work, like e.g. computing clubs. Often the one or ones running the show are paying for it all out of their own pockets, and having to pay extra just for the sake of being able to play video is just too much. Linux is also used in a lot of really poor environments and in products like OLPC; leaving the burden of obtaining a license to the end user is really out of the question there.
Sounds like somebody just wants the whole enchilada for free. TBH, sometimes shit costs money, and if you can’t afford it, then you’re out of luck. And until we reach the socialist utopia where everything is paid for using somebody else’s money, that’s just the way the world works.
Of course… precisely what I would have expected from you.
It’s not even worth responding to your other arguments, because your density makes it difficult to communicate with you.
We’ll see what happens in 2015 when the MPEG-LA re-evaluates their distribution licensing options for internet service providers who use video technologies.
WorknMan,
“And I assume that actually paying a license to use these codecs is out of the question? Is that against people’s religion to pay money for stuff they find useful enough to bitch endlessly about if they don’t have it? Like, ‘Hey… here’s a free OS, but you might have to cough up a few bucks to play these media files …’ At least that might be an option if ‘they’ ever come after you, which they probably won’t if they know you don’t have any money to begin with.”
Don’t confuse the desire to avoid paying patent royalties with a desire not to pay the software developers. In general many people who feel 100% justified in not paying patent royalties over 3rd party software implementations still feel it’s wrong to abuse copyrights.
The reality is that any commercially viable entity has to pay patent fees in order to avoid being sued into oblivion. However, if it were left up to the end users to pay patent fees for commercial and free software, I am fairly confident that large percentages from BOTH sets of users would refuse to pay due to moral objections. Paying the patent trolls is the same as paying the mafia: it gets them off your back, but it only empowers them to do more harm and continue their threats.
Nope, you’re buggered.
Actually, it’s not even sure that an army of developers would help.
Most of the tasks which have to be done in the early life of an OS cannot be worked on in parallel. AFAIK, it's only when hardware-specific drivers and user-mode software kick in that you start to need the help of as many developers as possible. Before that, the overhead of team management may not be worth it.
I do agree that working full-time is a plus though. But on something as ambitious and risky as an OS project, it’s probably not going to happen. At least until you get something mature enough to be licensed to a company which has a very specific purpose in mind (ATMs, public terminals…)
Here is another fun fact:
The time you are referring to is long gone. But even back then, you had the same crap. It's the laws that have changed, but even then you could see the writing on the wall.
Apple sued MS in a “Look and Feel” lawsuit back in the 80s. Apple lost. MS, and everyone else won. IBM sued Compaq for reverse engineering the PC BIOS, IBM lost, everyone won.
Those victories would not happen today. The Apple that started in a garage today will sue your ass off for looking at them the wrong way. MS is the same way. IBM doesn’t even make PCs anymore. Compaq is gone.
The companies that started it all have tried very hard to close the door behind them, and they did a very good job. Is it sad and horrible? Yes. Does that feeling change anything? No.
Some say it’s the Post-PC world, it’s really the post freedom world, and your wallet better get used to it.
Au contraire, “the little guy” who wants to write an alternative OS and to include codecs for VP8 and Ogg has licenses to do so from the list of companies here:
http://www.webm-ccl.org/members/
and also this list excluding Sony and Philips:
http://www.openinventionnetwork.com/licensees.php
Between them, that is a lot of patents licensed (for no cost) to anyone who wants to write and build their own OS. Easily enough “inventions” therein to do so.
"Between them, that is a lot of patents licensed (for no cost) to anyone who wants to write and build their own OS."
True, but the codecs are not the problem. The problem is people.
You can have your kick-ass OS filled to the brim with worthwhile and freely licensed codecs. That won't help one iota if your potential userbase says: nice OS, but until it supports H.264 we have no interest.
But let's leave it there. No reason to rehash the VP8 vs H.264 "war".
Must… resist… urge to abuse admin powers and click +1 buttons several times… UGH !
That’s some serious burst of comment inspiration you have today, sir
BS. Little guys don’t [credibly] make browsers or OSes nowadays. They may want to — but they’re not successful without serious funding.
Of course not, buying all the licenses and patents is too expensive.
That is just not true, just look at all the people creating Linux distros and ReactOS and so on.
Do you really think they will pay for an H.264 license so they can distribute those codecs with their software?
Furthermore, most content out there is h.264 already. Digital TV broadcasts in my country are H.264. Digital downloads are H.264.
This is true. Also, you have to look at the hardware side of things. Yes, software plays a vital role, but it can only work if the hardware is there. Unfortunately for us (and fortunately for the MPEG-LA cartel), many companies incorporated h.264 hardware accelerators into their set-top boxes, Blu-ray players, etc. They also made their machines compatible with h.264, but excluded the other codecs, especially the open source ones.
Right there it's as if we're fighting with one arm only. We're severely handicapped already. Add to that the fact that most companies want the h.264 codec to prevail above all others (for their own selfish reasons), and it's a lost war. It's unfortunate that things had to turn out this way. I was looking forward to at least WebM videos, but it seems companies are too entrenched in this industry to change for the better.
The war is not lost. But if Mozilla gives up (following Google), the war will last much longer.
Either way, “the war” will last right about the amount of time it takes for H264 patents to lapse, if I’d have to guess…
(so, only around or a little over a decade left)
WebM-accelerated video hardware is included in almost all new mobile hardware these days. Even iPad 2s use Imagination Technologies' PowerVR graphics.
http://www.imgtec.com/corporate/newsdetail.asp?NewsID=597
“Imagination Technologies, a leading multimedia and communications technologies company, announces the latest additions to its multi-standard, multi-stream video IP core families, the POWERVR VXD392 decoder and POWERVR VXE382 encoder, including support for H.264 MVC, WebM (VP8; decode), S3D (Stereoscopic 3D) and resolutions up to UltraHD.”
http://www.webmproject.org/about/supporters/
http://blog.webmproject.org/2011/11/time-of-dragonflies.html
"In total, over 50 semiconductor companies have licensed the VP8 technology today. The first devices with 1080p VP8 decoding are today in the consumer market from nearly a dozen different brands (see example here), and the first chips capable of VP8 encoding will ship in 2012."
There is actually a newer version of hardware decoding beyond the dragonfly version:
http://blog.webmproject.org/2012/02/vp8-hw-decoder-version-5-eagle-…
and the encoder:
http://blog.webmproject.org/2012/02/fifth-generation-vp8-hardware-e…
This fifth version won’t be appearing in actual hardware just yet, however, but the earlier versions are certainly shipping.
iPad 2s may use PowerVR chipsets, but not the chipsets that you cite. They use SGX series parts, where you cite VXD and VXE series. So that’s not really relevant, unless proximity to VP8 is what you were going for.
The situation has been like that for a long time already. So if you are saying that Mozilla's change in its position of principle is caused either by desperation about slow WebM adoption or by the need to get hardware performance on mobile devices without hardware WebM decoding, then neither argument convincingly distinguishes this from the situation on the desktop. WebM was nowhere when it started, yet Mozilla was still strongly against promoting H.264. The most decisive factor here really is Google's desertion. Since Google was supposed to be a good ally, Mozilla was not alone. Now they stand alone against the dark empire of "non-free". Will they be able to pull through?
Actually, if you join the HTML5 trial at YouTube
http://www.youtube.com/html5
then virtually all of the YouTube video content will be delivered to your browser as HTML5/WebM.
If you then go to an independent page with embedded videos:
http://www.reddit.com/r/videos/
you can then see most of the videos linked on such a page without having a Flash player or a h.264 decoder installed on your system.
Not all of them, sure, but most of them.
BTW, the fact that digital TV broadcast is encoded with h.264 has very little to do with web content and web browsers.
I’ve used HTML5 video for some time. On all my systems I removed Flash about 1 year ago.
Here’s my point related to TV:
A normal standard-definition digital TV station transmits at 1.5-3 Mbit/s H.264 with MP3, AAC or AC3 sound. I can't find any reason to re-encode the stream for live TV or for archival purposes (and later playback as VOD on the website). Re-encoding such a stream imposes a computational tax which is ridiculous, since ALL our mainstream computers (just Mac OS and Windows as mainstream) come equipped with licensed decoders for those codecs. Furthermore, when you have to save a live stream for later playback, you need to save it in H.264, WebM, OGG, Dirac at different bitrates and resolutions, and this literally kills any storage subsystem.
I know a small TV station (200k viewers on average) that serves recordings of its past 2 months of programming plus live TV, and the storage requirements are huge (8 TB for SD H.264 and 4 TB for low-res H.264). The station has at any given time of the day about 1000 viewers, so it needs 3 copies of the data. Imagine investing in real-time conversion of terabytes of video into whatever codecs you want, as well as needing 3 times the storage (H.264, WebM, whatever other zealot-requested codec such as Dirac). It doesn't make economic sense unless you're Google. Everyone needs to get their heads out of their a**es and realize that H.264 really is an industry standard for video, and the Web (or 30% of it) does not get to dictate what a whole industry should use.
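A rough back-of-envelope sketch of the storage math being described here (the figures are the ones quoted above; the three-codec scenario and the single-stream bitrate check are purely illustrative):

# Two months of archived programming, per the numbers above.
sd_h264_tb = 8.0       # SD H.264 archive, in TB
lowres_h264_tb = 4.0   # low-res H.264 archive, in TB
per_codec_tb = sd_h264_tb + lowres_h264_tb

# Hypothetical "keep every title in every codec" scenario.
codecs = ["H.264", "WebM/VP8", "Dirac"]
total_tb = per_codec_tb * len(codecs)
print(f"{per_codec_tb:.0f} TB per codec -> {total_tb:.0f} TB across {len(codecs)} codecs")

# Sanity check on the bitrate side: one continuous 1.5-3 Mbit/s SD stream
# archived for two months (61 days).
for mbit in (1.5, 3.0):
    tb = mbit * 1e6 / 8 * 86400 * 61 / 1e12
    print(f"{mbit} Mbit/s for 61 days is roughly {tb:.1f} TB per stream")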
Now apply this to websites like Netflix. Since NetFlix/Hulu are not available in other countries, small, licensed, local alternatives have appeared. A decent TV show library weighs in at about 50-60 TB. You can't expect to have a Netflix/Hulu-like website in a country of 20 million (so at most a hundred thousand clients per website) that can also afford triple the storage requirements and huge re-encoding costs. Right now, all the TV show libraries you can legally buy for redistribution give you the media in MPEG2- or MPEG4-compliant streams. Deal with it and stop complaining, as they are the industry standard. While Netflix can afford to re-encode everything in VC-1 (a huge computational task) and serve it via Silverlight, small shops can't. Furthermore, using Flash or Silverlight with VC-1 instead of just plain HTML5 video with H.264 doesn't seem like a big win to me. And for the love of God, stop giving Google as an example. If Google can afford to do something, it doesn't mean that everyone on the web can afford it.
H.264 is not GIF all over again. GIF failed not because of patents, but because it was inferior to PNG and JPEG. Patents are not a problem for MPEG4 users, since you already paid for the decoder at least 2-3 times over per system. You have an OS license for the decoder (in Windows 7 and Mac OS X at least), you have a license that comes with the hardware decoder included in most video cards (anything recent from AMD/NVidia/Intel), and you have a license included with some of the software you installed (say a Roxio suite, or other software that you might have bought).
I have no problem with the web using WebM, just don't remove the possibility of using H.264, as you might be hurting the companies in smaller countries. Not all TV stations are the size of BBC and Fox. Not all movie/TV sites are the size of Hulu/Netflix with access to millions of US dollars in financing. If Mozilla doesn't give us H.264 on the web, we'll keep using Flash (which, like all embeddable stuff, is a hack) or, like in my case, use Safari, even if I favor Mozilla more.
Does your small company already have a license for streaming out h264 videos? If not, it might be better (cheaper) for it to buy some hardware for re-encoding these videos into WebM.
Also, basing your business on streaming out content downloaded from others is asking for problems. That's a different story, though.
Never said it was my company, I offered consultancy for their infrastructure. They do have a license for the content but I don’t have a clue about the license for the codec. I assume that they do have a license (as needed) since they are going the legal route.
What they are doing is providing non US citizens the services that the US citizens have had available for a long time, so it’s not looking for trouble, it’s normality.
Just a little food for thought here: what if PNG received the effort and attention required to become the awesome and universal image codec that it is today because of the Unisys incident?
I mean, look at lossy image compression. JPEG is old and dirty technology, and everyone knows it. Lots of attempts have been made to introduce a replacement that is in line with modern image compression technology, such as JPEG 2000, WebP and JPEG XR, and every single one failed. Why? Because it was perceived that the cost to adopt it was not worth the benefits.
Similarly, look at animated pictures. The reason why web browsers still support GIF at all is because Unisys have calmed down and because it is still necessary to use that inferior format in order to display cute animated pictures on forums. Attempts have been made to propose animated versions of PNG, such as MNG and APNG, but they have ended up in a silly struggle between corporate interests backing MNG and pragmatic logic backing APNG. If there was enough push towards a new animated picture format, the issue would have been resolved, but there simply isn’t anymore.
Innovation is hard and painful; strong constraints are needed to make it happen. Patent trolling can be one such constraint, and I believe that for GIF it was.
That’s where the problem stands actually. You may not have noticed, but these 3 licenses end up coming from similar groups of interest. Hardware video decoders have proprietary designs, and manufacturers will only spend some care in their binary drivers if someone sends them buckets of money every month, as is the case for Windows and Mac OS. Similarly, due to the exorbitant cost of codec licensing, video playback software that is legal in the US tends to be developed for operating systems with the biggest market share (Windows) or which already have a licence for the codec (Mac OS, iOS, Android).
Actually I wouldn't say GIF is inferior to JPEG. That is like comparing oranges to apples, not apples to apples.
GIF and PNG are the apples.
And I’m sure all these organisations you mentioned all want/need some form of streaming with DRM.
WebM doesn’t have that.
All people are asking for is to put all the non-DRM content in WebM.
It turns out the power-requirement is almost a non-issue as I’ve read in the Mozilla discussion.
Radio (especially when downloading your video) and the screen take up much, much more power than the CPU doing decoding.
Especially for SD normal people can’t see a difference between WebM and H.264.
The real problem is that on the desktop, at least, Flash could have been a fallback option for WebM if Flash had supported WebM as they said it would.
On mobile it could easily have been part of the OS, at least on Android.
So that would have left iOS as the only popular smartphone OS without support. How long would they have waited to support it, if that was true…?
DRM is usually a function of the container, as opposed to the codec. H.264 doesn’t have DRM either, MP4 or M2TS do support encryption of the streams though.
Yeah, sorry about that mishap.
But I don't think there is a DRM-supporting container available for WebM either?
At least I don't think containers are interchangeable all that easily, are they?
VP8 can be used in Matroska and AVI containers as well as WebM, though WebM is just an extended form of Matroska. Matroska does not support DRM, AVI does not either but I’ve still seen several attempts from various companies at using an encrypted AVI-format.
Usually DRM is a function of the protocol, though, not the container. Container-specific DRM applies only when the content is distributed via non-restricted streaming protocols or as actual files. There is nothing stopping someone from extending Matroska to support DRM, though, so that again is a non-issue.
To be honest, most of YouTube’s content also comes with an H.264 copy for those crappy mobile devices which cannot play anything but that.
So in effect, it is likely that the set of content which is available in H.264 (though not exclusively) supersedes the set of content which is available in WebM.
And this is sad.
This argument is used quite a bit. But I find it completely bogus, and here's why: it implies that the web is a world of its own. But it's not. It's only a part of a bigger world that already has established means of video delivery. And those means deliver h264 video. Do you really expect companies to have a separate encoding chain just for the web? As nice as that would be idealistically, in reality the financial equation just doesn't add up. Unless you're Google. But any other company is, logically, not Google.
Then add hardware decoders into the mix. You can cite a few examples of chips that have VP8 decoding, but what percentage of devices out there right now have those chips? If you're delivering video to mobile devices, it makes little sense to deliver it in a format that only a very small percentage can decode without quickly draining the device's battery.
Google owns On2 Technologies, whose codecs predate H.264.
There are more browsers out there that support OGG or WebM than those that support H.264. Where are the lawsuits?
This isn’t about silly web browsers and patent lawsuits. This is not about one company suing another company over interests, you have to understand that this is about all companies in equal agreement: This is about lucrative licenced content streams. This is about value-adding fees on top of the content itself. It’s about an end-to-end chain that cannot be escaped or avoided where everybody involved profits.
There won’t be any H264 / WebM lawsuit battles, because that would expose the whole racket that’s working well so far (TV / cable streams) and is about to blossom into a market 100x as big.
"This is about value-adding fees on top of the content itself."
In another time and place, this would be called “taxing” and “fleecing”. But adding additional, unnecessary costs to an item these days is called “value add”.
I’m starting to get on the “old” side of the population…
Actually, Google now owns Motorola Mobility and, because of that, part of the H.264 patents.
They should start to demand more and more license fees from everyone.
If they make H.264 more and more expensive while, in the meantime, more and more hardware and software supports WebM, then maybe people will be convinced? 😉
Probably not though, as WebM is also controlled by Google. They wouldn’t trust them anymore.
Is the abstract concept of compressed streaming data in itself covered by a patent?
I've grown so tired of this debate… The ENTIRE point of Google buying On2 and releasing WebM in the manner they did was to create (by fiat) a patent-proof video and audio codec.
The plan is/was:
1. Google releases webm in the most nonrestrictive manner possible. Does it infringe on existing patents? Hopefully not, but it really does not matter in the long run… see 2.
2. Convince a large enough group of other technology companies to support it openly. Those companies that feel the yoke of the current licensing regime being the most likely to jump on board.
We are still stalled at 2, but it isn’t over yet…
If you get enough technology companies to jump on board patents DO NOT MATTER. Why? Because those companies all individually hold patents too, and if you get a big enough pool then anyone attacking it in court is risking their own livelihood to do so… It is simply playing the MPEG-LA game in reverse. Is it dirty and underhanded? Sure it is – but that is simply how the game is played…
The only way to create a patent proof video codec is to scare the current licensing regime into leaving you alone… Maybe webm is actually non-infringing – but that doesn’t make anyone feel safe and it is no guarantee. But a large, powerful group of companies backing it? Support for it would snowball quite rapidly, because that would act as a litigation deterrent (plus it is free).
So now I hear “But it might infringe on patents! It’s therefore not patent proof”…
I’m sorry, but patent proof doesn’t mean what you think it means. What it actually means is that enough companies (with their own patent portfolios) support it so that it is impossible to attack in court with anything short of a darkhorse patent (a patent held by a small player with little or nothing to lose that no one already knows about).
The question then is actually “Are there any darkhorse patents?” Maybe, but highly unlikely.
The ones held in the MPEG-LA patent pool? Everyone knows about those. They have known about them since day one. They simply do not matter (assuming the tipping point is reached in support) because they either:
1. Don’t matter because webm doesn’t infringe.
2. Will just get settled anyway because no one will be willing to go nuclear at that point.
Precisely.
Here is the WebM patent consortium, BTW:
http://www.webm-ccl.org/
http://www.webm-ccl.org/faq/
http://arstechnica.com/web/news/2011/04/google-builds-webm-patent-p…
Here is the member list:
http://www.webm-ccl.org/members/
When Google completes its purchase of Motorola Mobility, then that name also will be added to that list.
Motorola Mobility happens to own quite a number of the patents surrounding h.264, I believe.
There is also the very large group of companies in the OIN, whose patent pool covers the "Linux System". Recently the definition of what is meant, exactly, by the term "Linux System" has been greatly expanded.
http://www.openinventionnetwork.com/pat_linuxdef.php
It turns out that, except for Sony and Philips, this expanded definition does include codecs.
So the OIN patent cross-licensing pool now includes patents about codecs from all of the following companies except Sony and Philips:
http://www.openinventionnetwork.com/licensees.php
Google did the Webm thing in order to be included in some way in the H264 cartel. That is, they now have to either:
– lose to Google, eventually
– include Google in their talks and give them advantages, licenses, etc.
If Google does not remove H264, and stop pushing Webm, guess which it is. Yep, guess.
I wouldn’t worry about OGG:
https://air.mozilla.org/open-video-codec-discussion-at-mozilla/
WebM is a different story.
Really, to be competitive they need a browser capable of delivering performance and low power consumption. Right now the picture is clear: most hardware has support for h.264, from which they can get both. Also, on the content side, there are orders of magnitude more things using h.264 than the alternatives.
So what could they do? Lose the opportunity to gain market share and be ousted from this important market, or try to be competitive right now? I think they should try to be competitive.
Google really has a habit of ignoring some really important issues.
I don’t think they ignore it. They just made a bad choice, and didn’t keep their word.
When one buys a piece of hardware, one receives an implied license to use any IP embedded in that hardware.
http://en.wikipedia.org/wiki/Implied_license
"Implied licenses often arise where the licensee has purchased a physical embodiment of some intellectual property belonging to the licensor, or has paid for its creation, but has not obtained permission to use the intellectual property."
This means that Linux users have an implied license to use any h.264 decoder that is embedded in the graphics hardware they purchased along with their system.
If Mozilla do decide to have Firefox call any system codec in order to render HTML5/h264 video, then Linux users will be OK because the Linux kernel can pass on such a call directly to the graphics hardware which the Linux user has an implied license to use.
For owners of Intel GPUs, this is already in place using Intel’s open source driver for Linux. For owners of AMD/ATI GPUs, the code for accessing the UVD hardware is apparently in code review:
http://www.phoronix.com/scan.php?page=news_item&px=MTA2ODk
Hopefully this will make it to the open source Radeon driver in the not-too-distant future.
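For the curious, a small sketch (assuming a Linux machine with the vainfo utility from libva installed) of checking whether the open source driver actually exposes hardware H.264 decoding through VA-API:

import subprocess

out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout

# VA-API reports decode support as H264 profiles with a VLD entrypoint.
h264_decode = [line.strip() for line in out.splitlines()
               if "H264" in line and "VLD" in line]

if h264_decode:
    print("Hardware H.264 decode profiles reported by the driver:")
    for line in h264_decode:
        print(" ", line)
else:
    print("No H.264 decode support reported by VA-API")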
Mozilla cannot rely only on system codecs; not all systems have the codec installed.
Windows XP does not for example, which is a very large userbase.
And it is good if Firefox works the same everywhere.
While H.264 is a better codec the difference isn’t something I would care about unless I was trying to watch HD movies. I would hope that IF Mozilla does incorporate H.264 they do it via codecs on your computer (not browser native) so they just skirt the issue instead of hard coding it. Actually I was somewhat irked this wasn’t already the case.
H.264 is not the better codec. WebM quality-per-bit surpassed it a version or two ago. The latest version of the WebM software encoder is version “D” for “Duclair”, released on Jan 27 this year.
http://blog.webmproject.org/2012/01/vp8-codec-sdk-duclair-released….
This version is better than version “C”, which in turn was better than version “B”, and so on. Each version was an improvement over the previous version in quality-per-bit by about 6% each time. Version B almost matched h.264 quality-per-bit, and version C surpassed it.
The “Duclair” release has been numbered by Google as the 1.0.0 release.
Even so, I assume that a new stream encoded with all the bells and whistles (of better quality than H.264) does not play if you have version 1.0 of the VP8 codec.
This is a problem with it, as it's a moving target. No hardware manufacturer will include WebM support in the silicon if it will stop working 12 months after the release of the chip. H.264 is successful because it's great for digital TV, digital distribution of movies/TV shows, Blu-ray, VVoIP (FaceTime is a great example), etc. Every idiot on the web talks about this as if the rest of the industry doesn't matter, as if the rest of the (usually quite expensive) hardware and media recordings could be upgraded to WebM overnight and at no cost. If Google can afford to use twice the storage for YouTube to also include WebM, good for them. I doubt that many others can do the same within financial reason.
You assume incorrectly.
http://blog.webmproject.org/2012/01/vp8-codec-sdk-duclair-released….
“Note that the VP8 format definition has not changed, only the SDK. Duclair is ABI incompatible with prior releases of libvpx, so the major version number has been increased to 1, and you must recompile your applications against the v1.0.0 libvpx headers. The API remains compatible, so code changes shouldn’t be required in most applications.”
The data format of WebM has not changed at all since the first release. Videos encoded with the earliest release of WebM will still play correctly with the latest release of the decoder (they just won’t have the same quality-per-bit). The latest version of the encoder produces higher quality-per-bit video with the same data format, so these videos will still play with the earliest release of the decoder (although the earlier decoder will take more CPU than the current version would).
There is no “moving target” here at all, simply increasing quality and performance with the same data format.
Incidentally, Duclair is version 1 of the codec. Version 1.0.0 to be more precise. Earlier versions had version numbers lower than 1.
For information, the preceding release was codenamed Cayuga, and it was version v0.9.7.
I will take your word for it. The link looks promising (I will research this a bit); I hope it continues to improve, and if it indeed is becoming the superior codec I could see it possibly becoming dominant (at least I can hope). Thanks for giving me incentive to look into this further. My main point, however, was that if Mozilla has to implement it, why not use what's already on people's computers or allow us to hack it in ourselves? For instance, I have S3TC enabled via libtxc_dxtn. It was my choice to use a technology the distributor could get in trouble for if they bundled it directly. Last time I checked (may no longer be true), Mozilla didn't use external codecs, so you COULDN'T do anything like that (yet).
If I'm not mistaken, only the encoder has improved, without any need to change the underlying video format. Just like x264 kicks the arses of most other H.264 encoders without requiring an x264-specific decoder. But I need someone more familiar with the matter to confirm that.
You assume an awful lot. The VP8 format hasn’t changed.
Which profile are you talking about now, exactly? Hopefully you understand you're talking about a group of encodings with different features which don't run on all the devices you just mentioned… it has the exact problem you just assumed WebM doesn't have.
Maybe if they weren’t busy filling comments with assumptions, real discussions about why it doesn’t matter would happen.
so WebM is available on smart TVs and set-top boxes for DLNA streaming?
You forgot to mention that HEVC (H.265?) is almost ready, and among other things it will feature half the file footprint of High Profile H264.
AFAIK H.265 is not format-compatible with h.264.
http://en.wikipedia.org/wiki/HEVC
Good job missing the entire point of the post.
It’s not about which is better but how it will be implemented.
Initially there were some extensive tests comparing VP8 and H264; have they been re-done recently, with the results posted online for us to check? Do you have any proof that VP8 is better than H264?
Otherwise, even Google is not claiming that VP8 is better than H264, just that they’ve improved by a certain percentage on a certain metric.
One can unfortunately debate this topic ad infinitum. About a year ago, after WebM quality had twice been significantly improved compared to its initial release, on OSNews there was a topic about encoder speed, and a poster who knew how to do so provided a couple of examples using WebM and h264 encoder profiles such that the quality of the output was all but identical. As close as could be achieved.
The x264 encoder was far faster. The WebM files were actually a bit smaller.
Google have significantly improved the encoder speed of WebM twice since then.
http://blog.webmproject.org/2011/08/vp8-codec-sdk-cayuga-released.h…
http://blog.webmproject.org/2012/01/vp8-codec-sdk-duclair-released….
I think that the x264 encoder speed is still faster than libvpx, but nevertheless the point remains that the WebM format actually needs slightly fewer bits for the same quality of output.
You can’t convince anyone that VP8 gives more quality per bit than H264 in that way. A convincing comparison would give both quantitative and qualitative points, like some of the early ones comparing PSNR and SSIM, along with screenshots of the same frame encoded.
The last known tests like this (compression.ru, as well as other sites) put H264 ahead of VP8 in most every category. You’d have to retest with the updated encoders to be sure.
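For reference, the quantitative half of such a comparison is straightforward to reproduce; here is a minimal sketch of per-frame PSNR (the standard formula, applied to two decoded frames loaded as 8-bit arrays; how the frames are decoded and loaded is left out):

import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio between a reference frame and an encoded-then-decoded frame.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

SSIM is more involved, but libraries such as scikit-image provide an implementation.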
Why not? You eliminate one of the variables (the quality, by selecting profiles which yield the same quality, no matter what happens to be the name of the profiles), and then you can compare the other variables, being file size and encoding speed. It turns out that WebM has slightly better quality per bit, and x264 is significantly faster, for encoding to the same quality.
They were done ages ago. The WebM codec has been improved very significantly in both speed and quality-per-bit in each of four releases since then.
The problem with that method is that quality can be achieved in different ways: H264 has different profiles, each one can give you different quality per bit. So to determine which codec can give you the best quality per bit, just take the best profile from each. Otherwise you are comparing profiles ONLY, not codec families.
Each profile can get to the same quality, if you give it enough kbps. You can create any conclusion you want from this, just pick a terrible profile and then conclude that it needs much more kbps to achieve the same quality.
I don’t consider a method that gives nearly any conclusion the experimenter wants to be good, or convincing.
Did you confirm this by conducting your own test, encoding the same video with both encoders? May we see the results of that test? I did so a bit more than a year ago, and posted sample screenshots from the encoded videos here. VP8 was lagging behind *a lot* and the encoder was also a lot slower than x264. Sure libvpx has had releases since then, so I’d need to repeat the test to see how much things have changed.
But what I’m trying to say is, unless you provide such a test and the info so that others can reproduce it (I posted command-lines I used to encode the videos), statements about the quality of the encoders have no merit.
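In that spirit, a sketch of the kind of reproducible command lines being asked for (assuming the x264 and vpxenc tools are installed; the source file, bitrate and settings are illustrative, not a recommendation):

import subprocess, time

SRC = "source.y4m"     # hypothetical raw Y4M source clip
BITRATE = "1000"       # target bitrate in kbit/s for both encoders

def timed(cmd):
    t0 = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - t0

# H.264 via x264, two-pass at the target bitrate.
x264_secs = sum(timed(["x264", "--pass", p, "--bitrate", BITRATE,
                       "--preset", "medium", "-o", "out_h264.264", SRC])
                for p in ("1", "2"))

# VP8 via vpxenc, two-pass VBR at the same target bitrate.
vp8_secs = sum(timed(["vpxenc", "--codec=vp8", "--passes=2", f"--pass={p}",
                      "--fpf=vp8_stats.fpf", "--good", "--cpu-used=0",
                      "--end-usage=vbr", f"--target-bitrate={BITRATE}",
                      "-o", "out_vp8.webm", SRC])
               for p in (1, 2))

print(f"x264: {x264_secs:.1f}s, vpxenc: {vp8_secs:.1f}s")
# Then compare file sizes and PSNR/SSIM of the two outputs against the source.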
Not sure if it’s realistic or not, but maybe Mozilla (and most other free software projects) just need to keep themselves outside the USA. If America wants to shoot itself in the foot this way, so be it.
Of course, the USA keeps putting pressure on other countries to adopt software patents. But hopefully, not too many will succumb.
The one bright hope for the future is that the h.264 patents expire around the year 2025. So in the meantime, we play a cat-and-mouse game with the US authorities. Of course, don't be surprised if pressure mounts from Microsoft and Apple to extend software patents to 70 years instead of the current 20. They've done this with copyright, so it can be done with patents. But again, the USA is not the whole world.
Google's effort to push WebM tells us that WebM isn't "open" anyway, as it's pretty much totally controlled by Google. So what's the difference? I say, go with H.264. It is higher quality than WebM and isn't controlled by a single company (a company which, BTW, happens to enjoy a monopoly position in the web video marketplace, where "monopoly" = "dominant enough to be able to manipulate the marketplace", which is the EC definition; indeed, had WebM "won" by virtue of Google mandating it on YouTube, it would've won by virtue of abuse of said monopolistic position in the web video marketplace rather than on merit).
Au contraire, one can obtain alternative VP8 codec software which is not written by Google at all. Google’s code is called libvpx.
Here is an alternative, which is called ffvp8:
http://x264dev.multimedia.cx/archives/499
Google do control the format specification, but that is a requirement if you want all encoders to produce encoded data that will play in all decoders.
Other than sticking to the mandated data format, as specified here:
http://www.webmproject.org/code/specs/
and hence as RFC 6386 here:
http://datatracker.ietf.org/doc/rfc6386/
… other than sticking to that data format, if you want to write your own VP8 codec implementation, just as the ffvp8 project already has, then fill your boots.
Oh, and once again, please be aware that h.264 is no longer higher quality than the latest versions of Google’s VP8 code, called libvpx.
Oh, and if you don’t want to go to the effort of writing your own code, Google’s license allows you to ship their code in your product at absolutely no charge. You can get it from here:
http://www.webmproject.org/code/#webm-repositories
So what exactly is your complaint, Molly?
It appears that you don’t actually have one.
I'd like to see some evidence of that. Several other people and I tried it during the latter half of last year, and at least back then you needed a much higher bitrate on VP8 to get the same quality, and especially at lower bitrates VP8 lost quite clearly.
It doesn’t matter.
Last time any meaningful tests were done between the two, VP8 was almost as good as h264 in quality per bit rate, however encoding and decoding was horrible. Even if this has changed to the point where VP8 is better in every way, albeit by a small margin, it doesn’t matter because there’s no support for it.
More importantly, there's no support for it in mobile. None of the SoCs have hardware support for VP8 and they haven't announced any plans to add it, which would mean no support until at least 2013. At that point, if they're going to support another format, h265 is going to be the best option, as it's way ahead of both codecs.
A good analogy is social networking, i.e. Google+ and Facebook. There's no way for Google+ to succeed unless there's a pretty big incentive to do so, and right now there isn't. As you can see, so far it can't beat the support and traction that Facebook already has.
Hate to repeat what someone else has said, but… any idea how many SoCs out there use PowerVR chips for media processing? Because they happen to support WebM now.
http://www.imgtec.com/corporate/newsdetail.asp?NewsID=597
As for H.265, I think it will take some time to get traction. Do not forget that AAC and Vorbis have been out for well over a decade, and everyone still uses MP3. Due to the idiotic way we manage codecs, H.265 will suffer every bit as much as WebM suffers today, because it is incompatible with its predecessor and, as it stands, no hardware or software supports it yet.
First, you’re just repeating false information. No mobile phone is using the PowerVR chip that you linked.
Second, in response to you and lemur, I was talking about hardware support; I’m aware that there’s been support for it in Android, obviously.
I was clearly wrong, there will be mobile devices with hardware decoding for VP8 before 2013.
The official WebM blog gives this awesome Eastern European tablet (btw, have you looked at the rest of the specs?) as one of the devices with hardware decoding for VP8. Clearly, this is what VP8 needs to gain rapid adoption.
You can argue all you want, but the fact is that h264 has the support of big companies and has been adopted by the media industry. It’s also a fact that WebM is on par with h264 (meaning slightly worse or slightly better).
Common sense dictates it’s impossible for it to replace h264 under these circumstances.
As for h265, yes, it’s a new format, not compatible with its predecessor, and essentially that’s my point. If people are going to move away from h264, why move to something that’s the same and not the Next Big Thing(TM)?
I agree that no current phone is using PowerVR’s newest GPUs yet, but as far as I know a majority of major ARM SoC manufacturers use PowerVR designs, so if new PowerVR designs are WebM-compatible it is only a matter of quarters before devices with hardware WebM decoding get out of the door.
I believe I’ve read somewhere that while current WebM encoders are ridiculously slow compared to x264, WebM decoding, on the contrary, requires very little in the way of resources, so little that 720p video decoding can be done smoothly in software on an average PC. From memory again, H.264 decoding would be much heavier on resources, to the point of requiring hardware (or at least GPU-accelerated) decoding at these resolutions.
If this is true, I would expect hardware-accelerated WebM decoding to be significantly more resource-efficient than hardware-accelerated H.264 decoding, which means a significantly lower mobile device power consumption while watching videos. Given that the two codecs have reached similar output quality, this looks like a strong incentive for device manufacturers to me.
But if WebM encoders are already available and in every mobile device in a few quarters, whereas the H.265 spec is not even finished yet, it is to be expected that for several years to come, WebM will compete with H.264, which as a mature codec is its direct competitor as of now.
By the time mature H.265 hardware and software decoders are out, the codec landscape will likely have drastically changed compared to what we know today, in one way or another. For all we know, wavelet-based video codecs like Dirac could even have matured to the point of ridiculously outperforming DCT-based ones.
decoders*, of course…
I’m not an expert but I believe the opposite is true. I’ll ask around….
I couldn’t find any good source myself (only guys who compare hardware-accelerated H.264 to software-rendered WebM and find out that the former uses less CPU… well, DUH!), so I did a small test by running VLC with every form of hardware acceleration and frame-skipping I could find disabled.
Since my laptop is relatively recent and powerful, software video decoding is definitely not a problem at 720p, and I had to get 1080p clips and disable all SIMD CPU instructions in order to get some significant CPU use.
Files used: 1080p Sintel trailer, H.264 version from http://www.sintel.org/download/, WebM version from http://x264dev.multimedia.cx/archives/499.
Note that the compression levels of the two files are not equivalent (the WebM version is much bigger), which could significantly alter the result. I would have to find a raw source and perform H.264 and WebM encoding at similar VBR bitrates myself in order to get more conclusive results.
Average CPU use was measured using the Linux command-line utility sar, by launching the video and putting it in full screen as fast as possible after starting sar, then not touching the computer again (this may cause some variability in the first measured CPU load, but it should average out).
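The whole measurement amounts to something along these lines (a sketch; the file name and the 300-second sampling window are illustrative, and hardware acceleration had already been disabled in VLC’s preferences):
# sample overall CPU usage once per second for five minutes, in the background
sar -u 1 300 > h264_cpu.log &
# start playback full screen, then leave the machine alone until the clip ends
vlc --fullscreen sintel_trailer_1080p.mp4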
Results:
For H.264 %user=22.74 %system=1.78 %iowait=0.30 %idle=75.19, rest is 0.
For WebM %user=17.47 %system=1.91 %iowait=0.22 %idle=80.39, rest is 0.
There’s quite a significant performance boost here (about 20% less CPU usage for WebM), but not as impressive as what I think I remembered… Again, I guess I should try again with more “equivalent” videos that I would have encoded myself in order to conclude.
If you’re interested in a per-second resource usage log (quite interesting, as the two decoders behave very differently), here they are…
…for H.264 decoding : http://pastebin.com/2n3CDSyK
…for WebM decoding : http://pastebin.com/uuK0wS0S
EDIT : I found more similar videos at http://www.quavlive.com/video_codec_comparison, will try with them and let you know what I find
D’oh. You’re only artificially inflating the CPU-usage then. There is absolutely no point in disabling SIMD-usage.
Ergo, the results aren’t really comparable.
You do not test codec speed like that. In particular, full-screen vs. non-full-screen makes no difference whatsoever, simply because the window contents are scaled by Xv using H/W.
You use “mplayer -nosound -vo null -benchmark” if you wish to check codec performance.
I encoded a video to 720p WebM and H.264 clips, 1500Kbps bitrate and with all the best possible options, and I’m getting 8% CPU usage when playing WebM and 12% when playing H.264. So yes, there is indeed a difference and WebM does indeed require less CPU.
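In other words, something along these lines (a sketch; source.y4m and the encoder options shown are illustrative, not the precise settings I used):
# encode the same raw source to both formats at ~1500 kbit/s
x264 --bitrate 1500 --preset slow -o clip_h264.mkv source.y4m
vpxenc --target-bitrate=1500 --good -o clip_vp8.webm source.y4m
# then time pure decoding, with audio and video output disabled
mplayer -nosound -vo null -benchmark clip_h264.mkv
mplayer -nosound -vo null -benchmark clip_vp8.webm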
If you read the discussion on the Mozilla mailing list you’d know that CPU usage is actually not a very big problem anyway.
Most power on a mobile device is actually used by the screen and the radio (from downloading a large video file, of course).
So if WebM uses less CPU, even better; maybe software decoding of WebM actually works pretty well on mobile devices then.
But I would really like to see some tests.
That really solves the problem, no more discussion. Just discussion about whether the tests were done the right way 😉
I never claimed that; I merely responded to the OP, stating that it is indeed true that playing WebM/VP8 uses less CPU than MP4/H.264.
But what you are claiming is not true. For example, on my tablet, playing 720p High Profile video takes almost 100% CPU if done in software, and 1080p is terribly choppy, even baseline. I haven’t actually conducted any real test of how much of a difference it makes on battery life, but I estimate that it’s around a 30%-40% difference between H/W playback and S/W playback. Even a 20% difference would already be significant enough to warrant paying attention.
The radio is on anyways so it actually doesn’t add that much to battery-usage. Screen is the most battery-hungry part, yes.
As I tested, on a desktop CPU the difference was 4%; on a modern 2-core mobile device it would likely translate to something around 10%-15%. I can test that if needed, though I don’t want to because it’s a hassle.
Yes, if it can’t decode it in software fast enough, that is a problem.
The radio does use more power with more use AFAIK, if only to send TCP ACK packets.
So, I tried the experiment with the videos from http://www.quavlive.com/video_codec_comparison, and here are the results for the 1080p “Sunflower” videos:
H.264 : %user=32.48 %system=1.63 %iowait=1.00 %idle=64.89, rest is 0 (full log : http://pastebin.com/ZDMmpeQT )
WebM : %user=20.98 %system=2.06 %iowait=0.56 %idle=76.40, rest is 0 (full log : http://pastebin.com/Tt9RFGFH )
Now that’s a 30% improvement in software decoding performance for WebM.
I couldn’t test the “Park Joy” videos, though : VLC randomly freezes (which is to be expected on a high-resolution video where things are moving quickly) and crashes (which is definitely NOT normal) when trying to read them, and so reliable measurements with sar do not seem possible.
Anyway, at this point it seems clear that WebM IS much easier to decode than H.264, although it does have its drawbacks (slow encoders, slightly lower image quality, though that seems to have improved).
Not correct. Android has supported WebM since version 2.3 Gingerbread.
As for hardware:
http://blog.webmproject.org/2011/11/time-of-dragonflies.html
“In total, over 50 semiconductor companies have licensed the VP8 technology today. The first devices with 1080p VP8 decoding are today in the consumer market from nearly a dozen different brands (see example here), and the first chips capable of VP8 encoding will ship in 2012.”
Says it all. More than a dozen brands are shipping devices with hardware VP8 decoding. Over fifty semiconductor companies have licensed the technology and are making chips, and the first chips capable of VP8 encoding are due to ship this year.
You couldn’t have been further from the truth.
Well, not exactly ‘evidence’, but I’ve downloaded 720p YouTube videos in both webm (vp8) and x264 (mp4) and I can’t see any difference in visual quality, despite the webm versions generally being smaller in file size.
Also, since I don’t have Flash installed, I’m impressed at webm availability on YouTube these days; it must have been months since I came across a YouTube video which didn’t play as webm.
The last few weeks I’ve seen a lot of videos without a webm equivalent. Maybe they are switching encoders and re-encoding the whole library or something. I don’t know.
My hope is that they are preparing to get WebM on YouTube out of the beta phase.
Because in the period before that, they were almost done with the whole library. And they were even doing A/B tests, giving some people WebM sometimes even when those people had never visited /html5.
You can see that from the comments on videos; some people say: I played your video at 2x speed and it was really funny.
The non-Flash player has a speed-adjustment button.
It could also be that those were non-WebM, just the video tag, though.
Well it says I’m not in the HTML5 trial (asks me if I want to join) and I can still see webm videos in youtube just fine using Firefox (I don’t know exactly when this changed but it’s been this way for quite a while).
I was talking about people who do have Flash who still get HTML5-video-tag as a test.
But yes, what you mention has probably already worked for at least a couple of months.
I wonder if Microsoft will permit end users to install unsupported video codecs such as WebM in Metro?
If not, that doesn’t bode well for the format. It’s sickening to see corporations strangling the life out of open computing.
Ultimately there was no good reason not to support the media frameworks that come built in with modern OSs. And if the codec for a particular pet format isn’t installed at the system level, then have browser support as a fallback.
Thom, I find it strange that you say things like this: “We owe a great deal to the men and women at Mozilla” and “Heaven forbid we suffer a bit of inconvenience today for a better tomorrow”, and still you won’t suffer the very minor inconvenience of actually using Firefox. Mozilla would not have to take this step if only people cared enough to actually use Firefox. Just wondering what the major inconveniences are that force you to use another browser and thus be part of creating a worse tomorrow for all of us?
Or do you mean that since we now have Chromium, which is also open, Mozilla is no longer relevant?
No, you are exactly right. That was how I read it too.
But that is what Thom is like, pragmatic and complaining about Mozilla being pragmatic too 😉
It’s funny that Mozilla was forced to take this step because once again Google didn’t back up its promises, yet Google is only mentioned once and only accused of not doing enough, while Apple is mentioned four times and gets a much harsher treatment.
That is because Apple and Microsoft are the problem. Apple and Microsoft implement neither Theora, Vorbis nor VP8 in their browsers. Google is guilty of being a half-hearted part of the solution, not of being part of the problem.
Apple and Microsoft can be seen as a problem, and Google announced it was going to solve it, except they once again didn’t, causing problems for Mozilla.
Thom pays no attention to why Google may not have done anything, which is something you’d expect from a journalist. Instead he turns his article into a rant against Apple.
What Apple (and Microsoft) may be doing wrong in his eyes is old news. It happened many moons ago and because it happened back then Google announced their way of solving it.
Now fast forward to the present day it turns out Google was all talk, but doing nothing.
No comment on this; instead Thom goes back in time and repeats what we already know. It appears he tries to make it seem that Apple and Microsoft are at fault for Mozilla getting into problems, and not the false promises of Google.
I don’t want to offend but that is a lot of historical nonsense. In what way has the technology industry been ‘ruined’? Give me some actual real world examples of how any major development in the technology field has been ‘ruined’ by anything to do with patents? A single example of how consumers have even been significantly inconvenienced let alone had their use of technology ‘ruined’.
As for the silly fears and anxieties being whipped up about H264 I repeat the same questions – how has the patent status of H264 impacted in the real world to the detriment of anyone? The use of H264 is delivered via a FRAND framework which should ensure fairness and access and a level playing field for all wishing to use it. I say ‘should’ because Samsung and Motorola (supported by Google) have been trying their best to undermine and destroy the FRAND system all for short term legal advantage. Now that’s a real and not imagined threat that could ‘ruin’ the technology industry.
Speaking of Google I noticed that in your whole article you only mention it once and fail to highlight that once again having touted itself as the champion of all things open Google has again said one thing and done another. How much longer can the empty piffle that Google spouts about being ‘open’ continue to hypnotise the very people that should be holding Google to account?
Samsung Galaxy products banned in Germany?
Patent lawsuits from Apple about a photo viewing app?
I think it was Microsoft that had a lawsuit against Android device makers over stuff in the Linux kernel…
SCO lawsuits against Linux distributions?
Need I go on?
Banning Samsung Galaxy products prevents customers from buying and then using them. To me that seems like a very good thing for customers as they’re pretty horrible products.
I cannot agree with you for two reasons:
* People should be allowed to decide for themselves if they want to buy Samsung devices. You, on the other hand, are advocating the idea here that your preference is more important than theirs.
* I own a Galaxy Note and I personally find it exceedingly good, much better than anything else I’ve laid my hands on.
Next time I’ll add a winking smiley, even though I’m not a smiley person.
But on a more serious note, I have recently come into contact with a few Samsung phones, and as my employer threw the BlackBerries out of the window, I now have a WP7 phone (Nokia Lumia 800).
Now that I know how other systems work, I can say, from my personal experience/view/preferences, that Android and WP7 just don’t come anywhere near iOS. Although I must say WP7 (on the Lumia) is as fast as something that’s very fast. But I’d never buy one, because the rest is a big disappointment.
Should I say, with a winking smiley, that coming from you this conclusion is highly surprising?
Wink all you want, but I am aware of my Apple preference and experience, so when I got my Lumia I realized it might take some time to get used to it.
Even so, when I bought my iPod touch everything was clear and intuitive. There was no learning curve. When I got my iPhone and iPad there was no learning curve either, as everything is the same. WP7 is less so, and some things aren’t a matter of getting used to it. Like how it keeps insisting there are updates for 2 apps, and when I update them they just disappear from the update list only to reappear later on. The mail app is very unclear. Zooming in to text doesn’t work well. When you scroll a text it sometimes auto-zooms, and when you do want it to zoom it doesn’t smart-zoom like iOS, but just, well… zooms, moving part of the text off the screen. Scrolling in general is smooth, but often when scrolling it suddenly thinks I have selected something. This doesn’t happen to me when I scroll in iOS.
It’s just missing a lot of stuff, making it look more like an alpha product or a proof of concept.
Sure there is some positive stuff too, like being able to link Facebook, Twitter and LinkedIn to your contacts and have actions added. And like I said before everything flies. Then again it may fly, because all apps seem to be very limited.
At least the Metro interface is working much better than it does in Windows 8. And mind you, Windows 8 also gave me a feeling that a lot of options were missing. It all seems dumbed down and just waiting for the missing stuff to arrive in an update.
Personally I don’t understand what the fuss is – why don’t Mozilla developers just use the frameworks provided on the respective operating systems, and let the operating system vendors themselves deal with the fiasco of providing H.264 codecs and having to shell out for the patent costs? Use GStreamer for *NIX, AV Foundation for Mac OS X and Media Foundation for Windows and you’re good to go.
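On Linux, for instance, handing the whole problem to GStreamer is basically a one-liner (a sketch, assuming a GStreamer 0.10 install with whatever codec plugins the distribution ships; the file path is illustrative):
# let the system’s playbin2 element pick demuxer, decoder and output automatically
gst-launch-0.10 playbin2 uri=file:///home/user/video.mp4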
The “fuss” is about the war against the open Internet, where Apple and Microsoft are the evil side, promoting encumbered codecs for their selfish motives. Mozilla’s principles include promoting open Internet. By enabling H.264 Mozilla will violate their own principles. That’s the core of the discussion.
Because the point of the html5 video tag is not purely to deliver content to users. If that is the only objective, why not just stick with Flash? It worked didn’t it? It’s the same argument – let Adobe deal with the complexities and the patent costs.
I really hate that people don’t see this for what it is… Let’s try an analogy (a purely fictional one):
Tomorrow, some open source video stack implementation (maybe GStreamer) comes up with a killer way to do advanced layering effects – something that would let a developer, for instance, insert procedurally generated objects into a video stream while retaining an accurate depth of field.
What would happen? The browsers out there that implement their own stacks would look at the code, and if it is something they think they could adopt they would adopt it. Or they may look at it and recognize that, with some changes, they could do it even better in their stack. There is nothing stopping them from doing so – they control the stack top to bottom. Everyone feeds off of each others work, that is why the web advances so quickly, because of openness.
Would a browser using the stack provided by Windows or OSX be able to do such things? No – the browser maker is at the mercy of the Operating System vendor to do stuff like this – they are simply playing on the sidelines. They are not part of the process of advancing the status quo.
The point is that the browser is the platform when it comes to the web. The less it relies on the underlying OS, the more flexibility the programmers have to innovate.
Chrome is very popular today, and Firefox has been for ages. The reason they became so popular is mostly their platform independence. Not so much because they can run on multiple platforms – but because, since they were built to do so, they both implement a lot of functionality commonly provided by the underlying OS; they do a lot of things themselves that most software relies on the OS to do for it. In doing so they find ways to do it better. Most of what people like about Chrome and Firefox is a direct result of those two browsers quite intentionally choosing to do things their own way.
Sure, using the OS video stack will work – I’m not saying it wouldn’t. But why let Microsoft and Apple dictate how video should work on the web?
In that case, if the browser is the platform, it pretty much becomes the OS anyway. To that end, where would you draw the line? Should browsers come with their own fonts and window drawing routines? Should we ship them with drivers to talk to a sound card if we need to play audio?
IMHO, the more browsers can rely on functionality already built into the OS, the less bloated they become. That is what the OS is there for.
That’s true of course. My point is that the line is definitely moving away from functionality being provided by the OS – and that has proven to be a good thing, not a bad one.
I’m not opposed to using OS provided functionality in browsers – I’m just saying there ARE very tangible benefits to not doing so…
Everyone for some reason seems to ignore those benefits – even when there are so many examples showing how they have benefited from browsers going their own way.
Unfortunately Thom doesn’t work in the tech industry. If the Google Analytics stats say “iPads/iPhones, and everyone else has Flash installed”, then it’s H.264, with Flash for video where H.264 isn’t available. I would love to use WebM.
Unfortunately I can’t tell my bosses I won’t implement H.264 because “it isn’t good for the web”; I would probably get in a lot of trouble.
It is not often that I completely agree with Thom.
But on this issue, he is spot on.
The web deserves to be open.
Those crap patents have no place here.
You could be a politician. Seriously, too bad one doesn’t see such well reasoned discussion on other tech blogs.
Good. Let’s hope Flash dies faster.
Google is to blame for Flash not dying faster…
From supporting Flash in Google Chrome, to requiring Flash on YouTube for certain videos so they can embed commercials.
They also only halfway implemented the different video codecs, so now when you try to view YouTube videos on iPads or iPhones they try to re-encode the video on the fly so they can send you something playable. The problem is their servers are so slow that the re-encode doesn’t happen fast enough to watch the video. And then if you come back later, it never saved the re-encoded video, so it has to do it all again.
I wish Google had never bought YouTube. They have really messed it up.
I’ve often wondered why Google never bought Adobe. They would get the PDF portfolio, some of the best commercial design and creative software, and Flash which could easily be open-sourced at that point.
In fact, the only deterrent I can imagine is trying to avoid antitrust lawsuits and sanctions.
Nobody is forcing them to do anything. If they don’t want to implement H.264 playback they don’t need to. Their browser will not suddenly stop sucking because they implement H.264 playback.
What is this? A soap opera?