HTML5 Video is coming to Opera 10.5. Yesterday (or technically, last year; happy new year, readers!) Opera released a new alpha build containing a preview of their HTML5 video support. There are a number of details to note, not least that this is still an early alpha…
HTML5 video has been available in public releases of other browsers for some time now, but it was Opera themselves who proposed the idea and provided an early example of the technology.
Just like all the under-the-hood changes for 10.5, the video implementation has undergone a large change, switching from lib[ogg|vorbis|theora] to GStreamer. This change adds a lot more flexibility and much better system integration. GStreamer will run in a separate thread, improving responsiveness and audio quality, and if GStreamer is available natively on the OS, Opera will use the OS-provided installation instead. This should allow for a good playback experience on Linux. Opera believe in open formats and recommend using Ogg media, but the GStreamer backend means that even if a video is not Ogg, it may still play as long as GStreamer has the right codec available (Firefox’s HTML5 video implementation will play Ogg and only Ogg).
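The practical upshot for page authors: offer several encodings via multiple `<source>` elements and let each engine pick whatever its backend can decode. A minimal sketch of that selection logic, assuming a video-element-like object (the function name and the stand-in element below are mine, for illustration only; `canPlayType` is the real DOM method, which returns "probably", "maybe", or the empty string):

```javascript
// Pick the first source the browser claims it can play, mirroring what
// <video> does internally when given multiple <source> children.
function pickPlayableSource(videoEl, sources) {
  for (const s of sources) {
    if (videoEl.canPlayType(s.type) !== "") return s.src;
  }
  return null; // nothing playable: fall back, e.g. to a Flash player
}

// Stand-in for a real <video> element (illustration only): an engine
// whose backend, like Firefox's, handles Ogg and nothing else.
const oggOnlyVideo = {
  canPlayType: (t) => (t.indexOf("ogg") !== -1 ? "probably" : "")
};

const chosen = pickPlayableSource(oggOnlyVideo, [
  { src: "clip.mp4", type: 'video/mp4; codecs="avc1.42E01E"' },
  { src: "clip.ogv", type: 'video/ogg; codecs="theora, vorbis"' }
]);
// For the Ogg-only stand-in above, chosen is "clip.ogv".
```

An engine backed by GStreamer would simply answer `canPlayType` for more types, depending on which codec plugins are installed.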
Speaking of implementations, Opera have tried their best to meet the ever-growing spec, and being late to the game (as far as public releases go) brings them a number of benefits. Firefox has already had to deprecate some HTML5 video features due to changes in the spec.
The special build is not available on Mac yet, because GStreamer is not available there at the moment. (I wonder if they could look at Songbird, who use GStreamer.)
Plugin embedding has been a sore spot in Opera before, and I find it very exciting that another browser will soon be joining the ranks of those providing HTML5 video support. Whilst Microsoft have not said anything confirming or denying the presence of HTML5 video in IE9, they have said that it will support HTML5 elements (IE can currently only display HTML5 elements using a JavaScript shim). With the release of Opera 10.5, and if IE9 ships with HTML5 video support, every major browser will support HTML5 video, and hopefully we will begin to see this technology used in the mainstream.
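For reference, the JavaScript shim trick mentioned above is tiny: old IE will only style an unknown element once `document.createElement` has been called for it. A hedged sketch of the idea (the element list and function name are my own; libraries such as html5shiv do essentially this, plus CSS to make the new elements `display: block`):

```javascript
// Elements new in HTML5 that old IE's parser doesn't recognise.
var HTML5_ELEMENTS = ["article", "aside", "audio", "canvas",
                      "figure", "footer", "header", "nav",
                      "section", "source", "video"];

// Calling createElement once per tag name teaches old IE to treat
// these as real, styleable elements.
function shimHtml5Elements(doc) {
  for (var i = 0; i < HTML5_ELEMENTS.length; i++) {
    doc.createElement(HTML5_ELEMENTS[i]);
  }
}

// Illustration with a stand-in document object (no real DOM needed):
var created = [];
var fakeDoc = { createElement: function (name) { created.push(name); } };
shimHtml5Elements(fakeDoc);
// created now lists every element the shim registered.
```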
You have to understand that this goes far beyond just replacing Flash video players. That is the first step, but with video being a first-class citizen in the DOM, not hidden in a sandbox, developers can style and play with the video data in whatever ways they can imagine. You can spin, skew, colourise and even map it onto a 3D cube. Anything else on the page can change or interact with changes in the video. We’ll be able to invent new ways of annotating and commenting on videos, all without the use of Flash, and therefore inherently compatible with any OS and any device, including mobile phones.
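As a concrete sketch of “playing with the video data”: in a browser you would `drawImage` each video frame onto a `<canvas>`, read the pixels back with `getImageData`, transform them, and write them back with `putImageData`. The transform itself is plain array math over RGBA bytes; here is a grayscale pass as an illustration (the function name is my own, the luma weights are the standard ITU-R BT.601 ones):

```javascript
// Convert an RGBA pixel buffer to grayscale, leaving alpha untouched.
function toGrayscale(rgba) {
  const out = new Uint8ClampedArray(rgba);
  for (let i = 0; i < out.length; i += 4) {
    // BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B
    const y = Math.round(
      0.299 * out[i] + 0.587 * out[i + 1] + 0.114 * out[i + 2]
    );
    out[i] = out[i + 1] = out[i + 2] = y;
  }
  return out;
}

// One pure-red pixel (255, 0, 0, 255) becomes mid-gray (76, 76, 76, 255):
const gray = toGrayscale(new Uint8ClampedArray([255, 0, 0, 255]));
```

The same loop could just as easily tint, threshold, or chroma-key the frame, which is exactly the kind of thing a sandboxed plugin never let page scripts do.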
Of course, I hope they come out with a Mac build soon so that I can test it out with Video for Everybody. If any readers could submit screenshots for the testing matrix, that’d be appreciated. (I’m particularly interested in Linux, as HTML5 video has proven to be unreliable with Firefox 3.5 on Linux; MPlayer-plugin &c. don’t fall back as they should.)
KHTML users will get HTML5 video and audio in KDE SC 4.4 too.
khtml uses KDE’s Phonon technology, which means Mac users will get output from QuickTime, Windows users will get output from DirectShow, and Linux users will get output from GStreamer, xine, or any other available backend. I guess third-party VLC and MPlayer backends are there too.
happy new year.
depreciate: To fall in value; to become of less worth; to sink in estimation; as, a paper currency will depreciate, unless it is convertible into specie.
deprecate: To pray against, as an evil; to seek to avert by prayer; to seek deliverance from; to express deep regret for; to desire the removal of.
Ah, thank you. Fix’d. You _can_ use Depreciate in British English, but this has all but died out now because of programming terms coming primarily from America. (Program instead of Programme &c.)
How does reduced in value have similar meaning to that of obsolete in British English?
It’s hard to find details on the web (very Americanised); the two words have merged to a certain extent: http://dictionary.reference.com/browse/deprecate
The first time I came across the term “deprecate” in programming was in Java. Before that I’d been using Ada, Pascal and [Visual]Basic.
For me “depreciating”, as in “belittling/making less important” works just as well as “deprecating”, as in “belittling/making obsolete”. The difference is semantics, either way it points me to the fact the construct “we” are about to use may not be a good idea and may be destined for removal.
I’ve always thought that depreciate makes more sense, as most deprec[i]ated methods/functions seem to hang about like a bad smell for a number of years anyway.
Seriously… I don’t think Opera has a chance in snot of moving out of the basement. The world is moving to webkit/khtml… Browser makers would be well advised to use webkit for rendering and focus on innovating features for the browser like faster JS, more scripting languages, content management features, etc.
Why?
I love how people will go on and on about choice and all then claim others should give up on *their* implementation of something and go with the more popular ones. Webkit isn’t the be all, end all.
Consumers always win when there is competition in the market. Opera may not “move out of the basement”, but it plays a critical role in shaping where the web is going.
If you read the article linked, you will see that Opera was the browser that suggested the “video” tag. It was also the browser that gave us tabs, and it was one of the first browsers to score highly on the Acid tests and be web-compliant, embarrassing those that are already “out of the basement”, and we, the consumers of the web, are the beneficiaries.
The web stagnated when IE was the only dominant browser, and the web will stagnate if browsers are reduced to only one or two… let there be many, let them compete on how best to serve the web, and let us enjoy the benefits of the competition.
You may not use Opera as your browser, but its simply being in existence is putting pressure on your browser of choice, and you should be thankful for that, because you are indirectly benefiting from it.
well said
Out of the basement?
Opera is the #1 mobile browser, with a market share of nearly 30% or so.
The world is not moving to WK. What is happening is that there are a lot of WK forks. There is no mobile WK:
http://www.quirksmode.org/blog/archives/2009/10/there_is_no_web.htm…
Your imaginary scenario where the engine will be the same will never happen.
Also, browsers using webkit will always be “owned” by Apple since Apple basically runs the WK project. So anyone who goes for WK will either be a slave to Apple’s release cycles, or will have to fork WK and develop it into something entirely different, and you’re stuck with two different engines anyway in the end.
Sorry, but you’re way off. WK doesn’t actually need to be mobile; that’s why it’s getting so popular: a full browser engine that can run on mobiles.
WebKit isn’t tied to Apple. They do dedicate a lot of money to the project with their coders, but that hasn’t stopped a number of other companies from using WebKit and releasing things on their own schedules.
there is a clear difference between webkit’s release cycle and Safari’s release cycle.
You are missing the point. I didn’t say it needs a mobile version. I said there’s no WK for mobile. That is, there is no single WK engine for mobile. It’s all a bunch of incompatible forks.
The Webkit project is mostly run by Apple. Other companies have released WK based browsers, such as Nokia. And guess what, they created a fork which ended up miles away from Apple’s WK, so they had to go back to the drawing board and do it all over again.
You are missing the point again. Apple runs the WK project. As an example, Nokia forked WK and made their own mobile browser. But it was so troublesome to both work on their own fork and implement all the changes Apple added to the main WK code that Nokia’s browser ended up a completely separate fork, which they had to throw away and start from scratch, because all their changes were impossible to merge back once WK and Nokia’s fork had evolved far away from each other.
The “one mobile engine to rule them all” is the wet dream of ignorant people who think it’s just a matter of checking out the codebase and making a browser. It’s much, much harder than that because Apple is moving WK in a certain direction, and if your plans deviate even a tiny bit from that, you’ll end up with an incompatible fork.
Totally wrong.
Wrong. Nokia’s Qt and S60 ports are maintained in the main WebKit repository.
You have a colorful imagination.
They do in practice. Sorry.
As forks, then. Nokia had to start all over again because their fork became too different and unmanageable.
No.
You should be.
No.
No.
Keep telling yourself that. You clearly have no idea how this works.
Unlike you, I actually know how WebKit is organized. You keep posting lies about forks, manageability, etc., but your only “proof” is a single article.
You clearly don’t know much about how it works in practice. You may have read up on the sugar-coated theory of how one dreams of it working, but that doesn’t match reality.
Oh, I do. Very well, in fact.
Actually knowing how the WebKit project and its ports are organized has nothing to do with sugar coating.
You are clearly ignorant of how it works in practice. Read the article and educate yourself about the major problems and forking going on with Webkit.
I wrote a lengthier post regarding your comments. I chose a more prominent place (more top-level) for it; here it’s too buried and other readers may not see it.
Nokia is the first “other company” that comes to mind as a user of WebKit? How about, you know, Google? Maker of that Chrome browser people occasionally mention here?
Nokia is just a good example because they have been at it for a long time. It shows what happens over time. What you get is a bunch of incompatible forks:
http://www.quirksmode.org/blog/archives/2009/10/there_is_no_web.htm…
Nowhere in that article is it claimed that they are forks.
Just read the article. You will see what it says about there not being just one single Webkit implementation.
I’ve read the article and unlike you I also understood it.
Yes, there are different ports with different capabilities.
Are those ports unmanageable forks? No.
Indeed, different ports, as there are major problems associated with maintaining them.
Yes, they will all be ruled by Apple; Google will have no say whatsoever.
If Google wants a say, they will be forced to fork Webkit, and that will create a separate and eventually incompatible and totally different engine which is no longer Webkit.
WebKit has no release cycles. Each port (Cocoa, Qt, GTK, Chrome) has its own cycle. Each port is managed separately. Apple only manages the Cocoa port.
Apple basically owns the Webkit project, so they add whatever they need whenever they need it. So yes, Webkit basically follows Apple’s needs and release cycles.
You are talking about the wet-dream theory. I’m talking about the cold, hard reality.
No.
Every contributor (Apple, Nokia, GNOME, Google) does. WebKit makes no releases; the main development branch is never frozen for any release.
No.
You are talking like an Opera fanboy without any connection to reality.
Actually, yes. Just look it up.
The problem is that in practice, WebKit-based browsers are a bunch of incompatible forks:
http://www.quirksmode.org/blog/archives/2009/10/there_is_no_web.htm…
You have no idea what you are talking about. Those are not forks.
I recommend that you read the link instead of continuing to ignore the facts.
Google and Apple and KDE produce WebKit code. It is not an Apple project. When Mozilla jumps onto the WebKit engine, it will be all but over.
Seriously… the focus of competition should be on end user features and scripting engines.
WK is basically an Apple project, run by Apple.
Just one engine would be devastating to the browser market. There should be multiple engines out there competing to force sites to code to standards instead of engines.
The world isn’t moving to Webkit/khtml.
The world is moving to WebKit.
The world is moving to a bunch of hard-to-manage forks of Webkit.
You keep posting wrong things about WebKit over and over again.
As proof for your accusations, you post links to an article that merely states the varying degrees of WebKit port quality on different platforms.
I don’t know if you don’t know better or if you’re lying on purpose.
No matter what, here are some facts:
Fact 1:
WebKit is not controlled by Apple.
Apple initiated WebKit and provides the hosting infrastructure. Other than that, Apple has no special role in WebKit these days. Apple engineers work together with Google, GNOME, and Nokia ones.
They all have equal rights.
Anyone who reads http://webkit.org/blog/ sees postings about new source code reviewers (with full commit rights) quite regularly.
Fact 2:
Ports are not forks.
You claim that every port equals a fork.
That’s not true. The main development branch can be accessed via http://trac.webkit.org/browser/trunk
Platform-specific code is in subfolders http://trac.webkit.org/browser/trunk/WebCore/platform and http://trac.webkit.org/browser/trunk/WebKit
The core code is located in JavaScriptCore and WebCore directories. That one is generic and not platform-specific.
Fact 3:
Quality varies from port to port.
When a vendor releases a browser based on WebKit, it works like this: the main trunk is branched and the vendor focuses his work (bug fixing) on that branch. That’s the same for every vendor. Newer ports or branches by smaller vendors may not get as much bug fixing as e.g. the Chrome port by Google.
There may also be bugs in the used toolkit. Those bugs are out of scope to fix for the WebKit project.
Branches are made on a regular basis from the main trunk. A branch does not serve as primary development place for a port.
All big software projects use the trunk/branch development model. It’s a proven model and does not equal unmanageable forks.
Fact 4:
Rendering capabilities depend on the graphics library.
WebKit does not have a single, unified graphics library. Gecko for example uses Cairo. Opera uses something proprietary.
If a vendor decides to port WebKit to a new graphics library instead of adapting the graphics library to the platform, regressions may occur.
No vendor is forced to port WebKit to a new graphics library. WebKit has already been ported to Cairo. If a vendor wants to improve on that one, he is free to do so.
Still, most vendors prefer porting to new/native graphics libraries. This reduces memory footprint — especially important in mobile space. Vendors may see this as more important than 100% compatibility with existing ports.
You keep thinking that things work in the real world as they do in rosy-red theory land.
Fact 1: Apple does control the project in practice. Even if this changes, Webkit will still only be controlled by a couple of major contributors, most likely Google and Apple. Despite Nokia’s size, they ended up with a broken fork, and had to start from scratch. For smaller players, Webkit will never be within their control. When even Nokia can’t pull it off, how would others be able to?
Fact 2: You end up with forks in practice. Look at Nokia again.
Fact 3: Not only does the quality vary depending on the port, but each port will eventually either have to become fully forked, or work will have to be thrown away to be aligned with the main Webkit project, which is run by at most a couple of major players. Right now, Apple is basically the owner.
Fact 4: Indeed, there is more to a browser than the browser engine. WebKit is not at all easy to work with, and when you start integrating with platforms, creating your own UI, etc., you quickly end up with a separate and incompatible fork.
So as you can see, Webkit is no easy solution. It’s hard, requires a lot of work, and unless you are a giant like Google or Apple, you will forever be at the mercy of the Webkit “owners”.
Did you even look into the WebKit repository? Those ports are not forks!
Why do you insist on your claims even after being proved wrong?
Nokia never put much effort into the S60 port of WebKit. They are only now putting effort into a port and that’s the Qt port.
Even if Apple controlled WebKit: What exactly would be bad about it?
Did you even look at the real world, and how the various ports are doing? Either your Webkit browser moves further apart from Apple’s Webkit (the main WK development), or you become a slave of Apple’s needs and release cycles. Maybe one or two major companies (Google is a good candidate) can wrestle some control away from Apple, but even Nokia failed miserably, ended up with a crappy fork, and had to throw away their work and start all over.
I didn’t say there’s necessarily anything bad about Apple controlling WK. It probably is if you need a mobile browser and want to compete against Apple, though.
There are tons of smaller players who will be at the mercy of the main WK maintainer (Apple).
WK is not the answer to your mobile browser wet dreams.
So, if Apple controls WebKit….
1.) How is the competition hurt by it?
2.) Which features, put into WebKit by Apple are bad?
3.) Which rendering engine should the competition use?
4.) Why is everybody using WebKit anyway, if by your great insight, WebKit is just bad for Nokia, Palm, GNOME, etc.?
And while you at it:
A.) Give proof that Apple is controlling WebKit. Which special rights do Apple committers have compared to others?
B.) Give proof that the WebKit ports are forks. Show that Nokia is not committing directly to the WebKit repository. The article you are linking to all the time does not give proof that different ports are forks.
C.) Give proof that there is a power struggle between Google and Apple over WebKit control.
The competition is hurt because WK is tailored to Apple’s needs, not to mention Apple’s patents and talk about enforcing them lately.
Also, consider things like video where Apple’s WK will have support for one thing, but everyone else will have to license it separately. That kills the “WK is free” mantra, as you will have to pay huge license fees for the video decoding technology.
It is NOT in the interest of other companies to help Apple make licensed video codecs win the web video war. Apple already has a license. Everyone else will have to pay an insanely expensive license fee to the codec rights holders.
Apple will push for licensed, proprietary technologies on the web because it will give them a huge advantage over anyone else who chooses WK.
Oh, and everyone is not using WK at all. They never will. Phones are getting faster. Gecko is a joke on phones now, but will do well in the future.
Those are? Which of Apple’s needs hurt WebKit and the competition?
Give facts with proof.
Which talk exactly? Which patents are affected by WebKit?
Give facts with proof.
WebKit does not render videos at all. That is delegated to external frameworks. Vendors are free to choose a framework. There is no video decoder present in WebKit’s source tree.
You should check the facts before making such claims. Or can you show the exact location within WebKit’s source repository that support for specific video formats is hard-coded into WebKit?
Why are Nokia, Google, Palm, and GNOME using WebKit and not putting work into Gecko?
You were forgetting those:
A.) Give proof that Apple is controlling WebKit. Which special rights do Apple committers have compared to others?
B.) Give proof that the WebKit ports are forks. Show that Nokia is not committing directly to the WebKit repository. The article you are linking to all the time does not give proof that different ports are forks.
C.) Give proof that there is a power struggle between Google and Apple over WebKit control.
The fact that WK is made into what Apple needs it for, not necessarily what everyone else needs.
You didn’t read about how Apple talked about going after people who violated their patents?
http://www.precentral.net/apple-suiting-sue-palm-over-pre
Quote: “We are ready to suit up and go against anyone.”
You have no clue about browser patents? Maybe you should educate yourself instead of discussing here, then.
Exactly. Apple wouldn’t have it any other way. Apple wants to keep the video closed and proprietary to get another advantage.
Read the PPK article again. I’m getting tired about having to educate you about basic stuff everyone should already know before discussing this.
Why would I give proof for a straw man?
Which features exactly? Give facts with proof.
Rumor about Apple maybe suing Palm is no hard fact about WebKit’s patent situation.
Give facts with proof.
Then give facts about WebKit’s patent situation and how other rendering engines don’t violate them.
Which advantages does Apple have with video? How is this caused by WebKit?
Give facts with proof.
No. Show me the location within WebKit’s source repository that the ports are maintained as forks. You said it’s that way.
Give 1st party proof.
You write big words, but without any foundation. You fail to give proof over and over again for your claims.
Do you have proof or not? If you have: Show it.
You also still haven’t given any proof that Apple’s WebKit committers have special rights compared to the others’.
Well well, it seems like you are hell-bent on ignoring everything, and rejecting facts and information you don’t like, such as Apple’s:
“We are ready to suit up and go against anyone.”
You end up with a silly straw man which really goes to show how desperate you are.
The bottom line here is that there is no “mobile Webkit”. All you have is a bunch of incompatible forks. Combine this with Apple getting lawsuit-happy over patents and their insistence on a proprietary video codec to ensure that other WK browsers are left out in the dark unless they start paying (so much for WK being “free”), and the result speaks for itself.
You still have not given a single credible line of proof that the WebKit ports are forks.
I showed that platform-specific code is in two subfolders of the WebKit source code.
http://trac.webkit.org/browser/trunk/WebCore/platform and http://trac.webkit.org/browser/trunk/WebKit
You failed to give any counter evidence for those links.
I gave proof that various contributors have equal rights in WebKit’s repository. Every time a new person gets reviewer rights (highest rights within WebKit) in the repo, it’s announced via http://webkit.org/blog/ and most aren’t Apple programmers.
It’s also clearly visible via the commit log http://trac.webkit.org/timeline that contributors come from various backgrounds — not only Apple.
You failed to give any proof that Apple’s programmers have special rights in that repo.
I never claimed that different ports don’t vary in quality. I dispute your claims that those ports are forks, and I did so with proof.
You OTOH only linked to a compatibility chart showing varying quality of different ports. That chart does not contain a single line of statement regarding WebKit’s development process.
You have shown proof of nothing but a bunch of claims that are misinformation or pure straw men.
You keep repeating the rosy-red theory, while I have pointed you to the harsh reality of WK development. Ask Nokia some day. They spent years and millions (or billions) on a WK browser, and ended up having to scrap their work and start over.
The proof is in the pudding.
Nokia is currently happily integrating WebKit into the core of Qt. Most recent development is the switch from Nokia’s own QtScript engine to WebKit’s in Qt 4.6.
Performance increased by several orders of magnitude.
Nokia could’ve used another ECMAScript-compatible engine, but didn’t.
Yeah, poor Nokia… so tortured by WebKit…. :-p
In happy rosy world. In reality, they had huge problems, and had to scrap all their hard work and start over.
Indeed. They had to scrap all their hard work to get all the latest improvements.
That’s called reality, my friend.
Proof?
Really: Show me blog postings by Nokia employees where they state their pain with WebKit.
Do it.
I dare you.
No, it’s called happy, rosy, imaginary world. Reality is harsh.
You claim to know all about WK, and yet you are unaware of Nokia’s struggles? Amazing.
Give a single line of proof for once!
Show me articles by Nokia employees!
I have given you a lot of information by now. There’s an expression, “pearls before swine”. I think that’s a good expression here.
How can one educate someone who refuses to be educated?
I look forward to the death of Adobe Flash – it’s a well-known cross-platform security hole. I’m amazed that we’ve put up with it this long.
The sad thing is that we put up with it this long for one major reason: No one’s come out with anything better and as cross-platform as Flash. It’s quite sad, actually. HTML 5 video is only recently even beginning to catch on, and that’s only one use case for Flash though a very common one. We’ll still see Flash around for other things such as rich media games and the like until HTML 5 has equivalent designer tools. A good many Flash designers wouldn’t know what to do with web code if it came out and bit them, yet they can design Flash applets easily with Adobe’s tools and never have to touch code. That’s what we’ll need for HTML 5 before we see the Flash-centric sites begin to move to it and, even after that happens, inertia will keep Flash alive a good long time.
I hope we get more people going with H.264, instead of Theora (where are the patent hounds coming after all the poor x264 users? *crickets*). This looks very good: “We don’t care what codec it is, as long as Gstreamer can demux and decode it.”
That was sarcasm, right? You would actually prefer a Web where, if you want to put a video on your own website, the legal eagles have the full right to swoop down on you and demand money for it? Yeah, that’ll be just great for everybody. We already have the audio police suing people for having a radio on within earshot of a human being; the last thing we need is the video police targeting bloggers.
Last I checked, h264 license costs were levied against the tool makers, not the tool users.
If I make an h264 video, I don’t have to worry about licensing costs… the browser vendor does… and now that Windows, Mac and Linux support h264 playback, I think it is moot.
You didn’t check very well.
Beginning in 2011, you will apparently have to pay a yearly royalty fee if you make a website that includes any h264 video. The more video data you send to more users, the higher the royalty fee will be. Sites such as YouTube are already looking at alternatives, to the extent that Google has purchased On2 (owners of the VP6, VP7 and VP8 codecs).
http://newteevee.com/2009/08/05/google-buys-on2-now-controls-vp6-co…
http://en.wikipedia.org/wiki/On2_Technologies
Why would you hope for h264? I think it is next year (2011) when the patent owners of h264 have said they will start charging everyone for each “transmission” (their word) of data encoded using a h264 encoder. The intent is apparently for a charge to be applicable not just each time someone uses the h264 codec to compress a video stream … but rather every time someone “transmits” a h264-encoded stream!
http://www.streaminglearningcenter.com/articles/h264-royalties-what…
Theora 1.1 (previously codenamed Thusnelda) achieves virtually the same performance as h264, but it is utterly free to use by anyone, anytime, for encoding, decoding or streaming, forever.
http://www.theora.org/news/
http://hacks.mozilla.org/2009/09/theora-1-1-released/
What’s with this craze about Theora? It’s not as if it gives much better visual quality than other codecs, and it’s not the only open and royalty-free one, either (at least Dirac comes to mind).
So why force one particular format onto the web, instead of letting content distributors choose between “transmitting” in a recognized industry-standard format (and paying the royalties) or going the free and open route (but forcing viewers to transcode in order to watch the same content on a standalone player; people often use content grabbers like Orbit, you know, and many players fully support H264 but not Theora)?
After all, it’s they who will actually be charged the aforementioned royalties, not those surfing the web…
There is no craze about Theora. It was just considered to be the best choice. Dirac had some limitations (something about not being streaming-friendly).
There should be a baseline format to make sure video works everywhere.
Theora 1.1 has performance roughly equivalent to h264. No other open and royalty free codec comes close.
No other public access transmission standard (such as TV or radio) has a choice of standards. There is just one mandated standard for transmissions, it is open and royalty free, and all comers may implement it and compete in the market for making both transmitters and receivers.
Why should video over the web be different?
I did not know that. Well, screw H.264, then. I rather think the way Fraunhofer did MP3 was awfully good, and promoted wide use. Royalties per transmission will be a good way to choke it off, and doing so after it’s been out and become popular will foster only the best PR.
I think it’s insane that we have IP law systems that will even allow that sort of licensing, too. A cost for an encoder and/or decoder, if you’re out to sell it to somebody, is entirely fair. A cost for that, and for making content and/or using it? No. Participation costs BAD. It is not analogous to something like TV, where there is ongoing content creation, and service maintenance, that basically doesn’t exist on the side of the video format guys.
“Theora 1.1 (previously codenamed Thusnelda) achieves virtually the same performance as h264, but it is utterly free to use by anyone, anytime, for encoding, decoding or streaming, forever.”
This, though, is still a bit of an issue, and will remain one. That statement is 100% false. Any modern nVidia card can even show that to be false under Linux+X with common software (AMD as well, in Windows). I’m not sure how that hurdle will be handled, in the future (GPGPU decoder programs?).
It is not false. It used to be the case that h264 was well ahead of Theora in performance, but recent advances in Theora have seen it catch up.
This is why I specifically mentioned Theora 1.1. H264 is well ahead of Theora 1.0 or earlier, but it is only marginally different in performance to Theora 1.1.
http://tech.slashdot.org/story/09/06/14/1649237/YouTube-HTML5-and-C…
Check it out for yourself here:
http://people.xiph.org/~greg/video/ytcompare/comparison.html
Same bitrate, same filesize, imperceptible difference in quality.
As for video cards playing the videos … there are several stages in video rendering. Decoding the data stream is merely the first step. It takes perhaps two or three seconds for a CPU to decode a minute’s worth of video data, so the codec decoding function is NOT the determining factor in replay performance.
Even if a given video card does not have a hardware decoder for a particular codec, once the bitstream is decoded from the codec format by the CPU, the rest of the video rendering functions can still be handed over to the video card hardware.
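To put the figures above into numbers (purely illustrative; the three-seconds-per-minute figure is the comment’s own estimate, not a benchmark):

```javascript
// Back-of-the-envelope arithmetic using the estimate quoted above:
// roughly 3 s of CPU time to software-decode 60 s of video.
const decodeSecondsPerMinute = 3;
const videoSecondsPerMinute = 60;
const cpuShare = decodeSecondsPerMinute / videoSecondsPerMinute;
// cpuShare is 0.05, i.e. about 5% of one CPU goes to codec decoding,
// which is why handing the remaining rendering stages to the GPU
// matters more than hardware-decoding the bitstream itself.
```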
It depends on what one means by the word “performance.” If you mean video quality and encoding speed, then yes I’d say Theora 1.1 and H.264 seem to be just about even. When H.264 jumps ahead though is decoding performance, and the reason is simple. There are video accelerator chips for H.264, and at the moment there are none for Theora. It’s sort of a catch 22 situation: there are no accelerator chips for Theora so we won’t see any major content producers use it, but until one of them does start using it there won’t be any demand for accelerators. Right now, all Theora decoding is done in software by the CPU. It’s not an issue on desktops or set top boxes, but on laptops and portable devices it’s a mother of a battery guzzler. To be fair, so is H.264 without hardware video acceleration, but that makes little difference to content producers and providers. Currently, H.264 delivers in a huge area where Theora does not, and if I had to bet on a codec that eventually replaces H.264 over licensing I’d bet on VP8 or whatever Google ends up naming it and not Theora. I can only say one thing for certain: next year is going to be very interesting in this area.
It depends on what you mean by decoding performance. Certainly it is true to say that decoding and rendering video in hardware is far faster than doing it in software, but that is NOT by any means a codec-dependent observation.
As I said in the grandparent post:
Decoding the video data from the codec-compressed format into raw video data is but a small part of the problem of rendering video. This part of the task requires only a few seconds of CPU time for every minute of video. The amount of spare CPU time then is over 50 seconds per minute. If the decompressed video from a Theora-encoded video bitstream is passed on at that point to the graphics hardware, the CPU need be no more taxed than that.
The actual savings from having the video codec decoder function also implemented in the graphics hardware is not much at all … all that one saves is a few seconds per minute of CPU time.
As you say:
That is true. And when it comes to playing video, if the decoding is done in hardware (say for h264, which some graphics cards do have a decoder for), then the only savings is a few seconds per minute of the client’s CPU.
However, be advised that a number of programs that play Theora perform the entire rendering of the video stream in software (i.e. in the CPU rather than the GPU), and so they do not use the graphics hardware at all. These programs tend to do the same for h264, though, so once again playback performance will be no better for one codec than the other.
H.264 offloading from AMD and nVidia offers 1/2 to 1/8 the CPU use of doing it in software, allowing playback on machines that are otherwise not capable, and allowing post-processing on machines that are. If codec Y can use that kind of hardware and get better playback than codec Z, which can’t, then it is absolutely codec-dependent, regardless of how much of the time is spent just decoding the video stream.
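To make the offloading claim concrete, here is a minimal sketch of what a 1/2 to 1/8 reduction means for a machine near its limit (the starting CPU figure is a hypothetical example, not a benchmark):

```python
# Sketch of the hardware-offload claim: H.264 decode acceleration reportedly
# cuts CPU use to between 1/2 and 1/8 of the software-decode figure.
# The 90% starting point is an illustrative assumption.

def offloaded_cpu_pct(software_cpu_pct: float, reduction_factor: float) -> float:
    """CPU use after offloading, given software CPU use and a reduction factor."""
    return software_cpu_pct / reduction_factor

# A machine pegged at 90% CPU in software decode would drop to:
print(offloaded_cpu_pct(90.0, 2))  # 45.0  (best case stated: 1/2)
print(offloaded_cpu_pct(90.0, 8))  # 11.25 (worst case stated: 1/8)
```

That gap is the difference between a netbook that can play 1080p and one that can’t, which is why the comment treats hardware support as a codec-level advantage.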
Meanwhile, if they had all had to settle on a common toolset to program their chips, we might have software-based helper programs in Brook or OpenCL by now, and not be worrying about it so much.
If MPEG-LA decides to get greedy this year, then this could and almost certainly will change rapidly.
Note that one can make a hardware Theora encoder/decoder completely royalty-free, so for hardware makers, making such a thing would be cheaper than making the H264 equivalent. Then it’s just a matter of baking the silicon. The only thing stopping this is the current lack of demand.
At this point we seem to be repeating the same mistakes of history over and over. Remember the GIF fiasco of the early Internet? If MPEG-LA were smart, they’d lay low, not push their license fees higher later this year, and give the Internet & gadget makers more time to get hooked on their drug. They should wait 3 or 4 years, and *then* stiff everyone using H264 with a huge bill.
I personally hope they’re more greedy than smart, so this would-be ‘GIF fiasco 2.0’ repeat doesn’t even get off the ground.
That’s the crappiest argument ever. There WILL be acceleration for Theora if Opera, Chrome and Firefox all support it on all platforms. I can promise you that.
Well, I believe you need more than “support” for the acceleration to happen: you need the *CONTENT* to be in that format.
My bet is that if youtube encodes the videos in an open format, that’s the one that will win the “battle”.
Just an opinion though
“It is not false. It used to be the case that h264 was well ahead of Theora in performance, but recent advances in Theora have seen it catch up.”
It is false. Nvidia, Intel, AMD, Imagination, and Broadcom, just off the top of my head, have H.264 offloading in designs with their names on them. You can give a nice GPU (and drivers, and playback software) to an Atom to get flawless 1080P H.264 playback. Theora has some Google Summer of Code entries doing it on an individual level.
It’s not an easy hurdle to get over, though if an “everyone pays for every product, and every use of that product” license system goes into effect, it may have a chance.
Good for H.264! Since doing the same for Theora is absolutely free, and since it is in the interest of some of the biggest tech companies in the world (Google, Vodafone, Sony, etc.), Theora will be supported by hardware soon enough.
It’s no good if no one uses it, though. It’s also no good if it is a moving target, which is a problem for many FOSS projects. H.264 was pretty much guaranteed wide use, and the format was set in stone some years ago, which is how it got to where it is.
As long as it stays in such wide use, it will be the one to choose. Even as royalties start hitting, the availability of H.264-enabled hardware and software will help keep it going.
It may not cost royalties, but it is not going to be free as in beer to add Theora support. If they get too greedy, though, it could happen: not because Theora is “free,” but because H.264 would become too expensive.
Having a totally free initial period is not going to be good if they turn around and start charging everyone, even if it’s a nickel here and a dime there.
Theora will be supported by Firefox, Chrome and Opera. That’s a growing part of the browser market.
What makes you think Theora is a moving target?
Royalties will hopefully help kill H.264. We need an open web, not a patented, expensive web.
Theora will be free to support, and with several major players pushing for it there’s no reason for hardware vendors not to.
“What makes you think Theora is a moving target?”
Its history was just that, until after H.264 came out. In addition, the encoder has taken a great deal of time to get reasonably good.
Another thing, too: I imagine many of the lowly non-professional x264 devs will get reasonably pissed off if the royalties start to become detrimental. That would end up good for several future FOSS projects, I bet.
Of course Theora takes time to settle, especially since there have been no real commercial interests in pushing it along.
But now there are, and Mozilla has already invested, and the result is Theora 1.1. Which is backwards-compatible, IIRC. Which means that the “moving target” argument is moot. All the moving target does is continuously improve encoding performance, while decoders reap the benefits without having to change.