For its upcoming Firefox 4.0 release, Mozilla has been concentrating a great deal on what is often cited as Firefox's main shortcoming … its speed. For example, Firefox 4.0 will include GPU acceleration for both rendering and compositing on Windows 7/Vista, OS X and Linux, and GPU acceleration for compositing only on Windows XP.
JaegerMonkey is the new addition to the SpiderMonkey JavaScript engine in Firefox 4.0, through which Mozilla hopes to at least match the performance of the JavaScript engines of competing browsers. Although the GPU acceleration features of Firefox 4.0 have been showcased on Windows 7/Vista for the past couple of beta preview releases, the JaegerMonkey JavaScript engine improvements have not yet made an appearance. The first preview of JaegerMonkey will appear in Firefox 4.0 beta 7, which is not going to be ready for release until the end of October.
So, have Mozilla achieved their aim for JaegerMonkey yet? JavaScript performance is a complex matter, and it is possible to write benchmarks which concentrate on one aspect of JavaScript performance and gloss over another. Not surprisingly, there are a number of JavaScript benchmarks available (Kraken put out by Mozilla, SunSpider by Apple and V8Bench by Google) which just happen to indicate that Firefox, Safari and Google Chrome respectively have the fastest current JavaScript engines. This isn't really much help to anyone.
Given their charter and ostensible mission of helping people, Mozilla have been tracking the performance of their JaegerMonkey engine against the equivalent engines within Safari and Chrome using the SunSpider and V8Bench benchmarks (as opposed to their own Kraken benchmark) for some time now. In this way people cannot accuse their results of favouring Firefox.
For the entire time that JaegerMonkey has been in development up until now, the Mozilla performance tracking website has indicated that 'no', Mozilla's JavaScript engine was not yet as fast as Apple's or Google's. For the first time, in a very recent update, the site no longer says this. Very subtly, and despite the results on Google's V8Bench, Mozilla is saying that its JavaScript engine has, in their view, finally caught up with the performance of the opposition.
Enjoy it when it becomes available soon for the first time in Firefox 4.0 beta 7.
It has been pretty impressive to watch JaegerMonkey, Apple's Nitro and Google's V8 going at each other, making improvements.
But where does IE9's Chakra engine feature in all this? Or Opera's Carakan? I guess because they are closed source they don't feature on the arewefastyet website, but they would be useful in the comparison as well.
The reason they don't feature on the website is a different one: they do not support batch JavaScript execution in a shell, so there is no easy way to tie them into such an automated performance test. But Mozilla said they would at least like to add Opera benchmarks.
Check out arstechnica.com, they just posted a browser shootout across all the major JS benchmarks.
In short, IE9 blows, Chrome rules, and Firefox isn’t always last.
As usual then
But benchmarks are boring. I don't run my browser with one single tab only, trying to execute JavaScript as fast as possible.
I want a responsive browser at startup with multiple tabs, with 100 tabs in the background, when playing Flash content (music) in the background (though that probably slows down the whole browser compared to stopping Flash, so maybe I only want visible Flash to play anyway ..), and so on.
Also good undo/history without losing data (AJAX sucks), everything back as it was when the browser crashed, with text in the textareas and such if possible, a smart URL field, …
Thank you for putting it this clearly. I’ve been dying for this feature ever since I realized a few years back that Opera was saving the text in field inputs through the navigation history (what I mean is that back/forward keeps your text… how many times have I ranted against FF for losing a forum post because of an accidental slip on a mouse button). In addition, the browser saves the open tabs between sessions, including each tab’s history.
So, I've been wondering why it can't also save form content and whatever else, so that browser sessions continue transparently, as if the browser or computer had never been stopped.
In the case of javascript crap, it's most likely because the browser can't guess/assume which state you want everything in.
But sure, store whole javascript result/code/whatever for all I care
Firefox 4 does that (when it can).
I don't get the meaning of that. Does it do it already? Is it conditional or hypothetical?
It does what?
– Save the text in textareas so that even after back/forward the text is still there? yes it does. As I’m not a FF user, I noticed it in 3.6 after, once again, accidentally hitting the back button on a mouse.
– Keep sessions? Then it’s a great feature to have, one of the two that make me stay with Opera. Just make sure each tab’s history is also kept.
– save form content and continue sessions transparently?
As a side note, if navigating the history could just be **displaying** (not reloading, recalculating, or anything) the page as it was composed and displayed, Firefox would have my vote. Opera had this up until just after 9.27, IIRC; then they introduced the messed-up "history navigation mode". I know it has to do with JavaScript, but that makes navigating the history a pain, as the offline mode became totally useless to me: even cached pages require (most of the time) a network access to be viewed again.
As I posted above, the SessionManager extension (formerly SessionSaver) has done all of these things for Firefox for years. Firefox eventually got some basic built in session management, but nobody I know uses it.
This is the same story as with tabs. Originally tabs were an extension, and when Mozilla finally added support natively it was a different and more limited implementation. I continued to use the original extension for years, until the native one reached parity in the areas I cared about, and I still use extensions to let me tweak the behavior to be the way I like.
This is just the way Firefox does business.
Then it makes it even harder to understand why people criticize Opera for petty UI concerns when they can change the UI in seconds, whereas with Firefox, many features are delegated to extensions.
If this has existed for years and it still hasn't been deemed an essential feature to be provided by the browser itself, then I obviously don't understand a few things. It feels like, in the way "Firefox does its business", the browser is for rendering web pages and not so much about usage features. There seems to be a separation between what the application does and how it is used.
It makes perfect sense to me. Firefox provides the core behavior that everybody needs, then each user adds on the extra bit that’s particular to him. Now IMO session manager ought to have been included in Firefox as soon as it was available, because I think everybody needs it, but it wasn’t. It is effectively being included over time, piece by piece, which is acceptable from an incremental-improvement point of view. And, to be honest, it took a long time and a lot of testing to become as stable as it is now.
The important thing here is that Opera is complicated out of the box and Firefox is simple out of the box, with the complication *you* want ready to be added. A lot of people get lost in Opera the way they get lost in KDE: too much stuff everywhere.
I find this appropriate, the more so because there’s not much consensus on what ‘features’ are essential and consensus takes time to emerge. Extensions make a great proving ground for what should be core behavior and what shouldn’t be. The maintainers of the core are cautious and conservative, and if I had a complaint about this it is only that they are not *more* cautious and conservative.
I never lose forum posts in Firefox, not for years. SessionManager takes care of restoring textareas for me very nicely.
Is “SessionManager” an extension? If yes, storing and saving sessions, and saving form content should be baked in the browser. If no, then I lost forum posts out of ignorance.
Try the Lazarus Form Recovery add-on for Firefox:
https://addons.mozilla.org/en-US/firefox/addon/6984/
Thanks, I’ll try it out.
I don’t know which article you were looking at but here it is for those who wish to see how much validity phoenix has in his post:
http://arstechnica.com/microsoft/news/2010/10/windows-browsers-benc…
Excuse me, but where does Internet Explorer 9 blow? Sure, it isn't the fastest, but it isn't the slowest either. Also, synthetic benchmarks mean nothing when JavaScript is but a very small part of the larger 'speed' equation. All the benchmarks show is that the beta version of Internet Explorer 9 is dogfooding many parts of the Windows stack that need optimisation, rather than being definitive evidence as to the suckage factor of Internet Explorer 9.
They used 4 different benchmarks, not all of them synthetic, not all of them testing pure JS performance. Peacekeeper, for instance, tests a variety of things, including 3d, encryption (for ssl connection), login/out speed emulating social networking sites, etc.
Looking at the numbers, Phoenix's summary is rather accurate. IE9 has half the speed of the second-slowest browser in Peacekeeper, for instance. It edges ahead slightly in the Nontroppo test, and the difference in SunSpider is again negligible (except for IE9 64-bit, which blows). Lastly, in the V8 test every browser runs circles around IE9. The numbers for Chrome speak for themselves: it's still king in almost every test. Firefox is never actually last, but that's only because of IE9 64-bit's horrible performance (if not for IE9 64-bit, it would be last in SunSpider and V8). I'm talking about the beta browser tests, of course.
Note that the Ars benchmarks are using FF4b6, which was the last release before the JM engine got included. So current performance is a fair amount better, judging by other reports it’s edged slightly ahead of Webkit but is still well behind Chrome.
That's good news. I think at this point the difference between the top 3-4 browsers is not really significant or noticeable; ultimately it comes down to which UI you prefer or find more responsive. On my PC it doesn't really matter: both FF (even 3.6.10) and Chrome are fast enough. On a netbook, Chrome is significantly ahead, as others have noted, in UI responsiveness and rendering speed when compared to FF 3.6.x. FF4 seems to close the rendering speed gap, which is cool. I wonder how well it performs on those cheap Atom N260/280 1GB RAM, Intel GPU netbooks though…
The only place Firefox 4.0b7 is behind Chrome is on Google’s own v8bench benchmark tests.
FWIW, Chrome is much further behind Firefox on Mozilla’s own Kraken benchmark tests.
Oh wow, you are such a magical person, to be able to run FF 4.0 beta 7, which hasn’t even been released yet, and won’t be for another few weeks. Tell me, Mr. Time Traveller, sir, how is the state of the economy in November?
It’s not magical, he just uses nightlies (nightly.mozilla.org if I remember well)
And indeed, their performance rocks. But if you want to try it, please note that the Windows x64 build is extremely unstable currently; better use Windows x86 if you're on Windows…
Actually, I use the mozilla-daily ppa for Ubuntu.
https://launchpad.net/~ubuntu-mozilla-daily/+archive/ppa
This ppa offers 64-bit builds for Ubuntu, and it is up to version 4.0b8-pre.
Be warned however that it is unstable code … yesterday it didn't build, AFAIK.
They may have used the right tests, but they didn’t actually run Firefox’s new javascript engine.
Jaegermonkey won’t be included until Firefox 4.0beta7. They used Firefox 4.0beta6.
These facts are clearly visible in the test results on Ars and also in the OP for this thread.
FTA:
JaegerMonkey makes a huge difference to Firefox's JavaScript performance.
Cheers.
That's good news – I appreciate the work the Mozilla guys put into this new release
They tested Firefox 3.6.10 and Firefox 4.0beta6.
Neither of those include the new javascript engine Jaegermonkey. Also, GPU acceleration is not enabled by default … you have to explicitly enable it.
In short … they didn’t test what Firefox 4 might actually be like.
Who cares what it might be like. The point of benchmarking is to show what things are like right now.
When FF 4.0 beta 7 is released, then you re-do the benchmarking at that time to see how it compares.
You can’t benchmark “future products”. Only “currently available” products. Otherwise we may as well “benchmark” SuperDuperLookAtMeMonkey that has uber-profiling and super-chrome-wheels that will blow away every program on the planet.
There's nothing wrong with benchmarking beta 6; I just think there should be an asterisk next to it saying that the results will not be representative of the final FF4 product, since beta 7 will have such a huge increase in performance. Anyone who tests the nightlies can confirm that to be true.
Duh! That's the whole point of the word "beta". (Plus, it's noted in the arstechnica article.) These aren't Release Candidates, which will be almost identical to the final product. These are betas, as in 'almost feature complete, but still with lots of debugging enabled, and not all possible optimisations enabled'.
Which is completely beside the point, as it’s not released yet. You cannot compare some future, possible, maybe, we’ll see, product against currently released products. That’s like saying Duke Nukem Forever is the greatest video game ever.
How is beta6 any more “released” than the latest nightly? They’re both compiled, prepared, and distributed by Mozilla. I guess you just mean it doesn’t have a fancy name, so therefore it doesn’t exist in your mind? And if it’s not OK to test a nightly because it hasn’t been released as a beta yet, why in the world do you think it’s OK to test a beta when it hasn’t been released as a final version yet?
The point of testing the betas is to give an idea of what the future performance of the actual final releases will be like. In this case, beta6 doesn't really do that, so I think it should be clearly stated. Again, testing beta6 is fine. It should just come with an asterisk that it's not going to be representative of the final product. Testing the existing nightlies would be better, and in fact Ars said that that's exactly what they attempted to do, but gave up when it wasn't stable enough at the time.
Err, no beta is released yet.
You can get a copy of Firefox 4.0beta7-pre which is stable … it does currently have a few blocking issues which are preventing Mozilla from releasing it for trials, but it does exist, and the issues AFAIK don’t have anything to do with Jaegermonkey.
You could perhaps consider ALL betas and previews as “future, possible, maybe, we’ll see” products, and shun them on that basis, but then you will never be able to present an article previewing upcoming releases of software. But somehow “counting” IE9 beta as representative but not counting Firefox-4.0beta7-pre is weird. They both exist, neither is vapour.
Firefox 4.0 beta 7 is in preparation for release. There are apparently some blockers preventing it from being released, but new development has shifted to beta 8.
As I said, beta 7 isn’t going to be developed further AFAIK, it is feature-frozen and waiting to be released.
If we wait for the finally-released version of Firefox 4.0, if current trends hold it will be even better than what is available right now, today, in either beta 6, beta 7 or beta 8.
We are talking here about code that is real, and that you can run today if you want to. Unlike the promised version of “SuperDuperLookAtMeMonkey that has uber-profiling and super-chrome-wheels”, Firefox 4.0 beta 7 is NOT vapourware, and it includes for the first time the javascript engine called JaegerMonkey. Not LookAtMeMonkey.
AKA, not yet available, not released, etc. How hard is that concept to grasp?
IE9 isn’t yet released either. It is all just a question of terminology, since neither a beta, a pre-beta nor a “preview” is an official release.
So the question boils down to this: do you really want to foretell for users what the next version of the browser (be it IE9 or Firefox or Chrome or any other) might be like? Do you intend to do that by running benchmark tests against the most indicative code that represents that yet-to-come final release?
Then do that. Jaegermonkey will be in Firefox 4, there is no question of that. There is code for Firefox, available right now, today, with a preview of Jaegermonkey in it. Everyone can run it, it is only a download away.
So do you want to see what it can do, and what Firefox 4.0 final will be able to do at the minimum … or is it your agenda to hide that away from people?
If you really want to do a speed comparison test that shows what the next upcoming round of browser releases might be like compared with one another, and you don’t include Jaegermonkey code, then you simply haven’t accomplished what you set out to do.
However, if you want instead to spread false impressions, then of course you have achieved that objective by running Firefox beta 6 or earlier, but you had also better be prepared to stop publishing such comparisons in a couple of weeks time, because your earlier results are going to look very silly.
arewefastyet is really a simple way to see the impressive progress by the FF-team, but there are MUCH better comparisons available online. The best I found so far is http://net1news.com/101011-01-firefox-in-the-dust.aspx
In that comparison they not only compare performance, but also the resources needed to accomplish that performance.
A lot of improvements in browsers' standards support are made because they want to reach Acid compliance. A lot of JS speed improvements have been made so scores in famous benchmarks would be higher. I just hope that scoring well on these tests doesn't become the major motivation for improvements.
The problem is that benchmarks of any kind are useless if you don't understand them, and let me tell you straight, 99.9% of the posters have no idea what they really represent (JavaScript encryption tests for SSL are a funny joke, for example, but that's one out of 10000 examples).
I could link to Kraken, which shows Firefox being a whole TWICE as fast as the latest Chrome engine, and Kraken is tuned pretty much for the "real web experience".
But that would be just as pointless if the benchmark is not understood.
Also, IMO they’ll release FF 4b8, not b7. The pre-beta nightly releases have all been versioned as b8-pre and they seem to be skipping b7. We shall see.
And yes, the b8-pre is indeed enormously faster.
If you don’t mind “alpha rated” code: http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk…
The 4.0b7 branch has already been created, which is why nightlies now identify themselves as b8-pre. I’ve been tracking the blockers in Bugzilla and I’d expect to see b7 released the week after next (week of the 25th) based on the current progress.
I think b8 will be less ambitious than b7, which is the first release with compartments. b8 will enable per-compartment GC, which will no doubt introduce bugs but is less earth-shattering than the changes in b7.
b8 should also introduce linux build system improvements, which could make a significant difference to performance. (GCC 4.5, switching from -Os to -O3, and turning on profiled compilation)
Profile feedback could help enormously. In code of my own I have seen performance improve by as much as 40% with -O3 and profile feedback.
Getting a good profile is important though. Using a bad profile can seriously hurt performance in uses that weren’t represented in the profile.
Excuse my ignorance, but what is that "profile" thing you are both talking about? I tried to look for it in the GCC manual, but I'm not sure I got it.
Apparently, it's about first making a "benchmark" binary which collects data at runtime about which functions are run, how many times, how much time is spent in them… You use that binary in a fashion that's representative of real program use, long enough that you get a statistically viable pack of data.
Then you ask GCC to compile a normal program, but this time you give it the previously generated data. With this information in "mind", GCC can know what the hot spots of the program are, and choose more wisely where to apply inlining and aggressive optimisations.
Did I understand what they said properly?
Yes, that's exactly it. It's often called PGO (profile-guided optimization). They turned it on for Windows a while back (3.0, I think) and it apparently gave a nice boost at the time. Older GCC versions had some bugs that kept them from using it, but with the change to GCC 4.5 they're trying to use it there as well.
Nice stuff! I envisioned doing something like this by hand in my code, but an automated solution is much more relevant in that context… Too bad it probably cannot be used in kernel code, which does not have disk I/O yet! ^^
Yes, it is a great optimization. I would say the most important optimization in recent years, since it gathers data during the running stage, which allows it to make very informed optimization decisions resulting in reduced cache thrashing, much better branch prediction, better loop unrolling, effective code reordering, etc. Typically I get between 8-12% speedup when using PGO; in very computationally intensive code I have had around a 20% increase. The drawback of course is that you have to compile once, run it (preferably touching as much code as possible) and then compile it again.
Actually, it has worked fine with Firefox since GCC 4.4 at least; I've built it PGO-optimized on Arch Linux for quite some time, and yes, there is quite a difference in speed. This is often a reason why many people on Linux find Firefox slow, since many repositories have not used PGO for their Firefox binaries.
Is there some howto-pgo I can read that will tell me how to use it in general or, better, the procedure for Firefox specifically? I’d love to build myself a profiled binary since I’m sure stock-debian doesn’t do this.
Your interpretation sounds about right. It's called profile-guided optimization (PGO). I believe the Windows builds of Firefox use it, but not the Linux ones.
In GPU utilization there is no difference between Firefox 4 and IE9. The APIs are NOT available in Windows XP; that's the same reason IE9 is also not released for Windows XP.
Soon all browsers will have these. Chrome, Safari and Opera will follow the same path as IE9.
JavaScript speed is not a performance meter; UI responsiveness per number of JavaScript-heavy tabs open is. Firefox reaches an unacceptable level after 25. Opera soon reaches it too, around 40. Chrome never reaches it. I never stressed Safari enough to reach the limit, but it feels fast. I did not try IE with more tabs than the ones necessary to install Chrome and Firefox.
Firefox is my default browser, but it is so slow with the new Panorama feature that it soon becomes unusable.
It is one of the infamous bugs that block 4.0 RC, so I hope they will do something about it.
Firefox uses JavaScript for their UI, so it may cause improvements there.
I use the JM engine; it does not help. The UI depends on the pages. They use the same rendering thread (why???) and the same JS engine instance, so if a page eats part of the TM crunching power for itself, it is not available to the UI anymore. That's the problem. No matter how fast the JS engine is, if you can saturate it, performance will immediately plunge.
Ugh, this is bad news. I had really hoped 4.0 would change this because I get frequent UI mini-lockups when browsing due to this problem (you can blame me for using too many tabs, which is true, but it’s such a fixable problem!).
I should really be THE benchmark guy for Firefox, ’cause it seems like I stress it more than anyone else. >1G RAM, 400 tabs… just another day at the office! BarTab helps a little, but Firefox still wants to cry (especially under Linux).
You don’t have to wait to test it: download a daily build and you can test it now
I do suggest you first create a separate Firefox profile: run Firefox with the -P option from the command line! Just in case, so you don't break something in your existing installation.
Of course it's a daily build; don't expect stability.
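The suggestion above as a command-line sketch; the profile name "nightly-test" is just an illustrative example, the options themselves are standard Firefox switches:

```shell
# Open the profile manager so you can create a throwaway profile:
firefox -ProfileManager

# ...then launch the nightly with that profile. -no-remote keeps this
# instance separate from any already-running Firefox, so the daily
# build never touches your main profile.
firefox -no-remote -P nightly-test
```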
I do test it and report bugs about speed and all that; they are just not fixed. There is nothing to test in this regard, it's slower than it has ever been.
Unfortunately, I have to wonder if the new speedy JS improvements will be turned on by default for the UI. They turned it off because it was interfering with at least one extension.
The last big speedup (with TraceMonkey) landed in 3.5 for content but was only flipped on for the UI in 3.6.
You mean Opera doesn't choke on a *single* Slashdot tab anymore? Good to hear. The latest Opera version I used more or less extensively was 10.10 (couldn't stand the UI changes in the later versions for some reason), and opening a Slashdot story with an average comment count used to make its UI freeze for an annoying while.
Yeah, that's fixed; 10.10 was still in the "slow" generation. Now they are trying to compete with Chrome, so it's less slow. I hope Firefox will do that too without losing all the extensions.
Were you using 10.10 on a Mac? 'Cause I have used it in the past, and the improvement when switching to 10.52 was enormous to say the least: boot time, rendering, responsiveness, everything felt so much faster. 10.10 on Mac OS X was a lackluster release.
I was using it on Windows.
I remember when the Mozilla browser was in the same boat and they came up with the fast and light Firefox.
Now Firefox is slow (even without any plugins) and people just go with Chrome. They didn't learn a thing :/
Yeah, basically the main complaint isn’t about how fast it renders pages but how responsive the interface is.
Seeing as how it's the only major browser to use a scripting language for the interface, it's easy to see how it gets outpaced by all its rivals in that area. In the old days, when the interface was un-bloated, it wasn't such an issue, but the more they build on it, the more hangs and freezes they'll have to deal with.
It's the only browser with a scripted UI and thus the only browser with a *scriptable* UI. A lot of Firefox add-ons would simply not be possible in other browsers.
I’m running the Firefox nightlies on my machine at home and actually I find the performance pretty good especially when it does come to the interface. I don’t have a huge amount of tabs open but the performance is substantially better than for example 3.6 on Mac OS X. With that being said the problem I have is when it comes to Flash playing on Firefox.
On Safari, the Safari process in Activity Monitor will have low CPU utilisation whilst the plugin is around 27%, yet with Firefox I'm seeing the Firefox process and the plugin at around the same level of CPU utilisation. My assumption was that the move to layers being accelerated with OpenGL would mean that presentation would be on the OpenGL path in the same way that it happens in Safari?
Safari for example has NPAPI Pepper extensions which enable Flash to do its thing without shooting Safari's CPU utilisation through the roof. Are we going to see a similar improvement in the case of Firefox, or are we yet again going to be disappointed that long-standing issues are left unaddressed?
I agree. For me it is very acceptable as long as it does my job. It could run faster, but I believe that is a limitation of the design of FF as a piece of software. If they change the way the software is developed they can have better results.
It is rather funny, this. Firefox was meant to be lightweight and responsive. Apparently we didn't need all the features of the likes of Opera, but over time it turns out we do actually want a feature-rich browser, so they merely gave in to the weight of expectation.
That’s just a matter of priorities.
At that time we needed a browser. There was IE, single-platform (dual?), and the closed-source, poorly rendering Opera. On the open side there was Mozilla, which was constantly in development, buggy and heavy. There is nothing wrong with features, but not at the cost of basic functionality.
Firefox was merely a vehicle for shipping out polished Mozilla code. As we all know that turned out to be quite effective and Firefox has since “stolen” Mozilla’s show and became a platform on its own.
The future here will continue to be FF, Seamonkey and lighter webkit using clones like Midori. Never got comfortable with Opera since v. 3.x, and Google, well, too much concentrated power.
I will stick to Firefox too. In the past I’ve considered some alternatives, driven mostly by sluggishness of the Firefox GUI. The issue was largely addressed in 3.6 and essentially solved in 4.0b. From that point on all major browsers are on par in terms of performance, robustness and basic features (at least for me). That means I’m finally free to choose a browser based on criteria that really matter to me:
– privacy features,
– multiplatform support and openness,
– availability of some obscure extensions matching my domain.
What I'm particularly grateful to Mozilla and the Firefox team for is that thanks to their ideals (yes, good ideals can have both practical and measurable effects), hard work and good PR, they have managed to dismantle IE's monopoly. Google and Apple helped a lot by joining the movement, but it was Firefox that set the fire. It perhaps doesn't matter in day-to-day experience, but I like the feeling that the simple action of using a particular browser may extend the freedom of all of us.
I disagree with the “Apple and Google helped a lot against IE” part.
Apple made Safari because Microsoft didn't want to work on IE for Mac OS anymore, and they wanted a tightly-controlled closed-source solution (remember that if it were not for GPL licensing and polite mailing from KDE lawyers, WebKit would be closed source). By the time they released it, few people still used IE on Mac OS; it was just an attempt to get control back over their system's main browser. It took a lot of time before Safari became available to the many via a (poor) Windows port and the iOS port, and by that time Firefox had already won, owning more than 30% of the browser market and thus being mandatory for web developers to take into account.
Chrome was very late to the party too. What it did was showcase the capabilities of the WebKit engine much better on non-Mac platforms, and let people discover that a much faster and easier-to-use browser was possible. So it helped in the sense that it introduced much-welcome competition for Firefox that was less geekdom-oriented than Opera. It helped start the browser speed war which led Firefox to improve itself in that long-neglected area, and it also introduced a general move towards easier UIs that eventually led Opera to be much easier to learn now (at the cost of alienating some long-time users).
So they helped, but not in the IE fight. Rather, they pushed browser engine technology and browser UI forward, which is a good thing too. But as far as browser ethics are concerned, we should not forget that we owe everything to Firefox. That includes protection against Apple's nasty move of pushing H.264 forward, a nasty move towards proprietary WebKit tags in places like apple.com (which could make WebKit the new Trident if left alone), and various privacy-friendly features in the upcoming releases of FF that Google would certainly NOT have introduced.
Yup, I agree that they are latecomers. It’s kind of interesting that at that time the only entities trying to change the status quo were either non-profit organizations like Mozilla or the Konqueror team, or small businesses like Opera.
What I meant by “helped a lot” is that these browsers (perhaps except for Safari on iOS) added to competition. More competition is less monopoly, as simple as that. Competition means not only more choice for the users but also more standard-based website design and lower entry barrier to other wannabe browser makers.
True. But long-time users have remained Opera users for one reason or another. In all cases, they all know how to revert the “modern look & feel” that has been present since version 10 and go back to the excellent-in-all-respects 9.27 release. I have either stuck with the default skin or used the “Opera Classic 2” skin on all installs. But you can make Opera look like any browser out there, or anything you like. The menu is brought back with one click. The same goes for the side panels using one keystroke. Keyboard mappings are changeable. I reverted all those recent changes in less than two minutes and I’m not that much of a fiddler.
I doubt any long-time user left because of changes in looks, shortcuts or behaviors. The greatest thing with Opera is that if something doesn’t suit you, you change it. So far, I’ve never had a configuration thing that I wanted to change and couldn’t. Others might have had such things but I didn’t.
Anyway, I don’t think that Opera losing long-time users has anything to do with the UI. All the less since (as you said it) Opera is for geeks… these people are not easily scared by a preference dialog.
I think it’s a matter of first impression and priorities.
Before, although I loved Opera’s tech (fast, reliable, innovative), the UI kept pushing me back to Firefox. It felt cluttered and complicated, and I disliked the default theme while finding the other ones either ugly or unusable. I didn’t have the will to roll my own skin.
Now, its spartan look feels more friendly to me, more focused on the main “browse the web” mission. But at the same time, seasoned users can see it as crippled, as a firefox copycat that’s less empowering than the old Opera they’re used to. Even if they know they can fix it, it just feels simplistic at first sight.
I agree that starting your own skin would be… an extreme measure.
In fact, I (and you too) was talking about long-time users, not those new to it or those who want to give it a try. These long-time users won’t see the new lean UI unless they’ve faced an upgrade problem because when installing over a previous installation, the previous settings are kept.
And no, no long-time user will think of Opera as crippled, even if they noticed that the new UI is different, after installing from scratch like I did for instance. Hidden, masked, or not shown, yes. Crippled, no.
People may leave because one feature (or more) is absent (no individual delete from the cache, no counter for searched strings, Dragonfly not being as usable as Firebug, fast history navigation not being as fast as it once was, etc., and these are just some of my own gripes). A geek dropping Opera because the UI, or something they know is totally configurable, has changed is unlikely.
To be perfectly accurate, IE was available on Windows, Macintosh and a few Unix platforms (Solaris and I think HP-UX?) at that time. If you didn’t know this, you’re in a big club; few people knew (or cared if they did know).
Looks like wikipedia has some info: http://en.wikipedia.org/wiki/Internet_Explorer_for_UNIX
I’ve even used that IE on Solaris (although “used” is a bit of a stretch).
No, that version of IE is irrelevant to this discussion. It only served the purpose of dragging Netscape 3/4 users to IE, and it was promptly killed after reaching this goal. Also, note that no version of IE has ever been released for any of the “free UNIX” systems, or for commercial UNIX systems running on x86 (where they would actually matter).
“and it will have GPU acceleration for compositing only for Windows XP”
is the reason i booted ff from all my machines. i only have linux. ff has been nothing but a slow dog for a long time.
as much as the free community was the first and loudest supporter, they also got the worst treatment of all. they can keep ff as far as i care.
*** happily replacing ffox everywhere with any webkit based browser
Did you miss the first part of that sentence?
If you read his post, his complaint was the lack of rendering support on Windows XP. Windows XP is dead and it is time that people wake up, say goodbye to a dear friend and move on with life. I swear there are some people who stick with old crap just so they can play the martyr card when software vendors choose not to support their OS of choice.
If you read his post his complaint was that the ‘free community… get the worst treatment’.
He quoted the bit about the lack of GPU rendering in XP because he misunderstood. He thought that XP was the only OS which will be getting GPU accelerated compositing (not surprising, given how it was written).
Just for your information: we do hardware-accelerated compositing on Windows XP, using Direct3D 9.
What we don’t do on Windows XP is content acceleration using Direct2D and DirectWrite.
Indeed, I think that was the case. The sentence was poorly written. Since I am not completely sure I understand it, can someone clarify? I first read it as: Win7, Vista, OS X and Linux will have GPU acceleration for both compositing AND rendering, but at the same time ONLY XP will have support for GPU-accelerated compositing.
Almost but not quite. Firefox under XP will have GPU acceleration for compositing only.
The word “only” applies to the element closest to it in the sentence, which in my sentence was “compositing” and not “XP”.
Sorry for the confusion, but that is just English syntax I’m afraid.
Read the linked article for a fuller explanation:
http://hacks.mozilla.org/2010/09/hardware-acceleration/comment-page…
Yeah, I got that when I read the sentence the second time. The sentence was kinda fuzzy for me, maybe because English is not my native tongue. But apparently I wasn’t the only one who misunderstood. However, I didn’t troll about it.
It’s fine. “Only” is just one of those words that English syntax is vague about. That is not your problem, it is a problem with the English language.
I should have used commas to make it clearer.
Well, the meaning of the word “only” within a sentence in English is extremely position-dependent.
Example: each of the following two sentences mean entirely different things:
1. The cat only sat on the mat.
2. The cat sat only on the mat.
The first sentence probably means that the cat did not stand or lie on the mat, but only sat on it. It might also mean that the cat was the only creature to be sitting on the mat. If it read instead “The only cat sat on the mat” it would mean that of all the creatures in the room there was only one cat, and it happened to be sitting on the mat.
The second sentence means that the cat did not sit on the floor or the carpet, but only on the mat.
The word “only” applies to the closest active element to it within the sentence.
In the following sentence: “For example, Firefox 4.0 will include GPU acceleration for both rendering and compositing on Windows 7/Vista, OSX and Linux, and it will have GPU acceleration for compositing only for Windows XP” the word only applies to “compositing” and not to “XP”.
English is a difficult language in places, I grant you.
Except it will be around until at least 2020 in many businesses.
You misread it. GPU acceleration is provided for Firefox 4 on Linux.
FTA:
For XP there is a problem … and only compositing acceleration is available for XP.
But on Linux there is no problem. On Linux, compositing acceleration is provided via OpenGL, AND content acceleration is provided via XRender.
See here:
http://hacks.mozilla.org/2010/09/hardware-acceleration/comment-page…
This time I inserted commas, so I hope that helps.
The actual state of Firefox 4beta8 on Linux versus Windows is more rationally described here:
http://www.tildehash.com/?article=state-of-firefox-4-0-on-gnu-linux
Most of it is exactly the same; there are only small differences.
FWIW: when it is released for Linux, Firefox 4 will probably narrowly beat all of the WebKit-based browsers.
Rendering the tabs and UI in parallel without causing mass-carnage among extensions is very non-trivial.
People who want to see Mozilla’s progress to redesign Firefox’s internals to be Chrome-like should keep an eye on their Electrolysis project.
Basically, they’re using Fennec (Mobile Firefox) as their test bed for going beyond out-of-process plugins since extensions need to be redesigned for it anyway.
These tests only test JavaScript performance, not graphics performance. A lot of demos on http://www.chromeexperiments.com/ are still almost frame-by-frame slide shows in the newest Firefox 4 nightlies, but smooth as can be in Google Chrome (both in x86_64 Linux versions). Because these demos use canvas and are paint-intensive, I guess it’s the graphics engine that needs work, not so much the JavaScript engine (CPU usage is high but not at max!).
The IT media seems to put a big focus solely on speed, but I think safety and security for all users, and access to tweaking for more advanced users, are things that should not be ignored. While Chrome has so much momentum, I’m still very content with Firefox. My work involves bouncing around between Windows, Linux, and OS X systems, and to me the add-on environment across all three platforms is still much more expansive with Firefox. A lot of my preferred add-ons, or their equivalents, are Windows-only in Chrome. Plus I rely on about:config quite often on any new FF install; Chrome doesn’t have that kind of user accessibility (and yes, I’m aware of the handful of about:xxxxxx pages in Chrome, but those are mostly informational and not changeable). I’m not dissing Chrome; it’s just that for my situation there’s not enough incentive to move from Firefox. Speed is just one aspect I take into consideration.
How about the memory leak problems with FF? When I close some tabs, ideally, I’d like to see my RAM usage go down some. My FF on OSX consistently uses 600+MB ram.
For me it’s the opposite (Linux). I find Chrome leaks considerably, to the point that my swap is in active use, while Firefox’s memory usage is considerably less.
From leakage or simply higher memory needs? Quite a difference.
That has more to do with the fact that tabs are threads rather than processes; with process separation there is the ability to reclaim memory, whereas tabs in Safari and Firefox all operate in the same memory space. What I’d like to see, at the very least, is some sort of garbage collection where every 5 minutes it does a quick scan of memory to clean up the remains of closed tabs.
A big part of that has already landed in Firefox 4 beta 7, and the rest will land in beta 8.
http://andreasgal.wordpress.com/2010/10/13/compartments/
That’s a good point that the tabs are all in the same address space, and you’re right, they need to have some kind of sweep function that will free up the unreferenced data structures.
Reminds me how in Safari memory usage always _WENT UP_ when closing tabs, until you had no tabs left but 1+ GB of RAM allocated (not anymore.)
Not memory leaks… it keeps old pages cached in case you didn’t mean to close them (the “undo close tab” feature).
Firefox did have some memory leak issues some time ago, but they fixed them. It now (in the Firefox 4 betas) leaks less than other browsers.
It’s ridiculous that when discussing browser performance there is no mention of Opera, currently together with Chrome clearly outperforming all the rest.
This article is rubbish. I would have expected OSnews to take some effort to write an objective report rather than being a Firefox release-notes forwarder.
I wonder if the complete ignoring of Opera in the US-dominated news services we have is caused by Americans traditionally not perceiving anything beyond the tip of their own American nose. Why would you need a Norwegian product if you think the civilised world ends at the US border?
Opera and IE cannot be reliably tested for raw javascript performance because their javascript engines don’t allow batch operation.
That’s all.
It’s ridiculous that when discussing browser performance there is no mention of Opera, currently together with Chrome clearly outperforming all the rest.
This article is rubbish, I have expected OSnews to take some effort on writing an objective report rather that be a Firefox release notes forwarder.
It’s an article about Firefox. Is it OK to write an article about Firefox? The Firefox authors have been using an automated test suite to measure their JavaScript performance and that test suite only compares to WebKit and V8. Is it OK to mention that?
Does every article here need to have some kind of a fairness doctrine? Would it be OK to write an article about an improvement in Haiku that compares performance to, say, Linux without mentioning FreeBSD? Inquiring minds want to know!
This is basically what my comment said. Unfortunately, OSNews somehow messed up the comment (it was fine in the preview), and I didn’t notice until after the edit time was over.
I did not mean to repeat what Derbeth said; in fact, I strongly disagree with his comment.
This indeed was the main point of interest in the article.
Mozilla developers rate their own efforts against tests (sunspider and v8bench) written by their main competitors (Apple and Google), and publish the results for all to see.
To be fair, I could have linked to another site where Opera, Chrome or Safari developers rated their in-development javascript engine against Firefox’s Jaegermonkey engine using the Kraken benchmark from Mozilla, but I couldn’t find any such site.
What does that say about fairness, BTW?
Oops… It looks like OSNews ate my comment again. Maybe I’ll try to recreate it later.
When one is writing an article about a yet-to-be-released version of a browser still in development, one is forced to link to developers’ blogs, release notes and other similar websites in order to document anything about it at all.
Here are some more links which I’m sure will also annoy you:
http://weblogs.mozillazine.org/asa/archives/2010/10/were_getting_fa…
http://news.ycombinator.com/item?id=1789939
Here is a link where a Mozilla developer doesn’t accept using ONLY opposition benchmarks:
http://weblogs.mozillazine.org/asa/archives/2010/09/javascript_perf…
I do have a serious couple of questions for you here:
What could be more objective than the Mozilla developers tracking the progress of their browser against competing browsers using benchmarks written by the developers of competing browsers?
Where are the equivalent blogs of Google Chrome developers benchmarking Chrome and Firefox using the Kraken benchmark?
If you had actually read the parent comment, you might notice that I was trying to quote it and argue against it.
The comment I wrote viewed fine in the preview but somehow it got cut off when I posted it and I didn’t notice until after I couldn’t edit it.
But yeah, I totally agree with your comment.
I did find an article comparing results from the developing Firefox javascript engine versus other browsers including Opera.
http://blog.mozilla.com/rob-sayre/2010/09/09/js-benchmarks-closing-…
This is unfortunately an earlier Jaegermonkey, in Firefox back on Sep 8th, which was significantly slower than it is now. The graphs there aren’t very current. The following is more current, and it shows the difference between Jaegermonkey on Sep 8th and now:
http://arewefastyet.com/
The blog article linked above is interesting however in that it discusses some of the problems with benchmarks.
One thing I think Firefox needs to fix one of these days is their number of disk I/O’s.
Chrome doesn’t seem to use the disk much, if at all. But Firefox and its SQLite tables seem to hit it a lot.
On my home desktop with SSD I don’t have any Firefox problems but on my Macbook Pro with its laptop hard drive Firefox will get very unresponsive when I’m loading or suspending a virtual machine, backing up files, or doing anything else that causes a lot of disk use.
But Chrome remains pleasant to use in the same situation.
FWIW, none of those numbers are actually terribly noticeable in real-life usage, especially on non-HTML5 sites. IE, Firefox, Chrome and Opera all perform pretty similarly as far as my eyes can tell. Thus, I don’t understand the fuss with all those benchmark figures.
Startup speed is a different story, though.
+1.
Startup speed and shutdown speed (Opera, I’m looking at you!)
Forget GPU acceleration and start with not doing things in a completely retarded way.
There was a similar benchmark/stress-test for MSIE9, and I discovered to my surprise that KHTML outperformed WebKit by a factor of 20x and Firefox by a factor of 40x, and was only outperformed by the hardware-accelerated MSIE by a factor of 2x. I thought for a while we were using hardware acceleration indirectly by using X11 more correctly, but no. We are just not rendering things in a slow way on multiple levels, that is all.
Btw, the mozilla “HW ACCEL” (sic) test:
Firefox: 0-1fps
Konqueror+Webkit (Qt X11): 1-2fps
Konqueror+KHTML (Qt X11): 18-20fps
Konqueror+Webkit (Qt Raster): 9fps
Konqueror+KHTML (Qt Raster): 9fps
The two latter numbers were tested with “konqueror --graphicssystem raster”. They use the software-only raster renderer in Qt, which makes things a lot faster if you have a crappy graphics driver, or have written crappy painting routines. Firefox obviously has even more crappy painting routines than WebKit (which are pretty bad, as you can see).
Apparently you did these measurements on linux.
So the most probable thing that happened is that you are using XRender for content acceleration (the only option currently on linux, since Direct2D is windows-specific) and no accelerated composition (since GL is disabled by default, set layers.accelerate-all to true in about:config to enable it).
So what you’re measuring here is that your XRender drivers are crappy. Big news; we know about that problem. I guess we should blacklist certain drivers for XRender. You can file a bug with enough system info (X, graphics driver, etc.); it’d be a starting point in that direction.
Next time, please just avoid jumping to the conclusion that we’re retarded; it’s just that so far, coding an xrender-crappy-driver-blacklist-on-*nix hasn’t been the biggest priority. That can change if the ever-evolving web makes it a showstopper for many *nix users.
KHTML uses the same drivers and X11 interface, so XRender does not explain the at-least-10x difference. XRender would explain it if performance with Qt’s software raster was consistently faster than using Qt’s X11 raster. Since the difference only happens for the WebKit engine and not KHTML, my assumption is that they are doing inefficient things, probably rendering to server-side pixmaps and then processing those pixmaps client-side. That would explain why the very same painting routine is much faster when done entirely client-side.
It’s not at all that clear. There are sufficiently many equally valid, different ways of rendering the same thing that just because you found Firefox slower than Konqueror on a particular demo doesn’t have to mean it does stupid things: it can also just be that it uses different functions and hits different bugs and performance issues in the graphics system.
Of course it _can_ also be that it’s doing something stupid with XRender, but keep in mind that on other demos, with the right XRender driver (e.g. NVIDIA proprietary driver works well here) it’s doing great, for example on this benchmark:
http://ie.microsoft.com/testdrive/Performance/PsychedelicBrowsing/
(I get 1400-1500 score!) So not all of what we do can be that stupid.
Of course XRender is not a long-term solution anyway. The only satisfactory solution is to implement a free equivalent of Direct2D, i.e. hardware-accelerated content drawing, using of course OpenGL. This is a really hard job: it will require lots of 2d graphics work, and lots of GL performance knowledge.
There are some efforts in that direction already but none is satisfactory as of yet. The Cairo/OpenGL backend is far from ready last I heard. The Qt/OpenGL backend has poor performance (what I meant above when I said it would require lots of thinking GL-performance-wise). I heard that Qt is moving to a higher-level scene-graph approach, which will be great for lots of applications, but there’s little hope that it will be enough to use for a web engine, where one needs to implement a very wide variety of primitives and where 1 pixel errors can’t be tolerated.
By the way, in firefox 4.0-b8-pre, linux / NVIDIA, with GL on (layers.accelerate-all), I get 14 FPS.
You are right, it is impossible to conclude much about Firefox. I was actually thinking more about WebKit in Konqueror. It has similar performance to Firefox 3.x, so it is possible it has some of the same issues. Having QtWebkit and KHTML in the same browser, with the same toolkit and rendering backends, makes them much better to compare, but there is of course no guarantee that WebKit and Firefox have the same reasons for being slow.
The only reliable conclusion I can make for Firefox is the one I based the title on: there are a lot of software optimizations to be made before I would find GPU acceleration worthwhile. GPU acceleration sounds a lot cooler, but it just doesn’t make any sense if you are not pushing the limits of the unaccelerated hardware yet.
I see your point; but with new HTML things such as <canvas> and <video> we’re at the point where standard HTML/JS pages can easily be too graphics-intensive to be usable without using the GPU.
For example, with a WebGL-enabled browser (e.g. a firefox 4 nightly, and check in about:config that you have webgl.enabled_for_all_sites and if possible layers.accelerate-all) go to:
http://webglsamples.googlecode.com/hg/aquarium/aquarium.html
There’s no way to make this page go smoothly without using the GPU.
One can argue that it was a bad idea in the first place to even allow web pages to become so graphics intensive, but we really don’t have a choice as this is where the world is moving anyway: before HTML <canvas> and <video> came along, people were just circumventing HTML by doing Flash: same result (flash is getting hardware-accelerated 3D too) but non-standard non-free.
Apparently there is a state tracker for the Direct3D API for open source Gallium3D drivers for linux.
http://www.phoronix.com/scan.php?page=article&item=mesa_gallium3d_d…
Does this help?
No, it doesn’t help with this. We don’t need Direct3D (we already have OpenGL), we need an equivalent of Direct2D which is just a library that Microsoft implemented on top of Direct3D 10. Direct2D is proof that it’s possible, and that it’s a very good idea, to implement an efficient all-purpose good-enough-for-a-web-engine graphics library with good hardware acceleration; if they did it using Direct3D, it must be possible to do with OpenGL too (D3D and GL expose the same hardware features).
On windows, we’re already using Direct2D (we’ve developed a Direct2D back-end for Cairo on Windows) for content acceleration, and it gives excellent results. Now we need the same on other platforms.
Huh? How could “your xrender drivers be crappy” if Konqueror+KHTML is 20x faster on the same hardware?
Wouldn’t that point more toward “mozilla rendering via xrender sucks” and have nothing to do with the xrender drivers themselves?
I already explained this: http://www.osnews.com/permalink?445375
Your explanation wasn’t posted yet when I posted.
This is not news. Firefox has been slow for … years. Why is this on OSNews?
The yet-to-be-released development version of Firefox is not slow at all. It has become faster than all the other browsers.
This is newsworthy because it takes some time to get such a change through to people’s awareness, as your post so admirably demonstrates.
You forgot to read what you replied to. Don’t worry, I’ll wait.
Just as an aside, I would caution users to have a close look at what is being compared with what. There is a lot of vested interest out there which would want you to believe that Firefox is the slowest browser and that IE9 will be faster, and comparable to Chrome.
For example, here is a just-published review:
http://www.daniweb.com/reviews/review318591.html
You will note that this review uses IE9 beta and compares it to Firefox 3.6.9, which is not even the latest release of the Firefox 3.6 series, let alone a beta of Firefox 4.
The latest release of Firefox 4 beta, which is Firefox 4.0beta6, is much faster than Firefox 3.6.x, and Firefox 4.0beta7 (which is the first version to use the Jaegermonkey javascript engine and is currently on hold awaiting release) is much faster again.
Perhaps this is why sites like Ars and daniweb are releasing speed comparisons just now. They are perhaps trying to pre-establish a negative impression of Firefox 4 before Firefox 4.0beta7 is released on the scene.
This statement contradicts the link from the article, arewefastyet.com. It seems that Firefox is now approximately equal to (slightly better than) Nitro in JavaScript, and slightly slower than the V8 engine.
Although I think JavaScript performance is often overrated, it’s one of the easiest metrics to test, so that makes it an appealing one.
Actually, javascript metrics are not easy to test objectively. In fact, javascript benchmarks are easy to bias.
So arewefastyet.com indicates that Firefox 4 has become just about the equal of Safari, and is practically level with Chrome in Apple’s sunspider benchmark, but is behind Chrome when it comes to Google’s v8bench benchmark.
So how about Mozilla’s Kraken benchmark?
http://weblogs.mozillazine.org/asa/archives/2010/09/javascript_perf…
How about that then? On Mozilla’s Kraken benchmark, Mozilla’s Firefox build of a month ago was even then nearly twice as fast as Google Chrome.
So it boils down to a question of which bias do you credit? Google’s biased benchmark (which is even named after Google’s own javascript engine) or Mozilla’s biased benchmark?
Anyone being objective would have to give equal weight to results from each of the three benchmarks (Sunspider, v8bench and Kraken).
The overall objective result then (that is, 33% weight to results from each benchmark, then sum the overall score) is still a win to Firefox, and not Chrome.
They’re not biased, as such.
Each of those benchmarks represents what their authors consider to be important – stuff that they think needs to be fast (as in Sunspider and V8), or stuff that’s currently slow but they think should be faster (Kraken).
Of course, each browser tends to be faster at its own benchmark (except WebKit on Sunspider, where V8 is slightly ahead). They use their own benchmark when developing their JavaScript engine, and the benchmark just happens to be composed of the same kind of code the developers think it is important to optimize for.
For example, Google apparently think recursion is really important, and their benchmark uses recursion heavily. V8 is, consequently, excellent at recursion.
Kraken is… interesting. It seems to have been deliberately built to be slow. It does all kinds of things that, traditionally, JavaScript is terrible at.
If I had to guess, I’d say that Mozilla designed the thing as a testbed for TraceMonkey. It contains lots of the kinds of things that TraceMonkey should be good at, but isn’t.
FWIW, Mozilla say they designed Kraken to represent typical loads that browsers would see when running javascript on the web.
http://blog.mozilla.com/blog/2010/09/14/release-the-kraken-2/
Of course that is just a claim, but it is interesting to note that Mozilla are the only ones to make such a claim.
v8bench appears to be designed simply to be a benchmark that Google’s v8 javascript engine is very good at.
This blog has some comment on the vagaries of various benchmarks:
http://weblogs.mozillazine.org/asa/archives/2010/09/javascript_perf…
This perhaps more impartial comment from Microsoft notes that Mozilla’s Dromaeo benchmark is a better test than SunSpider and V8bench.
http://blogs.msdn.com/b/ie/archive/2010/09/14/performance-what-comm…
That was before Kraken, which Mozilla claim is even better than Dromaeo at testing realistic javascript performance.
Read into it all what you will.
BTW, spidermonkey (tracemonkey plus jaegermonkey merged) is probably now more than twice as fast as v8 at running the Kraken benchmark.
How is it a win to Firefox if they lose 2/3 of these stone-carrying contests to the V8 engine? Again, the facts do not support your conclusions.
Also, when you come out with a new benchmark (as with Kraken) you can’t expect the others to score on it as well as Firefox: they haven’t had time to focus on the things it tests. Firefox has been optimizing their engine with v8bench as one of their targets for MONTHS, and it’s still almost twice as slow as the v8 engine.
It isn’t twice as slow as Chrome on Google’s v8bench, which is heavily biased towards Google Chrome’s v8 engine.
The figures are 2226 for Chrome vs 1373 for Firefox. If Firefox’s speed on v8bench is 1, then Chrome’s is 1.62. Chrome is 1.62 times as fast as Firefox on the v8bench benchmark.
Firefox is 1.92 times as fast as Chrome on the Kraken benchmark (or at least it was a month ago, it is probably further ahead by now).
They are virtually the same speed on the Sunspider benchmark. 1:1.
If all three results are weighted equally, Firefox wins overall when we add them together.
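As a rough sketch of that equal-weight sum (the relative-speed figures are the ones quoted above, treated as normalised speeds where the slower browser on each test scores 1.0):

```python
# Normalised relative speeds per benchmark (1.0 = the slower browser on
# that test), taken from the figures quoted in the comments above.
firefox = {"sunspider": 1.0, "v8bench": 1.0, "kraken": 1.92}
chrome = {"sunspider": 1.0, "v8bench": 1.62, "kraken": 1.0}

# Equal weighting across the three benchmarks: just sum the speeds.
firefox_total = sum(firefox.values())  # 3.92
chrome_total = sum(chrome.values())    # 3.62

print(firefox_total, chrome_total)  # Firefox ahead on the equal-weight sum
```

On these particular figures the Kraken lead outweighs the v8bench deficit, which is the whole argument in one sum.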
Have you ever seen anyone aggregate totally separate benchmark suites like that? They’re largely incomparable, so you just have to look at each one. Maybe you take a coarse grain win-loss (like I did) but no one adds them up. Practically every JS performance comparison just presents separate results for each benchmark.
Each benchmark already works this way internally. It runs a number of little tasks and gives you a total time taken. The relative duration (or number of iterations) of each little task is effectively a weighting for how important the functions within that task are to the overall benchmark result.
To include results from multiple benchmarks, you simply normalise the results from each one and add them together. “Normalising” is done by taking the lowest time result of any browser for a given test, and then dividing all of the results from that same test by that number. This gives one browser a normalised result of “1” for that test, and the others a little higher. This allows you to add the results from each benchmark together with equal weighting.
It is a pretty standard technique. It is used a lot in evaluations and trade-off studies. Look up “Operational Analysis” for a number of techniques of this nature.
http://en.wikipedia.org/wiki/Operations_research
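A minimal sketch of that normalisation procedure, assuming every suite has been reduced to a time where lower is better (v8bench actually reports a score, so in practice it would first be inverted). The timing numbers below are made up purely to illustrate the technique:

```python
def normalised_totals(times_by_benchmark):
    """Equal-weight combination of several benchmarks.

    times_by_benchmark: {benchmark: {browser: time}}, lower is better.
    Each browser's time is divided by the best (lowest) time on that
    benchmark, so the fastest browser scores 1.0 per test; the summed
    totals can then be compared directly (lower total = faster overall).
    """
    totals = {}
    for times in times_by_benchmark.values():
        best = min(times.values())
        for browser, t in times.items():
            totals[browser] = totals.get(browser, 0.0) + t / best
    return totals

# Hypothetical timings in ms, chosen only to mirror the ratios
# discussed in the surrounding comments.
results = {
    "sunspider": {"firefox": 300.0, "chrome": 300.0},
    "v8bench":   {"firefox": 486.0, "chrome": 300.0},
    "kraken":    {"firefox": 5000.0, "chrome": 9600.0},
}
print(normalised_totals(results))
```

With these assumed inputs, Firefox scores 1.0 + 1.62 + 1.0 = 3.62 and Chrome 1.0 + 1.0 + 1.92 = 3.92, so the lower (better) total goes to Firefox.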
The reason why you won’t see this done in the press for browsers is that the answer it gives you is that Firefox 4 (with Jaegermonkey) is the winner in speed tests. There are a lot of vested interests who don’t want you to know that fact.
Now.
I don’t really understand why people are so obsessed with speed; it’s not the only thing that matters, at least for me: it’s the usability. FF integrates excellently with my 32-bit Linux desktop (I don’t see a real advantage in going 64-bit), the latest betas are quite stable and fast, so it’s OK. I don’t fully dismiss Chrome, it’s good, but it has certain glitches that I can’t live with. So, bottom line, I’m eager to use 4.0 GA as soon as it comes out. Don’t forget that some optimizations are being done under the hood beyond the JS engine, like removing a lot of I/O calls during UI usage, etc. 4.0 will be the best FF release.
I know the answer to this. Speed was the only area where people felt they could disparage Firefox and get away with it. It used to be memory consumption as well, but that has been in Firefox’s favour for quite a while now, so it is no longer mentioned much.
Astroturfers who have been assigned the task of disparaging Firefox are not going to give the speed question away too easily. Firefox 4 is going to have to be faster than any other browser for over a year before claims that Firefox is slow are going to slowly go away in a similar fashion.
None of this will ever be mentioned because Firefox has always been the leader here, and that isn’t ever going to change.
Firefox is about empowering users where other browsers are about limiting them.
http://www.mozilla.org/about/mission.html
Can you say “Silverlight does DRM”? Can you say “proprietary codec”?
Well, let’s not start disparaging the other browsers; I don’t think Chromium is trying to limit me as a user. We have a choice of many browsers, and all of them seem to be striving to be the best: the end user wins!
Maybe not, but consider: the Chrome browser does not have Firefox’s power when it comes to extensions (for example, there is no video-downloader extension); the Chrome browser does embed Flash and a proprietary codec; and finally, the Chrome browser does phone your usage metrics home to Google.
These “features” of Google’s Chrome browser are arguably not in your best interests as the end user.
I don’t use extensions at all, but I do use Flash, so Chromium fits me quite well.
If Firefox offers features that I don’t use, that’s fine; it’s still a great browser, but I’m not missing those features.
People should use whatever they like to use best. It’s stupid to try to convince them that they are not happy with their browser because it lacks some feature that they wouldn’t use anyway.
If you’re using Firefox without any extensions you aren’t really using Firefox and you’re Doing It Wrong(tm). If you think you don’t need some extensions it’s because you’ve not found the ones you need yet.
Maybe there are extensions that I would find useful (want), but my usage doesn’t require them (need). To claim that I’m not really using Firefox/whatever is just absurd: Firefox is not an extension-delivery vehicle, it’s a web browser. I’ll use it simply and be happy for the simplicity.
That’s what I’m saying: I can decide for myself what is useful or not.
Extensions can be great for those who require them, I haven’t found a need though.
Except Opera. I feel limited when Firefox doesn’t save my opened tabs unless there’s been a crash. Starting a browsing session from scratch is extremely rare for me.
I also feel limited when I can’t change the shortcuts. Is that possible now in 3.6.x? Can I make that ugly round back button have a shape mirrored from the shape of the forward button?
Care to elaborate upon your statement? All browsers are both limiting and empowering in different ways, including IE.
It does, depending on how you close the browser. On the Mac version, if you quit the entire application, it saves your opened tabs (after asking). If you close the window, it doesn’t.
At least, I think that’s what happens. I’m not going to test that right now.
Yes, but there’s no built-in UI for it. There was an extension back in 2004 or so that added one, but I don’t know if that still exists, or if it works on current versions of Firefox.
Right-click on toolbar. Hit “Customize…” and select “Use small icons”. Replaces the back button with a smaller one, the same size as the forward button.
I don’t remember if it changes the size of the other icons on Windows or Linux. It doesn’t on the Mac version.
If it’s an extension that allows changing, maybe it’s “keyconfig”?
I care about speed because I spend an unhealthy amount of my day operating a browser with hundreds of tabs. Every 1% they improve performance saves me several minutes each day, which means I get more done. In an ideal world my web experience would be bottlenecked by my uplink and not by my browser, but on 15Mbit this is not the case. I can open a dozen tabs in a few seconds and download the pages in little more than that, but getting it all rendered and displayed can take a long, long time. Time when I’m waiting on my computer–and I should never be waiting on my computer.
I think it would be great if we had a test suite (do we already?) that tested a wide variety of aspects of the browser, along the lines of what Con Kolivas was trying to do: produce some measurements of the responsiveness of browsers. In the end, this is what the user will probably notice.
Firefox’s built-in session management works just fine. You just have to change the setting (Options -> General -> When Firefox starts: Show my windows and tabs from last time) and it works perfectly. I can’t imagine using a browser that couldn’t do that.
I enjoy using 4.0b8pre in full-screen mode.
It finally feels right.