“Mozilla’s high-performance TraceMonkey JavaScript engine, which was first introduced in 2008, has lost a lot of its luster as competing browser vendors have stepped up their game to deliver superior performance. Firefox now lags behind Safari, Chrome, and Opera in common JavaScript benchmarks. In an effort to bring Firefox back to the front of the pack, Mozilla is building a new JavaScript engine called JaegerMonkey.”
The Ars article has a misleading title, which is even pointed out in their comments, but OSnews just copies it?
As far as I understand the articles, the idea behind JaegerMonkey is that they will borrow the interpreter from WebKit. Right now, when TraceMonkey doesn’t work, it falls back to the Firefox 3.0 interpreter. In the future it will fall back to the WebKit interpreter, which means it will only ‘slow down’ to WebKit speeds.
My title would be: Mozilla borrows from WebKit to improve their JS Engine
WebKit doesn’t use an interpreter at all; it uses a compiler. A “just in time” compiler, to be accurate.
Because it involves an extra “compile step” (beyond the syntax parsing step) a compiler is actually slower than an interpreter for code that runs through just once. OTOH, because it does compile native code, a JIT compiler is much faster than an interpreter for any code that loops.
Mozilla’s current Javascript accelerator, called Tracemonkey, is an interpreter with optimization via code tracing.
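To make the trade-off concrete, here is a minimal C sketch of the loop-tracing idea (this is not TraceMonkey code; the threshold value and the stand-in “compiled” function are invented purely for illustration). Every iteration is first executed “the slow way”, a counter tracks how often the loop repeats, and once the loop proves hot the rest of the work is handed to a pre-built native version. Code that only runs once never pays the compilation cost, while code that loops spends almost all of its time in the fast path.

    /* Illustrative sketch only, not TraceMonkey internals.  The hotness
     * threshold is arbitrary, and compiled_sum() merely stands in for the
     * native code a real JIT would emit for the loop body. */
    #include <stdio.h>

    #define HOT_THRESHOLD 50

    /* Stand-in for machine code emitted for the hot loop. */
    static long long compiled_sum(long long start, long long n)
    {
        long long total = 0;
        for (long long i = start; i < n; i++)
            total += i;
        return total;
    }

    static long long run_loop(long long n)
    {
        long long total = 0;
        long long hits = 0;                 /* back-edge counter */

        for (long long i = 0; i < n; i++) {
            if (++hits == HOT_THRESHOLD) {
                /* Loop is hot: a real engine would record and compile a
                 * trace here; we just jump to the pre-built version. */
                return total + compiled_sum(i, n);
            }
            total += i;                     /* "interpreted" iteration */
        }
        return total;                       /* cold loop: never compiled */
    }

    int main(void)
    {
        printf("%lld\n", run_loop(10));     /* stays in the interpreter */
        printf("%lld\n", run_loop(100000)); /* switches to the fast path */
        return 0;
    }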
http://blog.mozilla.com/dmandelin/2010/02/26/starting-jagermonkey/
“About 2 months ago, we started work on JägerMonkey, a new “baseline” method JIT compiler for SpiderMonkey (and Firefox). The reason we’re doing this is that TraceMonkey is very fast for code that traces well, but for code that doesn’t trace, we’re stuck with the interpreter, which is not fast.”
The concept for JägerMonkey is to use Apple’s Nitro Assembler to generate efficient native code (i.e. use the same JIT compiler method as WebKit), but also to keep the benefits of TraceMonkey where tracing applies.
“The JägerMonkey method JIT will provide a much better performance baseline, and tracing will continue to speed us up on code where it applies.”
The idea is to get the best of both approaches. In other words, this is a fine example of the open-source “meritocracy” approach at work.
If it works, of course. If Mozilla runs into snags, it might not work … in which case the “meritocracy” approach would be to just drop it. In open source development, there is no point in hanging on to something just because it is your product. A “NIH” attitude doesn’t really apply, or at least it shouldn’t.
If Mozilla can pull this off, it has the potential to put Firefox back in the lead in the browser speed stakes.
Hahaha. The best joke I have heard in a long time.
The joke is on you. Apple’s Nitro Assembler, a part of webkit, which is clearly not invented at Mozilla, is going to be used as the JIT compiler for Mozilla’s JägerMonkey, which in turn will be used in Firefox.
It turns out that the rendering part of webkit itself (but not Apple’s Nitro Assembler) was based on KHTML, which was written by the KDE project, and is therefore a NIH component of Apple’s webkit. It further happens that webkit is the rendering core of Google Chrome, which is yet another example of not following a NIH approach.
This is all a very good set of examples of open source development not adopting a “NIH” (Not Invented Here) attitude.
This sort of practice of using and re-using the demonstrated-best methods virtually defines the whole “meritocracy” approach.
You know what, your precious Linux and GNU have always been about NIH, and they still are.
The same goes for the majority of open source software. Heck, even I’ve written many things simply because of NIH. There is nothing all that wrong with it IMO, but it is ridiculous to deny that it exists. In a way that’s what FOSS is all about.
Au contraire, it is proprietary code that defines the “NIH” mindset to a tee.
Now it is quite true to say that there is some component of NIH attitude in open source code, but it is exceptionally easy to show that it is by no means pervasive in open source.
GNU itself is a “work-alike” that implements almost an entire OS after the design template of POSIX. The missing bit of GNU is the kernel, called Linux, which borrows RCU, SMP and other methods from IBM’s mainframe and OS/2 inventory (donated by IBM to Linux), and the likes of X, OpenGL and various filesystems from all over. The executable format is ELF, which is a standard defined by a UNIX consortium. GCC implements standard languages such as C and C++. Outside of that, contributed languages are Java, Python, Ruby, Perl, Haskell and a long list of others, all NIH. There is no “Linux-only language” along the lines of .NET.
Application file formats are either industry standards (such as ODF from OASIS), W3C standards such as SVG and HTML, or outside-contributed formats such as Ogg, MKV, PNG, JPG et al.
Virtually all of GNU/Linux is NIH. It is donated from all over. One of the few bits that I can identify that is unique to Linux would be the ALSA sound drivers.
Proprietary code, OTOH, all but defines “NIH”.
lemur2, please don’t be so naive. How many sound stacks? How many wireless stacks? How many window managers? How many distributions? All because of NIH.
All clearly NOT NIH (apart from ALSA, which is the only thing with “Linux” in the name).
BTW – a window manager is a small part of a wider program known as a desktop. Most Linux desktops these days can run X or GTK or Qt applications, even down to system-tray applets, via a common desktop API known as LSB … the Linux Standard Base. Likewise, there are different wireless manager programs that provide a user interface to wpa_supplicant and the Linux kernel wireless card drivers … different GUIs for the one wireless stack.
Distributions are merely aggregates of choices of desktop and application programs chosen to work well together as an integrated whole.
You have confused application choice with “NIH”. Choice is not “NIH” … in fact, a lack of choice of desktop for a base OS would be an example of NIH.
It would appear that you are very easily confused.
Yes, the great Linux Standard Base that no one follows.
Why didn’t you mention GNOME, which was started because of (ideological) NIH when KDE was already there?
On my Arch Linux KDE 4.4.1 desktop, knetworkmanager is not good for me because it starts a wireless connection to my LAN too late in the process, and I cannot auto-mount my network shares on desktop startup. Instead, I use wicd, which was designed primarily for GNOME, but it works perfectly well on KDE 4.4.1 or GNOME or XFCE or LXDE or any other Linux desktop with a system tray … because of LSB.
http://en.wikipedia.org/wiki/Wicd
There is no real need to write wicd-qt, however, because the standard wicd GTK interface works fine.
So no-one follows LSB then, except all of the above-named desktops, and no-one except all of the above-named desktops (and others) uses the one wpa_supplicant wireless stack.
GNOME was started because the KDE toolkit, Qt, was at the time proprietary (but free-to-use for open source applications). That reason no longer exists, but once started a desktop builds a community that carries on under its own steam. Anyway, GNOME wasn’t started because of NIH, it was started for licensing reasons.
Even then, the GNOME widget engine is based on GTK … which is the GIMP ToolKit.
http://en.wikipedia.org/wiki/GTK
GIMP is not GNOME (or at least it wasn’t when each project started), so GNOME’s replacement for the (at the time) problematic Qt was GTK which happens to be a NIH toolkit (from the GNOME project POV).
“You know what, your precious Linux and GNU have always been about NIH. And remain to be about it.”
That’s funny, because a self-described “typical UNIX Silicon Valley professional” told me GNU/Linux was doomed because it was trying too hard to be like Windows.
And ow, man. Diplomacy. Do you speak it?
(I see lemur2’s comment was posted as I was writing this– a much more elegant response than mine, IMHO)
Forget it, he admits he’s a Linux troll in his profile.
Well then I offer him this: I’ve seen better. Yeah, really. Better trolling than what he’s offered so far, IMHO.
Eh, tough to tell. I could cite examples of code reuse; you could cite examples of duplicated functionality. There are many ways to do things, but many of them involve the same programs and the same code. There are many different media players, but they all end up using the same codec source code to decode Ogg.
Could we move on to a more important debate? Like what end of an egg to open?
“It turns out that the rendering part of webkit itself (but not Apple’s Nitro Assembler) was based on KHTML, which was written by the KDE project, and is therefore a NIH component of Apple’s webkit. It further happens that webkit is the rendering core of Google Chrome, which is yet another example of not following a NIH approach.”
Thank you for pointing that out. It bothered me that the Ars Technica article simply said “Apple’s WebKit project” when I knew things hadn’t originated with Safari, but with Konqueror.
Right now I am using the Linux (Debian) beta of Chrome for faster JS despite current limitations (errors with CSS rendering on refreshes, I believe). But I switch back to Firefox for a blogging site that is heavily based on a WYSIWYG editor. I have to use a script that enables me to edit the HTML of the entry manually to get formatting right sometimes, but for me and any of my contacts that use Chrome or Safari, formatting is even worse.
My preference was strongly with Firefox before and I welcome anything that improves JS performance.
I would argue that since WebKit is written better, performs better, and supports more than Gecko, it would be in the product’s best interest to just drop Gecko in favor of WebKit.
>>Webkit doesn’t use an interpreter at all, it uses a compiler. A “just in time” compiler, to be accurate.
Perhaps you do not realize this, but most JIT compilers augment an interpreter. Take a class written in Java or C#, for example: if there is a method that gets called only once or a few times, it is not compiled to native code, it is interpreted. Another method called a few hundred times will be compiled to native code.
Why, you ask? Take any large managed-code application; let’s use NetBeans. Instruct the VM for the language (in this case the JVM) to compile *every* method before use. You will find that for the first 10-60 minutes (depending on your machine) the software is unusably slow.
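To sketch that threshold idea in code (a conceptual example only, not HotSpot or any real VM; the cut-off value and function names are invented): give every method an invocation counter, route calls through an “interpreted” stub while the method is cold, and patch the dispatch pointer over to the compiled version once the counter crosses the threshold. Rarely-called methods then never pay the compilation cost.

    /* Conceptual sketch of a method-granularity compile threshold. */
    #include <stdio.h>

    #define COMPILE_THRESHOLD 100          /* invented cut-off */

    typedef int (*method_fn)(int);

    static int square_interpreted(int x)   /* stand-in for interpreting bytecode */
    {
        return x * x;
    }

    static int square_compiled(int x)      /* stand-in for JIT-emitted native code */
    {
        return x * x;
    }

    struct method {
        method_fn impl;      /* implementation currently used by call sites */
        method_fn compiled;  /* what to switch to once the method is hot */
        long      calls;     /* invocation counter */
    };

    static struct method square = { square_interpreted, square_compiled, 0 };

    static int call(struct method *m, int arg)
    {
        if (++m->calls == COMPILE_THRESHOLD)
            m->impl = m->compiled;         /* "compile" the method exactly once */
        return m->impl(arg);
    }

    int main(void)
    {
        long sum = 0;
        for (int i = 0; i < 1000; i++)
            sum += call(&square, i);       /* first 99 calls interpreted, the rest compiled */
        printf("%ld\n", sum);
        return 0;
    }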
I don’t know the details of the WebKit JIT, but it probably does the same thing. Then again, because the volume of code on a website is so small compared to an application, perhaps I am mistaken and it does not.
This is not totally accurate. They are using a “tracing” JIT, which does some code analysis and will compile the bits that make sense. What you are talking about is a static JIT, which is how Google’s V8 engine works.
Mozilla previews new feature to guard against Flash crashes
http://arstechnica.com/open-source/news/2010/03/mozilla-previews-ne…
Although this feature is similar to functionality already implemented in IE and in Chrome, it turns out that Electrolysis is multi-process (process per tab?) rather than multi-thread.
This approach code-named Electrolysis should be a benefit to Linux, because AFAIK in Linux the overhead associated with a separate process is much less than the overhead for a separate thread.
Electrolysis is therefore a bit less anti-NIH than JaegerMonkey is.
I wish they would keep multi-process only for plugins, where it belongs, and keep multithreading and optimizing for the rest, as it should be.
Chrome’s memory footprint with a few tabs is spectacular, the wrong kind of spectacular.
>>This approach code-named Electrolysis should be a benefit to Linux, because AFAIK in Linux the overhead associated with a separate process is much less than the overhead for a separate thread.
This is incorrect: kernel-level threads almost always have lower overhead than processes. Threads share the same memory space, so when a context switch between threads occurs the TLB is not flushed. Also, communication between processes must use an IPC mechanism provided by the OS, making it much slower than the synchronized data structures threads use to communicate. Finally, for operating systems which do not use COW pages, process creation (forking) is extremely expensive in comparison to thread creation.
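To see the difference being described, here is a small POSIX C sketch (illustrative only): the thread simply writes to a variable in the shared address space, while the forked child lives in its own copy-on-write address space and has to send its result back through an IPC channel, in this case a pipe.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static int shared_result;              /* visible to every thread in the process */

    static void *thread_work(void *arg)
    {
        (void)arg;
        shared_result = 42;                /* no IPC needed: same address space */
        return NULL;
    }

    int main(void)
    {
        /* Thread version: shared memory, no TLB flush on a thread-to-thread switch. */
        pthread_t t;
        pthread_create(&t, NULL, thread_work, NULL);
        pthread_join(t, NULL);
        printf("thread wrote %d\n", shared_result);

        /* Process version: separate address space, result comes back over a pipe. */
        int fd[2];
        pipe(fd);
        pid_t pid = fork();
        if (pid == 0) {                    /* child */
            int result = 42;
            write(fd[1], &result, sizeof result);
            _exit(0);
        }
        int from_child = 0;
        read(fd[0], &from_child, sizeof from_child);
        waitpid(pid, NULL, 0);
        printf("child process sent %d\n", from_child);
        return 0;
    }

(Compile with gcc -pthread.)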