“On Monday I posted Geekbench results for my Sun Ultra 20 M2 running Solaris and Windows. Afterwards, I received a number of requests asking how Linux performed on the same hardware. Now that I’ve finally managed to download Fedora Core 6, here are the Geekbench results for Fedora Core 6 (and Solaris, as a comparison) on a Sun Ultra 20 M2.”
This is actually a Sun Studio vs. GCC compiler test. BTW, Sun Studio compilers are available on both Solaris and Linux.
yeah, it seems that most of the ‘os vs os’ benchmarks are really compiler benchmarks. what gives?
If I look at what Geekbench does, it seems to test mostly primitive operations and simple system calls.
GCC is also available on Solaris. It’s what I’m using here under Solaris 2.9, in fact. 🙂
Either way, I’m impressed. In the benchmarks from the previous Solaris/Windows test, Solaris slammed WinXP in just about every category, and in this set of benchmarks it looks like Linux is a real competitor. I’m sure they’re out there, but I’d like to see the same set of benchmarks for Linux vs. WinXP and Vista.
” it looks like Linux is a real competitor”
I would say Solaris is a real competitor.
You may think Linux is super in the desktop area, though it only has a .4% market share. But in the server area, Linux is just a toy box. Proprietary Unixes like Solaris, SGI, HP-UX, AIX, XENIX, etc., and FreeBSD have been out there for a long time. Recently Solaris went open source, so people can compare it with Linux.
“You may think Linux is super in the desktop area, though it only has a .4% market share”
I would say it’s about 4-5%, not .4%
“But in the server area, Linux is just a toy box”
Big Fortune 500 companies like the toy for sure.
“Proprietary Unixes like Solaris, SGI, HP-UX, AIX, XENIX, etc., and FreeBSD have been out there for a long time”
Solaris? Then why does Sun commit its resources to Linux?
SGI? It’s nearly bankrupt.
HP-UX? Does anyone use it today?
AIX? IBM has committed resources to Linux. Maintaining the AIX source costs money.
XENIX? Come on, are you kidding me? What’s next? DOS?
FreeBSD: I respect it, but its licensing scheme is problematic. Linux succeeded because of the GPL, not on technical merit.
“Solaris? Then why does Sun commit its resources to Linux?
SGI? It’s nearly bankrupt.
HP-UX? Does anyone use it today?
AIX? IBM has committed resources to Linux. Maintaining the AIX source costs money.”
Wow, just wow.
Sun commits its resources to Linux to give its customers a choice about the OS they wish to run. Their hardware is certified to run Solaris, Linux, and Windows.
NASA seems to have a lot of success with SGI, as do other government labs.
HP-UX: You should look at job postings for system admins sometime; I see a ton of HP-UX work out there, as well as AIX.
AIX: So what if it costs IBM money; you don’t think that working on Linux is free for IBM?
You use the right tool for the right job. If a company has lots of good experience with commercial UNIX systems and can get them at a good TCO, why switch?
And yet, large numbers of them are switching to Windows.
Why would they do that, if it’s working for them? And wouldn’t a switch to Linux be much easier?
From what I’ve heard/seen/experienced, companies have trouble finding skilled Unix administrators. And Windows administrators are also cheaper.
Well, Solaris x86 has been out there for much longer than Linux has, and you’d think with all the dough they’ve poured into it, that it would be the absolute best of breed.
So I agree with you that Solaris is a real competitor, but I think the underdog is Linux, with less time to mature and (theoretically) less centralized corporate involvement.
If this graph http://upload.wikimedia.org/wikipedia/en/0/0e/Unix.svg is to be believed, then Linux pre-dates Solaris 2.x (not only x86). I think the earliest Solaris x86 was released with version 2.4 in 1994.
I believe the first release of Solaris x86 was Solaris x86 2.1 in May of ’93, and Linux 1.0 was March ’94. But those are release versions; there was all kinds of development activity before that. Interesting that the chart didn’t include some of the other versions of x86 Unix back then, such as Interactive, Microport, Wyse, Esix and, if I remember correctly, Dell. But then again, you can throw a lot of money at a software project and it doesn’t mean you’re going to get the “best of breed”. I’d say back then Solaris x86 and/or the other x86 unices were more complete for commercial purposes. The hardware for those platforms was more of a pain.
—Bob
Even if it is primarily dependent on the compiler, I think that this is still very appropriate. As is, the benchmark will reflect the relative performance of the common configurations of the two operating environments. When people install Solaris, chances are they’ll be using the Sun compilers. When people install Linux, chances are they’ll be using GCC. Sure, you can use GCC on Solaris and Sun Studio on Linux, but I would venture to say that most people don’t.
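In case it helps, here’s a minimal sketch of what I mean: a hypothetical kernel.c (not from Geekbench) built once with each platform’s default toolchain. The flags are just typical choices, not the ones Geekbench actually uses.

    /* kernel.c: a hypothetical toy workload, compiled with each
       platform's usual compiler:
         Linux:   gcc -O3 -o kernel kernel.c
         Solaris: cc -fast -o kernel kernel.c   (Sun Studio)      */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        /* same source, two very different optimizers */
        for (long i = 1; i <= 50000000L; i++)
            sum += 1.0 / ((double)i * (double)i);
        printf("sum = %f\n", sum);   /* converges toward pi^2/6 */
        return 0;
    }

Same binary workload, two scores; the OS barely gets a look-in.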
Yes, I never realised the Solaris compiler was so far ahead of gcc on x86, let alone SPARC. So I found these compiler benchmarks very interesting.
This makes the Linux kernel all the more impressive when you see results like these, IMO.
http://www.stdlib.net/~colmmacc/2006/04/13/more-ubuntu-on-t2000/
Although the kernel may not tend to benefit as much from compiler optimisations as these benchmarks do, it wouldn’t be surprising to see at least another 5% or so on the above test.
I would think that the OS would definitely have some part in the multi-threaded tests, wouldn’t you?
Yes, looking at the FAQ, the multi-threaded tests have four threads and this test was run on a 2-core system so there will be some context switching. However, I suspect the impact of the OS is much smaller than the compiler; given that the test has two independent variables it’s impossible to tell for sure.
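As a rough illustration of that point (a hypothetical worker loop, not Geekbench’s actual code), four CPU-bound threads on a two-core box look something like this; build with -lpthread:

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4   /* four workers, as the FAQ describes */

    /* hypothetical CPU-bound worker: each thread just spins on arithmetic */
    static void *worker(void *arg)
    {
        (void)arg;
        volatile unsigned long x = 1;
        for (unsigned long i = 0; i < 100000000UL; i++)
            x = x * 2654435761UL + 1;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        /* on a two-core box the scheduler has to time-slice these four
           threads, so the OS does get some say in the result */
        puts("done");
        return 0;
    }

With more threads than cores, the scheduler and thread library contribute something to the score, but the per-thread work is still compiler-dominated.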
Always interesting to see numbers
This reminds me of a quote from Linus I read a while ago on LWN.
Saying something along the lines of: if you want something implemented in Linux, just post a benchmark showing that Solaris is better at it.
A link to what these tests kind of do:
http://www.geekpatrol.ca/geekbench/benchmarks/
Geekbench uses a number of different benchmarks to measure system performance:
Integer benchmarks measure integer performance by performing a variety of processor-intensive tasks that make heavy use of integer operations.
Floating point benchmarks measure floating point performance by performing a variety of processor-intensive tasks that make heavy use of floating-point operations.
Memory benchmarks measure both memory hardware and memory library performance.
Stream benchmarks measure both floating point performance and sustained memory bandwidth.
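For a concrete feel of what the stream category implies, here’s my own guess at a triad-style loop; this is the textbook shape of such a test, not Geekbench’s source:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 2000000   /* large enough to spill out of cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (int i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (int rep = 0; rep < 50; rep++)        /* triad: a = b + s*c */
            for (int i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* three arrays of 8-byte doubles touched per pass, 50 passes */
        printf("a[0]=%g, ~%.0f MB/s\n", a[0], 3.0 * 8 * N * 50 / secs / 1e6);
        free(a); free(b); free(c);
        return 0;
    }

A loop like that is bounded by the memory system and by how well the compiler vectorises or unrolls it; the kernel never shows up on the profile.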
Indeed, and if you look at all the Geekbench benchmarks it’s clear that its purpose is to compare “systems”, i.e. different machines with different hardware. The benchmarks are compiler-biased, which is great for comparing different systems running the same OS, but not for two different OSes running different compilers. The one remotely interesting benchmark seems to be the “memory benchmarks”, which measure stdlib performance (i.e., libc).
I’m amazed at how good the Sun compiler is in many cases, BTW.
Why do John and Matt use Fedora Core as a target when they could use CentOS or a similar “release quality” Linux distribution? If nothing else, it would eliminate the “it’s beta software” comments that will probably appear as people read and interpret the results.
Or, they could benchmark Fedora Core against a recent Solaris Express build and try to level the playing field.
More information on how the system was set up with each OS, and the compile options used, would be helpful as well.
Calling Fedora Core ‘beta software’ is going to rub the people behind FC the wrong way.
What the hell is up with that stdlib allocate score!!!
Overall it looks like Solaris is the better performer, but I wish more data was provided to explain WHY… as in what are the code differences between the two that would account for the differences.
The other score I found interesting was the bzip test. Would it have anything to do with disk performance in that case? How does Geekbench generate the scores, are they all time-based? Yeah, I could go and read the geekbench docs I reckon… but a quick blurb about this stuff would have been extra nice.
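For what it’s worth, an “stdlib allocate” test is presumably just a timed malloc/free loop along these lines (my own guess, not Geekbench’s actual code); it exercises libc far more than the compiler:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        const size_t sizes[] = { 16, 64, 256, 1024, 4096 };
        clock_t t0 = clock();

        for (int rep = 0; rep < 1000000; rep++) {
            void *p = malloc(sizes[rep % 5]);
            if (!p) return 1;
            ((char *)p)[0] = 1;   /* touch the block so the allocation is real */
            free(p);
        }

        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("one million alloc/free pairs in %.3f s\n", secs);
        return 0;
    }

If Solaris and glibc allocators differ a lot on that kind of loop, that alone could explain an outlier score, and it would have nothing to do with the disk.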
I’m using gcc 2.95.3 under SunOS GUTUX09 5.9 Generic_118558-09 sun4u sparc SUNW,UltraAX-i2.
I wonder if this will let me post?
Browser: Links (0.99pre6; SunOS 5.9 sun4u; 124×53)
Using Links 0.99pre6 with the User Agent changed to see if it will allow me to respond in place in a thread.
The strange thing is, even though we do development work on Solaris, our systems group won’t allow us to use the Sun development tools at this point. Weird, huh?
They’ve been using gcc here since well before Sun’s tools were free, so that explains part of it…
I’m getting really tired of these non-sense benchmarks. All these comparisons always test the wrong things. With the almost hatred between OS camps, it’s *amazing* to me that there are VERY few decent comparisons between the OSs in real-world tasks.
Somebody send me a Sun box that can run Solaris and Linux, and I’ll put it through some REAL paces. You know, webserving, database work, etc. I can’t believe people have yet to do respectable benchmarking, it’s not exactly rocket science.
http://www.sun.com/tryandbuy/
There ya go. I eagerly await your report.
Can I look for it in, say, about a month?
We’re utilizing that program for testing production operations, and those boxes can’t be dedicated to benchmarking against Linux boxes at this point (we have no interest in running Linux, and cannot spare the try-before-buy boxes for benchmarking).
Again, I’ll make it clear. Heck, if anybody in Hawaii has a Sun box sitting around, I’ll gladly travel to your location and give it a go. Or, you’re welcome to come by my data center and set it up in there.
I do appreciate your sarcasm, however, bravo.
There you go:
http://corenode.com/
http://uadmin.blogspot.com/2006/03/zfs-to-rescue.html
I just noticed they are actually talking about YOU! Haha, sorry for the whole misunderstanding.
It’s ok. And yes, they are discussing me.
” I can’t believe people have yet to do respectable benchmarking, it’s not exactly rocket science”
Well, most benchmarks are useless and don’t really represent the real world. You should build a real-world application that makes good and fair use of OS facilities to test performance. Even then, what works best for you would not work well for others. So it’s rather pointless. Look, most businesses are using Windows as their platform, with all its problems of performance and security.
What is needed with any benchmark is an explanation of why the results are a certain way. As far as I can tell, GeekBench focuses on integer/floating point/memory bandwidth tests, which do not depend very much on operating system structure and depend a lot on compiler quality. If GeekBench were benchmarking I/O performance, it would be a more appropriate measure of operating system performance.
Why would the raw integer/floating point performance be different under Windows vs. Solaris vs. Linux? What part of the operating system affects this?
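To make that concrete: a pure compute loop like this hypothetical one never enters the kernel once it’s running, so the OS choice should barely matter and any difference comes down to the code the compiler emits:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* xorshift-style integer churn: no system calls inside the loop */
        unsigned long long x = 88172645463325252ULL;
        clock_t t0 = clock();

        for (long i = 0; i < 200000000L; i++) {
            x ^= x << 13;
            x ^= x >> 7;
            x ^= x << 17;
        }

        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("x=%llu, %.2f s\n", x, secs);
        return 0;
    }

The answer to the question is basically “nothing”, apart from scheduling noise and whatever compiler each platform ships with.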
Why would you test a server with only 512MB of RAM?
Especially when 64-bit Windows really takes advantage of that memory?
Maybe you could test x64 XP with a decent amount of RAM, or run the same benchmark on 32-bit XP.
XP x64 is really Windows 2003 x64 Server.
Let’s see… benchmarking a cross-platform C compiler on a beta operating system (designed for testing components for their enterprise version) against a full-release OS and C compiler targeted directly at the hardware it’s running on and designed for production use.
Yeah. That’s a great benchmark. “For me to POOP ON!” – Triumph
Well, why don’t you make a better comparison, then?