“The latest ‘OS X is slow’ meme to impinge on the mass psyche of the Internet comes courtesy of one Jasjeet Sekhon, an associate professor of political science at UC Berkeley. The page has hit digg and reddit and been quoted on Slashdot. Is there any merit to this?”
Less FUD and more facts! Great Analysis! I wish they were all that thorough!
I hardly think so. All he does is basically shrug his shoulders and say “I’ve no idea why it’s slower, but it could be the way they’re measuring it”. Apparently we’re supposed to believe that Apple have made some trade-offs in other areas. Errr, well, where exactly? He also writes “Writing this entry felt like arguing on IRC; please don’t make me do it again.” Well, no one is making you do it, and the reason people argue on IRC is because someone sees something they just don’t like or someone writes something they just don’t like.
It certainly goes nowhere near to explaining away this:
http://www.anandtech.com/mac/showdoc.aspx?i=2436&p=8
You can’t explain everything away as ‘the way OS X does memory allocation’. System calls throughout the system and thread creation, as well as other unidentified areas, have quite a large impact, and those are things Sekhon talked quite a bit about.
Based on the observations of most people who have also used Windows or a Linux-based desktop (or even OS 8 or 9), OS X isn’t exactly the fastest desktop OS either, but that is even more difficult to get benchmarks and hard evidence for, because what do you measure?
Regardless of the particulars though, the overall results are there, have been for some time, and the trend is clear. I just wish Apple enthusiasts would admit there’s something wrong and discuss where the problems are and how Apple can solve them, rather than defending it. Apple may well be looking at addressing these problems by moving away from the Mach kernel, and looking at using ZFS is certainly a good idea given that HFS+ is running out of life performance-wise.
Actually, he at least proved that the software benchmark itself is somewhat slanted.
QUOTE: “Linux uses ptmalloc, which is a thread-safe implementation based on Doug Lea’s allocator (Sekhon’s test is single threaded, incidentally). R also uses the Lea allocator on Windows instead of the default Windows malloc. But on Mac OS X, it uses the default allocator.”
Like R on Windows, it’s a simple matter to compile and link against Lea’s malloc instead of the default one on Mac OS X. What happens if we do so?
Mac OS X (default allocator) 24 seconds
Mac OS X (Lea allocator) 10 seconds
Windows XP 10 seconds
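For the curious, “compile and link against Lea’s malloc” is less exotic than it sounds. Here is a minimal sketch of one way to do it in C, assuming you have Doug Lea’s malloc.c in the build; the USE_DL_PREFIX option renames its entry points to dlmalloc/dlfree so they can coexist with the system allocator, and the 35KB size mirrors the benchmark’s workload (this is illustrative, not necessarily how the article’s author built R):

/* Sketch: calling Doug Lea's allocator explicitly instead of the system malloc.
 * Assumes malloc.c (dlmalloc, from Doug Lea's site) is part of the build. */
#define USE_DL_PREFIX      /* public routines become dlmalloc, dlfree, ... */
#include "malloc.c"

int main(void) {
    void *p = dlmalloc(35 * 1024);   /* a ~35KB block, as in Sekhon's workload */
    /* ... use the buffer ... */
    dlfree(p);
    return 0;
}

Compiling malloc.c without the prefix instead makes its malloc/free simply replace the defaults at link time, which is presumably closer to what was done for R.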
In the past, whenever PPC Macs were benchmarked against x86 PCs, people would cry “oh, the benchmark isn’t optimized for the weaker CPU” or “oh, the benchmark was more optimized for the winning CPU.”
Now that we can run benchmarks for all 3 OSes on the same architecture, people nitpick the kernels.
But one thing is always the same: optimized code runs better than non-optimized. A synthetic benchmark doesn’t prove squat, because variations in the source code would yield different results. You’re better off looking at real-world application benchmarks and then asking the vendor why it’s not running as fast as possible on your favorite platform.
Failing that, grab a compiler and DIY.
When it comes to configuring an application, some optimizing of settings is OK. When it comes to compiling, the only things beyond the defaults that should need adjusting are whether to compile for size or speed, and which CPU (if any) to specifically optimize for. Explicitly linking to another memory allocator is far too in-depth for most users, even those compiling applications–the fast one(s) should be the default.
I’d try pointing fingers (is it Apple’s fault? MySQL’s? Some random ports repository maintainers?), but I do notice that there is no mention in the AT article of how MySQL was installed on the OS X server.
Yes, but in this case the app was already using another allocator on Windows. They just hadn’t implemented this on OS X.
Let’s say you have a program. It does four things:
1. It reads in some input.
2. It allocates memory from the system dynamically.
3. It performs numerous complex calculations on datastructures occupying the dynamically allocated memory.
4. It outputs the results of these calculations.
Let’s say that 1, 2, and 4 are “system” operations and 3 isn’t. If the time spent in 1 and 4 is minimal (small input and small output) then the bulk of the time is divided between 2 and 3. If the “system” allocator performs poorly with a given workload, and it’s used more efficiently as a backend for an allocator better-suited for the workload such that the time spent in 2 is marginalized, then 3 dominates the amount of time the program takes to execute. We’ve made the question of whether the operating system’s facilities are efficient irrelevant by making little use of them. We’re largely concerned with how well we’re scheduled for CPU and system memory at this point.
No, he pointed out exactly why it was slower: the value Apple chose for LARGE_THRESHOLD in their malloc implementation isn’t optimal for Sekhon’s code. He profiled the code and demonstrated that it was little more than a benchmark of how fast you can malloc and free 35KB blocks, using the Lea allocator on Windows and Linux and the default allocator on Mac OS X. It turns out that if your code is doing a lot of mallocing and freeing of 35KB blocks, the default allocator on Mac OS X will slow you down. I think it is better to give Apple the benefit of the doubt that they chose a lower value for LARGE_THRESHOLD for a good reason, rather than assume they just drew straws and now Mac OS X sucks. Further, he pointed out that Sekhon made a lot of sweeping generalizations about OS X performance, but included only two benchmarks of questionable (at best) value.
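To make concrete what kind of workload that is, here is a stripped-down illustration in C; the ~35KB block size comes from the profile described in the article, while the iteration count and structure are mine:

/* Illustrative microbenchmark: repeatedly malloc and free a ~35KB block.
 * On Mac OS X this size sits above the default allocator's LARGE_THRESHOLD,
 * so every iteration pays for kernel trips; the Lea allocator keeps such
 * blocks in user space. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t size = 35 * 1024;   /* ~35KB, as in Sekhon's R code */
    const int iterations = 1000000;  /* arbitrary, just to get a stable time */
    clock_t start = clock();

    for (int i = 0; i < iterations; i++) {
        char *buf = malloc(size);
        if (buf == NULL)
            return 1;
        buf[0] = 1;                  /* touch the block so it is really mapped */
        free(buf);
    }

    printf("%d malloc/free pairs took %.2f s\n",
           iterations, (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}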
Nope, the article pointed out that the program was optimised for the benchmark on Linux and Windows but not on OS X. When recompiled and linked to an alternative library, the difference disappeared.
Showing that:
a) the benchmark was flawed, because what was actually measured was not the operation that was supposed to be measured, but an external factor;
b) the software was not correctly optimised for OS X.
You can’t just move code from one OS to another; it needs to be tweaked. This *was* done for the Windows version (and the Linux version, as it is GNU software designed to run on Linux), but it *wasn’t* done for the OS X version, invalidating the benchmark.
Read the article again; he has precisely and clearly explained why it is slower on Mac OS X, and how it is related to malloc allocation boundaries. The test and its results are clearly misleading. Then come back and try to say something intelligent, not troll bullshit.
You refer to the Anandtech article, which is more crap from people who do not understand performance analysis. Anandtech wants to measure thread performance with lmbench, but lmbench tells you nothing about threads. If you don’t believe me, try contacting one of the developers of lmbench; he will tell you the same thing: lmbench does not measure thread performance.
The performance of MySQL on OS X can only be understood by profiling it with Shark; that way we can see where the app spends its time, whether in system calls or somewhere else, whatever… and it is surely not in creating threads. Try writing a small program that creates, say, 60 threads (a sketch follows below), and run it on Linux and OS X; you won’t see any difference in performance.
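For what it’s worth, that test is tiny to write. A minimal sketch (compile with -lpthread; the thread count and the do-nothing worker are just placeholders):

/* Create and join 60 threads, timing how long it takes. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define NTHREADS 60

static void *worker(void *arg) {
    (void)arg;        /* do nothing: we only time creation and joining */
    return NULL;
}

static double now_sec(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void) {
    pthread_t threads[NTHREADS];
    double start = now_sec();

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    printf("created and joined %d threads in %.4f s\n",
           NTHREADS, now_sec() - start);
    return 0;
}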
Anandtech offered some meaningless explanation of their results, with bullshit crap theories and no valid proof. Come back with a profile, a scientific analysis of the performance of MySQL on OS X, and then we can talk. Without that it is just crap, and a discussion among Windows and Linux trolls who desperately want to believe that OS X is slower.
“Without that it is just crap, and a discussion among Windows and Linux trolls who desperately want to believe that OS X is slower.”
You forgot to mention that it is also going to provide an excuse for someone to throw in that Ubuntu is the most wonderful thing in the world.
“You forgot to mention that it is also going to provide an excuse for someone to throw in that Ubuntu is the most wonderful thing in the world.”
Ubuntu is the most wonderful thing in the world.
There, does that make you happy?
Actually, they did not intend to measure the performance of a thread, nor did they measure it.
They intended to measure the time it took the OS to create the thread, not how well that thread performed. And that’s exactly what they did.
You didn’t read the Anandtech article, I presume.
Yes, they measured the performance of creating a thread by measuring [u]fork()[/u]! Whoosh! That’s the sound of that article’s credibility being flushed down the toilet.
Well, what would you measure?
He would probably measure the time of pthread_create, since it’s the lowest-level interface for creating threads suggested by Apple, as opposed to measuring the time necessary to create a new process and a new main thread within that process.
OS X uses POSIX threads. As such, you’d use pthread_create, which is the standard way of creating a thread on *all* POSIX-compliant platforms, including Linux.
fork(), on the other hand, creates processes, which is rather highly optimized on Linux. A fairly basic introduction to processes and why Linux process creation is faster than on most other *nix systems can be found at http://www.informit.com/articles/article.asp?p=370047&seqNum=2&rl=1 . A sketch contrasting the two calls follows below.
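To make the distinction concrete, here is a minimal sketch contrasting the two calls; the comments state the standard semantics, not anything measured:

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void *thread_body(void *arg) { (void)arg; return NULL; }

int main(void) {
    /* fork(): a whole new process, with its own address space and its own
       main thread. This is what the benchmark in question timed. */
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);                     /* child exits immediately */
    waitpid(pid, NULL, 0);

    /* pthread_create(): a new thread inside the existing process, sharing
       its address space. This is what MySQL does per connection, and is
       typically far cheaper than fork(). */
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);

    puts("fork() made a process; pthread_create() made a thread");
    return 0;
}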
Johan De Gelas is a very respectable technical author, but that article he wrote for Anandtech has really dented his credibility: making a very obvious mistake and then not even bothering to correct it!
Well, the link was very basic. My books on the Linux kernel are a lot more elaborate. I fail to see how the Linux kernel is relevant in regard to the kernel in OS X Server.
Hmm… well unless – of course – it’s because you want to insinuate that Johan De Gelas is more experienced with Linux threads (well, processes that is) than with “real” POSIX threads. At least his approach would be correct, had the OS X kernel been Linux (which it isn’t).
He seems to be goofing around between processes and threads, without distinguishing between those two.
We can, however, conclude the following:
* Mac OS X seems to be a suboptimal server OS,
* MySQL 4.x performs poorly on Mac OS X,
* Creating processes with threads takes much longer on OS X Server than with the Linux kernel.
However, I fail to see how the creation of a process and its main thread is relevant to the performance of MySQL 4.x, since it clearly does not create more processes, but rather creates more threads within the existing process.
The overall credibility of the article is still high, but the kernel theory seems somewhat farfetched, considering he is measuring an (AFAICT) unimportant factor (creation of processes, which is irrelevant since MySQL and Apache create threads rather than processes).
The problem may be the TCP/IP stack in OS X Server, or something else entirely. Perhaps the kernel, but we cannot conclude that on the basis of his work.
I really fail to see how you’ve been able to come to the following conclusions.
* Mac OS X seems to be a suboptimal server OS,
This has yet to be proven. The Anandtech article did no such thing, especially considering that they were benchmarking process creation in order to explain the assumed poor performance of thread creation. How is this a relevant comparison?
Worse still, no profile of the code was done in order to find out where the *real* bottlenecks lie. Until this has been done, everything said about Mac OS X performance is handwaving and speculation. Perhaps OS X is as slow as many claim. Perhaps these applications aren’t fully tested on OS X; after all, most deployments of MySQL are on Linux, not OS X. Until such benchmarks have been run through a profiler (Apple’s developer tools provide a very good one), you cannot conclude that OS X is a bad server OS.
* MySQL 4.x performs poorly on Mac OS X,
Yes, but that in itself isn’t very interesting or enlightening. Why is it slower?
* Creating processes with threads takes much longer on OS X Server than with the Linux kernel.
This was not demonstrated, and as such you cannot come to that conclusion. Every process will of course have its own main thread, but creating a new process just for a new thread is … unspeakable. The Anandtech benchmark measures fork. Until it actually measures something more relevant to thread creation, its findings are bunk and its conclusions mean nothing.
fork() is historically very fast on Linux, but is also very fast on most UNIX OSs. It is a basic system function used very heavily by most UNIX software. It’s always been rather highly optimized, Linux just optimizes it more than most.
Oh come on….
The only thing he is doing with this article is explaining why OS X looks so bad in the benchmarks from Sekhon, and why, in a way, the benchmarks aren’t fair. The Windows XP version uses a better allocator for his memory usage patterns than the OS X version, and this can make a big difference, as the author has shown. And, IMHO, this is something that should have been mentioned in the original article in the first place, if Sekhon had done a “serious” benchmark. What’s the point in doing benchmarks and then not thoroughly investigating why a system is slower than another in certain areas?
I think this is an excellent article to get a better understanding about what happened in the benchmarks from Sekhon, and that is the important thing here.
This article in no way tries to deny that “OS X isn’t exactly the fastest desktop OS…”. You say, “I just wish Apple enthusiasts would admit there’s something wrong, discuss where the problems are and how Apple can solve them rather than defending it.” Don’t you think that is exactly what the guy in this article has been doing? He investigated why it was slower and found the reason. From his results, the allocator seems to be the only reason for it performing worse. Whether this means that OS X should come with another malloc implementation/configuration is another matter. R should probably use the same allocator on OS X, just like it does on Windows.
Did you even read the article? The software package that Sekhon was using (R) used the Doug Lea allocator on Linux and on Windows. On OS X, it was using the default system memory allocator. Once the OS X version of R was compiled using the same allocator, the difference becomes nil.
The problem this article is highlighting is that people are quick to jump the gun and point the finger at Mac OS X for being slow. If Jasjeet Sekhon had bothered to profile the benchmark he was running, he would have found that the bottleneck was memory allocation/deallocation, not some phantom passing of variables through some memory file thingy.
I do not question your experience when you say that apps you’ve used on OS X felt slower. The question is, why? Is it because they aren’t as well ported (like R?) or is it an inherent fault of Mac OS X? More articles like this one are needed as they seek to explain the performance characteristics of a particular benchmark. This is in stark contrast to Sekhon’s article which contained barely any credible technical information.
The Finder itself could use some work. Is it still Carbon? But most times for me the desktop is snappy as are most OS operations.
I will trade a little speed to not have to fight my way through the messes that KDE and Gnome have become.
And boy have they become messy. Even XP is easier to move around in.
Come again? Wake me when he teaches computer science.
Great point you’ve got there. You really showed me that it is in fact not possible for a person to possess knowledge in more than one discipline.
Oh, and you better tell Linus not to accept any more work from Con Kolivas, being that he’s just a medical doctor…
My, but you penguinistas are touchy. No wonder your OS will never get more than 3% market share.
Another excellent post from the author of this blog. Although OS X certainly has some performance issues, this particular one wasn’t really the OS’s fault.
One thing I don’t really get is why people seem to expect OS X to be faster than or even equal to other OS’s on everything.
1. It’s still a fairly young system
2. The extra overhead from drawing on OS X slows things down considerably, but I feel it is well worth it for the transparency and the lack of visual imperfections it brings.
Certainly the window server in OS X is far superior to the one in Windows. I love being able to smoothly move a window around while it is busy doing something.
“One thing I don’t really get is why people seem to expect OS X to be faster than or even equal to other OS’s on everything.”
Yes, why in the world would people expect that from The World’s Most Advanced Operating System?
Well, he shows that OSX and Linux can be the same speed, but not if you leave OSX as it is by default.
A quote from the article:
Like R on Windows, it’s a simple matter to compile and link against Lea’s malloc instead of the default one on Mac OS X.
Something seems backwards…
But that is only because in this particular case the program has rather unusual memory usage. Normally you wouldn’t have to do something like this.
…tell me again why we lent any credence to the findings of a political science prof? 🙂
Actually, I found it to be a pretty thorough and surprisingly detailed account of *why* OS X came out the loser to XP in the benchmarks (too bad he didn’t have a Linux install to tinker with). I would, after reading this, be quite interested in the performance of Windows XP using the default allocator, but that was not the point of the article.
Whether or not he is a computer science professor (rather than political science) is completely irrelevant as long as his findings are correct. I think the headline is a bit misleading, and lends itself toward flame.
Um…the political science prof is the one who ran the original (flawed) benchmark.
I’m not dissing ridiculous_fish; his debunking of the polisci’s work is excellent.
…perspective (user feel, that is), Mac OS X seems to perform much better if the drive I/O speed is higher.
Slap a 10,000 RPM 74GB or 150GB (with SATA PCI card) Raptor and use that as a boot drive and Mac OS X just leaps off the desk.
Apple pushes the envelope on OS features and quality, and it comes at the price of pure speed. Combined with their lower-end, slower hardware, which more people can afford, that seems to be why most who use Mac OS X say it’s slow compared to Windows 95/2000 or other slimmer OSes.
//Slap a 10,000 RPM 74GB or 150GB (with SATA PCI card) Raptor and use that as a boot drive and Mac OS X just leaps off the desk//
Er … of course that would make it faster. Not exactly a cheap solution, given the initial cost of a Mac.
It would likely make XP faster, too.
True. But I think in the case of OS X you would see slightly more of a speed-up. Unfortunately this would probably be due to OS X’s heavier memory usage, and therefore virtual memory use.
hey, have any of you used a mac with osx on it? it runs dog slow even on high end macs. the interface is only done half assed. and gaming is like being on a very low end pc. what a waste of perfectly good hardware. linux runs so much faster on them. but linux is a mess. why is this? because of political reasons. i can just imagine what xp is like! that must be funny! what is needed is a cross platform comparison. compare them first.
>>> hey, have any of you used a mac with osx on it? it runs dog slow even on high end macs. the interface is only done half assed. <<<
what? you have got to be kidding me! what about a dual G5 runs dog slow? describing my dual G5 in front of me… running CS2 (all the apps), firefox, safari, transmit, mail, word, quark, two versions of freehand, google earth, and a few others at the same time is FAR from dog slow! as a matter of fact… i am constantly impressed with how much OSX can deal with all at once without whining! and i can not understand what people like you mean by OSX being dog slow…
>>> and gaming is like being on a very low end pc. what a waste of perfectly good hardware. <<<
oh… wait… now i know where you are coming from… YOU ARE A GAMER!!! now i see… it is all clear to me! you want to spend 4 grand on a box that gets high frame rates in quake… you are not interested in actually doing any professional work, like most of us that own dual processor g5s… YOU ARE JUST A GAMER… so… yea, right… i can see why you think our computers are a waste of hardware… it’s because you are a LOSER GAMER…
>>> linux runs so much faster on them. <<<
really… CS2 runs faster on linux on a g5 than it does on OSX? hmm… interesting… what about Cinema4D… hmm?
>>> but linux is a mess. why is this? because of political reasons. <<<
what the hell are you talking about… “political reasons”? i am not even going to comment on that… cus you obviously do not have a clue!
>>> i can just imagine what xp is like! that must be funny! what is needed is a cross platform comparison. compare them first. <<<
huh?
ok, Mr. 15 year old gamer trollie… aren’t you late for class?
wow, i’m a troll for telling the truth. i’m a printer by trade. i have used a dual g5, and it is slow for a 3000 dollar pc! and yes, pcs are used for much more than browsing the web. and mac apps don’t run on linux.
so take your blinders off and look around. the mac is being dumped in my field for pcs, because of this and the cost of the software. so you’re an adult because you don’t game? oh, a mac is a pc by the way, just take the cover off and look!
I like my trolls better trained than you. Go back to picking on liberals in your other forums.
i’m not a troll, face the truth! i have 3 macs and many pcs. i’m also a liberal! i just don’t see why osx is so great! i use it and it is dog slow! it’s bsd and it should run like it. i’m sorry for being so blunt, but it’s true. it’s an os, not your mother! so don’t get so mad at me when i tell the truth. just try a different os on your mac and you will see what i’m talking about!
“i use it and it is dog slow! it’s bsd and it should run like it.”
hmmm… my new MacBook Pro sure hauls ass…
cool, so how do other oses run on it? i have seen osx run on non-apple hardware; it was faster, but nothing like bsd or linux. i’m not trolling the hardware, i’d just like to know.
i just don’t get the whole osx thing. i find it very disappointing. maybe it’s just me. so tell me, how do you like the intel mac?
Well, I’ve only had it about a week and a half now, but I really like it. Everything is so simple and clean.
I haven’t had a chance to install other OS’s on it yet, but I’ll get to it. The quality of the apps, whether shareware or purchased, is amazing. I love having a real shell (Terminal) rather than cmd.exe too.
You should perhaps think twice before writing in such an aggressive way. You have never even used an Apple Macintosh (G5 or Intel); you’ve looked at it on paper and drawn your own conclusions from the FUD floating around on Tom’s Hardware!
The author’s analysis is good, but the conclusion is flawed. He correctly identified the weakness in OS X, but then criticized the benchmark for being flawed instead of the malloc implementation. The thing is, the use-case in question isn’t that unusual for code written in a high-level manner. Malloc’ed buffers are a pain to keep track of, and the straightforward way of coding with temporary malloc buffers is to allocate them and free them immediately. Depending on the code, the alternative is substantially more complex, involving global data, reference counting, etc.
The key issue here is why Apple chose a 15KB changeover instead of 128KB. The cross-over should be set at the point where the performance improvement from using VM for the allocation outweighs the cost of going into the kernel. The same is true for other VM tricks, such as doing large copies using COW in the VM, etc.
Now, it could very well be that Apple profiled a bunch of code and decided that 15KB was the optimal cross-over point. In which case, the performance issue in this code isn’t so much anybody’s fault as it is just an unfortunate mismatch between the assumptions of the memory allocator and the assumptions of the application. On the other hand, 15KB is awfully small. It’s only four pages on x86. I’m sure the 128KB number in Lea’s allocator was chosen after some thought as well, and given the high cost of kernel trips on OS X, it seems more reasonable at face value.
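To illustrate the trade-off being debated, here is a rough sketch of that kind of size-threshold dispatch. This is not Apple’s code: the small-block path below just defers to libc malloc as a stand-in for real user-space arenas, and a matching free would have to remember the size so it can munmap the large blocks:

#include <stdlib.h>
#include <sys/mman.h>

#define LARGE_THRESHOLD (15 * 1024)   /* the ~15KB cross-over discussed above */

void *threshold_malloc(size_t size) {
    if (size > LARGE_THRESHOLD) {
        /* Large path: get fresh pages from the kernel. One kernel trip per
           allocation (and another to unmap on free), but pages can be
           returned to the OS when the block is released. */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }
    /* Small path: stay in user space, no system call on the hot path. */
    return malloc(size);
}

Every malloc/free of a block above the threshold then costs kernel round trips, which is exactly what a workload full of 35KB allocations runs into when the threshold is 15KB rather than 128KB.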
I’m not convinced that doing several 35KB mallocs and frees sequentially is in any way expected behavior. In fact, I would assert that any application that relies so heavily on large allocations being fast should provide its own special-purpose allocator. In this case, since each buffer seems to be of the same size, a pooling implementation would be appropriate, as it would make (most) allocations and deallocations constant time and extremely fast (just push and pop from a stack; see the sketch below). There is no additional complexity over just mallocing, but it does rely on all buffers being the same size. (Well, there may be some additional complexity in deciding when to shrink the pool, if ever.)
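Something like this minimal sketch, assuming all buffers really are the same ~35KB size (the pool cap is arbitrary, and it is not thread-safe):

/* Fixed-size pooling allocator: freed blocks go on a stack and are reused,
 * so most allocations and deallocations are a pointer push/pop. */
#include <stdlib.h>

#define BLOCK_SIZE (35 * 1024)
#define POOL_MAX   64

static void *pool[POOL_MAX];          /* stack of blocks available for reuse */
static int   pool_top = 0;

void *pool_alloc(void) {
    if (pool_top > 0)
        return pool[--pool_top];      /* reuse a freed block: O(1), no malloc */
    return malloc(BLOCK_SIZE);        /* pool empty: fall back to malloc */
}

void pool_free(void *block) {
    if (pool_top < POOL_MAX)
        pool[pool_top++] = block;     /* keep the block around for reuse */
    else
        free(block);                  /* pool full: actually release it */
}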
We can discuss separately whether OS X’s malloc is appropriate for the average program. But I do not think it is fair to judge an operating system’s malloc on an extraordinary case such as this one.
The simple fact is that the majority of code that gets written does not get optimized in that way. Writing custom memory allocators is something people only do if they are absolutely tweaking the hell out of the software. It is not usually done for regular, portable code.
When working with matrices, or things of that nature, the allocate/free pattern is not unusual. This is especially noticeable in C++ code when the copy constructor does a memory allocation. In such code, temporary matrices will invoke successive allocate/free calls as the constructor/destructor is called in succession. In hand-written C code, the same pattern is noticeable in the exact same situations.
> When working with matrices, or things of that nature, the
> allocate/free pattern is not unusual.
And these are actually the patterns I was talking about where Java is faster than native C, because Java optimizes allocation/deallocation. That is very hard to do in C unless you know exactly what your data set is going to look like in advance.
There must have been something more than just compilation behind Apple’s switch from PPC to Intel CPUs.
http://www.apple.com/intel/
Things certainly won’t get that much better with the system closed.
http://www.osnews.com/comment.php?news_id=14633
That just goes to show that tests that ‘benchmark’ with only a particular data size are not very informative or interesting. It is more interesting to see how performance scales against a factor, such as matrix size.
A proper test of how performance scales would be far more interesting. The second author’s (rasbora?) article implies that performance would converge for allocations above 128KB. I would love to see such a graph.
All I can say is that Sekhon should stick to Political Science and leave Computer Science to the Computer Scientists.
Tiger is fast enough for me!
It’s awesome watching a bunch of people who don’t know what they’re talking about getting all excited about the “article.” Unfortunately, the author of said article doesn’t seem to be any more informed than any of you.
1. His analysis of the system call overhead is completely flawed. All he did was look at the jump from libc. If he had traced it all the way through the BSD subsystem to Mach, he would have found that data is, in fact, packed into memory buffers, and things get very expensive.
2. The fact that the original benchmark picked a value which causes OS X’s malloc to become ridiculously slow is not the fault of the original benchmark. That doesn’t mean OS X’s malloc is useless across the board, but it does show it is very poor for that benchmark. If that benchmark turns out to be realistic for applications, then it does say something about OS X’s malloc.
1. Fair enough. I disregarded that portion of the article and focused upon why the OS X system was slower in the benchmark, which looks to be malloc.
2. I agree wholeheartedly and stated as much earlier in the thread. I also stated that the title of the article here on OSNews is misleading, as the results weren’t “debunked” at all, merely explained.
Having said that, I don’t see any reason to be combative and insulting to the majority of the posters (although I would be willing to make an exception or two). Oh well, back to the coffee machine…
Oh, but they were debunked. The article proves that a sweeping generalization was made based on a micro-benchmark.
I can do the same thing and make the claim “Java code is 10x faster than native C code”, because in reality there are a few microbenchmarks I can run where Java really is 10x faster than native C. But can I legitimately make a sweeping claim based on a microbenchmark? Of course not.
So really, all the original article proved is that the Linux crowd has mastered the art of using FUD almost as well as many commercial companies have.
“The article proves that a sweeping generalization was made based on a micro-benchmark.”
Very much agreed.
“So really, all the original article proved is that the Linux crowd has mastered the art of using FUD almost as well as many commercial companies have.”
And now, you do it yourself.
> And now, you do it yourself.
Not really. There are plenty of other examples as well. (The one that comes to mind right away is the “Vista delays prove that Microsoft’s development model doesn’t work” one… Um… Perl 6 anyone? Kernel 2.4 anyone?… you get my point.)
Well, the kernel-2.4 development model didn’t work. That’s why they changed the model for 2.6 and onwards.
That model, however, also seems to have issues (according to an article posted a few weeks ago on OSN), so one could claim that the 2.6 model doesn’t work either.
. o O ( Actually, people could claim a lot, and often be right about half of it, and terribly wrong about the other half of their claims )
What was demonstrated was an inconsistency in what was being measured. After minimizing the role of the operating system’s facilities such as is done in the Windows test, it’s found that there’s no appreciable difference for the particular test. There should be minimal differences across operating systems on identical hardware for compute-bound software that makes similarly-insignificant use of the system services.
Jasjeet Sekhon is Jasjeet Sekhon, and whatever flaws in his methodology are his own.
can someone test these speeds/performance/usability:
linux-os installed and benchmarked by a linux-os geek,
mswindows installed and benchmarked by a mswindows nerd,
osx (ppc/x86) installed and benched by an osx warrior.
all on equal hardware of course, 1 meter above sea level, and room temperature should be 21 degrees celsius.
(try a cray installed and benched by a cray professor too!)
well, i am just curious…
What a brilliant article! We should have more of these, instead of the “Why I will never buy a Mac/Why I will buy a Mac again” nonsense. The joy of reading it was immense; it was like a bright light illuminating the dark path of zealotry and nonsense that’s everywhere these days.
I don’t have a machine running Linux handy, but I do have a FreeBSD 5.4 machine, and Sekhon seems to hold BSD in high esteem.
Shouldn’t the title be: “BSD vs OS X”?
While it’s a nice explanation of why OS X is slower… OS X IS STILL SLOWER.
A G5 is based on the IBM P5 processor. My G5 dually blows away the benchmarks. Apple lied; the G5 is more efficient than the Duo. Apple is just switching because they want to build cheap boxes, and the G5 processor is not cheap. If you ask me, Apple should have designed the next generation Mac around the Cell processor. If you don’t know what it is, look it up. The Cell processor is going to be hot. My next Linux desktop will be based on it.
For now my dual Linux/OS X G5 w/8GB RAM will serve me fine.
1) The G5 is based on the Power4 processor, not the Power5.
2) It depends substantially on the benchmark. The Core Duo beats the G5 on integer tasks, and unoptimized floating-point code. The G5 wins in vector computations and highly optimized floating-point code. The former types of code are much more common than the latter.
3) Yeah, great, then we’d have a Mac with the integer performance of a 1.5 GHz Netburst Celeron…
4) The G5 is actually quite cheap, with a die size of about 65mm^2 in the single-core incarnation. The Core Duo is more expensive, being about 100mm^2, and being built on a more cutting-edge 65nm process.