Several open source applications are available on both Linux and Windows. This gave Mohammed Saleh the idea of comparing the performance of several of these applications on Ubuntu 8.04 and Windows XP SP3, to see which of the two performed better with certain applications. The results were rather interesting.

The tests he performed can be grouped roughly into multimedia-related tests (Blender, Avidemux, etc.) and hard disk performance tests (the command line RAR utility and ClamAV). The results seem to indicate that Windows XP beats Ubuntu hands-down when it comes to multimedia-related applications, while Ubuntu has the advantage in IO-intensive applications.
Saleh also did a multitasking test. He wrote a script which performed several tasks simultaneously to see which of the two operating systems was better at multitasking. This time, Ubuntu beat Windows by a wide margin.
The methodology of the test appears to be fairly sound. The tests were performed on the exact same machine, except for one tiny detail: Windows XP was installed on a SATA disk, while Ubuntu had to settle for an ATA one. Interestingly, Ubuntu still outperformed Windows XP on IO intensive tasks.
Just so you know.
A lot of this information is inaccurate. The most blatant example is that he thinks the KERNEL is what might be limiting Avidemux and the like, when it’s far more likely that Ubuntu simply hasn’t enabled the MMX/SSE/SSE2 assembly optimizations that ffmpeg optionally supports.
And then he says that using, for example, the k8-optimized kernel can cause stability problems – where does that come from? He seems sadly misinformed.
The worst part is that most people believe 100% of what random bozos put on their blogs, as if it were absolute truth.
The test is still relevant, as most users will not recompile their software for better optimizations. They will just use what they get from apt-get.
The worst part is that most people believe 100% of what random bozos put on their blogs, as if it were absolute truth.
And are you claiming, then, that the blogger’s results were false? He was getting lower performance in several important applications under Linux than under XP. I suspect it’s the CFS scheduler, which just tries to balance everything instead of giving one application far more CPU time than another. Linux did beat Windows hands-down in multitasking in his test, after all.
I think I’ll do a similar test on my laptop, though I’m gonna test Gentoo, Mandriva and XP. If Linux scores lower then are you going to claim my results false, too?
IMHO his tests were pretty much OK and valid. It doesn’t really matter what the cause is in these things; it’s the end result that matters. One can debate things for all eternity if one wishes, but no end user will care about that.
Of course it matters – we can’t begin to bring Ubuntu up to speed until we know the cause
I’ll be interested in seeing your results when/if you perform these tests
I’ll be interested in seeing your results when/if you perform these tests
It takes me a while to install anything on the laptop – it’s pretty slow, after all – so it might take a few days. And I don’t yet know where I’ll put the results, since I don’t have a blog.
You could always consider writing a small article for OSNews with the results. I’m sure Thom would appreciate it, not to mention quite a few fellow readers.
When did I ever claim his results were false? What I claimed was false was the INFORMATION about why something might be slower or faster, and about what causes instability.
I think the test results were reasonably fair. Taking a fairly unmodified, out-of-the-box Ubuntu install is reasonable. If he had compiled any of the applications or the kernel, it would have been cheating, seeing that those who are not so tech-savvy would not even consider any extra configuration. Tests like these are useful to show that much needs to be done to the default settings of distros to make sure they run well with the most popular applications. Good test, and I look forward to seeing the results for other distros. Thanks.
You say Ubuntu did better in the IO-intensive tasks, but then you blame the discrepancy on the fact that Ubuntu is on an ATA device instead of SATA. That would give it a *dis*advantage… and yet it still out-performs Windows.
Yup, you’re right, I mixed them up. It’s fixed in the item now.
Thanks!
Sorry, these benchmarks were irrelevant to me.
Ubuntu 8.04 is a clusterf@%k. OpenGL is inherently broken, it is buggy and slow. Hopefully in the next couple of months all the bugs will have been ironed out.
A much better test would be Ubuntu 7.10 vs XP SP3.
Currently, I am typing this into Konqueror, while the PC is busy removing 8.04 and re-installing 7.10
I don’t know. I have Ubuntu 8.04 running as a virtual host and it’s doing a really good job so far. Performance is good and I’ve had no reliability problems.
I fortunately don’t need 3D so i’m quite happy with Fedora 9 and KDE 4.* .
I’d rather game on my Xbox 360 anyway 🙂
I’m really not sure what you’re talking about. As a long-time Gentoo user who desperately needed a non-source-based distro for my new MacBook Pro, I gave Kubuntu-kde4-8.04 a try with much reluctance, and I have been (decently) pleased.
Is there some story, some blog post, some link that you can give us telling us why 8.04 is the clusterf@%k you claim it to be?
I’d be interested to hear any intelligent criticism, much more so than unsubstantiated claims.
Not sure what’s causing you problems – 8.04 seems to work well for me. Perhaps you have some “interesting” hardware?
But one of the features of free software is that you can stay with 7.10 for as long as you like if that works better for you, unlike certain other environments I know (*cough* XP *cough*).
XP – supported since 2001. Before it is done, it will wind up having been supported for 7-8 years. On the other hand, Ubuntu 7.10 will only be supported for a total of 1.5 years. I hope your cough gets better.
You missed my point rather badly.
I can still download and install the original Ubuntu 4.10 today (get it at http://old-releases.ubuntu.com/releases/4.10/), while XP will die on 30 June (except for a few markets where Vista is hopelessly unable to compete) simply because Microsoft wants it to die. And Microsoft can deactivate *your* permission to run XP without recourse, as they did my daughter. Read your EULA.
And lauding Microsoft for supporting XP for 7 years is disingenuous and a bit laughable – it was Microsoft’s ONLY OS for 5 long, delay-filled years until Vista shipped to a less-than-glowing reception. Of COURSE they supported it for what seemed like eternity!
Canonical ships new Ubuntu releases every 6 months with all the reliability of a Japanese passenger train, so they are supporting each release for 2 full successor lifecycles (*6* for Long Term Support releases like 8.04).
Hope that clarifies why Microsoft makes me cough. 🙂
Windows gives a bigger CPU priority boost to a single task, so it’s natural that for one task Windows will outperform Linux. It also happens because Linux has better priority algorithms (CFS) that may mean less performance for a single task, but offer better responsiveness across multiple tasks.
Linux sometimes has worse drivers (e.g. for ATI cards), and there Linux will lose on performance. Apart from that, the Linux kernel is similar in capabilities to the Longhorn kernel: fully scalable, secure, and better tuned for caching.
So, Linux is better than XP
If I’m working with Blender, I don’t give a *whatever* about what CFS is doing – I need to finish my work on time. So Linux sux. If Ubuntu is positioned as a desktop OS, it should work like a desktop OS. If I need a server OS for heavy multitasking, I will use one that suits me, but when I need a desktop OS to work with multimedia, Linux is no good for me.
then up the priority on blender…
This looks like a job for Renice -20 Man!
Faster than a runaway process!
Able to overcome scroogy schedulers with a single command!
Renice -20 man! Renice -20 man! he renice -20 you! He renice -20 me! Renice -20 man!
Funny thing is that even a -20 process will not give the same level of “lockup” as a realtime-scheduled Windows process. But that’s purely IIRC…
Shoot, a realtime thread in Windows will completely lock up the machine until that thread is damn ready to stop processing. I’ve made that mistake one too many times.
You don’t get the gist of the compromises being made here. Windows multi-tasks and does some renicing in favour of the active application currently running, automatically. The problem with this is that the foreground running program, especially one as intensive as Blender, tends to drag down the rest of the system as a result. As soon as you multi-task to something else, poof, everything runs slower. Will that help you get your work done faster? Thankfully, this doesn’t happen on any Linux system I’ve yet used.
However, in this case I doubt whether a renice would help Blender. There are a few other performance bottlenecks to look at first before doing that.
I don’t buy your reasoning. If a scheduler degrades performance when you only run one application, then the scheduler is broken. I hardly think that is true for modern Linux schedulers.
Then you just claim that Linux is similar to the Vista kernel but better without any reasoning or proof.
“I like Linux more than Windows therefore it’s better”.
but you never run just one application, or do you?
All services and such are also applications. I guess it would be a bit hard for the scheduler to know exactly which application the user prefers – well, that’s what nice is for. But ordinary users don’t want to fiddle with such things. Maybe desktop distributions should make all desktop apps get a higher priority, or let users choose a priority by right-clicking on a window.
http://kerneltrap.org/Linux/Defining_Scheduler_Task_Groups
I know there is a better write-up about it out there somewhere, but right now I can’t find it…
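For what it’s worth, the right-click-to-renice idea above doesn’t need anything exotic – a desktop shell could do it with a single setpriority() call. Here is a minimal sketch in plain POSIX C of a hypothetical standalone tool (not something any existing distro ships), just to illustrate the mechanism:

```c
/* mini-renice: set the nice value of an existing process.
   Negative values (higher priority) normally require root/CAP_SYS_NICE. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <nice value>\n", argv[0]);
        return 1;
    }

    pid_t pid = (pid_t)atoi(argv[1]);
    int niceval = atoi(argv[2]);

    if (setpriority(PRIO_PROCESS, pid, niceval) != 0) {
        perror("setpriority");
        return 1;
    }

    printf("pid %d now running at nice %d\n", (int)pid, niceval);
    return 0;
}
```

A window manager could map a “priority” context-menu entry to the window’s client PID and make the same call behind the scenes.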
I will find the Linux-Windows benchmarks interesting as soon as there is a check of Adobe InDesign/Photoshop performance. Self-made scripts or comparing three/four flagship programs all the time and shouting “Linux IS faster” is so… well, never mind. It seems that someone tries to sell me a car, when I need a train (“ok, maybe it wouldn’t take all the carbon you need to transport, but look, it is faster!”).
No one needs a super-fast OS with no top-level apps. Let it be Win98, as long as I can finish my job.
***YAWN***
How many times have we heard this?
Some people are sick of intransigence. If you want to be tied to a certain application, fair enough, but do not expect people to stop the world to pander to you and your needs.
What I like about this is how you said all this was irrelevant to you above, but then complain about people complaining about how the world should pander to them.
Classy and consistent.
You picked me up wrong there…
The first post was purely because the benchmarks are irrelevant to me and, in all honesty, to everyone else too. Benchmarks are never a true reflection of real life.
My second post was just a reply back to the usual posting that “linux is crap because Photoshop/Dreamweaver/ do not install natively”…. same old story, same old tune. Some people expect the world to pander to their needs.
I may have read you wrong. And I certainly don’t disagree with the “pander to me” attitude; Ubuntu 8.04 has been taking some hard knocks on the forums for the same reason.
In a way, I agree with this. It doesn’t matter what operating system you use or how slow it is, as long as you can do your work.
In general, Linux boxes run lighter and faster and if someone is prepared to alter their work habits to work with Linux, they can get a very nice experience.
I can’t imagine anyone thinking that they can get more performance out of Windows, whatever the release, than with Linux. All the optimisations in application software and Windows’ bias to give more priority to the foreground application help, but when I quit an application and it takes several seconds for the icons on the desktop to be regenerated, I have to laugh.
Hello, I’m the blogger who wrote the article:
I still thank Redeeman for spending his time reading my post. I’d like to answer some of his points:
“he thinks the KERNEL is what might be limiting Avidemux and the like, when it’s far more likely that Ubuntu simply hasn’t enabled the MMX/SSE/SSE2 assembly optimizations that ffmpeg optionally supports.”
I was trying to guess where the problem was. I can confirm that I had installed all the media codecs, and that both LAME and Avidemux were using their internal ASM optimizations, but I don’t know about the Ubuntu kernel.
“he says that using, for example, the k8-optimized kernel can cause stability problems”
Replacing Ubuntu’s default kernel with another Ubuntu-supplied kernel for the same version is safe, but using any other kernel can certainly cause issues, especially for inexperienced users.
“The worst part is that most people believe 100% of what random bozos put on their blogs, as if it were absolute truth.”
Blogs are not tech labs or magazines. They are for people to share their experiences with the rest of the world. They naturally come without any warranty, except for what readers want to believe. Personally, I still believe that there are a lot of credible blogs, and one only needs to have faith in people.
Finally, you can do a similar test yourself, it is very simple. I can assure you that you’ll get similar results.
I still believe that there are a lot of credible blogs, and one only needs to have faith in people.
Just to defend the guy, I’d say there are the right places where faith matters; in cases like this, credibility is what matters. And however deep they say we are in the Web 2.0 mud, I have yet to be convinced of the credibility of the random blogger, on any issue whatsoever. That doesn’t mean there aren’t incredibly good ones, but you’d better take everything with a – grand – grain of salt.
Finally, you can do a similar test yourself, it is very simple. I can assure you that you’ll get similar results.
I’d say not; I’m pretty sure one could produce fairly different results with a recompiled kernel and recompiled apps, or on a well-compiled Gentoo, whichever.
I think the biggest problem is that you cannot consider such data absolute. Custom-configured machines are heavily dependent on personal preferences, constraints and so on (for example, did you buy specific hardware because you needed it to work with Linux? Maybe Windows with other hardware would be blazing fast… maybe not… but that is the kind of thing I mean).
Moreover, you cannot assume the same application is optimized for different platforms in the same way. By using the same application on both platforms, you cannot assume both versions are optimized and working at the maximum speed they could. Think Qt and GTK: would you state that those ARE optimized for the Windows platform? I don’t think you can, and that skews your overall results.
I’m afraid those data only demonstrate that, for the applications you use, on the hardware you own, and for the way you use them, Ubuntu MIGHT be faster. And we’re happy for you.
I’m sure I can find thousands of combinations where Windows is faster. Plus, remember that many important details cannot be captured with a few tests but require a more analytical approach.
Overall, your info can be useful to people who work with your hardware and use the same software you use. But you cannot use it to measure actual OS performance.
Obviously this has a lot to do with what is installed and configured on a base system. I think desktop-oriented Linux distributions should spend a lot more resources on kernel patches and configuration to optimize KDE/GNOME response time. What I’m seeing instead is the different distros fighting to release the latest alpha version x of software y. I need a solid, responsive desktop, not a CFS-based kernel with alpha 2 of OpenOffice on beta 5 of GNOME 3.8 or whatever!
This alpha-software race is destroying Linux’s credibility on the desktop… and, as I mentioned in a previous comment, makes Linux NOT ‘Ready for the Desktop’!
The author repeatedly blames NTFS (vs ext3) for the slower ClamAV performance. The methodology doesn’t justify blaming the filesystem. The VFS or IO drivers can also contribute, or it could conceivably be related to NT security policy settings… more investigation is warranted before you can pin the difference down to the filesystem.
Typically, anything dealing with lots of small files will crawl on Windows, where the overhead of actually opening a file is very high. Similar problems occur with large directories (more than 1000 files or so). Either could have happened with the ClamAV test, and that might begin to explain the discrepancy.
Blaming NTFS itself might be a bit hasty, but there’s definitely something going on in the Windows filesystem somewhere. The same issues do not seem to affect Linux nearly as much.
I wonder what was going on with the Blender benchmark, though. Much of Blender’s performance depends on OpenGL performance, so was the problem to do with nVidia’s drivers, or with the desktop effects, which are present on Ubuntu but not on Windows XP?
I cannot vouch for the ClamAV benchmark’s authenticity – but I can offer numbers of my own.
A certain I/O benchmarking tool (C, POSIX/Win32) that I’ve developed was designed to simulate heavy I/O using a very high number of small files (1K-20K), running concurrent create/write/close cycles under different directories.
On the same machine (2x2C-Xeon, 4GB, SCSI RAID5) – under Windows 2K3 (i386/NTFS/no-8.3-names), the application was hard pressed to pass the 5MB/s line. Under Linux (CentOS5, ext3 [noatime/nodiratime], SELinux enabled) the application managed to sustain around 60MB/s.
Switching to FAT32 did improve the performance (~12MB/s) but I doubt that using FAT32 is a desirable solution.
After doing some additional research it seemed that:
A. CreateFile is dead slow. Around 10-100 times (!!!) slower than open() on Linux.
B. NTFS performs OK as -long- as you keep the file count (per directory) to <100.
C. Once you pass the 100 line things get far worse, -fast-.
D. NTFS goes into what-looks-like a winter hibernation once you pass the 10,000 file/directory line.
E. NTFS doesn’t seem to like deep directory hierarchy (If you rather replace file count with directory count). The deeper you go, the slower things get.
You can argue that I’m a poor Win32 developer or that I’ve missed something (beyond disabling 8.3 filenames) – but you can try it yourself.
– Gilboa
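For anyone who wants to reproduce something like the numbers above without access to the original tool, here is a minimal sketch of the same create/write/close pattern. It is single-threaded and POSIX-only (the original was concurrent and also had a Win32 code path), and the directory count, file count and 4 KiB file size are placeholder assumptions within the 1K-20K range mentioned above:

```c
/* small-file create/write/close micro-benchmark -- single-threaded sketch */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

#define DIRS           16
#define FILES_PER_DIR  1000
#define FILE_SIZE      4096        /* 4 KiB per file */

int main(void)
{
    static char buf[FILE_SIZE];
    char path[256];
    memset(buf, 'x', sizeof buf);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int d = 0; d < DIRS; d++) {
        snprintf(path, sizeof path, "bench.%d", d);
        mkdir(path, 0755);
        for (int f = 0; f < FILES_PER_DIR; f++) {
            snprintf(path, sizeof path, "bench.%d/file.%d", d, f);
            int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            if (write(fd, buf, sizeof buf) != sizeof buf) { perror("write"); return 1; }
            close(fd);
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib  = (double)DIRS * FILES_PER_DIR * FILE_SIZE / (1024.0 * 1024.0);
    printf("%d files, %.1f MiB in %.2f s = %.2f MiB/s\n",
           DIRS * FILES_PER_DIR, mib, secs, mib / secs);
    return 0;
}
```

On Windows, the equivalent loop would go through CreateFile/WriteFile/CloseHandle; running both against the same directory tree should make the small-file gap described above easy to see.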
Could you send me the benchmark pls? I’d like to try it on 2008 and post results.
Send to [email protected] pls.
Thx a lot
D
I can only send you pseudo code – the code itself belongs to my employer.
– Gilboa
I’d like to say a few things.
There can be a lot of difference between the performance of systems with stock and locally compiled kernels. The chosen schedulers can also make a great difference, as can differently compiled versions of the same app. Moreover, we don’t have – or I haven’t noticed – information about what else was running on those machines while the tests were performed (important from a scheduling and task-management point of view). There would be a _lot_ of difference between these results and those gathered on a well set up Gentoo machine. Then, why something like RAR, why not – off the top of my head – e.g. bzip2? I also don’t like the ext3 vs NTFS claim; I’d say the difference mostly lies in driver performance and in the types of files involved [e.g. many small files, and so on].
suspicions about Ubuntu’s ability to take advantage of modern CPU’s capabilities
Now here’s where the “changing the default kernel can cause stability problems” claim (funny at that) turns into “changing the default kernel can affect … performance” [bad wording though] (!).
this requires technical knowledge
I’d say that if one believes they have the technical knowledge to produce acceptable benchmarks, then they should also have the technical knowledge to custom-compile a few things.
Well the one doing the benchmark could absolutely optimize both systems. But that wouldn’t make sense for non-technical readers.
The point is that the default setup of Ubuntu and Windows is what’s important to most people. My mother is a teacher in multimedia and she uses both WinXP and Ubuntu. She doesn’t know what the kernel even is, let alone how to recompile it. So what she, and many others, are stuck with is the default configuration of her system. That’s what matters. That’s what I care about.
But sure, for technical readers it may be interesting to see what you can get if you do absolutely everything you can to get good results, including recompiling with optimized flags, configuring software, or even changing different parts of the system. I don’t think that’s what the author really aimed at, though.
I’d like to reply here:
My test was not about how fast Ubuntu, or Linux in general, CAN BE. It’s about how Ubuntu 8.04 performs against Windows XP SP3. I decided to stick to its default setup, with a couple of easy tweaks that can also be done on Windows. I don’t even pretend to know the reasons behind the benchmark scores; I merely suggest what my limited knowledge helps me with.
Well, improving performance and improving stability are completely different things, don’t you think?
Of course I know how to customise Linux. In fact, I never settle for the default options, but again, it’s a test of Ubuntu Hardy as it is offered to the world against Windows XP SP3 as Microsoft ships it.
Oh, thanks for taking your time and doing a great test, btw.
I can’t believe this: two pages already and people are attacking the guy. The guy’s benchmarks aren’t the issue. The issue is what you define as performance; it’s all very well testing whether you can throttle the system and produce the desired result, but if throttling the system means the whole system is locked up whilst a task is being completed, it’s hardly going to be useful to the end user who might want to do something whilst waiting for that task to complete.
When designing a scheduler, you either have throughput, responsiveness or guaranteed time (aka real time). You choose one, and something else suffers. If you try to do what Microsoft does – you end up with a highly complex, convoluted system that attempts to be everything to everyone. This drives up complexity and all the things that go with it. No use talking about a system that can do it all but is bloody unstable to the point of being useless.
In regards to Auxx, stop being so pig-shit-ignorant. The purpose of a desktop operating system is MAXIMUM RESPONSIVENESS TO THE END USER:
And you’re a frigging moron for not taking the brain power to actually work out what the hell CFS actually is, and the compromises being made so that you, Joe Annoying User can have that balance between throughput and responsiveness. But of course, keep prattling on over things you have no idea about.
And here you are being pathetically uneducated to the point it is gone past laughable to being bloody annoying. Ubuntu is doing exactly what a desktop operating system is meant to do; balancing throughput and responsiveness. What the hell you’re going on about servers for, god only knows, because it has nothing to do with the conversation.
The last technical conversation you were involved with, you were chewed up and spat out by numerous posters – and you’re making a jackass of yourself again.
What stays longer in the end user’s mind is the actual waiting time, or perhaps the perceived speed.
How long do you have to watch the hourglass?
I agree with you on what WE the users want, but Ubuntu precisely fails at that. Responsiveness to the user is lower than with Windows’ “convoluted” system.
Maybe desktop oriented distros should patch the scheduler to do what the users want it to do instead of being potentially useful for some freak background scientific calculations or Internet services.
Windows has a switch for Linux-style unresponsive mode (it’s under the computer’s advanced settings).
Firefox is slower on Linux; hell, the Firefox GUI is less responsive on Linux than on OpenBSD (!), and that’s saying something, since OpenBSD is a server-oriented OS that favours security over performance and desktop features. You can blame the background Internet orifices, I mean services, that most certainly are enabled by default in Linux, but even if it came with a strong AI service, the GUI should always have a large enough chunk of processor time reserved so as never to be unresponsive.
I will give up a 20% reduction in compilation time just to have a responsive GUI, any day.
The multitasking test was completed faster on Linux. What does this mean? It means that the programs got more CPU time. From where could they get that extra CPU time? From the task switches.
The biggest factor speed-wise in a scheduler is how often it switches tasks. The fewer task switches, the more CPU time for the programs. If there are more task switches, CPU time is “wasted” on switching instead of being used by the programs.
What does this mean? It means that if a multitasking benchmark completed sooner, then it most likely used fewer task switches.
Is that good, or is it bad? For desktop use, it’s really bad. If this is about desktop performance, Windows XP should win that particular test, because it obviously uses more task switches, which is good for desktop performance (=responsiveness).
When I compile something in a terminal, and try to switch another window, then I want to do that. I don’t want to wait one more second on GCC. That is desktop performance.
> Ubuntu is doing exactly what a desktop operating system is meant to do; balancing throughput and responsiveness
So when Linux is both slower (see the Blender test) and less responsive (just try it, or see the multitasking test), that’s balancing speed and responsiveness?
Besides, that’s only your opinion. My opinion is that responsiveness is the one and only desktop goal. Only when responsiveness is good enough can speed be prioritized, but it must never, ever interfere with responsiveness.
You are arguing your claim totally wrong. Spending more time doing task switches doesn’t improve responsiveness; it only means the applications have even less CPU time overall. Imagine two systems, one which spends 5% of its CPU time doing task switches and another which spends 10%; system 1 leaves 95% for the applications and system 2 leaves 90%, no matter how many task switches each one does.
The fact is, the more time the system spends on task management instead of running the applications, the less time the apps have to respond to user input.
No matter how you put it, an increase in the number of task switches reduces the average time it takes before a task gets the cpu. This means it can react to new input with a shorter delay. That’s responsiveness.
Sure, with less task switches the tasks get a bigger share of the CPU, but it gets its share in big chunks with a lot of time in between (this time is when the other tasks get their share). During that time it can’t take user input, which results in a loss of responsiveness.
The only exception is a system with only one task, where less task switching improves responsiveness. But if you have only one task, then you don’t need task switches.
In my opinion, loss of responsiveness is never acceptable under any circumstances. Sadly, major OSes think it’s completely OK, and in fact an excellent idea, to regularly prevent the user from using his computer for up to a second (in bad cases) or even longer.
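To put some rough numbers on this disagreement, the overhead and the latency effects of the timeslice length can be estimated separately. A back-of-the-envelope sketch in C, assuming a 5 microsecond cost per context switch and 4 runnable tasks (both numbers are illustrative assumptions, not measurements from the benchmark):

```c
/* timeslice length vs. switching overhead and worst-case scheduling latency */
#include <stdio.h>

int main(void)
{
    const double switch_cost_us = 5.0;   /* assumed per-context-switch cost  */
    const int    runnable_tasks = 4;     /* assumed number of ready tasks    */
    const double slices_ms[]    = { 1, 10, 100 };

    for (int i = 0; i < 3; i++) {
        double slice_us   = slices_ms[i] * 1000.0;
        double overhead   = switch_cost_us / (slice_us + switch_cost_us);
        /* worst case, a newly ready task waits for the others' full slices */
        double latency_ms = (runnable_tasks - 1) * slices_ms[i];
        printf("%6.0f ms slice: %.3f%% switch overhead, up to %6.0f ms wait\n",
               slices_ms[i], overhead * 100.0, latency_ms);
    }
    return 0;
}
```

The overhead side of the argument only starts to matter when the slice approaches the switch cost, while the worst-case wait grows linearly with the slice length – which is why desktop-oriented schedulers lean towards shorter slices.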
You do realize, I hope, that task switches are measured in milliseconds? Increasing the frequency of task switches will not get you a more responsive-to-human-input environment; other factors (such as scheduling algorithms) completely swamp that factor.
Nor is it reasonable to assume that an environment that runs slower must be switching tasks more frequently. It is much more likely to be spending more time running non-interactive tasks such as virus scans, handling system tasks such as network I/O, or dozens of other routine matters.
My personal experience on computers with both XP and Ubuntu installed has been that Ubuntu is noticeably more responsive, principally because XP spends so many cycles scanning for malware (using McAfee and Norton Defender). However, how you configure both environments will have a great impact on how responsive the environments “feel”.
You should use what you like, of course.
My experience shows that Ubuntu 8.04 is more responsive than XP. That is one thing I notice when switching from XP to Ubuntu. While some heavy task is running, making my processor really hot, I can switch applications and desktops easily and still do something with them.
It really surprised me, because if that happened in XP, switching applications would give me blank white windows, and I would have to wait until XP could redraw them.
The multitasking test has the right conclusion!
From my experience with running applications (for example FE-Codes like ABAQUS or EXCITE), Linux is much more responsive on heavily loaded to overloaded systems than Windows XP. At least this is true for RHEL3.x and XP SP2.
When you run an EXCITE process on a 4-core Linux system and tell it to use 4 CPUs, you will notice somewhat less responsiveness, but you can still work on the machine. Renice the EXCITE job, and you will notice almost no change in responsiveness compared to an unloaded system.
Do the same on XP, and the computer can barely run a web browser with decent responsiveness.
This is just my observation when I am working for my employer. I have no measurements to back that up with numbers, but it feels like Linux loses 10 – 20% of its responsiveness, and XP loses 50 – 70%. It is consistent across several Linux and Windows machines.
What I do now: I just run the job on a Windows machine with 3 CPUs, and lose some time because of it.
When Windows grinds to a halt like that it’s usually because of the different (and obviously dreadful) policy when it comes to memory caching and swapping. As soon as something is swapped to disk, you’ve basically lost the performance battle. Linux seems to excel here, by not swapping to disk so quickly.
At this very moment I’m running a while (true) loop that takes as much CPU time as it can get, on an old single-core processor.
There is almost no loss of responsiveness. I can browse the web without noticing anything. Redrawing minimized explorer windows takes probably 0.2 seconds more. Minimizing and restoring Opera is as fast as normal.
I haven’t tried this on Linux, but it can’t possibly be better, because XP is so good in this matter.
Of course doing background tasks will take more time, but I don’t care as long as I can do my foreground tasks.
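For anyone wanting to repeat that experiment on Linux, the load generator is about as simple as programs get. A minimal sketch (start one instance per core; the nice invocation in the comment is optional and just shows how one might deprioritize it, whereas the post above ran it at normal priority):

```c
/* spin.c: burn CPU until interrupted, to test desktop responsiveness under load.
   Build: cc -O2 -o spin spin.c
   Run:   ./spin &            (or: nice -n 19 ./spin & to run it deprioritized) */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    volatile unsigned long counter = 0;   /* volatile keeps the loop from being optimized away */

    fprintf(stderr, "spinning as pid %d, Ctrl-C to stop\n", (int)getpid());
    for (;;)
        counter++;                        /* consume all the CPU time the scheduler hands us */

    return 0;                             /* never reached */
}
```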
Regarding the Blender benchmark — did you enable multiple threads?
On Blender 2.45, only a single-threaded render pipeline is enabled by default, and unless you’re familiar with Blender, you may not know how to turn on multi-threaded rendering. Makes a significant difference here.
Download & run the test with the new Blender 2.46, which has ‘auto’ multi-threaded rendering and see if that changes anything.
I do not understand why this is considered a mistake – the comparison is between Ubuntu 8.04 and Windows XP SP3, not between Linux (the kernel) and Microsoft’s systems.
A kernel recompile is not something a normal user does, any more than they tune Windows for better performance. I think it is a good comparison because it looks at the Ubuntu 8.04 defaults and the Windows defaults.
Of course we must expect somewhat different results when using different hardware or testing under various conditions, but the comparison under the current conditions seems objective.
It’s certainly worthwhile that a discussion dealing with OS performance is gaining momentum, whether the results displayed in this particular thread are sound or not.
These results show that software that is more optimized for one platform performs better on that platform.
Thus one can probably assume that both operating systems perform very similarly, such that the applications themselves are more often the bottleneck…
To make XP perform with scheduling and caching a bit more like Linux you just tell it to give all processes equal CPU quanta and change the caching. It’s quite easy and takes maybe 30 seconds and is not for the ultra-geeky only.
Check out my website (http://www.recoverymonkey.org) for an in-depth article on the subject.
For Linux he should at the very least use the proper kernel and possibly recompile the apps for the CPU in question. Obviously not good for non-techies.
I’d have done the same tests with Vista and 2008 Server just to be thorough.
D
To make XP perform with scheduling and caching a bit more like Linux you just tell it to give all processes equal CPU quanta and change the caching.
AFAIK the whole point was to compare _default_ installations, without arcane modifications that no Average Joe would do. Trying to crank as much performance as possible out of either system would need a totally different kind of review and point of view.
I am going to start installing XP, Mandriva and Gentoo on my laptop later on today and shall conduct a similar test tomorrow or perhaps the next day. The only thing I’ll change though is that I won’t turn 3D bling on under Linux since XP doesn’t have that, and I will turn Beagle off. We’ll see what results I get.
It would still be academically useful. I’ve seen HUGE differences in I/O doing what I suggested above. Those tweaks are NOT arcane, are well-documented and are far easier to enable than anything one can do on Linux.
Indeed, for pro audio apps, FAQs recommend you set the system that way, for instance.
It all depends on what you’re doing.
NO system behaves optimally for ALL workloads.
Closest I’ve seen is Windows 2008 Server.
D
Thank you, I was going to post the same thing. Windows NT’s scheduler is actually very configurable.
You have a choice of:
3 foreground priority boost levels
Variable or Fixed quanta (timeslices)
Long or Short quanta
And any combination of the above.
The default for XP is to give the foreground app a 2x quanta boost with variable short quanta.
CFS tries to be fair through exact timing and clever algorithms, but XP can be fair with strict round robin policy.
Personally, I like the way NT hard codes the mouse and audio into the scheduler so they rarely skip. CFS will cause your mouse and music to skip under load.
Also, XP’s foreground boost makes sense for interactive reasons. A while ago there was a patch in CFS to add a Xorg priority boost in the scheduler.
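The knob behind most of this on XP is the Win32PrioritySeparation registry value, which encodes quantum length, fixed vs. variable quanta, and the foreground boost in its low bits; the Control Panel “Programs vs. Background services” switch toggles the same value. A hedged sketch of changing it programmatically – 0x18 is the commonly cited “background services” style setting (long, fixed, equal quanta), but treat that value as an assumption and verify the bit encoding against current documentation before relying on it:

```c
/* set_quanta.c (Win32): switch XP to equal, fixed quanta for all processes.
   Requires administrator rights; 0x18 is an assumed "background services"
   style value -- check the documented bit encoding before using it. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD value = 0x18;

    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                            "SYSTEM\\CurrentControlSet\\Control\\PriorityControl",
                            0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
        return 1;
    }

    rc = RegSetValueExA(key, "Win32PrioritySeparation", 0, REG_DWORD,
                        (const BYTE *)&value, sizeof value);
    RegCloseKey(key);

    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegSetValueExA failed: %ld\n", rc);
        return 1;
    }
    puts("Win32PrioritySeparation updated.");
    return 0;
}
```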
Most of these apps are Linux applications ported to Windows. The problem with that is that when you make an application for Unix/Linux you are working with a different mindset than when doing it for Windows, and vice versa.
Let’s use GIMP performance as an example. GIMP for Linux uses GTK, which has had a lot of development towards X optimization, versus GTK for Windows, which works more as a wrapper over the native Windows APIs. So GIMP works at a lower level on Linux and at a higher level on Windows.
>> The tests were performed on the exact same machine, except for one tiny detail: Windows XP was installed on a SATA disk, while Ubuntu had to settle for an ATA one. Interestingly, Ubuntu still outperformed Windows XP on IO intensive tasks.
I don’t see how having two completely different hard disks could be considered a ‘tiny detail’. Curiously, the benchmarker has only seen fit to specify the brand, capacity and interfaces used – nothing about rotational speed, cache, seek time, etc.
If for some reason the Ubuntu system didn’t support the SATA drive (which seems unlikely), why didn’t he just use the PATA drive for both?
I highly agree. I had a laptop that I used to run large virtual machines (Parallels). Eventually, they outgrew my internal hard-drive, and I put them on my external USB 2.0 hard-drive. I expected performance to drop significantly. I was surprised when it went up. Then I thought about it – 5400rpm internal vs. 7200rpm external. That explained it.
One thing a lot of people miss here:
Those were stock installations of both Ubuntu and Windows. As far as I remember, Windows XP is compiled for i386 – the same goes for Ubuntu. Those were stock installations, the versions were specified, and those are well-defined test environments.
I wonder whether he considered using `nice’ though …
Now, regarding RAR: I am very sure that RAR uses the same algorithm under both Windows and *nix. I do not know how incompetent you would have to be to write an unportable compressor/decompressor in any language whose standard library supports bit logic and file IO.
I found the post very informative and quite representative of most SP3 and Ubuntu machines. Few people consider architecture-native software or messing with nice [granted, I have really only used it in extreme situations].
This benchmark especially shows where developers and distributions might want to start profiling – something very often being ignored/forgotten.
I’d like to see him compare a custom kernel to the default kernel and to XP. I’m mostly curious to see if there are any speed advantages to a custom kernel versus the default one. When I upgraded the kernel on Ubuntu 8.04 to 2.6.25.x it would no longer log into GNOME. Luckily I had KDE 4 installed and it logged into that just fine. It’s a weird problem.
Users of OpenGL (eg. Blender etc) may be interested in the very recent article comparing driver performance between Linux and Windows using the ATI driver 8.47.3.
Summary: Linux comes out on top in more of the benchmarks. Very well done to AMD for turning things around!
http://www.phoronix.com/scan.php?page=article&item=amd_firegl_v8600…
Note that Ubuntu 8.04 uses fglrx driver version 8.47.3. I noticed that on my HP nw8240 laptop with a FireGL V5000 Mobility (essentially an X700) the frame rate in glxgears went from 1400 to 4030. Plus the rate is the same whether logged in as an ordinary user or as root (it used to be faster when running as root). Wow!
My machine with an Nvidia 8800GTX always ran quick under Linux.
If you are on an ATI card and Blender runs slowly, don’t whinge here – it just shows you haven’t configured your workstation properly (duh!). Do a simple check by running ‘glxinfo’ and ensure it contains an output line with “direct rendering: yes”. If it doesn’t, then figure out how to get OpenGL running in hardware, since you are currently rendering in software.
I’m happy to help you out with this if you want (just reply to this message, and all can see what needs to be done).
That has always been the case with X11 – there are people here on osnews.com who like to perpetuate the urban myth that X11 is a performance liability. The only evidence offered for such a preposterous assumption is “windows tear” – ignoring the fact that tearing is a by-product of synchronisation and buffering issues rather than anything to do with ‘performance’, performance being throughput. Heck, I remember back when I used to frequent COLA (comp.os.linux.advocacy), even in the XFree86 days it was regularly shown that when X11 was benchmarked, it outperformed Windows without any specialised tweaking involved.
I’m running Solaris SXCE with an Nvidia Quadro 570 FX (Mobile) with the latest drivers, and the responsiveness and snappiness are far higher than what I experienced when running Windows Vista Basic (which was the preloaded version on this Lenovo ThinkPad). With GNOME 2.22.1 installed (Vermillion B91), the stability, speed and snappiness are even better than before. Hence, I find it incredibly funny when I hear people go on about how great Windows is, and yet, if one were to use the metric of improvement of products on the same hardware, each Microsoft release is a further regression.
I agree. I found the recent (Open)Solaris to be the most responsive on x86 hardware.
I believe the main advantage for Windows left now is the breadth of games. However, this is sure to change since Macs now sell 66% of retail sales over US$1000 (according to Slashdot and eweek, see http://apple.slashdot.org/apple/08/05/20/0146218.shtml). Office users don’t buy in this segment, but home users and gamers do. Once the Windows advantage in games has been diluted by Macs and consoles then I believe Microsoft’s previous dominance of IT will be quite weak.
I know a lot of students (and parents, etc.) today who are purchasing cheap laptops and desktops (hence the low number of US$1000 machines, and Apple’s dominance there) and going for games consoles instead. I can think of at least 30 people I know who settled on a $1000 laptop with Intel integrated everything, used solely for internet and university work, and bought a console with a decent TV for playing games. Not only does the machine last a lot longer, the software availability is large and, most importantly, you’re not on a treadmill of continuous upgrades simply so you can run a given game.
I think the other myth created by Microsoft is market share. Take New Zealand and the number of Windows machines that are merely used as dumb terminals: I’d say there are at least 30,000 desktops in the public sector where all the work is actually done on mainframes. The same can be said of banking in New Zealand; yes, there are thousands of desktops there too, but very few are actually using local resources – it’s all being done on a mainframe or a big UNIX server. So the one thing I always emphasise is the myth of Windows market share.
As for Macs, I think they’ll take off without any problems, with laptops leading the way. As the performance gap between desktops and laptops becomes smaller, people will choose a laptop over a desktop – for me, I have both, but even at home I prefer my laptop, as I can sit anywhere and use it; I am not locked to my desk. That being said, it’s going to be interesting to see what happens in the future. I think the best chance *NIX has for the future is to focus on supporting laptops very well. Make it *THE* laptop operating system – once you’ve done that, everything else falls into place.
but the theme of this thread is not speed per se but responsiveness and X is not helping in this area.
BeOS wasn’t famous for its kernel, VM or hardware accelerated graphics but the app server was designed well and that is why it was so responsive.
How is X not responsive? as I said, people use ‘tearing’ when resizing windows as an example, ignoring that it has nothing to do with speed and everything to do with poor synchronisation and double buffering – in other words, nothing to do with responsiveness.
If you are getting bad performance, nine times out of ten it’s crappy drivers. Case in point: I had crappy performance with Solaris using my ATI Radeon X300 video card; I pulled it out, replaced it with an Nvidia 8400, and voilà, perfectly responsive.
Mind you, quite frankly, if you’re going out and purchasing hardware and you want to run Linux, you’re silly to even think that ATI is a viable option, given the royal shafting ATI/AMD has given the open source community in the past – and the lack of quality drivers today.
poor syncing has nothing to do with speed but everything to do with responsiveness. Desktop users won’t tolerate it even if you could show a thousand benchmarks because it will *feel* slow. The OS then gets in your way and doesn’t feel ‘natural.’
Ubuntu 8.04 has a regression in the desktop kernel that is causing problems with multimedia apps. I think that could be the problem. I’ve installed the update from ubuntu-proposed and the difference is remarkable.
More info here:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/188226
Thanks heaps for that link Pikachu. Didn’t know about that (Ubuntu 8.04 is my work environment, so it makes a difference).
2 different drives, on 2 different busses etc.
Sure, the Linux one was faster in some tests and the Windows one faster in others, but using two different drives just makes the results unpredictable.
Windows software likely uses a better and more optimized compiler, whereas Linux software uses GCC, which is good but not as good as Microsoft’s Windows compiler.
The author claims some tasks are not hard disk intensive – which I find really strange. Of all the tasks one might do, virus scanning, compressing files, and possibly others mentioned are some of the more hard disk intensive tasks one might perform. Most software like this is written to read a chunk of data, process it, read some more, etc. in a linear fashion, so hard disk speed plays a big part here: although it doesn’t read much data at once, it reads a LOT of times and doesn’t do the CPU-intensive parts until it has finished each read.
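That read-a-chunk-then-process pattern is easy to picture in code. A minimal sketch (the checksum here just stands in for whatever scanning or compression work the real tool would do):

```c
/* linear read-a-chunk, process, repeat pattern */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    unsigned char buf[64 * 1024];
    unsigned long checksum = 0;          /* stand-in for the real work */
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        for (size_t i = 0; i < n; i++)
            checksum += buf[i];          /* scanning/compressing would go here */

    fclose(fp);
    printf("checksum: %lu\n", checksum);
    return 0;
}
```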
Would I think differently if Linux were faster in every test? Actually, I don’t think there is any fair way to compare two different OSes. You can do subjective tests, but it’s near impossible to develop the same application to run natively on each OS using native GUI code, etc. In nearly all cases, an app is developed for one OS and then ported to another, often adding another layer in order to port it. This is the case with GIMP (the Windows version) – on Linux it runs on GTK, but on Windows it runs on GTK, which in turn runs on Win32.
So although many apps LOOK the same on both OSes, and sometimes may even look like native apps, the truth is that it’s impossible to compare the two. One will always be faster, and the OS may have nothing to do with that.
I think the most telling part is that even though Linux was slower in most tests, it was quicker in the multitasking test, which matches my own subjective experience using Windows XP compared to Linux. Different apps, different uses, but Linux just feels more responsive. You can’t measure that, though.
Interesting article, but what’s the point in comparing a recent kernel (Linux) with a seven year old kernel (XP)? Wouldn’t a better test pit Linux against Vista? Or against Windows Server 2008?
Honestly, Windows 2008 can be tuned easily for workstation service (that’s what I’m running on the machine I’m posting this comment from) and from my experience, it uses less memory than Vista and is more responsive. Even under extremely heavy loads, this system never becomes non-responsive nor does the mouse or audio ever skip.
Need I say more?