Here is the continuation of a series of comparison tests that is bound to cause a huge amount of controversy: Workstation Benchmarks: Windows 7 vs. Ubuntu Linux. There are performance wins and losses on both sides of the fence, but Ubuntu compares very well with Windows 7, and no doubt these tests indicate a much closer performance comparison than most people would have expected.
In my experience, some techies put too much emphasis on benchmarks. They are vital in the server space, but of little interest to personal computer users unless the differences are such that they are readily visible without formal measurement.
Much more critical to PC users are issues like continuing support, cost, forced upgrades, vulnerability to malware, stability, system longevity, and so on.
One fascinating criterion is system requirements. Ubuntu runs on any old P-IV with 512 MB of RAM or even less, while Windows 7 requires the same hardware as Vista: several GB of RAM, with a multicore processor preferred. Traditional benchmarks measure only speed, but a better measurement is performance per resource set. In this regard Ubuntu beats any recent Windows version hands-down.
You ‘forgot’ responsiveness. I think that is a really important one. That used to be a big problem for the Linux desktop, but it has improved loads; I can’t remember anyone complaining about it anymore.
PS: I wonder about the OpenSSL test, specifically whether they just used an OpenSSL binary from Cygwin (a Unix/Linux emulation layer) on Windows.
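A quick sanity check would be to ask the binary itself how it was built; for example, on either platform (assuming openssl is on the PATH):
# print version, build platform, and compiler flags of the openssl binary in use
openssl version -a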
Responsiveness still sucks; try copying a few gigabytes of small files and watch Ubuntu Lucid lock up and become almost unusable. The problem is that there is no synergy between the kernel, X, GNOME, and Nautilus, so the I/O scheduler starves them of bandwidth. Switching to the deadline I/O scheduler doesn’t really help either.
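For anyone who wants to try the scheduler switch themselves, a minimal sketch, assuming the drive is sda (the sysfs path is standard on recent kernels):
# show the available schedulers; the active one is shown in [brackets]
cat /sys/block/sda/queue/scheduler
# switch this drive to deadline until the next reboot
echo deadline | sudo tee /sys/block/sda/queue/scheduler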
5 minutes for 12 GB and 1600 files on my Arch Linux box with Linux kernel 2.6.35… Ubuntu is not the only distro in the world, only the best-known one for now.
Uh? The GP post wasn’t about the time taken but about whether the machine stayed comfortable to use during the copy.
Apparently you totally misunderstood his point.
Due to the I/O scheduler in Windows, you can watch a DVD without stuttering while installing an OS in VirtualBox. In Linux on the same machine, copying a few files is enough to make it unwatchable.
This is a throughput vs. latency thing, and Linus has made it very clear where the Linux priorities lie.
Odd, I’ve had the exact opposite experience. You could open up Solitaire and have a movie playing and it’ll choke both apps under Windows 7. Under Linux, I could watch a movie, encode another one, copy huge files, and generally have a crapload of things open.
You could say it’s the hardware, but I have a Core 2 Quad with 4 GB of RAM and an Asus P5Q. Not the newest tech on the market, but certainly not ancient.
Sounds to me like some people just have faulty SATA controllers or bad drives. Linux sometimes shows things that Windows does not.
Funny story: a friend of mine complained that her computer was really slow. So I went to check it out… it wasn’t just slow, it would take 15 minutes for the right-click menu to come up in XP. So I installed Fedora on it. Fedora actually complained about disk I/O errors. It turned out her hard drive was failing in a horrible way. With a new hard drive it worked properly again.
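For anyone who suspects a dying drive, the kernel log and SMART data usually tell the story long before a reinstall does; a minimal check, assuming the drive is sda and smartmontools is installed:
# look for I/O and ATA errors the kernel has already logged
dmesg | grep -iE 'i/o error|ata[0-9]'
# quick pass/fail health verdict from the drive's own SMART data
sudo smartctl -H /dev/sda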
As always, it comes down to the best tool for the job. Windows is the best of the two for video games, and Linux is the best for everything else. For the record, copying things in Nautilus is not really the same as copying things on a command line or with rsync, though it has gotten a lot better in the last few years for GNOME/Nautilus.
Yep, I run Linux on a number of boxes. I needed to test Win7, so I had it loaded on my netbook; I was surprised, but it ran rather well. Of course Aero was turned off. Anyway, I loaded up Linux, ran all the gee-whiz-bang 3D/transparency effects, and ran multiple apps that would choke Win7. With Windows you also need to run many resource-sucking programs like antivirus; it all adds up.
Bottom line: any of the *nixes are light years ahead of Microsoft, especially in regard to threading and SMP. Watch a Windows system running multiple CPUs and/or cores and you’ll see the load stair-stepping, whereas the *nixes balance them out rather well. Sun Solaris handles threads extremely well, about the best that I know of.
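If you want to watch that per-core balancing (or stair-stepping) yourself on the Linux side, one quick sketch, assuming the sysstat package is installed:
# print per-core CPU utilization once per second
mpstat -P ALL 1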
Well, I run multiple distros on a bunch of machines: CentOS on a quad-Opteron server, Ubuntu Lucid on my desktop, Ubuntu Karmic Netbook Remix on my eeePC 1000, etc. If I dd /dev/zero to a file, or copy some files using Nautilus or cp, the system will start to become unresponsive: windows will gray out, the mouse will stutter and then freeze, and launching any new apps will take forever. I have looked around and other people seem to have the same issues, even coming up with hacks like “ioReniceD” to ionice background processes. What needs to happen is for the I/O scheduler to actively adjust priorities to ensure fluid interactivity with the GUI, but seeing that Linux is primarily a server OS kernel, I don’t see this happening anytime soon.
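Until the scheduler does that itself, the manual workaround is to demote the bulk I/O by hand; a rough sketch of what tools like ioReniceD automate (the path and PID below are made up, and the idle class only takes effect under the CFQ scheduler):
# run a copy in the idle I/O class so it only gets disk time when nothing else wants it
ionice -c3 cp -r /data/photos /backup/
# or demote an already-running process by PID
ionice -c2 -n7 -p 12345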
I was talking generally, but was thinking of a specific situation from about 1.5-2 years ago when I was doing that exact thing. The hardware was a Core 2 Duo HP laptop with 4 GB of RAM and a crappy stock hard drive. It could have been loads of issues, but this was the exact thing that Con Kolivas has been raging against for years on LKML, so I figured it was that.
Did you check the system logs before trying to install a new OS?
Would 3000 files of 1 MB each suffice as a test? Because copying that to an external USB drive while watching an HD movie seems to work fine.
The framerate drops a bit at times, but it’s still watchable and the desktop remains responsive. Considering my crappy computer already struggles to play HD movies on its own (CPU around 90%), I don’t think anything is really being starved by the copy process.
I’m talking about SATA/PATA bandwidth. USB 2.0 tops out at 480 Mbps (60 MB/s theoretical, and noticeably less in practice), so it’s not really going to max out your internal drive to the point where your system becomes unresponsive.
I see.
So what about the script that creates those 3000 files? It writes to the HD, no USB involved:
# create 3000 files of 1 MB each on the local disk
for ((c=1; c<=3000; c++))
do
    dd if=/dev/zero of=test$c.bin bs=1000000 count=1
done
Or copying all those files to another folder on the same HD?
I can still watch an HD movie with some framerate loss at times (but no audio skips, and non-HD movies play just fine), and everything stays responsive.
Anything else I can do to try bringing my desktop to a crawl? I only have one IDE hard drive, so I can’t test copying between two of them.
What do you want me to say? I have no idea how your system is set up; all I know is that on my systems I notice GUI sluggishness caused by heavy disk I/O under Linux, and that this is a known issue with Linux’s server-centric I/O scheduling.
I was just trying to reproduce the problem here, as I’ve heard of it before but never experienced it myself; that’s why I was asking for something else I could try.
If it were just a scheduler issue it should be consistent for everyone (I’m just running Lucid Lynx, not some tweaked distro), so there has to be something more to it.
I’ve just checked, and it turns out my HD is SATA; I’m not sure if that’s relevant to this issue.
Don’t forget these benchmarks are often designed so that the side you want to win, wins. These systems use different algorithms for handling different sorts of problems: some things Windows handles better, others Linux handles better. If you want Windows to win, you run benchmarks that favor Windows. If you want Linux to win, use benchmarks that favor Linux.
Using these benchmarks to determine whether one of two different systems is better than the other is the wrong approach…
What should have been done was to run the same set of benchmarks on Windows XP, Vista, and 7 against, say, Ubuntu 8 through 10, and see which has the best improvement over its previous version, not which one is better.
Yeah, this reminds me of when sites were doing JavaScript benchmarks comparing Chrome to Firefox and concluding that Chrome was faster. Well, Chrome didn’t have any sort of adblock extension (I think it does now, though), so that pretty much made Chrome irrelevant in my view.
My point here is that, unless something is really slow, benchmarks are probably the very last thing I’m going to care about. You can do benchmark comparisons with IE9 vs whatever browser you want, but there’s no way in hell I’m ever using IE9 as my main browser, unless they make some massive improvements to the overall feature set.
Windows 7 doesn’t need “several GB of RAM”; it’ll run on 1 GB just fine. And have you tried Ubuntu 10.04 on only 512 MB? It’ll run, but no better than Win7.
I tried Windows 7 (64-bit) with 1 GB of RAM. Yeah, it’ll run… barely. Install other programs and a couple of services (antivirus, anyone?) and do moderate Web browsing, and it’ll quickly grind to a halt. Hell, forget the extra services and antivirus: open enough tabs and Windows will start paging and stalling.
I would not suggest *anyone* use Windows 7 (at least the 64-bit version) with 1 GB of RAM; 2 GB at the very least, possibly 3. Most Linux distributions (both 32- and 64-bit) run quite well in 1 GB or less in my experience, some of them in much less.
Win7 and Vista work fine on one gig, and it doesn’t matter if the OS is 32- or 64-bit.
But if you have a slow hard drive, they will crawl from the swapping.
Well… at work we have a 3 GiB computer with Windows 7, and it takes a long time just to open a session. Maybe because this computer is a year old, bought something like 3 months before the Windows 7 release…
So 1 GB? Are you kidding?
Ever thought that the problem is your IS staff loading crap onto it, which slows down the operating system? If I had a dollar for every stupid setup I saw in an enterprise, I’d be able to purchase Microsoft ten times over. The IT industry is filled to the brim with well-presented morons who look the part but are unable to do the most basic of tasks without making a royal pig’s breakfast of it.
On every single system: OS + SPs + updates + Microsoft Office, and that’s all…
Really?
lol!
Maybe there is something wrong with the computer?
I have Win7 running on an Athlon XP 2800+ with 1 GB of DDR, and it works fine. It’s no speed demon, but it’s adequate. It takes about 1.5-2 minutes to boot up, and then it’s good to go.
Truth is, it makes no difference between 1 GB and 3 GB just to open your session.
You most likely have another problem (like tons of crapware or a poorly configured install).
As much as I like my Linux-based OS (which is not Ubuntu), Win7 is a pretty damn good OS as well.
I could say that many of the settings aren’t polished and the command line sucks… but other parts are good (and settings on Linux aren’t nice at all via the GUI, either).
I manage 1100 Win 7 computers at work. The max spec is 4 GB with quad-core processors, on THIRTY-SEVEN out of that thousand.
The average spec is 1 GB with a single-core processor, which covers approximately 600 of the thousand. The rest are laptops/tablets with 2 GB and dual-core processors, or netbooks with Atom processors.
The only problem I have with the lower-spec machines is that some don’t have WDDM drivers for their Intel graphics, which stops things like Windows Live Movie Maker from running.
The image is clean and fast and runs perfectly well on a workstation. Windows 7 isn’t as bad as you think it is. It all boils down to who builds your image in the end.
Really, does anyone still look seriously at their “tests”?
Yes. At least the FreeBSD developers do, of course with some caveats, usually the ones mentioned by Phoronix themselves. One should take care not to exaggerate the meaning or value of benchmarks, but nonetheless the benchmarks carry some validity.
Perhaps you prefer benchmarks from Microsoft-sponsored “magazines”?
EDIT: Fixed an incorrect inflection. Darn that reversed English grammar…
Indeed. Their benchmarks are weird.
I was about to ask the same question. A long time ago, Phoronix did some good performance comparisons. Recently, they basically post a number of charts and describe them without offering any interpretation or analysis. I see one bar at 1210 and another at 1211, and they diligently describe what I can already see (i.e. “In this test the difference between X and Y is negligible”). Of course, you can read that kind of sentence in most reviews, but those also offer some content, unlike Phoronix “tests”, which are basically a series of graphs with 90% redundancy in between.
Worse still, they stopped providing the background information necessary to interpret those charts. So apart from the redundancy of their “articles”, the graphs themselves are completely useless. Providing a benchmarking tool is all fine, but once they finished working on their Phoronix suite, their “tests” became little more than advertisements. Usually they just claim “we used the defaults”, but a casual reader has no way of knowing what those defaults are without installing each OS and digging deeply into the settings. They once compared a number of distros, among them Arch, saying they used the default install of Arch, which is kind of ridiculous seeing how Arch has no defaults at all (no default drivers, no default sound system, no DE, etc.).
I found their recent “tests” basically worthless (you know, it does matter what mount options you use when testing filesystems), and I removed them from my bookmarks (I used to read them quite often). Looking at graphs that are basically useless and reading the redundant descriptions is such a waste of time. Not to mention their sensationalist headlines, which drive me up the wall (LOOK, WE FOUND A HUGE LINUX CATASTROPHE!!!, or something like that, about a BETA kernel).
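To see how much mount options alone can skew a filesystem benchmark, it’s enough to remount the same ext4 partition two different ways between runs; a hedged example, with the device and mount point made up:
# relaxed settings: no atime updates, no write barriers (faster, less safe)
sudo mount -o remount,noatime,barrier=0 /dev/sda2 /mnt/test
# conservative settings: atime updates and barriers on (slower, safer)
sudo mount -o remount,atime,barrier=1 /dev/sda2 /mnt/test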
Those benchmarks seem to agree with the Wine benchmarks I see every now and then.
Win32 apps in Wine see ext3/ext4 perform significantly better than NTFS, but because Windows and Linux test more or less the same for 3D performance, the overhead of Wine’s DirectX-to-OpenGL translation layer causes a big hit.
When will they have ARM vs. Atom comparisons? I guess they could use a BeagleBoard as the ARM board to compare against.
Out of the box, ext4 is dog slow compared to pretty much everything else. If there wasn’t something strange with the test, that speaks pretty badly of NTFS.
Why do these articles always degenerate into “my OS runs on blahblah” pissing matches? You always have some smug 12-year-old boasting about how his preferred Linux install runs in X amount of RAM, and some other troll-feeder going on and on about Windows running in X amount of memory/CPU. It’s never-ending, and neither side is going to believe the other. So give it a rest.
I’m even going to refrain from saying the specs of the machines I run various OSes on :p
I have not yet RTFA, but I want to say one thing before I do: another name for “Workstation Benchmarks” is “a waste of time.” If you know anything about performance comparisons, or have ever attempted to prove that one thing done on a computer is more efficient than another, it will be very clear why.