Following the previous benchmarking article comparing the two classic virtual machines, Hernán Di Pietro has written the second part, this time with WinXP as the guest. Check out the new results.
I have to question the validity of any benchmark that shows a PC under virtualisation performing better than the host PC. To give credit to the author, he did mention at the end of the article that WinXP+Virtual PC “feels” faster and smoother, which for me would probably be more important than any benchmark.
Anybody know if someone’s done benchmarking under Linux? Like VMWare vs. Win4Lin?
Yes, this is possible because of caching.
The OS disk cache of the host OS works as an additional cache level for the virtual hard disk in the guest OS.
Most ATA hard disks today have around 2-8 MB of cache. Using virtualization you may effectively get 200 MB or more of in-memory cache.
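You can see the effect with a trivial test like the one below (just a sketch, assuming a POSIX system; the file name is only an example, and you’d want a file larger than RAM for honest numbers):

```c
/* Minimal illustration of why "disk" benchmarks run inside a VM can be
 * misleading: the second read of the same file is typically served from
 * the host's page cache and is dramatically faster than the first.
 * Assumes a POSIX system (gettimeofday). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double read_all(const char *path)
{
    static char buf[64 * 1024];
    struct timeval t0, t1;
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }

    gettimeofday(&t0, NULL);
    while (fread(buf, 1, sizeof buf, f) == sizeof buf)
        ;                                   /* read until EOF */
    gettimeofday(&t1, NULL);
    fclose(f);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile.bin";  /* example name */
    printf("first  read: %6.2f s (mostly from disk)\n", read_all(path));
    printf("second read: %6.2f s (mostly from cache)\n", read_all(path));
    return 0;
}
```

Point it at the file backing a virtual disk and the second number is roughly what an in-guest disk benchmark may end up measuring.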
Valid disk benchmarks shouldn’t include cache (except for the on-drive cache perhaps). If they do, the results are invalid and should be disregarded.
These benchmarks are useless. They do not give units! How is anyone supposed to compare their own benchmarks with them? They cannot!
I guess default settings (such as they are) would have to do… and why not include disk cache? I think that if the program can do it, DO IT, and run it both ways:
Default vs. Default
&
Best vs. Best
That way one could decide whether or not one would be inclined to “buy” said product(s).
He should have given units, yes, but it is not that hard to figure them out.
CPU tests usually use FLOPS.
GFX – FPS.
HD speed – MB/s
and so on…
Or you could download and run the benchmark tool from:
http://www.passmark.com
Up to a point, the rant against MS is valid. When MS acquired VPC, the company responsible for the port to OS/2 was told that there was no longer any future for OS/2 as a host OS for VPC, a complete volte-face from the attitude shown by Connectix. It would have cost MS nothing to let Innotek continue, but it did not fit in with MS’s policy of migrating everyone to Windows.
Enter twoOStwo. It seems that OS/2 will be added to the list of host OS systems asap. Now there is a new player in the market. Trying to kill off VMWare will not be enough.
The virtual PCs use different gfx drivers/hardware than the host. This might explain the discrepancy…
I use Virtual PC and like it. I was also happy with VMWare when I used it in the past. Certainly I am suspicious of Microsoft but I believe that emulators and run-time environments are not so difficult to implement that MS will be able to drive competitors out of this particular market. I have immense respect for programmers who can implement emulators since this is a difficult area of programming IMHO (I have a CS degree) but let’s face it: companies like Connectix and VMWare are not that large. Emulators can be implemented effectively by relatively small numbers of sufficiently competent programmers for relatively little cost. As long as there are enough people to buy the emulators, there should be enough of a market. Caveat: I admit I am currently running OS X as a host OS so this may affect people running Windows as a host OS more than me. I recommend emulating Windows unless you are a game player. At this point I view Windows as a legacy OS. The most interesting OS development is occurring elsewhere.
All comparisons of Win4Lin vs VMWare I’ve read concluded that Win4Lin is faster. I’m curious to see Win4Lin against VirtualPC. My guess is that Win4Lin will win…
How is Windows (an operating system that still is having development done to it) “legacy”?
Well, I concede that it is not “legacy” in that sense. I should have chosen a different word. What I meant was that Microsoft appears to be losing “mindshare” in the IT world. For instance, I started with DOS/Windows but slowly gave up on MS. Others seem to be doing the same. By comparison, NetWare is still in use and once enjoyed a dominant position but slowly lost out to MS. During the second half of the 90s, everyone agreed that MS was the future and they were right. I believe this NetWare-like downfall is now happening to Microsoft. It will still be in use for decades and still might be the majority OS but the bleeding edge is now elsewhere. As for Longhorn, I expect this to be vapor for a while. This has ranged far off-topic. I would like to see continued development in emulation so I hope MS is not able to kill competition in this area and I don’t think they will be able to.
Virtualization apparently *is* pretty difficult to implement correctly, considering that the only oss/free-software project to attempt it (plex86) languished in alpha for years and recently gave up completely.
kernelthread.com has a good introduction to the technology behind virtualization.
I thought Bochs was free? Are you referring to the license?
As I said in a previous post, I consider this field of programming to be difficult so there aren’t that many people around that can do it. What I meant was that I didn’t think it took many people, just a relatively small number of talented individuals. Now that I think about it, I think this is true of most areas of programming except financial programming maybe. Web programming is also easy but nonetheless is rarely done well. Lots of people can program, most cannot do it well.
Of course, Bochs also is very mediocre but I have gotten it to run DOS and a couple of Linux versions.
Bochs, from what I understand, is a full x86 emulator rather than a virtualization engine. I imagine that it’s somewhat more straightforward to implement an emulator since none of the guest code is run natively, whereas with virtualization you have to work out all the corner cases and architectural quirks.
I’ve never used bochs, so I can’t comment. What I *really* want is a PPC emulator for x86 that’s fast enough to run OS X.
Microsoft is rarely about the bleeding edge. Microsoft is about support for new hardware, and long-term installations. How often do you upgrade your Linux distro? Most have upgrades every six months (or even more frequently), with some notable exceptions. How long has Windows XP been out? Do you know why there hasn’t been a Windows XP 2004? It is because it was very featureful, and FINISHED (at the time) when released. There weren’t any plans to add more features later, like font anti-aliasing, or FireWire support. It was released when it was ready, and not before, so customers wouldn’t have to wade through a stream of updates. That’s why Microsoft’s patches/service packs are _almost_ always about fixes, not features. SP2 is the exception, because there is not another operating system release planned for probably two more years.
Can you name any Linux distribution that could compete with Windows, with only patches for four years? Easy installation of all software, easy hardware detection, easy installation, proper power management support for laptops, etc?
What we all actually need is just cheaper Macs so we don’t desire emulators to run the OS! Of course, they still take up room. I have 5 computers in my home office. Since I bought Virtual PC, I rarely boot most of them.
Anonymous: we’re off-topic again. You won’t find me defending Linux. I migrated to OS X precisely because of the problems you are illustrating. For what it is worth, Apple also has excellent automatic updates, as does AIX. In fact, AIX and HP-UX are really hands-free when installed properly. I run Windows at work and when I have to at home. This is usually for remote access to . . . you guessed it: work. Linux is good as a server and for experimental programming, however, and is therefore also useful as a virtual OS. Similarly, IBM sells a product that allows multiple virtual Linux servers to be run on a single big computer. This allows you to build and destroy servers for a variety of tasks: testing, automation, experimentation, etc. I guess as a programmer, I felt limited by Windows in some indefinable way and naturally preferred Linux. I switched to Apple only after they built OS X on a BSD foundation, and would not have switched if they had not.
Good grief, I must be bored. Usually I don’t even post to boards. Good day to all.
First of all, I’d like to add that Bochs is not a virtual machine, but an emulator, and a very slow one at that, because every single instruction is interpreted in C++ code.
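To illustrate what that means: an emulator’s core is essentially a software fetch-decode-execute loop, something like this toy sketch (nothing like Bochs’ actual source, which handles the full instruction set, flags, paging, devices and so on):

```c
/* Toy fetch-decode-execute loop, only to show why pure interpretation
 * costs many host instructions per guest instruction.  Purely
 * illustrative; it handles two opcodes and ignores flags entirely. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t eip;
    uint32_t regs[8];
    uint8_t  *mem;          /* flat guest memory, hypothetical */
} cpu_t;

static void step(cpu_t *cpu)
{
    uint8_t opcode = cpu->mem[cpu->eip++];   /* fetch */

    switch (opcode) {                        /* decode + execute */
    case 0x90:                               /* NOP */
        break;
    case 0x40: case 0x41: case 0x42: case 0x43:
    case 0x44: case 0x45: case 0x46: case 0x47:
        cpu->regs[opcode - 0x40]++;          /* INC r32 (flags ignored!) */
        break;
    default:
        printf("unimplemented opcode %02X\n", opcode);
        break;
    }
}

int main(void)
{
    uint8_t code[] = { 0x90, 0x40, 0x40, 0x90 };  /* NOP; INC EAX; INC EAX; NOP */
    cpu_t cpu = { .eip = 0, .regs = {0}, .mem = code };

    while (cpu.eip < sizeof code)
        step(&cpu);
    printf("EAX = %u\n", (unsigned)cpu.regs[0]);  /* prints 2 */
    return 0;
}
```

A virtualizer, by contrast, lets the CPU run almost all guest code natively and only steps in for the problematic cases, which is where the huge speed difference comes from.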
But back on topic…
Virtualization is not as easy as some might think, at least not on x86, because Intel basically screwed up. The IA-32 has one decent virtualization mode, namely V86, but that one only works for Real Mode and is used for the DOS box in Windows or DOSEmu under Linux.
Unfortunately there is no clean way to virtualize the Protected Mode of the IA-32, because a number of sensitive instructions (SGDT, SIDT, SMSW, PUSHF/POPF and friends) don’t cause a General Protection Fault when executed in user mode (Ring 3), the way truly privileged instructions do.
In the case of a virtual machine the GPF isn’t a bad thing; it’s actually needed to transfer control back to the Virtual Machine Monitor (VMM), which emulates the privileged instruction and thus makes the operating system think that it runs alone on its own machine.
Since these instructions do not cause a GPF when they should, code blocks have to be scanned before execution and the problematic instructions replaced with breakpoints, so that control can be transferred to the VMM. Of course the VMM has to keep track of which instructions it replaced in order to emulate them, and the scanned blocks have to be remembered as well, so that newly encountered blocks can be scanned for problematic instructions in turn.
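Very roughly, the scan-and-patch idea could be sketched like this (purely illustrative C with a toy opcode list; real monitors like VMware use far more elaborate decoding and binary translation):

```c
/* Rough sketch of "scan and patch": before a guest code block runs
 * natively, look for instructions that are sensitive yet don't trap in
 * Ring 3, and overwrite them with INT3 (0xCC) so the CPU bounces back
 * into the VMM, which then emulates the saved instruction.  The opcode
 * checks and data structures are illustrative only. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define INT3         0xCC
#define MAX_INSN_LEN 15

struct patch {
    uint8_t *addr;                 /* where the breakpoint was planted */
    uint8_t  saved[MAX_INSN_LEN];  /* original bytes, emulated on trap */
    size_t   len;
};

/* Toy decoder: recognizes a few problematic instructions
 * (SGDT/SIDT/SMSW group = 0F 01 /r, SLDT/STR group = 0F 00 /r,
 * PUSHFD = 9C, POPFD = 9D) and returns their length, else 0. */
static size_t sensitive_insn_len(const uint8_t *p, size_t avail)
{
    if (avail >= 3 && p[0] == 0x0F && (p[1] == 0x00 || p[1] == 0x01))
        return 3;                  /* opcode + ModRM, displacements ignored */
    if (avail >= 1 && (p[0] == 0x9C || p[0] == 0x9D))
        return 1;
    return 0;
}

static size_t scan_and_patch(uint8_t *block, size_t size, struct patch *patches)
{
    size_t npatches = 0;

    for (size_t i = 0; i < size; ) {
        size_t len = sensitive_insn_len(&block[i], size - i);
        if (len) {
            struct patch *pt = &patches[npatches++];
            pt->addr = &block[i];
            pt->len  = len;
            for (size_t j = 0; j < len; j++) {
                pt->saved[j] = block[i + j];   /* remember original bytes */
                block[i + j] = INT3;           /* plant the breakpoint    */
            }
            i += len;
        } else {
            i++;   /* a real scanner decodes the true instruction length here */
        }
    }
    /* When the guest later hits one of the INT3s, the VMM's trap handler
     * looks up the matching patch entry, emulates the saved instruction,
     * and resumes the guest just past it. */
    return npatches;
}

int main(void)
{
    uint8_t block[] = { 0x90, 0x9C, 0x90 };   /* NOP; PUSHFD; NOP */
    struct patch patches[8];
    size_t n = scan_and_patch(block, sizeof block, patches);
    printf("%zu instruction(s) patched, byte 1 is now %02X\n", n, block[1]);
    return 0;
}
```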
All these painful operations wouldn’t be necessary if Intel had made sure that every privileged or sensitive instruction causes a GPF when executed in Ring 3!
But in any virtual machine virtualization only covers the processor, and all the remaining hardware (graphics, sound, network, printer, etc.) has to be emulated.
If you add these facts together then a virtual machine on a PC is far from being trivial!
When doing a benchmark, please make sure that the configuration of both virtual machines is the same, otherwise this kind of exercise is just meaningless. In particular, if VMware appears to be faster all over the place except on the disk benchmarks, it’s clearly the sign of something funky with the configuration. I’m 100% sure that the guy who did the benchmark used “growable” disks with VMware while using “flat” disks with VPC. The first kind of disk is inherently slower because it needs to maintain some additional data structures on disk. Also, were the disks defragged (both the VMware disk and the host disk)? IDE or SCSI? At the very least, please post the config files.

Btw there’s no way VMware can be slower than VPC on disk benchmarks. I’ve done the exact same comparison for my company, and interestingly VMware was 1st on *every* benchmark I tried (disk included).
Yay.