InfoWorld’s Randall Kennedy takes an in-depth look at VMware Workstation 7, VirtualBox 3.1, and Parallels Desktop 4, three technologies at the heart of ‘the biggest shake-up for desktop virtualization in years.’ The shake-up, which sees Microsoft’s once promising Virtual PC off in the Windows 7 XP Mode weeds, has put VirtualBox — among the best free open source software available for Windows — out front as a general-purpose VM, filling the void left by VMware’s move to make Workstation more appealing to developers and admins. Meanwhile, Parallels finally offers a Desktop for Windows on par with its Mac product, as well as Workstation 4 Extreme, which delivers near native performance for graphics, disk, and network I/O.
But it’s not targeted at the end user. The current hardware requirements are really steep and expensive.
But better graphics and audio performance would make virtualization more compelling for end users, along the lines of what Joanna Rutkowska says she uses day-to-day: an Internet-connected VM that she doesn’t worry about, because she can delete or revert it if it gets rooted.
In contrast, she has one or more safe VMs that don’t have net connectivity; I guess you could use shared, read-only storage to copy files back and forth among VMs.
Or maybe just buy two computers?
Well, at some point we’re going to have an embarrassing underutilization of hardware resources on a single machine. Most programs aren’t multiprocessor-aware, or simply don’t do a good enough job of allocating resources efficiently. So if I have the extra CPU cores, why not?
It makes a lot of things easier having them on the same box.
Sure. That wasn’t really a serious comment, more of a reflection over the “virtualization vs. security” to which the parent poster referred.
I’ve given up on Parallels. I purchased two licenses for version 2. It would not keep up with kernel changes in Ubuntu, even if you stayed with the LTS release (Ubuntu 8.04). Every kernel update would break it, and eventually they stopped fixing it. This was even though Canonical sold it in their store!
Then version 4 came out – I bought the upgrade. It worked when I had Ubuntu 9.04, but it stopped working with 9.10, and I’ve never been able to get it working since.
When Parallels works, it is a nice product. It’s just too unstable and poorly supported. If they can get enough funds to properly support it, it could be a good product – certainly the price is right.
From what I gather from the forums, the Mac version is better, but I don’t have a Mac to test that myself.
Another example of why Linux’s total disregard for stable interfaces is bad for users and vendors.
DKMS. You don’t need a stable interface.
The article kept going on and on about how VirtualBox is free – but the open-source edition doesn’t have all the bells and whistles, and the proprietary edition is NOT free if you’re using it in the enterprise. The article was about the enterprise.
I’ve never gotten DKMS to actually work – or to save me effort over just re-building all the closed-source drivers I use by hand. Actually, looking back at it, it was probably more effort to use DKMS with ATI’s binary driver on Debian 4 when I tried that a few years ago than it is to just re-run the installer every time the kernel gets updated on the RHEL4 machine I’m using at work now. Which is sad.
Edit: and, if I recall correctly, the Open Source edition isn’t missing anything that you’d care deeply about. The main thing I can think of off the top of my head is that the Open Source edition doesn’t let guests use the host’s USB subsystem. That’s probably not a big deal in an enterprise environment; I use VirtualBox at work, and I’ve only ever used that feature once, to try to mount my iPhone on my Windows guest: it didn’t work, and I haven’t bothered with it since.
With the Nvidia and ATI drivers installed using Ubuntu’s ‘Hardware Drivers’ tool, DKMS works without issue.
With a manual ATI driver install, DKMS did not recompile the driver automatically; I had to run the installer again.
Next week I may provide ATI with feedback on their driver with regard to DKMS and Wine support.
http://www.amd.com/us/CatalystCrewSurvey
http://ati.cchtml.com/
I’ve heard arguments both for and against a stable ABI and I don’t think the current model helps end-users.
I wonder if it would be possible to stabilize it periodically, say twice a year.
Not that it’s relevant to anything, but it’s entirely possible. I believe that the reasons the Linux kernel has no driver-loading API are entirely political.
Edit: Er, re-reading your post, I guess you already knew that, didn’t you?
Uggggh?!?!?? *
– Gilboa
* 1. You can load and unload modules from kernel-mode code (ugly, but it works).
2. You can load and unload modules from user-mode code (exec never killed anyone).
3. Or were you talking about the GPL-only __symbol_get?
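A minimal kernel-module sketch of (1) and (2) – the module name “dummy” is just a placeholder, not anything real:

```c
/* Sketch only: loading another module from kernel mode via
 * request_module(), and from user mode by exec'ing modprobe.
 * "dummy" is a placeholder module name. */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kmod.h>

static int __init loader_init(void)
{
	/* (1) Kernel-mode: ask kmod to load the module for us. */
	if (request_module("dummy") != 0)
		printk(KERN_WARNING "request_module(dummy) failed\n");

	/* (2) User-mode route: exec modprobe ourselves. */
	{
		char *argv[] = { "/sbin/modprobe", "dummy", NULL };
		char *envp[] = { "PATH=/sbin:/usr/sbin:/bin", NULL };
		call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
	}
	return 0;
}

static void __exit loader_exit(void) { }

module_init(loader_init);
module_exit(loader_exit);
/* (3) GPL-only symbols like __symbol_get require this declaration. */
MODULE_LICENSE("GPL");
```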
I’m talking about loading an arbitrary, external binary driver, not a kernel module. There is a difference – at least, there is a difference between a kernel module the way the Linux kernel does it, and a driver the way Windows does it. I’m talking about a way to avoid re-building all your third-party drivers every time you update the kernel, like I get to do now (which is in no way a hassle, and never fails or leaves my system in an unstable state! Really!).
Every time Red Hat pushes out a kernel update, I have to re-run the VirtualBox installer to re-build the kernel driver, and I get to un-install and re-install the nVidia driver, because those drivers are actually kernel modules that have to have bits of them built against the source tree used to build the kernel that’s meant to load them. Now, the kernel team could trivially provide a stable driver-loading interface so that this didn’t have to happen; they have quite deliberately elected not to do so.
As far as I remember, you can configure the kernel to ignore build versions (CONFIG_MODVERSIONS?). Most distributions (if not all) enable version support in order to -force- out-of-tree kernel modules to be recompiled against the latest kernel build – or at least against the latest major kernel build. (As far as I know, a driver that was built against RHEL 5.4 [164] can be used with all RHEL 5.4-series kernels [164.x].)
The reason for it has nothing to do with the kernel’s driver-loading interface (?!?!?) and everything to do with the distributions’ reluctance to be forced into keeping a stable API across major releases on one hand, and preventing compatibility issues on the other – as even a minute change in one of the kernel’s structures or APIs can crash your system (hence, you are forced to rebuild your module against the latest kernel headers). *
– Gilboa
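* A toy userspace illustration of that last point – the struct names are invented, the type-punning is deliberate, and in a real kernel the bad read would be a crash rather than a harmless printf:

```c
/* Why a module built against old headers breaks: insert one field
 * into a struct and every offset after it moves. */
#include <stdio.h>

/* Layout the out-of-tree module was compiled against. */
struct task_v1 { long pid; void *stack; };

/* Layout the running kernel actually uses: one field inserted. */
struct task_v2 { long pid; long exit_code; void *stack; };

int main(void)
{
	struct task_v2 t = { .pid = 42, .exit_code = 0,
	                     .stack = (void *)0xdeadbeef };

	/* The "old" module still believes stack sits right after pid. */
	struct task_v1 *stale = (struct task_v1 *)&t;

	printf("real stack pointer:        %p\n", t.stack);
	printf("what the old module reads: %p\n", stale->stack);
	return 0;
}
```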
VMware Player has come a long way in handling kernel changes, and now it is quite pleasant. In earlier versions, you needed to manually “reinstall” the player (basically, recompile the virtual device modules and plug them into the running kernel).
Now, in the latest version, when you start the player it detects that the kernel changed and recompiles/plugs in the devices on the fly. It just takes a few seconds longer than usual, and it only happens when a new kernel is installed.
I think VMware nailed it nicely and made the extra effort to give the end user a consistent and polished solution.
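Conceptually, the detection step is presumably something as simple as this (a userspace sketch; the state-file path is invented for illustration):

```c
/* Sketch of "detect that the kernel changed": compare the running
 * kernel release against the release recorded at the last module
 * build. The state-file path below is invented. */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

static int kernel_changed(const char *statefile)
{
	struct utsname u;
	char last[128] = "";
	FILE *f;

	if (uname(&u) != 0)
		return 1;                /* can't tell: assume changed */

	f = fopen(statefile, "r");
	if (f) {
		if (fgets(last, sizeof(last), f))
			last[strcspn(last, "\n")] = '\0';
		fclose(f);
	}

	if (strcmp(last, u.release) == 0)
		return 0;                /* same kernel: no rebuild */

	f = fopen(statefile, "w");       /* remember the new release */
	if (f) {
		fprintf(f, "%s\n", u.release);
		fclose(f);
	}
	return 1;                        /* trigger the rebuild */
}

int main(void)
{
	puts(kernel_changed("/tmp/last-kernel-release")
	     ? "kernel changed: rebuild modules"
	     : "kernel unchanged: start VMs");
	return 0;
}
```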
No. Compiling should never be necessary. That is for developers, before the system is released, not for users.
What if we on Windows had to compile something whenever we upgraded VMware (or some other software)? How many would like that? How popular would that software become?
Why should Linux users deserve any less?
I could point out that the Linux kernel was never designed to support out-of-tree modules – let alone proprietary modules.
I could also point out that a large number of proprietary kernel-driver developers have learned to live with this by-design limitation: by designing their modules with a distinct kernel-interfacing layer (as opposed to calling the kernel API from 10,000 different places), they have managed to reduce the changes required after each new upstream release (see the sketch below). *
… But given the fact that your short comment had more or less nothing to do with the subject at hand (the problem might have had nothing to do with upstream kernel API changes and everything to do with a sloppy package maintainer on the Ubuntu side, or a problematic driver-build script on Parallels’ side – I have no idea [but neither do you…]), I can only assume that you were simply trolling. Oh well…
– Gilboa
* Personal experience.
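A rough sketch of such a kernel-interfacing layer – the osal_* names are invented; the point is that the rest of the driver calls only these wrappers, so an upstream API change is absorbed in one file:

```c
/* Sketch of a driver's OS-abstraction layer: the only file that
 * touches the kernel API directly. The osal_* names are invented. */
#include <linux/version.h>
#include <linux/slab.h>
#include <linux/sched.h>

void *osal_alloc(size_t n)
{
	return kmalloc(n, GFP_KERNEL);
}

void osal_free(void *p)
{
	kfree(p);
}

pid_t osal_task_pid(struct task_struct *t)
{
	/* An upstream change, absorbed in exactly one place: */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 24)
	return task_pid_nr(t);   /* newer kernels */
#else
	return t->pid;           /* older kernels */
#endif
}
```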
This does not mean that it would either be impossible to add, or unreasonable to request.
The lack of an external-driver-loading interface is still one of the greater (completely unnecessary) hassles that face Linux users on a regular basis. Whatever the reasons for the situation are, and there may be good ones, it’s still a significant annoyance that doesn’t have to be there. And it’s also a risk; I’ve had re-installing the same driver dozens of times eventually leave the system in an unusable state.
That solution is not perfect, and still causes a lot of unnecessary hassle for many users. It was something of a revelation to the literal rocket scientists where I work that they were going to have to re-install their graphics drivers every time they let the Red Hat update agent install a new kernel version. After installing an update and then re-installing the ATI binary driver left X unusable on my system, some decided that, as a rule, they should never install updates at all, because doing so was both too much of a disruption and too risky! Think about that for a minute: there is very obviously a problem there.
This much is true: whinging about Linux is becoming a popular pastime around here. I notice that I’m doing it too.
Unless kernel.org completely changes its development model, a cross-version stable API is impossible to achieve.
Such an interface can only be maintained by the distributions themselves within a certain release. (Read: RHEL.)
Again, the driver-loading API has nothing to do with it. The -intended- lack of a cross-release stable API is the main issue.
However, if you use, say, nVidia on a DKMS-riding distribution (e.g. Fedora + FreshRPMs), the recompile step is more or less invisible to you – at least as long as nVidia makes sure it follows the latest upstream kernel changes (and they do).
End users should not install out-of-distribution drivers – at least not unless they really, really, really know what they are doing.
Again, the Linux kernel was never designed to support proprietary kernel modules.
If someone -chooses- to use an out-of-tree driver, he’d better make sure he uses the right distribution.
🙂
– Gilboa
Parallels isn’t a Linux product, and you can’t blame Linux if the kernel breaks Parallels when it’s supposed to be transparent to the OS (just like every other VM product is).
It’s been a while since I’ve used VMware on Linux but my experience then was hardly what one would call “transparent” – the installer built a kernel module for the running kernel.
If you change kernels or run multiple kernels, you would need a kernel module for each. Not all the “user-friendly” distros have all the required dev tools installed for you to accomplish this, and in some cases, like recent Ubuntus, it’s not a straightforward process to obtain them.
Sorry, I thought people were referring to Linux guests rather than Linux hosts, as the original comment was about Parallels (which, AFAIK, is OS X- and Windows-only).
You’re right that the VM is far from transparent on the host OS.
I’m not sure what happened with //’s and the PC version, but it always seemed to be great on the Mac. I have Ubuntu 9.10 running like a bought one on //’s 5 now, right down to the wobbly-window effects and all. I think Linux seemed to be a second-class citizen in the VM world there for a while. It makes sense, I guess – you want to get Windows working first.
I am glad to hear they have updated the Windows version; it might be worth checking out.
VMware and Virtualbox have also been broken by kernel updates.
But the problem has nothing to do with Linux failing to provide a stable interface for third parties. Since we all know Linux is perfect in its design, there must be another answer.
So … the only OS you had a problem with using Parallels is Ubuntu, which kept crashing Parallels.
And this is Parallels’ fault.
Got it.
Yes, because Ubuntu is an explicitly supported platform.
Even though you got it working, the next Uboobootoo updates could easily bork your Parallels install. Canonical moves the goalposts more than any distro around.
Well, time to chew on my shoe a bit – I went out and downloaded the latest update to Parallels and it started up fine on Ubuntu 9.10. Whatever was wrong seems to have been fixed. I’ll run it for a while and see how it goes.
“The shake-up, which sees Microsoft’s once promising Virtual PC off in the Windows 7 XP Mode weeds, has put VirtualBox — among the best free open source software available for Windows — out front as a general-purpose VM, filling the void left by VMware’s move to make Workstation more appealing to developers and admins.”
The article didn’t put VirtualBox out in front. This is an inaccurate summary. They (InfoWorld) still rate VMware the best overall.
VirtualBox could be the software of my dreams if they only fixed the damn USB detection.
Works fine here, every time.
Same here. Excellent product.
I don’t think that the author of the article has actually been using the three programs for any extended period of time.
He talks about scalability in the context of the number of virtual cores (!), not, for example, disk access or network performance (both of which matter to me a lot more than cores).
Further, he does not comment on the massive instability issues that VirtualBox has, even on supported platforms and with supported guests (don’t believe me? Then check out the VirtualBox forums, or try it for yourself :-).
Are you sure it’s VirtualBox that’s the issue, or just the hosted OS that doesn’t play well with what VirtualBox offers up?
It’s a bit of both, but I’ve had countless problems with VBox – particularly with VBox 3.0.x.
In fact, I’ve been forced to downgrade to 2.1.4 just to keep my VM alive (and even then it only works with 1 VM – any more and the whole machine dies).
I’ve been toying with the idea of wiping the system and installing VMware instead; however, VBox does have some features that I like, and it is easy to maintain in a command-only environment.
In all fairness, I must confess that I am sick and tired of hearing the notion that an OS is broken or bad or whatnot if it does not work under virtualization. Several times I’ve seen an OS that works perfectly fine on real hardware but fails to work inside a VM. In those cases, the VM is the one to blame.
I’ve read about some of those every now and then, but I have yet to experience any of it, although I’ve been using VBox on Linux and Windows hosts with Linux and Windows guests for some years now, both for virtual server hosting and for development environments.
Note: As with every software, some people always bump into issues, while others don’t. Me being ok with vbox doesn’t mean it’s the best, it just means it’s been good for me.
When the 2.0 series first came out, you couldn’t even set up a VM on 386 hardware with Windows XP (it kept crashing, although the software was released as final).
After asking a technical presales guy from Sun about this issue, his remark was: “don’t take the first releases, they need to stabilize over time.” So it is a bit like FreeBSD and KDE, where you need to wait three releases before it really gets where you want it to be – only with FreeBSD and KDE they don’t hide that tip from you…
For me, a filesystem and a VM are such basic building blocks that I don’t want to use experimental versions as the base of my solutions. If the creators hide the real status of their product, then I lose all confidence and stop using it.
Exactly the same happened to me…
“386 hardware”, really?!? That’s so 1980s – and to top it off, WinXP requires a Pentium (586) anyway.
Insert YMMV here… Pretty much the same could be said for the initial release of any new software version (or cars, for that matter).
Well, for me, disk and network would be a problem if I had enough CPU cores, so it may just vary depending on your particular usage pattern. Actually, disk is an issue at times, too – there have been several times when I didn’t expect disk to be a bottleneck, but it was.
I’ve never seen “massive instability” from VirtualBox.
If I shut down a guest that was using VT-x, my kernel panics. Every time.
That’s the only problem I’ve had, and it’s easy to work around: the world doesn’t end if I just turn off VT-x. It also appears to be a problem with the ancient 2.6.9 kernel that Red Hat Enterprise Linux 4 uses, and not with VirtualBox. But I think I can call that an instability.
Sure, with that particular 2.6.9 kernel.
I’m a fairly happy user of VirtualBox-OSE. It does what I need it to do; I don’t use it a heck of a lot, but it’s my go-to when I need a VM for something.
VirtualBox is pretty handy. I use it at work to virtualize Windows XP, to read MS Office docs people send me, access pages that require Internet Explorer, and so on.
It’s still way easier to use from the command line than VirtualBox :p
VMware should really release a VMware Desktop product for Windows at the same price point as the Parallels Desktop and VMware Fusion products for the Mac. Most Windows users don’t need all the flashy developer features of Workstation. And it’s kind of ridiculous that they have an awesome, cheap Mac product while their Windows equivalent costs twice as much…
I agree 100% – the price for Workstation is way too high. They should make a version for Windows at the same price point and feature level as Fusion.
Yeah, I think they really don’t want to sell too many copies of Workstation. The more consumer-oriented you become, the higher your support costs are. They see their future on servers, in enterprises – not on Grandma’s desktop, or even on “power” users’ desktops.
But for Windows you have VMware Server, which is free!
Except that VMware Server sucks. I hate that web management console: it’s slow, it has no context menus, and I’ve had to reinstall VMware three times this year because all of a sudden I couldn’t reach the damn server. Somehow, either VMware or Apache is corrupting its config file. Nobody needs a full-blown Apache + Tomcat installation to manage VMs.
VMware Server 1.x was a great product; 2.x blows, and it’s all about the management console. That’s why at home I’m now using VirtualBox, and on Windows servers it’s Hyper-V (it’s free with the Win2k8 Server license).
Aha, I was not aware that the situation had become worse with newer versions. I was happily using VMware Server 1.x on Windows XP half a year ago, and when switching to OS X, I had to pay for Fusion because there is no free version for Macs, not even VMware Player. So I actually thought it was the Mac users that were the unlucky ones.
I use VirtualBox on an Ubuntu host with an XP guest. If the kernel gets updated, the needed modules are recompiled automatically. For me it’s very stable and does everything I want.
I do think the Linux community should offer a stable framework for closed drivers. Most people just want their hardware and software to work, and really don’t care whether it is open source or not. If you don’t like it, just use a kernel without that support.