This article is intended mainly for developers who are new to Xen and want to know more about it. The Xen VMM (virtual machine monitor) is an open-source project developed at the Computer Laboratory of the University of Cambridge, UK. It lets you create many virtual machines, each running its own instance of an operating system. These guest operating systems can be a patched Linux kernel (version 2.4 or 2.6) or a patched NetBSD/FreeBSD kernel.
Didn’t have time to read the article. Is this the same as Solaris Zones?
Solaris Zones is similar to FreeBSD jails or the new Linux-VServer project, where you can run several “virtual” systems on a single OS. However, each “virtual” system is the same OS, and the entire thing runs on a single kernel.
Xen is a hardware virtualisation monitor. You can run multiple OSes on a single CPU, at the same time. Very different beast. Think VMware, but much lower down, in the hardware.
It really depends what you want to use it for. If you only intend to run Solaris apps, then Containers (Zones + Resource Management) use far fewer resources than multiple OS instances but look like multiple OS instances: less memory, less disk space and less performance overhead. Until Linux ABI support is released (supposedly sometime next year), Solaris Containers only support Solaris apps. Not all that serious, since most major apps (Oracle, Apache, etc.) are available for Solaris (stating the obvious, aren’t I?).
If you intend to run multiple OS instances, then something like Xen or VMware is what you need (on x86/x64 anyway; there are more sophisticated systems for other ISAs). Xen uses paravirtualisation (like a number of other systems), which reduces the performance overhead but does not eliminate it entirely. In a recent paper titled “Diagnosing Performance Overheads in the Xen Virtual Machine Environment” the performance overhead for network loads was found to be significantly higher than reported in the “Xen and the Art of Virtualization” paper. I don’t know to what extent the Xen guys have remedied this. Perhaps one of the friendly Xen guys can respond.
I wouldn’t agree with the comment that “Xen is much lower down in the hardware than VMware”. If you are referring to VMware Workstation then yes, but not for VMware ESX Server.
Xen is certainly an impressive x86 VMM, but there are a lot of other systems to consider (not just on x86) (too many to list here).
> It really depends what you want to use it for.
I couldn’t agree more! I always like to stress that OS-based and hypervisor-based virtualisation are complementary and should ideally be available together. OS-based partitioning is more convenient and extremely low overhead. Hypervisor-based partitioning allows better {performance, security, administrative} isolation, heterogeneous guest OSes, “live migration” between hosts, etc.
> In a recent paper titled “Diagnosing Performance Overheads in the
> Xen Virtual Machine Environment” the performance overhead for
> network loads was found to be significantly higher than reported
> in the “Xen and the Art of Virtualization” paper. I don’t know to
> what extent the Xen guys have remedied this. Perhaps one of the
> friendly Xen guys can respond.
In this forum, that would be me :-)
Since “Xen and the Art of Virtualisation” there have been some changes that increase the overhead of IO (a bit – it only really matters in fairly tough cases). The Xen 2.0 “driver domains” architecture makes virtualising high-load situations like the network driver rather harder work for the CPU than the old 1.0 architecture. The advantage is that (unlike other hypervisors) Xen supports most of the random hardware Linux does, without bloating the hypervisor with loads of device drivers.
The 2.0 architecture incurs more context switches because it runs drivers within privileged virtual machines, rather than in the hypervisor. When an unprivileged domain needs to do network IO it must “proxy” device requests through the privileged IO domain(s).
Anyhow, the performance results in the “Xen and the Art of…” and “Reconstructing IO” papers held for us, and are reproducible *if* you have fairly beefy hardware *and* configure it with appropriate efficiency tweaks – which is to say, some of them might not work out of the box, depending on your system. On the flip side, there are some performance limitations, as identified in the Diagnosis paper.
Receiving *small* packets at GigE line rate was not possible on a uniprocessor, even for us. However, small packets on GigE is a fairly unusual case for many people – we chose it because it was pessimal.
Receiving large packets (or using a more powerful machine) got us up to GigE line rate. This still uses more CPU time than the unvirtualised case, so it still has some overhead.
In terms of fixing this, there will be ongoing optimisations but at the end of the day, virtualising a device is always going to have more overhead than accessing it directly. Modern CPUs can mostly absorb this but we’re also looking towards the future.
First off, it’s already possible to dedicate a PCI device (e.g. a network card) to a virtual machine, which reduces the overhead. The mechanism for doing this needs more work in order to be widely usable, though. We’ll leverage new chipset hardware as it becomes available to enable this to be done *securely* (on current hardware, domains with PCI access can break out of their container).
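For the curious, this is expressed as a line in the guest’s config file (Xen config files are interpreted as Python). A minimal sketch, assuming Xen 2.0-era option names – the exact format has varied between releases, and every name, path and address below is a placeholder:

    # Minimal domU config sketch. Xen parses this file as Python.
    # All names, paths and the PCI address are illustrative only.
    kernel = "/boot/vmlinuz-2.6-xenU"   # paravirtualised guest kernel
    memory = 256                        # guest RAM in MB
    name   = "netguest"
    disk   = ["phy:sda7,sda1,w"]        # host partition sda7 as the guest's sda1
    pci    = ["00:04.0"]                # dedicate this PCI device to the guest

You’d then boot the guest with “xm create” as usual.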
User-accessible networks also offer better performance by eliminating context switching. InfiniBand support is being worked on; other similar interconnects may well follow.
> I wouldn’t agree with the comment that “Xen is much
> lower down in the hardware than VMware”. If you are
> referring to VMware Workstation then yes, but not
> for VMware ESX Server.
Absolutely. As I mentioned earlier, the lines are even less clear than that: Xen runs device drivers in guest domains, VMware runs them in the hypervisor. Fun stuff to categorise :-)
I heard Xen supports other CPUs than just x86. If this weren’t possible I don’t think the project would be as interesting. But I don’t get how that is made possible. Does it have an entirely new instruction set (something like a JVM) that any of the supported processors emulates?
I don’t believe any architectures other than x86/x86-64 are supported at the moment.
Also, the multiple-architectures bit doesn’t refer to running x86 code on, say, ARM, but to running PPC code on PPC processors, IA64 code on IA64 processors, and so forth.
x86_64 will be supported in the next stable release, which should be Real Soon Now(TM), as will Vanderpool support (i.e. initial support for running Windows guests although it’ll likely need optimising later).
IA64 should be in beta by then; PPC will follow a bit later, since it’s been underway for less time.
Xen is (from a user perspective, not technically) rather like VMware. It lets you run multiple isolated operating systems on one machine, as if each had its own machine. It runs on multiple architectures but it doesn’t emulate different architectures: multiple x86 OSes running on x86, multiple x86_64 OSes on an x86_64 box… and so on (ports for IA64 and PPC well underway).
If you want something that runs lots of architectures on lots of other architectures, try QEmu ( http://www.qemu.org ) – you can emulate machines of several architectures with it. There’s also a “user mode” emulator that lets you run binaries for another arch, e.g. run Linux x86 binaries on a Linux PPC host.
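As a trivial illustration of the user-mode emulator, a sketch in Python (“./hello-x86” is a placeholder binary; qemu-i386 is the user-mode executable QEMU installs for 32-bit x86):

    import subprocess

    # Run an x86 Linux binary on a non-x86 Linux host (e.g. PPC) using
    # QEMU's user-mode emulator. The target binary is a placeholder.
    subprocess.run(["qemu-i386", "./hello-x86"], check=True)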
Oh, yes, by all means. Only supporting the one arch more popular than all the others combined *certainly* makes it less interesting.
Idiot.
It was a reasonable comment. I think he was talking about the ability to run code for foreign archs (which Xen can’t do, although it is multiplatform). See my post regarding using Qemu (or Qemu on Xen) for this purpose.
Well, read it then. We’re not your personal assistants.
Always happy to read your posts (I love Xen!)
Will Windows support only be available with Intel VT, or more like what ESX does?
Thanks!
-iGZo
> Always happy to read your posts (I love Xen!)
Thanks :-)
> Will Windows support only be available with Intel
> VT, or more like what ESX does?
It’ll be only with Intel VT (Vanderpool) or AMD SVM (Pacifica). These will be pretty much standard on CPUs in the near(ish) future: Intel in particular are actually introducing it on the desktop first and in the Xeons later…
The trouble with fully virtualising x86 without hardware support is that it’s a) really really hard and b) a bit slow.
If you don’t have special hardware, you’ll need to run another layer on top of XenLinux: you can still run Windows in QEmu, but it’ll be rather slow (particularly as the Accelerator module hasn’t been ported to XenLinux). Win4Lin (QEmu-based) officially supports running W4L in Xen VMs. For the best fully-virtualised performance on pre-VT x86 you still need VMware.
Is this what Novell showed off in their keynote, or are they using something else?
Don’t know – haven’t seen the keynote. But Novell are shipping Xen from SUSE 9.3 onwards and will be integrating it into SLES 10. Red Hat’s also shipping Xen in Fedora Core 4 (and it will be in RHEL 5).
The other distros have varying levels of Xen support too – patches for the Debian / Ubuntu installer are available to integrate Xen right into the initial install process.
See Tim Marsland’s blog entry at http://blogs.sun.com/roller/page/tpm?entry=hello_world_from_solaris…. Also very interesting from Tim is his entry on how they are using diskless clients as part of the bringup: http://blogs.sun.com/roller/page/tpm?entry=solaris_deployment_and_k…
Now that Mac OS X (a FreeBSD variant) is going to be ported to Intel, would it be possible to port it to OS X?
> Now that Mac OS X (a FreeBSD variant) is going to be
> ported to Intel, would it be possible to port it to OS X?
Yup. A port of the Darwin kernel could be made to run on Xen, or you could run Mac OS in fully virtualised mode. The caveat is that you’d need to satisfy the legal requirements (and work around any hardware tricks Apple employ) in order to bring up Mac OS X on your hardware.
If you run Xen on Mac OS X, and not Mac OS X on Xen, there are no legal restrictions.
> If you run Xen on Mac OS X, and not Mac OS X on Xen,
> there are no legal restrictions.
There’s a terminology problem here: Xen runs under the OS kernel, so when Xen’s installed *everything* is running on it. There’s no “host” kernel as in VMware. The nearest to the “host” is “domain 0”, a special virtual machine that has access to PCI devices.
But yes, basically if you’re only running one copy of MacOS on one piece of Mac hardware you should be fine. Beyond that, the legal issues get a bit murky :-)
How entangled is the Xen code in the Linux kernel? Would it be possible to untangle it to run it on, say, FreeBSD? I’ve used FreeBSD on both my desktop and laptop for the last couple of years and would like to continue to do so, but I would really like to try out Xen for portability testing of my programs.
Aron
> How entangled is the Xen code in the Linux kernel?
Xen itself is a completely separate layer from Linux: Linux runs under Xen. The Xen codebase derives a few platform initialisation features from Linux code but is essentially a separate project.
Your OS still needs porting to the Xen API. There are two sides to this: the ability to run in an unprivileged virtual machine (a “domU”) and the ability to run in a privileged virtual machine (dom0 – analogous to the “host” OS in VMware Workstation). domU functionality is a prerequisite for dom0 functionality.
> Would it be possible to untangle it to run it on, say
> FreeBSD? I’ve used FreeBSD on both my desktop and
> laptop the last couple of years, and would like to
> continue to do so, but I would really like to try out
> Xen for portability testing of my programs.
FreeBSD 5.3 has been ported to Xen already. Kip Macy (who did that port) is now working on making it run on Xen 3.0 and on getting that support checked into FreeBSD 6.0, so that FreeBSD will natively support Xen. This is just domU support, but dom0 is on the cards for the future.
NetBSD natively supports Xen also. NetBSD -current includes support for running on Xen 2.0 as dom0 or domU. Various other OSes have been (or are being) ported also.
I know FreeBSD and NetBSD can run under Xen, but I meant running the Xen hypervisor on FreeBSD, not running FreeBSD as a host system. Sorry if I’m misunderstanding how Xen works :/
Aron
> I know FreeBSD and NetBSD can run under Xen, but I
> meant running the Xen hypervisor on FreeBSD, not
> running FreeBSD as a host system. Sorry if I’m
> misunderstanding how Xen works :/
I think we’re still talking slightly cross purposes… Or did you mean “guest” in that sentence rather than “host”?
You can’t yet run FreeBSD as dom0 (the primary OS that boots when you start the machine). You can do this with NetBSD or Linux…
Once FreeBSD 6.0 has Xen support, the ability to run as dom0 will likely be integrated, either by the original porter or by some other enterprising BSD hacker. I *think* that’s what you’re wanting.
Are there plans to add features to Xen that would be especially attractive to corporate IT, such as failing over a service running on a guest on one machine to a guest on a second machine? Other high availability features? Using Xen as the basis for a clustering framework?
Definitely – this sort of environment is a great use case for virtualisation.
You can already run standard Linux HA stuff on Xen. There are also some neat features you get now with Xen:
* “Live migrate” virtual machines to another host without stopping them. Use it to load-balance servers across hosts, to evacuate guests from a node for h/w maintenance, etc. (see the sketch after this list).
* “Driver domains” allow drivers to be isolated into virtual machines, limiting their ability to crash the machine and allowing them to be restarted / upgraded whilst in use (this feature is still a bit “raw” to use)
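To make the live-migration bullet concrete, here’s a minimal sketch of kicking one off from dom0 with the xm tool (the domain name and target host are placeholders; both hosts need access to the guest’s storage):

    import subprocess

    # Live-migrate the running guest "webserver" to another Xen host.
    # The guest keeps running while its memory is copied across; both
    # names below are placeholders.
    subprocess.run(
        ["xm", "migrate", "--live", "webserver", "host2.example.com"],
        check=True,
    )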
In addition, some of the cluster-friendly features that are being worked on include:
* High performance cluster wide store(s) for virtual machine data – pool the resource usage of all physical hosts into one large resource for storing VM disks, supporting VM migration, local caching, etc.
* Take snapshots of the complete state of virtual machine disks (in the cluster store) every few tens of seconds, with low overhead
* Checkpoint execution state regularly with low overhead (a coarse form of this exists today – see the sketch after this list)
* Run two virtual machines (on different hosts) in sync at the instruction level (by delivering the same events and data to both) – if one fails, the other will be in (practically) the same state and just carry on.
* VM “fork” to allow easy replication of services
* Advanced memory sharing to allow virtual honey farms of thousands of VMs to run on a single host
* Mandatory access control in the VMM (a la SELinux but lower level). Should enable EAL-5 level assurance (i.e. higher than almost any other general purpose system).
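On the checkpointing item: a coarse, stop-the-world version exists today as “xm save” / “xm restore”; the research work is about doing the same continuously with low overhead. A minimal sketch (the domain name and file path are placeholders):

    import subprocess

    # Suspend the guest's complete execution state to a file...
    subprocess.run(["xm", "save", "webserver", "/var/xen/webserver.chk"],
                   check=True)

    # ...and later resume it from that file (even after a host reboot).
    subprocess.run(["xm", "restore", "/var/xen/webserver.chk"], check=True)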
There’s other stuff too :-) It’s all research at the moment but many of these features stand a good chance of being productised and rolled into future revisions of the distro (if they haven’t already).
This stuff looks brilliant. Some of these features even seem to be ahead of the curve relative to proprietary virtualization solutions. I’m currently working for IBM on AIX. During my interview, it came up that I’m a Linux guy, and the interviewer said that Linux doesn’t have the virtualization or HA features AIX has. I said, “what about Xen?” – and he was completely unaware of the project. Two months later everyone here is talking about Xen.
You might have read about IBM’s acquisition of a company called Meiosys. Their deal is application virtualization (as opposed to OS virtualization), sort of like Solaris containers or BSD jails but with the HA/mobility/checkpoint capabilities of Xen. Is this a direction in which Xen can extend, or is this completely different functionality that is better implemented in a separate project?
> This stuff looks brilliant. Some of these features
> even seem to be ahead of the curve relative to
> proprietary virtualization solutions.
The ideas are arguably ahead of the curve but bear in mind that the “in development” features are research projects at the moment, so it may be a while before they’re production ready (if at all). I’m sure people like MS and VMware are also working on some really sexy stuff. Some of the new Xen stuff is shaping up nicely – deterministic replay works (for uniprocessor guests) but isn’t checked in, the distributed block store prototype works, etc.
> I’m currently working for IBM on AIX. During my
> interview, it came up that I’m a Linux guy and he
> said that Linux doesn’t have the virtualization or
> HA features that AIX has. I said, what about Xen,
> and he was completely unaware of the project. Two
> months later everyone here is talking about Xen.
Yeah, from what I could see IBM dawdled for quite a while about supporting Xen – then they suddenly went for it at full speed. I hear there’s over 100 IBM people working on Xen-related stuff. No idea what most of them actually do though!
> You might have read about IBM’s acquisition of a
> company called Meiosys. Their deal is application
> virtualization (as opposed to OS virtualization),
> sort of like Solaris containers or BSD jails but
> with the HA/mobility/checkpoint capabilities of Xen.
Yes, I saw that; it looks like (as with previous application-level systems) they’re interposing on the standard libraries of the system to provide a “virtual OS” rather than a virtual machine. The neat thing is the apps don’t need relinking / rewriting to benefit from it. It’s good engineering if it works, but it’s quite a complicated layer to work at. Because of the large amount of state involved at this layer (network connections, open files, etc), it seems like more effort in some ways than machine-level virtualisation, where you only have to deal with hardware state.
> Is this a direction in which Xen can extend, or is
> this completely different functionality that is
> better implemented in a separate project?
I think it’s really a separate project. Most of the things you can do at the app level with Meiosys you can do at OS level with Xen. I think either one, or a combination of the two, will satisfy most people.