The Xen virtual machine monitor was recently merged into the upcoming 2.6.23 Linux kernel in a series of patches from Jeremy Fitzhardinge. The project was originally started as a research project at the University of Cambridge, and has been repeatedly discussed as a merge candidate for the mainline Linux kernel.
Just FYI, too many words here are linked. I see what you did (taking an already link-heavy story and adding even more), but that was a bad call. It is over the top now!
Yes lots of links, but why is this a bad thing?
Sometimes on OsNews, the links can be non-obvious, but in this case, it was clear what the subject of each linked page was.
I agree. I don’t see why the links are inappropriate.
Alas “into” just isn’t an obvious enough subject to make sense to me as a link.
Slashdot pioneered this link-every-word stuff. They’d link “into” as well, except they’d probably send it to dictionary.com.
I agree that ‘into’ isn’t very expressive or informative when taken out of context.
But this style of linking relies on context. The links in the sentence '…Xen…was recently merged into the…Linux kernel' therefore point to articles/resources that address this subject.
It is a means of very concisely disseminating links to many information sources in an intuitive manner.
By scanning the summary, I can see that there are links to information about:
the Xen kernel merge,
the pertinent kernel release,
background about Xen,
some debate about whether Xen belongs in the kernel.
And I can choose which aspects of the news story to learn more about. Granted, it takes some getting used to, but once you’ve learned the skill of parsing these links, the format becomes very information-dense, a good thing for a news aggregation site.
What would be nice is for one link in each story to be visually different from the others as a form of (subjectively decided) master-link to the most relevant supporting-article.
Where does this leave other VM projects?
Dunno… KVM has already been merged, and most kernel hackers seem to agree that KVM is much cleaner and better than Xen: http://itmanagement.earthweb.com/article.php/3659196
My long term bet is certainly for KVM.
Competition is always good; having KVM and Xen as two different projects would have been better for the community than adding both projects into the Linux mainline.
What can we expect now from those projects?
1. Will they find a good way to live together in the Linux kernel? IMHO that is not a good approach; having two things that do the same job in the same project just makes for bloat.
2. Will Xen eat KVM, or vice versa? Very negative, because competition is always good.
3. Will they merge technologically? Maybe this is the best option.
KVM could render Xen obsolete; it is possible to … (https://ols2006.108.redhat.com/2007/Reprints/harper-Reprint.pdf).
Simple as that eh? You know, because *everyone* runs Linux 😛
Every user of the Linux kernel runs Linux. And the question was about the future of both KVM and Xen in the Linux kernel.
The industry standard is Xen, that's a fact. It works with Linux and OpenSolaris, the BSDs are porting it, and Microsoft worked with XenSource somehow. So we can presume that Viridian will be Xen-compatible.
Xen is very much like the "enterprise" projects on http://thedailywtf.com. Yes, it works, but its code quality is horrible: every kernel hacker reviewing the Xen source code has run away.
KVM is clean and simple. It has almost all of Xen's features (SMP guests, live migration, etc.) and can run Xen guests. So Xen seems redundant on Linux.
And guess what? KVM is to be ported to FreeBSD: http://code.google.com/soc/2007/freebsd/appinfo.html?csaid=FACC0F1A…
I heard rumours about the possibility of porting KVM to OpenSolaris. So we would have KVM as the de facto standard.
Porting it isn't as huge an undertaking as porting Xen was. KVM is clean and looks much more like a device driver. That's why it got merged into the Linux kernel so fast: it's not invasive.
WOW! This KVM-Xen is a wonderful project!
Does anybody have any idea where the project's homepage is located? (Google returned irrelevant answers.)
Linux kernel virtualization is developing at an excellent pace; I would say that 2007 is the year of Linux kernel virtualization!
The Linux kernel develops separate components separately, which results in a beautiful in-kernel virtualization infrastructure rather than a Xen-style mess.
KVM only works on CPUs with hardware virtualisation support. Xen works on any x86 CPU since the PPro. For that reason alone, KVM cannot supplant Xen. And until the management tools for KVM catch up to those for Xen, I don’t see it “killing” Xen.
There’s a fundamental difference in the approaches taken by Xen and KVM. Xen exports a virtual CPU architecture and “pseudo-physical” guest address spaces. It was conceived, designed, and implemented as a clever way to implement virtualization on host hardware architectures that were not designed to be virtualizable. They implement a lot of mappings, protections, and switches in the hypervisor that would ideally be provided by the hardware, and they compensate for this software emulation by presenting an ideal virtual architecture for guests to target.
Xen lit a fire under Intel, AMD, and (to a lesser extent) IBM to provide virtualizable hardware. These extensions are coming in phases. For example, Intel’s first phase, called VT-x, implements a guest mode with selectable trapping of privileged instructions, state switch on mode switch, and guest exit conditions. An upcoming phase, called VT-d, will implement a hardware IOMMU, nested page tables, and guest interrupt delivery. The hardware vendors are steadily extending their architectures to correct the shortcomings that drove Xen’s design.
KVM has taken a somewhat opposite approach by demanding a certain level of hardware support in order to implement a reasonably simple hypervisor that exports vanilla x86 (now including 64-bit and SMP) virtual machines. The theory is that a little bit of hardware support saves a lot of CPU cycles and software complexity. Work on KVM started at the perfect time to capitalize on the explosion of x86 virtualization support, allowing the developers to support the new capabilities as they arrive while keeping the code lean and mean.
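To give a feel for how lean that interface is, here is roughly what a minimal userspace launcher looks like against the /dev/kvm ioctl API (a sketch only: error handling is omitted, and a real launcher would also load guest code into the memory slot and initialise registers with KVM_SET_REGS/KVM_SET_SREGS before entering the guest):

    /* Minimal sketch of the /dev/kvm userspace API: create a VM, give it
     * one slot of memory, create a vcpu, and enter guest mode once. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    int main(void)
    {
            int kvm = open("/dev/kvm", O_RDWR);
            int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

            /* back 64 KB of guest-physical memory with host memory */
            void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            struct kvm_userspace_memory_region region = {
                    .slot            = 0,
                    .guest_phys_addr = 0,
                    .memory_size     = 0x10000,
                    .userspace_addr  = (unsigned long)mem,
            };
            ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

            int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
            struct kvm_run *run = mmap(NULL,
                    ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
                    PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

            ioctl(vcpu, KVM_RUN, 0);   /* run the guest until it exits */
            return run->exit_reason;   /* e.g. KVM_EXIT_HLT, KVM_EXIT_IO */
    }

The whole hypervisor sits behind one device node and a handful of ioctls, which is exactly why it reads like a device driver rather than a separate kernel.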
The exception to that leanness is the management of the shadow page table, which keeps track of the guest-physical to host-physical mapping in the host so that the hardware MMU can present the guest with the combined guest-virtual to host-physical mapping. Shadow page table synchronization was implemented with considerable complexity in order to provide decent performance. This will become a lot simpler with the arrival of hardware support for nested page tables.
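A toy model of what shadow paging computes (single-level "tables" and hypothetical names, purely to illustrate the combination the hardware cannot yet perform on its own):

    /* Toy model of shadow paging: the guest maintains guest-virtual ->
     * guest-physical mappings, the host maintains guest-physical ->
     * host-physical mappings, and the hypervisor must keep a combined
     * guest-virtual -> host-physical table in sync for the hardware MMU. */
    #include <stddef.h>
    #include <stdint.h>

    #define NPAGES 16

    static uint64_t guest_pt[NPAGES];   /* maintained by the guest OS     */
    static uint64_t gpa_to_hpa[NPAGES]; /* maintained by the host/VMM     */
    static uint64_t shadow_pt[NPAGES];  /* what the real MMU actually uses */

    /* Recompute one shadow entry. The hard part in a real hypervisor is
     * not this lookup but detecting *when* the guest modifies guest_pt
     * (e.g. by write-protecting its page tables); nested page tables let
     * the hardware walk both tables itself and make this code vanish. */
    static void shadow_sync_one(size_t gva_page)
    {
            uint64_t gpa_page = guest_pt[gva_page] % NPAGES; /* toy bounds */
            shadow_pt[gva_page] = gpa_to_hpa[gpa_page];
    }

    int main(void)
    {
            guest_pt[3] = 7;      /* guest maps virtual page 3 -> guest-phys 7 */
            gpa_to_hpa[7] = 42;   /* host backs guest-phys 7 with host page 42 */
            shadow_sync_one(3);   /* shadow now maps virtual page 3 -> host 42 */
            return (int)shadow_pt[3];
    }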
So, in a sense, Xen is a virtualization technology that was designed to be state of the art in 2005, while KVM was designed for 2008 and beyond. Xen is taking advantage of these new hardware features just as KVM is doing. But it also carries around a lot of design and implementation baggage due to yesterday’s requirements. KVM is beginning to implement paravirtualized drivers to smooth out its remaining performance wrinkles, but it will take some time to catch up to Xen in this area.
There’s no question that KVM is becoming increasingly relevant. But it’s too early to tell if KVM will grow at Xen’s expense. A lot of big players jumped on the Xen bandwagon quite enthusiastically. Novell, Microsoft and (most of) IBM are firmly in the Xen camp. Red Hat is less committal and more likely to support KVM. Kernel developers love the KVM approach, but commercial vendors are hopelessly attracted to solutions that have that “bolted-on” feel. The degree to which KVM can leverage existing management solutions such as Red Hat’s virt-manager will be very important.
Now that KVM is being, or has been, ported to many architectures, will we witness the slow but steady decline of Xen? XenSource itself seems resigned to serving primarily as a platform for virtualising various Windows releases, effectively competing against VMware ESX Server now.
I see that Hollis Blanchard, who has been involved in porting Xen to POWER, is doing a port of KVM to embedded POWER. I am wondering why this should be limited to embedded now that the ISA has been unified with POWER6, since both AIX and Linux could serve as a virtualisation platform with the proper hardware support. KVM has also been ported to S390 and IA64 according to the Qumranet web page.
I am asking in particular because I have ported Slackware to MIPS and would like to use a virtualisation solution such as Xen or VMware. Since MIPS would only support something like KVM with the MT ASE, the best thing to do would probably be to port Xen to MIPS. I don't have much faith in the KQEMU kernel module, since it doesn't even work on x86.
I have asked on the xen-devel list, but their developers are too busy developing for x86 to even react to my inquiry about any current effort to port it to MIPS. I believe MIPS is a serious enterprise platform and is having a comeback, with 16-core processors from multiple vendors available now or in various stages of development.
KVM currently only works with CPUs that include hardware virtualisation support (i.e., true virtualisation).
Xen works with regular CPUs (paravirtualisation) as well as hardware virtualisation.
Xen also currently supports LVM partitions for the virtual machines, as opposed to regular files in the host's filesystem (see the config sketch below).
Xen also supports fast movement of VMs between hosts.
There are also several management consoles and GUIs for Xen, none of which work with KVM.
Eventually, KVM may catch up to Xen. For now, they are very different beasts, with very different feature sets, and neither can really replace the other.
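To make the LVM and live-migration points concrete: a paravirtualised guest is described by a config file under /etc/xen (xm config files are actually Python syntax; the volume, bridge, and guest names below are hypothetical):

    # /etc/xen/guest1 (hypothetical example)
    name   = "guest1"
    memory = 512
    kernel = "/boot/vmlinuz-2.6-xen"   # paravirtualised guest kernel

    # An LVM logical volume exported to the guest as its xvda disk,
    # instead of a loopback file in the host's filesystem:
    disk   = ['phy:/dev/vg0/guest1-root,xvda,w']

    vif    = ['bridge=xenbr0']         # bridged networking

Moving the running guest to another host is then a one-liner, e.g. "xm migrate --live guest1 otherhost", assuming the relocation server is enabled on the target machine.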
The only advantage of this integration is that mainstream Linux kernels can now run as Xen guests (domU) without having to integrate patches from Xen. Xen host (dom0) support is not in the Linux code base yet.
Also of note is the lguest hypervisor, which runs independently of KVM (and doesn't require hardware acceleration). This actually allows Linux to pretty much replace Xen entirely, if desired.
http://kerneltrap.org/node/13916
The thing that worries me is that when I've used the special Xen kernels, they've broken a lot of things, especially VMware, NVIDIA and wireless (I guess the proprietary drivers).
Is this going to alleviate those issues now that it's built into the regular kernel, or is it going to mean that you can't avoid those issues by not using a Xen kernel (as I currently do on Fedora 7)?
In general, you can only use one proprietary driver or one out-of-tree patchset before you have to worry about breakage. Proprietary drivers and external patchsets are only built and tested against the mainline kernel. They assume that they are the only unusual code in your kernel.
Xen going into the mainline is a good thing, but there are some caveats. First, this is only DomU support, which allows the Linux kernel to run as a guest on Xen. The Dom0 support is still provided by an external patchset. So you’ll still have roughly the same level of difficulties with proprietary drivers in your Linux Dom0.
Furthermore, since proprietary drivers fall outside the jurisdiction of the kernel maintainers, there is no particular assurance that they will work correctly in uncommon configurations. So your proprietary drivers may not work in your Xen guest, just because the vendor didn’t bother to support that configuration. There’s not much that anyone in the kernel community can do about that.
The Xen Dom0 is a fairly invasive patchset. I’m not sure if anyone expects it to be merged. That’s sort of the reason why a lot of kernel developers are excited about KVM. It’s a Linux-based hypervisor that fits nicely into the existing kernel. I’d go that route if your hardware supports it, especially if you need proprietary drivers.
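If you're not sure whether your hardware supports it, grep /proc/cpuinfo for the vmx or svm flags, or ask the CPU directly; a small sketch using GCC's <cpuid.h> (bit positions per the Intel and AMD architecture manuals):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID leaf 1, ECX bit 5 ("vmx" in /proc/cpuinfo): Intel VT-x */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
                    printf("VMX (Intel VT-x) supported; KVM can use this CPU\n");

            /* Extended leaf 0x80000001, ECX bit 2 ("svm"): AMD-V */
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
                    printf("SVM (AMD-V) supported; KVM can use this CPU\n");

            return 0;
    }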
I think Xen is already very successful: its paravirtualized drivers can be found in many operating systems (there are even some for Microsoft Windows). Given that enormous amount of work, I think the Xen pv interface has become, or will become, a de facto standard, and it is very unlikely that other projects will reinvent the wheel (i.e., create their own interface and write drivers for all operating systems).
Furthermore, I think the past has shown that paravirtualization is necessary in order to get decent performance for disk I/O and networking. This may change when virtualizable hardware is widely available, but that will take at least a couple of years.
Last but not least, a virtualization technology is more than the plain hypervisor: management tools are needed too, and that's where Xen has a big advantage compared to other open source virtualization technologies.
But IMHO there is no big competition between KVM and Xen; they will learn from each other. And they already share quite a bit of code: both use a modified QEMU for their device emulation 🙂
The trend is toward extracting commonality rather than taking sides. For example, both Xen and VMware implement the paravirt_ops interface to handle hypercalls. As for paravirtualized drivers, this is also a topic of considerable debate. Rusty Russell (the developer of lguest) has proposed a virtual I/O abstraction layer called virtio that will enable paravirtualized drivers to work seamlessly with multiple hypervisors. It presents a simple, generic API that works for various kinds of drivers, including block and network.
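The proposed interface has roughly this shape (a simplified sketch based on the virtio discussion; the exact in-kernel API differs in detail):

    /* Simplified sketch of a virtio-style transport interface. A driver
     * (block, net, ...) queues buffers into a virtqueue; the hypervisor
     * side (Xen, KVM, lguest) supplies the implementation of these ops. */
    struct scatterlist;                /* kernel scatter-gather descriptor */
    struct virtqueue;                  /* one queue of guest<->host buffers */

    struct virtqueue_ops {
            /* expose out_num readable + in_num writable buffers, tagged
             * with an opaque token so the driver can match completions */
            int   (*add_buf)(struct virtqueue *vq, struct scatterlist sg[],
                             unsigned int out_num, unsigned int in_num,
                             void *data);
            /* tell the host that new buffers are pending */
            void  (*kick)(struct virtqueue *vq);
            /* reclaim a completed buffer; *len is bytes written by host */
            void *(*get_buf)(struct virtqueue *vq, unsigned int *len);
    };

The point is that a block or network driver written against this shape doesn't care which hypervisor is underneath; only the transport behind the ops changes.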
Where is the GUI tool to get Xen working? QEMU seems to have one, but it failed to create a vdisk (using Fedora 7).
I think there are also plans to add lguest in the next release, or at least hopes to be able to do so.
So, is there any additional data about KVM-Xen?
There’s no such beast as “KVM-Xen”.
They are two separate things: KVM is the Linux in-kernel virtual machine manager that uses the hardware virtualisation features of the latest Intel/AMD CPUs; Xen is a virtual machine monitor that can do either full virtualisation on certain Intel/AMD CPUs or paravirtualisation on other x86 CPUs.