Citrix Systems is acquiring XenSource, whose founders helped develop the open-source Xen hypervisor, for USD 500 million in a move that caps a significant week in the development of virtualization technology. The XenSource acquisition, which both companies announced Aug. 15, comes just a day after VMware, which has long been the dominant player in the x86 virtualization market, announced an initial public offering of 33 million shares of stock. By the end of its first day of trading, the company's stock closed at almost USD 51 a share.
It's funny how the description leads into talking about VMware offering its IPO and, by the way it's worded, actually makes that feel like a more relevant topic than Citrix buying XenSource.
Kinda like how a Debian thread turns into an Ubuntu thread… or an openSUSE story has to mention Microsoft and Novell…
I find this a really intriguing acquisition myself. One wonders whether it is a simple monetary investment to get into the virt game, or whether Citrix has plans for its own existing products. Virtualized Citrix application server, anyone?
Nothing makes me angrier than virtualizing applications down to the bare metal. This is the pinnacle of insanity, a monument to inefficiency, and a repudiation of code reuse. More senseless vertical integration brought to you by ruthless corporatists who refuse to find common ground.
They don’t care about interoperability! Their definition of interoperability is that their application runs on your hardware. Their giant monoliths may reach all the way down to your processor, but they will not reach across to the other applications on your system. They couldn’t even interact with each other if they wanted to.
Fine. It seems that commercial vendors are going to have to learn their lessons the hard way. IT is about combining applications to solve complicated problems. Application-centric virtualization will create more problems than it solves. It will create more barriers than it breaks down.
And as always, customers will pay dearly for the software industry’s inability to cooperate.
After all, having agreed on the PC architecture as a common underpinning is a monumental achievement in its own right.
Maybe software guys are incapable of communicating and have to lend themselves to the HW people.
Well, time to fork Xen.
Once Citrix, that Microsoft’s bitch, takes over Xen, they’ll try to cripple or kill it for sure.
“Well, time to fork Xen.
Once Citrix, that Microsoft’s bitch, takes over Xen, they’ll try to cripple or kill it for sure.”
You can take this to the bank: Microsoft is extremely interested in seeing Xen succeed, because it validates their own effort to incorporate a virtual machine monitor into the OS (one that, I might add, is architecturally very similar to Xen). This is critical to Microsoft's efforts to combat VMware and to blunt any argument that Microsoft is unfairly bundling virtualization software with their product. With both Windows AND major Linux distributions adopting Xen, Microsoft gets all of the above AND a competitor that has cooperated in making an interoperable solution (Microsoft and Xen are both working on interoperable hypercall functionality, the VHD format, etc.).
So no, Xen, in the near and distant future, is not going to end up dead or crippled (probably quite the opposite). It is, however, a major component in Microsoft's gambit to marginalize VMware and bundle virtualization with their product. It might be an unholy alliance, but it's an alliance nonetheless.
Well, to be brutally honest, Xen was crippled by design from the very start. XenSource pushed against the Xen community to replace a relatively challenging situation with an almost insurmountable challenge. And now Citrix gets to inherit that challenge.
Of course, I’m referring to the initial decision to create the notion of a privileged guest, the Dom0, which deals directly with the hardware. Although other ports have been attempted unsuccessfully, the only viable Xen Dom0 is Linux. A heavily-modified Linux that keeps diverging from Linux as time goes on.
XenSource doesn’t like tracking Linux kernel development with their out-of-tree Dom0 patchset, and there’s zero future for the Xen Dom0 in the mainline kernel (although the Xen DomU has been merged). They don’t like the whole idea of the Dom0, especially since it can’t be Windows.
So Citrix is acquiring a company whose technical strategy is to dump Linux as its hardware abstraction layer and develop comprehensive hardware support on its own. VMware seems quite loony these days, but at least their strategy is technically feasible. They already have a “fat hypervisor” with reasonably broad hardware support.
Linux also has a fat hypervisor with very broad hardware support. At least three of them, in fact. One of them is a hardware-assisted full virtualization solution with all the hype of Xen circa early 2005, except it’s in the mainline.
You see, the Linux community has already forked Xen. It’s called KVM, and it’s the right technology at the right time.
butters, I have to agree with you 100% on this one. Take a look at this awesome post about KVM vs. Xen from the GNU libc maintainer, Ulrich Drepper:
http://udrepper.livejournal.com/17577.html
This is bookmarking material.
KVM seems to be learning from Xen's mistakes. A good example: Xen's live-migration feature connects two servers on a LAN via an unencrypted TCP socket. Not sure about other people, but I find the idea of transferring raw memory over the wire unsettling at best, seeing as how the SSH agent could have my key in it. KVM supports tunnelling live migration over SSH:
http://kvm.qumranet.com/kvmwiki/Migration
The guys at Qumranet know what they are doing.
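Just to make the contrast concrete, here's a rough sketch of what the SSH-tunnelled approach could look like from a script. Treat it purely as an illustration: the host name, port and monitor socket path are placeholders I made up, and it assumes the source qemu/kvm was started with a unix monitor socket and the destination with -incoming.

# Rough sketch only: tunnelling a KVM live migration over SSH instead of a raw
# TCP socket. The host name, port and socket path are placeholders; it assumes
# the source qemu was started with "-monitor unix:/tmp/kvm-mon.sock,server,nowait"
# and the destination with "-incoming tcp:localhost:4444".
import socket
import subprocess

DEST_HOST = "dest.example.com"       # placeholder destination host
PORT = 4444                          # placeholder migration port
MONITOR_SOCK = "/tmp/kvm-mon.sock"   # placeholder monitor socket on the source

# 1. Forward a local port to the destination's incoming-migration port over SSH,
#    so the guest's raw memory never crosses the LAN in the clear.
tunnel = subprocess.Popen(["ssh", "-N", "-L", "%d:localhost:%d" % (PORT, PORT), DEST_HOST])

# 2. Tell the source qemu, through its monitor, to migrate via the tunnel.
mon = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
mon.connect(MONITOR_SOCK)
mon.recv(4096)                                             # swallow the monitor banner
mon.sendall(("migrate tcp:localhost:%d\n" % PORT).encode())
print(mon.recv(4096).decode())
mon.close()

# Once the migration has finished, tear the tunnel down.
# tunnel.terminate()

The point is simply that the migration stream is an ordinary TCP connection, so you can wrap it in whatever transport you actually trust.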
Do you really think KVM is the right solution? I don't think so. KVM depends heavily on Linux, so it carries a large footprint in the form of the whole Linux kernel. Each individual VM runs inside a Linux process, so I don't know how strong the isolation or how good the scheduling you get really is.
KVM lacks good SMP support as well as good live migration. KVM is more suitable for lightweight virtualization needs, like on the desktop.
But if you want rock-solid, enterprise-level virtualization, then you need a hypervisor. This is the reason Microsoft is building a hypervisor for the enterprise and ignoring its own product, Virtual Server, which works like KVM.
In the future there will be DMA remapping support and device assignment; you will want strong isolation between multiple VMs so you can safely assign hardware resources to them.
Btw, Xen not being in Linux is down to some kernel maintainers' bullshit. Xen is an excellent project (far ahead of its competition). Red Hat knows that, and that is why they included it in their OS. But kernel developer politics is keeping it from going mainstream.
KVM *is* the right *technical* solution for open source. Xen is a bloated hack whose main interest was back in the days when there was no hardware support for virtualisation (VT/SVM). Now it's just overcomplicated.
Actually, Xen is not just about the hypervisor, as the hypervisor is useless without the Dom0… until they reinvent the wheel and reimplement all the drivers and filesystem support inside the hypervisor itself. If Dom0 crashes, your machine is dead. Talking about stability, security and strong isolation? Try reading about DomU-to-hypervisor and Dom0 communications…
KVM is about to catch up with Xen after only one year of development (it *does* support SMP) and with far fewer resources. Because its design is clean.
And no, Microsoft Virtual Server does not work like KVM, not at all (hint: VT/SVM)…
Virtual Server supports VT/SVM. It is exactly like KVM but better because it also supports non-VT/SVM hardware.
Please quit the FUD. Trying to recreate memory managers and schedulers when a really good and well-tested one already exists is simply a bad idea. Period. This is what Xen does.
Of course, I’m referring to the initial decision to create the notion of a privileged guest, the Dom0, which deals directly with the hardware. Although other ports have been attempted unsuccessfully, the only viable Xen Dom0 is Linux. A heavily-modified Linux that keeps diverging from Linux as time goes on.
—
So would you prefer the VMware model, where the drivers are inside the hypervisor? Did you consider the disadvantages of that approach?
Yes, I prefer the drivers to be inside the hypervisor. Wherever the bare metal drivers are located in a hardware virtualization solution, they are a single point of failure. So we might as well refrain from reinventing the wheel by keeping them at the bottom of the stack.
Nobody likes to write drivers. There’s a ridiculous amount of hardware out there to support. The code is tedious and error-prone. Drivers are the usual suspect when anything goes wrong. This is an area where we shouldn’t be replicating work. It’s too big and too important. Drivers belong on the bottom of the stack where we can find common ground and cooperate.
I can't stress enough how much we need to cooperate on drivers. We need high-quality free software drivers for the vast majority of devices in the field. They need to be developed, tested, and distributed as a coherent and interoperable unit. The drivers must remain maintainable when developers and manufacturers disappear.
Linux is the industry’s largest such collection of drivers. Any individual or business can contribute, and its drivers are guaranteed to remain free under the GPL. Linux is the closest we’ve come to addressing the daunting challenge of comprehensive hardware support without depending on the limited attention span of the hardware vendors.
Hardware virtualization will inevitably lead to OS proliferation. The next decade will see everybody from Sony to SAP introducing their own native runtime environments for packaging their warez. So do we really want these DRM and middleware vendors messing with the complexities of bare metal? No, we want to present a generic, idealized virtual driver model that keeps things simple and isolates the system from bugs in virtual drivers.
Besides, it isn’t possible to guarantee full isolation if multiple guests have direct access to the hardware. A driver bug in one guest can bring down the others, which is a big virtualization faux pas. Like it or not, the device drivers are a single point of failure in any hardware virtualization solution. It doesn’t matter whether there’s one privileged guest, N privileged guests, or zero privileged guests.
With this in mind, it’s clear that the best solution is to have zero privileged guests. Let the hypervisor abstract all of the functionality that is unavoidably exposed as a single point of failure. Virtualized hardware makes OS development vastly easier, leading to innovative new designs and implementations.
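To show what I mean by a generic, idealized virtual driver model, here's a toy sketch. None of it is Xen, KVM or VMware code and every name in it is made up; it only shows the shape of the idea: the guest programs against a tiny, stable block interface, and only the hypervisor side knows or cares what the real driver underneath looks like.

SECTOR_SIZE = 512

class HypervisorBackend:
    """Stands in for the hypervisor's real driver stack. A buggy or hostile
    guest can only issue read/write requests; it never touches the hardware."""
    def __init__(self, sectors=1024):
        self._disk = bytearray(sectors * SECTOR_SIZE)   # pretend disk

    def read(self, sector, count):
        off = sector * SECTOR_SIZE
        return bytes(self._disk[off:off + count * SECTOR_SIZE])

    def write(self, sector, data):
        off = sector * SECTOR_SIZE
        self._disk[off:off + len(data)] = data

class VirtualBlockDevice:
    """The entire device interface a guest ever sees: fixed-size sectors,
    read and write. No vendor quirks, no bus probing, no firmware blobs."""
    def __init__(self, backend):
        self._backend = backend

    def read(self, sector, count=1):
        return self._backend.read(sector, count)

    def write(self, sector, data):
        self._backend.write(sector, data)

# A guest OS developer only ever targets VirtualBlockDevice, no matter which
# vendor's disk controller the hypervisor happens to be driving underneath.
guest_disk = VirtualBlockDevice(HypervisorBackend())
guest_disk.write(0, b"boot sector")
print(guest_disk.read(0)[:11])

Keep that interface small and stable and every guest, from a hobby OS to the fattest "apperating system", targets the same simple device, while driver bugs stay contained beneath it.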
Hardware virtualization is going to be badly abused by misguided commercial software vendors. We have to draw the line somewhere. We can't just have a task switcher for grotesque monolithic "apperating systems". We have to have a sensible abstraction that insulates our hardware from these beasts of questionable quality.
In order to maintain the sanctity and sanity of our computers against the onslaught of downward integration, we have to keep the drivers in the hypervisor. Please, please think of the hardware.
The problem is that you limit innovation in the virtualized OSes by shoving them all into one single virtual hardware model, which has to encompass all possible hardware. How is this different from any other generic driver API? A lot of OS innovation happens in the way, and in the abstractions through which, the OS interacts with the hardware. Moreover, nowadays hardware and software architectures are really designed in concert, especially in the case of MS.
All of this is castrated in the "driver in hypervisor" model. I cannot imagine MS giving up their power there and submitting to some specification settled beforehand.
Besides, what about the various weird kinds of hardware that haven't been anticipated in the virtual HW model?
The only way I can imagine it working is if they (MS) treat the Windows-on-Windows scenario as a special case.
There are a lot of possibilities between those two options, namely in where to draw the line between hypervisor and guest. Should all drivers be virtualized, or is it enough to virtualize only the busses to achieve effective hardware resource sharing?
The only visible advantage for Linux users that I see is that there would be a corporately developed hypervisor with a stable driver ABI, making Linux's fluctuating internals less of a maintenance problem for HW companies.
OK, let me list some advantages you get by using a hypervisor + Dom0 model:
1. Less code in the hypervisor, which means less possibility of errors in the code. Also a smaller memory footprint.
2. Using the OS scheduler for scheduling VMs is a bad idea. How do you handle interrupt priorities? Does the guest OS get higher priority for its timer interrupts than Dom0's other processes, and so on? You really need a separate scheduler targeting virtual machines. KVM is doing this by adding special priorities; I believe it is overloading the OS scheduler.
3. Having drivers in the hypervisor hampers future device assignment. Think of a model where each device runs in a lightweight domain dedicated to that device, and all other guest domains use this service domain to access the hardware. Now suppose it is a network card and the network driver crashes: you can easily restart the service domain without the other guests even noticing.
4. Having virtual hardware is really useful for supporting live migration, where you don't really care where your OS runs. Being able to move an OS dynamically onto whatever hardware is available is a big advantage.
What Xen needs to do is get rid of all the code they wrote for non-VT/SVM hardware to make it simpler. Other than that, I think Xen is one of the best open source projects.
KVM is a kid's toy.
I just want to take a step back and, at a high level, lay out the three common approaches to building a virtual machine monitor in use today, along with their commonly acknowledged strengths and weaknesses (and yes, I am dropping the Type 1 vs. Type 2 designators because I don't think they help properly illustrate the discussion):
The first is the kernel-based VMM. This model uses a kernel-mode driver or extension to provide the VM environment within a host OS, wherein the host is responsible for memory management (natively or via extension), virtual machine scheduling via its own scheduler, and interrupt handling. In this model the host OS itself becomes the VMM. This is the VMware Server, VMware Workstation, Microsoft VPC, Virtual Server and KVM model. The downside to this model is that it depends on the general-purpose OS's notion of memory management, processor scheduling and interrupt handling, which is not necessarily optimal for VM resource management (see the small sketch at the end of this post).
The second option is a monolithic virtual machine monitor (or fat hypervisor), such as VMware ESX. The advantage of this scenario is that the VMM, and its driver model, is specifically engineered to handle memory allocation, processor scheduling, interrupt handling and resource management in a virtual-machine-centric fashion. The primary negative of this model is the overhead of what amounts to a complete secondary driver set in the VMM, adding complexity to the VMM codebase and creating yet another driver model that third parties and OEMs must support.
The third option, employed by Xen and Microsoft's hypervisor, is a thin-layer VMM that only provides basic VM instantiation and tear-down, memory management, processor scheduling and interrupt handling (and redirection) for virtual machines. Everything else, including hardware device access, is implemented via the root (parent, Dom0) VM's OS driver model. The advantage of this model versus the monolithic VMM is that, in theory, no proprietary virtualization-specific driver model exists. The downside is that the VMM is dependent on the root VM's driver implementation, which might not be optimal or may require modification of the root OS's kernel and driver model to perform well.
We will continue to see all three models for the foreseeable future. All three have their weaknesses and strengths. While architectural fundamentals are important, what ultimately matters is the implementation. Architecture only matters insofar as it facilitates a good implementation. To state that Xen's architecture is somehow fundamentally broken, and to do so without explaining why, is typical Internet FUD.
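To make the first model concrete, here's a throwaway sketch (the process name pattern and the nice/CPU values are arbitrary placeholders): under the kernel-based approach a KVM/QEMU guest is just another host process, so the host's ordinary scheduling knobs act on the whole VM.

import os
import subprocess

GUEST_PROCESS = "qemu"   # placeholder pattern matching the guest's userspace process

# Find the guest's PID with plain old pgrep.
pid = int(subprocess.check_output(["pgrep", "-f", GUEST_PROCESS]).decode().split()[0])

# The host scheduler decides when the guest runs, so renicing the process
# changes how the whole VM is scheduled relative to everything else on the host.
os.setpriority(os.PRIO_PROCESS, pid, 5)

# Likewise, pinning the process to specific CPUs constrains where the guest's
# vCPUs may execute.
os.sched_setaffinity(pid, {0, 1})

print("guest pid %d reniced and pinned by the host's own scheduler" % pid)

Which is both the strength and the weakness of that model: you inherit a mature, general-purpose scheduler and memory manager, but you also inherit the fact that they were never designed with virtual machines in mind.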
For years now, they have REFUSED to come out with a server product for Linux… Solaris, yes; HP-UX, yes; Linux, NO…
Why?
Because this company is Microsoft's paid, spiked-leather-leash-wearing BITCH.
This doesn't even make sense. Citrix's whole goal is running things on servers and then serving the applications to people over the network or the Internet through the Citrix client or a small terminal-services box.
Xen is virtualization, which is hardly related to Citrix. Citrix clients basically just show what's happening on the server; there's no virtualization involved. I don't see any reason for them to have bought Xen, besides basically hiring all their talent at once. Why did they buy Xen? And why for 500 million dollars?
They’re not buying Xen. They’re buying XenSource.
And it does make sense, business-wise, the latest buzzword being the virtual desktop, a market where Citrix is way behind its competitors (VMware of course, but also Microsoft with RDP and their hypervisor).
How about running applications for all imaginable OSes (read: different Windows generations) from a single app server?
I don’t know, it kind of makes sense to me.
With Citrix+Xen you'd be able to serve up several different operating environments from the same hardware. When you connect to your Citrix server you could choose between Win2k, Win XP, Vista or Linux (handy for testing or supporting legacy apps). And even if you don't need several different OSes, engineering could get a completely different Windows install than HR gets. That could also be handy.
In addition, Citrix gets all the neat live-migration stuff. And finally, they could simply think that Xen is going to be hot shit in its own right in a few years, and they're looking for ways to diversify their revenue streams. And that's after thinking about it for 20 seconds.
So on the whole I can see plenty of reasons why Citrix would want Xen, and I’m sure Citrix know of a fair few more.
I wonder how this will affect Microsoft. Xen was the channel through which Microsoft was submitting code to the Linux kernel.
“Xen was the channel through which Microsoft was submitting code to the Linux kernel.”
Say what?
What surprises me is the fact that Sun Microsystems didn't purchase it, given that Xen will be the virtualisation technology which OpenSolaris/Solaris will adopt.
Side note: the 3.x series allows unmodified operating systems to run, and Windows XP works nicely on it, judging by some of the blog and wiki entries I've seen on the net.