Linux programmers are moving toward a change that would put virtualization software from VMware on a more even footing with open-source rival Xen. Xen was expected to be built tightly into the Linux kernel at the heart of the open-source operating system. But Andrew Morton is advocating an interface in the Linux kernel that would let it work with any virtualization foundation.
This is good news. Just because Xen is the open-source darling of virtualization (excuse me, paravirtualization), why shouldn’t other virtualization platforms have an equal go at it? So long as the interface Morton and others are proposing doesn’t mess things up for future hypervisor development, I think a more generic interface is a great idea.
The VMI interface has some other nice advantages beyond supporting different hypervisors. One is that a VMI kernel could run without a hypervisor at all, with a stub layer making native calls. Another is that on newer processors, the hypervisor can take advantage of hardware virtualization support. Finally, since VMI is hypervisor- and OS-agnostic, it might be possible to make VMI versions of Windows, OS X, and other closed-source OSes.
My impression is that the basis of VMI is that the kernel is patched to replace privileged instructions with calls into a block of code prepared by the hypervisor. The hypervisor can put in its own particular calls, or substitute the native instructions when hardware virtualization support is available or when running in hypervisor-less mode.
The idea of a generic interface speaks to good design principles. On a slightly different scale, this is precisely the rationale behind the Bridge design pattern used in software development.
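To make that concrete, here is a minimal userspace sketch of the call-table idea; every name in it is invented for illustration, and the real VMI specification is of course far richer. The table is the Bridge’s abstraction, and the native and hypervisor backends are its interchangeable implementations:

#include <stdio.h>

/* Hypothetical dispatch table standing in for a VMI/paravirt-style
 * interface: the kernel calls through function pointers instead of
 * executing privileged instructions directly. */
struct pv_ops {
    void (*irq_disable)(void);
    void (*irq_enable)(void);
};

/* Native backend: on real hardware these would be the privileged
 * cli/sti instructions; here they just log. */
static void native_irq_disable(void) { puts("native: cli"); }
static void native_irq_enable(void)  { puts("native: sti"); }

/* Hypervisor backend: the same operations become hypercalls. */
static void hv_irq_disable(void) { puts("hypercall: mask vcpu irqs"); }
static void hv_irq_enable(void)  { puts("hypercall: unmask vcpu irqs"); }

static struct pv_ops pv_native = { native_irq_disable, native_irq_enable };
static struct pv_ops pv_hyper  = { hv_irq_disable, hv_irq_enable };

int main(void)
{
    struct pv_ops *ops = &pv_native;   /* booted on bare metal... */
    ops->irq_disable();
    ops->irq_enable();

    ops = &pv_hyper;                   /* ...or under a hypervisor: */
    ops->irq_disable();                /* same call sites, different */
    ops->irq_enable();                 /* backend, no kernel rebuild */
    return 0;
}

The kernel’s call sites never change; only the table behind them does, which is what would let one kernel binary run on bare metal or under any cooperating hypervisor.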
If you have ever looked at the VMware design, it is quite amazing how many hoops it has to jump through, because it has to distinguish where signals/interrupts are coming from: a virtual privileged state or a virtual user state. All of this is due to the lack of a more intelligent interface to the guest machine. I have always thought Linux needs some kind of intelligent signaling.
Xen is hardwired and relies on access to source code, which I believe is quite a limiting factor. Xen actually modifies the kernel of the guest. VMware’s approach is more flexible. It’s already quite efficient, but with a better interface its design can be streamlined and made more efficient still.
Xen only needs to modify the kernel of the guest if your hardware doesn’t support virtualization. Pacifica (AMD) and VT (Intel) hardware can run Windows unmodified as a guest.
Xen takes a serious performance hit when running unmodified guests on VT hardware. Paravirtualization is the fastest way to run a guest, followed by binary translation, followed by VT. (VT is the easiest code to write, though!)
No guest OS will run fast within a virtualization environment without paravirtualization support. Microsoft has proprietary paravirtualization (Virtual Server can run Windows with some degree of paravirtualization), but I don’t expect them to ever release enough details for anyone else to do the same. Ah, anticompetitive behavior…
Can you please point me to more detailed information regarding Xen’s performance with unmodified guests on VT hardware vs. patched/paravirtualized guests?
I am studying VM solutions that would be useful on the desktop, as opposed to servers. Looking at solutions from Parallels, MS, VMware, and Xen, I see the major unsolved problem being the lack of GPU virtualization, which prevents an acceptable graphics experience while running a VM guest OS. What is needed is the ability to support 3D graphics at a reasonable performance level in a VM guest OS, be it WinXP (or Vista) or any of the Linux/UNIX variants out there. The only halfway solution I have found so far is a kludge using two separate machines that would allow an MS Vista VM guest to run Aero Glass. In my mind this is clearly not an acceptable or reasonable solution.

As for the question regarding VM performance using hardware CPU support (Intel VT or AMD Pacifica) vs. paravirtualization, and which solution is better, I have so far found no discussion of this. In all of the discussion I have seen regarding AMD’s new AM2-socket CPUs, all I have heard so far is talk about CPU performance and memory bandwidth. My particular interest in AM2 is that, as far as I know, these CPUs are the first from AMD to provide Pacifica virtualization support in hardware. I think it would be interesting to see a comparison of VM manager performance incorporating Intel’s VT and AMD’s Pacifica in both paravirtualized and hardware-supported modes. Xen might be the only VM platform that could currently support all of these scenarios, although VMware might have something up its sleeve as well.

However, for the desktop, we still come back to the lack of decent 3D graphics in VM guest OSes, and until this problem is solved, I do not think that a VM environment on the desktop will take off. Does anybody out there know anything about where this is going?
What’s next? Will Linus finally cave in and create a stable kernel API so that closed source module/driver developers don’t have to get on the Linux point release treadmill?
“What’s next? Will Linus finally cave in and create a stable kernel API so that closed source module/driver developers don’t have to get on the Linux point release treadmill?”
----
Yes, I think you are right; you have a point there.
Advocating an interface in the Linux kernel that would let it work with any virtualization foundation could be positive for computing and for software and OS development at first sight, but in this way, the final result will be that closed-source companies and developers will get what they need for free, without needing to return anything…
It could create a “dangerous and worrying” precedent.
in this way, the final result will be that closed-source companies and developers will get what they need for free, without needing to return anything…
And that would be negative, because? We have different phrases to express the concepts “volunteered to” and “obligated to,” precisely because they are different concepts.
Much of the tone of OSS advocacy these days reminds me of people I know who are involved in volunteer organizations, and spend most of their time nitpicking others’ contributions, “Oh so and so only donated $5, but I know his wife makes 80k a year at the Department of Such and such, and they have a car worth this, and a house worth that…”
“What’s next? Will Linus finally cave in and create a stable kernel API so that closed source module/driver developers don’t have to get on the Linux point release treadmill?”
Don’t you know that the Linux kernel is OSS? So why wait for Linus to “cave in and create a stable API” when YOU can do it?
IMHO that would be more useful than “sitting and waiting,” don’t you think?
Closed-source drivers are not a priority in OSS; they are just a stopgap until device manufacturers understand that they should provide specifications/interfaces for their hardware (unfortunately, that can take ages or never happen at all!). To me, this is a manufacturer issue and has little to do with OSS or Linux in particular.
Back on topic: standard interfaces are useful (if not necessary) to help developers in their task, but also to give drivers/software an extended lifetime. On that point I agree with you.
That may be their issue, but it affects Linux users, and given Linux’s market share on the desktop, it won’t disturb their sleep.
Actually, it’s pretty standard practice in the kernel to prefer a general interface to a specific one. Look at the pluggable schedulers, pluggable security managers, and pluggable file systems as examples.
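A concrete example of that preference: a filesystem plugs into the VFS through a single registration call and a table of operations. The following is only a pared-down 2.6-era sketch; “myfs” is a made-up name, the superblock setup is stubbed out, and the exact get_sb signature varies between kernel versions:

#include <linux/module.h>
#include <linux/fs.h>

/* Stub: a real filesystem would fill in the superblock here. */
static int myfs_get_sb(struct file_system_type *fs_type, int flags,
                       const char *dev_name, void *data,
                       struct vfsmount *mnt)
{
    return -ENOSYS;
}

static struct file_system_type myfs_type = {
    .owner   = THIS_MODULE,
    .name    = "myfs",
    .get_sb  = myfs_get_sb,
    .kill_sb = kill_anon_super,
};

/* The pluggable part: register with the generic VFS interface. */
static int __init myfs_init(void)  { return register_filesystem(&myfs_type); }
static void __exit myfs_exit(void) { unregister_filesystem(&myfs_type); }

module_init(myfs_init);
module_exit(myfs_exit);
MODULE_LICENSE("GPL");

The VFS neither knows nor cares what sits behind the table; schedulers and security modules plug in the same way.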
Morton recognizes that Xen is the king of the hill *now*, but it might not always be the best choice, or the best choice in all situations. Xen isn’t the only game in town when it comes to (para-)virtualization. Xen might be the best for the x86 architecture, but it might not be for SPARC. It might be good for PCs, but it might be horrible for massively parallel big-iron NUMA machines, or for low-memory embedded systems that might want only one Linux kernel, or some resources shared between multiple instances (e.g. VServer, BSD jails, Solaris Zones).
Having a pluggable (para-)virtualization architecture allows for choice, which is what free software is all about. Your fear about Linux pandering to proprietary vendors isn’t borne out by history. Linus has been extremely open to what proprietary vendors have to say about system architecture. However, he and most core Linux developers have made it clear time and time again that they have no qualms about changing binary interfaces if it helps kernel maintenance. They also have no qualms about rejecting technology (even good technology, like Reiser’s new metadata or IBM’s original Logical Volume Manager) that is not universally applicable or would hurt the Linux system architecture.
“What’s next? Will Linus finally cave in and create a stable kernel API so that closed source module/driver developers don’t have to get on the Linux point release treadmill?”
----
VMware is a very good virtual machine, as Xen is as well. There are other good ones (free/open source or not). But giving them all a free “highway” to take advantage of free/open-source software and of the Linux kernel is probably not the cleverest thing…
I mean, it could help the spread of Linux too, but at what cost in the end?
VMware is a very good virtual machine, as Xen is as well. There are other good ones (free/open source or not). But giving them all a free “highway” to take advantage of free/open-source software and of the Linux kernel is probably not the cleverest thing…
Actually, I think this approach, where a solution is chosen that is open (even for ‘competitors’) and technically superior, even if it seems like a bad idea commercially in the short/mid run, is typical of the kind of FLOSS I’m a fan of.
I think in the long run, this ‘professional’ (in some sense of the word) attitude might turn out to be an advantage. I certainly hope so.
So should the developers of Gnome and KDE make it impossible for closed-source applications to make use of those toolkits? Otherwise, commercial closed-source developers are being given a free “highway” to take advantage of free/open-source software, aren’t they?
I guess if you guys knew anything, you’d be dangerous. Trust Linus.
When are they going to stop adding stuff to the mainline kernel?
Is there a feature lock anywhere in sight?
How bloated does it have to get?
How bloated does it have to get?
Yup, the inevitable consequence of using a monolithic design. Monolithic made sense 16 years ago, but with today’s fast computers, whose processors are collecting dust most of the time, the microkernel design just makes so much more sense. The overhead caused by communication between the various parts of a muK is now negligible on modern hardware.
I’m not really sure Linus has put any thought into the future: the Linux kernel keeps growing and will only get harder to manage. At some point it’s just going to be too much. And what plan B does Linus have?
> Monolithic made sense 16 years ago, but with today’s fast computers,
> whose processors are collecting dust most of the time, the microkernel
> design just makes so much more sense.
Microkernels have other issues, especially the indeterminism created by massive concurrency. With 20 server processes running in parallel, nobody can predict all possible orderings of actions or whether they cause trouble. I’m not saying that this is an unsolvable problem; mechanisms can be added to enforce ordering. However, I don’t think a microkernel is needed to clean up the mess in the Linux code, but rather a clean, modular programming style. It just happens that such a style comes as a present with a microkernel. (Before the fanboys start crying: yes, I did write a module for Linux once (around 2.6.10), and the exposed APIs were *not* properly abstracted.)
As long as you can compile stuff out, I can’t see a problem here.
In fact, this form of modularization is the most cost-effective in terms of performance.
Of course you pay for it in increased maintenance complexity and a lax ABI, but so far the Linux kernel developers are OK with this trade-off.
I’m not really sure Linus has put any thought into the future: the Linux kernel keeps growing and will only get harder to manage. At some point it’s just going to be too much. And what plan B does Linus have?
I agree this might happen at some point. I disagree that this point is already in sight: microkernels like the Hurd still have quite a road to travel, and I’m not so sure we’ve reached the point where a microkernel can be built that runs acceptably on standard hardware, both in terms of features and speed.
The Linux kernel has no real active role to play here.
What GNU/Linux CAN do is make the interfaces between components as clean as possible (not only system calls and such, but also things like X11 and whatnot). Once the time comes, our favourite software will be portable to the microkernel setup.
It will be a bumpy ride, but it’s certainly not impossible: X, not exactly a trivial piece of software, has been running on the Hurd for years.
Bloat can be managed, but good infrastructure is needed for that: many levels of abstraction (which add to code size but do not necessarily make the kernel inefficient or memory-hungry).
The Linux kernel is monolithic, but “scalably” monolithic. Features accumulate in the source code, but the purpose of the abstraction layers is to produce a differently structured (binary) kernel depending on what you select at source configuration time.
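A toy illustration of that, assuming a made-up CONFIG_MYFEATURE option: the same source yields a binary with the feature compiled in, or compiled away to nothing, via the usual static-inline-stub idiom:

#include <stdio.h>

/* CONFIG_MYFEATURE stands in for a Kconfig option; it is not real. */
#ifdef CONFIG_MYFEATURE
static void myfeature_init(void) { puts("feature compiled in"); }
#else
static inline void myfeature_init(void) { /* compiles away entirely */ }
#endif

int main(void)
{
    myfeature_init();   /* the call site is identical either way */
    return 0;
}

Build with -DCONFIG_MYFEATURE and the feature exists; build without it and the call disappears from the binary.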
The real trouble would be in regression testing of every possible combination.
hahahahahahah… Another college kid who just read a book on microkernels. Yeah, what have you done, buddy? Go work on the Hurd… it’s SUCH a better idea. Proof is in the pudding.
“hahahahahahah… Another college kid who just read a book on microkernels. Yeah, what have you done, buddy? Go work on the Hurd… it’s SUCH a better idea. Proof is in the pudding.”
Compare your response to the various posts he’s made over the years defending the idea that microkernels are superior. Thom’s posts on the matter are generally well thought out and sometimes even informative. Your post isn’t something I’d expect from any self-respecting college student. It’s high-schoolish at best.
The cruftiest of things can gain momentum (case in point, GNU, Linux, Windows etc.), while the masses ignore better designs.
I am curious, though: what have you done, buddy?
hahahahahahah… Another college kid who just read a book on microkernels. Yeah, what have you done, buddy? Go work on the Hurd… it’s SUCH a better idea. Proof is in the pudding.
As Lazarus already pointed out, I actually happen to KNOW what I’m talking about concerning muKs.
http://www.osnews.com/story.php?news_id=8911
There I also acknowledge its shortcomings. It’s a fun read; you might learn something.
The Linux codebase has a lot of code, and it keeps getting more, but that doesn’t mean it is bloated.
Actually, there is stuff being removed from the kernel all the time, mostly to clean things up and/or throw away useless cruft. For the most part, Linux is a very well-designed kernel, and its subsystems are actually quite lean and manageable.
You have to keep in mind that drivers account for the vast majority of the code in Linux. And adding drivers doesn’t make Linux unmanageable, nor does it bloat the kernel, because:
1) You can choose not to compile the drivers that you aren’t going to use.
2) Most drivers are compiled as modules anyway, so at worst they waste disk space (which is cheap nowadays); see the minimal module sketch after the next paragraph.
That’s why it’s easier to have a driver included in mainline, and stuff like Xen (or Reiser4) have to jump through a large number of hoops until they introduce the least amount of code and changes to the kernel internals. This has everything to do with avoiding bloat inside the kernel, and having functionality as decoupled from the kernel core as possible.
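For anyone who hasn’t built one, here is the textbook minimal module mentioned in point 2); until someone loads it, this code is nothing but a file on disk (“hello” is, of course, a made-up example):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

/* Runs when the module is loaded (e.g. via insmod/modprobe). */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello: loaded\n");
    return 0;
}

/* Runs when the module is unloaded (e.g. via rmmod). */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");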
But if you still consider extra drivers as bloat… The alternatives would be:
1) Separate drivers from the kernel core. This would yield a smaller source tarball, but would make obtaining the necessary pieces a tad more difficult. It would also mean that changes to the kernel core would break some drivers, possibly permanently. As it stands now, when someone changes something inside the kernel, he is also responsible for making the necessary changes to all the drivers that got broken by those changes.
2) Turn the kernel into a microkernel. Microkernels are just a design choice, they don’t reduce bloat. Turning Linux into a microkernel would be the same as doing 1). The same amount of code would still exist, but separated into smaller pieces.
I say that 1) and 2) would actually bring along manageability nightmares…
Microkernel fanboys, please explain why it’s so much better and the way of the future.
The way I see it, modular kernel running in kernel mode will always be faster, and still work just fine. Right now you can compare benchmarks run on OS X to other systems.
Microkernels are easier to maintain and have “natural” capabilities for distributed systems and SMP processing; theoretically they can be used to create a virtualization layer, permitting concurrent OSes to run in parallel… They can also suffer from such a design: for instance, the Mach kernel is well known to suffer from intensive message passing as the “default” IPC system. To me this debate looks like procedural programming vs. object-oriented programming. Micro/nanokernels aren’t perfect, just as monolithic/modular ones aren’t, but both can bring good features to the table depending on what they will be used for.
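To put a picture on the message-passing cost being discussed, here is a toy userspace sketch of what a microkernel turns a single driver request into: a message copied into another address space and a reply copied back, with a context switch each way. The pipes and the forked “server” merely stand in for real muK IPC; they are not a faithful model of it:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* One fixed-size message, copied whole between address spaces. */
struct msg { int op; char payload[60]; };

int main(void)
{
    int req[2], rep[2];
    if (pipe(req) || pipe(rep)) return 1;

    if (fork() == 0) {                  /* the "driver server" */
        struct msg m;
        read(req[0], &m, sizeof m);     /* block waiting for a request */
        snprintf(m.payload, sizeof m.payload, "done: op %d", m.op);
        write(rep[1], &m, sizeof m);    /* send the reply back */
        _exit(0);
    }

    /* The "client": one operation costs two message copies and
     * two context switches instead of a single function call. */
    struct msg m = { .op = 42 };
    write(req[1], &m, sizeof m);
    read(rep[0], &m, sizeof m);
    printf("reply: %s\n", m.payload);
    return 0;
}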
But why go only to extremes? A hybrid approach gets the best of both worlds, IMO. Integrate performance-critical and close-to-hardware stuff into the kernel address space and leave the rest outside it.
I’d like to see more things in userspace in Linux: for example, why in the world does every USB device need a kernel driver? Isn’t it just another bus? From what I have read, one could serve most of them from userspace given an adequate generic ioctl API.
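Something along those lines already half-exists via usbfs: a process can claim an interface and do bulk transfers with plain ioctl()s, with no device-specific kernel driver involved. A rough sketch, with a hypothetical device path and endpoint number and the error handling trimmed:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>

int main(void)
{
    /* Bus/device numbers are system-specific; this path is made up. */
    int fd = open("/dev/bus/usb/001/002", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned int iface = 0;
    ioctl(fd, USBDEVFS_CLAIMINTERFACE, &iface);   /* claim interface 0 */

    unsigned char buf[64];
    struct usbdevfs_bulktransfer bt = {
        .ep      = 0x81,        /* IN endpoint 1; device-specific */
        .len     = sizeof buf,
        .timeout = 1000,        /* milliseconds */
        .data    = buf,
    };
    int n = ioctl(fd, USBDEVFS_BULK, &bt);        /* one bulk read */
    printf("read %d bytes\n", n);

    close(fd);
    return 0;
}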
A hybrid approach (and a well-thought-out one, not something like Apple uses) is a good idea.
A hybrid approach (and a well-thought-out one, not something like Apple uses) is a good idea.
The NT kernel: the most used and most generally appreciated hybrid kernel. Contrary to what many anti-MS zealots would have you believe, the NT kernel is seen as a very good, stable, well-coded, and well-thought-out design. Contrary to Apple’s hybrid kernel, the NT kernel was written in-house at Microsoft.
It started out as a muK, but slowly more things were moved into kernel space to reduce the inevitable overhead muKs carry with them. With Vista, they go back to NT’s roots a bit more by again removing certain parts from kernel space.
Personally, I’d love to see the NT kernel become a true(tm) muK again; OK, it would bring a minute (on today’s computers probably unnoticeable) speed decrease, but it would mean the world to stability and security. The kernel itself can become pretty much bug-free (e.g. MINIX 3’s muK has roughly 3,800 lines of code, and they say it will eventually be close to bug-free), and the userspace servers/drivers can also be made pretty much bug-free, as they are individually easier to debug than the complexity of an intertwined monolithic beast like Linux.
The big problem remains the message-passing subsystem: the part that controls the messaging between the kernel and the various drivers in userspace. QNX’s message-passing subsystem is supposed to be pretty good.