KVM, the kernel-based virtual machine for Linux, will be merged into the Linux kernel, according to Andrew Morton’s merge plans for 2.6.20. In addition to VMware, Xen, qemu, and others, Linux users now have a full virtualization solution built into the kernel that supports running unmodified Linux or Windows images.
I must admit this is the first I’ve heard of this, and it sounds intriguing. What is the advantage of running this over, say, VMware?
http://kvm.sourceforge.net/faq.html
“You will need an x86 machine running a recent Linux kernel on an Intel processor with VT (virtualization technology) extensions, or an AMD processor with SVM extensions (also called AMD-V).”
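That hardware requirement is easy to check from a shell: Linux exports the CPU feature flags in /proc/cpuinfo, where “vmx” marks Intel VT and “svm” marks AMD-V. A minimal sketch (assuming a Linux host; the printed messages are made up):

```shell
# Look for hardware virtualization flags in the CPU feature list:
#   "vmx" = Intel VT, "svm" = AMD SVM (AMD-V)
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "CPU has VT/SVM extensions: KVM can use it"
else
    echo "no VT/SVM flags found: KVM will not run here"
fi
```

Note that a missing flag can also mean virtualization is disabled in the BIOS, not just that the CPU lacks it.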
“What is the difference between kvm and VMWare?
VMware is a proprietary product. kvm is Free Software released under the GPL. ”
“What is the difference between kvm and Xen?
Xen is an external hypervisor; it assumes control of the machine and divides resources among guests. On the other hand, kvm is part of Linux and uses the regular Linux scheduler and memory management. This means that kvm is much smaller and simpler to use.
On the other hand, Xen supports both full virtualization and a technique called paravirtualization, which allows better performance for modified guests. kvm does not at present support paravirtualization.”
> VMware is a proprietary product. kvm is Free Software released under the GPL.
You can say that VMware is an awesome product and kvm is free software (you get what you pay for, and I wouldn’t bet my company on this).
Edited 2006-12-17 19:07
you can say that VMWare is an awesome product and this is free software (you get what you pay for?)
Whatever floats your boat. Though what this essentially does is leverage something that you and others may already have paid for: the virtualization extensions in modern CPUs, and the Linux kernel itself. Since the kernel is already pretty smart about scheduling and memory management, it is a good candidate for being a hypervisor. And that’s pretty much what KVM does (besides adding sugar to emulate hardware in userland, AFAIK based on qemu).
Since virtual machines will be visible as normal processes on the Linux host, this is a very manageable solution. GUI tools to manage VMs will probably be available soon, giving KVM a good chance to become one of the major VM contenders on Linux.
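Concretely, the kernel side is just a pair of modules and a device node, and a running guest is an ordinary host process, so the standard process tools already apply to it. A sketch (module names per the KVM project; actually starting a guest needs VT hardware and is omitted):

```shell
# After "modprobe kvm; modprobe kvm-intel" (or kvm-amd), the module
# exposes a character device that userland opens to create guests:
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: KVM module is loaded"
else
    echo "/dev/kvm missing: module not loaded or no VT/SVM support"
fi
# Each running guest is a normal host process, so ps, top, kill,
# nice and friends all work on it:
ps aux | grep '[q]emu' || echo "no qemu guests running"
```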
Edited 2006-12-17 19:25
KVM is essentially a heavily modified qemu. Qemu has been around for quite some time and is (relatively) stable. Also, KVM supports all of the disk formats qemu does thanks to being more or less a fork of it.
The latest bleeding edge virtualization work patches are available here:
http://ozlabs.org/~rusty/paravirt/
According to the FAQ, KVM is going to continue to depend on qemu and merge its changes in. Overall, this is great news indeed, as qemu itself was lacking not only VT functionality, but also a free and open source kernel “accelerator” module to unlock its full potential.
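Because KVM reuses qemu’s disk-format code, images made with qemu’s own tooling should carry over unchanged. A sketch (guarded so it is a no-op where qemu-img is not installed; the file name is invented for illustration):

```shell
# Create and inspect a qcow2 image; KVM guests can use the same
# formats qemu supports. Skips cleanly if qemu-img is absent.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 guest.qcow2 2G
    qemu-img info guest.qcow2
else
    echo "qemu-img not installed; skipping"
fi
```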
I’m looking forward to seeing KVM and qemu rise and shine!
Edited 2006-12-17 20:50
KVM is not a free qemu accelerator. It requires VT hardware while kqemu doesn’t. For a free kqemu replacement, check out qvm86 ( http://www.nongnu.org/qvm86/ ).
I wish they had given it a cooler and more unique name, but Linux already has confusingly-named stuff in it (DRM comes to mind), so maybe one more can’t hurt.
“””
I wish they had given it a cooler and more unique name
“””
Sometimes I think that there must be some paragraph in the GPL that I’ve missed that says you can’t give your project a good name.
Just be thankful it’s not a recursive acronym. 😉
Edited 2006-12-18 02:19
DRM comes to mind
Well, just think which came first. It’s weird to call the older one confusing because a newer one popped up.
and also, what else would you call DRM? you perhaps think its not natural to call a direct rendering manager, say, DRM?
They could have called it the direct rendering layer, for example.
Does it *require* a modern processor with VT extensions, as in a Core2Duo, or does it work *best* with it? That’s a big difference from QEMU.
Robert
1. Click on the first link.
2. Click on faq in the left menu bar.
Wouldn’t it have been quick and simple enough to copy the relevant line for him?
Either way, it answered my question as well, and I can’t say that KVM impresses me. It seems I must have the latest CPU technology to be able to take advantage of it, so it’s worthless to me and a great many other users.
There’s nothing to see here, move along is my impression. If they can support all CPU types, then they have something. As of now, it’s a kernel feature for those that are wealthy enough to own new systems.
This just proves to me that the Linux kernel developers have lost the plot – they’re not coding for the people now, but for business interests.
Dave
Dave,
They haven’t “lost the plot”. You’re right, there is a lot of commercial development going into the kernel these days. However, that doesn’t stop Joe hacker from getting his non commercial patches merged with the kernel. Remember that “the people” benefit from contributions made by commercial developers too! That’s really the upside of open source.
To be more specific about this particular case, the kernel developers are taking advantage of HARDWARE features built into modern CPU’s. It’s these features that allow for a relatively straightforward implementation AND acceptable performance.
It wasn’t reasonable for such a feature to be added to Linux or ANY OTHER O/S without this hardware support. This is not some conspiracy of the kernel developers to ignore older CPU’s. Rather it is an example of them extending Linux to leverage a feature of new CPU’s that wasn’t previously an option.
So, it’s really hard to understand your complaint.
Edited 2006-12-17 22:10
Yes, but the average joe hacker generally won’t get their code accepted into the kernel tree. The kernel tree has become the domain of the professional kernel hacker and corporate interest, especially since the introduction of the 2.6 kernel tree. I’ve used Linux for long enough to see how it has all changed, and in all honesty, it’s not for the better.
Anyways, I realise that the KVM is designed to take advantage of modern CPU abilities etc, but it is ignoring a lot of other CPU types. Sure, you can load up Xen or QEMU if you really want it. I guess the kernel developers would really rather see this sort of thing in userland rather than within the kernel infrastructure itself.
I disagree with your comment that the people benefit from commercial interference in what is a public development item. I can see no new features in the 2.6 kernel tree that have benefited the public, it’s all been geared to the corporate interest. Any changes that have been made, are re-hashes of previous code that worked fine for most instances. New items are of things that the average person would never use, or want to use.
My complaint is quite simple – KVM is of little advantage to the average computer user, since it is unlikely that they’ll be running modern enough CPUs to take advantage of it. In the future, in ten years time, then yes, it’ll probably be fine.
Dave
The reason that the ‘average hacker’ won’t have code accepted into the tree is because there is no new code that an ‘average hacker’ can add. The kernel now is in a very mature and powerful state. Any missing features require serious work and knowledge that guys like me and you just won’t/can’t do.
Actually I think the changes that have come because of the major vendors is for the better. By and large, the major maintainers are professionals who understand the need for professionally written code.
I don’t want to burst your bubble, but there is a large percentage of average people running Linux on newer hardware.
Tim
Yes, but the average joe hacker generally won’t get their code accepted into the kernel tree.
That’s simply false. There are even some “professional” kernel hackers actively trying to attract “average joe hackers”. For instance, through the KernelNewbies site that Rik van Riel and others have maintained for years:
http://kernelnewbies.org/
They have a list of lighter-weight “janitor” jobs to get newcomers started with the kernel. And there is a mentoring process.
That said, it’s a kernel! You don’t want the average programmer to touch the code, just as you don’t want someone with a physiology minor to conduct brain surgery. Most of the kernel is complex code that requires a lot of knowledge and working experience. Naturally, that’s easier if you get paid to spend time on it.
I am happy that there is a lot of quality control in place.
Anyways, I realise that the KVM is designed to take advantage of modern CPU abilities etc, but it is ignoring a lot of other CPU types.
That’s not a good reason not to implement it. By the time Linus added NX support for x86-64 CPUs, and later to kernels with PAE enabled, CPUs with the NX bit weren’t widely in use either. But it provided an extra layer of protection to those who had them. These days many users have it.
Systems with CPUs with virtualization extensions are now widely available through system vendors, even in some lower-end models. Within three years’ time, a large chunk of the computer owners in the West will have a CPU with virtualization extensions. Will Linux stop functioning on old Pentium I machines? No. But it has become much more useful to people with new machines. I don’t see anything inherently bad in that. Actually, it is quite good, as I know many people who find this useful.
I disagree with your comment that the people benefit from commercial interference in what is a public development item. I can see no new features in the 2.6 kernel tree that have benefited the public, it’s all been geared to the corporate interest.
Although some features have been backported to 2.4, I can name a bunch of 2.6 features that have benefitted me personally, and in some cases the general public:
* SELinux
* SATA support
* ExecShield
* Access Control Lists
* Scheduler improvements
* NPTL
* IPsec
* and dozens others
My complaint is quite simple – KVM is of little advantage to the average computer user, since it is unlikely that they’ll be running modern enough CPUs to take advantage of it. In the future, in ten years time, then yes, it’ll probably be fine.
With the average Dell machine shipping with a CPU with virtualization extensions, ten years would be a very pessimistic view. Give it three to four years, and less on servers.
Daniel,
With all due respect, the average user would probably only have made use of SATA support and the scheduler improvements. The others in your list are really not something that the average Linux user uses, in my experience. SELinux is still problematic and a bitch to set up and keep running. ACLs aren’t really necessary for the average user either. They’re nice, but not vital. NPTL and IPsec are, again, not used by the common Linux user.
Daniel – I know you, and you know me, and you’re not the ‘average Linux user’. You’re a very technical person, with superb knowledge of Unixes, and are far above the average user in knowledge and experience. You also use the Linux systems to the maximum.
Dave
Dave,
It sounds like you think the kernel was essentially feature-complete back in the 2.4 days. If you think all the new 2.6 features are only useful to corporate big wigs, then you always have the option of continuing to use 2.4.
Out of curiosity, in your perfect world, where there was no corporate support for Linux and it was just an unpaid army of kernel hackers advancing Linux; what features would they be adding that were clearly for “the people”? How would Linux be different in that world, than it is today? Put another way, what needs of the average user aren’t being met by the Linux kernel today that would be _IMPROVED_ if all the corporate support vanished?
It seems to me the kernel is developing just fine; in fact better than I ever imagined it would. The real interesting improvements for the average user need to happen elsewhere. Desktop software and applications for Linux need to continue to get better to appeal to a broader audience of users. But the kernel? It’s not something the users you seem worried about really even need (or want) to think about.
Edited 2006-12-18 05:08
But the kernel? It’s not something the users you seem worried about really even need (or want) to think about.
I agree with you although it would be nice if the devs would borrow the wireless drivers from OpenBSD:-)
Er, all new computers with the latest generation of Intel and AMD processors (read: most new computers) have VT or SVM.
So how exactly is this not going to be useful for 10 years?
It’s useful for me now on my middle of the range 1 year old laptop and I’m not some super rich kernel developer.
To suggest that the kernel shouldn’t accept code (as a *loadable* module) which won’t run on 10 year old hardware makes no sense. Where’s the harm? If your computer can’t use it then it doesn’t affect you.
My complaint is quite simple – KVM is of little advantage to the average computer user, since it is unlikely that they’ll be running modern enough CPUs to take advantage of it. In the future, in ten years time, then yes, it’ll probably be fine.
If you buy a new computer today, there is a good chance that you will get the required VT support (http://en.wikipedia.org/wiki/Virtualization_Technology). You can get it for less than $100, so it is not only for big spenders.
“QEMU Accelerator Module” like functionality could be added if someone would scratch this itch.
KVM is just another player in the diverse world of virtualization (http://en.wikipedia.org/wiki/Virtualization).
Anyways, I realise that the KVM is designed to take advantage of modern CPU abilities etc, but it is ignoring a lot of other CPU types. Sure, you can load up Xen or QEMU if you really want it. I guess the kernel developers would really rather see this sort of thing in userland rather than within the kernel infrastructure itself.
I think the point made previously should not be overlooked: with modern hardware improvements there are now ways of doing this in the kernel.
We still have Xen and the other products, which allow a more fully featured approach but with possible performance hits, or at least that is the impression I am getting.
I would HATE for kernel developers (or ANY developer) to NOT make use of some thing just because the majority of people don’t have it.
If they made it a pivotal part of the kernel, such that it wouldn’t boot without it, then it would be a bit silly.
I disagree with your comment that the people benefit from commercial interference in what is a public development item. I can see no new features in the 2.6 kernel tree that have benefited the public, it’s all been geared to the corporate interest. Any changes that have been made, are re-hashes of previous code that worked fine for most instances. New items are of things that the average person would never use, or want to use.
I wonder how much of this is similar to developments in engineering for F1 racing or the race to the moon. Yes, the improvements in their own right do not directly benefit the home consumer. But with more things appearing at the top end, they gradually filter down to the average joe consumer.
I’m just curious, though: you seem to VERY frequently comment on how few items (or none at all, which I don’t believe for a second, by the way) of use to the average computer user are making it into the kernel in recent development, and you seem to attribute this largely to the corporations.
Hmm, how can you say that with a straight face, and then browse through the kernel driver modules and see the amount of obscure hardware supported that I’ve NEVER had on ANY of the umpteen computers that I have owned?
I wonder if there was someone, way back when, stating that all of this was terrible and the world was about to end?
So what if the majority of old computers do not support this? I’m fairly certain that there will be knock-on benefits down the road, either for old hardware or, sod it, just for those that buy new computers in a few years’ time.
I just don’t see your point.
Linux coders were never coding for “the people”. That’s right.
They’re coding for themselves. If it makes their life easier to have KVM only run on systems with the latest CPU technology, then they’ll do it, unless they either don’t own the hardware, or they have significant motivation not to.
If you feel like paying someone to write the patches to get what interests you done, and/or you write them yourself, then it’ll have support for what interests you. Of course, you can also try to convince the developers that what you want _should_ interest them, and there’s always a good chance you’ll be able to do that if what you say makes sense (hence, feature requests on bugzilla), but the developers aren’t necessarily coding for you.
Yes, that is correct. However, by releasing it under the GPL, they’ve ensured that the users can use the software. The developers can develop all they want; without users they’re nothing. Developers seem to have this ‘holier than thou’ attitude towards normal users, especially Linux developers in my experience, and they seem to think that users owe them. Of course, not every user can code, and neither should they have to. If Linux developers really have the type of attitude that you’re describing, then perhaps they’re better off releasing code under a less community-friendly license like BSD.
Dave
>>If they can support all CPU types, then they have something. As of now, it’s a kernel feature for those that are wealthy enough to own new systems<<
You can get Dell laptops with this functionality (a VT-enabled CPU) for less than €1,000 (including tax and delivery); it’s hardly the case that they are just targeting those with deep pockets.
Likewise, the majority of Intel CPUs (if not all?) sold for servers next year will include this functionality. It might be handy for “Joe User” to have virtualisation on their desktop, but the real target market is server consolidation.
I’ve tried to run some virtual servers with Xen on Fedora Core, but I was quite appalled at their poor stability under a relatively small workload (< 0.05). One of the VMs was crashing twice a week. After a crash, Xen wasn’t able to reclaim its resources, forcing a reboot…
I would definitely be interested to invest in VT-enabled hardware if KVM is working relatively well on, say, Debian.
If the VM is crashing, then the bug is in the VM (i.e. the guest OS) and not the hypervisor.
Also, Xen can reclaim resources of a crashed guest, you just need to RTFM.
Thanks for the tip, but I was able to figure out to RTFM by myself… If the resource reclaim had worked, you wouldn’t see me complaining here. It’s quite possible the VM had a bug, but I wonder why two similar VMs (cloned, except for userland services) would behave differently under a certain load. Possibly an issue with the Xen part of the Linux kernel. Either way, the end result was the same: it was unusable.
Anyway, I’m not asking for tech support (this isn’t the proper forum). I just wonder how stable that soon-to-be-official module is, as it seems to be an interesting alternative for me.
OK, I’ll re-iterate my point since it seems that the majority of people seemed to fail to grasp it:
1. Most people running Linux are running it on a pre-existing system. Not *everyone’s* system is going to be the latest and greatest dual core. Most won’t be. They’ll be older Pentium/Athlon systems, or Socket A’s, or Socket 939/754. These systems will NOT take advantage of it.
For those that tried to misconstrue my earlier comments: I did not imply that the kernel developers shouldn’t introduce said code because it might not become a majority feature for another ten years. I simply said that KVM probably won’t be usable by the majority of people for another ten years. That’s probably a bit of an exaggeration; 3-5 years is probably nearer the truth.
Daniel – I never said I wanted the average joe hacker to contribute code to the kernel, someone else did. My reply was that this was really unusual these days, given the modern professional kernel developer.
Hans: $100 to you might not be much, but to many it is. Not only do you have to buy the CPU, but you have to buy the motherboard, more than likely new RAM (most modern motherboards probably don’t take ordinary DDR RAM, I suspect), and a new graphics card (again, most modern motherboards do not seem to accept an AGP graphics card, but all want PCI Express). So that $100 has suddenly grown to $500 plus. As I said, not cheap.
I also realise that those that can’t use KVM will probably use something like QEMU or Xen.
I really wish people would read and comprehend things correctly, instead of misconstruing someone’s post. I realise that not all of you are native English speakers, so that’s probably a big part of the problem (no disrespect intended, I might add; I also come from a NESB).
Cheers,
Dave
Much like 3D acceleration and MMX, the absence of VT on older computers does not mean that it is not worthwhile to utilize on machines that have the capability. These hardware features made it significantly easier to implement this functionality. As I understand it, the KVM functionality is not practical to support on older machines. VMware, qemu, and so forth can give you roughly the same functionality if you need it, however.
Further, it is precisely because the majority of Linux installs are not on VT-capable machines that now is the time to develop it. That way, as such machines become more and more commonplace, they will have a tested and more robust solution rather than a first pass. The time to develop a feature is not when the majority of Linux users want/need it. The feature should be there waiting.
stands for: GNARL’s Not A Recursive Acronym
On a more serious note, watching VT support come to Intel processors is another step towards turning them into the mainframe architectures of forty years ago. Now we can do /VM.
Can channel control programs be far behind?
I’m sorry if I’m missing something obvious here, but shouldn’t it be GNARA? I just don’t see what the L is doing there at the end. Lacronym?
Johnny Carson, may he rest in peace, used to say “never explain a joke,” just before he explained his jokes…
It should be GNARA, but then it would be a recursive acronym. That sets up the classic joke that it’s a recursive acronym that says it isn’t.
Instead, I went with the ploy of having the “acronym” not actually match the phrase, which is the same joke in a different form.
It’s also a bad pun, because one of the meanings of the word ‘gnarl’ is ‘to twist’ ie, i ‘twisted’ the definition to get the acronym.
(Aren’t you sorry you asked?)
Dave, sorry, but I think you’re way off base in your assessment of the average Linux user. I am typing this message on a VT-enabled system, and I think there are many, many others like me. I am as glad as can be that developers worldwide are pushing the envelope. In 6 months I think virtually all machines sold will have virtualization abilities.
In addition to the added features of 2.6 over 2.4, it is way more stable. It is not just features. There is no way, within reason, that virtualization can happen without hardware support. All that is happening is that the software is catching up to the hardware. That is a good thing.
I’ve always loved qemu and all the virtualization stuff in free software, but we need to be honest here: VMware Workstation has much better performance than qemu (with or without the acceleration module), and I strongly believe it’s still better than KVM. (?)
When I speak about performance, I mean aspects like speed, stability and compatibility with other OSes.
Anyways, KVM is a great feature for Linux and I hope to see new improvements to it soon.
Well, you can’t compare qemu without the accelerator module to vmware, as qemu is then a full-blown emulator; and as for the accelerator module, well, it doesn’t work too well.
However, kvm is entirely different, and you probably shouldn’t “heavily believe” that vmware is better. It may be more mature… for now… but I doubt it’s faster, or if it is, not for much longer.
I don’t think the VT requirement is a problem. Virtualization requires a powerful machine, so it is OK to assume that real users will have a modern processor. We already have vmware and qemu, so if a better alternative arises (I trust the kernel hackers; I’m sure they know Linux much better than us osnews readers) and it is the way to go, jump on the train!
This is not the classic scenario for puppylinux or DSL, this is for people with real needs!