During the Bill Gates keynote at WinHEC 2006, Microsoft demonstrated the new Windows Hypervisor (codename Viridian). In the demo, two unique features for a virtualization platform were demoed: 4 virtual CPUs per virtual machine and live modification of virtual hardware while the virtual machine is powered on (i.e. adding a NIC or more virtual memory while the guest OS was still running).
Is this capability targeted only towards virtualizing servers, or will it also be for the desktop? If the latter, will it also provide full 3D GPU virtualization so that the desktop OS GUIs will be reasonably performant?
“Is this capability targeted only towards virtualizing servers, or will it also be for the desktop? If the latter, will it also provide full 3D GPU virtualization so that the desktop OS GUIs will be reasonably performant?”
It’s been a while since I looked at the original docs, but I believe this will be server first then later on the client. RE: GPU virtualization — this is a goal, and supposedly will be in the next release of Virtual PC. I’m not sure if it’ll be in the first version of the hypervisor since it isn’t really necessary on the server.
Re the uniqueness of number of virtual CPUs and live modification of virtual hardware:
IIRC we support up to 32 VCPUs under Xen, and you can hotplug VCPUs according to demand. A correctly configured guest can also have its physical memory allocation altered – VMware ESX can do this too.
Also, we support hotplugging other hardware in theory, but I’ve not looked at that recently, so I don’t know if it works at the moment :)
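For reference, here is a minimal sketch of what those operations look like with the Xen 3.x-era `xm` toolstack. The domain name and sizes are placeholders, and `run=echo` keeps this a dry run so the commands are only printed, never executed against a real hypervisor:

```shell
#!/bin/sh
# Dry-run sketch: VCPU and memory hotplug with the Xen 3.x xm toolstack.
# "guest1" and the sizes are placeholders; run=echo prints the commands
# instead of executing them.
DOMAIN=guest1
run=echo

# Change how many VCPUs the running domain uses (up to its configured
# vcpus= maximum):
$run xm vcpu-set "$DOMAIN" 4

# Balloon the domain's memory allocation to 1024 MB while it runs:
$run xm mem-set "$DOMAIN" 1024

# Inside the guest, a newly arrived VCPU is brought online via sysfs:
$run sh -c 'echo 1 > /sys/devices/system/cpu/cpu3/online'
```

Drop the `run=echo` indirection on a real dom0 to perform the changes for real.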
I’m pretty sure the AIX hypervisor can do this too, and has been able to for some time. I know I’ve had extra CPUs added and removed from LPARs with no disruption at all. In fact it’s quite amusing (in a geek way) to be watching a tool like nmon64 when a new CPU suddenly arrives that you weren’t expecting!
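On the POWER side, that kind of dynamic LPAR change is driven from the HMC. A dry-run sketch follows; the managed-system and partition names are invented, and the `chhwres` flags are recalled from memory rather than verified, so treat them as an assumption and check them against your HMC release:

```shell
#!/bin/sh
# Dry-run sketch: dynamic LPAR processor add/remove from an HMC shell.
# "p550-sys" and "mylpar" are invented names, and the chhwres flags are
# from memory, not verified; run=echo only prints the commands.
run=echo

# Add one dedicated processor to a running partition:
$run chhwres -r proc -m p550-sys -p mylpar -o a --procs 1

# Remove it again, with no disruption to the running partition:
$run chhwres -r proc -m p550-sys -p mylpar -o r --procs 1
```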
“two unique features for a virtualization platform were demoed: 4 virtual CPUs per virtual machine and live modification of virtual hardware while the virtual machine is powered on (i.e. adding a NIC or more virtual memory while the guest OS was still running).”
This is factually incorrect. Just off the top of my head the following virtualisation systems support 4 or more CPUs per virtual server:
1. IBM LPARs for pSeries
2. Sun Solaris Containers (zones + resource manager)
3. Sun Dynamic System Domains
4. HP Virtual Partitions for HP-UX
5. HP Integrity Virtual Machines for HP-UX
6. Xen
7. VMware ESX Server
8. HP nPartitions
9. Sun hypervisor for T1000/T2000
etc.
They can’t even claim a first for Windows virtualisation, as 6, 7 and 8 support Windows. (8 only on Itanium)
The virtualisation space is way bigger than just x86/x64/Windows/Linux. Many of these “new” features have already been available in AIX (POWER), Solaris (SPARC and x86/x64), HP-UX (PA-RISC, Itanium), z/OS etc.
“This is factually incorrect. Just off the top of my head the following virtualisation systems support 4 or more CPUs per virtual server:”
When I first read that bit about unique features, I read it as unique for virtualization in general, not specifically for the MS virtualization platform, and as such it would be factually correct.
Haven’t read the article yet so I don’t know if anyone claims it’s new in general.
Edit: after reading the article, I’ll have to ask whether the systems you listed make
“Note that actually both 4-way (or 8) virtual SMP and live modification of virtual hardware are features not provided by any virtualization vendor.”
factually incorrect or not.
While virtual SMP is certainly nothing new and modification of virtual hardware isn’t either, I don’t know if the combination of the two is new or old.
Edited 2006-05-24 11:17
And MS Virtual Server is currently the only server virtualisation system I can think of which does not support more than one CPU per virtual server. Adding multiple-CPU support may be a first for them, but it is *certainly* not an industry first!
I’ve never used IBM’s Power hypervisor, but I’ve read some of the related Linux source – with dynamic LPAR it looks to be possible to hotplug PCI devices dynamically into the control of a partition, although of course that’s real hardware, not virtual. That’s cool too :)
And yes, I imagine it’s interesting watching new CPUs appear unexpectedly :)
Also, IIRC, the Xen patches to Linux include the SMP alternatives patch, which allows a guest Linux kernel to dynamically switch between using spinlocks or not, depending on how many processors are present in the system – it stops using spinlocks if there’s only one VCPU, to save overhead, and then reactivates them if more are hotplugged. I think this patch is also in the -mm tree at the moment, and presumably it’s also useful on Power. It’s mind-boggling stuff. :)
….or is the person who put this thing together?