A thread on the OpenBSD -misc mailing list began by discussing whether Xen had been ported to OpenBSD: “is it planned at some point to release a paravirtualized xen kernel for OpenBSD 4.3 or 4.4?” Later in the discussion it was suggested that virtualization should be a priority for security reasons: “virtualization seems to have a lot of security benefits.” OpenBSD creator Theo de Raadt strongly disagreed with this assertion: “you’ve been smoking something really mind altering, and I think you should share it.”
Theo made an interesting point. It is true that if you have ten servers and consolidate them into one physical box (via VMs), then no matter how you wrap it up they will always be less secure, for the simple reason that the physical security layer is gone. With separate machines, whatever you do, you must go through the network pipe before even attempting to hack into each system.
The point Theo ignores (or that no one brings up) is that a lot of the time VMs can help security by allowing each service to run in its own sandbox: a mail server in one VM, a file server in another, and remote X clients in a third. Some corporations do not have the cash flow to buy a box for each, so in the past you would just throw all these services onto one box. In cases such as this it is more secure to run each service in its own VM.
In order from most secure to least secure we have: 1) separate boxes for each service, 2) a VM for each service, 3) all the services on one box. Sure, VMs aren’t as secure as physically separate machines, but in some cases they are better than the alternative.
The point Theo ignores (or that no one brings up) is that a lot of the time VMs can help security by allowing each service to run in its own sandbox: a mail server in one VM, a file server in another, and remote X clients in a third.
Hopefully SELinux (and whatever equivalents exist in other OSes) lets admins sandbox those applications without needing virtualization – thereby avoiding the performance and memory costs that virtualization incurs by duplicating most of the OS instead of reusing it.
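For illustration, here is a minimal sketch in C of that kind of OS-level sandboxing, using libselinux to launch a service in a confined SELinux domain instead of a separate VM. The domain name and daemon path are hypothetical (real policy would have to define the domain), and you would link with -lselinux:

    /* Hypothetical example: launch a mail daemon in a confined SELinux
     * domain rather than a dedicated VM. "mymail_t" and the daemon path
     * are made up; real policy must define the domain. */
    #include <selinux/selinux.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *ctx = "system_u:system_r:mymail_t:s0"; /* hypothetical domain */
        if (setexeccon(ctx) == -1) {       /* the next exec*() runs in ctx */
            perror("setexeccon");
            return 1;
        }
        execl("/usr/sbin/mymaild", "mymaild", (char *)NULL); /* hypothetical daemon */
        perror("execl");                   /* reached only if exec fails */
        return 1;
    }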
There are two sides to this argument, at least once the technology matures. Yes, virtualization as a workload-consolidation technique carries some reliability tax, in the form of imperfect guest isolation and the host as a single point of failure.
However, virtualization can be used simultaneously to both consolidate and distribute workloads, in the form of full-time cluster multiprocessing and failover migration. Because a typical datacenter can improve its hardware utilization many times over through workload consolidation, it can afford to implement full redundancy without increasing hardware capacity.
The point is that the opposing forces of consolidation and clustering can be unified under the umbrella of virtualization to provide a balance of efficiency and availability. In fact, clustering may not have a future (in IT) independent of virtualization. The lead developer of the defunct OpenMosix project is now working for the company behind KVM.
The future is a full decoupling of workloads from hardware. If consolidation is a 1:N ratio of hardware to workloads and clustering is M:1, the product is M:N. This represents an arbitrary mapping of a bunch of workloads onto a bunch of boxes, where the sheet metal boundaries are outwardly irrelevant to resource management.
There is no standalone server solution that is so reliable that it can provide a mission-critical service with no redundancy or failover. If Theo believes that OpenBSD running on a box “which barely has correct page protection” is such a solution, then I want what he’s smoking.
It is true that if you have ten servers and consolidate them into one physical box (via VMs), then no matter how you wrap it up they will always be less secure, for the simple reason that the physical security layer is gone. With separate machines, whatever you do, you must go through the network pipe before even attempting to hack into each system.
Granted, there comes a point where you simply cannot get away from having separate physical hardware to partition things, particularly from a redundancy point of view. Online replication of virtual machines creates complexity that, I think, shouldn’t really have to exist, but if you’re willing to manage it in order to get the most out of your hardware, then I suppose it can work out. In a data centre, it’s a killer to have unused and under-utilised hardware lying around and then to go out and buy more because you can’t get at those resources.
The problem is that no one was ever going to buy ten servers, and use up the space, when they could run all their stuff on one box, just because separate boxes are more secure. Given a set of circumstances and workloads, running a lot of services on one physical and logical server does make it less secure. If you can partition those services logically, if not physically, then it can certainly help: if something infects one logical server, it is very likely to affect only that logical server and nothing else. You’ve also got the unintentional side effects of running some applications, not to mention users, together. From that point of view virtualisation does have security benefits, and it is generally quicker to get a virtual server back up and running than a physical one.
Yep, virtual machines might reside on the same physical server, but then, the world is not perfect. Theo seems to forget, all too often, that we do not live in a perfect ‘Theo de Raadt’ world. I mean, if he’s developing OpenBSD “…on top of a nasty x86 architecture which barely has correct page protection”, while claiming that OpenBSD is built for security and expecting people to run applications directly on top of it(!), then I’d advise him to lay off the fungi in the garden he thinks are mushrooms. Logically, I have difficulty with that.
Theo thinks he’s Linus sometimes… and he’s not. He’s just not practical. Notice that in that e-mail conversation, when L. V. Lammert brought up application separation, Theo didn’t have a clue what he was talking about.
I should start this off by saying that I am a great fan of OpenBSD, though I don’t use it in a commercial environment (it’s used on most of my home computers). I should also mention that I am currently very pleased with OpenBSD, because its small install size allowed me to keep in touch with family during the San Diego fires.
Personally, I think that most of the people who use OpenBSD are drawn to it because of its strong stance on security. Most of the users are happy to compromise on features for this reason. This stance is also what makes it such an awesome firewall: you don’t have to jump through hoops to secure it, just write the rules and put it online.
Really, Linux, FreeBSD, and NetBSD all have VMs; why is it such a bad thing to have one OS that doesn’t? Open source is in large part about having options, and the VM niche looks pretty well covered. It seems that there is a group of people who think open source means Linux, and who expect everything in the open-source world to match up with Linux, much as a lot of Linux desktop distros try to be more like Windows. That’s nice for transitioning, but sometimes it just isn’t the best option for everyone.
Basically, OpenBSD isn’t supposed to be a revolutionary OS. It adds features when the developers see them as needed, so long as they are within the goal of keeping the base system secure and simple. If this means that OpenBSD doesn’t fit your needs, then maybe it just isn’t the right system for you.
Personally, I think that most of the people who use OpenBSD are drawn to it because of its strong stance on security.
Theo has just blown all this out of the water by admitting that OpenBSD is written “…on top of a nasty x86 architecture which barely has correct page protection.” That’s where he sees the weaknesses: in the hardware, not in virtualisation itself.
> In order from most secure to least secure we
> have: 1) separate boxes for each service, 2) a VM
> for each service, 3) all the services on one box.
> Sure, VMs aren’t as secure as physically separate
> machines, but in some cases they are better than
> the alternative.
The OS already provides as much protection as the hardware physically allows. If malicious code can break through that, it won’t find it any harder to break through the virtualization layer, which can’t provide better protection without a heavy performance penalty.
You can also run them on different physical machines, you know. Low-end hardware is dead cheap.
So instead of buying 3 small, cheap boxes (mail and file serving does not require a lot of horsepower) they’ll buy one very expensive ninja box? Not necessarily an awesome business decision.
People who actually know what they’re doing run them on different boxes.
So instead of buying 3 small, cheap boxes (mail and file serving does not require a lot of horsepower) they’ll buy one very expensive ninja box? Not necessarily an awesome business decision.
Small, cheap boxes don’t necessarily have redundant power supplies, hardware RAID, and other nice extras that might come in handy on a server. One box with all these features is cheaper than three and probably has enough power to run everything you were going to run on those three boxes.
I’m not saying it’s always the right idea, but I don’t think it’s an idea that should be automatically thrown out.
Praise be to Theo, for not only sticking to his theology but embarrassing heretics at every turn.
Hail! Hail!
Seriously, that was brilliant.
And regarding that “interesting point” above:
The virtualization-of-services-for-separation thing… is it really that helpful to multiply a piece of software (an OS), with its own lurking security flaws, that many times? I am in favour of it in some circumstances, but I’m not sure it actually makes anything safer, just easier to track. Joyent.com has a bit of information on that, but I’m hardly convinced that it’s the right approach and the desired development target for everyone.
Are you kidding me? You are actually praising that stupid piece of shit?! Not only was his answer unwarranted, but the fact that he would speak in such a way implies that, regardless of how much people want to dig their noses up his ass, he has no regard for his users. A simple “no, this won’t work because…” would have worked better. I would curse him, his momma, his kids, his wife, whatever the fsck, for being an ass.
The fact is that if you have the services on the same physical box and someone can exploit a security hole in service Z that escalates their privileges, they might be able to break out of the sandbox, thereby exposing the rest of the machine (and the services in the other sandboxes). In that case a separate VM might not be any safer than a simple chroot or a jail.
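To make the comparison concrete, here is a minimal sketch in C of the classic chroot-plus-privilege-drop pattern; the jail path and IDs are illustrative. The key caveat is in the comments: a process that keeps root can escape a bare chroot, so dropping privileges is what does the real work.

    /* Minimal sketch of the "chroot plus privilege drop" pattern.
     * The jail directory and IDs are illustrative. Note: chroot(2)
     * alone is not a security boundary; a process that keeps root
     * can escape the jail. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (chroot("/var/empty") == -1) {   /* illustrative jail directory */
            perror("chroot");
            return 1;
        }
        if (chdir("/") == -1) {             /* don't leave the cwd outside the jail */
            perror("chdir");
            return 1;
        }
        /* Drop root; without this step the jail is trivially escapable. */
        if (setgid(32767) == -1 || setuid(32767) == -1) { /* hypothetical unprivileged IDs */
            perror("drop privileges");
            return 1;
        }
        /* ... run the confined service's main loop here ... */
        return 0;
    }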
IMHO it’s never a good idea to put all your eggs in one basket anyway, but I’m more paranoid about hardware failing and forcing me to restore all the eggs than about people breaking out of one egg and getting into the others…
What people forget is that the #1 big win of virtualization is consolidation. #2 is that I can run multiple versions.
Look, I run a VMware ESX cluster, and I can add new VMs, destroy them, or upgrade them, all remotely. This saves me travel time and gives me far more flexibility than trying to manage physical machines. I can also make copies of older machines and see how an upgrade goes before committing to it. And unlike jails and other such schemes, I can mix my environment, running MS SQL, a Linux web server, and OpenBSD firewalls side by side without needing several jail-capable OSes.
The ultimate win is when you have legacy setups. I have Oracle 7 on NT 4.0 that runs the accounting packages, and it runs GREAT on VMware. Would you actually colo a machine running NT 4.0 two thousand miles away?
I know a lot of people just think 1:1, but you’ve got to see this from a data-centre perspective.
I need to open my mouth on this – I just can’t keep it shut. I have the unfortunate position of being a dev at a company that relies on virtualization for everything (ESX3). The biggest problem is that it just doesn’t perform, even on high-end kit (we’re talking 4-8-core Xeons with 4-32 GB of RAM and a big SAN).
I have great respect for Theo for actually having the balls to see through the marketing stuff and point out the real situation.
I think the way to go is a blade-based server platform of some description. It’s quite space- and power-efficient, and it’s easily scalable by chucking more REAL, physically isolated servers into the cage.
What doesn’t perform? What processes are you running on them? I’m just curious.
Of course it doesn’t perform. Any assembly programmer could have told you that, as could any C programmer worth his pay. It’s _obvious_ that extra layers will slow things down.
I think a major premise behind virtual machines is that you’re not taxing your available CPU cycles, memory, and (to a lesser degree) drive space at all times. If you try to run too many CPU- or memory-taxing processes on your hardware, you’re going to run into bottlenecks whether you’re running “bare metal” or various virtual machines.
If you were running your servers flat out all the time, nothing short of new hardware (or maybe a serious audit of what software is running on your box) would help. But more often than not you’re not running your machines that hard, so virtual machines become much more viable.
Actually, that’s not very accurate. Xen can do paravirtualization or hardware-assisted virtualization, and those techniques carry an incredibly small performance overhead; it is really negligible.
There are noticeable slowdowns only when VMs and the host system fight over resources, and in that case the extra layer doesn’t have much to do with slowing the whole thing down.
Consider how finely tuned some of the resource-management mechanisms in a modern OS are. Now consider several OSes, each with its own ideas about managing resources, running in parallel with the others on the same hardware. There is lots of room for bad interactions there.
Virtualization has more performance pitfalls than just the overhead of the hypervisor itself.
I have the unfortunate position of being a dev at a company that relies on virtualization for everything (ESX3).
Maybe you should use it, but not for everything.
The biggest problem is that it just doesn’t perform, even on high-end kit (we’re talking 4-8-core Xeons with 4-32 GB of RAM and a big SAN).
What doesn’t perform, specifically? There are many ways of killing performance with virtual machines, obviously, and there are many things you can do to make it acceptable, given what you want to trade off for the benefits.
I have great respect for Theo for actually having the balls to see through the marketing stuff and point out the real situation.
Theo is talking about something different.
I think the way to go is a blade-based server platform of some description. It’s quite space- and power-efficient, and it’s easily scalable by chucking more REAL, physically isolated servers into the cage.
That’s exactly what people are trying to avoid. It doesn’t solve the problem that many people use virtualisation to solve – namely, that powerful hardware is being completely under-utilised. It gets expensive in every scenario, but particularly for smaller businesses and places like data centres. And that’s without mentioning any of the management advantages of using virtualisation.
First, some words on the underlying technology. Theo compares virtual-machine isolation to the isolation of processes in a single OS. This comparison is close to reality; the difference between the two lies in the behaviour the virtual machine presents to guest processes. In the case of a single operating system, this behaviour centres on high-level concepts such as processes, files, and signals. In the case of a VM, it centres on mimicking real hardware.
Virtual machines were designed to be isolated, behaving as if they ran on different physical pieces of hardware. However, they are not totally separated, since they access shared resources (virtual or not); for example, they are connected by virtual networks.
Similarly, a traditional OS separates processes as completely as if they ran on different machines, while still allowing them to access shared resources, such as files or physical devices. This isolation was a great “selling” point of one of the first OSes to do it right: Unix. (On a side note, I do not know to what extent pre-Unix OSes achieved this goal.)
Now Theo rightfully asks: given the similarities between virtualization and the very concepts that are the foundation of Unix, how can we claim that one is “more secure” than the other? In fact, he answers the question himself by stating that virtualization adds more complexity without adding features.
To carry his line of thought a little further: wouldn’t a carefully engineered, specialized interface to the operating system for restricted processes (often called the “syscall interface”, although that’s not really exact) be a much better solution to the problem virtualization tries to address – faster, more convenient, *and* more secure?
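As an illustration of what such an interface can look like, here is a minimal sketch in C using OpenBSD’s pledge(2). (pledge arrived in OpenBSD years after this thread, but it embodies exactly this idea: the process declares up front which classes of system calls it will ever need, and the kernel kills it if it strays – no hypervisor or duplicated OS required.)

    /* Sketch: a process that promises to do nothing but stdio and
     * read-only file access; any other syscall class is now fatal. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        if (pledge("stdio rpath", NULL) == -1) {
            perror("pledge");
            return 1;
        }
        FILE *f = fopen("/etc/motd", "r");    /* permitted by "rpath" */
        if (f != NULL) {
            char buf[256];
            while (fgets(buf, sizeof(buf), f) != NULL)
                fputs(buf, stdout);           /* permitted by "stdio" */
            fclose(f);
        }
        /* Opening a socket or calling exec here would kill the process. */
        return 0;
    }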
“Similarly, a traditional OS separates processes as completely as if they ran on different machines, while still allowing them to access shared resources, such as files or physical devices. This isolation was a great “selling” point of one of the first OSes to do it right: Unix. (On a side note, I do not know to what extent pre-Unix OSes achieved this goal.)”
A comparable implementation was present on IBM’s S/360 (the 360/67) and S/370 around 1970. They provided support for virtualizing the hardware so that many different hardware environments could be simulated, which different OS implementations could make use of – IBM CP-67 and VM/370, if I remember correctly.
Of course, that implementation was not as capable as VMs are today, but it aimed at the same goal.
But OS/360 did not catch on. I also seem to remember that it was hopelessly bug-ridden (I guess that’s a good definition of “they didn’t do it right”).
Anyway, I wonder what the point is of having the VM layer emulate all the quirks of the real hardware, just to have the guest OSes work around them again. If I’m hunting a million bugs anyway, adding complexity is the last thing I’d do.
If my memory serves me right, OS/360 wasn’t really an attempt at security so much as at backwards compatibility, letting already-written OSes run on the same hardware (kind of like using Parallels to run Windows software on a Mac). While compatibility is an understandable goal, the net result is usually anything but perfect from an engineering point of view.
Incredible that anyone would say such things. OS/360 didn’t catch on? OS/360 made the *world* go round for *years*.
And there is *no such thing* as a “hopelessly bug-ridden” mainframe OS. That’s almost by definition. If it’s bug-ridden, it’s not a mainframe OS. Reliability is *that* critical on this class of hardware.
I have been reading the list.
There’s a “troll” named L.V. Lammert on it insisting that virtualization improves security by allowing applications to run in their own containers.
However, I believe the opposite is true. I read the paper from the Google researchers, and it indicated the following:
1. Virtual machines have emulated devices.
2. You can very easily “fuzz” an emulated device through its device driver and cause it to crash (a sketch of the idea follows this list).
3. Since these VMs run with such high host privileges (needed to give you access to the hardware), you can more than likely run code on most of them and own the entire system.
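For the curious, here is a hedged sketch in C of what point 2 amounts to in practice: from inside a disposable guest, gain raw I/O port access and write random values to random ports, exercising the host’s emulated device models. It is Linux/x86-specific (iopl(2) and outb() from <sys/io.h>), must run as root, and is a toy compared with the fuzzing in the actual paper.

    /* Toy guest-side device fuzzer: write random bytes to random I/O
     * ports to exercise the host's device emulation. Run as root,
     * and ONLY inside a throwaway guest -- crashing the emulator is
     * the expected outcome. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>
    #include <unistd.h>

    int main(void) {
        if (iopl(3) == -1) {                /* request raw port access (root only) */
            perror("iopl");
            return 1;
        }
        srand((unsigned)getpid());
        for (long i = 0; i < 1000000; i++) {
            unsigned short port = (unsigned short)(rand() & 0xffff); /* random port */
            unsigned char  val  = (unsigned char)(rand() & 0xff);    /* random byte */
            outb(val, port);                /* poke whatever emulated device is there */
        }
        return 0;
    }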
The good things about VMs are (and I speak from a development manager perspective):
1. Easy to deploy development/test servers.
2. Easy to roll back changes.
3. Easy to run older operating environments such as NT4 on newer hardware.
4. Easy to recover from IF you’ve been backing up your virtual machine disk and config files correctly.
The bad things are (these are human factors, not technology):
1. The current trends encourage sloppy development because some developers will code their apps for one virtual machine and set of DLL files, instead of being more flexible and documenting their systems the right way.
2. The current trends encourage sloppy deployment, because some project managers will just assume that the big backup will back everything up and restore all the configs the way they were. You still have to document and write out all your DR documentation. VMware is not a magic bullet.
3. You don’t get extra security from running a VM on x86. You actually get less security, because you’re running virtualization on a chip architecture ill-suited to proper hardware virtualization. The IBM mainframe processors (now a superset of POWER for the zSeries mainframes) have it on-chip, in the firmware, and designed deeply into the OS. Even the Itanium is better designed for virtual machines. VMware, QEMU, Bochs, Virtual PC, and Virtuozzo/Parallels have all performed incredible tricks to pull it off. The VT extensions work, and the AMD extensions work, but they are nowhere near as robust as the IBM solution, which is simple, designed deeply into the hardware, and has worked for over 30 years.
This is what Theo’s talking about: keep it simple, audited, and well documented. The emphasis on virtual machines makes things more complex, and therefore less secure, because they can’t be simply explained.
4. The current trends actually waste more resources, because we’re seeing people deploy their applications as virtual machines. In other words, to run some little CMS or web-server app, we now need to run a virtualized instance of Linux or Windows, and possibly also backup software, monitoring software, and even antivirus for the virtual machine. These things cost money and/or resources in any corporation, no matter how large or small, whether you spend it on additional CPUs and memory or on Veritas, CommVault, Symantec/McAfee/Trend AV, or OpenView licenses.
5. The only apps that gain from virtualizing are the small ones that can’t effectively be run alongside other applications. Large applications get killed on performance: you’re not running Oracle Database 10g on VMware ESX without serious issues.
6. The management-software vendors gain from this overemphasis on virtualization, because certain pieces of software, such as AV, backup, and system monitoring, are still licensed per instance or named user, not per physical CPU. Look at who sells those and you’ll see who makes the most money off of it (hint: Symantec/Veritas/Altiris, CommVault, EMC, Microsoft, Cisco, Oracle, and McAfee). No wonder they’re pushing it: it’s a way for people to run many virtual servers on one physical server while still paying for the software on each.
I agree with Theo. Saying that virtualization improves security is a crock unless it’s designed into the hardware the way IBM does it. They’ve been doing it for over 30 years, with the patents and peer-reviewed papers to match.
How in the hell is this news or even interesting for anyone who isn’t already on the OpenBSD lists?