Now this is interesting. We only briefly touched upon Qubes two months ago, and now the team behind the project has announced a very interesting feature: disposable virtual machines. The idea is that you can tell your operating system to launch an application in a virtual machine created specifically for that application. When you close the application, the VM is destroyed automatically, and all of this happens in under one second.
Qubes is an interesting operating system that builds upon the strengths of Linux, the Xen hypervisor, and the X windowing system. Its goal is to create a secure system through the use of virtualisation. “Qubes lets the user define many security domains implemented as lightweight virtual machines, or ‘AppVMs’,” the website reads, “E.g. user can have ‘personal’, ‘work’, ‘shopping’, ‘bank’, and ‘random’ AppVMs and can use the applications from within those VMs just like if they were executing on the local machine, but at the same time they are well isolated from each other.”
Qubes supports secure copy and paste between these virtual machines. The virtual machines run only a small X server with a dummy driver, so they take up relatively little RAM. Apart from the AppVMs, several system components run inside special virtual machines too, such as the networking stack.
They’ve now taken all this a step further by announcing the disposable virtual machine concept. You can right-click on any file or application and select ‘open in a disposable VM’; the system will create a virtual machine and launch your application in it, which should take less than one second. Once you’re done and close the application, the VM is destroyed automatically.
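To make the lifecycle concrete, here is a minimal Python sketch of the idea. The vm_* helpers are simulated stand-ins, not real Qubes tooling (Qubes itself drives Xen); they only exist to show the create/run/destroy flow described above.

    import itertools
    from contextlib import contextmanager

    # Simulated stand-ins for the real VM tooling.
    _ids = itertools.count(1)

    def vm_create(template):
        vm = f"disp{next(_ids)}"
        print(f"cloning {template} -> {vm} (copy-on-write, so this is fast)")
        return vm

    def vm_run(vm, command):
        print(f"[{vm}] running: {' '.join(command)} (blocks until the app exits)")

    def vm_destroy(vm):
        print(f"destroying {vm}: its private disk and all changes are discarded")

    @contextmanager
    def disposable_vm(template="dvm-template"):
        vm = vm_create(template)
        try:
            yield vm
        finally:
            vm_destroy(vm)  # runs even if the application crashes

    def open_in_disposable_vm(path):
        with disposable_vm() as vm:
            vm_run(vm, ["xdg-open", path])  # the file only ever exists inside the throwaway VM

    open_in_disposable_vm("suspicious.pdf")

The destroy step is trivial; the interesting part is making the create step fast, presumably by cloning a pre-prepared template rather than booting a full OS from scratch.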
“Basic support for Disposable VMs is planned for Beta 1, which is scheduled sometime at the end of the summer holidays,” Joanna Rutkowska, the driving force behind Qubes, writes, “But I can tell that’s just the beginning. The ultimate goal, from the user’s point of view, would be to make Qubes OS to look and behave just like a regular mainstream OS like Linux, or Windows, or even Mac, but still with all the strong security that Qubes architecture provides, deployed behind the scene. Seamless support for Disposable VM is one of the first steps to achieve this goal.”
This is a very interesting approach, and in all honesty, I’m kind of wondering why none of the big boys has ever thought of this – spelled out like this, it sounds like such a no-brainer.
How does this differ from a normal OS with highly restricted interprocess communication? It seems like nothing more than an OS that takes process partitioning all the way down to the filesystem level.
Not sure I would actually call this a VM…
I take it you didn’t read the Qubes FAQ:
It’s using Xen for the virtualization.
Not sure how this changes the original question. OK, so it goes to the extent of firing up a virtual machine, but really, what does this buy you?
Security, obviously.
Say you have a central system offering diverse services/apps to your users, for example in a school or government office. You want them to access only what is predefined, but you don’t want the maintenance cost of supporting so many users.
The user runs their app on a thin(ish) client, which quite happily runs as you would expect. When the user is done, it closes on the central system, wiping clean any silliness they got up to, including viruses and unwanted extras.
This same idea can be extended to internet kiosks or similar systems in airports or libraries.
It is more than just restricted IPC. The entire file system, including any changes made to it, is gone as soon as the application is done. Kind of like incognito mode taken to extremes. I used to use this a lot when testing Windows install programs: I used a snapshotted VM whose changes I could either apply or roll back when I shut it down. It was very good for testing that my installer did not break the registry.
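For readers who haven’t done this kind of testing, the usual trick is a copy-on-write overlay on top of a read-only base image, along the lines of the sketch below. The image names are made up, and the exact qemu-img flags vary between QEMU versions (newer ones also want -F to name the backing format).

    import os
    import subprocess

    BASE = "clean-windows.qcow2"    # pristine base image, never written to directly
    OVERLAY = "test-session.qcow2"  # throwaway overlay that captures every change

    def start_session():
        # All writes made while testing the installer land in the overlay only.
        subprocess.check_call(["qemu-img", "create", "-f", "qcow2", "-b", BASE, OVERLAY])

    def roll_back():
        # Discard everything the installer did: just delete the overlay.
        os.remove(OVERLAY)

    def apply_changes():
        # Keep the changes: merge the overlay back into the base image.
        subprocess.check_call(["qemu-img", "commit", OVERLAY])
        os.remove(OVERLAY)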
This will be great for trial software that hides stuff in your ~/.conf/blabla directory to determine how many days you’ve been running it.
Would running Firefox ask you if you want to import IE settings every time you start it then?
By default it probably would:
I wouldn’t be surprised if this will enable the user to modify/customize the filesystem, and possibly, in the future, the Windows registry too, perhaps by simply snapshotting the AppVM at some point.
…are doomed to repeat it.
This has been thought of before by the “big boys”. Trusted Solaris, as its core method of isolating processes from each other, creates lightweight OS containers as applications are started with “trusted” labels, and removes those containers once the application is stopped. “Trusted” application contexts are completely separate from each other, even to the point where a GUI application with an “untrusted” label cannot cut/paste to or from an application with a “trusted” label.
When Solaris 10 came out in 2005, the OS extensions that make Trusted Solaris work (containers, labels, etc.) were rolled up into the main OS as “zones”, and the “userspace” aspects (mostly application mods and CDE/GNOME mods to isolate “trusted” X apps from “untrusted” ones) are still available as downloadable extensions, mainly used in government circles.
I’ve seen this idea thrown around for years as a way of isolating applications and easing cross-platform development. Say goodbye to dependency issues as well.
The downside of course is the performance hit. In those screenshots it notes that those VM apps take about 400 MB of RAM each.
One assumes you would predefine the memory and resources of the VM based on the requirements of the app, similar to Mac OS 9 memory management.
You can’t get around the fact that every app will require its own instance of X, which will suck up memory.
Err, no, it doesn’t. It could perfectly well use the host’s X-server.
But there’s a trade-off there: each time you share something, there’s a possibility that a cracker can use a vulnerability in the shared element to access information he shouldn’t have had access to.
Well I’m talking about their implementation.
But even if you had a single-server/multiple-client implementation, you would still have an increase in overhead. It would also take a lot more work to create, and you would be providing an avenue for apps to break out of the VM.
As much as I like VMs, I don’t think they’re a panacea for all security problems, really.
I’d rather see a really good, well-coded base OS as a foundation and then some well-designed VMs on top. To me it’s not just about creating new layers of abstraction, but about avoiding common coding errors.
BTW, I think Joanna does a great job with Blue Pill and the like – just another clever Polish researcher, but I’d rather stay focused on some basic problems first.
I wonder how they get the whole thing to be nearly instant. This would be a nice feature for something like a cloud infrastructure. Being able to roll out a new OS in a minute or so would be cool.
VMs are software, too, and thus insecurity must be assumed.
I guess the benefit to this is transparency to the applications. But I can’t imagine it’s any more secure than, say, a FreeBSD jail, and definitely not more so than some microkernel or Plan 9-type OSes that run almost everything in user space. As one goes in that latter direction, though, app compatibility suffers.
I suspect all this will eventually be moot, though. The malware writers will turn to web-based exploits that run as your user to get into your bank account.
The idea, I guess, is that you run an instance of Firefox with your bank website in a “Work” VM, and another instance of Firefox to visit h4x0rzrUs in a “Random” VM. The fact that they are both launched by the same user is irrelevant, which is very good. But a 400 MB VM for each security domain seems like overkill (rough numbers below). I think something like Solaris zones or Bitfrost would be preferable, as long as you trust the drivers.
I think the best security models are capability-based, like in Genode or the Newspeak environment.
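For a rough sense of the overhead being debated here, this is back-of-the-envelope arithmetic using the ~400 MB-per-AppVM figure from the screenshots and the example domains quoted earlier; nothing more than that is implied.

    MB_PER_APPVM = 400  # figure quoted from the Qubes screenshots above
    domains = ["personal", "work", "shopping", "bank", "random"]

    total_mb = MB_PER_APPVM * len(domains)
    print(f"{len(domains)} domains x {MB_PER_APPVM} MB = {total_mb} MB (~{total_mb / 1024:.1f} GB)")
    # -> 5 domains x 400 MB = 2000 MB (~2.0 GB), before you run anything else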
Security through obscurity is dull, cumbersome and hopeless; security through correctness is folly. The only viable approach is security through isolation. That’s what separation of processes and users is all about.
The problem is that security based on users, groups and permissions is the wrong model. But instead of switching to a better model (like capability-based security), the industry has kept the old one, throwing new layers of obscurity and malware-hunting on top (should that be included in “obscurity”?) and pointlessly insisting on correctness.
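To illustrate what “capability-based” means in practice, here is a toy Python sketch (not how Genode or Newspeak actually do it): authority is carried by explicit references handed to a program, rather than deduced from which user the program runs as.

    import os
    import tempfile

    class ReadCapability:
        """A token granting read access to exactly one file, and nothing else."""
        def __init__(self, path):
            self._path = path  # the holder never chooses the path itself

        def read(self):
            with open(self._path) as f:
                return f.read()

    def pdf_viewer(doc_cap):
        # The viewer receives one capability and no ambient authority:
        # it has no business wandering off to read ~/.ssh/id_rsa "as the user".
        return doc_cap.read()

    # The user's shell decides exactly what to delegate:
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write("bank statement contents")
    print(pdf_viewer(ReadCapability(path)))
    os.remove(path)

Nothing in Python actually stops the viewer from calling open() on some other path, of course; real capability systems enforce, at the kernel or language level, that such references are the only way to reach a resource.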
Hopeless? On the part of whom? If any security method is cumbersome, it is isolation through a VM. Security by obscurity can be very effective, but it should only be used to enhance security, not provide the basis of it.
Meh, I wouldn’t describe the work of the OpenBSD team as folly.
There’s also security through authentication.
A big problem with a new VM-based OS is that even if you design a highly secure system where every unverified app is isolated and acts native, you have still done nothing for the over 1 billion desktop PCs in the world. Hackers would shrug their shoulders and continue to focus on XP. You have to consider security improvements in the context in which they would be applied.
Since this system is designed around Linux apps, the overall gain in security will be minimal, especially since I doubt the typical Linux user would be willing to trade performance for additional security. But it wouldn’t surprise me if future MS and Apple systems make greater use of VMs for security reasons.
I mean that security through obscurity is hopeless because eventually all those little obstacles become known and then they are useless, unless you change them all the time, hence the “cumbersome” part. Also, if by “obscurity” one means running some little-known OS, it’s also hopeless, because every OS should aspire to the best user experience, which will eventually translate into a larger user base.
The OpenBSD guys are very clever and they work a lot to minimize the damage of using a bad security model, but that doesn’t change the fact that the security model is broken. Believing you can be safe running an OS that malware writers actually target, just by making sure that all your applications are bug-free, *that* is what I think is folly.
As I said, I agree that basing security on heavyweight VMs is a waste of resources, but it needn’t be very cumbersome to the user (compared to a roughly equivalent level of security achieved by other means) as long as the OS is designed around the concept, and it provides for things like secure copy-pasting, as in this case.
I wonder if it wouldn’t be good enough to just create a new dummy user account with strict MAC/SELinux rules applied, copy the needed data files (say, PDF files in her example), launch the needed application, connect to it via the X protocol, VNC or whatever, and, when done with it, close and dispose of the user and its home directory (roughly as sketched below). Maybe a FreeBSD jail would also help security-wise.
I guess a VM for each application is way too much.
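A rough sketch of that recipe, driving the standard Linux account tools from Python: it needs root, the MAC/SELinux confinement and the X/VNC plumbing are left out, and the viewer and file path are just placeholders.

    import shutil
    import subprocess

    def open_in_disposable_account(document, app="evince"):
        user = "dispuser"
        subprocess.check_call(["useradd", "--create-home", user])
        try:
            copy = f"/home/{user}/{document.rsplit('/', 1)[-1]}"
            shutil.copy(document, copy)            # copy the needed data file in
            subprocess.check_call(["chown", user, copy])
            # run the viewer as the throwaway user (su here; sudo -u would also work)
            subprocess.check_call(["su", "-", user, "-c", f"{app} {copy}"])
        finally:
            # dispose of the user and its home directory, discarding any changes
            subprocess.check_call(["userdel", "--remove", user])

    # open_in_disposable_account("/tmp/statement.pdf")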
I don’t think that method is much better, either.
Say, for example, you’re working on a few files that are ~100 MB. The time taken to copy those files into a dummy home would be irritating, to say the least.
You might be right. Still, the method I describe should be better, especially in terms of performance and resource usage, than Qubes’ disposable VM method, where one needs to run a separate VM instance for each application! That was my point.
If the difference security-wise is nearly none, why on earth would one want to waste so much more memory and disk on every single application instance he or she runs? One might then not be able to run many instances at a time, which is something many people demand, especially when there are other secure methods that require fewer resources and would let one run more instances at the same time.
So I feel like the “disposable VM” is not ideal, or not really practical, for real-world usage. You know, even though computing power keeps growing almost exponentially (at least in my eyes), applications are getting slower and bigger at the same time.
However, projects like these are needed to explore such possibilities.
I hope more money is spent on more practical and meaningful research… and I don’t like marketing phrases.