Namespaces and cgroups are two of the main kernel technologies that most of the new wave of software containerization (think Docker) rides on. To put it simply, cgroups are a metering and limiting mechanism: they control how much of a system resource (CPU, memory) you can use. Namespaces, on the other hand, limit what you can see: thanks to namespaces, processes have their own view of the system’s resources.
The Linux kernel provides six types of namespaces: pid, net, mnt, uts, ipc and user. For instance, a process inside a pid namespace only sees processes in the same namespace. Thanks to the mnt namespace, it’s possible to give a process its own view of the filesystem (much like chroot). In this article I focus only on network namespaces.
If you have grasped the concept of namespaces, you probably already have an intuitive idea of what a network namespace offers. Network namespaces provide a brand-new network stack for all the processes within the namespace. That includes network interfaces, routing tables and iptables rules.
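As a quick illustration (my own sketch, not from the article): the snippet below uses Python’s ctypes to call the kernel’s unshare(2) with CLONE_NEWNET and then lists the interfaces it can see. It assumes a Linux box, root privileges and the iproute2 “ip” tool; inside the fresh namespace only a down loopback device shows up, with no routes or iptables rules.

    import ctypes, os, subprocess

    CLONE_NEWNET = 0x40000000  # flag value from <linux/sched.h>

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    # Detach this process into a brand-new network namespace (needs CAP_SYS_ADMIN).
    if libc.unshare(CLONE_NEWNET) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    # Only an unconfigured loopback interface exists here; the host's
    # interfaces, routing tables and iptables rules are invisible.
    subprocess.run(["ip", "addr"])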
One task, One Tool. Thanks Diego
VMs are overrated.
Nah, they are not. Not at all. You can’t run Windows 2012 Server or Solaris x86 in a Linux container, but you can do it in a Xen VM (with very little overhead and, most important, no host OS dependencies at all). Simple, elegant and clean.
Containers are way overrated thanks to this new “Docker mania” (nobody gave a shit about them before Docker). FreeBSD Jails, Solaris Zones, even User Mode Linux have been doing exactly the same thing as containers for the last 10 years (at least), and nobody claimed that VMs were overrated.
You are a fashion victim, my friend.
Containers do have certain uses that can’t really be covered with VMs. For example, on Linux, everything under Google Chrome (each tab, each plugin, each app, and I think possibly even every script) gets run in its own container, almost completely isolated from the rest of the system. This really can’t be done with a VM (it might theoretically be possible with Xen, but doing so would require an insane amount of work, and would mean Chrome would have to run with root-equivalent privilege). Similarly, a lot of distro build systems use containers these days, because it’s a lot easier to quickly set up a controlled environment in a container.
I agree that most of the hype recently has been because of Docker (or similar things like sandstorm.io), but there are still use cases outside of Docker-type things that just aren’t practical with VMs.
I agree 101%!!! They are different tools; that’s why I said VMs are not overrated at all.
You have to use the right tool for the job. Sadly, the computer world is usually driven by fashion instead of common sense…
If you want to convince your manager you have to mention “Cloud”, “Docker”, “OpenStack” or whatever fancy word is in vogue. It doesn’t matter whether it makes sense or not; you must put it in the mix because it sounds cool!!
Sorry, but I hate this stupid culture… and the first post of this thread is a perfect example of that.
No disrespect intended, but do you know how the sausage you call “simple, elegant and clean” is made? The things they do to make VMs work even somewhat efficiently?
I was running containers before x86 hardware virtualization came out; they did not need any hardware support to be efficient, and I would consider that to be much more elegant.
What do I think about VMs versus containers? They are different tools, which can be used for different situations.
Not disrespectful at all; we are on the same page.
VMs are a better and cleaner solution in some cases (full system virtualization) and containers are a better solution in other cases (app isolation, app testing, etc.).
PS: the “sausage” that x86 hypervisors are is justified by the incredible usefulness, isolation and stability they bring to an ultra-commodity platform. I can assure you that “cheap” x86 virtualization like Xen or ESXi, in many cases, works better than “elegant” and super-expensive Unix hardware solutions like LDOMs or LPARs. I think x86 hypervisors are one of the very few things where you can say “hey, this is really good”. I’ve been working with all these Unix and x86 virtualization technologies for years, and what x86 virtualization has achieved is just incredible (and unthinkable 15 years ago). Overrated my ass. x86 VMs are wonderful.
I don’t think VMs are overrated, but their use cases are changing as newer, better methodologies are developed.
VMs solved a particular problem in the mid-2000s, and they became phenomenally successful. As servers got more and more RAM and more cores, systems were consolidated onto hypervisors, predominantly VMware’s ESXi. These systems, like their bare-metal predecessors, lived forever. They lived forever because the application installed on them was difficult to move. As such, when we moved a Linux system into a VM, it needed to run Linux. Windows into a VM, it needed to run Windows. FreeBSD, we moved it into a FreeBSD VM. The ability to go physical-to-virtual without changing much, plus tools like HA/DRS and vMotion, made the change easy. We got more efficient utilization (no more servers taking up power and giving off heat while only running at 2% CPU) and still had “servers” with various operating systems. We had ESXi of course, and Linux had KVM and Xen, and for the most part FreeBSD missed out on this generation as it lacked a decent hypervisor (it still does).
The thing is, these systems were built to live forever, partly because the applications they contained were tightly integrated with the OS they were installed on. So it’s difficult to separate the app from the server/VM.
The servers are highlanders.
Enter Amazon, and the trend towards ephemeral compute nodes. These are compute nodes that spin up when needed and are destroyed when they’re not. The average lifespan of a Netflix VM in Amazon is 36 hours. Rather than install an application on a highlander-style everlasting system and deal with the system, there’s an automated build-install system (several methods are available). So when you update an application, you blow away the old VM and create a new VM, installing the application entirely from some sort of definition (more flexible, achieves better consistency, etc.). The application is now separate from the server/VM it runs on.
VMs are also huge. They are generally several gigabytes, so they’re tough to move around. If I’m a developer and I write a microservice and put it in a VM, it’s tough to move that VM from one location to another. There’s a ton of dead weight, literally gigabytes of crap to move that has absolutely no relevance to the app.
So we tend not to move them outside of a DC anymore. And if we’re developing an application, we tend not to encapsulate it in a VM anymore.
Containers have the benefits of ephemeral VMs (capturing environment variables, binaries, etc.) but are much, much smaller, taking up megabytes instead of gigabytes, so they’re far more portable.
Google has been doing this for over a decade. They don’t use VMs. They’ve had to solve massive scaling problems as well as being able to push code changes quickly, so they’ve used Linux containers for over a decade for Gmail, Search, Maps, etc. Right now they launch over 2 billion containers per week.
Docker provided the tooling, APIs, repositories, etc. to encapsulate discrete applications and settings into a container that can be moved from a laptop to a VM in a datacenter to a container service in Amazon to wherever you’ve got a Linux server with the Docker service running. Much of that isn’t new, but it was combined in a way that makes things easier to consume for application developers. I think the Jails approach was that from a sysadmin perspective. Docker-style containers are from the perspective of an application developer, and the tools it provides them are why it’s such a “fashion” (it’s really not). It also provides much better networking support for multi-tenancy, with overlay networking (based on VXLAN), which is not something that FreeBSD or Jails has ever been good at.
Jails deserve credit, of course, for blazing the trail. But every piece of technology is based on something that came before it.
The value-add of operating systems is mostly gone these days. Developers don’t care if they’re running on Linux, FreeBSD, NetBSD, or whatever. Consumers of these services certainly don’t care. All they care about is whether it works, and works reliably. Even Windows provides that (though there’s more cost involved there).
VMs are better than containers/jails/zones for multi-tenancy right now, so a combination of the two is being used. If you have Coke and Pepsi both wanting to run containers, typically a hypervisor is used, two VMs are spun up (one for Coke and one for Pepsi) and containers are launched inside.
The right tool for the right job. And no, they’re not a fashion victim.
Super interesting post man, thanks!
BTW, I don’t think Docker is a “fashion item” itself; I think PEOPLE saying that VMs are overrated just because we have “Docker” (or the fancy word of the moment) are fashion victims.
You put it perfectly in your post: containers are useful for solving some new problems/approaches, and VMs for others. Different problems, different tools.
In the case of “traditional” server virtualization (no transient nodes, no application testing, just regular servers), I think hypervisor solutions like Xen or ESXi are a simpler and cleaner solution than Zones/Containers/Jails, because those technologies have a lot of dependencies on the host OS and make live migrations pretty complex and unreliable (e.g. you can migrate Solaris Zones between hosts, but it sucks compared to the flexibility and speed of vMotion/XenMotion).
In my humble experience, Xen and ESXi are the best virtualization technologies I’ve ever used (and I’ve used a lot: hardware-based, software-based, Unix-based, Intel-based, even System z ones). The poor and always underrated x86 systems have the best solution!
But why do you want to run Windows 2012 Server or Solaris? I guess if you have no other choice, a VM makes sense. Azure becomes pretty enticing for Windows, obviously. And outside of legacy things, I’m not sure what Solaris would be recommended for exactly. Maybe just for ZFS or DTrace in certain use cases?
If you’re exclusively Linux, containers make a lot more sense in many cases.
And to be fair, Solaris Zones make a lot more sense if you’re all Solaris, or FreeBSD Jails if you’re BSD only.
You can run your Solaris x86 applications in a container with QEMU user-mode emulation and your Windows applications in a container with Wine. The overhead will be significantly lower then.
Really? They’ll all run?
Nope… that’ll work for some, but the selection is still rather limited; in particular, products coming out of Microsoft tend not to work in their latest versions. So yeah… good luck with that.
I don’t think VMs are overrated at this point. It’s like calling disco dead. Sure, they have a use, but that use case is increasingly being relegated to legacy.
These have saved me a LOT of hassle running integration/test stuff.
I can fire up my test server and run unit tests against it, inside a network namespace, without having to diddle around randomising ports or injecting new ports into the code (which, natch, would mean I’m not testing the production code).
Inside that network namespace everything is nicely “virtualised” and I can run multiple network tests on one box in parallel – this is good as getting a machine with 16+ cores and a ton of RAM is pretty cheap!
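To make that workflow concrete, here is a hypothetical sketch of my own (not the poster’s actual harness), assuming root, Linux, Python and iproute2: the test run unshares a network namespace, brings up loopback, and binds the server to its usual production port, so parallel runs on the same box never collide.

    import ctypes, os, socket, subprocess

    CLONE_NEWNET = 0x40000000  # flag value from <linux/sched.h>

    def enter_private_netns():
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        if libc.unshare(CLONE_NEWNET) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        # The new namespace's loopback starts out down; bring it up for the tests.
        subprocess.run(["ip", "link", "set", "lo", "up"], check=True)

    enter_private_netns()
    # The production port (8080 here, purely illustrative) is free in this
    # namespace no matter what the host or other parallel test runs are doing.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))
    srv.listen()
    print("test server listening on its namespace-private port 8080")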