First up, a bit of clarification. By general purpose OS I’m referring to what most people use for server workloads today, be it RHEL or variants like CentOS or Fedora, or Debian and derivatives like Ubuntu. We’ll include Arch, the various BSD and OpenSolaris flavours, and Windows too. By “end” I don’t literally mean they go away or stop being useful. My hypothosis is that, slowly to begin with and then more quickly, they cease to be the default we reach for when launching new services.
So note that this isn’t about desktop workloads, but server workloads.
Outside of a few unicorns and bleeding edges, I have seen no great shift to containers that would support this article’s hypothesis. If anything, people are discovering that running any number of containers in production brings a whole host of new problems with it. In fact it’s not dissimilar to the cycle we saw with stuff like OpenStack, where it was touted as the solution to All Things, some people made a lot of noise but no progress, and it has turned out to be a massive waste of time and energy.
Not to mention the article makes the classic assumption that the author’s workloads are everybody’s workloads; because what you don’t know doesn’t exist, right? Let’s just say that containers are not a solution (and in fact would be a hinderance) to a whole bunch of potential workloads, and there are far more machines running those workloads than you probably think.
Here are two timely articles about running something like Docker in production.
https://thehftguy.wordpress.com/2016/11/01/docker-in-production-an-h…
https://patrobinson.github.io/2016/11/05/docker-in-production/
I would tend to agree on this. Outside of some specific cases, containers are actually used more in desktop and workstation setups than server setups (most modern browsers use containers to isolate plugins, and there are all kinds of other programs that use them).
Now, this also depends on what you consider a container. chroot() is a really primitive container, and that’s been around for decades and is pretty much ubiquitous, but that’s also because it’s very easy to use and has very specific and well understood implications. Virtual machines are hardware level containers, and they’re ubiquitous too in any reasonably large deployment, but again they are easy to use and have very well understood limitations. More ‘modern’ containers (ones on Linux that use cgroups and namespaces) aren’t even a 100% stable interface, don’t have very well understood side effects, and are generally buggy (and therefore not secure, and in turn not worth the effort).
As a super micro business, I use a lot of PaaS and SaaS systems (WP Engine, Galaxy, Digital Ocean, AWS) these days, because I don’t want to spend time at all on dev-ops – I’m an architect/engineer, that’s what I’m good at, and that’s what I want to spend my time on.
It may be true that these systems are still built on general purpose containers that have been specialized, but if the trends continue and these services get even better around the edges, I can easily see these appliance-like services taking over more and more from dedicated dev-ops teams. Right now there is a bit of a premium on them, but at a certain point they should become much more cost effective than the alternatives.
On the other hand, we probably thought we’d see more focused computing hardware/software in the personal computing space as devices have gotten smaller, yet it’s been the opposite. Consider the limits of BlackBerry or PalmOS against iOS or Android, and it’s clear why the latter have won the battle. They are simply more open (and Android will eventually eat Apple’s lunch, since Apple peaked last year; they are even following their predecessors into the graveyard of once popular platforms: enterprise, where BlackBerry previously went to die).
SaaS and PaaS are not the same thing as containers. They can leverage containers for security, but they can also just as easily use virtualization for that, or in some cases, just use whatever the software already has for isolation (which is usually a primitive form of containerization). Those are becoming ubiquitous for pretty much the reasons you mention (although they still have their issues; I for one hate trying to get reasonable debugging info out of MS’s Exchange Online e-mail service), but not all of them use containers (AWS for example is still mostly built on Xen based virtualization).
Despite this, I would still argue that containers won’t obsolete traditional general purpose server systems. Just like virtualization, cloud computing, cluster computing, NUMA, and many other technologies, there are just as many cases where containers aren’t useful as ones where they are. Take a shared file server for example. If it’s one system serving files, it’s pointless to use containers, as all the permissions checks you need are built into the server software and the OS already. Similarly, single tenant web servers don’t make much sense for containers either because you don’t need to isolate people. The two cases where containers are going to become commonplace are SaaS with multi-tenancy, and development, simply because those areas gain more than almost anywhere else from the benefits containers offer.
Not sure I agree there. See Amazon EC2, Google, and Netflix. They run containers in production quite well.
EC2 is hardware level VMs running on Xen systems. AWS in general uses an odd mix of both that and containers.
As far as any other case goes, by number of users, containers are used more on desktop systems than servers. If you pay attention, most of the companies using them in server setups have lots of time and money to spend on getting it working correctly, because getting it working correctly is hard right now. Meanwhile, the number of desktop users is roughly the number of people who use Edge, Chrome, Firefox, Opera, or Safari, as all of them use containers (to varying degrees) for plugin isolation.
Notice those companies are all Unicorns. They have large, well defined and compartmentalised workloads, a large engineering staff to make this stuff happen, and a good motivator to save pennies across millions of machine hours.
Oh and they all also have loads of non-container based workloads.
So let’s take a look at Google.
Is a unicorn? Check.
Has large, well defined and compartmentalised workloads? They have a very diverse set of workloads. Probably anything you can imagine, they have it. And they run pretty much everything inside of containers [1].
Has a large engineering staff to make this stuff happen? Check.
And a good motivator to save pennies across millions of machine hours? Check.
Now, what has this Google the Unicorn gone and done? They have taken the lessons that their large engineering staff has learned over the past 15 years, re-implemented Borg/Omega from scratch, and released that as an open source project called Kubernetes [2]. This is Google infra for the rest of us, and it’s good stuff. Mere mortals can now wield the weapons of the Unicorns. For the small fee of having to learn something new. I have introduced Kubernetes at a couple of companies already, and I didn’t have or need a large team of engineers to reap the benefits of Google-style infrastructure. These things aren’t beyond our means anymore.
ahferroin7 said that he wouldn’t containerise a single tenant web server. I would, in a heartbeat. Let’s see what deploying our website to a Kubernetes cluster might look like and you’ll see why you might want to do that:
❱ cat Dockerfile
FROM nginx:1.10.2
COPY my-html-directory /usr/share/nginx/html
❱ docker build -f Dockerfile -t myorg/my-website:1.0.0 .
❱ docker push myorg/my-website:1.0.0
❱ ls deployment/manifests/
my-website-deployment.yaml my-website-service.yaml
❱ kubectl create -f deployment/manifests/
We’ve built a Docker image and pushed that to our registry. We have then created a Kubernetes deployment and our Kubernetes cluster has spun up an instance of our website in a container. If the container dies, Kubernetes recreates it. If the node on which the container is running dies, Kubernetes recreates the website on another node.
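For reference, the my-website-deployment.yaml in that manifests directory might look something like this minimal sketch (the apiVersion varies with your Kubernetes version, and the labels and port are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-website
  template:
    metadata:
      labels:
        app: my-website
    spec:
      containers:
      - name: my-website
        image: myorg/my-website:1.0.0
        ports:
        - containerPort: 80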
But we can do better. To increase the availability of our website, let’s run three instances and have Kubernetes load balance requests among them:
❱ kubectl scale deployment/my-website --replicas=3
If we see heavy load (we wish!), we can go crazy with --replicas=10 or whatever. We sleep better at night already.
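This works because the my-website-service.yaml we created earlier fronts the pods. A Service might be as simple as this sketch (again illustrative): it selects every pod carrying the app: my-website label and spreads requests across them, however many replicas there happen to be.

apiVersion: v1
kind: Service
metadata:
  name: my-website
spec:
  selector:
    app: my-website
  ports:
  - port: 80
    targetPort: 80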
Now let’s say that there’s a new version of nginx.
❱ cat Dockerfile
FROM nginx:1.11
COPY my-html-directory /usr/share/nginx/html
❱ docker build -t myorg/my-website:1.0.1 .
❱ docker push myorg/my-website:1.0.1
❱ kubectl set image deployment/my-website my-website=myorg/my-website:1.0.1
Kubernetes will bring one container down at a time while creating a new one with the new version. No downtime. If there’s a problem and we want to roll back, we might simply do:
❱ kubectl set image deployment/my-website my-website=myorg/my-website:1.0.0
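Kubernetes also keeps a rollout history for each deployment, so (assuming a reasonably recent kubectl) the rollback can be a one-liner instead:

❱ kubectl rollout undo deployment/my-website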
We can easily canary the new version too. Maybe introduce a couple of containers at the new version in the load balancer for a while, as sketched below. Make sure that there are no issues, and then update the rest.
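One way to do that, as a sketch: because the Service selects on a label rather than a specific deployment, a second, smaller deployment of the new image under the same label will receive a proportional slice of traffic. Here my-website-canary.yaml is a hypothetical copy of the deployment manifest above with name: my-website-canary, image: myorg/my-website:1.0.1, replicas: 2, and the same app: my-website label:

❱ kubectl create -f my-website-canary.yaml
(watch your error rates and latency for a while)
❱ kubectl delete deployment/my-website-canary

If all looks good, set image on the main deployment as above and remove the canary.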
I personally am not going back to tending to individual web servers. A containerised web server becomes an application:
* Whose recipe can be checked into a source code repository
* Whose recipe can be built on. The nginx folks can do FROM ubuntu, and I then do FROM nginx
* That can be built automatically in a repeatable manner
* That can easily be copied, distributed and moved
* That launches in under a second
* That is automatically managed for me
[1]: https://softwareengineeringdaily.com/2016/10/20/google-cloudbuilding…
[2]: http://queue.acm.org/detail.cfm?id=2898444
What about all the rest of your infrastructure that isn’t a web server?
Like what? Databases? These used to be a Docker pain point because if you’re going to mount the host’s volume into your container then that container is pinned to the host and you lose the ability to move it around. However, Kubernetes has the ability to dynamically provision and use many types of persistent volumes [1].
If you can decouple the workload from the underlying compute and storage infrastructure, a lot of good things happen.
[1]: http://kubernetes.io/docs/user-guide/persistent-volumes/#types-of-p…
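As a sketch of what that decoupling looks like (assuming the cluster has a dynamic provisioner configured, e.g. GCE PD or AWS EBS): the database pod asks for storage by claim rather than by host path, and the claim follows the workload around.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi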
Databases, load balancers, bastions, DNS resolvers, NTP servers, monitoring & alerting, caches, the actual machines that run all of your containers…you know, the entire rest of the stack that even makes it possible to start a container and connect to it in the first place?
Just because you don’t see it, doesn’t mean it doesn’t exist. It very much all exists, it exists in very large numbers, and all of that means that general purpose operating systems will continue to be important for a very long time.
Actually, yes, we can (and do) run all of these in containers too! An application running inside of a container thinks that it’s running on top of a full blown Linux system.
Try this:
docker run -it ubuntu /bin/bash
And have a poke around.
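A sketch of what that poking might look like (exact output depends on the image, and on whether ps is present in it):

cat /etc/os-release    # reports a full Ubuntu release
ps aux                 # little more than your shell; the host's processes are invisible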
Right. A general purpose operating system.
Yep, sure. For now it’s an OS that has a lot of things a container doesn’t need, e.g. hardware support: sound, USB, etc.
I’ve said it before: Containers aren’t that interesting to me. We may be running Docker today, rkt tomorrow. Maybe unikernels eventually. Call it processes for all I care.
The orchestration part is the game changer. Google has come out and given us access to some of the crown jewels. I think it’s worth paying attention.
Oh totally. Habitat (from Chef) is also one to watch.
https://www.youtube.com/watch?v=PivpCKEiQOQ
lol, thank you. I’ve never seen that clip. Damn node.js hipsters. “I’m moving everyone to Windows!”, [girl sobbing] “Don’t cry, you can run bash on Windows 10 now.”
Hi,
Over time, “containers” will get leaner and more streamlined, until they look exactly like the “processes” we started with.
Then an OS somewhere will do something silly (e.g. some kind of temporary security problem with the implementation of containers on one OS); and some deluded moron will decide that we need “vessels of containers”.
Then, over more time, “vessels” will get leaner and more streamlined, until they look exactly like the “processes” we started with.
Then (for whatever reason) some deluded moron will decide we need “receptacles of vessels”.
Then, over even more time…
– Brendan
I know, I know! Someone will come up with a new buzzword!
And everyone knows that buzzwords are the pinnacle of science.
It will be just a matter of time before someone thinks of the name “turtle shell”.
and then all the things will have “Turtle Power”.
Namespaces are a Good Thing. If Linux had a proper chroot people wouldn’t be losing their minds over containerization.
I hate to tell you, but Linux implements chroot() pretty much exactly like all BSD implementations, Solaris, SVR4, HP-UX, AIX, and just about every other UNIX in existence.
I assume you’re intending to compare it to FreeBSD Jails, which are just containers (so is chroot() actually, it’s just a really primitive one), and originated for pretty much exactly the same reason that containers are getting hyped.
Yeah, chroot() doesn’t provide great protection, but it’s easy to use and has very well understood limitations, hence it gets used regularly. It’s also worth pointing out that it’s at the core of pretty much any complete system container system (as compared to basic application or process containment).
It’s also worth pointing out that there are quite a few cases where a chroot is kind of pointless, not because of the security implications, but because the functionality it provides is not needed. For example, Chrome’s plugin containers don’t need one; they just use namespaces and seccomp(), and many other applications that use containers internally fall into the same category.
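To make the distinction concrete, here’s a rough sketch with stock Linux tools, where /srv/jail is a hypothetical directory holding a minimal root filesystem:

sudo chroot /srv/jail /bin/sh
sudo unshare --fork --pid --mount-proc /bin/sh

The first command only swaps the filesystem root; the shell still shares the host’s PID space, network, and everything else. The second (unshare is from util-linux) puts the shell in its own PID namespace with a private /proc, so ps aux inside shows your shell as PID 1.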
Given the level of hype that’s accompanied Docker’s meteoric rise over the past couple of years, it’s inevitable that there would be a proportionate backlash against Docker and containers.
A lot of the complaints that I see are against Docker specifically. And yes, the Docker tools have been downright buggy at times. But things will stabilise eventually. Whether we end up using Docker, rkt, or some other image format doesn’t matter. For better or worse, containers are replacing VMs as the new deployment abstraction. They have been part of Google’s secret sauce for over a decade by the way.
Personally I don’t find containers that interesting. It’s the orchestration part that I’m excited about. I’ve been using Kubernetes for just over a year now and I’m a big, big fan.
<pedantry>
English Teacher (ET?) in South Korea points out that:
“Hidden behind my hypothosis, which mainly went unsaid, was that containers are becoming the unit of software.”
And:
“…(and in fact would be a hinderance)…”
Clearly one thing that all computer bods have in common is an inability to spell properly.
This isn’t trolling – I see everything exactly the same here in Korea every day – but gentlemen (and gentle ladies, if there are actually any here), if English is your first language, there’s no excuse.
</pedantry>
It’s something to do with the Internet being an informal communication medium. Also, I was drunk.
Vanders,
I’m just laughing that, of all the comments posted at OSNews, your misspelling of “hinderance” is the one being singled out to make an example of.
Surely dionicio deserves an honorable mention, just for the funniest grammar ever. REALLY LOOKING FORWARD. “There’s no excuse” [paraphrasing].
(Just to throw some fodder into the mix, haha)
I really thought (and I mean no offense) that Dionicio was a spam bot, and I have to admit that I contemplated several times reporting him to an admin.
It’s things like that that throw you off your balance and make your day well worth it.
Not an English teacher here, but at the risk of being pedantic, I’d have to point out that you’re actually being punctilious here.
oh you havent seen nothing yet..it’s like spellcheck never existed..let alone proper education..it sucks….
Sorry, I couldn’t resist. The double period with no space is one of my pet peeves in particular.