Have you ever tried to install Minecraft and seen an error message like, “This application requires a Java Runtime Environment 1.6.0”? Or you try to install something on Windows, and you get an error that says some .NET framework is missing? Or, as a more basic example, have you ever spent a couple hours setting up a new computer with all your applications and preferences?
Those are the kinds of problems Docker, and “containers” more broadly (Docker is kind of the Kleenex of containers), are meant to solve. Docker makes it easy to install Linux applications on servers, along with their required dependencies and whatever preferences you might have for those applications. And, as an added bonus, conflicting dependencies between applications (maybe one app relies on Python 2, and another app relies on Python 3) aren’t an issue, because everything is isolated in different containers.
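For example (a rough sketch, with image tags and paths that are purely illustrative), the two conflicting apps could run side by side like this:

    # each app gets its own isolated Python; nothing is installed on the host
    docker run -d --name legacy-app -v /srv/legacy:/app python:2.7 python /app/main.py
    docker run -d --name modern-app -v /srv/modern:/app python:3.6 python /app/main.py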
Nope, this is a solution in search of a problem. What you just described is a package manager. Just use a distro with a good package manager and maintained programs. Repackaged static linking is not a nice solution.
Also, Python is a bad example, as various versions of Python are easily co-installable.
Easily, but not by system package managers. Setting up multiple python environments is not something yum, apt, zypper or pacman handle.
There are ways to do it, but they’re *not* as easy as using the system package manager.
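For example, pyenv can build and juggle multiple interpreters side by side, but it's a separate tool you have to install and learn first (sketch):

    # pyenv compiles interpreters from source, outside the package manager
    pyenv install 2.7.15
    pyenv install 3.6.5
    pyenv local 3.6.5    # pin the current project directory to 3.6.5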
That’s literally as simple as sudo apt-get install python python3.
Sometimes. On a Red Hat 6 system I manage, yum suddenly stopped working one day. It seems someone with sudo decided to replace the system Python with Python 3.
And this was a professional Python developer. In theory.
Paludis and Portage support it just fine. I currently have Python 3.4, 3.5, and 3.6 all installed side by side, by the package manager (also GCC 5, 6, and 7, and Clang 5 and 6).
As I said: *good* package manager (with well written packages).
As has been mentioned, Python is a bad example.
More important, however, is that custom environments is only a small fraction of why Docker is useful. See my more detailed response farther down.
I would describe containers as a lightweight method to rapidly construct and deploy a minimalist, isolated environment for one or more applications.
Try this with portage: Build an Ubuntu 14.04 environment to compile the Android Open Source Project OS in, without creating a VM.
I agree that quickly deploying a custom, prefabricated environment is what docker is good for. Your example of a specific environment for Android is actually quite good.
What I’m arguing is that its use for “normal” application deployment is a misuse and a waste of resources.
How about let’s not install a Java 1.6.0 app on the server, right?
How else do you propose to run a Minecraft server free of the more recent changes such as the hunger mechanic?
(Disclaimer: I’m not actually sure if going back that far requires Java 1.6, but it should illustrate the concept.)
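With docker it's a one-liner, something like this (a sketch: the image tag is an assumption, any image shipping a 1.6-era JRE would do, and the paths are illustrative):

    # the old JRE stays inside the container; the host's Java is untouched
    docker run -d -p 25565:25565 -v /srv/mc:/mc java:6 \
        java -Xmx1G -jar /mc/minecraft_server.jar nogui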
This has been possible for years just by using chroot(): you can tar up an entire system with all its libs, dump it somewhere, and run your apps through chroot…
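Something like this (sketch; paths and the app name are illustrative):

    # tar up an existing system root on the donor machine...
    tar -czpf system.tar.gz --one-file-system -C / .
    # ...dump it somewhere on the new host, and run the app through chroot
    mkdir /srv/oldroot && tar -xzpf system.tar.gz -C /srv/oldroot
    sudo chroot /srv/oldroot /usr/bin/myapp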
Of course, it's horribly inefficient: you end up with multiple copies of all your libs in memory, with almost as much overhead as running a full VM with its own kernel.
Someone else had the right idea: a proper package manager with maintained packages is a better approach. Most software can be recompiled to support newer versions of libraries; the only problem is when you want to go outside the packages supported by the repositories.
Docker is a case of choosing convenience over efficiency.
OK, while we're at it, let's set up that chroot with its own network interface, using a local, private subnet to communicate with other chroots.
And now, let’s see you share data between them.
If you haven’t played with docker, comparing it with package managers, chroots, or even jails, is totally inadequate.
Calling it a lightweight version of vagrant would be better, but that doesn’t describe it either.
I do a lot of Puppet development; we manage around 250 machines at any given time with Puppet.
I've moved my puppet server environment (puppetdb, puppetserver, puppetboard) all into a docker environment managed by docker-compose. Each service has its own container, with exactly the libraries needed to run. Ports needed for internal communication are only available on the private 172.x.x.x/24 network, and all configuration data is external to the container(s), so I can delete / recreate the containers at any time.
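Stripped of the compose file, that boils down to roughly this (a sketch, with image names and host paths that are illustrative, not the actual ones I use):

    # private bridge network; services talk here, nothing is published to the host
    docker network create --subnet 172.20.0.0/24 puppet-net
    # all state lives on the host, so containers are disposable
    docker run -d --name puppetdb     --network puppet-net \
        -v /srv/puppet/db:/opt/puppetlabs/server/data puppetdb-image
    docker run -d --name puppetserver --network puppet-net \
        -v /srv/puppet/conf:/etc/puppetlabs puppet-server-image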
The really cool part is that I can take a copy of the data, move it to a test system, build a totally private copy of the puppet server, and test it with docker containers for each OS/version combination I support in my environment.
I can then automate that process, and have my gitlab server run a series of automated tests inside a docker environment to verify the code is legitimate before pushing it out to my production puppet server.
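The test stage is essentially this (sketch: the image names are hypothetical, the puppet flags are real):

    # dry-run the manifests in a throwaway container per supported OS/version
    for img in centos7-agent ubuntu1604-agent; do
        docker run --rm -v "$PWD:/code" "$img" \
            puppet apply --noop --modulepath=/code/modules /code/manifests/site.pp
    done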
Compiling the Android OS (don't ask why) requires specific versions of Ubuntu: if you want to compile Android 6, you have to do it under Ubuntu 14.04, in a fairly customized environment. You could create a VM guest, but then you have to allocate enough memory and CPU to the VM.
Or you can grab a prebuilt Ubuntu 14.04 docker container, with build script, and build it inside the container, regardless of what OS you’re using (heck, using docker and WSL, you could do it under Windows 10, but, WHY?!?!?).
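In practice it looks something like this (a sketch: ubuntu:14.04 is the stock image, the build steps are the standard AOSP ones, and you'd still install the usual build dependencies first):

    # mount the source tree into a stock Ubuntu 14.04 container
    docker run -it -v /srv/aosp:/aosp ubuntu:14.04 bash
    # then, inside the container, after installing the build deps:
    cd /aosp
    source build/envsetup.sh
    lunch aosp_arm-eng
    make -j8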
I've used chroot, jails, vagrant, vmware, virtualbox; each is a tool with its own purpose.
docker is a different tool.
Docker, LXC, etc, are glorified chroot jails that leverage resource quotas and clever overlay filesystems and add management systems with a rich repository of pre-made containers.
We old-schoolers like to laugh at things like Docker but these evolved “container” frameworks have really simplified and popularized chroot jails, something that has been available for many decades.
The emergence of kernel features such as cgroups and file systems like unionfs had a lot to do with this.
Chroots were hard to manage and inefficient, with duplicative file management. Now they're nearly effortless and efficient, with a large community of people working on frameworks and repositories.
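The overlay part is what kills the duplication; at the kernel level it's just a mount (sketch, with illustrative paths):

    # one read-only base layer shared by every container,
    # plus a tiny writable upper layer per container
    mount -t overlay overlay \
        -o lowerdir=/base/rootfs,upperdir=/c1/diff,workdir=/c1/work \
        /c1/merged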
Look, for example, at how easy it is to install Guacamole with docker:
https://guacamole.apache.org/doc/gug/guacamole-docker.html
and you will understand why I do not really think docker is the “solution” for installing software.
Containers are nice for developers, as they are very easy to copy between environments. They are also nice in cloud environments: you can deploy and load-balance them as needed, they abstract away most of the operating system from your application, and you can use cloud services to deploy and manage the running containers, letting others fiddle with boring things like servers and operating systems.
They are, however, already starting to look slightly old school, as the new trend abstracts even the containers away as another boring thing for others to fiddle with: instead, you deploy your code down at the API function level.