Ubuntu Desktop will eventually switch to Snappy packages by default, while continuing to provide deb-based images as an alternative, at least for a while. I’m sure this doesn’t come as a surprise to some of you, but further details regarding this have been revealed today.
They’re moving further and further away from Debian packages.
I understand that it is a hybrid model between Docker and what OS X has for packages, but I wonder why no one else has done it before now!
This way we can actually roll back, keep packages more distro-independent, and have a better way of publishing them. It’s good, but I still would have hoped some collective of open source groups had done it, like Red Hat and others together. People who use Docker these days don’t even bother publishing their projects using the old source-package model. Security still needs some work, though.
I forgot to mention how much it looks like Rocket from the CoreOS people.
These days there isn’t much that hasn’t been done before. To answer your question, it has been done before for Linux. Autopackage did this years ago with its self-contained *.package files [http://en.wikipedia.org/wiki/Autopackage]. Klik did the same [http://en.wikipedia.org/wiki/Klik_%28packaging_method%29]. And so did Zero Install [http://en.wikipedia.org/wiki/Zero_Install], which actually worked on OS X, Windows and Linux.
None of the above really took off; I don’t know why, as I thought it was a brilliant idea. Let’s see what Ubuntu can do with their spin on this (not so new) idea.
It’s Ubuntu. It’s “let’s reinvent the wheel” time again.
IMHO, they are already attempting broad OS coverage, and all with a fraction of the resources.
Some time ago I found that this fragmentation of vision was hurting quality. In the end, I jumped ship.
I did spend a good amount of time promoting FOSS and Ubuntu but no longer.
I’m now using CentOS & Fedora (for PoCs etc.).
I really don’t care about having Ubuntu on my mobile devices (sorry Thom). As I have grown older, I find fiddling with stuff that I just want to use (as opposed to experimenting/playing with it) more and more of a PITA.
.deb and .rpm packages with tools like YaST, YUM, etc. on top are perfectly adequate (IMHO). No one has given me a USP that would make me want to change my mind.
yours, certified GHP72 {Grey haired Programmer since 1972}
Self-contained packages are already in use on Ubuntu Touch devices and work great. Now even average developers without 10 years of Debian experience can package their applications for Ubuntu.
Oh, and I forgot, PC-BSD has also done this with their PBI (Push-Button Installer) packages, which are also self-contained packages to run applications. [http://www.pcbsd.org/en/package-management/]
The core issue for me is dependencies. Snappy packages can have dependencies, from what I gather, but they aren’t supposed to? Take something like ownCloud: it depends on a whole mess of PHP and PHP extensions and a database. Those PHP extensions depend on a whole bunch of other system libraries like libcurl, libcrypt, libmemcached, etc. Are those dependencies, including PHP and MySQL, included in the Snappy package? So you would have a crap ton of duplicated core libraries floating around? It seems like it would be hard to maintain each MySQL server in each app, or each php.ini file. Or are sysadmins just supposed to trust the vendor on security? I forgot, some kind of webserver would be in the mix there too… that’s a lot of trust to have.
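To make the bundling question concrete, here is a rough sketch of what a self-contained ownCloud package might look like under the early Snappy meta/package.yaml format. The field names are from memory and everything about the bundled services is my assumption, not how anyone actually packages ownCloud:

    name: owncloud
    version: 8.0.2
    vendor: Example Vendor <packager@example.com>
    # PHP, MySQL and all their shared libraries would travel inside the
    # package tree, so every app carries (and must patch) its own copies.
    services:
      - name: owncloud-web
        start: bin/start-webserver
      - name: owncloud-db
        start: bin/start-mysqld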
Also, I highly doubt that rolling back to a previous version of ownCloud will retain data that was created in the newer version and keep working.
I do agree that the devil is in the details. For example, we use Docker and virtualization heavily for our compute- and data-intensive clusters, and they don’t solve everything. Still, I think the direction is right. That’s why I believe the solution should come from a collective entity like Apache or, God forbid, the FSF. Maybe then we’d get a general solution that works for everything. I also think snapshotting and CoW would solve much of the dependency problem and allow parallel versions of libraries for different packages, though it still depends on how they get implemented.
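A minimal sketch of the kind of CoW snapshotting I mean, assuming the app tree lives on a btrfs subvolume (the paths are illustrative):

    # take a cheap copy-on-write snapshot before an update
    btrfs subvolume snapshot /apps /apps-pre-update
    # ...run the update; if it breaks, swap the snapshot back in:
    mv /apps /apps-broken
    mv /apps-pre-update /apps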
The App Container Specification which was started by the CoreOS guys (Rocket is their implementation of that spec) is probably already what you want.
There was a patch for Docker in the GitHub repo, but it’s closed.
Now I don’t know if that means there will be a new patch.
The issues raised were real issues.
For example, the App Container spec might be too big; a number of people have suggested it should be split up into pieces.
Well, thankfully, this Snappy[*] Ubuntu Core will be “for clouds and devices”; I’m thinking single-purpose appliances. For general purposes (and by that I mean more than the average consumer crowd) it would be a major pain.
[*] Snappy, eh? Felt like a weird name even when Google used it for their compressor, and I don’t like it any better now.
No, read the headline again. It’s now for Core, but eventually it will be for all variants. I’m not sure why it makes a difference for single-purpose machines: you have a single image of an app that you replicate via Puppet or something like that, and voila.
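Something like this, assuming the snappy CLI works the way the announcement describes it (“myapp” is a made-up name):

    # run on each appliance, driven by puppet/ansible/whatever:
    sudo snappy install myapp
    # and if an update turns out to be broken:
    sudo snappy rollback myapp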
Snappy or Docker makes sense on a multi-application server, as they promise to isolate each app from the others while requiring fewer resources than a VM.
Reminds me of a variant of DLL hell on Windows.
A few years back, MS found a vulnerability in a DLL of theirs.
Thing is, that DLL was part of the Visual C++ redistributable, so just about anything running on Windows had its own copy sitting around.
So what they had to do was to issue a general warning, pointing admins to a scanner, and tell them to pester third party vendors.
It is interesting to note that while containers of some kind have been around in the BSDs for ages, none of them has adopted containers as its general package-management technique, AFAIK.
What seems to be driving the adoption on Linux is this “devops” thing that everyone is hot and bothered about.
And it in turn seems to have come about because webmonkeys don’t want sysadmins denying them their latest shiny toys.
THIS x 1000!!!
Bundling like that is basically the definition of DLL hell, and it’s baked into the App Container Specification, which Snappy is allegedly based on.
If they start to bundle copies of shared libraries in application containers, then yes this is really catastrophic. It is as bad as statically linking applications to common libraries since a security update in those libraries still leaves applications vulnerable unless they are rebuilt too.
Arch Linux, for example, tries to ship as few static libraries as possible to avoid such a situation.
Apart from nvidia libraries, my whole system is compiled from source on my own computer and I honestly am happy it is still possible to do so.
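A quick way to audit how bad the duplication would get, using libssl as an example (the paths are illustrative):

    # find private copies of libssl shipped outside the system dirs
    find /opt -name 'libssl.so*' 2>/dev/null
    # check whether a given binary links the system copy or a bundled one
    ldd /opt/someapp/bin/someapp | grep libssl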
Yes, it’s bad. But this is how it works on Windows and on most mobile platforms. Qt applications, for example, ship their own copy of Qt on Windows, Android, iOS, etc. And of course this is also how generic Linux builds handle libraries. Packaging for Linux has always been hell and one of the biggest blockers for application developers.
How do you currently package an application for Ubuntu 14.04 if you want to use Qt 5.4?
Easy. Make your package depend on Qt>=5.4 (this means Qt 5.4 or higher is required).
If Ubuntu doesn’t have a Qt 5.4 or higher package, ship a qt-5.4.deb compiled on a 14.04 system along with your application. That way other applications that need Qt 5.4 can use your Qt 5.4 package as well.
This is the power of package management and why many of us fled Windows during the Windows XP days or earlier.
(Qt 5.5 will still run applications linked against Qt 5.4).
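In debian/control terms that’s just a versioned Depends line. “myapp” is a placeholder; the libqt5 names are Ubuntu’s actual package names:

    Package: myapp
    Depends: libqt5core5a (>= 5.4), libqt5gui5 (>= 5.4), libqt5widgets5 (>= 5.4)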
It just doesn’t work that way in practice. Applications usually depend on certain Qt modules, and packaging Qt properly is far from easy. There are ~50 libqt5* packages on my Ubuntu 14.04. You are suggesting a non-standard Qt packaging… how are “the other applications” supposed to know about that?
Another problem is that somehow you’d have to make your custom Qt available in the Software Center, and that’s impossible.
Then just provide debs for all the modules (i.e. follow Debian packaging standards and provide all those subpackages).
OK, you can’t put them in the official Ubuntu repository. But your main app isn’t there either. Just provide download links to the Qt 5 packages and your app.
However, you did hit the real issue here.
Unlike Android, where everyone uses the Google Play Store to get 3rd-party applications, Linux doesn’t have that.
So bundling everything into one app and installing into /opt/myapp/{bin,lib} or /usr/lib/myapp/{bin,lib} is the easier way to distribute those closed-source 3rd-party apps.
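The usual trick for that layout is a tiny launcher script that points the loader at the bundled libraries (everything here is illustrative):

    #!/bin/sh
    # /opt/myapp/bin/myapp-launcher: run the app against its private libs
    export LD_LIBRARY_PATH="/opt/myapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec /opt/myapp/bin/myapp "$@"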
Yes, this is one of the biggest problems on Linux, and Canonical is trying to fix it. Currently 3rd-party apps in USC install under /opt. However, it’s still a huge effort to review every new version of every app because, as we know, .deb archives can run whatever code they want as root when you install them.
Click/Snappy packages will improve this a lot, because the applications run in an isolated container. Updates to applications are published immediately. You can even roll back to a previous version. AFAIK Click applications will install under /app without any predefined directory structure. I think that’s great. Just a single JSON manifest that tells the system where the binary is.
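For what it’s worth, a Click manifest really is about that small. A sketch from memory of the Touch docs; the app name and hook file names are made up:

    {
      "name": "com.example.myapp",
      "version": "0.1",
      "framework": "ubuntu-sdk-14.10",
      "hooks": {
        "myapp": {
          "apparmor": "myapp.apparmor",
          "desktop": "myapp.desktop"
        }
      }
    }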
As a developer I cannot wait to get rid of PPAs.
Absolutely, and we must stop this madness. I will not be forced to adopt the devops mentality, as long as it stays optional.
Also this:
http://www.vitavonni.de/blog/201503/2015031201-the-sad-state-of-sys…
So, based on some quick Google searching, it seems the best thing about this transition is that Ubuntu will *finally* get delta binary updates. Someone more intimate with the *buntu OS correct me if I’m wrong. This is the one thing I miss as a recent convert from Fedora.
Why can’t this and everything else be done as an incremental improvement of the current apt+dpkg combo? What’s wrong with continuing to evolve the deb format from the “2.0” it’s been at for years? How hard can it really be to generate a bindiff between the files in two .deb ‘archives’ built from the same source package?
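Debian already has a tool for exactly this, debdelta, so the plumbing exists. The invocation below is from memory (check the man page) and the file names are illustrative:

    # build a binary delta between two revisions of the same package
    debdelta app_1.0-1_amd64.deb app_1.0-2_amd64.deb app.debdelta
    # client side: reconstruct the new .deb from the old one plus the delta
    debpatch app.debdelta app_1.0-1_amd64.deb app_1.0-2_amd64.deb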
The only trouble I can imagine is Debian flatly refusing to allow that kind of functionality into apt and dpkg. But then again, lots of things can still be accomplished by working around that with a plugin or a wrapper around the vanilla tools.
It would be a ridiculous amount of work for a tiny amount of bandwidth savings. You’re looking at ~35,000 packages times a dozen architectures, and generating binary diffs for all that.
You don’t really need to go all-out and do bindiffs for every package. There are plenty of obvious candidates, like when a tiny bugfix in LibreOffice results in hundreds of megabytes of wasted bandwidth.
Ubuntu seems to be going both ways. For the core systems they appear to be increasingly following Debian, see for instance the whole systemd discussion.
Problem solved.