In recent years, three different distribution-independent package formats have gained a lot of popularity. There are already a few Linux distributions, like Endless OS and Fedora Silverblue, that depend solely on distribution-independent packages to run desktop applications. Are these package formats ready to become the main package formats for Linux distributions?
In this article we will take a look at the advantages and disadvantages of each package format individually, and of distribution-independent package formats in general.
I haven’t really been keeping up with this relatively recent development of new distribution-independent package formats, so I was unpleasantly surprised when, after installing Linux Mint on my laptop, I would often find two different installable packages of the same program in the software manager. Often, these would have different versions.
Regardless of technical merit, that’s not exactly a friendly user experience.
Were any of the installable packages the latest version? One thing that keeps me away from Desktop Linux is the fact that I cannot just go to the repos and find the latest VLC, and the official VLC website sends you back to your distro’s repos.
You can compile from the very bleeding edge.
Packaging is a chore, and that’s what Snap, Flatpak, AppImage, etc. are trying to solve.
However… from personal experience, I’ve also found that snaps just use a lot of resources. AppImage just works most of the time, but integrates poorly with your system. No experience with Flatpak.
The other problem is that it’s not just about all the work to get a new version ready to build as a package, but also the actual hardware and time to do it. Building packages takes a lot of time and has to be done for every architecture. If you’re doing it right, you’re also testing on a few architectures and trying to get others to help when you don’t have the hardware. Take Debian, for instance: they support many CPU architectures, and even have more than one kernel available (Linux & kFreeBSD).
However, the fact that there are multiple formats to solve the same problem is more of the Linux community shooting itself in the foot. It will be years until we have a winner.
Then pick a more “bleeding edge” distro. For example, on my Fedora 28, VLC in the repos is at the same level as upstream: 3.0.4.
What if I want a stable OS and new apps, like every PC with Windows 7 and up does? Plus you get to annoy the FOSS crowd by supporting Microsoft. Works for me.
Then use a distro like Solus (I personally use Solus). It has a stable base, but it still updates packages just like a rolling release, only after proper testing. So that’s as close to Win 7 as you’re going to get.
That kind of trolling got boring a long time ago.
No, it’s not good at all. Because now you’ve just hinged “stable” and “new apps” on people’s opinions. Every distro has a thousand opinions, never providing a clear answer to “is it stable”, and “how up to date are the programs”.
Debian Stable. Between their backports repo and Flatpak/Snap/AppImage/Steam/whatever, you get a rock-solid base OS and up-to-date packages. It’s been my go-to on desktop hardware for years, and will probably be again on laptops once Buster releases as 10.0.
tidux,
I was also of this mindset and gave little thought to applying Debian stable updates. That is, until an update broke some of our hosting infrastructure. We were using a network bonding configuration on a colocated server. After updating and rebooting, we lost connectivity… major panic. Fortunately the data center “remote hands” got us back up and running. To be fair, in all these years there have probably only been a handful of breakages, and most didn’t affect connectivity/SSH (although the thought of it still makes me nervous).
Network bonding is fussy on any OS.
I don’t mean to fan the flames, but didn’t Microsoft push out a Windows Update that deleted a significant number of files from some PCs?
I’ve never had that happen with Linux.
Have you looked at Gentoo? They not only do a reasonable job of staying up to date for the popular packages (assuming you go with the appropriate unstable keyword for your architecture), but also provide ‘live’ builds for a number of packages, which let you build directly from the master branch of the upstream repository while still using the system package manager for the installation.
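For the curious, this is roughly what a live build looks like, using VLC as the example (the package atom is illustrative; check that the package actually has a -9999 ebuild first):

    # accept the unkeyworded live (-9999) version of the package
    echo "media-video/vlc **" >> /etc/portage/package.accept_keywords/vlc
    # build and install straight from the upstream master branch
    emerge --ask =media-video/vlc-9999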
Gentoo has the same problem as Arch– the community insists you must be “this high” to ride this ride.
I mean, there was a huge uproar in the Gentoo community when they decided to provide binary packages for initial installation, instead of making people compile their OS from scratch as part of the install process.
Both Gentoo and Arch have bitten me with updates because I didn’t read the release notes completely 3 times when I did an update. To be fair– Arch warned me. Gentoo, the forum users said “do this!”– I did that, and the OS never booted again.
Manjaro is pretty nice, though.
There are some distros built on top of Gentoo though that provide all of that for you too. I don’t remember the names of any off the top of my head right now, but I do know they exist.
The thing is though, ‘I want the absolute newest version of this software’ is at odds with ‘I want things to keep working without having to do anything when I run updates’. That tension is the vast majority of why it takes most distributions so long to ship updates (except Fedora, which actually seems to do a pretty good job most of the time, which I’ve always found ironic as it’s essentially a testbed for RHEL, one of the worst offenders when it comes to not updating).
My experience with Manjaro over the past year has been pretty good. I’m far closer to up-to-date on my software than I’ve ever been under linux, with minimal pain and suffering.
The only downside is when you have a package that doesn’t support Arch, and hasn’t been added to the AUR.
I had to do a bit of trickery with the VMWare remote desktop client, for example.
Generally, though, it’s been a very positive experience.
Arch (and by extension Manjaro) is one of the odd exceptions, but they still aren’t quite as stable as, say, Debian.
To the extreme disbelief of a lot of people I know, I’ve actually had almost no issues running unstable keyworded Gentoo (three times total have I had problems that required significant intervention, two of them were my fault in the first place, and the third was a case of me being unlucky enough to pull down updates in the short time window between a broken glibc update being published by mistake and it being masked).
Were it not for the fact that I really do want to strip out things I don’t need from packages (and, now with the recent changes, that I hate systemd), I would probably be using Arch myself, though.
I run Manjaro. It’s Arch, with a bit of handholding (like, say, an installer).
Like Arch, it’s a rolling release, but unlike Arch, it’s a curated rolling release.
I think in the past year, I’ve had one update cause problems, and that was because of a custom splash screen.
For your example, I’m running VLC 3.0.4-4. KDE is a little behind– It’s still at 5.13. I expect 5.14 will be out within a month or so.
The Nix package manager and NixOS solve so many problems with packaging incredibly elegantly. I keep seeing all of these efforts at different packaging schemes, but they all have the same inherent problems due to statefulness that are not present in Nix. IMO Nix (or a pure functional technique like it) is the future of packaging technology once people catch on to it and its many benefits.
I’ll admit, I don’t know much more about Nix than the about page. It seems like it’s just a standard package manager and package format like dnf/rpm or apt/deb.
Does Nix produce portable packages that include dependencies and that can be installed on a variety of Linux distros?
Nix does not produce binary packages, only text definitions, or “recipes”, to download, build and install the software. Much like Gentoo or Arch.
Based only on my own experience, I can’t recommend it.
Yes, Nix provides cross-distro packages and handles the dependencies. You just install Nix and then use it the way you would a normal package manager, it handles the magic in the background.
It has the benefit of letting you roll forward/back to different versions of a package too.
Right, that’s not what is being discussed here. This isn’t a rpm vs deb article.
Nix works very differently from those. Everything is part of the nix store in /nix/store and only exposed through symlinks or environment variables.
This makes for some interesting properties, which classical formats like deb or rpm cannot really guarantee you.
I can definitely recommend nix and NixOS, it takes some time to get into it, but I migrated my whole infrastructure (nextcloud, e-mail, xmpp, etc servers) and my desktop + laptop to it and I’m pretty happy so far.
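For anyone curious, a minimal sketch of day-to-day use (the attribute path is the usual one for a standalone Nix install; on NixOS it’s typically nixos.vlc instead):

    nix-env -iA nixpkgs.vlc      # install into your per-user profile
    nix-env --list-generations   # every change creates a new generation
    nix-env --rollback           # atomically revert to the previous generation
    readlink -f $(which vlc)     # resolves into /nix/store/<hash>-vlc-<version>/...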
Ok, not what Flatpak and snaps provide, but cool in its own way. Seems more like Red Hat’s Software Collections library.
I was struck by a few thoughts, the most prominent being that the author apparently didn’t re-read his article after writing/posting it, because he’s talking about LibreOffice while supposedly reviewing GIMP’s packages.
He also didn’t follow up in tracking down the nature of the failures. We don’t know if it’s a configuration problem, a dependency problem, a corrupted download, or if the author himself screwed up somehow. Regardless, it does point out that these package systems aren’t foolproof, whether the fool is the packager or the user.
My personal preference is towards AppImage because when it works, it works well. One executable package, entirely self-contained, and it doesn’t require installation of any kind. For end users of programs who don’t need development packages, that’s pretty ideal. Erase the file, or store it, when you upgrade and continue on. There are no orphaned libraries hanging around in obscure directories, left behind by an incomplete removal procedure. It doesn’t matter whether you’re on Ubuntu (which wants you to use Snap), Red Hat (which wants you to use Flatpak), or tomorrow’s distro du jour that wants you to use something entirely different that they’re nerd-gasming over.
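The whole lifecycle, with a hypothetical file name, is just:

    chmod +x Some_App-x86_64.AppImage   # mark the download executable
    ./Some_App-x86_64.AppImage          # run it; nothing gets installed
    rm Some_App-x86_64.AppImage         # and that’s the entire uninstall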
But unlike DEB, RPM, etc., Snap and especially Flatpak aren’t tied to just one set of distros. Yes, Ubuntu pushes Snap and Red Hat pushes Flatpak, but Snap is available on other distros as well and works fine, and Flatpak is available on Ubuntu and most other distros and also works fine. So even if you prefer AppImage, at least know that Snap and Flatpak are far more flexible. A Snap or Flatpak package is something you build once, then deploy, and it works on every supported distro. No need to build a DEB, RPM, EOPKG, PKG, etc.
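On any distro with the respective runtime set up, installing, say, VLC is the same one-liner everywhere (these are the IDs published on the Snap Store and Flathub at the time of writing):

    snap install vlc
    flatpak install flathub org.videolan.VLC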
The way I see it (which probably misses a few things) is that Appimage is the way to go.
Distributions can focus on the core system and simply provide mirrors for the AppImages instead.
AppImage is also my choice, as an application developer, to make my software available for Linux users.
Anyway, the real point of all distribution-independent package formats is freedom from app stores and packagers that only provide old, deprecated versions, or no packages at all. Because they know better than you what your users need, of course.
The Linux distributions have failed to make Linux and free software available and popular among normal users. The way they treat (or boycott) some open source projects does not help or improve the situation.
And to us normal users coming from the Windows world, stuff like this honestly drives us away. Why shouldn’t I just be able to download the latest VLC from videolan.org and install it, instead of having to wait for some random dude to repackage it for my distro? (It’s not a walled garden, honest.)
And don’t get me started on the security risks of having random people repackage your software. This creates a trade-off: the more repackagers you have, the greater the risk of someone doing something nasty; but the more tightly you control the list of who can repackage and upload apps to the repo, the greater the chance an app won’t be repackaged at all.
And then there are the regulatory problems that come with repositories: Essentially, in order for an app to go into the repos, it has to meet the regulatory regime of the app vendor and the regulatory regime of the repository provider (which is a problem because most of them are UK and US based, aka DMCA countries)
Essentially, Desktop Linux is a more “free” OS in the FSF sense of the word, but Windows is more open in the practical sense of the word. I can, for example, use Windows to download AnyDVD HD and DVD Fab, or even Game Jackal. Good luck finding such software in the repos of Canonical, for example…
The reason Linux isn’t popular among ‘normal’ users generally is because normal users like my parents can’t do what they want to do on Linux and have things “just work” without it breaking or without trying to read some arcane reference manual they can’t understand. It probably wouldn’t matter if all they do is Facebook, but not everyone is a Facebook drone who equates Facebook or Netflix as “The Internet”.
Business software is generally a big sticking point. For better or for worse, businesses have standardized on Windows, and good luck trying to get software for payroll, industry-specific estimation, corporate taxes, etc., along with support contracts, for Linux. This is generally why the business community won’t let Linux in the door. It also doesn’t matter if Microsoft keeps breaking stuff in Windows 10, because there are no alternatives. So since people are used to the look and feel of Windows at work, that’s what Average Joe wants at home, because they can understand it. Otherwise they’re simply moving from desktops to Android or iOS.
I do agree, from a social standpoint, that the free/open community in general is hostile to people who can’t read and understand a highly technical man page. But likewise, how is someone who is not an MD going to read and understand the overly technical language in JAMA, or someone without a PhD in physics going to understand a quantum physics paper in Nature? Overly technical documentation between developers is one thing; it’s quite another to tell your grandpa (who’s not a graybeard op from the 60s) to RTFM for CUPS to get his new printer from Wal-Mart working, and that’s why Linux isn’t going anywhere for most desktop users.
Oh puh-lease, I am a fairly technical person, do some CentOS 7 management at work, and can read obscure reference manuals, and I still avoid Linux for personal use. It’s one thing having to read some manuals, and it’s another thing having to compile VLC from scratch, VAAPI and all… I once had to run the latest x265 on a colleague’s Ubuntu desktop rig for some uni report back in the day, and after spending half a day trying to compile it and make the assembly bits work, I just ran the Windows exe with Wine (don’t ask why he ran Ubuntu on a desktop rig, he was a Stallmanite). And that was the day I swore to avoid Desktop Linux for anything other than CLI server stuff.
So what?
Neither you and I are a normal user in this case. Your statement doesn’t invalidate what I said. There’s no reason to be rude about it.
I hate it when people use outdated examples as to why Linux desktops are terrible today. I could tell horror stories about Windows that make it sound like the worst operating system ever, when in reality Windows has been pretty rock solid since Windows 7.
I mean, admittedly, I run a frighteningly current version of Linux. It’s hard to beat how up-to-date Manjaro is, or the breadth of the AUR repository (which does compile from source, but automates it).
My media center, however, runs Debian, and even there, it’s pretty easy to run relatively recent video codecs.
The biggest problem with Ubuntu is there are really good docs out there… and really bad ones. Worse, there are docs that *used* to be really good, but are simply out-of-date and will leave you headbutting brick walls.
But Ubuntu is not approved by Stallman…
This really depends on what exactly you’re talking about. A lot of the times I see people get yelled at when dealing with the projects I actually work with, it’s because they didn’t even try to read the documentation, or didn’t pay attention to the stuff telling them to post support requests on the forums instead of in the issue tracker, or any number of other things that show they put zero effort or thought into doing whatever it is they’re trying to do. People who put in near zero effort themselves will piss off almost any person doing IT work, even if they’re being paid, regardless of what platform they’re dealing with, but that goes double when you’re donating your time to a project and not getting compensated for it.
This is an interesting topic and the article raises some useful points, but I’d also be interested to read something that compares them from a technical perspective. For example: the security models/sandboxing the different formats support; whether shared libraries are duplicated or not; how straightforward the packaging process is; and whether they’re only for user-space applications.
If anyone has any good references, I’d be interested.
Not exactly a good reference but:
AppImage is by definition for user applications only, but that might include purely user-space drivers for some hardware (for example, tools for managing Logitech Unifying receivers or Yubikeys could probably be done as an AppImage). Flatpak seems to also be user applications only, and the integrated sandboxing probably means that you couldn’t easily do things that require nontrivial hardware interaction with it. Snap can probably provide kernel modules, but they would almost certainly be done as DKMS packages for portability.
From what I remember:
1. AppImage doesn’t guarantee platform-agnostic builds, but allows you to accomplish them by building all your dependencies into the distributable.
2. Flatpak supports bundling dependencies, but tries to avoid duplication by letting each Flatpak depend on a runtime (like the Steam runtime). They provide three: a base one, a base+KDE one, and a base+GNOME one. (Also, being based on OSTree, it should automatically deduplicate identical files across multiple Flatpaks; see the sketch after this list.)
3. Last I checked, snap bundles all the dependencies into the distributable, and that was one of the points used against it by people who favour Flatpak.
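To illustrate point 2: two apps that target the same runtime only pull it down once, which you can check yourself (the app IDs are Flathub’s current ones, and I’m assuming both apps target the same runtime branch; sharing only happens when runtime and version match):

    flatpak install flathub org.gimp.GIMP
    flatpak install flathub org.inkscape.Inkscape
    flatpak list --runtime   # one shared org.gnome.Platform entry, not two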
To me, it seems to be an over-engineered solution trying to solve problems bigger than they really are.
I like having most of my applications come from the distro repo, and complementing the very few where I need a more up-to-date version with an AppImage.
It seemed not too long ago that some people here were telling me to shut up, I didn’t know what I was talking about, because Linux package management had been solved and everything will be rainbows and unicorns.
Finally considered harmful? If the answer to the question of how you ensure that the dependencies that your application needs are the same as those available on all client systems is to supply a copy of the dependencies that the app uses, then…
… there’s not much sharing going on, is there?
Static link for the win.
I have tried Snap and Flatpak and I hated them both. First of all, I was worried about security. I installed an old game with snap, and then things were happening that I still don’t understand.
I used Flatpak to install the latest GIMP. That worked, but it was so slow that I decided not only to uninstall that GIMP version but also Flatpak itself.
At the moment I have zero such apps installed on my desktop because I don’t want my computer resources (RAM, storage) wasted with multiple copies of the same libraries.
For a short while I played with a pre-release of GIMP 2.10 as a Flatpak, but removed it as soon as a native version was available, which was fortunately soon.
I don’t see much of a use case except for a few proprietary apps, especially games, where the developers don’t bother to support your distro and you play the game full-screen anyway, with not much else in parallel. And you uninstall it once finished. For day-to-day work, native is better.
Snap actually may be worth it though. Transactional updates are a huge selling point for a lot of people, and Snap is one of the few ways to do them on Linux at the moment (and is a lot more intuitive for most people to work with than Nix).
I don’t care about transactional updates, it’s a desktop, so no big issue if Firefox or LibreOffice are down for 5-10 minutes.
Zero downtime is not the primary benefit of transactional updates, and you can already achieve it with no work at all when dealing with well written software and just updating single packages.
The primary benefits are (a command sketch follows the list):
* If the update breaks something and you have to revert the update, you can do so without having to worry about reinstalling the old version or uninstalling the new one, you literally just roll back the transaction, and everything is exactly like it was before the update.
* Half-complete updates can’t break anything. If the system dies part way through a transactional update, it boots into the pre-update state, and you can just re-run the update. If the system dies part way through a non-transactional update, you may very well need to use recovery media to boot again.
* Adding to the above, it lets you update cross-dependent packages together as one unit. Suppose you have packages X, Y, and Z that all need to be installed for any of them to work correctly and have matched versions (this is not an unusual case, especially if you are building from source). Without transactional updates, you can’t make that guarantee. Also, this really is a huge benefit for desktop users, because there are quite often odd cross-dependencies between components of desktop environments (the KDE libraries are a pathologically bad example of this).
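As a concrete sketch of that first point, with snap the whole revert is a single command (using VLC’s snap as the example package):

    snap refresh vlc   # update to the latest revision in the channel
    snap revert vlc    # roll back to the previous revision, data included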
>so the end user does not have to pick a single package format, they can enjoy software from the different package formats.
I find great irony in the user “enjoying” any of this. This is why Linux will never be popular. Their answer to the fact that .exe, .dmg, .apk, etc. just work was to band-aid every dependency together instead of fixing the underlying issue: the whole thing is still a mess. What desktop Linux desperately needs is not a technology or standard, but a person with the power to say “THIS IS HOW WE ARE GOING TO DO IT. THIS IS THE SDK, THESE ARE THE EXECUTABLES IT PRODUCES, AND IF YOUR DISTRO DOESN’T RUN THEM AS-IS, IT WILL NOT BE CONSIDERED DESKTOP LINUX.”
I think you mean MSI files, not Windows executables.
They also have their own special set of issues though (I challenge you to uninstall OneDrive on a Windows 10 system without using the command line in some way).
Yes, they absolutely need someone to say that and alienate 95% of the developers who work on it…
People have tried this multiple times before, and it’s failed pretty miserably almost every time.
Also, only Android and iOS really achieve this level of coherency. Windows is nearly as much of an issue as Linux; it just had (until Windows 8) one unified framework for handling things at the OS level from a regular user’s perspective, while there are still at least a dozen actual underlying packaging systems in use.
Name some attempts then. I have a feeling they are all going to be the same false equivalences, like .exe/.msi being the same as package managers. The problem package managers face is unique to Linux, and only Linux. Android, iOS, etc. are not unique; a corporation did exactly what I suggested. You say it’s as much of an issue on Windows, but for 99.99% of people, it just works. From you I get the feeling that a single registry key taking up a byte of memory is an issue when 16 GB of RAM is present; not a problem if you don’t look. Also, developers would stop working on these half solutions if customers didn’t tolerate them; instead, customers simply don’t tolerate desktop Linux at all because of this, and thus it’s a waste of time.
dark2,
To be honest, both linux and windows software have given me grief from time to time. On a fresh install of windows I frequently encounter software that won’t run without tracking down more dependencies.
There are ways for developers to mitigate it:
1. compiling everything statically (a tiny sketch follows below)
2. bundling all the dependencies
3. limiting the dependencies to those that are included with the OS
Realistically, whether or not you’ll run into dependency issues comes down to how the software developer builds the software and what’s already installed on your computer. MSI by itself is merely a container and doesn’t resolve any of these problems on its own.
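As a tiny sketch of option 1 (assuming gcc and a glibc built with static libraries; glibc’s NSS pieces are a known caveat for fully static builds):

    gcc -static -o hello hello.c   # pull every library into the one binary
    ldd ./hello                    # prints "not a dynamic executable"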
As far as reasons for people not using Linux go, this is actually pretty low on the list for most people. For most people it ultimately comes down to the fact that humans are creatures of habit, and Linux is just different enough that they don’t want to learn something new, which has a much bigger impact when explaining that no, you can’t run Outlook on Linux.
As the next answer down goes into detail on, MSI has a ton of features that are extremely desirable for enterprise deployment, but it’s a major pain if you don’t need those.
Source: https://stackoverflow.com/questions/6245260/installers-wix-or-inno-s…
Heck, WiX is such a pain that one of the respondents recommends a project called Wix# (oleg-shilo/wixsharp on GitHub), which transpiles a C# derivative into WiX XML, and just looking straight at the code samples in the README for the first time makes far more sense to me than WiX XML does after actually having read the official documentation.
…also, I just confirmed another comment from that StackOverflow thread.
Microsoft’s Visual Studio Code is using InnoSetup for its installer.
Except that, under the hood, they are all ultimately using the same package management infrastructure. That’s why you can go into the Programs and Features section of the old Control Panel and uninstall any of them there. I’m not talking about high-level APIs here, but the really low-level stuff, like configuring uninstaller entries and registering components with the system. MSI provides a bunch of functionality for you; so do NSIS, InnoSetup, and pretty much whatever else you care to name. Ultimately, they are doing the same things, though: configuring uninstaller hooks, registering libraries (if they need to), prepping services, etc.
The closest analogy I can give from Linux is how Gentoo handles binary packages. It has its own native format (based on compressed tarballs, plus some extra control hooks), but it also supports RPM packages. At the lowest level, the same package management infrastructure is being used (portage, with either emerge or paludis on top), but anything above that is different.
Get Lennart Poettering on it.
That approach has been tried in the server world already.
It’s called the Linux Standard Base. It never really caught on as a unified thing (distros implement whatever subset doesn’t present too much inconvenience), and compliance as a whole is considered an optional extra you can install in the world of Debian-family distros, because it specifies RPM as the official package format.
…not to mention that the only person with the authority to make that claim is the holder of the Linux trademark. Anyone else trying to make such a claim could even get sued by the holder of the Linux trademark if they disagree.
Actually, I find that software management on Linux is so much more convenient than on Macs and Windows. “apt install vlc” is all I need to type. No downloading, no clicking through install dialogues, etc. I honestly can’t understand why anyone would want to do it the Windows way.