In the Free and Open Source communities we are proud of our ‘bazaar’ model, where anyone can join in by setting up a project and publishing their programs. Users are free to pick and choose whatever software they want… provided they’re happy to compile from source, resolve dependencies manually and give up automatic security and feature updates. In this essay, I introduce ‘decentralised’ installation systems, such as Autopackage and Zero Install, which aim to provide these missing features.
I am the author of Zero Install, but I hope to focus on concepts rather than particular implementations here. I’ll start by describing our goal of allowing programmers, users, administrators, reviewers and QA teams to collaborate directly, and explain a few of the limitations of traditional centralised systems.
We’ll look at the technical issues involved in distributing software without a central authority; how to prevent conflicts, handle dependencies, provide updates, share software efficiently, compile from source, and provide good security. Finally, I’ll look at the possibilities of converting between different formats and systems.
Use cases
Stepping back for a moment from implementation issues, let us consider the various people (‘actors’) involved and the things they should be able to do. Here is a simplified Use Case diagram:
Here we have a user who selects and runs a program, which has been made available by a programmer. The user may provide feedback to help with future versions. Patched or improved versions may come from the original programmer, or from a third party represented here by a “QA team” providing timely bug-fixes on the programmer’s behalf. Users must be able to receive these updates easily.
The user’s choice of software will be guided by advice from their system administrator and/or from 3rd-party reviewers (e.g. magazines). In some situations a system administrator may need to lock down the system to prevent unauthorised programs from being run. The administrator may also pre-configure the software for their users, so their users don’t need to do it themselves.
Since users run many different programs, and each program will use many libraries, a real system will involve many different programming teams, QA teams and reviewers.
Language translators and producers of binaries for particular architectures are not shown; they are included in “Programmer” or “QA team” (depending on whether their contributions are distributed separately or bundled in the main release). Providers of hosting and mirroring services are also not shown.
Note that in these high-level use cases I don’t talk about ‘installation’ at all, because this isn’t something anyone actually wants to do; it’s just a step that may be required in order to do something else.
The challenge, then, is to provide a framework in which all of these different people, with their different roles and goals, can easily work together.
Traditional distributions
The picture above doesn’t quite correspond to the model used by traditional Linux distributions, where users must pick a distribution and then only use software provided by that distribution. This model falls short of the ideals of Free software, because a user is only free to install programs approved by their distribution (of course, users may still be able to install other software by hand; here and in the rest of this essay I am concerned with things being easy and reliable).
As a software author in this system, I must convince one or more major distributions to accept my software before most users will be able to run it. Since distributions are unlikely to accept the maintenance burden of supporting software with a small user base, it is very difficult for new software to be adopted.
The situation is worse if the program has network effects. For example, few people will want to distribute documents in a format that is readable only by users of a particular distribution (because only that distribution has packaged the software required to read them). In this case, the programmer must convince all the major distributions to include their software. This problem also applies to programming languages: who will write programs in a new language, when many users can’t get hold of the required compiler or interpreter?
For example: I want to write a program in D because I’m less likely to introduce security flaws that way, but I actually write it in C because many distributions don’t have a D compiler.
The situation with traditional distributions is also highly inefficient, since it requires multiple QA teams (packagers) doing the same work over and over again. The diagram below shows some of the people involved in running Inkscape. I’ve only shown two distributions in this picture (plus a Linux From Scratch user, who gets the software directly), so you’ll have to imagine the dozen or so other packagers doing the same for other distributions.
Do we need this many people working on essentially the same task? Are they bringing any real value? Without the Fedora packager, Fedora users wouldn’t be able to install Inkscape easily, of course, so in that sense they are being useful. But if the main Inkscape developers were able to provide a package that worked on all distributions, providing all the same upgrade and management features, then we wouldn’t need all these packagers.
Perhaps some of them would join the main Inkscape development team and do the same work there, but benefiting everyone, not just users of one distribution. Perhaps they would add exciting new features to Inkscape. Who knows?
Naming
A system in which anyone can contribute must be decentralised. Otherwise, whoever controls the central part will be able to decide who can do what, or it will fragment into multiple centralised systems, isolated from each other (think Linux distributions here).
How can we design such a system? One important aspect is naming. Linux packages typically have a short name, such as gimp or inkscape, and they include binaries with names like convert and html2text, and libraries with names like libssl.so. If anyone can contribute packages into our global system without someone coordinating it all, how can we ensure that there are no conflicts? How can the system know which of several programs named firebird the user is asking to run?
One method is to generate a UUID (essentially a large random number), and use that as the name. This avoids accidental conflicts, but the new names aren’t very friendly. This isn’t necessarily a problem, as the identifier only needs to be used internally. The user might read a review of a program in their web browser and tell their computer, “When I type ‘gimp’, run that program”.
Another approach is to calculate the name from the program’s code using a cryptographic hash function. Such names are also unfriendly to humans, but have the advantage that if you know the name of the program you want then you can check whether a program some random stranger gives you is really it, enabling peer-to-peer distribution. However, since each new version of the program will have a different name, this method can only name individual versions of a program.
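As a rough illustration of content-based naming (this is a simplified sketch, not the actual manifest algorithm used by Zero Install or Monotone), a version’s name can be derived by hashing a canonical listing of its directory tree:

    import hashlib
    import os

    def tree_digest(root):
        """Hash a canonical listing of every file under 'root'.
        Real systems also cover permissions, symlinks and executable
        bits in a precisely specified manifest; this sketch does not."""
        h = hashlib.sha256()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()                     # fix the traversal order
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                with open(path, "rb") as f:
                    file_hash = hashlib.sha256(f.read()).hexdigest()
                h.update(("%s %s\n" % (file_hash, rel)).encode())
        return "sha256=" + h.hexdigest()

Two directories with identical contents always produce the same name, which is what makes it safe to accept a copy from a stranger or a peer-to-peer network and check it against the name published by the author.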
Another popular approach is to include the name of a domain you control in the program’s name. For example, the Autopackage developer guide gives @purity.sourceforge.net/purity as an example. These names are much more friendly for users. This does require you to be given a domain name by someone, but these are rather easy to come by, and a single domain is easily sub-divided further. Zero Install uses a similar scheme, with URLs identifying programs (such as http://www.hayber.us/0install/MusicBox), combined with the use of hashes to identify individual versions, as described above. Using a URL for the name has the additional advantage that the name can tell you where to get more information about the program. Sun’s Java Web Start also uses URLs to identify programs.
Finally, it is possible to combine a URL with a cryptographic hash of a public key (corresponding to the private key used to sign the software). This gives a reasonably friendly name, along with the ability to check that the software is genuine. However, the name will still change when a new key is used.
Whichever naming scheme is used, we cannot expect users to type in these names manually. Rather, these are internal names used by the system to uniquely identify programs, and used by programs to identify their dependencies. Users will set up short-cuts to programs in some way, such as by dragging an object representing a program from a web-page to a launcher.
Note that Klik identifies programs using URIs, but with only a simple short name (e.g. klik://firefox). Therefore, it is not decentralised in the sense used in this essay: I cannot distribute my packages using Klik without having my package registered with the Klik server, and the controllers of the server must agree to my proposed name.
Conflicts
By using globally unique names, as described above, we can unambiguously tell our computer which program we want to run, and the program can unambiguously specify the libraries it requires. However, we must also consider file-level conflicts. If we have two libraries (@example.org/libfoo and @demo.com/libfoo, for example, both providing a file called libfoo.so) then we can tell that they are different libraries, but if we want to run one program using the first and one using the second, then we cannot install both at once! This ability to detect conflicts is an important feature of a packaging system, helping to prevent us from breaking our systems.
Another source of file-level conflicts occurs when different programs require different versions of the same library. A good package manager can detect this problem, as in this example using Debian’s APT:
    # apt-get install gnupg
    The following packages will be REMOVED
      [...] plash rootstrap user-mode-linux
    The following packages will be upgraded:
      gnupg libreadline5
Here, I was trying to upgrade the gnupg package to fix a security vulnerability. However, the fixed version required a newer version of the libreadline5 package, which was incompatible with all available versions of user-mode-linux, rootstrap and plash (three other security-related programs I use regularly). APT detects this and warns me, preventing me from breaking the other programs. Of course, I still end up with either an insecure gnupg or no user-mode-linux, but at least I’m warned and can make an informed decision.
In a centralised Linux distribution these problems are kept to a minimum by careful central planning. Some leader decides whether the newer or older version of the library will be used in the distribution, and the incompatible packages are updated as soon as possible (or, in extreme cases, dropped from the distribution).
Traditional Linux systems also try to solve this by having ‘stable’ or ‘long term support’ flavours. The problem here is that we are forced to make the same choice for all our software, when we often want to mix and match: a stable office suite, perhaps, with a more recent web browser. At work, I generally want to run the most stable version available that has the features I need.
In a decentralised system these problems become more severe. There is no central authority to resolve naming disputes (and renaming a library has many knock-on effects on programs using that library, so library authors will not be keen to do it). Worse, if updating a program to use a new version of a library prevents it from working with older versions, then it will now be broken for people using the older library. We cannot assume that everyone is on the same upgrade schedule.
Indeed, upgrading a library used by a critical piece of software may require huge amounts of testing to be done first. This isn’t something you want to be rushed into, just to get a security fix for another program. Less actively maintained programs may not be updated so frequently, especially some utility programs developed internally.
Finally, conflicts become much more serious if you allow ordinary users, not just administrators, to install software. If installed packages are shared between users, which is important for efficiency, and packages can conflict, then one user can prevent another user from installing a program just by installing something that conflicts with it. How packages can be shared securely between mutually untrusting users will be covered later.
Avoiding conflicts
In the GnuPG example above, the package manager is providing a valuable service, but the situation still isn’t ideal. What I really want is to be able to install all the programs I need at the same time!
The general solution is to avoid having packages place files with short names (such as gimp) into shared directories (such as /usr/bin). If we allow this, then we can never permit users to install software (one user might install a different, or even malicious, binary with the same name).
Also, supporting multiple versions of programs and libraries can only be done with a great deal of effort. For example, we could name our binaries gimp-2.2 and gimp-2.4 to support different major versions of the Gimp, but we still can’t install versions 2.2.1 and 2.2.2 at the same time and we’ll need to upgrade our scripts and other programs every time a new version comes out.
We can simplify the problem a great deal by having a separate directory for each package, and keeping all of the package’s files (those that came in the package, not documents created by the program when run) in that directory. This technique is used by many systems, including the Application directories of ROX and RISC OS, the Bundles of NEXTSTEP, Mac OS X and GNUstep, Klik’s disk images, the Filesystem Hierarchy Standard’s /opt, and Bernstein’s /package. Then we just need a way to name the directories uniquely.
So how do we name these directories? We can let the user decide, if the user explicitly downloads the package from the web and unpacks it to their home directory. This still has a few problems. Sharing packages between users still doesn’t work (unless the users trust each other), and programs can’t find the libraries they need automatically, since they don’t know where the user has put them.
One solution to the library problem is to have each package include all of its dependencies, and not distribute libraries as separate packages at all. This is the technique Klik uses, but it is inefficient since libraries are never shared, even when they could be, and upgrading a library requires upgrading every package that uses it. Security patches cause particular problems here. As the Klik site acknowledges, this is only suitable for a small number of packages. Bundling the libraries with the package is also inflexible; I can’t choose to use a customised version of a library with all my programs.
A solution that works with separate libraries and allows sharing between users is to store each package’s directory in a shared system directory, but using the globally unique name of that version of the package as the directory name. For example:
    /opt/gimp.org-gimp-2.2.1/...
    /opt/gimp.org-gimp-2.4.3/...
    /opt/example.org-libfoo-1.0/...
    /opt/demo.com-libfoo-1.0/...
Two further issues need to be solved for this to work. First, we need some way to allow programs to find their libraries, given that there may be several possible versions available at once. Secondly, if we allow users to install software then we need to make sure that a malicious user cannot install one program under another program’s name. That is, if /opt/gimp.org-gimp-2.4.3 exists then it must never be anything except gimp version 2.4.3, as distributed by gimp.org. These issues are addressed in the following sections.
Dependencies
GNU Stow works by creating symlinks with the traditional short names pointing to the package directories (e.g. /usr/bin/gimp to /opt/gimp.org-gimp-2.4.3/bin/gimp). This simplifies package management somewhat, but doesn’t help us directly. However, with a minor modification – storing the symlinks in users’ home directories – we get the ability to share package data without having users’ version choices interfere with each other.
So we might have ~alice/bin/gimp pointing to /opt/gimp.org-gimp-2.2.1/bin/gimp, while ~bob/bin/gimp points to /opt/gimp.org-gimp-2.4.3/bin/gimp. Alice and Bob can use whatever programs and libraries they want without affecting each other, but whenever they independently choose the same version of something they will share it automatically. If both users have a default policy of using the latest available version, sharing should be possible most of the time.
Continuing our example above, Alice can now run the new gnupg, while Bob continues to use user-mode-linux, since each has a different ~/lib/libreadline5.so symlink. That is, Alice can’t break Bob’s setup by upgrading gnupg. We can go further. We can have a different set of symlinks per user per program. Now a single user can use gnupg, user-mode-linux and plash at the same time.
When an updated version of user-mode-linux becomes available, we simply update the symlink, so that user-mode-linux and gnupg share the new version of the library, while plash continues with the older version.
In fact, we don’t need to use symlinks at all. Zero Install instead sets environment variables pointing to the selected versions when the program is run, rather than creating huge numbers of symlinks, but the principle is the same.
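A minimal sketch of that environment-variable approach follows; the directory layout and launcher below are illustrative assumptions, not Zero Install’s actual implementation:

    import os
    import subprocess

    def run_with_selections(binary, selections):
        """Launch 'binary' with PATH and LD_LIBRARY_PATH pointing at the
        per-version directories chosen for this particular program."""
        env = os.environ.copy()
        bin_dirs = [os.path.join(d, "bin") for d in selections]
        lib_dirs = [os.path.join(d, "lib") for d in selections]
        env["PATH"] = os.pathsep.join(bin_dirs + [env.get("PATH", "")])
        env["LD_LIBRARY_PATH"] = os.pathsep.join(
            lib_dirs + [env.get("LD_LIBRARY_PATH", "")])
        subprocess.call([binary], env=env)

    # Hypothetical example: run gnupg against one readline version while
    # another program keeps using a different one.
    # run_with_selections("/opt/gnupg.org-gnupg-1.4.6/bin/gpg",
    #                     ["/opt/gnu.org-readline-5.2"])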
Publishing software
The simplest way for a programmer to distribute software is as an archive file containing the program’s files. This archive can be placed on a web page, along with instructions on how to download and run it.
This isn’t very convenient. At the very least, we will expect our computer to check for updates periodically and give us the option of installing them. We will also want the system to download and install any missing dependencies we require. Both of these tasks require a machine-readable version of the web page, and there are two similar file formats available for this purpose.
Luau and the Zero Install feed specification both define XML-based formats for describing available versions of software and where to get them. They are rather similar, although Luau feeds don’t provide information about dependencies and don’t have signatures, while Zero Install feeds don’t provide messages about what changed between versions.
A more subtle difference is that each version in a Luau feed contains a cryptographic digest of the package’s compressed archive, while Zero Install feeds give a cryptographic digest of the package’s uncompressed directory tree. The Monotone documentation has a good explanation of how this can be done, although the manifest format it describes is not identical to the Zero Install manifest format.
While either type of digest is sufficient to check that a downloaded package is correct, Zero Install’s digests also allow installed packages to be verified later, and permit peer-to-peer sharing. This doesn’t work if you give the digest of the archive, since the archive is thrown away after installation. Of course, there’s no reason why both can’t be provided, giving an extra layer of security.
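To make this concrete, here is a deliberately simplified feed and the selection logic a client might apply; the element and attribute names are invented for illustration and do not follow the Luau or Zero Install schemas exactly:

    import xml.etree.ElementTree as ET

    FEED = """
    <feed name="http://example.org/libfoo">
      <version number="1.0" digest="sha256=4fa..."
               archive="http://example.org/libfoo-1.0.tar.gz"/>
      <version number="1.1" digest="sha256=9c2..."
               archive="http://example.org/libfoo-1.1.tar.gz">
        <requires name="http://example.org/libbar"/>
      </version>
    </feed>
    """

    def latest_version(feed_xml):
        """Pick the newest listed version and report its digest and
        the other feeds it depends on."""
        root = ET.fromstring(feed_xml)
        best = max(root.findall("version"),
                   key=lambda v: [int(x) for x in v.get("number").split(".")])
        deps = [r.get("name") for r in best.findall("requires")]
        return best.get("number"), best.get("digest"), deps

    print(latest_version(FEED))
    # ('1.1', 'sha256=9c2...', ['http://example.org/libbar'])

A client that polls such a feed periodically can offer updates, and can recursively fetch the feeds named in the dependency list.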
Java Web Start’s JNLP is another XML-based format with similar goals, but only works with Java programs. Also, the JNLP file and all the jar files (libraries) must be signed with the same certificate, which isn’t suitable for the distributed OSS development model.
Sharing installed software
We saw above that letting users install software and having it shared between them was possible (and safe) provided that they were able to put genuine versions of programs in the shared directory, but not incorrect ones. There are two ways we can achieve this.
The first is to have users ask a privileged system process to install a program, giving it the program’s globally unique name. The system downloads the named software and unpacks it into a directory with that name. So, Alice can ask the system to install http://gimp.org/gimp or http://evil.com/gimp, and the system will install to /opt/gimp.org-gimp or /opt/evil.com-gimp as appropriate. Alice cannot cause one to appear in the other’s location, so Bob can be confident that /opt/gimp.org-gimp really is the program he wants. This is rather similar to the way a local caching web proxy works.
The second approach is to have users download the software and then ask the privileged system process to copy it to the shared cache. This requires the program’s name to be a cryptographic digest of its contents, as explained above.
Which method is best? An early filesystem-based version of Zero Install used the first method, while the current one uses the second. The main disadvantage of the first method is that the privileged process is rather complicated, and this is not a good thing for a program which runs with higher privileges than the person using it, since it’s easier for a malicious user to find bugs to exploit. After all, the process must download from the network, provide progress feedback to the users downloading the package, allow users to cancel their downloads or select a different mirror (but what if two users are trying to download the same package?), and so on. In particular, it is not possible to share software installed from a CD when using the first method.
Using the second method, the privileged process only needs to be able to check that the digest of a local directory matches the directory’s name. For example: Alice goes to gimp.org and discovers that the version of the Gimp she wants has a SHA256 digest of “4fa…”. She sees that the directory /shared-software/sha256=4fa… already exists on the computer (perhaps Bob added it earlier). Alice runs this copy, knowing that the installer wouldn’t have let Bob put it there under that name unless it had exactly the same contents as the copy Alice wants to run.
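A sketch of the trusted helper used in this second method, reusing the tree_digest() function sketched in the Naming section (the cache path is an assumption for illustration):

    import os
    import shutil

    SHARED_CACHE = "/shared-software"

    def add_to_cache(unpacked_dir):
        """Copy a user-supplied directory into the shared cache, but only
        under the name derived from its own contents."""
        name = tree_digest(unpacked_dir)        # e.g. "sha256=4fa..."
        target = os.path.join(SHARED_CACHE, name)
        if os.path.exists(target):
            return target                       # already cached; nothing to do
        shutil.copytree(unpacked_dir, target)
        # Re-check after copying, in case the user modified the source
        # directory while the copy was in progress.
        if tree_digest(target) != name:
            shutil.rmtree(target)
            raise ValueError("contents changed during copy; rejected")
        return target

Because the only privileged operation is this digest check, the attack surface is far smaller than that of a helper which must also download, report progress and handle cancellations.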
Security
The basic security model used by Linux and similar systems is to have different users, each in their own security domain. Each user must trust the core system (e.g. the kernel, the login system, etc) but the system is protected from malicious acts by users, and users are protected from each other.
In this essay, I’ve talked about malicious users in several places, but it’s important to realise that this includes otherwise-trustworthy people who are (accidentally) running malicious software, or whose account has become infected with a computer virus, or who have failed to choose a secure password, and so on. So even on a family computer, where the people all trust each other, there is benefit to containing an exploit in a single user’s account.
Many Linux installation systems work by downloading a package and then executing a script within it with root access, and copying files into locations where they can affect the whole system. If you tell your computer to “upgrade all packages” each week, there may be several hundred people in the world who can execute any code they like on your machine, as root, within the next seven days!
For some packages, this is reasonable; you can’t expect to keep your kernel up-to-date without trusting the kernel’s packager. For others (desktop applications, for example) we might hope to limit them to destroying only the accounts of users who actually run them. Games, clipart, and documentation packages should ideally be unable to damage anything of value. This is the Principle of least privilege.
As well as malicious or compromised user accounts, we must also consider the effects of an insecure network, hostile web-sites, compromised servers, and even malicious software authors.
Klik is activated by the browser trying to resolve a URL starting with ‘klik://’. Firefox displays a confirmation box to prevent malicious sites from starting an install without the user’s consent, although the dialog does give the user the option to defeat this protection in future.
Autopackage requires the user to download the package file and then run it from the file manager. Firefox extensions can only be installed from white-listed sites, and a count-down timer prevents users accidentally clicking on Install. Zero Install requires the user to drag the link from the web-browser to some kind of installer or launcher (e.g. onto a Start menu), and requires confirmation that the author’s GPG key is trusted.
Transport Layer Security (e.g. the https protocol) can protect against insecure networks and replay attacks. It allows the client to be sure that it is talking to the host it thinks it is, provided the remote host has a certificate signed by a trusted CA. However, TLS requires the private key to be available to the server providing the software; the server does not require any action from a human to make use of this key. This means that an attacker breaking into the server and modifying a program will go undetected.
An alternative approach is for the author of the software to sign it on their own computer and upload the signature to the server. This should be more secure, since the developer’s signing machine is much less exposed to attackers (it may not even be on a network at all). It also allows mirrors to host the software, without the user having to trust the mirrors.
In fact, rather than signing the software itself, we may prefer to sign the XML file describing it. The XML file contains the digest of each version, as explained above, and the software can be verified from that. The advantage here is that the actual package file doesn’t need to be modified. Also, the signature remains up-to-date, since the author re-signs the whole XML file on each new release (signing keys should be updated from time-to-time to use stronger algorithms and limit the effect of compromised keys).
The downside of these static signatures is that replay attacks are possible, where an attacker (or mirror) provides an old version of a program with known security flaws, but still correctly signed. To protect against this, Zero Install records the time-stamp on the signature and refuses to ‘upgrade’ to a version of the XML feed with an earlier signature. The resulting warning should also make it obvious, to users who had already received the more recent version, that a break-in has occurred.
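A sketch of that rollback check, assuming the signature has already been verified (for example by GnuPG) and its timestamp extracted; the state file location is an assumption:

    import json
    import os

    STATE_FILE = os.path.expanduser("~/.cache/feed-timestamps.json")

    def check_not_replayed(feed_url, signature_time):
        """Refuse a correctly-signed feed that is older than one already
        seen for the same URL (a replay / rollback attack)."""
        try:
            with open(STATE_FILE) as f:
                seen = json.load(f)
        except (OSError, ValueError):
            seen = {}
        last = seen.get(feed_url, 0)
        if signature_time < last:
            raise ValueError("feed %s is signed earlier (%s) than the copy "
                             "already seen (%s); refusing to downgrade"
                             % (feed_url, signature_time, last))
        seen[feed_url] = signature_time
        os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
        with open(STATE_FILE, "w") as f:
            json.dump(seen, f)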
The signed XML file must also include the globally unique name of the program; it’s no good trusting a correctly-signed version of ‘shred’ when you asked your computer to open the file with a text editor!
As always, users have the problem of deciding whether to trust a particular key in the first place. The hint in the screenshot above is from a simple (centralised) database supplied with Zero Install, and only says that the key is known, not trustworthy. A useful task for a QA team (or distribution) would be to add their signatures to approved versions of a program, or to provide their own database of trusted keys.
A final point is that we may want to give different levels of trust to different programs. If I am evaluating six accounting packages then I will probably want to give them all very limited access to my machine. Once I have chosen one, I may then give that single program more access.
Sun’s Java Web Start is able to use the security mechanisms built into Java to run programs in a restricted environment. Other systems may use more general sandboxing tools, such as Plash.
Compiling
Freedom to modify programs requires easy access to the source code, and the ability to use your patched version of a library with existing programs. Our installation system should be able to download the source code for any program, along with any compilers, build tools or header files we need, as in this screenshot of Zero Install compiling a Pager applet.
It is often the case that a program compiled against an old version of a library will still run when used with a newer version, but the same program compiled against a newer version will fail to work with the old library. Therefore, if you plan to distribute a binary it is usually desirable to compile against the oldest version of the library you intend to support. Notice how Pager in the screenshot above has asked to be compiled against the GTK 2.4 header files, even though my system is running GTK 2.8. My blog post Easy GTK binary compatibility describes this in more detail, with more screenshots in 0compile GUI.
The Pager binary package includes an XML file giving the exact versions of the libraries used to compile it. Our ability to install any version of any library we require without disturbing other programs allows us to recreate the previous build environment very closely, reducing the risk that recompiling it will have some other unintended effect.
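For example, the build step could record its selections in a small file so that the same environment can be requested again later; the element names here are invented and are not the format Zero Install actually writes:

    import xml.etree.ElementTree as ET

    def record_build_environment(selections, output="build-environment.xml"):
        """Record the exact dependency versions (globally unique name plus
        content digest) used for a build, so the build can be recreated."""
        root = ET.Element("build-environment")
        for name, (version, digest) in selections.items():
            ET.SubElement(root, "selection",
                          interface=name, version=version, digest=digest)
        ET.ElementTree(root).write(output)

    # Hypothetical example:
    # record_build_environment({
    #     "http://gtk.org/gtk+": ("2.4.14", "sha256=..."),
    # })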
Converting between formats
Autopackage and Klik packages are supplied as executable files. Running the script installs the software. Zero Install uses no installation instructions, only an XML file describing the software and its requirements. Most systems fall in between these two extremes, often giving dependencies declaratively, but with scripts to perform any extra setup required.
The scripting approach gives the most power to the packager. For example, supporting a new archive format only requires having the Klik server send out suitably modified shell scripts for clients to run, whereas new archive formats can only be used with Zero Install after upgrading the Zero Install software itself.
On the other hand, scripts give very little power to the user; the only thing you can do reliably with a script is execute it. It is possible to trace a particular run of a script and see what it did; CheckInstall monitors a program’s “make install” command and then generates a package from the actions it observes. However, this cannot pick up dependency information or detect alternative actions the script might have taken in other circumstances.
My klik2zero script can convert Klik packages to Zero Install feeds. This works because the result of executing a Klik script is a self-contained archive with all the program’s files. However, you have to host the resulting file yourself because the information about where the script got the files from is lost.
The autopackage2zero program works differently. Many Autopackages are actually shell scripts concatenated with a tar.bz2 archive, and the converter parses the start of the Autopackage script to find out where the archive starts and then creates an XML description that ignores the script entirely. This means that you can create a Zero Install feed for an existing autopackage, using Zero Install to check for updates and providing signature checking, but actually downloading the original .package file. Again, this loses any information about dependencies or other installation actions, but it does work surprisingly often.
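A rough sketch of that trick (the real converter parses the script’s own offset variables; scanning for the bzip2 magic bytes, as done here, is a simplification that can misfire if the script itself happens to contain them):

    import io
    import tarfile

    def extract_embedded_archive(package_path, dest):
        """Find the tar.bz2 archive appended to an Autopackage-style shell
        script and unpack it, ignoring the script entirely."""
        with open(package_path, "rb") as f:
            data = f.read()
        offset = data.find(b"BZh")              # bzip2 stream magic
        if offset < 0:
            raise ValueError("no embedded tar.bz2 archive found")
        with tarfile.open(fileobj=io.BytesIO(data[offset:]),
                          mode="r:bz2") as tar:
            tar.extractall(dest)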
Going the other way (converting from a declarative XML description to a script) is much easier. Zero2Bundle creates large self-contained application directories from Zero Install feeds by unpacking each dependency into a subdirectory and creating a script to set up the environment variables appropriately. Note that this is different to the normal (recommended) way to use Zero Install applications from ROX, which is using AddApp to create a tiny ROX application directory with a launcher script.
Even if people continue to get most of their programs from centralised distributions, the process of getting new versions of packages into the distributions in the first place could benefit from some kind of decentralised publishing system. A packager should be able to tell their computer to watch for new releases of programs they have packaged, download each new release, create a package for it, and notify them that it’s ready for testing. Likewise, adding a new package to a distribution should require no more than confirmation from a packager. If it is not this simple, then we should be finding out why and fixing it.
Better integration between decentralised systems and native package managers would also be very useful. Users should be able to mix-and-match packages as they please.
Summary
We want to combine the openness and freedom of getting software from upstream developers with the convenience of binary packages, dependency handling and automatic updates. We want to allow users, administrators, programmers, QA teams and reviewers to collaborate directly.
Users of traditional Linux distributions can only easily use software from their own distribution. This is inefficient, because popular software is packaged over and over again, and limiting to users, because less common software is often not available at all. Adoption of new software is hindered by the need to become popular first, so that distributions will carry it, so that users will try it, so that it can become popular.
If we no longer have a distribution to ensure unique naming of packages then we must use a globally unique naming scheme. Options include UUIDs, content-based digests, and URLs.
We must also ensure that packages cannot conflict with each other, especially if we permit mutually untrusting users to install and share programs. We can avoid the possibility of conflicts by keeping each package’s files in a different directory and naming these directories carefully.
Dependencies can be handled by letting go of the idea of having a single, system-wide version of a program installed. Instead, dependencies should be resolved on a per-program basis.
To allow our packaging system to check for updates and fetch dependencies for us we must provide the information about available versions in a machine readable format. Several XML file formats are available for this purpose.
For a decentralised packaging system to be used for more than the occasional add-on package, it must be able to share downloads between users to save bandwidth, disk space and memory. This can be done either by having a trusted process perform the download, or by having a shared directory which will only store directories with names that are cryptographically derived from their contents.
There are many security concerns to be addressed, for both traditional and decentralised software installation. We should avoid running scripts with more privileges than they require, and avoid running them at all if we can. We must provide a user interface that makes it difficult for users to accidentally install malicious software, and allow users to check that software is genuine in some way.
Free software requires the ability to make changes to programs. We should be able to get the source code easily, plus any compilers or build dependencies required. With conflict-free installation, we can get better binary compatibility and more reproducible builds, since we can build against selected versions of dependencies, even if these are not the versions we normally use when running.
Finally, we saw that it is often possible to convert between these different formats, with varying degrees of success. Even if most users don’t start using decentralised packaging right now, but continue with their existing centralised distributions, these techniques are useful to help the process of getting packages into the distributions in the first place.
I say this all the time on digg and only get dugg down. Sadly the greater linux community doesn’t like the idea of not having to install software. Installation of software is a pathetic idea which Windoze and Linux seem to embrace. Luckily for MacOS and AmigaOS, it isn’t usually a necessity to run an installer of any type – thank god these two OSs exist.
Hiding what it’s doing means it’s not installed? The files it saves its settings to don’t count for installation, nor in some cases the virtual mounting thingie (yeah, I forgot the term)? It’s nice packaging (I think still better than Klik), but it is no more “not installed” than on any other OS; the interface is just really simple, right down to the file level.
For ports and things not available in such nice packages, it’s generally more difficult. Of course the idea of using many directories off the root at the same time for a piece of software, rather than those being within it, is partly to blame (but it can save space).
IMO, since a Linux distro has finite software available in the repository, you should have all of it available through the GUIs, with some indicator that it is not installed, then installing it upon first run. This would make things easy as a user, and still not annoyingly hide things if you want to do it some other way, or look inside.
Then, for third-party, just have scripts to support use of RPM and/or similar from the file managers (as in, you click, and it installs, and if the info is in there, even runs; I’m not sure of all the metadata in them).
Huh? The only alternative to installing software is SaaS (software as a service). Even dragging and dropping a compressed archive falls under the category of software installation.
No, I don’t think I’m being pedantic here. The first time I had to install software on a Mac, it took me a little while to figure out how to get it to install the software permanently rather than run it out of the disk image. Even then it didn’t add the program to the Dock automatically. It made me feel stupid that I had mastered Gentoo yet had problems with the “intuitive” MacOS X. I’m not saying it’s harder to install software on a Mac than on other systems, but it’s not 100% intuitive for everybody. Very few things are.
Further, how do I keep my system up to date? Is there a single command or button? What if the upstream distributor moves to a different web address? Why should I trust a third party to deliver software that integrates nicely with the rest of my system?
I think that searching the web and downloading some compressed archive is a pathetic idea from “Windoze” and MacOS X. Package management takes the guesswork out of finding, installing, and updating software, while providing some protection against malicious packages, I might add.
To each his own… but I personally feel that the most challenging aspect of package management for newbies is that it’s different.
I see why you get “dugg down” a lot. Notice how you make a strong assertion and then back it up with nothing? You could be right, but you’re not going to change anyone’s mind like this.
This is a good article and a great attempt to explain the problems, cases, resolutions and summaries of what I feel is a common problem in the FOSS environment.
I guess there is the argument that this system just adds yet another installation choice, adding another layer of complication, especially for less technical users. This is true, but only if a common installation method isn’t adopted by the majority of Linux vendors.
Frankly, I think it’s very much time to start settling on a single common installation method / package manager. It’s certainly a hurdle that I feel is impacting to a degree adoption of Linux.
Installation and compilation via source should always be there. It exists on other platforms and offers a degree of freedom that developers and more technically savvy users want, however for the rest a common solution needs to be used across all distributions.
Fingers crossed.
> Frankly, I think it’s very much time to start settling on a single common installation method / package manager. It’s certainly a hurdle that I feel is impacting to a degree adoption of Linux.
My stock argument here is that, like everything in FOSS, standardization will follow consensus. While competing package formats are certainly not a desirable feature, neither is standardizing on a substandard packaging system. Last I checked, still no consensus on the best package manager.
However, one thing is for sure: Linux users are being served far better by RPM and APT than by Klik, Autopackage, Zero Install, et al. Show me a distro that successfully uses one of these self-contained package formats by default, and I’ll certainly reconsider.
While I agree that the centralized approach is better, I think somehow encouraging original developers to make applications available in one of those packaging systems can help streamline the packaging and delivery of new versions in repositories to users. Which is one of the points I think he was trying to make. I don’t know how familiar the author is with Conary/rPath[1] but I seem to recall it functions based around a similar concept.
While I don’t think it needs to be the case that distros abandon their current packaging systems, for things like backwards compatibility, wider ISV adoption and maybe even the reclamation of packagers’ effort, it would be a good thing for them to look into adapting their processes along these lines. Maybe modifying build tools to import those universal formats.
The idea of making cross distro collaboration easier is not new. Various[2] other[3] efforts[4] seem to attempt addressing interoperability amongst distros without having to give up individuality and focus. I think it is critical at this point to help continue to foster growth and development especially in areas such as polish and refinement.
[1] http://wiki.rpath.com/wiki/Conary
[2] https://launchpad.net/distros
[3] http://freedesktop.org/wiki/Home
[4] http://www.pathname.com/fhs/
Definitely a nod for Conary, thanks for bringing that up. Simply put, Conary is like a hybrid revision control system and packaging system. Very promising, because it does what the other distributed packaging systems don’t: it actually attempts to make the upstream developer’s and packager’s jobs easier instead of focusing solely on the end user.
Less so for Launchpad… While it’s great that Canonical wants to help OSS projects coordinate and work together, this does not seem like an appropriate opportunity for Canonical to go proprietary. This is not a matter of “proprietary == bad.” If Canonical came out with a proprietary remote system administration console, for example, that would be great for the Linux community. But a collaboration tool targeted at free software projects? They had to know this was not going to be acceptable for many projects.
If you want to try and lock-in your corporate customers, go right ahead. If you want to sell premium add-ons to end users, I have no problem with that. But please don’t insult the development community by peddling proprietary development tools. That is sooo not in the spirit of free software.
> However, one thing is for sure: Linux users are being served far better by RPM and APT than by Klik, Autopackage, Zero Install, et al. Show me a distro that successfully uses one of these self-contained package formats by default, and I’ll certainly reconsider.
Nonsense. Linux users aren’t being served better just because something just happens to exist in its current form.
…a thread where discussion of software installation will be on-topic. 🙂
Kudos to the author for the excellent presentation. These are very interesting projects, very much in the spirit of FOSS. Still, I don’t think that such an approach should replace traditional package management altogether. I think the author indicates this in the last paragraph:
> Finally, we saw that it is often possible to convert between these different formats, with varying degrees of success. Even if most users don’t start using decentralised packaging right now, but continue with their existing centralised distributions, these techniques are useful to help the process of getting packages into the distributions in the first place.
This is a very good approach, IMO. Package managers fill a need right now, and remain very good for system software. This allows for a smooth transition where it makes sense.
That said, I still think that this is not currently a barrier to Linux adoption. None of the people who have asked me about Linux have ever mentioned ease of installation of applications as a factor, or asked if software versions were up-to-date. So while I think this is a laudable effort to unify and simplify software installation, as well as improve the communication between users, developers and distros, I would not place false hopes in the idea that such a system would necessarily attract more users.
That said, I hope solutions such as these become more prevalent, especially for “standalone” apps.
> That said, I hope solutions such as these become more prevalent, especially for “standalone” apps.
One of the problems is that the independent developer has to be accepted into some repository universe. That’s problematic.
I’ll add that “system” software would probably best be served by a centralized mechanism, as it is on Windows and OSX, even though the need on Unix tends to be less because of more decentralized (or modular) components than windows.
..for this to “work”, all distros must use the same compiler (or only versions that are compatible with one another), the same libraries (or only those that are compatible with one another), the same kernel (or only those that don’t break anything between releases), all maintainers must agree on the same patches (or all distros must run the same vanilla kernel), all distros must sit and agree on the “right” direction before they start moving, or have the big players dictate what they want to the others ..and the list goes on ..
the dynamics of FOSS development more or less demand this kind of “craziness” ..having everybody line up, with each person only moving after the one in front of them, would slow everything down ..
“Do we need this many people working on essentially the same task?” It looks that way; some tasks will have to be repeated while others work on different paths, advancing everything forward ..
I remember trying to kickstart a discussion to solve this some time ago[1], didn’t end that well[2]
[1]http://groups.google.com/group/linux.gentoo.dev/msg/28441f08ac1cc20…
[2]http://groups.google.com/group/linux.gentoo.dev/msg/02ea221ca44f705…
sounds like a lovely way to destroy your system. And are there really that many package management systems in use? From what I can tell, 80% of Linux users are likely using an rpm- or deb-derived package platform. Those who aren’t really only have one valid counterargument: the desire to build all code locally a la Gentoo.
these “generic” packaging projects were and are non-starters; even if they could create a system that makes sense, utilizes your hardware properly (not one size fits all) and does not destroy your install, no one is using them anyway
YES, there are.
http://en.wikipedia.org/wiki/Package_management_system#Examples
Furthermore, as in the case with Ubuntu and Debian, a distribution with the exact same PMS as the next distro can still be incompatible with that other distro.
Ubuntu and Debian are the perfect examples showing why decentralised packaging is a BAD idea. If two systems that are so closely related to each other as they are, using the same packaging and installing system, succeed in creating incompatible binary packages, how should a decentralised packaging system solve binary incompatibilities between, let’s say, Red Hat and Debian?
The problem this article about decentralised installation tries to solve is a closed-source problem. If an open-source programmer can’t get his open source program into the main tree of distributions, he can provide .deb and .rpm packages. The user can install them quite easily (right click: install package, in Kubuntu and Ubuntu that is), and i.e. apt-get will search for dependencies on the centralised system. It’s more work for the maintainer in the beginning (and a lot of work for closed-source programmers to provide .debs and .rpms for all distros; Opera seems to like this way of working though), but afterwards, when distributions start packaging his program, he’ll get more peer review by actual programmers who compile and support his packages.
This week seems to be about porting problems from the windows world to the FOSS world. First backward compatibility and now binary compatibility. They are only problems to people who want the latest and greatest.
> Ubuntu and Debian are the perfect examples showing why decentralised packaging is a BAD idea. If two systems that are so closely related to each other as they are, using the same packaging and installing system, succeed in creating incompatible binary packages, how should a decentralised packaging system solve binary incompatibilities between, let’s say, Red Hat and Debian?
This is exactly why you need a decentralised system: your example is a centralised system failing to cope with packages from two different distributions! It’s quite possible to create a system that would handle this situation fine.
The reason it fails is not because it’s a centralised system, it’s because you’re trying to install a package created for one distribution on another distribution. If that distribution differs too much from the distribution it was created for it might not work. It’s perfectly possible to distribute .debs in a decentralised way (opera does it like this). It’s a bit more work for opera, but well it’s quite easy to install opera on an ubuntu/xandros/mepis/debian/suse… system.
I’d rather see more .deb, or .rpm packages on website than .autopackage packages, for those who insist on installing a program by searching the net and downloading it from a website.
> The reason it fails is not because it’s a centralised system, it’s because you’re trying to install a package created for one distribution on another distribution.
Why do you think it fails, then, if it’s not the installation system’s fault?
I can install user-mode-linux on a Debian system and then install the Ubuntu package inside that. So it’s not a hardware limitation.
I can install Ubuntu in a chroot in my Debian system, and run it from there, so it’s not a kernel issue. I can set DISPLAY to the host system, so it’s not an X problem.
So, that leaves just libraries and services (daemons). Libraries can be handled as described in the article.
Services (e.g. mysql) are usually designed to run across different computers, and so have a stable protocol.
In fact, the only thing I can think of is D-BUS (a system service which used to change its API regularly). And now it’s gone 1.0 that should be fine too.
> This week seems to be about porting problems from the windows world to the FOSS world. First backward compatibility and now binary compatibility. They are only problems to people who want the latest and greatest.
Maybe that’s because on Windows backward compatibility and binary compatibility are solved problems, on Linux they’re not.
Write and compile a program on Windows Vista using only functions available in Windows 95, then take it to a Windows 95 box, and it will work.
Now compile on glibc 2.4 using only ANSI C functions, take your program to a box with glibc 2.3 and it will fail to start, even though all the functions you use are available. And that’s not even going to horrors like thread-local locales, where it won’t even work on another libc of the same version compiled without that option, and Fedora Core 6’s GCC’s DT_GNU_HASH, where unless you compile with special flags, AFAIK your program doesn’t even run on any other distro.
It’s not that people want the latest and greatest – currently the only reliable way to get unpackaged software installed is to compile it, and ISVs want to write software once and have it work on every distro for all time.
If you know something I don’t, please elaborate.
Well, I know for one that backward compatibility is not solved in the Windows world. For example, Navision (an MS product) cannot run on Windows Vista.
Backward compatibility also has its drawbacks in speed: being forced to keep supporting every driver ever created, e.g. USB drivers in Windows vs. the fastest USB drivers in Linux.
Compatibility at the binary level is a disease that has more drawbacks than benefits, and it is something the open source world circumvents quite well at the source level. It should not be introduced into the open source world.
No, just the opposite. Ubuntu and Debian incompatibilities are perfect examples of a subpar packaging system. Scroll (depending on your threaded view) up to find links on the Conary packaging system. Fine-grained versioning would seem to solve these incompatibility problems.
Like the other poster said, there is a difference between package format and package compatibility. You might not realize how similar .rpm and .deb files are to one another. In fact, most of the spec is identical.
The differences lie in package management and package compatibility. For example, we see that rpm has lots of different management systems: yum, urpmi, and yast come to mind. The APT equivalent of rpm is dpkg, but people simply refer to the Debian-derived system as APT because it is more-or-less the de-facto management system for dpkg.
Further, distributions often change the dependencies, add/remove patches, and even change the names of packages they port from other distros to play well on their systems. The name of the Xorg server package, for example, is different on many distributions, and each distro uses different distro-specific patches.
Could this be made simpler? Yes… but the distros hide this complexity from the user. It is mainly more complex for the distributor and its packagers instead of for the end users.
Speaking of which, this whole distributed vs. centralized package management debate represents a tradeoff–shifting work between upstream and the distributor. With distributed packages, the burden is on the upstream developer to get their package working on as many distros as possible. With centralized packaging, the burden is on the distributor to get as many upstream packages to work on their distro.
Guess what? Developers hate packaging! They want their job to be done as soon as their source tree builds and runs properly. Distributors, on the other hand, are essentially packaging machines. Packaging is what they do best. Why not leave things as they are? Let the developers code, and let the distributors package.
Because the distributors sometimes take long to package. Packaging a properly running source tree should be as simple for developers as tarring the folder. Notice that what is described in the article simply generates instructions as to what is provided/needed in terms of files and actions.
> Because the distributors sometimes take long to package. Packaging a properly running source tree should be as simple for developers as tarring the folder. Notice that what is described in the article simply generates instructions as to what is provided/needed in terms of files and actions.
There’s nothing preventing developers from packaging their applications using statically-linked binaries *and* having the same apps be packaged by distros at the same time. That way those who want the latest and greatest can download them directly from the developer’s web site, and those who prefer to wait for the packages to be available in the repos can do that as well.
You shouldn’t have to choose between the two, you should be able to use both as you wish. The only thing I can see being a bit harder is managing system menus, but with freedesktop.org that’s not much of an issue anymore.
Yeah, but can you install that app from the developer’s site with a GUI front-end if you wanted to?
Anybody with command-line prowess can pick either solution if they wanted to, while those who don’t, but still want the latest and greatest just as much as the next person, will only have one solution available to them.
And unfortunately, that’s the majority of the folks who use desktop distros such as Ubuntu….or at least the target audience that those distros are gunning for.
> Yeah, but can you install that app from the developer’s site with a GUI front-end if you wanted to?
I’m not sure you read what I posted correctly. I said that developers can provide standalone GUI installers or use one of the other (GUI) methods that use statically-linked libraries. I was not talking about compiling the apps…
> but still want the latest and greatest just as much as the next person
See, that’s where you get it wrong. Ordinary users don’t want the latest and greatest, they want apps that work well. Constantly getting the latest version is something that Windows geeks do (the same holds true for ex-Windows geeks using Linux, such as me).
When you say that everyone wants to run the latest and greatest, I believe you are projecting your own preferences onto the average user, and that you are mistaken in doing so.
Mind you, as I said I think developers should release statically-linked GUI installers in addition to tarballs, and let the distro makers update their repositories in all due time.
Also, as I’ve pointed out many times over the past few days, the Ubuntu repos are *very* up-to-date. If one cannot wait a day or two for an app to be released, then perhaps one needs to get a life… 🙂
Ubuntu's repositories may be great for mainstream apps, but as soon as you approach the corners of the known space, such as new apps or apps with very specific target groups, the quality of service quickly drops.
Yes, which is why I say that developers of such apps should use one of the statically-linked, distro-neutral packaging options at their disposal.
Yes, some manner of distro-neutrality is probably desirable. Installing duplicate copies of dependencies is probably not the best solution, though.
Instead better cross-distro integration infrastructure is needed.
Then we have to think about FLOSS needs. It isn't enough to just provide a means of installation (Freedom 0); you also need easy access to source code (Freedom 1), easy branching (Freedom 3) and all that.
See, that’s where you get it wrong. Ordinary users don’t want the latest and greatest,
Not as a rule, no. But when they ask for help “Evolution crashes when I try to read my mail from {odd mail server}!” they get told “Try version X – it’s fixed there”.
Should the distribution package the new barely-tested version? No. Most users want the stable version. But the user experiencing the crashes wants the new version.
That’s a good point. I guess that, with the repository system, it ultimately depends on how fast the packagers are at implementing bugfix versions into the main distro. So far with Ubuntu I’ve been lucky…
See, that’s where you get it wrong. Ordinary users don’t want the latest and greatest, they want apps that work well.
And see, that’s where you get it wrong. The latest and greatest “developer release” tends to be better than the previous version. And the latest and greatest developer release tends to lag behind the repository version.
Hell, in Ubuntu you’re in this bizarro world of various degrees of instability before a real release, and then nothing else besides security updates.
And see, that’s where you get it wrong. The latest and greatest “developer release” tends to be better than the previous version.
Yes, but often (in the Linux world) those “latest and greatest” are beta versions, which may introduce breakage, especially if they depend on lots of other packages.
And the latest and greatest developer release tends to lag behind the repository version.
I think you probably meant the opposite…
Again, the amount of lag varies. For me (someone who likes to try out new versions), I find that with Ubuntu the delays are acceptable.
Hell, in Ubuntu you’re in this bizarro world of various degrees of instability before a real release, and then nothing else besides security updates.
Not so. Just add the “Backports” repository to Synaptic, and you’ll get newer versions of apps for your stable distro.
And there are not “various degrees of instability”. There is only the current release and the next (unstable) release. I think you’re confusing Ubuntu with Debian here…
Yes, but often (in the Linux world) those “latest and greatest” are beta versions, which may introduce breakage, especially if they depend on lots of other packages.
That's a problem with package management, not with any arbitrary hand-waving about what counts as a beta version, which is typically better than the previous stable version anyway.
I think you probably meant the opposite…
Yes I did
Not so. Just add the “Backports” repository to Synaptic, and you’ll get newer versions of apps for your stable distro.
If it’s been backported, which is a big if.
And there are not “various degrees of instability”. There is only the current release and the next (unstable) release. I think you’re confusing Ubuntu with Debian here…
Of course there’s various degrees of instability. If you jump on the next pre-release the day after an official release then it’s going to be a helluva lot more unstable than a pre-release the day before it becomes a release.
Guess what? Developers hate packaging!
Yes, mostly because they have to build one package per version of each distro. On Windows, they build just one package altogether.
Distributors, on the other hand, are essentially packaging machines. Packaging is what they do best.
Not by a long shot. A frightening amount of packaging is done by people who don’t know what they’re doing. Wine, for example, comes with wineserver, a per-user server that handles things like inter-process synchronization, which is started on-demand by wine and exits when wine itself exits. A while back, some distro put wineserver in an initscript (it’s a server, right?) and tried to run it on startup, as root!
Like autopackage put it, it makes no more sense for a distro to do packaging than for them to do artwork and GUI design for the app.
Why not leave things as they are? Let the developers code, and let the distributors package.
Developers need feedback from users, and getting feedback from a version you released 6-12 months ago is worse than useless.
Distributors produce one package for one version of one distro – an absolute vendor lock-in.
Distributors have to put in extra work packaging, testing, dealing with bug reports – and that’s for each distro. Tonnes of wasted effort being the middle man.
Making a DEB or RPM is an absolute waste of your time – it works today but not tomorrow, it works on this box but not on that one.
The sad thing is, solutions existed for years now, and end users love them (autopackage website gets 500-1000 hits per day), but distros would rather die than implement them.
Another approach is to work on a proto-package format: a standard whose sole purpose is to do as much distro-independent work as possible upstream.
There are a few of those today. We have the autotools ./configure; make; make install routine, but it's arguably not very maintainable in the long run, and not that user-friendly.
Debian is turning into a protodistro.
Gentoo was a “meta”-distribution from the start.
So instead of focusing all effort on designing package systems that bypass distro efforts (Autopackage, klik, Zero Install, what have you), the effort should be spent on proto-package systems that make life easier for the user/developer (prosumer) AND the distributor.
> Guess what? Developers hate packaging! They want their
> job to be done as soon as their source tree builds and
> runs properly. Distributors, on the other hand, are
> essentially packaging machines. Packaging is what they do best.
Often the problem is the packagers. They pack the stuff without any feedback to the developers. The first time you know there is a package for distro $foo is when somebody sends you a mail about a bug in the package.
Another thing is the delay. When I release something, I want my users to have the new version ASAP, not when they buy the next CD or the Hurd is ready.
These installers do the same thing as the centralized systems, but with all the voodoo contained within the file itself instead of coming from a centralized system.
There are two distinct disadvantages to this approach:
1) the files are “distro neutral”, which just screams “statically linked”
2) since they are self-contained, there is no centralized means to update them if bugs are found, or upstream dependencies change.
Re-inventing the wheel is great, but the new wheel has to be significantly better in order to convince people to use it. A distro-specific solution offers all kinds of advantages that “distro neutral” solutions just can't compete with.
1) the files are “distro neutral”, which just screams “statically linked”
Not necessarily. The article has a whole section on handling shared libraries:
http://osnews.com/story.php/16956/Decentralised-Installation-System…
since they are self-contained, there is no centralized means to update them if bugs are found, or upstream dependencies change.
That doesn’t follow at all. Zero Install, for example, lets you specify any number of ‘feeds’ for a single program. One might be ‘upstream’, another might be your distribution’s security team.
What Linux really needs at this point is package generation built into the big IDEs. For instance, MonoDevelop and Eclipse each need a plugin that lets the programmer select a number of code projects in the workspace. Then, when you build this “packaging project” in the IDE, it will generate .rpm, source .rpm, .deb and .tar.gz packages automatically.
If it's THAT easy to generate all the different types of packages, then the devs will actually generate all the packages. Instead of what we have today, where the big projects (Mono etc.) typically provide their stuff in all the major packaging formats, while the smaller devs choose their favorite format (and ignore the rest).
Once all projects publish their stuff in (more or less) all the major packaging formats, distros (or users themselves) can stop by at the project website and pick up the package for testing or install/usage.
This “auto packaging” plugin should also generate .msi files for Windows (and whatever the Mac uses). At least this should be an *option* for Mono and Eclipse, which typically generate cross-platform programs. Then if someone creates a Linux-only program with Mono (or hates Windows) he will just not check the “Generate .MSI package” checkbox.
The key is: make it ridiculously easy for the devs to generate top-notch packages.
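For illustration only, the back-end of such a plugin could boil down to invoking a few stock packaging tools. The project name, paths and metadata files below are hypothetical; generating the DEBIAN/control and .spec metadata is exactly the hard part the plugin would have to automate:

$ tar czf myapp-1.0.tar.gz myapp-1.0/                    # plain tarball of the build tree
$ dpkg-deb --build myapp-1.0-debroot myapp_1.0_i386.deb  # needs a generated DEBIAN/control inside the staging dir
$ rpmbuild -bb myapp.spec                                # needs a generated .spec file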
Static linking of libs or apps will only make users complain about KDE apps not using KDE styles because they are linked against a different Qt version, and that is only one example.
To all the people commenting on this article. If you want to maximize the amount of people that will read this article you should Digg it. If you don’t have an account with digg, get it.
While the article’s example of Alice and Bob having different GIMP versions installed is quite cool, I am wondering how this works when a new user account is created afterwards.
Say an account for user Diane is created after Alice installed GIMP 2.2 and Bob installed GIMP 2.4
Will Diane get 2.4?
What if Bob upgrades to 2.6? Will Diane keep 2.4 or be upgraded with Bob?
Assuming she will be upgraded, as this would seem to be the usual expectation: how can she tell the system to remain on 2.4? Does she have to explicitly install 2.4 herself, even if it is already installed?
While the article’s example of Alice and Bob having different GIMP versions installed is quite cool, I am wondering how this works when a new user account is created afterwards.
Say an account for user Diane is created after Alice installed GIMP 2.2 and Bob installed GIMP 2.4
Will Diane get 2.4?
Assuming the sys admin hasn’t set up anything, Diane doesn’t get any version:
diane $ gimp
command not found: gimp
If she wants it, she goes to gimp.org and tells the computer “Run that program, Gimp 2.4”. At that point, the system notices it is already present and runs the version Bob’s added directly without downloading another copy.
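With Zero Install, that “tell the computer” step is a single command pointing at the program's feed; the URL below is illustrative, not the real GIMP feed:

diane $ 0launch http://example.org/gimp/gimp-2.4.xml

0launch fetches the feed, sees that an implementation with the required digest is already in the shared cache (Bob's copy), and runs it without downloading anything again.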
If she wants it, she goes to gimp.org and tells the computer “Run that program, Gimp 2.4”. At that point, the system notices it is already present and runs the version Bob’s added directly without downloading another copy.
I see, makes sense since both installed Gimp versions have been installed by individual users.
Can a person with elevated rights, e.g. the administrator, install system wide software, i.e. something all users see as installed?
I mean using zero install, obviously they can still use the system’s package manager
Can a person with elevated rights, e.g. the administrator, install system wide software, i.e. something all users see as installed?
I mean using zero install, obviously they can still use the system’s package manager
Sure, the admin just creates a launcher as /usr/local/bin/gimp (man 0alias shows how to create such a short-cut).
Or, the admin can add it to the system-wide defaults for the Applications menu (how to do that is desktop-specific).
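As a rough sketch, the launcher that 0alias drops into /usr/local/bin is essentially just a short wrapper script (the feed URL here is made up):

#!/bin/sh
# /usr/local/bin/gimp -- run the Gimp via Zero Install
exec 0launch http://example.org/gimp/gimp-2.4.xml "$@"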
Sure, the admin just creates a launcher as /usr/local/bin/gimp (man 0alias shows how to create such a short-cut).
Ah, very good.
Or, the admin can add it to the system-wide defaults for the Applications menu (how to do that is desktop-specific).
Well, no. Any current desktop implements the freedesktop.org menu specification.
One can use xdg-desktop-menu to handle the installation of the .desktop file to make sure it is installed correctly (even for intentionally incompatible distributions like Mandriva)
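For example, registering a menu entry for a hypothetical gimp-2.4.desktop file (assumed to exist already) looks like this:

$ xdg-desktop-menu install --novendor gimp-2.4.desktop

The same xdg-utils family also covers MIME types (xdg-mime) and icons (xdg-icon-resource), which is what the question below is getting at.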
Assuming one has, as with the application launcher, a cross-desktop specification for things like MIME type registration, icon sets, etc., can 0install handle this automatically (provided tools like the xdg-utils are available), or is this always a manual step?
This article was really well thought-out and delivered–it’s not just some publicity piece. It’s clear that the developer/author has taken a lot of time to think about installation and use cases, and is making the next generation of Linux installation technologies a reality. I especially like the idea that different versions of libraries should be matched to different versions of programs, albeit without the needlessly inefficient app-folder method. I wish the author the best of luck, and hope that Zero Install catches on!
However, one flaw I see in your implementation is the cryptographically-derived naming of folders. In the beginning of the article, you point out that non-hash-derived identifiers are much more easily user-readable, yet later on you claim that end-user “Alice” will be willing to go to the Gimp homepage, look up the appropriate hash and compare it to the hash-name of the folder that “Bob” installed on the hard drive. Yeah, right! Not only does that sound like the exact opposite of the user-friendly goals you set out with, but it's also incredibly insecure to assume user vigilance as a means of security! All the hash-naming of folders would serve to do is make the end user more confused.
Likewise, the certificate verification dialogue box doesn't seem too user-comprehensible or foolproof–especially considering that the user is told the database is “Unreliable”! A whiteboarding system (either independent or distro-specific) would be much more reliable, but of course then the system is practically as centralized as was supposed to be avoided!
It seems as though achieving ease-of-use, decentralization, and security all at once really is an elusive goal…
However, one flaw I see in your implementation is the cryptographically-derived naming of folders. In the beginning of the article, you point out that non-hash-derived identifiers are much more easily user-readable, yet later on you claim that end-user “Alice” will be willing to go to the Gimp homepage, look up the appropriate hash and compare it to the hash-name of the folder that “Bob” installed on the hard drive. Yeah, right!
I should have been clearer: the installation system does this on behalf of Alice. It gets the hash from the XML file describing the Gimp; all Alice has to do is find the link to the XML file.
Likewise, the certificate verification dialogue box doesn’t seem too user-comprehensible or foolproof–especially considering that the user is told the database is “Unreliable”!
Right. Ideally, there should be multiple feeds for this information. Currently, there’s only mine, which is “unreliable” because I don’t have the resources to check out people’s keys or offer any compensation if I’m wrong.
This is certainly an area where a commercial company could add value, but without having to start their own distribution (as they’d have to do now).
I should have been clearer: the installation system does this on behalf of Alice. It gets the hash from the XML file describing the Gimp; all Alice has to do is find the link to the XML file.
Aha, ok. Although I still feel like having some sort of real-name identifier there would help the administrator find the folder in case of a problem. Why can't the directory name be the program name (as opposed to the URL), i.e. gimp-2.3, so it can be installed off a CD? I don't understand what about making the folder name a hash makes it more secure than, say, storing the hash information in a separate protected file within the program's folder.
Right. Ideally, there should be multiple feeds for this information. Currently, there’s only mine, which is “unreliable” because I don’t have the resources to check out people’s keys or offer any compensation if I’m wrong.
Aha. Cleared up, although as it is now it's clearly not a long-term solution. My only gripe with the model is that in an office setting, users might not be able to install any software they wanted, since they would probably be locked into a list of software on a predefined whiteboard server. However, I suppose that the problem of “missing software” would probably be much scarcer than in today's repository model, since hosting such a whiteboard server that certifies URLs rather than individual versions of programs is undoubtedly much easier than having to package and test every new iteration of software by hand.
Why can't the directory name be the program name (as opposed to the URL), i.e. gimp-2.3, so it can be installed off a CD? I don't understand what about making the folder name a hash makes it more secure than, say, storing the hash information in a separate protected file within the program's folder.
OK, so Alice puts a CD in the drive with ‘gimp-2.3.tgz’ on it. How can the system know that it’s genuine? Anyone could make such a CD. The system can’t tell, so it can’t share it with Bob.
But, if the CD contains an archive called ‘sha256=XYZ.tgz’, then it can be put in the shared directory. The system can see that it’s correct just by comparing the archive’s contents to its name.
Hmm… while your solution does sound perhaps a bit more elegant, I still don’t see why the system couldn’t extract an identifier text file from the archive and then compare it to the archive’s contents. Also, just to be clear: comparing the hash to the contents wouldn’t do a thing to ensure security all by itself; it would also need to be compared to the hash at the project’s website, right? Otherwise anyone could create malware and provide a hash to match it, but make it look like normal software. Furthermore, don’t the archive contents have to be re-analyzed every time you want to verify their authenticity? So given that the website needs to be accessed and the hash needs to be recalculated in any case, couldn’t we just skip the step with the local copy of the hash?
To rephrase: the hash being stored in the local filesystem does very little to ensure the integrity of the program. Only checking the folder's contents against an online hash of those contents can ensure the program's security, which effectively renders the local copy of the hash useless.
Or am I missing something?
Otherwise anyone could create malware and provide a hash to match it, but make it look like normal software.
Yes, just because something is in the shared directory doesn’t mean it’s safe to run it. One reason why unfriendly names are OK here is that you really don’t want users browsing around running things that just look interesting!
Furthermore, don’t the archive contents have to be re-analyzed every time you want to verify their authenticity?
No, that’s why you have the privileged helper. It checks the digest once and then adds it. So, if you see a directory called:
/shared-directory/sha256=XXXXXXX
then you don’t have to calculate the XXXXXXX bit yourself. If it didn’t match, it wouldn’t have been allowed in.
BTW, you don’t need to use the web to check the hash. It may be that Alice and Bob both trust the CD (in which case they get to share the copy on it). Denise doesn’t trust the CD, so she checks with the web-site instead (and will share the copy only if it matches).
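A much-simplified sketch of what such a privileged helper does, treating a single archive as the hashed object (Zero Install actually hashes a manifest of the unpacked directory tree, but the principle is the same; the file names are the illustrative ones from this thread):

# only admit the archive if its contents really match the name it claims
claimed=XYZ                                            # the XYZ part of 'sha256=XYZ.tgz'
actual=$(sha256sum "sha256=$claimed.tgz" | cut -d' ' -f1)
if [ "$actual" = "$claimed" ]; then
    mkdir "/shared-directory/sha256=$claimed"
    tar xzf "sha256=$claimed.tgz" -C "/shared-directory/sha256=$claimed"
fi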
>>No, that’s why you have the privileged helper. It checks the digest once and then adds it. So, if you see a directory called:
/shared-directory/sha256=XXXXXXX
then you don’t have to calculate the XXXXXXX bit yourself. If it didn’t match, it wouldn’t have been allowed in. <<
Is that “sha256” there before the XXXXXXX the name of the program? If so, then my fears are allayed–the name is user-readable and the user could find the program without relying on a third-party program/database.
However, I don’t get the point of talking about how “this is safe if you see the hash” (or even, “this is safe if the system sees the hash”), since you say the “privileged helper” is supposed to prevent unsafe items from being in there in the first place. So malware shouldn’t be able to find its way in, period, regardless of whether it has a hash directory name or not.
Which actually seems to contradict the following statement:
>>Yes, just because something is in the shared directory doesn’t mean it’s safe to run it.
What does that imply–that the privileged helper really isn’t that effective after all? If the shared folder isn’t secure, what’s to prevent malware from just being copied in there, in a folder named after a hash that was generated using the same public algorithm you make available to all software publishers?
**Unless**…. every user had a unique encryption key for their individual system, so hash files generated on the system were unable to be generated ahead of time by malware authors. That would be a very secure mechanism. In that case you’d need two separate hashes–one stored online (or on CD), that is checked at install time to verify the software, and a different one on the local system, encrypted using the local personal encryption key, which shows that the file has been verified once already. That would effectively be spoof-proof, since a simple copy procedure would never be able to imitate a true Zero Install-based installation of a program. And if the user were to somehow lose their local encryption key through an OS upgrade or some such thing, no problem, they can just reinstall the software using a new key. (Actually, this kind of bears a remarkable similarity to evil system-bound DRM schemes, but it serves a much less evil purpose, since in this case it’s not locking the user out of installing the software they want, it’s only locking out the software they never asked for.)
>>One reason why unfriendly names are OK here is that you really don’t want users browsing around running things that just look interesting!
Security by obscurity doesn’t seem like much of an answer. I’d much prefer to keep my directory tree user-navigable over turning it into the filesystem equivalent of the Windows registry.
And I still can’t think of a reason that any system would require the hash to be stored in the directory name.
you say the “privileged helper” is supposed to prevent unsafe items from being in there in the first place.
No, I said that things can only have their real name. ‘unsafe’ is a subjective term; you can’t expect the computer to enforce the rule “No unsafe software to be installed in this directory”. Different users might even disagree on whether something is unsafe.
If the shared folder isn’t secure, what’s to prevent malware from just being copied in there, in a folder named after a hash that was generated using the same public algorithm you make available to all software publishers?
OK, take ROX-Filer version 2.5 (Linux x86 binary) for example. It has this hash (and, therefore, directory name):
sha1=d22a35871bad157e32aa169e3f4feaa8d902fdf2
You're quite free to change it in some way and add your malicious version to the shared directory too. BUT, changing it will change the hash so your evil version might be called:
sha1=94fd763dfe509277763df47c53c38fc52866eaf4
You can’t make your version appear under the original’s name, because the name depends on its contents.
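You can see the same effect with any file and a digest tool; the digests below are just the illustrative values from this comment, and the file name is made up:

$ sha1sum rox-filer-2.5.tgz
d22a35871bad157e32aa169e3f4feaa8d902fdf2  rox-filer-2.5.tgz
$ echo "one evil byte" >> rox-filer-2.5.tgz
$ sha1sum rox-filer-2.5.tgz
94fd763dfe509277763df47c53c38fc52866eaf4  rox-filer-2.5.tgz

Any change at all produces a completely different digest, so the altered version cannot take over the original's directory name.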
And I still can’t think of a reason that any system would require the hash to be stored in the directory name.
It depends what you want it for. I think you are thinking about this scenario:
“Alice is bored. She wants to run something, so she has a look in the shared directory to see what other users have been running. Noticing a directory called ‘gimp-2.4’, she decides to run it, first checking at gimp.org that its hash matches the one on the web-site.”
That works fine with the hash inside the directory, but it’s not the case Zero Install is aimed at. Here’s our target scenario:
“Alice needs to edit some photos. She goes to gimp.org and asks to run Gimp 2.4. She (her software) looks for an existing directory with the same hash and finds the copy Bob installed earlier.”
Notice that the question we’re trying to answer isn’t “does this directory have hash XXX?”, but “where is the directory with hash XXX?”
Aha, so basically the hash is more to protect the program’s integrity AFTER installation than trying to figure out whether the program is legit AT THE POINT OF INSTALLATION. Now everything is much clearer.
Many years ago my first Linux was FT-Linux (and I regret losing the CD). It installed a base system and whatever packages it was instructed to. HOWEVER, it also ran all the other stuff on the CD from the CD, with some caching.
FT was great to live with, right from the install. Having just installed Ubuntu this week, I am forever going back to Synaptic to pull in one tool or another.
Saying it is a BAD idea because it is difficult makes no sense to me. Humans apply ingenuity to difficult problems to invent solutions – that's what we do!
Let’s see it in practice and then decide!
I couldn’t agree more. These days I laugh when people say such-and-such software is “intuitive” or “unintuitive”, especially if such software is Windows- or Mac-based. To add to your examples, what’s “intuitive” about dragging a disc icon to a trashcan to eject it (it should delete all files/quick format/fully format the disc) or dragging an icon to an /Apps folder to install it? (To a Unix user, if the “app” (binary) is dragged to /Apps, then the libraries should be dragged to /Libraries, etc). As a Gentoo user – you’re right, as a Gentoo user the “intuitive” way of installing foo is to emerge foo.
> These days I laugh when people say such-and-such software is
> “intuitive” or “unintuitive”, especially if such software is Windows- or
> Mac-based. To add to your examples, what’s “intuitive” about dragging
> a disc icon to a trashcan to eject it (it should delete all files/quick
> format/fully format the disc)
I agree with that, but let me add that next to each ejectable drive icon in Finder there is an eject button (labeled with the same symbol as the eject button on VCRs). I always use this button because I’d call it intuitive (and thus easy to remember), while the trashcan gesture is non-intuitive (I didn’t even remember it until now). What is still non-intuitive about the eject button is that it is located in Finder (or rather, that disks appear at several points in the UI at all).
> or dragging an icon to an /Apps folder to install it?
Why, that sounds very reasonable to me.
> (To a Unix user, if the “app” (binary) is dragged to /Apps, then the
> libraries should be dragged to /Libraries, etc).
I haven't had to install additional libraries on my Mac yet, but that's exactly what I thought I'd have to do. At least that sounds the most reasonable to me.
> As a Gentoo user – you’re right, as a Gentoo user the “intuitive” way
> of installing foo is to emerge foo.
In “my dream OS”, the package manager is no magic piece of voodoo, but simply a front-end to organize /Applications and /Libraries (to manage the vast amount of stuff you’d find there), and to access online repositories easily, to download and verify packages and move them to those folders. In other words: a tool, and not a wizard.
You have a point–“intuitive” has a lot to do with what you’re used to. And (perhaps unfortunately) most people are used to Windows and/or Macs, which means that borrowing interface elements from those OSes will result in a higher percentage of the general population being able to find their way around. Hence, the elements that make up Windows and Mac interfaces are more “intuitive” for the general population.
Since the usage case described seems to be targeting desktop-Linux end-users in office environments, I’d imagine that they wouldn’t want to make the interface *too* unfamiliar. That said, the Windows paradigm of having to download an executable, then find it, double-click it and click through “Next” a bunch of times is hardly what I would call a good interface in anyone’s book.
If you don’t have an account with digg, get it.
Jawohl, mein Kommandant.
(That, btw, is German for “no, thankyou.”)
I agree with most of article points.
However, I feel free to note some outstanding issues I'd like to see solved (I don't know how much of this is already supported in 0install, though):
-System dependencies:
What if an app requires a specific daemon version (with a specific protocol version) which is single and root-owned on the system, and that version isn't provided by the distro? The answer would be to avoid this kind of dependency, or advise packagers not to target too high a version. It is useful to have a way to try to fool the program so that it at least installs and tries to run with the older version. An area I even fear to mention is kernel features, init & udev scripts, etc. Probably this kind of package is better handled by distro-specific packaging.
-Scripts are an issue. Obviously autotools' or deb/rpm scripts can't be used directly, as they can mess up the system. They need to be modified to create the specific temporary environment in which the zero-installed app is supposed to run. The same goes for the library environment: when an app which needs a specific library is run, it should also execute whatever script is required to set up the environment for that version of the library (SDL env. variables as an example; see the minimal wrapper sketch at the end of this comment).
-Configuration: the user should be able (via a friendly GUI) to change which lib version is the default for running his app (whether it then fails or not is another problem). He/she should also be able to select the ‘variant’ of a package he wants, as provided by different authors, such as distros, contributors or the original code authors. It should also be possible to set defaults for which variant the automatic installer will favour, including blacklist & whitelist support. Another wanted ability is to have a specific tweaked lib/app version for a specific distro, if ‘vanilla’ code is known to cause compatibility problems.
-Source. Users need the ability to compile and install custom software into their home directory which is still registered in the package databases, or in an ‘overloading’ local database. Perhaps autotools & co. should support this for ease of use.
-Local untrusted install. If a user wants to have a version of a software/lib from a source which is new/unknown, he should be able to install it into his local home directory. Once root starts trusting this source, he can move it to the shared repo.
-Binary compatibility. Another big topic which can make cross-distro compatibility harder. The Autopackage site has some interesting info about it.
-Optional dependencies. This needs support from multiple parties: linker support, packaging system support, and finally properly designed applications which will not malfunction if an “optional” library isn't present, but scale down their functionality.
-Multiarch. Although not mentioned, I hope x86 on x86-64 and packages with special compile options (MMX, SSE, SSE2, SSE3, etc.) are supported concurrently by Zero Install.
All together, this system seems the best of the few devised to solve Linux packaging fragmentation (and dependency hell). I certainly hope that distributions will support it as an alternative install system, as it shouldn't be too invasive an addition. It certainly sounds like a good way to distribute newer app versions to older distributions, as well as various third-party stuff which isn't in distro repositories (perhaps even commercial software). Not that deb & rpm are bad, but they don't solve many of the above issues.
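As a purely illustrative sketch of the kind of per-version environment setup mentioned in the ‘Scripts’ point above (the paths, the digests and the SDL variable are made-up examples, not Zero Install's actual mechanism):

#!/bin/sh
# hypothetical wrapper: point the app at one specific shared library version before launching it
export LD_LIBRARY_PATH="/shared-directory/sha256=LIBDIGEST/lib:$LD_LIBRARY_PATH"
export SDL_AUDIODRIVER=alsa    # example of a library-specific environment tweak
exec "/shared-directory/sha256=APPDIGEST/bin/someapp" "$@"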
Source. Users need the ability to compile and install custom software into their home directory which is still registered in the package databases, or in an ‘overloading’ local database. Perhaps autotools & co. should support this for ease of use.
Exactly. This is what the ‘Register’ button does in the compile screenshots:
http://rox.sourceforge.net/desktop/node/360
-Multiarch. Although not mentioned, I hope x86 on x86-64 and packages with special compile options (MMX, SSE, SSE2, SSE3, etc.) are supported concurrently by Zero Install.
It currently just uses the machine name and OS from uname. You could easily get it to prefer ‘-mmx’ binaries if available, though (edit arch.py; there’s a table listing compatible architectures in order of preference for the current machine type).
I agree with that, but let me add that next to each ejectable drive icon in Finder there is an eject button (labeled with the same symbol as the eject button on VCRs). I always use this button because I’d call it intuitive (and thus easy to remember), while the trashcan gesture is non-intuitive (I didn’t even remember it until now). What is still non-intuitive about the eject button is that it is located in Finder (or rather, that disks appear at several points in the UI at all).
Well, the last time I used a Mac, which was admittedly several years ago, the “eject disk” option in the Special menu caused the computer to immediately eject the disk, then immediately ask for it back – and this was in the days when Macs, in the interests of “making computers behave like an appliance”, were completely closed – no expansion cards, no nothing.
In “my dream OS”, the package manager is no magic piece of voodoo, but simply a front-end to organize /Applications and /Libraries (to manage the vast amount of stuff you’d find there), and to access online repositories easily, to download and verify packages and move them to those folders. In other words: a tool, and not a wizard.
The point I was making is that, if I didn’t know what went in to making software, /Apps would be just as much “a magic piece of voodoo” to me as you seem to imply “emerge foo” is to you – no more, no less. I agree that it would be nice if you could drag stuff to /Apps and have it place stuff in /Libraries, etc. – or, if already installed, not – but I don’t see any inherent advantage to this over emerge or, perhaps, graphical frontends to emerge. Not better, just different.
Do we need this many people working on essentially the same task? Are they bringing any real value? Without the Fedora packager, Fedora users wouldn’t be able to install Inkscape easily, of course, so in that sense they are being useful. But if the main Inkscape developers were able to provide a package that worked on all distributions, providing all the same upgrade and management features, then we wouldn’t need all these packagers.
I haven’t read the whole article yet, but I can see the author is getting at what I’ve said on OSnews numerous times. Distros are a form of lock-in, and more importantly a suboptimal allocation of scarce developer (package maintainer) resources.
I have to disagree with that statement, for a couple of reasons: first, the source package is available on most distributions, which provide a spec file (used to build these binary packages) and the tarball from the upstream developers (Inkscape in this example).
Second, this package needs to be reviewed to validate the license, check security (key word), make sure the installation goes to the right path, and more. Since each distribution has different criteria for packaging an application, having another package manager from upstream, like Autopackage, won't help solve the above issues.
Second, this package needs to be reviewed to validate the license
No it doesn’t. The distro only worries about licenses for software it distributes. They would be out of the loop.
I’m all for this kind of solution. I hope it expands to be more robust. It would be nice if it could integrate into the update managers of various distros. It would also be nice if it integrated bittorrent.
All we need now is for a distro to use it as default. The big guys, like Ubuntu, probably won’t cave in. But a new kid on the block could probably implement it. Imagine how much smaller a distro team would need to be, if they outsourced the packaging to the devs.
But one question hanging in my mind is compatibility conflicts with different kernels. Alice couldn’t use two different programs that rely on two different kernels, at the same time, could she? Or, take VMWare for instance, what if specific kernel modules need to be installed? User-mode-linux?
I personally think that linux needs to be more modularized. Drivers need to be pushed into user-space. But, in the mean-time, how can something like 0install be a full end-to-end solution? Would there have to be at least _some_ alternative package manager to handle system critical software?
All we need now is for a distro to use it as default. The big guys, like Ubuntu, probably won’t cave in. But a new kid on the block could probably implement it. Imagine how much smaller a distro team would need to be, if they outsourced the packaging to the devs.
Ulteo?
I personally think that linux needs to be more modularized. Drivers need to be pushed into user-space.
Agreed, but I doubt it’ll happen anytime soon, given the sentiments of the kernel devs. Actually, I wouldn’t care about the drivers being kernel-space, if it weren’t for the fact that it makes installing them much more complicated than it should be…. (For instance, why do you think people so often revert to ndiswrapper, even for wireless chipsets that Linux is “supposed to” support?)
But, in the mean-time, how can something like 0install be a full end-to-end solution? Would there have to be at least _some_ alternative package manager to handle system critical software?
I’d say the answer to that is a near-indisputable yes.
… that every single distribution of Linux is effectively a unique Operating System – it would be like having several hundred versions of Windows or OSX, each with varying levels of binary compatibility. This makes it very difficult for developers, especially those who have valid reasons for keeping their source code closed, to write software for Linux. This is why most commercial software written for Linux is written for Red Hat, Suse or occasionally Debian – as far as commercial software houses are concerned, there are no other distros, and for most, Red Hat is the only distro (Red Hat=Linux, all other distros != Linux in their view).
Package management tools like APT are a good idea for managing core OS updates, but a really bad idea for the installation of apps.
This is one area where Windows gets it pretty right – the OS comes bundled with very few applications, and the apps it does come with are all written and maintained by the vendor. MS manages the core OS and these packages with its own package manager (Windows Update). Many apps you install on windows have their own automatic update features etc..
Where Linux distros get it wrong is by bundling sometimes thousands of apps with the OS, and then trying to manage them, which is simply impossible to do properly in a centralised way. It also has the potential to backfire on developers and users alike – say you install Gimp for the first time on one distro and it is horribly buggy, due to poor packaging. Your first impression of the program is not very good, and most people don't give things another chance, so that user is going to move on to something else, even though the original developer had nothing to do with the bugs that were created by a poorly packaged distro. Thereby one program gets a bad rap because of one distro. The ISV should shoulder the responsibility for the app, and get the kudos as well, not the distributor/packager.
Linux distros need to stop trying to bundle 374 types of kitchen sink in the distro, and instead focus on shipping a small, stable, compact Operating System with a limited set of basic software (browser, media player, text editor, file manager, image viewer and perhaps an email client), and create a stable binary platform for ISVs to be able to simply make one package that will work for all distros.
Until that happens, Linux is unlikely to make its way onto a much bigger % of PC than at present, simply due to the fragmentation and confusion generated by having too many distros, too many apps that basically do the same thing in slightly different way.
For me, the ideal Linux distro is one that comes with a limited set of core functionality, an uncluttered, simple but powerful and tightly integrated GUI (eg XFCE, which I am very fond of), and the ability to install signed drivers and software from the Vendor’s official website. It should not take up more than a couple of hundred megabytes.
I like to have a single program that has a comprehensive feature set that does one thing well, rather than using 15 different programs to perform only part of a task at a time.
Since I rarely have more than a dozen or so apps installed over my base OS (at this point on WinXP as my main OS), I don't need any tools to manage my installed software, and since I use tools with comprehensive functionality, I don't foresee any need to change this by installing hundreds of little bits and pieces.
Linux as a whole needs to simplify, rationalise and become more organised and integrated to really take off in the way many of us would like it to. Linux the OS needs to separate itself from the apps that run on it, and at the same time create a stable platform for ISVs to create software that can be distributed separately in binary (or source) form and Just Work® on any distro.
Oh boy. Where to start?
“[The problem is]… that every single distribution of Linux is effectively a unique Operating System – it would be like having several hundred versions of Windows or OSX, each with varying levels of binary compatibility.”
Er, except not. The differences between distros of Lovely Linux that use the same package managers (and most use only one of three or four package managers) are an order of magnitude smaller than the differences between Wonderful Windows versions or Mac OS <9 and Mac OS X (Can’t speak for versions of OS X as I’ve not used many).
This makes it very difficult for developers, especially those who have valid reasons for keeping their source code closed, to write software for Linux.
I’ve not seen a valid reason for keeping source closed yet, unless maybe it’s national-security-related. (And even then, as in the recent spats between the US and UK over their fighter-jet software, some degree of openness may be required). If you can introduce me to one, however…
This is why most commercial software written for Linux is written for Red Hat, Suse or occasionally Debian – as far as commercial software houses are concerned, there are no other distros, and for most, Red Hat is the only distro (Red Hat=Linux, all other distros != Linux in their view).
Maybe so; however, I've seen commercial software run on Gentoo. I suspect you could get it working on Slackware with no more effort than a “Slackwearer” [sic] would be able to handle. I suspect you will find that the reasons so many software houses act as if Red Hat = Linux are:
1. That for many moons (due to restrictions on trademark use) it was indeed packaged by third-parties as simply “Linux”;
2. That in the business sector it outweighs use of SuSE, its nearest competitor, by a factor of 8 to 2.
Package management tools like APT are a good idea for managing core OS updates, but a really bad idea for the installation of apps.
Why?
This is one area where Windows gets it pretty right – the OS comes bundled with very few applications, and the apps it does come with are all written and maintained by the vendor.
On the contrary, it’s all wrong; Magical Microsoft OSes are barely usable on a clean install (ok, technically the OS’s are barely usable with the computer fully loaded, but you know what I mean).
MS manages the core OS and these packages with its own package manager (Windows Update). Many apps you install on windows have their own automatic update features etc..
Which means you have a million and one different programs to search for when you (re-install) the OS, and yet another update program screaming at you when each of the apps wants to be upgraded; if Lovely Linux systems did it this way they would be criticised for “a million and one different upgrade systems”. (But unlike the there's-a-million-and-one-apps-to-do-the-same-thing argument, it would be valid.)
Where Linux distros get it wrong is by bundling sometimes thousands of apps with the OS, and then trying to manage them, which is simply impossible to do properly in a centralised way.
Well, Gentoo and Debian seem to manage this Herculean and impossible task. I'm not saying there aren't mistakes and problems; I'm saying that Windows is not the Wonderful Be-All-and-End-All the Worshippers at the Altar of Good Gates say it is.
It also has the potential to backfire on developers and users alike – say you install Gimp for the first time on one distro and it is horribly buggy, due to poor packaging. Your first impression of the program is not very good, and most people don't give things another chance, so that user is going to move on to something else, even though the original developer had nothing to do with the bugs that were created by a poorly packaged distro. Thereby one program gets a bad rap because of one distro. The ISV should shoulder the responsibility for the app, and get the kudos as well, not the distributor/packager.
Remember DLL hell? As for “one distro’s package manager buggering up and casting doubt on the quality of all the available versions of software X,” I take it for granted:
1. That people intelligent and inquisitive enough to be investigating Linux learn very early on that these problems can be (a) temporary and/or (b) limited to one distro;
2. That these problems are by no means Limited to Linux and even Worry users of Wonderful Windows.
3. That for reasons 1 and 2 your Windows-worshipping FUD is invalid.
4. That a person sufficiently (and justifiably) outraged by the (lack of) quality in a Windows OS will also be responsible for the non-purchase or use of Windows-only apps. The user informed by a company that wants said user to use said apps can easily retort that he wants said company to port said app.
Linux distros need to stop trying to bundle 374 types of kitchen sink in the distro, and instead focus on shipping a small, stable, compact Operating System with a limited set of basic software (browser, media player, text editor, file manager, image viewer and perhaps an email client),
Newsflash – there are distros that do this. Even fully loaded, however, few Linux distros I’ve seen contemporaneous with any version of Wonderful Windows take up as much space as vanilla installations of same.
For me, the ideal Linux distro is one that comes with a limited set of core functionality, an uncluttered, simple but powerful and tightly integrated GUI (eg XFCE, which I am very fond of), and the ability to install signed drivers and software from the Vendor’s official website. It should not take up more than a couple of hundred megabytes.
You really think that by advocating Slackware/DSL/Puppy Linux, you are speaking up for the “Common Man”?
I like to have a single program that has a comprehensive feature set that does one thing well, rather than using 15 different programs to perform only part of a task at a time.
Not the Linux way. Though of course you can use OO.org if you so choose – and many of us do. Nevertheless, I’m not going to sit here and let you (or anyone else) dictate my choice of app, thankyou.
Since I rarely have more than a dozen or so apps installed over my base OS (at this point on WinXP as my main OS), I don't need any tools to manage my installed software, and since I use tools with comprehensive functionality, I don't foresee any need to change this by installing hundreds of little bits and pieces.
I wasn’t aware you were Everyone. I also wasn’t aware Wonderful Windows programs didn’t spread cute little bits of themselves all over the filesystem (particularly if you attempt to install them on any drive other than C:) and the registry.
Linux as a whole needs to simplify, rationalise and become more organised and integrated to really take off in the way many of us would like it to.
Translation: Linux needs to become Windows. Except that Windows is not exactly simple (let alone rational), and its “package management” is even LESS “organised and integrated”.
Seriously, if you want to use Windows, kindly do so – and stop trying to turn Linux into it.
“I’ve not seen a valid reason for keeping source closed yet, unless maybe it’s national-security-related. (And even then, as in the recent spats between the US and UK over their fighter-jet software, some degree of openness may be required). If you can introduce me to one, however…”
Uhm… it's a pretty big one actually – it is called making a profitable living from the software you write. While some software is well suited to being commercially provided as FOSS on the basis of selling support for the software, in most instances this is not the case. And the model of “we will give you the app + source code for free, and we will sell you support” creates an inherent incentive for the software developer to deliberately create substandard software that requires users to purchase support. Good software should be so elegant and simple to use, so well written and documented that buying support is not necessary.
Not every software developer can afford to spend their spare time writing software, and commercial software companies need a stream of revenue to fund the salaries of programmers. Closing the source code prevents others from:
a)reducing your competitive advantage by using ideas you may have invested a lot of money developing
b)stealing the focus from a project by forking and fragmenting it (as has happened to a lot of OSS projects, particularly Linux).
“On the contrary, it’s all wrong; Magical Microsoft OSes are barely usable on a clean install (ok, technically the OS’s are barely usable with the computer fully loaded, but you know what I mean).”
No, while I have many gripes with Windows, I am very glad that they provide me with a fairly clean slate to start from – provided I have a web browser, I can add everything I need, and I have little to remove that I don’t. I have quite a bit of control over it. I can also slipstream an installation disc so that I can set it up how I like from a single installation.
Now, there are plenty of minimalist Linux distros that also come with a similarly blank slate, but the problem is that I can’t easily and painlessly download the few programs I want to use and install them without having to go through any number of time consuming or irritating processes. If you know of a distro which comes pre-installed with Klik or autopackage and a basic gui + browser and little else, let me know.
“…your Windows-worshipping FUD is invalid.”
Where did you get the idea that I was a Windows worshipper? Certainly, there are things I like about Windows, but there are just as many that I don't (such as the needless pandering to backwards compatibility, bulky installation size, the GUI etc). Similarly, I very much enjoy using Linux (Xubuntu is my preferred distro at the moment), but there are a number of things that shit me about it (and I have already discussed most of what I feel can be improved in Linux). Same with OSX. Nothing is ever perfect, and openly discussing the good and bad points of each operating system without fear or favour hardly constitutes FUD.
“Nevertheless, I’m not going to sit here and let you (or anyone else) dictate my choice of app, thankyou.”
Where was I dictating what apps you use? Of course, you can use whatever you like, and nothing in what I have said would prevent you from doing this. It sounds like Linux as it is suits you well, and that is fine. I am talking about the one thing that is really holding back linux from widespread adoption (over-reliance on package managers coupled with poor binary compatibility between distros).
“I wasn’t aware you were Everyone. I also wasn’t aware Wonderful Windows programs didn’t spread cute little bits of themselves all over the filesystem (particularly if you attempt to install them on any drive other than C:) and the registry. ”
I never said I was, however, the fact that Windows and OSX remain the two most popular OSes has as much to do with Linux’s fragmented and unfocussed chaos as it does with dodgy OEM bundling on the part of Microsoft and Apple.
And good Windows programs don't spread little bits of themselves around the PC, many don't even use the registry (most of the apps I use are self-contained in their own folder) (granted, there are plenty of badly designed Windows apps that do horrible things to your system). Linux apps tend to spread themselves across a whole bunch of directories (something GoboLinux aims to fix), so I don't think you can honestly claim that Linux (in general) has an advantage over Windows or OSX on this point.
“Translation: Linux needs to become Windows. Except that Windows is not exactly simple (let alone rational), and its “package management” is even LESS “organised and integrated”.”
No, it doesn’t need to become Windows, rather it needs to become more focussed and streamlined, and simpler to use by people who have better things to do with their time than fiddling with command lines and .conf files.
Windows' package management (Windows Update) is highly integrated into the OS, only deals with the core Windows components that ship with the OS, and doesn't affect other third-party apps. You couldn't make it more organised or integrated.
As I said before there are things about Linux and Windows that I like, and if you combined them into a single OS and discarded all of the bits I don’t, I would have an OS I could be very happy with.
I prefer to manage my apps myself, and let the OS take care of itself. Hence my desire for an operating system with an XFCE-like DE that keeps the apps separate from the core OS functionality.
And the model of “we will give you the app + source code for free, and we will sell you support” creates an inherent incentive for the software developer to deliberately create substandard software that requires users to purchase support
You are probably correct. Consider this:
Usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
So to design a usable product you have to specify its users, their goals, and the context of use. The sharper this specification is, the better you'll be able to design usable software.
This means that you have to have quite a narrow focus to create really usable software.
Now, if your business model is to sell software, the cheapest way to produce it is to copy already created software. Thus the way to earn money is to create a cheap “original” that can be sold to many customers at a high price.
The “many customers” part isn't exactly compatible with the “specified user, goal and context” part, though. So you compromise and try to create cheap software for “all users, goals, and contexts”. Which in the end means unusable software.
So the “selling copies of proprietary software” isn’t really the way to “[g]ood software [that is] so elegant and simple to use, so well written and documented that buying support is not necessary” either.
Now, if you base your business model on producing cheap software that is so narrowly focused that no competitor could use the same copies (because you already saturated the market for that software), the best way to gain profit is to share a common platform for base functionality with your competitors and be the best producer of narrowly focused software, based on that platform, for a select niche of users, goals and contexts.
Uhm… it's a pretty big one actually – it is called making a profitable living from the software you write. While some software is well suited to being commercially provided as FOSS on the basis of selling support for the software, in most instances this is not the case. And the model of “we will give you the app + source code for free, and we will sell you support” creates an inherent incentive for the software developer to deliberately create substandard software that requires users to purchase support.
Ah, an old unsubstantiated FUD statement followed by an unsubstantiated slur. That may pass for an argument where you come from, but I can't say the same.
Good software should be so elegant and simple to use, so well written and documented that buying support is not necessary
Yeah, ‘cos BSD and Windows software lives up to that *wink*.
Not every software developer can afford to spend their spare time writing software,
Who mentioned free time, until you did?
and commercial software companies need a stream of revenue to fund the salaries of programmers.
Prove that FOSS software prevents programmers from getting salaries. If you can.
Closing the source code prevents others from:
a) reducing your competitive advantage by using ideas you may have invested a lot of money developing
b) stealing the focus from a project by forking and fragmenting it (as has happened to a lot of OSS projects, particularly Linux).
Yes, in exactly the same way that allowing Fujitsu to make PCs with the same architecture as Dell’s is preventing Dell from “maintaining their competitive advantage” and is thereby bankrupting them.
Oh wait; it isn’t.
Closed software/hardware just means the customer is at the mercy of the vendor. No thanks.
No, while I have many gripes with Windows, I am very glad that they provide me with a fairly clean slate to start from – provided I have a web browser, I can add everything I need, and there is little to remove that I don’t want. I have quite a bit of control over it. I can also slipstream an installation disc so that I can set it up how I like from a single installation.
Er, you can do that with Linux… As for “setting Windows up how I like,” if we pretend for a minute that Windows could be set up to my satisfaction, that level of customizability (read: any) went out with Windows XP, didn’t it?
Now, there are plenty of minimalist Linux distros that also come with a similarly blank slate, but the problem is that I can’t easily and painlessly download the few programs I want to use and install them without having to go through any number of time-consuming or irritating processes.
What, you mean like google software, download software, install software, click Next interminably, accept restrictive licence agreement and/or incomprehensible EULA? I thought we were talking about Linux.
If you know of a distro which comes pre-installed with Klik or autopackage and a basic gui + browser and little else, let me know.
Well, installing whichever distro uses Klik and choosing either something like “minimal installation” or unchecking unwanted software would seem to do the trick.
Where did you get the idea that I was a Windows worshipper?
Because you seem to want to turn Linux into it.
“Nevertheless, I’m not going to sit here and let you (or anyone else) dictate my choice of app, thank you.”
Where was I dictating what apps you use? Of course, you can use whatever you like, and nothing in what I have said would prevent you from doing this.
If you get rid of all the distros but one, and put only one choice of software in the remaining distro, you are enforcing a set of standards which it would be almost impossible to break – just like Microsoft did with all their software.
It sounds like Linux as it is suits you well, and that is fine. I am talking about the one thing that is really holding Linux back from widespread adoption (over-reliance on package managers coupled with poor binary compatibility between distros).
Ah, yes, but you see, the number of Linux distros is probably only outweighed by the number of grains of sand on a beach – and the number of different things different people say are “holding Linux back from widespread adoption”.
“…the fact that Windows and OSX remain the two most popular OSes has as much to do with Linux’s fragmented and unfocussed chaos as it does with dodgy OEM bundling on the part of Microsoft and Apple.”
Yes, and PCs will never be as successful as Macs, Ataris, and Amigas as long as you can get them from just about any manufacturer you like. It’s fragmented and unfocussed chaos.
Oh, wait…
And good Windows programs don’t spread little bits of themselves around the PC; many don’t even use the registry (most of the apps I use are self-contained in their own folder)
There can’t be many “good Windows programs” then; maybe you are thinking of the ones which cost $$$, which I wouldn’t know about.
(granted, there are plenty of badly designed Windows apps that do horrible things to your system). Linux apps tend to spread themselves across a whole bunch of directories (something GoboLinux aims to fix), so I don’t think you can honestly claim that Linux (in general) has an advantage over Windows or OSX on this point.
Yes, GoboLinux does install things in centralised directories (or pretends to), but I’m not claiming centralised app directories are a good thing (they’re not, unless you have space to waste on statically-linked or endlessly reinstalled libraries); what I’m claiming is that Linux apps don’t, in general, install stuff willy-nilly in whatever folder they feel like (binaries go in /bin, /usr/bin or /opt/{packagename}/bin, for example, not in /var or /etc – generally).
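As a rough illustration of that point, here is a minimal Python sketch (the program names are just common examples, nothing specific to this discussion) showing where a few everyday executables resolve to on a typical Linux box – a small set of standard directories, not arbitrary folders:

    # Minimal sketch: ask where a few everyday programs were installed.
    # On most Linux systems these resolve to /bin, /usr/bin or /usr/local/bin –
    # not to /var, /etc or some random per-app folder.
    import shutil

    for prog in ["ls", "grep", "tar"]:
        print(f"{prog:6} -> {shutil.which(prog)}")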
“Translation: Linux needs to become Windows. Except that Windows is not exactly simple (let alone rational), and its “package management” is even LESS “organised and integrated”.”
“No, it doesn’t need to become Windows; rather, it needs to become more focussed and streamlined, and simpler to use for people who have better things to do with their time than fiddling with command lines and .conf files.”
I fail to see how your suggestions above would make it “more focussed and streamlined” (especially since statically linked apps are the shortest path to bloat) or, if you’re not referring to what you’ve said before, what you mean by “more focussed and streamlined”.
If you don’t want to fiddle with command lines and .conf files, then may I suggest you use a Linux distro that does not force you to do that? (Mandrake and SuSE being two examples.)
“Windows’ package management (Windows Update) is highly integrated into the OS, only deals with the core Windows components that ship with the OS, and doesn’t affect other third-party apps. You couldn’t make it more organised or integrated.”
Except by having it deal with other third-party apps.
“I prefer to manage my apps myself, and let the OS take care of itself. Hence my desire for an operating system with an XFCE-like DE that keeps the apps separate from the core OS functionality.”
Sounds like one of the BSDs is in order. However, I do agree that it would be nice if there were a clearer separation between system and apps; Slackware probably comes closest to this within Linux.
I am actually not suggesting that there be only one distro (though I think a bit of pruning is in order); rather, I am suggesting that if distros standardise the way they interface with apps, and agree on standards for core libraries, software manufacturers will not have to worry about distros at all – their binaries will work on every distro straight out of the box. It would also mean that, with standardised libraries, apps would not need to be installed with extra dependencies: all of the standard libs would be part of the OS, and there would be little or no duplication of libraries amongst apps. Anything that uses some boutique library can just incorporate it into the app without cluttering up the system with extra crap. And given the current size of hard drives, I think monolithic binary blobs are perfectly fine for apps these days – no installation method is simpler than dragging a single executable file onto your desktop or into a folder of your choice, and if for some reason you have an unusually large number of applications, it is pretty easy to create a utility to manage them without resorting to a full-blown package manager.
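To make the “self-contained app folder with bundled boutique libraries” idea a little more concrete, here is a hedged launcher sketch in Python – the directory layout and the myapp binary are purely hypothetical, not any existing packaging standard. The launcher prefers libraries shipped inside the app’s own folder and falls back to the system’s standard libraries for everything else, so dragging the whole folder somewhere else still works, because every path is relative to the launcher’s own location.

    # Hedged sketch of a "self-contained app folder": boutique libraries ship in
    # the app's own lib/ directory, while the standardised core libraries come
    # from the OS as usual. All names and paths here are hypothetical.
    import os
    import subprocess
    import sys

    APP_DIR = os.path.dirname(os.path.abspath(__file__))   # the app's own folder
    BUNDLED_LIBS = os.path.join(APP_DIR, "lib")             # bundled boutique libraries

    env = dict(os.environ)
    # Search the bundled lib/ first; anything not bundled is still found in the
    # system's standard library directories (/usr/lib and friends).
    env["LD_LIBRARY_PATH"] = BUNDLED_LIBS + os.pathsep + env.get("LD_LIBRARY_PATH", "")

    # Launch the real binary, passing through any command-line arguments.
    binary = os.path.join(APP_DIR, "bin", "myapp")
    raise SystemExit(subprocess.run([binary] + sys.argv[1:], env=env).returncode)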
The fact is that Windows is still more user friendly than the vast majority of Linux distros (and given the shit Microsoft is prone to producing, that is saying something). Linux may be technically superior in many ways, but in the one way that matters most (ease of use for people who don’t like computers but have to use them anyway), it simply fails to hit the target. Not that it couldn’t; it is just the issue of focus and integration that is the problem. And maybe this is OK – after all, the things that make Linux great for what it is (freedom, flexibility, community) also work against the things that make for a great desktop OS (limited freedom, well-defined standards & APIs, commercial viability and support).
Far from suggesting that Linux become more like Windows, I am suggesting that it adopt some of the features of Mac OS and Windows.
I’m not asking for every Linux distro to disappear – they all have their place (up to a point), but rather for at least one distro to break from the mould and do the things that are needed to give Microsoft and Apple some desperately needed competition.
“Yes, and PCs will never be as successful as Macs, Ataris, and Amigas as long as you can get them from just about any manufacturer you like. It’s fragmented and unfocussed chaos.”
And for that reason you will never have the tight, integrated, focussed experience you get from an Amiga or a Mac. It is also the reason most people get computers from vendors like Dell etc., where the hardware has been well matched and configured in advance. It is somewhat misleading to say that PCs are successful because of their customisability, since the vast majority of computers sold are Dells and the like, which allow very much restricted customisation. PCs became popular because they run Windows, which through an accident of history (certainly not merit) became the de facto standard for OSs, like it or not.
It’s good to see we agree on keeping the system separate from the apps, though we obviously can’t agree on the best method for doing that.
“And for that reason you will never have the tight, integrated, focussed experience you get from an Amiga or a Mac. It is also the reason most people get computers from vendors like Dell etc., where the hardware has been well matched and configured in advance. It is somewhat misleading to say that PCs are successful because of their customisability, since the vast majority of computers sold are Dells and the like, which allow very much restricted customisation. PCs became popular because they run Windows, which through an accident of history (certainly not merit) became the de facto standard for OSs, like it or not.”
Whether or not you can customize PCs was not the point. The point was, rather, that you are not dependent on one vendor for IBM-PC compatible hardware. Becoming dependent on one vendor for computers, operating systems, word processors, or anything else is The Road That Should Not Have Been Travelled.