Fourteen months ago, the Autopackage project was small and active, and members sounded optimistic about its success. Now, although the alternative installer project continues, progress has almost come to a halt. The #autopackage channel on irc.oftc.net sits vacant most days, the developer blogs cover almost anything except the project, and commits to the source code repository have become rare. Formally, the project is still alive, but the major contributors all agree that it is faltering. So what happened?
I think it was trying to solve a “problem” whose importance is greatly exaggerated.
Most people don’t have a problem getting the applications they want on Linux; their distribution of choice already delivers them quite easily. And for the few applications not included in any distribution, it’s easy to include a GUI install wizard users are comfortable with. So what was the problem again?
So what was the problem again?
The problem is that you have so many distros that are spending time packaging the same apps, thereby duplicating each other’s efforts. I suppose this isn’t really a problem, unless you’re one of the volunteers doing this and you suddenly realize what a colossal waste of time and manpower this is.
I still contend that what is needed is an ODF-style standard for packages, containing all the files needed (source and/or binary) along with descriptions of what the name of the package is, what version, what the dependencies are, etc. Then let each distro build their own package managers around this standard. That way, they can all keep the ‘my package manager is better than yours’ mentality that they seem to be so fond of, and an app would only need to be packaged once (or maybe twice, for source & binary), but it could still be used with any package manager that supported the standard.
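To make the idea concrete, here is a rough sketch of the kind of metadata such a standard might carry. This is purely illustrative: the field names and the "frobnicator" app are invented for this example, and it is not any existing format.

# Hypothetical cross-distro package manifest, sketched as a Python dict.
# Field names and values are invented for illustration only.
manifest = {
    "name": "frobnicator",
    "version": "1.2.0",
    "kind": "binary",                       # or "source"
    "files": ["bin/frobnicator", "share/frobnicator/ui.xml"],
    "dependencies": [
        {"name": "libsexy", "min_version": "1.3"},
        {"name": "glibc",   "min_version": "2.3"},
    ],
}
# Each distro's package manager would read this common manifest and map it
# onto its own database, file locations, and dependency resolver.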
So come on guys, why the hell WOULDN’T this be a good thing? Do you guys really LIKE having all of these disparate systems? I thought that one of the mantras of open source was about interoperability. You guys need to cease with the pissing contests and come together to make things work together.
The testing and whatnot still needs to be done for each distro, so I’m not sure where the colossal waste really is.
The problem is that you have so many distros that are spending time packaging the same apps, thereby duplicating each other’s efforts. I suppose this isn’t really a problem, unless you’re one of the volunteers doing this and you suddenly realize what a colossal waste of time and manpower this is.
It’s not that much manpower. Do you hear any of the distros complaining about how hard it is to maintain packages? No you don’t. It’s actually quite easy for a single person to maintain hundreds of packages within the Fedora repo, for instance, in their spare time. And there’s no shortage of manpower; in fact, there are more people knocking on the door all the time wondering if they can participate in maintaining a package or two as well.
The world just doesn’t need a new package manager to rule them all. If a single package manager was ever going to work, .deb or .rpm or one of the countless others would already have assumed that role.
People just have to stop wringing their hands that the Linux world doesn’t work exactly like the Windows world. In Linux, casual users are meant to get their software from a distribution, not from random tarballs on the Internet. Application developers that want to see their apps in the hands of more users simply need to ask the handful of dominant distros to include their app.
This issue is just blown out of proportion almost every time it’s brought up. The very fact that autopackage was DOA should make people reevaluate their ideas about the supposed problem. At the very least, it should be obvious that the autopackage “solution” wasn’t the correct approach.
It’s not that much manpower. Do you hear any of the distros complaining about how hard it is to maintain packages? No you don’t.
Yes. It’s difficult enough getting the thousands of packages of a specific version maintained and into a distribution, but what isn’t addressed is how you can quickly and easily install a new version when it is released. For this to work, when a new version of a distro comes along, new packages are built for it, and in addition, backports for previous versions need to be updated and maintained. It is untenable.
The only place where the backporting of software isn’t a real mess and can be done reliably is in Gentoo. Because it is source based, if there is an ebuild you simply install it, it compiles on your system, and you’re there. However, it still doesn’t solve the problem of the availability of a wide variety of third-party software.
For example, I do some Ruby development on a Linux distribution, which is very nice. However, when a new version of Ruby is released I’m either going to be stuck with the version that is built with my distro, or I’m going to need to find a backport repository somewhere or a set of packages or build it myself. I have this problem with Bacula as well. When you go beyond what’s in the core of the distro it doesn’t work.
Now imagine if you didn’t have to maintain a good portion of the packages you currently do, and users could simply download and install them in any way that they wanted. Imagine what you could all then concentrate on instead.
Application developers that want to see their apps in the hands of more users simply need to ask the handful of dominant distros to include their app.
They’re not going to do that, and they simply can’t. End of story. Application developers and ISVs have enough to do as it is, and anyone who doesn’t understand this doesn’t understand ISVs or application developers.
The very fact that autopackage was DOA should make people reevaluate their ideas about the supposed problem.
There is not a supposed problem. There is a problem. Just ask VMware or anyone trying to write software for a Linux distribution.
The reason why something like Autopackage hasn’t been picked up is simply because the distros want to live in their own little worlds, and they still believe that application developers out there are going to develop packages for their own little package systems.
Now imagine if you didn’t have to maintain a good portion of the packages you currently do, and users could simply download and install them in any way that they wanted. Imagine what you could all then concentrate on instead.
Go to the Java website, click on the Java 32bit link, run the download. Look at the nice GUI installer. It’s the same bloody thing as what you get on Windows. What’s the problem exactly?
The reason why something like Autopackage hasn’t been picked up is simply because the distros want to live in their own little worlds, and they still believe that application developers out there are going to develop packages for their own little package systems.
Well that’s one reason. And it’s not going to change, and that’s why autopackage was a wrong-headed approach doomed to failure from the start. If everyone could agree on a package manager, there’d be no reason for .deb, .rpm and whatever Gentoo uses, etc., to have existed in the first place.
>> The problem is that you have so many distros that are spending time packaging the same apps, thereby duplicating each other’s efforts.
>It’s not that much manpower.
I think that you’re underestimating it. Ask Debian packagers the amount of effort used.
The reason why this doesn’t happen is that distributions don’t really want users to short-circuit them; they want to be the sole providers of software.
Wasting a lot of effort on stupid incompatibilities in the process, bah.
I think that you’re underestimating it. Ask Debian packagers the amount of effort used.
All I can tell you is that in Fedora, it’s really not a big deal. It’s not even on the radar as being an issue worth thinking about. There are more than enough hands to handle the work.
The reason why this doesn’t happen is that distributions don’t really want users to short-circuit them; they want to be the sole providers of software.
Wasting a lot of effort on stupid incompatibilities in the process, bah.
Let’s assume you’re correct. Then you need to come up with a solution that doesn’t rely on those people to implement it. That’s why CNR has a much better chance of offering a workable solution than autopackage ever had. And I’m saying that as someone who isn’t particularly fond of CNR or the reasons for it.
Instead of complaining that there aren’t enough developers for autopackage or that it didn’t get enough love from the distros, you should take it as a BIG hint that it wasn’t the correct approach from the start.
Yea, but come on. We can *always* use another layer or two of abstraction!
why the hell WOULDN’T this be a good thing?
Because it doesn’t solve the problem. The problem isn’t the package format. The distributors couldn’t care less about that. If you took a look at the spec for .deb and .rpm, you’ll find they’re nearly identical in most respects. They could pick one and create a common file format, but they don’t because that doesn’t solve the problem (it actually makes it worse).
The complexity is in the package manager itself, as well as in the metadata maintained both locally and globally to coordinate the proper installation of packages. This is why a .rpm for Red Hat doesn’t work on SUSE. Same format, but they contain different metadata inherently linked to the repository they came from and the system on which they’re intended to install.
A software package isn’t anything like a document. There are complex relationships amongst packages and the system they share that just don’t exist for other data types. The fact that we have incompatible document formats is inexcusable. That we have packages that run on one system but not on another is a fundamental challenge. You can’t just pull out a buzzword like ODF and expect a silver bullet for package management.
Instead of standardizing the package format, we need to standardize one or both of the following:
1) A stable userspace ABI for the life of the package
2) A cross-distribution build system
The former is the approach taken by Microsoft and, to a slightly lesser extent, Apple. This means that we would build packages against a certain set of shared libraries that aren’t statically linked into the package, from glibc on up to libsexy. OK, maybe libsexy gets statically linked. From then on out, we can’t touch these libraries without rebuilding every package built against them. You can only add, you can never take away. You end up with cruft on top of cruft and vendors that choose to only use the lowest common denominator because otherwise their products might not work on some of their customers’ systems. Welcome to Windows.
The latter option has never been attempted on a massive scale, but if anyone can pull it off, it’s the Linux community. The idea is to take what makes package installation different from one distribution to the next and push it up into the source tree for each package. Today each distribution maintains its own packaging metadata for each package and attempts to keep it up to date when a new version is released from upstream. What if the distributor could submit a patch to the upstream project that allows the package to build natively for their system when passed the appropriate build flag?
The key to getting this to work is to first select a cross-platform build system (such as CMake) and then to take it to the next level–to achieve the goal of a cross-distribution build system. If building packages for a particular distribution were as easy as running CMake and selecting the Ubuntu_6.10_amd64 target (for example), then providing binary packages would be trivial. But this is just the mechanics. The workload is also better distributed because the upstream developer maintains the distribution-neutral build configurations (such as library dependencies), while the packagers can submit patches (CMakeLists, for example) when their requirements change (such as paths changing).
Linux is a source-based operating system by nature. Package management is an abstraction layer that seems to offer binary compatibility. The key to making this abstraction seamless is to manage the complexity in the build process itself, rather than to rely on external trickery. What holds us back is our obsession with package formats (as reflected in the parent post). The one true universal format for OSS packages is the source tree, and the one true universal packaging tool is make. Once we understand this, our distributors can easily build Slackware-style tarballs–and so could relatively novice users.
The complexity is in the package manager itself, as well as in the metadata maintained both locally and globally to coordinate the proper installation of packages. This is why a .rpm for Red Hat doesn’t work on SUSE. Same format, but they contain different metadata inherently linked to the repository they came from and the system on which they’re intended to install.
For someone who is not intimately familiar with the underpinnings of a package manager, maybe you could break this down for me in English?
A software package isn’t anything like a document. There are complex relationships amongst packages and the system they share that just don’t exist for other data types.
I would think that the package shouldn’t care so much about the system/distro it’s being installed on. For example, when an app is packaged, there is some metadata in there that says this application needs libsexy3.5 to work. At this point, it is really up to the individual package managers to decide what to do about this. Does the package manager resolve the dependency automatically or leave that up to the end user? And where exactly does libsexy go? The package manager would know this information, but the package does not and should not care. It just says “put this wherever libraries go on this distro …” In other words, the complexity is still in the package manager, but the packages themselves are put together in a standard way so that an app only needs to be packaged once and then each distro/package manager can deal with it however they see fit.
EDIT: Just in case I haven’t made myself clear on exactly what I’m talking about, let me illustrate with an example. I’m going to use some Windows terminology because that is what I’m most familiar with:
So you write an application that requires 3 files to run:
app.exe (the application itself)
app.dll (an application dll)
external.dll (an external library)
So you stick app.exe and app.dll in a package (I’m talking a binary package here), and in the package metadata, you specify that the external.dll library is needed to run the application. In our example, this particular application was compiled against version 1.3 of the external.dll library. So somebody attempts to install this package on a distro, and the package manager notices that version 1.3 of the library is required. So it checks the system and notices that version 1.7 of the library is installed. So, what to do now? Well, as part of our new standard, each package is required to specify whether it is backward compatible with itself, and what is the oldest version that is compatible with this latest build. For external.dll, the package manager looks through its database (which could be anything from an XML file to a MySQL database) and sees that version 1.7 is not backwards compatible with anything prior to version 1.5. So, in this case, the package manager knows that it’s gonna need to automatically resolve this dependency and download version 1.3 of external.dll and install it alongside 1.7. Well, I dunno if you can run Linux libraries side-by-side, or if you run into open source DLL hell, but I’m sure you get the idea. The package does not care whether or not the right library is installed already .. it just dictates what is needed and lets the package manager do the rest.
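To illustrate just the version check described above, here is a minimal sketch in Python. It assumes the package manager keeps a table of how far back each installed library is compatible; the function names are invented and this is not any real package manager's logic.

# Illustrative only: decide what to do about the external.dll dependency.
def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

def resolve(required, installed, oldest_compatible):
    # required: version the app was built against (e.g. "1.3")
    # installed: version already on the system (e.g. "1.7")
    # oldest_compatible: oldest version the installed build still supports
    if (version_tuple(installed) >= version_tuple(required)
            and version_tuple(oldest_compatible) <= version_tuple(required)):
        return "use installed copy"
    return "install requested version side by side"

# The example from the post: the app needs 1.3, the system has 1.7,
# but 1.7 is only compatible back to 1.5, so 1.3 gets installed alongside.
print(resolve("1.3", "1.7", "1.5"))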
As you can probably tell, I am not an expert on Linux packages .. just trying to understand why something like the above wouldn’t work.
Well, your example is a bit like how Autopackage works. Autopackage can install several library and application versions next to each other, but normally that isn’t really possible. Besides, each library and application contains functionality which can be turned on and off at compile time – so you’d also have to specify that. And there is the problem that the compiler (gcc) now and then breaks binary compatibility, so all packages need to be rebuilt. All this leads to a very difficult situation if you want to install a certain package – there are a million reasons why it won’t install/work. So your distribution still has to test everything and ensure it works, and there goes the advantage.
At the same time, a coherent system, built around certain versions of certain packages, is better for performance (if you run 4 applications, you don’t want them to each load a different version of libsexy).
I think the ‘best’ solution would be source-based package management, if you’d go for package management at all. I think Klik can fill the void Autopackage wanted to fill. It’s simpler from a user’s perspective, works reasonably well, etc.
Though I love zeroinstall’s ideas.
Just a thought – I’m no expert or anything, but wouldn’t it be better to have
app.exe
app.dll
external-1.3.dll
install all of them in the app directory, and at runtime load external-1.3.dll only if it isn’t loaded already?
The disadvantage would be an obvious waste of space and bandwidth – but seeing that broadband and >250GB hard disks have become quite common, it wouldn’t be such a big issue.
The advantage – simple and universal packaging.
I think the disconnect here is that you’re thinking in terms of upstream components that are only available as binaries. These already play by Rule 1, where they are explicitly linked to certain versions of certain libraries. On such platforms, backwards compatibility is always maintained. They only add new functions. If they really need to break backwards compatibility, they split the old library off on its own and give the new library a new name.
For example, every Windows system has a kernel32.dll. You’d better believe that every library routine in kernel32.dll from Windows 95 and possibly earlier still works on Vista just as it did then. On Linux, our core library is called glibc. Its current version (from the perspective of compatibility) is 2.5.x, the sixth break in compatibility since 1994. Each time it breaks, everything must be rebuilt.
Now, 90% of my comment relates specifically to building binary packages from source. So the answer to your question is that your idea won’t work because upstream developers don’t distribute binary packages for all distributions, and maybe they don’t provide binaries at all. The unique problem we have with Linux is that we only have source compatibility (at best). We need to find a way to shepherd packages from the source tree to the user’s filesystem. We don’t have the luxury of binaries that just work.
I hope you’re gaining a better perspective on Linux package management. I’m glad that you’re really making an effort to understand and “reach across the aisle.”
I would think that the package shouldn’t care so much about the system/distro it’s being installed on.
For example, when an app is packaged, there is some metadata in there that says this application needs libsexy3.5 to work.
That is because there is a new version of libsexy installed on that system that is either incompatible with the old version or obsoletes it. The workaround is to package a legacy version that contains the old library libsexy under a different name. gcc is one example.
Another way is to contact the main developers of that application to inform them of the problem so they can fix it. If that takes time, maintainers can choose to create a patch that fixes the issue and incorporate it into the source package for rebuilding.
If there are additional dependencies specific to the distribution, maintainers will be informed via bug reports.
OK, here we go in my best attempt at plain English…
If we had a universal package format, it wouldn’t contain as much information as a system-specific package. More complexity gets pushed down into the packaging tool. This much is pretty clear. The next logical step is that since the package manager is still system-specific, and it needs to be smarter to understand the universal package, we end up with more differentiation between the systems and more duplication of effort. That’s what we want to avoid.
What we want is to make the package itself smarter, but we want to do this in a way that distributes the effort evenly. In other words, we don’t want the packager for each system to have to work independently to get the package to work on their system, and we certainly don’t want the upstream developer to have to independently craft a package that works on all systems. We want the information that enables the package to install on all systems to be located in the developer’s source tree, where each system’s packager can inspect the work of the others and modify it to work with their system. We also want the information common to all systems to be logically separate from the system-specific information, which abstracts the system-specific stuff from the developer (who isn’t an expert in all systems).
By making the package smarter, and by integrating what we know today as “metadata” into the source tree, we can make the packaging tool and package manager decidedly dumb. Perhaps this is the key to eliminating what differentiates today’s Linux distributions, thereby encouraging consolidation.
The other concept I didn’t explain very well is that the complexity isn’t really in the package manager, the tool you use to install packages. The complexity is in the packaging tool, the utility that creates the package with its associated metadata. The metadata makes the package manager’s job doable. As you’ll see in a moment, the packaging tool as we know it today often can’t be fully automated.
To address your questions specifically: The problem isn’t where libsexy should go. Assuming libsexy is an external dependency, the package doesn’t come with it, and it doesn’t know how to build it. It relies on the fact that the proper version of libsexy is installed on the system, and it needs to know exactly where it is. This may be different from one system to the next, or even between different releases of the system.
You’re right that the package doesn’t need to explicitly include information on where to find libsexy on a particular system. The packaging tool can somehow become aware of the fact that the source package requires libsexy and tell the build system where to find it. This is how we do things today. It’s fine, but it means that the packaging tool can’t be dumb and just tell the source package to build. If it can’t detect all of the requirements automatically, then a human needs to step in and give it the old clue-bat. Lots of system-specific complexity and human intervention just to build a package that only installs on a single system.
Whereas in the techniques I described, we move the human clue-batting into the source package itself, where requirements common to all systems don’t have to be replicated. For example, the developer’s section of the build configuration says we need libsexy-2.0.0 or greater. Every system now knows this requirement. All the system packagers need to do is say that libsexy is located at /usr/lib/libsexy.so (which is a symlink to libsexy.so.2.0.3). In addition, because it leverages the build system, the packaging tool is not only automatic, but it’s also really dumb. Dumb to the point where it would be pointless for each system to implement their own.
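As a toy sketch of that division of labour (everything here is hypothetical: the distro names, the paths, and the idea of expressing it as two Python tables), the distribution-neutral requirement lives with the developer and only the tiny system-specific part lives with each packager:

# Illustrative only; not a real build or packaging tool.
# Shipped by the upstream developer, identical for every system:
developer_requirements = {"libsexy": ">=2.0.0"}

# Maintained by each system's packager, one small entry per system:
distro_overrides = {
    "exampledistro-1.0": {"libsexy": "/usr/lib/libsexy.so"},
    "otherdistro-3.1":   {"libsexy": "/usr/lib64/libsexy.so.2"},
}

def build_flags(target):
    # A deliberately "dumb" packaging step: it just joins the two tables
    # and hands the result to the build system.
    flags = []
    for lib in developer_requirements:
        flags.append(f"--with-{lib}=" + distro_overrides[target][lib])
    return flags

print(build_flags("exampledistro-1.0"))   # ['--with-libsexy=/usr/lib/libsexy.so']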
See? This is starting to look like a universal packaging system. We’re moving as much complexity as possible out of each system.
The remaining item is the package manager itself. As I mentioned in the previous post, such a packaging system would likely build a plain old tarball, like Slackware has done since time immemorial. We’ve done a pretty good job of removing complexity from the system, but there are still a few things we need to keep system-specific.
First is package naming and aliasing, which is inherently tied to the second–package dependencies. One of the fundamental differentiators between systems is their focus on the particular needs of their target markets. Perhaps no singular technology expresses this idea more than the naming of packages and the aliasing of package options and package groups. Packages are given names that specifically identify their source and version, i.e. gcc-nopie-3.4.1. This might be one of several packages that satisfy the alias gcc-3. This might be part of a group alias called gnu-toolchain. Similarly, there might be an alias called lamp that includes apache, mysql, php, and associated packages or aliases. Most systems will have a group that contains all of the packages in the default install and another with all of the packages installed on your system. Each system should be able to handle this in its own way, addressing the needs of its userbase.
Everyone knows what dependencies are, but the point is that dependencies, um, depend on the choice of names and aliases employed by the system. Dependency resolution is a really simple problem once you have a complete “graph” of the dependencies for every package and alias available from the selected repositories. No, I’m not doing away with repositories, and I’m not doing away with dependencies. Because we’ve stripped all of the package-building complexity out of the system, we’re left with a simple graph traversal problem to figure out the correct ordered set of packages to install/update to satisfy a request to install/update a given set of packages or aliases.
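As a small illustration of that traversal, here is a sketch in Python using the lamp example from above; the dependency edges are invented, and real resolvers also handle versions, conflicts and aliases, which this ignores.

# Illustrative only: emit packages in an order where dependencies come first.
deps = {
    "lamp": ["apache", "mysql", "php"],   # a group alias, per the example above
    "php": ["apache"],
    "apache": ["openssl"],
    "mysql": ["openssl"],
    "openssl": [],
}

def install_order(requested, graph):
    ordered, seen = [], set()
    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in graph[pkg]:
            visit(dep)          # dependencies before the packages that need them
        ordered.append(pkg)
    for pkg in requested:
        visit(pkg)
    return ordered

print(install_order(["lamp"], deps))
# -> ['openssl', 'apache', 'mysql', 'php', 'lamp']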
From there it’s just a matter of extracting the package tarballs onto the filesystem in this order. All of the little stuff that today’s package managers do, like extracting the man pages and putting them in the right place, has already been done during the build process. Everything is exactly where it should be in the tarball.
In a way, we have to come full circle, back to a time when Linux software was distributed as tarballs that you simply extract and run. The time of Slackware. Before Red Hat came along and decided we needed big complicated metadata-based packaging systems. To this day I believe that Red Hat’s intention with RPM was to break commonality with the existing distributions and build packages that could only be easily installed on Red Hat Linux. In the process they fragmented the community and created a compatibility nightmare. It’s about time we fixed this, and to do that we need to make our package managers as dumb as possible.
I still contend that what is needed is an ODF-style standard for packages, containing all the files needed (source and/or binary) along with descriptions of what the name of the package is, what version, what the dependencies are, etc.
It is already available. For example, a source RPM provides both the tarball and the spec file used to build a binary RPM. A binary RPM can be extracted to get the changelog and other binaries. The spec file contains all the information about dependencies, the changelog, and the version of the package, to name a few things. Maintainers can easily take an openSUSE spec file from a source RPM and adapt it for Fedora. Packaging from Debian to its derivatives follows a similar process.
Remember, all these different distributions came from the same origins (Red Hat, Debian, Gentoo and Slackware).
Open source isn’t really about interoperability. It’s just a nice side effect of having free access to the source code.
Packaging isn’t duplicated effort. Each package manager is different for a reason: they each offer a different take on the idea.
Different distributions have different default settings in their packages. This is what makes each distro unique.
The great thing about Free Software is that interoperability is guaranteed by its nature, which leaves us free to experiment.
Once you get tied to a single package manager, all innovation in the area of package management stops.
I like having many different package managers
I like having many different and unique GNU/Linux distros
I like having hundreds of different window managers, media players, scripting languages, Kernel patchsets, text editors. Because the more there are the better chance I’ll find one that I love working with.
That’s why I love Free software.
It’s not really a problem for me at all.
I just don’t understand what is so “innovative” about one package format compared to another, aside from maybe speed (because at this point the Smart Package manager does probably the best possible job of finding dependencies, and is equally good at doing it regardless of format).
All the packaging wars end up doing is stymieing diversity by limiting users to only those packages that are “supported”. If you as a user want to install a new app or a new version of an existing app that uses a bunch of (versions of) libraries that you don’t have installed, you’re basically screwed–and so is the software developer who wants to get his app widely tested, used and recommended. So the use of new apps, and thereby the development of those apps, as well as the use of new libraries (and the further development thereof), are stymied.
I agree with you about Linux distros helping innovation and diversity, but the fragmentation of package formats and repositories is a whole different story. Let package managers be reserved for system software. But don’t throw out the notion of cross-distro compatibility in the name of innovation. Because most of the innovation in Linux distros does NOT occur in or depend on the package manager, EXCEPT in the case of system software.
> So what was the problem again?
The problem was (is) that Linux is a PITA for an ISV.
You may personally have issues with the ‘V’ bit, but having an ISV ecosystem provides choice to users, even if you don’t personally want it.
The problem was (is) that Linux is a PITA for an ISV.
Well if _that’s_ the problem, then something like the CNR announcement from Linspire makes a hell of a lot more sense than autopackage.
Your suspicion was right, neither of them interest me personally, but at least CNR has a chance of working for the people who actually care.
Well if _that’s_ the problem, then something like the CNR announcement from Linspire makes a hell of a lot more sense than autopackage.
No it doesn’t, because all CNR is, is yet another repository-based system with yet another, albeit slightly nicer, front end. It solves zip. Nothing.
ISVs want to be able to build general-purpose installation packages that you can download from their web site or install from a CD. It’s also very much what users want. No one wants to wait six months before that particular version falls down into a repository. And no, backports don’t work, because you have to wait until it drops into that repository as well.
Package repository systems do not work when you want to get the latest version of a piece of software installed and working, or when you have general-purpose software to install where it isn’t worth getting every single damn distribution to make packages and get it into their repositories. What about closed, commercial software that simply cannot be packaged up for every single distribution? Take a look at the trouble someone like VMware has.
Just don’t even bother trying to claim otherwise, because this doesn’t work, it has been consistently shown that it doesn’t work, ISVs have consistently spelled out that this is a PITA for them, and every single distribution from Red Hat to SUSE to Ubuntu still believes that people are going to create specific packages just for them.
One *REALLY ANNOYING* side effect of all this is that you never know where anything is going to be put on your system. I have binaries installed in:
/bin
/sbin
/usr/bin
/usr/sbin
/usr/local/bin
/usr/local/sbin
/opt
...(any others that I’ve forgotten)...
I understand the rationale behind most of these different paths, but it’s still a PITA.
And because I’m an Ubuntu junkie, self-compiled libraries usually dump themselves in /usr/local/lib, which Ubuntu doesn’t include in its PATH, so I usually have to re-compile* with a different prefix.
I work across about 10 Ubuntu machines, and the effort of adding the /usr/local/… paths to all the environment vars on all of the machines is just too much.
* obviously, I usually don’t HAVE to recompile, but it is safer IMO to let the make scripts do their own thing, rather than try to manually install some libraries.
Have you tried Gobolinux?
http://www.gobolinux.org/
Whenever there’s a discussion about package management, I can’t resist bringing it up.
Essentially, it’s a wholly different filesystem structure, with one directory for each program… and a shared library is a program in its own right. Some shell and Ruby scripts do the magic of creating the right symlinks so that programs can (indirectly) link against those libraries. It’s based on “recipes” (something like ebuilds, but much smaller and simpler). You only need the URL from which to download the tarballs (for instance, from SourceForge) and the scripts do the rest.
Recipes for autotools-compliant programs are very short. For the oldest programs, in the worst case, the traditional FHS directories, with symlinks in them, are preserved, but hidden by a kernel patch (GoboHide) so that you don’t see them even in the command line. You can unhide them easily if you like, but you don’t have to, there’s nothing in them but symlinks.
Of course, I can’t say that everything you compile works without glitches, but I think it’s worth trying it out, especially if you do a lot of manual compilation. At this moment I have it in my disk along with other, more mainstream distros.
Yeah. That sounds good. I like the Lighting reference too.
One (albeit small) drawback of that is it sounds like you have to compile everything. Could be kinda slow when doing a big system update? Maybe I’m wrong.
Also, I still can’t consider the idea of interpreted languages doing sys-level operations without a bit of an involuntary shudder, but that is just my prejudice showing.
If I get the time, I’ll give it a try. Unfortunately, my servers are tied to Ubuntu, but I have a Linux virtual machine that could cope with it.
“One (albeit small) drawback of that is it sounds like you have to compile everything. Could be kinda slow when doing a big system update? Maybe I’m wrong.”
Well, there are repos for packages as well as for recipes, and you can have local recipes (and packages, I think). There’s a nice GUI front-end (called Manager) for the package installation and recipe compilation scripts. So, if there is no package for the app you want, there may be a recipe, and if not, you can make a recipe with the MakeRecipe script and the URL. Then you can easily contribute the recipe upstream.
Edit /etc/ld.so.conf to include the /usr/local/lib library path; no need to rebuild.
Sloppy, yeah, and what you’re mentioning here is at least as important as all the low-level technical stuff discussed elsewhere in this thread. Adherence to the FHS and things like (the location of) menu/desktop icons, preference files, etc., are at least as important as whether or not library X of version Y is installed and/or backwards compatible.
Edit /etc/ld.so.conf to include the /usr/local/lib library path; no need to rebuild.
Thanks for that. I didn’t know; I figured that ld wouldn’t have its own config file.
Of course, I then need to find where Zsh and Bash define their PATHs at the system level (/etc/zProfile?) and change them to include /usr/local/bin, or else the utilities won’t work and the autoconfigure utilities won’t be accessible.
‘Tis easier, IMO, just to try to remember to customize the prefix every time.
/etc/profile is pretty standard for such stuff, or /etc/{bash,z}rc. Simply having a look at the relevant manpages should tell you where these shells look for their (global) profiles.
I don’t see how recompiling every package with a new prefix is somehow “easier” than editing two or three files.
Oh, and you might need to run `ldconfig` after editing the ld.so.conf file.
Thanks for the info. I guess I should read up about these things at some point.
I don’t see how recompiling every package with a new prefix
It’s not every package; it’s only when I’m compiling a non-standard (usually) library from source that I have to remember to change the prefix.
I also (arbitrarily) don’t like using the /usr/local/ directory, it seems too much like overkill for me.
So, what’s your solution?
Actually, there was a discussion about this a while ago, (look in the archives). The answer was smart-symlink-directories. Special dirs that contain symlinks to various folders in a clever fashion, (I think MS called them shim-layers in Vista).
So there is a folder called, say, /bins which contains links to all the files in /bin, /usr/bin and /usr/local/bin. Duplicate filenames in /usr/local/bin mask files in /usr/bin, which mask files in /bin.
The destination of newly-created files can be determined on a per-user basis. So if a local ‘root’ creates the file /bins/test, then it goes into /usr/local/bin/test. If a hypothetical ‘domainroot’ user is set up, then when it creates the file /bins/test, this gets written to /usr/bin/test.
The end user just has to worry about one folder: /bins
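A rough sketch of that masking rule, assuming the merged view were built in userspace (the directories follow the example above; everything else is invented, and the per-user write policy is ignored):

import os

# Illustrative only: build a merged /bins view where later directories
# mask earlier ones.
SEARCH_ORDER = ["/bin", "/usr/bin", "/usr/local/bin"]   # lowest to highest priority

def merged_view(dirs=SEARCH_ORDER):
    view = {}
    for d in dirs:                        # later dirs overwrite (mask) earlier entries
        if not os.path.isdir(d):
            continue
        for name in os.listdir(d):
            view[name] = os.path.join(d, name)
    return view

# merged_view()["test"] would point at /usr/local/bin/test if it exists there,
# otherwise /usr/bin/test, otherwise /bin/test.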
A much simpler solution is to properly standardize the folder structure across Distros, but I can’t see that happening anytime soon.
So, what’s your solution?
Distributors have a set of core packages in their distribution that make up their system as shipped, and then an application installation system, like Autopackage, is shipped and used to install packages on top of that, packages which any application developer can create in a general-purpose way. You then integrate that with your development tools for easy use. In terms of that application installation system, users can download packages themselves, like on Windows, or specify a repository so they can get decent automatic updates if they want.
I’d also like to see some form of Inno Setup-style system so that we can at last get proper graphical (or text-based) installation packages.
Now think about this. Yes, Autopackage has manpower problems, mainly because it isn’t being used, so the last point above, making Autopackage a general-purpose and full-featured installer, isn’t happening. Imagine if you could use Autopackage to get ISVs from everywhere creating packages of their software nice and easily for Linux-based systems, and then, through that increased popularity, people like the Inno Setup guys think “Wow, let’s get our stuff involved in that”.
I’d also like to see some form of Inno Setup-style system so that we can at last get proper graphical (or text-based) installation packages.
For Pete’s sake, there are already numerous options to choose from. Here’s one commercial option which lets you create an installer for “Windows, Mac OS X, Linux, Solaris, AIX, HP-UX, NetWare and more”
http://tinyurl.com/lwhjg
And yes, there are open source options of varying quality, and perhaps Inno Setup will add Linux to their open source offerings as well. But it’s not like this is a feature that can’t be purchased _today_ if an ISV wants it.
For Pete’s sake, there are already numerous options to choose from. Here’s one commercial option which lets you create an installer for “Windows, Mac OS X, Linux, Solaris, AIX, HP-UX, NetWare and more”
If you don’t know what it’s like to be a third party developer or commercial ISV and you don’t know what they require, then please, just don’t comment.
The kind of things that that package does are not in any way native to any Linux distribution, and it merely plugs into the already existing software installation systems for those systems – and on Linux that means RPM. It does not solve the overhead of creating packages for different distributions, or even different versions of the same distribution, or the lack of graphical installation tools.
Oh, and although this might be a solution for a large company looking to bring some sanity to their software installation processes (which is what this is really targeted at), how does this miraculously become a solution for third-party developers and ISVs whereby distributors can wash their hands?
But it’s not like this is a feature that can’t be purchased _today_ if an ISV wants it.
It doesn’t solve anything for an ISV. I suggest you look at who that product is targeted at and what it actually solves.
It doesn’t solve anything for an ISV. I suggest you look at who that product is targeted at and what it actually solves.
I suggest you pull your head from where it’s lodged and go back and read again that I was responding to the comment that there was a need for something like Inno Setup on Linux.
ISVs will have to come to terms with the fact that there are multiple Linux distributions, and they will have to do some extra work to market to them. All your crying about it without offering a solution does nothing to advance this conversation.
I suggest you pull your head from where it’s lodged and go back and read again that I was responding to the comment that there was a need for something like Inno Setup on Linux.
You don’t know what you’re talking about – like all of those who have criticised Autopackage, or anything else like it, and the real-world problems they have attempted to solve. You’re simply not in that world.
You did get the part where I pointed out that it simply plugs into the installation system of each OS and doesn’t solve the problems within them, right? You can’t simply go frantically surfing on Google for something that doesn’t solve anything.
ISVs will have to come to terms with the fact that there are multiple Linux distributions, and they will have to do some extra work to market to them.
They’re not going to, and it’s really, really, really funny and deluded that you believe that Linux or any little Linux distribution that you use is important enough for ISVs to put individual time and effort into porting to. You also show you haven’t the faintest idea what you mean when you say ‘some extra work’ from an ISV’s perspective.
Besides, ISVs and others would probably come up with a solution like Autopackage, Klik or something similar like Ruby gems – and it would be rejected by the distributors, who of course, know exactly what the real problems are and what’s really required ;-).
All your crying about it without offering a solution does nothing to advance this conversation.
I’m not crying at all – it’s you who is crying. Oh, and I have offered solutions, as the Autopackage, Klik and Ruby gems people have. What can you and the oh-so-knowledgeable distribution developers offer in return? A whinge and a rant from a technical perspective so disconnected from the real world and the real problem that it beggars belief.
This is a real problem for ISVs and developers; someone comes up with a solution like Autopackage, and people like you and others whinge and stamp your feet about it on technical grounds, claiming that the repository system is better for everyone (which it simply isn’t, for reasons consistently pointed out – you’re not in a position to know anything here).
If this issue doesn’t get solved fundamentally, regardless of how many Linux distributions there are, then ISVs simply won’t port their software under any circumstances and users will stay where the software is. Guess where that is?
It’s no skin off my nose or anyone else’s, so feel free to climb off the pedestal and discuss the real problem. Telling everyone that they’re crying and that there are multiple distributions simply isn’t doing it.
It’s no skin off my nose or anyone else’s, so feel free to climb off the pedestal and discuss the real problem. Telling everyone that they’re crying and that there are multiple distributions simply isn’t doing it.
Hey.. you’re the one who has the problem. For my needs open source and the packages I get out of the Fedora repo handle everything I need. So this is your issue not mine.
But instead of recognizing the real-world political and economic realities and coming up with a solution that can work within them, you want to bitch and moan because everybody won’t embrace your preferred solution. Shrug. That seems just insane to me.
Things just aren’t so bad, and seem to be headed generally in the right direction.
Just don’t even bother trying to claim otherwise, because this doesn’t work, it has been consistently shown that it doesn’t work, ISVs have consistently spelled out that this is a PITA for them, and every single distribution from Red Hat to SUSE to Ubuntu still believes that people are going to create specific packages just for them.
ISVs can SPELL OUT whatever the bleep they want. But it’s their job to deal with it. IT’S NOT OUR JOB TO DO THEIR JOB. And there’s no big Linux unification on the horizon; there WILL remain differences between distributions. That’s just a fact of life, and you and they will just have to deal with it.
But… I don’t think you understand what CNR is.
While CNR isn’t perfect, at least it works with Red Hat, SUSE and others and will automatically package up binaries for all those distributions. So ISVs have the option of delivering their latest offerings that way to all of the dominant distros without a big headache. And it’s a single click to install for users.
It’s what you’re bloody asking for, short of every Linux distro standardizing everything (ie. becoming a single distro) which is just a crack pipe dream.
Short of that, any ISV can simply make a GUI installer just like they have to do on Windows. Users download, double click on it, and install just like they do on Windows. NOBODY IS FORCING ISVs to release their software in tarball fashion or as a package, they can release it as a binary installer. Just like Java is released for instance, it’s not that bloody difficult.
Again, people are exaggerating the problem way too much. It’s really not that big of a deal.
ISVs can SPELL OUT whatever the bleep they want. But it’s their job to deal with it. IT’S NOT OUR JOB TO DO THEIR JOB.
Well, many developers seem to disagree here. Just look at the Portland project, for example.
Also, aren’t these blame games rather nonsensical? Shouldn’t this be about how Linux can be improved, rather than about “it’s his job, it’s not, it is, it isn’t”?
ISVs can SPELL OUT whatever the bleep they want. But it’s their job to deal with it. IT’S NOT OUR JOB TO DO THEIR JOB.
Wow. So it’s an application developer’s job to build something that is an integral part of a Linux OS today? Oh, and when people have started to try and solve the problems they are highlighting, like with Klik or Autopackage, distributions still want to ignore the problem and carry on their merry little way.
On whatever level you want to put it, it is a distribution problem.
But… I don’t think you understand what CNR is.
Yes I do.
While CNR isn’t perfect, at least it works with Red Hat, SUSE and others and will automatically package up binaries for all those distributions.
CNR solves absolutely nothing, because specific packages still have to be made for different distributions. Additionally, you’ve also invariably got to package for each individual version of each distro as well. It’s a useless solution – such as it is.
So ISVs have the option of delivering their latest offerings that way to all of the dominant distros without a big headache.
Alas, you still have to get it into the CNR system, as with any repository, and it solves none of the problems of getting your application into the system for immediate install when you create a new version, or the maintenance issues of producing different packages and different packages for different distro versions.
It’s you who doesn’t understand what CNR is, because you don’t understand that it doesn’t solve anything. It is exactly like openSUSE’s build system and solves none of the fundamental issues.
It’s what you’re bloody asking for, short of every Linux distro standardizing everything
No it isn’t, and distros don’t need to be completely standardised. They just need standard interfaces. Besides, you can claim that that isn’t possible, but I’m afraid that’s what’s required to a large extent. No third party packages and developers, no progress.
Short of that, any ISV can simply make a GUI installer just like they have to do on Windows.
There is no support system for doing that as there is on Windows.
NOBODY IS FORCING ISVs to release their software in tarball fashion or as a package
Oh cheers. Thanks. So ISVs and third party developers aren’t necessary? Glad we cleared that one up.
they can release it as a binary installer.
There is no standard means of installing software to any given location, and there is no telling whether that binary installer will be targeting what it actually needs to run.
Just like Java is released for instance, it’s not that bloody difficult.
Java has had no end of installation issues, like every other piece of third party software, which is why people mostly get it with their distribution system these days. Java is even one of the slightly easier pieces of software to install as well, since it has very few dependencies and is, in effect, its own self contained installation environment. When you install JAR files you simply need a JRE or JDK.
Again, people are exaggerating the problem way too much. It’s really not that big of a deal.
————> Problem
————> Your head
You just don’t understand what the problem actually is, as many people really don’t seem to either, and to fully appreciate it you’ll have to spend some time in the real world of ISVs, third party developers and the real world in general. Alas, that is something that few people developing and directing Linux distributions seem to have been in.
You just don’t understand what the problem actually is, as many people really don’t seem to either, and to fully appreciate it you’ll have to spend some time in the real world of ISVs, third party developers and the real world in general. Alas, that is something that few people developing and directing Linux distributions seem to have been in
Sorry, it’s the other way around. The REAL WORLD is the world where there are multiple distributions of Linux with different priorities and configurations. Screaming that it’s all wrong isn’t going to fix anything. There are reasonable practices that can be taken to make it relatively easy to deal with. And to bring this back on topic, autopackage was the wrong approach from the start.
The REAL WORLD is the world where there are multiple distributions of Linux with different priorities and configurations. Screaming that it’s all wrong isn’t going to fix anything.
You’re ass end over heels over this. The fact that there may be multiple Linux distributions isn’t my concern. The fact that there isn’t a standard way for me or any third-party developer or ISV to get their software to their users anywhere is the problem. It really is funny how small some people’s worlds actually are.
Notice those words developers and users, because distributions are used by developers and users. No one is screaming that multiple distributions is all wrong – no one cares (it’s only in your head) – what people want is what I’ve described.
There are reasonable practices that can be taken to make it relatively easy to deal with.
No there aren’t. We’ve been through this.
And to bring this back on topic, autopackage was the wrong approach from the start.
It never was off topic. The fact is, none of the alleged solutions solves anything that Autopackage was seeking to solve.
I notice Debian’s developers have whinged over Ruby’s gem system as well.
You’re ass end over heels over this. The fact that there may be multiple Linux distributions isn’t my concern. The fact that there isn’t a standard way for me or any third-party developer or ISV to get their software to their users anywhere is the problem. It really is funny how small some people’s worlds actually are.
For f–k sakes stop your immature whining and offer a solution that can work in the REAL WORLD where there are competing Linux distributions that aren’t all going to adopt a single package manager and a kernel that doesn’t have internal API guarantees.
Unless you have a REAL WORLD solution, your pathetic whining is POINTLESS. The only comment I’ve made is that autopackage cannot deliver a solution to the problems you’re droning on and on about. Autopackage cannot attract or interest the people it must in order to be successful. NEXT.
For f–k sakes stop your immature whining and offer a solution that can work in the REAL WORLD
Sigh… You’re discussing immaturity now? I suppose Godwin’s Law will apply soon and you’ll mention something about Nazis. We know the discussion is sliding and you have nothing further to say.
I have offered solutions, as have the Autopackage, Klik and Ruby gems people who live in a world where you don’t. All that is offered in return is a whinge and a rant disconnected from the real world.
The only comment I’ve made is that autopackage cannot deliver a solution to the problems you’re droning on and on about.
You haven’t said why at all, and the simple fact is, it could work, and it hasn’t even been given a chance because people want to whinge that it is taking over their own pet package management systems. That’s really what it’s about, isn’t it?
I’m describing problems encountered in the real world by developers and ISVs. You’re whinging that Autopackage and the problems those guys have pointed out are, well, I don’t know really. Apparently they don’t do things in a way which is technically compatible with you, or with people who work on distributions like Debian. Autopackage, Klik and Gems seek to solve and highlight real-world installation problems for ISVs and developers. Go figure.
You haven’t said why at all, and the simple fact is, it could work, and it hasn’t even been given a chance because people want to whinge that it is taking over their own pet package management systems. That’s really what it’s about, isn’t it?
Let’s say you are correct about that, and that’s the ONLY reason it hasn’t been successful. That should be a big hint to you that it was a wrong-headed approach from the very start. Solutions to problems can’t be only TECHNICAL. They need to consider social and economic realities as well. Clearly autopackage, by the nature of its failure to be embraced by the people it seeks to help, wasn’t such a solution.
P.S. Hitler.
I’m pretty sure you referenced the Nazis, so let the argument go, for all our sakes.
I’m pretty sure you referenced the Nazis, so let the argument go, for all our sakes.
I think that all this debate is pointless. You raised good points, but you refuse to see tux68’s counterpoints no matter how he puts them.
The reason that Autopackage is not gaining momentum is that it doesn’t solve any Linux problem, but it has the potential to add Windows’ problems (or inconveniences, if you want) to Linux with no visible gain for Linux at all.
At this point, Autopackage is too alien (more on this later) to the system, and people used to maintaining their systems with a fine-tuned package manager are completely right to want to avoid such a thing.
Besides, I have yet to see a package from an ISV that takes advantage of Linux’s modularity using its intrinsic dynamic linking. They often choose to rely on static binaries with everything statically compiled in, so it doesn’t matter an iota whether the executable file is in $HOME, /opt or /usr/bin for those applications, and therefore they are free to use any GUI installer that pleases them. People have already mentioned several examples of that, so I won’t rehash them here.
On the topic of Autopackage being too alien on the system, it would be nice if Autopackage could at least register its packages with the native package manager (I believe that Smart does that), if there is one, so that the choice to manage that package using the package manager is still available to the user if he/she wishes. I understand that this is not a small task by any stretch of the word, but it is certainly doable, especially if you make it work with all the big distros and that way spread Autopackage to their derivatives automatically, or offer some sort of layered API where the distro developers could “plug” their package managers into Autopackage. If I understood correctly, this is/was on the to-do list, but given the lack of active developers it is not something that will happen anytime soon, sadly.
Native package management integration is a really needed feature that has to be reached before most people will care to give it a shot. I tried it a few months ago, right after they created the Qt front-end, and liked it a lot, but I don’t want to go back to the Windows way of managing installed applications. It is way too… primitive… for my taste.
I’m still interested in Autopackage and will follow its progress closely, as I liked it a lot and definitely see its potential, but integration with the native package manager is a critical feature FOR ME and until it gets there, I won’t touch it for anything other than commercial closed-source packages that use it.
The reason that Autopackage is not gaining momentum is that it doesn’t solve any Linux problem
You’ve hit the nail on the head, especially in how this is perceived. Solutions like Klik, Autopackage and Ruby gems seek to solve ISV, user and developer issues regarding software installation, but in many people’s eyes they do not solve any perceived Linux problem.
but it has the potential to add Windows’ problems (or inconveniences, if you want)
I know many Linux people wander around with a sulk on their face and say to people “I don’t want any Windows installer on my system”, but what problems would a universal, working software installation system cause, exactly? Apart from people actually being able to install the software that they want, and actually buy the software that they want, without having to wait months for it to appear in a repository?
On the topic of Autopackage being too alien to the system, it would be nice if Autopackage could at least register its packages
The problem is that the native package manager manages the core system, and something like Autopackage manages applications and dependencies within the application space. If you don’t have that distinction then the whole problem of what you have in the native package management system and what you have as Autopackages gets… complicated.
I won’t touch it for anything other than commercial closed-source packages that use it.
Why do you think that closed source and third-party developers would want to use something like Autopackage?
I have a feeling we’ll come full circle right there. I never really thought that one issue would kill desktop Linux’s more widespread use stone dead, but this sure is it.
I can appreciate why Mike Hearn threw the towel in and just accepted that real world problems weren’t going to be solved. A bit sad really.
Apart from people actually being able to install the software that they want, and actually buy the software that they want, without having to wait months for it to appear in a repository?
Months? Don’t you mean “days” instead? Exaggerating the minor annoyance of having to wait a couple of days to get a new app doesn’t help your argument.
I never really thought that one issue would kill desktop Linux’s more widespread use stone dead, but this sure is it.
No, it isn’t. It isn’t even on the radar. I have yet to hear a single potential Linux user tell me “I’d use Linux, but I’m not sure I like the fact that it uses a package manager, and that the bleeding-edge version of some software might take a couple of days before it’s added to the repositories.”
I’m sorry, but that’s simply ridiculous.
It’s simple, really:
Popular/currently maintained applications are updated often and are easy to install (getting easier still with services like CNR). Newbies want popular applications. No problem here.
Obscure/unmaintained/obsolete applications might not be available or updated often. People who want these applications are usually more advanced users, who are ready to take the risk of compiling from source. No problem here.
Commercial software can either provide packages for popular distros (i.e. Ubuntu, RedHat, SuSE) or provide a standalone installer. No problem here.
A unified installer for Linux is a pipe dream. It doesn’t even exist on Windows (InstallShield, self-extracting EXEs, MSI packages) – there’s no way you could impose a single solution to the chaotic ecosystem of distribution that is Linux.
Linux won’t take off ever, with people as close-minded as you are. Closing your eyes to problems doesn’t make them go away.
Linux won’t take off ever, with people as close-minded as you are. Closing your eyes to problems doesn’t make them go away.
Aren’t you tired of personal attacks? Do they make you feel superior? Because they don’t in any way further the debate.
I’m not closing my eyes on any problem, I *disagree* that they are serious problems, that’s quite different. I base my disagreement on my own personal experience, as well as what I can learn from reading forums, user testimonies, and such.
As an Ubuntu user (well, Kubuntu really, but they’re pretty much the same) I do not feel like I’m penalized as far as software availability goes. I’ve *never* felt that way, not even when I started using Mandrake Linux five or six years ago.
Is the system perfect? No, it isn’t, but it is *good enough*, and it keeps *improving*. It won’t dramatically change overnight, because it is too big to do so. But it is changing and evolving, make no mistake about that.
As for Linux “not taking off ever”, it’s too late to make that prediction. Linux has already taken off. It keeps getting better, and is slowly making inroads around the world. This is not the year of the Linux desktop, but its decade has just started.
Well, I agree with your point, though not really with the way you said it.
However, one shouldn’t forget that this is not only a problem for commercial ISVs but especially for small open source projects. They certainly would benefit from as much exposure as possible, but building easy-to-install packages for all the distributions out there is clearly beyond their resources.
One also shouldn’t forget that people are aware of these problems. Just look at the Portland project, which aims to solve some of the problems ISVs face. Or think about the new initiative by the Linux Standard Base to develop a new, distribution-independent package system.
On a final note, I don’t think autopackage has run into problems because it didn’t address a real problem, but because of a very antagonistic atmosphere from the get-go (no doubt all sides are to blame here) and probably, though I’ll admit that I lack the expertise to make a well-informed judgement here, because of some very controversial design decisions.
I would have modded you up, but you were already at 5.
How open is CNR – would I have to cozy up to Linspire to get exposure? Or involve Linspire to manage which version(s) and patches are available?
Having something which can be controlled and managed locally, to locally set priorities and timescales, is important to an ISV.
I don’t know much about it – we’ll see how it goes. You have to get coverage of the main commercial deployments to be effective. People who pay for Linux also pay for software. People who expect Linux and software to be free are off the radar for an ISV unless they can be hooked with a free teaser – and even then they are just low grade potential leads.
Sure, that is why i have been trying to install vmware player in ubuntu for the last 5 days.
Sure, that is why i have been trying to install vmware player in ubuntu for the last 5 days.
You don’t mention what the actual problem is, but it’s unfortunate you’re having troubles. You’d think VMware could get their act together and make a decent installer.
So what was the problem again?
——————–> Problem
——————–> You
As Mike Hearn said, people like you and the Linux distributors live in what they believe to be their own software installation utopia, and ISVs who make a living out in the real world have real world issues to deal with if people want them to port to their platforms.
Linux distributions are just not designed for those real world problems. Let’s just leave it at that, shall we?
As Mike Hearn said, people like you and the Linux distributors live in what they believe to be their own software installation utopia, and ISVs who make a living out in the real world have real world issues to deal with if people want them to port to their platforms.
Wait…you’re talking about two things here: installation, and porting. If ISVs want a platform-neutral, statically-linked way to install their software, such options are *already* available to them (Loki installer, install scripts, etc.). That has nothing to do with porting the application.
People talk a lot about hypothetical problems, but how many ISVs have actually identified this as a serious problem, which has prevented them from porting their software to Linux? How many can *you* name?
Cry me a river. That whole article goes like “Nobody likes my package manager. It must be the man keeping me down.”
Maybe, just maybe, it was one of the following:
1) Autopackage was ill-conceived
2) A fix for a problem that never existed
3) Who in their right mind wants the added complexity of multiple package managers on the box?
Don’t forget mister Hearn barging in on the distro sites and practically telling the distributions to get rid of their application packages and adopt Autopackage wholesale. At that time Autopackage was barely out of the alpha stages.
Autopackage died because of personalities, not because it was without technical merit. If you hope to replace, you first need to add.
It was never well integrated into any of the systems that used it. It created its own sub menu that kinda clashed with the already present menu system. Plus, the people who actually use Linux generally know how to get software using the native package management systems. Too bad, the idea is cool.
The problem is that autopackage was not installed by default and was not endorsed by the distros — for one reason or another.
The problem is that autopackage was not installed by default and was not endorsed by the distros — for one reason or another.
That’s the sad truth. It’s a very good system, and a promising one. But there’s truth in the fact that it was partly a solution in search of a problem. Sure, there are annoyances with package managers, but the fact is that the package system works, mostly. Recent efforts, such as friendly front-ends (Ubuntu’s Add/Remove Programs app, or soon one from CNR), have started to really address these issues. It doesn’t hurt that Ubuntu, being more and more popular, is starting to be a more attractive distro for ISVs to package for.
the only problem is with package managers that freeze every six months, like Ubuntu. If I want to use the latest Firefox or OpenOffice, autopackage was perfect for that and I’m sad that it’s languishing.
DEBs and RPMs are great for core functionality IMO, but not for independent software vendors or updated versions that haven’t made their way to backports yet (or never will, as Firefox 2 in Ubuntu showed)
Autopackage was also good because you didn’t need to make a deb, rpm, or whatever for every single distro, therefore less duplication of efforts. I remember when Ubuntu was still in its infancy, some packages from Debian wouldn’t work on it, yet the .packages of autopackage would, thankfully. But for smaller distros, autopackage was good for them too.
1.2 was supposed to include updating apps too…
the only problem is with package managers that freeze every six months, like Ubuntu.
Well, that’s why you have the backports directory. 🙂
If developers want to, they can still distribute through autopackage.
The question really is who gets to “package” the software (starting from source): the developer, through standalone installers or solutions like autopackage, or the distros, for use with package managers. In reality it’s a bit of both right now, and that seems to mostly work out.
Is it optimal? No, but it works well enough for most people, so don’t expect it to change radically! More probably it will evolve naturally, using new approaches such as CNR…and there’ll always be the standalone installer (using statically-linked binaries) for those who don’t want to provide distro-specific packages…
Well, as I said, there are lots of issues with that; Firefox 2 and OpenOffice are the ones with big issues in backports. For example, because they don’t ship the Gecko engine separately (yet), Firefox 1.5 couldn’t be upgraded without breaking yelp (the Gnome help system).
I’m not sure what you’re referring to…I run Kubuntu Edgy with backports and I don’t have a problem with either Firefox 2 or OpenOffice. How long did that problem persist?
Also note that you can install both Firefox and OpenOffice with standalone installers…
It was for Dapper Drake or Breezy Badger IIRC
The question really is who gets to “package” the software (starting from source): the developer, through standalone installers or solutions like autopackage, or the distros, for use with package managers. In reality it’s a bit of both right now, and that seems to mostly work out.
There’s also the build service that Suse recently released as GPL to the community. You can set up a build environment targeting multiple distros and multiple versions of each, plug in your source, and it will spit out distro/version-specific packages. If one of the dependencies changes in a specific distro or version, it triggers an automatic rebuild/repackage. Modifications, updates and patches need only be applied to the source to trigger rebuilding/repackaging for all targets.
Front ended by distro-specific repository interfaces, repos can be added to a user’s native package management infrastructure and have dependency-resolution and updates. Or they can just download their distro-specific packages and install them manually but with dependency resolution.
It’s not a silver bullet, and won’t change things overnight, but it does give ISVs a free tool for building and packaging software targeted at the mainstream distros, or any other target the ISVs/community wish to add.
Of course, it could also just be another rejected autopackage-type solution, but at least it’s a solution that integrates with existing distribution mechanisms rather than working around them. And it has been in use by Suse since 10.2, though it hosts packages for non-Suse distributions as well.
It is no wonder that #autopackage on irc.oftc.net is quiet. The official channel is on freenode!
Here’s what I see:
1) Linux distributions are generally apathetic to the likes of autopackage, simply because the package management system is the pride and joy of (and the difference between) this Linux distro and the next one. Autopackage poses a challenge to that, and they’re not happy.
2) The wording of Autopackage’s mission and approach puts off the curious.
3) The two sides simply cannot understand each other. They’re antithetical to each other, kinda like Microsoft and Linux.
4) Numerous design flaws have been cited (“just a collection of shell scripts”). Klik is often cited as an alternative to Autopackage by the begrudging party.
5) For the tarballs which litter the Internet, is there a GUI-friendly solution for installing the tarball? So far, that hasn’t been answered, and the likes of Autopackage only serve to create a new package management system (“reinventing the wheel”), not to solve an already-existing question.
For the tarballs which litter the Internet, is there a GUI-friendly solution for installing the tarball? So far, that hasn’t been answered, and the likes of Autopackage only serve to create a new package management system (“reinventing the wheel”), not to solve an already-existing question.
The Apollon guys had done a reasonably good job with Arkollon…but from what I understand you need to tailor the installer for each program. The issue is that to compile certain apps you need to install a *lot* of development packages.
The reason such a program doesn’t exist is that people who compile from source don’t mind dropping to the command line.
I’m fine with that – I don’t think tarballs are the solution. I for sure don’t want to have to compile every program I download.
But why the need for compiling a tarball?
It’s an archive format, for goodness’ sake, just like Apple’s .dmg or Amiga’s .lha, yet the latter two have GUI-friendly ways of installation (or, in Apple’s case, “mounting” it) rather than compiling, while the tarball has none.
I just don’t get it. Mozilla puts out a Firefox tarball at around the same time that they put out a Mac disk image, but I don’t see the Mac people compiling that disk image, do I?
What to call this: hypocrisy, apathy, double-standardness, stupidity?
Or just the way things are in Desktop Province, Linuxland?
Tarballs are usually used for source packages. You’re just talking about an archived (tarred) standalone installer (the Firefox example you give) or stuff like Klik for archived app “folders”. That’s not a tarball in the usual sense of the word, and you shouldn’t call it that if you don’t want to create confusion.
If you want archived app folders, use a distro that uses Klik. If you want standalone installers (like for firefox), then ask developers to provide them (if enough people do, they will).
What to call this: hypocrisy, apathy, double-standardness, stupidity?
No. Evolution. Evolution isn’t always clean. Sometimes it’s messy. But it adapts…
Or just the way things are in Desktop Province, Linuxland?
Now you’re starting to sound a bit arrogant.
Because Apple is Apple. Amiga is Amiga (sort of). Linux comprises about a dozen major variants that aren’t binary compatible. So, yes, this is just the way things are with Linux.
The very nature of the Linux ecosystem makes it hard for developers to deliver binary packages. That’s why we have distributors that do it for us. That is, they take the source and package it into a binary that has been tested on a particular distribution. The reason it works this way is because maintaining binary compatibility severely limits progress. It decreases the degree to which the community can distribute the workload, since it greatly increases the amount of coordination required to avoid breaking things.
We wouldn’t have a viable desktop OS where one-click software installation works 99.5% of the time if Linux systems were designed to maintain binary compatibility across releases. Maybe the available binaries would install 100% of the time, but there wouldn’t be over 20,000 of them, and they wouldn’t comprise a viable desktop system.
This isn’t apathy or arrogance, this is the reality of our situation. The Linux community is making this happen with a small fraction of the investment made in the leading platforms, often without the cooperation of the software and hardware industries. We made this decision–to be a source-based system at heart–not out of choice, but out of necessity. We spawned multiple Linux distributions in the hope that at least one of them would evolve in the right direction and achieve the goals of the community. One way or another, we will make it economically infeasible for software and hardware vendors to shroud their products in secrecy, and we will deliver a platform that empowers the user.
Why don’t we have standalone disk images for applications? Because we haven’t figured out how it could possibly work for us, and we have mechanisms that are working quite well. Package management used to be a real sore spot for Linux systems, and now it works very dependably. Is it perfect? No. But it’s getting better all the time, and this is remarkable considering that centralized package management is a design so bold and ambitious that no one else dares to do it. As we asymptotically approach perfection, we will have achieved the best software management system ever devised: a comprehensive repository of all known software, each package (un)installable with a single click, with the whole system update-able automatically. I don’t care what you say, a perfected package management system beats a perfected disk image system.
The problem is that Linux was not designed to run like that. The dynamic linker would have to know about every directory that has libraries it will need to load. That kind of defeats the purpose of shared libraries.
This is almost like BeOS: a great idea that deserves more but will never get it. Microsoft killed BeOS, but I think Linux itself is killing Autopackage. It is a shame.
MS didn’t kill BeOS; BeOS killed BeOS. MS has done a lot of things, but you can’t blame them for everything.
You can’t blame MS for… apart from forcing computer retailers not to provide a working copy of BeOS on the computers they sold.
Actually, I’m pretty sure that mismanagement and arrogance on Be’s side were the main reason all Mac users aren’t using BeOS: the price Be was asking was much too high. MS didn’t force the computer manufacturers not to include Be, they just said OEMs that offered other OSes would have to pay more for Windows. Other OSes survived that time, but BeOS didn’t, because they also couldn’t manage the company properly.
MS didn’t force the computer manufacturers not to include Be
The Windows OEM licence EXPLICITLY prevented OEMs from installing or modifying the bootloader. This would have forced any OEM who wanted to include a working BeOS to buy every copy of Windows at the full retail price.
I don’t know about management/arrogance issues because I wasn’t an employee of Be at that time. Were you?
I did however use BeOS around that time, especially with the live CDs that were around. The user-interface quality of the OS was far greater than that of both Windows and Linux at the time (I didn’t use Macs then), and I was majorly surprised when nothing came of it.
“The Windows OEM licence EXPLICITLY prevented OEMs from installing or modifying the bootloader. This would have forced any OEM who wanted to include a working BeOS to buy every copy of Windows at the full retail price.”
This did not prohibit them from selling a computer running BeOS, just one that dual-booted.
“I don’t know about management/arrogance issues because I wasn’t an employee of Be at that time. Were you?”
No, but they were widely reported at the time and after that.
I liked BeOS also, but its lack of apps kept me from using it.
This did not prohibit them from selling a computer running BeOS, just one that dual-booted
Given that OEMs had to pay Microsoft per processor sold, if they wanted to ship BeOS-only machines, OEMs would have to pay both Be Inc. and MS for the privilege.
Of course that’s fair.
Actually, I don’t think that was the case, as some manufacturers at the time were shipping computers with FreeDOS.
It’s a big document, and it’s old, but it’s relevant to the time-frame that we’re talking about. Read this section:
http://thismatter.com/Articles/Microsoft.htm#tq7
autopackage fails because it provides a technical solution to a cultural problem. That’s a common fallacy among technical people.
“Stable ABI doesn’t matter when we have the source.”
This is the prevailing attitude. It is fallacious. But alternative package-managers will not change it.
Either LSB/RPM become of practical relevance or nothing else will.
autopackage fails because it provides a technical solution to a cultural problem.
Very well said.
This is the prevailing attitude. It is fallacious. But alternative package-managers will not change it.
Either LSB/RPM become of practical relevance or nothing else will.
My money’s on debs right now, mostly because of Ubuntu, but also because of the Debian-Ubuntu-Ubuntu-derivatives filiation. RPMs, on the other hand, are used by distros that have grown further apart, and that are quite different from one another by now. In other words, there are far more differences between Mandrake, SuSE and RedHat than there are between Debian, Ubuntu and one of the Ubuntu-based distros.
Look at that: http://devmanual.gentoo.org/ebuild-writing/functions/src_unpack/aut…
It’s all bullshit. Shame on you Gentoo!
I looked at it. I’m still sitting here trying to figure out what exactly you think gentoo should be ashamed of.
Not much of a fan of Gentoo myself.
I haven’t seen this before.
But unfortunately, that’s reasonable, valid criticism of autopackage’s design flaws, point for point.
Autopackage was too focused on creating a package manager. They hoped it would be so great the app devs would distribute in their format. The problem is most devs only distribute in source. They expect the distros to provide packages or their own community to contrib packages (ie wine). This puts the burden on others and thins the AP resources across the internet.
Instead, they should have followed the distro model. AP needs to create a community that provides autopackage packages. Create tools to facilitate it. Look at aur.archlinux.org for a good example. This keeps their destiny in their own hands (as opposed to relying on third-party support) and concentrates resources in one place. Both of those will have an effect on morale.
I really hope they can regain some steam. The current distro/package mess is a real waste of resources. This is especially true for the majority of user apps. Let the distros focus on “core” packages.
When I first responded, I hadn’t read more than the first paragraph as quoted on osnews. I followed autopackage from early beginnings, and had my opinion formed.
Now reading the article in full after the fact, I see Mike Hearn quoted with my exact same words, that’s spooky.
Well, I guess he finally got it. No wonder he dropped out of the project.
Autopackage is a wonderful idea on paper. Unfortunately, it’s crippled by the very thing it’s trying to solve – the horrendous statically linked library (SLL) hell that plagues Linux. Autopackage’s solution (or band-aid, for that matter) is to patch all the dependencies of the given program that’s targeted to be autopackaged so that they are dynamically linked. Problem solved, right? Wrong. They don’t work on many of the distros due to differences in the way each of their filesystems is set up.
godsolete: What’s this SLL hell you’re on about?
Just about everything on my Linux box at home is dynamically linked – this is a big reason why you need a good package manager to resolve dependencies.
Personally, I feel like Autopackage didn’t gain any traction because it’s a solution in search of a problem. There already are plenty of good package managers and frontends. If people have trouble using them, it’s not because Ubuntu provides a bad frontend to it, it’s because they want it to be just like Windows. Autopackage didn’t address that, which IMO is about the only real (albeit large) obstacle package managers face.
this is a big reason why you need a good package manager to resolve dependencies.
No you don’t.
Slackware’s package management is just as good as the ones found on any other distro.
The only difference is checking for dependencies is left up to the user.
http://www.slackbook.org/html/book.html#PACKAGE-MANAGEMENT
Apparently many people in the Linux community think that a package manager must by definition include dependency checking. Well, that simply isn’t the case, as Slackware most certainly does not. This is not to say that Slackware packages don’t have dependencies, but rather that its package manager doesn’t check for them. Dependency management is left up to the sysadmin, and that’s the way we like it.
and IMHO, dependency checking AND resolving is the most important job a package manager has (I can un-tar a package myself, thank you), so Slackware is just fun for self-punishment. But hey, have fun!
I like Slackware a lot even if I don’t really use it that often, but I tend to agree with you. The lack of dependency checking is somewhat embarrassing.
But Slack makes up for that by outlining clearly what the dependencies are on sites such as LinuxPackages.net, so that you know each and every file that you need to download in order to install using pkgtool.
And there is slapt-get and others that are intended to be used as package managers just like on other distros. I don’t know how effective they are compared to their counterparts on other distros, though.
And I really don’t know why, but Slack seems to be fairly popular in my country (Brazil). All the websites somehow related to Linux show a larger number of Slack users than of other distros, with Ubuntu a slight second, followed by the local distros, including Mandriva, which has an excellent Brazilian localization. Perhaps it is not nearly as hard as it is rumoured to be?
And does anyone see a different outcome for CNR? Can the Ubuntu powerhouse hold CNR under its wing and save it from failure?
Tune in next time…..
I never found any use for Autopackage, but their BinReloc library (http://autopackage.org/docs/binreloc/) is quite useful if you are writing cross-platform software in C/C++.
IMHO their biggest accomplishment isn’t autopackage itself, but the fact that they solved many problems like binary relocation.
I’m not working on autopackage anymore because my day job at Google now takes up most of my hacking energy.
It’s definitely not because I suddenly decided autopackage was a bad idea all along.
2007 and people still talking about “how to install software on Linux”… What a joke!
So the Microsoft/OSX software installation world is perfect then? Thought not. At least with Linux, people ARE discussing the issues, rather than ignoring them.
So the Microsoft/OSX software installation world is perfect then? Thought not. At least with Linux, people ARE discussing the issues, rather than ignoring them.
Yes, it’s perfect. And no it’s not easy to install software on Linux, and I don’t see this changing anytime soon.
I suggest you revise the meaning of the term ‘perfect’. Then have a think about:
network software deployment (SUS sucks),
user settings (viz. Vista UAC etc.),
.NET dependency issues,
lengthy unexpected downloads (try to install IE 5.5, as I did for testing purposes; it’s tricky unless you manage to track down a hacked installer),
uninstall hell (less of a problem now we have SFC, but still there).
But you guy(s), in your belief that you have reached perfection, will never progress further.
I however ACTUALLY use Ubuntu, and install software on it. And I find it pretty darn easy. It’s not perfect, but then that’s what progress is for.
On the mountain of perfection, you guys are having a picnic half-way up. The OSS people won’t stop until they reach the bloody top.
Yes, it’s perfect. And no it’s not easy to install software on Linux, and I don’t see this changing anytime soon.
Perfect? Far from it.
I repackage Windows software as part of my job. Trust me, “perfect” is a qualification which most definitely does not apply to installing software in the Windows world.
But hey, it’s not at all relevant how “the other guys” do it. That kind of comparison is generally only used as an excuse to deliver crappy products (the “they do it too” excuse). What’s important is how we want to solve this “problem” for ourselves.
Oh it is VERY easy.
The problem here is not getting an easy way to install applications. There are gazillions (or at least a dozen) easy solutions.
The problem is to reach an agreement across many different distributions with very different goals.
I like the autopackage-approach but it is conceptually and technically incompatible with gentoo – unfortunately. And the same goes for other distributions.
Joe User, you can rant as much as you want to about the superiority of Windows – but it doesn’t make you right.
Autopackage is almost what we need – but not quite, yet. And that’s why it hasn’t taken off, and never will in its existing form/shape. But the concept is great.
Yes, it’s perfect.
No, it’s not. It’s far from perfect, in fact. You’re just used to it.
And no it’s not easy to install software on Linux
Yes, it is. It’s actually easier to install/update stuff in Linux.
I’m kind of amazed people still argue that nobody needs ISVs. There is a lot of software out there with no exact open source replacement, and for which there probably never will be. For example, World of Warcraft (in fact any modern 3D game) or Google Earth.
I guess you can tell the millions of people who use these programs they shouldn’t out of some misguided principle, but you wouldn’t get very far.
I guess you can tell the millions of people who use these programs they shouldn’t out of some misguided principle, but you wouldn’t get very far.
Nobody said there wasn’t a need for ISVs. All I said was I’m personally not interested.
Distro-Agnostic, Package Manager-Agnostic Unified Installation and Uninstallation Interface
– Users download a .install package for their proper distro.
– A file is included in the .install package, we can call it “instructions” for now, specifying whether the file is src, binary, apt, rpm, etc., hell, even Autopackage.
– Gnome, KDE, console interface for installation.
These steps are done in the background, while the user sees a pretty interface telling him/her what’s happening with status bars and such (“Building Application 34%”, “Installing Dependencies”, etc.)
– If binary, it just copies it to where you want
– If file is src, it builds it, installs it
– If file is apt, it does apt-get install. You could technically put all of a package’s required .deb in the .install and specify them all in the “instructions” file. It can also not include them and it’ll just grab them automatically from your distro’s repository if available.
– If rpm, it does rpm -Uvh. It then does the same thing it did with dependencies for .deb
– Registers it all in an Installed Repository file/database. The “instructions” file specifies a name for it. It can also specify where to check for upgrades, which it could do automatically. If you go into the Manage Application item in the menu, it lets you uninstall those files (plus gives you details about them). When you click uninstall, it chooses the method of removal depending on how it was installed (so if you click uninstall on something that was installed as a .deb, it does apt-get remove; if it was from source, it deletes the files it copied).
– Technically, you could run a script to get all your native distro’s repository installed apps listed in the Manage Application window.
Why not just do that? I know there are people denying that there’s a problem, maybe because it isn’t a problem for them, but installation is still a problem for things that aren’t/won’t be included in the repository, whether because the package managers haven’t got around to it yet or because there’s an ideological aversion to those packages. But really, it shouldn’t be the distro paternally deciding what’s good for us; the power should be in the user’s hands. And the user shouldn’t have to worry about opening a console and typing “./configure; make; make install” in 2007; this should all be done automatically. Yet it isn’t.
Maybe someday I’ll get off my ass and try to learn Python, Glade and all that and code the application I’ve outlined above, but in the meantime I have other important things to do, and I’ll just keep on being amazed that nobody has already done what I suggested, but has instead hacked on things that try to do too much and end up simply being ignored.
No need to re-engineer the world… Just work with what you have.
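To make it concrete, the dispatch logic could be as simple as something like this (a rough Python sketch of my own; the .install layout, the “instructions” section names and the paths are all invented for illustration):

# Rough sketch of the ".install" dispatcher described above. Purely
# illustrative: the "instructions" file format is my own invention, and a
# real tool would need far more error handling and a proper install database.
import configparser
import subprocess
import sys


def install(instructions_path: str) -> None:
    # The hypothetical "instructions" file names the payload and its type.
    cfg = configparser.ConfigParser()
    cfg.read(instructions_path)
    kind = cfg["package"]["type"]        # e.g. "apt", "rpm", "src", "binary"
    payload = cfg["package"]["payload"]  # package name, file or source directory

    if kind == "apt":
        # Let apt pull the package and its dependencies from the distro's repos.
        subprocess.check_call(["apt-get", "install", "-y", payload])
    elif kind == "rpm":
        subprocess.check_call(["rpm", "-Uvh", payload])
    elif kind == "src":
        # The classic configure/make dance, driven for the user.
        for step in (["./configure"], ["make"], ["make", "install"]):
            subprocess.check_call(step, cwd=payload)
    elif kind == "binary":
        subprocess.check_call(["install", "-m", "755", payload, "/usr/local/bin/"])
    else:
        sys.exit("Unknown package type: " + kind)

    # A real tool would now record the install in a local database so the
    # "Manage Application" window could later uninstall it the same way.


if __name__ == "__main__":
    install(sys.argv[1])

The Gnome/KDE/console front-ends would then just be thin wrappers around something like this.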
Why not just do that? I know there are people denying that there’s a problem, maybe because it isn’t a problem for them, but installation is still a problem for things that aren’t/won’t be included in the repository
The solution you propose is the very reason I said there is no problem. Repos work great for many things, and GUI installers are possible for everything else.
Now if someone wants to make it easier for ISVs to make GUI installers they should. No reason to bitch and moan that everything is borked. There are already commercial packages available that go a long way toward that goal.
But nothing of the sort is going to help VMware for example if they’re having issues because the internal kernel API changes underneath them. That’s just a fact of life in the real world.
Repos work great for many things, and GUI installers are possible for everything else.
No they don’t and there are no mechanisms for GUI installers as there are on other platforms. This has been repeated again, and again, and again over the past twenty or so comments. I’m not repeating it again. If you don’t understand it, don’t comment, or go away and work as an ISV until you do.
Now if someone wants to make it easier for ISVs to make GUI installers they should.
Yes, it’s called Autopackage, and many shortcomings and problems for third-party developers were pointed out – and people like you bitched that everything is OK.
There are already commercial packages available that go a long way toward that goal.
No there aren’t.
But nothing of the sort is going to help VMware for example if they’re having issues because the internal kernel API changes underneath them.
That’s actually the least of their problems.
It’s amusing how you talk about things being a fact of life – and yet nothing ever gets solved and no solutions are proposed (even though you ask them of others). Sounds like a person I used to work with, or a person that no one could work with, more specifically…
While I tend to agree with you, you forget that quite a few technical criticisms were laid against Autopackage (bad format, reminiscent of shar, etc.): I haven’t seen any answer to them, it seems, except flames.
IMHO, before fixing other problems, one must fix one’s own problems.
When (if ever) Autopackage becomes satisfying from a technical point of view, then it has a chance to be integrated into distributions; before that it has none, working against both societal (“we don’t need ISVs”) and technical objections.
[ Kinda like the LSB, which used (still uses?) compatibility tests that had threading issues: hard to be taken seriously ]
No they don’t and there are no mechanisms for GUI installers as there are on other platforms.
That is incorrect. There are several examples of such solutions, working across several distributions. AFAICT the “problem” is that none of these solutions are particularly wanted by their intended users.
It does indicate the existing solutions aren’t the right ones – or that they aren’t good enough to make people want them more than they want existing solutions.
Personally I agree with the goal of autopackage, and I also consider central repos to be flawed in concept. But they exist, they work fine, and it’s easy to install applications from these repos. So the problem autopackage is trying to solve is probably not big enough to make people use it.
Right, I would have to agree that repos are not themselves the problem. To me, the problem is how user-unfriendly adding/removing third-party applications is right now, particularly applications that are packaged in a format not native to the distro (or simply a tarball that needs to be compiled).
Fix that problem, and you’ve made life much easier for non-hacker types using Linux when it comes to adding/removing apps, regardless of the distro, or the underlying package manager.
The current approach to software in Linux seems to be a sort of all-in-one way of thinking. You install a distro and the distro provides everything you need.
I am currently in a different situation. For now I must keep using an old Linux distro: SLC4 (Scientific Linux CERN 4), which is basically a slightly modified version of RHEL4. The distro itself and the software bundled with it work just fine. However, when I try to install new apps (especially new desktop apps) things become very painful. The problem is that software authors develop their products using the latest libraries. My old distro provides versions of these libraries that are too old, so I should probably install new versions (and their dependencies!) from source (since no RPM package of these libraries exists). I actually did this on my laptop to get the latest version of Inkscape working. I installed Inkscape, libgtkmm (the GTK C++ wrapper), libgtk, libpango, libatk, libcairo and libglib from source. This took lots of time and effort, but finally I got it working… except that by doing this I had broken some existing GTK apps! Ouch! I had installed the new versions into special directories, set LD_LIBRARY_PATH etc. etc. etc., so it was easy to remove the problematic new GTK version and just switch from Inkscape to good old XFig (Athena widget set, usability from the ’80s, but at least it works).
So one problem is that even if one has the necessary skills to install stuff from source (I have used Linux, BSD and Unix for the last ten years and I also work on high-energy and nuclear physics simulation software), the source may not even build on older systems. And of course I have lots of custom stuff installed on my laptop, so upgrading to a new operating system every six months is completely unacceptable!
It’s a tremendous waste of time and effort for so many people to do basically the same thing over and over. And, as for the “distro” differences, I wonder why the core libraries keep changing so much. Why is somebody changing the C libraries even today (especially in a way that affects binary compatibility)? The C language definition hasn’t changed in nearly 20 years. It’s time the GNU crowd left the C library alone.
It’s a tremendous waste of time and effort for so many people to do basically the same thing over and over. And, as for the “distro” differences, I wonder why the core libraries keep changing so much. Why is somebody changing the C libraries even today (especially in a way that affects binary compatibility)? The C language definition hasn’t changed in nearly 20 years. It’s time the GNU crowd left the C library alone.
C does change: C99 for example introduced new functions. libc also supports other standards like POSIX, newer versions of which get released from time to time.
GNU libc is full of bugs, read uclibc’s description of some of them and how developers repeatedly ignore them. And it’s virtually impossible to get a patch accepted.
Microsoft changes the equivalent, MSVCRT.DLL, too: the version shipped with Windows 95 was 4, and the latest is 8. But at least they encourage shipping it with the application, unlike on Linux, where (if you even know about the problem!) you compile against an older version or use autopackage’s apbuild, and pray.
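For what it’s worth, you can see which glibc a given box actually ships (and therefore, roughly, what a binary built there will expect at run time) with a couple of lines of Python; gnu_get_libc_version() is a real glibc function, but this obviously only works on glibc-based systems:

# Print the glibc version on this system; binaries built here will generally
# expect at least this version at run time. Only works on glibc-based Linux.
import ctypes

libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())  # e.g. "2.3.6" on an older distro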