I’ve tried it with a few programs, and it seems to work, although I refused to give it my root password. The result was that it installed everything into /home/me/.local, which I actually prefer: it can’t confuse my package manager there, and it’s easy to get rid of if I choose. I would never let this thing mess with my /usr/bin and /usr/lib directories.
In terms of how to take it further, I think all that’s needed now is for more app developers to make autopackage packages for their stuff. There’s not much of a selection now, though the latest Inkscape had an autopackage the day the source was released. That let a lot of people see how cool this thing could be. No waiting for distros to package it. Instant gratification.
It doesn’t harm your /usr/bin or your /usr/lib directories.
I’m looking forward to the day it works flawlessly with other package managers, including install-log.
I’m not worried about it ‘harming’ those directories. But I like a clean system. I don’t want files floating around in my file structure that are not a part of my distro installation and that my package manager isn’t aware of.
I also never do ‘make install’ for the same reason, but rather ‘checkinstall’.
I don’t have such worries. I’ve installed a few packages the ‘make install’ way since there are no Gentoo builds for them, and I did the same with Fedora. RPM can be made aware of such packages, and besides that, nobody prevents you from logging these installations yourself (with install-log, or by just noting in a text file that you installed those packages).
However, autopackage should at some point become capable of cooperating with native package managers, whatever that means exactly.
“I’m looking forward to the day it works flawlessly with other package managers, including install-log.”
Don’t hold your breath… It’s likely to happen on the same day Microsoft gets their act together…
Well… it should work fine, since the systems are openly documented. Let’s call it reasonably flawless – with the exception that poorly created packages might screw up the system.
In the world of office suites, somebody finally decided to wise up and create a standard around a particular file format, and then any office suite or other program can be written to use that file format.
If we translated this to the world of package managers, somebody would define one standard package format and everybody would standardize on that. As it stands, it’s the other way around: every platform/distro uses a different format, and it’s as if an office suite had to be built in such a way that it recognizes the difference between every single one of those file formats.
I guess what I’m trying to say is, why not standardize the packages instead of the package manager? You might need a few different types of packages (eg – binary, source, whatever), but if all of these types were built the same way, then you could use any package manager you wanted that recognized the standard.
Think about it – in the package, you would have one or more XML files (or whatever) that describe the package – what it is (its real name), what it does, information about the files it contains, what its dependencies are, what options are available at install time (eg – is a spell checker available), etc. Let each distro have its own package manager that worries about the specifics of that particular distro, such as where the libraries on the system are, etc. A package (such as appname.package) should really never be concerned with any of these things, should it?
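To make the idea concrete, here is a rough sketch of what such a descriptor might look like. Every element and attribute name below is invented purely for illustration – it isn’t any existing format:

<package name="appname" version="1.2">
  <description>What the application actually does</description>
  <dependencies>
    <depends name="gtk+" minversion="2.6"/>
    <depends name="aspell" optional="true"/>   <!-- the optional spell checker -->
  </dependencies>
  <files>
    <binary src="bin/appname"/>
    <library src="lib/libappname.so.1"/>
    <data src="share/appname/"/>
  </files>
</package>

Each distro’s package manager would read this and decide for itself where the files go and how the dependency names map onto its own packages.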
I think you are right, but it’s in the nature of distros to have slightly different versions of every library an application could rely on, so there is no way the package can be standardised.
The package is either independent (no deps), self-contained (à la klik) or distro-specific. Klik is better placed right now than autopackage.
It would be nice to have some sort of “meta-package”.
Rather than specifying directly what folder the files should go into, the package would have something like this:
$docdir/”manfile(s)”
$docdir/”other documentation”
$binarydir/”binary files”
$libdir/”libraries”
and so on. How many do we need?
These would then be translated into distro-specific directories at install time (see the sketch at the end of this comment).
Then we need to standardize dependency entries and how to translate them between the meta-package and the package system of the distro. The system could also allow for the installation of statically linked binaries…
It’s really the dependency problem that’s the issue, plus the fact that current package managers have a problem with having multiple versions of the same package installed (and maybe a cleanup function that one can run periodically to check whether any library package isn’t needed by any installed package, and can therefore be removed).
That’s one of the good sides of the gobo system, the ability to have multiple library versions installed side by side.
But this is really an issue with the package managers currently in use, not a limitation of the OS itself.
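Something like this tiny installer sketch is what I mean by translating at install time. The distro detection, variable names and paths are all made up for the example:

#!/bin/sh
# translate generic directory variables into distro-specific ones at install time
DISTRO=$( . /etc/os-release 2>/dev/null; echo "$ID" )   # or whatever detection method you prefer

case "$DISTRO" in
    gobolinux) prefix=/Programs/MyApp/1.0 ;;
    *)         prefix=/usr/local ;;
esac

bindir="$prefix/bin"
libdir="$prefix/lib"
docdir="$prefix/share/doc/myapp"

install -d "$bindir" "$libdir" "$docdir"
install -m 755 payload/myapp       "$bindir/"
install -m 644 payload/libmyapp.so "$libdir/"
install -m 644 payload/manual.txt  "$docdir/"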
Though I abhor the idea of package managers being used for applications above the system level, autopackage is a step in the right direction. That is, toward a universally (Linux) installable package. As a nice side effect, projects using it make their apps relocatable, which makes it trivial to package them as self-contained AppDirs.
Hmm, should not be hard to have it play nice with GoboLinux then.
I really believed autopackage was the next big Linux thing. After all, it would have been reasonable to expect the distros to support it, seeing a way to reduce the number of packages that they have to build, shifting part of the packaging load to application developers for applications outside of the ones they wish to fully support…
And for application developers, a way to expand their potential audience by being available for all distros.
None of that has happened. Debian has been used as a meta package repository by a wide variety of distros, and rpm now has wrappers to automatically resolve dependencies…
Ubuntu/SUSE/Mandriva are now the distros for those of us who don’t want to have to struggle with software installs, the Gentoo/Debian users never had the problem, and Slackware users still don’t see where the problem is (just joking, ok?)
And there is klik, which provides self-contained archives, thus not potentially confusing the native package manager about the content of /usr/bin. And application developers are unsurprisingly very happy not to have to package and integrate their stuff themselves.
I hope I am wrong but I don’t see autopackage taking off anymore.
It’s a huge advantage for small applications that don’t have packagers, although packagers usually aren’t too hard to come by.
I definitely see autopackage taking off. I’ve used it a few times; the trouble is that many of the projects using it now are the big ones: Inkscape, gaim, etc.
They need to do everything they can to make autopackage preparation as easy as possible. They’ve got a 9-step thing on their site, but it’s not quite *that* easy. Especially when you get to writing the actual spec.
The other thing that would be awesome would be BSD compatibility.
If my memory isn’t playing tricks on me, I still remember one of Mike Hearn’s early posts here on OSNews, explaining the rationale for autopackage before a line of code was even written. Gosh, that must have been a couple of years ago. It was one of those hopelessly ambitious pieces, but was very well reasoned and written. Of course, you see those every once in a while about something which is missing or should be fixed in Linux or open source. Most of those kinds of articles never lead anywhere.
And here’s where Mike Hearn has been a pleasant surprise: he just went ahead and did what he thought needed to be done, notwithstanding those who said it was unnecessary, or wouldn’t work. It’s great to see somebody take that initiative and continue with it – countless projects are abandoned before they reach release 0.1, and Autopackage has been at 1.0 for a while. It still hasn’t experienced widespread adoption, but that is not a problem at the present time, just read the FAQ for Autopackage. As previously stated by others, it excels in installing software which doesn’t have a packager for a specific distribution. Indeed, a few larger projects have been provided in Autopackage form such as Inkscape.
In the end, I’d like to thank Mike Hearn for his efforts and putting his time/money where his mouth is. Unfortunately I don’t have the technical expertise to assist in this project, but I’m glad to see that there are those who strive to make open source software better.
I use Slackware, which means I predominantly install from source, but the other day I was pointed to a program that used autopackage, so I decided to try it again (previously, about a year ago, it balked and did nothing). It worked flawlessly this time.
This is good because it makes things easier not only for the end user but also for the software developers, who know that only one package is required.
From the website…
“””
Automatically verifies and resolves dependencies no matter how the software was installed. This means you don’t have to use Autopackage for all your software, or even any of it, for packages to successfully install.
“””
Let me get this straight… Even if I never use Autopackage, it will somehow still make everything install correctly ???
Yes, if the package being installed depends on some library that was installed by you as an RPM, it will find that lib and work with it. If you don’t have that lib, it will install it autopackaged (provided the lib is not something core like X Window).
I love the idea, and would love to see it become a standard, but I’ve seen too many distros do things too differently for it to ever be universal, unless all packages come with multiple install methods auto-selected by some kind of “distro-library-path” detection.
Oh, and offer some kind of source compile option (sorry, I’m a Gentoo user).
Read various posts by Debian folks; the responses by Mike Hearn et al. appear to be much hand-waving.
They might have some points, but consider this: by Slackware standards, Debian’s package management system is broken. Anything that forces package dependencies down your throat will in the end lead to disaster.
So which is more broken, then – DPKG or autopackage?
Which one works across the most distros?
Since Autopackage is just another type of DPKG, one would be as bad as the other.
Hmm, I’ll bite. You don’t provide links to support your words, but I know what it might be about (not only with Debian, but with other distros).
So why is Autopackage considered harmful by some people? The two basic reasons are security and system integrity. Let’s discuss them both from a common-sense point of view.
1. Security.
The argument here is that “You shouldn’t install software from random places on the Internet”. The specific reasons cited are threefold:
a) You don’t trust developers themselves.
If this is indeed the case, why use their software at all? After all, if you think they can sneak spy- and malware in the stuff they release, do you honestly believe that this spy- and malware won’t be carried over to the distro repository? No, distros don’t do a comprehensive security audit of every package in a repository.
Moreover, let’s not forget that we’re speaking about open source software now (with regard to closed source, neither the repository-based nor the standalone-installer-based model has any advantage). Do you use Windows? Do you use open source software for Windows downloaded from the developers’ websites? Just how common are the cases when you got spy- or malware with that? Not very common, I guess. Then shouldn’t the reason for it be something other than the distribution model?
Conclusion: The problem is orthogonal to Autopackage.
b) You’re afraid of a malicious hacker compromising a website.
This is a legitimate concern indeed. But couldn’t the hacker compromise the whole repo as well? What’s the difference then?
Conclusion: The problem is not specific to Autopackage.
c) The Autopackage installer is a shell script that’s run as root and you give it complete control over your system.
Yes, the Autopackage installer is a shell script, but you don’t have to install it as root if you don’t trust the package provider (except the software that absolutely must be installed systemwide, in which case you can install it from source or distro repository, but then you’ll have to deal with (a) and (b) cases anyway). You can refuse to give the Autopackage installer your root password, and then (provided the nature of the software allows it) it’ll install the software in your home directory.
Conclusion: This is an Autopackage-specific problem, but Autopackage offers features that make it possible to mitigate it.
2. System integrity.
This boils down to “Thou shalt not install in /usr”. While true IMO, the reason for this behavior of Autopackage is simply that distros don’t properly support installation to /usr/local or /opt. This is about finding the necessary libraries, putting menu entries in the right places, etc. However, this problem is not specific to Autopackage as such – a hypothetical LSB RPM package would suffer from it just as much. To reliably install a package with all the necessary system integration, Autopackage is forced to use /usr. However, you can override this by specifying the prefix of your choice from the command line. It’s just not guaranteed to work perfectly. And most often, you can install to /home!
The actual hazards of a /usr installation are not that high. First, the Autopackage installer will refuse to overwrite an installed RPM and will ask you to uninstall it first. Then, obviously, there is the risk that APT or a similar dependency resolution system will install another version of the software over the installed autopackage. If you ask me, the policy of overwriting files quietly is questionable by itself, to say the least, but let’s forget about that for now. Remember that Autopackage is meant for installing application software, not separate libs or core system components. Therefore, the situation of installing a distro-packaged app over an autopackaged one will most likely be related to a dependency on this application – which doesn’t happen that often. (The likely culprits here are large-scope metapackages like ubuntu-desktop.) But if a user did a systemwide install of an autopackaged app, he/she must be careful not to overwrite it afterwards (watch the dependencies and don’t forget what you installed with AP!). All in all, I prefer /home as a destination.
And last but not least: the effort needed from the distros to support the /usr/local and /opt prefixes properly is minuscule (mostly setting up some symlinks and environment variables). But the reason they don’t do it is very simple and sad. DISTRO MAKERS DON’T WANT USERS TO INSTALL THIRD-PARTY SOFTWARE AT ALL. Period. You’ve probably heard many times that open source is all about choice and stuff, right? Well, if you ask distro makers, it must be wrong when it comes to software installation. Ironically, they seem to want to control your computer as tightly as Microsoft does! And this situation is not likely to find a resolution any time soon… Luckily, there are workarounds for that, which is why Autopackage works NOW.
Even more interesting, the ideas of repositories, standalone installers and cross-distro packages don’t even contradict each other. Take Softpedia – a repository of standalone installers compatible with multiple versions of Windows. Nice, isn’t it?
“While true IMO, the reason for this behavior of Autopackage is simply that distros don’t properly support installation to /usr/local or /opt.”
Is there a list of distributions that do not support this? And wouldn’t it be possible to just add the paths instead of delivering your opponents the best excuse for not supporting it?
“Is there a list of distributions that do not support this?”
Fedora, Ubuntu, Mandrake, Gentoo… Pretty much all major ones. To date I’ve only heard that the developer of Mepis promised to provide all the necessary support for Autopackage.
This is interesting, /usr/local/bin seems to be in the PATH for Debian unstable and I can’t remember having encountered any problem installing into /usr/local before.
I actually install KDE programs I compile myself to /usr/local/kde
So it seems that make install does something differently then, but that leads to the question: why can’t the autopackage installer do the same?
“This is interesting, /usr/local/bin seems to be in the PATH for Debian unstable and I can’t remember having encountered any problem installing into /usr/local before.”
It isn’t as simple as /usr/local/bin being in PATH. Menu items don’t show up, icons don’t work, MIME type registration is ignored, etc.
Menu items don’t show up, icons don’t work, MIME type registration is ignored, etc.
As far as I know these things are controlled by environment variables on systems using the freedesktop specifications on directories, menus, etc.
Assuming that the autopackage installer is capable of more than just copying files, e.g. running commands and writing text to files, it should be possible to extend the environment if necessary.
For example, extending the KDE search tree is as simple as:
mkdir -p `kde-config --prefix`/env
echo "export KDEDIRS=/your/prefix:$KDEDIRS" >> `kde-config --prefix`/env/extend-kdedirs.sh
Given the autopackage developers’ knowledge I am certain they did know about this, so I am curious why they chose to follow a path that was guaranteed to get the distributors up on the barricades.
Modifying the user’s system configuration is:
* Risky
* Error prone
* Not our job
For instance, we already do patch up the $PATH variable and a few others so you can at least run a program if you install it into an odd prefix (and so ~/.local/bin is put in the path), but this causes an endless stream of bugreports from people who use exotic/broken shells and setups, or from people who feel ‘violated’ that some program modified their startup scripts.
This is simply fixing one, well known, variable!
Now imagine the chaos that would occur if we went around modifying startup scripts, XML files, environment variables, and other directories. All so stuff could be kept in a separate prefix – not worth it, given that there is no problem at all with installing to /usr: both autopackage and the system package manager keep databases of what packages own what files, and the files don’t overlap (like, you can’t have a file owned both by RPM and autopackage simultaneously).
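(For what it’s worth, you can check that kind of ownership yourself on an RPM-based system – rpm will tell you which package, if any, owns a given file:

rpm -qf /usr/bin/inkscape    # prints the owning package, or reports that the file is not owned by any package

Autopackage keeps an equivalent record of its own files on its side.)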
The whole installing to /usr thing is really quite bogus IMHO. The people who are complaining whinge about a *lot* of different things we do – basically they just don’t like the idea that users are getting software from outside the distro. Nothing we can change in autopackage will ever fix that, as it’s the whole point.
but this causes an endless stream of bugreports from people who use exotic/broken shells and setups
You’re right, not all variables are going to be modifiable as easily as KDEDIRS, where an installer could query all currently active paths using kde-config and where it can be sure that startkde is executed by a sh interpreter.
both autopackage and the system package manager keep databases of what packages own what files, and the files don’t overlap
I knew I must have been missing something. My previous information indicated that autopackage would overwrite files, or on uninstall remove files that are no longer the ones it installed (i.e. they have since been replaced by a file from a distro package).
I guess your uninstaller is querying the package manager for files it is about to remove to check for the latter.
Anyway, is there some documentation of what kind of setup an administrator has to ensure to make autopackage work with a different prefix?
I am asking because my systems are configured to let local administrators install software in /usr/local without requiring root permission (group writeable) and if there is some more or less fixed setup one could create a “distribution” package that ensures it.
I imagine something like
/usr/local/bin in PATH
/usr/local/lib in ld’s config
/usr/local/share in XDG_DATA_DIRS if it is set (included by default if not set or empty)
/usr/local/config in XDG_CONFIG_DIRS?
/usr/local in KDEDIRS
Anything else?
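As a rough, untested sketch of what I have in mind (the file name and the exact variables are just my guesses):

# /etc/profile.d/local-prefix.sh -- picked up by login shells
export PATH="/usr/local/bin:$PATH"
export XDG_DATA_DIRS="/usr/local/share:${XDG_DATA_DIRS:-/usr/share}"
export XDG_CONFIG_DIRS="/usr/local/etc/xdg:${XDG_CONFIG_DIRS:-/etc/xdg}"
export KDEDIRS="/usr/local${KDEDIRS:+:$KDEDIRS}"

# plus the dynamic linker, once, as root:
echo /usr/local/lib >> /etc/ld.so.conf
ldconfig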
We don’t query on uninstall to make sure RPM didn’t blast our files, because if that has occurred we can no longer reason about the state of the system anyway (and removing the file would be the wrong thing to do). The solution is simple – don’t install packages you know are already installed!
As to what has to be done for a new prefix, well there is no canonical list but I made a big one for a thread on fedora-devel some time last month. Try looking there.
We don’t query on uninstall to make sure RPM didn’t blast our files, because if that has occurred we can no longer reason about the state of the system anyway
Ah, I see. So you simply check the file’s hash and if it isn’t the one you expected you do not remove it, right?
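I.e., something along these lines – just my guess at the logic, not actual autopackage code:

recorded="d41d8cd98f00b204e9800998ecf8427e"            # checksum stored at install time
current=$(md5sum /usr/bin/someapp | cut -d' ' -f1)     # checksum of the file right now

if [ "$current" = "$recorded" ]; then
    rm -f /usr/bin/someapp       # still the file we installed, safe to remove
else
    echo "someapp was replaced by another package, leaving it alone"
fi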
As to what has to be done for a new prefix, well there is no canonical list but I made a big one for a thread on fedora-devel some time last month
Well, I was amazed at how easily autopackage works. Our national tax program was just released as an autopackage and it was a breeze to install. It auto-installed some autopackage software first (too bad you need an internet connection for it though), then asked for the root password, and that’s it! It seems a good way to install commercial software, for example.
I want to see some results! Autopackage has been on people’s lips for over a year. However, there are still only a few autopackages to install, and not even all of them work perfectly. In the meantime, Klik ( http://klik.atekon.de ) has gotten itself thousands of packages, and the system has improved a lot too (it’s still not in a stable form, but has great potential). If I had to bet, I’d put my money on Klik.
Klik has one fatal problem: it’s not really cross-distribution. For example, Fedora still isn’t supported. At autopackage, we focus on cross-distribution compatibility.
“Klik has one fatal problem: it’s not really cross-distribution. For example, Fedora still isn’t supported. At autopackage, we focus on cross-distribution compatibility.”
Well, while it’s not there yet, Klik also _focuses_ on cross-distribution compatibility. However, may I remind you that not all Autopackage files work with all distributions either. So, pot, I’d like to introduce you to the kettle.
“However, may I remind you that not all Autopackage files work with all distributions either. So, pot, I’d like to introduce you to the kettle.”
Not quite. We may not support every single one of them out there, but we support a lot. More than Klik does. For example, Fedora Core 4. Doesn’t work with Klik. Klik basically converts Debian packages, which aren’t built with cross-distribution compatibility in mind at all.
“Well, at least autopackage will install the missing files; dunno about klik, I haven’t tried it yet because it’s not yet Fedora-proof.”
Klik will also install missing libraries through server-side apt into the Klik package, but it assumes that you’ll have some “core” packages already installed, so that they wouldn’t need to be included in every Klik package. Different distros have slightly different naming schemes for these libraries, and that’s why Klik isn’t compatible with all distributions yet. LSB will hopefully someday make things better.
You can safely test Klik with your Fedora, because Klik packages don’t touch your system in any way. The worst thing that can happen is that the program doesn’t work.
Linux distros have dug themselves a hole even worse than Windows 98 DLL Hell, one that an end user cannot deal with. Words like “synaptic is easy” make me want to bash my head against a wall. No it’s not – and people who claim as much are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until it’s as easy as OSX or Windows to install an app.
And neither Klik or Autopackage solve the whole problem.
“No it’s not – and people who claim as much are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until it’s as easy as OSX or Windows to install an app.”
Your statement is quite weird, considering that programs are installed exactly the same way in OS X and in Linux with Klik.
But what would you suggest? I bet you have some good thoughts on this problem.
I don’t really need to, OSX is a shining example. Anyone can put up a .dmg of their app on any website and it only requires (after attaching the image, a step I still think is pointless) to drag the app to the app folder.
Klik attempts to mimic this, but uses a repository, taking the power of choice away from the user. Sure, the repository can store every app you’ll ever need, but Mr. Independent app developer who wants to publish regular beta updates to his own app on his blog doesn’t have control over the repository. It is a “distro-dictated” installation method and thus not “universal” by any meaning of the word.
We need _one_ app bundle, using one standard layout, that works on every distro (just like Mac .apps can contain a Classic and a Mac (OSX) folder for one app to run on both OS9 and OSX). And it can be downloaded as a single file, without “repositories”, “apt-get” and any more complexity at all than a drag-and-drop, and that’s it.
The makers of linux package managers just can’t get their heads around this and it is beyond me why.
I don’t really need to, OSX is a shining example. Anyone can put up a .dmg of their app on any website and it only requires (after attaching the image, a step I still think is pointless) to drag the app to the app folder.
I really don’t see how Mac OS X shines more than e.g. Red Hat. A Red Hat developer could put an RPM on a website and when the user clicks on the link, the program is automagically installed. No need to drag it to any special folder.
Of course this only works if the package doesn’t contain unresolved dependencies. The same problem occurs in Mac OS X, by the way. If you drag something with unresolved dependencies to the App folder, Mac OS X will not try to download what’s missing automagically.
I’ve never had a single dependency problem with OSX :S Including dragging over the Office.X .app files from another mac without using the office installer.
“Linux distros have dug themselves a hole even worse than Windows 98 DLL Hell, one that an end user cannot deal with. Words like “synaptic is easy” make me want to bash my head against a wall. No it’s not – and people who claim as much are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until it’s as easy as OSX or Windows to install an app.”
Indeed! Package managers are great when it comes to installing applications that are approved by the distro (for example Xfce under Ubuntu), but if you want to download something off a web site and run it, you’re completely on your own. In Mac OS X you drop the application where you want it and run it; in Windows you either do that or the application comes with a built-in installer that just works. When the Linux world can do that, wake me up, because then, maybe, maybe it is ready for the desktop.
Package managers are great when it comes to installing applications that are approved by the distro (for example Xfce under Ubuntu), but if you want to download something off a web site and run it, you’re completely on your own. … When the Linux world can do that, wake me up, because then, maybe, maybe it is ready for the desktop.
A decent distro will come with a huge variety of packages with, usually, more from third-party repositories. How often do people really “find something on a website” they can’t get from their distro’s online stores? I’ll wager it’s not often if their distro is any good. In addition, Joe Sixpack users are likely to be quite content with what their distro offers and will never go beyond it – all this aside from the questionable sanity of downloading and running executables from websites.
The vast majority of all complaints about desktop Linux boil down to “I’ll only use it when I can use the same programs (or Linux clones of them) I do under Windows”. Like that or hate it, but that’s what the market has always said.
A universal packaging manager for Linux sounds a great idea but it probably won’t happen. In my experience, too many Linux devs enjoy being members of the awkward squad and would be aghast to find themselves using a package manager intended to help mere mortals. One can imagine the horrified blogs on Planet Debian right now.
It makes me laugh when the Windows (and other platform) people claim Linux is not ready for the desktop because there is no universal install solution. Unlike Windows users, we don’t have to go to god knows how many websites just for software and then install each one by one.
Package managers provide a repository of software far superior to anything on the Windows platform; if I introduce this to a new Linux user they are simply amazed. One place where you can download hundreds of programs in a few clicks? Windows doesn’t do that; Linux distros do. Why have a universal system when you have all the software you want in, say, Ubuntu or Debian and it gets updated for FREE.
I think Windows users cannot get used to the idea of hundreds of software packages at their mercy in a few clicks; it’s too good to be true. It’s just like the idea that there is no Start menu in Linux, so their brain goes into meltdown: applications menu, what’s that?
Websites are better; end-users prefer websites, as they have colour and variety, detail the software and its functions, and provide news and communities with forums – not just a cold list of a million confusing names starting with either g or k.
Klik has some good ideas, and we’ve worked together before, but by NO means is having more packages an advantage.
We could have mechanically generated autopackages from Freshmeat feeds, but that would have been a dumb thing to do, because packaging is an inherent part of the software development process. You need to consider whether your dependencies are reasonable and fix them if not, make sure your software can take advantage of features present in newer systems but still work on older ones, and so on. Packages also need to be compiled in a special way to improve binary compatibility.
None of these things will be done if you just mechanically generate them from the Debian archives (or any other distribution’s archives). And that’s a good way to get packages that don’t work reliably. Go look at some of the auto-generated CMGs: their feedback system indicates that they work for only 2/3rds of the users! And that’s already biased, because people on distros like Fedora are marked as unsupported, etc.
If you want to use autopackage as a developer you need to:
1. either put their C!!! (C-entric anyone? what if I don’t use C?) code in your program and thus link to libC (C-entric again! What if I don’t want to?)
or
2. use glib, see C-entric entries in #1
In other words it forces developers into a dead end.
It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don’t do it at all.
It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don’t do it at all.
This is news to me. Where do we “admit” that it’s a hack? It reads /proc/self/exe, which is the API Linux exports for determining your binary path.
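For anyone wondering, /proc/self/exe is just a symlink the kernel maintains for every process, pointing at the binary being executed. You can see it from a shell:

readlink /proc/self/exe     # prints the path of the shell binary itself, e.g. /bin/bash
readlink /proc/$$/exe       # the same thing, spelled with the shell’s PID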
There’s also no requirement for you to use the binreloc kit, you can implement relocatability however you like.
I also don’t get your complaint about C – every program links against libc regardless of what language it’s written in. That’s the standard way to access the operating system’s kernel and dynamic linker services. Are you even a developer at all?
Yes, I am, and unlike you I have the wider view. Your statement that “every program uses libc anyhow” is ignorant and shows how little you actually know. There are programs and compilers which choose to ignore libc (for a good reason) and make their own base libraries which call the kernel directly. They mostly link statically.
You probably wonder what the use of such an approach is, so let me elaborate.
Have you been able to run your 5-year-old binaries on today’s Linux? No, because they link to libc. Using a static/smartlink approach with a conservative kernel-call library makes your programs much more consistent, without the requirement to recompile them when some nitwit decides to break the ABI again.
I’ve gotta admit, I can’t remember the last time I saw a fully static program. Making one of them is difficult as simple things like doing a DNS lookup can cause crashes if you really do statically link libc. Most programs which want total robustness dynamically link against libc (no matter what language they use) and that’s about it.
Static binaries were broken in the last few years anyway, I remember icculus being quite annoyed by it … even avoiding libc is no defence.
This is one such compiler which under normal circumstances links smart (this means static) and DOESN’T USE OR LINK LIBC IN ANY WAY.
Sorry for the bold, but you simply don’t get it. Libc is NOT the thing everyone must use to get functionality. You must link it on most Unices if you wish to work with things like pthreads, gtk etc., but that’s a design flaw, not something to boast about. There’s an example binary generated by this compiler somewhere which is 5 years old and still runs ok.
Instead of libc they have an RTL (runtime library) which is specific to each arch/OS this compiler runs on. It’s the base library required even for the compiler itself. It DOESN’T use libc in any way unless specified by a define. It goes directly to the kernel.
I don’t wish to ignite any language/compiler war here. All I wanted to say is that you force developers into something some really can’t do (especially if you want to be cross-platform).
Adding linux specific code into my sources just because I want/must use autopackage is kinda stupid.
P.S. Smartlinking is statically linking only those methods/objects which you really use, reducing the file size of a static hello-world program to 20 KB (compared to 800 KB with gcc --static).
I’m not aware of any programs that use Free Pascal.
Also, it’s hardly a design flaw for an operating system to require processes load some userspace code – every OS I’ve ever seen does this. Not having access to the standard thread library, widget toolkit etc is pretty huge.
The “smart” linking can also be done with the GNU toolchain. Actually, during the 1.2 cycle we experimented with enabling it by default in apbuild, but it suffered from quite an obvious problem – apps that use libglade can contain references to functions outside the main body of C/C++ code. The garbage collector cannot see this reference and deletes the “unused” function, causing runtime failures.
So we decided not to enable this by default for 1.2, but there are plans to introduce a more robust form in later versions.
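For reference, the plain GNU toolchain equivalent looks roughly like this (these are standard gcc/ld flags; whether apbuild passes exactly these I’m writing from memory):

gcc -ffunction-sections -fdata-sections -c foo.c   # compile: each function/variable gets its own section
gcc -Wl,--gc-sections -o app foo.o                  # link: discard the sections nothing references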
“I’m not aware of any programs that use Free Pascal.”
See their homepage; the news section has a nice entry. In any case, this is just ignorance.
“Also, it’s hardly a design flaw for an operating system to require processes load some userspace code – every OS I’ve ever seen does this. Not having access to the standard thread library, widget toolkit etc is pretty huge.”
Sure, but look at how MS does it. As much as I hate the winapi, they made the wise choice of not making the base libs language-centric. I can link to their thread library without ever needing to link libc, unlike on Linux. As I said, it’s a design flaw. Unix was made “by C for C” and C was made to make Unix. It’s ok and nice, but it has a negative impact.
“It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don’t do it at all.”
There is no standard kernel call. That’s the problem. If there is we would have used it.
Now, if you know a better solution we’d like to hear it.
Why doesn’t Linux adopt the .app format? The .app is simply a folder, and it contains a resources folder and a folder containing the binary for an OS.
Software could be made to create and run .app bundles that have Linux binaries in a Linux folder inside the .app, the same way MacOS runs .apps using the MacOS folder inside the app.
You could have a folder structure like this:
demo.app
  Resources        (resources in program)
  MacOS            (OSX binary in here)
  Linux
    PPC            (processor architecture)
      All Distros  (default exe to use in here)
      Ubuntu       (use this exe on Ubuntu instead)
      …            (other distros that have to use different code)
    i386
      All Distros  (and so on…)
      …
This way, a single app could – if it so wanted – have one .app that could run on OSX (both PPC and i386) and Linux, with differently compiled binaries for certain distributions. It’d be a rather large app file, sure, but it’d be one app to run them all! And the author wouldn’t have to include any more than one binary if they didn’t want to.
When the binary runs for the first time it could check against an internal list of dependencies and auto-download them from the repository.
You’d have one standard that runs on OSX and Linux (if the author wanted), and it’d be easy to install and uninstall.
I’ve replied to this many times: why, instead of having self-contained binaries, not just have self-contained PACKAGES – something like a tar of debs or rpms, with the application and its dependencies? It gives the same illusion of one big binary with all the libraries inside, and you don’t lose the ability to share code and get automatic updates.
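A crude sketch of what I mean (the file names are invented):

tar xzf myapp-bundle.tar.gz    # contains myapp.rpm plus the rpms of its non-core dependencies
cd myapp-bundle
rpm -Uvh *.rpm                 # rpm installs the whole set in one transaction, sharing the libs as usual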
Answer: yes, you could do, and implementing a variant of the NeXT/Apple scheme on Linux would have been a good idea 10 years ago. If I was to go back and design Linux from scratch then something very similar is what I’d do except I’d dump the way the shell tries to locate as many appfolders as it can and I’d replace it with an explicit install or integrate action.
Implicit integration has caused quite a few nasty security problems on MacOS and it’s not a big deal to wait for the user to run the program for the first time before it’s linked into LaunchServices.
Anyway. You can’t implement this on Linux systems of today, because there is no such thing as “Linux”, instead there is just a random collection of packages thrown together with little or no organisation between them. Apple and Microsoft provide large coherent collections of APIs that are versioned together, meaning you don’t need horribly complex installers or depsolvers to do anything, you just need to check “Is this OS X 10.3+”.
Clearly you guys need to start working together… Maybe the klik packages can be transformed into autopackages? Or instead of all the ‘handmade’ klik packages, you (and the klik maintainers) could start making autopackages instead and keep klik for the rest…
I really appreciate all your work, especially the research on binary relocation – again, this is really valuable work. Please use this for more than just autopackage!
… is that once your distro upgrades the dependencies (e.g. if you install a new release of your distro on top of an old one), some of your autopackages will mysteriously stop working, or worse, expose some faulty behaviour that can damage user data.
The solution for that is to always install *all* dependencies in some spare place and regularly scan the system for changes.
E.g.:
1. Your app depends on libfoo, which is already installed on the system.
2. You install your own copy of libfoo on the disk in some spare place, but use the system-provided one to enable sharing. Then you remember the signature of the system library file and its modification date.
3. Upon every application startup you scan the dependencies and LD_LIBRARY_PATH-override the changed ones with yours (see the sketch below). (Of course, some more elaborate checks than signatures can be used so that security updates don’t invalidate a system library. A case-by-case analysis is necessary.)
4. The Autopackage control panel should expose an option to forcefully override system-provided libs with AP-installed ones in case there are still problems.
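A minimal sketch of the launcher in step 3 might look like this (all paths and the manifest format here are invented for the example):

#!/bin/sh
PRIVATE_LIBS="$HOME/.myapp/libs"       # our spare copies of the dependencies
MANIFEST="$HOME/.myapp/lib-manifest"   # lines of: /usr/lib/libfoo.so.1 <md5-at-install-time>
OVERRIDE=""

while read lib recorded; do
    current=$(md5sum "$lib" 2>/dev/null | cut -d' ' -f1)
    [ "$current" = "$recorded" ] || OVERRIDE="$PRIVATE_LIBS"   # system lib changed or vanished
done < "$MANIFEST"

if [ -n "$OVERRIDE" ]; then
    LD_LIBRARY_PATH="$OVERRIDE${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
fi
exec /usr/local/bin/myapp "$@"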
Having packaged with it, I have to say I really like it. Instead of either:
1.) just releasing source, or
2.) source + rpm + deb,
I can just release source and an autopackage; that way everybody (except BSD users) is happy.
It does involve a bit of coding to get working, but they’ve made this fairly easy as well.
Did they reinvent the wheel?
pkgsrc is the official package manager of NetBSD which also supports these:
Darwin (Mac OS X)
DragonFlyBSD
FreeBSD
Interix (Windows 2000, XP, 2003)
IRIX
Linux
OpenBSD
Solaris
Tru64 (Digital UNIX/OSF1)
http://www.netbsd.org/Documentation/pkgsrc/
pkgsrc is great. As a Debian expat, growing comfortable with pkgsrc was a wonderful bonus of moving to NetBSD. I now use it on OS X and OpenBSD.
It’s great. Highly recommended. I think it would complement Slackware best of all.
read various posts by debian folks. responses by mike hearn et al appear to be much hand-waving
They might have some points but consider this. By Slackware standards Debians package managment system is broken. Anything that forces package dependencies down your throat will in the end lead to disaster.
So what is more broken then DPKG or autopackage?
Which one works across the most distro’s?
Since Autopackage is just another type of DPKG one would be as bad as the other.
Hmm, I’ll bite. You don’t provide links to support your words, but I know what it might be about (not only with Debian, but with other distros).
So why is Autopackage considered harmful by some people? The two basic reasons are security and system integrity. Let’s discuss them both from common sense point of view.
1. Security.
The argument here is that “You shouldn’t install software from random places in Internet”. This specific reasons cited are threefold:
a) You don’t trust developers themselves.
If this is indeed the case, why use their software at all? After all, if you think they can sneak spy- and malware in the stuff they release, do you honestly believe that this spy- and malware won’t be carried over to the distro repository? No, distros don’t do a comprehensive security audit of every package in a repository.
Moreover, let’s not forget that we’re speaking about open source software now (with regard to closed source, neither repository- or standalone installer-based model has any advantage). Do you use Windows? Do you use open source software for Windows downloaded from the developers’ websites? Just how common are the cases when you got spy- or malware with that? Not very common I guess. Then shouldn’t the reason for it be something else than the distribution model?
Conslusion: The problem is orthogonal to Autopackage.
b) You’re afraid of a malicious hacker compromising a website.
This is a legitimate concern indeed. But couldn’t the hacker compromise the whole repo as well? What’s the difference then?
Conclusion: The problem is not specific to Autopackage.
c) The Autopackage installer is a shell script that’s run as root and you give it complete control over your system.
Yes, the Autopackage installer is a shell script, but you don’t have to install it as root if you don’t trust the package provider (except the software that absolutely must be installed systemwide, in which case you can install it from source or distro repository, but then you’ll have to deal with (a) and (b) cases anyway). You can refuse to give the Autopackage installer your root password, and then (provided the nature of the software allows it) it’ll install the software in your home directory.
Conclusion: This is an Autopackage-specific problem, but Autopackage offers features that make it possible to mitigate it.
2. System integrity.
This boils down to “Thou shalt not install in /usr”. While true IMO, the reason for this behavior of Autopackage is simply that distros don’t properly support installation to /usr/local or /opt. This is about finding necessary libraries, putting menu entries in right places etc. However, this problem is not specific to Autopackage as such – a hypothetical LSB RPM package would suffer from it just as much. To reliably install a package with all necessary system integration, Autopackage is forced to use /usr. However, you can override it by specifying the prefix of your choice from the command line. It’s just not guaranteed to work perfectly. And most often, you can install to /home!
The actual hazards of /usr installation are not that high. First, Autopackage installer will refuse to overwrite an installed RPM and will ask you to uninstall it first. Then, obviously, there is the risk that APT or a similar dependency resolution system will install another version of the software over the installed Autopackage. If you ask me, the policy of overwriting files quietly is questionable by itself to say the least, but let’s forget about that for now. Remember that Autopackage is meant for installing application software, not separate libs or core system components. Therefore, the situation with installing a distro-packaged app over an autopackaged one will most likely be related to a dependency on this application – which doesn’t happen that often. (The likely culprits here are large-scope metapackages like ubuntu-desktop.) But if a user did a systemwide install of an autopackaged app, he/she must be careful in order to not overwrite it afterwards (watch the dependencies and don’t forget what you installed with AP!). All in all, I prefer /home as a desitnation.
And the last, but not the least: the effort that is needed from the distros to support /usr/local and /opt prefixes properly is miniscule (mostly steeing up some symlinks and environmemtal variables). But the reason they don’t do it is very simple and sad. DISTRO MAKERS DON’T WANT USERS TO INSTALL THIRD PARTY SOFTWARE AT ALL. Period. You’ve probably heard many times that open source is all about choice and stuff, right? Well, if you ask distro makers, it must be wrong when it comes to software installation. Ironically, they seem to want to control your computer as tightly as Microsoft does! And this situation is not likely to find resolution any time soon… Luckily, there are workarounds for that, which is why Autopackage works NOW.
Amen.
Even more interesting, the ideas of repositories, standalone installers and cross-distro packages don’t even contradict to each other. Take Softpedia – a repository of standalone installers compatible with multiple versions of Windows OS. Nice, isn’t it?
While true IMO, the reason for this behavior of Autopackage is simply that distros don’t properly support installation to /usr/local or /opt.
Is there a list of distributions that do not support this?
And wouldn’t it be possible to just add the paths instead of delivering your opponents the best excuse for not supporting it?
Is there a list of distributions that do not support this?
Fedora, Ubuntu, Mandrake, Gentoo… Pretty much all major ones.
To date I’ve only heard that the developer of Mepis promised to provide all necessary support for Autopackage.
Fedora, Ubuntu, Mandrake, Gentoo… Pretty much all major ones.
This is interesting, /usr/local/bin seems to be in the PATH for Debian unstable and I can’t remember having encountered any problem installing into /usr/local before.
I actually install KDE programs I compile myself to /usr/local/kde
So it seems that make install does something different then, but it leads to the question why can’t the autopackage installer do the same?
“This is interesting, /usr/local/bin seems to be in the PATH for Debian unstable and I can’t remember having encountered any problem installing into /usr/local before.”
It isn’t as simple as /usr/local/bin being in PATH. Menu items don’t show up, icons don’t work, MIME type registration is ignored, etc.
Menu items don’t show up, icons don’t work, MIME type registration is ignored, etc.
As far as I know these entities are controlled by environment variables given systems using the freedesktop specifications on directories, menus, etc.
Assuming that the autopackage installer is capable of more than just copying files, e.g. running commands and writing text to files, it should be possible to extend the environment if necessary.
For example, extending the KDE search tree is as simple as (note the single quotes, so that $KDEDIRS is expanded when the script is sourced at login rather than when the line is written):
mkdir -p `kde-config --prefix`/env
echo 'export KDEDIRS=/your/prefix:$KDEDIRS' >> `kde-config --prefix`/env/extend-kdedirs.sh
Given the autopackage developers' knowledge, I am certain they knew about this, so I am curious why they chose to follow a path that was guaranteed to get the distributors up on the barricades.
Modifying the user's system configuration is:
* Risky
* Error prone
* Not our job
For instance, we already patch up the $PATH variable and a few others so you can at least run a program if you install it into an odd prefix (and so ~/.local/bin is put in the path), but this causes an endless stream of bug reports from people who use exotic/broken shells and setups, or from people who feel 'violated' that some program modified their startup scripts.
This is simply fixing one, well known, variable!
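To make concrete what kind of one-variable startup-script patch is being described, here is an illustrative sketch – not autopackage's actual code; the choice of ~/.bashrc as the target and the duplicate-entry guard are my assumptions:
# only append if ~/.local/bin isn't already on the PATH
case ":$PATH:" in
  *:"$HOME/.local/bin":*) ;;
  *) echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$HOME/.bashrc" ;;
esac
Even something this small has to cope with every shell and dotfile layout out there, which is where the bug reports come from.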
Now imagine the chaos that would occur if we went around modifying startup scripts, XML files, environment variables, and other directories. All so stuff could be kept in a separate prefix – not worth it, given that there is no problem at all with installing to /usr: both autopackage and the system package manager keep databases of what packages own what files, and the files don’t overlap (like, you can’t have a file owned both by RPM and autopackage simultaneously).
The whole installing to /usr thing is really quite bogus IMHO. The people who are complaining whinge about a *lot* of different things we do – basically they just don’t like the idea that users are getting software from outside the distro. Nothing we can change in autopackage will ever fix that, as it’s the whole point.
thanks -mike
Thank you for answering, Mike!
but this causes an endless stream of bug reports from people who use exotic/broken shells and setups
You're right, not all variables are going to be as easily modifiable as KDEDIRS, where an installer can query all currently active paths using kde-config and can be sure that startkde is executed by an sh interpreter.
both autopackage and the system package manager keep databases of what packages own what files, and the files don’t overlap
I knew I must have been missing something. My previous information indicated that autopackage would overwrite files, or would remove files on uninstall that are no longer the ones it installed (i.e. that have since been replaced by a file from a distro package).
I guess your uninstaller is querying the package manager for files it is about to remove to check for the latter.
Anyway, is there some documentation of what kind of setup an administrator has to ensure to make autopackage work with a different prefix?
I am asking because my systems are configured to let local administrators install software into /usr/local without requiring root permissions (group-writable), and if there is some more or less fixed setup, one could create a "distribution" package that ensures it.
I imagine something like the following (a rough profile-snippet sketch of this setup follows the list):
/usr/local/bin in PATH
/usr/local/lib in ld’s config
/usr/local/share in XDG_DATA_DIRS if it is set (included by default if not set or empty)
/usr/local/config in XDG_CONFIG_DIRS?
/usr/local in KDEDIRS
Anything else?
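A minimal sketch of what such a setup could look like as a single login-profile snippet – the file name is hypothetical and this is just an illustration of the list above, not an official autopackage recipe:
# /etc/profile.d/local-prefix.sh -- hypothetical file name
export PATH="/usr/local/bin:$PATH"
export XDG_DATA_DIRS="/usr/local/share:${XDG_DATA_DIRS:-/usr/share}"
export XDG_CONFIG_DIRS="/usr/local/etc:${XDG_CONFIG_DIRS:-/etc/xdg}"
export KDEDIRS="/usr/local${KDEDIRS:+:$KDEDIRS}"
# /usr/local/lib belongs in the dynamic linker's config, not the environment:
#   echo /usr/local/lib >> /etc/ld.so.conf && ldconfig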
We don’t query on uninstall to make sure RPM didn’t blast our files, because if that has occurred we can no longer reason about the state of the system anyway (and removing the file would be the wrong thing to do). The solution is simple – don’t install packages you know are already installed!
As to what has to be done for a new prefix, well there is no canonical list but I made a big one for a thread on fedora-devel some time last month. Try looking there.
We don't query on uninstall to make sure RPM didn't blast our files, because if that has occurred we can no longer reason about the state of the system anyway
Ah, I see. So you simply check the file's hash and, if it isn't the one you expected, you don't remove it, right?
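For what it's worth, a check of that kind could look roughly like this in shell form – purely illustrative, the manifest path and format are made up and this is not autopackage's actual uninstaller:
# refuse to remove any file whose checksum no longer matches the one recorded at install time
while read sum path; do
    if echo "$sum  $path" | md5sum -c --status; then
        rm -f "$path"                                  # unchanged since install, safe to remove
    else
        echo "skipping $path: modified since installation" >&2
    fi
done < "$HOME/.local/share/myapp/installed-files.md5"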
As to what has to be done for a new prefix, well there is no canonical list but I made a big one for a thread on fedora-devel some time last month
Edit: Thanks for the direct link FooBarWidget
“Anyway, is there some documentation of what kind of setup an administrator has to ensure to make autopackage work with a different prefix?”
You may want to read http://plan99.net/autopackage/What_can_distributions_do_to_fix_brok…
Ah, thanks
You might consider renaming the title, maybe something like “What can distributions do to fully support /usr/local”
Given the listed examples I guess that XDG_CONFIG_DIRS should include /usr/local/etc
Well, I was amazed at how easily autopackage works. Our national tax program was just released as an autopackage and it was a breeze to install. It auto-installed some autopackage support software first (too bad you need an internet connection for that, though), then asked for the root password and that's it! It seems a good way to install commercial software, for example.
I want to see some results! Autopackage has been on people's lips for over a year. However, there are still only a few autopackages to install, and not even all of them work perfectly. In the meantime, Klik ( http://klik.atekon.de ) has gotten itself thousands of packages, and the system has improved a lot too (it's still not in a stable form, but has great potential). If I had to bet, I'd put my money on Klik.
Klik has one fatal problem: it’s not really cross-distribution. For example, Fedora still isn’t supported. At autopackage, we focus on cross-distribution compatibility.
“Klik has one fatal problem: it’s not really cross-distribution. For example, Fedora still isn’t supported. At autopackage, we focus on cross-distribution compatibility.”
Well, while it's not there yet, Klik also _focuses_ on cross-distribution compatibility. However, may I remind you that not all Autopackage files work with all distributions either. So, pot, I'd like to introduce you to the kettle.
"However, may I remind you that not all Autopackage files work with all distributions either. So, pot, I'd like to introduce you to the kettle."
Not quite. We may not support every single one of them out there, but we support a lot, more than Klik does. Fedora Core 4, for example, doesn't work with Klik. Klik basically converts Debian packages, which aren't built with cross-distribution compatibility in mind at all.
I’d really like to see where Klik does research on cross-distribution compatibility. I certainly haven’t seen anything on their homepage. We on the other hand have http://plan99.net/autopackage/Good_packaging_practices%3a_what_… , http://plan99.net/autopackage/Improved_C++_support_in_autopackage_1… , http://plan99.net/autopackage/Linux_Problems , http://autopackage.org/aptools.html and http://autopackage.org/docs/binreloc/
Well, at least autopackage will install the missing files; dunno about Klik, I haven't tried it yet because it's not Fedora-proof yet.
"Well, at least autopackage will install the missing files; dunno about Klik, I haven't tried it yet because it's not Fedora-proof yet."
Klik will also install missing libraries into the Klik package through server-side apt, but it assumes that you'll already have some "core" packages installed, so they don't have to be included in every Klik package. Different distros name these libraries slightly differently, and that's why Klik isn't compatible with all distributions yet. LSB will hopefully make things better someday.
You can safely test Klik with your Fedora, because Klik packages don’t touch your system in any way. The worst thing that can happen is that the program doesn’t work.
I love this redefinition of the term Universal.
Linux distros have dug themselves a hole even worse than Windows 98 DLL Hell, one that an end user cannot deal with. Phrases like "Synaptic is easy" make me want to bash my head against a wall. No it's not – and people who claim it is are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until installing an app is as easy as on OS X or Windows.
And neither Klik or Autopackage solve the whole problem.
"No it's not – and people who claim it is are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until installing an app is as easy as on OS X or Windows."
Your statement is quite weird, considering that programs are installed exactly the same way in OS X and in Linux with Klik.
But what would you suggest? I bet you have some good thoughts on this problem.
I don't really need to; OS X is a shining example. Anyone can put a .dmg of their app on any website, and all it requires (after attaching the image, a step I still think is pointless) is dragging the app to the Applications folder.
Klik attempts to mimic this, but uses a repository, taking the power of choice away from the user. Sure, the repository can store every app you'll ever need, but Mr. Independent App Developer, who wants to publish regular beta updates of his own app on his blog, doesn't have control over the repository. It is a "distro-dictated" installation method and thus not "universal" by any meaning of the word.
We need _one_ app bundle, using one standard layout, that works on every distro (just like Mac .apps can contain a Classic and a MacOS folder so one app runs on both OS 9 and OS X), and that can be downloaded as a single file, without "repositories", "apt-get" or any more complexity than a drag-and-drop, and that's it.
The makers of linux package managers just can’t get their heads around this and it is beyond me why.
I don't really need to; OS X is a shining example. Anyone can put a .dmg of their app on any website, and all it requires (after attaching the image, a step I still think is pointless) is dragging the app to the Applications folder.
I really don't see how Mac OS X shines more than, e.g., Red Hat. A Red Hat developer could put an RPM on a website, and when the user clicks the link the program is automagically installed. No need to drag it to any special folder.
Of course this only works if the package doesn't contain unresolved dependencies. The same problem occurs in Mac OS X, by the way: if you drag something with unresolved dependencies to the Applications folder, Mac OS X will not try to download what's missing automagically.
I’ve never had a single dependency problem with OSX :S Including dragging over the Office.X .app files from another mac without using the office installer.
"Linux distros have dug themselves a hole even worse than Windows 98 DLL Hell, one that an end user cannot deal with. Phrases like "Synaptic is easy" make me want to bash my head against a wall. No it's not – and people who claim it is are so utterly blind to that fact that it only screams louder that Linux will _never_ make it to the desktop until installing an app is as easy as on OS X or Windows."
Indeed! Package managers are great when it comes to installing applications that are approved by the distro (for example Xfce under Ubuntu), but if you want to download something off a web site and run it, you're completely on your own. In Mac OS X you drop the application where you want it and run it; in Windows you either do that or the application comes with a built-in installer that just works. When the Linux world can do that, wake me up, because then, maybe, maybe it is ready for the desktop.
Package managers are great when it comes to installing applications that are approved by the distro (for example Xfce under Ubuntu), but if you want to download something off a web site and run it, you're completely on your own. … When the Linux world can do that, wake me up, because then, maybe, maybe it is ready for the desktop.
A decent distro will come with a huge variety of packages with, usually, more from third-party repositories. How often do people really “find something on a website” they can’t get from their distro’s online stores? I’ll wager it’s not often if their distro is any good. In addition, Joe Sixpack users are likely to be quite content with what their distro offers and will never go beyond it – all this aside from the questionable sanity of downloading and running executables from websites.
The vast majority of all complaints about desktop Linux boil down to "I'll only use it when I can use the same programs (or Linux clones of them) I do under Windows". Like it or hate it, that's what the market has always said.
A universal packaging manager for Linux sounds a great idea but it probably won’t happen. In my experience, too many Linux devs enjoy being members of the awkward squad and would be aghast to find themselves using a package manager intended to help mere mortals. One can imagine the horrified blogs on Planet Debian right now.
“the universal Linux package installer”
In this Universe, nothing is universal.
A bit like Sony’s UMD. Can’t the ITC get on to them for false advertising?
It makes me laugh when Windows (and other platform) people claim Linux is not ready for the desktop because there is no universal install solution. Unlike Windows users, we don't have to go to god knows how many websites just for software and then install each one by one.
Package managers provide a repository of software far superior to anything on the Windows platform; when I introduce this to a new Linux user they are simply amazed. One place where you can download hundreds of programs in a few clicks? Windows doesn't do that, Linux distros do. Why have a universal system when you have all the software you want in, say, Ubuntu or Debian, and it gets updated for FREE?
I think Windows users cannot get used to the idea of hundreds of software packages at their mercy in a few clicks; it's too good to be true. It's just like the idea that there is no Start menu in Linux, so their brain goes into meltdown: an Applications menu, what's that?
Websites are better; end users prefer websites because they have colour and variety, describe the software and its functions, and provide news and communities with forums, not just a cold list of a million confusing names starting with either g or k.
Klik has some good ideas, and we’ve worked together before, but by NO means is having more packages an advantage.
We could have mechanically generated autopackages from Freshmeat feeds, but that would have been a dumb thing to do, because packaging is an inherent part of the software development process. You need to consider whether your dependencies are reasonable and fix them if not, make sure your software can take advantage of features present in newer systems while still working on older ones, and so on. Packages also need to be compiled in a special way to improve binary compatibility.
None of these things will be done if you just mechanically generate them from the Debian archives (or any other distribution's archives). And that's a good way to get packages that don't work reliably. Go look at some of the auto-generated CMGs: their feedback system indicates that they work for only two-thirds of users! And that's already biased, because people on distros like Fedora are marked as unsupported etc.
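As a rough illustration of the "special way" of compiling mentioned above: as far as I remember, apbuild ships compiler wrappers that you simply swap in at build time. Treat the wrapper names here as an assumption from memory, not a documented recipe:
# sketch: point the build at apbuild's wrappers so the result is linked
# with cross-distro binary compatibility in mind (wrapper names assumed)
CC=apgcc CXX=apg++ ./configure --prefix=/usr
make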
If you want to use autopackage as a developer you need to:
1. either put their C!!! (C-entric anyone? what if I don’t use C?) code in your program and thus link to libC (C-entric again! What if I don’t want to?)
or
2. use glib, see C-entric entries in #1
In other words it forces developers into a dead end.
It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don't do it at all.
It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don't do it at all.
This is news to me. Where do we "admit" that it's a hack? It reads /proc/self/exe, which is the interface Linux exports for determining your binary's path.
There’s also no requirement for you to use the binreloc kit, you can implement relocatability however you like.
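For anyone unfamiliar with the mechanism, it is easy to see from a shell; a relocatable program just does the equivalent readlink() on its own /proc/self/exe at startup:
# /proc/self/exe is a symlink to the binary of the process reading it;
# run from an interactive shell, this prints the path of the shell itself
readlink /proc/self/exe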
I also don't get your complaint about C – every program links against libc regardless of what language it's written in. That's the standard way to access the operating system's kernel and dynamic linker services. Are you even a developer at all?
Yes, I am, and unlike you I have the wider view. Your statement that "every program uses libc anyhow" is ignorant and shows how little you actually know. There are programs and compilers which choose to ignore libc (for a good reason) and provide their own base libraries which call the kernel directly. They mostly link statically.
You probably wonder what the use of such approach is so let me elaborate.
Have you been able to run your 5-year-old binaries on today's Linux? No, because they link to libc. Using a static/smartlink approach with a conservative kernel-call library makes your programs much more durable, without the need to recompile them when some nitwit decides to break the ABI again.
I admit to being wrong about that “hack” tho.
I’ve gotta admit, I can’t remember the last time I saw a fully static program. Making one of them is difficult as simple things like doing a DNS lookup can cause crashes if you really do statically link libc. Most programs which want total robustness dynamically link against libc (no matter what language they use) and that’s about it.
Static binaries were broken a few times in the last few years anyway; I remember icculus being quite annoyed by it… even avoiding libc is no defence.
http://www.freepascal.org
This is one such compiler, which under normal circumstances links smart (i.e. statically) and DOESN'T USE OR LINK LIBC IN ANY WAY.
Sorry for the bold, but you simply don't get it. Libc is NOT the thing everyone must use to get functionality. You must link it on most Unices if you wish to work with things like pthreads, GTK etc., but that's a design flaw, not something to boast about. There's an example binary generated by this compiler somewhere which is 5 years old and still runs OK.
Instead of libc they have an RTL (runtime library) which is specific to each arch/OS this compiler runs on. It's the base library required even for the compiler itself. It DOESN'T use libc in any way unless specified by a define. It goes directly to the kernel.
I don't wish to ignite any language/compiler war here. All I wanted to say is that you force developers into something some really can't do (especially if you want to be cross-platform).
Adding Linux-specific code to my sources just because I want/must use autopackage is kinda stupid.
P.S. Smartlinking means statically linking only those methods/objects which you actually use, reducing the file size of a static hello-world program to 20 KB (compared to 800 KB with gcc --static).
I’m not aware of any programs that use Free Pascal.
Also, it's hardly a design flaw for an operating system to require processes to load some userspace code – every OS I've ever seen does this. Not having access to the standard thread library, widget toolkit etc. is a pretty big deal.
The "smart" linking can also be done with the GNU toolchain; during the 1.2 cycle we actually experimented with enabling it by default in apbuild, but it suffered from quite an obvious problem: apps that use libglade can contain references to functions from outside the main body of C/C++ code. The garbage collector cannot see these references and deletes the "unused" functions, causing runtime failures.
So we decided not to enable this by default for 1.2, but there are plans to introduce a more robust form in later versions.
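For reference, the GNU-toolchain form of "smart" linking being described is section-based garbage collection. A minimal sketch with standard gcc/binutils flags (the apbuild integration itself is not shown):
# compile every function and data item into its own section...
gcc -Os -ffunction-sections -fdata-sections -c hello.c -o hello.o
# ...then let the linker discard sections nothing references
gcc -Wl,--gc-sections hello.o -o hello
# functions only reached at runtime via dlsym()/libglade autoconnect are
# invisible to the linker here, which is the failure mode described above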
“I’m not aware of any programs that use Free Pascal.”
See their homepage; the news section has a nice entry. In any case, this is just ignorance.
“Also, it’s hardly a design flaw for an operating system to require processes load some userspace code – every OS I’ve ever seen does this. Not having access to the standard thread library, widget toolkit etc is pretty huge.”
Sure, but look at how MS does it. As much as I hate the Win32 API, they made the wise choice of not making the base libs language-centric. I can link to their thread library without ever needing to link libc, unlike on Linux. As I said, it's a design flaw. Unix was made "by C for C" and C was made to build Unix. It's OK and nice, but it has a negative impact.
"It uses a kernel hack (they actually admit that) to find out paths. This is unacceptable. Either make a standard kernel call for that, or don't do it at all."
There is no standard kernel call. That’s the problem. If there is we would have used it.
Now, if you know a better solution we’d like to hear it.
I’ve just had an idea after googling up this document on the internals of an OSX app http://doc.trolltech.com/qq/qq09-mac-deployment.html
Why not have Linux adopt the .app format? A .app is simply a folder: it contains a Resources folder and a folder containing the binary for an OS.
Software could be made to create and run .app bundles that have Linux binaries in a Linux folder inside the .app, the same way Mac OS runs .apps using the MacOS folder inside the app.
You could have a folder structure like this
demo.app
> Resources (resources in program)
> MacOS (OSX binary in here)
> Linux
-> PPC (Processor architecture)
--> All Distros (Default exe to use in here)
--> Ubuntu (use this exe in Ubuntu instead)
--> … (other distros that have to use different code)
-> i386
--> All Distros (and so on…)
--> …
This way a single app could, if it wanted to, have one .app that runs on OS X (both PPC and i386) and Linux, with differently compiled binaries for certain distributions. It'd be a rather large app file, sure, but it'd be one app to run them all! And the author wouldn't have to include more than one binary if they didn't want to.
When the binary runs for the first time it could check against an internal list of dependencies and auto-download them from the repository.
You'd have one standard that runs on OS X and Linux (if the author wanted), and it'd be easy to install and uninstall.
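A toy launcher along these lines, just to make the proposed layout concrete – everything here (file names, the lsb-release lookup, the fallback rule) is hypothetical:
#!/bin/sh
# hypothetical demo.app/launcher: pick the right Linux binary for this machine
APPDIR=$(dirname "$(readlink -f "$0")")
ARCH=$(uname -m)                              # e.g. i686, ppc
DISTRO="All Distros"
[ -f /etc/lsb-release ] && . /etc/lsb-release && DISTRO=${DISTRIB_ID:-"All Distros"}
BIN="$APPDIR/Linux/$ARCH/$DISTRO/demo"
[ -x "$BIN" ] || BIN="$APPDIR/Linux/$ARCH/All Distros/demo"   # fall back to the default exe
exec "$BIN" "$@"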
I've replied to this many times: why, instead of having self-contained binaries, not just have self-contained PACKAGES, something like a tar of debs or rpms with the application and its dependencies? It gives the same illusion of one big binary with all libraries inside, and you don't lose the ability to share code and get automatic updates.
Answer: yes, you could, and implementing a variant of the NeXT/Apple scheme on Linux would have been a good idea 10 years ago. If I were to go back and design Linux from scratch, something very similar is what I'd do, except I'd dump the way the shell tries to locate as many appfolders as it can and replace it with an explicit install or integrate action.
Implicit integration has caused quite a few nasty security problems on MacOS and it’s not a big deal to wait for the user to run the program for the first time before it’s linked into LaunchServices.
Anyway. You can't implement this on Linux systems of today, because there is no such thing as "Linux"; instead there is just a random collection of packages thrown together with little or no organisation between them. Apple and Microsoft provide large, coherent collections of APIs that are versioned together, meaning you don't need horribly complex installers or depsolvers to do anything; you just need to check "is this OS X 10.3+?".
thanks -mike
Might play nice together. The distro provides the LSB-compliant libs, and autopackage provides the rest.
And voilà! Simple click-and-install desktop apps!
So where is the difference between autopackage and OpenPKG (which seems a "million" times more mature)?
OpenPKG seems to be oriented toward server software.
Clearly you guys need to start working together… maybe the Klik packages can be transformed into autopackages? Or, instead of all the 'handmade' Klik packages, you (and the Klik maintainers) could make autopackages and keep Klik for the rest…
I really appreciate all your work, especially the research on binary relocation; again, this is really valuable work. Please use this for more than just autopackage!
… is that once your distro upgrades the dependencies (e.g. if you install a new release of your distro over an old one), some autopackages will mysteriously stop working, or worse, expose faulty behaviour that can damage user data.
The solution for that is to always install *all* dependencies in some spare place and regularly scan the system for changes.
E.g.:
1. Your app depends on libfoo, which is already installed on the system.
2. You install your own version of libfoo on the disk in some spare place, but use the system-provided one to enable sharing. Then you record the signature of the system library file and its modification date.
3. Upon every application startup you scan the dependencies and LD_LIBRARY_PATH-override the changed ones with yours (a rough launcher sketch follows this list). Of course, checks more elaborate than signatures can be used, so that a security update doesn't needlessly invalidate a system library; a case-by-case analysis is necessary.
4. The Autopackage control panel should expose an option to forcefully override system-provided libs with AP-installed ones in case there are still problems.
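A rough sketch of what step 3's launcher could look like – everything here, file names and paths included, is hypothetical:
#!/bin/sh
# hypothetical launcher: shadow the system libfoo only if it changed since install
PRIVATE_LIBDIR="$HOME/.local/lib/myapp-private"
RECORDED_SUM=$(cat "$PRIVATE_LIBDIR/libfoo.so.1.sha1")          # saved at install time
CURRENT_SUM=$(sha1sum /usr/lib/libfoo.so.1 | cut -d' ' -f1)
if [ "$CURRENT_SUM" != "$RECORDED_SUM" ]; then
    # the system copy was upgraded or replaced; fall back to our private copy
    export LD_LIBRARY_PATH="$PRIVATE_LIBDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
fi
exec "$HOME/.local/bin/myapp-real" "$@"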
I see. But this is not specific to Autopackage as such – source installs would suffer as well…
Yes, but source installs are not meant for beginners, while AP is.
Learn proper UNIX, ok?
Stop screwing things up. Go look at what BSDs do.
Your problems are Linux problems, caused by people who don't know what to do with C libraries and don't respect the UNIX FHS.
Geez, if you just run 'man hier' on a Linux box, what you get is a load of gunk. Nothing to do with what's actually on the system.
Oh, yeah, stop ABI breakage, too.