Come in and vote for your favorite front-end of package management solutions (JavaScript required). Update: The “Other” option was removed and all its votes deleted, as in the last hour it got hammered by script kiddies and went from 5% to 39% unnaturally fast. The poll is now closed.
pacman is the best. i686 pkgs people!
which is based on pax.
—
http://perso.hirlimann.net/~ludo/blog/
pacman’s almost there; it just needs packages split up better. But then again I think this is ironic coming from (one of) Arch’s KDE maintainer(s) (tpowa being the other (un-)official maintainer (sorry for more ()’s than necessary)).
I voted for Apt, though I think Portage definitely gives it a run for its money, even if Portage always broke* a lot more for me. Not a knock against Gentoo… just my own experiences.
*By broke I mean packages wouldn’t compile/install, but that might have been due to my shitty net connection or busted packages.
I voted for other. Of course all options can’t be there, but I do find it peculiar that QNX’s Package Manager wasn’t listed. What I especially like about it is that you can access a repository as if it were a website; enter the URL and you’re done, no settings whatsoever required. Each package is in itself a tiny repository.
But, everyone, let’s not forget that package management tools are solutions, not features.
APT. Use it with Debian & Ubuntu. :)
Wait, I’m confused here. Are we to make our choice based on the frontend or the actual package manager?
If it’s the frontend then it should be based on how easy it is to use for both newbs and advanced users.
If it’s the actual package manager then it should be purely chosen from how well it works and also its flexibility, i.e. can it install source and binary, can it handle different package formats etc.
If it is actually for the frontend then why aren’t Synaptic, Dselect, RPMDrake and lots of other GUI frontends in there?
The best is MS-MSI, no question
>If it is actually for the frontend then why aren’t Synaptic, Dselect, RPMDrake and lots of other GUI frontends in there?
Because I decided that this poll should be about the “native” front end of these package managers, not third-party efforts.
The only entry in the poll with a non-native option is pkgtool with swaret, but at least swaret shipped with Slackware up until 10.0.
I lean more towards Slackware, and I have to say that I love using slackpkg for updating my Slackware packages. However, aptitude and synaptic win hands down for me. They have great user interfaces (as opposed to the nightmare that is dselect) and are organized quite well.
Maybe somebody ought to note that most package managers are inherently flawed in that if you don’t have broadband (which is not available in all areas and rather expensive in quite a few), you’re basically screwed and your favorite package will have to wait for next year’s release. Do people actually download something like KDE 3.3 over dial-up and feel happy about it?
Okay, so Fink is based off of Debian/apt, but Fink and Fink Commander are (IMHO) the easiest to use that I’ve tried.
Also, how do I register 10 votes against SGI’s inst?
How about building packages against standard system libraries and having a drag-and-drop installer?
Works great on OS X.
Front End:
I like synaptic.
It’s easy to use and gives a list of what packages and their version numbers are already installed or just lists the packages if they are not already installed.
Back End:
Also if you like text mode you can always use apt-get.
Off Topic:
I really like the new debian installer.
Someone mentioned that if you don’t have broadband then package managers are flawed, and I do agree.
The Debian installation option I chose after the initial CD was a network (HTTP) install. If I didn’t have a fast Ethernet connection it probably would have taken quite a while.
Peace man,
Jim
That works for Apple ’cause they run the show and they can make the rules stick. Linux people like to confuse chaos with choice. Rather than building one very good tool to handle a necessary piece of busywork (installing software), they’ll run around and create 500 half-baked and half-finished versions of the same thing.
Freedom is wonderful, but sometimes doing the job right is wonderful, too.
Come in and vote for your favorite front-end of package management solutions
I thought RPM was the package format and its native frontend was yum, with apt4rpm as an option.
Apt is the best package manager of the ones I’ve tried: it solves all dependency problems and has a vast number of repositories to select from. I use it all the time for my 3 FC2 boxes. It makes it so easy to get all the stuff that FC leaves out because of licensing issues. MPlayer is a breeze with apt.
1) supports binary packages; not everybody wants to compile everything from source all the time.
2) supports dependencies; if one command can install a package, AND everything needed by that package, that can take a lot of work out of your hands. Maybe you don’t need that, but it’s very useful.
3) uses a file format that can easily be processed using other common tools; .tar.gz is nice here.
4) for the rest: keeps things simple. Command line interface will do nicely, thank you.
So far, I’d say this makes Pacman my favourite.
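For the curious, a rough sketch of what day-to-day pacman use looks like (the commands are from memory and the package name is just an example, so treat the details as approximate):
# sync the package database, then install a package plus its dependencies
pacman -Sy
pacman -S gimp
# add a local package you built or downloaded yourself
pacman -A gimp-2.0.5-1.pkg.tar.gz
# and since packages are plain .tar.gz, ordinary tools work on them too
tar tzf gimp-2.0.5-1.pkg.tar.gz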
Apt and Portage work, and work very well, you know…
When I need to install a piece of software across a network using SSH, drag-and-drop isn’t that useful to me…
Does it have a front-end, and what is its package format?
I know it was once based on BSD; is ports its format?
BTW if you removed RPM from the poll, then the poll would be accurate. By having it there you are further adding to the confusion of newbies.
yum may have been third party, but it has been posted here before that it is the more technologically appropriate solution vs. Apt for the RPM package format. It is more than just third party.
I’ve used all three. Swaret is the most primitive – its dependency checking mechanism is basically a glorified ldd, meaning that if program A needs to call program B on the command line or over a socket, or relies on configs/resources provided by B, then Swaret doesn’t see that A depends on B.
Apt has an excellent dependency checking mechanism. However, it’s a bit clunky in places. For instance, there can only be one installed package with a given name. The result is that you get files like libgnomeprint2.2-0_2.8.0.1-2_i386.deb, where “libgnomeprint2.2-0” is the name of the package and “2.8.0.1-2_i386” is the version; the gist presumably being that the library API version is baked into the package name so that incompatible versions can coexist under different names. Yes, I know, it is silly and confusing.
Portage is the newest and the best. You have USE flags, allowing you to turn optional features on or off. You have SLOTs, allowing multiple versions of a package to be installed concurrently. You have a clear separation between “world” (things the user explicitly wants to install) and the rest (things that are required to install “world” packages). You have multiple layers of masks, allowing you to adjust settings on a per-package basis. The ebuilds (package building instructions) have a sort of object-oriented hierarchy, meaning you don’t need to reinvent the wheel when making your own custom packages. Of course, compiling something like OpenOffice takes forever, but that’s orthogonal to the fundamental merits of Portage.
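To make that a little more concrete, here is a rough sketch of where those pieces live on a Gentoo box (the USE flags and package name below are only illustrative):
# global USE flags go in /etc/make.conf
USE="X gtk ssl -kde"
# preview a build, showing versions, SLOTs and per-package USE flags
emerge --pretend --verbose gimp
# the 'world' file records only the packages you asked for explicitly;
# their dependencies get pulled in but are not listed here
cat /var/lib/portage/world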
If you know how to use it, and how to properly set up your repositories, there is nothing better. In the end, though, it’s all the same. If you know how to use APT or Portage correctly you are just as well off.
All systems depend on proper package construction and solid repositories. Fubar’d packages and repositories are where 90% of the problems occur. The other 10% are user error.
./configure; make; make install
if you want a fancy GUI, add GNU screen.
./configure etc. is fine until you hit dependency HELL
Hmmm, is it just me, or are some of these options not really “front-ends”?
Some, like RPM, apt, portage, ports etc. seem more like the package management *system* of the distro/OS. And yes, Eugenia said something about ‘native’ front-ends (whatever those are *grin*), but as far as I’m aware, things like rpm and apt don’t really have a native GUI front-end as such. Perhaps either the poll’s title should be renamed (e.g. to Favorite Package Management Format/System), or some of the entries like RPM should be renamed to their relevant frontend – it’s probably just a bit confusing for some people who don’t even know what RPM’s ‘native’ front-end is.
Just my 2 cents.
Bye,
Victor
MS installers, FreeBSD ports, Lunar’s moonbase, Arch’s pacman, and apt-get are my favs. Overall, I’d have to say ports. It’s very cool browsing locally, and there are always BSD packages too. Very simple, very clean.
Does it have a front-end, and what is its package format?
Solaris has native packages, but I don’t know what the system is called. There is the aftermarket pkg-get which can pull from a remote repository. There are tons of programs on Blastwave now. With this “newer” system in place, I plan to try Solaris again.
pacman is not a frontend, it’s a native package type like .deb or .rpm, and it natively supports dependency resolving. There is no other way to install an arch package other than pacman. dpkg -i is the backend and apt-get the front end in debian.
rpm -ivh is the backend and apt/yum/urpmi the front end in rpm based distros.
500 half-baked and half-finished versions create competition, which in the end leads to a better solution than the one thought up by the most brilliant people.
I’m a bit lost.
Why do you consider rpm a frontend? If rpm is a frontend, then dpkg is a frontend too, isn’t it?
From what I understood, yum and urpmi are frontends for rpm, and apt is a frontend for both rpm and dpkg.
Then, synaptic is a frontend for apt.
So, opposing rpm, apt, yum and urpmi in the same poll is quite strange, isn’t it?
That works for Apple ’cause they run the show and they can make the rules stick. Linux people like to confuse chaos with choice.
—-
Linux is fundamentally a more modular system than OS X. This is the nature of the OS and of its development process; it enables you to choose or remove parts as required.
And remember that not all programs can be dragged and dropped in OS X. Basically everything that requires further dependencies needs a better package management system. “Drag and drop” installation is really only suitable for monolithic, self-contained packages.
Solaris uses pkgadd, but there are nowhere near the number of native packages for it that even QNX has. The better alternative is pkg-get from blastwave.org, which acts much like apt. I like QNX’s the best by far, though pkg-get is great as well. The worst I ever ran into is swmgr on IRIX; it makes RPM’s dependency hell look like a pleasure cruise by comparison.
…Portage is the best, but it’s a tad slow. The new 2.0.51 is a bit faster but it’s not a speed demon. The USE flags are an _excellent_ idea but I think they are becoming a mess (just install app-portage/ufed and you will understand). The support for binary packages is rather poor and I would like to see something like what the BSDs have (a binary installer plus ports). But I think its features make up for its shortcomings (which will probably be fixed sooner or later).
And remember that not all programs can be dragged and dropped in OS X. Basically everything that requires further dependencies needs a better package management system. “Drag and drop” installation is really only suitable for monolithic, self-contained packages.
——
Linux’s dynamically linked libraries are optimized for saving hard drive space, the tradeoff being dependency hell. Mac OS X, on the other hand, strives to keep all dependencies in one package (using more hard drive space), with the very real benefit of eliminating most dependency issues. This allows most Macintosh software to be installed by simply dragging it to the hard drive from the installation media, or even moved from one hard drive to another without breaking the package. Most Cocoa and Carbon software is programmed this way. On occasion some software developers from the DLL crowd decide to dynamically link components of their code to files all over the hard drive. That situation requires a special installation procedure similar to any Windows, BSD, or Linux package management system.
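(A concrete illustration of the drag-install idea, with a made-up application name; because the bundle carries its own libraries, copying it is the whole installation:)
# “installing” a typical OS X application bundle
cp -R /Volumes/SomeApp/SomeApp.app /Applications/
# any private frameworks travel with it inside the bundle
ls /Applications/SomeApp.app/Contents/Frameworks/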
Hard drives being cheap, and time being scarce: I’d rather have software built such that all dependencies are included in the package itself, so the product will always work even if a different version of the same dependency exists somewhere else on the hard drive.
-Karrick
Linux’s dynamically linked libraries are optimized for saving hard drive space
Good luck replacing all the rolled-in dependencies and keeping track of their versions when a severe vulnerability appears and you have to check and patch each package one by one. With dynamically linked libraries, you upgrade that library once and you’re done; every program using that library is no longer vulnerable.
rpmdrake isn’t third party. urpmi and rpmdrake have equal status as alternative official front ends to the urpm package management backend. urpmi is a CLI frontend, rpmdrake is a GUI one.
I voted ‘other’ on the basis that, frankly, there’s not much difference between any of the ones I’ve used (apt, urpmi and YaST). I could happily use any of them, on any distro. I expect the others are much the same.
“Portage is the newest and the best. You have USE flags, allowing you to turn optional features on or off. You have SLOTs, allowing multiple versions of a package to be installed concurrently. You have a clear separation between “world” (things the user explicitly wants to install) and the rest (things that are required to install “world” packages). You have multiple layers of masks, allowing you to adjust settings on a per-package basis. The ebuilds (package building instructions) have a sort of object-oriented hierarchy, meaning you don’t need to reinvent the wheel when making your own custom packages. Of course, compiling something like OpenOffice takes forever, but that’s orthogonal to the fundamental merits of Portage.”
An interesting explanation, but most of it seems intrinsic to source-based packaging, which is great if that’s what you want to do. Of the stuff that’s more generally applicable: the alternatives system allows you to install multiple versions of a package on Mandrake and Debian; I don’t know which other distributions use it. The “world” / rest distinction sounds rather arbitrary and unsustainable to me; it doesn’t seem like a line that I’d be able to draw naturally.
Amen to that last paragraph.
Sadly that is not ideal for all programs on Linux.
For example, I no longer use Tcl/Tk for programming, but my distro always includes it. Something must still be using it.
Exactly. Also, when an improvement is made in a central library, in a dynamically linked system you simply upgrade that library and all the apps that use it get the improvement. A good example is media encoding; on my Linux system, when the Vorbis guys or the Xvid guys make their codecs better I install the updated libvorbis or libxvid or whatever package and anything on my system that happens to encode xvid or vorbis gets better.
“Linux’s dynamically linked libraries are optimized for saving hard drive space, the tradeoff being dependency hell.”
Not only that: it saves hard disk space, but it also saves memory and keeps versioning possible. It also lets people have parallel installations of incompatible versions.
You have to balance between dynamic and static libraries on a case-by-case basis. Linux can’t assume a GUI is always present the way Mac OS X can.
Suppose you have a package that depends on some GUI components: you will have to declare a dependency and let a good package manager handle it appropriately. Basically the only thing that is guaranteed to exist on a Linux system is the Linux kernel itself. Everything else is optional and has to be marked as a dependency, which isn’t the case with Mac OS X or Windows.
Clearly Portage. You won’t begin to appreciate it until you understand how challenging and complex it is to build hundreds of packages automatically from source. Any user who has unsuccessfully tried to compile a package from source can relate to this. I still have a lot of issues with Portage.
But undoubtedly, it is ahead of the game and its time. It won’t become popular until everybody has a 5GHz processor and using other people’s precompiled binaries is considered bizarre.
And remember that not all programs can be dragged and dropped in OS X. Basically everything that requires further dependencies needs a better package management system. “Drag and drop” installation is really only suitable for monolithic, self-contained packages.
——–
Exactly. Also, when an improvement is made in a central library, in a dynamically linked system you simply upgrade that library and all the apps that use it get the improvement.
——–
Both of these are excellent points, with which I whole-heartedly agree. One must admit that Microsoft is attempting to address these very concerns I posed earlier, in addition to the above quoted concerns, while implementing versioning capabilities with .NET. Their solution is very interesting indeed. In the end, I suppose Apple chose its approach due to the market they seek and the platform they offer: easier computing for everyone. If there are critical flaws in a rolled-in dependency, then you must get newly compiled versions from the distributor. I quote another excellent comment:
you have to balance between dynamic and static libraries on a case by case basis
In Mac OS X there is the capability to bind the dependencies within the package itself or to rely on standard locations for dynamically linked libraries. Whichever method the software developer sees fit to use when designing their solution, it is available in this operating system.
I am currently a big fan of my Gentoo system, and have come to respect the Portage system. They have developed SLOTs to address versioning problems.
I am also very interested in the recent work by the OpenDarwin team on the DarwinPorts project, allowing multiple versions of packages to be simultaneously installed, and made active or inactive with simple commands. Well done!
-Karrick
But undoubtedly, it is ahead of the game and its time. It won’t become popular until everybody has a 5GHz processor and using other people’s precompiled binaries is considered bizarre.
You mean the BSD ports was ahead of its time, don’t you?
Really, it comes down to personal preferences. I’ve gotten sick of waiting for my software to compile, and I now prefer a binary system. I’m done tweaking for a while. Nowadays I want things to work pretty much straight away. But again, that’s my personal preference.
“It won’t become popular until everybody has a 5GHz processor and using other people’s precompiled binaries is considered bizarre.”
Ahaahahahahaha. Like that’s ever going to happen.
Click and Run is my favourite for its ease of use and power. I would prefer all software to be distributed through this medium.
The only real downsides are that it won’t be very up to date until Linspire 5 is released, and that only Linspire uses this system.
IMO pacman has to be the best I have used.
Ports would be 2nd best.
I have a dislike for apt and anything RPM.
Look at
http://www.autopackage.org
It’s a really nice idea.
Yes, I agree, it is very promising, but I did not mention it because it is not yet ready for prime time. In addition, it is aimed at a different set of users.
You mean the BSD ports was ahead of its time, don’t you?
No, I was referring to Portage.
Really, it comes down to personal preferences. I’ve gotten sick of waiting for my software to compile, and I now prefer a binary system. I’m done tweaking for a while. Nowadays I want things to work pretty much straight away. But again, that’s my personal preference.
I respect your preference, but I don’t get how compiling packages from source doesn’t work straight away. All things being equal, it takes a little longer, but it works. I can understand the feeling of wanting software now or never, but as I grow older, new, now and shiny don’t trigger that side of me the way they once did. I prefer stability and control. Unfortunately, binary systems have been fragile and unstable for me. So I lean towards source-based systems, which I have had better luck with. By nature, I am also not greatly influenced by instant gratification. I don’t mind waiting 10 minutes to get gimp installed.
Ahaahahahahaha. Like that’s ever going to happen.
Would you care to share your amusement? Are you saying computing and processing power isn’t going to get faster?
Exactly. Also, when an improvement is made in a central library, in a dynamically linked system you simply upgrade that library and all the apps that use it get the improvement.
That’s not really the full story.
One downside is that you are requiring the API/ABI to remain consistent between library versions. If the API gets changed then all the apps are going to need to keep using the old library (Complete with any security flaws) until the app maintainers get around to re-coding against the new API/ABI.
Imagine, for instance, that someone decided to change the values of the glibc errorcodes. There’s no real way to predict the effect on compiled code that’s checking on the previously defined values. In this instance the API is the same, the ABI is the same, all that’s changed is a few integer constants, but you still need to recompile the app to be sure it will work as expected (Simple and unlikely example, but I just wanted to make a point).
That’s not a problem with a statically compiled app unless it’s the kernel API/ABI that’s changed. Fortunately with packaging tools dealing with the dependencies for you such things are now pretty invisible to the end user.
I still prefer the dynamic linking solution, but it shouldn’t be viewed as a panacea. No matter how sophisticated package management becomes you’ll still be left with some old library versions hanging around for compatibility purposes.
>Ahaahahahahaha. Like that’s ever going to happen.
Would you care to share your amusement? Are you saying computing and processing power isn’t going to get faster?
I think he was referring to your second point actually – which is interesting, because if you use Java or .NET you are not running pre-compiled binaries. Transmeta CPUs are an even better example, as they are completely incapable of running any x86 code without their translation software.
But in answer to your question, the failure of the IBM 970 to reach 3GHz and the subsequent cancellation of the 4GHz Pentium 4 show the way of the future. Computing power will increase, but through parallelisation; unless a radical new technology comes along, the days of rapidly soaring clock speeds are over.
Although I voted for APT, I think the optimal solution for application delivery is: http://zero-install.sourceforge.net/
PLEASE READ ABOUT ZERO INSTALL BEFORE FLAMING IT.
Note that I said application delivery, the other existing solutions are the best for maintaining the OS.
Actually, I believe that within less than ten years, most personal computing will not be done on personal computers. And the devices people will use instead certainly won’t be compiling all their software from source, I don’t think. But honestly, I see no advantage to using a source-based conventional Linux distro for most users. Sure, you may have had more luck with one, Mystilleef, but there’s no reason to think that’s because your machine happens to be the one doing the compiling. If the same distro were entirely binary and some other machine happened to have done the compiling, assuming the same source was used, the resulting distribution would be precisely as reliable (although maybe a couple of percent slower or something). I’m sure you have a very nicely set up machine now, but the fact that your CPU did the compiling has very little to do with that. You can make a binary distribution just as *stable* and well packaged as a source-based one, although of course it can’t, in theory, be as fast.
-decentralized packages
-dependency checking
-GTK+/QT/text interface
-can be installed on any Linux distro
It might still be in development but still looks the coolest
No, I was referring to Portage.
With all due respect, I consider the BSD ports to have been ahead of its time. Portage essentially borrowed the idea, even if it improves on it.
I respect your preference, but I don’t get how compiling packages from source doesn’t work straight away. All things being equal, it takes a little longer, but it works. I can understand the feeling of wanting software now or never, but as I grow older, new, now and shiny don’t trigger that side of me the way they once did. I prefer stability and control. Unfortunately, binary systems have been fragile and unstable for me. So I lean towards source-based systems, which I have had better luck with. By nature, I am also not greatly influenced by instant gratification. I don’t mind waiting 10 minutes to get gimp installed.
Compiling from source works, of course. However, it can take *much* longer than just getting the binaries, especially on older systems. For example, OpenOffice takes up to 2GB of space to compile, and is something that I usually have to leave overnight. But that is only half the problem. Add regular updates to the equation, and you will be doing a fair bit of compiling.
However, servers are typically kept more stable and one doesn’t install/uninstall software on a regular basis.
In the end, I’ve reached the conclusion that for me, having a binary-centric system for my desktops and a source-centric system for my servers works best. Source-centric systems tend to be more customisable while binary-centric systems tend to be more convenient. That’s what it boils down to. Now it depends which one you prefer, and in what situation: customisable or convenient. Both approaches have many things going for them.
I am not surprised that apt and portage have the most votes at the time of this writing.
I have been told yum has come a long way since the last time I tried it, so when FC3 hits the streets I will force myself to use it and do a fair comparison between the two.
All in all, as long as the package system works it matters not which one anyone uses. The repositories are the key: having current packages and security fixes in a timely manner and having a wide selection of applications to pick from. I think that right now Debian, FBSD, Gentoo, and Red Hat/Fedora are leading in this regard.
Quote: “But in answer to your question, the failure of the IBM 970 to reach 3GHz and the subsequent cancellation of the 4GHz Pentium 4 show the way of the future. Computing power will increase, but through parallelisation; unless a radical new technology comes along, the days of rapidly soaring clock speeds are over.”
Does this mean that Intel can be done for mis-advertising the ability of their P4 range of CPUs? AMD64 is already starting to come close to the 3GHz limit (clock speed, not rating); it won’t be too long until they go over it. In theory smaller CPU dies should allow faster clock speeds; Intel is due to move to a 90 nanometre process very soon I believe, and 65 nanometre is on the chart as well (and not in the too distant future).
On a separate note, the person who indicated that MSI was the best packaging system needs to rethink. Try installing a Windows-based application, then uninstall it. Then trawl through the registry and system files and see how much junk is left behind… that’s not a good package management system. The idea is to help install things, keep track of where their components are, and then remove ALL of the components if asked.
Dave
Being a NetBSD user, I am a little biased, but for my money… I will have to say pkgsrc.
“all in all as long as the package system works it matters not which one anyone uses. the repositories are the key, having current packages and security fixes in a timely manner and having a wide selection of applications to pick from.”
I agree absolutely, and many people confuse the two.
“i think that right now debian,”
granted.
“fbsd,”
yup.
“gentoo,”
sure.
“and Red Hat/Fedora are leading in this regard.”
WHAT?!?!?!?! Fedora? Err….
no.
Fedora has one of the smallest vendor-supplied repositories of any mainstream distro. As far as I know virtually everyone who uses Fedora has to pull stuff in from third party repositories for even basic functionality. Nope, Mandrake or SuSE kick the pants off Fedora if you define the terms in this way.
You are all crazy! Nothing beats FreeBSD’s ports.
Who said anything about vendor-supplied? Heh, Fedora is a project that is half vendor and half community. There are excellent community repos at freshrpms and livna. (I recommend the livna repo as it allows for easy installing of nvidia’s XFree86 driver and will soon allow for easy install of ati’s as well.)
I have not used SuSE ever, so I cannot comment for or against it, and Mandrake always brings the thought “rehashed Red Hat” to my mind (yeah, I know it is its own distro now, it’s just a stigma still attached to it in my head).
Why is RPM included? Isn’t DEB supposed to be there also?
Let me add that I was talking about security fixes and current software in addition to application selection.
Even without 3rd party repos, Fedora has the security fixes and current software bit covered.
Portage is nice, but even when everybody has 5GHz processors, the usual code-bloat inflation will negate any compilation time gains.
I’m fairly confident that we will see 5GHz multicore 64-bit, or even 128-bit, workstation processors before the end of the decade regardless of the wall IBM and Intel seem to have hit today. I think they just need to reengineer their manufacturing techniques.
@Lamebug
That’s a good point. It’s sad but it’s true. But I am optimistic that future software development tools will at some point get back to focusing on creating efficient code, once these lazy programming paradigms are dead and recognized for the flaws they promote. Even today, especially in Asian countries, the trend seems to be shifting toward placing operating environments in ridiculously small mobile and embedded devices.
I prefer sorcery to Portage because I find the USE flags cumbersome. From a quick copy of the table at the Gentoo website and a wc -l, there are 302 global USE variables. And in my Gentoo days (it may have changed by now) I don’t think there was a convenient way to check for USE variables that apply to a single package, such as to disable building of the mail client in Mozilla.
Also, sorcery usually lets me see what benefit I get for including an optional dependency; for instance, if I install package xyz it may ask something like:
Install optional dependency gtk+? (for GUI support) [y]
which would default to yes if I already had gtk+ installed.
If you append the v flag to the emerge command it will show you which USE flags the package depends on. An illustration:
[11:48 PM root(goldenmyst)]# emerge -pv python
These are the packages that I would merge, in order:
Calculating dependencies …done!
[ebuild R ] dev-lang/python-2.3.4 +X -berkdb -bootstrap -build -debug -doc -gdbm -ipv6 +ncurses +readline +ssl -tcltk -ucs2 0 kB
It’s been like that since the dawn of time.
Synaptic is getting better and better, and end users are very close to having a really nice, simple tool for managing large software repositories.
is none at all, Zero-Install!
I stand corrected.
And let me state for the record that I do think Portage is an awesome package manager, probably my third favorite, with ports coming in second (if I can include portupgrade and pkg_cutleaves as part of “ports”; otherwise Portage 2nd and ports 3rd).
yum is versatile and easy to use… and yumgui makes yum intuitive too
http://www.cobind.com/yumgui.html
If you like yum, you should try yumgui; it is a very nice frontend.
… package manager is NetBSD’s pkgsrc, which is available for non-BSD systems too
I would like to see an Autopackage-based distribution. One such distro could make a big difference for the whole Linux world. Here’s how: take any distro, for example an RPM-based one, make a really basic system (a _minimal_ GNOME or KDE GUI) with its native package management solution (here, RPM). Package the rest of the software as autopackages with the help of developers. Voila: no more duplication of effort! Not only does _your_ distribution have a full collection of software, but a significant part of it is now available to people using other distributions! This is the future…
dpkg is the backend.. some frontend programs use apt-get as a ‘middleend’.
Yep, mine too. It’s very clean compared to other pkg managers and highly portable, so I can use it on other platforms, too.
In Windows, I can install a new program, which most likely will not change the delicate set of versions of different parts of my system. Or, I can run Windows Update to update my system. This may destabilize it, but probably not.
On Linux, there is *only* system update, unless I want to compile the source myself outside the package system. Shouldn’t there be a difference between system update and program installation?
It must be possible to install an application outside of the package system provided by the Linux distributor’s main channel, without risk. Let’s say I want to test application YYY which requires version 123 of GTK and version 321 of KTG. I don’t want to update my system with these versions just to test application YYY. My application could be zero-installed somewhere, and the system packages GTK 123 and KTG 321 also. The application is provided by whoever is hosting it, and the system packages by the Linux distributor. This way, I (not root) can install an application without destabilizing my system.
I don’t think having one source (the Linux distributor) for all programs and system packages is the way to go.
The best package manager I’ve ever seen is sorcery. It’s really simpler than Portage, while just as powerful.
The package systems are not the problem; most of them work.
But they still don’t allow a (non-root) user to install a simple application without having to update the system.
Synaptic, Red Hat update agent, etc. are front-ends. apt is not a backend either but somewhere in the middle. Portage, dpkg and rpm are back ends and definitely not front-ends. This poll is crap.
“In Windows, I can install a new program, which most likely will not change the delicate set of versions of different parts of my system.”
Same in Linux – unless it requires upgrades to dependencies. Even then it’s hardly a delicate set.
“Shouldn’t there be a difference between system update and program installation?”
Why? I feel this is a huge strength of Linux and Portage in particular, that little distinction is drawn. Look at the disaster with Windows Update, Office Update, Symantec Update and christ knows how many other auto-update programs – I’d much rather have everything managed by one good program than a bunch of mediocre ones.
“It must be possible to install an application outside of the package system provided by the Linux distributor’s main channel, without risk.”
On what basis do you think that? It’s quite possible to do such a thing in Linux; but it’s not totally safe. It’s the only way of installing software in Windows since it has no package system; this leads to installing things like HotBar, Comet Cursor and Counter-Strike :)
“But they still don’t allow a (non-root) user to install a simple application without having to update the system.”
No, they don’t! I think you fail to understand some key security concepts here; in Linux you are a restricted user (except in Linspire, but we won’t go there) and you cannot install programs and diddle about with the entire filesystem to your heart’s content.
You may have been able to do this in Windows; its security record speaks for itself.
And it’s not totally dissimilar to Windows – there are many applications in Windows that require Administrator privileges to run/install. You just don’t notice because damn near everyone is an admin since you can’t get anything done if you’re not.
Basically I think you’re coming at this from the wrong direction. su’ing to root isn’t a major issue when I need to install something; no doubt one could even have a GUI to prompt you for it.
Given that there’s no distinction between “system” packages and “application” packages, it’s pretty much impossible to allow unprivileged users to install packages – because then they’d be able to install or uninstall anything.
I think you’ve fundamentally misunderstood; “Linux” (referring to a Linux-based OS rather than just the kernel) is not a monolithic “System” like Windows or OSX. It’s built from a number of modules which may/may not be required in various situations. This requires a different approach to Windows; I think package managers *are* the right way to go.
I’ve had instances where a dependency-checking package manager doesn’t see an installed package, so I end up with two or more versions of it. I’d rather it just get the package I want; I’m fine with “can’t find shared library <xwhatever.so>”, and I’m willing to go get it myself. This way I know what’s on my system and why, which makes cleanup really easy.
After the election results, for some reason, I’m not in the mood to vote anymore
>> Synaptic, redhat update agent, etc are front-ends. apt is not backend either but somewhere in the middle. portage, dpkg or rpm are back ends and definitely not front-ends
rpm and dpkg could be considered front-ends because they can be used without any other tools, e.g. I often install RPMs using just ‘rpm -ivh …’. Perhaps the poll should be limited to tools which are purely front-ends (i.e. can’t operate without a backend).
>> This poll is crap.
That is the nature of polls, I would say.
>> rpm and dpkg could be considered front-ends because they can be used without any other tools
What I mean is, without any other tools in front of them.
I voted for APT but in fact I use APT for RPM with Synaptic as a front end.
Ahaahahahahaha. Like that’s ever going to happen.
Would you care to share your amusement? Are you saying computing and processing power isn’t going to get faster?
I think what he is implying is that no matter how fast processors get, how much RAM you have, how big a network connection you own, you will always find a way of using everything.
As processors and RAM increase, so do application sizes – compare the requirements of fvwm 1.x to kde 3.
For this reason, we are never going to have fast source-based package management.
The forte of source-based distros is not (contrary to popular opinion) having code optimized for your processor, but customising programs at compile time. For example, if I want nmap to work over IPv4 and IPv6, in Portage it’s as simple as setting 2 USE flags. OTOH if I want it to have only IPv6: “-ipv4 +ipv6”! Why should I have IPv4 compiled in when I only want IPv6? This example is a little futuristic but you should get the point.
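For what it’s worth, per-package settings like that go in /etc/portage/package.use on recent Portage versions; the nmap flags below are just the hypothetical ones from the example above, not necessarily real flags:
# /etc/portage/package.use
net-analyzer/nmap -ipv4 ipv6
# then preview a rebuild with the new flags
emerge --pretend --verbose nmap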
subject sez all:
Other: windows installer
I voted for apt because I really like dpkg, and I wanted to give credit down the chain. I like dpkg because it is fast and efficient, much more so than rpm. As a simple example, take the case of searching for an installed package. Running the command:
dpkg -l | less (or grep)
Doing this operation is instant, probably because it accesses an index of packages. A similar operation in rpm:
rpm -qa | less (or grep)
takes forever and really gets your hard disk churning. It goes to show that dpkg is well designed, hence the success of apt on top of it.
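If you want to check that on your own machine, a quick and unscientific way is to time both queries (caching will skew repeated runs, so take the numbers with a grain of salt):
time dpkg -l > /dev/null
time rpm -qa > /dev/null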
Put the files in a dir, done.
All those CLI utilities have frontends, so they’re all roughly as user friendly. I bet Fink has a frontend too. IRIX has one for inst / .tardist (“Software Manager”). QNX has one. In the end it doesn’t matter much, but only allowing MSIE as a frontend pretty much sucks.
Hey, what’s with all the source based distros?
The sorcery splits aren’t in there…
I love source based – built by me for me.
Archangel, in reference to this part of your post:
“It must be possible to install an application outside of the package system provided by the linux distributors main channel, without risk.”
On what basis do you think that? It’s quite possible to do such a thing in Linux; but it’s not totally safe. It’s the only way of installing software in Windows since it has no package system; this leads to installing things like HotBar, Comet Cursor and Counter-Strike :)
“But they still don’t allow a (non-root) user to install a simple application without having to update the system.”
No, they don’t! I think you fail to understand some key security concepts here; in Linux you are a restricted user (except in Linspire, but we won’t go there) and you cannot install programs and diddle about with the entire filesystem to your heart’s content.
You may have been able to do this in Windows; its security record speaks for itself.
And it’s not totally dissimilar to Windows – there are many applications in Windows that require Administrator privileges to run/install. You just don’t notice because damn near everyone is an admin since you can’t get anything done if you’re not.
I think you misunderstand the original poster. I think he’s suggesting it would be nice to be able to install user-specific applications into the user’s home directory, self-contained (containing all dependencies foreign to the standard installation of the distro version he’s installed), mainly for the purpose of testing said application. For example, testing the newest nightly build of Firefox without deploying it system-wide. I think he’s right on the money with this idea.
Why wasn’t YaST one of the choices? SuSE is popular enough to deserve its rpm tool on this list…
It’s rather amusing to see Swaret get more votes than RPM, though I believe that is rather true. I’m waiting for the day when all the small packages are source-distributed and automatically built from source when downloaded. This would leave the bigger packages like X and the kernel (for some users) to be distributed as binaries. I know there are lots of people that do this manually right now; what I’m saying is that it should happen automatically behind the scenes for RPM-based distros.
As far as I know, only apt (which is a frontend/layer/abstraction over dpkg, like urpmi is over rpm) manages reverse dependencies.
What is this?
In installing package A, apt installs packages Da, Db, Dc.
Later, when I decide I no longer need package A, the apt system realises that Da, Db and Dc were only installed to satisfy the now-removed A, and suggests they be removed – unless another package depends on, say, Dc, in which case Dc won’t be removed.
This is good. And urpmi doesn’t do that; neither does Portage, nor does FreeBSD ports.
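For what it’s worth, the Debian tools I know of that implement this idea are aptitude and deborphan; a rough sketch (package names are just placeholders):
# aptitude records which packages were pulled in automatically
aptitude install some-app    # dependencies get marked "automatically installed"
aptitude remove some-app     # unused auto-installed dependencies are removed too
# deborphan lists libraries that nothing depends on any more
deborphan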
Although I am a Gentoo user, I used Debian for years and continue using it on servers. apt is way cool and the best out there.
I also love Portage and the power it gives me through USE variables; it manages my home system well.
I’m surprised that out of 98 posts nobody has mentioned AIX’s package management system. Having developed several packages with Solaris’s package system, RPM, and installp, I must say that for binary packages, AIX’s installp system is unsurpassed in terms of flexibility and features.
Installp’s only downfall is its complexity. It requires a good deal of study to use, but if you invest the time, you can do anything with it, including Applying packages w/out Committing them (making it easy to roll back changes). It resolves (and optionally installs) dependencies, lets you write pre- and post-install (and uninstall) scripts in any language you want, and lets you attach identifiers to sets of packages, which makes it easy to audit multiple systems for differences, and apply sets of packages as “fixes” or “maintenance levels.”
It also lets you create update packages, which may contain only the files needed to bring a base-level package up to date, without having to uninstall and reinstall, deal with configuration files, or overwrite. RPM’s upgrade features are extremely weak, and somewhat unreliable, compared to installp.
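For readers who have never touched AIX, here is a rough sketch of the apply/commit workflow described above; the fileset name is just an example and the flags are from memory, so double-check against the AIX documentation:
# apply a fileset from installation media without committing it
installp -a -d /dev/cd0 bos.net.tcp.client
# roll the change back if it misbehaves...
installp -r bos.net.tcp.client
# ...or make it permanent once you are happy
installp -c bos.net.tcp.client
# list what is installed and at what level
lslpp -l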
Well, this poll is a mess…
Nice mix of frontends and actual package tools.
apt? What happened to dpkg? That’s more “native” than apt. Lest we forget dselect.
yum, urpmi, openpkg and even apt (the rpm version and the development effort from Progeny) all require rpm… so how can rpm be a frontend?
Portage?? Shouldn’t “emerge” be the proper frontend for the Portage system?
man…….