“What are your experiences with the RPM package format? Do you install/uninstall RPM packages frequently? Do you upgrade every time a new release comes out? If so, does it go smoothly? Have you ever switched to an RPM-based distribution from Debian or Slackware? Have you tried other packaging formats? Have you tried source-based distributions?” Read the article at DistroWatch.
For the most part, RPMs are good enough for me.
Sometimes I run into dependency problems, but they are easily resolved. First I make a directory and put my RPM in it, then go to either rpmfind.net or freshrpms and get the RPMs required to resolve the dependencies, and put them all in that directory. Then I su in a terminal, cd into the directory, and type rpm -Uvh *.rpm, and everything works as it should. Then I save this directory on another disk partition in case the need ever arises to reinstall the app…
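The whole dance, roughly (the directory and package names are just examples):

mkdir ~/newapp && cd ~/newapp
# drop in the app's RPM plus the RPMs that satisfy its dependencies,
# fetched from rpmfind.net or freshrpms
su
cd /home/me/newapp
rpm -Uvh *.rpm    # install/upgrade the whole set in one transaction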
What I would like to see is RPMs becoming more distro-independent, so they would work in any RPM-based distro and not be specific to any particular one…
Two tools I use on occasion for Red Hat are "apt-get for Red Hat" from freshrpms and Ximian's Red Carpet. They are both good, but you've got to watch them closely for what they want to do when dependencies crop up, or they'll pull in things you do not want installed. For example: I recently installed AbiWord-1.02 (current), and then both apt-get and Red Carpet wanted to "upgrade" AbiWord to an older version (0.99)…
So I can sum it up like this: mostly I like RPM. Sure, it has a few minor bugs, but for the most part it runs well enough for me. I have also compiled from source code and could not see any performance improvement in compiled apps over RPM installs…
…are emerging (pun intended). I don't like .rpm. Does it even support package retrieval from the net, and if so, how easy is it to set up?
I have yet to see one that works easily.
I have had the dependency problems of RPM over and over; repeated apt-get corruption that meant nothing could be apt-get'ed or removed; and Gentoo turned many of my files in /etc into 0 KB files after emerging Java. FreeBSD fared better, but eventually lost track of installed software.
All those new Wal-Martians who buy Lindows systems will be highly upset when they try to install software.
IMHO, Windows Installer is better than 99% of the Linux package managers. It is simple for the user, works across different versions of Windows, and is universal. It leaves crap in the registry and some orphaned files, but most of the time that is no problem.
It is a shame that Windows matches or beats Linux in this regard. For all the "my OS is better" I hear from Linux users, no one has fixed this issue.
I know that there are many people who don't want Linux to be the #1 desktop OS, but there are those who do. Until the distros get together and fix this problem, there is no chance.
John P.
The first distro I ever tried was Debian… it's what I learned on and got used to. It was a little strange at first, but I liked it.
Then I moved to Mandrake for a while, because I could buy it at the local Wal-Mart :o)
I thought it had some good points, but the package system wasn't one of them.
Later I moved to OpenBSD, and I thought pkg_add was fairly similar… but it just seemed to work more smoothly for me – fewer dependency issues, and it always seemed to have the right versions of libraries right at hand.
I didn't even have problems with library versions when I compiled by hand… I can't say the same for Mandrake.
About that time I gave Red Hat a whirl… very similar to Mandrake, but the problems with dependencies were even worse. It was a nightmare!
I have been looking at Gentoo but haven't tried it yet, so I can't comment on Portage.
All in all, I'd have to say that, in my opinion, RPM is horrible.
Windows Installer, if I remember correctly, is the crap that just refuses to run if certain conditions are met (a faulty application install, where installation fails and/or Win32 just crashes as usual; a non-default shell; etc.)…
From my experience with the old SuSE and with Gentoo, I had fewer problems than with Windows ^^
… with that article.
Completely. After trying RPM, APT, Ports, etc., my personal conclusion (opinion) is:
– APT is the (my) best choice for upgrading a system. A source-based system like the BSD Ports is probably just as good and has a few advantages, but it takes much more time (so it is not as good for people who constantly change their software) and is probably a bit less stable.
– For optional applications I would prefer a much simpler approach, similar to what BeOS does: just extract the package to your "Applications" (or whatever) folder and run it from there. APT or Ports could be used to solve unresolved dependency problems.
– RPM is hell.
This is one of those things that exemplifies for me why I believe Linux is not ready to be an average human’s computer operating system.
Yes, I've had problems with shared libraries in other operating systems, but usually it is a matter of downloading one lib or DLL and placing it into the app's /lib folder (as I do in BeOS), or the app's folder (Windows), or the system folder (Windows again, which I'd prefer not to have to do).
Shared libraries, in my opinion, are more trouble than they are worth. Ooo.. there’s flame bait…
I think the BeOS trick of putting the libraries in <application folder>/lib/ was great! The system will FIRST look in the application's own lib folder (if it exists) for the needed libraries, then in the home library folder at /boot/home/config/lib/, and only if it can't find them there either will it try the system's lib folder.
This way you may have a *few* different versions of the same libraries on your system (having *many* is uncommon), but at least each copy sits in the right place, next to the application it was compiled against.
Under Unix that would be more difficult though, because you download a simple app and it has up to 20-30 dependencies, while under BeOS the dependencies were never more than 2-3 needed libraries, so it was easier to distribute them this way.
So, while the BeOS way was clever and wise, it was so for BeOS itself. Under Unix things are a bit different, and that trick would only be useful in specific situations – for example, when a newer version of libPNG breaks binary compatibility with older versions and you really need one specific version of the lib because the binary you just downloaded was compiled against it. libPNG is only a few hundred KB, so that would have been doable. But in most cases, where bigger libs (and too many of them) are needed, the <app folder>/lib/ trick wouldn't work well with the Unix model. BeOS was lean; Unix has way too much legacy and dependency baggage everywhere, so an <app folder>/lib/ thingie wouldn't work as well…
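(For those specific situations you actually can fake the <app folder>/lib/ trick on Linux with a wrapper script, since the dynamic linker consults LD_LIBRARY_PATH before the system lib directories. A sketch – the app name and layout are made up:

#!/bin/sh
# launch-myapp: prefer the libs shipped next to the binary
APPDIR=`dirname "$0"`
LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$APPDIR/myapp" "$@"
)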
you have too many fingers in the pie.
What's needed is a standards body to define a universal package installer that *all* distros adopt. Of course this will never happen, because everyone feels their solution is the best and the distros are uncoordinated.
RPM sucks.
Lu_zero – nice troll. If you're able to screw up a Windows Installer install so easily and with such variety, I'd hate to think what you must do to a *nix installer/system.
Gotta love dependency conflicts. Especially when RPM complains that you need a lib/whatever that you already have (e.g., try installing FSViewer from RPM on Red Hat 7.3). How long has basically every other major Linux/BSD package manager been able to fix dependency conflicts on its own?
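(When that happens it's worth checking what the RPM database actually thinks it has – a couple of illustrative queries, with the library name made up:

ls -l /usr/lib/libPropList*   # the file is sitting right there on disk...
rpm -qa | grep -i proplist    # ...but is any installed package registered for it?

Half the time the library is present and only the database entry is missing or named differently.)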
And then there's the wonderful emerging trend of distro-specific RPMs (e.g., Mandrake RPMs that won't work on Red Hat and vice versa).
Ideally, installation of programs in Linux should be as easy as it is in BeOS (Download zip file, extract it, double-click the executable).
My experience with RPM… some good, some bad.
I remember when I tried to install Everybuddy, it said something like: dependency problem with lib.o.gcc.hell.H20.R2D2.somethingothershit…
RPM is just like everything else in Linux – when it works, it's golden. But there's about a 50/50 chance of it actually working the way it's supposed to, and when it doesn't, you might as well forget it.
Windows Installer is a mad joke! If you have ever worked with Windows seriously, you know that installing under Windows is pure pain.
Every install process is a risk. Many software manufacturers use their own install process – InstallShield, for example. There is no 100% safe version control over DLLs: they are simply replaced and/or thrown together in system32, and corruptly installed applications are the consequence. The registry is patched by every installer, often without checking the current configuration. Uninstall is simply a joke. The system often asks the user: "It looks like DLL 'xyz' isn't in use anymore. Should I delete it?" What kind of management of installed files is that?
I agree with your point: there is no 100% good package manager out there. The only good package manager I know exists in MacOS X, because there is no installer or manager – even Installwise is only a glorified copy routine. All the software I know needs no installation process: just copy the app to the hard disk and it works. The old fairy tale of DLLs/libs shared between apps of different vendors doesn't work in reality. Even MS Office for X says: "If you want to install the full Office, just drag the Office folder from the CD to your hard disk." Uninstallation is the same simple task: just drop the app folder in the trash. The only things that are really easy to share between apps are the OS libs, and those are updated by Apple's OS updater; no app installer should touch those files. I think this is the only safe and simple way to install software. And don't argue about saving disk space through shared libs – nowadays the app binaries consume the smallest part of a system installation.
… only, it does not know about it yet.
I use Lycoris, which is RPM-based.
I have a NIC whose driver I need to compile from source.
I have a graphics card whose driver I need to compile from source.
I have to compile my fave editor, games, browser… bla, bla, bla…
In case you haven't figured it out yet: you need to compile virtually everything – so where is the point of RPM? I have a system on which I haven't yet installed a single RPM, because none of what I needed was available – matter of fact.
Since it only takes the same 2-3 commands to install from source, I don't actually see the point of RPM. Well, I would if everything were RPM-based, and I mean EVERYTHING. But since this is not the case, I am waving goodbye to RPM… see you around, and don't rest in peace…
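(For the record, the 2-3 commands in question – the tarball name is made up:

tar xzf someapp-1.0.tar.gz && cd someapp-1.0
./configure
make && su -c "make install"
)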
Go Ralf! Those are good points.
I don't agree 100% against Windows installers – it is the UNinstallers that suck.
The other good thing about Windows Installer is that your Start Menu gets updated. Hardly any Linux distro does that, so you end up searching for where your program went and how to run it.
Why not do it like MacOS? A place for everything and everything in its place.
What is the real point of shared libraries these days?
You never know if you have the right version.
Do ALL my programs need the same version, or will they break?
Uninstall is usually a gamble.
Hell, even static linking is doable these days. The old problems of too little RAM and HD space are easily overcome when you look at the specs of modern PCs. It would seem to be a good alternative to bad package managers.
John P.
> Windows Installer is a mad joke!
> If you have ever worked with Windows seriously
> you know that installing under Windows is pure pain
* start installation
* select directory
* sometimes select components
* click ok a few times
Is that how you define pure pain?
The thing I dislike about Windows Installer is that it takes quite some time. There are super-fast installers like the one from Nullsoft, but they probably have far fewer features.
From my experience, Windows Installer does its stuff very well.
I'm currently using Gentoo and I have no problems at all.
Gentoo is probably easier to recover, too: if the worst happens (an unmerge of Portage or something similar), you can just take the stage1 or stage2 tarball, get a Portage tree from there, and then emerge rsync and emerge -u portage ^^ – dead easy and quick.
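Roughly like this – treat it as a sketch, since the exact paths depend on which stage tarball and mirror you grabbed:

tar xjpf stage1-*.tar.bz2 -C /mnt/rescue   # borrow a known-good tree
cp -a /mnt/rescue/usr/portage /usr/
emerge rsync       # resync the Portage tree over the net
emerge -u portage  # then update Portage itself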
If a closed-source installer that relies on undocumented files/databases fails and burns, the best you can try is just reinstalling the OS.
Still, I think the choice is a matter of taste and common sense.
BTW Eugenia, any news about the development of the unified package manager for *nix? I don't even remember its home page.
I had been playing with RPM packages successfully since I got my initial SuSE 6.1, but after I needed to install some software/libraries manually because of dependency errors, things became less nice (newly imported RPMs didn't find the ones that weren't installed by YaST).
After moving from 6.1 to 6.4 without reformatting the system, things became even worse. Now that I have had to reinstall XFree86, GNOME, the kernel, etc. manually (for hardware issues, mainly), I must admit that I wouldn't even dare to try an RPM. I always pick sources and pray for a correct makefile.
RPM packages are usually only available for the latest distributions, and mine is nearly a museum item by now…
There is no good solution to that problem.
You can do it the MacOS X way – put the application and everything it needs in one big directory and forget about the redundancy. But then, what's the point of a shared library if you don't share it?
Or you have a central place where all the libraries go, be it /usr/lib or system32 – and then you need something to keep track of who needs which library: a package management system, a registry.
BTW, the BeOS "trick" of putting shared libs in <appdir>/lib works in Windows as well – just place the DLLs in the application directory. That's how a lot of applications do it: you can move Photoshop or Dreamweaver or the like to a different computer just by copying the directory. It's just a bundle.
"BTW, the BeOS "trick" of putting shared libs in <appdir>/lib works in Windows as well – just place the DLLs in the application directory. That's how a lot of applications do it: you can move Photoshop or Dreamweaver or the like to a different computer just by copying the directory. It's just a bundle."
Interesting, but a few questions…
1) Don't Windows DLLs (or maybe just ActiveX/COM DLLs) require registering with the OS before they can be used? For example, is it necessary to use regsvr32 to register every single DLL?
2) How do you keep track of which DLLs go with which apps? For example, if you install Dreamweaver and it puts 30 DLLs in system32, how do you know which ones it put in there?
3) How do you get shell integration if you don't actually install the apps? For example, after I install UltraEdit, I can right-click on any file and there's a menu option to open it with UltraEdit. Would stuff like this have to be done manually?
Darius, Windows used to have the same problems with DLLs that Unix has with shared libraries; it is often referred to as "DLL hell".
However, one of the great things Windows XP brought is that it largely solved DLL hell: with side-by-side assemblies, *many* versions of the same library can be installed at once, and the system pulls out and loads whichever version each application asks for.
> How do you keep track of what DLLs go with which apps? For example, if you install Dreamweaver and it puts 30 DLLs in system32, how do you know which ones it put in there?
Each and every uninstaller knows. That is why these apps come with a full uninstallation option.
> How do you get shell integration if you don’t actually install the apps?
This is simply a key in the Registry (context-menu entries live under HKEY_CLASSES_ROOT). You could even add one by hand if you know where to look.
1) Not always.
2) Dependency Walker, from http://www.dependencywalker.com/ (Microsoft), will show you what a binary links against – and without an installer there would be no DLLs placed in system32 in the first place.
3) Shell integration in Windows can be done by the application; proper programming and registry knowledge is all that is needed.
From what I know, neither Dreamweaver nor Photoshop puts any DLLs in system32. The DLL hell isn't nearly as bad as some people want to make you believe.
I'm just curious whether anyone has ever tried this little trick with Norton SystemWorks?
I'll have to look into the Dependency Walker thing, as I'm always looking for ways to tweak Win2k for maximum speed!
Norton makes more registry changes than Photoshop or Dreamweaver, since it's more thoroughly integrated into the system. Because of that you can't just copy the folder (it may contain all the DLLs you need, but there are a ton of registry changes to make). With Photoshop (or Illustrator, or any other Adobe imaging program for that matter) you just copy the folder, and when you run it for the first time it makes the needed changes to the registry (if it's picked to be the default viewer of some file types, that is).
I like the system QNX RTP uses.
I hate RPMs.
First of all, those who complain about solving dependencies with RPM compared to, say, Debian's apt-get are comparing apples to oranges. If you want to compare RPM to something in Debian, it should be dpkg, which is the program that handles the .debs. apt-get is package-format independent and works with RPM as well. It is tools like urpmi and apt-get that should solve the dependency problems, not rpm itself.
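You can see the layering on Mandrake, for instance (the package name is only an example):

rpm -Uvh mozilla-1.0-1.rpm   # bare format tool: installs only what you hand it
urpmi mozilla                # resolver on top: fetches mozilla plus whatever it needs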
Someone talked about each application keeping a local copy of its libs, or even using static linking. Consider the security implications: if you do this, then when a security hole is found in a library you need to update ALL the software that uses it; with shared libraries you only have to update that one library.
Hard-to-solve dependencies are often due to bad packages. I have seen RPMs that depend on the patch level of a library, and I have also often seen packages that require a specific minor version of a library when in reality they would work just fine as long as the major version was correct.
And I think the problem goes deeper than the package manager. The real problem is of course that Linux as a platform is changing rapidly, and those changes tend to be non-backwards-compatible and, probably worse, usually not binary compatible either.
But if you don't like RPM, then use an alternative. One such could be the Loki games installer, and I even think you can get InstallShield for Linux, though I'm not 100% sure. I don't think it will help, however: the real problem is that creating packages is hard, there are a gazillion independent libraries and utilities needed to run most software, and no single entity is in control of the development. But if we had a single entity in control, then Linux wouldn't be Linux.
So I unfortunately can't see any good solution to the problem at hand, as it goes much deeper than the package manager.
I see a discussion about Windows vs. Linux file management, but where does the Mac fit into this discussion? Just a light bulb above my head?
If you read the entire thread, you would see that the Mac was thrown in also.
"Someone talked about each application keeping a local copy of its libs, or even using static linking. Consider the security implications: if you do this, then when a security hole is found in a library you need to update ALL the software that uses it; with shared libraries you only have to update that one library."
Is that the only argument against static linking (besides the slight waste of resources)? I think the best idea would be to draw a line between CLI programs and libraries on the one hand (the system) and graphical stand-alone applications on the other (the applications). The system could still be managed by your package manager, like apt-get – I think that is the perfect solution for system upgrades and the like – while your graphical applications are simply installed by copying the application folder.
Let's take Mozilla as an example. There is a new version – why would you want to wait for your packager? Why not just ship a statically linked binary package, dynamically loading only the most fundamental system libraries (glibc, …) and the toolkit library (Gtk)? Advantages (see the sketch after this list):
– It would work on every system, because it doesn't matter which package manager is used or where all the files are located. Just put the binary package wherever you want and it should run.
– No dependency trouble: an outdated glibc or a missing version of a new toolkit is easy to fix (should be a matter of a simple upgrade).
– You could place several versions of one application on your system, delete obsolete ones, do whatever you want. Much more flexibility.
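(To sketch what such a mostly-static build could look like at link time – the library list is purely illustrative:

# link the fragile libs statically, keep only the fundamentals dynamic
gcc -o mozilla-bin *.o \
    -Wl,-Bstatic -lpng -ljpeg \
    -Wl,-Bdynamic -lgtk -lgdk -lglib -lc
)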
I gave up on RPM and Linux long ago. RPM is just another example of the overcomplex, fragmented, hacked world of Linux.
I switched to FreeBSD – all the source I download compiles cleanly, with no stuffing around with package managers and no dependency hell. I simply read the requirements for each piece of software I download, then download and compile any needed libraries.
MarkH, if you would like to give Linux another go, Gentoo is about as BSD as Linux can get, IMO.
portage = ports
🙂 Well Troels… when will people finally get over this stupid saying that apples and oranges can't be compared? Of course they can, and the outcome of the comparison will tell you the differences between the two, plain and simple.
When people come up with this saying, they are normally not open to having things compared in the first place, because they don't like seeing anything put up against their own views, for whatever reason. That is not the case with you here, but I fight this stupid saying wherever I see it, because it cannot add anything to a discussion…
I'm not getting into these package-type wars, because the only package system I have ever used is RPM in Mandrake. However, isn't the title of the article, "DistroWatch: Is RPM Doomed?", a bit of a mismatch? The article said nothing about the RPM thingy dying – in fact it suggests otherwise. Because Red Hat (along with Mandrake) leads the Linux market, it will be a long time before RPM dies. But the article certainly did say how badly RPM sucks.
Well, I think the purpose of this title was to suggest that RPM has no real future if it doesn’t get a major overhaul.
The problem is that far more people switch away from RPM, never to come back, than the other way round. Who really wants to give up their APT, Ports, or whatever system to go back to RPM? Not many. Actually, I know many more people returning to a Windows system. :/ This is frustrating.
There's no doubt about it – plain old RPM does suck. But that doesn't mean it's going away any time soon. While it may suck, it's the de facto standard: there are too many RPM-based systems and RPM files out there to ignore. And although it sucks by itself, extensions like URPMI go a long way towards solving the dependency problem.
The main thing RPM has going for it is the number of mature GUIs available for it. Ports and apt-get may be technically superior, but the ascetic nature of their (usually text-only) interfaces is what's keeping them out of the mainstream.
Is it RPM that is the problem, or the way it is used? An application's RPM package should list all the necessary library RPMs… Then running rpm -Uvh rpm_list should install without problems; if some libraries are already installed, they would get upgraded if necessary.
Now maybe there is a problem with the above, but the main problem I see is distro-specific RPMs… Packaging is not the most productive thing that can be done – it should be kept to a minimum – but somehow everyone is repackaging everything to death…
Well, from what I know of the inner workings of Stampede, they had/have what IMO could be the future of Linux software packaging, but it seems it may never be released.
Let me start out by saying that I am an OS guy: I am a Windows guy, I am a Linux guy, I am a FreeBSD guy, I am a Mac guy, and I have used BeOS, but I am not a BeOS guy.
There doesn't seem to be any one true way of packaging software. The Windows way works – installs and uninstalls are easy enough – although it tends to keep a lot of extra crap around (DLLs and registry entries), and it doesn't always work as advertised.
The Mac way seems to work pretty well: copy the folder to your Applications folder and go. Of course, this is not the most efficient use of system libraries, and there is the occasional program that has an installer, but it seems to work pretty well. The one time it might be a real pain is on a multiuser system, because different users may install the same app five different times in five different places. This is my favorite of all package management systems; it is wasteful and backwards, but it is easy, darn it!
RPMs work most of the time, as long as you don't try to install anything funky and you update at the pace of your distro. Anything beyond that is a pain of conflicting and oddly named dependencies that never quite work right. This is fine for a server, where you probably don't have much need to install the latest and greatest and mostly just need to keep abreast of security patches and such, but it is a horrible pain for the average desktop user, who frequently installs both the latest and greatest and some funky stuff. I guess this is why Red Hat has decided to focus on servers rather than desktops. Mandrake's solution seems to be to just package the latest and greatest.
Debs work pretty well too: apt-get install <package_name> works most of the time. Working in the "unstable" branch tends to give you problems, though – sometimes packages won't install for no good reason, and it can be a real pain. Also, Debian is no newbie's distribution; once you get used to it, it is pretty nice, but if you are not already familiar with Linux, it is going to be tough. Another problem with both RPMs and Debs is that if you install from source you have a whole new set of problems: the package managers can only store info for packages installed through them, so source throws a monkey wrench into the whole thing.
Which brings me to Ports, FreeBSD's package manager. This one tends to work pretty well: it handles all the same dependency issues as Debian's apt, but does so with source, so even self-compiled applications tend to work well with Ports (see the sketch below). However, compiling all of your software is time-consuming, especially on a lower-end machine. The system also tends to run into issues when you install software that doesn't come with source; thankfully there isn't much of that in FreeBSD, but if there were, Ports wouldn't work as well as it does. Another problem is that you have to wait for the package maintainers to commit their software to Ports before you can install it, unless you want to figure out all the dependencies yourself. Ports seems to update pretty quickly, though, so that usually is not much of a problem. Of all the package managers I have run into, this is my second favorite, and my absolute favorite for multiuser, server-oriented tasks.
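(The usual Ports invocation, for anyone who hasn't seen it – the port path is just an example:

cd /usr/ports/www/mozilla
make install clean   # fetches the source, builds it and its dependencies, installs, tidies up
)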
I have yet to try Gentoo or Sorcerer. I expect them both to be nice package managers, and I hope I can contribute more in the future. For now I'll just leave it at this: each package manager has its strengths and weaknesses. RPM is not perfect, but neither is anything else.
Olivier, you bring up some good points, although I cringe whenever I see the word "should" in discussions like this. Unless you're God, you really have no business telling the world what to do. I think that packaging every last component needed would be wasteful and could grow out of control real fast. A more reasonable approach would be to package all binaries statically linked only, and even that might be a bit much.
The way I see it, dependencies wouldn't be such a big problem if the package managers were more Internet-aware and Internet-based. If the publisher also provided access, over the Internet, to all the files the program depends on, then the package manager could retrieve the other packages as needed. Just a thought.
Well, Gentoo's Portage is fully Internet-aware, and it works gracefully.
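For example (the package name is illustrative):

emerge rsync     # pull down the current package tree
emerge mozilla   # fetch the sources for mozilla and everything it depends on, then build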
I know all about apt also.
The reason that DEB works better than RPM is not the package format, nor apt-get (which is available for RPM from Conectiva), but that there is one DEB distribution and many, many RPM distributions. It is not possible to issue a binary RPM that works with all distributions, because they do not all follow the LSB and (equally important) they use different names for their packages. Often, RPM reports failed dependencies when the packages are actually present but have unexpected names. It's easy for geeks to check this and force the installation – but no fun for ordinary users.
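The check-and-force routine goes something like this (the names are illustrative):

rpm -q --whatprovides webclient    # is the capability there under another name?
rpm -Uvh --nodeps galeon-1.2.rpm   # force it past the bogus dependency check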
Kon, please remember that it is not only complicated because there are too many fingers in the pie. In Linux's case it is actually a miracle that we still use RPMs fairly successfully, considering the speed at which Linux is developed. An OS like Windows has never had these kinds of problems, because it only issues a new release every few years. The base packages that make up Linux – where the dependency problems arise – are far more alive than that.
Still, I do think it would be a great idea for the major distros to come up with some kind of standard concerning the dependencies of the base packages, but also for things like where to mount devices (/mnt/cdrom, /media/cdrom, /cdrom) and how the init tree (/etc/init.d) is built up (this also seems to differ with every distro).
A set of standards for these kinds of things would be very beneficial for Linux as a whole.
I'd also like to comment to the people who have written that the Windows installer works fine for them. I feel it is not fair to compare the Windows installer with RPM, because RPM's task is far more complicated. The Windows installer just copies files to a directory and makes some entries in the registry – that's it. RPM has to distribute the files in a package all over the system, check for dependencies, and make entries in the RPM database.
I dare you to use the Windows installer to tell you which package a file belongs to, where you can reach the author, and what the package is for. In fact, I dare you to use the Windows installer to successfully uninstall something (no leftover dirs, start menu entries, desktop icons, or registry entries). It cannot be done.
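With RPM, by contrast, all of that is one query away (the package name is just an example):

rpm -qf /usr/bin/gimp   # which package owns this file?
rpm -qi gimp            # description, vendor, packager, and project URL
rpm -e gimp             # clean removal: files and database entries together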
Yep, Gentoo’s a great example. Now if someone could package a less geeky distro that uses portage…
Source-based distributions don't buy you anything over RPM distributions. Yes, I have had problems with RPMs, but when I do, I just grab a tarball and ./configure, make, make install. That's what you always do on a source distribution, so what's the difference? What does it buy you?