A next-generation package manager called Nix provides a simple distribution-independent method for deploying a binary or source package on different flavours of Linux, including Ubuntu, Debian, SUSE, Fedora, and Red Hat. Even better, Nix does not interfere with existing package managers. Unlike existing package managers, Nix allows different versions of software to live side by side, and permits sane rollbacks of software upgrades.
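For readers who have not tried it, a rough sketch of the workflow described above (assuming the package you want, firefox in this example, actually exists in the channel you are subscribed to):

nix-channel --update     # refresh the list of packages the channel offers
nix-env -i firefox       # install into your user profile, without touching the native rpm/dpkg databases
nix-env -u firefox       # upgrade it later
nix-env --rollback       # switch the profile back to the previous generation if the upgrade misbehaves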
I am wondering how this would solve a problem if the package is not in the channel for downloading?
Also, I use RHEL/CentOS/Fedora and I do not have problems with broken dependencies. Maybe this was true in the past; however, I have installed packages manually and resolved dependencies myself (not recommended). As far as RHEL goes, if the package is in the channel and a dependency is missing, they will get it fixed ASAP.
I am not sure about Debian based distros. I have only installed and used Ubuntu for less than 10 minutes before deleting it and re-installing Fedora…
Me and my clients, too. I also use a Debian based distro on my desktop, and don’t have problems there, either. Yet there always seems to be someone flitting about wanting to save us from “dependency hell”. It used to be the Debian crowd. But they finally figured out that other distros had package management. Now it seems to be random projects from every which direction coming at us with a mission to unify everything. If in doubt, you can recognize them by their claims that (1) You have a problem, (2) They can solve it, (3) Their product can mess around with your system without interfering with your native package manager, and (4) Your girlfriend will be amazed, and the guys in the locker room will be impressed.
I’ll stick with Yum and Apt, thanks.
Well… I don’t think that it’s just a matter of “solving dependency hell”. I don’t think it’s a problem of the package manager per se, but of how reliable RPM based distributions are when you are switching from one iteration of the distribution to the next. At least openSUSE (from what I’ve heard) hasn’t reached the point where you can just “dist-upgrade” the way Debian based distros can. How about RHEL 4 to 5, or CentOS, Mandriva and so on? Has anybody experienced problems doing that? Has anybody tried?
*sigh*
So there are still Debian folks living in the 90’s.
My (rather extensive) experience is with the Red Hat family. I have “side-graded” quite a number of servers from Fedora to CentOS with relatively little problem. You have to wait for the proper time window in which the CentOS release is later than the Fedora release you are upgrading it from. For straight CentOS upgrades? RHEL upgrades? Fedora upgrades? Generally no problem. Now, when you get to the destination… you have all the new versions of the software. If your firefox plugins are incompatible, then I think I can get you a good deal on Kleenex tissues.
I wince about the same amount when I start an apt upgrade as when I start a yum upgrade. If you are still buying into the concept of only Debian based distros having a smooth upgrade path then you really need to join us in the 21st century. Because even back in the 20th, I was beginning to wonder about that mindset.
In all fairness, there were teams of Linux zealots telling us that there was no such thing as dependency hell back in 1998. Back then, if you criticised the situation, you were labelled an idiot for doing it wrong. It’s understandable that it’s difficult to assure people that package managers have matured and are now extremely reliable.
Wow. 1998. That takes me back a ways. I had to spend some time reorienting myself. Back in 1998, I would have been using Red Hat 5.1 and 5.2. Up2date, Red Hat’s first dependency manager, was, I believe, introduced in the fall of 1999 in Red Hat Linux 6.1. Oddly, I don’t have a specific recollection of being bothered by dependencies back then. Perhaps because the dependency tree back then was less complex. And possibly because I was focusing upon the one really major dependency issue that I *know* we had back then: the switch from libc5, maintained by the Linux kernel devs, to glibc from the FSF.
Also, since I was coming from SCO OpenServer 5, rpm alone seemed like a godsend even without dependency management.
I think I can say, however, that by 6.1, 6.2 or 7.0 at the latest, dependency problems in RH were mostly a thing of the past. I think maybe Mandrake had the problem resolved a little earlier with urpmi. I distinctly remember that at least through the early 2000s, a significant portion of Debian users seemed to have no idea that other distros had package management. It was like some of them lived in some weird parallel universe in which only Debian had such a thing. I still occasionally run into that attitude in a slightly modified form. Now they admit that other distros have it, but claim that it only actually *works* in Debian. As a user of both apt and yum, I can categorically state that such views have little basis in fact, and are most likely a vestigial meme still being passed around in certain communities. Repos in the Debian world are quite good. But so are repos in the Red Hat/Fedora world. Apt definitely still has the edge over yum on speed, which I appreciate when I am working with deb based distros.
With the diverse set of machines that I administer in my work, I cannot imagine dependency hell being a real problem without my noticing it.
So I guess I would have to say that if there ever was a dependency hell in any sort of practical sense, it was long ago and far away.
*heavy sigh*
I upgraded openSUSE 11.0 to 11.1 by changing my repos to point at 11.1, and then running Yast to upgrade all packages.
RPM-based distros will break on distro upgrades for exactly the same reason that Debian-based ones will: when you’re using non-standard or third-party repos that aren’t in synch.
The openSUSE build service, which houses a multitude of contributory repos, automatically builds packages against multiple versions (and distros) and updates packages when applicable dependencies change in those targets. The popular third party repos follow factory development and generally have repos available for the new version at release. This means that as long as you point to the new sources properly, then there should be little issue with updating.
As an example, I had unsupported KDE 4.2 packages from the build service installed in 11.0. Upon upgrading, it updated to the appropriate unsupported KDE 4.2 packages for 11.1 along with the core 11.1 upgrade.
Dependency-hell disappeared a decade ago. If it still occurs, it’s an issue with the packager, not the package management.
You’re right that everything depends on the quality of the packages. The main reason why Debian and Ubuntu are better in dist-upgrade scenarios is that they offer more packages in their repositories, so it’s less likely you’ll hit a package with broken dependencies.
No, openSUSE has repositories just as big. Nevertheless, the official openSUSE position is that the only endorsed way of upgrading the distro is by starting the installer DVD, but I did a pure “zypper dup” (first I updated the repository baseurls to 11.1, then I upgraded zypper itself with “zypper in zypper”). And I had no problem!
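For anyone wanting to reproduce that, the rough sequence looks like this (assuming all of your repos use the standard versioned baseurls, so a simple substitution is enough):

sed -i 's/11\.0/11.1/g' /etc/zypp/repos.d/*.repo   # point every repo at the new release
zypper refresh
zypper in zypper                                   # upgrade the package manager itself first
zypper dup                                         # then do the full distribution upgrade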
But coming from Debian/sidux, it seems to me that the apt packaging system has more expressiveness than any RPM system (think of all the meta and transitional Debian packages!). The openSUSE wiki has a special page describing the difficulties of splitting or merging packages because of this. Ever wonder why they do not just switch to apt packages, then? I know there was such an attempt going on some years ago…
urpmi on Mandriva does what rpm should have done in the first place. I’ve yet to have dependency issues using it and sticking with the plentiful Mandriva repositories. rpmdrake is there for the GUI-requiring users.
apt-get and aptitude for Debian are the same way; no dependency issues there either. Aptitude even tracks which packages were installed by request versus which were installed as dependencies, so if you uninstall a package and the dependencies are no longer needed, they go away too. I’ve yet to use Synaptic, so I don’t know how it is for the GUI-requiring users.
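To illustrate that auto-removal behaviour (the package name is just an example):

aptitude install frozen-bubble   # dependencies get pulled in and marked as 'automatically installed'
aptitude search '~M'             # list everything currently marked as automatically installed
aptitude remove frozen-bubble    # the no-longer-needed automatic dependencies are removed along with it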
By allowing side by side installation of packages instead of just upgrading them, would this not be a security nightmare?
Also, I suspect rpm/dpkg could be used in the same way to parallel install most things, but the distributions do not do that due to the above problem and the associated maintenance costs.
“Also, I suspect rpm/dpkg could be used in the same way to parallel install most things, but the distributions do not do that due to the above problem and the associated maintenance costs.”
Indeed. It’s a fundamentally dim way of doing things. One historical objection (it’s a giant waste of disk space) is getting less important these days, but that doesn’t make it any less of a maintenance nightmare (or less of a huge waste of memory).
We are heading into the year 2010, and Linux is still suffering from this basic problem. Sad, really, really sad.
This is a situation where people are making a problem to fit a solution. I almost never, ever have a problem in Ubuntu. I still have some issues sometimes with Red Hat, but not like I used to.
I still think a combo of how Apple does it and a package manager backend for updates would be great. I love how easy it is to install Apps on my Mac.
Hi,
The dependency problem on Linux does exist, regardless of whether you have to sort out all the problems yourself or if there’s a huge team of maintainers who sort the problems out for you. The fact that you think there’s no problem just means that the Ubuntu maintainers are doing a good job of hiding the problem.
The fact that you need a package manager and a large team of maintainers is the problem. Another package manager (with another large group of maintainers) won’t help.
So the user doesn’t notice any problem. Because their upstream is doing a good job of handling dependencies.
Won’t help what? This user recognizes that the “problem” is a fabricated one. Or at least is one that his distro manages well.
I do agree that another layer is not needed.
Hi,
The user is using Windows still, where no package manager is necessary. The commercial companies writing Linux software mostly don’t exist. The open source software for Linux is “usable”, but mostly underneath the thin layer of gloss there’s an ugly mess of hacks (autoconf, perl scripts, package manager/s, etc) to sticky tape separate chunks of crud together.
Don’t get me wrong – Linux *is* good. All I’m saying is that nothing is perfect, and in some ways Linux is less perfect than a lot of other OSs.
Until users find their system is filled with cruft, dlls have been overwritten by programs that are no longer installed and the only way to go back to a stable state is reinstalling the OS.
I find the package manager and repository method far better than any other platform’s method. Windows software is all over the place and unvetted; “runs on Windows” breaks as soon as you add Vista into the mix. The install process for Apple is fantastic: drag the icon from the archive package to the Applications directory and it’s installed. But it too could benefit from a centralized repository; sadly, that would mean an iApp store though.
By contrast, having all but VMware Server available instantly after install from the Debian or Mandriva repositories is so much cleaner. I can write a single script barely more complicated than a .bat file which installs and configures everything on top of a minimal Debian or Mandriva install.
Let me know when you can install/uninstall/configure a fresh system build using a single script. A prebuilt ghost image doesn’t count and it has to include more than the Windows install disk so a slipstreamed MS scripted install isn’t going to cut it either.
As for the “Linux sucks because there are too many ways to install”:
– tar.gz source means you can install a program regardless of the CPU and distribution or BSD you’re adding it to. Have you a more universal way than “make && make install”?
– package managers are distribution specific, so let’s focus on the distribution rather than a blanket statement like “the kernel sucks because it has too many ways to install software on top of it” (Linux is the kernel, not a specific distribution, after all)
– GUI package managers; search or browse the list of available programs, check the checkbox beside the programs you want, uncheck the checkbox beside the programs you want uninstalled, click [ok] and it’s done.. wow, that’s like so totally complicated and stuff, how could anyone possibly master such arcane methods.
– cli package managers; search for a program based on partial name or description (urpmq, apt-cache search), install the desired program (urpmi program, apt-get install program).. wow.. that must require a degree in engineering to figure out.. it’s all of two commands one has to be remotely familiar with.
– Windows; install from Windows Update/library, install from a Windows install wizard, install from the program developer’s own install wizard, install “run from directory” apps by uncompressing, download unvetted software from any number of different untrusted websites.. yeah, that seems like a simple standard single install method to me.
Seriously though, the “too many choices” crap is a myth that only makes the detractors of any OS feel better about their own personal choices. Working with four major OS platforms on a daily basis, I feel sad for anyone who claims to be tech savvy while only ever able to feel comfortable with a single platform. How boring it would be if my machines only had one OS bootable on them.
Yeah, but the issue is removing apps in OS X. There is a lot of garbage still strewn around even after you remove the app. You still have to use third party tools to remove these files easily, or if you know the location of the files you can delete them yourself. Removing applications in Linux (in my case Ubuntu) is pretty straightforward and you can purge all files associated with the application easily. Not to mention that OS X is far from consistent when it comes to installation. Just like Windows there are many different installers; unlike Windows, some of those installers don’t have a way to uninstall the application.
Besides that, I’m not a fan of having these huge blobs taking up resources when shared resources are better in both security and resource usage. Why should I have to redownload the GIMP when all that needs to be updated is libjpeg? Don’t get me wrong, I love OS X (it’s now my primary desktop) but that doesn’t mean that I don’t find issues with the way it handles packages. Ease of use usually comes at a price; in OS X’s case, they made the system easy but unmanageable.
Usually what’s left are .plist files that don’t take up a lot of space, and they don’t affect the system performance.
Now about installation. Usually you will encounter .dmg files from where you have to drag the app to your HDD, or installers (Apple’s own Installer most of the times, and custom installers from software vendors like Adobe).
Doesn’t seem as inconsistent to me. A package manager would be nice though.
“We are heading into the year 2010, and Linux is still suffering from this basic problem. Sad, really, really sad.”
I find NO facts to back this statement up. If a customer is paying for RHEL entitlements: I had a problem with a package one time in an upgrade from RHEL 5.1 to RHEL 5.2, and it was fixed in the channel in the morning. ***Take into consideration it was added to the channel in RHN that day! I hardly think this is an issue any longer. Even so, with RHEL support you open a ticket, post the info and it is fixed; case closed. People are human, they make mistakes, so I would say they stay on top of them a lot better than the fallacies of a Windows OS. You can actually see what makes the OS tick and make changes to it, unlike the closed source counterparts.
So I say being Open Source is about a lot more than complaining about something; with the current distros it is NOT an issue…
Meanwhile, yum/apt/YaST are awesome package installers, along with the other utilities from other distros. I give them kudos for keeping something very complex down to a level where users can work with it.
What a load. This is not about being closed or open, or about using apt this or yum that; it is about making a freaking standard already, so I can make one package that I know will work on any distro, on any version.
They are all good at working together on political issues like DRM, patents and OpenDocument, but for the issues that matter they are still pre-school in organization. No, I think pre-school children could be even more organized.
Sucks.
How right you are. I keep hoping that maybe one day, the people who keep denying that a buttload of distros/package managers re-inventing the wheel and duplicating each others’ work is a problem will finally attain enlightenment.
Debian has a standard .deb which can be managed with apt-get or aptitude, or Synaptic if you’re GUI-crippled.
Mandriva has a standard .rpm which can be managed with urpmi, or rpmdrake for the GUI-crippled.
yum is a standard for its distribution which works great.
I don’t try to manage my Red Hat or Mandriva OS using apt-get any more than I manage my Debian with urpmi or draketools. Should I complain because Slax isn’t just like Debian or Red Hat? Using the same Lego pieces like a common kernel, or being similar and interoperable, does not make the different distributions or BSDs the same OS platform any more than Win98 or OS X is the same as Vista.
But lumping them all together as “Linux” and saying that one distinctly different platform, with its own development goals, should be the same as another is really just demonstrating your own lack of understanding.
It may be better to remain silent and be mistaken for a fool rather than open your mouth and prove it.
(I’m not all teeth though; I’d happily answer any questions you have, as I would for any other person exploring a new OS platform outside their comfort zone.)
Although I like the solution already offered by GoboLinux, I would like to point out that judging Nix in advance with “I’m pretty sure it’ll suck” (without even trying it) is perhaps a little bit too hasty?
Solving dependency hell would have been nice several years ago. These days, it is rarely a problem unless you’re using the unstable repo of whatever distro, and if you do that it comes with the territory on occasion.
Packaging isn’t really the issue. Binary compatibility is. What good would it do to have a distribution-independent package manager, if my software is going to need to be recompiled to fit each distribution anyway? Before we can have distribution-independent packages–before we even worry about that–there has to be a way for those binaries to run in a distribution-independent fashion. At the moment, this is not the case, particularly though not exclusively regarding GUI applications.
Once this is achieved, then having distro-independent packages is a must. Until then, this is a solution in search of a nonexistent problem.
Great! Let’s get to work. What are some examples of binaries running on one distro, but not on a contemporary one?
Many reasons. Among which:
* Different GCC versions
* Different GCC compilation options
* Incompatible library APIs
* Incompatible file formats
* Dependence on kernel functionality no longer supported
* Dependence on kernel functionality specially compiled for a distro
* Dependence on distro-specific directory structures
* Dependence on distro-specific packages that no-one else supports
Please be more specific about how reasonably current GCC versions and options affect binary portability.
Please be specific about these incompatible library APIs and file formats.
The kernel binary API has been frozen since Linux Kernel version 1.0, or so.
If your package depends upon something that the vanilla kernel hasn’t opted to merge, and distro support is still spotty, then yeah.
Be specific, please. What variances from the FHS are significant here, and what can be done to correct that?
Again, please be more specific. What packages are we talking about?
The FHS is a Linux-ism, and focusing more on Linux does us even less good in unifying software behavior in any meaningful way with OpenSolaris becoming more and more viable as an alternative. Many of the things advocated in the FHS are flat-out against the Solaris way of doing things.
Look at the autopackage page. They list all the problems related to binary compatibility.
Don’t put your head in the sand.
In other words…all the reasons we have different distributions in the first place. How do you propose to get all distros to agree on all of the above? And all users?
VMWare…
Hey worse yet, if I upgrade the kernel then VMWare breaks….
Yet on Windows NO PROBLEM…
When I look at how to install VMware Server on Linux, and then on Windows, I think, “yup, this is the reason why desktop Linux will NEVER pick up…”
Although manual installation is often preferred, iirc you can also install VMware Server from the repositories along with prebuilt modules for your current kernel.
Yeah, that’s a huge problem: you have to rerun vmware-config.pl and let it reinstall the kernel modules. I tell you, it’s been a trial each of the last five Mandriva kernel updates. It’s a total of 60 seconds of additional time after each kernel addition; I don’t know how a server administrator could ever manage it.
(is the sarcasm obvious enough?)
It’s really a non-issue. VMware is server software and any competent administrator should have no issue with having to let the configuration script rerun. Those that do should be a little embarrassed. If you’re using VMware, you’re already above the average user who would need tech support to change their Windows printer settings anyhow.
The simple solution, at least from an interested 3rd party distributor’s standpoint, would be to do statically linked binaries all living in their own neat little world under /opt or somesuch.
Right, and you package what like that?
All ‘applications’? How do you define an app? What if it comes with a library? A library that’s used by other apps?
How do you package KDE or GNOME? Half as little lumps in /opt, half with a different package system in /usr? Sounds fun. How do you package GTK+? How about X, how’s that get done? Again, do you split it up like KDE or GNOME?
I really wish people who haven’t the first freaking clue how to build a distribution would stop thinking they have the obvious ultimate answer to package management. There isn’t one.
/opt/gnome/bin
/opt/gnome/lib
/opt/gnome/include
/opt/app_name/subdirectory
Really, it is not that hard. And having the package manager update the global files for the include/linker… should be trivial. Or just do like apple does with their .app containers and be done with it.
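A rough sketch of what “updating the global files for the linker” amounts to today, assuming your distro reads /etc/ld.so.conf.d and a package has dropped its libraries under /opt/gnome/lib:

echo "/opt/gnome/lib" > /etc/ld.so.conf.d/gnome.conf   # tell the dynamic linker where the new libs live
ldconfig                                               # rebuild the linker cache
export PATH="$PATH:/opt/gnome/bin"                     # and make the binaries reachable (e.g. from /etc/profile.d)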
Even if I don’t have a solution off the top of my head… It is the 21st century, and this should have been tackled by the Linux/OSS community long ago. I think the OSS community would benefit immensely from sane standardization in package management and library interfaces. Every little distro coming up with new and “improved” ways of dumping everything under /usr, and reinventing the /etc wheel, coupled with GNU’s penchant for changing the interfaces to their libraries/toolchains on an almost seasonal basis, got old long ago.
If Solaris got a bit more traction with GPU drivers, OpenCL/CUDA, and some of the commercial packages, I would jump from Linux in an instant. Trying to use two commercial packages each expecting a different Linux distro snapshot can be a “fun” experience, but I am getting too old for this sh*t, honestly. Solaris is not immune from dependency hell, but the toolchain/interfaces seem a bit more “stable.”
“Every little distro coming up with new and “improved” ways of dumping everything under /usr”
Er, except they don’t. Everyone puts system libraries in /usr/lib. It’s been a standard for years. Putting them in /opt would be nothing but a new variation – the thing you complain about yourself, ironically – that doesn’t benefit anyone. How is having GNOME’s libs in /opt/gnome/lib in any way an advantage over having them in /usr/lib?
Sorry, but most people discussing this really don’t have the first clue what the crap they’re talking about.
You miss the point entirely. It isn’t a solution to the total non-problem of how distros choose how to package the apps they ship with and support, it’s a solution to the actual problem of software outside the normal channels working predictably on any distro the end user happens to have.
The problem is that you can’t define “software outside the normal channels”. It’s a boundary that gets fuzzier the harder you look at it, and then stops existing entirely.
So, you want to provide Super Leet App 2.0 in your Super Leet Third Party Repository. But, oh no! It needs a newer version of libawesome!
What do you do now?
Providing third party packages for very simple, end-of-branch applications has never, EVER, been a problem. Heck, if your app is that unproblematic and end-of-branch you can stick it in a tarball and it’ll work. If you just want to make a big static lump you can do that and it’ll work. You don’t need to wrap it up in a ‘package manager’ which isn’t a package manager because it’s just shoving giant statically linked balls of crap around. The *only* time things get hairy is when you actually have to deal with real dependencies from the underlying distribution, which isn’t something any of these proposals solve (except by introducing a huge parallel set of libraries or statically compiling *everything*, which a) isn’t a solution and b) doesn’t need to be done so damn complicated).
Why isn’t static linking libawesome a solution?
Because Linux distributions and the LSB do not define a suitably complete baseline, so application developers do not have enough information to know exactly which libraries they can expect to be provided already on any reasonable distribution.
If the LSB defined libawesome as a “standard” library that would be provided as a DSO, and defined a baseline version, then I can dynamically link my application to use that version of libawesome (If I use a newer version, that’s my problem). If it isn’t in the baseline then statically linking it would be a perfectly acceptable solution which would eliminate the run-time dependency handling.
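A minimal sketch of the two options, using the hypothetical libawesome from above (assuming the developer has a static archive libawesome.a available):

gcc -o superleetapp main.c -lawesome              # dynamic: relies on the target distro shipping a compatible libawesome.so
gcc -o superleetapp main.c /usr/lib/libawesome.a  # static: the library is baked into the binary, no run-time dependency left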
The obvious rebuttal is always “What if libawesome has a security issue? Now you have to upgrade every application that has linked libawesome!”. The answer is that if the library in question is popular enough to be used by a large number of applications it should probably be in the LSB baseline, and if it isn’t then you only need to upgrade a handful of applications anyway and your package manager is perfectly capable of doing that automatically for you.
“Because Linux distributions and the LSB do not define a suitably complete baseline, so application developers do not have enough information to know exactly which libraries they can expect to be provided already on any reasonable distribution.”
Sure, but how are you going to get that to happen?
Boost is a good example, again.
Boost 1.37 just came out not long ago. Fedora’s development branch upgraded to it (in a tag, I know, blah blah blah, it’s just mechanics) almost immediately. Mandriva’s just went the other day. That’s because we’re pretty bleeding edge distros and we like to have the latest version of everything. This means that around April-May, Mandriva and Fedora stable will have Boost 1.37.
Debian, of course, didn’t go yet. Heck, sid is still on 1.34, which is now *two* majors out of date (there was a 1.36 before 1.37). This is because Debian is big and stable and bureaucratic and won’t go until they’re damn sure everything in their repos builds against the new major, on all platforms. This means there’s not a chance in hell a stable release of Debian will have 1.37 for at least, oh, a couple of years, I’d say.
All the other distros are somewhere in between.
How do you plan to reconcile this? The reasons different distros contain different versions of things are fundamental to the nature of differences between distributions, not some kind of historical quirk. You can’t get all distributions to just ‘agree’, because it would involve many distributions changing into something they aren’t.
Yes, you could always include different majors of every library in the base. No, that’s not really going to be sustainable for the long term, and how do you handle something that can’t be libified – like, oh, say, HAL? Or Tcl?
If Boost is breaking its API with every release and not bumping the major DSO version, then the Boost developers need to learn how to manage an ABI. Although this isn’t just Boost: it seems to be a fairly common issue with a lot of open source developers.
I never said getting everyone to agree on the baseline would be easy, but it’s what needs to happen, and it’s what the LSB is supposed to cover.
With your specific example (Boost) I’d suggest the only sane answer is to define a single version that must be available, and then allow distributions to ship newer versions in addition if they wish. That way the distros can relink all their system packages against the bleeding edge, but a third party developer can link against the baseline version.
They do bump the DSO version. That’s why for distros that do libification we have libboost1.34, libboost1.36, libboost1.37 – not just a ‘libboost1’, which wouldn’t actually be correct.
They don’t break API for kicks, just when they need to – they just happen to use small version increments for API changes. It doesn’t really mean anything, though. They could change it so 1.36 was Boost 2 and 1.37 was Boost 3, but it’d be the same code. This way of doing things is perfectly OK and compliant with all conventions, just a bit unusual. When they do an increment which doesn’t change the API, it’s versioned like 1.36.1 (which is not an API change from 1.36).
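Concretely, parallel majors just end up as differently versioned sonames on disk, so they can coexist side by side (file names illustrative):

/usr/lib/libboost_regex.so.1.34.1
/usr/lib/libboost_regex.so.1.37.0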
Then it’s a non-issue. Distros can ship all 37 versions for all the difference it makes.
Yeah, would you like to maintain that mess? I wouldn’t. Distros try and stick with a single major of most libs unless it’s unavoidable, for good reason.
And it doesn’t answer my point about how you handle a similar situation with something that’s not entirely libifiable – like HAL or Tcl.
You’ve just arrived at the same conclusion as I did a couple of posts earlier: the LSB specifies a baseline version that third party developers can rely on in any distro, and distros are free to ship any other versions as they see fit.
I use Ubuntu and while there have been times issues have come up, for the most part I have found solutions within its realm. Unfortunately, people often don’t even provide solutions for debs and rpms. Now we’re going to add another package manager for developers to work with. I just don’t see it happening.
What I have found most useful is in fact virtualization. If I really HAD to run some old or specific version of software, I would use a virtual image. When Flash doesn’t work, I fire up my Windows XP virtual box and run it in there. I’d much rather see the work being put here as it is guaranteed to work and keeps everything clean. For example, I do a lot of embedded software development. I wish the entire toolset was provided as a virtual image that I could just load up. Perhaps with some mechanism where the home directory is shared between them.
For newer versions, I tend to find someone who has made an apt source for it and just add it. For the most part this works quite well.
I really like the way Firefox handles ‘new/beta’ versions on Windows. You can have your regular Firefox install and then you have the ‘Minefield’ install which updates to the latest and greatest. But they’re essentially 2 different applications. I kind of wish Linux distros offered this ability for popular apps. It works quite well.
It seems that versions of Linux are the only OSes that still have this issue. I love the fact that on Mac OS I can run as many versions of whatever I want.
Another cool thing I LOVE about Mac OS is that I can just make a copy of an App and it will run like 2 different versions. This comes in handy for instance when I want to rip 2 CD’s at the same time from 2 different CD drives. I can make two copies of my rip App and then point one to one drive and one to the other. Perfect.
On the Mac I can have different versions of the same app; just install each in its own folder, no big deal.
You just keep tellin’ yourself that, honey. We believe you. We really do. Honest.
Wow, that sounds really intelligent. LOL! That is not OS or tech talk, save that for facebook where the kiddies are.
Guess what, you can do all those things on Linux. And unlike on the Mac, it’s really easy to run multiple instances of a graphical program without resorting to stupid tricks like copying an app folder.
For popular packages that have multiple versions, distros generally support installing multiple versions side by side. Gentoo even has a built-in system in Portage for having multiple versions (slots) which has worked out quite well. You can also install RPMs in different locations if you need to have different versions at once, or you can always just build from source and install in /usr/local.
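Two quick sketches of that (package names and paths are just examples; --prefix relocation only works for RPMs built as relocatable):

rpm -ivh --prefix=/opt/myapp-2.0 myapp-2.0-1.x86_64.rpm             # a relocatable RPM installed into its own tree
./configure --prefix=/usr/local/myapp-1.5 && make && make install   # a second, source-built version alongside it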
Wow, now tell my grandmother how to do any of that… Oh yeah, you can’t. Because what you’re describing is not something you can do easily out of the box. You have to hit the CLI. Wow, on the Mac you right-click, choose copy and then choose paste. Woooo, fancy tricks.
Now let’s see if you can show me how to do that on Linux more simply.
Okay, Grandma isn’t doing that on Windows either, or needing to. Grandma’s gonna understand that she needs to copy an app folder so that she can run two instances of it to do some trick? Grandma’s going to need to run two different versions of some piece of software side by side? No. This rarely happens even for power users, much less people like Grandma. I can tell you from my experience with regular users using Linux, and also Windows, that they really don’t care about that kind of stuff as long as their two or three apps work. My sister now uses Ubuntu and I never have to fix problems for her because she just doesn’t run into any. Why? Because she isn’t trying to do stuff like this, that’s why.
OK, I’ve got a simple one that my grandmother did encounter, and which is why she doesn’t run Ubuntu anymore and now has a Mac. Very simple; please give an answer. I am doing this on Ubuntu but it can apply to any version of Linux using Gnome or KDE.
So you want to download an application, install it and run it. So my Grandma downloads Frostwire (.deb file) runs it and installs it BUT it doesn’t put an icon on the “start” menu in Gnome. (And I have seen this same problem in KDE)
Now how do I call my Grandma and tell her to fix this issue after a reboot does not work. The app is installed but she can’t find it.
On Windows you would most likely go to Program Files (or Add and Remove Programs) and look for a folder by the name of the application and copy and paste the .exe file onto the desktop or something. Pretty easy.
On the Mac you don’t have this issue, you copy the app into the Applications folder, done. No looking etc. Very simple.
On Linux now to my Grandma the application is in the ether.
Let’s see which solution is the easiest (let’s stick with Gnome or KDE, whichever one is easier).
On Gnome, use alacarte, then click New Item and add the program. Enter program name.. Done
On KDE, start kappfinder and click Scan… I think… I don’t use KDE anymore.
What if the program comes with a name that doesn’t match the package name? Like “The Most Awesome Free App” -> “tma”.
Actually Alacarte is a bit more complicated than stated. In Ubuntu you go to System, Preferences, Main Menu.
So far so good.
Then you choose where on the menu list you want to put your app and then you choose New Item. So far so good, thennnnnn. The Dilemma.
So you are presented with a box that says create a launcher. You are given the option to choose if its an application etc. Then the name and then “Command” and a browse button next to it. Soooooo, I click the browse button and go to my home directory.
Now picture me trying to tell my grandmother all this, and then she gets to the home directory and gets confused.
So now I direct her to go to File System, then to usr, then to bin, and then search for the program file within that.
Woooooo.
This is why Apple did away with applications on menus; it’s silly and a huge waste of time! Really a waste of time when things don’t work. 🙁
If it’s okay to go to Program Files on Windows, then she can go to /usr/bin on Linux.
Well, there is your problem, right there. Defective deb. Arguably, any application that the user installs should just show up in the menu. Contact the Frostwire guys to find out why they are not doing that. There may be some rationale. But if it doesn’t even show up as a predefined option in Alacarte, it is definitely a packaging bug. And this is *not* a common situation.
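For what it’s worth, all a package has to do to show up in the menu is drop a freedesktop.org .desktop file into /usr/share/applications; roughly like this (the contents are a guess at what FrostWire’s entry should look like):

[Desktop Entry]
Type=Application
Name=FrostWire
Exec=frostwire
Icon=frostwire
Categories=Network;P2P;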
If an Apple package neglects to provide an executable, what would you do then? It’s the same situation, really.
It’s actually not the same situation. Because in the example I used, the executable did and does work. It was just not added to the menu. This is not always the fault of the developer, as at least Gnome is known to have quirks and may require you to go to a command line and run “killall gnome-panel” to get menus to work.
My whole point in this is that package managers, menu issues, etc. mean Linux has usability issues on the desktop. They really need to work them out.
It most certainly is the same situation: The installer failed to properly install the app.
Only the details of the failure vary.
“It most certainly is the same situation: The installer failed to properly install the app. Only the details of the failure vary.”
Actually you most likely would never have that problem on the Mac, as almost no apps use installers, and those that do can still be installed without the click-by-click installer.
Fine. That’s cool with me. There are also a whole host of Mac problems that you can avoid by using Linux, instead. Starting with the purchase and upgrade prices. Plus some philosophical advantages, if you are into that sort of thing, and some practical ones that stem from it even if you’re not. I’m not anti-Mac. My POSIX advocacy comes before my FOSS advocacy. (I was a unix advocate long before I’d even heard of what we now call FOSS.) And Mac OS X is a strong unix OS in its domain. I’m glad it is there to meet the needs of its users. In the same sort of way that I’m glad that Solaris and OpenSolaris are there to meet the needs of their users. And the *BSDs. Even if I do call Linux my home. I say this because I suspect that you might think I’m some Linux fanboy with my head in the sand. And I’m certainly not that. (Though I realize you have not said so.) I think I would have been more receptive to the “we need to change the directory structure” argument a few years ago, when the package manager landscape was not quite so mature in Linuxland. But at this time, I really do see any need for users to ever have to concern themselves with the directory structure outside their own home directory as indicating a failure at a lower level. Your grandmother simply should never have had to worry about it.
So the Frostwire guys screwed up their .deb. I installed VirtualBox the other day from a standalone .deb and it put the “Start menu” in place just fine.
Presuming the installer puts an icon there.
Only because the deb is broken. I can certainly find Windows installers that do not put icons in the start menu.
You’d use alacarte or menu-editor and it would be roughly as hard or simple as manually adding an icon to the start menu in Windows.
Actually it’s very easy in Windows. If the application actually installed, then you can go to Add and Remove Programs and look at the details and it will tell you the path to the executable. No need to worry about icons, as icons don’t execute, but exe files do.
If you read my last posting, I actually tried to walk my grandmother through Alacarte and it’s not as easy as you would lead us to believe. Plus, why can’t applications be in a place a normal user would understand? /var/bin is not something a normal person would understand; it makes NO sense!
“C:\Program Files” Simple and easy to understand.
“/Applications” Simple and easy to understand.
When are Linux companies gonna get with the program and make Linux more usable for Joe Shmoe?
Doesn’t seem like ever, which is why Red Hat has left the consumer desktop alone. They know as is Linux will never make money on the consumer desktop.
Red Hat is the only Linux company making a real profit, they should know what does and doesn’t sell. And they have made that very clear.
In Linux a user should only see the basics. When I open my hard drive it should be more Mac-like. I don’t want to see 50 folders, most of which I have no clue what they are there for. I should see what I need to see. And if I need to see more, then I change the view settings.
On the Mac you open your hard drive and you get:
/Applications
/System
/Library
/Users
/Shared
(Developer if you have the dev tools installed)
Out of those my Grandmom knows what all but Library does
Simple.
In Vista:
Windows
Users
Program Files
Simple.
In Ubuntu (as we all know, this applies to all versions of Linux but Gobo):
/bin
/boot
/dev
/etc
/home
/initrd
/lib
/lost+found
/media
/mnt
/opt
/proc
/root
/sbin
/usr
/var
/srv
/tmp
Come on the user should not see all that (Unless you want to)….
Gobo
/Programs
/Users
/System
/Files
/Mount
/Depot
Simple.
I am not putting down Linux, just saying this is one of the faults that kills simple usability.
Your point being?
a) No-one is using /var/bin
b) Almost all GUI applications install an icon in the “start menu” in any decent Linux.
c) whether /usr/bin, “C:\Program Files” and its myriad of subdirectories, or “/Applications” is more sensible has everything to do with what you’re used to and nothing to do with making sense.
And what’s under Windows and Program Files? One billion confusingly named folders and files. Yes, that’s surely better…
But the example you give: “What happens if my grandmother installs a broken deb?” has nothing to do with your directory structure complaints. What if she buys a light bulb with defective threads that won’t easily screw into her lamp? Should all lamp sockets be redesigned to somehow make things easier in that situation? Or should she try another light bulb, possibly complaining to the original manufacturer? As soon as the deb she installed failed to add the application to the menu, she was in a failure mode. A desktop user should not have to know or care about the internals. Personally, I think that the FHS structure is pretty good, except for maybe /opt, which is redundant and useless. One might quibble a bit about names. But only developers, distro maintainers, and “power users” (whatever they are) need be concerned about that. Regular users should be able to just install and go. Or at worst, if there is some reason for the app not to appear in the menus by default, bring up Alacarte and check that app in the *existing* menu structure. If you get into adding generic launchers, something has already gone wrong. And it is that something that needs to be addressed.
BTW, on my Ubuntu Intrepid x86_64 box, I just now went to the frostwire site’s download page, clicked on Debian/Ubuntu, installed with Gdebi, and “FrostWire” immediately appeared in Applications->Internet. I clicked on it and FrostWire came up. No filesystem paths or “killall gnome-panel” involved. (No reboots either.) Although if, for some reason, you did want to restart the panel, the user could just log out and back in. (But surely you knew that when you claimed they would have to use “killall gnome-panel”.)
There are things about our ecosystem that could stand to be improved. I just don’t think that you have hit on one.
Because Joe Schmoe is an idiot who has no business looking in the root directory past his home folder. Alacarte could use some work, and even then, the application was made as a workaround for the mess that Gnome made out of its virtual filesystem. Gnome used to have something very similar to Windows where you would type something like “://applications” and it would take you to the relevant area; the menu used to be this way as well, but it never worked properly and so was removed from subsequent versions of Gnome. Because the only sane way to edit menus was via a text file, someone created Alacarte (I forget its original name) to alleviate the issue. When Alacarte first came out, it was SLOOOOOWWWWW, buggy and a relative hack, but Gnome adopted the project because frankly they weren’t ever going to get the original functionality working again, and they obviously haven’t bothered since.
Anyway, my rant was really about gnome-vfs, which has been replaced and has seen some major improvements. Hopefully in the future we will use a virtual filesystem to direct users to the correct place. The user should never know about /usr, /media, /etc, unless they know what they are doing. In OS X, you can take a look at the inner workings of both application bundles and the filesystem by going to the command line. You won’t see any of that in the UI. In fact, up until a couple of versions ago you couldn’t even access them via the Finder. A well laid out VFS can hide the details and let users go to where they need to go. Instead of users having to go to /usr/bin to look for executables, they can go to “://applications”, which points to /usr/bin but presents it in a different way (with more colorful icons).
Same with BSD and Linux based OS; you can have multiple versions installed if you need that for some strange reason. Heck, Java on windows gave you an additional version install by default until the latest update 10 where they’ve finally fixed that.
If it really came down to it, I’d probably just open multiple users under cli/su then run as many K3b instances as I need. Another approach would be using the more powerful command line interface for your DVD ripper and running as many as you need at the same time (“it’s complicated, you put ‘&’ on the end of the command line”). I regularly have multiple scp, copy or move operations going on at once from the same command line, though I’ve never had an issue opening multiple GUI ftp clients at the same time either.
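For the curious, the ‘&’ trick looks like this (the command names are made up; it is just the shell’s job control):

rip-dvd /dev/dvd1 first.iso &    # first rip goes to the background
rip-dvd /dev/dvd2 second.iso &   # second one runs at the same time
wait                             # block until both background jobs finish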
You are right about all that, but what I was saying wasn’t that you could not do it in Linux. It’s how easy it is to do in Linux. Yes, under the hood you can make Linux do more tricks than a clown! But most people have no clue how to do it. And even us power users have to learn it.
And the fact that you have to go to the command line to do something that you can do with a copy and paste in Mac OS is not very user friendly at all. And that is the dilemma.
Granted, I’m not the average pointy-clicky user, but opening a second user login under a terminal window or going full virtual terminal with F2, F3 is pretty easy to do. Actually, I don’t remember a day that I haven’t had a terminal open with the root account, since I’m doing admin after logging in as a regular user. Same for my Windows workstation: log in as a domain authenticated user, open a cmd.exe with administrator rights to run things that regular user privileges don’t support.
When connecting to a remote server, I rarely have less than four terminals with ssh connections open. When using my notebook as a thinclient at home to run X apps off my desktop I have the four terminals open and run all GUI apps off the various ssh command lines. I’ve even had Firefox crash then be asked if I want to restore the session hours later when I open it directly on the workstation. It’s not rocket science (er.. computer science?) by any means.
I find a second user or root shell easier than having to copy a program folder so I can run two separate instances of the same program. I do this also though with portableapps.. I’m currently testing two mail servers so I have two portableThunderbird directories open rather than using my Outlook instance as it’s busy with work related PIM.
On my osX machines, I truly love the copy-icon-from-.dmg install process. It actually stumped me for a while because it was so easy. Uninstall is another story. I haven’t had to ever open two instances of the same program as my osX needs are ssh, VLC and Safari when it’s closer to reach than another of my machines. It’s unix under the hood though so if you really want to learn your OS, find the terminal under utilities.
“Nix Fixes Dependency Hell on All Linux Distributions”
No. No it doesn’t.
I would love to one day have a single package management system for Linux. It would make it so much easier for developers of applications to not have to re-package to a deb, rpm, etc. It’s frustrating when downloading a program that isn’t in your distribution repository and it only comes in some package formats and not the one you need. Linux needs something like Windows’ .exe file: something that can be loaded on any distribution without compiling. Look at how this works when downloading the Adobe Flash plugin from their site. They have tried to accommodate various distributions, but it’s far from being user friendly.
Agreed totally on that one… the sad thing is that this ain’t gonna happen soon; one guy will rant about how he has “no problems” under Ubuntu, another will say “stick to RPM”, etc. What I dream of is a unified package manager with community managed sites… from which I could install e.g. the LATEST VERSION of Boost without problems.
“from which I could install e.g. the LATEST VERSION of Boost without problems.”
boost is a *library*. its releases are *not API compatible*. this means that when it gets updated, all the applications that build against it need to be rebuilt, and possibly patched for API changes.
the alternative, which is what this kind of scheme would require, is that every single app that’s built against boost includes its own private copy of boost. and if you wanted to go from boost 1.36 to boost 1.37 you’d have to change every single app to include a copy of boost 1.37 rather than 1.36, and the rebuild would take five times longer because you’d be rebuilding boost itself in every single one. and if you loaded two apps which both used boost, you’d get two copies of boost loaded into memory. or, five, if you used five apps.
this is the kind of thing I mean when I say people don’t know what the hell they’re talking about.
So it solves a problem that doesn’t exist (dependency hell) and adds features that already exist (Conary has rollbacks).
Y’know, BeOS has a very elegant solution.. albeit a rather unknown one:
The solution deals with the search order for dependency resolution:
1. The application folder’s own /lib subfolder (good idea)
2. User library folder
3. System library folder
Personally, I’d go a bit farther and require all libraries to adhere to a multi versioning system:
API Version ( source compatibility )
Symbol Version ( binary compatibility )
Library version ( same old thing )
If the API version changes, you know certain source changes may be needed, if the symbol version changes, you know that binary compatibility is somehow compromised – the library version itself is meaningless – except for bugs & whatnot.
You need three metrics: major, minor, revision for the API/Symbol version. Revision changes are forbidden from preventing previous code/linked apps from operating. Minor changes mean that there were changes which will prevent at least some source/apps from working without some changes. Major should be rather self explanatory 🙂
From there, each application, upon linking, will have encoded version information for every dependency it has – for the symbol versions, naturally. No exceptions.
When you install a new application ( via whatever means ) and run it the first time the system will resolve the dependencies based upon symbol versions for each dependency, failure to come to a match will result in an immediate option for resolution – just tell the user you need a different version of a given library ( or whatever ), grab it, and install it with a special name denoting its symbol version.
So, let’s say I download a DVD ripping software and it needs symbol version 1.2.1 of libdvdcss.so, and I have 1.3.0. I try to run the program, but it fails because I need an OLDER version of a library. At this point, an alert generally comes up with something along the lines of “Cannot execute program : missing symbols.” Instead, we come up and say “This program requires a different version of an installed library, would you like to grab this version automatically?”
Of course, one little side trick would be to automatically generate a lib folder per application, and create links within that folder for each dependency – to speed resolution. In this way you could easily keep multiple library versions, multiple versions of the same app (which have differing needs), and share all those resources well – and actually gain a little bit of a boost in performance.
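With today’s tools you can already approximate the “lib folder per application” idea with an $ORIGIN rpath, so the loader checks the app’s own folder before the system ones (the paths and library names below are hypothetical):

mkdir -p /opt/dvdripper/lib
cp libdvdcss.so.2 /opt/dvdripper/lib/                  # bundle the exact version this app wants
ln -s libdvdcss.so.2 /opt/dvdripper/lib/libdvdcss.so   # dev symlink so -ldvdcss resolves at link time
gcc -o /opt/dvdripper/dvdripper main.c -L/opt/dvdripper/lib -ldvdcss -Wl,-rpath,'$ORIGIN/lib'   # search ./lib next to the binary first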
One side effect, sadly, would be the extra memory usage some apps would experience because they are loading a library which would have normally already been in memory when the application was written – not to mention the obvious disk overhead that will be incurred – but that is the price one must pay for universal flexibility.
Now… only to get EVERYONE on board….
Would this be a case of “build it… and they will come”? I certainly think so.
–The loon
NeXTSTEP/Openstep was a snap to manage applications. NetInfo, on the other hand, being a relational database to manage network information was something you grew to understand.
Pretty much every *nix this side of 1998 has all of these. API and library version are the same thing, and are handled by using library versions, i.e. libfoo.so.2 is not the same as libfoo.so.3.
Symbol versioning has been a standard feature of glibc and other system libraries for seven or eight years now, i.e. if you look at the symbols provided by libc.so.6 you’ll see that they have a version string appended to them, such as strcmp@@GLIBC_2.0. If the ABI for strcmp() ever changed, you could introduce strcmp@@GLIBC_2.10 (or whatever). Old applications would continue to use the old version; new applications would be linked to use the new version, automatically.
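You can see this for yourself with binutils (the library path varies per distro and architecture):

objdump -T /lib/libc.so.6 | grep strcmp    # each export carries its version tag, e.g. GLIBC_2.0 strcmp
readelf -V /lib/libc.so.6 | head -n 40     # dumps the GLIBC_2.x version definition nodes themselves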
Why has nobody mentioned PC-BSD’s PBI packages ( http://pbidir.com )? Each package contains all the dependencies and libraries needed by that application. Yes, it takes a little more storage space, but you are 100% sure that it will work regardless of ANY update or upgrade. More here: http://www.pcbsd.org/content/view/20/26/
PC-BSD is a great system, and pbi files are impressive when you first use them,
but,
there are hardly any of them available. I find myself just using ports once again.
Maybe it was the thought of duplicating libraries on this laptop with only a 20 GB HDD… I know, minuscule, but it is from 2001.
PC-BSD runs sweet on this old dinosaur, but I did find I needed to upgrade it to 256 MB to make things work instantly.
Even on a 20 gig hd, duplicating libraries is not really an issue.
Well, yes and no. The disk space itself may not be that big a deal. (Though I’m not sure it’s totally insignificant. It’s not how much you have in total, but how any wastage, overall, compares to how much you’ve *got left* after you have everything you want or need on it. In that way, it’s kind of like money: $50 may seem like nothing at the beginning of the month… but like a lot at the end.)
But beyond that… let’s think about backup time. I believe in complete backups. No incrementals, or partial filesystem backups for me. Because there seems always to be something significant somewhere that I had not considered. These additional copies of libs take time to back up. I’ve moved to using rsync and usb drives, which helps a lot. But there is still a penalty for extra stuff that doesn’t really need to be there.
And then there is the shared memory issue. All these copies of different versions of libs have separate copies in RAM, sidestepping the normally *excellent* memory efficiency of unix.
And the security update issue. Each app, along with all of the libraries that it depends upon, needs to be patched separately, for each and every redundant copy of the library out there. Be careful not to miss any.
Shared libraries offer so many advantages that it seems a shame to miss out on them just to turn apps into independent blobs at this time when system package management works so well.
Edited 2008-12-24 20:35 UTC
The question is how many of these libraries are even 500k? The big ones (glibc, GTK, Qt, etc.) obviously stay shared. If you are talking about a 100MB overhead in both disk AND RAM, it is still almost irrelevant on a $400 machine you buy today.
The one place where the complexity of having everything shared really pays off is ease of upgrades. But again, if you don’t test against the upgraded library, you may end up fixing a bug in the library but introducing a bug into an application that links against it.
I am firmly in the “RAM is cheap, life is short” camp. Core libraries are centralized, everything else gets distributed with the application. That is why I like the OS X model, because it makes everything so damn simple.
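If you want to put a rough number on the “how many of these libraries are even 500k” question, something like this adds up the on-disk weight of the shared libraries one binary pulls in. The binary name is only an example; substitute whatever you like:

# Sum the sizes of the shared libraries a dynamically linked binary uses.
ldd /usr/bin/gimp | awk '$3 ~ /^\// {print $3}' | xargs du -Lch | tail -n 1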
The only time you should have a dependency error is when you build from source, and even then it will tell you what’s missing when you compile.
apt-get install someprogramname
synaptic
apt-get remove $(deborphan)
upgrade-system
That will fix you up with just about any package, whether installing, upgrading, or removing.
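And if you’d rather see what any of those would do before committing, apt-get can simulate the whole dependency resolution first (the package name here is just a placeholder):

# Dry run: prints what would be installed, upgraded or removed, changes nothing.
apt-get install -s someprogramname
apt-get -s dist-upgrade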
When you’re dealing with source packages, just read what the errors tell you, and if you can’t work it out, Google and other people WILL help.
I like aptitude and I don’t think I’m missing out on any kind of feature Nix provides.
As others have already pointed out, there isn’t any dependency hell in Linux package management. At least not anymore. It reminds me of a dozen other projects that tried and promised and withered into oblivion.
But there are still some minor problems. For example, some distros have different names for the same package, e.g. xine-lib versus libxine-blahblah. I have an idea: let’s decide on one name, and if anyone decides to ship it under a different one we will burn him at the stake (with full YouTube coverage, etc.) as an example.
Really, I mean I can understand the different filesystem structure thing, but this… is ridiculous.
Anyway, the next big thing in package management will happen when the big guys (Canonical, Novell, Debian…) sit down and discuss the issues. I don’t think some students doing their homework assignment have the authority or the ability to solve these matters, at least not in this particular case.
What if you do not use Bash as your *nix shell, but another one of the various shells, for example tcsh?!
IMHO the problem with Linux is not limited to certain package managers, file formats, or dependency hell. It’s deeper. It’s not technical. It’s the philosophy.
The vast majority of people see Linux as an “operating system” while it is not. A distribution is an operating system. In these terms, “Linux” is the underlying ecosystem in general: Linux is the kernel, and GNU/Linux is the technical ecosystem.
A distribution is what 2000, XP, or Vista is in the Windows world. The comparison is not very good, but it is as close as it gets. Taking the Darwin kernel and throwing GNUstep on top of it does not make OS X, Linux, or anything else, but a new operating system; call it Datoile, if you please. If there are several different implementations based on these sources, there are as many different operating systems.
As long as people can’t see the difference between an ecosystem and an operating system, there will be no “year of the Linux desktop”, with or without a package manager. And most people can’t see the difference. Different implementations based on the same codebase are not different versions of the same operating system but different operating systems, which may follow the same standards and be compatible with each other.
This leads to another issue. In the Linux world the distro maintainer (effectively the OS vendor) must support, and in most cases maintain, the entire application base, since the app developer may not support any specific distribution, or not that particular one.
If the application is open source, this is good and allows you to have applications on your system without support from the developer. However, it requires resources that could otherwise be used to provide a better core environment.
From the developer’s perspective, however, the number of incompatible distributions will lead to supporting only a small number of them, if any. In most cases it is appropriate to say one is developing apps for RHEL or Ubuntu, not Linux. The latter only happens to work out because the majority of such apps are open source and can be picked up and supported by the distro maintainer.
This is a chicken-and-egg problem which may seem easy to tackle with a universal package manager. But no sane person would come up with the idea of a package format that is good for both Windows and OS X. Why, then, think that it will solve the problem when the different operating systems happen to be based on the same kernel and some other major components (not necessarily the same versions across distros)?
“The vast majority of people see Linux as an “operating system” while it is not. A distribution is an operating system. In these terms, “Linux” is the underlying ecosystem in general: Linux is the kernel, and GNU/Linux is the technical ecosystem.”
This is simply wrong. Linux+GNU is an operating system, and it is what an operating system ought to be. A distribution is the operating system plus a selection of applications.
“A distribution is what 2000, XP, or Vista is in the Windows world.”
That’s about right, and that is a succinct statement of what is wrong with Windows. It is simply far too large. And that is what is killing Microsoft.
Maybe it can be. That’s what the FSF would like you to believe. But that description does not reflect most Linux distributions, which use the Linux kernel (of course), glibc, plus Xorg and a whole host of apps that the FSF tries, implicitly, to take credit for, when the credit is simply not theirs to take. This is even true of the FSF’s own “gNewSense” OS. The gaps in GNU’s functionality show up early. What’s GNU’s windowing system? Linux+GNU would lack much of what users expect, starting with a GUI.
I can see that you have bought into their slick propaganda, but I suggest you look more closely at the reality. GNU is but *one* of the pillars of what we normally call a Linux distro. They know they can’t actually upstage the term Linux; its popularity precludes that. But they apparently have no moral problem with stealing credit from less well known projects.
Edited 2008-12-24 19:23 UTC
Not to mention that the FSF seems to be completely incapable of producing a workable kernel. 22 years of work and still nothing usable.
GNU didn’t write Xorg but, hey, Wikipedia says that GNOME is part of the GNU project. So they do have a GUI while the poor Linux kernel developers don’t even have a command line (because Bash is made by GNU), let alone a GUI. (Torvalds likes to use KDE but he didn’t write it.)
Anyway, you should read “Free as in Freedom”[1], which is about Richard Stallman, because you seem to have trouble grasping his psychological profile. When you talk about “stealing credit” from someone, you’re describing some ordinary selfish bloke like Torvalds or yourself or me. But ordinary selfish motivations just don’t fit Stallman’s psychological profile. He’s a man of abstract ideas and lofty ideals; that’s what makes him tick.
Linus Torvalds wrote the early version of Linux for selfish purposes — to see if he could do it and to show off to others. The reason why people admire Torvalds as a heroic programmer is that he wrote a Unix kernel from scratch all by himself, without imitating others or asking help from anyone. So Torvalds’ psychological profile shows a selfish guy who is motivated by things like “fame” and “credit”. But if you read that “Free as in Freedom” book, you’ll see that Stallman intentionally avoided writing programs from scratch as much as he could. That’s because things like “fame” and “credit” never meant much to Stallman.
That’s actually why the Hurd kernel never really took off. At first the GNU project spent a long time looking for a kernel that someone else had written and that could be used in their GNU OS, and finally they got their kernel from Torvalds, who published his Linux kernel under a free license, the GPL. Torvalds wrote Linux because he wanted fame as a heroic programmer, but the GNU programmers never saw any sense in writing a complicated piece of software like a kernel from scratch.
The whole point of free software is that you don’t need to write every program from scratch — you can freely use code that others have written and modify it for your needs.
So comments you sometimes see, like “Hah, hah! GNU doesn’t even have a working kernel”, miss the point by a mile. Writing programs like kernels was never really a goal for the GNU project. Their actual goal was to put together a free OS that anyone could use, study, and modify at will. And now we have such an OS in GNU/Linux.
[1] http://oreilly.com/openbook/freedom/
It’s not (only) about dependency hell. It’s NOT… because the MAIN advantage is that you can INSTALL and UPGRADE different versions of a software package, with different library requirements, without breaking your current kernel and distro version.
This is something WINDOWS can DO, but not Linux (or FreeBSD)!!!!
Thanks for your careful thinking and reading.
er, Linux has absolutely no problem with doing that.
I guess you are not a Windows developer. This was a TERRIBLE problem in the ActiveX/COM era, which a lot of programs still use. .NET programs have the advantage of being able to load different versions of the same library simultaneously, and thus things are getting better.
As AdamW mentions above, Linux has no problem with this.
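A quick way to convince yourself: most Linux systems already carry several major versions of the same library side by side, each under its own soname. libstdc++ is a common example; the exact paths and version numbers vary by distro:

ls -l /usr/lib/libstdc++.so.*
# typically shows something like:
#   libstdc++.so.5 -> libstdc++.so.5.0.7
#   libstdc++.so.6 -> libstdc++.so.6.0.10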
p.s. I read carefully and I appreciate your thanks.
It seems that the author of the original article on linux.com got it wrong: today, the real benefit of the Nix package manager is not resolving dependency hell, but (1) a simple versioning system and (2) a simple means to manage software packages for testing/development across different Linux distributions. For me, Nix fits neatly into this niche.
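For anyone wondering what that versioning niche looks like in practice, Nix’s per-user profiles cover it with a handful of commands. This is only a minimal sketch; the package name is an example and output details differ between Nix releases:

nix-env -i hello              # install a package into your user profile
nix-env --list-generations    # every past state of the profile is kept
nix-env --rollback            # broke something? step back one generation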
The last time I ran into an unresolvable dependency condition on an RPM-based system was with Red Hat 6.2. Of course, with a lot of highly specialized and experimental DEB-based distributions and live CDs out there, this is still a more frequent sight on those systems. I lately had some problems with grml 1.0, still my favourite live system. That’s the fun with ‘testing’ and ‘experimental’ branches: things can break, so they can be fixed again.
Someone advertised Nix on the debian-devel mailing list, and a few Debian developers have already commented on it in that thread. The comment from Daniel Burrows (who develops Aptitude in Debian) is especially good:
http://lists.debian.org/debian-devel/2008/12/msg01027.html
The whole thread can be viewed here:
http://lists.debian.org/debian-devel/2008/12/thrd3.html#01007
meeeh.