“I suppose it’s a losing battle, but it’s one worth fighting, anyway. What makes me think of it is a thread I noticed on the freedesktop.org mailing list. In that thread, Andreas Pour, with whom I do not agree about much, defends obvious common sense against what over the last couple of years has been a growing onslaught. He’s absolutely right, but that isn’t always enough.” Read it at LinuxAndMain.
This will only be resolved if and when Linux distributions standardize. Till then it’s really up to each distribution to decide where they want to “put their toys”.
I like the idea of all add-on software going into a separate directory (such as “/opt”), as this makes it pretty easy to locate resources if and when needed.
If this occurred, then all apps that came with the distribution would be contained in a subfolder of “/usr”; all add-on programs would go into the aforementioned “/opt” folder (or whatever we call it).
This would make it a lot easier to keep programs separate from the “OS”, so to speak. At least as far as I’m concerned.
Some of the benefits of this would be similar to the benefits I realize by installing all of my Windows apps on a separate drive from that of the OS:
1. If you want to reinstall the OS, your program directories are safe. Granted, a number of them won’t work until they’re reinstalled, but an equal number will work and re-create their additional files as needed. And even for those which you must reinstall, the customizations and add-ons you’ve created will be picked up by the new install if you reinstall into the same directory.
Similarly, you can go as far as re-formatting your OS drive/partition without fear of losing your program configurations.
This isn’t possible if your program directory is mixed in with your OS files and “built-in” apps (à la “/usr/bin”).
2. By doing this you can also discourage some of the more amateurish viruses (with Windows 2000 and up, you can also help with this by installing Windows to a drive other than C:).
You can do this with the “/home” directory (assign it to a different drive or network resource), so it makes sense we should be able to do the same with programs, without having to heavily hack the system references.
My two cents…
Well, I come from the camp of where I compile and install pretty much everything from source tarballs. I do not use rpm, apt-get, or whatever else is out there to manage my packages. I run Slackware Linux 8.0 as my distribution of choice. I do not use the slack package manager beyond the initial install. Everything I get is installed the same…. ./configure;make;make install
If the install is simply a matter of copying a directory, I create a folder in /usr/local for the app, and copy it there. What could be simpler? I have complete control, and I am not relying on any package manager to mess it up.
Just my two cents.
Currently, I don’t think it makes much of a difference. The user shouldn’t care where the applications are actually located, because they can either invoke them from the $PATH, or depend on the automatic menu system that comes with most Linux distributions these days. Neither KDE nor GNOME themselves care. In various Linux distros, KDE has been installed in /opt, /usr, and /usr/kde/3, and everything works just fine. As long as you remember to pass the right --prefix to any apps you’re compiling, there isn’t really a difference either way. If something *has* to be chosen, for the sake of standardization, I think it should be installed in the /usr tree. Modern (apt, emerge, sourcerer, urpmi) Linux package management systems eliminate the need for the user to care where packages are installed; they’re just “merged” into the system. As a result, having everything in /usr is just cleaner (because there is only one applications directory) and it keeps the PATH simple.
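For example, if you’re compiling something yourself against a KDE that your distro put in /usr/kde/3, it’s just a matter of (the prefix is only an example; use whatever your distro actually chose):

    ./configure --prefix=/usr/kde/3
    make
    make install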
Everything I get is installed the same…. ./configure;make;make install
Yes, this works just fine until it’s time to uninstall something. This is the main reason I use a package manager. Half of the time, make install throws files into a bin, a share, and a man directory at least. I can’t keep track of where all these files go, and I don’t want to. Many tarballs don’t even have a make uninstall, and I don’t like keeping a bunch of needless tarballs around anyway for those that do. The traditional approach feels messy to me. Anything that I have to install from source goes in /opt, and I don’t even like doing that…
As far as the file system issue goes, I’m mostly interested in one thing.
All I really want is to have the things that are “installed” separated from the things I have to back up.
Simple example: I shouldn’t have to back up Word, but I should back up my .doc files.
I already have (or can easily get) a CD or tarball for installing programs; I shouldn’t necessarily have to back up those pieces that do not change.
I’d rather have the apps store their settings and whatnot away from their binaries, into a common area, but I’d also like them to not go into a soup like the Windows registry.
If I want to back up everything for an app, then I’d back up /{var|opt|usr}/common/app/*, ~/.app/*, and *.app.
This makes recovering the application back to the state I want it to be in pretty basic. Re-install the app, recover these data files from backup.
With a soup like the registry, I can’t do that as I have to backup the whole thing, and more importantly, recover the whole thing.
With the data files buried in with the application, I have to know where all of the “important” files are, and selectively back those up.
Stuffing them in a common directory lets me easily back all of them up at once, yet selectively restore them.
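As a rough sketch, with the hypothetical layout I described above (not anything a distro actually does today), backup and recovery would just be:

    # back up only the volatile per-app data, not the binaries
    tar czf app-backup.tar.gz /var/common/app ~/.app

    # recover: reinstall the app from CD or tarball, then restore its data
    tar xzf app-backup.tar.gz -C /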
Sure, you still want a full backup of your system, but, really, if you can separate the volatile stuff from the stagnant stuff, it sure lowers your archive burden and gives you a better idea of where everything important is.
is the one I use 😉
—
http://islande.hirlimann.net
I totally agree with the
– uninstall problems with tarballs
– settings separate from binaries
issues.
Keeping track of files to uninstall without a package manager or an uninstall script seems like a headache.
You would think settings all go to /etc (system-wide) or /home (user settings), don’t they?
<Linux Newbie>
The AppDirs from the ROX file manager work pretty nicely, but I don’t know how one could handle multiple binaries belonging to one package (ImageMagick, XFree86, …), or how one could handle libraries without wasting space.
I’ve been managing UNIX systems since long before Linux and 386BSD came out. I was the sysadmin at a university. We started out putting everything in /usr/local. After a while we saw that some software “trees” were large enough to be on their own, mainly TeX and gnu, so we added /usr/local/tex and /usr/local/gnu, leaving /usr/local for all the miscellaneous stuff. The nice thing about the subdirectories was that when we upgraded TeX or the gnu tree we tended to blow away everything and repopulate it; this avoided problems with old files contaminating new installs. Several folks felt that using both /usr/local and /usr/local/[gnu|tex] was messy since they weren’t both at the same level, so we added /usr/local/misc for the miscellaneous stuff.
Every time we upgraded the gnu software (gcc and friends) we had some people with code that depended on the old version. We came up with /usr/local/new (also sounds like gnu) for the newer version of gcc and friends, /usr/local/old for the old crap, and /usr/local/test for the version of gcc we were still testing out. This anarchy couldn’t continue; we ended up having to support 5 different versions of gcc because we had projects that started with each version and no time to rewrite the old code for newer versions. From there we went to /usr/local/gcc1.59, /usr/local/gcc2.2.2 … Since there were already subdirectories for each thing in /usr/local/*, we decided to drop the ‘/local’ part to shorten everyone’s paths, so we had /usr/gcc1.59, /usr/tex, /usr/misc and so on.
When Sun started the practice of using /opt for additional packages a huge debate started. Some argued that /opt should be for commercial applications only and locally compiled stuff should still go in /usr or /usr/local. Since I was boss I chose /opt and that was that. I was used to supporting 5 versions of gcc, 2-3 versions of TeX, plus some oddball in-house apps, all of which had different directories under /opt. When I changed employers I continued using /opt for locally compiled software. This confused everyone, and now that I’ve got everyone used to /opt/* I’m going back to /usr/local, but now I require all software installed in /usr/local to be packaged so that the bits and pieces can easily be removed. I no longer have to support multiple versions of locally compiled and installed software, and pkgrm is able to remove all the cruft when I rev software, so I’m back to where I began.
..but I still stick to /opt. It made my life consistently at least a little bit simpler. And not only on Solaris: for simplicity, our applications install in a directory under /opt on all the supported Unix platforms.
Why-oh-why is the Linux world so haphazard in its filesystem organization?
Don’t get me wrong, I love unix. I especially love OpenBSD. But that this is even an issue underscores how far Linux has to go before someone can be a ‘Linux User’ without it becoming a huge part of their life (which they’ll be devoting to figuring out messes like this).
In truth, that’s the real factor.
As long as the applications CAN be placed anywhere, and all of their wants and needs can be specified through configuration, then who really gives a rip where things go?
They go wherever you want them to go.
If I can enforce my happy standard upon the applications, then there’s no pressing need for conformity.
You should just be able to back up your entire home directory and be done with it. Most well-behaved applications only put certain global settings in global directories. Usually, you don’t touch these. All the local settings and stuff should just be in your home directory.
I couldn’t figure out any of the pluses or minuses from that article. In any case, the idea that KDE or GNOME don’t support pretty arbitrary placement is false; Fink, for example, uses subdirectories of /sw, so I’m pretty sure a distribution can put them wherever they want.
My personal opinion is that:
/opt should be for paid commercial software which needs a license manager, so that’s out
/usr/local should belong solely to the sysadmin and local developers so nothing should default there
OTOH I don’t see much downside to the /usr/X11R6 directories.
IMHO the best solution is
/usr/lib/KDE# has all of KDE, including executables and KOffice
/usr/bin/ and /usr/X11R6/bin/ have symlinks to the KDE and KOffice components
similarly for GNOME.
This, BTW, is pretty close to what they do now, so it’s a fairly minor change.
I find it easier to put commercial apps in /opt/commercial/…, compiled non-local apps in /opt, and home-brewed apps in /usr/local. Makes things simpler for me, but obviously from this thread we can see that what is “simple” for one person is not for someone else.
FWIW: what I do as a simple, non-expert Linux user who can compile:
1. I wipe partitions and install the OS.
2. I add/remove RPMs – they go under /usr for Mandrake/Red Hat.
3. I install 3rd-party, binary-only commercial apps like Acrobat, Sun JDK, RealPlayer – they go in /usr/local/XXX. This is good because I know they are there and “closer to my system” than step 4, but not as close as /usr (step 2).
4. I custom compile and install with --prefix=/opt/YYY. This is good because it ISOLATES the app from the rest of the system – I can have the distro-standard app and my own compiled version with its OWN resources and configs in /opt/YYY/(share|etc|bin). A quick sketch of steps 3 and 4 is below.
I like this; it seems to work and is simple and pretty logical.
I don’t want distros to put anything in /opt – that’s for ME!
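A minimal sketch of steps 3 and 4, with XXX and YYY standing in for whatever the packages are actually called:

    # step 3: unpack a binary-only app into its own directory under /usr/local
    mkdir /usr/local/XXX
    tar xzf XXX.tar.gz -C /usr/local/XXX

    # step 4: compile my own copy, isolated under /opt
    ./configure --prefix=/opt/YYY
    make
    make install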
Stacey..
I have been a Slack user for some time now and I too like to compile all my own stuff after the initial install. BUT I take it one step further for neatness and organization. Package that fresh, optimized software you compiled into a tgz. You then get the convenience of a “package” that you can uninstall/update more easily, PLUS the benefits of software compiled on your platform. Use the SlackBuilds that Pat includes in the source directories, and if one is not available, check
http://www2.tripleg.net.au/ for one.
I think this is also the ideal way for all distros, and it works with RPM and APT, etc. Best of both worlds. That is, IF you are obsessed with the minor advantages compiling provides… Sort of like your own source-based distro using your FAV distro!
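For anyone who hasn’t tried it, the basic flow looks something like this (the package name and the makepkg flags are only an example; check the man pages for the details):

    ./configure --prefix=/usr
    make
    make install DESTDIR=/tmp/build    # stage the files instead of installing directly
    cd /tmp/build
    makepkg -l y -c n /tmp/myapp-1.0-i386-1.tgz
    installpkg /tmp/myapp-1.0-i386-1.tgz    # and later: removepkg myapp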
I must say, I really like the NetBSD approach, where extra software (including libraries, include files, the lot) is installed in /usr/pkg. Better to keep the extra software separate from the bare OS.
Linux, like all Unix-derived OSs, suffers from having a file system that has grown organically from one that is 30-odd years old. The result is that it’s a disaster.
Windows started with a disaster and has taken some steps to clean up. It’s still a disaster, just less of one than it was.
Unix-based OSs, on the other hand, seem to take one step forward and two steps back in this area.
The core OS should have its own area, OS extensions should have another, applications a third and user data a fourth. Within the OS areas, files of a given type should be kept together in subdirectories. Within applications, files belonging to a single app should be kept together. Within the user area, preferences and application data should be kept separate, with preferences split into global settings and per-user settings.
This also simplifies backup, as everything that cannot simply be reinstalled is in the one place.
Not much chance of this ever happening with Unix-based OSs, but it’s an ideal to aim for.
/opt is simply stupid. It has no clearly defined purpose. If you want to use it for commercial, non-free stuff, OK — but better rename it to something else then, as people seem to disagree what it’s for. Besides, some commercial stuff wants to be installed together with non-commercial stuff (e.g. Flash, RealPlayer plugins), so this doesn’t work too well either.
/usr/kde is really perfectly analogous to /usr/X11. The structure should work as follows:
* Large packages that do not respect the FHS or have files that don’t fit into it get their own directory in /usr or /usr/local (if they’re installed by the user). Their binaries get symlinked into /usr/bin or /usr/local/bin (a small example follows this list).
* Small packages or those that do respect the FHS place their files in the respective directories (/usr/bin, /usr/share/doc, /usr/share/man, /usr/lib ..).
* Stuff that’s essential for the system to work gets put in /bin or /sbin.
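To make the first rule concrete, a large package would look roughly like this (the paths are only illustrative; every distro currently picks its own prefix):

    # the package lives in its own tree, e.g. /usr/kde/3
    # and its binaries get symlinked into the normal PATH:
    ln -s /usr/kde/3/bin/konqueror /usr/bin/konqueror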
/opt is unnecessary and stupid. Putting X binaries that are not part of the official X distribution in /usr/X11R6/bin is OK when done consistently, but not OK if X stuff is spread all over different dirs. For example, evolution in SuSE is in /opt/gnome/bin/evolution. Gah!
To sort this out, there should be a vote among all distro developers and then the decision should be final for everyone involved.
There is no clearly defined “core OS”, as all Linux distributions rightfully disagree about what is essential and what is not. But /bin, /sbin and /lib represent what the distributors regard as essential on Linux systems. The part that’s messed up is really mostly the /usr vs. /opt stuff and, in general, the placement of very large packages.
I don’t see how Windows directory structure is getting better. Apps and OS alike place their libraries (and OS its executables!) in the same dir, programs all have their own dirs and do not create links elsewhere, making it impossible to simply call a newly installed program from the command line. This, again, makes the command line almost useless unless you manually create batch files for each program you install. Furthermore, there’s no system wide package management — all apps have their own installer and uninstaller, and you just have to pray that it works. On Linux you have a competition of system-wide PMs, which is bad, but eventually one of them will prevail.
I am by no means an expert in this area but what Donald Milne is describing is AFAIK what Mac OS X offers today:
“The core OS should have its own area” = /System
“OS extensions should have another” = /Library
“Applications a third” = /Applications
“and user data a fourth” = /Users/username
“Within the OS areas files of a given type should be kept together in sub directories.” = /System/Library/Quicktime ; /System/Library/Extensions and so on
“Within applications, files belonging to a single app should be kept together.” = Bundles
“Within the user area, preferences and application data should be kept separate” = ~/Library
“with preferences split into global settings and per user settings.” = User preferences go in ~/Library/Preferences and global ones use /Library/Preferences.
I realise that Linux apps aren’t going to change overnight to support this kind of thing but I think it’s important to mention that this kind of thing exists now and works quite well in my experience.
Hornsby wrote:
Many tarballs don’t even have a make uninstall, and I don’t like keeping a bunch of needless tarballs around anyway for those that do.
This is my beef as well, although I don’t mind keeping gzipped tar files around — real estate is pretty cheap these days. The only solution I can think of is to email the developer and request an uninstall target when there isn’t one. I realize that the alternative is to dig through the makefile to see what make install does. [Ugh]
Providing an uninstall target in your makefile shows that you have confidence in your work and makes life easier for your users.
Al Dente wrote:
but now I require all software installed in /usr/local to be packaged so that the bits and pieces can easily be removed.
Would make uninstall do the job for you here?
You can get the advantages of both schemes by using a tool such as GNU stow or xstow; the idea is that you install into a prefix (such as /usr/local/packages/kde), and then stow creates symlinks in /usr/local that point into the files inside the kde package. Removing or upgrading software just means rebuilding the symlinks; you can even have multiple versions installed and switch between them by simply restowing the new version.
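Roughly, with KDE as the example package and version numbers and paths that are only illustrative:

    # install into its own prefix
    ./configure --prefix=/usr/local/packages/kde-3.0
    make && make install

    # create the symlinks under /usr/local
    cd /usr/local/packages
    stow kde-3.0

    # later: unstow the old version and stow a new one
    stow -D kde-3.0
    stow kde-3.1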
I’ve modified the GAR ports system to install packages using stow on my machine, and have built GARstow ports for most of the software I use (including things like XFree86 and Mozilla). For most packages, this is really quite straightforward, and it certainly makes upgrading and removing packages easy…
(Send me mail if you’re interested; it’s not really suitable for the average user, but I can throw a tarball of my ports tree somewhere if it might be of use.)
“I’ve thought for years that distributions — especially Red Hat — have been gradually and subtly working to make users reliant on binaries, Red Hat binaries, and have been gradually and subtly working to discourage users from compiling their own applications, desktops, and so on. Oh, yes, we’re all in favor of open source and all that, but you needn’t bother your pretty little head with actually building the stuff. Just rely on RPM.”
There’s no conspiracy here – the majority of Red Hat’s customers don’t want to build their software, and so the product has been designed so that they don’t need to! For the rest of us, the sources are provided; and the SRPMs make building easier for most people, as well as telling us (in the .spec file) exactly how the default binaries were built. And you don’t have to rebuild the RPMs – you can extract the source files and build step by step if you wish.
Oh, and the complete GPL’d source is provided, for the Red Hat binaries that they are supposed to be trying to make us dependent on.
What more do you want?
For those of you installing from source who want to be able to uninstall, there’s CheckInstall at http://asic-linux.com.mx/~izto/checkinstall/ . It generates RPM/DEB/Slackware packages from any source code that has a ‘make install’ or similar.
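Usage is essentially a normal source install with the last step swapped out (the -S/-R/-D flags select Slackware, RPM or Debian output, if I remember the options correctly):

    ./configure --prefix=/usr
    make
    checkinstall -S    # builds and installs a Slackware .tgz instead of running a bare "make install"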
I don’t see how Windows directory structure is getting better.
.Net improves some things, but I’ll just go with the basic Windows2k/XP setup for this discussion.
Apps and OS alike place their libraries (and OS its executables!) in the same dir,
This is an issue in some cases, but another issue is the fact that executables can be treated as libraries in Windows. This means, basically, that the distinction between a .dll and a .exe file is very small in many cases, until a user tries to directly execute them. An individual developer can improve that for their own software, though.
programs all have their own dirs and do not create links elsewhere, making it impossible to simply call a newly installed program from the command line.
Most programs create shortcuts in the user’s Start Menu\Programs folder, under their own subdirectories, which is reinforced by the Windows development guidelines. In fact, a shortcut to the executable is pretty much the only thing that belongs there (again according to the guidelines). You can execute the programs from there using the command line, but you have to include the .lnk extension, and shortcuts don’t pass command-line parameters to the applications (meaning you’re better off going to Program Files if you want to execute something from the command line, which is probably faster to get to anyway).
This, again, makes the command line almost useless unless you manually create batch files for each program you install.
I tend to spend a lot of time in the command line doing things that I could otherwise do in the GUI, but it’s really a matter of preference, and I rarely use it to launch programs that are not command-line applications. Then again, I spent a good 10 minutes setting up my command line, too.
Furthermore, there’s no system wide package management — all apps have their own installer and uninstaller, and you just have to pray that it works. On Linux you have a competition of system-wide PMs, which is bad, but eventually one of them will prevail.
Umm, Control Panel\Add/Remove Programs was the system-wide package management interface for Windows last time I checked. The only time it’s really an issue is if something is either not distributed in a package (installer) or if something has a misbehaving package (installer). The other problem is that Windows doesn’t have a universal update system to download and update all applications on the system (or even a large percentage of them, or all applications from Microsoft for that matter).
When it comes to .Net, things tend to move towards either using Microsoft’s installer or not using an installer at all (containing everything within the application’s directory and simply allowing the user to copy the directory over or decompress it to a single directory). Executables and libraries are still together, but versioning is maintained for .Net binaries (in case someone like AOL decides they want to replace a system library like they used to do with the C++ libraries in their ICQ installer).
The primary problem with the Windows directory structure is fairly similar to the problem in Linux. It’s not that the structure itself is bad, it’s that the different developers that can influence that structure don’t agree, or don’t follow existing rules, and both Linux and Windows have changed their own rules on a number of occasions. Most people don’t even know where the MS Office executables are stored on their computers (and MS doesn’t help in this case by stuffing them in Program Files\Microsoft Office\Office10 with a hell of a lot of libraries and other files), but they know how to run the applications. I have literally dozens of applications in Windows that seem to believe they should install themselves in the root of the hard drive Windows is installed on, and a few that even believe they should install themselves in the root of C:, even if that’s just a swap partition.
After viewing the comments I felt ONE argument was overlooked: what about partitioning your hard disk for a common home system?
1. swap partition
– no comment
2. root
Here should go the bare base system, with little file I/O on it. Why? In the unlikely event that your system crashes (think power outage), this increases the chance that your root partition is still useful. From this starting point you can work through the rest.
Requires about 40MB on NetBSD or 200MB on Linux.
3. /home
all your user data belongs there; having this as a separate partition makes it easy to:
– back it up
– nfs export it
– mount it in different OSes / installations you have on your machine
4. /usr
all the applications
If you set up a pure mail/print server you will probably have a /var partition mounted instead of, or in addition to, /home. But don’t make too many partitions, because you’d need to know in advance how much will go in each place, so you’re guaranteed to be wrong (according to Murphy).
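On Linux that might translate to an /etc/fstab along these lines (device names and filesystem types are only an example; adjust for your own disk):

    /dev/hda1   none    swap    sw         0  0
    /dev/hda2   /       ext3    defaults   1  1
    /dev/hda3   /usr    ext3    defaults   1  2
    /dev/hda4   /home   ext3    defaults   1  2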
Bottom line: if some application insists on populating /opt, I’ll probably symlink it to /usr/opt,
which is really ugly, but shows that every application should only live somewhere under /usr/…
–
rudi
Well, all this seems to be a case of mutually exclusive choices. The directory tree is an extremely useful organisation of the namespace of files, but it obviously has its limitations.
The way I see it is that you have:
– organise by file type (bin, lib, etc.) -> the /usr way
– organise by package -> the /opt way
What I’m wondering, is: do we have to choose?
Some possible solutions:
– use /opt, and create symlinks in /usr pointing into /opt. It would be quite hard to manage this huge abundance of symlinks though; not nice.
– make it possible for a file to have multiple directories associated with it. A file would be able to exist in two places at the same time. But that would probably break a host of semantics.
– associate the package with a file through filesystem metadata. You could create an /opt as a view on the filesystem, à la BeOS’s “Live Queries”.
– Following up on my previous point: since both RPM and DPKG keep a mapping of packages -> files, you could create a user-space filesystem (rpmfs? debfs?) that shows a view of the packages, and mount that as /opt.
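Both package databases already expose that package -> files mapping from the command line, which is exactly the information such a filesystem would present (kdebase is just an example package name):

    rpm -ql kdebase     # list every file owned by the kdebase package (RPM systems)
    dpkg -L kdebase     # the same on a Debian system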
As for the different depths, as in /bin vs /usr/bin vs /usr/local/bin: I believe it’ll soon be possible in Linux to keep them on separate partitions but “merge” them when mounting them on the same mountpoint; I’m not sure what it’s called (union mount? shadowfs?).
All the advantages of keeping separate partitions, while keeping only one level in the organisation. If the package manager needs to install something on a specific partition (the boot/root partition for instance), it can simply mount it separately somewhere else, and install it there. (Multiple mounts are possible too)
I’m sure technical advances in filesystems will allow us to have simple, clear and uniform views on our filesystems, while retaining as much of the advantages a complex organisation offers.