UNIX’s method of handling file systems and volumes gives you an opportunity to improve your systems’ security and performance. This article explains why you should split your disk data into multiple volumes for better performance and security.
And here I thought IBM was unable to make product-neutral articles. A nice little “getting started and understanding the UNIX file system” article. I am just not used to seeing an IBM article that didn’t shamelessly plug AIX like crazy.
I have to agree, this was a very nice and informative article. I did find one error, regarding /usr/local and how it’s used, but *BSD uses /usr and /usr/local in a very nontraditional way, at least compared to other UNIX systems. The article is correct for non-BSD OSes, however.
I’d rather say it’s a different interpretation of a concept, not an error.
BSD differentiates between “the OS” (core system) and “everything else”. The latter is completely (!) situated inside /usr/local.
See: http://www.freebsd.org/cgi/man.cgi?query=hier&apropos=0&sektion=0&m…
Linux doesn’t have such a differentiation. Here, even the OS part is usually delivered as packages (or something similar). The creator of a distribution decides what belongs to the distribution. Therefore, you can see different uses among the different Linuxes. Some use /opt, others don’t.
On Sun Solaris systems, there are yet other substructures in /usr that nobody outside the Sun environment knows. 🙂
There are other interpretations, such as “use /opt for things that you’ve compiled yourself that aren’t available from your distribution’s source repository / ports system” or “the home directories are in /export/home”.
Among the UNIXes, and between UNIX and the Linuxes, there are differences, but if you understand why these directory structures exist, what they are intended for and what their historical reasons are, you will have no problem finding your way, no matter which UNIX or Linux you’re using. I think this article gives a good introduction to this topic.
I take your point, I should have phrased it differently. It is a good article and introduction.
That’s pretty much how things are for me on OpenBSD. Ports and packages go into /usr/local.
For applications that are compiled by hand, I put those in /software, which is a convention that I started some time ago after I ran across some software that defaulted to using /opt (cdrtools, IIRC). I know some people will be horrified by this, as it’s not the norm, but that’s how I prefer it.
IBM tends to write some very good articles about UNIX topics.
My understanding of the intent of /opt.
There’s the distribution… mainly in the normal places.
However, large subsystem packages that you want to treat as “self-contained” (with their own bin, lib, and so forth) would install themselves in /opt. These are packages provided through some kind of “officially” supported means.
/usr/local, if present, is for the end user to house applications… which might include entire subsystems.
Obviously, when it comes to *ix, nothing is set in stone though.
The advantage of not lumping everything into /usr… or /usr/local (depending on OS) is that you might get more flexibility with regards to storage isolation and security.
With regards to performance, if you have multiple drives and depending on your RAID (or lack thereof) setup, you might want to consider something like:
/bin
/lib (different disk from /bin)
/usr (mainly bin)
/usr/lib (different disk from /usr/bin)
/opt/* (alternate disks potentially for large subsystems)
This has the potential of speeding up program loads.
Just something else to consider.
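A sketch of what that layout might look like in /etc/fstab (device names and file system types are hypothetical examples, not from the article):

```
# Hypothetical /etc/fstab for the layout above; device names are examples.
# The idea: binaries and the libraries they load sit on different spindles,
# so the reads can overlap instead of competing for one disk's head.
/dev/sda1  /         ext3  defaults  1 1   # includes /bin
/dev/sdb1  /lib      ext3  defaults  1 2   # different disk from /bin
/dev/sda2  /usr      ext3  defaults  1 2   # mainly /usr/bin
/dev/sdb2  /usr/lib  ext3  defaults  1 2   # different disk from /usr/bin
/dev/sdc1  /opt      ext3  defaults  1 2   # alternate disk for large subsystems
```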
/opt is one of those directories that really doesn’t have a set use. Some systems, e.g. Slackware, install large packages such as KDE into /opt; others don’t. Some, such as CRUX Linux, use /opt the way others use /usr/local (/usr/local is nonexistent on CRUX).
On Linux, I use /opt like /usr/local, but for software that is to have its own structure. Managed packages don’t go there; /opt is for self-compiled software that, for whatever reason, needs to be kept under its own folder. For instance, MySQL goes into /opt/mysql if I compile it from source. Same with GNOME: if I wish to be on the bleeding edge, it goes under /opt/gnome if I’ve compiled it. I do not put other structures under /usr/local; that is reserved for self-compiled software that fits the standard folder structure.
For *BSD, I use /opt like /usr/local on Linux, as /usr/local is reserved for installed ports and packages, not for locally compiled software outside the control of the ports tree.
Naturally there are no set rules involving these directories. That’s just the way I do it. Ah, the wonderful world of the FHS, eh? Some parts of it are not standard at all.
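The per-package /opt habit described above is really just a matter of the install prefix plus PATH. A minimal sketch; a scratch directory stands in for /opt/mysql and a stub script stands in for the compiled binary, so it is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the /opt-per-package idea: give each self-compiled package its
# own prefix and put its bin/ on PATH. "mysqld" here is a stub, not real MySQL.
prefix="$(mktemp -d)/mysql"            # in real use: ./configure --prefix=/opt/mysql
mkdir -p "$prefix/bin"
printf '#!/bin/sh\necho mysql-from-opt\n' > "$prefix/bin/mysqld"
chmod +x "$prefix/bin/mysqld"
export PATH="$prefix/bin:$PATH"
mysqld                                 # prints: mysql-from-opt
```

Removing the package later is then just removing its one directory, which is the main appeal of the convention.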
If you have a set of heavily used files (e.g. the paging file and various OS/application executable files) you actually want to ensure that they are in the same partition toward the beginning of the drive. Splitting a single disk into multiple partitions is not a great idea unless most of those partitions are lightly accessed because of the seek latencies involved.
I’ll comment on your comment’s title. There are valid standpoints claiming there’s no need to partition at all: just put everything into one partition. The obvious advantage is that you cannot run out of space on a specific partition. A counterargument is that you then need either file-based tools for backup, or you back up the partition as a whole, including stuff you may not want to back up. The “many partitions” approach allows backup and restore partition-wise, but you need to think about the sizes of the partitions at layout time, before you start using the system.
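The partition-wise backup idea can be sketched with ordinary file-based tools. In this sketch a scratch directory stands in for a separate /home filesystem, so the commands are safe to run anywhere:

```shell
#!/bin/sh
# Sketch: one archive per filesystem, so /home can be backed up and restored
# without touching /, /usr, etc. A scratch directory stands in for /home.
root="$(mktemp -d)"
mkdir -p "$root/home/alice"
echo "important data" > "$root/home/alice/notes.txt"
# Back up just the "home partition":
tar -C "$root" -czf "$root/home-backup.tgz" home
# Restore it selectively into a fresh location:
restore="$(mktemp -d)"
tar -C "$restore" -xzf "$root/home-backup.tgz"
cat "$restore/home/alice/notes.txt"    # prints: important data
```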
An important idea is to have a / partition that contains everything necessary to bring the system up after a crash, so you can boot into maintenance mode (single-user mode) and do basic repairs. The content of /usr, and further of /usr/local and especially of /opt, is not needed for that.
Another idea is that crashes and file system defects then do not affect one “all-in-one” partition, but only one of several partitions, which can be helpful in some situations, especially when you would otherwise have to restore a huge amount of data.
On the other hand, I wouldn’t suggest putting many partitions onto one physical drive, at least not more than 5. That’s my very individual suggestion; other points of view may suggest other behaviour, which can, depending on the setting, be correct as well. The partitions /, /tmp, /var, /usr and /home are enough. In some cases, /tmp is mapped into RAM, and /home is on a different disk.
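That five-partition scheme, with /tmp mapped into RAM and /home on a second disk, might look like this in /etc/fstab (device names, file system types, and sizes are hypothetical):

```
# Hypothetical /etc/fstab for the /, /tmp, /var, /usr, /home split.
/dev/sda1  /      ext3   defaults          1 1
/dev/sda2  /var   ext3   defaults          1 2
/dev/sda3  /usr   ext3   defaults          1 2
/dev/sdb1  /home  ext3   defaults          1 2   # second disk
tmpfs      /tmp   tmpfs  size=512m,nosuid  0 0   # /tmp in RAM
```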
Please don’t use multiple partitions. Linux did this; I never really understood why. When OpenBSD did it, it marked some partitions as unable to contain suid files (like /tmp), which made sense. But it was still a pain: when one partition runs out of space you are f–ked even though the disk as a whole has enough space, and you end up with symlinks to other partitions because there is no other simple solution.
And then I discovered ZFS. Problem solved.
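The reason ZFS sidesteps the sizing problem is that datasets are separate mount points drawing on one shared pool of free space, instead of fixed-size partitions. A sketch (pool, device, and dataset names are examples; this needs a spare disk and root privileges, so it is not runnable as-is):

```
# Hypothetical ZFS layout: separate mount points, one shared pool of space.
zpool create tank /dev/sdb              # one pool on the whole disk
zfs create -o mountpoint=/home tank/home
zfs create -o mountpoint=/var  tank/var
zfs set quota=10G tank/var              # optional cap, adjustable at any time
zfs list                                # all datasets share the same free space
```

So you keep the isolation and per-filesystem backup benefits discussed above without having to guess sizes at layout time.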