GoboLinux is a distribution which sports a different file system structure than ‘ordinary’ Linux distributions. In order to remain compatible with the Filesystem Hierarchy Standard, symbolic links are used to map the GoboLinux tree to standard UNIX directories. A post in the GoboLinux forums suggested that it might be better to turn the concept around: retain the FHS, and then use symbolic links to map the GoboLinux tree on top of it. This sparked some interesting discussion. Read on for more details.
As the poster explains: “Gobolinux tries to replace the legacy layout, instead of going the way of stacked layouts, giving us yet another standard layout, as if its underlying reasons were superior than FHS’. I think they’re not.” The reasoning behind supporting the FHS is that it makes it easier for programs to find libraries and other files in standard locations. “FHS’s goal is to allow apps to find their files and other apps’ files and libraries in a portable way among Unix systems. It’s a _functional_ layout aimed at portability and interoperability.”
Several people chimed in to explain that the FHS does not, in fact, provide the confidence of finding files in expected locations. As user Shevegen explains:
There is no real confidence. Why do some programs have directories under /etc and others do not? It makes no sense. Why did the FHS make an exception for X11R6 in /usr? Why does there even exist any /usr or /usr/local debate WHILE keeping the /opt distinction? Why am I forced to keep the distinction of /bin vs. /sbin? Do I need to use the FHS-suggested way for /boot? What if I choose to use only one partition anyway, and if I am the sole user of my system, where in an extreme case I would not even need ANY new user at all? It is a layer of ugliness upon ugliness.
Shevegen notes several other problems with the FHS, such as the inability to run different versions of the same program side by side. “One huge problem of the FHS is that it does not easily allow one to have multiple versions of a program installed. This is what has led to the whole .so.1.2.3 mess as well,” he explains. “It is a reason why on a typical Debian system one finds a NEW symlink called ruby under /usr/bin which points to a ruby1.8 symlink (or vice versa). If one compiles ruby from source, he does not get any such arbitrary symlinks.”
Personally, I have invested time in coming up with more elegant ways to organise file systems too, but seeing as I’m not a programmer, my work is of a theoretical and conceptual nature. The reason I have issues with the FHS has to do with the fact that it is simply unclear. One directory is supposed to be for files of type xyz, but exceptions are made all over the place, and to make it even more fun, different distributions have different exceptions, yet in the end, all still comply with the FHS. It is a god-awful mess.
The three-letter directory names in UNIX-like operating systems are a relic of the past that should have died out and rotted away a long time ago. We Linux, BSD, and (yes) Mac OS X users are still stuck with a system that predates the coming of Christ, and the only reason we still have it is because people are too afraid to make the big step and come up with something that is – in every possible way – better than the FHS, but still compatible. In a very brave and commendable effort, GoboLinux has done just that, but instead of being praised for bringing the filesystem structure of Linux systems to an acceptable, modern level, it gets ridiculed and frowned upon as if its developers were some sort of heretics. To me, it is absolutely mind-boggling that distributions like openSUSE, Fedora, and Ubuntu call themselves “user-friendly” while still maintaining a directory structure that requires a degree in computer history to even remotely understand – including all its exceptions and quirks.
In many ways, the FHS resembles my native language, Dutch. Dutch in and of itself isn’t a particularly difficult language – it has a lot in common with both English and German – yet foreigners from both these countries have extreme difficulty learning Dutch. That difficulty stems from a very simple fact: Dutch contains more exceptions than rules. Sure, Dutch has a set of clear rules, but each of those rules has dozens of exceptions, those exceptions have exceptions, and those in turn have exceptions that appear to conform to the original rule, but in fact don’t. There’s a reason why I write (almost) more comfortably in English than I do in Dutch.
Getting back to the original discussion, the idea of using symbolic links to ‘cover up’ the original FHS is a clear-cut case of band-aid fever: it more or less admits the FHS is not very user-friendly, but instead of actually fixing it, it just hides it for a while. In many cheesy movies and TV shows, you see cheating husbands flip over any photographs they might have of their wives. While this may postpone the feelings of guilt for a short while, it doesn’t actually make them go away forever. Using symlinks to hide the FHS is like flipping over that picture of your wife. Or, as Shevegen illustrates:
I simply think the whole Linux world as such is like digging a tunnel in a mountain. One day you realize the tunnel is going in the wrong direction, and water breaks into the tunnel, but changing the direction of the tunnel requires too much effort, so you dig and dig and dig and use cheaper materials to stabilize the tunnel quicker, in order to grow it faster. You make more mistakes this way, but you can’t change anymore; you dig faster and faster and faster.
When I first started using Linux, I was confronted by dozens of virtually incomprehensible folders outside of my home folder. I know my way around, vaguely, now, and the names sort of made sense, but it’s still a bit of a shock to new users. When I heard about Gobo about a year ago, I liked the idea, but I didn’t want to move to another distribution. The fastest way to propagate the idea would probably be to write a shell script that creates all the necessary symlinks in almost any distribution. That way, people can try the new hierarchy on for size without leaving their current comfort zone.
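Such a script is easy to sketch. Everything below is illustrative: /Programs and /Users are real GoboLinux names, but the exact mapping is a simplification (Gobo’s actual tree is richer – e.g. settings live under /System/Settings, not a top-level /Settings), and the script deliberately works on a scratch directory rather than a live root.

```shell
#!/bin/sh
# Sketch: overlay GoboLinux-style names onto an existing FHS tree via symlinks.
# ROOT defaults to a scratch directory so this is safe to experiment with;
# doing it on a real "/" would require root and considerably more care.
ROOT="${ROOT:-./gobo-test}"
mkdir -p "$ROOT/usr/bin" "$ROOT/etc" "$ROOT/home"

# Friendly names pointing at the traditional directories (relative links,
# so the tree stays relocatable).
ln -sfn usr/bin "$ROOT/Programs"   # simplification: Gobo's /Programs is per-app
ln -sfn etc     "$ROOT/Settings"   # Gobo actually uses /System/Settings
ln -sfn home    "$ROOT/Users"
```

Note this is the reverse of what GoboLinux itself does: Gobo keeps its own tree as the real one and creates the legacy FHS names as symlinks into it.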
This idea isn’t bad in general, but its major problem would be the inconsistency of the naming conventions and hierarchy layouts among the different Linux distributions. While most of the arbitrary (but well-intended) names of directories are largely the same, their content or their presence may differ. For example, some distributions feature /opt, others don’t; some place libraries here, others there.
(By the way, PC-BSD has done something similar to FreeBSD with their packages installed via the PBI system – introducing /Programs while keeping the compatibility to the standard system hierarchy.)
While I do like this concept in general, sometimes I wonder whether there’s a need to do this. On one hand, the users who are familiar with the Linux/UNIX file system hierarchy don’t need (and don’t even want) complicated names for the places they need to access; on the other hand, novice users, who generally live almost entirely within their home directory, feel no need to dive into the system’s hierarchy – why should they?
“its major problem would be the inconsistency of the naming conventions and hierarchy layouts among the different Linux distributions.”
That’s the problem with the many distros. One distro can’t rock the boat as long as everyone else offers the safe status quo.
That’s the advantage of the many distros. One distro can try new things while everyone else still offers what people are used to.
GoboLinux isn’t meant to be a mainstream distro for everyone, but advertises itself as a Linux distro for relatively advanced users (i.e., a minority). So basically this could only be a problem for those few experienced GoboLinux users who have already decided that they like the Gobo ideas, and thus it’s likely not a huge problem for them.
What you could do is install rootless, and basically have GoboLinux live in your home dir.
That’s how GoboLinux got started, btw. Hisham needed a way to manage software compiled inside a home dir on a terminal server.
Then it got scaled up to do a full system, initially based on transforming a Red Hat install, IIRC.
But later on it was recreated using Linux From Scratch, and that’s the basis of the GoboLinux we have today.
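The rootless idea can be sketched in a few lines of shell. This is only an illustration of the layout, not GoboLinux’s actual Rootless script: “hello” is a hypothetical program, and the base directory is a scratch path standing in for a real home directory.

```shell
#!/bin/sh
# Rootless-style layout: each program lives in its own versioned directory,
# with a bin/ of symlinks placed on PATH. BASE stands in for $HOME here.
BASE="${BASE:-$PWD/rootless}"
PREFIX="$BASE/Programs"
BIN="$BASE/bin"
mkdir -p "$PREFIX/hello/1.0/bin" "$BIN"

# Stand-in for a real installed binary (hypothetical program "hello").
printf '#!/bin/sh\necho hello 1.0\n' > "$PREFIX/hello/1.0/bin/hello"
chmod +x "$PREFIX/hello/1.0/bin/hello"

# Expose the chosen version on PATH via an absolute symlink.
ln -sfn "$PREFIX/hello/1.0/bin/hello" "$BIN/hello"
PATH="$BIN:$PATH"
hello   # prints "hello 1.0"
```

Switching versions is then just repointing one symlink, with no root privileges involved at any step.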
I think the Linux world should adopt the Gobo layout one day. It is so much clearer and easier to manage. Well, maybe we can come to a compromise. Lowercase syntax?
zsh tab-complete can deal with that.
The only problem I have bumped into is that there is a lowercase compile command…
Following the Unix structure is kinda silly, especially with many people using Linux for a desktop, appliance, virtualized service, or amateur/low-end servers. The Unix structure is a throwback to the old mainframe days, where if you had a computer you crammed as much as possible onto it. Today computers are so cheap and common that it is better to have a file system designed for ease of administration.
Gobo is a fantastic concept, but I wasn’t able to get it to run stably the two times that I tried it in the past. The OS was stable, but I had a lot of application crashes :/
Maybe Ubuntu could adopt the structure in 3 or 4 versions? It really has to be done for mainstream acceptance, IMO.
One of the nice things about the Amiga OS is that the user was taught to expect certain files in certain “assigns”: system commands in C:, scripts in S:, libraries in LIBS:, device drivers in DEVS:, etc. (I think “assigns” were roughly equivalent to symbolic links to directories in Linux.) Developers followed a convention of installing their software in their own drawers and creating a new label (an “assign”) by which the user could easily access it, like FINALWRITER: or DOPUS: or whatever. See http://en.wikipedia.org/wiki/AmigaOS#Conventions_of_names_of_device…
Two things amaze me about this system: (1) Amiga developers followed this reasonable convention nearly unanimously, and (2) no other platform seems to have noticed, or else no other platform’s developers seem to respect whatever conventions there might be.
This is a good example of what’s wrong with Linux and the Unix FHS replacement efforts.
It’s not enough to have something that can work. It’s not enough to have something that works for all use cases. It’s necessary to have something where it’s *obvious* what is correct.
Let me give an example: What goes in /usr/local and what goes in /usr? Depending on the Linux distribution or *nix variant, it varies. What about /opt? Again, it varies. Sometimes it varies per-application. The developer–much less the user–has no idea what should be put where, or where to look for what. Since nobody can figure out what to do, or figures out something different, everyone does it their own way.
There has to be a right way to do things, AND it has to be *obviously* right, AND people have to actually follow the convention. With few or no accepted conventions, and little agreement even on the conventions that exist, developers and users are left hopelessly confused.
Gobo has an interesting approach, but it is far more useful as a conversation-starter than as a final solution. What we need more than anything is this Amiga-like consistency that you mention: a way of doing things which (1) makes sense, (2) is obvious, (3) everyone knows, and (4) everyone uses.
Without that everything will always be a mess.
The Elektra project is an example of an attempt to solve this problem for configuration. It has its issues–including the fact that it is culturally offensive–but represents an understanding of the idea that having well-defined and “correct” ways of operating makes life easier for everyone. Violations of convention are always going to happen, but hopefully they happen when it makes sense and are the exception rather than the rule.
A nice thing about open source in general is that if you believe you can do it better you don’t have to convince anyone, you can just *do* it. But, sooner or later, for your idea to really win it has to be adopted and ‘take over’. Gobo presents an idea about the FHS and how Linux can be more desktop-user friendly. It works, but *everything* works, even the bad-old traditional way. Gobo has failed as an initiative because it has failed to ‘take over’; it’s just Yet Another *incompatible* idea about how to do things.
The other nice thing about open source is that, in general, ideas survive on their merits *only*. Gobo didn’t take over because it has deficiencies that are too broad and are all too clear. I don’t think its *idea* was very bad, but the specific solution was insufficient. The thing to do next is to take what we’ve learned from Gobo and from people who hate Gobo and try again, and again until the solution’s advantages outweigh the disadvantages AND trump the transition costs.
Some things will never change and some people will never switch to any ‘improved’ FHS, but if a plurality of users and distributions can be convinced then real progress can be made. Here’s another example: Upstart. Everyone knows that sysv init sucks, or at least has its problems. Several attempts have been made to introduce improved replacements. Upstart is Yet Another attempt, but… it solves enough problems and has few enough disadvantages that it is now gaining broader acceptance. It’s also relevant because it unifies and standardizes in a meaningful way that which was previously very, very nonstandard.
I’ve been using Linux for several years now, and I’m past the stage where I like to tinker around just for fun. Now, I just use it to do work. And play. And guess what? I almost never find myself in parts of the filesystem besides my home directory.
I don’t care where the package manager puts stuff. As long as an entry shows up in the applications menu. In fact, I just don’t care about the structure much at all. And a lot of other future Linux users will be the same.
If developers feel something needs to be improved about the filesystem hierarchy, if it’s hindering forward progress, then by all means, improve it. But saying that it needs to be changed because it’s confusing to “regular” users is silly, IMO.
This is the general theme with UNIX. It’s great for those who have no problem with grep and awk on the one hand, or for total newbies who are satisfied with the apps that come with Ubuntu on the other, but the problems begin once you get a bit beyond that. It’s either for newbies or the total hard core, with no in between.
Edited 2008-08-19 01:21 UTC
I agree – it seems to me that anyone who actually cares about the filesystem layout should be able to learn it easily enough. And anyone who doesn’t care, shouldn’t have to. My home directory contains a Documents folder, a Music folder, a Photos folder, etc. From the GUI, that’s all I ever need to use… I never need to worry about the difference between /bin and /sbin and the like, because I never see them unless I drop to the command line.
We care once we have to install new fonts to LaTeX and somehow figure out that these are installed in /usr/share/texmf/fonts/… or something similar. Moreover, since that directory is protected against ordinary users, you have to have sudo access. Good luck if you’re not the administrator and he thinks he has more important things to do than install your font.
Of course there are ways around this BUT the point is that it makes for an incredible hassle. If your system simply had a user-accessible virtual directory named “/fonts”, and you could simply drag and drop your font there and have it instantly accessible to every program in the system because the system would by itself determine where you had permission to place the font, that would be preferable.
So I care.
You almost have that now. If you drop files in your .fonts/ directory in your home folder… most newer apps will pick those up right away if they’re TTF.
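For the record, the per-user route just described amounts to this on fontconfig-based systems; `MyFont.ttf` is a placeholder for whatever font file you actually have.

```shell
#!/bin/sh
# Per-user font install: no root, no digging through /usr/share/texmf/...
FONT="${FONT:-MyFont.ttf}"          # placeholder name; substitute a real .ttf
mkdir -p "$HOME/.fonts"
[ -f "$FONT" ] && cp "$FONT" "$HOME/.fonts/" || true

# Refresh the per-user cache if fontconfig's tool is present; many newer
# applications notice new fonts in ~/.fonts without this step.
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$HOME/.fonts" || true
```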
Great. Drop your fonts into a directory that, by default, is hidden from any listing. This is exactly the problem I’m talking about.
Just browse to fonts:/// and drag’n’drop.
An artificial, hidden location. Great usability.
I’d rather go to ~/.fonts, but if showing hidden files is too much, then, well, it’s not exactly hidden:
http://img229.imageshack.us/img229/6599/fontsfc1.png
Considering all user config files are stored in dot prefixed directories in ~ it is also consistent, although it might not be your thing.
You know… the fonts directory on Windows is hidden too, since it’s in C:\Windows, which is protected by default as a “system folder”.
Also, try installing a font without being an administrator on Windows.
I think the Linux way is better. Perhaps they could put a link to the ~/.fonts location in the preferences menu on Ubuntu which would solve all your complaints?
Allowing a regular user to install anything that can be used by other users is a security concern.
For example, imagine there was a security bug in the font rendering library, and someone created a font that exploits it to scan all of a user’s accessible files for credit card numbers. If you allow any user to install a global font, every other user on the system can be affected; however, if a user can only install fonts for themselves, then only they can be affected.
“We care once we have to install new fonts to LaTeX and somehow figure out that these are installed in /usr/share/texmf/fonts/… or something similar. Moreover, since that directory is protected against ordinary users, you have to have sudo access. Good luck if you’re not the administrator and he thinks he has more important things to do than install your font.”
Well, actually, to use your example, fonts can be installed in the home directory without SysAdmin interference.
We usually do not “think” we have more important things to do, btw, as our job is to make sure the network and servers are up and running. You can install your font without issue on any major Linux distro, by yourself, without assistance and without hacks.
I think you (and most of the respondents) are missing the point, and I didn’t mean all admins personally. In fact many of them have much more important things to do than install fonts.
The example is meant not about LaTeX per se (which I like and use every day) but about where it is normally installed, and how the OS interfaces with the user in ways that make no sense except to insiders, when alternative conventions are available.
“The example is meant not about LaTeX per se (which I like and use every day) but about where it is normally installed, and how the OS interfaces with the user in ways that make no sense except to insiders, when alternative conventions are available.”
You are right, I did miss your point. In another post I mentioned the reason behind the different bin directories, /bin and /usr/bin specifically is what I mentioned. I definitely agree it would be much easier and less confusing to have a single /bin directory. It could be called whatever you wanted it to, though to me bin (short for binary) makes sense, as it tells me there are binary files in that directory. The names all came from the days when you had to use short names. The only thing I am adverse to is making it like other systems because people refuse to learn.
I also wasn’t disagreeing with your assertions about the filesystem being unnecessarily complex. I agree with a lot of the points you have made.
I was just pointing out that globally accessible file locations for anything are a security risk. I was just using your font example to point that out.
The biggest issue with improving the FHS is inertia. Trying to change something that works, even if not optimally, may “cost” more than the change is worth. Think of the cost of fixing all the software that has made assumptions about the locations of certain files. Hence, trying to point out that something could be better will constantly be met with arguments for keeping it the same. Legacy is a bi@tch.
I admire the intentions behind GoboLinux and I understand the reason for the path they have chosen using symlinks. It’s a hack, but in order to maintain compatibility with a lot of software, it’s a necessary evil. I just sometimes wonder if the cost is really worth it. Is the existing FHS really so bad, is it costing us so much, that we should undertake the effort needed to make everything work in a more elegant fashion? I don’t have the answer; I’m just rewording the question.
TeX directory structure has nothing to do with UNIX layout: it uses kpathsea to locate files (by filename, not directory) wherever they are, once they are added to the kpathsea ls-R database.
Last time I checked, the fonts dir on GoboLinux is /Files/Fonts…
But I admit, I have had no reason so far to install additional fonts…
In a world where we could guarantee that everything we do (or try to do) with our computers will work out just fine, we wouldn’t care. We, however, do not live in such a world, and even experts can bork up their systems. This is why backups are so essential. There are times, though, when replacing a messed-up drive or partition with an image is just plain overkill, especially in the *nixes, where with a little help and some reading it is possible for even relative newbies to fix a lot of problems without even needing to reboot… IF they can find things. This is why the FHS needs a revamp to a more consistent and common-sense model, for the sake of users and those who must support them.
P.S. Before the usual replies start coming back that ‘people aren’t ready or interested in diving deeper into their computers’, let me tell you I am a computer technician who specializes in custom installs on all manner of hardware, and in educating new users. My main customer base is 50 years old and up, and brand new to computing. Many still find the mouse to be an object of hate, as they find it difficult to get used to. I have, of course, installed various versions of Windows, but I have also successfully gotten them to use PC-BSD, Ubuntu, Mandriva (when it was Mandrake), and even gotten a few to use BeOS 8^) They have all been able to learn, and more or less to take care of their own machines. If they call for support, I can tell them to go to /home/apps, or /etc, or ..\Documents and Settings\… or whatever, and they will fix their own boxes. So, yes, I am speaking from real-world experience.
If your problem is finding things, you probably don’t know how to fix any significant problems.
So what’s the problem? You’re telling them where to go anyway. Whether you’re saying etc or Documents and Settings is irrelevant. At least on the Linux filesystem layout you can back up your home directory and bring all your files and program settings with you. Good luck doing that on the Windows filesystem layout without a dedicated backup app.
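The portability claim is easy to demonstrate: per-application settings live as dotfiles under the home directory, so a single archive captures documents and configuration together. Below, a scratch directory stands in for a real home, and the `.mozilla` contents are fabricated purely for illustration.

```shell
#!/bin/sh
# Demonstration: documents and app settings travel together in one archive.
HOME_DIR="${HOME_DIR:-./fake-home}"             # stands in for a real $HOME
mkdir -p "$HOME_DIR/Documents" "$HOME_DIR/.mozilla"
echo "sample document" > "$HOME_DIR/Documents/notes.txt"
echo "sample settings" > "$HOME_DIR/.mozilla/prefs.js"   # dotfile = app config

# One archive carries both; restoring it elsewhere restores the settings too.
tar czf home-backup.tar.gz -C "$HOME_DIR" .
tar tzf home-backup.tar.gz | grep .mozilla      # the settings are inside
```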
And maybe you’d like to learn. A system that is simpler and more intuitive would be a big help.
“intuitive” is very much in the eye of the beholder…
Rather than treat your specious reasoning as a personal attack, I’ll simply point out that you may have missed the point of my comment, and possibly the whole article. The problem lies in consistency of experience. When the system that is supposed to result in a consistent ability to find files across all platforms using that system becomes an impediment, it needs fixing. You may have noticed that I support and use multiple OSes. When I have to go looking for other likely places a file may be, because the FHS is laid out or used slightly differently in a different distribution, and then find that file in an unexpected place, that means there are fundamental issues and inefficiencies with the system. Not everyone is on the same page, possibly because the system is no longer truly standardized; it leaves room for ambiguity. Ambiguity is not a good thing. All this makes supporting my clients a slower process than necessary, because I am constantly having to verify whether things are where they are supposed to be.
It IS hindering forward progress. I’m an advanced user, and I want the ability to run multiple versions of the same program side-by-side. I WANT to test whether that new version of Evolution really does fix more bugs than it introduces. I WANT to see if that new version of Gaim fixes a certain pet bug without breaking ten other things. In Linux, I can’t do this.
I’m an advanced user, and when I look at the FHS, I still think “what the f–k is that about?”. The argument that advanced users “should just learn how the FHS works” is completely nonsensical. It would be better if the standards put forth by the FHS were actually adhered to, but seeing as every distribution just does whatever the hell pleases it anyway doesn’t make it any easier. So, just because I’m an advanced user, I have to learn all the distribution-specific exceptions, and invest my extremely precious time in doing so?
Just because I’m an advanced user does not mean I do not want logic, structure, and cleanliness. The FHS doesn’t give me any of those.
I would have to question a software environment where it’s essentially difficult to trust updates because of breakage – and where you thus have to actually design around that.
That’s like building cars so badly that they break often, but instead of making them more reliable, just making them cheaper and disposable so that you can walk across the street and jump into a new car if yours breaks down.
I prefer an OS where installing an update is a non-issue, automatic even, because the OS is designed and built in a way that lets developers ship reliable updates.
Well, in a way, a software install is at times closer to installing a new engine part than installing a new stereo component…
Try Debian.
This has nothing to do with the existing file system standard, and everything to do with the distribution’s package management system. If you were an advanced user, you’d know this.
Many distributions already offer such functionality, especially in the form of libraries. All that needs to be done is to append a trailing version number and perhaps provide a utility which creates a symlink to the desired version.
You could just use /usr/local/ if your distro doesn’t provide such functionality.
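The versioned-install-plus-symlink idea mentioned above can be sketched as follows; the directory names and the `use_version` helper are illustrative inventions, not any distribution’s actual tooling.

```shell
#!/bin/sh
# Versioned installs side by side, with an unversioned symlink selecting
# the active one. APPDIR is a scratch location for illustration.
APPDIR="${APPDIR:-./apps}"
mkdir -p "$APPDIR/ruby-1.8/bin" "$APPDIR/ruby-1.9/bin"

use_version() {
    # use_version <name> <version>: repoint the unversioned name.
    ln -sfn "$1-$2" "$APPDIR/$1"
}

use_version ruby 1.8
readlink "$APPDIR/ruby"   # prints "ruby-1.8"
use_version ruby 1.9
readlink "$APPDIR/ruby"   # prints "ruby-1.9"
```

This is essentially what the Debian ruby/ruby1.8 symlink criticized earlier in the article does, made explicit and switchable.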
I thank God every day that I use an operating system where I’m largely unaffected by the ideas of wieners.
FHS doesn’t facilitate it either. That’s the whole point: sure, it’s possible to achieve many things with the FHS, but it wasn’t built for it, and it shows. Applications have their files all over the place, and managing multiple instances of such a program is extremely difficult.
Sure, I can cook my dinner on a camp fire in my backyard, but I prefer doing it in my kitchen with all the appliances waiting for me. The FHS is the camp fire – we need to create a kitchen with the appliances. Mac OS X has taken a few steps in the right direction, but it’s still a bloody mess.
Thank you for proving my point. You call this elegant? This is yet another piece of band-aid applied to fix an inherently outdated system. Compare your band-aid solution to my much more elegant proposals:
http://www.osnews.com/story/19711/The_Utopia_of_Program_Management
The reason my solution is much more elegant is because I designed it with all those advanced features in mind, instead of trying to bolt them on afterwards in a shoddy fashion. As someone else already painfully noted, Linux/UNIX fanatics are eager to point out that Windows is stuck with old and outdated ways, but in fact, the UNIX world is much worse off.
I can, either using slots or by installing programs in my home directory. I’ve been running Firefox 2.x and 3 side by side for some time now on my laptop, and I have different versions of several programs and libraries installed together on my PC at home.
For stuff like gaim I’d rather quickpkg the installed version and upgrade. Reverting to the previous one is a matter of seconds.
Don’t you realise how much you’re proving my point here?
Do you get automatic entries in your desktop environment’s menu for those two Firefoxes? Or do you have to manually create .desktop files and add them to the menu yourself? Does your package manager know you have two versions of Firefox installed, and does it keep both of them up-to-date? Or do you have to do that manually? Does it work for other programs too?
So, for one application you install it manually in your home directory, and for another package you have to resort to specifically creating binary packages manually, install them, and then remove them once you’re done? Can you run several binary Gentoo packages created with quickpkg side-by-side? Can your package manager keep track of both of them? Or is it another manual job, just like the Firefox stuff above?
Yeah, real elegant. Another case of massive band-aids and patchwork instead of an elegant design that took all of these features into account from day one. Anything but easy.
If I use slots then it does.
If I install manually on my home then it obviously does not (well, firefox updates itself, but that’s unrelated).
Then again the point of installing two versions is testing, not day to day usage.
Regarding .desktop files, I don’t think I’ve ever created one of those manually.
No, I quickpkg the old version and then update the app.
I would quickpkg any app I’m messing with anyway, more so if it’s svn stuff.
It’s not any more manual than just installing an app, as you can tell portage to do it at the same time.
Firefox autoupdates itself, so I don’t care if a test version is being tracked by the package manager or not.
And well, I’ll obviously end up removing the version I’m testing if I won’t be using it. What would be the point of keeping it?
Sure, that’s what slots are for. It doesn’t matter whether they are binary packages or not, just the package versions.
I find it quite convenient. And easy.
Whatever floats your boat.
Don’t you realise how much you’re proving my point here?
Pretty much all the replies defending FHS are proving your point.
I’m confused, Thom. On the one hand you say you’re an advanced user, and you complain that you can’t have two different versions installed? You download the tarball, extract it to its directory, compile it, run it with ./program_name, and you’re good to go.
Who cares if the package manager doesn’t know about it, and who cares if it doesn’t create a shiny menu entry for you. The fact of the matter is, that because you’re an ‘advanced’ user, you are not the type of person who should even be caring if the directories are /bin or /Programs. An advanced user would know better.
The FHS doesn’t prevent you from having multiple versions of anything. Usually when you download a program and extract it, it creates a program-name-0.x style folder for you anyhow, so you could still have program-name-0.1 and program-name-0.2, etc., compiled and running.
Sounds to me like you’re just whining about a non-issue.
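For what it’s worth, the prefix-per-version approach this comment describes looks like the sketch below. The “build” step is faked with a trivial script (“myapp” is a hypothetical program), since the point is the layout rather than compilation; with real sources you would run `./configure --prefix=...` once per version.

```shell
#!/bin/sh
# Two versions of the same program, each under its own prefix, coexisting.
for ver in 0.1 0.2; do
    prefix="./myapp-$ver"        # with real sources: ./configure --prefix=...
    mkdir -p "$prefix/bin"
    printf '#!/bin/sh\necho myapp %s\n' "$ver" > "$prefix/bin/myapp"
    chmod +x "$prefix/bin/myapp"
done

./myapp-0.1/bin/myapp   # prints "myapp 0.1"
./myapp-0.2/bin/myapp   # prints "myapp 0.2"
```

Neither install touches the other, which is exactly the property GoboLinux elevates into a system-wide convention.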
If you want to see a proper way that a distribution can use the FHS, look at Debian. Their strict packaging guidelines make sure that there is always a /usr/share/doc/package directory and at least a README in there.
I do agree that not all distributions are as good at keeping it clean and lean as Debian.
People who always say “Oh, but the files are all over the place” really haven’t done any research into Debian’s packaging rules.
1st. You’re saying that having two versions of, e.g., Firefox is only for testing new software versions. I want to assure you that it is a very common setup for web developers. You have to test applications in different browsers, even different versions of browsers.
2nd. You’re talking about Debian’s way of structuring directories. Please try installing and configuring Tomcat on Debian. /usr/share/doc isn’t the most important thing in this case.
3rd. A bundle-style way of handling software would standardize Linux without multilayer package managers (apt-get is a layer over dpkg, and there are meta-managers which are layers over apt-get).
If it could gain some more momentum it could be a scene-changing approach. Until it does, though, it will remain an interesting experiment, like Init-NG and Autopackage. I hope it doesn’t stagnate like those two projects have and lose its direction.
This is a complete non-issue. Like aylaa said, the system handles the files outside of home. If a general user has to care, the system is broken. Hell even Windows tries to keep you out of those directories (by displaying warning messages when you open folders like Program Files). If you’re a sysadmin, it’s really not hard to learn where stuff is. Keeping track of system files is what the computer is good at, let it do that and don’t worry about it.
The article claims that “The three letter directory names in UNIX-like operating systems are a relic of the past that should have died out and rotten away a long time ago” but gives no reason for it. /etc is much friendlier than “Documents and Settings” or “Program Files” for the people that actually care (sysadmins and programmers, mostly). Hell, it’s 2008 and there are _still_ programs that have trouble with spaces. There probably always will be, and having the main directories without spaces avoids the entire issue. I’m still waiting to hear an actual reason other than “it’s ugly”.
This conversation is starting to sound like the intemperate stamping of feet the hardcore command-line interface users made when GUIs came along, because remembering all those cryptic commands was soooo much easier than point-and-click.
Friendlier to whom? For the home user and budding programmer who’d like to experiment with his system settings? For the starting grad student who needs to type up his thesis? Or for the power user who has become accustomed to the status quo and wishes all the n00bs would realize that computers are for grownups?
I think the issue is simply “why change?”. The article argues that the existing layout is ugly and confusing – but the fact is, it’s a subject that shouldn’t matter. Calling a directory /Programs instead of /bin might be easier for a user to read, but the kind of user that’s aimed at will be running programs from a menu anyway, not by entering a file path.
Changing things is pain (for developers and maintainers even if not the user) for no real benefit…
“I think the issue is simply “why change?”. The article argues that the existing layout is ugly and confusing – but the fact is, it’s a subject that shouldn’t matter. Calling a directory /Programs instead of /bin might be easier for a user to read, but the kind of user that’s aimed at will be running programs from a menu anyway, not by entering a file path.”
Well, the fact is that even if you launch it from the GUI, the program will not work if the path is wrong, even though it is installed.
The current system is archaic, and should be updated in some fashion. Think about it: the reason there is /bin and /usr/bin is that hard drives used to be small as hell, and you would have those on different hard drives. Why can’t everything be in one directory now? Forget /usr/bin, /usr/local/bin, etc. – put them all in /bin, or in this case /Programs. We no longer need all those partitions/hard drives, so why have them?
I’d argue friendlier for everyone.
What do you think that programmer, grad student or power user actually wants to do? If he wants to manually change some settings, remembering that /etc is where system settings are kept isn’t that hard. If he actually wants to try using the CLI one day (oh my god, the CLI!!), he’ll be happy that the directory is called /etc and not /Documents\ and\ Settings.
What else would you want to do? Move things around in /usr or /usr/lib? Why? If you have the need to do this, you surely are able to find out what the directories are for before you do.
This whole discussion again sounds like: “I want Unix to be like windows because it so much more user friendly! (because I’m used to the windows way)”
The way the Unix filesystem is set up makes a lot more sense than Windows’. Ever tried to keep your settings on another drive, maybe even switch between two different sets of system settings? Hell, it’s already a pain in the ass to keep your user data and settings on a different partition.
He’ll be happy that the directory is called /etc and not /Documents\ and\ Settings.
I thought it would be obvious before, but /settings is a lot better than /etc.
I don’t use Windows, and haven’t in years. My work computer runs Ubuntu 8.04, and my home computer runs Fedora 8. That doesn’t mean there isn’t a better way.
Nice try trying to twist my words. Completely unrelated to what we’re talking about.
I already said, sysadmins and programmers mostly. It’s easier to type, and it avoids the issue of programs that choke on paths with spaces (yes, I still routinely encounter those at work).
If any home user has to touch those settings, the system is broken. Changing the name to something else is not a fix, it’s a band aid over a system that should never have forced a casual user into that directory. For a budding programmer, it really is no different. Except that they’re going to rip their hair out when their linker gives an error like “Cannot find object c:\Program.obj” because of those paths with spaces in them.
apt-get install texlive kile (or the equivalent from your favourite GUI frontend). Like I said, if that grad student needs to worry about anything outside of their home directory, the system is broken.
No, for someone who realizes that keeping track of system files is the system’s job, and for someone who has better things to do than micromanage their system. Seriously, this is a non-issue for everyone but those self-professed “power users” coming from windows that are confused by change. No ordinary users could even tell the difference, and shouldn’t have to.
The system can keep track of the system files in a directory called “settings” just as well as in “etc”. It can also keep track of software in “programs” and not just /usr/bin. In fact, the system doesn’t give a damn about the names of the directories at all. So why don’t we just call them what they are used for? Don’t tell me that “init.d” is better than “bootscripts”. So why not change? Because the old hardcore users would have to learn something new?
While I agree that keeping track of system files is the system’s job, I know that systems tend to break. I hate to bring up Windows again, but I think it fits here because MS is probably the biggest software company on Earth and has thousands of developers, including true geniuses, on its payroll. But still, Windows, just like all OSes, tends to break. That’s when it needs fixing by the user. Should I call tech support just because my OS has cryptic directory names and I can’t find my way around because of them?
Maybe there are good reasons not to use a file structure like Gobo’s, but that doesn’t mean we should keep the old one until the end of time.
No, because scripts in /etc/init.d aren’t just bootscripts, settings in /etc don’t include user settings and /usr/bin doesn’t hold programs, just their binaries.
But are people really having problems with the classic Unix names? Because I don’t see such a thing happening.
We could just as well rename “integrals” to “Calculation Of The Area Defined By A Function Between Two Given Points In A Cartesian Map”, but that wouldn’t make them any easier to solve, would it?
Agreed, but until someone comes with a good reason I’d rather stick with the current scheme.
IMHO even leaving the current dir structure intact but renaming the directories would make it more intuitive and more user friendly.
I do. A lot of the time when I try to help a newbie I have to give a short introduction to the FHS. Why is it better to have to learn what e.g. the etc directory contains, instead of knowing what’s in a dir just by looking at its name?
Odd. I’m not a “‘power user’ coming from windows” in any sense of the word. I’ve used Linux or OSX for nearly all of the last ten years.
I doubt the Gobo Linux users fall into that category either.
The fonts? The startup scripts? If a home user has to touch those settings, the system is broken?!? Whose computer is it, anyway?
There’s .fonts in your home directory and any personal settings are also stored there. The only time you need to touch stuff in /etc is to configure services beyond the default and any user-friendly distro out there has GUI tools to do configuration in /etc. If you are advanced enough to need to do stuff beyond what the GUI tools provide, then you are advanced enough to deal with the fact that it is called ‘/etc’ and not ‘/hold my hand settings directory for users to look at system settings and other things like that’.
I refer you to my previous comment that .fonts is a hidden directory which the average user will not stumble over.
Why should anyone (except developers and distribution makers) care where the fonts are located? A hidden directory or some kind of magic location, like a fonts folder, is just as pointless. Just right-click and select whether to install it as a system or user font. The MIME types should also handle this, and start a font installer when font files are clicked.
Obsessive micromanagement like this is just pointless; let the system handle it. Average users have more than enough work handling their documents and other user files to be bothered with system files. And for developers, the FHS already handles this in a logical, robust, proven and well-known manner.
Since everyone is so offended that I want to install fonts in a place that I can remember easily, and can’t be bothered to address the actual problem being illustrated, let’s try another example from experience.
Distribution D naturally installs application A to some directory (/usr/share say). You want the more recent version (many distributions take a while to update certain software after all) so “yum erase A” or “apt-get erase A” or whatever, download the tarball from A.org, and run “./configure; make; make install”. But for some mysterious reason A’s system installs the software into /usr/local/share, not /usr/share. You spend a while trying to figure out why A claims to have installed without a complaint while giving you all kinds of file not found errors when you actually try to run it. Eventually you figure it out, or maybe you don’t.
I suppose this is also an example of micromanagement on my part?
The article gives plenty other examples, like shell scripts breaking. Also micromanagement I suppose?
Not offended, simply pointing out that your argument is based on an incorrect assumption.
Trying to construct a problem to support a conclusion you have already decided is correct does not necessarily illustrate anything needing addressing.
The application will be in, or linked from a bin/ directory, which should be in the PATH negating the need to know the location of the application to run it. Obviously if the application installer is not broken, it keeps track of its datafiles and everything else located in the share/ directory.
No, only broken assumptions on your part.
If shellscripts use absolute paths to applications, I’d say it’s micromanagement. It’s also called a bug, if the script is intended for general usage.
If it’s done to customize for a specific installation, you need to adapt to the system customization and install applications accordingly (use --prefix).
Right: wanting to install fonts in a place that I can use & discover easily is an incorrect assumption. I guess lots of users have incorrect assumptions. With this attitude Linux will never be more than a hobby OS.
Yet shellscripts do this all the time. (Happened to me at work today. Not my shellscript. Written by another Linux user, long experienced.)
When you take into account your previous comments:
Where you assume that the average user will have problems locating the place fonts are installed. Since the average user doesn’t care where the fonts are located, using click-and-install, your assumption is incorrect.
Broken shellscripts are not uncommon, and when they are not written with portability and general usage in mind (not uncommon either), such bugs happen. It only underscores that micromanagement in shellscripts is not a good idea.
Someone else already mentioned ~/.fonts
You shouldn’t have to mess with the startup scripts. If you want a program starting in the GUI, use the GUI tools in Gnome or KDE to start it after boot. If you want to configure a system service to start (already extremely unlikely for any average user), use any startup script GUI tool to do it.
If you want to tweak the settings of apache to start with a specified flag or what have you, you’re way advanced and finding init.d is a trivial and insignificant part of the task.
Well, I might have to tweak the settings of Xorg.conf too. That’s way more advanced, but if I’ve bought a new computer with an as-yet unsupported monitor there’s fun I’ll have to have regardless.
I might have to tweak /etc/fstab.
See any one of a large number of experts offering advice to non-experts online. Lots of settings have to be tweaked from time to time.
The fundamental problem here is that you have to modify your xorg.conf. The problem is not that you can’t find it, the problem is that you have to edit it in the first place. Xorg is moving towards the correct solution to this problem (better autodetection (xrandr) and bulletproof X).
Again, something that a regular user should never have to do. If they do, then the autodetection is broken and should be fixed. Renaming /etc/fstab to “/System Settings/mount points.config” or similar is not going to solve anything.
And they shouldn’t have to be. But until those things get fixed, the file and folder names are certainly not the problem. If an expert is posting instructions for how to make a change to fstab, they will say something like “Open /etc/fstab and change line 4 to …” which bypasses the whole problem since someone is telling you where to find that file. The non-expert doesn’t have to find it themselves so whether it is etc or settings makes no difference.
Note: I’m not arguing against the person I quoted, I’m merely taking his comment further. Either way, well said to the original poster.
Hah, try the following for fun:
C:\WINDOWS\system32\drivers\etc\hosts
vs…
/etc/hosts
or…
C:\Documents and Settings\User Name\Temporary Files
vs…
/tmp
I know which ones I’d choose. Not to mention, those are the XP locations… previous Windows versions may be in different locations, and even Windows Vista has a modified filesystem layout from XP. Meaning Linux distributions aren’t the only operating systems not strictly adhering to a tight “standard” across the family.
Also, just for fun, think of all the different locations user data and application data is stored on a typical Windows installation. Hint: Don’t forget the system registry! On UNIX-like systems: mostly /home, with some system-wide data in /etc. Once again, I see simplicity here that Windows just doesn’t have.
I could go on all day on the pros and cons of each OS’ filesystem layout, but as a longtime Windows user (around ten years) and only a Linux user for the last two or three, my preference would still heavily lean toward the UNIX FHS. Both have their pros and cons, but I don’t know how I could go back to not being able to make a nearly full (excluding system files) backup by simply tarring one directory (/home).
Plus, it sure is nice rarely having to leave /home to find some file some program decides to save into some weird directory (ie. Winamp skins go in… C:\Program Files\Winamp?!).
The reason Linux distros don’t do something sensible like changing the file system layout is inertia. It’s the same reason so many people use Windows. They’re stuck in their old ways. We Linux users like to tell Windows users how stuck in the mud they are and how they should join the 21st century and use Linux, a cutting edge operating system, with an out-dated, brain-dead file system structure.
In all seriousness, there is some use in keeping _some_ things separated. An app’s binaries, local libraries (not system libraries), and data files should be kept in one place. But configuration files would likely go somewhere else so that you don’t have to muck about in an app’s internals to configure it. That brings me to the other major problem with Linux: All the config files are flat ASCII text in a custom format for each app. Switching to some standard XML format would make it easier to automate things like upgrades that need to combine config files from different versions of the app.
That reminds me of a question that was raised not too long ago – how come every ‘competitor’ to Windows is a UNIX clone. That isn’t to say that UNIX is inherently bad or deficient, but it is a question that is posed in the hope that things would have moved on from there. Even BeOS had almost all the resemblance of a UNIX as one example – and that was meant to be a ‘fresh start’ and ‘legacy free’ operating system.
I have a look at things such as Plan 9 and I shake my head with dismay when I see the hordes of programmers gravitating to old and decrepit ideas such as *BSD and Linux. What is needed in the world to compete with Windows isn’t yet another UNIX-like clone, but something new, original, or at least something which addresses the flaws in the old paradigm.
I’d love to see 100 of the smartest programmers in the open-source community throw up their hands, embrace Plan 9 and turn it into a viable desktop alternative. A single distribution built in the cathedral model, with a single GUI built on underpinnings not based on concepts from 20 years ago.
I know I’m going to be slammed for this, but really, it is bloody depressing when the only thing competition can come up with is yet another UNIX-like clone, be it MacOS X, Linux, *BSD or some other OS. I’m a MacOS X user, and I love it – but I kinda expected a vanguard of programmers working on the bleeding edge with freaky ideas rather than simply pounding out code in the ‘same old way’.
iirc, plan9 was until recently (or may still be) under a very strict license.
on that note, linux seems to be picking up more and more plan9 features as time goes on. most recent example is fuse iirc…
Plan9 has been absolutely free for a long time.
It implements per-process namespaces and bind commands that make static file hierarchies look like something of the past. You can install different versions of the same program, and more. For example: just run the yesterday command and you will be back to your system from the previous day.
And plan9 really follows its conventions.
And the reason these abilities work so well is because it was factored into the design from the get-go. You can’t keep on building on a system that was designed for mainframes and then hope it will still be up-to-date and capable 40 years later.
You’re right – which is why I think it is a silly effort to try and retrofit these ideas to an existing operating system – when ever it is tried, it’ll be a compromised version with limitations that take away from what it promised.
Linux is repeating the same mistake of UNIX – MacOS X broke free, but I have a feeling that we’re going to revisit these issues again in 10 years time when MacOS X becomes long in the tooth. Same thing can be said for Linux as well.
I gave Plan 9 as an example, but I’m sure there are other ways to go about solving problems. We already have numerous clones of *NIX – what there needs to be is an easy to use operating system which is unrestrained from decisions made two decades ago.
One where the lessons from other operating systems can be learned. An operating system which is documented and well maintained from day one, rather than something that grows into an out-of-control beast which ends up causing problems for the programmers at a later date.
And yet it is… Plan9 is toast, just because it had one or two neato features doesn’t make it even remotely suitable as a base for a modern OS.
That’s how software development works. This thread is the precursor to “second-system syndrome”, where the old system is just fine, but slightly ugly, so overzealous engineers design the new system to be perfect in every way and end up with a bloated, impossible-to-finish project that only works in theory. Happens all the time.
And why exactly? Just so it would be a bit cleaner? Well that’s at least 10 years of effort to make it an even remotely viable competitor, and 20 years to match the established OSes out there. If you’re volunteering…. Anyway, the idea that we are tied down by legacy concepts, and could do a lot better if we started from scratch is a myth. There is no pile of ideas out there that are impossible on current systems.
SkyOS is a good example. They started from scratch, and by all accounts it is a clean and efficient OS (since it’s designed by one man, it’s much cleaner than most new designs will be) with a nice logical design. But it’s going on 10 years and there really isn’t anything in SkyOS that we don’t have in Windows/Linux/MacOS.
If you’ve done any programming you’d know that’s impossible. Unless you have a spare trillion dollars lying around and an army of programmers you can control completely. Accept the chaos and work to make it better bit by bit.
“accepting the chaos” is not good enough when most people already use windows. You have to be a lot better and that is why Apple is doing better than linux on the desktop, even though it isn’t free.
What specifically do you see Plan9 offering that cannot be added to existing platforms and that will make it a superior platform for a desktop OS.
Take into account that the main things holding Linux back on the desktop is not the kernel, but apps and drivers.
There are several open source and/or free Operating systems out there that are working on all kinds of bleeding edge and freaky ideas. Basically they all come to the same conclusion, doing things in a new and totally different way is Very Hard and takes Very Long Time to make work well enough for regular use.
The big advantage with Unix is that it’s tried, tested and most of the basic problems are already solved leaving developers open to work on new and exciting things on top of a solid platform. Not everybody wants to code their own network stack and NIC driver before they get to develop a new network protocol.
Maybe, just maybe, every Windows competitor is a Unix clone for the very same reasons unix is still strong after nearly 40 years, while other “modern” and “improved” systems have come and gone…
In computing terms this is like a million years. Sure, you can attribute this to inertia, but then you have to explain why the rest of the “industry” changes so fast and so much.
Exactly what Unix has that makes it so enduring, no one really knows for sure, but the fact is people like it.
I’m not saying unix is a perfect system, it sure has problems. What I’m saying is that the filesystem hierarchy hasn’t fundamentally changed in all these years because it _isn’t_ one of those problems.
What really annoys me is that these criticisms of fundamental Unix concepts always seem to come from people who don’t do actual work with their systems and thus cannot tell the difference between problems that affect real-world system usage and “problems” that only exist in their supposedly “power-user” minds. These people often cannot justify their claims with more than “the old way is obsolete” or “this new way is better”.
Hey, the wheel is thousands of years old, it’s obsolete. Let’s just replace it with this “square” thing, which is way better…
Like (I think it was) Ken Thompson said when asked what would he change in unix if he could: “I would add an ‘n’ to ‘umount’.”
You seem to be assuming (and I may be wrong) that because something is “old” it’s also inherently broken. The UNIX-like model is “old” (ancient, really, in terms of computer science) and therefore broken? The reason developers consistently go back to UNIX-like systems is because Unix was wildly successful and solves a number of “paradigm problems” without much effort. Multi-user support is built in. Simple backup systems are built in. A massive code-base that can be accessed with scriptable compiling systems are built in.
What you’re asking people to do, essentially, is re-invent the wheel simply because the “wheel has been around for a long time.” In terms of general computing, there are a surprisingly small number of design models that work well.
For instance, remember light pens? Remember how light pens were supposed to replace the mouse because they’re much easier and intuitive to use? But ultimately they didn’t, because the mouse is much *lazier* to use. The light pen design model doesn’t work because users are lazy and, frankly, don’t want to deal with having to move around an ungainly device attached to a wire, point it at a monitor all day long, and look stupid.
There’s a small number of operations that you have to be able to handle to develop a general purpose OS. You have to take input; it could be CLI only, or you could use a keyboard/mouse combo. Or a light pen, or a digitizer tablet, a touch pad on the screen, etc. Ultimately, though, it all comes down to 1) a character stream and 2) an x-y(-z?) coordinate system. UNIX has had that licked for years.
The UNIX model also has a proven history for stability, low barriers to programmer entry, and modular design. All three of those things lead people to want to use it as a base design.
However, when you ask “why don’t good developers create a new operating system paradigm from scratch” you’re framing the question wrong. First, there’s a limited number of models for GPOS’s available…all of which have basically been exploited at this point (pending advances in neural or motion interfaces). There’s a limited number of ways, for instance, that you can move items around in memory. There’s only three archetypes of kernels, all three of which have been implemented as UNIX-like (Mach/OpenStep as a micro, Linux as a traditional mono, and various hybrids and other examples.)
There’s a very limited number of people in the world that are qualified, dedicated, and obnoxious enough to write a successful kernel for general computing. The majority of them have determined that the UNIX-like paradigm is the way to go, especially considering that through the history of computing, other paradigms have tried and failed.
So Linux isn’t UNIX-like because Linus lacked creativity or was conceptually limited; Linux is UNIX-like because UNIX-like is one of the few design paradigms that have survived 70 years of computer scientists being elitist snobs about kernel design.
My main gripe with UNIX really is the file system and config file organization. I have no problem with the kernels in general, although honestly, I don’t have much complaint about the NT kernel. The real problems are in userspace. But even there, UNIX/POSIX APIs are growing and evolving and keeping up to date. MacOS X layers a well-designed graphics API on top of UNIX, and I think they did that well, although X11 really isn’t all that bad either (especially when you hide behind a toolkit).
No. When it’s all said and done, my main complaints about UNIX and Linux all come down to system administration issues. Where do you install an app? How do you uninstall it? Where are the config files? What format are the config files in? When you upgrade an app, do you wipe out all your config settings? When you install a library or an app, are there going to be version conflicts? When something breaks, do I know which of the dozens of log files to look in?
My question is how does Apple do it?
Mac OS is based on Unix and has a similar file system underneath. But for the regular user you don’t see any of that at all.
All applications install into the Applications folder. Everything for the user is just what is happening in the user’s home folder. Outside of that, on the Mac you don’t see anything else unless you look for it!
Why can’t a linux distro be the same?
OS X does the same as Android.
Basically it straps a whole new ecosystem on top of a BSD kernel and basic tools,
so once you’re past the kernel startup you’re in a whole different world.
The only way to notice that you’re on top of a *nix system is by firing up the terminal.
Apple could probably alter some libs, pop the BSD kernel out and pop some other kernel in, and nobody would be any wiser.
kde is doing the same with their abstractions to exist on top of anything from linux to windows.
The way Apple does it is slick. It’s easy to use on the desktop and that is what matters. It boots fast etc. I am very happy with OSX at this point (Even though I am a Linux man)
It appears to be slick, but in fact, OS X is just as messy as everyone else. The only difference is that OS X creates some fancy directories such as /Applications, and then includes a text file that tells graphical applications to ignore the default FHS. But make no mistake – it’s still there, and it is still as messy and unclear.
Again a case of band-aid. It doesn’t actually solve the problem, it just hides it. You know, flipping over the picture of your wife.
It is messy (like you said – I have used the command line on the Mac and it’s confusing).
But the key on the desktop is to make it easy for the 90% of the people that use it. And the Mac OS is just that: Unix done easy. Something a 5-year-old could use.
I don’t want to give a computer to my kid and they open the file manager and get confused because they don’t know what /var and /bin and /root are. Normal users should never see that.
99% of the time when I am using my Mac I don’t care about all that. Applications are so easy to install on a Mac I giggle everytime I install something new.
In the end all I am saying is that Apple makes it easy and that is what non tech people want.
That’s not the point, and I’m not even contending it. The point, however, is that Apple doesn’t solve the problem – it merely hides it.
Which is hardly a feat worth gloating about.
GoboLinux, on the other hand, actually tries to fix the problem, which IS a feat worth gloating about.
The thing is, who is it actually a problem for? For me, when I use my Mac, I don’t have a problem. So the problem is not in usability for the average user.
Is the problem with developers? Seems like there are not many problems there either, as software is multiplying on the Mac at a pretty good pace.
In the end it’s easy to use and easy to make and port software to. And that is what matters.
While people spend hours trying to figure out how and where symlinks should go, Apple’s market share is growing and the Mac OS is getting more and more popular. Even Linus loves it.
A lot of times perception is what matters. In the case of the Mac OS the perceptions are:
It’s pretty
Easy to use
Reliable
Secure
Some of that is reality some is not.
The real reality, as you say, is that the Mac OS is a mess. Which is true. But in reality, at this point, who cares? (Besides us tech guys.)
You use your Mac desktop with Mac OS, use the Apple TV with Mac OS, the iPhone uses the Mac OS.
The only time Linux (Outside of the Kernel being used and servers) gets major play is when you can’t see what is going on below the surface. Like in phones and other devices.
I just want to see Linux on the desktop be just as simple. So more people will use it.
These days it seems that many promising approaches to building a new computing environment in the short-to-middle term do not try to reimplement everything down to the metal; instead they use, say, Linux, as a “driver” and then build the environment on top. The advantage is that you can instantly leverage all the Linux drivers, be they FOSS or proprietary, while treating the underlying Unix-like system as a black box, as if it were part of the hardware. A nice example of this approach is the Etoile project. This is an excerpt from their site:
http://etoileos.com/news/
When reading this, one may think that such a radical departure from the main abstraction of most operating systems means rebuilding everything from scratch: an awful lot of work, and no substitute for binary-only drivers. But in fact Etoile is a set of frameworks on top of GNUstep. Files are not eliminated; they are just pushed down in the abstraction hierarchy, so that they become as irrelevant, as transparent, to application programmers as registers and assembler instruction sets are. The crucial point here is that the underlying files are not just hidden from regular users, they are hidden from *everyone*; they are just a convenient backend. That’s what makes an abstraction work.
KDE4 looks nice. Still, there’s the problem that it intends to coexist with other systems that manipulate files in different ways. Think of what happens when you boot from another distro (without KDE) and edit (create, rename, delete) your personal files from there, obviously without the corresponding updates in the Nepomuk database. Of course you could also mess up the CoreObject files of Etoile from another OS (the underlying problem is, I think, that computer hardware is not really designed for safe coexistence of different operating systems, otherwise the BIOS would protect the different abstractions), but Etoile makes it clear that it’s not designed to interoperate in that way.
The day they change the directory structure is the day I switch to Solaris or FreeBSD. Linux is a Unix clone; leave it like that. If I want my hand held, I’d buy a Mac.
Hand held? It’s the Linux advocates who are arguing “who cares? Nobody needs to look at the directory because we have the pretty GNOME on top.”
The current system is great and all once you look past all the ‘exceptions to the rules’ that are all over the place… But the thing that I love about the GoboLinux filesystem concept is how well it handles multiple versions of programs and libraries.
The current system is horrid…
Just give me a directory structure with a separate directory for each version of a program or library, and then a symlink for the one I wish to make the default. All the hackery involved in running, say, PHP 4, PHP 5 stable, and PHP 5 RC at the same time on one computer is just insane.
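For illustration, here is a minimal sketch of the layout the poster is asking for (the version numbers and paths are hypothetical, and the demo runs in a temp directory rather than a real /Programs tree): one self-contained directory per installed version, plus a single symlink naming the default.

```python
import os
import tempfile

base = tempfile.mkdtemp()

# One self-contained directory per installed version -- no file conflicts.
for version in ("4.4.9", "5.2.6", "5.3.0-RC"):
    os.makedirs(os.path.join(base, "php", version, "bin"))

# A single symlink selects the default version.
current = os.path.join(base, "php", "Current")
os.symlink("5.2.6", current)

# Callers resolve through the stable "Current" path; switching versions
# later means re-pointing one symlink, not shuffling files around.
print(os.readlink(current))  # -> 5.2.6
```

Switching the default to the release candidate would then be a matter of re-pointing that one symlink, while the other versions sit untouched on disk.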
Another neat trick is that one can pop an app dir off there, put it on an external drive, and when one wants to use it, reattach it to the system via symlinks.
There are scripts available for it, and I have been pondering learning enough Python to automate the process.
Memory sticks full of Gobo apps, anyone?
It’s all fine and dandy to talk about better names and structure making it easier to understand the layout of the filesystem. But I think that there is a more important reason to use bundles (to borrow from Mac OS X parlance): FHS creates problems that are difficult to solve, and the solutions create complexity outside of the filesystem.
The classic example here is the need for package managers. Now package managers serve several purposes, like managing dependencies and helping users find software. But they serve another important role: keeping track of files and avoiding conflicts. Both of these functions are essential while installing, updating, or removing packages. They are also completely irrelevant in distributions like Gobo because Gobo keeps things separate.
No, I’m not saying that FHS is pointless. There is a rationale behind the structure. Having /etc and /home house most of the modifiable files simplifies backups. /bin and /sbin are separate from /usr/bin and /usr/sbin, with the former directories being of primary value to system management and recovery. The bin and sbin directories are separate because they reflect the distinction between user applications and system management applications (either because you want to keep /sbin out of the user’s path, or because you’re paranoid and want to force the administrator to type /sbin/su, or whatever).
That being said, a lot of those reasons aren’t relevant to your typical home computer user.
Not entirely true. It might keep the files separate – but then it goes and symlinks everything back into the old locations for compatibility with the rest of the world. The Gobo package management tools may have an easier job of tracking which files belong to a package, but they’re in just as big a mess maintaining a tree of symlinks. If two versions of Bash are installed side-by-side, something presumably is tracking which one is the default…
The filesystem is.
The structure is like this:
/Programs/bash/version-number/”bin and friends”
Inside /Programs/bash there will be a symlink called Default that points to one of the version-number dirs.
That Default symlink in turn is the target for the symlinks found under /System/Links. (The subdirs of Links are the basis for the compatibility symlinks, btw.)
Note that at any given time you can specifically call on one of the bash versions by entering the full path, like /Programs/bash/x.y.z/bin/bash.
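The two-level link chain described above can be sketched like this (a toy reconstruction built in a temp directory, not the real GoboLinux tree, so the exact paths are illustrative):

```python
import os
import tempfile

root = tempfile.mkdtemp()

# /Programs/bash/3.2/bin/bash -- the real, versioned file.
prog = os.path.join(root, "Programs", "bash")
os.makedirs(os.path.join(prog, "3.2", "bin"))
open(os.path.join(prog, "3.2", "bin", "bash"), "w").close()

# /Programs/bash/Default -> 3.2   (which version is "the" bash)
os.symlink("3.2", os.path.join(prog, "Default"))

# /System/Links/Executables/bash -> /Programs/bash/Default/bin/bash
links = os.path.join(root, "System", "Links", "Executables")
os.makedirs(links)
os.symlink(os.path.join(prog, "Default", "bin", "bash"),
           os.path.join(links, "bash"))

# Following the chain lands on the versioned binary:
print(os.path.realpath(os.path.join(links, "bash")))
```

Re-pointing Default at another version directory instantly changes what every compatibility link resolves to, without touching /System/Links at all.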
What they ought to do is implement a database backend for the file system.
Then each distribution can implement its own hierarchy (be that Debian-style or Gobo or whatevs), and developers can choose to program with distribution independence by querying the database directly. Install as many versions of whatever program as you want. Install in userspace with different versions for each user.
One distro could even implement multiple front ends, allowing Ubuntu to install Red Hat packages and vice versa.
(edit: Even add in a Gobo front end, just for the user to have a convenient view on to the file system, while letting vendors program to the RedHat frontend. Still same backend. Still all works.)
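As a toy illustration of that idea (not any real project’s API; the schema and the `resolve` helper are invented for this sketch), the authoritative mapping from program and version to install path could live in a small database, with the on-disk layout reduced to a mere backend:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE installs (program TEXT, version TEXT, path TEXT)")
db.executemany(
    "INSERT INTO installs VALUES (?, ?, ?)",
    [
        ("bash", "3.2",   "/Programs/bash/3.2"),
        ("bash", "4.0",   "/Programs/bash/4.0"),
        ("php",  "5.2.6", "/Programs/php/5.2.6"),
    ],
)

def resolve(program, version=None):
    """Ask the database where a program lives, independent of distro layout."""
    if version is None:
        # No version requested: hand back the newest one we know about.
        # (Lexicographic ordering is good enough for this toy example.)
        row = db.execute(
            "SELECT path FROM installs WHERE program=? ORDER BY version DESC",
            (program,),
        ).fetchone()
    else:
        row = db.execute(
            "SELECT path FROM installs WHERE program=? AND version=?",
            (program, version),
        ).fetchone()
    return row[0] if row else None

print(resolve("bash"))          # -> /Programs/bash/4.0
print(resolve("php", "5.2.6"))  # -> /Programs/php/5.2.6
```

A Gobo front end, a Red Hat front end, and a Debian front end could all be views over the same table; installing a second version of a program is just another row.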
My problem is that those who complain the most about Linux directories, their naming and location, are generally those who don’t actually use their system for anything useful, have large amounts of time on their hands, and, with nothing better to do, start complaining about issues many of us never cared about. Once I have my Linux set up and my apps installed, messing around among the directories outside my home is so rare I couldn’t even give you a number on it. And I’m pretty confident that I use my Linux more, and for more things, than most complainers.
As for Gobo, I tried it, more than once. First, dropping a load of links on top of a Linux directory structure is stupid, and it feels wrong, very wrong. Second, the loooong names of those links and the use of capitals: come on, what’s that all about? Never should you put the wish to be like somebody else before practicality. Use whatever names you like in your home, but leave the system directory structure alone.
One more thing, as some also stated above: those who come to know the Linux directories well enough are generally those who know enough not to complain about such issues. They can live with it; I can live with it.
Thing is, if it were properly done (not with a bunch of stupid links), I wouldn’t mind if the directory names changed. It just doesn’t matter. Not a freaking bit. That’s why this whole debate just leaves me wondering what these people spend their time on.
No complaints here, and I use my Linux system for useful things (as I have for over 6 years). But a more sensible directory structure would be welcome.
Me, too.
They’re not adding the links because they think the links are great — they’re doing it to allow for huge programs (such as Open Office), which would be impossible for them to hard-code to the Gobo directory configuration. If the huge programs were hard-coded for the Gobo directory structure, then the links would not be needed.
The names are long and capitalized to avoid confusion with the conventional Linux directories, and Zsh handles these longer names and the capitalization with the same or less keystrokes as used in a typical Linux distro with a typical shell.
Please read the thorough explanation, “I am not clueless”: http://www.gobolinux.org/?page=doc/articles/clueless
Again, please refer to “I am not clueless” above.
I live with it, too, and I don’t complain about it, but I recognize that the pure Gobo way (no links) would be better.
Again, the links are necessary only to accommodate the huge programs hard-coded for the typical Linux directory configuration. The links are not desired by the Gobo crowd.
The filesystem hierarchy should not matter to the user, at least not the parts that deal with installed software.
The user should just need to know that he is adding new functionality; he shouldn’t need to know where the files actually go.
The software installer or package manager should make sure that executables end up in the user’s PATH, FONTPATH, etc., and/or in some appropriate place in the program menu.
When a user needs to select an executable somewhere in the GUI, he should be able to select it from what’s in the PATH, or from the program menu if it is some kind of GUI-based program that needs to be selected.
E.g., if you make a new starter for your GNOME desktop, you should be presented with a list of the programs that are in your PATH. To make it backward compatible and expert-user friendly, it should also still be possible to write a full path, but the OK button should be greyed out until you have actually selected something executable.
When you remove some functionality, the package manager should ask if it should also remove the preferences for the application. Note, I did not say it should ask the user if he wants it to also remove configuration files. This is important because there might be configuration, but not necessarily in files; it could reside in a database or in LDAP, depending on what application it is. In other words, application developers should provide hooks for the uninstaller to do the right thing when asked to remove something.
The desktop user should just need to see the programs in the program menu, the control center where he can configure the functionality of his system, his own data, and the data other people share with him. It should also be easy for him to see what part of his data he has decided to share with others.
Users should just see what they need to do their business, and in most cases that is not computer-related, so in most cases there is no need to introduce computer-related things like an “Application Folder”.
In the rare cases where it is computer-related, e.g. for a sysadmin, there should be ways to make exceptions, and for these people it is good if the file hierarchy conforms to some kind of standard, so why not continue to use the Linux FHS for them.
Disagree strongly.
The biggest usability blunder in the history of computer GUIs is the intentional hiding of the directories from the user by Apple (and, later, by Microsoft).
The directory structure is an important mapping model of the system, and shielding the user from this mapping has created a generation of helpless, clueless users, who have to call technical support every time they try to download an attachment from Yahoo mail.
In regards to computer usability, it is extremely beneficial for the user to know the basic locations and relative positions of the software and data.
Yep, I still love the DOS days, when every app or driver was contained in its own dir that one could move virtually anywhere, as long as one entered the right path into either config.sys or autoexec.bat.
Say what you will about its user-friendliness, but it was simple. And the same level of simple is what GoboLinux is aiming for, IMO. That it appears to be user-friendly is just a side effect.
Note for instance that rather than having init.d and friends in etc (or /System/Settings in GoboLinux), there is a BootOptions file (setting stuff like how the ASCII art for the boot process should look) and a BootScripts dir.
In that dir one finds:
BootUp: a list of things to start on all boots.
Console: what to start if the machine is to boot into a command-line login.
Graphic: what to start if it’s an X-based login.
Shutdown: generic tasks for dealing with all kinds of correct shutdowns.
Halt: what to do when shutting down.
Reboot: what to do on a reboot.
Some daemons are packaged with Task scripts that mimic init.d ones, so that they can be started and stopped with StartTask and StopTask. Both of those can be used in BootUp and Shutdown.
Yes, I agree with you that you need to know these things in Mac OS X or Windows, and if you don’t, you are lost. This is the problem. If users didn’t need to know these things, they wouldn’t be lost.
You don’t need to know how the phone system is wired to use a phone, even though the phone network and phone system are much more complex than even the most complicated computer desktop. So why would it be so impossible to make a usable computer desktop?
It’s not impossible at all.
The only people who can change the status quo are the developers. However, developers themselves are stuck fast in old ways of thinking and don’t like change at all; and then they go and shout at users for being resistant to change.
Developers have themselves invested time in understanding and mastering the status quo. They are simply unwilling to let that investment go to waste, and as such, they prefer a crappy, old, incapable system that creates confusion and promotes messy behaviour simply because it’s what THEY are comfortable with.
Many UNIX developers ridicule Microsoft for adding layer upon layer in Windows, but in the end, UNIX developers do exactly the same thing.
Or perhaps it could be stated as “people gotta eat”. It’s great to experiment with all kinds of cool, innovative, re-imagine-the-desktop stuff. But at the end of the day, most developers have a penchant for food, clothing, etc. Boring, reliable, meet-the-current-needs applications pay the bills. The others are for fun and excitement.
Personally, I love playing with new and inventive frameworks. I love learning new programming languages, new paradigms, etc. But I make a living writing what I consider VERY boring applications. Maybe someday I will get paid to think outside the box. Otherwise, my time is too short.
GoboLinux and Replacing the FSH
FSH -> FHS
I’m sorry Thom, but that’s a load of bollocks. English, and any other natural language, has just as many oddities and discrepancies as Dutch has. You are probably more aware of them in Dutch, it being your native language, but stating that Dutch has more exceptions than English does is just a sign of blatant ignorance. Please stick to commenting on computing, as that’s something you have knowledge about (even if I sometimes disagree with you), but stay far away from linguistics; you’ll only make a fool of yourself.
JAL
The best thing about that example is how modern English is a mix of Latin, French, at least one Germanic variant, and then some.
It’s not without reason that one can hold spelling contests in English, as how a word is spoken often tells you close to nothing about how it’s written…
Oh, we’ll most likely need defragmenting tools too.
Maybe it’s not necessary to so drastically change the directory structure, but I definitely do think it could be simplified. For example, how many “bin” directories do we need? I just searched on my Ubuntu system and found no less than 6 “bin” dirs. And the list goes on. Four directories named “share” or “shared”. Three named “sbin”. This could definitely be cleaned up.
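The count is easy to reproduce. Here is a small sketch that scans a throwaway demo tree rather than a real root (the directory set below is just an illustration; point `root` at “/” on your own system to get the real numbers):

```python
import os
import tempfile

# Build a tiny stand-in for an FHS root rather than walking the real /.
root = tempfile.mkdtemp()
for d in ("bin", "sbin", "usr/bin", "usr/sbin",
          "usr/local/bin", "usr/share", "usr/local/share"):
    os.makedirs(os.path.join(root, d))

# Count every directory literally named "bin" anywhere in the tree.
bins = [path for path, dirs, files in os.walk(root)
        if os.path.basename(path) == "bin"]
print(len(bins))  # -> 3 in this demo tree; the poster found 6 on Ubuntu
```

The same one-liner with "share" or "sbin" substituted reproduces the other counts the poster mentions.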
The last time I worked with Gobo, it was because I was challenged to after I called dogshit on their FS renaming scheme. Well, I was proven wrong: IMHO what they are working on is a very good idea. It makes more sense, makes side-by-side installs easier, and I’m guessing makes multilib a snap. Their build system was kind of a PITA back then (around 6-8 months ago), and compiling stuff without it was a complete hassle. I think that once they get that polished up, they will have a real winner.
Referring to sandboxes and such?
Works OK for me.
However, the wiki should probably have some content removed or marked as no longer valid, as it turns a single-step install (if you’re dealing with a sane ./configure-based source archive) into a multi-step one.
Basically it’s this:
MakeRecipe “app-name” “version-number” “url-to-source-archive”
Then let the scripts chew on it for a bit and spit out a basic recipe.
Then you go:
Compile “app-name”
If all goes well, you should be left with a working app, ready to be used.
As for multilib: if you’re thinking about having multiple versions of the same lib side by side, then yes. I currently have two versions of Qt installed:
one to support KDE 3.5.x, the other to support the latest SMPlayer.
It’s a bit messy though, as one has to juggle SymlinkProgram a bit to get the correct lib version to be Default.
Thanks for the heads up. I can’t honestly recall what issues I was having, but they were real, and there were forum posts about similar issues. I think they were in the middle of some major upgrade to the build system or something similar. Sorry if this is vague; I really can’t remember what the problems were specifically. Dumb question/lazy alert: once the packages are compiled, is there anything that keeps track of what you’ve installed and what the deps were? I frequently compile my own packages, but I want something to keep track of stuff for me; mundane crap like that is exactly what I think computers are *supposed* to do.
I think that what seems like an overall lack of traction to some is probably caused by the FS changes apparently being newbie-targeted while the OS itself is far closer to slackware or arch as far as management goes (at least when I was test driving it). It’s hard to convince someone who knows the FHS well enough to get around efficiently that they should change, and it’s hard to convince someone who demands (often quite vocally) that things must work w/o any elbow grease or clue-gain whatsoever that they should OMFG compile anything.
I’m interested to see where Gobo goes, though. I think it could be a real winner if it manages to sway some old heads and some new heads, and they can work together.
Well, a homemade recipe will show up in /Files/Compile/LocalRecipes, but beyond that there is not much that allows one to tell a homemade install from one found in the repository (that I know of, at least).
And every versioned dir has a Resources subdir that, among other things, holds a dependencies file. This is autogenerated at the end of a Compile if none exists beforehand.
And the FS (or the distro in general) is not really newbie-targeted. It’s more of a KISS (Keep It Simple, Stupid) design that happens to have some newbie-friendly qualities (that is, if one ever gets the precompiled packages to match the number of recipes).
I’ve been using GoboLinux for several years now on my home server and also in Parallels for testing Linux compatibility for work. I really love it. It is almost perfect, and its flexibility really makes my life easier, but I just couldn’t quite switch to it for general use.
As I see it, the real problem is software installation. It’s great to have programs organised in directories under /Programs, but I’d like to create my own hierarchy…/Programs/Media, /Programs/PIM, /Users/me/Programs/LatestAlphaSoftware etc… There are ways of working around this, but GoboLinux isn’t really designed with that in mind. In the meantime, I’m working on a tool that will make it possible to move & reorganise bundles, a bit like in MacOS X.
Personally, I think all you babies should stop whining about the traditional Unix hierarchy.
It is most definitely not obsolete; you all need to stop making Unix inferior just to adapt to inferior people’s needs.
I can’t believe that nobody has brought up what a classic unix-haters-style complaint this is. If you follow classic unix-as-a-virus logic, then of course trying to change things will never work, because “worse is better”. What’s done now may not be great, but it kind of works, and it’s a “standard”, so it must be better. People here have been bringing up the cost of changing things, and that’s the really insidious part of the whole equation: now that there’s this whole Linux and free software ecosystem, as long as the cost to change to something better is more than zero, there’s no point.
I think the real problem with GoboLinux is that it isn’t being more ambitious. The only way you could possibly make something better than Linux that could succeed would be to start from scratch, build it from the ground up to answer every problem anyone has ever had with any OS, fix them all, and release that. Just modifying the file storage structure and trying to link to all the old stuff to keep it compatible makes it easier to write off. If it can’t completely blow Linux away, then there’s no incentive to switch. And a change that seems so cosmetic, even if it isn’t? Nah, that won’t work. But I do applaud the GoboLinux folks for trying to do the right thing instead of the easy thing. Now if they could collect the right fixes from everyone else and create a nice super-OS, then they’d have something.
Gobo is really only trying to address that one problem. I doubt they have the skill set or the manpower to deal with the other problems, like the X dependency everything has.
Actually, they are doing a couple of other interesting things: the Gobo init system is breathtaking in its elegance, as is the “recipe/compile” system.