We are no longer in control of our home directories.
My own home directory contains 25 ordinary files and 144 hidden files. The dotfiles contain data that doesn’t belong to me: it belongs to the programmers whose programs decided to hijack the primary location designed as storage for my personal files. I can’t place those dotfiles anywhere else, and they will appear again if I try to delete them. All I can do is sit here knowing that in the darkness, behind the scenes, they are there. Waiting in silence. Some of those programmers decided to additionally place some normal files and directories in the same place. Those are clearly visible every time I execute `ls` in my home directory. It is beyond me why my home directory ended up containing a `node_modules` directory, a `package-lock.json` file, a `yarn.lock` file (I have never even consciously used yarn!), two strange log files originating from some Java software clearly using an H2 database, and a `Desktop` directory. That last one has been created by Steam, which is quite unfortunate, as I simply do not have a desktop or a desktop environment on my machine. I dread the day when I will hear a loud knock on my door and one of those programmers will barge in, informing me that he is going to store a piece of his furniture in the middle of my living room, if I don’t mind.
The way Linux distributions handle the directory structure in general is deeply broken and inconsistent – trying to cram a modern desktop operating system into a directory structure designed for punch card machines is lunacy – and this is yet another example of that. It’s not just developers being lazy; it’s also developers simply being unable to count on distributions making sane choices and following the FHS to begin with.
UNIX-based operating systems are an outdated mess under the hood, and developers are trying to work around that mess by making an even bigger mess using hidden files and random directories all over the place. Of course, saying this is considered sacrilege, as an operating system designed for mainframes in the ’60s is clearly perfect, and never needs to change or alter or improve its underpinnings in any way, shape, or form.
So let me get this straight…
This person is upset that their home directory contains configuration data for their user account that is needed for the programs (or parts of programs) they use? They’re pissed off that the home directory is doing *exactly* what it was designed to do on a UNIX system? Really? Are they also going to get pissed about %appdata%, or ~/Library, too?
Every multi-user system *must* have a way to keep configuration data inside the account to which it is applicable. Perhaps this person would prefer that Linux have a registry?
darknexus,
I think he’s got some valid points. It was foolish to just dump everything in the user’s root directory as hidden files without an organizational plan. It’s too late to fix it, but it would have been better to use a subdirectory for config, like /home/name/etc. Although I could do without his environment variables.
I agree that the windows registry is terrible. However, I think there could be a lot of merit in storing application settings in a proper database instead of a text file. As flexible as text files are, SQL blows them away every time. Obviously unix predates SQL and so that wasn’t possible at the time, but both in terms of software development and in terms of administrative tasks, linux could be so much better if we had a proper database instead of text editing. Gone would be the days of trying to script config changes via regexes or what not. Instead we’d have safe atomic updates, a powerful query engine, extensible tables, views, triggers, etc. While unix config files don’t look so bad next to the windows registry, honestly I think text files are still holding us back from using even cleaner and more productive tools.
Databases were one of the most enlightening parts of my CS education and I feel the power of databases is totally underutilized when it comes to unix configuration files.
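To make that concrete, here is a minimal sketch of the kind of setup being described: a per-application settings table in SQLite, accessed through Python’s built-in sqlite3 module. The file location, table layout, and key names below are hypothetical.

```python
# Minimal sketch (hypothetical file, table, and key names): a per-application
# settings store in SQLite, using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect("settings.db")  # e.g. something like ~/.config/someapp/settings.db
conn.execute("""
    CREATE TABLE IF NOT EXISTS settings (
        key     TEXT PRIMARY KEY,
        value   TEXT NOT NULL,
        comment TEXT              -- optional human-readable note per record
    )
""")

# Atomic update: either both settings change, or neither does.
with conn:  # commits on success, rolls back if anything raises
    conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?, ?)",
                 ("editor.font_size", "12", "in points"))
    conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?, ?)",
                 ("editor.theme", "dark", None))

# Query instead of grepping/regexing a text file.
for key, value in conn.execute(
        "SELECT key, value FROM settings WHERE key LIKE 'editor.%'"):
    print(key, "=", value)

conn.close()
```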
Oh yes, that’s exactly what we need, every application depending on a SQL server. No, thanks. That’s unnecessary bloat. Sqlite or something like that perhaps, but again, that makes things much more bloated than they need to be. A standardized text file format would be nice. If everything goes into a database, it won’t be long before we do have a registry-like system with hexadecimal table names, etc. Make things easier to hide, and more developers will want to hide them. I think a standardized hierarchy with standardized formats on text files would be the way to go, in the interest of no bloat and full transparency while reducing the clutter.
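For the simple cases, a “standardized text file format with a stock parser” already has a close approximation: an INI-style file. A rough sketch (hypothetical file and key names), using Python’s built-in configparser:

```python
# Sketch of the "standardized text format + stock parser" idea (hypothetical
# file and key names), using the INI format and Python's built-in configparser.
import configparser

cfg = configparser.ConfigParser()
cfg.read("app.ini")  # e.g. something under ~/.config/; a missing file is fine

# Set a couple of values; sections and keys are plain, greppable strings.
if "editor" not in cfg:
    cfg["editor"] = {}
cfg["editor"]["font_size"] = "12"
cfg["editor"]["theme"] = "dark"

# Read a value back, with a fallback if the key is absent.
size = cfg.getint("editor", "font_size", fallback=11)
print("font_size:", size)

# Write the whole file back out; it stays human-readable and hand-editable.
with open("app.ini", "w") as f:
    cfg.write(f)
```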
We already have .config, .cache, etc. There really are only like five dotfiles under your home directory when you first set up a system. Complaining about it is pretty stupid, because the Unix style of doing it this way is probably the best around.
Look at some other operating systems, you think an Atari ST with a hard drive stays very clean after you’ve had it for a while? You have to put forth some great effort to keep it clean. Same goes with Amiga, etc.
Windows has the terrible registry, and oh, Macs do it the same way as any Linux system does.
darknexus,
I respectfully disagree. A standard database engine shared by all of the software on the OS would be less bloated than the situation we now find ourselves in, with nearly every project building its own custom text parser due to its structured data requirements. Writing scripts to handle config files is non-trivial given how every project uses different delimiter characters, quoting, equality assignments, section blocks, special symbols, etc. to mimic structured data in text files. The end result is that in order to automate things we end up having to write custom code to manipulate each config file. Not only that, but these text files don’t offer ACID compliance or structural integrity. Databases are naturally extendable with new fields, and sysadmins can easily index them however they require. With text files there’s no such capability: you just have to reparse the entire file for every single change. You can’t just query or update the information you need as you can with a database. Additionally, almost all programs will end up caching the configuration data by converting the text file into structured data in RAM every time they parse it. Not only does this increase the work for the programmer, but it adds overhead, and every instance of a program ends up re-parsing the config files and duplicating the in-memory data structures. One change to an email record or dns entry in text files and you have to reparse the whole file and rebuild the whole structure, and often you have to restart the whole program. Keeping these runtime caches in sync without restarting a program becomes its own challenge, with some programs resorting to signals, which introduces its own complexities and race conditions.
Yes, software developers can solve all these problems, but at the end of the day once you finish solving all these problems you end up with a database anyways. Rather than doing all this work and ending up with a half-assed solution that’s nowhere near as fast, powerful, maintained, or standardized as SQL, we should all take a hint from the database world who’ve already done all of this work and perfected it.
Sometimes change for the sake of change is bad, but unix traditions can also be bad when they get in the way of progress.
Alfman,
What do you propose to be the index key for, let’s see, a Paint program? And when someone else creates a new version of it because he is not satisfied with the current one (it stalled), what will be used as the key?
Will freedesktop.org keep a database mapping applications to index keys for the applications available on linux?
As you can see, having a central store point is not that easy.
acobar,
I don’t really understand the crux of your point because you get the exact same thing with text files today. A filename is essentially a unique key for the config file. Remember that filesystems are a kind of database too, only limited to blobs, and filenames are unique indexes that locate data.
I’m not sure how familiar you are with databases, which might be causing some confusion. Switching to a database would not mean piling everything into one huge tangled mess shared by hundreds of applications. That would be as awful as the registry, and I don’t want that any more than you do. Instead, every application should have its own schema. This is how databases are designed in practice. It’s very common to have one database instance serving completely different users (think of shared hosting). One user’s databases won’t step on another user’s databases even if they’re both running the same wordpress software, because they each get their own schema. This brings up another benefit of databases: permissions can be as fine grained or coarse grained as you want.
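A rough sketch of that isolation, with hypothetical application names: SQLite has no schemas or GRANT statements, so one database file per application, protected by ordinary file permissions, stands in for the idea here.

```python
# Sketch of "each application gets its own namespace": here approximated with
# one SQLite file per application (hypothetical app names). In a server-class
# database the same idea would be a schema per application plus GRANTs; with
# SQLite, separate files and ordinary file permissions stand in for that.
import os
import sqlite3

CONFIG_DIR = os.path.expanduser("~/.config")

def open_settings(app_name):
    """Open (creating if necessary) the settings database for one application."""
    path = os.path.join(CONFIG_DIR, app_name, "settings.db")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS settings "
                 "(key TEXT PRIMARY KEY, value TEXT NOT NULL)")
    return conn

# Two applications, two independent stores; neither can clobber the other.
editor = open_settings("someeditor")
player = open_settings("someplayer")
with editor:
    editor.execute("INSERT OR REPLACE INTO settings VALUES ('theme', 'dark')")
with player:
    player.execute("INSERT OR REPLACE INTO settings VALUES ('volume', '80')")
```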
I think linux could become a better OS by embracing modern database paradigms, but I’m not expecting that to happen for the reasons that Thom brought up:
At: Alfman 2019-02-04 6:48 pm
I see that I misinterpreted your post, as I thought you were, somehow, defending a one-database solution and not the equivalent of what we have today, with each app setting up its own configuration files.
First, the current situation is not what many think. We have programs using xml, sqlite, json and text files with sections (usually with [something] headers) for initialization control. (Open your .cache and your .config and see for yourself.)
Second, the mere existence of a sqlite file will not help organize the settings for global administration unless sensible names and comments are added to it. It will, though, definitely help with the parsing process. The same can be said about the other formats.
Third, it is “bazaar” we are talking about, so some will defend xml, some json, some the current text ini files and everyone will give good arguments toward their choices.
Sincerely, we should first convince people to store things in proper separable subdirectories under users’ dirs. Even big projects, like KDE and GNOME, lack proper standardization.
acobar,
No problem.
This is what I’ve got. It’s rather haphazard.
You say sqlite specifically, but I’m not prescribing a specific solution so much as defending the rationale for using databases in the first place. The best practice ought to be using a generic data provider. Anyway, my question to you is: why would using a database preclude sensible names and comments? You can add comments at the column level or record level by adding comment fields.
Some formats like XML or JSON can be used to represent structured data in text files, and that’s a fair point (I don’t like editing these by hand). But it’s really not equivalent to the functionality of a database, as it still suffers from many of the text-file weaknesses I mentioned earlier: no concurrency & ACID-compliant transactions, having to read/parse/rewrite the file in its entirety, no structural integrity, no indexing, no SQL queries, etc.
I’m just trying to defend the idea of configuration databases in general terms. The specifics of making it happen are admittedly more challenging due to all the legacy evolutionary baggage and built-in resistance to change among the community. If you’re telling me there’s “no path from here to there”, you might well be right; it could be too late in the evolutionary lifecycle to even make a dent in the momentum. However, I pretty much accepted this from my first post, and I would bet against it happening despite my opinion that linux would be better for it.
At: Alfman 2019-02-05 11:04 am
It was a point about your reference to easier general administration: it is only easier if sensible names and comments are used, and you didn’t raise this point previously, as your arguments concentrated on highlighting the parsing advantages.
And, still, xml and json have some advantages, like manual editing and recovery after a severe crash (OK, quite uncommon, unless you are recovering from a dying hard disk, at which point the configuration of applications is your least worry). Your other points are fair, but you should keep in mind that most configuration files are really, really small, so the advantages associated with writing a new record instead of the whole file just do not apply, which makes part of your arguments moot. Also, why do you want concurrency for applications loading their configuration?
It would be wonderful if large desktops would provide functions to parse configurations .. and they do, only that developers have the strange habit of ignoring them.
And I agree with you that this should be the path to go down; I was only highlighting that some will not believe in it, and this time I complemented some of the reasons why by answering you more precisely.
acobar,
Obviously sensible names are better, but it doesn’t occur to me why programmers with databases would do any worse at naming things than programmers with text files.
“Which kind of doctor is better, a podiatrist versus an orthopedist?”
“The one who is able to read charts of course”
haha 🙂
Many databases have protection for maintaining integrity across certain faults, like power interruptions, to enforce that a transaction is either committed or rolled back. However, if the disk sectors are actually going bad, then your data is in danger either way. RAID is a solution to this, but typical consumers only have one disk, which is obviously bad for both performance and redundancy. So by the time hardware data corruption starts happening, I’d say it’s time to reinstall on new media anyway. Anyways, I’m off topic…
Well for a single user system it may not matter, but I’m thinking specifically in terms of administrative tasks on multiuser systems and using scripts to automate configuration tasks.
Take a hosting example. Say the customer has a VM with full root access and you provide an HTML front end for them: dns records, system accounts, firewalls, ip addresses, databases, email accounts, websites, ssl certs, etc.
Which approach do you take to script changes to the configuration files?
A) Do you use a database for the front end and force the user to make all changes through it? Having a database is by far the safest way to maintain integrity, and overwriting the text files is the easiest solution. But this will obviously cause problems for users who edit those files by hand. And it could be very frustrating for customers who want to set configuration options that aren’t implemented in the front end. These customers might even have their own scripts to automate configuration changes. These would essentially be overwriting each other’s changes.
B) Do you attempt bidirectional synchronization between the database and text files? You’d have to monitor the text files for changes and then parse them back into the database. This addresses some of the issues above, but it also increases the complexity considerably and raises the risk of accessing files in an incomplete state. If you end up syncing from a partially written file, you could end up screwing up the database, and if the front end submits a request at the same time, it could end up screwing up the text configuration file.
Without atomic transactions, a lot of these risks go up. These problems may be unlikely on a day-to-day basis, but when they happen it could be disastrous.
C) Do you have the front end parse/modify the text files directly? This minimizes the stale cache/synchronization problem, but you likely increase the risk of concurrent access, since more than one user may be using the front end simultaneously. Also, guaranteeing that the front end code (php/python/etc) correctly parses numerous config file formats and writes them back correctly, only changing the values it intended to change without corrupting/deleting any values that the script doesn’t understand, is asking quite a lot. These “simple text files” end up causing extreme complexity for automation. And if the automated scripts don’t get the escaping rules correct, it can introduce vulnerabilities.
D) Do you give up with the front end and accept that configuration files are best edited in a text editor? (I’ve tried this with customers, but some become unhappy with this suggestion).
What makes me wish the linux convention was just for all applications to use a database directly is that all of these problems instantly become non-issues. You get concurrency and transaction safety out of the box. Not only that, but databases make the scripts and applications easier to write as well, without the need for parsing text into data structures and then back out to text files. You can use views and queries to create new reports easily without writing much code. While I like the original unix idea of piping many simple unix tools into each other to achieve greater results, I think it’s ok to admit that this approach falls short on the database problems that SQL excels at.
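As a small illustration of the “views and queries as reports” point (hypothetical table names and data; an in-memory SQLite database keeps the sketch self-contained):

```python
# Sketch of "reports are just views/queries" over hypothetical config tables
# (in-memory SQLite so the example is self-contained).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE email_accounts (address TEXT PRIMARY KEY, quota_mb INTEGER);
    CREATE TABLE dns_records (name TEXT, type TEXT, value TEXT);

    INSERT INTO email_accounts VALUES ('alice@example.com', 500),
                                      ('bob@example.com', 2000);
    INSERT INTO dns_records VALUES ('example.com', 'MX', 'mail.example.com'),
                                   ('mail.example.com', 'A', '203.0.113.10');

    -- A "report" is just a view; no ad-hoc parsing code required.
    CREATE VIEW large_mailboxes AS
        SELECT address, quota_mb FROM email_accounts
        WHERE quota_mb >= 1000;
""")

for row in conn.execute("SELECT * FROM large_mailboxes"):
    print(row)   # -> ('bob@example.com', 2000)
```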
IMHO the arguments for databases are stronger, but convincing people to change their minds can be like moving oceans.
Thanks for the discussion 🙂
darknexus,
How is SQLite bloated? Even the amalgamated single header/source is tiny for what it does. Furthermore, SQLite is most likely on a Linux system already. Web browsers use SQLite for storage. KDE stores some things in SQLite.
SQLite also benefits from being well tested, and a well documented backwards and forwards compatible format. There are also a bunch of graphical SQLite frontends too if you want to manually edit configurations outside of programs.
Text files may seem simple on the surface, but you end up with a whole bunch of bloat just to get features an application may need, or a user might benefit from, like versioning or migration.
Text files are better simply because you can edit them directly in case a program writes out a funky setting that breaks it. I can’t tell you how many times I had to fix X11 config files when I was learning Linux, and I was glad it was a configuration file you could edit with a text editor.
leech,
Why, in this alternate universe where distros store configuration data in databases, would they not have the foresight to build & include a database editor?
Isn’t it unfair to criticize a hypothetical database driven distro based on problems that would naturally have been solved in that hypothetical environment?
Eh, the registry’s not that bad. I mean, on the 9x family it was awful, but the NT family’s a whole nother kettle of fish, which has some clever ideas and structure, and is really not that bad these days, particularly since all the effort that’s been put into don’t-run-as-admin-by-default policies has forced apps to not clutter the system registry more than necessary.
Without the registry things like group policy wouldn’t work, and it does enable some clever hacks to happen (runtime compatibility patching, app/user virtualization etc).
You can actually see how MS tends to treat Active Directory (aka, MS LDAP) as a cross-machine version of the registry too.
The main advantage of a registry, as I see it, resides in the discovery of applications and their settings, as it is way easier to browse them from a central point. For developers there is the already-mentioned value of not having to parse the whole file every time. GNOME has dconf. The main drawback is how easily bad software can mess up its contents and leave the entire system in an unusable state on Windows.
To do that to the whole system, it already has to be running with Administrative privileges (or higher, as even the admin can’t get to some things without jumping through a lot of hoops).
If it’s already the admin, then it can do whatever it wants anyway.
It’s kinda like saying “Software running as root can mess up your system” – well yeah.
If it’s running as a user, it can only mess up that user. Same as *nix systems.
Not saying the registry should be reimplemented on linux btw. Different systems, different approaches, different solutions.
At: The1stImmortal – 2019-02-06 4:04 am
I would argue that, for practical purposes, the situations you painted are not equivalent. Most Windows users, regrettably, use their computers with privileged accounts, and I have yet to encounter this situation in the linux / *BSD world. So, when I complained about the mess that can be made, I was recollecting my (anecdotal?) experience in the field, and was not complaining about user configuration problems but about system-level mess.
@post by acobar 2019-02-07 11:53 am
A large part of the Windows installed base consists of business machines, typically very locked down; somewhat more so than Linux/BSD desktops (since users of those business machines typically don’t even know the admin password, so they can’t elevate privileges like Linux/BSD users are all too happy to do, often for some random scripts copied & pasted from the web…)
“As flexible as text files are, SQL blows them away every time.”
Wrong, SQL only blows them away if the text files are used as a database. If a database is used as a textfile, the textfile blows away SQL every time. For all other cases, you have to decide case-by-case.
Text file parsing is a very well understood problem with easy solutions. There are numerous libraries available for a developer to use, and if none of them suits you, lex/flex is easy to learn for any CS major worth his/her salt.
I previously worked for a number of years as a UNIX sysadmin at a large installation with lots of users. At this job I learned to really like the UNIX philosophy where everything is a file, and quite often a text file. I can do sed, awk, grep, find, et cetera on them. Using diff and patch I can see differences and fix them. A database wouldn’t have benefited me in the least. Since every tool is specialized and more or less singular, I harness the power of pipes and redirection to chain them together to achieve the desired result. Extensible tables are not interesting when almost all settings ever are key/value pairs. Views and triggers are useful when I have complex data to handle. If your setting files consist of complex data, something is very very wrong. And if someone needs complex queries via a powerful query engine to get their program’s settings correct, they are catastrophically wrong. IMHO they should hand in their developer license and change careers immediately, without passing GO. But feel free to enlighten me when such practices are warranted, and I am open enough to change my mind.
Personally I want to be the one changing the settings for my program, I don’t want my program to edit the dotfiles for me. But most would probably describe me as a UNIX power user since I have learned to use the UNIX tools.
IMHO a program should have such sane default settings that the majority of the users wouldn’t feel a need to change them. A program that needs a lot of settings to be usable suffers from bad design. With that philosophy, setting files tend to be simple and often minimal.
If you don’t like hidden files and think programs leave too many of them lying around, well, I can sympathize with that. Programs shouldn’t leave cruft lying around. Looking at my home directories on different systems, most of them are quite clean. There are a few programs that misbehave, but never at a level above merely annoying.
Don’t get me wrong, I too have a CS education and learning about databases was both fun and extremely useful. So when I do some serious data lifting/handling and especially when I want to start digging through the data, SQL is usually one of the first tools I turn to. For setting files, no, never. To paraphrase you: I feel that the power of databases is totally unwarranted and unnecessary when it comes to unix configuration files.
TL;DR – use the right tool for the job. Databases/SQL is NOT a silver bullet.
maxz,
We’re talking about the article, which is dotfiles in the user’s home directory. We’re not talking about sysadmining.
Plain text files just encourage every program to invent their own syntax with their own parsing and validation bugs.
Triggers aren’t just for complex data. You can use them for validation or versioning purposes, instead of every program having their own validation bugs and versioning requirements.
That doesn’t make any sense, since for most users (and they are not sysadmins), they change their settings through the program, or some other, sometimes interactive, process.
I too am talking about the article, and the following (IMHO ludicrous) suggestion to replace it with “SQL” (we should really talk about RDBMS, not SQL). I understand that someone doesn’t like lots of hidden files that they cannot make heads or tails of, but will a database _really_ help them? Or will it just hide the information in the files from them and introduce new problems for users? I am a firm believer in giving users access to information, in a way that is readable for them. Whether they understand it is the next question, but unless they can access it, it is very difficult for them to even start to understand.
So I am not talking about the settings files of the system; my point of view as a sysadmin is that I have come across a LOT of setting files of users. And I fail to see what the database would add, except to make the lives of many users worse.
You say that plain text files encourage every program to “invent their own syntax with their own parsing and validation bugs”. Storing something in a RDBMS still requires the program to parse the data that comes out of the database. Just because something is stored in the database doesn’t mean that a program automatically understands it. Also, any setting should be user editable. And as I said, parsing is a very well known problem and several libraries are available. If a programmer is writing shitty code, adding a RDBMS will not help with that.
When it comes to your mention of triggers for use with validation, sure, but what would you do if the data isn’t valid? You should do nothing, which is exactly what a well formed parser will do (and then alert the user that X will be ignored because it is not understood). As for versioning? Do you really want users to have their dotfile data versioned? If the average user cannot understand the settings of their programs, storing versioned settings in a RDBMS will set their minds ablaze. From a former sysadmin that had to help clueless users when something went wrong: please don’t, you aren’t helping anyone with this approach.
Any decent UNIX system would have the users’ home directories backed up nightly. At our system, a read-only version of that backup was available to the user for convenience. So if you trashed your dotfile somehow, you could always get yesterday’s file back yourself. Older backups were available upon request to the sysadmins, where we retrieved them from tape and mounted them for you. Advanced users that wanted more control over their dotfiles usually employed RCS at the time for important dotfiles that were more complex (like .emacs).
As for the last paragraph you wrote: if you read what I write, any program adhering to the UNIX philosophy will have quite limited user settings, since the purpose of a program is quite singular in most cases and few users have cause to change the settings.
Of course there are exceptions to this. Three that come off the top of my mind are your shell’s dotfile, .emacs and .vimrc.
Don’t get me wrong, RDBMS are great tools – when needed. Ask yourself: what data in your dotfiles is of a relational nature? What data in your dotfiles needs the complex queries that SQL provides? Most data stored in the dotfiles is key-value type data. A RDBMS with SQL is overkill. And for those exceptions I mentioned above, where functions, aliases, formatting modifiers, etc. are supported, how would a RDBMS help? Forcing a one-size-fits-all solution benefits neither case and introduces overhead.
I agree that it can be useful for a user to be able to edit a setting from within a program, especially GUI programs. But being able to edit it from outside of that program is sometimes extremely useful too, since it allows you to fix problems that occur when settings make the program start up in weird ways. It could be tricky, but at least I have the option to try. And looking through one dotfile (which is of limited size) is way easier than deciphering a database schema and a dumped database, even for those of us that have experience with databases.
maxz,
No, you’re not. You’re talking about sysadmin stuff.
No, because a database provides implicit structuring of data that is no longer required to be understood by a program. They can simply store simple values – numbers and strings. No parsing required because there is no structure involved in the data. The structure is in the schema, and that is handled by the database.
No, you claimed that using flex/lex was a reasonable solution. It’s not. For most types of settings, flex/lex is overkill because there is no grammar. Only structure is required, and a database provides structure without the need for parsing, and certainly not requiring parser generators.
The trigger would abort the change. With text files, the bad value gets saved into the file before it’s ever parsed and checked again. A trigger would prevent the change from being committed in the first place, which is much more reliable and error-proof.
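A minimal sketch of that idea, with a hypothetical table, key, and validation rule, using SQLite trigger syntax through Python’s built-in sqlite3 module:

```python
# Sketch of validation via a trigger (hypothetical table, key, and bounds):
# the bad value is rejected before it is ever committed, rather than being
# discovered whenever the config next gets re-parsed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);
    INSERT INTO settings VALUES ('font_size', '12');

    CREATE TRIGGER validate_font_size
    BEFORE UPDATE OF value ON settings
    WHEN NEW.key = 'font_size'
         AND CAST(NEW.value AS INTEGER) NOT BETWEEN 6 AND 96
    BEGIN
        SELECT RAISE(ABORT, 'font_size must be between 6 and 96');
    END;
""")

try:
    with conn:
        conn.execute("UPDATE settings SET value = '0' WHERE key = 'font_size'")
except sqlite3.DatabaseError as e:
    print("rejected:", e)   # the old value stays in place
```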
No. That’s why I’m suggesting databases. I don’t want users to have their dotfile data versioned. Having an inbuilt ability to save edits to a settings file, such as what you can get with databases, is much better than relying on the user to have backed up their dotfiles. That is the whole point about versioning.
But I was also talking about program versioning. Most textfile settings grammars aren’t future proof.
You can keep asking these questions over and over again, but they’re irrelevant, since the database features I mentioned are useful completely independently of complex queries. You don’t need to have complex queries just to make a database worthwhile. That’s just a strawman.
There are already many database GUI frontends that can edit databases directly so this point is moot. And again, having something like database triggers that prevents bad edits and does revision control automatically is much better than editing textfiles that do not have this capability.
That might be true but I’m making good use of dotfiles. I sync many of them between machines so I have consistent settings everywhere. Most of the dotfiles are text-based so in a few cases I use shellscripts (sed -i) to change a few settings.
Programmers: please store settings in ~/.config and keep it clean and simple.
Furthermore, the author speaks of “madness” and then lists a great many $XDG variables… oh well…
File system and file structure management has barely moved since the 1980s.
If it were designed from scratch, without prejudice, you would never make it like it is in the modern age. But… we want compatibility, familiarity, etc. If that’s what you want, this is the mess you get, so it’s a self-fulfilling prophecy.
I think this is really an issue with most DE’s, not so much with Linux. Running IceWM, I only have a handful of dotfiles in my home directory (including nested directories), all related to starting X11 and my shell.
I use XFCE, and I’ve got 22 hidden files and 41 hidden directories in my home. The vast majority of apps use .config/ and .cache/ and .local/share/. Anywho, they’re HIDDEN, so it’s really not an issue. If you’re messing with hidden files and directories, it’s safe to assume you’re either competent to deal with them, or you’re a dumb-ass.
And Windows is far more messy than linux. They’re just much better at hiding it from the user.
Thom, I think you’re off your rocker here. GNU/Linux distributions like Fedora, SUSE, and Ubuntu have obviously improved and changed quite a bit since UNIX II of the ’70s. Suggesting otherwise is insane and divorced from reality. New ideas are challenged and debated fiercely, but ultimately the right decision (imho) prevails (see systemd). Dotfiles aren’t a problem. I’m not sure what anyone else would do, but I would love to hear some brainstorming ideas.
Maybe you’d prefer if they were all stored in a hidden partition that no one could see with a ls command?
or
Completely under my control, no dotfiles pls => /home/thom/
I have control over files here but its in a different directory so I don’t look at them => /user_config/thom/
I guess it could just be stored in a config database similar to the windows registry, but I’m not sure why that would be desired over a plain file system approach.
Thom is totally off base on this one. The problem has nothing to do with linux having an aging ancestral file system structure.
What people see as a problem with linux derives from the lack of enforcement of how things must be, even though there exists an effort to standardize how they should be for most things, in the LSB and freedesktop directives.
The problem the author found himself in is a clear example of people not putting in enough effort to learn.
With liberty comes also a need to really understand things in order to make wise choices; some of the creators of the applications this guy uses clearly didn’t want to invest their time.
It always perplexed me that Unix and Linux don’t have something resembling Windows’ AppData. On the other hand this is the OS that makes a CD-ROM appear as a directory under the main harddrive, so I have learnt not to question the ways of the Cult Of Unix and the ways of the Holy GNU.
Well, MacOS does that too, what with all that “pesky” UNIX heritage it’s got. It’s no stranger than having separate drive letters when you think about it. Both are just ways to tell the system to look at another device. As for Gnu… well, who knows what they would have come up with (if anything) if they hadn’t had *NIX to piggyback on.
It is stranger than having drive letters. In Unix, the forward slash is used as a way to indicate the root of the main harddrive and as the directory separator character, and other drives appear as subdirectories to the main harddrive. Explain that to an ordinary user who doesn’t know what “mounting” is. Then explain to them that in Windows each drive gets a single letter of the English alphabet (in home PCs at least). Ask them which one makes more sense to them.
Mac OS X tries to hide this mess at the GUI level BTW with partial success.
kurkosdr,
It’s one of those things you don’t think about much when you get used to it. But you are right about it being quirky. It would kind of make more sense if the root contained links to all the disks rather than being a mounted disk itself.
/disk1/
/disk2/
/disk3/
At the same time though I understand that having root be a disk is what enables it to be persistent and to store data there. That is an interesting debate to have, obviously these things are subjective.
kurkosdr,
Sure.
Dear ordinary user,
The “/” is the root of the filesystem. Not the hard drive. You don’t need to worry about the hard drive anymore.
There. Not so hard.
And why doesn’t the main harddrive have a tag following the “/”, with its directories instead appearing immediately after the root, while the CD-ROM does have a tag following the “/” and THEN its directories?
kurkosdr,
Because you don’t care about the main harddrive. You care where your personal files are, and they’re in /home/yourname, which is the default view of most graphical file browsers.
A normal user would be using some graphical file browser, which provides direct buttons to external storage appropriately named, without even showing the path.
It depends on what you’re used to. To me drive letters are weird, and there’s a huge rant to be had about how Windows:
a) Is limited to a single letter for a device.
b) Doesn’t let you use variables to set drive letters ($HOME = F:).
Just take a look at how (say) VMS does the equivalent.
Windows doesn’t really use drive letters anymore though, does it? I’m pretty sure that at this point, the drive letters are basically cosmetic, and everything is handled underneath in pretty much the same way UNIX does it.
At: Thom Holwerda 2019-02-05 7:03 am
Actually, it does. If you have a computer with Windows and multiple versions of Linux, you’d better remove the assigned drive letters to avoid them showing up in Windows Explorer, where, upon clicking on them, you will be asked if you want to format them (facepalm).
Windows has junction points, which are roughly equivalent to mount points. Very useful in business but largely ignored by regular users.
I would really say that Windows is far behind on this matter.
It’s not quite that limited anymore. You can mount volumes anywhere in the filesystem tree under any accessible drive on Windows these days, it’s just a pain for a regular user to actually do it because you either configure it when formatting the drive, or you have to handle it from the command-line.
Win32 does drive letters (and things built on top of it, like UWP). The NT API underneath it doesn’t (and some things in higher layers that deal with that layer do). The NT object model isn’t just drives and volumes, though; it’s pretty much every resource in the OS, so it shares some similarities with having /proc and /sys on Linux, but goes way, way further (and has a virtual root).
Since the drive letters are actually a view into that NT object hierarchy, though, as acobar said, reparse points exist that let you do similar things to Linux mount points or bind mounts or symbolic links. But because they boil down to being more like symlinks in the object hierarchy, there are a lot of Win32 things that actually break or behave strangely when running into them (node monitors for change notification, for example).
If you think / is your hard drive, you should revisit what the directory tree is on Unix 🙂
That’s what the ordinary user sees. Good luck getting the ordinary user to map abstract concepts.
kurkosdr,
No they don’t. Most graphical file browsers for Linux today open up to the user’s home directory.
And the filesystem is just that, a system for locating files. An ordinary user in the modern age has no trouble understanding the concept of a table of contents, or an index. They’ve read books before. They’ve visited websites before. The file system is just an information location system, not a storage layout system.
It’s your own fault if you confuse users with an incorrect explanation of what a filesystem is.
While dotfiles are messy, at least you can isolate them to a single program.
One of the most useful things I learnt when I switched to Mac OS X from Windows back in 2005 was that you could fix many app problems just by deleting their preference file. Of course you can try that under Windows, but good luck finding everything in the Registry. Sadly, now with synced preferences and group containers, that approach is getting harder on the Mac too.
Databases are great in theory but file-based configuration files are incredibly flexible.
“… directory structure designed for punch card machines …”
As someone who has used multiple punch card machines, I can say that I have never seen a directory structure on said machines as is described in this article.
“… an operating system designed for mainframes in the ’60s is clearly perfect …”
UNIX was never designed for mainframes – just the opposite. It first ran on a PDP-7, then later, a PDP-11. The directory structure was stolen from Multics (which DID run on a mainframe class machine) but I don’t remember it having hidden files.
I suppose as storage grows this has become less of an issue, but I wonder if coders would accept this behavior if they were paying for cloud storage for all those unwanted files. In the end, some of us are neat freaks while others leave all their stuff lying around everywhere!
There was a time when I liked to play with different distributions, so I reinstalled my systems multiple times a month. It is great to have your /home on a different partition or even disk and not have to worry about personal configuration and files; after an install I only had to reinstall the non-standard applications I used and I was ready.
I don’t know how to do that so easily with another way of storing settings. I wanted to try it once when a new windows version was launched, but gave up. I never owned a mac, so I never played with the location of those settings.
When I looked at the directory tree a while ago, I must say it was a lot cleaner than years ago, when most of them were directly in /home/user/.
Um, no, that _is_ your data. Or are you saying that your personal preferences and command history aren’t yours? Yes, some things dump log files in `$HOME`, but most of those aren’t used much anymore, and yes, some things don’t use `~/.config` like they should, but a vast majority of those predate that becoming a de-facto standard path (let alone an actual standard) or are designed to work properly on old systems that don’t use it.
Also:
* UNIX was not and is not a mainframe system. Some versions can run on mainframes, but they don’t behave like any typical mainframe OS.
* Modern UNIX-like systems are derived from UNIX of the 80’s (System V and 4BSD), not the 60’s. Depending on your terminology, UNIX didn’t even exist in the 60’s (1969 for the original version).
* While classical UNIX systems did indeed run on systems that had punch-card readers, they were secondary input devices. The PDP-7 (the original UNIX platform, predating the famous PDP-11 version by roughly a year) used a regular keyboard for input, and a line printer for output, and pretty much every other UNIX platform has either used a keyboard and printer/monitor, or serial terminal as the primary human interface.
Storing structured data in text files is archaic and fixing it is probably as easy as creating a new OS. Storing user configuration data in a user’s directory is a good idea. Hidden vs sub directories is debatable and not a big deal either way. That is what the original fuss was about. I wouldn’t bother fixing that.
Someone mentioned how it was convenient to be able to copy your home directory to keep the same look and feel across different instances of your OS. Fuchsia seems to use a cloud-based/enabled utility called Ledger for that. I might have misinterpreted, but it looks like it’s a database and you don’t have to copy it between instances. It also probably eliminates the text parsing challenge and the manual config file editing chore. Seems like a great idea, if you trust the cloud you are on. However, I hope it’s not like Chrome OS, where without a cloud it is worse than useless. I lost six hours of work on a transatlantic flight. All my saves were re-saving the unedited version. No warning. How horrible!
lapx432,
Great comments all in all!
This line though… I can’t help but laugh at the notion that creating a new OS is easy.
Maybe it’s easy in principle 🙂
“Going to mars is probably as easy as creating a new rocket ship”
I like it when Thom has an opinion on something he clearly doesn’t understand. Unix was never designed for punch cards.
This blog post is mostly kinda dumb. “Oh no, the applications store their data in hidden plain-text files in my home directory. Whatever shall we do?”
Should applications use the XDG standards? Probably but not all applications do and it’s really not a big deal. Of all the ailments the Linux desktop suffers, storing config files in the home directory is pretty high on the “No one gives a fuck because it doesn’t matter” list.
Also, complaining about the “Desktop” folder and then advocating XDG is a special kind of irony.