The Linux Standard Base (LSB) is a specification that purports to define the services and application-level ABIs that a Linux distribution will provide for use by third-party programs. But some in the Debian project are questioning the value of maintaining LSB compliance – it has become, they say, a considerable amount of work for little measurable benefit.
It’s too much work for little benefit, and nobody wants to do it, so what’s the point – just drop it. At least, that seems to be the reasoning.
But Debian’s not throwing all of the LSB overboard: we’re still firmly standing behind the FHS (version 2.3 through Debian Policy; although 3.0 was released in August this year) and our SysV init scripts mostly conform to LSB VIII.22.{2-8}. But don’t get me wrong, this src:lsb upload is an explicit move away from the LSB.
That’s too bad – the FHS is an abomination, a useless, needlessly complex relic from a time we were still using punch cards, and it has no place in any modern computing platform. All operating systems have absolutely horrible and disastrous directory layouts, but the FHS is one of the absolute worst in history.
Well, personally I like the FHS, and there’s utility in it, especially when you deal with embedded and low-disk-space systems. Admittedly that’s no longer desktops/laptops, but that doesn’t mean it isn’t useful to continue using it.
But it doesn’t surprise me, as systemd is breaking a lot of that stuff – with the systemd guys pushing for everything into /usr and under /run…completely ignoring the embedded folks, many other aspects of the distributions, and the history of why it’s there.
Why keep it? To make it easier for people to push applications against Linux – especially those who have packages for Debian that are not included in the official Debian repositories for whatever reason.
And if they really do go down that road…well, then maybe it’s time to leave Debian behind.
The problem with the FHS is that pretty much every Linux distro out there right now is FHS-compliant … and no two of them use the same directory layout! There are so many options and alternatives in the spec (at least, the last time I read it a few years back) that anyone can be compliant without actually being the same as anyone else.
When Debian and RedHat are both considered FHS-compliant, there’s something wrong with the spec.
The differences aren’t that great. You know the config files are in /etc/. The difference is minor – apache2 instead of httpd, for example. Also you know logs are in /var and binaries are in /bin, /usr/bin, etc. It’s easy to adapt.
Yeah I think the people who are complaining about FHS are novice/home user types. As someone who’s been a sysadmin for over 15 years and has experience with commercial unixes as well as linux distros, I can tell you that in MY OPINION the FHS is pretty groovy.
You know what they say about ASSumptions…
I’ve been a sysadmin for just shy of 15 years, using FreeBSD and Linux for that entire time (starting with RedHat Linux, then moving to Debian, with some dalliances with SuSE and Arch). And I still hate the FHS compared to hier(7) on FreeBSD.
If your only experience with a filesystem layout is the FHS, then it seems like a decent layout. But, once you are exposed to alternatives, it quickly becomes apparent how messed up the FHS really is.
phoenix,
I agree. While there are some who like the FHS, I wouldn’t think that very many would want to switch to it if it were just invented today. As with many legacy systems, their popularity stems from historically dominant roles.
Alfman,
Judging by your comments and how you address others’ posts, it is easy to see that you are a knowledgeable and nice guy, but ..
Agree, totally.
I don’t see what the big problem is especially to get so upset over. FHS isn’t that bad at all. It’s logical and not hard to implement. What’s the fuss?
Yes, I like it too. Mainly because I’m used to it, and the breakdown of different types of files is indeed logical.
Too many programs will break if they change it now. Leave it be.
I knew a company where the sysadmins played with Debian.
A nightmare: the file server guy didn’t want to talk to the LAMP guy, because his servers were a complete mess.
The LAMP guy didn’t talk to the Directory guy, for the same reason.
You’d talk to the mail/DNS person and all you heard was sarcasm about all the others, you know, those stupid morons who can’t manage more than two different servers…
The devs were psychotic…
I now work at a company that uses Red Hat-based distros exclusively. Happy sysadmins joking together. And you know what, we can take our -real- holidays without stress.
Debian is not serious: you can mess around with it, but don’t work with it.
Or you will be in trouble.
Debian discredits the overall Linux community. It does as much damage as Microsoft did in its worst years.
A poor carpenter blames his tools.
Yes, and a professional uses professional tools.
A professional carpenter arranges his tools to work efficiently.
A professional carpenter picks up the right tool in a snap, almost blindly.
A professional carpenter, by expertise, knows which tool is the best one for his work.
And the professional carpenter will not be afraid to pay for tool support, if he thinks it is best for him.
So, compare the professional support options for Debian with those for other distros and even other OSes.
Just do it based on facts.
Now, if you still want to play with Debian at work, you are free to; it is your responsibility. You’ll bear it…
What you are describing isn’t a failure of Debian.
It’s a failure to hire competent sysadmins who can work together.
At my org, we manage both Debian and Red Hat systems, and they both have good points and bad points.
Honestly, though, once I’ve got them connected to the puppet server, I stop caring.
“What you are describing isn’t a failure of Debian. It’s a failure to hire competent sysadmins who can work together.”
Yes: this is EXACTLY the point.
Why is it almost impossible to hire competent devs and sysadmins for Debian when you can find them for other distros?
The answer is mostly in the question…
Or the other way around?
Honestly, Debian admins are typically pretty competent too. The difference between them is the same reason why the Debian and Red Hat distros are there to start with, and sadly both sides make religious wars out of it.
I use Debian b/c the tools are wonderful. Between Apt, Schroot/Sbuild, etc, it’s just a great distro platform, and I have yet to see any good equivalents on the Red Hat side. (Yes, I’ve used Yum and dnf, but there are aspects of them that simply don’t compare, or at least came a lot later.)
It’s not whether the admins are competent. It’s how professionally both sides act and whether they can set aside their differences sufficiently to work together. Sadly, that tends to be lacking in the software dev community as a whole.
My boss did (IMHO, of course!). And while I consider myself a little bit better than “competent”, by no means am I unique.
It may be that supervisors are hiring people who have run Debian / Ubuntu desktops, rather than experienced administrators who also know Debian.
For Red Hat, you can always ask if they’re RH certified (I’m not. oops.). That doesn’t work as well for Debian, as there isn’t an official “Enterprise” structure behind it.
There is a similar gulf between a Linux desktop user and a Linux server admin as there is between a Windows user and a Windows server admin.
Why do so many bloggers, in the tech area, feel so comfortable spouting such strong opinions about matters they really have little actual clue about?
Maybe because they look at it, see that critical system files are in a directory named ‘etc’, and rightly conclude that the whole thing is a train wreck?
I’m sure, like the rest of *nix, that directory name has historical significance and is probably quite elegant once you understand it all, but from the outside looking in, it is very user-UNfriendly and nonsensical. And yeah, all you fanboys go ahead and mod me down for speaking the truth, as you always do
Nobody who is incapable of understanding the reason should be editing files in /etc anyways.
Likewise, anyone who actually wants to learn how to edit these files should not have to be subjected to such a cryptic and arcane nomenclature either. The ‘it’s not for average users’ argument is no justification for a shitty naming scheme.
WorknMan,
GoboLinux has done a great job with building a clean hierarchy. It’s just my opinion, but for my needs I prefer it to the FHS, although they still have to support the FHS under the hood because the FHS is so pervasive in software as the greatest common divisor. It’s just one of many legacy things that isn’t going away in the foreseeable future, at least not without the support of major players.
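For anyone who hasn’t seen it, here is a rough sketch of the Gobo idea (directory and version names are from memory and purely illustrative): each program owns its own tree, and the legacy FHS paths are kept only as compatibility symlinks:

/Programs/Firefox/42.0/bin/firefox
/System/Index/bin/firefox -> /Programs/Firefox/42.0/bin/firefox
/usr/bin -> /System/Index/bin   # FHS view preserved for legacy software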
I recently downloaded the liveCD version of the latest GoboLinux and it wouldn’t even boot. I’m afraid that distro is in need of some real investment and development.
If remembering “config files go in /etc” is cryptic and arcane, you have no business editing config files, because you are not nearly ready enough for the black magic involved.
Way to miss the point, genius. If I don’t know the file system well and I’m looking for config files, I imagine the LAST directory I would look in is one called ‘etc’. I would expect them to be in a directory called ‘system’, or similar.
It’s just a small example of how ass-backwards the entire OS is.
Way to miss my point.
If you can’t be bothered to learn about something as trivial as where configuration files are located, you have no business attempting to edit them – that’s how things break.
Manually editing configuration files is an advanced topic, requiring knowledge far beyond “where they’re located”.
Believe it or not, you’re both correct. If you can’t or don’t want to learn where the system files are, you don’t need to mess with system files. However, historical reasons notwithstanding, “etc” is a poor name for a directory storing some of the most critical files for an OS. “etc” evokes the sense that it’s optional or extraneous, not critical or required.
So? That only matters if you don’t know what’s in there, in which case you might not want to be messing in there.
I don’t think it’s bad to design a system around people that know how to use/operate the system. Not everything has to be immediately or easily accessible to the lowest common denominator.
Any other name that conveys enough meaning usually involves much more typing. I’m glad the important directory names in *nix systems are short – less typing, less chance of errors.
The sad thing is that “etc” is hard coded in the kernel.
It is a case in point of a lot of stupid baggage that is deeply entrenched in the Unix world.
Yup.
For all their talk of openness and rapid development and alternative-this and we-are-better-that, the UNIX/Linux world is hopelessly conservative and completely and utterly resistant to any form of change, progress, or new ideas. “This is the best way to do things because we’ve always done it this way” – the reasoning of people with blinders.
A shame, really.
“Unix is akin to a religion to many. Sorry, I do not believe in that religion.” – David Cutler
Editing config files should not be “black magic”. I hate this attitude so much… We were all new to our chosen systems at one point and so we memorized arcane details and became “wizards”. However that doesn’t mean that is how it should be. Config files should have good documentation in the damn file where you are when you need the documentation most.
That, of course, is not what I suggested things should be like, and I certainly provided enough context where that should be clear.
Config files are not (usually) black magic, but if you know little enough to get tripped up by them being in a directory named “/etc”, as opposed to some other directory name, you aren’t nearly ready for editing config files.
Oh right, because /etc/hosts is totally harder and less arcane than C:\Windows\System32\drivers\etc\hosts for the same functionality?
Wait, aren’t you the guy who loves to drone on and on about “poweruser” this and “poweruser” that?
I think you nailed the issue with “user un-friendly”. /etc is not a directory for a user, it’s a directory for an admin.
The name is irrelevant. The directory could be “fred”, or (more accurately) “cfg”, and it wouldn’t matter. It’s where (mostly) static system-wide configuration files go, and the fact that it’s a consistent location is the important part.
A program written for Linux in 1996 that looks in /etc/ for its configuration file will work today, because that’s still a valid location. Is that important? Hard to say. It’s still very convenient.
That same program will know that it can write logs to /var/log, store data in /var/lib, and store runtime info in /var/run.
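To make the convention concrete, here is roughly where a well-behaved daemon puts its files under the FHS (“mydaemon” is just a placeholder name):

/etc/mydaemon.conf       # static, system-wide configuration
/var/log/mydaemon.log    # logs
/var/lib/mydaemon/       # persistent state and data
/var/run/mydaemon.pid    # runtime info (nowadays usually /run)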
The problem is, most people who dismiss the FHS as “a useless, needlessly complex relic from a time we were still using punch cards” don’t actually administer Unix or Linux servers.
They’ve almost certainly never logged into a UNIX system and gone looking in /var/log for logs, only to discover that the system logs are in /var/adm or /usr/adm or some other half-baked location.
I *like* the fact that I can log into FreeBSD, Linux (debian, rhel, suse, arch, gentoo) and have a pretty good idea of where to look for various types of files.
Then again, Thom seems to be in favor of change for the sake of change, so I’m not surprised that consistency is one of his hobgoblins.
grat,
You may disagree with myself or with Thom on merits, I know many people do. That some of us think FHS is too complex does not necessarily mean we don’t understand it as some people are claiming. We need to cull these ad hominem arguments from the debate; it’s disingenuous to attack the person.
In my opinion, for Linux to gain more home users, the most common configurations will need to be achievable from GUI tools. Most Linux distros are now mostly there.
It would also be nice to see all distros offer a good set of recovery tools, preferably on the drive and available at boot, or at least either on the install disk or on a separate recovery disk. Only in unusual corner cases should you have to actually edit the configuration files. No regular user should ever have to manually recover their system, and they won’t!
At no time should a normal user need to learn systemd, how to edit config files, or even the directory structure… it’s not going to happen.
I have articulated my dislike for the FHS – and that of others – often enough. I DO know what I’m talking about when it comes to this stuff, because I’ve been reading and writing about this specific issue for well over 10 years.
So what if you’ve been reading and writing about this for 10 years? It could simply mean that your lack of understanding of a specific operating system, Unix in this case, goes way back.
Your response perfectly highlights the problem with the tech blogosphere; most of you have no actual clue what you’re talking about, but don’t let that deter you from voicing strong opinions. In the end it is not about informing, but about clickbaiting…
And your comments on this post perfectly illustrate the problem with internet comments.
Touché
Thom,
Can you please describe what you see as bad about the FHS?
I will briefly list the reasons I like it:
– What we call Linux, and perhaps should call LiGnuX, is a large aggregate of system parts developed all around the world without central management (for the whole thing, not the parts). I see no way this model could succeed unless some form of standardization is agreed upon;
– There was a prior effort to achieve standardization on Unix because of the problems that the lack of even a minimal one was creating for independent developers in the eighties and nineties;
– When bad things happen, and they do, you know where to look for clues;
– When you need to change things related to the whole system, you have a good guess at where to look. This was very important when there were only crude management/plumbing facilities on Linux;
– The FHS hierarchy helps the effort to create isolation layers on Linux and, with that, improves security and lowers maintenance costs.
It is not perfect and does need some adjustments, but far fewer than its critics claim.
Most of the complaints I see come from guys who would like to install some software from the Internet and get mad when they can’t, because the developer did not make a version available for their system. And the guy lacks knowledge of what a multiuser system means and what compromises it entails; all he sees is that he needs version xx of libzz while his system has version yy of it, so he goes grumpy and gnashes his teeth.
Well, there are many cases where the developer can generate statically linked apps (if he wants to – unless he needs to tap into some internal kernel characteristic, which is the case for some device drivers, for sure). The problem is, many are so used to the “convenient” Windows way that they just blame the wrong things. Linux is not Windows; the way things are organized is different, the presuppositions are different, and the methods of the basic underlying system are very different.
There are “workarounds”, like static linking, patchelf, statifier, ermine and Docker. None will fit all cases, but they will solve most of them.
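As a rough illustration of one of those workarounds (the package name and paths are hypothetical, not a recipe): patchelf can point a binary at a private copy of the old library it depends on, so it no longer cares what the distro ships:

mkdir -p /opt/myapp/lib && cp libold.so.1 /opt/myapp/lib/
patchelf --set-rpath /opt/myapp/lib /opt/myapp/bin/myapp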
As I said, there is a compromise in the “Linux way”, and I sincerely prefer a system with a central repository where things are upgraded/updated when needed.
Also, I especially like openSUSE’s efforts on complementary repositories.
This whole topic is in itself way more complex than the shallow criticisms we see on the Internet would make us believe. There are some proposals to fix it (one from Lennart Poettering that I don’t sympathize with that much; let’s see how it develops). As soon as I see a good article about it I will post a link here.
http://www.osnews.com/story/20195/GoboLinux_and_Replacing_the_FHS
http://www.osnews.com/story/21579/Why_Do_We_Hold_on_to_the_FHS_
http://www.osnews.com/story/19711/The_Utopia_of_Program_Management
In short, it’s so needlessly obtuse, chaotic, unclear, and open to interpretation that literally no two systems that adhere to the FHS actually have the same directory layout. In other words, each and every system that adheres to the FHS has a different directory layout. F–k man, not even two *Linux* distributions can agree on how to interpret the FHS.
The FHS is a standard so vague everybody can just do whatever the f–k they want and still claim to “adhere” to the “standard”.
In other words, it is a bad standard, and needs to be modernised or, preferably, replaced with something that wasn’t drawn up by people who thought punch cards were a little too hoity-toity.
Well, we agree that it needs updating and stricter adherence by all involved, but there are many points on which I fail to agree.
OK, some:
– It is just too complex. It is not; it is easy enough that any developer who cares can understand and follow its basic rules. If you want to see what is really complex, take a look inside the OS X system directory or the Windows one (I have to, because I also work with system maintenance/administration, which I regret);
– There are too many exceptions. Like every complex system that grows slowly, it needs adjustments (and adherence, of course);
– Symlinks are bad. No they aren’t; they are actually a good solution to lots of problems related to easy directory navigation and access;
– About app discovery, take a look at /usr/share/applications. Yeah, I know it is not in the FHS, but it is the solution adopted to fix this particular problem, discovery. Also take into account that any Linux system has a huge number of programs created with piping in mind (one of the Unix ways of doing things). Most users will really never care about them, even though some of the applications they use will. Virtually all apps with a GUI have an entry in /usr/share/applications now;
– The security model in place is probably the sanest we could ever devise, taking into account that it was created for a multiuser system. For complex sharing cases and stricter security requirements there are ACLs and SELinux (the latter is a bit too complex for the home desktop case);
– Personal settings are stored in the user’s home directory and have seen some standardization on where they should go. Again, it takes time until things settle and most developers follow the rules;
– Installation of multiple versions of libraries and apps, and isolation of their settings, are things people are playing with (like Lennart).
Anyway, xkcd describes the problem to its full extent:
https://xkcd.com/927/
Yes, please do. It is always interesting to know about what various groups are doing to improve their distro.
Absolutely, I’m simply responding in kind; Your shitty post and my shitty attitude go hand in hand.
Sometimes, someone not understanding a technological item is not necessarily an indictment against the technical qualities of that item. Sometimes, it simply means that the universe is trying to point out that you’re out of your element. Too many tech bloggers, however, misinterpret their own ignorance as it being somehow authoritative.
It takes a couple of minutes to comprehend the whys and hows of the Unix hierarchy of system files. And one of the reasons for its longevity is that it makes a hell of a lot of sense, if you understand what Unix is and what it does.
Can it be improved? Absolutely. Does it need to be updated? Yes. But to claim that it is somehow one of the worst things in computing ever, that’s just a ridiculous and uninformed opinion.
Now you sound like Droogstoppel from the Max Havelaar who claims he knows what is going around in the world because he has had the same place at the coffee market for 10 years.
Just because you have an established opinion doesn’t mean you know what you’re talking about. While the FHS certainly has its warts, it’s not the unmitigated disaster you make it out to be.
Yes, but have you been administrating multi-user unix and linux servers for 27 years?
I have. The FHS may seem archaic and outdated – but it’s a significant step forward from what we *actually* had in the punch card era. Anyone who’s administered HP-UX or Irix knows that the FHS is a fresh breeze of sanity and consistency.
Some of it may not make sense to the average user– but that doesn’t mean there wasn’t a perfectly valid reason for the decisions behind the FHS.
Now, some of those things – like /bin and /usr/bin – are only really important if you’re booting from small disks, or NFS, or PXE (/bin and /sbin should contain just enough binaries for the system to boot). Those are edge cases these days, but not unheard of.
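A rough sketch of what that split buys you (paths illustrative): the root filesystem only needs the handful of tools required to bring the system up and mount /usr, which can then live on NFS or a bigger disk:

/bin/sh  /bin/mount  /sbin/fsck  /sbin/init   # must live on the small root fs
/usr/bin/*  /usr/share/*                      # can come from NFS or a separate partition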
For instance, I don’t think you could boot RHEL7 over PXE, or off an NFS share, any longer– systemd is so large and clunky that it would be a really bad idea.
Now, some of this could be resolved by using variables, à la Windows with its %TMP% and %USERPROFILE% style indirection, but that adds a layer of complexity that isn’t really needed, as long as everyone agrees on the standard.
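For what it’s worth, a limited form of that indirection already exists on the Linux side through the XDG base-directory variables (the fallbacks shown are just the spec defaults), though it only helps where software actually honours it:

echo "${XDG_CONFIG_HOME:-$HOME/.config}"      # per-user configuration
echo "${XDG_DATA_HOME:-$HOME/.local/share}"   # per-user data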
But nobody does, and that’s the problem. A standard that’s so vague everybody can do whatever they want is a bad standard. Even then, smearing stuff all over the place, forcing the use of fragile package managers and the like to keep the system running, is just bad design.
It may have made sense in simpler times, and it may still make sense on servers and other specialised hardware, but once you arrive at laptops, desktops, phones – it’s nothing but complexity that breeds even more complexity.
The FHS is complex and obtuse, and a common argument is that it doesn’t matter because users don’t see it anyway – but this argument is invalid. Just look at all the layers operating systems are draping over the directory structure just to make the system usable – endless layers of complexity ripe for breakage. Complexity travels upwards – and this, users DO suffer from.
Many operating systems today – i.e., all of them – could benefit immensely from redesigning their building blocks – including the FHS and whatever the Windows equivalent is called, if it even has a name (it doesn’t). However, doing such plumbing is not as sexy as much of the other work, and of course, especially UNIX people see UNIX as some sort of bible, the One Truth, immovable, irrefutable, and refuse to even entertain the possibility that whatever was designed for time-sharing systems with punch cards might potentially not be a good fit for a modern laptop or smartphone.
Actually, it does… or used to. It used to be part of what was called the “win32 standard” or some such, and the fact that XP applications generally ignored it, and Vista enforced it, was part of the reason Vista was so maligned.
Every generation of Windows has changed file locations. Remember “Documents and Settings”? Windows 7 had symlinks to it. %SYSTEMROOT%\Profiles was particularly evil.
Amusingly (to me, I have a warped sense of humor), one of the most static locations in windows is:
c:\windows\system32\drivers\etc
It was there in Windows 95, and it’s still there in Windows 8. Contains hosts, lmhosts, and a couple other files that look like they were ripped straight from unix (probably because they were).
Thing is, restructuring all of this effectively means a new operating system. That’s why OS X (which has a lot of symlink hell to make a *BSD hierarchy look “normal”) isn’t really BSD-compatible, even though it ought to be.
I agree that poorly behaved applications install in all kinds of weird places– Except really, those are usually done by packages, and are easy enough to clean up.
I’d actually like for there to be a meta-package installer that respects not just the FHS, but the individual distro’s version of the FHS. Converting between .deb and .rpm is an exercise in insanity (but if you’re in that hell, look up fpm. The effin’ package manager. It ROCKS).
You exist in a world where you want a stable, reliable desktop that doesn’t make you see the stuff in the background. That’s fine. As an admin of Red Hat and Debian servers, I *have* to see the background.
For you, the FHS is an abomination. For me, it’s one of the few things that keeps me from going absolutely stark staring mad as a linux admin (I may be making an unwarranted assumption here).
I don’t want a GUI. I don’t need a GUI. I can’t administer all of my servers with a GUI. I need a command line, preferably one with bash or tcsh (although I do wonder what PowerShell on Linux would be like), because GUIs don’t scale.
Personally, I wouldn’t go *that* far. I think the FHS has limitations that makes it a poor fit for desktops and laptops. The only reason the FHS works as well as it does is because 90% of current Linux software can be ‘apt-get installed’ from the distro repository.
Unfortunately repositories have the same problems as App Stores do: centralized control and the politics that follow.
It seems your problem with the FHS is that it would be a horrible design if it were designed today, with the full benefit of hindsight.
The thing is, the FHS wasn’t designed (beyond a small number of initial decisions). It grew organically, on different systems, for different reasons, and is merely an attempt to formalize convention. Any FS layout will experience this given enough time. Do we change now, and then change again, and then again, and again?
Or, just deal with it as-is, and not waste the time and cost needed to change (Since, we’ll have to do it again for the exact same reasons)?
Drumhellar,
I think this is insightful. Although I don’t strictly agree with your conclusions, ironically I agree with you that they will largely be responsible for FHS not going away.
Some people have tried to fix these things, including myself using my own distro with some success. But the maintenance burden of supporting dozens, hundreds, or even thousands of packages is formidable. It wasn’t sustainable given my resources and I had to give up and re-roll the distro to be more FHS compatible.
This is why I don’t envision FHS changing, at least not without support from a major player.
What makes you think it is any different in other areas?
Maybe because in other areas there is hard knowledge about things that should be respected, with standards bodies and committees analyzing best practices and pushing their members to follow certain rules or face penalties when things go wrong.
Somehow, in certain areas of computing, and I don’t know why, people just push hard for “my way” and, as they don’t bear the consequences (even though developers in a long chain of dependencies do) and get away with it unscathed, they go their “way”. Perhaps this explains why things are frequently rewritten and old rules ignored for no other reason than “I read a bit about it and did not like what I saw” or “it is rubbish!”, without even developing a deeper knowledge of the current choices.
I am not saying this happens in all areas of computing, but it does in a sufficiently large number of them to cause a painful headache for most of us.
acobar,
Should we conclude that this aspect is unique to our field, or that we just happen to care more because it’s closer to home? To someone else who works in law, accounting, in medicine, as a teacher, plumber, etc, they may have their rules, but those will vary by jurisdiction based on the opinions of those in charge. Hopefully those in charge do a good job, but it’s natural for people to debate because they want different things. The closer we are to a field, the more we’ll learn about the intricacies and conflicts going on inside it, just like FHS for us.
We can’t even agree on the most important standards of all: fundamental units of measurement. I say the US should just stamp out English units for the benefit of a universal standard, but even here on OSNews there were dissenting opinions.
True, but we are not going to see, or probably will not see, someone deviating from what is in use just because he thinks an “inch” is a stupid unit of measurement, rolling out his own and ordering tools, bolts and nuts from manufacturers just to “respect” his “feelings”.
So, in some respects, yes, I think that software engineering is unique, not only because of what it allows its professionals to do but also because of the habits ingrained in many of its practitioners.
I’m fine with simplification. It can be and has been done on various Unixes, Linux distros, and OS X (not strictly FHS of course, but more or less the same). Symlinks can go a long way… either by using symlinks to create friendly views or by doing the inverse (creating FHS-compliant views into a friendlier structure). Either way things are still prone to breakage – but if done carefully and pervasively it can work.
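A minimal sketch of the symlink-view idea, with made-up names just to show the shape of it:

ln -s /etc  /Settings    # friendly alias pointing at the FHS location
ln -s /home /Users
# or the inverse: keep a friendly tree and generate FHS-compliant symlinks into it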
But it never really catches on… Fundamentally reworking the Unix directory structure is simply not worth the pain it causes, and no one is really happy with symlink magic either. Fact is, most users spend 99% of their time in /home/whatever – venturing out of it is mostly an admin/developer thing. Of course most home users of Linux are in effect admins themselves, so they have to learn at least the basics, but once you have a system configured you’re back to living in /home most of the time (if you do things right, anyway). I just don’t see what the big deal is. I’d like it simpler too, but inertia is a bitch…
Throwing away 40 years of learned behavior and breaking nearly every program in existence for the sake of a subjective improvement (and it is purely subjective) doesn’t make sense. It’s been tried, and every time a consensus is reached – which is more or less “leave it alone”.
I’m not disagreeing with you completely; it is complicated. I don’t think it is needlessly complicated though – there is a valid rationale for almost all of it. The fact is pretty much every single OS in common use, with the exception of Windows, uses something pretty close to the FHS – even BeOS did, in a manner of speaking. It isn’t going away. Ever.
Linux folks don’t care anymore. They say we’re clinging to the past. systemd is the future. Changing everything from ifconfig to init to directory layouts makes things better they say.
Linux is not a unix clone anymore. We need to all accept it and move on.
What is sad is it’s NOT “Linux folks”, it’s the simple fact that pretty much all of Linux has been hijacked by Red Hat and its cronies, so they WILL push the systemd party line, even ignoring the “user first” original mission statement of Debian.
Say what you will about Windows and the current version phoning home, but I think the current situation perfectly illustrates the value of “voting with your wallet” and why “free as in beer” was never a viable long-term strategy. Windows users look at Windows “Hey I’m a supersized smartphone now!” 8 and say “We don’t want that”, refuse to buy, sales go down the shitter, MSFT is forced to change the UI to something the users WILL accept, and Windows 10 gets more users in a month than Windows 8 did in something like a year.
Now compare this to Linux. Red Hat pushes systemd, users say “We don’t want that”, but because Linux users have no power of the wallet? RH is able to simply ignore the users, since corporate customers are their focus, and go straight to the devs (seriously, look how many heads at places like Debian and Canonical are former RH or tightly connected to RH), and the devs give the users the bird. Sure you can fork, but so what? How long can a fork last with 1/10000 of the budget and with everything being tied to systemd? It’s only a matter of time before too many critical systems are completely hooked into systemd for any fork to function without writing their own OS from scratch!
This is why the power of the wallet MATTERS, as voting with your wallet is the only way you can affect the direction of a company. Since Linux users on average don’t pay the bills of the large distros like Debian, Ubuntu, Red Hat? There is no reason to listen to you; you can take it or hit the bricks. It’s sad, but money talks, and no money? No voice.
You really think Microsoft gives a damn about what I think? I’m not a big enough customer for that. I have no voice.
“Now compare this to Linux, Red Hat pushes systemd, users say “We don’t want that” but because Linux users have no power of the wallet?”
“You’re making the mistake of assuming the majority of Linux users share your opinion. What evidence do you have of that?”
Well, here it is.
http://distrowatch.com/polls.php?poll=8
Systemd poll results (Distrowatch.com, Jul 2015):
I use systemd and like it: 787 (30%)
I use systemd and dislike it: 318 (12%)
I am not using systemd and plan to use it: 111 (4%)
I am not using systemd and plan to avoid it: 1170 (44%)
Other: 260 (10%)
Well, that’s 12% + 44% = 56% of statistically random Linux users against systemd.
This, 5 years after the introduction of systemd into the Linux ecosystem!
Most home users don’t care whether it’s systemd or not. People happy with their OSes mostly don’t go on DistroWatch – what for? The frustrated ones are the ones searching there – and frustrated people hate everything, so they will always be against it, no matter what you ask about.
Yes, Ubuntu is very often used by unskilled, don’t-care-how-it-works people. I know some of them. I used to be a sysadmin (well, not a low-level one making my own Google File System; I just needed a working solution – install, configure, forget, but update sometimes) – I wouldn’t care whether it’s systemd or not much more than I do now.
Please never write any encryption software!!
Using polls as statistics is very problematic because you don’t get a random sample set. In this case you get answers from people that frequently visit distrowatch. The average Linux user does not visit that site. It is like doing a poll on OSNews about Windows 7 vs Windows 10. You could get some fun entertaining stats out of that, but it would in no way represent what the average Windows user out there thinks.
Well, go to places that Linux users DO go, like Slashdot and SoylentNews, and ask THEM about systemd. BTW, be prepared to be called some VERY ugly names as they tell you what a steaming POS it is (along with more than enough actual examples and screencaps to show it has indeed got serious game-breaking issues) and tell you in no uncertain terms what you can do with it.
And I’m sorry but you go to the user forums of your favorite distro and post the same question…that is until they censor you, ban you, and wipe all evidence that it was ever there. BTW that is apparently SOP at all the forums controlled by the big three, Debian, RH, and Canonical, even though it expressly goes against the Debian founding mission statement. If THAT doesn’t tell you something is rotten in Denmark? Then frankly I don’t know what will.
Oh, and just an observation from an outsider, but you know what the way the distros are pushing systemd reminds me of most? The way MSFT shilled Windows 8. You even get the exact.same.talking.points, like “you are a luddite” (ad hominem), “embrace the innovation” (again attacking those that don’t fall in line, without any concrete reason why they should choose an unproven system over one that worked), and even outright personal attacks by devs and mods. It seriously sounds just like what we heard from the Win 8 shilling; in fact I bet if I only changed a couple of words per paragraph they would be interchangeable.
For a bunch that once prided themselves on giving technical explanations that sounded like technobabble because they were so detail-dense, and who debated and got the users involved in EVERYTHING, to suddenly close ranks and start shutting down debate? Yeah, I wanna know who’s cashing the checks.
I frequent Slashdot on a daily basis and I can’t recall anyone, not a single comment, having any actual real-world examples of issues. The comments mostly revolve around not liking change, therefore bashing systemd, or ignorance, like claiming that systemd spies on you and reports to NSA/GCHQ/whatever, or just plain trolling.
Also, it’s the people who are not happy about something that are the most vocal, but people who are content with it? They generally don’t make themselves heard. You can’t really use that to gauge your argument’s credibility.
In free software, people vote with their feet. There is Gentoo, which never replaced OpenRC with systemd. There is Void Linux, which was specifically built systemd-free. There is Devuan, which pushes a systemd-free Debian clone. If people were really all that concerned about systemd, these distros would be booming right now. They are not. Do you know why? Because most Linux users either are happy with systemd or don’t care enough. Sure, people running away from systemd are easy to come across in BSD communities, but there are no big numbers to make this shift visible in the stats.
Compare that to the GNOME 3 drama. When there was indeed a sufficient number of people disliking it, MATE and Cinnamon happened, and they are still there and popular enough. Xfce saw a huge increase in its user base. Unlike systemd, that really concerned a lot of people, and the consequences are easily visible.
The same argument can be made for the spying added to Ubuntu and Windows 10. The majority didn’t vote with their feet, so this is perfectly acceptable behavior, yes?
Or it could simply be that once you are trapped in an ecosystem it costs real $$$ to move, both in time and in getting everything back up and running, so that it becomes very difficult to just switch. Isn’t that one of the arguments for why Windows users don’t switch no matter what MSFT does?
It never was very expensive in any resource. Actually, if you are not in a hurry, it is fairly cheap and trivial. The problem is that most people never really cared about spying, crapware and systemd enough to waste any time on switching.
Only if you do pretty much nothing other than use a web browser. If you use any software that isn’t available on the target platform, or can’t get it running stably in Wine or whatever, then it’s neither cheap nor trivial.
We were talking about switching Linux distros because of systemd. What does Wine have to do with that?
I thought you were discussing just switching OSes in general. Switching between different Linux-distros shouldn’t be too difficult, I agree on that.
I’ll get hate for pointing this out but screw it, I’m too ancient to care.
What this is is the classic “all you need is a browser, GIMP and LO” argument of the hardcore Linux zealot, and ya know what? I’ve been building and selling computers since the Shat sold the VIC-20 and I have never, ever met this mythical person. I don’t care if they are 15-year-old kids or 75-year-old retirees, they all have some software they require, because if they didn’t, wth would they actually need a PC for?
And Linux is NOT magical: just because a driver or piece of software works in, say, Ubuntu does NOT guarantee that the same will be true of Red Hat. Every minute you spend having to find workarounds and fixes and alternatives? That is MONEY, unless your time is literally worthless, and again, even after all these years I’ve never met anybody who thinks their time isn’t worth anything or finds the tediousness of the above tasks enjoyable or “fun”.
There is a reason why there are sayings like “if it ain’t broke don’t fix it” and why people will keep an OS long past its EOL: the effort to switch is usually painful and unpleasant, and anybody who uses the above “all you need” argument is either being disingenuous at best, or at worst outright telling falsehoods they know to be unrealistic in order to sell “their” brand, because IRL? If those users do exist they are as rare as hen’s teeth and do NOT in any way, shape, or form represent the typical PC user in 2015.
Eh? There are a good number of clear cases where Canonical has given RH the finger and done their own thing.
IMHO systemd was released far too early. It was made the default in several distros with a good number of bugs present.
The whole thing seemed to be rushed for some reason that I cannot fathom.
Now? It seems to be pretty stable.
The future? In a few years we may well wonder what all the fuss is about.
If you don’t like systemd then go and fork your own distro and keep init scripts. It is all FOSS, so there really is nothing to stop you now, is there?
For me, my days of hacking kernels are long gone. I spent a good few years writing and supporting device drivers for VMS. I’ll get to grips with systemd in time, but at the moment none of what I do with Linux touches it. I suspect that most users are in the same position. i.e. Systemd? MEH
Let me try and follow your logic:
RH sales and stock price doubled (800M USD to 1.6B USD, ~40 to ~80 respectively) in the last 4 years.
RH is by far the largest enterprise Linux distributor (especially given the fact that CentOS is now a part of RH, and that Unbreakable Linux is nearly an exact copy of RHEL).
Beyond that, *all* other enterprise Linux distros (you know, the ones that have paying customers) have either switched or are in the process of switching to systemd.
… So, following *your* logic, users *are* voting with their wallet *in-favor* of systemd.
– Gilboa
Systemd speeds up the boot up and shutdown processes. There’s no doubt about it. I do worry about how deeply it’s embedded in the system. But hopefully with enough time and testing it’ll become rock solid and secure.
While I completely disagree with your assessment of systemd, and find your paranoia too 1999 for my tastes, there is another bigger issue with your theory: how to influence free software.
You *can* vote with your wallet. RHEL will sell to anyone. Or you *can* vote with code contributions to a different distro. Like Debian. All of the devs can vote. And they did (well, both in the tech steering committee elections and in the general resolutions). If you want a vote in free software, those are your choices: money or code. It’s your choice. But you can’t legitimately complain if you don’t do either. You don’t have the right to tell me how or what to code, or how I should spend my free time and resources.
“They” say. I don’t know any sysadmins personally who welcome this change, and I know a few sysadmins.
I’ve been using Linux for almost 20 years now, but in the past few years I have been working in a job that is primarily Windows. I can see why the Windows engineers are so turned off by Linux now. People argue about init systems and file system layouts like they matter so much more than they do. Windows has some really bad kludges that it has carried along for years now, but despite the general dislike of these designs it doesn’t break out into an all-out flamewar among users and developers. There are so many interesting things being done in computing and we are bitching about f–king file system layouts. Jesus Christ. Don’t even get me started on systemd.
With Windows you know what you’re getting, and it’s going to be something that is highly likely to work. Unfortunately Linux can’t say the same. On top of the inconsistency between distros, you can have inconsistencies from one distro version to the next. Depending on your distro of choice, updating can easily be a roll of the dice with risk of all kinds of breakage.
Anyone who is subscribed to the Linux dev mailing lists knows devs are in constant disagreement and conflict over what direction <something> should head in. It feels like Linux and most of its subsystems are in a perpetual state of identity crisis.
Control is a sensitive thing. People want it and when they have too little, they complain to no end. But when they have too much, they always make a big heaping mess of everything. People who love Linux tend to love it for what it is – not Windows. But the reverse is true too. A lot of people choose Windows because it’s not the mess that Linux often is. Like I said, they know what they’re getting and it’s highly likely going to work as expected.
That’s some stale FUD you got there. You realize the late 90s were over a decade and a half ago, right?
So basically what you’re saying is you pay no attention at all to Linux development. You don’t use the Linux dev mailing lists. And you ignore all of the bug and breakage reports, and the regressions that Linux gets on a daily basis. That’s the worst kind of Linux user – the kind that can’t acknowledge its downfalls. You can stick your head in the sand all you like, but the problems I’ve mentioned are going to continue to exist.
Linux is simply not the tropical paradise people like you would have others believe. Does it make you feel better when I say I love Linux when it comes to my personal servers and htpcs?
Windows has its share of problems; they are just different problems. I have been able to maintain rolling release distributions for years with Linux. Sure there have been hiccups, but Windows doesn’t avoid this either. I have had more than one Windows bug from updates cause serious problems in recent years.
That must be the reason why a 10-year-old Windows version still has a higher market share than both of the last two Windows versions.
When upgrading your Windows the only thing you know you’re getting is trouble, and if you’re lucky, your machine is not totally bricked (think Windows 8.1 upgrade disaster). I’m not sure about you, but for most people I know a Windows upgrade typically involves getting a new PC.
In all the Windows upgrades I’ve been through, both at home and at work, I have yet to come across a single system that got bricked. I don’t recall a time when there was actually any problem at all, for that matter. I/we only do clean installs, however, so maybe the problem you describe is limited to those who try upgrading on top of a previous install.
No OS is immune from hiccups and sometimes even disasters. But if you’re generally suggesting that Windows is not a stable and solid OS for most people, then you must not have much experience with Windows and/or people who use it.
I was referring to Windows XP compared to both Win 8 and Win 10. If Windows was as predictable as you said, it wouldn’t have taken us years to get rid of XP.
No, I’m suggesting your ‘you know what you’re getting’ argument against Linux can be completely turned around and applied to Windows as well. I’m suggesting you’re using a double standard when comparing both systems.
I haven’t made any argument against Linux, I’ve simply shared my own personal experience, what I read (daily) on multiple Linux dev mailing lists, and various forums. I’m neither pro-<some os> or anti-<some other os>. I use the 3 main ones; 2 by choice, the 3rd only because I have to at work. It just so happens that I hear a lot more complaints about Linux desktops breaking because something was updated than I do Windows desktops. It is what it is.
Fair enough, my apologies as I mistook your post for applying double standards.
My experience is that I’ve always run into trouble when upgrading* desktops no matter what OS. As for regular updates: on Ubuntu they tend to break something more often (though there simply are a lot more updates for all software) but it’s usually smaller stuff which is still very annoying. On Windows (7) I’ve had some severe issues that ran into hanging updates and update loops, and I definitely didn’t know what I was getting there.
(*) If I didn’t make it clear enough: with ‘upgrading’ I meant a major new distro/windows version, while ‘updating’ is the usual patching.
That’s because Windows users have no say in what goes into the operating system. They have to take whatever is prescribed by Microsoft. It also shows a lack of enthusiasm from users. That’s because Windows improvements mainly benefit Microsoft, the for-profit enterprise behind the software. It’s not a community project like open-source ones, which benefit the whole community and where, if you want something done, you have to speak up or roll up your sleeves and contribute.
What makes you think Joe User wants a say in how the OS is designed? Lack of enthusiasm? Sure, that’s one way to describe it. Lack of interest probably does so better. Most people are neither programmers nor designers. They don’t want to worry about it, or about contributing, and will gladly let someone else handle that. Windows users will make their voice heard if something gets too far off track. Notice how the Start button was removed, but then put back? That was a direct result of Windows user “enthusiasm”.
Also, community-driven open-source projects like Linux may sound good on paper, but I can’t agree that all the problems they bring into the picture are a benefit to the whole community. Progress is either forced by one person’s or one group of devs’ vision, or it’s stalled because of stubbornness, posturing, and lack of true `community` effort. In reality, Linux development is fractured and fragmented to hell. If you think that’s any more ideal or beneficial to users, you’ve got some screws loose.
Except most of the people who bitch are end users who will never contribute and barely know what they are talking about.
There seems to be a trend to isolate applications from the OS a bit more, both in the server world (containers & such) and in end-user apps (e.g. xdg-app sandboxing).
If the sandbox environment can provide the right subset of libraries/tools to run your app on, perhaps in future it matters less what the base OS file layout etc looks like.
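A quick, hedged example of what that looks like in practice (image name hypothetical): the container image carries its own /usr, /etc and libraries, so the host’s layout only matters for whatever you explicitly map in:

docker run --rm -v "$PWD/data:/data" example/myapp:1.0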
Informative link missing:
https://lwn.net/Articles/658809/
As opposed to:
C:\Windows
Which amounts to random shit being thrown into random locations with no reasoning other than “we’re too lazy to actually clean up the mess and have a logical directory layout”? When compared to the abomination that is Windows, the *NIX layout makes a hell of a lot more sense, especially coming from an OS X user who would sooner deal with what we have today than with the mess of the Windows world.
I think you should scroll up a bit and read the blurb again before trying to falsely imply I like the Windows one.
That being said, pointing fingers and saying “but but but that one sucks too!” is not a solid argument to keep smelly crap lingering around.
There’s a reason we don’t use punch cards anymore.
We develop a cross-platform computational biology package, and most of our users are not very technically literate, so it would be hopeless to expect them to compile their own packages. So, we provide binaries.
We build binaries on OS X 10.6; they work perfectly on all versions 10.6 and above, zero problems. On Windows, we build on 64-bit WinXP, and again, zero problems at all.
Now, Linux – here the nightmare begins. Hundreds of freaking completely incompatible distributions, nothing is the same in any distro, so basically we have to have around 40 different virtual machines to build the Linux binaries. And of course, nothing built in one is binary compatible with anything else. What a complete and utter joke.
RedHat and, to an extent, SuSE are literally the only ones in the Linux scene that take binary compatibility seriously, and take system stability and compatibility seriously. RHEL is the only Linux where we don’t have problems.
Try to build something on Ubuntu 12.04 and it won’t work on any newer Ubuntu. Then of course there is the “rolling release” crap like Arch, where you build something one week and it stops working a week later because they snuck in some incompatible binary change.
I’ve been writing Unix SW for a long time, started with AIX about 25 years ago, and I’ve never had these kinds of incompatibility problems as seen in Linux.
What you say is so true I’m gonna cry. I’ve been ranting about this Linux mess for at least 15 years… nobody cares!! It’s incredible.
Exactly the same problem you describe with apps also happens with closed-source drivers/modules, and it’s much more problematic because you usually end up with an unbootable system.
I was tired of solving module issues in critical Linux HA clusters running Veritas Volume Manager/Cluster. Every time the Linux kernel was patched/updated by the security team, the Veritas modules stopped loading, so you had to recompile the module… incredible but real, and the same happens with FC HBA drivers too and any other closed-source 3rd-party module. You always have some kind of issue.
I think the main problem with Linux (kernel and distros) is that a lot of technical decisions are taken by people who use the OS for fun or research but not for business-critical servers… as you said, the only enterprise-ready distro is RHEL, and it’s miles behind Solaris or AIX in stability. They are supposed to be rock solid… but they are not; they change things all the time, they break things, and they don’t care. It’s your problem.
That Linux “amateurism” is the reason why I refuse to recommend/run Linux on bare-metal systems anymore.
If you want to run Linux, great: use ESX and run RHEL in a VM (and keep a snapshot ready every time you update or install something). Linux distros are not serious enough to take care of driver updates and/or software updates; they fuck things up 50% of the time. Sad but true.
I think RHEL seems to be the only one out there that takes kernel binary compatibility seriously. They make sure kernel updates don’t break any ABIs, and I don’t think I’ve ever heard of a RHEL update breaking a binary driver.
The other ones, forget it.
You are opening yourself up to the standard criticisms, like: you don’t really know Linux well; all it takes is a little conditional compilation for cross-release compatibility; and the catch-all “just read the sources”.
Did you try static linking, patchelf, statifier, or ermine?
What you are doing is kind of crazy to my eyes. Granted, for kernel drivers things are a little more complicated; for the others, the tools above can deal with most cases.
You may also take a look at what Mozilla and LibreOffice do, as they distribute huge binary GUI apps, even though most of us stick with the distro-compiled version.
Static linking is an ugly, stupid workaround that should not be needed in a properly designed OS; dynamically linked libraries can be updated independently of the application, and keeping them up to date is the job of the OS, but with statically linked libraries the supplier of the application also has to constantly keep an eye on any and all statically linked libraries the app uses for security patches and then recompile the thing and distribute yet another patch — it’s a lot of unnecessary extra work that eats away at the time that could instead be used on actually improving the app itself.
Static linking is not a stupid workaround; it is a very valid solution for cases where your application depends on old libraries (like qt3, gtk2, etc.) and where the cost associated with migration is not worth the trouble. It is also a valid solution for libraries that are not usually packaged by distros. Granted, for parts of the code that have to deal with network interaction, or where the library used is known to be lousy, it is worrisome to keep relying on it, and any developer worth their salt should update to current versions or patch its flaws.
Also, static linking is not "all or nothing": you can use it for the cases cited above and the preferred dynamic linking for everything else.
The main problem static linking brings developers is usually the need to generate static libraries, which most distros don't provide. For those cases you can pick the other tools I mentioned.
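Something like this minimal sketch (the names "myapp" and "libfoo" are made up, and you need a libfoo.a on the link path): only the troublesome library is pulled in statically, while the rest of the link stays dynamic.

gcc -o myapp main.o \
    -Wl,-Bstatic -lfoo \
    -Wl,-Bdynamic -lm -lpthread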
Yes, we could require every single developer wanting to target Linux to deal with all that complexity, workarounds and hacks.
Or we could fix the cause of the problems in a central place once and for all.
Would you please enlighten us all as to how this can be accomplished, taking into account that we have many distros and each of them has many versions?
Dynamic linking on Linux (and the BSDs) encompasses a dependency chain of libraries. Even though there are proposals to handle it better inside a distro version, one solution for all distros/versions seems very unlikely.
Please don't say "Windows fixed it", as it is more like the RHEL case, and the cost associated with keeping the whole ABI stable is far from cheap (or enticing from the developer's POV). Also, you must know that big apps do distribute old versions of DLLs with them (and store them outside the system directory in that case).
Also, again, take a look at the apps I mentioned; it will give you a good idea of where the problem really lies and which mitigation methods are available.
As I said, Mozilla, LibreOffice and many other projects that distribute binaries handle it acceptably.
I’ve been compiling my own software for Linux for almost 20 years. I think I have a pretty solid idea of what the problems with Linux are.
dpJudas,
I agree. I think installing packages onto the system should be as simple as extracting a package, without the need to mix everything into the huge "FHS blender". The OS could offer facilities for automatically building and maintaining system indexes of package resources. So instead of executable files being moved into /bin, /sbin, /usr/bin, etc., they could remain associated with the package they belong to. This change alone solves many of my gripes.
Instead of:
-rwxr-xr-x 1 root root 5764 Sep 26 2014 /bin/I-dont-know-what-this-goes-to
we could get something like this:
lrwxrwxrwx 1 root root 52 Sep 26 2014 /bin/I-dont-know-what-this-goes-to -> /pkg/oh-i-remember-1.2/I-dont-know-what-this-goes-to
This would also permit us to easily install multiple versions of packages if we need.
Installing a package would be trivial:
cd /pkg/ && tar -xf ~/nifty.tgz && update-indexes
Removing a package would be trivial:
rm -fr /pkg/nifty && update-indexes
The “need” for historical quirks such as separating /bin, /usr/bin, /usr/share/bin, etc for different OS modes and/or different partitions and/or different methods of installation goes away completely. Using symlink indexes actually provides administrators with even better control of where things can be installed.
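A hypothetical "update-indexes" could be as dumb as rebuilding a symlink farm (a sketch only; it assumes each package keeps its executables in a bin/ subdirectory, and -xtype/-delete are GNU find options):

#!/bin/sh
set -e
find /bin -xtype l -delete            # drop links whose package was removed
for exe in /pkg/*/bin/*; do
    [ -x "$exe" ] || continue
    ln -sf "$exe" "/bin/$(basename "$exe")"
done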
There was a time when DOS and Windows apps worked this way (although not any longer, due to the registry and DLL hell). This particular aspect was tremendously appealing: installing, backing up and restoring these applications was absolutely trivial in ways that FHS simply can't match.
Obviously modern software is plagued by the dependency issues responsible for "DLL Hell". But I think novel solutions on Linux could solve these problems in more straightforward ways without being forced to depend on a repository to mask them.
Windows C/C++ apps can still function in this way. All you have to do is place all the DLLs next to the executable in the program files folder. There really are no good reasons to install DLLs anywhere else after they changed the DLL search order (can't remember which version of Windows did this – it was either Vista or Windows XP).
The main reason you don't see this work is because Microsoft still provides a Visual C++ runtime redistributable installer and some devs just add it to their MSI. If those applications added the msvcrtXX DLLs to their program files folder instead, you would not need an installer.
dpJudas,
I make sure my software supports running and backing up this way, but it’s not the industry practice that it used to be. Installers need to get backed up too since lots of software backups today can’t be restored onto a clean system.
Or, perhaps more what your OP intended:
Installers/bundles could come supplied with both their own core components and any linked library files (and only if the system determines it already has the EXACT same file does the installer/maintenance system delete said file from the installed tree and insert a symlink instead; any remaining libraries would be left in place as *required variants*)?
Unless I got confused.
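Roughly, in shell terms (a sketch only, with made-up paths: "nifty" installs under /opt/nifty and bundles its libraries in its own lib/ directory):

#!/bin/sh
APP=/opt/nifty
for lib in "$APP"/lib/*.so*; do
    sys="/usr/lib/$(basename "$lib")"
    if [ -f "$sys" ] && cmp -s "$lib" "$sys"; then
        ln -sf "$sys" "$lib"          # byte-for-byte identical: reuse the system copy
    fi                                # otherwise keep the bundled "required variant"
done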
mistersoft,
That’s certainly one approach that can be done.
Today's repositories handle dependencies, but a major shortcoming for those of us who have to work outside the repos is that we have no standard facilities for managing our dependencies. Doing it manually is one of the least pleasant aspects of software development. It's solvable, using something like the manifest file I alluded to earlier, but for it to work all developers would have to use it.
This has huge potential, like perhaps git or svn helpers that automatically fetch not only the main project but also its dependencies. I'm sure this alone would save millions of hours of manual dependency resolution (./configure -> install missing package, ./configure -> install missing package, ./configure -> repo version too old, download & compile new dependency, repeat until done, or give up…).
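As a sketch only (the "deps.manifest" format is invented here, and version checking is omitted): each non-comment line lists <name> <min-version> <git-url>, and a dumb helper clones whatever isn't already present.

#!/bin/sh
set -e
mkdir -p deps
grep -Ev '^[[:space:]]*(#|$)' deps.manifest | while read -r name ver url; do
    [ -d "deps/$name" ] || git clone "$url" "deps/$name"
done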
This app does not depend on old libs; it uses Qt 4.x and some other modern libs.
Problem is, every distro, and even every version of each distro, builds Qt differently and builds all the other libs differently; it is an unmitigated nightmare. Basically, the only solution is to have a VM for each and every possible distro and version out there.
That's just utterly insane.
It is beyond belief how stupid this madness is, and with this kind of unprofessional hacker attitude of "I don't care about maintaining any compatibility", Linux will continue to be a joke. Well, all Linux with the exception of, say, RHEL, CentOS or SuSE.
So we are actually looking at no longer supporting all these amateur-hour distros, and only providing binaries for RHEL, since our main build guy is leaving and he spent most of his time doing builds for all these distros.
So why don't you just bundle the libraries or link them statically, like on Windows/Android/iOS? An RPM for RHEL and a generic installer for "all" other distros. I'm not sure why this is a big problem on Linux but OK on Windows. I mean, nobody forces you to use those differently built libraries from the package archives.
However, I do agree that packaging for Linux sucks. Canonical is trying to solve some problems with Click/Snappy packages, which is great.
That is why I suggested you take a look at the options I listed.
In my case, the problem was not how qt4 was built (and this also happened with other toolkits) but the math libraries I was integrating. No distro had the same libraries installed with the same dependency chain (tree, actually) I needed, and that was the main problem. I ended up using a mixture of static and dynamic linking, plus a shell wrapper with LD_LIBRARY_PATH set to point at where the bundled libraries were stored (as well as the main runtime). You will need to modify the RPATH inside the dynamic libraries you bundle (take a look at chrpath and patchelf; they are usually not installed by default, even when you install the developer tools).
Of course, you will have to track the libraries' dependency chain and decide what can be problematic. That is a critical point (one that may need trial and error). Anyway, you may opt to bundle almost all dynamic modules (see the last paragraph).
No more problems (or almost none) with libs or multiple versions.
As you may probably guess, I ended up using more memory and more disk space, but that is not really a big problem these days, is it?
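For the curious, the wrapper boils down to something like this (a sketch with made-up names; readlink -f is GNU coreutils):

#!/bin/sh
HERE=$(dirname "$(readlink -f "$0")")
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/myapp" "$@"

And at packaging time the bundled libraries can be pointed at each other with either of (again, the library name is hypothetical):

patchelf --set-rpath '$ORIGIN' lib/libmymath.so
chrpath -r '$ORIGIN' lib/libmymath.so    # chrpath can only replace an existing RPATH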
I've had problems with distributions shipping too-old versions of Qt even when targeting only the Ubuntu world. So usually bundling Qt with the application is the only option anyway. In my opinion, .debs and .rpms and the whole dependency-aware packaging paradigm just don't work for application developers. For the base system itself they are great.
I'm no expert on Unix or Linux, but it seems the experience with either one really needs a target standard, say around 2020. Moving toward commonality in the location of all binary executables, libraries, and user-dependent settings under each package's installation folder would help admins out significantly. Let the application developer create his or her own creative monster for the installation, however they deem suitable, under the application's install folder. Storage limitations for the installation should be pretty much nonexistent in this day and age. Data should be in the home folder under the appropriate user, a share folder, or an application-specific folder. It would make recovery so much simpler than the scattered hell we face today.
MadRat,
Some distros have quite a large head start in that direction; if you haven't looked already, take a look at GoboLinux…
http://gobolinux.org/index.php?page=documentation
If a major distro were to pick it up, adoption would move rather quickly. But I don’t find it very likely.
Alfman,
Take a look at the comment at http://www.osnews.com/permalink?619160.
The main problem was never really where things are located but the chain (tree, really) of dependencies and ABI breakage in newer library versions (unless I got things completely upside down, which is a possibility, of course).
I also pointed out what I (and many others) use as a workaround. A smarter loader (with, perhaps, a better ELF format) would help improve the situation, but I doubt it would fix it completely, and the cost would probably not be worth it, I think. I will take a closer look at it anyway.
acobar,
If it's an ABI breakage, a smarter loader won't help. If the loader can't find libraries, that's a significant problem that should get fixed. Maybe there are problems internal to the library? I don't really know.
It's not really a Qt problem, but more that there are so many different options for building Qt, and every distro builds it differently, with a completely different dependency tree.
The real problem is that on Windows it is just agreed that the Win API and the .NET API are standard; they are there, and you just use them.
Same with Cocoa on OSX.
But Qt/GTK are sort of, but not really system libraries. These libs have so many different config/build options that each distro builds them differently.
What's even worse is the apocalyptically idiotic Xerces parser, where you can define macros to build it with or without namespaces; some distros build with, some without, so it's impossible to use as a shared lib. That's why we stopped using it and now just use plain old libxml, as it just works and has a standard API.
Nobody in Linux land can seem to agree on what exactly constitutes a system library.
RHEL/CentOS/SuSE are the only ones who seem to take this seriously, and I can see how it costs them a HUGE amount of money to constantly test all this stuff to make sure nothing from the herd of cats that is Linux devs breaks anything. And I can certainly see why corporations have no problem paying them to maintain compatibility.
Spot on. That is what I was trying to tell people.
Even ABI is sometimes not the kind of problem people think it is. You can have the same ABI from a Qt library, but it may depend on third-party libraries on distro A that were not used on distro B because their use is optional, so you end up with a missing symbol.
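You can see that kind of breakage quickly on the target distro with something like this (a small illustration; "./myapp" is a stand-in for whatever binary was shipped):

ldd ./myapp | grep 'not found'    # shared libraries the loader cannot locate
ldd -r ./myapp                    # glibc's ldd: also reports unresolved symbols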
I did not see people discussing it here, but the whole point of the LSB was really to guarantee a common ABI. To achieve it, a lot of things in the build process (linker and compiler flags, library dependencies and so on) have to be agreed on. After a lot of discussion it never really took off enough to be effective. It also means a lot of work for maintainers. I guess, because of both things, Debian is going to abandon it.
MacMan,
Maybe you are right, but these are very abstract assertions; can you give an example of something that works on, say, Red Hat but not on Ubuntu? At least that way we can be on the same page.
A lot of IT types are getting on Thom's back about the filesystem directories. While the current system works for IT, I think it's a good idea to aspire to regular users being able to understand and maintain their system.
Surely we can admit the possibility of something better than /bin/, etc.
I'd settle for the mean or modal user being able to understand it. Windows initially had some sanity; now it looks just as screwy.
You'd think that someone could write a package recompile program to search out dependencies within source code and simply reconstruct packages with new binaries compatible with whichever distribution matrix is defined by the system. Every distribution would benefit.
MadRat,
I researched something like this for my distro, but there are numerous complications.
1. Most software does not come with any dependency metadata from its author (because there's no standard for doing so); it shows symptoms like missing .h files when we try to compile it. If we're lucky, they've written this in a readme file. If not, we may have to search the internet and guess where this .h file may have come from.
2. It's the developer's responsibility to keep the ABI backwards compatible, or at least rename the shared library when it breaks. If they don't, the linker will happily link incompatible ABIs because it has no idea the developer did this.
3. Not all software uses C/.h/ELF files; many useful packages use something else even if we don't realize it (e.g. lm-sensors). That something else (e.g. Perl) will have its own dependencies. A generic solution needs to be able to handle this.
4. Some software contains multiple versions of code, so a naive "scanner" might raise false positives for dependencies that are not needed. For example, it might need pulseaudio OR alsa, but not both.
5. Binary headers show static dependencies, and my own distro checks all of them automatically. But some software uses dlopen for technical reasons; that doesn't mean there's not a dependency.
6. Version incompatibilities can happen if two projects don't use the same update cycles. The latest version of one package may not work with the latest version of another, which makes dependency resolution all the more difficult.
7. It might be useful to grab the dependency metadata from an existing repo, but that's frequently a couple of years out of date (between stable distros and fast-changing software like GNU Radio). Unless we want to be pegged to the same versions as used in the repos, distro metadata is too old.
I've said it before: I think solving this effectively would make Linux development and distro management so much better. Because of the complexity of solving this generically without metadata, I think it comes down to all package authors adding standardized metadata to their software that tells users (or rather our automated tools) what they need outside of the package to get it to work.
It would make a huge difference for anyone who's been faced with manually resolving dependencies outside of their repo tree. But not only that, it would help repo builders themselves out a great deal!
In fact, once this is in place, new tools could be built to generate the metadata automatically at compile time (thereby eliminating most human error). If the developer updates his system with new libraries, his project’s metadata could reflect the new dependencies automatically without devs needing to remember to update it.
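As a starting point, a hypothetical post-build step could seed such metadata from what the binary actually links against (a sketch; "./myapp" is a stand-in, and dlopen'd or script-level dependencies would still need manual entries):

#!/bin/sh
objdump -p ./myapp | awk '/NEEDED/ {print $2}' > needed-libs.txt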