Linux news is getting more and more exciting, and somehow managing to get less and less interesting. Why? Because development is moving so rapidly that it’s hard to get excited about each upcoming release, keep your system completely up to date, or even remember what the current versions of your favorite distributions are. This breakneck pace of development means good and bad things, but I have a few ideas about how I expect it to turn out.
The opinions in this piece are those of the author and not necessarily those of osnews.com
There are literally hundreds, if not thousands of distributions out there. In fact, with Knoppix, almost anyone can make his own. Each season, it seems we watch some distributions fold and others form. It’s getting harder and harder to tell them apart. Think you’re an expert? Answer these questions quickly:
According to a recent post on Distrowatch.com, “It is time to face the facts: the number of Linux distributions is growing at an alarming rate. On average, around 3 – 4 new distributions are submitted to this site every week, a fact that makes maintaining the individual pages and monitoring new releases increasingly time consuming. The DistroWatch database now lists a total of 205 Linux distributions (of which 24 have been officially discontinued) with 75 more on the waiting list. It is no longer easy to keep up.” Distributions change often, as does the popularity of each. Keeping up is almost impossible. Many Linux users install new distributions every few days, weeks, or months. Sadly, many of these folks keep a Windows installation – not because they prefer Windows, but because it’s a “safe haven” for their data which can’t find a permanent home on any given Linux distribution. Can this pace continue? I say no.
Predicting the future is always risky for an author, especially one who contributes to internet sites, where your words are instantly accessible to the curious. But I’m going to put my money on the table and take some guesses about the future of Linux. Here, in no particular order, are six theories that I believe are inevitabilities. Keep in mind that although I’ve been liberal in tone, nearly everything in this piece is speculation or opinion and is subject to debate. Not all of these theories are entirely original thoughts, but all of the arguments are my own.
1) Major Linux distributions will collapse into a small, powerful group.
“Major players” in the Linux market, until recently, included Red Hat, SuSE, Mandrake, Debian, and Slackware. Some would argue for more or fewer, but now a number of popular distros are making inroads into the community – Xandros, LindowsOS, and Gentoo, to name a few. Another fringe, including Yoper, ELX, and TurboLinux, is making a play for corporate desktops. I’m coining a new term for this era of Linux computing: distribution bloat. We have hundreds of groups offering us what are essentially minor tweaks and optimizations of a very similar base. This cannot continue at this pace. From this point on, there will be a growing number of Linux installation packages as people become more skilled, but there will be fewer distributions on a mass scale as commercial Linux stabilizes.
I think we’ll see the commercial Linux market boil down to two or three players, and this has already begun. I expect it to be a Ximian-ized Novell/SUSE distribution, Red Hat, and some sort of Debian offshoot – whether it’s User Linux or not remains to be seen. Sun’s Linux offering, Java Desktop System, will be deployed in Solaris committed companies and not much more.
2) Neither KDE nor Gnome will “win;” a third desktop environment will emerge.
The KDE/Gnome debate is a troll’s dream come true. People are often passionate about their desktop environment. I believe both have strengths and weaknesses. However, a third DE, with a clean and usable base, will emerge in time, its sole mission to unify the Linux GUI. Only when there is true consistency in the look and feel of the desktop, or something close to it, will Linux become a viable home OS for the average user. Currently, we see this consistency forged by common Qt and GTK themes, and by offerings like Ximian Desktop, which attempt to mask the different nature of each application. This is not about lack of choice – it is about not allowing choice to supersede the usability of the whole product.
Features that a desktop must include are obvious by now: cut & paste must work the same way throughout the OS, menus must be the same in all file manager windows, the same shortcut keys must apply in all applications, and all applications must have the same window borders. Many seemingly basic tasks like these haven’t entirely matured or, in some cases, been accomplished at all yet.
In any event, the DE’s importance will lessen once greater platform neutrality exists. This will doubtless cause many to argue that I am wrong – admittedly, it’s a tall order, especially with Gnome and KDE so established and accomplishing so much. I maintain that unless there is some sort of merging – not a set of standards like freedesktop.org, but rather a common base for development – there will be a fragmented feel to Linux that simply doesn’t exist in Windows today.
3) Distribution optimization will become more prevalent
Most distributions today can be used for anything – a desktop system, a web server, a file server, a firewall, a DNS server, etc. I am of the firm belief that Windows’ greatest downfall on the server is that it has been a glorified desktop for too long. File extensions are still hidden by default, you’re forced to run a GUI, and you can still run all your desktop applications on the system. I predict that we’ll start to see flavors within distributions tweaked at the source level for optimization. Systems made to run as a desktop will have many default options that differ from their server-optimized counterparts.
4) Integration will force the ultimate “killer app”
I predict an open, central authentication system will take the Linux world by storm. There still isn’t a Linux counterpart to NDS/eDirectory or Active Directory that makes user management across the network as simple as either of the two. While eDirectory will run on Linux, there is no open standard with a GUI management tool that automates this mass management. An authentication service whose sole job is to watch resources – users, devices, applications, and files – doesn’t exist and can’t be built without serious Linux know-how. This service, which I’ll casually refer to as LCAS (Linux Central Authentication System) for lack of a better term, will be as easy to establish as a new Microsoft domain’s Active Directory.
LCAS will operate using completely open standards (X.500/LDAP) and will be easily ported to the BSDs and to commercial Unixes. Unlike Active Directory, LCAS services will be portable, with data stored in a variety of databases, including PostgreSQL, MySQL, and even Oracle and DB2. LCAS, like Linux, will be pluggable, so that as it matures, management of other objects – routers and switches, your firewall, even workstations and PDAs, and eventually general network and local policies – will be controllable from your network LCAS installation. Perhaps, in time, it will also manage objects on the internet and how they can act within your network. I envision blocking, say, a particularly annoying application’s HTTP traffic, allowing certain users to use specified protocols, or installing internet printers via LCAS.
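To make the idea concrete, here is a toy sketch of what an LCAS-style lookup might feel like: an in-memory directory keyed by LDAP-like distinguished names, searchable with equality filters. The class, schema, attribute names, and entries are all invented for illustration; a real implementation would sit on top of an actual LDAP server.

```python
# Hypothetical sketch of an LCAS-style directory. Entries are keyed by
# LDAP-like distinguished names (DNs) and searched by attribute match,
# analogous to an LDAP equality filter such as (objectClass=user).

class Directory:
    def __init__(self):
        self.entries = {}  # dn -> dict of attributes

    def add(self, dn, **attrs):
        self.entries[dn] = attrs

    def search(self, **filters):
        # Return DNs whose attributes match every (key, value) pair.
        return [dn for dn, attrs in self.entries.items()
                if all(attrs.get(k) == v for k, v in filters.items())]

lcas = Directory()
lcas.add("uid=asmith,ou=users,dc=example,dc=org",
         objectClass="user", uid="asmith", dept="engineering")
lcas.add("cn=laser1,ou=printers,dc=example,dc=org",
         objectClass="printer", location="floor2")

# "Which engineering users exist on the network?"
engineers = lcas.search(objectClass="user", dept="engineering")
```

The point of the sketch is the uniform view: users, printers, and any other resource live in one tree and answer the same kind of query.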
5) Releases will become less frequent, and updates more common
There is a competition for versioning in the Linux world, as though higher version numbers are somehow “better.” Version inflation is commonplace, with companies incrementing the major version for minor overall updates, and going from X.1 to (X+1) after a few application updates and a minor kernel increase. There is also a trend of eventually abandoning a version number, once it gets too high, in favor of less harsh-sounding names. No one would upgrade Windows every six months, so why upgrade Linux every six months? Because the software gets better too quickly! And the best way to get new software that works is to upgrade the whole distro! This is backward. The software should be incidental to the distro, not the reason for its version stamp.
Gentoo Linux just changed its release engineering guidelines to include a year number with subsequent point releases. This, I think, is the right idea. I predict that we’ll start to see releases like DistroX 2004 and DistroX 2005. As a counterpart, we’ll begin to see downloadable updates like service packs, like DistroX 2004 Update2. These updates will be easily installable and will update and patch not only the OS, but all components that came with the distro.
It is not unlikely that we’ll see a front end installer that launches, checks your system and the server, asks which pieces you want upgraded, and then processes it. There are systems like this in place today, however, they are constantly updated. Too often, people don’t patch or update, they just reinstall. We’re going to see only security updates for each distro, and approximately quarterly, we’ll see an official Update. Updates distributed in this fashion are much more likely to be applied by a common user than the slew of updates issued on an almost daily basis. Updates like this allow users to utilize a modern system much longer in between releases – for years in some cases. Unless OpenCarpet catches on, I see a service pack mentality prevailing for all commercial distributions.
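The front-end installer described above boils down to one operation: compare the installed versions against a server-side manifest and present only the pieces that are eligible for upgrade. A minimal sketch of that comparison, with package names and version numbers invented for illustration:

```python
# Hypothetical sketch of the update front end's core check: which
# packages does the server offer in a newer version than installed?

def parse_version(v):
    # Split "2.4.1" into (2, 4, 1) so versions compare numerically
    # (tuple comparison is element-wise, left to right).
    return tuple(int(part) for part in v.split("."))

def pending_updates(installed, available):
    # Return the subset of the server manifest that is newer locally.
    return {pkg: ver for pkg, ver in available.items()
            if pkg in installed
            and parse_version(ver) > parse_version(installed[pkg])}

installed = {"kernel": "2.4.22", "glibc": "2.3.2", "xfree86": "4.3.0"}
available = {"kernel": "2.4.23", "glibc": "2.3.2", "xfree86": "4.3.1"}

updates = pending_updates(installed, available)
```

From there, the front end would simply ask the user which of the pending items to fetch; everything already up to date (glibc, above) never appears in the list.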
6) Linux-approved hardware will become common
Part of the fight for useable Linux is with device drivers and hardware. Getting your video card to work properly, even with a binary driver available, is still way too hard. While this isn’t always the fault of the hardware, we will see, in time, Linux approved hardware. The hardware will include Linux drivers on the accompanying disk. There will be a certification process that tests hardware against a certain set of standards. Soon, a Tux badge on a PC case will be as commonplace as the “Built for Windows XX” stickers on most cases today.
I don’t claim to be visionary by any means. I also don’t want to force spirituality into the mix, but I believe all things exist in waves, with highs and lows. Linux started small, it has gained an audience, and as it swells, we, the community, should anticipate the eventual reshaping of things. The downswing shouldn’t be an implosion, but rather an opportunity to organize and streamline the existence of free software. It doesn’t have to be a reduction in use; it can simply be cooperation, a reduction in market saturation, and convergence toward standards.
Within the next two years, we’ll likely see Linux kernel 2.8, Gnome 3, and KDE 4. We’ll see exciting new projects. We’ll see many new Linux distributions and many existing ones disappear. We’ll see the pre-Microsoft Longhorn media blitz. And I bet, not too much longer than that, we’ll see some of the above start to become a reality as well.
Adam Scheinberg is a regular contributor to osnews.
The author seems to me to be reactionary, not visionary. Windows not fragmented? What? Anyone supporting small, medium, or large companies may have a mix of five or more Windows versions, not to mention the service packs, which often break functionality while fixing bugs. Each version requires a different base of drivers, which in Windows are not distributed with the OS but are managed separately. Applications typically do their own installation using their own rationalizations about shared libraries, and so shared library version management results in DLL hell. Some companies must run several versions of VB apps that each require a different VB runtime to run stably, and must make sure that no other software is installed that would overwrite a core file. XP makes some of these DLL issues better, and some worse. I am harping on ONE point here, but let me just say that if this one assumption of the author’s (that Windows has uniform APIs) goes unchallenged, then the rest of his misleading assumptions can go unquestioned as well.
The term “Windows API” and the “consistency of an OS” can no longer be taken to mean merely the mostly unchanged parameters and semantics of WinCreateWindow; they must be considered as a complete technical ecosystem. Microsoft has realized that its entire architecture is sinking, and is boldly attempting a rescue by pushing “dot Net” as the way to support a new component-based development model, with built-in versioning of interfaces and many other improvements.
But Dot-net is more like a new operating system – a new VM at least – on top of Windows than it is like what it replaces or overlays. Now we have the fragmentation of native apps versus Java VM apps versus Dot Net VM apps, all of which are going to be a little disconnected from each other. You’re eventually going to see different dialog boxes, different limitations, and different ideas of what the “filesystem” looks like as you move along. This started in 1995, when Windows 95 moved the “root” of the filesystem from the list of drives (“A:”, “C:”, …) to “C:\Windows\Desktop”. The mishmash and confusion between 3.x dialogs and 9x dialogs has now abated, but the same thing is going to happen again as MS moves toward a more network-oriented view of things. Confusion and disorientation are necessary in order for anyone (Microsoft or Linux) to make progress.
Wake up and smell the need to change your mind, instead of telling the Linux community that it needs to reimagine its desktop architectures and write a new “third DE”. There is no need for one. The only way ONE desktop would take over is if the other began to collapse and die, or fragmented. Having two full-featured desktops is just part of the free software world’s reality. They will grow more integrated and interoperable as necessary, but they won’t just go away because the idea bothers someone. This ain’t Windows; there isn’t one person at the top telling us how to do things. And that’s a good thing.
WPostma
Funny thing about Linux is that it’s getting Windows-like all the way. Big companies are already sticking proprietary pieces of code into their distros, which eventually leads to proprietary Linux. Not the whole thing, but fair parts of it.
As I see it, it’s wise to stick with Debian and Sun’s idea of a desktop. Competition is shit, ain’t it. Who can say that KDE will never become “un-bloated”? Settings are KDE’s strongest part. GNOME could become more “user”.
The most important thing is that it’s happening. We’ll wait and see.
I disagree. Competition between Gnome and KDE is not a requirement. Through most of the history of the two DEs, KDE has been ahead in feature set and integration, and Gnome HAS been beneficially driven to match KDE or produce equivalents. Lately, perhaps, this dynamic has become more two-way, but competition is not a prerequisite for a good KDE or Gnome.

There is no real competition for the linux kernel, yet the kernel slogs along, getting better and better all the time. It is not competition so much as need and ego. There is need, in the developers or the users, for some feature(s) from the kernel. Or the developer uses linux and finds problems with the current kernel in regards to some activity. He/she seeks to fix this problem, or provide some feature. No competition needed.

Then there is ego. It is the pat on the back and community praise one gets (as well as something nice to put on a CV or job application) when one comes up with some nifty addition or fix. Ego gets stroked and/or one gets a new job.

Where’s the competition for XFree86? I mean REALISTICALLY. Where’s the realistic competition for the linux kernel? Don’t say “gnuhurd” because it is nowhere yet and it is simply a fact that Linus and other kernel developers are not sitting around trying to come up with ways to best gnuhurd. Just isn’t on their radar. And yet the kernel gains in leaps and bounds totally without competition from another kernel.

We would all benefit if Gnome and KDE joined together at some level, be it at a mere API level, feature set level, etc. I like and use KDE. Gnome looks nice now, as of 1.2, but it just isn’t for me. I have used and will continue to use KDE as my primary DE. All I want is for all my GUI apps to behave and look consistent with my KDE. I would expect the same thing if I were to switch to Gnome. I choose my themes, colors, and window decorations for a reason. They are right for me. I expect all my apps to obey my desires and not the look/feel preference of someone else.
> While it seems like you know what you’re saying, it doesn’t seem all that different from what I was saying.
In my world, “seeming to know what you’re saying” is often an excellent indicator of actually knowing what you’re saying.
>I love how people who seem to have intelligent things to say virtually always ruin it by inciting negativity with their tone
Regretfully, it is difficult to maintain a positive tone when someone so mischaracterizes something, as you did with the Mozilla –> Firebird evolution, despite your subsequent claim to the contrary. To restate: (1) Firebird is the official migration path for Mozilla users; it is, in effect, the beta version of the future Mozilla. This best explains the rapid adoption and willingness to test on the part of existing Mozilla users. (2) Corporate (and new personal) adoption is best explained by the removal of the email requirement, as described, not by some mystical “third way” effect.
> In the meantime, I still don’t buy what you’re saying.
I’m not sure it’s possible to not “buy” what I’m saying. Most of what I did say, other than attributing motivation to individuals or corporations that I don’t know, is taken directly from information gathered over a couple of years of tracking this phenomenon on the relevant lists and so forth, and substantially from the detailed statement issued by the Mozilla foundation regarding the Mozilla –> Firebird evolution. That is, they are not opinions but observations.
In any case, my real goal was to avoid dealing with what I believe are your incorrect predictions regarding the future of Linux distributions, which would take too long to fully explain. Having followed this issue for quite some time, I am in a position to know that you are only the latest in a long line of prognosticators making this very same prediction, stretching back quite a number of years. Interestingly, your particular instance of this prediction comes at a point in time at which you acknowledge an “explosion” of distributions. I believe that there will eventually form a sort of “rabbinical school” oriented around variations of two leading Linux prognostications: (1) a massive extinction of Linux distributions; and (2) the emergence of a “single Linux” desktop (of which “KDE/Gnome will die,” “KDE/Gnome will merge,” etc. are variants). Within this community of prognosticators, the wise men will toil ceaselessly researching past claims in order to come up with new, slightly altered forms.
What I’m getting at is that your predictions show a failure to grasp the motivation and dynamics surrounding the creation and death of Linux distros. If you’re interested, I could give my opinions. I do believe there are interesting things to be said/written about activity in the Linux distro space – for example, the explosion in the use of Debian and Debian derivatives, or the emergence of Gentoo as a comparatively easy-to-use source-based distro – but, please, can we stop with the “99% of all Linux distros are doomed” pieces.
Peter Yellman
0) Yes, there are many distributions, and yes, the number of distributions seems to be increasing. I think that this is, in some ways, a good thing, for several reasons.
a) It allows specialization. Knoppix is great for showing linux to people who have never seen it, seeing if hardware works under linux without doing research/reading, as a rescue disk….
Likewise, there are distributions aimed at being firewalls, or routers that fit on floppies and can work well on old PCs.
b) It allows new things to be tried. My current favorite distribution is gentoo. Why? Because of the package management. It needs serious improvement in major areas (reverse dependencies, cryptographically signed packages, improvements to speed and stability…), but overall it works well, and I prefer it to debian or BSD for the moment, much less an rpm-based distribution. Can you imagine Red Hat having a primarily source-based package management system? It’s not ideal for many users, but it has some major strengths, and the ease with which different ideas can be tried is, imho, one of the most vital reasons for OSS’ quick evolution.
c) It lets people choose what meets their needs. I like gentoo; some people like debian; some like redhat, some don’t like linux. I would be much less happy if I had to use redhat; many other people would absolutely hate gentoo.
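As a rough illustration of the reverse-dependency feature point (b) above says Portage lacks, the core of the problem is just inverting the forward dependency graph so you can answer “what depends on X?” before removing or upgrading X. This sketch uses invented package names and is not how any real package manager stores its data:

```python
# Minimal sketch of reverse-dependency lookup: invert a forward
# dependency graph (package -> list of dependencies) into a reverse
# graph (dependency -> list of packages that need it).

def reverse_deps(dep_graph):
    reverse = {}
    for pkg, deps in dep_graph.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(pkg)
    return reverse

# Invented example packages and dependencies.
deps = {
    "kde": ["qt", "zlib"],
    "gimp": ["gtk", "zlib"],
    "qt": ["zlib"],
}

rdeps = reverse_deps(deps)
# rdeps now answers "is it safe to unmerge zlib?" - no, three
# packages still need it.
```

A real tool would read the graph from the installed-package database rather than a literal dict, but the inversion step is the same.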
I do agree that there will be a limited number of major distributions; that said, I suspect that the major category will become a little blurrier, and that there will be more specialty distributions, rather than ones that aim at doing everything for everyone.
As for not being able to keep a safe haven for data, that’s what separate partitions for /home are for… if you are going to have a separate partition, it may as well be /home as a Windows partition, if your sole purpose is storing data.
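A separate /home survives a reinstall of the root filesystem, which is exactly the “safe haven” role the article assigns to a Windows partition. A sketch of what the relevant /etc/fstab lines might look like (device names, filesystem types, and sizes here are examples only; adjust to your own layout):

```
# /etc/fstab - root and /home on separate partitions
# <device>    <mount>  <type>  <options>   <dump> <pass>
/dev/hda1     /        ext3    defaults    1      1
/dev/hda2     /home    ext3    defaults    1      2
/dev/hda3     swap     swap    defaults    0      0
```

When switching distributions, you reformat only /dev/hda1 and tell the new installer to mount /dev/hda2 at /home without formatting it.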
I don’t think that there will just be three generic desktop distributions which are the only “major” thing.
One of the great things about X11 is that it has allowed a profusion of WMs and DEs to flourish. How many people would honestly still be happy using CDE as their graphical environment? Some, certainly, but I like being able to choose to run KDE or blackbox. I think that gnome and KDE will work together where it makes architectural sense to – they are fairly different internally, so it’s more complex than just factoring everything into a base library for them to share. There will be new desktop environments; in 20 years, I suspect people will look at KDE and Gnome as we do Windows 3.1, CDE, or a 1980s Macintosh – perhaps good for the time, but archaic.
Cut and paste should work, correctly. Menus should -not- be the same in all file manager windows. A degree of customization by the user, and for what is being selected, ought to be allowed, as well as the option of being able to present the same data in different ways. That said, by default, they should look similar.
The same shortcut keys in all applications is a horrible idea – no offense. If I’m an emacs user, I don’t want to use vi key bindings. If I want to press a key to reload a webpage, why can’t it be the same one as I use to restart a song? There’s a -finite- number of keys available. Some operations, such as ways to quit or get help, would benefit from being standardised (in a way which could be remapped and disabled, but usually wouldn’t be.) Absolute standardisation is a way to guarantee pressing alt-ctrl-meta-cokebottle all the time to do everyday things.
Linux doesn’t feel “fragmented” to me. I don’t notice toolkits much – I can say raw xlib or motif look ugly, but given qt, gtk, xul, and tk, I’m reasonably happy using any of them. I may be in a minority on this; I may not be.
Likewise, I think KDE is -much- nicer looking than gnome. When I started using linux, 4.5 years ago, I preferred gnome, but I was impressed by KDE 3, love KDE 3.1, and am in awe of the better parts of KDE 3.2 beta so far.
As I see it, the current wm situation is great. It can be improved, and ought to be, in some reasonably major ways, but more compatibility and flexibility are becoming available, and it’s quite workable as a desktop for a user.
I never liked the Mac interface (I’ve been using Macs since ’90). Windows is less consistent than a Mac, and I find the user interface on at least XP to irritate me greatly. Long live choice; if I had to use Windows, Gnome, or CDE on a daily basis, I would be a bit less happy with my computers.
Windows doesn’t have a “feeling of integration” with the desktop environment for me. I suspect to a large extent that when you change between windowing systems, you notice differences and it feels awkward for a while.
I think the server/desktop divide is too simplistic. I want somewhat different things from a router, a firewall, and a DNS server. Likewise, I may want a desktop distribution for an old pII with 64 MB RAM (such as my favorite desktop machine until earlier this year), or one that can run off a CD. Not everyone has the same type of computer; the divisions which Microsoft has chosen to market to are somewhat real, but Linux has a chance to meet people’s needs more precisely. This is a true benefit of having many distributions that is not realistically going to come from a closed source environment.
This turned into something a bit too long… I’m continuing my original comment in another post. I apologize if it’s a bit meandering.
Linux does some kinds of integration reasonably well. It can read a lot of filesystems, and speak a lot of network protocols. Your goal sounds like something you personally want; it might help some others too. Great. That said, I don’t think it’s necessary for Linux’s continued growth, nor sufficient to make Linux run on a much greater percentage of machines than currently. It would be better for some tool of the sort to exist than not, but no one killer app will cause a huge shift, I think.
Version inflation is annoying. However, it seems contradictory that you say “Version inflation is commonplace… going from X.1 to (X+1) after a few application updates and a minor kernel increase. … why upgrade Linux every six months? Because the software gets better too quickly!” It does get better quickly, and a large amount of that is due to the fragmentation – excessive fragmentation is unworkable, and so gets avoided, but reasonable fragmentation allows rapid development due to the ease of testing new ideas, and to competing projects being able to inspire each other to improve further – and then share any improvements they choose to.
The best way to get new software is -not- to upgrade the whole distro. Any user of gentoo or debian may tell you that. That may be the best way on rpm-based distributions, at present; this will eventually change. It is an issue, but just because it is often still done badly is not a reason to consider it unsolved, merely one to look at ways which currently work, and to adopt them or invent something which works even better. I’m not saying that the author of this article should do that; I’m speaking generally.
“As a counterpart, we’ll begin to see downloadable updates like service packs, like DistroX 2004 Update2. These updates will be easily installable and will update and patch not only the OS, but all components that came with the distro.”
Please, please, no. Or more specifically – feel free to do it, there’s nothing stopping people from developing it if they choose (No, I’m not saying that -anyone- has the time/ability/whatever, but if enough people care, it will end up happening.) That said, I think it’s a horrible idea. Updating every component that comes with the distribution -will- break things. There’s no way to avoid that. Large upgrades of this type are notorious, and for good reason – it is impossible to consider, much less test, every possible combination of all software packages against all versions of each other. Upgrades should be easily available. Ideally, they should be downloadable. I have no objection to making quarterly CDs (they’re useful to people with little bandwidth, for instance), but I think it’s an incredibly awkward way to do things for the majority of people.
“It is not unlikely that we’ll see a front end installer that launches, checks your system and the server, asks which pieces you want upgraded, and then processes it. There are systems like this in place today,”
As you just said, they already exist.
“however, they are constantly updated.” You mean the packages? Good… I don’t want to wait 3 months to patch my kernel after the next brk() vulnerability. I -like- having my distribution have a patch quickly, rather than having to rely on third-party downloads.
“Too often, people don’t patch or update, they just reinstall. ”
Yes… this happens on every consumer OS I’m aware of. Linux isn’t a magic bullet for this; it’s also not more to blame – there are ‘one click’ solutions, and given that users of Linux still tend to be more technical, I don’t think this is too much to ask.
“We’re going to see only security updates for each distro,”
As opposed to bug-fix/functionality ones, or?
“and approximately quarterly, we’ll see an official Update.”
I’ve said what I think of this above.
“Updates distributed in this fashion are much more likely to be applied by a common user than the slew of updates issued on an almost daily basis.”
Disagreed. A common user will apply no updates without being shown how or told at least once for whatever system they’re using. Many won’t even after that. Almost -no- end users will say “Oh, it’s the 15th of one of the quarterly update months, I’ll patch today!” A user should have the ability to update as often – or as rarely – as they like. Distributions aimed at home desktop users should by default have an icon on the desktop which shows if there are any security updates, and a way to update the system, either globally or per-package, easily. At least for modem users, a 200-meg quarterly patch is much more daunting than a few megs per week, with the option to take out what’s not wanted.
“Updates like this allow users to utilize a modern system much longer in between releases – for years in some cases.”
Updates like this are asking for serious trouble, I believe. Time will probably show that both of our arguments have some merit and some flaws.
That said, Gentoo and Debian are not on anything like the quarterly schedule you propose, and seem to be reasonably good at having a longer time installed.
“Unless OpenCarpet catches on, I see a service pack mentality prevailing for all commercial distributions.”
And I don’t.
“Part of the fight for useable Linux is with device drivers and hardware. Getting your video card to work properly, even with a binary driver available, is still way too hard.”
Knoppix. Do I need to say more?
It’s autodetected, the highest resolution and refresh rate are automatically used, and you boot into a graphical environment within minutes, with no configuration needed.
“While this isn’t always the fault of the hardware, we will see, in time, Linux approved hardware. The hardware will include Linux drivers on the accompanying disk.”
I’ve bought network cards with linux icons, which included drivers for linux on the accompanying disk. Realtek 8139too.
Many of the things you suggest or foresee already exist. Others are more controversial. Still others seem to be quite good ideas, of varying degrees of originality.
Linux isn’t Windows, nor MacOS, nor any other OS. It’s got serious flaws; it’s got serious potential. I think that it has a chance to be free of some of the worse technical decisions which have plagued other systems, optionally with a compatibility layer.
“Where’s the competition for XFree86? I mean REALISTICALLY.”
freedesktop.org’s xserver. It’s not ready yet, but considering the people involved, and how amazing it is even in its pre-alpha stage (yes, it will hard-lock your machine at times… but they do label it _pre_ alpha for a reason), I think that within a year or two it’ll be major, and within 5 XFree86 will be less often used, barring unexpected changes in its development model.
“Where’s the realistic competition for the linux kernel? Don’t say “gnuhurd” because it is nowhere yet and it is simply a fact that Linus and other kernel developers are not sitting around trying to come up with ways to best gnuhurd. Just isn’t on their radar. And yet the kernel gains in leaps and bounds totally without competition from another kernel.”
BSD. FreeBSD runs most major Linux applications, such as KDE and apache. There’s also Windows – Linux development has at times benefitted from weaknesses shown in comparison to Windows, such as in a few networking tests under unusual conditions.
Linux developers have used and developed a huge array of other systems; no, GNU hurd probably isn’t a major fascination of most of theirs, but Linux, the kernel, has many competitors.
Looks like Debian and Redhat.
The correct answer was: “yes, in fact, even the simplest file locations and logic differ greatly among distributions, but they do not change much between new releases of a distribution”.
The truth is that “Real Administrators” will use one of the larger Linux distributions and understand the underlying concepts of Unix/system administration. Once that is done, it is a trivial matter to switch distributions or even operating systems. Remember, Linux is only a kernel.
Agreed. That was kinda my point.
I don’t think there’ll be any significant unification. Look at Perl, for example: it’s a mature programming language that can be compared to Linux. It has been following the “There’s more than one way to do it” principle and still is. I don’t see unification coming soon in Perl, and neither in Linux.