“Folks in the software industry doubt Linux will suffer from the same kind of forking and fragmenting that limited Unix and its ISV support, but the idea may nevertheless be pushed as an offensive against Linux by Microsoft, which is already running ads in Europe depicting mutant penguins assembled with other animal parts as if to suggest they are unsure what they are.” Read the article at NewsForge.
The question is rather whether it’s good or bad. I think it makes sense to leave the choice of which programs (e.g. WMs / DEs) to use up to the user. On the other hand, it’s hard to reach new users who are overwhelmed by the choice, and development work is sometimes duplicated when it wouldn’t be necessary.
I think Linux is on a good path: there’s the ambition of private contributors who innovate (even if the practical side isn’t always as strong), and then there are the big fish, the well-run distros, who provide the money to reach the public and to work on the concepts.
The LSB is a good coordinator of development. That way we have both choice and compatibility! =)
As for M$ – of course they’ll try to portray Linux as an ugly mutant monster… seems to be the next step after the TCO ‘studies’….
Those ads are old, four years old. Just google (images) for “msad.jpg”.
http://slashdot.org/articles/00/10/21/1644248.shtml notice the date on that one….
But in a good way.
We have PPC Linux, uCLinux, Realtime variants of Linux etc. etc.
All these projects depend on the ability to easily fork the Linux kernel and tailor it for specific requirements.
The licensing of Linux effectively prevents the kind of fragmentation the UNIX market saw.
If an RT-OS becomes really widely used, there’s nothing (short of software patents) to stop those features being integrated into the ‘vanilla’ kernel – with a compile-time switch to enable/disable them.
The RT-OS people have nothing to lose if this happens; it just means better support for their software across a wider variety of devices, and the rest of the world can choose whether or not to use it.
‘Forking’ as such is really not a problem. Everyone is obligated to provide the freedom to exchange their ideas once they link them into the kernel’s source tree – the idea that you would want or need to fork the kernel without good technical reasons (as opposed to political or competitive reasons) is pretty laughable.
As for the rest of Linux – the apps, servers, DEs, etc. – mutation and diversity are the reason it has come so far so fast. Nobody is forced to do anything they don’t want to, so more gets done, although the flip side is that tight focus on specific areas is often difficult to obtain.
But that is the value proposition of Linux – companies can provide this tight focus, and people will pay for those services, without impacting the overall growth or diversity of the platform in general.
Considering that Linux is 13 years old and hasn’t forked once, I don’t think it’s likely to either. There was a dodgy patch _quite_ a few years ago, but the commercial interests in Linux are so strong that forking is even less likely now.
The thing is, this is marketing policy, not technical lock-in. The *technical* lock-in of things like the Windows API is very real. On the other hand, people massively exaggerate how difficult it is to support multiple distros – the free market will hopefully ensure that if a vendor is getting enough heat from customers wanting support for a wider range of distros, they’ll provide it. The actual barriers are very low (I’m assuming there are no dodgy contracts going on here saying you can only support RHEL or whatever).
The LSB does not limit the potential of Linux. If anything, it expands it. The freedom of Linux is that distros that want to be LSB-compliant can be, and those that choose their own route can stick to it.
If the major distros in the home-user and business-desktop categories jump on the LSB bandwagon, we’ll start to see more “made for Linux” software out there. The barrier to entry will be lower.
I think the vision here is for boxes to have “OS: LSB 1.3-compliant Linux” in their system requirements.
Consider that filesystem layouts vary from distro to distro (SuSE, Red Hat, Mandrake, Slack, etc. all differ), window managers all look for config info in different places, there is no definitive install/uninstall application (apt, rpm, urpmi, yum, etc. ad nauseam), and there are even hundreds of distros to choose from; I think there’s no way in hell that LSB will make a dent in the confused mess that is Linux.
Of course I’d like to see things change, but I can’t even get my removable USB drive to mount in the same location all the time. It’ll take years for that kind of progress at this rate.
The stuff seems to be all pretty low-level: machine level, and some base libraries. I don’t see how this really affects the end-user experience much.
Seems like people are interpreting this as a “you must use Gnome” deal, and I didn’t get that from skimming the 1.3 specifications.
The end user is going to be concerned with doing the same clicks to install their new program, not with which libraries the developers chose to use (although I suppose that matters for dependency issues).
I hope Linux remains varied and full of life like it is. I hope we don’t see it come down to two big distros. I generally use big names, but I can name multiple big names I wouldn’t want to use (Gentoo, Mandrake, Suse).
Folks in the software industry doubt Linux will suffer from the same kind of forking and fragmenting that limited Unix and its ISV support […]
If “Folks in the software industry” translates as “out of touch Linux zealots”, this statement might hold true, but the rest of the industry is well aware that Linux *already* “suffers” from fragmentation and a wide range of diversity, and that this is one of the platform’s bigger problems from a business perspective.
Which distro should they target? Which versions? Which GUI? Which package management system should they use? Will our application work when some aspect of the system gets updated? Etc, etc.
At a very low level, Linux distros are all very similar. Trouble is, once you get to the higher levels where “real people” are actually interacting with them, they’re all very different.
Why don’t we get any “BSD for the desktop” articles?
I like variety. And Linux seems to be turning into Windows.
I think microkernels or exokernels are a good idea. Something about monolithic kernels seems wrong.
mutant penguins
http://www.geocities.com/msadscan/
“Which distro should they target? Which versions? Which GUI?”
How ignorant..
Distro and version are completely irrelevant. What matters are the versions of the kernel, glibc, and other relevant libs. GUI? What about Xfree/Xorg, that ought to be safe..
“Which package management system should they use?”
Tarballs? An installation program? I’m not saying other ways aren’t more comfortable, but this way works for ET/UT and the nvidia video-drivers..
“Will our application work when some aspect of the system gets updated?”
How is this different from ANY other operating system?
“Out of touch”.. Gee.
“Distro and version are completely irrelevant. What matters are the versions of the kernel, glibc, and other relevant libs.”
Which are (*gasp*) tied to distro and version…
Saying your program requires kernel a.b.c, glibc i.j, foolib x.y and barlib p.q is adequate when you’re dealing with people who can verify that, but it doesn’t work when they can’t. Not to mention it completely ignores the fact that *they shouldn’t have to*.
It’s far easier – for everyone – to just say “this works with RHEL ES 3.0. Other distros are unsupported”. Of course, then developers get attacked for not supporting Obscure Linux Distribution #57.
Supporting different platforms costs lots of money. From a customer perspective, each different Linux distro *is* a different platform.
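To be fair to the parent poster, checking those versions mechanically isn’t hard – here’s a minimal Python sketch of the kind of preflight check a vendor’s installer could ship, where every function name and minimum version is a made-up example, not anything from a real product:

```python
def parse_version(s):
    """Turn a dotted version string like "2.3.2" into a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def meets_minimum(found, required):
    """True if the version we found is at least the version we require."""
    return parse_version(found) >= parse_version(required)

# Hypothetical minimums, purely for illustration.
REQUIREMENTS = {"kernel": "2.4.20", "glibc": "2.3.2"}

def check(found_versions):
    """Return a list of human-readable problems; empty means all is well."""
    problems = []
    for component, minimum in REQUIREMENTS.items():
        found = found_versions.get(component)
        if found is None:
            problems.append("%s: version unknown" % component)
        elif not meets_minimum(found, minimum):
            problems.append("%s: found %s, need >= %s"
                            % (component, found, minimum))
    return problems
```

In practice you’d feed it the kernel version from `os.uname()` and the glibc version from `platform.libc_ver()` – but the point above stands: writing the check is easy, expecting paying customers to care about its output is the part that doesn’t fly.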
“GUI? What about Xfree/Xorg, that ought to be safe..”
X is not a GUI.
“Tarballs? An installation program? I’m not saying other ways aren’t more comfortable, but this way works for ET/UT and the nvidia video-drivers..”
Last I checked, tarballs didn’t deal particularly well with forward and backward dependencies, uninstallation, verifying files, identifying files, etc, etc.
Again, tarballs are a suitable method for customers who are *experts*, but the vast majority *aren’t*.
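The bookkeeping a package manager does on top of a plain tarball can be sketched in a few lines of Python – record a manifest of installed files with checksums, then verify against it later. This is only an illustration of the idea (real tools like rpm keep a proper database with dependency info as well), with hypothetical function names:

```python
import hashlib
import json
import os

def sha256_of(path):
    """Checksum one file so later corruption or tampering is detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_manifest(paths, manifest_path):
    """At 'install' time, remember every file we own and its checksum."""
    manifest = {p: sha256_of(p) for p in paths}
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)

def verify(manifest_path):
    """Return (path, problem) pairs for files missing or modified since install."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    bad = []
    for path, digest in manifest.items():
        if not os.path.exists(path):
            bad.append((path, "missing"))
        elif sha256_of(path) != digest:
            bad.append((path, "modified"))
    return bad
```

A plain tarball forgets which files it unpacked the moment extraction finishes; a manifest like this is what makes clean uninstalls (remove everything listed) and `rpm -V`-style verification possible at all.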
The whole-application-in-a-directory approach works with OS X, and works reasonably well (thus far, but it’s early days), but that’s because Apple defines and enforces *policy*, something the Linux community – and its developers – are loath to do. Of course, when someone does have the balls to step up and define a policy – e.g. Redhat – they inevitably get castigated for it, because it steps on the toes of Minority Linux Distributions #37, #50-#57 and #96’s personal preferences.
“How is this different from ANY other operating system?”
Other operating systems attempt to preserve compatibility across updates, *avoid* fragmentation and define (and enforce) policies – all things the Linux community does not. That’s why I can grab a Windows 95, MacOS Classic or Solaris 2.7 application and be reasonably confident it will run on Windows XP, OS X and Solaris 10. That’s why an application running on OS X 10.1 looks the same running on OS X 10.4. That’s why when I hit Ctrl+C and then Ctrl+V in nearly every Windows application on Earth, it copies and pastes.
The question of whether or not Linux might fragment like commercial unix did is moot – it already has. The important question is whether or not the Linux community is going to work together to *fix* it, or whether we’ll just end up with a few primary distributions (like Redhat and Suse) and a bunch of smaller ones that are reasonably popular amongst users, but largely ignored by commercial interests.