Here’s a quick guide on how to install applications using various types of package formats in Linux. It is aimed at people new to Linux. “Installing software in GNU/Linux looks quite different to the way you’re probably used to from Microsoft Windows. This is due to philosophical reasons. GNU/Linux is a free (as in freedom) operating system. Most of the software is free as well. Thus, the programs can better cooperate with each other and often depend on each other for getting a job done.”
Thus, the programs can better cooperate with each other and often depend on each other for getting a job done
I don’t see it as a feature. I see it as a problem. Installing software is a lot easier and trouble-free on Windows.
I don’t see it as a feature. I see it as a problem. Installing software is a lot easier and trouble-free on Windows.
Agreed. I’m a happy Windows and Ubuntu user, and I’ve had a far easier time fixing dependency issues in Windows than in Ubuntu.
Part of that is likely because I’ve been a Windows user for longer than an Ubuntu user, and fixing dependencies is not exactly easy for casual users on either platform.
So true. Ironically, the only time I’ve ever experienced “DLL Hell” on Windows was some time ago, when I was trying to get The GIMP and GAIM to cooperate. It turns out GTK was trying to be helpful and install itself system-wide (in the Common Files folder), but it turned into a versioning nightmare.
I’m wasting a few hundred megs of hard drive space with duplicated libraries? Gee, I don’t care. It’s a less technically elegant solution? Maybe, but it works.
I dunno, at least in *nix land they actually bother to tell you what you need rather than just including half of it… I can’t even begin to tell you how many copies of the .NET 1.1 files are scattered around various program folders on my Windows 2003 workstation…
And I just found a version of Python installed with VtM: Bloodlines… I already had Python installed; why exactly did I need that?
And does anybody remember the ATI CLI errors claiming a file was missing when it wasn’t? Or the TV Wonder card software issues with the same thing: dependency errors that don’t actually exist? Or magically disappearing files from the .NET 1.1 pack (the same file on about 70% of the systems I look at, which is weird) and an installer that then won’t let you reinstall .NET 1.1 because it’s “already installed”… mmm, yes, much easier there >.<
Yeah, hunting things down if you are trying to manually install things in Linux is sort of a pain, but at least there isn’t any mystery to it: most of the time it tells you exactly what is missing when you try to compile. And at least a good number of package managers in the *nix world take care of such things: openpackage, blastwave, ports, portage, apt, etc…
In itself, it is not a problem.
Dependencies are a Good Thing as they reduce code redundancy and potentially reduce hard drive waste, security risks, memory usage.
What’s a problem is how they are currently managed on many distros. For many users dependencies are complex, and the dependency management sometimes goes wrong. This is being fixed, though — compare the dependency hell of the old RH6 days with the very few issues on a modern Debian stable system.
“Dependencies are a Good Thing as they reduce code redundancy and potentially reduce hard drive waste, security risks, memory usage. ”
This is just reciting dogma!
If everyone had the same version of Linux it would be a benefit but there are 503 known versions of Linux at last count.
In the real world people are very willing to trade some hard disk space in order to _drastically_ reduce the problems of poorly managed/undocumented dependencies.
I am not a Windows user, I am a Slackware user, so save the Windows newbie insults!
“I am not a Windows user, I am a Slackware user, so save the Windows newbie insults!”
Well swaret is not the best package manager. If you don’t like to deal with dependencies, maybe you should try Arch Linux. Pacman is really good.
I don’t see it as a feature. I see it as a problem. Installing software is a lot easier and trouble-free on Windows.
It is only a problem if the package manager doesn’t do its job. In almost all cases it does.
In the windows world you have to hunt all over the internet to find your applications and manually download and install them. That’s far more trouble than doing it the Linux way.
Another advantage of the package management systems in Linux is that you can query the package database for what package a certain file belongs to, and sometimes even what it does. How many Windows users have a chance to get info on what various DLLs do?
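For example (the file and package names here are only illustrative), on a Debian-based system:

dpkg -S /usr/bin/gimp    # which installed package owns this file?
dpkg -s gimp             # show that package's status and description

and on an RPM-based system:

rpm -qf /usr/bin/gimp    # which installed package owns this file?
rpm -qi gimp             # show that package's summary and description

There is nothing comparable built into Windows for an arbitrary DLL.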
Linux installers also ask the user far fewer questions than you get when installing Windows software. In Linux the new functionality just turns up in the program menu, no questions asked.
BTW, if you try to install e.g. a Fedora rpm the Windows way, by double-clicking on it, Fedora will not only install it, it will also download any dependencies that might be needed from the net. How often can you do that in Windows?
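From the command line the equivalent is roughly the following (the package and file names are made up for illustration; exact behaviour varies between Fedora releases):

yum install xmms                          # fetch the package plus any dependencies from the repositories
yum localinstall xmms-1.2.10-9.i386.rpm   # install a downloaded rpm, pulling missing dependencies from the net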
In the windows world you have to hunt all over the internet to find your applications and manually download and install them. That’s far more trouble than doing it the Linux way.
Here’s my experience: on Windows, to install Firefox, I go to mozilla.com; to install OpenOffice.org, I go to openoffice.org; to install Skype, I go to Skype.com. There’s nothing wrong with that; it’s convenient, it makes sense, and furthermore, Google always finds where to download a particular application. Now on Ubuntu, I go to Synaptic and type “skype”: no result, oops! I forgot to enable the non-free repositories. Now let’s type “firefox”: no luck, there are tens of “firefox” results, among them language files, beta files, some add-ons, different versions, etc… I have no idea which one to install.
Another advantage of the package management systems in Linux is that you can query the package database for what package a certain file belongs to, and sometimes even what it does. How many Windows users have a chance to get info on what various DLLs do?
Thank God, I don’t have this “feature” on Windows. Useless. We are in 2006, get a life.
Linux installers also ask the user far fewer questions than you get when installing Windows software. In Linux the new functionality just turns up in the program menu, no questions asked.
This is bad. I like to have choice. I want to have control over what to install and where, including shortcuts.
it will also download any dependencies that might be needed from the net. How often can you do that in Windows?
Windows does it the right way: No need to be connected to the Internet to install software, no dependencies either. How do you want the 5 billion poor people who live in 3rd-world countries where the best they can get is a dialup connection to install software on Linux? What they have done so far has worked: burn a copy of Windows, and burn a CD-ROM of applications from someone who has the apps and a CD burner, then go back home and install everything without a connection to the Internet.
At first you complain about seeing different versions and packages for Firefox in Synaptic. Then, you complain that the package management doesn’t give you enough choices during installation. You certainly have a right to your opinion, but not only does it seem contradictory, I’ve found it far more useful to be able to pick and choose an exact version of a software package than to manually indicate where it should be installed.
Additionally, your description of installing Windows by using a system disk and an applications disk is a bit simplistic not to mention nearly identical to the way Linux distributions are typically installed. A CD can just as easily serve as a software repository for Linux as a server in Bangladesh.
It has been my experience that unless I have a disc made in advance or at least an extensive list of applications, I’ll be installing miscellaneous applications under Windows up to several days after I initially install the base system because Windows doesn’t come with much. On the other hand, with Linux there are very few applications that I need to install separately from the base system because modern distributions are very complete.
At first you complain about seeing different versions and packages for Firefox in Synaptic
Yes, I don’t mind having different packages in the list, but the descriptions are not clear enough for the average Joe user to decide which one he should install. If you go to mozilla.com, you have just one link to Firefox.exe.
I’ve found it far more useful to be able to pick and choose an exact version of a software package than to manually indicate where it should be installed.
Obviously, but I’m referring more to other options such as adding shortcuts on the desktop, setting it as the default browser, choosing the language, etc…
Additionally, your description of installing Windows by using a system disk and an applications disk is a bit simplistic not to mention nearly identical to the way Linux distributions are typically installed
This was to stuff up my statement. Both Linux and Windows install the same way.
A CD can just as easily serve as a software repository for Linux as a server in Bangladesh.
No it can’t. Why? Because applications will always ask for dependencies that are not on your CD-ROM. This always happens. Or the dependency is on the CD-ROM, but the version is wrong.
It has been my experience that unless I have a disc made in advance or at least an extensive list of applications, I’ll be installing miscellaneous applications under Windows up to several days after I initially install the base system because Windows doesn’t come with much. On the other hand, with Linux there are very few applications that I need to install separately from the base system because modern distributions are very complete.
Neither concepts are good or bad. Just a matter of taste. I like Windows because it has some basic applications already set up: Windows Media Player, Internet Explorer, Notepad, WordPad. I like the fact that it doesn’t come with MS Office. I don’t mind that Linux ships with OpenOffice.org because I find it a great office suite. I don’t like the fact that Linux ships with applications that I don’t like though (GIMP for instance).
What if you DIDNT like internet explorer? What choice do you have there?
linux doesnt “ship” with any of that you talked about, some distros may bundle it or they may not.
What if you DIDNT like internet explorer? What choice do you have there?
Firefox, Opera, Mozilla suite, Netscape Communicator. Tons of’em
linux doesnt “ship” with any of that you talked about, some distros may bundle it or they may not.
You’re being picky. We’re talking about mainstream linux distributions, not about the kernel, obviously.
“What if you DIDNT like internet explorer? What choice do you have there?”
“Firefox, Opera, Mozilla suite, Netscape Communicator. Tons of’em ”
But you still can’t remove Internet Explorer. At least on Linux you can uninstall the GIMP if you don’t like it.
“linux doesnt “ship” with any of that you talked about, some distros may bundle it or they may not.”
“You’re being picky. We’re talking about mainstream linux distributions, not about the kernel, obviously.”
I am not being picky. I AM talking about linux distributions.
This is bad. I like to have choice. I want to have control over what to install and where, including shortcuts.
Choice is good only if you can make an informed choice. Most users can’t; instead it creates more jobs for your support staff. First, some users may even call and ask where they should install things. Second, when some problem occurs the support staff will often need to find out what choices the user has made. In most cases the user has forgotten all about his initial choices, and there goes another five minutes.
If you download your packages and have them on disk, just like you would with most Windows programs, most package managers allow you control over where you install things. It is just that by default they put them where everybody expects to find them.
As for shortcuts, nothing prevents you from putting a shortcut wherever you want after you install. E.g. if you want a shortcut on your desktop, just grab the entry from the Application menu and drag it there.
I think your need for control springs from other deficiencies in Windows. E.g. in Windows you usually work with physical disks. This means that you run out of space on e.g. C:\ now and then, and have to make decisions to install software in non-standard places. In Linux you usually have a layer of abstraction (LVM) between your physical disks and your file system. This means that if you run out of space, you can add another physical disk and use it where it is needed, much like how you add more RAM without assigning it to a specific program.
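A rough sketch of what that looks like with LVM (device, volume group and volume names are hypothetical):

pvcreate /dev/sdb1                # prepare the new disk or partition for LVM
vgextend myvg /dev/sdb1           # add it to the existing volume group
lvextend -L +20G /dev/myvg/home   # grow whichever logical volume is running out of space
resize2fs /dev/myvg/home          # then grow the ext2/ext3 filesystem to match (may need an unmount on older setups)

No repartitioning, no reinstalling, and no deciding at install time which program has to live on which drive.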
Windows does it the right way: No need to be connected to the Internet to install software, no dependencies either.
No dependencies? Try installing some software without the right version of IE. Make sure that you have applied the service packs in the right order, and repeat the procedure each time you install a new driver that may or may not have reverted something essential patched by a later service pack. Windows has lots and lots of dependencies.
How do you want the 5 billion poor people who live in 3rd-world countries where the best they can get is a dialup connection to install software on Linux? What they have done so far has worked: burn a copy of Windows, and burn a CD-ROM of applications from someone who has the apps and a CD burner, then go back home and install everything without a connection to the Internet.
There is nothing that prevents you from doing the same thing in Linux. I would say that Windows would be more of a problem: just think of Windows Genuine Advantage, which wants to call home now and then. More programs have licence keys to be registered over the net, and so on.
In Windows each individual software package is usually much bigger than in Linux, meaning that if you have to download something over a flaky modem line, you have a much bigger chance of failure in Windows.
Thank God, I don’t have this “feature” on Windows. Useless. We are in 2006, get a life.
That’s stupid. If I find a file like /usr/bin/xlskrmartm, or something equally weird, that I don’t fully trust, and I want to know where it came from and what I need to uninstall to get rid of it, this feature is extremely useful, and it’s something you simply can’t do on Windows.
Sure, might not be useful for home users who shut down the internet at night, but for maintaining an OS installation for a few years through updates and custom software installs, this is a very useful feature.
Maybe running an executable installer on Windows is a little easier than starting a graphical package manager, selecting the desired package, and clicking apply. That’s a matter of preference.
On Windows, however, installing software doesn’t begin and end with running an executable installer. You first have to find the installer on the web or somehow acquire it on a CD or DVD.
It’s often difficult to verify that the site from which you are downloading from isn’t a phishing scheme and that the executable you are running isn’t malicious. The fact that installers don’t all look and feel the same, because they use several different frameworks, doesn’t help the matter.
They ask questions that even an advanced user would rarely care about, like where to install the program files. The default is appropriate 99.999% of the time, so they shouldn’t ask this question.
The installer can sometimes, but not often, fail in spectacular fashion. This happens with *nix package managers, too. In the case of Windows, your recourse is to contact the application software vendor. Microsoft won’t provide any support for third-party applications. Most *nix distributors will support any installation-related problems, and often a wide range of runtime problems, that you might have using software from their official package repositories.
A common reason why a Windows installer would fail is that the appropriate .dlls are not installed, and the installer doesn’t include them. This is called “DLL hell,” which is roughly analogous to “dependency hell” on *nix. On Windows this problem was left for ISVs to fix by including any shared libraries that could conceivably be missing on a working Windows system (of various versions). Accordingly, some installers are more robust than others, and your mileage may vary. On *nix, distributors took on the problem by developing package managers that are generally robust in handling this situation properly. As the package managers improve, the installation of any package becomes more reliable.
On Windows, installers generally store various configurations and shared data in the global registry… I don’t think I need to go any further in explaining this one.
Finally, Windows makes the job of delivering software for the platform far more difficult. I already mentioned some of the reasons, and there are many more. If you’ve ever played with InstallShield or worked for a Windows ISV, you know what I mean. On *nix, if you put up a project page, your source tree builds cleanly with GCC, and your license is GPL-compatible, your job is essentially done. The distributors will take care of the rest if there is any demand for your software.
If you are a proprietary ISV targeting Linux, your job is significantly harder, but no harder than for Windows. The proprietary ISVs that complain that delivering software for Linux is too difficult spend orders of magnitude more on delivering their software for Windows. Delivering proprietary software as a third party is hard, period.
To respond directly to your comment, I see the Windows software delivery model as a problem, and if the Windows market weren’t so vast, more ISVs would agree. In fact, for reasons including but not limited to software installation, ISVs are regularly complaining to Microsoft (and appealing to the courts) that it isn’t treating them fairly.
For the end user, it’s hard to judge whether the Windows or the *nix model is easier to use. They’re both generally easy for users with a basic understanding of desktop computers, and they’re both fraught with difficulties for the hopelessly ignorant.
But in terms of trouble-free (which I take to mean an assurance of reliability, security, and support), the commercial Linux distributors have Microsoft squarely beaten. Microsoft can’t honestly claim that non-Microsoft software for their platform is reliable, secure, or supported by anyone. *nix distributors actively work (i.e. opening bugs and submitting patches) to ensure that externally maintained software for their distribution is reliable, they issue their own security advisories for external projects, and commercial Linux distributions provide various support levels for these software packages.
I won’t argue with you because basically I would disagree with every single statement.
I can’t believe that you believe in anything you wrote. It’s plain nonsense. Maybe this is what you would like, but reality is a whole different story.
If Linux were so good, then everybody would use it. Last time I checked, there were only 1% of computer users using Linux. Still, Microsoft costs $300 ; Linux is free of charge. There’s obviously something wrong with Linux.
If Linux were so good, then everybody would use it. Last time I checked, there were only 1% of computer users using Linux. Still, Microsoft costs $300 ; Linux is free of charge. There’s obviously something wrong with Linux.
Yes, it doesn’t come preinstalled on virtually every PC sold. That’s how MS maintains its effective monopoly.
You’re certainly entitled to your opinions, and I agree that doing a back-and-forth argument would be useless. We both expressed our perspective on the situation; let’s let other people decide whose version is closer to reality.
Let me just respond to your “if Linux was so good, everyone would be using it” argument, because it is a tried and true argument made by Linux skeptics. This argument would be valid if the OS platform market were a free and inelastic market. In other words, if businesses and individuals could switch from one platform to the other with no significant migration costs besides the acquisition cost of the new software, then they would switch shortly after one platform began offering better price/performance. If this was the case, and nobody was migrating, then it would necessarily follow that Linux doesn’t offer a better value.
However, for many businesses and certain individuals, migrating away from Windows is hardly even possible, let alone cost-effective. There’s homegrown software that only runs on Windows, or reliance on software from ISVs that don’t support Linux, or undocumented office collaboration backends that aren’t perfectly supported by Linux collaboration software. The market is almost completely elastic, meaning that customers would be willing to spend significantly more than the optimal market price in order to avoid moving away from Windows. Optimal price is defined in terms of the marginal cost when the rate of supply is such that marginal cost = marginal price. Note that for most of the software industry, the marginal cost is pretty close to zero. The only OSs that can compete with Windows in a remotely inelastic fashion are other versions of Windows, and even this is a tricky proposition for some businesses.
There are certainly many things wrong with Linux, but you can’t draw this conclusion from the market share. Non-Microsoft platforms are held to a much, much higher standard when businesses and OEMs are evaluating the possibility of moving away from Windows. They not only need to provide better price/performance, but they also need to provide feature parity as well as data, application, and protocol compatibility with Windows. That’s really, really hard to do without compromising the architecture that makes your platform superior and without open interface specifications.
In free market economics, if a product were offered at no charge, no rational consumer would decline the offer. In fact, that’s why most governments have rules against setting your price artificially low to kill off competitors. In this sense, the fact that everybody isn’t using Linux doesn’t indicate that something’s wrong with Linux, it indicates that something’s wrong with the market.
Don’t bother with an elaborate economic argument. He’s either too dumb to understand it, or just baiting you.
In free market economics, if a product were offered at no charge, no rational consumer would decline the offer.
That’s assuming the free product provides the same value.
Today, on desktop, it’s not the case with *nix.
Why? The quality of desktop applications.
If I say the number of applications, many will point out there’re xxxxxx number of packages for this and that distribution. But how does it matter, when in almost every app category (other than the most basic stuff, like browser and email), I can find better apps on Windows than on Linux?
Even the same app looks better on Windows due to better fonts. Let alone Windows is basically commandline-free.
It’s quite different on the server side – equal or better utilities, operating with CLI and scripting become preferable etc.
That’s why *nix is much more successful on the server side.
While there’re problems with the market, and factors like migration cost (your strongest argument), you haven’t convinced me that I am “irrational” for not choosing Linux as my desktop at home, which has nothing to do with any “market problem” or “migration cost”, or competency in using *nix OSs.
I’m a little surprised by the whole dependency hell thing, as I am not so much in dependency hell as upgrade heaven.
I think your post betrays when you last used Unix, if you’re complaining about fonts.
Both of these used to be problems with Unix, but I’d be tempted to say that even for the most casual user, font rendering is just not a problem.
That’s assuming the free product provides the same value.
Today, on desktop, it’s not the case with *nix.
Why? The quality of desktop applications.
That is a lie. Economics simply says that even if there is virtually no value in the product, it WILL be wanted, as long as it is sold at no cost (including the cost of you going to get it). But who will want to supply such a thing other than the govt?
And quality of desktop apps? you must be kidding. I haven’t seen any win app better than k3b, for instance. I only know that nero/alcohol are on par with it. KPDF and the likes open much faster than Adobe, and gaim is much better than MSN messenger in terms of implementation. I, for one, hate the increasingly cluttered MSN contact list, and MSN’s many bugs. And these are just part of the list.
Windows has better fonts? IIRC, there ARE good font renderers on Linux. I think the Windows proprietary one has been ported, but is not GPL. All TrueType fonts (which are the default in Windows) can work in Linux. I think the package is called FreeType. Go check it out. With a default install, FreeType may be a little worse, but the disparity is hardly noticeable.
What? Being command-line free is good? Why, then, is MS investing in Monad? I hope the developers of GNOME/KDE/XFCE/SUSE/RH/… are not angry at your statement. They HAVE spent much time and money on perfecting their GUI configurators. As a KDE user, I haven’t run into a KDE issue that isn’t configurable with Control Centre + the app’s own configuration dialog. I think only the most obscure features are not implemented.
You may still like Windows on the desktop, but please keep it to yourself. AFAIK, my experience is that most people sticking with Windows at home do so because of the abovementioned ideas + the ability to always use pirated software they never touch. Weird? Yes. Welcome to dealing with humans.
Your analysis is generally sound. But there is this error I cannot disregard (I’m taking econs for my A-levels):
You have swapped elastic and inelastic demand. Monopolies face inelastic demand that is generally indifferent to price changes, thus a steep (straighter) demand curve;
in perfect competition, demand is perfectly elastic — raise the price by an infinitesimal amount and your quantity demanded falls from infinity to zero.
Other than that, you have outlined the reason Linux penetration is small — people are stuck with the worse product because of market failure.
Other than that, you have outlined the reason Linux penetration is small — people are stuck with the worse product because of market failure.
And that explains why Linux, which is free and runs on cheaper hardware, isn’t doing better against OS/X, in what way?
Other than that, you have outlined the reason Linux penetration is small — people are stuck with the worse product because of market failure.
And that explains why Linux, which is free and runs on cheaper hardware, isn’t doing better against OS/X, in what way?
Ow… COME ON! Not that line AGAIN! OSX is a great operating system that happens to run only on hellishly expensive hardware, which turns it into a virtual niche, sold in great(?) numbers only within the USA’s boundaries. And perhaps a little bit in Europe, I don’t know.
If you take your head out of the sand and look at the rest of the world, you will realize that Brazil alone, whose government settled on OSS, gives Linux at least twice the worldwide installed base of OSX.
That Mac/OSX line is getting tired. Please wake me up when Apple can sell ME a computer for less than half the price of a car here!
Take a survey of how many computers that are sold/built without operating systems end up with a (legal) Windows OS or with Linux, and then see what percentage you get out of it.
Not fair.
Linux users still do not make up much of the world population, and many DIY builders will still end up with Windows anyway since people are so used to it. Many don’t even know of an alternative.
Windows has gotten past that period of bundling: even if it is not sold together with the system, it has so much mass that it forces those DIY builders to install Windows anyway. This is called an externality.
What is important is that Windows had that bundling tactic in place during the window when the non-tech-savvy were buying computers. It now has so much market share that it doesn’t matter what the rest do.
However, this is all going to change. As more and more people get fed up with Windows, the alternatives are all trying to catch up to the bandwagon. That is why Linux is trying to get bundled — to seize the window of opportunity created by MS’s inability.
All in all, your survey doesn’t work now. It will still show a majority of Windows users, legal or not. The critical period hasn’t arrived yet, that’s all.
It’s more of an organizational thing than a technical issue. On Windows, Microsoft controls all the system libraries, so apps tend to just depend on those. On Linux, the system libraries are controlled by a whole host of different parties. Package managers tend to simplify things greatly in the common case of an open source app, but there is some unavoidable complexity when it comes to binary apps.
Of course, the Linux method is massively superior in some cases. When you do have dependencies, Windows has no formal mechanism of tracking or maintaining them. So Windows application programmers tend to either avoid reusing code, or they just statically link it, which is a problem from a memory-usage/security point of view. Moreover, Windows lacks much of the infrastructure necessary for installing libraries and SDKs. On Linux, I can do “apt-get install libpng-dev”, and boom, I’m done. On Windows, I gotta go find the binary on the internet, install it, then futz with my %PATH% and Visual Studio’s directories to make sure it sees everything. And of course, nobody ever agrees on where to put things in Windows, so it’s not uncommon to get a Visual Studio project that doesn’t build on your system because the creator put his DirectX SDK in a different place than you did.
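To make that concrete (a sketch only; it assumes the -dev package ships a pkg-config file, which most do):

apt-get install libpng-dev
gcc -o pngdemo pngdemo.c $(pkg-config --cflags --libs libpng)   # header and linker flags found automatically

There is no standard Windows equivalent of pkg-config or of system-wide -dev packages, hence the %PATH% and project-settings futzing.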
Both designs have their strengths and weaknesses, and I think both designs are pretty well-suited for the market they serve. Windows is a world of a centralized authority creating the platform, and lots of unrelated software vendors writing binary software. Linux is a world of distributed development of the core platform, and a relatively well-organized set of software distributors providing open source software. The standalone installer model makes sense for the former case, and the repository model makes sense for the latter case.
When you do have dependencies, Windows has no formal mechanism of tracking or maintaining them.
Neither does Linux, which is part of why you see so many versions of libz floating around, for example.
On Linux, I can do “apt-get install libpng-dev”, and boom, I’m done.
You mean on a Debian derived distro which has a recent enough libpng in its repository.
And of course, nobody ever agrees on where to put things in Windows
Unlike Linux, where, depending on the distro, your favorite library might be in /lib, /usr/lib, /usr/local/lib, /var/MUMBLE/lib, or someplace else, at the discretion of the individual who packaged it up for the repository.
The Linux models of how to install software differ from the Windows models, but the above are definitely not areas in which Linux excels.
//I don’t see it as a feature. I see it as a problem. Installing software is a lot easier and trouble-free on Windows.//
Just not so.
I have installed a huge amount of software on a largish number of Windows systems and Linux systems.
Most of the Linux systems were far easier to install software on.
Windows comes towards the bottom of the list of “ease of software installation”. Quite near the bottom.
Dependency hell will never be resolved in Linux because there are over 500 adhoc incompatible flavours of Linux.
Utter madness!
Linux _MUST_ be standardised.
Pray to God that FreeBSD does not go the same way!
I like that it’s a strong statement, but really I do not know what you mean.
The main benefit for Linux of standardising is for proprietary binary programs, and those hold back compatibility.
There is actually only one Linux, in real terms only two package managers, and let’s face it, three distributions with derivatives.
I always thought there were different flavors of *BSD.
Dependency hell will never be resolved in Linux because there are over 500 adhoc incompatible flavours of Linux.
The problem GNU/Linux has is that outsiders are hellbent on seeing GNU/Linux as being a single Operating System. Unfortunately, GNU/Linux doesn’t fit the “Windows jacket”.
The 500 “flavours” are 500 different OSes with a common heritage and that is GNU/Linux’ biggest weakness. It mostly means that software from one distribution will regrettably almost work on similar distributions, but not quite. If this were not possible, GNU/Linux would be in much better shape, perception wise.
Now we have a lot of n00bs with updateritis trying to plug Debian binaries into Ubuntu or Mandrake RPMs into Fedora Core. These OSes are distinct OSes unto themselves and thus incompatible. Dependency Hell only occurs if one is stupid enough to try and go cross-distro with software, or if one is hellbent on using experimental software whilst not being skilled enough to handle the problems which come with alpha software.
Just because you are free to disregard the natural boundaries between different Distributions doesn’t mean you should. If you screw up because you jam square pegs into round holes, you shouldn’t blame GNU/Linux.
//These OSes are distinct OSes unto themselves and thus incompatible. Dependency Hell only occurs if one is stupid enough to try and go cross-distro with software//
Very often dependency hell does not occur even here. Stick with repositories from LSB-compliant distributions; it helps to use a package manager designed for cross-distribution installs, and you can also fall back to source RPMs if need be.
I have done this. I have never had an unsolvable dependency conflict – even going across repositories from different distributions.
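For the curious, the source-RPM fallback is essentially one command (the file name is hypothetical):

rpmbuild --rebuild someapp-1.0-1.src.rpm
# the resulting binary rpm lands under RPMS/ in your rpm build tree, compiled against your own distro's libraries

which sidesteps most cross-distro binary incompatibilities, because the package gets built against the libraries you actually have.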
“Now we have a lot of n00bs with updateritis…”
Ah yes! The old elitist crap about how you’re too stupid if you can’t do it, etc.
Look at it this way, an author announces a new application that could be very useful to you.
Your choices are
1) try & fail misearaably to compile the source that have released beacuse dependencies of the source are not clearly documented by the author (typical situation) and you don’t have intimate knowledge of the percularities of the one of 503 permutations of Linux that you are using.
or
2) Try to package the source for your distro & fail for the above reasons plus the packaging process is arcane.
3) Wait for someone to package it for you for your specific distro & pray they got it right.
I’m speaking as a Slackware user of over 6 months.
Ah yes! The old elitist crap about how you’re too stupid if you can’t do it, etc.
Can’t handle the fact that your GNU/Linux knowledge is inadequate after only six months on Slackware? You are a noob and you need to learn. I was in the same boat 8 years ago. I learned and enhanced my own skills, and if it is elitist to be proud of your own achievements, then yes, I am an elitist.
I never said people are too stupid to do it. I only see a lot of people trying to do stuff that is clearly not in their grasp yet. They fail to realize that they will fail, because of a lack of knowledge. There is nothing wrong with being a noob, there is something wrong with staying a noob.
Anybody can master GNU/Linux, it just takes time and the WILLINGNESS to learn something more than next –> next –> OK. The reason I can compile software from source, can convert an RPM to a deb with Alien, and can fix dependency problems with GNU/Linux is that I wasn’t too lazy to do my own research.
If anybody is unwilling to read up on GNU/Linux, expects software to be pre-tested to the point of bitrot and expects no rough edges at all, please just use closed source, commercial software from either MS or Apple. They cater to your needs. Only if you want software that puts you in the captains chair, go and use GNU/Linux.
It is about evaluating your own true needs and then picking the right solution. Your posts so far suggest to me you went with the wrong OS.
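For what it’s worth, the Alien conversion I mentioned is a one-liner (package name hypothetical):

alien --to-deb someapp-1.0-1.i386.rpm   # convert the rpm into a .deb
dpkg -i someapp_*.deb                   # install the converted package

It works fine for simple packages; anything with distro-specific scripts or dependencies still needs the kind of research I’m talking about.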
> If anybody is unwilling to read up on GNU/Linux,
> expects software to be pre-tested to the point of
> bitrot and expects no rough edges at all, please just
> use closed source, commercial software from either MS
> or Apple. They cater to your needs. Only if you want
> software that puts you in the captains chair, go and
> use GNU/Linux.
Why is pre-tested software without rough edges equated with closed-source commercial software? The article points in the same direction. I have used free and non-free, good and bad programs, and I have yet to find any correlation between these two dimensions.
I’m somehow waiting for a real groundbreaking FOSS application (even more than Firefox) just to defeat this lame idea.
“You are a noob and you need to learn”
Great attitude!
It’s the 21st century; if you want to sit in your mother’s basement being an expert on Linux, that’s fine for you, but millions of people _DON’T_
We have successful businesses, wives & children, vacations & a life in general.
I have a CS degree, amongst others, & think your head-in-the-sand attitude does a disservice to Linux.
It’s the 21st century; if you want to sit in your mother’s basement being an expert on Linux, that’s fine for you, but millions of people _DON’T_
That’s the reason why millions of people have viruses today. No one needs to be an expert, but you should know at least some basics. You have to under Windows too (e.g. you must configure and update your virus scanner regularly). Why should it be any different for Linux? Just sitting in front of a PC and expecting everything to work without knowing anything is unrealistic and doesn’t work – even in the 21st century!
> The problem GNU/Linux has is that outsiders are
> hellbent on seeing GNU/Linux as being a single
> Operating System. Unfortunately, GNU/Linux doesn’t fit
> the “Windows jacket”.
>
> The 500 “flavours” are 500 different OSes with a common
> heritage and that is GNU/Linux’ biggest weakness. It
> mostly means that software from one distribution will
> regrettably almost work on similar distributions, but
> not quite. If this were not possible,
> GNU/Linux would be in much better shape, perception
> wise.
There’s one thing I would do to avoid these problems, if I were a distro-maker: I would make a new OS, not a Linux distro. That means
– use Linux as a basis
– don’t mention Linux except in the credits section
– don’t call the damn thing Linux
– make it incompatible with Linux binaries
– make it incompatible with various existing package formats
just so nobody tries to mix in other distro’s stuff unless he really really knows what he’s doing.
//Dependency hell will never be resolved in Linux because there are over 500 adhoc incompatible flavours of Linux.
Utter madness! //
This is simply not true.
http://en.wikipedia.org/wiki/Smart_Package_Manager
“Smart is a package manager software project. It has the objective of creating smart and portable algorithms for solving adequately the problem of managing software upgrading and installation. This tool works in all major linux distributions, and aims to bring notable advantages over native tools currently in use, like apt, apt-rpm, yum and urpmi.”
Although Smart is not a complete solution so that every repository will work with every distribution …
//Linux _MUST_ be standardised. //
… there is of course this as well:
http://en.wikipedia.org/wiki/Linux_standard_base
“The goal of the LSB is to develop and promote a set of standards that will increase compatibility among Linux distributions and enable software applications to run on any compliant system. In addition, the LSB will help coordinate efforts to recruit software vendors to port and write products for Linux.”
… and together this is a complete solution.
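If you want to see what that looks like in practice, Smart’s day-to-day use is, if memory serves (check its documentation for how channels are added):

smart update          # refresh all configured channels (repositories)
smart install gimp    # resolve dependencies across channels and install

with channels pointing at apt, rpm-md, urpmi and similar repositories configured beforehand.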
“//Dependency hell will never be resolved in Linux because there are over 500 adhoc incompatible flavours of Linux.
Utter madness! //
This is simply not true.”
Ever heard of http://www.distrowatch.com friend? Have a look to get informed better.. (ahem)
DistroWatch database summary
* Number of all distributions in the database: 503
“http://en.wikipedia.org/wiki/Smart_Package_Manager
Smart is a package manager software project…
Although Smart is not a complete solution… ”
You’re darn right it isn’t! And neither are any of the other myriad attempts to cope with the insane situation of hundreds of Linux distros.
“//Linux _MUST_ be standardised. //
… there is of course this as well:
http://en.wikipedia.org/wiki/Linux_standard_base
The goal of the LSB is to develop and promote a set of standards that will increase compatibility among Linux distributions and enable software applications to run on any compliant system…”
Exactly what’s needed indeed, except _nobody_in_Linux_Land_is_adhering_to_it!!!! Every distro is another fragmented, different way of doing things.
“… and together this is a complete solution.”
Wishful thinking, friend. I dearly want an alternative OS, but 503 versions of Linux is clearly not tenable.
I dearly want an alternative OS
Why don’t you state what you really want? You want Windows, but without all the shenanigans, pricetag and security problems attached to it by Microsoft. In this case GNU/Linux will never do, because GNU/Linux will never be Windows. GNU/Linux is Unix and it appeals very much to the people who like the UNIX(tm) paradigm.
You might want to support the ReactOS project, they are busy recreating Windows without the meddlesome interference MS imposes. http://www.reactos.com
Actually I’m a Slackware user so save your Windows insults pal!
Slackware reduces the rpm/deb madness somewhat without the insanity of Gentoo et al.
I’m strongly inclined to move to BSD now that desktop-friendly FreeBSD has come on the scene.
LSB is for Linux what Posix was for Unix, and while it might increase compatibility, X.org’s recent decision to ignore 20 years of X11 naming conventions and all of the hiccups it caused should be sufficient warning to most that LSB will be about as successful as POSIX was.
And if it’s not, there’s always Mark Shuttleworth’s explanation of why LSB won’t work.
I disagree. I’m a happy Ubuntu and Windows user, and I can’t honestly say that one system is better than the other in this regard.
Package managers are great for system software, and work well for other kinds of software as well. I haven’t encountered a single dependency issue in two years. Not one. Nada. Zilch.
On the other hand, graphical installers are good for cross-distro programs, which is what they are used for in Linux (Crossover Office, games, Google applications).
So Linux has the best of both worlds, really.
Excellent article; the area users fear most needs the most edification. More light on these areas causes more discussion, and more attention brings more fixes. To know the penguin is to stop fearing him; it’s great contributions like this that propel Linux desktop adoption.
Or on windows you can just google for “mp3 player” and get all kind of choices. Thousands of trials, tons of adware, buggy software, limited functionality and so forth. Yea, that is advanced.
BTW, all of those you mention also have Linux versions you can download from their websites.
Control == shortcut placement? party on dude!
burn a copy of Linux (legally), burn a CD-ROM of packages and go home and install…
I am amazed all those poor people have computers AND even a CD burner, much less a home
Dude I will give you a +1 just for the laugh of reading your post…
“No it can’t. Why? Because applications will always ask for dependencies that are not on your CD-ROM. This always happens. Or the dependency is on the CD-ROM, but the version is wrong. ”
No, that is the reason you have a distro that puts together a release: you download the CDs, that is what you use to install, and the correct versions are on there.
Or, if you download the stuff yourself, you simply use a package manager to pull in all the required dependencies and burn them to a disc as well.
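A sketch of that workflow on a Debian/Ubuntu-style system (the package name is just an example):

# on a machine with a connection:
apt-get install --download-only abiword   # fetch abiword and everything it depends on, without installing
# the .deb files end up in /var/cache/apt/archives/ -- burn that directory to a CD

# back home, from the CD:
dpkg -i *.deb                             # install the lot; the dependencies are there because they were downloaded together

(For a full offline repository you would also generate a Packages index with dpkg-scanpackages, but for a handful of applications the above is enough.)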
So what happens on Windows when you load all that stuff and you go to play the XYZ format and it says it needs to download codec XYZ to play back that file type?
First post was funny, this one is getting old quick
This discussion reminds me of the poem about the five blind men describing an elephant. Everyone seems to have a fully formed opinion based on relatively limited experience with the alternatives and seems to be arguing the strengths of the approach that burned them the least against the weaknesses of the approach that burned them the most, while ignoring the weaknesses of their favored approach and strengths of the other.
Both approaches have strengths and weaknesses, neither is particularly better than the other, and both have been greatly improved over the last 10 years.
Although I’m rather surprised that nobody seems to have mentioned the third approach, but then, since it applies equally to both sides, I guess it’s not fodder for religious wars.
I must say that this article was pretty good, and more extensive than I expected. It threw some jabs at Windows without seeming overtly biased, and overall it gives a pretty good impression to users new to *nix.
Now the bad:
For an article targeted at complete newbies, there was some unnecessary material covered. For instance, it talks about compiling packages from source and Gentoo. Your average Windows user will likely be put off by the possibility that they might have to compile software, and the truth is that these users won’t ever do this. Gentoo can be a great first Linux distribution, but only if you are a Windows/Mac power user that generally understands computers and likes to learn about them. If that’s the case, then they really don’t need to read this kind of article.
There were some explanations that went beyond simplification and became “wrongification.” For example, this article taught me that in UNIX [shells], ./ means “run this application.” No, it doesn’t; it bypasses the default PATH, and it only runs an application if you’re in the same directory as the application (which the article didn’t tell the reader to change into). This was in another part of the article that was explaining things that newbies don’t need to know, and this is precisely the result.
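A two-line illustration of the difference (the program name is made up):

cd ~/downloads/someapp
someapp      # fails: the shell only searches the directories listed in $PATH
./someapp    # works: an explicit path bypasses $PATH and runs the binary in the current directory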
Good article, unfortunate compromises. He should pare it down into one version for novice newbies, and beef it up into another version for advanced newbies. I was once in the latter category, and I believe it makes good sense to target them with good, solid introductory materials. From the perspective of the FOSS community, we want the advanced Windows and Mac users to come over to our side more than anyone else. These are the users who have the skills to contribute to the community.
It’s interesting that everyone here is discussing dependencies in GNU/Linux versus the way Windows does it with the all-in-one deal, generally. Hasn’t PCBSD fixed this for the normal user? I am by no means a huge fanboy, but I plan to install it on my new system once I return to the States from overseas. I realize that PCBSD is not GNU/Linux, but this issue can be addressed. Just a thought.
“Hasn’t PCBSD fixed this for the normal user”
It would seem so.
The PBI packages are similar to the Mac way of doing things. Put everything needed into the installer package & keep it with the installed app.
Maybe PC-BSD will really catch on because it sticks with the BSD base & just gives it a desktop-user-friendly installer & a simple application installer method (regardless of its technical merits).
PC-BSD is an attempt to address the issues of what the majority of desktop users want versus what cloistered geeks want.
> This is due to philosophical reasons. GNU/Linux is a
> free (as in freedom) operating system. Most of the
> software is free as well. Thus, the programs can better
> cooperate with each other and often depend on each
> other for getting a job done.
That did a good job of *not* explaining what ease of installation has to do with free software. I think the author is trying to find an argument here when there is none. Free software has *nothing* to do with modularity, and both have *nothing* to do with how hard or easy it is for the end-user.
I’m especially concerned about stuff like this:
> ./configure
> make
> make install
>
> Right now, the software should be installed in our
> system. If errors occurred, we need to read the
> messages carefully and probably install additional
> packages which are required to compile the desired
> program. The list of dependencies is usually located
> in README or INSTALL text file which is delivered
> together with the sources.
This can be a real pain for the experienced user, let alone be impossible for the average user. By claiming this to be an essential part of the concept of free software, the author sheds very bad light on free software, and even freedom in general.
Why not tell the truth? Unix has no real package management because it was a system designed for geeks. Any package management like DPKG was added as an afterthought, and thus sucks at its foundations. It works most of the time, but not always, and neither the kernel maintainers nor the distro maintainers have a clue how to retain control over this mess and fix the real problem.