“There is one huge difference between free and non-free software that has some very practical implications for the way we use it. One of those implications is the dependencies between individual software packages in the free software model. What do they have to do with the free software philosophy, and why should you not be afraid of them? Read on to find out.”
“Dependency hell” is at its worst when you try to update an older release like Suse 7.3, Suse 9.0 or an old version of Red Hat (sometimes you need to do so). But why should I complain about this? Updating is possible in all of these cases, although you have to invest a lot of time. Red Hat 7.3 shows this impressively: it is still maintained by the Fedora Legacy project (until the end of 2006; after that date you are on your own).
But think about one thing: How do I update Windows NT without the help of Microsoft? It is not possible. Period.
For example, it is not possible to update the WinNT kernel to the kernel version of WinXP. If you want to update the original 2.4.10 kernel of Suse 7.3 to the latest 2.4.33.3 kernel, it works; it is no problem, thanks to free software.
In my opinion, this is the main difference between proprietary software and free software. And, for this main advantage, I accept the beloved “dependency hell”.
Dependency issues also exist on Windows, but in most cases they are somewhat transparent to the end user, because setup applications include all the DLLs the application could possibly require.
Many applications have a dependency on VB or IE or both; some require the .NET Framework to be installed, and so on.
So the fact is that dependencies happen no matter whether the software is open source or not.
But many Windows users are accustomed to downloading a freeware program, double-clicking the exe, and having it install without any hassle and ... just work.
When Windows-only users touch a Linux system for the first time, they are nowadays able to install it without any problem, but afterwards they want to install software on their own: a DVD player, Google Earth, XnView and so on. And then a lot of them are completely shocked and return to Windows.
Sometimes a software company tries to avoid this problem by providing both shared and static RPM packages of its software, as Opera does, for example. But a Linux system built completely of such static packages would be a complete horror to maintain and update.
But many Windows users are accustomed to downloading a freeware program, double-clicking the exe, and having it install without any hassle and ... just work.
In many cases you can do the same on Linux. It's perfectly possible. In fact, it can be (and sometimes is) as easy as unzipping a file (like the "installation" of apps on Syllable) and running the executable just like that. Bygfoot Football Manager actually works that way.
Besides, any decent package manager these days handles dependencies transparently for the end user. Problems only arise when you mix official repositories with unofficial repositories (it's like using Windows security updates from a third party), and mixing repositories is doomed to go wrong unless there is very close collaboration.
Distributions like Gentoo are different, but maintaining such systems is not at all for ordinary or intermediate users.
Yes, but we should not forget the main disadvantage of the “Windows approach”, as can be seen when we look at Microsoft’s GDI+ security update “tool”:
https://www.microsoft.com/athome/security/update/bulletins/200409_jp…
The main point is the sentence in which Microsoft points out that you might have to install multiple updates from multiple locations, depending on the software you are using.
Multiple updates from multiple locations? What an “update hell”!
Yes, it might be possible to update all the different jpeg library versions (for example) that were installed with all those Windows graphics programs (Corel, Adobe, Paintshop and so on), provided there is an update for your particular version.
In real life, most Windows users are not aware of such security updates for the different versions of a single (graphics) library on their systems. And nobody talks about the fact that, seen this way, a lot of Windows installations remain unpatched even if they are regularly updated via Microsoft's update service.
In my opinion, the Linux approach is better.
Microsoft's GDI+ update tool wouldn't even run on my machine...
Besides, any decent package manager these days handles dependencies transparently for the end user. Problems only arise when you mix official repositories with unofficial repositories
The problem is, you often (or at least I did) have a need to do exactly that, and then things have a tendency to break.
>Sometimes a software company tries to avoid this problem by providing both shared and static RPM packages of its software, as Opera does, for example. But a Linux system built completely of such static packages would be a complete horror to maintain and update.<
I really have to disagree on this one. PC-BSD uses the static approach with its PBI packages, and it is a dream to maintain and update. I might also add that it is faster than most Linux distributions I have used and feels less bloated in general.
Now don't get me wrong: my main systems run Gentoo and Arch Linux and they work well, but the simplicity of the PC-BSD approach is hard to rival. Linux could learn a few things from it.
I usually agree that the pros and cons of Windows versus Linux (or "desktop PC *nixes", since I like to put FreeBSD in that category as well) pretty much cancel each other out. Windows is pretty good at some tasks (although I can't use it) and Linux is pretty good at others.
In package management, both ways are nice. Both have upsides and downsides. But I can’t understand why you say that “A Linux System completely built of static packages would be a complete horror to maintain and update”.
Please explain to me HOW it would be any different from a default Windows installation.
All a user would need is a distribution that tried the Windows approach: a default set of system libraries (that would be GTK+, Qt, the KDE libraries, the GNOME libraries and everything else needed for the basic desktop system the distro wants to ship) and a repository (oops, now we're doing more than Windows does, but that's how distributions work) of statically packaged applications.
And I really have a hard time figuring out why that would be any better or worse than what we have in Windows today.
A current Linux distribution may have ten graphics programs on its CDs, and all of them share one libtiff, one libjpeg and so on.
So if you need to update one of these libraries due to a security issue, you update only one package, not ten.
One selling point of the Windows approach would be the easy installation of free or proprietary software that was not included on the original CDs of the distribution, thereby avoiding "dependency hell". In that case, the ten libraries that originally came with the distribution might get updated, but not the copies bundled with the additionally installed software. And why should the publisher of the original distro be held responsible for updating "alien" packages and all the files those other applications have introduced into the system?
Such a Linux system could be insecure without you knowing it.
And if I have to choose between "update hell" and "dependency hell", I will take the latter, for security reasons.
Dependencies exist in other environments too. In Windows, they addressed the problem by simply including a copy of every dependent resource with a program (assuming the packager knew what they were doing).
This is sometimes good, because everything “just works”, but also bad because you can sometimes run software using an outdated version of a library that’s already been updated by MS.
The Linux crowd addressed the problem with several different dependency handling schemes, such as APT or urpmi repositories. In fact, this works exceptionally well, and today it is just as reliable as (and sometimes more reliable than) the approach popular on Windows.
There is indeed a DLL hell.
The biggest complaints about statically built apps are that the executables are bigger and harder to update. With hard drive space at $0.50 a gigabyte, the size question is becoming a non-issue if it isn't one already. As for updating one dependency to fix all apps at once: that is the only issue that would really need to be addressed for statically built apps, and it could be remedied if people put the time they currently spend on package managers into a good update manager instead. There is also the reverse problem with shared libraries: you upgrade a dependency and an older app stops working, because the new version loses some compatibility or because the original version relied on a non-standard hack (or took advantage of a bug). All in all, static builds solve more problems than they cause.
Other problems with dependencies include file system sprawl. I guess I am old-fashioned, but I like the old DOS method for apps: a directory with all the files needed to run, in that directory or in subdirectories of it (Mac OS X does this too, for the most part). This saves us from using ever more complex package managers to keep track of the code. And if you want to uninstall the app, you just delete the directory and it is gone.
This file system sprawl and dependency hell all came from people trying to fix the minor problems of statically built executables and single-location apps, but it ended up creating larger problems for the future.
These problems are extremely hard to explain to new users.
Because the common questions from new Linux users are...
1. How do I install this app? I downloaded it, but I can't install it.
1. A. Why can't I just click on it and have it installed?
2. Install failed. It says I need LibLFSDKLJdaskjfla.so. What the hell is that? Do I need it, and how do I install it?
2. A. Why can't the app just give me all the files it needs to work?
3. if (!Dependency.installed()) { goto 2; }
4. OK, the app is finally installed. Where is it?
4. A. What is /etc?
4. B. What is /usr/bin?
4. C. What is /var/*? (The stupidest place for distros to put files, other than log files and proc files!)
4. D. So which one do I run?
4. E. Why can't it just install an icon when it installs, so I can run the application?
A week later, you see them running Windows again. They say Linux is not for them, and they no longer trust your recommendations.
If users can't adapt to new circumstances, they are better off running Windows. It never was, is not, and should not be GNU/Linux's goal to run on every computer.
Leave GNU/Linux to those who can see the merit of it. Others are free to use something else that better fits their needs.
There are bigger problems with static linking:
* When you have static linking, security updates become harder. Instead of updating one library file, you update several dozen applications (assuming the vendors of the statically linked applications are diligent about shipping security updates).
* When you have static linking, programs take longer to load, since instead of using a library that is already in memory, or loading a dynamic library only when it is needed, the whole mass is loaded at the beginning.
* Related to the previous point, it takes up more RAM. Instead of sharing libraries, or loading them only if and when needed, everything is loaded again for each program.
What you are saying is partially true. For example, if several programs use the same dynamic library, you do have the potential to reduce memory usage and avoid reloading the library. BUT that assumes you are concurrently running software that uses that library. That is probably a safe assumption for libraries like GTK+, Qt and ncurses, yet it probably will not be the case for something like Motif, where any given person may use just one or two Motif applications. Also, it is my meager understanding that Linux will not load a page into memory unless that memory location is accessed, so large static binaries should not make much of a difference in terms of RAM consumption or application launch times.
In reality, any application should probably use a combination of statically and dynamically linked libraries. The developer should statically link a library that is small or uncommon, simply to reduce the likelihood of dependency hell, and dynamically link a library that is large and common, so that the user can reap the rewards of dynamic linking in the cases where those rewards actually exist. Unfortunately, most developers and packagers either don't want to put that much thought into the decision, or they make it on pure ideology.
I think there are two problems here: technological and conceptual.
The technological problem is that if you distribute programs with all their dependent libraries, you install the same library many times, which can potentially lead to space and efficiency problems. The Operating System of the Future (TM) will notice that and store only one copy on disk and load only one copy into memory. Similarly, smart installers will not download libraries that are already present; they will just confirm the hashes, as rsync does. We are not there yet, but at least the direction is known.
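As a rough illustration of that deduplication idea (a sketch only, not how any real installer or file system does it), a program could hash each bundled library and keep a single copy when the contents match. The file paths and the simple FNV-1a hash below are chosen purely for the example:

    /* Minimal sketch: detect duplicate copies of a library by content hash.
     * FNV-1a is used only for illustration; a real installer would use a
     * cryptographic hash. The paths below are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t fnv1a_file(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;               /* FNV prime */
        }
        fclose(f);
        return h;
    }

    int main(void)
    {
        /* Two bundled copies of the "same" library (hypothetical paths). */
        uint64_t a = fnv1a_file("/opt/app1/libfoo.so.1");
        uint64_t b = fnv1a_file("/opt/app2/libfoo.so.1");
        if (a && a == b)
            puts("Identical content: store one copy, link the other.");
        else
            puts("Different content (or missing): keep both.");
        return 0;
    }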
The conceptual problem is more difficult. What do you do if you have an application that uses a library, and the library has just been updated? Do you update, risking breaking the application, or not, risking missing a security patch or a bug-fix? In other words, do you want the application maintainers to resolve library problems for you, or do you think their time is better spent on the application itself? If two applications use the same library, do you want to upgrade it in one go, or in two steps, keeping two versions for a while? These, I believe, are the real problems, and the answers will vary from application to application. The ultimate solution might be a package management system that seamlessly supports both models.
The technological problem is that if you distribute programs with all their dependent libraries, you install the same library many times, which can potentially lead to space and efficiency problems. The Operating System of the Future (TM) will notice that and store only one copy on disk and load only one copy into memory
Your OS of the Future (TM) is a Linux distro, which is available now. They already do everything you describe, without the design problems.
Similarly, smart installers will not download libraries that are already present; they will just confirm the hashes, as rsync does. We are not there yet, but at least the direction is known
We are there already, and there is no need for a hash, which would be counterproductive.
Currently, distros sign packages to avoid trojans, and produce one generic version of each library per architecture, and sometimes optimized builds for sub-architectures.
The conceptual problem is more difficult. What do you do if you have an application that uses a library, and the library has just been updated? Do you update, risking breaking the application, or not, risking missing a security patch or a bug-fix?
This one was solved before I even knew what Unix was: you update, and if it doesn't work, you revert to the old version.
Only on Windows is this kind of thing not easy, as you are never sure you will be able to go back properly.
In other words, do you want the application maintainers to resolve library problems for you, or do you think their time is better spent on the application itself?
The library is part of the application... The maintainer already resolves library problems for you, which is time spent on the app.
If two applications use the same library, do you want to upgrade it in one go, or in two steps, keeping two versions for a while? These, I believe, are the real problems, and the answers will vary from application to application. The ultimate solution might be a package management system that seamlessly supports both models
That's BS. You can do both on a Linux OS. I can update libraries on my system, and remove the old library before installing the new one, after it, or not at all, and the result is generally the same: the apps will continue to run just fine with the old library in memory. Then, once restarted, they will pick up the new library: no problem at all.
This is no problem at all and will not vary. What will vary is the behaviour: Apache won't have problems, KDE will stay up for several days, then some apps will start behaving strangely or crashing (yes, I have already updated a complete KDE while it was running).
I actually update my apps and libraries all the time while they are running, and have yet to hit any problem. Only a glibc update is sure to break your machine in less than 24 hours, if you remove the old one.
Static linking may free one from dependency hell but only until they need to update everything and suddenly find themselves in ordering hell.
1. How do I install this app? I downloaded it, but I can't install it.
In general, the app is either already in your configured repository or comes with an installer. If it isn't, then it is likely so cutting-edge that you don't want to install it unless you know what you are doing. Open source software is produced in the open, so you can get your hands on it much earlier in the process.
2. Install failed. It says I need LibLFSDKLJdaskjfla.so. What the hell is that? Do I need it, and how do I install it?
System installers manage this these days: "you also need X, Y and Z, which I am going to install". That's what automatic dependency resolution is for.
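To make "automatic dependency resolution" concrete, here is a toy sketch of the idea in C. The package names and the flat table are invented for the example; real tools such as APT resolve against a signed repository index and also handle versions and conflicts:

    /* Toy dependency resolver: installs a package's dependencies first,
     * depth-first, skipping anything already installed. Data is invented. */
    #include <stdio.h>
    #include <string.h>

    #define MAXDEP 4

    struct pkg {
        const char *name;
        const char *deps[MAXDEP];   /* NULL-terminated list */
        int installed;
    };

    static struct pkg pkgs[] = {
        { "xnview",  { "libjpeg", "libtiff", NULL }, 0 },
        { "libjpeg", { "libc", NULL },               0 },
        { "libtiff", { "libjpeg", "libc", NULL },    0 },
        { "libc",    { NULL },                       1 },  /* already present */
    };

    static struct pkg *find(const char *name)
    {
        for (size_t i = 0; i < sizeof(pkgs) / sizeof(pkgs[0]); i++)
            if (strcmp(pkgs[i].name, name) == 0)
                return &pkgs[i];
        return NULL;
    }

    static void install(const char *name)
    {
        struct pkg *p = find(name);
        if (!p || p->installed)
            return;
        p->installed = 1;                     /* mark early to tolerate cycles */
        for (int i = 0; i < MAXDEP && p->deps[i]; i++)
            install(p->deps[i]);              /* dependencies go in first */
        printf("installing %s\n", p->name);
    }

    int main(void)
    {
        install("xnview");
        return 0;
    }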
4. OK, the app is finally installed. Where is it?
This is my personal bugbear: who cares? Why should you care where an app is installed? All I need to know is where to find it on the menu. Its physical location is for the system to look after. I'm interested in two things, installing a package and removing it. As a user, why should I babysit this?
Why should a logically structured system, where you can learn where to find the installed bits of applications if you need to, compare negatively to a system where you've got an inconsistently named application stored under C:\Program Files? Damn, which company made that program I just installed again? Is it under C:\Program Files\Bobs Widgets or C:\Program Files\Berts Nu Software Corp?
4. A. What is /etc?
system wide configuration
4. B. What is /usr/bin?
user accessible executables managed by the system
4. C. What is /var/*? (The stupidest place for distros to put files, other than log files and proc files!)
Variable files. What is stupid about it? And more importantly, why should you care about it?
4. D. So which one do I run?
The one logically positioned on the menu. Or just type the program name in a run dialog box. Or at the command line.
4. E. Why can't it just install an icon when it installs, so I can run the application?
Install an icon? Just drag it from the menu to the desktop. I have had commercial apps drop icons on my desktop when I've installed them.
The thing I find most amusing is people who look at a logically laid out file system, compare it to something like the mess that is the Windows C: drive, and claim the latter is superior. Or who claim that installation of programs on Windows is something to aim for. We had a new laptop set up last week, and the Compusys engineer said a Visual Studio install would take about an hour and fifteen minutes. That long for one program?! Eclipse takes a few minutes on Ubuntu. The whole OS can be installed quicker than that.
@r_a_trip
There is no such thing as a “GNU/Linux goal”. There are different goals for different Linux distributions, and some of them do have the goal of running on every computer. Ubuntu does, Mandriva does, and so does openSUSE.
There is no such thing as a “GNU/Linux goal”.
True, but that is not how an outsider approaches GNU/Linux. Outsiders mostly come from the Windows world, where the One OS, One Browser, One Media Player paradigm rules the roost. They try to push that model onto the myriad different styles, goals and ways of GNU/Linux.
In the current zeal to make Windows and Microsoft vanish, some factions are willing to make every sacrifice imaginable to put GNU/Linux in the position of sole Windows replacement. I don’t believe that to be a viable plan. It would hurt GNU/Linux more than anything else.
I'd like to see Microsoft go through the same process IBM went through: from an arrogant 800-pound gorilla to a 400-pound gorilla with manners. Coexistence instead of mutual annihilation.
Yes, I know that there are at least two different types of dependency in the Linux world — but I am going to focus on programs that link against external libraries.
There appear to be a few myths about libraries that are spread around by computer science types. And there were also a few myths added by this article. Let’s look at the latter first:
1. Windows programs don't have external dependencies. Of course they have them. Windows developers simply know which components are part of the operating system and which are likely to be part of it in the future (e.g. .NET). Windows developers also use libraries developed by third parties; they just bundle the dynamic libraries with their software or use static linking. A well-behaved application won't install those extra dynamic libraries as part of the system.
2. Dynamic libraries save disk space and memory. This is only true if a large number of programs use the same libraries. And I do mean a large number, because dynamic libraries are usually designed as general purpose libraries. That means that a lot of the code in the library will not be used by a given application. When you are using static linking, the linker can frequently figure out which code from a library will not be used and discard it (see the sketch after this list). So it is not as though a statically linked program has to be excessively large.
3. Dynamic libraries can be updated to fix bugs and add security patches. If there is a known bug in a library, third-party developers will usually write their own code to work around it, because they cannot necessarily wait for a bug fix. A Linux box will frequently have several versions of the same library because of incompatibilities, and those old versions are rarely updated with bug or security fixes anyway. So this effect is limited.
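To make the dead-code point in item 2 concrete, here is a trivial C program with the relevant GCC invocations shown in its comments. The flags are standard GCC/binutils options, but the exact size savings depend on the toolchain, so treat it as an illustration only:

    /* tiny.c - uses a single symbol from libm; most of the library's code
     * is never referenced, so a static link can drop it.
     *
     * Dynamic link (shares libm.so with every other program on the box):
     *     gcc tiny.c -o tiny -lm
     *
     * Static link with per-function sections so the linker can discard
     * unused code:
     *     gcc -ffunction-sections -fdata-sections tiny.c -o tiny \
     *         -static -lm -Wl,--gc-sections
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        printf("sqrt(2) = %f\n", sqrt(2.0));
        return 0;
    }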
Yes, there are times when dynamic linking is good. Core operating system components with a stable API are one such case, because you can make extensive use of the shared code and you can ship updates without breaking things. That, incidentally, is what happens in the Windows world (for the most part). The Linux world doesn't fit that model very well, which is why Linux is known for dependency hell.
I can think of two "quick" fixes for the problems you outline in points 2 and 3.
2. Dynamic libraries are "bloated".
Then redesign how shared code is handled by the OS: just load the parts that are actually used.
3. Packages are too tightly coupled to specific versions of libraries.
This is a problem that has been solved for ages. Use design patterns to minimize coupling; for example, make a package depend on interfaces instead of on other packages (a sketch of what that can look like follows below).
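In C, "depending on an interface" might look something like the following sketch: the application codes against a small vtable-style struct that it defines itself, and any concrete library can be adapted behind it without touching application code. All the names here are invented for the example:

    /* Hypothetical example: the app depends only on this interface it owns,
     * not on any particular compression library's headers or version. */
    #include <stdio.h>
    #include <stddef.h>
    #include <string.h>

    struct codec {
        const char *name;
        size_t (*encode)(const char *in, size_t len, char *out, size_t cap);
    };

    /* One trivial adapter; a zlib- or bzip2-backed adapter would fill the
     * same struct, so swapping libraries never touches application code. */
    static size_t copy_encode(const char *in, size_t len, char *out, size_t cap)
    {
        size_t n = len < cap ? len : cap;
        memcpy(out, in, n);
        return n;
    }

    static const struct codec plain_codec = { "copy", copy_encode };

    static void app_save(const struct codec *c, const char *data)
    {
        char buf[64];
        size_t n = c->encode(data, strlen(data), buf, sizeof(buf));
        printf("saved %zu bytes via %s codec\n", n, c->name);
    }

    int main(void)
    {
        app_save(&plain_codec, "hello");
        return 0;
    }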
Too bad I cannot mod you up higher than 5.
Excellent post!
1. Windows programs don't have external dependencies. Of course they have them. Windows developers simply know which components are part of the operating system and which are likely to be part of it in the future (e.g. .NET). Windows developers also use libraries developed by third parties; they just bundle the dynamic libraries with their software or use static linking. A well-behaved application won't install those extra dynamic libraries as part of the system
This is plain wrong. DLL hell is a problem even on the latest Windows. That's one of the main reasons why people install only one app per Windows server (or at least several programs from only one vendor, MS for example).
That's also one of the reasons why Windows breaks beyond recognition when you install too many programs on it, even if you uninstall them afterwards.
2. Dynamic libraries save disk space and memory. This is only true if a large number of programs use the same libraries
BS. This is true as soon as two programs use the same library, which means nearly always.
And I do mean a large number, because dynamic libraries are usually designed as general purpose libraries. That means that a lot of the code in the library will not be used by a given application
Wow. Where did you get such nonsense? Not all libraries are libc; actually, very few are. Most libraries are mostly used; not everyone writes generic game engines.
The code that is not used will simply be swapped out, so there is no problem at all. Lazy loading is efficient too.
When you are using static linking, the linker can frequently figure out which code from a library will not be used and discard it. So it is not as though a statically linked program has to be excessively large
But then it has to be loaded into memory in its entirety, with no lazy loading. And actually, even with GCC, you can do that with dynamic libraries too.
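For the fully explicit form of lazy loading, dlopen()/dlsym() let a program map a shared library only when a code path actually needs it. A minimal sketch, using libm purely as a stand-in for some optional dependency:

    /* lazy.c - the shared library is only mapped when this code runs.
     * Build with:  gcc lazy.c -o lazy -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Open by soname, not by a full versioned path. */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        double (*my_cos)(double) = (double (*)(double))dlsym(lib, "cos");
        if (my_cos)
            printf("cos(0) = %f\n", my_cos(0.0));
        dlclose(lib);
        return 0;
    }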
3. Dynamic libraries can be updated to fix bugs and add security patches. If there is a known bug in a library, third-party developers will usually write their own code to work around it, because they cannot necessarily wait for a bug fix
Which means every one of them will have to do it, and you will have to update every one of those programs. Updating the library means only one package has to be updated.
To see the magnitude of the problem, just look at zlib: a huge number of apps depend on it. If I statically compiled my apps against it, updating would be a nightmare.
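To illustrate with zlib itself, here is a minimal program linked against the shared library in the usual way. Every application built like this picks up a fixed libz.so.1 the next time it starts, with no rebuild; only a statically linked copy would need to be rebuilt (a sketch for illustration, not tied to any particular zlib advisory):

    /* zdemo.c - build with:  gcc zdemo.c -o zdemo -lz
     * The binary records a dependency on libz's soname, so a security
     * update to the distro's zlib package reaches it automatically. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *msg = "dependency hell";
        Bytef out[128];
        uLongf outlen = sizeof(out);

        printf("zlib at runtime: %s (built against headers %s)\n",
               zlibVersion(), ZLIB_VERSION);

        if (compress(out, &outlen, (const Bytef *)msg, strlen(msg)) == Z_OK)
            printf("compressed %zu input bytes into %lu output bytes\n",
                   strlen(msg), (unsigned long)outlen);
        return 0;
    }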
A Linux box will frequently have several versions of the same library because of incompatibilities, and those old versions are rarely updated with bug or security fixes anyway. So this effect is limited
This must be the worst thing you said. A Linux box will NOT have several versions of the same libraries because of incompatibilities; that's plain false.
There are a few well-known libraries like that (Berkeley DB, libstdc++, OpenSSL, ...). At worst, these just need a recompilation of the apps that depend on them.
Besides, this has NOTHING to do with security updates.
A security update to a library will not be incompatible with the previous, insecure version (even the latest security updates to OpenSSL didn't require any recompilation of dependent apps).
And the old version is updated to a new version, which all the apps then pick up automagically; ELF is designed for that.
I compile everything from source on my system, and the only extra version of a library I keep around is for a closed source app (libstdc++ for the Real codec).
And for an example of things that would instantly break with static compilation: all KDE apps, and more generally all C++ apps, and they would be HUGE if statically compiled. Whereas right now, even mp3kult still works under the latest KDE.
Yes, there are times when dynamic linking is good. Core operating system components with a stable API are one such case, because you can make extensive use of the shared code and you can ship updates without breaking things. That, incidentally, is what happens in the Windows world (for the most part). The Linux world doesn't fit that model very well, which is why Linux is known for dependency hell
So much FUD it's unbelievable. Linux systems are heavily dynamically linked, and it works far better than in the Windows world. A Linux OS doesn't fubar after six months of use, servers install tons of apps without problems too, you don't need a reboot to install apps... And package management solves all the other update problems.
The following criticism of “non-free” software is downright dumb.
Thus, lots of effort is put into "reinventing the wheel", that is, reimplementing tools that are already widely available under free licences.
Most developers on Windows have the option of buying royalty-free controls that will do almost anything you like: FTP, graphing, math, etc. They are inexpensive and easy to distribute (especially if you are using the .NET Framework). Or they can find free examples by the tens of thousands.
We have the same option in open source land, except that we don't have to pay for a solution that may cease to exist tomorrow.
We have the same option in open source land, except that we don't have to pay for a solution that may cease to exist tomorrow.
I think the number of abandoned open source projects outnumbers the successful ones several thousand to one.
The difference with FL/OSS is that abandoned projects are abandoned on merit, not on the economic viability of the software creator.
With great proprietary software, it can vanish just because the company making it is better at writing software than at managing a business.
The difference with FL/OSS is that abandoned projects are abandoned on merit, not on the economic viability of the software creator.
Boredom … not cool enough … etc etc is more likely.
Then there is the situation where OSS projects fork into "open" and "not open", and new development continues on the "not open" side. Red Hat is an example; Sendmail is another. I believe Snort now has a proprietary arm called Sourcefire. And hasn't the same happened with Nessus/Tenable?
And those are the few I know about. I’m sure there are lots more.
With great proprietary software, it can vanish just because the company making it is better at writing software than at managing a business.
It could. But it tends to happen a lot less often since good projects do tend to make money and developers get paid regularly.
We've suffered enough from proprietary component/DB/toolchain vendors. Things like:
– Company going down
– Moving features from “standard” to “enterprise” products
– Changing direction without providing support for those wanting to continue with the old one
– Adding restrictions in newer versions
A few years ago we said "enough". Our entire toolchain is now based on FOSS. Not one of these tools has failed us.
It’s not a problem. Sources are available if needed. If an open source project dies, it’s because nobody cares about it. Therefore it doesn’t matter if it dies.
In OSS, by modularizing a portion of an application and making it a new project, you can, from a selfish perspective, get more developers effectively working on your application: they use the library in their own programs, yet need to improve it in a few ways.
If you compare a production enterprise application or a desktop application: most vendors will help with upgrade paths on both, so that you can export and move your data. Many times it is painless and easy to do.
Now, as for desktop upgrades, I've kept the same applications across Windows installs over the years by keeping them in C:\apps, \net, \gfx and \sound, and installing Windows/Office/big applications on D:. I also use registry tools to save each application's settings/serial/updates as a .reg file in the application directory, so I can just click the .reg file and re-add it. The same approach has been used in IT since '95.
You just have to know the process and the tools, and have a better than basic understanding of Windows, to deal with updates. So bashing Windows for dependencies could come down to the skill set of the user.
As for Linux, I've had my fair share of broken libraries, missing headers and wrong versions of applications, enough that I can't call it anything but standard. And if you need an older version of a distro for vendor support, don't even try to update via a repository; you are stuck with what they released, plus the bug-fix releases of the time. CentOS 4.4 and Bugzilla is a good example: you can't install it from RPM unless you add Fedora 4 or 5 packages, which ruins the idea of running CentOS 4.4. At least Ubuntu is now certified by IBM, Oracle, <insert major vendor>, so you can have an up-to-date GNU toolset and Perl modules without installing everything by hand.
It's getting better. The registry is the biggest POS in Windows, but most third-party applications use it heavily, so upgrading is easier. I've got my system down: I can reinstall Windows/Vista/NT and not have to reinstall my non-Microsoft applications.
Microsoft, on the other hand, does stupid shit like putting Office icons in the system32\installer directory. What a major clusterf–k when you have to start moving things around. An install directory should contain all the files and all the libraries, not mix and match...
Maybe someone who has used more than one OS should work at Microsoft and teach these guys how to write an installer...
“Closed-source, non-free software is usually developed within a single software company. The whole code base is developed from scratch, since non-free software cannot use GPL-ed software by default. Thus, lots of effort is put into ‘reinventing the wheel’, that is, reimplementing tools that are already widely available under free licences.”
Has the author worked on any proprietary software project? We use external libraries all the time. We just pay for and license them.
There are the ones provided by the development framework (.NET, MFC...). In a modern Windows system, these MS dependencies are included with the OS.
There are other more nonstandard libraries that are used all the time. Typically, an application installs these privately.
“The lack of source code usually comes with the lack of a friendly API for external programmers and a lack of documentation (there are exceptions to this rule, of course).”
The opposite is true. Since there is no source code, these external libraries have to include great APIs and documentation. I mean, they're selling them; why would we pay for a library that's going to be hard to use?
The article really fails in its attempt to link dependency issues to the free/closed software model.
@Yamin
I do develop closed source software for a living (unfortunately), and I know how it works. I did mention that the alternative to developing everything from scratch is buying boxed proprietary libraries or using software published under liberal licences like BSD. You probably haven't read the whole article, I guess.
When I talked about proprietary software being harder to cooperate with, I didn't mean the closed source libraries that you can buy; it's obvious that those need a friendly API so that anyone will buy them!
What I referred to are the normal desktop apps you use every day, like MS Office, Photoshop, iTunes and such. It's much harder to write free software that cooperates with them than with OpenOffice.org, GIMP or amaroK.
@Michuk,
No, I read the article. But again, why rant about having to write things from scratch? It just doesn't happen that way. What is your point in filing this 'issue' under the non-free development model and not under the free development model?
As I said, the attempt to link dependency issues to open/closed source just doesn't work, because it's not true.
Now, if you start talking about writing interoperable applications (like Office...), that's a whole other issue, and I don't think many people are going to read the article that way.
But even here, a lot of applications have a plugin architecture which lets people easily write applications and add-ons. Lock-in issues (like non-standard file formats and communication channels...) are harder to handle with closed source applications. But like I said, the article doesn't read as if that is the point.
@Michuk,
I don’t really know why you’re trying to combine issues of dependency hell with non-free/free software. It’s not gelling.
Does non-free software use libraries?
Yes, all the time.
Does free software use external libraries?
Yes, all the time.
That said, in your last post you speak about things more on the application level, so I reread the article, ignoring the comments on closed source.
Now I can kind of see the point. Open source apps are more likely to make nice "libraries" because they anticipate, and want, their code being used by others; the Gecko rendering engine is an example. That seems reasonable enough, though I'm not sure how true it is in general.
That said, the IE rendering engine is there as well, and there are several browsers based on IE. Who uses them, nobody knows.
Does non-free software use libraries?
Yes, all the time.
Does free software use external libraries?
Yes, all the time.
Sure, I agree.
My point was (at least one of my points) that OSS libraries are usually independent and usable by everyone. They are also installed as independent packages in the system (just like the GTK framework on Windows), which makes them easy to manage (update, uninstall, replace, etc.).
Closed-source dependencies are usually bundled with the proprietary software, so you don't really care whether they are external dependencies or just part of the software.
But you are right: I probably could have made these points better in the article. The main idea was just to describe the way software is created in general and to explain the reasons for the so-called "dependency hell" that some Linux users might experience (though it is also mostly a relic of the past now).
Dependencies – the polite way of saying spaghetti code, which is literally ALL it is.
There is almost a ... fear among modern programmers of having to WRITE anything, relying on libraries atop libraries atop more libraries, resulting in bloat upon bloat upon bloat. This is the reason Linux and most other open source operating systems are rapidly approaching a level of bloat that makes XP look outright clean, and it is a problem that could probably have been avoided in the first place if the native "APIs" (and I use that term in the loosest sense) weren't borderline unusable (yes, X11, I'm looking at YOU). GTK and Qt probably wouldn't even need to EXIST if the X11 API weren't nigh impossible for the average programmer to make head or tail of.
There's a feeling of hack atop hack atop hack for the barest functionality in open source operating systems, which comes from the library atop library atop library you see in all but the simplest applications, and which really destroys any sense of any of it being well written, much less well integrated.