Recently posted on Slashdot, an article by Joel Spolsky described the trouble Microsoft went through to make each version of Windows backwards compatible. In one case, for the game SimCity, they even changed the way memory handling was done when running that application. You can find additional stories on the web of software tricks that recent versions of Windows have to perform in order to run these bug-dependent applications. After reading the story, I discussed with a couple of friends how weird this was and how Free Software completely avoids this problem.
I was still amazed that there are special hacks in Windows for certain applications. I had seen the compatibility settings in the property dialogs of executable files when using Windows, but I never thought that the effects could run so deep. These hacks must introduce weaknesses into the system. Then it came to me that in the case of SimCity there might even be a huge problem. Here is my theory; be warned that I am unsure of the following.
If SimCity requires a "freed" pointer to be referenced (see Joel Spolsky's previously mentioned article), you could write an application that exploits this fact and name the executable simcity.exe. There are hundreds (if not thousands) of applications supported by Windows XP dating back to Windows 95, and Windows XP must thus contain very special hacks for them to run. Does this mean that Windows XP is riddled with holes because of these applications? Will MS remove all the hacks, which cost them many hours of work, to make XP more secure, thereby breaking these applications? How would its customers react? Being used to backwards compatibility, customers will blame Windows for not running their favorite application.
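To make the theory concrete, here is a minimal sketch of the kind of bug-dependence the article describes; the program and its behaviour are hypothetical and only illustrate what "relying on a freed pointer" means:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *city_name = malloc(32);
    if (!city_name)
        return 1;
    strcpy(city_name, "Springfield");
    free(city_name);

    /* Use after free: undefined behaviour. On an allocator that does not
       immediately recycle or poison the freed block, this "works", and a
       program written this way becomes dependent on that accident. A
       compatibility hack that keeps such a program running also keeps the
       unsafe memory pattern alive for anything else that names itself the
       same way. */
    printf("Loaded city: %s\n", city_name);
    return 0;
}
```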
Maybe I am wrong and there are no possible security breaches due to the compatibility hacks. It would be interesting to investigate this further. But if this is true…
Security Nightmare
What a security nightmare Microsoft created for itself by supporting bug-dependent software!
In the closed source world, this is a major unavoidable problem. When a bugfix in software A breaks a dependent application B, the available options are few: either you hope the developers of the broken application B fix their bug, or software A has to have a workaround added to support application B. Why couldn't you simply rely on a fix from the developers of the broken applications? Well, there are many reasons:
– The company that created the application could be out of business;
– The company could ask for funds to fix the bug; after all, their application worked with the previous version of your software;
– The company might not have the resources to fix the bug; their developers could be busy developing some other software;
– The company could have abandoned the application and doesn't want to put resources into it anymore.
The only thing left to do is to make workarounds in software A. However, all the time, money and work invested in the workarounds is also a source of security holes in software A!
Well, apparently SP2 for Windows XP will remove support for some old applications to improve security. Kind of makes you wonder: Microsoft first spent an enormous amount of resources building support for bug-dependent applications, and now it finds itself spending an enormous amount of resources removing that support. Not only that, but this compatibility reduction doesn't seem to be well received by Microsoft's customers.
This is unimaginable in the Free Software world. Because of the availability of source code, it can be updated not only by the original authors of the software but also by others. This means that if the authors are unavailable for whatever reason, someone else could fix application B in order to make it work with a newer version of software A.
Let's say that a program B which depends on A is broken by an update to A. I see two types of incompatibility (a small code sketch follows this list):
– binary incompatibility: this means that a simple recompile of B will fix the problem.
– source level incompatibility: the developers of B will be notified of the bug and can fix it. If the developers of B cannot, for some reason, fix the bug, someone else could pick up the source code and fix the problem.
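As a rough illustration, here is what the two cases can look like from the point of view of a hypothetical C library A and a program B built against it; the names, fields and version history are made up, and the later versions are shown only for comparison:

```c
/* The header of library A as program B saw it when B was built. */
struct a_config { int verbose; };
int a_run(struct a_config *cfg);

#if 0 /* Later, hypothetical versions of A, shown for comparison only. */

/* Binary incompatibility: A's struct grows a field but the function keeps
   its name and arguments. B's source still compiles unchanged; only the old
   B binary, built for the smaller struct, misbehaves. Recompiling B fixes it. */
struct a_config { int verbose; int log_level; };
int a_run(struct a_config *cfg);

/* Source level incompatibility: A's entry point itself changes. B no longer
   even compiles, so someone has to edit B's source -- which anyone can do,
   because the source is available. */
int a_run_ex(struct a_config *cfg, const char *log_path);

#endif
```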
If my company depended on software B and the bug fixed in A improved security, I wouldn't be forced to sacrifice one or the other; I'd be *certain* that both could still work together, even if it meant I had to pay a programmer for a couple of hours of work. As an example, instead of creating holes in Windows, Microsoft could have spent all those resources on improving SimCity and the many other bug-dependent applications Windows has to support. You could say it's not their responsibility to improve SimCity, but why should they spend time and money making hacks in Windows, thus making it vulnerable, instead of improving the general state of software?
An additional reason to adopt Free Software: you can avoid creating security holes in order to be backwards compatible.
“Emulation” to the Rescue
Could emulation be the solution to supporting old bug-dependent applications without compromising security? DOSEMU seems to be able to run SimCity, and Wine does a pretty good job of running most Windows applications. I remember in the days of OS/2 that it ran Win3.x applications better than Win3.x did. Do these programs introduce the same kind of weaknesses into the system as the hacks in the latest Windows do?
Of course, the best way to get rid of this compatibility nightmare is to write a Free version of the software you need.
About the Author
Stefan Michalowski is a computer enthusiast who likes to spread the Free Software word. Preaching by example, he has been using multiple flavours of GNU/Linux since 1995.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
If you ask me, the backwards compatibility of Windows is plain great, and a huge contributor to its success. The hacks for bug-dependent programs are of course very questionable, though.
I wish open source would care more about binary compatibility; these days you're lucky if a program compiled on distro x version y runs on distro x version y+-1, let alone distro z. Please remember not every program and system is open source. While making multi-distro/version binaries is doable, it's a painful task, way too painful. LSB helps in some areas, but you need an LSB compilation environment, and far, far too few libraries are standardized yet; static linking doesn't even work for many libraries.
Most of the people at my company do Windows programming. They set up a project in VC++, push a compile button, and the resulting program runs on at least WinNT through WinXP. Should we port any of these to Linux, and support the multiple distros and versions, we'd be out of time to do real work.
People, please care about ABIs.
I agree, the compatibility with older software on Windows is just amazing.
My dear anonymous friends, you could at least have tried to counter the author's arguments instead of telling us how great Windows is and spreading FUD about Linux.
Last I checked you could simply install mozilla, opera, realplayer … on any linux distro so your ranting is rather pointless.
And no, you don’t have to agree with the author but that doesn’t mean that simply ignoring his arguments is very convincing.
“Last I checked you could simply install mozilla, opera, realplayer … on any linux distro so your ranting is rather pointless. ”
Yeah, binary compatibility is certainly *possible* on Linux. But the fact that you could name only 3 (three) projects (let's not forget OOo!) shows that this is an exception, not the rule. Take a typical Linux distro and look at all the software it contains. How many of these programs can you download from the developer's site in binary form so that they would work on any distro? Perhaps something like 0,005%?
Now, Linux is not to blame here. It's just that most OSS developers don't care about binary compatibility and tend to make their creations depend on the latest and greatest lib versions. That's OK, that's their right, that's the way it's done in the free software world, but the fact remains: binary compatibility is rare here. Even autopackage (when it's ready) by itself won't be able to change that developer attitude. And probably nothing will.
Those were just examples to show that the claims about closed source software on Linux are simply not true.
Now you are addressing another problem, one that affects mainly open source programs. Whether what you describe is indeed a problem, has been solved for a long time now by good package managers, or desperately needs another approach (like autopackage) has been discussed at great length here and on various other sites.
But still, what our anonymous friend claimed, that his company would have to support 656 different versions if it produced Linux software, is simply wrong. That's all I wanted to point out.
Btw.: acrobat-reader, crossover-office, winex (or whatever it is called today), textmaker… ;-D
Let’s not forget SUN Java and Flash plugin. Hm, shall we start counting?
"Take a typical Linux distro and look at all the software it contains. How many of these programs can you download from the developer's site in binary form so that they would work on any distro? Perhaps something like 0,005%?"
Not many, typically only proprietary software. The reason is that downloading software in a binary form which would fit all distros is not the idea on Linux; it's the other way around. The developers release the source, and packagers release distro-tuned packages. I'm seeing this work great right now on my Mandrake system, utilizing urpmi and a couple of repositories.
Of course it doesn't work that easily with proprietary software on Linux. The registered version of Mandrake has got most of them in RPM (I believe?), but I haven't tried this. It also depends on a good relationship between the companies behind the software and the distro, which will of course not always be the case.
Anyhow, developers of binary-only apps for Linux should be cleverer and provide packages for different distros and CPUs. Typically, you'll find Red Hat and Debian, for x86 only. This does not take advantage of Linux at all, and shrinks it to an everyday desktop OS, with a thought-of de facto standard of Red Hat. Times change, all the time, so it would be much better to have packages for, say, 10 different distros, on different CPUs. But of course, with compatibility, you'll only get as far as the number of distros you provide packages for, when you do binary-only software for Linux.
Dunno, I have just had great luck with urpmi and the OSS software found there. It works much better than downloading an installer for Win98/NT/XP, which after installing may not show up in "Add/Remove Programs", and may leave me without an uninstaller. Urpmi has much better control of the system, and knows how to get rid of things properly too. For most users, having the files go through Mandrake, or other trusted packagers, is good, because they have no reason to leave files for advertising on your computer after uninstalling the app.
The example with the Windows installer was from the demo version of Fruityloops, btw. Does not show up in Add/Remove, and no uninstaller. Puke.
PS: Have had good luck with compiling also, at least after getting rid of dependencies and figuring out which devel packages to install ;]
"Not many, typically only proprietary software. The reason is that downloading software in a binary form which would fit all distros is not the idea on Linux; it's the other way around. The developers release the source, and packagers release distro-tuned packages."
That might (just) work now, when there are not many programs for Linux. But what happens in a few years’ time, when there may be ten times as many, some of them very obscure?
What I keep wondering is: is the "free" way really better?
From a helicopter perspective, the author is right: Binary incompatibilities can be resolved with a recompile, source incompatibilities can be resolved by adapting the (freely available) source code.
But both ways require me to take action: Recompiling, finding a newer version, patching around in sources. And we’re not speaking of my projects, and not only about A and B, but we’re speaking of hundreds of packages maintained by hundreds of people in dozens of different versions.
Once you've got a "distro" where every part works with every other, one update can result in a cascade of maintenance because it breaks other subsystems. Yes, I know, there are packages and package-handling software, but quite too often, those fall just short of doing "the thing" for you. More often than not, I had to take manual action – because the ebuild for Subversion wasn't yet up to 1.0, because the Red Hat db4 RPM was compiled with an option that hiccuped on my server…
I agree that the “free” way is an alternative, and that it might appeal to geeks. But it does introduce additional work at the wrong end – the user end.
And as soon as we’re speaking about productive environments, it becomes even worse. Rolling out new software, here in the company I work for, takes weeks. There’s no such thing as a quick recompile, or touching the sources to make things work.
Even worse, sometimes there’s this one piece of non-free software that just doesn’t cooperate… I know, some people believe that anything not “free” is evil, but there’s evil in this world and we have to cope with it.
I don't want to belittle the "free" way, but it has its pitfalls.
“Yeah, binary compatibility is certainly *possible* on Linux. (…) this is an exception, not a rule”
That's because, while possible, it's not all that desirable. Not needing binary compatibility is cheaper and faster, and requires fewer checks. Those who need it for their own purposes can achieve it, but will be at a disadvantage.
Closed Way
You either get a hack from company A or company B because it's fast and cheap. As long as it works, right? A and B may argue it out to figure out who actually has to fix it. You have to find the update yourself, checking with both company A and company B. You have to download it. You have to click the installer. This fix may or may not be tested, but it's here.
Open Way
Someone notices the bug, it gets submitted, and a week or so later an update appears in a package repository. You click update (or use the command line, whatever). It gets downloaded and installed. Now, in all likelihood, this is a proper fix; it may not be heavily tested, but it will be working.
Well, I think binaries aren't a problem. I have often downloaded RPMs or DEBs, converted them to tgz and installed them manually without anything complaining. When I have made some project in Glade, it has worked on all my friends' Linux computers, and we do not run the same distros: Gentoo, Red Hat, Debian, Mandrake, etc.
> My dear anonymous friends, you could at least have tried to counter the author's arguments instead of telling us how great Windows is and spreading FUD about Linux.
If you read what I wrote (my first comment), you'd see I talked mostly from a developer's view. I mentioned that it certainly is doable, it's just _plain hard_.
The free way is, at the moment, the better way, I think. Let's look at it this way: Microsoft just recently stopped (or will stop) support for Windows 98 and NT 4 at the consumer level. That makes 6 years of support, and many pieces of newly produced software still run even on Windows 95. So Microsoft is doing a great job supporting their old systems.
Now on the other hand, if you go over to the website of a commercial Linux distribution (let's say SUSE), you won't find support for distributions that old. SUSE only presents downloads down to version 8 of their distro. And version 8 isn't that old, is it? So if you want support for older systems, you need to have access to the software's source to be able to maintain the system.
Updating to a newer version of the distribution isn't an option in the average consumer-level world.
The way I see it, most Open Source applications *ignore* the problem. They don't care about backwards compatibility; it's not like they try to maintain it.
As a user, I get pretty pissed off when binary compatibility breaks when it really shouldn't. Libraries that are depended upon by closed source applications should NOT break compatibility. With GPL'ed libraries, those that require applications to be GPL as well, breaking compatibility is less of an issue, since it's possible to fix the application.
GCC is a very good example of how too much freedom can really suck. Breaking compatibility as damn often as they do should be punished.
"Now on the other hand, if you go over to the website of a commercial Linux distribution (let's say SUSE), you won't find support for distributions that old. SUSE only presents downloads down to version 8 of their distro. And version 8 isn't that old, is it? So if you want support for older systems, you need to have access to the software's source to be able to maintain the system."
There is a big difference between an old Windows version and an old “Suse/Red Hat/Mandrake/etc” version…
Win98 will always be Win98. No matter how many upgrades you apply to it, it will never reach the WinXP level.
Suse 8/Mandrake 7.2/RH7/etc. can be upgraded, updated and patched over time. An old Mandrake 7.2 that has been upgraded package by package over time can be at the same level as the latest Mandrake distro. A Linux distro is only a starting point!
And also, upgrading to the latest version of a distro is quite easier than upgrading to the latest Windows version.
Just my 2 cents!
> That makes 6 years of support and many pieces of newly produced software still run even on Windows 95. So Microsoft is doing a great job supporting their old system.
> SUSE only presents downloads down to version 8 of their distro. And version 8 isn't that old, is it? So if you want support for older systems you need to have access to the software's source to be able to maintain the system.
There is a big difference. With Windows 95 you're stuck. You don't have access to security patches and you can't fix the problem. Even contacting Microsoft will not help you, as they do not have anyone to provide support. And of course, you cannot access the sources to fix the problem (or hire someone to do it). With Suse, you can always hire someone to provide support for an older version. You have a chance of fixing major security breaches.
Also, comparing Microsoft to Suse is a bit unfair. Suse is tiny compared to Microsoft; it cannot handle supporting a huge number of releases, simply because of the lack of employees. Microsoft can afford to keep a number of employees to provide support.
Now, most companies are not the size of Microsoft, but more the size of Suse. So if you buy a closed-source package, most chances are that they will support at most 2 releases. So if you decide not to upgrade, you are stuck with a product that could have unfixed security issues and no support.
So you're saying I could install my old SUSE 7.0 on my system, connect to the SUSE 9.1 repository and update my system to SUSE 9.1 without a hitch?
I never meant to compare MS to SUSE. The only thing I said is that SUSE doesn't have to do the maintaining job, because everything is open source and you can patch around in the whole system yourself if you have to.
> Not many, typically only proprietary software. The reason is that downloading software in a binary form which would fit all distros is not the idea on Linux; it's the other way around. The developers release the source, and packagers release distro-tuned packages. I'm seeing this work great right now on my Mandrake system, utilizing urpmi and a couple of repositories.
You are right. _You_, I mean. It works for you, good.
Please take your time to read the first link in the article.
The one that points to:
http://www.joelonsoftware.com/articles/APIWar.html
Read the lot of it, and try to understand the most important piece in it. (Namely, that attracting developers to your operating system is about the only thing that matters.)
And please, stop thinking of applications only as the ones everyone uses: mail apps, browsers, office suites, etc.
Summing up all the companies I've visited, I've seen both small and big specialized applications; they by far outnumber the more common ones. But they're used by few: some specialized app in one company, some in others. They all use and rely on the common applications of course, but the specialized ones are just as important for them.
If you want such people to use your OS, the required applications must exist (or someone must be willing to develop them) -> attract developers. The OSS community isn't going to be able to develop every little piece of program everyone needs (when was the last time you saw an ISUP protocol tester on freshmeat?).
And a hint: binary compatibility is a nice way of attracting developers, and of assuring them they don't have to redo/update/keep track of every little update that goes on elsewhere.
> Yes, I know, there are packages and package-handling software, but quite too often, those fall just short of doing "the thing" for you. More often than not, I had to take manual action – because the ebuild for Subversion wasn't yet up to 1.0, because the Red Hat db4 RPM was compiled with an option that hiccuped on my server…
Let me ask you this, what would you do with closed-source software if you had the same problems? You would have to contact all those companies and ask them to fix the problems and then wait. With Free software, you fixed all this yourself.
I agree that fixing is not easy and free; it does require time and money. But it's better to spend that time and money on fixing the real issue instead of creating workarounds that open holes in other software (A has a problem and I fix B to make it work with A, but at the same time I create a hole in B).
"As a user, I get pretty pissed off when binary compatibility breaks when it really shouldn't. Libraries that are depended upon by closed source applications should NOT break compatibility."
Blame it on those who make their software closed source! They chose that way, so they must suffer the consequences; that is, it's *their* job to make new versions when needed by their customers.
The OSS world shouldn't be slowed down because some companies/people want to penalize their clients by making closed-source proprietary software.
Microsoft broke thousands of Windows drivers with the move to NT/2K/XP from 95/98/ME. Longhorn will likely break them all again.
What starts to happen is that some software requires newer versions of Windows, and some software+hardware only works on older ones.
If you are unlucky enough to own older hardware with binary drivers that the manufacturer cannot be bothered to update (thanks, Digidesign), then you can have thousands of pounds of hardware turned into a paperweight. Or, you can run another box with an older version of Windows.
If the code is open then there is at least a chance you can escape the forced upgrade cycle.
QUOTE:
How many of these programs can you download from the developer's site in binary form so that they would work on any distro? Perhaps something like 0,005%?
END QUOTE:
That is such a Windows way of thinking. Using Fedora with yum, or apt-get for RPM, or Debian with apt-get, there are, erm, tons of binaries to download and just run. Not to mention that the same is possible for any distro that is based on those distros.
The fact that the source is available means that anyone can make a package for any given piece of software and make a binary available. In fact, the sooner EVERY distro has a version of apt-get or yum for it, the better! No more downloading from websites; use these systems exclusively, unless one is able to compile from source and create a package to make generally available.
IMO MS should have compatibility for stupid apps for a time-limited period only, i.e. the OS that comes after the release of that application, and then the next version after that removes it again.
OR
have compatibility as an optional extra that can be uninstalled.
I don't know how feasible these solutions are, but IF it is a security hazard it most definitely needs reviewing!!
I wonder if it might improve the OS's speed.
I recently wrote a small XSLT application that I considered releasing to the wider world.
However, I discovered several problems. Mostly, I couldn't get it working properly on even a slightly older Linux system (Red Hat 9).
Most OSS software is changing so quickly that I can't code anything on a new system and expect it to work on a slightly older system. While I can be somewhat assured that it will work on newer systems (assuming stuff doesn't get deprecated), I can make no such assertions about older setups.
To me this is the big problem with commercial software under Linux. With desktops stabilising and likely to be updated less often (stability is the watchword of business), how far back do you need to support? Should I aim for GNOME 2.0+ or KDE 3.0+, or can I just go for KDE 3.2+ and GNOME 2.4+?
And this is only source level compatibility.
Does this feature of Windows suggest the probability of endless permutations of "ecological" niches for viruses, etc.?
Along with the tax we (most of us) pay to MS is the tax to antivirus companies, firewall companies, etc.
Well, first, the author reads Slashdot, so that should tell you something about the article right there. Another one of those "Microsoft is bad, free software is good" articles. Did you complain when Linux moved from 1.x to 2.x, breaking all compatibility?
> Summing up all the companies I've visited, I've seen both small and big specialized applications; they by far outnumber the more common ones. But they're used by few: some specialized app in one company, some in others. They all use and rely on the common applications of course, but the specialized ones are just as important for them.
You bring up an important point. People and companies currently depend on specialized closed-source applications. Because they are closed source, the companies are in vendor lock-in. This is somewhat OK as long as the vendor provides support for their software. But when the vendor stops supporting its software for whatever reason, the user company is in a difficult situation. Either they have to migrate to a new solution, or stay with the old unsupported version.
Even Microsoft cannot check every available closed-source application to see if its latest version of Windows supports it. You will end up with unsupported software in the closed-source world. But you will not have the same freedom as in the Free world to fix your application to make it work in a new environment.
As in every case of freedom, there is a price associated to that, but my argument is that this price is worth it and actually gets voided by the fact that backwards compatibility also has a price, that of security and stability.
> The OSS community isn’t going to be able to develop every little piece of program everyone needs
Why not? I mean, what is so different between developing closed-source applications and open-source applications? Both use the same languages (C/C++/Java/C#), both have tools to help develop applications. What would stop companies from making open-source or Free software for all these specialized applications?
> (When was the last time you saw a ISUP protocol tester on freshmeat ?).
Just because it hasn’t been done doesn’t mean it will not be done. Also, Free software does not equal automatic public distribution on the Internet. You could write and sell Free software to a small set of customers. The only thing you have to provide is the source code with your software and give the right to your customers to modify and distribute that software. I think most customers will continue paying you for updates, because you will have the expertise and you will be the trusted source. But if you, for whatever reason, disappear or cannot support your software, your customer will be able to find support elsewhere.
> Please take your time to read the first link in the article. The one that points to:
http://www.joelonsoftware.com/articles/APIWar.html
> Read the lot of it, and try to understand the most important piece in it. (Namely, that attracting developers to your operating system is about the only thing that matters.)
Joel makes a valid point, and I think that Free software does provide some backwards compatibility. You can run gtk-2.0 and gtk-1.0 applications at the same time. You can run qt2 and qt3 at the same time. But when you have the source available, the rules change. You do not have the same restrictions as before. You can innovate more freely than before. Think about it: Microsoft cannot innovate too much because they would break compatibility with old *broken* applications.
Heavens to Murgatroyd, installing software is not yet like slipping a suppository to a comatose quadriplegic; such a tragedy. Just like condoms, one size will never truly fit all. How many hundreds of platforms is Linux on? The other guys, uh, 1? The tragedy is people can't seem to see anything beyond their own little desktop. Why are there so many choices vs. why are there no choices? Yes, every distro is like a whole 'nother version and a completely different OS, so pick your pony and ride it.
Who the fsck writes XSLT applications? Most write stylesheets for data transformation. You can't get Xalan/Apache/Cocoon to run on your Linux box? I smell weakness.
It's sad that in Free/OSS, binary compatibility isn't considered important. If you regard source compatibility as the only thing that matters, okay. But being able to run binaries across different distros is still very useful.
I compiled XFree86 from source a while ago, and in the documentation it said that the names of some libraries were changed to reflect that they were part of a newer version of XFree86. So, you have an application, and to use the newer library, you need to recompile it. The library is (as stated) fully compatible with the old one, provides the same functions, and interfaces with applications the same way. Yet, only for the sake of indicating it's part of a newer version of XFree86, its name was changed. Frankly, this sort of thing is STUPID. Changing library names or programming interfaces should ONLY be done when really necessary.
The same goes for backwards compatibility. It's okay to break it, if that is the only way to implement new functions in an OS. But all effort should be made to avoid it when possible.
Just consider defined programming interfaces (APIs). OS developers shouldn't worry about breaking apps that rely on functions not defined in APIs. But they should be careful not to break apps that rely on defined APIs. Keeping apps working when the OS underneath changes is what APIs are for, right?
Only source compatibility matters? If so, you forget one thing: recompiling an application can be quick and simple, but it may not be. Keeping old binaries around is always the easiest thing to do. Rebuilding a binary can be very difficult considering all the build tools and dependencies that you may need. To recompile programs, you need a far more complex system than you need for just running binaries.
The open source model should work better than the MS model does, but the author compares the reality of MS to some sort of idealized potential for open source.
Yes, since the source code is open, developers should be able to look at all the source to ensure compatibility. But nothing like that ever happens. Are you supposed to read source code to check for any compatibility problems in every version of Linux over, what, the last 3 years? What a PITA.
The real difference between the MS approach and the Linux non-approach is that the OS vendor is taking it upon themselves to actively seek out potential problems and come up with solutions. With Linux, it’s a free for all (as in beer, speech, and not worth paying for)
"Let's not forget SUN Java and the Flash plugin. Hm, shall we start counting?"
Funny, I do believe I’m not the only one who had to dig around for various versions of Java (Blackdown, IBM, Sun) until I found one that was linked against the right glibc.
glibc is not fully backwards compatible, and linking against any given version of glibc WILL make that binary potentially unusable under other versions.
I’m not saying open-source software should necessarily do what Windows did and ensure 100% backwards compatibility. But let’s not delude ourselves either, thinking there are binaries which will run everywhere, ’cause there aren’t.
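For the curious, here is a minimal sketch of how glibc's symbol versioning can be inspected at run time with dlvsym(); the version tag "GLIBC_2.0" is an assumption that holds for old x86 builds of glibc and may differ on other platforms:

```c
/* Build with: gcc -o checkver checkver.c -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *libc = dlopen("libc.so.6", RTLD_LAZY);
    if (!libc) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Ask for a specific versioned symbol. A binary linked against a newer
       glibc may reference symbol versions that an older glibc simply lacks,
       which is exactly why such binaries refuse to start there. */
    void *sym = dlvsym(libc, "realloc", "GLIBC_2.0");
    printf("realloc@GLIBC_2.0 is %s\n", sym ? "present" : "absent");

    dlclose(libc);
    return 0;
}
```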
“Did you complain when linux moved from 1.x to 2.x breaking all compatibility?”
No, I didn’t complain when I moved from 1.x to 2.x. Nothing broke.
When I switched from a.out to ELF, a.out still worked. When I switched to libc6, I still had the previous libraries installed. Executables linked against the old libraries continued to work. Closed binaries that depended on bugs in libraries, such as Netscape, worked by using a wrapper that used the LD_LIBRARY_PATH feature to link to superseded versions of the libraries.
The distributions didn’t always get it right. Package managers don’t always deal with multiple versions of libraries well. Manually solving backwards compatibility problems can be a pain. But it is possible to solve compatibility problems with open systems without depending on vendors. Closed systems can prevent solving those problems.
Closed systems do a good job of making the easy stuff easy. So do open systems. Open systems weren’t always easy to use, but routine operations are now easy to do with both open and closed systems.
Open systems make solving hard problems possible. It might take a lot of work, but hard problems with open systems can be solved. But no amount of effort can solve problems with closed systems when the vendor decides not to solve the problem. Vendors make decisions based on their interests, not my interests. Microsoft has discontinued support for Windows 98, so there will be no more security fixes. Red Hat has discontinued support for Red Hat 7.3, so there will be no more fixes from Red Hat. But because Red Hat is open, there are fixes available from others. That’s the difference. Both vendors dropped support for old products, for perfectly valid reasons. In one case, that means no fixes from anyone. In the other, fixes are available from others. I’ll take “possible, but hard” over “impossible” any day.
It's not the case that no OSS project cares about binary compatibility. KDE, for example, should not have any incompatibilities between minor revisions, so an application compiled for KDE 3.0 should also work perfectly on the upcoming 3.3, and vice versa.
Other projects care less, indeed. GNOME’s new file selector isn’t even source compatible. Therefore (I assume that) a GNOME 2.6 application won’t work on 2.4, let alone 2.0.
Luckily, it often doesn't matter for open source apps, as they are all included in the distribution with the right library versions, and closed source apps are often either statically compiled (like Acrobat Reader) or work with a compatibility package (Kylix).
The biggest problem is people insisting on using 10-year-old software. They spend the money to upgrade their hardware and OS but do not bother to update their software with newer versions or comparable solutions. I sometimes dust off my old 200MMX system with DOS 6.2 on it just to play Transport Tycoon. I don't expect it to run on my XP system (although it may). Microsoft has been really kind to the whiners; it would be nice to see them set a hard time frame for backward compatibility, like 5 years.
Maybe you should check out OpenTTD: http://sourceforge.net/projects/openttd/
It's a "remake" of TTDX using all the old graphics and sounds (you need an original copy of TTDX to transplant the old stuff from)…
People seem to forget that Linux was not designed to run proprietary/closed software. It is the responsibility of proprietary vendors developing software for free software projects, such as Linux, to keep up with the changes that might affect their products.
If proprietary software X doesn't run on a new version of Linux, how is that the Linux developers' fault? It's not as if they signed a contract with the Linux developers to provide backwards compatibility for their products.
Microsoft explicitly promises their developers that they will provide backward binary compatibility for their products. So Microsoft is bound to its words. Please show me where any free software or open source software developer explicitly promises any developer backward compatibility for their products.
The point is, free/open software evolves at such a pace that many proprietary and commercial vendors can't catch up! So we see a bunch of whiners resurfacing. Rather than whine, why don't they solve the problem? Why don't they streamline their development process so that it accounts for contingencies that occur all too often in this ever-changing, volatile and dynamic ecosystem?
Now, I do agree that breaking any sort of compatibility frequently is unhealthy. However, once in a long while it inevitably occurs in the interest of advancement. Note, it is usually very fragile software that breaks. Well-designed, well-crafted and robust software stands unaffected, irrespective of core changes upstream.
A neat thought just struck me: wouldn’t making LSB required libraries inhabit their own directory solve most future binary-only problems? In other words, all LSB-compliant binary software should only link the libraries supplied in a standardized directory like “/opt/lsb-1.foo/lib” and use only other programs in “/opt/lsb-1.foo/bin”.
Now, if a distro contains lsb-1.foo compliant libraries, the directories or the libraries in them could be symlinks. Any binary dependencies not required by LSB-1.foo would need to be installed by the application, and they should be built against the LSB libraries. A simple shell script to set library and binary paths may need to be called when the program starts, unless the program uses hard-coded library paths.
So, when a binary file is distributed, it uses the lsb-1.foo libraries and binaries. Multiple versions of the LSB standard can be present in “/opt/lsb-1.foo”, “/opt/lsb-2.bar”. Distros can choose to include and support the packages as they see fit. If you elect not to bother with symlinks, a simple binary package (even a tarball) would supply the libraries for almost all distros.
While the idea of binary compatibility is usually thrown around in support of proprietary software, FOSS would benefit as well. It would mean that software could be released as LSB-compliant, not RedHat/SuSe/Debian-compliant. It would mean that relatively large projects, or those requiring significant numbers of specific or obscure libraries, could be built to work on LSB-compliant distros.
Imagine using the same KDE-4.foo binaries on your Fedora box at home, your RHEL box at work, and your Debian box at your girlfriend's house. Such compatibility would mean developers and contributors spend less time compiling and testing across disparate distros and more time actually developing.
Of course, distros will still ship builds against libraries incompatible with the LSB, which is why making the LSB a standardized set of libraries in a standardized location within a distro makes sense. You place things in the right place and everyone benefits: a binary standard for those who need it, and the distro can charge full steam ahead without worrying about binary problems.
Lots of OSS projects take care not to break backwards binary compatibility: glibc, KDE, GNOME, Qt, GTK+, etc.
GCC keeps breaking C++ compatibility, and with good reason. I'd rather not have to live with ABI bugs just so some stupid closed-source program I don't even use can keep working. In contrast, Visual C++'s hell-bent desire to keep backwards compatibility has hurt it. GCC 3.x uses a cool table-based exception-handling mechanism that has no runtime cost. Meanwhile, Visual C++, to preserve compatibility, can't use this sort of mechanism, and uses Windows' structured exception handling, which entails a performance hit.
"Just because it hasn't been done doesn't mean it will not be done. Also, Free software does not equal automatic public distribution on the Internet. You could write and sell Free software to a small set of customers. The only thing you have to provide is the source code with your software and give the right to your customers to modify and distribute that software. I think most customers will continue paying you for updates, because you will have the expertise and you will be the trusted source. But if you, for whatever reason, disappear or cannot support your software, your customer will be able to find support elsewhere."
That’s extremely naive. There is absolutely zero incentive for a company that develops software to do as you suggest.
Actually, it isn't naive, as it has been done already a couple of times.
"That's extremely naive. There is absolutely zero incentive for a company that develops software to do as you suggest."
IIRC, theKompany distributes this way, or at least used to. They would allow paying customers to get the source, but not the general public. Note, this was for the products they sold, not the products they distributed for free.
When an application takes advantage of bugs, or undocumented behaviour, it is unfortunate but it deserves to break. Emulation is possible in the worst case, I guess, when the application cannot be changed.
However, binary compatibility of say, C libraries should not ever be a problem. For starters, I can’t find any reason you would need to change your ABI for linking and calling C code. It really is quite silly.
If you release a new version of a library which has a different (even slightly different) API, you could keep track of this with *interface versions* (distinct from normal versions, which are dictated by development and may or may not be backwards/forwards compatible); a sketch follows below. It may be necessary to have multiple versions of the same libraries installed, but that is a small price to pay for having working software.
C++? I think that is a bad idea for shared code. It is always possible to create OOP interface classes to a C library, much harder the other way round. C++ (to my knowledge) does not have a widely agreed-upon ABI like C’s cdecl and stdcall.
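To illustrate the "interface versions" idea at the C level, here is a minimal sketch using GNU symbol versioning (a GCC/binutils feature); the library name, function names and version tags are made up, and the shared object would also need a linker version script declaring LIBDEMO_1.0 and LIBDEMO_2.0:

```c
/* libdemo.c -- build as a shared object, e.g.:
 *   gcc -shared -fPIC -Wl,--version-script=libdemo.map -o libdemo.so libdemo.c
 */

/* The old interface, kept so that binaries linked long ago still resolve. */
int frob_v1(int x) { return x + 1; }

/* The new interface, with an extra parameter. */
int frob_v2(int x, int flags) { (void)flags; return x + 1; }

/* Export both under the same public name "frob", at different interface
 * versions. Old binaries keep binding to frob@LIBDEMO_1.0; anything linked
 * against the new library picks up the default frob@@LIBDEMO_2.0. */
__asm__(".symver frob_v1,frob@LIBDEMO_1.0");
__asm__(".symver frob_v2,frob@@LIBDEMO_2.0");
```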
> That’s extremely naive. There is absolutely zero incentive for a company that develops software to do as you suggest.
What I gave is pretty much the definition of GPL. What you are saying is that GPL is naive. You may be right, but then you have to say that Novell/Ximian, RH, Suse, Mandrake, Mozilla, Sun, IBM and many others are naive to base their business (or some of it) on GPL.
That’s funny, I was always under the impression that Visual C++ was a lot faster than GCC (the C++ part). However, there are most likely other reasons for that, and I don’t blame GCC for being ‘slower’ since GCC is targeted for many platforms.
But while many feel that closed source developers should be responsible for releasing compatible software, we keep being stuck with ugly statically linked Motif apps thanks to that attitude. If Mozilla uses a C++-based plugin system, then it will fail on the GNU/Linux platform. No one wants to have to ask their customer what version of Mozilla they use and what compiler was used to compile it, and then present them with the proper version. Also, what happens when such a company goes bankrupt? Should we expect everyone to open up their code? I personally think not.
"What I gave is pretty much the definition of GPL. What you are saying is that GPL is naive. You may be right, but then you have to say that Novell/Ximian, RH, Suse, Mandrake, Mozilla, Sun, IBM and many others are naive to base their business (or some of it) on GPL."
In the case of Novell/RH/Suse/Mandrake/Mozilla/Sun I agree that it's very naive. In the case of IBM I'd say it's a wise move, as they base so much of their money on consultancy, and they have an old vendetta with MS that they still suffer from in terms of "company confidence".
Open source means everything is fixable with enough time and effort. What large companies are interested in is reducing time and effort spent on what is not their main focus (software development and bugfixing), and devoting it instead to selling more widgets. So in reality, whether you are dealing with closed or open source, what the large company wants to know is how long you, the vendor, will support the OS and associated applications, ’cause the large company does not want to be bothered. That’s what they’re paying you for, so they don’t have to be bothered.
Just read a couple of interviews with Red Hat execs where they’re talking about a 5-6 year support time frame, which is pretty good, but not out of line with MS’s support for WinNT and some of its business applications, for example.
"Yeah, binary compatibility is certainly *possible* on Linux. But the fact that you could name only 3 (three) projects (let's not forget OOo!) shows that this is an exception, not the rule."
Holy dorkishness, arguing in the 21st century:
A: Bla == true.
B: Bla != true, since X, Y, Z.
C: Because there is only X, Y, Z Bla == true.
LMFAO!
No, it does not. In order to prove your point you'll have to state various examples where the opposite is the case of those 3. Then the author will again most likely state some in which it is true, so it wouldn't be constructive; it would become some kind of rally over who is able to state more examples. A finer, more in-depth analysis as a post or article would just be more constructive than the flamish path taken here.
"That's extremely naive. There is absolutely zero incentive for a company that develops software to do as you suggest."
If the main (and to some extent only) purpose is to develop and sell software, I agree. However, that isn't the case with Novell, IBM, the BBC and many other companies. IBM doesn't say: we sell this GPL Linux kernel for $500 per computer, does it?
Just not on Linux. Try Solaris instead. We developed OSS drivers and the same binary works on Solaris 7, 8, 9 and 10. Even FreeBSD/OpenBSD/NetBSD/DragonFly are better than Linux when it comes to binary compatibility. Nothing really breaks in BSD (otoh FreeBSD 5.x-CURRENT series has been just arbitrarily breaking the device tables).
If there were ONE distribution of Linux (kernel+libc+gcc) – like FreeBSD – and everybody added stuff on top of the kernel, that would be much better. Red Hat fscks the kernel, Fedora uses an incompatible glibc, SuSE ships kernel sources in a weird way. You can't get Linux 2.6 to work on Red Hat 9 (rpm stuff breaks unless you enable BSD accounting in the kernel config). You can't get RealPlayer 8 installed unless you do LD_ASSUME_KERNEL=2.4.2 and other stupid hacks.
Unless people really are taken to task for breaking LSB, Linux is a train wreck in the making. No wonder Oracle will not certify Oracle 10 for anything other than Redhat EL.
best regards
Dev Mazumdar
Isn’t it true that these binaries can target a more specific version of an .so? You could keep them around. A good binary installer should be able to determine what libraries are needed, and the script to run the application could set the LD_LIBRARY_PATH variable to include the application’s own folder, i.e. /opt/application/lib, to work.
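A minimal sketch of such a launcher, written in C for concreteness (a shell script would do the same job); the paths and the name of the real binary are purely illustrative:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv) {
    (void)argc;
    const char *privlib = "/opt/application/lib";   /* app's private libs */
    const char *old = getenv("LD_LIBRARY_PATH");
    char path[4096];

    /* Prepend the private lib directory so the dynamic linker finds the
       bundled .so versions before anything the distro installed. */
    if (old && *old)
        snprintf(path, sizeof(path), "%s:%s", privlib, old);
    else
        snprintf(path, sizeof(path), "%s", privlib);
    setenv("LD_LIBRARY_PATH", path, 1);

    /* Hand over to the real executable, forwarding any arguments. */
    argv[0] = "/opt/application/bin/application.real";
    execv(argv[0], argv);
    perror("execv");   /* only reached if exec failed */
    return 1;
}
```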
This, of course, means bloat. But hey, there’s a segment of the crowd here that wants binary compatibility.
Me, I think the way alternative architectures like AMD64 can have an entire suite of software custom-compiled as they can with Linux is awesome. To hell with binary compatibility. Just make sure we have hardware and data compatibility. Perhaps the fact that I’ve been running Gentoo for the past year and a half colors my perspective…
Open source portability and fixability, binary compatibility, speed – pick any two.
To mystilleef:
"People seem to forget that Linux was not designed to run proprietary/closed software."
Linux is just a kernel. A kernel should not care about the license of the software that runs on it.
Holy Linux! Speaking of what it was designed for: a single-user, single x86 CPU computer without network support. This is what Linux was designed for by Linus.
Look at Linux kernel 2.6 and realize how much Linux has changed since then.
Nothing but lack of focus prevents Linux developers from supporting its backward compatibility. That can and should be changed.
Even though I barrack for Linux, I believe that backwards compatibility is where work needs to be done (mainly in regards to libraries, as the kernel has backwards compat worked out).
What I'm mainly trying to say is that devs should realise that even though something is deprecated, etc., they should still keep it somewhere in the API (maybe in a deprecated section), instead of just going ahead and deleting it, meaning that essentially the program will need work done to be compatible with the new version.
Of course you do!
If you use old software that only works with specific Linux distros and it isn't open source or being updated to run on the newer distros, then you have two options: run an old distro that supports your old software, or modify the source to work with a newer distro. If enough people use the software and it's open source, it will be updated. If not, then you are hardly worth the time and effort to provide backwards compatibility, unless you pay for it out of your own pocket.
And that’s fair, IMO.
I think you’ve got your history and facts upside down. Linux was designed to emulate Unix after the frustration of having to deal with proprietary bullsh!t that UNIX was buried under and the expensive hardware that accompanied it.
Even though Linux's humble beginning was on a single x86 CPU, to say that it was designed for a single CPU with no networking contradicts the whole purpose of emulating Unix (the backbone of networking) and the POSIX standard (the standard Linux was designed to adhere to). Get your facts straight!
Backward compatibility is as silly as it is broken. It was introduced by Microsoft in an attempt to pacify commercial developers who were frustrated by the way earlier Microsoft operating systems broke their products. That is why Windows comes with the kludge we know it to be today. If you look closely at your Windows 2000/XP Professional systems, you'll find Windows 95 files, DOS files and even games hidden around your system files. The 3D Pinball game, anyone?
Mind you, these files aren't needed by Windows 2000/XP. They are just there for decorative purposes, I'd like to believe. The remnants of… uh… huh, that's right, backward compatibility! You say Linux developers lack focus. I'd like to know, in your opinion, which developers have focus. Microsoft's?
The only people I see voraciously whining are commercial proprietary vendors with fragile software who can’t keep up with the pace open/free software evolves. All of a sudden, free/open software developers need to freeze the sun, the moon and time so that commercial proprietary vendors can catch up. We can only dream.
Heck, the king of backward compatibility, Microsoft, breaks compatibility itself. Try opening your Word 97 files in Office XP/2003. You can't. You'd have to use openoffice.org to do that. So much for backward compatibility. There are issues plaguing Linux today; backward compatibility is the least of them. Proprietary vendors need to develop a strategy for developing products for Linux quite different from the one they use for Windows. That's just it.
Much of the Linux community have no interests in proprietary politics. Microsoft does. The Linux community’s slogan is, “Show us the code!” as opposed to “Show me the money!” Even Real Networks is beginning to come to terms with that. Others will learn and follow suit.
"What I gave is pretty much the definition of GPL. What you are saying is that GPL is naive. You may be right, but then you have to say that Novell/Ximian, RH, Suse, Mandrake, Mozilla, Sun, IBM and many others are naive to base their business (or some of it) on GPL."
There’s a place for both open source software and closed source. Open source as a business model doesn’t apply to all situations. We were talking about niche/specialized applications. It would be too much of a risk if you open source your main avenue of revenue. Open source lends itself better to generic applications around which interest can be created in the community.
"People seem to forget that Linux was not designed to run proprietary/closed software."
How in the heck does whether software is closed or open source make a difference when it runs on Linux? Will the OS give me a warning telling me that what I am about to run is closed source? Just because the OS itself is open source doesn't mean that it was meant to run only open source programs.
mystilleef: So you don't care whether Oracle runs on Linux at all? I'm sure that would please the business users who are thinking of migrating to Linux right now. It's morons like you who give the Linux crowd a bad rep.
That you aren't even able to open Word 97 files in newer Office versions certainly gives away your cluelessness.
…if anyone really had a binary compatibility problem within their STABLE distribution?
I mean, it's pointless to complain to the car factory about a nameless car part not fitting in your Mercedes, since it's not their job to make it fit. Why should distros do so?
Don’t get me wrong, it would be great if any given software would run on any given platform, but that’s not possible. Ever.
The thing that pisses me off is not that some fooprogram doesn’t run on barplatform (without compatibility measures like old libs), it’s the people who use programs (developed by sheer good will) for free and dare to complain when progress happens and something breaks as a result.
Use the stable releases, or face the fact that YOU may have to make the thing work properly.
Please respect the voluntary work of others.
The main reason backwards compatibility is important to people is that they have generally invested a lot of money in applications, and don’t want to lose that investment or be forced to pay upgrade fees just to get back to work.
That concern disappears when the applications are free. All you can say is that it’s inconvenient to upgrade to the latest version. Big deal.
Most users want the latest and greatest. Many Windows users would gladly accept pirated copies of all the latest versions of the major applications. The only true barrier to upgrading for most is the cost.
Why should I care if Oracle runs on Linux? For all I care, Oracle could die a horrible death. And how does Oracle automatically become the flagship product businesses need to move to once they adopt Linux?
Is there somewhere in the memo where they say all businesses that adopt Linux must use Oracle? If Oracle solves your business needs, good for you. But why should I suddenly care?
Excuse me, but I’m not an Oracle shareholder! I don’t profit from Oracle’s businesses on Linux. What? Do I suddenly owe Oracle a kiss because they see profit generating opportunities on Linux?
Sweet, I am a moron because I have no vested interest in Oracle.
"How in the heck does whether software is closed or open source make a difference when it runs on Linux?"
It is a lot easier, less expensive and less cumbersome to manage open source software on Linux than closed source. Qt realizes that. Real Networks does too. Nvidia even had to provide open source wrappers to link their modules to Linux.
"Will the OS give me a warning telling me that what I am about to run is closed source?"
No, it doesn't. I don't see why it should. However, installing, maintaining and upgrading it will make your life as a user harder, e.g. the need to recompile the Nvidia driver every time I upgrade my kernel.
"Just because the OS itself is open source doesn't mean that it was meant to run only open source programs."
In many instances, it does. You can't link proprietary modules to many parts of the Linux kernel, for instance. In several cases proprietary applications can't link to free/open libraries on your system; they need to write their own.
Writing closed software is not impossible, it's just hard. And that's because, like I said, the system is inherently designed to thrive in a free/open source atmosphere. Much of this is deliberate, by design.
From Windows 3.1 up to Windows 9x, WIN.INI (or SYSTEM.INI) used to have [compatibility] and [compatibility32] sections under it to cater for software compatibility.
And I believe Windows Update IS _still_ providing compatibility updates to Windows 2000 …
"It is a lot easier, less expensive and less cumbersome to manage open source software on Linux than closed source."
To clarify: it would be easier, less expensive and less cumbersome anywhere, not just on Linux.