Miguel de Icaza talks about the problems independent software vendors have with Linux distributions’ API/ABI backward compatibility.
People sometimes focus on what would happen if Microsoft changes an API in .NET, and how that would impact Mono. This is not as much of a problem, given Microsoft’s commitment to keep binary compatibility;
Microsoft has a very bad history of respecting standards, but OK, I’m convinced by what Miguel de Icaza says: they have no other choice this time.
But he hasn’t yet addressed the issue of Microsoft using the DMCA law, or one of their thousands of patents, to counter Mono if it becomes too successful.
To convince you that this time there is a little more to it than the good old conspiracy theory, let me quote the FFII (Foundation for a Free Information Infrastructure):
==============
http://swpat.ffii.org/players/microsoft/
Heise report about Steve Ballmer’s talk at CeBit. At a speech event together with chancellor Schroeder, Ballmer says that Microsoft owns lots of patents which cover its new DotNet standard and that it aims to use them to prevent opensource implementations of DotNet. The key phrases read, in translation:
Responding to questions about the opening-up of the .NET framework, Ballmer announced that there would certainly be a “Common Language Runtime Implementation” for Unix, but then explained that this development would be limited to a subset, which was “intended only for academic use”. Ballmer rejected speculations about support for free .NET implementations such as Mono: “We have invested so many millions in .NET, we have so many patents on .NET, which we want to cultivate.”
(Original document: http://www.heise.de/newsticker/data/jk-12.03.02-000/ )
…I’m not 100% sure that APIs are really that much of a hassle. Binary distribution, OTOH, is the real problem, since the whole open source development model is based on source, not binaries. Gentoo, and in a different way Debian, fully acknowledge this model, and I think it’s the real strength of these distributions.
If you really want a closed-source-like commercial model (which is partly why ABI stability is so important) applied to software for Debian, you have to stick with Debian _stable_, which only adds significant changes to the fundamental parts of the system with a new major version of the distribution.
Developing on unstable is more like working with regular CVS pulls of the development kernel: you can’t ask a development kernel to keep backward compatibility between two CVS revisions.
My question is how long API/ABI compatibility should last. Microsoft is a very bad example, but at some stage you should break the shackles, i.e. put a date on it. For example, when Red Hat updates its commercial/client distributions, they could service compatibility for, say, 2 years and then move on to the new version. At some stage you have to ask at what cost you are doing this. Of course, buying something and finding that a year later API/ABI compatibility has been broken is not professional, but expecting a 2-year-old product to still be compatible or serviceable is stretching the friendship, I think.
This year I had to officially play with Linux and a commercial product. The product in question has issues on other Unixes, but Red Hat Advanced Server 2.1 beat them all. If you need commercial support, I do not recommend Linux…
—
http://www.advogato.org
You are right, Linux is much more geared towards open source compatibility (though API compatibility is a huge pain too) than closed source compatibility (I mean ABI compatibility here).
Look at products like XD2: even though they only released it for RH and SuSE, it was only a matter of time before there were Debian-ized packages of it, and the GNOME maintainers for the different distros started using some of their patches.
The problem is that there are still a lot of tools out there that are not open source but want support on Linux. Telling these companies (like Oracle) to go to hell is not what we want. Hopefully one day Oracle will be open source, but until that time, well, Linux can be a pain….
“This year I had to officially play with Linux and a commercial product. The product in question has issues on other Unixes, but Red Hat Advanced Server 2.1 beat them all. If you need commercial support, I do not recommend Linux…”
I am not sure what you actually mean by this: Did Red Hat beat them AND you do not recommend linux??? I think I am confused. Could you explain?
There are two ways to deal with binary incompatibilities when they get in the way of innovating:
1) Don’t innovate. That seems to be Miguel’s stance.
2) Let the new release live side by side with the old release.
Given the choice, I’d go with door #2. This is where projects with experimental branches, such as gtk, linux, etc., can keep maintaining the old versions, but spend most of their time on the new one. Perhaps the best solution is to save API/ABI changes for when you feel it’s time for a major point release.
I usually agree with Miguel, but in this case I find that he needs corporations to embrace Linux more than we do.
Thanatos wrote:
> There are two ways to deal with binary incompatibilities when they get in the way of innovating:
> 1) Don’t innovate. That seems to be Miguel’s stance.
Reaaaaaaaaly?
Open Source is a wonderful thing, with the choice of languages you have to program with and the toolkits you can use (GTK+, Qt, Motif, etc.).
Of course this also makes it a headache. You need such and such a library to run this program. Oops, it doesn’t work with your kernel or some other library. Want to upgrade? You need this, this, and this.
I understand what Miguel is saying. Fixing it is a whole other issue.
Here’s what I drew that conclusion from:
And of course, part of the problem is that it is not fun to work on backwards compatible ways, nor is it fun to work when innovation is hampered by backwards compatibility. Proprietary software is not going away any time soon, and our community should start thinking about ways of co-existing with it, such that ISVs help our platform by enriching it with a larger software offering.
He doesn’t come out and say innovation should be sacrificed, but see the way he seemingly jumps to the next topic mid-paragraph? I think he wants us to draw the conclusion that we need to sacrifice innovation in favor of compatibility. Kind of like, “I know you guys don’t want to keep compatibility if it means you can’t innovate, but we need proprietary companies to support us…”
Also, check this quote out,
With Mono we are trying to keep things compatible with the .NET Framework: even when we believe we can do better, we avoid introducing incompatible changes.
Although this isn’t the same thing as breaking API/ABI compatibility in one’s own software, and he goes on to say that he tries to get new features put into new libraries, I think this sentence is again trying to get developers to put binary compatibility above innovation, slowing down development so that proprietary software companies will develop for Linux.
My opinion is that either big companies make the innovation happen, or we do. I’m pretty satisfied with our progress, and since it’s pretty easy to deal with API/ABI changes by allowing multiple versions to co-exist, I think we’re alright on our own.
In most cases, the revenue opportunities for the small and medium ISVs are too small to make a Linux port worthwhile, and when it is, the staffing requirements for maintaining and testing their software for a dozen of distributions and release versions quickly becomes a big burden.
A couple of things are necessary to remove the headache of distribution-agnostic packaging:
1) glibc ABI freeze. There is absolutely no need for the interface of glibc to change as much as it does. Is there any real advantage to be gained in redefining fpos_t as a structure instead of an integer? Not that I could find, but this is one example of broken binary compatibility between glibc 2.2 and 2.3 (see the sketch at the end of this comment).
2) A set of binary compatible base packages which would be guaranteed to be present on all distributions. Failing this, consistent package names for standard libraries across all distributions, so dependencies can be handled in a distribution-agnostic manner.
The only way to combat the lack of a common applications platform for Linux is to statically link all software before releasing it, which is a less-than-ideal solution.
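To make the fpos_t example concrete, here is a minimal sketch of why that kind of change breaks already-compiled binaries. The type names and layout below are invented for illustration and are not the actual glibc definitions:

/* abi_break.c -- hypothetical sketch, not the real glibc types */
#include <stdio.h>

/* What an application compiled against the "old" library headers saw: */
typedef long old_fpos_t;                 /* a plain integer offset */

/* What the "new" library uses for the same typedef: */
typedef struct {
    long         offset;
    unsigned int conv_state[2];          /* e.g. multibyte conversion state */
} new_fpos_t;                            /* larger, different layout */

int main(void)
{
    /* An old binary passes a pointer to an old_fpos_t on its stack; a new
     * library writing a new_fpos_t through that pointer overruns the
     * caller's buffer.  Recompiling "fixes" it, which is exactly the
     * complaint: source compatibility survives, binary compatibility
     * does not. */
    printf("old size: %zu bytes, new size: %zu bytes\n",
           sizeof(old_fpos_t), sizeof(new_fpos_t));
    return 0;
}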
If the developers decide to change something that breaks binary compatibility, then you can bet there was a technical advantage gained in doing so.
I agree though, glibc should hold off on breaking ABI compatibility until the kernel’s development tree is within a few months of release. If you check out glibc CVS, you’ll notice it uses 2.6.0 kernel features (as well as the new NPTL library).
If the developers decide to change something that breaks binary compatibility, then you can bet there was a technical advantage gained in doing so.
I don’t see any functional advantage gained to justify the break in backwards compatibility. In fact, a second type is defined for 64-bit offsets, fpos64_t. The fpos_t structure supposedly contains “conversion state” information now, but how, through the exposed API, is this information actually used? Perhaps GNU has created some proprietary extensions to work with these ANSI C types, although for what reason and to what end I’m unable to ascertain.
The fact that they’re breaking binary compatibility of ANSI C functions yet fail to state explicitly why they are doing so speaks volumes to me.
I’ll take the risk of being stupid, but why isn’t it possible to define some kind of standard for libraries in which you define functions at the beginning of the compiled binary and add pointers to the location of the function elsewhere (wherever) in the binary. In that way, apps can always find the right function via the pointers, but no application should have to break anymore with a new release of the library, or am I missing something?
For which I don’t think Linux is the future.
The home user just can’t find an app, download it, click, install, use. Most apps are made by hobbyists, for hobbyists, not by hobbyists for regular people who can’t compile.
Most developers who really understand their system will get a distro more suited to their skills, like Debian or Gentoo, or some BSD. They are even less likely to develop packages/binaries for the “easier to use” distros. And hey, let’s admit it, there are fewer packagers than programmers. Has anyone here ever heard of a “community of packagers” where you can just go and say, “Hey dudes, I wrote this cool program, could you package/ebuild it for Debian, Red Hat, MDK, Gentoo, etc.?” No, that just doesn’t happen: only the most important or well-known apps get packaged, and without packaging many apps never become important, or even die due to lack of user feedback. Sort of like a catch-22.
Linux may work as a server, and it may work on the enterprise desktop, which is usually just a static configuration, but it will never work on the home desktop; it just lacks that flexibility. Really, visit 10 of your friends and check which programs they have on their computers: you’ll notice they each have very different stuff. They install programs they find on sites, that come with magazines, or that a friend downloaded and recommended, etc.
It is NOT TRUE that home desktops are static configurations; even the Lindows guys understand this and try to add that extra “click’n’run” thing, which attempts to make things more interesting by promoting the software instead of just expecting the user to assume it’s there.
On Linux, programs can’t be distributed with such diversity: people just get them from their distro or they don’t.
If the developers decide to change something that breaks binary compatibility, then you can bet there was a technical advantage gained in doing so.
Boy, that’s convincing. We certainly wouldn’t want to challenge any developers’ decisions or anything.
> I don’t see any functional advantage gained to justify the break in backwards compatibility. In fact, a second type is defined for 64-bit offsets, fpos64_t. The fpos_t structure supposedly contains “conversion state” information now, but how, through the exposed API, is this information actually used?
Perhaps they want the same code to compile cleanly for 64-bit architectures? Why don’t you ask them?
If there really is no advantage, then I will admit that the developers did something stupid. However, I maintain that if it comes down to a choice between backwards compatibility and innovation, I choose innovation. If the developers broke compatibility without adding anything cool, then they didn’t face that choice. No one will argue that backwards compatibility should be broken for arbitrary reasons.
Re: RJW
Boy, that’s convincing. We certainly wouldn’t want to challenge any developers’ decisions or anything.
If you know about what’s actually going on, as Bascule does, then you have every right to challenge their decisions.
API/ABI compatibility. Well said, Miguel. I have noticed that of late I can download lots of apps compiled for Red Hat 8.0 and run them on 9.0 and on Fedora. But it needs to extend further. Commitments need to be made that go far into the future, like guaranteeing API stability for, say, 3 years; 5 years would be more ideal. Actually, guaranteeing it for one major revision of the APIs would be awesome too. One of the reasons Linux has been moving forward at speed is that it did not do this. But as soon as we catch up to the commercial vendors, which in many instances we already have, that stability becomes more important than ever before. How many programs can you download on the internet that run on versions of Windows from Windows 95 to XP? Probably a lot. 2000 did break a few things, though.
Binary compatibility between apps is also another beast to be tackled, as well as standard locations of libraries, or at least a way to install the same binary on every distro. Ximian has to prepare different sets of packages for Red Hat 7.3, 8.0, 9 and SuSE. Commercial vendors do not really want to do this. One thing that could happen is that vendors could choose a pseudo-standard and elevate that to become the standard Linux, which would kill other distros. If I were a vendor, and I were big enough, I could do that. It’s a pain trying to figure out how problems could be related to the version of Linux someone is running, which kernel, which distro, what other packages are installed, what locations, etc.
I’ll take the risk of being stupid, but why isn’t it possible to define some kind of standard for libraries in which you define functions at the beginning of the compiled binary and add pointers to the location of the function elsewhere (wherever) in the binary.
You’re describing how the jump tables of libraries currently work. Symbol names are mapped to function pointers.
In that way, apps can always find the right function via the pointers, but no application should have to break anymore with a new release of the library, or am I missing something?
When the symbol names or function arguments change, binary compatibility is broken.
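As a rough sketch of what that run-time lookup amounts to (the library name libfoo.so.1 and the function frobnicate are made up for illustration):

/* lookup.c -- hypothetical sketch of name-based symbol lookup;
 * build with: gcc lookup.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libfoo.so.1", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* The lookup is purely by symbol name ... */
    int (*frobnicate)(int) = (int (*)(int))dlsym(handle, "frobnicate");
    if (!frobnicate) {
        /* ... so if a new library release renamed or removed the symbol,
         * we only find out here, at run time. */
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    /* And nothing checks that frobnicate still takes a single int:
     * a changed argument list or struct layout goes undetected. */
    printf("frobnicate(42) = %d\n", frobnicate(42));
    dlclose(handle);
    return 0;
}

Ordinary dynamic linking does the same name-based resolution at load time instead of through explicit dlsym() calls, which is why renamed symbols or changed signatures break existing binaries.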
Didn’t he know?
If there really is no advantage, then I will admit that the developers did something stupid. However, I maintain that if it comes down to a choice between backwards compatibility and innovation, I choose innovation. If the developers broke compatibility without adding anything cool, then they didn’t face that choice. No one will argue that backwards compatibility should be broken for arbitrary reasons.
This is a very good argument, one which I claim is the root of all problems. Project development should be incremental and additive; there should be no gaps or breakages in any way, otherwise it isn’t worth making the effort. It is like leaving a huge mess at your workplace after a day’s work. It doesn’t look good in the eyes of end users, and even developers who’ll be affected by it will balk at it.
He is absolutely correct here.
We need API stability to win on the desktop, because games are not open source.
I won’t buy Linux games until I know they’ll still work in 2 years’ time when I want to play them again.
And they don’t make games for Linux because nothing works the same on 2 PCs with different APIs, and games are the key to the home PC, where Windows is the norm.
Listen guys, I never said there was anything wrong with innovation, but we as a community should accept the fact that we have to do this at well-determined points in history. Look at GNOME: you keep old interfaces and introduce new ones; you never change the meaning of something that has already been defined.
Yes, you have to carry cruft along, but you also avoid pissing developers off.
Miguel
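As a minimal sketch of that “keep old interfaces, add new ones” pattern, with made-up function names rather than any real GNOME call:

/* panel.c -- hypothetical sketch; these are not real GNOME/GTK functions */
#include <stdio.h>

/* New capability goes into a new entry point. */
int panel_set_size_full(int size, int orientation)
{
    printf("size=%d orientation=%d\n", size, orientation);
    return 0;
}

/* The old entry point keeps its signature and meaning forever; it simply
 * forwards to the new call with the old behaviour.  The attribute nudges
 * new code toward the replacement at compile time without breaking
 * anything already built. */
__attribute__((deprecated))
int panel_set_size(int size)
{
    return panel_set_size_full(size, 0 /* the old, horizontal-only default */);
}

int main(void)
{
    panel_set_size(48);            /* old callers keep working */
    panel_set_size_full(48, 1);    /* new callers get the new feature */
    return 0;
}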
Assuming that was the real Miguel…
Thanks for the article. You hit my sorest point with Linux: the fact that there is little to no backwards compatibility.
Doesn’t it bother you people at all that you can’t just install a newer version of a library without breaking some or all of your system? Why should I HAVE to recompile everything? It is wasted time and frustration.
On most other systems, a newer version of the same library is improved, not different!
Mutiny
BTW, I like Mono about as much as WINE. It is a good thing to also have, but I would never rely on it.
There is too much damning evidence that we will always be playing catch-up and that the rug may be pulled out from under us.
I hope you are wrong, but Microsoft’s comments (and previous actions) seem to point to pulling the rug out after wasting years of some of our best programmers’ time.
When the symbol names or function arguments change, binary compatibility is broken.
Again, at the risk of sounding stupid, the follow-up question: why do they change at all? If you want to upgrade a function called ‘drawcircle’, why not call it ‘drawcircle2’ and add it to the end of the jump table? Leave the old ‘drawcircle’ where it is and give the new one a new entry. New programs use the new function, while old ones can still find the old one. Clean up the mess when you do a big version change, and even then the old and new library should exist together.
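That is roughly what GNU symbol versioning already makes possible: two generations of a function can live in the same library under the same public name. A minimal sketch, reusing the made-up ‘drawcircle’ name from above with invented version tags (it also needs a linker version script defining the LIBSHAPE_1.0 and LIBSHAPE_2.0 nodes):

/* shapes.c -- hypothetical sketch of GNU symbol versioning.
 * Build roughly as: gcc -fPIC -shared -Wl,--version-script=shapes.map shapes.c
 * where shapes.map defines the LIBSHAPE_1.0 and LIBSHAPE_2.0 version nodes. */

/* New implementation: becomes the default version of "drawcircle". */
int drawcircle_v2(int x, int y, int r, int line_width)
{
    (void)x; (void)y; (void)r; (void)line_width;   /* real drawing code here */
    return 0;
}

/* Old entry point, kept for binaries linked against the 1.0 library. */
int drawcircle_v1(int x, int y, int r)
{
    return drawcircle_v2(x, y, r, 1 /* the old fixed line width */);
}

/* Bind both implementations to the single public name "drawcircle":
 * already-built programs keep resolving to v1, newly linked ones get v2. */
__asm__(".symver drawcircle_v1, drawcircle@LIBSHAPE_1.0");
__asm__(".symver drawcircle_v2, drawcircle@@LIBSHAPE_2.0");

glibc itself uses this mechanism to keep old binaries running across many of its own interface changes.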
If the purpose of the Mono project was to create a compatibility layer to run unmodified Windows .NET programs on Linux, you would be right. It will never be possible to adapt the thousands of APIs Microsoft will create for .NET completely to Linux. The best you can hope for is that Mono will be able to run Windows .NET applications that were written with portability in mind.
But that is not the purpose of Mono. The real purpose of Mono is to make the .NET infrastructure available for Linux. And the .NET infrastructure is a really fine piece of work that is ECMA-standardized and can thus be implemented by everyone without paying license fees.
The higher-level APIs of .NET are another matter. They are not standardized, and Microsoft might be able to use patents to stop third-party implementations. But who cares? The higher-level APIs of .NET are not that good anyway. Some of them, such as Windows.Forms, just plain suck! We (the open source community) can build our own superior high-level APIs on top of the excellent .NET infrastructure. But we *need* that infrastructure. The Java VM is not an option since it lacks many important features. So it would be really stupid not to implement the .NET ECMA standards.