As has been reported on LWN recently [OSNews coverage], Andrew Morton (one of the core Linux kernel developers) has been heard to worry that bugs are being added to the kernel more quickly than they are being fixed. But it is hard to know for sure. In an attempt to obtain a little more data on the problem, Andrew has asked LWN to run a survey of its subscribers. The results will, hopefully, shed some light on how a wider part of the community sees the kernel quality issue; they will be discussed at the upcoming kernel summit.
I’m not sure how I would interpret the results of this survey. I’m inclined to believe that the average LWN reader, much like the average OSNews reader, thinks they know more about the Linux kernel than they really do. I tend to update my kernel pretty often, but I don’t feel that really puts my finger on the incremental quality changes that occur over time.
I recently worked in the quality area for a large UNIX distributor, and the only reliable metrics for quality are based on defect rates. You can look at internally reported defects, customer reported defects, defect rates leading up to a release… you can classify defects by cause, impact, severity, how they were found…
Knowing how the community perceives quality is useful for marketing, but as the maintainer of the development branch of the Linux kernel, that’s not Andrew’s primary responsibility. If he wants to get a better sense of the quality of the Linux kernel, he’ll have better luck scripting the bug database than surveying LWN readers.
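To make that concrete, here is a toy sketch of the kind of tallying I mean. It assumes a hypothetical CSV export of the bug database with one "id,severity,status" record per line – that format is made up for illustration, not the export format of any real tracker:

/* Toy sketch: tally defects by severity from a hypothetical CSV
 * export of a bug tracker. Assumes one record per line in the
 * form "id,severity,status" -- a real bug database's export
 * format will differ. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    int critical = 0, major = 0, minor = 0, other = 0;

    while (fgets(line, sizeof(line), stdin)) {
        if (strstr(line, ",critical,"))
            critical++;
        else if (strstr(line, ",major,"))
            major++;
        else if (strstr(line, ",minor,"))
            minor++;
        else
            other++;
    }
    printf("critical: %d  major: %d  minor: %d  other: %d\n",
           critical, major, minor, other);
    return 0;
}

Run it as "./tally < bugs.csv" and plot the counts release over release – that tells you far more about quality trends than any reader survey.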
Software metrics are a perilous business on a good day.
If Morton can get a qualitative check on the mood ring from Some People Who Might Know, then it’s possible that some useful management decisions could come out of it.
IOW, if he’s not trying to be rigorous, then why be pedantic?
I’m sure I speak for a lot of people when I express my appreciation for all those kernel developers who slog through and solve bugs. They truly are unsung heroes.
I hate these ABI changes in Linux with every damn release. And people say Windows drivers stop working in new versions of the system! Why do I have to rebuild drivers that came from outside the kernel with every release?
Maybe I’m having trouble understanding your point, so please bear with me, but: the ABI is the Application Binary Interface. If you really have to rebuild the driver against the source of a new kernel release, you must be in possession of the sources for the driver (which, according to your statement, comes from outside the kernel trees), which leads me to the following scenarios:
– The driver has an (open source) license which is incompatible with the GPL v2 (I can’t see how a stable ABI could cure this, btw), which basically means that you can’t redistribute the module if you compiled it against the GPL’d Linux kernel. So this module would have other, more practical problems than the rebuild-it-yourself process.
– The driver wasn’t accepted into one of the main trunks of the Linux kernel. If the driver is OK, this would be a rather rare case, and I would speculate that (given there were reasons for not accepting the driver in question other than an incompatible license or intellectual property concerns) there are issues with this driver beyond the compile-it-yourself troubles (hint: quality issues).
– The driver is for a very, very rare device (I had this case once myself, with a driver for a custom-made CCD chip interface) and the driver vendor sees no point in handing the driver to the kernel devs for inclusion. (Nevertheless, lack of a stable kernel ABI was our smallest problem back then 🙂 )
Anything missed?
Probably you meant that every time you change the kernel, you have to rebuild the wrapper for your binary-only kernel modules (which would explain why you complained about the unstable ABI).
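For clarity, this is roughly what a minimal out-of-tree module looks like – a toy example, not any real driver. It compiles against the headers of one specific kernel tree, which is exactly why it has to be rebuilt when that tree changes:

/* hello.c -- a minimal out-of-tree kernel module, just to show why
 * such modules get rebuilt for each kernel: they compile against
 * the headers of one specific kernel tree, and the in-kernel
 * interfaces those headers describe may change between releases.
 *
 * Build with a one-line kbuild Makefile:
 *     obj-m += hello.o
 * then:
 *     make -C /lib/modules/$(uname -r)/build M=$(PWD) modules
 */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);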
This issue has been discussed elsewhere (a good summary of the kernel devs’ point of view: http://lwn.net/Articles/159313/), or as Linus once put it:
“Basically, I want people to know that when they use binary-only modules, it’s THEIR problem. I want people to know that in their bones, and I want it shouted out from the rooftops. I want people to wake up in a cold sweat every once in a while if they use binary-only modules.” – Linus Torvalds, 1999
You may agree or disagree with this policy, but since it’s their work, and they do not break the ABI maliciously (at least that’s not the impression I have, YMMV though), I would suggest respecting their point of view in these matters.
Regards
The driver is open source. It’s a driver for the rt2500 wireless chip. In SuSE 10.0 everything worked OK when a new kernel was installed – the driver was simply rebuilt. In SuSE 10.1 they removed the driver. I really tried to get used to several Linux distros (Mandriva, Slackware, SuSE, KateOS, Fedora) and none of them supported WLAN properly – I mean that there was no good user interface. KDE’s WiFi Manager works OK, but I want automatic switching between networks the way Windows does it, and Windows does it the right way. So I got rid of Linux.
>> “Basically, I want people to know that when they use binary-only modules, it’s THEIR problem. I want people to know that in their bones, and I want it shouted out from the rooftops. I want people to wake up in a cold sweat every once in a while if they use binary-only modules.” – Linus Torvalds, 1999
What crap. Just because L.T. said this doesn’t mean it’s true. I don’t give a damn whether a driver is open source or not – a device should just work when plugged in. I don’t want to run stupid hardware managers – the setup that came on a CD from the vendor should be enough, or the OS should automatically search for a driver. I’m not talking about old non-PnP hardware, which I threw away a couple of years ago.
I had the impression (second-hand, since I do not own such a wifi card) that the rt2x00 drivers (http://rt2x00.serialmonkey.com/wiki/index.php/Main_Page), the successors of the rt2500 module family, made it into the 2.6.x kernel series; I might be wrong on this, though. Ralink took the right steps in opening up their driver, so kudos to them, but it literally takes time for things to stabilize. I know this advice often comes across as an arrogant rant by an elitist, but I have learnt (the hard way) that it’s the sad truth: if you buy hardware and want it to work with Linux natively, research before the purchase. And if you have existing hardware that doesn’t work with Linux, start (learning) to tinker, or wait until someone else has made progress. And yes, giving input (aka complaining) to the maker of your distro that driver xy has stopped working is, in my book, part of the tinkering process.
My point was that when a module is stable enough and is compatible with the license and the policy of the kernel devs, it’s usually incorporated into the kernel. And no, just because Linus Torvalds says something doesn’t mean it’s automatically true. But the policy regarding binary-only modules is something at the heart of kernel development, and – for reasons stated in the linked documents sent in reply to your original post – the kernel devs have a strong opinion against supporting the introduction of binary blobs into the kernel.
You seem to have tried to acclimate yourself to Linux, and it’s sad you’re disappointed. But if “just works with binary drivers” is the thing you’re after, then IMHO Linux is – at least in its current incarnation – not particularly suited to your needs. Which should be perfectly OK, since the goal of the kernel devs was to create an open source operating system kernel under a free software license, not to take over the world (e.g. gain a 90%+ dominant market position).
BTW: the lack of userland tools to manage the hardware is not particularly a kernel problem; from your post, these issues seem to be more on the usability front, which could use some improvements too, but that’s not the point of the article we’re discussing here.
Regards
Martin
I’m not saying that open source drivers suck. If the hardware maker cannot or really doesn’t want to release full source code for their driver, they could at least provide source for the kernel-dependent part and ship the kernel-independent part as a closed-source object that gets linked against it (it’s done that way with some winmodem drivers in SuSE).
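Something like this, just to show the shape of it – all names here are made up, not from any real driver, and it obviously won’t link without the vendor’s precompiled core.o, which is the whole point of the split:

/* glue.c -- hypothetical open-source glue layer for a split driver.
 * The kernel-facing half is distributed as source and rebuilt for
 * each kernel; it forwards work to a precompiled, kernel-independent
 * core (core.o, supplied by the vendor) through a small fixed
 * interface. All names are illustrative. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

/* Interface exported by the closed, precompiled core.o object. */
extern int  core_start(void);
extern void core_stop(void);

static int __init glue_init(void)
{
    /* Only this file touches kernel headers, so only this file
     * needs recompiling when the kernel changes. */
    return core_start();
}

static void __exit glue_exit(void)
{
    core_stop();
}

module_init(glue_init);
module_exit(glue_exit);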
Generally I’m angry about SiS chipsets, which suck. SiS doesn’t even provide a Linux 3D driver for their crap. Maybe one day I’ll try reverse-engineering their Windows driver. Screw the SiS driver’s license – I have the right to use what I paid for ($2500 for a laptop two years ago).
Standard answer:
http://lxr.linux.no/source/Documentation/stable_api_nonsense.txt
It’s the “standard” answer, but it’s still wrong. There have been stable in-kernel device interfaces in Unix since 1975, if not earlier, and Unix has suffered none of the problems Greg seems to think come from such things; nor has Linux yet shown an improvement in in-kernel driver interfaces over where Unix was as long ago as 1985.
The main reason why Linux in-kernel interfaces suffer so much churn is that what passes for “design” in Linux is reactive rather than pro-active.
The only ABI Linux has is the userspace/kernel ABI, and it hasn’t changed in a long time. There is no kernel/driver ABI.
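A toy illustration of why there can be no kernel/driver ABI without freezing internal structures (this is ordinary userspace C, not real kernel code; the struct is invented for the example). A binary module bakes struct offsets in at compile time; add one field to an internal struct and every offset behind it moves:

/* Illustration (not real kernel code) of why binary modules break
 * without a frozen ABI: struct offsets are fixed at compile time. */
#include <stdio.h>
#include <stddef.h>

/* Version A of a hypothetical internal structure. */
struct device_v1 {
    int id;
    void *driver_data;
};

/* Version B adds a field in the middle: every later offset moves. */
struct device_v2 {
    int id;
    unsigned long flags;   /* new in "the next release" */
    void *driver_data;
};

int main(void)
{
    printf("driver_data offset, v1: %zu\n",
           offsetof(struct device_v1, driver_data));
    printf("driver_data offset, v2: %zu\n",
           offsetof(struct device_v2, driver_data));
    /* A module compiled against v1 would read driver_data at the
     * old offset and land in 'flags' on a v2 kernel. */
    return 0;
}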
http://www.minix3.org/
Try it. Very small kernel. More reliable. Better security.
“Basically, I want people to know that when they use Linux, it’s THEIR problem. I want people to know that in their bones, and I want it shouted out from the rooftops. I want people to wake up in a cold sweat every once in a while if they use Linux.”
http://www.minix3.org/
Read the link about reliability
http://www.minix3.org/reliability.html
How do you want to argue with Linux zealots?
Even if their own kernel developers said it is full of bugs, they would deny it.
While I agree with the way the kernel devs are doing things for the most part, this one issue is a very sticky one. Let’s face it: this mostly boils down to hardware makers (video-card makers especially) not wanting to open their drivers. Considering that it is slowing mainstream acceptance of Linux, maybe the devs ought to consider a compromise. Having good hardware support is a benefit to Linux. And just because the devs don’t care about Nvidia or ATI doesn’t mean those companies aren’t important to the OS and the community. Think of the people who actually need 3D capabilities for their work, be it modelling or design or what have you. Anyway, just my 2 cents.
Slightly non-standard answer. You get to read this:
http://lwn.net/Articles/162686/
There are two worlds…
The Linux kernel devs, they only care about their kernel, nothing else.
Then there are the devs who try to assemble a Unix-like operating system out of hundreds of open source components that just happen to include a Linux kernel, resulting in over 500 different known implementations, each with its own ad-hoc decisions and incompatibilities.
For this reason, and because of the licensing issues, I am investigating making my future desktop OS a BSD such as PC-BSD.