Every year most Linux hackers attend a conference where they talk about all those topics only kernel hackers talk about. Papers of all the talks are now available, with a wide range of topics: NTPL, XEN, page cache performance, I/O scheduling, future ext3 development, VM, and more. Also, as every year, LWN’s excellent kernel summit coverage is now freely available.
Since this post is hard to have a flame war about, I will start things off with a question:
From reading the coverage of the Linux Kernel Summit, it seems that kernel development is increasingly being pushed around by 3rd party hardware and software vendors. Although the outside interests would like to have it their way, the kernel developers remain grounded in their commitment to the community model. Do you think that the Linux kernel developers should continue encouraging 3rd parties to play by the community’s rules (and risk slowing or stalling Linux adoption), or should they allow 3rd parties to retain control over mainline kernel modules specific to their products? To what extent?
I recommend reading http://lwn.net/Articles/144269/ for some background.
The link you point to is a great example of the contradicting goals of hardware manufacturers and what Linus wants.
Until there is a little give and take on both sides, there are always going to be driver issues with Linux. And until there is a stable kernel API, nothing is likely to change.
> The link you point to is a great example of the contradicting goals of hardware manufacturers and what Linus wants.
> Until there is a little give and take on both sides, there are always going to be driver issues with Linux. And until there is a stable kernel API, nothing is likely to change.
It is a great summary of the conference.
As for give and take… I don’t see a good way to do it except for:
* Move all proprietary bits to firmware.
* Start development on 3rd party drivers in the open as part of the mainline kernel. (Includes treating the drivers as open source even prior to public disclosure. This eliminates some problems with the transition.)
* Drop the idea that the ABI will eventually be stable.
That last one is important for many kernel developers because:
* If the ABI were stable, closed drivers would become common and would cause developers to spend time tracking down phantom problems with drivers they don’t have the source to. (This is the reason for the kernel tainted flag.)
* If the ABI were stable, there would be one fewer reason to bring the drivers into the mainline kernel source.
* If the ABI were stable, it could not be improved.
> Do you think that the Linux kernel developers should continue encouraging 3rd parties to play by the community’s rules (and risk slowing or stalling Linux adoption), or should they allow 3rd parties to retain control over mainline kernel modules specific to their products? To what extent?
If the source will be open, and 3rd parties want to ‘retain control’ over mainline kernel modules…they can, though not in the sense of being the only place to go. They have to put up or get out of the way. Basically, the whole idea of ‘control’ in OSS is based on deed; if you do it, you have control…if you don’t or others do a better job…you do not have as much control.
For example: Next Tuesday, if Linus decides that he is more interested in finger painting and drops any future coding, someone will replace him. He is not the leader of the Linux kernel project now because he started the project. He is the leader because he leads well and is trusted. He is in control based on merit. He is in control based on effort. He is in control based on trust.
Any other person or group — corporate or private — has a chance to do the same. If they are able and willing is another question entirely.
If you are talking about closed source, they can use methods like Nvidia’s and some of the binary software modem drivers do; release a binary (closed) with an open sourced kernel driver.
What they lose is:
* The ability to get feedback and source changes.
* Any end user or developer support from the kernel developers.
What they gain is:
* The ability to keep the source closed. In some cases, this is necessary though the legitimate cases are very small.
OK, after going back and reading the fine article, I do have one basic idea besides the normal knee-jerk reactions to people who only like to stir hornets.
To set the stage, here are what seem to be the core issues that have 3rd parties interested:
* ABI stability specifically to support closed source.
* Reuse of object code developed for Windows. (Not specifically stated, but implied in a few places by multiple people.)
* Keeping business plans secret as long as possible.
The problems that the mainline kernel developers have with these are:
* ABI + closed source: If something goes wrong, the source isn’t available to debug the problem. This is much of the problem with Windows drivers. To solve that, Microsoft introduced a bureaucratic method to certify drivers on that platform.
* Reuse of Windows binary object code: The problem with this is that the decisions made to develop a Windows driver aren’t necessarily optimal for Linux. Ham-fisted approaches to force the OS to do what is required can cause problems that lead us right back to the same problems mentioned above. Also, since Windows object code targets only x86 processors, any bugs that are hidden under x86 may show up when the code is recompiled for other platforms… if it is recompiled at all.
* Secrets in general: Can’t write code or debug if there are too many.
The solution that seems to make the most sense is to treat the commercial driver like a public project as soon as possible — even well before it can be disclosed to the public for business reasons.
Before release to the public, get good developers, give them time, and write code based on the idea that it will be ported. Start development in an open manner before public disclosure, put any proprietary bits that do not tightly interact with the OS in firmware, and transition to public development with working code and cheap developer samples. At that point, getting the code included in the mainline kernel is more likely, as the parts that matter to the device users are fully disclosed. Documentation of the hardware would also be nice.
> Do you think that the Linux kernel developers should continue encouraging 3rd parties to play by the community’s rules (and risk slowing or stalling Linux adoption)
I think quality comes first. The Linux kernel is maintained by skilled kernel developers who have fun. Third parties could certainly help by contributing driver code and other useful stuff that has been proprietary for years now. The community model doesn’t discriminate; in fact, all can benefit from it. It has its purpose, and it ensures monopolistic entities can’t stall innovation and development later on.
> or should they allow 3rd parties to retain control over mainline kernel modules specific to their products? To what extent?
More specific?
It would be nice if a lot of proprietary drivers became OSS under a licence in which most people can find their benefits: the official drivers instead of reverse-engineered ones. I think it pays off for the hardware vendors. Take, for example, Nvidia’s excellent driver support. When someone who runs Linux is going to buy a graphics card, chances are almost 100% it’s going to be an Nvidia-chipset-based one.
The problems that 3rd parties have with supporting Linux include:
* Supporting multiple kernel versions and/or patchsets
* Supporting slowly maturing kernel subsystems
* Firmware and associated GPL issues
* Features that should not have to be driver-specific, e.g. handling failover
* Driver certification vs. community control
* Corporate code review process lags kernel development
* Varying coding conventions/quality, particularly with respect to embedded devices
The “shining example” noted, the nVidia graphics driver, is an exception to the rule. Normally the kernel maintainers do not allow binary kernel modules. For one thing, Linus has always rejected the idea of a stable ABI (note: he doesn’t necessarily reject the idea of stable APIs in theory, only in practice), so it is up to nVidia to fix its driver when the kernel breaks its ABI.
The problem with OSS drivers is that for a lot of hardware vendors, releasing driver source code is not harmonious with their business model. At the kernel summit QLogic mentioned that Linux deployments represent a “double-digit” percentage of their sales, so I think that market share and reluctance to open source code are related.
I think a central theme to 3rd party Linux support is the rapid pace of development. The Linux model says: develop rapidly, release often. Most other models say: develop as the market demands, release no more than twice per year. Linux represents a moving target in many ways.
> The “shining example” noted, the nVidia graphics driver, is an exception to the rule. Normally the kernel maintainers do not allow binary kernel modules. For one thing, Linus has always rejected the idea of a stable ABI (note: he doesn’t necessarily reject the idea of stable APIs in theory, only in practice), so it is up to nVidia to fix its driver when the kernel breaks its ABI.
Talking about the current situation, Nvidia’s drivers are not in the mainline kernel. The parts that are available include a GPLed part (OSS wrapper) and a closed part (binary).
If the drivers are in the mainline kernel, they will be checked (if only modestly) with each kernel release. Since not everyone has every piece of hardware, the contributors to the mainline kernel modules are mostly responsible for making sure they don’t break when the kernel ABI is updated.
Do you have recommendations for changing this, and if so, how?