As the number of Linux kernel contributors continues to grow, core developers are finding themselves mostly managing and checking, not coding, said Greg Kroah-Hartman, maintainer of USB and PCI support in Linux and co-author of Linux Device Drivers, in a talk at the Linux Symposium in Ottawa Thursday. In the latest kernel release, the most active 30 developers authored only 30% of the changes, while two years ago, the top 20 developers did 80% of the changes, he said.
It’s great to see that in spite of all the big companies contributing to Linux, the “community” has an important role in its development too.
I really thought that lone hackers were almost extinct and that the kernel was almost fully controlled by 4-5 companies that hire professional developers. But the numbers show quite the contrary: individual contributors are growing in number every year.
The Linux kernel is really an amazing project. Probably the crown jewel of Open Source (and there are lots of incredible Open Source projects out there!).
But you’ve got to wonder how many of those contributors are on the payroll of big companies.
Maybe if they didn’t try to have *every* Linux driver in the tree, it would be workable.
Ideally the driver APIs would be stable and would allow forward and backward binary compatibility. Then you could let other people maintain the drivers.
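To make the idea concrete, here is a rough sketch in C of what a versioned, stable driver ABI could look like – the names (hypo_driver_ops, HYPO_ABI_VERSION) are made up for illustration, not real kernel symbols. The rule would be: existing fields never change, new ones are only appended, and the core checks the declared version before touching them.

    /* Hypothetical versioned driver ABI: fields are only ever appended,
     * so a driver built against v1 still loads against a v2 kernel. */
    #define HYPO_ABI_VERSION 2

    struct hypo_driver_ops {
        unsigned int abi_version;          /* filled in by the driver */
        int  (*probe)(void *device);
        void (*remove)(void *device);
        int  (*suspend)(void *device);     /* appended in ABI v2 */
    };

    /* The core only uses fields the driver's declared ABI version knows about. */
    static int hypo_call_suspend(const struct hypo_driver_ops *ops, void *dev)
    {
        if (ops->abi_version >= 2 && ops->suspend)
            return ops->suspend(dev);
        return 0;   /* older driver: nothing to do */
    }

The cost of this approach is exactly that kind of bookkeeping, which is the trade-off the replies below argue about.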
The same goes for Linux software in general, really – if there were any sort of binary compatibility, you wouldn’t need the huge repositories of Debian, Gentoo, etc.
Actually, there is binary compatibility. It is perfectly possible to create binaries that run on Gentoo, Debian, Fedora, Linspire, Arch Linux, Slackware, LFS and whatever other distro you can think of.
Of course you can occasionally run into missing .so files, but that is no different from missing-library problems on OS X or Windows.
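As a trivial illustration (the file name hello.c is arbitrary; the build lines are just ordinary gcc invocations, nothing distro-specific), a program that only depends on the C library runs unchanged on any of those distros, and can even be linked statically so it needs no .so files at all:

    /* hello.c – depends only on libc; build with either of:
     *   gcc -o hello hello.c           (dynamic: needs only libc.so.6)
     *   gcc -static -o hello hello.c   (static: needs no shared libraries)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("same binary on Debian, Gentoo, Fedora, Slackware...\n");
        return 0;
    }

The missing-.so problems usually only start once you depend on higher-level libraries whose versions differ between distros.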
Binary compatibility slows development. You get stuck unable to fix API issues because you have to maintain backwards compatibility with everything.
Having every driver in the tree means that needed API changes can be made quickly and all drivers can be updated easily.
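To illustrate with made-up names (hypo_bus_register and hypo_device are not real kernel APIs): suppose a core function needs a new flags argument. Because every driver is in the same tree, a single patch can change the prototype and fix every caller, and the whole tree builds again immediately.

    struct hypo_device { const char *name; };

    /* The old prototype was:  int hypo_bus_register(struct hypo_device *dev);
     * The new one adds a flags argument. */
    int hypo_bus_register(struct hypo_device *dev, unsigned long flags)
    {
        (void)dev;
        (void)flags;
        return 0;
    }

    /* ...and in the same patch, every in-tree driver's call site is updated: */
    int hypo_example_driver_init(void)
    {
        static struct hypo_device dev = { .name = "example" };
        return hypo_bus_register(&dev, 0);
    }

An out-of-tree driver built against the old prototype would simply break, which is the cost a stable API is meant to avoid.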
Linux is a monolithic kernel: most drivers run in kernel space and can therefore panic the kernel, so it’s good to have the kernel developers checking that drivers aren’t written insanely.
In the Windows world there are many very badly written drivers from cheap hardware companies, and they cause instability in the platform.
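To see why the checking matters, here is a deliberately broken sketch of a kernel module (the module boilerplate is real, the bug is the point – not something to build or load): because it runs in kernel space, the NULL write below doesn’t just kill one process, it can oops or panic the whole machine.

    #include <linux/module.h>
    #include <linux/init.h>

    static int __init bad_driver_init(void)
    {
        int *p = NULL;
        *p = 42;   /* in userspace this is a segfault; in the kernel it's an oops/panic */
        return 0;
    }

    static void __exit bad_driver_exit(void)
    {
    }

    module_init(bad_driver_init);
    module_exit(bad_driver_exit);
    MODULE_LICENSE("GPL");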
First, I don’t see how your subject relates to your post. If drivers were maintained out-of-tree, the kernel would still be monolithic.
Personally, I think that Greg argues the stable API debate backwards. He says that Linux doesn’t need a stable API because everything is maintained in-tree, and that the lack of a stability guarantee is what lets them improve the kernel at a faster rate.
This is true, but it doesn’t explain why it’s worth maintaining everything in-tree. For that, you have to consider the quality angle. Even if you have a stable API policy, there are always going to be bugs in kernel components, and their behavior may change in very subtle ways. In order to ensure a high-quality kernel, you have to test development builds as complete units, including the drivers.
At the end of the day, drivers and other modular components get loaded into the kernel address space and run in kernel mode. Their quality is as critical to system stability as the core kernel code, and they have to work together. That’s why the world’s most sophisticated distributed kernel development project maintains as much code as possible in-tree and highly encourages vendors to do the same.
Binary compatibility for userspace applications, on the other hand, would be a good thing for the Linux community. It is hard to achieve, though, for the opposite of the reason the kernel project works so well: the userspace library stack is split amongst numerous projects, each with its own source tree and release cycle.
This is why package compatibility is only provided at the distribution level. There’s no universal project for making sure the libraries that make up a Linux system are developed and released as a cohesive product. Each distributor fulfills this role, and none of them seems to be in a position to become the consensus unified Linux distribution.
Instead we have about 3-4 major Linux distributions, each with their strengths and weaknesses. If the kernel project embraced out-of-tree development, we’d likely have 3-4 major kernel distributions (and many minor ones), each with their strengths and weaknesses. I prefer the unified mainline kernel with its track record of rapid improvement and high quality.
Maybe a simple C program with no dependencies, but for anything more complicated, no, you can’t. See for example: http://trac.autopackage.org/wiki/LinuxProblems
Oh come on! When was the last time you had a DLL missing in Windows?
Yes, but they have to be updated by the kernel maintainers! Hence the current situation.
I was actually referring to their development model as monolithic. Sorry that was very ambiguous.
True but at least you have the option of using them. Better than nothing!
I don’t see why. It would be easy (or at least possible) to make it like Xorg is now – have a core package, and then separate driver packages, and some way of automatically installing them: “You have inserted device X, this requires the package Y to be installed. Continue?”
Userspace binary compatibility should be their priority, though. Windows manages it! Windows XP can run Warcraft 2 with no help – try running something that old on Linux! Also try compiling something on Ubuntu and then running it on Debian stable. Impossible.
You can run stuff very much older on Linux, you just may not be able to with a DEFAULT installation. Mostly it comes down to older libc’s or libstdc++, but if those are bundled, it will work perfectly.
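For libstdc++ and other ordinary libraries, the bundling trick can be as simple as an rpath pointing next to the binary (bundling glibc itself is trickier, since the path to the dynamic loader is baked into the executable). A minimal sketch, assuming you ship a lib/ directory alongside the program:

    /* oldapp.c – link with:
     *   gcc -o oldapp oldapp.c -Wl,-rpath,'$ORIGIN/lib'
     * so the dynamic linker searches ./lib (the bundled, older libraries)
     * before the system ones. */
    #include <stdio.h>

    int main(void)
    {
        puts("resolved against the bundled libraries in ./lib first");
        return 0;
    }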
A lot of developers code because they love to create, and once they’re pushed into what amounts to a management position, they quickly become unhappy. I know I’ve worked under at least one development manager (who still codes) who has deliberately avoided moving higher into management, simply because he doesn’t want to – he just wants to code. What I see here is the same sort of thing: depending on the developer, they may leave the project for any number of reasons, even if they’re in the top echelon of the development process, because they no longer like their role, however necessary it is to the overall success.
I wouldn’t be surprised if a lot of the people who have become senior in the project end up leaving it, or at least taking on much smaller roles, because of this, with a steady churn of people through the gatekeeper roles as a result.