Linus Torvalds has threatened that if developers add ‘last-minute things’ to the next version of the Linux kernel he will ‘refuse to merge, and laugh in their faces derisively’.
It was obviously a tongue-in-cheek comment intended to make a point in an amusing fashion. Everything I have ever read about the man indicates Linus has a good sense of humor.
Good sense of humor, indeed. Which reminded me of Santa Claus and Tooth Fairy, the real fathers of Linux™.
This is just common sense. Of course Linus must get tough about timing issues, especially given the widespread use of Linux in corporate environments.
Any slip due to people hurrying in late patches is unacceptable, so Linus is doing the right thing.
I just hope the rest of the community can see this.
Linus threatens a lot of people…
But at the end of the day he has a wife and kids, more than any linux hacker could dream of.
A classic tale, but why news? “Kernel coders get tough on Torvalds” would have been news (and not before time, some might say).
Right he is
Otherwise people would start to moan that Linux isn't enterprise-ready because Linus allows last-minute patches to slip into the mainstream kernel. Then again, some people do that anyway.
I think it is time for the kernel maintainers to stop adding features and instead start stripping out stuff that is not needed, reducing the bloat. I have not used Linux in a while, so I might be totally off base here, but I regard it as the most technically advanced OS there is. I have read more than a few people complaining about bloat in the kernel. I don't know how true that is, but I think it is time to take stock of what the current kernel has and decide what to get rid of.
Not the most advanced, but the most complex: thousands of moving parts that don't fit together too well.
Quote: “Not most advanced but most complex. Thousands of moving parts that doesn’t fit too well together.” (bending unit)
I agree Linux may not be the most advanced in some areas, but as with all GPL code, it ends up somewhere else. That isn't a bad thing, but complex structures of code can be a problem. That's why the module-based kernel works so effectively. Most of the so-called bloat is just due to the vast amount of hardware Linux supports, plus security implementations to prevent things like buffer overflows.
This also means developers will be able to test patches for longer before submitting them.
I disagree about kernel bloat. If bloat is an issue, it has to do with the distro compiling support for every piece of hardware in existence into the kernel; simply compile a kernel that fits your system. (If you are complaining about bloat you probably know how to do this; if you don't, there are many references that show you how.)
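For anyone who hasn't done this before, the trimming process sketched above looks roughly like the following. This assumes an unpacked kernel source tree; the path is hypothetical, the exact make targets vary a bit between kernel versions, and these commands only make sense inside a real source tree:

```shell
# Inside an unpacked kernel source tree (path is hypothetical):
cd /usr/src/linux

# Start from the current configuration, then walk through the
# options and disable drivers/subsystems your machine doesn't use.
make oldconfig
make menuconfig

# Build the trimmed kernel and its modules, then install both.
make -j4
make modules_install install
```

The payoff is a smaller kernel image and faster builds, at the cost of having to reconfigure if you add hardware later.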
I agree that it is time to stop adding features. I think they should go to an official yearly kernel release with new features; otherwise it should simply be bug fixes and security updates.
Linux has advanced to a point where if they simply perfected the existing technologies it would be a force to be reckoned with in the marketplace. In the corporate world, bugs simply are not tolerated.
“I disagree about kernel bloat, if bloat is an issue it has to do with the distro compiling support for every hardware in existance into it, simply compile a kernel that fits your system (if you are complaining about bloat you probably know how to do this, if you don’t, there are many references that show you how to)”
Well, most of the canned distributions compile everything (that can be) as modules, and today coldplug/hotplug is quite good at loading (if not so good at unloading) modules on demand at boot/runtime. So, the modules take up disk space on /, but don’t inflate the memory-resident kernel image.
I’m squarely in the optimal configuration camp: disable everything I won’t ever need, compile statically everything I always use at boot, and compile anything I would plug into my system or use periodically as modules. But 99% of users just want to have all available support ready for autoloading if they happen to need it.
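The built-in versus module distinction described above is visible right in the kernel's `.config` file: `=y` entries are compiled into the memory-resident image, while `=m` entries become loadable modules that sit on disk until needed. A minimal sketch (the config fragment below is made up for illustration):

```shell
# A made-up .config fragment for illustration
cat > sample.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_USB_STORAGE=m
CONFIG_SND_HDA_INTEL=m
# CONFIG_INFINIBAND is not set
EOF

# =y options are linked into the kernel image itself;
# =m options live on disk as modules until loaded on demand.
echo "built-in: $(grep -c '=y$' sample.config)"
echo "modules:  $(grep -c '=m$' sample.config)"
```

This is why a modules-heavy distro kernel costs disk space rather than RAM: only the `=y` portion is always resident.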
It is a fallacy to talk about disk consumption as a factor contributing to the buzzword known as “bloat.” Software is only bloated if it consumes vastly more memory or CPU cycles than it should (or if it links to other software of this type). You might be able to make a case for isolated cases where software accesses I/O resources in a bone-headed manner, but that’s more “crappy” than “bloated.”
“I agree that it is time to stop adding features, I think they should go to an official yearly kernel release with new features, otherwise it should simply be bug fixes and security updates.”
I respectfully disagree. The OSS development model (and the Linux kernel project in particular) has always benefited from release-early/release-often. I think that the kernel development process is headed in the right direction for two key reasons: 1) the development tree (-mm) is now a branch, rather than a fork, so patches that test favorably on -mm are likely to integrate nicely with the production tree, and 2) a distinction is being made between features and fixes: the former can only be merged in a specific window, and the latter are merged in frequently-updated cumulative patches.
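The branch-versus-fork point can be sketched with plain git (the repository, branch names, and messages below are invented for illustration; the real -mm tree was managed as patch series, not necessarily like this):

```shell
# Toy repository illustrating a development branch that merges
# back into the production line (all names are invented).
git init -q demo
cd demo
git config user.email "dev@example.com"
git config user.name "Dev"

git commit -q --allow-empty -m "production: v2.6.x"
main=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git version

# Features bake on the development branch first...
git checkout -q -b mm
git commit -q --allow-empty -m "feature: tested on -mm"

# ...and only merge into production during the merge window.
git checkout -q "$main"
git merge -q --no-ff mm -m "merge window: pull tested feature"
git log --oneline -1
```

Because -mm shares history with the production tree, a patch that survives testing there merges cleanly instead of needing to be reworked, which is exactly the fork problem the parent post describes being avoided.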
There are always new features to be added to the kernel, it will never be “done.” One of the aspects that proprietary OS development managers fear the most about OSS and Linux in particular is that it is gaining features, stability, and credibility at a completely unprecedented rate. The only advantage that proprietary development houses have over OSS is centralized dynamic testing infrastructure. Projects like Linux have been taking advantage of the best-of-breed static analysis tools before each release candidate. But I wonder if the number of downloads for -rc kernels or -mm kernels is scaling with increases in production kernel downloads. Is the community, per user, getting more or less effective at testing non-production kernel releases?
But the “official kernel release” you mention already exists, in a sense. Just use the ones built for RHEL or SLES, they get far more testing than the vanilla kernel.org releases.
“Linux has advanced to a point where if they simply perfected the existing technologies it would be a force to be reckoned with in the marketplace. In the corporate world, bugs simply are not tolerated.”
No, it hasn’t advanced enough. Enterprise customers demand features like variable/large page sizes, extensive dump capabilities, dynamic hardware load management and partitioning, and checkpoint-restart/live-migration/high-availability, all of which Linux either lacks or is only just starting to support.
No more commit for you! [you must imitate the soup nazi]
Hellllllooooooooooo…..
Newman.
*incredulous look*
You gave me fleas!
This bit of jollity merely reminds contributors that they need to occasionally brush away the cobwebs of laziness and appreciate someone else’s schedule besides their own. It is good when someone reminds me this way. It is good for others, too.
This is a developer issue, not a user issue. Users are interested in features in general, and whether their own desired feature works, be it a program, driver, or GUI. Developers need some time for ‘FCT,’ which is Final Checkout and Testing. Their code, which might work perfectly in an environmental vacuum, needs to be tested in the context of a working kernel.
Sometimes you have to get tough when herding cats…