News reports out of LinuxTag late last week have Andrew Morton, the second highest ranking Linux kernel developer, calling for a bug-fixing cycle to cut down on the growing number of bugs found in the latest kernel release. NewsForge contacted the number one man on the Linux project, Linus Torvalds, who acknowledged it might be the right time for a “bug cycle”.
I don’t necessarily disagree with either LT or AM, but I think AM pretty much backed LT into a corner.
I greatly respect the Linux guys for simply standing up and saying that the bugs are real and there are a pile of them. It's a kernel, not a complete enterprise-class OS. A kernel needs to be rock solid.
Linux is not rock solid.
I look at the cost of Red Hat Enterprise Linux and I look at free Solaris 10 (real UNIX) and the OpenSolaris community, and I cannot find a reason to fork out money for Red Hat Linux anymore. I would rather be with a team of engineers that publish their code and their bugs and deal with them in a businesslike fashion, while also giving the complete OS away, where support can cost 30 cents a day.
Fix the Linux bugs, sure.
What Linux may really need is a complete reinvention in order to survive its own popularity and the jingoistic blind following of people who pretend to do real engineering and change management.
> I greatly respect the Linux guys for simply standing up and saying that the bugs are real and there are a pile of them. … A kernel needs to be rock solid.
I agree entirely; I didn't mean to suggest otherwise.
> It's a kernel, not a complete enterprise-class OS.
Enterprise-class OSes are built around it.
> Linux is not rock solid.
Works fine here, AM’s comments notwithstanding.
> I look at the cost of Red Hat Enterprise Linux and I look at free Solaris 10 … and the OpenSolaris community, and I cannot find a reason to fork out money for Red Hat Linux anymore.
There are cheaper Linux options than RHEL. Fedora Core, for instance.
> (real UNIX)
“Fake UNIX” (Linux, BSD) seems to be good enough for Google and Yahoo, and plenty of former “real UNIX” customers. Good enough for Hotmail, too, until the OS was changed for idiotlogical raisins. Haven't you heard? The last real UNIX was V7. If you're after “real UNIX”, why did you ever consider Linux in the first place? Good luck getting V7 to run on AMD64, and bid a fond farewell to logical volume management, journaling filesystems, tab completion, and Unicode support.
> I would rather be with a team of engineers that publish their code and their bugs and deal with them in a businesslike fashion, while also giving the complete OS away, where support can cost 30 cents a day.
I hear LT and AM are pretty businesslike. Linux distributors give complete OSes away, not the kernel team. I doubt the kernel team at Sun are in charge of the complete OS either. FYI, the kernel is published in its entirety at http://www.kernel.org, bugs and all. I agree it’s not very “businesslike” to admit your product’s failings, but then you claim to admire the fact that they did.
> What Linux may really need is a complete reinvention in order to survive its own popularity and the jingoistic blind following of people who pretend to do real engineering and change management.
What, you mean like those unprofessional people at HP and IBM? Shame on them for employing script-kiddies to work on the Linux kernel, when they could be so gainfully employed working on AIX and HP-UX, huh?
If a complete re-write/change to how the kernel is developed were needed, I’m sure it would happen. There have already been significant changes to the way it is developed and code is written and signed off.
I’ll take this as notification you’re leaving the Linux community. Goodbye, and good luck getting your hardware to run on OpenSolaris.
Over the past seven years I have tried on a number of occasions to work with Linux in an open source way. I found the Linux from Scratch project and thought it was a gift. Here was a way to walk through the entire build process step by step and to actually create a Linux distro. In my mind that was the ultimate Linux experience, and the most educational.
Over a period of six months in 2002 I tried to get Linux from Scratch to build on my quad-CPU SparcStation 20. I had a copy of Red Hat 6.2 (Zoot) for Sparc; that installed well enough and was to be the base for everything that followed. I continually ran into x86-isms that stopped my progress, but I worked at it for six months. I tried again in 2003, and I stayed in contact with the LFS people, made donations, etc.
After much work I saw that this open source idea was all very well and nice, but it didn't work if you stepped off the beaten track the slightest bit. One should be able to compile and install binaries from pure code on any architecture provided that the hardware-specific modules are in place. The build of Linux from Red Hat proved that I could run everything on Sparc. But I could not build everything myself.
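To give one contrived example of the sort of x86-ism I mean (this is an illustration I am making up here, not actual LFS or kernel code): code that assumes every CPU tolerates unaligned memory access compiles fine everywhere, but dies on SPARC.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The "x86-ism": pulling a 32-bit value straight out of a byte buffer.
 * x86 quietly tolerates the unaligned load; a strict-alignment CPU such
 * as SPARC kills the process with SIGBUS when p is not 4-byte aligned. */
static uint32_t read_u32_sloppy(const unsigned char *p)
{
    return *(const uint32_t *)p;
}

/* The portable version: copy the bytes, safe on every architecture. */
static uint32_t read_u32_portable(const unsigned char *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

int main(void)
{
    unsigned char buf[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    /* buf + 1 is deliberately misaligned. */
    printf("0x%08x\n", (unsigned)read_u32_portable(buf + 1));
    (void)read_u32_sloppy; /* shown for contrast; calling it on buf + 1
                              would crash on SPARC */
    return 0;
}
```

Byte order is the other classic trap: SPARC is big-endian and x86 is little-endian, so code that writes raw structs to disk or the wire breaks in the same quiet way.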
I also discovered, over time, that despite my best efforts, if I disagreed with anyone from the church of Linux on any mailing list or spoke about Sparc or Solaris or even Tru64 UNIX, I was often ridiculed.
This last experiment on my part was simply to point out that the Linux kernel is not the enterprise-class kernel that people profess it to be, and the addition of the latest hardware support has not strengthened the core of this mighty oak. In the wind it shakes. And sometimes it breaks.
As for the OpenSolaris project, I guess I would rather have an OS that runs for years with the same kernel, which scales from a single processor to hundreds. Same kernel. Solid to the core. The latest gadget is of no interest to me. The latest Solaris Express Community Edition (build 38) offers ZFS and Zones and DTrace, and it runs like a charm on Opterons or UltraSparc (and soon PowerPC) with a really cool interface on top.
Lastly, no one in the OpenSolaris community ever throws ridicule at me for saying “Linux has this and I want it in OpenSolaris”. The reverse just isn’t true and I have tried and tried and tried with many long hours of work.
> I’ll take this as notification you’re leaving the
> Linux community. Goodbye, and good luck getting
> your hardware to run on OpenSolaris.
I don’t think I was ever in the Linux community despite my attempts to give it my very best efforts. And I did make sincere efforts.
> Over the past seven years I have tried on a number of occasions to work with Linux in an open source way.
Just to make sure we’re both singing from the same songbook, you might like to clarify what you mean by “an open source way”.
> Over a period of six months in 2002 I tried to get Linux from Scratch to build on my quad-CPU SparcStation 20. I had a copy of Red Hat 6.2 (Zoot) for Sparc; that installed well enough and was to be the base for everything that followed. I continually ran into x86-isms that stopped my progress, but I worked at it for six months. I tried again in 2003, and I stayed in contact with the LFS people, made donations, etc.
> After much work I saw that this open source idea was all very well and nice, but it didn't work if you stepped off the beaten track the slightest bit.
You’re attributing qualities to “open source” and “Linux” which may well have more to do with LFS. And (different point) LFS is under no contractual obligation to provide support for SPARC, AFAIK. Besides which, you later admit that Red Hat, as provided, worked fine on SPARC.
I can't vouch for the Gentoo SPARC port per se, but I can vouch for the professionalism and openness of the Gentoo community. It's also a damn sight easier to install and manage than LFS, I say from experience.
I suspect that unless you have experience to the contrary (and those factors haven't changed in the intervening years), you're going to be disappointed with OpenSolaris. You say that you have become disenchanted with open source, and yet OpenSolaris IS open source. You say that you want the freedom to do what you will with the OS without complications, and yet I suspect that a vendor which believes that enterprise-class OSes should be built on known subsets of available hardware will actually control the OS more tightly. AFAIK there is as yet no Solaris from Scratch.
> One should be able to compile and install binaries from pure code on any architecture provided that the hardware-specific modules are in place.
And provided that the developer(s) has/have the expertise and resources to port their application to platforms X, Y, and Z. I don’t think any single organization (even NetBSD) can claim exactly equivalent support for all hardware in the known universe.
> The build of Linux from Red Hat proved that I could run everything on Sparc. But I could not build everything myself.
Red Hat’s value add is that they make some decisions for you. The two approaches are orthogonal.
> I also discovered, over time, that despite my best efforts, if I disagreed with anyone from the church of Linux on any mailing list or spoke about Sparc or Solaris or even Tru64 UNIX, I was often ridiculed.
I highly doubt that the Linux community is, or any longer feels itself to be, so small and persecuted that any deviation from “The Torvalds/RMS Way” is seen as heresy. In fact there is no Torvalds/RMS Way – that's two approaches, which often find themselves at loggerheads, right there. Try listening to a Mac or Windows fanatic sometime. [No, I'm not implying that anyone who doesn't spit at Macs or Windows PCs is some sort of educationally sub-normal loon.]
SPARC/Solaris and Tru64 don’t have a huge and growing list of devices to support. Linux has incorporated concepts from HP-UX (LVM), AIX (JFS) and IRIX (XFS, OpenGL).
> This last experiment on my part was simply to point out that the Linux kernel is not the enterprise-class kernel that people profess it to be, and the addition of the latest hardware support has not strengthened the core of this mighty oak. In the wind it shakes. And sometimes it breaks.
As I said in my previous post, many people would disagree. I don't have any experience with Win2K3, but I suspect that, given the volume of hardware which Linux and W2K3 support, the latter is the only OS you can justifiably compare to Linux as regards stability in the face of driver/device issues.
> As for the OpenSolaris project, I guess I would rather have an OS that runs for years with the same kernel, which scales from a single processor to hundreds. Same kernel. Solid to the core. The latest gadget is of no interest to me.
As noted above, easy when the company controlling the OS also controls the hardware. Drivers for whatever “gadget” you don't feel like including support for can be left out of a custom-compiled Linux kernel, which it sounds like you have the expertise to create. The effect on stability of open sourcing Solaris is yet to be seen, and if (as I suspect) they won't let just any Tom, Dick, or Harry hack the kernel and devices, you (like anyone else) will have some work to do to prove yourself. But Solaris has a head start of some 20 years over Linux when it comes to running on high-performance hardware.
> Lastly, no one in the OpenSolaris community ever throws ridicule at me for saying “Linux has this and I want it in OpenSolaris”. The reverse just isn't true and I have tried and tried and tried with many long hours of work… I don't think I was ever in the Linux community despite my attempts to give it my very best efforts. And I did make sincere efforts.
Well, without being facetious, there's a first time for everything. I have heard approving noises being made about writing a ZFS clone for Linux (it can't legally simply be ported because of licensing issues). Solaris was designed specifically for SPARC, so it bloody well had better work on that platform. Its performance (in terms of even being able to bloody boot) on Intel is craptacular on arbitrary hardware, and whilst it may have an easier time on PowerPC because of less hardware variation, its performance on *that* platform is as yet at best untried. If relative freedom from crashes were the sole criterion for an Enterprise OS, then AmigaOS on MC68k hardware would be more of an Enterprise OS than Win 9x.
To sum up, I'm sorry that you've had bad experiences with the Linux community. I hope that you will have better luck with OpenSolaris. But (especially given what you seem to expect) I honestly don't foresee that you will get much better results from OpenSolaris, unless (I have to say yet again) known hardware is a huge factor. And since Linux is not and does not attempt to be a niche platform (in terms of hardware support, if nothing else), I'm not sure they're in the same league.
>Just to make sure we’re both singing from the same
>songbook, you might like to clarify what you mean by
> “an open source way”.
Oh, I mean that the source is available to me and that I can compile it regardless of whether I am using Sparc, ARM, PowerPC, x86 or Opteron. The source is not architecture-specific. I have the freedom to choose the direction that I can go with it. The Linux kernel gives me a lot of that. Some of the things I build on top of it do not.
As for LFS, that is where I really started out with Linux. I had installed Linux in previous years and actually ran a website for a while on an old 486DX2 with 32MB of RAM. It just ran really well for what it did, using something called Turbo Linux. I was singularly impressed. That is why I jumped into LFS and tried to “roll my own” on top of a quad-CPU Sparc box. Things did not go well.
> you’re going to be disappointed with OpenSolaris
Not at all.
Thus far it has been very educational and I can build a complete OS in about an hour on a decent Opteron box. Reboot and be running. Not just the kernel but the WHOLE thing.
See: http://www.blastwave.org/articles/
and pick BLS-0050 there.
Also… I am not hardware-locked, and you can grab a box and throw Solaris on it in a flash. Or, in my case, I can grab a pure OpenSolaris distro and just boot it:
http://www.blastwave.org/dclarke/schillix-0.5.2.txt
That is an IBM e325 system with a Kingston USB 512MB memory key hanging out of it, and it boots and runs SchilliX no problem. All interfaces are discovered and configured for DHCP, and the USB memory key is found and mounted.
OpenSolaris is not “hardware lock-in” at all.
The PowerPC port will prove that.
I think that Jonathan Schwartz has a vision in which the source will be wide open and the 20-year head start you mention is laid out openly for everyone.
I think his latest blog says it well: the geeks are in charge now, and given sufficient technology and open-minded participation we can do anything we need to do. I just wish that the Linux community would work alongside the OpenSolaris people such that we can perhaps build something bigger and better than both.
http://blogs.sun.com/roller/page/jonathan?entry=the_geeks_are_in_ch…
> Linux is not rock solid.
It's been pretty rock solid for me. Months of uptime (the only downtime has been power failures), one crash since 2.6 debuted (caused by a bad CPU), and great hardware support. I mention that because I tried OpenSolaris once and nothing worked. It wouldn't even boot. So much for rock solid when the damn thing doesn't even boot. I guess 5-year-old hardware is just too new for Solaris.
It won’t hurt the Linux kernel any for folks to take a break from new features and do some more bug fixing once in a while. And I’m glad to see people honestly facing up to imperfections; it’s a welcome change from the constant spin you see with corporate products.
Are the 2.6 kernel folks trying to add too much new functionality with each new release?
Maybe they need to break things into a series of smaller discrete steps, making it perfectly clear which features are being addressed in each new kernel rev.
I don't know how controlled the process is now, and I know the fact that you have multiple concurrent groups of folks working on kernel issues makes things less clearly defined, but it sounds like there might be too much adding going on without consideration for existing bug reports.
When I worked at a large mainframe application developer, we couldn't release a new version unless the number of known defects in the new release and all previous releases was below a *very* small threshold.
This is the obvious result of not having an unstable 2.7 tree alongside the stable 2.6 tree, which is how the Linux kernel has previously been developed – the latest crazy ideas just get thrown into 2.6.x-mm and then into 2.6 itself; it was obvious from the outset that this was not a good idea, and I said so at the time. (Yes, I will dig out links if asked; I do admit that I was not overly vocal about it).
If the new patches went into a 2.7 tree for testing, like they did in the past, then this thread would not exist in the first place – it would just be “2.7 is unstable … well, so what, it's the testing branch”.
I agree. Long-live odd-numbered releases.
Well, they've been avoiding the 2.7 number because SCO wants the complete source code to it.
Maybe a bug fixing contest would be cool.
What would be nice is for Novell/Red Hat/IBM et al. to pony up some money for prizes and have a “summer of code fixes”, giving prizes to those that do the best work.
After all, what could be better than offering a $100 gift certificate for beer and steak at Ruth's Chris to people doing bug fixes in their spare time?
Haha. Nice idea. And maybe if there is some faulty beginner-level Perl code in the kernel somewhere I could join the contest myself. Beer…
A bug-fix contest would just be a patch that hides the real problem.
IMO the developer <-> user communication around the Linux kernel is poor and can be improved (so that the quality of the kernel improves); there are lots of developers who never got into this “bugzilla.kernel.org” thing. And bugzilla.kernel.org does not get the same attention as the Fedora and Ubuntu bugzillas anyway. Many developers may not know that they have bugs at all.
IMO, what we need is to make bugzilla better. The FOSS community works in a distributed manner. However, bugs reported in Ubuntu won't get noticed by kernel developers. This happens with the kernel, but the rest of the FOSS projects are affected by this as well.
The idea of distribution bug trackers is that the distribution developers are supposed to send patches or bug reports upstream if they can't fix them themselves.
It creates a barrier between the user and the upstream developers, which is good, because the last thing you want is Joe Average submitting bugs to the Linux kernel because Xorg crashes and he doesn't understand the difference between “Linux” the OS and “Linux” the kernel.
This way there is less pressure on the upstream developers. But it does mean that the distribution developers do need to do their jobs and pass those bugs up.
A bug-fix release is a great idea. It definitely can't hurt and will benefit more users than a few extra features would.
– Jesse McNelis
I'm thinking it'd be better to have lots of small, regular bug-fix sessions rather than much less frequent but much longer bug-fix cycles.
They could even have formal arrangements for “regularly scheduled” bug fixes – for example, one month every six months, or perhaps starting whenever the number of known bugs hits a pre-determined upper limit and continuing until it falls back to some lower limit (a sketch of that trigger follows below).
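That upper-limit/lower-limit rule is just hysteresis, by the way. A toy sketch of the trigger, with threshold numbers invented purely for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical thresholds, made up purely for illustration. */
enum { HIGH_WATER = 2000, LOW_WATER = 500 };

static bool bugfix_cycle = false;

/* Enter a bug-fix cycle when the open-bug count crosses HIGH_WATER and
 * stay in it until the count falls back to LOW_WATER. Using two
 * thresholds (hysteresis) stops the tree from flapping between modes
 * every time a single bug is filed or fixed. */
static bool in_bugfix_cycle(int open_bugs)
{
    if (!bugfix_cycle && open_bugs >= HIGH_WATER)
        bugfix_cycle = true;
    else if (bugfix_cycle && open_bugs <= LOW_WATER)
        bugfix_cycle = false;
    return bugfix_cycle;
}

int main(void)
{
    int samples[] = { 400, 1900, 2100, 1500, 600, 450, 700 };
    for (size_t i = 0; i < sizeof samples / sizeof *samples; i++)
        printf("%4d open bugs -> %s\n", samples[i],
               in_bugfix_cycle(samples[i]) ? "bug-fix cycle" : "features open");
    return 0;
}
```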
1. Bug spotting contest. Just find the most bugs. Rank the developers according to # of bugs/line. Or perhaps rank the countries according to the same measure. Print a funny t-shirt. See you in the rink again, Finland!
2. Team Bug Match. Have projects/groups of developers reviewing other projects'/groups' code. Hilarious ranking measures ensue. Fix some bugs – the ultimate taunt!
3. The heuristic approach. Hijack Ubuntu’s Adept service and randomly send kernel code snippets out to all users and wait for fixes to pop up. You might think this approach wouldn’t be so effective and successful – but just look at SETI@home, wow!
I respect Andrew M for actually coming out and stating that there is a problem. Some people might see the idea of a bug-fixing cycle as counterproductive, but over the past couple of years the amount of features going into the 2.6 kernel has increased dramatically, and in my experience the stability has decreased. I don't want to start any arguments over Windows/Mac/Linux, as that is useless. What is important is looking at every distribution and seeing how many kernel updates are released after the final version ships — usually 2-3, but it might be more. When I get each kernel release as final code, I expect the kernel to be stable with the least amount of bugs. The mere fact that I need to apply 2-3 patches after the final release tells me something.

Furthermore, I don't want to bag on the developers, as it is really them who are driving innovation in the OSS community, but certain steps need to be taken, not only in the kernel community but in a lot of major projects. I think a major lack or deficiency is the implementation of an easy bug-fix submission system. What I mean is: create a universal bug submission system that would submit bugs to distributions, and those would sync up with the project each bug relates to, or vice versa. One final possibility is to extend the testing cycle, which I feel might alleviate some headaches.
Or just some spelling/grammar checking would be a good start :)
Patrick Volkerding, from Slackware, did not want to include the 2.6 kernel in his final version of Slackware. He knew the 2.6 kernel had a lot of bugs.
Should I keep running 2.4 until Torvalds decides to fix the problem?
-2501
> Should I keep running 2.4 until Torvalds decides to fix the problem?
Probably not – I've been running a (Gentoo) 2.6 kernel 24 hours a day for the last 18 months. KDE (or applications running under KDE) has crashed a few times, but there have been no other problems, and no problems at all that involve the kernel.
In that time there have been about 6 power failures here, and my internet provider has stuffed up my 'net connection at least 4 times.
Overall the 2.6 Linux kernel is doing a lot better than everything around it… :)
Maybe it’s just the Slackware Linux 2.6 kernel that sucks?!
Just kidding, Patrick, we all love you :)
I might be wrong here, but AFAIK the Slackware kernel sources are basically the official sources, without any Slackware-specific patches. This is my understanding; feel free to correct me.
Regards
I’ve no idea, one way or the other. I was simply making a joke.
> Patrick Volkerding, from Slackware, did not want to include the 2.6 kernel in his final version of Slackware. He knew the 2.6 kernel had a lot of bugs.
Actually, this is not true for 11.x. To quote the Slackware-Current changelog:
> BTW, I think 2.6.16.x, being the first kernel series in the 2.6 series that has been promised some long-lived support, will be the 2.6 kernel you'll see in the next Slackware release. If/when 2.6.17 (or 18, etc.) come out, don't expect to see me chasing after it immediately. I'm looking for a kernel that can be counted on for stability — not the bleeding edge. Of course, once 2.6.16.x is considered tested enough to leave /testing (and it does seem close), perhaps a newer kernel might take its place here just for fun.
That's one thing the FOSS world has got going for it. That level of transparency in the commercial world would be akin to suicide. Imagine Ballmer saying the same – there'd be an instant correction in the share price.
I was just about to make that same point. This is what I like about open source. You can throw around the idea of a cycle devoted to just making the kernel better, and the marketing and sales people don't have any power over it. In commercial enterprises, oftentimes sales people sell “features” to companies, and they are not so concerned about whether something is getting buggier. As long as it is not a fatal problem, or just a little slowdown, then stuff the damn eye-candy feature in. Open source brings sanity back to development. Honesty and candidness are refreshing in a world of hype.
> That's one thing the FOSS world has got going for it. That level of transparency in the commercial world would be akin to suicide. Imagine Ballmer saying the same – there'd be an instant correction in the share price.
Hehe – Microsoft just say “Vista's been delayed again” and don't say why… :)
I gather that they’re really talking about suspending development of features, additional drivers, refactoring work, etc. in order to take the time to fix bugs.
I can understand that code refactoring and new drivers would be happening all the time, but how many new features are currently in the works that couldn’t run parallel to bug fixing or stay out of the vanilla sources until the bug fixing is done? I’m honestly asking. I just haven’t heard a lot about this arena since they were talking about Infiniband etc.
I'm so sick and tired of this “I'm in control and I make the decisions” BS. The “higher-ups” in the Linux community need to lose this dictatorship and adapt to the BSD way of running things. Instead of having an “elite” group of developers working on the latest kernel, Linux should be more open about who makes the command decisions.
Problem is that these guys are human. Now who wants to sit around all day cleaning up stuff when they could be working on ultra-cool feature X? Nobody. The problem is that oftentimes people don't follow up with their bug reports, or the bugs don't reach the right people. So much code is getting changed that it's a never-ending cycle, unless you stop every once in a while and let things get caught up.
> Now who wants to sit around all day cleaning up stuff when they could be working on ultra-cool feature X? Nobody.
Nobody except Theo and the OpenBSD team.
Mostly what they seem to be doing these days is blathering on about how much better they and their OS are than everyone and everything else. I'd hardly call a partitioning program that doesn't demand you create partitions by typing out absolute cylinder addresses [satire] in hex around a cauldron during a full moon [/satire] an “ultra-cool feature”. I'd say it's pretty standard.
No remote holes in the /default install/? Come on guys, the level of functionality in OpenBSD’s default install is even less than XP’s. How about using those genius brains to audit MySQL or K3b?
> Problem is that these guys are human. Now who wants to sit around all day cleaning up stuff when they could be working on ultra-cool feature X? Nobody.
Actually I think you are wrong. Taking myself as an example, I know I'd LOVE to clean up and refactor my code. If only I didn't feel guilty for not working on new features for my employer.
So maybe the correct question is: Who would pay for it?
It’d be better for Linus to start from scratch, with a stable API from the beginning. And he can name it Lunix.
> It'd be better for Linus to start from scratch, with a stable API from the beginning. And he can name it Lunix.
Sure, that makes sense.
Please stop trolling.
> Please stop trolling.
Easiest way to kill a troll is to not feed it, mate! :)
I think it should be pure bug fixing; I myself have had no stability issues with 2.6.x. Only once have I had to send a bug report to Andrew Morton, because Reiser extended attributes did not compile.
I keep seeing Windows fans talking “Features Features Features”….
I keep seeing Torvalds and the Linux fans talking “Features Features Features”…
Is it only me who thinks the MSFT and Linux devs are making the same tradeoffs against stability? As it seems, now it's only Unix which is safe (why am I not surprised).
I think the reason Unix keeps the security model tight is because they don’t have the MediaItch to ramble about something new, every day, every week….
Is ZFS a bugfix?
Media, schmedia. The media doesn't want to talk about the new security fixes in Vista, only about how MS are increasingly late with it. Granted, if past experience is anything to go by, they may not be all Microsoft cracks them up to be, and they shouldn't have to be put in anyway, but given how many features Microsoft has already admitted to shedding, it would be very difficult for them NOT to put any of them in, even if non-Microserfs realize they suck.
> Is it only me who thinks the MSFT and Linux devs are making the same tradeoffs against stability? As it seems, now it's only Unix which is safe (why am I not surprised).
Right. In the last year or so, I haven’t had more than one or two crashes, with the exception of trying a third-party wireless driver. Maybe it is because I am using an ‘enterprise distro’ with an ‘enterprise kernel’, but practice does not confirm what you are saying, except maybe if you ride the (mm) kernel edge.
At any rate. Look at the Coverity reports:
http://scan.coverity.com/
The number of defects/KLOC is fairly low for the Linux kernel. At this moment it is well under the ratio for NetBSD and FreeBSD (which aren't too bad either). Not that a static code scan gives numbers to live by, but it is a fair indication of code quality.
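To illustrate the kind of defect these scans count (a contrived example of my own, not one taken from the actual Coverity results): the bread-and-butter finding is a pointer that one path checks for NULL and another path dereferences unconditionally.

```c
#include <stddef.h>
#include <string.h>

struct msg { char *body; size_t len; };

/* Fine: the NULL check guards every dereference. */
size_t body_len_ok(const struct msg *m)
{
    if (m == NULL)
        return 0;
    return m->len;
}

/* The classic static-analysis finding: m (and m->body) are dereferenced
 * before the NULL check, so the check below is dead code and the
 * dereference above can crash on a NULL argument. */
size_t body_len_buggy(const struct msg *m)
{
    size_t n = strlen(m->body);  /* dereferences m with no check */
    if (m == NULL)               /* too late: flagged as check-after-use */
        return 0;
    return n;
}
```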
But AFAIK those Coverity scans for NetBSD/FreeBSD include the userland programs, and the Linux scan only includes the kernel. I remember reading somewhere that a lot of Coverity errors were reported in /usr/src/games :)
> But AFAIK those Coverity scans for NetBSD/FreeBSD include the userland programs, and the Linux scan only includes the kernel. I remember reading somewhere that a lot of Coverity errors were reported in /usr/src/games :)
It would be interesting to see additional statistics. Judging from the NetBSD commits, there are indeed userland fixes, but I don't see too much skew from /usr/src/games. It would be cool to have a “Most secure bsdgames operating system – no local holes in 8 years.” What is life without hack and rogue?
> At any rate. Look at the Coverity reports:
> http://scan.coverity.com/
> The number of defects/KLOC is fairly low for the Linux kernel. At this moment it is well under the ratio for NetBSD and FreeBSD (which aren't too bad either). Not that a static code scan gives numbers to live by, but it is a fair indication of code quality.
Ehrr… Linux is ONLY a small kernel. NetBSD and FreeBSD are COMPLETE operating systems. How about comparing NetBSD with Fedora or something like that? That would be more fair.
But I've always had the feeling that the rate of “releases” is far too high – “let's just get it out the door” seems to be the motto, rather than saying, “here are some bugs, and it's not going to ship until these bugs are corrected.”
What they need is a stable-line kernel where NOTHING gets added to that base until it has been rigorously tested over and over again and checked by numerous people – there needs to be a stable release cycle, with a new release every 3-4 months correcting only bugs and security problems, and that's it; every 6 months, new “projects” that wish to be part of the main kernel line are reviewed, and if they're up to the standard they are merged but disabled by default, until a further 6 months of testing leaves them tested enough to be classed as “for general use”.
There is no use developing software at 300 km/h when there is a trail of bugs following – concentrate on the bugs and security holes first; further down the track, drivers are the next priority, as long as they don't stuff around with the stable parts; then, right at the end, new features – which personally I think should only be merged with new major releases or, if deemed necessary, properly tested in the real world before merging. It will take a while, and at times Linux *may* fall behind, but personally I think the rush to get to the finish should not come at the expense of stability and security.
Which is exactly why a lot of people have been saying there should be an unstable (2.7) branch again. If nothing else, I think it’s a lot less confusing than a stable scheme consisting of vmlinuz-a.b.x.y. Is kernel-2.6.16-gentoo-r10 stable or unstable? (Yes, I know I can look in make.conf, or emerge -p, but even stable kernel releases get put in ~arch first.)
> What they need is a stable-line kernel where NOTHING gets added to that base until it has been rigorously tested over and over again and checked by numerous people
This possibility was discussed on LKML when the new development model was being hashed out.
The problem with it is that there is no one to do the rigorous testing.
Andrew Morton is right, f–k the new features, it’s time to fix the most persistent/annoying bugs. A cycle dedicated to this won’t kill anybody, and it’s definitely needed.
Reiser4 is the only “new” feature that I want to see in the vanilla kernel before bug fixing. Or let it never be included. Never.