Andrew Morton, the lead maintainer of the Linux production kernel, is worried that an increasing number of defects are appearing in the 2.6 kernel and is considering drastic action to resolve it. “I believe the 2.6 kernel is slowly getting buggier. It seems we’re adding bugs at a higher rate than we’re fixing them,” Morton said, in a talk at the LinuxTag conference in Wiesbaden, Germany, on Friday.
A sad day for the linux fanboyz …
We're at 2.6.16.14 already, and I agree that drastic action does need to be taken. The problem is that people want more features and drivers, and with those come bugs.
If it were a microkernel, adding drivers and features wouldn't have added more bugs to the kernel itself.
Perhaps the µK itself wouldn't have more bugs, but its services would. Since a µK is practically useless without services… instead of adding bugs to the kernel, you would add bugs to the core system (µK + services). The system could be more stable, but you would still have bugs.
People should think before making such inane claims.
Dave Jones, the Fedora kernel maintainer, has a take on this that seems interesting.
http://kernelslacker.livejournal.com/37851.html
I don’t understand Dave’s comments…
That’s a ton of bugs, and there’s very little duplication. The vast majority of those 681 bugs are valid upstream bugs, yet none of them are in the upstream bugzilla. […] (And before someone suggests Launchpad, just “no”. k?)
Why not Launchpad? Is this not the exact problem it is designed to solve? Does anyone have any further insight – surely if there are legitimate reasons for not using Launchpad, those issues could be addressed to make it fit better?
He answered this question in the comments:
>>
Launchpad’s approach of being a ‘meta-bugtracker’ tracking other bug-trackers is interesting, but a) from what I’ve seen of it so far, it’s painfully unpleasant to use. b) it’s closed source.
<<
Good to know his take on the issue.
Yes, excellent reference and very interesting.
A bugfix cycle could be nice, though some newer hardware might miss out on potential inclusion that round. Probably worth it, though, if his hunch is correct and the increasing number of bug complaints is due to more bugs, not more users.
The summary is not fair: “This is particularly a problem for bugs that affect old computers or peripherals”.
This is important because it implies issues for low-budget people and geeks’ gateways/firewalls, but only a small number of people are concerned about bugs in old hardware (they talk about five-year-old hardware).
Support for old hardware must be kept, because the community tends to use old hardware, but you can’t say that Linux is getting buggier for general-purpose use; only a few people will notice this quality decrease.
Perhaps it is time for them to stop their “live” fixes into the 2.6 production kernel, and start up the odd-numbered development line, 2.7.
Same thing they did while the 2.4 kernel was released (2.5 was test), and same as before that, and before that…
I’m surprised he didn’t mention that (that I saw, anyhow) in the article.
Perhaps it is time for them to stop their “live” fixes into the 2.6 production kernel, and start up the odd-numbered development line, 2.7
They don’t need to do that. They could mark 2.6.17 as a “stable” release, then continue development for, say, another 10 releases, and release a “stable” release again.
And no, this wouldn’t be the same as starting 2.7. Many people seem to miss the point that the “new development model” was created to _avoid_ bugs while at the same time adding new features, in a way. In a pure development release, people start merging stuff with less care than in a normal release and continue doing so for a long period of time. When you want to get the release out, it takes AGES to stabilize it (look at how long it took to release 2.6).
However, in the new development model there is no “unstable” release. All releases are supposed to be stable, so people need to do things well if they want to see their stuff merged. Instead of a Big Unstable Release which takes a year to become .0 and another year to become really stable, you make small incremental improvements. This ensures that at least you can control the bugs much better: it’s much easier to fix a bug report that says “something broke between 2.6.14 and 2.6.16” than “something broke between 2.4 and 2.6.0”, because the number of changes between 2.6.14 and 2.6.16 is much smaller.
From a Q/A POV, the current model makes things easier if you take care of them. IOW: while people can’t guarantee that 2.6 releases are 100% stable, they’re _quite_ stable. Look at the amount of things that have been added in the latest 5 releases: http://wiki.kernelnewbies.org/wiki/LinuxChanges – maybe 2.6 is not 100% stable, but looking at the current amount of changes, I’d say it’s a major feat that they can guarantee so much stability with so much going on. You really need a good Q/A process to put out kernels like that and not crash every machine at some point.
So my take: sure, stop adding changes to 2.6 stable. But don’t start 2.7: start 2.8 directly with the same policies as now. Or mark 2.6.16 as a stable release and continue developing 2.6.x (it’s the same), but avoid pure development releases. For BIG projects like the Linux kernel, pure unstable releases are a bad, bad, bad thing.
“… Instead of a Big Unstable Release which takes a year to become .0 and another year to become really stable, you make small incremental improvements. This ensures that at least you can control the bugs much better: it’s much easier to fix a bug report that says ‘something broke between 2.6.14 and 2.6.16’ than ‘something broke between 2.4 and 2.6.0’, because the number of changes between 2.6.14 and 2.6.16 is much smaller.”
This is why versioning is important. The practice of pairing a release “header” to output is independent of the decentralized, worldwide, and sometimes spontaneous input of release activity itself. Adopting the archival functions in git was a step in the right direction for Torvalds et al.; however, there is room for improvement.
To me, versioning says everything about a project. There is time, and then there is the time it takes to complete an estimated or forecast task, both decided and arrived at through collaboration and personal effort. For example, the program sox, for mpeg/*.wav finagling, is at version 12, while a young project with less than six months of development may be at increment 0.5, 0.99, or 1.01.
Serializing physical or virtual objects is commonplace. Take Library of Congress and book reference numbers. There is the ISBN, where the biggest challenge is keeping the search database current with demand-based, online need-to-know criteria. A borders.com worker told me the publishers are adding 3 digits to the ISBN. So, in the publishing realm, some folks decided where, how, and why before adding digits to a means of better providing book-tracking services.
We need something similar in the linux/sunos/mac world. A quick example: a product ID or UPC number could be generated every time a patch, bug fix, or workaround to 2.6 is issued, BUT there has to be strict definition, implementation, application, and EXTERNAL review of this ad-hoc If/Then/Else rule. Let us say this productID has to be filed in an open repository with pure ODF storage and retrieval (get and put). A core productID framework could incorporate an incremental versioning system into the document itself. As long as it begins with UTF-8 or ISO-8859-1 or something standards-compliant before its format gets translated, it will be all right – heck, most mailing lists and lists’ Web folders do that. The “Else” part is harder: in this situation, if we don’t have the productID parity directly correlated or inline with the open repository, there is no metre to the release and, alas, no true versioning.
This was discussed extensively on LKML early last year, and the new model was introduced to replace the old even/odd model, because it was thought to solve various problems.
I doubt there’d be much support for going back to the older model after giving the new model such a short trial.
The problem, again: drivers.
How do we fix it?
Use Windows drivers with some sort of emulator.
Yeah, and what will you use with x86-64? PowerPC? Itanium? ARM?
And when a bug comes up in the windows drivers?
And the performance and extra linux features lost?
This is the same problem as with binary blobs and drivers, they are not maintainable, and they are architecture-specific.
Please don’t talk about what you don’t even know!
Before all you sophomoric Linux fanboys mod me down, you can blame this ENTIRELY on Linus not “designing” a stable kernel API.
A stable kernel API would never introduce bugs into drivers, no matter how long ago they were written!
Hey Linus, talk to the Sun or BSD guys about how to make rock-solid and future-proof kernel interfaces if you are unsure of your engineering skills.
you can blame this ENTIRELY on Linus not “designing” a stable kernel API.
http://www.kroah.com/log/linux/stable_api_nonsense.html
That paper does not say that stable interfaces are a bad thing. It does say that the USB stack design in Linux was poorly thought out, and then poorly fixed, at least twice.
No one gets interfaces right the first time, but everyone who refuses to design interfaces meant to last inevitably fails to do a very good job, over and over again.
Greg Kroah-Hartman has no business working on USB interfaces that he has no clue about. How come Windows, Mac, and BSD have had stable USB interfaces based on the specs?
Do not blame stable kernel APIs for your own personal engineering shortcomings.
The worlds of engineering (electronic/mechanical/chemical) all MUST rely on stable APIs or interfaces, or you’d be constantly introducing bugs.
If this simple idea cannot get through the heads of people like Kroah-Hartman and Linus, they should go back to College and take a refresher course in software engineering and leave engineering to more sane people like Andrew Morton and David Miller.
Let this be a warning to all Linux subsystem developers – ALSA, Video, Network, Storage – DO NOT F*** WITH APIS.
Perhaps if you would think things through before posting them, you would realize that posting your comments in a more civil manner would not result in you getting modded down. For instance, don’t call people “sophomoric Linux fanboys” and completely remove your last sentence. It really isn’t that difficult to post your thoughts without resorting to petty name calling and personal attacks/insults, perhaps you should try it sometime. Here, I’ll give you an example if you find it difficult:
“I think Linus’ decision to not develop a stable kernel API is causing a lot of the problems. If there was a stable API, new bugs would not be introduced into old drivers, like with BSD and Solaris.”
See, it really isn’t that difficult if you try harder.
And for the record, I agree with you, just not the way you expressed your opinion.
The answer is simple! There need to be more kernel devs!
/me off to learn asm/C so I can help out
No. The answer is that there need to be /FEWER/ kernel devs, as many of the current ones are not up to the task. They just produce buggy code.
I don’t think your assertion is justified. Another poster quoted an example from a changelog which was akin to “yeah, this should work… but I can’t be bothered testing to make sure.” It’s that sort of attitude that helps bugs enter your codebase.
Sure, there is a wide range of abilities in the people who do active kernel development (not just for linux either), but by not having backup in the form of rigorous testing and code review, the strong aren’t helping the weak. If the strong coders did help the weaker ones I doubt there would be such issues.
ha ha etc.
I was talking to one RAID vendor about a bug recently, and they claimed that Red Hat had broken the SCSI mid-layer again. After I investigated the bug myself, it turned out Red Hat had just added debugging code that called BUG() before the data got silently corrupted.
The point is that hardware vendors will always blame their problems on someone else.
Also you can’t design a “stable” API. “Stable” just means getting older without changing. It’s like trying to plan how old you’ll be next year. It’s nonsensical.
Also you can’t design a “stable” API. “Stable” just means getting older without changing. It’s like trying to plan how old you’ll be next year. It’s nonsensical.
And yet Dennis Ritchie designed a set of APIs in the 1970s that have remained stable for thirty years.
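To make that concrete, here is a minimal sketch (mine, not from the thread) using that very interface – the open/read/write/close calls have kept the same shape since 1970s Unix, so code like this builds against both ancient and modern systems (the file path is an arbitrary example):

/* The Unix file API settled on in the 1970s, still unchanged today. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n;
    int fd = open("/etc/motd", O_RDONLY);      /* arbitrary example file */
    if (fd < 0)
        return 1;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);  /* copy to stdout */
    close(fd);
    return 0;
}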
See, this article just proves that Linux is the buggy piece of amateur programming we all thought it was. It is already bloated beyond repair. I think they should just scrap it and start over.
Gee, sounds a lot like Windows.
Blaah… go home, Poo.
Linus must create a stable API design…
Yes, this is very important.
Or use a dual API: one new API, and another compatible with current drivers.
Before all you sophomoric Linux fanboys mod me down
I modded you down precisely because of the way you started your post. No matter how right or wrong you are, you can’t expect to have a rational debate if you begin your argument by insulting the people you’re talking to.
So you do admit you’re a sophomoric linux fanboy who doesn’t know anything about OS engineering.
Thanks for proving my point!
At first I thought you might just have social problems, now it is clear you are nothing more than a troll.
Down you go.
So you do admit you’re a sophomoric linux fanboy who doesn’t know anything about OS engineering.
Did you read anything of what I wrote? I did not mod you down because of your opinion; I modded you down because your post did not comply with OSNews’ terms. That’s too bad, because your lack of maturity tends to take attention away from your actual argument.
As far as sophomoric goes, I think all you have to do to see it is look into a mirror…
Osnews is starting to really suck.
The DELETERS are taking over.
This part of the article caught my eye:
Morton admitted he hasn’t yet proved this statistically, but has noticed that he is getting more emails with bug reports
Which:
A> Makes the whole article little more than idle speculation
B> Ignores the possibility that it’s as buggy as ever, just that there are more people USING it now and reporting bugs. With distros like Ubuntu and Linspire opening the gates to ‘normal’ users a bit wider, and a jump in install base, OF COURSE he’s gonna get more bug reports.
Of course I really like:
One problem is that few developers are motivated to work on bugs
Because I’ve been saying that all along about open source in general, pretty much since I first heard of the concept. For every problem that the handful of companies actually pay programmers to fix, there are likely dozens of bugs not getting fixed because nobody with the skill to fix them has a REAL incentive to do so.
Makes the whole article little more than idle speculation
It’s speculation, and Morton admits it as such, having assigned himself the action item to dig up the numbers.
But it’s not idle speculation. Had I said it, it would be idle, because I’m just a random kernel developer as far as LKML is concerned, but Andrew Morton is the gateway for code, and lives and breathes stability.
In his case, it’s more highly informed speculation.
Well, such a claim should be testable.
One, check all the bug reports for 2.6.16 versus the bug reports for, say, 2.6.8, to see if there are more reports;
Two, run both kernel codebases through some kind of bug-scanning program (is Coverity a program, or a service?) and see which one has more bugs (I realize the program wouldn’t do a perfect job, but it should at least allow for a comparison of numbers of specific types of flaws.)
(Three, fix all those flaws that the scanning tool found)
It’s been mentioned so many times that 2.6 will still, for many things, be less stable than 2.4.
Finally somebody says it openly – and somebody who, lots of people say, should know about it. Isn’t one generally supposed to trust one’s instincts? He is probably right, IMO – so come on, bug-killing phase.
Andrew Morton, AFAIK, is not some slick PR guy – he is a kernel dev, so he is not selling something – it’s simply his view, his personal opinion.
Fewer bugs & more security would not harm Linux, IMO.
It’s the drivers.
It’s what’s wrong with Linux and why no one will use it.
It’s a wayyyy overrated operating system.
It needs to somehow use Windows binary drivers.
It’s the drivers.
It’s what’s wrong with Linux and why no one will use it.
It’s a wayyyy overrated operating system.
It needs to somehow use Windows binary drivers.
If I understand you correctly, you say that the driver model for Linux is a problem. Yet, your solution is to add in another layer that kludges Linux into using drivers not even written for its own platform?
Sounds like a recipe for even more issues! Unless I am not understanding your point…
If I am wrong, please clarify, because your post isn’t making much sense as it appears to be written.
He has a point.
It is *very* hard as a hardware vendor to support Linux.
For Windows you simply put out a driver that works and you only have to fix bugs occasionally and release a new version every 5 years or however often they release a new Windows.
For Linux… you have a kernel release every few months and a large number of distributions, each of which compiles the kernel differently and often supports multiple architectures. In any release they can partially or completely rewrite the code that your driver talks to. The headers can change, the data structures, everything, in every release. Creating code that will compile against all versions of the kernel using multiple compilers is a coding nightmare. Creating a version of the driver for each kernel revision and supporting them all is a coding nightmare. Full QA is darn near impossible.
Open-sourcing your driver can help, but that doesn’t work when people come to you for a driver because their distribution chose not to include the kernel’s driver for your hardware in their binary kernel.
Basically Linux currently only works well with hardware that Linux distributions support natively. Companies trying to support customers using their hardware with Linux are left with nothing but bad options.
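To illustrate that nightmare with a hedged sketch (the LINUX_VERSION_CODE and KERNEL_VERSION macros are real; the changed probe signature is invented for the example), out-of-tree driver code ends up littered with guards like this:

/* Hypothetical out-of-tree driver fragment. The version-guard pattern is
 * what vendors actually write; the callback change itself is invented. */
#include <linux/version.h>
#include <linux/module.h>
#include <linux/device.h>

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 16)
/* Newer kernels: probe takes only the device (invented example). */
static int mydrv_probe(struct device *dev)
#else
/* Older kernels: probe took an extra private-data argument (invented). */
static int mydrv_probe(struct device *dev, void *data)
#endif
{
    /* ... hardware setup would go here ... */
    return 0;
}

Multiply that by every distribution’s kernel and compiler combination, and the QA matrix explodes.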
You outlined the problems with the “moving target” ABI very well. Kernel developers should at least provide an ABI/API matrix in which they note what changes between versions and which ABIs they do not recommend using because they will likely change. This would help greatly.
Then distro vendors should also put up a website together outlining which compiler versions they use in their releases and what is incompatible between them.
I strongly disagree about a move to a fixed API; in the end it means the kernel can’t evolve without nasty workarounds, and the devs are very well aware of that. But they should at least try, as much as possible, to leave compatible stuff in there until it becomes a maintenance or design problem. The reason Windows works with old drivers is that its kernel has barely changed since the W2K release.
Adding microkernel bits to Linux isn’t impossible either. FUSE does that for FS drivers; DRI is a similar beast. Non-performance-critical devices could be a first target.
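For those unfamiliar with FUSE, a hedged sketch of what it looks like (modeled on the shape of the classic FUSE 2.x “hello” example; simplified, details not guaranteed): the filesystem is an ordinary userspace program implementing callbacks, insulated from in-kernel API churn:

/* Minimal userspace filesystem skeleton in the FUSE 2.x style: the
 * driver logic lives entirely in userspace behind a small kernel
 * interface. Reports a single empty root directory; nothing else. */
#define FUSE_USE_VERSION 25
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static int myfs_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;
}

static struct fuse_operations myfs_ops = {
    .getattr = myfs_getattr,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &myfs_ops);
}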
That’s something the commercial Unix vendors have been doing for years. It would be great to see it happen in the Linux world too.
Right, because everyone knows Windows drivers never crash. Well, except MS, who says that virtually all BSODs are caused by them, and are moving drivers into userspace for that very reason.
Which comments have been deleted, exactly? Just because an off-topic or troll comment has gone under your threshold doesn’t mean it’s deleted. All you have to do is adjust your comment threshold to a lower value (in your preferences) and you’ll be able to read all those quality posts that end up being modded under -1.
BTW, talking about comments and modding is off-topic. Feel free to mod this comment down as well.
It’s what’s wrong with Linux and why no one will use it.
Actually quite a few people use it. Generally, Linux has got very good driver support, and these types of problems are not very common. In any case, they mostly affect those who change kernels often.
I agree that stable APIs are preferable, but that’s hardly a reason not to use Linux.
It’s a wayyyy overrated operating system.
In this case, you’re really talking about the kernel, not the OS.
It needs to somehow use Windows binary drivers.
I guess you’re talking about ndiswrapper? To me the fact that Linux can use Windows drivers is emblematic of its strengths rather than its weaknesses. It shows how adaptable the system is.
The truth is that Linux is constantly evolving, and at a faster pace than other “major” OSes. If you don’t like it, don’t use it…or, better yet, improve it!
It’s too bad that people like Andrew Morton can’t give his pragmatic opinion on current kernel needs without the crowd of anti-Linux trolls jumping up and down, screaming: “See? We told you Linux sucked, the main kernel guy says so!”
Seriously, guys, get a life. We’re more interested in constructive debate, i.e. how to solve the issues.
It’s too bad that people like <some dev/engineer/PM from OS> can’t give his pragmatic opinion on current state of <OS> needs without the crowd of anti-<OS> trolls jumping up and down, screaming: “See? We told you <OS> sucked, <guy above> says so!”
Fixed for you.
In his case, it’s more highly informed speculation.
Indeed. And the fact is that, as the lead maintainer, most developers will in fact listen to him, so this is a good thing – it means that some more focus should be put on bug fixing.
Without going back to the old odd/even numbering system, it might be a good idea to designate “milestone” versions, which are known to be stable and have fewer bugs than other versions (though they may have fewer features or less performance). Just a thought.
How do you tell a Linux kernel developer to piss off in nice words?
It’s more easily done at a company like Apple or by the BSD core team.
There’s no excuse for not having stable APIs within the kernel, except for the fact that there was little engineering foresight in the first instance.
As the Linux kernel gets bigger, with more drivers, this issue is only going to get worse.
Sounds like a recipe for even more issues! Unless I am not understanding your point…
If I am wrong, please clarify, because your post isn’t making much sense as it appears to be written.
The problem is, the original author jumps from wanting a stable driver API to providing an API which can allow him to load Windows drivers.
Let’s address the first issue. In the case of the WDM – the Windows Driver Model, introduced in the Windows 9x era and meant to span both Windows 9x and Windows 2000 – the model was split into two parts: the driver for the hardware and the driver for the display.
Both would require substantial modifications to the kernel and X11 server just to get up and running, and even then, you’ll be stuck wondering whether it’s actually worth all the trouble given the alternative.
The alternative is a stable API, but providing a stable API requires not only establishing a concrete API but also providing backwards compatibility when something is fixed – every time Microsoft fixes something in Windows, they not only have to fix the problem but also provide a workaround, a ‘mock broken functionality’, so that the software of companies that relied on the broken behaviour continues running – and it would be the same situation with Linux.
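A hedged sketch of what such a compatibility shim tends to look like (every name here is invented; this shows the pattern, not any real kernel or Windows interface):

/* Hypothetical compatibility shim: the interface had to change, so the
 * old entry point is kept as a wrapper that emulates the old behaviour,
 * quirks and all, for callers that depend on it. */
struct widget;                          /* opaque, invented type */
#define WIDGET_LEGACY_QUIRKS 0x1        /* invented flag */

int register_widget_v2(struct widget *w, unsigned int flags);  /* new API */

/* Old API preserved as a thin wrapper over the new one. */
int register_widget(struct widget *w)
{
    return register_widget_v2(w, WIDGET_LEGACY_QUIRKS);
}

Every such wrapper is more code to test and maintain, which is exactly the cost being described above.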
Also, let’s just say tomorrow Linus comes out and announces a brand-new stable API – the problem won’t go away, as each distribution is compiled with a different compiler, and incompatibilities between compilers exist as well.
Until GCC comes up with a stable ABI and Linux comes up with a stable API, the problems aren’t going to be magically fixed overnight by the scream of ‘I need a stable API!’
Also, let’s just say tomorrow Linus comes out and announces a brand-new stable API – the problem won’t go away, as each distribution is compiled with a different compiler, and incompatibilities between compilers exist as well.
Bingo. You just have to round up the GCC guys and threaten them with a trip to Gitmo if they don’t wise up and stop fscking around with REGPARM/NoREGPARM, 4KSTACKS/8KSTACKS and other basic binary incompatibilities.
I’m sure that MIT has a good course on compilers, and the FSF offices are a stone’s throw away.
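For context, REGPARM refers to GCC’s x86 option to pass function arguments in registers instead of on the stack; a hedged sketch of why mixing the settings breaks binaries (function names invented, the regparm attribute is real):

/* On x86, -mregparm=3 passes the first three integer arguments in
 * EAX/EDX/ECX instead of on the stack. A kernel and a module built with
 * different settings disagree about where arguments live, so calls
 * across that boundary silently corrupt their arguments. */
static int __attribute__((regparm(3))) add_regs(int a, int b, int c)
{
    return a + b + c;            /* a, b, c arrive in registers */
}

static int add_stack(int a, int b, int c)
{
    return a + b + c;            /* a, b, c arrive on the stack */
}

int main(void)
{
    /* Within one compilation the compiler sees the attribute, so both
     * calls work; the breakage is between separately built binaries. */
    return (add_regs(1, 2, 3) + add_stack(1, 2, 3) == 12) ? 0 : 1;
}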
What does 4KSTACKS have to do with GCC?
G.
Also, let’s just say tomorrow Linus comes out and announces a brand-new stable API – the problem won’t go away, as each distribution is compiled with a different compiler, and incompatibilities between compilers exist as well.
Are you sure about this? How many times have you changed your kernel configuration and found that most of your applications are broken?
All sane “API specifications” include calling conventions. That’s why your applications don’t break when you modify the kernel, and that’s why all compilers can support the stable API.
There are only 2 “valid” problems with creating a stable API for device drivers. The first problem is the GPL/open-source mentality, where it’s not “fashionable” to allow other people’s work to be closed source. The second problem is backwards compatibility, which can be solved in a number of ways.
The normal way to solve the backward compatability problem is to have “OS versions”, where the API can change between versions. For example, “Linux 2.9.x” could use “driver API version 2.9”, while “Linux 2.10.x” could use a (possibly completely different) “driver API version 2.10”. In addition, “Linux 2.10.x” could also support the old “driver API version 2.9” (but this old API could be dropped in “Linux 2.11.x”, which limits the amount of backwards compatibility). This is how Microsoft do it (although they only release a “new version” every 8 years or something).
Another way would be to allow new API functions to be added at any time, and allow old API functions to be removed after a warning period – a device driver that works now might generate “API function 0x1234 is deprecated” warnings for 12 months and then might stop working after that.
The last way would be to ditch the buggy “mega-beast” and switch to a micro-kernel (there was an article about this)… 🙂
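To make the versioned-API idea above concrete, a hedged sketch (nothing like this exists in the kernel today; all names and version numbers are invented):

/* Invented sketch of a versioned driver API with a deprecation window:
 * current-version drivers load silently, last-version drivers load with
 * a warning, anything older is refused. */
#include <stdio.h>

#define DRIVER_API_CURRENT    210   /* "driver API version 2.10" */
#define DRIVER_API_DEPRECATED 209   /* still accepted, but warned about */

struct driver {
    const char *name;
    int api_version;                /* version the driver was built against */
};

static int load_driver(const struct driver *drv)
{
    if (drv->api_version == DRIVER_API_CURRENT)
        return 0;
    if (drv->api_version == DRIVER_API_DEPRECATED) {
        fprintf(stderr, "%s: API %d is deprecated and will be removed\n",
                drv->name, drv->api_version);
        return 0;
    }
    fprintf(stderr, "%s: API %d is no longer supported\n",
            drv->name, drv->api_version);
    return -1;
}

int main(void)
{
    const struct driver old = { "ye_olde_nic", DRIVER_API_DEPRECATED };
    return load_driver(&old);
}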
Ignoring for a second the GPL vs. closed-source binary drivers problem: considering the pace at which the kernel is being developed nowadays, it would be a shame to halt development for a year or two, spending every available resource on a stable in-kernel API, considering its (what-I-see-as) marginal effect on stability.
Being an in-house driver developer for my workplace, I share the grief of having the kernel constantly change under my feet. However, I’d rather have a fast-evolving development platform than a slow-moving kernel, trapped with ill-designed, out-of-date interfaces that consume a huge amount of resources to maintain, all in order to preserve compatibility with old closed-source drivers.
Other than that:
1. The Linux kernel is a fast-moving target and will remain so for the foreseeable future. Like it or not, the in-kernel API will remain unstable.
2. Most of the kernel bugs I’ve witnessed had nothing to do with the kernel API; most of them were plain old module bugs/problems. A change in the kernel API will mostly break module compilation; besides regparm/4K stacks, I never -personally- witnessed an API change that caused an in-tree kernel module to break. (Come to think of it, most of the casualties of 4K stacks/regparm were closed-source/out-of-tree modules to begin with.)
3. If one designs an out-of-the-main-kernel-tree module/driver/etc. (open or closed), it’s his responsibility to maintain it as the kernel tree moves.
4. The price of having a stable in-kernel API is too high considering its marginal effect on kernel stability.
G.
… And 120 seconds after I posted the above, I tried compiling my module against FC5’s latest 2.6.16 kernel, just to find out that the inode semaphores had been replaced…
Oh… The joys of living on the bleeding edge
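(For the record, the change hit here is almost certainly 2.6.16’s conversion of the VFS inode semaphore i_sem to the mutex i_mutex. A hedged sketch of the guard an out-of-tree module needs – the wrapper name is invented, the field names are real:)

/* Kernel-module fragment, not standalone userspace code. In 2.6.16 the
 * VFS replaced struct inode's i_sem semaphore with an i_mutex mutex,
 * breaking compilation of out-of-tree code that locks inodes directly. */
#include <linux/version.h>
#include <linux/fs.h>

static inline void my_lock_inode(struct inode *inode)  /* invented wrapper */
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 16)
    mutex_lock(&inode->i_mutex);    /* new mutex-based locking */
#else
    down(&inode->i_sem);            /* old semaphore-based locking */
#endif
}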
However, I’d rather have a fast-evolving development platform than a slow-moving kernel, trapped with ill-designed, out-of-date interfaces that consume a huge amount of resources to maintain, all in order to preserve compatibility with old closed-source drivers.
But that’s a false dichotomy. I’d rather have a well thought out slowly changing API with backward compatibility for open source drivers than either of your choices.
“fast evolving” would be more interesting if Linux was doing anything more than evolving to where other systems have been for decades.
But that’s a false dichotomy. I’d rather have a well thought out slowly changing API with backward compatibility for open source drivers than either of your choices.
I doubt that it’s doable in the foreseeable future.
“fast evolving” would be more interesting if Linux was doing anything more than evolving to where other systems have been for decades.
If you consider where Linux 1.0 and GNU were ~10 years ago and compare that to Windows NT 4.0 – and then take Windows XP and compare that to Linux 2.6 and KDE 3.5.2 – you’ll fully understand my point.
Linux is a young kernel; GNU is a young platform. It has yet to find its silver bullet when it comes to kernel and in-kernel API design, and as such it must not be locked into its current APIs (which were invented as you go).
Oh… and considering the number of servers running GNU/Linux-based distributions (given Microsoft’s seemingly infinite marketing and development resources, let alone their market dominance), GNU/Linux must be doing something right… (as volatile as the in-kernel API is).
G.
> Linux is a young kernel; GNU is a young platform.
Linux is 15 years old, which is quite a lot in the computing industry. NT in its current form is roughly the same age. One could argue that NT borrowed knowledge that was developed earlier for VMS, but the same could be said about Linux which borrowed almost all of its concepts from Unix. GNU started in 1983 and is based on Unix concepts as well.
If you say “young” – compared to what?
Linux is 15 years old, which is quite a lot in the computing industry. NT in its current form is roughly the same age. One could argue that NT borrowed knowledge that was developed earlier for VMS, but the same could be said about Linux which borrowed almost all of its concepts from Unix. GNU started in 1983 and is based on Unix concepts as well.
Umm… You are kidding right?
You’re actually comparing Linus’ student project to Windows NT 3.1, which entered development in 1988 (!!) and was backed by two (and then, from 1990, one) of the largest software companies in the world, Microsoft and IBM?
Microsoft didn’t “borrow” knowledge from VMS. Microsoft hired most of the VMS development team (from DEC) to work on Windows NT.
How can you compare the sheer size of Microsoft, IBM and DEC to Linux? Even today, when Linux is gaining strength, it is still minute compared to Microsoft.
Now, unless you have something constructive to add….
> How can you compare the sheer size of Microsoft, IBM and DEC to Linux?
> Even today, when Linux is gaining strength, it is still minute compared to Microsoft.
I responded to your posting, which was about the age of Linux. I never compared sizes, nor did you until now. You are pretty much twisting words that have already been written.
I’d like to inform you that if you see Linux as the revolution against the giant MS (which I think it almost is, although I think it still sucks technically), then you will help your cause a lot more by not writing such nonsense, but coding, bug reporting, educating newbies, …
Otherwise, if you only want to “win” a discussion, then you should be aware that there are more satisfactory things you could do in your time. And by the way, if it was you who modded me down, you should develop other means of power than down-modding. It won’t help you in real life, and only real life counts.
In Hebrew we have a saying that (translated to English) goes something like this:
“In an argument, the one who shouts the most (resorting to irrelevant personal insults) has the weakest points.”
Enough said.
Well, it was advice. You can take it the way you want.
A slowly changing API is technically doable. Ritchie proved that to be true back in the late 70s.
A system that so many people (my current best estimate is that at least two thousand folk are hacking away at the kernel these days) have modified over such a long period should not be described as “young”. That it has to be is a demonstration of the failure of the “evolutionary” development model.
Forget Windows. Consider Unix. Consider what a small group of people at Bell Labs did in less time. Or even better yet, consider how far BSD developed at Berkeley in its first five years, again with a much smaller team than is involved in Linux development.
The saddest thing about Linux is that there seems to be no willingness at all to learn from the past. It’s all trial and (mostly) error.
Is GNU/Linux doing something right? sure. All those servers work because of what they borrowed from Unix design. The stable APIs between the kernel and user land; the stable APIs of the libraries; the stable programming language. Those all come from AT&T Bell Labs, and the University of California at Berkeley.
Where GNU/Linux works, it works because it has borrowed from earlier systems. Where it fails, it often fails because motion is confused with progress, or because sloppy planning is called “evolution” and excused.
Design is hard. Few people are very good at it. The original OS/360 team was good. The guys who did Multics were good. An argument can be made for the people behind VMS. Ritchie was probably the best OS designer ever. There were others. None of them seem to have been involved in Linux, to date.
Everything Morton said was ‘right on the button’. Recognizing a problem as soon as possible is 50% of solving it in the optimal way and minimizing its impact. This kernel is more and more of a success. Thumbs up to all the devs for not letting themselves go loose and keeping up the quality standards of the codebase. This is a trait of responsibility-awareness on their behalf that they have every right to feel most proud of. I’m pretty confident that action will be taken to address the problems Morton talked about as soon as possible.
I’m in favour of a stable kernel API. A couple of years ago I wouldn’t have predicted a move away from the even/odd-numbered series kernels approach, but it happened in the interests of improving stability. Linux also has a relatively stable userspace API (indeed it has to, both to remain compatible with UNIX and to maintain its position as today’s de facto standard UNIX), and yet still manages to innovate, so I don’t see any problem with a stable kernel API at least between minor versions.
As for whether the open source model is wrong, I believe that the rate of innovation (or at least improvement) in Linux has proved that it is right. We’ll probably never know whether Microsoft are honest with themselves internally about the bugginess of Windows, but whether they are or not is irrelevant. Linux simply is several orders of magnitude less buggy than any Microsoft OS I’ve ever tried, with the possible exception of DOS; but show me a Windows user who’d rather use DOS, and I’ll show you one who’d rather use Linux, too.
The “bugfreeness” of Linux is probably directly attributable to the UNIX tradition of putting BUGS sections in the manpages, i.e. admitting the ones you can live with, fixing the ones you can’t.
In the words of one UNIX founder, “that’s a level of honesty you don’t [otherwise] find.”
Having a stable set of in-kernel interfaces seems to have worked quite well for Solaris, AIX, HP-UX and Irix. I always wonder why Linux doesn’t have one, and think that perhaps it’s due to a lack of sufficient design process, or maybe just an attitude of “well, we’re better than that, we don’t need to do what they do.”
Having processes (which you *use*!) isn’t evil, people, it’s how software engineering keeps going.
Actually, if we were to substitute “Windows” there, it wouldn’t be the anti-Windows posters making the most noise, it would be the Windows fanboys, as was recently demonstrated when an MS employee dared to criticize the leadership.
The difference is that ANY criticism of Microsoft (not Windows, mind you, which would enable you to make a valid comparison) here is instantly met by a barrage of rationalization, misinformation and insults on the part of MS apologists such as you.
Get it? A lot of us criticize Microsoft, not Windows. Yet a lot of the MS fanboys criticize Linux, not Linux companies. Hard not to think that they’re really astroturfers posing as real users…
archisteel, get off your high horse. You Linux fanboys criticize Microsoft and Windows. Why do you assume that us guys criticizing Linux aren’t Linux users? How would I know about REGPARM/NOREGPARM and about Linux’s lack of a stable kernel API if I’m not a Linux developer/user?
You should be very worried about people like me who really know the internals of Linux. We work with other operating systems and are all about exposing the soft underbelly of Linux.
Y’all can talk all day about astroturfers and Microsoft Windows fanboys, but at the end of the day, just look at Linux’s faults and deal with them – thank us for shouting at you, because we care about Linux (as much as we care about Windows or Solaris or MacOS).
Let me introduce the changelog for Linux kernel 2.6.16.14.
commit bf7d8bacaaf241a0f0157986fd4e1e6834873d50
Author: Chris Wright <[email protected]>
Date: Thu May 4 17:03:45 2006 -0700
Linux 2.6.16.14
commit 4acbb3fbaccda1f1d38e7154228e052ce80a2dfa
Author: Olaf Kirch <[email protected]>
Date: Wed May 3 21:30:11 2006 -0700
[PATCH] smbfs chroot issue (CVE-2006-1864)
Mark Moseley reported that a chroot environment on a SMB share can be
left via “cd ..\”. Similar to CVE-2006-1863 issue with cifs, this fix
is for smbfs.
Steven French <[email protected]> wrote:
Looks fine to me. This should catch the slash on lookup or equivalent,
which will be all obvious paths of interest.
Signed-off-by: Chris Wright <[email protected]>
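For context, the fix amounts to rejecting backslashes in lookup names, since the SMB server treats ‘\’ as a path separator and “cd ..\” could therefore step out of a chroot. A hedged sketch of that shape (simplified and illustrative, not the literal patch):

/* Illustrative only: refuse any smbfs lookup whose name contains a
 * backslash, closing the "cd ..\" chroot escape. Function name and
 * exact placement are simplified relative to the real patch. */
#include <linux/errno.h>

static int smb_name_ok(const char *name, unsigned int len)
{
    while (len--) {
        if (*name++ == '\\')     /* '\' reaches the server as a separator */
            return -EACCES;      /* reject the lookup outright */
    }
    return 0;
}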
How did that kind of thing even make it through?
Version 2.6 development is a joke. I’m considering going back to 2.4 on my file server, simply because this kind of thing isn’t instilling much confidence in me.
archisteel, get off your high horse. You Linux fanboys criticize Microsoft and Windows.
I don’t. I use Windows every single day, as much as I use Linux. I also act as “family and friends” tech support. I’ve been using Microsoft operating systems since the very first days of DOS, when PCs were still only made by IBM.
Why do you assume that us guys criticizing Linux aren’t Linux users? How would I know about REGPARM/NOREGPARM and about Linux’s lack of a stable kernel API if I’m not a Linux developer/user?
That proves nothing. I’m not a Linux developer and I’ve heard of these as well. All Linux development is done in the open, so it’s very easy for anyone to find out about Linux’ strengths and weaknesses.
You should be very worried about people like me who really know the internals of Linux. We work with other operating systems and are all about exposing the soft underbelly of Linux.
So you admit having an anti-Linux agenda, then? Good to know, I’m adding you to my list.
Now, why should I be worried? Try all you want, you naysayers can’t slow down Linux development and adoption.
Y’all can talk all day about astroturfers and Microsoft Windows fanboys, but at the end of the day, just look at Linux’s faults and deal with them – thank us for shouting at you, because we care about Linux (as much as we care about Windows or Solaris or MacOS).
I hear a lot of “we” and “us” coming from you. Usually, when someone feels the need to invent some sort of group and claim to be part of it, it only means that they don’t have enough confidence in their arguments to let them stand on their own merit. Either that or you’ve got multiple personality disorder…
Now, as far as astroturfers go, I’m still convinced there are a few on this site – I mean, who would spend so much time defending a multi-billion-dollar company that stifles innovation and competition through abusive monopolistic practices, except people who are on its payroll? (Before you throw a hissy fit, I’m not saying YOU are, just that there are some here.)
Meanwhile, this is taking away from the main point of this article: no kernel is perfect, there are currently too many bugs in the development kernel, Andrew Morton called it, now it’s time to fix those bugs. I still believe that those who take this rather mundane news item and put an anti-Linux spin on it are idiots, any way you look at it.
Are you sure about this? How many times have you changed your kernel configuration and found that most of your applications are broken?
Why can’t I run a kernel module compiled with GCC 3.4 on a kernel compiled with GCC 3.0?
All sane “API specifications” include calling conventions. That’s why your applications don’t break when you modify the kernel, and that’s why all compilers can support the stable API.
And if you read Linus’s statements on that, it would be a lot clearer to you – but hey, ignore them if you wish; you already have.
There are only 2 “valid” problems with creating a stable API for device drivers. The first problem is the GPL/open-source mentality, where it’s not “fashionable” to allow other people’s work to be closed source. The second problem is backwards compatibility, which can be solved in a number of ways.
I don’t see it as an ‘open source’ or ‘GPL mentality’ issue, because there is already GPL software out there quite happy to provide backwards compatibility, even for the so-called ‘evil proprietary software vendors’.
The last way would be to ditch the buggy “mega-beast” and switch to a micro-kernel (there was an article about this)… 🙂
How the hell does that fix the problem? You’ll still require a stable API, and assuming the microkernel is developed by the Linux god himself, you’re going to be saddled with the same crap.
There was a solution provided by Caldera, the UDK – no one caught onto it. Had that been adopted and made the standard driver API for Linux, we wouldn’t be having these parlour games today – life would have moved on, and we would be moaning about important things.