A Stanford University research team has found, using Coverity, a static source-code analysis tool, that Linux has fewer bugs than the competition. Andrew Morton, one of the core kernel developers, says that the major bugs detected by the tool have already been fixed. It may also be of interest that Linus Torvalds has developed a similar tool, called sparse, specifically for the Linux kernel, and it has proved quite useful.
“Though Coverity does not have any data about the relative frequency of flaws in Microsoft’s Windows operating system”
I think they need a little more info to complete the study…
So they compare the Linux kernel to the entirety of Windows XP? When they say 40 million lines of code, does that include the GUI? cmd.exe? Etc.? If so, the Linux comparison should include KDE, GNOME, or (your favorite DE here), along with bash, csh, or (you get the point).
Also, as noted by @DV, they didn’t really run the same tests on XP. Rather, they compare statistics on software in general to the Linux Kernel.
The comparison being made is based on OS vs OS.
Linux, the kernel itself, can run as a full operating system on its own, without a GUI front-end or add-ons.
Windows XP, however, can’t run in this mode. The GUI is part of the operating system and isn’t separate.
It’s not a comparison of source vs source.
>> they didn’t really run the same tests on XP. Rather, they compare statistics on software in general to the Linux Kernel. <<
Yeah, everyone knows Microsoft is famous for their bug-free code..
So they compare the Linux kernel to the entirety of Windows XP?
I’m glad I’m not the only one who caught that. However, my understanding of Windows (perhaps wrong) is that it’s really hard to distinguish the kernel from the rest of the OS, because of Microsoft’s technique of bundling apps into the kernel (eg IE), so perhaps the comparison isn’t all that unfair after all. I don’t know. Can anyone enlighten me?
In the case of the Linux kernel, the number of bugs, I guess, is taken from Bugzilla or something similar, which means it is the number of identified bugs.
In contrast, the well-known figure of “20 to 30 bugs for every 1,000 lines of code” refers to the real bugs present in a codebase; it is a number gained from experience.
Since not all bugs are known, you cannot compare the two numbers. It is like comparing apples with potatoes!
Additionally, I agree that you cannot compare WinXP with only the kernel, and that you need more exact information about WinXP.
I have little doubt about the seriousness of the study, but the way it is reported makes it sound like a comparative study against rivals. I can’t imagine these guys spending all that time and coming up with things like: “Windows XP, by comparison, contains about 40 million lines of code, with new bugs found on a frequent basis”.
So Linux is averaging about 1 bug every 6,000 lines of code, while “Proprietary” software averages about 360 bugs every 6,000 lines of code. I find this highly unlikely. Can we say biased report? Linux zealots are no better than Microsoft FUDsters. Can’t we just use the right tool for the job?
In the case of the Linux kernel, the number of bugs, I guess, is taken from Bugzilla or something similar, which means it is the number of identified bugs.
—-
No. Nothing is taken from Bugzilla; it is generated by the code analyser. Do read the article before commenting on it.
I have little doubt about the seriousness of the study, but the way it is reported makes it sound like a comparative study against rivals.
—–
You need to read the other statistics produced by the research team, which has actually looked at commercial code. Until you do, don’t post misinformed rants like this.
Linux zealots are no better than Microsoft FUDsters. Can’t we just use the right tool for the job?
—-
When did a Stanford research team become Linux zealots? The idea is ridiculous.
IE doesn’t live in the kernel. That’s just bullshit. It IS, however (by Microsoft), meant to be an integral part of the userland, used by all applications that need to display a little HTML, such as the CHM help system and lots of other things.
IE doesn’t live in the kernel. That’s just bullshit. It IS, however (by Microsoft), meant to be an integral part of the userland, used by all applications that need to display a little HTML, such as the CHM help system and lots of other things.
—-
That explains why MS doesn’t allow you to uninstall it and preloads it by default along with the OS; also why ActiveX bugs in IE create all those gaping security holes that make CERT recommend an alternative browser, and why Firefox got 10 million downloads within a month.
Any decent programmer knows that the number of lines in a program is just one of its many metrics. On its own, it is completely meaningless.
I’ll let you draw your own conclusions about computing statistics on such a subjective metric.
Analyzing the Linux source code to seek out and fix bugs is a worthy effort, and we can only wish these guys (and ourselves) well.
However, this article based on their work is just a mathematical stunt… a scientific fraud.
But does anyone who reads OSNews NOT read Slashdot?
Yea. A dun read that.
As somebody already mentioned, this story was already posted on /.
But you guys here really were able to give even dumber responses to the article (BTW, actually reading the article usually helps a lot if you want to avoid stupid posts in the future).
No, that Win XP has more lines of code than Linux is not relevant, as they compared the number of bugs per lines of code found in the Linux kernel to the number of bugs usually found in commercial code. And frankly, the quality of the Linux code is just amazing.
“Linux, the kernel itself, can run as a full operating system on its own, without a GUI front-end or add-ons.
Windows XP, however, can’t run in this mode. The GUI is part of the operating system and isn’t separate.”
Are you sure of this?
Are you sure of this?
—-
Are you unsure? If so, why?
“When did a Stanford research team become Linux zealots? The idea is ridiculous.”
It’s because the MS crowd can’t stand criticism, so they automatically dismiss everything as coming from “linux zealots”.
Although I personally do consider Linux (yes, the kernel) to be more stable and crash-free than the NT kernel or the 9x kernel, I highly doubt the difference is this big. Something must’ve been twisted.
I highly doubt the difference is this big. Something must’ve been twisted.
—–
The Stanford research team has posted its list of bugs to the Linux kernel mailing list many times. They give detailed statistics, while your assumption that it’s twisted is mere speculation. Guess which is more valid?
This computer I am typing this message on won’t run windoze very well: it crashes a lot, BSODs, locks up. I haven’t had windoze on it for a couple of years…
But the more traditional Linuxes run great on it (Slackware 10, Debian Sarge)…
Patiently waiting for Sarge to go stable :^)
Fedora Core 3 is OK but kind of slow; Mandrake 10 and SUSE are flaky…
IE does not live in the kernel; it is a userland application.
MS does not let you uninstall IE because the HTML engine is used extensively throughout Windows by various apps. (As an example, the Add/Remove Programs control panel item uses it to display the list of installed programs.)
The NT kernel can run without a GUI.
Yes, the Windows build of the kernel does have hooks for various subsystems to enhance performance. GDI is just one example.
Windows XP is a product based on the NT kernel.
XP just happens to have a GUI.
Come on, people, we’re reading OSNews! We should all know that there is a lot more to an OS than just the kernel.
The NT kernel can run without a GUI.
—
How? Where?
“MS does not let you uninstall IE because the HTML engine is used extensively throughout Windows by various apps. (As an example, the Add/Remove Programs control panel item uses it to display the list of installed programs.)”
How come MS Windows is the only operating system requiring its own browser for everything, instead of letting end users choose what they want to have on their system? What’s wrong with letting users swap HTML engines?
Why tie something as vulnerable as IE so deeply into Windows? Why does Windows Update *require* IE? How about MSN: can’t it function with standards-compliant browsers? Are they saying nobody using Mac or Linux should be browsing MSN, when it was working fine earlier?
When Windows NT first came out, Win32 was just a subsystem like DOS, OS/2, and POSIX (yes, even POSIX, because of a US government procurement rule that every OS they used had to be POSIX-compatible). Anyhow, performance issues drove MS to move the Win32 subsystem, with its GDI, from userspace to kernelspace (if I am not mistaken).
As far as I know, parts of IE run in kernel mode (the renderer, as a common control, I think, but this is speculation). But what is really meant when one talks about IE running in kernel mode is IIS: it has some services running in kernel mode, which is a bit risky if you ask me.
Most probably it would be possible to repackage the kernel to boot into a bash console using Microsoft Services for UNIX (since it’s just a subsystem like Win32). But I don’t want to know how difficult that would be.
You should distinguish between Microsoft’s product politics and what is technologically possible.
“The NT kernel can run without a GUI.
—
How? Where?”
This was a statement about the design of the kernel and not the OS you are using… refer to my last posting.
Erm… I highly doubt that the Linux kernel has fewer bugs than, say, the QNX microkernel (Neutrino), for example. Not only because QNX has been in development since 1982 (if I’m not mistaken), but also because I think it’s very unlikely that a monolithic kernel will be the kernel with the fewest bugs. Why? Simply because a microkernel is still far smaller and therefore less likely to have bugs, assuming that when examining different kernels you do not include the parts running outside of kernelspace.
You cannot compare “Linux” with the NT kernel, simply because Linux is monolithic and the NT kernel isn’t.
A statement like “kernel with the fewest bugs” is extremely deceptive, in my opinion.
“The NT kernel can run without a GUI.
—
How? Where?”
If you need a proof of concept, put in the WinXP setup CD, start setup, and enter the repair console. Voilà: you are running an NT kernel with a command-line interface.
“You cannot compare “Linux” with the NT kernel, simply because Linux is monolithic and the NT kernel isn’t.”
Don’t get me wrong, but I thought the NT kernel was just as monolithic as the Linux kernel. Isn’t it? If not, what is the difference?
Damn it. I hate to correct myself:
http://www.e-paranoids.com/k/ke/kernel__computer_science_.html
NT is a hybrid kernel…
The comparison being made is based on OS vs OS.
No, it isn’t. At least not with consistent definitions of “OS”
Linux, the kernel itself, can run as a full operating system on its own, without a GUI front-end or add-ons.
Really? Doing what? How are you planning on starting this kernel without a bootloader?
Windows XP, however, can’t run in this mode. The GUI is part of the operating system and isn’t separate.
How they’re distributed is irrelevant. The comparison is still invalid.
It’s not a comparison of source vs source.
Which is actually the primary reason their comparison is flawed. They’re comparing known data about one thing to completely unknown data about another. It’s a bit like saying a Ferrari 360 is faster than any other arbitrary vehicle in the world based on its top speed being higher than the speed of cars averaged over the whole world.
If you need a proof of concept, put in the WinXP setup CD, start setup, and enter the repair console. Voilà: you are running an NT kernel with a command-line interface.
—–
You know what? That’s not a proof of concept; it’s totally bogus. A recovery console on the setup CD is independent of the kernel that is actually installed on the machine.
—
Don’t get me wrong, but I thought the NT kernel was just as monolithic as the Linux kernel. Isn’t it? If not, what is the difference?
——
Wrong. The OpenBSD kernel is monolithic; the Linux kernel is extremely modular.
The truth is that there is no production version of Windows NT without a GUI.
NT is a hybrid kernel…
—–
MS has subsequently dropped that claim; check the Microsoft sites.
OK, so Windows has more bugs. But it has a much better user experience for many people. If I’m wrong, why do so many people still choose Windows instead of Linux?
It’s possible. Read about it…
http://www.microsoft.com/windows/embedded/default.mspx
****
OK, so Windows has more bugs. But it has a much better user experience for many people. If I’m wrong, why do so many people still choose Windows instead of Linux?
*****
Speak for yourself.
It is *not* the UI that keeps me on Windows; it is the applications. When Adobe ports Illustrator to Linux… I am there!
—
If I’m wrong, why do so many people still choose Windows instead of Linux?
Top 5 Reasons:
5) Because that’s what runs the latest games.
4) Because that’s what runs Microsoft Office.
3) Because that’s what they use at work or school.
2) Because they have never heard of Linux.
And the number one reason people choose Windows instead of Linux:
1) Because they have no idea what an OS is and use whatever their computer came with.
Sure, some people do genuinely like Windows better. However, you’re deluding yourself if you think that’s why the majority of people use Windows. The real reasons are much more mundane.
No. Nothing is taken from Bugzilla; it is generated by the code analyser. Do read the article before commenting on it.
So they’ve apparently got a code analyser that can detect all bugs, but buggy OSS code is still being released.
Hmmm.
The NT kernel can run without a GUI.
—
How? Where?
Embedded NT. The recovery console.
How come MS Windows is the only operating system requiring its own browser for everything, instead of letting end users choose what they want to have on their system?
What can I swap KHTML for? How about OS X’s WebCore?
What’s wrong with letting users swap HTML engines?
Who else makes a compatible HTML engine?
Why tie something as vulnerable as IE so deeply into Windows? Why does Windows Update *require* IE?
Because it’s Microsoft’s product and they can do what they want with it.
How about MSN: can’t it function with standards-compliant browsers? Are they saying nobody using Mac or Linux should be browsing MSN, when it was working fine earlier?
MSN appears to load up fine in Firefox.
How do you justify Microsoft having to pander to your wants, but nobody else having to pander to theirs?
You know what? That’s not a proof of concept; it’s totally bogus. A recovery console on the setup CD is independent of the kernel that is actually installed on the machine.
Right. But I bet you don’t think the kernel a Linux install CD boots is “independent of the kernel that is actually installed on the machine”, though, right?
Added to that, you can also boot the system on the hard disk to the same recovery console.
The truth is that there is no production version of Windows NT without a GUI.
Embedded NT.
>> it is generated by the code analyser.<<
Again, I repeat my comment: those are IDENTIFIED bugs, not ALL the existing bugs. If the code analyser were able to catch all the existing bugs in a codebase, then go ahead: throw away all the debuggers, use only their code analyser, and get ready for a bug-free world.
I do not doubt that Coverity’s code analyser helps to reduce the number of bugs in a piece of software, but it will by no means find all the bugs!
>> do read the article before commenting on it <<
I have read the article and:
1) It does not say that they used the code analyser for their report; that is an inference. My comment about “bugzilla or something similar” points to the fact that identified bugs are being counted, the same kind the code analyser supplies.
2) If you have access to the report, which is due on Tuesday, please let me know where to get it and I will love to read it, really.
“When did a Stanford research team become Linux zealots? The idea is ridiculous.”
“It’s because the MS crowd can’t stand criticism, so they automatically dismiss everything as coming from ‘Linux zealots’.”
You guys are barking up the wrong tree. I don’t have any religious motives at stake; I’m just pointing out how stupid and biased this report is. I use Windows, Solaris, FreeBSD, and Linux. I just find it strange that Stanford researchers were able to reach such a stunning conclusion about closed-source operating systems when they don’t even have the source code. Reminds me of all the BS studies done by “completely independent groups” showing Windows to be more secure than Linux.
Where do they even get their 20-30 bugs per 1,000 lines of code statistic? That is the stupidest assumption in the world. It might be a fair estimate for programs written by college students and small companies writing applications for internal use, but I don’t think Windows (or any other commercial-grade product) comes anywhere near that number. Do the math for Windows XP (45,000,000 lines of code * 30 bugs / 1,000 lines) and you get 1.35 million bugs in Windows XP. LOL; with that kind of number, I am surprised Windows even boots. But go ahead, keep believing whatever you want to believe, call me a zealot, call me names. Whatever.
Typically commercial code doesn’t come with source so how did they make this comparison?
What do these people consider a “bug”?
What do the developers of “commercial software” consider a “bug” in that 20-30 bugs/KLOC statistic?
yes, me. I can’t stand slashdot.
Why don’t you check the number of open and unfixed bugs in Windows 2000?
Those are bugs that Microsoft knows about but it is yet to fix, in some cases for over seven years.
When you look at those numbers, 1.35 million bugs does not seem quite as astronomical.
Really? Doing what? How are you planning on starting this kernel without a bootloader?
—-
embedded systems
“So they’ve apparently got a code analyser that can detect all bugs, but buggy OSS code is still being released.”
Apparently you failed to read the article…
———————–
“This is a benefit to the Linux development community and we appreciate
Coverity’s efforts to help us improve the security and stability of Linux,”
said Andrew Morton, lead Linux kernel maintainer. “We’ve already addressed
the top priority bugs that Coverity has uncovered. It’s a very useful system
for high quality code.”
———————————
“Where do they even get their 20-30 bugs every 1000 lines of code statistic?”
From another research finding. Read the relevant press releases:
http://lwn.net/Articles/115530/
“Again, I repeat my comment. Those are IDENTIFIED bugs, not ALL the existing bugs.”
Which has nothing to do with Bugzilla whatsoever.
“What can I swap KHTML for? How about OS X’s WebCore?”
I have no KHTML on my Linux system. Try removing IE from your Windows system. I have OS X without WebCore too.
“MSN appears to load up fine in Firefox.”
Use it completely; just loading up the website is a poor test. I guess you didn’t even bother.
“Who else makes a compatible HTML engine?”
Stick to standards and you could swap in Mozilla or Opera or WebCore or whatever.
“Right. But I bet you don’t think the kernel a Linux install CD boots is “independent of the kernel that is actually installed on the machine”, though, right?”
It is not independent; it’s the exact same kernel. Don’t believe me? Go look into the Red Hat installer: the same kernel is used both for the installation and after it. In fact, the whole installer runs in a chrooted environment that is used to boot up after the installation too.
Who is likely to know more about this subject: researchers from a prestigious university like Stanford conducting a four-year study, or the average OSNews reader? Carnegie Mellon’s computer science department, or the average OSNews reader?
Thom, as many people posted before you, the metric was ‘number of bugs per line of code’. Anyone who’s done one year of maths would therefore be able to work out that the total amount of code under analysis, once there’s a large enough amount to make it a worthwhile study, is irrelevant. If you’re comparing *per line of code* you can make a perfectly fair comparison between 100,000 lines of code and 100,000,000.
Luk – “Although I personally do consider Linux (yes, the kernel) to be more stable and crash-free than the NT kernel or the 9x kernel, I highly doubt the difference is this big. Something must’ve been twisted.”
Well, IANA coder, but it seems to me that a lot of what is being measured here is ‘bugs’ that don’t affect functionality in most circumstances but are more examples of bad programming practice that could *potentially* cause problems in some cases. This type of automatic analysis can’t catch more complex bugs, so it’s by no means complete and wouldn’t necessarily correspond exactly to system stability (i.e. just because one system scores 10x better than another on this test does not mean it will be 10x as stable).
Windows Server can also run in headless mode.
>> Where do they even get their 20-30 bugs every 1000 lines of code statistic? That is the stupidest assumption in the world<<
This is a well-known statistic in software development. Although it may sound like too much, it is not. Take into account that a lot of bugs do not show up under normal conditions; they often appear only when you start to reuse code for other things.
The military also uses similar statistics. In their case they consider code to be excellent when it has around 5 bugs per 1,000 SLOC. To get the count down from 20-30 to 5, they invest a LOT of money. And we are talking about software onboard fighters, satellites, etc.
“So they’ve apparently got a code analyser that can detect all bugs, but buggy OSS code is still being released.
Hmmm.”
Look, man, you make a lot of valid points here, but you’re just being contrary now. I’m sure you *know* that no-one ever claimed it identified all bugs. None of the linked articles is even close to claiming that. It’s also in no way limited to OSS software – the university is more than happy to analyse anyone’s code for a price, and presumably Linus’ checker could be applied to proprietary code as well. Now stop posting inflammatory crap.
Source code analysers generally only detect a certain class of bugs. Their basic analysis can catch things like use of memory after freeing, incorrectly-typed access to variables, etc. What this means is that the analyser might catch a bug that, while technically incorrect, has no user-visible result. It also generally flags things that won’t trigger unless the user does something like feed the code a 1000-character variable name.
If anybody would RTFA, that’s precisely why they say this study is an analysis of *code* quality, not *product* quality. A bug the user doesn’t encounter doesn’t affect the product, but it does indicate lower code quality.
Now, that is not to say that the article is all that good, especially with regard to the comparison to Windows XP. However, the 20 bugs per KSLOC figure is actually a pretty common number that you’ll see regarding proprietary products *on average*.
To sum it up, what the study shows (as opposed to what the article says — which, like most mainstream technology reporting, bears little resemblance to the truth) is that Linux has a lower density of a certain class of bugs than the industry average. Since similar studies show similar results for other open source products, this is pretty easy to believe.
Actually, most distros use a slightly different kernel for the install process, which is why they have to reboot at the end of it. Mandrake has a customised install kernel called kernel-BOOT. I haven’t used Fedora for a while so I’m not sure, but if you still have to reboot after the install process, it’s probably using a slightly customised install kernel too.
Thom, as many people posted before you, the metric was ‘number of bugs per line of code’. Anyone who’s done one year of maths would therefore be able to work out that the total amount of code under analysis, once there’s a large enough amount to make it a worthwhile study, is irrelevant. If you’re comparing *per line of code* you can make a perfectly fair comparison between 100,000 lines of code and 100,000,000.
The problem is that not all lines of code are equal. For example, the kernel source of NT probably receives a lot more development attention than the Win16 WoW layer.
This type of automatic analysis can’t catch more complex bugs, so it’s by no means complete and wouldn’t necessarily correspond exactly to system stability (i.e. just because one system scores 10x better than another on this test does not mean it will be 10x as stable).
As I said in another post, there is also the issue of exactly what a “bug” is. In that oft-quoted “sixty-odd thousand known bugs in Windows” statistic, Microsoft considered things like spelling errors to be “bugs”.
You don’t have to be a Stanford or Carnegie Mellon researcher to do the math. With the assumption of 20-30 bugs every 1,000 lines, they are saying Windows XP has 1.35 million bugs. That number is completely unfeasible. Do you even understand how big that number is? A more realistic number might be 150,000 bugs in Windows XP. But that would mean Windows has only 3 or so bugs every 1,000 lines of code, and that kind of number lacks sensationalism.
Certain things might be believable, like if they said Linux is 5% faster than Windows, or has ten times fewer bugs. But when you publish reports saying something is a factor of 360 better than something else, you pretty much lose all credibility. But hey, as I said, whatever floats your boat.
In other news, Stanford researchers showed Linux was 360 times faster than Windows. LOL.
embedded systems
Firstly, there’s still a bootloader, even if all it does is tell the CPU to start reading from a particular storage device and then execute the code.
Secondly, are you claiming the Linux kernel has sufficient functionality included with it for all embedded systems ? And you criticise Windows for being bloated…
apparently you failed to read the article…
So their tool *can’t* detect all bugs?
I have no khtml in my linux. try removing IE in your windows system. I have OS X without webcore too.
Then your systems can’t be compared to a Windows system that has that functionality. Apples to oranges.
use it completely. just loading up the website is a poor test. guess you didnt even bother
No, I didn’t. Having never even loaded msn.com before I wouldn’t even know where to start looking to test functionality.
stick to standards and you can swap it with mozilla or opera or webcore or whatever.
What standard defines an embeddable HTML component for Windows?
It is not independent; it’s the exact same kernel. Don’t believe me? Go look into the Red Hat installer: the same kernel is used both for the installation and after it. In fact, the whole installer runs in a chrooted environment that is used to boot up after the installation too.
Last I checked the installer used a generic kernel whereas the installed kernel is architecture specific.
>> “Again, I repeat my comment. Those are IDENTIFIED bugs, not ALL the existing bugs.”
which has nothing to do with bugzilla whatsoever <<
Yes and no. Bugzilla is used to report identified bugs in a codebase. Hence, not all identified bugs need to be in Bugzilla, but all bugs in Bugzilla have obviously been identified. Hence, Bugzilla can be used as a measure of identified bugs.
So again: compare identified bugs with identified bugs, and do not mix apples with potatoes.
>> researchers from a prestigious university like Stanford conducting a four-year study or the average OSnews reader? Carnegie Mellon’s computer science department or the average OSnews reader? <<
Speak for yourself. I’m not a computer scientist, but I have been writing software for more than 10 years, most of it large numerical simulations for research purposes, where the number of bugs also has to be low! So I know about bugs and bug counts.
What I meant from my statement was this:
1) Wired compared the Linux 2.6 kernel to Windows XP.
2) Since Microsoft will not release the source code for XP, a comparison had to be made on what code was available.
3) Carnegie Mellon’s CyLab stated information they’ve seen in commercial software code, with Windows XP a part of that software code.
4) Based on that information, they extrapolated that commercial software has 20-30 bugs per 1000 lines of code.
5) Applying this logic to Windows XP, XP would have between 800,000 and 1.2 million bugs for 40 million lines of code.
Wired doesn’t state whether the 40 million lines in XP include everything or not, but even if Microsoft isn’t average, that is still significantly more bugs than Linux carries.
To sum it up, what the study shows (as opposed to what the article says — which, like most mainstream technology reporting, bears little resemblance to the truth) is that Linux has a lower density of a certain class of bugs than the industry average.
But does that 20 bugs/KLOC number (which I agree is a fairly commonly stated industry average) include _only_ the types of bugs being automatically detected, or does it include other types of bugs as well?
There is another aspect to consider as well – at which stage in the development cycle of the respective codebases is each analysis being performed ?
>> But does that 20 bugs/KLOC number (which I agree is a fairly commonly stated industry average) include _only_ the types of bugs being automatically detected, or does it include other types of bugs as well? <<
It includes, in general, all bugs in a codebase. Anyway, the number should be taken as a guideline, not a rule. From that point of view, exactly what types of bugs it represents is not so relevant; the important thing is the order of magnitude.
They don’t have the source code to XP so they can’t do source code analysis on it for comparison.
I’ve been following this thread all day, and I can sympathize with both sides. I don’t want to get involved in the fighting, but there is one thing that keeps coming up that I would like to correct: you can completely remove IE from Windows XP. When slipstreaming a Windows install CD with SP2 and newer updates/drivers, you can absolutely remove IE, along with several other things like the media player, the firewall, etc. I have installed Windows XP in under 400 MB of space and it runs fine (with the classic GUI) on a Pentium II 233 with 64 MB of RAM. I am not going to make any claims about the bugginess of Windows or Linux, and I know that you cannot install it this way with factory CDs, but it is possible. If you don’t know how to do it yourself, there are tools to help as well; I recommend nuhi’s nLite (http://nuhi.msfn.org/). I would say, though, that it is a little unfair to make claims about how many bugs there are per line of code if you haven’t seen the code…
What are you talking about? We hadn’t even posted in this thread before I wrote my post, and my post wasn’t directed at anyone in particular. As for 18 years of experience, stop swinging it around and put it back in your pants. The only thing that counts is a logical argument…
They don’t have the source code to XP so they can’t do source code analysis on it for comparison.
—-
Since the actual press release doesn’t directly compare Windows XP in any way to Linux, this is pointless.
@lumbergh: It’s quite clear that the Wired writer added that comparison to Windows XP. The actual study hasn’t shown up yet on ACM, so I’ll see later if that appears in the actual study.
It includes in general all bugs in a code. Anyway, the number should be taken as a guideline, not as a rule. From that point of view, what type of bugs it exactly represents is not so relevant.
Well, yes it is when broad, sweeping, unqualified statements like “Linux: Fewer Bugs Than Rivals” are going to be made based on it.
If you’re going to say things like that, then you have to have a consistent definition of what a “bug” is across the board, or else you’re not comparing two like things.
>> Well, yes it is when broad, sweeping, unqualified statements like “Linux: Fewer Bugs Than Rivals” are going to be made based on it.
If you’re going to say things like that, then you have to have a consistent definition of what a “bug” is across the board, or else you’re not comparing two like things.<<
I agree with you that a definition of what a bug is, is important. The problem is that people use things out of context. If somebody in the software industry talks about 20 bugs/kSLOC, he/she does not care about the exact definition of what a bug is; he/she is interested in the order of magnitude of the number of bugs. That is the point I want to highlight.
One can make comparisons about certain types of bugs, for example the ones identified by the code analyser mentioned in the article. But in that case it has to be stated that way. Otherwise I agree with you that it only leads to wrong judgements, as can be seen in this thread.
That last AdamW was not me. My posts will always come from a 204.xx IP address (when I’m at work) or a shawcable.net domain (when I’m at home), and will always have my email address linked to the name. I am not a coder, nor do I claim to be. Sorry, Rayiner.
I figured it wasn’t you; I just assumed that there was another AdamW. I have no idea why, but it seemed logical at the time.
I ran a blog entry on this, but my point basically is 5.7 million lines of code vs. 40 million lines of code: the more code you have, the more flaws you will have. Linux has flaws, and since Linux is based on UNIX, I wonder how many flaws UNIX has. After Sun open-sources Solaris, I guess we will find out.
Linux is based on UNIX
—–
No, it is not “based on” UNIX; Linux is Unix-like.
Anybody who applies for certification with The Open Group and follows the base standards can get the “UNIX” trademark.
As I already posted, today, in this very thread, the metric used for comparison was bugs per line of code. This means the fact that codebase A is larger than codebase B is utterly irrelevant. There are *other* problems with the analysis of the article (it just used an oft-quoted general figure for commercial software as the figure for Windows, and failed to take into account that this number probably includes bugs the automatic analyser does not detect), but your objection has no validity whatsoever.
Rayiner said: “As for 18 years of experience, stop swinging it around and put it back in your pants. The only thing that counts is a logical argument.”
This site is discriminating against everyone except ‘certain people’….
This has insulted me and needs to be removed.
I mean why didn’t they just run their tool on Linux and OpenBSD kernels and compare the results? Why didn’t they run it on NetBSD and FreeBSD kernels and compare the results? They could even have put a nice little table in there and categorized the bugs, demonstrating the strengths and weaknesses between these 4 different kernels. They didn’t bother because it doesn’t make for a good story I guess.
They didn’t bother because it doesn’t make for a good story I guess.
—-
Actually, they did, and the results are available. Linux fared better, if you want the short story, but feel free to Google.
The original attack on Rainer was entirely unjustified and unsubstantiated.
I have seen other posts by Rainer, and he is a very knowledgeable and coherent writer.
—
Don’t feed the bridge-dwellers.
Howdy all
Hmmm, it’s always amusing to see people who clearly have no clue about modern software process mouth off at a figure like 20-30 defects/bugs per kSLOC.
A defect or bug might be something as simple as a spelling mistake or the scope of a variable being wrong, so yes, 1.5 million defects/bugs in the Windows source code might be feasible.
Also, just because something does not directly crash an application does not mean it isn’t a bug!
P.S. Do not bother flaming me unless you actually know what “scope of a variable” means… that should rule out at least 99.9% of you trolls.
The article was horrible: it did not include enough supporting details from the university research.
All those bugs are probably from Windows ;-D
Here is the page from Coverity (makers of the source code analysis tool), detailing the bugs and where they were found:
http://linuxbugs.coverity.com/linuxbugs.htm
Just had a quick look at the article. Shame they didn’t compare against OS X. That’d be a fair comparison, since OS X doesn’t require a GUI (you can run Darwin as a functional OS, although it doesn’t do a lot), just like Linux, and even more interesting since both are open-sourced (albeit under different models).
If you need a proof of concept: put in the WinXP setup CD, start setup, and enter the repair console. Voilà, you are running an NT kernel with a command-line interface.
—–
You know what, that’s not a proof of concept; it’s totally bogus. A recovery console on the setup CD is independent of the kernel that is actually installed on the machine.
Yes, the console on the CD is independent of the kernel on the machine… until you run the console on the machine using the native kernel (“winnt32 /cmdcons” anyone?)
Yes, once everything is loaded, the GUI is a fundamental part of Windows NT, but you can run recovery console on your machine using parts of the native kernel.
I don’t believe that every part is loaded, however, so it is not really the same thing as comparing it to the OS kernel, is it?
Actually, the article isn’t a comparison of Linux and Windows, though it does make an awkward comparison between the two OSes in code size. From the article:
The report, set to be released on Tuesday, states that the 2.6 Linux production kernel, shipped with software from Red Hat, Novell and other major Linux software vendors, contains 985 bugs in 5.7 million lines of code, well below the industry average for commercial enterprise software. Windows XP, by comparison, contains about 40 million lines of code, with new bugs found on a frequent basis.
Lots of trolls seem to have jumped on the last sentence and missed the point of the article, which is that, as a piece of code, the Linux kernel is much less buggy than average commercial code.
Anyway, anyone that has worked on a large software project will tell you that 985 bugs for 5.7 million lines is a VERY low number. Doesn’t mean it’s the lowest, doesn’t mean it’s the best code either as others have already pointed out, it just means that it’s pretty good code.
Now MS may in fact be able to beat this…I, for one, invite them to open their kernel code for comparison.
P.S. Regarding the name-stealing troll…let me reiterate this plea:
Eugenia, I beg you, please introduce a user-based comment system to OSNews. I don’t care if it allows anonymous posters or not, but I think your regular readers would appreciate knowing that someone won’t troll and start flame wars using their identity. Partial IPs and Report Abuse are just not enough. Please consider it.
Hmm, the http://linuxbugs… link shows that the majority of the bugs are null-pointer bugs! Phew… now I know why I have always been told not to muddle too much with pointers!
Funny how they don’t bother including bugs CAUSED by logic flaws, scoping mistakes, hacks, and all the other bugs that can’t be found with an automated tool in the bug count. Bugs caused by bugs? There are far more types of code defects than those they analyze. As several people in the thread have already said, the metric produced by Coverity is meaningless. It can help you look for problems in your code, but as a statistic, it’s completely useless.
Any half-decent compiler can find many of the defect types listed on http://linuxbugs.coverity.com/linuxbugs.htm, and FxCop can do almost exactly the same thing for .NET assemblies.
You can bet MS has a tool they use internally for this kind of analysis. But these kinds of analysis tools CANNOT tell you if your logic, or your implementation of a function, is just plain wrong and not doing what it’s supposed to do.
Again, this study compares *code* quality, not product quality. The reasoning behind the study is that more carefully written code is likely to have fewer mistakes like array-bounds overflows, null-pointer dereferences, etc. These can have serious user-visible effects (system crashes, security breaches, etc.), but more relevantly, they indicate poorer-quality code.
“Our findings show that Linux contains an extremely low defect rate and is evidence of the strong security of Linux,” said Hallem. “Many security holes in software are the result of software bugs that can be eliminated with good programming processes.”
Surprised me. One should, in my opinion, consider the fact that the researchers didn’t scan the vanilla kernel; they examined the Red Hat, SuSE, etc. kernels respectively, and those are heavily *patched*. So half the praise should go to the commercial Linux vendors as well, for delivering such a good base to develop on. Besides that, this study shows one more time how mainstream Linux has evolved into an even more mature base to work with.
“What can I swap khtml for? How about OS X’s WebCore?”
I have no KHTML in my Linux; try removing IE from your Windows system. I have OS X without WebCore too.
It’s really pretty easy to remove IE from Windows.
Try XPlite, among other things. It kills some functionality in Windows when it’s gone, and things like Steam won’t work properly, but it’s possible, and easy, to do.
In essence, the poster of such a quaint little phrase is telling the reader “I have special knowledge”, “I know the truth”, “I know The Right Way To Do Things”.
Always a pleasure to meet someone with such extraordinary self-knowledge.
actually they did and the results are available. linux fared better if you want the short story but feel free to google
I googled.
Where are the results that support your claim that Linux fared better? Just curious.
> But when you publish reports saying something is a factor of 360 greater than something else,
> then you pretty much lose all credibility
Right, so if I publish an article claiming that the Sun is 500,000 times more massive than the Earth, I lose all my credibility? Nice.
Howdy all
Just found this from a quick google.
http://marc.theaimsgroup.com/?l=bk-commits-head&m=109417401129659&w…
As with all checkers, some of the bugs found are false positives.
“Linux has fewer bugs than rivals”
So Linux has more rivals than bugs?
Windows sure sucks, just not as badly as Linux, and Microsoft might not be the biggest innovator (Apple is, though), but hey, I see KDE/Gnome/XFCE etc. taking over the look of Windows and modifying it slightly, but not more.
What’s the point of your post to begin with? Another MS bashing article…
Let’s take me, for instance. I have three criteria when I choose an OS:
1. Free or with a viable future… meaning BSD/MIT licensed software or proprietary, NOT GPL.
2. Ease of use (Such as Windows or BeOS or OSX).
3. With a predictable roadmap (Such as Windows/OSX/Haiku/SkyOS/*BSD)….
Another HUGE criterion is that I don’t have to reinstall or switch distros every X months because it simply sucks, or lacks support, or the player is removed, or dependency hell has f**ked up my machine…
So to me, Linux is the most buggy foundation I’ve ever come across, since the major issues that concern me are simply not solved… (namely 3 above)
1. Free or with a viable future… meaning BSD/MIT licensed software or proprietary, NOT GPL.
History, it seems, has proven you wrong. Lots of GPL’ed software has seen remarkable growth over the past few years, and there’s no reason to believe that a piece of software released under the GPL has less of a future than under the other licenses you list. In fact, I think it’s fair to say that Linux has a very promising future ahead of it. I think your anti-GPL agenda is clouding your judgment here.
2. Ease of use (Such as Windows or BeOS or OSX).
KDE and Gnome are certainly as easy to use as Windows. A well-configured Linux system is easy to use, period. That doesn’t mean they’re not different, but there you go.
3. With a predictable roadmap (Such as Windows/OSX/Haiku/SkyOS/*BSD)
That doesn’t even mean anything. How is the BSD roadmap more “predictable” than Linux? How is OSX’s? Haven’t you noticed that IBM, Novell, HP, Intel (just to name a few) are all investing in Linux? Do you think they just like throwing money down the hole?
Another HUGE criteria is that I don’t have to reinstall or switch distro every X months because it simply sucks or lack of support or that the player is removed or that dependency hell has f**ked up my machine.
Good, because that doesn’t happen on Linux. Dependency hell hasn’t been a problem on Linux for years.
Enough with the anti-Linux, anti-GPL FUD already. And your rant had nothing to do with the article.
So basically, when you sift through all the garbage posted in this thread so far, the bottom line is that the “research” is based on assumption and really nothing more.
“> But when you publish reports saying something is a factor of 360 greater than something else,
> then you pretty much lose all credibility
Right, so of I publish an article claiming that the sun is 500000 times more massive than the earth, I loose all my credibilty? nice”
Dude, are you serious with your response? Did you even take it in context? Did you even read the entire response? You can clearly play stupid and say the Universe is billions of times bigger than an atom, or you can actually refute what I was trying to say: that with America being as competitive as it is, the equivalent things people create in this environment would not be a factor of 360 times better or worse than each other.
For example, a Japanese car might have fewer flaws than an American car, but I won’t ever believe a story that says an American car has 360x more flaws than the average Japanese car. That would mean that if a Japanese car has on average 20 problems throughout its life, an American car will have 7,200. No one will go for that kind of crap.
Another example: if Linux is 360 times more secure, and only 2 security holes are found in a distribution in a given year, it would mean 720 security holes would be found in Windows in that one year. You would have to run update twice a day to keep your system patched.
Let’s be fair and just agree that the article is stupid. If Microsoft ran a similar study and found that they had only 1 bug in every 1,000 lines of code, what would you think of the report if they compared themselves to Linux and simply assumed that Linux has 20-30 bugs per 1,000 lines of code, since Linux is typical software and that’s how many bugs typical software has? I think I would laugh at that Microsoft report.
“Windows sure suck, just not as bad as Linux, and Microsoft might not be the biggest innovator (Apple is though) but hey, I see KDE/Gnome/XFCE etc taking over the look of Windows and modifying it slightly but not more.”
Ok, this is really frustrating for me. How is Apple a bigger innovator than Microsoft? I understand that a lot of basic GUI concepts and some neat hardware were pioneered/invented by Apple 20 years ago. But that was 20 years ago; OS X is not inventive. They use someone else’s kernel, took PostScript from Adobe, used the design from NeXT, etc. XP’s kernel might have some VMS heritage from 10 years ago, but at least most of the code is their own. I’d also like to point out that little to none of the internal hardware in a Mac was designed by Apple, just the case.
I’m sorry if this sounds like ranting, but people on OSNews are always talking about how people shouldn’t reinvent the wheel. So when groups take the best ideas from others and incorporate them into their software (like KDE or Gnome do), why is that different? There shouldn’t be a double standard here. I don’t love Microsoft or hate Apple, but to blindly claim that one doesn’t innovate where the other does is to ignore more than a decade of products and technologies developed by both companies.
@Lumbergh:
> They don’t have the source code to XP so they can’t do source code analysis on it for comparison.
Well, they should. Didn’t MS tell everybody they’d give their code to universities under the great Shared Source license? Ha ha.
@slash:
> they are saying Windows XP has 1.35 million bugs. That number is completely unfeasible. Do you even understand how big that number is?
Given the size of Win XP, that number of bugs is quite normal.
> A more realistic number might be 150,000 bugs in Windows XP.
So because you don’t like the results, you just make up your own numbers? That’s absolutely ridiculous!
It’s sad to see this despicable marketing technique continuously applied.
Two examples on these pages:
Anonymous (IP: 213.80.61.—)
This guy is spreading pure Microsoft FUD, as [A nun, he moos] already pointed out.
Anonymous (IP: 61.95.184.—)
The GNU/Linux FUD of this guy went completely unnoticed.
To another guy (QuantumG), complaining about the absence of a Linux-BSD kernel bug comparison, this guy replied:
“actually they did and the results are available. linux fared better if you want the short story but feel free to google”
This is totally false, as no such comparison exists. Notice the way this guy has put it, though, in order to make his own statements more difficult to expose for what they are, i.e., FUD.
GNU/Linux FUD against BSD, to be precise.
And this is nothing new: GNU/Linux advocates have been repeating that “BSD is dying” for *years*. Here’s how much truth there is in that statement:
http://news.netcraft.com/archives/2004/06/07/nearly_25_million_acti…
“[FreeBSD] has a secured and strong foothold with the hosting community and continues to grow, gaining over a million hostnames and half a million active sites since July 2003.”
This is one of the reasons (the other ones are technical) why I prefer the *BSD systems, and FreeBSD in particular: because it’s software written in an academic, not political, spirit.
“This is totally false, as no such comparison exists. Notice the way this guy has put it though, in order to make his own statements more difficult to prove for what they are – i.e., FUD. ”
Perhaps you could actually go ahead and Google it yourself, but since you won’t, let me spoon-feed you:
http://lwn.net/Articles/115530/
Read the site details and the comments.