“Windows has grown so complicated that it is harder to secure. Well, these images make the point very well. Both images are a complete map of the system calls that occur when a web server serves up a single page of html with a single picture. The same page and picture. A system call is an opportunity to address memory. A hacker investigates each memory access to see if it is vulnerable to a buffer overflow attack. The developer must do QA on each of these entry points. The more system calls, the greater potential for vulnerability, the more effort needed to create secure applications.”
A system call is an opportunity to address memory. A hacker investigates each memory access to see if it is vulnerable to a buffer overflow attack.
hacker? i thought hackers build things and crackers were the ones who did this sort of stuff.
hacker? i thought hackers build things and crackers were the ones who did this sort of stuff.
Ah, get over that nonsense already. Just learn to deal with the damn fact that language isn’t set in stone: it evolves, and thus meaning changes, words contaminate each other, and pronunciation changes. DEAL WITH IT.
I guess you are also up in arms when people call a homosexual gay? Since gay ment something completely different 50 yrs ago?
oh great, i finally got a reply from osnews dumbass
Kudos to you, Thom.
Come on guys! Can’t we all just get along and be gay?
Also, language evolves yes, but spelling must be maintained in order for us to communicate properly:
http://dictionary.reference.com/search?q=ment
Ment != Meant.
http://catb.org/~esr/jargon/html/S/spelling-flame.html
Spelling flame is stupid, and that’s exactly what this reply is supposed to convey.
Thom you’re right, but dude, don’t feed the trolls
“Thom you’re right, but dude, don’t feed the trolls ”
Feed them or not, they will still troll as insects will bite. I for one find it satisfying to swat at feeding mosquitos.
Mosquitos keep biting, even if you don’t swat them. Trolls keep coming back for more if they get a response. They feed off of a us being annoyed.
Just like when you’re 8 and your brother was 5, he’d annoy you because he knew he’d get a reaction. Ignore him and he’ll go away after a while.
Vote him down so we don’t have to hear him in the meantime.
“Vote him down so we don’t have to hear him in the meantime.”
Ah, but trolls like insects are near limitless in numbers, and my single mod point cannot compete with the keyboard mashings from members of the swarm. ;^)
you agree with thom to tell any body how to
think. if this site was a real professional site he would not have answer on that matter. i replyed back to him in the same matter.
Was anyone proud to be gay 50 years ago? That’s the difference.
http://www.google.com/search?q=proud+gay+50+years+ago&sourceid=mozi…
“DEAL WITH IT”
Why can’t I mod parent up?
+5 Funny, +5 Relevant, +5 Fact, +5 Insightful, +5 Informative
I see you have new product: iArseLicker.
As Richard Stallman describes it, a hacker is someone who has playful cleverness with technology.
Therefore the difference between a hacker and a cracker is intention not action.
A hacker can still break in to computer systems, but the intention would not be to do damage, but to find out how and if they could actually do it.
A cracker would have the same action but would be doing it with the intention of destruction or theft.
The problem with the definition, and why it has been confused, is that general society is fairly lacking in complex thought.
Words with even slightly similar meanings are combined into one. E.g. the Internet and the World Wide Web are quite different, but have been combined into meaning the same thing (the web).
– Jesse McNelis
“As richard stallman describes it”
I am willing to bet that most of the English speaking people in the world did not learn the language with the help of the RMS Dictionary. I know I didn’t.
Funny and true, but nonetheless Stallman’s definition predates the modern one, which applies to pretty much anyone with malicious intent and a networked computer.
I can’t blame people for being angry over the more recent meaning of “hacker”, the most common malicious type (just about the only type lay people associate with the word) you’ll run into these days is the stereotypical bored kid who knows zilch about computers, but can “hack” because of some facilitating tool they downloaded off the internet. I am proficient with computers, and I’m quite proud of my ability to program, but hell would freeze over before I would call myself a hacker because of the connotation that word carries today.
Hackers are people who dissect something to learn how it works, then endeavor to build a better widget based on the knowledge acquired. For example, if I were to ‘hack’ IIS to learn how it works, I might end up developing a more secure web server based on how things should and shouldn’t be done.
Crackers, on the other hand, tend to be those who hack something so that they can learn to outsmart it, exploit weaknesses, and gain unauthorised access to resources that are exposed in an insecure manner.
Which has more potential buffer overflows? 3 system calls with 1 buffer each or 1 system call with 10 buffers?
Just because an OS makes a lot of system calls, doesn’t mean that it is inherently less secure. As usual the blog doesn’t address specifics such as what was running or installed on either system when the system call image was taken.
While interesting, without the system information I see this as nothing more than a troll.
The author is saying there are a lot more system calls, so there is a lot more room for error. Because of the massive number of system calls, doing a security audit of all of them is much less realistic than with Apache.
Which is why the article should be called “why IIS is less secure than Apache”. Equating Apache with Linux is a gross oversimplification.
Precisely. I’d be confused about the security of my system if I ran apache on my Windows OS. Perhaps the author didn’t realise Apache was ported to Win32?
Pretty pointless and sub-zero value in this “article”. It seems to me that this is more of an attempt to get one’s self exposed on major websites rather than offering anything useful.
One paragraph and two pictures (which incidentally we have no idea what is on, because the quality is so poor) is a stupid way of trying to prove a point. Certainly not an article I would think warrants a listing on a site such as OSNews.
Thanks, I’m glad at least some people here aren’t wearing blinders.
The article should be called:
“Security Comparison for Apache on Linux vs. IIS on Windows. A comparison of apples and oranges.”
If Apache were supported on Windows and/or IIS were supported on Linux, you would have a point.
Apache does happen to “work” on Windows.
And I am saying that without a frame of reference (how the OS and the web server were installed) the information is meaningless. Was all of IIS installed and running on a stock Windows Server 2003 installation? Is Apache being used with PHP and MySQL? Was either OS and application secured prior to the test being run? My guess would be no.
Actually, there’s more to it than that. Yes, *all things being equal*, the more system calls you have to make, the greater the code you have to audit each time you make a change (even a bug fix).
However, you also have to factor in the complexity of each system call. An OS with a single system call that gives you infinite flexibility is impossible to audit and enormously insecure. Have a look at the Web 2.0 equivalent of this:
http://www.thedailywtf.com/forums/68115/ShowPost.aspx
In this case, I’ve found Win32 calls to be far more complex than comparable Unix calls. Compare the Win32 API for creating a process:
http://www.cs.rpi.edu/courses/fall01/os/CreateProcess.html
with the Unix API for creating a process:
http://unixhelp.ed.ac.uk/CGI/man-cgi?fork+2
The Win32 API has a lot of bells and whistles that few people ever use. The Unix API is dead simple but accomplishes much the same thing. Thus the Unix API for creating processes is a lot easier to verify and harder to exploit than the Win32 API.
Actually, the Unix way of creating a process is execve, execl and all the other variants (there are about six). fork() and CreateProcess are entirely separate things.
http://unixhelp.ed.ac.uk/CGI/man-cgi?execve+2
Further, setting up things like limits and so forth requires additional Linux calls, whereas Windows encapsulates it all in the CreateProcess call.
Having done some Win32 stuff, and looked at Posix-style Unix stuff, I can say that the Windows API is actually a bit easier to get working with: there tends to be a lot more individual calls required by Unix.
Also there are annoying things with the Unix API, like the fact that semaphores only work between processes, and mutexes only work between threads, and neither can be used in the other’s case. Win32 did a whole lot better when it came to concurrent programming. This is why there is no Win32 version of the fork() call; Win32 actually has very good multi-threading support.
The Win32 API has a lot of bells and whistles alright, but most of them are pretty useful. The classic Unix API is really showing its age these days, and tends to require huge amounts of individual function calls to achieve something.
Just because an OS makes a lot of system calls, doesn’t mean that it is inherently less secure.
The number of system calls is a measure of complexity, and complexity is a very good measure of insecurity.
Using the degree of complexity to estimate certain end-result parameters is not an unusual practice. It’s just a human weakness that parameters like security, correctness, etc, all scale inversely with complexity.
Using complexity is a measure of RISK not insecurity, insecurity is measured by vulnerabilities, no?
I’m not talking about direct measurements, I’m talking about empirical correlation. In the real world, some things tend to be correlated enough that you can use one to predict the other, even though there is no causal underlying relationship.
For example, an engineer might estimate the cost of an aircraft based on things like its weight or the number of passengers. Obviously, the cost of a machine is not directly determined by its weight, but in practice, there is enough of a correlation between the two that it makes for a good estimate.
The security/complexity relationship is similar. Complexity is a very good estimator for vulnerability. Obviously, how many times a program will be cracked is not directly determined from its complexity, but in practice, it has been repeatedly observed that more complex systems are measured to be less secure than simpler ones. Thus, using complexity as an estimator of security is entirely valid.
Unfortunately, empirical measurements can get you in serious trouble in the real world. Most security managers want tangible information about their systems, not educated guesses. The last thing I want to hear from anyone is a “SWAG” (“scientific wild-ass guess”) about whether something is secure or not. And all of the engineering mumbo-jumbo in the world doesn’t change that.
Yes there are a lot of system calls in both images, but without any way of knowing what is running on those systems, one cannot make a determination as to what is necessary to run on those systems, and what can be turned off.
You seem to keep evading my point, without any reference as to the function of those systems how can anyone make a judgement about what is too many system calls? Part of the system engineering process is documenting and minimizing systems to only what is necessary to run in order to provide the desired level of functionality based on technical and business requirements. Your statements are nothing more than assumptions based on non-existent information.
Do you work for Fox news network? You seem to be an expert at sidetracking the discussion, refusing to deal with well-argued posts such as Rayiner’s and dodging every question.
Your post clearly smells of astroturfing, and it is done so systematically that it is a fair bet you are being paid to spread propaganda.
As to the article, it is very clear what is going on.
“A static page with a jpg image”. Get it?
The diagram then shows the number of system calls that IIS makes on Windows, and those that Apache makes on Linux, to fulfill the same goal.
Complexity breeds insecurity. This is not theoretical mumbo-jumbo; it is a clearly understood, accepted and empirically verified principle of software engineering.
Therefore, IIS is clearly more insecure than Apache. Its very complexity makes it nearly impossible to audit its security, and placing trust in the unknown (which is what you want people to do by sidetracking the discussion and disregarding IIS’s and Windows’s complexity) is superstition, not science.
Later.
I’m not sure how exactly it’s clearer given that the pictures cannot even be read.
Later.
Is this a joke?
And you seem to miss the point just as much as Raynier does. The diagrams are useless because there is no frame of reference as to what is installed or running on either system to make a serious determination as to the complexity or security of either system based on the information provided.

His argument about complexity based on the number of calls is generally accurate, but in this case, with no useful information as to what is running on the system, how can anyone make statements about which is more secure? Raynier’s statements are nothing more than a generalization, since there is no information to support his position other than a couple of images generated by a piece of software and a brief explanation of their significance.

There is no description of what those system calls are, or what is calling them, so how does anybody know whether they are necessary for the system to function? Somebody could be playing Solitaire on the Windows machine for all we know, and X could be running on the Linux machine. The complexity depends on what is installed, and since this is unknown, I wouldn’t make any statement about the “security” of either system without a lot more information.
Is it so much that I “avoided” answering Raynier’s questions, or that I ignored them because I didn’t see how his argument was relevant based on the information provided? And that still doesn’t answer the question as to why neither you nor Raynier has answered any of my points.
Like I said originally, the piece is a troll and as usual, anybody who has a contrary view is going to get slammed for it.
And you seem to miss the point just as much as Raynier does. The diagrams are useless because there is no frame of reference as to what is installed or running on either system to make a serious determination as to the complexity or security of either system based on the information provided.
Apache vs. IIS web server, both serving exactly the same page, as close to default as possible. What extra software has to be installed to serve the page in question doesn’t matter. You could say an IIS server forces you to have more running (a GUI, for example) than is necessary on other platforms, and thus there are needlessly more system calls, and we all know what that means.
Secunia disagrees with you in this case: http://blogs.msdn.com/michael_howard/archive/2004/10/15/242966.aspx
We seem to be approaching the problem from two different viewpoints. You are under the assumption that this graphic is intended to provide evidence to support the case that Windows is less secure than Linux. I’m assuming it is intended to provide evidence to explain why Windows is less secure than Linux.
If you do not believe that Windows is less secure than Linux, then you obviously don’t get anything out of this graphic. If you believe the body of existing evidence supports that assertion, then this image gives you some understanding of the underlying reason.
The graphic does not have a single line of text documenting any of the system calls being used by either system. There is no accompanying text in the article that discusses the test, the operating systems used, or any methodology to speak of. If this were an actual test then there should be supporting documentation to back up the statements made by Richard, and there is none.
I am not trying to say that either operating system is more secure than the other. That is your assumption. I don’t get anything out of the graphics because there is nothing to get! For all we know, this could be two pieces of abstract art being posted for amusement purposes. My beef is with the total lack of documentation provided by Richard or Sana Security (which I am sure is trying to pimp their software through this little adventure). If this is to be taken seriously (and I don’t take it seriously at all), then the author(s) should provide supporting documentation as to how the test was conducted, so that anyone reading it can make an informed decision. Instead, two graphics and a lot of rehashed FUD about the “insecurity” of Windows are published with nothing to back up the assertions made in the blog post, except the new spin about system calls and how they “relate” to security.
In the security community, Full Disclosure should not be limited to vendors of hardware and software responding to vulnerabilities. Security software and hardware companies and researchers (both individual and corporate) should be just as forthcoming, if not more so. I see this piece as a troll because of the lack of documentation. If these people were truly serious about security, research, and the value of their software, they would not have to hide behind FUD. There is no evidence to support anything the blog entry is saying, and your arguments do not change that at all.
The problem here is the assumption that a bigger number of system calls equals increased complexity: it’s not necessarily valid.
If memory serves, Windows XP kernel is based on a micro-kernel inspired design whereas Linux kernel is a monolithic kernel, this could explain the difference.
AFAIK, the vulnerabilities of Windows are caused more by design issues (designing APIs without regard for security, e.g. ActiveX), everyone running as administrator, etc., than by the kernel itself.
If memory serves, Windows XP kernel is based on a micro-kernel inspired design whereas Linux kernel is a monolithic kernel, this could explain the difference.
Even so, Windows hasn’t resembled a microkernel since NT 3; everything from NT4 onwards was “throw as much shit into the kernel to speed it up”. When in doubt, throw it in the kernel, with little or no regard for the impact those choices have on stability and security.
Linux wasn’t designed to be micro because it was developed out of pragmatism rather than academic jerk-off sessions: does it work, does it do the job, is it easy to maintain? Those are the questions one can ask when looking at FreeBSD, Linux, Solaris, and others which are monolithic.
Windows NT, on the other hand, was developed in the era of buzzwords; look at early views of NT, and interviews with Microsoft representatives, and it was laden with the need to make sure that every facet was buzzword-compliant. Think they were the only ones? Nope; Borland rewrote the WHOLE of Paradox (C to C++), nothing to do with anything pragmatic, simply a rewrite with no real purpose.
Stupid decisions made 20 years ago, for stupid reasons. In the end, it doesn’t matter whether something is micro/mach/exo/super-duper-micro; if the damn thing works, and it is stable and reliable, then quite frankly the parlor games relating to its ‘architecture’ can find a place in university libraries.
Using complexity is a measure of RISK not insecurity, insecurity is measured by vulnerabilities, no?
Yes, and where do these vulnerabilities come from?
Coding mistakes?
The more lines of code the greater the chance of making mistakes?
Now let’s go back to my original point, if you have no idea what is running on either system at the time of the test, then how can you say that one is better than the other? Or that one is more complex than the other without having any idea what is running or why?
This is one of those articles that needs about 20 pages of documentation as to how the OS and applications were installed and what was taking place on the system during the test. Then the images of system calls would start to make sense, until then they are useless.
A syscall that tries to do everything but only has 1 path out is just as big a security hole as a syscall that delegates work to other syscalls depending on state.
The number of system calls makes no indication of the complexity within the system calls itself. Even worse if you play copy-paste, your hole is now in multiple places instead of one.
> As usual the blog doesn’t address specifics such as what was running or installed on either system when the system call image was taken.
The software installed on a system doesn’t change the system calls, so there is no need to list the software that was installed and running.
The author never said Windows was less secure, he said Windows was harder to secure.
Oh yeah? Then how come the title of the article (not here, the actual article) is:
“Why Windows is less secure than Linux”
Thank you, please come again.
Because it is the most VISIBLE platform.
Is that really true, though? According to Netcraft, Apache runs 64% of web servers on the internet, while IIS runs 25%. By your logic, exploits of Apache should be far more common than exploits of IIS. However, in practice, we see the opposite to be true.
What I don’t understand is why everyone falls into a “all software development practices are created equal” line of thought. Is it really hard to believe that projects which have no shipping deadlines, constant peer review, and high developer enthusiasm produce better code?
It says “Windows” in the title of this thread, it does not say IIS or Apache. This topic title is about Windows vs Linux.
Um.. read the article. It’s about Apache and IIS.
Is that really true, though? According to Netcraft, Apache runs 64% of web servers on the internet, while IIS runs 25%. By your logic, exploits of Apache should be far more common than exploits of IIS. However, in practice, we see the opposite to be true.
I don’t know where you get your information, but to me it looks a bit different:
IIS: http://secunia.com/product/1438/
Apache: http://secunia.com/product/73/
Did he say his statement was limited to Apache v2 and IIS v6?
It still applies. I’m fairly certain Apache 2.x is more common than IIS6.
It most certainly does not still apply.
He is talking about multiple versions of each, whereas you are narrowing it down to only one version of each. If you start off with different ingredients, then logically and obviously the cake will turn out different.
The number of vulnerabilities doesn’t matter if the open source world can fix them near instantly.
Secunia’s data is also often outdated.
Also, Microsoft doesn’t publish all of their vulnerabilities because they have something to lose if they did: shareholders.
dude, get the real facts.
http://www.theregister.co.uk/security/security_report_windows_vs_li…
According to the Danish firm Secunia, IIS6 has twice as many open security holes as Apache 2.0.x.
So we can conclude this: IIS6 has had fewer advisories, but they have not been patched (closed). Apache 2.0.x has had many more advisories, but all except one have been patched (closed). This leaves 2 unpatched for IIS6 versus 1 unpatched for Apache 2.0.x.
You can probably figure a lot of other things to do with statistics. Do that and then we can all bash each other with wonderfully meaningless statistics.
Final conclusion: when comparing apples with oranges, apples tend to have more worms than oranges, unless the oranges aren’t really oranges but actually rotten apples, and one cannot see the difference. Or perhaps it’s the apples which are unripe, or a combination of all. (It doesn’t make any sense, but neither do statistics when used this way.)
Got a link?
Secunia.com shows 2 vulnerabilities for IIS6, neither of which is highly critical.
It also shows 30 for Apache 2.0.x, 2 of which are unpatched (one of which claims to have been reported in March 2004). Now, I don’t know how accurate all this is, but I haven’t seen any sites that contradict this information.
IIS4 and IIS5 were complete shit security wise. Microsoft actually took the time and got IIS6 right with security, and are now trying to get all the *other* stuff right with IIS7. I probably still won’t use it though, as I’m not a huge fan of ASP.
SecurityFocus reports 4 vulnerabilities for IIS 6 since 2001 and reports 3 vulnerabilities for apache version 2.0.55, all of them from this year. Just use the search function from the Vulnerabilities tab.
Ok, I’m not sure what your point is though. Comparing one minor version of Apache for just this year to all of IIS6 for 5 years? I’m not trying to be insulting, I just don’t know where you’re coming from.
SecurityFocus’ site is really bad for trying to do any statistics on this stuff. It’s best for in-depth information and little more.
Compared to most sites, SecurityFocus is considered one of the most respected by security professionals. I don’t know what you use, but this is the site I go to first. My point is simply this: for all of the bantering back and forth about the “insecurity” of IIS, it looks pretty good to me compared to Apache (and that is based on one version) if you look at it from the aspect of the number of vulnerabilities. Obviously someone did not conduct any research; in other words, they were trolling (also what I have said all along).
Of course you can search by individual version and find more information. Since the blog post does not go into specifics (what I have consistently said all along) it is impossible to make any statement about the security of either system used in the test. All I was trying to do was back you up since some people don’t trust Secunia as a valid and up to date security information site.
Ok, thanks I just didn’t understand what side you were trying to argue
I agree though, securityfocus is a great site. It’s just bad for trying to gather statistics for products. Secunia is good for that. Securityfocus has the better information, but secunia has the easier to use site.
Ah, logic on an analogy. (I’m giving you a hard time, I know you were kidding.)
So WHY are the “bad guys” NOT targeting all the Apache servers, for example?
Could it be that you have so MANY automated tools for Win32…? And it is SO easy to exploit?
zetor
What? They do target apache/linux.
http://www.zone-h.org/en/defacements/
I noticed the top of the list is dominated by Linux entries. I also noticed that from page 14 on it is almost all W2k, W2k3 and FreeBSD.
Just the same the Linux numbers would appear to be over 50% of the total. Solaris appears to be the least frequent of the common OSes.
This is for homepage defacements. How about for websites that dispense malware, imitate bank or shopping sites or simply steal data to sell to spammers? Any info on that?
Read AVs (access violations) can be exploited.
Unhandled exceptions can be exploited.
Not just the write AVs that most people think are the only exploitable kind.
Failure to check simple things, failure to initialize things to a known state. This is not just an MS problem; it is a problem with developers everywhere. They think they are too good to check simple things. Another problem I’ve noticed is a cultural attitude towards users: in some countries I have worked in, they think users will do the RIGHT thing. Anything bad? Well, they shouldn’t do that. Right.
Linux/Windows? No, this is apache/IIS.
The diagram means next to nothing as well.
What a waste.
Agreed. At least show the names of the modules being called through the pipeline request. Or how they mapped the request to get the “purdy” pictures. What version of Linux/Apache? What version of Windows/IIS? Etc.
IIS6 has shown itself to be quite secure…posts like that are total FUD.
For simple applications? Don’t be stupid; it’s a “systems” language, not an app language. We are 1337!
Dammit, I’m calling the FBI unless you remove this website from my homepage!
if this site was a real professional site he would not have answer on that matter.
…and you’d be banned by now.
if this site was a real professional site he would not have answer on that matter.
…and you’d be banned by now.
That’s neither here nor there because if this was a “real professional site” you wouldn’t have posted a stupid flamebait article like this in the first place.
How is this article flamebait? It only states facts.
Personally I feel that it’s not inappropriate to link to an article like this on an OS related website.
What facts, or are you just trolling like the blog post obviously is?
Now now, Thom. Be nice. You _did_ reply stupidly from the beginning, so be nice.
how many times do you comment a day?
Why am I suddenly getting the Starcraft Overmind’s voice stuck in my head?
Windows spawned from Dos which was a single user, single threaded, non-network aware operating system. Services are on by default so everything just works without configuration. It wasn’t built with security in mind…
Linux was built on the same principles as Unix and Minix: multiuser, multi-threaded, and full networking from the start. Services (daemons in Unix speak) need to be enabled if you want to run them. This minimizes security problems.
It’s really a no brainer if you ask me.
Windows spawned from DOS, which was a single-user, single-threaded, non-network-aware operating system. Services are on by default so everything just works without configuration. It wasn’t built with security in mind…
Linux was built on the same principles as Unix and Minix: multiuser, multi-threaded, and full networking from the start. Services (daemons in Unix speak) need to be enabled if you want to run them. This minimizes security problems.
It’s really a no brainer if you ask me.
No, it’s not that black and white.
Windows may originally have run on top of DOS, but ever since the introduction of the NT kernel it has been its own OS, with DOS-based apps requiring a compatibility layer to run. The NT kernel itself is pretty well engineered, and is considered quite secure. It was built with security in mind, specifically for business customers.
Linux may be built on *nix principles, but I doubt the linux kernel itself is any more secure by design than the NT kernel is.
The difference, as you point out, is often the services and applications running on top of that kernel. Daemons need to be enabled in linux, services need to be enabled in Windows. But having user/system-based permissions alone does not protect either OS if system-level services are compromised, or if exploits exist that allow things like privilege escalation.
The differences in security between the two often diverge across philosophical approaches rather than coding complexity or skill. Linux has an edge here for a number of reasons; for instance, shared libraries make it easier to fix vulnerabilities at a single point rather than auditing your apps for library dependencies, and security is managed in an open manner and exploit/vulnerability reports are often accompanied by patch code that is rapidly distributed among different distros.
Microsoft’s historical flaw has been their tendency to blur the line between application and OS, which increases attack vectors against the OS exponentially, combined with weak default settings in terms of user permissions and default services in the name of user convenience. But even Microsoft is getting that now.
There was a time when a default Red Hat install could be compromised within minutes of connecting to the net due to default service enablement. In fact, even today I’d be willing to bet that a competently installed, configured and patched Windows installation would be at least as secure as, and maybe even more secure than, a typical linux install running with default settings (excluding IE, of course, since that continues to be the Santa Monica Freeway of attack delivery).
Of course, my personal preference is linux for day to day use because I consider the approach to security to be far more vigilant and transparent, but if we lull ourselves into a false sense of security by saying “Windows is insecure because it sucks” then we run the risk of falling into an OS X-like trap of denial when it comes to acknowledging potential risks.
Security is a mindset and an approach, it’s not necessarily a technology. That’s the no brainer part.
In fact, even today I’d be willing to bet that a competently installed, configured and patched Windows installation would be at least as secure and maybe even more than a typical linux install …
Of course, my personal preference is linux for day to day use because I consider the approach to security to be far more vigilant and transparent …
Wow. Judging from this post, I’m guessing you are either running for office, training to be a manager, or taking a debate class. Not that any of those are bad, but usually it’s best to take one position in one post and defend it instead of delivering all possible positions.
Wow. Judging from this post, I’m guessing you are either running for office, training to be a manager, or taking a debate class. Not that any of those are bad, but usually it’s best to take one position in one post and defend it instead of delivering all possible positions.
I don’t think so; my whole point is that the issue is not black and white. The OP said that linux is more secure because it’s based on *nix, whereas NT is based on DOS. That’s faulty logic.
I also think that a properly locked down MS system can be as secure as your average distro with default settings. So it’s not just a case of Microsoft is crap and linux rulz.
I rounded that out by pointing out that Microsoft’s flaw is application/OS integration and weak default settings, hence the need to have to competently lock down a Windows system to make it as secure as your average linux distro which comes with saner defaults. It’s not necessarily the technology itself that is better or worse.
The point about me preferring linux because of its transparent approach to security was a personal note, intended to give some perspective on my argument rather than have it dismissed as MS apologism.
Too many people try to wrap security up in a nice little bundle by saying “Windows sucks, use *nix or OS X”. There’s superficial truth to that, but assuming it’s that simple at the core leads to complacency and a sense of invulnerability that no community can afford.
Besides, how can you rationally argue a position if you can’t at least understand and interpret the other side’s position? You have to acknowledge a counterpoint to refute it intelligently. Like I said, it’s not black and white, and taking a stand by dismissing the opposing view out of hand is just fanboism, so I avoid doing that.
And running for office? Trust me, one day I will.
Be afraid… Be very afraid…
Great points, kudos.
Security is a mindset and an approach, it’s not necessarily a technology. That’s the no brainer part.
Security is a process that involves more than technology.
Technology is the basis; if that’s not good, then what can you build upon?
Come on guys, what is the point? Linux is more secure, no, Windows is more secure? It is senseless to jump around waving a flag and trying to prove you are right. If I get paid $500 per hour to make a web shop written in assembler, I’ll find a way to do it. If somebody wants very much to break into a system and has enough resources at his disposal, he’ll find the right person out there who’ll do it. And I guess there are people who can, aren’t there? Be it linux or openbsd 25
PS: Or shall I switch to a Mac, or reinstall my desktop machine, just because Windows makes more system calls?!
The number of vulnerabilities doesn’t matter if the open source world can fix them near instantly.
Secunia’s data is also often outdated.
Also, Microsoft doesn’t publish all of their vulnerabilities because they have something to lose if they did: shareholders.
dude, get the real facts.
http://www.theregister.co.uk/security/security_report_windows_vs_li…
I mean, jeez — there are still Windows 95 security holes that are unpatched!
http://secunia.com/product/393/
“Windows [n.]
A thirty-two bit extension and GUI shell to a sixteen bit patch to an eight bit operating system originally coded for a four bit microprocessor and sold by a two-bit company that can’t stand one bit of competition.”
(Anonymous USEnet post)
Groovy!
now i’m gonna download a linux distro, install it, and never use Windows again… Yeah, right
Does this prove that Windows IS less secure? No. Does it demonstrate that Windows is POTENTIALLY less secure? Yes.
Please, don’t leap over all logic to make your point.
Does this prove that Windows IS less secure?
Nowhere in the text of the article does he claim that these images are proof of Windows being less secure.
Does it demonstrate that Windows is POTENTIALLY less secure? Yes.
That’s precisely his point. Just read the very first lines of the post — his intention is not to provide hard evidence of Windows’ insecurity, but rather to give some perspective to an argument he already believes is supported by other evidence.
So then why the title “Why Windows Is Less Secure Than Linux” ?
Let’s not confuse the issue with facts, it’s such a nice troll piece we wouldn’t want to ruin it with publishing actual facts!
Let’s not confuse the issue with facts, it’s such a nice troll piece…
Call it what you like, but it isn’t a trolling. Feel free to look up the definition some time.
As Rayiner has already pointed out, complexity is quite a good estimator of vulnerability. It’s not hard and fast, but there is a definite correlation, which proves true in Microsoft’s case. In Windows’ (and IIS’) case, a lot of that complexity is absolutely needless, because you only need to look at what it does from a functional perspective and then compare with other systems.
There is already a reservoir of information out there as to how Windows and Microsoft software are less secure (can’t even believe that is still being discussed), but this at least tries to fill in a small gap as to one of the reasons why.
And I pointed out more than once that the piece is a troll since it contains ZERO documentation as to how any part of the test was conducted, how the systems used in the test were built, etc and used the standard Microsoft is insecure argument with nothing to back it up. The images are useless since there is no information about what is being displayed.
So maybe you and Rayiner can enlighten the rest of us as to what system calls are being made, and by what, in those images?
And I pointed out more than once that the piece is a troll since it contains ZERO documentation as to how any part of the test was conducted
It is not a troll. As mentioned, look up the definition.
There was no test conducted, so I don’t know where you get that from. Look at the Apache diagram, and then look at the IIS diagram. This is additional information as a supplement to about seventeen thousand other studies and tests that have been conducted, and peoples’ own experiences about what software causes most problems.
In addition to this, does the author then have to reproduce evidence and test conditions from every other study and test conducted just because people like you have a memory like a goldfish, or more likely, just don’t want to see it?
Memory like a goldfish; as usual, if you have no technical argument, resort to insults. Just what I have come to expect from a Linux zealot. And like I also said earlier, if anyone has a contrary view of this blog post, they will get slammed for it. I suppose you and Rayiner are defending this troll because it puts Linux in a good light compared to Windows. And like Rayiner, you miss the point (if you are even capable of getting it).
Then tell me, how did they come up with the images if they did not run the software on a couple of systems? That sounds like a test to me. And of course, why should anybody post relevant information about how the test was conducted; it was done on a Linux machine, so nobody should question it, and the outcome against the competitor is obvious. If Microsoft or Sun had done something like this, the Linux Lunatic Fringe would be crying “foul” until the cows came home!
Yes, the author should provide evidence as to how the test was conducted; or are they afraid that somebody might figure out the test was rigged to produce a desired result? Doesn’t matter; until someone produces some data to back up those images, any number of people are going to raise the believability flag.
And here is the definition of troll, and yes it fits:
http://en.wikipedia.org/wiki/Internet_troll
So, your argument is that the images incorrectly portray the comparative number of system calls of Apache on Linux vs. IIS on Windows?
Or it’s that his conclusion from the images is false?
It seems to me to be pretty legit to post comparative images of two competing platforms showing the same sort of measure on both of them.
Argue that the measure doesn’t matter. Or argue that the images are incorrect. But I see no reason to call them a troll for it. They might be baiting a discussion, but if you consider that to be a bad thing, then surely you do have the intelligence of a goldfish.
Anyway, I’m not sure how he’d get this information on Windows, but on Linux it could probably be done via a tool like strace, and I’m sure Windows has something similar.
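For what it’s worth, here’s a minimal sketch (mine, not the article author’s) of how you could tally total calls from the per-syscall summary that `strace -c` prints on Linux. The SAMPLE text below is made up for illustration; real output uses the same column layout:

```python
# Sum the 'calls' column of an `strace -c` summary, skipping the header,
# the dashed rules, and the final 'total' row. SAMPLE is fabricated data
# in the real format, just so the parser has something to chew on.
SAMPLE = """\
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 41.00    0.000082           3        24           mmap
 30.00    0.000060           5        12           read
 29.00    0.000058           4        13         2 openat
------ ----------- ----------- --------- --------- ----------------
100.00    0.000200                    49         2 total
"""

def count_calls(summary: str) -> int:
    total = 0
    for line in summary.splitlines():
        parts = line.split()
        # Skip the header ('% time ...'), rules ('------ ...'), and the
        # aggregate 'total' row; data rows have at least 5 columns.
        if len(parts) < 5 or parts[0].startswith(("%", "-")) or parts[-1] == "total":
            continue
        total += int(parts[3])  # 'calls' is the fourth column
    return total

print(count_calls(SAMPLE))  # 49
```

In practice you’d produce the summary with something like `strace -c -o summary.txt <command>` and feed the file contents to the same function.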
But is he using IIS6? If he is, then it’s NOT true in Microsoft’s case, as IIS6 has very few known flaws.
Is this site mostly visited by kids? I can’t tell. I kind of assumed it was people who work on operating systems, or programmers. I guess it does seem a little immature and I should quit visiting. People will think I’m a child molester. Actually, it seems like a corporate propaganda site, or people spouting what they’ve been told. There is an obsessive interest in security. I liked the article on filesystems a while back, but this security stuff is dull.
You are right about the article, I think what people are forgetting is this; the research doesn’t necessarily prove that Windows is insecure, but what it does show is it has a greater potential to be insecure given its complex design.
A good example of this is agriculture, and the lack of diversity in the bananas that are grown. Today we’re able to control diseases quite well, but what happens when the less common banana varieties die out, leaving only a very small number of breeds? If a disease does come along, the potential consequences could be catastrophic.
The problem with Microsoft is simply that they suffer from NIH syndrome something terrible. Twenty years ago, when they had a UNIX licence, they had the potential to develop a world-class, easy-to-use operating system based on AT&T UNIX; but instead they threw it all out the window in favour of the trendy buzzwords floating around at the time. I’m sure you remember them, ‘object orientated’ being the biggest example.
The net result is that they reinvented the wheel, thus opening up a bigger can of worms than they expected; the end result is a new operating system bringing new problems that are only being uncovered now.
I’m not saying that Microsoft’s products are necessarily crap, but at the same time, a lot of the technological choices Microsoft have made have been based more on politics than on anything factual.
Looking at these graphs in the article, I can get only one piece of information: IIS has a much more layered approach than Apache. I don’t think this is bad at all. Then again, because I don’t know how these images were constructed, I could be completely misinterpreting even that piece of information.
To estimate complexity, one would need to compare many more scenarios, including running server scripts (ASP/PHP), database access, etc. I don’t say that IIS isn’t more complex than Apache, but unfortunately the article in question doesn’t prove that.
Another indicator of code quality: given the chance, which developers successfully worked with IBM on APIs, the NT team or Linux?
Bill Gates / NT Team Couldn’t.
Linux can and does.
If you can’t defend your design decisions you attempt to undercut your partners.
What happened to the UNIX philosophy of do one thing and do it well? What if some of these system calls are doing multiple things, whereas more of the system calls on Windows are doing *less*. What does that mean then?
There simply isn’t enough background information in the article to come to ANY conclusion.
You can talk about correlations all you want, but in the end, that’s an issue of possibility and not reality.
I am a Linux user, and I would have to agree with the MS folk on this one. There needs to be more content to back up the visuals in the claim.
Hate to say I like Windows, but just viewing these as computer-generated art, I have to say I rather like them and would really like to see more. Might I even suggest playing with some colorization; colors by location of hits or something.
That’s funny!
Makes perfect sense. The more syscalls, the more code needed, the more flow control needed, and the higher the probability of bugs not being found before a stable release.
This is probably the first article in a while to imply there’s a connection between program complexity and security. Most of the recent articles only focus on user/installation base and security.
And for the last time, where is the research to support the statements being made in the blog post by Richard? His statement “A system call is an opportunity to address memory. A hacker investigates each memory access to see if it is vulnerable to a buffer overflow attack. The developer must do QA on each of these entry points. The more system calls, the greater potential for vulnerability, the more effort needed to create secure applications” is supported by no documentation, no tests, no proof-of-concept code. There is far more to attacking systems than looking for buffer overflows, as this example shows:
http://www.securiteam.com/unixfocus/5PP0L0AA0Q.html
Just because a particular system has more running software and makes more system calls, does not make it vulnerable to exploitation per se, it depends on whether the system calls in question being made by the system are vulnerable to exploitation. And again, where is the research to support these “findings”, there is none.
When I read a security paper, blog entry or article, I expect to see a lot more than a paragraph of unsupported supposition, FUD, and two images used to justify the position of the author with no explanation. There is nothing here that assists a security researcher or security professional. This is not the quality of research I would read from David Litchfield (NGS Software), Bruce Potter (the Shmoo Group), Michael Lynn (formerly of ISS), or the various authors of articles on SecurityFocus, SecurityTracker or Securiteam. This is a troll; its very title is inflammatory, and I feel that Michael posted it for no other reason than to “see if it sticks”.
Finally there is the point of journalistic integrity, both ZDNet and OSNews should have seen the blog post for what it is and ignored it. I would much rather see fewer, but higher quality articles on OSNews than to see tripe like this being posted.
I would much rather see fewer, but higher quality articles on OSNews than to see tripe like this being posted.
Err, or you could just read the actual blog post, and see that the author wasn’t implying that it was otherwise in the first place. I linked to it because the actual topic is very interesting: does an increase in complexity mean a decrease in security?
Just because a particular system has more running software and makes more system calls, does not make it vulnerable to exploitation per se, it depends on whether the system calls in question being made by the system are vulnerable to exploitation. And again, where is the research to support these “findings”, there is none.
The evidence is in simple statistics, Robert. The more kilometres you drive per year by car, the higher the chance you’ll die or get injured in a car crash. There is no need for research to confirm that, because it is dictated by simple common sense.
The same goes for this subject. The more potential points in a system for buffer overflows, the more likely the system will get compromised. It doesn’t mean that systems with more syscalls MUST be insecure; it just means that the CHANCE that they are less secure is greater. Just like with the kilometres: if I drive 13 000 kilometres a year by car, and you 5 000, it is more likely that I will have a car crash than you.
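To put a toy number on that intuition (my own back-of-the-envelope model, not anything from the article): if each syscall entry point independently has some small probability p of hiding an exploitable flaw, the chance that at least one flaw exists is 1 − (1 − p)^n, which climbs quickly with the number of entry points n:

```python
# Toy model: each of n entry points independently has a small probability
# p of hiding an exploitable flaw. p = 0.001 is an arbitrary illustrative
# guess, not a measured rate.
def p_at_least_one_flaw(n, p=0.001):
    return 1 - (1 - p) ** n

# Roughly the scale of the two diagrams: hundreds vs. thousands of calls.
print(round(p_at_least_one_flaw(400), 2))   # 0.33
print(round(p_at_least_one_flaw(3000), 2))  # 0.95
```

Under this model neither system MUST be insecure; the point is only that the odds worsen as n grows, which is exactly the kilometres-driven argument.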
Let’s play your silly game for a moment with a real-life exploit. The first machine you are probably going to see is either a firewall or an outward-facing DNS server. Now tell me, how many system calls are being made on these machines if they are built within sound security guidelines? So much for those statistics.
In a properly designed and deployed DMZ, the chances of being exploited are significantly reduced. Most of the “juicy targets” with all of those system calls are inside the DMZ. Now, if an internal user exploits the system internally, all bets are off.
At this point we are going to have to agree to disagree.