Windows 2003 Server is a more reliable server operating system than Linux, a research firm said Monday. According to the Yankee Group’s annual server reliability survey, only Unix operating systems such as HP-UX and Sun Solaris 10 beat Windows on uptime. Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime. The Yankee Group made a point of stressing that the survey was not sponsored or supported by any server OS maker.
Those results sound about right but.. which linux distribution are we talking about?
Edited 2006-06-06 22:13
Those results sound about right but.. which linux distribution are we talking about?
???
“Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.”
It would mean Linux only gets about 292 days of yearly uptime (and even that only if Windows gets the full 365).
Now,… it is just as stupid as I expected when I read “Yankee Group” and “Laura DiDio”.
Now, I always say Windows has terrible uptime (I consider it really bad when there is up to 5 days of downtime). But I can’t even imagine one server having 73 days of yearly downtime. That wouldn’t be called a server. Hell, installing the worst server OS on Earth (Win 3.1 or Win95), and even that on a faulty machine, couldn’t produce such an amount of downtime.
It is not stupid….
Windows is more reliable than Linux. Of course it is, under a new definition of reliable.
STATS can be used to prove anything you like.
Ok, let’s pretend we agree on which is more stable, because I don’t want to get into a flame war (although I don’t in reality, but let’s pretend I do, just humor me).
This… report… says… linux… has… average…
73 days of downtime?
Do you lack oxygen?
There is NO OS with average downtime like that, not even a pre-alpha breakfast edition of a home-made OS written yesterday by some hacker living in his mom’s attic.
Of course I know Linux is far more stable than Windows; I was saying it was under a “new definition” of reliability.
In other words…. they made it up.
Anyone with any sense knows the Linux uptime counter resets after 496 or so days. Check out http://www.netcraft.com
The uptime counter reset in the 2.4 kernel only; with 2.6 they fixed the type of the variable that holds uptime, so voila, not an issue anymore. Anyone with any sense knows that.
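For reference, here is a minimal back-of-the-envelope sketch (in Python; the HZ=100 tick rate is the commonly cited value for 2.4-era x86 kernels, not something from the article) of where that roughly 496/497-day figure comes from:

```python
# The 2.4 kernel tracked uptime in a 32-bit jiffies counter incremented
# HZ times per second, so the reported uptime wraps after 2**32 ticks.
HZ = 100                                    # typical tick rate on 2.4-era x86
seconds_until_wrap = 2**32 / HZ
days_until_wrap = seconds_until_wrap / 86400
print(f"uptime counter wraps after ~{days_until_wrap:.1f} days")  # ~497.1 days
```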
The article says the OS is more reliable, but I think the study reflects more on administration competence across the platforms.
And yeah, anything with under 95% uptime doesn’t sound like a server to me, or even a workstation. And tell me, with even a low 90% uptime, how do you beat that by 20%?
shane
…not to defend anyone against anyone…But I would read a statement like that as if Windows’ downtime was 20% less than Linux’ ditto, meaning that if Windows’ downtime was 10 minutes, Linux’ ditto would be 12 minutes…. I do not consider it to be a question of 20% of the potential uptime… that’d be absurd, I think!
http://www.iaps.com/2006-Server-Reliability-Survey.html
Thanks, this should clear up some of the confusion here, especially about uptime:
"Windows Server 2003 and Red Hat Linux with customizations and Novell SuSE Linux all reported roughly equivalent per server, per year outage times of just under 800 minutes. Surprisingly, Red Hat Enterprise Linux standard distribution users reported that they experienced 900 minutes of outage per server, per year."
“Windows 2000 Server and Windows Server 2003 recorded the greatest number of Tier 1 reliability related incidents”
“Custom SuSE Linux delivers the highest reliability and fewest minutes — about 430 minutes of outage per server, per year.”
How these results can be spun into claiming "Windows is more reliable than Linux and has a 20% higher annual uptime" is amazing though, to say the least.
I wish I could mod you up further, for instance, all the way up into the teaser. This means that Windows has 99.85% uptime, versus Red Hat’s “severely lacking” 99.83%. But SUSE “trounced” them all with a dominant 99.92% uptime.
Further, the arithmetic mean often misrepresents the situation. I wouldn’t be surprised, for example, if the uptimes reported for Windows had a higher standard deviation, or were skewed left. Windows might have had the most severe outliers, especially given the higher number of “Tier 1” incidents (whatever that means). And what about planned downtime versus the dreaded unplanned type? Maybe sysadmins were taking their RH boxes down for kernel patches at 4:30AM on certain Saturdays.
All I can say is that the 0.02% difference in uptime (versus Red Hat only, of course) is clearly due to a scarcity of documentation. There can be no other explanation, particularly given how the code is completely open for anyone to read. Couldn’t be due to second-class ISV support, for example.
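For anyone who wants to check the arithmetic behind those percentages, here is a minimal sketch (Python; my own back-of-the-envelope, using only the outage minutes quoted above and nothing else from the report):

```python
# Availability implied by the per-server, per-year outage minutes quoted above.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

outage_minutes = {
    "Windows Server 2003": 800,
    "Red Hat Enterprise Linux": 900,
    "Custom SuSE Linux": 430,
}

for name, down in outage_minutes.items():
    availability = 100.0 * (1 - down / MINUTES_PER_YEAR)
    print(f"{name}: {availability:.2f}% uptime")
# Windows Server 2003: 99.85% uptime
# Red Hat Enterprise Linux: 99.83% uptime
# Custom SuSE Linux: 99.92% uptime
```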
“Web-based global survey of nearly 400 IT administrators worldwide.”
Could this be called a serious survey?!?
Right?
Maybe I misunderstood, but somebody is getting the statistics REALLY wrong.
Even if Windows had an uptime of 99% and Linux had an uptime of only 98% in this study, the statement “Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.” must be wrong. It could only be right if Linux had an uptime of less than about 80%, and if that were the case, the setup or the hardware would have had a major problem which would have nothing to do with Linux.
Besides, this study was made by the Yankee Group, which has quite a history of making studies which were paid for by Microsoft. The Yankee Group also has a history of denying that the money they were paid for their studies originated from Microsoft. I totally believe their statement that this study was not paid for by Microsoft. I believe that Microsoft paid someone else to pay for the Yankee Group study. It has happened this way too often already (remember the “get the facts” campaign, anyone?).
No, those results don’t sound right at all. Why is it that I keep reading these articles saying that Windows something-or-other is great and has better stability and uptimes than Linux, but whenever I run Windows in the same environments I run Linux in, Windows always comes out a dog?
Sorry, but the only ones little Laura Didio can hope to convince are the computer illiterate. Those of us who actually use them see quite a different story from the one she is painting here.
Have they been patching Windows servers?
Seeing as most decent security patches require a reboot, and although it’s pretty good, it still needs reboots from time to time to iron out little quirks, and most quirks never happen twice.
Unless the Windows server is in its own little bubble and closed network where they don’t have to worry about anything.
But I’ve found my Linux server performs significantly better and is more reliable than my Windows ones, and it has been up for over a year.
I think I will stick with Linux/Unix thank you.
I think I will stick with Linux/Unix thank you.
What is interesting here is that according to the Yankee Group there is a significant difference between Linux and Unix, hence you should not mention them in the way you do.
On the other hand, I think this is hardly a shock to anyone that Unix beats Linux hands down.
I have been working with an IRIX workstation for 2 years, after that with a HPUX workstation for 3 years, and am now on a Linux PC for 1 year.
Please see my figures not as statistics, but as what I can remember. Also note that these machines were used for desktop (CAE) work, not as servers. All machines had to work with similar software and do similar tasks.
My reboots look like this:
– IRIX had to reboot every 3 months or so
– HPUX had to reboot every 3 weeks or so
– Linux has to reboot every 6 weeks or so
I never had to reboot any of those machines due to a complete system crash; most reboots happened because the machine became quirky somehow.
The administrators for all machines are old, UNIX-experienced guys.
On the completely opposite end of the scale is Windows XP reliability. We are currently in the middle of a move towards Windows, and all of my colleagues are already using it. They have to reboot their machines every 3 days or so. An uptime of more than a week is very rare. Every second reboot is due to a complete system lockup (no software shutdown possible).
The Windows PCs are managed by Windows Admins.
There is already some talk going on about whether we should go back to Linux, even if it looks more expensive (we use terminal servers for the MS Office tasks) when the interrupted work is not counted (and it is not easy to count).
Because of Linux’s emphasis on clustering, I would not be surprised whatsoever by these findings. Imagine running 10,000 really cheap white-box PCs. A few of them are bound to go down, and people wouldn’t really care to rush to bring them back up.
Windows 2003 Server is a more reliable server operating system than Linux
That’s gotta hurt.
Guess it’s the Linux fanbois’ turn to get trolled; we Mac users are on our second article now.
“Why do Apple products suck Part II”
I for one, will not participate in bashing Linux.
It is to be expected…
Microsoft are going to have a really tough time getting Vista sold in numbers to existing users. So instead of promoting the benefits of this “soon to be released” vapourware (I know it exists, I have a beta here, but it is not on the shelves, therefore it can be changed at any time)…
anyway, instead of promoting Vista and its merits, the Windows fanboy sites degrade Linux and MacOSX.
I am too long in the tooth, to fall for their ploys. If I ever decide to drop Linux, it will be for BSD or even a Mac. I will never use another Microsoft product.
Microsoft are going to have a really tough time getting Vista sold in numbers to existing users.
MS isn’t interested in selling Vista to existing users (i.e. retail shrinkwrap). But whenever somebody buys a new PC, it’s going to be preloaded with Vista. And if you think that is going to give MS a “really tough time getting Vista sold”, you’re simply wrong.
read what I said… I said to existing customers.
you said it too… shrink wrapped boxes.
microsoft has a mighty task getting xp users to upgrade.
You’re not getting this: The only meaningful way that the vast majority of customers upgrade an OS is when they buy a new PC. Shrink-wrap upgrades in the retail channel comprise less than 1% of the total market. So it isn’t a question of whether XP customers will upgrade to Vista or not. It’s really a question of whether they want/need a new computer. If so, they’re getting Vista whether they want it or not. OEMs sell tens of millions of computers every year. Even if MS doesn’t advertise Vista at all, it cannot help but succeed in the marketplace. People said the same things about XP — but it has overwhelming market share.
Why would they try when the vast majority of licenses they sell are OEM licenses on machines sold?
The only thing that hurts is the fact that somebody will twist statistics to mean anything.
The survey shows Suse is more reliable than Windows 2003 Server (a little only though) while Windows 2003 Server has a slightly higher uptime than RHEL. But they are all very close, so the only thing it “proves” is that the survey doesn’t matter.
Are there kernel updates for Windows 2003? RHEL has kernel updates and those – of course – force a reboot. Thus, the relevance of the uptime value as a factor for server performance/reliability is rather vague imho.
Yes, there are. There is also a SP for Win2K3 and many security fixes requiring a reboot.
Ok, up to here I was still wanting to read the research.
I was even thinking of forcing myself to read it, but… “On a broader note, said Yankee analyst Laura DiDio” was really too much.
God, I loved how she handled the SCO affair and her reports at that time; she’s nothing but a dog that barks for the side that pays the most.
Just reading that name makes me think the findings are rubbish.
http://en.wikipedia.org/wiki/Laura_DiDio
Sorry Laura, but you’ve aligned yourself with the wrong side too many times to allow me to think you’re unbiased.
Linux advocates have in response heavily criticised DiDio. Typical criticisms are a lack of formal Computer Science qualifications and promoting studies funded by Microsoft
Come back when you have a clue….
Ms. DiDio made a splash on the national scene many years before she turned to reporting the computer industry. She was intimately involved [1] in the alleged haunting referred to as The Amityville Horror. Two months after the Lutzes moved out, she held what she called a “psychic slumber party” involving herself and a number of paranormal investigators. During this time, pictures were taken in which some claim to see a small child in a window. There is also some evidence to suggest that DiDio encouraged the Lutzes in their initial reporting of the story.
This on the other hand is just downright dishonest
Heh, maybe Mrs. DiDio should hold another “psychic slumber party” to try and predict Vista’s release date.
I really don’t understand why people still care what the Yankee Group has to say; the last three years should have been more than enough to show that they really just aren’t trustworthy.
I’m not saying that Windows doesn’t have better uptime than Linux now – but nothing from the Yankee Group gets any assumed validity from me anymore.
Laura Didio???
Credibility zero.
This article was posted for one of two reasons:
1) Ignorance of the Yankee Group being paid shills
or
2) Intentional flamebait
Neither reason is excusable for a computing news discussion site.
Instead of posting rubbish condemning Didio and the Yankee Group, why not address the points of their survey? I mean, practically nobody on OSNews is familiar with how the survey was conducted, what metrics they used, etc. It seems a bit premature (and childish) of you to shoot the messenger.
No, it is not childish to dismiss the opinion of someone who in the past repeatedly gave wrong analyses by ignoring some facts and blowing others out of proportion, and who got everything wrong about the GPL, which definitely is not hard to understand.
I think, the real message we can read from this report is: “We did not find anything negative about the reliability of Linux which we could blow out of proportion.”
Look, I’m not pointing out anything that other adults don’t already know. If you want to get your point across, address the survey, not Didio. Even a stopped clock gets the time right twice a day. Simply trying to demagogue Didio isn’t going to sway people to your way of thinking. It makes you look boorish.
Fine. They conclude Windows is more reliable than Linux ( all distributions) despite the fact that Suse Linux has a higher uptime than Windows.
Yankee Group may not be supported or sponsored by any server OS maker, but YG sure support a certain “server” OS maker.
There are lies, damned lies, and statistics.
The messenger in this case isn’t worth shooting. She’s already doing that just fine by herself.
When you have negative credibility like Didio, you cannot just come around with some crazy allegations like “Windows more stable than Linux” without some real hard evidence for it. Black-box results are not going to cut it. In fact, even if they gave us the exact testing methodologies and were completely on the up-and-up, nobody would believe it unless a trusted third-party verified the results. That’s what happens when you have negative credibility. The lesson: if you want people to take you seriously in the future, behave like a responsible researcher from the beginning.
It’s sort of like what happens to scientists who fake research, except nobody ever forgives the scientists. We sort of expect that research groups and think tanks are going to lie out their butts in order to make money, but there are limits.
Personally, I think Didio is past the point where anybody should believe anything from her. This isn’t a case of shooting the messenger, this is a case of ignoring a habitual liar whose specialty in lying just happens to be in this very field. That’s not so much a prejudice as it is “common sense”. At some point, you have to filter out the whackos and the shills.
Articles to safely ignore:
1. Yankee Group
2. The National Enquirer
Is there a link to more detailed results? I couldn’t see them…
From this article we don’t know:
-what services were running on the machines
-if the machines were all running equally well supported hardware for each OS
-the nature of the failures
-the load on each machine
-proportion of machines exposed to internet/running on intranet
Pretty hard to judge the accuracy of the results without knowing the above IMHO.
Is there a link to more detailed results?
“Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.”
Isn’t this stupid enough? Now tell me, do you know one server that would have 73 days of downtime?
Now imagine that as an average (which is what this report implies). Do you know one OS with an average of 73 days of downtime?
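Just to make the arithmetic concrete, here is a minimal sketch (Python; my own back-of-the-envelope, not taken from the report) of what a literal reading of "nearly 20 percent more annual uptime" would imply. The 73-day figure above corresponds to the second reading; either way, the claim looks absurd for a server:

```python
# Two readings of "nearly 20 percent more annual uptime", assuming the
# Windows figure tops out at a full 365 days. Neither flatters the claim.
DAYS = 365

# Reading 1: Windows uptime = 1.2 x Linux uptime
linux_up = DAYS / 1.2
print(f"Reading 1: Linux <= {linux_up:.0f} days up, >= {DAYS - linux_up:.0f} days down")  # ~304 up, ~61 down

# Reading 2: Linux uptime = the year minus 20% of the year
linux_up = DAYS * 0.8
print(f"Reading 2: Linux = {linux_up:.0f} days up, {DAYS - linux_up:.0f} days down")      # 292 up, 73 down
```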
This is a "study" done by Sunbelt Software, a Microsoft consulting firm, and reported by Laura DiDio(t), a known and highly annoying Microsoft shill/astroturfer. In other words, everything about it is highly suspect, not to mention their idea of what "20% more downtime" means, especially since it seems to hinge on documentation and not actual software issues. Their point about documentation is far beyond laughable, since the entire operating system and all of its source code are available for perusal; and even if they weren’t, there is so much documentation for most of the software (such as Apache) that O’Reilly is making a living breaking down the volumes of open source documentation for people.
Do you have a link which substantiates this claim?
just type “man” on any POSIX machine…
Get rid of this article. The author equals crap… see the link higher in the forum at:
http://en.wikipedia.org/wiki/Laura_DiDio
Why can’t we discuss this issue? What are you afraid of?
From the article:
But standard Red Hat Enterprise Linux, and Linux distributions from “niche” open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.
This must be a (well paid?) joke. Otherwise, these guys don’t know a word of what they’re talking about.
That was what made me realize this article was bull**** as well… Seriously… more documentation for Windows applications than LUnix applications??? You need to go farther into the woods with that…
“LUnix” … I like that. An operating system that works best during the full moon, or is used by lunatics???
(Just kidding)
I wonder if anyone would be sued if a distribution called Lunix GNU “plus” Linux would be created…
Hmmm funny name, though: Lunix GNU is Not Unix plus Linux
Nah, LUnix is a “illegal hacker operation system” that “cannot be removed without destroying part of your hard disk surface.” To learn more about this menace, read this horrifying expose: http://www.adequacy.org/stories/2001.12.2.42056.2147.html
I guess I don’t have all that documentation on my system then.
I wonder what my ‘doc’ USE flag is for? It apparently cannot be DOCumentation, so it’s got to be something else.
I found it interesting to learn that Laura Didio was somehow involved in the Amityville Horror.
“The Amityville Horror
Ms. DiDio made a splash on the national scene many years before she turned to reporting the computer industry. She was intimately involved [1] in the alleged haunting referred to as The Amityville Horror. Two months after the Lutzes moved out, she held what she called a “psychic slumber party” involving herself and a number of paranormal investigators. During this time, pictures were taken in which some claim to see a small child in a window. There is also some evidence to suggest that DiDio encouraged the Lutzes in their initial reporting of the story. Also involved was Hans Holzer.”
http://en.wikipedia.org/wiki/Laura_DiDio#The_Amityville_Horror
Has anyone else noticed that the “Amityville Horror” section of her Wikipedia article has disappeared?
Yes, someone removed it today. That’s what the history tab is for.
I put it back in. I verified it on numerous other websites and it is in fact true.
The edits were made by an anonymous poster, probably someone who saw the reference to the Wikipedia here. Who knows, maybe it was by Laura DiDio herself!
The Yankee Group made a point of stressing that the survey was not sponsored or supported by any server OS maker.
Did they mean that they got no cash from Sun ? Or do they mean Microsoft has not paid them this time, because Windows was not top of the pile ?
The best way to describe Ms Didio(t) and the Yankee Group can be summed up with a quote from Jay and Silent Bob…
“f–k them…. f–k them up their stupid asses, they are f–king clown shoes”
Windows Server 2K3 beat Linux.
But what about OS/400?
What about zOS? Or OS2200?
Real enterprise-class OSes aren’t included in such comparisons because it wouldn’t be fair. 🙂
It really doesn’t matter how much uptime you have as long as the system stays alive for a reasonable amount of time. To provide round-the-clock service you will need a failover system anyway.
Regardless of whether you use Windows or Linux, you will now and again need to reboot after a security update. Sure, this will happen somewhat more often on Windows, but probably not so much more often that you need more failover servers if you run Windows.
From a business perspective, the more interesting question is therefore not how much uptime you can get, but rather how many man-hours you need to spend to keep that uptime.
In my experience Linux and Solaris require far less day-to-day maintenance to keep running. They essentially keep on running until you shut them down. Windows, on the other hand, needs virus checks, hard disk defragmentation, and more patches that require reboots.
Not that such downtime is very long, but it is long enough to require users to be notified and to be planned in advance.
The Yankee Group’s version of the “Get The Facts” ads, nothing more.
Obviously linux isn’t perfect for everyone. But it’s like the whole SCO thing: Show us the code.
Maybe they collected this data via an online poll or something.
It seems any time a research firm claims Windows beat Linux in anything we all jump and assume they must not be credible.
I can’t attest to the credibility of the Yankee Group but I can say I am impressed with the Windows 2k3 servers we have deployed.
2k3 is not the end all and be all of server operating systems, but it is pretty good and I think most of you missed the memo.
>I can’t attest to the credibility of the Yankee Group
Nobody can. Or will.
> 2k3 is not the end all and be all of server operating
> systems, but it is pretty good and I think most of
> you missed the memo.
That’s not the point. Sure, Windows 2003 is much better than its predecessors, but better uptimes than Linux… no way!
In my experience, Linux servers keep running until the power goes down or the kernel needs upgrading (which doesn’t happen all that often). Userspace services are mostly reliable, and when they have to be restarted, that goes mostly unnoticed.
Windows servers suffer from patch “tuesday” syndrome, which almost always requires a reboot. No problem, we reboot the machines later at night and that’s it.
So… Windows boxes with better uptimes? Pff!
(I’ve never had a Linux box go crazy on me for more than a few minutes, but I had problems with Windows servers that took more than a few hours to bring back into normal service, but let’s not go there for the sake of argument.)
Windows servers suffer from patch “tuesday” syndrome, which almost always requires a reboot. No problem, we reboot the machines later at night and that’s it.
I have no idea why people apply every and any patch that comes out for windows. Where I work we selectively patch when the patch is applicable to a function of the OS we use. Even at that we test before we patch our production systems.
In my experience both Windows 2k3 and *nix are solid on good hardware.
“I have no idea why people apply every and any patch that comes out for windows.”
Maybe because it is better to be safe than sorry… When you have boxes serving an untrusted network you have to decide whether you want to waste lots of time pondering if a certain patch is relevant or not, or much less time just testing it on a less important box and then applying it to the other boxes, especially when 2 minutes of downtime at 10pm doesn’t cause any problems.
Actually, selecting patches on Linux is much easier, because you have a much clearer view about what really matters and what doesn’t. But I apply all patches to my Linux boxes too…
Maybe because it is better to be safe than sorry… When you have boxes serving an untrusted network you have to decide whether you want to waste lots of time pondering if a certain patch is relevant or not, or much less time just testing it on a less important box and then applying it to the other boxes, especially when 2 minutes of downtime at 10pm doesn’t cause any problems.
I’m all for applying patches, esp on unsafe networks but I don’t see a reason to install outlook express security patches on a windows 2003 server.
It doesn’t take that long to determine whether the patch applies or not; sometimes it’s blatantly obvious.
I agree safe is better than sorry but that goes both ways, sometimes the safe route is to not patch right away.
Anyone who runs both Windows and Linux in the same data center knows this is the worst sort of rot. It’s not even worth mentioning these days when a Redhat/Apache box hits the two-year uptime mark. When was the last time a Windows server was properly patched AND was up for more than 60 days? I can just imagine the lab conditions under which these results were manufactured . . . .
Who paid for this research??????
-2501
Oh no!
Hide the children the DIDIOT is back!!!
The most worthless and untrustworthy analyst is here from the most useless opinion company around (Yankee Group).
I would rather take my IT strategy advice from cartoon stickers on the side of a longjohnsilvers drink cup.
maybe they meant less downtime rather than more uptime and just got sloppy? after all, if windows servers are averaging 20 hours of downtime, like the article claims, then linux servers would only have to average 24 hours to be “20 percent more”.
Er, wait. 24 hours is around 99.7% uptime. That strikes me as high all around.
strikes me as bad data no matter how you slice it.
It’s stunning how many of you are posting absolutely worthless criticisms of the survey without even having understood the details of how the survey was conducted. I have to admit that I’m a bit skeptical, but get a grip, people: just because you disagree with or don’t like Didio doesn’t absolve any of us from doing the heavy lifting of evaluating the survey objectively. No doubt most of the same offenders will vote down this post because they won’t like being taken to the woodshed over such childish behavior. But it had to be said.
“It’s stunning how many of you are posting absolutely worthless criticisms of the survey without even having understood the details of how the survey was conducted”
That’s exactly why it needs to be criticized. There are absolutely zero details on how these results came to be, and that makes this “report” untrustworthy.
Without any knowledge of the details we can neither support nor debunk any of the claims with our own tests. They might as well have made it up out of thin air for all we know (and since it’s the Yankee Group, that’s not entirely impossible).
I am going to post a “Yankee type” study… here goes.
Windows is more damaging to your health than Linux
Do you feel frustrated and angry every time your virus detector throws up a hit? Do you feel like slapping malware authors around the head with a wet fish? Does the latest BSOD make you want to boot Bill Gates?
If the answer to any of these questions is yes, stop using Windows immediately. You will risk a heart attack with continued exposure.
I recommend switching to Mac OS, or if you feel you really need a calming computing experience, then use Linux.
Tests conducted on 27 million people by me personally, last night, after tea.
Raver31 stresses that no operating system vendor has paid for this.
We neither agree nor disagree with Laura Didio. We just take the verifiable history of both the Yankee Group and Laura Didio and give any report from her a ranking on a personal trustworthiness scale.
To be frank, Laura Didio would have to spill out ALL the raw data from that survey (which she cannot do, because she is certainly obligated to anonymize the data) to give me the chance to form my own conclusions. Then I might come to the same conclusion as she did (or not), and I could rerank the report on my personal trustworthiness scale.
Since this is unlikely to happen, this report will stay at the bottom end of my trustworthiness scale. That poor trustworthiness is of Mrs. Didio’s own making.
Here’s the news release, and you can ask Ms. Didio your questions; there is an email address at the bottom of the page.
http://www.yankeegroup.com/public/news_releases/news_release_detail…
Nice ASTROTURFING first post ever there complete with registering a few minutes ago…no doubt an employee of the Yankee Group.
Had to look up Astroturfing. Actually I’ve been reading osnews for a very long time. Along with slashdot and distrowatch. I don’t even work in IT. I’m a civil servant. I just thought someone would find the source interesting. I couldn’t find any details on the Yankee Group site either. I have 3 Ubuntu computers at home and no problems with keeping them running. Where I work, they constantly have to restart the Windows servers because of crashing services. My web hosting company provides linux servers and they are great. All in all the Yankee groups research doesn’t match my experience. It did say it was a survey of admins. We have no idea who they surveyed. As anyone knows, surveys are not always scientific. Don’t worry, I won’t be posting often. Took me a couple of years of reading before posting here. Bye.
Anyone who runs both will know that this is simply not true. I’ve been an administrator in a mixed environment (these days almost all IT administrators have been, at least a little bit) and I can truthfully say that we always had to reboot our Windows machines, while I rarely ever rebooted the Debian box I set up for Asterisk.
Even if Windows 2003 Server itself stayed up for a decent amount of time, IIS was always crashing. What about the uptimes of the services that the OS itself provides? I’m sure those aren’t in this little study.
What about those lovely updates that just installed themselves and rebooted the computer? I love having my PC rebooted on its own, let me tell ya…
Followed by the fact that she has a BA in Communications – which speaks for itself; if she weren’t doing this, she would be either a teacher or on the dole; hardly whom I would wish to employ if I wanted any studies to carry some weight in the IT world.
Netcraft, a fairly well regarded and trusted group, won’t provide uptime figures for recent Linux and FreeBSD based servers because of the potential for serious inaccuracies; I wonder how the Yankee Group avoided these problems…
http://uptime.netcraft.com/up/accuracy.html#uptime
The report seems to say they had people running servers and they said “hey, how much downtime have you had?”
Assuming you trust:
a.) The Yankee Group to pick an even sampling (not a small website for one and a large commercial center for another, which could actually twist in either direction).
b.) Those reporting to be honest about their downtime.
When you measure uptime a reboot is a big deal, when you look at downtime it’s just a minute. Of course, you can set it up to allow servers to reboot without any observable downtime on the outside; but this is supposed to be per-server so that would still have to be called downtime for that server. But in the end 12 minutes of downtime a year for update tuesday isn’t a big deal.
In the end it’s Laura Did-shoulda-been-out-of-a-job-after-sco-lost and so I’d take it with a grain of salt.
But in the end 12 minutes of downtime a year for update tuesday isn’t a big deal.
yeah, if things work out… but how many systems has a windows update borked ??
Well, that would factor majorly into a downtime statistic, now wouldn’t it?
“Annual uptime” == the amount of time in which the system is up during the year.
Number of minutes in a year: 525,948.766
Number of minutes of downtime for RedHat: 900
Number of minutes of annual uptime for RedHat: 525,048.766
If Windows 2003 had “20 percent more annual uptime”, then:
Number of minutes of annual uptime for Win2k3:
1.20 * 525,048.766 = 630,058.519
That means the Windows servers were up 437.54 days of the year. Incredible achievement Microsoft!
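A quick sketch (Python, just re-running the numbers above with nothing new added) for anyone who wants to verify the arithmetic:

```python
# Re-running the numbers above: take RHEL's 900 outage minutes per year,
# then give Windows "20 percent more annual uptime" as the article claims.
minutes_per_year = 365.2422 * 24 * 60        # ~525,948.8 minutes
redhat_uptime = minutes_per_year - 900       # ~525,048.8 minutes
windows_uptime = 1.20 * redhat_uptime        # ~630,058.5 minutes
print(windows_uptime / (24 * 60))            # ~437.5 days of uptime per year
```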
Yeah, these servers are located on Mars. The year on Mars is longer than the local year here.
This is just based on my own experience, but although I strongly agree all OSes have significantly improved over the past few years, I do find Linux (strictly server Non-GUI) to have a higher uptime than some Windows servers.
But then again Windows is a GUI OS, and some of these problems are a result of that. GUI Linux (KDE or GNOME) has horrible, completely horrible reliability in comparison to the Windows machines.
Case in point, our phone system (Asterisk) has been up since last September. In this time at least every Windows server has been rebooted at least once.
Now, according to this article they were referring to overall downtime. In that sense I have been able to get Windows machines running properly after issues much quicker than Linux; a lot of these issues are the same as mentioned in the article: documentation (which tends to be horrible for Linux).
That said, though, the source of the article is very lacking. I have disagreed with almost all of the Yankee Group’s assessments, and it seems most tend to be a little off.
I’m afraid that I have to disagree; there’s very little I can think of that would require or cause my GNOME/GNU/Linux box to be restarted besides a kernel upgrade. If a badly behaved application causes problems, one can always switch to a console and kill it, or, as a last resort, issue Ctrl+Alt+Backspace to restart X, which, of course, doesn’t affect the machine’s uptime.
If something goes awry and you lose control of the GUI in the world of Windows, then you have no choice but to reboot.
I don’t know about Linux specifically, but with open source in general you end up getting MORE documentation than you need; everything from how to configure and patch it, to how you can integrate it, extend it, etc.
Shills are often employed to assume the air of satisfied customers and give testimonials as to the merits of a given product. This type of shilling is `illegal` and almost `impossible to detect`.
Hmmm …
Where’s my music box and portable monkey dancer?
Edited 2006-06-07 03:37
Windows is better than Linux but worse than commercial Unix….
Where in this very incomplete and flamed “benchmark” are the *BSDs?
The *BSDs are well known for their high reliability and uptimes.
Sorry, but this lady has NO credibility; regardless of the outcome of these tests, I cannot hold anything this woman puts forth as unbiased.
“DiDio is often viewed as being very critical of the Open Source movement and community due to her early support of SCO and frequent referencing of those studies comparing the TCO of Microsoft and Linux where Microsoft appears to have performed well.
An example of her opinion on how Open Source Software is handled shows in this remark (quoted from a phone interview from her home in Massachusetts): “The thing about Linux is, you can talk about a free, open operating system all you want, but you can’t take that idea of free and open and put it into a capitalist system and maintain it as though it is some kind of hippie commune or ashram, because if you can do it like that, at that point I’m like, ‘Pass the hookah please!'”
On another occasion, she made a similar comment: “I’m all for open source, and competition serves everyone’s interest. But if Linux is really to take its place alongside Windows… then the vendors in this space cannot act like a bunch of hippies in a ’60s commune or ashram. There really is no such thing as a free lunch.”
http://en.wikipedia.org/wiki/Laura_DiDio
Sorry, but this lady has NO credibility; regardless of the outcome of these tests, I cannot hold anything this woman puts forth as unbiased.
Considering that you’re posting here anonymously, it could be likewise argued that you have no credibility, either.
He is citing Wikipedia. No matter whether “anonymous” or “tomcat” cites Wikipedia, the credibility to be questioned here is that of Wikipedia.
And the credibility of Wikipedia regarding citations is very high (because the citations are verifiable through other sources).
Maybe this anonymous is not trustworthy, maybe he is, but his arguments are better than yours.
Considering that you’re posting here anonymously, it could be likewise argued that you have no credibility, either.
However, tomcat, I did not post anonymously, unless you spell anonymous as fak3r. Still, just because I posted as myself only gives you a signpost to argue my opinion against; it doesn’t really give me any automatic credibility per se. My comment was simply that after hearing this lady spew on, and reviewing her background, I think the conclusion of this kind of study was known in advance. To say that you have to reboot Linux boxen more often than a Win server seems strange to me, but hey, perhaps they’re installing new kernels all the time, I don’t know; with more detail, I would.
posted by
fa3kr
What’s this, is everyone fighting over being #2 here?
(No flamebait intended, just working with the media *spin*)
Let’s take this survey into perspective–a web-based survey of about 400 IT administrators worldwide. Is this even a large enough pool of respondents to consider this survey valid?
It may appear to point out some trends (for those who haven’t RTFA, which it appears are many):
1) Patch management/reliability has improved ~20% from Windows Server 2000 to Server 2003
2) A majority of respondents indicated they had an average of three server operating systems running in their environments
3) A majority of IT administrators opt to manually apply patches and updates because they are not yet fully comfortable with automated patch rollouts
4) Windows 2000 Server and Windows Server 2003 recorded the greatest number of Tier 1 reliability related incidents (insert your definition of “Tier 1” here?)
5) Linux, Windows, Unix and open source environments exhibited a high degree of reliability — with a surprising lack of disparity among the platforms
So my thoughts:
1) Windows Server is getting more reliable — good it’s about time.
2) Multiple OSes in the data center, and I’m guessing that a lot of the sysadmins aren’t qualified to deal with more than one of them (if that)… there’s your *real* downtime problem!
3) Manual updating of servers…do I hear the phrase “best practice” here? Sure you test manually, then apply to production systems either manually or automated from a trusted internal source.
4) Windows has the greatest # of reliability incidents—dooohhh!—I guess that’s what everyone is up in arms about here. More incidents but greater uptime, go figure?!
5) It just doesn’t matter what OS you run, they all have about the same reliability! Well, it does matter, but it probably has more to do with the people managing your servers than with the server brand or OS that is running on them.
Personally, I believe that human error is the highest cause of system downtime! Give me a well trained sysadmin with data center experience on any of the OSes listed and you’ll beat the downtimes recorded in this survey hands-down. The OS isn’t the major problem here nor is the documentation….lack of knowledge/experience is!
More incidents but greater uptime, go figure?!
Shorter MTTF; shorter MTTR.
“Shorter MTTF; shorter MTTR.”
Thanks, but I’ve got that part of the math down already….
I would like to see the components of the equation, though. MTTR for Windows with a “Tier 1 failure”? I’m assuming that means it’s *really bad*; I’ve never seen a *really bad* Windows Server failure recover *really fast*, so the MTTR variable of the equation is in question.
I’d rather not speculate on what they mean by “Tier 1”. In my world, it means internet-backbone
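To unpack "shorter MTTF; shorter MTTR" a bit: steady-state availability is commonly taken as MTTF / (MTTF + MTTR), so a platform with more frequent but quickly fixed failures can still show better uptime than one with rare but long outages. A minimal sketch with made-up numbers (mine, not the survey's):

```python
# Steady-state availability = MTTF / (MTTF + MTTR).
# The failure rates and repair times below are purely illustrative.
def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

box_a = availability(mttf_hours=30 * 24, mttr_hours=0.1)   # fails monthly, back in 6 minutes
box_b = availability(mttf_hours=120 * 24, mttr_hours=4.0)  # fails rarely, down 4 hours each time
print(f"A: {100 * box_a:.3f}%  B: {100 * box_b:.3f}%")      # A: 99.986%  B: 99.861%
```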
As many people here have said, without seeing the details, there’s no real way to evaluate the report.
I did a very informal, non-scientific survey of my own last night, and one of the things that it highlighted was that smaller shops tend to have poorer uptime than larger ones, independent of what OS they run.
That suggests that the difference between RedHat and Windows could be entirely attributable to sample skew, and that uptime in itself is an entirely pointless measurement of reliability.
Statistics without detailed explanations of the methodologies used are worthless, mean nothing, and should be viewed with the utmost scepticism.
Anyone who’s ever taken a basic course in statistics can tell you that.
How did they measure this “uptime”? How many systems did they use? What systems? What OS versions? For how long did they test it? etc etc.
All of these and more are questions that need answers before this report can be considered even remotely valid.
microsoft has been funding the studies for years, yet to no avail. how many is this? we must be over fifty now.
i will file it along side other yankee group gems:
– people hate ipods
– people hate itunes because it doesn’t “playforsure”
– no one likes the PlayStation
– msn search is better than google
– windows proven #1 safe from malware OS
This is the type of article which should be considered as “spam”, like linux/gnu linux and so on …
Considering it’s from mrs DiDio, it’s not even worth reading… And you all know why
What really surprises me is the fact that OSNews reports rubbish like this… bah.
They need hits.
After many days with technological articles of good quality, but little interest from users, they had to do something about it.
The number of posts for this article compared with number of posts for more technical articles should give a clear hint about why this was accepted.
It wasn’t due to lack of submissions of more worthy articles.
I knew I recognized the Yankee Group’s name from other pro-Microsoft pieces. I searched OSNews and didn’t get much, so then I tried Get The Facts and Microsoft’s website.
http://search.microsoft.com/results.aspx?mkt=en-US&setlang=en-US&q=…
1,687 results.
Whether or not they’re a shill or paid advertiser, they’ve found quite a niche in providing quotes and statistics for Microsoft.
While we’re on the subject, here’s my all time favorite Fact: http://www.microsoft.com/smallbusiness/case-studies/CaseStudy.aspx?…
Because they switched to Linux, this company contracted the Blaster virus, which caused them a full day of downtime, despite being nothing but a DDoS worm that only affects Windows systems. Amazing.
This is very strange. I was sure I got that 1687 number right where 184 is now. Maybe the site glitched. I’d fix it, but the edit command seems to have gone missing. Nonetheless, it’s many, many pages of references to this firm.
Is it possible they meant their Linux gateway failed to filter out the Blaster worm, and the rest of their system went down? I mean, I know what it looks like. And their before/after system looks surprisingly similar to my untrained eye, except that there’s now a remote computer bypassing the firewall? And mail is now handled internally, rather than via the internet.
Yes, I think that’s what they’re getting at. Because they didn’t run their mail through clamav, the workstations got the worm, and they had to go in and do Windows Update manually because you need a Microsoft server to manage it centrally or run Symantec server.
They word it in a way that makes it sound like Linux crapped out and couldn’t help itself. Until I re-read it recently, I thought that’s what they were actually saying occurred, and I just could not believe they were pushing such a blatant lie. But what actually happened is that the Windows machines were vulnerable and couldn’t be protected adequately in a heterogeneous network. Basically, vendor lock-in is Linux’s fault for being a poor product.
The Yankee Group made a point of stressing that the survey was not sponsored or supported by any server OS maker.
right except for MS which is not really a server OS maker since this is not their major product. Does anyone still take the Yankee Group seriously after all the FUD that they create? Plus no self-respecting IT department ever deploys an OS without preliminary tests.
Regardless of whether Windows has more uptime, the point is simple. For me, a desktop user (and not someone who was even familiar with Linux, let alone a Linux sysadmin), I experienced only roughly 99.9% uptime with Linux; I had to reboot, 268 days ago, because I bought a new video card. It’s great that Windows has more uptime, ’cuz I switched to Ubuntu because Windows uptime was so great! Especially on DESKTOP machines, which undergo a fraction of the scrutiny that servers do. I remember how awesome that 20% more uptime was: having to reboot every time I printed a paper I had typed and hadn’t saved, or having to hack IRQ conflicts for every ‘plug-n-play’ device from ’95 to ’01-ish. Oh garsh, I even remember when my computer used to crap out while idling!
It’s true, your downtime will suffer immensely if your power goes out frequently, or you know someone with cancer. Stick with Windows!
Edited 2006-06-07 07:55
I honestly don’t care about this information.
However, I would love for this to shut up the linux zealots for good.
I don’t mind Linux other than that it’s very much over-hyped, but its zealots are just outrageous and I can’t stand them.
They are constantly talking crap and throw out misinformation and just lies and FUD. They do this non-stop and they just don’t get it.
Not everyone cares to use Linux, and creating lies isn’t going to help you win people over.
Yes, I have used Knoppix, and yes, I used Solaris (Unix) in 1995, and I am just not impressed.
Sorry, I care more about improving and innovating applications than reinventing the wheel (OS).
I honestly don’t care about this information.
However, I would love for this to shut up the windows zealots for good.
I don’t mind Windows other than that it’s very much over-hyped, but its zealots are just outrageous and I can’t stand them.
They are constantly talking crap and throw out misinformation and just lies and FUD. They do this non-stop and they just don’t get it.
Not everyone cares to use Windows, and creating lies isn’t going to help you win people over.
Yes, I have used Windows 2 through to the Vista betas, and I am just not impressed.
Sorry, I care more about improving and innovating applications than reinventing the wheel (OS), and copying everyone else’s ideas.
Judging by Proforma’s previous posts, he is a shill and can safely be ignored.
Edited 2006-06-07 08:28
>Judging by Proforma’s previous posts, he is a shill and
>can safely be ignored.
What are you? Like 10 years old?
Nothing I said was false and you
did not even address the subject.
That is what is sad about the Internet. They just allow anyone on these days.
address what subject ?
you just came on spouting crap about Linux users.
you had no viable subject to reply to.
Are you saying that your sum total of *nix experience comes from Knoppix (which, while powerful, is the most poorly thought out desktop ever) and Sun’s Solaris in 1995?! That’s like me saying that I’d used a bootable CD-based Windows environment and Windows 95 and I’m just not impressed.
Just imagine if those applications you are concerned with “improving and innovating” were able to run on a Free OS, for everyone to use without paying large sums for the privilege. Wouldn’t that be good? Wouldn’t it be nice to know that your computer’s wellbeing doesn’t rely entirely upon Microsoft?
Why is producing a solid Unix-like OS with good hardware support, advanced server characteristics and an open source development model reinventing the wheel? Microsoft has admitted recently that it needs to do some serious reinventing of its own; the Windows core is such a mess that it makes more sense to rewrite large portions of it from scratch once Vista’s through the door.
Okay.
I’ll bite.
Come up with examples of FUD from the Linux zealots’ side. And please show me these religiously fanatical persons. And what lies? Please be specific.
And please explain how SUSE’s higher uptime than Windows makes Windows more reliable.
You used Solaris back in 1995 and you were not impressed? Oooooooh… I guess Windows 3.1 was much better?
Could it be that Solaris anno 2006 is a different beast?
Considering that you’re posting here anonymously, it could be likewise argued that you have no credibility, either.
Oh, puh-lease. When this lady resorts to making smart-ass comments about Linux, she ranks herself down at the bottom of society’s barrel, side by side with Rush Limbaugh and Susan Coutler.
If she wishes for her study to carry some weight, then she should make some strong arguments that carry some weight; why can’t the open source model work in a capitalist system? So far Red Hat seems to be pulling in the cash quite nicely, and Novell’s Linux revenues have been growing at 20% – if the model were failing, wouldn’t these companies be bankrupt already?
The whole *point* of an analyst is to be impartial: look through the facts of both sides and produce a document/report showing the arguments on both sides, then summarise it one way or the other; if there is insufficient evidence, then an impartial verdict of “there is insufficient evidence for either side to prove their stance on the matter” is required.
Too bad that there isn’t the same level of disclosure that is now required for market analysts; when they come on tv, they must say what their interests in a particular company are.
Oh, and btw, why on earth is a person with a communications degree analysing an industry that she knows NOTHING about? It would be like me analysing the pharmaceuticals industry while having no training, education or background in it.
Edited 2006-06-07 08:33
It’s Ann Coulter.
http://en.wikipedia.org/wiki/Ann_Coulter
The whole *point* of an analyst is to be impartial: look through the facts of both sides and produce a document/report showing the arguments on both sides, then summarise it one way or the other; if there is insufficient evidence, then an impartial verdict of “there is insufficient evidence for either side to prove their stance on the matter” is required.
Maybe it’s escaped your notice that practically nobody around here is “looking through the facts of both sides”. They’re shooting the messenger. As if that were some kind of cogent argument. Hint: It ain’t.
Hmmmm. Saying this several times over, and producing obviously doctored reports, is not going to make it any more true. People’s daily experiences say otherwise.
But standard Red Hat Enterprise Linux, and Linux distributions from “niche” open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.
Niche vendors(?! – WTF?), and Linux servers are offline more because of a lack of documentation? That all sounds pretty desperate to me.
The Yankee Group made a point of stressing that the survey was not sponsored or supported by any server OS maker.
Just like the boy who cried wolf, no one now believes any of it – especially reports claiming that Windows performs better.
From what I read in the responses, it seems almost all people who run a mixed Wintel/Linux/Solaris server environment have the same practical experience as I do.
Wintel has less uptime than Linux and Linux has less uptime than Solaris.
The data center I work in is blended. I know how to operate and manipulate both platforms to get the job done.
Management has me working on the Linux/*nix/BSD/OSX side since none of the other system admins (and there is a whole bunch of them) are, what’s the term they used, `comfortable` working with the above four.
There is nothing wrong with the manual updating of servers, as well as with full automation. This would, in my opinion, depend on the `type` of server being updated. Some systems need a more `cautious` approach. Maybe I should say, “Go ahead! Automate the updates on the Oracle database server that covers all financial aspects of your multi-million dollar business!” I don’t believe so.
Uptime is not a matter of seconds, as in the Windows world (could not help but chuckle at that), or days, as in the *nix-like Linux, BSD, or MacOS world. Uptime is how well your servers are performing the business, whatever that may be: crunching data, graphic design, storage, etc. Not how long the power has been cycled on.
Would someone really define reliability? I believe it is attached somehow to the above paragraph. It is as one poster wrote, “Personally, I believe that human error is the highest cause of system downtime! Give me a well trained sysadmin with data center experience on any of the OSes listed and you’ll beat the downtimes recorded in this survey hands-down. The OS isn’t the major problem here nor is the documentation….lack of knowledge/experience is!”
Sorry, I just don’t buy what you’re saying.
There is nothing wrong with the manual updating of servers, as well as with full automation
Of course there is. We’re talking data center, right ?
You should not need automatic updates in a data center. You never connect a data center directly to the Internet, which would mean having a direct path from the Internet to your data center. So there should be no need for automatic updates, or your data center is unusable. Planned manual updates are what should be required, mostly for performance. An update that is not performance related means there’s a serious reliability problem.
This would, in my opinion, depend on the `type` of server being updated. Some systems need a more `cautious` approach.
Like… systems for data centers?
Uptime is not a matter of seconds, as in the Windows world (could not help but chuckle at that), or days, as in the *nix-like Linux, BSD, or MacOS world. Uptime is how well your servers are performing the business, whatever that may be: crunching data, graphic design, storage, etc. Not how long the power has been cycled on.
BS. Uptime is the time your servers stay up, like its name says, and the base unit is the second. You talk about efficiency, which has not a lot to do with uptime.
Would someone really define reliability? I believe it is attached somehow to the above paragraph
Uptime is one measure of reliability. Look at wikipedia for a definition.
It is as one poster wrote, “Personally, I believe that human error is the highest cause of system downtime!
Can’t buy this either. You should not have to attend to a reliable system.
Give me a well trained sysadmin with data center experience on any of the OSes listed and you’ll beat the downtimes recorded in this survey hands-down. The OS isn’t the major problem here nor is the documentation….lack of knowledge/experience is!
I doubt that. If you used these systems in high load environments, you’d know it’s not possible with Windows.
All of these systems, including Windows, will work in an underloaded environment. But as soon as you scale up and stress these systems, Windows (and most MS server software) can’t keep up and starts misbehaving.
This is visible in many benchmarks, even acknowledged by MS (but only when a new better version is out) and I’ve experienced it in real life, with Win2000.
I’ve seen it with Win2003, but only for mail servers (it couldn’t keep up with a Postfix setup on a Linux box I installed that did all the antivirus/spam work, receiving tens of mails every second), and only recently for IIS, in benchmarks we did to replace the Win2000 boxes with Win2003.
FTFA: “…in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.” Why is it I think not?
The important question is, who’s innumerate – Gregg Keizer or the folks at the Yankee Group (or both)?
I have been using Linux for 7 Years on both my server and desktop. I see now what I must be missing. I’ve got my checkbook and credit cards and am on the way to purchase the most reliable safest Microsoft system I can find. Great News!!
LOL!!!!
hey,
we are talking about Linux! I don’t want to say that Unix and Linux are the same, but they are very similar!
Also, we don’t know why Windows has 20 percent more uptime than Linux; maybe there were more patches to install on the RedHat box.
And we don’t know how many hours/minutes/seconds 20 percent is.
And we don’t know if both systems were running on the same hardware.
So this is not a reliable test for me; besides, I know my own Linux box and how stable and secure it is.
I’ve got servers running W2K3S and Fedora Core 3/4. The only time they go down is during patch time. Linux needs to go down only to boot a new kernel; otherwise Apache, MySQL, etc. just need a stop/start. W2K3S, on the other hand, needs to reboot regardless of what has been updated (Microsoft POS, ya know).
I prefer to run FreeBSD or Linux simply because there is no Microsoft Tax.
Maybe it’s escaped your notice that practically nobody around here is “looking through the facts of both sides”. They’re shooting the messenger. As if that were some kind of cogent argument. Hint: It ain’t.
“Internet Surveys” quite frankly carry no more weight than if I chose 5 of my friends in the IT world and asked for their feedback on the stability, scalability and reliability of Linux, Windows and UNIX. Personal bias will always end up entering the equation.
I’m not debating the issue that Windows 2003 could be more reliable, but I do question the method used to collate the data, given the issues that would arise with the technique she used.
Oh, and to address the original issue: the whole point of an analyst is to be impartial; the fact that the individual in question has a predetermined conclusion on open source systems clearly puts her into the ‘vested interest’ camp.
While the thread resulting from this post has been amusing, I just don’t see that there’s enough good data in the article (haven’t read the supposed report yet) to come to any conclusion.
Being as this is a “survey” of multiple companies, then there’s probably no consistency on which to realistically aggregate the results. I work for a fairly large IT company. For our own internal metrics on “uptime,” we’ve changed them over the last few years as we’ve deployed more & more Windows servers. We no longer count “scheduled maintenance” periods, patch time, etc when deciding availability.
Although no one (at the company) wants to admit it, what we’ve done is lower the standard, since the new kid on the block couldn’t meet the more strict former standard. I’m wondering how many of these other companies have done the same thing? And, how much does that mess up a “survey” such as this?
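To put some invented numbers on that (a rough Python sketch for illustration only; the outage and maintenance figures are made up, not our real metrics), here is how much dropping “scheduled maintenance” from the downtime column flatters the availability percentage:

    # Rough sketch with invented numbers: how excluding "scheduled maintenance"
    # windows changes a reported availability percentage.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    unplanned_outage_min = 800          # crashes, hung services, etc. (invented)
    scheduled_maint_min = 4 * 60 * 12   # a 4-hour patch window each month (invented)

    def availability(downtime_min):
        return 100.0 * (MINUTES_PER_YEAR - downtime_min) / MINUTES_PER_YEAR

    strict = availability(unplanned_outage_min + scheduled_maint_min)  # maintenance counted
    relaxed = availability(unplanned_outage_min)                       # maintenance ignored

    print("strict metric:  %.3f%%" % strict)   # ~99.300%
    print("relaxed metric: %.3f%%" % relaxed)  # ~99.848%

Same servers, same year, a noticeably better-looking number, just from bookkeeping.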
really short……..
Our managers decided to switch to Windows 2003 after years of using Unix. Now my production has dropped about 65% because W2003 is crashing very often and does not seem to be very stable.
Now they are considering Red Hat and will make a decision this week. Money and resources were wasted for nothing.
-2501
This is part of Microsoft’s FUD campaign. Anyone who believes this needs some serious help. Linux is taking over. I do not feel sorry for anyone who runs Windows anymore. The alternatives are out there. Instead of upgrading to Vista, take the open approach.
So, let me get this straight:
“On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server.”
Linux = 364 days and 4.5h uptime per year.
“Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.”
Windows = roughly 437 days of uptime per year…
Hmm… exactly how many days are there in a year, then? I thought 365, but I guess I could be wrong…
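For anyone who wants to check the arithmetic, here is the back-of-the-envelope version (a Python sketch using the survey’s own 19.5-hour downtime figure and taking “20 percent more uptime” literally):

    # Back-of-the-envelope check of the "20 percent more annual uptime" claim,
    # using the survey's worst case of 19.5 hours of downtime per server per year.
    HOURS_PER_YEAR = 365 * 24                    # 8760

    linux_uptime_h = HOURS_PER_YEAR - 19.5       # 8740.5 h, about 364 days 4.5 h
    windows_uptime_h = linux_uptime_h * 1.2      # "nearly 20 percent more"

    print("Linux uptime:   %.1f h (%.1f days)" % (linux_uptime_h, linux_uptime_h / 24))
    print("Windows uptime: %.1f h (%.1f days)" % (windows_uptime_h, windows_uptime_h / 24))

That comes out to roughly 437 days of uptime in a 365-day year, which is the point: read literally, the claim is impossible.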
`Sorry, I just don’t buy what you’re saying.`
That is your right. I still stick to my opinion that there is absolutely nothing wrong with manual updating of servers or full automation, depending on the server and the software it runs. This can be a MicroSlippy server or a Linux/BSD/*nix/MacOS server. There are certain things that lend themselves to closer and more cautious work. Yes. A data center.
`You should not need automatic updates in a data center. You never connect a data center to the Internet, by which I mean having a direct path from the Internet to your data center.`
Huh? Okay. You’re saying a MicroSlippy 2k3 server never needs an update and that the machines could not be NATed behind a series of other zones and non-routable equipment. What data center do you work in that is completely isolated like that?
`BS. Uptime is the time your servers stay up, as its name says, and the base unit is the second. You’re talking about efficiency, which does not have a lot to do with uptime.`
I measure reliability in terms of a server’s ability to perform its job. Uptime is a minuscule component of the whole picture. I can have a WinSlippy box, or a Linux box for that matter, and if it stays up 365 days, or 31,536,000 seconds, but it can’t serve files or store data or whatever it’s tasked to do … uptime is worthless. If a server must reboot once a day (e.g. NT4) but does its job for 365 days, then uptime is also a worthless measurement.
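For the curious, this is roughly what I mean by checking a server’s ability to do its job rather than its uptime. A minimal Python sketch; the hostname, URL and port below are placeholders I made up, not a real setup:

    # Minimal sketch of "is the box doing its job" checks, as opposed to reading
    # an uptime counter. Hostname, URL and port are placeholders, not a real setup.
    import socket
    import urllib.request

    def web_check(url="http://fileserver.example.com/healthz", timeout=5):
        """True if the web service answers with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:  # covers URLError, HTTPError, timeouts, refused connections
            return False

    def port_check(host="fileserver.example.com", port=445, timeout=5):
        """True if the file-sharing port accepts a TCP connection."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("serving web requests:", web_check())
        print("file share port open:", port_check())

A box can pass the uptime test and fail both of these; that is the distinction I care about.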
`Can’t buy this either. You should not have to attend to a reliable system.`
Okay. Put an `untrained ape` in front of a Windy 2k3 server and see how far you get. Especially when it comes to group policies and security settings.
`I’ve seen it for Win2003, but only for mail servers (it couldn’t keep up with a Postfix setup on a Linux box I installed that did all the antivirus/spam work while receiving tens of mails every second), and only recently for IIS, in benchmarks we did to replace the Win2000 with Win2003.`
I don’t sit around benchmarking WinSlop servers from one version to the next. I actually spend my time enjoying my open source driven IT work. As far as I am concerned, MS as a company can go belly up. Maybe they can then benchmark how deep they bury it. I am not shy about this opinion.
Huh? Okay. You’re saying a MicroSlippy 2k3 server never needs an update
No, I said you should not apply *automatic* updates on servers in a *data center*.
that the machines could not be NATed behind a series of other zones and non-routable equipment
No, I talked about a direct path, which just means that you can hop from network to network until you reach the data center.
All the big data centers in the world that I know of (car rental, plane and hotel booking, the France Inter phone operator, French interbank networks, for example) are in closed LANs, accessible only from backends.
What data center do you work in that is completely isolated like that?
I just cited some of them.
I measure reliability in terms of a server’s ability to perform its job
Too bad that’s the definition of efficiency … not of reliability.
I can have a WinSlippy box, or a Linux box for that matter, and if it stays up 365 days, or 31,536,000 seconds, but it can’t serve files or store data or whatever it’s tasked to do … uptime is worthless
But that still means it’s reliable, which is the basic requirement. What was your point?
If a server must reboot once a day (e.g. NT4) but does its job for 365 days, then uptime is also a worthless measurement
BS again. It can’t do its job 365 days a year if you have to reboot it once a day. Unless your OS reboots in under a second, which I’m sure it doesn’t.
So uptime is VERY useful to spot the fact that you have to reboot it every day.
Okay. Put an `untrained ape` in front of a Windy 2k3 server and see how far you get. Especially when it comes to group policies and security settings.
Excuse me? I repeat: “you should NOT have to attend to a reliable system”, not even put an ‘untrained ape’ in front of it. It should just run.
I already gave you the cases when you need to attend to it. Perhaps Win2003 is not ready for data centers, but the ones I know run for years unattended, just supervised.
I don’t sit around benchmarking WinSlop servers from one version to the next. I actually spend my time enjoying my open source driven IT work
Me too, but you know, there is a phase when you make the project work, where you have to tune and configure everything.
After that, I don’t have anything to do most of the time.
The benchmarks are part of the preliminary study which evaluates the reliability/efficiency/pertinence/cost/… of the framework you put in place.
“I’d rather not speculate on what they mean by ‘Tier 1’. In my world, it means Internet backbone.”
Actually, Tier-1 doesn’t really mean anything these days; it has degraded into another pointless marketing term. Quote:
“We are a tier 1 provider” = “I am a salesperson.”
“They are a tier 2 provider.” = “I am a salesperson and they are our competitor”.
Jay Hennigan on NANOG@.
Edited 2006-06-08 03:51
See, I told you Windows was better than Linux.
I just love being right!
that commercial UNIXes are more stable.