Microsoft will sell a version of Windows for high-performance computing–a niche in which rival Linux is blossoming–with a first version planned for the second half of 2005.
Microsoft will sell…
Well, I don’t know who’ll buy. Nobody in the market for one of these setups will touch Windows at all, or possibly even x86 hardware. The second half of 2005 will already be too late as well.
“(Microsoft)…also promises ‘a secure…platform with high performance and low total cost of ownership.’”
Secure? That’s a word that’s nonexistent in the Microsoft Dictionary! I understand that the version of Windows they’re working on for HPC might not share many things with XP, but Microsoft is NOT capable of making a secure OS, period.
Funniest thing: a few days ago I had to install XP for someone, and it got infected by 3 viruses the first time I connected to the internet, before I could get to the Windows Update site!
Funny, isn’t it?
I don’t particularly care for Microsoft’s business practices or some of the problems with unpatched XP, but give them a chance. I think one should wait until a product is released before it is condemned.
It’d be nice if they released just an updated NT base with a very minimal set of apps. All I want is basic configuration utilities AND the ability to read and write the configuration files (for scripting). It will be interesting to see the product they release, since it’ll be directly competing with the *nix market.
Even if Microsoft offered to open-source this self-proclaimed high-performance Windows XP and threw in a bunch of MCSE system administrators for free and promised no recurring costs, my high-performance computing group still wouldn’t be interested.
Call that closed-minded, but the fact is Linux is simply superior. The 2.6 kernel offers O(1) performance for many tasks, and the filesystems (ext2/3, Reiser, and XFS) combined with the anticipatory scheduler (default in 2.6) leave FAT and NTFS in the dust, even with date-stamping disabled in NTFS.
Sometimes technical superiority loses out (e.g. Apple, Sun, SGI, Sybase, Novell, and WordPerfect).
There are many NOSes out there that are better suited for this task, including Apple’s OS X. The issue is that MS keeps winning despite the fact that it isn’t always technologically the best; it just has the best marketing. MS will make inroads. They will give away stuff; they will create MSCSCAs: Microsoft Certified Super-Computer Administrators!
1. They like open source because they like to fiddle with the code. Probably all of them have CS degrees.
2. They don’t need any of the functions that billionaire Bill’s O$ has, like a fancy GUI or software that is dependent on Windows.
3. Linux is becoming the standard in this area, just like Windows is for the desktop (albeit forced on all of us by OEMs).
From the article:
“From what we’ve seen in performance of Windows, they just haven’t been competitive with Linux. There’s too much overhead, [. . .].”
The question is, I think, not whether Microsoft can produce an HPC OS; they have the funds and resources. The issue is at what point does it stop being “Windows”? It seems that XP would have to be heavily revised, both by cutting out a lot of areas and adding others, for this to be a competitive HPC OS. At that point, is this really a version of Windows, or a specialized OS with Windows branding? What advantages would this hybrid OS have over the current sector leader, Linux? It certainly wouldn’t be able to compete on price, unless it is free. If it were heavily modified, would it have any better integration with desktop XP than anything else? What advantages does MS think it can bring to the sector?
This isn’t a consumer OS we’re talking about here. I suspect that anyone in a position to be building a supercomputer is going to have to really justify putting Windows (even if in name only) on that machine. No software on an HPC machine is going to be off-the-shelf Windows stuff, but probably in-house or customized, so I don’t see Windows being a requirement in that regard.
I’d hate to be the guy holding the bag though if it had stability, security, or future licensing issues attached to it because of Windows being the choice.
My gut tells me this is the one time where the old saying can be reversed, and phrased as “no one ever got fired for putting Linux on a super computer”.
If the alphas are any indication, we will all need HPC clusters in our homes just to run Longhorn!
I think a lot of people are missing the point. I don’t think they are looking to go after the supercomputer space; they are looking to get heavily involved in clustering.
What they will bring to the table are tools that make it easy to manage clusters and easy to write and port applications to run in a cluster environment.
A cluster can be as simple as two computers, and because of this I can see some smaller Windows shops looking to leverage cluster technology taking advantage of this. It most likely will not perform as well as Linux, but they will make it easy to do.
Plus, for all of you mentioning XP: this system is going to be a version of Windows Server 2003, and there are some differences between the two.
“Well, I don’t know who’ll buy. Nobody in the market for one of these setups will touch Windows at all, or possibly even x86 hardware. The second half of 2005 will already be too late as well.”
There are many Xeon and Opteron systems listed on the Top500 supercomputer list site (http://www.top500.org/list/2004/06/)
and last time I checked those were x86 hardware.
MS has got some of the best computer scientists money can buy. Let’s give them a chance to prove themselves. They’ve been written off on servers and they came back to bite UNIX’s a**. They were written off for cell phones and they’ve come back with a vengeance and will possibly beat Linux and Symbian (check Motorola’s new cellphone). They were written off for handhelds and Pocket PCs are beating the pants off Palm.
I wouldn’t count them out of the race. You Linux cheerleaders can keep talking about how cool it is to have the source, but it doesn’t matter in the long run. In the long run, it’s the CS PhDs who will decide what really matters. If all they want is the source code to KCalc or the eepro network card, they are in the wrong line of work. If Windows-SC (SC = Super Computing edition 2006) does the job for the NOAA in predicting weather patterns better than Linux, guess what: NOAA couldn’t care a rat’s petootie about open source. In supercomputing, money doesn’t matter.
Call that closed-minded, but the fact is Linux is simply superior. The 2.6 kernel offers O(1) performance for many tasks, and the filesystems (ext2/3, Reiser, and XFS) combined with the anticipatory scheduler (default in 2.6) leave FAT and NTFS in the dust, even with date-stamping disabled in NTFS.
Please don’t talk about stuff you have no clue about. NT has had an O(1) scheduler since NT 4. Benchmarks of ext2/3/Reiser vs. NTFS go both ways; there isn’t one that leaves the other in the dust.
“I wouldn’t count them out of the race. You Linux cheerleaders can keep talking about how cool it is to have the source, but it doesn’t matter in the long run. In the long run, it’s the CS PhDs who will decide what really matters.”
For all you say about the source code not mattering, its benefits are very important for customisation, which is precisely what these clusters thrive upon. Add to that royalty-free additions and tweaking and you’ve got a killer there.
Funniest thing: a few days ago I had to install XP for someone, and it got infected by 3 viruses the first time I connected to the internet, before I could get to the Windows Update site!
I use AutoPatcher (3rd party), with some extras that I added to the ISO myself, to protect against something like that when you connect to the internet for the first time. AutoPatcher is very nice; you should check it out.
http://www.autopatcher.com/
“Please don’t talk about stuff you have no clue about. NT has had an O(1) scheduler since NT 4. Benchmarks of ext2/3/Reiser vs. NTFS go both ways; there isn’t one that leaves the other in the dust.”
Is that so? Any links to the O(1) scheduler of NT? ext3 has been proven to be a reliable and robust filesystem, better than NTFS at least in its earlier incarnations. Not sure about the latest stuff.
The reason Microsoft doesn’t go around boasting about an O(1) scheduler is that it would be absolutely silly in this day and age to have anything else! All state-of-the-art OSes (and I believe even the Win9x series, though I’m not 100% sure about this) have had O(1) schedulers for ages. Linux was just catching up.
“Funniest thing: a few days ago I had to install XP for someone, and it got infected by 3 viruses the first time I connected to the internet, before I could get to the Windows Update site!”
Well, Windows itself should not be directly connected to the internet, but sit behind a router and firewall at minimum. Also, the first thing to set up is virus-scanning software on the machine, before connecting, to prevent the exact scenario you speak of.
“The reason Microsoft doesn’t go around boasting about an O(1) scheduler is that it would be absolutely silly in this day and age to have anything else! All state-of-the-art OSes (and I believe even the Win9x series, though I’m not 100% sure about this) have had O(1) schedulers for ages. Linux was just catching up.”
That’s definitely wrong. Provide links to the scheduler, by the way. For example, FreeBSD doesn’t want to be O(1); they have taken a different approach. So don’t pull the state-of-the-art stuff here.
bsdrocks – Thanks man, I’ll be checking it out.
drill sgt – why, aren’t you the computer technician!? Guess what: the firewall was ON, antivirus was installed, but in order to get the new antivirus definitions you must connect to the internet, in case you didn’t know that. Any other “advice”?
To help thwart yet another one of “Anonymous’ (IP: 61.95.184.—)” attempts at spreading lies about anything and everything that’s not GNU or Linux, read the following link:
http://www.chesapeake.net/~jroberson/ULE.pdf
MS has got some of the best computer scientists money can buy.
That’s funny, I was reading an article the other day about how MS was outsourcing to India to save money. Maybe at one point in time, years back, they paid for the best. Now they are more concerned with saving a buck and going with the cheapest contractors that can be found abroad.
“drill sgt – why, aren’t you the computer technician!? Guess what: the firewall was ON, antivirus was installed, but in order to get the new antivirus definitions you must connect to the internet, in case you didn’t know that. Any other “advice”?”
Actually I am, thanks; have been for years. I would suggest getting a new firewall then, as yours let RPC through. The default WinXP firewall is not what I was talking about, but rather something along the lines of a Linksys router or some such, something that any home user like yourself anywhere in the world can generally get hold of cheaply.
Regards.
“To help thwart yet another one of “Anonymous’ (IP: 61.95.184.—)” attempts at spreading lies about anything and everything that’s not GNU or Linux, read the following link: http://www.chesapeake.net/~jroberson/ULE.pdf”
Please bother to read the link you just gave. It clearly states the O(n) nature of the ULE scheduler. Now where will you hide?
To elaborate on the difference in approaches: Linux’s (Ingo’s) O(1) scheduler is based on 1:1, while FreeBSD, which itself took some ideas from it, is using the M:N approach, so claiming that this is old stuff is pretty stupid.
Wow, after reading this article Linux looks even more like an ancient piece of technology: http://ryangregg.com/archive/2004/05/24/871.aspx
Through the base release of 2.4, Linux was only cooperatively preemptible. There are well-defined safe places where a thread running in the kernel can be preempted. The kernel is preemptible in v2.4 patches, and in v2.6.
Windows NT has always been preemptible.
_________________
Linux 2.4 introduced per-CPU kernel memory buffers.
Windows had these in an NT Service Pack in 1997.
_________________
The 2.4 scheduler is O(n). If there are 10 active tasks, it scans 10 of them in a list in order to decide which should execute next. This means long scans and long durations under the scheduler lock.
Linux 2.4 with patches from Ingo Molnar created a O(1) scheduler, using ordered lists by priority. Has per-CPU ready queues where the tasks are presorted.
Windows NT has always had O(1) scheduler based on pre-sorted thread priority queues.
__________________
2.2 only supported limited async I/O.
2.6 adds asynchronous I/O for direct disk access; an AIO model was introduced with efficient management of async I/O. Also added the alternate epoll method.
Windows I/O is inherently asynchronous. Windows has had completion ports since NT 3.5. More advanced form of AIO.
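To make the O(n)-versus-O(1) scheduler comparison above concrete, here is a minimal sketch of the general technique (an illustration only, not NT or Linux source): tasks sit in per-priority lists and a bitmap records which priorities are non-empty, so picking the next task is one find-first-set plus a list pop, no matter how many tasks are runnable. The names and the 32-level priority range are invented for the example.

/* Sketch of an O(1) pick: per-priority run lists plus a bitmap of
 * non-empty priorities. Illustration only, not NT or Linux source. */
#include <stdio.h>
#include <stdint.h>

#define NPRIO 32                      /* priority levels 0 (highest) .. 31 */

struct task {
    const char *name;
    int prio;
    struct task *next;
};

struct runqueue {
    uint32_t bitmap;                  /* bit i set => queue[i] is non-empty */
    struct task *queue[NPRIO];
};

static void enqueue(struct runqueue *rq, struct task *t)
{
    t->next = rq->queue[t->prio];
    rq->queue[t->prio] = t;
    rq->bitmap |= 1u << t->prio;
}

static struct task *pick_next(struct runqueue *rq)
{
    if (rq->bitmap == 0)
        return NULL;                              /* nothing runnable */
    int prio = __builtin_ctz(rq->bitmap);         /* GCC/Clang builtin: lowest set bit */
    struct task *t = rq->queue[prio];
    rq->queue[prio] = t->next;
    if (rq->queue[prio] == NULL)
        rq->bitmap &= ~(1u << prio);
    return t;                                     /* cost independent of task count */
}

int main(void)
{
    struct runqueue rq = {0};
    struct task a = {"editor", 5, NULL}, b = {"compiler", 2, NULL}, c = {"daemon", 9, NULL};
    enqueue(&rq, &a); enqueue(&rq, &b); enqueue(&rq, &c);
    for (struct task *t; (t = pick_next(&rq)) != NULL; )
        printf("run %s (prio %d)\n", t->name, t->prio);
    return 0;
}

An O(n) scheduler, by contrast, walks the whole runnable list on every decision, which is the 2.4 overhead the comparison above describes.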
“O(1) Schedulers and other stuff”
Windows NT has always been preemptible.
—
that was kernel-level preemptibility, which 2.6 added.
“Windows had these in an NT Service Pack in 1997.”
again, architecture-specific. Linux is more diverse.
“Windows NT has always had O(1) scheduler based on pre-sorted thread priority queues. ”
it’s not 1:1, and user space clogs it.
“Windows I/O is inherently asynchronous. Windows has had completion ports since NT 3.5. More advanced form of AIO.”
read Windows magazine. this only applies to contiguous networks.
From your examples, it appears that Linux 2.6 kernel now has every feature Windows does. Took a while, but they caught up in the end.
Now, if Microsoft can solve their thread and process creation problems, they may even be able to catch up with Linux.
http://www-106.ibm.com/developerworks/linux/library/l-rt7/?Open&t=g…
“From your examples, it appears that Linux 2.6 kernel now has every feature Windows does. Took a while, but they caught up in the end”
not yet. where is NUMA, or UML, or various other stuff?
The place I work at runs Win 2003 Small Business Edition, and so far the reliability of the system has been a reboot a month… as the system slows and services become unresponsive.
So I wonder, do supercomputers come with a red reboot button?
I am not kidding… do they?
NUMA? In 2.6, and not just on i386… I think there is SGI and IBM big-iron support too…
http://www.linuxgazette.com/node/view/8843
http://kerneltrap.org/node/view/3178
(the lse page is a little outdated)
UML? Do you mean User Mode Linux? I guess that’s probably supported under Linux, but I can’t be sure.
For the people complaining about having to patch Windows boxes before connecting them to the net: just enable the firewall and then connect to the Windows Update site.
Not that I am defending them, but try using your head for a change.
[Funniest thing: a few days ago I had to install XP for someone, and it got infected by 3 viruses the first time I connected to the internet, before I could get to the Windows Update site!]
I use AutoPatcher (3rd party), with some extras that I added to the ISO myself, to protect against something like that when you connect to the internet for the first time.
All you need to do is turn on XP’s built-in firewall before you connect to the ’net. Then go to Windows Update and patch your machine appropriately.
As I stated earlier, I think this has more to do with getting involved in clustering and less to do with supercomputers. Other articles I have read about this lead me to believe that it is a play to break into the cluster market.
If there are Windows shops that want to leverage the power of clustering, they will now have the choice of a platform that they are familiar with. A cluster can be as simple as two servers. While a Windows cluster will most likely not perform as well as Linux, Microsoft will bring a set of tools making it easy to port and write apps to run in a cluster environment.
This offers some of their smaller customers with no Linux/UNIX experience the opportunity to leverage this kind of technology. Of course they most likely hope that someone will build a true supercomputer, but I think the idea is more to jump on the cluster bandwagon.
When UNIX was king, Microsoft was trying to push their Datacenter Server, which had to be purchased and installed by the vendor. That was in response to an attempt to start cutting into the high-end server market. The trend now is not to use high-end servers, but instead to use clusters of inexpensive servers. Why then should anyone be surprised that Microsoft would adopt this strategy in an attempt to head off what is now their biggest threat in the datacenter/server rooms?
/off-topic
Meanwhile, you MS fanboys should be doing your part to enhance Windows’ security and convince more people to use Linux, since – according to your logic – increased Linux market share will mean that Windows will be less of a target for hackers and malware writers.
Buddy:
http://www.securitytracker.com/alerts/2004/Jun/1010566.html
http://www.securitytracker.com/alerts/2004/Jun/1010531.html
http://www.securitytracker.com/alerts/2004/Jun/1010512.html
http://www.securitytracker.com/alerts/2004/Jun/1010501.html
http://www.securitytracker.com/alerts/2004/Jun/1010462.html
http://www.securitytracker.com/alerts/2004/Jun/1010461.html
http://www.securitytracker.com/alerts/2004/Jun/1010449.html
http://www.securitytracker.com/alerts/2004/Jun/1010438.html
http://www.securitytracker.com/alerts/2004/Jun/1010435.html
http://www.securitytracker.com/alerts/2004/Jun/1010420.html
What’s common? All of these have to do with Linux and open source.
Here are the Microsoft-only ones:
http://www.securitytracker.com/alerts/2004/Jun/1010550.html
http://www.securitytracker.com/alerts/2004/Jun/1010491.html
http://www.securitytracker.com/alerts/2004/Jun/1010482.html
http://www.securitytracker.com/alerts/2004/Jun/1010427.html
Nuff said.
The number of bugs is not important; what’s important is the severity. I am pretty sure that if you take a closer look you will find the bugs in Windows more severe.
So don’t make a fool of yourself with your own links.
You mean like how Win3.1 users waited for Win95 only to find out it sucks, like Win95 users waiting for 98 only to find out it sucks, like Win98 users waiting for 98SE only to find out it sucks, like Win98SE users waiting for WinME only to find out it sucks, like WinME users waiting for NT only to find out it sucks, like WinNT users waiting for Win2k only to find out it sucks, like Win2k users waiting for WinXP only to find out it sucks, like WinXP users waiting for Win2k3 only to find out it sucks… and now waiting for Longhorn.
Hey, there is a pattern there!
Indeed, the pattern is you think Windows sucks. I sincerely doubt anything Microsoft does is going to change that.
I think there should be a filter to mod down any posts with those two words at the end. They’re invariably trollish, subrational posts. In such a contentious and emotional (sadly) argument, there is never a single piece of evidence that will convince anyone of anything relating to the superiority of Windows or Linux over the other. Why? Because they’re both good for different tasks. It’s like saying, “I like oranges more than apples because of XYZ and that’s why everyone should eat oranges.” It’s all just too funny, sad, and ridiculous.
First, the Linux list was covering everything shipped with Linux as well as the kernel. Second, as someone else pointed out, most of these would be rated “minor” on the MS site. Third, you were careful to pick a month where Windows just happened to have fewer listings than ALL of Linux and its apps. Of course, you don’t dare go back more than a couple of weeks or you hit the MAJOR security breaches in Windows and make it look bad. Typical astroturfing. Pick and choose the data presented to minimize the negative impact on Windows. “See? In these two particular weeks, Windows was better!” How about the other weeks? Don’t want to look at them, huh?
It’s pretty interesting to see how complicated these issues really are. It’s very unclear what O() class MS’s scheduler runs in.
Here’s an interesting article I found on this: http://www.i.u-tokyo.ac.jp/ss/lecture/new-documents/Lectures/03-Thr…
Note especially the diagram on page 10 and the text on pages 11 and 12. Clearly, in most cases the scheduler does indeed run in O(1) some of the time by picking off the first process in the highest-priority queue, but the state diagram is so complicated that you really can’t be sure all of those steps are O(1). Interestingly enough, the document says on page 9 that “there is no formal scheduler loop.” Perhaps this means that it is too convoluted to really make judgments about its big-O class.
Elsewhere, I read that MS tests its scheduling algorithms by comparing throughput on SQL server. I suppose this is the only way to really test such a complicated system.
Did you even read the links you provided? All the Linux kernel ones are local vulnerabilities, and all the remote ones do not concern the OS at all, but rather Linux apps (Apache, smtp, cpanel).
Meanwhile, all of the links you provide for Windows are for Microsoft products.
Now, answer this simple question: how much has malware for Windows cost in millions of dollars last year? Now, compare that amount to that due to Linux malware.
Nuff said indeed.
Did you even read the links you provided? All the Linux kernel ones are local vulnerabilities, and all the remote ones do not concern the OS at all, but rather Linux apps (Apache, smtp, cpanel).
My point exactly! It’s Linux AND open source.
You guys should just admit you’ve got problems and fix them first. Did I miss the note about “it’s not cool to bash Linux”?
How can you compare just Linux kernel vulns to everything MS? There are thousands of OSS projects that make up your average “Linux system” and you are comparing security vulns from only one of them against MS windows?
How is it that the pro-MS people are the ones astroturfing again?
Any time someone criticizes Linux for something, all the OSS advocates pick one distro that is not affected by that specific criticism and say it is not valid, even if the same distro has many more flaws.
But it is the pro-MS side that astroturfs huh?
And to elaborate: if there is one thing Linux is the king of, it is, without a doubt, blame shifting. The one skill that all pro-Linux advocates seem to be experts in.
I can see I touched a nerve with the MS apologists.
My point exactly! It’s Linux AND open source.
Apache, smtp and cpanel are not part of Linux. They are programs that run on Linux (among other systems). If there were a flaw in a commercial network app for Windows made by a third party, would you consider it fair if someone said it was an OS flaw? Of course not.
If you want to compare apples with apples, and count every open-source vulnerability as a strike against Linux, then you have to count every vulnerability in Windows proprietary software as a strike against Windows. Otherwise you’re just spreading FUD – and spreading FUD about Linux security is part of MS’s newest offensive against Linux, as the Get the Facts roadshow seems to indicate.
How can you compare just Linux kernel vulns to everything MS? There are thousands of OSS projects that make up your average “Linux system” and you are comparing security vulns from only one of them against MS windows?
See above. If you want to compare OS vulnerabilities, you’ve got to compare OS vulnerabilities. Apache is not part of the operating system, nor is smtp or cpanel.
How is it that the pro-MS people are the ones astroturfing again?
I don’t think you really know what astroturfing means. Check it out.
In the meantime, the short answer is that MS apologists conveniently lump in all non-OS vulnerabilities when criticizing Linux security, i.e. they compare apples and oranges, which is misleading, and thus FUD.
Any time someone criticizes Linux for something, all the OSS advocates pick one distro that is not affected by that specific criticism and say it is not valid, even if the same distro has many more flaws.
This is irrelevant. As I stated above, you cannot compare a Linux distro to a barebones Windows install. Compare OS flaws with OS flaws, and then perhaps you’ll have some credibility. Otherwise, you’re just another MS marketroid.
But it is the pro-MS side that astroturfs huh?
You don’t know what astroturfing means. Astroturfing was coined after MS was found to have set up fake “grassroots” movements during the monopoly trial. Similarly, you can bet that the most popular IT web sites are visited daily by paid MS employees whose sole job is to post pro-MS messages on those sites’ forums, to create the impression of popular support for the convicted monopolist.
Before you call for tinfoil hats, consider that MS has done this type of thing before, and they didn’t stand to lose as much then as they do now. It’s only natural that they get involved in such “guerilla marketing.” The question should rather be: why wouldn’t MS step up the astroturfing – after all, they have a duty to increase profits for their shareholders, and Linux is directly threatening those profits.
And to elaborate: if there is one thing Linux is the king of, it is, without a doubt, blame shifting. The one skill that all pro-Linux advocates seem to be experts in.
Not blame shifting. Reality checking. Comparing apples to oranges is worthless, yet this is what you guys are defending. That, and a convicted monopolist who has shown numerous times in the past that it is not above unethical behavior to reach its objectives.
Meanwhile, you two happy campers have yet to answer my previous question. It’s a simple question, really: How much has Windows malware cost businesses last year, and how much has Linux malware cost?
“Now, answer this simple question: how much has malware for Windows cost in millions of dollars last year? Now, compare that amount to that due to Linux malware.”
Compare the number of users, OK?
Second half of 2005? Shouldn’t they be working on Longhorn or something, the OS we all have to wait for till 2006?! Another waste of time and resources.
Now, answer this simple question: how much has malware for Windows cost in millions of dollars last year? Now, compare that amount to that due to Linux malware.
A userbase of >90% vs. a userbase of ~1% is a bit different, isn’t it? Are flawed comparisons something you can pick up in Linux by looking for the manpage of AdvocatingLinux?
–> check again – http://www.netcraft.com
Personally, for me the cost for my 15 Linux boxes is near zero (5 of which are desktops, 2 of them also servers); can’t say the same for the time I spend blocking Windows worms and viruses… hundreds of MB of traffic blocked daily, which costs a lot here.
A friend of mine has gone MAD supporting ~30 Win boxes…
The fact is that Windows does not allow files that are in use to be replaced.
That is one of the reasons Windows needs to be rebooted so often. Whenever you want to replace a DLL that is in use, a reboot is needed. Usually DLLs for device drivers are in use when Windows is active, and therefore you need a reboot so Windows can first replace the DLL early in the boot process.
Now if you want to do some supercomputing, you’re talking multiple concurrent users running multiple concurrent programs at the same time, meaning a lot of program files and DLLs in use all the time.
You’re gonna get a nightmare if you want to upgrade programs, because you will have to kick everyone off in order to be sure no one is using program or DLL X when upgrading.
The MS solution to the problem so far is to dump the file to a temp directory and make a mark in the registry, so that on the next reboot the temporary file is copied over the file it is supposed to replace.
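For the curious, the mechanism described above is exposed through the Win32 MoveFileEx call with the MOVEFILE_DELAY_UNTIL_REBOOT flag, which records the pending rename in the PendingFileRenameOperations registry value for the session manager to apply early in the next boot. A minimal sketch, with made-up paths and assuming administrator rights:

/* Sketch of the "dump to temp and mark the registry" mechanism: queue an
 * in-use file for replacement at the next reboot. Paths are hypothetical. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *staged = "C:\\Temp\\newdriver.dll";                      /* freshly copied update */
    const char *in_use = "C:\\Windows\\System32\\drivers\\somedriver.dll"; /* currently loaded DLL */

    /* The rename is performed early in the next boot, before the old DLL
     * is loaded again. Needs to run with administrator rights. */
    if (!MoveFileExA(staged, in_use,
                     MOVEFILE_REPLACE_EXISTING | MOVEFILE_DELAY_UNTIL_REBOOT)) {
        fprintf(stderr, "MoveFileEx failed: %lu\n", GetLastError());
        return 1;
    }
    puts("Replacement queued; it takes effect on the next reboot.");
    return 0;
}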
I think MS has Longhorn in mind with this move, and it all sort of makes sense.
Longhorn will require hardware that most cannot afford, but in the recent past I have read that MS has plans of going to server-based computing for the masses.
They will supply the hardware and software, and you will register, pay a user fee, and MS will control security, stability, and all the other negatives.
The only thing the user will need is some kind of modem or maybe an Xbox or similar device to communicate with the mother ship.
Another stab at world domination.
That’s what I think.
My suspicion is that this is designed to run conventional windows software on small clusters. The target market is probably SMEs etc. More likely for business databases and video editing than weather modelling.
why, aren’t you the computer technician!? Guess what: the firewall was ON, antivirus was installed, but in order to get the new antivirus definitions you must connect to the internet, in case you didn’t know that. Any other “advice”?
Try a hardware firewall/router next time. It’s honestly the best way to fly, and they can be had for as cheap as $40.
I admit that you can’t compare security vulnerabilities for every project that ever called itself OSS to Windows vulns and be fair. But if it is a standard package with most Linux distros and it has a security vuln, then it should be counted as a “Linux security vulnerability” just the same as IIS vulns count against MS.
You could say, well, you can’t count that package because Slackware and Gentoo don’t use it, but then we are back to that blame-shifting thing again.
I work with Windows, Linux, and Solaris servers all the time. Our Linux boxes get owned just the same as the Windows ones, but the difference is that when one of the Windows boxes gets owned someone has to remove the worm, while when a Linux box gets owned it usually ends up being a warez dump and usually ends up being reloaded.
Many of the Windows security problems are worked around in SP2; now it will be only Linux machines with useless services in listening state on default installs.
We have also not upgraded any of the 2K boxes to 2K3, which has much more secure defaults.
I really like MS apps. The games are very cool and fun, with all that color and audio. MS Office is a cool office suite. Cool config and setup tools. Plenty of third-party apps.
But… not all is as good… let’s see: Windows is good for, and only for, home/office desktop users. Read it again, all you Windows advocates: WINDOWS IS GOOD ONLY FOR HOME/OFFICE DESKTOP USERS. I really think that in servers and supercomputing, Microsoft has no business being there.
I have some questions for all the pro-Microsoft guys here:
1. Are you a systems or network administrator? (Windows developers don’t count for this question.)
2. Have any of you used or administered both a Windows AND a Unix or Linux server?
3. Do you understand COMPLETELY the concept of open source?
4. Did you know that the open source concept does not apply only to Linux?
5. Have you ever heard of Berkeley’s Unix (BSD)?
6. Did you know that Java is open source?
7. Did you know that the original TCP/IP stack (or internet) implementation for networks is open source?
8. Have you ever used a SPARC or UltraSPARC workstation?
9. Have you ever heard of PA-RISC? What about OpenBoot? SCSI? SPARC? hot swap? SMP? Alpha? (At least five.)
10. Do you know an ISP administrator, or a systems/network administrator who supports over 50 users?
Only if you answered yes to seven or more questions:
Do you really think Microsoft has any business in supercomputing?
BSDero
Compare the number of users, OK?
No problem: you can proportionally adjust the cost of Windows and Linux malware according to market share. Guess what, Windows malware still costs more, by an order of magnitude.
A userbase of >90% vs. a userbase of ~1% is a bit different, isn’t it?
Actually, according to IDC, Linux’s market share is at about 2.5 to 3%, not 1%. Even Paul Thurrott agrees with those figures. Get your numbers straight.
Are flawed comparisons something you can pick up in Linux by looking for the manpage of AdvocatingLinux?
As I said above, you’re welcome to adjust the costs from malware to account for market share (i.e. multiply the Linux malware costs by 30); it won’t change the results much.
But since you’ve brought up the “proportion” argument, then I guess it means that you agree with my original post, that in order to increase Windows security, MS advocates should encourage people to use Linux more. Indeed, the more Linux market share grows, the less Windows will be a target of malware and hacks!
Thanks for proving my point.
But if it is a standard package with most Linux distros and it has a security vuln, then it should be counted as a “Linux security vulnerability” just the same as IIS vulns count against MS.
If a typical Windows install also has that equivalent of that package installed, sure. You also should consider the severity of bugs, not just their number.
Many of the Windows security problems are worked around in SP2; now it will be only Linux machines with useless services in listening state on default installs.
Services in listening state by default? Such as?
Anyway, you’re talking about hacking only. What about malware? That was the main point: the cost of Windows malware vs. Linux malware (adjusted for market share, even…)
“Secure? That’s a word that’s nonexistent in the Microsoft Dictionary! I understand that the version of Windows they’re working on for HPC might not share many things with XP, but Microsoft is NOT capable of making a secure OS, period.”
The question is, what would be more secure? Linux? I don’t think so. Linux certainly has a much better track record of holding back viruses, worms and other break-ins than Microsoft has ever managed. But this is a very narrow and technical definition of security, which I would rather call stability than security.
To be secure, you need stuff that keeps secret information secret, where the actions of users are traceable, etc.
E.g. if you get an encrypted mail containing an attachment, that attachment should remain encrypted once you store it on your hard drive. And if you open that attachment you should not be able to cut and paste its contents into other documents that are readable by people other than those allowed to read the attachment in the first place.
These are things Microsoft can do in their TCPA platform.
Unfortunately that platform is so secure that it tends to lock out the owner as well, and I would certainly like to see Linux develop in this direction, but without the owner-lockout feature. SELinux is a step in the right direction, but more applications need to support it.
But if Microsoft some day in the future manage to get their stability issues under control, they may be very competitive in government and military markets.
That is one of the reasons Windows needs to be rebooted so often. Whenever you want to replace a DLL that is in use, a reboot is needed. Usually DLLs for device drivers are in use when Windows is active, and therefore you need a reboot so Windows can first replace the DLL early in the boot process.
Or, alternatively, you just shut down whatever is using the file (thus releasing it) and then replace it.
Note that even on platforms that do allow files to be ripped out and replaced from underneath running programs, those programs will need to be restarted to use the new versions. Not to mention that a program expecting version x.y that is suddenly presented – while running – with version x.z can do some, uh, “interesting” things.
You’re gonna get a nightmare if you want to upgrade programs, because you will have to kick everyone off in order to be sure no one is using program or DLL X when upgrading.
This is good practice regardless. Replacing files out from underneath running binaries is a very bad idea.
Of course, you don’t need to kick everyone off, either – you just need to make sure they aren’t using whatever it is being upgraded. Not that software upgrades/changes should be happening outside of scheduled maintenance periods with an exceptionally good reason _anyway_.
“You’re gonna get a nightmare if you want to upgrade programs, because you will have to kick everyone off in order to be sure no one is using program or DLL X when upgrading.”
I would suspect the process will be as simple as remove the server from the cluster, patch it, reboot, and add back to cluster.
I would suspect the process will be as simple as remove the server from the cluster, patch it, reboot, and add back to cluster.
Unless you’re patching the kernel, why should you need to reboot? If Windows is to make inroads into HPC, it will have to find ways to avoid needless rebooting.
Longhorn will require hardware that most cannot afford, but in the recent past I have read that MS has plans of going to server-based computing for the masses.
What stupidity is this? Longhorn is going to run fine on any PC you can buy *right now*.
I am not saying a reboot is a good thing, I was simply addressing how they would handle not having to take the entire system down for patching.
But… not all is as good… let’s see: Windows is good for, and only for, home/office desktop users. Read it again, all you Windows advocates: WINDOWS IS GOOD ONLY FOR HOME/OFFICE DESKTOP USERS. I really think that in servers and supercomputing, Microsoft has no business being there.
There are millions of businesses relying on Windows servers for things like Exchange and Active Directory to run their operations that would disagree with you.
I have some questions for all the pro-Microsoft guys here:
1. Are you a systems or network administrator? (Windows developers don’t count for this question.)
2. Have any of you used or administered both a Windows AND a Unix or Linux server?
3. Do you understand COMPLETELY the concept of open source?
4. Did you know that the open source concept does not apply only to Linux?
5. Have you ever heard of Berkeley’s Unix (BSD)?
6. Did you know that Java is open source?
7. Did you know that the original TCP/IP stack (or internet) implementation for networks is open source?
8. Have you ever used a SPARC or UltraSPARC workstation?
9. Have you ever heard of PA-RISC? What about OpenBoot? SCSI? SPARC? hot swap? SMP? Alpha? (At least five.)
10. Do you know an ISP administrator, or a systems/network administrator who supports over 50 users?
Only if you answered yes to seven or more questions:
I answer yes to all of them. Oh, except #6 – Java isn’t open source (yet, according to Sun).
Do you really think Microsoft has any business in supercomputing?
Microsoft has a place anywhere they can sell a product and meet their customers’ needs.
But since you’ve brought up the “proportion” argument, then I guess it means that you agree with my original post, that in order to increase Windows security, MS advocates should encourage people to use Linux more. Indeed, the more Linux market share grows, the less Windows will be a target of malware and hacks!
Continual repetition of this ridiculous and trollish line of reasoning – which I’ve little doubt you don’t believe at all – won’t make it true.
Any reasonably conceivable (in the short to medium term of 3–7 years) shift in market share won’t make Windows less targeted; it will just make other platforms more targeted. Nothing short of government legislation is going to take Windows’ market share under the 50% mark in the foreseeable future, and I’m sure we all know how popular *that* sort of government action is in the US.
In any event, it wouldn’t affect the security of Windows, just its exposure, and it wouldn’t have any effect on the biggest security problem – users.
Unless you’re patching the kernel, why should you need to reboot?
System wide low level libraries, drivers. Anything that requires a restart of a service or program the rest of the system depends on.
Reasonably regular reboots, in maintenance periods, are a good idea anyway to make sure the machine actually still boots.
If Windows is to make inroads into HPC, it will have to find ways to avoid needless rebooting.
Which, largely, it already does. The original poster makes a good point – in clustered HPC environments, rebooting individual machines simply isn’t an issue. Heck, Google doesn’t even worry about machines when they *break*.
Continual repetition of this ridiculous and trollish line of reasoning – which I’ve little doubt you don’t believe at all – won’t make it true.
Why wouldn’t I believe it? Anti-Linux posters of all stripes keep reminding me that the only reason Linux has better security is that it’s got a smaller market share. I’m merely following their logic through.
Any reasonably conceivable (in the short to medium term of 3 – 7 years) shift in marketshare won’t make Windows less targeted, it will just make other platforms more targeted.
Well, these platforms would still be less targeted than Windows, so there would be a net gain in computer security. Then, when other OSes have a reasonable market share (always following the argument), Windows users would get a break. Always following the “popularity” argument through, of course.
Nothing short government legislation is going to take Windows marketshare under the 50% mark in the forseeable future and I’m sure we all know how popular *that* sort of government action is in the US.
I hear Linux is quite popular in China and Japan, and that an increasing number of governments are considering switching to Linux. Market share isn’t limited to the US, you know.
I’m only trying to help Windows users with their security woes. I feel your pain.
In any event, it wouldn’t affect the security of Windows, just its exposure
So what you’re saying is that the “popularity” argument is in fact not as important as all the anti-Linux advocates claim constantly? Who’da thunk it…
and it wouldn’t have any affect in the biggest security problem – users.
Nor would it have any effect on bad design decisions, such as making a file executable through its extension.
System wide low level libraries, drivers.
I can replace the vast majority of drivers and libraries on my system without rebooting.
Anything that requires a restart of a service or program the rest of the system depends on.
Again, there are very few services or programs in Linux that are required by the entire system. The only reasons why I reboot my system is when I upgrade my kernel, when I install new hardware, or when there’s a power failure.
Reasonably regular reboots, in maintenance periods, are a good idea anyway to make sure the machine actually still boots.
Yeah, Windows admins always tell me that. I don’t understand the logic behind it, though. It seems to me more of a superstitious thing than anything else.
Why wouldn’t I believe it?
Because you’re not that stupid.
Anti-Linux posters of all stripes keep reminding me that the only reason Linux has better security is that it’s got a smaller market share. I’m merely following their logic through.
Please do not try and equate “exploited less” with “more secure”. At best it’s a specious argument. Classic MacOS is “exploited less” than Linux, but I doubt anyone would try to argue it is more secure.
Well, these platforms would still be less targeted than Windows, so there would be a net gain in computer security. Then, when other OSes have a reasonable market share (always following the argument), Windows users would get a break.
Uh, no. Even if we were to assume that Windows was much less secure than the alternatives, reducing its marketshare wouldn’t have any meaningful effect unless it was reduced *dramatically* – like to down near the 10% – 20% mark.
Always following the “popularity” argument through, of course.
Your follow through needs some work.
I hear Linux is quite popular in China and Japan, […]
Ah, must be why all the spam comes from China.
[…] and that an increasing number of governments are considering switching to Linux. Market share isn’t limited to the US, you know.
True enough, but the US has such a large proportion of it and such massive influence over the whole industry, that any industry-changing events would have to involve the US one way or another.
So what you’re saying is that the “popularity” argument is in fact not as important as all the anti-Linux advocates claim constantly? Who’da thunk it…
No, I’m pointing out your creative interpretations of that argument are foolish.
Marketshare has a significant bearing on:
* How likely a platform is going to be targeted
* How quickly an exploit/worm/trojan/virus will spread, if at all
* How widespread any problems will be
* How damaging any problems will be
* How noticeable any problems will be
It has no bearing on how “secure” a platform is in an isolated, academic, absolutist sense that matters little outside a laboratory (or a discussion forum).
Nor would it have any effect on bad design decisions, such as making a file executable through its extension.
At best (or worst, depending on your perspective), a minor issue.
Incidentally, that’s one of those decisions driven by legacy support. Nor is it one that would be particularly difficult to change, save for all the applications it would break and end users it would frustrate.
I can replace the vast majority of drivers and libraries on my system without rebooting.
As you can on Windows, as well, if you know what you’re doing.
The reason Microsoft requires a reboot so often is that they don’t know what else might be on the machine that might require restarting. Safer to just reboot the thing.
Again, there are very few services or programs in Linux that are required by the entire system. The only reasons why I reboot my system is when I upgrade my kernel, when I install new hardware, or when there’s a power failure.
Of course, you have the knowledge to be able to do that. Most people don’t.
Yeah, Windows admins always tell me that. I don’t understand the logic behind it, though. It seems to me more of a superstitious thing than anything else.
Because it’s much better to find out a machine won’t reboot correctly in a maintenance period, than in production time.
Individual uptimes are mostly a wank (although I must admit I’ve bragged about a FreeBSD box with 1300+ days of uptime in my previous job). *Service availability* is what’s important.
As an OS developer (http://phosphuros.tk/ – needs updating) I have a rather unique perspective to all of this security talk.
First, the operating system should really only need to secure and protect against software running locally in a supercomputing environment. Any entity capable of affording a supercomputer, or even a few machines for clustering, can very easily afford a secured network to attach the setup to the web, simple enough.
That takes the battle to what the user does, leaving the rest for those nice security experts to handle instead of the OS.
So, all an operating system destined to run in a secured environment should be concerned about is the security of actions and controlling software locally.
Supercomputers and clusters will generally be used for a certain set of tasks, using only software that is needed for those purposes. As such, it would be very simple in Linux to *VERY SECURELY* hard-code a filter that only allows applications launching from a specific location, with a specific signature, and with an encrypted interfacing method for hook-up. This filter would be used to determine what is allowed to run, and anything else would cause a notification of an unauthorized process execution attempt.
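Such a filter could look roughly like the toy sketch below. The directory, the allow-list and the enforcement point are all invented for illustration; a real version would hook the kernel's exec path and verify a cryptographic signature rather than just a file name, which is elided here.

/* Toy sketch of an "only run known binaries from one place" policy.
 * Directory, allow-list and hook point are hypothetical; a real version
 * would also check a cryptographic signature, omitted here. */
#include <stdio.h>
#include <string.h>

static const char *trusted_prefix = "/opt/hpc/bin/";   /* assumed install location */
static const char *allow_list[] = { "solver", "scheduler", "collector" };

static int exec_allowed(const char *path)
{
    size_t plen = strlen(trusted_prefix);
    if (strncmp(path, trusted_prefix, plen) != 0)
        return 0;                                /* wrong directory: reject */
    const char *name = path + plen;
    for (size_t i = 0; i < sizeof allow_list / sizeof allow_list[0]; i++)
        if (strcmp(name, allow_list[i]) == 0)
            return 1;                            /* known binary: allow */
    return 0;                                    /* unknown binary: raise an alert */
}

int main(void)
{
    const char *attempts[] = { "/opt/hpc/bin/solver", "/tmp/evil" };
    for (size_t i = 0; i < 2; i++)
        printf("%-20s -> %s\n", attempts[i],
               exec_allowed(attempts[i]) ? "run" : "blocked, notify admin");
    return 0;
}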
The ability to hard-code this into the system is CRITICAL! Microsoft is not about to give out the code to this, as everyone would just use their base code and add entries, which leaves the possibility of a Microsoft failure/flaw causing unwanted execution of a naughty naughty (or possibly not even naughty) application. And it is very unlikely Microsoft would ever go that far.
One very important thing to remember is that in this market, being able to modify the operating system is a massive advantage. These people are not morons, and in no way technically challenged (if they were, they would not be working on supercomputers with possibly millions and millions invested behind their work). So adding a hard-coded filter, right smack in a locked file, on a locked-down supercomputer can, in theory, allow near 100% protection.
For extra protection, the software used for communicating with workstations/dumb terminals can do a few extra checks to authenticate any data requests, and the file system will have blocked ANY AND ALL data that should not be accessible in any form, by anyone. That is to say that when the workstation powers up and a valid user logs in, a date-service request is sent and the clocks are synced… perfectly. An identifying ID can be made based on the user ID, password, MD5s of those, a hardware hash, an MD5 of the hardware hash, all coupled into an algorithm based on the time for encryption, an encrypted time stamp, an MD5 for everything else, packaged together, and the whole package dated, signed, and checksummed.
Seems like a lot? Actually not really: the client machine would do the work and send that to the server, and the server’s load would determine whether it felt like doing the work itself or sending the data to a trusted peer with a lower workload to do the task and reply through the network to the client.
A lot of technologies are assumed here of course, but these people have this technology already, and are not about ready to throw it all out for some Microsoft (or any one else’s for that matter) product that doesn’t allow them to maintain the already working (and producing) setup.
If Microsoft ever makes an appearance in this sector, it will be on new stuff, which will be slow in acceptance because there are not that many entities entering into the super-computing business every year. Those who need it, have it. They can afford it, because they need it. They need it because they have so much money-making data and communications that a few million for a computer doesn’t matter. Just take Dell for instance, in 2000 they made $50mil *A*DAY* in internet sales. They need a huge setup… what are they using?
I actually *HATE* Linux for my purposes, and am disgusted with the Desktop push at this point. However, source code is pretty freakin vital in these places if they care at all about internal security as well as external security, and of course… overhead.
–The loon
Actually…
As I see it, if Windows’ market share were reduced, the malware/virus and other problems would actually increase.
Largest reason? Opportunistic virus/malware coders.
AIDS doesn’t kill you; it’s those other pesky little d00ds that take advantage of what AIDS actually does. We have already seen much of this: Microsoft has been showing cracks in its facade lately, and coincidentally virus outbreaks have escalated enormously.
Microsoft could actually rid the world of this problem rather quickly: require bugs, flaws and exploits to be reported to them as soon as they are found, with no data shared about them. Microsoft issues a patch, never revealing the technical details of the flaws, ESPECIALLY how to exploit them.
Now I know what y’all are thinking… geez, this guy is saying Microsoft should close up even more! Exactly, I am. But if Linux did it, it would be death, because they are already open. Microsoft is playing this game between open and closed… dangerous territory. If they were to release the source code they could very well see a super-fast flaw-turnover rate, similar to that of Linux and probably even better thanks to all of the corporate support and money.
If Microsoft were to keep the nature of the flaws secret and keep the code closed, they could have a secured OS in no time, making creating viruses and malware much more difficult. The effect will take time, though, because they have so many flaws public already.
Oh, and to the next person who says Windows has fewer flaws:
Every incarnation of Windows has claimed more and more bug fixes. Seems the programmers at M$ need to be monitored and made to do their job… not on a freaking schedule… but right!
That will never happen though; Microsoft doesn’t want that. WHY?! *MONEY* D’uh. Microsoft has to make money.
Linux does not have that problem… they don’t have to make money from every place they can to cover the costs of everything in the world… just their cost of living… you know, a couple hundred a year for some servers, and maybe a grand here and there for other costs… split that amongst a good 1,000,000+ enthusiasts seeing pop-up ads that pay the host half a cent or more each time… and you’ve got a winning combo.
Oh well, I’m rambling now.. nights
–The loon
No, I’m pointing out your creative interpretations of that argument are foolish.
There’s nothing creative about it. Of course, it depends on your definition of what security means.
It has no bearing on how “secure” a platform is in an isolated, academic, absolutionist sense that matters little outside a laboratory (or a discussion forum).
In other words, if you really consider “secure” in its stricter sense, then Windows is quite secure. But security isn’t just theoretical, and “secure” also has a much wider sense. There’s a practical element to it. It is perfectly acceptable to say that right now, in the real world, Linux is more secure than Windows, if only because there’s less malware for it.
When confronted with this argument, Windows advocates predictably bring up the popularity argument. They say Linux is less attacked because it’s less popular. That may be true – but the net effect is that Linux is less attacked. It is, as far as results are concerned, more secure.
From this statement, it follows that Windows would be less attacked if it were less popular, as the pool of malware writers would have to spend some time trying to exploit vulnerabilities in other OSes. You’ve disagreed with that, but you haven’t presented any credible argument as to why this wouldn’t be the case.
Uh, no. Even if we were to assume that Windows was much less secure than the alternatives, reducing its marketshare wouldn’t have any meaningful effect unless it was reduced *dramatically* – like to down near the 10% – 20% mark.
I’m curious to hear how you came to that number. Other than pulling it out of your @ss, of course.
Oh, and as to whether Windows is less secure than the alternatives… it depends on whether you consider IE to be part of the OS (I do, if only because of the level of integration MS has brought to it):
http://news.com.com/Corporate+Web+servers+infecting+visitors‘+PCs/2100-7349_3-5247187.html?tag=nefd.top
Giving control of my machine to organized crime by visiting a web page…no thanks!
The reason Microsoft requires a reboot so often is that they don’t know what else might be on the machine that might require restarting. Safer to just reboot the thing.
On Linux: type “init 1”, and at the single user prompt type “init 5”. If it’s not kernel stuff, pretty much everything else will have been restarted. You can customize this in your /etc/rcX.d folders.
The article is linked from OSNews:
http://www.osnews.com/comment.php?news_id=7479
A brand spanking new 12-million-euro supercomputer running Windows – just think of all the spam it could be hijacked to send! And what a generous gift to the script kiddies and virus writers!
This is ‘easy’
Microsoft could go into their API (if it were nice and clean, this would be easy), and change the name of every class and function for this version.
THEN, they would need to change the name of every OS feature, then change the name of every single last file, then change the name….
Why??
Well, if everything is changed, no other software in existence will run on it, which would be good for security. Then you just write the apps you want to run on it… and they keep the API close to their vest, sharing it only with those who are trusted to write software for the platform.
Of course, I am pretty certain that a lot of dorks will buy anything branded Windows because they believe it will run their antivirus software… heh… of course, if they used a custom/more obscure system (which most do) they just wouldn’t have to worry about the sometimes massive overhead that antivirus running on the machine would cost. Though I believe servers, regardless of OS, should always run an antivirus to help protect the rest of the world.
Of course, then a decision will need to be made on a scanning schedule, and on whether to just delete any infected files.
Oh well, as long as Windows is on every computer… nothing is secure.
–The loon
Regarding your statement:
What stupidity is this ? Longhorn is going to run fine on any PC you can buy *right now*.
Microsoft is expected to recommend that the “average” Longhorn PC feature a dual-core CPU running at 4 to 6GHz; a minimum of 2 gigs of RAM; up to a terabyte of storage; a 1 Gbit, built-in, Ethernet-wired port and an 802.11g wireless link; and a graphics processor that runs three times faster than those on the market today.
http://www.microsoft-watch.com/article2/0,1995,1581842,00.asp
Pretty Stupid, eh?
Please bother to read the link you just gave. It clearly states the O(n) nature of the ULE scheduler. Now where will you hide?
Followed by:
To elaborate on the difference in approaches: Linux’s (Ingo’s) O(1) scheduler is based on 1:1, while FreeBSD, which itself took some ideas from it, is using the M:N approach, so claiming that this is old stuff is pretty stupid.
You fucking tit, confusing a process scheduler with a threading model. Go away, stay off this site, do humanity a favor and die before reproducing, as our gene pool is screwed up enough as it is.