Microsoft has launched an effort to produce a version of Windows for high-performance computing, a move seen as a direct attack on a Linux stronghold.
OK, for some reason I just cannot see Windows on a supercomputer.
GridEngine needs to be ported to…
MS is always last to the party. Even Apple is in on this stuff.
Two things:
“One reason Windows has been slow to catch on is that Unix and Linux were bred to be administered remotely, a necessary feature for managing a cluster with dozens or hundreds of computers.
In Windows, “the notion of remote computing is significantly more difficult than in Unix,” Papadopoulos said. “Because Windows was born out of the desktop, (it is) deeply ingrained in the Microsoft culture that you have somebody sitting in front of the machine to do work.”
Now how do you overcome a culture?
And two, it mentioned the openness of the source code. Now how is MS going to overcome that?
I think in this endeavour, MS is going to find that it’s its own worst enemy, rather than Linux or even Unix.
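On the remote-administration point: on the Unix side, driving a whole cluster is just scripting. You’d normally do this in two lines of shell, which is rather the point, but here’s the same idea as a toy C program (hypothetical node names; assumes passwordless ssh keys to node00 through node03, and POSIX popen):

#include <stdio.h>

int main(void)
{
    char cmd[128];
    char line[256];

    /* ask each node for its uptime over ssh and print the replies */
    for (int i = 0; i < 4; i++) {
        snprintf(cmd, sizeof(cmd), "ssh node%02d uptime", i);
        FILE *p = popen(cmd, "r");
        if (!p)
            continue;
        while (fgets(line, sizeof(line), p))
            printf("node%02d: %s", i, line);
        pclose(p);
    }
    return 0;
}

Nobody is sitting in front of any of those machines, and that’s exactly the culture Papadopoulos is talking about.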
First, I agree that Microsoft has nice development tools IF you are writing something with a GUI. But what does that have to do with distributed high-performance computing? The point that Microsoft is distributing SQL Server with Yukon also seems irrelevant. Cray is not real popular as a database server.
If you really want to exploit PC’s for distributed computing, try coLinux … http://www.linuxworld.com/story/44466.htm
They need something that can run Longhorn 😉
No one is going to trust Microsoft in any kind of scenario like this. First, it is a perception thing, and second, people have found out that Windows really isn’t up to the job.
So far, programs written for the CLR and .Net aren’t as fast as those written for a specific machine, “but we see constant improvement in that,” Lifka added. Another area that needs work is security and easy patch installation, he said.
That’s a non-starter. Managed code and the CLR have become a decent target for many things, and I credit Microsoft and the Mono and DotGNU guys for some good work and thinking, but you need optimized code for this. Nothing less will do.
Another area that needs work is security and easy patch installation, he said.
Oh my God. In high performance, mission critical computing? Forget it, because you don’t have a clue here.
Not quite.
There’s a reason massively parallel clusters don’t run Java.
These people write in assembly, or in really, really tight Fortran or C.
They’d strip so much away they’d just have Linux without a DE.
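For anyone wondering what “really, really tight Fortran or C” looks like in practice, here’s a minimal sketch of a DAXPY-style kernel (y = a*x + y, straight out of the BLAS playbook), the sort of loop HPC codes live in. No objects, no managed runtime, nothing between the loop and the FPU:

/* y = a*x + y over n elements; a compiler can vectorize this freely */
void daxpy(long n, double a, const double *x, double *y)
{
    for (long i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}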
The only difference between the two would be price. That, and, well… if you’re going to be running a supercomputer, you’re going to want source access to any software on top of it, to tweak it for your specific problem (or solution). Linux is definitely the better choice unless Windows becomes heavily modularized in the future, y’know?
They need something that can run Longhorn 😉
Good one
The term “supercomputer” is trademarked by SGI. They first coined it when they developed NUMA. So Microsoft cannot officially make a supercomputer, and in the non-marketing sectors it’ll be known as the superunstable computer. Which freak runs Windows on a high-end computer which is supposed to be stable and easy to admin [by a competent person]? Those are 2 things Windows isn’t particularly good at. The Windows CLI sucks so much, it needs Cygwin to be anything [with freakin’ overhead]. OMG, Windows does not even have an SSH implementation!
I run Windows and Linux on my desktops, and do scientific computations on 128 CPU clusters (Linux). I cannot see Windows being useful for really big problems in physics and meteorology, etc., but people might use it for smaller projects in biology or chemistry perhaps, where one might need 4-8 CPUs. It might also work for financial calculations or in other problems where the parallelization doesn’t have to be so tight.
Someone who only knows Windows and needs to set up a small cluster might go for this. My prediction: Linux will be used for large Beowulf clusters and Windows will be used for COWs. (Yes, it really is called COWs: Clusters Of Workstations.)
Ha, ha.. I just can’t wait until Microsoft issues Fortran for .NET 😉 But seriously, I think that in order to make their product useful for supercomputing MS will have to either write it from scratch or remove so many things from the kernel that it will no longer be “Windows”.
Microsoft can make their own Linux distribution.. wouldn’t that be an idea?
They would have saved a lot of RnD money.. ;o)
Do you honestly think Microsoft is going to whip up some small utility that will chain together current Windows systems and call it a day? This is not going to be a stock version of Windows XP or Server 2003 hacked to run in parallel; it’s most likely going to be a highly optimized version of Longhorn Server.
It’s all very well to push OO, late-binding, data-structure-oriented languages as the be-all and end-all of computing, but if you actually want to maximise processor resources for *your* computational task, you’ll find FORTRAN is still heavily used and is often the only viable tool to attack this type of task.
C is more than fast enough for my tasks, but I would still balk at implementing compute-intensive, GUIless, primarily computational tasks in Java or god forbid .NET
Microsoft has lost this market, 100%. Nobody uses Windows for rendering, for HPC, for simulations, for anything requiring more resources than a single computer can offer.
Microsoft’s idea of ‘clustering’ is insurance against the inevitable and frequent failures they have designed their systems to undergo.
They seriously have such a lot of ground to make up against Linux in this regard technically (and they then have to face the fact that Linux is free and a HPC-oriented Windows probably isn’t) that I can’t really see the point in them even trying.
It’s just utterly unproductive. They are now finally starting to see that in these markets Linux has actually evolved into something they cannot even make a dent in with marketing, PR, and FUD.
Windows is not good for these tasks. It is architecturally unsuitable. If they want to play, they should take a POSIX-compliant OS like BSD UNIX and put a Distributed-Shared-Memory system on it, and a kernel that supports process and thread migration across an arbitrarily large cluster of machines while maintaining performance.
Of course, this is a metric f**k-ton easier said than done, and I doubt there is any money in it whatsoever for them.
To compete, they will have to come up with something better, technology-wise, and it will have to be really revolutionary, or else the entire technical/scientific computing market will see Windows swept away by a tide of Linux, flowing from the render farm, through the visualisation and simulation systems, to the workstations, to the desktops.
This is already happening bigtime in the film industry, and Microsoft has nothing in its arsenal to defend against it.
As another poster mentioned, Apple have positioned themselves well, with solid products based on commodity components, and a UNIX OS that is unquestionably superior in some (though not all) aspects to Linux. Sun/SGI/IBM/HP deserve to lose every one of their 1-2 CPU workstation sales to Apple, or to x86 boxes running Linux.
Obviously, this is a tiny market in comparison to the ‘Joe Average’s desktop’ market, but MS has to realise that, eventually, in line with another recently published article regarding HP and Dell, the ‘Innovators’ get their lunch eaten by the ‘Copiers’, and they have been so busy pushing their ‘Innovation’ theme that they have forgotten what that actually means.
People forget that by the time this comes to market (3 years minimum), Linux will be so customized for whatever tasks Microsoft wants to bring to the market that the MS product will be useless. This is what Linux is used for; this is what thousands of developers are working on. Anything less than 2,000 programmers won’t make a difference. I doubt some developers with the code to Longhorn can compete with the power of a 10,000-strong army of open source kernel developers, plus companies like SGI, IBM, HP and Sun working full time on whatever few goals Microsoft will release in its PowerPoint presentations years before it’s out. No way in history will this happen in any fashion. Let’s all wait and laugh at another Microsoft BOB destroying itself. Hell, maybe when Microsoft fails on this, they’ll leave the above-8-processor market alone for another 4-6 years.
Be careful for what you wish for…
http://www.salfordsoftware.co.uk/compilers/ftn95/
Hello, I would like a copy of HPC windows for a 1000 CPU system and 100 clients. Only five billion dollars? That is a very reasonable price to pay considering we won’t be encumbered with the silly source code.
I’m not saying this is likely to work, but it does have potential. Lemme ‘splain something:
NT was designed by Dave Cutler, who invented VMS and was hired away from DEC by Microsoft. VMS had good points and bad points. The bad points were that it didn’t have all the fun things like cheap processes, small interoperable tools and a culture of source-sharing that made Unix such a joy to use. The good points were that it was vastly more secure and stable than any Unix of its day. It felt like a prison, but an extremely well-designed prison.
VMS had things like clustering and TCP/IP way before Unix. But people hated it. Back in the day roughly 40% of DEC customers ran Unix on their VAXen even though it was unsupported.
Anyhow, when Dave went to Microsoft, he more or less cloned VMS to make the NT kernel. But changes were made to the abstract design to make it backwards compatible and improve things like visible UI responsiveness. Needless to say, this undermined things like stability and security.
NT is not a technically-challenged kernel. It’s a very good kernel with a lot of technically unsound hacks applied to it, because Steve ’n Bill, unlike Ken Olsen, value customer input.
So, MS has a corporate culture which has historically been averse to the kind of hard engineering decisions that would enable something like a supercomputer. But they certainly aren’t lacking the technology.
I couldn’t see anything about licensing in the article, but MS would have to make something very special if they were planning on per cpu licensing for this thing. Also, why would you call an OS Windows if you never expected it to run a GUI? Sometimes marketing goes too far.
> Ha, ha.. I just can’t wait until Microsoft issues Fortran for .NET 😉 But seriously, I think that in order to make their product useful for supercomputing MS will have to either write it from scratch or remove so many things from the kernel that it will no longer be “Windows”.
There is already a FORTRAN for .NET. We use Windows for almost all our computations. Reason? Because it is easier to get running. We have Linux machines, but they are used as our servers.
Windows XP Embedded “Enhanced for supercomputing”
Sort of like the words they add on to the end of Windows XP for AMD and Intel’s 64bit architectures.
Windows is a fine OS; it has just had too many things integrated into it. But remember the article talking about future Windows development? Paraphrasing, it said Windows will have a base OS, and then you could add other OS feature stacks on top, e.g. to create XP Media Center, Server, etc., which are all different from the Windows XP desktop stack. Others would also be able to put a language stack on top for different countries.
So this could possibly be a minimal NT OS.
Also, I remember reading that the NT design already had things in its architecture for remote admin and such, but they were removed due to a design decision. So yes, it may not be capable now, but that could be added later.
You mean this:
http://www.winnetmag.com/Articles/ArticleID/2638/pg/3/3.html
Windows has had the ability to run this since NT 3.51, and this article was circa 1996.
And then they killed it in favour of ‘Wolfpack’, which is really a failover architecture, not a load-sharing one, and was essentially only available to MS (for Exchange, SQL Server etc., and multi-billion dollar ISVs)
http://www.winnetmag.com/Article/ArticleID/16673/16673.html
and then ‘Wolfpack’ became ‘MSCS – Microsoft Cluster Server’, i.e:
http://www.winntmag.com/Windows/Articles/ArticleID/2943/pg/1/1.html
And then, after Windows 2K came out, the hype just… dissipated like a puff of vapour in the wind.
Well, as far as I can tell, anyway. Does anyone else know what Microsoft’s current clustering technology in Win2K/2K3 actually consists of?
Not that I love Windows — my home machine is Linux. But … my company has a *ton* of Windows machines doing nothing much but turning cycles into heat. I look at all that compute power sitting around and drool, especially when I’ve got to run a Monte Carlo simulation or other trivially parallel tasks. If you think this is a case for Linux evangelism, think again. Everyone’s favorite tools run under Windows. So, if MPI and the other infrastructure is there, so much the better. I just hope MS doesn’t screw up the standards.
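For the curious, a trivially parallel Monte Carlo job under MPI really is about this small. A minimal sketch (assumes an MPI implementation such as MPICH is installed; compile with mpicc, launch with mpirun -np N):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    long i, samples = 1000000, hits = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);  /* a different random stream on each node */
    for (i = 0; i < samples; i++) {
        double x = rand() / (double)RAND_MAX;
        double y = rand() / (double)RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }

    /* one collective call gathers every node's partial count */
    MPI_Reduce(&hits, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is roughly %f\n", 4.0 * total / ((double)samples * size));

    MPI_Finalize();
    return 0;
}

If that builds and runs the same way on the Windows version, great. If MS “extends” MPI, not so great.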
Then what the hell was Microsoft Windows Datacenter Server all about? If I recall correctly, Datacenter has support for 64 CPUs.
http://www.unisys.com/products/es7000__servers/hardware/index.htm
Unisys makes a Windows machine that scales up to 106 processors
Cornell uses Windows for HPC
http://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseSt…
And if you search the web there are many other examples. Windows may not be everyone’s first choice for HPC, but it does get used.
I reckon the day MS come up with a decent supercomputer OS, the world will start rotating backwards!
WinXP STILL has legacy code from 3.11. I spent most of my day spelunking through the registry, and it’s pretty scary stuff…
How much will a Windows supercomputer cost? Some have hinted that it could reach 5 billion dollars. Sounds like a joke, but let’s think about it: Windows Server 2003 pricing starts at US$540 (see PriceGrabber) for 25 clients. Imagine hooking 1000 nodes together: even at the base price that’s $540,000 in licenses alone, and if Gates chooses his usual gouging scheme, even with rebates, that’s a lot of dough for something that will be a malware carrier on steroids. How many employees will be needed to clean up the mess after Sasser Reloaded wreaks havoc in financial centers or in power utilities? Why waste such resources when there are better options (see Distrowatch)?
It is my understanding that TCP/IP was developed on BSD Unix, not VMS, although I could very well be mistaken. VMS had DECnet, didn’t it? I’m sure TCP/IP support was added eventually, like it was for NetWare. Here at the base there’s a VMS Alpha cluster still in use, though I’m not sure /what/ it’s used for. I’m sure it’s still widely used in other military installations as well.
is a master of none.
MS should try to make one thing work fine and then go on to do other things. If they keep dividing all their efforts in projects like this, all their products suffer. Make one thing and make it good.
Hmmm, that sounds very *nix like thinking, wonder why?
Christopher X is pretty much correct.
VMS’s first networking protocol was DECnet. It was there from Day 1; after all, the VAX 32-bit architecture was a replacement for the PDP-11’s 16-bit one, which by the late 1970s was running out of steam. The first DEC-produced TCP/IP stack was originally an extra product which later on got bundled in with the O/S. I spent 20 years working for DEC/Compaq, mainly in software development for VMS systems, including device drivers etc.
IMHO, what Dave Cutler did with NT 3.5 etc. was good, but he only did half a job. Windows, even today, is nowhere near as stable and solid an OS as VMS. But what else?
There is much in VMS Clustering that was included from day 1 that is still not properly implemented in Microsoft Clustering. The cluster-wide file system that is built on the Distributed Lock Manager is a super example of that. The ease with which you can bring nodes in and out of a cluster is also great.

One of Microsoft’s biggest problems, as hinted at by another responder to this thread, is the trouble caused by keeping this endless millstone of “backwards compatibility” going. For example, while file names are still held in 8.3 format (as in FAT) and the rest of it in other places in the file structure, you are not going to make the big strides forward in clustering that are needed. It is about time that this was consigned to the scrapheap and a decent underlying file structure used that allows proper locking, sharing and ACLs etc. that would befit an OS for the 21st century. There are many other things that they need to do to make their systems suitable for the evolving marketplace.

As I see it, Longhorn is going to be two things: 1) an even bigger kludge than Win2003, and 2) lots of eye candy to impress the press.
So, IMHO, they are onto a loser here. There are many better solutions that will be more suitable for this role. Even Linux/UNIX could do with a FOSS cluster-wide file system that is easy to use and implement. This requires some brave thinking at an architectural level to produce a coherent solution rather than piecemeal/point solutions.
I might (or probably) be wrong, but that is how I see it.
but I could be wrong. I believe one of the major draws of Linux in this field was that it was very stable, cheap/free, open source, and “good enough.” Even in the 2.0 days Linux was likely fast enough for these kinds of tasks; a single process running full blast until completion does not require an overly sophisticated OS, and Linux more than fits the bill. It was more than advanced enough then for a simple Beowulf cluster, and it’s likely even more so now. It’s Unixy, so academic types were already comfortable with it. Source code is a huge boon: if there’s a bug you can track it down yourself. C and Fortran are available too.
Windows is expensive, both in initial cost and support. Jason is correct; I can only imagine a supercomputer becoming infected with some worm and spreading hell at lightning pace. If you’re moving to cheap commodity computing for a reason, namely cost, then why increase the price by buying loads of Windows licenses? Wouldn’t the cost of a single license approach at least half the cost of the very hardware it’s running on? At least based on current offerings…
I know that the local weather prediction station run by the University (Oklahoma University) runs on a Xeon-based Red Hat cluster. Not the commercial offerings; it was something like 7.3 and internally supported. They downloaded the ISOs and went at it on their own. Cost? Only bandwidth. Last I remember hearing, it was very stable too. Lots of Sun machines there as well. Oh, and the Red Hat cluster replaced a rather large SGI machine that cost loads more, and the cluster is much faster too, by a factor of five or so if I remember.
Windows can barely handle the resources of one regular computer.
Hmmm…swell…supercomputer driven mass emailing worms. That’s what the world really needs. Please MS…stay away from this.
Don’t be so frustratingly naïve. They’ll make this Windows a hell of a lot more secure than the “standard” Windows version. And if they don’t, then don’t worry: customers won’t buy it. I presume people maintaining/running supercomputers are smart enough not to buy it if it doesn’t perform as well as the *nix variants.
http://www.unisys.com/products/es7000__servers/hardware/index.htm
Unisys makes a Windows machine that scales up to 106 processors
Looks like they only go up to 32 processors to me. If you saw Windows “scaling” up that high it would likely be a cluster. The highest I’ve seen Windows scale to is 64 processors; I would be happy for someone to prove me wrong with a link, though.
Linux, meanwhile, is free, far more customisable and flexible due to complete unrestricted availability of the source, it scales up to 512 CPUs now, possibly 1024 when the 2.6 kernel matures, and is proven in the high performance clustering and supercomputing field.
Why would anyone use windows? Seriously?
It is my understnading that TCP/IP was developed on BSD Unix.
As I understand it (I tried to look it up again), early Arpanet machines seem to have been mostly DECs. The first ones ran TOPS, then VMS and Berkeley Unix came with (I think) a slight lead by VMS, and then Berkeley Unix took over pretty much completely.
Which means there was an early TCP/IP implementation for VMS; but no, it wasn’t part of the commercial package.
There is a need to support Sun Microsystems servers! On Sun Microsystems it will be the fastest!
Although I don’t think MS will be the greatest player in this kind of application, they still have a chance. In the Novell era, MS didn’t have a chance in this market, right? Wrong. Against WordPerfect and Lotus 1-2-3 there was no way MS could do something, right? Wrong. Palm vs. Pocket PC, any bets? And more recently, Xbox vs. PS2. So again: will MS have a chance in the supercomputers arena? I don’t say yes or no, but here they come…
I’ve run benchmarks on the ES7000 up to the 32-processor configuration. This is a 32-processor SMP system, not a cluster. The ES7000 allows for dynamic reconfiguration of the entire hardware system. You can therefore reconfigure the server to look like two 8-way servers running alongside a 16-way server. The combinations are endless, including changing what memory cells are used in the different configurations. They have a proprietary interconnect for the nodes when they are acting as one massive machine and they have regular ethernet ports for when they are acting as individual machines.
This machine scales well, but you need the Data Center edition of Windows to scale past 8 processors. Windows doesn’t allow for more than 32 processors in an SMP configuration. Windows systems with more than that many processors have them segmented across multiple machines. Even if the system had more than 32 processors, the affinity mask in Windows is only 32-bits so you can only adjust the affinity on the first 32 processors. BTW launching the ES7000 with 32 hyperthreaded processors (64 processors to the OS) doesn’t create 64 processors, but instead maxes out at 32.
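That 32-bit affinity ceiling is visible right in the Win32 API. A minimal sketch, assuming a 32-bit build where DWORD_PTR is 32 bits wide, so the mask simply has no bits for CPU 32 and up:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR process_mask, system_mask;

    if (GetProcessAffinityMask(GetCurrentProcess(),
                               &process_mask, &system_mask)) {
        /* every schedulable CPU is one bit in this mask */
        printf("system mask: 0x%08lx\n", (unsigned long)system_mask);

        /* pin this process to the first two CPUs */
        SetProcessAffinityMask(GetCurrentProcess(), process_mask & 0x3);
    }
    return 0;
}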
In this configuration the machine, and thus Windows, scales well. Windows 2003 scales much better than Windows 2000 did. I love Linux/Unix, but Windows 2003 Datacenter edition and Windows massively parallel machines do work well.
I think this is another “Hype” stunt.
The kind of customers that will buy such a thing aren’t Joe User; these people make their own software, they come from *nix environments. I do not see anybody spending so much money on any MS OS.
Let’s be fair, MS runs very well on the desktop, you see it everywhere, but each time you go to the datacenter you find a few Windows boxes and mostly *nix variants.
Telecommunications operators all run the biggest *nix clusters you can imagine; they laugh at Windows for anything which involves anything more than a workstation.
This news is only new in terms of the new OS version. In terms of MS’ HPC initiative, this article is about 2 years late.
http://www.microsoft.com/windowsserver2003/hpc/default.mspx
So when will we have a BSOD on a Cray ?
As a UNIX admin I would certainly welcome a GUI-less Windows with better remote administration tools and compatibility with 90% of today’s software.
Is GUI based…
Can you explain better what you mean by 90% of the software? I cannot imagine someone using MS Word on that kind of machine, nor can I imagine people using Explorer or AutoCAD or Photoshop…
What kind of software runs in a supercomputer??? Far Cry? Doom III????
</ironic off>
One of the uses they are touting is harnessing the unused power of these machines that sit using 3-5% of CPU resources. But for that, Excel (for example) would have to be able to massively thread itself, spawn calculations across the network for a lot of different potential scenarios, and control the whole thing constantly. The new Windows would have to improve scheduling to accommodate the desktop user coming back and wanting to do work without feeling his machine grind to a halt.
You’d have to upgrade your whole desktop population to a version of Windows able to pick up these jobs and do serious work on the network, and for it to be able to handle heavy traffic, potentially coming from a variety of sources!
Sounds to me like a perfect example of marketing spin.
It’d be a lot more realistic t
“So when will we have a BSOD on a Cray ?”
When someone tries to hook up an 8-year-old HP scanner using the NT4 drivers.
“As a UNIX admin I would certainly welcome a GUI-less Windows with better remote administration tools”
You can run cmd.exe as the shell instead of explorer.exe, creating something that looks like early X Windows.
As far as remote administration goes, Windows has gotten better. With XP and 2003 they have remote desktop incorporated into the OS and with MMC along with WMI remote administration is fairly easy.
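And for the GUI-less part, the cmd.exe-as-shell trick mentioned above is just a registry value. A hedged sketch only: it assumes the usual per-machine Winlogon “Shell” value, needs admin rights, and you want a registry backup before trying it:

#include <windows.h>

int main(void)
{
    HKEY key;
    const char shell[] = "cmd.exe";  /* replaces explorer.exe at logon */

    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon",
            0, KEY_SET_VALUE, &key) == ERROR_SUCCESS) {
        RegSetValueExA(key, "Shell", 0, REG_SZ,
                       (const BYTE *)shell, sizeof(shell));
        RegCloseKey(key);
    }
    return 0;
}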
int a = 2;
int b = 3;
printf("The answer is %d\n", a + b);
answer: Did you activate your Windows SuperHorn today ?
Yes
answer: Did you pay the SAL (Supercomputer Access License) for your desktop computer ?
Yes
answer: Do you want to reboot now ?
No !! Please answer my question !
Clippy: What can I do for you ?
Go sleep !
Where Do you Want to Go Today?
I’m going to my bed and you go to hell!! Please, the answer!
answer: Syntax error in lines 1, 2 and 3: this is not C# code. Please forget C, C++ and Fortran and learn the new proprietary M$ C# language.
In this configuration the machine, and thus Windows, scales well. Windows 2003 scales much better than Windows 2000 did. I love Linux/Unix, but Windows 2003 Datacenter edition and Windows massively parallel machines do work well.
You’re making this sound like “Linux/Unix” doesn’t scale well. My friend, 32 CPUs is nowadays small fry even for Linux. Linux is being used on 256 and 512 CPU Itanium 2 single system machines (not clusters) with 8TB of memory, it is being used on 32 processor POWER4 systems, and being tested on 128 CPU POWER5 systems. IRIX has been known to run on IIRC 2048 CPU systems.
Trust me, windows 2003 datacenter whoopdeedoo reloaded special bsod high performance supercomputer edition DOES NOT work well in massively parallel machines.
Does anyone find it painfully ironic, bordering on the comical, to read “Windows” and “Super” in the same subject?
Does it come with MSIE, WMP, and other programs that can’t be uninstalled? =)