Microsoft has taken another step in its effort to bring Windows into the world of supercomputing, having finished development of its compute cluster operating system. It has finalized the code for Windows Compute Cluster Server 2003, which is designed to allow multiple servers to work together to handle high-performance computing (HPC) tasks. Such work, long handled by systems from SGI and Cray, has increasingly been tackled by Linux clusters, though Microsoft has been planning its entry for some time.
People who work with scientific clusters like Linux because of the ability to customize it for their needs. I doubt Windows will give them that.
Typically, supercomputer applications are for ludicrous amounts of number crunching, right?
How is a WIMP (Windows, Icons, Menus, Pointer) interface supposed to help the numbers crunch faster?
Windows has supported headless operation since at least Windows 2000.
Not only that, but they do have a CLI-only mode.
> Windows has supported headless operation since at least Windows 2000.
> Not only that, but they do have a CLI-only mode.
I won’t buy this. None of the Windows gurus I’ve met were ever able to do headless operation with any version of Windows.
Having a CLI-only mode does not mean “headless operation” is possible, either.
I don’t even know what magic protocol that nobody has heard of you could use to do “headless operation” on Windows: enlighten me, please.
> I won’t buy this.
I will.
http://support.microsoft.com/?id=317521
“The bootcfg /ems command permits redirection in the boot loader, with the configuration specified as port and baud rate. This command is used to start the Headless Administration feature.”
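For instance (an illustrative sketch based on that KB article; check bootcfg /? on your build, since the exact switches may vary):

    bootcfg /ems ON /port COM1 /baud 115200 /id 1

That redirects the Emergency Management Services console out the serial port for the boot entry given by /id, so the box can be administered over a serial line with no video involved.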
> I will.
> http://support.microsoft.com/?id=317521
> “The bootcfg /ems command permits redirection in the boot loader, with the configuration specified as port and baud rate. This command is used to start the Headless Administration feature.”
I still won’t.
Man, how deceitful you MS shills can be; it’s amazing!
We’re talking about the GUI being a waste of resources on clusters, and you talk to me about “Headless Administration”, which is another case of MS not using the same words as everyone else.
Your Windows “headless administration” still requires a GUI at the other end for you to do anything useful, be it through TS, VNC or something else.
So you are STILL running a GUI on the servers.
Your stupid feature, which is only there so Windows understands what is happening to it when it is launched without a keyboard/mouse/monitor, is useless for this discussion.
You can, for sure, run Longhorn server in a CLI-only mode (i.e., no graphics overhead at all).
Since this thing just now got finished, I don’t see why that mode wouldn’t be in there as well. Maybe it isn’t; I don’t know, I’ve not run it.
And neither have you.
You do realize that this is based on Windows Server 2003, not Longhorn…
How they actually plan on getting into this market, which often requires source code, at such steep entry prices, I don’t know. It feels like MS is just firing as many bullets as they can, seeing what hits, and going with that.
And “Longhorn” is based on Server 2003.
I use Windows in a headless configuration at home. All it takes is VNC. No monitor required for the Windows box.
Unless you are using some nonstandard definition of “headless”?
I see from other postings you’ve made that you equate “headless” to “GUIless”, which is interesting but rather arbitrary. IMO, of course. All I cared about was that the system didn’t require a monitor.
Of course a computer doesn’t “need” a monitor, but what this is all about is the fact that you can’t run Windows without a GUI, and on a computer/cluster node which is intended to do things like number crunching, a GUI is absolutely worthless and a complete waste of resources. If I build an HPC cluster with Mosix, then there’s nothing but the Linux kernel, a few daemons/modules needed for the hardware, the Mosix daemons and SSH – nothing more. On a Windows cluster, you have a full Windows GUI running on every single node, which is utterly stupid.
Tom
> Unless you are using some nonstandard definition of “headless”?
To me, Headless means no VIDEO CARD, no mouse, no keyboard. My “Headless” servers have a NIC, a RAID card and hard drives. If it needs a video card, it is not headless.
As I don’t know if Windows (any version) can run without a video card, I will not state that it cannot. I am not aware that it can, however.
If you think a video card is cheap, try a 1,000-node cluster with 20-dollar video cards. That’s $20k better spent on more memory, more storage, more nodes. Not to mention that’s 1,000 video card drivers, and God knows video card drivers never cause a problem.
> To me, Headless means no VIDEO CARD, no mouse, no keyboard. My “Headless” servers have a NIC, a RAID card and hard drives. If it needs a video card, it is not headless.
That has always been the definition of headless; headless doesn’t refer to “lack of monitor” but to “lack of direct interaction with the server; the server does not have the features to allow direct interaction”.
Some kids here need to get out more and have a look at some real hardware, like the stuff made by Sun; pull it out of the box, turn it on, and voilà, the server is up; no graphics card, keyboard or mouse required.
“Not only that but they do have a CLI-only mode.”
Yeah, that’s right, didn’t they call it “gonad” or something similar? Claiming it to be better than anything from the UNIX folks.
Check your facts, Monad is MS’s “new” shell, intended to replace cmd.exe, there’s nothing headless about it (except for Microsoft itself, of course).
It’s called Monad, and yes, it’s a new shell. If you refer to “CLI-only”, then I think I have read that MS states you can fully administer Windows from this shell, which still does not mean that you can run Windows without any GUI whatsoever. What I want & need for a “real” server is to be able to fully administer it remotely, in a bandwidth-efficient way (read: no GUI, text only). I can do this with Linux, but not with Windows. But still, if I wanted to administer my Linux server graphically, I could always install X, start it and remotely launch any GUI tool which is on the server and have it displayed on my client machine. But this is _optional_, and I can turn the whole GUI functionality on and off whenever I want, minimizing resource usage.
Tom
> But still, if I wanted to administer my Linux server graphically, I could always install X, start it and remotely launch any GUI tool which is on the server and have it displayed on my client machine. But this is _optional_, and I can turn the whole GUI functionality on and off whenever I want, minimizing resource usage.
Actually, the server itself does not need X, XDM or anything else installed or running. If you install any X app on the server, you can log in via ssh from a system running an X server, launch the app on the server and display it on the client. The X display system does not require any local X components to do this. The X app (running on the server) just needs an X server somewhere to display to; that can be local or remote.
This method is also somewhat like a thin client setup, with the exception that the client does not boot from the server. A thin client server only needs to run XDM or equivalent.
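For example (a minimal sketch, assuming OpenSSH with X11 forwarding enabled on the server side and an X server running on your desktop; the hostname is made up):

    ssh -X user@node42 xclock

The xclock process runs on node42, but its window is drawn on your local display; node42 needs no X server of its own, just the client libraries the app links against.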
It’s not. Do you seriously think that’s a major issue? Even if their node setup runs it (which I’d doubt), it would be a tiny cost in memory and virtually no cost in CPU usage; unless you sit some idiot at the compute node to use it like a workstation.
The bigger concerns are probably things like rewriting software to work with Microsoft’s libraries, Windows NT’s extremely heavyweight processes if you launch processes in your code, and, if they haven’t changed it on this, the 2GB limit on 32-bit systems by default. And whether their TCP/IP stack is as robust as it’s supposed to be, now.
> and, if they haven’t changed it on this, the 2GB limit on 32-bit systems by default.
The Compute Cluster Server 2003 is only available as 64-bit according to Microsoft, but I’m sure you know better.
Surely, people who work with scientific clusters have scientific computing jobs to run. If they are fiddling with the host OS, then they are not doing their science, and are wasting their time.
Insofar as you might need a threaded runtime to help access all the cores you’ve got, plus network device drivers, Windows can be usable. You’d probably need to replace the standard heap manager, though.
Indeed, the integrated security and delegation support, and the MSMQ it comes with, might be advantageous to anyone who isn’t already wedded to an MPI model. And you get to deploy onto a platform that allows the use of MS dev tools, which can be comfy for some people.
It provides more choice, why be negative?
Make sure it is connected to the internet or to some other computer that is. A supercomputer worm spreader with DDoS attack support. Great.
ALL YOUR NODE ARE BELONG TO US
There will always be some type of fool to buy into this; SGI hired one, and we saw what it did for them. If the same fellow gets a director job with an HPC lab, repeat the same folly. The world is full of Windows believers; I was one too, for engineering tools in the early NT days. It made some sense in small shops, but not when there are lots of smart folks around. This will come in from upstairs politicking.
I also thought most HPC codes were OS agnostic: massive model crunching in C, Fortran and the like, and then later visualization modules. I don’t see VisualBio++ making any sense. When I think of HPC codes, I see no reason to want to make them OS dependent, only standards dependent, yes.
I thought MS was killing off OpenGL too!
And surely the market for this is absolutely tiny in comparison to the usual surfing office desktop userbase; it seems to me to be more about prestige.
Let’s see…
From the FAQ at http://www.microsoft.com/windowsserver2003/ccs/faq.mspx
“Supported programming languages for Microsoft MPI are Fortran77, Fortran90, and C.”
“Windows Compute Cluster Server 2003 comes with the Microsoft Message Passing Interface (MS MPI), an MPI stack based on the MPICH2 implementation from Argonne National Labs. Windows CCS 2003 will also work with other MPI stacks written to the MPI2 standard.”
That, coupled with the fact that Visual Studio 2005 has support for OpenMP, is enough for me, at least.
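For anyone who hasn’t seen one, a minimal MPI program of the kind this targets looks like the following – plain C against the MPI2 API, so going by the FAQ above it should build against MS MPI just as it does against MPICH2 (an untested sketch on my part):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
        printf("node %d of %d reporting\n", rank, size);
        MPI_Finalize();
        return 0;
    }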
PS. How’s the Fortran and OpenMP support in GCC these days? Last time I checked, there was no OpenMP support and only Fortran 77 was supported, but I hope more is supported by now.
> PS. How’s the Fortran and OpenMP support in GCC these days? Last time I checked, there was no OpenMP support and only Fortran 77 was supported, but I hope more is supported by now.
Who cares? Need those features? Hop over to Sun Studio 11, which includes all of them and is a free download for Solaris x86/SPARC and Linux.
Fortran 95 has always been in GCC 4.x, and GOMP is supposedly pretty much ready: http://gcc.gnu.org/projects/gomp/
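For comparison, the OpenMP side is just compiler pragmas on ordinary C – a toy sketch (builds with VS2005’s /openmp or, once GOMP lands, gcc -fopenmp; the series is only stand-in work):

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        long i;
        /* split the loop across all cores; reduction(+:sum)
           gives each thread its own private partial sum */
        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= 100000000; i++)
            sum += 1.0 / ((double)i * (double)i); /* converges to pi^2/6 */
        printf("pi^2/6 ~= %f\n", sum);
        return 0;
    }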
Just be careful, and don’t end up with software that’s stuck on Microsoft’s embraced and extended MPI.
You forgot to mention Windows Media Player, which is installed on every single cluster node (according to /.), so that even if you have more than one administrator (which is unlikely, because Windows is so amazingly easy to handle), each of them can watch their own movies on each node, which of course all need their own monitor to display the fancy GUI.
Tom
Is there a point to them doing this? Their Consumer OS is constantly being stripped apart, other leaders in the market are already handling people’s needs in this area. What does Microsoft think they’re going to get out of this?
Wasted effort, I say. While they’re a monopoly and Vista will be widely adopted, Microsoft would do well to remember that even monopolies eventually fall. They shouldn’t be trying to hasten the process if they know what’s good for them.
Linux handles this space just fine and is extremely flexible and customizable. Maybe someday scientists will need a pretty GUI with a dancing dog and a wiggling paper clip. I can see it now:
“It seems from the data you’ve input you’d like to try some supercomputing. Microsoft has many wizards and templates available for this!
Would you like to encode video (WMV)
Or render secret Data (Don’t worry, WGA is only phoning home to PROTECT you)”
Yeah. Ok.
The whole point of HPC is to get the most performance you can. Why on earth would anybody run Windows on a cluster? Pretty much any OS is notably faster than Windows on the same hardware. I can’t imagine the speed difference when you have that many resources.
Speaking of resources, the system requirements of Vista come to mind.
No matter how fast AMD and Intel make their processors, Microsoft will find a way to slow things down again.
> Pretty much any OS is notably faster than windows
> on the same hardware. I couldn’t imagine the speed
> difference when you have that many resources.
Please, think about it. The performance of HPC systems is compute bound and dominated by user-space computing, which is dominated by the compiler and the runtime libraries. If you have an HPC app which is dominated by the performance of OS calls, then you have seriously screwed up.
It’s not as if Windows is necessarily slow; look at the TPC benchmarks, the old VolanoMarks, etc.
Some operations *are* (much) slower than the directly equivalent Linux ones, but bear in mind that a carefully written Win32 app will be using AIO for most of its communication, and the fact that select sucks on Win32 is neither here nor there. A well-written app on each will spend little time in kernel code, so the scope for the overall result to differ by much is pretty slight. OTOH, port an app to Win32 that uses posix_spawn frequently and assumes it’s fast, and you’ll be disappointed – but that’s because of the bogus assumption.
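To make the overlap point concrete in HPC terms: the MPI analogue of that AIO style is posting nonblocking transfers and computing while the data is in flight. A rough C sketch (the local loop is a stand-in for real work):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, right, left;
        double out, in = 0.0, local = 0.0;
        long i;
        MPI_Request reqs[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        right = (rank + 1) % size;          /* ring neighbours */
        left  = (rank + size - 1) % size;
        out = (double)rank;

        /* post the exchange, then keep crunching while it is in flight */
        MPI_Irecv(&in,  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&out, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        for (i = 1; i <= 10000000; i++)     /* stand-in computation */
            local += 1.0 / (double)i;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); /* block only when needed */
        printf("rank %d got %f from %d (work %f)\n", rank, in, left, local);

        MPI_Finalize();
        return 0;
    }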
> The performance of HPC systems is compute bound and dominated by user-space computing
OK, but some of them move high bandwidths of data between master and slave nodes.
> which is dominated by the compiler and the runtime libraries
That’s just not true. Do you really believe the apps are compiled on each node?!
The app is compiled on one machine for all the architectures that need it, and then the binaries are pushed to each node.
That’s how I did it 10 years ago with PVM, which was not even the king of distributed computing; I sure hope it’s far better today with MPI and other systems.
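(With MPICH-style tools the launch side is then a one-liner from the front end, something along the lines of mpiexec -n 64 -machinefile nodes ./app – assuming the binary has already been pushed out or sits on a shared filesystem; the exact flags vary by implementation.)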
> If you have an HPC app which is dominated by the performance of OS calls, then you have seriously screwed up
Sure, but you still have to manage all these threads and processes, and Linux is noticeably faster than Windows at that too.
VM and memory management are also far better on Linux than on Windows, even 2003.
I mean, Linux can run without problems at 100% CPU and 95+% RAM occupied for at least a month non-stop (from IBM tests).
> It’s not as if Windows is necessarily slow; look at the TPC benchmarks, the old VolanoMarks, etc.
No, Windows is not slow, it’s just slower, and mandatory useless things like the GUI don’t help, especially since it is in the kernel in Win2003, and you said yourself that you are screwed by OS calls. And I still have no evidence that Win2003 can run without a GUI, be it “headless” (by MS’s definition of headless) or not. CLI-only will be possible if you believe what people say here, but Monad is still not here, while Windows Cluster is, so the problem is there in Windows Cluster.
> bear in mind that a carefully written Win32 app will be using AIO for most of its communication, and the fact that select sucks on Win32 is neither here nor there
A distributed app that uses AIO has problems, IMHO. AIO won’t make it faster, for sure. I don’t know about Windows, though, so I’d better not comment further.
Yeah… Practically HEADLESS… haha
Windows is not the slowest in that regard. Windows is very lightweight when you load nothing on it. At least there is no IPC overhead like the Mach kernel’s, although L4 might be faster than Windows. (I’m talking about NT.)
But in the middle of your calculations, you will have to reboot for updates. Congrats – your last calculation will not be saved. I believe your calculation just started from… oh – you haven’t saved any of it!
Try using it as a rendering farm and voilà – your film will take longer than Daikatana to get to market!
If Microsoft can produce an OS that links all the normally idle PCs in a small company together, they will definitely have something. Unfortunately, few commercial applications that currently run under Windows are set up to take advantage of such a setup; they tend to be single-threaded and lack network awareness. Consequently, much will depend on how well widely used compute-intensive software, such as Matlab and Simulink, works in the new environment, and on the cost per node. The price quoted by MS seems a bit steep for anyone putting together a large cluster from scratch, but for making a cluster out of existing hardware in a company where folks are already using Windows, it makes sense if the right software is available at a reasonable price. We will see.
You can do this easily with Linux & openMosix. If you have a LAN with Linux clients, just install openMosix on all of them, configure all the LAN IP addresses, and voilà, your whole office is a single HPC cluster. All processes are dynamically spread over all the available nodes. You can of course only _really_ benefit from that if your software uses a lot of processes/threads, but you also profit from it when you’re running a lot of apps at the same time on one computer, because some of them are migrated to another client PC in your LAN.
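If memory serves, configuring those addresses is just a text file (/etc/openmosix.map); the exact format may differ by version, but it’s roughly one line per node or per range of consecutive IPs:

    # node-number  IP-address     range-size
    1              192.168.0.10   16

That one line would declare nodes 1–16 on consecutive addresses; the numbers here are made up, of course.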
Tom
There are plenty of Windows-based supercomputing clusters around… they are called zombies, and they are controlled from an IRC channel. Get with the program… Windows for supercomputers is made to make this easier… it includes a custom version of mIRC with all the scripts you need to get your zombies going.
Why Why Why?
Neal Saferstein
HAHAHAHAHAHAHAHA *CHOKE* HAHAHAHAHA *oik*
It’s not a joke?!
Why in the world would anyone want to take the most bloated, inefficient, insecure, most poorly written fuster-cluck platform in the known universe and build a cluster of the same?
Who would really consider such a thing?
Celt said
> Why in the world would anyone want to take the most bloated, inefficient, insecure, most poorly written fuster-cluck platform in the known universe and build a cluster of the same?
> Who would really consider such a thing?
Blah. Blah. Blah.
You know what? It really doesn’t matter. MS Windows XP (yes, XP) powers some of the world’s largest commercial clusters. You will never hear about them, they aren’t listed on Top500, and you’ll never get to play with one. Sorry.
Bloated: Minimal XP (and the same will be the case with MS HPC) isn’t really bloated. There’s a reason why tens of thousands of nodes (in one company, and that’s a single cluster, btw) run it.
Inefficient: Huh? Inefficient as in “not producing the effect intended or desired”? Far from it. Why would multi-billion-dollar corporations lean on something that doesn’t “produce the effect intended”? Bulls***.
Insecure: Well, you probably don’t know much about how clusters are used. They aren’t on the Internet gateway router, you know.
Most poorly written: Well, I’m sure you know something we don’t, so I’ll leave it at that.
Cheers,
CEO (oh, running Opensuse 10.1, so no MS flaming pls)
Please show me a single large supercomputer cluster which runs on XP; I’m really interested. If XP is capable of running clusters as you claim, then why do we need this extra OS?
Windows cluster server not bloated? Can you explain to me, then, why it has Windows Media Player installed? And I’m sure it also features FreeCell. A definite must on a supercomputer cluster node.
Security isn’t only about the Internet… yes, a supercomputer cluster is most likely not connected to the Internet, but that doesn’t mean it doesn’t have to be secure. A supercomputer is most likely used by multiple users at the same time, for example – a scenario where security IS an issue.
Tom
“…a supercomputer cluster is most likely not connected to the Internet…”
Well, according to the diagram that Microsoft posted on their cluster computing overview page, there ought to be an internet connection. It clearly shows a connection to the mail server, workstations, and the rest of the network. Hello, supercomputing-powered worms.
To answer your questions:
“If XP is capable of running clusters as you predict, then why do we need this extra OS?”
First of all, I don’t *predict* that clusters are running on XP – they already are. Why do we need MS HPC? Because, unlike XP, the HPC edition is targeted for HPC use (I’m sure that was a surprise).
“Security isn’t only about the Internet… yes, a supercomputer cluster is most likely not connected to the Internet, but that doesn’t mean it doesn’t have to be secure. A supercomputer is most likely used by multiple users at the same time, for example – a scenario where security IS an issue.”
For the most part, end users interact with a cluster in a very limited fashion, e.g.
– Job scheduling software front end
– The front end (client) of the software, which then communicates with the Master node through a proprietary protocol
For the most part, only the Master node is visible to the end users, all the other nodes are on a different network subnet, unreachable by all other users. Many corporations have the Master node behind a firewall, and only the ports required to interface with the HPC app(s) are opened.
This would obviously be different in an academic/research setting, but by and large, that also tells you why the lack of security in MS isn’t really an issue.
“First of all, I don’t *predict* that clusters are running on XP – they already are.”
I’m still amongst the many people here who would like to know about all those large supercomputing clusters that run on Windows XP that you’re talking about. Can you name one?
“Can you name one?”
Of course he can’t, he’s trolling.
Thought so …
Haha 😉
Never mind. No offense taken on either side, I’m sure. Just get into the HPC business yourselves and you’ll find out who I’m talking about. Actually, the single cluster I am referring to would be found not too far from the Gulf of Mexico – which is about all the hint I can give. That cluster runs Windows XP most of the time, although they will boot it (as in, automatically serve the image of…) into Linux for specific jobs.
Let’s welcome diversity. The companies that want to run Windows probably have their reasons to do so. Internally we don’t (actually we run Suse 9.3) but that’s up to each one.
And would you mind telling us HOW this mysterious super-secret company connects at least a few hundred (if you’re talking about “large” clusters) computers running Windows XP – which has no HPC capabilities at all – into a single supercomputer??
Tom
Show me one, just one commercial cluster running XP! Just one. Let alone “some of the world’s largest,” with “tens of thousands of nodes.”
Are you simply hallucinating?
We won’t hear about them? We all know that MS is never prone to bragging (lying) about issues of performance, security and stability.
Again, I challenge you to point any of us on this forum to a commercial cluster running Windows XP. Better yet, point me to a cluster of any size (other than a three PC’s on your desk) running ANY version of Windows.
Poorly written? The MS platforms, all versions included, are the worst examples of hacks I’ve ever had the displeasure of using.
Disclosing information about the commercial clusters running XP would be out of the question, for the simple reason that the enterprises themselves do not provide those details officially. If you, on the other hand, have any contacts in the HPC divisions of the major HPC hardware vendors – I am sure they can give you some good ideas. I have no interest to do that here, other than to say that my company’s software is cluster based and for that reason I know very well what I am talking about.
Our software runs on both Linux and Windows, and although 95% of our clients use Linux on the back end, there are clients that would like to use Windows (typically for smaller “static” clusters, e.g. 8-16 nodes that don’t require much maintenance). We don’t disclose our customer list either, but our software’s used at many of the world’s largest oil companies.
So, in short – I am not hallucinating.
A number of well known software packages are currently being ported to Windows, some of which are promoted here : http://www.microsoft.com/windowsserver2003/ccs/partners/default.msp…
I think it would not be too extreme to believe that the reason these corporations (just like our own) provide their software on Windows is not that Microsoft is begging us to, but that our clients are asking for it.
As for bloated – we really do not run Windows Media Player on our cluster nodes, and I doubt that anyone else does, so that’s just a stupid argument that doesn’t matter in this context. Most of the bigger clusters run RHEL 4 ES or WS, and – as we all know – RHEL 4 comes with most of the same bells and whistles as XP, so the difference in ‘bloat’ isn’t that big.
What you also need to realize is that the OS really doesn’t matter all that much for HPC apps. Primarily, the OS is just a “necessary evil” of basic stuff, and all the fun takes place in the ‘secret sauce’ and the typical libraries that are used (e.g. MPI).
—
I’m sure this didn’t convince you, but then again, I don’t really think you want to be convinced.
They’ll probably only sell it to you if you have a $nnnnnnn billion support contract with MS, à la Windows Server Datacenter Edition, so I doubt it’ll make any inroads whatsoever, except for maybe some government contracts, etc.
Except governments all over the world are switching from proprietary crap to open-sourced systems. There is no place for this, even in those sectors.
At least all available resources are going to get used.
You will need high-performance computers to get even a simple thing done with this OS.
But does it run Office2007?
It sounds interesting to me. I’m the “computer & network responsible” for a small school. At any given time, we’ve got maybe twenty computers running. Most workstations aren’t at full CPU utilization the whole time.
Say one of the students wants to listen to MP3s, download stuff on BitTorrent, photoshop some images, browse the web, and make an OpenOffice Impress presentation at the same time. She’s using a Celeron 400 machine. On the other hand, I’m just typing a document on my dual-core Athlon. It would be much more efficient if some of the tasks running on her Celeron could be migrated over to my under-utilized Athlon. Things would be much quicker and nicer for her, and I wouldn’t notice.
If the MS folks would produce a simple, easy-to-configure clustering system that allowed this (and were willing to donate some licenses to us), I’d be willing to check out their stuff. I think a lot of homes, schools, and small businesses might also be interested in this if it were explained well. I doubt it’s going to happen, but it’s an area that interests me a lot.
Wouldn’t a supercomputer be used for something mission-critical? I mean, I wouldn’t trust any OS as bloated as Windows to run on a high-performance machine over Unix or Linux, which have a history of being stable…
I’m just sayin’
for the TabletPC edition.
I wonder what are its advantages over the competition…
From what I’ve read so far, it has nice integration with the development suite from Microsoft. That’s about it. However, I still have to figure out why that’s a real plus. Cutting development time is always great, but it doesn’t really matter when you need fast number crunching. I must admit that I have never dealt with an HPC project, but I suppose you spend more time developing better algorithms than on the other parts (communication, synchronisation, deployment). Unless their tools have plenty of smart optimisations to significantly cut development time, it’s going to be hard to justify the money spent on licences. In many cases, a licence is going to be half the price of a node! Even volume discounts wouldn’t really help.
The MS salesman in the article mentions an “attractive price”. Perhaps for those shops swearing only by Microsoft. It’s a bit early to make a real judgement, but it seems to me there are better and cheaper products out there.
Wow, imagine a beowulf cluster of these!