Bink.nu has some interesting information on Windows Server 2003 Compute Cluster Edition. It is 64bit only, and consists of 2 CD’s. They also posted a set of screenshots.
The screen shots are nice but I found the information lacking. I wonder how the cluster edition is going to compete.
“Windows Server 2003 Compute Cluster Edition 2003. It is 64bit only, and consists of 2 CD’s”
Is it free and open source? If not, then YAWN.
I don’t understand why Microsoft puts years in their product names. Windows Server 2003 Compute Cluster Edition 2003 won’t be released until 2006? I realize it’s based on Server 2003, but it’s very confusing…
I think they did that because it really is Server 2003 but they added clustering. So the name makes sense. Who knows if the name will change between now and release. I’d rather them call it Server 2003 Compute Cluster Edition than Windows CCE.
..and cute! You get a balloon-style notification each and every time a task is completed! Isn’t that everything anyone could ever ask from a computer cluster!
You can all bet your lives I’m going to be selling my car for a copy (erhm, I mean license) of this wonderful clustering solution!
I hope they find a way to turn off the balloon notification. That is one feature that bugs me in Windows XP. Put it in a log where I can check it later.
LOL @ balloon feature
reminds me of the System 7 informational balloons that the Mac OS used to have :p
It seems to me that Microsoft is creating all these editions of Windows, segmenting its market – just like Apple did with its hardware back in the early-mid 90s.
Amen.
Take a look at the “User Console” screenshot. You will notice that Windows Media Player 10 and MSN Messenger are running on it.
I may have missed something, but unless Microsoft re-wrote the rules of kernel development, only processes can be migrated between cluster nodes. (I don’t see how threads could be migrated between nodes.)
Now here comes MS’s biggest problem: unlike Unix/Linux/BSDs (where fork/clone effectively replaced threads for years), MS’s kernels were never designed to spawn processes at low cost; they were designed to multi-thread. The performance hit of pushing processes between nodes will be tremendous (just consider the handles and security contexts that need to travel between nodes). Registry syncs? FS syncs?
Windows is the last OS on this planet to even be considered as a candidate for HPC. Unless MS rewrote the kernel (and supporting libraries) from scratch, this is nothing more than marketing PR stuff.
Oh… and am I the only one to find the idea of a cluster server with MPlayer and MSN ludicrous?
Uh, you do know that NT threads can be migrated across nodes, right? Nixes needed low cost process spawning because they had no proper threading mechanism. The ignorance of the differences between OS kernels is staggering.
I’m wondering if you have any idea what you’re talking about, or if you’re just trying to flame-bait me.
I’ll give you the benefit of the doubt and ask you:
how can one migrate threads between nodes, without implementing cross node shared memory?
Now, assuming that Microsoft has managed to implement cross node shared memory, (which AFAIK never been solved before), how did they solve the problem of atomic operations? (lock inc, btX, etc), memory concurrency between threads, memory mapped file, etc?
Please educate me, oh great master.
Now before you embarrass yourself any further let me just point out to you, that among others, I write Windows drivers for a living.
“Uh, you do know that NT threads can be migrated across nodes, right? Nixes needed low cost process spawning because they had no proper threading mechanism. The ignorance of the differences between OS kernels is staggering.
I’m wondering if you have any idea what you’re talking about, or if you’re just trying to flame-bait me.
I’ll give you the benefit of the doubt and ask you:
how can one migrate threads between nodes, without implementing cross node shared memory?
Now, assuming that Microsoft has managed to implement cross node shared memory, (which AFAIK never been solved before), how did they solve the problem of atomic operations? (lock inc, btX, etc), memory concurrency between threads, memory mapped file, etc?
Please educate me, oh great master.
Now before you embarrass yourself any further let me just point out to you, that among others, I write Windows drivers for a living.”
Oops… didn’t notice that I posted this as Anonymous.
Gilboa
M$ CAL will be M$ Cluster Access License…
Beowulf does the same at zero cost.
>Oh… and am I the only one to find the idea of a cluster server with MPlayer and MSN ludicrous?
They do this on all their “Server” products.
The fact is that Windows is just a DESKTOP PC operating system attempting to be shoehorned into a server role. In all my years of working with Windows Server products, I have always been struck by the sheer baffling amount of things that make me think “THIS WAS designed for a DESKTOP, by DESKTOP programmers”.
Many of the problems with using Windows as a server can be traced back to the fact that it’s just an overblown desktop OS that was not written to be a server from the start (e.g. it requires a GUI to be running at all times, consuming huge amounts of CPU/memory for no reason when the machine is locked in a closet, and it had a completely shitty scripting shell, etc.). Windows just screams over and over again: I’m a DESKTOP, I’m a DESKTOP, I’m a DESKTOP. This was especially true of early Windows NT Server, which was a total joke, and to this day I still can’t believe people actually used that crapola for a server as much as we did. Many other things in Microsoft’s server OSes scream desktop, or at best scream as being designed for a 10-person office; stuff like NetBIOS networking and WINS was just a joke when you tried to scale it in the enterprise.
Compare that to Linux/Nix where you have an operating system designed to be a server and now people are trying to make it a desktop…sort of the opposite of the Windows situation.
Quite interesting how these 2 designs are both coming from totally opposite ends of the world and are now competing in the middle.
Which one will win out in the end?? The desktop who grew up to be a server, or the server who learned how to be a desktop?
Actually, Windows is meant to be more than just a server (or a desktop). Windows is meant to be a “platform”, where platform means that you expect Windows to be (somewhat) Windows everywhere the “Windows” word is involved.
This is very different from the Unix/Linux concept. As you can see, there are many versions of Windows, each of which has different goals but is (where possible) more or less the same.
The fact that you can have the same components on a Server or Desktop version (or Mobile or Tablet PC version) of your Windows means that you can develop your software while being sure that any Windows version will be able to run it.
That’s the key value of Windows itself and one of the biggest reasons why Windows succeeded.
Also, never make the mistake of considering MSN as only a messenger program, or Media Player as only a media player. Microsoft has always been smart enough to give developers access to the very same technologies it has implemented.
By having MSN Messenger or WMP on that machine, a developer is sure that the basic libraries are there and can be consumed. In Windows, MS software is rarely just an “application”; rather, everything is connected. Most MS software allows developers to use or interface with it (Office, WMP, down to the Kodak ActiveX components).
While you could consider that a bad approach, just notice that Apple is doing the very SAME thing with Tiger, by introducing new system-wide technologies available to developers.
The “coherent platform” concept is why MS was able to keep the company united when they lost against the DOJ, and why they tried to oppose the EU anti-trust commission when it forced them to remove WMP from Windows XP. It’s a value for users and developers.
Regards.
You know, there is a difference between having the libraries there to ‘be consumed’ and having the actual program installed. I agree with the comments by others that there is no reason for a server operating system to have MSN Messenger and Windows Media Player. Since these are both client software and not server software, they’re useless. Maybe if there were the base libraries for WMP so that you could start a streaming service, that’d be one thing. But MSN Messenger being installed is just asking for trouble.
And Microsoft complained so much about WMP being pulled out by the EU because they want to try to spread their crappy proprietary media formats on everyone.
MSN Messenger, by the way, is the first thing to go on any XP I install, especially on the desktop.
I don’t know which services / applications will be installed by default, so I’m guessing. My remark wasn’t targeted at this cluster version but was mostly a general remark about why, for example, you have DirectX on Windows 2003.
As a general rule, you can always disable services / applications you don’t need / don’t want. I admit this is easier now than it was (say) 3 years ago, because MS realized that people are usually lazy and will leave everything as it is for fear of breaking something.
Now, the purpose of MSN Messenger on this cluster version can only be guessed at (unless the tester installed it for his/her own purposes). I can only guess they’re trying to push Messenger as a good channel of communication between machines – for example, to chat with your co-worker to find a solution to a problem which arose. But this is just a wild guess ;-). The general consideration is still valid, though.
Just an example: I’ve seen a quite popular (and, honestly, very good) helpdesk system which requires DirectX to be installed on the server (Win2003) because such a (Web) application relies on it to create some graphics for reports and so on (don’t ask me for details). The developers eventually found it handy to use such standard services to achieve their goals instead of writing their own libraries. This is a value for their business.
That explains my comments about the Windows “platform”. Now, as I said, that doesn’t mean MS is right to keep MSN on the cluster version. I just meant that one should take a different approach, because in Windows most things (if not everything) are somewhat connected.
It wouldn’t be fair not to say that such inter-connection has also been one of the major weak points of the Windows “platform”, because it was not secured enough by default (and sometimes in general) to provide safety coupled with functionality. However, the general comment still holds valid.
” The performance hit of pushing processes between nodes will be tremendous (Just consider the handles and security contexts that need to travel between nodes.) ”
You won’t be doing any of these things. Individual processes will be spawned on each node and will communicate with MPI or PVM – just as with a Linux Beowulf cluster. It’s not as if this is the first time Windows has been used for such a cluster….
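To make that model concrete, here is a hypothetical sketch using Python’s stdlib multiprocessing in place of a real MPI launcher (the function names are mine, not any actual cluster API): the head partitions the dataset, each worker process computes on its slice independently, and only explicit messages (inputs and results) ever cross process boundaries.

```python
from multiprocessing import Pool

def worker(args):
    """One 'compute node': sums the squares of its slice of the data."""
    rank, chunk = args
    return sum(x * x for x in chunk)

def run_job(data, n_workers=4):
    # The "head node" partitions the dataset; nothing is shared or
    # migrated. Each worker only ever sees the chunk it was handed.
    chunks = [(rank, data[rank::n_workers]) for rank in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(worker, chunks)
    # Gather step: combine the per-node partial results on the head.
    return sum(partials)

if __name__ == "__main__":
    print(run_job(list(range(100))))  # same answer as a serial sum of squares
```

In real MPI code the scatter/gather would be `MPI_Scatter`/`MPI_Reduce` calls inside one SPMD program, but the structure is the same: independent processes, explicit communication, no shared memory.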
OK. Nothing new here. MPI / PVM / Beowulf software could be used on any Windows since Windows NT. User-land clustering software is fairly easy to implement and can run on any platform; but unlike real kernel-level clustering OSes, you can only run specially written software, you cannot migrate running processes, and the performance is lacking (at best).
And again, you can run the same software on any Windows… why pay 5x more just to get an empty shell with a cool name?
Gilboa
I don’t know this matter enough to discuss technical details. I’m just curious :-)
Couldn’t it be they released the “box” (the container) and will slowly migrate some key applications to support it? But having the “box” would also mean you could let other developers start migrating/creating such special applications.
It looks good to me. Is that impossible?
The problem is not the box, but the technology behind it.
You cannot just migrate any application to clustering use; you need to re-write it from scratch in order for it to work in such an environment.
Now, things get much worse if the OS’s kernel was never designed to be clustered: instead of the kernel cloning a running process and migrating it to a remote node, the software itself needs to freeze its current execution state and manually migrate itself to a remote copy of itself running on the target node.
As far as I can see, Microsoft didn’t rewrite the NT 5.x kernel to allow clustering; they just took a normal Windows 2K3 server, added a couple of remote administration tools, and relabeled it as “HPC Windows”.
Gilboa
Well, honestly, packaging fresh air into a box to build a new product looks too brave, even for MS, especially if you’re gonna ask money for it. Naw, can’t believe that.
This cannot be Windows 2003 + tools. There’s certainly more. Just figure this out: you go to your customer with this HPC version, sell it, and make him/her pay 5x more than Win2003. What happens when he/she finds out that you sold just Win2003 + tools? You have lost your face and surely 5x more customers.
By having … on that machine … WMP, a developer is sure that basic libraries are there and can be consumed
please elaborate exactly what would be the use of “WMP basic libraries” on a “compute cluster” server.
thanx
“And again, you can run the same software on any Windows… why pay 5x more just to get an empty shell with a cool name? ”
The cluster version is not yet a released product. It’ll apparently include management software not available on XP – and a 2003 product need not cost as much as Enterprise Server. The “web server” version costs much less.
Regardless:
(1) How do you know how much it’ll cost? Indications are that it’ll cost LESS than Windows Server SE.
(2) How do you know what the license says?
You don’t. You merely assume – and that poorly.
“You cannot just migrate any application to clustering use; you need to re-write it from scratch in order for it to work in such an environment.”
Already done. Unless you somehow think someone buys a cluster to run Word…..
“Now, things get much worse if the OS’s kernel was never designed to be clustered: instead of the kernel cloning a running process and migrating it to a remote node, the software itself needs to freeze its current execution state and manually migrate itself to a remote copy of itself running on the target node.”
None of this is correct. Not one word beyond “and” and “the”.
Individual processes will be started on compute nodes by the head node and communicate with MPI or PVM – as I’ve already said. There will be no “migration” of anything. There’s no single system image. No single kernel running on the entire cluster. That’s not how a Beowulf or NOW works.
“But unlike real kernel-level clustering OSs, you can only run specially written software, you cannot migrate running processes, and the performance is lacking (at best). ”
What “kernel-level clustering OS” would that be?
IRIX, perhaps? Do you somehow not think you have to use something like MPI to run on an Origin SSI system?
Interprocess communication is handled the same way as on a Beowulf.
I can assure you that for relatively coarse-grained applications (which covers a large problem domain), the price/performance ratio for a Beowulf-type cluster is far superior to a traditional shared-memory supercomputer.
Please register so we can have this discussion on a name-to-name basis. I’ve got too many Anonymous in the thread and I don’t know who is who.
“Individual processes will be started on compute nodes by the head node and communicate with MPI or PVM – as I’ve already said. There will be no “migration” of anything. There’s no single system image. No single kernel running on the entire cluster. That’s not how a Beowulf or NOW works.”
Here we differ. I don’t claim that a Beowulf cluster is a good clustering system; I just claim that it’s a very limited one.
Stop for a second and think. I need to migrate a running copy of my software to a remote node, because the current node needs to be serviced. Unless you have kernel-level support for process migration, you’ll need very complex state management to allow the current process’s state to be moved to another copy. With kernel-level clustering, you just suspend the process and move a couple of pages to the remote node.
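What that application-level “freeze and move” looks like can be sketched in a few lines. This is a hypothetical illustration (class and method names are mine) using Python’s pickle as the checkpoint format; note that only application-visible state survives serialization, which is exactly why handles, sockets, and other kernel state make real migration hard without kernel support.

```python
import pickle

class Job:
    """A long-running computation that can checkpoint itself."""
    def __init__(self, total):
        self.total = total   # iterations to perform in all
        self.i = 0           # next iteration to run
        self.acc = 0         # running result

    def step(self, n):
        # Advance up to n iterations, then stop (e.g. for a node drain).
        for _ in range(n):
            if self.i >= self.total:
                break
            self.acc += self.i
            self.i += 1

    def freeze(self):
        # Serialize the application-visible state only; kernel state
        # (handles, security contexts, mapped files) cannot travel.
        return pickle.dumps(self)

    @staticmethod
    def thaw(blob):
        return pickle.loads(blob)

# Run half the job "on node A", migrate the checkpoint, finish "on node B".
job = Job(total=10)
job.step(5)
blob = job.freeze()          # bytes shipped to the other node
resumed = Job.thaw(blob)
resumed.step(5)
print(resumed.acc)           # sum(range(10)) == 45
```

With kernel-level migration (à la OpenMosix) none of this checkpointing code exists in the application; the kernel suspends the process and moves its pages, as the comment above argues.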
How can you compare Beowulf to OpenMosix?
but probably they will try to push an ogsi/net-like architecture?
“Here we differ. I don’t claim that a Beowulf cluster is a good clustering system; I just claim that it’s a very limited one.”
For any embarrassingly parallel problem – which is most of the problems clusters are built for – a traditional Beowulf with a job scheduler is more efficient than an OpenMosix system, or a real single-system-image system like an Origin. Even Bar’s own study shows this. Ever wonder why there are far more Beowulf clusters than OpenMosix clusters?
“Stop for a second and think. I need to migrate a running copy of my software to a remote node, because the current node needs to be serviced.”
Stop and think yourself: I don’t need to do this. Node failure just causes a resubmission of that dataset to another node. The loss in efficiency of having to start a process on the head node and transfer it – rather than just letting the batch scheduler start a process on the node it has assigned (which also allows easy partitioning for different problems) – isn’t worth it, compared to the small amount of time lost in the relatively rare event of a node failure.
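A minimal sketch of that resubmission policy (hypothetical names, plain Python standing in for a batch scheduler): the scheduler keeps no process state at all; when a node dies mid-task, the same input is simply handed to the next available node.

```python
def schedule(tasks, workers, run):
    """Assign each task to a worker; on failure, resubmit elsewhere."""
    results = {}
    for task in tasks:
        pending = list(workers)
        while pending:
            w = pending.pop(0)
            try:
                results[task] = run(w, task)
                break            # task done, move on
            except RuntimeError:
                continue         # node failed: resubmit input, no migration
    return results

# Worker 0 is permanently "down"; every task still completes elsewhere.
def run(worker, task):
    if worker == 0:
        raise RuntimeError("node failure")
    return task * task

print(schedule([1, 2, 3], [0, 1], run))   # {1: 1, 2: 4, 3: 9}
```

The cost of a failure here is re-running one task’s input, not migrating a live process, which is the trade-off the comment above describes.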
“Unless you have kernel-level support for process migration, you’ll need very complex state management to allow for the current process’ state to be moved to another copy.”
I don’t need to do this. Any of this.
“How can you compare Beowulf to OpenMosix?”
Easy. It’s more applicable to my problem domain than a quasi-SSI arrangement or even a real SSI system like an Altix. There’s a reason why Beowulf clusters are more popular than OpenMosix. There are problem domains where an SSI-type system (provided you can afford the high-bandwidth fabric) can be more efficient (QCM, for instance). Most supercomputer problems are not in those domains.