HP and the Pacific Northwest National Laboratory (PNNL) have collaborated on a new 11.8-teraflop supercomputer for the Department of Energy. The $24 million HP supercomputer uses the Linux OS and Intel's Itanium 2 processors.
HP and the Pacific Northwest National Laboratory (PNNL) have collaborated on a new 11.8-teraflop supercomputer for the Department of Energy. The $24 million HP supercomputer uses the Linux OS and Intel's Itanium 2 processors.
The only real news is that it has completed its two-year acceptance process. It is currently #16 (11/04) on the Top500 list, was as high as #5 two years ago, and has been in production use for almost that whole period…
I can't wait for Longhorn on Itanium.
It would have been a lot better if they had chosen to run HP-UX v2 on it instead of Linux.
It would have been a lot better if they had chosen to run HP-UX v2 on it instead of Linux.
How so?
It would have been a lot better if they had chosen to run HP-UX v2 on it instead of Linux.
I would find it quite surprising if the HP compilers offered the same performance on Itanium 2 as the Intel compilers…
This can't be true… I distinctly remember an MS-financed report, not long ago, stating that Linux wasn't scalable.
Or hey, spend less than one fourth the cash on a cluster o' Xserves running OS X and get a few extra cycles for your trouble… gotta love that government spending!
Except it's not a few more cycles. System X is currently number 7; this one is at 16. The Xserves aren't just cheaper, they're faster.
Except it's not a few more cycles. System X is currently number 7…
So how much does this system cost? Without “that government spending”?
Clusters != supercomputers.
These 2,000 processors and 6.8 TB of RAM are one system. It can handle incredibly large tasks, as opposed to a cluster, which handles a much, much smaller task over and over again.
For example, an animation studio wouldn't use this, as their data sets wouldn't get much larger than, say, 8 GB. That type of work is what clusters are for. These guys are using datasets that extend into the TB range, and lightning-quick access to all of it is needed.
I’d say the government spent wisely.
These 2,000 processors and 6.8 TB of RAM are one system. It can handle incredibly large tasks, as opposed to a cluster, which handles a much, much smaller task over and over again.
This is of course nonsense. Ever heard of the Message Passing Interface (MPI) standard? Many scientific codes (in particular hydrodynamical codes) use MPI to distribute _one_ large computational task across the various nodes of a cluster. Sure, the tasks that run on each individual node are small, but, if programmed properly, the end result is exactly the same as if the code had executed one "incredibly large task" on a huge shared-memory machine.
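To make that concrete, here is a minimal MPI sketch in C (purely illustrative, nothing to do with PNNL's actual codes): one logically huge sum is split across ranks, every node works through its own slice, and MPI_Reduce glues the partial results back together, just as if a single big shared-memory machine had run the whole loop.

/* Illustrative only: one large sum distributed across cluster nodes.
   Build with mpicc, run with mpirun -np <nodes> ./sum */
#include <mpi.h>
#include <stdio.h>

#define TOTAL 100000000L   /* size of the "incredibly large" problem */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each node touches only its own slice of the index range. */
    long chunk = TOTAL / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? TOTAL : start + chunk;

    double local = 0.0, global = 0.0;
    for (long i = start; i < end; i++)
        local += 1.0 / (double)(i + 1);   /* stand-in for real physics */

    /* Combine the per-node partial sums into one result on rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("result = %f\n", global);

    MPI_Finalize();
    return 0;
}

Run it on two nodes or two hundred and the answer is the same; only the wall-clock time changes with how the work is spread out and how fast the interconnect is.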
How so?
Just to see how real Unix performs in this setup. I don't know why HP doesn't try to sell its own IP rather than push Linux. HP-UX is a far more advanced operating system than Linux, and I would guess it would be more efficient on the IA-64 architecture.
HP-UX is a far more advanced operating system than Linux, and I would guess it would be more efficient on the IA-64 architecture.
Too bad the people who designed this machine don’t know as much as you do.
Yes, we know about MPI, but that doesn't somehow make InfiniBand or whatever network faster than shared memory. Clusters have more latency. Take a cluster with 10 Mbit Ethernet, then one with 100 Mbit Ethernet, and finally one with InfiniBand: every time you step up the interconnect speed, the cluster works faster, and shared memory is more effective still. Lots of CPUs don't matter if you can't feed them. Large datasets will never be as efficient on a cluster.
Did I say that cluster performance is independent of network bandwidth? No, of course it is highly dependent on that. My point was that it is perfectly possible to carry out very large computations on a cluster, and it is done all the time. But anyway, thanks for telling me…