Two articles on OpenVMS, one at ShannonKnowsHPC and one at ITPlanet; read on.

ShannonKnowsHPC.com: It’s a well-known fact that OpenVMS has been successfully ported from Alpha to IPF. What have the developers been up to since completing the port? Some are putting the finishing touches on OpenVMS V8.2, the first commercial VMS/IPF release. Others are doing other things, not the least of which is characterizing the performance of OpenVMS on Alpha and on IPF.
On Wednesday 18 August, OpenVMS Engineer Greg Jordan delivered an HP World presentation wherein he characterized the performance of the OS on the two platforms, and shed light on developments underway to improve VMS-on-IPF performance.
ITPlanet.com: It’s not uncommon for alcoholics to suffer from the DT’s (Delirium Tremens — severe alcohol withdrawal characterized by agitation, violence, anxiety, insomnia, muscle cramps, tremor, delusion, hallucinations, and fever). But whoever heard of an operating system (OS) suffering from the malady? Well, the OpenVMS OS apparently has an acute case of the DT’s. In this instance, though, we are talking about disaster recovery.
Disaster Tolerance (DT) is a concept that extends beyond disaster recovery (DR). Traditional DR focuses on minimizing downtime, then picking up the pieces and reconstructing any lost data afterwards; DT aims to keep the system running through the disaster itself.
Strange that HP/Compaq canceled the Alpha architecture, since it apparently takes quite a lot of work to make the Itanium as fast as the Alpha was.
Also: “IPF executable images are typically three times the size of equivalent Alpha executables.” And that’s permanent.
I don’t know.
Alpha is a robust/mature architecture so it is understandable that it is pretty fast and hard to match out of the gate.
I think that it will be a while before the Itanium reaches complete parity with the Alpha.
I also wonder about the image-size difference between Alpha and Itanium executables.
For executables, smaller is better, so more work has to be done here too.
I will probably look at Itanium servers in a couple years.
Until then, Alpha rocks.
I would imagine the size difference is because the Itanium expects the compiler to add lots more information, mostly things other CPUs do internally.
If I’ve understood this correctly, it’s an understandable tradeoff: the compiler is in a better position for some forms of optimization, and time spent compiling is less of a problem than time spent reordering instructions while executing.
Increased code size (and thus increased pressure on the instruction cache(s)) is one of the drawbacks of VLIW designs. The main advantage is a simplified design compared with traditional out-of-order (OOO) superscalar processors, with some potential bottlenecks removed (mostly the hardware scheduling). However, this also removes* the possibility of dynamically adapting the scheduling when that would be advantageous.
(* not 100% true as dynamic recompilation is possible)
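The code-size point above can be made concrete with a toy model. The sketch below is not real IA-64 encoding — it just greedily packs operations into fixed three-slot bundles, as a statically scheduling compiler must, and pads unfillable slots with NOPs. The `bundle` function and its inputs are invented for illustration; the takeaway is that a dependent chain of operations forces mostly-empty bundles, which is one reason statically scheduled VLIW/EPIC code tends to be larger than code for an OOO processor that finds parallelism in hardware at run time.

```python
# Toy illustration (not real IA-64): pack a list of ops into fixed
# 3-slot bundles. An op may only share a bundle with ops it does not
# depend on; any slot left unfilled is padded with a NOP.
BUNDLE_SLOTS = 3

def bundle(ops, deps):
    """ops: op names in program order; deps: dict op -> set of ops it
    depends on. Returns a list of 3-slot bundles, NOP-padded."""
    issued, bundles = set(), []
    remaining = list(ops)
    while remaining:
        slot_ops = []
        for op in list(remaining):
            if len(slot_ops) == BUNDLE_SLOTS:
                break
            # An op can issue only if all its dependencies were issued
            # in an *earlier* bundle (issued is updated after the loop).
            if deps.get(op, set()) <= issued:
                slot_ops.append(op)
                remaining.remove(op)
        issued.update(slot_ops)
        slot_ops += ["nop"] * (BUNDLE_SLOTS - len(slot_ops))
        bundles.append(slot_ops)
    return bundles

# A short dependent chain leaves two of every three slots empty:
bundles = bundle(["load", "add", "store"],
                 {"add": {"load"}, "store": {"add"}})
# 3 bundles, 9 slots, to encode just 3 useful ops.
```

Three fully independent ops, by contrast, fit in a single bundle — the compiler’s job is to find enough such independence to keep the slots (and the instruction cache) from filling up with padding.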