Even though the old-world UNIX operating systems, like IRIX and HP-UX, have been steadily losing ground to Linux for a long time now, they still get updated and improved. HP-UX 11i v3 is due to receive Update 4 tomorrow, with a host of new features that won't excite you if you're used to Linux, but which are still pretty useful for HP-UX users.
For instance, the update includes a feature called Dynamic Root Disk (DRD), which makes patching the operating system a little faster, reducing overall downtime by 50% (according to HP). This feature allows you to create a new root partition, make a copy of the operating system there, and apply the updates to this new copy. End users can continue to use the "old" partition while the admins test the new partition with the updates applied, and if it's ready to go, they can just reboot into the new partition. The old partition remains intact in case the new partition doesn't boot or otherwise fails.
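For the curious, the workflow is driven by HP's drd command set; very roughly, and with made-up disk and depot paths (exact options can differ between DRD releases), it looks like this:
  # Clone the running root volume group onto a spare disk
  # (the target disk path here is only an example)
  drd clone -v -x overwrite=true -t /dev/disk/disk5
  # Apply the update depot to the inactive clone while users keep
  # working on the live root (the depot path is hypothetical)
  drd runcmd swinstall -s /var/depots/11iv3_update4
  # Once the clone checks out, make it the boot disk and reboot;
  # the original root disk stays untouched as a fallback
  drd activate -x reboot=true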
This update will also introduce an online distribution model for HP-UX, which until now was distributed the old-world way, on physical media; now you can get a license within a few hours and download the operating system online. It also gains a DoD-compliant disk scrubbing feature, meaning you no longer have to pay someone else to scrub disks for you. There are also numerous improvements in the clustering extensions of HP-UX.
HP-UX 11i v3 Update 4 will be released tomorrow, April 17th.
linux .. linux .. linux .. doesn’t even run on many hp-servers. maybe because it doesn’t scale well enough (don’t kill me, it’s just a guess. i don’t know shit). for example, your options for the “HP Integrity Superdome with sx1000 chipset—64 processor/128 core server” are:
I won’t kill you, but
http://www.top500.org/charts/list/32/osfam
Yes, Linux is frequently used on supercomputers, because it’s free and there is source code available so it can be modified to work with custom machines. What’s your point?
His point was in reply to the person questioning whether Linux scaled to the kind of hardware HP-UX runs on. The answer being a very definite *yes*.
The answer is most likely no. Rather than repeat myself again, I'll just insert the link: http://www.osnews.com/thread?354393
Btw, in that graph there was also Windows; does that mean Windows scales great too?
SGI has scaled Linux on Altix to over 1024 cores in a single-image system.
Columbia alone is running Linux across 14K cores, in partitions of 512 processors per image.
http://www.nas.nasa.gov/Resources/Systems/columbia.html
That is pretty damn scalable if you ask me.
Why don’t you read the other comments, it’s all been said, for example here: http://www.osnews.com/thread?359003
They might have 1024 cores single-image Linux to crunch numbers, but nobody is going to run SAP or Oracle on it.
So now scalability means "running Oracle or SAP"? Interesting…
Is it so hard for some of you to actually recognize that you may actually be wrong, rather than simply moving the goalposts?
We were talking about scalability, period. And by the way, Oracle runs on large Linux installations just fine. Oracle themselves, and even large outfits like Amazon, run most of their 10G fabric on Linux. So what exactly was your point?
Linux may have plenty of issues. Scalability is not one of them, not by a long shot. In fact, there are few systems out there that scale the way Linux does. It takes a great deal of ignorance to try to make a case against Linux scalability. Jesus tap dancing Christ on a pogo stick…
It certainly does not mean running computational applications or scientific simulations.
It may also have its strong points, but definitely not as a high-end *server* OS; more likely towards embedded or desktop usage.
Right, and do you think that matters if you are purchasing a supercomputer worth tens of millions of euros (or dollars)?
There are two reasons Linux is used on supercomputers:
– It has become the standard UNIX(-ish) platform, running on everything from scientists' workstations up to supercomputers. Meaning that you can do stuff at every level with the same familiar environment (both API and userland).
– It’s scalable enough for supercomputer applications.
Eventually Linux will kill off all other UNIXen (except for Darwin/OS X). Even the remaining UNIX server vendors (IBM and HP) invest quite heavily in Linux: if they can keep their hardware customers while putting their own UNIX offerings on ice, there is a much better profit margin for them. Oh, and it is what customers request.
Sorry there, Linux will never kill off the BSD family, primarily FreeBSD. I also doubt Solaris is going to die any time soon. If anything, given another year of development I would say Solaris would be better suited as a common environment from the desktop up to supercomputers. I anxiously await the day.
I await the day when Solaris becomes a common desktop OS, displacing the idea of the Linux desktop… but I fear I may be waiting a long time, and if IBM or someone else does buy out Sun, what then? I have a bad feeling that Solaris will falter once the main driving force behind it, Sun, disappears. It would be a shame; Solaris has so much more potential to be an awesome desktop OS than Linux does or ever will.
About the BSDs I agree 100%. As long as there are programmers interested in keeping them going, they will never die and that’s a very good thing.
I hate to say this, because I love BSD (and was an active contributor to NetBSD). But if you look at marketshare, Linux has already finished off the BSDs. There were times (up to, say, 2001-2002) when most of the BSDs had an edge over Linux in many departments and were in the media limelight, and up till 2000 they also had relatively many users (those were the days when IIRC Apache or Sendmail explicitly recommended running their software on top of BSD rather than Linux, and when FreeBSD was still used on many webservers according to Netcraft). In recent years the edge and the media attention seem to have evaporated. There's still a chance that BSD will be significant in embedded devices, but even there it doesn't seem to have as much momentum as Linux.
We have been hearing that for years. I don't think it will catch up, and even if it does, Linux has too much momentum. Solaris is going to stay around for a while, but its growth days are over; its use has been declining for years. If the IBM-Sun deal goes through, Solaris will probably move to maintenance-only mode. IBM is very good at killing OSes.
Very few people use market share to decide if they’re going to use an OS or not. What was that? Linux was supposed to be “on the desktop” 5 years ago? 8 years ago? When is it coming? Exactly. Everything I’ve seen is still horrible. I’d rather hack off my left arm than be tortured with the horrible mess that is Linux.
I think it does. AFAIK most OSes for this kind of machine used to be licensed per CPU, and that makes for a lot of money on a machine with hundreds of CPUs. Also, if you have a custom machine (which supercomputers are) you need a customised OS, and it's easier to grab FOSS than to go through signing NDAs and stuff.
Yes, they mostly build drivers for their hardware. Otherwise they develop their own OSes.
Maybe; I've never seen any statistics, but I used to work at one of those companies and only a small minority of customers ran Linux.
There are special licensing deals for this kind of setup. And if the hardware is manufactured by the UNIX vendor, then they can afford to minimize the profit margin on the OS because they get a very good margin on the hardware. The price of the software is marginal compared to the hardware, the housing (most supercomputers that aren't clusters of cheap nodes need special cooling), and the personnel.
They do far more than writing drivers. Have you ever looked at the copyright for core Linux kernel files? IBM’s name is plastered all over.
The link on its own isn't much of a demonstration of scalability, since while Linux makes up a huge proportion of those systems, it doesn't distinguish between massively parallel machines and clusters of cheap hardware.
That said, they're there. #3 on the list is an SGI Altix box, apparently running SUSE. #6 is an array of Sun hardware running CentOS. #9 is Cray hardware, again running SUSE. All three are massively parallel setups, all with 10000+ cores.
And how do massive clusters prove anything in the way of scalability? Scalability is the ability to scale efficiently in a single image over a large number of processors. A lot of this depends on the quality of the user-space code being run, but being able to detect 512 CPUs and then spread a load efficiently over the whole thing is a different matter entirely.
As for HP-UX, the only people keeping it around are basically those who need the legacy support – apart from that, there are very few 'new customers' one can point to who are purchasing HP-UX for the first time (or upgrading their existing hardware without seriously considering moving the work to a Windows or Linux server).
I've always thought that the best course of action would be for HP and IBM to merge their operating systems with OpenSolaris and come up with a single UNIX specification to rule them all – with the differentiating factor coming from the administration tools and the hardware that sits underneath it all. I doubt it'll happen, but I think it is the best way to counter Windows and the growth of Linux in the enterprise: at least come up with a single common UNIX implementation which spans x86/x86-64, SPARC, POWER and Itanium.
Uh, it is supported… you must be pulling from the mx2 modules, which are no longer sold.
http://h18000.www1.hp.com/products/quickspecs/11717_div/11717_div.H…
RHEL 4 up to 128 cores and 2048GB RAM is supported…
Well, it would not be too surprising to have hardware which is custom made for HP-UX and NOT supported by Linux.
You need to write drivers for some of that stuff, and HP is not very deep into Linux.
The company I worked for had an HP Superdome replaced by an SGI Altix (with Linux), both of which worked fine.
HP-UX for sure has its merits, as has Linux. I once had the pleasure of having to work on an HP-UX workstation with CDE as the desktop. While the HP-UX kernel was roughly as stable as Linux (less stable than SGI IRIX), the CDE desktop was among the worst: far less usable and powerful than KDE or GNOME, and only slightly better than Windows. The only reason it was better than Windows was virtual desktops.
Somewhere, there are 8 sysadmins in the world who just got a big smile. Everyone else has moved on (yes, there are still plenty of instances of HP-UX around, but if HP had their way there wouldn't be).
HP makes a killing off contracts with HPUX and OpenVMS Hardware. Don’t assume please….
I know that, I am an OpenVMS sysadmin (as well as many other things), so don’t assume please .
Still, HP-UX has been in decline since 2006 and continues to fall faster each year.
HP-UX is actually one of the ones to beat. As dxvt mentioned above, parallel supercomputer clusters are usually very undemanding on the performance characteristics of an OS. Linux has a massive advantage here because it's highly flexible and people can adapt it to give rDMA access to the scientific applications and then get the heck out of the way while they are crunching the numbers and exhausting the memory bandwidth. The point is, Linux is good as a sort of 'bootloader' and low-frequency driver layer for those applications, and the hardware takes care of the rest (i.e. Linux code is not actually executed very often).
A database app is much more demanding on the system, since it really uses the I/O paths and has many users to manage, etc. HP-UX is one of the top performers in this space: http://www.tpc.org/tpcc/results/tpcc_perf_results.asp. The rankings keep changing around, and the top spots usually trade places between these erstwhile 'dead' OSes, like HP-UX and AIX. Even databases are not a good measure of system scalability, since they are so heavily tuned and act as an OS unto themselves. Maybe a webserver is better (at least the way webservers are designed today), but people are generally smart enough to scale out a web server rather than trying to serve off of a multimillion dollar god-box. Linux makes a pretty respectable showing here too, so it's not like it's bad or anything.
I did read your comment, and I understand what you’re trying to say, but you should say it a little more carefully. Certain performance characteristics are very important to cluster users.
Disc I/O may not be one of them, but a good scheduler, memory allocator and network I/O generally are. Having an rDMA-capable fabric is great, but it won't be much use if it takes 500ms for the scheduler to wake your process when the data it's been waiting for arrives.
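If you want to put actual numbers on scheduler wakeup latency rather than argue about it, cyclictest from the rt-tests package is the usual tool (assuming it's installed on the box, and run as root so it can use a realtime priority); it sleeps on a timer at a fixed interval and reports how late each wakeup arrives:
  # 4 timer threads at SCHED_FIFO priority 80, clock_nanosleep-based
  # sleeps every 1000 microseconds, 100000 loops; prints the min/avg/max
  # wakeup latency per thread
  cyclictest -t 4 -p 80 -n -i 1000 -l 100000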
For instance, the update includes a feature called dynamic root disk, which makes patching the operating system a little faster, reducing overall downtime by 50% (according to HP). This feature allows you to create a new root partition, make a copy of the operating system there, and apply the updates to this new copy. End users can continue to use the “old” partition, while the admins test the new partition with the new updates applied, and if it’s ready to go, they can just reboot into the new partition. The old partition remains intact in case the new partition doesn’t boot or otherwise fails.
Considering both Solaris (Live Upgrade) and AIX (clone) have had this feature for at least 5 years, that’s not much to get excited about.
DRD has been around for a few years; the information in the article is not correct, as it implies it is new. Update 4 also includes cluster improvements, LVM provisioning changes, improved AVIO for HPVM, online VM migration, a DoD-compliant disk scrub, active processor power savings, etc.
As for Linux, it has been running on the SAME HP hardware as HP-UX for a number of years now – standard hardware, lower TCO. I know, as I have SUSE, HP-UX and Red Hat running on various HP Itanium servers.
I never understand these OS-hating threads.
Linux works, HP-UX works, AIX works and Solaris works. I use them where appropriate for the business. For example, HP-UX has fantastic workload management software and a virtualisation stack, Solaris is free and has in ZFS an excellent filesystem, AIX is powerful with its memory migration ability, and Linux is free. What is the problem?
HP-UX, AIX and Solaris will be around for years to come (to what degree, who knows – nobody knows what tomorrow will bring), and Linux will continue to grow. In fact, Windows will continue to grow in the data centre too – what is the problem? The OS is only part of the IT infrastructure – server hardware, network, applications, etc. The cost of the OS is minimal compared to the rest, particularly if the rest is badly spec'd.
Don’t be a ‘hater’ be a ‘player’
When we talk about scalability (llama): we're not talking about clusters. We're talking about single-system-image big iron, where _one_ kernel runs on a single machine with > 16 CPUs in a cache-coherent shared-memory system.
The most cost-effective machines for cluster-building, in CPU power per dollar, are dual-socket quad-core Intel Core2-based machines, i.e. 8 cores per node. That's great if you have a workload with some coarse-grained parallelism, or one that is embarrassingly parallel, e.g. processing 100 separate data sets with single-threaded processes that don't depend on each other. It's not so great if you have a lot of processes that need fine-grained access to the same shared resource.
The canonical example here is a database server handling a database with a significant amount of write accesses. Otherwise you could just replicate it to a big cluster and spread the read load around. Locking for write access in a big cluster, even with low-latency interconnects like InfiniBand, is still _way_ higher overhead than you'd get in e.g. a 4- or 8-socket quad-core machine. Even NUMA big iron is better suited for this than a cluster.
CLUSTERS DON’T COUNT AS BIG IRON. They’re just a pile of normal machines. They do have their uses, though.
Get it? Large cluster is not big iron. It is NOT scalability. Get it?
If I have to modify Linux to be able to run on several CPUs, then Linux is not scalable. It is modifiable. I could redesign a C64 emulator and spread the load over several nodes – does that mean the C64 is scalable? NO. Solaris, on the other hand, uses the same installation DVD on small Intel Atom CPUs up to big iron with hundreds of CPUs – THAT is scalability. The same kernel, without modifications, scales up. Standard Linux does not scale; you have to modify it. I could also modify a C64 emulator – but the C64 is not scalable. Neither is Linux. BIG IRON and LARGE CLUSTERS are completely different.
LOL. I don’t think the term “big Iron” means what you think it means.
BTW, Linux has been scaled to over 512 processors in a single image. Neither HP-UX nor Solaris, nor AIX for that matter, has been proven to scale to that degree in single-image configurations. And to be honest, the markets those UNIXes target are not so much focused on things like scalability as they are on reliability and redundancy.
Saying that Linux “does not scale” is disingenuous at best, or you simply have no clue what you are talking about.
The Sun Fire M9000 has at maximum 64 physical CPUs x 4 cores x 2 threads, which equals 512 virtual CPUs exposed to the OS. That is no customised machine with a hacked OS, but standard hardware and the same installation CD you would put on any other kind of SPARC machine!
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems…
http://en.wikipedia.org/wiki/SPARC
Big Iron does not mean a cluster, which is a bunch of computers. Read my first post again. Slowly.
Regarding the claim that Solaris has not scaled to 512 CPUs, or is not able to, why don't you read this link?
http://queue.acm.org/detail.cfm?id=1095419
“These types of techniques allow the Solaris kernel to scale to thousands of threads, up to 1 million I/Os per second, and several hundred physical processors. Conveniently, this scaling work can be leveraged for CMP systems. Techniques such as those described here, which are vital for large SMP scaling, are now required even for entry-level CMP systems. Within the next five years, expect to see CMP hardware scaling to as many as 512 processor threads per system, pushing the requirements of operating system scaling past the extreme end of that realized today.”
Later this year, rumour has it that Sun will release a machine with 2048 hardware threads, which will be presented as 2048 CPUs to Solaris. It will have 8 Niagara III CPUs (256 threads each).
Here we see that prior to Linux kernel 2.6.27, Linux was 250 times slower in 64-CPU configurations. 250 times slower is hardly good scaling, is it? I bet there are still other limitations which will not allow Linux to scale.
http://kernelnewbies.org/Linux_2_6_27
To clarify: of course Linux will run on lots of CPUs; the question is how WELL Linux does it. The question is not whether Linux is capable of that – any OS is capable of running on a machine with lots of CPUs. The question is whether the CPUs will be utilized in an efficient manner. For Linux, the answer is no.
Linux sucks badly as a file server, due to limitations in the kernel:
http://www.enterprisestorageforum.com/sans/features/article.php/374…
It takes years to learn to scale well. Solaris in its first iterations didn't scale well either. But now, after 30 years, Solaris scales extremely well – with the same install DVD, without hacks or modifications to the kernel. Linux gets redesigned all the time, and as Andrew Morton explains, this approach leads to Linux having lots of bugs. The quality of the code sucks, as Linux kernel developer Andrew Morton explains. How on earth can Linux scale well if the bugs never get ironed out because of constant redesigns?
http://lwn.net/Articles/285088/
“I used to think [code quality] was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.”
The article you link to is guff and handwaving. To wit:
That's about as much information as you're going to get. Page size limitations? Which architectures is he talking about? What page alignment restrictions, and on which architectures? Then there's this gem:
I thought we’d left that old chestnut back in 2001, but apparently some people still do not grasp the idea that you can actually pay someone to worry about this on your behalf.
The rest of the article goes on to talk about how terrible “Linux filesystems” are, which apparently are limited to the ext family. Obviously selecting a suitable filesystem for the task at hand is a little beyond the author. Although strangely the author doesn’t complain that UFS is also a little long in the tooth and doesn’t have particularly good I/O characteristics either. Funny that!
Here is the follow-up article on why Linux sucks as a file server:
http://www.enterprisestorageforum.com/sans/features/article.php/374…
He got lots of mail from Linux fanboys and answers their criticism. Read it.
Hmmm…let’s see:
HPC eh?
Well duh. I wouldn't advise trying it with UFS or FFS either! HPC SAN is a highly specialised category. No in-tree Linux filesystem claims to be suitable for use as a large HPC SAN! This is why specialised filesystems such as Lustre exist.
It appears that when you and the author of the articles said "Linux is rubbish as a file server" you misspoke and really meant to say "Linux does not provide an open source HPC-SAN-capable filesystem". I can see how you could make such a mistake. The keys are like, right next to each other.
The default number of CPUs the _upstream_ kernel.org Linux kernel supports for x86 is 64.
linux-2.6/arch/x86/configs/i386_defconfig:CONFIG_NR_CPUS=64
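If you want to check what your own kernel was actually built with rather than trust the defconfig, the build config is usually lying around (paths vary by distribution, and /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC):
  # Most distribution kernels install their build config under /boot
  grep CONFIG_NR_CPUS /boot/config-$(uname -r)
  # Or ask the running kernel directly, when it exposes its config
  zcat /proc/config.gz | grep CONFIG_NR_CPUS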
Do the world a favor and unplug your computer. Please quit trolling. Some of the SGI Linux supercomputers push > 4096 CPUs in an SSI (single system image, for those that don't know). What the heck do you think the SLUB memory allocator was for? It was for dealing with memory on these crazy NUMA machines with terabytes of memory. How many Solaris boxes do you see with > 1024 CPUs in an SSI? None!
I think HP does this better than anything, though. HP NonStop has a theoretical limit of infinite CPUs, if memory serves. The rolling upgrades make it pretty much bulletproof.
http://h20223.www2.hp.com/nonstopcomputing/cache/76385-0-0-0-121.ht…
Moral of the story, “Linux is here to stay and quit being an idiot”. Unix is good and it is where most of the ideas in Linux come from.
I would hardly call 64 CPUs good scaling. Prior to Linux 2.6.27 the kernel was 250 times slower on 64-CPU systems. (Solaris people have been talking about hundreds of CPUs, and many more threads, for a long time.)
http://kernelnewbies.org/Linux_2_6_27
Those SGI supercomputers with 4096 CPUs, how old are they? Which Linux kernel version did they use? Linux v2.2? v2.4? Oh yes, 2.4 Linux scales very well. Here we have some Linux scaling experts debunking the FUD that Linux scales badly. They clarify everything about the FUD:
http://searchenterpriselinux.techtarget.com/news/article/0,289142,s…
“Linux has not lagged behind in scalability, [but] some vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux’s] horizontal scaling.
Today, Linux kernel 2.4 scales to about four CPUs
-With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling.
Q: Two years from now, where will Linux be, scalability-wise, in comparison to Windows and Unix?
A: It will be at least comparable in most areas”
Linux scales to 10,000 CPUs in one single image in the current v2.4, and in Linux 2.6 the kernel will improve to 16-way. Huh? Is it only me who sees a contradiction? You have bought everything about Linux scaling well and being well coded.
When I say that Linux scales badly, even the Linux experts agree, as I have shown. I am not trolling. I find it extremely hard to believe that in v2.4 Linux scaled badly (2-4 CPUs) and in 2.6 it suddenly scales better than Solaris does with hundreds of CPUs. It takes decades to scale well. What am I, a fool? Can't I think? Do I buy everything? No critical thinking? Do the world a favour and apply some critical thinking to everything you hear.
Linux scales well on large clusters, yes. But that is NOT Big Iron. How many times must I repeat that? Read my first post again. When people says Linux scales well (which it does) then they talk about clusters.
In other words: Linux scales well HORIZONTALLY, but sucks VERTICALLY. Get it? How many times must I say this? It is explained on Wikipedia. Read it. Twice. Slowly.
Linux Kernel developer Andrew Morton
http://lwn.net/Articles/285088/
“it would help if people’s patches were less buggy.”
Let me say this again, since apparently some people have problems understanding it: if I modify a kernel to scale well, the kernel does not scale well. The MODIFIED kernel scales well.
I can modify a C64 to run on a large cluster, but hardly anyone would say the C64 scales well. The modified version does, but not the C64. If modified Linux runs on a large cluster, fine. But that does not mean standard Linux scales well. Due to its limitations, standard Linux does not run there without modifications. But Solaris does run on big iron with hundreds of CPUs from the same install DVD. It scales well.
trasz: The fact that Linux uses spinlocks is one of the reasons its performance drops noticeably under high load on many CPUs. Other operating systems use fully functional mutexes, along with interrupt threads.
The dynamic root update sounds a lot like the lu update available in Solaris that updates the system on another slice, disk or sometimes forcibly broken mirror.
But the article suggests that testing can be done against the dynamic root partition/slice/disk while the old system is running. With lu update you have to boot the new partition before you can realistically do any testing.
I would assert that the HP-UX dynamic root is the same unless they are doing some sort of chroot or virtual booting against it.
libray, you are correct, it is the same as lu; I've used both. I'm not sure what testing you can actually do until you boot off the DRD'd disk – that is the proof of success.
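For reference, the Solaris lu flow being compared here goes roughly like this on a ZFS root (on UFS you also pass -m options to lucreate to say where the new boot environment lives; the BE name and image path below are made up for illustration):
  # Create an alternate boot environment from the running system
  lucreate -n patched_be
  # Upgrade/patch the inactive BE from an install image or patch set
  luupgrade -u -n patched_be -s /mnt/sol10_image
  # Switch to the new BE and reboot into it; the old BE remains
  # available as a fallback
  luactivate patched_be
  init 6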
I thought this thread was about HP-UX; now it seems to be about whether Linux scales or not. Do I care? Erm, no. Is my OS better than your OS? Erm, yes and no, but do I actually care? Erm, no. Linux does what it says on the tin, as do AIX, Solaris and HP-UX.
Operating systems are at a state now where they all pretty much do what we need them to do. Questioning which is the best OS is like asking which is the best car: the answer is only valid for the person asking the question, based on their circumstances and their environment. Get it? No man is an island, and no OS is an island either – you need hardware, network, applications, etc.
I have used Debian, SUSE, Red Hat, AIX, Solaris and HP-UX. "Which is the best OS?" and "Which do I prefer?" are two different questions. The former is a dumb question, whilst the latter is OK as it relates to personal choice.
BTW, I LIKE Debian and HP-UX, but what does it matter – I have to make sure I can ping something first 😉
Don't be a 'hater', be a 'player'
HP’s third party vendors write to HP-UX. Many of them couldn’t care less about Linux and the customers who run HP hardware couldn’t care less, either.
They're looking for a "bulletproof" system with application software that's just as stable, and most of all they want to call a vendor and hear "we've got an answer for you" instead of "well, that's beta software, what do you expect?"
The scientific world may take chances, but businesses are more conservative, if they want to stay in business. That's how IBM maintains its business.
It's fine to say that Linux runs on such hardware, but it mostly doesn't matter if the application they need doesn't run on Linux.