“Several of the concerns about Oracle’s acquisition of Sun have revolved around how Unix technologies led by Sun would continue under the new ownership. As it turns out, Solaris users might not have much to worry about, as Oracle executives on Wednesday affirmed their commitment to preserving the efforts. In the case of Solaris, Oracle had already been a big supporter of the rival Linux operating system. Oracle has its own Enterprise Linux offering, based on Red Hat Enterprise Linux. For Oracle CEO Larry Ellison, the idea that Linux and Solaris are mutually exclusive is a false choice.”
It’s good to see they aren’t scrapping Solaris, but I don’t quite agree with their idea that “Solaris is for the high end” and “Linux is for the low end.” I suppose they are taking that position because Solaris/Unix is easier to sell?
Oh well, could be worse. I have been following the OpenSolaris project quite closely and hope that it continues to make progress into the desktop sector.
Oracle doesn’t have a lot of interest in the desktop sector. They want to build Big Machines. So I don’t think they will put a lot of effort into competing with Windows 7 and OS X. But hey, it’s an open-source project, so I’m sure people will contribute. And as long as KDE/GNOME/X.org progress, OpenSolaris progresses as well.
Oracle employees will contribute .. it is not like OpenSolaris has a big community of devs outside of Oracle.
I think Unix is a legacy system. Sure, there are a few places where it is still way ahead of Linux, but that goes both ways, and Linux gets new features every three months while Unix gets them maybe once a year at best.
You don’t have to be a master futurist to know where that is going to end.
Developing Unix in the long run is just added cost. IBM understands that and wants to dump AIX if their customers let them (like in 2020 or something).
“I think Unix is a legacy system. Sure, there are a few places where it is still way ahead of Linux, but that goes both ways, and Linux gets new features every three months while Unix gets them maybe once a year at best.”
You speak like this is a great thing! Sure, for your piddly little laptop/desktop it is, but for an enterprise it’s a non-issue. Stability is paramount in these installations. A new feature is just that: new! There is little chance a new feature would go into production every three months anyway, as in a mission-critical enterprise it would likely take that long just to test. That’s why there are enterprise distributions and there are community distributions.
“You don’t have to be a master futurist to know where that is going to end.”
No you don’t. It will end with an enterprise distribution with 3-7 years of support and nearly no newfangled technology attached.
“Developing Unix in the long run is just added cost. IBM understands that and wants to dump AIX if their customers let them (like in 2020 or something)”
IBM understands money, PERIOD! They don’t care if you want to run Windows, AIX, GNU/Linux, Solaris, or any other OS. They care that you buy products and services from them. They make money by tailoring a solution to the need. If you think IBM is a GNU/Linux shop you’re sadly mistaken. They’re a hardware/software/services shop plain and simple.
If you actually followed OpenSolaris development, you would see that it gets new features at a much faster rate than Linux.
I guess it depends on what you class as “Unix”.
If by “Unix” you mean “Unix-derived” systems, then OSs like the *BSDs are still very popular.
Or if you meant UNIX certified systems then let’s not forget that systems as recent as OS X Leopard carry that classification.
But if you mean “pure” Unix, then Unix already “died” years ago.
Technically, Solaris is directly based on the original Unix codebase and was licensed as such.
Unix is alive and well, thank you very much 😉
So had BSD – once upon a time.
And given the rate at which technology moves, I do question how much of the original Unix code is still inside Solaris 10. If I had to guess, I’d say not much.
So hence the “unix derived” portion of my original post.
The Solaris code is open for anyone to see. I’d say that you’d be surprised how much code from the original Sys V base is still there.
Granted things get modified all the time. But as I said, unix is still alive and well…
Actually, IIRC Solaris wasn’t exclusively built on SysV code. There’s BSD and even Xenix code in there too.
In fact, the whole point of SunOS 5 was that it was marking a merger of some of the previous leading Unix variants as opposed to being an out and out BSD system (as many of the pre-Solaris-branding SunOS releases were).
I think the confusion comes in that SunOS 5 was also branded as Solaris 2.0 (which was also technically the 1st Solaris release as 1.0 was retrospectively named) and SysV Release 4.
So while Solaris is SysV derived, it’s also BSD derived and certainly not 1st, 2nd or even 3rd generation SysV.
So while it may still contain SysV code, I doubt there’s that much left from the original SysV codebase as you suggested.
I know it is – that was the whole point of my original post (which you evidently missed)
I am not missing anything. First off Xenix was also derived from the original Unix code base. And Solaris only added the interfaces to support some of the old SunOS BSDness. There is a reason why SunOS lived for almost a decade after Solaris had been introduced.
What you are trying to claim is akin to claiming that DOS 5 was not really DOS because it was a significantly “improved” version of the original DOS 1.0 release. Which makes little sense to me.
I am not claiming anything; the source code is there for people to see. And indeed there is tons of SysV stuff in there.
Well yes you have because you keep reiterating my point: Unix is alive and well
No, what I’m trying to explain is that you keep arguing with me by reiterating my fraking point.
I’m stating that Unix is still alive and well but just in an updated sense.
I’ve said this, you’ve said this, so why are we still arguing?!
Clearly you have some other kind of definition of “claiming” than the rest of us because you state you’re not claiming anything and then go on to make a claim.
http://www.google.co.uk/search?hl=en&q=define%3Aclaim&meta=&saf…
I’d say, under most people’s definitions, that your statement regarding SysV was a “claim”.
As for the claim itself, I’m not going to argue whether it’s true or not, because quite frankly I’m not about to start running difference engines against the two sources just to resolve an argument when you’re clearly just as apathetic about backing your point up, and the underlying argument is one we already agree on (despite your continual attempts to argue otherwise).
So are we done now?
(and yes, I am being particularly moody today. It’s Monday morning and I had to “sleep” on the couch last night as my fraking laptop exploded yesterday, destroying the bedroom in its wake).
I wouldn’t say Solaris is slow at inventing new stuff – look at ZFS, zones, Crossbow, DTrace, SMF, Live Upgrade, …
Solaris and ZFS alone would make it worthwhile switching from Linux to Solaris, even on a low-end file server. For example, try to get verifiable backups from a software RAID system on Linux: it is theoretically possible by combining LVM (for snapshots) and software RAID, but the performance is not usable even to play with. Not to mention how much easier it is to replace a failing disk in ZFS compared to Linux software RAID.
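For what it’s worth, here is roughly what the disk-swap difference looks like in practice (pool, disk and array names below are invented; check `zpool status` / `/proc/mdstat` for your real ones):

```shell
# ZFS: one command, and the resilver kicks off automatically
zpool replace tank c1t2d0 c1t3d0
zpool status tank          # watch the resilver progress
zpool scrub tank           # later: verify every block against its checksum

# Linux md RAID: fail, remove, add, then wait for the full rebuild
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1
cat /proc/mdstat           # watch the rebuild progress
```

The scrub is the “verifiable” part: ZFS checks data against its checksums, while md only re-mirrors blocks without knowing whether they were right to begin with.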
Not to mention that ZFS will do Windows file sharing without adding extra software like Samba – just add a few extra mount properties and you are done.
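For example (pool and share names are invented; this uses the in-kernel CIFS server, so the smb/server service has to be enabled first):

```shell
svcadm enable -r smb/server               # turn on the built-in CIFS service
zfs create -o sharesmb=on tank/share      # create and share the dataset in one step
zfs set sharesmb=name=myshare tank/share  # optional: give the share a friendlier name
```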
It will take a very long time before Linux comes even close to Solaris for server-side use. On the desktop Linux has a lead, but both of them are beaten by Mac OS X, which is actually certified Unix. Given the rapid development of the OpenSolaris desktop, I would say OpenSolaris is closing in on Linux. After all, they both use the same desktop GUI toolkits, so the user experience should be similar.
People said the same thing about OpenOffice and it has really just been a Sun project. Software projects with very large codebases can easily stagnate without paid developers.
Regarding the above..
You guys are dreaming that Linux is ready for the big time. Even IBM, who some Linux people see as some kind of saviour, is saying that Linux is not enterprise class, and will sell you a POWERwotsit donkey cart in the biggest rack frame they can find.
I use AIX, Solaris and Linux every day, and Linux does not cut it *yet*. Sun gear and Solaris are truly enterprise class: you can hotswap memory and CPUs in big SPARC systems with NO DOWNTIME, and even the older boxes had hundreds of CPUs when maxed out. Solaris quite simply has a lead over all Unix flavours (especially the laggard that is AIX) when it comes to scalability in numbers, and it’s got at least a 20-year head start on Linux.
There’s plenty of room for both, however!
Now, what I imagine Larry means is Solaris on the big Enterprise stuff for banks, scientific, military etc (ie big markup for those willing to pay for it) with Oracle Grid Databases, while the SMB departmental stuff will be smaller and more flexible using Linux (where cost against Microsoft counts). I’ll also bet the lower end stuff will get a tweak and tuned MySQL (again, I use both daily!)
The guy has a knack for wringing cash out all the way, and let’s face it, he wouldn’t keep what he couldn’t use.
Laggard AIX has something truly “enterprise” that trendy Solaris hasn’t: good virtualization.
System p (and AIX) have incredible virtualization capabilities, the best UNIX virtualization by far, and AIX has been using the same technology for years. It’s a really solid product. Sun changed their virtualization plans every year!
Containers/Zones and LDOMs are good but can’t compete with AIX’s LPARs (or VMware ESX or even Xen). You have to do black magic to run RHEL or Solaris 8/9 using “Branded Zones”… that’s not “high end”, that’s not “Enterprise”… that’s a complete joke.
ZFS and DTrace are amazing, I love ’em, but they’re pretty new technologies! You don’t have ZFS in every Solaris box out there! (In fact VxVM and SVM are much more common.) AIX has had LVM since 1991 or so, and Linux since 1998.
Solaris is really good, but it isn’t more “high end” than RHEL, AIX or any other enterprise *nix. That’s complete marketing bullshit.
“Containers/Zones and LDOMs are good but can’t compete with AIX’s LPARs (or VMware ESX or even Xen). You have to do black magic to run RHEL or Solaris 8/9 using “Branded Zones”… that’s not “high end”, that’s not “Enterprise”… that’s a complete joke.”
Branded Zones require black magic? Eh? You have never tried zones yourself. It is very easy to set zones up. And zones are extremely lightweight too, as all Linux kernel calls get remapped to Solaris kernel calls. There is only one kernel active: the Solaris kernel. A zone typically requires 40MB of RAM and 100MB of disk space (if you use ZFS). One guy started 1000 zones in 1GB of RAM – it was dog slow, but it worked. Try to do that with AIX?
The point is, you just zip up an old Solaris 8 server and then drop that zip file into a Solaris 10 Zone, and now you can get rid of your old server.
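Setting up a plain native zone really is only a handful of commands – something like this sketch (zone name and path are invented; run as root on Solaris 10):

```shell
# Configure a minimal native zone in one shot
zonecfg -z testzone 'create; set zonepath=/zones/testzone; commit'
zoneadm -z testzone install   # populate the zone from the global zone
zoneadm -z testzone boot      # boot the new virtual environment
zlogin testzone               # log in and use it like a separate server
```

Branded zones (lx, solaris8) take a couple of extra flags at `zonecfg` time, but the workflow is the same.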
You can also use LDOMs, which are Solaris’ equivalent to LPARs.
“ZFS and DTrace are amazing, I love ’em, but they’re pretty new technologies! You don’t have ZFS in every Solaris box out there! (In fact VxVM and SVM are much more common.) AIX has had LVM since 1991 or so, and Linux since 1998.”
LVM cannot be compared to ZFS; it is ridiculous. ZFS is the only solution that REALLY protects your data; LVM does not.
IBM has a DTrace copy: ProbeVue. I wonder how good it is? And then IBM wants to have the ZFS copy: BTRFS.
Regarding “Linux is on Top500”: yes, we find Linux on the Top500, but Top500 supercomputers are basically a bunch of PCs on a fast network. They are very specialized and do one thing fast: calculate. They use a stripped-down and modified Linux kernel, not the standard Linux kernel. The difference between supercomputers and Big Iron (one big machine with lots of CPUs) is vast; see the Wikipedia article. Big Iron is hard to do: they are general, multi-purpose machines with lots of complex stuff. A bunch of PCs sending messages and calculating is easy to do – it is almost like how SETI@home or Folding@home works: a bunch of computers do a calculation and send back the result. It is very different from Big Iron, where Solaris scales well – it is the same install CD and the same Solaris kernel on an Asus Eee PC all the way up to big iron with hundreds of CPUs. That is true scalability! You don’t have to modify anything in the Solaris kernel!
Why don’t supercomputers use the Solaris kernel, which scales much better on Big Iron? The Solaris kernel is complex and difficult to modify and strip down, with some weird licensing (commercial stuff is OK?). It is far easier to take a naive kernel like Linux, under the GPL, and modify it.
Look at SAP’s latest benchmarks. Linux on 48 cores only utilizes 87% of all cores, whereas Solaris utilizes 99%. That means Linux does not scale on a single machine with many cores.
Hi there, I have Solaris 2.5 at work. The hardware is an Enterprise 3000 that is dying. The SAP guy told us that the SAP install is too old, cannot be upgraded, and must run on 2.5. Could you please advise the best way forward?
Thanx & regards
Actually Linux scales far better than both Solaris and AIX. The largest single-image computers are either old Irix boxes or new NUMA Intel boxes running Linux. In fact most of the NUMA scheduling and cell migration from Irix was ported to Linux a long time ago.
The thing that both AIX and Solaris have going for them is that they both have their own proprietary integrated platforms (SPARC and POWER systems), which provide most of the “magic” regarding fault tolerance and other enterprise-like facilities.
But from a processing-scalability perspective, sorry, neither AIX nor Solaris can hold a candle to Linux. However, as I said, in other enterprise-centric features both platforms are far more mature than Linux, but that is mostly due to the specialized hardware they run on rather than just the software itself.
BTW, some of the largest enterprise systems, like Amazon, run almost exclusively on Linux: from web fronts and load balancers to even the DB backends, with some sprinkles of Solaris/Oracle at the very, very deep backend. Granted, computers are just tools, and for plenty of applications Solaris and AIX are far better suited than Linux. But in the same sense, Linux may not have some of the specific capabilities of those systems. Labeling Linux as immature or not ready for the enterprise is just silly.
Sorry, but these top supercomputers are often multi-image clusters, and even if they aren’t, they still run very similar workloads. If a single-image OS runs an application simple enough to be run on a cluster, it doesn’t prove scalability to me. They also have to be specially modified for their task – that alone puts them in a different league than AIX and Solaris, which run unmodified from one CPU to hundreds of cores. Larry Ellison recently explained, for example, that he sees Solaris’ place on the biggest machines. If what you wrote were true, he would choose OEL for that.
The biggest AIX server you can buy is 128 cores, the biggest Solaris server is 256 cores, and the biggest Linux server is 1024 cores. These are single-image machines, not clusters. Just throwing that out there.
Ellison has a very good reason to promote Solaris on big machines ahead of Linux no matter what the actual facts might be. He makes more money that way. He would have to be monumentally stupid to come out and say that a cluster of Linux machines is better than Solaris on big iron after having spent a ton of money on buying Sun, even if it was true.
I don’t work with big iron machines so I won’t comment on what actually works better on which workloads in the real world, but I certainly wouldn’t use a comment by Ellison as actual proof of anything.
System p (and System z) virtualization is more of a hardware function than part of the OS; sure, you can run AIX inside of an LPAR, but you can also run Linux.
Solaris zones are not meant to compete with this or with VMware; they are a lighter-weight alternative which serves a different purpose. There is nothing stopping you from installing Solaris inside of VMware or a similar technology. And on the subject of LPARs, Sun has something similar on their high-end offerings anyway, and has for years.
I suppose you are talking about the capabilities of System p hardware, not AIX. Btw, SPARC machines have virtualization capabilities too.
Citation needed.
First of all, you are totally confusing hardware virtualization and OS virtualization. The technologies you named are different tools for different purposes. Second, with zones you can have hundreds of virtual environments on a single server; how many can you have with VMware, LPARs or Xen? Third, Solaris does run on Xen, as both dom0 and domU.
That’s not a joke; that’s how OS virtualization works.
What? Solaris 9 was EOLed a long time ago, and there is already an eighth update of Solaris 10 (which includes ZFS) available. Do you expect ZFS to be backported or what?
What’s your point? Solaris has had Disksuite and VxVM for a very, very long time. (I’m not going to google for exact dates.) Btw, Linux’s LVM is practically useless.
Linux on servers is an x86 (== low-end) OS, that’s all. The term “enterprise Linux” is an oxymoron and is not in the same category as Solaris and AIX.
All of your points are very good, and it’s true that most of the virtualization in AIX is hardware based, but just like zones, AIX has OS virtualization as well. It’s called WPAR (as in workload partition). In the end though, both platforms rely mostly on hardware when it comes to true virtualization, with POWER having the added advantage of hardware virtualization support built in from the low end all the way up, having inherited all the tech from the mainframe.
The two things that AIX is missing are obviously DTrace and ZFS, the latter of which anybody would have to admit is a truly superb filesystem. But if I wanted serious hardware virtualization and had the choice, I’d go for AIX any day. The ability to add and remove servers from a virtualized pool, live migrate from one server to another, and do all this through an extremely simple-to-use web interface without needing to resort to the command line puts pretty much anything Sun has to shame, for the moment.
Addendum: I wouldn’t go so far as to call enterprise Linux an oxymoron. I’ve installed SLES on POWER for a TSM backup server, and Linux uses the inherited low-downtime capabilities of the POWER platform just as well as AIX does. Thanks to the tools IBM has developed for LoP (Linux on POWER), it is just as trivial to swap out CPUs, memory and expansion cards as it is using AIX. If the kernel couldn’t handle hot-swapping, then I’d think you’d have a point, but that is demonstrably not the case.
I agree that AIX+POWER probably has better virtualization capabilities than Solaris+SPARC. After all, IBM has all the mainframe know-how. I wouldn’t say that Sun should be ashamed, as most of the things you outlined can be done with other tools, like Sun Cluster (as for live migration).
I’m still waiting for Linus to acknowledge the need for stable kernel API and ABI. Until that day, I can’t take Linux seriously, sorry.
I forgot all about Sun Cluster, so yeah, you are spot on. Still, I’ve worked intensely with both, though it’s been a while since I worked in a SPARC environment, and I have to say that for me, virtualization on POWER kicks its rear end. In the end though, they are both excellent systems, and preferences do vary.
Lol! Fair enough 🙂
Well, I don’t agree with you.
Linux is on top of big serious business. Check out TOP500 lists. They wouldn’t be using Linux if it wasn’t ready for Enterprise, moron.
Hmmmmm, the last ten years hasn’t happened then? ROTFL.
IBM doesn’t care what is installed on their machines. It’s clear that AIX will eventually be phased out in favour of Linux to save on support costs. Unlike Sun, they don’t have a religious attachment to their in-house Unix and won’t let it destroy them.
That must be why Sun headed towards going bust, got bought out by Oracle and will definitely disappear if they follow the same strategy – because it has already failed.
Hmmmm, that must be why Sun has had its lunch eaten for the past ten years, IBM has made a ton of cash and Linux has flourished at Sun’s expense.
Errrrr, Linux is already being used there sweetheart – and it’s the very reason why Sun has headed headlong towards bankruptcy.
Still, a pig headed unwillingness to look facts in the face will see us through!
I am getting tired of all these Oracle articles. Can’t we talk about something else, like the Apple iPad?
(Joking, people)
Solaris and other Unix variants are good, and have some features lacking from Linux. But let’s not forget that almost 90% of the world’s top 500 supercomputers already run Linux, and that trend is only growing: http://blog.internetnews.com/skerner/2009/11/linux-dominates-top-50… That should make it quite clear to everyone that Linux is perfectly ready for (if not already dominating) the high-end server field.
However, organizations using supercomputers have large and excellent admin teams who can also act as developers when needed. But big companies may lack those resources. Unix providers may be capable of offering companies, out of the box, not only a unified OS but also their decades of experience and support, plus all the needed ready-made server software on top of Unix. In that sense a Unix like Solaris can still be quite competitive, especially in the high-end market.
In typical fanboy fashion, you are good at misunderstanding things. High-end stuff, the so-called big iron, has really nothing to do with so-called “supercomputers”. Nor do “supercomputers” serve anything. They just execute flops, and they do it fast. That’s their job.
Do you realize that those supercomputers have some of the largest and most complex I/O subsystems, file serving, and database/mining applications?
Where do you think the globs of data come from to generate all those globs of flops? Pixie dust?
And yes, those supercomputers are considered big iron. In fact, there is no bigger iron than a supercomputer. Heck, the largest IBM z10 system looks puny when compared to a large Cray, which, yes, runs AMD processors and Linux in all its glory.
Please do not confuse “mainframe” with “big iron.” And for what it is worth, IBM will happily sell you an Integrated Facility for Linux® (IFL) subsystem for your shiny new z-series mainframe.
tylerdurden
Not correct. I reiterate: Linux scales well on large clusters, a bunch of PCs on a fast network. But on a single machine with many CPUs, Linux does not scale well.
A while back, Linux scaled to 2-4 CPUs. Here are Linux scalability experts dispelling the FUD that Linux does not scale well:
http://searchenterpriselinux.techtarget.com/news/article/0,289142,s…
“Greenblatt: Linux has not lagged behind in scalability, [but] some [Unix] vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux’s] horizontal scaling. The National Oceanographic and Atmospheric Administration (NOAA) has had similar results in predicting weather. In the commercial world, Shell Exploration is doing seismic work on Linux that was once done on a Cray [supercomputer].
Terpstra: Accusations have been made that Unix and Windows scale to far greater numbers of processors than the Linux 2.4 kernel can. While this is true, a bare claim like this makes little sense unless it is placed within the context of deployment [needs]. Today, Linux kernel 2.4 scales to about four CPUs. Still, one should consider whether a four-CPU server is needed for departmental file and print serving in the average company. After all, there are an average 45 users per server.”
They talk about how Linux scales really well; at the end they say:
“With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling.”
Scaling up to many CPUs on a single machine takes decades; it is not easily done. Linux DOES scale well on a large cluster, which is just what Google uses, or Amazon, etc. That is HORIZONTAL scaling. Linux sucks at VERTICAL scaling (one big computer).
Look at this new SAP benchmark, which uses 48 cores. Linux utilizes 87% of all CPUs, while Solaris utilizes 99%. That is another proof that Linux does not scale well even today, and it is the reason Solaris scores higher on SAP (despite using slower CPUs and slower RAM):
Linux 87% CPU
http://download.sap.com/download.epd?context=40E2D9D5E00EEF7CCDB058…
Solaris 99% CPU
http://download.sap.com/download.epd?context=B1FEF26EB0CC34664FC7E8…
Linux may EXIST on old Irix boxes with lots of CPUs, but what do you think those boxes do? Serve thousands of users, or calculate stuff? The mere existence of Linux on Irix hardware is not proof that Linux scales well. Most probably, the scaling sucks badly, according to the links above.
Here we see that as recently as Linux 2.6.27, Linux had severe problems with 64-socket machines; only now is Linux no longer 250 times slower in some circumstances.
http://kernelnewbies.org/Linux_2_6_27
I wonder how many problems Linux has not fixed yet?
No, that is not true. Solaris has self-repair functionality in software, with ~40% fewer hardware problems, which is what the collected data says. And functionality in software always beats functionality in hardware: it is easier to patch software, easier to add new functionality, etc. With software you can easily add 100MB of new code; in hardware, you have to swap parts and so on.
Ehrm. That new SAP benchmark I showed is on x86, not on specialized SPARC hardware. So, wrong again.
Here is a Linux zealot who switched to Solaris, because Linux does not cut it when going to large-scale workloads:
http://blogs.digitar.com/jjww/2008/04/democratizing-storage/
“What got us using OpenSolaris was Linux’s (circa 2005) unreliable SCSI and storage subsystems. I/Os erroring out on our SAN array would be silently ignored (not retried) by Linux, creating quiet corruption that would require fail-over events. It didn’t affect our customers, but we were going nuts managing it. When we moved to OpenSolaris, we could finally trust that no errors in the logs literally meant no errors. In a lot of ways, Solaris benefits from 15 years of making mistakes in enterprise environments. Solaris anticipates and safely handles all of the crazy edge cases we’ve encountered with faulty equipment and software that’s gone haywire”
Here is another guy who switched to Solaris and, on the same hardware, saw lots of improvement:
http://www.lethargy.org/~jesus/archives/77-Choosing-Solaris-10-over…
Yes, they use a bunch of computers on a network. Linux is run at low utilization, because otherwise it crashes.
Maybe you have heard about Linux being buggy and bloated? What does Linus T actually mean when he says that Linux is bloated? What does Andrew Morton mean when he says that “the code quality is declining”? What does Dave Jones mean when he says that “the kernel is going to pieces”? What does Alan Cox mean when he says that “the kernel should be fixed”? I mean, several Linux kernel developers are talking about bloat and bugs and declining quality. I wonder what they actually mean?
Yes, those supercomputers also have many more CPUs than big iron. But you must understand that supercomputers and big iron are very different. Supercomputers have a simple structure. They only do one thing: calculate. That is easy to do.
*sigh* Read here, and see that supercomputers and big iron are very different.
http://en.wikipedia.org/wiki/Supercomputer#Hardware_and_software_de…
“[Super computers] tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks.”
The difference is similar to a GPU vs a CPU. A CPU is general purpose, and much more complex than a GPU, which is extremely fast at one thing. You do realize that a GPU and a CPU are different; likewise, supercomputers and big iron are different.
Some more information.
http://en.wikipedia.org/wiki/Mainframe_computer#Differences_from_su…
I stopped reading when you started talking about the 2.4 kernel. That linux kernel hasn’t been current in years. If you think that it is current, it makes your whole post useless.
Nowhere do I claim that v2.4 is the current kernel.
What I am trying to say is
1) v2.4 scaled really badly.
2) It is really difficult to scale well; it takes decades.
3) It is unrealistic to believe that v2.6 scales well.
4) I provided numerous links that support claim 3).
5) Linux scales well on a large cluster: horizontal scaling.
6) Linux scales badly on a single computer: vertical scaling.
The bits about the 2.4 kernel in your post were entirely irrelevant. I said I stopped reading there. If you wanted people to read your post stick to what’s relevant.
I know big iron and mainframes are not the same thing. I am trying to show that big iron is more similar to a mainframe than to a supercomputer. And we see that mainframes and supercomputers are very different; hence, big iron is very different from supercomputers.
But IBM mainframes are dog slow. One IBM mainframe with 64 CPUs offers you 28,000 MIPS. That is 437 MIPS/CPU. You can emulate an IBM mainframe in software with “Hercules”, and according to Wikipedia, one Intel Nehalem-EX gives you 400 MIPS. But that is under software emulation, which is 5-10x slower than running native code. If you could run native code on a Nehalem-EX, you would need roughly 16 Intel CPUs to match one 64-CPU mainframe.

As the top-configured IBM Unix P595 server which held the old TPC-C record cost $35 million list price, I don’t expect a top-configured IBM mainframe to cost much less. Compare that mainframe price to two 8-socket Intel Nehalem boxes. Which computer would you prefer to run Linux on: an IBM mainframe or two Intel boxes? I bet IBM is willing to sell me a shiny new z-series mainframe, but frankly, I would save the $35 million and buy an 8-socket Intel box for $25,000(?).
The funny thing is, you need four IBM POWER6 CPUs at 5GHz to match two Intel Nehalems at 2.93GHz in TPC-C, according to official benchmarks. Not really fast IBM hardware, eh?
SEGEDUNUM
Just because Sun didn’t sell much hardware doesn’t mean Sun made bad hardware. MS sells Windows very well; does that mean Windows is the best OS? No. Popularity != technical superiority.
I’m not a fanboy of Linux any more than of Unix. Instead, I just tried to bring another point to the discussion to balance it. Personally, I would have nothing against a Unix like Solaris being used more on top supercomputers too, but the Linux trend there is just clear.
Also, there are many types of and uses for high-end servers, mainframes, clusters and supercomputers, and those classifications are not exclusive but may overlap. However, it is true, especially historically, that big mainframe computers have often come with a Unix OS. But, as the general trend in supercomputers suggests (predicting wider change in other high-end computing too), that has started to change as well.
Edit: “UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.” http://en.wikipedia.org/wiki/Mainframe_computer A similar kind of development has been happening with Linux nowadays.
What are the exact arguments if someone claims that a general-purpose OS good and reliable enough for, say, a cluster-based supercomputer used for expert systems, business predictions, weather modelling, space research etc. today couldn’t be a good OS choice for a high-end mainframe too (and vice versa)?
Well, the general OS is going to require a lot more planning and coercion to solve a problem over many computers. In theory, an OS designed for parallel programming can act as a single machine for any type of problem. You can sort of do this with a general-purpose OS, but it won’t be as efficient, especially if the hardware is designed for parallel processing.
But if you look at a lot of the problems that require parallel processing, you’ll find that many of them can be solved just fine with a room filled with cheap x64 boxes running a standard OS. A lot of it boils down to cost efficiency: parallel-OS gains can easily be trumped by a standard OS making use of x64 commodity pricing.
Note that I’m not talking about Linux vs Solaris here, just theory.
Thing is, UNIX is nowhere near stable and secure enough for the mainframe. UNIXes like AIX and Solaris are great for big iron and the odd mainframe LPAR (which will be perfectly happy to run Linux too), but if IBM swapped z/OS out for AIX on all of its mainframes and started using a virtualized environment to run all the then-legacy MVS code, their customers would A) throw a hissy fit and B) probably sue them.
Mainframes are the most closed environment in computing, and although I dislike MS’s and Apple’s ways of doing business, they both pale in comparison to IBM and the mainframe. Every one of those machines is hooked up to an external phone line that will phone home the minute you reach 91% CPU usage. What happens is the reserve CPUs will kick in and you will be billed for the added system usage, and let me tell you, renting a mainframe is not cheap.
But all in all, the hardware costs are nothing compared to the price of software on z/OS. This is why you now see zLinux and IBM wanting to port Solaris over. It’s a win/win for them: first off, customers want to be able to run a fully supported, cheap open-source stack in a sandboxed environment, and secondly, it means more people will find the mainframe more attractive and flexible. For example, the cost of WebSphere for z/OS is something like 100 times that of WebSphere for Linux. With Tomcat on zLinux, you pay only for the support.
In the end though, if the underlying MVS system, with its almost bulletproof security and total reliability, were no longer present, the mainframe would lose its biggest selling points.
For those who are sceptical about OpenSolaris, I suggest you look at the many changes making their way into OpenSolaris:
http://mail.opensolaris.org/pipermail/onnv-notify/
The only two problems I have had with it so far have been the horrible printer drivers and the lack of a decent printing system (why don’t they move to CUPS?), and NWAM, which was simply horrible – but from what I read, Phase 1 has some big changes and Phase 2 improves things further.
On the positive side, there is Boomer, which has vastly improved the overall audio experience with OpenSolaris, and apparently HAL is eventually going to be removed from OpenSolaris and replaced with native tools to handle device detection (which, funnily enough, have existed in OpenSolaris for quite some time).
The big thing will be what 10.04/10.03 looks like, the future direction of OpenSolaris post-Oracle-buyout, and how far it develops over the next year leading to the next release at the end of 2010.