IBM widened its lead in the worldwide server market in 2003 at the expense of Sun Microsystems, making particular gains in the Unix server market, new figures show. In Unix servers, No. 3 IBM saw revenue grow 13 percent to $4.1 billion. Revenue for first-place Sun shrank 16 percent to $5.4 billion, while No. 2 HP saw revenue shrink 4 percent to $5.3 billion. The overall Unix server market shrank 4 percent to $16.7 billion, while the Linux server market grew 90 percent to $2.8 billion, Gartner said.
Will the Sun rise again, or is it on a slow journey to the grave? Me wonders.
Whether Sun dies or not, Sparc has no future. I can’t see it achieving the economies of scale of Intel/AMD.
It remains to be seen if Sun can replace shrinking Sparc revenues with something else.
Ramana
Opteron. At least that is what I think. Sparc could have a future, if Sun and Fujitsu pool together and give it one.
Whether Sun dies or not, Sparc has no future. I can’t see it achieving the economies of scale of Intel/AMD.
Commodity hardware has little use in the high end server space. Intel’s offering in this area, the Itanium, only shipped 100,000 units in 2003.
And considering the advancements Fujitsu has made on SPARC64, as well as the massively SMT processors Sun has on the horizon like Niagara, I don’t think we’ll see SPARC dying soon.
It remains to be seen if Sun can replace shrinking Sparc revenues with something else.
All your naysaying seems to ignore the fact that Sun is still the number one server vendor. They’ve recently introduced the UltraSPARC IV and diversified their product lines with Opteron offerings.
Sun isn’t dying any time soon.
While IBM closed the gap between 1st and 2nd place, Sun is still No. 1.
And, imho,
with technology like “Throughput Computing”
http://www.sun.com/processors/throughput/
SPARC does have a future.
What’s really newsworthy is that Linux grew by 90%. And datacenter Linux is only really beginning now. Linux is going to eat away at any company whose main business is Unix.
IBM saw a tidal wave change a-coming and made the necessary adjustments to its corporate structure and its product offerings.
Sun could have a good thing if it expands on its Linux Desktop offerings. As it stands, its JDS is too crappy a product to be taken seriously.
If they get past their testing-the-waters period, they could become a true leader in the Linux desktop space, but they need to act now and offer something far more compelling than what they have so far.
Price Performance.
“Sparc IV”, “Throughput Computing”, “Sparc64 SMT” – these are all buzz words surrounding unreleased products.
Ultimately this hardware will have to pass through the holy grail of acceptance: price performance.
Gone are the days of massive multi-billion dollar projects where SAs could throw $200,000 into a single server.
AMD’s Opteron and Intel’s 32-bit lineup already blow the doors off Sparc systems in price performance – which is confirmed by this report.
Future processors released by AMD and Intel’s i32e (or iAMD64) will pose even greater challenges to the price performance viability of unreleased Sparc hardware.
What’s really newsworthy is that Linux grew by 90%. And datacenter Linux is only really beginning now. Linux is going to eat away at any company whose main business is Unix.
I think you’ll find these are mostly low-end servers, not big iron. IBM has only recently managed to get Linux running on their zSeries servers (as opposed to Linux running on top of z/OS through z/VM). Sales of the SGI Altix certainly aren’t impressive compared to high-end Sun Fire servers or zSeries mainframes.
Sun could have a good thing if it expands on its Linux Desktop offerings. As it stands, its JDS is too crappy a product to be taken seriously.
I’m afraid the entire nation of China (http://www.sun.com/smi/Press/sunflash/2003-11/sunflash.20031117.3.h…) and the British Government (http://www.eweek.com/article2/0,4149,1405983,00.asp) beg to differ.
“Sparc IV”, “Throughput Computing”, “Sparc64 SMT” – these are all buzz words surrounding unreleased products.
Unreleased products? The UltraSPARC IV supports SMT, and is available in the following systems:
http://www.sun.com/servers/highend/sunfire_e25k/
http://www.sun.com/servers/highend/sunfire_e20k/
http://www.sun.com/servers/midrange/sunfire_e2900/
http://www.sun.com/servers/midrange/sunfire_e4900/
http://www.sun.com/servers/midrange/sunfire_e6900/
Whereas SPARC64 processors are the heart of Fujitsu PRIMEPOWER servers:
http://www.fujitsu.com/support/computing/server/unix/documents/
Ultimately this hardware will have to pass through the holy grail of acceptance: price performance.
I believe the term you’re looking for is TCO.
AMD’s Opteron and Intel’s 32-bit lineup already blow the doors off Sparc systems in price performance – which is confirmed by this report.
Xeon servers do not fit the role of massively scalable high availability servers. Opteron may eventually fill this role, as soon as a tier 1 vendor begins offering single system image Opteron servers in greater than 2 processor configurations. At the present time they do not. Neither Xeon nor Opteron can compete in the realm of Itanium, SPARC, POWER, or MIPS at this point in time, so why even bother broaching the issue? These systems fit an entirely different problem domain than low-end x86 servers are designed to address.
I think the other part the naysayers keep forgetting is that they haven’t disclosed the number of units sold for that period. Considering that Sun has had some major price cuts over the last few months, one has to wait a few quarters for businesses to consider purchasing these “lower cost” SPARC machines. Also, what the revenue doesn’t disclose is how many potential customers are actually waiting for Solaris 9 to be ready for Opteron, or in fact the number of Linux installations now considering Solaris x86-64 on Opteron as a viable alternative to Linux on Intel.
Those links you provided about the Chinese and British government mean nothing.
I was already aware of those deals. Sun got those deals because it has brand awareness. JDS still sucks when compared to a later incarnation of Suse (it is based on Suse 8.2), or when compared to Mandrake or Red Hat.
As it stands, it only meets the needs of very, very few organizations. I am happy that Sun is pushing desktop Linux, but it will have to do much better. And I am certain that it is able to do so if, and that’s a big if, they realize how important this truly is to their long-term survival.
Desktop Linux also opens the door for them to sell more server boxes, but they need to spend a lot more money than they have in creating a compelling distribution. Rebranding Suse and then distributing a very minimalist distribution does not count.
Bascule, do you work for Sun? If so, you should disclose that in the interest of intellectual integrity. I ask because I have seen you come to the defense of Sun here any time anyone wonders about Sun.
The other thing you forget is that many companies are preferring to scale out rather than up. Clustering now makes that very efficient and easy.
There’s no need to buy a large expensive Sun server. We successfully operate a fairly large (10GB) database on a cluster of dual Xeon systems using replication.
You don’t need high availability if you have redundancy. I expect that in the future we’ll see large Sun servers replaced by clusters of smaller Linux/x86 systems.
“Neither Xeon nor Opteron can compete in the realm of Itanium, SPARC, POWER, or MIPS at this point in time, so why even bother broaching the issue? These systems fit an entirely different problem domain”
And what problem domain is that?
A well developed parallel application using a common message passing specification such as MPI will run just as well on a cluster of x86 / x86-64 hardware with redundant nodes, RAID disks, 2 – 4GB of memory, and a dual-GIGE / Myrinet / Dolphin / Infiniband network mesh as it would on the above -and it would be a lot cheaper to both purchase and maintain, to boot.
The above mindset is prevalent in the old school of super computing and in sales groups who sell big-iron.
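The scatter/compute/reduce shape of such a message-passing application can be sketched in miniature. The sketch below is a single-process simulation of the pattern with hypothetical helper names, not real MPI; in an actual program the ranks would be separate processes and the equivalents would be MPI_Scatter and MPI_Reduce (or mpi4py’s scatter/reduce):

```python
def scatter(data, n_ranks):
    """Split the global problem into one chunk per rank (cf. MPI_Scatter)."""
    k, r = divmod(len(data), n_ranks)
    chunks, start = [], 0
    for i in range(n_ranks):
        end = start + k + (1 if i < r else 0)  # spread the remainder evenly
        chunks.append(data[start:end])
        start = end
    return chunks

def local_work(chunk):
    """Each rank computes on its own chunk independently (no communication)."""
    return sum(x * x for x in chunk)

def reduce_to_root(partials):
    """Combine the partial results at rank 0 (cf. MPI_Reduce with MPI_SUM)."""
    return sum(partials)

data = list(range(100))
partials = [local_work(c) for c in scatter(data, 8)]
print(reduce_to_root(partials))  # -> 328350, same answer for any rank count
```

The point of the pattern is that the answer is independent of how many ranks you scatter across, which is what lets the same code run on 8 cheap boxes or 64.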
Eu (IP: —.209.42.38.dsli.com) – Posted on 2004-02-26 02:14:10
Those links you provided about the Chinese and British government mean nothing.
I was already aware of those deals. Sun got those deals because it has brand awareness. JDS still sucks when compared to a later incarnation of Suse (it is based on Suse 8.2), or when compared to Mandrake or Red Hat.
Sun is the only tier 1 vendor offering a desktop Linux. They were chosen because they are actually capable of supporting clients as large as the entire country of China or the British Government.
As it stands, it only meets the needs of very, very few organizations.
Yes, it meets the needs of enterprise customers. That’s who JDS is targeted at.
Bascule, do you work for Sun? If so, you should disclose in the interest of intellectual integrity. I ask because I have seen you come to the defense of SUN here anytime anyone wonders about Sun.
No, I do not work for Sun. I’m merely responding from the perspective of enterprise and scientific computing demands, as (is quite evident from this thread alone) many posting here are *completely* unaware of enterprise computing needs.
The other thing you forget is that many companies are preferring to scale out rather than up. Clustering now makes that very efficient and easy.
Mullighan (IP: —.com)
There’s no need to buy a large expensive Sun server. We successfully operate a fairly large (10GB) database on a cluster of dual Xeon systems using replication.
You don’t need high availability if you have redundancy. I expect that in the future we’ll see large Sun servers replaced by clusters of smaller Linux/x86 systems.
32-bit architectures are *completely* unsuited for large databases, and calling a 10GB database “fairly large” is completely ridiculous.
32-bit architectures can only address 4GB of memory at a time, meaning queries with result sets larger than 4GB must be processed in 4GB chunks, storing partial query results to disk in the interim. For truly large (multiterabyte) databases this is completely unacceptable.
Using replication on a cluster of database servers does not provide scalability for databases which are constantly being modified. Replication means every server must apply every incoming write, which eliminates most of the scalability that could be achieved with a cluster.
Please, don’t comment on enterprise computing needs from the perspective of an average computer user.
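A toy model (hypothetical numbers, purely illustrative) shows the shape of that argument: under full replication every node must apply every write, so only the read share of the workload spreads across nodes.

```python
def replicated_throughput(n, node_qps, write_fraction):
    """Rough throughput ceiling for n fully replicated database nodes.

    Every write is applied on all n nodes; each read runs on one node.
    Solving n*T*w + T*(1-w) <= n*node_qps for the total throughput T.
    """
    w = write_fraction
    return n * node_qps / (n * w + (1 - w))

# Read-only workloads scale linearly with node count...
print(replicated_throughput(10, 1000, 0.0))   # -> 10000.0
# ...but with 50% writes, ten nodes buy less than double one node's rate,
# and the ceiling as n grows is node_qps / w (here, 2000 qps).
print(round(replicated_throughput(10, 1000, 0.5)))  # -> 1818
```

Partitioning (sharding) the data avoids this ceiling by sending each write only to the node that owns the row, at the cost of complicating cross-node queries.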
And what problem domain is that?
Any central service, such as a database, central mail server, central build server for a code repository, etc.
Basically, any service which operates on top of a single system image.
A well developed parallel application using a common message passing specification such as MPI will run just as well on a cluster of x86 / x86-64 hardware with redundant nodes, RAID disks, 2 – 4GB of memory, and a dual-GIGE / Myrinet / Dolphin / Infiniband network mesh as it would on the above -and it would be a lot cheaper to both purchase and maintain, to boot.
Please, don’t confuse HPC clusters with high availability clusters. These fall into dramatically different problem domains. I’m paid to deploy HPC clusters, and yes, they are low end x86 systems. But the needs of scientific computing users are *completely* different from those of consumers of high availability clusters.
90% Linux growth – WOW! The Penguin rules!
I wonder how much growth Windows servers saw? Hopefully we’ll see that share drop slowly over the next few years.
I don’t think Sun will be gone anytime soon, but they really need to start pushing their Opteron offering.
It’s always great to read a thread on Sun and see a million clueless laypeople commenting on enterprise computing needs as if they actually knew what they are. Wake up! If you haven’t ever worked in enterprise computing or in a datacenter, you have no idea what the requirements actually are. And if you think a cluster of low-end Linux/x86 boxes can fill the role of a SunFire 15k in an enterprise computing environment, you’ll probably never work in enterprise computing either.
Running SuSE on the computer in your parents’ basement does not qualify you to speculate on the future of enterprise computing needs. Talk to me when you’re a top level SA for a Fortune 500 corporation.
“There’s no need to buy a large expensive Sun server. We successfully operate a fairly large (10GB) database on a cluster of dual Xeon systems using replication.”
10GB *used* to be large. Nowadays, 10GB is pretty darn wuss-sized. You are very unlikely to need a big honking Sun to run it unless you’re insanely transactional or need an SGA > 2GB [in which case you can have a smaller Sun/AIX box with lots of memory].
Yours truly,
Jeffrey Boulier
No need to get snippy, buddy! Our sf15ks handle the backend database and execution of a CORBA application utilized by 43 branch offices worldwide. All told, they do transaction processing for over 10,000 computer systems, managing the financial records of hundreds of thousands of clients.
Itanium
The Itanium was and still is the world’s greatest processor disaster.
Our sf15ks handle the backend database and execution of a CORBA application used by 43 branch offices worldwide
Why not farm out execution of the CORBA application on a cluster of Linux/x86 systems?
Anonymous (IP: —.levtwn01.pa.comcast.net)
The Itanium was and still is the world’s greatest processor disaster.
Perhaps, but unlike the Xeon the Itanium is a processor suited to enterprise computing tasks.
Mullighan (IP: —.com)
Why not farm out execution of the CORBA application on a cluster of Linux/x86 systems?
And who will provide a cluster of Linux/x86 systems that can also provide the same degree of support as Sun? Dell? Pitching the deployment of a mission-critical enterprise task on a cluster of Dells running Redhat to a CIO of a Fortune 500 company sounds like a good way to get fired on the spot.
A well developed parallel application using a common message passing specification such as MPI will run just as well on a cluster of x86 / x86-64 hardware with redundant nodes, RAID disks, 2 – 4GB of memory, and a dual-GIGE / Myrinet / Dolphin / Infiniband network mesh as it would on the above -and it would be a lot cheaper to both purchase and maintain, to boot.
> Please, don’t confuse HPC clusters with high availability
> clusters.
I’m not confusing the two – clearly I am talking about HPC, where you also agree that big iron is no longer competitive against multitudes of well-equipped, well-configured, lightweight systems.
> (HPC and HA) These fall into dramatically different
> problem domains.
Agreed, high performance and high availability are different domains, with different needs.
However, different computational and reliability needs do not necessarily imply that both cannot be solved with the same underlying hardware – given smart people putting the software and hardware pieces together right.
When was the last time you visited Google.com and they were down?
Google is one example of an enterprise archival and retrieval system that has maintained better than high-availability uptime, built from lightweight systems.
After all, a 512-processor non-uniform-memory-access (NUMA) machine with two terabytes of memory and 10TB of disk is basically a homogeneous constellation of lightweight boxes strung together with your choice of network topology, running some auto-balancing schedulers and a distributed, caching file system – and some smart user software to make efficient use of it.
Companies that expand their thinking beyond the realm of big iron can accomplish the same results with a lower price tag.
I have no doubt Sun will do excellently with their lightweight line-up; however, the glory days of big iron are numbered.
On the topic of high availability on light weight machines, some great launch points are:
http://www.linuxvirtualserver.org/
http://linux-ha.org/
Why not farm out execution of the CORBA application on a cluster of Linux/x86 systems?
What’s the backend database supposed to run on, genius? If we weren’t running on Sun we’d be running on IBM or HP.
Seriously, we made the transition from e10ks to sf15ks in less than a week. Deploying a Linux/x86 cluster to do the same thing would be a nightmare. And the support we receive from Sun is nothing short of excellent. We have no reason to move away from Sun, and if we did it wouldn’t be to a bunch of low end x86 crap.
90% Linux growth – WOW! The Penguin rules!
Let’s see if that continues once Solaris 10 is released for x86-64/x86-32. If Solaris is cheaper and performs better, where is the incentive to use Linux on the server?
I wonder how much growth Windows Servers made? Hopefully we’ll see that share drop slowly over the next few years.
In theory. In practice, there are still many MCSEs who cost a dime an hour; the main attraction of Windows has always been cheap labour, not a good product.
I don’t think Sun will be gone anytime soon, but they really need to start pushing their Opteron offering.
True. Sun will keep pushing SPARC further up the enterprise ladder, and when Hypertransport 3.0 is released and made available for Opteron, things will start to get interesting. Also, considering you will be able to get a 64-bit version of Solaris for Opteron by the end of this year, there is very little incentive to be running Linux/Itanium or Linux/Opteron.
Yes, you can get a cluster of PCs, install parallel software, and then you can tell everyone in the world that you have a supercomputer…
But not all problems can be broken into smaller problems, and for a lot of problems, message passing with MPI/PVM is not enough.
Yes, you can get a cluster of PCs, install parallel software, and then you can tell everyone in the world that you have a supercomputer…
But not all problems can be broken into smaller problems, and for a lot of problems, message passing with MPI/PVM is not enough.
Yes, that is true. However, clusters mean more systems, which means more complication, which means more staff required to manage them, thus resulting in a higher TCO. If everyone could cluster 1000 servers together at the same TCO as an e12K, then everyone would do it; the fact is, that isn’t the case (as you pointed out).
Yes, you can get a cluster of PCs, install parallel software, and then you can tell everyone in the world that you have a supercomputer…
But not all problems can be broken into smaller problems, and for a lot of problems, message passing with MPI/PVM is not enough.
What problems cannot be broken down into smaller pieces? Ultimately, even the big iron comes with multiple processors, so what’s your point exactly? Clustering just relies on networking the processors together. With low enough latencies anything is possible.
“The overall Unix server market shrank 4 percent to $16.7
billion, while the Linux server market grew 90 percent to
$2.8 billion, Gartner said.”
That means
– the Unix server market shrank (in billions) by
16.7 * 0.04/(1-0.04) = 0.70,
– the Linux server market grew (in billions) by
2.8 * 0.9/(1+0.9) = 1.33.
So, even if we attribute all of the decline in Unix’s market share (in terms of revenue) entirely to Linux, still
(1.33-0.70)/1.33 == 47%
of Linux’s growth in server market revenues came from elsewhere.
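For anyone checking the arithmetic, the same figures can be recomputed in a few lines (a quick sanity check using only the Gartner numbers quoted above; before rounding, the exact share lands at ~47.5%):

```python
# Gartner figures quoted above: 2003 revenue (in $ billions) and YoY change.
unix_2003, unix_change = 16.7, -0.04    # shrank 4 percent to $16.7B
linux_2003, linux_change = 2.8, 0.90    # grew 90 percent to $2.8B

# Back out the 2002 figures, then the absolute change in billions.
unix_2002 = unix_2003 / (1 + unix_change)     # ~17.40
linux_2002 = linux_2003 / (1 + linux_change)  # ~1.47

unix_decline = unix_2002 - unix_2003          # ~0.70
linux_growth = linux_2003 - linux_2002        # ~1.33

# Even crediting the entire Unix decline to Linux, the remainder of
# Linux's growth had to come from elsewhere.
from_elsewhere = (linux_growth - unix_decline) / linux_growth
print(f"{from_elsewhere:.1%}")  # -> 47.5%
```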
Anonymous (IP: —.levtwn01.pa.comcast.net)
The Itanium was and still is the world’s greatest processor disaster.
The Itanium, tbh, is only on its second generation. Also, due to its architecture you really need efficient compilers (HP’s one blows the pants off ICC). Still, there have been some major improvements in Itanium compilers in the last 18 months.
CPU2k base numbers for the 1GHz Deerfield: 827/1382 in HP’s new rx1600 2P/1U system.
What’s impressive is that it beats McKinley’s highest SPECint_base2k score (810) at the same frequency, even with a slightly slower (14 vs 12 cycle) L3 cache of half the capacity (1.5 vs 3.0 MB).
The one thing I’ll say that’s good about IBM’s Linux strategy is that they are actually pushing Linux on non-x86 platforms, be it their zSeries, PPC64 or even Itanium boxes (which they hardly market, for obvious reasons).
“What problems cannot be broken down to smaller pieces? Ultimately, even the big iron comes with multiple processors so what’s your point exactly? Clustering just relies on networking the processors together. With low enough latencies anything is possible.”
My point was that it is not only about breaking into smaller pieces, but also how.
Shared memory is usually the easiest, and message passing is not as easy to program.
And there is something called “global shared memory” that is not available to common PCs, you will need special hardware…
One Sun Fire 4800 (quad CPU) we had at a datacenter handled, on its own, an average of 300+ users logged into a database with over 50 DB instances containing over 100GB of logistics data. The QA machine was double the size! None of the Dell Xeon servers comes near.
> “What problems cannot be broken down to smaller pieces?
> Ultimately, even the big iron comes with multiple
> processors, so what’s your point exactly? Clustering just
> relies on networking the processors together. With low
> enough latencies anything is possible.”
Normally, there is not even a need to go the hard way and parallelize the business codes themselves. There is more than enough concurrency which is embarrassingly easy to exploit.
> My point was that it is not only about breaking into
> smaller pieces, but also how.
>
> Shared memory is usually the easiest, and message passing
> is not as easy to program.
>
> And there is something called “global shared memory” that
> is not available to common PCs, you will need special
> hardware…
There have been several approaches to doing this in software. MYOAN and ASVM are just two such approaches, done on the Intel Paragon.
Carsten
“So, even if we attribute all decline in Unix’s market
share (in terms of revenue) entirely to Linux, still
(1.33-0.70)/1.33 == 47% of Linux’s growth in server market revenues came from elsewhere.”
True. But what is that “elsewhere” exactly? It can just as well be /dev/null. The growth doesn’t need to come from others; what if the whole market grew in this timespan? Then it is e.g. possible that every OS’s revenue jumps too. So this doesn’t say much, I’d say.
Sorry people, but Sun Microsystems’ SPARC technology is the best RISC, and it is better than IBM’s!
By the end of this year Sun Microsystems will be very strong!
I can’t help but wonder how long specialized hardware and OSes have before the commodity stuff (PowerPCs, x86) kills them off.
There are three trends here: Unix is now the core of two lower-end OSes (OS X and Linux); processors keep getting faster; and (this last one is really the kicker) the cost of producing ASICs/processors is going up enormously.
I’d guess that within 5-10 years you’ll see Intel/AMD and the PowerPC accounting for most of the high end.
Commodity hardware has little use in the high end server space.
What are you talking about? Have you ever worked in a data center? They’re filled with little 1U PCs now. Did you know Google has been running on commodity hardware since its inception?
An intelligent company would opt to use commodity hardware and customized software before spending the premium for old tried and true UNIX. If their needs exceed the capabilities of commodity hardware they would probably have to get a custom solution from the big boys anyway.
A one-sided view and not being open-minded is a sure sign of arteriosclerosis in a (your) company.
Large companies, I mean “Fortune 500” companies, go belly-up by implementing this way of thinking.
Especially the just-say-yes persons.
I’m not going to dignify most of the comments here with a response. Clearly most of you commenting are far out of your league.
In response to everyone touting Google…
Google runs on commodity hardware. How? Google uses what is, in comparison to most enterprise tasks, a largely static database, periodically updated rather than incrementally updated by tens of thousands of queries per second, as would be seen in an enterprise database. Built into the database is an incredible amount of redundancy, which dramatically decreases the I/O load necessary to parallelize queries among multiple systems. Load balancing and failover have been built into the software itself.
Could Google’s database software be modified to fill enterprise database needs? The answer is… absolutely not. Google’s database is designed primarily for one purpose: fast substring searches, which discard all non-alphanumeric characters. This is a trivial query compared to the queries being performed on enterprise databases.
Any of you suggesting that a cluster of commodity x86 systems could fill the roles of enterprise computing systems completely fail to take into account the I/O demands of such tasks. Any parallelization benefits achieved through the use of a cluster of x86 systems would be completely mitigated by the interconnect used, compared to the crossbar architectures utilized in mainframe and high end Unix servers.
No. Looks like Bascule wins another round…
Sun Continues Strong Momentum
Thursday, Feb 26 @ 13:58 PST
“Sun Microsystems today announced it maintained the number one position in total server units shipped in the high performance and technical computing market (HPTC), according to IDC’s Technical Qview, Q4 2003. By earning the top spot in the fourth quarter, Sun also claims the crown for most server units shipped during the 2003 calendar year. The report also shows year-over-year revenue growth for the fourth quarter of 42.4 percent for Sun in the HPTC server market…”
http://www.supercomputingonline.com/article.php?sid=5569
a major problem in this discussion is the confusion between the OS and the systems architecture when it comes to enterprise solutions. that’s how you get folks arguing over linux vs sparc. sheesh.