Sun Microsystems says its new UltraSparc T1 microprocessor, nicknamed Niagara, is creating a big splash. Sun was set to report Monday that 60 percent of the trial units of Niagara, an open-source product, have been ordered by new customers. This indicates that the company is gaining market share from rivals such as IBM’s Power and Intel’s Itanium high-performance microprocessor architectures.
Glad Sun is on the up.
I honestly wasn’t expecting SPARC to make a comeback, but it looks like Sun has got a winner on its hands with Niagara I & II. Things are going to get really interesting when “Rock” comes out. I wonder if we are going to see a resurgence of UNIX on non-x86 equipment.
I thought that Rock had been cancelled.
Where did you read that?
Ultra V – yes, a long time ago
Ultra IIIi+ – yes, recently. Niagara II was more viable.
Rock – I doubt it.
If the Ultra III+ was an external design, then I must be confusing that one with Rock.
In our tests the performance of Niagara (T1000 and T2000) was simply pathetic. When we were parsing an XML file, it took 10 times longer on our Niagara systems than on the Opteron we compared it against (X4200). It was only as fast as an old Netra! Or try running a bandwidth-intensive task, then start a CPU-intensive task, and just watch download/upload speeds drop through the floor. Everything we tested on Niagara ran extremely slowly, even when Sun themselves attempted to optimize it.
Unless you require SPARC, I suggest moving to x86.
If it’s that bad and you require SPARC, I suggest moving to Linux or complaining loudly to Sun. Solaris on x86 is still picky.
We’ve already moved along, I just wanted to warn others before they made the same mistake.
I definitely agree with you, Solaris x86 isn’t mature enough yet, and there are still MANY bugs that require attention, although there has been a noticeable improvement since the 6/06 patch cluster.
I agree with you. If possible, moving to Linux is definitely the best bet.
In the testing I did using MySQL on a T2000, I was able to beat the performance of a V880 running the same test with a fraction of the CPU utilization. Unfortunately I didn’t have a lot of time with the machine to tweak the performance. So I feel the idea of trashing a platform because it would not parse an XML file is a little unfair.
And what do you mean Solaris x86 is not mature? Limited device support does not equal maturity. As someone who has used Solaris x86 since Solaris 7, I have found very few problems with it. What specific issues are you having?
What I meant by mature was that Sun pushed out Solaris 10 x86 without fully testing or even implementing it.
A few examples of issues we had before the 6/06 patch cluster, on their own hardware:
There wasn’t a way to get the status of the hardware RAID. A couple of GCC libraries existed but were empty (0-length files); the correct contents of these files were posted on Sun’s own forum, yet they wouldn’t release an official patch. pkgadd would allow you to install SPARC packages on x86 without any warning or error. ndd wouldn’t show the link status of an interface. The prtdiag command existed, but it just displayed a message saying it wasn’t implemented.
With respect to the RAID issue, what sort of controller were you using and what were you using to examine the status – raidctl(1M)?
I’m guessing the gcc issue you ran into was
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6220191
which was fixed in patches 123648-01 (x86/x64) and 123647-01 (sparc).
The pkgadd issue you mentioned does seem to be a genuine bug. Please file a bug at
http://www.opensolaris.org/bug/report.jspa
Which sort of network device were you trying to examine the status of using ndd(1M)? Only certain network devices export their status via ndd(1M).
Finally, prtdiag(1M) for x86/x64 platforms has been available starting with the Solaris 10 6/06 release. I would suggest evaluating that, or trying Solaris Express, which represents the development release for the next version of Solaris (think Fedora).
I see that David Comay from Sun answered your questions, but without knowing what hardware you are running (saying it’s Sun isn’t enough) it’s a little difficult to direct you.
When you create a package, you have the ability to add scripts that among other things can check the architecture prior to installation, see this for more information:
http://docs.sun.com/app/docs/doc/817-0406?q=package
Since you didn’t specify who created the package or what the package was, it would be difficult for me to say it is a bug with pkgadd without knowing what the package was.
On my Gateway GT5056 (dual-core Athlon 64), prtdiag works just fine with Solaris 10 6/06.
David and Robert, thanks for trying to help. We have already opened many cases for our issues, and most of the larger ones were fixed within several months. For the rest, we were told the problems would be fixed in the next build of Solaris, which isn’t helpful at all when we need a fix ASAP.
The point was that we didn’t find Solaris x86 usable in a production environment until 6/06. In my opinion Solaris 10 shouldn’t even have been released before 6/06. The problems we encountered made it obvious to us that Solaris was pushed out too early.
I’d like to know what these “MANY” bugs are, and whether you sent in reports to Sun about them. Solaris x86 is quite nice, actually; the only issue that I would say is truly valid, and a PITA, is the lack of hardware support. You can’t build off-the-shelf boxes and just expect them to work. You have to buy hardware that is known to work. Then again, the HW Solaris supports is supported fully and works correctly. The same can’t be said for a lot of the flaky Linux drivers for X Y Z hardware.
Again, I don’t see why Linux is even being discussed here. It has nothing to do with your performance problems and Niagara. Your issue with performance is related to the task you were performing and the type of processor the Niagara is.
Linux is being discussed because Solaris is picky about hardware, just as you say.
Could you please provide more details on why you feel Solaris on x86/x64 platforms isn’t mature enough? Is it simply a lack of driver support? And if that’s the case, I’d like to understand which specific devices you were looking for driver support in Solaris.
While it’s certainly the case that Solaris doesn’t have the sheer number of drivers that Windows, Linux and some of the BSDs have, it tends to support the devices found on the larger vendors such as IBM, HP and Dell quite well and there is increasing support for devices found on white boxes and other types of systems.
http://www.sun.com/bigadmin/hcl/
contains a list of the systems and individual devices which Solaris currently supports.
What in the world does your suggestion have to do with his post? He made no mention of Solaris, nor did he cite any problems specific to Solaris. He also specifically stated he has a Sun X4xx machine. I’m pretty sure Solaris x86 works FINE on Sun’s own machines.
Regardless, your “suggestion” of moving to Linux is completely off-topic. Complaining to Sun about the performance of the Niagara for his application (which is obviously not well suited to the type of processor used in the T1/2000…) would be a valid thing to do. You can’t expect a highly parallel processor that has rather slow cores, but a lot of them that can handle multiple threads, to perform well at a single-threaded task. Of course a processor designed to blaze through single-threaded tasks is going to kill the Niagara. Right now, the Niagara is pretty much relegated to the world of web farms/database farms. The 2nd revision will be even better since it will have much better floating-point performance (another sore point of the current incarnation).
Please, let’s not get the whole “use linux” crowd going here, ok? This story has nothing to do with Solaris, and neither did his problem.
What in the world does your suggestion have to do with his post? He made no mention of Solaris, nor did he cite any problems specific to Solaris. He also specifically stated he has a Sun X4xx machine. I’m pretty sure Solaris x86 works FINE on Sun’s own machines.
Well, iirc Linux runs Solaris binaries; if not then you derive no benefit from running Linux on SPARC, true. It’s also possible the problems he had were a Solaris-on-SPARC issue.
Regardless, your “suggestion” of moving to Linux is completely off-topic. Complaining to Sun about the performance of the Niagara for his application (which is obviously not well suited to the type of processor used in the T1/2000…) would be a valid thing to do. You can’t expect a highly parallel processor that has rather slow cores, but a lot of them that can handle multiple threads, to perform well at a single-threaded task.
Should a newer, faster processor run code faster than an old slow one, regardless of whether it is a parallel processor composed of comparatively slow cores?
Of course a processor designed to blaze through single-threaded tasks is going to kill the Niagara. Right now, the Niagara is pretty much relegated to the world of web farms/database farms. The 2nd revision will be even better since it will have much better floating-point performance (another sore point of the current incarnation).
Shouldn’t a web-farm process erm, web-apps well? Regardless, my point was rather more to do with Sun’s service than Solaris or SPARC. If you’re not getting the service you seek and they won’t/can’t do anything about it, they aren’t worth the money.
Well, iirc Linux runs Solaris binaries; if not then you derive no benefit from running Linux on SPARC, true. It’s also possible the problems he had were a Solaris-on-SPARC issue.
#1 – It doesn’t matter what Linux runs, you’re still off-topic.
#2 – I’ve already explained his poor performance. He got the wrong machine for the job. The Opteron servers Sun sells would be a MUCH better suited piece of hardware for his task. Or, the other high-end SPARC machines. Niagara servers are NOT designed for the type of task he mentioned. Sun hasn’t misled anybody about this; it’s very clear they are parallel processing machines.
Should a newer, faster processor run code faster than an old slow one, regardless of whether it is a parallel processor composed of comparatively slow cores?
No. If the task you throw at the processor can only use one core, and each core is “comparatively slow”, the task will run “comparatively slow”. Niagara-based servers are good for parallel workloads. I’ve already mentioned this multiple times.
Shouldn’t a web-farm process erm, web-apps well? Regardless, my point was rather more to do with Sun’s service than Solaris or SPARC. If you’re not getting the service you seek and they won’t/can’t do anything about it, they aren’t worth the money.
Niagara handles web apps well (I’m assuming you’re talking about Java-based applications, which are highly threaded…) It doesn’t handle “brute force” tasks well, such as parsing a single XML file. The way Niagara works, with the top-end model, you’d need to parse 32 XML files at the same time (or use a parallel XML parsing library, of which I am aware of only one or two, and they aren’t that super-duper). If you did 32 at the same time, while each individual parse would take longer, you’d complete 32 in parallel. If the Opteron system is 5x faster at parsing, then it could complete 5 XML file parses in the time it took Niagara to complete 32. This is a dumb example, but it was intentionally simple to make it easy to digest.
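That “32 in parallel” shape can be sketched in a few lines of Python. This is only an illustration of the workload pattern, with made-up documents; it is not a benchmark of a T2000, and the parse times aren’t measured here:

```python
from concurrent.futures import ThreadPoolExecutor
import xml.etree.ElementTree as ET

# 32 hypothetical small XML documents, one per hardware thread.
docs = [f"<order id='{i}'><qty>{i * 2}</qty></order>" for i in range(32)]

def parse_qty(doc):
    # One parse is an indivisible unit of work: it can't usefully be
    # split across threads, but whole parses can run side by side.
    return int(ET.fromstring(doc).find("qty").text)

# Serial: lowest latency for any single document, but only one core busy.
serial = [parse_qty(d) for d in docs]

# Parallel: each individual parse is no faster, but 32 are in flight at
# once -- the batch-throughput shape Niagara is built for.
with ThreadPoolExecutor(max_workers=32) as pool:
    parallel = list(pool.map(parse_qty, docs))

assert serial == parallel  # same answers either way; only scheduling differs
```

The point of the sketch is that nothing about the per-document work changes between the two loops; only how many are in flight at once does.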
Sun’s service isn’t at fault here, this fellow has gotten a machine ill-suited to his task. Due-diligence when making a purchasing decision would have avoided that. Then again, he might have been “try before buy”ing the system, in which case he was able to determine that machine isn’t suited to his load, at no monetary loss to himself. I’d say that’s a pretty fair action taken by Sun.
Well, iirc Linux runs Solaris binaries; if not then you derive no benefit from running Linux on SPARC, true. It’s also possible the problems he had were a Solaris-on-SPARC issue.
#1 – It doesn’t matter what Linux runs, you’re still off-topic.
No, I’m not. I wouldn’t recommend running Solaris on x86, because of hardware incompatibilities. Therefore if you’re going to run an x86 box, your best bet is Linux (or one of the BSDs – but again, they’re picky about hardware).
#2 – I’ve already explained his poor performance. He got the wrong machine for the job. The Opteron servers Sun sells would be a MUCH better suited piece of hardware for his task. Or, the other high-end SPARC machines. Niagara servers are NOT designed for the type of task he mentioned. Sun hasn’t misled anybody about this; it’s very clear they are parallel processing machines.
I didn’t say they’d misled anyone, but you did mention at one point that Niagaras are for webapps. Wouldn’t it make sense then that XML should be better supported?
Should a newer, faster processor run code faster than an old slow one, regardless of whether it is a parallel processor composed of comparatively slow cores?
No. If the task you throw at the processor can only use one core, and each core is “comparatively slow”, the task will run “comparatively slow”. Niagara-based servers are good for parallel workloads. I’ve already mentioned this multiple times.
Shouldn’t a web-farm process erm, web-apps well? Regardless, my point was rather more to do with Sun’s service than Solaris or SPARC. If you’re not getting the service you seek and they won’t/can’t do anything about it, they aren’t worth the money.
Niagara handles web apps well (I’m assuming you’re talking about Java-based applications, which are highly threaded…) It doesn’t handle “brute force” tasks well, such as parsing a single XML file. The way Niagara works, with the top-end model, you’d need to parse 32 XML files at the same time (or use a parallel XML parsing library, of which I am aware of only one or two, and they aren’t that super-duper). If you did 32 at the same time, while each individual parse would take longer, you’d complete 32 in parallel. If the Opteron system is 5x faster at parsing, then it could complete 5 XML file parses in the time it took Niagara to complete 32. This is a dumb example, but it was intentionally simple to make it easy to digest.
OK, well, ignoring your patronising tone, are you suggesting one should buy one machine for webapps, and one machine for XML (and other, single-threaded, code)?
Sun’s service isn’t at fault here, this fellow has gotten a machine ill-suited to his task. Due-diligence when making a purchasing decision would have avoided that. Then again, he might have been “try before buy”ing the system, in which case he was able to determine that machine isn’t suited to his load, at no monetary loss to himself. I’d say that’s a pretty fair action taken by Sun.
All of which is immaterial to the fact that I can’t yet recommend Solaris on x86, for reasons which I’ve already explained.
No, I’m not. I wouldn’t recommend running Solaris on x86, because of hardware incompatibilities. Therefore if you’re going to run an x86 box, your best bet is Linux (or one of the BSDs – but again, they’re picky about hardware).
If you want to put together cheap boxes with cheap hardware, it likely won’t work with Solaris. Most mainstream server hardware is fairly well supported. Sun boxes (Opterons) are also very inexpensive and very high quality. Almost nobody comments on Sun servers being “bad”. You can’t get that quality from a home-built system, nor from a Dell. You’ll pay more, or the same, for an IBM or HP of similar quality to the Sun. If he’s in the enterprise world, pieced-together grey boxes aren’t going to cut it regardless, so the whole HW support argument is moot.
I didn’t say they’d misled anyone, but you did mention at one point that Niagaras are for webapps. Wouldn’t it make sense then that XML should be better supported?
Are you just arguing for the sake of argument? Your last comment makes absolutely no sense. “XML should be better supported” <– huh? XML support is a function of the software. Assuming you meant that the CPU should parse XML faster -> no. I’m sure Sun would love to build a 32-thread-capable box with each core just as fast as an Opteron. It’s not possible (until you pay big bucks for highly parallel systems with massive numbers of CPUs, not just one multi-core CPU).
Again, this is NOT the design goal of the Niagara. Most web apps don’t sit there parsing massive amounts of XML all day, either. Not only that, but the few I’ve come across are parsing a lot of small XML, in which case the parsing duties get spread over all the cores, and Niagara does quite well (see my previous 32-concurrent-XML-parsing example). This fellow just happens to have an “out of the ordinary” app which the Niagara is not suited for. That doesn’t make the Niagara bad, and it doesn’t make this fellow’s code bad. It just means he needs the right tool for the job. Hammers are great at putting in nails, but they suck at putting in screws. That’s the kind of situation we’re in here, and I don’t understand why you keep bringing up Linux; it is COMPLETELY unrelated.
OK, well, ignoring your patronising tone, are you suggesting one should buy one machine for webapps, and one machine for XML (and other, single-threaded, code)?
I’m not sure how you can hear my “tone” through text. I’m not being patronizing, you simply displayed a lack of understanding of how highly-parallel processors work, and why this workload isn’t a good match for them.
In the situation discussed, I would suggest he stick to Opteron-based servers. Obviously, from his description, that is what performs best for his particular workload. This is what I’ve been saying all along: Niagara is NOT suited to his particular workload. This has nothing to do with Linux, AGAIN. Read the title of this OSNews article. Tell me where you see Solaris/Linux/etc mentioned. Nowhere. It’s an article about SPARCs, and this gentleman’s post was in response to a SPARC he tried out. Linux simply isn’t on-topic; it’s way out there.
All of which is immaterial to the fact that I can’t yet recommend Solaris on x86, for reasons which I’ve already explained.
Your reasons do not make sense in this particular case. If it was a guy trying to get a desktop OS going on some cheap off-the-shelf hardware (his home pc for instance) I could go with it.
This is a server. You don’t toss crap hardware in it. If it’s going to be used in the enterprise, and you’re relying on its uptime, you shouldn’t even be thinking about using home-built junker machines. Use good-quality components, take 5 minutes to check the HW compatibility list, and you’ll be good to go. Regardless, this was not related to the article, nor to this fellow’s issues. Niagara is not good for his workload, that’s it. Linux has nothing to do with it.
Hmm, well, I’ve seen arguments on both sides of the x86 hardware issue re: Solaris. Obviously if you’re going to buy Sun x86 hardware, it should work, but isn’t the point of x86 being able to choose from more than one supplier in the first place? It sure isn’t buying nicer hardware (I know; I’ve got an old Sun, and it’s much nicer than your generic x86 boxen, even the new stuff.)
You can pick and choose. Solaris supports a large portion of the good-quality hardware you can purchase third party. If you go pick up high-quality components, there is a high likelihood of it working. If you check the HCL first, there is a 100% probability of putting together a supported and well-working machine. I had trouble with Solaris 03/05 working on a crap box I had put together with cheap components. I initially blamed Solaris for this.
Then, I got my hands on a Sun box. Same cpus, same ram, but in all tests it was blowing away my home-built box. At this point, I realized having server-grade hardware really does make a difference. There’s a big difference between that NIC that comes on a consumer motherboard and a server-grade NIC, for instance.
Just check the HCL, and feel free to build yourself a box to run Solaris on. You’ll build it knowing 100% it will work and work WELL. The same can’t be said for a lot of HW combinations in Linux. Again, though, this is ALL OFF TOPIC! Feel free to shoot me off an email if you want more help in pursuing this, I’d be glad to help. o r man dj [[a_t]] c o ren ode.com.
Well, thank you for the kind offer, but since Solaris is mostly a hobby for me at the moment and I’m not in a position to requisition company funds for such a project, I think I’ll pass on the Solaris-compatible x86 hardware for the minute. As I said, though, I have some damned nice, if old, Sun hardware lying around.
Hmm, well, I’ve seen arguments on both sides of the x86 hardware issue re: Solaris. Obviously if you’re going to buy Sun x86 hardware, it should work, but isn’t the point of x86 being able to choose from more than one supplier in the first place? It sure isn’t buying nicer hardware (I know; I’ve got an old Sun, and it’s much nicer than your generic x86 boxen, even the new stuff.)
The largest base for Solaris x86 is Dell, followed by HP, followed by IBM, hmmm, and then Sun. It is obvious that you don’t have to buy x86 hardware from Sun to run Solaris x86. It works very well on the Acer laptop that I am using to type this reply on.
No, I’m not. I wouldn’t recommend running Solaris on x86, because of hardware incompatibilities. Therefore if you’re going to run an x86 box, your best bet is Linux (or one of the BSDs – but again, they’re picky about hardware).
Yeah! Double FUD! You get the zany-linux-zealot pudding cup!
Kudos to you, nailing Solaris and ALL BSD projects in just two sentences.
> No, I’m not. I wouldn’t recommend running Solaris on
> x86, because of hardware incompatibilities.
Could you please be a bit more specific about the hardware incompatibilities? Are we talking about missing drivers and if so, for which ones? Or if there something more underlying that you have run into?
The Solaris 10 1/06 and 6/06 releases are much better given they have modern ACPI implementations and use GNU GRUB for booting.
Well, if so that’s good news. I have still to try them as several things have kept me from doing so since being prompted to a few weeks ago.
My experience with the last version I tried (about a year ago now) was that it would fail to recognise some vital part of hardware on any machine tried; usually the keyboard and/or hard drive.
Actually, the system was purchased to run a Java-based web app, and it simply didn’t work out in OUR application. The Opteron beat it soundly.
As twenex implied, the biggest issue is that if you need quick response times for any single process (which customers like), you won’t get them with a Niagara.
BTW, it wasn’t my decision to get a Niagara-based system. I knew of these problems and fully understand the idea behind having many cores; unfortunately my input didn’t matter in the purchasing decision.
Sounds like somebody should be spanking the people that made that decision, and questioning why they didn’t buy Sun’s Opteron-based line, or the SPARC IV line. Using my previous example, they bought a screwdriver when they really wanted a hammer.
As twenex implied, the biggest issue is that if you need quick response times for any single process (which customers like), you won’t get them with a Niagara.
Well, T2000 probably beats same-priced Opterons when most of the times there are more than 32 concurrent users using the system.
Probably, an ideal solution would be to have a bunch of T2000s and Opterons in a cluster behind an intelligent load balancer…
Although I agree with all of that…
Since when is parsing an xml file a “brute force task”?
I have to agree with the other guy slightly, is the Niagara taking parallelism too far too fast at the expense of single core performance? Surely the machine should strike a balance between the two and performance of single-threaded applications should not be 32-times worse than a standard modern processor? (I made that last figure up).
If Niagara really is just suited to this small niche, and works well at doing it, why is this going to make Sun recover? How can it prove Sun is taking market share when these systems are very, very unique?
I have to agree with the other guy slightly, is the Niagara taking parallelism too far too fast at the expense of single core performance? Surely the machine should strike a balance between the two and performance of single-threaded applications should not be 32-times worse than a standard modern processor? (I made that last figure up).
No, and I will elaborate in a moment…
If Niagara really is just suited to this small niche, and works well at doing it, why is this going to make Sun recover? How can it prove Sun is taking market share when these systems are very, very unique?
It isn’t a small niche. It’s the vast majority of server workloads. Most server-based tasks/processes/programs can utilize the parallelism. The issue, as it has always been, is programmers taking advantage of it. Most server-oriented processes (databases, webservers, etc.) have been designed to operate efficiently in parallel processing environments. This stems from all the way back when mainframes were the “ultimate”.
Even XML parsing in a server-type environment can utilize the parallelism. The key is understanding HOW to utilize it. You can’t really subdivide the parsing of the XML very well (hence the few solutions for doing so), but you can process multiple XML inputs at the same time. My (admittedly very basic and flawed-by-simplicity) example demonstrated this. However, if the application wasn’t designed to take advantage of this, and performs XML parsing in series rather than in parallel, it’s going to run slower than the “brute force” type machines can handle it.
Another simple example!
In the situation described below, which would you rather have:
A) One 250lb bodybuilder
B) 32x160lb people
Situation: You want to build a home. You have the plans, material, and somebody to oversee the construction who can direct anybody in the building of a home. You have 2 months to build this home, or you lose your funding.
I’ll let you take a guess at who will finish first.
Now, in the XML parsing example the poster first mentioned, the situation is the same, EXCEPT there isn’t the guy to oversee the construction.
I realise the benefits of parallelism and I seriously believe the future of computing machines and the future of ubiquitous computing is with the parallel programming paradigm.
The only problem is we’re quite obviously not there yet. Although programmers have to change (and at my University students are required to learn Occam, old, but, it teaches programmers the ideas and the parallel way of thinking), the programming world won’t change overnight.
Hence my point, I believe, still stands: is Sun moving too far too fast? I.e., are they creating an ultra-parallel product where the number of applications that can really take advantage of it is minimal?
Even if they’re not moving too fast, I really do believe that the market for this isn’t massive (yet), and as such it’s not really fair to say that uptake of the T1 systems – which I truly believe are fantastic – shows Sun is making a market share recovery and gaining share away from POWER and Itanium.
In your example one bodybuilder is 250lb and the rest are just 160lb. Let’s say then that the 32 people have just, say, 50% of the strength and stamina of the bodybuilder. Obviously 32 of them will still be better here, even with the overhead of communication, syncing and waiting.
However, are Niagara’s threads even 50% as fast as a standard single-core modern Opteron? If they are, then they’re going to do a fantastic job in the real world.
Are the processing cores sufficiently powerful? Perhaps in reality should your example have been one 250lb body builder and 32 small children? Is that a better example when comparing an Opteron to a T1?
I don’t know the answer, I’m just asking questions. I’m not attacking parallelism, I’m asking is T1 the best trade off between single threaded performance and parallelism.
Hence my point, I believe, still stands: is Sun moving too far too fast? I.e., are they creating an ultra-parallel product where the number of applications that can really take advantage of it is minimal?
No. Look at all the benchmarks where throughput matters. The T2000/T1000 handily beat single-thread multi-core systems. This is using existing software; no extra development needed.
No. Look at all the benchmarks where throughput matters. The T2000/T1000 handily beat single-thread multi-core systems.
That’s not an answer to his point:
…creating an ultra-parallel product where the number of applications that can really take advantage of that is minimal.
Parallel computing’s usage is rising, but it’s performance that people really need. If it takes longer to do a single job in each case, no matter how many more jobs it can do, then it’s not going to be of much use to many people, particularly in places like web farms. You try telling some stakeholder in a system, or a customer, why their transaction takes more time than anywhere else, or takes longer than on a system they used to have in.
What people need is a parallel processor with the performance to go with it.
As usual though, the Sun people around here are trying to fit square pegs in round holes when explaining Sun’s, SPARC’s (and Solaris’) poor performance that we have all known exists over the years.
Parallel computing’s usage is rising, but it’s performance that people really need. If it takes longer to do a single job in each case, no matter how many more jobs it can do, then it’s not going to be of much use to many people, particularly in places like web farms.
If you want to serve one web page to one person at a time, then yes, throw all the resources into a single-threaded CPU. The user will get their page faster! If you have many people hitting your site at once, then use a CPU that handles concurrent threads/processes.
A web farm’s performance is measured by hits/second (throughput), not by how fast it handles one request (serial response). If that is your requirement, then you don’t need a web farm, and you don’t need a Niagara. Sun make that pretty clear. Buy an Opteron.
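The throughput-vs-response-time distinction reduces to simple arithmetic. The numbers below are purely illustrative (the 5x ratio is borrowed from the hypothetical example earlier in this thread, not measured on real hardware):

```python
# Hypothetical per-request parse times, in seconds.
opteron_core_time = 0.2    # one fast core: low latency per request
niagara_thread_time = 1.0  # one slow Niagara thread: 5x slower per request
niagara_threads = 32       # but 32 requests can be in flight at once

# Serial response time: the fast core wins for any single request.
assert opteron_core_time < niagara_thread_time

# Saturated throughput, in hits per second:
opteron_hits = 1 / opteron_core_time                  # 5.0 hits/s
niagara_hits = niagara_threads / niagara_thread_time  # 32.0 hits/s

# The web-farm metric (hits/second) favors the many-threaded chip,
# even though every individual user waits longer.
assert niagara_hits > opteron_hits
```

Which machine "wins" depends entirely on which of the two numbers your workload is judged by.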
If Niagara really is just suited to this small niche then and works well at doing it why is this going to make Sun recover? How can it prove Sun is taking market share when thes systems are very very unique?
I would not call high-volume webservers a small niche market (it may be the largest niche). The Niagara machines have posted Sun’s fastest-growing sales figures of any of Sun’s hardware before them. You are correct in saying that the Niagara is unique, but the properties that make it unique are suited to a very, very large market.
No. If the task you throw at the processor can only use one core, and each core is “comparatively slow”, the task will run “comparatively slow”.
That’s one of the funniest things I’ve ever read. It’s certainly slow at doing the most basic of things, but it’s comparatively slow. ROTFL.
Niagara handles web apps well (I’m assuming you’re talking about Java-based applications, which are highly threaded…) It doesn’t handle “brute force” tasks well, such as parsing a single XML file.
I have news for you. Web apps are all about brute force(!) tasks, and parsing things like XML well and fast is an absolute must. It really doesn’t matter how much the Niagara can do in parallel, if it’s that slow at one job it is nigh on useless for everyone.
If you did 32 at the same time, while each individual parse would take longer, you’d complete 32 in parallel.
It still doesn’t get around the fact that each of those jobs is, using your example, five times slower for everyone, despite the fact that it may be able to complete more at the same time. Of course those figures are pretty random and are just examples, but an Opteron would probably be quite a lot faster, and a multi-processor, multi-core Opteron would do alright on the parallel front – and on price ;-).
It would make much more sense to get a bunch of powerful multi-processor, multi-core Opteron machines, where you can have it all on price, performance and parallel performance in more than 95% of the uses anybody anywhere would put them to.
Niagara is as close to useless for anyone as you can get. Only people running background tasks all the time, where the performance of each individual job doesn’t really matter, will want to look at it. That’s a highly niche, almost non-existent, market these days, and as the x86 world moves forward with multiple cores it’s only likely to get worse.
Wrong use case, totally wrong market, as per usual.
Should a newer, faster processor run code faster than an old slow one, regardless of whether it is a parallel processor composed of comparatively slow cores?
If the processors are of the same type, yes. But the T1, which is a TLP (thread-level parallelism) design, is not really comparable on a single-strand task optimized for instruction-level parallelism (ILP). So the answer to your question is no.
Sun states clearly what the T1000/T2000 are useful for and where they shine. Parsing an XML file is single-threaded; why would you need a 32-strand processor for it?
Shouldn’t a web-farm process erm, web-apps well? Regardless, my point was rather more to do with Sun’s service than Solaris or SPARC. If you’re not getting the service you seek and they won’t/can’t do anything about it, they aren’t worth the money.
Since the OP never mentioned the workload and application, all anyone can do is speculate.
This is akin to saying I bought an 18-wheeler for a family of four, but it won’t fit in my garage at home, its top speed is 80 mph and it guzzles gas. The bottom line is that one needs to buy a product that makes sense for the intended application.
“If it’s that bad and you require SPARC, I suggest moving to Linux or complaining loudly to Sun. Solaris on x86 is still picky.”
I’m not quite sure what your point is here… Ubuntu runs on the T1000/2000… so ya’ think that would run his app any better? NOT! It’s the app trying to run on an inappropriate hardware platform for the task; it’s *not* the OS. As mentioned before, match the tool to the task.
As for Solaris x86 in a server environment, you would be hard pressed to find a much better solution, IMHO, but not to digress too much off topic (there are many fine x86 OSes suitable for most tasks).
Impressive to see how many “try ‘n buy” systems are staying in the field after testing. That looks positive for the current T1 CPU (whether it’s running Solaris or Linux), and I look forward to the future “ROCK”.
If you have to buy (from a short list of) Sun-certified HW anyway, what is the point/advantage of Solaris x86 over Solaris/SPARC?
“If you have to buy (from a short list of) Sun-certified HW anyway, what is the point/advantage of Solaris x86 over Solaris/SPARC?”
1st – It needs to be *compatible*, not necessarily *certified*. BTW the list is growing and includes products from IBM, HP, Dell, et al. (and your white box, if you don’t just throw components together willy-nilly).
2nd – Who said there was an advantage of Solaris x86 over SPARC systems? Again, it’s the right tool for the right job. Maybe it’s SPARC, maybe not, but if it’s x86, Solaris is a viable choice for most *NIX solutions.
Aren’t those the types of applications that the T1 is supposed to shine on? What OS were you testing that gear on, Solaris or Linux?
Err… never mind, reading the follow-ups pretty much answered my questions.
Parsing an XML file is not a highly parallel job, and the Niagara is not well suited to it. Each core is fairly “slow”; there are just a lot of them, and each can process multiple threads at the same time. The parallelism is where the Niagara-based servers shine, and workloads that can exploit it reflect that.
As for the throughput issues with high CPU load, that sounds really strange. I’ve not experienced the same thing, it’s something I’d certainly be interested in hearing more about though. I’ve routinely pushed hundreds of megabits/s on a heavily loaded machine without problems. If he’s getting network slowdowns with high CPU load, I’d be more inclined to find another subsystem to blame, like disks.
> In our tests the performance of Niagra (T1000 and T2000) was simply pathetic. When we were parsing a XML file, it took 10 times longer on our Niagra systems than the Opteron we compared it against (X4200).
Ha ha, well, doh! XML parsing is inherently a single-threaded task, so you used about 1/32 of the available CPU (1 hardware thread out of the 32 available). Speaking of the wrong tool for the job, what the hell did you expect! If you want to see Niagara shine, pick a throughput-oriented test that matches the focus of this system.
BTW, on throughput tasks (web serving, databases) Niagara absolutely destroys Opteron, Xeon, Power and Itanium in comparable configurations.
Agreed. If you’re testing a machine with a Niagara CPU and complaining about the performance of a single-threaded task, you’ve missed something important.
There are a lot of systems that would beat a T1000 at that.
Running 50 or so parallel instances of the test to parse the XML file should show the advantages of the Niagara.
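A sketch of what such a throughput-oriented test could look like (the document and worker counts are made up for illustration; note that CPython’s GIL limits true parallelism for CPU-bound parsing within one process, so a faithful benchmark would launch separate OS processes — this thread-pool version just shows the shape of the harness, measuring jobs/second rather than single-job latency):

```python
# Sketch: run N copies of the same XML parse concurrently and measure
# completed parses per second, instead of timing one parse in isolation.
import time
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# A small synthetic document standing in for the real test file.
DOC = "<root>" + "<item x='1'/>" * 1000 + "</root>"

def parse_once(_):
    # Each job parses an independent copy of the document and
    # returns the number of child elements found.
    return len(ET.fromstring(DOC))

def throughput(jobs, workers):
    """Completed parses per second with `workers` running at once."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(parse_once, range(jobs)))
    elapsed = time.time() - start
    return len(results) / elapsed

if __name__ == "__main__":
    # Latency view (1 worker) vs throughput view (many workers).
    print("1 worker :", throughput(50, 1), "parses/s")
    print("32 workers:", throughput(50, 32), "parses/s")
```

On a chip like the T1 the interesting number is the many-worker figure; the one-worker figure is exactly the case the original single-file test measured.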
Asking a server to be able to crunch through an XML file is fairly reasonable.
Hmm…perhaps Sun should find a way to make a more “unbalanced” system.
Perhaps the ability for some small number of hardware threads to kick major ass on non-parallelizable tasks while still maintaining the full spreadable crunching. Maybe let select cores jump their frequency, or add more single-thread-friendly decoders to select cores.
Hmm…perhaps Sun should find a way to make a more “unbalanced” system.
They have – those are the Opteron systems. The Niagara is for a specific task, whereas Opterons are general-purpose.
I wish someone would manufacture a chip and mobo (not to mention a video card) built from open specifications. That way we wouldn’t have to worry so much about the GPL3/DRM debate. Too bad the Niagara doesn’t perform well; I was kind of getting my hopes up.
When you say “doesn’t perform well”, do you realize that for certain types of workloads that Niagara performs exceptionally well? It doesn’t sound like you’ve made the effort to determine for which sort of workloads Niagara is a processor to consider and for which sorts a more traditional processor makes sense.
For more information on the types of workloads where the T1 makes sense and other advantages such as the radically reduced power requirements, see the FAQ at
http://www.sun.com/processors/throughput/faqs.html
and the whitepaper at
http://www.sun.com/processors/whitepapers/throughput_whitepaper.pdf
Did anyone other than me find this odd: The OpenSparc community introduced in July a new Linux distribution, Gentoo Linux, which supports UltraSparc T1 in the latest release of its open-source operating system.
Gentoo is a new distribution introduced by the Open Sparc community in July? Huh?
And, thanks to the comments of several of you, I have a better understanding of what the Niagara does and is useful for. They have been better for learning about it than some of the articles I’ve read.
From what I’ve seen, though it’s noticeably better than older SPARCs, it still isn’t all that impressive compared to your average server-class x86 chip. Arguments like “well of course it sucks at that, you need to use it for this instead” aren’t really valid, in my opinion. I’d prefer a platform that, instead of being the absolute best in one particular field and sucking in all others, is good at just about whatever you throw at it (even if not the “best” for one particular task). This may go against the best-tool-for-the-job mentality, but when it comes to hardware I prefer things to scale horizontally. Adding one-off pieces of hardware to the mix just adds complexity (which is a real problem when you have 200 other servers to care about…)
Anyhow, my real comment is regarding the hardware quality of the T1000/T2000 servers. From what we’ve seen, it’s just atrocious. Up until recently, _every_ one we’d gotten had some defect: bad NIC, bad motherboard. Sun would replace them, of course, but still, not a good first impression. Not to mention they’re a bear to get up and running, particularly if, as we do, you have a very customized Solaris setup.
From what I’ve seen, though it’s noticeably better than older SPARCs, it still isn’t all that impressive compared to your average server-class x86 chip. Arguments like “well of course it sucks at that, you need to use it for this instead” aren’t really valid, in my opinion. I’d prefer a platform that, instead of being the absolute best in one particular field and sucking in all others, is good at just about whatever you throw at it (even if not the “best” for one particular task). This may go against the best-tool-for-the-job mentality, but when it comes to hardware I prefer things to scale horizontally. Adding one-off pieces of hardware to the mix just adds complexity (which is a real problem when you have 200 other servers to care about…)
I would not call multithreaded applications (or multiple concurrent processes) “one particular task”. Web server, application server, database server, file server, etc. all fit the Niagara sweet spot. It just depends how much work you have to do at the same time.
There is a very simple equation here. Sun gives you 60 days free to see if your application is suited to the Niagara (or whether your sysadmins are worth the money you are paying them). If it isn’t, send it back (Sun pays the costs). Sun also has other systems, such as the Opteron, UltraSPARC III, UltraSPARC IV+ and SPARC64, to fill the gaps. Otherwise try Dell (make sure the server does not use batteries), HP (always check whether you might be being followed), etc.
Your failure rate (100%) seems very excessive. All vendors have DoA problems, but I would have trouble believing that every unit had one. Maybe somebody put a curse on you. Do you know a Dell witch doctor?
Why do people cite web serving as a “parallelized” app? Web servers in large farms tend to act identically: server1 takes a request, marshals its data sources, and returns markup. It would be doing the same thing if it were the only server or one of ten thousand. So if Niagara cannot munge text quickly or be a better tool for accessing SQL databases, it’s a dud for web serving. Web serving is in fact a decidedly non-parallel task: in a parallel task each node performs some essential part of a final result, whereas in web serving each node is redundant; it can be added or removed without altering the core functionality of the site. How could Google or Yahoo operate effectively if their servers were individually essential to some parallel task? Quite the opposite: each server is expected to fail at any time. While it is part of a farm, it operates on requests independently.
Why do people cite web serving as a “parallelized” app?
Web servers tend to handle more than one transaction and one user at a time, so they can be running 32 transactions simultaneously for multiple users. As a previous post pointed out: say the T1 takes 10 seconds per “process” while an Opteron takes 5 seconds per process. The T1 can run 32 of those processes at the same time; the Opteron can’t. So if you have little traffic, the Opteron is just fine; if your server gets a lot of traffic, then although each individual’s experience will be slightly slower, the overall throughput of the application is far better.
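The arithmetic behind this post can be checked in a few lines. All the figures below are the hypothetical ones from the discussion (10 s vs 5 s per job), not benchmark results, and the Opteron’s concurrency of 2 is my own illustrative assumption, since the post doesn’t specify it:

```python
# Back-of-the-envelope check of latency vs throughput.
# Figures are the discussion's hypothetical numbers, not measurements.

def throughput(seconds_per_job, concurrent_jobs):
    """Jobs completed per second when `concurrent_jobs` run at once."""
    return concurrent_jobs / seconds_per_job

t1 = throughput(10, 32)     # 32 hardware threads at 10 s per job
opteron = throughput(5, 2)  # assumed dual-core at 5 s per job

print(t1)            # 3.2 jobs/s
print(opteron)       # 0.4 jobs/s
print(t1 / opteron)  # 8.0 — each T1 job is 2x slower, aggregate is 8x faster
```

That factor-of-eight gap is exactly why per-request latency and farm-level throughput lead to opposite conclusions about the same chip.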