Am I the only one who thinks that page looks really unprofessional? Solaris 10 (at least they don’t call it Solaris X *giggle*) seems to have some nice features, though.
Some things worry me though, stuff like:
> Provable Data Integrity – 99.99999999999999999 (19-9’s!) probability of detecting corrupt data.
Is corrupt data so common that you need this kind of quality control?
“Is corrupt data so common that you need this kind of quality control?”
The number given is the probability of detecting corruption; in other words, one would have to be incredibly unlucky to end up with a corrupt filesystem. Given the number stated, they must have done a lot of work on on-the-fly error detection and correction.
Doesn’t look unprofessional to me, but I’m biased. The style sheet and colours are the Sun standard. All of the pages have that look and feel.
Anyway, back to the 19-9’s thing. It’s not so much to do with how common corruption is, but more with the fact that disks fail and people make mistakes. Disks have an MTBF, and your expected time to a failure is this value divided by the number of disks. With folks putting more and more disks on, the likelihood of one of them going bad increases. Wouldn’t you like this level of certainty that your filesystem is going to handle this properly?
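To put rough numbers on that (purely illustrative; the MTBF figure is made up, and it assumes independent, identical drives):

```python
# Expected time to the *first* disk failure shrinks as you add disks.
# Assumes independent, identical drives; the MTBF below is invented
# purely for illustration.
mtbf_hours = 1_000_000  # a typical vendor-quoted figure, hypothetical here

for n_disks in (1, 10, 100, 1000):
    first_failure = mtbf_hours / n_disks
    print("%4d disks -> first failure expected in ~%.0f hours (~%.1f years)"
          % (n_disks, first_failure, first_failure / (24 * 365)))
```

With 1,000 disks, that hypothetical million-hour MTBF works out to an expected failure roughly every six weeks, which is exactly why end-to-end detection starts to matter.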
I had a colleague tell me today about a sysadmin horror story he had recently heard (it could even be an urban myth). The admin in question decided that he wanted to attach a tape drive to his production system. He did so. He then created the device node and wrote some data to it. Unfortunately, the device he created had the major and minor numbers of the disk that contained his root filesystem. The results were predictable.
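For anyone wondering how that’s even possible: a device node is identified only by its type and its major/minor number pair, not by its name, so a node created with the disk’s numbers *is* the disk no matter what you call it. A minimal sketch of the mechanism (the numbers are hypothetical, it needs root, and you should not run it against real device numbers):

```python
import os, stat

# A device node is just a (type, major, minor) triple; the name and path
# are irrelevant. The pair below is hypothetical. If it happened to match
# the block device of your root disk, anything written to /dev/my_tape
# would land on the root filesystem, exactly as in the story above.
major, minor = 32, 0  # hypothetical device numbers
os.mknod("/dev/my_tape",
         mode=stat.S_IFBLK | 0o600,        # block device, root-only rw
         device=os.makedev(major, minor))
```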
Now if that disk had been part of a mirrored filesystem set up under the Dynamic File System, it would have been detected as bad and corrected. I kind of like that idea.
Has there been any corruption in testing inside Sun’s labs? What’s performance like? Does it compare well to XFS or JFS? I for one am very much looking forward to Solaris 10.
Sun’s blog site (for its own employees) mentioned that version 06/04 of the Solaris 10 early access should be out. The official page still links to 05/04.
“Is corrupt data so common that you need this kind of quality control?”
That depends on how many disks you have on your system containing the same data. For example, logical volumes created using software RAID.
I was just running some heavy I/O tests that we usually do to qualify drives. A brand new drive from a manufacturer with a reputation for reliability had data corruption, not at the FS level but with just raw accesses. The only way I could know this is because the test suite looks for it with write/read compare ops. With traditional use I would be hard pressed to imagine how anyone could detect this prior to having data committed to disk.
Second, it depends on the market. Enterprise customers that have terabytes of data worth millions of dollars are very nervous about their data.
How can you be certain that the drive’s electronics on every data path have error detection and correction? Unless you checksum the block as you write it, how can you detect bit errors that might have been caused by the drive’s electronics?
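To make the idea concrete, here is a minimal sketch of the general technique (not Sun’s actual on-disk design, where checksums live in block pointers rather than inline; the block size and file layout here are invented): checksum every block on write, recompute on read, and any silent bit flip anywhere on the data path shows up as a mismatch.

```python
import hashlib

BLOCK = 8192                       # assumed block size, for illustration
DIGEST = hashlib.sha256().digest_size

def write_block(f, data):
    """Write a block followed by its checksum (inline here for simplicity;
    real filesystems store the checksum out-of-band)."""
    assert len(data) == BLOCK
    f.write(data)
    f.write(hashlib.sha256(data).digest())

def read_block(f):
    """Read a block back and verify it; raises on silent corruption."""
    data = f.read(BLOCK)
    stored = f.read(DIGEST)
    if hashlib.sha256(data).digest() != stored:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

# Round trip: write one block, read it back, verify.
with open("blocks.dat", "wb") as f:
    write_block(f, b"x" * BLOCK)
with open("blocks.dat", "rb") as f:
    read_block(f)
```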
I attended a presentation from the Sun folks on Solaris 10 about a month ago. I must say that I was definitely impressed with what they had to say about the Zettabyte filesystem. I read their entire white paper:
http://wwws.sun.com/software/whitepapers/solaris9/solaris_express.p…
FYI, more info on Solaris 10 is also on Ace’s Hardware:
http://www.aceshardware.com/read_news.jsp?id=75000449
Oops, that’s not the correct link to the white paper I read. I can’t seem to find it online; it was mailed to me by Sun’s representatives.
Let me guess that “predictive healing” boils down to a package version check that simply forces a revert to the “last known good”.
“Predictive Self Healing delivers a breakthrough approach to service availability with automatic online error detection and auto recovery for all systems. Solaris 10 proactively removes failing components from the system before failure and automatically restarts failed applications and services.”
You didn’t read the short description…
“Let me guess that ‘predictive healing’ boils down to a package version check that simply forces a revert to the ‘last known good’.”
That’s not what it sounds like to me, from the description above… “failing components” means hardware, I would guess.
From what I read, DTrace is not a daemon. It’s an interface for getting statistics from a running kernel.
SAM and SMIT suck!!! You can have them. I for one am glad Solaris has none of these “hold your hand” tools. Sun has taken a different approach and created command-line utilities that are easy to use and understand for doing administrative tasks. A few simple shell scripts can then be used to automate certain functions. If you want a GUI you can use admintool, which has been around for years (and also sucks).
The Dynamic filesystem is supposed to be faster than the current UFS with logging. I have not seen any benchmarks, so it remains to be seen whether this is true.
IP Filter has been available for Solaris for years; it just didn’t come with the OS. I think it’s great that they are now going to include it out of the box. Sun’s own firewall, SunScreen, came with Solaris 9, but I haven’t used it; I prefer IP Filter. I wonder if they are still going to ship/support SunScreen with Solaris 10?
You’re right, it’s terrible (astroturfer).
Why don’t you tell us about some great IBM solutions instead?
“Why don’t you tell us about some great IBM solutions instead?”
Ya, let’s start with JFS.
“Why don’t you tell us about some great IBM solutions instead?”
LOL, I doubt he even works for IBM. Trolls here are getting quite crafty lately, doing nifty IP aliasing.
If he worked for IBM he might even be using a Sun workstation. Last year IBM released a press photo of one of their research teams doing neat quantum computing work. Which was pretty cool, except for the fact that the IBM engineers were using a Sun machine; the press photo got yanked once it made the rounds and people noticed the detail.
In any case Solaris has had graphical administrative tools for a while now.
I looked through the list of features and couldn’t find anything exciting. Can someone more familiar with Solaris explain what’s good about this release in comparison to Linux 2.6, for example?
Sun engineer’s blog on Solaris 10 Resource Management and Zones
http://blogs.sun.com/roller/page/jclingan/20040609#solaris_10_resou…
“I looked through the list of features and couldn’t find anything exciting.”
First you have to define what you think is exciting… then we can move from there.
DTrace is not a “resource hogging daemon” — it is an infrastructure for dynamic, comprehensive instrumentation of production systems. In fact, there isn’t even a daemon associated with DTrace — if you never run dtrace(1M), it is as if DTrace never existed. (Much) more information is available here:
http://www.sun.com/bigadmin/content/dtrace
In particular, you may be interested in the paper on DTrace that we will be presenting at the USENIX Annual Technical Conference in Boston later this month; it’s here:
http://www.sun.com/bigadmin/content/dtrace/dtrace_usenix.pdf
Look at Section 9 of the above paper for a comprehensive example of how DTrace was used to discover a serious (and previously undetectable) performance problem on a production system…
(And for whatever it’s worth, predictive self-healing isn’t a “resource hogging daemon” either — but I’ll leave it to others to provide more information on that technology.)
There’s a lot of really useful, neat stuff in Solaris 10. Zones let you boot up a separate system image on top of an existing kernel. So you can consolidate multiple physical machines onto a single system, or just have development and production environments on the same machine. You can limit the system resources used by Zones. It’s basically a software-only version of IBM’s LPAR technology (if you’re familiar with that).
DTrace is also a nice new feature. It lets you query over 30,000 different probes in the OS. It’s mostly useful for performance tuning. Rather than trying to correlate output from top, vmstat, iostat, etc., you can write a simple, short script (just a few dozen lines, usually) and query exactly what you want. It’s also useful for measuring performance metrics of things that the existing tools don’t measure well.
Sun has rewritten their TCP/IP stack in Solaris 10. It’s supposed to be significantly faster now (but I haven’t done any testing myself). They also switched to the open-source ipf firewall. They will also support 64-bit mode on the Opteron (but the current betas don’t do this).
The Betas are free – see http://wwws.sun.com/software/solaris/solaris-express/get.html to download it.
“LOL, I doubt he even works for IBM. Trolls here are getting quite crafty lately, doing nifty IP aliasing.”
Actually, I do work for IBM, mainly in support of the HP-UX platform and Sun for what we do (BCRS). I do get to play around with the RS/6000 and it’s a pretty impressive platform. In a lot of respects they are well ahead of HP and Sun.
Smitty and SAM are tools that make the SA’s job a lot easier. Sure, I can do it all on the command line; that’s not the point. I am not trying to impress my co-workers or whomever with my knowledge of the command line. I am trying to use my time more effectively, and it’s nice to be able to pop into a nice little GUI to set things up with half the effort. Economy of time, my friend.
I still can’t get excited about Solaris 10. The DFS is kinda cool, but from what I can tell most of the new features emulate what other platforms have had for a while. DTrace is all fine and nice, but what real value does it provide? Can I create nice graphs with it like I can with GlancePlus in HP-UX?
“DTrace is all fine and nice, but what real value does it provide?”
Take a look at Section 9 from the USENIX paper for a flavor of the real-world problems that DTrace can solve:
http://www.sun.com/bigadmin/content/dtrace/dtrace_usenix.pdf
“I looked through the list of features and couldn’t find anything exciting.”
“First you have to define what you think is exciting… then we can move from there.”
Well, do we have any new features compared to 2.6 or not?
“Well, do we have any new features compared to 2.6 or not?”
I don’t understand you. Does Solaris 10 have any new features compared to Linux 2.6?
Well, Linux doesn’t have Zones or DTrace. How’s that for a start?
SELinux and kdbg seem to be equivalent.
“SELinux and kdbg seem to be equivalent.”
kdbg is not equivalent to DTrace — it’s not even in the same neighborhood! I would expect those with a strong Linux bias to cite something like LTT and/or DProbes; these aren’t equivalent either (see Section 2 of the USENIX paper for a detailed comparison to related work) but at least they’re vaguely in the same technology area…
The beauty of these tools is that their output can be “chained”, used as other tools’ input (as long as the program’s output is carefully designed to be parsable, which is the case for DTrace, as for many other UNIX tools). Once everything can be chained and scripted, you can do things automatically (or put it in crontab), and that’s what makes UNIX powerful.
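As a tiny illustration of that chaining (a sketch only: vmstat’s column layout varies by platform, and the column index and threshold below are assumptions), here is a small script that runs vmstat, parses one column, and warns when it crosses a limit. Dropped into crontab, it does exactly the kind of automatic checking described above:

```python
#!/usr/bin/env python
# Chain a UNIX tool's parsable output into a script. Assumes vmstat is on
# PATH and that the last output line is the most recent sample; the column
# index and threshold below are made up for illustration.
import subprocess

def sample_vmstat():
    """Run `vmstat 1 2` and return the final sample split into columns."""
    out = subprocess.run(["vmstat", "1", "2"], capture_output=True,
                         text=True, check=True).stdout
    lines = [l for l in out.strip().splitlines() if l.strip()]
    return lines[-1].split()

cols = sample_vmstat()
free_kb = int(cols[3])            # 'free' column in common layouts (assumed)
if free_kb < 50000:               # invented threshold
    print("WARNING: free memory low: %d KB" % free_kb)
```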
“I do get to play around with the RS/6000 and it’s a pretty impressive platform. In a lot of respects they are well ahead of HP and Sun.”
Well, I don’t work at IBM, but I can surely tell you that is not true. Just my opinion, and also that of a bunch of people out there. SPARCv9 rules!
Jay
“Linux Compatibility – Linux applications can run unmodified on Solaris.”
I can’t wait to see the list of gotchas.
Solaris is superior to Linux in every way except common desktop usage. Most notably Solaris x86-64 and Solaris SPARC; Solaris x86 has a lot of features disabled due to the port down to x86-32, which kept the system from scaling well.
Working at IBM does not impress me. I know a former IBM employee who quit because he hated it, especially the corporate culture. He worked on Java at IBM. He’s a good Java person, but he’s a big Linux advocate and believes anything that isn’t Linux- or Java-related is evil.
I really like this Linux compatibility thing for Solaris. It’s one of the features I have been telling Sun executives that Solaris requires to survive on the x86 platform.
IBM’s big boxes vs. Sun’s big boxes: I was impressed with IBM’s; the way things worked with the software seemed more sensible and easy.
Unfortunately, IBM is apparently on a mission to use Linux on all of its hardware, which really defeats the purpose of getting an IBM box if they don’t care enough about their own OS. They even suggest migrating to Linux. This would be pointless on big boxes from IBM right now. Hello, Solaris.
“Working at IBM does not impress me.”
Wasn’t trying to impress anyone. I was accused of spoofing my IP. People don’t realize how large a company IBM is.
“Unfortunately, IBM is apparently on a mission to use Linux on all of its hardware, which really defeats the purpose of getting an IBM box if they don’t care enough about their own OS.”
I don’t get that sense at all! Based on my experience here, if that were the case I would be working primarily on Linux, which I haven’t seen except on specialized machines here (HMCs). IBM is doing what any good service company would do: provide a plethora of options for their customers. Something Sun has rarely done. Don’t get me wrong, I do like the SPARC/Solaris platform, but I feel these are the end days for Sun; Solaris 10 for me is not that exciting (hopefully I am wrong).
You’re right, IBM is HUGE.
Explain why IBM marketing has posted things about migrating from AIX, OS/2, z/OS, etc., all to Linux. They have Linux up and down all their product lines, and they advertise it and advocate migration to it.
However, lately they have been pushing HP-UX and Solaris users to migrate to AIX, which seems weird to me. I don’t know if IBM knows what they are going to do with their OS line; I don’t see a clear roadmap from a consumer’s point of view, which makes me not want to use IBM’s OSs, as I don’t want to invest in something that will be discontinued.
A lot of people may not understand the value of a zone, but I’ll give you a real-world example.
So let’s say you are a webhosting company and you want to sell dedicated servers to some of your customers. However, over time you find that this is more expensive and can cause management issues. So you turn to Solaris 10 Zones.
You take a large, highly available box (like an SF4800), and you carve out one zone per customer. Because the important components of the OS are shared read-only through a loopback FS to each zone, you only have one OS to maintain, with just the /etc conf files and applications local to each zone. This means that patching only needs to be done once for these customers; you don’t have to patch each zone. And to make things even better, let’s say that you want to share a directory like /opt or /usr/local with each zone read-only: you can do that too. Or let’s say you want to share /export/home read-write with each zone or with only certain zones: you can do that too.

The great part is that each zone is unaware of the others; it’s all transparent. Each customer would think they are on a completely separate machine from the rest. Add resource management on top of this, and you can control the QoS for each customer, and then control their apps in their respective zone(s). This is all out of the box: no special add-ons or funky commands (no kernel patches). It takes a few minutes to set up a zone. Things are divided and jailed at the kernel level, so even if one zone is compromised it does not affect the other zones. A zone can even crash and not affect the other zones or the global zone. The cool thing is that in the global zone, you can see everything going on in each zone. :)

None of the other Unices, or even Linux, have this level of virtualization. Now take that SF4800, divide it into two hardware domains, cluster your zones, and bam! You have just consolidated your dedicated hosting customers and made them highly available on a single box!
This is something that AIX and HP-UX can’t do. LPARs and nPartitions are very different: they both require a local install of the OS in each virtual machine. LPARs require components outside of the OS (hardware). nPartitions are just like Sun Fire domains, all hardware-based virtualization (you assign CPUs, memory, etc. to each instance). Zones, by contrast, work on SPARC, x86, and Opteron, so it’s completely at the OS level, something that neither HP nor IBM has. The thing that most people forget is that HP and IBM are not dedicated to their UNIX customers, whereas Sun is. Sun may support Linux to a small degree, but Linux will never be a major focus for them. They are committed to their Solaris customers, and I think that’s a hell of a lot better than where HP or IBM are these days.