Sun Microsystems releases a new version of Solaris roughly every four to six months, and in many cases a release contains little more than bug fixes and minor changes in functionality, so most releases go by without a great deal of fanfare. Just as Solaris 10 3/05 broke new ground with Zones, DTrace and the Service Management Facility, Solaris 10 6/06 introduces ZFS (the Zettabyte File System), the SATA framework and Xorg 6.9, which will be the primary focus of this review.
I used three machines in the preparation of this article. The first was a Sun Blade 100 with 2 GB of RAM, two 80 GB hard disks, a dual-channel SCSI HBA, an XVR-100 64 MB frame buffer, an internal IDE DVD-ROM drive and two Sun MultiPacks holding twelve 36 GB hard disks. Of the two x86 machines, one was a home-built Pentium IV with 1 GB of RAM, an ATI Radeon 9100 AGP card with 128 MB of RAM, a 3Com 3C905 10/100 NIC, a Lite-On DVD-RW drive and a 120 GB Western Digital hard disk.
The other was a Gateway GT5056 dual-core Athlon 64 machine with 3 GB of RAM, two 250 GB Western Digital SATA hard disks, a PCI Express nVidia GeForce 7300 GS video card, an Intel Pro/100 PCI NIC and internal DVD and DVD-RW drives. During the review I also used the motherboard's integrated nVidia GeForce 6100 graphics. The monitors for the two x86 machines were ViewSonic A90 19″ CRTs, and the Blade 100 used a Dell 2407WFP flat-panel LCD.
Is ZFS really all that great?
Sun has made a lot of noise about ZFS, and rightly so. Although the creation, modification and deletion of storage pools and filesystems are much easier than before, I don't think that this is the most important part of ZFS. Using Solaris Volume Manager, Solstice DiskSuite or Veritas Volume Manager is not a simple matter, especially with complex volumes. There is no doubt that ZFS eliminates a lot of pain in volume creation and management, and this is for the most part what you hear about from Sun.
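To give a sense of that simplicity, here is a minimal sketch: a mirrored pool and a filesystem, created and mounted in two commands. The pool and disk names below are only placeholders for whatever format reports on your system.
# zpool create datapool mirror c1t0d0 c1t1d0     (pool created, mirrored and mounted at /datapool in one step)
# zfs create datapool/home                       (new filesystem, automatically mounted at /datapool/home)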
What has me excited about ZFS is snapshots, not that snapshots are new to Solaris. Since the Solaris 8 1/01 release the fssnap command has allowed administrators to take snapshots of filesystems. While fssnap works, it requires a backing store to hold the snapshot, and the snapshot cannot reside in the filesystem you are trying to snapshot. Another limitation of fssnap is that it only works with ufsdump and ufsrestore, which further limits its usefulness. ZFS snapshots are easier to take from either the command line or the ZFS GUI and do not require ufsdump or ufsrestore.
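To give a feel for the command-line side, taking and listing a snapshot looks like this (the dataset and snapshot names are hypothetical):
# zfs snapshot datapool/home@before-patching     (the snapshot is created in seconds)
# zfs list -t snapshot                           (lists the snapshots in the pool)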
Why snapshots are important
The conventional wisdom in managing a large number of machines is that you set up a tape backup strategy and routinely back up the various machines and filesystems in your environment. The problem comes when it is time to restore a file, a directory or an entire system from tape. Many shops have never tested their backup tapes to see if they actually work! Another issue is time: recovering individual files or a filesystem from tape can take a long time, and your system is down or reduced in functionality while the restore is taking place. Taking a snapshot takes only seconds, and recovery is just as fast.
I first started working with snapshots when we purchased a Network Appliance Filer NAS at a previous job. The Filer uses a BSD-derived operating system and the Write Anywhere File Layout (WAFL) filesystem, and WAFL supports snapshot restores down to individual files. ZFS, on the other hand, is not that granular and can only restore (or revert) entire filesystems. While some might view this as a shortcoming, I see it as a good start because it comes with the OS and works regardless of what storage is used.
Zones + ZFS + ZFS snapshots = Heaven
One of the first things I thought of doing with ZFS was to create a raidz pool and use it as the location to install Non-Global Zones. So on the Blade 100, with its 2 GB of memory, two 80 GB IDE disks, a dual-channel PCI SCSI card and a Sun StorEdge MultiPack with six 36 GB SCSI disks, I created a 102 GB ZFS raidz filesystem called Zones. I then created three whole-root Non-Global Zones called sol10script, adllms and oracle-test. The sol10script Zone is used to test a shell script we are writing to lock down Solaris 10 machines after they are built. The adllms Zone is being used to test the feasibility of installing and using a particular open source Learning Management System with Solaris.
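The steps went roughly like the following; this is a sketch from memory rather than a paste of my actual session, so the disk names, the per-Zone dataset and the exact zonepath are placeholders:
# zpool create Zones raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0    (raidz pool, mounted at /Zones)
# zfs create Zones/sol10script                             (one dataset per Zone)
# chmod 700 /Zones/sol10script                             (zoneadm insists the zonepath be mode 700)
# zonecfg -z sol10script
zonecfg:sol10script> create -b                             (blank configuration = whole root Zone)
zonecfg:sol10script> set zonepath=/Zones/sol10script
zonecfg:sol10script> commit
zonecfg:sol10script> exit
# zoneadm -z sol10script install
# zoneadm -z sol10script boot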
The oracle-test Zone is a little more ambitious in that not only am I creating a Non-Global Zone, I am also mounting another ZFS volume inside the Zone to install Oracle on. I first created a second raidz volume called u01 using three 36 GB disks and unmounted the filesystem. I then set the filesystem's mountpoint to legacy (a requirement for mounting a ZFS volume in a Non-Global Zone). In order to mount the Oracle DVD I created a loopback filesystem so the Blade 100's DVD drive could be used in the Non-Global Zone. I then made all of the standard modifications necessary to install Oracle 10g, with one exception: I made all of the changes to /etc/system in the Global Zone (whose settings are inherited by all Non-Global Zones). During the installation Oracle complained about not being able to read /etc/system, which I ignored. Once the installation was complete I started the Database Assistant, created a database and started it with no issues. I could also have used various Resource Controls to create a Container for Oracle, but considering the system I was running it on I chose not to.
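For the curious, the u01 setup went roughly like this; again a sketch from memory with placeholder disk names, not a transcript:
# zpool create u01 raidz c3t0d0 c3t1d0 c3t2d0
# zfs set mountpoint=legacy u01                    (required before a Non-Global Zone can mount it)
# zonecfg -z oracle-test
zonecfg:oracle-test> add fs                        (the ZFS volume for Oracle)
zonecfg:oracle-test:fs> set type=zfs
zonecfg:oracle-test:fs> set special=u01
zonecfg:oracle-test:fs> set dir=/u01
zonecfg:oracle-test:fs> end
zonecfg:oracle-test> add fs                        (loopback mount of the Blade 100's DVD drive)
zonecfg:oracle-test:fs> set type=lofs
zonecfg:oracle-test:fs> set special=/cdrom/cdrom0
zonecfg:oracle-test:fs> set dir=/cdrom
zonecfg:oracle-test:fs> add options [ro,nodevices]
zonecfg:oracle-test:fs> end
zonecfg:oracle-test> commit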
To test ZFS snapshots I first created a snapshot of the sol10script Zone's filesystem, ran the lockdown script, and noted the changes made to the Zone and any complaints raised while the script ran. Once I had noted what needed to be fixed in the script, I rolled back the snapshot and rebooted the Zone. The end result was that the Zone was restored to the state it was in before I ran the script. I was able to accomplish the same thing with the u01 filesystem for the oracle-test Zone: I deleted it, restored it from a snapshot and started my Oracle database as if nothing had happened.
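The whole test cycle boils down to a pair of ZFS commands wrapped around a Zone restart. The dataset and snapshot names here are only illustrative, and I halt the Zone first so the rollback is not fighting a busy filesystem:
# zfs snapshot Zones/sol10script@pre-lockdown     (taken before running the lockdown script)
# zoneadm -z sol10script halt
# zfs rollback Zones/sol10script@pre-lockdown
# zoneadm -z sol10script boot                     (the Zone comes back exactly as it was)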
ZFS and JumpStart
Another use I had in mind for ZFS is a volume to store JumpStart configurations and Solaris Flash images of systems as part of a disaster recovery plan. In my initial testing, installing Solaris 10 6/06 this way went off without any issues; however, this was not the case with any other release of Solaris. The setup_install_server script from 6/06 recognizes ZFS, and those from all of the other releases do not. So you have the choice of either using the setup_install_server script from the 6/06 release or setting your ZFS volume's mountpoint to legacy so that previous Solaris releases can see the mountpoint.
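The legacy workaround amounts to one command plus an /etc/vfstab entry; the pool and mount point names below are placeholders:
# zfs set mountpoint=legacy space/jumpstart
# mkdir -p /jumpstart
# mount -F zfs space/jumpstart /jumpstart
and, to make the mount persistent across reboots, a line in /etc/vfstab:
space/jumpstart  -  /jumpstart  zfs  -  yes  -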
The potential for this is huge: imagine a single machine running an entire environment on a ZFS filesystem. Or, if you are more than a little paranoid, break it up into several systems with shared storage. The ability to restore a filesystem (or a “complete” system) with two commands cannot be ignored, especially in development environments or during patching (all of the DBAs I know copy the oracle directory in case the patching goes south). With almost instantaneous recovery and the ability to clone Zones almost at will, ZFS makes an extremely welcome addition to Solaris 10.
SATA framework, a possible cure for the Solaris IDE woes
SPARC and x86 systems that use IDE disks have always been at a disadvantage, due either to the chipset support in most SPARC systems (prior to the Blade 100) or to the abysmal I/O of a Solaris machine with IDE disks, where maxphys is 57 KB on x86 and 131 KB on SPARC. The only tunables for IDE drives were the dma-enabled property and the blocking factor in the ata.conf file, and those were limited to x86 only. To further frustrate users, you cannot tune maxphys on an IDE system at all, since the ATA driver does not map to the sd (SCSI) driver. Although the dma-enabled property improved disk performance considerably, the system was still hampered by the maxphys limit of either platform. That all changes with the SATA framework included as part of Solaris 10 6/06. From the /boot/solaris/devicedb/master file, here are the SATA controllers supported by Solaris 10 6/06:
pci1095,3112 pci-ide msd pci ata.bef “Silicon Image 3112 SATA Controller”
pci1095,3114 pci-ide msd pci ata.bef “Silicon Image 3114 SATA Controller”
pci1095,3512 pci-ide msd pci ata.bef “Silicon Image 3512 SATA Controller”
pci1000,50 pci1000,50 msd pci none “LSI Logic 1064 SAS/SATA HBA”
The problem I have with the Silicon Image controllers supported by Solaris 10 x86 is that none of them support 3 Gbit/sec (SATA II) drives, while the LSI Logic 1064 does. It is more likely that your average enthusiast is going to buy a motherboard or controller card with a Silicon Image card than buy a LSI Logic card that costs considerably more than most motherboards. My Gateway system has an onboard nVidia MCP51 SATA controller, which is recognized but not driven by the new SATA framework; the output of /usr/X11/bin/scanpci -v is below:
pci bus 0x0000 cardnum 0x0e function 0x00: vendor 0x10de device 0x0266
nVidia Corporation MCP51 Serial ATA Controller
CardVendor 0x105b card 0x0ca8 (Foxconn International, Inc., Card unknown)
STATUS 0x00b0 COMMAND 0x0007
CLASS 0x01 0x01 0x85 REVISION 0xa1
BIST 0x00 HEADER 0x00 LATENCY 0x00 CACHE 0x00
BASE0 0x000009f1 addr 0x000009f0 I/O
BASE1 0x00000bf1 addr 0x00000bf0 I/O
BASE2 0x00000971 addr 0x00000970 I/O
BASE3 0x00000b71 addr 0x00000b70 I/O
BASE4 0x0000e001 addr 0x0000e000 I/O
BASE5 0xfebfd000 addr 0xfebfd000 MEM
MAX_LAT 0x01 MIN_GNT 0x03 INT_PIN 0x01 INT_LINE 0x0b
BYTE_0 0x5b BYTE_1 0x10 BYTE_2 0xa8 BYTE_3 0x0c
Despite the fact that my system is not supported per se, the performance of the root disk with no tuning is quite good. I used the test described in Richard McDougall's weblog entry titled “Tuning for Maximum Sequential I/O Bandwidth”; an example of the output is below, with a sketch of how the test is driven after it:
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
239.0 0.0 61184.1 0.0 0.0 1.0 0.0 4.1 0 99 c2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d2
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t0d3
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 amanet:vold(pid932)
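For reference, the test boils down to a large sequential read from the raw device while watching iostat in another window; something along these lines (the device name is specific to my machine, and the block size is simply a large one):
# dd if=/dev/rdsk/c2d0s0 of=/dev/null bs=1024k &
# iostat -xn 5                                    (watch the kr/s column for the disk being read)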
Xsun or Xorg
The biggest issue most people will face using Solaris 10 x86 is problems with video cards, not unlike Linux in the mid 1990s. Even using a supported card does not necessarily mean you will get a working display without some effort. Prior to the addition of the Xorg server, Sun's X server was Xsun, which only worked with a small selection of video cards. Sun used ATI as the video chipset of choice for its low-end and midrange video hardware, and I have used ATI cards successfully since Solaris 7 on various x86 machines.
Of the two x86 machines I installed Solaris 10 6/06 on, I had problems with the Gateway GT5056 when using the motherboard's integrated nVidia GeForce 6100 graphics and with the Pentium IV machine's ATI Radeon AGP card. The Solaris installer recognized both cards correctly, but upon the first reboot I got a 640×480 screen. In the case of the Gateway machine I ended up using VESA mode at 1024×768 with 256 colors. The Pentium IV machine, even though it was configured for 1280×1024 and 32-bit color, gave me a 1024×768 display. In the past, using an ATI video card in an x86 machine was a “no brainer”; now it seems that an nVidia card is the no-effort configuration.
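If you hit the same thing, the usual fix is to force the resolution by hand in /etc/X11/xorg.conf; the relevant piece looks roughly like this, with the identifiers being whatever your generated configuration already uses:
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024" "1024x768"
    EndSubSection
EndSection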
An unexpected annoyance
When initially installing Solaris 10 6/06 from CD media I found myself at the same point I had hit in the 1/06 beta: once you install Solaris 10 and reboot, you are asked for CDs 2 through 4 (or 5 in the case of 6/06) a second time! During this process no files are actually installed; you are prompted for the CDs and you click Exit after each one until you have gone through all of them, then reboot the system. For those of us who use systems that do not have DVD drives (mainly older SPARC machines) and do not want to go through the trouble of setting up a JumpStart server, this is more than a little painful.
I reported this as a bug during my testing of Solaris 10 1/06 and I am a little disappointed that the issue has yet to be fixed. I understand Sun's position in pushing people toward DVD as the installation media of choice, but many of us still have SPARC and x86 systems with CD-ROM drives, and replacing them (especially in SCSI systems) is not cheap or practical. This is one thing that should have been fixed in the 1/06 release, and Sun should not have allowed it to slide.
Final remarks
With ZFS, Zones and Containers, Sun takes Solaris 10 to a whole new level of functionality for businesses looking at consolidation through virtualization, a new filesystem with advanced capabilities and fine-grained resource controls. Managing ZFS filesystems is eased by the inclusion of web-based administration alongside the traditional command-line tools. The only thing that would make ZFS better would be the ability to revert individual files; that would make Solaris the choice for building “do it yourself” near-line storage systems.
As the SATA framework matures, I hope that Sun provides support for more than high-end cards and older controller logic. For a long time Sun has treated IDE as the low-end drive interface for those who “can't afford a SPARC” and are willing to suffer poor disk performance in order to use a good OS on Sun hardware (SPARC initially, and now x86). Since I haven't had the opportunity to use a Sun x86 system that supports SATA, I cannot say how well it works; the only Sun x86 systems I have ever used are V20Zs, which ship with SCSI disks. I would definitely like to get my hands on one of the SPARC machines using SATA and run it through some benchmarks to see what I can get out of it compared to a similarly configured SCSI system.
With X on x86, Sun in my opinion has some work to do. While providing the capability to use more cards is good, the ability to easily configure a display is just as important, and this point alone will dissuade neophyte users more than anything else. As a system administrator I am aware of kdmconfig and how to use it, and I also know the “time honored” way of configuring Xorg by hand, which is a throwback to the Linux days of the mid 1990s. Those coming from Linux, where the display is configured automatically, will see this as a step backwards for Solaris. I hope Sun puts more effort into automatic configuration of X and does not leave it in its current state.
Sun has put a great deal of effort into developing Solaris 10 6/06 and bringing new and powerful features to users and system administrators. This release has a few more issues than previous releases I have used and tested, and I hope that what I experienced is just a one-time anomaly. Other than the CD install issue, the SPARC machines I ran 6/06 on worked without incident. None of the issues are “show stoppers”, just annoyances that I experienced mostly on x86 systems.
It is more likely that your average enthusiast is going to buy a motherboard or controller card with a Silicon Image card than buy a LSI Logic card that costs considerably more than most motherboards.
By that, you mean, “It is more likely that your average server admin is going to buy a LSI Logic card that costs considerably more than most enthusiasts’ motherboards.”
Actually, what I should have said was that an enthusiast will buy a motherboard or an expansion card with Silicon Image controller logic before they spend the money on an LSI (or similar high-end) card.
The administrator is of course going to spend big bucks on the LSI (or similar) card. But if Sun is truly trying to get home hobbyists and users interested in Solaris, then Sun is going to have to make some compromises on what hardware they intend to support.
But if Sun is truly trying to get home hobbyists and users interested in Solaris
What makes you think Sun gives a rat's ass about home hobbyists? Even OpenSolaris is targeted at developers, and it happened mainly because Sun's engineers wanted it and Sun's customers wanted it. It doesn't make any sense at all for Sun to have hobbyists on their radar.
Isn't that how Linux started, as something for “developers only”? I can't agree with you at all, considering what I read on a daily basis in the various OpenSolaris forums.
And why wouldn't Sun be interested in the viewpoint of the hobbyist? That is how Linux migrated from the basement to the server room. For a long time Sun ignored Solaris x86, until several Sun officials met with the “Secret Six” and they demonstrated the value of continued development and support of Solaris x86. I personally think they are listening intently to those hobbyists.
Not a bad review. You did hit on some of the nice selling points of Solaris. It seems you are targeting the developer-type audience, and if that was your intent, you hit on the big issue – X/video card support. It's something Sun does need to work on tremendously.
The other biggie you touched on lightly, but which needs a good beating, is the installer. It really is terrible. Now, that being said, most everybody who uses Solaris for any length of time sets up a JumpStart server and installs that way. That's fine for server admins, but devs really shouldn't have to jump through hoops to get Solaris installed in a reasonable fashion.
Both of these are known and acknowledged issues; I just don't know what priority they have been given.
Thanks for the summary review.
> Not a bad review. You did hit on some of the nice selling points of Solaris. It seems you are targeting the developer-type audience, and if that was your intent, you hit on the big issue – X/video card support. It's something Sun does need to work on tremendously.
I'm surprised the Xorg nVidia drivers didn't work; I thought they had been upgraded already. Nevertheless, Solaris has the same issues with video drivers as, say, Linux or BSD; they use the same (or very close) source base, and 6.9 and 7.0 aren't tied to any particular video driver. One thing, though: you don't even want to settle for the native nVidia driver on Solaris x86/x64. You want to download the drivers from the nVidia site and take advantage of OpenGL and all the hardware acceleration. Now, it would be nice if Sun just delivered the nVidia drivers with the system, and I think they are working on this. You can do it with Linux distributions; no reason why you can't do it with Solaris.
By the way, the OpenSolaris discussion site has a pretty good list of the changes that go into Solaris x86/x64:
http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_…
You can see when/if your bug gets integrated. Even logging the bug is free, and getting the status or asking questions about if and when it will be fixed is free too. Actually, I just wish they would take Xsun out of the picture altogether for the x86/x64 platform, but I think Xsun is needed for the Sun Ray servers.
—Bob
The nVidia card (the 7300 GS) is the only one I didn't have to fight with in order for it to work. The motherboard-integrated 6100 and the ATI card are where I had the problems.
I know nVidia has a Solaris x86 driver, but it appears to be for the Quadro series of cards that are used in the Sun Ultra workstation series, so I never tried it.
It works for the GeForce series quite well; it just isn't officially supported. It worked with my 7900 GT just fine.
Must be nice to have deep pockets!
> I know nVidia has a Solaris x86 driver, but it appears to be for the Quadro series of cards that are used in the Sun Ultra workstation series, so I never tried it.
Actually, the nVidia Solaris x86 drivers should work with all versions of nVidia chipsets. I have a 7800 GTX in my Sager 9750 laptop that works with the Solaris x86 nVidia driver. Much of the embedded nVidia motherboard video also seems to work with the drivers from nVidia.
As for the Xorg drivers not coming up with the correct initial resolution during installation, you could consider this a bug and log it on the opensolaris.org site, and they do address these types of issues. Heck, they address more than that: I had a bug with my nVidia 7800 GTX six months ago causing a panic, and Sun fixed that too. I've never seen them not address a bug with a good description and feedback, just like any other open source platform. But then again, Xorg is not Sun or Solaris specific to begin with, and Sun does have engineers in the X organization.
I wish ATI would become more active in the Unix/Linux/BSD community. Maybe with the AMD buyout they will.
—Bob
Thanks. The focus on the problem I have with X is due to installation differences. On SPARC, for the most part you don't worry about the frame buffer; it is usually detected and the appropriate driver and resolution are set for you. It should be that way for x86 with supported cards as well. In the past (up to Solaris 10 1/06) I was able to use an ATI video card with no problems after setting the resolution in kdmconfig during installation. This is the first release of Solaris where I have had trouble with a supported card.
I really don't have a problem with the installer other than a couple of questionable screens that I think could go away. I think a number of people would agree that it takes far too long to install Solaris, but that horse has been beaten to death already.
Maybe I need to do some more reading or something, but I still don't see what all the fuss is about ZFS. I'm most familiar with the volume managers from Linux (LVM2, EVMS) and AIX (LVM) and their associated filesystems (ext3, ReiserFS, JFS, JFS2, etc.), and I guess I don't see what ZFS can do that the right combination of those technologies can't. Can someone give me a quick rundown of why ZFS is better than LVM2+ReiserFS or AIX LVM+JFS2?
I think ZFS offers completely integrated, end-to-end tools, while LVM+whatever-FS requires that you manage many things by hand, using tons of pvcreate, vgcreate, lvcreate, mkfs, etc. commands instead of just two commands in ZFS (see the sketch below).
BTW, LVM can't do RAID-4 or RAID-5 on its own, and overall LVM performance sucks (snapshots especially slow down disk performance enormously).
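Roughly, the contrast looks like this; the device names are placeholders, and the Linux side assumes LVM2 with ReiserFS:
# pvcreate /dev/sdb /dev/sdc
# vgcreate datavg /dev/sdb /dev/sdc
# lvcreate -L 100G -n datalv datavg
# mkfs -t reiserfs /dev/datavg/datalv
# mkdir /data && mount /dev/datavg/datalv /data
versus the ZFS equivalent:
# zpool create data c1t0d0 c1t1d0                 (pool, filesystem and mount at /data in one step)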
My AIX skills are a little rusty, but if I am not mistaken you cannot create a RAID 5 array using the AIX LVM, only RAID 0 (stripes) or RAID 1 (mirrors). You either have to use a hardware-based solution or use Veritas Volume Manager (VxVM) to do it in software. Either way it makes for an expensive solution if you want fault-tolerant storage, and that still does not address ZFS's snapshot capability, which comes at no extra cost.
Solaris 10 06/06 also has SATA framework support for Marvell 88SX60xx and Marvell 88SX50xx based HBAs using the marvell88sx driver.
I wanted a cheap SATA2 300Mbit/sec controller card supported at this speed in Solaris using the native SATA framework. After looking around, I chose the Supermicro AoC-SAT2-MV8 card, less than $100 for 8 ports.
I hope in the future we’ll see more support for SATA2 controllers.
The funny thing is, none of our drives are capable of 300mbit/s, short of burst from cache. It’s all moot. I don’t understand this demand for SATA2. Once we have media that can actually USE the additional bandwidth, I’ll understand – but as SATA/SATA2 is per-drive, and not shared, the BW is far more than sufficient even for 15k drives.
Well, actually all my drives are capable of 300Mbit/sec, which is 37.5MB/sec. I guess we confused MB with Mbit; SATA 3Gb/sec can go up to 300MB/sec.
That said, cache burst speed is somewhat important for me as my drives can transfer over 200MB/s from cache. So why limit them to 150MB/sec? Of course this speed difference is barely noticeable except in certain occasions.
The real reason why I want SATA 3.0 Gb/sec is not because of the bandwidth, but because of the features it brings compared to SATA 1.5 Gb/sec, such as NCQ, HotPlug, Staggered Spinup, Port Multiplication, Port Selection, eSATA and xSATA. Plus, why choose a standard from 2002 when you can choose one from 2005, if it works?
My apologies, I copied the mbit/s from the previous post mentally, without thinking about it. Yes, MB/s.
The cache improvement really won't be noticeable. Cache generally isn't for a large amount of data (it can't be; caches are generally 8/16 MB), so it's all overkill. When we get new disk tech, that's when the speed of SATA2 will be nice.
A lot of those things you mentioned are available with SATA1 as well. I’m not really sure I understand your logic there.
Now, the newer-standard argument – that I can go with. Since SATA2 is backwards compatible, I suppose it makes sense to implement support for SATA2 devices, as they are likely to be more common in the near future.
Right now, though, Solaris has a LOT of things it needs to support hardware-wise, and I suspect SATA controllers are pretty far down the list of what needs to be supported first. I've got a half dozen machines where Solaris won't even boot off the CD into the installer.
Maybe I need to do some more reading or something, but I still don’t see what all the fuss is about ZFS.
I took a hard look at ZFS when it came out as an early release last year. I read the manual cover to cover. As an admin I can say that ZFS does a really, really good job of providing enterprise-level disk/volume/partition/RAID management AND (big “and” here) has an interface that is easy to love.
If you don't manage a lot of disk space, you don't care about ZFS. If you have ever created a RAID volume, ZFS makes it easier the next time around, from a software perspective.
ZFS is the good stuff.
I'm not familiar with Solaris at all, but looking at the screenshots, is that GNOME? That GUI looks really nice compared to the ones shipped with other Linux distros. Can I use this WM on any Linux distro?
It's just GNOME with a custom button for the GNOME menu (no, not a custom menu), and they've removed the top panel. You can easily customise your GNOME desktop to look exactly like that one.
Well, check out the more recent (two-month old) JDS (Gnome) on Sun’s OpenSolaris distribution:
http://www.pbase.com/taochen/image/62600948/original
I’ll give this a try.
Your use of a Sun Blade 100 for a technical analysis on the SPARC side, including your comments about IDE limitations, is missing a large disclaimer: this is a FIVE YEAR OLD entry-level workstation. Would you write a review today of Windows Vista or XP running on circa-Y2K 800 MHz AMD hardware? I have a PC from 2000 (about the same design year as the Sun Blade 100) that can't accommodate IDE drives larger than 137 GB for the same reason as the SB100 – the disk controller chip is too old. I don't fault the OS, however; it's simply a hardware limitation. Would I write a review of Ubuntu Dapper Drake running on my old system? Probably not.
Other than this important omission, nice review!
The reason I used a Blade 100 is that it is what I have available at work and can stick on my desk. Management would have a problem with me using one of our 4800s and the SAN. At some point in the future I might replace it with a Blade 2000 (Fibre Channel drives instead of IDE).
Fair enough – you test with what you have, and maybe it's a testament to some of Sun's warhorse workstations that seem to last forever, but even your Blade 2000 (circa-2002 hardware) is out of date if you are doing modern OS reviews, especially comparisons between SPARC and (HP|Dell|IBM) x86 gear. I just read about someone using an Ultra 5 (introduced in 1998) to compare Solaris vs. SUSE running on some kind of hopped-up Pentium 4. PC shops seem to churn their desktops every 18-24 months; I still see Ultra 1s out there!
But if Sun is truly trying to get home hobbyists and users interested in Solaris
Honestly, I don't think Sun is spending any effort at all marketing or developing Solaris for hobbyists. And frankly, I don't think Solaris is a good fit for most hobbyists either. I think the OpenSolaris projects are good for tech-savvy people who want a good challenge. Personally, I think Solaris has more of a learning curve than Linux, and I think it is less appropriate as a desktop solution.
Solaris is an enterprise solution, and as such it is geared more toward servers and larger numbers of installations.
And what exactly makes Solaris “harder” to use than Linux? The argument you make against Solaris can be made against Linux if you look at it from a neophyte user's perspective.
I'm not sure if I have this right, but does the SPARC version of Solaris 10 6/06 support SATA (Silicon Image, etc.), or is that limited to the x86 version?
I have a Mac Sil3112 card that obviously gets recognized by the OpenFirmware on my Ultra 60, but I'd need to know if Solaris actually supports it. Last time I checked on their site, SATA was only available for the x86 versions of Solaris.
This is the result of a quick Google search; the answer is yes, but it is not cheap:
http://www.unixzone.dk/unix/20060218/sata-on-sparc-solaris/
> This is the result of a quick Google search; the answer is yes, but it is not cheap: <snip>
That dates back to February. I wonder if anything has changed in this regard, though.