A Look at Solaris 10 6/06

Sun Microsystems releases a new version of Solaris roughly every four to six months; in many cases a release contains little more than bug fixes and minor changes in functionality, and most go by without a great deal of fanfare. Just as Solaris 10 3/05 broke new ground with Zones, DTrace and the Service Management Facility, Solaris 10 6/06 introduces ZFS (the Zettabyte File System), the SATA framework and Xorg 6.9, which will be the primary focus of this review.

I used three machines in the preparation of this article. The first was a Sun Blade 100 with 2 GB of RAM, two 80 GB hard disks, a dual-channel SCSI HBA, an XVR-100 64 MB frame buffer, an internal IDE DVD-ROM drive and two Sun MultiPacks holding twelve 36 GB hard disks. Of the two x86 machines, one was a home-built Pentium IV with 1 GB of RAM, an ATI Radeon 9100 AGP card with 128 MB of RAM, a 3Com 3C905 10/100 NIC, a Lite-On DVD-RW drive and a 120 GB Western Digital hard disk. The other was a Gateway GT5056 dual-core Athlon 64 machine with 3 GB of RAM, two 250 GB Western Digital SATA hard disks, a PCI Express nVidia GeForce 7300 GS video card, an Intel Pro/100 PCI NIC and internal DVD-ROM and DVD-RW drives. During the review I also used the motherboard-integrated nVidia GeForce 6100 graphics. The monitors used for the two x86 machines were a pair of ViewSonic A90 19″ CRTs, and a Dell 2407WFP flat-panel LCD was attached to the Blade 100.

Is ZFS really all that great?

Sun has made a lot of noise about ZFS, and rightly so. Although creating, modifying and deleting filesystems is much easier than before, I don't think that this is the most important part of ZFS. Using Solaris Volume Manager, Solstice DiskSuite or Veritas Volume Manager is not a simple matter, especially with complex volumes. There is no doubt that ZFS eliminates a lot of pain in volume creation and management, and this is for the most part what you hear about from Sun.
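
As a quick illustration, a mirrored pool and a couple of filesystems take only a handful of commands (the pool, dataset and disk names here are illustrative, not from my actual setup):

# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/home
# zfs create tank/home/kmg
# zfs set quota=10g tank/home/kmg

Compare that with the partitioning, metadevice creation and newfs steps the older volume managers require for the same result.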

What has me excited about ZFS is snapshots, not that snapshots are new to Solaris. Since the Solaris 8 1/01 release the fssnap command has been included to allow administrators to take snapshots of UFS filesystems. While fssnap works, it requires a backing store to hold the snapshot, and the snapshot cannot reside in the filesystem you are trying to snapshot. Another limitation is that fssnap only works with ufsdump and ufsrestore. ZFS snapshots are easier to take from either the command line or the ZFS GUI and do not require ufsdump and ufsrestore.
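
For example (the dataset name is again illustrative), taking a snapshot and then seeing what snapshots exist is a matter of two commands:

# zfs snapshot tank/home/kmg@before-patch
# zfs list -t snapshot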

Why snapshots are important

The conventional wisdom in managing a large number of machines is that you set up a tape backup strategy and routinely back up the various machines and filesystems in your environment. The problem comes when it is time to restore a file, directory or entire system from tape. Many places have never tested their backup tapes to see if they actually work! Another issue is time: recovering individual files or a filesystem from tape can take a long time, and your system is down or reduced in functionality while the restore is taking place. Taking a snapshot takes only seconds, and recovering from one is just as fast.

I first started working with snapshots when we purchased a Network Appliance Filer NAS at a previous job. The Filer runs a BSD-derived operating system and the Write Anywhere File Layout (WAFL) filesystem, and WAFL supports restoring snapshots down to the level of individual files. ZFS, on the other hand, is not that granular and can only be used to restore (or revert) entire filesystems. While some might view this as a bad thing, I see it as a good start because it comes with the OS and works regardless of what storage is used.

Zones + ZFS + ZFS snapshots = Heaven

One of the first things I thought of for ZFS was creating a raidz pool and using it as the location to install Non-Global Zones. So on a Blade 100 with 2 GB of memory, two 80 GB IDE disks, a dual-channel PCI SCSI card and a Sun StorEdge MultiPack with six 36 GB SCSI disks, I created a 102 GB ZFS raidz filesystem called Zones. I then created three whole-root Non-Global Zones called sol10script, adllms and oracle-test. The sol10script Zone is used to test a shell script we are writing to lock down Solaris 10 machines after they are built. The adllms Zone is being used to test the feasibility of installing and using a particular open source Learning Management System with Solaris.
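
The whole setup amounts to a handful of commands. Here is a sketch for the sol10script Zone (the disk names are from my configuration and will differ elsewhere, and I am reproducing the zonecfg session from memory; the other two Zones follow the same pattern):

# zpool create Zones raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zfs create Zones/sol10script
# zonecfg -z sol10script
zonecfg:sol10script> create -b
zonecfg:sol10script> set zonepath=/Zones/sol10script
zonecfg:sol10script> set autoboot=true
zonecfg:sol10script> commit
zonecfg:sol10script> exit
# zoneadm -z sol10script install
# zoneadm -z sol10script boot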

The oracle-test Zone is a little more ambitious in that not only am I creating a Non-Global Zone, I am also mounting another ZFS filesystem inside the Zone to install Oracle on. I first created a second raidz volume called u01 using three 36 GB disks and unmounted the filesystem. I then set its mountpoint to legacy (a requirement for mounting a ZFS filesystem in a Non-Global Zone). In order to mount the Oracle DVD I created a loopback filesystem so that the Blade 100's DVD drive could be used in the Non-Global Zone. I then made all of the standard modifications necessary to install Oracle 10g, with one exception: I made all of the changes to /etc/system in the Global Zone (which is inherited by all Non-Global Zones). During the installation Oracle complained about not being able to read /etc/system, which I ignored. Once the installation was complete I started the Database Assistant, created a database and started it with no issues. I could also have used various Resource Controls to create a Container for Oracle, but considering the system I was running it on I chose not to.
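
The configuration for those two mounts looks roughly like this (a sketch from memory; the disk names and the DVD mount directory are illustrative):

# zpool create u01 raidz c3t0d0 c3t1d0 c3t2d0
# zfs set mountpoint=legacy u01
# zonecfg -z oracle-test
zonecfg:oracle-test> add fs
zonecfg:oracle-test:fs> set dir=/u01
zonecfg:oracle-test:fs> set special=u01
zonecfg:oracle-test:fs> set type=zfs
zonecfg:oracle-test:fs> end
zonecfg:oracle-test> add fs
zonecfg:oracle-test:fs> set dir=/mnt/dvd
zonecfg:oracle-test:fs> set special=/cdrom/cdrom0
zonecfg:oracle-test:fs> set type=lofs
zonecfg:oracle-test:fs> add options [ro,nodevices]
zonecfg:oracle-test:fs> end
zonecfg:oracle-test> commit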

To test ZFS snapshots I first created a snapshot of the sol10script Zone's filesystem and ran the lockdown script, noting the changes made to the Zone and any complaints produced while the script ran. Once I noted what needed to be fixed in the script, I rolled back the snapshot and rebooted the Zone. The end result was that the Zone was restored to the state it was in before I ran the script. I was also able to accomplish the same thing with the u01 filesystem for the oracle-test Zone: I deleted it, restored it from a snapshot and started my Oracle database as if nothing had happened.
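
The cycle for the sol10script Zone looks something like this (dataset names follow the sketch above):

# zfs snapshot Zones/sol10script@prelockdown
(run the lockdown script inside the Zone and note what breaks)
# zoneadm -z sol10script halt
# zfs rollback Zones/sol10script@prelockdown
# zoneadm -z sol10script boot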

ZFS and JumpStart

Another use I had in mind for ZFS is to create a filesystem to store JumpStart configurations and Solaris Flash images of systems as part of a disaster recovery plan. In my initial testing, installing Solaris 10 6/06 this way went off without any issues; however, this was not the case with any other release of Solaris. The setup_install_server script shipped with 6/06 recognizes ZFS, while the scripts from all of the other releases do not. So you have the choice of either using the setup_install_server script from the 6/06 release or setting your ZFS filesystem's mountpoint to legacy so that previous Solaris releases can see the mountpoint.
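
The legacy-mountpoint workaround is short; a sketch (the pool, dataset, install directory and media paths are all illustrative, and the Tools path varies by release and media):

# zfs create tank/install
# zfs set mountpoint=legacy tank/install
# mkdir -p /export/install
# mount -F zfs tank/install /export/install
# cd /cdrom/cdrom0/Solaris_9/Tools
# ./setup_install_server /export/install/sol9_sparc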

The potential for this is huge: imagine a single machine running an entire environment on a ZFS filesystem. Or, if you are more than a little paranoid, break it up into several systems with shared storage. The ability to restore a filesystem (or a “complete” system) with two commands cannot be ignored, especially for development environments or patching (all of the DBAs I know copy the oracle directory in case the patching goes south). With almost instantaneous recovery and the ability to clone Zones almost at will, ZFS makes an extremely welcome addition to Solaris 10.

SATA framework, a possible cure for the Solaris IDE woes

SPARC and x86 systems that use IDE disks have always been at a disadvantage, due either to the chipset support in most SPARC systems (prior to the Blade 100) or to the abysmal I/O of a Solaris machine with IDE disks, where maxphys is 57 KB on x86 and 131 KB on SPARC. The only tunables for IDE drives were the ata-dma-enabled property and the blocking factor in the ata.conf file, and those were limited to x86 only. To further frustrate users, you cannot tune maxphys on an IDE system at all, since the ATA driver does not map to the sd (SCSI) driver. Although enabling DMA improved disk performance considerably, the system was still hampered by the maxphys limit on either architecture. That all changes with the SATA framework included as part of Solaris 10 6/06. From the /boot/solaris/devicedb/master file, here are the SATA controllers supported by Solaris 10 6/06:

pci1095,3112 pci-ide msd pci ata.bef “Silicon Image 3112 SATA Controller”
pci1095,3114 pci-ide msd pci ata.bef “Silicon Image 3114 SATA Controller”
pci1095,3512 pci-ide msd pci ata.bef “Silicon Image 3512 SATA Controller”
pci1000,50 pci1000,50 msd pci none “LSI Logic 1064 SAS/SATA HBA”
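
For comparison, the old-style ATA tuning mentioned above amounts to a couple of properties in /kernel/drv/ata.conf on x86, along these lines (shown from memory; check the ata(7D) man page for your release before relying on the exact names and values):

ata-dma-enabled=1;
drive0_block_factor=0x10;
drive1_block_factor=0x10;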

The problem I have with the Silicon Image controllers supported by Solaris 10 x86 is that none of them support 3.0 Gbit/sec (300 MB/sec) drives, while the LSI Logic 1064 does. Your average enthusiast is far more likely to buy a motherboard or controller card with a Silicon Image chip than an LSI Logic card that costs considerably more than most motherboards. My Gateway system has an onboard nVidia MCP51 SATA controller, which is recognized but not used by the new framework; the output of /usr/X11/bin/scanpci -v is below:

 pci bus 0x0000 cardnum 0x0e function 0x00: vendor 0x10de device 0x0266
nVidia Corporation MCP51 Serial ATA Controller
CardVendor 0x105b card 0x0ca8 (Foxconn International, Inc., Card unknown)
STATUS 0x00b0 COMMAND 0x0007
CLASS 0x01 0x01 0x85 REVISION 0xa1
BIST 0x00 HEADER 0x00 LATENCY 0x00 CACHE 0x00
BASE0 0x000009f1 addr 0x000009f0 I/O
BASE1 0x00000bf1 addr 0x00000bf0 I/O
BASE2 0x00000971 addr 0x00000970 I/O
BASE3 0x00000b71 addr 0x00000b70 I/O
BASE4 0x0000e001 addr 0x0000e000 I/O
BASE5 0xfebfd000 addr 0xfebfd000 MEM
MAX_LAT 0x01 MIN_GNT 0x03 INT_PIN 0x01 INT_LINE 0x0b
BYTE_0 0x5b BYTE_1 0x10 BYTE_2 0xa8 BYTE_3 0x0c
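
A quick way to see whether your disks are actually under the SATA framework is to look for sata attachment points with cfgadm and to check which driver claims the controller with prtconf; on a controller handled by the legacy ATA path, no sata ports are listed:

# cfgadm -al
# prtconf -D | more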

Despite the fact that my system is not supported per se, the performance of the root disk with no tuning is quite good. I used the test described in Richard McDougall's weblog entry titled “Tuning for Maximum Sequential I/O Bandwidth”; an example is below:

                    extended device statistics
    r/s    w/s    kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  239.0    0.0 61184.1    0.0  0.0  1.0    0.0    4.1   0  99 c2d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d1
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d2
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d3
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 c1t0d0
    0.0    0.0     0.0    0.0  0.0  0.0    0.0    0.0   0   0 amanet:vold(pid932)
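
The commands behind a run like this are simple; my reconstruction of the approach from McDougall's entry (not his exact script) is a large sequential read from the raw device while iostat watches:

# dd if=/dev/rdsk/c2d0p0 of=/dev/null bs=1024k &
# iostat -xn 5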

Xsun or Xorg

The biggest issue most will face using Solaris 10 x86 is problems with video cards, not unlike Linux of the mid-1990s. Even using a supported card does not necessarily mean you will get a display without some work. Prior to the addition of the Xorg server, Sun's X server was Xsun, which only worked with a small selection of video cards. Sun used ATI as the video chipset of choice for its low-end and midrange video hardware, and I have used ATI cards successfully since Solaris 7 on various x86 machines.

I had problems with both x86 machines I installed Solaris 10 6/06 on: the Gateway GT5056 when using the motherboard-integrated nVidia GeForce 6100 graphics, and the Pentium IV machine with the ATI Radeon AGP card. The Solaris installer recognized both cards correctly, but upon the first reboot I got a 640×480 screen. On the Gateway machine I ended up using VESA mode at 1024×768 with 256 colors. On the Pentium IV machine, even though it was configured for 1280×1024 and 32-bit color, I got a 1024×768 display. Prior to this, using an ATI video card in an x86 machine was a “no brainer”; now using an nVidia card appears to be the no-effort configuration.
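
When the installer's choice does not stick, the usual recourse is to regenerate and hand-edit an X configuration (or fall back to kdmconfig and Xsun). A sketch of the Xorg route, with the paths and resolution as examples only (Xorg -configure writes its template into root's current directory):

# /usr/X11/bin/Xorg -configure
# cp xorg.conf.new /etc/X11/xorg.conf
(edit the Modes line in the Screen section to list the resolution you want, e.g. "1280x1024")
# svcadm restart cde-login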

An unexpected annoyance

When initially installing Solaris 10 6/06 from CD media I ran into an issue I had also seen in the 1/06 beta: once you install Solaris 10 and reboot, you are asked for CDs 2 through 4 (or 5 in the case of 6/06) a second time! During this process no files are actually installed; you are prompted for the CDs, and you click Exit after each one until you have gone through all of the CDs and reboot the system. For those of us who use systems that do not have DVD drives (mainly older SPARC machines) and do not want to go through the trouble of setting up a JumpStart server, this is more than a little painful.

I reported this as a bug during my testing of Solaris 10 1/06, and I am a little disappointed that the issue has yet to be fixed. I understand Sun's position in pushing people toward DVD as the installation media of choice, but many of us still have SPARC and x86 systems with CD-ROM drives, and replacing them (especially in SCSI systems) is not cheap or practical. This is one thing that should have been fixed in the 1/06 release, and Sun should not have allowed it to slide.

Final remarks

With ZFS, Zones and Containers, Sun takes Solaris 10 to a whole new level of functionality for businesses looking at consolidation through virtualization, a new filesystem with advanced capabilities and fine-grained resource controls. Managing ZFS filesystems is eased by the inclusion of web-based administration as well as the traditional command-line tools. The only thing that would make ZFS better would be the ability to revert at the individual file level; that would make Solaris the choice for building “do it yourself” near-line storage systems.

As the SATA framework matures, I hope that Sun provides support for more than high-end cards and older controller logic. For a long time Sun has treated IDE as the low-end drive interface for those who “can't afford a SPARC” and are willing to suffer poor disk performance in order to use a good OS on Sun hardware (SPARC initially and now x86). Since I haven't had the opportunity to use a Sun x86 system that supports SATA, I cannot say how well it works; the only Sun x86 system I have ever used is the V20Z, which ships with SCSI disks. I would definitely like to get my hands on one of the SPARC machines using SATA and run it through some benchmarks to see what I can get out of it compared to a similarly configured SCSI system.

With X on x86, Sun in my opinion has some work to do. While providing the capability to use more cards is good, the ability to easily configure a display is just as important, and this point alone will dissuade neophyte users more than anything else. As a system administrator I am aware of kdmconfig and how to use it, and I also know the “time-honored” method of configuring Xorg by hand, which is a throwback to the Linux days of the mid-1990s. Those coming from Linux, where the display is configured automatically, will see this as a step backwards for Solaris. I hope Sun decides to put more effort into automatic configuration of X and does not leave it in its current state.

Sun has put a great deal of effort into developing Solaris 10 6/06 and bringing new and powerful features to users and system administrators. This release has a few more issues than previous releases I have used and tested, and I hope that what I experienced is just a one-time anomaly. Other than the CD install issue, 6/06 worked without incident on the SPARC machines I used. None of the issues are “show stoppers”, just annoyances that I experienced mostly on x86 systems.
