Sun has really shifted gears lately with regard to Solaris, SPARC, and x86. For many years, Sun seemed to relegate Solaris x86 to the status of red-headed stepchild: undeserving of attention, nurturing, and support. It furthered this perception in January 2002, when it announced that Solaris 9, the upcoming version of the operating system, would not be released for the x86 platform. Solaris 9 was to be a (more lucrative) SPARC-only release.
NOTE: All tests were conducted by Tony Bourke. OSNews had no participation in the tests or its procedure.
However, Sun has since changed its tune by releasing Solaris 9 x86.
It is now pushing Solaris for the x86 platform in an attempt to
regain share in the low- and mid-level server market, where
Intel/AMD systems have been decimating SPARC sales. While this move
might signal capitulation, or at least compromise, on the x86
question, Sun is now engaging in a full-fledged battle with Linux.
Sun's new story is that Solaris x86 is a better, safer, and more
stable alternative to Linux. Sun has even gone so far as to offer a
couple of top-of-the-line Intel-based x86 systems in single- and
dual-processor configurations. These systems currently ship with
licensed Solaris 9 x86 pre-installed at no extra cost (no-cost up
until January 4, 2004), with an option to purchase Red Hat Enterprise
Linux for an additional fee.
Here is a quote from Sun executive VP Jonathan Schwartz in an eWeek
article that sums up Sun's position on Linux and
Solaris (full article at
http://www.eweek.com/article2/0,4149,1274623,00.asp):
"Also, let me be really clear about our Linux strategy. We
don't have one. We don't at all. We do not believe that Linux plays a
role on the server. Period. If you want to buy it, we will sell it to
you, but we believe that Solaris is a better alternative, that is
safer, more robust, higher quality and dramatically less expensive in
purchase price."
With this new push for
Solaris x86, I decided to take a fresh look at Sun's latest, Solaris
9 x86 Platform Edition, and pit it against Red Hat Linux 9 in a number
of categories, including features, security, and performance.
Background
To start off, perhaps I should give a little
background on myself. I’ve been an avid Linux and Solaris user (on
x86 and SPARC, respectively) for about 8 years now. I’ve used Linux
since the 1.2.13 kernel (Slackware, back in the day), and Solaris
since 2.4. I’ve used them both extensively in my personal and
professional work, and I enjoy both of them for their various
strengths. I don’t consider myself biased toward one or the other, as
both have been very good to me over the years.
While I’ve dabbled with Linux on Alpha and SPARC,
I’ve primarily used it on the x86 platform. For Solaris, I’ve almost
exclusively used it on the SPARC platform, with the exception of a
brief stint with Solaris 2.6 x86 several years ago. I own several
x86 systems and one Sun Ultra 5, a SPARC-based workstation.
Version
Note: For those of you not familiar with Solaris, this may help clear up
some versioning confusion (i.e. Solaris 2.6, Solaris 7).
After
Solaris 2.6, Sun decided to change how it named each Solaris version.
The next version was Solaris 2.7, but Sun called it simply “Solaris
7”. Solaris 8 is actually 2.8, and Solaris 9 is 2.9. They are
sometimes still referred to by the old nomenclature (i.e. 2.7),
especially when dealing with porting and software versioning.
A
bit confused? I've still got more! Solaris versions are also
sometimes referred to as SunOS, and a different numbering scheme applies
there as well. SunOS was the original operating system released by
Sun in the early 1980s and is based on BSD, whereas Solaris is based on
SVR4 Unix (System V). The last version of SunOS was 4.1.4, which makes
Solaris 2.0 (Solaris started at 2.0) SunOS 5.0. So Solaris 9 is also
known as Solaris 2.9, and also known as SunOS 5.9.
Testbed
To evaluate these two venerable operating systems,
I used a VA Linux box I procured on eBay a year or so ago.
Processor: (2) Intel Pentium IIIs at 600 MHz, 256 KB cache
Motherboard: Intel L440GX+
RAM: 512 MB PC133 ECC
Disk: (1) 9 GB Maxtor SCSI LVD
SCSI Controller: Adaptec AIC-7896 Dual Channel
Video: Cirrus Logic GD 5480, 2 MB RAM
It's not the most
powerful box around, but it's dual-processor, plenty fast, and given
the cost-cutting nature of the industry, it's still a very common
system in terms of both power and configuration.
Red Hat Linux 9
I ran a standard install of Red Hat 9. Before
testing, I applied the updates available from Red Hat's site, which,
among other packages, updated glibc and the kernel, bringing the
system to kernel 2.4.20-20.9smp and glibc 2.3.2-27.9. Here is the
uname -a from the system:
Linux
mentat 2.4.20-20.9smp #1 SMP Mon Aug 18 11:32:15 EDT 2003 i686 i686
i386 GNU/Linux
Solaris 9
I used a standard
install of Solaris 9 x86 edition. Before testing, I installed the
9_x86_Recommended public patch cluster obtained from Sun’s download
page. Here is the uname
-a from the system:
SunOS
mentat 5.9 Generic_112234-08 i86pc i386 i86pc
I did no specific
tweaking of either operating system; except for patches, both are
running as stock installations with off-the-shelf configurations.
A Note On My Choice Of Linux Distribution
If this article gets published with a comments
section, it will invariably be filled with comments such as “you
fool, you should have used Slackware/SuSE/Mandrake” and “your
choice in Linux distributions shows your obvious inclination towards
the drowning of cute kittens". So I'm going to quickly address
my choice of Linux distribution.
I chose Red Hat 9 for the simple fact that it is a
very popular distribution, and is ubiquitous in terms of corporate
and personal deployment. Of course it is not the end-all be-all of
Linux distributions, but it’s both popular and effective, which makes
it appropriate as an evaluation platform.
Besides, most of what I evaluate has more to do
with Linux itself, and not the distribution. The only significant
effect Red Hat has on this evaluation is the specific version of the
kernel (2.4.20-20.9) and the use of RPMs (which some other Linux
distributions use as well).
I'm sure that despite this little interlude, I'll still
receive those flame-trolling comments. To that I say: if you have a
problem with my choice of distribution, feel free to run your
own evaluation. Also, your momma is ugly. Seriously. UG-LY.
Installation
Both systems offer both
graphical (X-based) and text-based installation options, and they are
both easy to follow and a snap to use for even green sysadmins.
Red Hat 9's graphical installation is probably the easier of the two,
although not by much. The installation process is similar for both,
and involves configuring hardware, selecting software
components, and setting system options such as networking and
timezones.
I didn't run into any problems installing either
operating system. Although I didn't measure the installation time
for each (that would involve sitting in front of the
system with a stopwatch, reacting to each prompt immediately, and
being very, very bored), my qualitative impression is that both
took roughly the same amount of time to install.
The Red Hat 9 Installation comes on 5 CDs, but most
installs only need the first two or three. The last two are filled
with mainly source code RPMs. The Solaris 9 install comes on a total
of four CDs: One boot CD, two software CDs, and a language CD
containing, of all things, additional languages.
Desktop
While both Solaris and
Red Hat 9 offer desktop environments, this evaluation is
concentrating on server features, so I won’t spend a lot of time on
desktops.
With that said, Red Hat
wins, no contest.
Solaris 9 lets you
choose between a very ugly GNOME 2.0 implementation and the typically
bland CDE, while Red Hat offers both KDE and GNOME 2.0, both
beautifully and cleanly implemented. There are no TrueType fonts on
Solaris 9, so fonts often render poorly, and in some cases they look
like an Atari 2600 created them.
Solaris 9 ships with its default browser as (I'm not kidding) Netscape
4.78. It also defaults to "warn on cookie", which on
Sun's own page pops up three warnings, making browsing extremely
annoying. 4.78 also fails to render many pages correctly. Red Hat
includes Galeon, Mozilla, and Konqueror, which by contrast are
stunningly beautiful.
While Red Hat 9 comes
with XFree86 4.3, Solaris runs Sun's own X11 system, called
OpenWindows. This can be problematic on Solaris given the multitude
of x86 video cards, many of which are not supported, although Sun offers
a porting kit for using XFree86 drivers with OpenWindows. Mileage on
that will vary greatly, of course.
However, both
recognized and correctly configured my Cirrus Logic card with its
pitiful 2MB VRAM and X came up without any problem. Using XFree86
for Solaris x86 is an option too, of course, but that would require
quite a bit of work.
While it's possible
to get a Solaris desktop looking nice and pretty, it would take quite a
bit of work, especially in compile time (if you've never
tried compiling the full GNOME or KDE from scratch for a
less-than-common operating system, I'll just say it's not my idea
of fun).
Solaris as a desktop
just isn't ready yet, which is why Sun itself is initially using
Linux instead of Solaris x86 for its new desktop offering, the Java
Desktop System.
Package Management
Both Solaris and Red Hat Linux (some other Linux
distributions like Gentoo use other methods) use a package management
system for installation of system software, and often use them for
third-party software as well.
Sun's pkgadd, pkgrm, and pkginfo are very similar to Red Hat's
rpm -i, rpm -e, and rpm -qa. Both are easy to use for installing
software, and creating Solaris packages and creating RPMs are also
both fairly easy and well documented.
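For the record, the day-to-day equivalents line up roughly as follows. This is a sketch for reference; the package names are illustrative, not from the article, and the guarded demo at the end just lists a few installed packages on whichever system it happens to run on.

```shell
# Rough Solaris <-> Red Hat equivalents (package names illustrative):
#   install:  pkgadd -d SUNWfoo.pkg   ~   rpm -i foo-1.0.rpm
#   remove:   pkgrm SUNWfoo           ~   rpm -e foo
#   query:    pkginfo                 ~   rpm -qa
# Guarded demo so the snippet runs on either system:
if command -v rpm >/dev/null 2>&1; then
  listing=$(rpm -qa 2>/dev/null | head -3)
elif command -v pkginfo >/dev/null 2>&1; then
  listing=$(pkginfo | head -3)
fi
listing=${listing:-"neither rpm nor pkginfo on this host"}
echo "$listing"
```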
My only complaint is where many Solaris packages
tend to put software: /opt.
I'm sure I'm treading into religious wars here, but I just prefer
/usr/local/
for installed software and /usr
for system software. While that is itself a religious gripe, a more
concrete gripe is that Sun's automatic disk partitioning during
installation doesn't create a separate /opt
file system and usually makes the root ("/")
file system small, making for a crowded root file system.
Having used both extensively for a number of
years, I find them functionally the same at a fundamental level, and
that was certainly true for this evaluation. You'd have to pick
something fairly specific to show as an advantage of one product over
the other, which I'm sure someone will do.
Security
There is one aspect of Solaris that I'm not fond
of at all: Sun's slow response in releasing security patches
for some of the more serious vulnerabilities. Two potentially
remotely exploitable vulnerabilities announced recently were in
Sendmail and OpenSSH
(http://www.cert.org/advisories/CA-2003-25.html
and http://www.cert.org/advisories/CA-2003-24.html,
respectively).
For the sendmail vulnerability, CERT and Bugtraq
both published the vulnerability on September 18th.
Sendmail.org had released a patch fixing the buffer overflow a day
prior, and Red Hat had an updated RPM that same day. Sun released an
advisory regarding the issue on the 19th, but did not have a
public patch until September 29th.
(http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=fsalert/56860)
There is a similar
story for the SSH vulnerability. On September 16th,
2003, the NetSys full-disclosure mailing list reported a potentially
remotely exploitable vulnerability to gain root access via then-current
versions of SSH. The same day, OpenSSH released version 3.7p1 to
account for possible vulnerabilities, almost immediately followed
with 3.7.1p1 to counter more possible exploits, and again with
3.7.1p2. Currently OpenSSH is at 3.7.1p2.
While Sun distributes
its own version of SSH, it's based on OpenSSH, and Sun admits its
versions are potentially vulnerable; it only issued preliminary
patches on September 30th.
While Solaris systems
may or may not run sendmail, they almost all run SSH, as
administration by plain-text-passwords-over-the-network telnet is
tantamount to gross negligence. Given that both advisories were
potentially root-exploitable and Sun had no patches available,
administrators were compelled to compile and install OpenSSH 3.7.1p2,
as well as sendmail.org's latest patched version of sendmail.
Given the critical
nature of sendmail and the ultra-critical nature of SSH, Sun gets a
big fat F when it comes to security. Red Hat, and indeed most of the
other Linux distributions, earn A's for their speedy patch releases
for these and other issues.
Patch Management
Patch management has as much to do with security
as it does with performance and bug-fixing, although security is the
most important.
Red Hat has an automated patch-install system, and
releases bug-fixes and security-related RPMs on a regular basis (more
regular than Sun does, as noted above). It’s fairly easy to use and
is effective, especially with Red Hat’s quick deployment of security
releases.
However, I think Solaris has the superior
patch-management system. One nice aspect of the Solaris
scheme is that once a patch has been installed, it
can be backed out if required: the previous versions of the drivers,
libraries, and binaries replaced by the patch are kept on the
system (in /var/sadm/). This has saved me on more than one occasion.
The command showrev -p shows all the patches currently installed.
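The workflow looks roughly like this. These commands exist only on Solaris, so they're shown as a reference sketch; the patch ID is illustrative, not a real patch from the article.

```shell
# Solaris patch workflow (patch ID 112233-01 is illustrative):
#   showrev -p          # list currently installed patches
#   patchadd 112233-01  # install; replaced files are saved under /var/sadm/patch
#   patchrm 112233-01   # back the patch out, restoring the saved files
msg="backout data lives under /var/sadm/patch on a Solaris host"
echo "$msg"
```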
Sun’s patch management system has been honed over
years of experience in dealing with all manner of bugs, and it shows
in the form of a comprehensive and effective patch implementation.
Now if Sun would just release security patches on time.
Network Security
Both Linux and Solaris have freeware firewall
options available to them. Linux’s 2.4 through 2.6 kernel has the
open source GPL Netfilter, and Solaris has the option of the freeware
IP Filter (http://coombs.anu.edu.au/~avalon/).
Solaris 9 x86 also includes SunScreen 3.2, a Sun-branded non-open
source firewall.
You might remember IP Filter from a spat a few
years ago between the OpenBSD folks (especially Theo) and Darren
Reed, IP Filter’s author. While the disagreement was over the
licensing specifics of IP Filter (it’s still free), the argument got
quite nasty and OpenBSD abandoned IP Filter as its firewall and then
developed pf on their own. However, IP Filter is still actively
maintained by Darren and works great on Solaris systems, as well as
FreeBSD, NetBSD, HP-UX, and others (although not modern Linux).
They both offer typical filtering goodies, such as
blocking by port and protocol, stateful inspection, logging, and NAT
and IP masquerading (when several systems share a single external
broadband connection). They are both full-featured,
high-performance, and can be used on even the most active systems.
I’ve yet to encounter a situation where either has imposed any
noticeable performance penalty.
I prefer IP Filter to Netfilter, primarily for
ease of use. IP Filter uses a much more natural-language-style
configuration, whereas Netfilter concentrates on building chains,
which can be quite cumbersome to construct and difficult to decipher.
They're even more cryptic than the venerable ACL configuration syntax
of Cisco's IOS.
Here is part of a sample configuration for IP
Filter, allowing only port 22 (SSH) and port 80 (HTTP) inbound and
keeping stateful information on those connections:

pass in quick on iprb1 proto tcp from any to any port = 22 flags S keep state
pass in quick on iprb1 proto tcp from any to any port = 80 flags S keep state
block in on iprb1
Here is a portion of a Netfilter configuration
with similar rules:

$IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 22 -j allowed
$IPTABLES -A tcp_packets -p TCP -s 0/0 --dport 80 -j allowed
$IPTABLES -P INPUT DROP
Still, they are both
very effective for protecting your systems, although I wish Linux
would take a cue from IP Filter (and pf, and OpenBSD) and greatly
simplify its configuration style.
Even if the system is
just a host and not a router, it's a good idea to add an additional
level of protection by running a host-based firewall.
Available Software
Linux is omnipresent
to the point where just about every server application imaginable
(commercial, enterprise, open source) is ported to Linux or
written with Linux in mind. The obvious exception is of course
Microsoft; I don't think we'll be seeing SQL Server for Linux (or
Solaris, for that matter) any time soon!
Linux is usually a
primary (or the primary) port of open source applications, so
virtually everything compiles without problem under Linux. If you’re
not the compile-from-source-type, Linux binaries are usually very
easy to find, in either RPM or tarball form.
Enterprise applications
like Oracle, PeopleSoft, Veritas, and others which once only ran on
commercial UNIXes, now all run on Linux as well.
The wide installed base
of Linux results in a great support community. If you've got a
problem with a commercial application, or an open source release won't
compile, the solution is usually just a Google search away. The same
can be said for Solaris x86, though to a much lesser extent, as its
installed base is nowhere near Linux's.
In short, if it runs,
it usually runs on Linux.
Solaris is a slightly
different story. While just about any enterprise server application
runs on Solaris SPARC, very few run on Solaris x86. Oracle,
PeopleSoft, and Veritas are just a few of the vendors with few
or no Solaris x86 ports.
Getting pre-compiled
binaries of open-source applications is more
problematic for Solaris x86 than for Linux. While some open source
projects offer pre-compiled binaries at their download sites,
they tend to concentrate on Solaris SPARC, Linux, and sometimes
Windows (MySQL is an example). Most (at least, none that I looked
at) do not offer Solaris x86 pre-compiled binaries.
On the bright side,
there are a couple of reliable, reputable resources for pre-compiled
Solaris x86 binaries, including Sun Freeware
(http://www.sunfreeware.com),
blastwave.org (http://www.blastwave.org/),
and Sun's own site for Solaris freeware
(http://wwws.sun.com/software/solaris/freeware/).
On many of those sites,
there'll be versions for Solaris 7 or 8 x86, but not Solaris 9 (at
least not yet). For the most part, Solaris 7 and 8 x86 binaries will
work fine on Solaris 9. The main exception would be kernel
modules, such as pre-compiled packages of IP Filter.
In the enterprise
software industry, Linux x86 and Solaris SPARC are always first on
the list for Unix-type ports, while Solaris x86 is often snubbed (a
position for which Sun has only itself to blame, having relegated
x86 to red-headed-stepchild status for so long).
The lxrun utility does
allow Solaris x86 users to run Linux applications, but I’m not sure
how much I would trust this for critical applications. If it were a
background process, like a binary-only monitoring or backup agent, I
might be inclined to use it. If it’s to run IBM’s Java environment
(which works on Linux but not Solaris x86) for Tomcat, then I’d
probably just run Linux.
On the bright side of
Solaris x86 is the fact that compiling open-source packages is fairly
trouble-free, probably owing to the attention paid to making sure
ports work for its SPARC counterpart. Once I got GCC and gmake,
compiling the various applications used in this evaluation was a
snap.
However, the Solaris
x86 situation is changing, at least slightly. OpenOffice just
recently started releasing Solaris x86 binaries in addition to its
SPARC binaries, and Oracle has announced that
Oracle 10g will be available on Solaris x86.
Of course the irony is
that a proprietary operating system like Solaris is a great
platform to run open source applications. But if you’re looking to
run some proprietary/commercial applications, Solaris x86 really
isn’t much of an option at this time.
File System Performance
Red Hat Linux typically uses the ext2 or ext3 file
systems for storage, while Solaris 9 x86 uses the Unix stalwart UFS.
Many people complain about the slowness of UFS, similar to the
complaints about BSD's FFS before the advent of soft updates. But
since Solaris 7, Sun has provided a remedy for this UFS slowness in
the form of UFS logging, which many sysadmins are surprised to learn
of.
UFS with logging provides a remarkable speed boost
for some types of file system operations, so much so that I'm
surprised Sun hasn't just turned it on by default. You can run
UFS logging safely on all file systems, including /.
To enable UFS logging, just add "logging" to the options column (it's
the last column) in /etc/vfstab and reboot. The file systems will
automatically come up with logging enabled. Of course, be careful
when editing /etc/vfstab,
as fudging it up can really ruin your day.
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /     ufs  1  no  logging
/dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /usr  ufs  1  no  logging
I ran a quick check to
see how much UFS logging helped: I untarred the gcc-3.3.1.tar source
code from ftp://www.gnu.org/gnu/gcc/
and then deleted the resulting directory. The results are quite dramatic:
Task                   | Without UFS Logging | With UFS Logging
tar -xvf gcc-3.3.1.tar | 4m 2s               | 0m 41s
rm -rf gcc-3.3.1       | 2m 53s              | 0m 4s
Going from over 4 minutes to
41 seconds for the tar, and from almost 3 minutes to just 4 seconds
for the rm, is quite a speedup. Of course, tar and rm
operations are by no means a comprehensive picture of I/O in a
production server environment, but you get the idea.
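The methodology is trivial to reproduce. Here is a miniature version using a small synthetic source tree as a stand-in for gcc-3.3.1 (the file names and sizes are illustrative; on real hardware you'd want a tarball big enough to swamp the cache):

```shell
# Build a small synthetic tree and tar it up (stand-in for gcc-3.3.1)
mkdir -p demo/src
for i in 1 2 3; do
  echo "int main(void) { return 0; }" > "demo/src/file$i.c"
done
tar -cf demo.tar demo
rm -rf demo

# The two timed operations from the article, run against the stand-in:
time tar -xf demo.tar   # article: time tar -xvf gcc-3.3.1.tar
time rm -rf demo        # article: time rm -rf gcc-3.3.1
```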
For Linux, the
difference between ext2 and ext3 was minimal, and both were faster
than Solaris (even with UFS logging enabled):
Task                   | ext2   | ext3
tar -xvf gcc-3.3.1.tar | 0m 24s | 0m 22s
rm -rf gcc-3.3.1       | 0m 2s  | 0m 2s
System Performance
For a bit more of a CPU- and I/O-intensive
operation, I ran the benchmarking utility included with
mysql-4.0.15a, the sql-bench test suite.
Figure 1: Basic file system performance
What surprised me about this test is that the results for Linux on
ext2 and ext3 were essentially identical, and the same was true for
Solaris with and without logging.
With that in mind, here are the results for Linux
ext3 versus Solaris with logging.
Figure 2: MySQL benchmarks
Figure 3: MySQL benchmarks continued
The results are mixed,
with Solaris coming out on top for some operations (most notably the
insert operations), and Linux coming out on top for others.
Compilation Performance
Another test of CPU speed was a comparison of
compilation times for a number of popular open-source applications.
Compilation time can depend on a number of factors, including
many that aren't tied to system speed (such as the number of system
include files), so these benchmarks serve to satiate curiosity more
than anything.
Figure 4: Compilation time
My first step was to get a common compiler on both systems, so I
chose GNU GCC's latest, 3.3.1. For Solaris, I used the GCC 2.95
build I obtained from Sun's own freeware site
(http://wwws.sun.com/software/solaris/freeware/).
GCC 3.3.1 compiled without any errors, taking 22 minutes and 26
seconds.
For Linux, I used Red Hat's pre-installed GCC
3.2.2 to compile GCC 3.3.1. This took quite a bit longer, clocking
in at 94 minutes, 10 seconds. Of course, since it was 3.2.2 versus
2.95 doing the compiling, it's not a fair comparison, so I didn't
include these numbers on the graph.
I then compiled MySQL 4.0.15a (which was then used
for the MySQL benchmarks), Perl 5.8.1, OpenSSH 3.7.1p2, Apache
1.3.28, and PHP 4.3.3.
So, for the most part, Solaris seems to be the
faster operating system when compiling software, but again, this
isn’t really a critical metric.
OpenSSL Performance
Encryption operations are one way to put a system
through its paces, so I compiled OpenSSL 0.9.7c and used its speed
test: opeenssl
speed.
When I initially ran it on Solaris and Linux, the
results showed a huge disparity. Upon investigation, I found that
the default build process on Solaris compiled it with -m486,
optimizing for the i486 processor, while the default Linux build
compiled with -mcpu=pentium, the 586 flag. This is a Pentium III
system, so really the best option is -march=i686
(Red Hat distributes an i686-optimized version of OpenSSL).
So I went back, re-compiled with the i686
optimization flag on both systems, and re-ran the tests.
Figure 5: DSA Operation rate
Figure 6: RSA operation rate
Linux held a very slight lead in all of the
results, which is interesting since the exact same hardware was used
for both.
As a side note, running the tests with i486,
i586, and i686 optimizations showed that the flags make a very big
difference on both platforms: around 20% between the i486 and i686
results.
Web Performance
Web performance was probably the biggest surprise
of this evaluation, as the results strongly favored one OS over the
other, and the OS I suspected would win didn’t.
One of the most critical performance metrics
web servers must contend with is connections per second, also
referred to as operations per second. Opening a new TCP
connection, handing it off to the web server, serving the request,
closing it off, and all the little things that go on in between
those steps can be very CPU intensive.
Testing web performance normally entails racks and
racks of systems, using sophisticated software and analysis tools,
and testing a variety of scenarios. I don't have racks and racks of
systems, nor do I have the software necessary to test a dynamic range
of metrics.
However, I do have more computers than is good for
any one person to own, and I have a freeware tool called http_load,
written by Jef Poskanzer. You can find this great little load
generator at http://www.acme.com.
I used a small 590-byte text file (small files are
best for testing ops/second, whereas large files are best for
testing throughput) and directed http_load on two different systems
acting as load generators to retrieve it as fast as possible
from the test machine. One system is a Pentium III 1 GHz, and the
other is an AMD 2200+, both running Red Hat Linux 9.
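http_load takes its targets from a plain file of URLs, one per line. A sketch of the setup follows; the host IP and filename are illustrative, and the parallelism level is an assumption on my part, since the article doesn't state the exact invocation used:

```shell
# http_load reads a file of URLs to fetch; host and path are illustrative
echo "http://192.168.1.10/test.txt" > urls.txt

# Then, on each load generator (http_load's documented flags; not run here):
#   http_load -parallel 20 -seconds 600 urls.txt
cat urls.txt
```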
I compiled a stock Apache 1.3.28 build with
dynamic module support enabled (mod_so) and loaded the PHP 4.3.3
module, although the test file was not PHP-based. Compilation options
and environment were identical on both systems. The only change to
the stock Apache configuration file was kicking MaxClients up to 256
(the hard-coded maximum without re-compiling).
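For reference, the lone edit to the stock httpd.conf looked like this; the PHP module lines are a sketch of a typical Apache 1.3 DSO setup, since the article doesn't show its exact configuration, and the library path varies by install:

```apache
# httpd.conf (Apache 1.3) -- the only change from stock for this test
MaxClients 256

# PHP 4.3.3 loaded as a DSO, roughly like this (path varies by install):
LoadModule php4_module libexec/libphp4.so
AddModule mod_php4.c
```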
I ran these operations for a period of 10 minutes,
and periodically checked the file with a web browser to ensure the
system was still serving up OK. I never had a problem retrieving the
file during the test of either system. The http_load utility showed
no errors for either run. Here are the results:
In this test, Linux was the clear winner showing
double the performance, with 2057 total operations per second to
Solaris’ 946. This result was the most surprising, as I had
hypothesized that Solaris would come out on top by a slight margin,
mostly because of Solaris’ renowned networking stack.
It would be interesting to see what effect
specific operating system tweaks would have on these tests, but that
would be a whole other article.
Performance Conclusion
These tests shouldn't be considered conclusive or
the final word on the performance of these systems by any means,
especially considering the complicated nature of performance
assessment in general and the limited equipment (and time) I had
with which to conduct them.
That said, I can make a few conclusions.
Performance was overall similar for most of the metrics tested,
perhaps with Linux in a very slight lead. However, with the web
operations test (arguably the most important and relevant), Linux is
a clear winner.
I didn’t have the opportunity (or means) to test
Java performance on both systems, for both Apache/Tomcat and pure
Java. Also, pitting IBM’s Java on Linux versus Sun’s Java on Solaris
would have made for a fascinating competition.
Hardware Support
On the subject of hardware support, Linux wins.
Virtually any common device (such as IDE RAID controllers, server
motherboards, Fast/Gigabit Ethernet cards) has a production-quality
open source driver, usually distributed with the Linux distribution
(working out of the box), or the vendor has compiled and made a
driver available as a module.
For more specialized hardware, vendors commonly
release their own Linux binary drivers in cases where open source
drivers aren't available, a sign that Linux is taken very seriously
in the enterprise.
That is not the case with Solaris x86. While I had
no problem installing on my system, there are many systems on which
Solaris x86 will not run because of one or more missing drivers.
This is a symptom of Sun's previous lack of interest in Solaris
x86 development, and of the lack of the open source community support
that gives Linux its wide driver base.
Before considering Solaris x86, make sure to take
a look at Sun's HCL (Hardware Compatibility List, available at
http://www.sun.com/bigadmin/hcl/)
to see if your hardware is even supported.
Cost
Cost is always a consideration when evaluating any technology, and this is
certainly no exception.
Solaris is, of course, a commercial and proprietary operating system. To use
it commercially, you have to pay $100 USD for a single-processor box,
$250 for a dual box, and $1,500 USD for a 4-way box. Educational and
evaluation use is free, as long as you register, and as long as it's
only a single-CPU box. Those prices are for licensing only, however;
support is an additional (and substantial) cost. In fact, it's possible that installing Solaris x86 on my dual-processor box, even with one of the processors disabled, violates the evaluation license that Sun offers for Solaris x86. Oops.
Something to keep in mind is that Sun's licensing is akin to Oracle's per-CPU licensing: it refers to the capacity of the system, not just how many processors are actually installed. A
box capable of holding 2 processors with only one processor installed is
still a dual-processor box in Sun's eyes, license-compliance-wise.
Linux
distributions, of course, can range from free (if you download an ISO)
to even more than what Sun is asking for Solaris x86: Red Hat
Enterprise Linux lists for $1,499 for Standard Edition
and $2,499 for Enterprise Edition. Both editions support up to 16
processors (Solaris x86 supports up to 4 CPUs).
The
pricing differences can vary greatly depending on your situation.
With all the commercial applications included with Solaris x86,
including the SunScreen firewall, Sun ONE Application Server, Volume
Manager, and Sun ONE Directory Server, it certainly has a value
proposition. However, Linux has the advantage of flexibility, in that
you can either pay for your system or not. Building out a grid
cluster of 100 machines can cost you zero dollars in licensing fees.
As
far as value is concerned, I have to say Linux is the clear winner,
simply because it’s free. You can pay for Linux and support if you
want, but if you’d rather use effort instead of cash (which many
companies that are strapped for cash are doing) then that’s an
option. With Linux, you don’t have to worry about licensing fees
associated with expanding your infrastructure.
Conclusion
Ironically,
Solaris x86 is now where Linux was 4 years ago: Great for
open-source applications and on a limited set of hardware, but
commercial enterprise applications are few and far between. If
you’re looking to run PHP, or Apache/Tomcat/Java (which comes
pre-installed), then Solaris x86 is a solid, stable platform. But so
is Linux. If you’re looking to run any type of commercial
applications, including several enterprise databases and even
commercial backup applications, Linux is the clear choice.
There
are several issues facing Solaris x86 adoption. Sun's history
of neglect, the lack of commercial
applications, poor hardware support, and poor security response all
contribute to Solaris x86's lack of significant momentum toward
adoption.
For
this evaluation, I am inclined to declare Linux the clear winner over
Solaris x86. Linux is simply better supported, shows double the web
performance, enjoys wider internal development in terms of hardware
and kernel, vastly wider application development, and is a better
value (since it’s, you know, free).
Solaris
x86 isn’t that far off though. It is an impressive operating system
in its own right, and if Sun stops spending time disparaging Linux
and open source in general, I can easily see Solaris x86 making a
broad and positive contribution to the market. The ball is in Sun’s
court.
It scales much better than 2.4. There was an article on Slashdot benchmarking it against 2.4 that showed 2.6 to be more than impressive.
Yeah, but also Solaris for SPARC is more optimized than Solaris for x86, so the two OSes seem to perform better on their own platforms (Solaris on SPARC and Linux on x86).
That’d be an interesting test, although since it hasn’t made its way into any of the major Linux distributions yet, I don’t think it would have been fair to pit it against Solaris 9. Linux 2.6 isn’t “complete” (although it does appear to be stable), in that it’s not part of a stock distribution the way Solaris 9 is.
Benchmarks mean nothing if not on the same hardware. Is there something wrong with the Linux sparc port?
One point that would’ve been much more interesting is to see how the two compared on boxes with more CPUs. It’s my understanding that Solaris is quite slow on boxes with a small number of CPUs because of the amount of locking going on in the kernel, which of course is precisely what allows it to scale so well. I’d be much more interested in seeing how the two compare on an 8-way box or higher.
I do have an Ultra 5, although it’s not a current machine (or a fast one). I don’t have access to a modern UltraSPARC system, like the new UltraSPARC IIIi’s. That’d be an interesting comparison.
It would be interesting to see though which would make the most sense for older SPARC hardware, so maybe I’ll do that as an evaluation next.
From what I have heard, with the added code from UnixWare which SUN bought, it should perform a lot better on uniprocessor configurations once Solaris 10 is released.
As for the desktop look, yes, Solaris does have a majorly fugly desktop, however, wait around 6 months once Java Desktop is made available then have a look.
As for the cost, if you are going to run Solaris as desktop, spend the extra cash and buy the XAccelerate + Summit Drivers, which will give you better performance and reliability. Sure, this is “buying extra”, but if you look at the cost, it still works out cheaper than buying Windows XP Professional Retail and better still, you can recompile most Linux applications and run them out of the box with little or no changes.
I’m very surprised by the difference in compiling the SQL server.
To be exactly sure that the difference comes from the OS, the gcc used to compile the test bench (3.3.1) should be used to compile itself, because a binary generated by gcc 2.95 and one generated by gcc 3.2 can’t be equivalent.
So the compiler used to generate the gcc used for the benchmark should be compiled by the same compiler in both tests.
Seems to me that if SUN wants to stay in the game they had better work on their Solaris x86 and in a big way. In particular when they are making claims regarding the performance of Solaris x86 vs. Linux.
it would be very interesting to see that benchmark extended to include Solaris x86.
Would be _very_ interesting…
Benchmarks mean nothing if not on the same hardware. Is there something wrong with the Linux sparc port?
Yes, Linux on sparc64 is *terribly* limited by the quality of code gcc can produce. Keep in mind that it was only a little over a year ago that mainline gcc gained sparc64 support, and in terms of an open-source compiler, a little over a year of development time is not enough to gain major ground in supporting an architecture like sparc64.
Now, that’s not to say that gcc/sparc64 doesn’t produce decent sparc64 opcodes; it does, but Sun has been optimizing its sparc64 compiler (most recently with Forte’s compiler technology) for almost a decade. Linux (or FreeBSD/OpenBSD/NetBSD) on sparc64 can be described as “sluggish at best”, with horrible latency issues and mediocre performance for day-to-day server tasks. Meanwhile, in comparison, Solaris flies on typical server tasks on the same hardware.
Yes it would, but Sun will never endorse or perform such benchmarks because the outcome will probably not encourage people to buy Solaris.
The compilers used are probably equivalent because of the way gcc is compiled in several stages:
First the old compiler compiles a first version of the new compiler, which is used to compile itself, which again is used to compile the final version. So instead of getting one version 3.3.1 compiled by 2.95 and another compiled by 3.2, you get two compilers that are both compiled by 3.3.1
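The three-stage bootstrap described above is what gcc’s standard build procedure does for you. Roughly, it looks like this (a sketch of the usual gcc 3.x sequence; the `--prefix` path is a placeholder, and a full bootstrap takes a long time):

```shell
# Stage 1: the system compiler (e.g. gcc 2.95 or 3.2) builds a first
# gcc 3.3.1. Stage 2: that new compiler rebuilds itself. Stage 3: the
# result rebuilds itself again, and the stage-2 and stage-3 objects
# are compared to verify the compiler reproduces itself exactly.
tar xzf gcc-3.3.1.tar.gz
mkdir objdir && cd objdir
../gcc-3.3.1/configure --prefix=/usr/local/gcc-3.3.1
make bootstrap     # runs all three stages plus the comparison
make install
```

Because the stage-2/stage-3 comparison must match, the final compilers on both platforms are effectively “gcc 3.3.1 compiled by gcc 3.3.1”, regardless of which older compiler started the chain.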
For a guy who claims to have worked with Solaris for a number of years, he doesn’t seem to know much about it! First, Sun ships Netscape 7 with Solaris 9 as well as Netscape 4.8 (/usr/dt/bin/sun_netscape, if I am not mistaken). And what counts as a “standard install” depends on which install cluster you pick (there are only five to choose from)! The disk performance results I would dismiss because:
1. If the filesystems were laid out by default, most of them except /export/home would have too little space and filesystems close to full perform poorly. He makes no mention of how the filesystems were laid out and the amount of free space.
2. He makes no mention of shutting down unnecessary daemons or services (of which there are a few on Solaris, both Intel and SPARC), and this could also affect the “benchmark” results.
The Desktop versus Server comparison is a joke, first he does not mention that you can get the Software Companion CD from Sun (at the same time you download Solaris) and that includes a lot of freeware compiled and ready to install. And this includes KDE and XFCE if you don’t like Gnome. Any experienced Solaris administrator knows about optional software being installed in /opt/sfw. And if he was actually testing server traits, where is the test of Solaris Volume Manager versus RedHat’s LVM? For that matter, if he was going to test server features, why didn’t he test RedHat Advanced Server, and for the record RHAS costs more than Solaris x86!
As far as security goes, just because Sun did not release a patch immediately he gives Sun an F?! A properly set up Solaris machine would be very difficult to hack. And considering patch clusters are released about every two weeks, I just think his expectations are unreasonable.
Overall his testing methodology is poor, there is too little information on either OS to make any educated judgement as to which is actually better. I see this as “another Linux user tries Solaris and doesn’t like it”.
I would like to see Solaris x86 tested against Linux 2.6, on a multiprocessor computer. I think Solaris on a uniprocessor computer is a bit of a waste, as the kernel has a lot of thread-management overhead that pays off as you add more CPUs. In that sense, I am actually surprised that Solaris 9 for x86 worked comparably well on a uniprocessor system.
As for Apache, there are hundreds of tweaks for making it work optimally on a platform (HW and SW), and I think it would be fair to use the Apache binary package that comes with Solaris, rather than compiling a random version of it. My guess is that the packaged Apache that comes with Solaris would perform better, perhaps considerably better, than the one presented in the test.
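For context, here are a few of the classic Apache 1.3 tuning directives hiding behind that “hundreds of tweaks” remark. This is an httpd.conf fragment with illustrative values only, not recommendations for any particular workload:

```apache
# Process-management knobs for Apache 1.3's preforking model
StartServers          16     # child processes launched at startup
MinSpareServers       16     # idle children kept ready for bursts
MaxSpareServers       64
MaxClients           256     # hard cap on concurrent children
MaxRequestsPerChild 10000    # recycle children to bound memory leaks
KeepAlive             On
MaxKeepAliveRequests 100
KeepAliveTimeout      15
```

Since each child is a full process under 1.3, the right values depend heavily on how cheaply the OS can fork and context-switch, which is exactly where Linux and Solaris differ.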
I disagree. I think it was a definitely fair comparison, and one can’t find fault with his methodology.
Please. Do you really want to argue that Solaris is better on the desktop than Linux? Give me a break.
> As far as security goes, just because Sun did not release a patch immediately he gives Sun an F?! A properly set up Solaris machine would be very difficult to hack. And considering patch clusters are released about every two weeks, I just think his expectations are unreasonable.
A properly set up Solaris box can be easy to crack if it has a bunch of unpatched software.
> For that matter, if he was going to test server features, why didn’t he test RedHat Advanced Server, and for the record RHAS costs more than Solaris x86!
Linux is versatile. People use RH9 plain for servers everywhere. RHAS is mainly just a support thing (with Solaris, you pay extra for support).
Though I’ve never used Solaris, I had thought that its greatest strength was in its reliability, and especially when hardware died.
Isn’t Solaris supposed to have the best hotplug support out there? Can’t you rip out just about everything except one proc and one RAM chip and it’ll keep chugging away? (I’d imagine that would be especially useful in a 4-way or 8-way system.)
Please. Do you really want to argue that Solaris is better on the desktop than Linux? Give me a break.
“Linux” here seems to mean the kernel and GNU compiler + libc against the Sun equivalents. The desktop just isn’t an issue as far as software is concerned; the same software will run, and several people have pointed out that Sun provides it pre-compiled and freely.
Quite a bit of work has gone into reducing Linux’s latency recently, so a comparison against Solaris (which has fine-grained locks for SMP rather than latency reasons) would be very interesting indeed.
Greg,
My understanding of testing is that the results can be independently repeated and verified, which there is way too little information for either to be done. Solaris can be used as a Desktop (I have used it more than once), maybe it doesn’t have all the bells and whistles of a Linux box, but it can be used. Again this shows the author’s lack of familiarity with Solaris.
Any box with unpatched software is a security risk, and my definition of properly set up is everything is fully patched.
If one is going to compare server features (including support) then RHAS should have been the target for the test. As someone who supports Solaris, AIX, and Linux boxes, I am very concerned about support and RedHat’s support for the “consumer” Linux is a major issue (one year). With RHAS you get 5 years (like Solaris), so if you are running a data center the amount and duration of support becomes a big issue.
The test is more like comparing apples and oranges, and it does not go into sufficient depth to truly declare a “winner” other than to show the author’s preference for Linux over Solaris.
It occurs to me that the high compile time for gcc in particular is due to the default hard drive settings. Red Hat is being conservative here, as some IDE chipset/hard-drive combinations are known to be buggy, and so it does not enable features such as 32-bit and UDMA access by default.
Most Linux sysadmins would immediately run ‘hdparm -c3 -X69 /dev/hdX’ after installation; on my last install I actually did this before installation started, and it cut installation time by a factor of more than two.
The compile-time statistics for Solaris/x86 are quite similar, so perhaps it adopts the same conservative option? Windows 2000 and above seem to be the exception in that they default to using UDMA transfer if available.
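Unpacking that hdparm incantation (the device name is a placeholder; this needs root, and forcing the wrong -X mode on a flaky chipset can corrupt data, so treat it as a sketch rather than a recipe):

```shell
# -d1  enable DMA for the drive
# -c3  enable 32-bit I/O support (mode 3 = 32-bit with sync sequence)
# -X69 select a transfer mode: 64 + n means UDMA mode n, so 69 = UDMA5
hdparm -d1 -c3 -X69 /dev/hda

hdparm -v  /dev/hda   # show the settings now in effect
hdparm -tT /dev/hda   # quick buffered and cached read benchmark
```

The -tT benchmark before and after is the easy way to confirm whether the conservative defaults were actually costing you throughput.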
Seems to me that if SUN wants to stay in the game they had better work on their Solaris x86 and in a big way. In particular when they are making claims regarding the performance of Solaris x86 vs. Linux.
Where have they made flat-out claims that Solaris is faster than Linux? All there has been so far is Sun touting Solaris’s stability and reliability track record. The only manager who did talk about Solaris x86 and Linux in terms of performance simply said that for some benchmarks Solaris performed better than Linux and in others Linux performed better than Solaris.
Solaris’s strength is large SMP configurations, if you have cared to take a look at the progress Solaris 10 has made, they are making a concerted effort to ensure that Solaris 10 performs either equal or better than Linux.
Also, I am more interested in their Opteron release; that is going to be the big decider, as many people will question why buy a 32-bit Xeon when you can get a 64-bit Opteron running Solaris at the same price, and in fact sometimes less.
Let’s wait and see. With the massive improvements in Solaris 10 plus the Opteron port just around the corner, Solaris will be an extremely attractive offer to Windows, HP-UX and Linux customers. Windows will be hit because of its dodgy track record so far in all areas; HP-UX because of the parent company’s apparent love affair with Microsoft and .NET; and Linux because of the lack of any real direction and coherency between the kernel, libraries and everything else that makes up the operating system, and also the cost. Just look at the price of Solaris vs. Red Hat Linux Enterprise Edition.
Also, the scalability of Linux is questionable. If you have ever run Linux on a large SMP configuration, then you will see what I mean. Linux is great on the desktop but leave the big servers to the old school.
I’m sure despite this little interlude, I’ll still receive those flame-trolling comments. To that I say, If you have a problem with my choice in distribution, then feel free to run your own evaluation. Also, your momma is ugly. Seriously. UG-LY.
Heheheheh… Seems like it really stopped people from nitpicking his choice of Linux distribution for this test. 🙂
DeadFish Man
“Solaris can be used as a Desktop (I have used it more than once), maybe it doesn’t have all the bells and whistles of a Linux box, but it can be used. Again this shows the author’s lack of familiarity with Solaris. ”
I can use my Macintosh Plus w/ 4mb of RAM as a desktop machine for my essential needs. I did it for a few years, and it wasn’t a bad experience (2400 baud modems are a lil slow, though!). Has a word processor, telnet, and can print with it. What else do I really _need_?
Then again, just because it’s usable doesn’t mean that it’s at all good for the task today compared to just about anything else.
Solaris x86 is like that. It _can_ be used, but let’s face it, unless you’re trying to teach yourself Solaris on the cheap, you’re far better off with RedHat 9.
-Erwos
“Heheheheh… Seems like it really avoided people showing their nitpicks about his choice for the Linux distribution used in this test. 🙂 ”
It’s almost certainly because they realized the truth: RedHat 9 probably _is_ the closest thing you could compare Solaris to. It’s _the_ corporate Linux distro, and it doesn’t have any huge glaring issues that impact performance.
-Erwos
It’s almost certainly because they realized the truth: RedHat 9 probably _is_ the closest thing you could compare Solaris to. It’s _the_ corporate Linux distro, and it doesn’t have any huge glaring issues that impact performance.
A better comparison would be with a UnitedLinux distro. It is a shame I got rid of my old SCO Linux 4.0 distro, which was based on UnitedLinux; otherwise I would be able to perform some tests myself.
Then again, just because it’s usable doesn’t mean that it’s at all good for the task today compared to just about anything else.
Solaris x86 is like that. It _can_ be used, but let’s face it, unless you’re trying to teach yourself Solaris on the cheap, you’re far better off with RedHat 9.
Agreed. I would suggest anyone interested in Solaris x86 to wait till Solaris 10 is released and Java Desktop is available on it as well. I would say that once STSF is finished (because xft/fontconfig is baroque on Solaris), you will see it made available.
If I upgrade, and the above happens, I’ll assemble a dual Opteron machine myself and install Solaris for Opteron. It would be an interesting experience, using an industrial-strength UNIX with a nice GUI 😉
Yes, I grant you that the test was far from conclusive, and a full listing of *all* details regarding the test (e.g. version and type of drivers used, etc.) wasn’t provided. But I do feel that the author was genuinely surprised by the bad results of Solaris x86. This test was on a dual-CPU system. Whether the poor performance was due to a driver or to random software or hardware problems can only be left to conjecture. He did say that he was running default configurations with only OS patches applied. Given Sun’s very flamboyant stance on Solaris x86 vs. Linux, Solaris should be running circles around Linux, not just being competitive.
I know that it takes a lot of time and money to perfect a platform; they have spent that time and money on Solaris for SPARC. I also understand that Solaris was designed for a CPU with lower context-switch time, more registers, and a different memory and I/O architecture, so the code may need some tweaking under x86. Sun should mark this “super high priority”.
Further, Sun’s low-end systems outsold their high-end systems last quarter. They need to start thinking with their heads and not their hearts. Maybe they should take some of that capital they keep talking about and start pushing it to their software development teams. It is their last chance to prove themselves. They HAVE to make Solaris on x86 and Java their absolute highest priorities in order to compete with Microsoft and Linux. As it stands, Linux is eating away at Sun’s intellectual market share. They have to make Solaris a rock-solid, premium, highly tested, verifiable, scalable benchmark monster if they want to win this one, even on one- and two-processor systems.
Yes, I grant you that the test was far from conclusive, and a full listing of *all* details regarding the test (e.g. version and type of drivers used, etc.) wasn’t provided. But I do feel that the author was genuinely surprised by the bad results of Solaris x86. This test was on a dual-CPU system. Whether the poor performance was due to a driver or to random software or hardware problems can only be left to conjecture. He did say that he was running default configurations with only OS patches applied. Given Sun’s very flamboyant stance on Solaris x86 vs. Linux, Solaris should be running circles around Linux, not just being competitive.
Well, considering that up until recently they looked *DOWN* at x86, I accept that and am willing to give Sun at least 18 months’ worth of slack to get Solaris up to speed.
The fact remains that they’re making up for at least 4-5 years of neglected development. UnixWare works nicely on uni- and multi-processor machines. With the added code from UnixWare, we should see a vast improvement in speed once Solaris 10 is released; until then, I would say wait for Solaris 10, then do a comparison between Solaris and Linux.
I know that it takes a lot of time and money to perfect a platform; they have spent that time and money on Solaris for SPARC. I also understand that Solaris was designed for a CPU with lower context switch time and more registers and different memory and I/O architecture. So the code may need some tweaking under x86. SUN should mark this under “super high priority”.
Hence the reason I am very excited about the Opteron port. With the added registers, the improved latency, and the overhaul of the x86 ISA, Solaris on Opteron should really fly.
Further, Sun’s low-end systems outsold their high-end systems last quarter. They need to start thinking with their heads and not their hearts. Maybe they should take some of that capital they keep talking about and start pushing it to their software development teams. It is their last chance to prove themselves. They HAVE to make Solaris on x86 and Java their absolute highest priorities in order to compete with Microsoft and Linux. As it stands, Linux is eating away at Sun’s intellectual market share. They have to make Solaris a rock-solid, premium, highly tested, verifiable, scalable benchmark monster if they want to win this one, even on one- and two-processor systems.
Hence the reason I said a while back that they should rip SPARC out of the low-end systems, 8-way and below, and replace it with Opteron and Solaris x86-64. As for the software, why doesn’t Sun get it? They have $5.7 BILLION; go to the software companies and PAY for the porting of applications. They need the hardware and software support now.
An Opteron server running Solaris, coupled with Sun’s enterprise services, would basically mow down HP and its delusional idea that for some reason a person is going to pay 1000x more for an Itanium server running Windows just because it is from HP.
“Sun admits their versions are potentially vulnerable and only issued preliminary patches on September 30th.”
You do understand that Sun is probably more concerned with releasing a patch that won’t break clients’ systems than with releasing a patch blindly. A patch that breaks a system is worse than the original problem. Sun seems to strike a different balance than most Linux distributions or Microsoft: Sun isn’t ashamed of not having the latest cutting-edge widget or doodad or sub-second patch interval, nor do they have to worry about a seemingly infinite set of random x86 platforms to guesstimate about.
Also, the difference between SunOS and Solaris is that SunOS is the operating system kernel while Solaris is the platform (kernel + userland + CDE/GNOME).
There are many things one can do to improve web serving on Solaris. Serving a static web page can take advantage of the Network Cache Accelerator (similar to TUX), which can be used with Apache (yes, I know we’re talking default installs here). Also, I would imagine Apache 2.x would perform better: Solaris has a superior threading model which would benefit from 2.x’s new options, while 2.x won’t show much improvement on Linux, since threads and processes aren’t much different in the way Linux treats them.
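From memory of the Solaris 9 documentation, enabling NCA is roughly a matter of editing two files and rebooting. The file names below are as I recall them from Sun’s docs, and the interface name is a placeholder, so verify against your release before relying on this:

```
# /etc/nca/ncakmod.conf -- turn the NCA kernel module on
status=enabled
httpd_door_path=/var/run/nca_httpd_1.door

# /etc/nca/nca.if -- interfaces NCA should claim, one per line
# ("*" for all); hme0 is a placeholder interface name
hme0
```

NCA then caches and serves static pages from within the kernel, much as TUX does on Linux, handing only the misses up to Apache.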
Just my 2 cents…
This would be pitting a highly-x86-optimized kernel compiled with non-SPARC-optimized GCC vs. a highly-SPARC-optimized kernel compiled with Sun’s highly-SPARC-optimized cc.
I would be very surprised if Linux won any of the benchmarks. The only “wins” for Linux would be ones that by chance discover Linux had a better algorithm for a specific system call or something similar (I think I recall this being true for PostgreSQL or MySQL, where a particular Solaris system call was a bottleneck). Also, as the number of CPUs increases, I would expect Solaris to pull ahead noticeably (especially as multi-hundred-CPU, multi-core SMP systems become common over the next few years).
Does anyone have opinions on the robustness of Linux ext3 vs Solaris UFS? I have heard anecdotal evidence that the Linux ext filesystems are more willing to corrupt data in favor of raw performance, while Solaris UFS is more conservative.
This would be pitting a highly-x86-optimized kernel compiled with non-SPARC-optimized GCC vs. a highly-SPARC-optimized kernel compiled with Sun’s highly-SPARC-optimized cc.
You mean it would sort of be like putting a highly-x86-optimized kernel against a highly-SPARC-optimized kernel as was done in this test? How unfair.
Well, in 5 years I have been intensively using Solaris servers and workstations, the only way to crash them was to cut the power off, or to send them to the OBP and then restarting the processing with “go”, while they were doing disk-syncing in VxVM or SVM (ex Solstice). And in all these years, in spite of these very rude crashes, I have yet to find a single corrupted file.
This is anecdotal, but I have really worked with a lot of servers. Probably near to a thousand.
But I can’t compare it to ext3, as I don’t have nearly as much hands-on experience with it as with Solaris.
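For what it’s worth, the “corrupt data in favor of raw performance” question about ext3 largely comes down to its selectable journaling mode, chosen at mount time. An fstab-style sketch (the device and mount point are placeholders):

```
# ext3 journaling modes, from fastest/least safe to slowest/safest:
#   data=writeback  journal metadata only; file contents may be
#                   stale after a crash
#   data=ordered    (default) flush data blocks to disk before
#                   committing the metadata that references them
#   data=journal    journal both data and metadata
/dev/hda5  /export/home  ext3  defaults,data=ordered  1 2
```

So a fair robustness comparison against UFS would need to state which mode was in use; writeback trades safety for speed in exactly the way the question suggests, while ordered and journal do not.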
..using the binaries that are distributed with Solaris. I don’t say it will beat Linux, because on a uniproc system it very well may not, but I think the performance will be at least comparable.
I was *always* under the impression that, with the lone exception of some corner cases, Linux has been faster than UnixWare for years now. Dude, they’re only just now getting PAM support, and it’s still only UNIX 93 compliant? I have extreme doubts it’s faster than Linux.
I was *always* under the impression that, with the lone exception of some corner cases, Linux has been faster than UnixWare for years now. Dude, they’re only just now getting PAM support, and it’s still only UNIX 93 compliant? I have extreme doubts it’s faster than Linux.
1) UnixWare is UNIX 95 compliant, just like HP-UX, IRIX and AIX. Solaris is the only UNIX vendor keeping up with the “grand unified UNIX standard”.
2) What does PAM have to do with performance?
3) UnixWare performs quite nicely on uniprocessor machines and is based on SVR6. With the tweaked x86 code from UnixWare, I am sure Sun made the hard decision and did a rip-and-replace: swap the crusty old parts for the much more highly tuned x86 code from UnixWare.
4) OpenServer is old and crappy. Under Ransom Love there was a concerted effort to get people to move from OpenServer to SCO UnixWare; they’re trying that again by using an “OpenServer personality kernel module” to help with compatibility.
5) There haven’t been any benchmarks yet between UnixWare and Linux. UnixWare’s core problem is that it is expensive. Sure, it is great to put together a system like a kitset, but if you want the whole package it will set you back thousands. If they bundled their whole high-end software in a box for $1599 including support, you would have a lot of people use it; however, when you charge $60 just for the media alone, with NO licenses, can you blame a person for not purchasing SCO products?
What’s Solaris like on Fujitsu SPARC systems (I know they only do servers) compared to the Sun ones? If you ask me, Sun should team up with Fujitsu-Siemens, have them do all the main hardware design, and have Sun concentrate on Solaris and software.
From what I’ve read of the Fujitsu SPARC VI today, it’s a 2.4GHz dual-core CPU. If Sun were to use this baby they could make some kick-ass workstations.
but I don’t think they can go to 2.4 GHz yet. Where’d you read that? Last time I checked, they were kicking at 1.7 GHz. Maybe I’m behind the times and need to update my info. Please give me some pointers?
According to this article
http://www.computerworld.com/softwaretopics/software/apps/story/0,1…
The 2.4GHz part, which will also be multi-threaded, will be released in 2005 (I should have said that in my original post).
It’s also mentioned in this article on theinquirer
http://theinquirer.net/?article=12146
IBM loves this kind of stuff. They’re just waiting to see the fall-out of Solaris vs. Linux. If Linux wins, they’ll probably take a significant position in the Linux space by buying a major Linux vendor as they prepare for the next phase in their battle with Microsoft.
PAM has nothing to do with performance; I only pointed it out as an example of how crufty SCO’s crap is. Since I’ve been introduced to Unix I was always under the impression that their stuff was bottom of the barrel. Solaris would *need* UnixWare code to improve itself? I have my doubts; Sun is tremendously larger than SCO ever was. If Linux is at least competitive with Solaris, as per these benchmarks, and I’d be willing to bet my soul that Solaris creams UnixWare, then how could it be faster than Linux? I don’t believe SCO has even claimed that in ages, though I might be mistaken given the current management’s arrogance. Now, with the impending release of 2.6, I have a terribly hard time believing UnixWare could even keep up. I’d think, with the lone exception of 32-way boxes and higher (which are niche at best), that Linux 2.6 stands to cream Solaris until 10 is released. Please, if I’m wrong, correct me, but I have my doubts. Hell, look at SCO’s customer list: freaking Walgreens? Hah, “high performance” my arse. Now, reliability, sure: something as important as the central server for a store and its inventory, that is important, but raw performance I’d think not.
I seriously question the conclusion drawn.
In most of the benchmarks Solaris is comparable to or better than Linux in performance, but in the web-server benchmark it is twice as slow.
That tells me that Apache 1.3.28 + PHP 4.3.3 perform worse on Solaris than on Linux, not that Solaris is slower. Why not take a few different web servers and run the tests? Take the most balanced web-service engine and run the benchmark.
Did the author use the gcc 3.3.1 compiler he compiled earlier to compile Apache and PHP on both platforms, or just the stock compilers (2.95 on Solaris and 3.2 on Linux)?
Given the above questions, I really can’t draw conclusions about the two OSes from one benchmark when all the others are clear indicators of the performance of the two OSes.
The author does give a disclaimer that these are not conclusive of the performance of the two, but he then goes on to make erroneous conclusions based on one benchmark.
I am always amazed at the fact that people wrongly think that Linux is free! In an enterprise environment which is where those OS will mostly be deployed, the cost of support from major suppliers such as IBM is far from being free. So much for free.
Linux is a very good OS, certainly a lot better than any version of Windows Server. Its great strength is that it works on commodity x86 hardware. However, Solaris on Sun hardware is really very good stuff. More expensive for sure, but really good. I believe that Linux has a good chance for “low-end” server requirements, but for the higher-end stuff, Solaris on Sun hardware is what most enterprises will choose.
Well, if the test machine is an Ultra 5 then Linux would probably win some with disk I/O. It seems that Solaris has poor IDE drivers, or I don’t know the magic trick to make it fly. When I had Solaris on my Ultra 5, disk reads were about 5MB/s; with Linux on the same disk, 10-15MB/s wasn’t a problem. I spoke with a friend about it and he said that Sun didn’t care much for IDE but had great SCSI drivers. So if the test is on SPARC hardware, it should have SCSI disks.
Well, I could be trolling; it can very well be that I did something wrong, but IDE performance under Solaris 8/9 was horrible for me.
The problem on Ultra 5s isn’t bad IDE drivers in Solaris but the controller. The CMD646 controller on the Ultra 5 had a bad data-corruption bug, so the Solaris drivers disabled duplex operation and ran the controller in simplex mode (a reduced-performance mode).
I doubt the Linux driver is doing this, so while it may be faster, it does not guarantee your data will be safe. Solaris/Sun is very paranoid about data corruption and loss; the bug was probably a corner case, but they presumably decided it was worth taking a performance hit and being safe.
By the way, they also disable the write cache on SCSI drives to ensure that data written actually made it to the disk. Database vendors absolutely require this.
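The write-cache point is the same guarantee an application asks for with fsync(): don’t report success until the data has been pushed toward stable storage rather than merely accepted into a cache (a drive write cache that lies about completion defeats even this, hence Sun disabling it). A small sketch using GNU dd’s conv=fsync flag; the file path is arbitrary:

```shell
# Write 1 MiB and fsync() it before dd reports completion;
# conv=fsync is what gives the write its durability guarantee.
dd if=/dev/zero of=/tmp/durable.img bs=4096 count=256 conv=fsync 2>/dev/null

# The file is exactly 4096 * 256 = 1048576 bytes.
wc -c < /tmp/durable.img
```

Databases issue the equivalent fsync()/O_SYNC calls on every transaction commit, which is why they care so much that the disk stack honors them.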
I’d have to agree with that. The author wants to seem dispassionate, but his article reads more like an apology for Solaris’ shortcomings than a real test.
As the author specifically asked for it, I’m gonna comment on his choice of distro 🙂
RH 9 has a custom kernel; basically RH backported a number of threading features from the 2.6 kernel. This might have a positive impact on performance, which is good. 2.6, which outperforms 2.4 in many ways, is just around the corner, so if this gives us a peek into what 2.6 offers performance-wise, that’s just fine.
Apache 1.3.x fork()s off processes as users connect. Linux and FreeBSD are pretty good at this, whereas Solaris is much more optimized for threads. I believe Windows is in a similar situation. This is the main reason (or one of them) why Apache 2 (as it uses threads) performs so much better than Apache 1.3.x on both Windows and Solaris. However, as I understand it, there’s not much of a performance difference between Apache 2 and Apache 1.3.x on OSes such as Linux and FreeBSD. If I am incorrect on this (and I don’t think I am), please correct me.
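To make the process-vs-thread contrast concrete: Apache 2’s worker MPM serves requests from a small pool of processes that each run many threads, which is the model that suits Solaris. A sketch of the relevant httpd.conf section, with values close to the 2.0 defaults and meant only as an illustration:

```apache
# Apache 2 "worker" MPM: few processes, many threads each.
# Contrast with 1.3's prefork model of one process per connection.
<IfModule worker.c>
    StartServers         2
    MaxClients         150    # total concurrent threads overall
    MinSpareThreads     25
    MaxSpareThreads     75
    ThreadsPerChild     25    # ~ MaxClients/ThreadsPerChild processes
    MaxRequestsPerChild  0    # 0 = never recycle a child
</IfModule>
```

On an OS where threads are much cheaper than processes, 150 threads across 6 processes cost far less than 150 forked children, which is the improvement being described above.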
I read somewhere (I don't remember where) that Solaris's IDE subsystem is pretty bad and that you'll get far superior I/O performance with SCSI, even before you take into account the inherent performance advantage of SCSI over IDE. Considering what niche Solaris is meant for, it makes perfect sense for them to optimize for SCSI over IDE.
Personally, if I had to choose between x86 Solaris and RH Linux 9 for a server, I would choose Solaris. Unix versus "Unix-like" wins me over every time, despite any slight performance gains. That defines enterprise-ready, IMHO.
Oh, yeah, it will be released in 2005, only that little detail. Yeah, you should have mentioned that.
By 2005 Sun will have much more powerful processors. If things go by their schedule, they might have the most powerful multicore CPUs in the world.
They have successfully ported RedHat 7.2 to SPARC processors AND it does utilize both of my processors in my home Ultra 60 (2 x 360MHz)…
I just wish it wasn’t such bloody murder to upgrade to 2×450 though…
Don’t get me started on memory..
Sun’s Ultra Sparc II & III Architecture rock but damned be the prices…
Nice review.
Webserving results are unsurprising, since this was exactly the sort of load Veritest used in Mindcraft against Linux (Linux 2.2, I think). The Linux TCP subsystem was rewritten to handle very small files and large numbers of connections very well (during the 2.3 series, I think), and it helped Linux a lot in the high performance computing arena. That being said, the Linux TCP subsystem was rather fast even before the rewrite.
I’d like to see:
More Networking tests:
I think Solaris would get flattened on gigabit with small files, but it would be nice to be sure. I'd be curious to see how Solaris did with large files, where it might fare better. Even better, an Apache comparison broken down by mmap, sendfile, and read.
The review didn’t say anything about Large Memory support.
I’d like to see how Solaris does with 4 gig or more.
Does it support it?
How much heap per process? tunable?
How much can be mmaped per process?
How much can kernel address?
How does kernel actually perform on machines with lots of memory?
There are very interesting tweaks one can do for linux here.
(change the user/kernel split, separate address spaces, 36-bit hacks, etc.)
Does Solaris x86 have any options like these?
Does it do 36 bit addressing on Intel?
Does it run on x86-64?
The compiler metrics really should be cross-compile metrics for fairness. I would be interested to see if Solaris keeps its lead when compiling the same source (i.e., say, compiling Gentoo for SPARC under both Solaris x86 and Linux).
In addition, you should compile your compiler with the same compiler. I mean we are talking about _how fast gcc runs_. gcc compiled with 2.95 is not the same as gcc compiled with 3.2. (Ironically 2.95 was a fast release.)
Thanks!
Garick
I'll be interested to see Linux 2.6-release vs Solaris 10, both on x86-64… this is a warm-up.
The large discrepancies in compile time may have something to do with the /tmp file system. It’s been a while since I have used Solaris but it usually mounts /tmp as a tmpfs file system which is effectively a ram disk. RedHat on the other hand usually includes /tmp as part of the disk based root file system.
The compiler will generate lots of working files in /tmp so disk access versus ram access will be an issue when comparing compile times.
I’d like to see the compile tests re-run with /tmp mounted on /dev/shm (or as some other ram disk). My guess is that the compile time results will be much closer in the re-run.
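For anyone who wants to check this theory themselves, here's a rough sketch (paths are illustrative): first see where /tmp actually lives, then point the compiler's scratch files at a directory of your choosing via TMPDIR, which gcc and most Unix tools honor, so both systems write work files to the same kind of storage:

```shell
# Where does /tmp live? On Solaris, df shows "swap" as the device when
# /tmp is tmpfs; on Linux, mount shows "tmpfs on /tmp" for a ram disk.
df -k /tmp
mount | grep -w /tmp || true

# Level the playing field: send compiler scratch files to a chosen
# directory (path is illustrative).
mkdir -p /var/tmp/buildtmp
export TMPDIR=/var/tmp/buildtmp
```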
Weird how the tests were better with logging than without.
I think this article makes a great contribution to our awareness of system performance issues. The benchmarks and methodology are explicit and valid, and in general I support the reasoning behind them.
As a system security architect with close to 25 years of experience as a system administrator, I feel that I should make a couple of clarifying comments:
Security
Security involves a lot more than patch releases, especially when it concerns software produced by the open source community. The best and only authoritative place to go for open source software such as sendmail and openssh is the originating site.
Anyone who supplies an operating system distribution effectively does a code freeze whenever a new distribution is released. Professional system administrators understand that the convenience of using a reference distribution is not a replacement for continuous diligence in monitoring industry security advisories and similar announcements by software developers. These will always be more current than any information relayed through a distributor.
My point here is that neither Solaris nor Linux can be faulted for problems which may emerge with open source software like sendmail and openssh. If you want to administer one of these systems securely, plan to use the initial installation as a base only. Most system administration techniques are concerned with what happens after this point.
Of course you will have separate filesystems for installing the operating system, applications, databases, and users. Whenever you need to install a new version of an application, put it on the application filesystem, and disable any versions which were distributed with the operating system. Now, whenever you need to install a new distribution, you can be certain that doing so will not disturb your applications.
Package Management
The Solaris and RedHat package management systems follow different installation conventions owing to their Unix heritage. Solaris uses the System V convention of installing in /opt, with a directory tree for each application. Linux uses the BSD convention of installing in /usr/local, with a relatively flat directory tree, and most Linux package management systems enforce this convention.
There are advantages and disadvantages to each. The System V convention allows applications, including multiple versions of the same application, to be installed without fear of interfering with each other, but it also burdens the user with managing environment variables such as PATH. It suits large and complex sites very well. The BSD convention is simpler but also more prone to conflicts, being suited more to simple personal workstations than to internetworked servers.
It is possible to build a hybrid environment in which each application is installed in its own directory tree, but the
system also maintains a simplified directory structure containing symbolic links to the installed files. With a little extra management overhead, you can get the best of both worlds. As a beneficial side effect of this decoupling, you can also perform version control without disturbing the installed applications. This is critical if you have a large population of users which depends on this software.
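The hybrid layout described above is essentially what tools like GNU Stow automate. A toy sketch, in which all paths and the "hello" package are made up:

```shell
# Per-application tree under an /opt-style area, plus a flat bin
# directory of symbolic links. Everything here is illustrative.
APPS=/var/tmp/opt; BIN=/var/tmp/bin
mkdir -p "$APPS/hello-1.0/bin" "$BIN"
printf '#!/bin/sh\necho hello 1.0\n' > "$APPS/hello-1.0/bin/hello"
chmod +x "$APPS/hello-1.0/bin/hello"
ln -sf "$APPS/hello-1.0/bin/hello" "$BIN/hello"   # "activate" this version
"$BIN/hello"                                      # prints: hello 1.0
```

Switching to a new version is then a single `ln -sf` to repoint the link, and removing a version is an `rm -rf` of its tree, without touching any other application.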
Package relocatability is also a practical concern if we want to ensure that applications install into their own filesystem, indeed possibly onto a networked filesystem which has a different path on the installation server than it does on the clients which use the application. In practice, Solaris packages are very good about relocating software correctly when installing to a nondefault directory. I’ve found that RedHat packages are often quite brittle with respect to relocation. Both package management systems support relocation, so the difference seems to be largely a consequence of development culture. I’d like to encourage more attention to this issue so that we can begin to see Linux and Solaris compete on an equal footing here.
If you’re serious about site management and security, you’ll strongly prefer to maintain applications as source code from the developers. Where open source software is concerned, the relocatability issue pretty much solves itself. The GNU autoconf conventions are to install by default into the nominally flat /usr/local, but they can be easily overridden to install each application into its own directory. To date, I have not found any exceptions to this rule. Here the development culture has really done an outstanding job of making software relocatable.
logging does NOT increase file system PERFORMANCE
Weird how the tests were better with logging than without.
Yes, this was bizarre to the point of being dubious. The reasoning I’ve always heard for Sun leaving UFS logging disabled per default was that the performance hit was too great to justify enabling it per default.
Personally I thought the numbers on this graph were too ridiculous to be true. Clearly something else is at play here:
http://img.osnews.com/img/4867/SolarisVLinux_html_1fc13881.gif
But to put my money where my mouth is, so to speak, here’s my numbers for tarball extraction on a Solaris x86 system, with and without logging enabled:
Without logging:
real 0m5.923s
With logging:
real 0m5.987s
So for me, the performance with logging was only negligibly slower. I performed these tests in single user mode after a fresh boot in both cases.
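For anyone who wants to repeat this, here's a reproducible version of the extraction test (paths and file names are made up): build a tarball of small files, then time unpacking it, once with logging enabled and once without.

```shell
# Create a handful of small files, tar them up, and time extraction.
# Small-file extraction is mostly metadata traffic, which is exactly
# what UFS logging affects.
mkdir -p /var/tmp/ufstest/src
for i in 1 2 3 4 5; do head -c 65536 /dev/urandom > /var/tmp/ufstest/src/file$i; done
tar -C /var/tmp/ufstest -cf /var/tmp/ufstest/sample.tar src
rm -rf /var/tmp/ufstest/src
time tar -C /var/tmp/ufstest -xf /var/tmp/ufstest/sample.tar
```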
I’m guessing you used an older version of Solaris. I did some digging and found this:
http://www.sun.com/service/support/infoexpress/0303-02.html
Lower right-hand corner, it seems to indicate that performance was improved for logging in Solaris 9 (either that specific 12/02 hardware release, or for Solaris 9 in general, I’m not sure).
SQL Server is available for Linux from Microsoft's former partner, Sybase, which originally developed SQL Server pre-Microsoft anyway.
Instead of just “logging” you can do “logging,noatime”. Very few people/installations care when a file was accessed last. Altered, yes of course, but not accessed. That should further speed up the file system marginally.
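In /etc/vfstab that looks something like the following; the device names and mount point are placeholders:

```
#device to mount   device to fsck      mount point   FS   fsck  mount    mount
#                                                   type  pass  at boot  options
/dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home  ufs   2    yes      logging,noatime
```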
Anyway, you can't judge the usability of an OS on performance alone. The more I work with Solaris, the more harmonious it feels. Linux is fine, but you can tell there have been disparate philosophies involved in creating it. Solaris feels much more coherent and meshes better; some solutions are just more elegant, IMHO, and Sun just keeps quietly improving the OS in the background by adding features, such as file system snapshots right out of the box in Solaris 8 and upwards, another extremely useful feature that just showed up without fanfare.
Personally I like both Linux and Solaris, but I definitely prefer the latter.
I doubt the Linux driver is doing this, so while it may be faster, it does not guarantee your data will be safe. Solaris/Sun is very paranoid about data corruption and loss; the bug was probably a corner case, but they presumably decided it was worth taking a performance hit and being safe
Haha.. or you could spend half a second looking at drivers/ide/pci/cmd64x.c instead of talking out of your ass.
I thought the OpenSSH issue was more that they couldn't prove it wouldn't lead to a remote exploit, rather than something that definitely did?
I've never heard of exploit code existing for it, except for something that was actually a trojan?
Someone please correct me if I am wrong.
“In fact, it’s possible that installing Solaris x86 on my dual-processor box, even if I disabled one of the processors, violates the evaluation license that Sun offers Solaris x86.”
Um, you have the Sun license data wrong. Sun does not license on the capacity of the box; an 8-way box is only 8-way once it has 8 processors. This is a reason why Sun is implementing the new license structure, so that IT managers don't have to worry about buying additional licenses when buying more processors. I think you need to research this a little more, because you are blatantly wrong in this area as far as I can see, and misleading people.
In regard to this test, I'd hope that Linux would win; it's had enough of a head start. Solaris x86, despite what Sun may say, is a stepping stone. My basic rule of thumb is: if it's x86, use Linux (prefer Gentoo); if it's SPARC, use Solaris. Solaris is only half the deal; the hardware is the other. They are a package, so benchmarking one without the other is pointless and misses the point. Sun makes solutions: not software, nor OSes, nor hardware, but solutions. All these things work together to make a solution that works. A real benchmark is to pit a Sun setup against a comparable x86 setup with both as optimised as possible, for this is where you truly see facts that matter; anything else is pointless and moot.
Matt
In regard to this test, I'd hope that Linux would win; it's had enough of a head start. Solaris x86, despite what Sun may say, is a stepping stone. My basic rule of thumb is: if it's x86, use Linux (prefer Gentoo); if it's SPARC, use Solaris. Solaris is only half the deal; the hardware is the other. They are a package, so benchmarking one without the other is pointless and misses the point. Sun makes solutions: not software, nor OSes, nor hardware, but solutions. All these things work together to make a solution that works. A real benchmark is to pit a Sun setup against a comparable x86 setup with both as optimised as possible, for this is where you truly see facts that matter; anything else is pointless and moot.
Hmm, I wouldn't agree with the x86 assertion. The problem with Solaris x86 is that it hasn't been updated in ages, hence it isn't optimised/tuned for the latest hardware and lacks support for even the most basic hardware that is now available.
With that being said, the only two problems I have with Solaris are its terrible hardware support, which is being fixed, and the shonky X server; IMHO, if you have bought the $20 download version, you may as well purchase Accelerate-X and Summit at the same time. The total cost still works out cheaper than Windows XP Professional, and you get the full benefit of Sun's EOL policy, which allows one to take advantage of the free patches they release.
As I said previously, wait until Solaris 10 for Opteron is ready, then perform benchmarks. By then it will have given Linux a few patch releases to calm down and stabilise.
“Haha.. or you could spend half a second looking at drivers/ide/pci/cmd64x.c instead of talking out of your ass. ”
It's funny you mention it: I did look at cmd64x.c in Linux 2.4.20 before I posted, and by the end of my post it will be clear who is speaking out of their nether regions.
First, disabling UDMA doesn't mean simplex mode; in fact they are unrelated settings in IDE controllers. Let me explain.
The Linux driver module cmd64x.c is used on Ultra systems and ones in which the BIOS doesn't program the IDE controller. While they do enable MultiWord DMA Mode 2, they do not enable simplex mode.
Simplex means that only one channel can be accessed at any given time, so reading/writing to targets on both channels simultaneously will be serialized in simplex mode, whereas in non-simplex mode parallel transfers are enabled. There is a data corruption bug in the CMD646 which Linux does not protect against.
Now, coming to MW DMA Mode 2: this mode has a theoretical maximum bandwidth of 16 MB/s. When I originally posted I gave the author of my parent post the benefit of the doubt, but on second thought, I think the numbers he quoted for Linux, 10-15 MB/s, are incorrect. Not only are they incorrect, they are impossible and grossly exaggerated.
Anyone with any experience of IDE knows that the efficiency of IDE is nowhere near the theoretical max, not even close. Many factors determine the performance of disks, which I will not go over here. Even the best performing disk won't reach even half of the theoretical max.
With a 16 MB/s max bandwidth, the best sustained throughput any such IDE system can achieve is at most 5-7 MB/s, with bursts of at best 9-11 MB/s. Just look at any review of ATA or SATA disks. An ATA100 7200 RPM disk in UDMA mode 5 (ATA100) will have at best 35-40 MB/s sustained throughput. Do a quick search on Google, or look at slashdot.org's front-page story titled "IDE vs SCSI disks" or Tom's Hardware SATA disk comparisons. All the above numbers are for the raw interface (no filesystem) with large block sizes of 64-128k.
This is a clear example of two design philosophies: Solaris is designed with the philosophy that data integrity is the top priority, Linux with best performance at any cost. Many times it is impossible to design to meet both philosophies.
So Linux on any box that has no BIOS configuration and uses this file to configure the IDE subsystem is going to have data corruption. Search Google and you'll find that most BIOSes enable simplex mode for CMD646 controllers.
That said, my original post is still accurate. Therein lies the problem with Linux users who think that by having access to the source they are somehow experts and "l33t". The issue I have with the original post by dizz is that it has the mindset that since it is Linux and open source, the drivers are of better quality than Solaris's closed drivers.
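A crude sequential-throughput check of the sort being argued about can be run with dd (the file path and sizes here are arbitrary). One caveat worth stating: re-reading a file you just wrote largely measures the page cache, not the disk, so for honest numbers you need to flush caches or use a file much larger than RAM.

```shell
# Write a 64 MB scratch file in 64 KB blocks, then read it back.
# dd reports throughput on stderr when it finishes each transfer.
dd if=/dev/zero of=/var/tmp/io.test bs=64k count=1024   # write 64 MB
dd if=/var/tmp/io.test of=/dev/null bs=64k              # read it back (likely cached)
rm -f /var/tmp/io.test
```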
I think the article by TonyB looks at one aspect of one benchmark and concludes that Solaris is a poor server OS compared to linux. There are more factors than one webserver’s performance that determines quality of a server OS. Reliability always overrides performance in a server, period.
Actually, if you read the article, I never called Solaris a poor server OS, even compared to Linux. I said they were both great, but I thought Linux had won out.
As for licensing, check out this from Sun’s FAQ:
http://wwws.sun.com/software/solaris/binaries/faq.html#20
It is capacity for CPUs, not CPUs installed.
Since this one was so insulting in addition to being inaccurate, I’ll address it:
For a guy who claims to work with Solaris for a number of years, he doesn’t seem to know too much about it! First, Sun ships Netscape 7 with Solaris 9 as well as Netscape 4.8 (/usr/dt/bin/sun_netscape if I am not mistaken). A standard install, and this is based on which install cluster (there are only five to choose from)! The disk performance results I would dismiss because:
The default is 4.87, and I didn’t see 7, but I’ll check. All the icons/links pointed to 4.87. 7 sucks as well, but not nearly as much as 4.87.
1. If the filesystems were laid out by default, most of them except /export/home would have too little space and filesystems close to full perform poorly. He makes no mention of how the filesystems were laid out and the amount of free space.
That’s telling of the default install then, and Sun should change it. Either way, there was plenty of room on all the file systems.
2. He makes no mention of shutting down unnecessary daemons or services (of which there are a few on Solaris, both Intel and SPARC), and this could also affect the "benchmark" results.
None of the daemons used up significant CPU or I/O resources, and since they were running as part of a standard install, they were indicative of a "stock" environment. Stripping them both down to run some performance tests would not, I believe, have been appropriate, nor would they likely affect the results in any significant way, since they, you know, didn't use CPU or I/O resources.
The Desktop versus Server comparison is a joke. First, he does not mention that you can get the Software Companion CD from Sun (at the same time you download Solaris), which includes a lot of freeware compiled and ready to install, including KDE and XFCE if you don't like GNOME. Any experienced Solaris administrator knows about optional software being installed in /opt/sfw. And if he was actually testing server traits, where is the test of Solaris Volume Manager versus RedHat's LVM? For that matter, if he was going to test server features, why didn't he test RedHat Advanced Server? And for the record, RHAS costs more than Solaris x86!
If you read the article, I posted a link to Sun’s freeware site, which includes a link to either download those free applications, or order the companion CD. In fact, I got GCC-2.95 from that very site to compile GCC 3.3.1.
I used RedHat 9 because it was free. I don’t have the money to buy RHAS. I also mentioned RHAS high costs in the article compared to Solaris x86’s.
I hate /opt. It’s a Sys V convention, but I think it’s a bad one. And I stated so. I think that the default method of laying out the disks is bad because it doesn’t take /opt into account. Hating /opt is an opinion, and I stated so.
Solaris Volume Manager and LVM would be a nice comparison, but since I only used one disk, it really wouldn’t have made much sense.
As far as security goes, just because Sun did not release a patch immediately he gives Sun an F?! A properly set up Solaris machine would be very difficult to hack. And considering patch clusters are released about every two weeks, I just think his expectations are unreasonable.
“A patch” was regarding SSH, which is how Solaris systems are administered. Given the potential remote-exploitability, I consider that to be an incredibly important patch. Everyone else, including commercial Unix vendors, came out with patches very quickly. I don’t believe Sun to this day has come out with anything more than a T patch, and that took more than two weeks. Their initial response? “Turn off SSH”. Great idea, Sun.
Overall his testing methodology is poor, there is too little information on either OS to make any educated judgement as to which is actually better. I see this as “another Linux user tries Solaris and doesn’t like it”.
The funny part is, I've been accused over this article of being a Solaris zealot. If you think you've got a better opinion, then write one up. Put your money where your mouth is.
While there are some disputable points with the Apache benchmarking and your ‘conclusions’, overall I really liked the way this article was written and wanted to say that your candor and conciseness are much appreciated. Not too many of the home-grown articles are written quite this well and it makes a good example for others to follow.
Regarding Apache, IMHO, I would have liked to see the SUN One Webserver on trial as well.
Sorry to burst your bubble, cluecluecachoo, but the Sun ONE webserver IS Apache; the Apache license just says you can't make something out of Apache and call it Apache.
Now what would be really interesting is if they were to compare Apache on Solaris x86 to Windows Server 2003 and IIS, just to show Microsoft is the true software victor. Remember Mindcraft?
Got any proof for your claims, or are you just trolling?
If anything, it is probably based on the Netscape FastTrack webserver. Remember the Sun-Netscape alliance and iPlanet?
This becomes painfully clear once you have been administering both the Sun Enterprise (now called ONE) Webserver and Apache for a while. I have been doing this in a development environment for two years, and can only laugh at your statement.
The only thing these two products (Apache and ONE/Enterprise) have in common is that they are both webservers.
Sorry to burst your bubble, cluecluecachoo, but the Sun ONE webserver IS Apache; the Apache license just says you can't make something out of Apache and call it Apache.
Perhaps you’re confusing the iPlanet derived Sun server for the IBM one. You know rowel, there’s more than one of those big companies who sell big Unix products you’ve never had exposure to, as all you seem to do with computers is sit around on your Windows XP system and troll OSnews.
Now what would be really interesting is if they were to compare Apache on Solaris x86 to Windows Server 2003 and IIS, just to show Microsoft is the true software victor. Remember Mindcraft?
Yes, and Mindcraft was irrefutable proof that Apache is “slower” than Windows… oh wait, it was a Microsoft sponsored FUD campaign designed to tarnish Apache’s reputation.
Microsoft is anything but the “true software victor” as a web server vendor. Apache’s installed base constitutes more systems than all other web servers combined.
The linux kernel has a patch for the CMD640 to stop corruption. It has had this for ages.
Benchmarks are just what they are: benchmarks. Our system administrator has yet to show me how the Solaris SPARC servers we deploy outperform and outlast our Linux x86 servers. In fact, to him, there is no difference at all. And this was a hardcore Solaris zealot who couldn't stand Linux only some years ago. Now he eats his yummy words like they taste better than Doritos.
All this crap about Linux for the desktop and Solaris for the ultra-high-end server that nobody uses is bullshit. Any system/network admin who deploys a server OS based on religious and technical stereotypes, rather than extensive testing, threshold analysis and research, needs to be hung by the scrotum.
Mark my words: even when Solaris 20 comes out, you'd be hard pressed to find any situation where it dances around Linux in circles. In fact, chances are it might be the other way round. That's what happens when you take software that was once open and you commercialise and close it. It's all about the money now, not about quality, improvement, perfection or research.
I'd go as far as saying Sun is partially responsible for Unix's stagnation. Unix always developed faster when it was open and free than when it was closed and commercialised. And all this Solaris 10 talk: as far as I'm concerned, Solaris 10 doesn't exist. Are you Solaris 10 freaks betting big that its performance will be so significant that it will be worth dumping the alternatives? I think not. If not for Linux, we would be burying Unix in the next five years. Oh, maybe the BSDs would have saved us. But I digress again. Flame away.
Dan: Some good points, but you are a little goofy on package management.
First, Linux is a kernel, NOT a distribution. Linux doesn’t do any package management at all.
Neither RedHat nor Debian (for example) put ANYTHING in /usr/local. Packages that are supplied by debian and redhat install according to the FHS standard. which puts parts in /usr, /var, /etc, etc.!
/usr/local is for LOCAL packages. That’s it. As for downloading and installing your own stuff, Yes, most packages DO have PREFIX set to /usr/local, but beyond that it gets iffy. A big example is Apache which creates a /usr/local/apache tree that everything is installed in: it does not use /usr/local/bin for example. What’s the difference between /usr/local/apache and /opt/apache? NOTHING except PREFERENCE.
You are perfectly free to install all your own crap in /opt or /usr/local no matter which OS you use.
For Tony,
Before I got into IT, I was a photographer who spent a great deal of time on the scientific side of things. I was taught that when testing something, you provide all relevant information so that the test can be repeated. You did not provide sufficient information for anyone to independently verify or repeat your results.
You still don't answer the question of which install cluster you used to install Solaris 9. And your definition of "plenty of space" and mine are probably two different things. The comment "Sun should change it" is arrogant at best; I could say the same thing about any product and a default install. That does not address the issue of how the disk was partitioned and the amount of free space prior to and during testing.
My bad on the Sun Freeware link. However, at home I have a copy of RedHat Advanced Server (2.95 Beta) that I downloaded for free. I would not get support for it but that is not the issue. If you are going to test server software, then the products should be similar.
The issue with SSH patches still comes down to how the machine is configured. Since Solaris 9 comes with TCP Wrappers as part of the OS, if it is configured for access control features along with logging all failed login attempts (provided you have some means of monitoring and notification) an administrator could safely run for a limited time with a vulnerable daemon or service. In a perfect world all systems would be invulnerable, this is not a perfect world.
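As a sketch of that configuration, a default-deny TCP Wrappers policy that still admits SSH from an admin network might look like this; the network number is a placeholder:

```
# /etc/hosts.deny -- deny everything wrapped by default
ALL: ALL

# /etc/hosts.allow -- permit sshd only from the admin subnet
sshd: 192.168.10.0/255.255.255.0
```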
I come from the "build machines for the intended function only" school. That means if the machine is going to be a web server, it does not have a graphical interface, SNMP, sendmail, or unused or unsupported drivers running. It is also tuned for security and performance, which takes time and research. I do not believe in default installs because I have seen too many problems associated with them. And to me, testing a default install only reinforces that it's "OK" to install everything, which I think is wrong.
If you are going to have something published in a field such as IT, which is highly opinionated, be prepared for some fallout. Everyone is not going to agree with you (I don't). And if you are concerned about comments, then I would not go to Slashdot to read the ones there (some of which are far less friendly).
sorry to burst your bubble cluecluecachoo, but the sun one webserver IS apache but apache license says you can’t make something out of apache and call it apache
Apache is pre-bundled with Solaris, however, here is a list of web servers they sell:
http://wwws.sun.com/software/product_categories/web_servers.html
“Sun ONE Web Server: A secure and highly reliable Web server. Used to power Web applications based on Java Servlet and JavaServer Pages technologies, Active Server Pages, PHP, ColdFusion, and CGI.
Sun ONE Active Server Pages: Enterprise-grade Active Server Pages (ASP) support for the Sun ONE Web Server and Apache.”
Oh, btw, how on earth can Sun ONE Web Server be based on Apache when it was originally Netscape FastTrack IIRC, then later relabelled iPlanet Web Server, then Sun ONE Web Server? Take a look at the link below, then come back spouting your clueless clap-trap.
http://www.keylabs.com/results/sun/SunONEFinalReport_Solaris.pdf
Sorry to burst your bubble, but even the latest source for 2.6.0-test7 has no changes to cmd64x.c. Also, CMD640 != CMD646. Two different chips, two different files.
Also, please define the corruption, as I have; there are many possible corruptions, you know.
From the linux kernel documentation:
“The CMD-Technologies CMD640 IDE chip is used on many common 486 and Pentium motherboards, usually in combination with a “Neptune” or “SiS” chipset. Unfortunately, it has a number of rather nasty design flaws that can cause severe data corruption under many common conditions. Say Y here to include code which tries to automatically detect and correct the problems under Linux. This option also enables access to the secondary IDE ports in some CMD640 based systems. This driver will work automatically in PCI based systems (most new systems have PCI slots). But if your system uses VESA local bus (VLB) instead of PCI, you must also supply a kernel boot parameter to enable the CMD640 bugfix/support”
“Mark my words, even when solaris 20 comes out, you’d be hard pressed to find any situation where it dances around Linux in circles. In fact, chances are it might be the other way round. That’s what happens when you take software that was once a open and you commercial and close it. It’s all about the money now not about quality, improvement, perfection or research. ”
Oh that’s why in the benchmarks published in this very article solaris compiles software twice as fast as linux. And in a few benchmarks is faster as well. We don’t have to wait for solaris 20.
“I’d go as far as saying SUN is partially responsible for Unices retardness. Unix had always developed faster when it was open and free than when it was closed and commercialised. And hear all these Solaris 10 crap.”
If it were not for solaris, sun and SVR4 linux would not have a caching slab allocator. Solaris has had features like a fully preemptible, multithreaded kernel, 0(1) scheduler for 10 years now. linux is incorporating that in 2.6.0 which is still in test8. Also the 1:1 thread model has been in solaris 9 and 8 for more than 3 years now before linux has it in 2.6.
The 0(1) scheduler is what linux fanboys have been having wet dreams about for a year now and solaris has had that quietly for a decade. Anyone who has read unic internals or solaris internals knows this. And the per cpu run queues, oooh new concept for linux Ransom love writes an entire article in LJ on it, been there done that for solaris for a decade.
I find it funny when I read about something in an almost decade-old textbook and then find a Linux magazine in 2003 touting it as something new and revolutionary.
I seriously doubt Sun is stagnating UNIX. I think most Linux developers wouldn’t have come up with any better technologies if they didn’t have textbook after textbook describing SVR4, Solaris, and other closed-source OSes to copy ideas from and implement in Linux. Nothing in the Linux kernel is new; it is a reimplementation of old technologies, many invented by closed companies.
Mystileef, please take your Linux zealotry elsewhere if you can’t have a serious, well-informed discussion. Linux and open source are good, but please don’t make them a religion.
Well, 10 MB/s was what it maintained while downloading 2 GB files over FTP. 15 MB/s was writing from memory, but of course I think Linux buffers in RAM, so that is probably unrealistic. Still, the 10 MB/s can’t be that much caching, since the machine had 128 MB RAM and the files were 2 GB+ in size.
The FTP client was ncftp; while transferring it said the speed was 10-12 MB/s, and once the download was completed it said the average speed was around 10 MB/s.
Of course, ncftp could have really bad statistics.
While I do appreciate your research:
1. CMD640 != CMD646. They are not the same chip.
2. The Ultra 5/10 don’t have SiS chipsets.
3. The code in cmd64x.c (different from cmd640.c), which is used on the Ultra 5/10, doesn’t enable simplex mode, period. I read through all the configuration code. It enables multiword DMA mode 2, but that is not simplex. I can gladly walk through the code with anyone who is interested.
“Well, 10 MB/s was what it maintained while downloading 2 GB files over FTP. 15 MB/s was writing from memory, but of course I think Linux buffers in RAM, so that is probably unrealistic. Still, the 10 MB/s can’t be that much caching, since the machine had 128 MB RAM and the files were 2 GB+ in size.”
Dizz, thanks for explaining that. I think you confused raw disk performance with filesystem performance.
I have used Solaris 9 on an Ultra 10 for many years, and I think filesystem operations are about as fast. Actually, the latest Solaris 9 has a much-improved UFS, and with logging enabled performance improves dramatically, as evidenced in the article. Traditional UFS write semantics commit everything to disk and make sure it is there before returning from the write syscall. Logging allows the writes to be lazy, and thus makes writing much faster than with logging disabled. Linux writes to kernel buffers and bdflush periodically syncs them (at least in 2.2 and 2.4; things might have changed for 2.6), so writes return immediately, before the data is actually on disk. This was on ext2.
Linux will always benchmark faster than Solaris because of Solaris’ obsession with making sure data is actually on disk before saying a write is complete.
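The distinction being described, write() returning only once data is on stable storage versus returning as soon as it is in kernel buffers, can be sketched in portable C. This is not Solaris UFS code, just an illustration of the POSIX mechanism; the function name and path are made up:

```c
#include <fcntl.h>
#include <unistd.h>

/* Write a buffer and force it to stable storage before returning,
 * roughly the guarantee a synchronous filesystem gives on every
 * write(). On a lazily flushed filesystem, write() alone returns as
 * soon as the data is in kernel buffers; fsync() closes the gap.
 * Returns 0 on success, -1 on error. */
int write_durable(const char *path, const char *buf, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) < 0) { close(fd); return -1; }  /* flush to disk */
    return close(fd);
}
```

Benchmarks that time only the write() calls therefore flatter the lazy-writing system, since the real disk I/O is deferred past the measurement.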
One thing I failed to mention: Solaris would be a dog on an Ultra 5 with 128 MB RAM. I would say the minimum RAM for Solaris is around 256-384 MB. I had 768 MB and it was fine. Linux would definitely be faster than Solaris on a box with 128 MB of RAM.
You still don’t answer the question of which install cluster you used to install Solaris 9, and your definition of “plenty of space” and mine are probably two different things. The comment “Sun should change it” is arrogant at best; I could say the same thing about any product and a default install. That does not address how the disk was partitioned or the amount of free space prior to and during testing.
It was Solaris 9 08/03 (which I didn’t mention, my bad) with the latest 9_Recommended patches, downloaded from Sun’s public site (which I did mention).
As far as the file systems, I made sure they had plenty of room. I’ll check when I get home (I’m traveling) for the exact figures, but the article was 9 pages long; including every detail would be excessive. If you want to replicate my work, you can contact me with any questions.
Saying Sun should “change it” isn’t arrogance, you yourself mentioned the flawed nature of how Sun partitions disks.
The issue with the SSH patches still comes down to how the machine is configured. Since Solaris 9 ships TCP Wrappers as part of the OS, if it is configured for access control along with logging of all failed login attempts (provided you have some means of monitoring and notification), an administrator could safely run for a limited time with a vulnerable daemon or service. In a perfect world all systems would be invulnerable; this is not a perfect world.
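Concretely, the kind of TCP Wrappers lockdown being described looks something like this, assuming sshd was built with wrapper support (the admin network address is hypothetical):

```
# /etc/hosts.allow -- only the admin network may reach sshd
sshd: 192.168.10.0/255.255.255.0

# /etc/hosts.deny -- refuse everything else
ALL: ALL
```

Failed connections from outside the allowed range are then refused before sshd ever sees them, and show up in syslog for monitoring.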
You’re trying to downplay a serious security matter. There was a potentially remotely exploitable root hole. Just about everyone, open source projects, Linux distros, the BSDs, commercial UNIXes, all had appropriate patches within *days*.
There is no excuse for what Sun has done (or rather, what they failed to do). None. It was a serious security matter, and no amount of TCP Wrappers or other easily circumvented measures would make up for an exploitable SSH. I’m not even sure which is worse: a potentially exploitable SSH or administering by telnet. They’re both irresponsible. Any decent sysadmin would have installed the patched OpenSSH, which is what I did.
I come from the “build machines for the intended function only” school. That means if the machine is going to be a web server, it does not have a graphical interface, SNMP, sendmail, or unused or unsupported drivers running. It is also tuned for security and performance, which takes time and research. I do not believe in default installs because I have seen too many problems associated with them, and to me testing a default install only reinforces that it’s “OK” to install everything, which I think is wrong.
Well, for one, I always use SNMP on any installation, because I’m a metrics guy; I never understand why people don’t pull as many statistics as possible from their systems. This was a general look at both operating systems, and as a, you know, general look, it used a general install. The combinations of tweaking, tuning, usage and so forth are infinite, and this was a finite article.
I still haven’t heard anything from you on how you would have done it. In fact, why don’t you do an article of your own? This is but one review from one person, and I fully acknowledge it shouldn’t be the final authority on the matter.
If you are going to publish in a field as opinionated as IT, be prepared for some fallout. Not everyone is going to agree with you (I don’t). And if you are concerned about comments, I would not go to Slashdot to read the ones there (some of which are far less friendly).
Of course, everyone’s got an opinion, and many are heavy on insults and devoid of any point. I’ve been doing tech writing for years and I’m used to it, but don’t expect insults to go unanswered. You started your comment with an insult, and it looks like you’ve got little to back it up (other than complaining).
While I do appreciate your research:
1. CMD640 != CMD646. They are not the same chip.
2. The Ultra 5/10 don’t have SiS chipsets.
3. The code in cmd64x.c (different from cmd640.c), which is used on the Ultra 5/10, doesn’t enable simplex mode, period. I read through all the configuration code. It enables multiword DMA mode 2, but that is not simplex. I can gladly walk through the code with anyone who is interested.
Sorry about that, I came in a little late and didn’t catch the initial post. I assumed you were talking about the CMD640 and Linux on x86.
How about trying to benchmark the Apache that comes with Solaris 9? I guess (almost sure, actually) that package comes better tuned for Solaris than the thing you get by compiling it.
Yeah, comparing Sun’s Apache would be a good idea. I considered it, but didn’t have the time, and I wanted to compare the same version on both more.
I’m guessing Sun’s included Apache is probably compiled with the Forte compilers, which probably produce somewhat faster code.
To be honest, I believe much more in web server configuration. That has been a large part of my job for a couple of years, and I’ve become aware of just how important it is for performance. It’s not just the binary; it’s how it comes tuned out of the package.
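As an illustration of the sort of out-of-the-box tuning that matters, here are a few classic Apache 1.3 httpd.conf directives; the values are placeholders, not recommendations for the test hardware:

```
# Illustrative Apache 1.3 httpd.conf tuning (example values only)
KeepAlive On
KeepAliveTimeout 5          # don't hold idle connections long
MaxClients 150              # cap concurrent children to fit in RAM
MinSpareServers 10
MaxSpareServers 30
MaxRequestsPerChild 10000   # recycle children to bound memory leaks
HostnameLookups Off         # skip a reverse DNS lookup per request
```

Two builds of the same Apache version can benchmark very differently depending on settings like these, which is part of why a vendor-shipped package can outperform a default source build.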
“Yes it would, but Sun will never endorse or perform such benchmarks because the outcome will probably not encourage people to buy Solaris.”
Solaris is free for people with SPARC hardware.
They also make it free when people buy x86 hardware off them.
“Solaris is free for people with SPARC hardware.”
Solaris is free only to educational users or developers, and only for uniprocessor SPARC systems. Commercial use on uniprocessor systems, or any use on multiprocessor systems, does not fall under the Free Solaris Binary License program; see http://wwws.sun.com/software/solaris/binaries/index.html
Here are the costs of some various Solaris licenses:
http://store.sun.com/catalog/doc/BrowsePage.jhtml?cid=82431&parentI…
It seems that a stock FreeBSD 4.8 box still smokes Linux 2.4. I’d be interested in seeing you repeat your tests with FreeBSD (still a viable alternative to Linux, particularly for web serving).
Solaris is not expensive for development or educational use.
While waiting for yellowTAB’s Zeta, I downloaded and installed it. It was more difficult to get working than Red Hat on my laptop, but all the help I needed was there on the Internet.
It’s best to make sure you have another Internet-connected computer handy when doing this.
It starts up slowly, but when running it seems faster than Linux to me; it feels crisper and less bloated. The CDE environment is a real nostalgia trip, but it all works really well. I had a look for Netscape 7 after reading one of the comments above, and it did not appear to be preloaded anywhere by the default installer.
To get my Radeon working with acceleration I had to download XFree86 drivers, which Sun lets you install into their window system; that surprised me, but it seems Sun is fine with using open source software. The ethernet card needed a line added to a config file, and I had to edit two other config files to get DNS to work. All these things work out of the box with Red Hat. The fonts do look rubbish, but you could also think of them as quaint.
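For reference, the two edits usually needed to get DNS resolving on Solaris are along these lines (the nameserver address is made up):

```
# /etc/resolv.conf
nameserver 192.168.1.1

# /etc/nsswitch.conf -- add dns to the hosts lookup order
hosts: files dns
```

Without the nsswitch.conf change, Solaris consults only /etc/hosts even when resolv.conf is present, which is the step that usually trips people coming from Linux.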
I think that Solaris is just more fun to install, as it makes you think about what you are doing.
I would recommend it: take it for a spin, run your own benchmarks.
Tony,
You failed to mention you were talking about UPGRADE licenses. These are priced by capacity and only concern people buying hardware from sources other than Sun or their resellers. If you buy a grey-market box, then you’ll require one of these licenses; the free binary license program covers uniprocessor machines, while multiprocessor machines require a license to be purchased. Grey-market machines also cannot be put under a maintenance contract unless inspected and passed by a Sun engineer, and this can cost a packet.
Anyway, the point is that you were again misleading: if you buy the box from Sun, the license is included and covers you for 1 or 100 processors.