How does OpenSolaris, Sun’s effort to free its big-iron OS, fare from a Linux user’s point of view? Is it merely a passable curiosity right now, or is it truly worth installing? Linux Format takes OpenSolaris for a test drive, examining the similarities and differences to a typical Linux distro.
I never really played with it much, but I have to agree that it was slower than Linux under VMware. Not unbearably slow, though!
The next release, due shortly, is a bit faster and a bit more polished. I am anxiously awaiting it.
VMware does not support Solaris very well, especially the desktop versions of VMware. Recent versions of Solaris have timekeeping problems under VMware, which is why everything slows down. Try it on real hardware.
I have it running on a quad-core Phenom with 2GB of RAM, with 4 local zones configured, and it works _really_ fast. Solaris replaced Linux and Xen on this box, and I have to say that Xen crawled compared to zones…
One thing that sucks in OpenSolaris, in my opinion, is the new package management system, ipkg. It's so slow that it's nearly unusable.
You are much better off running Solaris in VirtualBox rather than VMware.
The author was definitely correct about running Solaris on older hardware: just don't (unless it's SPARC hardware). It is not worth the time.
You are comparing type 1 hypervisor performance (Xen) to OS-level virtualization (zones/containers), so it's like comparing apples to oranges.
Your comparison would be fair if you compared Solaris' xVM (its implementation of Xen) with Xen on Linux.
I’m not a UNIX expert (though I have been running FreeBSD and a few flavours of Linux* such as Slackware and Arch), so I’m a little confused by the different licences and builds of the various Solaris projects going on at the moment.
Basically I want a headless install of Solaris for its ZFS support**. What projects would you recommend for a n00b like myself?
* Granted Linux isn’t UNIX
** I’m a little put off by FreeBSD in this instance due to it only being experimental support.
Do you realise that “experimental” in FreeBSD does not mean the same thing as it does in Linux?
Networking works differently to Linux – ipconfig exists but has a different syntax, and eth0 isn’t the standard interface.
The last time I used ipconfig was in WINDOWS. eth0, eth1, eth2… errr, how the hell should I know which card is which? When I see rl0 in FreeBSD I know it is a Realtek chip, bge0 is Broadcom, em0 is Intel gigabit LAN, etc., unlike in Linux.
I did find it frustrating to have to relearn commands that I’ve been using without thinking for years now (eg ifconfig), and right now I’m not convinced that for me it’s worth the mental effort…
Oh, just shut up and RTFM!
It seems that the author’s standpoint regarding “relearning the commands” is that Linux is the standard and Solaris is something strange. Of course Solaris does things differently from Linux. Why? Because it isn’t Linux. That’s nothing bad per se, but you have to know the OS you’re working with. Solaris is mostly targeted at professional users, while Linux is having more and more impact on novice users who do not see any need to know about the OS.
“I did find it frustrating to have to relearn commands that I’ve been using without thinking for years now (eg ifconfig), and right now I’m not convinced that for me it’s worth the mental effort, especially given the relative scarcity of external software available.”
When I first used Solaris (coming from a BSD environment) I found that generic UNIX knowledge does help you find your way through the Solaris architecture, but you simply need to learn things in order to master the power of the Solaris OS. Well, GUI frontends do help you here, but when it comes down to the basics, knowledge is the key. Most things you may find strange in Solaris are well intended, but you need some knowledge to understand why it has been done this way.
Er. What’s the point of this article? It looks more like a feature description taken from the OpenSolaris or Sun sites.
An “OpenSolaris for the Linux user” review with 2-3 screenshots of the GNOME interface? OK, granted, ZFS, SMF and pkg (or ipkg, I guess?) are mentioned. But to what extent? Just to name them? Thanks, I can read README.txt files and “Relax while … installs on your computer” ads.
After reading the article, well, it’s bland.
OpenSolaris takes quite a bit of time to get used to, IMHO, coming from FreeBSD, OpenBSD, and many, many versions of Linux. I used it because I wanted ZFS, virtualization, and also to try something new.
I did move back to FreeBSD after about a week or so, since I thought OpenSolaris brought an unnecessary learning curve for someone new. Things like ‘ps’ being different from every other system, network interface setup and modification being annoying, the number of programs you can compile outside of the package manager being slim, and it being overall not very friendly (I don’t want to use the GUI, ever). However, I have 4GB of RAM, and ZFS really should only be run under 64-bit FreeBSD. Qemu doesn’t seem to run, Xen isn’t even an option for virtualization, and WINE doesn’t work under 64-bit (these are the main reasons I bought 4GB of RAM in the first place).
ZFS has been running flawlessly on FreeBSD for me thus far, and even the maintainer says he’s been using it since he ported it over without a hiccup. FreeBSD runs version 6 of ZFS, while OpenSolaris currently runs version 11. It IS true, once you go ZFS you don’t go back.
I refuse to run Linux, for personal and limiting reasons, and FreeBSD won’t let me virtualize. It seems that in the next few days I’ll be biting the bullet and moving back to OpenSolaris. It is very nice that ZFS is seamlessly integrated and snapshots are automatically created when updating the system. This ensures you can easily roll back or boot back into an older install to test different things.
All in all, OpenSolaris HAS some potential, but its licensing is very whack and limiting. If Sun wants its OS to evolve and take on more users in the community, the licence will really need to change.
If you want to learn Solaris by going as far as breaking things, create a zone and do it in it. Once you’ve f–ked up the zone, you roll it back if you made a snapshot of it in the global zone or start new.
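A rough sketch of that workflow, assuming a ZFS root pool named rpool with zones living under /zones on their own dataset (the zone name and dataset paths are just examples):

# Create, install and boot a throwaway zone for experiments
zonecfg -z sandbox "create; set zonepath=/zones/sandbox; commit"
zoneadm -z sandbox install
zoneadm -z sandbox boot
# From the global zone, snapshot the zone's dataset before breaking things
zfs snapshot rpool/zones/sandbox@clean
# ...experiment inside the zone, then halt it and roll back to the snapshot
zoneadm -z sandbox halt
zfs rollback rpool/zones/sandbox@clean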
Oh, well, /usr/ucb/ps was the normal ps. But they removed that. In fact, they removed so much from OpenSolaris that I find it impossible to actually use in a real situation, and I’ve used Solaris 1.x (SunOS 4.1.4) and Solaris 2.6-10.
OpenSolaris is a bad thing IMHO; I’m not interested in it, and I hope every day that Solaris 10 doesn’t suffer or stop getting U-releases because of this filth.
I would say that *BSD, Linux, Solaris and UNIX users would not be happy with OpenSolaris in its current state. It’s designed to try to gain market share from Linux, it seems, and Linux in turn is trying to gain market share from OS X and Windows. So they are trying to get the table scraps of table scraps? Bonehead move.
I have recently tested the OS and found it to be very promising.
Their software is improving continuously on the GUI front, but it will take years to catch up with even Linux in this regard.
A lot of device drivers are not yet available, and a lot of software isn’t available either. The installer needs to become fully graphical, and only drop to the CLI if the GUI fails.
Speed-wise, it was faster than Linux, of course, and faster than Windows when tested on a Core 2 Duo 2.6GHz system with a workstation-class motherboard from Asus, a 10,000rpm Raptor HDD and a GF 9800 GTS. My graphics card was recognized and given a resolution of 1920×1200, but glxgears yielded low fps.
NICs are not as well supported on Solaris as they are on Linux.
The version of Solaris I tested was snv_95.
CPU usage was a lot lower than on Linux when copying data to the HDD over the network, from a DVD+RW, or from another HDD.
System tools need to become more mature, for example the partitioner and other device tools.
I will continue to watch Sun progress and evaluate their work.
It’s surprising that your glxgears was slow. When I used Solaris with an NV 8800 GTX it seemed to be very fast. Does the glxinfo command work? (I am not near my OpenSolaris box, so I can’t tell.)
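For what it’s worth, a quick way to check whether 3D is actually accelerated (works the same on Linux and Solaris):

# "direct rendering: No" means glxgears is falling back to software rendering
glxinfo | grep "direct rendering"
# The renderer string should name the NVIDIA driver rather than Mesa/software
glxinfo | grep "OpenGL renderer"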
OpenSolaris sucks compared to SXCE. The only thing I liked about OpenSolaris was IPS, otherwise SXCE is much better.
OpenSolaris is SXCE minus CDE and a few encumbered bits, plus the obviously different installer and ZFS root.
One of the images in the article poses the question:
Is there really a need to have ‘SUNW’ before everything?
Actually, yes there is. The SUNW prefix is Sun’s old stock ticker symbol, which they used before switching it to JAVA.
It’s common practice for the name of a package to start with the company’s stock symbol, so an admin will at least have some idea where the package came from.
alan
Yes, it is necessary, as it denotes the provider of the package; that is no different, for example, from the Firefox package on the Mozilla FTP server being prefixed MOZ, as one example. I certainly don’t see anything wrong with it – it’s hardly something so bad that it’ll make someone never want to use the OS.
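You can see the convention for yourself; a rough example (output details will differ between Solaris 10/SXCE and OpenSolaris):

# SVR4 packaging (Solaris 10 / SXCE): the second column is the package name,
# almost all of them prefixed SUNW
pkginfo | head
# IPS on OpenSolaris keeps the same prefix in its package names
pkg list | grep SUNW | head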
In that article, someone for instance compared LVM with ZFS! :o)
He ignored the fact that setting up a RAID under LVM takes something like 20+ commands with difficult syntax and lots of reading of the manual, whereas in ZFS it takes:
$ zpool create pool_name raid_type disk1 disk2 disk3
and that’s it. No formatting, just use it right away. Once you go ZFS, you never go back. Seriously.
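For contrast, a rough sketch of the Linux side (device names and sizes are made up), which is roughly what those 20+ commands boil down to, next to the single ZFS line:

# Linux: software RAID + LVM + filesystem, step by step
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
pvcreate /dev/md0
vgcreate datavg /dev/md0
lvcreate -L 500G -n datalv datavg
mkfs.ext3 /dev/datavg/datalv
mount /dev/datavg/datalv /data

# ZFS: one command, and the pool is created and mounted at /tank right away
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0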
And I don’t prefer OpenSolaris. I prefer Solaris Express, which is the basis for OpenSolaris.
Compared to directly partitioning/formatting hard drives and LUNs, LVM is a godsend (add drives to a volume group, expand logical volumes, snapshot volumes — if you remembered to reserve space for the snapshots, etc).
Compared to ZFS, though, LVM (and md/dm) is very limiting (LVs in VGs on PVs on software RAID gets confusing quickly, and LVs in VGs on hardware RAID isn’t much better, have to reserve/waste space in the VG for snapshots, no rollback for snapshots, replacing dead drives is a royal pain, replacing drives with larger ones is a royal pain, etc).
My biggest complaint, so far, with ZFS is that you can’t add drives to a raidz pool (ie. you can’t go from a 10-drive raidz2 pool to a 12-drive raidz2 pool). And you can’t convert from raidz1 to raidz2 on a running system. So, you have to be sure to stuff your case with as many drives as it can handle from the get-go.
The nice thing about ZFS, is that you can replace drives with larger ones and have the extra space available after a resilver (offline the existing drive, remove it from the case, insert new drive, “zpool replace” the drive, then resilver).
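A minimal sketch of that replacement dance (pool and device names are assumptions):

# Take the old disk out of service
zpool offline tank c1t2d0
# ...physically swap in the new, larger drive...
# Tell ZFS to rebuild onto the replacement, then watch the resilver progress
zpool replace tank c1t2d0
zpool status tank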
As I understood it, you can add new drives as a whole unit (vdev) and then add the unit to the zpool. Is this correct?
But, as you say, you cannot add a drive to an EXISTING zpool; instead you can exchange the drives for larger ones. Yes.
A zpool can be extended with vdevs as you like. vdevs can be single drives, RAID-Z arrays, mirrors, files or LUNs. The redundancy is defined by the vdev. Put a single-drive vdev in a pool and you’re playing with fire. You can turn single-drive vdevs into mirrors, though.
What you can’t do is expand an existing RAID-Z vdev with more disks; you’d have to add a whole new RAID-Z vdev. The way it works to avoid the write hole makes expanding it a pain in the ass. The developers have thrown around a lot of ideas about how this could be implemented, but it’s a total non-priority for them, and they are encouraging third parties to give it a try if it’s a priority for them. You can, however, increase the size of an array by successively replacing each disk with a larger one.
Since ZFS is still evolving, the focus points more towards the enterprise, where whole arrays are added to a storage pool to extend it, instead of expanding a single array (which is a dangerous operation).
For that matter, there’s still work on implementing, testing and stabilizing some functionality (bp_rewrite) needed to perform the necessary operations safely.
So, how would one take a 24-drive bay system, put 12 drives in it, configure a zpool, use the system, later add 12-drives to it, and add those to the pool?
Would you create a 12-drive raidz2 pool (say storage0), then add that to another pool (say bigstorage), then later create a second 12-drive raidz2 pool (say storage1), and then add that as a vdev to bigstorage?? Which would give you:
bigstorage
  storage0
    drive0
    drive1
    …
    drive11
  storage1
    drive0
    drive1
    …
    drive11
If so, that’s kind of neat, but a lot of extra overhead. Although I guess it does offer a bit more redundancy (you can lose up to 4 drives in total, at most 2 per raidz2 vdev).
I guess the whole vdev thing is tripping me up, as I’m thinking in terms of drives and single pools per storage server.
Yep, like this. Though you don’t get to label vdevs.
Yet, I guess. When they finally implement vdev removal, you’ll need to be able to address them.
–edit:
It’d work like this to create a pool with one RAID-Z array:
zpool create pacman raidz disk1 disk2 disk3 disk4
Then later, adding a second RAID-Z:
zpool add pacman raidz disk5 disk6 disk7 disk8
The end result is two RAID-Z vdevs. ZFS will then dynamically spread blocks across them based on parameters like write throughput and bandwidth usage.
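After that second add, zpool status should show both vdevs side by side, roughly like this (illustrative output, trimmed):

  pool: pacman
 state: ONLINE
config:
        NAME        STATE
        pacman      ONLINE
          raidz1    ONLINE
            disk1   ONLINE
            disk2   ONLINE
            disk3   ONLINE
            disk4   ONLINE
          raidz1    ONLINE
            disk5   ONLINE
            disk6   ONLINE
            disk7   ONLINE
            disk8   ONLINE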
It is true that you have to relearn some things if you switch from Linux to Solaris, but ZFS, zones and projects more than make up for that effort. These tools allow you to have a lot more control over your system resources than you have in Linux. This makes it much easier to match IT budgets for various activities in an organisation with actual IT resource consumption.
You also have to take into account that Linux is a moving target as well. E.g. the “upstart” system used in Ubuntu and Fedora is similar in spirit to Solaris SMF, so even if you stay on Linux you will need to learn new things now and then.
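To give a flavour of SMF, day-to-day service management comes down to a couple of commands (ssh is just an example service):

# List all services and their current state
svcs -a
# Enable and start sshd; this persists across reboots
svcadm enable svc:/network/ssh:default
# Explain why a service is not running
svcs -xv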
Having names on network interfaces that actually give you a clue about what type of device you have is much better than calling them eth0, eth1, … The differences between ifconfig on Linux and Solaris are actually rather minor, so if you use it often you will relearn quickly.
However, if I were to choose some form of Solaris, I would probably go for the free-as-in-beer commercial version. OpenSolaris still feels a little too much like a preview or beta quality to me. E.g. things like JumpStart are missing, so it is hard to deploy efficiently if you have many boxes. However, the plans for future versions look very promising, as they include more advanced tools for clustering and storage management in the next release.
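For example, bringing up an interface with a static address, assuming an Intel NIC that Solaris names e1000g0 (Linux line shown for contrast):

# Linux
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
# Solaris: the interface has to be plumbed first, then configured
ifconfig e1000g0 plumb
ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0 up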
Solaris is bad. Period. Why bother creating another useless piece of OS, claiming to be this and that? Solaris lacks most of the good features that other OSes, Linux and the *BSD family, have; it’s hard to find and install needed software, and should you need something additionally installed, you’re probably going to be missing some development libraries. Using it for a desktop/workstation? No thanks, I’ve tried that; there’s no KDE. Even with GNOME, it will still crash just changing the hostname, LOL. It’s relatively slow compared to OpenSuSE 11.0 with KDE 3.5.9, for example.
Server applications? Again, no thanks. Simple benchmarks on the same hardware reveal ~60% degradation compared to Linux and BSD.
In short, I wouldn’t trust this OS for anything.
P.S. Reading your comments, it seems that ZFS is the only good thing; I believe it will soon be ported to other OSes.
FUD
You want KDE on OpenSolaris?.. Go get BeleniX: http://www.belenix.org
It’s not hard at all to install any software, you just need a little common sense… it’s going to be Synaptic-easy when IPS is ready, don’t worry.
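Basic IPS usage already looks a lot like apt or yum; a quick sketch (the package name here is only an example and may differ):

# Refresh the catalogue from the configured repository
pfexec pkg refresh
# Search the repository, then install
pkg search -r vim
pfexec pkg install SUNWvim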
60% degradation?.. May I inquire what your sources are for these numbers?.. Hardware, tests and everything else in the package…
If you don’t like it, that’s ok, but don’t go around spreading false information
NOTE: OpenSolaris installs in KVM for Linux users who would like to try it. As the author said, it’s an easy install, not fundamentally different from an Ubuntu install – not a big time sink. I have it running here fairly well:
#!/bin/sh
# Boot the installed OpenSolaris disk image under KVM, with the 2008.05
# ISO still attached as a CD-ROM
kvm \
-boot c \
-cdrom /mnt/local/virtualization/os200805.iso \
-m 1024 \
-std-vga \
-soundhw all \
-net nic,model=rtl8139 \
-net user,vlan=0,hostname=solaris \
-localtime \
/mnt/local/virtualization/solaris.img
—
As for the “learning curve” issue, there’s another angle, and the author of the article seems to share my view:
If you are, say, a Windows user, and one day you decide, “I’ve had it – this sucks – ENOUGH!” and you’re going to switch to a UNIX-like OS, then for someone starting out they’re all six of one, half a dozen of the other. You might as well try FreeBSD, or OpenSolaris, or Ubuntu.
But if you are already a Linux user, the question becomes, what does OpenSolaris offer me that makes relearning things worth the time? If there is something specifically compelling about OpenSolaris that Linux doesn’t have, it might well be worth it. But if it’s just another “UNIX environment” for you, you have to wonder if it’s worth the time.
In the time you spend re-learning a lot of the basics, you could become a better Linux user. And I think this was the point of the author. This isn’t OpenSolaris bashing. The same would be equally true in reverse – if someone was used to OpenSolaris and was considering switching to Linux. Solaris has documented positives, and if you need or want them, then yeah, you’re going to break some old habits, because it’s going to be worth your time to do so.
But personally, for me, I’m not going to use most of that stuff, so it’s just not worth the time expenditure (theoretically, as I run it just because I enjoy mucking around with OSes for some godawful reason).
I will say that I think every Linux user should give FreeBSD a shot, because its package management is so compelling (ports AND packages); it could be that Linux users don’t even know they want this until they try it. I am a Gentoo user, but I could be a FreeBSD user. The changes and the re-imagining of my configuration and environment would be worth it if Gentoo imploded.
Anyway, all of the above can be disregarded, as the point is that Solaris and FreeBSD can easily be virtualized. I have both of them running in KVM (and OpenBSD, NetBSD, and Plan 9 in QEMU), so why *not* try them? This is the first variant of Solaris I’ve tried that felt “immediately usable” upon install. The environment will be very familiar.
I’ve heard there’s some turmoil over there and I haven’t looked in a few weeks, but once you install, look at blastwave.org for a repository that provides many of the packages you’re used to having available on Linux.
http://www.blastwave.org/howto.html