Last week, on my country’s Liberation Day, Sun released OpenSolaris 2008.05, the much-awaited first official fruit of Project Indiana. It delivers many of OpenSolaris’ major features, such as DTrace, ZFS, containers, and more, in a Linux distribution-like package. The goal is to allow more people to experience Solaris. A few reviews have since hit the web. Ars Technica took a first look, and was moderately positive about the release. They very much liked the slick installation experience, which was never one of Solaris’ strong points (I repeatedly beat myself with a metal rod during Solaris 9 SPARC installs to ease the pain). According to Ars, the installation procedure is “painless, intuitive, and easily on par with Ubuntu and Fedora”. OpenSolaris boots into a live CD, and the installer can be launched from there.
Ars says that another good point is the clean GNOME theme used by 2008.05. The new theme, Nimbus, “has soft colors, smooth gradients, and a slightly bubbly look. It’s not as snazzy as Murrine, but it’s a nice improvement over Clearlooks and it beats the crap out of Ubuntu’s brown”. The GNOME installation is fairly default, albeit slightly outdated in some areas. In addition, it does not include anything Mono-related – a positive point for some, I’m sure.
Hardware-wise, there were some quirks, especially in the network area. Ars could not get OpenSolaris to connect to the network at all, which brings back bad memories on my end; getting the network up and running on Solaris 9 on my Ultra 5 was always utter hell, and I’ve more than once wanted to throw my Sun Type 5 keyboard through the window. And Sun aficionados will know how heavy a Type 5 is. Because Ars couldn’t get the network up and running, they were unable to properly test the new IPS package management system.
Ars concludes:
Although the OpenSolaris development community still has a lot of work to do before the operating system is ready to take on Linux on the desktop, the progress so far indicates that the project deserves further attention.
Another reviewer, our very own Kaiwai, tested 2008.05 on a Lenovo Thinkpad T61p, and concluded:
Apart from a few hiccups along the way, one has to acknowledge that OpenSolaris is still a work in progress. Although I would consider using OpenSolaris 2008.05, the problem is that the build it relies on, B86, has a nasty bug which causes performance issues for laptops with 4965-based wireless cards.
In light of the new release, ComputerWorld decided to interview Ian Murdock. The interview is quite interesting, but the following stands out. Upon asking which Murdock thinks is better, Debian or OpenSolaris, he answers:
I think they’re both good for different reasons. One of the advantages of Debian is it has a huge ecosystem of packages around it, so just about anything you could possibly want is just an app to get installed away. OpenSolaris has some of this functionality, like ZFS and D-Trace, that Debian – or no Linux distribution for that matter – has. So it all depends on the application environment.
Which happens to be a whole lot of wisdom and clarity put into a single answer. Something for us OSNews editors and readers to think about.
I had a similar experience. Nice install, no networking. There was some oddly-named service running, like “automagic internetworking wizard,” which I had to track down and kill before the network configuration panel was allowed to function. Once I got it on my local network, though, it still couldn’t find the Internet.
With Nexenta apparently backing away from having any graphical functionality, OpenSolaris will be the OS I’m watching for my eventual home server replacement, especially once ZFS supports arbitrarily removing drives from a zpool. Cobbling together leftover storage without risking the data will be a killer feature.
One of the best things about the OpenSolaris liveCD environment is that it runs a hardware support app, so you can see at a glance what will work in the OS. I tried it in all my systems, including my MacBook. There was always something that didn’t work. Solaris needs more drivers and an intuitive networking system, but otherwise I’m impressed.
There are two ways to configure a network in OpenSolaris: one is nwamd, and the other is using the graphical network tool once nwamd is killed. To get networking to work with nwamd enabled, you have to modify /etc/resolv.conf so that it lists the correct DNS servers. In my installation on a Pentium IV rig, the DNS servers were wrong and /etc/nsswitch.conf had not been modified to use DNS as well as files. Edit /etc/resolv.conf to add the right DNS servers (if they are wrong), and in /etc/nsswitch.conf make sure the hosts entry reads “hosts: files dns”. Once this is done, you should be able to access the Internet.
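The steps above can be sketched as a couple of shell commands (the nameserver addresses below are placeholders from the documentation range; substitute your ISP’s real servers):

```shell
# Point resolv.conf at working DNS servers
# (203.0.113.1/2 are placeholders; use your ISP's servers).
cat > /etc/resolv.conf <<'EOF'
nameserver 203.0.113.1
nameserver 203.0.113.2
EOF

# Make host lookups consult local files first, then DNS, so the
# hosts line in /etc/nsswitch.conf ends up reading: hosts: files dns
perl -pi -e 's/^hosts:.*/hosts:      files dns/' /etc/nsswitch.conf
```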
Why not just fix it so it works as reasonably expected?
They wanted something in OpenSolaris that works most of the time automagically without user intervention. That’s NWAM. The problem is, the UI side’s not done yet.
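For those who would rather configure the network by hand in the meantime, NWAM is an SMF service, so the usual approach is to switch off its instance of the physical-network service and re-enable the traditional one:

```shell
# Disable automatic (NWAM) network configuration...
svcadm disable svc:/network/physical:nwam
# ...and enable the traditional, manually configured instance.
svcadm enable svc:/network/physical:default

# Verify which instance is now online.
svcs network/physical
```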
As a long time Linux fan… I understand. They’ll get there. Solaris paved the server highway for us. Maybe we can return the favor on the desktop highway.
Actually it does work, but it depends on DHCP which I do not use. So, OpenSolaris tries to come up with a configuration that will work using what it finds in the initial network probe. It found my router just fine and assigned an appropriate IP address for my network, it is just that the DNS servers were for hr.cox.net, not east.cox.net.
My solution was to modify resolv.conf and nsswitch.conf so I could get a working configuration, which I know works with Solaris/Solaris Express and now OpenSolaris.
This was the first OpenSolaris release I could seriously consider. The previous ones (Nexenta, Belenix, etc) wouldn’t even boot on any of my machines. This one booted into a very beautiful 1500×1050 GNOME desktop on my Compaq Evo N610C laptop. Even the wireless card with encryption worked beautifully – that was truly amazing. The only downer was no mouse – say what?? I have a pointing stick and a touchpad, take your pick. Neither worked. For fun, I tried turning off first the pointing stick, then the touchpad in the BIOS. I thought maybe it was confused by both being present. No luck. The only way I could have proceeded was with an external mouse – I didn’t even try, since I do not want to go that route. Funny to me that all the “hard” things worked, like high-resolution video and PCMCIA wireless with WEP. But a little thing like the mouse didn’t.
But it is definitely a vast improvement, and will probably be going on one of my desktop machines. For now, the N610C is a sweet OpenBSD 4.3 machine.
Not to make fun of anything at all, but I had a similarly peculiar experience with FreeBSD PPC on my Apple Powerbook 12″…
Everything worked but the keyboard and touch pad…
This was due to these parts being connected via an old type of interface, the Apple Desktop Bus, for which no drivers were available, as opposed to the USB used on all other Powerbooks…
With a USB keyboard and mouse attached, everything worked like a charm …
Somewhat off topic, but these are the little things that serve to frustrate people, so I hope it’ll be fixed in Solaris!
.. outside of Richmond.
OpenSolaris uses more than 768 MB of RAM after booting into GNOME. It may have great features, but it is not yet ready for the real world.
The ZFS ARC is the new Superfetch.
Try to understand this:
If your memory isn’t used by applications, ZFS uses it for caching.
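You can watch this happening yourself: the ARC exports its counters through kstat. A quick sketch (statistic names per the arcstats kstat; all sizes are in bytes):

```shell
# Current ARC size, its target, and its configured maximum (bytes).
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
```

As applications demand memory, the `size` value shrinks back toward the target; the cache gives the RAM up rather than starving programs.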
Linux has that philosophy, and I think it is a good one. But what I keep hearing about ZFS is: “Don’t use ZFS with less than 2GB of memory, or on 32 bit hardware. And if you do try it don’t complain”.
I find that shocking. I have used OpenSolaris 2008.05 and it performed just fine. But I have 2GB of RAM and 64 bit hardware. I would never have expected to have to worry about filesystem memory requirements, for the gods’ sake!
Edited 2008-05-15 18:13 UTC
That’s unreasonable, Steve. You’re making it seem as if ZFS is some sort of another FAT32 – but that’s ridiculous. It offers A LOT of advanced features, and those features come at a price. You are always free to choose another less advanced filesystem.
No. I was reporting my perceptions as to what I hear about ZFS, and also my experience with ZFS, which happens to be at the recommended hardware level.
I’ve certainly read my share of “Linux is a memory hog” posts from people coming from Windows and looking at “top” output for the first time. I’m receptive to explanations of ZFS’s memory and processor recommendations, or clarifications as to what the recommendations actually mean.
Let’s see… I get 300MB/s from my Linux softraid on a 32bit kernel. Combined with XFS and LVM the only downside I have is having to type a couple more commands to do the same thing.
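For comparison, the “couple more commands” on the Linux side look roughly like this (the device names /dev/sdb–/dev/sdd and the RAID level are assumptions; on Solaris a single `zpool create` covers all of it):

```shell
# Software RAID5 across three disks (device names are assumptions).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put LVM on top of the array and carve out one volume.
pvcreate /dev/md0
vgcreate data /dev/md0
lvcreate -l 100%FREE -n vol data

# Format with XFS and mount.
mkfs.xfs /dev/data/vol
mount /dev/data/vol /mnt/data
```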
I’d say Solaris and ZFS have a long way to go.
EDIT:
That reply should have gone somewhere else, I apologize for the unwarranted flame.
Edited 2008-05-16 23:02 UTC
First, what kind of system are you using, how many disks are you using, and how are you measuring your disk I/O? Additionally, is that sustained or burst I/O?
I don’t know who said the 32-bit thing, because I’ve got it running on a Dimension 8400 with 2.5 GB RAM very nicely.
Kaiwai, are you serious about not having heard that? It’s all over the place. I, personally, doubt the claim. But whenever I see a complaint about ZFS performance, valid or not, someone, usually advocating ZFS, calls the poster foolish for trying to run it on 32-bit hardware with less than 2GB of RAM. Not the best advocacy strategy. But there it is.
BTW, the link to mplayer et al. for OpenSolaris from your blog helped make my stroll into OpenSolaris-land a more pleasant experience. Thanks for that.
-Steve
I never advocated such a position. I only said that the 32bit claim was questionable. The memory issue – I wouldn’t know, but from what I understand, the more memory, the better. I never called the person foolish either. So don’t make claims about me you cannot back up.
It’s interesting in that I came across it purely by accident. I’ve since kept a backup copy in case the patent holders start threatening the website maintainer.
Edited 2008-05-15 20:21 UTC
Kaiwai, I did not make any claims about you, and did not intend to. Please reread my post, considering it to be of a friendly nature.
Sun’s people themselves have said that ZFS is simply not designed to run on 32-bit systems. You may not experience any problems, yet, but that doesn’t mean that you won’t.
Also, the work the FreeBSD guys have done provides ample evidence that ZFS’s memory use will grow without bound under any task you throw at it.
If you read this interview, you’ll see your statement is correct:
http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=50…
Jeff Bonwick addresses the 32-bit issues in the interview.
As far as BSD is concerned, this thread shows that, with modifications to loader.conf, one user has been able to run ZFS on a 32-bit system with 1 GB of memory:
http://kerneltrap.org/FreeBSD/ZFS_Stability
This thread from opensolaris.org discusses the 1 TB filesystem limitation on 32-bit systems:
http://www.opensolaris.org/jive/thread.jspa?messageID=28095…
And finally this thread from opensolaris.org about ZFS on 32-bit hardware:
http://www.opensolaris.org/jive/thread.jspa?messageID=134690…
So it does work, as long as you are willing to live within the limitations of 32-bit memory utilization and 1 TB per ZFS filesystem. And for the people who commented on it, it works very well.
When someone says that, it’s referring to the recommendations for running traditional Solaris with ZFS, both in their intended scenario, the enterprise. Since those were the only real available “metrics”, they’ve been parroted all across the web.
As far as 64-bit goes, ZFS is faster in 64-bit mode because the various checksum and parity algorithms work faster in that mode. Remember that everything’s checksummed left and right.
I don’t see any significant memory utilization on my Pentium IV machine, and I created a ZFS mirror of the rpool (root) disk using the two 250 GB drives I have installed in the machine. Something I might have to investigate when I get home.
Well, the whole system was super slow. Starting simple applications took ages. I guess if you have less than 1.5 GB of RAM you can forget about doing anything useful with ZFS.
Unused RAM is wasted money. I prefer my RAM doing something useful, like a good ZFS file cache. That said, ZFS probably isn’t very helpful to any system with less than 1 GB of RAM, but RAM is cheap, so this isn’t a problem on any remotely recent hardware.
My position on this is probably unpopular, but here goes:
I do not care how much memory an OS appears to use, period, as long as it allows me to address what I’ve got.
I think this because:
* 8 GB of registered ECC server RAM is less than $750 these days, and 8 GB of desktop RAM is probably $200.
* Actually sorting out the difference is pretty tough; RAM-usage numbers for the OS are not directly comparable across Solaris, 2003, Vista, RHEL3, and RHEL4, because they report different metrics. Figuring it out is possible if you are very careful, but most of us looking at TEH FREE RAM!! don’t actually know what it means in a given instance, or how it would translate to another OS.
So I look at it this way: if you are not swapping, you are OK. If you are, and you can fit more RAM, get it. If you can’t fit more, well, that’s why they pay us the medium bucks; you need to change some element of the solution.
What does upset me is the Microsoft solution, artificially capping RAM addressing on 32-bit platforms. Asking people to pay an extra $3k for Enterprise, just to turn on PAE and address the RAM they already own, is asking too much for my taste.
OK RAM misers, I’ve vacated the soapbox, now flame away like it’s 1995!
It depends a lot upon how hard money is to come by.
But:
What color is the sky on your planet? This is not an “unpopular position”. It is an “unrealistic assertion”.
Still blue, last I checked.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134632
But you are correct, $200 is unrealistic. NewEgg wants $37 each for those 2 GB sticks…
Just how are you measuring how much memory is being used? Try this (as root) to see your actual utilization:
mdb -k
At the > prompt, type ::memstat and you should see a display like this:
robert@apophis:~# mdb -k
Loading modules: [ unix genunix specfs dtrace cpu.generic cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs random ip hook neti sctp arp usba uhci s1394 lofs audiosup sppp ptm ipc ]
> ::memstat
Page Summary        Pages     MB   %Tot
----------------  -------  -----   ----
Kernel              96724    377    30%
Anon                38710    151    12%
Exec and libs       11827     46     4%
Page cache           1229      4     0%
Free (cachelist)     5139     20     2%
Free (freelist)    171633    670    53%
Total              325262   1270
Physical           325261   1270
>
To exit mdb, use Ctrl-D. The results I just posted are the current state of my Gateway laptop with an Athlon 64 CPU and 1.25 GB of memory.
My experience with it puts me in the “needs work” category. It boots into a very nicely done desktop, but it still left a lot of things that should work automatically for me to configure. It does show a lot of promise, but when I can install Ubuntu or PC-BSD with virtually zero configuration, I just can’t see wasting a hard drive on OpenSolaris.
That being said, I intend to follow it and when they come up with a new release I will take another look at it. I do like keeping up with what is going on in the OS world.
Any opinion/review of OpenSolaris 2008.05 as a server, not just as Yet Another Desktop? It should shine much brighter in that role, shouldn’t it?
What do you want this server to run?
Web servers, for example… what about scalability and thread-concurrency performance? Is it better than recent 2.6 Linux kernels?
That would depend on several factors, including how Apache was compiled, the quality of the network, and the speed of the storage the content is on (I am assuming a web farm as opposed to individual web servers). Are these servers going to be sending data to a backend database?
Unfortunately, there is no simple answer to your question without asking more questions to narrow down what you are specifically looking for.
When you take a snapshot, it freezes the root filesystem at the block level. If you patch the system, the writes go to a new place on the disk (that’s what makes the snapshot possible), so everything is still intact. This way you can roll back easily if the patch messes something up. When you boot Solaris, you can choose which snapshot to boot from; they are all presented in GRUB. In a sense, you have version control for the root filesystem. If you get a virus, you just roll back to an earlier snapshot. You have, of course, several snapshots: one with the fresh install, and so on. And you can snapshot each filesystem individually, for instance the root filesystem or /usr. Rollback of the root filesystem is a killer feature.
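As a sketch, the snapshot-before-patching workflow looks like this (rpool/ROOT/opensolaris is the default root dataset name in 2008.05; adjust if yours differs):

```shell
# Snapshot the root dataset before patching.
zfs snapshot rpool/ROOT/opensolaris@pre-patch

# Inspect available snapshots.
zfs list -t snapshot

# If the patch misbehaves, roll the dataset back.
zfs rollback rpool/ROOT/opensolaris@pre-patch
```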
I have a Pentium 4 and 1 GB RAM. I have four Samsung drives, 500 GB each, in a ZFS raid. I get like 20 MB/s. That suxx. I have been told that with a 64-bit CPU it gets a lot faster. Here is a guy with a 64-bit CPU and 2 GB RAM achieving 120 MB/s on a ZFS raid:
(I cant find the link right now. Will post it later if someone asks me for it)