“A week ago we reported that a second preview release of Project Indiana, Sun’s attempt at creating an operating system for the desktop based upon OpenSolaris and led by Ian Murdock, was on track to be released in the near future. Thursday afternoon that became true with the test image surfacing for Developer Preview 2 of Project Indiana, or what will formally be called OpenSolaris. Officially, this new release is known as the OpenSolaris Developer Preview 1/08 edition. The general availability release of Project Indiana is expected in March, but today we have up a tour of this new Indiana release.”
http://www.thecodingstudio.com/opensource/linux/screenshots/index.p…
I am a bit perplexed by the desktops in the Unix world. I get that there are very good ones, like Gnome, which I like very much, but the many, many distributions of Linux, FreeBSD, and now Solaris that want to make desktop systems all seem to provide basically the same bloody desktop experience. Where is the differentiation? Where is the actual innovation? What makes OpenSolaris truly unique enough as a desktop OS for me to want to use it instead of Fedora, or Windows, or OS X?
Should I even expect such differentiation? Should I expect a unique experience, or are we working toward an OS world in which they are all the same and the only reason I choose one over the other is politics and religious zealotry? (I know people who are lovers of Ubuntu and turn into vicious monsters at the mention of Mint, but why?)
I really am perplexed by all these distributions…
The difference is the quality of the integration and the superiority of the underlying operating system. Even if all the operating systems used the same GUI, if the underlying operating system is crap, the experience for the end user will be consistent with the inferiority of the underlying core.
In the case of Solaris, it's wonderfully scalable, snappy, and reliable, with great technologies like ZFS, a lightning-fast filesystem. Yes, there are issues which need addressing within Solaris, but I do feel that the foundation of Solaris is a lot more stable and well respected than the alternatives, given its well-engineered basis.
Innovation and differentiation takes work
Software developers glob together into groups that do what they are good at. The kernel people stick to the kernel, the gui people stick to the gui, and the big picture distro building people stick to the big picture.
So the big picture people just use what the gui people make, and right now the body of gui people support only two or three major desktop environments.
The big picture people just take and use one of these products almost exactly out of the box, and why? Because they don’t know how to do anything else. They aren’t really gui people.
A really great big picture project will have really great gui or kernel specialists working for them to do the innovation and differentiation that other projects lack.
I like to think that all GUIs should more or less behave the same. Like all word processors and all spreadsheets, etc. The ideal would be if there were only ONE GUI that all operating systems used. And only ONE word processor, etc. That is the meaning of STANDARD. Instead of several competing technologies with incompatible programs, there would be only one technology.
So, there are other advantages of Solaris, mostly great innovative technology that no other OS has. DTrace, for instance, is something that has NEVER been done before. This is new and revolutionary. Read, for instance:
http://www.theregister.co.uk/2004/07/08/dtrace_user_take/
“I looked at one customer’s application that was absolutely dependent on getting the best performance possible. Many people for many years had looked at the app using traditional tools. There was one particular function that was very “hot” – meaning that it was called several million times per second. Of course, everyone knew that being able to inline this function would help, but it was so complex that the compilers would refuse to inline.
Using DTrace, I instrumented every single assembly instruction in the function. What we found is that 5492 times to 1, there was a short circuit code path that was taken. We created a version of the function that had the short circuit case and then called the “real” function for other cases. This was completely inlinable and resulted in a 47 per cent performance gain.
Certainly, one could argue that if you used a debugger or analyzer you may have been able to come to the same conclusion in time. But who would want to sit and step through a function instruction by instruction 5493 times? With DTrace, this took literally a ten second DTrace invocation, 2 minutes to craft the test case function, and 3 minutes to test. So in slightly over 5 minutes we had a 47 per cent increase in performance”
The whole article is an interesting read.
Or how Solaris is more stable than Linux:
http://www.lethargy.org/~jesus/archives/77-Choosing-Solaris-10-over…
“Just to be explicit: on the same hardware, solaris 10 fixed your corruption/read-only /data problem?”
“Yes. Same exact hardware. We reinstalled Linux twice even to make sure there wasn’t something wrong with the install. I’ve had lots of other people chime in reporting very similar problems.”
There are lots of other examples on other new technology in Solaris, ZFS for instance.
The point is, if Solaris has been good enough for enterprise business for the last few decades, then it is certainly good enough for my needs. I don’t have to relearn a new, better OS. Solaris will do. It ends there.
> The ideal would be if there were only ONE GUI that all operating systems used. And only ONE word processor, etc.
Is that really the ideal? And if so, is it achievable enough to talk about in those terms? Because last I checked, even packs of chewing gum don’t all have the same user interfaces.
I really do think that would be less than ideal. Possibly even damaging.
Not everyone wants to work a computer in the same way. That’s why GUIs are different. Some people like a simple stripped down interface which favors keyboard shortcuts. Some people prefer larger more complex beasts like KDE.
If everyone preferred to work the same way, then you wouldn’t hear so many flamewars about Vista, OS X, KDE, Gnome, et al.
I feel you’re taking the term ‘standard’ slightly out of context here.
You can have a standard document format (say RTF for simplistic reasons) but competing word processors running on competing platforms with radically different GUIs can all still read and edit the same file regardless of how they launch applications, what the widgets look like or even what word processor they use.
Final point:
The whole point of GUIs is to make life simpler. If you force people to use a GUI which is counter-intuitive to that particular person, then ultimately they’re going to struggle more than if you gave them a text console and told them what words to key in.
Maybe you are right and I am wrong. However, by standard I mean things like: right mouse click for this, menus at the top, etc. How the GUI looks is a different thing. You wouldn’t like a GUI where the right and left mouse clicks were interchanged, or where there were no menus at all and it instead showed a film to choose between options. So all GUIs are more or less the same; they only look different. But imagine if a GUI were so good and natural that no other GUI was needed. There would be only one GUI on all OSes. Wouldn’t that be simpler? If there were lots of different word processors with different file formats, that would be a pain. That is why we are trying with OpenOffice on all OSes. You shouldn’t have to relearn. That’s the point.
I still don’t think that’s to anyone’s advantage. Some people find right mouse clicks confusing (too many buttons with hidden menus) and prefer the Mac’s simpler click-and-hold approach. Personally, I find the Mac method a handicap.
Not every person is wired the same way, so why should everyone be forced to use the same interfaces?
Another classic example is text command shells versus GUIs. I’m quite at home with a command line prompt, whereas many (most, even) people I know hate them and avoid them wherever possible. I’m quite happy to do basic text editing in Vi (in fact I prefer it at times), whereas some people find that interface a complete pain.
When building interfaces for humans you have to remember that there is no such thing as a “one size fits all” as no matter how well you design something, someone will prefer to interact differently.
Also, from a technology perspective: a little competition between the various interfaces is what pushes the technology forward. Do you think Vista would have looked the way it looked if Apple hadn’t developed OS X? And do you think Aqua would even have existed if someone had sat Apple down and told them, “You can build OS X, but it has to be similar to the Windows GUI and KDE”?
Yay! Imagine a world where the only editor would be vi!
/me leaves planet Earth.
You seem to be under the mistaken impression that the desktop (i.e., Gnome, KDE, etc.) IS the operating system. Nothing could be further from the truth. The window manager, Gnome in this case, is merely a tool by which you can access the underlying powerful Solaris OS. The fact that Solaris, UNIX, BSD, and Linux all utilize similar desktops is an advantage in my opinion, as users will feel immediately at home on any one of them and can become productive faster.
You must lead a hard life if choice confuses you so much. How do you handle the many different rock bands? singers? Jeans brands? Washing powder brands? TV channels?
I think his point is (and I hope he corrects me if I’m wrong) that it’s a pity that Project Indiana ships with Gnome when Sun had a chance to ship their own desktop manager and offer an experience different from that of Ubuntu.
While I agree with the others that the underlying technologies make more of a difference than the GUI, I do have to agree with him that with so many OSs and distros around, it’s a pity they all offer the same 2 or 3 desktop environments.
However, his point is a double-edged sword as, from a n00b’s perspective, *nix is a scary enough beast without throwing in a new interface with each distro.
Choice CAN BE quite daunting. It all depends on the personality, and the product.
And choice has a price of complexity. Look at the common criticism of KDE vs Gnome. KDE has a zillion options, and Gnome doesn’t. KDE is therefore more complex than Gnome.
Compare a local hamburger stand that simply sells burgers, fries and drinks with someplace else that’s selling burgers, Mexican food, fish, salads, fries, zucchini, onion rings, etc. If you know what you want, the menu doesn’t matter (you simply need to find it). If you don’t know what you want, what some see as choice others see as a bewildering list of options. I can guarantee that you will make your choice faster at the hamburger stand than at a place with more options on the menu, simply because you have less to winnow out of the equation.
See, when there is too much choice, there is actually more opportunity to make the WRONG choice than the CORRECT choice. This happens when you don’t have a clear sense of what your requirements are and, as a corollary, what the limitations are.
If you’re presented with a selection of 20 different digital cameras, is that an easy choice? Not really, not without knowing the details of each camera. And the more options you have, the more data you must sift through. Most people “just want a good camera”; they don’t want to become “camera experts”.
Truth is, most consumers want SOME choice, but not everything. They don’t want to go into the store and see 100 digital cameras all in the same market segment (meaning they all cost roughly the same and match 80+% of the bullet points with each other). It’s bewildering and stressful; again, too much opportunity to make the wrong choice.
Rather they’d prefer to defer to “experts” to make the first cut. Experts being “top 10 lists”, “what the store is selling”, “highest rated on amazon”, whatever.
FYI,
preview 2 has not been released yet.
A test image for preview 2 was released early for community developers to report problems before the preview 2 release.
Please remember that this is an alpha of a beta
Hi,
I’ve been using OSOL/Solaris for quite some time, and I applaud those who have made it possible. The progress is amazing, and I think you’ve piqued a lot of people’s interest.
That said, I really hope you can release a version of OSOL that has working wireless on the majority of laptop adapters. Specifically, Intel wireless devices.
Support has been moving along; b79 added support for iwi + WPA (my situation). However, from what I have read, due to export restrictions or some such, the crypto code necessary for anything beyond RC4 isn’t included, so it’s very unlikely you’ll be able to connect to a WPA-secured network.
Now, from what I’ve read, b82 integrated the necessary code to make wpa “work”, so to speak. Again, this is only based on what I have read.
All of this said, when should I hope to be able to use wireless on wpa protected networks? Will it be functional in the “release” of OSOL/Indiana in March? The reason I ask (aside from the obvious desire of mine) is simply due to what I feel would be a big letdown for potential users who have picked up on the buzz. I’ve already talked a few buddies into checking out Indiana, but they all came back to me a day or two later to tell me they couldn’t get wireless working, so they threw in the towel.
I really want OSOL to succeed, and I really feel this “sore” point (which seems to have already been sorted from what I read) needs to be fixed.
That, and the rendering in Firefox. Yikes, talk about jaggies.
Thanks again for all of your great work, if you continue at the pace you’re going, I think things are going to work out just fine!
Cheers,
David
I don’t want to start a flame war, but what advantages will Indiana have over Linux for desktop and multimedia users?
Does jackd work on OpenSolaris? If yes, does it work better than on Linux? Linux can’t give me low latency; at least, when I tried to mix some previously recorded tracks of a song with some LADSPA plugins, even for playback it wasn’t a solution that replaces Windows or Mac OS X.
Also, is ZFS “better” than XFS to store large files such as raw audio and video?
How does the scheduler perform for desktop applications?
The hardware support *seems* fine; NVIDIA works on both 32 and 64 bit, both GNOME and KDE work, and Xorg works.
If you want to store large raw media, then ZFS is a good choice.
Say that you do 100 edits on a 15GB raw sound file on ZFS. Then ZFS will save just each tiny difference; it won’t save the whole 15GB file anew each time. And you can always go back to the unmodified file, or to any step. It is true versioning. Say each edit takes 10MB. Then you will have occupied 15GB + 100 times 10MB = 16GB. And you can go back whenever you want. And the sound program will believe there are 100 files, each 15GB large, on the disk. The program won’t see any difference between a diff file and the original file. It is totally transparent to the program.
With a normal traditional file system (ext4, XFS, etc.), you must save the file anew each time. With 100 edits, you have occupied 15GB + 100 times 15GB = 1,515GB ≈ 1.5TB. That’s quite a lot.
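The space arithmetic above can be sketched in a few lines of shell. The figures (a 15GB file, 100 edits, roughly 10MB of changed blocks per edit) are the hypothetical numbers from the posts above, not measurements of real ZFS behavior:

```shell
#!/bin/sh
FILE_GB=15    # size of the raw sound file
EDITS=100     # number of edits
DELTA_MB=10   # assumed changed data per edit

# Copy-on-write: the original plus only the changed blocks.
COW_MB=$((FILE_GB * 1024 + EDITS * DELTA_MB))
echo "COW total:       ${COW_MB} MB (roughly 16 GB)"

# Saving a full copy per edit: the original plus 100 full copies.
FULL_GB=$((FILE_GB + EDITS * FILE_GB))
echo "Full-copy total: ${FULL_GB} GB (roughly 1.5 TB)"
```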
And if you value your data files (automatic checksumming, data integrity, etc.), then ZFS is preferable.
I’ve heard that ZFS can detect and correct problems no other file system or hardware RAID can. Can anyone confirm this?
That’s pure orgasm! It’s something I seriously want to try! Now that Sun is making Solaris easier for other users, maybe some users will try it with Indiana.
Ok, now I’m going to try it on real hardware!
Yes, that is quite cool. The transparency applies to all types of files, for instance Linux. Say you install Linux in a virtual container in Solaris, then you lock that install via a snapshot. Now you can install Oracle on top of that, but only the differences (i.e., Oracle) will be written anew. Now you can deploy several instances of Oracle running simultaneously, in 1 sec each. If you delete one instance, only that instance is deleted. The original is still intact. If you edit the Linux install, it will save just the diff in a new place, and the original install will be intact. This feature is called snapshot.
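A minimal sketch of that snapshot-and-clone workflow, using the standard ZFS administration commands. The pool and dataset names (tank, tank/zones/linux, and so on) are made up for illustration, and the commands need a live ZFS pool and root privileges:

```shell
# Freeze the base Linux install in a snapshot (near-instant; no data copied):
zfs snapshot tank/zones/linux@base

# Each clone is a writable view of that snapshot; only the changes an
# instance makes (e.g. the Oracle install) consume new space:
zfs clone tank/zones/linux@base tank/zones/oracle1
zfs clone tank/zones/linux@base tank/zones/oracle2

# Destroying a clone removes only that instance's differences;
# the base snapshot stays intact:
zfs destroy tank/zones/oracle1
```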
Alas, there are not many, if any, sound programs for Solaris that I know of. If there are open source sound programs, you could maybe port them. Apple Mac OS X has ZFS in it, but not all features yet. Apple has sound programs?
If you want to download and play with ZFS, I recommend OpenSolaris, a.k.a. “Solaris Express Community Edition”; build 81 is the latest, I think. That is the next-gen Solaris in beta stage. But it is very, very stable. Some say it is more stable than Linux. The original Solaris 10 has a quite antique feeling. It is enterprise and no fancy stuff. OpenSolaris reminds me of a modern Linux like SuSE or RedHat: my nvidia card got detected automatically, sound too, network, etc.
Here is an interesting read and videos on ZFS:
http://www.infoworld.com/infoworld/article/07/06/07/23TCzfs_1.html
Except it’s a fake orgasm
I think the poster above misunderstood ZFS’s Copy-On-Write.
Quite the contrary: in order to do COW, ZFS must write more blocks than traditional filesystems when some blocks need to be updated.
Please explain how I have misunderstood Copy On Write. I claim that if you use snapshots (i.e. COW) then ZFS will save the difference, not the whole 15GB file anew. The difference tends to be tiny compared to 15GB. The difference will not always be 10MB either; that was just for illustration. The main point is, ZFS saves the difference and not everything anew.
You mean, I am wrong in this aspect? Where am I wrong? Please point it out.
> if you use snapshots (i.e. COW)
Firstly, you didn’t mention snapshots at all in your original post.
Secondly, COW is how ZFS updates blocks – with or without snapshot – COW just makes snapshot much easier to implement as a feature.
Snapshot must be taken explicitly by the user.
ZFS is not a “Time Machine” by itself.
Depending on how the editor works and whether snapshot is used explicitly, ZFS may do more or less I/Os than traditional non-COW filesystems.
For example, if the editor only updates 10 blocks in the existing file, non-COW filesystems would write exactly 10 blocks at their original locations, while ZFS will write 10 blocks at new locations plus their parent blocks (because the checksums changed), and the parents’ parent blocks, up to the top block. In reality, writes are consolidated into transaction groups and written out to disk periodically; otherwise there would be too many writes in ZFS.
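As a toy illustration of that write amplification, here is a back-of-the-envelope calculation in shell. The numbers (10 updated blocks, a 4-level block-pointer tree, no sharing of parent blocks between leaves) are made-up worst-case assumptions, not measured ZFS behavior:

```shell
#!/bin/sh
UPDATED=10     # leaf blocks the editor changes
TREE_DEPTH=4   # assumed depth of the block-pointer tree above the leaves

# A non-COW filesystem overwrites only the changed blocks in place.
IN_PLACE=$UPDATED
echo "in-place writes: $IN_PLACE blocks"

# COW writes new copies of the leaves plus, at worst, one chain of
# parent blocks per leaf all the way up to the top block.
COW_WORST=$((UPDATED + UPDATED * TREE_DEPTH))
echo "COW worst case:  $COW_WORST blocks"
```

In practice, as the post notes, updates are batched into transaction groups, so parent chains are shared between leaves and the real number lands well below this worst case.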
If you take a snapshot before such an update, the only difference for ZFS is that all the old blocks are kept; it does exactly the same amount of I/O as it does without the snapshot. I don’t know how ext4 or XFS implements snapshots; maybe they’re not as efficient as ZFS in this case.
If the editor uses temporary files and creates a new sound file when editing finishes, there will not be much difference between ZFS and non-COW filesystems: all of them must write all 15GB of data again to a new file, and this is the same with or without a snapshot, because we’re writing to a new file.
Maybe that’s how you understand it too.
BTW, can you explain how snapshots are implemented in ext4/XFS? I won’t be surprised if it’s less efficient than ZFS; I just want to learn.
Yes, I know all that about ZFS. I have been teaching for 14 years, and I’ve learnt one thing: hide details.
It is best to make someone curious enough that they want to examine the details themselves. The serious student will do so. For the casual user, it is less optimal to tell all the details. Otherwise I would have provided command line examples and cut & paste from the manuals. No. Instead I’ve provided links for the serious student to look things up.
There he will learn that sometimes ZFS actually doesn’t free any space when someone deletes files in a snapshot, and that the feature I talked about is called “snapshot” (and as you seemed to know about snapshots, I’ve correctly named them when talking with you). The video on the link shows how easy ZFS is, by showing some actual commands.
But in effect I am correct; with ZFS you CAN save only the differences, if you take a snapshot first. Yes. And if the sound program makes an internal copy first, well, not all editors do that. There will be a few cases where you lose with ZFS, but many more cases where you win with ZFS. This feature is not fake. There will be cases where you lose; the worst case scenarios for ZFS would be like a normal filesystem plus a few bytes of extra overhead. But from you, it sounds like you mean I am lying, that the features I talk of are fake. I suggest you read the link I provided about ZFS, and see if ZFS sucks so badly as you imply. Let anyone read and judge themselves if ZFS is right for them.
I have set up Linux in Solaris on ZFS, and cloned Linux. If I boot a Linux clone, all the changes to Linux config files are saved in the clone and the original Linux is not affected (this of course implies that the original Linux is saved somewhere, hence COW). BTW, for those nitpickers: I know the correct name is not “Linux clone”, it is “SCLA”, or Solaris Containers for Linux Applications. I don’t think it adds much to the discussion if I write the correct name; given the nature of my post, it is evident I am aiming at the Solaris newbie, and neither should I provide CLI examples or other details. The thing I want the newbie to remember is this: “it is possible to run Linux on Solaris via virtualization”. Yes, it requires use of SCLA. Yes, you must first set it up. Yes, you must first download Linux, it is not included. Etc., etc.
“Your post is fake, you can not run Linux on Solaris right away, you must first download it and install it, then invoke some CLI.” Geez.
Ok? My post is for the Solaris newbie, and as such I don’t want to confuse them by drowning them in details. There are a few things I want them to remember: “ZFS can save only the difference”, “It is possible to run Linux on Solaris”.
As for ext4, XFS, etc., I don’t know. I am a Solaris guy.
“But from you, it sounds like you mean I am lying, that the features I talk of are fake. I suggest you read the link I provided about ZFS, and see if ZFS sucks so badly as you imply.”
– come on now, I didn’t think you were “lying”, and I didn’t say ZFS “sucks so badly”.
It’s not personal, it’s about facts and details, I learn through these discussions.
Please point out which statement I made about ZFS is incorrect, and correct me. Or, where did I mislead your students?
He goes wild over the fact that ZFS is able to save the differences, which ZFS can do (provided you format your slice with ZFS, provided you do snapshots, provided you are not running the Mac OS X ZFS version, etc., etc.).
And you write that is fake:
“Except it’s a fake orgasm. I think the poster above misunderstood ZFS’s Copy-On-Write. Quite the contrary, in order to do COW, ZFS must write more blocks than traditional filesystems when some blocks need to be updated.”
It seems I have misled and posted lies. It seems that typically, ZFS uses up more space than a traditional file system as ext4, XFS, etc.
I would accept if you wrote something like:
“But sometimes, in rare occasions, that is not true. Then ZFS will actually use more disk space, albeit the ZFS overhead will be very small.”
That is correct. But stating that “it is fake” is not correct in general. It is correct sometimes, not in general.
Why don’t we just let people try ZFS themselves and read about it, and let them find out themselves who is correct, you or me? That would be the easiest, and we can end this discussion.
Only if there’s redundancy. If you don’t have physical redundancy, you can still use ditto blocks and have ZFS keep up to three copies of each data block. Ditto blocks are a filesystem-level property. The checksumming obviously helps detect errors that neither the disk nor the controller sees, that they don’t bother reporting, or that the operating system itself doesn’t care much about (Windows shat up my event log in the past about controller errors due to a faulty cable, but didn’t even bother notifying me directly or making sure the data was correct).
Don’t sweat the term “filesystem”; in ZFS this is merely an abstraction for a place to dump files. It’s cheap. Personally, I have 26 of these filesystems.
Snapshots and other properties, like compression, are also on filesystem level.
I think most people will use ZFS with redundancy? I’ve heard, for instance, that the write hole error(?) that hardware RAID can suffer from cannot happen with ZFS.
Here ZFS detects a faulty power supply:
http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
As for Solaris schedulers, there are several of them, and you can change the scheduler during run time, on the fly. (Also, as Solaris is very mature, there are likely far fewer problems than Linux has.)
ZFS uses metaslabs and a 128K record size. It has a funky I/O scheduler. It does read-ahead and prefetching. Watching videos, the filesystem thinks it’s being funny by prefetching in 100+ MB chunks and more.
I don’t know what everyone’s hard-on for schedulers is, but Solaris has a special scheduler class for X applications. The least I can tell you is that I can watch videos without jitter and Compiz refreshes smoothly. The only real issue is Firefox messing with Xorg and introducing skips in rendering.
I tried Solaris 10 when it was released to the public. If you used dynamic ip addressing, it was very easy, but if you used static addressing, it was a huge pain to set up. For those of you who use or have tried a version of Solaris 10, or OpenSolaris, is setting a static ip address still painful?
Thanks.
I’m wondering what exactly you found painful. If I recall correctly, you are asked for an address during install.
Alan.
He might have meant changing an IP, and it may still feel hard, but JDS provides a GUI to do it.
From what I know, if you do it manually, it’s still a matter of creating a hostname.<adapter-id> file in /etc and then creating an entry in /etc/inet/hosts.
But there are GUI tools in Gnome that let you do it the easy way.
You can also do it during install, anyway.
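For reference, a sketch of the manual route mentioned above. The file paths are the standard Solaris locations, but the interface name (e1000g0), hostname, and addresses are made up for illustration; these are config fragments to be edited as root:

```shell
# /etc/hostname.e1000g0 holds the host name for the interface:
echo "myhost" > /etc/hostname.e1000g0

# /etc/inet/hosts maps that name to the static address:
echo "192.168.1.50  myhost" >> /etc/inet/hosts

# Netmask and default route:
echo "192.168.1.0 255.255.255.0" >> /etc/inet/netmasks
echo "192.168.1.1" > /etc/defaultrouter
```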
Just eyed through it. I wanted to read about package management and possibly KDE; nothing was there.