After weeks of negotiations, IBM reportedly is eyeing a $9.55-per-share buyout for Sun Microsystems, according to a report in The Wall Street Journal. Such a price would value the deal at roughly $7 billion and offer Sun investors nearly double the price of the stock before reports surfaced earlier this month that the parties are in buyout talks. A report in The New York Times, meanwhile, notes the parties are discussing a purchase price of $9.50 a share. In either case, Sun’s investors haven’t seen the hardware maker’s stock trade at those levels since August. Last spring, Sun was trading at a 52-week high of $16.37 a share.
I hope Java, Solaris and OO find better homes.
-Ad
It appears that Sun has been eclipsed…
1 – Sun partner program is hard to get into
2 – Do you know of any partner company that sells hardware to the home user? The company I attempted to set up with Sun hardware fell over because of…. well, their Singapore call centre, sorry…. Open source applications are free to include in installed OEM products without asking every developer for permission to include them for free….
3 – I live in New Zealand; all the Sun partners are impossible to work with unless you're a big company.
Generally slow; you have to have a full account to buy or even inquire about the price of products. Suppliers like Ingram Micro are really good, but you can't buy direct.
and to think Sun was just starting to get its act together. I was really hoping that this was all a big spoof, but it would make total sense for IBM to buy Sun (though many things in Sun's product line will suffer; remember when Adobe bought Macromedia? *sad*). I am surprised that regulators and the DOJ would allow this, though. This would essentially leave IBM as the only real vendor in the high-end Unix market (HP's numbers are in constant decline year after year).
(stands up with his drink to give a toast) To Sun. For all the times you made the right move, but just slightly too late. I dub this the "Icarus event", in which Sun itself finally falls into the sea. Goodbye, old friend.
I think the big problem with Sun is that they’ve struggled to monetise their assets. Java, OpenSolaris, Netbeans, MySQL, OpenOffice are all great products and are awesome open source projects. Pity that they haven’t been bringing in much money, which is what you need to achieve at the end of the day if you wish to stay afloat.
I hope that IBM will continue to develop and grow these projects.
Having been involved in a few purchasing decisions in my time, I can tell you a little secret about Sun's failure.
In older times, companies like SUN could afford open source. They sold their amazing and powerful hardware, and they didn't mind throwing in any software for 'free'.
Well, today everything is a commodity. You can get pretty good Intel chips for much cheaper. *** we went with an Intel-based solution from IBM instead of the more expensive one from SUN. All the open source stuff is available for Linux as well… so why buy SUN?
When things are a commodity, you either make the commodity or you become some kind of solution provider (that's IBM). SUN's support was basically non-existent for us.
Well…….they obviously haven’t got their act together, have they? For me, the writing was on the wall ten years ago because they just didn’t react to or accept the way things were going to go. Unfortunately, Sun’s pig-headedness came down through every one of their consultants I ever tried to have a conversation with.
Sun have had a very bad quarter, and time is really of the essence for them because, certainly in the current climate where credit is impossible to come by, they are not very far away from running out of the cash they need to keep themselves going.
Mate, for me it was around the same time. I wanted to purchase around 30 servers a couple of years ago, x86-based ones. I rang up Sun and asked how I would go about ordering some. I was given the run-around, told I needed an account, then told I needed to go to a reseller. As soon as you as an organisation say anything other than "that is great sir, may I get your details and what you'd like to order", you have instantly lost me as a customer.
I don't give a toss about how you do things: when I ring up an organisation, I expect service and I expect it immediately; I don't expect to be shunted halfway around the globe because the organisation can't be bothered doing a decent job. If you aren't going to take my order, then obviously I am not important enough as a client, and I'll take my business to an organisation that will serve me.
It's funny: I was approached by email by the head honcho in New Zealand over my experiences with Sun. It's amazing how executives never get back to you once you show their organisation up for the pathetic and disorganised edifice to stupidity that it really is.
Sun and IBM are NOT a good fit. There’s a lot of product duplication and the cultures of the two companies just plain don’t mesh. Sun is a true open source company, much like Red Hat or Canonical. Everything they do is open source, even their CPU designs. IBM, on the other hand, is a “use open source just enough to comply with the license” company along the lines of Apple. Yes, they contribute a lot towards Linux, but only because the GPL forces them and because Linux is, quite frankly, a better Unix than AIX by this point. No “core” bit of IBM technology is open source. There is no OpenNotes or OpenMVS and CERTAINLY no plans for a ready-to-go open source version of POWER or z10 in VHDL. I suspect that if IBM buys Sun, then that will mean the end of MySQL, Solaris and any sort of open Java, all of which would be tragedies, especially a reclosing of Java.
I think you are missing the point of “open”. Of course, Sun *was* careful to require copyright assignment so that they *could* sell the sweat right off the backs of their contributors. So arguably their contributors were missing the point of “open”, as well.
Yeah, well – notice out of all those companies, the only ones making serious money are IBM and Apple. The others… not so much…
That should clue you in on the viability of an open business model. Free software may be good for you – but it’s not so good for the companies who have to absorb the cost of providing it.
What about Red Hat?
Also, although I do not know a whole lot about how Sun does its business, Sun's business was failing long before they jumped headlong into open source. Their main business is their hardware and services, and they haven't made it easy for customers to acquire that hardware.
Large companies that previously used UNIX are willing to fork over the big bucks for Red Hat Network and 24×7 support. Red Hat is consistently the #1 rated IT company out there for value, especially because the cost for Linux is still less than proprietary UNIX.
In addition, and very smartly, Red Hat allows the proliferation of CentOS (and Fedora) because it knows that emerging players – many of whom may have standardised on a Red Hat ‘compatible’ platform – will upgrade to a paid version when their systems become mission critical.
References:
http://www.internetnews.com/bus-news/article.php/3792946/Linux+Subs…
http://www.redhat.com/promo/vendor/
I think I see a couple of scenarios coming out of the impending merger:
1) Java gets a huge boost, and movement towards a truly open source version of Java gets a major boost. IBM has put a lot of money and resources into Java, and it would be nice to see IBM really open the doors. I know Apache has had major issues with validating Java due to Sun dragging its heels in regards to testing kits. I suspect that will end very quickly once IBM takes over. I think Netbeans is toast, though. I personally like it much better than Eclipse (which is too heavy and too bloated for everyday development), but IBM isn't about to drop their Eclipse work.
2) MySQL – IBM needs something that will lead people into DB2. MySQL will do that quite nicely. It also allows them to beat back Oracle, which has quite a presence in a lot of Sun shops. Add in the SPARC-to-x86 translation software (which Sun bought a couple of years ago), and IBM has a nice way to transition people off SPARC boxes and onto x86 and Power systems running MySQL and/or DB2.
3) Solaris – I know IBM has AIX, and I know virtually nothing about it. However, Sun has put a LOT of hard work into Solaris, and it has some excellent features you can't find anywhere else. I personally would like to see IBM phase AIX out, replacing it with Solaris, but who's kidding who? I suspect what will happen is that IBM will not drop Solaris support, but will slowly transition some of the outstanding features of Solaris into the Linux world. I mean, what would happen to the btrfs efforts if IBM open sourced ZFS on Tuesday? Their creds in the Linux community would advance by leaps and bounds, and Oracle (a major proponent of btrfs) would get a nice "thanks for playing" smile.
Hardware: Cannibalized and sold off to competing vendors to mitigate the 'monopoly' claims. SPARC production would probably be handed over to Fujitsu, and IBM would try to transition everyone off the SPARC chips onto standard x86 or Power. What happens to Niagara II/III is anyone's guess.
Not a lot. If someone decided to port (and they probably would) then ZFS might get accepted into the kernel, and have its own directory. But there is no way in hell that the “rampant layering violation” that is ZFS is ever going to be a first class FS in Linux. It just brings too many fundamentally bad ideas with it. Best to take the time to do it right and put the right pieces into the right layers.
Sun did ZFS the way they did and now Sun is dead.
Just because we had to use volume managers for many years does not mean we need them in the future.
Or are you layering your main memory as well?
Oh, and it’s funny that btrfs does also do “rampant layering violations”…
You are not making any sense. Layering relates to code. I’m not even sure what you mean by “layering memory”.
Your btrfs point is a reasonable one. Yes, btrfs subsumes some of the DM layer. It’s one of those things that makes me cringe slightly. But it’s a limited layering violation relative to what ZFS does. And I guess I’m content to take a chance and see what happens.
I think it’s fairly certain that ZFS will make its way into Linux if IBM buys Sun. However, because ZFS has some pretty hefty memory requirements (Sun itself recommends 1 GB or more), it won’t be quite a replacement for ext3/4, but possibly for btrfs.
And it does not occur to you that there is something fundamentally *wrong* with a *filesystem* coming with hefty memory requirements?
Not necessarily. If a system with sufficient memory is used and the features of ZFS provide a significant benefit, then there’s nothing wrong with it. It’s just not a file system that’s applicable to all situations.
Not if one of the primary concerns of the machines set up is data integrity.
If you’re building a games machine, then you don’t want or need ZFS. If you’re building a file server then you do.
But isn’t that more of an argument for ZFS’s processor requirements rather than its memory requirements? My impression is that all the checksumming and self-healing is processor intensive, and not memory intensive.
Aren't the memory requirements mostly due to the way it wants to subsume the VFS and VM layers into itself? And aren't its problems manifestations of the layering violations it commits?
ZFS runs nicely on my eeepc. There’s certainly not a very fast CPU in it.
Also, there are no layering violations. Until now, there was just one unneeded layer too much.
Please elaborate. I’m listening.
Sorry, I don't see your point.
If you’re building a games machine, then both processor speed and system memory are critical.
I don't find it to be too bad.
Sure, ZFS scrubs noticeably slow the system, but at least they can be performed on a live system, whereas fsck can't (you have to mount the file system read-only).
Plus it's not hard to cron the scrubs to run overnight when the system is likely sitting idle.
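For anyone who hasn't set that up, a minimal sketch of what it might look like in root's crontab; the pool name "tank" and the schedule are purely illustrative:

# run a scrub at 02:00 every Sunday (minute hour day month weekday command)
0 2 * * 0 /usr/sbin/zpool scrub tank

You can then check on it from a live shell at any time with "zpool status -v tank", which reports scrub progress and any errors it repaired.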
I don’t know about that.
I do know that many of the features I never saw myself using in ZFS have since made me wonder how I coped before ZFS and those features existed.
ZFS (and its tools) are so easy that keeping data clean and maintaining the zpools and physical disks becomes child's play.
I know I'm preaching like an over-zealous fanboy, but I can't imagine a life without ZFS now hehehe
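For readers who haven't touched the tools, a rough sketch of what that "child's play" looks like; the pool name, disk devices and dataset name are invented for illustration:

# build a raidz pool out of three disks and carve out a dataset
zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
zfs create tank/home
zfs set compression=on tank/home

# take a snapshot and check pool health
zfs snapshot tank/home@friday
zpool status tank

Each of those is a single command, with no separate partitioning, volume management or mkfs step.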
I think you are missing my point. I can agree that improving data integrity is a good justification for the kernel to use more resources. But it is unclear how that relates to ZFS's obscene memory consumption. It should only result in more processor consumption. So we are still left wondering why ZFS requires so much memory.
Like I say, we need a tool that manages filesystems, LVM, partitions, etc. that easily. It can be done. And without putting so many design mistakes into the kernel as ZFS does.
I’ll take ext4 any day. I have never seen any truly compelling features in ZFS. Except for the admin tools. And that could be fixed in Linux a lot easier than porting ZFS to it. Why it has *not* been fixed in all this time, I don’t know.
"I think you are missing my point. I can agree that improving data integrity is a good justification for the kernel to use more resources. But it is unclear how that relates to ZFS's obscene memory consumption. It should only result in more processor consumption. So we are still left wondering why ZFS requires so much memory"
I don't really think the ZFS team has focused a lot on optimizing RAM usage. SUN traditionally makes big servers; the desktop market is not SUN's main target. If you make big servers and big iron, is it not reasonable to expect at least a few GB of RAM in the server? You know, ZFS is targeted at enterprise servers, not desktops. ZFS aims to surpass Lustre, NetApp, etc. SUN's new storage server, the 7000 series FishWorks that targets NetApp, does not have just 512 MB of RAM (though people do run ZFS on 512 MB of RAM). It is basically a PC + lots of SATA disks + SSD cache + ZFS + DTrace. And it kicks butt. The entry-level 7000 server has 8 GB of RAM, and the largest has 128 GB. I don't… really see the need to focus time and effort on optimizing RAM usage when you target the enterprise.
Why don't you complain about ZFS's preference for 64-bit CPUs as well? When designing ZFS, they chose to focus on 64-bit and not worry about 32-bit:
http://queue.acm.org/detail.cfm?id=1317400
"but as we were developing ZFS, I basically said, "Look, by the time we actually ship this thing, this problem will have taken care of itself because everything will be 64-bit." I wasn't entirely off base."
They knew that 64-bit CPUs would be dominant soon. And they knew that more than 1 GB of RAM would be standard in enterprise servers.
The mistake you are making is that you think ZFS is for the desktop. It is not. It is for enterprise servers with tens of GB of RAM.
"Like I say, we need a tool that manages filesystems, LVM, partitions, etc. that easily. It can be done. And without putting so many design mistakes into the kernel as ZFS does."
Feel free to share with us the "design mistakes" ZFS makes. I really would like to know about them. If you claim that ZFS has design mistakes, then you should be prepared to prove your claim. Right?
“I’ll take ext4 any day. I have never seen any truly compelling features in ZFS.”
You clearly have never tried ZFS. Here is just one compelling feature: unbreakable upgrades and rollback:
http://www.osnews.com/post?n=21250&r=357084
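The mechanics behind that, sketched with generic commands on an invented dataset; OpenSolaris wraps the same snapshot and clone primitives up in its boot environments:

# snapshot the dataset before making changes
zfs snapshot tank/home@before-upgrade

# ...apply the upgrade or change...

# if it goes wrong, roll the dataset back to the snapshot
zfs rollback tank/home@before-upgrade

# list which snapshots exist
zfs list -t snapshot

Because snapshots are copy-on-write, taking one and rolling back to it are both near-instant, regardless of how much data the dataset holds.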
And also, ZFS is safer than hardware RAID. ZFS is not subject to silent corruption. CERN did a test where they ran 3000 rack servers with Linux and hardware RAID; after two weeks they found 152 instances of silent corruption. The hardware never reported these errors, nor detected them. Previously, CERN thought everything was fine. ZFS is the only FS that has end-to-end data integrity. That is the reason to use ZFS. All the other stuff, snapshots, rollback, performance, etc., is just icing on the cake. A modern disk has so many bits that some of them can never be corrected, and only ZFS can detect and repair that. Read the very first link here, and see the importance of ZFS-esque filesystems.
Well, if Sun and the ZFS devs can’t be bothered with efficiency concerns, then I suppose that is their business. But speaking of business, and in case you have not noticed, Sun as a company is dying. They have frantically looked about for a buyer. And it looks like they might have found one. I guess technical excellence understandably got trampled in the internal financial panic.
But absolutely no one in this discussion has pointed to a reason that ZFS is such a memory hog. Lack of optimisation? Please. You’ve got to *work* to find a way to make a *filesystem* hog that much memory.
Would someone please explain why ZFS hogs so much memory?
That might be because no-one in this thread has found it to be an issue apart from yourself. And as you've already said, you don't care much for ZFS's excellent extended features, so I don't really see why you care so much about its memory constraints either.
Besides, I'm pretty sure you can configure ZFS not to do a number of the fancy bits and pieces in order to reduce the memory footprint. It is highly configurable.
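A few of the knobs the poster is presumably thinking of, set per dataset; the dataset name is invented, and whether any of these actually helps depends entirely on the workload:

# cache only metadata (not file data) for this dataset
zfs set primarycache=metadata tank/scratch

# skip access-time updates and compression
zfs set atime=off tank/scratch
zfs set compression=off tank/scratch

# confirm the current settings
zfs get primarycache,atime,compression tank/scratch

The system-wide cache cap is a separate tunable, sketched further down the thread.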
Now you can just stop right there. Memory consumption *is* a common complaint about ZFS. I do think that ZFS has some desirable features. Dedication to data integrity is near and dear to me. (I suspect I'm persona non grata in the ext4 sphere over my comments on its recent issues.) But despite my asking repeatedly, all respondents have dodged that question in one way or another. I am not attacking ZFS. I am comfortable with ext3/4 for the moment. And I do have a few issues with ZFS's design philosophy. But my concrete question, which no one has deigned to answer, is "where is the memory going in ZFS?".
I wish some ZFS advocate would simply answer the question instead of dodging it or claiming that it was not an issue.
Or perhaps some people are simply advocating it for inappropriate use cases?
I can’t speak for the others, but the reason I’ve not answered it is simply because I don’t know the answer to that.
In fact it’s something I’ve often wondered myself.
That might be the case.
ZFS would like to use a lot of memory, true. But it doesn't require it, and it doesn't use it to implement some feature; it wants to use it for caching data. Out of the box, ZFS sets its maximum cache to 7/8 of your total memory. Yes, you can tune this to a lower number, but unless you know your application very well and do extensive real-world benchmarking with it, you usually end up making the system slower. If you have 8 GB of memory in your server, ZFS will use 7 GB of it for caching your data. No, ZFS doesn't demand it, it requests it, and the OS is free to take it back for other uses should it decide that your application would benefit from the memory more than ZFS would.
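For completeness, the tunable being referred to on Solaris/OpenSolaris is the ARC size cap; a minimal sketch, with the 2 GB figure purely illustrative:

# in /etc/system, then reboot:
set zfs:zfs_arc_max = 0x80000000

# afterwards, check the live ARC size (in bytes):
kstat -p zfs:0:arcstats:size

Lowering the cap trades cache hit rate for free memory, which is exactly the trade-off the poster warns about.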
Of course, some would say that this is too much memory. But remember, every block of data that is retrieved from cache instead of disk is 1000x faster, so it is a big win for the user. Even if ZFS keeps the cache in memory instead of an application, it's still a big win: a typical hard disk can pull in 50 MB of data in the time it takes the user to switch to a new window and type a word.
ZFS is very good at prefetching data and tracking what is cached. Not only can it track multiple streams of data being read at once and prefetch accordingly, it also tracks blocks that were evicted from the cache and then needed again, to better guess what data is going to be needed soon and keep it in the cache. Most other filesystems aren't smart enough to see multiple users streaming more than one file; they treat that traffic pattern as random and don't prefetch the data for them.
ZFS itself is large by kernel standards, perhaps a few MB of kernel memory, but these days that is not an issue, and no, you can't pick and choose ZFS features to disable in the hope of saving space. Everyone who complains about its memory use is reacting to its caching of data, and most of that comes from users who are used to not seeing how much data the filesystem is caching. In top it's usually just an entry in the system memory line, "cached: 50000k", and nobody cares, but when they see it associated with a filesystem, everyone panics.
If you are using ZFS on Solaris, you can use the cool new tool fsstat to see just how fast ZFS really is; because of the extensive caching of data, traditional tools like iostat aren't effective at monitoring how much data it is serving. I typically see 200 MB/s from my ZFS fileserver (4x 500 GB SATA disks in a raidz pool), and I have seen numbers as high as 500 MB/s, which is close to the maximum streaming bandwidth of the hard disks typically seen in artificial benchmarks.
Yes, ZFS + Solaris needs about 1 GB minimum to be happy, but that is about $10 in today's market; if you don't have at least 1 GB of RAM, go collect aluminum cans and buy some memory, and ZFS will use all it can get. I have heard of people being happy with 512 MB, which is possible if you have low needs and a high tolerance for sluggishness. But even with all this, ZFS delivers features that set it apart from anything else out there; once you get used to snapshots and clones you will wonder how you ever managed to live without them.
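If memory serves, fsstat reports per-filesystem-type activity rather than per-device activity, which is why it sees the cache hits that iostat misses; typical invocations look something like:

# one-second samples of zfs operations and bytes read/written
fsstat zfs 1

# all filesystem types, five-second intervals, three samples
fsstat -F 5 3

The interval and count arguments are optional; without them you get a single cumulative report since boot.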
People report that ZFS runs fine on PCs with 512 MB of RAM; I've seen several reports of that. Typically, they use a 32-bit CPU and 512 MB of RAM. On a 32-bit CPU, ZFS is dead slow because ZFS is 128-bit. That is a more serious complaint. I suggest you complain about why ZFS is dead slow on 32-bit CPUs instead? It achieves something like 20 MB/sec. I've used ZFS on a 32-bit CPU + 1 GB of RAM for one year; the reason was I wanted my data to be SAFE. Speed was not the issue. If I can choose between fast and unsafe (risk of silently losing data) or slow and safe (data is safe), I, and all server admins, would choose the latter. Your preference may vary.
I knew I was going to build a PC with a quad core and 4 GB of RAM, and then I just moved my ZFS raid into the new PC without any problems. Meanwhile, my data was safe for a year with my 32-bit CPU. (In fact, I can move my ZFS raid into a Mac OS X server, or a FreeBSD machine, or a SPARC machine (with different endianness). Try that with a hardware raid.)
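The mechanics of moving a pool between machines, for anyone curious; the pool name "tank" is invented:

# on the old machine, before pulling the disks
zpool export tank

# on the new machine, after attaching the disks
zpool import tank
zpool status tank

ZFS records the byte order its metadata was written in, which is why the same pool can be imported on Solaris, FreeBSD or Mac OS X regardless of the host's endianness.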
If ZFS using 512 MB of RAM is your main concern, then the enterprise file system ZFS is probably not for you. I suggest you use a desktop FS instead, like ext4, which you regard as superior to ZFS:
http://blogs.gnome.org/alexl/2009/03/16/ext4-vs-fsync-my-take/
Regarding ZFS being an enterprise FS and using lots of RAM, I promise you, for a server admin that is of no concern, because their data is SAFE. What would a server admin pay in resources to make sure his data is not subject to silent corruption? Would 512 MB of RAM and some CPU cycles be too much? If that is too high a price, then your needs and a server admin's needs are different.
You know, if you look at the specification of a normal new disk, it says "unrecoverable read error: 1 per 10^15 bits". Check that for yourself.
Here he talks about how RAID-5 stops working in 2009 because disks are getting larger and larger, and therefore the error rate increases from negligible to common:
http://blogs.zdnet.com/storage/?p=162
You SHOULD be concerned. (Unless you use a 32-bit CPU and 64 MB of RAM together with 10 GB disks; then you will never see bit errors.)
You know, ZFS handles these errors, silent corruption and other bit problems. You need end-to-end integrity to be able to handle these kinds of problems.
You have certainly heard of ZFS on FreeBSD needing several GB of RAM. It does not on Solaris; there ZFS needs 512 MB. That is not too much RAM to fix all the problems with silent corruption. Here a FreeBSD developer talks about why ZFS needed so much RAM on FreeBSD:
http://queue.acm.org/detail.cfm?id=1317400
“The biggest problem, which was recently fixed, was the very high ZFS memory consumption. Users were often running out of KVA (kernel virtual address space) on 32-bit systems. The memory consumption is still high, but the problem we had was related to a vnode leak. FreeBSD 7.0 will be the first release that contains ZFS.”
I understand where the rumours came from: the buggy FreeBSD implementation. But that does not mean the rumours apply to Solaris too.
And please, enlighten us on the ZFS design mistakes, as you have talked about them several times. Are you just bullshitting, as Mr Moron does (he can never prove his claims: no links, no nothing), or do you have some hard facts to show on this "design mistake"?
Seriously, I don't see anyone using a Unix seriously on a 64 MB RAM machine. Why the fuss? Isn't 512 MB of RAM the minimum on Unix servers? You mean 64 MB of RAM is standard on Unix servers? I don't understand you.
PS. You could also complain that the enterprise server OS Solaris requires at least 512 MB of RAM (which happens to coincide with ZFS's minimum requirement of 512 MB).
Isn't it absolutely shocking that an enterprise server OS targeted at big iron requires at least 512 MB of RAM? I bet the IBM mainframe OS z/VM requires at least 2 GB of RAM to boot up. Isn't that shocking? That must be because of "design mistakes", right? Bad design, that is the IBM mainframe OS's problem.
Maybe not for home machines, but if you have a server then I’d argue you’ve not given ZFS a fair trial run.
Snapshots and raidz alone are killer features, in my opinion.
In addition, while I am sure we will probably see Linux handle ZFS in-kernel in some fashion at some point, I just wonder how ZFS will fit code-wise into Linux as a 'Linux' filesystem. Having the code is one thing, but making it work is another.
Certainly, and I don't care how people try to paint it, the picture of ZFS is that it tends to grow unbounded with any workload that you throw at it. ZFS should be absolutely ideal for all those cheap network NAS boxes that currently run Linux, and people should be falling over themselves trying to use ZFS on them, but they're not, because you'll never get it to run properly on those things. That should tell you something.
Well, obviously it would be a sizable chunk of code, bringing its own VFS and a big chunk of its own VM subsystem in with it. Personally, I’m about as excited about it as I was about XFS. Probably less. Another big white elephant under the /fs directory which will never quite fit in.
This is not to say that ZFS is not a good filesystem for an OS whose devs have decided to commit themselves 100% to it as the OS’s standard fs. But that is never going to happen for ZFS in Linux. And I’m glad for that.
But someone really does need to write a command-line tool that manipulates all the layers, from fdisk to DM to LVM2 to the filesystem, as seamlessly as the ZFS tools do. And it needs to become ubiquitous on Linux systems. It is really embarrassing that this has not happened yet.
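Just to make the comparison concrete, roughly what the two workflows look like today; device, volume group and pool names are all invented for illustration:

# Linux, the long way round:
fdisk /dev/sdb                      # partition each disk interactively
pvcreate /dev/sdb1 /dev/sdc1        # mark the partitions as LVM physical volumes
vgcreate data /dev/sdb1 /dev/sdc1   # group them into a volume group
lvcreate -L 400G -n home data       # carve out a logical volume
mkfs.ext4 /dev/data/home            # put a filesystem on it
mount /dev/data/home /export/home

# ZFS equivalent:
zpool create data c0t1d0 c0t2d0
zfs create data/home                # created and mounted in one step

The point isn't that the Linux stack can't do it; it's that no single tool presents it this simply.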
You call 1GB of memory a “hefty” requirement? Then please stay on your 6 year old toy.
Memory is cheap today, and it makes sense to use it.
Yes, I would call a 1 GB memory requirement for a file system hefty. There's no other file system for general operating systems that I'm aware of that requires such a memory footprint. UFS, ext2/3 and most others have operated in 64 MB and even less.
Does that mean it's unreasonable to demand 1 GB? Of course not. But there are plenty of systems that do, and need to, operate with less than 1 GB of RAM dedicated to the file system (desktops, embedded systems, VMware images, older laptops, etc.).
Calling systems "toys" simply because they don't have gigs of excess memory to hand to a file system they can't even take advantage of (over traditional file systems) is just plain rude.
I think you're confusing "minimum spec" with "recommended spec".
ZFS will run on a lot less RAM. It'll even run on 32-bit CPUs. It just prefers more RAM and 64-bit.
It's been pre-configured to run on higher-end systems, so that's what Sun recommends. If you plan to run it on a lower spec, then please feel free to configure it yourself. However, if you've only got 64 MB of RAM, then I suspect you're not building a machine that needs a sophisticated file system anyway, so it's a somewhat moot point.
Sun's *minimum* spec is 784 MB, and it highly recommends 1 GB.
http://docs.sun.com/app/docs/doc/819-5461/ggrko?a=view
I’ve actually got it running in a 512 MB VirtualBox OpenSolaris image, and performance is terrible compared to other operating systems running other file systems. I suspect much of that is ZFS related. Eventually I’ll reinstall with UFS as the default file system. I’m not doing anything on this system that would benefit from ZFS (and I suspect that’s true for a lot of people who install it as the default).
As a result, it’s not a great choice for a lot of systems. But an amazing choice for some.