The Linux Foundation is organizing an end-user collaboration summit this week. A major topic will be a presentation on the two upcoming filesystems – Ext4 and Btrfs. Ted Ts’o, a Linux kernel filesystem developer on sabbatical from IBM to work for the Linux Foundation for a year, has talked about the two-pronged approach for the Linux kernel: taking an incremental path with Ext4 while simultaneously working on the next-generation filesystem, Btrfs. Read more for details.
Ext4, an incremental revision of ext3, is built from the same codebase and remains compatible with it while providing better performance and scalability, but there are limits to what can be done in a compatible manner. Btrfs is now a multi-vendor effort from Red Hat, HP, IBM and Oracle, pooling development costs, and will provide a number of additional features that require a fundamental redesign.
These new features are expected to include storage pools, writable recursive snapshots, fast file checking and recovery, easy management of large storage, proactive error management, better security, greater scalability and fast incremental backup. In reality, most users don’t have databases large enough to require some of the most advanced functions, but like so many technology battles, it comes down to bragging rights and engineering pride.
While Ext4 has long been merged into the Linux kernel as a development filesystem, it is getting closer to being marked stable and is beginning to see adoption by Linux distributions. Fedora 9 already includes ext4 as a technology preview, and Fedora 10 will make it simpler for end users to adopt, although ext3 remains the default for Fedora and most major distributions.
Andrew Morton, a core Linux developer, has indicated that he would like to see btrfs merged as early as 2.6.29, though it will be marked as a development or experimental feature for a while. This will allow btrfs to progress further as part of Linux kernel development, and it will likely become the default for major distributions at some point in the future.
Shame that license issues have blocked ZFS adoption in Linux distros. Typical that the Linux community is all about open source as long as it comes from them.
Not again…
Even if Linux never gets ZFS, I am still thrilled ZFS exists and is being developed and adopted by the BSDs and Apple. After all, if it weren’t for ZFS, Oracle likely wouldn’t have said “me too” and started development on Btrfs. Well, they likely would have, though I don’t think it would have been made as easily available. It could have easily become a good piece of Oracle IP. Glad it’s open, though.
That must be why I’m willing to stick to ext3/4 while I watch the development of btrfs.
All my zealous love for the GPL is blinding me to the truth that I truly, truly want ZFS for my servers.
The license issue with ZFS is Sun’s fault. They deliberately chose a license that’s incompatible with the GPL. That means that any combination of ZFS and the Linux kernel is impossible to distribute without violating both the GPL (on the Linux code) and the CDDL (on the ZFS code).
Did you really think that the Linux kernel developers were going to stop work, devote all their time to tracking down all of the previous contributors to the kernel, ask them for permission to change the license, and then relicense the entire thing just so we could use ZFS? Most of the kernel contributors were one-off contributors who left no contact details, and a few of them are even dead.
Besides, ZFS would never be acceptable for the mainstream Linux kernel anyway. Like Reiser4, it re-implements far too many other filesystem layers, like the block cache, and has rampant layering violations. Most of the improvements from ZFS should be implemented into Linux itself, so that all filesystems can benefit from them. Of course, doing it that way isn’t nearly as marketable – you can’t just slap a single name on the whole thing and sell it.
Hmmmmmm. Funny. My recollection is that Linux’s usage of the GPL pre-dates both ZFS and the CDDL, and Sun knew fine well what it was doing when it started using the CDDL for selected bits of software.
If Sun had used GPL, other OSes like MacOS, *BSD and QNX couldn’t have ported ZFS or other Solaris tech. Sun chose a more open license to spread the tech around. Too bad Linux users are so selfish.
BSD License…done
So what’s the problem with what they did with Java and dual licensing then? Sorry, but that doesn’t really wash.
No, they just want to keep code flowing into their project rather than out of it so they can keep the code open ;-).
Why doesn’t the Linux community dual license if it needs Solaris tech?
How did you come to that illogical conclusion? Real data shows Solaris IP is flowing very well to other projects. The license is far more open than the GPL.
Need it? When was it ever said that Linux needs anything from Solaris?
Linux has implemented a lot of Solaris technology in the past. The slab allocator for one. Such blind zealotry is bad form.
Implemented, yes. But I suspect it was more a case of “path of least effort within the bounds of the GPL” than the “we can’t do without it, and only Solaris can provide it” that the word “need” implies.
Come on, quit trolling. Just come out and say you hate the GPL and think Linux should switch away from it, because that’s clearly what you are arguing for.
As for why Linux isn’t dual licensed, there are a bunch of reasons. Some developers are opposed to non-GPL licenses. Some of them are dead. Some can’t be contacted. All the code they wrote would have to be rewritten just to change the license, and no one thinks that would be worth the effort when they can spend that same effort on improving what they have and writing alternatives. One of the benefits Sun has with its code is that it is the sole copyright owner, which means it is much simpler to change their licensing however they want.
I don’t give a rats ass what Linux is licensed under. Nor do I hate the GPL. I do have a problem with all the Linux zealots that think that everything must be licensed under the GPL so Linux can get the code.
Sun chose what it chose, other Open Source projects are benefiting from it just fine. How about the whining about licensing stops?
Then why have you made 9 posts in the last 12 hours on those themes?
This thread is nonconstructive and an annoyance to other readers.
Go ahead and list the number of times I said anything close to hating the GPL in any of the posts that were not a direct response to someone claiming Sun purposefully picked a license to be incompatible with the GPL.
Yes, it usually does when segedunum starts his incorrect, hate-filled ZFS responses. He already spoils Solaris-based articles with the hate.
Ha, ha, ha, ha. I’m sorry, but lots of people just don’t buy the hype of ZFS and aren’t going to move back to Solaris because of it. Deal with it.
Take a look at the first post on this thread:
Nuff said.
Please read the post that started this whole discussion. It’s the same one that comes up every single time there is an article about ZFS…
The only people whining about licensing are the pro-ZFS, Linux sux because it doesn’t have ZFS, linux needs ZFS crowd. They never seem to say much more than that, I guess they just assume that the necessity of having ZFS support is obvious. But it isn’t to most people.
Most people would agree that it would be a nice feature for Linux to have, but it’s pretty clearly something that isn’t hurting Linux adoption. The vast majority of Linux users are very happy with the current and upcoming Linux file systems, like BTRFS and Tux3, which plan to compete against ZFS. There are a number of legal and technical reasons it is hard to put ZFS into linux. Sun could solve most of these themselves, if they wished to. They apparently don’t, and that is fine.
So, let’s make this clear: I think ZFS needs Linux much more than the other way around. If Sun ever decides to make ZFS compatible, then good for them. Maybe ZFS will become more than a fringe FS. But I’m not counting on it, and I don’t really care.
You haven’t answered the question. Why did Sun think that GPL dual licensing Java was the correct option there, maintaining compatibility with lots of existing software with exceptions for where they needed it, and that a completely new license in the CDDL was required elsewhere?
Ha, ha, ha, ha, ha. I love the usage of ‘IP’, which is meaningless. Not the point. I’m not talking about code flowing from Sun but others’ willingness to commit code to Sun and Solaris. There’s no evidence that Sun is even accepting that if it is happening, and things such as the PowerPC port of Solaris tell us that an awful lot is being kept in Sun’s four walls.
Based on what? There is zero code flowing into Solaris from outside Sun. That’s when you know you don’t have an open source community. If anything, Sun is strangling it.
You didn’t answer my question either. Why can’t Linux dual-license when most projects out there do?
“Ha, ha, ha, ha, ha. I love the usage of ‘IP’, which is meaningless. Not the point. I’m not talking about code flowing from Sun but others’ willingness to commit code to Sun and Solaris. There’s no evidence that Sun is even accepting that if it is happening, and things such as the PowerPC port of Solaris tell us that an awful lot is being kept in Sun’s four walls.”
You mean to say Linus accepts every single line of code someone writes into the mainline tree? Really?
“Based on what? There is zero code flowing into Solaris from outside Sun. That’s when you know you don’t have an open source community. If anything, Sun is strangling it.”
That has nothing to do with the license. Do you even understand what you are saying yourself? That response was the most illogical thing I have ever heard.
Yeah, it’s a real shame you can’t dual-license (http://en.wikipedia.org/wiki/Dual-licensing) the code.
1) file systems do not follow Moore’s Law…
2) btrfs is designed as an FS for DBs… ZFS is designed as a multipurpose FS… Of course it can “leapfrog ZFS on several fronts” – surely, on DB-related fronts… that comment from Ts’o couldn’t be more irrelevant.
Truth is, everybody wants ZFS… at all costs.
Yeah, hate to be a hater, but that article contained a lot of nonsense. Why would Microsoft adopt ext4? Aside from lack of obvious need, NTFS isn’t even bad. Also, it doesn’t give any specifics on how btrfs will “leapfrog” ZFS, but if enterprise deployment is targeted for 2012 (I read this as inclusion and endorsement in RHEL), it sure as hell better (btr?) be better than ZFS, which was released and promoted in a production Solaris release in mid-2006.
Oh, and COW filesystems and databases are not a natural fit. Doesn’t mean it doesn’t work, but database is not the most obvious strength of a COW FS, which by design fragments files on modification. The article seems to imply otherwise, probably just because Mason works for Oracle. Oracle has their preferred storage solution already: ASM. Unless btrfs grows cluster capability, I don’t see why Oracle would promote it for databases at all.
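To make that concrete, here is a toy simulation (Python, purely illustrative; the allocator below is made up and has nothing to do with how Btrfs or ZFS actually allocate) of why rewriting records in place scatters a file’s blocks on a copy-on-write layout:

```python
# Toy model of in-place updates vs copy-on-write (COW) allocation.
# Hypothetical simulation only -- not how any real filesystem allocates blocks.
import random

FILE_BLOCKS = 1000     # logical blocks in one file, initially laid out contiguously
UPDATES = 500          # random record rewrites, a database-like workload

def extents(mapping):
    """Count contiguous runs in the logical -> physical block map."""
    runs = 1
    for i in range(1, len(mapping)):
        if mapping[i] != mapping[i - 1] + 1:
            runs += 1
    return runs

# In-place filesystem: a rewrite goes back to the same physical block.
in_place = list(range(FILE_BLOCKS))

# COW filesystem: every rewrite is written to a freshly allocated block,
# and the old block is kept around (that is what makes snapshots cheap).
cow = list(range(FILE_BLOCKS))
next_free = FILE_BLOCKS
for _ in range(UPDATES):
    logical = random.randrange(FILE_BLOCKS)
    cow[logical] = next_free
    next_free += 1

print("in-place extents:", extents(in_place))   # stays 1
print("COW extents:     ", extents(cow))        # grows with nearly every rewrite
```

Every rewrite leaves the old block behind and maps the logical block somewhere new, so a hot database file ends up in many small extents unless the filesystem defragments or batches updates.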
ORACLE is also working on CRFS, built upon BTRFS. CRFS uses the same disk format as BTRFS, but adds cluster capabilities.
Is it getting interesting for you?
You’ve wildly misinterpreted the article.
The article didn’t say that Microsoft would adopt ext4 or btrfs. They said that licensing issues would prevent them providing any sort of support for them. No shock there.
Wow, really? Where’s that 500 terabyte array running NTFS as its principal filesystem? That’s what the article is about and where they’re heading with these – if you’d read it properly. Not that many Linux filesystems can’t do that today, but some things could use improvement at that scale. Many things are ‘nice to haves’ for everyone else.
However, perversely, with that statement you have touched on why people aren’t as excited about ZFS as some people think that they should be.
ZFS is still largely a very unproven filesystem, regardless of when it was put into production. btrfs will probably remain so for many years as well, but it depends on how fast it develops. Additionally, there are some very large question marks over ZFS’s ability to be used as a filesystem from very small ARM NAS devices right up to the large systems ZFS is restricted to today. At the very least, btrfs is in the right testing environment for those sorts of things to be tried and tested.
The article mentioned the word ‘databases’ once, and didn’t mention copy-on-write at all (if that is indeed what COW is an acronym for). I’m not entirely sure how you’ve managed to extrapolate all that. The article implied nothing about copy-on-write and databases in the case of btrfs because it was never talked about. You can guess that large databases are one of the things they’ll look at though. EDIT: After the comment above I’d forgotten about CRFS.
I’m also not entirely sure why you are wandering off and questioning an Oracle engineer’s involvement in a fledgling filesystem, based on what you think Oracle’s overall strategy is, when it will probably be maintained and developed by lots of other interested parties as well.
Is there a point in there? What gave you the impression that ZFS can’t run on ARM? ZFS is endian-neutral and can be tuned to use less memory. For a NAS device, performance is not a key requirement, so the ARC can be tiny and you still get all of ZFS’s protection. There is nothing in ZFS’s design that prevents it from being implemented in a small NAS box.
Errrrrrrr:
1. It doesn’t today in any shape or form.
2. The chances of Solaris running on ARM are practically non-existent.
3. There just simply isn’t enough memory or resources on such devices.
4. No one is even contemplating it despite some source code kicking around, and the lessons learned from the PowerPC port should tell you that it will never happen in the usual open source fashion of picking up the code and compiling it.
How much less? Whatever way you cut it, ZFS needs at the very least two gigabytes of memory. That’s the bare minimum, and I really don’t care that certain people have been able to run it on a laptop for a day with less.
I doubt very much whether you could cut down ZFS memory usage and limits to way less than 256 or 128 megabytes and leave it running. ZFS has not historically been a happy bunny when it reaches various memory and other limits. I/O-wise it is pretty CPU intensive, which is understandable for what it does, but that really rules it out here.
In a NAS box with perhaps one or two disk drives you also don’t get any protection from ZFS. Try to understand that the reliability problems we have today need to be solved by our storage media and not by filesystems trying to jump through an awful lot of hoops. We’ve reached the end of the road on that front and I hope Btrfs understands that, although Btrfs is at least starting at the right time, with SSDs in their infancy.
Doesn’t prove that it can’t.
Why?
Memory is cheap. You can easily build a mini-ITX box with 2GB of memory for dirt cheap. Most commercial Linux-based NAS boxes are overpriced.
http://blog.flowbuzz.com/search/label/NAS
Irrelevant, again. You don’t need ARM in a NAS box. You can build an Intel-based one cheaper than buying a commercial one.
There is no reason to make ZFS smaller. 1 GB is recommended but memory is cheap.
Memory is cheap.
Wrong. Silent data corruption is bad, and making excuses for poor software is wrong.
Snapshots are easy backup for most home users. Checksums detect failing drives. Mirroring is easy as pie on ZFS using 2 drives. ZFS can use SSDs.
The article didn’t say that it did, but Moore’s Law has allowed people to do things with their storage devices where filesystems and storage containers like LVM and RAID haven’t quite kept up. That was the point the article was making.
The article didn’t say what direction btrfs would take, but database usage is probably just one of their use cases. It sounds like you’re already carving out a niche for ZFS……………..
Hmmmmmmm, no. Some people want to believe that, but it isn’t true I’m afraid. For the vast majority of storage uses in the world today, and when you look at reviews of OpenSolaris, no one is the slightest bit interested in ZFS or even aware that it exists. It makes certain things somewhat better, but Sun unfortunately don’t have the userland tools that would expose ZFS as something remotely useful for the majority.
The biggest drawback we have with storage today is the storage devices. To maintain the large amounts of storage that many people are using Linux filesystems like XFS for today, and keep it reliable, we need to ditch disk drives with lots of mechanical moving parts and make data integrity a better part of the hardware. ZFS hasn’t changed that fact at all and neither will btrfs.
In summary, no one is going to rush off their existing platform to get ZFS or btrfs (a select few might). Filesystems tend to change slowly, and adoption happens in the normal course of other things – normal iterative development and the price of upheaval. However, when compared with ZFS in Solaris, btrfs already has a head start even now in testing and development, in that it is developed within a kernel that runs on everything from small ARM NAS boxes to very large arrays. Its code will be scrutinised accordingly. ZFS isn’t going to have that kind of free testing environment until people start doing the things with OpenSolaris and its source code that are currently done with Linux.
ZFS testing/debugging is a little wider than that as it is also being used on FreeBSD.
EDIT: It will be used on Mac OS too, so the testing and use of ZFS is not even limited to Solaris/FreeBSD, and you can bet the GUI/tools used to manage ZFS on Mac OS will be more intuitive than those on Solaris (though there’s nothing wrong with command-line ZFS usage; it’s ridiculously easy).
Yes it has. XFS can’t detect bad hardware like ZFS can. Silent bit-rot and data corruption are common issues with hardware, and most Linux filesystems are piss poor at detecting them.
It does not matter how much you cram into hardware; there will always be bugs and errata that can cause all sorts of nastiness. Claiming anything else is silly, really.
Again, not true. ZFS is tested on a much wider range of platforms than you give it credit for; the BSDs and Apple are testing it, as are many, many people outside of Sun.
You miss the point sweetheart………….again. Detecting bad hardware is a problem of hardware and the current state of drive technology. Using ZFS isn’t going to change that situation, and all anyone is finding out as a result of using ZFS are Solaris driver problems ;-).
Feel free to furnish me with a list.
Currently, that is x86 only and preferably 64-bit if you value your data. The problem for Sun though is that little if any code is flowing back in Sun’s direction to help them maintain ZFS, so the quality control and shared development just isn’t there. If you ever wanted to know why Linux uses the GPL, that’s it.
You are far too dismissive of ZFS; I wonder whether, if they simply licensed it under the GPL without making any other changes to it, your attitude would change. The constant attempts to prove ZFS irrelevant or worthless invariably seem to come from hardcore Linux/GPL advocates. If ZFS, precisely as it exists now, came from the GNU/Linux community, all the other non-GPL open source projects and particularly the Microsofties would never hear the end of it.
License compatibility would make an in-kernel implementation of ZFS *possible*. But there would be many other problems. ZFS clashes rather violently with Linux kernel design philosophy. (“Rampant layering violation” was the term Andrew Morton chose for it while *praising* its feature set.) It’s hard to see ZFS making it into mainline in a recognizable form, licensing issues aside. Far more likely, ZFS concepts will make it into Linux via btrfs.
More generally, I’ve never felt that the GPL/CDDL licensing issues were that significant. The Linux and Solaris kernel internals are so different that the sharing and cross-pollination of *concepts* is more useful than the sharing of actual code.
Sorry, but no it wouldn’t. It has a great deal of useful features, but for a filesystem, it consumes far too much memory and CPU without some pretty damn serious tuning. It certainly isn’t a general purpose filesystem you can throw any workload at.
This is an article about possible future Linux filesystems (apart from ext4). Take a look at the first post ;-).
No, I’m sorry, but people just don’t get that excited about filesystems. ext4 and Btrfs are attempts at addressing some shortcomings, but new filesystems are not big-bang events and don’t generate the kind of automatic excitement that some people think ZFS should.
Linux had some excitement over Reiser4 and what might be possible, but the vast majority simply could not stomach yet another completely incompatible filesystem change and another reformat, no matter how great the new features were. Microsoft even had to make NTFS upgradeable from FAT, which just shows you. It’s the main reason why ext has become the pre-eminent filesystem in the Linux world, despite some shortcomings. It remains to be seen how Btrfs fares on that front and how long it will take to get a reasonable installed base, regardless of how good it might be in the future. Inertia is deep when it comes to storage.
Maybe the next time you post on Solaris articles you should too.
Yes they do.
http://blogs.smugmug.com/don/2008/10/10/success-with-opensolaris-zf…
No Miss Daisy, you are. Your statement is like saying “I have a cold and I can’t smell the catastrophic gas leak. Since I can’t smell it there is no problem”. Pretty foolish.
Err.. I really don’t care to.
You really need to provide some data to back up that claim.
You’re not going to be able to smell gas all the time sweetheart, and smelling the gas does not fix the problem. Fixing your gas system might be a far better option ;-).
He, he, he, he, he. Seriously. You don’t get that?
Hmmmmm, so ZFS is tested on a wider range of hardware than I gave it credit for?
Give me a list of outside contributors contributing source code to a central code repository, and that is going into Solaris today. I can’t provide you with such a list, and I don’t think you will care to either ;-).
Errr… not smelling the gas gets you killed… data corruption gets you fired or makes you lose money/business/customers. Duh! I hope you are not responsible for anyone’s data but your own. Your ineptitude could cause someone serious harm.
No. I don’t care to indulge your every whim.
See above.
While I still consider most of this thread to be a silly waste of everyone’s time… I do think the question is a valid one. IIRC, you were claiming that ZFS would be suitable for a small ARM-based NAS appliance. Yet you seem quite evasive regarding Segedunum’s question about what hardware ZFS has actually been shown to scale down to.
I do quite a lot of work with such devices running Linux, and so am familiar with the rather stringent hardware limitations. I don’t think there is a chance in hell of getting ZFS running on such a platform.
First, he was saying that ZFS has an inherent design flaw that makes it incapable of operation on an ARM based device. Do you agree with that claim?
When I asked him to explain the technical reason for that claim he changed the subject. Then he goes on to claim that because OpenSolaris has not been ported to ARM, ZFS can’t be tested on ARM and is therefore inherently inferior to BTRFS. But he still fails to provide any valid technical reason for his claim that ZFS will not work on ARM. A discussion of the ARM instruction set/pipeline that makes the checksums ZFS does infeasible, or something else impossible, would have sufficed. ARM is a 32-bit processor and can address 4GB of memory.
To refresh your memory here is how the thread went:
Segedunum’s:
His point is that since ZFS only exists in Solaris it won’t get wide testing and scrutiny.
I responded with:
I would say that was an apt factual response, no?
Then Segedunum asked:
The list here refers to my response above.
He then erroneously claims this:
http://blogs.sun.com/jimlaurent/entry/testing_macos_x_read_only
One of the comments has a person running it on OS X on PowerPC.
The only reason ZFS won’t work on ARM is because the OSes that support it won’t run on ARM. That’s a strawman Segedunum uses to make a non-point. You would need to sacrifice some caching and performance, but the vast majority of the features would still work. For the purposes an ARM chip would be used for, performance is not a major concern. If someone really wanted to get ZFS on ARM they would make the changes needed. Saying that ZFS has less testing than BTRFS because it doesn’t work on ARM-based systems is dubious. The bottom line is that ZFS isn’t on any OS that runs on ARM right now. Therefore the test results of a hypothetical configuration are a non sequitur.
None of the FSes discussed here – Tux3, btrfs, HAMMER – have been proven to do anything substantial yet and are effectively unusable for the vast majority of people. At their stage of development that is very understandable. I wouldn’t use that as an argument to say ZFS is superior.
It is unknown right now whether BTRFS will ever be used on ARM-based boxes, either.
Here is a long series of articles from a Linux guy who wants to set up a file server at home. He goes through all the alternatives and chooses ZFS, then describes his first steps into the ZFS world. Very thoroughly done and a good read.
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
Ok. So from the articles, we can say that ZFS scales all the way down to 4GB of memory and an AMD Athlon X2 BE-2350 dual core processor running in 64 bit mode.
No, people are running ZFS on 512MB boxes. People are also running it in VMware VMs. ZFS-on-FUSE is used on Linux; there are numerous articles if you search.
People put in a boatload of memory because ZFS performs best when it can cache a lot. I fail to see how you can misconstrue that to mean it needs a lot of memory. Less memory won’t give you the best throughput.
http://www.opensolaris.org/jive/thread.jspa?threadID=73990&tstart=-…
There are posts here where people have been using ZFS with 512MB.
“My home ZFS server runs with only 1 GB of RAM. It achieves 430 MB/s sequential reads and 220 MB/s writes. This is very good, given that its primary task is to serve large files over NFS.

“Speed” is another vague term. Do you mean throughput, or latency? Local I/O, or over NFS? Etc. FYI a small amount of RAM usually impacts random I/O workloads when they would otherwise fit in memory, but does not reduce the throughput of sequential I/O because prefetching algorithms work just fine as all they need is a few tens of MB of memory.

……….

As a matter of fact, until March 2008 I had been running snv_55b for over a year with only 512 MB to serve a 2.0-TB pool over NFS. Again, the performances (throughput) were very acceptable. If that’s what the OP needs, then 512 MB would be *technically* sufficient, even if given the current prices 1 GB would make more sense.

-marc”
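The prefetch point is easy to see in miniature: streaming data sequentially only ever needs a small, fixed window of memory, no matter how big the pool is. A toy Python sketch (not ZFS’s actual prefetcher; the window size is an arbitrary assumption):

```python
# Toy illustration: sequential streaming needs only a small, fixed readahead
# window, no matter how large the file or pool is. Illustrative Python only,
# not ZFS's real prefetch code; the 8 MB window is an arbitrary assumption.
import hashlib

WINDOW = 8 * 1024 * 1024   # readahead-sized chunks

def stream_checksum(path):
    """Read a file front to back using roughly WINDOW bytes of memory at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(WINDOW)   # memory use stays ~WINDOW regardless of file size
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# stream_checksum("/pool/media/some-large-file")   # hypothetical path
# A 2 TB file costs the same RAM as a 2 GB one here; only random-access
# workloads that would otherwise fit in memory really benefit from a big cache.
```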
As much as I am a ZFS fanboy, I doubt that you achieve 430MB/sec with 1 GB of RAM. I’ve run ZFS on 1GB of RAM for over a year without problems, but I’ve never got 430MB/sec. In fact, that number exceeds the SATA bus speed, which is 150MB/sec. It must have been a PCI Express or PCI-X solution, which is not common on home file servers. If you have that expensive a solution, then you can afford more RAM than 1GB. I therefore doubt that piece of information. Something is strange.
True, the SUN “Thumper” achieves 600MB/sec, but it has dual Opteron CPUs + 8(?) GB RAM + 48 hard drives.
I like ZFS too; I cannot comment on Solaris, but I have been following ZFS development on FreeBSD through the FreeBSD mailing lists, and everything I have read by both the developers and testers points to two important points:
1.) On FreeBSD ZFS is a memory hog; if you don’t have enough memory ZFS might exhaust kernel memory and lock up the system; this is why in FreeBSD-CURRENT they’ve had to increase the kernel memory limit and recommend you stuff as much memory in your system as possible
2.) Related to the above, it’s recommended that you use ZFS on a 64-bit architecture so it can address/cache much more memory
Why should the *filesystem* layer be able to do such a thing? See what kinds of problems show up when proper layering is violated? Packaging everything up in the fs layer, as Sun did, might have seemed convenient at the time, but the consequences of that decision have come home to roost… and will continue to roost.
Better to make the changes needed in the proper kernel layers. Do it right, or not at all.
That’s utter nonsense. FreeBSD locking up has nothing to do with ZFS and more to do with a bug.
You can sing and dance about layering all you want, but what you said is absolute bunk. Any kernel module, no matter how layered, can cause serious issues if it has bugs or exposes bugs in other layers.
ZFS does it right; that’s why many, many OSes are picking it up and Linux devs are trying to make similar filesystems. We’ll see, when the Linux FSes become production-ready, whether they do things better than ZFS and are even remotely as easy to use.
KERNPANIC,
It is strange that ZFS consumes so much memory on FreeBSD. On Solaris it doesn’t. I’ve run it for over a year with 1 GB RAM and a Pentium 4 @ 2.4GHz without any problems. You know, Solaris has a swap file that it uses when more memory is needed! :o)
And I notice that the swap file size immediately shrinks to zero when it is not used anymore, whereas Windows likes to use swap all the time, no matter how much memory you have free.
I’ve wondered about that too; I don’t know why the memory exhaustion problems don’t exist on Solaris (FreeBSD uses swap also, of course, but it’s a dedicated partition rather than a file). Are you using 32-bit?
Pentium 4 is 32 bit, yes.
And I meant, Solaris uses a swap partition, not a file.
Thumper can push 2GB/sec throughput.
http://milek.blogspot.com/2006/11/thumper-throughput.html
http://kevinclosson.wordpress.com/2008/10/05/i-know-nothing-about-d…
SATA ports also come in 3Gbps flavours and can do 300MB/s. You can have 8 SATA drives and easily achieve 400MB/sec reads. Of course, since ZFS caches a lot, the throughput number can be much higher than the raw hardware.
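The arithmetic is simple enough; here is a back-of-the-envelope check (Python, with per-drive figures that are rough assumptions, not benchmarks):

```python
# Back-of-the-envelope throughput check. The per-drive figure is a rough
# assumption for a 2008-era SATA disk, not a benchmark.
drives = 8
per_drive_mb_s = 55        # assumed sustained sequential read per disk
sata2_port_mb_s = 300      # a 3 Gb/s link is ~300 MB/s after 8b/10b encoding

aggregate = drives * min(per_drive_mb_s, sata2_port_mb_s)
print(f"aggregate sequential read: ~{aggregate} MB/s")   # ~440 MB/s

# 430 MB/s is only impossible if all the disks share a single 150 MB/s SATA-1
# link; striped across 8 drives on their own ports it is plausible, and reads
# served from the cache in RAM can be higher still.
```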
F***. You mean to tell me that you rely on your own nose to tell you that you have a gas problem, and you rely on that to keep the system going rather than finding and fixing the problem or getting yourself a decent gas system to start off with? How many problems that ZFS has caught over time have resulted in fixes and patches to Solaris?
In this case, Solaris’s drivers need fixing, because the vast majority of what ZFS picks up are invariably device driver problems when you get to the heart of it. Oh, if only Solaris had Linux’s device drivers and the wide range of testing that goes on in a true open source project where people get the code, compile it on lots of hardware and find problems.
I’d check yourself into a clinic before you have a very large explosion.
So Solaris runs on very, very few platforms, as does ZFS, and is highly unlikely to ever run on an ARM or PowerPC device? Glad we’ve established that you have no data and no evidence for a counter-claim whatsoever. You’re mighty glad to start demanding evidence from others when people start pointing out issues and your paranoia kicks in.
You have produced no such list; lying about it isn’t going to help you, and I can give you an answer: no one is committing to Solaris’s source tree because there is no open source repository and, as such, no open source Solaris project.
Difficult to face with someone of your obvious issues, but true nonetheless.
Unless you can see gas, how the F*** do you detect it? Why do you think they add additives to propane and butane to make them smell? To make it easy for people to detect that they have a problem. Why do you think carbon monoxide is so deadly? Because people can’t detect it by smell, so they get sensors put in to detect it.
If you can’t detect the problem, how do you know you have one? After the damage is done?
I really would like you to use this argument to convince someone that your stupid idea is the best way to design a safe system. They will instantly have you committed to an asylum, which is where you really belong for trying to defend your nonsensical position.
You ask this:
and then immediately said this:
Get your head examined.
No, with Linux you just get data corruption and lose data, because nothing really detects the problems.
Here is a Gentoo developer’s perspective:
http://planet.gentoo.org/developers/lavajoe/2008/02/18/linux_needs_…
“ZFS is Sun’s very cool filesystem. I won’t go into detail here – just google it – but it has some eye-opening features, the most critical of which is end-to-end data integrity. Unfortunately, ZFS’s license is incompatible with the GPL.
I say “critical” because I have a strong feeling that silent data corruption is far more prevalent than most people believe. Also, I just don’t buy the argument that bit-for-bit reliability is only important for servers. Yes, in certain circumstances, a bit flip here or there may not be noticed, but I think that is scary as hell. Personally, I’d rather know; I count on computers to copy the bits exactly, don’t you? We simply cannot tolerate random bit errors, no matter how “unnoticeable”. And you will notice if that bit flip hits a critical part of your file.
In my experience with computers, I have caught two examples of silent data corruption. These are ones I actually discovered. It freaks me out to think there may be many more that went unnoticed. And both were due to bad IDE cables (so even the hard disk error rates don’t count here) on two different computer systems. The first on the old and slow PATA and was some data pattern dependent copy glitch, where a diff found the problem. The other was this past year on a modern UDMA/80-conductor cable, and it was found by ZFS – it appears that during some reported DMA errors (probably the cable’s fault), a 64K file block got written to the wrong spot on the disk (PATA does not protect the data address part of the communication).
ZFS is the only filesystem that actually will catch silent corruption in the whole chain: ATA interface -> cable -> disk (HW and firmware). For those who say, “Why not RAID?”, well, RAID will save you if a whole drive fails, but not these more insidious issues. I bet Linus and others are seriously thinking about what to do, since what once was considered rare could become commonplace. There are rumors Apple will adopt ZFS, and FreeBSD already has it in its kernel (and, of course, Solaris has it). For now, zfs-fuse is very interesting, but I think we need such protection of our data in the kernel, and soon.”
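For anyone wondering what “end-to-end” checksumming actually looks like, here is a minimal sketch of the idea (Python, purely conceptual – real filesystems such as ZFS and Btrfs keep the checksum in the parent block pointer or metadata tree and can repair from a mirror; this toy class only shows the detection part):

```python
# Minimal, conceptual sketch of block-level checksumming. Not how ZFS or Btrfs
# are implemented; it only illustrates why a checksum verified on every read
# catches corruption that the drive, cable and RAID layer never notice.
import hashlib

class ChecksummedStore:
    def __init__(self, path, block_size=4096):
        self.f = open(path, "w+b")
        self.block_size = block_size
        self.checksums = {}                    # block number -> expected digest

    def write_block(self, n, data):
        assert len(data) == self.block_size
        self.checksums[n] = hashlib.sha256(data).digest()
        self.f.seek(n * self.block_size)
        self.f.write(data)

    def read_block(self, n):
        self.f.seek(n * self.block_size)
        data = self.f.read(self.block_size)
        if hashlib.sha256(data).digest() != self.checksums[n]:
            # An ordinary filesystem would happily hand back the garbage;
            # with a mirror you would now read the other copy and rewrite this one.
            raise IOError(f"silent corruption detected in block {n}")
        return data

store = ChecksummedStore("demo.img")           # scratch file in the current directory
store.write_block(0, b"x" * 4096)
store.read_block(0)                            # verified on every read
```

RAID and the drive’s own ECC never see the bad cable or the misdirected write; a checksum verified at read time, above all of that, does.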
ZFS works fine on Mac OS X PowerPC. I already posted the data. If you can’t read and comprehend it, that isn’t my problem.
Go ahead and provide me with a list of ARM NAS devices BTRFS is being tested on. Your argument is moronic, because even the developers of BTRFS say they are tackling the problem of Linux filesystems scaling up for large loads.
http://btrfs.wiki.kernel.org/index.php/Main_Page
“Linux has a wealth of filesystems to choose from, but we are facing a number of challenges with scaling to the large storage subsystems that are becoming common in today’s data centers. Filesystems need to scale in their ability to address and manage large storage, and also in their ability to detect, repair and tolerate errors in the data stored on disk.
….
The main Btrfs features include:
Checksums on data and metadata (multiple algorithms available)
”
Gee I wonder why the BTRFS page mentions the need to detect errors. Isn’t that the hardware’s job?
You are the only idiot claiming it will scale down to ARM NAS boxes with 128MB RAM. The design goal for BTRFS is large data centers, and they realize that a filesystem should detect errors.
You are even more delusional than I could ever imagine. The list of hardware supported by an OS has nothing to do with what external people are committing to the source base. If people want Solaris to scale down, they will make it happen. Right now there is no real need or market. Wind River, Linux, QNX and FreeBSD can all be embedded OSes. There is no market need for Solaris in embedded systems; that is why no one is committing anything to Solaris for ARM. If people want ZFS on ARM-based devices they can use FreeBSD, or the QNX guys who already picked up DTrace will port it.
If someone really sees a need for OpenSolaris on their hardware they will port it. Nothing to do with the license.
Why do you think the *BSDs don’t get as much support as Linux? By your stupid logic it must be because they are not “true open source” OSes. Linux became a buzzword and gained mindshare. It is also a good OS. Mindshare + quality = marketshare. You can have the quality, but without mindshare you can’t capture a market. Microsoft has tremendous mindshare; Linux can’t seem to displace it from the desktop.
People are returning netbooks with Linux 4x more often than those with XP.
Are you talking to yourself again? Off your meds? Others have commented on how difficult it is talking to you. I am glad you finally had some self-realization.
the article said:
so, indeed, it did say FSes followed Moore’s Law… try proofreading next time.
Moore’s Law was about the growth of the number of transistors on a CPU die… people mistakenly extrapolated it to everything, which is a wrong assumption at its root.
So, you need proof, eh?
– Linus Torvalds:
http://lwn.net/Articles/237905/
– Zemlin’s desperate and groundless attacks like:
while later stating:
(So minor he wants it on Linux so badly?… A bit contradictory, isn’t it?)
http://www.nytimes.com/idg/IDG_852573C400693880002574CE00371FE1.htm…
– Apple: http://www.apple.com/server/macosx/technology/filesystem.html
– FreeBSD: http://lists.freebsd.org/pipermail/freebsd-current/2007-April/07054…
– ZFS-on-FUSE: http://zfs-on-fuse.blogspot.com/
Not only does everybody want it, it has also inspired other work like Matthew Dillon’s HAMMER: http://kerneltrap.org/DragonFlyBSD/HAMMER_Filesystem_Design
While desktop users don’t know/don’t care about FSes, whenever they buy a Mac or install Solaris or FreeBSD, they will get ZFS, and they never need to know about its existence… that’s the point of it: FSes should be transparent to end users…
Heh… investigate a little before making such assertions: http://blogs.sun.com/erwann/entry/zfs_on_the_desktop_zfs
I think my previous list can prove you wrong.
You see, ZFS was designed taking into account the vast percentage of wasted CPU cycles on modern servers and computers in general… the same thing that has motivated virtualization on all platforms. In modern times there is an excess of processing power, not a lack of it like 20 years ago. The whole idea of hardware disk controllers for building disk arrays has become obsolete in many (save some very specialized) cases, thanks to ZFS… I invite you to read some papers and the ideas behind the Thumper X4500, a storage solution using ZFS.
Even so, ZFS is not so massively resource-intensive as you seem to imply; it just needs a decent (not even great) configuration by modern standards and it works like a charm…
“They agreed on btrfs, which was written from the ground up by Oracle’s Mason based on his prior Novell work with the Linux-based reiserfs file system”
I’m glad to see something come from Reiser’s work. It would have been a shame for the technology itself to have been stigmatized.
Work with both high-res satellite imagery and large computational linguistics datasets has proven to me that more than a conventional filesystem is necessary for many fields.
I’ve made this point before, but the heavy lifting of Linux, for nearly the past decade has come from Corporations and their funding of engineers to put in the code and time.
You may proclaim it superior to closed source, but the reason they are doing it is time to market. If they could have built a solution faster and equal in a proprietary vein, you betcha they’d do it. I’m just glad they can’t justify the cost of going it alone, so they’ve brought it into the open for us all to draw upon and add to, however minutely, as time goes on.
“I’ve made this point before, but the heavy lifting of Linux, for nearly the past decade has come from Corporations and their funding of engineers to put in the code and time.”
Two responses:
1: duh
2: so?
“You may proclaim it superior to closed source, but the reason they are doing it is Time to Market. If they could have built a solution faster and equal in a proprietary vein you betcha they’d do it.”
Paraphrase: “you might say it’s superior, but get this, they’re only doing it because it’s actually superior.” Why thank you for that insight. That’s sort of what we were saying.
Btrfs… I thought is was BitTorrent File System
I’m more interested in Tux3, or the one being developed for DragonFly BSD (HAMMER?)…
As for corps sponsoring development or not, I have no worries as long as it’s under the GPL or equivalent.
If so, it’s less likely for any one corp to hijack the project or dictate its use (see Microsoft silliness, past and present)…
DragonFly’s filesystem is meant for clusters and won’t really do much for a single system.
How many people are negative about ZFS? If they had tried ZFS themselves, they wouldn’t say those things. Yes, ZFS is that good. It is not overhyped. I run it at home as my file server. Before, I always backed up important files to other computers; now I am nervous until I get them onto my ZFS raid, then I can calm down. ZFS doesn’t like hardware RAID cards – it wants to manage all the drives itself.
Here is an article where a Linux guy tries out ZFS as his home file server and loves it, as has everyone else that has tried it. He investigates different solutions and chooses ZFS in the end. Very interesting motivation he has:
http://breden.org.uk/2008/03/02/home-fileserver-existing-products/
After several embarrassing airplane crashes, they switched to ZFS.
http://www.eweek.com/c/a/IT-Infrastructure/How-the-FAA-Is-Bringing-…
Uhhh, read your own link. After some embarrassing crashes of their *flight plan filing system* they are upgrading some very old *internal business servers* to new equipment. And some of it happens to use ZFS.
The distinction between a “plane crash” and a “flight plan filing system crash” may seem subtle to you. But some people, including users of the respective hardware involved, do care.
SEGEDUNUM,
I have run ZFS for over a year with 1 GB RAM and an Intel P4 @ 2.4 GHz at home. It worked fine. The thing is, ZFS likes to cache a lot, as it is an enterprise server filesystem. It will use all the memory it can grab for its cache – several GB if that RAM is idle. And if some app wants memory, ZFS will release some. But that does not mean it REQUIRES a lot of memory to run. Several people have explained this several times, but you just ignore those posts.
Come on, ZFS is targeted at enterprises; do they have 256MB RAM? No, they have several GB. If SUN had designed ZFS without a cache, so that it would run fine with 128MB RAM, then you would have complained about that instead: “ZFS not using a cache sucks because ZFS is not usable in enterprise environments that have lots of RAM. ZFS doesn’t use the available idle memory. A computer should use all the idle memory for something, otherwise it is a waste. Therefore ZFS sucks badly.”
The most logical thing, when designing a new filesystem, is to allow it to use all idle RAM for performance reasons. But you state that is a bad design choice. You don’t like the idea of a cache to speed things up. That is less than optimal and goes against all computer knowledge. You SHOULD use a cache, right? (I bet if SEGEDUNUM ran an enterprise, all computers with 512GB RAM would have their RAM unused. “Turn off all caches! Never cache anything! That is bad!”)
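In pseudocode-ish terms, the behaviour being described is roughly this (a toy Python sketch of the general “grow into idle RAM, give it back under pressure” idea – not SUN’s actual ARC code):

```python
# Toy sketch of a cache that grows into idle RAM and shrinks when applications
# ask for memory -- the general idea, not the actual ARC algorithm.
from collections import OrderedDict

class ShrinkableCache:
    def __init__(self, target_bytes):
        self.target = target_bytes      # soft target, not a hard requirement
        self.used = 0
        self.blocks = OrderedDict()     # block id -> data, in LRU order

    def put(self, key, data):
        if key in self.blocks:                      # replacing: don't double-count
            self.used -= len(self.blocks.pop(key))
        self.blocks[key] = data
        self.used += len(data)
        self._evict_to(self.target)

    def get(self, key):
        data = self.blocks.get(key)
        if data is not None:
            self.blocks.move_to_end(key)            # mark as recently used
        return data                                 # a miss means: go read the disk

    def memory_pressure(self, new_target):
        """Called when applications need RAM: lower the target and evict."""
        self.target = new_target
        self._evict_to(new_target)

    def _evict_to(self, limit):
        while self.used > limit and self.blocks:
            _, old = self.blocks.popitem(last=False)   # drop least recently used
            self.used -= len(old)

# On a 64 GB box the target can be huge; on a 1 GB box it just stays small.
# Either way the filesystem keeps working -- only the cache hit rate changes.
```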
Let’s face it, nothing SUN does will please you. Damned if SUN adds a cache to ZFS, damned if SUN doesn’t. Sometime in the past, some SUN-affiliated guy must have done something terrible to you.
And as for the ZFS “rampant layering violation” claim, the main architect behind ZFS explains why that is wrong:
http://blogs.sun.com/bonwick/entry/rampant_layering_violation
An article about Linux filesystems becomes a discussion about the superiority of Solaris over Linux.
An article about Mono becomes a discussion about the superiority of KDE over GNOME.
An article about Apple becomes a discussion about how there are lower priced, better alternatives for everything they create.
An article about Vista or XP becomes an article about the superiority of every other OS over anything Microsoft creates.
Unfortunately this place has become a haven for vendor trolls. Why can’t we talk about the technology anymore?
Is it only me that finds it difficult to communicate with SEGEDUNUM? I have told him for the umpteenth time that I’ve run ZFS on 1GB RAM with a 32-bit CPU for over a year without problems. Other people have told him similar stories. And still he states that ZFS requires several GB of RAM to function? And he will probably continue to ignore our posts, because we tell him things he doesn’t like to hear.
And his implicit claim that ZFS’s habit of trying to use all unused RAM as a cache is stupid – not many people agree with that. ZFS, as an enterprise filesystem, can expect to run on machines with, for instance, 64 GB RAM or more; is it bad to try to use all that idle RAM as a cache? Is it absolutely vital that SUN makes sure an enterprise filesystem runs well on 256MB RAM machines?
Is it just me that thinks this reasoning is… a bit strange?
As Ron writes:
“Funnily enough, most of the hardened ZFS critics I know who I endlessly debated with changed their tune after 5 minutes of actually using ZFS, the proof of the pudding is in the eating :-)”
Maybe Jonathan Schwartz used that pony-tailed sensitive guy look to effect and stole his girl? If not, the vehemence seems a little, um, odd.