Apparently, Apple is interested in porting Sun Solaris’ ZFS to Mac OS X. From the zfs-discuss mailing list: “Chris Emura, the Filesystem Development Manager within Apple’s CoreOS organization is interested in porting ZFS to OS X. For more information, please e-mail him directly at [email address]. Speaking for the zfs team (at Sun), this is great news and we fully support the effort.”
Apple Interested in Solaris’ ZFS
Shades of RMS… and it had to be defragmented on a scheduled basis too or performance would begin to lag.
Limits (from Wikipedia):
Max file size: 16 EiB
Max number of files: unlimited
Max filename size: 255 characters
Max volume size: 16 EiB
How about just porting Aqua to Solaris? I always see on blogs and stuff that Solaris engineers are in love with using Apple stuff.
2006-04-29 1:47 pm MikeGA
Mind you, there’s no reason other than Steve Jobs’ ego why Sun couldn’t license the Aqua “look.”
2006-04-30 11:40 am KugelKurt
Well, that doesn’t need to change. AFAIK it’s widely considered a fact that Solaris (the core OS, without the graphical stuff) is much superior to Darwin.
Since OpenSolaris was announced, I have been hoping that Apple will use Solaris as the core for Mac OS X.
All higher-level features (Quartz, QuickTime, CoreVideo, …) would still be Apple-exclusive, but with a more powerful core.
2006-04-29 2:38 pm nighty5
In an official capacity, this will never happen. Given that Solaris 10’s desktop is based on GNOME, with the free licenses associated with the GTK toolkit, I highly doubt Sun will consider Aqua. This falls in line with their decision not to back Qt for their next toolkit even though it’s GPL-licensed as well; the drawback is its commercial history and backing from Trolltech…
Isn’t ZFS meant more for environments with very large disks and large files? To me this doesn’t seem to suit most of Apple’s consumer market.
However, I can see a use for the pro video editing market maybe.
2006-04-29 1:48 pm MikeGA
Server is a consumer product now?!?
I just think that porting it probably isn’t worth the time and expense for how small a market it would fit.
2006-04-29 1:53 pm Thom Holwerda
Server is a consumer product now?!?
Of course it is. It is just a different consumer product than the iMac or MacBook consumer products.
Edited 2006-04-29 13:53
2006-04-29 2:04 pm eMagius
While the speed and disk-pooling features of ZFS might be more appealing to the workstation or server market, ZFS also offers some great improvements at the desktop and consumer level, not the least of which are file-system-level snapshots and rollbacks. ZFS also offers self-healing and excellent reliability.
If ZFS becomes an install-time option, the benefits to OS X users will be immediate and vast.
2006-04-30 2:24 pm aliquis
“which is file-system level snapshots and rollbacks. ZFS also offers self-healing and excellent reliability.”
I think an explanation of what those are, what they do, and how they do it would help.
2006-04-29 2:08 pm Bastian
ZFS is “endian-neutral”, which might make it attractive to Apple; right now Intel Macs are taking a bit of a disk-access performance hit because they’re little-endian and HFS+ is big-endian.
The bits of ZFS that protect against corruption might be even more attractive to smaller businesses and home users, who in my experience aren’t always very good at sticking to a backup schedule.
I don’t really understand all this, but I was wondering if ZFS can be used under Linux, and whether I, as a home user, would benefit from it in any way. I don’t do backups (though I know I probably should); I am just concerned about reliability and performance… What I liked under Windows was the capability of having compressed folders, but wouldn’t compressing the whole filesystem just slow things down?
2006-04-29 2:27 pm Bastian
Depending on your computer, compressing the filesystem can actually speed things up because it reduces the amount of data you have to read from the hard disk. Granted, this will only be the case if your CPU isn’t already working hard.
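In ZFS that trade-off is just a per-filesystem switch; a rough sketch, assuming a Solaris box with an existing pool named pool1 and a filesystem called documents (both names are hypothetical):

```shell
# Turn transparent compression on for one filesystem only.
zfs set compression=on pool1/documents
# See how well the stored data is actually compressing.
zfs get compressratio pool1/documents
```

Since it’s per-filesystem, you could compress your documents but leave already-compressed media (MP3s, movies) alone.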
2006-04-29 2:43 pm nighty5
Compared to ext2/3, ReiserFS and XFS, there is no real benefit from a home-user perspective. I use ext3 everywhere for servers & workstations at home, but have a couple of Macs too.
If you have a couple of machines, I recommend scheduling a backup job, even just backing the systems up to each other. This is what I do with my Windows and Linux systems over Samba.
You should ask yourself: do I need compression at all? With hard-drive prices the way they are, it’s hard to see a need to compress.
Actually, I think that ZFS does have a place in many home computers today. Any computer that may well have more than one drive can benefit from ZFS.
Simply put, it’s very easy to append disparate volumes together and make larger single volumes. So, for example, now you can easily just keep chaining those new 750G Seagate drives together as you continue to rip your DVDs or home video projects.
I think most people don’t really enjoy volume management. If they don’t have any more room in the “My Photos” directory, they move the entire directory over to a new drive rather than split it up.
ZFS makes that process pretty seamless. For example, you can throw two 250GB drives together and create two filesystems, say Music and Movies. BOTH can be “500GB”, though obviously you don’t have 1000GB of space. But they’ll both just pull sectors from the disk pool as needed. When the disk fills up (say 350GB of Movies and 150GB of Music), it’s actually full. Not “Movies are full, but Music is half empty”. Grab a new drive, slap it in, and bump the quotas. Voila, both Movies and Music have an extra “250GB each” to grow into.
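A sketch of that scenario in ZFS commands (the pool name and Solaris device names here are hypothetical):

```shell
# Pool two 250GB drives together and carve out two filesystems.
zpool create media c1d0 c2d0
zfs create media/Movies
zfs create media/Music
# Both filesystems draw blocks from the same 500GB pool as needed.
# Later, slot in a third drive and the whole pool simply grows:
zpool add media c2d1
```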
Then, on top of that, you have the advanced ability to slice the pool up into various sizes to support things like snapshots and so on. Snapshots et al. operate on mountable volumes, and under ZFS volumes become almost “as cheap” to make as a new directory, but with extra attributes. For those who don’t know, snapshots are the ability of the file system to literally take a picture of its current state.
So, say you make a daily snapshot of your filesystem every night at midnight. On Friday, you notice that you accidentally destroyed a file or something. You can instantly go back to a recent snapshot and restore the file.
Or say that you’re running Windows, you run virus checking and spy ware tools on Sunday, but on Tuesday, somehow your machine got massively infected. You copy the few documents you’ve been working on to a flash drive for backup, and simply “reset” the system back to Sunday before the virus attack, and the whole thing is restored in a heartbeat.
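For the curious, the commands involved are about this simple (the pool, filesystem, snapshot and file names are all made up for illustration):

```shell
# Take a named snapshot of the home filesystem.
zfs snapshot pool1/home@sunday
# Pull a single file back out of the read-only snapshot...
cp /pool1/home/.zfs/snapshot/sunday/report.doc /pool1/home/
# ...or roll the entire filesystem back to Sunday's state.
zfs rollback pool1/home@sunday
```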
Snapshots work by keeping used disk sectors associated with the snapshots. If a file changes a single sector, then the old snapshot has a copy of the old sector and the current version has a copy of the new, but both versions share the rest of the file. So now you have the original and a backup copy of the file, but the price on disk is an extra disk sector, not an entire duplicate file.
ZFS also has its built in checksum system for data integrity. This helps indicate potential drive and/or controller failures. Most filesystems check integrity when data is written, but not when it’s read. ZFS does both. It may not necessarily be able to fix it, but you’ll know immediately that it’s happening vs silently corrupting your data. Even on a laptop, where you typically don’t need to manage multiple physical drives, this extra reliability is a bonus.
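You can even ask ZFS to verify every checksum in the pool on demand; roughly (assuming a pool named pool1):

```shell
# Walk the whole pool and verify every block against its checksum.
zpool scrub pool1
# Any errors found show up in the CKSUM column of the status report.
zpool status -v pool1
```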
Now certainly ZFS isn’t the only file system that has these capabilities, but it’s a much more dynamic and easy-to-use system than its predecessors. Many others offer some, but not all, of ZFS’s features. They really thought the problem through on this one.
2006-04-29 5:01 pm taos
Well said.
Also, look at how many people want to do RAID-5 at home, but don’t want to spend $300+ on a decent RAID card.
With ZFS, you can use RAID-Z, which at least in theory performs better than any software-based RAID-5.
The higher reliability (the key feature of ZFS) alone is worth the consideration for any home RAID users.
2006-04-30 3:27 pm aliquis
“Simply put, it’s very easy to append disparate volumes together and make larger single volumes. So, for example, now you can easily just keep chaining those new 750G Seagate drives together as you continue to rip your DVDs or home video projects.”
But if one of the disks dies, does the whole filesystem die, or can one “repair” parts of it from the other parts? (not talking RAID-Z here)… Say disks 1, 2, 4, 5 live and 3 died. Can I remake the filesystem with disks 1, 2, 4, 5 and still have the files which weren’t on disk 3 available?
2006-05-01 6:35 pm atsureki
I believe so. Generally that problem arises because the volume manager and filesystem are separate, so the FS doesn’t actually know it’s being spread out over multiple physical devices. When a big chunk of itself goes missing, it doesn’t know what to do, especially if that chunk has the only superblock. ZFS manages multiple physical volumes by itself, which gives it the equally important advantage of being able to avoid getting cornered like that in the first place. If one physical disk is giving you trouble, you can command ZFS to start bailing out the files on that one onto other disks, and then remove it from the pool. Its featureset is really amazing.
Oh my, this would indeed be good. This might be what’s needed to give the edge to Mac OS X Server.
# zpool create pool1 raidz drive1 drive2 drive3
# zfs create pool1/filesystem1
done! You have just created a 3 drive raidz array with complete LVM and it took less than 5 seconds. Feel free to paste the commands to do the same thing in your OS of choice.
Now you need to create a place for your sister to store her files.
# zfs create pool1/sisterFiles
okay now lets set a limit on how much space she can use.
# zfs set quota=1m pool1/sisterFiles
this is just a small demonstration of how ZFS makes things easier; all of ZFS is this simple.
Knowing Apple’s famously tight-lipped policy, and given that supporting a new filesystem is a serious undertaking, why this humble post by a senior (though not top-level) Apple manager?
To me, this means that support for ZFS has been agreed and developed closely by Apple and Sun for more than a year. It may be one of the big features of Leopard.
This small post may be just to start getting some attention from developers outside Apple and Sun.
Good to see Apple taking serious and brave steps to improve Mac OS.
2006-04-29 7:12 pm Wes Felter
Or maybe someone at Apple sent a private email to someone at Sun, who then told the whole world. I guess Apple should have negotiated an NDA first… 🙂
2006-04-30 6:15 am Tuishimi
Something like this WOULD be great, but in what future release would we SEE it? I remember when Mac OS X came out you could set it up under HFS or UFS… but Mac OS X NEVER worked 100% under UFS… they eventually removed the option, did they not?
Anyway, it seems like the OS uses the HFS in ways that may have to be carefully considered when switching file systems, which to me means it will not ship with Leopard, maybe not even ocelot, or whatever the next version of OS X is called…
2006-05-01 10:11 am s_groening
…This is just one of numerous points that make Mac OS X seem somewhat like OS/2 in its day…
You often had to install applications on HPFS (High Performance File System) in order for them to work, mainly because of the support for long filenames, and possibly a few other technicalities…
Apart from that, just to mention, everything seems to be drag’n’drop oriented in the same way, and the WinOS/2 environment and the Classic Environment are also a lot alike
…But then again, I’ve always loved OS/2…. -It’s always been way more stable than my female relationships
Since Boot Camp, many of us believe that Apple will implement some kind of virtualization technology in 10.5. Can ZFS ease the implementation of virtualization, either of complete OSes or of just single applications, a bit like BSD jails do? How does ZFS relate to other file systems? Can it “see” ext, FAT or NTFS?
Another point: how does ZFS manage both very small and very large files, in terms of disk space and speed?
2006-04-29 7:23 pm Wes Felter
Since Boot Camp, many of us believe that Apple will implement some kind of virtualization technology in 10.5. Can ZFS ease the implementation of virtualization
If you’re going to virtualize your system, you’ll also want to virtualize your storage, and ZFS provides fine-grained, efficient, easy-to-use storage virtualization (aka volume management). Although for the kind of virtualization that Mac users are generally interested in (Parallels), you really just need good sparse file support so that those disk image files take up less space if they’re not full. ZFS would be really useful if OS X implements something like jails, but I doubt that will happen.
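Sparse files aren’t ZFS-specific, by the way; you can see the effect on most Unix filesystems. A quick illustration (the path is arbitrary):

```shell
# Create a "1GB" disk image without writing any data: seeking past
# the end of the file leaves a hole, so no blocks are allocated yet.
dd if=/dev/zero of=/tmp/disk.img bs=1 count=0 seek=1G
ls -lh /tmp/disk.img   # apparent size: 1GB
du -k /tmp/disk.img    # blocks actually allocated: close to zero
```

A virtual machine’s disk image stored this way only consumes real space as the guest OS fills it.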
How does ZFS relate to other file systems? Can it “see” ext, FAT or NTFS?
ZFS is a different file system. Different file systems don’t interoperate, so you’re either using ZFS or you’re not.
how does ZFS manage both very small and very large files, in terms of disk space and speed?
Efficiently, using multiple block sizes to reduce wastage on small files and increase speed on large files.
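The upper bound on that block size is even tunable per filesystem; a sketch, with hypothetical pool/filesystem names:

```shell
# Cap records at 8KB for a filesystem holding a database that does
# small fixed-size I/O; streaming files elsewhere can keep the
# default 128KB record size.
zfs set recordsize=8k pool1/db
zfs get recordsize pool1/db
```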
.. is for suckers. RAID is only beneficial with large pools of drives, and RAID-0 is not RAID. The “R” (redundancy) is missing in RAID-0, and on top of that there is no hot-swap capability on most home machines.
Consumer RAID is for suckers.
Edited 2006-04-29 19:32
2006-04-29 8:02 pm present_arms
RAID on my home Linux box is the dog’s dangly bits (that’s good). I have 2 IDE drives, one on each cable, and used the software RAID built into Linux to make RAID 1. It goes rapid… useful for all kinds of things (OpenOffice is quick now).
Edited 2006-04-29 20:05
I thought Apple would be more interested in the root rather than the branches.
Why doesn’t Apple buy Sun? That way they would pump up their server technologies and be very competitive with Microsoft. Besides, Apple and Sun have much in common.
2006-04-30 2:05 am hraq
“AAPL” market cap is 60 billion; “SUNW” is 17 billion. I think they can buy Sun, and the difference would be the power to add to Apple’s throne in the server arena.
2006-04-30 3:02 am zemplar
Market Cap alone won’t determine one company’s ability to purchase another. Although I’d be all for some sort of collaborative merger between Sun’s server-oriented expertise and Apple’s desktop prowess.
2006-04-30 3:37 am taos
Exactly. No need to buy-out to collaborate.
Apple didn’t spend billions on a BSD company to build OSX on BSD.
Also, from my point of view, Sun is a much bigger and more influential _technology_ company than Apple, while Apple is a giant in the consumer market. It doesn’t make sense for one to buy the other; neither has the capability.
However, it makes a lot of sense for them to collaborate on the OS side; it would be a win-win situation.
Sun does not “sell” Solaris; they sell enterprise services for systems. Even if Apple makes a “better Solaris than Solaris” by migrating to the Solaris kernel but keeping the UI technology, it doesn’t have the capability to service the kind of enterprise environment that Sun has been supporting.
In the end, more people will use OSX on the desktop/workstation and more people will use Solaris on enterprise servers. That’s what I call win-win.
Thanks Wes for your answer. To clarify, when you say:
“ZFS is a different file system. Different file systems don’t interoperate, so you’re either using ZFS or you’re not.”
can a ZFS/Solaris-formatted volume read and write to FAT, NTFS, ext2/3 partitions? In OS X/HFS+ I can read and write FAT and read NTFS. Sorry for the noob question, but is this a function of the file system or is it implemented at a different level of the operating system?
Also, what’s ZFS smallest block size?
2006-04-29 9:29 pm helf
can a ZFS/Solaris-formatted volume read and write to FAT, NTFS, ext2/3 partitions? In OS X/HFS+ I can read and write FAT and read NTFS. Sorry for the noob question, but is this a function of the file system or is it implemented at a different level of the operating system?
It’s done at the OS level with drivers that tell the OS how to handle a given file system.
You can read this for more information on file systems.
http://en.wikipedia.org/wiki/Filesystem
2006-04-29 9:33 pm taos
One filesystem (or “volume”) does not read or write another filesystem. That’s what Wes meant by “different filesystems don’t interoperate”.
For each filesystem, you have a separate module which understands that filesystem’s on-disk and kernel structures. The operating requests (cd/open/read/write etc.) are passed to the appropriate module for each filesystem by the OS.
That you can read/write FAT and read NTFS on OS X suggests OS X provides a module that reads/writes FAT, and another module which only supports NTFS reads.
It has nothing to do with HFS+.
If you replace HFS+ with ZFS, you can still read/write FAT and read NTFS, because those modules are still there.
2006-04-30 7:43 am rayiner
Here is, hopefully, a fuller explanation.
Most modern OSes have what is called a “virtual file system”. The actual directory structure you see is implemented inside the OS, and plugins handle mapping on-disk filesystems (like ZFS or FAT) to the logical VFS hierarchy. When you “mount” a filesystem, you’re basically telling the VFS: “use such-and-such filesystem plugin to expose such-and-such partition at such-and-such directory”. The plugin then takes care of the logical mapping between the VFS semantics and the on-disk filesystem semantics. It’ll handle, for example, working with a case-insensitive filesystem like VFAT in a case-sensitive system like Linux.
FYI: At the OS level, filesystems generally don’t interact. In OS X, a CD filesystem might appear in /Volumes, which is a directory on an HFS+ disk. However, at the kernel level, the HFS+ plugin really doesn’t do anything when you read/write to the CD. The part of the kernel that handles filename lookups will see a path like “Volumes/My_CD/foo”, and realize that ‘foo’ is actually a file on the CD drive, not a file inside a directory inside an HFS+ volume, and redirect the request accordingly.
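In shell terms, mounting is exactly that “use this plugin for this partition at this directory” instruction (the device names below are hypothetical, and you’d need root):

```shell
# Linux: use the vfat plugin for this partition, grafted at /mnt/usb.
mount -t vfat /dev/sda1 /mnt/usb
# Mac OS X equivalent, using its msdos filesystem bundle.
mount -t msdos /dev/disk2s1 /Volumes/usbstick
```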
Thanks helf and taos for the clarification. This learning bird is more knowledgeable now.
From a fast google search it seems that Solaris can also read and write to FAT and read NTFS partitions.
Again, thanks.
ZFS rocks. I have been using it on my laptop since it first came out in Solaris, and I am very pleased with it. It works very nicely with zones.
For all those people who want it in Linux, it is unfortunate that the Linux kernel is stuck with a restrictive license. BSD based systems, go for it.
UFS support is still there in OS X (can’t remember if it is in the installer, but it is in Disk Utility). I know there were warnings about UFS due to case sensitivity, but I use case-sensitive HFS+ on my Mac with no problems.
I wouldn’t be surprised if their interest is for their next generation of Xserve-type machines. Using ZFS as a root volume is still a bit of a hack and relies on a very small, invisible version of another filesystem to get started.
At the very least this is probably where they would start and then go from there.
Can’t a Sun engineer put ZFS into OS X through Darwin? The engineer would make it work on Darwin, then make a binary that would work in OS X.
Is what I am saying possible?
Today, Sun has announced that Solaris ZFS 1.0 will be available in June as part of the next commercial release.
http://www.sun.com/smi/Press/sunflas….5.xml?cid=155
It would be fantastic to have the great features and robustness of ZFS underlying OS X. HFS+ is so last century.
how is HFS+ last century? curious.
Good question. I have little familiarity with ZFS, but HFS+ is certainly more sophisticated than what you see on Windows.
I have little familiarity with ZFS, but HFS+ is certainly more sophisticated than what you see on Windows.
In which ways? I’m not choosing either side, I’m just curious. I don’t really give a rat’s ass about what filesystem I use, so I know little about it.
God I hate NTFS and its defragging problem. Try a little experiment. Defrag your computer and I mean offline and online defrag. Then reboot, you will see your computer boot fast in less than 20 seconds….well mine does anyway….reboot 3-4 more times and watch your boot times grow longer and longer as the disk gets more fragmented because of all the I/O that is being done. That is the single biggest problem I have….is fragmentation with NTFS.
Are you sure that’s not the prefetch optimisation in XP?
God I hate NTFS and its defragging problem. Try a little experiment. Defrag your computer and I mean offline and online defrag. Then reboot, you will see your computer boot fast in less than 20 seconds….well mine does anyway….reboot 3-4 more times and watch your boot times grow longer and longer as the disk gets more fragmented because of all the I/O that is being done. That is the single biggest problem I have….is fragmentation with NTFS.
I’m going totally OT here, but from my past experience with 2K/XP, wiping the swap and then setting it to a fixed size after a defrag helps a lot with this problem. Windows will create the swap in a contiguous space so it doesn’t become fragmented, helps with a quicker bootup and slightly better performance overall.
But you’re right, defragging ntfs is a PITA.
Regarding a fixed swap to prevent disk slowdown due to fragmentation and/or resizing: for those with 2GB of RAM, fix the swap size at 0.
Completely disabling swap is not recommended in Windows. I’ve never personally tried it on my box with 2 GB of RAM, but I’ve heard that some programs cease to function when no page file is available.
I’ve never had any issues with running windows without swap. My xp machine has 1gb of ram and hasn’t had swap in years. no problems. Don’t believe everything you hear/read.
Heh … awesome. I’ll play around with that the next time I’m at work and have access to some of our more powerful hardware.
Can’t say for sure, but I am pretty sure you still have a swap file. Windows does not allow for a zero-size swap. It then creates one automatically.
A disk won’t get fragmented just because of I/O going on. It will get fragmented if you’re appending to files after you’ve consolidated all of your free space (and packed the files in close to each other).
I call bullshit on this test, as I deal with 3-4 PCs getting benchmarked every day, and fragmentation only goes up minimally (ie. 3%) every few days.
Not really. NTFS is actually fairly sophisticated. For example, it was built with journaling in mind, while HFS+ had journaling bolted-on as an afterthought.
All the merits go out the window when one has to waste time defragmenting.
No one said you have to defragment. Out of the 35 Windows users that I know better, only 2 regularly defragment. The other 33 don’t have any problems. 🙂
Hmm…true defragging is not required….but it is for having decent I/O performance isnt it?
Indeed, but NTFS fragmentation isn’t even close to being as bad as the parent made it out to be.
Indeed, but NTFS fragmentation isn’t even close to being as bad as the parent made it out to be.
NTFS fragmentation is bad. It is very bad. In fact most typical users have highly fragmented hard drives. They download music and movies and then delete them and download more music. They install and uninstall crapware. Normal usage like this fragments the hell out of NTFS to the point where it is almost unusable. I don’t think constant removal of spyware is very helpful either, at least to the state of an NTFS file system.
Not really. NTFS is actually fairly sophisticated.
Absolutely. NTFS is one of the best things MS has ever done. Compression, encryption, journaling — it’s an advanced FS by most accounts. Completely closing off outside support for it has somewhat invalidated that for multisystem users like me, though. Also, I think it still stores entries by filename, so it can’t have hard links and just doesn’t have symlinks, but the Windows line was never particularly about hands-on control, so few will miss those.
Anyway, the complaint was about fragmentation. I remember how happy everyone was back in the day about how much better the Win2K defragmenter was than the 98 one. NTFS itself didn’t help any. MS didn’t design around that little problem. They just improved the tool to manage it.
For example, it was built with journaling in mind, while HFS+ had journaling bolted-on as an afterthought.
Successfully, without breaking backwards compatibility. OS9 can still boot from a journaled HFS (but won’t access the journal). Let’s see 98 even access an NTFS volume. The first version of NTFS is older than FAT32. There’s no excuse for the complete lack of included support in the 9x line. But I digress.
Apple has been very conservative with filesystems. Their own HFS is getting up there, and UFS (which they support with OSX) is about 30 years old now. I know Apple doesn’t have a reputation for backwards compatibility, but it’s very important to them in this case. They’d rather see the user buy all new software and hardware in order to upgrade than lose one byte of data.
With the new Intel Macs, they jumped over to the default filesystem for EFI so they could have nice, shiny software-firmware integration. Apart from that, I know nothing about it. If they’re considering ZFS, their comfort with it must come from the fact that Sun is a big name with lots of experience and a reputation for reliability behind it. But I don’t think it would ever be used for the system partition (i.e., replacing HFS/EFI in any way). ZFS was designed primarily for storage. There isn’t even a Solaris release that can boot from it yet. If the post is for real, it sounds like Apple might be looking to sweeten their oft-overlooked Xserve / SAN product lines.
I really hope ZFS makes it into Leopard. I’d love to turn my G4 into a low-power (as in, electricity) fileserver once I get a new system. I’d put Solaris on my Athlon server right now if I could trust that I’d be able to add and upgrade software.
This is certainly a throwaway comment. NTFS can store double the amount of information in volume and file size: up to 16 exabytes, while HFS+ can store 8 exabytes. NTFS is actually a very advanced file system, with refinements happening with every release of Windows.
Of course, those can’t be the only differences; maybe you should check out this and see what other factors come into play:
http://en.wikipedia.org/wiki/Comparison_of_file_systems
P.S. I do own a PowerBook and absolutely love it. Modern filesystems don’t mean much on smaller laptops but can play an important role on larger systems.
NTFS is also NOT OPEN.
http://www.sun.com/2004-0914/feature/ ZFS is ENDIAN-NEUTRAL; this is a reason why Apple is interested in it. They can jump architectures more easily in the future.
What license is this, Sun’s own tight license?
Edited 2006-04-29 15:05
ZFS is under Sun’s CDDL license, which is probably one of the reasons Apple is interested in it. After all, it doesn’t contain any viral clause (like the GPL), though any changes Apple makes to files licensed under the CDDL need to be released under that license.
Basically it’s the best of the BSD and GPL licenses combined: the original coder/community gets any improvement to the code (à la GPL), and commercial companies can include it in the OS/whatever and not have to relicense their entire codebase (à la BSD).
Which is the same as what you can do with the LGPL.
That’s all fine and dandy. No one mentioned LGPL though. GPL != LGPL.
That’s hardly a point.
“After all it doesn’t contain any viral clause (like the GPL), though any changes Apple make to files licensed under CDDL need to be released under that license. ”
I don’t think you know what you’re talking about.
“though any changes Apple make to files licensed under CDDL need to be released under that license”
is exactly the “viral problem” people describe in the GPL.
The GPL applies the fullest definition of “derived work” reasonable within copyright law. The CDDL applies a much narrower definition of “derived work”, based on files. Under the GPL, if you use a file from a GPL’ed program in your program, your whole program must be GPL’ed. Under the CDDL, you can use a file from a CDDL’ed program in yours, but you can keep the other files under whatever license you want, as long as you publish any changes you make to the CDDL’ed file.
To use an analogy, the GPL says: “if you use any pages from this book in yours, your book is a derived work.” The CDDL says: “you can use pages from this book in yours, but the only pages that are derived works are modifications of the page you use.”
Well then. It’d appear I’m the one who doesn’t know what he’s talking about 😉
is exactly the “viral problem” people describe in the GPL.
No, what makes the GPL viral is that GPL + any other file == GPL. It’s like being “infected” by a virus. If I have a virus and infect you, then you are infected as well.
Whereas using CDDL items is like being near a non-contagious infected person. Unless you do certain things you don’t become infected
> NTFS is also NOT OPEN.
Of course it isn’t. Who claimed otherwise? And how does it make it less superior?
It means that the other operating systems on your multi boot system cannot access files on your Windows partition. That does make it less useful in my book, and thus less superior.
Imagine if all operating systems used open file systems. Not necessarily open source, just properly documented. Wouldn’t that take away most of the data sharing pain we all take as part of life today?
If Apple switched to ZFS, dual booting OS X + Linux would be far more convenient than Windows + Linux.
I doubt it.
I still haven’t found very stable ext3/reiser drivers *for* Windows.
Just because it’s open source doesn’t mean much in some cases.
Also, you can access (including write to) NTFS filesystems from Linux; I currently do it on Ubuntu and it’s fine.
http://www.linux-ntfs.org/
That’s because the IFS (Installable File System) Kit for Windows (which includes the required headers to make the development process feasible) costs around $150, and few lone developers are going to buy it. I think the last time I checked, the kit cost several times the current price.
I doubt it.
I still haven’t found very stable ext3/reiser drivers *for* Windows.
Just because it’s open source doesn’t mean much in some cases.
Linux is a little guy created by programmers all over the world of varying tastes and interests, so they work to be able to access and manipulate all forms of data.
Apple is kind of a little guy and wants its users to be able to have access to their data, and so includes support for major filesystems where it sees it as practical (FAT, NTFS, UFS).
Microsoft is a big guy that wants to stay the big guy, and thus doesn’t want any revolving doors on their platform. Windows XP supports exactly two filesystems (three variants each), both of them Microsoft native, and MS is a ton less helpful in allowing amateur developers to write drivers than they are with apps. A closed filesystem or just a closed system in general will allow for vendor lock-in. It’s not that having the filesystems open doesn’t help. It’s that MS found a way to hook you in anyway.
Also, you can access (including write to) NTFS filesystems from Linux; I currently do it on Ubuntu and it’s fine.