From the AppleInsider forum comes an interesting discovery about ZFS and Apple. A user who has the Leopard WWDC preview searched the system with Spotlight and found a mention of ZFS. He says: “There is no file system bundle for it, nor is there a mount utility or any other one (no fsck, no newfs, etc.). There is, however, a changed vnode.h.”
I thought the real giveaway was how, in the keynote, the presenter gave the example of accidentally saving instead of using Save As, and that you could go back in time and undo that.
If Time Machine were designed to back up just once every midnight, you could hardly go back in time to undo specific short-term mistakes like that.
Time Machine is most certainly based on the same notification system used by Spotlight, which would allow backing up each time a file changes.
So a save could be backed up practically instantaneously. I can’t imagine Apple making a daily backup when they have implemented the whole Spotlight engine and notification system in the kernel.
Actually, I am quite surprised nobody has proposed a backup system for OS X based on that very notification system.
If you want more info, read the following article by Amit Singh (whose book about the OS X kernel is great and which I highly recommend):
http://www.kernelthread.com/software/fslogger/
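The idea of driving a backup from file-change events can be sketched without the kernel interface. The real fsevents feed described in Singh’s fslogger article is Mac OS X-only, so this is a crude, portable stand-in that polls with a timestamp marker instead of receiving kernel notifications; all paths and names here are invented for illustration:

```shell
# Stand-in for a kernel change-notification feed: remember "now" with a
# marker file, then ask for everything modified after it. A Time
# Machine-style tool would have these paths pushed to it by the kernel.
WATCH_DIR=$(mktemp -d)
MARKER="$WATCH_DIR/.marker"

touch "$MARKER"                         # remember "now"
sleep 1                                 # make sure later writes look newer
echo "changed" > "$WATCH_DIR/demo.txt"  # simulate user activity

# Everything newer than the marker is a candidate for incremental backup.
CHANGED=$(find "$WATCH_DIR" -type f -newer "$MARKER" -print)
echo "$CHANGED"
```

The point of the kernel notification approach is precisely that you never have to run that `find` over the whole disk: the changed paths arrive as they happen.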
I thought the real giveaway was how, in the keynote, the presenter gave the example of accidentally saving instead of using Save As, and that you could go back in time and undo that.
A whole lot of previous solutions had this feature; it even exists on Windows Server. Some log changes and some log file versions.
If Time Machine were designed to back up just once every midnight, you could hardly go back in time to undo specific short-term mistakes like that.
And you never question why continuous versioning never replaced scheduled backup? Scheduled backup still rules. I have tried a few solutions like Time Machine so far, and I always went back for the same reason: I solved one problem but got 20 new ones.
Let me give you a clue. You work on a large movie or picture. Simply change the background and the complete file is different (which can make the stored change even larger than the original file). Now how many copies of that large file will exist?
This approach is acceptable for non-binary, diffable formats; for binary (especially lossy) file types? Not really. And the main trouble is that binary files are usually the larger ones. It is maybe even acceptable for databases, since they don’t rewrite the whole file at once, but the next point about slowdown could prove that wrong (or not, because when a database is written no new file handle is created, and the changes could be intercepted and logged if this were implemented correctly).
The second problem here is how a file gets written: write to a temporary file, read the old and the new one to create a diff, store the diff and its metadata in an internal structure, rename the temporary file over the original, and delete the previous version.
Now imagine this write with a 1 GB file, or an even larger one.
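The save path described above can be sketched in a few lines; file names here are illustrative:

```shell
# Versioned save: write new contents to a temp file, store a diff
# against the old version, then atomically rename into place.
DIR=$(mktemp -d)
DOC="$DIR/report.txt"
printf 'line one\nline two\n' > "$DOC"          # existing version on disk

TMP="$DOC.tmp"
printf 'line one\nline two edited\n' > "$TMP"   # new version from the app

# Read old and new to produce the delta; for a 1 GB file this step
# alone means reading about 2 GB before the save can finish.
diff -u "$DOC" "$TMP" > "$DIR/report.v1.diff" || true  # diff exits 1 on change
mv "$TMP" "$DOC"                                 # rename over the old file
```

For a small text file the diff is tiny; for a large binary where every byte shifts, the “delta” can be as big as the file itself, which is exactly the objection being made.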
Just because ZFS is mentioned in an enum in a header does not at all mean that ZFS will be included in Leopard. You will find, for example, lots of processor architecture defines in windows.h (MPPC, 68k!), and Windows does not support those.
Eventually Apple will switch to ZFS, but changing the default filesystem of an OS takes time because it simply has to work perfectly. No errors are acceptable there.
As was pointed out in the thread, maybe in 10.6.
You will find for example lots of processor architecture defines in windows.h (MPPC, 68k!) and Windows does not support those.
Ehum, Windows NT runs on Alpha, PowerPC, SPARC, MIPS, and the relatively unknown Clipper architecture (among others), so mentions of those in Windows are anything but exciting.
I know that the NT kernel works on those archs, although I do not think it was ever ported to 68k. Nevertheless, there is no more Windows XP or even Windows 2000 support for anything but x86/x86_64/Itanium. And since the kernel has evolved since NT 3.5 and NT 4, I am pretty sure MS does not care about the other archs anymore.
That doesn’t prevent MS from simply keeping the defines for other archs for people that still target the old architectures in one way or another.
That doesn’t prevent MS from simply keeping the defines for other archs for people that still target the old architectures in one way or another.
Exactly. Other than that, what if a high-profile customer wants a custom version of NT to run it on a non-x86 architecture? Do you think Microsoft would not help them out?
In fact, that’s how NT got ported to that Clipper architecture.
Incommunicado:
there is no more Windows XP or even Windows 2000 support for anything but x86/x86_64/Itanium. And since the kernel has evolved since NT 3.5 and NT 4, I am pretty sure MS does not care about the other archs anymore.
Well, I’m afraid they cared a lot about the PPC architecture: the Apple G5s were recently used as developer systems for the Xbox 360. It looks like those ran a streamlined NT kernel derived from the mentioned NT version for PPC (Motorola and other RISC machines, not Apple’s).
I agree. Chances are this is probably a groundwork kind of thing.
It’ll be there for 10.6, 10.7, or whatever.
The FreeBSD guys are well into porting ZFS. In 10 days they have been able to create and mount ZFS filesystems. There is still more work to be done, but this was one guy and 10 days of work! Imagine what Apple can do.
With ZFS and zvol you can create a volume and put a different filesystem on it; currently UFS is supported in Solaris.
Apple could use most of ZFS, overlay HFS+ on a zvol, and get snapshots. Apple ported DTrace for 10.5; I don’t think porting ZFS is that far-fetched.
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=227886+0+current/freeb…
I think Apple was waiting for the FreeBSD port of ZFS before completing its own. FreeBSD is a “significant” part of XNU/Darwin. Only time will tell.
Indeed. It probably has more to do with the initial synchronisation of Darwin and FreeBSD than anything else at this point.
It’s certainly possible that ZFS will make its way into Leopard, but UFS2 could have been in Tiger and didn’t make it.
Apple is slow to integrate new file systems because they need to assess the value added and the impact to Spotlight.
I second that: Apple engineers have been talking to ZFS people, and they probably will look to have ZFS integrated into Mac OS X as soon as possible, but getting it done within the Leopard time-frame is unlikely.
John Siracusa wrote about this very subject a while back: http://arstechnica.com/staff/fatbits.ars/2006/8/15/4995
but getting it done within the Leopard time-frame is unlikely.
Not so unlikely.
http://blogs.sun.com/roller/page/eschrock?entry=zfs_on_freebsd
“Pawel Dawidek has been hard at work, and has made astounding progress after just 10 days (!).
As with any port, the hard part comes down to integrating the VFS layer, but Pawel has already made good progress there. The current prototype can already mount filesystems, create files, and list directory contents.”
If one man can do this in ten days, we can easily imagine what a developer team at Apple can do.
It’s not only about the filesystem itself, that’s rather easy.
It’s about using ZFS’s features instead of HFS’s, and adding features from HFS which are missing in ZFS because of the different inner workings of the two file systems. There is also the work of creating usable, integrated userspace tools, and so on.
From the original source:
Kickaha is correct that Time Machine uses hard links, but instead of rsync it uses the same filesystem notification mechanism Spotlight uses to flag files as changed. The upside is that, like Spotlight, your system will easily know which files have been changed without having to search the whole drive. The downside is that every time a file is changed it will have to be copied in its entirety to the backup. For small files this is no biggie, but for larger files it kinda sucks. The other downside is that Time Machine runs by default at midnight, and copies changed files for that day. Time Machine won’t help you if you made several changes to a file in one day and want to revert to an earlier state. The only choice you will have is to revert to a copy from the day before.
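The hard-link part of that scheme is easy to demonstrate. A minimal sketch (directory names are made up): a file that did not change between backup runs is hard-linked into the new snapshot, so each “full” snapshot only costs space for the files that actually changed.

```shell
# Hard-link snapshots: day2 looks like a full backup, but the unchanged
# file shares its inode (and its data blocks) with day1's copy.
BK=$(mktemp -d)
mkdir -p "$BK/day1" "$BK/day2"
echo "v1" > "$BK/day1/file.txt"              # day 1: file copied for real
ln "$BK/day1/file.txt" "$BK/day2/file.txt"   # day 2: unchanged -> hard link

# Both snapshot entries point at the same inode.
ls -li "$BK/day1/file.txt" "$BK/day2/file.txt"
```

This is also why a changed file must be copied whole: a hard link can only share the entire file, never just the unchanged blocks, which is where a block-level snapshot filesystem like ZFS would do better.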
Neither ZFS nor UFS can directly support some advanced features of HFS/HFS+, such as files with multiple forks. These files date back as far as the original Mac OS but can still be used on the latest Mac OS X.
A typical Carbon application converted from Mac OS to Mac OS X contains a resource fork and a data fork. Mac OS X-native applications are no longer built this way; they keep their resources in a separate file or a bundle of files instead.
Since ZFS and UFS and the various other UNIX and Linux file systems are unaware of this, they can’t handle classic Mac OS applications and files properly.
The move to x86 processors will limit this, since older applications will not likely be converted to run on the newer hardware, but as we know, people don’t give up their favourite things so easily.
ZFS can probably handle resource forks just fine, by treating them as attribute streams.
Neither ZFS nor UFS can directly support some advanced features of HFS/HFS+, such as files with multiple forks. These files date back as far as the original Mac OS but can still be used on the latest Mac OS X.
ZFS allows the creation of emulated volumes, which are just block devices. Disks are block devices, and you can put filesystems on block devices. You create a zvol and format it with HFS/HFS+ if you wish. Since volumes are created from the pool, all the features of ZFS, like snapshots and RAID-Z, are available to them. So you could take a Mac Pro, mirror the 4 internal disks to create a pool, create a zvol of the entire size, and make it look like one “Macintosh HD” with HFS+. The users would be none the wiser.
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qsl?a=view
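On Solaris today, the overlay described above would look roughly like this (pool, volume, and device names are invented for illustration; Apple would presumably format the zvol with HFS+ rather than UFS):

```shell
# Mirrored pool, emulated volume, foreign filesystem on top.
zpool create tank mirror c1t0d0 c1t1d0   # pool across mirrored disks
zfs create -V 100g tank/macvol           # emulated volume (zvol)
newfs /dev/zvol/rdsk/tank/macvol         # put UFS on the raw zvol device
zfs snapshot tank/macvol@before-upgrade  # pool-level snapshot still works
```

The snapshot is taken at the ZFS layer, underneath the foreign filesystem, which is exactly why the overlaid HFS+ volume would inherit it for free.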
If you have ever worked with ZFS, you never want to use anything else. Everybody talks about the features, performance, etc., but for me the most important thing is: it’s so easy to use! Its intuitive options and excellent documentation bring light to a dark admin day and speed things up.
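To give a taste of that ease of use, a typical admin session is only a handful of commands (pool and dataset names here are invented):

```shell
# One command creates the pool and a mounted default filesystem.
zpool create tank mirror disk1 disk2
zfs create tank/home                  # new filesystem, instantly mounted
zfs set compression=on tank/home      # per-dataset property, no reformat
zfs snapshot tank/home@friday         # instant snapshot
zfs rollback tank/home@friday         # undo everything since
```

No partition tables, no /etc/fstab editing, no fsck: properties and snapshots are just verbs on datasets, which is what makes the tool feel intuitive.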