The journaled file system, which will run atop the Mac’s traditional HFS file scheme, will be switched off by default; users will be able to switch it on via the command line, sources said. They reported that while “Elvis” runs in the background, enabling Journaling FS will slow current system performance by 10 percent to 15 percent. Read the full report at eWeek.
Why such a huge slowdown in overall performance? BeOS had journaling and it was/is one of the fastest OSes around.
BeOS’ journaling filesystem is _not_ one of the fastest around. The OS feels fast, sure, but the fs is not that fast. Try navigating to your /boot/home/config/settings/ or any other directory with more than 200 files in it and you will see how “not fast” it can be.
And at the end of the day, journaling does come at a cost in speed. This is normal.
BTW, if you are talking about the “overall performance”, I agree: the OS should not feel slower, only filesystem operations should. If the _whole_ OS is so much slower, then indeed this OSX fs is not ready for prime time yet.
But you have to ask yourself; would BeOS have been faster without the journaling?
I wouldn’t run a server or a desktop that I did any serious work on that did not have a journaling filesystem.
Although Eugenia is right when she says BFS is not the fastest FS around, it is important to note that journaling is not the feature that makes it slow (in fact, in most cases, it makes it faster due to batching of transactions). Extended attributes and indexes are the main problem speed-wise. Anyway, BFS would not be BFS without these features, so I am pretty happy they’re there, even if they slow down the FS.
-Bruno
I am really anxious to see what this JFS feature actually is when it’s released. But I have high hopes that the FS will be nothing short of rocking, since Dominic is most likely working on this… yummy! Go, Dominic – go!!!
I wouldn’t run a server or a desktop that I did any serious work on that did not have a journaling filesystem.
Umm, why? Journaling may speed the recovery process of dirty filesystems at the cost of trading metadata corruption for file corruption. If a metadata operation is journaled before the filesystem cache is flushed, catastrophic results can occur. Journaling is a Faustian bargain.
The performance issues with journaling certainly aren’t localized to Apple. Sun introduced journaling for UFS (in the form of “logging”) but keeps it turned off by default due to the performance hit incurred.
I think many of us are still wondering when Apple is going to add soft updates support for UFS.
> Umm, why? Journaling may speed the recovery process of
> dirty filesystems at the cost of sacrificing metadata
> corruption for file corruption.
Please, explain. If a file is being written (i.e. there is file data still in the cache) when the system crashes for some reason (a reboot, for instance), the file data will be corrupt whether the FS is journaled or not.
> If a metadata operation is journaled before the
> filesystem cache is flushed, catastrophic results can
> occur. Journaling is a faustian bargain.
If it happens, it is the FS design that is flawed. It is not a fault of the journaling process.
> The performance issues with journaling certainly aren’t
> localized to Apple. Sun introduced journaling for UFS (in
> the form of “logging”) but keeps it turned off by default
> due to the performance hit incurred.
As I said, at least in BFS, journaling may make things faster due to transaction batching and sorting.
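The batching effect described here can be illustrated with a toy model (all names are hypothetical, and this is a sketch of the general idea, not of how BFS actually implements its log): each journal commit costs one synchronous disk write, so folding many metadata updates into one transaction turns N sync writes into a single commit.

```python
# Toy model: each journal commit costs one synchronous disk write.
# Batching N metadata updates into one transaction turns N sync
# writes into a single commit, which is where journaling can win.

class Journal:
    def __init__(self):
        self.disk_writes = 0   # synchronous writes issued
        self.pending = []      # updates batched into the open transaction

    def log_update(self, update):
        self.pending.append(update)

    def commit(self):
        if self.pending:
            self.disk_writes += 1   # one sync write flushes the whole batch
            self.pending = []

def unbatched(updates):
    """One commit per update: N synchronous writes."""
    j = Journal()
    for u in updates:
        j.log_update(u)
        j.commit()
    return j.disk_writes

def batched(updates):
    """All updates in one transaction: a single synchronous write."""
    j = Journal()
    for u in updates:
        j.log_update(u)
    j.commit()
    return j.disk_writes

updates = [f"inode-{i}" for i in range(100)]
print(unbatched(updates))  # 100
print(batched(updates))    # 1
```

Sorting the batched updates before the commit (as Bruno mentions) adds a further win on real disks, since the eventual in-place writes can be issued in seek order.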
-Bruno
not the speed or lack thereof of a JFS, but the speed at which Apple updates OSX. Do these guys even get to sleep or take a vacation?
I am using a journaling filesystem right now (ext3 on linux) – and it doesn’t slow down my computer (I used to use non-journaled ext2).
Why would this slow down MacOSX?? Doesn’t make sense to me.
I like my journaled filesystem for several reasons. The one that impacts me the most is not having to do a complete disk check after a fatal crash (my computers don’t crash too often, but my tinkering can sometimes get the best of my system 🙂). Now, instead of doing the disk check, it just checks the journal and then recovers the parts that were in limbo at the time of the crash – very handy, and all this normally takes less than a second.
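The journal-replay recovery described here can be sketched with a toy physical journal (an assumption for illustration; real journal formats are more elaborate): each transaction is a list of (block, value) writes followed by a commit record, and recovery applies only fully committed transactions, discarding a trailing partial one, with no full-disk scan needed.

```python
# Toy journal replay: apply fully committed transactions, discard
# the trailing partial one. The disk ends up consistent without
# scanning the whole volume, which is why recovery is so fast.

def replay(journal):
    disk = {}
    txn = []
    for record in journal:
        if record == "COMMIT":
            for block, value in txn:   # transaction is complete: apply it
                disk[block] = value
            txn = []
        else:
            txn.append(record)
    # records after the last COMMIT belong to an unfinished
    # transaction and are ignored, leaving the disk consistent
    return disk

journal = [
    ("inode-7", "size=100"), ("bitmap", "alloc 3"), "COMMIT",
    ("inode-9", "size=50"),  # crash happened before this txn committed
]
print(replay(journal))  # {'inode-7': 'size=100', 'bitmap': 'alloc 3'}
```

The half-written update to inode-9 simply vanishes, which is the "recovers the parts that were in limbo" behavior described above.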
I think Apple needs to figure it out. A 10-15% performance loss is just unacceptable.
My guess is that it would be the price you pay for legacy stuff. For all that matters, it is still HFS (now with journaling on top).
-Bruno
Please, explain. If a file is being written (i.e there is file data still in the cache) when the system crashes for some reason (a reboot, for instance), the file data will be corrupt if the FS is journaled or not.
I’m talking about an entirely new problem introduced by journaling filesystems, wherein some operation which alters both metadata and file contents occurs (such as, say, editing a file). If the system loses power after the metadata operation has been journaled but before the cache has been flushed, the result is a metadata entry with no associated file. The end result is a file full of zeroes.
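The ordering hazard being described can be simulated with a toy sketch (names hypothetical; a deliberately simplified model, not how any real filesystem is implemented): if the metadata update reaches the durable journal while the file data is still in the volatile cache at power loss, replay restores consistent metadata that points at blocks which were never written, so the file reads back as zeroes.

```python
# Toy model of the hazard: the metadata (new file size) reaches the
# journal, but the data is still in the volatile cache at power loss.
# After replay, metadata is consistent yet points at unwritten blocks.

def crash_scenario(journal_metadata_first):
    journal = []          # survives the crash (on disk)
    data_blocks = {}      # survives only if flushed before the crash
    cache = {"block-1": b"hello"}

    if journal_metadata_first:
        journal.append(("file.txt", "size=5", "block-1"))
        # -- power lost here: cache never flushed --
    else:
        data_blocks.update(cache)   # flush data first, then journal metadata
        journal.append(("file.txt", "size=5", "block-1"))

    # recovery: metadata says file.txt has 5 bytes in block-1
    name, size, block = journal[0]
    return data_blocks.get(block, b"\x00" * 5)  # unwritten blocks read as zeroes

print(crash_scenario(journal_metadata_first=True))   # b'\x00\x00\x00\x00\x00'
print(crash_scenario(journal_metadata_first=False))  # b'hello'
```

The second branch shows the mitigation discussed later in the thread: forcing data to disk before (or together with) the journaled metadata closes the window, at some cost in performance.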
I wonder if the next update to OS X Server 10.2 will include this (it just might not, if the feature is deemed too unstable for server usage), and if so, whether Apple will include a preference pane for switching it on and off, or even more advanced options.
> I’m talking about an entirely new problem introduced by
> journaling filesystems, wherein some operation which
> alters both metadata and file contents occurs (such as,
> say, editing a file). If the system loses power after the
> metadata operation has been journaled but before the
> cache has been flushed, the result is a metadata entry
> with no associated file. The end result is a file full of
> zeroes.
But the point of a journaled FS is that it guarantees that the on-disk structures are always consistent. It does not guarantee that file data won’t get corrupt. Still, I don’t see how it is different from the case I mentioned (a system crashing when a file is being written). You may not get a file full of zeroes but you will still get a file that is corrupt.
-Bruno
Journaling does not inherently slow down a filesystem. First, the speed of metadata performance only comes into play when dealing with small files, since with large files, the quality of the filesystem’s storage allocator (where it places blocks on disk) and block mapper (how it maps file offsets to disk addresses) outweighs everything else. XFS is a very good example of great large-file performance. For working with lots of small files, metadata performance becomes quite important. However, as ReiserFS (which has extremely good small-file performance) shows us, journaling does not necessarily have to have a performance impact on small files either; reportedly, this is because ReiserFS has an extremely high-quality journaling implementation. What is most likely the performance issue with this Apple filesystem is the underlying HFS. HFS (if you read Giampaolo’s own book) is an extremely antiquated filesystem, as weird (in comparison to modern filesystem design dogma) as VFAT. I wonder if it wouldn’t have been better for Apple to just use UFS with soft updates, which is a fully modern filesystem comparable to any on Linux.
If the system loses power after the metadata operation has been journaled but before the cache has been flushed, the result is a metadata entry with no associated file. The end result is a file full of zeroes.
but if I were dealing with this problem, I would implement it thusly:
if (metadata entry with no associated file)
    ignore metadata
else
    pay attention to metadata
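That check can be made concrete with a small runnable sketch (names hypothetical). The catch, in a real filesystem, is knowing whether the data block was actually written; that generally requires data checksums or full data journaling, which is part of why this isn't free.

```python
# Runnable sketch of the check above: during journal replay, a
# metadata entry whose data block never made it to disk is treated
# as an orphan and skipped.

def replay_entries(entries, written_blocks):
    """Apply only metadata entries whose file data actually reached disk."""
    applied = []
    for name, block in entries:
        if block in written_blocks:
            applied.append((name, block))   # pay attention to metadata
        # else: orphan entry, data never flushed -- ignore it
    return applied

entries = [("a.txt", "blk-1"), ("b.txt", "blk-2")]
written = {"blk-1"}                          # blk-2 was lost in the crash
print(replay_entries(entries, written))      # [('a.txt', 'blk-1')]
```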
<blockquote>What is most likely the performance issue with this Apple filesystem is the underlying HFS. HFS (if you read Giampaolo’s own book) is an extremely antiquated filesystem, as weird (in comparison to modern filesystem design dogma) as VFAT. I wonder if it wouldn’t have been better for Apple to just use UFS with softupdates, which is a fully modern filesystem comparable to any on Linux.</blockquote>
You’re probably right in the suggestion that tacking journaling onto HFS is the main reason that performance is going to take a dive.
As to why they didn’t use UFS or some other more modern filesystem as soon as OSX was released: it has to do with backwards compatibility. Many Apple users were running OSX alongside OS 9.x on their systems due to the lack of native OSX applications. Since OSX.2 (Jaguar) is eliminating support for dual booting alongside OS 9, I suppose at some future point they may migrate to a newer filesystem.
I wonder if it wouldn’t have been better for Apple to just use UFS with softupdates, which is a fully modern filesystem comparable to any on Linux.
but I guess this is one of the reasons why they want to get rid of OS9 booting?
Nick DePlume? That means ‘Think Secret’. Unless anyone who knows Dominic can verify, I’m not going to believe a word of it.
First off, you can’t just add the listed features ‘on top of’ HFS+. There’s a HELL of a lot more to it than that.
Second, I find it very difficult to believe that this journaling filesystem will allow you to roll back changes in a generic way. That’s generally not what journaling is for, except in the case of a potential disk-corrupting event. Usually a journal is a list of ‘this is what I am about to do’ written right before you do it, so that before you make any changes, you have a record in case the system goes down mid-file-write.
From what I understand DG is working at Apple, and he DID mastermind BFS, so there may be a story about a new FS coming from Apple, but hyperbole like this is just too much for me to swallow.
Still, I don’t see how it is different from the case I mentioned (a system crashing when a file is being written). You may not get a file full of zeroes but you will still get a file that is corrupt.
Journaling adds a larger window in which corruption can occur. Without journaling, corruption can occur WHEN the buffer cache is being flushed. With journaling, corruption can occur any time between when a metadata operation is journaled and when the buffer cache is flushed. The only solution is to flush the buffer cache synchronously with when metadata operations are journaled, decreasing performance benefits gained from using a buffer cache.
This risk can be minimized, and further performance benefits reaped, by buffering metadata operations and committing them along with the syncing of the buffer cache (i.e. soft updates). The only problem with this is when the buffer cache can’t be flushed (i.e. the volume is full) and the metadata operations have already been completed (e.g. moving files between two volumes using soft updates).
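The soft-updates idea mentioned here can be sketched in miniature (a hypothetical model of the ordering discipline only, not of the real BSD implementation): buffered metadata blocks carry dependencies, e.g. an inode must reach disk before the directory entry that points to it, and the flush walks them in dependency order so the disk never sees a pointer to uninitialized structures.

```python
# Toy sketch of soft updates' write ordering: metadata blocks are
# buffered with dependencies, and flushed in dependency order
# (a topological sort), assuming the dependency graph is acyclic.

def flush_order(deps):
    """deps maps block -> list of blocks that must hit disk first."""
    order, done = [], set()

    def visit(block):
        if block in done:
            return
        for dep in deps.get(block, []):
            visit(dep)               # write prerequisites first
        done.add(block)
        order.append(block)

    for block in deps:
        visit(block)
    return order

# directory entry depends on the inode; inode depends on the bitmap
deps = {"dir-entry": ["inode"], "inode": ["bitmap"], "bitmap": []}
print(flush_order(deps))  # ['bitmap', 'inode', 'dir-entry']
```

Because ordering alone keeps the on-disk structures safe, no synchronous journal write is needed, which is where soft updates recover the performance that journaling gives up.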
I’m simply saying that there is no silver bullet in terms of filesystems. Journaling does not guarantee filesystem integrity any more than a filesystem without journaling or a filesystem with soft updates. All of these alternatives have drawbacks and nasty consequences in the event of system failures.
The only major advantage of a journaled filesystem is faster recovery times in the event of a system failure.
>>>
Journaling adds a larger window in which corruption can occur. Without journaling, corruption can occur WHEN the buffer cache is being flushed. With journaling, corruption can occur any time between when a metadata operation is journaled and when the buffer cache is flushed. The only solution is to flush the buffer cache synchronously with when metadata operations are journaled, decreasing performance benefits gained from using a buffer cache.
<<<
This is exactly what’s done actually. There is still a plentiful performance benefit to be had from being able to batch and reorder the actual operations, as Bruno pointed out. The loss from having to do synchronous writes is fairly small by comparison.
>>>>
BeOS’ journaling filesystem is _not_ one of the fastest around. The OS is feeling fast, sure, but the fs is not that fast. Try navigating to your /boot/home/config/settings/ or any other directory with more than 200 files in it and you will see how “not fast” it can be.
<<<<
This particular issue is due to the poor VM/fs cache in BeOS; this isn’t a performance flaw in BFS itself or in journaling in general.
Journaling is generally faster than synchronous metadata updates (i.e. normal UFS) when you deal with many files.
UFS + softupdates is extremely fast though.
However, HFS supports the silly resource forks etc. and is compatible with all the older stuff.
If everything gets cocoized and we lose all links to the past, I believe Apple could move to a better filesystem.
Extent-based filesystems can also be excellent for performance if properly implemented (VxFS, XFS).
Bascule:
I’m a FreeBSD advocate. However, depending on the journaling, you get varying degrees of protection. Check out ext3’s writeback, ordered and journal modes please.
VxFS can do something similar if needed.
Most other filesystems, including ReiserFS and NTFS only provide metadata journaling.
Bear in mind that softupdates can get seriously screwed up if the power goes down in the middle of a large operation (the performance tradeoffs are usually worth it though).
In the case of VERY large filesystems, believe me, you don’t want to be waiting for an fsck of a non-journaled 1TB filesystem…
D
Is that ridiculous or what??
So lessee here… I pay way too much for my Apple hardware – we’ll say $1699, which is the opening price for their cheapest midtower.
That’ll get me a dual 867, which, never minding the fact that they don’t outperform their predecessors and in raw power can’t match a similarly priced x86 box, means that I get about a 1734MHz system, give or take.
So now when I want to use a file system that is arguably as good as, if not better than, that of the BeOS (and which would run happily on a 200MHz box with no noticeable performance impact), I am in effect lowering my machine’s performance down to that of a 1474MHz PC???
They’ve gotta be kidding!
I mean I appreciate the benefits of a journaled filing system, but come on. What is it that Jobs and Apple have less of a grasp of?? Modern computers, or consumers?
Lessee… Someone can correct me if I’m wrong, but I see Apple stock hitting new lows soon with this kind of stuff.
What a crock of crap… I feel sorry for those who have docked their boat in the Apple harbor.
I have a question – when ext3 debuted for Linux, was it comparably slower, like this Mac journaling file system would be? For what it’s worth, I do not recall feeling or seeing any performance drop when converting over. I can understand that it will always be slower on some level – but 15% seems a bit excessive – perhaps because they retain HFS+? What if they scrapped it and made a fully modern, from-the-core-up journaling filesystem? Or would that not do anything?
The loss from having to do synchronous writes is fairly small by comparison.
So which filesystems are you referring to which don’t use a write cache? XFS certainly does…
Well, if this all true, it should be interesting. Apple keeps coming up with surprises.
If true, I can’t imagine that Apple would just leave it that way – surely they would start working on performance immediately.
But, perhaps it is the big hint that Steve Jobs dropped, that there will be “options” when OS 9 is gone forever.
Mr. Cancelled, it all depends on how you look at it. In one sense you’re getting a new, very desirable for some, feature at no extra cost. And it’s optional, you don’t have to turn it on if you don’t want to or need to. Tomorrow Apple announces its quarterly earnings after the stock market closes. It will be interesting to see what they are, especially because of the big fall-off of iMac purchases, etc. But, if they continue to make money, there will be no stock drop-offs.
x86 architecture is NOT the same as the PPC architecture. Look at the difference between your beloved Athlons and then those loathed Pentium 4s: to put it simply, the Athlon gets more performance to the MHz.
It’s the same way with PPC/x86: PPC gets more performance to the MHz.
Also, you can’t just multiply the MHz per CPU on a dual-CPU system to get an idea on the speed. The actual speed depends on the application(s) you are using and the way they were coded.
I wouldn’t mind running a Phase Tree system.
For those that don’t know, Phase Tree is where the file system is structured into a tree and data is written before its root, so that if the operation isn’t complete, it will just ignore the data since it doesn’t have a root. I think that’s the gist.
I think the only one to use Phase Tree is TuxFS, which has, last I heard, stopped development to work out a patent issue.
It’s supposed to use less space (I think you lose around 6% of your hard drive space to the journal; not sure of the exact numbers for either), and I thought I heard that the idea is it’s slower under low load but faster under high load.
Have some perspective here. If we are talking about times on the order of milliseconds, then a 10-15% slowdown may be nearly imperceptible. If HFS+ is slow to start with–and I don’t know if it is or not–then that 10-15% slowdown may be significant. But 10-15% doesn’t mean much unless we know the original amount off which the percentage is based.
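The point about baselines is just arithmetic, but worth making concrete (the baseline figures below are made up for illustration):

```python
# A fixed percentage slowdown only matters relative to the baseline:
# 15% of a 10 ms metadata operation is 1.5 ms (imperceptible), while
# 15% of a 2-second directory scan is an extra 300 ms you will feel.

def extra_cost_ms(baseline_ms, pct=15):
    """Absolute time added by a pct% slowdown of a baseline operation."""
    return baseline_ms * pct / 100

print(extra_cost_ms(10))    # 1.5
print(extra_cost_ms(2000))  # 300.0
```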
Just thought I’d post a link since I just found it
http://people.nl.linux.org/~phillips/tux2/
ReiserFS has some really interesting stuff in the pipeline. There will be files that can be used as directories in Reiser4, creating a very convenient and efficient way to store metainformation. They will also have a plugin architecture, making encryption, compression and security extensions very easy to implement. And they already have the best performance and best space efficiency for small files. That would be very nice for all those metadata directories OS X uses.
BeFS was really nice, but I do not think that anybody can come up with a better filesystem than ReiserFS in the next few years!
The same reason they can’t use XFS, and the same reason the FreeBSD (and other BSD) projects can’t use XFS/ReiserFS/etc, the GPL and its “viral” nature. The XNU/Darwin sources are distributed under the APSL, not the GPL.
“Look at the difference between your beloved Athlons and then those loathed Pentium 4s: to put it simply, the Athlon gets more performance to the MHz.”
Yeah sure… I bought an Athlon then sold it after three weeks and bought a PIV. There is a difference indeed : )
“Yeah sure… I bought an Athlon then sold it after three weeks and bought a PIV. There is a difference indeed : )”
Thanks for sharing, now can we stick to the topic?
With the amount of documentation behind ReiserFS, it would be easy for them to build their own implementation. But I doubt the license is the issue. The issue is that many legacy apps rely on HFS+, which is probably why they chose it over UFS.
I imagine Apple could get it for less than what they pay their ad agency for one of those lame Switch ads. It’s not like those twits at PalmSource are using it for anything.
It would be nice, but I think it would be far easier for Apple to implement their own BFS based on the documentation Be provided, because BeOS and Darwin are very different.
Plus, buying BFS from Palm isn’t a good idea when there is a (still unstable) full implementation of BFS available from OBOS. Why waste money? I mean, Apple only has 4 billion bucks; they are no Microsoft…
Speaking about Switch ads, I got proof that Final Cut Pro is the best video editing tool. Not only can you edit the video, you can translate monkeytalk into English and change the appearance of a person.
Before:
http://homepages.nyu.edu/~jgg221/
After:
http://www.apple.com/switch/ads/ellenfeiss.html
> > I’m talking about an entirely new problem introduced by
> > journaling filesystems, wherein some operation which
> > alters both metadata and file contents occurs (such as,
> > say, editing a file). If the system loses power after
> the
> > metadata operation has been journaled but before the
> > cache has been flushed, the result is a metadata entry
> > with no associated file. The end result is a file full
> of
> > zeroes.
>
> But the point of a journaled FS is that it guarantees that
> the on-disk structures are always consistent. It does not
> guarantee that file data won’t get corrupt. Still, I don’t
> see how it is different from the case I mentioned (a
> system crashing when a file is being written). You may not
> get a file full of zeroes but you will still get a file
> that is corrupt.
Check out ext3’s different modes. One of them also ensures file-data consistency, not just metadata. Of course this costs performance, but if you don’t work with movies (like I don’t), use it and be assured that your data will always be consistent, unless hardware failures destroy your disk.
PS: I once read that ext3 had some special handling for possible hardware faults, so maybe it will help with that too.
They’re just trying to make OS X, BE (buzzword enabled). Without proper buzzword compliance they don’t meet industry standards. 🙂
Meanwhile we can all laugh at the whole thing. I mean, they could have had this what, like six years ago?
“Meanwhile we can all laugh at the whole thing. I mean, they could have had this what, like six years ago?”
Yeah, probably, but W2000 doesn’t have proper journaling even today. Windows is supposed to add that in 2005.
<<<
Second, I find it very difficult to believe that this journaling filesystem will allow you to rollback changes in a generic way.
>>>
Why? Databases are doing exactly that for decades now, and the VMS filesystem also took steps in this direction.
<<<
That’s generally not what journaling is for except in the case of a potential disk-corrupting event.
>>>
I’d rather say that full data journaling has not been widely implemented in FS because of performance issues; journaling the metadata only is a useful compromise here.