MacSlash reports that when a file is accessed on Panther, a check is made to see whether it is fragmented. If it is, and it is less than 20 MB in size, the filesystem copies the file to a contiguous area of the hard drive large enough to hold it in its entirety, and then frees the space the fragmented version occupied. There are cases, though, where a third-party defrag utility will be required for best results.
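For anyone curious what that check might roughly look like, here is a minimal user-space sketch in C of the logic as described (fragmented, under 20 MB, relocate to one contiguous run, free the old blocks). All names here (file_node, qualifies_for_autodefrag, relocate_contiguously) are hypothetical illustrations, not the actual Darwin/HFS+ functions.

```c
/*
 * Hypothetical sketch of the on-access defrag decision described above.
 * The real implementation lives in the HFS+ code in the Darwin sources;
 * everything here is a stand-in for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define AUTODEFRAG_MAX_BYTES (20ull * 1024 * 1024)   /* only files under 20 MB */

struct file_node {
    uint64_t size_bytes;    /* logical size of the file */
    unsigned extent_count;  /* number of separate on-disk extents */
};

/* A file is fragmented if its data is split across more than one extent;
 * only small fragmented files qualify for automatic relocation. */
static bool qualifies_for_autodefrag(const struct file_node *f)
{
    return f->extent_count > 1 && f->size_bytes < AUTODEFRAG_MAX_BYTES;
}

/* Stand-in for the real work: copy the data into one contiguous free run,
 * point the catalog record at the new location, free the old blocks. */
static void relocate_contiguously(struct file_node *f)
{
    f->extent_count = 1;    /* simulation only */
}

int main(void)
{
    struct file_node small_frag = { 5ull * 1024 * 1024, 7 };   /* 5 MB, 7 extents: qualifies */
    struct file_node big_frag   = { 40ull * 1024 * 1024, 3 };  /* 40 MB: over the limit, skipped */

    if (qualifies_for_autodefrag(&small_frag))
        relocate_contiguously(&small_frag);

    printf("small file extents now: %u\n", small_frag.extent_count);          /* 1 */
    printf("big file qualifies? %d\n", qualifies_for_autodefrag(&big_frag));  /* 0 */
    return 0;
}
```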
That is a nice feature… and it makes sense as well.
Or you could copy the files to another partition, then rebuild your filesystem and copy the files back… and all the fragmentation would be gone…
power of the command prompt…
not bad to be in the *nix family….
read, learn, explore and enjoy….
…the question is how this affects the “Secure Empty Trash” feature of Panther. According to Apple, when using this feature any files in the trash will be “overwritten with the same 7-pass algorithm approved by the Department of Defense”. However, the article states that this defragmentation feature copies a file from its original (fragmented) position to a new (non-fragmented) position on the drive. Now let’s say I delete the file and use Secure Empty Trash, thinking it is permanently gone. While I’m no expert, I would assume it would still be possible to retrieve the *original* fragmented version of the same file using some file restoration software. If so, this would essentially bypass the SET feature!
Please feel free to comment,
BR//Karl -> qwilk
I guess my post was kinda flamebait, but seriously:
No one’s doubting that MS’s R&D budget is much larger than Apple’s, so why does Apple come up with these kinds of concepts when MS doesn’t?
I think it’s a valid question.
This feature appears to sacrifice immediate filesystem speed for (potential) future speed improvements. I can easily think of cases where this might not be the ideal behavior. Imagine a system with continuous filesystem access to many variably sized temporary files, ones that don’t have a long life but are destroyed fairly quickly after creation. Some databases might exhibit this behavior. In that case the total speed would be slower, not faster. I wonder how smart the feature is, and whether it shuts itself off when it might degrade performance.
In general, though, I like the idea of automating as much of this as possible. For a desktop system, I imagine that the trade-off would be generally positive.
Erik
Actually this kind of on-the-fly defragging has been in FreeBSD for a while. But it’s great to have it in OS X as well.
R&D Budget?
Didn’t BeOS have an auto-defragging file system? I’m sure Eugenia will remind us 😉
Seriously though, this is something I would love to see appear in Linux — anyone know if there’s an fs there already that does it?
One answer to the question could be that MS is doing research in a wide range of subjects (Office, graphics, Xbox, etc.), not only OS-related stuff. On the other hand, if you don’t have any good ideas, you won’t produce any good innovations.
/D
Microsoft is in the business of making money. They let others be the pioneers and take the arrows; they are the settlers and get the corn.
If it is a database over 20 MB, then it will not be affected by this, which is good.
“windows nt/2k/xp has been able to do this for years”
Really? I think I’ll give it a try. Can you post how to?
At some point, however, it would be nice if OS X had a way of doing this manually, in case you have a bunch of large files, e.g. video, that could benefit from being “defragged”.
BeOS did *not* have an auto-defragmenting FS. Its FS (like most modern FSs) was designed to allocate in a way that avoided fragmentation, but that was it.
Although, yes, I know Reiser4 is getting to be a little vaporware-ish at this point (not gonna make it into 2.6.0 for sure), I remember seeing something about it doing something like this. There was supposed to be some sort of auto-defrag daemon. However, this seems to have been regarded as a hack and generally a bad thing rather than a positive. Can anyone explain further?
Not sure how the OS X implementation works, but the best way to ensure files are really deleted is to have a background task wipe all the free, unallocated space while the disk is idle, say a couple of times a week, in addition to the wipe on each delete.
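As a rough illustration of that free-space-wipe idea (a single zero pass only, not the 7-pass scheme SET uses): create a filler file on the target volume, write zeros until the volume runs out of free space, then delete the filler. The path below is just an example, not anything Panther actually does.

```c
/*
 * Sketch of a one-pass free-space wipe: fill the free space on a volume
 * with zeros, then delete the filler file, so previously freed blocks no
 * longer hold old file contents. Illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* Filler file on the volume you want to wipe (example path). */
    const char *path = (argc > 1) ? argv[1] : "/tmp/wipe.fill";
    FILE *fp = fopen(path, "wb");
    if (!fp) { perror("fopen"); return 1; }

    char zeros[1 << 16];
    memset(zeros, 0, sizeof zeros);

    /* Keep writing zeros until the volume runs out of free space. */
    while (fwrite(zeros, 1, sizeof zeros, fp) == sizeof zeros)
        ;
    fflush(fp);
    fclose(fp);

    /* Deleting the filler returns the now-zeroed space to the volume. */
    remove(path);
    return 0;
}
```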
You guys take deleting your files seriously.
Well, personally I don’t, but some people may. After all, if someone (e.g. a government official) were to use SET to protect confidentiality/privacy, etc., and that file could still be retrieved, that wouldn’t be a good thing. On the other hand, it is also possible that criminals could use SET to cover their tracks more easily. The feature is a two-edged sword, so to speak…
BR//Karl -> qwilk
Does anybody know if any of the Linux filesystems have similar anti-fragmentation methods? I use XFS on my workstation and I have seen there is no defragmenting tool…
That’s because most of the recent filesystems in Linux do it automatically.
Then again, so does NTFS (I believe version 4 and up, though it may be 5).
So all in all, nothing new. Same old, same old…
Or you could copy the files to another partition, then rebuild your filesystem and copy the files back… and all the fragmentation would be gone…
power of the command prompt…
not bad to be in the *nix family….
read, learn, explore and enjoy….
Don’t be such a *nix weenie. That’s hardly convenient, quite different from the point of the news post, and 2k/XP can easily do that while the system is running as well.
Just get some perspective, you’ll sound more credible.
I don’t think I have ever defragmented a UNIX system, ever. fsck will report fragmentation numbers, but they are always very low. I think this is a side effect of UFS’ allocation scheme; does OS X not do the same?
Surely there is a significant performance hit in doing this check every time a file is “accessed”.
Mmm, I’m betting it’s not every access; it’s more than likely a hook on the open and close system calls, which means you could also easily add it to other OSes.
Personally I would rather have the old VMS defrag utility. It zoned the disk and knew which files to stack together (old data files, files not accessed in a long time, etc.). It would also leave a bit of space around the files it knew would grow, and it was a daemon. Sigh.
Donaldson
If you want more info, follow the link to the Ars forum. The guys over there found that piece of code in the Darwin sources, and the code is beautifully documented. I can comment on the performance hit, though: I can’t feel any, except that sometimes I hear a hard click from the hard drive, but that is it. Here is the link:
http://arstechnica.infopop.net/OpenTopic/page?a=tpc&s=50009562&f=83…
Actually it can, but this bit of code is especially intended for HFS+, which is still the recommended partition type to use, pretty much for backwards compatibility with Classic and some Carbon apps. Although with journaling and now this, Apple certainly has breathed some life into the older FS.
fsck will report fragmentation numbers, but they are always very low.
UFS fragmentation is a completely different beast: UFS has the ability to split disk blocks into fragments and use them for small files and file ends, storing the data of several files into the same block. This way you can have large disk blocks (4K or more) which benefit large files, while avoiding wasting disk space when storing small files.
That said, UFS belongs to the group of filesystems which try to minimize block-level fragmentation by storing new files into contiguous areas of free blocks.
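To make that block/fragment accounting concrete, here is a small C sketch assuming the classic BSD FFS defaults of 8 KB blocks and 1 KB fragments; the actual values depend on how the filesystem was created.

```c
/*
 * Worked example of UFS-style block/fragment accounting: full blocks for
 * the bulk of a file, fragments for the tail. 8 KB / 1 KB are typical
 * BSD FFS defaults, used here only for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define BLOCK_SIZE 8192u   /* full block */
#define FRAG_SIZE  1024u   /* a block can be split into 8 fragments */

static void show_allocation(uint64_t file_bytes)
{
    uint64_t full_blocks = file_bytes / BLOCK_SIZE;
    uint64_t tail        = file_bytes % BLOCK_SIZE;
    /* The tail of the file goes into fragments rather than a full block. */
    uint64_t tail_frags  = (tail + FRAG_SIZE - 1) / FRAG_SIZE;

    printf("%8llu bytes -> %llu full blocks + %llu fragments\n",
           (unsigned long long)file_bytes,
           (unsigned long long)full_blocks,
           (unsigned long long)tail_frags);
}

int main(void)
{
    show_allocation(500);     /* tiny file: 0 blocks + 1 fragment */
    show_allocation(9000);    /* 1 full block + 1 fragment        */
    show_allocation(20000);   /* 2 full blocks + 4 fragments      */
    return 0;
}
```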
I would like to see a benchmark of how this affects performance.
The reason Microsoft has not implemented something like this is the speed overhead involved. But I would like to state that 2000 & XP have the architecture to do this just as fast as OS X can.
Just for testing, does anybody know how to disable this auto-defrag in OS X 10.3? We’re not getting 10.3 here in the office for a couple more weeks.
NTFS does? Hmm, that is weird… I ran my XP laptop for a year, and when I looked at it, it had 15% fragmentation.
Yeah, you get a speedy system for a few months and then it is slow as a dog during disk access.
XP NTFS defragments itself? What a laugh. I’ve got two computers here at work. One is a Mac with Panther (10.3) and the other is a PC with XP with all the service packs and updates.
I defrag the XP on the 1st business day of every month. It is currently October 30th at 12:37 pm.
It reports that 4% of my files are fragmented. I checked to see how many files were on my C: drive and how many have changed since October 1st. Guess what: EVERY file larger than one block that has changed since October 1st is reported as fragmented, meaning that ZERO files were stored contiguously.
NTFS defragments itself. ROFL. Man the pain. I can’t stand the pain. Please help me stop laughing.
Well, it’s good to know that my PowerBook has a modern FS. I personally haven’t ever seen fragmentation as a huge problem on the Unix platforms I have worked with. I seem to recall commands to defrag LVM filesystems on HP-UX, to what end I do not know. It seems to be primarily a consideration for Windows OSes.
Since some people here seem to get their chuckles from file fragmentation on NTFS, I thought this link deserves a mention:
http://www.abakus.cz/materialy/IeLV7I17.htm#1
Summary: Temporary files, especially the kind created by browser caches and setup programs, leave empty spaces on the drive, which get filled in with files that ‘match’ the space, leading to defragmentation.
I don’t know enough about the NTFS driver itself to comment on the ‘automatic’ aspect of it, but if you have Norton installed, there’s a thing known as the Speed Disk service which starts defragmenting when the PC is idle.
Whoops! That should read ‘…leading to fragmentation’ instead of ‘…leading to defragmentation’.
OT: Can we have a preview button next to the Submit comment button please? Thanks!
Between my two roommates and me, we have five computers. Two of them are WinXP only, one is a Win2k Adv Server – Solaris 9 – Red Hat 9 – FreeBSD 4.8 multiboot, and the other two are WinXP – Linux.
In all cases, our Windows experiences have been the same: quite speedy after a fresh install, but within a matter of months, depending on usage, the speed degradation, especially at bootup, is just awful.
For Linux and FreeBSD, there’s no noticeable change. Nor for Solaris 9, but that is VERY rarely used.
It has always come down to file fragmentation, and we’ve found the built-in defragger for Win2K/XP, which I believe to be a cut-rate version of Diskeeper, to be inadequate.
I much prefer Raxco’s PerfectDisk, which can defrag the Master File Table without rebooting.