Apple released Mac OS X 10.3.8. Key enhancements include: Improved OpenGL technology and updated ATI and NVIDIA graphics drivers; Enhanced file sharing and directory services for mixed Mac and PC networks; Improved compatibility for third party applications and devices.
Much faster. Resolves DNS properly. Mail and Safari are faster. Fans get some serious exercise on a G5. Hello, is this LAX?
Ran Disk Utility and installed using the Combo Updater, and the MDD is good to go! Forgot to disconnect my external FireWire! Running FileVault. The iBook is next.
The one “feature” I’ve been waiting for… the update says it fixes Powerbook G4s waking to a black screen and an inoperable keyboard and mouse… welp, mine still does it. The problem is you put the laptop to sleep, take it to work, plug in a monitor, then wake it. It wakes, but with no display — you have to reset.
Of course I get around this by waking the laptop first, then plugging in the monitor, then closing the lid, then waking it. Works every time.
Usually, when this happens (on any platform) it’s the fault of a buggy driver. Remove all the unnecessary/old/buggy drivers you have installed over the years, and I promise you, it will work.
I remember an article a while back about how computers have gotten much faster over the past 8 years but the user experience (perceived speed) has stayed the same.
I don’t think many would dispute that the long-term trend is for operating systems to get slower, not faster, with every release.
So why after every x.x.y release, does someone make the claim that it is faster?
Given the “breakneck” speed at which hardware performance is improving, operating systems are bloating faster than the hardware can keep up.
So why are so many people willing to pay another $300 for that extra .2 GHz but not willing to pressure OS vendors to sell them a tweaked operating system? (not aimed at Apple)
Well anonymous, the thing is recent OSX updates have indeed offered increased speed (at the very least perceptually if not in actual fact), while most Windows updates offer the reverse.
Is it so shocking then that people are amazed that OSX seems to get faster with each release while XP doesn’t?
I know for me SP2 was an absolute disaster on my Boxx. I removed it and went back to POXP… I am thinking of just going back to Win2K.
“I don’t think many would dispute that the long-term trend is for operating systems to get slower, not faster, with every release.”
Anonymous, I think that’s because those people are using Windows. My experience with Mac OS X is that it does indeed get faster with every OS update, rather than slower. Panther is faster on the same hardware than Jaguar, and I expect Tiger to be faster yet.
>My experience with Mac OS X is that it does indeed
>get faster with every OS update, rather than slower
This is true and I confirm it. But that’s only because the first versions of OSX sucked seriously and OSX already had a bad name of being really slow. As time went by, the good engineers at Apple fixed most of the slowness and so, up to Panther, we have seen a steady increase in speed with each version. I don’t really expect to see the same percentage of increase on Tiger, because OSX seems to have stabilized a lot in terms of optimizations, plus the new indexing backend will take CPU cycles away anyway.
Also, there is a difference between “consume less memory” and “run faster specific tasks”. For example, Win98 is able to run on less memory/Mhz than XP can (Win98 can easily run on a P200 with 64 MBs of RAM for example). However, when both are running on the same hardware (e.g. 600 Mhz P3 with 256 MBs of RAM), XP blows Win98 off in overall speed/multitasking.
Same goes for Linux 2.6.x vs 2.4.x. The new kernel has bigger requirements, but when they actually go head to head on acceptable hardware, 2.6.x performs better (e.g. in databases).
My point: there are trade-offs in performance. You trade memory for speed. There is no way around it usually. And yes, BeOS did the same too. Everyone does.
To elaborate: remember that the first OSX versions were installed on iMacs or iBooks with only 64 MBs of RAM! Today, people are bitching at Apple for still selling machines with 256 MBs of RAM.
Here’s a clue for you: Mac OS X has become faster since then. If that’s the case why does it require more memory? Refer to my post above. Everything is a trade-off after some optimizations have already taken place.
Just updated, everything seems fine thus far. I haven’t noticed any speed increases, thus far. This is on a 12″ 1.33GHz Powerbook w/ 768mb of ram.
Apple still has to support the G4, which ships in its Mac Mini, eMac, and laptop computers. This means that they can’t just blow CPU cycles as if everybody had dual G5s. I offer this point not as criticism of Apple, but as a reason why Apple should have great interest in optimizing its OS.
Also, 10.3.8 is unlikely to contain significant speed improvements. They are literally a few months away from Tiger, which means that most of their engineers have to be very busy finishing up Tiger. Even if they had some optimizations ready, they’re likely to want to save those to make Tiger look better.
“This is true and I confirm it. But that’s only because the first versions of OSX sucked seriously and OSX already had a bad name of being really slow.”
That is obviously one person’s opinion and not a “fact”. The words “sucked” and “slow” are subject to interpretation.
One man’s trash is another man’s treasure as they say.
Oh, please. I started using OSX 10.1.3 on a Cube 450 Mhz G4 with 448 MBs RAM. The thing was SLOW as HELL. I would click on a widget and sometimes I would have to wait many seconds to see a response. It was unusable, and I did write about it back then.
Fast forward a few years, and Jaguar and then Panther has brought REAL optimizations. The OS does NOT feel THAT slow anymore even on that old hardware. Mac OS X Panther is running normally now on this hardware.
I also have here a dual PowerMac G4 and a Powerbook. The same optimizations have been observed there too. Mac OS X doesn’t suck anymore as it used to. And yes, that’s factual; ask anyone who has used Mac OS X from 10.0.0 to 10.3.8 on the SAME hardware. The optimizations made since then are really big (sure, it requires more RAM now, but it’s all good still).
Updated fine for me on my spiffy new 1.67Ghz PowerBook G4 (which I only just got yesterday ). I can confirm a successful update on a 1.2Ghz iBook and a dual 500Mhz G4 PowerMac as well.
OS X speed increases in major upgrades (ie 10.0 -> 10.1 -> 10.2 -> 10.3) are not opinion or subject to debate. They’re factual and recorded through benchmarking.
I’m at work, so can’t go off on a data hunt, but you can Google for the data and find this for yourself.
The smaller point releases end with a system optimisation (updating prebindings) which improves file and application access a little, but this should be done once every week or two as a general maintenance task (not required, but nice to keep everything working as well as it can).
I’ve been itching to buy one of these. I’ve been holding off since I know “Tiger” is supposed to be out soon. My assumption has been that I’d have to pay for the 10.3 to 10.4 upgrade. Can anybody confirm this for me?
Yes, you have to pay. This is a major upgrade, not a small one, despite the version numbers feeling “too close”. Panther has had 8 free updates in the 1.2 years since it was released: 10.3.0 to 10.3.8 as of today. Expect the same free updates from 10.4 too. But to go to 10.4 you will have to pay.
Thanks, Jon. I figured as much. It actually makes sense to me, too. I guess I’ll be playing the waiting game until they start releasing the newly-sold machines with 10.4.
I’m sticking it out for the JDK 1.5 and dynamic libstdc++ linking (somewhere in apple.com it indicates that libstdc++ is static linking only right now).
“How much is Steve going to bilk for this upgrade. I know a Mac addict and he was talking about $200 a year for upgrades to the OS alone. It sure must feel good!”
Your friend doesn’t know what they are talking about…
http://www.amazon.com/exec/obidos/tg/detail/-/B0002G71T0/qid=110800…
Try $130.00, where did they get the $200?
And if you can afford it, it sure does feel good not to run Windows. It’s well worth the price.
Just updated. No problems so far. (And no, I don’t feel anything faster, nor slower)
Go ahead and try. (Too bad it has a ‘dated’ Samba version, 3.0.5…)
“That is obviously one person’s opinion and not a “fact”. The words “sucked” and “slow” are subject to interpretation.
One man’s trash is another man’s treasure as they say.”
Actually there is a lot of empirical evidence out there that MacOSX continues to get faster on the same hardware. I expect the same to happen with Tiger.
Except that Core Video seems likely to require more VRAM than Quartz Extreme alone, and that could cause some slowdown on video cards with 32MB or less, such as the one in the Mac mini or eMac, whether because it has to fall back to software or because there just isn’t enough memory.
Actually there is a lot of empirical evidence out there that MacOSX continues to get faster on the same hardware. I expect the same to happen with Tiger.
I was not disputing that it was getting faster.
I was disputing Eugenia’s usual over-the-top grumpy accusations of “slow” and “horrible”.
I thought the first release of OS X was perfectly useable and was a lot faster than other OS’s I’ve used and more enjoyable.
Just because it got better doesn’t mean it was horrible to begin with.
All I was disputing is that it’s hardly “factual” just because a bunch of people think so.
>OS X was perfectly useable and was a lot faster than other OS’s I’ve used and more enjoyable.
Obviously, you never used either BeOS or Win2k at the time.
>Just because it got better doesn’t mean it was horrible to begin with.
It was.
Why does Apple’s OS keep getting faster?
I believe it’s because Apple and the GCC community are doing good work on improving the performance of the GCC compiler on PowerPC hardware.
Each new version of OS X has come with a new release of the GCC compiler.
Tiger, I don’t know for sure, will be compiled with either a new GCC or IBM’s PPC compiler. Either way, OS X will be faster for the G4 and the G5.
Everything went fine on the Mac mini. Mail seems a tad faster. Update took around 11 minutes to install. No problems; now off to the iBook.
http://www.123macmini.com
http://www.macminiforums.com
holy crap… it is an API!!! it is not a display technology!!!!
OS X will not feel sluggish. operations that are done on video, like encoding will be slower, but that is because it will be done in software not hardware.
Eugenia mentioned BeOS and Win2k… if only I could go back. In any case, I don’t know what the fuss is about. OSX was a dog and it gets better with every iteration. XP keeps getting slower (ever see the indexing service go to 100 percent CPU usage? Not pretty!). After years of installing 10.0 and 10.1 and 10.2 and 10.3 without a hiccup, I just expect it to work and be faster. I’m starting to think this Apple reality distortion field really might be more rooted in proven facts than hysteria.
(1) Yes, 10.4 will have to be paid for when it comes out. A new major version is only available in a box, on CD-ROM, never as a download or through Software Update. Yes, this could cost you almost as much as new hardware, so the wise thing to do is wait for 10.4 before buying (you won’t be the only one!).
(2) Yes, Macs (including low-end Macs) run faster with newer versions of the OS. This is often because of amendments to the programs themselves (e.g., Safari) and how Apple’s programs interact with the system, rather than due to direct improvements to the OS –although both do occur.
IT’S SIMPLE, PEOPLE: APPLE “OPTIMIZES/DEFRAGS” THE HARD DISK AFTER EVERY UPDATE.
If Microsoft ‘defragged’ after every update – people would be saying that they notice Windows getting faster too!!!
Oh my god! WTF is this guy talking about.
It certainly doesn’t defrag, that’s just silly… and it would take hours.
What it does do is update the prebinding, and that helps add to the “snappiness” people report after minor updates.
But on the grander scale, the major OS releases (10.0, 10.2, 10.3, 10.4), those speed increases are very noticeable. I started with the public beta; it was pretty much useless, not just due to lack of software, but because it just ran so slowly. Once 10.1 came out, however, I never went back to 9.
Heh.. ummm.. no. It doesn’t defrag the hard drive after an update. I think all it is doing is rebuilding the dynamic library database so that applications can start more quickly. It doesn’t affect their running speed, though. Someone who knows might be able to correct/fill-in the exact details here.
Obviously, you never used either BeOS or Win2k at the time.
No, I had. As I said, they’re subjective.
You can spout all day about how you’re 100% right. But that doesn’t make you right, it just makes you a person with an opinion.
Speed is relative, if I’m used to running on a turbo 386 with Windows 3.1, I would have thought OS X to be blazing fast.
If I had used Win2k on a 133MHz Pentium I would have thought it was horribly slow.
Speed is a subjective evaluation.
Your problem is you can’t stand to be wrong.
Actually, you never defrag an OS X partition. The drive doesn’t fragment. Neither do most modern file systems (ext2/3 or reiser). One of the things I was looking forward to in WinFS (before it was axed) was the possibility that Microsoft had left the stone age of file systems and had fixed that issue.
Actually, you never defrag an OS X partition. The drive doesn’t fragment. Neither do most modern file systems (ext2/3 or reiser).
Yes, they do.
One of the things I was looking forward to in WinFS (before it was axed) was the possibility that Microsoft had left the stone age of file systems and had fixed that issue.
NTFS is quite adequate. WinFS wasn’t a filesystem, either.
http://developer.apple.com/tools/gcc_overview.html
Tiger will probably be GCC 3.4
Tiger is GCC 4.
Peep this: http://developer.apple.com/macosx/tiger/xcode2.html
Encoder wrote:
Why does Apple’s OS keep getting faster?
I believe because Apple and the GCC community are doing good work on improving the performance of the GCC compiler on the PowerPC hardware.
Each new Version of OS X has come with a new release of a GCC compiler.
Tiger, I don’t know for sure, will be compiled with either a new GCC or IBM’s PPC compiler. Either way, OS X will be faster for the G4 and the G5.
GCC improves incrementally thanks to Apple, who then merges its changes back into GCC as it sees fit.
GCC 4.0 is the first to include the ObjC++ support that Apple wrote, which of course includes the dynamic libstdc++. As has already been noted, Java 5.0, designed to work under Cocoa, will be included.
http://developer.apple.com/macosx/tiger/xcode2.html
Apple is committed to GCC. Unless the IBM PowerPC compiler is open-sourced and allowed to be customized with ObjC and ObjC++ support, speculation about Apple adopting that compiler isn’t even worthwhile.
Actually, if I remember correctly, Panther casually defrags files as they are accessed. After an update, though, the optimization is prebinding, as those above me have stated. And I think it bears mentioning that if Microsoft were able to improve the speed of Windows by taking a few minutes more after each update, don’t you think they’d be doing it?
You are silly, HFS+ automatically defragments itself (rather, its OS X driver does). I don’t know if Apple really optimizes the disk with every update, but if it did, there would be practically no benefit, as it was already optimized before the update.
Microsoft has no clue about filesystems. They’re still trying to develop a stupid filesystem with a bolted-on database, while the Haiku people managed to re-implement a fast, metadata-supporting 64-bit filesystem with very few people in very little time, and Apple will deliver with OS X 10.4 what Microsoft originally announced for Longhorn (and now it won’t even ship with Longhorn!).
“…Your problem is you can’t stand to be wrong. …”
Shawn, please stop this! Even my dog knows what Eugenia was talking about. Eugenia even forgot to mention how horrible launching applications was in 10.0. Here ( http://arstechnica.com/reviews/01q2/macos-x-final/macos-x-3.html ) you can read:
“Native OS X application launch times are not particularly impressive, and sometimes downright bad. Even without a stopwatch, it’s easy to measure the time needed to launch an application by counting how many times its icon bounces in the Dock before it settles down to run. “Bouncemarks” may be rough, but when you see classic Internet Explorer 5.0 barely make it through a single bounce before coming up, and then watch the carbonized Internet Explorer 5.1 “preview” bounce 18 times, the verdict is clear. Even something as simple as launching the System Preferences takes 6 bounces.”
while on the conclusions page you can read:
“Interface responsiveness and effective stability are the two biggest fundamental problems, but missing features and compatibility issues rank just as high if you actually intend to use OS X as a full Mac OS 9 replacement: the 10.0 release cannot view DVD movies; printer drivers are still scarce; CD burning is not yet supported, even by Apple’s own iTunes CD authoring application; and a lot of hardware (like my G3/400’s serial port adapter to which my printer is attached) seem destined to be orphaned forever.”
I mean, I could find tons of other reliable reviews around (you trust Ars Technica, don’t you?).
So basically Eugenia is simply right in saying that the performance of earlier versions of OS X was horrible, especially UI responsiveness.
So please stop whining at Eugenia without a point.
You are silly, HFS+ automatically defragments itself (rather, its OS X driver does).
No, files that meet certain criteria (size) are copied to another part of the disk (ideally into a single contiguous block) when they are accessed, if they are fragmented beyond a certain point.
It’s not entirely clear to me what happens if there isn’t a large enough contiguous block.
This particular feature *reeks* of a kludge.
Microsoft has no clue about filesystems.
Microsoft’s filesystem work is really some of their better stuff. Prior work includes OS/2’s widely-liked HPFS, for example.
They’re still trying to develop a stupid filesystem with a bolted-on database, while the Haiku people managed to re-implement a fast, metadata-supporting 64-bit filesystem with very few people in very little time, and Apple will deliver with OS X 10.4 what Microsoft originally announced for Longhorn (and now it won’t even ship with Longhorn!).
Microsoft, in NTFS, has had a “64-bit, metadata-supporting” filesystem for about 12 years now (probably closer to 15 if you want to count from the prerelease stage). “Recent” other features include per-file compression and encryption. I’m not aware of any other (common) production filesystems that have these features, while NTFS has had them for about 5 and 9 years, respectively.
WinFS is nothing more than an indexed database that sits on top of NTFS. Basically the same thing Spotlight will be (although more business/server oriented than home/desktop).
reeks of kludge? how?
What happens when there is little room left on the disk is that access times slow down. That is why Apple recommends you keep at least 1% of disk space free on drives that get a lot of reads and writes.
The one part of OSX that is STILL irritatingly slow for me is printing – specifically the UI that brings up page setup, printer settings, etc.
Is this just me/my environment, or do others here also perceive almost unbearable sluggishness as compared with OS 9?
Tiger is GCC 4.
Peep this: http://developer.apple.com/macosx/tiger/xcode2.html
Thanks Fraeone and Marc.
I notice GCC 4 will include Auto-Vectorization.
Is this a Big Thing?
This should really speed up the iLife applications.
Do you know what compiler Apple used for the iLife ’05 apps?
I can see Auto-Vectorization really helping out GarageBand.
Will it do much for general processing and possibly Java?
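To make “auto-vectorization” concrete, here is a minimal C sketch of the kind of loop GCC 4’s vectorizer is designed to handle. The -ftree-vectorize and -maltivec flags are documented GCC options; the file name, array size and values below are made up purely for illustration and are not from Apple’s build.

/* saxpy.c: a textbook candidate for automatic vectorization.
 * Hypothetical build line (GCC 4 on a G4/G5 with AltiVec):
 *   gcc -O2 -ftree-vectorize -maltivec saxpy.c -o saxpy
 */
#include <stdio.h>

#define N 1024

/* y = a*x + y over fixed-size arrays: a simple loop with no
 * dependencies between iterations, so the compiler can process
 * several elements per vector instruction. */
static void saxpy(float a, const float *x, float *y)
{
    int i;
    for (i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];
    int i;

    for (i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    saxpy(2.0f, x, y);
    printf("y[10] = %f\n", y[10]); /* expect 21.0 */
    return 0;
}

Whether a given loop actually gets vectorized depends on alignment and the compiler’s analysis, so the speedup for things like GarageBand or Java is hard to predict from the outside.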
Slow is quite subjective :-). On my machine, hitting Cmd-P to bring up the print dialog takes about 2 seconds. I don’t find that too slow, but if your Mac is taking a long time to display the dialog box, it could be the network that’s doing it. Assuming of course you’re printing to a network printer.
There seems to be one huge difference with this update. I have been using iPulse, and since I installed the update the temperature it reports is about 20 degrees cooler. Has anyone else seen this? I have a dual 2GHz G5 system here.
2 seconds seems slow to me
I have never used a locally connected printer – but our network is not slow, so I doubt that the network is the determining factor.
Mac OS 9’s print user interface was snappy, but OS X seems very slow in comparison to me.
No, files that meet certain criteria (size) are copied to another part of the disk (ideally into a single contiguous block) when they are accessed, if they are fragmented beyond a certain point.
No, the files are optimized at first access. So by the time the disk begins to get full, the most frequently accessed files are near each other.
20 MB is a considerable chunk of data, considering most block or cluster sizes in Windows file systems are 512 bytes to 4K.
It’s not entirely clear to me what happens if there isn’t a large enough contiguous block.
They aren’t moved. Duh.
This particular feature *reeks* of a kludge.
You wouldn’t know an elegant design if it smacked you in the face. But you would claim it was elegant had Microsoft come up with it. Jealous, I presume.
The one part of OSX that is STILL irritatingly slow for me is printing – specifically the UI that brings up page setup, printer settings, etc.
Is this just me/my environment, or do others here also perceive almost unbearable sluggishness as compared with OS 9?
This dialog (or sheet) pops up (almost) instantly on my machines, except for the 1 second it takes to animate it coming out from under the title bar. Is this the slow part for you? I get less than a second of hard drive access, and then the dialog rolls out.
I wish I could help, and good luck with the performance issue. Heck, maybe 10.4 will address the issue, like the rare PowerBook sleep issue. ;^)
DrSmithy – you are funny.
Are you REALLY trying to tell us that FAT and FAT-32 weren’t horrible? Are you REALLY trying to tell us that NTFS doesn’t fragment A LOT? Don’t go by the MS defragmenter tool to decide whether the partition needs to be defragmented. It is checking how many files are fragmented compared to all the files on the hard drive. So if you have 50,000 files and 5,000 are fragmented then it is only 10% fragmented. But they may be the 5,000 files you use ALL THE TIME. If that is true then close to 100% of the files you use are fragmented, which is BAD BAD BAD. I’m not making this up. This is reality.
Now as for Mac OS X’s, OS/2’s, BeOS’s, and Linux’s Reiser file systems: you will never find anything close to that unless the hard drive has less than 1% free space.
Facts are facts. Oh, and Windows has had a 64-bit file system for 15 years? DrSmithy, you are funny.
I’ve got two Macs. One is a 350mhz G3 blue & white tower with 256MB of RAM. The other is an 800mhz G4 iMac “lamp” with 512MB of RAM.
The G3 was really, reeeaaallllyyy slow up until 10.2.1.
Up until 10.2.1 it was like this. Tell an app to start, go get coffee, remember that you don’t like coffee, go dump the coffee and get a Pepsi, ahhhh, sit down, app is almost up and running. It could literally take over a minute before Mail was usable.
Since 10.2.1 things have been sped up tremendously on my G3. I can now actually start Mail, Safari, and iChat at the same time (click, click, click) and be using all three of them within 15 seconds. Case in point. I tried clicking on all three, hurry to fridge, get Pepsi, come back and all three are up and running.
Does that sound like each version is getting faster? That’s real life experience here. No stopwatch needed. More like an hourglass before. Not anymore.
People seem to be a bit confused about if/how Apple’s managed to get the OS working faster across releases. It’s definitely happened, largely through a combination of three things:
1) Implementing clever things, like hotfile clustering and kext caches.
2) Making huge improvements in GCC. GCC produced lousy PowerPC code during the first couple of releases, but has been improved considerably since then. Since the entire OS and many of the apps are compiled with this, improvements in the compiler have very broad effects. GCC4, coming with Tiger, sounds like it will continue to improve things.
3) Basic optimizations of the OS and its application frameworks. Find bottlenecks and eliminate them. Find method calls that go through 3 libraries and send them straight to the last one. Optimize OpenGL, and then use it for as much of the GUI as you can (even more of the GUI is OpenGL based in Tiger).
Combined, these three efforts have produced some very significant speedups. Eugenia doesn’t expect Tiger to be a big performance leap, but I’m not so sure. GCC4 could be significant, especially if there are a lot of latent autovectorization opportunities in OS X, and the OpenGL/driver improvements we’re seeing with 10.3.8, combined with a greater use of OpenGL in Tiger, may push things forward significantly. Reports indicate that current preview builds are already as fast or faster than Panther on the same hardware, and there’s still a couple of months for Apple’s engineers to get more done.
Quite true that early versions of OS X were almost unbearably slow, and equally true that each release has gotten speedier and speedier (lots of RAM also helps tremendously).
However, it’s still quite possible as well that Tiger *will* be even zippier — esp. for those computers that have video cards that can access Quartz 2D+ to rasterize nearly all of screen output through the GPU.
And no, Apple hasn’t released what those video card specs are yet, though they’ll probably be similar to Quartz+.
After running the update on my iBook, my sound was muted. This is the first time that has happened with a point upgrade. Reminds me of Linux. Since I use my iBook as an alarm clock, I discovered it this morning. Good thing I woke up anyway.
My iBook woke up and started iTunes but not a peep out of the speakers. I checked system prefs under sound and sure enough, the mute box was checked. Did this happen to anyone else?
>> Are you REALLY trying to tell us that FAT and FAT-32 weren’t horrible?
I didn’t notice any mention of FAT filesystems in the post.
>> Facts are facts. Oh, and Windows has had a 64-bit file system for 15 years? DrSmithy, you are funny.
NTFS has been around for about 12 years. See
http://en.wikipedia.org/wiki/Comparison_of_file_systems
Also there was nothing in the post to warrant the “funny” comments.
If HFS+ doesn’t need defragging, why are there at least three different defrag programs available…
http://www.versiontracker.com/php/search.php?mode=basic&action=sear…
There is a workaround to get uControl to work in 10.3.8
http://sourceforge.net/tracker/index.php?func=detail&aid=1119697&gr…
Yeah, I knew something was gonna screw the pooch.
Want an update issue?
Install this update with Netbarrier 10.3.5 and watch your setup and your “great, a new update” excitement bite it.
And of course, it’s probably gonna take Intego a couple of days to work this out.
BUT, yeah, it does feel a little quicker.
I don’t have a problem w/ Apple’s upgrade policy: I get way more than w/ Windows, & generally for less money. And since I pay full price, I feel free to legally (AFAIK) keep using all my old OS’s as I wish.
Of course everyone does that anyway, even with “upgrades” that don’t allow it, but I get to feel good about selling a cheapskate a used copy of Panther.
Paying full price for major upgrades has the side effect of spreading Apple’s OS’s a little wider than they would otherwise. I might not buy 10.3 for my parents’ old iBook, but I’ll gladly upgrade them (they have my old copy of Jaguar) once I move to Tiger.
Holy crap… I didn’t say it’s a display technology!!!! And you are only repeating what I meant in the second part. Now, if all Core Video operations in Tiger can be done with no problem on a video card with 32MB of RAM, then I will eat crow and be glad, because I will be able to upgrade my machine and reap the benefits. But there has been indication that it at the very least won’t be fully featured on video cards with less than 64MB of RAM, and that’s NOT because a person thinks it’s a display technology!
“holy crap… it is an API!!! it is not a display technology!!!!
OS X will not feel sluggish. operations that are done on video, like encoding will be slower, but that is because it will be done in software not hardware.”
A great write-up of fragmentation on HFS+ filesystems, and how OS X deals with it, is found here:
http://www.kernelthread.com/mac/apme/fragmentation/
Bottom line: On-the-fly Defragmentation
When a file is opened on an HFS+ volume, the following conditions are tested:
If the file is less than 20 MB in size
If the file is not already busy
If the file is not read-only
If the file has more than eight extents
If the system has been up for at least three minutes
If all of the above conditions are satisfied, the file is relocated — it is defragmented on-the-fly.
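For illustration only, here is a rough sketch in plain C of that decision, using the thresholds quoted above. The struct and function names are invented for the example; this is not Apple’s actual HFS+ driver code.

/* A hypothetical model of the on-the-fly relocation check described
 * above. Thresholds match the quoted conditions; everything else is
 * made up for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ONTHEFLY_SIZE (20u * 1024 * 1024) /* 20 MB */
#define MIN_UPTIME_SECS   (3 * 60)            /* 3 minutes */
#define MAX_EXTENTS       8

struct open_file_info {
    uint64_t size_bytes;    /* logical size of the file */
    int      extent_count;  /* number of on-disk extents (fragments) */
    bool     busy;          /* already open elsewhere */
    bool     read_only;
};

/* Returns true if the file would be relocated (defragmented) on open. */
static bool should_relocate_on_open(const struct open_file_info *f,
                                    long uptime_secs)
{
    return f->size_bytes < MAX_ONTHEFLY_SIZE
        && !f->busy
        && !f->read_only
        && f->extent_count > MAX_EXTENTS
        && uptime_secs >= MIN_UPTIME_SECS;
}

int main(void)
{
    /* A 5 MB file in 12 extents, opened 10 minutes after boot. */
    struct open_file_info f = { 5u * 1024 * 1024, 12, false, false };
    printf("relocate on open? %s\n",
           should_relocate_on_open(&f, 600) ? "yes" : "no"); /* yes */
    return 0;
}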
Hot File Clustering: A 3rd party defragmenting tool should not move a file into the hot file area, nor should it move a file out of the hot file area. Doing so might degrade performance!
More great stuff…10 Things Apple Did To Make Mac OS X Faster:
http://www.kernelthread.com/mac/apme/optimizations/
And of course the best reading of them all, for the real geek in you:
http://www.kernelthread.com/mac/osx/arch_fs.html
(What is Mac OS X?)
For the same reason people write Linux antivirus software; if it didn’t exist, Windows users would think there was something wrong.
I am an XP user and consider myself to be a novice. I use XP mainly because I game and there is lots of software I can borrow from my friends and install, since they use XP. I don’t think XP is slow personally. I have used the 17 inch Mac laptop with Panther and it seems just a tad slower, at least in UI feel, compared to XP. I am curious though how long it takes to boot or shutdown or startup apps in OS X. I know from experience that my XP can boot completely including post in less than 20 seconds. My programs start up blazing fast. NetBeans loads up in 15 seconds. Games load up fast and this is on a laptop. I installed SP2 like a month ago and it has not broken my system. On the contrary, it sped up. I do agree that defragging is a massive problem and I wish that MS did something about it. I am not trying to bring Mac OS down, but the OS is still kind of slow IMHO, and I would wait to see what upgraded hardware Apple comes out with and what performance increases come after Tiger. For now I will be a happy but grumbling user of XP. My next machine in 3-4 years will be either a top of the line Mac or a top of the line AMD Windows machine. We will see what happens then. As of now I think XP is ahead in speed. In functionality and coolness, Mac OS X is king.
Well, that’s true about XP. Post install is quite fast (provided you’ve got the adequate hardware). But just wait until you start installing/uninstalling things, using the file system; in other words, let’s talk again in six months.
I work with XP every day, and believe me, it costs me “a lot” of effort to maintain decent speeds: Disk Defragmenter (runs during the screen saver), keeping StartUp items at almost “0”, cleaning the registry and temp files (thanks to TreeSize it’s easy to find trash on the HDD), etc.
Again, if you install Office 2000 or above, performance will not be the same (unless you make sure to remove the office pre-loading stuff).
StartupMonitor is a good piece of super tiny software that will help
And as soon as ‘programs’ start adding stuff to Explorer and shell extensions, performance will be even worse. (Try a right mouse button click in six months, when three or four programs have added their own ‘shell extension’.)
This is a fact. And I’ve got 768MB of RAM. That’s why “they” say that reinstalling XP every 6-12 months is plain SANE.
On the other hand, this OS X (shipped as 10.3.2, now 10.3.8) is “as slow” or “as fast” as it used to be, no matter what stuff you add. If anything, some video driver updates really made it faster, but not slower. Although I think that a lot of these speed improvements are “perceived” speed, as someone noted. It’s just ‘imagination’.
So in reality the update apparently (and simply) overwrote or moved a couple of files.
I reinstalled Netbarrier and everything is dandy.
That’s why they call me “theslownorris.”
I get there, I’m just not in a hurry.
Sorry if this sent a flag up on anyone.
As of now I think XP is ahead in speed. In functionality and coolness, Mac OS X is king.
No, the hardware XP runs on typically exceeds the minimum requirements for its GUI to seem snappy.
Mac OS X, on the other hand, is very ambitious GUI-wise, and most of the hardware it runs on does not meet the requirements for it to feel snappy.
Apple has been taking the right steps to make it snappy: introducing Quartz Extreme, for one, and now Core Image in Tiger. As graphics cards on most Macs get faster, more and more work is being punted over to them. If Apple uses Core Image as the renderer for Quartz, then graphics should be significantly faster on current Macs, because right now the CPU is rendering everything.
The reason this wasn’t possible before is that most Macs didn’t ship with graphics cards and enough graphics memory to do this in 2001, when Apple released OS X.
The situation is similar to what Microsoft plans to do with Avalon and the DCE. Microsoft’s release schedule just happens to be more in sync with hardware capabilities.
Apple had to release OS X in 2001, and they had to make design compromises to fit the hardware available then.
“I am curious though how long it takes to boot or shutdown or startup apps in OS X. I know from experience that my XP can boot completely including post in less than 20 seconds.”
This is not really an issue for most OS X users. Sleep works so well in OS X that I never shut down my PowerBook. It can sleep for days, but when I hit a key, it’ll wake up and be fully usable within 2 seconds. Why would you subject yourself to a shutdown/bootup routine if sleep is so robust?
That being said, I’ve noticed a lot of switchers tend to shut down their Macs. When I show them the sleep feature, they are almost always surprised at the convenience, but it still takes a little time for them to break the habit of shutting down the machine.
Actually it’s a known fact that NTFS has major fragmentation issues due to its allocation strategy, which ended up backfiring. In essence the filesystem reserves a few blocks after the end of a new file in order to allow for future expansion. The problem with this, however, is that it seems to more or less do it sequentially rather than distributing evenly across the disk, with the end result that you have a ton of files with tiny space holes in between. So, then you get a really big file and it has no place to put it other than in the middle of a bunch of other files. I might further note that until fairly recently NTFS had major issues with being corruptible after crashes/power outages despite its journalling, etc., and its metadata support is very, very limited (max ~64KB per file, same as XFS). So no, it’s hardly a supreme king of filesystem design; about its only really cool feature is the fact that all the FS metadata is stored as files, which allows some neat things like adding space to a partition just by growing the size of the partition bitmap file. Feel free to reference both Inside NTFS by MS Press and Practical Filesystem Design by Dominic Giampaolo if you don’t believe me about any of this.
Core Video is a feature that is expected to be used in content creation. You’d use it (for example) in Photoshop filters or to play around in video footage.
Afterwards the output is in the standard formats, and Core Video is not required to use it (like you don’t need Photoshop installed to view a jpeg created in it).
So, why would a content creator be using a Mac Mini or iBook? I don’t think many would, and I understand that if someone chooses to create using a low-end Mac, Core Video will drop into software if the GPU/VRAM isn’t enough.
Talking about Core Video on the low-end Macs is a red herring.
Why do we need a program to defrag the hard drive? We don’t if we’re using 10.3 or above. But there are enough people who haven’t moved to 10.3 yet or who believe that they need this sort of thing because Windows users need it, so that creates a market demand. Some companies simply choose to fill this demand.
Upgrades aren’t mandatory, after all. There’s no-one from Apple forcing you to buy an upgrade. Developers usually are quick to jump onto the new APIs in major system upgrades though, and apps requiring later OS versions are common a few months after the OS release.
But that’s why people upgrade! The new OS version has features that make it worthwhile. It’s not mandatory, but it gives you stuff you might need. Like any software, if the features aren’t worth it, then people can choose to not buy.
Lastly, my iBook starts from sleep by the time I open the lid. About one or two seconds. I reboot only to pick up system upgrades (like 10.3.8 last night). I don’t think startup time is a factor in measuring OS speed. Even if it was, a difference of 10 seconds isn’t much over the course of an entire day. It’s probably the least common task a user needs to do.
I would like to also correct drsmithy, in case anyone got misdirected.
it’s a common mistake for people coming from the windows environment to think all filesystems need to be defragmented. I work with hp/ux, but the same is true of linux and mac os x…while it is possible for a file to be fragmented, it is quite rare and actually requires a special set of circumstances to occur.
I have never defragmented a unix disk, and I have the utilities to check for fragmentation….they just don’t have any fragmentation.
Whereas in the Windows world, it is a common task to defragment drives… even, amazingly, if you are using SQL Server, where you’d think the files would be pre-allocated or managed by the database… even on those machines you have to defragment.
It’s really a major deficiency of windows.
I think this is probably a bug fix, but my system slowed right down. It wasn’t noticeable other than that the SETI client took way longer than usual to do a unit and the fan kept switching off.
I’ve traced the problem to the power management prefs, I had mine set to “custom” and I think it may have been ignoring this before and running at full power. 10.3.8 has fixed this so it’s running slower to conserve power.
The solution is to change the setting to max performance, fan is now back on as crunching has sped up again 🙂
“Want an update issue?
Install this update with Netbarrier 10.3.5 and watch your setup and your “great, a new update” excitement bite it. ”
Actually, I have just that version of Net Barrier and did not have a problem with the update. Perhaps something else is the root of your problem.
Bill
reeks of kludge? how?
Because instead of simply trying to avoid fragmentation completely, or minimise its effects on performance, they’re shuffling files around the disk to reassemble non-contiguous files. Also, they only do this to certain types of files in certain conditions, rather than providing a proper defragmenter.
Are you REALLY trying to tell us that FAT and FAT-32 weren’t horrible?
Considering FAT was designed in about 1980 for use on 180k floppy disks, DOS and machines with 64k of RAM, it wasn’t too bad.
FAT32 was a reasonable kludge to modernise FAT a bit, but fundamentally FAT should have just been retired earlier.
You need some context here. FAT was created *twenty-five years ago* for XTs without hard disks running DOS. You can’t validly compare it to things created five years or one year ago.
Are you REALLY trying to tell us that NTFS doesn’t fragment A LOT?
I’ve never been convinced by the NTFS defraggers’ claims about fragmentation. I’ve certainly never seen any noticeable performance improvements from defragmentation the few times I’ve tried it (and this is going back ~9 years now). IMHO they’re just trying to sell their software.
Certainly everything about the design of NTFS is supposed to avoid fragmentation.
Quite frankly I think NTFS’s fragmentation problems – and their impact on actual performance – are often grossly overstated.
Facts are facts. Oh, and Windows has had a 64-bit file system for 15 years? DrSmithy, you are funny.
NTFS has been around – and not changed markedly AFAIK – since NT’s original release in 1993. I think it’s a fair assumption they didn’t just bang it together the night before it was released, so 12 – 15 years is possibly somewhat conservative, given NT development started in 1988.
So no, it’s hardly a supreme king of filesystem design, about its only really cool feature is the fact that all the FS metadata is stored as files, which allows some neat things like adding space to a partition just by growing the size of the partition bitmap file.
I never said – or even implied – NTFS was the “king of filesystem design”. Apart from the bit about being prone to corruption and the negative impact of NTFS’s allocation strategies, which I’d have to disagree with, everything you said is accurate.
“certain types of file”
This is true, but it covers the majority of files.
“certain conditions”
Yes… the system must have been up for more than three minutes. Not a very hard condition to satisfy.
The automatic file defragmenter is an add-on, but it’s a solid one with a good purpose. That doesn’t seem like a kludge to me, but an extra feature on the existing file system.
Also… how can a file system avoid fragmentation completely without defragmenting the drive? If you’ve got free space spread out over the drive, you will have a time when a file cannot be written in a contiguous block. If you move files around to maximise contiguous space, isn’t that defragmenting?
I recall reading that disk fragmentation doesn’t cause a performance hit under OS X as large as a similar fragmentation issue under Windows. Does anyone know more about this?
Because instead of simply trying to avoid fragmentation completely, or minimise its effects on performance, they’re shuffling files around the disk to reassemble non-contiguous files. Also, they only do this to certain types of files in certain conditions, rather than providing a proper defragmenter.
Absolutely wrong. OS X HFS+ does this in addition to smart allocation schemes. Read the article from kernelthreads that was posted a few posts above.
Most OS X machines with varying loads fragment only up to 3%. Which is pretty good and much, much better than FAT32 or NTFS.
In my experience defrag on MacOSX is a waste of time. Correcting permissions and updating prebindings is much more important.
http://docs.info.apple.com/article.html?artnum=25668
Absolutely wrong. OS X HFS+ does this in addition to smart allocation schemes. Read the article from kernelthreads that was posted a few posts above.
I’m aware of that already. My point was why do they feel the need to do this as well?
Most OS X machines with varying loads fragment only up to 3%. Which is pretty good and much, much better than FAT32 or NTFS.
Better than FAT-anything is hardly surprising, it has no features whatsoever to try and reduce fragmentation.
I am curious, however, where are your figures coming from? How are they measured? What are they measuring?
Also, of more interest than “how fragmented” a volume gets is how much that fragmentation impacts performance.
I’m aware of that already. My point was why do they feel the need to do this as well?
No you weren’t. You called it a kludge and said they should rather fix their allocation algorithm. Because it makes sense.
I am curious, however, where are your figures coming from? How are they measured? What are they measuring?
He pointed to the kernelthread articles posted before.
Also, of more interest than “how fragmented” a volume gets is how much that fragmentation impacts performance.
Fragmentation always affects performance. Most allocators, be they for memory or storage blocks, constantly battle the fragmentation problems inherent in allocating resources.
I downloaded a trial copy of IBM’s xlc compiler for my G5 and tested how openssl compiled with xlc (-O3) compares to the Mac OS X provided openssl (compiled with cc, which I think is gcc).
Darwin inferno.local 7.8.0 Darwin Kernel Version 7.8.0: Wed Dec 22 14:26:17 PST 2004; root:xnu/xnu-517.11.1.obj~1/RELEASE_PPC Power Macintosh powerpc
Running on an iMac G5 (1.8 GHz PowerPC G5 with 768 MB RAM) with the CPU set to highest in the energy settings. Also running Panther 10.3.8.
inferno:~/Documents bdelbono$ more openssl.benchmarks
OpenSSL 0.9.7b 10 Apr 2003
built on: Wed Apr 21 15:28:56 PDT 2004
options:bn(64,32) md2(int) rc4(ptr,char) des(idx,cisc,16,long)
aes(partial) blowfish(ptr)
compiler: cc -arch i386 -arch ppc -g -Os -pipe -Wno-precomp -arch i386
-arch ppc -pipe -DOPENSSL_NO_IDEA -DFAR=
available timing options: TIMEB USE_TOD HZ=100 [sysconf value]
timing function used: getrusage
The ‘numbers’ are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
md2 800.48k 1669.78k 2308.04k 2576.03k 2646.41k
mdc2 4152.44k 4992.50k 5344.02k 5384.85k 5382.92k
md4 8482.18k 28611.73k 77254.95k 131719.26k 167462.75k
md5 6957.58k 21910.15k 54785.04k 86404.65k 105626.47k
hmac(md5) 3723.37k 12946.25k 38017.12k 74333.69k 101879.70k
sha1 6895.47k 21673.86k 53972.38k 85554.51k 103158.52k
rmd160 5417.43k 14658.79k 30596.98k 41395.63k 46350.06k
rc4 171650.31k 194030.30k 203723.41k 205953.82k 206034.34k
des cbc 23684.24k 24754.36k 24820.32k 25074.16k 25189.71k
des ede3 9478.51k 9516.97k 9600.61k 9585.95k 9658.23k
idea cbc 0.00 0.00 0.00 0.00 0.00
rc2 cbc 12938.09k 13267.37k 13241.69k 13256.30k 13267.72k
rc5-32/12 cbc 45873.60k 49051.46k 49871.64k 50255.22k 50851.91k
blowfish cbc 45423.95k 48501.18k 49716.24k 49959.05k 49854.96k
cast cbc 39574.05k 41729.28k 42424.77k 42290.50k 41915.27k
aes-128 cbc 52710.37k 53334.04k 57531.53k 60627.03k 61670.49k
aes-192 cbc 53844.98k 55031.35k 50766.98k 52219.88k 55344.60k
aes-256 cbc 48324.62k 49682.54k 49818.55k 47607.61k 49474.13k
sign verify sign/s verify/s
rsa 512 bits 0.0012s 0.0001s 856.7 9174.9
rsa 1024 bits 0.0060s 0.0003s 166.7 3051.1
rsa 2048 bits 0.0370s 0.0011s 27.0 946.0
rsa 4096 bits 0.2421s 0.0036s 4.1 275.6
sign verify sign/s verify/s
dsa 512 bits 0.0010s 0.0012s 1017.4 825.2
dsa 1024 bits 0.0031s 0.0038s 325.8 266.5
dsa 2048 bits 0.0101s 0.0126s 98.9 79.3
inferno:~/Documents bdelbono$ more openssl-xlc.benchmarks
OpenSSL 0.9.7e 25 Oct 2004
built on: Thu Feb 10 23:31:28 PST 2005
options:bn(64,32) md2(int) rc4(ptr,char) des(idx,cisc,16,long)
aes(partial) idea(int) blowfish(ptr)
compiler: xlc -DOPENSSL_SYSNAME_MACOSX -DOPENSSL_THREADS -D_REENTRANT
-DOPENSSL_NO_KRB5 -O3 -DB_ENDIAN
available timing options: TIMEB USE_TOD HZ=100 [sysconf value]
timing function used: getrusage
The ‘numbers’ are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
md2 795.61k 1670.70k 2300.49k 2562.76k 2639.64k
mdc2 4565.34k 5471.89k 5757.35k 5871.28k 5902.65k
md4 9340.83k 32278.61k 90067.80k 165798.66k 219243.94k
md5 7525.97k 24634.18k 64466.54k 108164.40k 133787.14k
hmac(md5) 10977.37k 33062.10k 77015.67k 115615.08k 134698.33k
sha1 8680.54k 28502.13k 76755.76k 132966.57k 170777.00k
rmd160 5642.16k 14678.52k 30510.62k 41302.37k 46148.27k
rc4 174285.80k 193781.56k 201647.52k 203284.21k 205500.60k
des cbc 25398.85k 26026.97k 26020.64k 26431.70k 26217.16k
des ede3 9604.43k 9629.40k 9736.04k 9656.87k 9603.75k
idea cbc 24084.61k 24633.21k 24794.45k 25156.50k 25035.08k
rc2 cbc 15287.12k 15682.48k 15743.40k 15858.26k 15938.42k
rc5-32/12 cbc 66655.82k 71181.87k 72396.12k 72414.55k 73270.13k
blowfish cbc 50656.09k 53139.26k 53784.23k 54791.26k 54195.97k
cast cbc 41108.71k 42766.73k 43166.31k 43562.75k 43351.84k
aes-128 cbc 73168.71k 74340.27k 76764.67k 77528.75k 78770.94k
aes-192 cbc 66110.45k 66948.63k 68397.79k 68616.90k 69317.56k
aes-256 cbc 59548.65k 59553.18k 61169.32k 61661.53k 62173.98k
sign verify sign/s verify/s
rsa 512 bits 0.0012s 0.0001s 817.7 8726.9
rsa 1024 bits 0.0062s 0.0003s 160.5 2987.3
rsa 2048 bits 0.0383s 0.0011s 26.1 891.1
rsa 4096 bits 0.2577s 0.0038s 3.9 260.9
sign verify sign/s verify/s
dsa 512 bits 0.0010s 0.0013s 978.3 798.8
dsa 1024 bits 0.0032s 0.0038s 315.2 261.5
dsa 2048 bits 0.0108s 0.0132s 92.6 75.8
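For anyone trying to read the tables: the digest and cipher rows are throughput, and a minimal, hypothetical C sketch of the kind of loop “openssl speed” runs for something like md5 looks roughly like this (buffer size and iteration count are arbitrary; this is not the code that produced the numbers above):

/* md5bench.c: hash a fixed-size buffer repeatedly and report
 * throughput, the same shape of measurement as the md5 rows above.
 * Hypothetical build line, assuming the OpenSSL headers/libs that
 * ship with OS X:  cc md5bench.c -lcrypto -o md5bench */
#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    static unsigned char buf[8192];     /* one of the block sizes above */
    unsigned char digest[MD5_DIGEST_LENGTH];
    const int iterations = 100000;
    int i;
    clock_t start;
    double secs, kbytes_per_sec;

    memset(buf, 0xAB, sizeof buf);      /* arbitrary test data */

    start = clock();
    for (i = 0; i < iterations; i++)
        MD5(buf, sizeof buf, digest);
    secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* "1000s of bytes per second", as in the tables above */
    kbytes_per_sec = (double)iterations * sizeof buf / 1000.0 / secs;
    printf("md5, 8192-byte blocks: %.2f kB/s\n", kbytes_per_sec);
    return 0;
}

The rsa/dsa rows at the bottom of each table, by contrast, are reported as seconds per operation and operations per second (sign/s, verify/s), as their column headers show.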
From TFA:
“Key enhancements include (…) Previous standalone security updates.”
OK, if the Mac is supposed to be immune to worms and viruses like just about every mac owner says, why is apple wasting their time on “security updates”?
Or is security the mac’s “dirty little secret?”
OK, if the Mac is supposed to be immune to worms and viruses like just about every mac owner says, why is apple wasting their time on “security updates”?
Worms and viruses aren’t the only ways to exploit the system. None of the updated components have anything to do with worms or viruses.
I must be sooo dumb. Please explain to li’l ole me exactly what a security update fixes. Other than security flaws that can be exploited by… worms and viruses!
Or maybe are you lot afraid of teh 3vil haxx0rz come to take away all of gramma’s holiday snaps? pleeze…
Why is OS X defragmenting during runtime such a great feature? You’re hitting the hdd (one of the slowest components of the computer) doing maintenance when the user is expecting the computer to do work. I don’t have any data to back this up, but logically it would decrease runtime performance.
Furthermore, there’s nothing wrong with defragmenting once a week at night on Win XP either. If you’re not using the computer, who cares how long it takes to defragment? And I bet it’s the same amount of work that Mac OS X has to do, Mac OS X just spreads it out over time. Plus Mac OS X will only work on files smaller than 20MB according to the Kernelthread article. So it’s not really effective at all.
But in truth, I’ve NEVER routinely defragmented on Win XP or Linux and I don’t perceive any decrease in performance. If there is any, it’s not so much that it’s noticeable. In general, I don’t think fragmentation is such a big issue for my usage pattern… and undoubtedly for most usage patterns on any of the popular OSes.
“This is not really an issue for most OS X users. Sleep works so well in OS X that I never shut down my PowerBook. It can sleep for days, but when I hit a key, it’ll wake up and be fully usable within 2 seconds. Why would you subject yourself to a shutdown/bootup routine if sleep is so robust?
That being said, I’ve noticed a lot of switchers tend to shut down their Macs. When I show them the sleep feature, they are almost always surprised at the convenience, but it still takes a little time for them to break the habit of shutting down the machine.”
Can you sleep your desktop? And will it be as energy efficient as hibernation on Windows and Linux? During hibernation, power is off to all devices except a trickle to the motherboard, unlike sleep, where DRAM still has power.
It is true that OS X can neither STAND BY nor HIBERNATE. But there’s balance in the force. The small amount of power used during sleep (you could easily leave your box sleeping for more than a week on a full battery; I haven’t measured, but possibly much more, and mine is not sleeping more than 20 hrs/day) is the price you’ve got to pay for “almost instant” resume (on my PowerBook 15”, 512 RAM, 1.25GHz, it takes less than 2 seconds). In fact, from the moment you press the button till you open the lid and see the screen, the OS is already responsive; so, being both an XP and OS X user, I can tell you that I prefer this “sleep” to either standby or hibernate. In fact XP Standby sucks (YMMV), and hibernate has its advantages, though if you have 2GB of RAM it takes a little while to read the contents back into memory and vice versa. (It takes, as I am trying it RIGHT NOW, almost 15 seconds with 512 RAM till the box completely shuts down.) (AMD XP 2700+, XP Professional.)
So, to be honest and fair with both methods: hibernate is good because you know the box is fully off, but if you step away from your box frequently, it’s uncomfortable having to wait till the box powers up and reloads the hibernation file (in case you set auto-hibernate after “x” minutes of inactivity). On the other hand, sleep on OS X can be set to kick in after 5 minutes (if you want to really save power) and won’t really annoy you with reloading times. The drawback could be, OK, the box is “not” entirely shut down. But as far as I know, everything mobile (HDD, CD, etc.) is “powered off” and it’s safe to transport it, though not to change the memory, for example. (And if I had to change memory or any other component in Windows, I’d shut down rather than hibernate.)
So… you can see both methods have their advantages and drawbacks. Finally, regarding Standby on XP: on some boards/cases/fans, standby won’t shut down everything, so one of the main reasons why I “sleep” my Mac and hibernate my PCs is NOISE; the standby feature, although it’s faster than hibernating, keeps making noise (less, but still a lot on an AMD anyway… especially for my “in the middle of nowhere” silent room).
🙂
I have taken the time to measure my servers, and I can assure you that after six months of moving files across the HDD (that is, on a “heavily” fragmented NTFS volume), using Diskeeper (a defrag tool) really accelerated loading times and Explorer response times, and overall HDD access speed increased.
Since I thought it was perceived speed, and a friend of mine was annoyed with his “6 month old XP install” (in other words: he was about to reinstall because it was slow), I suggested he try using this tool.
It was HE (he lives in the southern hemisphere, across the sea…) who told me, “My god, the difference is amazing.” Now I am not saying (nor do I work for any defrag company!) that defragmenting an NTFS volume will save your life and computer, but… it won’t hurt to have all the files in the fewest possible pieces, placed as close as possible to the start of the disk, etc.
I can tell you, I leave the software running as a service in screen saver mode so it won’t annoy me or do anything stupid while I am working. Every time the screen saver runs, if I don’t do anything for a minute, the software does what OS X does, that is, defrags on the “fly”. I’ve tried three tools. The only one that satisfied me was Diskeeper.
My 0.02c
It looks like the upper workload processes faster on IBM’s XLC compiler. And the lower workload RSA… is that the same criteria? 1000s of bytes per second?
1) Did you download the latest XCode dev tools available from Apple? If you haven’t upgraded you might be on GCC 3.2.
2) As the above posters have said, Apple is using GCC 3.4; if you’re not in the developer program, then you’re not running the version they are using for Tiger (10.4).
Why is OS X defragmenting during runtime such a great feature? You’re hitting the hdd (one of the slowest components of the computer) doing maintenance when the user is expecting the computer to do work. I don’t have any data to back this up, but logically it would decrease runtime performance.
To answer your question, it is a great feature because it reduces fragmentation, and coupled with Hot Files and the smart allocation schemes it improves performance over time.
The problem with your logic, regardless of whether you have data or not, is that OS X isn’t just doing it in the background all the time. It only does it, tada, drum roll, when you are already hitting the hard disk to begin with, and it only does files smaller than 20MB for exactly that reason.
Furthermore, there’s nothing wrong with defragmenting once a week at night on Win XP either. If you’re not using the computer, who cares how long it takes to defragment?
The problem is not how long it takes to defragment but why it fragments. HFS+, along with many other filesystems, fragments very little over time. NTFS is notorious for fragmentation.
Your desktop-centric, narrow-minded view is disturbing. NTFS is used on servers, and there is no “at night” for servers. Setting up jobs just to defragment a server nightly is bad when it could be doing other things; at the very least, if someone decides to access it and write a bunch of files, the defrag process will keep restarting and never finish.
And I bet it’s the same amount of work that Mac OS X has to do; Mac OS X just spreads it out over time. Plus, Mac OS X will only work on files smaller than 20MB, according to the kernelthread article. So it’s not really effective at all.
This works on frequently used, slow-growing files. A large file written once is already less fragmented because of the smart allocation process. Contrary to your theory, the kernelthread article concluded it is very effective.
and undoubtedly for most usage patterns on any of the popular OSes.
I don’t think so.
It looks like the upper workload processes faster on IBM’s XLC compiler. And the lower workload RSA… is that the same criteria? 1000s of bytes per second?
Yes. The test used 1000s of bytes per second for both. (A way to reproduce this kind of run is sketched after this post.)
1) Did you download the latest XCode dev tools available from Apple? If you haven’t upgraded you might be on GCC 3.2.
Actually, I didn’t recompile openssl with gcc. I’m using the one that is preinstalled on this machine. That said, I’m running the latest XCode dev tools (with the November 2004 gcc update)
Reading specs from /usr/libexec/gcc/darwin/ppc/3.3/specs
Thread model: posix
gcc version 3.3 20030304 (Apple Computer, Inc. build 1671)
2) As the above posters have said, Apple is using GCC 3.4; if you’re not in the developer program, then you’re not running the version they are using for Tiger (10.4).
No. I haven’t tested the 10.4 betas and am not in the developer program (it’s US$500 for an ADC membership, IIRC).
-Bruno
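For anyone who wants to reproduce numbers like Bruno’s: assuming the figures came from OpenSSL’s built-in benchmark (an assumption on my part; the post doesn’t say exactly how the test was run), the commands would look roughly like this, with gcc -v showing which Apple compiler the plain gcc command points at and gcc_select switching it (the gcc_select options are from memory, so double-check against its help output):

# Note which OpenSSL build and compiler you are actually measuring
openssl version
gcc -v

# Built-in benchmark: digests/ciphers are reported in 1000s of bytes per second,
# RSA/DSA as signs and verifies per second
openssl speed sha1 rsa1024 dsa1024

# Panther-era Xcode tools install gcc_select for switching the default compiler
gcc_select -l
sudo gcc_select 3.3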
The GCC 3.3 compiler is already faster at the RSA/DSA workload. Interesting.
Yes, the $500 plus an extra test machine is keeping me out of the dev program too. If Apple were to come out with a VirtualPPC product, I’d be on it.
And go back to school for the ed. discount.
(That’s probably why Mac users are SMARTER than Windows users (don’t remember who did the study). It really helps to be taking a class and picking up the educational discount. Even the developer program has an ed. discount.)
You need some context here. FAT was created *twenty-five years ago* for XTs without hard disks running DOS. You can’t validly compare it to things created five years or one year ago.
UFS was created a lot longer ago than FAT was, and it doesn’t fragment nearly as much.
I’ve never been convinced by the NTFS defraggers’ claims about fragmentation. I’ve certainly never seen any noticeable performance improvement from defragmentation the few times I’ve tried it (and this is going back ~9 years now). IMHO they’re just trying to sell their software.
Hmm… Microsoft bundles a defragmenter. I wonder why they spent the resources to develop and support one? Oh, they didn’t; they just licensed it from the makers of Diskeeper.
http://www.pcguide.com/ref/hdd/file/ntfs/relFrag-c.html
The solution to the problem of fragmentation under NTFS is the same as it is under FAT: use a defragmenter. :^) For Windows NT, you will need to use a third-party program, one of the most popular being the Diskeeper program by Executive Software. In its wisdom, Microsoft decided to license the Diskeeper defragmenter technology and include it in Windows 2000, so the operating system now includes a built-in defragmenter, though it is likely either less capable or slower than the full Diskeeper program sold by Executive Software. (As an interesting aside, Microsoft found themselves in the hot-seat as a result of this licensing decision. The German government took issue with the defragmenter because the CEO of Executive Software is a member of the Church of Scientology, and a big hoo-ha resulted. How bizarre. As the old expression says, “I am not going there”, but you can read about this strange episode here.)
Certainly everything about the design of NTFS is supposed to avoid fragmentation.
But it fails miserably in this regard, just like many of the “designs”, or should I say implementations, Microsoft does.
Quite frankly I think NTFS’s fragmentation problems – and their impact on actual performance – are often grossly overstated.
I think they are grossly understated by their fanboys, myself.
http://www.win2000mag.com/Windows/Article/ArticleID/16444/16444.htm…
I particularly like this blurb.
Disk fragmentation is a timeless problem that isn’t limited to PC hard disks. (Raxco Software and Executive Software both started out providing defragmentation utilities for OpenVMS systems.)
Funny, considering the guy who designed VMS also designed NT.
http://www.winnetmag.com/Windows/Article/ArticleID/21917/21917.html
I am posting this inline because it really isn’t worth clicking through to read this much:
For years, I believed that NTFS didn’t fragment like FAT file systems do, but eventually I realized that fragmentation can indeed be a serious problem on Windows NT 4.0 systems. Has Microsoft made any efforts to decrease fragmentation in Windows 2000?
NTFS 4.0 (NTFS4) allocates free space between files so that as files grow, they expand into the adjacent free space rather than fragment. This feature protects NTFS from fragmentation—until files occupy approximately 50 percent of disk space. At that point, the disk becomes fragmented as you save new files because the existing files and allocated free space fill the disk.
Win2K’s NTFS 5.0 (NTFS5) doesn’t improve the situation. In fact, the problem appears to be worse. (For details about NTFS5, see Mark Russinovich, NT Internals, “Inside Win2K NTFS, Part 1,” November 2000 and “Inside Win2K NTFS, Part 2,” Winter 2000.) However, Win2K includes Disk Defragmenter, a lite version of Executive Software’s Diskeeper. You can use this built-in defrag tool to get a handle on the problem. (For information about Disk Defragmenter, see Kathy Ivens, Getting Started with Windows 2000, “Disk Defragmenter,” September 15, 2001.)
Interesting to read this after I just finished working on a customer’s brand-spankin’-new PowerBook G4. *Nice*
And no stinking USB malloc() bug!
Chuck-
– I’ve noticed that Symantec(?) lets you declare certain files rarely used and move them to the absolute back of the disk drive. With the file index/directory at the front, when those files are used there’s a tremendously long arm swing back and forth going on.
– Some of the allocation patterns are very questionable.
NTFS fragmentation doesn’t make a difference? Here are a few stories…
When GTA3 came out for PC, I tried running it. My PC wasn’t even close to the “minimums,” so when it was unplayable I just figured my computer wasn’t good enough. Later, I installed some game I hadn’t played in a while, and its performance was far worse than I remembered. I checked a bunch of stuff, and when I got to fragmentation it was at something insane like 80%. I managed to get it down to around 10% or so, and afterwards GTA was more than playable. It definitely didn’t fly on that old box, but I got at least double the framerate I had before the defrag, and at a higher resolution.
Every few months here at the job, someone manages to get a new computer, and there’s a big computer swap that goes down, usually based on seniority. When I got a new machine that time, even though the specs were better, the performance was worse. I checked for spyware/malware and poked around in some of the settings, but couldn’t find anything to explain it until I checked the disk fragmentation. By then I had been using Diskeeper for a while (pretty much mandatory for any non-gaming machine running Windows). I checked the fragmentation of idea.jar and oc4j.jar and found them both fragmented to within an inch of their lives. Once again, a defrag fixed the problem.
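Checking individual files like that doesn’t even need a full defrag suite; Sysinternals’ free Contig tool reports (and can fix) fragmentation one file at a time. A small sketch, reusing the jar names from the story above:

rem Report how many fragments a specific file occupies (Sysinternals Contig)
contig -a idea.jar
contig -a oc4j.jar

rem Defragment just those files, leaving the rest of the volume alone
contig idea.jar
contig oc4j.jar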
But the vocal Windows users on these kinds of boards use a magical version that the rest of us don’t know about. For the rest of us, Windows gets virii, gets spyware, crashes, and gets fragmented hard drives.
Yeah, I think I should try to get Diskeeper on my machine and have it start defragging whenever I’m idle for a bit. That’s not a bad idea. It’s true that fragmentation gets to be an issue in XP. Cleaning out the registry every two weeks, defragging, and running Ad-Aware does get boring, even though I’m used to it.
But it’s absolute bogus to say that regularly using Office 2003 with all its features, having the entire Adobe suite (Photoshop CS, After Effects), and 6-7 games like HL2, BField, HD2, Splinter Cell, etc., all on a desktop running XP, slows the box down, because it does not. I have a 2 GHz Athlon desktop with a gig of RAM and a 200 GB HDD. Upon booting I only use about 128 MB of memory, because I have the registry edited so that the core Windows DLLs are kept in memory; I had it down to 70 MB before. So Word starts up in less than 2 seconds, as does Excel and every other MS product. Adobe runs brilliantly on it too. No slowdowns, hangs, or crashes. I load Battlefield so much faster than other people that I’m pretty much always the first to join a server while everyone else is still connecting.
I think XP should be given some credit. I’m not saying it’s everyone’s OS, but it has its strengths, just like OS X does. My 2 cents. And I’m definitely buying a 64-bit Mac desktop in a year, because I’m curious.
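If the “core Windows DLLs kept in memory” tweak above refers to the DisablePagingExecutive setting (a guess on my part; the post doesn’t name it), it is a single registry value, and what it actually does is keep kernel-mode code from being paged out rather than literally preloading DLLs. A sketch, only sensible on a machine with plenty of RAM:

rem Keep kernel-mode drivers and system code resident in RAM instead of paging them out
rem (back up the registry first; this is a guess at the tweak the post refers to)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f

A reboot is needed for the change to take effect.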