SuperFetch is a technology in Windows Vista and onwards that is often misunderstood. I decided to delve into this technology to see what it is all about, and to dispel some of the myths surrounding this feature.
Very succinctly put, SuperFetch is a technology that allows Windows to make more efficient use of the random access memory in the machine it runs on. SuperFetch is part of Windows’ memory manager; a less capable version, called the Prefetcher, is included in Windows XP. SuperFetch tries to make sure often-accessed data can be read from fast RAM instead of the slow hard drive.
SuperFetch’s goals
SuperFetch has two goals: decreasing boot time, and making the applications you use most often load faster. SuperFetch also takes timing into account, in that it adapts itself to your usage patterns.
Let’s focus on decreasing boot times first. During the Windows boot process, the same files need to be accessed at different times. SuperFetch records which data and files need to be accessed at which times, and stores this data in a trace file. During subsequent boots, this information is used to make the loading of said data/files more efficient, resulting in shorter boot times.
SuperFetch performs more tasks to make the boot process more efficient. It also interacts with the defragmenter to make sure that the files accessed during the boot process are stored on the disk in the order they are accessed in. It performs this as a routine task every three days; the specific file layout is stored in /Windows/Prefetch/Layout.ini.
SuperFetch’s second goal is to make applications launch faster. SuperFetch does this by pre-loading your most often used applications into main memory, based not only on usage patterns, but also on when you use them. For instance, if you have the same routine every morning (Chrome – Mail – Miranda – blu), SuperFetch will pre-load these into memory in the morning. If your evening routine is different (for instance, it includes Word, Excel, and Super Awesome Garden Designer), SuperFetch will adapt, and load those into memory instead during the evening.
SuperFetch for applications basically operates in the same way as the boot variant: it traces which files are accessed by an application during the first ten seconds of that application’s startup, and this trace can then be used to load the proper data into memory at appropriate times. SuperFetch data for applications is stored in /Windows/Prefetch (the various .pf files).
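To make the trace-and-prefetch idea concrete, here is a minimal Python sketch. The directory name, text format, and function names are invented for illustration – real .pf files are opaque binary traces – but the principle is the same: record what an application reads, then read those files again ahead of time so they sit in the page cache.

```python
import os

TRACE_DIR = "traces"  # hypothetical stand-in for \Windows\Prefetch

def record_trace(app_name, accessed_files):
    # Persist the files the app touched during its first ten seconds,
    # one path per line (real .pf files are binary, not text).
    os.makedirs(TRACE_DIR, exist_ok=True)
    with open(os.path.join(TRACE_DIR, app_name + ".pf.txt"), "w") as f:
        f.write("\n".join(accessed_files))

def warm_cache(app_name, chunk=1 << 20):
    # Read the traced files sequentially before the app needs them, so
    # they land in the OS page cache; the app's own reads then hit RAM.
    trace = os.path.join(TRACE_DIR, app_name + ".pf.txt")
    if not os.path.exists(trace):
        return  # first run: nothing has been recorded yet
    with open(trace) as f:
        for line in f:
            path = line.strip()
            if os.path.isfile(path):
                with open(path, "rb") as data:
                    while data.read(chunk):
                        pass  # discard; the point is warming the cache
```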
Windows has always included a built-in caching mechanism, but it is quite limited. All it basically does is keep application data in memory after the application terminates, which allows applications to load faster right after you quit them. This caching mechanism is helpful, but limited – for instance, a reboot obviously flushes all data in RAM. In addition, cached data will eventually drop out of RAM as other applications push it out.
SuperFetch also has other advantages; Mark Russinovich details some of them.
SuperFetch myths
There are a lot of myths going around about SuperFetch, the most predominant probably having to do with how Task Manager reports memory statistics. If you open Task Manager (in Windows 7), it’ll tell you Total, Cached, Available, and Free. The problems arise from the “Cached” figure, since this figure is generally substantially higher than the “Free” figure.
When people look at Task Manager and see the “Cached” figure compared to the “Free” one, they assume that only very little of their memory is available for the applications they are about to launch. What they forget is that the cache filled by SuperFetch and the standard caching mechanism runs at a lower priority; memory requests by applications will always supersede SuperFetch.
In other words, whatever you see in the “Cached” figure is actually available to applications.
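A toy model may help here. This is not how Windows’ memory manager is actually implemented – just a sketch of the priority rule described above, where an application allocation simply shrinks the speculative cache instead of failing or paging:

```python
class PriorityRam:
    # Toy model: cached pages have low priority; an application
    # allocation discards cached pages instead of failing or paging.
    def __init__(self, total_mb):
        self.total = total_mb
        self.app = 0    # MB pinned by running applications
        self.cache = 0  # MB speculatively filled by SuperFetch

    def cache_fill(self, mb):
        # The prefetcher may only grow into memory applications don't hold.
        self.cache = min(self.cache + mb, self.total - self.app)

    def app_alloc(self, mb):
        free = self.total - self.app - self.cache
        if mb > free:
            # Not enough truly free RAM: drop cached data on the spot.
            # Nothing is written to the pagefile; the cache just shrinks.
            self.cache -= min(self.cache, mb - free)
        self.app += mb

ram = PriorityRam(4096)
ram.cache_fill(4096)       # SuperFetch fills everything it can
ram.app_alloc(1024)        # an application asks for 1GB...
print(ram.app, ram.cache)  # -> 1024 3072: the cache simply gave way
```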
And this brings us to the question of what to do with RAM. I have 4GB of main memory in my main desktop machine, and I would find it a total waste if the operating system did not use it to make my computing experience smoother. Isn’t that why I got 4GB of top-quality RAM in the first place? To make my machine faster?
This is exactly what SuperFetch does. It’s an intelligent mechanism that uses the RAM in a machine to its fullest potential to make computing a smoother experience. The fact that SuperFetch (and its related technology, ReadyBoost) actually works has already been confirmed by Tom’s Hardware. The key here is that the more RAM you have, the bigger the benefit SuperFetch delivers; according to Tom’s Hardware, Vista’s sweet spot was about 2GB of RAM, but even at 1GB they noticed a positive difference.
Contrary to what many Windows tweaking guides on the internet tell you, SuperFetch does not impact your everyday computing experience in a negative way. SuperFetch makes often-used applications load fast – it doesn’t make other applications load any slower. As such, turning it off, as some guides advise you to do, can only result in a slowdown, not a speed-up.
Another oft-made claim is that SuperFetch will make your boot times considerably longer, and that any advantages in application launch times are nullified by an increase in boot times. Not only is this very debatable (since SuperFetch also works for booting the operating system), but you also have to ask yourself: which action do I perform more often, booting, or launching applications?
Even if there were a trade-off, it’s probably a worthwhile one. Reducing application load times will have a more positive impact at the end of the day than reducing boot times would.
Conclusion
SuperFetch is something all operating systems should have. I didn’t buy 4GB of top-notch RAM just to have it sit there doing nothing during times of low memory requirements. SuperFetch makes my applications load faster, which is really important to me – I come from a BeOS world, and I like it when my applications load instantly.
SuperFetch’s design makes sure that it does not impact the system negatively, but only makes the system smoother. Because it runs at a low priority, its cache doesn’t take away memory from the applications you’re running.
Was so that Vista could actually run.
Take that douchebaggery to /.
Well, he’s accurate. Whereas in the past memory requirements went up based only on the applications that you actually ran, now memory requirements have gone up based on applications that you *could* run and *have* run. This means that your memory requirements go *way* beyond just the applications you will run at any one time.
OS X has been doing this for a while, although obviously not in ‘exactly’ the same way. It does, however, cache almost everything, which is also the reason why a lot of typical or old memory tools, and naive readings of available memory, don’t apply to it.
It’s also why tools that ‘free’ unused memory are a bad idea: you’ll only end up slowing things down.
By OSX do you mean Linux/UNIX? Because most (if not all) have been doing that for quite some time. You always hear from noobs on forums: “I barely have anything running but it says I only have x% free!”, because they are all used to Windows using only ‘what is running’. Hell, I was a culprit of it at one point…
Just an FYI
Actually no, it is not similar to SuperFetch. SuperFetch is designed as an automatic technology that is active from the moment you boot the computer. You turn on the computer, it checks your usage history saved on disk, and then it starts silently pre-loading things in the background.
Mac OS X and some *NIXes do it differently. They are slow to launch an app, but when you quit the app, not all resources are flushed out of memory… So if you later decide to re-open the application, it will open faster – just in that case. However, once you turn off your computer, all RAM is flushed, and those caches are lost. The next time you turn on your computer the process starts all over again. What Apple and *NIX vendors recommend is: do not turn off your Mac or workstation unless you have to.
SuperFetch tries to do something different. It tries to guess what your habits are – how you use your computer – and pre-loads the apps you use according to those habits… all without asking. It can work sometimes. When it works, it’s wonderful. But when it fails, it fails really badly, like a mispredicted branch on a Pentium 4.
So I believe it is not perceptually right for the user. It is like a roller coaster: sometimes the system is incredibly fast, other times it is too slow. So users complain about it. If the system were always slow, the user would adapt to its speed and not feel the difference… After all, we have all used slower computers in the not-so-distant past. It’s the ups and downs in speed that get users frustrated.
As explained in the article, Windows does that as well.
Of course, Windows Vista does it too, but when I said “differently” I was referring to the SuperFetch approach. Mac OS X does not have SuperFetch.
Yet all apps under OS X load supremely slowly. While I can launch Firefox in under 3 seconds on any hardware under Vista, under OS X it never takes under 7-8 seconds on any Mac, including a Mac Pro and my C2D iMac with 3 GB RAM. The same goes for iTunes, Adium, iPhoto (crap!), whatever. I’m not the “50 apps open at all times” type, so I need them to launch pretty quickly. With Leopard, that is impossible.
Yes, Mac OS X is slow at launching compared to Windows, but that has nothing to do with a SuperFetch kind of technology, because OS X does not have one.
Mac OS X has pre-binding, which is different, plus cache optimizations for booting. The cache optimizations are similar to what Windows XP offers.
Pre-binding is “something” that improves the loading of dynamically linked applications (mostly Cocoa ones). But it is not like SuperFetch: pre-binding is a process of knowing where an application’s parts are, so when you launch it, the app has a fair idea of where to load components from. In short, pre-binding is the technology Apple uses to avoid a Windows-style registry. A Windows-like registry would be faster, but it is easy to corrupt and difficult to repair, as many PC users can testify.
Mac OS X is also slow to launch apps for other reasons… Cocoa’s dynamic nature, for example, which involves a runtime somewhat similar to a Java runtime, but without the emulation part. .NET apps are slower to launch than traditional Windows apps too. But working with Cocoa has so many benefits that launch times are considered a bearable trade-off. The same goes for Java and .NET.
I see this mentioned often, but rarely see it materialize post Windows 2000. Reverting to a previous version of the registry is as easy as running a system restore, which takes all of 2-3 minutes to do. It’s still more brittle than it should be, but has come a long way since the Windows 98 days.
So Apple’s launch slowness is a toll we pay for not having Windows rot? I think I could accept that. (There is OS X rot too, but it’s not as painful.)
Yes, it is. And yes, pre-binding can get corrupted too, but any user can perform maintenance on it, and it does not degrade performance over time. Some people might say it is always slow, but the speed is consistent, so the user adapts to it. And of course, you always have the option of buying faster hardware.
The problem with Windows is that after a clean install it is very fast, but over time it gets slower, eventually terribly slow. The registry is difficult to repair.
The registry is a nice idea, and when it works, it works. But computers are not always meant to be used by specialists, so design decisions have to be made; Apple’s approach is slower but more user-friendly.
And yes, Windows has a better registry these days, but Mac OS X’s pre-binding has improved as well.
Whenever I have SuperFetch turned on, I hear a consistent, annoying noise from the hard drive while watching movies. I have noticed no speed-ups from it – just lots of noise. So you can say all you want about how much faster applications load, but a tick-tick-tick noise while watching a movie really distracts from any advantages it has.
The problem with SuperFetch is that Microsoft designed it making certain assumptions. And you know what they say about assumptions. Let me explain:
Microsoft assumes that the normal user is going to use the same apps over and over and over. It then fills up memory pre-loading these apps. What happens when I want to run a big app that I don’t run often? How does this work with multiple users?
As a real-world example, I run computer labs at a university. We moved to Vista this past year. Computer performance gets worse as the day goes on. Why? Different users use different apps, and the computer can’t make any long-term predictions. So every time a user does something, the system has to unload what is in memory and then load something new. This results in lots of disk thrashing and horrible performance.
Another bug we ran into was with products like Deep Freeze and Drive Shield. How is Vista going to learn usage patterns when it can’t make permanent writes to the drive? It doesn’t.
Superfetch can be useful, but it can also be a pain. The most useful idea would be to allow the system admins to pick which apps get pre-fetching and then turn off the adaptive learning part.
I wouldn’t call your experience with SuperFetch a bug. It sounds like it simply isn’t a good feature for your situation.
Couldn’t this be solved by using multiple accounts? I’m actually researching this now, because it dawned on me that if SuperFetch data is stored in /Windows/Prefetch, how is it multi-user aware?
Maybe it uses attributes or ACLs to tie .pf files to user accounts? I really have no idea, and Google isn’t helpful either. Maybe PlatformAgnostic (you there?) or some of the other Windows NT experts in here can help us out…?
Mmmh… why should SuperFetch (or any other similar technology) care about different users? I mean, it is focused on programs. Word loads the same exe and the same DLLs whether launched by mum or dad or Joe the plumber, so the cached memory is still the same.
The PreFetch directory can be shared across accounts, because it merely contains the list of what libraries need to be loaded by each application. The user’s profile of often-used applications doesn’t live in there, I don’t think.
It’s really quite simple. The SID of the user account is referenced in the filename (depicted here as %SID%):
AgGlUAD_P_%SID%.db
AgGlUAD_S_%SID%.db
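In other words, the per-user SuperFetch databases can be told apart by the SID embedded in their names. A small hypothetical example (the SID value below is made up):

```python
import re

# Invented SID for illustration; real ones follow the same S-1-5-... shape.
names = [
    "AgGlUAD_P_S-1-5-21-1004336348-1177238915-682003330-1003.db",
    "AgGlUAD_S_S-1-5-21-1004336348-1177238915-682003330-1003.db",
]

sid_pattern = re.compile(r"AgGlUAD_[PS]_(S-1-5-[\d-]+)\.db")
for name in names:
    match = sid_pattern.match(name)
    if match:
        print(match.group(1))  # the SID of the owning user account
```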
Also, the .pf files are nothing new. These are trace files that are used by the cache manager to improve application launch times through strategic prefetching. This is a feature that was introduced in XP and, as far as I’m aware, has nothing to do with SuperFetch. The windowsitpro.com site has a fairly detailed article about it:
http://tinyurl.com/9qhwso
EDIT: I didn’t notice at the time of writing that the distinction between prefetch and superfetch is elaborated upon to some extent in later comments. Still, the above article goes into rather more detail.
If SuperFetch is just caching and prefetching, wouldn’t ‘unloading’ it take virtually no time, as the memory need not be swapped out or written to disk? And, I would assume, any work done by SuperFetch is done at ultra-low priority. So, it seems to me, no matter how you cut it, SuperFetch should not cause the performance degradation you’re describing; while not providing any benefit, it shouldn’t make things any worse than if SuperFetch didn’t run at all…
Are you sure this is being caused by SuperFetch? Could it be there is something else causing the performance to degrade over the day?
As wonderful a feature as people make it sound, I wouldn’t benefit much from SuperFetch either. I have only 1GB RAM, and whenever I am on the computer I am doing memory-heavy things, e.g. programming, 3D modeling, or gaming. There simply isn’t any memory left to cache anything.
I was planning to try the Win7 RC out soon, but I’ll have to disable SuperFetch. It’ll most likely just slow performance down in my case instead of boosting it.
Well, you should seriously consider upgrading from 1 GB of RAM in 2009, especially if you are into FX, 3D, and games.
The same happens when you run games. I ran the Silkroad Online installer (an MMORPG), which is 2GB in size, and Vista’s Superfetch was reading this file after every reboot – this is _retarded_.
I don’t see the difference from the prefetching mechanism available in Windows XP. Maybe there are some optimizations, but the approach is the same. It’s a marketing name for a retouched XP prefetcher, I think.
See here:
http://channel9.msdn.com/forums/Coffeehouse/112054-superfetch-vs-pr…
[quote]SuperFetch is something all operating systems should have.[/quote]
My comment: Superfetch is not an idea that MS invented.
My comment: I never said so.
Nor did I imply other operating systems do not have it. This article is specifically about the SuperFetch implementation of a certain concept that exists in one form or the other on other operating systems as well, as well as the myths that surround it.
I have been using it for quite some time now under Ubuntu. It is called preload. It works almost the same way as SuperFetch.
Mostly, Microsoft buys inventions or re-invents them (just like RSS: they never invented it, but they act like they did, and most people will believe it’s something they invented).
Haha, tricksy! Wait until all of the websites in the world use it, and then it’s a smooth experience for their naive end user!
I was using prefetch for some time, but realised that, probably like SuperFetch, I get almost no gain in app speedup. On the contrary, my system takes 10-15 seconds longer to boot than without the prefetch daemon.
So my suggestion is this: disable the prefetch daemon and restart the system. Then clock your boot time to see the difference.
Correction: by prefetch I meant to say preload.
I’ve seen an awful lot of sites trying desperately to dispel ‘myths’ about Superfetch, and Prefetch in Windows XP, but they don’t dispel them at all, because any such system has to make assumptions about what your most-used applications are. It really is debatable how much of a performance improvement there is in using it, and I’ve seen no benchmarks to suggest that any improvement exists – at all. I’ve only seen articles dispelling ‘myths’ about it.
Install new apps and use them, or use a reasonable variety of apps, and what Windows thinks you’re going to be using is just wasting memory and eventually ends up having to be unloaded, slowing down the current app you want to run. This can result in the very well-known paging to and from disk people see as the cache is reformed – the very thing fetching is supposed to minimise. You see, Superfetch constantly has a need to use ‘all’ your memory, because that’s how it’s designed. To make it work you need a ton of memory, and you *need* 64-bit Windows, to ensure this paging situation doesn’t arise. Kind of pointless, really. This situation gets far, far worse when you start using more memory-intensive applications. You don’t get anything for free. If you’re a gamer, turn it off immediately.
Strangely, you don’t see this if you turn Superfetch off, or if you use Windows Server as a workstation, which sensibly doesn’t use it. Each application is treated equally. Superfetch works better the more memory you have? Well, duh. I’d love to be able to see into the future and preload for every application I’m going to be using before I do, but that’s just a tad impractical. I just want the app I’m running now to run well, not just load faster.
Reducing load times? Bah. Software vendors have been preloading for years with all those pointless system tray things at startup. It might make their applications load faster but it will probably be to the detriment of what you happen to want to run.
To really make this kind of thing work, the algorithms that predict what you’ll be needing have to get far better. Until they get it right, just turn it off, or install Windows Server and use it as a workstation. It just tries to be too clever for its own good.
My benchmark in XP:
After deleting all the prefetch files in the prefetch folder, WinXP takes way less time to boot.
I use my XP only to play games (for work, Linux is fine too). I didn’t notice a slowdown in the startup time of my games.
So, WinXP takes less time to go from powered off to in-game without prefetch.
Did you even bother to read the article or the link that Thom posted? Apparently not. SuperFetch and loading applications into the tray are two totally different things. You make the false claim that SuperFetch causes other apps to page stuff out; that isn’t true, since SuperFetched material is always dumped when memory is needed. I have never seen any claims, other than from forum trolls, that would prove that using SuperFetch hinders gaming performance. Show us the proof!
No they aren’t, and lots of people seem to be hypnotised into believing that they are. The intended effect of using the two is the same – preload all or part of an application into memory to make it load faster. The only difference with Superfetch is that it is a more universal way of managing this for all applications you might use. It does not make the application run faster once it is running, and Superfetch itself cannot guarantee that any given application will load or even run faster – it only preloads what it thinks you’ll be using. There are only so many ways of cutting that.
Superfetch relies on building a memory cache that expands to the total amount of memory you have. Once a cache of a large number of applications has been built up after a reasonable amount of usage (which is where the ‘myth’ that this gets worse over time comes from) and hits the limit of your installed system RAM, you really start to see the effect of memory management, and paging happens as things get moved around. This is why you need several gigabytes of memory to make it work.
Superfetch is a process, albeit a low-priority one, that moves memory and rebuilds the cache both at startup and as it’s running. Memory management is expensive, especially once you start hitting certain limits. To think otherwise, and to think that Superfetch is ‘free’, is stupid. To cultivate this image by dispelling ‘myths’ is even more stupid.
Sorry, but I’m afraid you can’t just start demanding proof in response to me asking for benchmarks and proof of whether Superfetch actually works for people. The subject is Superfetch, therefore show me that Superfetch works. You can’t just throw something in and say “Prove that it doesn’t work”. It doesn’t work like that.
I have yet to see that this is anything other than another technology from Microsoft that expands memory requirements in a pretty pointless way.
You are mistaken. Data cached by SuperFetch NEVER gets paged. Once SuperFetch-cached memory is needed by something else, it’s just “deleted”, and that’s it. It is NOT written back to the pagefile.
Yes, because venerable sites like Tom’s Hardware and AnandTech, who both call SuperFetch a tremendous performance booster, are all lying, and I should definitely trust you more than I should trust them.
Or my own worthless perceptions, of course.
This is correct. A pre-loader is “speculative”. It pre-guesses which data might be useful later, and during idle moments of the system the pre-loader loads such data into RAM which is spare at the time.
If the system actually needs that RAM for a real use, then the pre-loader’s speculation was in vain, and the pre-loaded data is simply overwritten. It is treated as if that RAM was never allocated at all, and was still spare.
You miss the point. No, when you cache and preload it doesn’t get paged – it is merely cached and overwritten. We’re ignoring disk caching there, though, so ‘paging’ was a poor choice of word on my part. The point still stands, however, that you need a reasonable amount of free memory to make caching work, especially when you have a service on top that is constantly trying to keep that cache on an even keel. When you hit memory limits, as many people do when they run a desktop and memory-intensive applications, some strange things can happen. The free memory isn’t there to make it work, because caching and prefetching is all about putting free memory to work. That’s why we’ve ended up with things like ReadyBoost tacked on.
Tom’s Hardware and Anandtech both confirm what I’ve said – you need lots of free memory over and above the applications that you use, and if you don’t have it, you need to boost it with ReadyBoost.
That is not a free operation by any stretch. Hmmmm, let me see. Do I start work immediately – fire up my app, wait perhaps a couple of seconds extra (in which case my app will be cached anyway) and have a cuppa – or do I wait a few minutes and have a cuppa to get the perception of a quicker start time? Hell, I might even be waiting for several hundred megabytes of applications to be preloaded that I won’t use today.
Yer, I’m sure if you load four or five applications time after time that fit nicely in four gigabytes of RAM, and don’t use the disk at all, then yer, things will be lovely. Alas, the world and people’s usage patterns are not that simple. Things have a weird way of evening themselves out.
It’s a solution in search of a problem. Yes, disk access is slow, so we’ll intelligently cache in memory, every time you boot up – but if you don’t have enough RAM then you’ll need ReadyBoost. If you don’t have that, then you’ll hit memory limits anyway, hitting limits within Superfetch, probably adversely affecting other apps, and perhaps even going back to disk via disk caching.
Whatever way you dice it, you’re hitting system limits. Yet again, you have to throw memory, and now flash drives, at the ‘problem’ to make it work for you – which was the point all along.
That’s the point. Your perceptions and even my perceptions on our own usage patterns are worthless here.
Sorry, but I think you don’t get it.
Superfetch loads things into memory it thinks you’ll need. If you don’t need it, no harm done – ‘unloading’ doesn’t exist. Apps start as fast as without superfetch OR faster. Never slower. Same story with the Linux memory management, except that the Linux one is less smart.
The only 2 ways in which superfetch could technically be a disadvantage are these:
– if the IO priority system is screwed, the reading superfetch does COULD interfere with other activity. Depends on the quality of the kernel’s management of priorities. No idea if this is an issue at all, I doubt it, but it is possible.
– superfetch uses resources while your PC wouldn’t have been doing anything if it weren’t running. Bad for power usage (and thus the environment), and probably slightly bad for the lifespan of your hardware.
It might be true that the first issue I noted is actually real – but then the MS engineers writing this would’ve been rather stupid to forget about making the priority system work properly. Afaik that’s not the case, so superfetch never slows anything down (but might not speed anything up either, depending on usage patterns and available memory).
The second – well, it matters for laptops and environmentalists, I suppose. But that’s about it…
[quote]The only 2 ways in which superfetch could technically be a disadvantage are these:
– if the IO priority system is screwed, the reading superfetch does COULD interfere with other activity. Depends on the quality of the kernel’s management of priorities. No idea if this is an issue at all, I doubt it, but it is possible.
– superfetch uses resources while your PC wouldn’t have been doing anything if it weren’t running. Bad for power usage (and thus the environment), and probably slightly bad for the lifespan of your hardware.[/quote]
I can think of at least one case where SuperFetch is a disadvantage: if you are playing a game, the game will most likely read its files in more-or-less sequential order, and the disk cache will load the parts of the files it assumes will be needed next. But if SuperFetch is caching something in the background while you’re playing, then it’ll invalidate the disk cache, causing slightly longer loading times.
Now, I’m not saying that’s a big issue though. It’s most likely negligible enough for most people not to care. I just mentioned it for the sake of argument.
Well, yes, it is a valid issue. Caching algorithms would have to be pretty bad not to take such a situation into account, but when low on memory it might happen.
Furthermore, from the threads below I understand the IO priority system in Windows isn’t at the level of quality it would need to be to prevent prefetching from having an adverse effect on the performance of the system. In other words, the first disadvantage, which I considered so unlikely, seems to be hurting users…
Indeed. One could only hope for an actual benefit from preloaders if the preloader uses ONLY system resources that are idle at the time. There must be spare (unused) RAM, and the CPU and disk must be idle when the pre-loader is operating; otherwise the preloader will reduce the speed and responsiveness of the overall system where it is supposed to be assisting.
Managing memory in any way is expensive. You have to manage that cache. Nothing is ever cost-free.
This is nothing like Linux memory management. Linux traditionally keeps a cache because memory is wasted if it isn’t used. Old pages are paged out to disk regularly. So far, so good. What Superfetch does on top of that is ‘intelligently’ preload applications it thinks you’ll need into a cache, both at startup and as it’s running. That’s an expensive operation from an I/O point of view, and it is not guaranteed to help you at any given time or with any given application you might be using. Preload for Linux faces all the same unreproducible issues and questions of whether it makes a real difference to a user’s overall usage. Hell, Linux’s caching alone still has some extreme corner cases.
With Linux, if you load something once and then again and it is still in the cache, then great – things will be faster. However, it does not attempt to constantly fill and manage the cache based on its idea of what your usage pattern is. That’s the key difference. In theory, it would be great if you could get that right, which is why people think this is such a great idea, but in practice things are very different.
Besides, with any form of caching the central point stands, and it certainly stands with an additional service over and above caching, such as Superfetch. You need plenty of ‘free’ memory over and above what your applications use to make it work. It’s fine when you can put ‘free’ memory to work, but it’s not fine when you don’t have it, which is often the case on desktop systems. Whereas the cache itself might be relatively cost-free given the benefits, if you start sticking a service on top that decides what should constantly be in the cache, then you get a whole bunch of real unknowns.
It’s not really the technological idea that is at fault with Superfetch. It’s the idea that it knows what your usage pattern is, it knows what you’ll be running in the future and the algorithms responsible for deciding it.
“Superfetch loads things into memory it thinks you’ll need. If you don’t need it, no harm done – ‘unloading’ doesn’t exist. Apps start as fast as without superfetch OR faster. Never slower. Same story with the Linux memory management, except that the Linux one is less smart.”
Nonsense. The way Linux uses caching is much, much smarter, IMHO. It does moderate caching, leaving much of the system RAM free. This means less disc I/O, less flushing, less paging, but also faster load time for common apps, and faster access time for common data.
SuperFetch just tries to use as much unused RAM as possible. While the idea “unused RAM is wasted RAM” sounds decent in theory, in practice it’s not. Trying to purposely use up unneeded RAM just causes more activity, more disc I/O, more paging, and more flushing (when less common stuff is started). It also drains battery life on laptops faster (more stuff going on).
And of course, no caching is a bad idea, too. Caching, in a lot of areas, has proven to have speed benefits.
Thus, the middle ground is the way to go. Linux takes the middle ground. SuperFetch takes the extreme ground of caching and using as much memory and disc I/O as possible.
And the funny thing is, Linux, with its more moderate caching and its utilization of fewer overall resources, is much, much faster, and loads apps (Firefox, Eclipse, OpenOffice, QtCreator, NetBeans, GIMP – all cross-platform apps) a gazillion times faster than Vista with Superfetch.
Now that is complete nonsense. Linux will gladly consume almost all available physical RAM for its pagecache. Case in point … I have a Linux 2.6.28.9 system with 8G of RAM which has been up for 11 days. It currently has only 69M of RAM unused, with 637M used by applications, 523M used for kernel buffers (largely attributable to the ext4 inode cache in my case) and 6750M cached! In other words, the pagecache grows as large as it possibly can and that’s a good thing.
That it does not on your PCLinuxOS system is due to an insufficient workload being generated for the pagecache to ever consume the majority of your available RAM and/or the kernel not running long enough before being shut down.
And, guess what, Windows works in the same way and there’s nothing wrong with that. Indeed, it’s hugely beneficial and the reason why a RAM upgrade yields immediate performance enhancements (even beyond the scope of providing for any especially memory hungry applications that may be used):
http://en.wikipedia.org/wiki/Page_cache#Memory_conservation
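To put the figures from the 8G system above in perspective, a rough calculation (assuming, simplistically, that all of the buffers and pagecache are clean and reclaimable) shows how little of that “used” memory is actually unavailable:

```python
# Figures quoted above, in MB. Cached memory is dropped on demand, so it
# still counts towards what new allocations can claim.
unused, apps, buffers, cached = 69, 637, 523, 6750
available = unused + buffers + cached
print(f"{available} MB effectively available, despite only {unused} MB 'free'")
```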
As the original poster alluded to, SuperFetch is a different kettle of fish. By virtue of the very nature of SuperFetch, the comment that the Linux VM is “less smart” is correct, as there is no equivalent to SuperFetch (please also note that “less smart” does not mean the same thing as “ineffective”). However, one could argue as to whether SuperFetch is worthwhile, which, ultimately, is the topic under discussion here.
This story is full of comments by people indicating that it can be slower. Note that if an app requests 200MB of RAM, the memory manager still needs to walk along the standby list discarding prefetch data page by page, so it’s more accurate to say “SF data is not written back to the pagefile” – but it still needs to be unloaded.
If the priority system were not screwed, it could still be bad. In theory, low priority IOs should be issued to disk on a 1-in-20 basis when normal priority IO exists. If your normal pri IO is sequential, that low priority IO can still trigger a seek and slow it down.
And yes, IO priorities are far from perfect. In Vista, no priority boosting was performed if a regular priority thread was waiting on activity performed by a low priority thread. It could stall on a collided pagefault (needs the data prefetch is fetching) or on any lock in the IO path (prefetch has a lock held, issued low pri IO that takes seconds to complete, and the foreground apps wait for the lock.)
In Win7 boosting is implemented and will be performed if a normal pri thread waits on many common locks (but not all) held by a low pri thread.
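For readers who want to see the shape of the problem, here is a toy Python illustration of that inversion. Python threads have no real priorities, so this only demonstrates the blocking structure that priority boosting is meant to break; nothing here is Windows code:

```python
import threading, time

io_path_lock = threading.Lock()  # stands in for a lock in the IO path

def low_priority_prefetcher():
    with io_path_lock:
        time.sleep(2.0)  # stand-in for a slow, low-priority disk read

def foreground_app():
    start = time.perf_counter()
    with io_path_lock:  # stalls until the prefetcher releases the lock
        pass
    print(f"foreground waited {time.perf_counter() - start:.1f}s on the lock")

slow = threading.Thread(target=low_priority_prefetcher)
fast = threading.Thread(target=foreground_app)
slow.start(); time.sleep(0.1); fast.start()
slow.join(); fast.join()
```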
That’s inherent in the design of SF (use background cycles to load things.)
(Disclaimer: I am an MS engineer who worked on making the priority system work properly, but have not worked on SF itself.)
To be fair to the SF crew, Vista was finished under considerable time pressure. Changing every lock in the IO path to be low priority aware is a huge job. They were faced with the choice of shipping with no IO priority, shipping with IO priority in the disk scheduler that can invert in higher levels, or spending years making it perfect. They chose the middle option, presumably as a compromise. It still gives some benefit, but also has some inversion problems. Their choice doesn’t make them stupid – it made sense at the time. Resources are always finite.
Well, stupid is of course a bit harsh, but considering the amount of complaining, it probably should’ve been off by default until a service pack had fixed the most serious performance regressions. Either way, thanks for the information… I’ll be trying Vista today on my new laptop. If it performs smoothly enough I might keep it on there (after installing KDE 4, of course) instead of replacing it with Linux right away 😉
That would make sense if you were a small startup that needed to release the product or go bankrupt. But the biggest OS maker, with 95% market share, needs to cut features after 6 years of development?
This is dramatically oversimplified. Superfetch is a) generating IO to load all that data in; b) consuming memory to load it into; and c) guessing about what data will be used next. If it guesses right, you get a speedup. If it guesses wrong… well, let’s hope you weren’t waiting on any IO or needing RAM at the time.
Vista added low priority IO and low page priority in part to limit the damage superfetch can do in those negative cases. Win7 has done considerable work to make low page/IO priority more effective and generate fewer inversions. But it can still do damage, believe me.
I used to build overnight. I’d come in in the morning and check email. Outlook would fight with superfetch – SF was trying to pull back in the stale build data (that I’m finished with) and would evict my email cache to do it. Outlook generates plenty of IO and uses plenty of RAM on its own, so superfetch was hurting my foreground task to benefit a task which was no longer occurring. In the end, I killed superfetch in frustration, and now accessing email in the morning is painless.
I should also add that tracking usage patterns is not free. There is a component in Vista (fileinfo) that does this tracking. Essentially, every time you open a file it needs to look up the name of the file in order to record access information about it. This slows down file system access by around 10%. Benchmarking superfetch reliably requires removing fileinfo to generate an apples-to-apples comparison.
Perhaps not. But if your disk can read at 50MB/sec, and all your apps are perfectly defragmented and sequential, it will take SF 80 seconds to populate 4GB of RAM with apps. In reality it will take longer. Since RAM is increasing faster than disk bandwidth, the sky is the limit for how long this process can take.
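A quick back-of-the-envelope version of that arithmetic (using the 50MB/sec figure assumed above) shows how the fill time scales with RAM size:

```python
disk_mb_per_s = 50  # assumed sequential read throughput, as above
for ram_gb in (1, 2, 4, 8, 16):
    seconds = ram_gb * 1024 / disk_mb_per_s
    print(f"{ram_gb:2d} GB RAM -> {seconds:6.0f} s of pure disk reading")
# 4 GB works out to roughly 80 seconds, matching the estimate above.
```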
If you leave your PC on all the time, that’s not such a problem. If you boot, run a random app, and shut it down, SF is generating IO that is of no benefit to you, and may be negatively impactful.
With any heuristic, the user relies on the guess being correct. Guess right, perf goes up. Guess wrong, perf goes down. By nature, superfetch cannot be right all the time. Whether it is a win for you depends on how “predictable” superfetch determines you are. This is why opinions on this feature are so varied on the internet – every user is different.
In the past I worried about all that memory being filled up, but I’ve become convinced that letting it sit idle is indeed a waste. However, I have never noticed any performance increase due to Superfetch. I’ve turned it off on Vista and doing so doesn’t seem to slow anything down. I think the poster above was on to something about I/O load. That additional load is going to offset any perceived benefit. Perhaps the timing of that load needs to be better allocated.
I think the I/O load on Vista was its main downfall. With Superfetch, Windows Search, Windows Defender, and your anti-virus of choice all competing for the most limited bandwidth in your machine, performance has to take a hit. I generally disable the first three and see a massive increase in performance regardless of RAM or CPU speed.
I wonder how many of these benchmarks we see comparing Windows 7 to XP SP3 are running Windows Search on both?
Indeed, the main problem with Vista’s superfetch is that, even with low priority for its CPU and I/O thread, it can cause major slowdown during boot, because superfetch is consuming the hard drive when you just want to launch an app immediately.
In Vista, superfetch starts before you log into your system. In a multi-user setup, it can’t guess very well which applications to preload first.
For a single user, it actually does work well (at least for me, and depending on how much RAM you have for superfetch to preload).
In Windows 7, Microsoft is further lowering the priority of the superfetch thread, and also delaying it from starting until after the desktop has been loaded.
Windows 7 also in general just has fewer processes and services at startup, with a rewritten service stack that does better delayed loading (partial loading on demand).
I’ve found that responsiveness at boot (in Vista and XP) is tied directly to the random throughput of your hard drive. Since upgrading to a 10K rpm Raptor, I can boot Windows and pretty much immediately open any application. Superfetch doesn’t really get in the way of the Velociraptor with its low seek time. A typical 7200rpm drive is at least twice as slow, and a 5400rpm one is even slower. Most laptops, of course, use the slowest option, sometimes even a 4200rpm drive, or a poor SSD.
But for more generic hard drives of the 7200rpm variety, I have seen Superfetch make the machine unbearable to use during boot. Windows itself would freeze and pause waiting for hard drive i/o.
Some of it I attribute to bad drivers, but even with a powerful modern machine (Q6600, 4GB, 500GB 7200rpm) I can notice the pauses for I/O.
Similarly, it’s the same situation with poor SSDs that can barely read or write random I/O. It’s all about latency.
Overall though, if you can wait for Vista to fill the superfetch cache (50MBps to fill 4GB?), then your experience after that should be better, assuming Vista prefetched the correct programs.
Superfetch tries to guess what one is going to do and autonomously allocates resources accordingly. Sounds good on the surface, but on a deeper level it offends my design sensibilities. I can’t help but be reminded of Scotty’s observation in Star Trek III: “The more they overthink the plumbing, the easier it is to stop up the drain!”
The “guess” is based upon statistics of what programs are actually used, so it shouldn’t be that bad (for some use cases anyway). Having said that … there is absolutely no doubting Scotty’s wisdom.
Dare I mention it, but on systems with even a modest amount of RAM by today’s standards … why not just load the entire OS and all of its applications into RAM and be done with it?
Such a thing is possible:
http://www.puppylinux.org/home/overview
Once it has booted and loaded itself entirely into RAM, it is super fast, as you might imagine. It can do this on systems with only 128MB RAM, although 256MB is obviously better. Anything over that, you are laughing!
Sort of SuperDuperFetch, if you will.
Disclaimer: An anecdote does not data make. YMMV. Widely.
I was looking at a friend’s Vista laptop, 3 GB, decently if not extravagantly spec’d. It was kinda slow, and the disk would thrash for minutes after the desktop appeared. I was trying to find out what was hitting the hard disk and when, using my trusty optical detectors, so I turned off Superfetch. Lo and behold, the post-desktop thrashing stopped and the system’s responsiveness improved.
Conclusion? Superfetch ain’t always so super.
I’ve found that high-performance OSes tend to use efficient algorithms, and as little code as practical, in providing a sound implementation of the basics — disk I/O, memory management, and the like. Microsoft, on the other hand, went the Rube Goldberg route, apparently compelled to build a rather complex caching scheme which tries to predict what you will do.
Well, software is never all that great at that (and I often can’t stand Microsoft’s presumptuousness in trying to do so). So I have to ask: why? Is it because Microsoft knew Vista was going to have performance issues, and was desperate to do anything to make it appear that that was not the case (even if that something, in cases like mine above, amounted to nothing more than good advertising buzzwords)? Vista, with this “superior” system, is not faster than XP on either machine I’ve tried both OSes on (not my own – I was helping others decide which OS to use; both folks chose XP).
FreeBSD and Linux, which generally stick to doing the basics well, both wipe the floor with even XP on my personal hardware. I can’t imagine what the disparity would be with Vista, but I refuse to purchase Vista outright (maybe I’ll download the Win 7 RC and try that).
Unfortunately it’s not just Microsoft that’s afflicted with this. I’ve about had it with turning off “quick launchers” from Adobe (Acrobat), Apple (Quicktime), and even OpenOffice (though OpenOffice at least makes it doable without using other tools). It appears that writing tight, efficient code is a lost goal, if not a nearly lost art, and that’s sad.
I’ve never liked it when an operating system vendor tries to work around inefficiencies in a system by claiming that guessing what the end user requires, and loading it before they need it, will improve performance overall. I don’t like it because it is based on a series of assumptions that fall over in the real world – what happens in the lab doesn’t always translate to the real world.
What I’d sooner see is operating system vendors looking at how libraries are linked to the executable; fixing the underlying storage mechanism and the file system design would go a long way towards addressing the issue (maybe a hybrid design: SSD for the applications and operating system, a traditional hard disk for personal files).
If you craft it carefully, pre-loading can work. Think of the OS sitting there idle, having loaded the kernel, window manager, and desktop, with nothing yet to do because the user has yet to press the “start” button. The machine has been keeping track of “libraries accessed, ranked by frequency”, and it just knows that in almost every session some libraries – perhaps an HTML renderer, a vector graphics renderer, and a directory lister DLL – are almost always used but are not loaded yet.
May as well use the time, hey, until the user hits the start button? We are going to get lucky a good percentage of the time and have some stuff pre-loaded that will be needed later, and even if not … well the system would have been idle otherwise.
As soon as the user hits the start button … all bets are off … we will need the system to be responsive.
It could work … as long as it was carefully crafted.
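As a sketch of what “carefully crafted” might mean, here is a hedged Python example of an idle-only preloader. It is Linux-specific (it samples /proc/stat), the idle threshold is an invented tuning knob, and a real implementation would also watch disk and input activity:

```python
import time

IDLE_CPU_FRACTION = 0.1  # invented threshold: below this we call the box idle

def cpu_busy_fraction(interval=0.5):
    # Rough busy fraction from /proc/stat deltas (Linux-specific).
    def snapshot():
        with open("/proc/stat") as f:
            fields = list(map(int, f.readline().split()[1:]))
        return fields[3] + fields[4], sum(fields)  # idle + iowait, total
    idle0, total0 = snapshot()
    time.sleep(interval)
    idle1, total1 = snapshot()
    return 1.0 - (idle1 - idle0) / max(1, total1 - total0)

def preload_when_idle(paths, chunk=1 << 20):
    for path in paths:
        if cpu_busy_fraction() > IDLE_CPU_FRACTION:
            return  # the user hit the "start" button: back off immediately
        try:
            with open(path, "rb") as f:
                while f.read(chunk):
                    pass  # warm the page cache with the likely-needed file
        except OSError:
            continue  # file vanished or unreadable: skip it
```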
And thus the hate brigade on OSNews.com is out in full force, moderating down those they don’t agree with – and failing to enter into any discourse.
I don’t want my computer constantly ‘working’ behind the scenes. When my computer isn’t doing something, I want it to go into a deep sleep state so my battery life is maximised to its full potential. I don’t want my system bogged down, either, because the operating system company thinks it knows what I want before I do – don’t second-guess me, because you’ll fail every time.
As regards the larger operating system: I would sooner sacrifice some ‘speed’ for the sake of more stability, security, and reliability. 2 seconds here, 5 seconds there – it’s nothing in the grand scheme of things, especially compared to the time wasted because of problems that cause downtime. Software vendors, be they Microsoft, Apple, or whoever, need to look at reducing time lost holistically, rather than simply focusing on ‘how fast does it load’ and ‘how fast does it boot’. They need to focus on the big time-wasters rather than a couple of seconds here and there.
Preload purports to do the same on Linux.
For a (dirty) set of benchmarks look here: http://www.techthrob.com/2009/03/02/drastically-speed-up-your-linux…
Works for me…
Smiles
Oh superfetch makes this better and that better… Horses**t…
I have two Vista machines and 6 XP machines. Until about a month ago I was working on Vista, and then I switched back to XP. It was amazing! XP just works… XP just boots…
One Vista machine is a quad-core with 3 GB of RAM, and the other is a tablet with 4 GB of RAM.
These machines are super slow!!!! They require about 5 minutes to completely boot up after logon.
And the fact that they use very little resources is horses**t… If you had a 1 GB drive, sure, it would be little. But I have 500 GB drives, nearly full, and it just churns and churns and churns… The operating system constantly needs to do something, and it drives me bonkers!
Vista is garbage and Windows 7 will be the same rehashed garbage! (I already played with it and was not impressed)
Now don’t read this as me voting for or against Thom, but I find it interesting that Thom—a big BeOS fan—would defend SuperFetch, when BeOS was all about the raw responsiveness and short code-path. It didn’t need any fancy tricks to be fast, it just was.
I think Vista is using a kludge to somewhat get around the fact that it is just plain slow compared to Windows XP. There are too many services being loaded, too much IO going on. If it guesses right, you get the illusion of being quick, but it’s only relative to the overall slowdown of the system. I have Office 2003 Professional installed on my XP netbook, and it loads without the splash screen. Prefetching on XP is disabled, as I’m on an SSD.
Gimmickry just doesn’t beat raw speed to begin with.
And that is the point!!!
Vista and Windows 7, in their gimmickry, make the entire system appear faster when in fact it is slower. And I see it many times, especially when I am on battery and Vista has this undying habit of having to search for something…
I obviously would like to live in a world where BeOS was the norm. Sadly, this isn’t the case, and we’re stuck with big lugs like Mac OS X and Windows, which both need large amounts of RAM to be usable.
I recently bought my dad a new iMac which came with 1GB of RAM. Dear lord that machine is slow because of the low RAM. My dad didn’t like that at all – his brand new iMac was way slower and less responsive than his 2002 Windows 2000 box! I still need to order him some extra RAM.
So, that’s the world we live in. I don’t like it, but there’s nothing I can do to change it. With that in mind, anything that improves things – trickery or not – is welcome.
I think that PERCEIVED speed is more important than actual speed, on common tasks anyway. I don’t care whether numbers crunch in 1 or 2 seconds, but tearing on the XP GDI desktop spoils the fun for me.
It would indeed seem a waste to have 3 out of 4GB sit idle when you are using a low-memory application. However, I would have expected that in this day and age, with all the focus on power-saving technologies, it would actually be possible to shut down RAM chips that are not being used.
JAL
Well, I believe OOo would be a perfect candidate for testing the SuperFetch theory and seeing how efficient SuperFetch is. Don’t you agree? Put its shortcut in Startup (the whole thing, not just the pre-loader), restart Windows about 10 times, and see if it makes a difference (as in, whether OOo starts to launch quicker).
P.S. The title of my post should be “Test Candidate”.
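A rough harness for that experiment might look like the following (a sketch: the command is a placeholder, and for a GUI app you would really want to measure time-to-first-window rather than time-to-exit). The first run after a reboot is the interesting “cold” number; with SuperFetch warmed up, it should approach the later, cached runs:

```python
import subprocess, time

def launch_times(cmd, runs=3):
    # Wall-clock times for a command to start and exit. Run this once per
    # boot, with SuperFetch enabled and then disabled, and compare.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(round(time.perf_counter() - start, 2))
    return times

# Placeholder command: substitute the application you want to measure,
# ideally invoked with a flag that makes it exit right after initialising.
print(launch_times(["python", "--version"]))
```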
In my article, there’s a link to Tom’s Hardware which indeed performed this very test on OOo 2.1 a few days after Vista was released. Where Writer 2.1 first took 9 seconds to load, it dropped to just 2-3 seconds, and that was on a machine with 512MB of RAM. Adding in more RAM made launching Writer even faster; less than a second on 2GB machines.
I wonder why they needed to wait for Vista to be released?
OOo and preload were available on Linux before Vista came on the scene, I’m sure.
Let’s see.
http://www.debianadmin.com/load-applications-quicker-in-ubuntu-usin…
July 9th, 2007
Vista launch?
http://en.wikipedia.org/wiki/Vista#Development
January 2007
OK, so we have to go back a bit further.
http://sourceforge.net/forum/forum.php?forum_id=492870
Preload first appeared in 2005.
http://sourceforge.net/forum/forum.php?forum_id=597292
A few improvements in 2006.
http://sourceforge.net/forum/forum.php?forum_id=849333
Then some more tuning starting in 2008.
http://sourceforge.net/projects/preload
Last update, April 2009.
So Tom’s Hardware could have done their review of this idea two years earlier if they had wanted to.
Windows XP introduced PreFetch in 2001.
But anyway, the Tom’s Hardware article was about Vista and the advantages of SuperFetch, and they used, among other applications, Writer as an example. It wasn’t about Writer itself.
Pre-fetch isn’t Superfetch, but it was a predecessor.
http://en.wikipedia.org/wiki/Prefetcher
Superfetch didn’t make an appearance until Windows Vista.
http://en.wikipedia.org/wiki/Windows_Vista_I/O_technologies#SuperFe…
Preload is more like Superfetch than it is like Prefetch.
http://en.wikipedia.org/wiki/Preload_(software)
Both preload and Superfetch use previously collected frequency of usage data to determine exactly what to pre-load into RAM, whereas Prefetch doesn’t AFAIK.
PS: It is interesting to note that Microsoft bothered to get a patent for ReadyBoost, but they didn’t do so for SuperFetch. Damn prior art!
The tricks for success are:
(1) make sure that the time the preloader uses is indeed spare time
(2) pre-load the optimal data and no more
(3) make sure that the system knows what stuff is already in RAM, so it doesn’t have to re-load it, and
(4) don’t throw away potentially useful pre-loaded data too early.
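Tricks (3) and (4) in particular boil down to bookkeeping. A minimal sketch (an LRU toy, not any real prefetcher’s data structure) of a cache that remembers what is already resident and evicts only under pressure:

```python
from collections import OrderedDict

class PreloadCache:
    # Toy cache honouring tricks (3) and (4): never re-read what is
    # already resident, and evict the least recently used entry only
    # when space is actually needed.
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.entries = OrderedDict()  # path -> size_mb, oldest first

    def contains(self, path):
        if path in self.entries:
            self.entries.move_to_end(path)  # refresh recency
            return True
        return False

    def add(self, path, size_mb):
        if self.contains(path):
            return  # trick (3): it's already in RAM, don't re-load it
        while self.used + size_mb > self.capacity and self.entries:
            _, evicted = self.entries.popitem(last=False)
            self.used -= evicted  # trick (4): evict only under pressure
        self.entries[path] = size_mb
        self.used += size_mb
```

Tricks (1) and (2) – running only in spare time and picking the right data – live outside this structure, in the scheduling and the usage statistics.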
lemur2, no offense, but I am failing to see the point you are trying to make. We are talking about SuperFetch and memory management, and this has nothing to do with ordinary pre-loading. Please clarify?
… is that it wanted to cache my virtual machines too.
Sometimes I launch the VMware Server console and start a VM right after my physical machine has booted up. SuperFetch decided this was reason enough to start caching my 10GB+ virtual machine (which is located on a different partition).
So far, no tech magazine I trust has ever found the slightest bit of a speed improvement through SuperFetch.
I let superfetch run unabated for a number of weeks, before I decided to turn it off.
Yes, compared to load times after turning off superfetch, some of my most commonly used apps started slightly faster … slightly.
No, boot time was no different.
Yes, with superfetch enabled, there was a lot of disc thrashing. It seemed to be constantly trying to figure out what to have loaded, what to flush out of RAM, and in what order to do it in. Once I turned superfetch off, disc thrashing went down dramatically, and the overall responsiveness of the system improved.
Look, some RAM caching is good. Linux does it, and does it well. For instance, on my 2GB RAM laptop, which has PCLinuxOS dual-booting with Vista: in PCLOS, typing “free -m” in bash shows a total used memory of 468MB, with 22 of that in buffers and 251 cached, and 1430MB completely free.
That seems much, much more sensible to me. It’s actually using a minimal amount of memory – 194MB – caching a moderate 251MB, and leaving the rest free. It’s got enough cached to improve startup times and access to data, but it also has plenty of free memory that is immediately available for less common apps, without having to do flushing or extra disc I/O.
I think it’s stupid for the system to automatically use up as much RAM as it can. That just causes more paging, flushing, and disc I/O (the most expensive operation). It’s also stupid to not do any caching.
Thus, a sensible middle ground is best, like Linux does.
Apparently this moderation concept was lost on MS engineers.
And, IMHO, too much disc I/O is the absolute worse case scenario for overall performance. It’s also dangerous for storing your data. If the system keeps thrashing the disc, it’s going to eventually get worn out and die. If your disc dies, so does your data.
If your RAM dies, who cares. RAM is cheap and easily replaced. But if the HD dies, you’re screwed. That’s tons of unrecoverable data down the toilet (unless, of course, you performed regular back-ups).
Really, traditional, spinning, hard discs are the weakest technological link in modern computers. They are the slowest part of the system, they are the least reliable, and the most devastating when they fail. I see Flash Memory storage as the possible answer.
But until then, using something like SuperFetch, which definitely causes more disc thrashing, just does not seem like a good idea.
In any case, in my own anecdotal example, disabling SuperFetch made my Vista experience much better.
No it isn’t, no it doesn’t. Please read up on the page cache and demand paging mechanisms, and how the Linux VM works in general.
No it doesn’t, really. As mentioned in my previous post, this misapprehension can only be based on the fact that you are not generating enough of a workload.
In computing, less is more. The less unneeded crap loaded, the better. That’s more CPU cycles and RAM available for the stuff you want to or need to run.
SuperFetch is itself a CPU-using, RAM-using process, and it causes the loading of stuff you don’t need or want right away. That is a cost.
On Arch Linux OpenOffice 3.1 is available, so I recently upgraded to that. When I first upgraded, on my modest system, I got these results:
Time to load for first time in a session: ~ 8 seconds.
Time to load subsequently in a session: ~ 3 seconds.
So, as an experiment, I also installed preload. Now, my system has a relatively modest CPU, hard disk, and graphics card, but it DOES have 2GB RAM, so there should be some memory spare.
After installing preload:
Time to load for first time in a session: ~ 3.5 seconds.
Time to load subsequently in a session: ~ 1.5 seconds.
So preload has done some good. It hasn’t done as much good (just after booting) as a recent load of the whole application does when it comes to re-loading the same application a subsequent time … but clearly it does some good.
There was no (perceptible) increase in boot time.
OpenOffice on Arch is now effectively as snappy as MS Office on Windows on the same hardware, due to using preload on Linux (which isn’t the default).
So pre-loading (such as done by preload and SuperFetch) CAN actually work … in some circumstances, if done right.
The funny thing is that I use Arch too, and preload makes my system start much more slowly than without it, and I haven’t noticed any difference in app loading speed. Maybe for OO.org, but nothing more. I didn’t benchmark it, but that was the impression I got.
Suppose I have 8 GB RAM, which I will no doubt have one day (though short of 6 currently!).
Is it possible to edit the relevant files such that the system always loads ALL of my fave apps at startup? – I don’t care how long it takes because I switch my PC on and then go have a shower…
This would really only mean 4-5 apps like FF, etc., but it would be cool if my fat game would pop open instantly.
Also, will this technology make a difference once we all get switched to SSDs?
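As for the first question: the brute-force version is easy to sketch (the paths below are hypothetical; the script just reads the binaries and data so they sit in the page cache while you shower). The SuperFetch databases themselves are not a documented, hand-editable format, so something like this is probably the more practical route:

```python
import glob

FAVE_APPS = [  # hypothetical paths to the apps you want kept warm
    r"C:\Program Files\Mozilla Firefox\firefox.exe",
    r"C:\Games\FatGame\*.dat",
]

def warm(patterns, chunk=1 << 20):
    # Read each matching file once; the OS page cache keeps what fits.
    for pattern in patterns:
        for path in glob.glob(pattern):
            with open(path, "rb") as f:
                while f.read(chunk):
                    pass  # reading is enough; no need to keep the data

warm(FAVE_APPS)  # run this from a startup/login script
```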