Although the hard disk is much slower than RAM, it is also much cheaper, and users always have a lot more hard disk space than RAM. So Windows and other OSes are designed to create this pseudo-RAM, or in Microsoft's terms, Virtual Memory, to make up for the shortfall in RAM when running memory-intensive programs.
Well I guess news is news.
Indeed.
Though a bit long…
Windows seems to be designed to blow all my RAM on disk caches, put all my data in the swap file, then cache the swap file.
Adrian’s site has got to be the slowest-loading site on the internet. After waiting over a full minute, it still hadn’t loaded, and I’m on a T1.
Adrian’s site is a joke. You can’t even load the article the link points to. It starts loading then just stops. Hey Adrian…..up the bandwidth my friend.
Adrian’s site is under siege by thousands of Slashdot visitors. For more information, and many more comments about the page, please visit the Slashdot article at:
http://it.slashdot.org/it/05/03/25/186208.shtml?tid=201&tid=190&tid…
I would have liked to have read a much more detailed article and learnt something. This was as brief as a Webopedia-type definition. It could at least have suggested speed tips like using a separate drive if available, permanent vs. temporary sizing, keeping it defragged, and talked about the ratio of swap size to memory, etc.
There must be many good articles around.
Brian N
Uh, virtual memory is virtual memory. That is the standard term from time immemorial, not “microsoft’s terms”. “Pseudo RAM” is the term that has to be qualified with such an expression.
A rather specious article. One wonders if the author has any kind of clue. I wonder what the “VM” in DEC VAX/VMS stands for?
I still don't get why MS decided to call the concept of swapping "virtual memory". Virtual memory is the mapping of a process's virtual address space (pages) to physical memory locations (frames). Where those frames are located (i.e. in SDRAM or on an HDD) is a different matter.
So if you run out of physical memory, you can use your disk to store your memory pages… or better yet, you can use your disk to store pages that you don't use frequently, allowing potentially more (or larger) programs to use your memory. You cannot optimize this process in Windows. You can sort of affect it in Linux with /proc/sys/vm/swappiness.
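For what it's worth, here's a quick sketch of poking at that knob on a 2.6 kernel; the value 20 is just an arbitrary example, not a recommendation:

    # show the current swappiness (0-100; higher = swap pages out more eagerly)
    cat /proc/sys/vm/swappiness
    # lower it for the running session (as root)
    echo 20 > /proc/sys/vm/swappiness
    # or the sysctl way
    sysctl -w vm.swappiness=20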
So the best thing you can do is make sure that disk accesses for swapping out pages are fast. *nix does it right: create a dedicated partition -> no filesystem overhead, no fragmentation. In Windows it's just another file, so you need to keep the file from fragmenting. That means putting it on a separate partition or making it a fixed size in a contiguous free region of the disk.
This site is so slow! Why did he put all that graphics crap there? And he definitely should rewrite the engine and bring it up to current standards!
The article itself is nice, though, at least the parts I've managed to load.
This isn’t news. At all.
Are there any editors on this website at all?
> The main difference lies in their names. Swapfiles operate
> by swapping entire processes from system memory into the
> swapfile. This immediately frees up memory for other
> applications to use.
This is not how most people use the term.
> While swapping occurs when there is heavy demand on the
> system memory, paging can occur preemptively.
I don’t think “preemptively” is the right word. Maybe “proactively” or “predictively”.
> Finally, some programs require the use of a paging file to
> function properly. It may be to store sensitive data on
> something less volatile than the RAM or …
Does this make sense to anyone?
> Even the fastest hard disk is currently over 70X slower
> than the dual-channel PC2700 DDR memory common in many
> computers.
70x is nothing. I’m not sure what he’s doing, but he might be ignoring latency. Even with an impossibly fast hard drive (say 0.5ms seek time) and super-slow RAM (say 0.5us latency) the difference is 1000x.
1) Not just MS is using the term 'virtual memory'… others did waaaaaay before.
2) The swap file under Windows is not "just another file"; it's treated differently.
3) The NT kernel implements the 'working set' algorithm for deciding which pages have to be swapped. It *is* optimized and balanced and you should by no means mess with it (like using the 'RAM defrag utilities') [and having a partition on the fastest area of the drive is a good idea on _every_ system].
Just getting tired of the usual Linux_does_it_better_anyway trend.
Regards.
It is highly uninformed. Didn’t learn anything at all from that article. I agree there is a lot of misinformed stuff on that website.
Is the NT kernel optimized and balanced for putting as much process data as possible on disk and keeping as many files as possible in memory? Because that's certainly the impression I get from using Windows. And I find it strange, because I've always heard NT memory management is supposed to be quite good in theory.
I won't do any comparisons with Linux; it's been a fair while since I used it locally.
The article is better now. It seems the first half was missing in the /. storm.
44 pages is brief? Perhaps you should have clicked the next page link…
It's entirely affordable to get a gig of RAM these days.
You can run Win2k and WinXP with NO paging file at all and suffer no consequences, except for improved performance.
Even if you set the paging file to disabled, should you encounter a time when your RAM is not sufficient, Windows just takes over anyway!
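For anyone who wants to try it, the switch is in the System control panel (Advanced -> Performance -> Virtual Memory), or, as a sketch, the registry value behind it on XP; PagingFiles is a multi-string of "path initial-MB max-MB" entries, and emptying it disables the paging file after a reboot:

    rem disable the paging file entirely (takes effect on reboot)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v PagingFiles /t REG_MULTI_SZ /d "" /f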
The article seems to be based on a time when RAM was expensive, but it isn't anymore.
Waste of time.
> So the best thing you can do is make sure that disk
> accesses for swapping out pages are fast. *nix does it
> right: create a dedicated partition -> no filesystem
> overhead, no fragmentation. In Windows it's just another
> file, so you need to keep the file from fragmenting. That
> means putting it on a separate partition or making it a
> fixed size in a contiguous free region of the disk.
My defrag program (O&O Defrag) on Win2k shows me that the pagefile is indeed stored in a contiguous region, just like a separate partition would be. And I'd bet that it stays fixed-size as swap data grows (not counting manual resizing), just like a separate partition would. However, it has some advantages that a separate partition does not have:
– it is positioned in the middle of the NTFS partition, and can thus be accessed in half the seek time (avg)
– it does not use a partition table entry, which are scarce anyway
– manual resizing is easier, since there is no arbitrary limitation about the position of the swap space
and one disadvantage: On a multiboot system, it cannot be as easily shared between OSes. Since it is not a design goal of Windows to be a multiboot-friendly OS, this counter-argument isn’t important, just as the argument about partition table entries isn’t.
In what way does *nix do it right?
I don’t think Kilburn would be too happy with you claiming Virtual Memory is a Microsoft term!
When your primary partition is heavily fragmented and you use the default swap settings in Windows, you can get a heavily fragmented swap file. It doesn't happen all the time, but it does happen if you run a program (or programs) that uses a lot of virtual memory. Windows will increase the size of the swap file as more and more RAM is required, and the problem is that the space claimed by the swap file will not be reclaimed until the system defrags on boot, since Windows requires a swap file to function and won't let go of the file except at startup. I have had a very fragmented swap file (50+ fragments), and while it was slightly faster once I finally defragged it, the difference was not night and day.
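If the growing bothers you, pinning the initial and maximum sizes to the same value stops the file from growing (and fragmenting) further; hypothetically, from an XP command prompt (the 1024 MB figure is just an example):

    rem pin the pagefile at a fixed 1024 MB so Windows never grows it
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=1024,MaximumSize=1024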
The partition way of having swap isn't much better, IMO. I prefer the Windows way. Why? It's much harder to create or delete partitions on a hard drive, so resizing your swap space is much harder too. I prefer flexibility over potential speed, especially in this case, since when you are paging to disk, the speed difference between a fragmented swap file and a dedicated swap partition is so very little.
Not all Unixes do it with a separate partition; I know that OS X uses normal files for swap, similar to Windows.
While it is the recommended way, you don't have to use a swap partition in Linux, and, as was pointed out above for Windows, if you have gobs of RAM you could do away with swap space entirely (also not recommended!).
Modifying or disabling swap in Windows requires jumping through a LOT of hoops AND rebooting. Linux lets you enable/disable swap files or partitions with a simple "swapon/swapoff". What could be easier, Mr. Miyagi?
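To illustrate (a sketch, assuming an existing swap partition at /dev/hda2; the device name and priority are just examples):

    # take the swap partition out of service
    swapoff /dev/hda2
    # bring it back with an explicit priority (higher numbers are used first)
    swapon -p 5 /dev/hda2
    # list active swap areas and their priorities
    swapon -s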
Here’s a Redhat page on customizing swap space in Linux:
http://www.redhat.com/docs/manuals/linux/RHL-8.0-Manual/custom-guid…
I do like XP’s ability to override the pagefile settings when necessary but wish that you could define the priority for the various swap files the way you can in Linux.
Please note that other Unix-like OSes may also have the same swap capabilities as does Linux.
Here are a couple of pages that talk about Virtual Memory in XP:
http://aumha.org/win5/a/xpvm.htm
http://www.theeldergeek.com/paging_file.htm
> Not all Unixes do it with a separate partition; I know that
> OS X uses normal files for swap, similar to Windows.
You can use a page/swap file under Linux. According to what I have read, the performance loss isn't significant if you use the 2.6 kernel. I guess nobody uses this feature, though, since everybody is used to creating swap partitions. Anyway, I've never heard of somebody wishing to change the size of his swap partition once it was set up…
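For the record, the swap-file recipe is short; a sketch assuming a 512 MB file at /swapfile (path and size are arbitrary):

    # create a 512 MB file; dd places it wherever the filesystem decides
    dd if=/dev/zero of=/swapfile bs=1M count=512
    # format it as swap space and enable it
    mkswap /swapfile
    swapon /swapfile
    # optional /etc/fstab line to enable it at boot, with a priority:
    # /swapfile  none  swap  sw,pri=1  0 0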
> When your primary partition is heavily fragmented and you
> use the default swap settings in Windows, you can get a
> heavily fragmented swap file. It doesn't happen all the
> time, but it does happen if you run a program (or
> programs) that uses a lot of virtual memory. Windows will
> increase the size of the swap file as more and more RAM
> is required, and the problem is that the space claimed by
> the swap file will not be reclaimed until the system
> defrags on boot.
A swap *partition* cannot be resized at all, so at worst Windows is missing a switch to disable the increase.
> Here’s a Redhat page on customizing swap space in Linux:
This looks like it leaves the file possibly fragmented and not positioned in the middle of the host partition, since it creates the file with 'dd'. So some kind of fragmentation control would be needed to reach the performance of a swap partition, and additionally position control to reach the performance of a (non-grown) swap file under Windows. In general it's nicer, though, to have the choice.
add: I don't know if declaring the file as a swap file enables such fragmentation/position control; that would be best and would clearly give Linux an advantage.
Fragmentation isn't nearly as much of a problem on Linux; all the popular filesystems self-defragment to some degree, and if the file size stays the same, it would never get fragmented.
This article is quite lame… I'm wondering how it got on OSNews…
Seems to me there are a lot of assumptions in this 'guide'. The biggest one seems to be that VM is accessed in sequential multi-cluster chunks; if the system doesn't read more than a few clusters/sectors/whatever in a row, there will be seek delays even if the file is in one spot on the disk. The second is that there is no other disk activity while accessing VM. I have no more tested facts to offer than the author of the article, but it seems to me the system is usually loading something when it starts kicking stuff out of memory to disk, so the seek savings go out the window because you're bouncing back and forth between reading a file and writing VM.
I also found it amusing when I saw the comments here about the VM being located in the middle of a partition (or is it the middle of the disk?). I had not thought of that, but if seek time really is more important than sustained read speed, as I'm guessing, putting the VM in the middle makes a lot more sense than moving it to one end of the disk: from any point on the disk, you have to move across at most half of the disk, instead of potentially the whole disk, to get to the VM.