Linked by David Adams on Thu 20th Nov 2008 04:19 UTC
General Unix
Linux and other Unix-like operating systems use the term "swap" to describe both the act of moving memory pages between RAM and disk, and the region of disk where those pages are stored. It is common to dedicate a whole partition of a hard disk to swapping; however, with the 2.6 Linux kernel, swap files are just as fast as swap partitions. Many admins (both Windows and Linux/UNIX) still follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. Say I have 32GB of RAM: should I set swap space to 64GB? Is 64GB of swap space really required? How big should your Linux/UNIX swap space be?
I have a better way...
by looncraz on Thu 20th Nov 2008 06:02 UTC
looncraz
Member since:
2005-07-24

The best way, IMHO, is to do a needs approximation.

For instance, if you will not be doing anything different with your new 2GB of RAM than you did with your 512MB of RAM, and you had a 1.5GB swap space before, then you really don't need any swap space whatsoever.

In fact, I have 512MB of RAM (by choice), and I keep a 750MB swap partition for Ubuntu. I only "hit" the swap partition when a piece of software is leaking memory. For me, using the swap is a no-no/fall-back scenario.

But let's say you will be using some intense applications (video editing). The program requires 2GB of RAM, but recommends 4. You have 2GB of RAM, and can't fit any more in the system (y'know, ya bought some machine instead of building your own, like ya shoulda).

With that you have your needs: 4 GB of RAM, meaning RAM + SWAP >= 4 GB as a minimum.

Now, set aside 200MB for the operating system, Desktop, and various background utilities which may or may not end up needing to be swapped to disk.

Then, if you expect to be running other applications at the same time, see what their usages could be. Firefox, well just add another 250 MB, just to be safe.

You know for certain you will be running an Apache server for your intranet, so add 100-200MB if you need to.

Continue on until you have exhausted the list of programs you will most likely be running while the most memory-intensive portion of the video program is running at lowest priority (or max 'nice'). Include your media player, say 75-100MB.

Now, your final "max" memory usage would be expected to be around 6GB in this case.

So you have a 4 GB partition/file for swap, preferably on another ( fast ) hard drive.

But let's say you have 16GB of RAM and the above is true. No, a 32GB partition would be stupid. In fact, any swap would be: the kernel might decide something was dormant and swap it to disk just in case real memory is needed, even though it never will be. No point in that at all (this is what Windows did/does; Vista was supposed to fix it, but it crawls on my machine even though everything else flies).

So a real simple formula would be:

S = C(M - (R/C))

Where:
S is swap
M is max memory required
R is the actual RAM installed.
C is the extra comfort room you would prefer,
such as 25%, which would be 1.25

So, if you need 3GB of RAM total, have 2GB of RAM, and want to have room for at least another 512MB of load while still allowing the same amount of added comfort (25%), then you can do the following:

1.25 * (3 - (2/1.25))
1.25 * ( 3 - 1.6 )
1.25 * 1.4
1.75

So you get a 1.75GB partition with 2GB of RAM to satisfy your needs, plus some comfort, giving you 3.75GB of RAM+swap. Remember that comfort means having some left over, even if you have gone right to the max extent of your comfort level.
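The arithmetic above is easy to script; a minimal sketch in shell, using the values from this example (awk handles the floating-point math, since plain /bin/sh cannot):

```shell
# Hypothetical inputs from the example above: M = max memory needed (GB),
# R = installed RAM (GB), C = comfort factor.
M=3; R=2; C=1.25

# S = C * (M - R/C), the formula from the post.
S=$(awk -v m="$M" -v r="$R" -v c="$C" 'BEGIN { printf "%.2f", c * (m - r / c) }')
echo "Suggested swap: ${S} GB"   # prints "Suggested swap: 1.75 GB"
```

Swap in your own M, R, and C; the structure stays the same.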

Naturally, not many people want to calculate their work loads to this precision, but it is the "right" way. And, I would personally use a comfort level of 50% or maybe even 75%. Sometimes you just plumb forget about closing those programs on another workspace.

But if you want to be the safest, then you should just count the number of programs you could possibly run at once, and give them each room to run to their fullest addressable limits ( 4 GB per app on 32-bit, depending on OS as well ).

So if you have 60 programs, you will need 240 GB of RAM + swap.

To be somewhat smart: Just keep an eye on your swap usage ( if the OS allows it in a human readable manner ) and increase its size if you use too much or have problems with memory. Not so easy with a partition if you can't easily resize it.
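Keeping that eye on swap usage is straightforward on Linux; the kernel exposes the numbers directly under /proc:

```shell
# Swap totals straight from the kernel (values in kB); works on any Linux.
grep -E '^Swap(Total|Free):' /proc/meminfo

# Per-area breakdown (partition vs file, sizes, priorities);
# `free -m` or `swapon -s` show the same information more readably.
cat /proc/swaps
```

If SwapFree rarely drops much below SwapTotal, your swap is oversized; if it regularly approaches zero, grow it (or add RAM).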

So, seriously, your best bet is:

S = C(M - (R/C))

If you don't know what your loads will be, assume you need exactly 50% more than the RAM you have, so:

M = R*1.5

If you upgraded your RAM, don't even fret about resizing the swap file; there is no point unless you have errors or expect to be using more memory beyond what your upgrade provides. In that case, use my formula.

--The loon

Reply Score: 3

RE: I have a better way...
by parentaladvisory on Thu 20th Nov 2008 07:05 UTC in reply to "I have a better way..."
parentaladvisory Member since:
2006-12-18

Hmm, I can agree with your reasoning, but how is it with applications that "require" a pagefile (Windows terminology)? _If_ you had 32GB of RAM and you use an app that is not going to install or whatever if you do not have swap space, what do you recommend then?

Reply Score: 1

RE[2]: I have a better way...
by BiPolar on Thu 20th Nov 2008 11:21 UTC in reply to "RE: I have a better way..."
BiPolar Member since:
2007-07-06

_If_ you had 32GB of RAM and you use an app that is not going to install or whatever if you do not have swap space, what do you recommend then?


File a bug report about that silly app and/or search for an alternative program.

Reply Score: 2

RE[2]: I have a better way...
by looncraz on Thu 20th Nov 2008 22:18 UTC in reply to "RE: I have a better way..."
looncraz Member since:
2005-07-24

Well, if you ABSOLUTELY NEED that poorly written program, then you will need to satisfy its "needs." You will need to experiment or do research to find out how much swap space the thing is checking for.

Regardless, I agree with BiPolar completely, file a bug report or use a different program, if possible.

--The loon

Reply Score: 2

RE[2]: I have a better way...
by B. Janssen on Sun 23rd Nov 2008 11:35 UTC in reply to "RE: I have a better way..."
B. Janssen Member since:
2006-10-11

Just write a little script (read the manual pages of dd, mkswap, swapon) that adds a swapfile to a partition of your choice, e.g. /home or /opt, for the application and cleans up after you exit the application (check swapoff and rm).

This will increase your application's start-up time significantly if the swapfile is huge, i.e. hundreds of GB, but it satisfies the app's requirements. After you have done that, send the script and a bug report to the programmers of the app and tell them that a userspace application has no business poking its nose into memory management ;)
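A sketch of such a wrapper, using the tools named above. The path, size, and application name ("picky_app") are placeholders, and the script needs root to actually work; writing it to a file first lets you inspect it before use:

```shell
# Write the wrapper described above to appswap.sh; everything between
# the heredoc markers is the script itself.
cat > appswap.sh <<'EOF'
#!/bin/sh
set -e
SWAPFILE=/home/tmpswap   # hypothetical location; any partition with space works
SIZE_MB=2048             # hypothetical size

dd if=/dev/zero of="$SWAPFILE" bs=1M count="$SIZE_MB"  # allocate the file
chmod 600 "$SWAPFILE"    # swap files must not be world-readable
mkswap "$SWAPFILE"       # write the swap signature
swapon "$SWAPFILE"       # enable it

picky_app "$@"           # run the application that demands swap

swapoff "$SWAPFILE"      # clean up after the app exits
rm -f "$SWAPFILE"
EOF
chmod +x appswap.sh
```

`set -e` makes the script stop on the first failure, so a failed dd or mkswap never leads to a bogus swapon.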

Reply Score: 2

It does depend upon your workload
by shotsman on Thu 20th Nov 2008 07:00 UTC
shotsman
Member since:
2005-07-22

But in general, ever since 'lazy swap' came into use, the 2X RAM for swap has not been needed.

If, however, you are running a big database (e.g. Oracle) and it is configured to have a huge SGA, then I'd veer towards the 2X RAM for swap just so there is space for all the other processes to swap in and out.

Add that to the falling cost of hard drives, and isn't it more a case of:

2X RAM for swap? Yeah. That shiny new 1TB HDD has plenty of space on it. Let's make it 4X RAM for swap. We can afford the disk space.

It is more a question that is no longer relevant in the Linux world.

Now, if this were Windows then that would be a different question, due to the different swapping/pagefile allocation models used.

Reply Score: 4

I don't use swap at all
by asgard on Thu 20th Nov 2008 09:55 UTC
asgard
Member since:
2008-06-07

If I have a desktop machine with more than 1GB RAM, I usually put no swap space at all.

There is a big disadvantage to large swap: if you have a process that runs and eats too much memory (for example, due to a leak), the system starts swapping and everything grinds to a halt. If you make swap proportional to the size of main memory, it will take a proportionally long time before the OOM killer kicks in and shoots down the process. This means a lot of unnecessary wear on your hard drive.

For most Linux usage, 1GB of memory is sufficient. If you need more, just add more RAM; it will be kinder to your time and your hard disk. With larger amounts of RAM, the chance that swap will help you more than it will hurt you diminishes (i.e. the probability that an application will be saved by an additional 1GB gets lower, but the probability that it will, due to some bug, eat any amount of memory you throw at it stays the same). Sure, hibernate is a problem, but I don't use that (I use sleep).

What I would like to see is the kernel somehow notifying applications that memory is running out, so the applications could help. For example, they could reduce their memory usage at the cost of CPU, run a GC cycle, compact their memory, reset themselves to prevent any runaway leaks, and so on.

Reply Score: 2

Comment by Traumflug
by Traumflug on Thu 20th Nov 2008 10:08 UTC
Traumflug
Member since:
2008-05-22

Admittedly, I never could follow the reasoning for this 2x-RAM-rule.

Just about any piece of software out there requires some absolute amount of RAM and doesn't care whether this RAM is available physically or not. So multiplying the physical amount of RAM by any number is meaningless. What you really want is a rule of thumb for an absolute number.

Example: Running Ubuntu with some desktop stuff needs about 1.5 GB of RAM (worst case scenario), so make sure you have 2 GB of RAM available to be on the safe side. If you have 0.5 GB physical RAM, allocate 1.5 GB swap. If you have 1 GB physical RAM, 1 GB swap is sufficient.
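That rule reduces to "swap = target minus installed RAM, never negative." A quick shell sketch, reading the real RAM figure from /proc/meminfo; the 2048MB target is just this comment's worst-case example, not a universal constant:

```shell
# Absolute-target sizing: aim for TARGET_MB of total memory and let
# swap cover whatever physical RAM doesn't.
TARGET_MB=2048
ram_mb=$(awk '/^MemTotal:/ { print int($2 / 1024) }' /proc/meminfo)
if [ "$ram_mb" -ge "$TARGET_MB" ]; then
    swap_mb=0                               # enough RAM: no swap needed by this rule
else
    swap_mb=$(( TARGET_MB - ram_mb ))       # fill the gap with swap
fi
echo "RAM: ${ram_mb} MB -> suggested swap: ${swap_mb} MB"
```

With 0.5GB of RAM this yields roughly 1.5GB of swap, and with 2GB of RAM it yields zero, matching the example above.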

Nowadays, with most desktop PCs having 2GB of physical RAM or more, it's actually questionable whether you need swap at all. If you install a 32-bit OS on a machine with 4GB of physical RAM you can safely ignore swap, as the OS can't even address the extra. A common scenario these days, yet most Linux distros still insist on allocating a swap partition.

Reply Score: 3

RE: Comment by Traumflug
by Yagami on Thu 20th Nov 2008 10:32 UTC in reply to "Comment by Traumflug"
Yagami Member since:
2006-07-15

yeah, this makes much more sense.

just target a memory size that your system needs and make the swap size whatever is missing from your RAM!

with the weird calculations the article mentions, sometimes it's "better" (as in you have less swap) with less RAM!

i just target the 2GB size. as i already have 2GB of RAM, there isn't any swap needed.

if you have 512MB, don't make a 1GB swap, as your system will only have 1.5GB memory capacity; make a 1.5GB swap instead so it reaches the 2GB capacity. (just in theory, since linux will hardly ever need that for normal desktop usage)

having said that, from the article i guess i'd better make a 3GB swap so my laptop can suspend to disk. (anyway, what's 3GB more or less on my hard drive?!? ;) )

Reply Score: 1

RE[2]: Comment by Traumflug
by computrius on Fri 21st Nov 2008 13:22 UTC in reply to "RE: Comment by Traumflug"
computrius Member since:
2006-03-26

Does this mean that if I have 4 gig of ram someone is going to ship me a free 2gb hard drive?

Send it to my home address please ;)

Reply Score: 2

RE: Comment by Traumflug
by Bill Shooter of Bul on Thu 20th Nov 2008 15:59 UTC in reply to "Comment by Traumflug"
Bill Shooter of Bul Member since:
2006-07-14

If you install a 32-bit OS on a machine with 4GB of physical RAM you can safely ignore swap, as the OS can't even address the extra. A common scenario these days, yet most Linux distros still insist on allocating a swap partition.


No, 32-bit Linux can address more than 4GB with Physical Address Extension (PAE) support (which every processor since the Pentium Pro has had).

http://en.wikipedia.org/wiki/Physical_Address_Extension

Reply Score: 2

RE: Comment by Traumflug
by looncraz on Thu 20th Nov 2008 22:30 UTC in reply to "Comment by Traumflug"
looncraz Member since:
2005-07-24

Except each application can address 4 GB, technically. Though your way is less mathematically involved than my nice little formula :-)

--The loon

Reply Score: 2

This is dumb
by steve_s on Thu 20th Nov 2008 10:39 UTC
steve_s
Member since:
2006-01-16

Swap space was only ever about compensating for a lack of (expensive) physical RAM by making use of (inexpensive) disc space.

How much swap space should you allocate? Well, the real question is how much memory do you need for typical use.

2x RAM was a popular ratio that generally fit with people's computing needs, and did for a couple of decades. RAM has become much cheaper, to the point where it's no longer too expensive to buy as much RAM as is needed.

The 2x RAM rule doesn't need to be debunked - it's a dumb cargo-cult relic of a bygone age.

Reply Score: 2

Common sense required
by -APT- on Thu 20th Nov 2008 11:03 UTC
-APT-
Member since:
2007-03-20

You need to figure out the workload for the machine you're using and set the amount of swap accordingly.

I run quite a lot of virtualised servers on machines with fairly limited RAM; depending on how tightly I need to squeeze the memory on these servers, I tend to set swap to 1-2 times the amount of memory. I obviously hope that the machines won't use it too much, but for these virtual servers the performance doesn't matter too much.

Now for real servers where performance is important I wouldn't consider allocating so much. I'd keep an eye on the machine to ensure that all the services have limits set so they don't start going into swap. Maybe just add 1GB of swap on these servers as a safety measure.

These days RAM is cheap enough to put excessive amounts into machines.

Edited 2008-11-20 11:03 UTC

Reply Score: 1

Just use a growing swap file instead
by jokkel on Thu 20th Nov 2008 12:47 UTC
jokkel
Member since:
2008-07-07

I just don't get why Linux still uses a swap partition, which has exactly no benefits over a swap file, except when they are on different drives. Yes, I know you can use a swap file with Linux, but no distro has this as the default setup.

Mac OS and Windows use swap files, which are transparent to the user and the programs. Just let the OS care about the size of the swap file and let me worry about my job. The OS can just start with the size of the RAM, or half of it, and increase the file size when necessary.

Reply Score: 5

siride Member since:
2006-01-02

It's because a SWAP partition doesn't require interaction with the VFS. Reads and writes and allocations can be done more directly. If you use a SWAP file, then you have filesystem overhead. In the older days, I suppose this was considered too slow to get good performance. I doubt it's a problem today, but people are slow to change.

Perhaps you should run some SWAP benchmarks and see if a SWAP file is considerably slower. I won't do it because I don't really care ;) .

Reply Score: 2

MattPie Member since:
2006-04-18

It's because a SWAP partition doesn't require interaction with the VFS.

Also, swap files that grow fragment over time, which makes the disk head jump around looking for blocks that should be contiguous. Granted, having a swap file on a separate part of the disk isn't much better, but the performance will be constant, not degrading over time.

Reply Score: 2

sbergman27 Member since:
2005-07-24

Swap really belongs in the middle of the disk, not way out at the beginning or end of the disk, where it's furthest from the filesystem wrt seek time. I should note that reduction of seek time is likely to outweigh any raw-throughput differences at various places on the drive. In my experience, swap seldom streams. It seeks.

Reply Score: 2

google_ninja Member since:
2006-02-05

I don't know about HFS+ on the Mac, but NTFS on Windows actually does its best to keep system files and the page file near the center of the physical disc. I don't know how difficult that would be to implement on ext (I know there are a lot of fundamental differences in the way the two filesystems work), but IMO, for pagefiles at least, NTFS has the better solution. It is something that the system should be able to figure out by itself based on usage patterns, not something users should have to worry about.

Reply Score: 3

UltraZelda64 Member since:
2006-12-05

Mac OS and Windows use swap files, which are transparent to the user and the programs. Just let the OS care about the size of the swap file and let me worry about my job. The OS can just start with the size of the RAM or half of it and increase the file size when neccessarry.

Windows does that, and it's really good at pissing me off. Not much is more annoying than doing something slightly unusual in terms of memory use, then having the OS think, "oh, it looks like you might need more swap," and grind the hard drive away for a while, producing fragments in the swap file in the process.

Would it be so hard for the OS to just ASK? There's a chance that maybe, just maybe, I didn't *want* my swap space to be increased for [insert-stupid-action-here]. Maybe it wouldn't be so bad if Windows weren't so stupid when determining the default minimum size for the swap file, but as it is, it's a major annoyance and was always the first thing I changed when reinstalling the OS.

In Windows, I just set both the minimum and maximum size to 768MB or so so it didn't resize, and it worked fine.

In Linux, I make a 512MB or 768MB swap partition (depending on the distribution and whether repartitioning is feasible), and if I absolutely *need* more I create and/or activate a 256-512MB swap file. I also make sure my swap partition is on the drive with my /home partition, separate from my system drive, but I often create the swap file in / (which is, of course, on the system drive).

This all on a machine with only 256 megs of RAM.

Reply Score: 2

XDMCP/NX server
by sbergman27 on Thu 20th Nov 2008 13:04 UTC
sbergman27
Member since:
2005-07-24

Just as a datapoint, I have a server running Fedora 8 x86_64, serving about 60 Gnome desktops via a mix of XDMCP on the LAN and NX on the WAN. (Web browsing, email, OpenOffice, etc. Standard business stuff.) It's also a Samba server, a database server for a COBOL C/ISAM -> SQL gateway, about a hundred instances of a curses-based point-of-sale and accounting system, etc. It currently has 8GB of RAM, and uses a maximum of about 7GB of the 16GB of swap I have allocated. It is at the point where performance is still (just) acceptable throughout the work day. We'll be adding 4 more GB of RAM soon, which maxes that server out. (For x86_64, my rule of thumb is at least 128MB per Gnome user; at least 96MB per user for x86_32.)

As a Unix (and later Linux) admin since 1988, I am always surprised when these kinds of discussions fixate on the eMachines PC in the living room.

Reply Score: 3

RE: XDMCP/NX server
by siride on Thu 20th Nov 2008 13:38 UTC in reply to "XDMCP/NX server"
siride Member since:
2006-01-02

Wouldn't it make sense to have more than one machine do what you are making that poor server do? It seems strange to have a machine that runs user desktops also be running the database and point-of-sale stuff, among other things.

Reply Score: 2

RE[2]: XDMCP/NX server
by sbergman27 on Thu 20th Nov 2008 13:54 UTC in reply to "RE: XDMCP/NX server"
sbergman27 Member since:
2005-07-24

Wouldn't it make sense to have more than one machine do what you are making that poor server do?

No. More machines mean more administration. I cannot and would not go to management to say we need to allocate funds to buy another server to increase our administration load. People pay big bucks for virtualization tools to *consolidate* their servers. Why would I push for server *proliferation*? I segregate functions only when security or other practical considerations require it.

BTW, the C/ISAM -> SQL gateway operates on the COBOL files used by the POS/Accounting system, and the desktop users use the POS/Accounting system as one of the applications on their desktops. It makes a great deal of sense to keep all the disk and socket accesses local.

Although performance is only "OK" right now, it will be excellent in about a week when we add the memory. The dual 3.2GHz Xeons are only moderately loaded, being shared by only 60 users. (Multicore is vastly over-hyped for standard business desktop workloads. But that's a topic for another post.)

Even after all these years, I still find Unix/Linux multi-user efficiency to be amazing.

Edited 2008-11-20 14:01 UTC

Reply Score: 3

RE[3]: XDMCP/NX server
by google_ninja on Thu 20th Nov 2008 17:55 UTC in reply to "RE[2]: XDMCP/NX server"
google_ninja Member since:
2006-02-05

The only thing I would say is that that works fine, as long as you won't need to scale dramatically upward and as long as you will never expose that machine outside your firewall. As soon as you let users remote-desktop in, I would move the ISAM datastore onto its own, more locked-down machine, or at the least look at virtualization options (which makes scaling a lot easier too).

Although performance is only "OK" right now, it will be excellent in about a week when we add the memory. The dual 3.2GHz Xeons are only moderately loaded, being shared by only 60 users. (Multicore is vastly over-hyped for standard business desktop workloads. But that's a topic for another post.)


Dual core makes sense: one core dedicated to your active application, the other dedicated to everything else going on with the system, assuming the workload is greater than email/productivity apps. You have to work a lot harder to find a case where quad core makes any sort of sense on a business machine, let alone the stuff coming down the pipe like Nehalem.

It will be really interesting to see what happens in the programming world in the next few years, since the languages we have right now make multithreading so painful. It is very conceivable that we will finally see the rise of LISP (or some other functional language) that everyone has been jokingly predicting for so many years now.

Reply Score: 3

RE[4]: XDMCP/NX server
by sbergman27 on Thu 20th Nov 2008 18:25 UTC in reply to "RE[3]: XDMCP/NX server"
sbergman27 Member since:
2005-07-24

The only thing I would say is that works fine; as long as you wont need to scale dramatically upward, and as long as you will never expose that machine outside your firewall.

This client buys a new server every three years, about the time that the hardware support contract from the manufacturer is about to run out. Next spring, we'll have a very economical new server that will scale to 4 or 5 times the load that this one handles. Figure about 300 users. Not that we'll reach that number of users during its lifetime.

I imagine that some admins would try to overcomplicate this with multiple servers talking over the network, virtualization, and a generally buzzword compliant topology. And they would end up with a slower, more expensive, less reliable, harder to administer system. Since I semi-retired, I'm more interested than ever in systems that just work. And this config has proven itself over 6 years, 3 servers, the addition of a couple of remote offices, and a tripling of users.

Massive scaling, of a kind which requires the whole "shared nothing" approach is interesting to think about... but very unlikely to be needed.

Edited 2008-11-20 18:26 UTC

Reply Score: 3

RE: XDMCP/NX server
by helf on Fri 21st Nov 2008 00:53 UTC in reply to "XDMCP/NX server"
helf Member since:
2005-07-06

Maxes out at 12GB of RAM? That is kinda weird.

Reply Score: 2

Not really required
by IvoLimmen on Thu 20th Nov 2008 13:27 UTC
IvoLimmen
Member since:
2005-07-06

I just bought a new machine that has 8GB of RAM. Just because I am used to following that rule, my swap is 16GB... it has never been touched.
Since I also have enough drive space, I really don't care that much about it...

Reply Score: 2

Why not?
by ShadesFox on Thu 20th Nov 2008 14:59 UTC
ShadesFox
Member since:
2006-10-01

If you are running a system with 32 gigs of memory, you intend to do SOMETHING memory-intensive with it. Why not use 64 gigs for swap? Again, it seems like you intend to do something with it, and how expensive are disk drives these days? With the cash you are dumping into memory you can spend a little more on some beefy disk drives.

Besides, this isn't a major life decision here. Choose something. The installer's default will be fine. If you need more make a swap file.

Reply Score: 2

The rule is 2x because of grandma
by Bounty on Thu 20th Nov 2008 17:27 UTC
Bounty
Member since:
2006-09-18

The rule is 2x because of grandma. She has no idea how much memory she'll be using. She doesn't know how big her memory footprint is for photo editing or digital scrapbooking, or for playing solitaire for that matter. We know RAM is expensive; that's one reason why eMachines exist. HDDs are cheap, and a fractional amount of space on that drive is even cheaper.

If you are a 3leet haxor linux admin with 1 server handling 28872 programs spread across 376265.2 users, you probably know how much RAM and swap you need. Rules of thumb don't apply to you admin/hacker.

If you are cheap and got 512MB of RAM and have 1GB of swap, you probably told someone you just want email... and your setup is fine. If you have 2GB of RAM and 4GB of swap (on your 300GB hard drive), you probably told someone you will use the net and some programs, and might do some 'stuff' with photos. And your setup is fine too... as a rule of thumb. Swap is cheap and users are unpredictable.

Reply Score: 2

suspend to disk
by JoeBuck on Thu 20th Nov 2008 17:47 UTC
JoeBuck
Member since:
2006-01-11

If you have a laptop and you want suspend to disk to work, you need all of your RAM to fit into your swap. Furthermore, if you have a lot of suspended programs sitting around that are already in swap, you need more.

So 2x RAM is a good rule of thumb if you want suspend to disk to work.

In fairness, the article does mention the suspend to disk issue.
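A quick way to check whether your configured swap could hold a full RAM image (the minimum for suspend to disk), reading both figures from /proc/meminfo:

```shell
# Suspend-to-disk needs swap >= RAM (plus headroom for pages
# already swapped out, per the comment above).
ram_kb=$(awk '/^MemTotal:/  { print $2 }' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ { print $2 }' /proc/meminfo)
if [ "$swap_kb" -ge "$ram_kb" ]; then
    echo "swap covers RAM: suspend to disk should fit"
else
    echo "swap (${swap_kb} kB) < RAM (${ram_kb} kB): suspend to disk may fail"
fi
```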

Reply Score: 3

swap actually makes it faster.
by renhoek on Thu 20th Nov 2008 22:01 UTC
renhoek
Member since:
2007-04-29

a note for all the people who do not make a swap partition because they think it will be faster:

swap space is also used to page out idle programs, freeing up RAM. this RAM will be used as disk cache, resulting in (net) less disk I/O and thus higher performance.

virtual memory is one of the few things usually implemented very well on most OSes. trust your OS builder; he's most likely smarter than you are.

i also use 2 times the amount of physical RAM, because it's a good starting point. if you exhaust this amount of swap you need a lot more real RAM. usually my swap use is around 5%, but hey, hard disk space is really cheap.

Reply Score: 2

sbergman27 Member since:
2005-07-24

virtual memory is one of the few things usually implemented very well on most oses. trust your os builder, he's most likely smarter than you are.

Indeed. On the XDMCP server I mentioned in another post, we once tried changing the /proc/sys/vm/swappiness value from the default of 60 to the oft-recommended 10. This makes the kernel try to avoid swapping out pages until it really has to. It was a huge performance loss; back to 60 we went. I know from other experience that going to the other extreme, say swappiness=90, yields pretty good results during the work day. But when everyone goes to log out at exactly 5PM... mamma mia! The swap-in load is a killer as all those pages that are only needed at log-in and log-out have to be swapped back in. It often pays to trust the defaults.
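For reference, the knob in question; reading it is safe for anyone, while writing needs root (shown commented out):

```shell
# The current value; 60 is the kernel default discussed above.
cat /proc/sys/vm/swappiness

# Lowering it (root only) makes the kernel hold on to anonymous pages
# longer before swapping; the two forms below are equivalent.
# echo 10 > /proc/sys/vm/swappiness
# sysctl vm.swappiness=10
```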

Edit: Another nice thing about defaults is that they are the most well tested config. Remember that data corruption bug in ext3 several years ago that only affected people who ran with data=journal, presumably in order to be "extra safe"?

Edited 2008-11-20 22:24 UTC

Reply Score: 2

RE: swap actually makes it faster.
by 6c1452 on Fri 21st Nov 2008 01:14 UTC in reply to "swap actually makes it faster."
6c1452 Member since:
2007-08-29

Fortunately, RAM is so cheap that you can have it both ways. I don't use swap because I don't like twiddling my thumbs while I wait for formerly-idle programs to get paged back into real memory, and there is enough left over for a disk cache (is this what system monitors refer to as the 'system cache'?).

Reply Score: 1

Interesting... but don't go overboard.
by womprat on Thu 20th Nov 2008 23:11 UTC
womprat
Member since:
2008-10-30

Let's simplify the rule:

RAM + swap space should equal or exceed the total memory required. The more RAM you install, the less swap space any OS will require!

Generally, few people need more than 2-4GB of swap space for a current desktop Linux distro or XP/Vista. Even then, only a fraction of it will be utilised.

Nowadays RAM is so cheap there's little excuse not to have enough of it, and it should be one's primary concern. You have enough physical RAM when your OS barely needs swap space.

I recall back in the day running Windows NT servers with 4MB of RAM and 16MB of swap space. You could run several 2-4MB executables nicely. RAM was so fiendishly expensive that swappable memory was a terrific innovation.

In some cases the swap file can be disabled if there is enough RAM. Windows can do this, but low-priority data is paged out to the swapfile even on a system with a lot of physical RAM, because this optimizes shutdown and hibernate performance. It's not uncommon to disable your pagefile and watch such performance drop. Linux goes a bit nuts without at least some swap space.

If in doubt, don't mess with it. Windows will do a good job of figuring out how much swap space is needed and adapt; 9/10 users should just leave it alone. In fact there is little advantage in messing with it unless you want to avoid fragmentation of your swap file by using a fixed size. In Linux you have to give some thought to your swap partition. Put equal-sized swap partitions across all your drives and the kernel will stripe data between them for performance. Windows does a similar thing; it's a nice little performance bump in Vista if you have two drives and less than 2GB of RAM.
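On Linux, that equal-priority striping is configured with the `pri=` mount option; an illustrative /etc/fstab fragment (the device names are examples, not a recommendation):

```shell
# /etc/fstab -- two swap areas given the same priority; the kernel
# round-robins (stripes) pages across areas of equal priority.
# /dev/sda2   none   swap   sw,pri=1   0   0
# /dev/sdb2   none   swap   sw,pri=1   0   0
```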

Reply Score: 2

Backwards?
by jharrell on Fri 21st Nov 2008 00:22 UTC
jharrell
Member since:
2007-07-30

Debunking the "2x SWAP SPACE as RAM" Rule - Right?


I have never heard of a "2x Ram as Swap Space" Rule

Reply Score: 1

my theory
by pixel8r on Fri 21st Nov 2008 03:16 UTC
pixel8r
Member since:
2007-08-11

my theory on swap is quite simple.

Allocate enough space for swap so that your system won't crash, should you ever feel the need to run lots of memory-hungry apps all at once.

But when I'm using the computer, I treat swap as evil. If it's swapping, it means I don't have enough RAM, and the solution is to increase my RAM.
I've been running with 512MB for a long, long time and with very minimal swap usage (yeah, I use Linux, how'd you guess?). Lately my development needs have dictated that I increase RAM to 1GB, but this is still below the current average.

It still frustrates the hell out of me, when I need to use Windows, that Windows insists on using swap well before my RAM is full. Why do I need a disk cache when the app I'm running is so painfully slow because it's getting swapped out of memory?! It's moronic. There is never a good argument for swap, IMO. If something's swapping, you need more RAM.

Reply Score: 1

RE: my theory
by bnolsen on Mon 24th Nov 2008 04:43 UTC in reply to "my theory"
bnolsen Member since:
2006-01-06

On my 8-core dev system I turned off swap entirely. The problem was, when I screwed up and caused a memory overrun, I ended up having to either hard-reboot the machine or wait 20+ minutes for the application (along with a bunch of other apps) to get OOM-killed. Without swap this OOM cycle runs faster. I'm sure the better way is probably to set some sort of hard per-process real-RAM limit.

Reply Score: 2