Linux: Should You Use Twice the Amount of RAM as Swap Space?

November 19, 2008 · last updated December 8, 2008


Linux and other Unix-like operating systems use the term "swap" to describe both the act of moving memory pages between RAM and disk, and the region of a disk where the pages are stored. It is common to use a whole partition of a hard disk for swapping. However, with the 2.6 Linux kernel, swap files are just as fast as swap partitions. Many admins (both Windows and Linux/UNIX) still follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. Let us say I have 32GB of RAM; should I set swap space to 64GB? Is 64GB of swap space really required? How big should your Linux / UNIX swap space be?
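
For reference, here is a minimal sketch of adding and enabling a 2GB swap file on a 2.6 kernel (run as root; the path /swapfile1 is just an illustrative name):

# Create a 2GB file of zeroes to hold the swap area
dd if=/dev/zero of=/swapfile1 bs=1M count=2048
# Swap areas should not be readable by other users
chmod 600 /swapfile1
# Format the file as Linux swap and enable it immediately
mkswap /swapfile1
swapon /swapfile1
# To enable it at every boot, append this line to /etc/fstab:
# /swapfile1 swap swap defaults 0 0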

Old dumb memory managers

I think the '2x swap space' rule came from old Solaris and Windows admins. Also, earlier memory managers were very badly designed; they were not very smart. Today, we have very smart and intelligent memory managers for both Linux and UNIX.

Nonsense rule: Twice the size of your main system RAM for Servers

According to the OpenBSD FAQ:

Many people follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. This rule is nonsense. On a modern system, that's a LOT of swap, most people prefer that their systems never swap. You don't want your system to ever run out of RAM+swap, but you usually would rather have enough RAM in the system so it doesn't need to swap.

Select the right size for your setup

Here is my rule for a normal server (web / mail, etc.); a quick shell sketch of it follows the list:

  1. Swap space == RAM size (if RAM < 2GB)
  2. Swap space == 2GB (if RAM > 2GB)
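
As a rough sketch of that rule in shell (reading total RAM from /proc/meminfo; 2048MB is the 2GB boundary above):

# Total RAM in MB
ram_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
if [ "$ram_mb" -lt 2048 ]; then
    swap_mb=$ram_mb    # below 2GB of RAM: swap equals RAM
else
    swap_mb=2048       # 2GB of RAM or more: 2GB of swap
fi
echo "RAM ${ram_mb}MB -> suggested swap ${swap_mb}MB"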

My friend, who is a true Oracle GURU, recommends the following for a heavy-duty Oracle server with fast storage such as RAID 10:

  1. Swap space == RAM size (if RAM < 8GB)
  2. Swap space == 0.50 times the size of RAM (if RAM > 8GB)

Red Hat Recommendation

Red Hat recommends setting swap as follows for RHEL 5:

The reality is the amount of swap space a system needs is not really a function of the amount of RAM it has but rather the memory workload that is running on that system. A Red Hat Enterprise Linux 5 system will run just fine with no swap space at all as long as the sum of anonymous memory and system V shared memory is less than about 3/4 the amount of RAM. In this case the system will simply lock the anonymous and system V shared memory into RAM and use the remaining RAM for caching file system data so when memory is exhausted the kernel only reclaims pagecache memory.

Considering that 1) at installation time, when configuring the swap space, there is no easy way to predetermine the memory a workload will require, and 2) the more RAM a system has, the less swap space it typically needs, a better swap space guideline is as follows (a quick shell check follows the list):

  1. Systems with 4GB of RAM or less require a minimum of 2GB of swap space
  2. Systems with 4GB to 16GB of RAM require a minimum of 4GB of swap space
  3. Systems with 16GB to 64GB of RAM require a minimum of 8GB of swap space
  4. Systems with 64GB to 256GB of RAM require a minimum of 16GB of swap space
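
As a sketch, that table can be turned into a quick check on a running system; this assumes the boundaries listed above and reads total RAM from /proc/meminfo:

# Total RAM in GB, rounded down
ram_gb=$(awk '/MemTotal/ {print int($2/1024/1024)}' /proc/meminfo)
# Map RAM to the minimum recommended swap from the table above
if   [ "$ram_gb" -le 4 ];  then min_swap_gb=2
elif [ "$ram_gb" -le 16 ]; then min_swap_gb=4
elif [ "$ram_gb" -le 64 ]; then min_swap_gb=8
else                            min_swap_gb=16
fi
echo "RAM ${ram_gb}GB -> minimum recommended swap ${min_swap_gb}GB"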

Swap will just keep servers running...

Swap space will just keep operations running for a while on heavy-duty servers by swapping processes out. You can always find out swap space utilization using any one of the following commands (illustrative output follows the list):
cat /proc/swaps
swapon -s
free -m
top
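
For example, swapon -s prints one line per active swap area with its size and current usage in 1K blocks; the numbers below are purely illustrative:

Filename                Type            Size     Used    Priority
/dev/sda2               partition       4194300  102400  -1
/swapfile1              file            2097148  0       -2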

See how to find out disk I/O and related information under Linux. In the end, you need to add more RAM, adjust software (like controlling Apache workers or using the lighttpd web server to save RAM), or use some sort of load balancing.

Also, refer to the Linux kernel documentation for /proc/sys/vm/swappiness. With this you can fine-tune how aggressively the kernel uses swap space.
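
A minimal sketch of inspecting and tuning it; the value 10 is only an example (60 is the usual default):

# Show the current swappiness value
cat /proc/sys/vm/swappiness
# Lower it at runtime so the kernel swaps less aggressively
sysctl -w vm.swappiness=10
# Persist the setting across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf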

A note about desktops and laptops

If you are going to suspend to disk, you need more swap space than actual RAM, because the contents of RAM are written out to swap. For example, my laptop has 1GB RAM and swap is set to 2GB. This only applies to laptops and desktops, not to servers.
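
A quick sketch to verify that configured swap is at least as large as RAM before relying on suspend to disk:

# Compare total RAM and total swap (both reported in kB by the kernel)
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
if [ "$swap_kb" -lt "$ram_kb" ]; then
    echo "Swap (${swap_kb} kB) is smaller than RAM (${ram_kb} kB): suspend to disk may fail."
fi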

Kernel hackers need more swap space

If you are a kernel hacker (debugging and fixing kernel issues) and you generate core dumps, you need swap space of twice the RAM.

Conclusion

If the Linux kernel is going to use more than 2GiB of swap space at a time, all users will feel the heat. Either get more RAM (recommended) or move to faster storage to improve disk I/O. There are no hard rules; each setup and configuration is unique. Adjust the values as per your requirements and select the amount of swap that is right for you.

What do you think? Please add your thoughts about 'swap space' in the comments below.


Comments (67)

1 0xAF November 19, 2008 at 9:33 am

I think that 2GB of swap is never going to be filled on a desktop/laptop or even on a server…

and I support the idea of getting more RAM if you see that you're using more than 1GB of swap.

so basically I use no more than 2GB of swap space even if the server has 8GB of RAM. On my laptop I have 2GB RAM and 2.5GB of swap (for hibernating).

2 Mason February 7, 2012 at 12:52 am

I have an Athlon x4 system running Mint Debian edition right now, and I use swap space on the installed drive as well as a separate older 40GB drive exclusively for backup swap space. I can EASILY use 16GB on my 8GB memory system, so a "twice RAM in swap" baseline is perfectly in line. I for one have over 75GB of swap space and frequently go into that 40GB backup reserve, but you can tell when it does: it slows the system down dramatically, telling me it's time for a reboot.

3 Валяк November 19, 2008 at 11:27 am

The old rule applies only to hibernation and heavily loaded machines. On a desktop there is no point in having twice RAM if you won't use hibernate and want to use a partition.

The actual reason for this rule is that on a heavily loaded server, when it crashes and core dumps, the core is saved in the swap (if it is a kernel problem) and you can debug the problem by looking at the cores in swap…

4 Ivan November 19, 2008 at 4:05 pm

Surely the ideal swap space would be where a process takes less time to look up the page to be swapped in the swap file than to prepare a page for swapping and write it to the first available space in the swap file? Write processes, going to cache, should surely be faster than an index search and disk seek?

Any volunteers to write a program that can monitor swap lookups/reads vs. write speeds, incrementally resize the swapfile, and calculate the perfect swap size ‘on the fly’, taking into account average process RAM usage, averaging over a period of days or weeks, until the swapfile size variation is within preset parameters, then fix it there? Oh, and it should never go to swap.

Current best performers:
Lightly-loaded CentOS RAID 10 (40 users) – 1Gb RAM, 1.2Gb swap. Average record read into RAM – 110Kb, 82% RAM used, 26% swap.
Heavy load CentOS RAID 1 (4 users) – 2Gb RAM, 2Gb swap. Average record read into RAM – 45Mb, 96% RAM used, 45% swap.

Usage averaged from top.

5 UtahLuge November 19, 2008 at 4:08 pm

For my home computer, I have a spare hard drive that I use for swap. I mean come on; we all have those 4/8/10Gig drives laying around that we never seem to get rid of. This gives the benefit that your hard drive won’t be thrashing around accessing swap & data at the same time. But on my servers, I have checks that make sure I am not using up all my ram and as soon as anything hits swap I am alerted so I can take corrective measures.

6 Valqk November 19, 2008 at 4:26 pm

@UtahLuge it's not about whether we have spare disks. It's about the point in having a swap that never gets filled, let alone used, when you have 4 gigs of RAM on a desktop.
@Ivan you can use a file instead of a swap partition; this way you can resize on the fly (at least by doing swapoff -a; dd; swapon -a)…

7 glaunix November 19, 2008 at 5:34 pm

Surely your friend's rule of thumb needs to be amended …
1. Swap space == RAM size (if RAM < 8GB)

What does he/she suggest with 16GB RAM? The rule says 8GB swap, which is the same as with 8GB RAM.
Maybe for RAM > 8GB:
swap(GB) = 4 + 0.5 x RAM

8 Simon November 19, 2008 at 6:09 pm

Using virtualisation products might call for a different memory management approach. Running two virtual machines at full steam (for, say, networking experiments), plus the workload of the host machine, your memory and swap space could end up full to the brim.

I agree that this is currently a very special case, but as virtualization becomes ever more popular, the memory management tends to become more complicated.

9 Luiso November 19, 2008 at 7:00 pm

I think that we can have more than one swap partition. Is it better to have small partitions of 1GB?

10 Valqk November 19, 2008 at 8:13 pm

Using more than one swap partition when you have more than one disk is a pretty good practice. Actually, if you set up a RAID1 server it's a must, so you always have working swap even if one of the disks fails.

11 Thomas November 19, 2008 at 8:47 pm

I have 4 gigs of RAM in my laptop. I use no swap on it.

I have between 4 and 16 gigs of RAM on my servers.
They do not use swap either.

RAM is cheap. I think that you should just buy the amount of RAM you need.

12 Chris Bryant November 19, 2008 at 9:43 pm

I'm running 2GB RAM (actually 1.5 with shared video) and a 5 gig swap (what Kubuntu partitioned it at). I've actually run out of swap space a couple of times; not fun at all!
I do agree that more RAM is the answer, especially now that prices are at a low. I can add 4 gigs for around US$40-50, for a total of 6GB (too cheap to throw out the two 1 gig sticks I have, or I would go to my motherboard limit of 8GB).
Virtual machines tend to eat a lot of memory quickly!

13 BobCFC November 20, 2008 at 12:23 am

If you read into the history of Linux, such as in the Torvalds biography, you will find that when it was at version 0.00001 virtual memory was the first feature that Linus added at the request of a third party and not something he needed himself. (Somebody emailed him asking if he could make the kernel work on a system with only 1mb instead of 2mb or something).

Linus was very proud when he managed to fulfil this need for another person, (this feature was not present in Minix) and it really marks the point when Linux moved from a hobby to something more.

On a side note I have 8gb of RAM and manage fine without a swap disk. Even using virtual box and 5 workspaces with all apps open and logged in as two users.

14 Greg Folkert November 20, 2008 at 5:20 am

I guess nobody here has actually had a machine that is aging and right-sized for load from wild-haired "customers". Customers that can write their own 100-way Cartesian join on an 80 million record return set out of a well-made select that chains a ton of "OR" statements vs. using a *…

So… yeah, swap is a good thing to be able to remotely *GET* to a machine, via ssh, before it falls over.

2.5 times RAM is my minimum rule. These machines are 2-16 core machines with anywhere from 4GB to 64GB of memory… not only is RAM cheap… but DISK SPACE is way cheaper.

15 Asif MUSHTAQ November 20, 2008 at 8:19 am

I think you should have enough swap space to protect you against infrequent transient load surges where processing requires lots of memory. If a machine is continuously swapping then you definitely need more RAM. My formula to calculate swap is as follows.

Swap space = 2GB for RAM <= 2GB
Swap space = physical RAM for 2GB < RAM <= 8GB

16 Istvan Visegradi November 20, 2008 at 8:41 am

Just to be picky: there is a rule if RAM > 8GB and another one if RAM < 8GB. So what about if RAM == 8GB? (The same is true for the 2GB rule, but in that case both rules give the same result.)

I do agree with the comment that the above rule cannot be applied if you run virtual machine(s). I would say we should rather use common sense than a rule of thumb in that case. I guess any of us can calculate the virtual machine(s) + host OS memory requirements, and also the host machine's available memory, to see how much swap space will be needed.

17 Stefan November 20, 2008 at 8:48 am

Why is the user even required to make such an arbitrary decision? Some admins may see benefits from managing swap space manually, but for the vast majority of users, an automatically expanding/shrinking swap file instead of a fixed-size partition would be the optimal solution.

18 TooManySecrets November 20, 2008 at 9:01 am

OK, but what does your Oracle guru friend say about apps like Oracle, where the installer fails if your swap is not defined as double the RAM installed in the server? I had two problems of this kind with Oracle 10g on systems with Linux kernel 2.6.

Thank you!

19 Andrey November 20, 2008 at 9:29 am

The old rule is still correct. You need about 2xRAM of swap in order to be sure that you do not limit your machine's ability to run really memory-hungry applications. You are free to violate the rule if you do not have such applications.

If you are patient and declare an OS standing still later than most, you may use 2.5, 3.0, even 4.0 times more swap than RAM.

20 Pedro November 20, 2008 at 12:59 pm

I make good use of tmpfs, a memory-backed (RAM+swap) file system, and have it set up on all my machines: laptops, desktops and servers.

It is very useful to have all those short-lived files (e.g. HTTP session files) go in there and hardly ever touch the disk. On my laptops it saves battery charge, and on my servers it reduces disk access and speeds up the work that can't live without the disk (e.g. database, email). For the infrequent case where there are many or large temporary files, swap gets used by tmpfs. For those cases enough swap space needs to be available, but 2GB for laptops/desktops and 4GB for servers has proven to be enough.

21 dmc November 20, 2008 at 1:25 pm

I had almost 2GB of swap used on a 4GB swap file (2GB RAM) the other day on my server .. and trust me, it didn't look good ;) Of course one of the Apache processes was leaking memory; in normal use it doesn't use more than 10-30 MB of swap … In my experience, even in these crazy situations 2GB is more than enough, because, as the author said, if a machine uses 2GB of swap it basically won't work (load average was 40 because of swapping)..

22 John Q Normal November 20, 2008 at 2:45 pm

RedHat’s rule of thumb: 2x RAM up to 2G, 1x RAM for 3-8G, and 1/2x over 8G — for servers. 1/2 that for desktops. Exception: if you enable hibernation, you’re going to need to add 1xRAM + 1xVRAM on top of what you’d normally set.

23 Artem S. Tashkinov November 20, 2008 at 3:09 pm

Since I upgraded my PC to 1GB of RAM (around two years ago) I completely disabled SWAP and I never looked back.

100% of my servers run without SWAP.

I have yet to face a single problem related to SWAP absence, since nowadays RAM is as cheap as chips.

SWAP has the following disadvantages:
1) _all_ OS tend to fill up the RAM with cache and eventually some of your applications will be swapped out. When you need them fast you will have to wait for your slow HDD to finish reading operations

2) Swap adds some memory and if you have some crazy applications eating all available _virtual_ RAM then you may get into a situation when your system is absolutely stuck trying to swap memory pages from and to SWAP. If you run out of RAM and you have no SWAP your kernel will promptly kill your evil application [with OOM signal].

3) SWAP files/spaces are usually a place where HDD start to fail.

So, I’m quite happy with the fact that I have completely eliminated SWAP headache on all my PCs and servers.

24 Kamil Kisiel November 20, 2008 at 5:19 pm

Quite frankly I don’t see how this article clarifies anything. You just throw out some random values without much justification other than bashing some Windows and Solaris admins. Swap usage is very workload dependent and attempts like this provide little more than a starting point. The most useful bit of this article is the suggestion to use swap files instead of partitions. This is key. You can start small, monitor load and performance, then add more swap or ram as necessary without having to worry about repartitioning.

25 Taneli Otala November 20, 2008 at 5:27 pm

I would suggest an even simpler rule, but break it into three categories of machines…

1) Production Servers
These have got to run, and run fast. You need as much RAM as you are going to use. No SWAP — by the time the system starts swapping, you're out of luck.
If you give a production server SWAP, it will mask the situations where the performance dropped to unacceptable.

2) Analytics/Database Servers
Some people do crazy things with their database servers, complex JOINs and poorly constructed queries — you may suddenly need an extra 32GB of memory… Start with oodles of real RAM (whatever you can afford, 8-32GB), and then add double that as SWAP, just in case someone goes crazy. When things go crazy, things go slow, but at least it won't crash.
Never apply this "logic" to your production servers.

3) Laptops and others who hibernate
Enough SWAP to handle the hibernation, i.e. a bit more than your amount of RAM.

26 OldBoots November 20, 2008 at 6:29 pm

Just for the historical perspective, the 2X RAM rule dates from the days before VM: when a process got swapped out, the entire memory image of the process got swapped out, not just the least recently used memory pages. Any time you see something like this that you think doesn't make sense, try viewing it through the eyes of a PDP-11 programmer who had no VM, no page faults, no networking, etc.

Shared libs, VM systems, the ability to re-read clean pages from the original image on disk rather than from swap all have greatly reduced the amount of swap required.

That being said, a large swap can give you some breathing room to kill off a wild process before it brings the system down. If you have the space, why not use it?

The comment about using separate drives is also true. Using a dedicated swap drive will cost you the power of having it spun up all the time, but performance will be faster. Hang it off a 2nd controller and it gets better still. But both of those factors drive up initial cost as well as operational cost. You are going to hardware RAID it, right? :)

One strategy I have used in the past is a daemon which adds/removes swap space dynamically as it is required (by looking at the rate of change and consumed space). Initial swap is on fast local disk, the next step of swap goes to a swap file on local disk (slower), and the final stage goes to swap files on NFS-mounted disk (very slow). The slowing of swap gives you even more time to go in and identify the culprit and kill things off gracefully.

27 Art Protin November 20, 2008 at 7:54 pm

The rule of thumb being used dates way back and was helpful when the administrator doing the installation had not been briefed by the engineer who spec'ed and purchased the system. The full design rule was to plan for swap big enough to accommodate all jobs that needed to run concurrently. In order for the system to be reasonably responsive, you needed enough RAM to have 1/2 the work swapped in. If the system was undersized but the swap was twice the size of RAM, jobs would fail (out of memory), but fail quickly. Thus, even when we engineers were doing the installation and/or the administrators were in on the initial spec, we still used the 2X rule, especially when the workload was much less well known than the budget.

28 Alan November 20, 2008 at 9:28 pm

This talk of having zero swap for "production servers" seems crazy to me. Yeah, we all know swap is slow, but it might just save your "production server" from going down. Are you really saying you'd rather it was a case of "shame that server went down, but at least it never started running slower"? What kind of a production environment are you working in? I'm not sure how this would mask performance problems either.

Since disk is so cheap it’s better to be safe and allocate some. The suggestion of using an old separate disk that you might not have used otherwise seems like a decent one.

The amount you want to allocate depends on what the servers purpose is. That’s why the swap rule issue continually gets raised, because there is no definitive answer.

29 Arhimed November 20, 2008 at 10:38 pm

On my laptop I have 1.5GB of RAM. I never used the 'sleep' feature of my WinXP OS, so one day I just disabled the swap. From that moment I haven't had any trouble, and even better, my laptop became more responsive when switching between apps.

30 Dave November 21, 2008 at 2:49 am

Come on, having a large swap never hurt anything, except wasted space. I'd rather have wasted space than important programs crashing on me.

31 Akshay Sulakhe November 21, 2008 at 7:27 am

I am also always confused about how much swap I should use on my desktop. The config is an AMD quad core 9950 Black Edition, 2GB DDR2 800MHz RAM, 160GB SATA plus a 500GB storage disk, motherboard ASUS M3A78EM. OS: Linux, Sabayon or Ubuntu. I generally give 2GB of swap… but my RAM is never utilized more than 700MB, even if I'm compiling on Sabayon, and swap never more than 19MB. So what's the use… please tell me whether I need to use swap or not. No Windows, so no hassle…

32 Chris Lees November 21, 2008 at 11:13 am

Akshay Sulakhe: Use however much swap you like. I used to run with no swap at all on a 2gb desktop, with no ill effects. When I did a fresh install of the new Ubuntu I forgot to tell it not to create some swap, so now I run with swap space. I don’t notice the difference.

There is also a “swap should be no more than 1.5x your RAM” rule floating around, that originally came from Mac users. The virtual memory system in the Mac OS would work faster if you set it to less than 1.5x your RAM (it was dynamically configurable). You wouldn’t believe how many people had it set to “All my RAM plus one megabyte” as this was what the value defaulted to when you turned on VM!

33 Dan November 21, 2008 at 3:16 pm

I generally follow this rule, but if I want to save my session I obviously need more than my RAM. I have 1GB of RAM and it rarely runs out, but I'm going to get 2GB so it never will. I now think having either just more than your RAM, or none at all, are good options. Should I ideally have some or none?

34 Akshay Sulakhe November 21, 2008 at 5:11 pm

But what's the use of wasting more hard disk space on swap… I can use that for /usr/local… or if I use a VM I can go for /dev/shm… that's worth it… OK, but if I need VM I need swap… so I am still confused how much swap I should use for my computer…

35 Vincent November 22, 2008 at 11:54 am

If undecided regarding your swap size, you may use swapd: http://cvs.linux.hr/swapd/

This nice daemon will dynamically create swap files on a filesystem when really needed.

I personally use a hybrid approach at home, with a swap partition for the usual case + swapd for peaks.

36 Saurabh November 22, 2008 at 1:03 pm

This math of double the RAM is effective only where there are a number of processes running simultaneously and where data flow is very high.

37 Anjanesh November 23, 2008 at 3:49 am

I filled all my desktop PC's RAM slots, amounting to 8GB, for the sole reason of eliminating swap totally. But I still set aside 2GB of swap just in case. So far usage has been 0%!

I've heard of hard disks wearing out too often just due to too many disk seeks for virtual memory operations.

38 Erik Bussink November 30, 2008 at 12:26 pm

I'm going for 113% of the physical memory. That way it's a bit more than the physical memory, without being too much. If those 113% of swap space are used, then the performance of my system will go down anyway due to high I/O.

This additional 13% is a value we came up with after a long lunch-hour discussion in one of the Red Hat courses I followed.

39 Bob Weber December 4, 2008 at 8:09 pm

For the most part, this discussion seems mostly academic. Hard drive space is so much cheaper than RAM, there shouldn’t be any problem allocating 2x RAM for swap. If you have 16GB of ram, you probably have a TB of disk space, so what’s 32GB?

The questions I’m interested in are (and some of these are answered in the comments here) more along these lines:

Is running no swap a viable option?

Are there performance or reliability increases with more, less, or no swap?

Can swap behavior be changed to not include caching or only be used as a last resort in a production server?

Is there a good way to monitor swap so potential memory leaks can be identified and handled?

Is there a good way to analyze what is being swapped?

40 devx December 8, 2008 at 1:28 am

Swap is, as you know, an "emergency" memory space: Linux runs fully in memory otherwise, until it runs out of RAM. Depending on what programs you run, you may find it viable (check how much free RAM you have often). It's best to keep one in any case for emergencies (like memory leaks or whatnot).

As others said, swap is also used if you configured your kernel to make core dumps for debugging, or if you plan to "sleep to disk".

Some programs do cache to disk using swap, but if you don't have any, most (like VIM) run fine fully within RAM. Swap is addressable by all programs for cache once running. Someone who does more memory management stuff can help you here.

In terms of need/reliability: for some machines, which have a rare/fixed amount of RAM that is a little small, it's a must (hacked TiVo recorders or a PlayStation 2 running Linux are examples, with 256 and 32 MB of RAM fixed. Yetch!) Certainly all machines, as long as your disk is good, will continue to run stably, albeit somewhat slower (if you use faster HDDs or multiple swap areas on different drives this may be less noticeable.)

41 Biff December 24, 2008 at 3:29 am

Folks, the old swap = 2x RAM dates back to when eight MEGAbytes was a lot of RAM on a multiuser system, and there was an enormous difference in $/MB between disk and RAM. Back then, you would often set up a machine using the assumption that you would be swapping to disk on a routine basis. The rule of thumb stuck, but few people understood that it was just a guideline and was not a requirement.

These days, with RAM so cheap, if you are hitting swap, just buy more RAM!

The amount of swap that you should use is entirely dependent on how the system is used. If you have lots of users running unpredictable codes, then configure more swap. If you have a reasonable RAM cushion for your system’s usage pattern, you may well never touch swap. I ran a 32 GB RAM system for quite a few years using 8 GB of swap, and in all that time, the swap was never needed.

Our shop bought quite a few 1-4 GB RAM workstations from SGI over the years. They shipped pre-configured with either 128 MB or 256 MB of swap (I forget exactly which). Again, never a problem.

That said, as disks became very cheap, we started setting up new Unix workstations with 1 GB swap unless there was a specific reason that we would want to add more, and for servers, we’d typically just allocate 4 GB for swap, no matter how much RAM the boxes had. This was a high-performance environment, and again, never a problem.

We also kept a close eye on system performance (including swap usage) using sar and also SGI’s Performance Co-Pilot product (free for Linux, by the way), and that gave us more confidence in ignoring the traditional “Rules of Thumb.”

42 Intelliginix January 3, 2009 at 9:20 am

I try to get enough RAM so swapping is not necessary, but depending on the size of the server I would usually have some swap space available just in case. On servers that don't have enough physical memory swap is absolutely necessary, but again you have to calculate how much memory your server (or workstation) is using and will need. For instance, you have a server with a database, and for performance reasons you may have databases cached in memory. In that case you would be better off having enough RAM that swapping isn't necessary during peak hours of operation. However, due to upkeep and maintenance you may have other processes that run during off-hours (for instance backups and other things) that may push the limits of your physical RAM for a period of time. In that case swap is necessary to avoid "out of memory" situations.

I tend to design modularly so I have a good grasp of what to provision for, and the age of virtualization has made some of these decisions easier in ways, and more difficult in others. Good virtualization software will allow you to dictate how much memory you will use for your virtual machines (which makes it easier to manage machines running over a host OS).

A good monitoring tool also helps, with the proper thresholds set to alert the admin if there is any sign of possible trouble.

43 Keld Simonsen January 22, 2009 at 12:17 am

For a desktop it is very nice to have quite some swap space. I have been running a desktop which got to thrashing because of Firefox etc., and I was using the 2X rule of thumb. However, my Firefox does not exceed, say, 500 MB, so 2 GB should be enough.

Another issue is using RAID for swap. If you do that, your system can survive and continue running if one of the disks that you use for swap fails. Given that we are talking about 2GB partitions, which is not much on today's huge disks, you can use more than 2 disks; 3 could be useful to survive 2 disk failures.

And for performance I would use the raid10,f2 or raid10,f3 layouts. The "far" layout of raid10 gives almost raid0 performance for sequential reading, and raid10 with the "far" layout is overall the best-performing raid-1-like raid type. More on Linux raid can be found at http://linux-raid.osdl.org and specifically setting up a system with swap on raid at http://linux-raid.osdl.org/index.php/Preventing_against_a_failing_disk

44 IT_Architect February 21, 2009 at 3:01 am

I am facing this same decision at present as well.
- First, high performance hard drive space is not cheap.
- Second, with server consolidation, you don’t want your VMs taking up space unnecessarily.
- Third, prudent use of hard drive space yields more options for floating your virtual machines to other servers.
- Fourth, prudent sizing of hard drive space for VMs reduces backup space, bandwidth, and time requirements spent backing up to low cost drives where gigs tend to add up making them not quite so cheap if you are paying by the month from a hosting company.
- Lastly, if you run out of memory, applications will malfunction and there may be OS problems as well. But can you actually fix that with hard disk space?

My thoughts:
1. High-speed processes normally found on servers cannot actively swap anywhere near a gig and provide usable performance. Unless you typically have a lot of dead wood loaded that you can park, it will be tough to leverage much space. (Maybe a bunch of VMs that aren't doing anything.) As server loads go up, programs that dynamically spawn processes or threads as needed require more memory because the processes run longer and thus more are needed at a time. Swap soon ends up being a performance governor, since no matter how much physical RAM you have, the amount of virtual memory you can use will be limited by the speed at which you can move memory contents between disk and RAM, and the time it takes to manage the virtual memory. With the processor speeds vs. disk speeds today, the amount of useful virtual memory would be a pretty low number. You may be able to limit the number of threads or processes these types of programs can spawn so as not to render other applications unusable; however, that simply passes the delay to the service requesters in the form of timeouts in the client apps. This means more memory and/or CPU is required, and more swap space cannot help. Thus a diminishing ratio between RAM and swap makes sense as RAM increases.

Let's take this thought process to the next level. At one point in time, the biggest advantage of a hard drive over a floppy drive was not speed, it was capacity. Later, as processors and disks improved, there was a very big difference in speed. The hard drive felt as fast as RAM. The problem up to this point was the processor, no matter how much people talked about "slow speed peripherals". I remember the president of SUN confidently stating at a keynote at a COMDEX, "The 486 is the fastest CISC processor that will ever be made". That's before he knew anything about it. His reasoning was that no processor with variable-length instructions could ever be pipelined. The 486 dropped the bomb on that supposition and could do far more at 25 MHz than the hottest AMD 386 could do at 40 MHz. A CISC instruction that required 6 clock cycles could accomplish the same work that took the RISCs 200, which is why CISC, even considering pipeline flushes, is still around today. Ever since the 486, feeding the processor from RAM has been a major problem. However, for the most part, all the way through the Pentium III, 512 megs of RAM was all they would support, and 256 was more common. Things have changed a lot since then. A few years ago AMD gave Intel an incredible beating when Intel couldn't adequately feed their processors from memory. Intel has only recovered from that in the past year or so. So if feeding the processor at RAM speeds is such a huge problem today, how is it that we can somehow make use of gigabytes of disk space to compensate for RAM shortages? Either I'm missing something here, or the hard drive could only help for small snippets that need to be accessed very infrequently. Forget using the hard drive for swap; can you even imagine today's processors getting anything done without using substantial RAM for disk caching? Is it possible that 2 times RAM was meant for the Pentium III and earlier, where memory sizes and CPU speeds allowed virtual memory on the hard drive to be of practical benefit?

2. Workstations should be able to benefit more from swap than servers. Users cannot switch tasks 1000 times a second, thus large amounts of virtual memory can be practically used for editing large files and parking large chunks of the many programs that were loaded but doing nothing most of the time. Swap times of 2 to 4 tenths of a second are of no consequence in an environment where user programs switch every 20 seconds or more, or the occasional search-and-replace on a large file takes 15 seconds to complete, or minimized Photoshop pictures take a second to reload.

For me, I think I’m going to challenge some OS developers to explain to me where all of the dead wood is coming from that they can store in swap because it surely cannot be from active processes.

45 Lirio February 25, 2009 at 6:26 am

Hi guys, do you know how much RAM Red Hat 9 can accommodate? I have a system which runs only RH9 and I want to upgrade it with new hardware, but I'm not sure how much RAM it can detect. Red Hat 9 is x86 only.

Thanks

46 Intellignix March 5, 2009 at 11:14 am

If you are running a desktop PC you WILL want to have more than the amount of RAM you are using, in case you need to use the hibernate feature on your computer. As far as servers go, since they are meant to stay up, if you provision them correctly you shouldn't need swap space at all. But you should have swap just in case, for safety reasons; how much really depends on what you're doing.

intelliginix.com

47 Luis Alvaro Araujo² April 16, 2009 at 6:37 am

The amount of swap isn’t relative to the amount of memory.
The amount of swap is relative to the amount of programs used!!

I have a FreeBSD 6.3 box running on an old Duron 1GHz with 256 MB RAM, and although the computer has 256 MB of swap, it has never used the swap because it only runs NAT, firewall, DHCP and Apache.

If I install more RAM, will I increase the size of swap space?
If I install more RAM, the swap should decrease, as more RAM will be available and less swap will be needed.

48 Solar Winds July 17, 2009 at 6:20 pm

According to normal computer hardware details, with a processor having a 32-bit address bus (which is very little for a server, but fine for a desktop), the processor cannot use more memory than it can address. With, say, 512MB of total memory, remove a 256MB RAM module and the computer will never use more than 256MB. Thus the theory of
swap = 2*RAM fails.
It is better to look at your usage than to take some general rule.

49 Bala July 31, 2009 at 3:59 pm

I have a question here; can somebody clarify for me:
In real life, we have a UNIX server which hosts about ~10 databases used by web and other applications. Will the swap spaces be created so that the space is dedicated to one database?

50 valqk August 1, 2009 at 10:12 am

If you want to have nice and quick database access, you should NEVER EVER get to swap usage. SWAP is SLOW and is there to save your ass when there is no memory left, so that the kernel won't start killing random processes… fine-tune your DB to use the memory you have.

51 Roger Wolff October 18, 2009 at 10:53 am

Ram keeps getting cheaper. Disks keep getting cheaper. If you stick to the swap=2xRAM rule, you’ll be spending about 1% of the value of the RAM on disk space allocated for swap. That sounds reasonable.

Although you can nowadays easily PLAN not to use any swap, someday, somehow, you’ll run some app that happens to use a lot of memory, and you’ll need some swap.

If you’re lucky, and you’ve got (enough) swap space configured the app will silently push some stuff to disk, when it needs the memory, and it will be almost-unused data, that you don’t notice when it has moved to disk for a while.

If you’re unlucky, you’ll notice your machine becoming slow to work with. You are then faced with a choice. You can stop the application that consumes “more than expected” memory. Or you can suspend it and continue it later. Especially if you decide to do this last option, having the swap space means a lot.

52 Tony November 16, 2009 at 9:10 pm

There is a lot of confusion out there about swap space and setting up the correct size. For desktop systems, you’re just fine with no swap and a lot of RAM.

For server environments, it is a different story.

You see, while you never want to use swap space (in the I/O sense), the system has some interesting issues that it has to deal with when handing out memory, and the default choice Linux makes is very bad for reliable environments.

Here is a summary:

When a program asks for memory (via malloc, or indirectly via fork(), which has to make a copy of the running process), Linux assumes that the request is larger than it really needs. This is a reasonable assumption in the majority of cases. For example, in a shell, every time you run a command, the data set of the shell itself is marked copy-on-write, and is therefore a potential memory sink if that new shell were to continue running. In reality, the next thing that happens in 99% of the cases is an exec(2), so that those potential memory drains are immediately thrown out.

Basically, in order to run with little or no swap, the kernel has to lie and tell all comers that memory is available. If that turns out to be wrong, then it KILLS whatever program ends up actually needing the memory at some future time when none is available.

The problem is that there is no way to make a memory "guarantee" with this policy. It's a bit like selling too many tickets for an airplane: you don't know who is going to get bumped. So, say your Java VM calls a native method that does a fork() of a 1GB process, gets a bunch of copy-on-write pages, and then starts to touch them like mad… something is going to break, and it may not be the JVM. That is the problem. If the timing is wrong, it could be your production database that gets the axe!

People talk about “old” or “vintage” UNIX systems that made you have twice the amount of swap as RAM.

This had nothing to do with small memory systems. It had to do with guarantees.

Those UNIX systems were written to be very predictable in their runtime behavior, and the only way to do that is to cause memory allocation to fail in cases where there is nowhere to store the data. So, the idea is this: “Only tell me I can have the memory if it is really there (via RAM + swap)”, but the performance assumption was (as Linux assumes) that programs will ask for more than they need (which is common because of the whole fork()/exec()/copy-on-write model).

Classic UNIX requires predictability (and therefore a lot of swap space), and assumes that performance will be good because no one uses all of the RAM they ask for (e.g. swapping will be infrequent).

Linux, by default, assumes reliability, and hopes for good performance and behavior.

So, as an administrator of these other UNIX systems, dealing with memory management goes like this: you expect to see swap in "use"… it means that allocations have been made against it. But it does not mean any disk I/O has happened!

What you have to do for monitoring is watch the actual swapping stats via sar or some other utility. If you are seeing swap I/O, then you need more RAM or you need fewer processes. The amount of swap “in use” is an almost useless number (other than if you run out, new memory allocations will fail on request).

Linux can be tuned to work in a fashion that is closer to this “more reliable” technique as of the 2.6 kernel line. Google on “Linux Memory Overcommit”. The basics are:

- Have at least 2MB + size of RAM. I’d recommend the old RAM*2 number…disks are cheap. This is independent of RAM size, since what you are using swap for is guarantees, not actual storage.
- Change the Kernel parameter vm.overcommit_memory to 2 in sysctl.conf

As your systems run, watch for swap I/O, which is a little difficult and misleading.

sar -B has numbers about paging, but unfortunately, paging is how all I/O happens. The %vmeff column is a good indicator (you want it to be high).

If you use a swap partition then you can directly monitor the reads/writes from/to that partition itself via sar -d.

53 Greg Zeng March 4, 2010 at 8:04 am

Using OpenSuse, latest, default 2gb swap file included & used. Boot: 5.75 gb. Home: 7.12 gb. [checked by freeware Partition Wizard 4.2.2].

OpenSuSe swap usage? How do I measure it?

Main op sys is VISTA (HP PAVILLION notebook DV6514TX, 320 GB HDD, 3GB RAM). Vista has a 4gb partition, with operating set swap drive. At the moment (48 processes, physical ram 78 %), it uses 1 gb swap disk. [Freeware: Process Explorer]. Linux is used to correct Windows crashes.

Windows is preferred because of better applications: Dragon Naturally Speaking, latest hardware, Servant Salamander, Cobian Backup, Omnipage, ….

Windows & applications make use of “temp” directories. I put all my temp folders (4 of them in Windows) on the first partition of my fastest drive. Being a common notebook, it has only one, not counting the slow external USB drives.

Greg Zeng, Australian Capital Territory

54 Dude January 27, 2011 at 10:31 am

Ubuntu defaults to 136% of physical RAM for swap. My netbook has Ubuntu 10.04, 2gb of RAM and about 3gb of swap space. It NEVER uses more than 500mb of swap space. Disk space isn’t cheap when you are using an SSD drive. Anything over 1gb of swap is a waste on my netbook SSD drive. I never use the Suspend feature, because Linux boots fast (unlike Windows).

My desktop system has 4gb of RAM and an Ubuntu 10.04 variant. I forced it to use about 1.3gb once. I ran 2 VirtualBox session with Redhat and Fedora, Firefox, Gimp and VLC all at the same time. This was when I was using a 30gb SSD drive for swap space. I never even got the system to use 2gb of swap space. Ok, maybe it used more than 1.3gb for a few microseconds, but the average was much less. Typically, my desktop uses less than 500mb as well.

Try the command ‘vmstat -s’ to see how much swap space is used. Most of the time it’s 0 k on both of my systems. I tend to agree that swap space isn’t really needed if you have enough RAM. I think the Ubuntu default of 136% of RAM is more than enough.

55 jack March 21, 2011 at 3:55 pm

So, we just installed some IBM X3950 M3's in a cluster with 1 terabyte of RAM. You're suggesting how much swap space?

56 Walter July 31, 2011 at 10:01 am

A bit outdated … but at least when you intend to hibernate / suspend your system, you want physical RAM + a little more for swap space, because that swap space is used to save your memory contents …

57 Eugene November 8, 2011 at 5:04 am

I’ve noticed that in a heavily loaded vm environment where 65 to 85+ percent of the RAM is allocated/utilized, having less swap space than the amount of RAM tends to crash some of the virtual machines at times, so I tend to stick with the 2x rule just to be on the safe side.

58 Wolf January 20, 2012 at 7:07 pm

There are always processes that require extraordinary amounts of memory – yet not all of this memory has to be accessible instantaneously. Take 3D image file processing such as in CAT scans. A 4000-cubed volume has some 64G pixels, and depending on a particular software processing – such as modern nonlinear tomographic reconstruction methods – you may easily reach 1TB of storage for those pixels, raw input data, and related matrix information. Smart processing will focus on sub-regions while other regions are kept in swap space.

Even smaller reconstructions, when several of them are to be processed concurrently on an MP system, have a similar appetite for memory. Writing swap routines for intermediate data storage (as I have done in the past) definitely slows down development of these kinds of high-accuracy algorithms and should be avoided.

Interestingly, it seems that under those conditions – a swap space several times the size of RAM (say in the >100GB range) – some swap space management algorithms “paint themselves” into a corner and kill processes before they have made use of the available swap space (time delays/pauses in writing to virtual memory can save the day – avoiding process kills).

In summary – just filling the RAM to the physical MAX may not solve the issue for a given machine nor is it efficient in terms of computing resources. Thus the question remains – how much swap space may/should you safely allow for. Would that not remain a problem specific issue?

59 Chankey Pathak February 18, 2012 at 4:13 am

You pointed out a good thing. I have 4GB RAM and was not using a swap partition. I didn't see any effect, but when I started using hibernate and servers, I needed to use it.

60 Giulio Quaresima March 5, 2012 at 1:21 pm

Your Oracle GURU's rule makes no sense. With his rule, a system that has 7.99999 GB RAM will have a 7.99999 GB swap partition, but if it has 8.00001 GB RAM, it will have only 4.000005 GB of swap!

61 neuron May 20, 2012 at 6:43 am

Thanks a lot for this. You saved my precious minutes. However, I have a machine with 2 GB RAM and swap is also 2GB. Is that good? From the explanation above, I think that's good. But when I start an IDE, Rhythmbox and editors, my swap also starts filling up… the load average is like hell. What should I do about that? Thanks in advance.

62 Mathew August 15, 2012 at 8:55 pm

If I have a solid-state device and hard disks, what is the best place, performance-wise, to put swap space? The machine has 64 GB RAM and a 128 GB solid-state device. Should I be using the solid-state device for swap space, or is it a total waste?

63 Ramel.d October 29, 2012 at 4:32 pm

2x swap space is not necessary nowadays. Having sufficient RAM and a little swap space is good enough for the server to run all of its processes. To stop processes from using swap space we need to install more RAM. Having more RAM makes the swap space less important in the system.

64 john January 29, 2013 at 8:48 pm

Can someone advise on this?
I have a VPS with 1 GB of RAM available. At the same time there is 256MB of swap space available, which is always running at almost 99% usage, while RAM usage is about 60% and the CPU load is at 0.5. I am getting server downtime very often, with the server throwing 500 errors.
Should the swap space be higher? Is it possible that there are other misconfigurations?

65 great white May 3, 2013 at 1:14 pm

If I use a laptop, is there some more space needed, for example for hibernation?
Thus swap = RAM + space for hibernation.
Am I correct?
i.e. If I have 4GB of RAM, in that situation: 2GB for swap (min) + 4GB to store the hibernation image of RAM. Thus I use a 6GB Linux swap file in any case, especially since I use really RAM-voracious math applications that create many huge matrices. When going to sleep I need to stop working without losing data, so that's the only way to be sure to save the present session with data and pending calculations.

66 DG January 24, 2014 at 11:42 pm

I’m amazed, years after the initial question was posed, how little *science* has been presented amidst scores of “disks are cheap” and “rule of thumb” comments utilizing logical arguments valued around the same level as divination based on entrail-readings… Basically the only logic-based explanations I see here relating RAM to swap space in a modern computing environment are those related to hibernation. The “2xRAM” rule, in the absence of hibernation requirements, seems obviously bogus as illustrated simply by the imaginary scenario of a system with 16GB RAM and 32GB swap, which swaps rarely but occasionally, and, with no alteration to workload, is treated to an upgrade to 24GB RAM. As the existing 32GB of swap should be sufficient even for hibernation, and workload, as stated, does not increase, by what logic should the extant swap space be expanded to 96GB (as the 2xRAM rule would dictate)?

Throwing an additional drive on your little media center box is one thing, but what about Linux systems in the Real World doing Real Work — provisioning additional RAM for 100 VMs in an ESX cluster can be done on-the-fly without a reboot trivially, expanding each system’s drive and repartitioning, or adding additional drives, while admittedly scriptable, is not nearly so trivial. This is where having some Real Science behind the decision would be most welcome.

67 QuyetDX May 20, 2014 at 1:32 pm

What is the minimum size swap space can be allocated at? Is it equal to the page size or something like that?
