

Linux and other Unix-like operating systems use the term "swap" to describe both the act of moving memory pages between RAM and disk, and the region of disk where those pages are stored. It is common to dedicate an entire hard disk partition to swapping; however, since the 2.6 Linux kernel, swap files are just as fast as swap partitions. Many admins (both Windows and Linux/UNIX) still follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. Let us say I have 32GB of RAM; should I set swap space to 64GB? Is 64GB of swap space really required? How big should your Linux / UNIX swap space be?
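Since swap files perform on par with partitions on the 2.6 kernel, a swap file is an easy way to add swap without repartitioning. Here is a minimal sketch; the path and the size are arbitrary example values, and enabling the file requires root:

```shell
# Create a 64MB file of zeroes (path and size are example values;
# for real use you would pick something like a 2GB file outside /tmp)
dd if=/dev/zero of=/tmp/swapfile1 bs=1M count=64

chmod 600 /tmp/swapfile1   # swap files should not be world-readable
mkswap /tmp/swapfile1      # write the swap signature to the file

# As root, enable it and make it permanent:
#   swapon /tmp/swapfile1
#   echo '/tmp/swapfile1 none swap sw 0 0' >> /etc/fstab
```

Verify the result with `swapon -s`, which lists the new file alongside any swap partitions.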

Old dumb memory managers

I think the '2x swap space' rule came from old Solaris and Windows admins. Earlier memory managers were also very badly designed; they were not very smart. Today, both Linux and UNIX have much smarter memory managers.

Nonsense rule: Twice the size of your main system RAM for Servers

According to OpenBSD FAQ:

Many people follow an old rule of thumb that your swap partition should be twice the size of your main system RAM. This rule is nonsense. On a modern system, that's a LOT of swap, most people prefer that their systems never swap. You don't want your system to ever run out of RAM+swap, but you usually would rather have enough RAM in the system so it doesn't need to swap.

Select right size for your setup

Here is my rule for a normal server (web, mail, etc.):

  1. Swap space == RAM size (if RAM < 2GB)
  2. Swap space == 2GB (if RAM >= 2GB)
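The rule above can be sketched as a small shell helper. Note that swap_for_ram is a hypothetical function of my own for illustration, not a standard tool; sizes are in MB:

```shell
#!/bin/sh
# Sketch of the rule above: swap equals RAM below 2GB, capped at 2GB otherwise.
# swap_for_ram is a hypothetical helper; it takes RAM in MB and prints swap in MB.
swap_for_ram() {
    ram_mb=$1
    if [ "$ram_mb" -lt 2048 ]; then
        echo "$ram_mb"      # swap == RAM
    else
        echo 2048           # swap capped at 2GB
    fi
}

swap_for_ram 1024    # 1GB box: swap equals RAM
swap_for_ram 32768   # 32GB box: swap still capped at 2GB
```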

A friend of mine who is a true Oracle guru recommends the following for heavy-duty Oracle servers with fast storage such as RAID 10:

  1. Swap space == RAM size (if RAM < 8GB)
  2. Swap space == 0.50 times the size of RAM (if RAM >= 8GB)

Red Hat Recommendation

Red Hat recommends the following for RHEL 5:

The reality is the amount of swap space a system needs is not really a function of the amount of RAM it has but rather the memory workload that is running on that system. A Red Hat Enterprise Linux 5 system will run just fine with no swap space at all as long as the sum of anonymous memory and system V shared memory is less than about 3/4 the amount of RAM. In this case the system will simply lock the anonymous and system V shared memory into RAM and use the remaining RAM for caching file system data so when memory is exhausted the kernel only reclaims pagecache memory.

Considering that 1) at installation time, when configuring the swap space, there is no easy way to predetermine the memory a workload will require, and 2) the more RAM a system has, the less swap space it typically needs, a better swap space recommendation is:

  1. Systems with 4GB of RAM or less require a minimum of 2GB of swap space
  2. Systems with 4GB to 16GB of RAM require a minimum of 4GB of swap space
  3. Systems with 16GB to 64GB of RAM require a minimum of 8GB of swap space
  4. Systems with 64GB to 256GB of RAM require a minimum of 16GB of swap space
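The Red Hat table can likewise be encoded as a small lookup. Here rhel_min_swap is a hypothetical helper of my own (RAM in, swap out, both in GB), and I have assumed the 16GB minimum also applies above 256GB, which the table does not actually specify:

```shell
#!/bin/sh
# rhel_min_swap: hypothetical helper encoding the Red Hat minimums above.
# Takes RAM in GB, prints the minimum recommended swap in GB.
rhel_min_swap() {
    ram_gb=$1
    if   [ "$ram_gb" -le 4 ];  then echo 2
    elif [ "$ram_gb" -le 16 ]; then echo 4
    elif [ "$ram_gb" -le 64 ]; then echo 8
    else echo 16   # 64GB-256GB per the table; above that is an assumption
    fi
}

rhel_min_swap 8    # falls in the 4GB-16GB row, so 4GB of swap
rhel_min_swap 32   # falls in the 16GB-64GB row, so 8GB of swap
```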

Swap will just keep servers running...

Swap space will just keep operations running for a while on heavy-duty servers by swapping out idle process pages. You can always find out swap space utilization using any one of the following commands:
cat /proc/swaps
swapon -s
free -m
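The same numbers can also be read programmatically. This sketch assumes a Linux /proc filesystem and pulls the swap totals straight from /proc/meminfo (values there are in kB):

```shell
#!/bin/sh
# Read swap totals from /proc/meminfo and report them in MB
total_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
used_kb=$((total_kb - free_kb))

echo "Swap total: $((total_kb / 1024)) MB"
echo "Swap used:  $((used_kb / 1024)) MB"
```

This is handy inside monitoring scripts, where parsing the human-oriented output of `free -m` would be more fragile.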

See how to find out disk I/O and related information under Linux. In the end, you need to add more RAM, tune your software (for example, controlling Apache workers or using the lighttpd web server to save RAM), or use some sort of load balancing.

Also, refer to the Linux kernel documentation for /proc/sys/vm/swappiness. With this you can fine-tune how aggressively the kernel uses swap space.
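For example, swappiness can be inspected and tuned as follows; the value 10 below is just an illustration, not a recommendation for every workload (the usual default is 60):

```shell
# Show the current swappiness value
# (0 = avoid swapping as much as possible, higher = swap more readily)
cat /proc/sys/vm/swappiness

# As root, lower it so the kernel prefers dropping page cache over
# swapping out process pages (10 is an example value):
#   sysctl vm.swappiness=10
# Make the change permanent by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 10
```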

A note about Desktop and Laptop

If you are going to suspend to disk, then you need swap space larger than your actual RAM, because the contents of RAM are written out to swap during hibernation. For example, my laptop has 1GB of RAM and swap is set up to 2GB. This only applies to laptops and desktops, not to servers.

Kernel hackers need more swap space

If you are a kernel hacker (debugging and fixing kernel issues) and generating core dumps, you need swap space twice the size of RAM.


If the Linux kernel is going to use more than 2GiB of swap space at a time, all users will feel the heat. Either get more RAM (recommended) or move to faster storage to improve disk I/O. There are no hard rules; each setup and configuration is unique. Adjust values as per your requirements and select the amount of swap that is right for you.

What do you think? Please add your thoughts about 'swap space' in the comments below.

Oracle Linux has now joined Fedora, Ubuntu, and Solaris in giving out free CDs. You can now request your FREE Oracle Unbreakable Linux 2-disc (DVD) kit from the official Oracle site.

=> Visit oracle site to grab free CD kit [ direct link ].

This might come in handy...

The HCL (Hardware Compatibility List) now includes OpenSolaris content. Sun's hardware compatibility list includes the systems and components that run OpenSolaris, and the drivers and devices it supports.

=> HCL for OpenSolaris

Virtualization: Run Windows and Linux at One Place

I've used VMware ESX and Xen paravirtualization, and Virtuozzo, Solaris Containers, and FreeBSD Jails as OS-level virtualization. VirtualBox is another full virtualization solution. Presently, VirtualBox runs on Windows, Linux, and Macintosh hosts and supports a large number of guest operating systems, including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista), DOS/Windows 3.x, Linux (2.4 and 2.6), and OpenBSD.

Rakesh has published a small article about the VirtualBox virtualization software. Both Windows and Linux can be run together simultaneously, and you don't even need to switch between the two. With the seamless windows feature of the latest version of VirtualBox, you can run both Windows and Linux applications from the same desktop interface. This has been made possible by the combined efforts of VirtualBox and SeamlessRDP, which provides seamless Windows support for rdesktop.

=> How to run Windows and Linux at one place? [ciol.com]

Mac ZFS Source Code Released

ZFS has an amazing feature set, and now it has been ported to the Mac

ZFS is a file system developed by Sun for its Solaris UNIX operating system. ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of filesystems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.

Apple has ported ZFS from OpenSolaris to the Mac OS X platform. You can download the ZFS beta version here (via /.).

Project Indiana is working towards creating a binary distribution of an operating system built out of the OpenSolaris source code. The distribution is a point of integration for several current projects on OpenSolaris.org, including those to make the installation experience easier, to modernize the look and feel of OpenSolaris on the desktop, and to introduce a network-based package management system into Solaris.

The resulting distribution is a live-CD install image that may be freely redistributed by anyone. It will also allow developers to create their own customized distributions based on Project Indiana.

Now the first preview version is available: an x86-based LiveCD install image containing some new and emerging OpenSolaris technologies. These may result in instabilities that lead to system panics or data corruption.

Among the features contained in this release are:

  • Single CD download, with LiveCD 'try before you install' capabilities
  • Caiman installer, with significantly improved installation experience
  • ZFS as the default filesystem
  • Image packaging system, with capabilities to pull packages from network repositories
  • GNU utilities in the default $PATH
  • bash as the default shell
  • GNOME 2.20 desktop environment

Download Project Indiana OpenSolaris Developer Preview ISO

=> Visit the official site to grab ISO file

This is an interesting filesystem comparison. If you are looking to build cheap storage for personal use, the file system decision is quite important:

This is my attempt to cut through the hype and uncertainty to find a storage subsystem that works. I compared XFS and EXT4 under Linux with ZFS under OpenSolaris. Aside from the different kernels and filesystems, I tested internal and external journal devices and software and hardware RAIDs. Software RAIDs are "raid-10 near2" with 6 disks on Linux. On Solaris the zpool is created with three mirrors of two disks each. Hardware RAIDs use the Areca's RAID-10 for both Linux and Solaris. Drive caches are disabled throughout, but the battery-backed cache on the controller is enabled when using hardware RAID.

=> ZFS, XFS, and EXT4 filesystems compared