What Is /dev/shm And Its Practical Usage

March 14, 2006 · last updated June 29, 2012

/dev/shm is an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory region, which other processes (if permitted) can access. Because no disk I/O is involved, this can speed things up considerably on Linux.

shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device.

If you type the mount command you will see /dev/shm listed as a tmpfs file system. It is therefore a file system that keeps all of its files in virtual memory. Everything in tmpfs is temporary in the sense that no files are ever created on your hard drive, and if you unmount a tmpfs instance, everything stored in it is lost. By default, almost all Linux distros are configured to use /dev/shm:
$ df -h
Sample outputs:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/wks01-root
                      444G   70G  351G  17% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                  3.9G  332K  3.9G   1% /dev
tmpfs                 3.9G  168K  3.9G   1% /dev/shm
/dev/sda1             228M   32M  184M  15% /boot
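A quick way to see tmpfs behaviour for yourself, without root: tmpfs allocates pages lazily, so the "Used" column only grows as files are actually written (the file name /dev/shm/demo below is arbitrary):

```shell
# Note the "Used" column before writing anything
df -h /dev/shm
# Write a 10 MB file into /dev/shm; the pages come straight from RAM
dd if=/dev/zero of=/dev/shm/demo bs=1M count=10 2>/dev/null
df -h /dev/shm
# Removing the file returns the pages to the system immediately
rm /dev/shm/demo
df -h /dev/shm
```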

So where can I use /dev/shm?

You can use /dev/shm to improve the performance of application software such as Oracle, or overall Linux system performance. On a heavily loaded system it can make a significant difference. For example, VMware Workstation/Server can be tuned this way to improve your Linux host's performance (i.e. the performance of your virtual machines).

In this example, remount /dev/shm with a size of 8G as follows:
# mount -o remount,size=8G /dev/shm
To be frank, if you have more than 2GB of RAM and run multiple virtual machines, this hack almost always improves performance. In the next example, you will create a tmpfs instance on /disk2/tmpfs that can allocate 5GB of RAM/swap across 5k inodes and is accessible only by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
Where,

  • -o opt1,opt2 : Pass various options with a -o flag followed by a comma-separated string of options. In these examples, I used the following options:
    • remount : Attempt to remount an already-mounted filesystem. In this example, remount /dev/shm and increase its size.
    • size=8G or size=5G : Override the default maximum size of the filesystem. The size is given in bytes and rounded up to entire pages; the default is half of your physical RAM. The size parameter also accepts a % suffix to limit the instance to that percentage of physical RAM (the default, when neither size nor nr_blocks is specified, is size=50%). In these examples it is set to 8GiB or 5GiB. The tmpfs sizing options (size, nr_blocks, and nr_inodes) accept a k, m, or g suffix for Ki, Mi, Gi (binary kilo, mega, and giga) and can be changed on remount.
    • nr_inodes=5k : The maximum number of inodes for this instance. The default is half the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is lower.
    • mode=700 : Set initial permissions of the root directory.
    • tmpfs : Tmpfs is a file system which keeps all files in virtual memory.
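To check which of these options are active on an existing mount, no root access is needed; the kernel's mount table can be read directly (a minimal sketch):

```shell
# Show the active mount options for /dev/shm from the kernel's mount table
grep /dev/shm /proc/mounts
```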

How do I restrict or modify size of /dev/shm permanently?

You need to add or modify the entry in the /etc/fstab file so that the setting survives a reboot. Edit /etc/fstab as the root user:
# vi /etc/fstab
Append or modify the /dev/shm entry as follows to set the size to 8G:

none      /dev/shm        tmpfs   defaults,size=8G        0 0

Save and close the file. For the change to take effect immediately, remount /dev/shm:
# mount -o remount /dev/shm
Verify the new size:
# df -h


{ 47 comments }

1 Vivek Poduval March 4, 2008 at 3:01 pm

Very good description.

I want to know the concept of /dev/shm in FreeBSD.

2 Scott Marlowe May 5, 2008 at 3:09 pm

I cannot see where allocating an 8G ram disk on a machine with 8G of physical ram is a good thing. The OS will start swapping out the ram disk pages long before you fill it up, slowing the whole system to a crawl.

3 nixCraft May 5, 2008 at 4:13 pm

If you have 32G, add 8G for the virtual machines. I'm not asking you to allocate all your RAM.

HTH

4 Peter Teoh July 6, 2008 at 12:44 am

I don’t quite understand why u need the tmpfs thing – if u have that much memory, just write direct to memory; why set up swapspace and then write to swapspace via tmpfs?

Similar question for /dev/shm?

Is it because of the 4GB limit for memory – in the 32-bit OS scenario?

5 DeveloperChris July 30, 2008 at 11:51 pm

Peter, you can’t always write direct to memory. For example, I have a CGI program I want to load very fast; placing it in tmpfs will permit that. If I simply place the file on the disk and ‘hope’ the disk cache caches it, then it may be swapped out during times of high disk activity. By placing it in shared memory it will stay put. Until the next reboot, of course.
DC

6 pavan September 22, 2008 at 7:03 am

I am new to Linux.
After reading the discussion of this article I am really confused.

Firstly, what are tmpfs and swap space? Where are they stored?
Secondly, if there is any difference, how will it affect the performance of a system?
Thirdly, is there any method to allocate tmpfs or swap space for a particular program?

If my English is bad, please forgive me.

7 susan October 8, 2008 at 8:31 pm

/dev/shm is a good place for separate programs to communicate with each other. For example, one can implement a simple resource lock as a file in shared memory. It is fast because there is no disk access. It also frees you up from having to implement shared memory blocks yourself. You get (almost) all of the advantages of a shared memory block using stdio functions. It also means that simple bash scripts can use shared memory to communicate with each other.

As an example, I have an external piece of hardware which only one program at a time can communicate with. I want to log data from this device as often as possible, but also run an “assay” – i.e. tell it to do a sequence of tasks. I have implemented this in a client/server mode where the “server” does all of the communication and the clients request actions. The data logger requests readings as often as it can, and the assay controller requests the device to change states at the appropriate times. In order to avoid conflicts, each client program must check a “lock file” in /dev/shm to see if the server is available. If it is, then the client writes its own process ID ($$ in bash) to the lock, runs its request, and then releases the lock (don’t forget that step!) by writing 0 back to the lock so other clients know that the server is available again.

All of this _could_ be done with a “normal” file, but would be a bit slower.
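susan's lock-file scheme can be sketched in a few lines of shell. The file name mydevice.lock is made up for the example, and note that the read-then-write sequence here is not atomic, just as in the scheme described above; for real use, mkdir or flock(1) gives an atomic claim:

```shell
#!/bin/sh
# Hypothetical lock file kept in shared memory; no disk access involved
LOCK=/dev/shm/mydevice.lock

# Initialise the lock to 0 ("server free") if it does not exist yet
[ -f "$LOCK" ] || echo 0 > "$LOCK"

# Client side: wait until the server is free, then claim it with our PID
while [ "$(cat "$LOCK")" != "0" ]; do
    sleep 0.1
done
echo $$ > "$LOCK"

# ... talk to the server here ...

# Release the lock so other clients see the server is available again
echo 0 > "$LOCK"
```

As susan says, the same thing works with a regular file; keeping it in /dev/shm just means the reads and writes never touch the disk.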

8 Abdul Rauf April 9, 2011 at 12:27 pm

Dear Susan,
Good explanation. One thing more I want to add: when we install Oracle on a Linux OS, if we don’t have enough space in /dev/shm then we cannot increase the size of the SGA. First we have to increase the size of /dev/shm; then we can alter the memory_max_target parameter. Thanks, hope it was useful for everyone.

9 Noob October 13, 2008 at 2:12 pm

I’m curious to know why when I executed “mount -o remount,size=8G /dev/shm” to increase the swap, the disk size of other partitions remain the same? Where is the additional 6 GB coming from if I already have 2G of /dev/shm ?

10 JD June 11, 2013 at 5:42 pm

I have this same question. Where does the space come from ?

11 dreamingkat November 16, 2008 at 2:12 pm

Something I see done that’s not commented on here is using this setup for single programs that use a lot of I/O during normal operation – like qmail (as long as you’re willing to lose mail in the queue if the server crashes or needs to reboot).

12 Grrry January 30, 2009 at 2:06 pm

Not sure why you think making a ramdisk for swapping/paging is a good idea. Any page fault is still overhead. If you’re having heavy swapping or page faulting but have immense amounts of memory, the issue may be in the kernel parameters, not in reducing available memory to make a ramdisk.

13 Solaris July 12, 2009 at 12:08 pm

Oracle uses a large /dev/shm to improve communication between processes.
If your /dev/shm is not large enough, Oracle will not install; worse, it will most likely crash.

14 Phil September 27, 2009 at 7:07 pm

So is /dev/shm effectively a ramdisk, but one using virtual memory?

15 DeveloperChris September 28, 2009 at 6:53 am

Yes and no.
Yes, /dev/shm is a ramdisk.
No, it doesn’t use virtual memory unless you run out of RAM; then the OS may swap it to disk (virtual RAM). I don’t know enough to say for sure whether this actually happens, but it would be most likely.
If you run out of space on /dev/shm it will report an error like any other type of disk.

16 Jenny Olsson September 28, 2009 at 4:58 pm

It’s not quite the same as a ramdisk. A ramdisk is guaranteed to be in RAM; tmpfs may be swapped out. Depending on what else is using RAM at the time, it may or may not be faster than writing to a disk; it generally will be, of course. Personally I run with a very small /dev/shm, as nothing I use uses it, as far as I can see.

The equivalent for the BSDs is mfs, but there’s no standard of having it in any particular location – as far as I know anyway.

17 Shinoj Mathew October 14, 2009 at 5:07 am

Dear friend,
It’s really helpful, and I could solve a giant Oracle configuration problem easily with the above description.

thank you..

18 Massive November 8, 2009 at 10:16 pm

I think it’s worth mentioning mounting /dev/shm with the nosuid,noexec options.

19 Catalin May 24, 2010 at 12:59 pm

Hello,
Are these permissions for the ramdisk correct?
drwxrwxrwt 2 root root 40 2010-05-24 15:53 ramdisk
Is this a correct mount?
# mount -t tmpfs /dev/shm /media/ramdisk
# mount -o remount,size=1756M /dev/shm
Why do you use this?
nr_inodes=5k,mode=700
Thank you!

20 Charles August 20, 2010 at 1:20 am

I’m not finding this very helpful.

Like almost every manpage I’ve ever seen, this documentation assumes that you already know the stuff, and need a reminder of how to use it.

I’d like to use this–my system has 3 cpu’s and 4G memory, but runs erratically–freezing for anything from a fraction of a second to over a minute–particularly bothersome when viewing video from the hard drive, but it happens even when all I am doing is browsing.

While I understand the concept, (it is, after all an old idea,) this doesn’t help me apply it to my problem–if indeed it would help.

I’m still new to doing system admin on a linux box, though I have done many years of doing so on IBM mini’s and mainframes.

I’m old enough that my memory isn’t reliable, and my hands have tremors–making the use of the command line a pain in the (&O.

I know that I have a lot of study yet to do, but like all new systems, the amount of documentation is daunting, and much of it doesn’t help very much because it assumes a great many things–like all too much documentation written by programmers, it ignores the problems of those learning how to use things, and tends to be minimalistic and cryptic until you reach a threshold of knowledge (which I have yet to acquire.)

While reminder documentation is extremely useful, more detailed descriptions of what programs do and how they do it seem to me to be needed.

In general I’ve found that things which are used daily are over-documented, and things used rarely are massively under-documented. And programmers writing their own documentation are extremely likely to write only the most obscure parts of the tool instructions…because they are already intimately familiar with the main functions.

This is the kind of thing that leads to the extremely useless error messages generated by Windoze (Or an old database program I used once that gave only 4-digit error codes, all of which seemed to map to “call tech support,” bad enough, but the company no longer existed….

Am I alone in this perception?

21 Thou_Shalt_not_Charles September 9, 2010 at 4:54 pm

Charles wrote:
“Am I alone in this perception?”

Yes you are!

22 Bob May 5, 2012 at 12:21 pm

Hardly. Linux is a very unprofessional, badly written, poorly documented “OS”.

Its only allure is the fact that it’s free. If it weren’t free, nobody would use it. Unix was an interesting concept, but not good in practice. IBM’s DOS was able to destroy the Unix market almost immediately.

Linux is just a re-do of Unix – but worse. The original Linux source code was written by a novice: and as usual, everything flows “from the top”. That’s why we now have such horrible documentation, poorly-styled code without comments and hare-brained concepts such as /dev/shm. Everything is a Band-Aid fix for a gaping wound. There’s no consistency, no style, no rules. While some people may think these are features, the fact is these are what keeps Linux from winning over the bigger market.

The fact some people act like Linux is a product of nature shows how ignorant they are. It’s free. That’s why people use it. If Linus started charging for downloads it would lead to the end of Linux.

23 XSeSiV May 15, 2012 at 10:14 am

This is the most uneducated post I have read on the internet in all my life! My god, are you working for M$, or is this Mr. Gates himself?

ROFL!

24 robet loxley August 26, 2012 at 8:05 am

XSeSiV, you are right; if this guy is not Gates himself, he’s completely ignorant.

25 Sean October 22, 2012 at 2:13 pm

Fact: Linux rules the world of IT. The person responsible for the evolution of Linux basically won the “Nobel Prize” of technology. The poster Bob is probably a point-and-click monkey who needs a vendor to tell him how to do his job.

26 Henry July 27, 2013 at 12:19 am

“Linux is a very unprofessional, badly written, poorly documented “OS”.”
In order to make the above observation you have to know some of the major enterprise-level UNIXes from the ’90s or even early 2000s (AIX, Solaris, HP-UX).
I worked on all of them, mostly on HP-UX. Linux would have to be born again to come even 2% close to the professionalism of HP-UX.
I switched to Linux 4 years ago; I’m not complaining, just sharing the facts.
The reason Linux became what it is today is based on two facts: it was free, and it ran on x86.
It also has to do with the younger generation being more computer savvy, while older folks, shall we say, were more like IBM types, where you expected things to work, otherwise you did not put it out on the market.
Younger guys don’t mind spending hours and hours working with poor code, trying to make it work (remember, they have been enticed by the fact that it was free).
But that is the way the world is. Progress. Would Google be what it is today if it had to spend money on AIX and IBM hardware?
So my advice to older folks like me is to embrace change. If you don’t, change won’t stop; only you will be miserable.
One thing I noticed working with Linux: the expectations of the customers also match the level of professionalism of Linux. (I hope people get the last sentence.)

27 steved129 February 9, 2014 at 3:17 pm

completely wrong on so many levels.

Linux is NOT an operating system. It is a kernel, and in fact it is an industry-standard kernel used in a vast array of operating systems today: everything from client and server OSes all the way to mobile phones and even embedded devices.

The Linux kernel is well documented, and is inherently credible.

The only reason we have some of today’s technology is the speed at which technology has been evolving; left behind are the days of needing to elaborately document every function and command-line flag.

What the new Linux ecosystem is, is a modular cornucopia of millions upon millions of lines of code based on people’s hard work. We now have publicly accessible source code repositories, where things like crowdsourcing have increased the speed and rate of the evolution of technology exponentially. And to be honest, to put it simply: don’t fix something that isn’t broken.

This, coupled with communication and development tools which, again, are leaps and bounds ahead of the tools of yesteryear, created a totally new global platform for a codependent software anatomy.

Instead of having one really nice OS with a few good tools, we have 25 good OSes all working together to develop cross-platform tools, which in turn scale the ENTIRE industry.

get it now?

28 Dave October 18, 2014 at 6:05 am

lol, that’s why Linux systems dominate over 60% of the global market share for servers

29 RJB October 24, 2010 at 11:27 pm

Charles — I don’t think you’re alone in that perception. However, with enough searching it seems I eventually find *somebody’s* site that gives wonderful insight for exactly what I need to know. The main thing is I don’t expect every blogger to write a follow-up to fill me in with what is assumed I know…I go and find out. Yes it takes time, but I think that’s the place everybody learning something new has to start :(… Unless you personally know an experienced Linux guru who volunteers time to mentor you.

I think as far as /dev/shm goes, the beauty of this being mounted like a normal file system is that you can write programs (and/or scripts) to use it like you would any other file system. If you have enough RAM available, it will be much more efficient than reading/writing a hard disk. If you’re not writing programs, then /dev/shm is of little consequence to you and does not do anything for you (unless you use a program which uses this). It really should be as if it does not even exist to the average user…but for a sysadmin, it’s good to be aware of it in the event you have a program (like Oracle) that is having strange problems. Then your awareness will lead you in the right direction when researching a solution.

As far as comments about using up RAM and slowing down the whole system…well you simply need to make good decisions about using it when it makes sense to use it, and not making your entire program use it. For example, you have to make the same decision about how much data you load into RAM at a time whether using /dev/shm or just by using a big buffer and passing a pointer between applications…lots of data uses lots of RAM and doesn’t matter whether it’s shm or somewhere else.

I’m speaking out my arse here, but I think mounting 8G shm on 8G RAM is perfectly fine as it’s referencing paged memory like any other program. There is a danger that if your program uses a significant amount, you can starve other programs in RAM by forcing them to swap…but if you’re in that situation, then the active program using SHM is going to use swap if you limit SHM to say, 2G or something…so either way you lose. In the end, don’t write programs that want to load 8G into RAM during runtime. Only load things into RAM if you need to use them right away. When you’re done using it, then free it.

30 xuedi November 22, 2010 at 7:17 am

Hello there,
no Gentoo users here? So just my addition: shm works like a charm if you do -j3 or more when compiling with gcc; Gentoo takes full advantage of it and it speeds things up greatly, while in daily use it seems to hardly ever be used.
But I really would not suggest using more than half of your memory for shm. Even though it’s used by the kernel dynamically as needed, if something is fucked up in any running service and your shm fills up main memory, everything goes into swap and kills the system via an I/O flood…

Greetings from Beijing
xuedi

31 Richard Lynch December 9, 2010 at 7:13 pm

/dev/shm memory is not allocated until it is needed, as it would be silly to do so otherwise.

It is fairly typical to make the size == the RAM, and just pray you (almost) never actually reach that and start swapping like crazy.

/dev/shm is essentially a ramdisk, only pretending to be a file, and it might get swapped to disk when the going gets tough. You can survive a page fault or two now and then. You can’t survive constant thrashing of your swap.

Because it pretends to be a file and lives in RAM (mostly), it makes an excellent vehicle for program communication with locks etc. as a “shared memory” substitute. It’s easy to use; it lives in RAM (mostly); it is fault-tolerant (hitting swap if it has to); it’s fast.

32 Juan Miller February 1, 2012 at 3:43 am

My RAM died last night. Be careful if you do this stuff.

33 Henry July 27, 2013 at 12:24 am

I’m sure horns are worth something.

34 Ternia Wilite February 11, 2012 at 7:48 am

Is there a standard or something for using /dev/shm? In other words, do any programs use /dev/shm by default? In other other words ;) can I harmlessly unmount and delete /dev/shm on a working computer?

35 Rob March 14, 2012 at 11:34 am

I found malware stored in /dev/shm that looked like an irc bot. I too am considering unmounting and removing this.

36 xuedi March 14, 2012 at 4:18 pm

@Rob: If you have a rootkit on your system it does not matter whether it is in SHM or in the file system; your system is lost, better make backups ^^ SHM is just a file system like any other.

37 Giuseppe Lottini April 15, 2012 at 4:12 pm

It was very simple, concise and useful. Thanks a lot

38 navendu54 May 31, 2012 at 7:42 am

Charles / Bob :- Why troll? If you do not have anything to contribute, move along.

And yes, this was really useful to me. I could not find any good source on how to get rid of the following errors on my production Oracle, which runs on a 64-bit CentOS system:

ORA-32004: obsolete and/or deprecated parameter(s) specified
ORA-00845: MEMORY_TARGET not supported on this system

I increased /dev/shm from the default 1G to 8G and now I do not get any errors.

39 Lucian June 29, 2012 at 12:29 pm

Thank you for this post. Very concise and very useful!
Thank you for your comments! These also helped me to better understand this concept!
Charles, I understand your frustration. There are many situations where I have to search a lot to find something. But like RJB said, eventually you will find what you need. But you have to want it! Really want it!!

40 Ron Stubbs July 7, 2012 at 3:23 pm

“tmpfs : Tmpfs is a file system which keeps all files in virtual memory.”

This should read:
tmpfs : Tmpfs is a file system which keeps all files in physical memory.

Swap is an example of virtual memory.

41 GOSteen August 7, 2012 at 3:18 pm

‘virtual memory’ is correct. In this case, it is referring to the virtual addressing structure used within the kernel which, through memory management, accounts for all memory, both physical and otherwise. tmpfs can be swapped out as well so it would be unfair/unwise to state that it keeps it in ‘physical’ memory when, in fact, the data could exist within a swapfile or swap partition.

42 Aron October 1, 2012 at 6:05 am

Can I create shm partition on LVM?

43 Guest April 6, 2013 at 11:20 am

Ramdisk/tmpfs is kind of a last resort. It’s almost never needed. The Linux kernel uses all free RAM for the file cache by default, so all frequently accessed files remain cached in RAM. Short-lived temporary files may stay in cache and never hit the disk at all. The Linux kernel is smart. :)

There’re just a few cases when ramdisk/tmpfs is much faster than a real filesystem (and most of them can be reported as a bug). Usually there’re better alternatives.

For example, to reduce the number of disk write attempts you can increase the vm.dirty_* sysctls (http://www.kernel.org/doc/Documentation/sysctl/vm.txt). If you’re trying to reduce the number of HDD spinups on a laptop, then vm.laptop_mode and the laptop-mode-tools package give you the best solution. If your software uses a lot of slow fsyncs that you need to avoid, then the `eatmydata` wrapper is just for you.

Tmpfs takes your RAM from other apps even when not used, it’s often not easy to keep it in sync with your local data, and it’s MUCH slower than a real filesystem when it’s swapped out. The only way to make sure that you need it is to benchmark it; not with some general benchmarking tool, but with your particular test case.
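The vm.dirty_* tunables mentioned above can be inspected without changing anything (writing to them requires root):

```shell
# Current writeback thresholds, as percentages of RAM
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio
# How long dirty data may sit in memory before writeback (centiseconds)
cat /proc/sys/vm/dirty_expire_centisecs
```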

44 BigDaddy68 October 9, 2013 at 10:47 pm

This was a good post by “Guest”. FWIW, in the Solaris world Sun used to make /tmp a tmpfs filesystem. This is because ‘cc’ (and other compilers) were multi-pass, exec()’ing cpp, cc1, cc2 etc., and communicated via smallish intermediate files in /tmp. So having /tmp be free VM pages (free RAM under the covers) meant that compiles could go faster; a nice feature for developers indeed. Many used to call all the tmpfs (/tmp) RAM the ‘page cache’, since it was all the free memory used to start new processes/threads and to cache files. This sounds like the same thing Linux does (to me anyway).

We used to use tmpfs (/tmp) carefully on large-RAM systems, well above its normally required RAM, for DB workloads, and we used to drop the DB TEMP files (tempdb, tablespace datafiles et al.) into /tmp, because using the fast RAM could assist in N-way join processing, which writes/reads DB TEMP files heavily and successively for large queries doing joins. DB TEMP files (in general) were of fixed size and the DB always wiped them clean on a db restart, so if you cp’d them out there (pre-created them) before your DB started, it really helped query performance; substantially in many cases.

So since this is all about Linux, I am not sure if I’d use Linux /dev/shm for this, or a ramdisk, or if it would even matter.

Does anyone know if it matters? I.e., does a ramdisk handle large files better than /dev/shm, or is it ultimately the same at the end of the day?

If I’m not mistaken (at least for Solaris), I thought I understood that tmpfs used the seg_vn segment driver to handle file/vnode VM mappings under the covers… I am wondering if Linux does this too? Then I guess one could understand why the comment about smaller files prevails for /dev/shm under Linux (maybe for the same reason?)

Still, though, writing/reading pure VM (RAM) is way faster than doing physical I/O to anything by a longshot, even if you have to do it in shot-glasses and not beer mugs :)

In Linux, is there the ability to use a more optimized segment driver for large file mappings for large shmem segments, which could also have a name in a filesystem like ext4 or XFS, that might run better or more optimally?

Sorry, I’m a shitty typist.
BigDaddy68

45 Jalal Hajigholamali July 31, 2013 at 7:22 am

Hi,

Thanks a lot, very useful article

46 Ojp August 2, 2013 at 4:42 pm

Hi everyone,
The most useful case for this is when your application has lots of small files. For example: sessions on disk, temporary data files, log files, etc. In my case, I saw a huge change in performance with an application that keeps its sessions on disk.

47 Sascha February 8, 2014 at 10:00 pm

Interesting..

I already use SHM on a Linux server to cache writes during save periods. This has saved me 2/3 of the time the save needed, since the server would suspend all netstates and restart them after the save (yes, I’m talking about a game). After the save is done, the data is moved by another thread to the main disk, but the netstates can restart freely.

This is an important thing to me; it saves game-freeze time by saving all the necessary data in a temporary folder that is way faster than the disk, and after the save the data can be moved… Yes, I could fork() the process and copy all the data there, but to avoid certain occurrences the tmpfs way is cleaner and faster (from 24s to 9s is a lot of gain to me).
