What Is /dev/shm And Its Practical Usage

/dev/shm is an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory region, which other processes (if permitted) can access. On Linux, this can speed things up considerably.

shm / shmfs is also known as tmpfs, a common name for a temporary file storage facility on many Unix-like operating systems. It appears as a mounted file system, but one that uses virtual memory instead of a persistent storage device.

If you type the mount command you will see /dev/shm listed as a tmpfs file system. It is, therefore, a file system that keeps all of its files in virtual memory. Everything in tmpfs is temporary in the sense that no files are created on your hard drive. If you unmount a tmpfs instance, everything stored in it is lost. By default, almost all Linux distros are configured to use /dev/shm:
$ df -h
Sample outputs:

Filesystem            Size  Used Avail Use% Mounted on
                      444G   70G  351G  17% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                  3.9G  332K  3.9G   1% /dev
tmpfs                 3.9G  168K  3.9G   1% /dev/shm
/dev/sda1             228M   32M  184M  15% /boot

So, where can you use /dev/shm?

You can use /dev/shm to improve the performance of application software such as Oracle, or overall Linux system performance. On a heavily loaded system it can make a significant difference. For example, VMware Workstation/Server can be tuned to improve your Linux host's performance (i.e. the performance of your virtual machines).

In this example, remount /dev/shm with a size of 8G as follows:
# mount -o remount,size=8G /dev/shm
Frankly, if you have more than 2GB of RAM and run multiple virtual machines, this hack usually improves performance. The following example creates a tmpfs instance on /disk2/tmpfs that can allocate up to 5GB of RAM/swap across 5k inodes and is accessible only by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs

  • -o opt1,opt2 : Pass various options with a -o flag followed by a comma-separated string of options. In this example, I used the following options:
    • remount : Attempt to remount an already-mounted filesystem. In this example, remount the system and increase its size.
    • size=8G or size=5G : Override the default maximum size of the /dev/shm filesystem. The size is given in bytes, rounded up to entire pages; the default is half of physical memory. The size parameter also accepts a % suffix to limit this tmpfs instance to that percentage of your physical RAM: the default, when neither size nor nr_blocks is specified, is size=50%. In this example it is set to 8GiB or 5GiB. The tmpfs mount options for sizing (size, nr_blocks, and nr_inodes) accept a suffix k, m or g for Ki, Mi, Gi (binary kilo, mega and giga) and can be changed on remount.
    • nr_inodes=5k : The maximum number of inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is the lower.
    • mode=700 : Set initial permissions of the root directory.
    • tmpfs : Tmpfs is a file system which keeps all files in virtual memory.
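
As a quick, hedged illustration of the idea (the directory and file names here are made up for the demo), you can treat the mounted tmpfs like any other file system; the writes land in RAM, not on disk:

```shell
# Demo only: use /dev/shm as a RAM-backed scratch area.
# Assumes /dev/shm is mounted, which is the default on most distros.
SCRATCH=$(mktemp -d /dev/shm/demo.XXXXXX)   # private scratch directory in RAM
echo "hello from tmpfs" > "$SCRATCH/note"   # this write never touches the disk
cat "$SCRATCH/note"                         # read it back like a normal file
df -h "$SCRATCH" | tail -1                  # shows the tmpfs instance backing it
rm -rf "$SCRATCH"                           # gone for good; nothing left on disk
```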

How do I restrict or modify size of /dev/shm permanently?

You need to add or modify the entry in the /etc/fstab file so that the setting survives a reboot. Edit /etc/fstab as the root user:
# vi /etc/fstab
Append or modify the /dev/shm entry as follows to set the size to 8G:

none      /dev/shm        tmpfs   defaults,size=8G        0 0

Save and close the file. For the changes to take effect immediately remount /dev/shm:
# mount -o remount /dev/shm
Verify the same:
# df -h
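
If you want to check the size from a script rather than by eyeballing df -h, a small sketch (GNU df assumed; the value printed depends on your fstab entry):

```shell
# Print only the total size of the /dev/shm filesystem (GNU coreutils df).
df -h --output=size /dev/shm | tail -1
# Alternative using awk on the standard df output:
df -h /dev/shm | awk 'NR==2 {print $2}'
```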

58 comments
  • Vivek Poduval Mar 4, 2008 @ 15:01

    Very good Description.

    I want to know the concept of /dev/shm in FreeBSD.

    • mi Jan 29, 2015 @ 17:59

      Read up the md(4) manual page. To get the equivalent of Linux tmpfs, use the “swap” kind of md.

      (To get the equivalent of Linux “ramdisk”, use the “malloc” kind instead.)

  • Scott Marlowe May 5, 2008 @ 15:09

    I cannot see where allocating an 8G ram disk on a machine with 8G of physical ram is a good thing. The OS will start swapping out the ram disk pages long before you fill it up, slowing the whole system to a crawl.

    • Thiago Oct 6, 2016 @ 15:13

      I have a 32G PC, with Xenial, look this:

      tmpfs 16G 241M 16G 2% /dev/shm
      tmpfs 16G 0 16G 0% /sys/fs/cgroup

      Soon as I BOOT my machine, 100% of RAM is being used!! WTF is this? LOLOL

  • ๐Ÿง nixCraft May 5, 2008 @ 16:13

    If you have 32G, add 8G for the virtual machine. I'm not asking you to allocate all of your RAM.


  • Peter Teoh Jul 6, 2008 @ 0:44

    I don’t quite understand why you need the tmpfs thing – if you have that much memory, just write directly to memory; why set up swap space and then write to swap space via tmpfs?

    Similar question for /dev/shm?

    Is it because of the 4GB memory limit in the 32-bit OS scenario?

    • mi Jan 29, 2015 @ 18:06

      Data written to a regular filesystem — one, that must survive a reboot — must be flushed to the actual storage eventually (and it better be soon).

      A number of use-cases don’t need that sort of persistence-guarantee — for example, the PID-files lose meaning after a reboot anyway.

      For another example, Amazon-hosted virtual machines get access to “ephemeral” storage — fast (SSD-backed) “disks”, which are wiped-out on every reboot. The best way to use them is by making them swap and mounting a filesystem (such as /tmp) over most of this new capacity.

  • DeveloperChris Jul 30, 2008 @ 23:51

    Peter, you can’t always write directly to memory. For example, I have a CGI program I want to load very fast; placing it in tmpfs will permit that. If I simply place the file on the disk and ‘hope’ the disk cache caches it, it may be swapped out during times of high disk activity. By placing it in shared memory it will stay put – until, of course, the next reboot.

  • pavan Sep 22, 2008 @ 7:03

    I am new to Linux. After reading the discussion of this article I am really confused.

    Firstly, what are tmpfs and swap space? Where are they stored?
    Secondly, if they differ, how will this affect the performance of a system?
    Thirdly, is there any method to allocate tmpfs or swap space for a particular program?

    If my English is bad, please forgive me.

  • susan Oct 8, 2008 @ 20:31

    /dev/shm is a good place for separate programs to communicate with each other. For example, one can implement a simple resource lock as a file in shared memory. It is fast because there is no disk access. It also frees you up from having to implement shared memory blocks yourself. You get (almost) all of the advantages of a shared memory block using stdio functions. It also means that simple bash scripts can use shared memory to communicate with each other.

    As an example, I have an external piece of hardware which only one program at a time can communicate with. I want to log data from this device as often as possible, but also run an “assay” – i.e. tell it to do a sequence of tasks. I have implemented this in a client/server mode where the “server” does all of the communication and the clients request actions. The data logger requests readings as often as it can, and the assay controller requests the device to change states at the appropriate times. In order to avoid conflicts, each client program must check a “lock file” in /dev/shm to see if the server is available. If it is, the client writes its own process ID ($$ in bash) to the lock, runs its request, and then releases the lock (don’t forget that step!) by writing 0 back to the lock so other clients know that the server is available again.

    All of this _could_ be done with a “normal” file, but would be a bit slower.
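
Susan's lock-file pattern can be sketched in bash. The following is a minimal, hypothetical version (the lock file name is made up, and it releases the lock by deleting the file rather than writing 0 back, using the shell's noclobber option to make the test-and-create step atomic):

```shell
#!/bin/bash
# Sketch of a /dev/shm advisory lock, as in Susan's description.
LOCK=/dev/shm/device.lock

acquire() {
    # set -C (noclobber) makes the > redirection fail if the lock file
    # already exists, so "test and create" happens in one atomic step.
    until ( set -C; echo "$$" > "$LOCK" ) 2>/dev/null; do
        sleep 0.1   # another client holds the device; wait and retry
    done
}

release() {
    rm -f "$LOCK"   # don't forget this step!
}

acquire
echo "client $$ has the device"
release
```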

    • Abdul Rauf Apr 9, 2011 @ 12:27

      Dear Susan,
      Good explanation. One thing more I want to add: when we install Oracle on a Linux OS, if /dev/shm is not large enough we cannot increase the size of the SGA. First we have to increase the size of /dev/shm; only then can we alter the memory_max_target parameter. Thanks, hope it was useful for everyone.

  • Noob Oct 13, 2008 @ 14:12

    I’m curious to know why when I executed “mount -o remount,size=8G /dev/shm” to increase the swap, the disk size of other partitions remain the same? Where is the additional 6 GB coming from if I already have 2G of /dev/shm ?

    • JD Jun 11, 2013 @ 17:42

      I have this same question. Where does the space come from ?

      • cdp Jun 23, 2016 @ 22:47

        It’s not coming from disk space. You are grabbing available memory(RAM).

  • dreamingkat Nov 16, 2008 @ 14:12

    Something I see done that’s not commented on here is using this setup for single programs that use a lot of I/O during normal operation – like qmail (as long as you’re willing to lose mail in the queue if the server crashes or needs to reboot).

  • Grrry Jan 30, 2009 @ 14:06

    Not sure why you think making a ramdisk for swapping/paging is a good idea. Any page fault is still overhead. If you’re having heavy swapping or page faulting but have immense amounts of memory, the issue may be in the kernel parameters, not in reducing available memory to make a ramdisk.

  • Solaris Jul 12, 2009 @ 12:08

    Oracle uses a large /dev/shm to improve communication between processes.
    If your /dev/shm is not large enough, Oracle will not install – or worse, will most likely

  • Phil Sep 27, 2009 @ 19:07

    So is /dev/shm effectively a ramdisk, but using virtual memory?

  • DeveloperChris Sep 28, 2009 @ 6:53

    Yes and no.
    Yes, /dev/shm is a ramdisk.
    No, it doesn’t use virtual memory unless you run out of RAM, in which case the OS may swap it to disk (virtual RAM). I don’t know enough to say for sure whether this actually happens, but it would be most likely.
    If you run out of space on /dev/shm it will report an error like any other type of disk.

  • Jenny Olsson Sep 28, 2009 @ 16:58

    It’s not quite the same as a ramdisk. A ramdisk is guaranteed to be in RAM; tmpfs may be swapped out. Depending what else is using RAM at the time, it may or may not be faster than writing to a disk; it generally will be, of course. Personally I run with a very small /dev/shm as nothing I use uses it, as far as I can see.

    The equivalent for the BSDs is mfs, but there’s no standard of having it in any particular location – as far as I know anyway.

  • Shinoj Mathew Oct 14, 2009 @ 5:07

    Dear friend,
    It’s really helpful and I could easily solve a giant Oracle configuration problem with the above description.

    thank you..

  • Massive Nov 8, 2009 @ 22:16

    I think it’s worth mentioning mounting /dev/shm with the nosuid,noexec options.

  • Catalin May 24, 2010 @ 12:59

    Are these permissions of the ramdisk correct?
    drwxrwxrwt 2 root root 40 2010-05-24 15:53 ramdisk
    Is this a correct mount?
    # mount -t tmpfs /dev/shm /media/ramdisk
    # mount -o remount,size=1756M /dev/shm
    Why do you use this?
    Thank you !

  • Charles Aug 20, 2010 @ 1:20

    I’m not finding this very helpful.

    Like almost every manpage I’ve ever seen, this documentation assumes that you already know the stuff, and need a reminder of how to use it.

    I’d like to use this–my system has 3 cpu’s and 4G memory, but runs erratically–freezing for anything from a fraction of a second to over a minute–particularly bothersome when viewing video from the hard drive, but it happens even when all I am doing is browsing.

    While I understand the concept, (it is, after all an old idea,) this doesn’t help me apply it to my problem–if indeed it would help.

    I’m still new to doing system admin on a linux box, though I have done many years of doing so on IBM mini’s and mainframes.

    I’m old enough that my memory isn’t reliable, and my hands have tremors–making the use of the command line a pain in the (&O.

    I know that I have a lot of study yet to do, but like all new systems, the amount of documentation is daunting, and much of it doesn’t help very much because it assumes a great many things–like all too much documentation written by programmers, it ignores the problems of those learning how to use things, and tends to be minimalistic and cryptic until you reach a threshold of knowledge (which I have yet to acquire.)

    While reminder documentation is extremely useful, it seems to me that more detailed descriptions of what programs do and how they do it are needed.

    In general I’ve found that things which are used daily are over-documented, and things used rarely are massively under-documented. And programmers writing their own documentation are extremely likely to write only the most obscure parts of the tool instructions…because they are already intimately familiar with the main functions.

    This is the kind of thing that leads to the extremely useless error messages generated by Windoze (Or an old database program I used once that gave only 4-digit error codes, all of which seemed to map to “call tech support,” bad enough, but the company no longer existed….

    Am I alone in this perception?

  • Thou_Shalt_not_Charles Sep 9, 2010 @ 16:54

    Charles wrote:
    “Am I alone in this perception?”

    Yes you are!

    • Bob May 5, 2012 @ 12:21

      Hardly. Linux is a very unprofessional, badly written, poorly documented “OS”.

      Its only allure is the fact that it’s free. If it weren’t free, nobody would use it. Unix was an interesting concept, but not good in practice. IBM’s DOS was able to destroy the Unix market almost immediately.

      Linux is just a re-do of Unix – but worse. The original Linux source code was written by a novice: and as usual, everything flows “from the top”. That’s why we now have such horrible documentation, poorly-styled code without comments and hare-brained concepts such as /dev/shm. Everything is a Band-Aid fix for a gaping wound. There’s no consistency, no style, no rules. While some people may think these are features, the fact is these are what keeps Linux from winning over the bigger market.

      The fact some people act like Linux is a product of nature shows how ignorant they are. It’s free. That’s why people use it. If Linus started charging for downloads it would lead to the end of Linux.

      • XSeSiV May 15, 2012 @ 10:14

        This is the most uneducated post I have read on the internet in all my life! My god, are you working for M$ or is this mr gates himself.


        • robet loxley Aug 26, 2012 @ 8:05

          XSeSiV, you are right; if this guy is not Gates himself, he’s completely ignorant.

          • Sean Oct 22, 2012 @ 14:13

            Fact, Linux rules the world of IT. The person responsible for the evolution of Linux basically won the “Nobel Prize” for technology. The poster Bob is probably a point and click monkey who needs a Vendor to tell him how to do his job.

      • Henry Jul 27, 2013 @ 0:19

        “Linux is a very unprofessional, badly written, poorly documented ‘OS’.”
        In order to make the above observation you have to know some of the major Enterprise level UNIX’s from the 90’s or even early 2000’s (AIX, Solaris, HP-UX.)
        I worked on all of them, mostly on HP-UX. Linux would have to be born again to come even 2% close to the professionalism of HP-UX.
        I switched to Linux 4 years ago, I’m not complaining, just sharing the facts.
        The reason Linux became what it is today, is based on two facts; it was free, and it ran on x86.
        Also it has to do with younger generation being more computer savvy. While older folks, shall we say were more like IBM types. Where you expected things to work, otherwise you did not put it out in the market.
        Younger guys don’t mind spending hours and hours working with poor code, trying to make it work (remember they have been enticed by the fact it was free.)
        But, that is the way world is. Progress. Would Google be what it is today, if it had to spend money on AIX and IBM hardware?
        So my advice to older folks like me is to embrace change. If you don’t, change won’t stop, only you will be miserable.
        One thing I noticed on working with Linux, expectations of the customers also match the level of professionalism of Linux. (I hope people get the last sentence)

        • steved129 Feb 9, 2014 @ 15:17

          completely wrong on so many levels.

          linux is NOT an operating system. it is a Kernel. and in fact it is an industry standard kernel used is a vast array of operating systems used today. used in everything from client to server OS, all the way to mobile phones and even integrated devices.

          The linux kernel is well documented, and is inherently credible.

          With the speed at which technology has been evolving, left behind are the days of needing to elaborately document every function and command-line flag:

          What the new Linux ecosystem is, is a modular cornucopia of millions upon millions of code based on peoples hard work. We now have publicly accessible source code repositories, where things like crowd sourcing has increased the speed and rate of the evolution of technology exponentially, and to be honest, to put it simply, don’t fix something that isn’t broken.

          This, coupled with communications and development tools, which again, are leaps and bounds ahead of their time compared to the tools of yesteryear, created a totally new global platform for a codependent software anatomy

          instead of having one really nice OS with a few good tools, we have 25 good OS’s all working together to develop cross platform tools, which in turn scale the ENTIRE industry

          get it now?

          • Ravin Apr 9, 2015 @ 23:09

            That was a wonderful reply! You covered the topic mulch-dimensionally, concisely and comprehensively. I can’t help but think Bob would have asked “does it come in cornflower blue?”

            • Ravin Apr 9, 2015 @ 23:10

              Damned spell checkers! MULTI-DIMENSIONALLY, not MULCH! :-)

      • Dave Oct 18, 2014 @ 6:05

        lol thats why linux systems dominate over 60% of the global market share for servers

  • RJB Oct 24, 2010 @ 23:27

    Charles — I don’t think you’re alone in that perception. However, with enough searching it seems I eventually find *somebody’s* site that gives wonderful insight for exactly what I need to know. The main thing is I don’t expect every blogger to write a follow-up to fill me in with what is assumed I know…I go and find out. Yes it takes time, but I think that’s the place everybody learning something new has to start :(… Unless you personally know an experienced Linux guru who volunteers time to mentor you.

    I think as far as /dev/shm goes, the beauty of this being mounted like a normal file system is that you can write programs (and/or scripts) to use it like you would any other file system. If you have enough RAM available, it will be much more efficient than reading/writing a hard disk. If you’re not writing programs, then /dev/shm is of little consequence to you and does not do anything for you (unless you use a program which uses this). It really should be as if it does not even exist to the average user…but for a sysadmin, it’s good to be aware of it in the event you have a program (like Oracle) that is having strange problems. Then your awareness will lead you in the right direction when researching a solution.

    As far as comments about using up RAM and slowing down the whole system…well you simply need to make good decisions about using it when it makes sense to use it, and not making your entire program use it. For example, you have to make the same decision about how much data you load into RAM at a time whether using /dev/shm or just by using a big buffer and passing a pointer between applications…lots of data uses lots of RAM and doesn’t matter whether it’s shm or somewhere else.

    I’m speaking out my arse here, but I think mounting 8G shm on 8G RAM is perfectly fine as it’s referencing paged memory like any other program. There is a danger that if your program uses a significant amount, you can starve other programs in RAM by forcing them to swap…but if you’re in that situation, then the active program using SHM is going to use swap if you limit SHM to say, 2G or something…so either way you lose. In the end, don’t write programs that want to load 8G into RAM during runtime. Only load things into RAM if you need to use them right away. When you’re done using it, then free it.

  • xuedi Nov 22, 2010 @ 7:17

    Hello there,
    no Gentoo user here? So just my addition: shm works like a charm if you do -j3 or more when compiling with gcc; Gentoo takes full advantage of it and it speeds things up greatly, while in daily use it seems to hardly ever be used.
    But I really would not suggest using more than half of the memory for shm. Even though it is used by the kernel dynamically as needed, if something is messed up in any running service and your shm fills up the main memory, everything goes into swap and kills the system via I/O flood…

    Greeting from Beijing

  • Richard Lynch Dec 9, 2010 @ 19:13

    /dev/shm memory is not allocated until it is needed, as it would be silly to do so otherwise.

    It is fairly typical to make the size == the RAM, and just pray you (almost) never actually reach that and start swapping like crazy.

    /dev/shm is essentially a ramdisk, only pretending to be a file, and it might get swapped to disk when the going gets tough. You can survive a page fault or two now and then. You can’t survive constant thrashing of your swap.

    Because it pretends to be a file and lives in RAM (mostly) it makes an excellent vehicle for program communication with locks etc. as a “shared memory” substitute. It’s easy to use; it lives in RAM (mostly); it is fault-tolerant (hitting swap if it has to); it’s fast.

  • Juan Miller Feb 1, 2012 @ 3:43

    My RAM died last night. be careful if you do this stuff.

    • Henry Jul 27, 2013 @ 0:24

      I’m sure horns are worth something.

  • Ternia Wilite Feb 11, 2012 @ 7:48

    Is there a standard or something for using /dev/shm? In other words, does any program use /dev/shm by default? In other other words ;-) can I harmlessly unmount and delete /dev/shm on a working computer?

  • Rob Mar 14, 2012 @ 11:34

    I found malware stored in /dev/shm that looked like an irc bot. I too am considering unmounting and removing this.

  • xuedi Mar 14, 2012 @ 16:18

    @Rob: If you have a rootkit on your system it does not matter whether it is in SHM or in the file system; your system is lost, better make backups ^^ SHM is just a file system like any other.

  • Giuseppe Lottini Apr 15, 2012 @ 16:12

    It was very simple, concise and useful. Thanks a lot

  • navendu54 May 31, 2012 @ 7:42

    Charles / Bob :- Why troll? If you do not have anything to contribute, move along.

    And yes, this was really useful to me. I could not find any good source to get rid of the following errors on my production Oracle which runs on CentOS 64bit system:

    ORA-32004: obsolete and/or deprecated parameter(s) specified
    ORA-00845: MEMORY_TARGET not supported on this system

    I increased /dev/shm from the default 1G to 8G and now I do not get any errors.

  • Lucian Jun 29, 2012 @ 12:29

    Thank you for this post. Very concise and very useful!
    Thank you for your comments! These also helped me to better understand this concept!
    Charles, I understand your frustration. There are many situations when I have to search a lot to find something. But like RJB said, finally you will find what you need. But you have to want this! Really want!!

  • Ron Stubbs Jul 7, 2012 @ 15:23

    “tmpfs : Tmpfs is a file system which keeps all files in virtual memory.”

    This should read:
    tmpfs : Tmpfs is a file system which keeps all files in physical memory.

    Swap is an example of virtual memory.

    • GOSteen Aug 7, 2012 @ 15:18

      ‘virtual memory’ is correct. In this case, it is referring to the virtual addressing structure used within the kernel which, through memory management, accounts for all memory, both physical and otherwise. tmpfs can be swapped out as well so it would be unfair/unwise to state that it keeps it in ‘physical’ memory when, in fact, the data could exist within a swapfile or swap partition.

  • Aron Oct 1, 2012 @ 6:05

    Can I create shm partition on LVM?

  • Guest Apr 6, 2013 @ 11:20

    Ramdisk/tmpfs is kind of last resort. It’s almost never needed. Linux kernel uses all the free RAM for the file cache by default, so all the frequently accessed files remain cached in RAM. Short-lived temporary files may stay in cache and not hit the disk at all. Linux kernel is smart. ๐Ÿ™‚

    There’re just a few cases when ramdisk/tmpfs is much faster than real filesystem (and most of them can be reported as a bug). Usually there’re better alternatives.

    For example to reduce number of disk write attempts you can increase vm.dirty_* sysctls (http://www.kernel.org/doc/Documentation/sysctl/vm.txt). If you’re trying to reduce number of HDD spinups for a laptop, then vm.laptop_mode and laptop-mode-tools package would give you the best solution. If your software uses a lot of slow fsync’s that you need to avoid, then `eatmydata` wrapper is just for you.

    Tmpfs takes your RAM from other apps even when not used, often it’s not easy to keep it in sync with your local data, and it’s MUCH slower than real filesystem when it’s swapped out. The only way to make sure that you need it is to benchmark it, not with some general benchmarking tool, but with your particular test case.
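
As a sketch of the vm.dirty_* tuning this comment mentions (the values below are illustrative, not recommendations; see the kernel's sysctl/vm.txt for the semantics), an /etc/sysctl.conf fragment might look like:

```
# Let more dirty pages accumulate in RAM before writeback kicks in.
vm.dirty_background_ratio = 20    # background writeback starts at 20% of RAM
vm.dirty_ratio = 40               # writers are throttled only at 40% of RAM
vm.dirty_expire_centisecs = 3000  # pages count as "old" after 30 seconds
# Apply the changes with: sysctl -p
```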

    • BigDaddy68 Oct 9, 2013 @ 22:47

      This was a good post by “Guest”. FWIW, in the Solaris world Sun used to make /tmp a tmpfs filesystem. This is because ‘cc’ (and other compilers) were multi-pass – exec()’s of cpp, cc1, cc2 etc. – and communicated via smallish intermediate files in /tmp. So having /tmp be free VM pages (free RAM under the covers) meant that compiles could go faster; a nice feature for developers indeed. Many used to call all the tmpfs (/tmp) RAM the ‘page cache’ – since it was all the free memory used to start new processes/threads and to cache files – this sounds like the same as what Linux does (to me anyway).

      We used to use tmpfs ( /tmp) carefully on large RAM systems well above its normal required RAM for DB workloads and we used to drop the DB TEMP files(tempdb, tablespace datafiles et al) into /tmp because using the fast RAM could assist in N-Way join processing which Writes/Reads DB TEMP files heavily/successively for large queries doing joins. DB TEMP files (in general) were of fixed size and the DB always wiped them clean on a db-restart so if you cp’d them out there (pre-created them?) before your DB started it really helped query performance; substantially in many cases.

      So since this is all about linux I am not sure if I’d use linux /dev/shm for this or I’d use a RAMDISK or if it would even matter ..

      Does anyone know if it matters – ie. does a ramdisk handle larger files better than /dev/shm or is it ultimately the same at the end of the day.

      If not mistaken ( at least for Solaris ) I thought I understood that TMPFS used the seg_vn segment driver to handle file/vnode VM mappings under the covers… I am wondering if linux does this too ? Then I guess one could understand why the comment about smaller files prevails for /dev/shm under linux (maybe for same reason ?)

      Still though writing/reading pure VM(RAM) is way faster that doing physio’s to anything by a longshot – even if you have to do it in shot-glasses and not beer mugs ๐Ÿ™‚

      In Linux, is there the ability to use a more optimized segment driver for large file mappings for large shmem segments, which could also have a name in a filesystem like ext4 or xfs – might that run better or more optimally?

      Sorry, I’m a shoddy typist.

  • Jalal Hajigholamali Jul 31, 2013 @ 7:22


    Thanks a lot, very useful article

  • Ojp Aug 2, 2013 @ 16:42

    Hi everyone,
    The most useful case for this is if your application has lots of small files. For example: sessions on disk, temporary data files, log files, etc. In my case, I saw a huge change in performance with an application that keeps its sessions on disk.

  • Sascha Feb 8, 2014 @ 22:00


    I already use SHM on a Linux server to cache writes during save periods; this has saved me 2/3 of the time the save needed, since it would suspend all netstates and restart them after the save (yes, I’m talking about a game). After the save is done, the data is moved by another thread to the main disk, but the netstates can restart freely.

    This is an important thing to me: it saves game-freeze time, saving all the necessary data in a temporary folder that is much faster than the disk; after the save, the data can be moved. Yes, I could fork() the process and copy all the data there, but to avoid certain occurrences the tmpfs way is cleaner and faster (from 24s to 9s is a lot of gain to me).

  • Sepahrad Salour Jan 19, 2015 @ 16:05

    It’s a good article, but it would have been even better if you had talked about IPC or named pipes to clarify it.

    Thanks again.

  • arun Jan 20, 2015 @ 20:36

    We have a server with 7 GB and tmpfs is 4 GB. No process/application is using much memory but we are facing a memory issue. And it is not a DB server.

    Question : In free -g we have only 3 gb free (buffer+caches is 2 gb and cache is 1 gb).
    Kindly help on this and find the below details from that server .

    1) cat /etc/fstab->tmpfs /dev/shm tmpfs defaults 0 0
    2) df -h –> tmpfs 3.9G 0 3.9G 0% /dev/shm
    3) free -m
    total used free shared buffers cached
    Mem: 7873 7240 632 0 119 1640
    -/+ buffers/cache: 5480 2392
    Swap: 4095 0 4095

    ##free -g
    total used free shared buffers cached
    Mem: 7 7 0 0 0 1
    -/+ buffers/cache: 5 2
    Swap: 3 0 3
    4) from top —
    top – 21:01:04 up 332 days, 19 min, 1 user, load average: 0.00, 0.00, 0.00
    Tasks: 249 total, 2 running, 247 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 8062532k total, 7415264k used, 647268k free, 122792k buffers
    Swap: 4194296k total, 4k used, 4194292k free, 1679428k cached

    32746 UH00006 30 10 144m 7216 2328 S 0.0 0.1 1:53.21 pandora_agent
    1917 root 20 0 245m 5404 884 S 0.0 0.1 1:26.54 rsyslogd
    23397 root 20 0 169m 4120 2236 S 0.0 0.1 4:45.63 httpd
    2413 cimsrvr 20 0 382m 3804 2088 S 0.0 0.0 2:51.21 cimservermain
    21780 root 20 0 95824 3788 2864 S 0.0 0.0 0:00.14 sshd
    2351 postfix 20 0 78972 3400 2468 S 0.0 0.0 1:21.00 qmgr
    22873 postfix 20 0 78800 3244 2416 S 0.0 0.0 0:00.02 pickup
    2344 root 20 0 78720 3200 2348 S 0.0 0.0 5:34.63 master
    18650 apache 20 0 169m 3104 1160 S 0.0 0.0 0:00.00 httpd

    5) Top memory usage
    ps -A –sort -rss -o comm,pmem | head -n 11
    pandora_agent 0.0
    rsyslogd 0.0
    httpd 0.0
    cimservermain 0.0
    sshd 0.0
    qmgr 0.0
    pickup 0.0
    master 0.0
    httpd 0.0
    httpd 0.0

  • Pepe Jan 13, 2016 @ 2:56

    He obviously meant GNU/Linux when he wrote Linux, and you know it. It doesn’t take a Richard Stallman to realize this.

    I agree that the Linux kernel is well documented, and I don’t know what you mean with “credible”, but that’s it.

    The Linux “ecosystem” is, like you said, a collection of millions of lines of code from different sources.

    Its quality ranges from genius to abysmal.

    • Pepe Jan 14, 2016 @ 1:54

      That last message was a reply to steved129, I don’t know why it didn’t show up as a reply.

  • ravisreerama Apr 5, 2016 @ 12:46

    I have a /tmp file system which is created in ext4 format. Now the user has asked to convert it into tmpfs. The mount point is an LVM mount point. How can I do it?
    # df -hT /tmp
    Filesystem Type Size Used Avail Use% Mounted on
    ext4 2.9G 5.0M 2.8G 1% /tmp
