How to set up ZFS ARC size on FreeBSD

When working with FreeBSD and ZFS, you may run into ZFS cache sizing problems. Not every FreeBSD server is a file server: some act as backup servers, and others run Linux or Windows VMs where you want the guests to manage their own caching. Plenty of RAM is ideal for ZFS, but you may not have that luxury in real life. This page explains how to set the ZFS ARC size on FreeBSD so the system works well with less RAM and the kernel does not run out of memory.

Tutorial details
Difficulty level Advanced
Root privileges Yes
Requirements FreeBSD with ZFS
Est. reading time 7 minutes

The information and config options presented here only work with FreeBSD. They will not work on Linux or other operating systems where ZFS is supported.

What is ARC?

ZFS is an advanced file system originally created by Sun Microsystems. ARC is an acronym for Adaptive Replacement Cache, a modern algorithm for caching data in DRAM. In other words, the ARC holds cached data such as filesystem data and metadata. ZFS will try to use as much free RAM as possible to speed up server operations.

There is also a secondary cache called L2ARC (Level 2 Adaptive Replacement Cache). Why use L2ARC? DRAM is expensive and limited on all systems, so we use faster SSDs or PCIe NVMe storage for the second cache tier.
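If you later decide to add an L2ARC device, it is attached to a pool as a cache vdev. A minimal sketch, assuming a pool named zroot and a hypothetical NVMe device nvd0 (your pool and device names will differ):

```shell
# Attach a cache (L2ARC) device to the pool
# NOTE: "nvd0" is a placeholder device name; substitute your own
zpool add zroot cache nvd0

# Confirm the cache vdev shows up under the pool layout
zpool status zroot
```

Removing it later works the same way in reverse with zpool remove.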

Examples

Here is how it typically looks on an enterprise server:

  1. DRAM – ARC – 16 GB
  2. L2ARC – NVMe/SATA SSD – 512 GB
  3. ZFS storage – multiple mirrored disks (say 12 TB)

How to tune ARC on FreeBSD

Unfortunately, there is no formula that fits everyone. You need to determine your server's role and set up ARC and L2ARC to match your requirements; that is your job as a Unix system administrator. For file servers such as CIFS/NFS, we can configure a large ARC with L2ARC to speed up operations. For MariaDB/PostgreSQL, I configure a small ARC and tune database caching along with Redis or Memcached. In this example, my FreeBSD home server plays many roles: I have limited RAM, there is no room for L2ARC yet, multiple Linux VMs need DRAM too, a couple of jails are used for testing my apps, and the server also acts as a backup server for all other computers. I don't run NFS or CIFS. In other words, each setup is unique, and you need to think about your requirements.

Enough talk; let’s get our hands dirty.

How to set up ZFS arc size on FreeBSD

You need to edit the /boot/loader.conf file. Run:
sudo vim /boot/loader.conf
Let us set the max ARC size to 4 GB and the min size to 2 GB, in bytes:

# Setting up ZFS ARC size on FreeBSD as per our needs
# Must be set in bytes and not in GB/MB etc
# Set Max size = 4GB = 4294967296 Bytes
vfs.zfs.arc_max="4294967296"
 
# Min size = 2GB
vfs.zfs.arc_min="2147483648"
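The values must be plain byte counts, not suffixed sizes. Rather than typing them by hand, you can let the shell do the arithmetic; this is just a convenience, and it produces the same numbers used above:

```shell
# 4 GB and 2 GB expressed in bytes via shell arithmetic
echo $((4 * 1024 * 1024 * 1024))   # 4294967296
echo $((2 * 1024 * 1024 * 1024))   # 2147483648
```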

Save and close the file. Make sure you reboot the FreeBSD box:
sudo reboot

Verification

$ sysctl vfs.zfs.arc_max vfs.zfs.arc_min
# see all values #
$ sysctl vfs.zfs.arc
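With the values from this tutorial in place, the first command should report the limits in bytes. A hedged sketch of what to expect, plus two extra checks (on FreeBSD 13+ with OpenZFS, arc_max can usually be adjusted at runtime as well, though loader.conf is still needed to persist it):

```shell
# Expected output after reboot (values in bytes)
sysctl vfs.zfs.arc_max vfs.zfs.arc_min
# vfs.zfs.arc_max: 4294967296
# vfs.zfs.arc_min: 2147483648

# On FreeBSD 13+ (OpenZFS), arc_max can usually be changed at runtime too:
sysctl vfs.zfs.arc_max=4294967296

# top's header also shows the current ARC size at a glance
top -b | grep -i arc
```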

How to view ZFS statistics

We can use the zfs-stats command to see ZFS statistics in a human-readable format on FreeBSD, including:

  • ARC
  • L2ARC
  • zfetch (DMU)
  • vdev cache statistics and more.

Installing zfs-stats on FreeBSD

Type the following pkg command:
$ sudo pkg install zfs-stats

Updating FreeBSD repository catalogue...
Fetching packagesite.txz: 100%    6 MiB   6.5MB/s    00:01    
Processing entries: 100%
FreeBSD repository update completed. 30370 packages processed.
All repositories are up to date.
Checking integrity... done (0 conflicting)
The following 1 package(s) will be affected (of 0 checked):
 
New packages to be INSTALLED:
	zfs-stats: 1.3.0_2
 
Number of packages to be installed: 1
 
Proceed with this action? [y/N]: y
[1/1] Installing zfs-stats-1.3.0_2...
[1/1] Extracting zfs-stats-1.3.0_2: 100%

How to use zfs-stats on FreeBSD

To see all statistics, pass the -a option as follows:
zfs-stats -a
zfs-stats -a | more
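If the full report is too long, zfs-stats can print individual sections. The flags below are my reading of the zfs-stats man page; verify them with zfs-stats -h on your system:

```shell
# Print only selected sections (verify flag names with `zfs-stats -h`)
zfs-stats -A   # ARC summary
zfs-stats -E   # ARC efficiency
zfs-stats -L   # L2ARC statistics
zfs-stats -M   # system memory usage
```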

You will get output on the screen like this:

------------------------------------------------------------------------
ZFS Subsystem Report                            Sat May 22 16:05:11 2021
------------------------------------------------------------------------
 
System Information:
 
        Kernel Version:                         1300139 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64
 
        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5
 
FreeBSD 13.0-RELEASE #0 releng/13.0-n244733-ea31abc261f: Fri Apr 9 04:24:09 UTC 2021 root 4:05PM  up 3 days, 20:40, 1 user, load averages: 1.08, 1.10, 1.08
 
------------------------------------------------------------------------
 
System Memory:
 
        1.53%   486.25  MiB Active,     3.61%   1.12    GiB Inact
        37.66%  11.68   GiB Wired,      0.00%   0       Bytes Cache
        57.07%  17.71   GiB Free,       0.13%   41.04   MiB Gap
 
        Real Installed:                         32.00   GiB
        Real Available:                 99.57%  31.86   GiB
        Real Managed:                   97.37%  31.03   GiB
 
        Logical Total:                          32.00   GiB
        Logical Used:                   41.16%  13.17   GiB
        Logical Free:                   58.84%  18.83   GiB
 
Kernel Memory:                                  847.96  MiB
        Data:                           94.60%  802.16  MiB
        Text:                           5.40%   45.81   MiB
 
Kernel Memory Map:                              31.03   GiB
        Size:                           37.18%  11.54   GiB
        Free:                           62.82%  19.49   GiB
 
------------------------------------------------------------------------
 
ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0
 
ARC Misc:
        Deleted:                                31.41   m
        Mutex Misses:                           1.37    k
        Evict Skips:                            186.29  k
 
ARC Size:                               85.99%  3.44    GiB
        Target Size: (Adaptive)         89.81%  3.59    GiB
        Min Size (Hard Limit):          50.00%  2.00    GiB
        Max Size (High Water):          2:1     4.00    GiB
        Decompressed Data Size:                 9.21    GiB
        Compression Factor:                     2.68
 
ARC Size Breakdown:
        Recently Used Cache Size:       45.66%  1.64    GiB
        Frequently Used Cache Size:     54.34%  1.95    GiB
 
ARC Hash Breakdown:
        Elements Max:                           813.52  k
        Elements Current:               89.17%  725.39  k
        Collisions:                             2.92    m
        Chain Max:                              5
        Chains:                                 55.74   k
 
------------------------------------------------------------------------
 
ARC Efficiency:                                 2.61    b
        Cache Hit Ratio:                98.74%  2.57    b
        Cache Miss Ratio:               1.26%   32.93   m
        Actual Hit Ratio:               97.17%  2.53    b
 
        Data Demand Efficiency:         93.61%  8.85    m
        Data Prefetch Efficiency:       18.83%  4.02    m
 
        CACHE HITS BY CACHE LIST:
          Anonymously Used:             1.53%   39.31   m
          Most Recently Used:           36.89%  948.88  m
          Most Frequently Used:         61.53%  1.58    b
          Most Recently Used Ghost:     0.03%   832.22  k
          Most Frequently Used Ghost:   0.03%   714.13  k
 
        CACHE HITS BY DATA TYPE:
          Demand Data:                  0.32%   8.29    m
          Prefetch Data:                0.03%   757.06  k
          Demand Metadata:              95.31%  2.45    b
          Prefetch Metadata:            4.34%   111.69  m
 
        CACHE MISSES BY DATA TYPE:
          Demand Data:                  1.72%   565.72  k
          Prefetch Data:                9.91%   3.26    m
          Demand Metadata:              73.49%  24.20   m
          Prefetch Metadata:            14.89%  4.90    m
 
------------------------------------------------------------------------
 
L2ARC is disabled
 
------------------------------------------------------------------------
 
Dataset statistics for: zroot/ROOT/default

Remaining stats:

Dataset statistics for: zroot/ROOT/default
 
        Reads:          88.73%  787.20  k
        Writes:         11.09%  98.38   k
        Unlinks:        0.18%   1.60    k
 
        Bytes read:     94.27%  3.98    b
        Bytes written:  5.73%   241.83  m
 
Dataset statistics for: zroot/jails/dnscrypt
 
        Reads:          89.42%  124.35  k
        Writes:         10.49%  14.59   k
        Unlinks:        0.09%   123
 
        Bytes read:     98.64%  517.96  m
        Bytes written:  1.36%   7.14    m
 
Dataset statistics for: zroot/rsnapshot
 
        Reads:          55.21%  3.47    m
        Writes:         3.61%   227.13  k
        Unlinks:        41.18%  2.59    m
 
        Bytes read:     89.81%  426.25  b
        Bytes written:  10.19%  48.36   b
 
Dataset statistics for: zroot/tmp
 
        Reads:          99.46%  698.75  k
        Writes:         0.50%   3.53    k
        Unlinks:        0.04%   274
 
        Bytes read:     63.83%  185.67  m
        Bytes written:  36.17%  105.19  m
 
Dataset statistics for: zroot/usr/home
 
        Reads:          99.99%  1.02    m
        Writes:         0.01%   72
        Unlinks:        0.00%   21
 
        Bytes read:     100.00% 132.00  b
        Bytes written:  0.00%   740.30  k
 
Dataset statistics for: zroot/var/log
 
        Reads:          0.30%   887
        Writes:         99.69%  293.71  k
        Unlinks:        0.00%   13
 
        Bytes read:     5.54%   4.80    m
        Bytes written:  94.46%  81.73   m
 
Dataset statistics for: zroot/var/mail
 
        Reads:          0.00%   0
        Writes:         59.09%  13
        Unlinks:        40.91%  9
 
        Bytes read:     0.00%   0
        Bytes written:  100.00% 24.66   k
 
 
------------------------------------------------------------------------
 
File-Level Prefetch:
 
DMU Efficiency:                                 3.76    m
        Hit Ratio:                      82.09%  3.09    m
        Miss Ratio:                     17.91%  674.00  k
 
------------------------------------------------------------------------
 
VDEV cache is disabled
 
------------------------------------------------------------------------
 
ZFS Tunables (sysctl):
        kern.maxusers                           2375
        vm.kmem_size                            33315098624
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         8
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.commit_timeout_pct              5
        vfs.zfs.history_output_max              1048576
        vfs.zfs.max_nvlist_src_size             0
        vfs.zfs.dbgmsg_maxsize                  4194304
        vfs.zfs.dbgmsg_enable                   1
        vfs.zfs.zap_iterate_prefetch            1
        vfs.zfs.rebuild_scrub_enabled           1
        vfs.zfs.rebuild_vdev_limit              33554432
        vfs.zfs.rebuild_max_segment             1048576
        vfs.zfs.initialize_chunk_size           1048576
        vfs.zfs.initialize_value                -2401053088876216594
        vfs.zfs.nocacheflush                    0
        vfs.zfs.scan_ignore_errors              0
        vfs.zfs.checksum_events_per_second      20
        vfs.zfs.slow_io_events_per_second       20
        vfs.zfs.read_history_hits               0
        vfs.zfs.read_history                    0
        vfs.zfs.special_class_metadata_reserve_pct 25
        vfs.zfs.user_indirect_is_special        1
        vfs.zfs.ddt_data_is_special             1
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            60000
        vfs.zfs.free_leak_on_eio                0
        vfs.zfs.recover                         0
        vfs.zfs.flags                           0
        vfs.zfs.keep_log_spacemaps_at_export    0
        vfs.zfs.min_metaslabs_to_flush          1
        vfs.zfs.max_logsm_summary_length        10
        vfs.zfs.max_log_walking                 5
        vfs.zfs.unflushed_log_block_pct         400
        vfs.zfs.unflushed_log_block_min         1000
        vfs.zfs.unflushed_log_block_max         262144
        vfs.zfs.unflushed_max_mem_ppm           1000
        vfs.zfs.unflushed_max_mem_amt           1073741824
        vfs.zfs.autoimport_disable              1
        vfs.zfs.max_missing_tvds                0
        vfs.zfs.multilist_num_sublists          0
        vfs.zfs.resilver_disable_defer          0
        vfs.zfs.scan_fill_weight                3
        vfs.zfs.scan_strict_mem_lim             0
        vfs.zfs.scan_mem_lim_soft_fact          20
        vfs.zfs.scan_max_ext_gap                2097152
        vfs.zfs.scan_checkpoint_intval          7200
        vfs.zfs.scan_legacy                     0
        vfs.zfs.scan_issue_strategy             0
        vfs.zfs.scan_mem_lim_fact               20
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.max_async_dedup_frees           100000
        vfs.zfs.async_block_max_blocks          -1
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.scan_suspend_progress           0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.obsolete_min_time_ms            500
        vfs.zfs.scrub_min_time_ms               1000
        vfs.zfs.scan_vdev_limit                 4194304
        vfs.zfs.sync_taskq_batch_pct            75
        vfs.zfs.delay_scale                     500000
        vfs.zfs.dirty_data_sync_percent         20
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  3421336780
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_max_max_percent      25
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.disable_ivset_guid_check        0
        vfs.zfs.allow_redacted_dataset_mount    0
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.pd_bytes_max                    52428800
--More--(byte 7920)

Summing up

You learned how to control the ZFS ARC size under FreeBSD as per your needs. I strongly suggest reading FreeBSD ZFS books that explain all of this in detail.

