How to automatically mount zfs file system on Linux/Unix/FreeBSD

I have created a ZFS file system called data/vm_guests on an Ubuntu Linux server. After the server reboots, the zpool does not automatically mount at /data/vm_guests, which breaks my KVM guest machines. How can I mount my ZFS pool (zpool) automatically after a reboot?

By default, a ZFS file system is automatically mounted when it is created. Any file system whose mountpoint property is not set to legacy is mounted and unmounted by ZFS itself; you do not need an /etc/fstab entry for it.
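If you prefer to manage a dataset through /etc/fstab instead, set its mountpoint property to legacy and add the fstab entry yourself. A sketch, using a hypothetical dataset named data/legacy_fs:

# zfs set mountpoint=legacy data/legacy_fs
# mkdir -p /mnt/legacy
# echo 'data/legacy_fs /mnt/legacy zfs defaults 0 0' >> /etc/fstab
# mount /mnt/legacy

With mountpoint=legacy, ZFS no longer mounts the dataset on its own, so the usual mount/umount tools and fstab take over.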

List your pools

Type the following command:
# zpool list
Sample outputs:

NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data       1.48T   142G  1.35T         -     5%     9%  1.00x  ONLINE  -
nginxwww    131G  40.3G  90.7G         -    22%    30%  1.00x  ONLINE  -

Create a ZFS file system called data/vm_guests

Type the following command:
# zfs create data/vm_guests

Get mountpoint for data/vm_guests

# zfs get mountpoint data/vm_guests
Sample outputs:

Fig.01: Get the mountpoint for the dataset

Is it mounted?

# zfs get mounted data/vm_guests
Sample outputs:

Fig.02: Get the mountpoint status of the dataset

If not mounted, mount the ZFS file system explicitly

You can explicitly set the mountpoint property for a ZFS file system on Linux/Unix/FreeBSD as shown in the following example:
# zfs set mountpoint=/YOUR-MOUNT-POINT pool/fs
# zfs set mountpoint=/my_vms data/vm_guests
# cd /my_vms
# df /my_vms
# zfs get mountpoint data/vm_guests
# zfs get mounted data/vm_guests

Sample outputs:

Fig.03: Modifying ZFS dataset mountpoints and mount ZFS file system as per needs

Please note that you can pass the -a option to the zfs mount command to mount all ZFS-managed file systems. For example:
# zfs mount -a
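On FreeBSD, pool import and mounting at boot is handled by the ZFS rc script, so make sure it is enabled in /etc/rc.conf (sketch; sysrc simply appends the variable for you):

# sysrc zfs_enable="YES"

Equivalently, add the line zfs_enable="YES" to /etc/rc.conf by hand. On reboot, the rc system then runs the ZFS startup script, which imports your pools and mounts all non-legacy datasets.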

How do I see a list of all mounted ZFS file systems?

Type the following command:
# zfs mount
# zfs mount | grep my_vms

Unmounting ZFS file systems

Use the zfs unmount command. You can pass either the dataset name or its mountpoint:
# zfs unmount data/vm_guests
# zfs unmount /my_vms

See also

See the zfs(8) man page for more information.


2 comments
  • Ivan Oct 31, 2016 @ 11:00

    Please note that on systemd-based systems, the following services need to be enabled in order to have the zpool imported and the ZFS file systems mounted automatically at boot:
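    On most distributions that ship OpenZFS, the relevant systemd units are the following (unit names may vary slightly between versions, so verify with systemctl list-unit-files | grep zfs on your system):

    # systemctl enable zfs-import-cache.service
    # systemctl enable zfs-mount.service
    # systemctl enable zfs-import.target
    # systemctl enable zfs.target

    zfs-import-cache.service imports the pools recorded in the zpool cache file at boot, and zfs-mount.service then mounts all non-legacy datasets.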


  • Keith Aug 20, 2017 @ 13:17

    Thanks for the tutorial. It helped me solve the mount issue in no time at all compared to my still unsolved issue with using the -V option for ZVOL created ZFS file systems.

    With Ubuntu Server 16.04, I couldn’t reliably get my OS mounted ZVOLs to load at boot before KVM/Virsh defaulted my default-named storage pool back to their default directory instead of my mounted zd02, for example, in /mnt/zd02. My ZVOL had two partitions formatted with ext4.

    The standard ZFS file system created as explained here seems to be working well and holding up as my VM storage layer. I’m updating my other servers now to match this method.

    I added this property option when creating the file systems as a single-command solution:

    zfs create -o mountpoint=/MOUNTDIRECTORY POOLNAME/FSNAME

