How to create RAID 10 – Striped Mirror Vdev ZPool On Ubuntu Linux

Last updated August 3, 2016

How do I create a ZFS-based RAID 10 (striped mirrored VDEVs) pool for my server, as I need to do small random read I/O? How can I create a striped 2 x 2 ZFS mirrored pool on an Ubuntu Linux 16.04 LTS server?

A striped mirrored VDEV zpool is the same as RAID 10, but with ZFS features such as checksumming and self-healing that help prevent data loss. In this quick tutorial, you will learn how to create a striped mirrored VDEV zpool (RAID 10) on an Ubuntu Linux 16.04 LTS server. The commands remain the same on FreeBSD or any other Linux distro or Unix-like system.

Before you get started

First, make sure ZFS is installed by running the following commands:
$ sudo apt update
$ sudo apt install zfsutils-linux
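You can optionally confirm that the ZFS kernel module is loaded before creating any pools (a quick sanity check; the module is simply named zfs in ZFS on Linux):
$ lsmod | grep zfs
If nothing is printed, load it manually:
$ sudo modprobe zfs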

Create striped mirrored VDEVs (RAID 10)

The syntax is:
sudo zpool create NAME mirror VDEV1 VDEV2 mirror VDEV3 VDEV4
or:
sudo zpool create NAME mirror VDEV1 VDEV2
sudo zpool add NAME mirror VDEV3 VDEV4

A VDEV can be a raw disk, a file/image, or a partition.
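If you only want to experiment with the 2 x 2 layout before committing real disks, you can build it from sparse image files. The /tmp/zdisk*.img paths below are just an example; any directory with a few gigabytes of free space will do:
$ for i in 1 2 3 4; do truncate -s 1G /tmp/zdisk$i.img; done
$ sudo zpool create testpool mirror /tmp/zdisk1.img /tmp/zdisk2.img mirror /tmp/zdisk3.img /tmp/zdisk4.img
$ zpool status testpool
$ sudo zpool destroy testpool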

Step 1: Find device names

In this example, I’m going to create a striped mirrored VDEV zpool using four physical disks. It is recommended that you use /dev/disk/by-id/ disk names, which often use the serial numbers of the drives. Type the following command to find the disks in your system:
$ ls -l /dev/disk/by-id/ | grep 'sd[a-z]$'
Sample outputs:

Fig.01: Linux find disk names by serial number using /dev/disk/by-id/
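If the by-id listing is hard to map back to device names, lsblk can show sizes and serial numbers side by side (assuming your util-linux version supports the SERIAL column, as the one shipped with Ubuntu 16.04 does):
$ lsblk -o NAME,SIZE,SERIAL,MODEL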

Step 2: Create a 2 x 2 mirrored pool using four raw disks

You can use wwn-0x50011731002b33ac (sda), wwn-0x50011731002b50d0 (sdb), wwn-0x5001173100406557 (sdc), and wwn-0x50011731004085a7 (sdd) as follows to create a zpool with two mirror VDEVs of two drives each, i.e. a 2 x 2 mirrored pool:
$ sudo zpool create tank0 mirror wwn-0x50011731002b33ac wwn-0x50011731002b50d0 mirror wwn-0x5001173100406557 wwn-0x50011731004085a7
Or use the following syntax to create a zpool called foo containing one mirror VDEV of two drives:
$ sudo zpool create foo mirror wwn-0x50011731002b33ac wwn-0x50011731002b50d0
Next, add another mirror VDEV of two drives to the pool:
$ sudo zpool add foo mirror wwn-0x5001173100406557 wwn-0x50011731004085a7 -f
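If your drives use 4K physical sectors (Advanced Format), you may also want to set ashift=12 at pool creation time, since the value cannot be changed for a VDEV afterwards. For example, the same tank0 command with ashift set:
$ sudo zpool create -o ashift=12 tank0 mirror wwn-0x50011731002b33ac wwn-0x50011731002b50d0 mirror wwn-0x5001173100406557 wwn-0x50011731004085a7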

Another example: Create a 2 x 2 mirrored pool using four partitions

Use the following command to list the partitions:
$ ls -l /dev/disk/by-id/ | grep 'sd[a-z][0-9]$'
Use the serial-number-plus-partition names to create a zpool called cartwheel with two mirror VDEVs of two partitions each:
$ sudo zpool create cartwheel mirror wwn-0x5001173100406557-part1 wwn-0x50011731004085a7-part1 -f
$ sudo zpool add cartwheel mirror wwn-0x50011731002b50d0-part1 wwn-0x50011731002b33ac-part8 -f

Finally, run the following commands to verify that the pool was created on the system:
$ zpool status
$ zpool list
$ df -H

Sample outputs:

Fig.02: See pool’s health status
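Because the whole point of a 2 x 2 mirror is better small random read I/O, you may also want to watch per-VDEV I/O statistics while you test. The standard zpool iostat subcommand takes an optional interval in seconds:
$ zpool iostat -v cartwheel 2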

You can now start copying or storing data in /cartwheel:
$ cd /cartwheel
$ ls
$ cp -r /bar/ .

However, ZFS also allows you to create file systems (datasets) inside the pool. For example, create salesdata and lxccontainers file systems in the pool called cartwheel:
$ sudo zfs create cartwheel/salesdata
$ sudo zfs create cartwheel/lxccontainers
$ zfs list

Sample outputs:

NAME                      USED  AVAIL  REFER  MOUNTPOINT
cartwheel                 111K  1.44T    19K  /cartwheel
cartwheel/lxccontainers    19K  1.44T    19K  /cartwheel/lxccontainers
cartwheel/salesdata        19K  1.44T    19K  /cartwheel/salesdata
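Each file system can carry its own properties. For example, to enable LZ4 compression on the salesdata file system and confirm the setting (lz4 is supported by the ZFS release shipped with Ubuntu 16.04):
$ sudo zfs set compression=lz4 cartwheel/salesdata
$ zfs get compression cartwheel/salesdata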

To destroy both file systems from the pool called cartwheel, run:
$ sudo zfs destroy cartwheel/salesdata
$ sudo zfs destroy cartwheel/lxccontainers
$ sudo zfs list

How do I delete a zpool and all data stored in the pool called cartwheel?

The syntax is:
$ sudo zpool destroy zpoolNameHere
To delete the pool called cartwheel along with all data stored in it, and then verify that it is gone, run:
$ sudo zpool destroy cartwheel
$ zpool status
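Keep in mind that zpool destroy wipes the pool and everything in it. If you merely want to move the disks to another box, export the pool instead and import it on the new system:
$ sudo zpool export cartwheel
$ sudo zpool import cartwheel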

Posted by: Vivek Gite
