How to add a new brick to a replicated GlusterFS volume on Linux

Last updated July 26, 2017

I have a three-server replicated volume setup (a scalable network filesystem for cloud and VMs). I need to add one more server. How do I add a new brick to an existing replicated volume on a Debian or Ubuntu/CentOS Linux system?

How to add a new brick to an existing replicated volume?
This tutorial shows how to add a new node/server and balance it into the existing array. I am assuming that you already set up a two-server/three-server GlusterFS array as described here.

I am going to assume that the new node is named gfs04 and that you updated the /etc/hosts file to include an IP address to host mapping. My existing Gluster volume is named gvol0.

Step 1 – Install glusterfs

Type the following apt-get command/apt command on a Debian or Ubuntu Linux system:
$ sudo add-apt-repository ppa:gluster/glusterfs-3.11
$ sudo apt-get update
$ sudo apt-get install glusterfs-server
$ sudo apt-mark hold glusterfs*
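
Once the packages are installed, make sure the Gluster management daemon is running on the new node before going any further. A minimal check, assuming a systemd-based system where the service is named glusterd (on some older Ubuntu releases it may be called glusterfs-server instead):
$ sudo systemctl enable --now glusterd
$ sudo systemctl status glusterd
$ glusterfs --version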

Step 2 – Format and set up the disk on the server

Type the following commands:
$ sudo fdisk /dev/vdb
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mkdir -p /nodirectwritedata/brick4
$ echo '/dev/vdb1 /nodirectwritedata/brick4 ext4 defaults 1 2' | sudo tee -a /etc/fstab
$ sudo mount -a
$ sudo mkdir /nodirectwritedata/brick4/gvol0
$ df -H
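
Before moving on, it is worth confirming that the new brick filesystem is actually mounted where you expect. A quick sanity check against the brick directory created above:
$ findmnt /nodirectwritedata/brick4
$ df -H /nodirectwritedata/brick4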

Step 3 – Configure the trusted pool

Run the following command from any one of the existing nodes:
$ gluster peer probe gfs04
Sample outputs:

peer probe: success.
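
To confirm that gfs04 has actually joined the trusted pool, you can list all peers from any existing node (the exact output will vary with your pool):
# gluster pool list
# gluster peer status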

Step 4 – Get info about the current volume named gvol0

Type the following command:
# gluster vol status
Sample outputs:

Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs01:/nodirectwritedata/brick1/gvol0 49152     0          Y       3994 
Brick gfs02:/nodirectwritedata/brick2/gvol0 49152     0          Y       3956 
Brick gfs03:/nodirectwritedata/brick3/gvol0 49152     0          Y       4069 
Self-heal Daemon on localhost               N/A       N/A        Y       18938
Self-heal Daemon on gfs03                   N/A       N/A        Y       13273
Self-heal Daemon on gfs04                   N/A       N/A        Y       3128 
Self-heal Daemon on gfs02                   N/A       N/A        Y       13144
 
Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
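
You can also check the current replica count before adding the brick. For a three-way replicated volume, gluster volume info should report something like "Number of Bricks: 1 x 3 = 3":
# gluster volume info gvol0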

Step 5 – Add a new brick to an existing replicated volume

Type the following command:
# gluster volume add-brick gvol0 replica 4 gfs04:/nodirectwritedata/brick4/gvol0
Sample outputs:

volume add-brick: success

Where,

  1. gluster – The command name.
  2. volume – The command is related to a volume.
  3. add-brick – I am adding a brick to the volume.
  4. gvol0 – This is the name of the volume.
  5. replica 4 – After you add this brick, the volume will keep 4 copies of each file.
  6. gfs04:/nodirectwritedata/brick4/gvol0 – This is the hostname (gfs04) of the new Gluster server, followed by the absolute path where the brick data will be stored.
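
Depending on your GlusterFS version, the self-heal daemon may start populating the new brick on its own once it is part of the replica set. Either way, you can explicitly trigger a full self-heal and monitor its progress from any node in the pool:
# gluster volume heal gvol0 full
# gluster volume heal gvol0 info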

Step 6 – Verify new setup

Type the following command:
# gluster vol status
Sample outputs:

Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs01:/nodirectwritedata/brick1/gvol0 49152     0          Y       3994 
Brick gfs02:/nodirectwritedata/brick2/gvol0 49152     0          Y       3956 
Brick gfs03:/nodirectwritedata/brick3/gvol0 49152     0          Y       4069 
Brick gfs04:/nodirectwritedata/brick4/gvol0 49152     0          Y       3172 
Self-heal Daemon on localhost               N/A       N/A        Y       21424
Self-heal Daemon on gfs03                   N/A       N/A        Y       19593
Self-heal Daemon on gfs02                   N/A       N/A        Y       20088
Self-heal Daemon on gfs04                   N/A       N/A        Y       3192 
 
Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks

To see the status of the peers, type:
# gluster peer status

Number of Peers: 3

Hostname: gfs03
Uuid: 569a9fd1-3de8-470e-932b-fb903ca925cf
State: Peer in Cluster (Connected)

Hostname: gfs02
Uuid: d13c68f8-b00f-4615-98f1-1d7c4138a6bd
State: Peer in Cluster (Connected)

Hostname: gfs04
Uuid: 9ccfb748-d4d6-45de-8b7a-3dcf7d5efb67
State: Peer in Cluster (Connected)
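
Finally, if you want to confirm the volume is usable from a client, you can mount it over the native GlusterFS protocol. This assumes the glusterfs-client package is installed on the client; /mnt/gvol0 below is just an example mount point:
$ sudo mkdir -p /mnt/gvol0
$ sudo mount -t glusterfs gfs04:/gvol0 /mnt/gvol0
$ df -H /mnt/gvol0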

This entry is 4 of 4 in the GlusterFS Tutorial series. Keep reading the rest of the series:
  1. How to install GlusterFS on a Ubuntu Linux
  2. How to mount Glusterfs volumes inside LXC/LXD
  3. How to enable TLS/SSL encryption with Glusterfs storage
  4. How to add a new brick to an existing replicated GlusterFS volume on Linux

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter, Facebook, Google+.
