I have a three-server replicated GlusterFS volume (a scalable network filesystem for cloud servers and VMs), and I need to add one more server. How do I add a new brick to an existing replicated volume on a Debian, Ubuntu, or CentOS Linux server?
This tutorial shows how to add a new node/server and balance it into the existing array. I am assuming that you already set up a two- or three-server GlusterFS array as described here.
I am going to assume that the new node is named gfs04 and that you have updated the /etc/hosts file on all servers with its IP address to hostname mapping. My existing Gluster volume is named gvol0.
Step 1 – Install GlusterFS
Type the following apt-get command/apt command on a Debian or Ubuntu Linux server:
$ sudo add-apt-repository ppa:gluster/glusterfs-3.11
$ sudo apt-get update
$ sudo apt-get install glusterfs-server
$ sudo apt-mark hold glusterfs*
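The question also mentions CentOS. I have not re-tested these exact steps there, but on a CentOS box the server packages normally come from the CentOS Storage SIG repository, so something along these lines should work (package names are my assumption; adjust for your release):
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
$ sudo systemctl enable glusterd
$ sudo systemctl start glusterd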
Step 2 – Format and setup disk in the server
Type the following commands:
$ sudo fdisk /dev/vdb
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mkdir -p /nodirectwritedata/brick4
$ echo '/dev/vdb1 /nodirectwritedata/brick4 ext4 defaults 1 2' | sudo tee -a /etc/fstab
$ sudo mount -a
$ sudo mkdir /nodirectwritedata/brick4/gvol0
$ df -H
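If you would rather not walk through the interactive fdisk prompts, a single parted command can create one partition covering the whole disk. This is just a sketch, assuming /dev/vdb is a fresh, empty disk that is safe to overwrite (adjust the device name for your server); the rest of this step stays the same:
$ sudo parted -s /dev/vdb mklabel gpt mkpart primary ext4 0% 100%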
Step 3 – Configure the trusted pool
Run the following command from any one of the existing nodes (for example, gfs01):
$ gluster peer probe gfs04
Sample outputs:
peer probe: success.
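You can also sanity-check the pool membership from the new node itself; the following lists every peer with its UUID and connection state (exact output depends on your GlusterFS version):
$ sudo gluster pool list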
Step 4 – Get info about current volume named gvol0
Type the following command:
# gluster vol status
Sample outputs:
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs01:/nodirectwritedata/brick1/gvol0 49152     0          Y       3994
Brick gfs02:/nodirectwritedata/brick2/gvol0 49152     0          Y       3956
Brick gfs03:/nodirectwritedata/brick3/gvol0 49152     0          Y       4069
Self-heal Daemon on localhost               N/A       N/A        Y       18938
Self-heal Daemon on gfs03                   N/A       N/A        Y       13273
Self-heal Daemon on gfs04                   N/A       N/A        Y       3128
Self-heal Daemon on gfs02                   N/A       N/A        Y       13144

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
Step 5 – Add a new brick to an existing replicated volume
Type the following command:
# gluster volume add-brick gvol0 replica 4 gfs04:/nodirectwritedata/brick4/gvol0
Sample outputs:
volume add-brick: success
Where,
- gluster – The command name.
- volume – The command is related to a volume.
- add-brick – I am adding a brick to the volume.
- gvol0 – This is the name of the volume.
- replica 4 – After you add this brick, the volume will keep four copies of each file, one per brick.
- gfs04:/nodirectwritedata/brick4/gvol0 – This is the hostname (or IP address) of the new Gluster server, followed by the absolute path where the brick data should be stored.
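The new brick starts out empty, so the existing data still has to be replicated over to gfs04. On recent GlusterFS releases the self-heal daemon usually takes care of this on its own, but you can trigger a full heal yourself and then watch its progress (exact output varies by version):
$ sudo gluster volume heal gvol0 full
$ sudo gluster volume heal gvol0 info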
Step 6 – Verify new setup
Type the following command:
# gluster vol status
Sample outputs:
Status of volume: gvol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs01:/nodirectwritedata/brick1/gvol0 49152     0          Y       3994
Brick gfs02:/nodirectwritedata/brick2/gvol0 49152     0          Y       3956
Brick gfs03:/nodirectwritedata/brick3/gvol0 49152     0          Y       4069
Brick gfs04:/nodirectwritedata/brick4/gvol0 49152     0          Y       3172
Self-heal Daemon on localhost               N/A       N/A        Y       21424
Self-heal Daemon on gfs03                   N/A       N/A        Y       19593
Self-heal Daemon on gfs02                   N/A       N/A        Y       20088
Self-heal Daemon on gfs04                   N/A       N/A        Y       3192

Task Status of Volume gvol0
------------------------------------------------------------------------------
There are no active volume tasks
To see status of peers, type:
# gluster peer status
Number of Peers: 3

Hostname: gfs03
Uuid: 569a9fd1-3de8-470e-932b-fb903ca925cf
State: Peer in Cluster (Connected)

Hostname: gfs02
Uuid: d13c68f8-b00f-4615-98f1-1d7c4138a6bd
State: Peer in Cluster (Connected)

Hostname: gfs04
Uuid: 9ccfb748-d4d6-45de-8b7a-3dcf7d5efb67
State: Peer in Cluster (Connected)
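Finally, it is worth confirming that the volume itself now reports four replicas. The following should show "Number of Bricks: 1 x 4 = 4" and list all four bricks (again, exact output depends on your GlusterFS version):
$ sudo gluster volume info gvol0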