Last updated on: 2020-09-04
Authored by: Matt Sherborne
This article describes how to add and remove GlusterFS servers from your server array.
Use the nova boot command from the previous article to create a server called web03:
nova boot --image bb02b1a3-bc77-4d17-ab5b-421d89850fca --flavor performance1-4 web03
You can also use the Rackspace Cloud Control Panel to create the new server.
In the previous article, you added a Rackspace custom network.
Get the universally unique identifier (UUID) of the network.
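If you no longer have the UUID on hand, you can list your networks first. This assumes the same nova client used in the previous article; the exact subcommand can vary with client version:
nova network-list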
After you have the UUID, associate the new host with it.
Replace UUID in the following command with the actual UUID of your network. In this command, web03 is the host name of the server that you want to add.
nova network-associate-host UUID web03
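To confirm the association, you can display the server's details, which include its attached networks and their addresses (again assuming the same nova client):
nova show web03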
You can also use the Rackspace Cloud Control Panel to associate a server with your existing network.
When you are done, the new server should have the IP address 192.168.0.3 on interface eth2. That is the address that GlusterFS uses to communicate with the other servers.
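You can verify the address from the new server itself (adjust the interface name if yours differs):
ip addr show eth2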
Use the following steps to format the server partition and install GlusterFS:
Use Secure Shell (SSH) to log in to the server.
By using the instructions from the previous article, install GlusterFS and format the
bricks partition as follows:
apt-get update
apt-get install -y glusterfs-server glusterfs-client
apt-get install -y xfsprogs
mkfs.xfs /dev/xvde1
mkdir /srv/.bricks
echo /dev/xvde1 /srv/.bricks xfs rw 0 1 >> /etc/fstab
mount /srv/.bricks
ufw allow in on eth2
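As a quick check that the brick filesystem mounted correctly, you can ask df which filesystem backs the directory:
df -h /srv/.bricks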
Use SSH to log in to either web01 or web02.
The following command instructs the GlusterFS cluster to trust the new server:
root@web02:~# gluster peer probe 192.168.0.3
peer probe: success
Add the brick to the volume by using the following command:
root@web02:~# gluster volume add-brick www replica 3 192.168.0.3:/srv/.bricks/www
volume add-brick: success
The command includes the following parts:
gluster - The command is for GlusterFS.
volume - The command is related to a volume.
add-brick - You are adding a brick to the volume.
www - This is the name of the volume.
replica 3 - After you add this brick, the volume keeps at least three copies of each file, one copy per brick, and in this case, one copy per server (because only one brick is on each server).
192.168.0.3:/srv/.bricks/www - This part of the command is the IP address of the Gluster server, followed by the absolute path to where the brick data is stored.
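To confirm that the volume now contains three bricks with a replica count of 3, you can inspect it; gluster volume info lists the volume type, the brick count, and each brick in order:
root@web02:~# gluster volume info www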
GlusterFS offers the following types of volume storage strategies:
Distributed - Each file is placed whole on a single brick, and files are spread across the bricks in the volume.
Replicated - Each file is copied to all of the bricks in a replica set.
Striped - Each file is split into chunks that are spread across the bricks.
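For comparison, creating a purely replicated or a purely distributed volume might look like the following sketches (the volume names are hypothetical, and because a brick directory can belong to only one volume, these are alternatives rather than a sequence). Omitting the replica keyword produces a distributed volume:
gluster volume create www-replicated replica 2 transport tcp 192.168.0.1:/srv/.bricks/www 192.168.0.2:/srv/.bricks/www
gluster volume create www-distributed transport tcp 192.168.0.1:/srv/.bricks/www 192.168.0.2:/srv/.bricks/www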
You can also combine strategies, such as replicated and distributed, as illustrated in the following example:
gluster volume create www replica 2 transport tcp \
    192.168.0.1:/srv/.bricks/www 192.168.0.2:/srv/.bricks/www \
    192.168.0.3:/srv/.bricks/www 192.168.0.4:/srv/.bricks/www
The replica number is the number of bricks that make up a replica set, that is, the group of bricks that together holds a full copy of the files. In the preceding example, 192.168.0.1 and 192.168.0.2 hold a full copy of the files, as do 192.168.0.3 and 192.168.0.4. The brick order is significant: if you ordered the bricks 1,3,2,4 instead, then bricks 1 and 3 would form one replica set and bricks 2 and 4 the other. With the original ordering, if servers 1 and 2 went down, you would lose half of your files and have two copies of the other half.
Combining replicated and distributed volumes gives you a little extra speed and more space, at the cost of some data safety.
A combination of striped and replicated volumes is recommended only when you have files that are bigger than a brick, or many large files that undergo heavy I/O.
The following commands help you find out more about what’s happening in your cluster. You will use these commands in later GlusterFS articles.
If you run the following command from any server, it shows all the peer servers that it knows about:
root@web01:~# gluster peer status
Number of Peers: 2

Hostname: 192.168.0.3
Uuid: ba502dc2-447f-466a-a732-df989e71b551
State: Peer in Cluster (Connected)

Hostname: 192.168.0.2
Uuid: 56e02356-d2c3-4787-ae25-6b46e867751a
State: Peer in Cluster (Connected)
The following command is a helpful troubleshooting command. It gives information about all the GlusterFS volumes and tasks queued and in progress.
root@web03:~# gluster volume status
Status of volume: www
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.2:/srv/.bricks/www              49152   Y       13673
Brick 192.168.0.1:/srv/.bricks/www              49152   Y       10249
Brick 192.168.0.3:/srv/.bricks/www              49153   Y       13783
NFS Server on localhost                         2049    Y       13793
Self-heal Daemon on localhost                   N/A     Y       13800
NFS Server on 192.168.0.2                       2049    Y       13900
Self-heal Daemon on 192.168.0.2                 N/A     Y       13907
NFS Server on 192.168.0.1                       2049    Y       10286
Self-heal Daemon on 192.168.0.1                 N/A     Y       10293

There are no active volume tasks
You can shrink volumes as needed while the cluster is online and available. Use the following command to remove a brick:
# gluster volume remove-brick <volName> <brickName> start
Running remove-brick with the start option automatically triggers a rebalance operation that migrates data from the removed bricks to the rest of the volume.
The following example removes brick 2:
root@web01:~# gluster volume remove-brick www replica 2 192.168.0.2:/srv/.bricks/www start
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
This command tells GlusterFS that the www volume now keeps only two copies of each file. It warns you that you might lose data and prompts you to continue.
If you want to remove a brick from a distributed volume, run the following command instead:
root@web01:~# gluster volume remove-brick www 192.168.0.2:/srv/.bricks/www start
You can view the status of the remove-brick operation by using the following command:
root@web01:~# watch gluster volume remove-brick www replica 2 192.168.0.2:/srv/.bricks/www status
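For a distributed volume, after the status output shows the data migration as completed, you typically finalize the removal with a commit. (The replica-reduction example above committed immediately, as its output shows.) A sketch:
root@web01:~# gluster volume remove-brick www 192.168.0.2:/srv/.bricks/www commit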
This section explains how to re-add a brick.
Try adding web02 back into the volume, as follows:
root@web02:~# gluster volume add-brick www replica 3 192.168.0.2:/srv/.bricks/www
volume add-brick: failed:
The command failed. Look at the logs on web02 to see why:
root@web02:/srv/.bricks# tail /var/log/glusterfs/*log -f | grep E
[2014-05-25 00:19:04.954410] I [input.c:36:cli_batch] 0-: Exiting with: 0
[2014-05-25 00:19:12.958620] I [input.c:36:cli_batch] 0-: Exiting with: 0
[2014-05-25 00:40:46.923747] E [glusterd-utils.c:5377:glusterd_is_path_in_use] 0-management: /srv/.bricks/www or a prefix of it is already part of a volume
[2014-05-25 00:40:46.923789] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Add brick', Status : -1
The issue is that /srv/.bricks/www still contains the data from the time when web02 was a member of the volume.
You need a clean place to store the data. The easiest way to clean up is to just remove all the data by using the following command:
root@web02:~# rm -rf /srv/.bricks/www
Warning: Be careful to perform this action on the correct host (web02, which is currently out of the volume). If you make a mistake, the next article shows you how to recover. Alternatively, you can move the www directory out of the way or add the brick by using a different directory.
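For example, to keep the old brick contents around instead of deleting them (the www-old name is only illustrative):
root@web02:~# mv /srv/.bricks/www /srv/.bricks/www-old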
Now that you have a clean location in which to store the brick, adding the brick is successful:
root@web01:/srv# gluster volume add-brick www replica 3 192.168.0.2:/srv/.bricks/www
volume add-brick: success
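After the brick is re-added, the self-heal daemon copies the existing files onto it in the background. Assuming your GlusterFS version includes the heal command, you can monitor that progress with:
root@web01:~# gluster volume heal www info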