I am having exactly the same problem here.
I have two servers running Gluster 11, each with 6 DC (datacenter) SSDs as bricks:
/dev/sdc1 1,8T 1,7T 120G 94% /data1
/dev/sdd1 1,8T 1,7T 123G 94% /data2
/dev/sdb1 1,8T 361G 1,4T 21% /data3
/dev/sdf1 1,8T 358G 1,4T 21% /data4
/dev/sdg1 1,8T 362G 1,4T 21% /data5
/dev/sdh1 1,8T 356G 1,4T 20% /data6
As you can see, the first two bricks are far more heavily used than the others.
I have a Gluster volume that uses these bricks:
gluster vol status
Volume VMS is not started
Status of volume: stg-vms
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/data1/vms                   59976     0          Y       3896
Brick gluster2:/data1/vms                   52513     0          Y       2565
Brick gluster1:/data2/vms                   50314     0          Y       3978
Brick gluster2:/data2/vms                   52867     0          Y       2652
Brick gluster1:/data3/vms                   51747     0          Y       4071
Brick gluster2:/data3/vms                   53994     0          Y       2741
Brick gluster1:/data4/vms                   52358     0          Y       161845
Brick gluster2:/data4/vms                   54324     0          Y       1570340
Brick gluster1:/data5/vms                   54552     0          Y       161878
Brick gluster2:/data5/vms                   54510     0          Y       1570373
Brick gluster1:/data6/vms                   57117     0          Y       161911
Brick gluster2:/data6/vms                   54696     0          Y       1570406
Self-heal Daemon on localhost               N/A       N/A        Y       4106
Self-heal Daemon on gluster2                N/A       N/A        Y       2775
My question is:
If I run gluster vol rebalance VMS fix-layout start and then gluster vol rebalance VMS start, will there be any impact on the VM disks during the rebalance process?
Can I do this safely?
Thanks