

Gluster on Azure - Part 2

This is part 2 of installing and configuring Gluster on Azure. For information on setting up the VMs on Azure and installing Gluster, check out this article.

Once we’ve installed Gluster we need to configure it. The first step is to ensure that all machines can see each other. To do this we need the internal IP address (internal to Azure) of each server. You can get this information from the dashboard of each server VM or by running the ifconfig command on each VM.
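
For example, you can filter the ifconfig output down to just the address lines (eth0 is the usual interface name on these VMs, but yours may differ):

 [root]# ifconfig eth0 | grep inet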

1. From each server, make sure you can ping all the other servers in the cluster to confirm that you have the correct IP addresses.
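
For example, from gluster1, using the placeholder address of gluster2 from the hosts file below:

 [root]# ping -c 3 xxx.xx.xxx.19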

2. Once the IP addresses are confirmed, edit the hosts file to map the IPs to a simple name. Do this on all machines in the cluster.

 

 [root]# vi /etc/hosts
 

3. Add the IP addresses to the hosts file and save the file:

 

 # IP address   FQDN      Name
 xxx.xx.xxx.94  gluster1  gluster1
 xxx.xx.xxx.19  gluster2  gluster2
 

4. Now that the hosts file is set, we can connect the servers using the “gluster peer probe” command. We run it for every server we want in the cluster, including the local server itself.

 

 [root]# gluster peer probe gluster1 
 peer probe: success
 [root]# gluster peer probe gluster2 
 peer probe: success

 

5. Now we can check the connections and ensure that the state is “Connected” on all machines.

 
 [root]# gluster peer status
 Hostname: xxx.xx.xxx.19
 Uuid: 77d27dac-6544-47b9-8d50-29945008b24e
 State: Peer in Cluster (Connected)

 

6. Once all machines are connected we can create a volume that essentially joins all the drives together. Here’s the syntax of the command, followed by the command we run:

 
 gluster volume create NEW-VOLNAME [stripe COUNT] [replica COUNT] [transport <tcp | rdma>] NEW-BRICK ...
 [root]# gluster volume create glusteredData stripe 2 transport tcp  gluster1:/mnt/gluster gluster2:/mnt/gluster force

 

So the above creates a striped volume called “glusteredData” across 2 servers. The bricks (the storage directories on each server in the gluster) reference the mount points that we created in part 1.
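
As a side note, the same syntax supports replication instead of striping. If you wanted every file mirrored on both servers (at the cost of half the capacity), a sketch of the equivalent command would be:

 [root]# gluster volume create glusteredData replica 2 transport tcp gluster1:/mnt/gluster gluster2:/mnt/gluster force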

 

7. The last step on the server side is to start the volume.

 
 [root]# gluster volume start glusteredData

 

8. To test the gluster, you can run:

 
 [root]# gluster volume info 
 Volume Name: glusteredData 
 Type: Stripe 
 Volume ID: eb0948e5-11b3-484c-8855-bdc167be8f85 
 Status: Started 
 Number of Bricks: 1 x 2 = 2 
 Transport-type: tcp 
 Bricks: 
 Brick1: gluster1:/mnt/gluster 
 Brick2: gluster2:/mnt/gluster

 

Connecting to the Gluster from a Client

Similar to the Gluster server setup, we need to install the GlusterFS client packages on the client.
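
On a CentOS/RHEL-based image this would typically look like the following (package names are an assumption and may differ on other distributions):

 [root]# yum install glusterfs glusterfs-fuse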

(Optional) You can copy the server’s hosts file to the client. This way you can use the machine names to reference the Gluster nodes.
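
For example, from one of the servers (“glusterclient” is just a placeholder for your client’s name or IP):

 [root]# scp /etc/hosts root@glusterclient:/etc/hosts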

Next, create a mount point at /mnt/nas and mount the Gluster volume on it.
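
Creating the directory is just:

 [root]# mkdir -p /mnt/nas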

 

 mount -t glusterfs HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR
 [root]# mount -t glusterfs gluster1:/glusteredData /mnt/nas
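
If you want the mount to come back after a reboot, one possible /etc/fstab entry is shown below (the _netdev option tells the system to wait for the network before mounting):

 gluster1:/glusteredData /mnt/nas glusterfs defaults,_netdev 0 0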

 

And that’s it. You can check the size of the mounted volume by running:

 
 [root]# df -h 
 Filesystem               Size  Used  Avail  Use%  Mounted on 
 /dev/sda1                 31G  1.2G    29G    4%  / 
 none                     4.1k     0   4.1k    0%  /sys/fs/cgroup 
 udev                     866M  4.1k   866M    1%  /dev 
 tmpfs                    176M  345k   176M    1%  /run 
 /dev/sdb1                 74G   55M    71G    1%  /mnt 
 gluster1:/glusteredData  1.5T  1.2T   220G   85%  /mnt/nas
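
As a final sanity check, you can write a file from the client and confirm it appears through the volume (the file name here is just an example):

 [root]# touch /mnt/nas/hello.txt
 [root]# ls /mnt/nas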
