HA NFS on Amazon Linux 2

From Michael's Information Zone

  • AWS does not support virtual IPs in their VPCs. Even with src/dst checks disabled, this will not work.[1]

[1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-nfsserver-haaa
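
A common workaround, not attempted in this article, is to skip ARP-based virtual IPs entirely and have the active node claim a secondary private IP through the EC2 API. A minimal sketch, assuming the AWS CLI is installed and the instance role allows ec2:AssignPrivateIpAddresses; the ENI ID below is hypothetical:

# Claim the floating address for this node's interface;
# --allow-reassignment lets it move even if the peer node still holds it.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-addresses 192.168.17.5 \
    --allow-reassignment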

Packages

  • This setup uses two t3a.large instances in an active/passive configuration.
  • Package installation
sudo yum upgrade -y && sudo reboot
sudo yum install pcs pacemaker fence-agents-all -y
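
To confirm the cluster packages are in place before continuing, a quick check on each node:

# All three should report an installed version, not "not installed"
rpm -q pcs pacemaker fence-agents-all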

Create Cluster

  • Authorize nodes[2]. The hacluster password must be set on both nodes.
sudo passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

sudo systemctl enable --now pcsd.service

sudo pcs cluster auth ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
sudo pcs cluster setup --start --name nfs_cluster ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
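
At this point both nodes should report as online; a quick way to confirm, run from either node:

sudo pcs status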

  • Here I disable STONITH since I am running an active/passive config. This is probably a very bad idea[3][4].
sudo pcs property set stonith-enabled=false
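
The property change can be confirmed with:

sudo pcs property list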

Volume Creation

  • Create the LVM volume, file system, and mount point, then deactivate the volume group[5].

[5] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-NFSsharesetup-HAAA
sudo pvcreate /dev/nvme1n1
sudo vgcreate data /dev/nvme1n1
sudo lvcreate --size 195G --name nfs data
sudo mkfs.ext4 /dev/mapper/data-nfs
sudo mkdir -p /mnt/nfs
sudo vgchange -an data
  0 logical volume(s) in volume group "data" now active
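
Before handing the volume group to the cluster, it is worth confirming the logical volume really is inactive; the fifth character of the attribute string is '-' rather than 'a' for an inactive volume:

sudo lvs -o lv_name,lv_attr data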

Volume Clustering

  • Configure LVM for clustering[6]
  • Add 'volume_list = []' to /etc/lvm/lvm.conf so that only the cluster, not the OS, activates the volume group (see the initramfs note after the commands below)
sudo lvmconf --enable-halvm --services --startstopservices
sudo pcs resource create nfs LVM volgrpname=data exclusive=true --group nfs_cluster
sudo pcs resource create nfsshare Filesystem device=/dev/data/nfs directory=/mnt/nfs fstype=ext4 --group nfs_cluster
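
One step from the referenced Red Hat guide[6] is easy to miss: after changing lvm.conf, and before creating the resources above, it rebuilds the initramfs so the boot image does not activate the volume group on its own, then reboots. A sketch of that step:

sudo dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
sudo reboot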

NFS Clustering

sudo pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/mnt/nfs nfs_no_notify=true --group nfs_cluster
sudo pcs resource create nfs-docker1 exportfs clientspec=192.168.17.147/255.255.255.255 options=rw,sync,no_root_squash directory=/mnt/nfs fsid=0 --group nfs_cluster
sudo pcs resource create nfs-docker2 exportfs clientspec=192.168.19.27/255.255.255.255 options=rw,sync,no_root_squash directory=/mnt/nfs fsid=0 --group nfs_cluster --force
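
On whichever node currently holds the group, the active exports can be confirmed with:

sudo exportfs -v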

VIP

sudo pcs resource create nfs_ip IPaddr2 ip=192.168.17.5 cidr_netmask=24 --group nfs_cluster
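
With the VIP resource in place, a client can mount through the floating address, and failover can be exercised by putting the active node into standby. A sketch, reusing the node and client addresses from the examples above; note that with NFSv4 the fsid=0 export is the pseudo-root, so the mount path may be / rather than /mnt/nfs:

# From a client machine
sudo mount -t nfs 192.168.17.5:/mnt/nfs /mnt

# From either cluster node: push the group to the peer, then recover
sudo pcs cluster standby ip-192-168-17-132.us-east-2.compute.internal
sudo pcs status
sudo pcs cluster unstandby ip-192-168-17-132.us-east-2.compute.internal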