HA NFS on Amazon Linux 2

*<b>AWS does not support virtual IPs in their VPCs, even with src/dst checks disabled. This setup will not work as written.</b>
 
<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-nfsserver-haaa</ref>
 
==Packages==
 
*Two t3a.large instances will be used in an active/passive config.

*Package installation
 
<pre>
sudo yum upgrade -y && sudo reboot
sudo yum install pcs pacemaker fence-agents-all -y
</pre>
==Create Cluster==

*Authorize nodes (the hacluster password and pcsd service need to be in place on both nodes before running the auth command from one of them)<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-startup-HAAA#s1-clusterinstall-HAAA</ref>

<pre>
 
sudo passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

sudo systemctl enable --now pcsd.service

sudo pcs cluster auth ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
sudo pcs cluster setup --start --name nfs_cluster ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
</pre>
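*Both nodes should report as online before continuing; a quick check:

<pre>
sudo pcs cluster status
sudo pcs status nodes
</pre>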
*Here I disable stonith since I am performing an active/passive config. This is probably a very bad idea<ref>https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch05.html</ref><ref>https://access.redhat.com/solutions/15575#fencedevicetypes</ref>

<pre>
sudo pcs property set stonith-enabled=false
</pre>
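*Confirm the property took effect:

<pre>
sudo pcs property show
</pre>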
==Volume Creation==

*Create the LVM volume and mount point, then deactivate the volume group<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-NFSsharesetup-HAAA</ref>.

<pre>
sudo pvcreate /dev/nvme1n1
sudo vgcreate data /dev/nvme1n1
sudo lvcreate --size 195G --name nfs data
sudo mkfs.ext4 /dev/mapper/data-nfs
sudo mkdir -p /mnt/nfs
sudo vgchange -an data
  0 logical volume(s) in volume group "data" now active
</pre>
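*With the volume group deactivated, lvscan should report the logical volume as inactive until the cluster activates it:

<pre>
sudo lvscan
</pre>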
==Volume Clustering==

*Configure LVM for clustering<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-exclusiveactivenfs-HAAA</ref>
*Add 'volume_list = []' to /etc/lvm/lvm.conf so the data volume group is never auto-activated outside the cluster (see the snippet below).
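*For example, the relevant line in /etc/lvm/lvm.conf (a minimal snippet; the rest of the file stays as generated):

<pre>
# /etc/lvm/lvm.conf
# An empty volume_list means no volume group is auto-activated at boot;
# the cluster's LVM resource activates "data" on whichever node owns it.
volume_list = []
</pre>

*Then enable HA-LVM and create the volume group and filesystem resources: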
<pre>
sudo lvmconf --enable-halvm --services --startstopservices
sudo pcs resource create nfs LVM volgrpname=data exclusive=true --group nfs_cluster
sudo pcs resource create nfsshare Filesystem device=/dev/data/nfs directory=/mnt/nfs fstype=ext4 --group nfs_cluster
</pre>
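*Whichever node is currently running the nfs_cluster group should now have the volume active and mounted:

<pre>
sudo pcs status resources
findmnt /mnt/nfs
</pre>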
==NFS Clustering==

<pre>
sudo pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/mnt/nfs nfs_no_notify=true --group nfs_cluster
sudo pcs resource create nfs-docker1 exportfs clientspec=192.168.17.147/255.255.255.255 options=rw,sync,no_root_squash directory=/mnt/nfs fsid=0 --group nfs_cluster
sudo pcs resource create nfs-docker2 exportfs clientspec=192.168.19.27/255.255.255.255 options=rw,sync,no_root_squash directory=/mnt/nfs fsid=0 --group nfs_cluster --force
</pre>
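*With the group running, the exports should be visible on the active node:

<pre>
sudo exportfs -v
</pre>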
==VIP==

<pre>
sudo pcs resource create nfs_ip IPaddr2 ip=192.168.17.5 cidr_netmask=24 --group nfs_cluster
</pre>
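*As noted at the top, other instances in the VPC will not be able to reach this address, but the IPaddr2 agent should still bring it up on the active node, which can be checked locally:

<pre>
ip -4 addr show | grep 192.168.17.5
</pre>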
