HA NFS on Amazon Linux 2

From Michael's Information Zone
 
Revision as of 12:18, 15 December 2020

[1]

Packages

  • Will be using two t3a.large instances in an active/passive configuration.
  • Package installation
sudo yum upgrade -y && sudo reboot
sudo yum install pcs pacemaker fence-agents-all -y

Create Cluster

  • Authorize nodes[2]
 sudo passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

sudo systemctl enable --now pcsd.service

sudo pcs cluster auth ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
sudo pcs cluster setup --start --name nfs_cluster ip-192-168-17-132.us-east-2.compute.internal ip-192-168-17-164.us-east-2.compute.internal --force
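If the auth and setup commands succeeded, both nodes should show as online before continuing. A quick sanity check (node hostnames as in the example above):

```shell
# Confirm cluster membership and that both nodes are online
sudo pcs cluster status

# Check corosync membership from pcs' point of view
sudo pcs status corosync
```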

  • Here I disable STONITH since I am running an active/passive config. This is probably a very bad idea[3][4]
sudo pcs property set stonith-enabled=false
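To confirm the property actually took effect cluster-wide:

```shell
# Should report: stonith-enabled: false
sudo pcs property show stonith-enabled
```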

Volume Creation

  • Create the LVM volume and mount point, then deactivate the volume group[5].
sudo pvcreate /dev/nvme1n1
sudo vgcreate data /dev/nvme1n1
sudo lvcreate --size 195G --name nfs data
sudo mkfs.ext4 /dev/mapper/data-nfs
sudo mkdir -p /mnt/nfs
 sudo vgchange -an data
  0 logical volume(s) in volume group "data" now active
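Before handing the volume group over to the cluster, it is worth confirming the deactivation stuck. A quick check (the lv_active column should come back empty for the nfs volume):

```shell
# Show activation state of all LVs in the "data" VG
sudo lvs -o lv_name,lv_active data

# Summary view of the VG itself
sudo vgs data
```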
  • Configure LVM for clustering[6]
  • Add 'volume_list = []' to /etc/lvm/lvm.conf
sudo lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by:
  lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.
sudo pcs resource create nfs LVM volgrpname=data exclusive=true --group nfs_cluster
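The 'volume_list = []' edit above tells LVM not to activate any volume groups on its own at boot, leaving activation entirely to Pacemaker via the LVM resource. For reference, the relevant fragment of /etc/lvm/lvm.conf looks roughly like this (a sketch; your file's activation section will contain many other settings):

```
activation {
    # Empty list: no VGs are auto-activated outside the cluster manager
    volume_list = []
}
```

After changing lvm.conf, rebuild the initramfs (or reboot) so the setting also applies during early boot.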