Logical Volume Management
Notes
- If you make changes on the fly and are unable to mount volumes, try "systemctl daemon-reload" first; systemd generates mount units from /etc/fstab and may still be holding a stale view.
Create Mirrored Volume
Purpose
To mirror two 4TB disks.
Steps
[root@natasha ~]# pvcreate /dev/sd[b-c]
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
[root@natasha ~]# vgcreate DATA /dev/sd[b-c]
  Volume group "DATA" successfully created
[root@natasha ~]# lvcreate -m1 -L 3.63T -n mirror1 DATA
  Rounding up size to full physical extent 3.63 TiB
  Logical volume "mirror1" created.
[root@natasha ~]# mkfs.ext4 /dev/mapper/DATA-mirror1
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 974420992 4k blocks and 243605504 inodes
Filesystem UUID: 4c050c23-6e5b-4747-9ec5-39ea112b39ba
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

[root@natasha ~]# mkdir /data
[root@natasha ~]# mount /dev/mapper/DATA-mirror1 /data/
[root@natasha ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  3.8G     0  3.8G   0% /dev
tmpfs                     3.8G     0  3.8G   0% /dev/shm
tmpfs                     3.8G  8.8M  3.8G   1% /run
tmpfs                     3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/OS-root       103G  2.7G  100G   3% /
/dev/sda2                 976M  183M  727M  21% /boot
/dev/sda1                 599M  6.8M  593M   2% /boot/efi
tmpfs                     770M     0  770M   0% /run/user/0
/dev/mapper/DATA-mirror1  3.6T   89M  3.4T   1% /data
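The mount above will not survive a reboot. A minimal sketch of a persistent /etc/fstab entry, reusing the filesystem UUID that mkfs.ext4 reported above:

# UUID taken from the mkfs.ext4 output above
echo 'UUID=4c050c23-6e5b-4747-9ec5-39ea112b39ba /data ext4 defaults 0 2' >> /etc/fstab
mount -a    # verify the entry mounts cleanly before rebooting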
Troubleshooting
The following fields are helpful for monitoring RAID status. They can be pulled using
lvs -o <field_name>
Run lvs -o help for the full list of available fields.
- raid_mismatch_count - For RAID, number of mismatches found or repaired.
- raid_sync_action - For RAID, the current synchronization action being performed.
- raid_write_behind - For RAID1, the number of outstanding writes allowed to writemostly devices.
- raid_min_recovery_rate - For RAID1, the minimum recovery I/O load in kiB/sec/disk.
- raid_max_recovery_rate - For RAID1, the maximum recovery I/O load in kiB/sec/disk.
e.g.
[root@natasha ~]# lvs -o raid_mismatch_count
  Mismatches
           0
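If mismatches start showing up, the RAID LV can be scrubbed with lvchange --syncaction (a check pass only counts mismatches; repair also corrects them). A sketch against the volume from this page:

lvchange --syncaction check DATA/mirror1    # read both legs and count mismatches
lvs -o raid_sync_action,raid_mismatch_count DATA/mirror1
lvchange --syncaction repair DATA/mirror1   # rewrite any mismatched blocks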
Break Mirrored Volume
[root@natasha ~]# lvs
  LV      VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mirror1 DATA rwi-aor---    3.63t                                    100.00
  root    OS   -wi-ao---- <102.40g
  swap    OS   -wi-ao----    7.80g
[root@natasha ~]# lvdisplay
...
  --- Logical volume ---
  LV Path                /dev/DATA/mirror1
  LV Name                mirror1
  VG Name                DATA
  LV UUID                1GU4M2-EIsD-e4zx-sXtk-9gQ9-KfDT-eR8Uve
  LV Write Access        read/write
  LV Creation host, time natasha, 2019-10-24 19:26:26 -0400
  LV Status              available
  # open                 1
  LV Size                3.63 TiB
  Current LE             951583
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8
Convert the mirror back to a single linear volume, then remove the freed disk from the volume group and wipe its LVM label:

lvconvert --type linear DATA/mirror1

vgreduce DATA /dev/mapper/disk2
  Error reading device /dev/mapper/enc_backup at 0 length 512.
  Error reading device /dev/mapper/enc_backup at 0 length 4096.
  Error reading device /dev/mapper/backup_vdo at 0 length 512.
  Error reading device /dev/mapper/backup_vdo at 0 length 4096.
  Removed "/dev/mapper/disk2" from volume group "DATA"

pvremove /dev/mapper/disk2
  Error reading device /dev/mapper/enc_backup at 0 length 512.
  Error reading device /dev/mapper/enc_backup at 0 length 4096.
  Error reading device /dev/mapper/backup_vdo at 0 length 512.
  Error reading device /dev/mapper/backup_vdo at 0 length 4096.
  Labels on physical volume "/dev/mapper/disk2" successfully wiped.

(The "Error reading device" lines refer to unrelated mapper devices; the operations themselves succeeded.)
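To confirm the conversion took, the segment type can be checked; it should now read linear instead of raid1:

lvs -o lv_name,segtype DATA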
Create Cache
[1]Working off the mirrored LVM volume created previously, I needed to increase performance on these SMR drives. I had an NVMe drive connected via USB 3.0 that would work for now.
Read Cache
pvcreate /dev/sda1
vgextend DATA /dev/sda1
lvcreate -n cache-write -L 50GB DATA /dev/sda1
lvcreate -n cache-read -L 100GB DATA /dev/sda1
lvconvert --type cache --cachevol cache-read --chunksize=256 DATA/mirror1
[root@natasha ~]# lvs -o cache_used_blocks,cache_read_hits,cache_read_misses
  CacheUsedBlocks  CacheReadHits  CacheReadMisses
              162             89              205
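To detach the cache later (for example before unplugging the NVMe drive), lvconvert offers two documented paths; a quick sketch:

lvconvert --splitcache DATA/mirror1   # detach but keep the cache LV
lvconvert --uncache DATA/mirror1      # detach and delete the cache LV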
Fix "excluded by a filter"
If pvcreate rejects a disk with a "Device excluded by a filter" error, the disk usually still carries an old partition table or filesystem signature; wiping the signatures clears the rejection:
[root@natasha ~]# wipefs -a /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success
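With the old signatures wiped, the disk should pass the filter again:

pvcreate /dev/sdb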
Grow logical volume
Purpose
A couple of small servers I had built using the VMware defaults started to run out of space, and every time this happens I fail to record the steps and have to look up a refresher. This time I will record them.
Steps
NOTE: The following uses defaults. If you need to look up specific information on your system, use vgdisplay and lvdisplay.
- Create a new partition on the disk
fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000606ed

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    20971519     9972736   8e  Linux LVM

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p):
Using default response p
Partition number (3,4, default 3):
First sector (20971520-41943039, default 20971520):
Using default value 20971520
Last sector, +sectors or +size{K,M,G} (20971520-41943039, default 41943039):
Using default value 41943039
Partition 3 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
- Run partprobe so the kernel knows of the new partition (e.g. partprobe /dev/sda)
- Add the partition as a physical volume for LVM
pvcreate /dev/sda3
- Add the new physical volume to the volume group
vgextend centos /dev/sda3
- Extend the logical volume
lvextend /dev/centos/root /dev/sda3
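lvextend only grows the logical volume; the filesystem on top still needs to be resized. A sketch assuming the CentOS 7 default of XFS on the root LV (use resize2fs for ext4); alternatively, lvextend -r (--resizefs) does both steps in one command:

xfs_growfs /    # XFS grows via the mount point, not the device

If your lvm2 version insists on an explicit size in the previous step, lvextend -l +100%FREE /dev/centos/root is the usual incantation.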
VMware Specific
To detect a resized disk running on VMware, you need to rescan the SCSI device before expanding the volume.[3]
echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
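If the disk is not at SCSI address 0:0:0:0, or several disks were resized, a hypothetical loop over every attached SCSI device accomplishes the same thing:

# rescan every SCSI device; harmless for disks that did not change
for dev in /sys/class/scsi_device/*/device/rescan; do
    echo 1 > "$dev"
done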