VDO
Purpose
General notes on using VDO [1] [2]
Encryption with LUKS
This is a work in progress as my test backup drive keeps getting corrupted.
- Working with a 2TB external disk
- Created a GPT partition table on the disk (in this example /dev/sde) using parted, then created a single partition that used all available space; a sketch of those commands follows.
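A minimal sketch of that partitioning step, assuming the disk is /dev/sde and that parted is used for both the label and the partition; adjust the device name to match your disk.

# Hypothetical recreation of the partitioning step; /dev/sde is an assumption.
parted /dev/sde mklabel gpt
parted -a optimal /dev/sde mkpart primary 0% 100%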
- Set up encryption.
[root@natasha ~]# cryptsetup luksFormat /dev/sde1

WARNING!
========
This will overwrite data on /dev/sde1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sde1:
Verify passphrase:
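Before opening the volume, the LUKS header can be inspected as a quick sanity check; this verification step is my own addition and was not part of the original procedure.

# Optional check (not in the original steps): confirm the LUKS header looks sane.
cryptsetup luksDump /dev/sde1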
- Open the encrypted volume and set up VDO.
[root@natasha ~]# cryptsetup luksOpen /dev/sde1 enc_backup
Enter passphrase for /dev/sde1:
[root@natasha ~]# vdo create --name=backup_vdo --device=/dev/mapper/enc_backup --vdoLogicalSize=4T
Creating VDO backup_vdo
      The VDO volume can address 1 TB in 1022 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO backup_vdo
Starting compression on VDO backup_vdo
VDO instance 0 volume is ready at /dev/mapper/backup_vdo
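As an extra sanity check (my own addition, not part of the original steps), the new VDO volume can be inspected before layering LVM on top of it.

# Hypothetical verification step: show VDO status and current space usage.
vdo status --name=backup_vdo | head
vdostats --human-readable /dev/mapper/backup_vdo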
- Create the LVM physical volume, volume group, and logical volume, then format it with ext4.
[root@natasha ~]# pvcreate /dev/mapper/backup_vdo
  Physical volume "/dev/mapper/backup_vdo" successfully created.
[root@natasha ~]# vgcreate backup /dev/mapper/backup_vdo
  Volume group "backup" successfully created
[root@natasha ~]# lvcreate -l 100%VG backup --name backuptest
  Logical volume "backuptest" created.
[root@natasha ~]# mkfs.ext4 -E nodiscard /dev/mapper/backup-backuptest
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1073740800 4k blocks and 268435456 inodes
Filesystem UUID: 30475cde-9f57-4a6f-8067-8dbbcbef127f
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
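Note that mkfs.ext4 is run with -E nodiscard, presumably so that it does not issue discards across the whole 4 TB logical space at creation time. A sketch of how the result could be mounted and verified follows; the /mnt/backup mount point is my own choice and not from the original notes.

# Hypothetical mount-and-verify step; /mnt/backup is an assumed mount point.
mkdir -p /mnt/backup
mount /dev/mapper/backup-backuptest /mnt/backup
df -hT /mnt/backup
vdostats --human-readable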
Grow Logical VDO Size
I have a Samba share used to mirror a directory from another server; it was intended to test VDO. The problem is that the kernel reports the disk is full, but the physical usage is nowhere near full.
- Size of the disk.
sudo lsblk /dev/nvme2n1
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme2n1      259:3    0   300G  0 disk
└─datafeed   253:0    0 295.4G  0 vdo  /mnt/data
- Here df reports that the mount point is full.
sudo df -hT /dev/mapper/datafeed
Filesystem           Type  Size  Used Avail Use% Mounted on
/dev/mapper/datafeed xfs   296G  296G  160K 100% /mnt/data
- But when checking vdostats, we see the volume is largely empty.
sudo vdostats --human-readable
Device                    Size      Used Available Use%  Space saving%
/dev/mapper/datafeed    300.0G     80.3G    219.7G  26%            74%
- Now I grow the VDO logical size.
sudo vdo growLogical --name=datafeed --vdoLogicalSize=500G
sudo lsblk /dev/nvme2n1
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme2n1      259:3    0  300G  0 disk
└─datafeed   253:0    0  500G  0 vdo  /mnt/data
- Grow XFS and check changes
sudo xfs_growfs /dev/mapper/datafeed
.........
sudo df -hT /dev/mapper/datafeed
Filesystem           Type  Size  Used Avail Use% Mounted on
/dev/mapper/datafeed xfs   500G  296G  205G  60% /mnt/data
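Growing the logical size only changes what the filesystem can address; the physical device is still 300G, so the volume is now overcommitted and physical usage needs to be watched. A follow-up check (my own addition) should still show roughly the same physical usage as before the grow.

# Hypothetical follow-up: confirm physical usage is unchanged and keep an eye on Use%.
sudo vdostats --human-readable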