==Purpose==
General notes on using VDO.<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-ig-administering-vdo</ref><ref>https://blog.delouw.ch/2018/12/17/using-data-deduplication-and-compression-with-vdo-on-rhel-7-and-8/</ref>

==TRIM==
This is critical and not well documented. For VDO to reclaim the space behind deleted data, the filesystem on top of it must be trimmed. Red Hat recommends using the systemd timer for this:
<pre>systemctl enable --now fstrim.timer</pre>
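To confirm the schedule (the timer typically defaults to a weekly run), the timer and the unit it triggers can be inspected; nothing here is VDO-specific:
<pre>
# Show the fstrim timer and its next activation time
systemctl list-timers fstrim.timer
# Show the service unit the timer triggers
systemctl cat fstrim.service
</pre>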
Here is a before-and-after example.
*Here I have deleted a bunch of data I no longer need. The OS reports only 63G in use, but VDO still reports 121.6G used.
<pre>
[root@localhost ~]# vdostats --s && df -hT /mnt/test/
Device                    Size    Used  Available Use% Space saving%
/dev/mapper/vdo_test      240.0G  121.6G   118.4G  50%           11%
Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/test-lv_test ext4  503G   63G  416G  14% /mnt/test
</pre>
*After running a manual trim, VDO reports 66.8G used.
<pre>
[root@localhost ~]# fstrim -v /mnt/test/
/mnt/test/: 440.7 GiB (473135718400 bytes) trimmed
[root@localhost ~]# vdostats --s && df -hT /mnt/test/
Device                    Size    Used  Available Use% Space saving%
/dev/mapper/vdo_test      240.0G   66.8G   173.2G  27%           18%
Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/test-lv_test ext4  503G   63G  416G  14% /mnt/test
</pre>
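An alternative to periodic trimming is mounting with the discard option, which issues discards as blocks are freed; it keeps VDO's usage figures current at some write-performance cost. A sketch of the corresponding fstab line, reusing the device and mount point from the example above:
<pre>
# /etc/fstab - online discard instead of relying on fstrim.timer
/dev/mapper/test-lv_test  /mnt/test  ext4  defaults,discard  0 0
</pre>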
==Encryption with LUKS==
This is a work in progress as my test backup drive keeps getting corrupted.
*Working with a 2TB external disk.
*Created a GPT label on the disk (/dev/sde in this example) with parted, then created a single partition spanning all available space.
*Set up encryption.
<pre>
[root@natasha ~]# cryptsetup luksFormat /dev/sde1

WARNING!
========
This will overwrite data on /dev/sde1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sde1:
Verify passphrase:
</pre>
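If the volume should be unlockable at boot so the rest of the stack can assemble, an /etc/crypttab entry is needed; a minimal sketch, assuming an interactive passphrase prompt (replace the UUID placeholder with the real value):
<pre>
# Look up the LUKS UUID of the partition
cryptsetup luksUUID /dev/sde1
# /etc/crypttab - "none" prompts for the passphrase at boot
enc_backup  UUID=<luks-uuid-from-above>  none  luks
</pre>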
*Open the encrypted volume, and set up VDO.
<pre>
[root@natasha ~]# cryptsetup luksOpen /dev/sde1 enc_backup
Enter passphrase for /dev/sde1:
[root@natasha ~]# vdo create --name=backup_vdo --device=/dev/mapper/enc_backup --vdoLogicalSize=4T
Creating VDO backup_vdo
      The VDO volume can address 1 TB in 1022 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO backup_vdo
Starting compression on VDO backup_vdo
VDO instance 0 volume is ready at /dev/mapper/backup_vdo
</pre>
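The new volume can be sanity-checked before layering anything on top; vdo status prints the configuration and runtime statistics for a named volume:
<pre>
# Inspect the freshly created volume
vdo status --name=backup_vdo
</pre>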
*Create the LVM stack (PV, VG, LV) on top of VDO, then make the filesystem. Passing -E nodiscard keeps mkfs.ext4 from issuing discards across the entire 4T logical space at format time.
<pre>
[root@natasha ~]# pvcreate /dev/mapper/backup_vdo
  Physical volume "/dev/mapper/backup_vdo" successfully created.
[root@natasha ~]# vgcreate backup /dev/mapper/backup_vdo
  Volume group "backup" successfully created
[root@natasha ~]# lvcreate -l 100%VG backup --name backuptest
  Logical volume "backuptest" created.
[root@natasha ~]# mkfs.ext4 -E nodiscard /dev/mapper/backup-backuptest
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1073740800 4k blocks and 268435456 inodes
Filesystem UUID: 30475cde-9f57-4a6f-8067-8dbbcbef127f
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
</pre>
*After mounting everything I have the following tree.
<pre>
[root@natasha ~]# lsblk /dev/sde
NAME                      MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
sde                         8:64   0   2T  0 disk
└─sde1                      8:65   0   2T  0 part
  └─enc_backup            253:9    0   2T  0 crypt
    └─backup_vdo          253:10   0   4T  0 vdo
      └─backup-backuptest 253:11   0   4T  0 lvm   /mnt/backup
</pre>
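For this mount to survive a reboot, the fstab entry should wait for the VDO service; the Red Hat guide referenced above suggests options along these lines (treat it as a sketch here, since the LUKS layer must also be open before vdo.service starts):
<pre>
# /etc/fstab - do not mount until vdo.service has brought up the volume
/dev/mapper/backup-backuptest  /mnt/backup  ext4  defaults,x-systemd.requires=vdo.service,x-systemd.device-timeout=0  0 0
</pre>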
*At this point I am writing random data to the disk to see if I can trigger the corruption I was getting previously.
==Grow Logical VDO Size==
I have a Samba share used to mirror a directory from another server, set up to test VDO. The problem is that the kernel reports the disk is full, but physical usage is nowhere near full.
*Size of the disk.
<pre>
sudo lsblk /dev/nvme2n1
NAME       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme2n1    259:3    0   300G  0 disk
└─datafeed 253:0    0 295.4G  0 vdo  /mnt/data
</pre>
*Here I am told the mount point is full.
<pre>
sudo df -hT /dev/mapper/datafeed
Filesystem           Type Size  Used Avail Use% Mounted on
/dev/mapper/datafeed xfs  296G  296G  160K 100% /mnt/data
</pre>
*But vdostats shows the physical storage is far from full.
<pre>
sudo vdostats --human-readable
Device               Size   Used  Available Use% Space saving%
/dev/mapper/datafeed 300.0G 80.3G    219.7G  26%           74%
</pre>
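The two reports are consistent: with 74% space savings, the roughly 296G the filesystem has written occupies about 296G × (1 − 0.74) ≈ 77G of physical space, close to the 80.3G vdostats shows. The filesystem has run out of logical space, not physical space, so the fix is to grow the VDO logical size.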
*Now I grow the VDO logical size.
<pre>
sudo vdo growLogical --name=datafeed --vdoLogicalSize=500G
sudo lsblk /dev/nvme2n1
NAME       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme2n1    259:3    0  300G  0 disk
└─datafeed 253:0    0  500G  0 vdo  /mnt/data
</pre>
*Grow XFS and check the changes.
<pre>
sudo xfs_growfs /dev/mapper/datafeed
.........
sudo df -hT /dev/mapper/datafeed
Filesystem           Type Size  Used Avail Use% Mounted on
/dev/mapper/datafeed xfs  500G  296G  205G  60% /mnt/data
</pre>
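Note that a VDO volume can be grown but not shrunk, so logical oversubscription should be increased conservatively. If the backing device itself is later enlarged, the physical side grows the same way; a sketch, assuming the underlying disk has already been expanded:
<pre>
# Grow the physical side after enlarging the backing device
sudo vdo growPhysical --name=datafeed
</pre>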