NetApp Disk Management
Moving Disks Between Aggregates
The vendor that migrated a NetApp appliance created two aggregates: one with SSD caching and one without. The aggregate without the cache was used for virtual desktops, and mechanical disks without caching are useless for that workload.
I wanted to delete the HDD-only aggregate and move its disks to the aggregate with flash.
NOTE: This appliance is a SnapMirror destination and is not used in production. When I delete a volume I am deleting a SnapMirror copy that will be recreated.
- I moved all the volumes I could until the destination aggregate ran out of space.
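Volume moves between aggregates are non-disruptive with volume move. A minimal sketch, assuming a hypothetical SVM svm_vdi and volume vol_desktops01 (substitute real names from volume show); the second command tracks the move's progress:

drntapclus02::> volume move start -vserver svm_vdi -volume vol_desktops01 -destination-aggregate aggr1_drntapclus02_01
drntapclus02::> volume move show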
- Removed the SnapMirror relationships for the remaining volumes, then deleted those volumes.
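From the destination side that teardown looks roughly like the following, reusing the hypothetical names above; snapmirror release would also be run on the source cluster to clean up its relationship metadata:

drntapclus02::> snapmirror delete -destination-path svm_vdi:vol_desktops01
drntapclus02::> volume offline -vserver svm_vdi -volume vol_desktops01
drntapclus02::> volume delete -vserver svm_vdi -volume vol_desktops01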
- Deleted the HDD-only aggregate.
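An aggregate has to be empty and offline before it can be deleted, so this step is roughly the following; the delete prompts for confirmation before destroying the aggregate:

drntapclus02::> storage aggregate offline -aggregate aggr1_drntapclus02_02
drntapclus02::> storage aggregate delete -aggregate aggr1_drntapclus02_02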
- Using an SSH console, I looked up which disks were available. The aggregate I deleted was aggr1_drntapclus02_02, and its disks were owned by drntapclus02-02.
drntapclus02::> storage aggregate show-spare-disks

Original Owner: drntapclus02-01
 Pool0
  Spare Pool

                                                             Usable Physical
 Disk             Type   Class          RPM Checksum           Size     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- --------
 1.0.23           SAS    performance  15000 block            1.09TB   1.09TB zeroed
 1.1.2            SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.19           SAS    performance  10000 block            1.09TB   1.09TB zeroed
 1.0.3            SSD    solid-state      - block           372.4GB  372.6GB zeroed

Original Owner: drntapclus02-02
 Pool0
  Spare Pool

                                                             Usable Physical
 Disk             Type   Class          RPM Checksum           Size     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- --------
 2.10.14          BSAS   capacity      7200 block            1.62TB   1.62TB zeroed
 2.10.15          BSAS   capacity      7200 block            1.62TB   1.62TB zeroed
 2.10.16          BSAS   capacity      7200 block            1.62TB   1.62TB zeroed
 2.10.17          BSAS   capacity      7200 block            1.62TB   1.62TB zeroed
 1.0.10           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.11           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.12           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.17           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.18           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.19           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.20           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.0.21           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.5            SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.7            SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.11           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.12           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.14           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.17           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.18           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.20           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.21           SAS    performance  10000 block            1.09TB   1.09TB not zeroed
 1.1.22           SAS    performance  10000 block            1.09TB   1.09TB zeroed
 1.1.23           SAS    performance  10000 block            1.09TB   1.09TB zeroed
 1.0.0            SSD    solid-state      - block           372.4GB  372.6GB zeroed
28 entries were displayed.
I can verify a disk's owner by checking 1.0.10:
drntapclus02::> disk show 1.0.10
                  Disk: 1.0.10
        Container Type: spare
            Owner/Home: drntapclus02-02 / drntapclus02-02
...
- At this point I can remove the ownership and change the owner. I'm not sure how to do this in bulk, since passing multiple disks to the command didn't seem to work, so to protect the system from my ignorance I decided to update each disk one at a time.
drntapclus02::> disk removeowner -disk 1.0.10
drntapclus02::> disk assign -pool 0 1.0.10
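For what it's worth, ONTAP does have a bulk form of disk assign that takes a count of unowned disks instead of a disk name; something like the command below should grab a batch of unowned spares in one shot, though I have not verified it on this system:

drntapclus02::> storage disk assign -count 17 -owner drntapclus02-01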
- I then added the disks to the aggregate with the flash cache. This appears to handle the layout automatically, creating RAID groups and zeroing the drives before adding them.
drntapclus02::> aggr add-disks -aggregate aggr1_drntapclus02_01 -diskcount 17 -disktype SAS

Info: Disks would be added to aggregate "aggr1_drntapclus02_01" on node
      "drntapclus02-01" in the following manner:

      First Plex

        RAID Group rg2, 16 disks (block checksum, raid_dp)
                                                            Usable Physical
          Position   Disk                      Type           Size     Size
          ---------- ------------------------- ---------- -------- --------
          dparity    1.1.2                     SAS               -        -
          parity     1.0.10                    SAS               -        -
          data       1.1.5                     SAS          1.09TB   1.09TB
          data       1.0.12                    SAS          1.09TB   1.09TB
          data       1.1.7                     SAS          1.09TB   1.09TB
          data       1.0.17                    SAS          1.09TB   1.09TB
          data       1.1.11                    SAS          1.09TB   1.09TB
          data       1.0.18                    SAS          1.09TB   1.09TB
          data       1.1.12                    SAS          1.09TB   1.09TB
          data       1.0.19                    SAS          1.09TB   1.09TB
          data       1.1.14                    SAS          1.09TB   1.09TB
          data       1.0.20                    SAS          1.09TB   1.09TB
          data       1.1.17                    SAS          1.09TB   1.09TB
          data       1.0.21                    SAS          1.09TB   1.09TB
          data       1.1.18                    SAS          1.09TB   1.09TB
          data       1.1.19                    SAS          1.09TB   1.09TB

      Aggregate capacity available for volume use would be increased by 13.73TB.

Warning: One or more disks will not be added. 17 disks specified, 16 disks
         will be added.

Do you want to continue? {y|n}: y

Addition of disks to aggregate "aggr1_drntapclus02_01" has been initiated. 15
disks need to be zeroed before they can be added to the aggregate. The process
has been initiated. Once zeroing completes on these disks, all disks will be
added at once. Note that if the system reboots before the disk zeroing is
complete, the disks will not be added.

Warning: One or more disks will not be added. 17 disks specified, 16 disks
         will be added.
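While waiting, zeroing progress can be checked per disk; on recent ONTAP releases the zeroing-percent field of storage disk show should report it:

drntapclus02::> storage disk show -fields zeroing-percent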
- Checking the disk info again, I can see the drive has a new owner.
drntapclus02::> disk show 1.0.10
                  Disk: 1.0.10
        Container Type: aggregate
            Owner/Home: drntapclus02-01 / drntapclus02-01
- We now wait for the drives to be zeroed and the capacity to be added to the aggregate. When that completes I will re-create the volumes and SnapMirror relationships.
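Re-creating a destination is a data-protection volume plus a new relationship. A sketch with the hypothetical names from above, plus an assumed source path svm_src:vol_desktops01 on the production cluster:

drntapclus02::> volume create -vserver svm_vdi -volume vol_desktops01 -aggregate aggr1_drntapclus02_01 -size 1TB -type DP
drntapclus02::> snapmirror create -source-path svm_src:vol_desktops01 -destination-path svm_vdi:vol_desktops01 -type XDP
drntapclus02::> snapmirror initialize -destination-path svm_vdi:vol_desktops01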