Migrate CentOS 5.4 to Ovirt

Purpose

A very old install of CentOS 5.4 needed to be migrated to an Ovirt hypervisor. This was a special install of HylaFAX on a production server that could not be taken offline and was critical to business operations.

Several attempts were made to replicate the server and its 500GB of data, including running a disk dump over SSH to the destination VM and to another physical server.

Steps

My final idea was to rsync the system to the new VM disk, install grub, then go from there.

Rsync

First I created a VM in Ovirt with a disk large enough to hold the data, plus extra headroom, as the existing server was running out of space.
After that was completed I:

  • Booted off a live Fedora ISO.
  • Ran fdisk:
    • created the first partition with enough space for the boot partition,
    • created the second partition with the rest of the space.
  • Formatted both partitions as ext3, same as the old server.
  • Started sshd.
  • Created a password for liveuser.
  • On the old server, ran ssh-copy-id liveuser@newvm to ensure key-based SSH was set up.
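In command form, the live-environment prep looked something like this (run as root; partition sizes are illustrative, and I'm assuming the Fedora live image uses systemd):

fdisk /dev/sda        # interactively create sda1 (boot) and sda2 (root)
mkfs.ext3 /dev/sda1   # format both partitions as ext3 to match the old server
mkfs.ext3 /dev/sda2
systemctl start sshd  # let the old server push data in over SSH
passwd liveuser       # set a password so ssh-copy-id can authenticate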

rsync boot partition

While still in the Fedora live shell:

  • mount /dev/sda1 on /mnt
  • chown liveuser /mnt

Then, on the old server, I ran the following dry run to check for errors.

rsync -avn /boot/* liveuser@newvm:/mnt/ &> rsync.log

Then the live transfer

rsync -avz /boot/* liveuser@newvm:/mnt/ &> rsync.log

rsync root partition

Back in the Fedora live shell:

  • Unmounted the boot partition from /mnt (umount /mnt).
  • Mounted the other partition created earlier (mount /dev/sda2 /mnt).
  • chown liveuser /mnt (shouldn't be needed, but just in case).

Then on the old server I ran the following dry run to check for errors and loops, making sure to exclude the boot partition we had already replicated.

rsync -avn --exclude=\/boot / liveuser@newvm:/mnt/ &> rsynctest2.log &
disown
  • Every so often I would check to make sure this was still running (see the check sketched below). Again, this was 500GB of small files on old mechanical disks.
  • The next morning things looked good, so we went for a live migration, this time adding a nice value of +10 to ensure the replication did not impact system performance during production.
nice rsync -avz --exclude=\/boot / liveuser@newvm:/mnt/ &> rsynctest.log &
disown
  • After a day or so this finished without any strange errors, beyond the expected "files vanished" warnings that you get when replicating a live system.
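For the periodic checks, something along these lines is enough (log name as above):

ps aux | grep '[r]sync'    # confirm the transfer is still running
tail -n 20 rsynctest2.log  # watch which files are currently moving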

Installing and fixing grub

This was the part that took the most time. For this part I:

  • Booted the VM from a CentOS 7 minimal install ISO.
  • Jumped to the rescue shell. At this point it did not find any Linux volumes, which was something I had also seen after performing the disk dump.
  • Ran fdisk to verify the partitions; unlike with the disk dump, I was able to see the two volumes I had created previously. PROGRESS!
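The check itself is just a listing:

fdisk -l /dev/sda   # should show the /dev/sda1 (boot) and /dev/sda2 (root) partitions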

Install Grub

I ran into an issue with the chroot environment not recognizing the new /dev/sda disk that the VM uses. This was due to the old server being installed on an HP server with a Smart Array RAID controller[1]; the original disk would show up as /dev/cciss/c0d0.

To get around this I copied the grub config files to the live environment, which did see the disk (this was before I realized there was a chroot procedure, described under Fixing Grub below, that would have fixed this for me).

  • mounted both partitions as follows
    • mount /dev/sda2 /mnt/sysimage
    • mount /dev/sda1 /mnt/sysimage/boot
  • Copied /mnt/sysimage/usr/share/grub/x86_64-redhat to /usr/share/grub/x86_64-redhat because the rescue environment did not have this.
  • Updated grub.conf with the new drive information, switching from c0d0 to sda (see the sketch after this list).
  • Installed grub onto the new disk [2]
grub-install /dev/sda
  • Updated /etc/fstab with /dev/sda1 and /dev/sda2 as well.
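A minimal sketch of that copy and those edits, assuming the old cciss partitions were named c0d0p1 and c0d0p2 (the kernel and initrd names are from this system):

mkdir -p /usr/share/grub
cp -r /mnt/sysimage/usr/share/grub/x86_64-redhat /usr/share/grub/
# point grub.conf and fstab at the virtio-visible disk
sed -i 's|/dev/cciss/c0d0p1|/dev/sda1|g' /mnt/sysimage/boot/grub/grub.conf /mnt/sysimage/etc/fstab
sed -i 's|/dev/cciss/c0d0p2|/dev/sda2|g' /mnt/sysimage/boot/grub/grub.conf /mnt/sysimage/etc/fstab
# the kernel line in grub.conf should end up reading something like:
#   kernel /vmlinuz-2.6.18-164.9.1.el5 ro root=/dev/sda2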

SUCCESS! We were able to boot grub, but then hit a kernel panic as the kernel was unable to find the root partition.

Fixing Grub

The first thing I did was rebuild the initrd with virtio modules. Thankfully the old server already had virtio modules available.[3]
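A quick way to confirm that on the old server, using the kernel version from this system:

find /lib/modules/2.6.18-164.9.1.el5/ -name 'virtio*.ko'   # should list virtio_pci.ko, virtio_blk.ko, etc.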

  • Booted back into the CentOS 7 rescue environment.
  • Did the following to properly chroot (this would have been helpful in the previous step):
mount --bind /proc /mnt/sysimage/proc
mount --bind /dev /mnt/sysimage/dev
mount --bind /sys /mnt/sysimage/sys
chroot /mnt/sysimage
  • At this point I checked for the kernel version I needed:
ls -1 /boot/initrd-*
/boot/initrd-2.6.18-164.9.1.el5.img
/boot/initrd-old
/boot/initrd-old-
  • Using 2.6.18-164.9.1.el5, I generated a new initrd image with virtio modules [4] (after making a backup, of course; see the sketch below).
mkinitrd --with=virtio_pci --with=virtio_blk -f /boot/initrd-2.6.18-164.9.1.el5.img
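The backup (taken before running mkinitrd) and a quick sanity check afterwards looked something like this (the .bak name is my own):

cp -a /boot/initrd-2.6.18-164.9.1.el5.img /boot/initrd-2.6.18-164.9.1.el5.img.bak
# el5 initrd images are gzipped cpio archives, so the contents can be listed;
# the new image should include the virtio_pci and virtio_blk modules
zcat /boot/initrd-2.6.18-164.9.1.el5.img | cpio -t | grep virtio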

At this point I was still getting a kernel panic with root mount errors, though I did see the virtio modules being loaded. Everything I had read online said to NOT use UUIDs in grub, but since I was out of ideas it was the next thing to try.

Back in the CentOS 7 rescue shell, I chrooted back into the CentOS 5.4 system and switched the root device references over to the filesystem UUID:

# the by-uuid entry is a symlink named after the UUID; field 9 of ls -l is that name
uuid=$(ls -l /dev/disk/by-uuid | grep sda2 | awk '{print $9}')
sed -i "s|/dev/sda2|UUID=$uuid|" /boot/grub/grub.conf
sed -i "s|/dev/sda2|UUID=$uuid|" /etc/fstab
  • Another reboot, and I had a login prompt!

References

  1. https://wiki.centos.org/TipsAndTricks/ReinstallGRUB
  2. https://wiki.centos.org/HowTos/GrubInstallation
  3. https://wiki.centos.org/TipsAndTricks/CreateNewInitrd
  4. https://www.centos.org/forums/viewtopic.php?t=14647