KVM

Installation

Debian

[1]

apt install qemu-kvm libvirt-clients libvirt-daemon-system
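
To manage the system connection without root, the login user can be added to the libvirt group; the user name below is only a placeholder, not from my setup.

adduser michael libvirt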

Fedora 28

I had removed zfs-fuse, which also removed a number of KVM packages that I needed. To re-install them I was able to follow this guide [2]

sudo dnf install libvirt-daemon
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo dnf install qemu-kvm qemu-img libvirt-python python-virtinst libvirt-client virt-install virt-viewer device-mapper-libs libvirt-daemon-driver-qemu libvirt-daemon-config-network libvirt-daemon-kvm
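
A quick sanity check that the KVM modules came back and libvirtd is running (kvm_intel is an assumption here; AMD hosts load kvm_amd instead):

lsmod | grep kvm
systemctl status libvirtd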

Create Storage Domain

I have a ZFS pool already established, with several volumes in use. I will be creating a new volume dedicated to this purpose and setting it up as one of two storage pools; the other is the directory where I keep ISOs.

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes              
 michael              active     yes       
 qemu                 active     yes       

virsh # pool-autostart --pool default --disable 
Pool default unmarked as autostarted

virsh # pool-undefine --pool default 
Pool default undefined

virsh # pool-autostart --pool michael --disable 
Pool michael unmarked as autostarted


virsh # pool-list 
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 localstorage         active     no        
 michael              active     no        
 qemu                 active     no        

Now I can create the new pool. I am going to use the most basic config options.

virsh # pool-create-as default --type dir --target /raid5/libvirt
Pool default created

virsh # pool-autostart --pool default
Pool default marked as autostarted

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes               
 michael              active     no        
 qemu                 active     no        

virsh # 
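
With the new pool in place, disk images can be carved out of it with vol-create-as. The volume name and size below are placeholders, not from my actual setup:

virsh vol-create-as default guest1.qcow2 20G --format qcow2
virsh vol-list default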

Networking

Remove Default Network

This removes the default network from the host. The default network can conflict with other services if this is a shared environment. In my case I run a DNS server on the host, and the default network uses DNS forwarding with dnsmasq; when dnsmasq is running it takes port 53, which prevents my DNS service from starting.

[3]

[root@nas ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-destroy default
Network default destroyed

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # exit
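
Note that net-destroy only stops the network. If it is still defined with autostart enabled it will come back when libvirtd restarts, so assuming the definition is still present it can be disabled and removed for good:

virsh net-autostart default --disable
virsh net-undefine default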

Setup Bridged Network

I already had a bridge created for LXC containers, so it only made sense to use it for KVM guests as well.

  • Create the XML file[4]. My existing bridge name is virbr0.
<network>
  <name>bridge1</name>
  <bridge name="virbr0" />
  <forward mode="bridge" />
</network>
  • Create the network using virsh
virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-define --file br.xml 
Network bridge1 defined from br.xml

virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-start --network bridge1
Network bridge1 started

virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 bridge1              active     no            yes

  • Set to auto start
virsh # net-autostart --network bridge1
Network bridge1 marked as autostarted
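
To put a guest on the bridge, its NIC gets attached to the bridge1 network. A minimal sketch, using a placeholder domain name:

virsh attach-interface --domain guest1 --type network --source bridge1 --model virtio --config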

Nested Virtualization

[5]
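
Only the reference is captured here so far. The usual approach is to enable the nested option on the KVM module; a sketch for an Intel host (AMD hosts use kvm_amd instead), where the cat only reports the new value after the module is reloaded or the host is rebooted:

echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
cat /sys/module/kvm_intel/parameters/nested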

PCI Passthrough

[6]

  • Add intel_iommu=on to /etc/default/grub on the GRUB_CMDLINE_LINUX line.
  • Rebuild Grub with grub2-mkconfig -o /boot/grub2/grub.cfg
  • Reboot
  • Bind the device to vfio-pci for passthrough so the host driver does not claim it. First grab the IDs[7]
lspci -vmmnn
...
Slot:	01:00.0
Class:	VGA compatible controller [0300]
Vendor:	Advanced Micro Devices, Inc. [AMD/ATI] [1002]
Device:	Caicos XT [Radeon HD 7470/8470 / R5 235/310 OEM] [6778]
SVendor:	Dell [1028]
SDevice:	Radeon HD 7470 [2120]

Slot:	01:00.1
Class:	Audio device [0403]
Vendor:	Advanced Micro Devices, Inc. [AMD/ATI] [1002]
Device:	Caicos HDMI Audio [Radeon HD 6450 / 7450/8450/8490 OEM / R5 230/235/235X OEM] [aa98]
SVendor:	Dell [1028]
SDevice:	Device [aa98]
...
  • In this case we want 1002:6778 and 1002:aa98. Add these to '/etc/modprobe.d/vfio.conf' as follows.[8]
options vfio-pci ids=1002:6778,1002:aa98
  • Enable the module
echo 'vfio-pci' > /etc/modules-load.d/vfio-pci.conf
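
After a reboot the binding can be verified with lspci; the slot addresses below assume the same card as above, and the "Kernel driver in use" line should show vfio-pci:

lspci -nnk -s 01:00.0
lspci -nnk -s 01:00.1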

SELinux

Image Files

New process

My previous notes are not great, so I am working on cleaner notes.

  • To get an existing image working again after it stopped loading[9]
[root@natasha ~]# ls -alZ /data/libvirt/images/unifi2.qcow2
-rw-------+ 1 qemu qemu system_u:object_r:virt_image_t:s0 53695545344 Feb 21 08:41 unifi2.qcow2

[root@natasha ~]# chcon -t svirt_image_t /data/libvirt/images/unifi2.qcow2

[root@natasha ~]# ls -alZ /data/libvirt/images/unifi2.qcow2 
-rw-------+ 1 qemu qemu system_u:object_r:svirt_image_t:s0 53695545344 Feb 21 08:50 /data/libvirt/images/unifi2.qcow2

Old process

After a server rebuild I wanted the disk images placed in another directory. SELinux would NOT work for me, even after setting the context based on Red Hat's documentation[10]. After an hour of searching I found an obscure posting[11] that got me running; the only difference is svirt_image_t vs virt_image_t.
UPDATE: This appears to make no difference after all. I am still unable to load the images. svirt_image_t is actually just the dynamic label that gets applied after the image is started.[12]

semanage fcontext -a -t virt_image_t "/data/libvirt/images(/.*)?"
restorecon -vR /data/libvirt/images 
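
To double-check what the policy now maps the path to (just a verification step, not part of the original procedure):

semanage fcontext -l | grep /data/libvirt
matchpathcon /data/libvirt/images/unifi2.qcow2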


At one point I needed to remove a bad context.[13] The policy is kept in /etc/selinux/targeted/contexts/files/file_contexts.local, but you can't edit that file directly.

semanage fcontext -d "/data/archive/ISO/ubuntu-18.04.1-live-server-amd64.iso"

What is really annoying is that the audit logs were not reporting any violations through the troubleshooter, but I was getting the following:

type=VIRT_CONTROL msg=audit(1576848063.439:6601): pid=1265 uid=0 auid=4294967295 ses=4294967295 \
subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm op=start reason=booted vm="Unifi" uuid=37eed7bf-a37f-4d49-86c2-b9a6bb8682c3 \
vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'UID="root" AUID="unset"
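
The record above comes straight from the audit log; assuming auditd is running, entries of this type can be pulled out with ausearch:

ausearch -m VIRT_CONTROL -ts recent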