KVM

Installation

Debian

[1]

apt install qemu-kvm libvirt-clients libvirt-daemon-system
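
On Debian the package should start and enable libvirtd automatically. A quick post-install check, plus (optionally) adding your user to the libvirt group so virsh can be used without root; this is a general sketch, not from the original notes:

systemctl status libvirtd                     # confirm the daemon is running
sudo adduser $USER libvirt                    # optional: manage VMs without root (log out and back in afterwards)
virsh --connect qemu:///system list --all     # should return a (possibly empty) list of domains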

Fedora 28

I had removed zfs-fuse, which also removed a number of KVM modules that I needed. To re-install them I followed this link [2]

sudo dnf install libvirt-daemon
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo dnf install qemu-kvm qemu-img libvirt-python python-virtinst libvirt-client virt-install virt-viewer device-mapper-libs libvirt-daemon-driver-qemu libvirt-daemon-config-network libvirt-daemon-kvm
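
Once the Fedora packages are installed, libvirt ships a quick sanity check for KVM readiness; a sketch, not part of the original notes:

sudo virt-host-validate          # checks /dev/kvm, cgroups, IOMMU support, etc.
lsmod | grep kvm                 # kvm_intel or kvm_amd should be loaded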

Create Storage Domain

I already have a ZFS pool established, with several volumes in use. I will be creating a new volume dedicated to this purpose and will define it as one of two storage pools, the other being the directory where I keep ISOs.
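
A sketch of creating that dedicated dataset, assuming the pool is named raid5 (as the /raid5/libvirt target used below suggests); the dataset name is a guess, not from the original notes:

zfs create raid5/libvirt         # inherits the pool's mountpoint, i.e. /raid5/libvirt
zfs list raid5/libvirt           # confirm the dataset and its mountpoint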

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes              
 michael              active     yes       
 qemu                 active     yes       

virsh # pool-autostart --pool default --disable 
Pool default unmarked as autostarted

virsh # pool-undefine --pool default 
Pool default undefined

virsh # pool-autostart --pool michael --disable 
Pool michael unmarked as autostarted


virsh # pool-list 
 Name                 State      Autostart 
-------------------------------------------
 default              active     no        
 localstorage         active     no        
 michael              active     no        
 qemu                 active     no        

Now I can create the new pool. I am going to use the most basic config options.

virsh # pool-create-as default --type dir --target /raid5/libvirt
Pool default created

virsh # pool-autostart --pool default
Pool default marked as autostarted

virsh # pool-list
 Name                 State      Autostart 
-------------------------------------------
 default              active     yes               
 michael              active     no        
 qemu                 active     no        

virsh # 
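With the pool in place, volumes can be carved out of it and attached to guests; a minimal sketch using a hypothetical volume name guest1.qcow2, not from the original notes:

virsh # pool-refresh default
virsh # vol-create-as default guest1.qcow2 20G --format qcow2
virsh # vol-list default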

Networking

Remove Default Network

This section covers removing the default network from the host. The default network can conflict with other services if this is a shared environment. In my case I run a DNS server on the host, and the default network uses DNS forwarding with dnsmasq. When dnsmasq is running it takes port 53, which prevents my DNS service from running.

[3]

[root@nas ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-destroy default
Network default destroyed

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # exit
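
Note that net-destroy only stops the network; if it is still defined and set to autostart it will come back after a reboot or libvirtd restart. A sketch of making the removal stick, not shown in the original notes:

virsh # net-autostart default --disable
virsh # net-undefine default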

Setup Bridged Network

I already had a bridge created for LXC containers, so it only made sense to use it for KVM guests as well.

  • Create the XML file[4]. My existing bridge name is virbr0.
<network>
  <name>bridge1</name>
  <bridge name="virbr0" />
  <forward mode="bridge" />
</network>
  • Create the network using virsh
virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-define --file br.xml 
Network bridge1 defined from br.xml

virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-start --network bridge1
Network bridge1 started

virsh # net-list 
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 bridge1              active     no            yes

  • Set to auto start
virsh # net-autostart --network bridge1
Network bridge1 marked as autostarted
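
Once the network exists, guests can be attached to it; a sketch using a hypothetical guest called myguest, not from the original notes:

virsh # attach-interface --domain myguest --type network --source bridge1 --model virtio --config
virsh # domiflist myguest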

Nested Virtualization

[5]
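
The original notes only cite a reference here. As a general sketch (assuming an Intel CPU; use kvm_amd on AMD), nested virtualization is controlled by a module option:

cat /sys/module/kvm_intel/parameters/nested                    # Y or 1 means it is already enabled
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel          # reload the module with no guests running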


PCI Passthrough

[6]

  • Add intel_iommu=on to /etc/default/grub on the GRUB_CMDLINE_LINUX line.
  • Rebuild Grub with grub2-mkconfig -o /boot/grub2/grub.cfg
  • Reboot, then verify that the IOMMU is active (see the sketch below).
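
After the reboot, it is worth confirming that the IOMMU actually came up; a sketch, not part of the original notes:

dmesg | grep -i -e DMAR -e IOMMU         # look for "IOMMU enabled" / DMAR messages
ls /sys/kernel/iommu_groups/             # a non-empty listing means IOMMU groups were created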

SELinux

Image Files

After a server rebuild I wanted the disk images to be placed in another directory. SELinux would NOT work for me, even after setting the context based on Red Hat's documentation[7]. After an hour of searching I found an obscure posting[8] that got me running. The only difference is svirt_image_t vs virt_image_t.
UPDATE: This appears not to make a difference; I am still unable to load the images. svirt_image_t is actually just the dynamic label that gets applied after the image is started.[9]

semanage fcontext -a -t virt_image_t "/data/libvirt/images(/.*)?"
restorecon -vR /data/libvirt/images 
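
To see what label the files actually ended up with (useful when, as above, guests still refuse to start), assuming the same path:

ls -lZ /data/libvirt/images              # current label on the image files
matchpathcon /data/libvirt/images        # label the policy thinks the path should have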


At one point I needed to remove a bad context entry.[10] The policy is kept in /etc/selinux/targeted/contexts/files/file_contexts.local, but you can't edit this file directly.

semanage fcontext -d "/data/archive/ISO/ubuntu-18.04.1-live-server-amd64.iso"

What is really annoying is that the audit logs were not reporting any violations when using the troubleshooter, yet I was getting the following:

type=VIRT_CONTROL msg=audit(1576848063.439:6601): pid=1265 uid=0 auid=4294967295 ses=4294967295 \
subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm op=start reason=booted vm="Unifi" uuid=37eed7bf-a37f-4d49-86c2-b9a6bb8682c3 \
vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'UID="root" AUID="unset"
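
One way to dig further when the troubleshooter shows nothing is to temporarily disable SELinux dontaudit rules, which can hide the relevant denials; a general sketch, not from the original notes:

semodule -DB                     # rebuild policy with dontaudit rules disabled
ausearch -m avc -ts recent       # reproduce the failure, then look for new AVC denials
semodule -B                      # restore the normal policy when finished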