==Installation==
===Debian===
<ref>https://wiki.debian.org/KVM</ref>
<pre>
apt install qemu-kvm libvirt-clients libvirt-daemon-system
</pre>
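To confirm the install works and to let a regular user manage guests, a quick check (my addition, with <user> as a placeholder):
<pre>
# The hypervisor should answer without errors
virsh --connect qemu:///system list --all

# Allow a non-root user to manage VMs
adduser <user> libvirt
</pre>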

===Fedora 28===
I had removed zfs-fuse, which also removed a number of KVM modules that I needed. To reinstall I followed this link<ref>https://unix.stackexchange.com/questions/195948/kvm-virtual-manager-connection-failed</ref>
<pre>
sudo dnf install libvirt-daemon
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo dnf install qemu-kvm qemu-img libvirt-python python-virtinst libvirt-client virt-install virt-viewer device-mapper-libs libvirt-daemon-driver-qemu libvirt-daemon-config-network libvirt-daemon-kvm
</pre>
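Since the original problem was missing KVM modules, it is worth confirming they load again (a quick sanity check I am adding here):
<pre>
# Should list kvm plus kvm_intel or kvm_amd
lsmod | grep kvm

# The daemon should also be active
systemctl status libvirtd
</pre>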

==Create Storage Domain==
I have a ZFS pool already established, with several volumes in use. I will create a new volume dedicated to this purpose and set it up as one of two storage pools; the other is the directory where I keep ISOs.
<pre>
virsh # pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 michael              active     yes
 qemu                 active     yes

virsh # pool-autostart --pool default --disable
Pool default unmarked as autostarted

virsh # pool-undefine --pool default
Pool default undefined

virsh # pool-autostart --pool michael --disable
Pool michael unmarked as autostarted

virsh # pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     no
 localstorage         active     no
 michael              active     no
 qemu                 active     no
</pre>
Now I can create the new pool. I am going to use the most basic config options.
<pre>
virsh # pool-create-as default --type dir --target /raid5/libvirt
Pool default created

virsh # pool-autostart --pool default
Pool default marked as autostarted

virsh # pool-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 michael              active     no
 qemu                 active     no

virsh #
</pre>
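With the pool in place, new volumes can go in it. A minimal sketch (disk0.qcow2 and the 20G size are placeholders, not from my setup):
<pre>
virsh # vol-create-as default disk0.qcow2 20G --format qcow2
virsh # vol-list --pool default
</pre>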

==Networking==
===Remove Default Network===
====Purpose====
To remove the default network from the host. It can conflict with other services when the host is shared. In my case I run a DNS server off the host, and the default network uses DNS forwarding with dnsmasq. When dnsmasq is running it takes port 53, which prevents my DNS service from running.
====Process====
<ref>https://libvirt.org/sources/virshcmdref/html/sect-net-destroy.html</ref>
<pre>
[root@nas ~]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-destroy default
Network default destroyed

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # exit
</pre>
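Note that net-destroy only stops the running network; while it stays defined with autostart enabled it can come back after libvirtd restarts. To remove it for good (an extra step I am noting here):
<pre>
virsh # net-autostart --network default --disable
virsh # net-undefine --network default
</pre>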

===Setup Bridged Network===
I already had a bridge created for LXC containers, so it made sense to use it for KVM guests as well.
*Create the XML file<ref>https://libvirt.org/formatnetwork.html#examplesBridge</ref>. My existing bridge name is virbr0.
<pre>
<network>
  <name>bridge1</name>
  <bridge name="virbr0" />
  <forward mode="bridge" />
</network>
</pre>
*Create the network using virsh
<pre>
virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-define --file br.xml
Network bridge1 defined from br.xml

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------

virsh # net-start --network bridge1
Network bridge1 started

virsh # net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 bridge1              active     no            yes
</pre>
*Set to auto start
<pre>
virsh # net-autostart --network bridge1
Network bridge1 marked as autostarted
</pre>
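Guests can then use this network. A minimal sketch of the relevant domain XML snippet, assuming a virtio NIC (libvirt generates the MAC address if omitted):
<pre>
<interface type="network">
  <source network="bridge1" />
  <model type="virtio" />
</interface>
</pre>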

==Nested Virtualization==
<ref>https://www.server-world.info/en/note?os=Debian_9&p=kvm&f=8</ref>
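A minimal sketch of the usual steps, assuming an Intel CPU (use kvm_amd and the matching /sys path on AMD):
<pre>
# Y or 1 means nesting is already enabled
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module (with all guests shut down)
echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel
modprobe kvm_intel
</pre>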

==PCI Passthrough==
<ref>https://www.ovirt.org/documentation/install-guide/appe-Configuring_a_Host_for_PCI_Passthrough.html</ref>
*Add intel_iommu=on to /etc/default/grub on the GRUB_CMDLINE_LINUX line.
*Rebuild Grub with grub2-mkconfig -o /boot/grub2/grub.cfg
*Reboot
*Blacklist the device for passthrough. First grab the IDs<ref>https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Isolating_the_GPU</ref>
<pre>
lspci -vmmnn
...
Slot:    01:00.0
Class:   VGA compatible controller [0300]
Vendor:  Advanced Micro Devices, Inc. [AMD/ATI] [1002]
Device:  Caicos XT [Radeon HD 7470/8470 / R5 235/310 OEM] [6778]
SVendor: Dell [1028]
SDevice: Radeon HD 7470 [2120]

Slot:    01:00.1
Class:   Audio device [0403]
Vendor:  Advanced Micro Devices, Inc. [AMD/ATI] [1002]
Device:  Caicos HDMI Audio [Radeon HD 6450 / 7450/8450/8490 OEM / R5 230/235/235X OEM] [aa98]
SVendor: Dell [1028]
SDevice: Device [aa98]
...
</pre>
*In this case we want 1002:6778 and 1002:aa98. Add these to '/etc/modprobe.d/vfio.conf' as follows.<ref>https://www.server-world.info/en/note?os=CentOS_8&p=kvm&f=12</ref>
<pre>
options vfio-pci ids=1002:6778,1002:aa98
</pre>
*Enable the module
<pre>
echo 'vfio-pci' > /etc/modules-load.d/vfio-pci.conf
</pre>
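After a reboot the device should be claimed by vfio-pci instead of its normal driver. To verify, using the slot numbers from the lspci output above:
<pre>
# "Kernel driver in use" should report vfio-pci for both functions
lspci -nnk -s 01:00.0
lspci -nnk -s 01:00.1
</pre>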

==SELinux==
===Image Files===
====New process====
My previous notes are not great; I am working on cleaner notes.<br>
*To get things going again for an existing image that stopped working<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/ch07s02</ref>
<pre>
[root@natasha ~]# ls -alZ /data/libvirt/images/unifi2.qcow2
-rw-------+ 1 qemu qemu system_u:object_r:virt_image_t:s0 53695545344 Feb 21 08:41 unifi2.qcow2

[root@natasha ~]# chcon -t svirt_image_t /data/libvirt/images/unifi2.qcow2

[root@natasha ~]# ls -alZ /data/libvirt/images/unifi2.qcow2
-rw-------+ 1 qemu qemu system_u:object_r:svirt_image_t:s0 53695545344 Feb 21 08:50 /data/libvirt/images/unifi2.qcow2

[root@natasha ~]# semanage fcontext -d "/data/libvirt/images(/.*)?"

[root@natasha ~]# semanage fcontext -a -t svirt_image_t "/data/libvirt/images(/.*)?"
</pre>
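To confirm the rule stuck, the local customizations can be listed (a quick check I am adding):
<pre>
# Should show the svirt_image_t rule for /data/libvirt/images
semanage fcontext -l -C | grep libvirt/images
</pre>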

====Old process====
After a server rebuild I wanted the disk images to be placed in another directory. SELinux would NOT work for me, even after setting the context based on Red Hat's documentation<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html-single/virtualization_security_guide/index</ref>. After an hour of searching I found an obscure posting<ref>https://unix.stackexchange.com/questions/60799/selinux-interfering-with-host-guest-file-sharing-using-kvm</ref> that got me running. The only difference is svirt_image_t vs virt_image_t.<br>
UPDATE: This appears to make no difference; I am still unable to load the images. svirt_image_t is actually just the dynamic label that gets applied after the image is started.<ref>http://selinuxproject.org/page/NB_VM</ref>
<pre>
semanage fcontext -a -t virt_image_t "/data/libvirt/images(/.*)?"
restorecon -vR /data/libvirt/images
</pre>
At one point I needed to remove a bad context.<ref>https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-selinux_contexts_labeling_files-persistent_changes_semanage_fcontext</ref> The policy is kept in /etc/selinux/targeted/contexts/files/file_contexts.local, but you can't edit that file directly.
<pre>
semanage fcontext -d "/data/archive/ISO/ubuntu-18.04.1-live-server-amd64.iso"
</pre>
What is really annoying is that the audit logs were not reporting any violations when using the troubleshooter, but I was getting the following:
<pre>
type=VIRT_CONTROL msg=audit(1576848063.439:6601): pid=1265 uid=0 auid=4294967295 ses=4294967295 \
subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm op=start reason=booted vm="Unifi" uuid=37eed7bf-a37f-4d49-86c2-b9a6bb8682c3 \
vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'UID="root" AUID="unset"
</pre>
