Run lspci -nnk | grep -E 'VGA|Audio' to output a list of the installed devices relevant to your GPU (note the -E flag: without it, grep treats the | literally). You can also just run lspci -nnk to get all attached devices, in case you want to pass through something else, like an NVMe drive or a USB controller. Note the device IDs for each device you intend to pass through; for example, my GTX 1070 is listed as [10de:1b81] for the VGA controller and [10de:10f0] for the HDMI audio. You need to use every device ID associated with your card, and most GPUs have both an audio controller and VGA. Some cards, in particular VR-ready NVIDIA GPUs and the new 20-series GPUs, will have more devices you'll need to pass, so refer to the full output to make sure you got all of them.

Next, open /etc/default/grub in an editor and find the line beginning with GRUB_CMDLINE_LINUX=. To it,
add these arguments, separated by spaces: intel_iommu=on (or amd_iommu=on on AMD systems), iommu=pt, and vfio-pci.ids= followed by the device IDs you want to give to the VM, separated by commas. For example, if I wanted to pass through my GTX 1070, I'd add vfio-pci.ids=10de:1b81,10de:10f0. Save and exit your editor, then regenerate your GRUB configuration with grub-mkconfig -o /boot/grub/grub.cfg and reboot. This path may be different on other distros, so make sure that this is the location of your grub.cfg prior to running the command, and change it as necessary. The tool to do this may also be different on certain distributions, e.g. update-grub.
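Put together, the finished line might look like the following. This is only an illustration for an Intel system using my 1070's IDs; substitute your own CPU flag and device IDs:

```shell
# /etc/default/grub -- illustrative example, not a drop-in line
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"
```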
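For reference, the iommu.sh script used in the next step is typically a short loop over /sys/kernel/iommu_groups. This is only a sketch of that idea (your copy from earlier in the tutorial may differ); the loop is wrapped in a function and demonstrated against a throwaway fake tree so the output here is predictable:

```shell
#!/bin/sh
# Sketch of an iommu.sh-style group lister: walk the iommu_groups sysfs
# tree and print each group's devices. The directory is taken as $1 so
# the same function can be demoed below.
list_groups() {
    for d in "$1"/*/devices/*; do
        [ -e "$d" ] || continue          # no groups: glob didn't expand
        g=${d%/devices/*}                # .../iommu_groups/<N>
        printf 'IOMMU Group %s: %s\n' "${g##*/}" "${d##*/}"
    done
}

# On a real system you would run: list_groups /sys/kernel/iommu_groups
# Demo against a fake tree (group 13 with two illustrative devices):
demo=$(mktemp -d)
mkdir -p "$demo/13/devices/0000:01:00.0" "$demo/13/devices/0000:01:00.1"
list_groups "$demo"
```

On a real system you would usually also pipe each address through lspci -nns to get human-readable device names next to the group numbers.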
Make the script executable with chmod +x iommu.sh and run it with ./iommu.sh to see your IOMMU groups. No output means you didn't enable one of the relevant UEFI features, or didn't revise your kernel command line options correctly. If the GPU and the other devices you want to pass to the guest are in their own groups, move on to the next section. If not, refer to the troubleshooting section.

Run dmesg | grep vfio to ensure that your devices are being isolated. No output means that the vfio-pci module isn't loading and/or you did not enter the correct values in your kernel command line options. If you use NetworkManager, you can manage the bridge with nmcli
as well. Note that most wireless cards can't be attached to a bridge directly; check whether yours supports AP mode with iw list. From there you can set up a virtual AP with hostapd and connect the bridge to that. We won't be covering the details of this process here because it's very involved and requires a lot of prior knowledge about Linux networking to set up correctly. You can also configure things by hand with ip, or set up a macvlan. Both are complex and require networking knowledge.

When something goes wrong, start by checking the dmesg output on the host after starting the VM and searching for common problems.

If you need to load a patched vBIOS, add <rom bar='on' file='/var/lib/libvirt/vbios/vbios.rom'/>
in the PCI device section of the domain XML that corresponds to the GPU.

Save the override script as /usr/bin/vfioverride.sh. Then create a file called pci-isolate.conf in /etc/modprobe.d, open it in an editor, and add the line install vfio-pci /usr/bin/vfioverride.sh to it. Save it. Make sure modconf
is listed in the HOOKS=() array of your initramfs config file, mkinitcpio.conf.
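The override script itself isn't reproduced in this section. As a rough sketch, its job is to write vfio-pci into each target device's driver_override node before the host driver can bind it. The device addresses below are placeholders, and the demo writes into a throwaway directory rather than the real /sys so it can be run safely:

```shell
#!/bin/sh
# Sketch of a vfioverride.sh-style script. On a real system SYSFS would
# be /sys, DEVS would hold your GPU's PCI addresses, and you would finish
# with `modprobe -i vfio-pci`. Here a temporary fake tree stands in for
# /sys purely to demonstrate the effect.
SYSFS=$(mktemp -d)               # stand-in for /sys
DEVS="0000:01:00.0 0000:01:00.1" # placeholder addresses, not yours

for DEV in $DEVS; do
    mkdir -p "$SYSFS/bus/pci/devices/$DEV"
    echo vfio-pci > "$SYSFS/bus/pci/devices/$DEV/driver_override"
done

cat "$SYSFS/bus/pci/devices/0000:01:00.0/driver_override"   # prints: vfio-pci
```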
On dracut systems, instead add the script to the install_items+= array and vfio-pci to the add_drivers+= array of your dracut configuration. Then rebuild your initramfs by running mkinitcpio, dracut, or update-initramfs, depending on your distribution (Arch, RHEL/Fedora, and *buntu respectively). Once you've rebooted, check lspci
for the host graphics drivers (if they're missing, you're good to go). If they're still binding, add amdgpu,radeon or nouveau to module_blacklist= in your kernel command line options (the same way you added the vfio device IDs in the first section of this tutorial). You can also try adding vfio_pci vfio vfio_iommu_type1 vfio_virqfd to your initramfs early modules list, and removing any graphics drivers set to load at the same time. This process varies depending on your distro. On mkinitcpio
systems (Arch), you add these to the MODULES= section of /etc/mkinitcpio.conf and then rebuild your initramfs by running mkinitcpio -P.
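On such a system the relevant line would look something like this (illustrative; list the vfio modules before any graphics driver you keep in MODULES=, since load order matters for grabbing the device first):

```shell
# /etc/mkinitcpio.conf -- illustrative MODULES line for early vfio loading
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
```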
On dracut systems (Fedora, RHEL, CentOS, and Arch in future releases), you add these to a .conf file in the /etc/modules-load.d/ folder.
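Such a file might look like the following (the filename is an arbitrary example; the format is one module name per line):

```
# /etc/modules-load.d/vfio.conf -- one module name per line
vfio_pci
vfio
vfio_iommu_type1
vfio_virqfd
```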