GPU Passthrough with KVM on VPS Hosts
GPU passthrough allows you to dedicate a physical GPU directly to a virtual machine for high-performance computing, gaming, or AI workloads. In this guide, you will learn how to configure GPU passthrough on a KVM or Proxmox VE host and set up both Linux and Windows guests.
At PerLod Hosting, we specialize in high-performance KVM and Proxmox servers with GPU support. This guide walks you through the exact steps to enable GPU passthrough, so you can replicate it on your own system or hosting environment.
Prerequisites for GPU Passthrough KVM Setup
Before starting a GPU passthrough KVM setup, confirm that your hardware and software support it: you need a modern CPU, the right BIOS options, and full control over the host machine.
Be sure your system includes:
- A dedicated PCIe GPU like NVIDIA or AMD.
- IOMMU support (Intel VT-d or AMD-Vi) in both the CPU and the motherboard.
BIOS settings enabled:
- Intel: VT-d = Enabled.
- AMD: SVM = Enabled, and IOMMU = Enabled.
- Above 4G Decoding = Enabled.
- Resizable BAR = Auto or Disabled if you encounter instability.
- Operating system: Debian stable, Ubuntu LTS, or Proxmox VE 8.
- Hypervisor: KVM/QEMU or Proxmox VE.
Remember, you must have full control of the host; regular shared VPS instances do not allow raw PCI passthrough.
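Before touching the BIOS, you can confirm the CPU side from a shell. The sketch below assumes a Linux host; the virt_flag helper and the CPUINFO variable are illustrative names, with CPUINFO overridable so the parsing can be checked against any cpuinfo-style file:

```shell
# Report whether the CPU advertises hardware virtualization support.
# CPUINFO defaults to the real /proc/cpuinfo but can be overridden.
CPUINFO="${CPUINFO:-/proc/cpuinfo}"
virt_flag() {
    if grep -qw vmx "$CPUINFO" 2>/dev/null; then
        echo "vmx (Intel VT-x)"
    elif grep -qw svm "$CPUINFO" 2>/dev/null; then
        echo "svm (AMD-V)"
    else
        echo "none"
    fi
}
virt_flag
```

If this prints none on bare metal, passthrough will not work; on a VPS it usually means you do not have the required hardware access.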
Verify Hardware and IOMMU Support
The first step is to confirm that your system detects the GPU and supports IOMMU, which ensures that the kernel can properly isolate PCI devices for passthrough.
List the GPU and related audio functions with the command below:
lspci -nn | egrep -i 'vga|3d|display|audio'
Check IOMMU support in kernel logs by using the following command:
dmesg | egrep -i 'iommu|dmar|amd-vi'
Note: If dmesg doesn’t show IOMMU-related entries, it means the kernel hasn’t enabled it yet. We will fix it in the next step by editing the GRUB configuration.
Enable IOMMU in GRUB For GPU Passthrough
The GRUB bootloader controls kernel parameters during startup. You must enable IOMMU in GRUB to tell the Linux kernel to initialize the device isolation required for GPU passthrough.
Open the GRUB configuration file and modify the kernel boot line:
sudo nano /etc/default/grub
For Intel CPUs, change as shown below:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
For AMD CPUs, change as shown below:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
Save and close the file. Then update GRUB, register the VFIO modules to load at boot, rebuild the initramfs, and reboot:
sudo update-grub
echo -e "vfio\nvfio_pci\nvfio_iommu_type1\nvfio_virqfd" | sudo tee -a /etc/modules
sudo update-initramfs -u -k all
sudo reboot
After reboot, verify the modules:
dmesg | egrep -i 'iommu|dmar|amd-vi'
lsmod | egrep 'vfio|vfio_pci|vfio_iommu_type1'
If these modules appear, your system now supports IOMMU.
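Kernel log output can scroll away or be rate-limited, so a more direct check is the kernel command line itself. A small sketch, assuming the GRUB changes above; the iommu_cmdline_ok helper is an illustrative name, and CMDLINE is overridable for testing:

```shell
# Check that the running kernel was booted with the IOMMU parameters.
CMDLINE="${CMDLINE:-/proc/cmdline}"
iommu_cmdline_ok() {
    grep -qE 'intel_iommu=on|amd_iommu=on' "$CMDLINE" 2>/dev/null
}
if iommu_cmdline_ok; then
    echo "IOMMU enabled on the kernel command line"
else
    echo "IOMMU parameters missing; re-check /etc/default/grub and reboot"
fi
```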
Check and Isolate IOMMU Groups
Every PCI device belongs to an IOMMU group. GPU passthrough requires the GPU and its audio function to sit in their own isolated group, which ensures safe and stable device assignment.
Use the command below to check IOMMU groups:
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}"
  for d in "$g"/devices/*; do
    echo -n "  "; lspci -nn -s "${d##*/}"
  done
done
If your GPU shares a group with other devices:
- Try a different PCIe slot.
- As a last resort, enable the PCIe ACS override patch, though it reduces isolation.
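To look up a single device's group without scanning the whole list, sysfs exposes a per-device symlink. The iommu_group_of helper below is a hypothetical convenience, with SYSFS_ROOT overridable so the logic can be exercised outside a real host:

```shell
# Print the IOMMU group number of a PCI device (e.g. 0000:01:00.0).
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"
iommu_group_of() {
    link="$SYSFS_ROOT/bus/pci/devices/$1/iommu_group"
    if [ ! -e "$link" ]; then
        echo "no IOMMU group for $1 (is IOMMU enabled?)" >&2
        return 1
    fi
    basename "$(readlink -f "$link")"
}
```

For example, `iommu_group_of 0000:01:00.0` prints the group number the GPU belongs to.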
Find GPU Vendor and Device IDs
In this step, you need to find your GPU’s vendor and device IDs to bind it correctly to VFIO later. Run the command below, replacing 0000:BB:DD.F with your GPU’s PCI address from the earlier lspci output:
lspci -nn -s 0000:BB:DD.F
The output includes ID pairs like these:
10de:1e84 (GPU)
10de:10f8 (Audio)
Note these IDs; you will use them in the next step.
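Copying the IDs by hand invites typos. As a sketch, a hypothetical pci_id_of helper extracts the last [vendor:device] bracket from an lspci -nn line (the device in the example is illustrative):

```shell
# Extract the vendor:device ID pair from a single `lspci -nn` output line.
pci_id_of() {
    echo "$1" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n 1 | tr -d '[]'
}

# Example with a captured line:
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] [10de:1e84] (rev a1)'
pci_id_of "$line"   # prints 10de:1e84
```

On the host you would feed it live output, e.g. `pci_id_of "$(lspci -nn -s 0000:BB:DD.F)"`.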
Prevent the Host from Using GPU Drivers
The host OS must not claim the GPU before VFIO does; otherwise, passthrough will fail. To prevent this, blacklist the default GPU drivers and bind the device to VFIO instead.
Use the commands below to blacklist host GPU drivers:
echo -e "blacklist nouveau\noptions nouveau modeset=0" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
echo -e "blacklist nvidia\nblacklist nvidia_drm\nblacklist nvidia_uvm" | sudo tee /etc/modprobe.d/blacklist-nvidia.conf
echo -e "blacklist radeon\nblacklist amdgpu" | sudo tee /etc/modprobe.d/blacklist-amd.conf
Then, bind the GPU to VFIO with your IDs:
echo 'options vfio-pci ids=10de:1e84,10de:10f8 disable_vga=1' | sudo tee /etc/modprobe.d/vfio.conf
Update and reboot with the commands below:
sudo update-initramfs -u -k all
sudo reboot
After reboot, verify the kernel driver with the following command:
lspci -nnk -s 0000:BB:DD.F
If you see “Kernel driver in use: vfio-pci”, the GPU is successfully bound.
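The same check can be scripted. driver_in_use below is a small illustrative parser for captured lspci -nnk output:

```shell
# Print the "Kernel driver in use" value from `lspci -nnk` output.
driver_in_use() {
    echo "$1" | sed -n 's/.*Kernel driver in use: //p'
}

# On the host: driver_in_use "$(lspci -nnk -s 0000:BB:DD.F)"
```

If it prints anything other than vfio-pci, re-check the blacklist files and the IDs in vfio.conf.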
Configure Libvirt VM for GPU Passthrough
At this point, you must give your VM access to the GPU by editing the VM’s XML configuration to assign the GPU device and its audio function.
Edit the XML with the command below:
sudo virsh edit vm-name
Use Q35 as the machine type, host-passthrough CPU mode, and hide the KVM vendor to prevent NVIDIA Code 43:
<domain type='kvm'>
<name>gpu-vm</name>
<memory unit='GiB'>16</memory>
<vcpu placement='static'>8</vcpu>
<os>
<type arch='x86_64' machine='q35'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/gpu-vm_VARS.fd</nvram>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<vendor_id state='on' value='KVMhidden'/>
</hyperv>
<kvm>
<hidden state='on'/>
</kvm>
</features>
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='4' threads='2'/>
</cpu>
<devices>
<!-- VirtIO disk and NIC recommended -->
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native'/>
<source file='/var/lib/libvirt/images/gpu-vm.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
<source bridge='br0'/>
<model type='virtio'/>
</interface>
<!-- The GPU and its audio function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0xBB' slot='0xDD' function='0x0'/>
</source>
<rom bar='on'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0xBB' slot='0xDD' function='0x1'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
</hostdev>
<!-- Optional: disable the virtual display if you rely on the passed-through GPU -->
<video>
<model type='none'/>
</video>
</devices>
</domain>
Once you are done, save and exit the editor; virsh edit applies the new configuration automatically. Then start the VM with the following commands:
sudo virsh start gpu-vm
sudo virsh console gpu-vm # for Linux install, otherwise use VNC if you kept a virtual GPU
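The bus/slot/function attributes in the hostdev blocks are easy to mistype. As a sketch, a hypothetical hostdev_xml generator derives them from the plain PCI address:

```shell
# Turn a PCI address like 0000:01:00.0 into a libvirt <hostdev> source snippet.
hostdev_xml() {
    addr="$1"
    dom=${addr%%:*};  rest=${addr#*:}
    bus=${rest%%:*};  rest=${rest#*:}
    slot=${rest%%.*}; func=${rest#*.}
    cat <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x$dom' bus='0x$bus' slot='0x$slot' function='0x$func'/>
  </source>
</hostdev>
EOF
}

hostdev_xml 0000:01:00.0
```

Paste the result into the <devices> section; the guest-side <address> can be added by hand or left for libvirt to assign.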
Configure GPU Passthrough in Proxmox VE
If you’re using Proxmox, you can enable GPU passthrough via the web interface or manually edit the VM config file.
In Proxmox GUI, open your VM, navigate to:
Hardware → Add → PCI Device
From there, select the GPU and its audio device, and enable the following options:
- All Functions: ON
- ROM-Bar: ON
- PCI-Express: ON
- Primary GPU: optional
To edit it manually, open the VM configuration file:
sudo nano /etc/pve/qemu-server/101.conf
Add the following lines to the file:
machine: q35
cpu: host,hidden=1,flags=+pcid
hostpci0: 0000:BB:DD.0,pcie=1,x-vga=1,rombar=1
hostpci1: 0000:BB:DD.1,pcie=1
Then, start the VM and check for logs with the command below:
journalctl -xe | egrep -i 'vfio|qemu|hostpci'
Install and Verify GPU Drivers Inside Guest OS
Once the VM boots, you can install GPU drivers in your guest operating system so the GPU can function properly.
For Linux guests (for example, Ubuntu), update the system, install the NVIDIA drivers, and verify them with the commands below:
# Update
sudo apt update && sudo apt upgrade -y
# NVIDIA drivers example
ubuntu-drivers devices
sudo apt install build-essential dkms -y
sudo apt install nvidia-driver-535 -y # pick recommended version from the previous command
# Verify
nvidia-smi
For AMD: On Ubuntu 22.04 and later, most AMD GPUs use the amdgpu kernel driver by default. To add Vulkan support, install the Mesa Vulkan drivers:
sudo apt install mesa-vulkan-drivers -y
Note: For compute workloads, use ROCm and install the appropriate ROCm packages.
For Windows guests:
- You must install the official GPU drivers from NVIDIA or AMD.
- If you removed the virtual display, connect via RDP after installing the driver.
- Older NVIDIA drivers refuse to load when they detect a virtual machine; make sure the KVM hidden and Hyper-V vendor_id settings are present in the XML.
- You can optionally turn on MSI for the GPU to reduce system interrupts. In Windows, use MSI Utility v3 and check the box for your GPU.
To verify:
- NVIDIA: The GPU appears in Device Manager, and nvidia-smi works in PowerShell.
- AMD: The GPU appears in Device Manager, and performance data shows up in Task Manager.
Test GPU with CUDA and PyTorch in Linux
Once the GPU drivers are installed, test if CUDA and PyTorch detect the GPU correctly.
Install the CUDA toolkit with the commands below:
sudo apt install nvidia-cuda-toolkit -y
nvcc --version
Create a quick Python test with the command below:
python3 - << 'PY'
import torch
print("Torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
PY
If the output shows your GPU name, passthrough is working perfectly.
How to Fix Common GPU Passthrough Errors
Here are the most common issues and how to fix them:
- VM fails to start: This usually means IOMMU is not enabled. Check your GRUB boot options and make sure VT-d (Intel) or AMD-Vi (AMD) is turned on in the BIOS.
- GPU shares IOMMU group with other devices: Try moving the GPU to a different PCIe slot. If that doesn’t help, use the ACS override patch as a last resort; it reduces device isolation.
- NVIDIA Code 43 in Windows: Make sure the following settings are in your XML:
<kvm><hidden state='on'/></kvm>
<hyperv><vendor_id state='on' value='KVMhidden'/></hyperv>
Use Q35 machine type and OVMF (UEFI) firmware. Don’t add a second virtual GPU if the passed-through GPU is your main one.
- Black screen on boot: If you removed the virtual display, set the passed GPU as the primary display. Make sure a monitor is connected, or use an HDMI dummy plug if your card needs it.
- Driver won’t build on Linux due to Secure Boot: Either sign the kernel modules or turn off Secure Boot in OVMF.
- Host loads nouveau/nvidia/amdgpu after updates: Check that your blacklist files still exist, then rebuild the initramfs and reboot:
sudo update-initramfs -u -k all
sudo reboot
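The file checks can be bundled into a quick script. This is a sketch; the file names match the ones created earlier, the check_vfio_files helper is an illustrative name, and MODPROBE_D is overridable for testing:

```shell
# Verify the blacklist and VFIO config files survived a package upgrade.
MODPROBE_D="${MODPROBE_D:-/etc/modprobe.d}"
check_vfio_files() {
    for f in blacklist-nouveau.conf blacklist-nvidia.conf blacklist-amd.conf vfio.conf; do
        if [ -f "$MODPROBE_D/$f" ]; then
            echo "ok: $f"
        else
            echo "MISSING: $f"
        fi
    done
}
check_vfio_files
```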
Rollback and Remove Passthrough Configuration
If something breaks or you want to revert, remove the passthrough configuration with the commands below:
sudo rm -f /etc/modprobe.d/blacklist-nouveau.conf \
/etc/modprobe.d/blacklist-nvidia.conf \
/etc/modprobe.d/blacklist-amd.conf \
/etc/modprobe.d/vfio.conf
Remove IOMMU-related kernel parameters from your GRUB boot configuration:
sudo sed -i 's/ iommu=pt//; s/ intel_iommu=on//; s/ amd_iommu=on//' /etc/default/grub
Update GRUB and reboot your system:
sudo update-grub
sudo update-initramfs -u -k all
sudo reboot
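Before the final reboot, you can confirm the rollback actually took. A minimal sketch; the grub_has_iommu helper is an illustrative name, and GRUB_FILE is overridable for testing:

```shell
# Check that no IOMMU parameters remain in the GRUB config.
GRUB_FILE="${GRUB_FILE:-/etc/default/grub}"
grub_has_iommu() {
    grep -qE 'intel_iommu=on|amd_iommu=on|iommu=pt' "$GRUB_FILE" 2>/dev/null
}
if grub_has_iommu; then
    echo "IOMMU parameters still present in $GRUB_FILE"
else
    echo "GRUB config is clean"
fi
```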
FAQs
Can I enable GPU passthrough on a shared VPS?
No. You need full hardware access; cloud providers block raw PCI passthrough.
What is the safest setup for NVIDIA GPUs?
Use Q35 machine type, OVMF (UEFI), and KVM hidden flags.
Can I pass multiple GPUs?
Yes, repeat the VFIO binding and XML steps for each GPU.
Conclusion
A GPU passthrough setup on KVM or Proxmox VE allows your virtual machines to use real GPU power for deep learning, rendering, or gaming. By following the steps in this guide, your VM can fully utilize your GPU safely, efficiently, and with full control.
If you’re looking to host virtual machines with dedicated GPU performance, PerLod offers optimized VPS hosting solutions for Proxmox and KVM.
We hope you enjoyed this guide. Subscribe to our X and Facebook channels to get the latest updates and articles on GPU passthrough.