This post is not a tutorial, just some notes about a working setup I put together in 2016 and have maintained since. Feel free to let me know if something needs to be added/corrected :)
Windows has been the dominant PC gaming platform for more than two decades, and while Linux games are on the rise, including AAA titles like Shadow of Mordor, Tomb Raider, the XCOM series, Rocket League or Total War, many titles remain Windows-only.
Maintaining a dual-boot system is often still the go-to solution to play Windows games; you may have some luck with Wine, but not all games are supported.
There's however a third option to play Windows games from Linux : VFIO, which stands for Virtual Function I/O and allows you to hand a piece of hardware, like a GPU or a NIC, over to a Windows virtual machine.
Assuming you meet the prerequisites, you can, with a bit of work, get extremely good performance out of such a setup, around 95% of a comparable bare-metal Windows install.
For that, you'll need :
An Intel processor with VT-x and VT-d, or an AMD processor with AMD-V and AMD-Vi
A compatible motherboard
2 GPUs (one discrete for the VM and one integrated for the host for instance)
Enough spare RAM and CPU juice to run both a Linux host and the Windows VM
In this blog post, I'll document some of the steps I've taken to get a working setup.
These notes focus on an Intel CPU and an Nvidia GPU as I don't own other hardware; that being said, some parts are hardware agnostic.
These notes use the most recent KVM VGA passthrough method, OVMF + vfio-pci (as opposed to SeaBIOS + pci-stub), which requires a GPU with an EFI ROM.
Gentoo amd64, Linux kernel >= 4.6.x
QEMU >= 2.5.1 + KVM
virt-manager >= 1.3.2 & libvirt >= 1.3.5
Windows 10 Pro VM
Intel Core i5-3470
ASRock B75 Pro3
Intel HD (IGP)
Nvidia GTX 750 Ti (Discrete GPU)
16 GB RAM
Part 1 : Host system setup
For easier sound handling, I've found that compiling snd_hda_intel into the kernel rather than as a module works perfectly (no need to unload/reload the kernel module).
Kernel boot options
Edit /etc/default/grub and add the IOMMU options to GRUB_CMDLINE_LINUX
Then regenerate grub2 menu with
grub-mkconfig -o /boot/grub/grub.cfg
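As a reference point (the exact flag depends on your CPU), enabling the IOMMU usually looks like this on an Intel system; on AMD, amd_iommu=on is the equivalent, and iommu=pt is an optional addition that restricts IOMMU use to passed-through devices:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
```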
Check IOMMU groups
This command should report the various IOMMU groups on your machine (an IOMMU group is the smallest set of physical devices that can be passed to a virtual machine). If it prints nothing, IOMMU is not properly enabled.
for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do
    echo "IOMMU group $(basename "$iommu_group")"
    for device in $(ls -1 "$iommu_group"/devices/); do
        echo -n $'\t'; lspci -nns "$device"
    done
done
Isolating the GPU with vfio-pci
Get your vendor-id :
lspci | grep -i vga
Note down the first number; it is the slot number, e.g. 01:00.0
lspci -nns 01:00.0
Note down the last ID between brackets; this is the vendor-id, e.g. 10de:1380
lspci -nnk -d <vendor-id>
Create /etc/modprobe.d/vfio.conf with the vendor-ids of the devices you want to isolate, gathered from the previous command. In this example, the vendor-id is 10de:13c2 for the GPU and 10de:0fbb for its audio function:
options vfio-pci ids=10de:13c2,10de:0fbb
Add these modules to
/etc/conf.d/modules (Gentoo/OpenRC specific)
modules="vfio vfio-pci vfio_iommu_type1 vfio_virqfd"
Module loading at boot is enabled by
rc-update add modules boot
If you were using the nvidia proprietary Unix driver before, you need to stop the module from loading at boot time by setting up the blacklist in
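A typical blacklist looks like the following; the file name below is just a convention (any .conf file under /etc/modprobe.d/ works):

```
# /etc/modprobe.d/blacklist-nvidia.conf (file name is arbitrary)
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
# blacklist nouveau too if the open-source driver could grab the card
blacklist nouveau
```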
Reboot; your computer needs to be able to run Xorg on top of the Intel IGP to proceed with the rest of the setup.
Check that your GPU is correctly isolated
$ dmesg | grep -i vfio
[ 0.329224] VFIO - User Level meta-driver version: 0.3
[ 0.341372] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000 <-- Good
$ lspci -nnk -d <VENDOR_ID>
Kernel driver in use: vfio-pci <-- Good
Same check for your sound card
If lspci -nnk -d <vendor-id> reports a "Kernel driver in use" which is not vfio-pci, there's something wrong.
Make sure these modules are present when you run
The example below is for Gentoo & OpenRC; adjust for your distro.
# emerge --ask qemu virt-manager libvirt git
# rc-update add libvirt-guests
# rc-update add libvirtd
# service libvirt-guests start
# service libvirtd start
If you plan to use virt-manager, make sure your user is part of the libvirt group so that you don't need to type the root password each time you start it.
usermod your_username -a -G libvirt
Get OVMF firmware via https://www.kraxel.org/repos/jenkins/edk2/ and choose the
Unpack it and copy the firmware to /usr/share/ovmf/x64/, creating the directory if needed.
Add this line to
nvram = [
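For reference, the nvram setting lives in libvirt's QEMU config and pairs a firmware image with its variable store; the exact paths and file names below are assumptions, adjust them to the OVMF build you downloaded:

```
# /etc/libvirt/qemu.conf — file names depend on your OVMF build
nvram = [
  "/usr/share/ovmf/x64/ovmf_x64.bin:/usr/share/ovmf/x64/ovmf_vars_x64.bin"
]
```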
Create a new machine with virt-manager and check "edit settings" before finishing the setup; change the firmware from BIOS to UEFI (or to "Custom: /usr/share/ovmf/x64/ovmf_x64.bin").
At this point you should be able to install Windows through the VGA console.
If you're dropped to an EFI shell, make sure the ISO you wish to boot is correct, try with a recent Ubuntu release for instance.
Part 2 : Guest VM setup
The XML configuration of the VM is located under /etc/libvirt/qemu. Should you edit it, do it with virsh edit; do NOT edit the file directly with your editor, since XML validation is done by virsh (and you want it).
When the Windows install is done, shutdown the VM and give it the GPU through the virt-manager GUI.
Boot the VM with the standard virtualized VGA adapter and install the nvidia drivers (tested with the Nvidia GeForce driver 368.81 WHQL).
One effortless way is to get a USB WiFi adapter and pass the USB device through to Windows directly using virt-manager; otherwise, it's best to look at bridging.
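If you go the bridging route, the guest NIC definition in the VM XML looks roughly like this. It assumes a host bridge named br0 already exists, and the virtio model requires the virtio-win drivers inside the guest:

```
<!-- VM XML: attach the guest to an existing host bridge (br0 is an assumption) -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```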
Recent nvidia drivers refuse to load if they detect they're running on top of a hypervisor; the workaround is to hide KVM and enable some Hyper-V-related tweaks :
QEMU 2.5+ supports the hv_vendor_id flag that lets you change the vendor ID. It was discovered that this is what's used by the nvidia drivers to detect Hyper-V, so changing the vendor ID allows you to use all the enlightenments without upsetting nvidia.
via libvirt :
edit your VM XML definition with
virsh edit your_vm_name
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
...
- Prevent Windows from installing nvidia drivers
You might need to set this up so that Windows doesn't update your nvidia driver unattended; newer drivers are sometimes more locked down or require workarounds to run in a VM, so it's best to update them manually (with a VM snapshot as a safety net !)
The best resources I know of are the VFIO subreddit at https://www.reddit.com/r/vfio and this Arch Wiki page : https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
VFIO is not a silver bullet ! When it works it's pretty great, but it can be kind of a pain (bugs/regressions, slow I/O if you don't use the VM often or run it on an HDD rather than an SSD, maintaining Windows, ...)
I believe we must strive to vote with our wallets when buying games; Linux as an open gaming platform is still a work in progress, so let's make sure we support it !
The video which started it all for me :