
Enable Proxmox VE NVIDIA vGPU support for virtualization


Brandon Lee
Posts: 536
Admin
Topic starter
(@brandon-lee)
Member
Joined: 15 years ago

I wanted to post out to share the news from official Proxmox channels just a few days ago that Proxmox VE now supports NVIDIA vGPU. This is pretty cool news. Starting with NVIDIA vGPU version 18, Proxmox VE is now part of the officially supported hypervisors list. I think this is a big step forward for those running GPU-heavy or AI workloads in Proxmox virtualized environments. It also helps close the functionality gap with other hypervisors like VMware vSphere.

You can read the official blog post from Proxmox here: NVIDIA vGPU on Proxmox VE - Proxmox VE.

What does this mean?

This means you can virtualize and share NVIDIA GPUs across multiple VMs in Proxmox using NVIDIA’s vGPU technology. This will also unlock GPU acceleration for tasks like the following:

  • AI/ML

  • 3D rendering

  • Video encoding/decoding

  • VDI environments, etc.

The feature can also reduce hardware costs by making full use of expensive GPUs across workloads.

Requirements

To use the NVIDIA vGPU capabilities in Proxmox VE, you need to make sure your setup meets the following:

  1. Proxmox VE Subscription & NVIDIA vGPU Entitlement: For support, you need an active Proxmox VE subscription at the Basic, Standard, or Premium level, plus a valid NVIDIA vGPU software license

  2. Supported Hardware: Use enterprise hardware listed in NVIDIA's compatibility list

  3. Compatible Software Versions: Make sure you're running supported versions of Proxmox VE, Linux kernel, and NVIDIA drivers

Enabling NVIDIA vGPU

  1. Enable PCIe Passthrough:

    • First, you need to make sure your system supports PCIe passthrough by enabling IOMMU in your BIOS/UEFI settings. For Intel, enable VT-d; for AMD, enable AMD-Vi.

    • Confirm that the IOMMU is active by checking the system logs:

       
      dmesg | grep -e DMAR -e IOMMU
    • You should see output confirming that IOMMU is enabled (on Intel hosts, a line like DMAR: IOMMU enabled).
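
    • If the command returns nothing, you may still need to turn IOMMU on via a kernel parameter. A minimal sketch, assuming a GRUB-booted Intel host (systemd-boot installs use /etc/kernel/cmdline with proxmox-boot-tool refresh instead):

      # 1. In /etc/default/grub, add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
      #    (recent kernels may enable this by default on Intel; AMD needs no flag)
      # 2. Regenerate the boot config and reboot:
      update-grub
      reboot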

  2. Configure Proxmox VE Repositories:

    • Make sure your Proxmox VE system is set to use the right repositories. For production environments, the enterprise repository is recommended. For evaluation or home labs, the no-subscription repository can be used.
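
    • For reference, the no-subscription repository is a single APT source entry. This assumes a Proxmox VE 8 host based on Debian Bookworm; adjust the suite name for your release:

      # /etc/apt/sources.list.d/pve-no-subscription.list
      deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription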

  3. Update System Packages:

    • Update your system by installing the latest package versions:

       
      apt update && apt full-upgrade
    • Reboot

  4. Prepare the System Using pve-nvidia-vgpu-helper:

    • Install the pve-nvidia-vgpu-helper package:

       
      apt install pve-nvidia-vgpu-helper
    • This tool helps automate the host configuration required for NVIDIA vGPU
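
    • Per the Proxmox documentation, the helper has a setup subcommand that installs the dependencies (such as DKMS and kernel headers) needed to build the host driver. Check pve-nvidia-vgpu-helper --help if your package version differs:

      # prepare the host for the NVIDIA vGPU driver build
      pve-nvidia-vgpu-helper setup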

  5. Install NVIDIA Host Drivers:

    • Download NVIDIA vGPU host drivers for your Proxmox VE and kernel version. Take a look at NVIDIA's documentation for the correct version

    • Install the drivers using the latest documentation from NVIDIA
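
    • As an illustration only (the actual installer file name depends on the driver version your entitlement provides):

      # run the vGPU KVM host installer with DKMS so the module rebuilds on kernel updates
      chmod +x NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run
      ./NVIDIA-Linux-x86_64-<version>-vgpu-kvm.run --dkms
      reboot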

  6. Enable SR-IOV (Single Root I/O Virtualization):

    • If your GPU supports SR-IOV (as many modern data center GPUs do), you will need to enable it to create virtual functions (VFs) that can be assigned to virtual machines. Generally speaking, enabling it involves the following:

      • Enabling SR-IOV in the BIOS/UEFI

      • Configuring the number of VFs via kernel module parameters, or with the pve-nvidia-vgpu-helper tool (see the sketch below)
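
    • As a sketch, assuming the sriov-manage script that ships with the NVIDIA host driver and the systemd template unit documented for the Proxmox helper package (the PCI address 0000:81:00.0 is a placeholder for your GPU):

      # one-off: create the virtual functions on the GPU
      /usr/lib/nvidia/sriov-manage -e 0000:81:00.0

      # persist the VFs across reboots via the helper's template unit
      systemctl enable --now pve-nvidia-sriov@0000:81:00.0.service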

  7. Create PCI Resource Mapping:

    • You will then need to create a resource mapping so the GPU's virtual functions can be assigned to VMs. This can be done in the web UI under Datacenter → Resource Mappings, or via the API, as sketched below.
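
    • A rough sketch using the cluster API; the mapping name, node name, and device IDs here are placeholders, so verify the exact syntax with pvesh usage /cluster/mapping/pci:

      # map the GPU virtual function under a cluster-wide mapping name
      pvesh create /cluster/mapping/pci --id nvidia-vgpu \
        --map node=pve1,path=0000:81:00.4,id=10de:XXXX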

  8. Configure Virtual Machines:

    • Assign the vGPU to your virtual machines through the Proxmox VE web interface or command line.

    • Install the necessary NVIDIA guest drivers within the virtual machines to ensure proper operation.

      1. In the Proxmox Web UI, go to your VM → Hardware → Add → PCI Device.

      2. Select the virtual function (e.g., 0000:81:00.4) that matches up to the vGPU.

      3. Look for and enable the following:

        • All Functions

        • Primary GPU (optional, only if it's the main GPU)

        • ROM-Bar (if needed)

        • PCI-Express

      4. If advanced parameters are needed, you can also set options manually in the VM configuration file (/etc/pve/qemu-server/<vmid>.conf). Once SR-IOV and vGPU device creation is complete, the passthrough entry looks like this:

        hostpci0: 81:00.4,pcie=1
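
        The same assignment can be scripted with qm (VM ID 100 and the VF address are placeholders):

          # attach the virtual function to VM 100 as a PCIe device
          qm set 100 --hostpci0 0000:81:00.4,pcie=1
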
      5. Install the NVIDIA guest driver inside the VM (you can download it from the NVIDIA Enterprise portal with your entitlement). These drivers are required for the guest to recognize and use the vGPU device.

  9. License the vGPU:

    • Make sure that each vGPU instance is licensed according to NVIDIA's licensing requirements.
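
    • On Linux guests, for instance, NVIDIA's client licensing flow involves placing a client configuration token from your license server into the guest and restarting the licensing daemon (paths per NVIDIA's client licensing documentation; the token file name will vary):

      # copy the token issued by your CLS/DLS license server
      cp client_configuration_token.tok /etc/nvidia/ClientConfigToken/

      # set FeatureType=1 (vGPU) in /etc/nvidia/gridd.conf, then restart
      systemctl restart nvidia-gridd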

  10. Test:

    • After configuration, verify that the vGPU is present and functioning inside the virtual machines.

    • Run GPU applications to test performance and stability.
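
    • A quick sanity check from inside the guest:

      # confirm the vGPU is visible and check its license status
      nvidia-smi
      nvidia-smi -q | grep -i license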

I’m really looking forward to testing this out in the lab. Has anyone already started experimenting with this new NVIDIA vGPU functionality in Proxmox VE? Would love to hear about setup experiences, performance benchmarks, or gotchas you have run into.