NVIDIA GPU Install for VMware Horizon vGPU

I’m going to cover the steps to get an NVIDIA GPU installed and ready for VMware Horizon vGPU usage. The following steps are not all-inclusive, since your hardware and software versions may differ from what I’m using. This is just an example guide based on the hardware and software I mention below, so be sure to reference the best practices for the particular GPU and physical host(s) you’ll be working with.

I’ll walk through installing an NVIDIA T4 GPU in a Cisco UCS C240 M4 server. Officially, the T4 is not supported in the M4, but since this is only a lab it works perfectly for what I need. It’s just not production ready, especially since there are much newer options.

Installing the GPU

The first thing we need to do is install the physical GPU into our ESXi host. Please reference your server manufacturer’s documentation for proper installation. It may specify a PCI slot or slots where a GPU should be placed, and some servers require a riser card, such as the one shown below.

After installing the GPU and powering the server back on, you may need to make some adjustments in the BIOS. Again, reference your server manufacturer’s documentation, as it may call out specific GPU settings that should be enabled or disabled.

With my C240 M4, I only had to disable the SR-IOV setting in the BIOS; when it was enabled, the card was not visible from the CIMC or from ESXi. One setting called out in the Cisco C240 documentation is enabling MMIO above 4GB. It’s enabled by default on a standalone server, but it’s worth checking.

With ESXi back up, we should now see the GPU as a PCI device under Host > Configure > Hardware > PCI Devices > All PCI Devices, as shown below.
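If you prefer to double-check from the command line, you can also look for the card from an SSH session on the host. This is just a quick sanity check; lspci is available on ESXi, and the device name in the output will vary based on your card.

lspci | grep -i nvidia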

GPU Configuration

At this point, the next steps will depend on the type of graphics acceleration you intend to use with VMware Horizon. If you are not familiar with the ways you can leverage a GPU in Horizon, read through the 3 types of graphics acceleration here.

TLDR:

Option 1 – Virtual Shared Graphics Acceleration (vSGA) – The GPU is virtualized and shared with multiple VMs. Requires a GPU driver in the ESXi host and uses the VMware vSGA 3D driver in the guest VM. Limited GPU features are available and resource contention can happen with high VM density.

Option 2 – Virtual Shared Pass-Through Graphics Acceleration (vGPU) – A step up in performance from vSGA where we skip the VMware driver and use a vendor (NVIDIA) driver within the guest VM to communicate directly with the GPU. The GPU can still be shared between multiple VMs, while most GPU features remain available.

Option 3 – Virtual Dedicated Graphics Acceleration (vDGA) – The highest-performing option. The GPU is passed directly to an individual VM, giving it full access to the GPU. There is no hypervisor driver in the mix, so the full feature set of the GPU is available. The downside is that this requires a one-to-one GPU-to-VM assignment.

I’m going with Option 2 – vGPU. vGPU is probably the most common type of graphics acceleration used with Horizon. It’s a good compromise: better performance than vSGA while allowing for better consolidation ratios than vDGA. Without getting into too many details, we can carve up the GPU based on framebuffer size to allow multiple VMs access to the GPU. For example, the NVIDIA T4 card I’m using has 16GB of framebuffer, so we can assign 16 VMs 1GB of framebuffer each, or 8 VMs 2GB each, as well as other combinations depending on the card and licensing. A few example profiles are shown below.
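To make the math concrete, the maximum number of VMs per GPU is simply the card’s total framebuffer divided by the framebuffer of the vGPU profile you assign. The profile names below follow NVIDIA’s naming convention for the T4; treat them as illustrative and confirm the exact profiles and maximums in the vGPU documentation for your software version and license.

T4-1Q – 1GB framebuffer – up to 16 VMs per GPU
T4-2Q – 2GB framebuffer – up to 8 VMs per GPU
T4-4Q – 4GB framebuffer – up to 4 VMs per GPU
T4-16Q – 16GB framebuffer – 1 VM per GPU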

Installing the NVIDIA GPU VIB

Since we want to use vGPU with Horizon, we need to install the NVIDIA GPU VIB on our ESXi host.

Here is where you will need some form of NVIDIA licensing. If you are just running a test, you can request evaluation licenses; the keys last for 90 days.

Once you have your licenses from NVIDIA and have set up your enterprise account on their website, log in and click NVIDIA LICENSING PORTAL, as shown below.

Click on Entitlements.

Entitlements will display all the licenses that you either own or have received as part of an evaluation.

Next click on Software Downloads.

Since I’ll be installing this on vSphere 7, I selected the VMware vSphere download. You may need a different version based on the platform you’ll be installing on, so search for the relevant downloads.

After clicking download you will need to agree to the terms by clicking Agree & Download. A .zip file should be downloaded.

Inside the Grid-vSphere zip file that gets downloaded, there are also several PDFs containing all the relevant documentation.

To get started with the VIB install, we need to copy the files over to our ESXi host. One way is to copy the files to a datastore the host has access to; in this case, I’m using the local datastore. To do this, launch vSphere and navigate to the datastore. Click Files (A) and then, optionally, select a folder where you’d like the files to be kept. Then click Upload Files (B). The files are only used for the installation and can be removed later.

Locate the vGPU zip file that you previously downloaded; it sits inside the Host_Drivers folder. It should look similar to what you see below, though your version numbers may differ. Click Open once you have the zip file selected.

Depending on your environment, you may receive an error when trying to upload a file to a host’s datastore. If so, click Details.

Thankfully, the details explain that the problem is due to certificate trust. This type of issue is easily resolved by connecting directly to the host in question and accepting its certificate. The message even includes a link to the host, which you can click.

If you’re using Chrome, click Proceed to hostname (unsafe). If you’re using a different browser, it may look slightly different. Either way, you want to accept the certificate associated with the ESXi host.

You do not need to log in to the host. You can close the tab and go back to vSphere. Click Upload Files again and select the zip file from before, then click Open. The upload should finish quickly and show Completed in the status.
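Alternatively, if you’d rather skip the datastore browser, you could copy the file to the host over SCP. This assumes SSH is already enabled on the host, and the hostname and datastore below are just placeholders for your environment.

scp NVD-VGPU_510.108.03-1OEM.702.0.0.17630552_20701914.zip root@<esxi-host>:/vmfs/volumes/<datastore>/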

Next, connect to the ESXi host using an SSH client such as PuTTY. Be sure SSH is enabled on the host first, then log in as root.
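The host should also be in maintenance mode before the VIB is installed (we’ll take it back out of maintenance mode after the reboot later on). You can do this from vSphere, or from the same SSH session, for example:

esxcli system maintenanceMode set --enable true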

Now we need to install the VIB. You can use the following command, but modify the path to match where you stored the VIB file in the previous steps.

esxcli software vib install -d /vmfs/volumes/<datastore path>/NVD-VGPU_510.108.03-1OEM.702.0.0.17630552_20701914.zip

When it completes, you should see the message Operation finished successfully. In my lab it took about 1 minute to complete.
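You can also confirm the VIB registered successfully before rebooting by listing the installed VIBs and filtering for the NVIDIA package; depending on the release, the name may start with NVD or NVIDIA.

esxcli software vib list | grep -iE 'nvd|nvidia'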

Next, reboot the host. You can reboot it from vSphere or just enter reboot in the SSH session that’s already open.

After the host comes back online, you can validate that the card is working properly by connecting over SSH again. Once connected, run nvidia-smi. You should see details of the card similar to what I’m showing below.
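You can also see how ESXi itself has registered the card. Recent ESXi releases include an esxcli graphics namespace; availability and output depend on your ESXi version, so treat this as an optional check.

esxcli graphics device list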

You can now take the host out of maintenance mode. There is one last change we need to make to ensure the GPU is being passed through correctly for vGPU. From vSphere, select the host with the GPU installed and navigate to Configure > Hardware > Graphics > Graphics Devices.

Now select the GPU and click EDIT.

By default, Shared will be selected, which is used for vSGA graphics acceleration. For vGPU, we want to change it to Shared Direct.
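One note: the default graphics type change may not take effect until the Xorg service is restarted on the host or the host is rebooted, so it’s best to make it while no VMs are using the GPU. If you prefer the command line, recent ESXi releases also expose the same setting through esxcli; a hedged example of what the UI change maps to:

esxcli graphics host set --default-type SharedPassthru
/etc/init.d/xorg restart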

At this point, the GPU is ready to be used and you can attach it to a VM. A quick way to test this is to edit a virtual machine and try adding a New PCI Device. There should now be NVIDIA GRID vGPU profiles to select from.
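Once a VM with a vGPU profile attached is powered on, you can also confirm the assignment from the host side. nvidia-smi includes a vgpu subcommand that lists active vGPU instances, which is a quick way to verify the profile is actually in use.

nvidia-smi vgpu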

The next piece will be to set up a license server or connect to a cloud license server from NVIDIA. Then prep a base image and build out a Horizon desktop pool. I’ll cover those items in separate blog posts.

I hope this was helpful and thank you for reading!
