GPU virtualization

GPU virtualization refers to technologies that allow the use of a GPU to accelerate graphics or GPGPU applications running on a virtual machine. GPU virtualization is used in various applications such as desktop virtualization,[1] cloud gaming[2] and computational science (e.g. hydrodynamics simulations).[3]

GPU virtualization implementations generally involve one or more of the following techniques: device emulation, API remoting, fixed pass-through and mediated pass-through. Each technique presents different trade-offs regarding virtual machine to GPU consolidation ratio, graphics acceleration, rendering fidelity and feature support, portability to different hardware, isolation between virtual machines, and support for suspending/resuming and live migration.[1][4][5][6]

API remoting

In API remoting or API forwarding, calls to graphical APIs from guest applications are forwarded to the host by remote procedure call, and the host then executes graphical commands from multiple guests using the host's GPU as a single user.[1] It may be considered a form of paravirtualization when combined with device emulation.[7] This technique allows sharing GPU resources between multiple guests and the host when the GPU does not support hardware-assisted virtualization. It is conceptually simple to implement, but it has several disadvantages:[1]

  • In pure API remoting, there is little isolation between virtual machines when accessing graphical APIs; isolation can be improved using paravirtualization
  • Performance ranges from 86% to as low as 12% of native performance in applications that issue a large number of drawing calls per frame
  • A large number of API entry points must be forwarded, and partial implementation of entry points may decrease fidelity
  • Applications on guest machines may be limited to a few available APIs

Hypervisors usually use shared memory between guest and host to maximize performance and minimize latency. Using a network interface instead (a common approach in distributed rendering), third-party software can add support for specific APIs (e.g. rCUDA[8] for CUDA) or for common APIs (e.g. VMGL[9] for OpenGL) when they are not supported by the hypervisor's software package, although network delay and serialization overhead may outweigh the benefits.
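
The following minimal sketch illustrates the remoting technique in the abstract, not any particular hypervisor's implementation: a guest-side stub marshals a graphics call and a host-side loop replays it against a stand-in dispatch table. The function name, the JSON encoding and the loopback TCP transport are illustrative substitutes for the shared-memory channels described above:

    # Hypothetical sketch of API remoting: a guest-side stub marshals a
    # graphics call and a host-side loop replays it against the real GPU
    # driver (represented here by plain Python callables). Real systems
    # use shared-memory rings or optimized transports, not JSON over TCP.
    import json
    import socket
    import threading

    def guest_stub(sock, func, *args):
        """Guest side: serialize one API call and forward it to the host."""
        msg = json.dumps({"func": func, "args": args}).encode()
        sock.sendall(len(msg).to_bytes(4, "big") + msg)

    def host_loop(conn, dispatch):
        """Host side: decode forwarded calls and issue them to the GPU."""
        while True:
            header = conn.recv(4)
            if not header:
                break  # guest disconnected
            size = int.from_bytes(header, "big")
            body = b""
            while len(body) < size:
                body += conn.recv(size - len(body))
            call = json.loads(body)
            dispatch[call["func"]](*call["args"])

    if __name__ == "__main__":
        server = socket.socket()
        server.bind(("127.0.0.1", 0))  # any free loopback port
        server.listen(1)
        client = socket.create_connection(server.getsockname())
        conn, _ = server.accept()
        # The "GPU" here is a stub; a hypervisor would call the host driver.
        table = {"glClearColor": lambda *c: print("host: clear to", c)}
        worker = threading.Thread(target=host_loop, args=(conn, table))
        worker.start()
        guest_stub(client, "glClearColor", 0.0, 0.0, 0.0, 1.0)
        client.close()
        worker.join()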

Application support from API remoting virtualization technologies

  Technology                                                 | Direct3D | OpenGL | Vulkan  | OpenCL
  VMware Virtual Shared Graphics Acceleration (vSGA)[10][11] | 11       | 4.1    | Yes     | No
  Parallels Desktop for Mac 3D acceleration[12]              | 11[a]    | 3.3[b] | No      | No
  Hyper-V RemoteFX vGPU[14][15]                              | 12       | 4.4    | No      | 1.1
  VirtualBox Guest Additions 3D driver[16][17][18]           | 8/9[c]   | 2.1[d] | No      | No
  Thincast Workstation - Virtual 3D[20]                      | 12.1     | No     | Yes     | No
  QEMU/KVM with Virgil 3D[21][22][23][24]                    | No       | 4.3    | Planned | No

  a. Wrapped to OpenGL using WineD3D.[13]
  b. Compatibility profile.
  c. Experimental. Wrapped to OpenGL using WineD3D.[19]
  d. Experimental.

Fixed pass-through

In fixed pass-through or GPU pass-through (a special case of PCI pass-through), a GPU is accessed directly by a single virtual machine exclusively and permanently. This technique achieves 96–100% of native performance[3] and high fidelity,[1] but the acceleration provided by the GPU cannot be shared between multiple virtual machines. As such, it has the lowest consolidation ratio and the highest cost, as each graphics-accelerated virtual machine requires an additional physical GPU.[1]

Fixed pass-through is implemented by several hypervisors, including QEMU/KVM (see below). VirtualBox removed support for PCI pass-through in version 6.1.0.[33]

QEMU/KVM

For certain GPU models, Nvidia and AMD video card drivers attempt to detect whether the GPU is being accessed by a virtual machine and, if so, disable some or all GPU features.[34] Nvidia has since changed its virtualization rules for consumer GPUs, disabling the check in GeForce Game Ready driver 465.xx and later.[35]
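
Before the 465.xx change, a common workaround under QEMU/KVM was to hide the hypervisor from the guest driver. The sketch below shows how this is often expressed as QEMU options; the PCI address and vendor-id string are placeholders, not values from the cited sources:

    # Sketch of the pre-465.xx workaround: hide the hypervisor from the
    # guest's Nvidia driver. "kvm=off" hides the KVM CPUID signature and
    # "hv_vendor_id" replaces the Hyper-V vendor string. The PCI address
    # and vendor-id value are placeholders.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",
        "-cpu", "host,kvm=off,hv_vendor_id=0123456789ab",
        # GPU handed to the guest via VFIO (see the sketch below):
        "-device", "vfio-pci,host=01:00.0,x-vga=on",
        # ... remaining virtual machine configuration elided ...
    ])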

For Nvidia, desktop and laptop consumer GPUs of various architectures can be passed through in different ways. For desktop graphics cards, passthrough can be done under KVM with either a legacy BIOS or a UEFI guest firmware configuration, via SeaBIOS and OVMF respectively.
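
Whichever guest firmware is used, the host must first detach the GPU from its native driver and hand it to the VFIO framework. The following minimal sketch shows the generic sysfs mechanism, assuming root privileges, a loaded vfio-pci module and a placeholder PCI address:

    # Hypothetical sketch: rebind a PCI GPU from its host driver to the
    # vfio-pci driver through sysfs so QEMU/KVM can pass it through.
    # Requires root and a loaded vfio-pci module; the PCI address is a
    # placeholder (find yours with `lspci -D`).
    from pathlib import Path

    GPU_ADDR = "0000:01:00.0"  # placeholder
    dev = Path("/sys/bus/pci/devices") / GPU_ADDR

    # Ask the PCI core to give this device to vfio-pci on the next probe.
    (dev / "driver_override").write_text("vfio-pci")

    # Detach whatever driver currently owns the GPU (e.g. nouveau, amdgpu).
    if (dev / "driver").exists():
        (dev / "driver" / "unbind").write_text(GPU_ADDR)

    # Re-run driver matching; vfio-pci now claims the device.
    Path("/sys/bus/pci/drivers_probe").write_text(GPU_ADDR)

    print("bound to:", (dev / "driver").resolve().name)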

NVIDIA

Desktops

For desktops, most graphics cards can be passed through, although for graphics cards with the Pascal architecture or older, the VBIOS of the graphics card must be supplied to the virtual machine if the GPU is used to boot the host.[36]

Laptops

For laptops, the NVIDIA driver checks for the presence of a battery via ACPI and returns an error if none is found. To avoid this, a custom ACPI table that spoofs a battery, created by decoding Base64 text into a binary table file, must be supplied to the virtual machine to bypass the check.[36]
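
A minimal sketch of that workaround, assuming QEMU's -acpitable option; the Base64 payload is a placeholder for the table text given in the cited wiki page:

    # Hypothetical sketch of the battery-spoofing workaround: decode the
    # Base64 text into a binary ACPI table and hand it to the guest with
    # QEMU's -acpitable option. The payload and file name are placeholders;
    # the actual table contents are described in the cited Arch wiki page.
    import base64
    import subprocess

    SSDT_B64 = "..."  # placeholder for the Base64-encoded fake-battery table

    with open("SSDT1.dat", "wb") as out:
        out.write(base64.b64decode(SSDT_B64))

    subprocess.run([
        "qemu-system-x86_64",
        # ... remaining virtual machine configuration elided ...
        "-acpitable", "file=SSDT1.dat",  # guest now "sees" a battery
    ])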

Pascal and earlier

For laptop graphics cards of the Pascal generation and older, passthrough depends heavily on the configuration of the graphics card. For laptops without NVIDIA Optimus, such as MXM variants, passthrough can be achieved through traditional methods. For laptops with NVIDIA Optimus that render through the CPU's integrated graphics framebuffer rather than their own, passthrough is more complicated: it requires a remote rendering display or service, the use of Intel GVT-g, and integrating the VBIOS into the boot configuration, because the VBIOS is stored in the laptop's system BIOS rather than on the GPU itself. For laptops with NVIDIA Optimus and a dedicated framebuffer, configurations vary. If Optimus can be switched off, passthrough is possible through traditional means; if Optimus is the only configuration, the VBIOS is most likely stored in the laptop's system BIOS, requiring the same steps as for laptops that render only through the integrated graphics framebuffer, although an external monitor is also possible.[37]

Mediated pass-through

In mediated device pass-through or full GPU virtualization, the GPU hardware provides contexts with virtual memory ranges for each guest through the IOMMU, and the hypervisor sends graphical commands from guests directly to the GPU. This technique is a form of hardware-assisted virtualization and achieves near-native[note 2] performance and high fidelity. If the hardware exposes contexts as full logical devices, then guests can use any API. Otherwise, APIs and drivers must manage the additional complexity of GPU contexts. As a disadvantage, there may be little isolation between virtual machines when accessing GPU resources.[1]
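
As a concrete illustration, Intel GVT-g (see the table below) exposes its vGPU types through the kernel's mediated-device (mdev) sysfs interface; writing a UUID to a type's create node instantiates a vGPU that can then be assigned to a guest. A minimal sketch with placeholder device paths:

    # Hypothetical sketch: instantiate an Intel GVT-g vGPU through the
    # kernel's mediated-device (mdev) sysfs interface. Requires root and
    # a GVT-g-enabled host; the PCI address and type name are placeholders.
    import uuid
    from pathlib import Path

    igd = Path("/sys/bus/pci/devices/0000:00:02.0")  # integrated GPU
    types = igd / "mdev_supported_types"

    # Each supported type is a vGPU "flavour" (memory/resolution profile).
    for t in sorted(types.iterdir()):
        print(t.name, "->", (t / "description").read_text().strip())

    # Writing a UUID to a type's `create` node instantiates the vGPU; the
    # UUID is later used to assign the mediated device to a guest.
    vgpu = uuid.uuid4()
    (types / "i915-GVTg_V5_4" / "create").write_text(str(vgpu))
    print("created vGPU", vgpu)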

Mediated pass-through is implemented by both software and hardware technologies. While API remoting is generally available for current and older GPUs, mediated pass-through requires hardware support available only on specific devices, as summarized in the table below.

Hardware support for mediated pass-through virtualization

  Vendor | Technology    | Dedicated: server               | Dedicated: professional | Dedicated: consumer | Integrated GPU families
  Nvidia | vGPU[46]      | GRID, Tesla                     | Quadro                  | No                  | No
  AMD    | MxGPU[42][47] | FirePro Server, Radeon Instinct | Radeon Pro              | No                  | No
  Intel  | GVT-g         | No                              | No                      | No                  | Broadwell and newer

Device emulation

GPU architectures are very complex and change quickly, and their internal details are often kept secret. It is generally not feasible to fully virtualize new generations of GPUs, only older and simpler generations. For example, PCem, a specialized emulator of the IBM PC architecture, can emulate an S3 ViRGE/DX graphics device, which supports Direct3D 3, and a 3dfx Voodoo2, which supports Glide, among others.[48]

When a VGA or SVGA virtual display adapter is used,[49][50][51] the guest may have no 3D graphics acceleration, with only minimal functionality available to allow access to the machine via a graphics terminal. The emulated device may expose only basic 2D graphics modes to guests. The virtual machine manager may also provide common API implementations using software rendering to enable 3D graphics applications on the guest, albeit at speeds that may be as low as 3% of hardware-accelerated native performance.[1] Several software technologies implement graphics APIs using software rendering.
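
On a Mesa-based guest, software rendering can be forced and verified from the environment. A minimal sketch, assuming Mesa's LIBGL_ALWAYS_SOFTWARE toggle and the glxinfo utility are available:

    # Sketch: force Mesa's software rasterizer and print the renderer the
    # guest actually reports. Assumes a Mesa-based stack and the `glxinfo`
    # utility; LIBGL_ALWAYS_SOFTWARE is a standard Mesa environment toggle.
    import os
    import subprocess

    env = dict(os.environ, LIBGL_ALWAYS_SOFTWARE="1")
    info = subprocess.run(["glxinfo"], env=env,
                          capture_output=True, text=True).stdout

    for line in info.splitlines():
        if "OpenGL renderer string" in line:
            print(line.strip())  # e.g. llvmpipe instead of the real GPU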

Notes

  1. Not available on VMware Workstation.
  2. Intel GVT-g achieves 80–90% of native performance.[38][39] Nvidia vGPU achieves 88–96% of native performance considering the overhead on a VMware hypervisor.[40]

References

  1. Dowty, Micah; Sugerman, Jeremy (July 2009). "GPU Virtualization on VMware's Hosted I/O Architecture". ACM SIGOPS Operating Systems Review (New York City: Association for Computing Machinery) 43 (3): 73–82. doi:10.1145/1618525.1618534. ISSN 0163-5980. https://www.usenix.org/legacy/event/wiov08/tech/full_papers/dowty/dowty.pdf. Retrieved 10 September 2020.
  2. Hong, Hua-Jun; Fan-Chiang, Tao-Ya; Lee, Che-Rung; Chen, Kuan-Ta; Huang, Chun-Ying; Hsu, Cheng-Hsin (2014). "GPU Consolidation for Cloud Games: Are We There Yet?". 13th Annual Workshop on Network and Systems Support for Games. Nagoya: Institute of Electrical and Electronics Engineers. pp. 1–6. doi:10.1109/NetGames.2014.7008969. ISBN 978-1-4799-6882-4. https://www.iis.sinica.edu.tw/~swc/pub/gpu_virtualization_for_cloud_games.html. Retrieved 14 September 2020. 
  3. Walters, John; Younge, Andrew; Kang, Dong-In; Yao, Ke-Thia; Kang, Mikyung; Crago, Stephen; Fox, Geoffrey (2014). "GPU Passthrough Performance: A Comparison of KVM, Xen, VMware ESXi, and LXC for CUDA and OpenCL Applications". IEEE 7th International Conference on Cloud Computing. Anchorage: IEEE Computer Society. pp. 636–643. doi:10.1109/CLOUD.2014.90. ISBN 978-1-4799-5063-8. https://ieeexplore.ieee.org/document/6973796. Retrieved 13 September 2020.
  4. Yu, Hangchen; Rossbach, Christopher (25 June 2017). "Full Virtualization for GPUs Reconsidered". ISCA-44 14th Annual Workshop on Duplicating, Deconstructing and Debunking. Toronto. https://www.cs.utexas.edu/~hyu/publication/wddd17-gpuvm.pdf. Retrieved 12 September 2020. 
  5. Tian, Kun; Dong, Yaozu; Cowperthwaite, David (June 2014). "A Full GPU Virtualization Solution with Mediated Pass-Through". USENIX Annual Technical Conference. Philadelphia: USENIX. pp. 121–132. ISBN 978-1-931971-10-2. https://www.usenix.org/system/files/conference/atc14/atc14-paper-tian.pdf. 
  6. Gottschlag, Mathias; Hillenbrand, Marius; Kehne, Jens; Stoess, Jan; Bellosa, Frank (November 2013). "LoGV: Low-Overhead GPGPU Virtualization". 10th International Conference on High Performance Computing. Zhangjiajie: IEEE Computer Society. pp. 1721–1726. doi:10.1109/HPCC.and.EUC.2013.245. ISBN 978-0-7695-5088-6. http://os.itec.kit.edu/downloads/logv_low_overhead%20gpgpu_virtualization.pdf. Retrieved 16 September 2020. 
  7. Suzuki, Yusuke; Kato, Shinpei; Yamada, Hiroshi; Kono, Kenji (June 2014). "GPUvm: Why Not Virtualizing GPUs at the Hypervisor?". USENIX Annual Technical Conference. Philadelphia: USENIX. pp. 109–120. ISBN 978-1-931971-10-2. https://www.usenix.org/system/files/conference/atc14/atc14-paper-suzuki.pdf. Retrieved 14 September 2020. 
  8. Duato, José; Peña, Antonio; Silla, Federico; Fernández, Juan; Mayo, Rafael; Quintana-Ortí, Enrique (December 2011). "Enabling CUDA acceleration within virtual machines using rCUDA". 18th International Conference on High Performance Computing. Bangalore: IEEE Computer Society. pp. 1–10. doi:10.1109/HiPC.2011.6152718. ISBN 978-1-4577-1951-6. https://core.ac.uk/download/pdf/231705177.pdf. Retrieved 13 September 2020.
  9. Lagar-Cavilla, Horacio; Tolia, Niraj; Satyanarayanan, Mahadev; Lara, Eyal (June 2007). "VMM-Independent Graphics Acceleration". VEE '07. New York City: Association for Computing Machinery. pp. 33–43. doi:10.1145/1254810.1254816. ISBN 978-1-59593-630-1. http://www.cs.cmu.edu/~satya/docdir/lagar-cavilla-vee-vmgl-2007.pdf. Retrieved 12 September 2020.
  10. Template:Cite tech report
  11. visaac. "VMware Workstation 16 Pro Release Notes" (in en). https://docs.vmware.com/en/VMware-Workstation-Pro/16/rn/VMware-Workstation-16-Pro-Release-Notes.html. 
  12. Template:Cite tech report
  13. Bright, Peter (11 March 2014). "Valve releases open source Direct3D to OpenGL translator". Ars Technica. https://arstechnica.com/gaming/2014/03/valve-releases-open-source-direct3d-to-opengl-translator/. 
  14. Template:Cite tech report
  15. Template:Cite tech report
  16. Template:Cite tech report
  17. Template:Cite tech report
  18. Larabel, Michael (19 December 2018). "VirtualBox 6.0 3D/OpenGL Performance With VMSVGA Adapter". Phoronix. https://www.phoronix.com/scan.php?page=article&item=virtualbox-60-vmsvga&num=1. 
  19. Larabel, Michael (29 January 2009). "VirtualBox Gets Accelerated Direct3D Support". Phoronix. https://www.phoronix.com/scan.php?page=news_item&px=NzAyNA. 
  20. "Hi!". The Thincast Workstation FreeRDP Blog.
  21. "Virgil 3D GPU project". freedesktop.org. https://virgil3d.github.io/. 
  22. Template:Cite tech report
  23. Wollny, Gert (28 August 2019). "Virglrenderer and the state of virtualized virtual worlds". Collabora News & Blog. https://www.collabora.com/news-and-blog/blog/2019/08/28/virglrenderer-state-of-virtualized-virtual-worlds/. 
  24. Hoffmann, Gerd (28 November 2019). "virtio gpu status and plans". https://www.kraxel.org/blog/2019/11/virtio-gpu-status-and-plans/. 
  25. Template:Cite tech report
  26. Template:Cite tech report
  27. Template:Cite tech report
  28. Template:Cite tech report
  29. Template:Cite tech report
  30. Larabel, Michael (4 May 2014). "Intel Pushes Their Graphics Virtualization Capabilities". Phoronix. https://www.phoronix.com/scan.php?page=news_item&px=MTY4MTc.
  31. "Bringing New Use Cases and Workloads to the Cloud with Intel Graphics Virtualization Technology (Intel GVT-g)". Intel. 2016. https://01.org/sites/default/files/documentation/gvt_flyer_final.pdf.
  32. Jain, Sunil (4 May 2014). "Intel Graphics Virtualization Update". Intel. https://01.org/blogs/2014/intel%C2%AE-graphics-virtualization-update.
  33. "Changelog for VirtualBox 6.1". Oracle Corporation. 10 December 2019. https://www.virtualbox.org/wiki/Changelog-6.1. 
  34. "PCI passthrough via OVMF - Video card driver virtualization detection". https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Video_card_driver_virtualisation_detection. 
  35. "GeForce GPU Passthrough for Windows Virtual Machine (Beta)". 2021-03-30. https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/geforce-gpu-passthrough-for-windows-virtual-machine-%28beta%29. 
  36. "PCI passthrough via OVMF - ArchWiki". https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Video_card_driver_virtualisation_detection.
  37. Tian, Lan (2020-06-25). "Intel and NVIDIA GPU Passthrough on a Optimus MUXless Laptop". https://lantian.pub/en/article/modify-computer/laptop-intel-nvidia-optimus-passthrough.lantian/#Stop-Host-OS-from-Tampering-with-NVIDIA-GPU. 
  38. Zheng, Xiao (August 2015). "Media Cloud Based on Intel Graphics Virtualization Technology (Intel GVT-g) and OpenStack". Intel Developer Forum. San Francisco: Intel. https://01.org/sites/default/files/documentation/sz15_sfts002_100_engf.pdf. Retrieved 14 September 2020. 
  39. Wang, Zhenyu (September 2017). "Full GPU virtualization in mediated pass-through way". XDC2017. Mountain View, California: X.Org Foundation. https://www.x.org/wiki/Events/XDC2017/wang_gvt.pdf. Retrieved 14 September 2020. 
  40. Template:Cite tech report
  41. Template:Cite tech report
  42. Template:Cite tech report
  43. Wang, Hongbo (18 October 2018). "2018-Q3 release of XenGT (Intel GVT-g for Xen)" (Press release). Intel Open Source Technology Center. Retrieved 14 August 2020.
  44. Template:Cite tech report
  45. Wang, Hongbo (18 October 2018). "2018-Q3 release of KVMGT (Intel GVT-g for KVM)" (Press release). Intel Open Source Technology Center. Retrieved 14 August 2020.
  46. "NVIDIA Virtual GPU Software Supported GPUs". Nvidia. https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html. 
  47. Template:Cite tech report
  48. "Systems/motherboards emulated". https://pcem-emulator.co.uk/status.html. 
  49. Template:Cite tech report
  50. Template:Cite tech report
  51. Template:Cite tech report
  52. Template:Cite tech report
  53. Template:Cite tech report