Performance-gain Investigation of Foveated Rendering Technique
Modern virtual reality (VR) head-mounted displays render more than 4 million pixels, and each pixel must be rasterized, lit, shaded, and colored. High-resolution displays, along with the increasing visual quality of VR content, place a heavy burden on the graphics hardware; as a result, only increasingly powerful graphics processing units (GPUs) can handle the demand. Foveated rendering (FR) is a technique with the potential to significantly reduce the processing performance required for graphics-intensive VR applications. The technique adapts the rendering quality dynamically and renders high-quality content only at the user's focus point. Previous research has proposed promising FR techniques; however, this work often focuses on the technical implementation of FR, and it remains inconclusive whether the technique is beneficial in all situations.
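The core idea described above, full rendering quality only near the gaze point and reduced quality in the periphery, can be sketched in a few lines. This is a minimal illustration, not the renderer used in the study; the tier thresholds and shading-rate values are illustrative assumptions:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_radius):
    """Pick a shading rate for pixel (px, py) from its distance to the
    gaze point: full quality inside the foveal circle, coarser outside.
    The 2x-radius tier boundary is a hypothetical choice."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= fovea_radius:        # foveal region
        return 1                    # shade every pixel
    elif dist <= 2 * fovea_radius:  # near periphery
        return 2                    # shade 1 pixel per 2x2 block
    else:                           # far periphery
        return 4                    # shade 1 pixel per 4x4 block

def foveal_coverage(fovea_radius, width, height):
    """Fraction of the frame covered by the foveal circle
    (assumes the circle lies fully inside the frame)."""
    return math.pi * fovea_radius ** 2 / (width * height)
```

For example, on a 1920x1080 frame a foveal radius of 200 pixels covers roughly 6% of the scene, well below the 30% coverage bound reported in the abstract.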
This research aims to investigate the performance-gain break-even point of foveated rendering. The approach is experimental: three scenes of varying geometric and algorithmic complexity are analyzed on two GPU types under varying FR settings. The performance of the FR renderer and of a standard renderer is profiled and compared against a theoretical model. The results indicate that a low-end mobile GPU benefits from FR, even with a high-overhead implementation, when the scene contains more than 121,000 vertices, is lit by at least 3 lights, and the high-resolution circular region around the gaze point covers at most 30% of the scene. A high-end desktop GPU, on the other hand, benefits from FR when the scene is vast (more than 900,000 vertices), is illuminated by at least 15 lights with shadows enabled, and the high-resolution foveal region covers 10% of the scene. The results match the theoretical expectations. Furthermore, FR benefits are greater for a scene with a high per-pixel computational cost than for a scene with a large number of pixels. In summary, factors such as the scene characteristics, the computational complexity of the shading algorithms, the hardware specifications, and the foveal-region parameters must all be considered to maximize FR's usefulness.
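The break-even reasoning above can be illustrated with a toy cost model: FR pays off when the savings from coarsely shaded peripheral pixels exceed the fixed overhead of the FR pipeline. All function names, the linear pixels-times-lights cost assumption, and the numbers below are illustrative, not the study's actual model or measurements:

```python
def frame_cost(pixels, lights):
    # Simplified fragment-shading cost: every shaded pixel
    # is evaluated against every light.
    return pixels * lights

def foveated_frame_cost(pixels, lights, coverage, peripheral_rate, overhead):
    # Foveal pixels shaded at full rate, peripheral pixels at a
    # reduced rate, plus a fixed per-frame FR pipeline overhead.
    foveal = coverage * pixels
    peripheral = (1 - coverage) * pixels / peripheral_rate
    return frame_cost(foveal + peripheral, lights) + overhead

def fr_beneficial(pixels, lights, coverage, peripheral_rate, overhead):
    # Break-even test: FR helps only when its total cost is lower
    # than shading the full frame at full quality.
    fr = foveated_frame_cost(pixels, lights, coverage, peripheral_rate, overhead)
    return fr < frame_cost(pixels, lights)
```

Under this model, more lights (higher per-pixel cost) and smaller foveal coverage both widen the margin over the fixed overhead, which mirrors the abstract's finding that FR benefits are greater for scenes with high per-pixel computational cost.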
Committee: Jin Tian (major professor), Rafael Radkowski (major professor), and Yan-Bin Jia
Join on Zoom: https://iastate.zoom.us/j/95958055556