Multiple viewpoint rendering by exploiting image coherence in epipolar space
Abstract
In recent years, virtual reality and 3D displays that require many images as input, e.g. head-mounted displays or lightfield displays, have seen widespread adoption. Real-time imagery for such displays is commonly computed by brute force, rendering each image separately with a forward(+) or deferred rendering pipeline. The aim of the presented study is to speed up this process by exploiting the coherence between images. The result is a proof-of-concept framework intended as an alternative to brute-force generation of each image, in which the developed algorithms interpolate from a set of source images based on the theory of epipolar geometry. This theory relates the projections of a surface point in 3D space across different images.
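As an illustration of that relation (standard epipolar-geometry notation, not taken from this work itself): a surface point observed at homogeneous image coordinates $\mathbf{x}$ in one view and $\mathbf{x}'$ in another is constrained by the fundamental matrix $F$ of the two cameras, so a pixel in one image maps to an epipolar line in the other:

```latex
\mathbf{x}'^{\top} F \,\mathbf{x} = 0
```

This constraint is what allows interpolation algorithms to restrict their search for corresponding pixels to a single line per source image rather than the full image plane.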
Besides advantages such as better performance than the baseline, interpolation-based image generation also has disadvantages. Specific to the presented use case is the need for special support of commonly used view-dependent properties. As such, support for specularity is demonstrated by incorporating the Phong reflection model into the framework.
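A minimal sketch of why specularity needs special handling: in the Phong model the specular term depends on the view direction, so it cannot simply be copied from a source image rendered for a different viewpoint. The function below is an illustrative single-light, scalar-intensity version (the coefficient names and defaults are assumptions, not values from this work):

```python
import numpy as np

def phong(normal, light_dir, view_dir, base_color,
          k_a=0.1, k_d=0.7, k_s=0.4, shininess=32.0):
    """Classic Phong reflection: ambient + diffuse + specular.

    The specular term depends on view_dir, which is why images
    interpolated to a new viewpoint must re-evaluate it rather
    than reuse the shaded color of a source image.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Reflect the light direction about the surface normal.
    r = 2.0 * np.dot(n, l) * n - l
    diffuse = max(np.dot(n, l), 0.0)
    specular = max(np.dot(r, v), 0.0) ** shininess
    return k_a * base_color + k_d * diffuse * base_color + k_s * specular
```

Only the ambient and diffuse terms are view-independent; moving the camera changes `view_dir` and thus the specular highlight, which an interpolation-based pipeline must account for per target image.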
Additionally, several techniques are included to improve the performance of the basic interpolation algorithm, further reducing the frame times of the developed interpolation-based rendering pipelines compared to the baseline, which for this study is a deferred rendering pipeline. Although the increased performance comes at the cost of reduced image quality, the resulting images are of acceptable quality compared to the baseline. All aspects considered, the results show that a rendering framework based on epipolar geometry is viable, but many features used in production rendering pipelines remain to be ported to the current implementation. Moreover, the techniques underlying the current implementation leave considerable room for improvement.