3DTV Rendering from Multiple Cameras
Abstract
In this master's thesis, a framework for rendering scenes for 3DTV applications is presented. The goal of our work is to investigate ways to generate virtual views of a scene captured by a relatively small number of real cameras. The adopted approach uses per-pixel operations and is therefore suitable for fast parallel implementation. A depth map is implicitly generated for each virtual viewpoint. The approach is refined by the use of camera proximity weights and an outlier removal method. The rendering system can handle occlusions and copes with a broad range of scenery. Our framework allows for flexible and scalable generation of a virtual view in terms of the number of capturing cameras.
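The camera proximity weighting mentioned above can be illustrated with a minimal sketch: candidate colors for a virtual pixel, gathered from several real cameras, are blended with weights that favor cameras closer to the virtual viewpoint. The inverse-distance weighting and the function names here are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

def proximity_weights(virtual_pos, camera_positions, eps=1e-8):
    """Weight each real camera inversely by its distance to the
    virtual viewpoint (illustrative choice; the thesis's exact
    weighting may differ). Returns weights that sum to 1."""
    d = np.linalg.norm(camera_positions - virtual_pos, axis=1)
    w = 1.0 / (d + eps)  # eps avoids division by zero at a camera center
    return w / w.sum()

def blend_colors(colors, weights):
    """Blend per-camera candidate colors (n_cams, 3) for one
    virtual pixel using the given normalized weights."""
    return (weights[:, None] * colors).sum(axis=0)

# Example: a virtual viewpoint at the origin, two cameras on the x-axis.
virtual_pos = np.zeros(3)
cams = np.array([[1.0, 0.0, 0.0],
                 [3.0, 0.0, 0.0]])
w = proximity_weights(virtual_pos, cams)
pixel = blend_colors(np.array([[1.0, 0.0, 0.0],
                               [0.0, 0.0, 1.0]]), w)
```

In this setup the nearer camera (distance 1) receives three times the weight of the farther one (distance 3), so the blended color leans toward its candidate sample; an outlier removal step, as used in the thesis, would discard candidate colors that disagree strongly with the consensus before blending.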
Files
Ewi_manta_2008.pdf
(pdf | 4.33 MB)