PriNeRF

Prior constrained Neural Radiance Field for robust novel view synthesis of urban scenes with fewer views

Journal Article (2024)
Author(s)

Kaiqiang Chen (Chinese Academy of Sciences)

Bo Dong (Chinese Academy of Sciences)

Zhirui Wang (Chinese Academy of Sciences)

Peirui Cheng (Chinese Academy of Sciences)

Menglong Yan (Chinese Academy of Sciences)

Xian Sun (Chinese Academy of Sciences)

M. Weinmann (TU Delft - Computer Graphics and Visualisation)

Martin Weinmann (Karlsruhe Institute of Technology)

Research Group
Computer Graphics and Visualisation
DOI related publication
https://doi.org/10.1016/j.isprsjprs.2024.07.015
Publication Year
2024
Language
English
Volume number
215
Pages (from-to)
383-399
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Novel view synthesis (NVS) of urban scenes enables the virtual, interactive exploration of cities, which can further be used for urban planning, navigation, digital tourism, etc. However, many current NVS methods require a large number of images from known views as input and are sensitive to intrinsic and extrinsic camera parameters. In this paper, we propose a new unified framework for NVS of urban scenes with fewer required views via the integration of scene priors and the joint optimization of camera parameters, under a geometric constraint, along with the NeRF weights. The integration of scene priors makes full use of the priors from neighboring reference views to reduce the number of required known views. The joint optimization corrects errors in the camera parameters, which are usually derived from algorithms such as Structure-from-Motion (SfM), and thereby further improves the quality of the generated novel views. Experiments show that our method achieves about 25.375 dB and 25.512 dB on average in terms of peak signal-to-noise ratio (PSNR) on synthetic and real data, respectively. It outperforms popular state-of-the-art methods (i.e., BungeeNeRF and Mega-NeRF) by about 2–4 dB in PSNR. Notably, our method achieves better or competitive results compared to the baseline method with only one third of the known view images required by the baseline. The code and dataset are available at https://github.com/Dongber/PriNeRF.
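To make the joint-optimization idea concrete, below is a minimal PyTorch-style sketch of refining per-view camera corrections together with radiance-field weights, in the spirit of what the abstract describes. It is an illustration only, not the authors' implementation: the TinyField network, the 6-D pose_deltas parameterization, the random stand-in ray samples, the learning rates, and the simple quadratic pose penalty (standing in for the paper's geometric constraint) are all assumptions.

```python
import torch

class TinyField(torch.nn.Module):
    """Toy radiance field: maps a 3-D point to RGB + density (hypothetical)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 4),   # 3 color channels + 1 density
        )

    def forward(self, pts):
        return self.net(pts)

field = TinyField()

# One learnable 6-D correction (rotation + translation) per known view;
# SfM-derived poses serve as the initialization that these deltas refine.
num_views = 30
pose_deltas = torch.nn.Parameter(torch.zeros(num_views, 6))

opt = torch.optim.Adam([
    {"params": field.parameters(), "lr": 5e-4},
    {"params": [pose_deltas], "lr": 1e-4},   # smaller step for pose refinement
])

for step in range(100):
    view = torch.randint(num_views, (1,)).item()

    # Stand-in for points sampled along rays cast under the refined pose:
    # only the translational part of the correction is applied here, so the
    # photometric loss back-propagates into the pose parameters as well.
    t = pose_deltas[view, 3:]
    pts = torch.rand(1024, 3) + t
    target = torch.rand(1024, 3)          # stand-in ground-truth pixel colors

    rgb = torch.sigmoid(field(pts)[:, :3])
    photometric = ((rgb - target) ** 2).mean()

    # Quadratic penalty keeping refined poses near the SfM estimates; a crude
    # stand-in for the paper's geometric constraint, whose exact form differs.
    geometric = (pose_deltas ** 2).mean()

    loss = photometric + 0.01 * geometric
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The two parameter groups get different learning rates so that pose refinement stays a small correction to the SfM initialization rather than drifting freely. For reference, the PSNR figures quoted in the abstract follow the standard definition PSNR = 10 * log10(MAX^2 / MSE).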

Files

1-s2.0-S092427162400282X-main.... (pdf)
(pdf | 6.71 MB)
Embargo expired on 03-02-2025
License info not available