Reducing Overfitting in 3D Gaussian Splatting using Depth Supervision
T.H.B. Spanhoff (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Xucong Zhang – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
M. Weinmann – Graduation committee member (TU Delft - Computer Graphics and Visualisation)
Abstract
3D Gaussian Splatting (3DGS) is a method for representing 3D scenes, but it is prone to overfitting when trained with limited viewpoint diversity, often resulting in artifacts such as floating Gaussians at incorrect depths. This paper addresses the issue by introducing 3D Gaussian Splatting with Depth, which incorporates depth supervision from RGB-Depth (RGB-D) cameras into the training process. By using depth data to guide the placement of Gaussians, the proposed method aims to reduce these artifacts. Through quantitative and qualitative analysis, this paper demonstrates that depth-supervised Gaussian splatting mitigates overfitting artifacts, particularly in outdoor scenes with moderate camera viewpoint diversity. The depth-supervised model reduces the depth loss by a factor of three without substantially increasing the loss on regular views.
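To illustrate how depth supervision of this kind can be attached to 3DGS training, the sketch below adds an L1 depth term next to the photometric loss. It is a minimal, hypothetical example and not the paper's implementation: it assumes the rasterizer also returns a per-pixel depth map, and the function name, weighting term `lambda_depth`, and masking of invalid sensor pixels are illustrative choices.

```python
import torch
import torch.nn.functional as F

def depth_supervised_loss(render_rgb, render_depth, gt_rgb, gt_depth,
                          lambda_depth=0.5, depth_mask=None):
    """Hypothetical combined loss for depth-supervised 3DGS training.

    render_rgb / render_depth: image and per-pixel depth rendered from
    the current Gaussians (depth output assumed available from the
    rasterizer). gt_rgb / gt_depth: the RGB-D training frame.
    """
    # Photometric term used by 3DGS (L1 shown here; the original
    # method additionally uses a D-SSIM term).
    photometric = F.l1_loss(render_rgb, gt_rgb)

    # Depth term: penalize Gaussians rendered at depths that disagree
    # with the RGB-D measurement. An optional mask excludes pixels
    # where the sensor returned no valid depth.
    if depth_mask is None:
        depth_mask = torch.ones_like(gt_depth, dtype=torch.bool)
    depth = F.l1_loss(render_depth[depth_mask], gt_depth[depth_mask])

    # lambda_depth trades off photometric fidelity against agreement
    # with the measured depth; its value here is an assumption.
    return photometric + lambda_depth * depth
```

Under this formulation, Gaussians floating at incorrect depths incur a penalty even when they happen to reproduce the training views photometrically, which is the mechanism by which the abstract's overfitting artifacts are discouraged.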