Multi-Camera Registration for VR

A flexible, feature-based approach


Abstract

Real-time point cloud capture and multi-depth-camera 3D reconstruction are vital elements for bringing real-time representations into a virtual world and providing the immersive experience needed for VR/AR applications. To make this possible, camera calibration plays an essential role by supplying the camera spatial information required for 3D scene reconstruction. However, the calculation of camera extrinsic parameters in most existing systems still has notable drawbacks: the procedure relies heavily on extra calibration markers, or it is tied to a specific depth sensor through complicated steps that cannot easily be generalized to other sensors. To improve on this, we propose a markerless, feature-based pipeline for multi-camera re-calibration. The pipeline consists of four main stages. It adopts feature descriptor extraction and matching to remove the need for additional markers, and it improves point cloud registration accuracy through point cloud segmentation and part selection. The experimental results obtained in this research show that the pipeline can calibrate four cameras using a single object (such as a chair or a lamp) without additional calibration markers. The extrinsic parameters calculated with this pipeline are more accurate and require less processing time than those from the original procedure. The pipeline thus provides a basis for further human point cloud capture and camera calibration in real-time 3D reconstruction.
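
The abstract does not specify which descriptors or registration algorithm the pipeline uses, but the underlying idea (estimating camera extrinsics by matching features between per-camera point clouds of a shared object, rather than relying on markers) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using Open3D's FPFH features, RANSAC-based feature matching, and ICP refinement as stand-ins for the pipeline's feature-extraction, matching, and refinement stages; the voxel size, thresholds, and file names are assumptions for illustration only, not values from the paper.

```python
# Hedged sketch: markerless pairwise extrinsic estimation between two depth
# cameras by registering their point clouds of a shared object. Open3D's
# FPFH + RANSAC + ICP are illustrative stand-ins for the paper's stages;
# all parameters and file names below are assumptions.
import open3d as o3d

VOXEL = 0.01  # assumed downsampling resolution in metres


def preprocess(pcd, voxel=VOXEL):
    """Downsample, estimate normals, and compute FPFH feature descriptors."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh


def estimate_extrinsic(source_pcd, target_pcd, voxel=VOXEL):
    """Return a 4x4 transform mapping the source camera frame onto the target."""
    src_down, src_fpfh = preprocess(source_pcd, voxel)
    tgt_down, tgt_fpfh = preprocess(target_pcd, voxel)

    # Coarse alignment: RANSAC over FPFH feature correspondences (markerless).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True,
        voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine alignment: point-to-plane ICP refines the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation


if __name__ == "__main__":
    # Hypothetical per-camera captures of the same object (e.g. a chair).
    cam_a = o3d.io.read_point_cloud("camera_a_chair.ply")
    cam_b = o3d.io.read_point_cloud("camera_b_chair.ply")
    T_ab = estimate_extrinsic(cam_a, cam_b)
    print("Estimated extrinsic (camera A -> camera B):\n", T_ab)
```

For a four-camera rig like the one described in the abstract, the same pairwise estimate would be repeated (or chained) against a common reference camera; the paper's segmentation and part-selection stages, which are not reproduced here, would additionally restrict registration to the most reliable parts of the shared object.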