Inter frame compression of 3D dynamic point clouds

Abstract

In recent years, Virtual Reality (VR) and Augmented Reality (AR) applications have seen a drastic increase in commercial popularity. Different representations have been used to create 3D reconstructions for AR and VR. Point clouds are one such representation, characterized by their simplicity and versatility, which make them suitable for real-time applications. However, point clouds are unorganized, and identifying redundancies that can be exploited for compression is challenging. For the compression of time-varying, or dynamic, sequences, it is critical to identify temporal redundancies that can be used to build predictors and further compress streams of point clouds.

Most of the previous research into point cloud compression relies on the octree data structure. However, this approach was applied to relatively sparse datasets. Recently, with the ongoing standardization activities on point cloud compression, new dense, photorealistic point cloud datasets have become available. Compressing them with existing octree-based codecs is computationally expensive, as the tree depth required to achieve a reasonable level of detail is much higher than was previously needed. We propose a point cloud codec that terminates the octree at a fixed level of detail and encodes additional information in an enhancement layer. We also add inter prediction to the enhancement layer in order to gain further bit-rate savings. We validate our codec by evaluating it within the framework set up by standardization organizations such as MPEG, and we demonstrate an improvement over the current MPEG anchor codec.
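The abstract does not spell out the codec itself, but a minimal sketch may help illustrate the idea of terminating the octree at a fixed level of detail: points are quantized to occupied voxels at a chosen depth (the base layer), and the remaining within-voxel detail is set aside for a hypothetical enhancement layer. The function name, parameters, and data layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def octree_base_layer(points, max_depth):
    """Quantize a point cloud to an octree of fixed depth.

    Returns a dict mapping each occupied leaf voxel (the base layer)
    to the residual point offsets inside it, which a hypothetical
    enhancement layer could encode further.
    """
    # Normalize the cloud into the unit cube so voxel indices are integers.
    mins = points.min(axis=0)
    extent = float((points - mins).max()) or 1.0
    normalized = (points - mins) / extent

    # At the chosen depth there are 2**max_depth cells per axis.
    grid = 1 << max_depth
    voxels = np.minimum((normalized * grid).astype(np.int64), grid - 1)

    base_layer = {}  # voxel index -> residual offsets within that voxel
    for vox, point in zip(map(tuple, voxels), normalized):
        residual = point * grid - np.asarray(vox)  # position inside the voxel, in [0, 1)
        base_layer.setdefault(vox, []).append(residual)
    return base_layer

if __name__ == "__main__":
    # The occupied voxels would be entropy-coded as the octree (base) layer;
    # the residuals are candidates for enhancement-layer coding and inter prediction.
    cloud = np.random.rand(1000, 3)
    layer = octree_base_layer(cloud, max_depth=5)
    print(f"{len(layer)} occupied voxels at depth 5 for {len(cloud)} points")
```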