Visualization of Point Clouds in Mobile Augmented Reality using Continuous Level of Detail Method

Abstract

Visualizing point clouds is an integral part of processing the data, enabling users to explore and interact with point clouds more intuitively. However, most current point cloud renderers are developed for non-immersive environments. In recent years, new technologies such as Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) have emerged and introduced new ways of presenting 3D content. Among them, AR is the most commonly used technology, since AR applications run on mobile devices without dedicated equipment such as headsets and controllers. Many mobile AR applications already exist, playing important roles in fields such as architecture, industrial design, navigation, advertising, medicine and gaming. However, the use of point clouds in the mobile AR environment remains largely unexplored. As on other platforms, the biggest issue in showing point clouds in the mobile AR environment is that point cloud datasets usually contain a massive amount of data. Moreover, since mobile devices have relatively limited CPU and GPU resources, meeting both the performance and the visual quality requirements is quite challenging. In addition, current continuous level-of-detail (cLoD) methods were developed for desktop, VR and web-based viewers; the cLoD method has to be improved and revised in order to fit the mobile AR environment. In this paper, an interactive visualization of point clouds using the cLoD method in the mobile AR environment is realized. The main idea of this method is to considerably reduce the number of points to be rendered. The method uses a cLoD model with an ideal distribution over LoDs, which allows unnecessary points to be removed without the sudden changes in density present in the commonly used discrete level-of-detail approaches.
Camera position, orientation and the distance from the camera to the point cloud model are also taken into consideration when filtering points. To further improve visual quality, an adaptive point size rendering strategy is applied. Furthermore, for the user's convenience, the GUI provides setting options for adjusting the values of the parameters used in rendering. The performance of the rendering system is evaluated with multiple quantitative indicators and examined on different types of datasets. The results show that our method significantly improves rendering performance while achieving good visual quality. The finalized rendering system is suitable for most indoor applications and some outdoor applications. Moreover, a comparison between our cLoD method and traditional mesh-based approaches is presented to show that the cLoD method has the potential to replace mesh models in some cases. The source code and the installation package are available at: https://github.com/LiyaoZhang0702/AR_PointCloud.
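The distance-based cLoD filtering described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the point tuple layout, the continuous `level` attribute per point, and the particular target-level function of camera distance are all assumptions made for illustration; the core idea shown is that each point carries a continuous LoD value and is kept only when that value falls below a threshold that decreases with distance from the camera.

```python
import math

def clod_filter(points, cam_pos, root_spacing, max_level):
    """Keep only the points whose continuous LoD level is below a
    distance-dependent target level (illustrative sketch).

    points       -- iterable of (x, y, z, level); 'level' is assumed to be
                    a continuous LoD value assigned when building the model
    cam_pos      -- (x, y, z) camera position
    root_spacing -- hypothetical scale parameter controlling how fast
                    detail falls off with distance
    max_level    -- maximum (finest) LoD level in the model
    """
    visible = []
    for x, y, z, level in points:
        dx, dy, dz = x - cam_pos[0], y - cam_pos[1], z - cam_pos[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        # Assumed target-level function: points near the camera get a
        # high target (dense rendering), distant points a low one.
        target = max_level * root_spacing / (root_spacing + dist)
        if level <= target:
            visible.append((x, y, z, level))
    return visible
```

Because the target level varies continuously with distance, point density thins out gradually as points recede from the camera, rather than dropping in the visible steps produced by discrete LoD tiles.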