Fusion of perceptions for perceptual robotics


Abstract

The fusion of perception information for perceptual robotics is described. Visual perception is modelled mathematically as a probabilistic process that acquires and interprets visual data from an environment. The visual data are processed in multiresolutional form via the wavelet transform and optimally estimated via extended Kalman filtering at each resolution level, and the outcomes are fused for each data block. The measurement involves visual perception in virtual reality, which has direct implications for both design and perceptual robotics, including navigation issues of actual autonomous robots. For interaction with the environment and visual data acquisition, a laser-beam approach is considered and implemented by means of an agent in virtual reality that plays the role of a robot in reality.
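The pipeline described above (multiresolutional wavelet decomposition, per-level Kalman estimation, and fusion of the outcomes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a Haar decomposition stands in for the wavelet transform, a scalar linear Kalman filter stands in for the extended Kalman filter (the model here is linear), and the per-level estimates are fused by inverse-variance weighting; all function names and parameters are assumptions for illustration.

```python
import numpy as np

def kalman_estimate(measurements, meas_var, process_var=1e-4):
    """Scalar Kalman filter over a sequence of noisy measurements.
    Stands in for the paper's extended Kalman filter (linear model here).
    Returns the final state estimate and its error variance."""
    x, p = float(measurements[0]), meas_var
    for z in measurements[1:]:
        p += process_var            # predict step
        k = p / (p + meas_var)      # Kalman gain
        x += k * (z - x)            # update step
        p *= (1.0 - k)
    return x, p

def fuse(estimates):
    """Inverse-variance weighted fusion of per-level (estimate, variance)
    pairs -- one simple choice of optimal linear fusion rule."""
    xs = np.array([e[0] for e in estimates])
    ps = np.array([e[1] for e in estimates])
    w = 1.0 / ps
    return float(np.sum(w * xs) / np.sum(w))

def multiresolution_fuse(signal, levels, meas_var):
    """Haar-decompose the signal, run a Kalman estimator on the
    approximation coefficients at each resolution level, and fuse
    the per-level outcomes into one estimate of the signal mean."""
    approx = np.asarray(signal, dtype=float)
    estimates = []
    for lvl in range(levels):
        # One Haar analysis step: pairwise averages (approximation).
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        # Normalise so every level estimates the same underlying mean.
        scale = np.sqrt(2.0) ** (lvl + 1)
        estimates.append(kalman_estimate(approx / scale, meas_var))
    return fuse(estimates)
```

As a usage sketch, filtering a noisy constant signal of length 64 over three levels recovers an estimate close to the true mean; in the paper's setting the measurements would instead come from the agent's simulated laser-beam range data in the virtual environment.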
