Radar and video multimodal learning for human activity classification
Richard J. de Jong (Student TU Delft)
F. Uysal (TU Delft - Microwave Sensing, Signals & Systems)
Matijs J.C. Heiligers (TNO)
Jacco de Wit (TNO)
Abstract
Camera systems are widely used for surveillance in the security and defense domains. Their main advantages are high resolution, ease of use, and the fact that optical imagery is easy for human operators to interpret. However, particularly in the defense domain, cameras have some disadvantages. In poor lighting conditions, or in dust or smoke, image quality degrades; in addition, cameras cannot provide range information. These issues may be alleviated by exploiting the strong points of radar. Radar performance is largely preserved at night, in varying weather conditions, and in dust and smoke. Furthermore, radar provides range information for detected objects. Since their qualities appear to be complementary, can radar and camera systems learn from each other? In the current study, the potential of radar/video multimodal learning is assessed for the classification of human activity.