Compressive Sensing and Fast Simulations

Applications to Radar Detection

Abstract

In most modern high-resolution multi-channel radar systems, one of the major problems to deal with is the huge amount of data to be acquired, processed and/or stored. But why do we need all these data? According to the well-known Nyquist-Shannon sampling theorem, real signals have to be sampled at a rate of at least twice the signal bandwidth to prevent ambiguities. Sampling of very wide bandwidths may therefore require Analog-to-Digital Converter (ADC) hardware that is unavailable or very expensive; especially in multi-channel systems, the cost and power consumption can become critical factors. In applications involving interleaving of radar modes in time or space (antenna aperture), multi-function operation often leads to conflicting requirements on sampling rates in both the time and spatial domains. So while, on the one hand, the increased number of degrees of freedom improves system performance, on the other hand it places a significant burden both on the off-line analysis and performance evaluation of sophisticated detectors and on real-time acquisition and processing. For example, space-time adaptive processing algorithms significantly enhance the detection of targets buried in noise, clutter and jamming, but evaluating the optimal filter weights imposes an immense computational load when simulating such detectors in the design phase as well as in real-time implementation. In some cases, measurement time may also be a constraint, as in 3D radar imaging for airport security inspection of passengers, where conventional acquisition of a full high-resolution 3D image requires a measurement time that can be unacceptable.

In this thesis we investigate sampling methods that deal extremely efficiently with the problems of processing complexity as well as analysis (or performance evaluation) by reducing the required number of samples. By cleverly exploiting properties of the signals or random variables involved, the considered techniques, namely Compressive Sensing (CS) and Importance Sampling (IS), both alleviate the burden related to data handling in complex radar detectors. These methods, although very different in nature, provide an alternative to classical sampling techniques. The first, compressive sensing, is based on a revolutionary acquisition and processing theory that enables reconstruction of sparse signals from a set of measurements sampled at a much lower rate than required by the Nyquist-Shannon theorem, resulting in both a shorter acquisition time and a reduced amount of data. The second, importance sampling, has its roots in statistical physics and represents a fast and effective method for the design and analysis of detectors whose performance has to be evaluated by simulation. By efficiently sampling the underlying probability density function, importance sampling provides a very fast alternative to conventional Monte Carlo simulation.

The first part of the thesis deals with the design and analysis of adaptive detectors for compressive sensing based radars. In systems using compressive sensing, the target signal, which is assumed to be sparse, is estimated from the noisy, undersampled measurements via L1-norm minimization algorithms. CS recovery algorithms require proper setting of parameters (thresholds) and are therefore not inherently adaptive. Classical radar systems, by contrast, employ a matched filter, matched to the transmitted waveform, followed by a Constant False Alarm Rate (CFAR) processor for the detection of targets embedded in unknown background clutter and noise.
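To make the classical processing chain concrete, the sketch below shows a minimal cell-averaging CFAR detector operating on a one-dimensional power profile. It is only an illustrative sketch: the training/guard window sizes, the design false alarm probability and the toy exponential clutter-plus-noise model are assumptions chosen for the example, not parameters taken from the thesis.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR over a 1-D power profile (square-law detector output).

    For each cell under test, the noise level is estimated by averaging
    n_train leading and n_train trailing cells, skipping n_guard guard cells
    on each side; the threshold is scaled to give the desired false alarm
    probability under exponentially distributed noise power.
    """
    n_cells = 2 * n_train                               # total training cells
    alpha = n_cells * (pfa ** (-1.0 / n_cells) - 1.0)   # CA-CFAR scaling factor
    detections = np.zeros_like(power, dtype=bool)
    half = n_train + n_guard
    for i in range(half, len(power) - half):
        lead = power[i - half : i - n_guard]
        trail = power[i + n_guard + 1 : i + half + 1]
        noise = np.mean(np.concatenate((lead, trail)))
        detections[i] = power[i] > alpha * noise
    return detections

# Toy example (illustrative numbers): exponential clutter-plus-noise power
# with a few injected point targets.
rng = np.random.default_rng(2)
power = rng.exponential(scale=1.0, size=1000)
power[[200, 600, 850]] += 30.0
print("detections at cells:", np.flatnonzero(ca_cfar(power)))
```

The scaling factor alpha is chosen so that, for exponentially distributed noise power, the average-based threshold yields the specified false alarm probability regardless of the (unknown) noise level, which is precisely the constant false alarm rate property.

The CS recovery step itself can be illustrated with a similarly minimal sketch: iterative soft thresholding (ISTA), one simple L1-norm minimization algorithm, used here to reconstruct a sparse scene from noisy undersampled measurements. The problem dimensions, the Gaussian sensing matrix, the noise level and the regularization threshold lam are illustrative assumptions; in particular, lam must be set by hand, which reflects the lack of inherent adaptivity mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n-sample sparse scene, m < n compressive measurements.
n, m, k = 256, 64, 5

# Sparse target scene with k non-zero reflectivities.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian sensing matrix and noisy undersampled measurements.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# ISTA: iterative soft thresholding for the L1-regularized least-squares
# problem min_x 0.5*||y - A x||^2 + lam*||x||_1.
lam = 0.05                             # threshold parameter (must be tuned)
step = 1.0 / np.linalg.norm(A, 2)**2   # step size <= 1/||A||_2^2 for convergence
x = np.zeros(n)
for _ in range(500):
    r = y - A @ x                      # residual
    z = x + step * (A.T @ r)           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```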
However, the non-linearity introduced by a CS recovery algorithm does not allow straightforward application of conventional adaptive detector design methodology. In the work reported here, by making use of the properties of the Complex Approximate Message Passing algorithm, we propose an adaptive non-linear recovery stage combined with classical CFAR processing and derive a novel adaptive CS detector. Our theoretical findings are demonstrated via both simulated and experimental results. Furthermore, we provide a methodology to predict the performance of the proposed detectors, which can be used to evaluate how transmitted power can be traded against undersampling, making it possible to incorporate CS-based sampling and detection in radar system design.

The second part of this thesis focuses on deriving importance sampling methods for fast simulation of rare events, especially applicable to Space-Time Adaptive Processing (STAP) radar detectors. These types of methods are, however, of much wider applicability: they can be, and have been, used in many other situations that require intensive and time-consuming Monte Carlo simulations. In conducting rare-event simulations of systems that involve mathematically complex signal processing operations, two principal issues contribute to simulation time. The first concerns the rarity of the event whose probability is being sought; the second concerns the computational intensity that accompanies the signal processing. It is a daunting task to conduct conventional Monte Carlo simulations involving several million trials to estimate low false alarm probabilities, with as many matrix inversions as are required in STAP. We demonstrate how fast simulation schemes can deal with these aspects, and devise tailored importance sampling biasing schemes for evaluating the performance of STAP detectors that are difficult or impossible to analyze analytically, such as low-rank STAP detectors. By comparing our results with traditional Monte Carlo methods, we show that importance sampling can achieve a tremendous gain in terms of computational time.
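As a generic illustration of the importance sampling principle (not the tailored STAP biasing schemes developed in the thesis), the sketch below estimates a small Gaussian tail probability, standing in for a low false alarm probability, first by plain Monte Carlo and then by drawing from a mean-shifted biasing density and reweighting each sample with the likelihood ratio. The threshold, sample sizes and choice of biasing density are arbitrary illustrative assumptions.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

# Toy rare event: tail probability P(X > t) for X ~ N(0, 1).
t = 5.0
p_ref = 0.5 * erfc(t / sqrt(2.0))        # ~2.9e-7, exact value for reference

# Plain Monte Carlo: with 10^6 trials the event is almost never observed, so
# the estimate is useless (and, in a STAP setting, every trial would also
# carry a costly operation such as a covariance matrix inversion).
n_mc = 1_000_000
p_mc = np.mean(rng.standard_normal(n_mc) > t)

# Importance sampling: draw from the biased density g = N(t, 1), under which
# the event is common, and reweight by the likelihood ratio f(x)/g(x).
n_is = 10_000
x = rng.normal(loc=t, scale=1.0, size=n_is)
w = np.exp(0.5 * t**2 - t * x)           # N(0,1) pdf / N(t,1) pdf
p_is = np.mean((x > t) * w)

print(f"reference      : {p_ref:.3e}")
print(f"plain MC       : {p_mc:.3e}  ({n_mc:,} trials)")
print(f"importance samp: {p_is:.3e}  ({n_is:,} trials)")
```

A few thousand biased trials already give an estimate with small relative error, whereas a million unbiased trials typically observe the event not even once; this reduction in the number of trials, and hence in the number of expensive per-trial operations, is the kind of gain exploited by the tailored biasing schemes for STAP detectors.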