Deep Learning Wavefront Sensing

Via Raw Shack-Hartmann Images

Abstract

The 'Smart Optics' group at the Delft Center for Systems and Control (DCSC) aims to achieve higher-resolution imaging through Adaptive Optics (AO). Adaptive Optics is a modern technique for detecting and correcting wavefront aberrations in real time and is widely used in biomedical and astronomical imaging. Wavefront sensing lies at the core of Adaptive Optics and is known to pose some challenges: the wavefront cannot be measured directly and has to be estimated from an intensity distribution on a detector. One approach to wavefront sensing is the Shack-Hartmann (SH) sensor, a pupil-plane sensor that subdivides the wavefront into N spatial areas using sub-apertures. The local slopes across all sub-apertures are integrated to reconstruct the wavefront. The major advantage of the Shack-Hartmann sensor is its high operating speed, which stems from the linear relationship between the local slopes and the original wavefront and enables real-time wavefront reconstruction. The Shack-Hartmann sensor, however, has some limitations. Its ability to reconstruct higher-order aberrations is restricted by the number of lenses in the micro-lens array. Furthermore, a centroiding algorithm is used to compute the local slopes; going from spots to centroids reduces the number of informative pixels and greatly limits the wavefront reconstruction potential. Moreover, these centroiding algorithms often add a measure of uncertainty, since spots can have irregular shapes or overlap.

In this Master's thesis a novel approach to phase reconstruction from the raw SH measurement is proposed. We show that Deep Learning techniques in combination with a micro-lens array can surpass traditional SH phase reconstruction methods and alleviate their current limitations. The proposed method uses the entire Shack-Hartmann pattern as input to a neural network, supplying the network with more information than existing Deep Learning Shack-Hartmann wavefront reconstruction (SHWR) methods, which still rely on centroids. Using this approach, we can combine the accuracy of sensor-less techniques with the speed of a Shack-Hartmann sensor.

Three different neural network architectures are considered in this thesis. Two of them (AlexNet and Xception) are adapted to output a series of Zernike coefficients, from which a wavefront can be reconstructed. The remaining network, U-Net, performs a direct pixel-wise estimation of the phase map. The input Shack-Hartmann patterns are created using different micro-lens array (MLA) geometries, consisting of 25, 256, or 900 lenses. The networks are evaluated on their ability to reconstruct combinations of 32 or 100 Zernike coefficients.
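To make the traditional baseline concrete, the sketch below shows how a conventional Shack-Hartmann pipeline estimates a local slope from one sub-aperture: an intensity-weighted centroid is computed and its displacement from the reference spot is divided by the lenslet focal length. This is a minimal illustration, not the thesis implementation; the names `spot_image`, `reference_centroid`, `pixel_pitch`, and `focal_length` are illustrative assumptions.

```python
# Hedged sketch of centre-of-mass centroiding for one Shack-Hartmann
# sub-aperture and the conversion of spot displacement to a local slope.
import numpy as np

def centroid(spot_image):
    """Intensity-weighted centre of mass of a sub-aperture image (pixels)."""
    total = spot_image.sum()
    ys, xs = np.indices(spot_image.shape)
    cy = (ys * spot_image).sum() / total
    cx = (xs * spot_image).sum() / total
    return cy, cx

def local_slope(spot_image, reference_centroid, pixel_pitch, focal_length):
    """Approximate wavefront slope from the centroid shift relative to the
    unaberrated reference spot position (small-angle approximation)."""
    cy, cx = centroid(spot_image)
    ry, rx = reference_centroid
    # Displacement on the detector divided by the lenslet focal length
    # gives the average slope over the sub-aperture.
    slope_y = (cy - ry) * pixel_pitch / focal_length
    slope_x = (cx - rx) * pixel_pitch / focal_length
    return slope_y, slope_x
```

Reducing each spot to these two numbers per sub-aperture is exactly the information loss the proposed raw-image approach avoids.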
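The coefficient-regressing networks can be pictured as standard classification CNNs with the head swapped for a regression layer. The following is a hedged sketch, assuming PyTorch and torchvision's AlexNet as a stand-in for the adapted AlexNet/Xception variants; the variable `n_zernike` and the dummy input shape are illustrative, not taken from the thesis.

```python
# Hedged sketch: adapting a classification CNN to regress Zernike coefficients
# from a single-channel Shack-Hartmann pattern.
import torch
import torch.nn as nn
from torchvision import models

n_zernike = 32                      # or 100, matching the thesis experiments
model = models.alexnet(weights=None)
# Shack-Hartmann patterns are single-channel; widen the first conv accordingly.
model.features[0] = nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2)
# Replace the 1000-class head with a regression head for the coefficients.
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, n_zernike)

loss_fn = nn.MSELoss()                       # regress coefficients directly
sh_pattern = torch.randn(8, 1, 224, 224)     # dummy batch of SH patterns
pred_coeffs = model(sh_pattern)              # shape: (8, n_zernike)
```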
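Finally, turning a vector of estimated Zernike coefficients back into a phase map amounts to summing the corresponding Zernike modes over the pupil. The sketch below writes out only a few low-order modes (Noll ordering, piston omitted) for illustration; the thesis uses 32 or 100 modes, and the function and grid size here are assumptions rather than the thesis code.

```python
# Hedged sketch: phase map on a unit pupil from a short Zernike coefficient vector.
import numpy as np

def zernike_phase_map(coeffs, grid_size=128):
    """Sum coefficient-weighted Zernike modes (Z2..Z6, Noll indexing)."""
    y, x = np.mgrid[-1:1:grid_size * 1j, -1:1:grid_size * 1j]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    pupil = r <= 1.0
    modes = [
        2 * r * np.cos(theta),                   # Z2: tip
        2 * r * np.sin(theta),                   # Z3: tilt
        np.sqrt(3) * (2 * r**2 - 1),             # Z4: defocus
        np.sqrt(6) * r**2 * np.sin(2 * theta),   # Z5: oblique astigmatism
        np.sqrt(6) * r**2 * np.cos(2 * theta),   # Z6: vertical astigmatism
    ]
    phase = np.zeros_like(r)
    for c, mode in zip(coeffs, modes):
        phase += c * mode
    return np.where(pupil, phase, 0.0)

# Example: a wavefront with some defocus and vertical astigmatism.
phase = zernike_phase_map([0.0, 0.0, 0.5, 0.0, -0.2])
```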
