Myopic deconvolution from wave-front sensing (MDWFS) is a powerful tool for high-resolution imaging. It is typically used with monochromatic, short-exposure images whose integration times are less than the coherence time of the atmosphere, and with Shack-Hartmann wave-front sensor data in which the number of sub-apertures across the pupil is commensurate with the turbulence strength D/r_0, where D is the diameter of the telescope and r_0 is the spatial coherence length of the atmosphere. However, there are important imaging scenarios that do not fit this model. Imaging faint targets usually requires integration times greater than the atmospheric coherence time and large spectral bandwidths. Observing targets during poor seeing conditions results in D/r_0 values significantly greater than the number of sub-apertures across the pupil. In these cases, we expect that a high-fidelity estimate of the object will require an algorithm that accurately models the physical effects of broad temporal and spectral bandwidth in the point-spread function. In this paper we demonstrate the performance of a new MDWFS algorithm, called DORA, designed to work with imagery obtained in strong-turbulence conditions. The algorithm includes models of the temporal behavior of the atmosphere and of finite spectral bandwidth, and comprises several stages of processing, including deconvolution from wave-front sensing (DWFS) and joint estimation via multi-frame blind deconvolution (MFBD). Results based on simulated data show that DORA provides high-fidelity restorations for imagery acquired through strong turbulence, D/r_0 > 40. Real-world performance of the new code is established with results from data acquired with the AEOS 3.6 m telescope, both with and without adaptive optics compensation.
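As an illustration of the strong-turbulence regime described above, the following sketch computes the turbulence-strength ratio D/r_0. The AEOS aperture D = 3.6 m comes from the abstract; the seeing value r_0 = 8 cm is an assumed, purely illustrative number, not a measurement reported in the paper.

```python
def turbulence_ratio(D_m: float, r0_m: float) -> float:
    """Ratio of telescope diameter D to the atmospheric coherence
    length (Fried parameter) r_0, both in meters."""
    return D_m / r0_m

# AEOS 3.6 m telescope; assume poor seeing with r_0 = 8 cm
# (illustrative value only).
ratio = turbulence_ratio(3.6, 0.08)
print(f"D/r_0 = {ratio:.0f}")  # D/r_0 = 45, above the D/r_0 > 40 regime
```

Under this assumed seeing, the ratio exceeds the D/r_0 > 40 threshold the abstract uses to characterize strong turbulence, the regime DORA targets.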