Background

Recent advances in signal processing and instrumentation have made it possible to design compressive imaging systems that are low-cost, high-performance alternatives to conventional imagers (e.g., single-pixel cameras, coded-aperture imagers) [1]. Contrary to conventional imagers, which acquire all of the pixels in parallel, compressive imagers acquire a sequence of multiplexed measurements, each of which combines all of the pixels at once. In such systems, the information is compressed at the hardware level using spatial codes, and dedicated algorithms are required to reconstruct the image.
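To make the acquisition model concrete, here is a minimal sketch in Python/NumPy of how a single-pixel measurement multiplexes all pixels through a binary spatial code, and how the scene can be recovered when as many measurements as pixels are taken. The dimensions, random patterns, and variable names are illustrative assumptions, not the actual design used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64   # number of pixels in the (flattened) scene -- illustrative size
m = 64   # number of sequential measurements (no compression in this sketch)

x = rng.random(n)                              # unknown scene, as a vector
A = rng.integers(0, 2, (m, n)).astype(float)   # binary spatial codes, one row per pattern

# Each measurement is a single scalar that mixes all pixels at once.
y = A @ x

# With m = n measurements, the scene is recovered by solving A x = y;
# compressive imaging instead takes m < n and exploits sparsity priors.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(x_hat, x, atol=1e-8))  # True
```

In a real compressive setup the number of measurements m is much smaller than n, and the reconstruction step is replaced by a sparsity-regularized solver or, as proposed in this project, a learned reconstruction network.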
The goal of this project is to demonstrate the feasibility of compressive fluorescence imaging, with the development of both the instrumentation and the reconstruction algorithms. We will target biomedical applications such as image-guided surgery and small-animal imaging. The challenge in compressive imaging is two-fold. First, compressive acquisitions are slow, as the measurements are performed sequentially, while conventional imagers are based on parallel measurements. Second, compressive approaches are limited by the trade-off between spatial and spectral resolution. In this project, you will design a fast, high-resolution imager using deep learning. In particular, a structured illumination design will be considered, based on selective plane illumination microscopy [2, 3]. Our idea is to design a set-up that allows minimal compression, i.e., where the spatial codes are adapted to the object under study. To do so, the PhD Fellow will investigate convolutional neural network approaches, following the previous work of the team [4, 5]. Convolutional networks will also be considered for the reconstruction of the hyperspectral cube, to build on previous promising results [6].
This is a joint project between two laboratories of the University of Lyon: the Light-Matter Institute (ILM) and the Medical Imaging Laboratory (CREATIS). The position is funded by the PRIMES (Physics, Radiobiology, Imaging and Simulation) Project, the goal of which is to develop concepts and methods for the exploration, diagnosis, and therapy of cancers and of pathologies associated with aging. This study is also related to the ANR-funded ARMONI Project, which targets compressive image-guided surgery. The PhD Fellow will have access to compressive imager prototypes as well as to existing algorithms, to design the spatial codes and reconstruct the images.
We are looking for an enthusiastic and creative student with a background in optics and computer science or machine learning. The successful candidate is expected to contribute to both the optical instrumentation and the reconstruction algorithms.
The gross monthly salary is €1,850. The PhD position is funded for three years, starting in October 2019. It includes €5k of support for travel and conference registrations. Extra teaching duties are available (max. 64 hours/year, ∼€40/hour).
How to apply?
Please send a detailed CV, a motivation letter, and academic records by May 2, 2019, to nicolas.ducros@creatis.insa-lyon.fr and firstname.lastname@example.org
[1] M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nature Photonics, vol. 13, pp. 13–20, Jan. 2019.
[2] R. M. Power and J. Huisken, “Adaptable, illumination patterning light sheet microscopy,” Scientific Reports, vol. 8, p. 9615, June 2018.
[3] M. Aakhte, E. A. Akhlaghi, and H.-A. J. Müller, “SSPIM: a beam shaping toolbox for structured selective plane illumination microscopy,” Scientific Reports, vol. 8, p. 10067, July 2018.
[4] F. Rousset, N. Ducros, A. Farina, G. Valentini, C. D’Andrea, and F. Peyrin, “Adaptive basis scan by wavelet prediction for single-pixel imaging,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 36–46, 2017.
[5] F. Rousset, N. Ducros, F. Peyrin, G. Valentini, C. D’Andrea, and A. Farina, “Time-resolved multispectral imaging based on an adaptive single-pixel camera,” Opt. Express, vol. 26, no. 8, pp. 10550–10558, 2018.
[6] S. Zhang, H. Hua, and Y. Fu, “Fast parallel implementation of dual-camera compressive hyperspectral imaging system,” IEEE Transactions on Circuits and Systems for Video Technology, p. 1, 2018.