Deep Learning-Based 3D Imaging of Fluorescent Samples from 2D Image (Case No. 2019-521)

Summary:

UCLA researchers in the Department of Electrical and Computer Engineering have developed a deep learning-based approach, termed Deep-Z, that enables 3D imaging of fluorescent samples from a single 2D image, without mechanical scanning, additional hardware, or a trade-off in resolution or speed.

Background:

Three-dimensional (3D) fluorescence microscopic imaging has various applications in the biomedical and physical sciences. However, fluorescence emission from samples is both spatially and temporally incoherent, so fluorescence microscopy generally lacks the digital image propagation and time-reversal frameworks commonly employed in coherent microscopy for 3D imaging. Therefore, 3D fluorescence imaging in confocal or various super-resolution microscopy techniques is generally performed by scanning across the sample volume, acquiring a separate 2D fluorescence image for each focal plane or point in 3D. Because of this requirement for mechanical scanning, the image acquisition speed and the throughput of the system are greatly limited for volumetric samples, even with optimized scanning strategies. Additionally, because the images at different sample planes/points are not acquired simultaneously, temporal variations in the sample fluorescence can inevitably cause image artifacts. Moreover, phototoxicity and photobleaching are also concerns because the sample must be repeatedly excited during the scanning process. Thus, non-scanning 3D fluorescence microscopy methods are needed to overcome these challenges.

Innovation:

Researchers at UCLA have developed a deep learning-based neural network, termed Deep-Z, that inherently learns the physical laws governing fluorescence wave propagation and time-reversal, and computationally retrieves the 3D image of a fluorescent sample from a single 2D wide-field fluorescence image, without sacrificing the imaging speed, spatial resolution, field-of-view, or throughput of a standard fluorescence microscope. This data-driven fluorescence image propagation framework does not need a physical model of the imaging system, and rapidly propagates a 2D fluorescence image onto user-defined 3D surfaces without iterative searches or parameter estimation. In addition to rapid 3D imaging of a fluorescent sample volume, it can also be used to digitally correct for various aberrations caused by the sample and/or the optical system.
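To illustrate the workflow described above, the following is a minimal sketch of how a trained refocusing network could be queried to build a virtual 3D stack from one 2D image: the input image is paired with a per-pixel target-depth map (analogous to Deep-Z's user-defined 3D surface input), and the network is evaluated once per target plane. The function names, the two-channel input convention, and the stand-in `identity_net` are illustrative assumptions, not the inventors' actual implementation.

```python
import numpy as np

def make_network_input(image, target_depth_map):
    # Stack the 2D fluorescence image with a per-pixel target-depth map
    # as the two input channels of a hypothetical refocusing network.
    assert image.shape == target_depth_map.shape
    return np.stack([image, target_depth_map], axis=0)  # shape (2, H, W)

def virtual_focal_stack(image, depths, refocus_net):
    # Build a virtual 3D stack from ONE 2D image by querying the trained
    # network once per user-defined target plane -- no mechanical scanning.
    h, w = image.shape
    stack = []
    for z in depths:
        # A uniform depth map targets a flat plane; a spatially varying map
        # would target an arbitrary user-defined 3D surface instead.
        dpm = np.full((h, w), z, dtype=image.dtype)
        stack.append(refocus_net(make_network_input(image, dpm)))
    return np.stack(stack, axis=0)  # shape (len(depths), H, W)

# Stand-in for a trained network: simply returns the image channel.
identity_net = lambda x: x[0]

img = np.random.rand(16, 16).astype(np.float32)
vol = virtual_focal_stack(img, depths=[-5.0, 0.0, 5.0], refocus_net=identity_net)
print(vol.shape)  # (3, 16, 16)
```

Because every plane is inferred from the same snapshot, the sample is excited only once, which is what removes the scanning-speed, motion-artifact, and photobleaching limitations noted in the Background.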

Patent: 

Systems and methods for two-dimensional fluorescence wave propagation onto surfaces using deep learning

Potential Applications:

•    3D fluorescence microscopic imaging

Advantages:

•    Removes need for mechanical scanning or additional hardware during 3D fluorescence imaging

Development to Date:

Using this data-driven framework, the researchers have increased the depth-of-field of a microscope by 20-fold and imaged Caenorhabditis elegans neurons in 3D using a single fluorescence image.

Related Papers:

Deep-Z: 3D Virtual Refocusing of Fluorescence Images Using Deep Learning

Reference:

UCLA Case No. 2019-521

Lead Inventor:

Aydogan Ozcan
For More Information:
Nikolaus Traitler
Business Development Officer (BDO)
nick.traitler@tdg.ucla.edu
Inventors:
Aydogan Ozcan
Yair Rivenson
Yichen Wu