


Title:
SYSTEMS AND METHODS FOR TWO-DIMENSIONAL FLUORESCENCE WAVE PROPAGATION ONTO SURFACES USING DEEP LEARNING
Document Type and Number:
WIPO Patent Application WO/2020/139835
Kind Code:
A1
Abstract:
A fluorescence microscopy method includes a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is/are appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. The trained deep neural network outputs fluorescence output image(s) of the sample that is/are digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.

Inventors:
OZCAN AYDOGAN (US)
RIVENSON YAIR (US)
WU YICHEN (US)
Application Number:
PCT/US2019/068347
Publication Date:
July 02, 2020
Filing Date:
December 23, 2019
Assignee:
UNIV CALIFORNIA (US)
International Classes:
G01N15/14; G06K9/00; G06T7/00
Domestic Patent References:
WO2013104938A2 2013-07-18
Foreign References:
US20080290293A1 2008-11-27
US20170185871A1 2017-06-29
US20170249548A1 2017-08-31
US20180286038A1 2018-10-04
Other References:
OUNKOMOL ET AL.: "Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy", NATURE METHODS, vol. 15, no. 11, 17 September 2018 (2018-09-17), pages 917 - 920, XP036929866, Retrieved from the Internet [retrieved on 20200221]
SAKURIKAR PARIKSHIT ET AL.: "RefocusGAN: Scene Refocusing Using a Single Image", LECTURE NOTES IN COMPUTER SCIENCE, vol. ECCV, 6 October 2018 (2018-10-06), pages 519 - 535, XP047489263, ISBN: 978-3-540-74549-5, DOI: 10.1007/978-3-030-01225-0_31
HAN LIANG ET AL.: "Refocusing Phase Contrast Microscopy Images", LECTURE NOTES IN COMPUTER SCIENCE, vol. 10434, 4 September 2017 (2017-09-04), ISBN: 978-3-030-58594-5
WU, Y. ET AL.: "Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning", NAT METHODS, vol. 16, 2019, pages 1323 - 1331, XP055871864, DOI: 10.1038/s41592-019-0622-5
See also references of EP 3903092A4
Attorney, Agent or Firm:
DAVIDSON, Michael S. (US)
Claims:
What is claimed is:

1. A fluorescence microscopy method comprising:

providing a trained deep neural network that is executed by software using one or more processors;

inputting at least one two-dimensional fluorescence microscopy input image of a sample to the trained deep neural network, wherein the at least one input image is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image; and

outputting at least one fluorescence output image of the sample from the trained deep neural network that is digitally propagated or refocused to the user-defined or automatically generated surface defined by the DPM.

2. The method of claim 1, wherein a plurality of fluorescence output images using a plurality of DPMs from the trained deep neural network are digitally combined to create a volumetric image of the sample.

3. The method of claim 1, wherein a plurality of fluorescence output images using a plurality of DPMs from the trained deep neural network are digitally combined to create an extended depth of field (EDOF) image of the sample.

4. The method of claim 1, wherein at least one fluorescence output image using at least one DPM from the trained deep neural network is used to create an improved-focus image of the sample.

5. The method of claim 1, wherein a plurality of fluorescence output images from the trained deep neural network are digitally combined to create an image of the sample over an arbitrary user-defined or automatically generated 3D surface.

6. The method of claim 1, wherein a plurality of fluorescence output images from the trained deep neural network are digitally combined to extend the depth of field of the microscope used to obtain the input image.

7. The method of claim 1, wherein the fluorescence output image(s) from the trained deep neural network enable a reduction of photon dose or light exposure on the sample volume.

8. The method of claim 1, wherein the fluorescence output image(s) from the trained deep neural network enable a reduction of photobleaching of the sample volume.

9. The method of claim 1, wherein a time sequence of two-dimensional fluorescence microscopy input images of a sample are input to the trained deep neural network, wherein each image is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image and wherein a time sequence of fluorescence output images of the sample is output from the trained deep neural network that is digitally propagated to the user-defined or automatically generated surface(s) corresponding to the DPM(s) of the input images.

10. The method of claim 9, wherein one or more of the time sequences of fluorescence output images from the trained deep neural network are combined to create a time-lapse video of the sample volume.

11. The method of claim 9, wherein one or more of the time sequences of fluorescence output images from the trained deep neural network are combined to create a time-lapse video of the sample over an arbitrary user-defined or automatically generated 3D surface.

12. The method of claim 9, wherein the time sequence of two-dimensional fluorescence microscopy input images of the sample is obtained with a camera using stream or video mode and wherein the time sequence of fluorescence output images of the sample has the same or improved frame rate compared to the two-dimensional fluorescence microscopy input images.

13. The method of any of claims 1-12, wherein the user-defined or automatically generated surface comprises a plane, curved surface, an arbitrary surface or an axial depth range located within the sample.

14. The method of any of claims 1-12, wherein the sample comprises at least one of a living organism, a fixed organism, live cell(s), fixed cell(s), live tissue, fixed tissue, pathological slide, biopsy, liquid, bodily fluid, or other microscopic objects.

15. The method of any of claims 1-12, wherein at least one input image is acquired using a spatially engineered point spread function.

16. The method of any of claims 1-12, wherein the trained deep neural network is trained with a generative adversarial network (GAN) using matched pairs of (1) a plurality of fluorescence images axially-focused at different depths and appended with different DPMs, and (2) corresponding ground truth fluorescence images captured at a correct/target focus depth defined by the corresponding DPM.

17. The method of any of claims 1-12, wherein the one or more user-defined or automatically generated surfaces each define a two-dimensional plane.

18. The method of any of claims 1-12, wherein the one or more user-defined or automatically generated surfaces each define a tilted plane or a curved surface.

19. The method of any of claims 1-12, wherein the one or more user-defined or automatically generated surfaces each define an arbitrary three-dimensional surface.

20. The method of any of claims 1-12, wherein the DPM is spatially uniform.

21. The method of any of claims 1-12, wherein the DPM is spatially non-uniform.

22. The method of any of claims 1-12, wherein the input image(s) has/have the same or substantially similar numerical aperture and resolution as the ground truth images.

23. The method of any of claims 1-12, wherein the input image(s) have a lower numerical aperture and poorer resolution compared to the ground truth images, wherein the trained deep neural network learns and performs both virtual refocusing and super-resolution of fluorescence input images.

24. The method of any of claims 1-12, wherein the input image(s) to the trained deep neural network are obtained by using and/or the trained deep neural network is trained by using one of the following types of microscopes: a super-resolution microscope, a confocal microscope, a confocal microscope with single photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, a ptychographic microscope.

25. The method of any of claims 1-12, wherein the two-dimensional microscopy input image is obtained with a fluorescence microscopy modality of a first type and the fluorescence output image resembles and is substantially equivalent to a fluorescence microscopy image of the same sample obtained using a fluorescence microscopy modality of a second type.

26. The method of any of claims 1-12, wherein the two-dimensional fluorescence microscopy input image of the sample comprises a wide-field image and the fluorescence output image resembles and is substantially equivalent to a confocal microscopy image of the same sample.

27. The method of any of claims 1-12, wherein the trained deep neural network is trained with a generative adversarial network (GAN) using matched pairs of: (1) a plurality of fluorescence images of a first microscope modality axially-focused at different depths and appended with different DPMs, and (2) corresponding ground truth fluorescence images captured by a second, different microscope modality at a correct/target focus depth defined by the corresponding DPM.

28. The method of claim 27, wherein the first microscope modality comprises a wide-field fluorescence microscope modality and the second, different microscope modality comprises one of the following types of microscopes: a super-resolution microscope, a confocal microscope, a confocal microscope with single photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, a ptychographic microscope.

29. The method of any of claims 1-12, wherein the at least one two-dimensional fluorescence microscopy input image is obtained by a fluorescence microscope comprising an engineered point spread function.

30. The method of any of claims 1-12, wherein two or more input images obtained at different axial planes or surfaces within the sample are simultaneously input to a separate trained deep neural network which was trained to output at least one fluorescence output image of the sample that is digitally propagated or refocused to the user-defined or automatically generated surface defined by the DPM that is input to the same deep neural network along with the input images.

31. A system for outputting fluorescence microscopy images comprising a computing device having image processing software executed thereon, the image processing software comprising a trained deep neural network that is executed using one or more processors of the computing device, wherein the trained deep neural network is trained using matched pairs of (1) a plurality of fluorescence images axially-focused at different depths and appended with different digital propagation matrices (DPMs) each of which represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image, and (2) corresponding ground truth fluorescence images captured at a correct/target focus depth defined by the corresponding DPM, the image processing software configured to receive one or more two-dimensional fluorescence microscopy input image(s) of a sample appended with a corresponding DPM and outputting one or more fluorescence output image(s) of the sample from the trained deep neural network that are digitally propagated to one or more user-defined or automatically generated surface(s) defined by the appended DPM(s).

32. The system of claim 31, wherein a plurality of fluorescence output images using a plurality of DPMs from the trained deep neural network are digitally combined to create an extended depth of field (EDOF) image of the sample.

33. The system of claim 31, wherein at least one fluorescence output image using at least one DPM from the trained deep neural network is used to create an improved-focus image of the sample.

34. The system of claim 31, wherein the computing device comprises at least one of a personal computer, laptop, tablet, server, ASIC, or one or more graphics processing units (GPUs).

35. The system of claim 31, wherein the input image(s) has/have the same or substantially similar numerical aperture and resolution as the ground truth images.

36. The system of claim 31, wherein the input image(s) have a lower numerical aperture and poorer resolution compared to the ground truth images, wherein the trained deep neural network learns and performs both virtual refocusing and super-resolution of fluorescence input images.

37. The system of claim 31, wherein the input image or images used to train the deep neural network are obtained by using one of the following types of microscopes: a super-resolution microscope, a confocal microscope, a confocal microscope with single photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, a ptychographic microscope.

38. The system of claim 31, wherein the deep neural network is trained using a uniform DPM and wherein the one or more user-defined or automatically generated surfaces comprise one or more arbitrary three-dimensional surfaces.

39. The system of claim 31, wherein the output fluorescence images of the trained neural network are digitally combined to form a three-dimensional volumetric image and/or time-lapse video of the sample.

40. The system of claim 31, wherein a plurality of fluorescence output images from the trained deep neural network are digitally combined to create an image and/or time-lapse video of the sample over an arbitrary user-defined or automatically generated 3D surface.

41. The system of claim 31, wherein a plurality of fluorescence output images from the trained deep neural network are digitally combined to extend the depth of field of the microscope used to obtain the input image.

42. The system of claim 31, wherein the fluorescence output image(s) from the trained deep neural network enable a reduction of photon dose or light exposure on the sample volume.

43. The system of claim 31, wherein the fluorescence output image(s) from the trained deep neural network enable a reduction of photobleaching of the sample volume.

44. The system of claim 31, wherein at least one input image is acquired using a spatially engineered point spread function (PSF).

45. The system of claim 31, wherein the image processing software is configured to receive a time sequence of fluorescence microscopy input images of a sample, each appended with at least one corresponding DPM, and outputting a time sequence of fluorescence output images of the sample from the trained deep neural network that is digitally propagated to the user-defined or automatically generated surface(s) defined by the appended DPM(s) of the input images.

46. The system of claim 45, wherein the user-defined or automatically generated surface(s) comprise a two-dimensional surface.

47. The system of claim 45, wherein the user-defined or automatically generated surface(s) comprise an arbitrary three-dimensional surface or volume.

48. The system of claim 31, wherein the one or more fluorescence output image(s) are merged together to form a time-lapse volumetric video of the sample.

49. The system of claim 31, further comprising a fluorescence microscope configured to acquire the one or more fluorescence microscopy input image(s) of a sample.

50. The system of claim 49, wherein the fluorescence microscope further comprises phase and/or amplitude masks located along an optical path to create an engineered point spread function in 3D.

51. The system of claim 44, wherein the spatially engineered point spread function comprises a double-helix point-spread function.

52. The system of claim 49, wherein the fluorescence microscope comprises a light-sheet based microscopy system.

Description:
SYSTEMS AND METHODS FOR TWO-DIMENSIONAL FLUORESCENCE WAVE PROPAGATION ONTO SURFACES USING DEEP LEARNING

Related Applications

[0001] This Application claims priority to U.S. Provisional Patent Application Nos. 62/912,537 filed on October 8, 2019 and 62/785,012 filed on December 26, 2018, which are hereby incorporated by reference in their entirety. Priority is claimed pursuant to 35 U.S.C. § 119 and any other applicable statute.

Technical Field

[0002] The technical field generally relates to systems and methods for obtaining fluorescence images of a sample or objects. More particularly, the technical field relates to fluorescence microscopy that uses a digital image propagation framework by training a deep neural network that inherently learns the physical laws governing fluorescence wave propagation and time-reversal using microscopic image data, to virtually refocus 2D fluorescence images onto user-defined 3D surfaces within the sample, enabling three-dimensional (3D) imaging of fluorescent samples using a single two-dimensional (2D) image, without any mechanical scanning or additional hardware. The framework can also be used to correct for sample drift, tilt, and other aberrations, all digitally performed after the acquisition of a single fluorescence image. This framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes.

Background

[0003] Three-dimensional (3D) fluorescence microscopic imaging is essential for biomedical and physical sciences as well as engineering, covering various applications. Despite its broad importance, high-throughput acquisition of fluorescence image data for a 3D sample remains a challenge in microscopy research. 3D fluorescence information is usually acquired through scanning across the sample volume, where several 2D fluorescence images/measurements are obtained, one for each focal plane or point in 3D, which forms the basis of, e.g., confocal, two-photon, light-sheet, or various super-resolution microscopy techniques. However, because scanning is used, the image acquisition speed and the throughput of the system for volumetric samples are limited to a fraction of the frame-rate of the camera/detector, even with optimized scanning strategies or point-spread function (PSF) engineering. Moreover, because the images at different sample planes/points are not acquired simultaneously, the temporal variations of the sample fluorescence can inevitably cause image artifacts. Another concern is the phototoxicity of illumination and photobleaching of fluorescence since portions of the sample can be repeatedly excited during the scanning process.

[0004] To overcome some of these challenges, non-scanning 3D fluorescence microscopy methods have also been developed, so that the entire 3D volume of the sample can be imaged at the same speed as the detector framerate. One of these methods is fluorescence light-field microscopy. This system typically uses an additional micro-lens array to encode the 2D angular information as well as the 2D spatial information of the sample light rays into image sensor pixels; then a 3D focal stack of images can be digitally reconstructed from this recorded 4D light-field. However, using a micro-lens array reduces the spatial sampling rate, which results in a sacrifice of both the lateral and axial resolution of the microscope. Although the image resolution can be improved by 3D deconvolution or compressive sensing techniques, the success of these methods depends on various assumptions regarding the sample and the forward model of the image formation process. Furthermore, these computational approaches are relatively time-consuming as they involve an iterative hyperparameter tuning as part of the image reconstruction process. A related method termed multi-focal microscopy has also been developed to map the depth information of the sample onto different parallel locations within a single image. However, the improved 3D imaging speed of this method also comes at the cost of reduced imaging resolution or field-of-view (FOV) and can only infer an experimentally pre-defined (fixed) set of focal planes within the sample volume. As another alternative, the fluorescence signal can also be optically correlated to form a Fresnel correlation hologram, encoding the 3D sample information in interference patterns. To retrieve the missing phase information, this computational approach requires multiple images to be captured for volumetric imaging of a sample. Quite importantly, all these methods summarized above, and many others, require the addition of customized optical components and hardware into a standard fluorescence microscope, potentially needing extensive alignment and calibration procedures, which not only increase the cost and complexity of the optical set-up, but also cause potential aberrations and reduced photon-efficiency for the fluorescence signal.

Summary

[0005] Here, a digital image propagation system and method in fluorescence microscopy is disclosed that trains a deep neural network that inherently learns the physical laws governing fluorescence wave propagation and time-reversal using microscopic image data, enabling 3D imaging of fluorescent samples using a single 2D image, without any mechanical scanning or additional hardware. In one embodiment, a deep convolutional neural network is trained to virtually refocus a 2D fluorescence image onto user-defined or automatically generated surfaces (2D or 3D) within the sample volume. Bridging the gap between coherent and incoherent microscopes, this data-driven fluorescence image propagation framework does not need a physical model of the imaging system, and rapidly propagates a single 2D fluorescence image onto user-defined or automatically generated surfaces without iterative searches or parameter estimates. In addition to rapid 3D imaging of a fluorescent sample volume, it can also be used to digitally correct for various optical aberrations due to the sample and/or the optical system. This deep learning-based approach is referred to herein sometimes as "Deep-Z" or "Deep-Z+", and it is used to computationally refocus a single 2D wide-field fluorescence image (or other image acquired using a spatially engineered point spread function) onto 2D or 3D surfaces within the sample volume, without sacrificing the imaging speed, spatial resolution, field-of-view, or throughput of a standard fluorescence microscope. The method may also be used with multiple 2D wide-field fluorescence images which may be used to create a sequence of images over time (e.g., a movie or time-lapse video clip).

[0006] With this data-driven computational microscopy Deep-Z framework, the framework was tested by imaging the neuron activity of a Caenorhabditis elegans worm in 3D using a time-sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field of the microscope by 20-fold without any axial scanning, additional hardware, or a trade-off of imaging resolution or speed. Furthermore, this learning-based approach can correct for sample drift, tilt, and other image or optical aberrations, all digitally performed after the acquisition of a single fluorescence image. This unique framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. This deep learning-based 3D image refocusing method is transformative for imaging and tracking of 3D biological samples, especially over extended periods of time, mitigating phototoxicity, sample drift, aberration and defocusing related challenges associated with standard 3D fluorescence microscopy techniques.

[0007] In one embodiment, a fluorescence microscopy method includes providing a trained deep neural network that is executed by software using one or more processors. At least one two-dimensional fluorescence microscopy input image of a sample is input to the trained deep neural network wherein each input image is appended with or otherwise associated with one or more user-defined or automatically generated surfaces. In one particular embodiment, the image is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. One or more fluorescence output image(s) of the sample is/are generated or output by the trained deep neural network that is digitally propagated or refocused to the user-defined or automatically generated surface as established or defined by, for example, the DPM.
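
By way of a non-limiting illustration only, the following Python sketch shows how a DPM might be concatenated to an input image as a second channel before a forward pass through a trained refocusing network. The model file name, image file name, and tensor shapes are hypothetical placeholders and are not part of this disclosure; PyTorch is used merely as one possible framework.

    import numpy as np
    import torch

    # Hypothetical placeholders: the saved model and image files are illustrative only.
    model = torch.jit.load("deep_z_generator.pt").eval()            # trained refocusing network
    image = np.load("widefield_slice.npy").astype(np.float32)       # single 2D fluorescence image (H x W)

    # Uniform DPM: every pixel is assigned the same target axial distance (e.g., +3 um).
    dpm = np.full_like(image, 3.0)

    # Append the DPM to the image as a second channel: tensor shape (1, 2, H, W).
    x = torch.from_numpy(np.stack([image, dpm]))[None, ...]

    with torch.no_grad():
        refocused = model(x)                                         # image virtually refocused to the DPM surface

    np.save("refocused_plus3um.npy", refocused.squeeze().numpy())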

[0008] In one embodiment, a time sequence of two-dimensional fluorescence microscopy input images of a sample are input to the trained deep neural network, wherein each image is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image and wherein a time sequence of fluorescence output images of the sample (e.g., a time-lapse video or movie) is output from the trained deep neural network that is digitally propagated or refocused to the user-defined or automatically generated surface(s) corresponding to the DPM(s) of the input images.
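
A minimal sketch of this time-sequence mode, reusing the hypothetical model and DPM construction from the previous example, could loop over the acquired frames and stack the refocused outputs into a time-lapse array (the file names and shapes are again illustrative assumptions):

    import numpy as np
    import torch

    # Assumes "model" as in the previous sketch; the frame file is a placeholder.
    frames = np.load("timelapse_frames.npy").astype(np.float32)     # (T, H, W) single-plane acquisitions
    dpm = np.full(frames.shape[1:], 3.0, dtype=np.float32)          # same target surface for every frame

    outputs = []
    with torch.no_grad():
        for frame in frames:
            x = torch.from_numpy(np.stack([frame, dpm]))[None, ...]
            outputs.append(model(x).squeeze().numpy())

    # (T, H, W) time-lapse of the sample at the user-defined surface.
    np.save("timelapse_refocused.npy", np.stack(outputs))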

[0009] In another embodiment, a system for outputting fluorescence microscopy images comprising a computing device having image processing software executed thereon, the image processing software comprising a trained deep neural network that is executed using one or more processors of the computing device, wherein the trained deep neural network is trained using matched pairs of (1) a plurality of fluorescence images axially-focused at different depths and appended with different DPMs (each of which represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image), and (2) corresponding ground truth fluorescence images captured at the correct/target focus depth defined by the corresponding DPM which are used to establish parameters for the deep neural network, the image processing software configured to receive one or more two-dimensional fluorescence microscopy input images of a sample and one or more user-defined or automatically generated surfaces that are appended to or otherwise associated with the image(s). For example, each image may be appended with a DPM. The system outputs a fluorescence output image (or multiple images in the form of a movie or time-lapse video clip) of the sample from the trained deep neural network that is digitally propagated or refocused to the one or more user-defined or automatically generated surfaces as established by, for example, the DPM(s).
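
One way such matched training pairs might be assembled from a registered, mechanically scanned z-stack is sketched below; the stack file, 0.5 µm step size, and +/- 10 µm propagation limit are illustrative assumptions rather than required values:

    import numpy as np

    stack = np.load("registered_zstack.npy").astype(np.float32)     # (Z, H, W) axially scanned stack
    step_um = 0.5                                                    # assumed axial step between planes

    pairs = []                                                       # (input_with_dpm, ground_truth) tuples
    for i in range(stack.shape[0]):                                  # index of the input plane
        for j in range(stack.shape[0]):                              # index of the target plane
            dz = (j - i) * step_um                                   # axial distance encoded by the DPM
            if abs(dz) > 10.0:                                       # keep pairs within the training range
                continue
            dpm = np.full_like(stack[i], dz)                         # uniform DPM for this pair
            x = np.stack([stack[i], dpm])                            # 2-channel network input
            y = stack[j][None, ...]                                  # ground truth captured at the target depth
            pairs.append((x, y))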

[0010] In one embodiment, the trained deep neural network is trained with a generative adversarial network (GAN) using matched pairs of (1) a plurality of fluorescence images of a first microscope modality axially-focused at different depths and appended with different DPMs, and (2) corresponding ground truth fluorescence images captured by a second, different microscope modality at a correct/target focus depth defined by the corresponding DPM.
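
A condensed sketch of one possible GAN training step for such a network is shown below; the generator G, discriminator D, optimizers, data loader, and the L1 fidelity weight are all hypothetical placeholders, not the specific training recipe of this disclosure:

    import torch
    import torch.nn.functional as F

    # G, D, loader, opt_g, opt_d are assumed to exist (hypothetical modules and optimizers).
    for x, y in loader:                         # x: image + DPM (B, 2, H, W); y: ground truth (B, 1, H, W)
        # Discriminator step: real targets vs. generated (detached) outputs.
        opt_d.zero_grad()
        fake = G(x).detach()
        real_logits, fake_logits = D(y), D(fake)
        d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
                 F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
        d_loss.backward()
        opt_d.step()

        # Generator step: adversarial term plus a pixel-wise fidelity term.
        opt_g.zero_grad()
        fake = G(x)
        fake_logits = D(fake)
        g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits)) + \
                 100.0 * F.l1_loss(fake, y)     # fidelity weight chosen for illustration only
        g_loss.backward()
        opt_g.step()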

[0011] In one embodiment, the fluorescence microscope that is used to obtain the two-dimensional images may include within the optical setup hardware modifications to create a spatially engineered point spread function (PSF) in the axial direction (z direction). This may include, for example, phase and/or amplitude masks located along the optical path (axial direction). A double-helix PSF is one exemplary engineered PSF. In addition, the fluorescence microscope may include a wide-field fluorescence microscope. It may also include a light-sheet system. In other embodiments, the input image to a trained deep neural network or training images for the deep neural network are obtained by using one of the following types of microscopes: a super-resolution microscope, a confocal microscope, a confocal microscope with single photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, a ptychographic microscope.

Brief Description of the Drawings

[0012] FIG. 1 illustrates one embodiment of a system that uses a trained deep neural network to generate one or more fluorescence output image(s) of the sample that is digitally propagated (refocused) to the user-defined or automatically generated surface. The system obtains one or more two-dimensional fluorescence images which are input to the trained deep neural network. The trained deep neural network then outputs digitally propagated (refocused) image(s) to user-defined or automatically generated surface(s) including three-dimensional surfaces.

[0013] FIG. 2A schematically illustrates the refocusing of fluorescence images using the Deep-Z network. By concatenating a digital propagation matrix (DPM) to a single fluorescence image, and running the resulting image through a trained Deep-Z network, digitally refocused images at different planes can be rapidly obtained, as if an axial scan is performed at the corresponding planes within the sample volume. The DPM has the same size as the input image and its entries represent the axial propagation distance for each pixel and can also be spatially non-uniform. The results of Deep-Z inference are compared against the images of an axial-scanning fluorescence microscope for the same fluorescent bead (300 nm), providing a very good match.
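
For illustration, uniform and spatially non-uniform DPMs of the kind described above (e.g., a tilted plane as in FIGS. 5A-5F or a curved surface as in FIGS. 5G-5L) might be constructed as follows; the image size, pixel pitch, tilt angle, and curvature are arbitrary example values:

    import numpy as np

    H, W = 512, 512                                                  # assumed image size
    y_idx, x_idx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

    # Uniform DPM: refocus every pixel by the same axial distance (e.g., +4 um).
    dpm_uniform = np.full((H, W), 4.0, dtype=np.float32)

    # Tilted-plane DPM: the axial distance varies linearly across the field of view
    # (e.g., to compensate a 1.5-degree sample tilt at an assumed 0.325 um pixel pitch).
    pixel_um = 0.325
    dpm_tilted = (x_idx * pixel_um * np.tan(np.deg2rad(1.5))).astype(np.float32)

    # Curved-surface DPM: e.g., a cylindrical surface approximated by a parabolic profile.
    dpm_curved = (5.0 - 1e-4 * (x_idx - W / 2.0) ** 2).astype(np.float32)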

[0014] FIG. 2B illustrates lateral FWHM histograms for 461 individual/isolated fluorescence nano-beads (300 nm) measured using Deep-Z inference (N=1 captured image) and the images obtained using mechanical axial scanning (N=41 captured images) provide a very good match to each other.

[0015] FIG. 2C illustrates axial FWHM measurements for the same data set of FIG. 2B, also revealing a very good match between Deep-Z inference results and the axial mechanical scanning results.

[0016] FIG. 3 illustrates the 3D imaging of C. elegans neuron nuclei using the Deep-Z network. Different ROIs are digitally refocused using Deep-Z to different planes within the sample volume; the resulting images provide a very good match to the corresponding ground truth images, acquired using a scanning fluorescence microscope. The absolute difference images of the input and output with respect to the corresponding ground truth image are also provided on the right, with structural similarity index (SSIM) and root mean square error (RMSE) values reported, further demonstrating the success of Deep-Z. Scale bar: 25 µm.

[0017] FIG. 4A illustrates the maximum intensity projection (MIP) along the axial direction of the median intensity image taken across the time sequence showing C. elegans neuron activity tracking in 3D using the Deep-Z network. The red channel (Texas Red) labels neuron nuclei. The green channel (FITC) labels neuron calcium activity. A total of 155 neurons were identified, 70 of which were active in calcium activity. Scale bar: 25 µm. Scale bar for the zoom-in regions: 10 µm.

[0018] FIG. 4B illustrates all 155 localized neurons in 3D, where depths are color-coded.

[0019] FIG. 4C illustrates 3D tracking of neuron calcium activity events corresponding to the 70 active neurons. The neurons were grouped into 3 clusters (C1-C3) based on their calcium activity pattern similarity. The locations of these neurons are marked by the circles in FIG. 4A (C1 (blue), C2 (cyan) and C3 (yellow)).

[0020] FIG. 5A illustrates the measurement of a tilted fluorescent sample (300 nm beads).

[0021] FIG. 5B illustrates the corresponding DPM for the tilted plane of FIG. 5A.

[0022] FIG. 5C illustrates an image of the measured raw fluorescence image; the left and right parts are out-of-focus in different directions, due to the sample tilt.

[0023] FIG. 5D illustrates the Deep-Z network output image that rapidly brings all the regions into correct focus.

[0024] FIGS. 5E and 5F illustrate the lateral FWHM values of the nano-beads shown in FIGS. 5C and 5D, respectively, clearly demonstrating that the Deep-Z network with the non-uniform DPM of FIG. 5B brought the out-of-focus particles into focus.

[0025] FIG. 5G illustrates the measurement of a cylindrical surface with fluorescent beads (300 nm beads).

[0026] FIG. 5H illustrates the corresponding DPM for the curved surface of FIG. 5G.

[0027] FIG. 5I illustrates an image of the measured raw fluorescence image; the middle region and the edges are out-of-focus due to the curvature of the sample.

[0028] FIG. 5J illustrates the Deep-Z network output image that rapidly brings all the regions into correct focus.

[0029] FIGS. 5K and 5L illustrate the lateral FWHM values of the nano-beads shown in FIGS. 5I and 5J, respectively, clearly demonstrating that Deep-Z with the non-uniform DPM brought the out-of-focus particles into focus.

[0030] FIG. 6A illustrates a single wide-field fluorescence image (63×/1.4NA objective lens) of BPAEC microtubule structures that is digitally refocused using Deep-Z+ to different planes in 3D, retrieving volumetric information from a single input image and performing axial sectioning at the same time.

[0031] FIG. 6B illustrates the matching images (matched to FIG. 6A images) captured by a confocal microscope at the corresponding planes.

[0032] FIG. 6C illustrates the matching wide-field (WF) images (matched to FIG. 6A images) at the corresponding planes. These scanning WF images report the closest heights to the corresponding confocal images, and have a 60 nm axial offset since the two image stacks are discretely scanned and digitally aligned to each other. x-z and y-z cross-sections of the refocused images are also shown to demonstrate the match between Deep-Z+ inference and the ground truth confocal microscope images of the same planes; the same cross-sections (x-z and y-z) are also shown for a wide-field scanning fluorescence microscope, reporting a significant axial blur in each case. Each cross-sectional zoomed-in image spans 1.6 µm in the z-direction (with an axial step size of 0.2 µm), and the dotted arrows mark the locations where the x-z and y-z cross-sections were taken.

[0033] FIG. 6D illustrates the absolute difference images of the Deep-Z+ output with respect to the corresponding confocal images, with SSIM and RMSE values, further quantifying the performance of Deep-Z+. For comparison, the absolute difference images of the 'standard' Deep-Z output images as well as the scanning wide-field fluorescence microscope images are shown with respect to the corresponding confocal images, both of which report increased error and weaker SSIM compared to |GT - Deep-Z+|. The quantitative match between |GT - WF| and |GT - Deep-Z| also suggests that the impact of the 60 nm axial offset between the confocal and wide-field image stacks is negligible. Scale bar: 10 µm.

[0034] FIG. 7 illustrates an input image of a 300 nm fluorescent bead that was digitally refocused to a plane 2 µm above it using the Deep-Z network, where the ground truth was the mechanically scanned fluorescence image acquired at this plane. Bottom row: same images as the first row, but saturated to a dynamic range of [0, 10] to highlight the background. The SNR values were calculated by first taking a Gaussian fit on the pixel values of each image to find the peak signal strength. Then the pixels in the region of interest (ROI) that were 10σ away (where σ² is the variance of the fitted Gaussian) were regarded as the background (marked by the region outside the red dotted circle in each image) and the standard deviation of these pixel values was calculated as the background noise. The Deep-Z network rejects background noise and improves the output image SNR by ~40 dB, compared to the mechanical scan ground truth image.
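
A sketch of this type of SNR estimate is given below for a single-bead region of interest; it assumes the ROI is large enough to contain pixels beyond 10σ from the fitted peak, and the file name and initial fit guesses are placeholders (scipy's curve_fit performs the 2D Gaussian fit):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, amp, x0, y0, sigma, offset):
        x, y = coords
        return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

    roi = np.load("bead_roi.npy").astype(np.float64)                 # small ROI around one bead (placeholder)
    yy, xx = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    p0 = [roi.max() - roi.min(), roi.shape[1] / 2, roi.shape[0] / 2, 2.0, roi.min()]
    (amp, x0, y0, sigma, offset), _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), roi.ravel(), p0=p0)

    # Background: pixels farther than 10*sigma from the fitted peak center.
    r = np.sqrt((xx - x0) ** 2 + (yy - y0) ** 2)
    background = roi[r > 10 * abs(sigma)]
    snr_db = 20 * np.log10(amp / background.std())                   # peak signal vs. background noise, in dB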

[0035] FIG. 8 illustrates structural similarity (SSIM) index and correlation coefficient (Corr. Coeff.) analysis for digital refocusing of fluorescence images from an input plane at z_input to a target plane at z_target. A scanned fluorescence z-stack of a C. elegans sample was created, within an axial range of -20 µm to 20 µm, with 1 µm spacing. First column: each scanned image at z_input in this stack was compared against the image at z_target, forming cross-correlated SSIM and Corr. Coeff. matrices. Both the SSIM and Corr. Coeff. fall rapidly off the diagonal entries. Second (middle) column: A Deep-Z network trained with fluorescence image data corresponding to a +/- 7.5 µm propagation range (marked by the diamond in each panel) was used to digitally refocus images from z_input to z_target. The output images were compared against the ground truth images at z_target using SSIM and Corr. Coeff. Third column: same as the second column, except the training fluorescence image data included up to +/- 10 µm axial propagation (marked by the diamond that is now enlarged compared to the second column). These results confirm that Deep-Z learned the digital propagation of fluorescence, but it is limited to the axial range that it was trained for (determined by the training image dataset). Outside the training range (defined by the diamonds), both the SSIM and Corr. Coeff. values considerably decrease.
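
The cross-comparison matrices described above could be computed along the following lines; the array names and shapes are placeholders, assuming the measured z-stack has shape (Z, H, W) and the Deep-Z outputs refocused from every input plane to every target plane are stored as (Z, Z, H, W):

    import numpy as np
    from skimage.metrics import structural_similarity

    scanned = np.load("scanned_stack.npy").astype(np.float32)        # (Z, H, W) mechanically scanned stack
    deep_z = np.load("deep_z_outputs.npy").astype(np.float32)        # (Z, Z, H, W) refocused outputs

    Z = scanned.shape[0]
    ssim_mat = np.zeros((Z, Z))
    corr_mat = np.zeros((Z, Z))
    for i in range(Z):                                               # input plane index (z_input)
        for j in range(Z):                                           # target plane index (z_target)
            out, gt = deep_z[i, j], scanned[j]
            ssim_mat[i, j] = structural_similarity(out, gt, data_range=float(gt.max() - gt.min()))
            corr_mat[i, j] = np.corrcoef(out.ravel(), gt.ravel())[0, 1]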

[0036] FIGS. 9A-9T illustrate digital refocusing of fluorescence images of C. elegans worms along with corresponding ground truth (GT) images. FIGS. 9A and 9K illustrate measured fluorescence images (Deep-Z input). FIGS. 9B, 9D, 9L, 9N illustrate the Deep-Z network output images at different target heights (z). FIGS. 9C, 9E, 9M, and 9O illustrate ground truth (GT) images, captured using a mechanical axial scanning microscope at the same heights as the Deep-Z outputs. FIGS. 9F and 9P illustrate overlay images of Deep-Z output images and GT images. FIGS. 9G, 9I, 9Q, and 9S illustrate absolute difference images of Deep-Z output images and the corresponding GT images at the same heights. FIGS. 9H, 9J, 9R, and 9T illustrate absolute difference images of Deep-Z input and the corresponding GT images. Structural similarity index (SSIM) and root mean square error (RMSE) were calculated for the output vs. GT and the input vs. GT for each region, displayed in FIGS. 9G, 9I, 9Q, 9S and FIGS. 9H, 9J, 9R, 9T, respectively. Scale bar: 25 µm.

[0037] FIG. 10 illustrates the 3D imaging of C. elegans head neuron nuclei using the Deep-Z network. The input and ground truth images were acquired by a scanning fluorescence microscope with a 40×/1.4NA objective. A single fluorescence image acquired at the z = 0 µm focal plane (marked by the dashed rectangle) was used as the input image to the Deep-Z network and was digitally refocused to different planes within the sample volume, spanning around -4 to 4 µm; the resulting images provide a good match to the corresponding ground truth images. Scale bar: 25 µm.

[0038] FIG. 11 illustrates the digital refocusing of fluorescence microscopy images of BPAEC using the Deep-Z network. The input image was captured using a 20×/0.75NA objective lens, using the Texas Red and FITC filter sets, occupying the red and green channels of the image, for the mitochondria and F-actin structures, respectively. Using Deep-Z, the input image was digitally refocused to 1 µm above the focal plane, where the mitochondrial structures in the red channel are in focus, matching the features on the mechanically-scanned image (obtained directly at this depth). The same conclusion applies for the Deep-Z output at z = 2 µm, where the F-actin structures in the green channel come into focus. After 3 µm above the image plane, the details of the image content get blurred. The absolute difference images of the input and output with respect to the corresponding ground truth images are also provided, with SSIM and RMSE values, quantifying the performance of Deep-Z. Scale bar: 20 µm.

[0039] FIG. 12A illustrates the max intensity projection (MIP) along the axial direction of the median intensity image over time (C. elegans neuron activity tracking and clustering). The red channel (Texas Red) labels neuron nuclei and the green channel (FITC) labels neuron calcium activity. A total of 155 neurons were identified in the 3D stack, as labeled here. Scale bar: 25 µm. Scale bar for the zoom-in regions: 10 µm.

[0040] FIG. 12B illustrates the intensity of the neuron calcium activity, ΔF(t), of these 155 neurons, reported over a period of ~35 s at ~3.6 Hz. Based on a threshold on the standard deviation of each ΔF(t), neurons are separated into those that are active (right-top, 70 neurons) and less active (right-bottom, 85 neurons).

[0041] FIG. 12C illustrates a similarity matrix of the calcium activity patterns of the top 70 active neurons.

[0042] FIG. 12D illustrates the top 40 eigenvalues of the similarity matrix. An eigen-gap is shown at k=3, which was chosen as the number of clusters according to the eigen-gap heuristic (i.e., choose up to the largest eigenvalue before the eigenvalue gap, where the eigenvalues increase significantly).

[0043] FIG. 12E illustrates the normalized activity ΔF(t)/F0 for the k=3 clusters after spectral clustering of the 70 active neurons.

[0044] FIG. 12F illustrates the similarity matrix after spectral clustering. The spectral clustering rearranged the row and column ordering of the similarity matrix of FIG. 12C to be block diagonal in FIG. 12F, which represents three individual clusters of calcium activity patterns.
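
A compact sketch of the clustering workflow described in FIGS. 12C-12F is shown below, assuming a precomputed similarity matrix of the active neurons' calcium-activity traces; the file name is a placeholder and scikit-learn's SpectralClustering stands in for the clustering step:

    import numpy as np
    from sklearn.cluster import SpectralClustering

    S = np.load("activity_similarity.npy")                           # (N, N) similarity matrix (placeholder)

    # Eigen-gap heuristic: inspect the top eigenvalues of the similarity matrix and
    # keep the clusters up to the largest eigenvalue before the first big gap.
    eigvals = np.sort(np.linalg.eigvalsh(S))[::-1][:40]              # top 40 eigenvalues, descending
    k = int(np.argmax(np.abs(np.diff(eigvals)))) + 1                 # e.g., k = 3 for the data shown above

    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(S)       # cluster label per active neuron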

[0045] FIG. 13A illustrates a fluorescent sample consisting of 300 nm fluorescent beads digitally refocused to a plane 5 µm above the sample by appending a DPM with uniform entries. The ground truth is captured using mechanical scanning at the same plane. Vertical average (i.e., the pixel average along the y-axis of the image) and its spatial frequency spectrum (i.e., the Fourier transform of the vertical average with the zero-frequency removed) are shown next to the corresponding images.

[0046] FIG. 13B illustrates digital refocusing of the same input fluorescence image of FIG. 13A by appending a DPM that defines a sinusoidal 3D surface with varying periods, from 0.65 µm to 130 µm along the x-axis, with an axial oscillation range of 8 µm, i.e., a sinusoidal depth span of -1 µm to -9 µm with respect to the input plane. The ground truth images were bicubic-interpolated in 3D from a z-scanned stack with 0.5 µm axial spacing. The vertical average of each DPM and the corresponding spatial frequency spectrum are shown below each DPM. The vertical average of the difference images (i.e., the resulting Deep-Z image minus the reference Deep-Z image in FIG. 13A, as well as the ground truth image minus the reference ground truth image in FIG. 13A) and the corresponding spectra are shown below each image.

[0047] FIGS. 13C-13F illustrate the correlation coefficient (Corr. Coeff. - FIG. 13C), structural similarity index (SSIM - FIG. 13D), mean absolute error (MAE - FIG. 13E) and mean square error (MSE - FIG. 13F) which were used to compare Deep-Z output images against the ground truth images at the same 3D sinusoidal surfaces defined by the corresponding DPMs, with varying periods from 0.65 µm to 170 µm along the x-axis. Reliable Deep-Z focusing onto sinusoidal 3D surfaces can be achieved for lateral modulation periods greater than ~32 µm (corresponding to ~100 pixels), as marked by the arrows in FIGS. 13C-13F. The same conclusion is also confirmed by the results and spatial frequency analysis reported in FIG. 13B.

[0048] FIG. 14 illustrates the generator and discriminator network structures used in Deep-Z according to one embodiment. ReLU: rectified linear unit. Conv: convolutional layer.

[0049] FIG. 15A schematically illustrates the registration (in the lateral direction) of a wide-field fluorescence z-stack against a confocal z-stack. Both the wide-field and the confocal z-stacks were first self-aligned and extended depth of field (EDF) images were calculated for each stack. The EDF images were stitched spatially and the stitched EDF images from wide-field were aligned to those of confocal microscopy images. The spatial transformations, from stitching to the EDF alignment, were used as consecutive transformations to associate the wide-field stack with the confocal stack. Non-empty wide-field ROIs of 256×256 pixels and the corresponding confocal ROIs were cropped from the EDF image, which were further aligned.

[0050] FIG. 15B illustrates an example image showing an overlay of the registered image pair, with wide-field image

[0051] FIG. 15C illustrates focus curves in the wide-field stack and the confocal stack that were calculated and compared based on the corresponding SSIM values and used to align the wide-field and confocal stacks in the axial direction.

[0052] FIG. 16A illustrates the refocusing capability of Deep-Z under lower image exposure. Virtual refocusing of images containing two microbeads under different exposure times from defocused distances of -5, 3 and 4.5 µm, using two Deep-Z models trained with images captured at 10 ms and 100 ms exposure times, respectively.

[0053] FIG. 16B illustrates a graph of median FWHM values of 91 microbeads imaged inside a sample FOV after the virtual refocusing of an input image across a defocus range of -10 µm to 10 µm by the Deep-Z (100 ms) network model. The test images have different exposure times spanning 3 ms to 300 ms.

[0054] FIG. 16C illustrates a graph of median FWHM values of 91 microbeads imaged inside a sample FOV after the virtual refocusing of an input image across a defocus range of -10 µm to 10 µm by the Deep-Z (10 ms) network model. The test images have different exposure times spanning 3 ms to 300 ms.

[0055] FIG. 17A illustrates Deep-Z based virtual refocusing of a different sample type and transfer learning results. The input image records the neuron activities of a C. elegans that is labeled with GFP; the image is captured using a 20×/0.8NA objective under the FITC channel. The input image was virtually refocused using both the optimal worm strain model (denoted as: same model, functional GFP) as well as a different model (denoted as: different model, structural tagRFP). Also illustrated are the results of a transfer learning model which used the different model as its initialization and the functional GFP image dataset to refine it after ~500 iterations (~30 min of training).

[0056] FIG. 17B illustrates Deep-Z based virtual refocusing of a different sample type and transfer learning results, although a different C. elegans sample is shown (compared to FIG. 17A). The input image records the neuron nuclei labeled with tagRFP imaged using a 20×/0.75NA objective under the Texas Red channel. The input image was virtually refocused using both the exact worm strain model (same model, structural tagRFP) as well as a different model (different model, 300 nm red beads). Also illustrated are the results of a transfer learning model which used the different model as its initialization and the structural tagRFP image dataset to refine it after ~4,000 iterations (~6 hours of training). The image correlation coefficient (r) is shown at the lower right corner of each image, in reference to the ground truth mechanical scan performed on the corresponding microscope system (Leica and Olympus, respectively). The transfer learning was performed using 20% of the training data and 50% of the validation data, randomly selected from the original data set.

[0057] FIG. 18 illustrates virtual refocusing of a different microscope system and transfer learning results. The input image records the C. elegans neuronal nuclei labeled with tagGFP, imaged using a Leica SP8 microscope with a 20×/0.8NA objective. The input image was virtually refocused using both the exact model (Leica SP8 20×/0.8NA) as well as a different model (denoted as: different model, Olympus 20×/0.75NA). Also illustrated are the results of a transfer learning model using the different model as its initialization and the Leica SP8 image dataset to refine it after ~2,000 iterations (~40 min of training). The image correlation coefficient (r) is shown at the lower right corner of each image, in reference to the ground truth mechanical scan performed on the corresponding microscope system. The transfer learning was performed using 20% of the training data and 50% of the validation data, randomly selected from the original data set.

[0058] FIGS. 19A and 19B illustrate time-modulated signal reconstruction using Deep-Z. A time-modulated illumination source was used to excite the fluorescence signal of microbeads (300 nm diameter). A time-lapse sequence of the sample was captured under this modulated illumination at the in-focus plane (z = 0 µm) as well as at various defocused planes (z = 2-10 µm) and refocused using Deep-Z to digitally reach z = 0 µm. Intensity variations of 297 individual beads inside the FOV (after refocusing) were tracked for each sequence. Based on the video captured in FIG. 19A, every other frame was taken to form an image sequence with twice the frame-rate and modulation frequency, and added back onto the original sequence with a lateral shift (FIG. 19B). These defocused and super-imposed images were virtually refocused using Deep-Z to digitally reach z = 0 µm, the in-focus plane. Group 1 contained 297 individual beads inside the FOV with 1 Hz modulation. Group 2 contained the signals of the other (new) beads that are super-imposed on the same FOV with 2 Hz modulation frequency. Each intensity curve was normalized, and the mean and the standard deviation of the 297 curves were plotted for each time-lapse sequence. Virtually-refocused Deep-Z output tracks the sinusoidal illumination, very closely following the in-focus reference time-modulation reported in the target (z = 0 µm).

[0059] FIGS. 20A-20L illustrate C. elegans neuron segmentation comparison using the Deep-Z network (and merged) with mechanical scanning. FIGS. 20A and 20D are the fluorescence images used as input to Deep-Z. FIGS. 20B and 20E are the segmentation results based on FIGS. 20A and 20D, respectively. FIGS. 20C and 20F are the segmentation results based on the virtual image stack (-10 to 10 µm) generated by Deep-Z using the input images in FIGS. 20A and 20D, respectively. FIG. 20G is an additional fluorescence image, captured at a different axial plane (z = 4 µm). FIG. 20H is the segmentation results on the merged virtual stack (-10 to 10 µm). The merged image stack was generated by blending the two virtual stacks generated by Deep-Z using the input images of FIGS. 20D and 20G. FIG. 20I is the segmentation results based on the mechanically-scanned image stack used as ground truth (acquired at 41 depths with 0.5 µm axial spacing). Each neuron was represented by a small sphere in the segmentation map and the depth information of each neuron was color-coded. FIGS. 20J-20L show the detected neuron positions in FIGS. 20E, 20F, and 20H compared with the positions in FIG. 20I, and the axial displacement histograms between the Deep-Z results and the mechanically-scanned ground truth results were plotted.

[0060] FIGS. 21A-21H show the Deep-Z-based virtual refocusing of a laterally shifted weaker fluorescent object next to a stronger object. FIG. 21A shows a defocused experimental image (left bead) at plane z that was shifted laterally by d pixels to the right and digitally weakened by a pre-determined ratio (right bead), which was then added back to the original image and used as the input image to Deep-Z. Scale bar: 5 µm. FIG. 21B is an example of the generated bead pair with an intensity ratio of 0.2, showing the in-focus plane, defocused planes of 4 and 10 µm, and the corresponding virtually-refocused images by Deep-Z. FIGS. 21C-21H are graphs of the average intensity ratio of the shifted and weakened bead signal with respect to the original bead signal for 144 bead pairs inside a FOV, calculated at the virtually refocused plane using different axial defocus distances (z). The crosses "x" in each figure mark the corresponding lateral shift distance, below which the two beads cannot be distinguished from each other, coded to represent the bead signal intensity ratio (spanning 0.2-1.0). Arrows show the direction of increasing signal intensity ratio values corresponding to the legend.

[0061] FIGS. 22A-22D illustrate the impact of axial occlusions on Deep-Z virtual refocusing performance. FIG. 22A is a 3D virtual refocusing of two beads that have identical lateral positions but are separated axially by 8 µm; Deep-Z, as usual, used a single 2D input image corresponding to the defocused image of the overlapping beads. The virtual refocusing calculated by Deep-Z exhibits two maxima representing the two beads along the z-axis, matching the simulated ground truth image stack. FIG. 22B shows a simulation schematic: two defocused images in the same bead image stack with a spacing of d were added together, with the higher stack located at a depth of z = 8 µm. A single image in the merged image stack was used as the input to Deep-Z for virtual refocusing. FIGS. 22C-22D report the average and the standard deviation (represented by the background range) of the intensity ratio of the top (i.e., the dimmer) bead signal with respect to the bead intensity in the original stack, calculated for 144 bead pairs inside a FOV, for z = 8 µm with different axial separations and bead intensity ratios (spanning 0.2-1.0). Arrows show the direction of increasing signal intensity ratio values corresponding to the legend.

[0062] FIGS. 23A-23E illustrate the Deep-Z inference results as a function of 3D fluorescent sample density. FIG. 23A shows a comparison of Deep-Z inference against a mechanically-scanned ground truth image stack over an axial depth of +/- 10 µm with increasing fluorescent bead concentration. The measured bead concentration resulting from the Deep-Z output (using a single input image) as well as the mechanically-scanned ground truth (which includes 41 axial images acquired at a scanning step size of 0.5 µm) is shown on the top left corner of each image. MIP: maximal intensity projection along the axial direction. Scale bar: 30 µm. FIGS. 23B-23E illustrate a comparison of the Deep-Z output against the ground truth results as a function of the increasing bead concentration. The solid line is a 2nd-order polynomial fit to all the data points. The dotted line represents y=x, shown for reference. These particle concentrations were calculated/measured over a FOV of 1536×1536 pixels (500×500 µm²), i.e., 15 times larger than the specific regions shown in FIG. 23A.

[0063] FIG. 24A illustrates the fluorescence signal of nanobeads imaged in 3D, for 180 repeated axial scans, each containing 41 planes, spanning +/- 10 µm with a step size of 0.5 µm. The accumulated scanning time is ~30 min.

[0064] FIG. 24B illustrates the corresponding scan for a single plane, which is used by Deep-Z to generate a virtual image stack spanning the same axial depth within the sample (+/- 10 µm). The accumulated scanning time for Deep-Z is ~15 seconds. The center line represents the mean and the shaded region represents the standard deviation of the normalized intensity for 681 and 597 individual nanobeads (for the data in FIGS. 24A and 24B, respectively) inside the sample volume.

Detailed Description of Illustrated Embodiments

[0065] FIG. 1 illustrates one embodiment of a system 2 that uses a trained deep neural network 10 to generate one or more fluorescence output image(s) 40 of a sample 12 (or object(s) in the sample 12) that is digitally propagated to one or more user-defined or automatically generated surface(s). The system 2 includes a computing device 100 that contains one or more processors 102 therein and image processing software 104 that incorporates the trained deep neural network 10. The computing device 100 may include, as explained herein, a personal computer, laptop, tablet PC, remote server, application-specific integrated circuit (ASIC), or the like, although other computing devices may be used (e.g., devices that incorporate one or more graphics processing units (GPUs)).

[0066] In some embodiments, a series or time sequence of output images 40 is generated, e.g., a time-lapse video clip or movie of the sample 12 or objects therein. The trained deep neural network 10 receives one or more fluorescence microscopy input image(s) 20 (e.g., multiple images taken at different times) of the sample 12. The sample 12 may include, by way of illustration and not limitation, a pathological slide, biopsy, bodily fluid, organism (living or fixed), cell(s) (living or fixed), tissue (living or fixed), cellular or sub-cellular feature, or a fluid or liquid sample containing organisms or other microscopic objects. In one embodiment, the sample 12 may be label-free and the fluorescent light that is emitted from the sample 12 is emitted from endogenous fluorophores or other endogenous emitters of frequency-shifted light within the sample 12 (e.g., autofluorescence). In another embodiment, the sample 12 is labeled with one or more exogenous fluorescent labels or other exogenous emitters of light. Combinations of the two are also contemplated.

[0067] The one or more input image(s) 20 is/are obtained using an imaging device 110, for example, a fluorescence microscope device 110. In some embodiments, the imaging device 110 may include a wide-field fluorescence microscope 110 that provides an input image 20 over an extended field-of-view (FOV). The trained deep neural network 10 outputs or generates one or more fluorescence output image(s) 40 that is/are digitally propagated to a user-defined or automatically generated surface 42 (as established by the digital propagation matrix (DPM) or other appended data structure). The user-defined or automatically generated surface 42 may include a two-dimensional (2D) surface or a three-dimensional (3D) surface. For example, this may include a plane at different axial depths within the sample 12. The user-defined or automatically generated surface 42 may also include a curved or other 3D surface. In some embodiments, the user-defined or automatically generated surface 42 may be a surface that corrects for sample tilt (e.g., a tilted plane), curvature, or other optical aberrations. The user-defined or automatically generated surface 42, which as explained herein may include a DPM, is appended to (e.g., through a concatenation operation) or otherwise associated with the input image(s) 20 that is/are input to the trained deep neural network 10. The trained deep neural network 10 outputs the output image(s) 40 at the user-defined or automatically generated surface 42.
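By way of a non-limiting illustration, the concatenation of a DPM 42 to an input image 20 may be sketched as a simple channel-append operation. The following Python/NumPy snippet is only a minimal sketch; the function name append_dpm and the division of the DPM values by 10 (described later in the Methods section) are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

def append_dpm(fluor_image: np.ndarray, target_surface_um: np.ndarray) -> np.ndarray:
    """Concatenate a 2D fluorescence image with a digital propagation matrix (DPM).

    fluor_image:        (H, W) fluorescence image, normalized to [0, 1].
    target_surface_um:  (H, W) axial distance (in micrometers) of the target
                        surface from the plane of the input image, per pixel.
    Returns an (H, W, 2) array suitable as network input.
    """
    # Scale the DPM so its dynamic range is comparable to the image
    # (refocusing distances of roughly -10..10 um divided by 10).
    dpm = target_surface_um / 10.0
    return np.stack([fluor_image, dpm], axis=-1)

# Example: refocus the whole image by a uniform +5 um
image = np.random.rand(256, 256).astype(np.float32)    # placeholder input image
uniform_dpm = np.full((256, 256), 5.0, dtype=np.float32)
network_input = append_dpm(image, uniform_dpm)          # shape (256, 256, 2)
```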

[0068] The input image(s) 20 to the trained deep neural network 10, in some embodiments, may have the same or substantially similar numerical aperture and resolution as the ground truth (GT) images used to train the deep neural network 10. In other embodiments, the input image(s) may have a lower numerical aperture and poorer resolution compared to the ground truth (GT) images. In this latter embodiment, the trained deep neural network 10 performs both virtual refocusing and improving the resolution (e.g., super-resolution) of the input image(s) 20. This additional functionality is imparted to the deep neural network 10 by training the same to increase or improve the resolution of the input image(s) 20.

[0069] In other embodiments, multiple user-defined or automatically generated surfaces 42 may be combined to create a volumetric (3D) image of the sample 12 using a plurality of output images 40. Thus, a stack of output images 40 generated using the trained deep neural network 10 may be merged or combined to create a volumetric image of the sample 12. The volumetric image may also be generated as a function of time, e.g., a volumetric movie or time-lapse video clip that shows movement over time. In a similar fashion, multiple user-defined or automatically generated surfaces 42 may be used to create an output image with an extended depth of field (EDOF) that extends the depth of field of the microscope 110 used to generate the input image 20. In this option, a plurality of output images 40 using a plurality of DPMs 42 are digitally combined to create an EDOF image of the sample 12. In a related embodiment, at least one output image 40 using one or more DPMs 42 is used to create an improved-focus image of the sample 12.
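As a hedged illustration of how a plurality of output images 40 could be digitally combined, the sketch below builds a virtual z-stack by running the trained network once per uniform DPM and then forms a simple EDOF composite by maximum intensity projection. The model interface shown is hypothetical and other combination strategies may be used.

```python
import numpy as np

def refocus_stack(model, image, depths_um):
    """Run the trained network once per uniform DPM to build a virtual z-stack.

    model:      callable taking a (1, H, W, 2) array and returning (1, H, W, 1);
                stands in for the trained Deep-Z generator (hypothetical interface).
    image:      (H, W) normalized input fluorescence image.
    depths_um:  iterable of axial refocusing distances in micrometers.
    """
    outputs = []
    for z in depths_um:
        dpm = np.full_like(image, z / 10.0)                  # uniform DPM, scaled
        inp = np.stack([image, dpm], axis=-1)[None, ...]     # (1, H, W, 2)
        outputs.append(model(inp)[0, ..., 0])
    return np.stack(outputs, axis=0)                         # (Z, H, W) virtual stack

def extended_depth_of_field(virtual_stack):
    """A simple EDOF composite: maximum intensity projection along z."""
    return virtual_stack.max(axis=0)
```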

[0070] In one particular embodiment, the output image(s) 40 generated by the trained deep neural network 10 are of the same imaging modality as that used to generate the input image 20. For example, if a fluorescence microscope 110 was used to obtain the input image(s) 20, the output image(s) 40 would also appear to be obtained from the same type of fluorescence microscope 110, albeit refocused to the user-defined or automatically generated surface 42. In another embodiment, the output image(s) 40 generated by the trained deep neural network 10 are of a different imaging modality than that used to generate the input image 20. For example, if a wide-field fluorescence microscope 110 was used to obtain the input image(s) 20, the output image(s) 40 may appear to be obtained from a confocal microscope and refocused to the user-defined or automatically generated surface 42.

[0071] In one preferred embodiment, the trained deep neural network 10 is trained as a generative adversarial network (GAN) and includes two parts: a generator network (G) and a discriminator network (D), as seen in FIG. 14. The generator network (G) includes a down-sampling path 44 and a symmetric up-sampling path 46. In the down-sampling path 44, there are five down-sampling blocks in one particular implementation. Each block in the down-sampling path 44 contains two convolutional layers that map an input tensor to an output tensor. The fifth down-sampling block in the down-sampling path 44 connects to the up-sampling path 46. The up-sampling path 46 includes, in one embodiment, four up-sampling blocks, each of which contains two convolutional layers that map the input tensor to the output tensor. The connection between consecutive up-sampling blocks is an up-convolution (convolution transpose) block that up-samples the image pixels by 2×. The last block is a convolutional layer that maps the channels (forty-eight (48) in one embodiment described herein) to one output channel.

[0072] The discriminator network (D) is a convolutional neural network that consists of six consecutive convolutional blocks, each of which maps the input tensor to the output tensor. After the last convolutional block, an average pooling layer flattens the output and reduces the number of parameters as explained herein. Subsequently there are fully-connected (FC) layers of size 3072 × 3072 with LReLU activation functions, and another FC layer of size 3072 × 1 with a Sigmoid activation function. The final output represents the score of the discriminator (D), which falls within (0, 1), where 0 represents a false and 1 represents a true label. During training, the weights are initialized (e.g., using the Xavier initializer), and the biases are initialized to 0.1. The trained deep neural network 10 is executed by the image processing software 104, which runs on a computing device 100. As explained herein, the image processing software 104 can be implemented using any number of software packages and platforms. For example, the trained deep neural network 10 may be implemented using TensorFlow, although other frameworks and programming languages may be used (e.g., Python, C++, etc.). The invention is not limited to a particular software platform.

[0073] The fluorescence output image(s) 40 may be displayed on a display 106 associated with the computing device 100, but it should be appreciated that the image(s) 40 may be displayed on any suitable display (e.g., computer monitor, tablet computer, mobile computing device, etc.). Input images 20 may also optionally be displayed with the one or more output image(s) 40. The display 106 may include a graphical user interface (GUI) or the like that enables the user to interact with various parameters of the system 2. For example, the GUI may enable the user to define or select certain time sequences of images to present on the display 106. The GUI may thus include common movie-maker tools that allow the user to clip or edit a sequence of images 40 to create a movie or time-lapse video clip. The GUI may also allow the user to easily define the particular user-defined surface(s) 42. For example, the GUI may include a knob, slide bar, or the like that allows the user to define the depth of a particular plane or other surface within the sample 12. The GUI may also have a number of pre-defined or arbitrary user-defined or automatically generated surfaces 42 that the user may choose from. These may include planes at different depths, planes at different cross-sections, planes at different tilts, and curved or other 3D surfaces that are selected using the GUI. This may also include a depth range within the sample 12 (e.g., a volumetric region in the sample 12). The GUI tools may permit the user to easily scan along the depth of the sample 12. The GUI may also provide various options to augment or adjust the output image(s) 40, including rotation, tilt-correction, and the like. In one preferred embodiment, the user-defined or automatically generated surfaces 42 are formed as a digital propagation matrix (DPM) 42 that represents, pixel-by-pixel, the axial distance of the desired or target surface from the plane of the input image 20. In other embodiments, the image processing software 104 may suggest or provide one or more user-defined or automatically generated surfaces 42 (e.g., DPMs). For example, the image processing software 104 may automatically generate one or more DPMs 42 that correct for one or more optical aberrations. This may include aberrations such as sample drift, tilt, and spherical aberrations. Thus, the DPM(s) 42 may be automatically generated by an algorithm implemented in the image processing software 104. Such an algorithm, which may be implemented using a separate trained neural network or software, may operate by starting with an initial guess of a surface or DPM 42 that is input along with a fluorescence image 20. The result of the network or software output is analyzed according to a metric (e.g., sharpness or contrast). The result is then used to generate a new surface or DPM 42 that is input with the fluorescence image 20 and analyzed as noted above, until the result has converged on a satisfactory result (e.g., sufficient sharpness or contrast has been achieved, or a maximum result obtained). The image processing software 104 may use a greedy algorithm to identify these DPMs 42 based, for example, on a surface that maximizes sharpness and/or contrast in the image. An important point is that these corrections take place offline and not while the sample 12 is being imaged.
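A minimal sketch of the offline, metric-driven search described above is given below, assuming a variance-of-Laplacian sharpness metric and an exhaustive scan over uniform refocusing depths; the metric, the search strategy, and the model wrapper interface are illustrative assumptions and not the patent's specific algorithm.

```python
import numpy as np

def sharpness(img):
    # Variance of the discrete Laplacian as one simple focus metric (among many choices).
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def search_best_depth(model, image, z_min=-10.0, z_max=10.0, step=0.5):
    """Offline search for the uniform DPM depth that maximizes sharpness.

    model: hypothetical wrapper around the trained Deep-Z generator that accepts
           an (H, W) image plus a scalar depth and returns the refocused (H, W) image.
    """
    best_z, best_score = 0.0, -np.inf
    for z in np.arange(z_min, z_max + step, step):
        candidate = model(image, z)          # refocus to a trial depth
        score = sharpness(candidate)
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```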

[0074] The GUI may provide the user the ability to watch selected movie clips or time-lapse videos of one or more moving or motile objects in the sample 12. In one particular embodiment, simultaneous movie clips or time-lapse videos may be shown on the display 106, each at a different focal depth. As explained herein, this capability of the system 2 not only eliminates the need for mechanical axial scanning and related optical hardware but also significantly reduces phototoxicity or photobleaching within the sample to enable longitudinal experiments (e.g., it enables a reduction of photon dose or light exposure to the sample 12). In addition, the virtually created time-lapse videos/movie clips are temporally synchronized to each other (i.e., the image frames 40 at different depths have identical time stamps), something that is not possible with scanning-based 3D imaging systems due to the unavoidable time delay between successive measurements of different parts of the sample volume.

[0075] In one embodiment, the system 2 may output image(s) 40 in substantially real-time with the input image(s) 20. That is to say, the acquired input image(s) 20 are input to the trained deep neural network 10 along with the user-defined or automatically generated surface(s) and the output image(s) 40 are generated or output in substantially real-time. In another embodiment, the input image(s) 20 may be obtained with the fluorescence microscope device 110 and then stored in a memory or local storage device (e.g., hard drive or solid-state drive) which can then be run through the trained deep neural network 10 at the convenience of the operator.

[0076] The input image(s) 20 (in addition to training images) obtained by the microscope device 110 may be obtained or acquired using a number of different types of microscopes 110. This includes: a super-resolution microscope, a confocal microscope, a confocal microscope with single-photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, or a ptychographic microscope.

[0077] Experimental

[0078] In the Deep-Z system 2 described herein, an input 2D fluorescence image 20 (to be digitally refocused onto a 3D surface within the volume of the sample 12) is first appended with a user-defined surface 42 in the form of a digital propagation matrix (DPM) that represents, pixel-by-pixel, the axial distance of the target surface from the plane of the input image, as seen in FIGS. 1 and 2. The Deep-Z image processing software 104 includes a trained deep neural network 10 that is trained as a conditional generative adversarial network (GAN) using accurately matched pairs of (1) various fluorescence images axially-focused at different depths and appended with different DPMs, and (2) the corresponding fluorescence images (i.e., the ground truth (GT) labels) captured at the correct/target focus plane defined by the corresponding DPM. Through this training process, which only uses experimental image data without any assumptions or physical models, the generator network of the GAN-based trained deep neural network 10 learns to interpret the values of each DPM pixel as an axial refocusing distance, and outputs an equivalent fluorescence image 40 that is digitally refocused within the sample 12 volume to the 3D surface defined by the user (i.e., the DPM or other user-defined or automatically generated surface 42), where some parts of the sample are in focus, while some other parts get out-of-focus, according to their true axial positions with respect to the target surface.

[0079] To demonstrate the success of this unique fluorescence digital refocusing system 2, Caenorhabditis elegans (C. elegans) neurons were imaged using a standard wide-field fluorescence microscope with a 20×/0.75 numerical aperture (NA) objective lens, and the native depth-of-field (DOF) of this objective (~1 μm) was extended by ~20-fold, where a single 2D fluorescence image was axially refocused using the trained deep neural network 10 to Δz = ±10 μm with respect to its focus plane, providing a very good match to the fluorescence images acquired by mechanically scanning the sample within the same axial range. Similar results were also obtained using a higher NA objective lens (40×/1.3 NA). Using this deep learning-based fluorescence image refocusing system 2, 3D tracking of the neuron activity of a C. elegans worm was further demonstrated over an extended DOF of ±10 μm using a time-sequence of fluorescence images acquired at a single focal plane. Thus, a time-series of input images 20 of a sample 12 (or objects within the sample 12) can be used to generate a time-lapse video or movie for 2D and/or 3D tracking over time.

[0080] Furthermore, to highlight some of the additional degrees-of-freedom enabled by the system 2, spatially non-uniform DPMs 42 were used to refocus a 2D input fluorescence image onto user-defined 3D surfaces to computationally correct for aberrations such as sample drift, tilt and spherical aberrations, all performed after the fluorescence image acquisition and without any modifications to the optical hardware of a standard wide-field fluorescence microscope.

[0081] Another important feature of the system 2 is that it permits cross-modality digital refocusing of fluorescence images 20, where the trained deep neural network 10 is trained with gold standard label images obtained by a different fluorescence microscopy modality to teach the trained deep neural network 10 to refocus an input image 20 onto another plane within the sample volume, but this time to match the image of the same plane that is acquired by a different fluorescence imaging modality compared to the input image 20. This related framework is referred to herein as Deep-Z+. In this embodiment, the output image 40 generated from an input image 20 acquired using a first microscope modality resembles and is substantially equivalent to a microscopy image of the same sample 12 obtained with a microscopy modality of the second type. To demonstrate the proof-of-concept of this unique capability, a Deep-Z+ trained deep neural network 10 was trained with input and label images that were acquired with a wide-field fluorescence microscope 110 and a confocal microscope (not shown), respectively, to blindly generate at the output of this cross-modality Deep-Z+ digitally refocused images 40 of an input wide-field fluorescence image 20 that match confocal microscopy images of the same sample sections.

[0082] It should be appreciated that a variety of different imaging modalities will work with the cross-modality functionality. For example, the first microscope modality may include a fluorescence microscope (e.g., wide-field fluorescence) and the second modality may include one of the following types of microscopes: a super-resolution microscope, a confocal microscope, a confocal microscope with single-photon or multi-photon excited fluorescence, a second harmonic or high harmonic generation fluorescence microscope, a light-sheet microscope, a structured illumination microscope, a computational microscope, or a ptychographic microscope.

[0083] After its training, the deep neural network 10 remains fixed, while the appended DPM or other user-defined surface 42 provides a "depth tuning knob" for the user to refocus a single 2D fluorescence image onto 3D surfaces and output the desired digitally-refocused fluorescence image 40 in a rapid, non-iterative fashion. In addition to fluorescence microscopy, the Deep-Z framework may be applied to other incoherent imaging modalities, and in fact it bridges the gap between coherent and incoherent microscopes by enabling 3D digital refocusing of a sample volume using a single 2D incoherent image. The system 2 is further unique in that it enables a computational framework for rapid transformation of a 3D surface onto another 3D surface within the fluorescent sample volume using a single forward-pass operation of the trained deep neural network 10.

[0084] Digital refocusing of fluorescence images using Deep-Z

[0085] The system 2 and methods described herein enable a single intensity-only wide-field fluorescence image 20 to be digitally refocused to a user-defined surface 42 within the axial range of its training. FIG. 2A demonstrates this concept by digitally propagating a single fluorescence image 20 of a 300 nm fluorescent bead (excitation/emission: 538 nm/584 nm) to multiple user-defined planes as defined by the DPMs 42. The native DOF of the input fluorescence image 20, defined by the NA of the objective lens (20×/0.75 NA), is ~1 μm. Using the Deep-Z system 2, the image of this fluorescent bead was digitally refocused over an axial range of ~ ±10 μm, matching the mechanically-scanned corresponding images of the same region of interest (ROI), which form the ground truth (GT). Note that the PSF in FIG. 2A is asymmetric in the axial direction, which provides directional cues to the neural network 10 regarding the digital propagation of an input image by Deep-Z. Unlike a symmetric Gaussian beam, such PSF asymmetry along the axial direction is ubiquitous in fluorescence microscopy systems. In addition to digitally refocusing an input fluorescence image 20, the Deep-Z system 2 also provides improved signal-to-noise ratio (SNR) at its output 40 in comparison to a fluorescence image of the same object measured at the corresponding depth (see FIG. 7); at the heart of this SNR increase compared to a mechanically-scanned ground truth is the ability of the trained deep neural network 10 to reject various sources of image noise that were not generalized during its training phase. To further quantify the Deep-Z system 2 output performance, PSF analysis was used. FIGS. 2B and 2C illustrate the histograms of both the lateral and the axial full-width-at-half-maximum (FWHM) values of 461 individual/isolated nano-beads distributed over ~500 × 500 μm². The statistics of these histograms agree very well with each other, confirming the match between the Deep-Z output images 40 calculated from a single fluorescence image (N = 1 measured image) and the corresponding axially-scanned ground truth (GT) images (N = 41 measured images). This quantitative match highlights the fact that the Deep-Z system 2 indirectly learned, through image data, the 3D refocusing of fluorescence light. However, this learned capability is limited to be within the axial range determined by the training dataset (e.g., ±10 μm in this work), and fails outside of this training range (see FIG. 8 for quantification of this phenomenon). Of course, training over a wider axial range will improve the range of axial refocusing for the trained deep neural network 10.

[0086] Next, the Deep-Z system 2 was tested by imaging the neurons of a C. elegans nematode expressing pan-neuronal tagRFP. FIG. 3 demonstrates the blind testing results for Deep-Z based refocusing of different parts of a C. elegans worm from a single wide-field fluorescence input image 20. Using the Deep-Z system 2, non-distinguishable fluorescent neurons in the input image 20 were brought into focus at different depths, while some other in-focus neurons in the input image 20 became out-of-focus and smeared into the background, according to their true axial positions in 3D; see the cross-sectional comparisons to the ground truth mechanical scans provided in FIG. 3 (also see FIGS. 9A-9J for image difference analysis). For optimal performance, this Deep-Z system 2 was specifically trained using C. elegans samples 12, to accurately learn the 3D PSF information together with the refractive properties of the nematode body and the surrounding medium. Using the Deep-Z system 2, a virtual 3D stack and 3D visualization of the sample 12 were generated (from a single 2D fluorescence image of a C. elegans worm) over an axial range of ~ ±10 μm. Similar results were also obtained for a C. elegans worm imaged under a 40×/1.3NA objective lens, where Deep-Z successfully refocused the input image over an axial range of ~ ±4 μm (see FIG. 10).

[0087] Because the Deep-Z system 2 can digitally reconstruct the image of an arbitrary plane within a 3D sample 12 using a single 2D fluorescence image 20, without sacrificing the inherent resolution, frame-rate or photon-efficiency of the imaging system, it is especially useful for imaging dynamic (e.g., moving) biological samples 12. To demonstrate this capability, a video was captured of four moving C. elegans worms 12, where each image frame of this fluorescence video was digitally refocused to various depths using the Deep-Z trained deep neural network 10. This enabled the creation of simultaneously running videos of the same sample volume, each one being focused at a different depth (e.g., z depth). This unique capability not only eliminates the need for mechanical axial scanning and related optical hardware, but also significantly reduces phototoxicity or photobleaching within the sample to enable longitudinal experiments. Yet another advantageous feature is the ability to simultaneously display temporally synchronized time-lapse videos or movie clips at different depths, which is not possible with conventional scanning-based 3D imaging systems. In addition to 3D imaging of the neurons of a nematode, the system 2 also works well to digitally refocus the images 20 of fluorescent samples 12 that are spatially denser, such as the mitochondria and F-actin structures within bovine pulmonary artery endothelial cells (BPAEC), as seen in FIG. 11 for example.

[0088] As described so far, the blindly tested samples 12 were inferred with a Deep-Z trained deep neural network 10 that was trained using the same type of sample 12 and the same microscopy system (i.e., the same modality of imaging device 110). The system 2 was also evaluated under different scenarios, where a change in the test data distribution is introduced in comparison to the training image set, such as, e.g., (1) a different type of sample 12 is imaged, (2) a different microscopy system 110 is used for imaging, and (3) a different illumination power or SNR is used. The results (FIGS. 17A, 17B, 18, 19) and related analysis reveal the robustness of the Deep-Z system 2 to some of these changes; however, as a general recommendation to achieve the best performance with the Deep-Z system 2, the neural network 10 should be trained (from scratch or through transfer learning, which significantly expedites the training process, as illustrated in FIGS. 17A, 17B and 18) using training images obtained with the same microscope imaging device/system 110 and the same types of samples as expected to be used at the testing phase.

[0089] Sample drift-induced defocus compensation using Deep-Z

[0090] The Deep-Z system 2 also enables the correction for sample drift induced defocus after the image 20 is captured. Videos were generated showing a moving C. elegans worm recorded by a wide-field fluorescence microscope 110 with a 20×/0.8NA objective lens (DOF ~1 μm). The worm was defocused ~2-10 μm from the recording plane. Using the Deep-Z system 2, one can digitally refocus each image frame 20 of the input video to different planes up to 10 μm away, correcting this sample drift induced defocus. Such a sample drift is conventionally compensated by actively monitoring the image focus and correcting for it during the measurement, e.g., by using an additional microscope. The Deep-Z system 2, on the other hand, provides the possibility to compensate for sample drift in already-captured 2D fluorescence images.

[0091] 3D functional imaging of C. elegans using Deep-Z

[0092] An important application of 3D fluorescence imaging is neuron activity tracking. For example, genetically modified animals that express different fluorescence proteins are routinely imaged using a fluorescence microscope 110 to reveal their neuron activity. To highlight the utility of the Deep-Z system 2 for tracking the activity of neurons in 3D, a fluorescence video of a C. elegans worm was recorded at a single focal plane (z = 0 μm) at ~3.6 Hz for ~35 sec, using a 20×/0.8NA objective lens with two fluorescence channels: FITC for neuron activity and Texas Red for neuron locations. The input video image frames 20 were registered with respect to each other to correct for the slight body motion of the worm between the consecutive frames (described herein in the Methods section). Then, each frame 20 at each channel of the acquired video was digitally refocused using the Deep-Z trained deep neural network 10 to a series of axial planes from -10 μm to 10 μm with a 0.5 μm step size, generating a virtual 3D fluorescence image stack (of output images 40) for each acquired frame. A comparison video was made of the recorded input video along with a video of the maximum intensity projection (MIP) along z for these virtual stacks. The neurons that are defocused in the input video can be clearly refocused on demand at the Deep-Z output for both of the fluorescence channels. This enables accurate spatio-temporal tracking of individual neuron activity in 3D from a temporal sequence of 2D fluorescence images 20, captured at a single focal plane.

[0093] To quantify the neuron activity using the Deep-Z output images 40, the voxels of each individual neuron were segmented using the Texas Red channel (neuron locations), and the change of the fluorescence intensity, i.e., ΔF(t) = F(t) - F_0, was tracked in the FITC channel (neuron activity) inside each neuron segment over time, where F(t) is the neuron fluorescence emission intensity and F_0 is its time average. A total of 155 individual neurons in 3D were isolated using the Deep-Z output images 40, as shown in FIG. 4B, where the color represents the depth (z location) of each neuron. For comparison, FIG. 20B reports the results of the same segmentation algorithm applied on just the input 2D image, where 99 neurons were identified, without any depth information.
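The per-neuron activity computation ΔF(t) = F(t) - F_0 described above can be sketched as follows; the array shapes and the function name neuron_activity_traces are assumptions made for illustration only.

```python
import numpy as np

def neuron_activity_traces(fitc_stacks, neuron_masks):
    """Compute dF(t) = F(t) - F0 for each segmented neuron.

    fitc_stacks:  (T, Z, H, W) virtual image stacks of the FITC (activity) channel,
                  one refocused stack per acquired frame.
    neuron_masks: (N, Z, H, W) boolean masks, one 3D segment per neuron, obtained
                  from the Texas Red (location) channel.
    Returns an (N, T) array of baseline-subtracted activity traces.
    """
    n_neurons, n_frames = neuron_masks.shape[0], fitc_stacks.shape[0]
    traces = np.zeros((n_neurons, n_frames))
    for n in range(n_neurons):
        mask = neuron_masks[n]
        for t in range(n_frames):
            traces[n, t] = fitc_stacks[t][mask].mean()    # F(t): mean intensity in segment
    return traces - traces.mean(axis=1, keepdims=True)    # subtract F0 (time average)
```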

[0094] FIG. 4C plots the activities of the 70 most active neurons, which were grouped into clusters C1-C3 based on their calcium activity pattern similarities. The activities of all 155 neurons inferred using Deep-Z are provided in FIGS. 12A-12F. FIG. 4C reports that cluster C3 calcium activities increased at t = 14 s, whereas the activities of cluster C2 decreased at a similar time point. These neurons very likely correspond to motor neuron types A and B, which promote backward and forward motion, respectively, and typically anti-correlate with each other. Cluster C1 features two cells that were comparatively larger in size, located in the middle of the worm. These cells had three synchronized short spikes at t = 4, 17 and 32 sec. Their 3D positions and calcium activity pattern regularity suggest that they are either neuronal or muscle cells of the defecation system that initiates defecation at regular intervals in coordination with the locomotion system.

[0095] It should be emphasized that all this 3D tracked neuron activity was in fact embedded in the input 2D fluorescence image sequence (i.e., images 20) acquired at a single focal plane within the sample 12, but could not be readily inferred from it. Through the Deep-Z system 2 and its 3D refocusing capability to user-defined surfaces 42 within the sample volume, the neuron locations and activities were accurately tracked using a 2D microscopic time sequence, without the need for mechanical scanning, additional hardware, or a trade-off of resolution or imaging speed.

[0096] Because the Deep-Z system 2 generates temporally synchronized virtual image stacks through purely digital refocusing, it can be used to match (or improve) the imaging speed up to the limit of the camera framerate, by using, e.g., the stream mode, which typically enables a short video of up to 100 frames per second. To highlight this opportunity, the stream mode of the camera of a Leica SP8 microscope was used: two videos were captured at 100 fps for monitoring the neuron nuclei (under the Texas Red channel) and the neuron calcium activity (under the FITC channel) of a moving C. elegans worm over a period of 10 sec, and Deep-Z was used to generate virtually refocused videos from these frames over an axial depth range of ±10 μm.

[0097] Deep-Z based aberration correction using spatially non-uniform DPMs

[0098] In one embodiment, uniform DPMs 42 were used in both the training phase and the blind testing in order to refocus an input fluorescence image 20 to different planes within the sample volume. Here it should be emphasized that, even though the Deep-Z trained deep neural network 10 was trained with uniform DPMs 42, in the testing phase one can also use spatially non-uniform entries as part of a DPM 42 to refocus an input fluorescence image 20 onto user-defined 3D surfaces. This capability enables digital refocusing of the fluorescence image of a 3D surface onto another 3D surface, defined by the pixel mapping of the corresponding DPM 42.

[0099] Such a unique capability can be useful, among many applications, for simultaneous auto-focusing of different parts of a fluorescence image after the image capture, for measurement or assessment of the aberrations introduced by the optical system (and/or the sample), as well as for correction of such aberrations by applying a desired non-uniform DPM 42. To exemplify this additional degree-of-freedom enabled by the Deep-Z system 2, FIGS. 5A-5L demonstrate the correction of the planar tilting and cylindrical curvature of two different samples, after the acquisition of a single 2D fluorescence image per object. FIG. 5A illustrates the first measurement, where the plane of a fluorescent nano-bead sample was tilted by 1.5° with respect to the focal plane of the objective lens. As a result, the left and right sides of the acquired raw fluorescence image (FIG. 5C) were blurred and the corresponding lateral FWHM values for these nano-beads became significantly wider, as reported in FIG. 5E. By using a non-uniform DPM 42 as seen in FIG. 5B, which represents this sample tilt, the Deep-Z trained deep neural network 10 can act on the blurred input image 20 (FIG. 5C) and accurately bring all the nano-beads into focus (FIG. 5D), even though it was only trained using uniform DPMs 42. The lateral FWHM values calculated at the network output image became monodispersed, with a median of ~0.96 μm (FIG. 5F), in comparison to a median of ~2.14 μm at the input image (FIG. 5E). Similarly, FIG. 5G illustrates the second measurement, where the nano-beads were distributed on a cylindrical surface with a diameter of ~7.2 mm. As a result, the measured raw fluorescence image exhibited defocused regions as illustrated in FIG. 5I, and the FWHM values of these nano-bead images were accordingly broadened (FIG. 5K), corresponding to a median value of ~2.41 μm. On the other hand, using a non-uniform DPM 42 that defines this cylindrical surface (FIG. 5H), the aberration in FIG. 5I was corrected using the Deep-Z trained deep neural network 10 (FIG. 5J), and similar to the tilted sample case, the lateral FWHM values calculated at the network output image once again became monodispersed, as desired, with a median of ~0.91 μm (FIG. 5L).

[00100] To evaluate the limitations of this technique, the 3D surface curvature that a DPM 42 can have without generating artifacts was quantified. For this, a series of DPMs 42 was used that consisted of 3D sinusoidal patterns with lateral periods of D = 1, 2, ..., 256 pixels along the x-direction (with a pixel size of 0.325 μm) and an axial oscillation range of 8 μm, i.e., a sinusoidal depth span of -1 μm to -9 μm with respect to the input plane. Each one of these 3D sinusoidal DPMs 42 was appended to an input fluorescence image 20 that was fed into the Deep-Z network 10. The network output at each sinusoidal 3D surface defined by the corresponding DPM 42 was then compared against the images that were interpolated in 3D using an axially-scanned z-stack with a scanning step size of 0.5 μm, which formed the ground truth images used for comparison. As summarized in FIGS. 13A-13F, the Deep-Z network 10 can reliably refocus the input fluorescence image 20 onto 3D surfaces defined by sinusoidal DPMs 42 when the period of the modulation is > 100 pixels (i.e., > 32 μm in object space). For faster oscillating DPMs 42, with periods smaller than 32 μm, the network output images 40 at the corresponding 3D surfaces exhibit background modulation at these high frequencies and their harmonics, as illustrated in the spectrum analysis reported in FIGS. 13A-13F. These higher harmonic artifacts and the background modulation disappear for lower frequency DPMs 42, which define sinusoidal 3D surfaces at the output with a lateral period of > 32 μm and an axial range of 8 μm.
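For illustration, non-uniform DPMs 42 such as a tilted plane or a sinusoidal 3D surface can be generated as simple per-pixel depth maps. The helper functions below are hypothetical sketches that use the 0.325 μm pixel size and the 8 μm axial span mentioned above.

```python
import numpy as np

def tilted_plane_dpm(shape, tilt_deg, pixel_size_um=0.325, axis=1):
    """DPM (in micrometers) for a plane tilted by tilt_deg about one lateral axis
    (e.g., the 1.5-degree tilt of the nano-bead sample)."""
    h, w = shape
    coords = (np.arange(w) if axis == 1 else np.arange(h)) * pixel_size_um
    ramp = np.tan(np.deg2rad(tilt_deg)) * (coords - coords.mean())
    return np.broadcast_to(ramp, shape) if axis == 1 else np.broadcast_to(ramp[:, None], shape)

def sinusoidal_dpm(shape, period_px, depth_min_um=-9.0, depth_max_um=-1.0):
    """DPM describing a 3D sinusoidal surface along x with a given lateral period
    and an 8 um axial oscillation span (-1 um to -9 um)."""
    h, w = shape
    x = np.arange(w)
    amplitude = 0.5 * (depth_max_um - depth_min_um)     # 4 um for an 8 um span
    offset = 0.5 * (depth_max_um + depth_min_um)        # -5 um midpoint
    profile = offset + amplitude * np.sin(2 * np.pi * x / period_px)
    return np.broadcast_to(profile, shape)
```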

[00101] Cross-modality digital refocusing of fluorescence images: Deep-Z+

[00102] The Deep-Z system 2 enables digital refocusing of out-of-focus 3D features in a wide-field fluorescence microscope image 20 to user-defined surfaces. The same concept can also be used to perform cross-modality digital refocusing of an input fluorescence image 20, where the generator network G can be trained using pairs of input and label images captured by two different fluorescence imaging modalities (referred to as Deep-Z+). After its training, the Deep-Z+ trained deep neural network 10 learns to digitally refocus a single input fluorescence image 20 acquired by a fluorescence microscope 110 to a user-defined target surface 42 in 3D, but this time the output 40 will match an image of the same sample 12 captured by a different fluorescence imaging modality at the corresponding height/plane. To demonstrate this unique capability, a Deep-Z+ deep neural network 10 was trained using pairs of wide-field microscopy images (used as inputs) and confocal microscopy images at the corresponding planes (used as ground truth (GT) labels) to perform cross-modality digital refocusing. FIGS. 6A-6D demonstrate the blind testing results for imaging microtubule structures of BPAEC using this Deep-Z+ system 2. As seen in FIGS. 6B-6D, the trained Deep-Z+ network 10 digitally refocused the input wide-field fluorescence image 20 onto different axial distances, while at the same time rejecting some of the defocused spatial features at the refocused planes, matching the confocal images of the corresponding planes, which serve as the ground truth (GT) (FIG. 6C). For instance, the microtubule structure at the lower left corner of a ROI in FIGS. 6A-6C, which was prominent at a refocusing distance of z = 0.34 μm, was digitally rejected by the Deep-Z+ network 10 at a refocusing distance of z = -0.46 μm (top image of FIG. 6B), since it became out-of-focus at this axial distance, matching the corresponding image of the confocal microscope at the same depth. As demonstrated in FIGS. 6A-6D, the Deep-Z+ system 2 merges the sectioning capability of confocal microscopy with its image refocusing framework. FIGS. 6B and 6C also report x-z and y-z cross-sections of the Deep-Z+ output images 40, where the axial distributions of the microtubule structures are significantly sharper in comparison to the axial scanning images of a wide-field fluorescence microscope, providing a very good match to the cross-sections obtained with a confocal microscope, matching the aim of its training.

[00103] The Deep-Z system 2 is powered by a trained deep neural network 10 that enables 3D refocusing within a sample 12 using a single 2D fluorescence image 20. This framework is non-iterative and does not require hyperparameter tuning following its training stage. In Deep-Z, the user can specify refocusing distances for each pixel in a DPM 42 (following the axial range used in the training), and the fluorescence image 20 can be digitally refocused to the corresponding surface through the Deep-Z trained deep neural network 10, within the transformation limits reported herein (see, e.g., FIG. 8 and FIGS. 13A-13F). The Deep-Z-based system 2 is also robust to changes in the density of the fluorescent objects within the sample volume (up to a limit, which is a function of the axial refocusing distance), the exposure time of the input images, as well as the illumination intensity modulation (see FIGS. 16A-16C, 19A-19B, 21A-21H, 22A-22D, 23A-23E and the related description for detailed results). Because the distances are encoded in the DPM and modeled as a convolutional channel, one can train the network 10 with uniform DPMs 42, which still permits one to apply various non-uniform DPMs 42 during the inference stage, as reported herein for, e.g., correcting the sample drift, tilt, curvature or other optical aberrations, which brings additional degrees-of-freedom to the imaging system.

[00104] Deep learning has also been recently demonstrated to be very effective in performing deconvolution to boost the lateral and the axial resolution in microscopy images. The Deep-Z network 10 is unique as it selectively deconvolves the spatial features that come into focus through the digital refocusing process (see, e.g., FIG. 11), while convolving other features that go out-of-focus, bringing the contrast to in-focus features, based on a user-defined DPM 42.

Through this Deep-Z framework, the snapshot 3D refocusing capability of coherent imaging and holography is brought to incoherent fluorescence microscopy, without any mechanical scanning, additional hardware components, or a trade-off of imaging resolution or speed. This not only significantly boosts the imaging speed, but also reduces the negative effects of photobleaching and phototoxicity on the sample 12. For a wide-field fluorescence microscopy experiment where an axial image stack is acquired, the illumination excites the fluorophores through the entire thickness of the specimen or sample 12, and the total light exposure of a given point within the sample volume is proportional to the number of imaging planes (N_z) that are acquired during a single-pass z-stack. In contrast, the Deep-Z system 2 only requires a single image acquisition step, if its axial training range covers the sample depth. Therefore, this reduction, enabled by the Deep-Z system 2, in the number of axial planes that need to be imaged within a sample volume directly helps to reduce the photodamage to the sample (see, e.g., FIGS. 24A-24B).

[00105] Finally, it should be noted that the retrievable axial range in this method depends on the SNR of the recorded image, i.e., if the depth information carried by the PSF falls below the noise floor, accurate inference will become a challenging task. To validate the performance of a pre-trained Deep-Z network model 10 under variable SNR, the inference of Deep-Z was tested under different exposure conditions (FIGS. 16A-16C), revealing the robustness of its inference over a broad range of image exposure times that were not included in the training data. An enhancement of ~20x in the DOF of a wide-field fluorescence image was demonstrated using the Deep-Z system 2. This axial refocusing range is in fact not an absolute limit but rather a practical choice for the training data, and it may be further improved through hardware modifications to the optical set-up by, e.g., engineering the PSF in the axial direction. In addition to requiring extra hardware and sensitive alignment/calibration, such approaches would also require brighter fluorophores to compensate for photon losses due to the insertion of additional optical components in the detection path.

[00106] Methods

[00107] Sample preparation

[00108] The 300 nm red fluorescence nano-beads were purchased from MagSphere Inc. (Item # PSF-300NM 0.3 UM RED), diluted by 5,000 times with methanol, and ultrasonicated for 15 minutes before and after dilution to break down the clusters. For the fluorescent bead samples on a flat surface and a tilted surface, a #1 coverslip (22×22 mm², ~150 μm thickness) was thoroughly cleaned and plasma treated. Then, a 2.5 μL droplet of the diluted bead sample was pipetted onto the coverslip and dried. For the fluorescent bead sample 12 on a curved (cylindrical) surface, a glass tube (~7.2 mm diameter) was thoroughly cleaned and plasma treated. Then a 2.5 μL droplet of the diluted bead sample 12 was pipetted onto the outer surface of the glass tube and dried.

[00109] Structural imaging of C. elegans neurons was carried out in strain AML18. AML18 carries the genotype wtfIs3 [rab-3p::NLS::GFP + rab-3p::NLS::tagRFP] and expresses GFP and tagRFP in the nuclei of all the neurons. For functional imaging, the strain AML32 was used, carrying wtfIs5 [rab-3p::NLS::GCaMP6s + rab-3p::NLS::tagRFP]. The strains were acquired from the Caenorhabditis Genetics Center (CGC). Worms were cultured on Nematode Growth Media (NGM) seeded with OP50 bacteria using standard conditions. For imaging, worms were washed off the plates with M9 and anaesthetized with 3 mM levamisole. Anaesthetized worms were then mounted on slides seeded with 3% agarose. To image moving worms, the levamisole was omitted.

[00110] Two slides of multi -labeled bovine pulmonary artery endothelial cells (BPAEC) were acquired from Thermo Fisher: FluoCells Prepared Slide #1 and FluoCells Prepared Slide #2. These cells were labeled to express different cell structures and organelles. The first slide uses Texas Red for mitochondria and FITC for F-actin structures. The second slide uses FITC for microtubules.

[00111] Fluorescence image acquisition

[00112] The fluorescence images of the nano-beads, C. elegans structure and BPAEC samples were captured by an inverted scanning microscope (IX83, Olympus Life Science) using a 20×/0.75NA objective lens (UPLSAPO20X, Olympus Life Science). A 130 W fluorescence light source (U-HGLGPS, Olympus Life Science) was used at 100% output power. Two bandpass optical filter sets were used: Texas Red and FITC. The bead samples were captured by placing the coverslip with beads directly on the microscope sample mount. The tilted surface sample was captured by placing the coverslip with beads on a 3D-printed holder, which creates a 1.5° tilt with respect to the focal plane. The cylindrical tube surface with fluorescent beads was placed directly on the microscope sample mount. These fluorescent bead samples were imaged using the Texas Red filter set. The C. elegans sample slide was placed on the microscope sample mount and imaged using the Texas Red filter set. The BPAEC slide was placed on the microscope sample mount and imaged using the Texas Red and FITC filter sets. For all the samples, the scanning microscope had a motorized stage (PROSCAN XY STAGE KIT FOR IX73/83) that moved the samples to different FOVs and performed image-contrast-based auto-focus at each location. The motorized stage was controlled using MetaMorph® microscope automation software (Molecular Devices, LLC). At each location, the control software autofocused the sample based on the standard deviation of the image, and a z-stack was taken from -20 μm to 20 μm with a step size of 0.5 μm. The image stack was captured by a monochrome scientific CMOS camera (ORCA-Flash4.0 v2, Hamamatsu Photonics K.K.), and saved in non-compressed tiff format, with 81 planes and 2048 × 2048 pixels in each plane.

[00113] The images of C. elegans neuron activities were captured by another scanning wide-field fluorescence microscope (TCS SP8, Leica Microsystems) using a 20×/0.8NA objective lens (HC PL APO 20×/0.80 DRY, Leica Microsystems) and a 40×/1.3NA objective lens (HC PL APO 40×/1.30 OIL, Leica Microsystems). Two bandpass optical filter sets were used: Texas Red and FITC. The images were captured by a monochrome scientific CMOS camera (Leica DFC9000GTC-VSC08298). For capturing image stacks of anesthetized worms, the motorized stage controlled by a control software (LAS X, Leica Microsystems) moved the sample slide to different FOVs. At each FOV, the control software took a z-stack from -20 μm to 20 μm with a step size of 0.5 μm for the 20×/0.8NA objective lens images, and with a step size of 0.27 μm for the 40×/1.3NA objective lens images, with respect to a middle plane (z = 0 μm). Two images were taken at each z-plane, for the Texas Red channel and the FITC channel, respectively. For capturing 2D videos of dynamic worms, the control software took a time-lapsed video that also time-multiplexed the Texas Red and FITC channels at the maximum speed of the system. This resulted in an average framerate of ~3.6 fps for a maximum camera framerate of 10 fps, for imaging both channels.

[00114] The BPAEC wide-field and confocal fluorescence images were captured by another inverted scanning microscope (TCS SP5, Leica Microsystems). The images were acquired using a 63×/1.4NA objective lens (HC PL APO 63×/1.40 Oil CS2, Leica Microsystems) and the FITC filter set was used. The wide-field images were recorded by a CCD with 1380 × 1040 pixels and 12-bit dynamic range, whereas the confocal images were recorded by a photo-multiplier tube (PMT) with 8-bit dynamic range (1024 × 1024 pixels). The scanning microscope had a motorized stage that moved the sample to different FOVs and depths. For each location, a stack of 12 images with 0.2 μm axial spacing was recorded.

[00115] Image pre-processing and training data preparation

[00116] Each captured image stack was first axially aligned using an ImageJ plugin named "StackReg", which corrects the rigid shift and rotation caused by the microscope stage inaccuracy. Then an extended depth of field (EDF) image was generated using another ImageJ plugin named "Extended Depth of Field." This EDF image was used as a reference image to normalize the whole image stack, following four steps: (1) a triangular threshold was used on the image to separate the background and foreground pixels; (2) the mean intensity of the background pixels of the EDF image was determined to be the background noise and subtracted; (3) the EDF image intensity was scaled to 0-1, where the scale factor was determined such that 1% of the foreground pixels above the background were greater than one (i.e., saturated); and (4) each image in the stack was subtracted by this background level and normalized by this intensity scaling factor. For testing data without an image stack, steps (1)-(3) were applied on the input image instead of the EDF image.

[00117] To prepare the training and validation datasets, on each FOV, a geodesic dilation with fixed thresholds was applied on the fluorescence EDF images to generate a mask that represents the regions containing the sample fluorescence signal above the background. Then, a customized greedy algorithm was used to determine a minimal set of regions of 256 × 256 pixels that covered this mask, with ~5% area overlaps between these training regions. The lateral locations of these regions were used to crop images at each height of the image stack, where the middle plane for each region was set to be the one with the highest standard deviation. Then 20 planes above and 20 planes below this middle plane were set to be the range of the stack, and an input image plane was generated from each one of these 41 planes. Depending on the size of the data set, around 5-10 out of these 41 planes were randomly selected as the corresponding target plane, forming around 150 to 300 image pairs. For each one of these image pairs, the refocusing distance was determined based on the location of the plane (i.e., 0.5 μm times the difference from the input plane to the target plane). By repeating this number, a uniform DPM 42 was generated and appended to the input fluorescence image 20. The final dataset typically contained ~100,000 image pairs. This was randomly divided into a training dataset and a validation dataset, which took 85% and 15% of the data, respectively. During the training process, each data point was further augmented five times by flipping or rotating the images by a random multiple of 90°. The validation dataset was not augmented. The testing dataset was cropped from separate measurements with sample FOVs that do not overlap with the FOVs of the training and validation data sets.
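A minimal sketch of the EDF-based normalization steps (1)-(4) described above is shown below; the use of scikit-image's triangle threshold and the 99th-percentile scaling are assumptions chosen to approximate the described procedure, not the exact implementation.

```python
import numpy as np
from skimage.filters import threshold_triangle

def normalize_stack(image_stack, edf_image):
    """Normalize an axial image stack using its extended-depth-of-field (EDF) image.

    Follows the four steps described above: triangular thresholding to separate
    background from foreground, background subtraction, and intensity scaling so
    that roughly 1% of the foreground pixels saturate. Function names are illustrative.
    """
    thresh = threshold_triangle(edf_image)              # step (1): background/foreground split
    background = edf_image[edf_image < thresh].mean()   # step (2): background level
    foreground = edf_image[edf_image >= thresh] - background
    scale = np.percentile(foreground, 99)               # step (3): ~1% of foreground saturates
    return (image_stack - background) / scale           # step (4): apply to every plane
```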

[00118] Deep-Z network architecture

[00119] The Deep-Z network is formed by a least squares GAN (LS-GAN) framework, and it is composed of two parts: a generator (G) and a discriminator (D), as shown in FIG. 14. The generator (G) is a convolutional neural network (CNN) and consists of a down-sampling path 44 and a symmetric up-sampling path 46. In the down-sampling path 44, there are five down-sampling blocks. Each block contains two convolutional layers that map the input tensor x_k to the output tensor x_k+1:

[00120] x_k+1 = x_k + ReLU[CONV_k2{ReLU[CONV_k1{x_k}]}]     (1)

[00121] where ReLU[·] stands for the rectified linear unit operation, and CONV{·} stands for the convolution operator (including the bias terms). The subscript of CONV denotes the number of channels in the convolutional layer; along the down-sampling path one has: k1 = 25, 72, 144, 288, 576 and k2 = 48, 96, 192, 384, 768 for levels k = 1, 2, 3, 4, 5, respectively. The "+" sign in Eq. (1) represents a residual connection. Zero padding was used on the input tensor x_k to compensate for the channel number mismatch between the input and output tensors. The connection between two consecutive down-sampling blocks is a 2×2 max-pooling layer with a stride of 2×2 pixels to perform a 2× down-sampling. The fifth down-sampling block connects to the up-sampling path, which will be detailed next.

[00122] In the up-sampling path 46, there are four corresponding up-sampling blocks, each of which contains two convolutional layers that map the input tensor y_k+1 to the output tensor y_k using:

[00123] y_k = ReLU[CONV_k4{ReLU[CONV_k3{CAT(x_k+1, y_k+1)}]}]     (2)

[00124] where the CAT(·) operator represents the concatenation of the tensors along the channel direction, i.e., CAT(x_k+1, y_k+1) appends the tensor x_k+1 from the down-sampling path to the tensor y_k+1 in the up-sampling path at the corresponding level k+1. The numbers of channels in the convolutional layers, denoted by k3 and k4, are k3 = 72, 144, 288, 576 and k4 = 48, 96, 192, 384 along the up-sampling path for k = 1, 2, 3, 4, respectively. The connection between consecutive up-sampling blocks is an up-convolution (convolution transpose) block that up-samples the image pixels by 2×. The last block is a convolutional layer that maps the 48 channels to one output channel (see FIG. 14).
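As a hedged sketch of Eqs. (1) and (2), the TensorFlow/Keras blocks below implement one down-sampling block with a zero-padded residual connection and one up-sampling block with skip concatenation; 'same' zero padding is used in place of the replicate padding described further below, and the wiring example for the first level is illustrative only.

```python
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, k1, k2):
    """Down-sampling block of Eq. (1): x_k+1 = x_k + ReLU[CONV_k2{ReLU[CONV_k1{x_k}]}].
    'same' zero padding is used here for simplicity instead of replicate padding."""
    y = layers.Conv2D(k1, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(k2, 3, padding="same", activation="relu")(y)
    pad = k2 - int(x.shape[-1])
    if pad > 0:  # zero-pad the input channels so the residual addition is possible
        x = layers.Lambda(lambda t: tf.pad(t, [[0, 0], [0, 0], [0, 0], [0, pad]]))(x)
    return layers.Add()([x, y])

def up_block(x_skip, y_prev, k3, k4):
    """Up-sampling block of Eq. (2): concatenate the skip tensor, then two ReLU convolutions."""
    y = layers.Concatenate()([x_skip, y_prev])
    y = layers.Conv2D(k3, 3, padding="same", activation="relu")(y)
    return layers.Conv2D(k4, 3, padding="same", activation="relu")(y)

# Illustrative wiring of the first level (k = 1): the input has 2 channels (image + DPM).
inp = layers.Input(shape=(256, 256, 2))
x2 = down_block(inp, k1=25, k2=48)                      # first down-sampling block
pooled = layers.MaxPool2D(pool_size=2, strides=2)(x2)   # 2x2 max-pooling between blocks
```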

[00125] The discriminator is a convolutional neural network that consists of six consecutive convolutional blocks, each of which maps the input tensor z_i to the output tensor z_i+1, for a given level i:

[00126] z_i+1 = LReLU[CONV_i2{LReLU[CONV_i1{z_i}]}]     (3)

[00127] where LReLU stands for the leaky ReLU operator with a slope of 0.01. The subscript of the convolutional operator represents its number of channels, which are i1 = 48, 96, 192, 384, 768, 1536 and i2 = 96, 192, 384, 768, 1536, 3072, for the convolutional blocks i = 1, 2, 3, 4, 5, 6, respectively.

[00128] After the last convolutional block, an average pooling layer flattens the output and reduces the number of parameters to 3072. Subsequently there are fully-connected (FC) layers of size 3072 × 3072 with LReLU activation functions, and another FC layer of size 3072 × 1 with a Sigmoid activation function. The final output represents the discriminator score, which falls within (0, 1), where 0 represents a false and 1 represents a true label.

[00129] All the convolutional blocks use a convolutional kernel size of 3 × 3 pixels, and replicate padding of one pixel unless mentioned otherwise. All the convolutions have a stride of 1 × 1 pixel, except the second convolution in Eq. (3), which has a stride of 2 × 2 pixels to perform a 2× down-sampling in the discriminator path. The weights are initialized using the Xavier initializer, and the biases are initialized to 0.1.
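A hedged TensorFlow/Keras sketch of the discriminator path of Eq. (3) is given below, with the stated channel numbers, LReLU slope of 0.01, average pooling and the two fully-connected layers; the Xavier (Glorot) weight and 0.1 bias initializations are noted in the code, and 'same' padding is again used for simplicity instead of replicate padding.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Xavier (Glorot) weight initialization and constant 0.1 bias initialization, as described above.
init = dict(kernel_initializer="glorot_uniform",
            bias_initializer=tf.keras.initializers.Constant(0.1))

def disc_block(z, i1, i2):
    """Discriminator block of Eq. (3): two 3x3 convolutions with LReLU (slope 0.01);
    the second convolution uses a stride of 2 for the 2x down-sampling."""
    z = layers.Conv2D(i1, 3, padding="same", **init)(z)
    z = layers.LeakyReLU(0.01)(z)
    z = layers.Conv2D(i2, 3, strides=2, padding="same", **init)(z)
    return layers.LeakyReLU(0.01)(z)

def build_discriminator(input_shape=(256, 256, 1)):
    """Sketch of the discriminator: six blocks, average pooling, then two FC layers."""
    channels = [(48, 96), (96, 192), (192, 384), (384, 768), (768, 1536), (1536, 3072)]
    inp = layers.Input(shape=input_shape)
    z = inp
    for i1, i2 in channels:
        z = disc_block(z, i1, i2)
    z = layers.GlobalAveragePooling2D()(z)                    # flattens to a 3072-element vector
    z = layers.Dense(3072, **init)(z)
    z = layers.LeakyReLU(0.01)(z)
    out = layers.Dense(1, activation="sigmoid", **init)(z)    # score in (0, 1)
    return tf.keras.Model(inp, out)
```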

[00130] Training and testing of the Deep-Z network

[00131] The Deep-Z network 10 learns to use the information given by the appended DPM 42 to digitally refocus the input image 20 to a user-defined plane. In the training phase, the input data of the generator G(·) have the dimensions of 256 × 256 × 2, where the first channel is the fluorescence image, and the second channel is the user-defined DPM. The target data of G(·) have the dimensions of 256 × 256, which represent the corresponding fluorescence image at a surface specified by the DPM. The input data of the discriminator D(·) have the dimensions of 256 × 256, which can be either the generator output or the corresponding target z^(i). During the training phase, the network iteratively minimizes the generator loss L_G and the discriminator loss L_D, defined as:

[00134] where N is the number of images used in each batch (e.g., N = 20), G(x^(i)) is the generator output for the input x^(i), z^(i) is the corresponding target label, D(·) is the discriminator, and MAE(·) stands for the mean absolute error. α is a regularization parameter for the GAN loss and the MAE loss in L_G; in the training phase, it was chosen as α = 0.02. For training stability and optimal performance, the adaptive momentum optimizer (Adam) was used to minimize both L_G and L_D, with learning rates of 10^-4 and 3 × 10^-5 for L_G and L_D, respectively. In each iteration, six updates of the generator loss and three updates of the discriminator loss were performed. The validation set was tested every 50 iterations, and the best network (to be blindly tested) was chosen to be the one with the smallest MAE loss on the validation set.
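Since the exact equations defining L_G and L_D are given above rather than here, the snippet below only sketches one common least-squares GAN formulation with an MAE fidelity term weighted by α = 0.02 and the stated Adam learning rates; the precise combination of the two terms is an assumption and may differ from the patent's definition.

```python
import tensorflow as tf

alpha = 0.02  # regularization parameter balancing the GAN and MAE terms (see text)

def generator_loss(d_of_g, g_output, target):
    """Least-squares GAN generator loss with an MAE fidelity term.
    The weighting shown follows one common LS-GAN convention and is an assumption."""
    gan_term = tf.reduce_mean(tf.square(d_of_g - 1.0))       # push D(G(x)) toward 1
    mae_term = tf.reduce_mean(tf.abs(g_output - target))     # pixel-wise MAE
    return alpha * gan_term + mae_term

def discriminator_loss(d_of_real, d_of_fake):
    """Least-squares GAN discriminator loss: real labels -> 1, generated labels -> 0."""
    return (tf.reduce_mean(tf.square(d_of_real - 1.0)) +
            tf.reduce_mean(tf.square(d_of_fake))) / 2.0

g_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # for L_G
d_optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)   # for L_D
```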

[00135] In the testing phase, once the training is complete, only the generator network (G) is active. Thus, the trained deep neural network 10, in its final trained form, only includes the generator network (G). Limited by the graphical memory of the GPU, the largest image FOV that was tested was 1536 × 1536 pixels. Because each image was normalized to be in the range 0-1, whereas the refocusing distance was on the scale of around -10 to 10 (in units of μm), the DPM entries were divided by 10 to be in the range of -1 to 1 before the training and testing of the Deep-Z network, to keep the dynamic ranges of the image and DPM matrices similar to each other.

[00136] The network was implemented using TensorFlow and run on a PC with an Intel Core i7-8700K six-core 3.7 GHz CPU and 32 GB of RAM, using an Nvidia GeForce 1080Ti GPU. On average, the training takes ~70 hours for ~400,000 iterations (equivalent to ~50 epochs). After the training, the network inference time was ~0.2 s for an image with 512 × 512 pixels and ~1 s for an image with 1536 × 1536 pixels on the same PC.

[00137] Measurement of the lateral and axial FWHM values of the fluorescent beads

[00138] For characterizing the lateral FWHM of the fluorescent bead samples, a threshold was applied to the image to extract the connected components. Then, individual regions of 30 × 30 pixels were cropped around the centroids of these connected components. A 2D Gaussian fit was performed on each of these individual regions, using lsqcurvefit in Matlab (MathWorks, Inc.), to match the function:

[00140] The lateral FWHM was then calculated as the mean FWHM of the x and y directions, i.e.,

[00142] where Δx = Δy = 0.325 μm was the effective pixel size of the fluorescence image on the object plane. A histogram was subsequently generated for the lateral FWHM values of all the thresholded beads (e.g., n = 461 for FIGS. 2A-2C and n > 750 for FIGS. 5A-5L).
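A hedged sketch of the lateral FWHM measurement described above (the axial measurement described next is analogous, using σ_z and the axial step size): a 2D Gaussian is fitted to a 30 × 30 crop around each bead and the fitted sigmas are converted to an FWHM in micrometers. This mirrors the Matlab lsqcurvefit procedure with scipy; the Gaussian parameterization and names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

PIXEL_UM = 0.325  # effective pixel size on the object plane

def gauss2d(coords, A, xc, yc, sx, sy, offset):
    x, y = coords
    return (A * np.exp(-((x - xc) ** 2 / (2 * sx ** 2) +
                         (y - yc) ** 2 / (2 * sy ** 2))) + offset).ravel()

def lateral_fwhm(crop):
    """crop: (30, 30) region cropped around a bead centroid."""
    yy, xx = np.mgrid[0:crop.shape[0], 0:crop.shape[1]]
    p0 = [crop.max(), crop.shape[1] / 2, crop.shape[0] / 2, 2.0, 2.0, crop.min()]
    popt, _ = curve_fit(gauss2d, (xx, yy), crop.ravel(), p0=p0)
    sx, sy = abs(popt[3]), abs(popt[4])
    factor = 2.0 * np.sqrt(2.0 * np.log(2.0))            # sigma -> FWHM conversion
    return 0.5 * (factor * sx + factor * sy) * PIXEL_UM  # mean of x and y FWHM (um)
```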

[00143] To characterize the axial FWHM values of the bead samples, slices along the x-z direction with 81 steps were cropped at y = y_c for each bead, from either the digitally refocused or the mechanically-scanned axial image stack. Another 2D Gaussian fit was performed on each cropped slice, to match the function:

[00145] The axial FWHM was then calculated as:

[00146] $\mathrm{FWHM}_{\mathrm{axial}} = 2\sqrt{2\ln 2}\cdot\sigma_z\cdot\Delta z$   (9)

[00147] where Δz = 0.5 μm was the axial step size. A histogram was subsequently generated for the axial FWHM values.

[00148] Image quality evaluation

[00149] The network output images I_out were evaluated with reference to the corresponding ground truth images I_GT using five different criteria: (1) mean square error (MSE), (2) root mean square error (RMSE), (3) mean absolute error (MAE), (4) correlation coefficient, and (5) structural similarity index (SSIM). The MSE is one of the most widely used error metrics, defined as:

[00150] $\mathrm{MSE} = \frac{1}{N_x N_y}\sum_{x,y}\left(I_{\mathrm{out}}(x,y) - I_{GT}(x,y)\right)^2$   (10)

[00151] where N_x and N_y represent the number of pixels in the x and y directions, respectively. The square root of the MSE gives the RMSE. Compared to the MSE, the MAE uses the 1-norm difference (absolute difference) instead of the 2-norm difference, which is less sensitive to significant outlier pixels:

[00152] $\mathrm{MAE} = \frac{1}{N_x N_y}\sum_{x,y}\left|I_{\mathrm{out}}(x,y) - I_{GT}(x,y)\right|$   (11)

[00153] The correlation coefficient is defined as:

[00154] $\mathrm{corr}(I_{\mathrm{out}}, I_{GT}) = \dfrac{\sum_{x,y}\left(I_{\mathrm{out}} - \mu_{\mathrm{out}}\right)\left(I_{GT} - \mu_{GT}\right)}{\sqrt{\sum_{x,y}\left(I_{\mathrm{out}} - \mu_{\mathrm{out}}\right)^2\,\sum_{x,y}\left(I_{GT} - \mu_{GT}\right)^2}}$   (12)

[00155] where μ_out and μ_GT are the mean values of the images I_out and I_GT, respectively.

[00156] While the criteria listed above can be used to quantify errors in the network output compared to the ground truth (GT), they are not strong indicators of the perceived similarity between two images. The SSIM aims to address this shortcoming by evaluating the structural similarity in the images, defined as:

[00158] where σ_out and σ_GT are the standard deviations of I_out and I_GT, respectively, and σ_out,GT is the cross-variance between the two images.
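A minimal sketch of the five evaluation criteria listed above, assuming I_out and I_GT are floating-point images on the same intensity scale. The SSIM is taken from scikit-image; the remaining metrics follow their standard definitions and are not taken from the authors' evaluation scripts.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(I_out, I_GT):
    diff = I_out - I_GT
    mse = np.mean(diff ** 2)
    return {
        'MSE': mse,
        'RMSE': np.sqrt(mse),
        'MAE': np.mean(np.abs(diff)),                                  # 1-norm difference
        'corr': np.corrcoef(I_out.ravel(), I_GT.ravel())[0, 1],        # correlation coefficient
        'SSIM': structural_similarity(I_out, I_GT,
                                      data_range=I_GT.max() - I_GT.min()),
    }
```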

[00159] Tracking and quantification of C. elegans neuron activity

[00160] The C. elegans neuron activity tracking video was captured by time-multiplexing the two fluorescence channels (FITC, followed by Texas Red, then FITC, and so on). The adjacent frames were combined so that the green color channel was FITC (neuron activity) and the red color channel was Texas Red (neuron nuclei). Subsequent frames were aligned using a feature-based registration toolbox with projective transformation in Matlab (MathWorks, Inc.) to correct for slight body motion of the worms. Each input video frame was appended with DPMs 42 representing propagation distances from -10 μm to 10 μm with a 0.5 μm step size, and then tested through a Deep-Z network 10 (specifically trained for this imaging system), which generated a virtual axial image stack for each frame in the video.

[00161] To localize individual neurons, the red channel stacks (Texas Red, neuron nuclei) were projected by median intensity through the time sequence. Local maxima in this projected median-intensity stack marked the centroid of each neuron, and the voxels of each neuron were segmented from these centroids by watershed segmentation, which generated a 3D spatial voxel mask for each neuron. A total of 155 neurons were isolated. Then, the average of the 100 brightest voxels in the green channel (FITC, neuron activity) inside each neuron's spatial mask was calculated as the calcium activity intensity F_i(t), for each time frame t and each neuron i = 1, 2, ..., 155. The differential activity was then calculated as ΔF(t) = F(t) - F_0 for each neuron, where F_0 is the time average of F(t).
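A hedged sketch of the calcium-activity quantification described above: given each neuron's 3D voxel mask (e.g., from watershed segmentation of the median-projected red channel) and the green-channel virtual stacks over time, the 100 brightest green-channel voxels are averaged per neuron and per frame, and the differential activity is computed relative to the time average. Variable names are illustrative.

```python
import numpy as np

def calcium_activity(green_stack_t, neuron_masks, n_brightest=100):
    """green_stack_t: (T, Z, Y, X) virtual image stacks over time (FITC channel).
    neuron_masks: list of boolean (Z, Y, X) masks, one per segmented neuron."""
    T = green_stack_t.shape[0]
    F = np.zeros((len(neuron_masks), T))
    for i, mask in enumerate(neuron_masks):
        for t in range(T):
            vox = green_stack_t[t][mask]
            F[i, t] = np.sort(vox)[-n_brightest:].mean()   # 100 brightest voxels
    dF = F - F.mean(axis=1, keepdims=True)                 # dF(t) = F(t) - F0
    return F, dF
```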

[00162] By thresholding on the standard deviation of each ΔF(t), the 70 most active cells were selected; further clustering was then performed on them based on their calcium activity pattern similarity (FIG. 12B) using a spectral clustering algorithm. The calcium activity pattern similarity was defined as

[00164] for neurons i and j, which results in a similarity matrix S (FIG. 12C). σ = 1.5 is the standard deviation of this Gaussian similarity function, which controls the width of the neighborhoods in the similarity graph. The spectral clustering solves an eigenvalue problem on the graph Laplacian L generated from the similarity matrix S, defined as the difference of the weight matrix W and the degree matrix D, i.e.,

[00165] $L = D - W$   (15)

[00166] where W is the weight matrix of the similarity graph and D is the diagonal degree matrix with entries D_ii = Σ_j W_ij.

[00169] The number of clusters was chosen using the eigen-gap heuristic, i.e., the index of the largest generalized eigenvalue (obtained by solving the generalized eigenvalue problem Lv = λDv) before the eigen-gap, where the eigenvalues jump up significantly; this was determined to be k = 3 (see FIG. 12D). The corresponding first k = 3 eigenvectors were then combined as a matrix, whose rows were clustered using standard k-means clustering, which resulted in the three clusters of the calcium activity patterns shown in FIG. 12E and the rearranged similarity matrix shown in FIG. 12F.
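A hedged sketch of the spectral clustering steps described above: build the graph Laplacian L = D - W from the similarity matrix, solve the generalized eigenproblem Lv = λDv, pick k from the eigen-gap, and k-means cluster the rows of the first k eigenvectors. The similarity function itself is not reproduced here; S is assumed to be a precomputed symmetric (n, n) matrix, and the eigen-gap search window is an illustrative choice.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_cluster(S, k=None):
    W = S.copy()                              # similarity graph weights
    D = np.diag(W.sum(axis=1))                # degree matrix
    L = D - W                                 # graph Laplacian
    eigvals, eigvecs = eigh(L, D)             # generalized eigenproblem L v = lambda D v
    if k is None:
        gaps = np.diff(eigvals[:10])          # inspect the first few eigenvalues
        k = int(np.argmax(gaps)) + 1          # eigen-gap heuristic (k = 3 in the text)
    V = eigvecs[:, :k]                        # first k eigenvectors as columns
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(V)
    return labels, k
```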

[00170] Cross-modality alignment of wide-field and confocal fluorescence images

[00171] Each stack of the wide-field/confocal pair was first self-aligned and normalized. Then the individual FOVs were stitched together using the "Image Stitching" plugin of ImageJ. The stitched wide-field and confocal EDF images were then co-registered using a feature-based registration with projective transformation performed in Matlab (MathWorks, Inc.). The stitched confocal EDF images, as well as the stitched stacks, were then warped using this estimated transformation to match their wide-field counterparts (FIG. 15A). The non-overlapping regions of the wide-field and warped confocal images were subsequently deleted. The above-described greedy algorithm was then used to crop non-empty regions of 256 × 256 pixels from the remaining stitched wide-field images and their corresponding warped confocal images. The same feature-based registration was applied to each pair of cropped regions for fine alignment. This step provides good correspondence between the wide-field image and the corresponding confocal image in the lateral directions (FIG. 15B).

[00172] Although the axial scanning step size was fixed at 0.2 μm, the reference zero-point in the axial direction for the wide-field and the confocal stacks needed to be matched. To determine this reference zero-point, the images at each depth were compared with the EDF image of the same region using the structural similarity index (SSIM), providing a focus curve (FIG. 15C). A second-order polynomial fit was performed on the four points in this focus curve with the highest SSIM values, and the reference zero-point was determined to be the peak of the fit (FIG. 15C). The heights of the wide-field and confocal stacks were then centered by their corresponding reference zero-points in the axial direction. For each wide-field image used as input, four confocal images were randomly selected from the stack as the target, and their DPMs were calculated based on the axial difference of the centered height values of the confocal and the corresponding wide-field images.
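A minimal sketch of the axial zero-point estimation described above: the SSIM between each plane of a stack and the EDF image of the same region gives a focus curve, a second-order polynomial is fitted to the four highest-SSIM points, and the parabola peak is taken as the reference zero-point. Names are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def reference_zero_point(stack, edf, z_positions):
    """stack: (Nz, H, W) image stack; edf: (H, W) EDF image; z_positions: (Nz,) depths in um."""
    ssim = np.array([structural_similarity(plane, edf,
                                           data_range=edf.max() - edf.min())
                     for plane in stack])                       # focus curve
    top = np.argsort(ssim)[-4:]                                 # four highest-SSIM planes
    a, b, c = np.polyfit(z_positions[top], ssim[top], 2)        # second-order fit
    return -b / (2 * a)                                         # peak of the parabola
```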

[00173] Code availability

[00174] Deep learning models reported in this work used standard libraries and scripts that are publicly available in TensorFlow. Through a custom-written Fiji-based plugin, trained network models (together with some sample test images) were provided for the following objective lenses: Leica HC PL APO 20x/0.80 DRY (two different network models, trained on the TxRd and FITC channels), Leica HC PL APO 40x/1.30 OIL (trained on the TxRd channel), and Olympus UPLSAPO20X 0.75 NA (trained on the TxRd channel). This custom-written plugin and the models are publicly available through the following links: http://bit.ly/deep-z-git and http://bit.ly/deep-z, all of which are incorporated by reference herein.

[00175] Image acquisition and data processing for lower image exposure analysis.

[00176] Training image data were captured using 300 nm red fluorescent bead samples imaged with a 20x/0.75NA objective lens, the same as the micro-bead samples reported herein, except that the fluorescence excitation light source was set at 25% power (32.5 mW) and the exposure times were chosen as 10 ms and 100 ms, respectively. Two separate Deep-Z networks 10 were trained using the image datasets captured at 10 ms and 100 ms exposure times, where each training image set contained ~100,000 image pairs (input and ground truth), and each network was trained for ~50 epochs.

[00177] Testing image data were captured under the same settings, except that the exposure times varied from 3 ms to 300 ms. The training and testing images were normalized using the same pre-processing algorithm: after image alignment, the input image was first thresholded using a triangular thresholding method to separate the sample foreground and background pixels. The mean of the background pixel values was taken as the background fluorescence level and subtracted from the entire image. The images were then normalized such that 1% of the foreground pixels were saturated (above one). This pre-processing step did not further clip or quantize the image. These pre-processed images (in single-precision format) were fed into the network directly for training or blind testing.
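A hedged sketch of the pre-processing described above: triangular thresholding separates foreground from background, the mean background level is subtracted, and the image is scaled so that roughly 1% of the foreground pixels exceed 1 (no clipping or re-quantization is applied). The function name is illustrative.

```python
import numpy as np
from skimage.filters import threshold_triangle

def preprocess(image):
    image = image.astype(np.float32)
    thr = threshold_triangle(image)                  # triangular thresholding
    foreground = image > thr
    image = image - image[~foreground].mean()        # remove background fluorescence level
    scale = np.percentile(image[foreground], 99.0)   # ~1% of foreground pixels saturate
    return image / scale
```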

[00178] Time-modulated signal reconstruction using Deep-Z

[00179] Training data were captured for 300 nm red fluorescent beads using a 20x/0.75NA objective lens with the Texas Red filter set, the same as the microbead samples reported earlier (e.g., FIG. 5), except that the fluorescence light source was set at 25% illumination power (32.5 mW) and the exposure time was chosen as 100 ms.

[00180] Testing data consisted of images of 300 nm red fluorescent beads placed on a single 2D plane (pipetted onto a #1 coverslip), captured using an external light emitting diode (M530L3-C1, Thorlabs) driven by an LED controller (LEDD1B, Thorlabs) modulated by a function generator (SDG2042X, Siglent), which modulated the output current of the LED controller between 0 and 1.2 A following a sinusoidal pattern with a period of 1 s. A Texas Red filter and a 100 ms exposure time were used. The same FOV was captured at the in-focus plane (z = 0 μm) and five defocus planes (z = 2, 4, 6, 8, 10 μm). At each plane, a two-second video (i.e., two periods of the modulation) was captured at 20 frames per second. Each frame of the defocused planes was then virtually refocused using the trained Deep-Z network 10 to digitally reach the focal plane (z = 0 μm). Fluorescence intensity changes of 297 individual beads within the sample FOV captured at z = 0 μm were tracked over the two-second time window. The same 297 beads were also tracked as a function of time using the five virtually refocused time-lapse sequences (using the Deep-Z output). The intensity curve for each bead was normalized between 0 and 1. The mean and standard deviation corresponding to these 297 normalized curves were plotted in FIGS. 19A-19B.

[00181] Neuron segmentation analysis

[00182] Neuron locations in FIGS. 20A, 20D, 20G were compared by first matching pairs of neurons from two different methods (e.g., Deep-Z vs. mechanically-scanned ground truth).

Matching two groups of segmented neurons (Ω_1, Ω_2), represented by their spatial coordinates, was considered as a bipartite-graph minimal-cost matching problem, i.e.:

[00183] $\underset{x_e}{\arg\min}\ \sum_{e} c_e \cdot x_e$

[00186] $x_e \in \{0, 1\}$

[00187] where x_e = 1 represents that the edge e between the two groups of neurons (Ω_1, Ω_2) is included in the match. The cost on edge e = (ω_1, ω_2) is defined based on the Manhattan distance |x_1 - x_2| + |y_1 - y_2| + |z_1 - z_2|. Because the problem satisfies the totally unimodular condition, the above integer constraint x_e ∈ {0, 1} can be relaxed to the linear constraint x_e ≥ 0 without changing the optimal solution, and the problem was solved by linear programming using the Matlab function linprog. Then the distances between each pair of matched neurons were calculated and their distributions were plotted.
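A hedged sketch of this neuron-matching step. The text solves the relaxed matching problem with Matlab's linprog; scipy's linear_sum_assignment is used here instead as an equivalent minimal-cost bipartite matching on a Manhattan-distance cost matrix (it pairs min(n1, n2) neurons when the two groups differ in size). Names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_neurons(coords1, coords2):
    """coords1: (n1, 3) and coords2: (n2, 3) neuron coordinates (x, y, z)."""
    cost = np.abs(coords1[:, None, :] - coords2[None, :, :]).sum(axis=-1)  # Manhattan distance
    row, col = linear_sum_assignment(cost)       # minimal-cost bipartite matching
    distances = cost[row, col]                   # distance between each matched pair
    return list(zip(row, col)), distances
```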

[00188] Deep-Z virtual refocusing capability at lower image exposure

[00189] To further validate the generalization performance of a pre-trained Deep-Z network model under variable exposure conditions (which directly affect the signal-to-noise ratio, SNR), two Deep-Z networks 10 were trained using microbead images captured at 10 ms and 100 ms exposure times; these trained networks are denoted as Deep-Z (10 ms) and Deep-Z (100 ms), respectively, and their performance was blindly tested by virtually refocusing defocused images captured under different exposure times, varying between 3 ms and 300 ms. Examples of these blind testing results are shown in FIG. 16A, where the input bead images were defocused by -5.0, 3.0, and 4.5 μm. With lower exposure times, the input image quality was compromised by noise and image quantization error due to the lower bit depth. As shown in FIG. 16A, the Deep-Z (100 ms) model can successfully refocus the input images even down to an exposure time of 10 ms. However, the Deep-Z (100 ms) model fails to virtually refocus the input images acquired at a 3 ms exposure time, giving a blurry output image with background noise. On the other hand, the Deep-Z (10 ms) model can successfully refocus input images that were captured at 3 ms exposure times, as illustrated in FIGS. 16A-16C. Interestingly, the Deep-Z (10 ms) model performs slightly worse for input images that were acquired at higher exposure times. For example, the input images acquired at a 300 ms exposure time exhibit a slight blur at the output image, as demonstrated in the last row of FIG. 16A. These observations are further confirmed in FIGS. 16B, 16C by quantifying the median FWHM values of the imaged microbeads, calculated at the Deep-Z output images as a function of the refocusing distance. This analysis confirms that the Deep-Z (100 ms) model cannot successfully refocus the images captured at a 3 ms exposure time outside of a narrow defocus window of ~[-1 μm, 1 μm] (see FIG. 16B). On the other hand, the Deep-Z (10 ms) model demonstrates improved refocusing performance for the input images captured at a 3 ms exposure time (FIG. 16C). These results indicate that training a Deep-Z model with images acquired at exposure times that are relatively close to the expected exposure times of the test images is important for successful inference. Another important observation is that, compared to the ground truth images, the Deep-Z output images 40 also reject the background noise, since noise overall does not generalize well during the training phase of the neural network, as also discussed for FIG. 7.

[00190] Also, the noise performance of Deep-Z can potentially be further enhanced by engineering the microscope's point spread function (PSF) to span an extended depth-of-field, e.g., by inserting a phase mask in the Fourier plane of the microscope, ideally without introducing additional photon losses along the path of the fluorescence signal collection. For example, phase and/or amplitude masks may be located along the optical path (axial direction) of the microscope 110. A double-helix PSF is one exemplary engineered PSF. In addition, the fluorescence microscope 110 may include a wide-field fluorescence microscope 110. The microscope 110 may also include a light sheet system.

[00191] Robustness of Deep-Z to changes in samples and imaging systems

[00192] In the results so far, the blindly tested samples 12 were inferred with a Deep-Z network 10 that had been trained using the same type of sample 12 and the same microscope system 110. Here, the performance of Deep-Z is discussed for different scenarios in which a change in the test data distribution is introduced relative to the training image set, e.g., (1) a different type of sample 12 that is imaged, (2) a different microscope system 110 used for imaging, and (3) a different illumination power or SNR.

[00193] Regarding the first item, if there is a high level of similarity between the trained sample type 12 and the tested sample type 12 distributions, the performance of the network output is expected to be comparable. As reported in FIGS. 17A, 17B, a Deep-Z network 10 that was trained to virtually refocus images of tagRFP-labeled C. elegans neuron nuclei was blindly tested to virtually refocus images of GFP-labeled C. elegans neuron activity. The output image results in the different-model column are quite similar to the output images of the optimal model, trained specifically on GFP-labeled neuron activity images (same-model column), as well as to the mechanically-scanned ground truth (GT) images, with a minor difference in the correlation coefficients of the two sets of output images with respect to the ground truth images of the same samples. Similar conclusions may be drawn for the effectiveness of a Deep-Z model blindly tested on images of a different strain of C. elegans.

[00194] On the other hand, when the training sample type and its optical features are considerably different from the testing samples, noticeable differences in Deep-Z performance can be observed. For instance, as shown in FIG. 17B, a Deep-Z network 10 that was trained with 300 nm beads can only partially refocus the images of C. elegans neuron nuclei, which are typically 1-5 μm in size and therefore are not well-represented by a training image dataset containing only nanobeads. This limitation can be remedied through a transfer learning process, where the network 10 trained on one type of sample (e.g., the nanobeads in this example) is used as an initialization of the network weights, and the Deep-Z network 10 is further trained using new images that contain neuron nuclei. Compared to starting from scratch (e.g., randomized initialization), which takes ~40,000 iterations (~60 hours) to reach an optimal model, transfer learning can help achieve an optimal model with only ~4,000 iterations (~6 hours) that successfully refocuses neuron nuclei images, matching the performance of the optimal model (transfer learning column in FIGS. 17A, 17B). This transfer learning approach can also be applied to image different types of C. elegans using earlier models that are refined with new image data in, e.g., ~500-1,000 iterations. Another advantage of transfer learning is that it uses less training data; in this case, for example, only 20% of the original training data used for the optimal model was used for transfer learning.
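A minimal sketch of the transfer learning strategy described above (not the released training code): a generator pre-trained on one sample type is used to initialize the weights and is then briefly re-trained on a smaller dataset of the new sample type. The function names, file path, and dataset object are illustrative placeholders.

```python
import tensorflow as tf

def transfer_learn(pretrained_weights_path, new_dataset, build_generator, train_step,
                   iterations=4000):
    """new_dataset: a tf.data.Dataset yielding (input, target) batches of the new sample type.
    train_step: the same per-batch update used for from-scratch training."""
    G = build_generator()
    G.load_weights(pretrained_weights_path)          # initialize from the pre-trained model
    for x, z_target in new_dataset.repeat().take(iterations):
        train_step(G, x, z_target)                   # brief re-training (~4,000 iterations)
    return G
```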

[00195] Regarding the second item, a potential change in the microscope system 110 used for imaging can also adversely affect the inference performance of a previously trained network model. One of the more challenging scenarios for a pre-trained Deep-Z network is when the test images are captured using a different objective lens with a change in the numerical aperture (NA); this directly modifies the 3D PSF profile, making it deviate from the features learned by Deep-Z, especially along the depth direction. Similar to the changes in the sample type, if the differences in imaging system parameters are small, it is expected that a previously trained Deep-Z network 10 can be used to virtually refocus images captured by a different microscope to some extent. FIG. 18 shows an example of this scenario, where a Deep-Z network 10 was trained using images of C. elegans neuron nuclei captured using an Olympus IX81 microscope with a 20x/0.75NA objective lens, and was blindly tested on images captured using a Leica SP8 microscope with a 20x/0.8NA objective lens. Stated differently, two different microscopes, manufactured by two different companies, were used, together with a small NA change between the training and testing phases. As illustrated in FIG. 18, most of the virtual refocusing results remained successful, in comparison to the optimal model. However, due to these changes in the imaging parameters, a couple of mis-arrangements of the neurons in the virtually refocused images can be seen in the different-model output column, which also resulted in a small difference of ~0.02-0.06 between the correlation coefficients of the optimal Deep-Z network output and the different-model output (both calculated with respect to the corresponding ground truth images acquired using the two different microscope systems). As discussed previously, one can also use transfer learning to further improve these results by taking the initial Deep-Z model trained on the Olympus IX81 microscope (20x/0.75NA objective) as the initialization and further training it for another ~2,000 iterations on a new image dataset captured using the Leica SP8 microscope (20x/0.8NA objective). Similar to the example presented earlier, 20% of the original training data used for the optimal model was used for transfer learning in FIG. 18.

[00196] As for the third item, the illumination power, together with the exposure time and the efficiency of the fluorophore, contributes to two major factors: the dynamic range and the SNR of the input images. Since a pre-processing step was used to remove the background fluorescence, also involving a normalization step based on a triangular threshold, the input images will always be re-normalized to similar signal ranges, and therefore illumination-power-associated dynamic range changes do not pose a major challenge for the Deep-Z network 10. Furthermore, as detailed earlier, robust virtual refocusing can still be achieved under significantly lower SNR, i.e., with input images acquired at much lower exposure times (see FIGS. 16A-16C). These results and the corresponding analysis reveal that the Deep-Z network 10 is fairly robust to changes observed in the dynamic range and the SNR of the input images. Having emphasized this, training a Deep-Z network 10 with images acquired at exposure times that are relatively similar to the expected exposure times of the test images would be recommended for various uses of the Deep-Z network 10. In fact, the same conclusion applies in general: to achieve the best performance with Deep-Z network 10 inference results, the neural network 10 should be trained (from scratch or through transfer learning, which significantly expedites the training process) using training images obtained with the same microscope system 110 and the same types of samples 12 as expected to be used at the testing phase.

[00197] Time-modulated signal reconstruction using Deep-Z

[00198] To further test the generalization capability of the Deep-Z network 10, an experiment was conducted in which the microbead fluorescence was modulated in time, induced by an external time-varying excitation. FIG. 19A reports the time-modulated signal of 297 individual microbeads at the focal plane (z = 0 μm), tracked over a 2 s period at a frame rate of 20 frames per second and plotted with their normalized mean and standard deviation. This curve shows a similar modulation pattern as the input excitation light, with a slight deviation from a perfect sinusoidal curve due to the nonlinear response of the fluorescence. The standard deviation was ~1.0% of the mean signal at each point. Testing the blind inference of the Deep-Z network 10, the subsequent entries of FIG. 19A report the same quantities corresponding to the same field-of-view (FOV), but captured at defocused planes (z = 2, 4, 6, 8, 10 μm) and virtually refocused to the focal plane (z = 0 μm) using a Deep-Z network 10 trained with images captured under fixed signal strength. The mean curves calculated using the virtually-refocused images (z = 2, 4, 6, 8, 10 μm) match very well with the in-focus one (z = 0 μm), whereas the standard deviation increased slightly with increased virtual refocusing distance: ~1.0%, 1.1%, 1.7%, 1.9%, and 2.1% of the mean signal for virtual refocusing distances of z = 2, 4, 6, 8, and 10 μm, respectively.

[00199] Based on this acquired sequence of images, every other frame was taken to form a new video; by doing so, the down-sampled video compressed the original 2 s video to 1 s, forming a group of beads that were modulated at a doubled frequency, i.e., 2 Hz. This down-sampled video was repeated and added back onto the original video, frame-by-frame, with a lateral shift of 8 pixels (2.6 μm). FIG. 19B shows the Deep-Z network 10 output on these added images, corresponding to 297 pairs of beads that had the original modulation frequency of 1 Hz (first row) and the doubled modulation frequency of 2 Hz (second row), masked separately in the same output image sequence. This analysis demonstrates that the Deep-Z output tracks the sinusoidal illumination well, closely following the in-focus reference time-modulation reported in the first column, the same as in FIG. 19A. A video was also created to illustrate an example region of interest containing six pairs of these 1 Hz and 2 Hz emitters, cropped from the input and output FOVs for different defocus planes.

[00200] C. elegans neuron segmentation comparison

[00201] To illustrate that the Deep-Z network 10 indeed helps to segment more neurons by virtual refocusing over an extended depth of field, the same segmentation algorithm was applied to an input 2D image, as seen in FIG. 20A, where the segmentation algorithm found 99 neurons, without any depth information (see FIG. 20B). In comparison, the Deep-Z output image stack (calculated from a single input image) enabled the detection of 155 neurons (see FIG. 20C and FIG. 4B), also predicting the depth location of each neuron (color coded). Note that this sample did not have a corresponding 3D image stack acquired by a scanning microscope, because in this case a 2D video was used to track the neuron activity.

[00202] To better illustrate a comparison to the ground truth 3D image stack captured using axial mechanical scanning, the segmentation results for another C. elegans are also shown (FIGS. 20D-20I), calculated using the same algorithm from the 2D input image, the corresponding Deep-Z virtual image stack, and the mechanically-scanned ground truth image stack (acquired at 41 depths with 0.5 μm axial spacing). Compared to the segmentation results obtained from the input image (FIG. 20E), the segmentation results obtained using the Deep-Z generated virtual image stack (FIG. 20F) detected an additional set of 33 neurons, also predicting the correct 3D positions of 128 neurons in total. Compared to the ground truth mechanically-scanned 3D image stack (FIG. 20I), the segmentation algorithm recognized 18 fewer neurons for the Deep-Z generated virtual stack, which were mostly located within the head of the worm, where the neurons are much denser and relatively more challenging to recover and segment. In sparser regions of the worm, such as the body and the tail, the neurons were mostly correctly segmented, matching the results obtained using the mechanically-scanned 3D image stack (composed of 41 axial scans). The depth locations of the segmented neurons (color-coded) also matched well with the corresponding depths measured using the ground truth mechanically-scanned 3D image stack.

[00203] To improve the performance of Deep-Z network-based neuron segmentation in denser regions of the sample (such as the head of a worm), more than one input image could be acquired to enhance the degrees of freedom, where the virtually refocused image stack of each Deep-Z input image can be merged with the others, helping to recover some of the lost neurons within a dense region of interest. Compared to a mechanically-scanned 3D image stack, this would still be significantly faster, requiring fewer images to be acquired for imaging the specimen's volume. For instance, in FIG. 20H, segmentation results are presented by merging two virtual image stacks created by Deep-Z, both spanning -10 μm to 10 μm but generated from two different input images acquired at z = 0 μm and at z = 4 μm, respectively.

[00204] The merging was performed by taking the maximum pixel value of the two image stacks. The segmentation algorithm in this case identified N = 148 neurons (improved from N = 128 in FIG. 20F), and the results match better to the ground truth axial scanning results (N = 146 in FIG. 20I). To shed more light on this comparison, another segmentation algorithm was used on exactly the same image dataset: a DoG segmentation method, named TrackMate, resulted in 146 neurons for the Deep-Z network 10 output, 177 neurons in the target image stack (mechanically scanned), and 179 in the Deep-Z merged stack (only 2 axial planes used as input images), revealing a close match between the Deep-Z results and the results obtained with a mechanically scanned image stack. This comparison between two different neuron segmentation algorithms also shows some inconsistency in the neuron segmentation itself (meaning that there might not be a single ground truth method). It should be noted here that these results should be considered as proof-of-concept studies on the potential applications of the Deep-Z network 10 for neuron imaging. Deep-Z can potentially be used as a front-end module to jointly optimize future deep learning-based neuron segmentation algorithms that can make the most use of the Deep-Z network 10 and its output images 40 to reduce the number of required image planes to accurately and efficiently track the neural activity of worms or other model organisms. Note also that the segmentation results in this case used a 20x/0.8NA objective lens. The presented approach might perform better on the head region of the worm if a higher-NA objective were used. However, even using a mechanically-scanned image stack with a higher-NA objective and state-of-the-art neuron segmentation algorithms, not all the neurons in the body of a worm can be accurately identified in each experiment.

[00205] Impact of the sample density on Deep-Z inference

[00206] If the fluorescence emitters are too close to each other, or if the intensity of one feature is much weaker than the other(s) within a certain FOV, the intensity distribution of the virtually refocused Deep-Z images 40 may deviate from the ground truth (GT). To shed more light on this, numerical simulations derived from experimental data were used, where (1) a planar fluorescence image that contained individual 300 nm fluorescent beads was laterally shifted, (2) this shifted image intensity was attenuated with respect to the original intensity by a ratio (0.2 to 1.0), and (3) this attenuated and shifted feature was added back onto the original image (see FIGS. 21A-21B for an illustration). Based on a spatially-invariant incoherent PSF, this numerical simulation, derived from experimental data, represents an imaging scenario in which there are two individual sets of fluorescent objects that have different signal strengths with respect to each other, with a varying distance between them. The resulting images, with different defocus distances (see FIG. 21B), were virtually refocused to the correct focal plane by a Deep-Z network that was trained using planar bead samples. FIGS. 21B-21H demonstrate various examples of bead pairs that were laterally separated by, e.g., 1-15 pixels and axially defocused by 0-10 μm, with an intensity ratio that spans 0.2-1.0.
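A minimal sketch of the numerical simulation described above: a planar bead image is laterally shifted, attenuated by a chosen intensity ratio, and added back onto the original image to emulate two sets of emitters with different signal strengths and a varying lateral separation. Names are illustrative; np.roll wraps at the borders, which a more careful simulation would handle by padding.

```python
import numpy as np

def simulate_bead_pairs(image, shift_pixels, intensity_ratio):
    """image: (H, W) planar fluorescent bead image; shift_pixels: lateral shift d;
    intensity_ratio: attenuation ratio in [0.2, 1.0]."""
    shifted = np.roll(image, shift=shift_pixels, axis=1)    # (1) lateral shift
    shifted = shifted * intensity_ratio                      # (2) intensity attenuation
    return image + shifted                                   # (3) add back onto the original
```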

[00207] To quantify the performance of Deep-Z inference for these different input images, FIGS. 21C-21H plot the average intensity ratio of 144 pairs of dimmer and brighter beads at the virtually refocused plane as a function of the lateral shift (d) and the intensity ratio between the dimmer and the brighter beads, also covering various defocus distances up to 10 μm; in each panel of this figure, the minimal resolvable distance between the two beads is marked by a cross symbol "x". FIGS. 21C-21H reveal that larger defocus distances and smaller intensity ratios require a slightly larger lateral shift for the bead pairs to be accurately resolved.

[00208] Next, the impact of occlusions in the axial direction was examined, which can be more challenging to resolve. For this, new numerical simulations were created, also derived from experimental data, where this time a planar fluorescent bead image stack was axially shifted and added back to the corresponding original image stack with different intensity ratios (see FIG. 22B for an illustration). To accurately represent the inference task, the deep network 10 was trained via transfer learning with an augmented dataset containing axially-overlapping objects. FIG. 22A demonstrates the Deep-Z results for a pair of beads located at z = 0 μm and z = 8 μm, respectively. The network 10 was able to successfully refocus these two beads separately, inferring two intensity maxima along the z-axis at z = 0 μm and z = 8 μm, matching the simulated mechanically-scanned image stack (ground truth) very well. FIGS. 22C, 22D plot the average intensity ratio of the top (i.e., the dimmer) bead and the lower bead (i.e., the bead in the original stack) for 144 individual bead pairs inside a sample FOV, corresponding to z = 8 μm with different axial separations (d, see FIG. 22B), for both the virtually refocused Deep-Z image stack and the simulated ground truth image stack, respectively. The results in FIGS. 22C, 22D are similar, having rather small discrepancies in the exact intensity ratio values. The results might be further improved by potentially using a 3D convolutional neural network architecture.

[00209] To further understand the impact of the axial refocusing distance and the density of the fluorescent sample on Deep-Z 3D network inference, additional imaging experiments were performed on 3D bead samples with different particle densities, which were adjusted by mixing 2.5 μL of red fluorescent bead (300 nm) solution at various concentrations with 10 μL of ProLong Gold antifade mountant (P10144, ThermoFisher) on a glass slide. After covering the sample with a thin coverslip, the sample naturally formed a 3D sample volume, with 300 nm fluorescent beads spanning an axial range of ~20-30 μm. Different samples, corresponding to different bead densities, were axially scanned with a 20x/0.75NA objective lens using the Texas Red channel. To obtain the optimal performance, a Deep-Z network was trained with transfer learning (initialized with the original bead network) using 6 image stacks (2048 × 2048 pixels) captured from one of the samples. Another 54 non-overlapping image stacks (1536 × 1536 pixels) were used for blind testing; within each image stack, 41 axial planes spanning ±10 μm with a 0.5 μm step size were used as ground truth (mechanically-scanned), and the middle plane (z = 0 μm) was used as the input image 20 to Deep-Z, which generated the virtually refocused output stack of images 40, spanning the same depth range as the ground truth (GT) images. Thresholding was applied to the ground truth and Deep-Z output image stacks, where each connected region after thresholding represents a 300 nm bead. FIG. 23A illustrates the input images 20 and the maximal intensity projection (MIP) of the ground truth image stack (GT) as well as the Deep-Z network output image 40 stack corresponding to some of the non-overlapping sample regions used for blind testing. At lower particle concentrations (below 0.5 × 10^6 μL^-1), the Deep-Z output image 40 stack results match very well with the mechanically-scanned ground truth (GT) results over the training range of ±10 μm axial defocus. With larger particle concentrations, the Deep-Z network output gradually loses its capability to refocus and retrieve all the individual beads, resulting in under-counting of the fluorescent beads.

[00210] In fact, this refocusing capability of the Deep-Z network 10 depends not only on the concentration of the fluorescent objects, but also on the refocusing axial distance. To quantify this, FIGS. 23B-23E plot the fluorescent particle density measured using the mechanically-scanned ground truth image stack as well as the Deep-Z virtually refocused image 40 stack as a function of the axial defocus distance, i.e., ±2.5 μm, ±5 μm, ±7.5 μm, and ±10 μm from the input plane (z = 0 μm), respectively. For example, for a virtual refocusing range of ±2.5 μm, the Deep-Z output image 40 stack (using a single input image at z = 0 μm) closely matches the ground truth (GT) results even for the highest tested sample density (~4 × 10^6 μL^-1); on the other hand, at larger virtual refocusing distances Deep-Z suffers from some under-counting of the fluorescent beads (see, e.g., FIGS. 23C-23E). This is also consistent with the analysis reported earlier (e.g., FIGS. 21A, 21B, 22A-22D), where the increased density of the beads in the sample results in axial occlusions and partially affects the virtual refocusing fidelity of Deep-Z.

[00211] In the examples presented herein, the training image data did not include the strong variations in the signal intensities of the particles or the axial occlusions that existed in the testing data, which is a disadvantage for the Deep-Z network 10. However, a Deep-Z network 10 that is trained with the correct type of samples 12 (matching the test sample 12 type and its 3D structure) will have an easier task in its blind inference and virtual refocusing performance, since the training images will naturally contain the relevant 3D structures, better representing the feature distribution expected in the test samples.

[00212] Reduced photodamage using Deep-Z

[00213] Another advantage of the Deep-Z network 10 would be a reduction in photodamage to the sample 12. Photodamage introduces a challenging tradeoff in applications of fluorescence microscopy in live-cell imaging, which sets a practical limitation on the number of images that can be acquired during, e.g., a longitudinal experiment. The specific nature of photodamage, in the form of photobleaching and/or phototoxicity, depends on the illumination wavelength, beam profile, and exposure time, among many other factors, such as the sample pH and oxygen levels, temperature, fluorophore density, and photostability. Several strategies for illumination design have been demonstrated to reduce the effects of photodamage, e.g., by adapting the illumination intensity delivered to the specimen, as in controlled light exposure microscopy (CLEM) and predictive focus illumination, or by decoupling the excitation and emission paths, as in selective plane illumination microscopy, among others.

[00214] For a wide-field fluorescence microscopy experiment in which an axial image stack is acquired, the illumination excites the fluorophores through the entire thickness of the specimen 12, regardless of the position that is imaged at the objective's focal plane. For example, if one assumes that the sample thickness is relatively small compared to the focal volume of the excitation beam, the entire sample volume is uniformly excited at each axial image acquisition step. This means the total light exposure of a given point within the sample volume is sub-linearly proportional to the number of imaging planes (N_z) that are acquired during a single-pass z-stack. In contrast, the Deep-Z system 2 only requires a single image acquisition step if the axial training range covers the sample depth; in case the sample is thicker or denser, more than one input image might be required for improved Deep-Z inference, as demonstrated in FIG. 20H, which, in this case, used two input images to better resolve neuron nuclei in the head region of a C. elegans. Therefore, this reduction, enabled by Deep-Z, in the number of axial planes that need to be imaged within a sample volume directly helps to reduce the photodamage to the sample.

[00215] To further illustrate this advantage, an additional experiment was performed in which a sample containing fluorescent beads (300 nm diameter, embedded in ProLong Gold antifade mountant) was repeatedly imaged in 3D with N_z = 41 axial planes spanning a 20 μm depth range (0.5 μm step size) over 180 repeated cycles, which took a total of ~30 min. The average fluorescence signal of the nanobeads decayed to ~80% of its original value at the end of the imaging cycle (see FIG. 24A). In comparison, to generate a similar virtual image stack, the Deep-Z system 2 only requires a single input image 20 per cycle, which results in a total imaging time of ~15 seconds for 180 repeated cycles, and the average fluorescence signal in the Deep-Z generated virtual image stack does not show a visible decay during the same number of imaging cycles (see FIG. 24B). For imaging of live samples, potentially without a dedicated antifade mountant, the fluorescence signal decay would be more drastic compared to FIG. 24A due to photodamage and photobleaching, and Deep-Z can be used to significantly reduce these negative effects, especially during longitudinal imaging experiments.

[00216] The application of the Deep-Z network 10 to light sheet microscopy can also be used to reduce the number of imaging planes within the sample 12, by increasing the axial separation between two successive light sheets and using Deep-Z 3D inference in between. In general, a reduction in N_z further helps to reduce photodamage effects if one also takes into account the hardware-software synchronization times that are required during the axial scan, which introduce additional time overhead if, e.g., an arc burner is used as the illumination source; this illumination overhead can be mostly eliminated when using LEDs for illumination, which have much faster on-off transition times. The Deep-Z system 2 can substantially circumvent the standard photodamage tradeoffs in fluorescence microscopy and enable imaging at higher speeds and/or improved SNR, since the illumination intensity can be increased for a given photodamage threshold, offset by the reduced number of axial images that are acquired through the use of Deep-Z. The following reference (and its Supplementary Information) is incorporated by reference herein: Wu, Y. et al., Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning, Nat Methods 16, 1323-1331 (2019), doi:10.1038/s41592-019-0622-5.

[00217] While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. The invention, therefore, should not be limited, except to the following claims, and their equivalents.