

Title:
OPTICAL IMAGING PERFORMANCE TEST SYSTEM AND METHOD
Document Type and Number:
WIPO Patent Application WO/2021/026515
Kind Code:
A9
Abstract:
For testing the imaging performance of an optical system, a test target is positioned at an object plane of the optical system, and illuminated to generate an image beam. One or more images of the test target are acquired from the image beam. From the imaging data acquired, Edge Spread Functions at a plurality of locations within the test target are calculated. A model of Point Spread Functions is constructed from the Edge Spread Functions. Based on the Point Spread Functions, a plurality of imaging performance values corresponding to the plurality of locations are calculated. The imaging performance values are based on Ensquared Energy, Encircled Energy, or Strehl ratio.

Inventors:
RIDGEWAY WILLIAM K (US)
DYKE ALAN (US)
CHEN PENGYUAN (US)
KURZAVA DI RICCO REBECCA (US)
Application Number:
PCT/US2020/045525
Publication Date:
March 18, 2021
Filing Date:
August 07, 2020
Assignee:
AGILENT TECHNOLOGIES INC (US)
International Classes:
G02B27/62; G01M11/02
Attorney, Agent or Firm:
GLOEKLER, David P. (US)
Claims:
CLAIMS

What is claimed is:

1. A method for testing imaging performance of an optical system, the method comprising: positioning a test target at an object plane of the optical system; operating the optical system to illuminate the test target and generate an image beam; operating a focusing stage of the optical system to acquire a plurality of images of the test target from the image beam corresponding to a plurality of values of defocus; calculating from each image a plurality of Edge Spread Functions at a plurality of locations within the test target; constructing a plurality of Point Spread Function models from the respective Edge Spread Functions; and based on the Point Spread Function models, calculating a plurality of imaging performance values corresponding to the plurality of locations, wherein the imaging performance values are based on a metric selected from the group consisting of: Ensquared Energy; Encircled Energy; and Strehl ratio.

2. The method of claim 1, comprising at least one of:

(a) wherein the plurality of locations comprises a plurality of field coordinates (x, y) in the object plane within the test target;

(b) wherein the plurality of locations comprises a plurality of focal positions (z) along an optical axis passing through the test target, and operating the optical system comprises acquiring a plurality of images of the test target at different focal positions (z).

3. The method of claim 1, comprising producing one or more maps of imaging performance based on a combination of the imaging performance values.

4. The method of claim 3, comprising comparing two or more of the maps to provide a measure of relative alignment of a focal plane of each imaging device or channel relative to the object plane.

5. The method of claim 3, wherein the one or more maps correspond to different imaging channels, and the different imaging channels correspond to different imaging devices of the optical system operated to acquire the image, or different wavelengths of the image acquired, or both different imaging devices and different wavelengths.

6. The method of claim 5, comprising comparing two or more of the maps to provide a measure of relative alignment of each imaging device or channel relative to each other.

7. The method of claim 3, comprising, after producing the one or more maps, adjusting a position of one or more optical components of the optical system, or replacing the one or more optical components, based on information provided by the one or more maps.

8. The method of claim 7, wherein the one or more optical components are selected from the group consisting of: one or more of the imaging devices; an objective of the optical system; one or more tube lenses of the optical system; one or more mirrors or dichroic mirrors; and a combination of two or more of the foregoing.

9. The method of claim 7, wherein the one or more maps are one or more initial maps, and further comprising, after adjusting or replacing the one or more optical components, acquiring a new image of the test target, calculating a plurality of new imaging performance values, and producing one or more new maps of imaging performance.

10. The method of claim 9, comprising comparing the one or more new maps to the one or more initial maps to determine position adjustments to be made to the one or more optical components for optimizing imaging performance.

11. The method of claim 7, comprising at least one of:

(a) wherein the adjusting or replacing finds an optimum pair of conjugate image planes in the optical system;

(b) wherein the adjusting or replacing improves an attribute selected from the group consisting of: focus matching of the imaging devices; imaging device tilt; flattening of field curvature; reduction of astigmatism; reduction of wavelength-dependent focus shift; and a combination of two or more of the foregoing.

12. The method of claim 1, comprising calculating one or more global scores of imaging performance based on a combination of the imaging performance values.

13. The method of claim 12, comprising at least one of:

(a) modifying the one or more global scores to penalize or reward heterogeneity of the imaging performance values over a range of field coordinates (x, y) in the object plane within the test target, or through a range of focal positions (z) of the object plane, or both of the foregoing;

(b) modifying the one or more global scores to penalize or reward similarity of different imaging channels as a function of field coordinates (x, y) in the object plane within the test target, or focal position (z) of the object plane, or both of the foregoing, wherein the different imaging channels correspond to different imaging devices of the optical system operated to acquire the image, or different wavelengths of the image acquired, or both different imaging devices and different wavelengths.

14. The method of claim 1, comprising, after calculating the imaging performance values, adjusting a position of one or more optical components of the optical system based on information provided by the imaging performance values.

15. The method of claim 14, wherein the imaging performance values are initial imaging performance values, and further comprising, after adjusting the one or more optical components, acquiring a new image of the test target, and calculating a plurality of new imaging performance values.

16. The method of claim 15, comprising comparing the new imaging performance values to the initial imaging performance values to determine position adjustments to be made to the one or more optical components for optimizing imaging performance.

17. The method of claim 1, comprising at least one of:

(a) wherein positioning the test target comprises aligning the target relative to a datum shared with one or more optical components of the optical system;

(b) wherein operating the optical system comprises utilizing an objective in the image beam, and further comprising adjusting a position of the objective along an axis of the image beam to acquire a plurality of images of the test target at different focal positions (z);

(c) wherein operating the optical system comprises utilizing an objective in the image beam, and further comprising adjusting a position of the objective along an axis of the image beam to acquire a plurality of images of the test target at different focal positions (z), wherein the objective has a configuration selected from the group consisting of: the objective is configured for infinite conjugate microscopy; and the objective is configured for finite conjugate microscopy;

(d) wherein operating the optical system comprises operating two or more imaging devices to acquire respective images of the test target;

(e) wherein operating the optical system comprises operating two or more imaging devices to acquire respective images of the test target, wherein the two or more imaging devices acquire the respective images at two or more different wavelengths;

(f) wherein operating the optical system comprises operating two or more imaging devices to acquire respective images of the test target, wherein the two or more imaging devices acquire the respective images at two or more different wavelengths, and further comprising splitting an image beam propagating from the test target into two or more image beam portions, and transmitting the two or more image beam portions to the two or more imaging devices, respectively;

(g) wherein operating the optical system comprises operating a filter assembly to filter the image beam at a selected wavelength;

(h) wherein operating the optical system comprises utilizing a tube lens in the image beam, and further comprising adjusting the relative position of one or more lenses or lens groups within the tube lens to acquire a plurality of images of the test target at different positions of the tube lens;

(i) wherein the test target comprises a dark material and an array of bright features disposed on the dark material;

(j) wherein the test target comprises a dark material and an array of bright features disposed on the dark material, and wherein the bright features are polygonal;

(k) wherein the test target comprises a dark material and an array of bright features disposed on the dark material, wherein the bright features are polygonal, and the bright features are tilted such that edges of the bright features are oriented at angles to a pixel array of the optical imaging system that acquires the image.

18. An optical imaging performance testing system, comprising: a target holder configured to hold a test target; a light source configured to illuminate the test target; an imaging device configured to acquire images of the test target; an objective positioned in an imaging light path between the test target and the imaging device, wherein a position of at least one of the objective or the target holder is adjustable along the imaging light path; and a controller comprising an electronic processor and a memory, and configured to control the steps of the method of claim 1 of calculating the plurality of Edge Spread Functions, constructing the plurality of Point Spread Function models, and calculating the plurality of imaging performance values.

19. The system of claim 18, comprising at least one of:

(a) wherein the objective has a configuration selected from the group consisting of: the objective is configured for infinite conjugate microscopy; and the objective is configured for finite conjugate microscopy;

(b) wherein the imaging device comprises a plurality of imaging devices, and the system further comprises an image separation mirror configured to split the imaging light path into a plurality of imaging light paths respectively directed to the imaging devices;

(c) a filter assembly configured to select a wavelength of an image beam in the imaging light path for propagation to the imaging device;

(d) a tube lens positioned in the imaging light path, wherein the relative position of one or more lenses or lens groups within the tube lens is adjustable;

(e) the test target, wherein the test target comprises a dark material and an array of bright features disposed on the dark material;

(f) the test target, wherein the test target comprises a dark material and an array of bright features disposed on the dark material, and wherein the bright features are polygonal;

(g) the test target, wherein the test target comprises a dark material and an array of bright features disposed on the dark material, the bright features are polygonal, and wherein the bright features are tilted such that edges of the bright features are oriented at angles to a pixel array of the optical imaging system that acquires the image.

20. A non-transitory computer-readable medium, comprising instructions stored thereon, that when executed on a processor, perform the steps of the method of claim 1 of calculating the plurality of Edge Spread Functions, constructing the plurality of Point Spread Function models, and calculating the plurality of imaging performance values.

Description:
OPTICAL IMAGING PERFORMANCE TEST SYSTEM AND METHOD

RELATED APPLICATIONS

[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Serial No. 62/884,116, filed August 7, 2019, titled “OPTICAL IMAGING PERFORMANCE TEST SYSTEM AND METHOD,” the content of which is incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0002] The present invention generally relates to the testing of the imaging performance of an optical imaging system, such as may be utilized in microscopy to acquire imaging data from a sample under analysis.

BACKGROUND

[0003] In an optical imaging system, including an automated, high-throughput optical imaging system utilized in microscopy, a key metric of the quality of the imaging data is the sharpness of the images obtained. Many applications of microscopic imaging (for example, in DNA sequencing) depend on the ability to produce uniformly sharp images across the field of view of one or more channels or cameras, despite imperfect focus and tilt of the specimen. The optical alignment of the cameras and other optical components of the optical imaging system to each other and to the specimen being imaged can contribute significantly to the image sharpness and thus the quality of imaging data produced. Accordingly, it is desirable to be able to test and evaluate the imaging performance (i.e., quality) of an optical imaging system. For example, it is useful to determine whether the optical imaging system meets or exceeds some predetermined level of minimum quality deemed to be acceptable for a given application of the optical imaging system. For expedient manufacturing and the calculation of manufacturing performance metrics, it is further useful to describe the entire imaging performance using a single precise number.

[0004] An image acquired by an optical imaging system may be characterized as a mixture of the real specimen (or object) S(x, y) and an instrument response function, which in microscopy is termed a Point Spread Function, PSF(x, y). Mathematically, the image I(x, y) is the sum of the convolution of the specimen and the PSF and the noise at each pixel, N(x, y), as follows:

[0005] I(x, y) = PSF(x, y) * S(x, y) + N(x, y)     (1)
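
As an illustration of Equation (1) in discrete form, the following minimal Python sketch forms an image as the convolution of a stand-in specimen with a stand-in Gaussian PSF plus per-pixel noise; the feature positions, PSF width, and noise level are arbitrary assumptions chosen only to make the model concrete.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Stand-in specimen S(x, y): a few bright point-like features on a dark field.
S = np.zeros((256, 256))
S[64, 64] = S[128, 200] = S[200, 96] = 1.0

# Stand-in PSF(x, y): an isotropic Gaussian, normalized to unit total energy.
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 1.5**2))
psf /= psf.sum()

# Equation (1): I = PSF * S + N, with additive Gaussian noise standing in for N(x, y).
I = fftconvolve(S, psf, mode="same") + rng.normal(0.0, 1e-3, S.shape)
```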

[0006] An evaluation of imaging performance may involve acquiring an image of a specimen (which may be a test target, i.e., an object provided for the purpose of testing), using the imaging data to measure and quantify the PSF, and comparing the results against a predetermined minimum pass criterion.

[0007] It is desirable to describe the quality of the PSF using a single scalar number. Several metrics are commonly used, such as full width at half maximum (FWHM) or Strehl ratio of the real-space PSF, wavefront error or modulation transfer function (MTF) contrast at a single frequency, and the real-space quantities Encircled Energy and Ensquared Energy. The accuracy and precision of each measured quantity are strongly linked to the experimental methods used to calculate it. The advantages and disadvantages of each pertain to the accuracy with which they are estimated, the ease of interpretation, and the ability to catch both poor image resolution and poor image signal-to-noise ratios (SNRs).

[0008] The classical formulation of Equation (1) above does not reflect the fact that the PSF changes over the field of the image, which is equivalent to saying that an image could be sharp (high quality) in the middle and blurry (poor quality) at the edges. An optical system that is near to diffraction limited and optimized for detection of weak sources will have a compact PSF containing density in an approximately 5 x 5 pixel area, in contrast to a typically much wider image, such as a 4096 x 3000 pixel image. The small PSF can vary substantially across the image, necessitating multiple independent measurements of the PSF across the field of view. In addition to changing over field coordinates (x, y), the PSF is strongly affected by the focus of the image in the z direction, and high-performance, high numerical aperture (NA) objectives experience a rapid degradation of PSF performance with defocus, which is to say they have a shallow depth of field. Diffraction effects place an upper limit on the depth of field, but poorly corrected or poorly aligned systems will experience an even shallower depth of field. The practical limitations of this are significant, as a system of low optical quality might need to be re-focused several times in order to get a sharp image at every point of the field, leading to poor throughput of the system and potential photo-degradation of the sample.

[0009] Downstream processing of images by image analysis software produces results that can be strongly influenced by image sharpness. Blurry images of high-contrast punctate objects experience a fundamental degradation of signal-to-noise ratios and reduction of resolution. For example, blurry images of DNA clusters reduce the ability to perform accurate photometry of nucleotide abundance, reduce the number of small distinct clusters that can be observed, and introduce crosstalk between neighboring clusters.

[0010] Image processing algorithms can partially mitigate blurry images by performing any of a number of explicit steps using well known deconvolution methods, or implicit steps such as selective filtering of data. Such approaches incur computational expenses if required to estimate the degree of PSF degradation for each image or batch of images, and the estimate of PSF degradation itself can add noise. Faster approaches assume that PSFs are of a constant sharpness, but any difference between the actual PSF and the assumed PSF can degrade the quality of the image processing results. PSF homogeneity across input images therefore influences the effectiveness of image processing algorithms.

[0011] Existing methods of optical alignment are labor intensive and struggle to provide quick feedback that describes the entirety of the optical system performance; e.g., wavefront measurements might accurately and quickly describe a single point in the field of view while neglecting to inform the operator how their actions affect distant points in the field. Manufacturing labor costs associated with optical alignment can be lowered with the use of fast predictive feedback to direct operators. Thus, a fast global analysis of system performance, providing feedback with latency on the order of 0-5 seconds, would be beneficial.

[0012] Furthermore, analysis of manufacturing yields (for example, using CpK process capability statistics) requires the quality of each unit to be estimated as a continuous variable, ideally a single such variable describing the build of the entire unit.

[0013] Therefore, an ongoing need exists for improved approaches for imaging performance testing of optical imaging systems.

SUMMARY

[0014] To address the foregoing problems, in whole or in part, and/or other problems that may have been observed by persons skilled in the art, the present disclosure provides methods, processes, systems, apparatus, instruments, and/or devices, as described by way of example in implementations set forth below.

[0015] According to one embodiment, a method for testing imaging performance of an optical system includes: positioning a test target at an object plane of the optical system; operating the optical system to illuminate the test target and generate an image beam; operating a focusing stage of the optical system to acquire a plurality of images of the test target corresponding to a plurality of values of defocus; calculating from each image a plurality of Edge Spread Functions at a plurality of locations within the test target; constructing a plurality of Point Spread Function models from the respective Edge Spread Functions; and based on the Point Spread Functions, calculating a plurality of imaging performance values corresponding to the plurality of locations within the imaging volume defined by the image plane extruded through the range of defocus positions, wherein the imaging performance values are based on at least one of the following metrics: Ensquared Energy; Encircled Energy; or Strehl ratio.

[0016] In an embodiment, illuminating the test target entails back-illuminating the test target with a numerical aperture (NA) greater than or equal to that of the optical system under test.

[0017] In an embodiment, the method includes assessing and describing the distribution and mapping of imaging performance values over the imaging volume, or additionally includes calculating condensed summary scores based on such distributions and maps.

[0018] According to another embodiment, an optical imaging performance testing system includes: a target holder configured to hold a test target; a light source configured to illuminate the test target; an imaging device configured to acquire images of the test target; an objective positioned in an imaging light path between the test target and the imaging device, wherein a position of the objective is adjustable along the imaging light path; and a controller comprising an electronic processor and a memory, and configured to control or perform at least the following steps or functions of the method for testing imaging performance: calculating the plurality of Edge Spread Functions, constructing the Point Spread Function models, and calculating the plurality of imaging performance values.

[0019] In an embodiment, the optical imaging performance testing system includes a tube lens positioned in the imaging light path between the test target and the imaging device.

[0020] In an embodiment, the optical imaging performance testing system includes other or additional mechanical devices or components configured to allow the angular orientations and positions of the imaging devices to be altered with respect to each other and to the objective.

[0021] In an embodiment, the controller is configured to control or perform the following steps or functions of the method for testing imaging performance: assessing and describing the distribution and mapping of imaging performance values over the imaging volume, and calculating condensed summary scores based on such distributions and maps.

[0022] According to another embodiment, a non-transitory computer-readable medium includes instructions stored thereon, that when executed on a processor, control or perform at least the following steps or functions of the method for testing imaging performance: calculating the plurality of Edge Spread Functions, constructing the plurality of Point Spread Function models, and calculating the plurality of imaging performance values.

[0023] According to another embodiment, a system for testing imaging performance of an optical system includes the computer-readable storage medium.

[0024] Other devices, apparatus, systems, methods, features and advantages of the invention will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0025] The invention can be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. In the figures, like reference numerals designate corresponding parts throughout the different views.

[0026] Figure 1 is a schematic view of an example of an optical imaging performance testing system according to an embodiment of the present disclosure.

[0027] Figure 2 is a schematic perspective cross-sectional view of an example of a target fixture of the optical imaging performance testing system according to an embodiment.

[0028] Figure 3A is a schematic plan view of an example of a test target of the optical imaging performance testing system according to an embodiment.

[0029] Figure 3B is a detailed view of one section of the test target illustrated in Figure 3A.

[0030] Figure 4A is an example of a three-dimensional (3-D) model of a true, reasonably sharp Point Spread Function or PSF (a PSF with a true Ensquared Energy value of 0.594) produced by implementing the system and method disclosed herein, where intensity level is color-coded and different colors are represented by different levels of shading.

[0031] Figure 4B is an example of a 3-D model of the same PSF related to Figure 4A, derived from a Line Spread Function by Line Spread Function (LSF x LSF) approximation.

[0032] Figure 4C is an example of a 3-D model of the same PSF related to Figure 4A, where the PSF model is fit to Edge Spread Function (ESF) data.

[0033] Figure 4D is a 3-D model showing the difference between the PSF model of Figure 4A and the PSF model of Figure 4B.

[0034] Figure 4E is a 3-D model showing the difference between the PSF model of Figure 4A and the PSF model of Figure 4C.

[0035] Figure 4F is a 3-D model showing the difference between the PSF model of Figure 4B and the PSF model of Figure 4C.

[0036] Figure 5A is another example of a 3-D model of a true, relatively sharp PSF (having a true EE value of 0.693), similar to Figure 4A.

[0037] Figure 5B is an example of a 3-D model of the same PSF related to Figure 5A, derived from an LSF x LSF approximation.

[0038] Figure 5C is an example of a 3-D model of the same PSF related to Figure 5A, where the PSF model is fit to Edge Spread Function (ESF) data.

[0039] Figure 5D is a 3-D model showing the difference between the PSF model of Figure 5A and the PSF model of Figure 5B.

[0040] Figure 5E is a 3-D model showing the difference between the PSF model of Figure 5A and the PSF model of Figure 5C.

[0041] Figure 5F is a 3-D model showing the difference between the PSF model of Figure 5B and the PSF model of Figure 5C.

[0042] Figure 6A is an example of a 3-D model of a true, relatively blurry PSF (having a true EE value of 0.349), otherwise similar to Figure 4A.

[0043] Figure 6B is an example of a 3-D model of the same PSF related to Figure 6A, derived from an LSF x LSF approximation.

[0044] Figure 6C is an example of a 3-D model of the same PSF related to Figure 6A, where the PSF model is fit to Edge Spread Function (ESF) data.

[0045] Figure 6D is a 3-D model showing the difference between the PSF model of Figure 6A and the PSF model of Figure 6B.

[0046] Figure 6E is a 3-D model showing the difference between the PSF model of Figure 6A and the PSF model of Figure 6C.

[0047] Figure 6F is a 3-D model showing the difference between the PSF model of Figure 6B and the PSF model of Figure 6C.

[0048] Figure 7A is a plot of LSF x LSF approximation accuracy for various PSFs through focus.

[0049] Figure 7B is a plot of accuracy of EE fit to ESF data for various PSFs through focus.

[0050] Figure 7C is a plot of accuracy of EE estimate for one PSF through focus.

[0051] Figure 8A is an example of an image acquired from one of the target features of the test target illustrated in Figure 3A.

[0052] Figure 8B is an example of ESF calculated from the image of Figure 8A. The ESF has been cleaned up using outlier rejection.

[0053] Figure 9 is an example of the result of fitting the ESF of Figure 8B to a model consisting of a linear combination of error functions and calculating EE from the coefficients of the resulting model.

[0054] Figure 10 is an example of calculating ESFs for the four edges of the target feature 313, fitting the ESFs to the linear combination of error functions, and calculating four independent EE values from the corresponding fits.

[0055] Figure 11 is a flow diagram illustrating an example of a method for testing the imaging performance of an optical system according to an embodiment of the present disclosure.

[0056] Figure 12 is a flow diagram illustrating the acquisition of a Z-stack of images according to an embodiment of the present disclosure.

[0057] Figure 13 is a flow diagram illustrating the processing of an individual image according to an embodiment of the present disclosure.

[0058] Figure 14 illustrates an example of a set of EE performance maps that may be generated according to the method, according to an embodiment of the present disclosure.

[0059] Figure 15 illustrates an example of combined EE scores and performance maps that may be generated according to the method, according to an embodiment of the present disclosure.

[0060] Figure 16 is a schematic view of a non-limiting example of a system controller (or controller) that may be part of or communicate with an optical imaging performance testing system according to an embodiment of the present disclosure, such as the system illustrated in Figure 1.

DETAILED DESCRIPTION

[0061] The present disclosure provides a system and method for testing the imaging performance of an optical imaging system.

[0062] Figure 1 is a schematic view of an example of an optical imaging performance testing system 100 according to an embodiment of the present disclosure. Generally, the structures and operations of various components of the optical imaging performance testing system 100 are known to persons skilled in the art, and accordingly are described only briefly herein as necessary for understanding the subject matter being disclosed. For illustrative purposes, Figure 1 includes a Cartesian (X-Y-Z) frame of reference, the origin of which is arbitrarily located relative to the drawing figure.

[0063] Generally, the testing system 100 includes an optical system 104 under test, a test target stage assembly 106, and a system controller 108 in electrical communication (as needed, e.g., for transmitting power, data, measurement signals, control signals, etc.) with appropriate components of the optical system 104 and test target stage assembly 106. The optical system 104 may be an actual optical system to be tested, such as a product intended for commercial availability. Alternatively, the optical system 104 may be an ensemble of components supported by or mounted to the testing system 100 (e.g., on an optical bench of the testing system 100), and arranged to emulate an actual optical system (e.g., with the components positioned at determined distances and orientations relative to each other, and aligned with each other according to the intended configuration for operation).

[0064] Generally, the optical system 104 may be any system (or instrument, device, etc.) configured to acquire optical images of an object or sample under analysis. Such a sample may be biological (e.g., spores, fungi, molds, bacteria, viruses, biological cells or intracellular components such as for example nucleic acids (DNA, RNA, etc.), biologically derived particles such as skin cells, detritus, etc.) or non-biological. Typically, the optical system 104 is configured to acquire optical images with some range of magnification, and thus may be (or be part of) a type of microscope. The optical system 104 may or may not include a target holder 112 (e.g., a sample stage, or emulation of a sample stage) configured to hold a test target 116 (in the place of a sample that would be imaged in practice). In the event that the target holder 112 is separate, a mechanical datum 160 generally may be provided that permits attachment of the target holder 112 in a precisely located manner. A light source 120 is configured to illuminate the test target 116, and the light source 120 may or may not be part of the optical system 104 under test. One or more imaging devices 124 and 128 (e.g., a first imaging device 124 and a second imaging device 128 in the illustrated embodiment), such as cameras, are configured to capture images of the test target 116. The optical system 104 further includes other intermediate optics (optical components) as needed to define an illumination light path from the light source 120 to the target holder 112 and test target 116 (and through the test target 116 in the illustrated embodiment), and an imaging light path from the target holder 112 and test target 116 to the imaging device(s) 124 and 128, as appreciated by persons skilled in the art.

[0065] In particular, the optical system 104 typically includes an objective 132 positioned in the imaging light path between the target holder 112 and test target 116 and the imaging device(s) 124 and 128. The objective 132 is configured to magnify and focus the image to be captured by the imaging device(s) 124 and 128. In the optional case of infinity-corrected microscope optics, the objective 132 performs this task in combination with tube lenses 152 and 156. In the illustrated embodiment, the objective 132 schematically depicts an objective assembly that includes an objective lens system mounted to an objective stage (or positioning device). The objective stage is configured to move (translate) the objective lens system along the Z-axis. Thus, the objective 132 is adjustable to focus or defocus at any desired depth (elevation) in the thickness of the test target 116, as needed for testing purposes. Because the objective 132 can focus at different elevations, the adjustability enables the optical system 104 to generate a “z-stack” of images, as appreciated by persons skilled in the art. Alternatively, the objective 132 could be fixed in space and the specimen could be moved along the z direction in order to vary focus. Alternatively, the objective 132 and specimen could be held in place and the tube lenses 152 and 156 could move relative to the imaging device(s) 124 and 128.

[0066] In the present example, the Z-axis is taken to be the optical axis (vertical from the perspective of Figure 1) through the test target 116 on which the illumination light path and the imaging light path generally lie, and along which the objective 132 is adjustable for focusing and defocusing. Accordingly, the X-axis and Y-axis lie in the transverse plane orthogonal to the Z-axis. Hence, locations on the object plane in which the test target 116 is secured by the target holder 112 can be mapped using (X,Y) coordinates, and depth (focal position) can be mapped to the Z-coordinate.

[0067] In the illustrated embodiment, the optical system 104 has a top-read configuration in which the objective 132 is positioned above the target holder 112 and test target 116. In an alternative embodiment, the optical system 104 may have an inverted, or bottom-read, configuration in which the objective 132 is positioned below the target holder 112 and test target 116. The optical system 104 may be configured to capture transmitted or reflected light from the test target 116. In the illustrated embodiment, the optical system 104 is configured for trans-illumination, such that the illumination light path and the imaging light path are on opposite sides of the target holder 112 and test target 116. In an alternative embodiment, the optical system 104 may be configured such that the illumination light path and the imaging light path are on the same side of the target holder 112 and test target 116. For example, the objective 132 may be positioned in both the illumination light path and the imaging light path, such as in an epi-illumination or epi-fluorescence configuration as appreciated by persons skilled in the art.

[0068] The optical system 104 may include a target fixture or assembly 136 of which the target holder 112 is a part. The target holder 112 is mounted, attached, or otherwise mechanically referenced to a target stage (or positioning device) 140. A precise datum 160 may be utilized to mate the optical system 104 with the test target stage assembly 106. The target stage 140 may be configured for precise movement in a known manner via motorized or non-motorized (and automated and/or manual) actuation. The target stage 140 may be configured to translate along and/or rotate about one or more axes. For example, the target stage 140 may be configured for movement with five degrees of freedom (DOF), where the target stage 140 lacks freedom to move along the Z-axis (lacks focusing movement). In embodiments where illumination light is directed from below the target holder 112 (as illustrated), the target holder 112 may include a window or aperture 188 to enable passage of the illumination light, and the test target 116 may be mounted on such window or aperture.

[0069] The light source 120 may be any incoherent light source suitable for optical microscopy. By way of contrast, coherent sources will produce ESFs incompatible with the present analysis methods. Examples of an incoherent light source 120 include, but are not limited to, a broadband light source (e.g., halogen lamp, incandescent lamp, etc.), a light emitting diode (LED), or a phosphor type material optically pumped by an LED, laser, incandescent lamp, etc. In some embodiments, the light source 120 may include a plurality of light-emitting units (e.g., LEDs) configured to emit light at different wavelengths, and a mechanism configured to enable selection of the light-emitting unit to be active at a given time. In the illustrated embodiment, the light source 120 is integrated with (and thus movable with) the target fixture 136 and positioned at a fixed distance from the target holder 112 and test target 116 along the Z-axis. In other embodiments, the light source 120 may be separate from the target fixture 136. In some embodiments, a condenser (not shown) may be positioned between the light source 120 and the target holder 112 and test target 116, for concentrating the illumination light from the light source 120 to enhance illumination of the test target 116, as appreciated by persons skilled in the art. The angle of the illumination rays, specifically the numerical aperture (NA) of the illumination rays and the uniformity of radiant intensity, will influence the accuracy of the measured PSF. Ideal sources have an NA that exceeds the object NA of the optical system by a small margin and will exhibit uniform radiant intensity across said angles.

[0070] The imaging devices 124 and 128 are multi-pixel (or pixelated) imaging devices in which the sensors are pixel arrays (e.g., based on charge-coupled device (CCD) or active-pixel sensor (APS) technology). Accordingly, the imaging devices 124 and 128 may be cameras of the type commonly utilized in microscopy and other imaging applications. In the illustrated embodiment, each of the imaging devices 124 and 128 schematically depicts an imaging device assembly that includes an imaging device unit (including the sensor, i.e., the pixel array) mounted to an imaging device stage (or positioning device). Each imaging device stage may be configured for six DOF movement, i.e., both translation along and rotation about all three (X, Y, and Z) axes.

[0071] The imaging devices 124 and 128 may be positioned at any nominal angle and distance relative to each other. In the illustrated embodiment, the first imaging device 124 is positioned on-axis with the objective 132 (along the Z-axis), and the second imaging device 128 is positioned on an axis angled (ninety degrees in the illustrated embodiment) to the Z-axis (on the X-axis in the illustrated embodiment). The optical system 104 includes an image separation mirror 144 (e.g., a beam splitter or dichroic mirror) positioned on-axis between the objective 132 and the first imaging device 124. The image separation mirror 144 is configured to split the image beam propagating from the test target 116 into a first image beam portion that propagates along a first split imaging light path from the image separation mirror 144 (via transmission through the image separation mirror 144) to the first imaging device 124, and a second image beam portion that propagates along a second split imaging light path from the image separation mirror 144 (via reflection from the image separation mirror 144) to the second imaging device 128. The separating or splitting of the image beam into a first image beam portion and a second image beam portion may be on the basis of wavelength (i.e., color) within the electromagnetic spectral range of operation. For example, the first image beam portion may be a blue portion and the second image beam portion may be a red portion of the image beam.

[0072] The optical system 104 may include a filter assembly 148 (i.e., a wavelength selector) positioned in the imaging light path between the objective 132 and the imaging devices 124 and 128, or specifically between the objective 132 and the image separation mirror 144 in the illustrated embodiment. In the illustrated embodiment, the filter assembly 148 may represent a plurality of filters mounted to a filter stage configured to selectively switch different filters into and out from the imaging light path by linear translation (e.g., a filter slide) or rotation (e.g., a filter wheel), as depicted by a double-headed horizontal arrow in Figure 1, thereby enabling the selection of wavelengths to be passed in the image beam portions respectively received by the imaging devices 124 and 128.

[0073] The filter assembly 148 may be utilized to implement multi-channel imaging, with each channel associated with a different wavelength (i.e., color) and each imaging device 124 and 128 utilized for one or more channels. As one non-exclusive example, the optical system 104 may be configured to acquire images in four channels (e.g., blue, green, red, and amber), with the first imaging device 124 designated for two of the channels (e.g., blue and green) and the second imaging device 128 designated for the other two channels (e.g., red and amber). The switching (positioning) of the filter assembly 148 may be coordinated with the operation of the imaging devices 124 and 128 and other components of the optical system 104 as needed to sequentially acquire images in the four channels. In one specific example, the four-channel configuration is useful in DNA sequencing in which different fluorescent labels have been associated with the different nucleobases (adenine, cytosine, guanine, thymine) in the sample under analysis.

[0074] Depending on the embodiment under test, the optical system 104 may further include one or more other types of optical components in the illumination light path and/or imaging light path. Examples include, but are not limited to, field lenses, relay lenses, beam expanders, beam collimators, apertures, slits, pinholes, confocal disks, etc. In the illustrated embodiment, the optical system 104 includes a first tube lens assembly 152 positioned in the first split imaging light path between the image separation mirror 144 and the first imaging device 124, and a second tube lens assembly 156 positioned in the second split imaging light path between the image separation mirror 144 and the second imaging device 128. In the illustrated embodiment, the first tube lens assembly 152 may represent a first tube lens mounted to a tube lens stage configured to adjust (translate) the relative positions of elements within the first tube lens along the axis of the first split imaging light path, as depicted by a double-headed vertical arrow in Figure 1. The second tube lens assembly 156 may represent a second tube lens mounted to a tube lens stage configured to adjust (translate) the relative positions of elements within the second tube lens along the axis of the second split imaging light path, as depicted by a double-headed horizontal arrow in Figure 1. The adjustable tube lenses may be utilized to realize infinity-corrected optics, as appreciated by persons skilled in the art.

[0075] The optical system 104 may include a suitable reference datum for defining the positions of various adjustable components of the optical system 104, such as the target holder 112/test target 116, the objective 132, etc. In the illustrated embodiment, a fixed-position structure of the optical system 104, such as a surface of an enclosure 160 in which various components of the optical system 104 are housed, may serve as a reference datum.

[0076] Generally, the system controller (or controller) 108 is configured to control the operations of the various components of the optical system 104. This control includes monitoring and controlling (adjusting) the positions of the adjustable components of the optical system 104, and coordinating or synchronizing the operation of the adjustable components with other components such as the light source 120 and the imaging devices 124 and 128. The system controller 108 is further configured to process the imaging data outputted by the imaging devices 124 and 128 (e.g., data acquisition and signal analysis, including digitizing and recording/storing images, formatting images for display on a display device such as a computer screen, etc.). The system controller 108 is further configured to run performance testing of the optical system 104 according to the method disclosed herein, including executing any algorithms associated with the method. The system controller 108 is further configured to provide user interfaces as needed to facilitate operation of the optical system 104 and implementation of the performance testing. For all such functions, the system controller 108 is schematically depicted in Figure 1 as including an electronics module 164 communicating with the optical system 104, and a computing device 168 communicating with the electronics module 164. The electronics module 164 and the computing device 168 will be understood to represent all hardware (microprocessor, memory, electrical circuitry, peripheral devices, etc.), firmware, and software components appropriate for carrying out all such functions, as appreciated by persons skilled in the art.

[0077] In an embodiment, the electronics module 164 may represent components dedicated to controlling the optical system 104 and running the performance testing, while the computing device 168 may represent a more general purpose computing device that is configurable or programmable to interface with the electronics module 164 to facilitate controlling the optical system 104 and running the performance testing. The system controller 108 (electronics module 164 and/or computing device 168) may include a non-transitory computer-readable medium that includes non-transitory instructions for performing the method disclosed herein. The system controller 108 (electronics module 164 and/or computing device 168) may include a main electronic processor providing overall control, and one or more electronic processors configured for dedicated control operations or specific signal processing tasks. The system controller 108 (typically at least the computing device 168) may include one or more types of user interface devices, such as user input devices (e.g., keypad, touch screen, mouse, and the like), user output devices (e.g., display screen, printer, visual indicators or alerts, audible indicators or alerts, and the like), a graphical user interface (GUI) controlled by software, and devices for loading media readable by the electronic processor (e.g., non-transitory logic instructions embodied in software, data, and the like). The system controller 108 (typically at least the computing device 168) may include an operating system (e.g., Microsoft Windows® software) for controlling and managing various functions of the system controller 108.

[0078] It will be understood that Figure 1 is a high-level schematic depiction of the optical imaging performance testing system 100 disclosed herein. As appreciated by persons skilled in the art, other components such as additional structures, devices, and electronics may be included as needed for practical implementations, depending on how the testing system 100 is configured for a given application.

[0079] Figure 2 is a schematic perspective cross-sectional view of an example of the target fixture 136 according to an embodiment. The target holder 112 may include a target support block 172, a clamping plate 176, and one or more fasteners 180. In the illustrated embodiment in which the light source 120 (an LED in the illustrated example) is integrated with the target fixture 136, the target fixture 136 may include a mounting member 184 for supporting and enclosing (and optionally serving as a heat sink for) the light source 120. The mounting member 184 may be part of or attached to the target support block 172. The target support block 172 includes annular shoulders for supporting the test target 116 and any suitable optics (e.g., a window 188) provided between the test target 116 and the light source 120. After mounting the test target 116 to the target support block 172, a coverglass (or cover slip) 192 may be placed on the test target 116. As one example, the coverglass 192 may have a thickness of 500 ± 5 µm. The clamping plate 176 (which may be configured as a spring) is then secured to the target support block 172 using the fasteners 180 (which may be, for example, screws engaging threaded bores of the target support block 172) to secure the test target 116 and coverglass 192 in a fixed position in the target fixture 136.

[0080] Figure 3A is a schematic plan view of the test target 116 according to an embodiment. The test target 116 may be or include a substrate of planar geometry (e.g., plate-shaped) composed of (or at least including an outer surface composed of) a dark (light-absorbing) material 309, and a two-dimensional array (pattern) of bright (light-reflecting) target features 313 disposed on (or integrated with) the dark material 309. In the present context, the terms “dark” and “bright” are relative to each other. That is, for an implementation using reflected light illumination, the dark material is darker (more light-absorbing) than the target features 313, and the target features 313 are brighter (more light-reflecting) than the dark material 309. For an implementation featuring transmitted light illumination, the darker material is more light-absorbing and the brighter material better transmits light (more light-transmitting). The target features 313 preferably are polygons, thus providing edges showing the contrast between the bright features 313 and the surrounding dark material 309. In particular, the target features 313 may be rectilinear (squares or rectangles). In an embodiment and as illustrated, the target features 313 are tilted (rotated) in the plane of the test target 116 (the plane normal to the optical axis) relative to the horizontal and vertical directions, and particularly relative to the pixel arrays of the imaging devices 124 and 128. This is further shown in Figure 8A, which is a detailed view of one section of the test target 116 as realized on the pixel array of either imaging device 124 or 128.

[0081] In one non-exclusive example, the dark material 309 is (or is part of) a fused silica substrate, and the target features 313 are a patterned chrome layer disposed on the substrate. The array of target features 313 is a 14 x 10 array of squares with 42 µm edges. The target features 313 are spaced apart (center to center) 92 µm on the long axis of the image and 94 µm on the short axis of the image. The target features 313 are tilted 100 mrad (about 5.73°) relative to the optical axis. The image is 19% light against a dark field. The chrome points toward the objective.
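
For a rough picture of such a target, the sketch below rasterizes a single tilted bright square on a dark field. Only the ~100 mrad tilt and the 19% light level are taken from the example above; the image size and the square's size in pixels are hypothetical stand-ins, and optical blur and sensor noise are not modeled.

```python
import numpy as np

def render_tilted_square(size=256, half_edge=60.0, tilt_rad=0.1, light=0.19):
    """Rasterize one tilted bright square on a dark field. tilt_rad mirrors
    the ~100 mrad feature tilt and light the 19% level described above; the
    pixel dimensions are illustrative assumptions, and blur/noise are omitted."""
    yy, xx = np.mgrid[:size, :size] - size / 2.0
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    u = c * xx + s * yy              # coordinates rotated into the square's frame
    v = -s * xx + c * yy
    inside = (np.abs(u) <= half_edge) & (np.abs(v) <= half_edge)
    return np.where(inside, light, 0.0)

target = render_tilted_square()
```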

[0082] The testing system 100 may be utilized to perform a method for testing the imaging performance of the optical system 104. According to the method, the test target 116 is positioned at the object plane of the optical system 104 by securing the test target 116 in the target holder 112. Initial adjustments to the positioning of the various adjustable (staged) optical components (e.g., target holder 112, objective 132, tube lenses 152 and 156, imaging devices 124 and 128, etc.) may be made. In addition, the wavelengths to be passed to the imaging devices 124 and 128 may be selected by adjusting the filter assembly 148. The light source 120 is then activated to generate the illumination light to illuminate the test target 116, resulting in imaging light emitted from the test target 116. The imaging light is focused by the objective 132 and tube lenses 152 and 156 as image beam portions (split by the image separation mirror 144) onto the pixel arrays of the respective imaging devices 124 and 128 to acquire images of the test target 116 (within the field of view of the objective 132) from the respective image beam portions. The imaging data are then transferred from the imaging devices 124 and 128 to the system controller 108 for processing.

[0083] According to the method, the imaging data are utilized to calculate a plurality of Edge Spread Functions at a plurality of locations within the test target 116. Point Spread Function models are then constructed from the respective Edge Spread Functions that were calculated. From these Point Spread Functions, a plurality of (estimated) values of Ensquared Energy, EE, are then calculated. Alternatively or additionally, Encircled Energy and/or Strehl ratio may be calculated.
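
Figures 9 and 10 describe fitting each ESF to a model consisting of a linear combination of error functions and computing EE from the resulting coefficients. A minimal sketch of such a fit follows; the Gaussian basis widths, the shared edge-location parameter, and the initial guesses are illustrative assumptions, not the disclosure's actual basis set.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Fixed Gaussian basis widths in pixels -- an illustrative assumption.
SIGMAS = np.array([0.5, 1.0, 2.0, 4.0])

def esf_model(x, x0, baseline, *amps):
    """Linear combination of error-function steps sharing one edge location x0."""
    steps = [a * 0.5 * (1.0 + erf((x - x0) / (np.sqrt(2.0) * s)))
             for a, s in zip(amps, SIGMAS)]
    return baseline + np.sum(steps, axis=0)

def fit_esf(x, esf):
    """Fit a measured ESF; returns the edge location, baseline, and one
    amplitude per basis width (curve_fit infers the parameter count from p0)."""
    p0 = [float(x.mean()), float(esf.min())] + [np.ptp(esf) / len(SIGMAS)] * len(SIGMAS)
    popt, _ = curve_fit(esf_model, x, esf, p0=p0)
    return popt
```

Because each fitted error-function step is the integral of a Gaussian line-spread component of known width, sharpness quantities such as EE can then be assembled in closed form from the fitted amplitudes rather than by numerically differentiating the noisy ESF.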

[0084] Sharpness (incl. Ensquared Energy) estimation from slant edge targets

[0085] In an embodiment, the method entails fitting experimental Edge Spread Functions (ESFs) using a basis set chosen such that the fit parameters can be utilized to create a model of the Point Spread Function. From that model, various measures of sharpness can be derived.

[0086] Sharpness metric definitions

[0087] Ensquared energy

[0088] For a given PSF(x, y) centered around the point (x = 0, y = 0), the ensquared energy is the fraction of the PSF that falls within a square of area a centered around the same point:

[0089] EE_a = \int_{-\sqrt{a}/2}^{\sqrt{a}/2} \int_{-\sqrt{a}/2}^{\sqrt{a}/2} PSF(x, y) \, dx \, dy     (2)
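
On a pixel grid, Equation (2) reduces to summing the energy-normalized PSF over a centered square window; a minimal sketch, assuming a background-subtracted PSF array centered on the middle pixel and an odd window side:

```python
import numpy as np

def ensquared_energy(psf, side_px):
    """Discrete Equation (2): fraction of total PSF energy within a centered
    square window of side_px pixels (side_px odd; sub-pixel centering of the
    PSF is not handled in this sketch)."""
    psf = psf / psf.sum()                      # normalize to unit total energy
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    h = side_px // 2
    return float(psf[cy - h:cy + h + 1, cx - h:cx + h + 1].sum())
```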

[0090] Encircled energy

[0091] For a given PSF(x, y) centered around the point (x = 0, y = 0), the encircled energy is the fraction of the PSF that falls within a circle of radius r centered around the same point:

[0092] EE_r = \int_0^{2\pi} \int_0^r PSF(\rho, \theta) \, \rho \, d\rho \, d\theta     (3)
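
Equation (3) likewise becomes a masked sum over a centered circular region, under the same centering assumptions:

```python
import numpy as np

def encircled_energy(psf, radius_px):
    """Discrete Equation (3): fraction of total PSF energy within a centered
    circle of radius_px pixels."""
    psf = psf / psf.sum()
    cy = (psf.shape[0] - 1) / 2.0
    cx = (psf.shape[1] - 1) / 2.0
    yy, xx = np.indices(psf.shape)
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius_px ** 2
    return float(psf[inside].sum())
```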

[0093] Strehl ratio

[0094] For a given PSF, the Strehl ratio is the ratio of the maximum value of the normalized PSF (assumed here to occur at (x = 0, y = 0)) to that of a perfect, diffraction-limited PSF. In the case of a circular pupil, the diffraction-limited PSF is an Airy disk and the Strehl ratio is:

[0095] Str = PSF(0, 0) / PSF_Airy(0, 0)     (4)

[0096] 0 < Str ≤ 1     (5)
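
On sampled data, Equation (4) can be evaluated by normalizing both the measured PSF and an Airy reference to unit total energy and comparing their peaks. In the sketch below, the Airy pattern's first-null radius in pixels is a free parameter standing in for the actual sampling of the system under test:

```python
import numpy as np
from scipy.special import j1

def airy_psf(shape, first_null_px):
    """Diffraction-limited Airy pattern sampled on a pixel grid, with its
    first null at first_null_px pixels from center (an assumed sampling)."""
    cy = (shape[0] - 1) / 2.0
    cx = (shape[1] - 1) / 2.0
    yy, xx = np.indices(shape)
    r = np.hypot(xx - cx, yy - cy)
    v = np.where(r == 0, 1e-12, 3.8317 * r / first_null_px)  # first null of 2*J1(v)/v is at v ~= 3.8317
    psf = (2.0 * j1(v) / v) ** 2
    return psf / psf.sum()

def strehl_ratio(psf, airy):
    """Equation (4): ratio of peak intensities after normalizing both the
    measured PSF and the Airy reference to unit total energy."""
    return float((psf / psf.sum()).max() / airy.max())
```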

[0097] Line Spread Functions (LSFs) and Edge Spread Functions (ESFs)

[0098] One way to measure PSFs is to image an array of backlit holes as point sources, with the diameter of the holes being substantially smaller than the Airy disk and the separation between holes being large enough that PSFs from neighboring point sources overlap to an insubstantial degree. The challenges of such an approach involve precise fabrication of such tiny features against a high-OD target background (especially when testing a high-NA microscope) and the fact that the PSF decays in two dimensions as 1/r^2, so legitimate signal quickly falls into the noise. Even with a currently available sensor with an excess of 70 dB of dynamic range and 10,000 e- full well depth, it is possible to fail to identify PSF density several pixels away from the center of the PSF, and density in that area can rapidly decrease system signal-to-noise ratios (SNRs). Alternate experimental protocols involve fluorescent illumination of dilute mono-disperse coatings of quantum dots or single fluorophores, which can generate reliable point sources but still fall victim to the same SNR constraints as hole targets.

[0099] A common solution is to use an extended edge target instead of a pinhole, the so-called Slant Edge family of methods commonly used to generate Modulation Transfer Function (MTF) measurements. Such a target exploits line symmetry to look at the 1-D projection of a point spread function, known as a Line Spread Function (LSF):

[00100] LSF(x) = \int_{-\infty}^{\infty} PSF(x, y) \, dy

[00101] Since the LSF decays as 1/r, analysis of the LSF is better able to catch extended density of the PSF several pixels away from the center. Rather than use pinholes to measure PSFs, edges are used to measure LSFs, with the resulting experimental profile known as an Edge Spread Function (ESF). Edges can either be aligned with the axes of the imaging system or slanted. To begin with, consider an edge that is aligned with the y axis, and which is described by the transmission function E(x, y):

[00102] $E(x, y) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0 \end{cases}$

[00103] The image of this is the 2-D convolution of the PSF with E(x, y) or, equivalently, the 1-D convolution of LSF(x) with E(x):

[00104] $ESF(x) = (LSF * E)(x) = \int_{-\infty}^{x} LSF(x')\, dx' \qquad (12)$

[00105] Conceptually, it is straightforward to find LSF(x) as the derivative of the observed ESF(x) according to the following relation:

[00106] $LSF(x) = \dfrac{d}{dx} ESF(x) \qquad (13)$

[00107] However, in practice, numerical differentiation of noisy data creates undesirable levels of noise. Instead, according to an aspect of the present disclosure, Eqn. 12 is fitted directly as described below.

[00108] Approximation of the PSF using separable functions

[00109] While it is more robust to measure LSFs instead of PSFs, the ultimate goal is to know the PSF. It is not formally possible to reconstruct the full 2-D PSF from two 1-D LSFs recorded along orthogonal directions. Instead, according to an aspect of the present disclosure, the method (e.g., an algorithm as may be embodied in software executed by a control device) calculates an approximation of the PSF as follows:

[00110] $PSF(x, y) \approx LSF_x(x) \cdot LSF_y(y) \qquad (14)$

[00111] The assumption is intolerable in two cases: (1) a very precise model of a well-corrected PSF is needed, for example when looking at the locations of nulls in an Airy disk; and (2) the PSFs under evaluation are far from perfect (> 1/4 wave RMS wavefront error) and contain substantial asymmetries that are not even about the y or x axis, such as the coma experienced in the diagonal corner of a field where |y| ≈ |x|.

[00112] The finite accuracy of the LSF x LSF approximation is evaluated in Figure 7A (LSF x LSF approximation accuracy for various PSFs through focus). For the analysis, a collection of several known PSFs was projected into horizontal and vertical LSFs, and an approximate PSF was re-calculated from their outer product using Eqn. 14. The ensquared energies (EE₄) were calculated for the original and approximated PSFs across a 2 x 2 pixel area. The original PSFs were real-world PSFs obtained from experimental wavefront error measurements of several field points of a high-NA microscope system. For each of nine wavefront error measurements, thirty-one PSFs were generated by adding defocus terms before integrating across the pupil to obtain the image-space PSF. For sharper PSFs, the LSF x LSF approximation consistently underestimates the EE of the actual PSF by ~0.05, but does so in a precise enough way that empirical correction using a polynomial fit or other interpolating method can be performed to restore accuracy. For blurrier PSFs, the mean correspondence between the LSF x LSF approximation and the true PSF is better, but the precision is poorer for EE < 0.1. Despite the poor relative precision with blurry PSFs, blurry PSFs are unambiguously identified as such and are not confused with sharp PSFs.

[00113] Overall, the approximation is precise enough to enable meaningful comparisons of a system under test with PSF sharpness specifications.

[00114] Before a detailed description of the ESF fit process, it is helpful to describe the form of the pixelated input data and touch on practical concerns related to image registration.

[00115] Slant Edge Targets

[00116] Images of a slant edge target are the sole source of data for PSF fitting in this method, with the edges arranged in an array of squares, each with four edges. The use of slant edge ESF targets is well established. Considering the ESFs for a single edge, a data region can be extracted that is centered around the edge and proceeds for an arbitrary number of pixels either side of the edge, where the length is long enough to capture the whole ESF of a reasonably blurry PSF and also to provide a good estimate of the baseline intensity levels for the fully dark and fully light portions of the image. With a slant edge target, multiple ESFs can be combined. For example, if an edge is mostly vertical but with a 1/a radian tilt, where a is a value around 10, a collection of a neighboring rows of the image can be pooled such that each row contains a unique replicate of the ESF, shifted by 1/a pixels relative to the adjacent row. The resulting ESFs can then be viewed together, shifted by (row height)/a in x to align the midpoint of each ESF transition, as in the sketch below.
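A minimal sketch of this row-pooling step is given below, assuming Python with NumPy. The function name, the fixed 1/a shift per row, and the omission of image-boundary handling are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def pooled_esf(image, edge_col, a=10, half_width=16, n_rows=64):
    """Pool rows around a mostly vertical edge tilted by 1/a radians
    into one supersampled ESF (samples spaced ~1/a pixel apart)."""
    xs, ys = [], []
    for r in range(n_rows):
        # The edge crosses successive rows shifted by 1/a pixel, so each
        # row's abscissa is shifted back by r/a to align the midpoints.
        center = int(round(edge_col + r / a))
        cols = np.arange(center - half_width, center + half_width)
        xs.append(cols - (edge_col + r / a))
        ys.append(image[r, cols].astype(float))
    x = np.concatenate(xs)
    y = np.concatenate(ys)
    order = np.argsort(x)   # one ESF sampled every ~1/a px
    return x[order], y[order]
```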

[00117] This dataset samples the ESF at intervals of 1/a pixels, but does not per se increase resolution, as each observation at a particular pixel is still an average over the active area of that pixel. The pixel active areas and geometries, other than the pitch, may not be known for the particular sensor being employed; the pixel active areas are assumed here to have 100% fill factor. The observed image around a vertical slanted ESF will thus be modeled as follows:

[00118] $\mathrm{Img}(r, c) = \int_{c - 1/2}^{c + 1/2} ESF\!\left(x - x_0 - \frac{r}{a}\right) dx$

[00119] In the current practice, 64 rows of images spanning 32 pixels (16 px either side of the edge) of slant edge targets with a = 10 were pooled together for analysis. Outlier rejection eliminated one fourth of the rows on the basis of distance from the median value of all rows together, with distance computed as the $\ell_2$ norm.
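The outlier rejection just described might be sketched as follows; the function name and the 75% retention (one fourth eliminated) follow the text, while everything else is illustrative.

```python
import numpy as np

def reject_outlier_rows(rows, keep_fraction=0.75):
    """Keep the rows closest (in the l2 sense) to the elementwise
    median row; `rows` is an (n_rows, n_samples) array of ESF rows."""
    rows = np.asarray(rows, dtype=float)
    median_row = np.median(rows, axis=0)
    dist = np.linalg.norm(rows - median_row, axis=1)
    keep = np.sort(np.argsort(dist)[: int(round(keep_fraction * len(rows)))])
    return rows[keep]
```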

[00120] Image registration

[00121] Crude image registration

[00122] The edges were identified by first computing single-frequency Fourier transforms of projections of the center of the image in both the vertical and horizontal directions. The phase of the Fourier transform was used to estimate the overall shift of the image by employing the known dimensions of the target, assuming a nominal magnification.
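One plausible reading of this phase-based shift estimate is sketched below. The single-bin DFT, the zero reference phase, and all names are assumptions of the sketch, not the exact routine of the disclosure.

```python
import numpy as np

def crude_shift(profile, period_px):
    """Estimate the shift of a periodic target projection from the
    phase of a single-frequency discrete Fourier transform.
    `period_px` is the known target period in pixels under a nominal
    magnification. The zero-shift reference phase is taken as 0 here;
    in practice it would come from the target geometry."""
    k = np.arange(profile.size)
    bin_value = np.sum(profile * np.exp(-2j * np.pi * k / period_px))
    # Shifting the pattern by s rotates this coefficient's phase by
    # -2*pi*s/period, so invert that relation.
    return -np.angle(bin_value) / (2 * np.pi) * period_px
```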

[00123] The image was broken into adjacent tiles, and the centroid of each tile in x and in y was computed along with the mean value of each tile. From the crude shifts of the FFT, the approximate locations of each square of the image were known well enough to then compute the centroid of each square from the pre-computed tile centroid values, providing an early estimate of the location of each square.

[00124] The global set of square (x, y) coordinates was fit to a linear model of rotation, translation, and magnification to generate a revised estimate of the square locations based on a global model.
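A hedged sketch of that global linear fit follows, using the standard linearization of a similarity transform (scale-rotation plus translation); the parameterization and names are illustrative.

```python
import numpy as np

def fit_global_model(nominal_xy, measured_xy):
    """Fit measured square centers to a linear model of rotation,
    translation, and magnification via linear least squares.

    Model (a sketch; parameterization is illustrative):
        x' = m*(x*cos(t) - y*sin(t)) + dx
        y' = m*(x*sin(t) + y*cos(t)) + dy
    Linearized with p = m*cos(t), q = m*sin(t):
        x' = p*x - q*y + dx
        y' = q*x + p*y + dy
    """
    x, y = nominal_xy[:, 0], nominal_xy[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    # Stack x' rows then y' rows; unknowns are [p, q, dx, dy].
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    b = np.concatenate([measured_xy[:, 0], measured_xy[:, 1]])
    (p, q, dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    magnification = np.hypot(p, q)
    rotation = np.arctan2(q, p)
    return magnification, rotation, dx, dy
```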

[00125] Fine image registration

[00126] Each square was precisely located by identifying the centers of the four edges and averaging the location information. Vertical edges were fit by computing the numerical derivative along each row and cubing the result to emphasize the edge, then using that product as a weight in a centroid calculation to estimate where the edge falls along the row coordinates. The collection of edge coordinates and row numbers was fit to a linear model of slope and offset using a robust-statistics median-of-medians approach. The technique was repeated for all four edges, with rows and columns swapped for horizontal edges. The collection of edge offsets was used to estimate the centers of the squares, and the median value of the slopes was used to compute the rotation angle of the squares for use in the slanted edge calculations.
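The edge-center and robust line-fit steps might look as follows. The cubed-derivative centroid mirrors the description above, while the Theil-Sen style median-of-slopes estimator is a plain stand-in for the median-of-medians approach named in the text.

```python
import numpy as np

def edge_center_in_row(row):
    """Locate a vertical edge within one image row: differentiate,
    cube the derivative to emphasize the edge while preserving sign,
    then take the weighted centroid of that product."""
    d = np.diff(row.astype(float))
    w = d ** 3
    cols = np.arange(w.size) + 0.5   # the derivative lives between pixels
    return np.sum(w * cols) / np.sum(w)

def robust_slope_offset(rows, centers):
    """Fit edge centers vs. row number to slope and offset using a
    median-of-pairwise-slopes (Theil-Sen) estimator as a stand-in
    for the median-of-medians approach described above."""
    rows, centers = np.asarray(rows, float), np.asarray(centers, float)
    i, j = np.triu_indices(rows.size, k=1)
    slope = np.median((centers[j] - centers[i]) / (rows[j] - rows[i]))
    offset = np.median(centers - slope * rows)
    return slope, offset
```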

[00127] The fine-registered fit values of the squares were then fit again to a global model of whole-image translation, rotation, and magnification. The results of this were applied to compute the relative orientations of two cameras in a multi-camera system, specifically the relative image translation (Δx, Δy) and rotation θ. The magnification parameter was used to compute the overall magnification of each image. Distortion can be computed by modifying the fit functions to include a specific model of distortion, or by collectively analyzing the residuals of the fit to a model that assumes no distortion.

[00128] Fitting PSF models to ESF data

[00129] According to embodiments of the present disclosure, several methods may be utilized to take ESF data and create a PSF consistent with the LSF x LSF approximation (Eqn. 14), as follows.

[00130] Direct outer product

[00131] $LSF_x(x)$ and $LSF_y(y)$ can be computed directly from numerical derivatives of the vertical and horizontal ESF data (Eqn. 13), and their outer product can be taken. The resulting matrix will oversample the PSF by a factor of a where the edges are slanted by 1/a radians, and the images can be down-sampled by integration to produce a pixelated version of the PSF. A minimal sketch of this computation is given below.

[00132] In other embodiments, however, this approach is not utilized, for several reasons.

First, it does not provide much averaging of the data, and it is sensitive to noise in the numerical derivatives, which is generally high. Second, it yields an estimate of the pixelated PSF, whereas design specifications and tests frequently refer to the unpixelated PSF as the output of common optical modeling software.
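For completeness, a minimal sketch of the direct outer-product estimate described above (names and the block down-sampling are illustrative):

```python
import numpy as np

def psf_from_outer_product(esf_x, esf_y, a=10):
    """Direct outer-product PSF estimate from two supersampled ESFs
    (e.g., from pooled slant-edge rows sampled every 1/a pixel)."""
    # Numerical derivatives give the LSFs (Eqn. 13); noisy in practice.
    lsf_x = np.diff(esf_x)
    lsf_y = np.diff(esf_y)
    psf = np.outer(lsf_y, lsf_x)   # LSF x LSF approximation (Eqn. 14)
    psf /= psf.sum()               # normalize to unit volume
    # Down-sample by integrating a x a blocks to get the pixelated PSF.
    ny, nx = (psf.shape[0] // a) * a, (psf.shape[1] // a) * a
    pixelated = psf[:ny, :nx].reshape(ny // a, a, nx // a, a).sum(axis=(1, 3))
    return psf, pixelated
```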

[00133] Interpolating the ESF

[00134] The noise of the numerical derivatives can be reduced by fitting the ESFs to N smooth, continuous functions with known derivatives, where N is less than the total number of data points in the experimental ESF:

[00135] $ESF(x) \approx \sum_{i=1}^{N} a_i f_i(x)$

[00136] This helps with noise but, like the prior method, it does not fit the unpixelated PSF directly.

[00137] Fitting ESFs to a PSF model

[00138] An improvement on fitting the ESFs to smooth functions is to first assume a particular model of a PSF, denoted $PSF_a$, and then derive analytical expressions for the ESFs using the model. Key to this is that the PSF model is a separable function in x and y. A Gaussian basis set is chosen using logarithmically spaced inverse widths $b_i$, though other basis sets could be used in principle.

[00139] $PSF_a(x, y) = \left( \sum_{i=1}^{N_g} a_{x,i}\, e^{-b_i x^2} \right) \left( \sum_{i=1}^{N_g} a_{y,i}\, e^{-b_i y^2} \right) \qquad (20)$

[00140] where (x, y) are the field coordinates for points on the object plane within the test target 116, $N_g$ is the number of Gaussians comprising the basis set utilized to model the PSF, $a_i$ is the set of coefficients that weigh the contribution of each Gaussian i in the model of Eqn. 20, and $b_i$ is proportional to the inverse square of the width of each Gaussian.

[00141] The unpixelated ESF can then be computed directly in terms of the coefficients:

[00142] $ESF_x(x) = a_0 + \sum_{i=1}^{N_g} a_{x,i}\, \frac{1}{2} \sqrt{\frac{\pi}{b_i}} \left[ 1 + \mathrm{erf}\!\left( \sqrt{b_i}\, x \right) \right]$

[00143] where erf(x) is the Gaussian error function, i.e. the definite integral of each Gaussian, and is commonly available in numerical libraries.

[00144] The pixelated ESF can be calculated by numerical integration of the unpixelated ESF:

[00145] $ESF_{\mathrm{pix}}(c) = \int_{c - 1/2}^{c + 1/2} ESF_x(x)\, dx$

[00146] Note that the fitting functions used to fit the LSF are symmetric about $x_0$, such that $LSF_x(x - x_0) = LSF_x(x_0 - x)$. This symmetry is not strictly necessary, but was employed to reduce the fit space and increase the equivalency of rising versus falling ESFs, i.e., ESFs from the right-hand side of a square versus the left-hand side of a square. The upper and lower fit regions of the ESFs are statistically quite different: images from high-gain sensors are shot-noise-limited counting processes, and in absolute terms the variances of the lower, darker portion of the ESF data are lower than the variances of the ESF data at the top. If weighted linear least squares fits of the ESFs are performed using the signal level of each ESF data point (in units of photoelectrons) as its variance for weighting, this results in different fits to left and right ESFs for PSFs with LSFs that are asymmetric about the origin.

[00147] Alternate algorithms could make use of a different basis set in Eqn. 20 that disregards the explicit symmetry used here. Such basis sets could be functions commonly used for interpolation, such as B-splines or polynomials, or a combination, such as the Gaussians employed in Eqn. 20 together with polynomials of odd powers or cosine functions of varying frequencies, to allow a fit of an asymmetric LSF. Simultaneous linear least squares fits of rising and falling ESFs (e.g., left and right edges of the squares) could use shot-noise-derived weights to fit asymmetric LSFs to asymmetric basis functions.
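A sketch of such a shot-noise-weighted linear least squares fit, agnostic to the basis used in the design matrix (all names are illustrative):

```python
import numpy as np

def weighted_lstsq(design, data, photoelectrons):
    """Weighted linear least squares with shot-noise weights.

    design         : (n_points, n_coeffs) design matrix (any basis)
    data           : observed ESF samples
    photoelectrons : signal level per sample; under shot noise the
                     variance of each point equals its mean, so the
                     statistical weight is 1/variance ~ 1/signal
    """
    var = np.maximum(photoelectrons, 1.0)   # guard the dark baseline
    w = 1.0 / np.sqrt(var)                  # scale rows by 1/sigma
    coeffs, *_ = np.linalg.lstsq(design * w[:, None], data * w, rcond=None)
    return coeffs
```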

[00148] Numerical fitting

[00149] In practice, the image data are fit using linear least squares. QR factorization is used as part of the known DGELS LAPACK (Linear Algebra PACKage) routine as implemented in the Intel MKL (Math Kernel Library). The image data Img are fit using the design matrix D to obtain fit coefficients $X_i$. Several pre-computed versions of D exist, using nine different offsets $x_0$, and every row of data uses the rows of D corresponding to the $x_0$ value closest to the $x_0$ value of the data.

[00150] $\mathrm{Img} = D\, X$

[00151] The fit coefficient X[0] represents the baseline intensity of the image due to background noise and bleed-through of the test target. If the data being fit are a vertical edge resulting in $ESF_x(x)$, the coefficients X[1], X[2], ..., X[$N_g$] represent the coefficients $a_{x,1}, a_{x,2}, \ldots, a_{x,N_g}$ in Eqn. 20. If the data being fit are a horizontal edge resulting in $ESF_y(y)$, the coefficients X[1], X[2], ..., X[$N_g$] represent the coefficients $a_{y,1}, a_{y,2}, \ldots, a_{y,N_g}$ in Eqn. 20.
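A hedged sketch of the design-matrix fit follows, using NumPy and SciPy. NumPy's lstsq calls a LAPACK least squares driver and here stands in for the DGELS/QR routine named above; the basis widths, sample grid, and omission of pixel integration and the x0 lookup table are assumptions of the sketch.

```python
import numpy as np
from scipy.special import erf

def esf_design_matrix(x, b):
    """Design matrix for fitting ESF data to a sum of Gaussian error
    functions (the integrals of the Gaussian basis of Eqn. 20).
    Column 0 is the baseline X[0]; column i the erf term of basis i.

    x : ESF sample abscissae, pixels, relative to the edge (x0 = 0)
    b : logarithmically spaced inverse-square widths of the Gaussians
    """
    cols = [np.ones_like(x)]
    for bi in b:
        # Integral of exp(-bi t^2) from -inf to x.
        cols.append(0.5 * np.sqrt(np.pi / bi) * (1.0 + erf(np.sqrt(bi) * x)))
    return np.column_stack(cols)

# Fit simulated pooled ESF samples (all values hypothetical).
x = np.linspace(-16, 16, 640)
b = np.logspace(-2, 1, 6)
true = 100 + 800 * (1 + erf(1.5 * x)) / 2
data = true + np.random.normal(0, np.sqrt(np.maximum(true, 1)))
D = esf_design_matrix(x, b)
X, *_ = np.linalg.lstsq(D, data, rcond=None)
```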

[00152] Reconstructing the PSF model and calculating sharpness

[00153] A model of the PSF can be reconstructed from the fit coefficients $a_i$ obtained in the previous section. Several different models represent different levels of knowledge. The most complete model combines the vertical and horizontal LSFs using Eqn. 20, and the ensquared energy can be directly computed from simple products of sums of the coefficients:

[00154] $EE_a = \left( \frac{\sum_i a_{x,i} \sqrt{\pi / b_i}\; \mathrm{erf}\!\left( \sqrt{b_i}\, \sqrt{a} / 2 \right)}{\sum_i a_{x,i} \sqrt{\pi / b_i}} \right) \left( \frac{\sum_i a_{y,i} \sqrt{\pi / b_i}\; \mathrm{erf}\!\left( \sqrt{b_i}\, \sqrt{a} / 2 \right)}{\sum_i a_{y,i} \sqrt{\pi / b_i}} \right) \qquad (35)$
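Assuming the reconstructed form of Eqn. 35 above, the EE can be computed from the fit coefficients analytically, as in this sketch (names are illustrative):

```python
import numpy as np
from scipy.special import erf

def ensquared_energy_from_fit(a_x, a_y, b, side):
    """Analytic ensquared energy of the separable Gaussian-sum PSF
    model, evaluated directly from the fit coefficients.

    a_x, a_y : Gaussian weights for the x and y LSFs
    b        : shared inverse-square widths of the Gaussian basis
    side     : side of the square (e.g., 2.0 for a 2 x 2 px EE)
    """
    def axis_fraction(coeffs):
        # Integral of exp(-b t^2) over (-side/2, side/2) vs. the full line.
        total = np.sum(coeffs * np.sqrt(np.pi / b))
        inside = np.sum(coeffs * np.sqrt(np.pi / b)
                        * erf(np.sqrt(b) * side / 2.0))
        return inside / total
    return axis_fraction(np.asarray(a_x)) * axis_fraction(np.asarray(a_y))
```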

[00155] The accuracy of this method is discussed below.

[00156] Encircled energy can likewise be computed by evaluating Eqns. 3 and 20 numerically. The Strehl ratio takes a simple form:

[00157]

[00158] Alternate simple PSF model

[00159] While the PSF is a fundamentally 2-D concept, it has been useful to express the EE of a PSF that was determined on the assumption that both the vertical and horizontal LSFs are identical. Such uses include looking for astigmatism in the image by comparing the focal positions that maximize the EE estimated from a horizontal ESF versus the EE estimated from a vertical ESF. To this end, a simple PSF model has been employed with a very limited number of fit coefficients $a_i$:

[00160] $PSF(x, y) = \left( \sum_i a_i\, e^{-b_i x^2} \right) \left( \sum_i a_i\, e^{-b_i y^2} \right) \qquad (37)$

[00161] This form of data feedback is the primary method by which system alignment has been performed and has proven to be quite robust in operation.

[00162] The model of Eqn. 37 was chosen to be separable with respect to x and y, such that a 2-D reconstruction of the PSF can be obtained from ESF data that only contain information about a one-dimensional LSF. As such, the model may be incapable of representing complex PSFs that result from a highly aberrated or defocused system, but it is adequate to describe the PSF of a well-compensated and in-focus system. The ESF can be further modified to reflect the degradation of the PSF by the contribution of the detector MTF, which in the worst case is achieved by averaging the exact ESF of Eqn. 38 over the entire length of each pixel.

[00163] Data fitting, for example based on linear least squares regression, is performed by fitting Eqn. 38 against the experimental data to obtain the coefficients $a_i$. The PSF model is constructed by evaluating Eqn. 37 using the values of $a_i$ obtained from the fit. The EE values are then analytically calculated by integrating Eqn. 37 over a square region to obtain Eqn. 40.

[00164] Accuracy of the algorithm

[00165] The accuracy of the whole algorithm was assessed using synthetic images with known PSFs, using the same real-world PSFs used to assess the accuracy of the LSF x LSF approximation. The PSFs were convolved with an image of an infinitely sharp test target. The convolution represents an ideal noiseless image with a grid of 10 x 14 slanted squares. The squares provide testing variety in that they are not superimposable on each other, because the placement of each square relative to the camera pixel grid differs on a sub-pixel level depending on where the center of the square falls. To understand the effects of shot noise on the data, the images were first scaled to match the intensities in actual images of the target, and a baseline was added to include the effects of system background intensity and bleed-through of the test target. The scaled images were expressed in units of photoelectrons. A shot-noise-sampled image was created from this scaled image by sampling the intensity of each pixel from a distribution calculated specifically for each pixel. The shot noise Poisson distribution was approximated as a normal distribution with a mean corresponding to the mean of the scaled image and a variance also equal to that mean. The images were finally scaled to 12-bit depth as the upper bits of a 16-bit TIFF by using a scale factor of 10,550 photoelectrons per $2^{16} - 1$ image counts.
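A minimal sketch of that synthetic-noise step, assuming NumPy; the baseline value is a placeholder, while the 10,550-photoelectron full-scale factor and the normal approximation to Poisson shot noise follow the text.

```python
import numpy as np

def add_shot_noise(ideal_pe, baseline_pe=100.0, full_scale_pe=10550.0,
                   rng=np.random.default_rng()):
    """Generate a shot-noise-sampled synthetic image in counts.

    ideal_pe      : noiseless image scaled to photoelectron units
    baseline_pe   : background + target bleed-through (placeholder)
    full_scale_pe : photoelectrons mapped to full scale (2**16 - 1)
    """
    mean = ideal_pe + baseline_pe
    # Poisson shot noise approximated as normal: variance == mean.
    noisy = rng.normal(loc=mean, scale=np.sqrt(mean))
    counts = np.clip(noisy / full_scale_pe * (2**16 - 1), 0, 2**16 - 1)
    # 12-bit data in the upper bits of a 16-bit image: zero the low 4 bits.
    return (counts.astype(np.uint16) // 16) * 16
```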

[00166] After each image was created, it was run through the full analysis pipeline to estimate the ensquared energy of each square using Eqn. 35. Additionally, models of the PSFs were created from the fit coefficients $a_{x,i}$ and $a_{y,i}$ to allow for direct comparison. Figures 4A-4F show the results from a reasonably sharp PSF with a true EE of 0.594. 3-D plots of the PSF depict the true PSF (Figure 4A), the PSF derived from the LSF x LSF approximation (Figure 4B), and finally the PSF model that was recovered from analyzing the synthetic images and fitting the ESF data (Figure 4C). Differences between the three PSFs are shown in Figures 4D-4F, using the same vertical scale. As expected, the PSF models bear a strong resemblance to the LSF x LSF approximation. Note that the density of the true PSF at the base falls off faster in the front right of the plots than it does in the front left. That property is retained by the LSF x LSF model, and the PSF model from the ESF data faithfully recovers it as well. The LSF x LSF approximation gives rise to an underestimation of the peak and an overestimation of the horizontal and vertical rows and columns emanating from the peak. This results in an underestimation of the true EE and is unavoidable given the approximation. When the PSF is viewed from above, this results in a characteristic “+”-shaped PSF model. Nonetheless, the faithful reconstruction of the LSF x LSF model and the accurate calculation of the EE solely from the fit coefficients $a_{x,i}$ and $a_{y,i}$ suggest that the ESF fitting procedure is behaving as expected.

[00167] Two additional PSFs were fit in the same manner and are shown in Figures 5A-5F and 6A-6F, which depict sharp and blurry PSFs, respectively. Note that the asymmetry of the blurrier PSF is not adequately captured by the LSF x LSF model, as predicted.

[00168] The synthetic image analysis was repeated for nine different experimental PSFs calculated over a range of thirty-one different focus positions, using the same PSFs used in Figure 7A. A scatter plot of the EE estimated using Eqn. 35 versus the EE of the known starting PSF is shown in Figure 7B. Each point of the scatter plot represents an image composed of 10 x 14 squares, with each square sampled independently, under the expectation that the squares will produce the same estimate of EE, along with noise resulting from both shot noise and any algorithmic errors that are exposed when presented with multiple squares that differ with regard to sub-pixel placement. The error bars represent the standard deviation over all 140 measurements. Points exhibit a natural clustering in various locations that results from the PSF generation creating a series of similar PSFs around the optimal focus of each PSF.

[00169] EE is underestimated by the algorithm, which is expected given the core LSF x LSF approximation. The EE estimates would be expected to exactly match the LSF x LSF approximations, but in this case they are slightly elevated. While this does make the overall estimates more accurate, it also exposes an opportunity to improve the performance of the ESF fitting and EE estimation numerical methods.

[00170] The accuracy of EE estimates in Figure 7B is quite acceptable for the purposes of instrument testing, and demonstrates that the fundamental approach taken by the method disclosed herein is capable of generating usable data and a useful test platform.

[00171] Precision of the algorithm

[00172] The precision of the algorithm is acceptable, as judged by the relatively tight standard deviations of the EE estimates in Figure 7B. However, the use of feedback for optical alignment presents an additional level of requirements beyond just a tight standard deviation. When performing optimizations, typically with small perturbations to the system, it is important that the feedback signal does not jump around as a function of the adjustment variable. Figure 7C is a plot of EE versus focus position illustrating that the algorithm provides a very smooth and predictable response curve to the perturbation of defocus. While the distance between the red and black lines (or circular and triangular data points, respectively) indicates that EE is underestimated, it is systematically and consistently underestimated, and the inaccuracy of the method does not jump around wildly as the defocus position changes from point to point. This curve is thus amenable to higher-level interpretation and fitting, e.g., to determine the defocus position at which EE is maximized.

[00173] It was noted above that alternate basis sets could be utilized to approximate the LSF, including basis sets that permit the fitting of asymmetric LSFs. If such methods were utilized, they could likely achieve better accuracy than is shown in Figure 7B. However, it would be necessary to validate that a response curve of the form of Figure 7C remains smooth enough for optimization use. An unproven concern is that the use of a larger number of fitting parameters would compromise smoothness; however, this assumption remains untested, and higher-accuracy models might defy expectations and actually be as smooth or smoother.

[00174] EE score

[00175] The Ensquared Energy (EE) values may be utilized as a metric of the imaging performance of the optical system 104 under test. The user may make initial settings or adjustments to the optical components of the system (e.g., alignment between components, and between components and the test target 116). The user may then run the test of the optical system 104 according to the method described herein. Based on the results of the test, the user may determine whether further adjustments should be made to improve the results (e.g., further optimize alignment and focus). After making any further adjustments, the user may then run the test again. Iterations of testing and adjusting may continue until the results are deemed acceptable.

[00176] The optical system 104 may be determined to have successfully passed the imaging performance test if all field points in the image exceed a predetermined minimum threshold EE value (e.g., EE > 0.51). However, assessing imaging performance solely on the basis of whether all EE values pass may in some circumstances be considered disadvantageous. This approach may be unforgiving in a case where the optical system 104 exhibits a few dull spots but otherwise is a sharp (high image quality) system. Moreover, as optical performance is continuously assessed during the progress of an alignment procedure, discontinuities arising from the use of thresholding make thresholding methods problematic to use in an optimization process where continuous metrics are preferred (e.g., aligning to laser power, or aligning to minimum root mean square (RMS) wavefront error), and the thresholding effects may not be conducive to manufacturing yield calculations.

[00177] To overcome the foregoing potential disadvantages, in a further embodiment of the presently disclosed method, an EE score, Es, is calculated based on the EE values. The EE score describes the entire field with a single continuous variable that is required to exceed (or meet or exceed) a predetermined minimum threshold to pass the imaging performance test. For example, the EE score may be weighted such that Es > 100 is a passing score. The EE score has two components: <EE> is the spatial mean of all EE values observed at all field points over the specified depth of field; and the overlap integral EO measures the consistency of EE among different imaging channels (e.g., four channels corresponding to four colors to which the images are filtered, where in the illustrated embodiment, two imaging devices 124 and 128 are utilized in conjunction with the adjustable filter assembly 148 to each acquire images in two of the four channels). EO is rewarded if sharp and blurry regions are coincident in both field position (x, y) and focal position (z). EO is penalized if the four channels are inconsistent, for example if the two imaging devices 124 and 128 are not focused well enough relative to each other, or if the blue channel is sharp in the upper left corner and blurry in the lower right corner while the red channel is the opposite, etc. The EE score and its components are calculated according to Eqns. 41-44 below.

[00178]

[00179] where again x and y represent the field points arrayed in a grid of size $N_x$ by $N_y$, c is the color (e.g., in a four-channel embodiment, colors 1, 2, 3, and 4 might correspond to blue, green, red, and amber), z is the focal position, and nm is nanometers.

[00180] While the EE score considers the contributions from all channels, it is also useful to have a similar score that considers a single channel at a time and only considers homogeneity over field positions and focus positions. The per-channel EE score $EC_c$ is such an example, with the heterogeneity of a single channel being captured in Eqn. 46, similar to variance over mean squared.

[00181]

[00182] If the integral in Eqn. 46 is zero, then the penalty represented by the heterogeneity EH will be zero.

[00183] Fitting EE vs. Z

[00184] The integrals in Eqns. 42, 43, and 46 aim to characterize the imaging performance over focus and could be computed using simple sums of the experimental data. Alternately, the integrals could be computed by first fitting experimental pairs of EE values and their corresponding z locations to a Gaussian profile (Eqn. 49). The integrals are then computed as analytical solutions of the definite integrals of the Gaussian fit function per Eqns. 49-55 below.

[00185] $EE(z) = a\, \exp\!\left( -\frac{(z - z_0)^2}{2 w^2} \right) \qquad (49)$

[00186] where a is the peak EE value obtained over focus, w is the Gaussian width, and $z_0$ is the z value corresponding to the peak EE value. This is of particular utility in that it allows relatively crude z steps to be taken (and thus less data acquisition) while still accurately capturing $z_0$ to within a fraction of the z-step size. Eqn. 55 may be utilized to calculate the integral in Eqn. 42 above.
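Under the $2w^2$ width convention of Eqn. 49 as reconstructed above (an assumption of this sketch), the fit and its analytic integral might look as follows, using SciPy's curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def ee_vs_z(z, a, z0, w):
    """Gaussian profile for EE versus focus position (cf. Eqn. 49)."""
    return a * np.exp(-(z - z0) ** 2 / (2.0 * w ** 2))

def fit_ee_profile(z_data, ee_data):
    """Fit measured (z, EE) pairs to the Gaussian profile. Crude z
    steps still locate the peak z0 to a fraction of the step size,
    and the definite integral over focus has a closed form."""
    p0 = (ee_data.max(), z_data[np.argmax(ee_data)], np.ptp(z_data) / 4)
    (a, z0, w), _ = curve_fit(ee_vs_z, z_data, ee_data, p0=p0)
    integral = a * w * np.sqrt(2.0 * np.pi)   # analytic integral over z
    return a, z0, w, integral
```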

[00187] For the EO calculations, the overlap integrals between different channels in Eqn. 44 could also be computed using analytical integrals of Gaussian fits of the experimental data, as illustrated in Eqns. 56-63:

[00188]

[00189] where $w_1$ and $w_2$ are the Gaussian widths corresponding to channels 1 and 2. The product of two Gaussians is itself a third Gaussian centered at $z_{0,p}$ with width $w_p$. Eqns. 56-63 may be utilized to calculate Eqn. 47 above.

[00190] Penalties indicate how much performance was lost to various optical defects

[00191] Beyond estimating the present EE score for the optical system 104, the method may entail communicating to the user how much potential score is lost due to one or more artifacts, or optical defects (e.g., aberrations). In this manner, the method may inform the user as to which specific defects are negatively impacting the imaging performance the most. For this purpose, the method may calculate one or more penalties associated with one or more respective artifacts. Penalties are the difference between the current EE score and the score that would be obtained were the defect(s) accounted for. The method may include displaying information corresponding to the penalties across the field of the image displayed to the user. When considering just the <EE> contribution to the EE score (without the overlap scores EO), the penalty calculations should always result in a larger <EE> quantity. The penalty is communicated as the difference between the actual score and the artificially increased score, which should always be a negative number. When considering the overlap scores EO, a penalty calculation may occasionally result in a positive number.

[00192] One example of a penalty is the penalty from imaging device (e.g., camera) Z-shift. In calculating this penalty, the method may assume that both imaging devices 124 and 128 could be focused optimally relative to each other, but it does not assume that axial chromatic aberrations within the imaging devices 124 and 128 are zero. The method calculates the ideal focus position for the first imaging device 124 as the average of the two sets of wavelength-filtered imaging data the first imaging device 124 acquires (e.g., the average of the blue and green data). The method also calculates the ideal focus position for the second imaging device 128 as the average of the two sets of wavelength-filtered imaging data the second imaging device 128 acquires (e.g., the average of the red and amber data).

[00193] Another example of a penalty is the penalty from imaging device (e.g., camera) tilt. For each channel, the method constructs a focal surface as the average of both colors (e.g., blue and green in channel 1), and fits it to a plane using linear least squares to find the tip and tilt portions, which are then removed from the focal plane while preserving the focus difference for each imaging device 124 and 128.

[00194] Another example of a penalty is the penalty from astigmatism. The EE scores for horizontal and vertical LSFs are calculated independently, allowing the focal plane tilt and height to be optimal for each direction. The maps are then averaged together to create the final estimate.

[00195] Another example of a penalty is the penalty from axial chromatic aberrations. This penalty is calculated twice, once for each channel. For channel 1, the optimal focus for the two colors (e.g., blue and green) is found for each color, the difference is split, and each channel is offset by that amount. The resulting focal positions may still be non-ideal because of imperfect camera-to-camera focus differences.

[00196] Another example of a penalty is the penalty from field flatness. This penalty is calculated twice, once for each channel. For channel 1, the optimal focal surfaces of the exposures of the two colors (e.g., the blue and green exposures) are averaged together, and an optimal fit plane is subtracted from the focal position of the (e.g., blue and green) data, while channel 2 is treated normally. This penalty potentially may have conflicting interpretations. For example, if both channels are equally non-flat, they will both suffer in terms of <EE>, but the overlap EO will be enhanced in a productive way. Flattening out a channel will eliminate both effects. Typically the <EE> effect will dominate, but this is not guaranteed.

[00197] Figure 8A is an example of an image acquired from one of the target features 313 of the test target 116. Figure 8B is an example of an ESF calculated from the image of Figure 8A. In Figure 8B, the image coordinate zero corresponds to one of the edges of the imaged target feature 313. Some of the data points in Figure 8B are outliers. As illustrated, one or more of these outliers may correspond to dark artifacts (e.g., dirt or debris from fabrication) on the target feature 313. The method may include rejecting (e.g., ignoring or removing) the outliers in the ESF data.

[00198] Figure 9 is an example of the result of fitting the ESF of Figure 8B to the error function and calculating EE from the ESF.

[00199] Figure 10 is an example of calculating ESFs for the four edges of the target feature 313, fitting the ESFs to the error function, and calculating four independent EE values from the corresponding ESFs. Nominally, the left and right edges should result in the same measurement ($EE_{left} = EE_{right}$), and the upper and lower edges should result in the same measurement ($EE_{up} = EE_{down}$). However, the measurements for the left and right edges are not necessarily the same as the measurements for the upper and lower edges.

[00200] In an alternative embodiment, Encircled Energy may be utilized as the imaging performance metric instead of Ensquared Energy. In a further alternative embodiment, the Strehl ratio may be utilized as the imaging performance metric instead of Ensquared Energy.

[00201] Figure 11 is a flow diagram 1100 illustrating an example of a method for testing the imaging performance of an optical system, such as the optical system 104 described herein, according to an embodiment of the present disclosure. As illustrated, the method is initiated by moving the Z-stage of the objective to an approximate focus position. The user then initiates the process of Z-stack acquisition, by which a plurality of images of the test target are acquired at different focal positions. The hardware of the optical system then performs the Z-stack acquisition, an embodiment of which is further illustrated in Figure 12. The Z-stack range is then compared with the focus performance during the Z-stack acquisition. The Z-focus position is then updated based on the Z-stack results. A determination is then made as to whether the focus was found in the middle one-third of the Z-stack range. If yes, then the method determines that the data currently acquired are usable, global calculations are performed on the Z-stack, and the user is notified of the new data so generated.

[00202] On the other hand, if it is determined that the focus was not found in the middle one-third of the Z-stack range, calculations of imaging performance values (e.g., Ensquared Energy estimates as described herein) are made from the data currently acquired, but the user is warned that the data are incomplete or unreliable because focus was not found in the middle of the range. The process of Z-stack acquisition may then be repeated with a more informed guess of the focal position obtained from this Z-stack.

[00203] Figure 12 is a flow diagram 1200 illustrating the acquisition of a Z-stack of images according to an embodiment of the present disclosure. After the user initiates the Z-stack acquisition (Figure 11), a range of Z-positions is calculated around the initial focus guess. For each field of view, FOV i, in the set of FOVs, the Z-stage is commanded to move to the defocus position corresponding to FOV i. The actual position of the Z-stage is then queried and recorded. Images from all imaging devices are then simultaneously acquired. High-speed processing of each image by itself is then performed, an embodiment of which is further illustrated in Figure 13. The image data are then pooled. A plurality of PSF models, one per ESF, are then constructed by fitting ESFs to a basis set of Gaussian error functions, as described herein.
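A schematic sketch of this acquisition loop follows. The three hardware callables are hypothetical placeholders abstracting the stage and cameras; the range and step count are illustrative.

```python
import numpy as np

def acquire_z_stack(move_to, query_position, snap_all, z_guess,
                    z_range=10.0, n_steps=31):
    """Sketch of the Z-stack acquisition loop of Figure 12.
    move_to(z) commands the stage, query_position() reads it back,
    snap_all() triggers all imaging devices simultaneously; all three
    are hypothetical stand-ins for real hardware interfaces."""
    z_cmd = z_guess + np.linspace(-z_range / 2, z_range / 2, n_steps)
    stack = []
    for z in z_cmd:
        move_to(z)                   # command the defocus position
        z_actual = query_position()  # record the actual stage position
        stack.append((z_actual, snap_all()))
    return stack                     # pooled for ESF -> PSF model fitting

# Usage with trivial stand-ins:
pos = {"z": 0.0}
stack = acquire_z_stack(lambda z: pos.update(z=z), lambda: pos["z"],
                        lambda: [np.zeros((8, 8))], z_guess=5.0)
```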

[00204] Figure 13 is a flow diagram 1300 illustrating the high-speed processing of an image by itself according to an embodiment of the present disclosure. The image is down-sampled to representations of intensity, horizontal centroid, and vertical centroid. A crude global registration is then performed using the down-sampled image. Each square (or other type of target feature of the test target) is located using the centroid data. A plurality of square locations are then fitted to a global model of translation, rotation, and magnification. ESFs are then extracted from each edge of each square, and edge angles and centers are calculated. The ESF data are then purged of outliers. A plurality of PSF models are then constructed by independently fitting each ESF to a basis set of Gaussian error functions, as described herein. Imaging performance values (e.g., Ensquared Energy estimates as described herein) are then calculated for each ESF. Data may be averaged ($EE_{horizontal} = (EE_{left} + EE_{right})/2$, $EE_{vertical} = (EE_{up} + EE_{down})/2$). The process illustrated in Figure 13 may be repeated on a plurality of images, and the resulting image data may be pooled (Figure 12) and further processed.

[00205] In an embodiment, one or more of the flow diagrams 1100, 1200, and 1300 may represent an optical imaging performance testing system, or part of an optical imaging performance testing system, configured to carry out (e.g., control or perform) the steps or functions described above and illustrated in one or more of Figures 11, 12, and 13. For this purpose, a controller (e.g., the controller 108 shown in Figure 1) including a processor, memory, and other components as appreciated by persons skilled in the art, may be provided to control the performance of the steps or functions, such as by controlling the components of the testing system involved in carrying out the steps or functions. This may involve receiving and processing user input and/or machine-executable instructions, as appreciated by persons skilled in the art.

[00206] Figure 14 illustrates an example of a set of EE performance maps that may be generated according to the method, and as may be displayed to the user on a display screen of a computing device (e.g., computing device 168). The columns of maps correspond to different colors, which in the present example are, from left to right, blue, green, amber, and red. In the uppermost row (row 1), each map displays the average EE for each point in the field of view (FOV). In the next row (row 2), each map displays the best EE found at any defocus value for each point in the FOV. In the next row (row 3), each map displays the surface of best focus as determined by EE. In the next row (row 4), each map displays the astigmatism of horizontal versus vertical imaging data as determined by EE. In the bottommost row (row 5), each map displays the depth of field as a z-range over which EE exceeds a minimum value.

[00207] Figure 15 illustrates an example of combined EE scores and performance maps that may be generated according to the method, and as may be displayed to the user on a display screen of a computing device (e.g., computing device 168). The uppermost map (1) is a global EE map showing the usable EE over all channels and defocus positions. Below the uppermost map (1), the group of maps (2) show the contribution of various optical imperfections such as those described above. As such, these maps (2) inform the user as to how much better imaging performance would be if the user were to have addressed the defect in question. In the present example, the first row of the group of maps (2) show global scores for different colors, which in the present example are, from left to right, blue, green, amber, and red. The second row of the group of maps (2) show, from left to right, overlap, the penalty to EE from imaging device Z-shift, the penalty to EE from axial chromatic aberrations in channel 1, and the penalty to EE from axial chromatic aberrations in channel 2. The third row of the group of maps (2) show, from left to right, the penalty to EE from astigmatism, the penalty to EE from imaging device tilt, the penalty to EE from field flatness in channel 1, and the penalty to EE from field flatness in channel 2.

[00208] Figure 16 is a schematic view of a non-limiting example of a system controller (or controller) 1600 that may be part of or communicate with an optical imaging performance testing system according to an embodiment of the present disclosure. For example, the system controller 1600 may correspond to the system controller 108 of the testing system 100 described above and illustrated in Figure 1.

[00209] In the illustrated embodiment, the system controller 1600 includes a processor 1602 (typically electronics-based), which may be representative of a main electronic processor providing overall control, and one or more electronic processors configured for dedicated control operations or specific signal processing tasks (e.g., a graphics processing unit or GPU, a digital signal processor or DSP, an application-specific integrated circuit or ASIC, a field-programmable gate array or FPGA, etc.). The system controller 1600 also includes one or more memories 1604 (volatile and/or non-volatile) for storing data and/or software. The system controller 1600 may also include one or more device drivers 1606 for controlling one or more types of user interface devices and providing an interface between the user interface devices and components of the system controller 1600 communicating with the user interface devices. Such user interface devices may include user input devices 1608 (e.g., keyboard, keypad, touch screen, mouse, joystick, trackball, and the like) and user output devices 1610 (e.g., display screen, printer, visual indicators or alerts, audible indicators or alerts, and the like). In various embodiments, the system controller 1600 may be considered as including one or more of the user input devices 1608 and/or user output devices 1610, or at least as communicating with them. The system controller 1600 may also include one or more types of computer programs or software 1612 contained in memory and/or on one or more types of computer-readable media 1614. The computer programs or software may contain non-transitory instructions (e.g., logic instructions) for controlling or performing various operations of the testing system 100. The computer programs or software may include application software and system software. System software may include an operating system (e.g., a Microsoft Windows® operating system) for controlling and managing various functions of the system controller 1600, including interaction between hardware and application software. In particular, the operating system may provide a graphical user interface (GUI) displayable via a user output device 1610, and with which a user may interact with the use of a user input device 1608.

[00210] The system controller 1600 may also include one or more data acquisition/signal conditioning components (DAQ) 1616 (as may be embodied in hardware, firmware and/or software) for receiving and processing signals (e.g., imaging data) outputted by the optical system under test (e.g., the optical system 104 described above and illustrated in Figure 1), including formatting data for presentation in graphical form by the GUI. The DAQ 1616 may also be configured to transmit control signals to the optical system to control movement/positioning of the adjustable optical components, such as those described herein. The DAQ 1616 may correspond to all or part of the electronics module 164 described above and illustrated in Figure 1.

[00211] The system controller 1600 may further include a data analyzer (or module) 1618 configured to process signals outputted from the optical system and produce data therefrom, including imaging performance metrics, scores, maps, or the like, as described throughout the present disclosure. Thus, the data analyzer 1618 may be configured to implement (control or perform) all or part of any of the methods disclosed herein. For these purposes, the data analyzer 1618 may be embodied in software and/or electronics (hardware and/or firmware) as appreciated by persons skilled in the art. The data analyzer 1618 may correspond to all or part of the electronics module 164 and/or computing device 168 described above and illustrated in Figure 1.

[00212] It will be understood that Figure 16 is a high-level schematic depiction of an example of a system controller 1600 consistent with the present disclosure. Other components, such as additional structures, devices, electronics, and computer-related or electronic processor-related components, may be included as needed for practical implementations. It will also be understood that the system controller 1600 is schematically represented in Figure 16 as functional blocks intended to represent structures (e.g., circuitries, mechanisms, hardware, firmware, software, etc.) that may be provided. The various functional blocks and any signal links between them have been arbitrarily located for purposes of illustration only and are not limiting in any manner. Persons skilled in the art will appreciate that, in practice, the functions of the system controller 1600 may be implemented in a variety of ways and not necessarily in the exact manner illustrated in Figure 16 and described by example herein.

[00213] EXEMPLARY EMBODIMENTS

[00214] Exemplary embodiments provided in accordance with the presently disclosed subject matter include, but are not limited to, the following:

[00215] 1. A method for testing imaging performance of an optical system, the method comprising: positioning a test target at an object plane of the optical system; operating the optical system to illuminate the test target and generate an image beam; operating a focusing stage of the optical system to acquire a plurality of images of the test target from the image beam corresponding to a plurality of values of defocus; calculating from each image a plurality of Edge Spread Functions at a plurality of locations within the test target; constructing a plurality of Point Spread Function models from the respective Edge Spread Functions; and based on the Point Spread Function models, calculating a plurality of imaging performance values corresponding to the plurality of locations, wherein the imaging performance values are based on a metric selected from the group consisting of: Ensquared Energy; Encircled Energy; and Strehl ratio.

[00216] 2. The method of embodiment 1, wherein the plurality of locations comprises a plurality of field coordinates (x, y) in the object plane within the test target.

[00217] 3. The method of any of the preceding embodiments, wherein the plurality of locations comprises a plurality of focal positions (z) along an optical axis passing through the test target, and operating the optical system comprises acquiring a plurality of images of the test target at different focal positions (z).

[00218] 4. The method of any of the preceding embodiments, comprising producing one or more maps of imaging performance based on a combination of the imaging performance values.

[00219] 5. The method of embodiment 4, comprising comparing two or more of the maps to provide a measure of relative alignment of a focal plane of each imaging device or channel relative to the object plane.

[00220] 6. The method of embodiment 4 or 5, wherein the one or more maps correspond to different imaging channels, and the different imaging channels correspond to different imaging devices of the optical system operated to acquire the image, or different wavelengths of the image acquired, or both different imaging devices and different colors.

[00221] 7. The method of embodiment 6, comprising comparing two or more of the maps to provide a measure of relative alignment of each imaging device or channel relative to each other; and both of the foregoing.

[00222] 8. The method of any of embodiments 4-7, comprising, after producing the one or more maps, adjusting a position of one or more optical components of the optical system, or replacing the one or more optical components, based on information provided by the one or more maps.

[00223] 9. The method of embodiment 8, wherein the one or more optical components are selected from the group consisting of: one or more of the imaging devices; an objective of the optical system; one or more tube lenses of the optical system; one or more mirrors or dichroic mirrors; and a combination of two or more of the foregoing.

[00224] 10. The method of embodiment 8 or 9, wherein the one or more maps are one or more initial maps, and further comprising, after adjusting or replacing the one or more optical components, acquiring a new image of the test target, calculating a plurality of new imaging performance values, and producing one or more new maps of imaging performance.

[00225] 11. The method of embodiment 10, comprising comparing the one or more new maps to the one or more initial maps to determine position adjustments to be made to the one or more optical components for optimizing imaging performance.

[00226] 12. The method of any of embodiments 8-11, wherein the adjusting or replacing finds an optimum pair of conjugate image planes in the optical system.

[00227] 13. The method of any of embodiments 8-12, wherein the adjusting or replacing improves an attribute selected from the group consisting of: focus matching of the imaging devices; imaging device tilt; flattening of field curvature; reduction of astigmatism; reduction of wavelength-dependent focus shift; and a combination of two or more of the foregoing.

[00228] 14. The method of any of the preceding embodiments, comprising calculating one or more global scores of imaging performance based on a combination of the imaging performance values.

[00229] 15. The method of embodiment 14, comprising modifying the one or more global scores to penalize or reward heterogeneity of the imaging performance values over a range of field coordinates (x, y) in the object plane within the test target, or through a range of focal positions (z) of the object plane, or both of the foregoing.

[00230] 16. The method of embodiment 14 or 15, comprising modifying the one or more global scores to penalize or reward similarity of different imaging channels as a function of field coordinates (x, y) in the object plane within the test target, or focal position (z) of the object plane, or both of the foregoing, wherein the different imaging channels correspond to different imaging devices of the optical system operated to acquire the image, or different wavelengths of the image acquired, or both different imaging devices and different colors.

[00231] 17. The method of any of the preceding embodiments, comprising, after calculating the imaging performance values, adjusting a position of one or more optical components of the optical system based on information provided by the imaging performance values.

[00232] 18. The method of embodiment 17, wherein the imaging performance values are initial imaging performance values, and further comprising, after adjusting the one or more optical components, acquiring a new image of the test target, and calculating a plurality of new imaging performance values.

[00233] 19. The method of embodiment 18, comprising comparing the new imaging performance values to the initial imaging performance values to determine position adjustments to be made to the one or more optical components for optimizing imaging performance.

[00234] 20. The method of any of the preceding embodiments, wherein positioning the test target comprises aligning the target relative to a datum shared with one or more optical components of the optical system.

[00235] 21. The method of any of the preceding embodiments, wherein operating the optical system comprises utilizing an objective in the image beam, and further comprising adjusting a position of the objective along an axis of the image beam to acquire a plurality of images of the test target at different focal positions (z).

[00236] 22. The method of embodiment 21, wherein the objective has a configuration selected from the group consisting of: the objective is configured for infinite conjugate microscopy; and the objective is configured for finite conjugate microscopy.

[00237] 23. The method of any of the preceding embodiments, wherein operating the optical system comprises operating two or more imaging devices to acquire respective images of the test target.

[00238] 24. The method of embodiment 23, wherein the two or more imaging devices acquire the respective images at two or more different wavelengths.

[00239] 25. The method of embodiment 24, comprising splitting an image beam propagating from the test target into two or more image beam portions, and transmitting the two or more image beam portions to the two or more imaging devices, respectively.

[00240] 26. The method of any of the preceding embodiments, wherein operating the optical system comprises operating a filter assembly to filter the image beam at a selected wavelength.

[00241] 27. The method of any of the preceding embodiments, wherein operating the optical system comprises utilizing a tube lens in the image beam, and further comprising adjusting the relative position of one or more lenses or lens groups within the tube lens to acquire a plurality of images of the test target at different positions of the tube lens.

[00242] 28. The method of any of the preceding embodiments, wherein the test target comprises a dark material and an array of bright features disposed on the dark material.

[00243] 29. The method of embodiment 28, wherein the bright features are polygonal.

[00244] 30. The method of embodiment 29, wherein the bright features are tilted such that edges of the bright features are oriented at angles to a pixel array of the optical imaging system that acquires the image.

[00245] 31. An optical imaging performance testing system, comprising: a target holder configured to hold a test target; a light source configured to illuminate the test target; an imaging device configured to acquire images of the test target; an objective positioned in an imaging light path between the test target and the imaging device, wherein a position of at least one of the objective or the target holder is adjustable along the imaging light path; and a controller comprising an electronic processor and a memory, and configured to control the steps of the method of any of the preceding embodiments of calculating the plurality of Edge Spread Functions, constructing the plurality of Point Spread Function models, and calculating the plurality of imaging performance values.

[00246] 32. The system of embodiment 31, wherein the objective has a configuration selected from the group consisting of: the objective is configured for infinite conjugate microscopy; and the objective is configured for finite conjugate microscopy.

[00247] 33. The system of embodiment 31 or 32, wherein the imaging device comprises a plurality of imaging devices, and further comprising an image separation mirror configured to split the imaging light path into a plurality of imaging light paths respectively directed to the imaging devices.

[00248] 34. The system of any of embodiments 31-33, comprising a filter assembly configured to select a wavelength of an image beam in the imaging light path for propagation to the imaging device.

[00249] 35. The system of any of embodiments 31-34, comprising a tube lens positioned in the imaging light path, wherein the relative position of one or more lenses or lens groups within the tube lens is adjustable.

[00250] 36. The system of any of embodiments 31-35, comprising the test target, wherein the test target comprises a dark material and an array of bright features disposed on the dark material.

[00251] 37. The system of embodiment 36, wherein the bright features are polygonal.

[00252] 38. The system of embodiment 37, wherein the bright features are tilted such that edges of the bright features are oriented at angles to a pixel array of the optical imaging system that acquires the image.

[00253] 39. A non-transitory computer-readable medium, comprising instructions stored thereon, that when executed on a processor, perform the steps of the method of any of the preceding embodiments of calculating the plurality of Edge Spread Functions, constructing the plurality of Point Spread Function models, and calculating the plurality of imaging performance values.

[00254] 40. A system for testing imaging performance of an optical system, comprising the computer-readable storage medium of embodiment 39.

[00255] It will be understood that one or more of the processes, sub-processes, and process steps described herein may be performed by hardware, firmware, software, or a combination of two or more of the foregoing, on one or more electronic or digitally-controlled devices. The software may reside in a software memory (not shown) in a suitable electronic processing component or system such as, for example, the controller 108 or 1600 schematically depicted in Figure 1 or 16. The software memory may include an ordered listing of executable instructions for implementing logical functions (that is, “logic” that may be implemented in digital form such as digital circuitry or source code, or in analog form such as an analog source such as an analog electrical, sound, or video signal). The instructions may be executed within a processing module, which includes, for example, one or more microprocessors, general purpose processors, combinations of processors, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), or application-specific integrated circuits (ASICs). Further, the schematic diagrams describe a logical division of functions having physical (hardware and/or software) implementations that are not limited by architecture or the physical layout of the functions. The examples of systems described herein may be implemented in a variety of configurations and operate as hardware/software components in a single hardware/software unit, or in separate hardware/software units.

[00256] The executable instructions may be implemented as a computer program product having instructions stored therein which, when executed by a processing module of an electronic system (e.g., the controller 108 or 1600 in Figure 1 or 16), direct the electronic system to carry out the instructions. The computer program product may be selectively embodied in any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as an electronic computer-based system, processor-containing system, or other system that may selectively fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium is any non-transitory means that may store the program for use by or in connection with the instruction execution system, apparatus, or device. The non-transitory computer-readable storage medium may selectively be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. A non-exhaustive list of more specific examples of non-transitory computer-readable media includes: an electrical connection having one or more wires (electronic); a portable computer diskette (magnetic); a random access memory (electronic); a read-only memory (electronic); an erasable programmable read-only memory such as, for example, flash memory (electronic); a compact disc memory such as, for example, CD-ROM, CD-R, or CD-RW (optical); and digital versatile disc memory, i.e., DVD (optical). Note that the non-transitory computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory or machine memory.

[00257] It will also be understood that the term “in signal communication” as used herein means that two or more systems, devices, components, modules, or sub-modules are capable of communicating with each other via signals that travel over some type of signal path. The signals may be communication, power, data, or energy signals, which may communicate information, power, or energy from a first system, device, component, module, or sub-module to a second system, device, component, module, or sub-module along a signal path between the first and second system, device, component, module, or sub-module. The signal paths may include physical, electrical, magnetic, electromagnetic, electrochemical, optical, wired, or wireless connections. The signal paths may also include additional systems, devices, components, modules, or sub-modules between the first and second system, device, component, module, or sub-module.

[00258] More generally, terms such as “communicate” and “in . . . communication with” (for example, a first component “communicates with” or “is in communication with” a second component) are used herein to indicate a structural, functional, mechanical, electrical, signal, optical, magnetic, electromagnetic, ionic or fluidic relationship between two or more components or elements. As such, the fact that one component is said to communicate with a second component is not intended to exclude the possibility that additional components may be present between, and/or operatively associated or engaged with, the first and second components.

[00259] It will be understood that various aspects or details of the invention may be changed without departing from the scope of the invention. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation — the invention being defined by the claims.