

Title:
CAMERA-BASED INTERFERENCE FRINGE METROLOGY WITH SUPERRESOLUTION FOR LASER BEAM ANGLE AND ABSOLUTE FREQUENCY MEASUREMENT
Document Type and Number:
WIPO Patent Application WO/2024/068961
Kind Code:
A1
Abstract:
A method for measuring an interference fringe period of interference fringes from two combined laser beams includes imaging the combined beams with first and second digital image sensor arrays to produce first and second images, where a spatial period of pixels of the first and second arrays is greater than a fringe spatial period of the interference fringes, resulting in a first and second periodic intensity modulation pattern in the first and second images due to a spatial sub-sampling aliasing effect; calculating, up to an integer ambiguity, a value of the interference fringe spatial period, from the first periodic intensity modulation pattern and from the spatial period of the pixels of the first array; and resolving the integer ambiguity using the first and second periodic intensity modulation patterns and characteristics of the first and second arrays, thereby determining the interference fringe period with no integer ambiguity.

Inventors:
LÖFFLER WOLFGANG (NL)
WAN LIPENG (CN)
VISIMBERGA GIUSEPPE (NL)
Application Number:
PCT/EP2023/077115
Publication Date:
April 04, 2024
Filing Date:
September 29, 2023
Assignee:
UNIV LEIDEN (NL)
International Classes:
G01J9/02; G01B9/02
Foreign References:
EP3832251A12021-06-09
Other References:
PARKER D H ED - DRIGGERS RONALD G: "MOIRE PATTERNS IN THREE-DIMENSIONAL FOURIER SPACE", OPTICAL ENGINEERING, SOC. OF PHOTO-OPTICAL INSTRUMENTATION ENGINEERS, BELLINGHAM, vol. 30, no. 10, 1 October 1991 (1991-10-01), pages 1534 - 1541, XP000231876, ISSN: 0091-3286, DOI: 10.1117/12.55958
DORRÍO B V ET AL: "REVIEW ARTICLE; Phase-evaluation methods in whole-field optical measurement techniques", MEASUREMENT SCIENCE AND TECHNOLOGY, IOP, BRISTOL, GB, vol. 10, no. 3, 1 March 1999 (1999-03-01), pages R33 - R55, XP020064718, ISSN: 0957-0233, DOI: 10.1088/0957-0233/10/3/005
Attorney, Agent or Firm:
FRKELLY (IE)
Claims:
CLAIMS

1. A method for measuring an interference fringe period, the method comprising: a) combining two laser beams to create interference fringes in an imaging plane, wherein the interference fringes have a fringe spatial period; b) imaging the combined beams with a digital image sensor array positioned at the imaging plane to produce a first image, wherein a spatial period of pixels of the digital image sensor array is greater than the fringe spatial period, resulting in a first periodic intensity modulation pattern in the first image due to a spatial sub-sampling aliasing effect; c) calculating, up to an integer ambiguity, a value of the interference fringe spatial period, from the first periodic intensity modulation pattern in the first image and from the spatial period of the pixels of the digital image sensor array; d) altering a pose of the digital image sensor array, resulting in a predetermined change in an effective spatial period of the pixels of the digital image sensor array; e) repeating steps b) and c) to produce a second image and a second periodic intensity modulation pattern, wherein the second image is imaged after altering the pose of the digital image sensor array; and f) resolving the integer ambiguity from the first periodic intensity modulation pattern, from the second periodic intensity modulation pattern, and from characteristics of the altered pose of the digital image sensor array, thereby determining a measured value of the interference fringe period with no integer ambiguity.

2. The method of claim 1 further comprising: splitting a laser beam to generate the two beams, and directing the two beams along two distinct paths; wherein combining two laser beams to create interference fringes in an imaging plane comprises combining the two laser beams at a predetermined relative angle; and determining a wavelength of the laser beam from the measured value of the interference fringe period and from the predetermined relative angle.

3. The method of claim 1 further comprising: directing the two beams to intersect at the imaging plane at a relative angle; wherein the two beams have a predetermined wavelength; and determining the relative angle of the laser beams from the measured value of the interference fringe period and from the predetermined wavelength.

4. The method of claim 1 further comprising: repeating steps b), c), d) multiple times, whereby the measured value of the interference fringe period is determined with improved accuracy.

5. The method of claim 1 wherein altering the pose of the digital image sensor array comprises rotating the digital sensor array.

6. The method of claim 1 wherein altering the pose of the digital image sensor array comprises tilting the digital sensor array.

7. A method for measuring an interference fringe period, the method comprising: a) combining two laser beams to create interference fringes in an imaging plane, wherein the interference fringes have a fringe spatial period; b) imaging the combined beams with a first digital image sensor array positioned at the imaging plane to produce a first image, wherein a spatial period of pixels of the first digital image sensor array is greater than the fringe spatial period, resulting in a first periodic intensity modulation pattern in the first image due to a spatial sub-sampling aliasing effect; c) imaging the combined beams with a second digital image sensor array positioned at the imaging plane to produce a second image, wherein a spatial period of pixels of the second digital image sensor array is greater than the fringe spatial period, resulting in a second periodic intensity modulation pattern in the second image due to the spatial sub-sampling aliasing effect; d) calculating, up to an integer ambiguity, a value of the interference fringe spatial period, from the first periodic intensity modulation pattern in the first image and from the spatial period of the pixels of the first digital image sensor array; e) resolving the integer ambiguity using the first periodic intensity modulation pattern, the second periodic intensity modulation pattern, and characteristics of the first digital image sensor array and the second digital image sensor array, thereby determining a measured value of the interference fringe period with no integer ambiguity.

8. The method of claim 7 wherein steps (b) and (c) are performed simultaneously with distinct digital image sensor arrays.

9. The method of claim 7 wherein steps (b) and (c) are performed sequentially using the same digital image sensor array.

10. The method of claim 7 wherein the first digital image sensor array and the second digital image sensor array are the same digital image sensor array.

11. The method of claim 7 wherein the first digital image sensor array and the second digital image sensor array are two distinct digital image sensor arrays.

12. The method of claim 7 wherein the first digital image sensor array and the second digital image sensor array are two distinct digital image sensor arrays having distinct pixel sizes in at least one dimension.

13. The method of claim 7 wherein the first digital image sensor array and the second digital image sensor array are two distinct sub-arrays of the same digital image sensor array having distinct pixel sizes in at least one dimension.

14. The method of claim 7 wherein the first digital image sensor array and the second digital image sensor array are the same digital image sensor array, and wherein the first and second images are produced with different angles between the two laser beams.

Description:
CAMERA-BASED INTERFERENCE FRINGE METROLOGY WITH SUPERRESOLUTION FOR LASER BEAM ANGLE AND ABSOLUTE FREQUENCY MEASUREMENT

FIELD OF THE INVENTION

The present invention relates generally to techniques for super-resolution measurement of optical interference fringes and, more specifically, to applications of such measurements to wave meters and angle measurement between optical beams.

BACKGROUND OF THE INVENTION

The precise measurement of the periodicities of light fields, known as fringe metrology, is essential for metrology tasks such as high-precision laser frequency determination and angular and position sensing, which are crucial, for instance, in nanolithography. Periodic patterns must also be analyzed when investigating complex field configurations, such as the recently discovered bright superchiral fields that can be synthesized by interference of several optical beams, and when exploring more general interference phenomena. Several approaches to high-precision fringe metrology have been reported, using position-sensing detectors, Fresnel zone plates, and heterodyne period measurements.

Direct lens-free imaging of these fields is challenging because of the large camera pixel size compared to the size of interference pattern periodicities, in particular for large-angle interference. The pixel size δ of modern CCD or CMOS image sensors is usually at least several micrometers, due to limitations of the silicon base material and the achievable signal-to-noise ratio, resulting in a sampling frequency of ƒ s = 1/δ. The Nyquist-Shannon sampling theorem tells us that only structures with spatial frequencies smaller than ƒ s /2 can be resolved in all detail, leading to the condition ƒ s /2 > ƒ L , where ƒ L is the spatial frequency of the interference pattern. Known methods for probing optical fields on scales much smaller than usual pixel sizes include nanoparticle scanning methods, near-field scanning optical microscopy (NSOM), and vectorial field reconstruction. The spatial resolution of these methods is ultimately limited by the size of the probe to around 80 nm; moreover, complex scanning equipment with nanometer precision is needed, and scanning-based methods are rather slow.

SUMMARY OF THE INVENTION

The superposition of several optical beams with large mutual angles results in sub- micrometer periodic patterns with a complex intensity, phase and polarization structure. For high-resolution imaging thereof, one often employs optical super-resolution methods such as scanning nano-particle imaging. Here, we disclose that by using a conventional arrayed image sensor in combination with 2D Fourier analysis, the periodicities of light fields much smaller than the pixel size can be resolved in a simple and compact setup, with a resolution far beyond the Nyquist limit set by the pixel size. We demonstrate the ability to resolve periodicities with spatial frequencies of ~3 μm -1 , 15 times higher than the pixel sampling frequency of 0.188 μm -1 . This is possible by analyzing high-quality Fourier aliases in the first Brillouin zone. In order to obtain the absolute spatial frequencies of the interference patterns, we show that simple rotation of the image sensor is sufficient, which modulates the effective pixel size and allows determination of the original Brillouin zone. Based on this method, we demonstrate wavelength sensing with a resolving power beyond 100,000 without any special equipment.

Disclosed herein are techniques for CCD/CMOS-based super-resolution imaging of periodic light patterns used for high-accuracy (i) light frequency measurement (wavemeter) and/or (ii) angle measurement between two or more optical beams.

These techniques provide a technical solution to (at least) two technical problems: measurement of laser frequency (or wavelength) with high resolution, and measurement of the angle between two beams, which can in turn be used to measure the angle of objects. Measurement of the frequency of laser light usually requires expensive wave meters that make use of interference in glass plates. We achieve the same with less effort. Measurement of angles between laser beams (from which the angle of objects can be determined) requires a complex metrology setup and, in particular, very expensive calibration objects. Our method can measure the absolute angle between two laser beams with very high precision, without using complex elements.

The interference of two laser beams results in interference fringes that can be much smaller than the camera pixel size, and are therefore broadly thought to be undetectable. We have discovered that, using Fourier analysis of the camera pictures and exploiting aliasing (sub-sampling/undersampling), one can determine the interference spacing with extreme sub-pixel resolution. A single measurement allows determination of the fringe spacing up to an unknown multiplicative factor; this factor can be obtained using a second measurement with a different relative size of the spatial frequency of the interference fringes and the effective pixel size. This can be accomplished either by repeating the measurement or by performing another simultaneous measurement, where the second measurement and the first measurement are performed with different effective camera pixel sizes relative to the spatial periodicity of the interference fringes. Several approaches for accomplishing this are possible: (i) rotation of the camera chip, which effectively changes the pixel size (if pixels are square or rectangular, possibly also non-rectangular and containing a "dead zone", as is common with CMOS camera chips); (ii) for angular measurement, adjustment of the light wavelength; (iii) tilting of the camera chip, which effectively changes the pixel size in one dimension; (iv) varying the angle between the interfering beams; (v) using pixel arrays or sub-arrays whose pixel sizes differ from each other in at least one dimension; (vi) positioning different miniaturized prisms or other optically dispersive elements before different pixel sub-arrays of the camera chip; (vii) using two or more camera chips with different pixel sizes after suitably splitting the beam; or (viii) any combination of the above.
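As a rough sketch of this ambiguity-resolution idea, the following Python snippet (with illustrative values only: a 2234.85 mm⁻¹ fringe, a 5.3 μm pixel pitch, and a hypothetical 10-degree rotation modeled as a simple change of effective sampling frequency) builds the candidate sets of true frequencies consistent with each aliased measurement and intersects them:

```python
import numpy as np

def alias(f_true, f_s):
    """Fold a true spatial frequency into the first Brillouin zone [0, f_s/2]."""
    f = f_true % f_s
    return min(f, f_s - f)

f_true = 2234.85              # mm^-1, fringe frequency (unknown to the instrument)
f_s1 = 1 / 5.3e-3             # mm^-1, sampling frequency of the pixel grid
f_s2 = f_s1 / np.cos(np.radians(10.0))  # hypothetical effective sampling after rotation

m1, m2 = alias(f_true, f_s1), alias(f_true, f_s2)

def candidates(f_m, f_s, n_max=30):
    """All true frequencies n*f_s +/- f_m consistent with one aliased measurement."""
    return {round(abs(n * f_s + s * f_m), 2) for n in range(n_max) for s in (+1, -1)}

# Only the true frequency is consistent with both measurements
common = candidates(m1, f_s1) & candidates(m2, f_s2)
```

The intersection contains the true fringe frequency; each individual candidate set, by itself, leaves the integer ambiguity unresolved.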

In one aspect, the invention provides a method for measuring an interference fringe period, the method comprising: a) combining two laser beams to create interference fringes in an imaging plane, wherein the interference fringes have a fringe spatial period; b) imaging the combined beams with a digital image sensor array positioned at the imaging plane to produce a first image, wherein a spatial period of pixels of the digital image sensor array is greater than the fringe spatial period, resulting in a first periodic intensity modulation pattern in the first image due to a spatial sub-sampling aliasing effect; c) calculating, up to an integer ambiguity, a value of the interference fringe spatial period, from the first periodic intensity modulation pattern in the first image and from the spatial period of the pixels of the digital image sensor array; d) altering a pose of the digital image sensor array, resulting in a predetermined change in an effective spatial period of the pixels of the digital image sensor array; e) repeating steps b) and c) to produce a second image and a second periodic intensity modulation pattern, wherein the second image is imaged after altering the pose of the digital image sensor array; and f) resolving the integer ambiguity from the first periodic intensity modulation pattern, from the second periodic intensity modulation pattern, and from characteristics of the altered pose of the digital image sensor array, thereby determining a measured value of the interference fringe period with no integer ambiguity.

In some embodiments, the method may include splitting a laser beam to generate the two beams, and directing the two beams along two distinct paths, wherein combining two laser beams to create interference fringes in an imaging plane comprises combining the two laser beams at a predetermined relative angle; and determining a wavelength of the laser beam from the measured value of the interference fringe period and from the predetermined relative angle.

In some embodiments, the method may include directing the two beams to intersect at the imaging plane at a relative angle; wherein the two beams have a predetermined wavelength; and determining the relative angle of the laser beams from the measured value of the interference fringe period and from the predetermined wavelength.

In some embodiments, the method may include repeating steps b), c), and d) multiple times, whereby the measured value of the interference fringe period is determined with improved accuracy. In some embodiments, altering the pose of the digital image sensor array comprises rotating the digital sensor array.

In some embodiments, altering the pose of the digital image sensor array comprises tilting the digital sensor array.

In another aspect, the invention provides a method for measuring an interference fringe period, the method comprising: a) combining two laser beams to create interference fringes in an imaging plane, wherein the interference fringes have a fringe spatial period; b) imaging the combined beams with a first digital image sensor array positioned at the imaging plane to produce a first image, wherein a spatial period of pixels of the first digital image sensor array is greater than the fringe spatial period, resulting in a first periodic intensity modulation pattern in the first image due to a spatial sub-sampling aliasing effect; c) imaging the combined beams with a second digital image sensor array positioned at the imaging plane to produce a second image, wherein a spatial period of pixels of the second digital image sensor array is greater than the fringe spatial period, resulting in a second periodic intensity modulation pattern in the second image due to the spatial sub-sampling aliasing effect; d) calculating, up to an integer ambiguity, a value of the interference fringe spatial period, from the first periodic intensity modulation pattern in the first image and from the spatial period of the pixels of the first digital image sensor array; and e) resolving the integer ambiguity using the first periodic intensity modulation pattern, the second periodic intensity modulation pattern, and characteristics of the first digital image sensor array and the second digital image sensor array, thereby determining a measured value of the interference fringe period with no integer ambiguity.

In some embodiments, steps (b) and (c) are performed simultaneously with distinct digital image sensor arrays. Alternatively, steps (b) and (c) are performed sequentially using the same digital image sensor array. In some embodiments, the first digital image sensor array and the second digital image sensor array are the same digital image sensor array. In some embodiments, the first digital image sensor array and the second digital image sensor array are two distinct digital image sensor arrays.

In some embodiments, the first digital image sensor array and the second digital image sensor array are two distinct digital image sensor arrays having distinct pixel sizes in at least one dimension.

In some embodiments, the first digital image sensor array and the second digital image sensor array are two distinct sub-arrays of the same digital image sensor array having distinct pixel sizes in at least one dimension.

In some embodiments, the first digital image sensor array and the second digital image sensor array are the same digital image sensor array, and wherein the first and second images are produced with different angles between the two laser beams.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1. Illustration of mapping light fields using an image sensor whose pixel size is much larger than the periodicities of the interference pattern. In the Fourier transform of images captured by the camera, the aliases of high-spatial frequency components are clearly visible in the first Brillouin zone indicated by the box.

Fig. 2. Experimental setup for pixel super-resolution interference pattern sensing, according to an embodiment of the invention.

Figs. 3A-3F. Experimental results of probing superposed light fields using a CMOS image sensor with a 5.3 μm pixel pitch, where the structure of the interference pattern is much smaller than the pixel size.

Figs. 4A-4B. Measurements illustrating resolving 3-beam light field superpositions.

Figs. 5A-5B. Graphs illustrating experimental results of phase sensing.

Figs. 6A-6B. Graphs illustrating experimental results of wavelength sensing using a technique according to an embodiment of the invention.

Fig. 7. Numerical experiments on the measurement for different fill factors. The measured spatial frequency (red markers, right axis) is uncorrelated to the pixel fill factor as expected, in contrast to the visibility of interference pattern (blue markers, left axis) which depends strongly on the pixel fill factor.

Fig. 8. An illustration of a pixel array having two sub-arrays whose pixels differ in size from each other, according to an embodiment of the invention.

Fig. 9. An illustration of prisms positioned above a pixel array, according to an embodiment of the invention.

Fig. 10. A flowchart illustrating steps of a method for measuring an interference fringe period, according to an embodiment of the invention.

DETAILED DESCRIPTION

1. INTRODUCTION

Direct lens-free imaging of physical light fields is conventionally considered to be impossible because of the large camera pixel size compared to the interference pattern periodicities. Fig. 1 is an illustration of the mapping of light fields 100 using a standard image sensor 102 whose pixel size is much larger than the periodicities of the interference pattern in the light field 100. As a result, conventional sensing is not able to resolve the sub-pixel patterns in the light field. Here we disclose how periodic light structures can be reliably detected with a simple arrayed image sensor such as a CCD or CMOS camera, by exploiting the aliasing effect that occurs when sampling below the Nyquist limit. In the 2D fast Fourier transform 104 of images captured by the camera 102, the aliases of the high-spatial-frequency interference patterns are clearly visible in the first Brillouin zone 106. Their spatial frequencies can be determined with remarkable precision, and the relative phase of the beams can also be retrieved with high accuracy.

We show that the ambiguity in calculating the original spatial frequency from the measured spatial alias frequencies can be resolved by changing the effective image sensor pixel size using several techniques, such as simply rotating the image sensor. This method is fast, and the setup is extremely simple, since it employs only an image sensor mounted on a rotation stage and no imaging optics. We demonstrate measurement of interference periodicities 15 times smaller than the pixel size. Undersampling results in a reduction of the signal-to-noise ratio (SNR), but we find that this is not a major issue here, and the dynamic range of standard CCD or CMOS detectors is sufficient. Undersampling is more often used in the temporal domain, including in white-light interference microscopy as a function of path delay, but it has not been directly explored in the spatial domain with an arrayed image sensor.
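The aliasing effect described above can be reproduced numerically. The sketch below (illustrative values only; a unit fill factor and ideal square pixels are assumed) integrates a fringe pattern far above the Nyquist limit over each pixel and locates the alias peak in the Fourier transform of the pixel data:

```python
import numpy as np

pitch = 5.3e-3            # mm, pixel pitch
n_pix = 512
f_L = 2234.85             # mm^-1, true fringe frequency (Nyquist is only ~94.3 mm^-1)

# Integrate a cosine fringe over each pixel (unit fill factor assumed)
sub = 64                  # sub-samples per pixel for the integration
x = (np.arange(n_pix * sub) + 0.5) * (pitch / sub)
intensity = 1 + np.cos(2 * np.pi * f_L * x)
pixels = intensity.reshape(n_pix, sub).mean(axis=1)

# The alias peak is clearly visible in the spectrum of the pixel data
spec = np.abs(np.fft.rfft(pixels - pixels.mean()))
f_axis = np.fft.rfftfreq(n_pix, d=pitch)
f_alias = f_axis[np.argmax(spec)]

f_s = 1 / pitch
n = round(f_L / f_s)      # alias order; here the fringe sits in the 12th zone
```

Pixel integration attenuates the fringe amplitude strongly, but the alias peak still dominates the (mean-subtracted) spectrum, consistent with the observation that the reduced SNR is not a major issue.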

We describe an embodiment of a high-precision wavemeter with a resolution of 5 pm using a 5.3 μm pixel-size image sensor. Alternatively, if the laser wavelength is known, the same technique can be used to measure the angle between two laser beams with an accuracy of 8 μrad.

THEORY AND ARRANGEMENT

Plane-wave interference.

Let us consider a superposition of N plane electromagnetic waves with the same angular frequency ω = ck and a fixed phase relation. The resulting electric field is the sum of the individual plane waves, E(r) = Σⱼ Eⱼ exp(i kⱼ · r + i φⱼ), j = 1, …, N.

Thus, the mean square of the electric field intensity is S(r) = Σⱼ |Eⱼ|²/2 + Σⱼ<ₗ |γⱼₗ| cos(kⱼₗ · r + arg γⱼₗ), where the first term is the zero-spatial-frequency background and the second term describes the pairwise interference, resulting in fringes described by kⱼₗ = kⱼ − kₗ, which gives the orientation of the pattern with spatial frequency ƒⱼₗ = |kⱼₗ|/(2π), assuming that the polarizations are not orthogonal.

By performing a continuous 2D Fourier transform of the mean square of the electric field, we obtain the corresponding field in the spatial-frequency domain, S̃(ƒ) = α δ(ƒ) + Σⱼ<ₗ [γⱼₗ δ(ƒ − ƒⱼₗ) + γⱼₗ* δ(ƒ + ƒⱼₗ)], where δ(·) denotes the Dirac delta function, α is the zero-frequency intensity, and the weight functions |γⱼₗ| and arg(γⱼₗ) denote the magnitude and phase of the spatial-frequency components, respectively. The vector ƒⱼₗ describes the frequency and direction of the interference pattern from each plane-wave pair.

Image sensor sampling and Fourier transform.

On an arrayed image sensor chip with a pixel pitch of δ x and δ y , the intensity is integrated over the area of each pixel in an incoherent way. The observation of such light fields with the image sensor can be written as S s (r) = [S(r) ⊛ S p (r)] · T(r).

Here the symbol ⊛ stands for the 2D convolution of the superposed light field S(r) with the pixel responsivity distribution S p (r), which is then filtered by a comb function T consisting of an array of Dirac delta functions characterized by the pixel spacing Δr = (δ x , δ y ). Due to pixelation, the discrete Fourier transform of the light pattern sampled by the sensor array becomes a sum over spatial aliases (for a detailed derivation, see Appendix 1), where the real integers u and v characterize the order of spatial aliasing and the complex function S̃ p gives the discrete response function.

Here S̃ p is the Fourier transform of the two-dimensional pixel responsivity distribution within each pixel. For simplicity, we now assume that the image sensor has square pixels with δ x = δ y = δ = 1/ƒ s . Eqs. (6) and (7) suggest that the nature of spatial sampling results in a periodic structure of the spatial frequency of the light fields, containing both the true spatial frequency and its aliases. Aliases appear when high spatial frequencies (ƒ > ƒ s /2) "fold back" into the first Brillouin zone (ƒ < ƒ s /2), a phenomenon analogous to the "Umklapp process" in solid-state physics. Thus, a real high spatial frequency ƒ L of the light field at alias order n results in a different measured spatial frequency, ƒ m = |ƒ L − nƒ s | in the 1D case, and correspondingly per component ƒ m = |ƒ L − (uƒ s , vƒ s )| in the 2D case.
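The 1D folding relation ƒ m = |ƒ L − nƒ s | can be illustrated with a few lines of Python (values illustrative):

```python
f_s = 1 / 5.3e-3   # mm^-1, sampling frequency for a 5.3 um pixel pitch

def measured_alias(f_L, f_s):
    """1D fold-back: f_m = |f_L - n*f_s|, with n the nearest integer to f_L/f_s."""
    n = round(f_L / f_s)
    return abs(f_L - n * f_s), n

# Sweeping the true frequency traces the triangle-wave fold into [0, f_s/2]
folds = {f_L: measured_alias(f_L, f_s) for f_L in (50.0, 150.0, 250.0, 2234.85)}
```

Every measured alias lies in the first Brillouin zone [0, ƒ s /2], which is why the original zone number n must be determined separately.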

Resolving the frequency ambiguity.

In order to determine the original spatial frequency, one needs to determine the original Brillouin zone number. We found that by rotating the image sensor, the original Brillouin zone number, and thus the true spatial frequency of the light fields, can be determined. This is because rotation of the image sensor around its central surface normal changes the effective image sensor pixel dimensions and thereby the sampling frequency ƒ s . From the perspective of the light field, its spatial frequencies ƒⱼₗ are transformed into ƒ′ⱼₗ = R(θ)ƒⱼₗ, where R(θ) is the standard rotation matrix (explicit cases are discussed below).
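A minimal numerical sketch of this rotation argument, assuming ideal square pixels and folding each frequency component independently into the first Brillouin zone:

```python
import numpy as np

def rotate(f_vec, theta):
    """Spatial-frequency vector expressed in the frame of a sensor rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * f_vec[0] + s * f_vec[1], -s * f_vec[0] + c * f_vec[1]])

def alias_2d(f_vec, f_s):
    """Fold each component into the first Brillouin zone [-f_s/2, f_s/2]."""
    return f_vec - f_s * np.round(f_vec / f_s)

f_s = 1 / 5.3e-3                   # mm^-1, square pixels assumed
f_L = np.array([2234.85, 0.0])     # fringe normal along x before rotation

# Two orientations give two different aliases of the same physical frequency;
# together they pin down the original Brillouin zone.
aliases = {t: alias_2d(rotate(f_L, np.radians(t)), f_s) for t in (0.0, 5.0)}
```

Rotating the sensor changes how the fixed physical frequency projects onto the pixel axes, so the alias moves in a way that depends on the (unknown) zone number, which is what makes the rotation informative.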

An experimental apparatus for implementing pixel super-resolution interference pattern sensing is shown in Fig. 2. It includes a He-Ne laser 200; a tunable laser diode 202 (NewFocus 6224); a wavemeter 204 (HighFinesse WS6-200); a rotating camera platform 206 including an image sensor and rotation stage; mirrors 208a-208h; beam splitters 210a-210d; a half waveplate 212; and a polarizer 214. The beam 216 propagates through the apparatus as shown.

Although it is clear from Eq. (6) that the pixel responsivity distribution of a specific image sensor is important, our method works for all types. Here we use a CMOS image sensor (Cinogy CMOS 1201-nano, i.e., a 5.3 μm pixel-size image sensor without protective glass) mounted at the center of a precision rotation stage (Newport M-URM80APP, controlled by a Newport ESP300 controller) with an angular resolution of 0.001 degree, allowing continuous rotation of the sensor around its central surface normal. In the process of retrieving the periodicities of the interference fringes of the superimposed light beams, all the necessary steps, including rotation of the image sensor chip, recording of images, and performing fast 2D Fourier transforms (FFTs), are automated by a computer.

MEASUREMENT RESULTS

A. One-dimensional interference patterns

We first consider two-beam superpositions. For this configuration, the electric field amplitudes are chosen to be equal; with respect to the image sensor plane, the wave vectors are k₁ = k[sin ψ, 0, cos ψ] and k₂ = k[−sin ψ, 0, cos ψ] with ψ = π/4. From Eq. (4) we determine that the spatial frequency of the interference pattern is 2234.85 mm⁻¹. The Nyquist frequency of our image sensor is 94.34 mm⁻¹, 23.7 times smaller. We record images while rotating the image sensor around its surface normal in steps of 0.1 degree.

In one example, we probe superposed light fields using a CMOS image sensor with a 5.3 μm pixel pitch, where the structure of the interference pattern is much smaller than the pixel size. The image sensor is mounted on a Newport M-URM80APP stage to form a custom-made rotating image sensor platform. For a continuous rotation, Fig. 3A shows a set of captured intensity images 300 in the spatial domain and corresponding FFT images 302 in the spatial frequency domain on a logarithmic scale. Next to the strong zero-frequency peak 304, side-peaks 306a, 306b that are aliases of the high-spatial-frequency interference patterns are clearly visible.

During rotation of the image sensor, we first determine the magnitude of the measured frequency, ƒ m = |ƒ m |. Tracing the frequencies of the Fourier peaks gives the measured trajectories of spatial frequencies shown in Fig. 3B (dotted lines), together with numerical simulations (solid lines). The box 310 denotes one cycle. The pattern shows a periodicity of 90 degrees and mirror symmetry, which originates from the D₄ symmetry group of the square pixel shape. We observe good agreement between experiment and simulation (see also Appendix 2). To gain further insight, we perform the analysis over one cycle, denoted by the box 310, and disentangle and extract the individual trajectories of the x and y spatial-frequency components, shown in Fig. 3C (x-component) and Fig. 3D (y-component). We observe an oscillation of the spatial-frequency components upon rotation; a cycle is again completed after rotation over 90 degrees.

We now focus on the spatial frequency in the x direction. At θ = 0, the x-axis is aligned along the interference fringes, where the image sensor observes zero spatial frequency (which can hardly be determined experimentally because of the overlap with the zero-frequency background). If the sensor is rotated, the Fourier frequency increases until it reaches the Nyquist criterion, where it is folded back, and further rotation results in a decrease in the measured (alias) spatial frequency; this process repeats until the x-axis is perpendicular to the fringes (θ = 90 degrees). Thus, the observed oscillation pattern is in effect a manifestation of folding, where the high spatial frequencies beyond the first Brillouin zone oscillate back and forth across the positive half of the first Brillouin zone upon rotation. From this physical picture, the x-component of the true spatial frequency upon rotation can be deduced to be ƒ L,x (θ) = n(θ)ƒ s ± ƒ m,x (θ), where ƒ m,x (θ) is the measured x-component of the spatial frequency upon rotation and n(θ) is the fold (alias) order. A detailed derivation of this is shown in Appendix 3. The y-frequency component exhibits a reversed trajectory compared to the x-frequency component, due to the symmetry, and is obtained by the substitution θ' = π/2 − θ.
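The unfolding relation can be checked numerically. In the sketch below, the fold order n(θ) is computed from the known true frequency purely for demonstration; in a real measurement it would be obtained by counting the turning points of the measured trajectory:

```python
import numpy as np

f_s = 1 / 5.3e-3                 # mm^-1, sampling frequency
f_L = 2234.85                    # mm^-1, true fringe frequency (illustrative)
thetas = np.radians(np.arange(0.0, 90.1, 0.1))

f_true_x = f_L * np.sin(thetas)                # true x-projection during rotation
n = np.round(f_true_x / f_s)                   # fold (alias) order n(theta)
f_m = np.abs(f_true_x - n * f_s)               # what the sensor actually records

# Unfold: f_Lx(theta) = n*f_s + f_m or n*f_s - f_m, depending on the branch
f_recovered = np.where(f_true_x - n * f_s >= 0, n * f_s + f_m, n * f_s - f_m)
```

The recovered trajectory matches the true projection exactly, confirming that the oscillating alias carries the full information once the fold count is tracked.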

In Fig. 3E and Fig. 3F we show the real spatial frequencies of the interference patterns for the x-component and y-component, respectively. The insets in Fig. 3E and Fig. 3F are magnifications of the spatial frequencies near the flat regions of the trajectories. The numerical and experimental data show good agreement. Fig. 3E and Fig. 3F present the original frequency trajectories of the physical light field retrieved using Equations (8) and (9), together with the corresponding theoretical prediction from projection. The image sensor rotation induces a change of the true spatial frequency projected along the axis, where the other frequency component disappears at the end of the rotation trajectory and thus the true spatial frequency of the light field is retrieved. For the measurement of the x-component, we retrieve the spatial frequency of the physical light field to be 2222.3759 mm⁻¹, in excellent agreement with the simulation and theory. The slight deviation of the true spatial frequency from the design value is attributed to imperfect alignment of the light beams. This shows that we can measure the beam half-angles with very high precision using our rotation technique; in our case the half-angle is 44.6811 degrees (the precision will be discussed later). Further, we observe that the spatial frequencies retrieved through the x- and y-frequency components are in practice not equal, the latter being 2222.7448 mm⁻¹. We argue that a residual tilt of the image sensor, or a small (75 pm) asymmetry of the image sensor pixels, is responsible for this.
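As a consistency check of the quoted numbers, assuming a He-Ne wavelength of 632.8 nm (an assumption; the wavelength is not stated in this passage), the retrieved fringe frequency reproduces the quoted beam half-angle via ƒ L = 2 sin ψ / λ:

```python
import numpy as np

wavelength = 632.8e-6     # mm; hypothetical He-Ne value, not given in this passage
f_L = 2222.3759           # mm^-1, retrieved x-component of the fringe frequency

# f_L = 2*sin(psi)/lambda  =>  psi = arcsin(lambda * f_L / 2)
psi = np.degrees(np.arcsin(wavelength * f_L / 2))
```

This comes out within a few millidegrees of the 44.6811 degrees quoted above; the small residual reflects the assumed wavelength.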

B. Probing superposition of multiple coherent beams of light

To further demonstrate the power of our technique, we show that the proposed method also works for multi-beam superpositions, which generate rich periodic patterns. It is clear from Eq. (3) that every beam-pair superposition produces a pattern with high spatial frequency that contributes to the overall superposed light field distribution. We now explore the three-wave configuration shown in Fig. 2. The specific parameters of this superposition are listed in Table 1, where the mutual beam half-angle is ψ = π/4, the phase φ = π/4, and α and γ are the relative phases of the waves. When rotating the image sensor, we observe three distinct Fourier peak trajectories, as expected. All these Fourier peaks are traced simultaneously during image sensor rotation. The frequency trajectories measured along the y-direction are shown in Fig. 4A, where the zones denoted by the boxes reveal a repeating pattern that is shifted for different beam combinations, containing information about the relative angles between the beams. In resolving 3-beam light field superpositions, Fig. 4A shows rotation measurements in the under-sampled case with numerical simulations (dots) and experimental measurements (solid). The boxed regions show the repeating pattern that is angle-shifted for different beam combinations.

By using Eq. (9) one can retrieve the true spatial frequencies; the retrieved spatial frequency trajectories for the 2D interference patterns are shown in Fig. 4B. The spatial frequencies are shown in units of 1/mm. The true spatial frequencies can then be extracted from the end of the trajectories, where the interferences of the PR, PQ and QR beam combinations are, respectively, 1573, 2227 and 1577 mm⁻¹. The theoretical predictions for perfect alignment are 1580, 2235 and 1580 mm⁻¹; the errors are thus 0.45%, 0.36% and 0.19%, which we attribute to a small misalignment of the beams. This indicates the high precision of our method, and that it is also applicable to multiple-beam superpositions.

Table 1. 3-beam interference configuration

2. PHASE RETRIEVAL

Having measured the spatial frequencies, we show that it is also possible to retrieve the relative phases of the beams in superposition using our method. From Eqs. (4)-(7) we see that the phase of the Fourier peak is independent of the pixel pitch as well as of the two-dimensional pixel responsivity distribution, i.e., the precise properties of the sensor play no role for the phase of the Fourier peak of the Fourier transform. Therefore, the relative-phase information of the superposed light beams can be obtained, even in the undersampled case.
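A minimal 1D sketch of this point: the phase of the aliased Fourier peak tracks a fringe displacement even though the fringes are far beyond Nyquist. All parameters are illustrative, and the recovered shift can carry a sign set by the alias order:

```python
import numpy as np

pitch = 5.3e-3                 # pixel pitch in mm (as in the paper)
n_pix = 1024
x = np.arange(n_pix) * pitch
f_true = 2222.4                # fringe frequency in mm^-1, far above Nyquist

def alias_peak_phase(shift):
    """Return the phase and bin index of the aliased fringe peak."""
    img = 1 + np.cos(2 * np.pi * f_true * x + shift)   # undersampled fringes
    spec = np.fft.rfft(img - img.mean())               # remove DC background
    k = int(np.argmax(np.abs(spec)))                   # aliased fringe peak
    return np.angle(spec[k]), k

phi0, k0 = alias_peak_phase(0.0)
phi1, k1 = alias_peak_phase(0.5)                       # shift fringes by 0.5 rad
dphi = (phi1 - phi0 + np.pi) % (2 * np.pi) - np.pi
print(f"recovered phase shift magnitude: {abs(dphi):.3f} rad")
```

The peak stays in the same FFT bin while its phase moves by the applied 0.5 rad (up to a sign), mirroring the piezo experiment described below.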

To prove this point, we attach a piezo chip (Thorlabs PA4HEW) to one of the mirrors in one of the arms; this allows for finely tuning the path difference with nanoscale precision, inducing a shift of the fringes of the interference pattern. We apply to the piezoelectric-actuated mirror a sawtooth signal with a frequency of 100 mHz. We measure the absolute Fourier peak phase and the relative phase compared to another non-shifted beam combination. Fig. 5A shows the measured phase of the Fourier peak. We see that the Fourier phase changes mostly linearly with the applied voltage. Synchronously, we measure the relative phase change of this Fourier peak compared to another Fourier peak originating from two beams without a piezo element, shown in Fig. 5B. We observe the same pattern except for a constant phase offset. This proves our theoretical prediction, opening a new avenue to perform phase locking.

3. SENSITIVITY AND WAVELENGTH SENSING

We now demonstrate application of our method to wavelength sensing.

Theory.

The maximum-magnitude FFT-pixel is taken for our spatial frequency reconstruction. Therefore, the main inaccuracies of the setup are the precision of the rotation angle Δθ, the spatial frequency resolution determined by the precise image sensor dimensions, and a residual tilt of the image sensor.

The rotation angle resolution of the Newport M-URM80APP yields an expected wavelength sensing resolution of sub-pm/sub-fm for an angular rotation accuracy of Δθ = 0.1 deg/0.001 deg, and we obtain the wavelength sensing resolution Δλ_c (see Appendix 4 for details) Δλ_c = λ (1 − cos Δθ_r + Δƒ_c/ƒ), (10) where Δƒ_c is the spatial frequency resolution, determined by the image size L via Δƒ_c = 1/L. The finite size of the image sensor yields an error of 60 pm; this clearly is the more important limit on resolution for our demonstration experiment. We can improve this by spatial interpolation via zero-padding, which enables a significant increase of the resolution of the measured spatial frequencies. In our scheme, zero-padding up to 2¹⁸ × 2¹⁰ pixels is performed for all results. To ensure a good SNR, the size of the interference region is relevant, since it straightforwardly influences the width of the Fourier spots and thus the accuracy of the measured spatial frequencies.
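The zero-padding interpolation can be sketched as follows (illustrative padding factors, not the 2¹⁸ × 2¹⁰ used for the results):

```python
import numpy as np

# Zero-padding the FFT interpolates the spectrum and localizes the alias peak
# well below the raw bin spacing of 1/L.
pitch = 5.3e-3                              # mm
n_pix = 1280
x = np.arange(n_pix) * pitch
f_alias = 41.75                             # mm^-1, an alias inside the first zone
img = 1 + np.cos(2 * np.pi * f_alias * x)

sig = (img - img.mean()) * np.hanning(n_pix)   # window to reduce leakage
for pad in (n_pix, 16 * n_pix, 256 * n_pix):
    spec = np.abs(np.fft.rfft(sig, n=pad))     # n > n_pix zero-pads the signal
    freqs = np.fft.rfftfreq(pad, d=pitch)      # frequency axis in mm^-1
    f_est = freqs[np.argmax(spec)]
    print(f"pad {pad:7d}: f = {f_est:9.4f} mm^-1, err {abs(f_est - f_alias):.2e}")
```

The peak-location error shrinks with the padded length, since the bin spacing is 1/(padded length × pitch).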

Measurements.

Experimentally, an external-cavity semiconductor laser at ~776.3 nm (New Focus model 6224) with a linewidth of less than 300 kHz is coupled into our setup via a single-mode fiber. We scan the wavelength and measure it using a Fizeau-interferometer based wavemeter (HighFinesse WS6-200).

Figs. 6A-6B show our experimental results demonstrating high precision of wavelength resolution sensing using our technique. The dots in Fig. 6A and Fig. 6B show the wavelength shift for the numerical simulation and experimental data, respectively. The error bars represent the standard deviations corresponding to 15 independent measurements. The theoretical prediction results are indicated by the lines.

Limited by the stability of the tunable laser source and the absolute accuracy of the wavemeter (200 MHz), the wavelength was scanned in steps of 5 pm. The observed linear relation is in good agreement with the theoretical analysis and numerical simulation, confirming that wavelength shifts down to at least 5 pm, or 2.5 GHz in frequency, are readily discernible by our method. This accuracy can easily be improved by enlarging the interference pattern region, increasing the intensity, using peak fitting for the determination of the spatial frequencies of the Fourier aliases, and fitting our model to the experimental data for many rotated pictures simultaneously.

Spectral performance.

We can derive the spectral resolving power R, which is equal to the ratio between the size of the zero-padded image and the interference period (see Appendix 5). The simplicity of the inverse relation between the pixel size and the sampling frequency, δ = 1/ƒ_s, means that, once the aliasing order is established, a unique mapping of spatial frequencies to wavelength is possible, without the need for post-processing. For a δ = 5.3 μm pixel size, this is equivalent to a free spectral range (FSR) much larger than the FSR of our Fizeau interferometer (WS6-200, 100 GHz) and of standard Fabry-Perot cavities (several GHz). Our calculations in Equation (12) already resulted in a predicted wavelength accuracy below a femtometer, thus a spectral resolving power beyond 10⁸, bringing it to the required sensitivity for probing the Doppler wobbles induced by exoplanets or the Zeeman splitting of spectral lines of hydrogen and antihydrogen. However, a direct proof of this theoretically achievable accuracy would require an ultrastable, high-SNR system. From our setup it is evident that our technique has several major advantages over existing wavelength sensing techniques: higher resolution and experimental simplicity. Perhaps more importantly, systems containing optical elements such as lenses, gratings or glass blocks (Fizeau interferometers) cannot be used for high-energy photons, where media are strongly absorbing. Our technique is therefore suited for the extreme-ultraviolet (EUV) and x-ray regime, where recently ptychographic methods have been explored, with the added benefit of experimental simplicity.

4. DISCUSSION AND OUTLOOK

An arrayed image sensor in combination with Fourier analysis according to embodiments of the invention allows for deep-subpixel sensing of periodic interference patterns. This is possible by exploiting aliasing, and the absolute spatial frequencies as well as the phase spectrum can be obtained by rotation of the sensor, even if the pixels are square.

A proof-of-principle experiment has demonstrated wavelength sensing with picometer resolution, potentially much simpler than Fizeau- or Fabry-Perot based interferometers. Our technique can also quite easily be applied to all wavelength ranges where arrayed image detectors are available, for instance, to metrology challenges in EUV lithography.

Our approach is quite general; it can be applied to other wave systems in nature, ranging from electromagnetic and acoustic waves to matter waves. We emphasize that the remarkable accuracy can be further improved by enlarging the interference region, using the fringe lock [4], and improving the rotational accuracy.

A promising extension, and an initial motivation of the study, is applying our methods to the observation and characterization of bright superchiral fields. Additionally, our technique may be combined with concepts from quantum metrology: for instance, if the interfering light fields are not coherent states of light but N00N states, an N-fold-enhanced resolution as compared with classical interference lithography is possible, using an image sensor sensitive to multi-photon absorption or with single-photon resolution. This is reminiscent of recent schemes for pixel super-resolution quantum imaging, which have been achieved by measuring the joint probability distribution of spatially entangled photons. Light fields with high spatial frequency features also appear in other fields, such as surface plasmon interference for nanolithography: at the interface between metals and dielectrics, the wavelength of surface plasmon waves can be down to the nanometer scale while their frequencies remain in the optical range, going beyond the free-space diffraction limit of light.

Appendix 1: Derivation of Eqs. (4)-(7)

The Fourier transform of Eq. (3) is given by

On substituting Eq. (3) into Eq. (11), one obtains the superposition of light fields in the spatial frequency domain, as described by Eq. (4) in the main text.

The observation of light fields with the image sensor S s (r) can be written as [1]

Note that for CMOS sensors, the pixel spacing is, in general, not equal to the pixel size due to the imperfect fill factor.

We now consider the field in the spatial frequency domain. From the sifting property, the convolution with a Dirac delta function simply shifts its origin. By substituting Eq. (12) into Eq. (13) and further taking the convolution theorem into account (the Fourier transform of a convolution of two functions is equal to the product of the Fourier transforms of each function), we obtain [2]

Appendix 2: Calculation of the undersampled light fields in the rotated frame and experimental data acquisition

For the numerical calculations, we assume that the pixels are square. The sensor has 1280x1024 pixels with a pixel pitch of 5.3 μm. The calculation of the intensity of the superposed light fields is performed in subpixel units, which are incoherently added to yield the integrated light intensity on a sensor pixel. The sampling pitch of the light fields should be much smaller than the pixel pitch for an appropriate mapping of the superimposed light field on the sensor, and is set to 0.081 μm in our calculations. The sensor noise, which is modeled by a white stochastic process (white Gaussian noise), is added to the intensity pattern. Note that the fill factor strongly affects the visibility V, but has no influence on the resulting spatial frequencies (see Appendix 6); thus a unity fill factor is used in our model.
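The subpixel model described above can be sketched as follows (a noise-free 1D simplification with an illustrative subpixel count; the paper's model is 2D with noise added):

```python
import numpy as np

# The fringe intensity is computed on a fine subpixel grid and then integrated
# (incoherently summed) over each pixel, mimicking the finite pixel aperture.
pitch = 5.3e-3            # pixel pitch, mm
sub = 64                  # subpixel samples per pixel (illustrative)
n_pix = 1280
f_true = 2222.4           # fringe frequency, mm^-1

x_fine = np.arange(n_pix * sub) * (pitch / sub)
intensity = 1 + np.cos(2 * np.pi * f_true * x_fine)
pixels = intensity.reshape(n_pix, sub).mean(axis=1)     # integrate per pixel

spec = np.abs(np.fft.rfft(pixels - pixels.mean()))
freqs = np.fft.rfftfreq(n_pix, d=pitch)
f_meas = freqs[np.argmax(spec)]

f_s = 1 / pitch
f_alias_pred = abs(f_true - round(f_true / f_s) * f_s)  # predicted alias
print(f"measured {f_meas:.2f} mm^-1 vs predicted alias {f_alias_pred:.2f} mm^-1")
```

The pixel integration reduces the fringe visibility but leaves the alias frequency unchanged, consistent with the fill-factor analysis in Appendix 6.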

Experimentally, the light patterns were recorded by our CMOS image sensor. To ensure a good signal-to-noise ratio, our image sensor is set to automatically adjust the exposure time until the signal level is greater than 50% and stays in the range between 50% and 90% of the full saturation level. Additional attenuation is used to avoid saturation. A single rotating scan with steps of 0.1 deg takes ~30 minutes for the 1D configuration (over 90 degrees); the 2D configuration requires a larger angle to be scanned and therefore takes longer, about 1 hour.

Appendix 3: Derivation of Eqs. (8) and (9)

We consider first the one-dimensional case of an interference pattern with spatial frequency ƒ original along the x-axis, observed by a one-dimensional pixelated image sensor with pixel size δx, leading to the spatial sampling frequency ƒ_s = 1/δx. This sensor is rotated by an angle θ with respect to the x-axis; it therefore observes a fringe spatial frequency ƒ_r(θ) = |ƒ sin θ|. If the sensor sampling frequency is smaller than twice the real observed frequency, ƒ_s < 2ƒ_r(θ), aliasing or subsampling results in a different measured spatial frequency ƒ_m(θ) = |ƒ_r(θ) − nƒ_s|, where n is the order of the alias or Brillouin zone. This order n can be calculated as n = ⌊ƒ_r(θ)/ƒ_s + 1/2⌋, (14) where ⌊·⌋ denotes the floor operation.

The number of folding branches, which is the practical observable in the full rotation measurement, is the integer part of the ratio of the observed frequency to the half-length of the first Brillouin zone, m = ⌊2ƒ_r(θ)/ƒ_s⌋. (15) Thus, the absolute value in ƒ_m(θ) = |ƒ_r(θ) − nƒ_s| can be replaced, giving ƒ_m cos(mπ) = ƒ_r − nƒ_s. (16)

By virtue of the floor operation relation ⌊x + k⌋ = ⌊x⌋ + k for integer k [3] (17) and Hermite's identity ⌊2x⌋ = ⌊x⌋ + ⌊x + 1/2⌋ [3], (18)

combining Eqs. (14), (15), (17) and (18), we then obtain for the relation between the aliasing order n and the number of folding branches m: n = ⌈m/2⌉. (19)

Substituting the relation Eq. (19) into Eq. (16), we eventually reach Eq. (8) in the main text; that is, the true spatial frequency can be deduced from the measured spatial frequency ƒ_m and the number of folding branches m in a full scanning measurement as ƒ_r(θ) = (−1)^m ƒ_m(θ) + ⌈m/2⌉ ƒ_s. (8)

The same derivation holds for the true y-frequency component, except for a reversed rotation angle θ' = 90° − θ due to the symmetry, reaching Eq. (9) in the main text.

According to Eqs. (8) and (9), one can retrieve the real spatial frequencies of the interference in a full scan, and in combination with the analysis and data-fitting parameters, one can fit the experimental data to theory and numerical calculation in an elegant manner. This allows wavelength/beam-angle sensing with very high precision.
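The relations (14)-(16) together with Eq. (8) can be checked numerically with a short round-trip calculation (illustrative values):

```python
import math

# Project the true frequency onto the rotated axis, fold it into the first
# Brillouin zone, then unfold again with Eq. (8); the result must match.
f = 2222.4          # true fringe frequency, mm^-1 (illustrative)
f_s = 1 / 5.3e-3    # sampling frequency for a 5.3 um pitch, mm^-1

max_err = 0.0
for theta_deg in range(1, 90):
    theta = math.radians(theta_deg)
    f_r = abs(f * math.sin(theta))                 # projected true frequency
    n = math.floor(f_r / f_s + 0.5)                # alias order, Eq. (14)
    f_m = abs(f_r - n * f_s)                       # measured alias frequency
    m = math.floor(2 * f_r / f_s)                  # folding branches, Eq. (15)
    f_unfolded = (-1) ** m * f_m + math.ceil(m / 2) * f_s   # Eq. (8)
    max_err = max(max_err, abs(f_unfolded - f_r))
print(f"max unfolding error over the scan: {max_err:.2e} mm^-1")
```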

Appendix 4: Discussion about wavelength sensing resolution and derivation of Eq. (10)

The trajectory of the Fourier peaks upon rotation is the phenomenon used to retrieve the true spatial frequencies of the light fields; the angular resolution Δθ_r of the rotation stage therefore limits the precision of the retrieved spatial frequencies via the projected spatial frequencies in a scan, ƒ_r(θ) = |ƒ sin θ|. One thus obtains the wavelength sensing resolution set by the angular resolution, Δλ_θ = λ (1 − cos Δθ_r). For the case presented in Fig. 3, with the spatial frequency ƒ around 2235 mm⁻¹, Δθ_r = 0.1 deg gives a wavelength sensing resolution Δλ = 0.96 pm, while the best performance of a Newport M-URM80APP rotation stage, with a resolution Δθ_r = 0.001 deg, leads to Δλ = 0.096 fm.
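A numeric check of this angular-resolution limit: the form Δλ = λ(1 − cos Δθ_r) ≈ λΔθ_r²/2 reproduces both quoted values when a 633 nm wavelength is assumed (the wavelength is our assumption, not stated here):

```python
import math

wavelength = 633e-9   # m (assumed for this check)
results = []
for dtheta_deg, quoted in ((0.1, "0.96 pm"), (0.001, "0.096 fm")):
    dtheta = math.radians(dtheta_deg)
    dlam = wavelength * (1 - math.cos(dtheta))   # angular-resolution limit
    results.append(dlam)
    print(f"Δθ_r = {dtheta_deg} deg -> Δλ = {dlam:.3e} m (quoted: {quoted})")
```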

Since the trace is performed in the Fourier domain, the resolution of spatial frequencies Δƒ_c in the Fourier domain should also be taken into consideration, which yields Eq. (10) in the main text, Δλ_c = λ (1 − cos Δθ_r + Δƒ_c/ƒ). (10)

Appendix 5: Derivation of the spectral resolving power of our configuration

The spectral resolving power is the capability to resolve two close-by wavelengths and is given by R = |λ/Δλ|, with Δλ being the spectral resolution. Substituting the wavelength resolution given by Eq. (10) into R, one obtains the theoretical spectral resolving power of our setup, R = L/D = r, where L is the image size with zero padding, D represents the spatial periodicity of the light fields, and r denotes their ratio r = L/D. We are interested in the case L >> D, that is, r >> 1, which implies that the Nyquist-Shannon condition cannot be fulfilled and aliasing occurs. In this sense, the aliasing effect can be used to greatly improve the spectral resolving power, in contrast to an Echelle grating, where high spectral resolving power is obtained in high diffraction orders.
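The resolving power R = L/D can be evaluated for illustrative numbers (an order-of-magnitude sketch; the parameters are values quoted elsewhere in this text, combined here only for illustration):

```python
# R equals the ratio of the zero-padded image length L to the fringe period D.
pitch = 5.3e-3            # mm
L = 2**18 * pitch         # zero-padded image length along x, mm
D = 1 / 2222.4            # fringe period for a 2222.4 mm^-1 fringe, mm
R = L / D
print(f"R = {R:.3g}")     # of order 10^6 for these particular numbers
```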

Appendix 6: Influence of the camera fill factor

Fig. 7 shows numerical experiments on the measurement for different fill factors. The measured spatial frequency (triangles, right axis) is uncorrelated with the pixel fill factor, as expected, in contrast to the visibility of the interference pattern (dots, left axis), which depends strongly on the pixel fill factor. The influence of the fill factor of the pixel was investigated in a numerical experiment. Simulations were performed for square pixels with 10 nm subpixel units. The spacing between the pixels is varied from 0 to 1.5 μm in steps of 30 nm, corresponding to a fill factor ranging from 71.7% to 100%. The other parameters are the same as for the other calculations. Note that CCDs often have a 100% fill factor, but CMOS image sensors can have much less. The visibility and measured spatial frequencies are shown in Fig. 7. We observe that the interference visibility strongly depends on the fill factor, which can be seen as a manifestation of the fact that pixels do not sample point-like positions but integrate over a larger area.

For frequency measurement, a light beam is split in two (e.g., by a beamsplitter) and recombined on a CCD imaging sensor; alternatively, the relative angles between several beams can be measured.

Although the single-pixel noise of a camera is high, if we use 100k-1M pixels it averages out, and very small amplitudes of the aliases in spatial-frequency space can be detected. In the Fourier-transformed image (in the spatial frequency domain, kx and ky), the constant background can easily be removed (it sits at kx = ky = 0), and on a log scale the two alias peaks are nicely visible (they are cross-shaped due to the rectangular camera size and imperfect Fourier windowing).

Alternate embodiments

As discussed earlier, several approaches are possible to effectively change the camera pixel size relative to the spatial periodicity of the interference fringe pattern in order to determine the multiplicative factor needed to determine the fringe spacing. In addition to rotation of the camera chip, the effective camera pixel size can also be changed by tilting the camera chip, which effectively changes the pixel size in one dimension. It is also possible to use pixel arrays or sub-arrays whose pixel sizes differ from each other in at least one dimension in a non-trivial way, i.e., not by integer multiples or integer divisions (for which the alias frequency often remains the same). For example, a custom CCD array with rows of different pixel size/periodicity (different sampling periods for different CCD detection areas) may be used, as shown in Fig. 8, where pixels in an upper sub-array 800 have a larger pixel size than pixels in a lower sub-array 802. Using a CCD where rows have different pixel sizes, we can derive the true spatial fringe period without rotation. The pixels could also all have the same height but different widths. Alternatively, two CCDs with different pixel sizes may be used after suitably splitting the beam.

Instead of changing the pixel spacing or pitch, one can also vary the angle between the beams to obtain the multiplicative factor. This can be done by various methods of beam steering including placing one or more prisms on top of the CCD chip in correspondence to different pixel rows, or on top of two or more CCD chips positioned across different optical paths obtained by suitable beam-splitting.

Some embodiments involve two simultaneous (non-sequential) measurements of different Moire patterns formed on different rows of the CMOS or CCD camera chip or on different chips.

Some embodiments provide different approaches to resolve the integer-ambiguity. These may involve changing in a non-trivial (such that the alias frequency changes) way the spatial frequency of the interference fringes impinging different portions of a standard CMOS or CCD camera chip, or different standard CMOS or CCD camera chips.

Some embodiments may involve varying the angle between the interfering beams, using pixel arrays or sub-arrays whose pixel sizes differ from each other in at least one dimension (Fig. 8), positioning different miniaturized prisms or other optically dispersive elements before different pixel sub-arrays of the CMOS camera chip, using two CMOS camera chips with different pixel sizes after having suitably split the beam, and/or any combination of these. For example, one could also use one or more different prisms for multiple measurements (Fig. 9), positioning a different prism for each measurement, or for simultaneous measurements across different paths obtained by beam splitting onto two identical camera chips. In the embodiment shown in Fig. 9, light beams 900, 902, whose angle and/or wavelength is to be measured, are directed to be incident on the device. Dielectric prisms 904, transparent at the wavelength and with a refractive index different from air, are positioned above the arrayed image sensor 906, which may be a CCD or CMOS array, for example.

For deriving the original/real fringe period of two or more interfering beams 900, 902 and to resolve the integer multiplicative factor that appears by aliasing, prisms 904 of different height are placed on a CCD array 906. By optical refraction at the prism-air interface 908, the beams acquire different angles inside the prism depending on the angle of incidence of the beam and the refractive index of the prism, leading to different fringe spacings sensed by the arrayed image sensor. Prisms or similar objects of different height lead to a different change of the angle at which the beams interfere at the image sensor. This has a similar effect to rotating the camera chip, where different fringe spacings are also sensed along the horizontal and vertical directions of the pixel array. This change of height can also be continuous, as realized for instance by a curved object such as a spherical lens. The attachment of the prisms to the camera chip can be direct or at a distance with a gap, as the beams exiting the prism at the bottom will retain a prism-height-dependent angle.
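A hedged Snell's-law sketch of this effect (refractive index, face tilt and wavelength are illustrative assumptions): a prism face parallel to the sensor conserves the transverse wavevector and leaves the fringe spacing unchanged, so it is the tilted faces of the prism geometry that change the sensed fringe frequency:

```python
import math

n_prism = 1.45            # fused-silica-like index (assumption)
wavelength = 633e-9       # vacuum wavelength, m (assumption)
psi = math.radians(44.68) # half-angle between the two beams in air

def fringe_freq_after_face(alpha):
    """Fringe frequency at the sensor after both beams refract at a prism
    face tilted by alpha from the sensor plane (Snell's law at the face)."""
    def internal(theta):
        # angle from the sensor normal inside the prism
        return alpha + math.asin(math.sin(theta - alpha) / n_prism)
    # the transverse wavevector difference inside the medium sets the period
    return n_prism * abs(math.sin(internal(+psi))
                         - math.sin(internal(-psi))) / wavelength

f_air = 2 * math.sin(psi) / wavelength             # fringe frequency in air
f_flat = fringe_freq_after_face(0.0)               # face parallel to sensor
f_tilt = fringe_freq_after_face(math.radians(20))  # tilted face (assumption)
print(f"air: {f_air/1e3:.1f}, flat face: {f_flat/1e3:.1f}, "
      f"tilted face: {f_tilt/1e3:.1f} mm^-1")
```

The flat-face result equals the in-air frequency, while the tilted face yields a measurably different fringe frequency, which is the handle the multi-prism embodiments exploit.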

These various embodiments are instances of a general method for measuring an interference fringe period as outlined in the flowchart of Fig. 10. In step 1000 two laser beams are combined to create interference fringes in an imaging plane, where the interference fringes have a fringe spatial period. In step 1002 the combined beams are imaged with a first digital image sensor array positioned at the imaging plane to produce a first image, where a spatial period of pixels of the first digital image sensor array is greater than the fringe spatial period, resulting in a first periodic intensity modulation pattern in the first image due to a spatial sub-sampling aliasing effect. In step 1004 the combined beams are imaged with a second digital image sensor array positioned at the imaging plane to produce a second image, wherein a spatial period of pixels of the second digital image sensor array is greater than the fringe spatial period, resulting in a second periodic intensity modulation pattern in the second image due to the spatial sub-sampling aliasing effect. In step 1006 a value of the interference fringe spatial period is calculated, up to an integer ambiguity, from the first periodic intensity modulation pattern in the first image and from the spatial period of the pixels of the first digital image sensor array. In step 1008 the integer ambiguity is resolved using the first periodic intensity modulation pattern, the second periodic intensity modulation pattern, and characteristics of the first digital image sensor array and the second digital image sensor array, thereby determining a measured value of the interference fringe period with no integer ambiguity.
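The ambiguity-resolution step of this method can be sketched as a search over integer alias orders consistent with both sensors (the pixel pitches, tolerance and the helper `resolve` are illustrative assumptions, not the patent's specific algorithm):

```python
# Each sensor yields an alias frequency known only up to an integer multiple
# of its sampling frequency; searching the integer orders for a true frequency
# consistent with both measurements resolves the ambiguity.
def resolve(f_m1, f_s1, f_m2, f_s2, f_max=5000.0, tol=0.05):
    candidates = []
    for n1 in range(int(f_max / f_s1) + 1):
        for sign in (+1, -1):
            f1 = n1 * f_s1 + sign * f_m1           # candidate true frequency
            if not 0 < f1 <= f_max:
                continue
            n2 = round(f1 / f_s2)
            f2a = abs(f1 - n2 * f_s2)              # alias sensor 2 would see
            if abs(f2a - f_m2) < tol:
                candidates.append(f1)
    return candidates

f_true = 2222.4                            # mm^-1, ground truth for the demo
f_s1, f_s2 = 1 / 5.3e-3, 1 / 7.4e-3        # two different pixel pitches (assumed)
f_m1 = abs(f_true - round(f_true / f_s1) * f_s1)   # alias seen by sensor 1
f_m2 = abs(f_true - round(f_true / f_s2) * f_s2)   # alias seen by sensor 2
matches = resolve(f_m1, f_s1, f_m2, f_s2)
print(matches)
```

For these pitches the search returns a single consistent candidate, the original fringe frequency, illustrating how the second sensor removes the integer ambiguity.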