Title:
IMAGE RECONSTRUCTOR
Document Type and Number:
WIPO Patent Application WO/2009/108050
Kind Code:
A9
Abstract:
A method and apparatus to reconstruct a sharp image from multiple phase-diverse intermediate images is described. The degree of defocus of all intermediate images is unknown, but the diversity defocus is known. Images can be processed in real time because the algorithms are intrinsically non-iterative. Such an apparatus is insensitive to defocus and can be included in imaging systems for extended depth of field (EDF) imaging, range finding and 3D imaging. Additionally, wave-front sensors can be constructed by processing sub-areas of images. Applications include digital imaging; distance, speed and direction measurement; and wave-front sensing, which functions can be combined with the camera function.

Inventors:
SIMONOV ALEKSEY NIKOLAEVICH (NL)
ROMBACH MICHIEL CHRISTIAAN (NL)
Application Number:
PCT/NL2009/050084
Publication Date:
September 30, 2010
Filing Date:
February 25, 2009
Assignee:
SIMONOV ALEKSEY NIKOLAEVICH (NL)
ROMBACH MICHIEL CHRISTIAAN (NL)
International Classes:
G06T5/50
Claims:

1. Method for providing at least one final in-focus image of at least one object in a plane, the method comprising the following steps:
- providing at least two phase-diverse intermediate images of the object with an optical system having an optical transfer function, wherein each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the at least one object, but wherein the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known, and wherein the optical transfer function of the optical system is a priori known;
- providing a generating function which combines spatial spectra of, at least two, intermediate images, and, independently from said combination of spatial spectra, combines the optical transfer functions corresponding to said spatial spectra;

- adapting the said combinations of spatial spectra and optical transfer functions such that the generating function becomes independent from the degree of defocus of, at least one, intermediate image compared to the in-focus image plane;
- reconstructing the at least one final in-focus image by a non-iterative algorithm, which algorithm is based on said generating function.

2. Method according to claim 1, characterized in that it is preceded by additional processing to adapt the spatial spectrum of at least one of said intermediate images to lateral shift of said image compared to any other intermediate image.

3. Method according to claim 1 or 2, characterized in that it includes additional processing based on an additional generating function to provide the degree of defocus of at least one of said intermediate images compared to the in-focus image plane.

4. Method according to claim 3, characterized in that the degree of defocus of the intermediate image compared to the in-focus image plane is included in the non-iterative algorithm.

5. Apparatus for providing at least one final in-focus image of at least one object in a plane, the apparatus comprising: imaging means adapted to provide at least two phase-diverse intermediate images of the object, wherein each intermediate image has a different and a priori unknown degree of defocus compared to the in-focus image plane of the object, but wherein the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known; at least one optical system adapted to depict the object on the imaging means, wherein the optical transfer function of the at least one optical system is a priori known; processing means,

- adapted to provide a generating function which combines spatial spectra of, at least two, intermediate images, and, independently from said combination of spatial spectra, combines the optical transfer functions corresponding to said spatial spectra;

- adapted to combine said spatial spectra and optical transfer functions such that the generating function becomes independent from the degree of defocus of, at least one, intermediate image compared to the in-focus image plane;

- adapted to apply a non-iterative algorithm including said generating function to obtain a reconstruction of the final image.

6. Apparatus according to claim 5, characterized in that the processing means are adapted to adapt the spatial spectrum of at least one intermediate image to the lateral shift of said intermediate image compared to any other intermediate image.

7. Apparatus according to claim 5 or 6, characterized in that the processing means are adapted to determine an additional generating function providing the degree of defocus of, at least one, intermediate image compared to the in-focus image plane.

8. Apparatus according to claim 7, characterized in that the processing means are adapted to adapt the non-iterative algorithm to the degree of defocus of the intermediate image compared to the in-focus image plane.

9. Apparatus as claimed in claim 5, characterized in that the imaging means comprise at least one image photo-sensor.

10. Apparatus according to claim 9, characterized in that the apparatus is adapted to execute at least two exposures on the same imaging photo-sensor which is adapted to provide phase-diverse intermediate images.

11. Apparatus according to claim 9, characterized in that the apparatus is adapted to execute at least two subsequent exposures on the same imaging photo-sensor which is adapted to assume at least two predetermined different positions along the optical axis for subsequent exposures.

12. Apparatus as claimed in claim 9, characterized in that the apparatus is adapted to execute at least two intermediate exposures on at least two independent areas of at least two imaging photo-sensors.

13. Apparatus as claimed in any of the claims 5-12, characterized in that the processing means are adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images.

14. Apparatus according to claim 13, characterized in that the apparatus includes processing means to compose one final image by combination of at least two final in-focus sub-images.

15. Apparatus according to claim 13, characterized in that it includes processing means to reconstruct a wave-front by combining defocus curvatures of, at least two, intermediate sub-images.

16. Apparatus according to claim 6, characterized in that the apparatus includes processing means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.

17. Apparatus according to claims 6 and 13, characterized in that the apparatus includes processing means to reconstruct by ray-tracing at least one final in-focus image.

18. Apparatus according to claim 5, characterized in that the apparatus includes at least two sensors, each comprising a photo-sensor with a single photo-sensitive spot.

19. Apparatus according to claim 18, characterized in that the apparatus includes at least two amplitude masks each combined with a focusing optical element.

20. Apparatus according to claim 19, characterized in that the processing means are adapted for range finding.

Description:
Image reconstructor

Background of the invention

The present invention relates to imaging and metering techniques. Firstly, the invention provides methods, systems and embodiments thereof for estimating aberration errors of an image and reconstructing said image from a set of multiple intermediate images by non-iterative algorithms, and, secondly, it provides methods to reconstruct wave-fronts. An apparatus based on the invention can be either a dedicated camera or a wave-front sensor, or these functions can be combined.

The invention has a broad scope of embodiments and applications, including image reconstruction for one or more focal distances, image reconstruction for EDF, speed, distance and direction measurement devices, and wave-front sensors for various applications. Reconstruction of images independent from the defocus aberration has the most practical applications. Therefore, the device or derivatives thereof can be applied for digital imaging insensitive to defocus (in cameras), digital imaging for extended depth of field ("EDF", in cameras), and as an optical distance, speed and direction measurement device (in measuring and metering devices). Camera units and wave-front sensors according to the methods and embodiments set forth in this document can be designed to be entirely solid state, with no moving parts, and to be constructed from only very few components. A basic embodiment requires only simple optics (for selected applications even only one lens), one beam splitter (or other beam-splitting element, for example, a phase grating) and two sensors, combined with dedicated data processing units/processing chips, with all these components in, for example, one solid polymer assembly.

In this document "intermediate image" refers to a phase-diverse intermediate image which has an unknown defocus compared to the in-focus image plane but an a priori known diversity defocus with respect to any other intermediate image in a set of multiple intermediate images. The "in-focus image plane" is a plane optically conjugate to an object plane and thus having zero defocus error.

The terms "object" and "image" conform to the notations of Goodman for a generalized imaging system (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6). The object is positioned in the "object plane" and the corresponding image is positioned in the "image plane". "EDF" is an abbreviation for Extended Depth of Field.

The term "in- focus" refers to in focus/optical sharpness/in optimal focus, and the term "defocus" to defocus/optical un-sharpness/blurring. An image is meant to be in- focus when the image plane is optically conjugate to the corresponding object plane.

This document merely, by way of example, applies the invention to camera applications for image reconstruction resulting in a corrected in-focus image, because defocus is, in practice, the most important aberration. The methods and algorithms described herein can be adapted to analyse and correct for any aberration of any order or combination of aberrations of different orders. A man skilled in the art will conclude that the concepts set forth in this document can be extended to other aberrations as well by adaptation of the formulas presented for the applications above.

This invention can, in principle, be adapted for application to all processes involving waves, but is most directly applicable to incoherent monochromatic wave processes. Colour imaging can be achieved by splitting white light into narrow spectral bands. White, visible light can be imaged when separated into, for example, red (R), blue (B) and green (G) spectral bands, e.g. by common filters for colour cameras such as RGB Bayer pattern filters, by providing the computation means with adaptations for, at least three, approximately monochromatic spectra and combining the images. The invention can also be applied to infrared (IR) spectra. X-rays produced by an incandescent cathode tube are, by definition, neither coherent nor monochromatic, but the methods can be used for X-rays by application of, for example, crystalline monochromators to produce monochromaticity.

For ultrasound and coherent radio-frequency signals the formulas can be adapted for the coherent amplitude transfer function of the corresponding system. A man skilled in the art will conclude that the concepts set forth in this document can be extended to almost any wave process and to almost any aberration of choice by adaptation and derivation of the formulas and mathematical concepts presented in this document. This document describes methods to obtain sharp, focused images in planes, slices, along the optical axis, as well as optical sharpness in three-dimensional space, and EDF imaging in which all objects in the intended cubic space are sharp and in-focus. The traditional focusing process, i.e. changing the distance between the imaging optics and the image on film or photo-detector or, otherwise, changing the focal distance of the optics, takes time, requires additional, generally mechanically moving, components in the camera and, last but not least, knowledge of the distance to the object of interest. Such focusing shifts the plane of focus along the optical axis. Depth of field in a single image can, traditionally, only be extended by decreasing the diameter of the pupil of the optics, i.e. by using low-NA objectives or, alternatively, apodized optics. However, decreasing the diameter of the aperture reduces the light intensity reaching the photo-sensors or photographic film and significantly degrades the image resolution due to narrowing of the image spatial spectrum. Focusing and EDF at full aperture by computational methods is therefore of considerable interest in imaging systems and clearly preferable to such traditional optical/mechanical methods.

Furthermore, a method to achieve this with no moving parts (as a solid state system) is generally preferable for both manufacturer and end-user because of low cost of equipment and ease of use.

Several methods have been proposed for digital reconstruction of in-focus images some of which will be summarized below in the context of the present invention described in this document.

Optical digital technologies regarding defocus correction and EDF started with a publication of Häusler (Optics Communications 6(1), pp. 38-42, 1972), which described a combination of multiple images into a single image in such a way that the final image exhibits EDF. This method does not reconstruct the final image from the set of defocused images but combines various in-focus areas of different images. The present invention differs from this approach because it reconstructs the final image from intermediate, defocused images that may not contain in-focus areas at all, and automatically combines these images into a sharp final EDF image.

Later, methods were proposed based on phase coding/decoding, which include in the optical system an optical mask designed such that the incoherent optical transfer function remains unchanged within a range of defocus. Dowski and co-workers (refer to, for example, US2005264886, WO9957599 and E.R. Dowski and W.T. Cathey, Applied Optics 34(11), pp. 1859-1866, 1995) developed methods and applications of EDF imaging systems based on wave-front coding/decoding with a phase filter, followed by a straightforward decoding algorithm to reconstruct the final EDF image from the phase-encoded intermediate image.

The present invention described in this document includes neither coding of wave-fronts nor the use of phase filters.

Also, various phase-diversity methods determine the phase of an object by comparison of a precisely focused image with a defocused image; refer to, for example, US 6771422 and US2004/0052426.

US2004/0052426 describes non-iterative techniques of phase retrieval for estimating the errors of an optical system, and includes capturing a sharp image of an object at a focal point and combining this image with a number of intentionally blurred, unfocused images of the same object. This concept differs from the concept described in this document in that, firstly, the distance to the object must be known beforehand or, alternatively, the camera must be focused on the object, and, secondly, the method is designed and intended to estimate the optical errors of the optics employed in said imaging. This technique requires at least one focused image at a first focal point in combination with multiple unfocused images. These images are then used to calculate wave-front errors.

The present invention differs from US 2004/0052426 in that the present invention does not require a focused image, i.e. knowledge of the distance from an object to the first principal plane of the optical system, prior to capture of the intermediate images, and uses only a set of unfocused intermediate images with an unknown degree of defocus relative to the object.

US6771422 describes a tracking system with EDF including a plurality of photo-sensors, a way of determining the defocus status of each sensor and a way to produce an enhanced final image. The defocus aberration is found by solving the transport equation derived from the parabolic equation for the complex field amplitude of a monochromatic and coherent light wave. The present invention differs from US6771422 in that it does not intend to solve the transport equation. The present invention is based on the a priori known information on the incoherent optical transfer function (OTF) of the optical system to predict the evolution of the intensity distribution for different image planes and, thus, the degree of defocus, by direct calculations with non-iterative algorithms.

Other methods to reconstruct images based on a plurality of intermediate images/intensity distributions taken at different and known degrees of defocus employ iterative phase-diversity algorithms (see, for example, J.J. Dolne et al., Applied Optics 42(26), pp. 5284-5289, 2003). Such iteration can take considerable computational power and computing time, making real-time operation unlikely. The present invention described in this document differs from the standard phase-diversity algorithms in that it is an essentially non-iterative method.

WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ren Ng et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods) uses an optical system designed such that it allows determination, by an array of microlenses, of the intensity and angle of propagation of the light at different locations on the sensor plane, resulting in a so-called "plenoptic" camera. The sharp images of the object points at different distances from the camera can be recalculated (for example, by ray-tracing). It must be noted that with the method described in the present document the intensity and angle of incidence of the light rays at different locations on the intermediate image plane can also be derived, and methods analogous to WO2006/039486, i.e. ray-tracing, can be applied to calculate sharp images of an extended object.

The present invention described in this document differs from WO2006039486 and related documents in that the present invention does not explicitly use such information on the angle of incidence obtained with an array of microlenses, for example, a Shack-Hartmann wave-front sensor; instead the respective light-ray direction is directly calculated by finding the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them. Additionally, the intermediate phase-diverse images described in this document can also be used for determining the angle and intensity of individual rays and to compose an EDF image by ray-tracing.

All documents mentioned in the sections above are included in this document by reference.

Description of the invention

The present invention relates to imaging techniques. From the single invention a number of applications can be derived:

Firstly, the invention provides a method for estimation of defocus in the optical system without prior knowledge of the distance to the object; the method is based on digital processing of multiple intermediate defocused images, and,

Secondly, provides means to digitally reconstruct a final in-focus image of the object based on digital processing of multiple intermediate defocused images, and,

Thirdly, can be used for wave-front sensing by analyzing local curvature of sub-images from which an estimated wave-front can be reconstructed, and,

Fourthly, can reconstruct EDF images by either combining images from various focal planes (for example "image stacking"), or, by combining in-focus sub-images (for example "image stitching"), or, alternatively, by correction of wave-fronts, or, alternatively, by ray-tracing to project an image in a plane of choice.

Fifthly, provides methods to calculate the speed and distance of an object by analyzing subsequent images of the object, including speed in all directions X, Y and Z, based on multiple intermediate images and, consequently, the acquired information on focal planes, and,

Sixthly, can be used to estimate the shape of a wave-front by reconstruction of the tilt of individual rays, by calculating the relative lateral displacement for at least one pair of phase-diverse images and using the a priori known defocus distance between them, and,

Seventhly, provides methods to calculate by ray-tracing a new image of an extended object in any image plane of the optical system (for example, approximating a "digital lens" device), and,

Eighthly, can be adapted to determine the wavelength of light when defocus is known precisely, providing the basis for a spectrometer, and,

Ninthly, can be adapted to many non-optical applications, for example, tomography, for digital reconstruction of a final sharp image of an object of interest from multiple blurred intermediate images resulting from a non-local spatial response of the acquisition system (i.e. intermediate image degradation can be attributed to a convolution with the system response function), where the response function is known a priori, the relative degree of blurring of any intermediate image compared to the other intermediate images is known a priori, and the absolute degree of blurring of any intermediate image is not known a priori.

With the methods described in this document a focused final image of an object is derived, by digital reconstruction, from at least two defocused intermediate images having an unknown degree of defocus compared to an ideal focal plane (or, alternatively, an unknown distance from the object to the principal planes of the imaging system), but having a precisely known degree of defocus of each intermediate image compared to any other intermediate image.

Firstly, a method of reconstruction, which method can be included in an apparatus, will be described. The method starts with at least two defocused, i.e. phase-diverse, intermediate images, provided by an optical system having an a priori known optical transfer function, from which a final in-focus image can be reconstructed by a non-iterative algorithm. Note that each intermediate image has a different and a priori unknown degree of defocus in relation to the in-focus image plane of the object, but the degree of defocus of any intermediate image in relation to any other intermediate image is a priori known.

To digitally process the images obtained above the method includes the following steps:
- a generating function comprising a combination of the spatial spectra of said intermediate images and a combination of their corresponding optical transfer functions is composed;
- said combinations of spatial spectra and optical transfer functions are adjusted such that the generating function becomes independent from the degree of defocus of, at least one, intermediate image compared to the in-focus image plane (this adjustment can take the form of adjustment of coefficients or adjustment of functional dependencies or a combination thereof, so the relationship between the combination of spatial spectra and their corresponding optical transfer functions can be designed as a linear, non-linear or functional relationship, depending on the intended application);
- the final in-focus image is reconstructed by a non-iterative algorithm based on said combinations of spatial spectra and corresponding optical transfer functions.

An apparatus to carry out the tasks set forth above must include the necessary imaging means and processing means.

Such a method includes an equation based on the generating function/functional Ψ satisfying

\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \equiv \Psi[H(\omega_x,\omega_y,\varphi-\Delta\varphi_1)\,I_0(\omega_x,\omega_y),\ldots,H(\omega_x,\omega_y,\varphi-\Delta\varphi_M)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p,   (1)

where

I(\omega_x,\omega_y,\varphi-\Delta\varphi_n) = I_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} I_n(x,y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy   (2)

is the spatial spectrum of the n-th intermediate phase-diverse image, 1 ≤ n ≤ M; x and y are the transverse coordinates in the intermediate image plane; M is the total number of intermediate images, M ≥ 2. The value Δφ_n (known a priori from the system configuration) is the diversity defocus between the n-th intermediate image plane and a chosen reference image plane. Analogously, the spatial spectrum of the object (i.e. of the final image) is

I_0(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} I_0(x',y')\,\exp[-i(\omega_x x'+\omega_y y')]\,dx'\,dy'   (3)

where x' and y' are the transverse coordinates in the object plane. In the right-hand side of Eq. 1 the spatial spectra of the phase-diverse images are substituted by I_n(ω_x,ω_y) = H(ω_x,ω_y,φ_0+δφ−Δφ_n) I_0(ω_x,ω_y), where H(ω_x,ω_y,φ) denotes the defocused incoherent optical transfer function (OTF) of the optical system; the unknown defocus φ is substituted by the sum of the defocus estimate φ_0 and the deviation δφ ≡ φ − φ_0, with |δφ/φ_0| ≪ 1: φ = φ_0 + δφ. The series coefficients B_p(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M,[I_0(ω_x,ω_y)]), functionally dependent on the spatial spectrum of the object I_0(ω_x,ω_y), can be found from Ψ by decomposing the defocused OTFs H(ω_x,ω_y,φ_0+δφ−Δφ_n) into Taylor series in δφ.
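As a hedged illustration only: the spatial spectra of Eq. 2 can be approximated with a discrete Fourier transform. The sketch below uses NumPy; the function name, the FFT-based normalisation and the fftshift centring are illustrative assumptions, not prescribed by the patent.

```python
import numpy as np

def spatial_spectrum(image):
    """Discrete analogue of Eq. 2: 2-D spatial spectrum I_n(omega_x, omega_y).

    `image` is a 2-D array of intensities I_n(x, y); the 1/(2*pi) continuous
    normalisation is absorbed into the DFT convention here (an illustrative
    choice, not from the text).
    """
    return np.fft.fftshift(np.fft.fft2(image))

# Example: spectra of M phase-diverse intermediate images
# spectra = [spatial_spectrum(img) for img in intermediate_images]
```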

The generating function/functional Ψ is chosen to have zero first- and higher-order derivatives, up to order K, with respect to the unknown δφ:

\frac{\partial^i \Psi}{\partial(\delta\varphi)^i} = 0, \qquad i = 1,\ldots,K.   (4)

Thus, B_i(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M,[I_0(ω_x,ω_y)]) = 0 for i = 1,…,K, and Eq. 1 simplifies to

\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]) + O(\delta\varphi^{K+1}).   (5)

Finally, neglecting the residual term O(δφ^{K+1}) in Eq. 5, the object spatial spectrum I_0(ω_x,ω_y) can be found by solving the approximate equation

\Psi[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] \cong B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)]).   (6)

So, having two or more intermediate images I_n(ω_x,ω_y), n = 1, 2, …, and knowing a priori the system optical transfer function H(ω_x,ω_y,φ), a generating function Ψ according to Eq. 1 that is independent from the unknown defocus φ (or δφ), as required by Eq. 4, can be composed by an appropriate choice of the functional relation between the I_n(ω_x,ω_y) and, correspondingly, between the OTFs H(ω_x,ω_y,φ_0+δφ−Δφ_n) corresponding to said spatial spectra. The object spectrum I_0(ω_x,ω_y), which is the basis for the final in-focus image or in-focus picture, is then reconstructed by a non-iterative algorithm based on Eq. 6, which combines, on the one hand, the spatial spectra I_n(ω_x,ω_y) and, on the other hand, the incoherent OTFs H(ω_x,ω_y,φ_0+δφ−Δφ_n), substituted by their corresponding Taylor expansions in δφ.

An important example of the generating function is a linear combination of the spatial spectra I_n(ω_x,ω_y) of the intermediate phase-diverse images:

\Psi = \sum_{n=1}^{M} g_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\sum_{p\ge 0} B_p(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,\delta\varphi^p,   (7)

where the coefficients g_n(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M), n = 1,…,M, are chosen to comply with Eq. 4. In this case Eq. 5 results in

\sum_{n=1}^{M} g_n(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,I_n(\omega_x,\omega_y) = I_0(\omega_x,\omega_y)\times\left\{B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M) + O(\delta\varphi^{K+1})\right\}.   (8)

The coefficients g_n(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M) can be found from Eq. 8 by making the substitutions I_n(ω_x,ω_y) = H(ω_x,ω_y,φ_0+δφ−Δφ_n) I_0(ω_x,ω_y), where an explicit expression for the incoherent optical transfer function (OTF) H(ω_x,ω_y,φ) of the optical system is used. In such a way, the g_n(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M) are a priori known functions depending only on the optical system configuration. The analytical expression for the system OTF H(ω_x,ω_y,φ) can be found in many ways, including fitting of the calculated OTF; general formulas are given, for example, by Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996). The "least-mean-square" solution Ĩ_0(ω_x,ω_y) of Eq. 8 that minimizes the mean square error (MSE)

\mathrm{MSE} = \int\!\!\int \left| \tilde{I}_0(\omega_x,\omega_y) - I_0(\omega_x,\omega_y) \right|^2 d\omega_x\,d\omega_y   (9)

takes the form

\tilde{I}_0(\omega_x,\omega_y) = \frac{B_0^{*}\,\Psi}{\left|B_0\right|^{2} + \varepsilon},   (10)

with Ψ the linear combination of Eq. 7, where the constant ε⁻¹, by analogy with the least-mean-square-error filter (Wiener filter), denotes the signal-to-noise ratio. "Noise" in the algorithm is caused by the residual term O(δφ^{K+1}) in Eq. 8, which depends on δφ. When |B_0| has no zeros within the spatial frequency range of interest Ω, the constant ε can be defined from Eq. 8 as follows:

\varepsilon = \min_{\Omega}\left| B_0^{-1}(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M)\,O(\delta\varphi^{K+1}) \right|.   (11)

So, Eq. 10 describes the non-iterative algorithm for object reconstruction with the generating function chosen as a linear combination of the spatial spectra of the phase-diverse images.
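A minimal numerical sketch of this linear-combination reconstruction (Eqs. 7-10) follows, assuming the coefficients g_n and B_0 have been precomputed from the known OTF; the names and array conventions are illustrative, not from the patent.

```python
import numpy as np

def reconstruct_object(spectra, g, b0, eps):
    """Non-iterative reconstruction sketch following Eqs. 7-10.

    spectra : list of M 2-D spectra I_n(omega_x, omega_y) (Eq. 2)
    g       : list of M 2-D arrays of a priori coefficients g_n (from the OTF)
    b0      : 2-D array B_0(omega_x, omega_y, phi_0, ...) (Eq. 8)
    eps     : regularisation constant, inverse signal-to-noise ratio (Eq. 11)
    """
    # Generating function Psi as a linear combination of the spectra (Eq. 7)
    psi = sum(gn * In for gn, In in zip(g, spectra))
    # Wiener-type least-mean-square solution (Eq. 10)
    obj_spectrum = np.conj(b0) * psi / (np.abs(b0) ** 2 + eps)
    # Back to the image domain: the final in-focus image
    return np.real(np.fft.ifft2(np.fft.ifftshift(obj_spectrum)))
```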

The defocus estimate φ_0 can be found in many ways, for example, from a pair of phase-diverse images. If I_1(ω_x,ω_y) is the spatial spectrum of the first image, characterized by the unknown defocus φ, and I_2(ω_x,ω_y) is the spatial spectrum of the second image with defocus φ + Δφ, where Δφ is the difference in defocus predetermined by the system configuration, then the estimate of defocus is given by the approximate expression

\varphi_0 \cong \frac{\gamma_0}{2\gamma_2\,\Delta\varphi}\,\Lambda - \frac{\Delta\varphi}{2},   (12)

where the OTF expansion H(ω_x,ω_y,φ) = γ_0 + γ_2 φ² + … in the vicinity of φ = 0, valid at |ω_x² + ω_y²| ≪ 1, is used. The coefficient Λ denotes the ratio

\Lambda = \left\langle \frac{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)}{I_1(\omega_x,\omega_y)} \right\rangle,   (13)

and the averaging ⟨…⟩ is carried out over low spatial frequencies |ω_x² + ω_y²| ≪ 1. In addition, the estimate φ_0 of the unknown defocus φ can be found from three consecutive phase-diverse images: I_1(ω_x,ω_y) with defocus φ − Δφ_1, I_2(ω_x,ω_y) with defocus φ, and I_3(ω_x,ω_y) with defocus φ + Δφ_2 (Δφ_1 and Δφ_2 are specified by the system arrangement):

\varphi_0 = \frac{\chi\,\Delta\varphi_1^2 + \Delta\varphi_2^2}{2\,(\chi\,\Delta\varphi_1 - \Delta\varphi_2)}.   (14)

The coefficient χ is the ratio of the image spectra

\chi = \left\langle \frac{I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)}{I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)} \right\rangle,   (15)

averaged over low spatial frequencies |ω_x² + ω_y²| ≪ 1. Note that in practice the best estimates of defocus according to Eq. 14 were achieved when the numerator and the denominator in Eq. 15 were averaged independently, i.e.

\chi = \frac{\left\langle I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y) \right\rangle}{\left\langle I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y) \right\rangle}.   (16)

Note that an estimate of defocus (φ_0 in Eq. 1) is necessary to start these computations; this estimate is automatically provided by the formulas specifying the reconstruction algorithm above. Such an estimate can also be provided by other analytical methods, for example, by determining the first zero-crossing in the spatial spectrum of the defocused image as described by I. Raveh et al. (Optical Engineering 38(10), pp. 1620-1626, 1999).
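For illustration, a hedged sketch of the three-image defocus estimate of Eqs. 14 and 16 follows; the choice and size of the low-frequency averaging window are assumptions of this sketch, not prescribed by the text.

```python
import numpy as np

def estimate_defocus(I1, I2, I3, dphi1, dphi2, radius=4):
    """Defocus estimate phi_0 from three phase-diverse spectra (Eqs. 14 and 16).

    I1, I2, I3 : centred 2-D spectra with defocus phi-dphi1, phi, phi+dphi2
    radius     : half-width (in samples) of the low-frequency averaging region;
                 the value is an illustrative assumption
    """
    cy, cx = I1.shape[0] // 2, I1.shape[1] // 2
    low = np.s_[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    # Numerator and denominator averaged independently (Eq. 16)
    chi = np.real(np.mean(I3[low] - I2[low]) / np.mean(I2[low] - I1[low]))
    # Eq. 14 for unequal diversity steps dphi1, dphi2
    return (chi * dphi1 ** 2 + dphi2 ** 2) / (2.0 * (chi * dphi1 - dphi2))
```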

In practice, calculations according to Eq. 16 together with Eq. 14 can be used in an apparatus to determine the degree of defocus with, at least, two photo-sensors having only one photo-sensitive spot, for example photo-diodes or photo-resistors. A construction for such an apparatus likely includes not only the photo-sensors but also an amplitude mask, focusing optics and processing means which are adapted to calculate the degree of defocus of, at least, one intermediate image. The advantage of such a system is that no Fourier transformations are required for the calculations, which significantly reduces calculation time. This can be achieved by, for example, simplification of Eq. 16 through Parseval's theorem, for example:

\chi = \frac{\int\!\!\int U(x,y)\,\{I_3(x,y) - I_2(x,y)\}\,dx\,dy}{\int\!\!\int U(x,y)\,\{I_2(x,y) - I_1(x,y)\}\,dx\,dy},   (17)

where U(x,y) defines the amplitude mask in one or multiple image planes.

Also, photo-diodes and photo-resistors are significantly less expensive compared to photo-sensor arrays and are more easily assembled.

Note that a Fourier transformation can be achieved by the processing methods described above, but can also be achieved optically, for example, by an additional optical element between the beam splitter and the imaging photo-sensor. Using such an optical Fourier transformation will significantly reduce digital processing time, which might be advantageous for specific applications.

Such an apparatus can be applied as, for example, a precise and inexpensive optical range meter, a camera component or a distance meter. Such an apparatus differs from existing range finders with multiple discrete photo-sensors, which all use phase-detection methods. The distance of the object to the camera can be estimated, once the degree of defocus is known, via a simple optical calculation, so the methods can be applied to a distance-metering device. Also, the speed and direction of an object in the X, Y and Z directions (i.e. in 3D space) can be estimated with additional computation means and information on at least two subsequent final images and the time between the capture of the intermediate images for these final images. Such an inexpensive component for solid state image reconstruction will increase consumer, military (sensing and targeting, with or without the camera function and with or without wave-front sensing functions) and technical applications.

As an alternative, the estimate can be obtained by an additional device, for example, an optical or ultrasound distance measuring system. However, most simply, in the embodiments described in this document, the estimate is provided by the algorithm itself without the aid of any additional measuring device.

Note that an estimate of the precision of the degree of defocus can also be obtained by Cramer-Rao analysis as described in D.J. Lee et al., J. Opt. Soc. Am. A 16(5), pp. 1005-1015, 1999, which document is incorporated in this document by reference.

Apart from the method described above, the invention also provides an apparatus for providing at least two phase-diverse intermediate images of said object, wherein each of the intermediate images has a different degree of defocus compared to an ideal focal plane (i.e. an image plane of the same system with no defocus error), but a precisely known degree of defocus compared to any other intermediate image. The apparatus includes processing means for reconstructing a focused image of the object by an algorithm expressed by Eq. 6.

Note that a man skilled in the art will conclude that: (a) said Fourier-based processing with spatial spectra of images can also be carried out by processing the corresponding amplitudes of wavelets to the same effect; (b) the described method of image restoration can be adapted to optical wave-front aberrations different from defocus, in which case each phase-diverse image is characterized by an unknown absolute magnitude of an aberration but an a priori known difference in aberration magnitude relative to any other phase-diverse image; and (c) the processing functions mentioned above can be applied to any set of images or signals which are blurred, but of which the transfer (blurring) function is known. For example, the processing function can be used to reconstruct images/signals with motion blur or Gaussian blur in addition to said out-of-focus blur.

Secondly, an additional generating function to provide the degree of defocus of at least one of said intermediate images compared to the in-focus image plane is provided here, and the degree of defocus can be calculated by additional processing in an apparatus. An improved estimate for the unknown defocus can be directly calculated from at least two phase-diverse intermediate images obtained with the optical system by a non-iterative algorithm according to

\delta\varphi \cong \frac{\Psi' - B_0'}{B_1'},   (18)

and thus the improved estimate becomes φ = φ_0 + δφ. The generating function Ψ' in this case obeys

\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = \sum_{p\ge 0} B_p'(\omega_x,\omega_y,\varphi_0,\Delta\varphi_1,\ldots,\Delta\varphi_M,[I_0(\omega_x,\omega_y)])\,\delta\varphi^p,   (19)

and

\frac{\partial^i \Psi'}{\partial(\delta\varphi)^i} = 0, \qquad i = 2,\ldots,K.   (20)

In compliance with Eq. 20, B_i'(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M,[I_0(ω_x,ω_y)]) = 0 for i = 2,…,K, and Eq. 19 reduces to

\Psi'[I(\omega_x,\omega_y,\varphi-\Delta\varphi_1),\ldots,I(\omega_x,\omega_y,\varphi-\Delta\varphi_M)] = B_0' + B_1'\,\delta\varphi + O(\delta\varphi^{K+1}).   (21)

The latter formula directly yields Eq. 18. Note that the coefficients B_p' are, in general, functionals of the object spectrum I_0(ω_x,ω_y), which, in turn, can be found from Eq. 6.
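A short, hedged sketch of the refinement step of Eq. 18 follows; the masking of ill-conditioned frequencies and the averaging down to a single scalar are implementation assumptions of the sketch, not prescribed by the text.

```python
import numpy as np

def refine_defocus(psi_p, b0_p, b1_p):
    """Improved defocus deviation via Eq. 18: delta_phi = (Psi' - B0') / B1'.

    All inputs are 2-D arrays over (omega_x, omega_y); a single real estimate
    is taken by averaging where B1' is well conditioned (an illustrative
    implementation choice).
    """
    ok = np.abs(b1_p) > 1e-3 * np.abs(b1_p).max()  # avoid division by ~0
    return float(np.mean(np.real((psi_p[ok] - b0_p[ok]) / b1_p[ok])))

# phi = phi_0 + refine_defocus(...)   # the improved estimate used in Eq. 6
```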

It can be necessary to correct the spatial spectrum of at least one of said intermediate images for the lateral shift of said image compared to any other intermediate image, because the image reconstruction described in this document is sensitive to lateral shifts. A method for such a correction is given below; this method can be included in the processing means of an apparatus carrying out such image reconstruction. The general algorithm according to Eq. 6 requires a set of defocused images as input data. However, due to, for example, misalignments of the optical system, some intermediate images can be shifted in the plane perpendicular to the optical axis, resulting in incorrect restoration of the final image.

Using the shift-invariance property of the Fourier power spectrum and the Hermitian redundancy in the image spectrum (image intensity is real-valued), in combination with the Hermitian symmetry of the OTF, i.e. H(ω_x,ω_y,φ) = H*(−ω_x,−ω_y,φ), the spectrum of a shifted intermediate image can be recalculated to exclude the unknown shift. An example of the method for excluding the shift dependence is described below. Assuming that the n-th intermediate image is shifted by Δx, Δy, in compliance with Eq. 3 its spectrum becomes

\tilde{I}_n(\omega_x,\omega_y) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} I_n(x-\Delta x,\, y-\Delta y)\,\exp[-i(\omega_x x+\omega_y y)]\,dx\,dy = I_n(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)],   (22)

I_n(ω_x,ω_y) being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example, a square or a circle, and the defocused OTF H(ω_x,ω_y,φ) is real-valued. For two intermediate images, one of which is supposed to be unshifted, we have, in agreement with Eq. 22,

\tilde{I}_n(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_n)\,I_0(\omega_x,\omega_y)\,\exp[-i(\omega_x\Delta x+\omega_y\Delta y)], \qquad I_l(\omega_x,\omega_y) = H(\omega_x,\omega_y,\varphi_l)\,I_0(\omega_x,\omega_y).   (23)

From Eq. 23, [\tilde{I}_n(ω_x,ω_y)/I_l(ω_x,ω_y)]² ∝ exp[−2i(ω_x Δx + ω_y Δy)] ≡ exp(−2iθ), where i = √−1, and the shift-dependent factor can thus be excluded from \tilde{I}_n(ω_x,ω_y). The shift-corrected spectrum takes the form I_n(ω_x,ω_y) = \tilde{I}_n(ω_x,ω_y) exp(iθ), and it can be further used in the calculations according to Eq. 6. Note that the formulas above give one example of a method for correcting the lateral image shift; there are also other methods to obtain shift-corrected spectra, for example, correlation techniques or analysis of the moments of the intensity distribution.

The quality of the reconstruction of an object I_0(ω_x,ω_y) according to the non-iterative algorithm given by Eq. 6 can thus be significantly improved by replacing the initial defocus estimate φ_0 with the improved estimate φ = φ_0 + δφ, where δφ is provided by Eq. 21. The degree of defocus of the intermediate image compared to the in-focus image plane can be included in the non-iterative algorithm, and the processing means of an apparatus for such image reconstruction adapted accordingly.

At least two intermediate images are required for the reconstruction algorithm specified by Eq. 6, but any number of intermediate images can be used, providing higher quality of restoration and weaker sensitivity to the initial defocus estimate φ_0, since the generating function Ψ gives the (M − 1)-th order approximation to B_0(ω_x,ω_y,φ_0,Δφ_1,…,Δφ_M,[I_0(ω_x,ω_y)]) defined by Eq. 1 with respect to the unknown value δφ. The resolution and overall quality of the final image will increase with the number M of intermediate images, at the expense of a larger number of photo-sensors or an increasingly complex optical/mechanical arrangement, and increased computation time. Reconstruction from three intermediate images is used as an example in this document.

The degrees of defocus of the multiple intermediate images relative to the ideal focal plane (i.e. an image plane of the same system with no defocus error) differ. In Eq. 1 the defocus of the n-th intermediate image, φ_n = φ − Δφ_n (n = 1,…,M, with M the total number of intermediate images), is unknown prior to provision of the intermediate images. However, as mentioned earlier, the difference in degree of defocus Δφ_n of the multiple intermediate images relative to each other (or to any chosen image plane) must be known with great precision. This imposes no problems in practice, because the relative difference in defocus is specified in the design of the camera and its optics. Note that these relative differences vary with camera design, the type of photo-sensor(s) used and the intended application of the image reconstructor. Moreover, the differences in defocus Δφ_n can be found, and accounted for in further computations, by performing calibration measurements with well-defined objects.

The degree of defocus of the image can be estimated by non-iterative calculations using the fixed and straightforward formulas given above and the information provided by the intermediate images. Such non-iterative calculations are of low computational cost and provide stable and precise results. Furthermore, such non-iterative calculations can be performed by relatively simple dedicated electronic circuits, further expanding the possible applications of the invention. Thus, the reconstruction of a final sharply focused image is independent from the degree of defocus of any of the intermediate images relative to the object. The precision of the measurement of the absolute defocus (and, therefore, the precision of the range which is calculated from defocus values) is fundamentally limited by the combination of the entrance aperture (D) of the primary optics and the distance (z) from the primary optics to an object of interest. In the case when a diffraction-limited spot implies the "circle of confusion" of an optical system, the depth of field becomes ~ (z/D)² and represents the defocus uncertainty. For a high-aperture aplanatic lens, an explicit expression was derived by Sheppard (C.J.R. Sheppard, J. Microsc. 149, 73-75, 1988).

So, a high precision of defocus and range estimates requires, by definition, a large aperture of the optical system. This can be achieved by fitting, for example, a very large lens to the apparatus. However, such a lens may require a diameter of one meter, a size likely not practical for the majority of applications, which require small camera units. An effectively large aperture can also be obtained by optically combining light signals from multiple, at least two, optical elements, for example, relatively small reflective or refractive elements positioned outside the optical axis. Such optical elements are typically positioned in the direction perpendicular to the optical axis, but not necessarily so. The theoretical depth of focus, i.e. axial resolution, corresponds to the resolution of a whole refractive surface whose dimension is characterized by the distance between the optical elements. The optical elements can be regarded as small individual sectors at the periphery of a large refractive surface. Clearly, the total light intensity received by an image sensor depends on the combined apertures of the multiple optical elements. Such a system with multiple apertures can be made flat and, in the case of only two light sources, also linear.

So, according to the above, an apparatus for, for example, range-finding applications can be constructed which combines at least two light signals from at least two optical elements positioned opposite each other at a distance perpendicular to the optical axis.

The procedures described in this document so far require that the distances between the image planes are known precisely, because the generating function, or functional (see Ψ, as in Eq. 1), combines spatial spectra of intermediate images with a priori known diversity defocus. However, a man skilled in the art may conclude that, alternatively, such a procedure can be adapted to process intermediate images that are spatially modulated by a priori known phase and amplitude masks. Such masks spatially modulate the phase and/or amplitude of the light waves on their way to the image sensors, and result in spatially phase- and/or amplitude-modulated intermediate images. The final image can be restored digitally by subsequent processing of, at least one, spatially modulated intermediate image according to existing and well-known decoding algorithms or, alternatively, by algorithms adapted from the procedures described in this document, which adaptations of the formulas above will be set forth below. Said modulations preferably include defocus, but not necessarily so. Such wave-front encoding can be achieved by, for example, including at least one phase mask, or at least one amplitude mask, or a combination of any number of phase and amplitude masks having a precisely known modulation function. The system embodiment implies that at least one phase and/or amplitude mask is located in the exit pupils of the imaging system.

For a set of intermediate images I_n(ω_x,ω_y), 1 ≤ n ≤ M, obtained with phase and/or amplitude masks, Eq. 1 can be rewritten as

\Psi[I_1(\omega_x,\omega_y),\ldots,I_M(\omega_x,\omega_y)] \equiv \Psi[H_1(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y),\ldots,H_M(\omega_x,\omega_y)\,I_0(\omega_x,\omega_y)] = \sum_{p\ge 0} B_p\,\delta\varphi^p,   (24)

where the OTFs are (see, for example, H.H. Hopkins, Proc. Roy. Soc. London A231, 91-103, 1955)

H_n(\omega_x,\omega_y) = \frac{1}{\Omega_n}\int\!\!\int P_n\!\left(\xi+\frac{\omega_x}{2},\,\eta+\frac{\omega_y}{2}\right) P_n^{*}\!\left(\xi-\frac{\omega_x}{2},\,\eta-\frac{\omega_y}{2}\right) d\xi\,d\eta,   (25)

with Ω_n = ∫∫ |P_n(ξ,η)|² dξ dη the area of the n-th pupil in canonical coordinates (ξ,η), and the pupil function given by

P_n(\xi,\eta) = P_n^{(0)}(\xi,\eta)\,\exp[i\,\vartheta_n(\xi,\eta)].   (26)

In Eq. 26, the function P_n^{(0)}(ξ,η), with P_n^{(0)}(ξ,η) ∈ ℝ, is the amplitude transmission function corresponding to the n-th amplitude mask, and ϑ_n(ξ,η) is the phase function representing the n-th phase mask in the exit pupil. In the case of defocus φ_n, for example, ϑ_n(ξ,η) = φ_n(ξ² + η²). It is important to note that Eq. 25, through the phase function in Eq. 26, implicitly contains the unknown defocus φ, which alternatively can be expressed as φ = φ_0 + δφ (with φ_0 the defocus estimate).

Consider now the reconstruction of the object spectrum I_0(ω_x,ω_y) from Eq. 24. The objective of the method is to properly choose combinations of ϑ_n(ξ,η) and/or P_n^{(0)}(ξ,η) for all intermediate images, which combinations ensure the validity of Eq. 4. Finally, the object spectrum I_0(ω_x,ω_y) can be recalculated from Eq. 6 with an alternative generating function/functional Ψ given by Eq. 24 and invariant to the defocus φ (up to terms ~ δφ^{K+1}).

Analogously to Eqs. 19-20, a new generating function/functional Ψ' can be constructed by properly combining ϑ_n(ξ,η) and/or P_n^{(0)}(ξ,η) to retain only linear terms in δφ on the right-hand side of Eq. 19. The unknown defocus φ can subsequently be found from Eq. 21 by substituting Ψ'.
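For illustration, the defocused OTF of Eqs. 25-26 can be evaluated numerically as an autocorrelation of the pupil function; the FFT shortcut below (Wiener-Khinchin) and the omission of zero-padding against wrap-around are simplifying assumptions of this sketch.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def defocused_otf(pupil_amp, defocus, xi, eta):
    """Incoherent OTF sketch via the autocorrelation of the pupil (Eq. 25).

    pupil_amp : 2-D real array P_n^(0)(xi, eta), the amplitude mask
    defocus   : reduced defocus phi_n; phase mask theta_n = phi_n*(xi^2 + eta^2)
    xi, eta   : 2-D grids of canonical pupil coordinates
    The autocorrelation is evaluated with FFTs, an illustrative shortcut for
    the shifted-pupil integral of Eq. 25 (zero-padding omitted for brevity).
    """
    pupil = pupil_amp * np.exp(1j * defocus * (xi ** 2 + eta ** 2))  # Eq. 26
    area = np.sum(np.abs(pupil) ** 2)                                # Omega_n
    acf = ifft2(np.abs(fft2(pupil)) ** 2)                            # autocorrelation
    return np.fft.fftshift(acf) / area                               # H(0) = 1
```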

So, an imaging apparatus can be designed which includes, in addition to the basic image-forming optics described elsewhere in this document, at least one optical mask to spatially modulate the incoming light signal. Either the phase or the intensity of said signal of, at least one, intermediate image can be modulated. Both the phase and the intensity of, at least one, intermediate image can be spatially modulated by, at least one, mask, or separate phase and amplitude masks can be included for separate and independent modulation functions. The resulting modulation yields, at least one, spatially modulated light signal from which the image can subsequently be reconstructed, in accordance with the method described above, by digital means to diminish the sensitivity of the imaging apparatus to, at least one, selected optical aberration, which can be the defocus aberration.

Image reconstruction: an example with three intermediate images

At least two intermediate images are required for a reconstruction as described above, but any number can be used as the starting point for such a reconstruction. As an illustration of the reconstruction algorithm set forth in the present document, we now consider an example with three intermediate images. Assume that the spatial spectra of three consecutive phase-diverse images are

I_1(ω_x,ω_y), I_2(ω_x,ω_y) and I_3(ω_x,ω_y), with defocuses φ − Δφ, φ and φ + Δφ, respectively. In agreement with Goodman (J.W. Goodman, Introduction to Fourier Optics, McGraw-Hill Co., Inc., New York, 1996, Chap. 6) the reduced magnitude of defocus is specified as

\varphi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{z_1} - \frac{1}{z_a}\right),   (27)

where D is the exit pupil size, λ is the wavelength, z_1 is the position of the ideal image plane along the optical axis and z_a is the position of the shifted image plane. The defocus estimate (for the second image) can be found from Eq. 14:

\varphi_0 = \frac{\Delta\varphi}{2}\,\frac{\chi + 1}{\chi - 1},   (28)

where, in agreement with Eq. 16,

\chi = \frac{\int\!\!\int \left[I_3(\omega_x,\omega_y) - I_2(\omega_x,\omega_y)\right] d\omega_x\,d\omega_y}{\int\!\!\int \left[I_2(\omega_x,\omega_y) - I_1(\omega_x,\omega_y)\right] d\omega_x\,d\omega_y},   (29)

and the integration is performed over low spatial frequencies |ω_x² + ω_y²| ≪ 1. With φ_0 in hand, and following Eq. 7, the generating function satisfying Eq. 4 becomes

\Psi \equiv I_0(\omega_x,\omega_y)\times\left\{h_0 + \nu\,(h_1 + h_3\Delta\varphi^2) + 2\mu\,(h_2 + h_4\Delta\varphi^2) + O(\delta\varphi^3)\right\},   (30)

and

B_0(\omega_x,\omega_y,\varphi_0,\Delta\varphi) = h_0 + \nu\,(h_1 + h_3\Delta\varphi^2) + 2\mu\,(h_2 + h_4\Delta\varphi^2).   (31)

The coefficients ν and μ are

\nu = \frac{h_2 h_3 - 2 h_1 h_4}{4 h_2 h_4 - 3 h_3^2 + 8 h_4^2 \Delta\varphi^2},   (32)

\mu = \frac{3 h_1 h_3 - 2 h_2^2 - 4 h_2 h_4 \Delta\varphi^2}{6\,(4 h_2 h_4 - 3 h_3^2 + 8 h_4^2 \Delta\varphi^2)},   (33)

and h_i (i = 0,…,4) are the Taylor series coefficients of the defocused OTF H(ω_x,ω_y,φ = φ_0 + δφ) in the neighbourhood of φ_0, i.e.

H(\omega_x,\omega_y,\varphi_0+\delta\varphi) = h_0 + h_1\,\delta\varphi + h_2\,\delta\varphi^2 + h_3\,\delta\varphi^3 + h_4\,\delta\varphi^4 + O(\delta\varphi^5).   (34)

Finally, the spectrum of the reconstructed image, in concordance with Eq. 10, can be rewritten as

\tilde{I}_0(\omega_x,\omega_y) = \frac{B_0^{*}\left\{ I_2 + \frac{\nu}{2\Delta\varphi}\,(I_3 - I_1) + \frac{\mu}{\Delta\varphi^2}\,(I_3 + I_1 - 2 I_2) \right\}}{\left|B_0\right|^{2} + \varepsilon}.   (35)

An improved estimate of defocus, φ = φ_0 + δφ, complies with Eq. 18 for the generating function specified by Eq. 7:

\delta\varphi = \frac{B_0\,\Psi' - B_0'\,\Psi}{B_1'\,\Psi},   (36)

where B_0 is given by Eq. 31 and

B_0' = h_0 + \tau\,(h_1 + h_3\Delta\varphi^2) + 2\sigma\,(h_2 + h_4\Delta\varphi^2),   (37)

B_1' = h_1 + 2\tau\,h_2 + 6\sigma\,h_3 + 4\tau\,h_4\Delta\varphi^2,   (38)

\Psi' = I_2 + \frac{\tau}{2\Delta\varphi}\,(I_3 - I_1) + \frac{\sigma}{\Delta\varphi^2}\,(I_3 + I_1 - 2 I_2).   (39)

The coefficients ν and μ are specified by Eqs. 32-33; the coefficients τ and σ in Eqs. 37-39 satisfy the following equations:

\tau = -\frac{h_3}{4 h_4},   (40)

\sigma = -\frac{1}{48}\,\frac{4 h_2 h_4 - 3 h_3^2}{h_4^2}.   (41)

The optimum difference in defocus Δφ between the intermediate images is related to the specific dynamic range of the image photo-sensors, i.e. their pixel depth, as well as to the optical features of the object of interest. Depending on the defocus magnitude, the difference in distance between the photo-sensors must exceed at least one wavelength of light to produce a detectable difference in the intensity of the images. The right-hand terms in Eqs. 35 and 39 are, in fact, finite-difference approximations of the corresponding derivatives of the defocus-dependent image spectrum I(ω_x,ω_y,φ) = H(ω_x,ω_y,φ) I_0(ω_x,ω_y) with respect to the defocus φ. By reducing the difference in defocus between the intermediate images or, in other words, by reducing the distance between the intermediate image planes, the precision of the approximation can be increased. A high pixel depth or, alternatively, a high dynamic range allows sensing of small intensity variations; thus a small difference in defocus between the intermediate images can be implemented, which results in increased quality of the final image.

Various embodiments of a device can be designed, which include, but which are not restricted to, various embodiments described below.

Apart from the method and apparatus which are adapted to provide an image wherein a single object is depicted in-focus, a preferred embodiment provides a method and apparatus wherein the intermediate images depict more than one object, each of the depicted objects having a different degree of focus in each of the intermediate images, and before the execution of said method one of those objects is selected.

Clearly, the image reconstructor, with its means for providing intermediate images, must have at least one optical component (to project an image) and at least one photo-sensor (to capture the image/light). Additionally, the reconstructor requires digital processing means, displays and all other components required for digital imaging.

Firstly, a preferred embodiment of the providing means includes one image photo-sensor which can move mechanically; for example, the device can be designed including optics to form an image on one sensor, where the image photo-sensor or, alternatively, the whole camera assembly moves a predetermined and precise distance along the optical axis between the subsequent intermediate exposures. The simplicity of such a device lies in the need for only one photo-sensor; the complexity lies in the mechanics needed for precise movement. Such precise movement is most effectively achieved for only two images, because only two alternative stopping positions of the device are needed. Another embodiment with mechanically moving parts is a system with optics and one sensor, but with a spinning disc with stepwise sectors of different optical thickness. An image is taken each time a sector of different and known thickness is in front of the photo-sensor. The thickness of the material provides a precisely known delay of the wave-front for each image separately and, thus, a set of intermediate images can be provided for subsequent reconstruction by the image reconstruction means.

Secondly, a solid state device (with no mechanical parts/movement) can be employed. In a preferred embodiment of the providing means the optics can be designed such that at least two independent intermediate images are provided to one fixed image photo-sensor. These images can be, for example, two large distinct sub-areas each covering approximately half of the photo-sensor, and the required diversity defocus can be provided by, for example, a planar mask.

Also, at least two independent image photo-sensors can be used (for example, three in the example set forth throughout this document), each producing a separate intermediate image, likely, but not strictly necessarily, simultaneously. The device can be designed including optics to form an image which is split into multiple images by, for example, at least one beam splitter or, alternatively, a phase grating, with a sensor at the end of each split beam, with a light path which is precisely known and which represents a known degree of defocus compared to at least one other intermediate image. Such a design (for example, with mirror optics analogous to the optics of a Fabry-Perot interferometer) has, for example, beam splitters to which a large number of sensors, or independent sectors on one sensor, for example, three, can be added. The simplicity of such a device is the absence of mechanical movement and its proven construction in other applications, for example said interferometer. Thirdly, a scanning device can provide the intermediate images. Preferably, a line-scanning arrangement is applied. Line scanners with linear photo-sensors are well known and can be implemented without much technical difficulty as providing means for an image reconstructor. The image can be sensed by a linear sensor scanning in the image plane. Such sensors, even at high pixel depth, are inexpensive, and mechanical means to move such sensors are well known from a myriad of applications. Clearly, disadvantages of this embodiment are complex mechanics and increased time to capture the intermediate images, because scanning takes time. Alternatively, a scanner configuration employing several line photo-sensors positioned in intermediate image planes displaced along the optical axis can be used to take the intermediate images simultaneously.

Fourthly, the intermediate images can be produced by different light frequency ranges. Pixels of the sensor can be fitted alternately with a red, blue or green filter in a pattern, for example, the well-known Bayer pattern. Such image photo-sensors are commonplace in technical and consumer cameras. The colour split provides a delay and a subsequent difference in defocus between the pixel groups. A disadvantage of this approach is that only grey-scale images will result as the final image. Alternatively, for colour images the colour split is applied to the final image, and the intermediate images for the different colours are reconstructed separately prior to stacking of such images.

Arrangements for colour images are well known, for example, Bayer pattern filters on the image photo-sensor, or spinning discs with different colour filters in front of the optics of the providing means, which disc is synchronized with the image capture process. Alternatively, red (R), blue (B) and green (G) spectral bands ("RGB"), or any other combination of spectral bands, can also be separated by prismatic methods, as is common in professional imaging systems.

Fifthly, a spatial light modulator, for example, a liquid crystal device or an adaptive mirror, can be included in the light path of at least one sensor, to modulate the light between the taking of the intermediate images. Note that the adaptive mirror can be of a most simple design, because only defocus alteration is required, which greatly reduces the number of actuators in such a mirror. Such a modulator can be of planar design, i.e. a "piston" phase filter, just to lengthen the path of the light, or such a modulator can have any other phase-modulating shape, for example, a cubic filter. Using cubic filters allows for combinations of the methods described in this document with wave-front coding/decoding technologies, to which references can be found in this document.

Lastly, an image reconstructor adapted to process intermediate sub-images from corresponding sub-areas of at least two intermediate images into at least two final in-focus sub-images can be constructed for EDF and wave-front applications. Such a reconstructor has at least one image photo-sensor (for an image/measuring light intensity) or multiple image photo-sensors (for measuring light intensity only), each divided into multiple sub-sensors, with each sub-sensor producing an intermediate image independent of the other sub-sensors, by projecting intermediate images on the sensor by, for example, a segmented input lens, or a segmented input lens array.

It should be noted that increasing the number of intermediate images, with a consequently decreasing sensor area per intermediate image, increases the precision of the estimate of defocus but decreases the image quality/resolution per intermediate image. So, for example, for an application requiring high image quality the number of sub-sensors should be reduced, whereas for applications requiring precise estimation of distance and speed the number of sub-sensors should be increased. Methods for calculating the optimum for such segmented lenses and lens arrays are known and summarized in, for example, Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, and in technologies and methods related to Shack-Hartmann lens arrays. A person skilled in the art will recognize that undesired effects such as parallax between the intermediate images on the sub-sensors can be corrected for by calibration of the device during manufacturing, or, alternatively, digitally during image reconstruction, or by increasing the number of sub-sensors and their distribution on the photo-sensor.

Alternatively, small sub-areas of at least two intermediate images can be distributed over the photo-sensors in a pattern. For example, the sensor can be fitted with a device or optical layer including optical steps, which delay the incoming wave-front differently for sub-areas in a pattern of, for example, lines or dots. Theoretically, the sub-areas can have the size of one photo-sensor pixel. The sub-areas must, of course, be read out digitally and separately to produce at least two intermediate images with different but known degrees of defocus (phase shift). Clearly, the final image quality depends on the number of pixels representing an intermediate image. From at least two adjacent final sub-images a composite final image can be made, for example, for EDF applications.
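As a minimal sketch only (Python; the assumption is a line pattern in which even and odd sensor rows sit behind two different optical step heights, i.e. two known degrees of defocus):

    import numpy as np

    def deinterleave_rows(frame: np.ndarray):
        # Even rows see one optical step height, odd rows the other, so
        # the two row groups form two intermediate images with a known
        # defocus difference (each at half the vertical resolution).
        return frame[0::2, :], frame[1::2, :]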

An image reconstructor which reconstructs sub-images of the total image, which sub-images can be adjacent, independent, randomly selected or overlapping, can also be applied as a wave-front sensor; in other words, it can detect differences in phase for each sub-image by estimation of the local defocus or, alternatively, estimate tilts per sub-image based on comparison of the spatial spectra of neighbouring images. The apparatus should therefore include processing means to reconstruct a wave-front by combining defocus curvatures of at least two intermediate sub-images.

For wave-front sensing applications the method which determines defocus for a total final image, or a total object, can be extended to a system which estimates the degree of defocus in a multiple of sub-intermediate-images (henceforth: sub-images) based on at least two intermediate full images. For small areas the local curvature can be approximated by the defocus curvature (degree of defocus), and for small sub-images any aberration of order higher than or equal to 2 can be approximated by a local curvature, i.e. a degree of defocus. Consequently, the wave-front can be reconstructed from the local curvatures determined for the small sub-images, and the image reconstruction device effectively becomes a wave-front sensor. This approach is, albeit using local curvatures rather than tilts, in essence analogous to the workings of a Shack-Hartmann sensor, which uses the local tilt within each sub-aperture to estimate the shape of a wave-front; in the method described in this document local curvatures are used for the same purpose. The well-known Shack-Hartmann algorithms can be adapted to process information on curvatures rather than tilts. The sub-images can have, in principle, any shape and can be independent or partly overlapping, depending on the required accuracy and application. For example, scanning the intermediate image with a linear photo-sensor can produce sub-images in the form of lines. Applications for wave-front sensors are numerous and will only increase as wave-front sensors become less expensive.
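By analogy with curvature wave-front sensing, and as a sketch only (Python; it assumes the per-sub-image defocus estimates have already been converted to wave-front curvature values on a regular grid, and it uses periodic boundary conditions, which a real aperture does not have), the wave-front could be recovered from the local curvatures with an FFT-based Poisson solver:

    import numpy as np

    def wavefront_from_curvature(curv: np.ndarray, pitch: float) -> np.ndarray:
        # Solve laplacian(W) = curv for the wave-front W, where curv holds
        # one local curvature (defocus) estimate per sub-image and pitch is
        # the centre-to-centre spacing of the sub-images.
        ny, nx = curv.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pitch)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pitch)
        k2 = kx[None, :] ** 2 + ky[:, None] ** 2
        k2[0, 0] = 1.0                    # piston term is arbitrary; avoid 0/0
        W = np.fft.ifft2(np.fft.fft2(curv) / (-k2)).real
        return W - W.mean()               # remove the undetermined piston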

However, the intermediate images can also be used to estimate the angulation of light rays compared to the optical axis (from lateral displacements of sub-images) by comparison of the spatial spectra of neighbouring intermediate images, and then to reconstruct the shape of the wave-front by applying methods developed for the analysis of so-called hartmanngrams. The apparatus should therefore include means adapted to reconstruct a wave-front by combining lateral shifts of at least two intermediate sub-images.

Moreover, a new image of the object can be calculated as if it were projected on a plane perpendicular to the optical axis at any distance from the exit pupil, i.e. reconstruction of final in-focus images by ray-tracing. Assume, for example, that in an optical system using two intermediate images the spatial spectrum of the first image is $I_1(\omega_x,\omega_y)$ and the spectrum of the second image, taken in a plane displaced by $\Delta z$ along the Z-axis, is $I_2(\omega_x,\omega_y)$. A lateral shift of the second image by $\Delta x$ and $\Delta y$, in conformity with Eq. 22, results in the following change in the spatial spectrum of the second image:

$$\tilde{I}_2(\omega_x,\omega_y) = I_2(\omega_x,\omega_y)\,\exp[-i(\omega_x \Delta x + \omega_y \Delta y)],$$

$I_2(\omega_x,\omega_y)$ being the unshifted spectrum. In many practical cases the exit pupil of an optical system is a symmetrical region, for example a square or a circle, and thus the defocused OTF $H(\omega_x,\omega_y,\varphi)$ is real-valued. For two intermediate images, one of which is supposed to be unshifted, we have, by analogy with Eq. 23,

$$\tilde{I}_2(\omega_x,\omega_y) = H(\omega_x,\omega_y,z+\Delta z)\,I_0(\omega_x,\omega_y)\,\exp[-i(\omega_x \Delta x + \omega_y \Delta y)],$$
$$I_1(\omega_x,\omega_y) = H(\omega_x,\omega_y,z)\,I_0(\omega_x,\omega_y), \qquad (43)$$

where $H(\omega_x,\omega_y,z)$ is the system OTF with defocus expressed in terms of the displacement $z$ with respect to the exit pupil plane. The intermediate images specified by $I_1(\omega_x,\omega_y)$ and $\tilde{I}_2(\omega_x,\omega_y)$ are supposed to be displaced longitudinally by a small distance $|\Delta z| \ll z$ to prevent significant lateral shift of $\tilde{I}_2(\omega_x,\omega_y)$. From Eq. 43 it follows that

$$\left[\tilde{I}_2(\omega_x,\omega_y)\,/\,I_1(\omega_x,\omega_y)\right]^2 \approx \exp[-2i(\omega_x \Delta x + \omega_y \Delta y)] \equiv \exp(-2i\vartheta), \qquad (44)$$

where $\vartheta = \omega_x \Delta x + \omega_y \Delta y$. The lateral shifts $\Delta x$ and $\Delta y$ can obviously be found from Eq. 44. Note that other mathematical methods applicable to Fourier transforms of the images and/or their intensity distributions can be implemented to obtain information on the lateral displacements $\Delta x$ and $\Delta y$, for example, correlation methods or the analysis of moments of intensity distributions. From the formulas above, the ray-vector characterizing the whole image specified by the spatial spectrum $I_1(\omega_x,\omega_y)$ becomes $\mathbf{v} = (\Delta x, \Delta y, \Delta z)$, and a new image (rather, a point of the image) in any displaced plane with coordinate $z$ perpendicular to the optical axis Z can be conveniently calculated by ray-tracing (for example, D. Malacara and M. Malacara, Handbook of Optical Design, Marcel Dekker, Inc., New York, 2004). Note that the ray intensity $I_v$ is given by the integral intensity of the whole image/sub-image:

$$I_v = \iint_{(x,y)\,\in\,\text{sub-image}} I(x,y)\,dx\,dy. \qquad (45)$$

The integration in Eq. 45 is performed over the image/sub-image area. By splitting the images into a large number of non-overlapping, or even overlapping, sub-areas, depending on the application requirements, the procedure described above can be applied to each sub-area separately, resulting in a final image as it is projected on the image plane at any given distance from the exit pupil, having a number of "pixels" equal to the number of sub-areas. This function is close to the principle described in WO2006/039486 (and subsequent patent literature regarding the same or derivations thereof, as well as Ng Ren et al., 2005, Stanford Tech Report CTSR 2005-02, providing an explanation of the methods), but the essential difference is that the method described in the present document does not require an additional array of microlenses. The information on local tilts, i.e. ray directions, is recalculated from the comparison of the spatial spectra of the intermediate defocused images. It should be noted that the estimated computational cost of the described method is significantly lower than that given in WO2006/039486; in other words, the described method can provide real-time capability.
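As a sketch of the shift estimate implied by Eq. 44 (Python; the low-frequency cut-off, the least-squares fit and the numerical guards are assumptions, and the estimate is valid only while the phase $2\vartheta$ does not wrap, i.e. for small shifts):

    import numpy as np

    def lateral_shift(img1, img2, frac=0.1):
        # Estimate (dx, dy), in pixels, of img2 relative to img1 from the
        # phase of the squared spectral ratio (Eq. 44), fitted by least
        # squares over low spatial frequencies where the OTF ratio ~ 1.
        ny, nx = img1.shape
        I1, I2 = np.fft.fft2(img1), np.fft.fft2(img2)
        WX, WY = np.meshgrid(2 * np.pi * np.fft.fftfreq(nx),
                             2 * np.pi * np.fft.fftfreq(ny))
        mask = ((np.abs(WX) < frac * np.pi) & (np.abs(WY) < frac * np.pi)
                & (np.abs(I1) > 1e-9))
        theta = -0.5 * np.angle((I2[mask] / I1[mask]) ** 2)  # = WX*dx + WY*dy
        A = np.column_stack([WX[mask], WY[mask]])
        (dx, dy), *_ = np.linalg.lstsq(A, theta, rcond=None)
        return dx, dy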

Images with EDF can be obtained by correction of a single wave-front in a single final image. The non-iterative computation methods described in this document allow for rapid computations on, for example, dedicated electronic circuits; extended computation times on powerful computers have been a drawback of various EDF imaging techniques to date. EDF images can also be obtained by dividing a total image into sub-images, in a much smaller number than for the wave-front application described above, which likely requires thousands of sub-images. The degree of defocus is determined per sub-image (this can be a small number of sub-images, say, only a dozen or so per total image, or a very large number with each sub-image represented by only a handful of pixels; the desired number of sub-images depends on the required accuracy, the specifications of the device and its application), the sub-images are corrected accordingly, and a final image is reconstructed by combination of the corrected sub-images, as in the sketch below. This procedure results in a final image in which all extended (three-dimensional) objects are sharply in focus.
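As a structural sketch only (Python; estimate_defocus and correct_defocus stand in for the generating-function-based, non-iterative routines described earlier in this document, and the fixed square tiling is an assumption):

    import numpy as np

    def edf_from_tiles(images, estimate_defocus, correct_defocus, tile=64):
        # Split the phase-diverse intermediate images into tiles, estimate
        # the defocus per tile, correct each tile, and reassemble a final
        # image that is sharp over an extended depth of field.
        ny, nx = images[0].shape
        out = np.zeros((ny, nx))
        for y in range(0, ny, tile):
            for x in range(0, nx, tile):
                patch_set = [im[y:y + tile, x:x + tile] for im in images]
                phi = estimate_defocus(patch_set)            # per-tile defocus
                out[y:y + tile, x:x + tile] = correct_defocus(patch_set, phi)
        return out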

EDF images can also be obtained by stacking at least two final images, each reconstructed to correct the defocus for at least one focal plane of the same objects in three-dimensional space. Such digital stacking procedures are well known; one common criterion is sketched below.
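As a sketch only (Python with NumPy/SciPy; the sharpness measure, i.e. the local energy of the Laplacian, and the window size are illustrative choices, not prescribed by this document):

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def focus_stack(final_images):
        # Merge final images reconstructed for different focal planes by
        # picking, at each pixel, the image with the highest local sharpness.
        stack = np.stack(final_images)
        sharp = np.stack([uniform_filter(laplace(im) ** 2, size=9)
                          for im in final_images])
        best = np.argmax(sharp, axis=0)
        return np.take_along_axis(stack, best[None, ...], axis=0)[0]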

The list of embodiments above gives examples of possible embodiments; other designs to the same effect can be implemented, albeit at likely increasing complexity. The choice of embodiment clearly depends on the specifics of the application.

It should be noted that the preferred methods above imply a non-iterative method for image reconstruction. A non-iterative method is simplest and saves computing time; in our prototypes we reach reconstruction times of approximately 50 ms, allowing real-time imaging. However, two or three iterations of the calculations can improve the estimate of defocus in selected cases and improve image quality. Whether iterations should be applied depends on the application and the likely need for real-time imaging. Also, for example, two intermediate images combined with iterated calculations can be preferred by the user over three intermediate images combined with non-iterative calculations. The embodiments and methods of reconstruction depend on the intended application.

Applications of devices employing image reconstruction as described in this document extend to nearly any optical camera system and are too numerous to list in full. Some, but not all, applications are listed below.

Scanning is an important application. Image scanning is a well-known technology and can hereby be extended to camera applications. Note that images with an EDF can be reconstructed by dividing the intermediate images into a multiple of sub-sectors. For each sub-area the degree of defocus can be determined and, consequently, the optical sharpness of the sub-sector reconstructed. The final image will thus be composed of a multiple of optically focused sub-images and have an EDF, even at full-aperture camera settings. Linear scanning can be employed to define such linear sub-areas.

Pattern recognition and object tracking are extremely sensitive to a variety of distortions, including defocus. This invention provides a single sharp image of the object from single exposures, as well as additional information on speed, distance and direction of travel from multiple exposures. Applications can be military tracking and targeting systems, but also medical applications, for example, endoscopy with added distance information.

Methods described in this document are sensitive to wavelength. This phenomenon can be employed to split images at varying image depths when light sources of different wavelengths are employed. For example, focusing at different layer depths in multilayer CD/DVD discs can be achieved simultaneously with lasers of different wavelengths; a multilayer DVD pick-up optical system which reads different layers simultaneously can thus be designed. Other applications involve consumer and technical cameras insensitive to defocus error, iris-scanning cameras insensitive to the distance of the eye to the optics, and a multiple of homeland-security camera applications. Also, automotive cameras can be designed which are not only insensitive to defocus but also, for example, calculate the distance and speed of chosen objects or serve as parking aids, and wave-front sensors can be applied in numerous military and medical applications. The availability of inexpensive wave-front sensors will only increase the number of applications.

As pointed out, the reconstruction method described above is highly dependent on the wavelength of the light forming the image. The methods can therefore be adapted to determine the wavelength of light when the defocus is known precisely. Consequently, the image reconstructor can, alternatively, be designed as a spectrometer.

Figure 1 shows a sequence of defocused intermediate images on the image side of the optical system, from which intermediate images the final image can be reconstructed. An optical system with exit pupil, 1, provides, in this particular example, three photo-sensors (or sections/parts thereof, or subsequent images in time; see the various options in the description of the invention in this document) with three intermediate images, 2, 3, 4, on the optical axis, 5, which images have precisely known distances, 6, 7, 8, to the exit pupil, 1, and, alternatively, precisely known distances to each other, 9, 10. Note that a precisely known distance of a photo-sensor/image plane to the principal plane in such a system translates, via standard and traditional optical formulas, into a precisely known difference of defocus of the images compared to each other.

Figure 2 shows a reconstructed image of a page from a textbook, 11, obtained by reconstruction into one final image from three intermediate images: one, 12, defocused at the dimensionless value φ = 40, a second, 13, defocused at φ = 45, and another, 14, defocused at φ = 50. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14-16 bit/pixel. Note that all defocused images are distinctly unreadable, to a degree that not even the mathematical integral sign can be recognized in any of the intermediate images.

Figure 3 shows a reconstructed image of a scene with a building, 15, obtained by reconstruction into one final image from three intermediate images: one, 16, defocused at the dimensionless value φ = 50, a second, 17, defocused at φ = 55, and another, 18, defocused at φ = 60. The reconstruction was carried out on intermediate images with digitally simulated defocus and a dynamic range of 14 bit/pixel.

Figure 4 shows a reconstructed image of the letters "PSF" on a page, 19, obtained by reconstruction into one final image from three intermediate images: one, 20, defocused at the dimensionless value φ = 95, a second, 21, defocused at φ = 100, and another, 22, defocused at φ = 105. The reconstruction was carried out on intermediate images with digitally simulated defocus. The final image, 19, has a dynamic range of 14 bit/pixel and is reconstructed with a three-step defocus correction, with a final defocus deviation from the exact value of δφ ≈ 0.8.

Figure 5 shows an example of an embodiment of the imaging system employing two intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and is then divided by a beam splitter, 26, into two light signals. The light signals are finally detected by two photo-sensors, 27 and 28, positioned in image planes shifted, one with respect to the other, by a specified distance along the optical axis. Photo-sensors 27 and 28 simultaneously provide two intermediate, phase-diverse images for the reconstruction algorithm set forth in this document.

Figure 6 shows an example of an embodiment of the imaging system employing three intermediate images to reconstruct a sharp final image. Incoming light, 23, is collected by an optical objective, 24, with a known exit pupil configuration, 25, and is then divided by a first beam splitter, 26, into two light signals. The reflected part of the light is detected by a photo-sensor, 27, whereas the transmitted light is divided by a second beam splitter, 28. The light signals from the beam splitter 28 are, in turn, detected by two photo-sensors, 29 and 30, positioned in image planes shifted one with respect to the other and relative to the image plane of the sensor 27. Photo-sensors 27, 29 and 30 simultaneously provide three intermediate, phase-diverse images for the reconstruction algorithm set forth in this document.

Figure 7 illustrates the method, in this example for two intermediate images, of calculating an object image in an arbitrary image plane, i.e. at an arbitrary defocus, based on the local ray-vector and the intensity determined for a small sub-area of the whole (defocused) image. Two consecutive phase-diverse images, 2 and 3, with a predetermined defocus, or alternatively displacement, 9, along the optical axis, 5, are divided by a digital (software-based) procedure into a plurality of sub-images. Comparison of the spatial spectra calculated for a selected image area, 31, on the phase-diverse images allows evaluation of the ray-vector direction, 32, which characterizes light propagation, in the geometric-optics limit, along the optical axis 5. Using the integral intensity over the area 31 as the ray intensity, in combination with the ray-vector, a corresponding image point, 33, i.e. point intensity and position, located in an arbitrary image plane, 34, can be found by ray-tracing. In the calculations, the distance, 35, from the new image plane 34 to one of the intermediate image planes is assumed to be specified.