Title:
METHOD AND SYSTEM FOR COMPRESSED IMAGING
Document Type and Number:
WIPO Patent Application WO/2008/129553
Kind Code:
A1
Abstract:
An imaging system and method are presented for use in compressed imaging. The system comprises at least one rotative vector sensor, and optics for projecting light from an object plane on said sensor. The system is configured to measure data indicative of Fourier transform of an object plane light field at various angles of the vector sensor rotation.

Inventors:
STERN ADRIAN (IL)
Application Number:
PCT/IL2008/000555
Publication Date:
October 30, 2008
Filing Date:
April 27, 2008
Assignee:
OPTICAL COMPRESSED SENSING (IL)
STERN ADRIAN (IL)
International Classes:
G06E3/00
Other References:
A. STERN: "Compressed imaging system with linear sensors", OPTICS LETTERS, vol. 32, no. 21, 26 September 2007 (2007-09-26), pages 3077 - 3079, XP001509590, Retrieved from the Internet [retrieved on 20080729]
E. J. CANDÈS, J. ROMBERG, T. TAO: "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information", IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 52, no. 2, February 2006 (2006-02-01), pages 489 - 509, XP002491493, Retrieved from the Internet [retrieved on 20080729]
A. H. DELANEY, Y. BRESLER: "A Fast and Accurate Fourier Algorithm for Iterative Parallel-Beam Tomography", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 5, no. 5, 1 May 1996 (1996-05-01), pages 740 - 753, XP011025991, Retrieved from the Internet [retrieved on 20080729]
M. A. NEIFELD, J. KE: "Optical architectures for compressive imaging", APPLIED OPTICS, vol. 46, no. 22, 3 May 2007 (2007-05-03), pages 5293 - 5303, XP001507049, Retrieved from the Internet [retrieved on 20080729]
M. B. WAKIN, J. N. LASKA, M. F. DUARTE, D. BARON, S. SARVOTHAM, D. TAKHAR, K. F. KELLY, R. G. BARANIUK: "An Architecture for Compressive Imaging", 2006 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP'06), 8 October 2006 (2006-10-08) - 11 November 2006 (2006-11-11), pages 1273 - 1276, XP031048876, Retrieved from the Internet [retrieved on 20070729]
A. VEERARAGHAVAN, R. RASKAR, A. AGRAWAL, A. MOHAN, J. TUMBLIN: "Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing", ACM TRANSACTIONS ON GRAPHICS, vol. 26, no. 3, July 2007 (2007-07-01), XP002491494, Retrieved from the Internet [retrieved on 20080729]
C. FERREIRA, A. MOYA, T. SZOPLIK, J. DOMINGO: "Hough-transform system with optical anamorphic preprocessing and digital postprocessing", APPLIED OPTICS, vol. 31, no. 32, 10 November 1992 (1992-11-10), pages 6882 - 6888, XP000321825, Retrieved from the Internet [retrieved on 20080729]
J. SHAMIR: "Cylindrical lens systems described by operator algebra", APPLIED OPTICS, vol. 18, no. 24, 15 December 1979 (1979-12-15), pages 4195 - 4202, XP002491495, Retrieved from the Internet [retrieved on 20080729]
Attorney, Agent or Firm:
REINHOLD COHN AND PARTNERS (Tel-aviv, IL)
Claims:

CLAIMS

1. An imaging system for use in compressed imaging, the system comprising at least one rotative vector sensor and optics, projecting light from an object plane on said sensor, the system being configured to measure data indicative of Fourier transform of an object plane light field at various angles of the vector sensor rotation.

2. The system of Claim 1 comprising at least two said vector sensors in a staggered configuration.

3. The system of Claim 1 comprising at least two said vector sensors arranged in a stack.

4. The system of Claim 1 comprising at least two vector sensors with sensitivity peak wavelengths differing by more than 20% of a shortest of said sensitivity peak wavelengths.

5. The system of Claim 1 comprising a slit.

6. The system of Claim 1 comprising a cylindrical lens.

7. The system of Claim 1 comprising a cylindrical mirror.

8. The system of Claim 1 comprising a 4-f optical element arrangement.

9. The system of Claim 1 comprising a 2-f optical element arrangement.

10. The system of Claim 1 comprising a source of coherent radiation.

11. The system of Claim 1 comprising a beam splitter and configured as a holographic system.

12. The system of Claim 1 comprising a vector sensor with a sensitivity peak being between 300 GHz and 3 THz.

13. The system of Claim 1 comprising a vector sensor with a sensitivity peak being in infrared range with a frequency higher than 3 THz.

14. The system of Claim 1 comprising a vector sensor with a sensitivity peak being in visible range.

15. The system of Claim 10 comprising a slit.

16. The system of Claim 1 comprising a control unit configured to initiate measurements by said at least one vector sensor at predetermined rotation angles.

17. The system of Claim 1 comprising a control unit configured to reconstruct an image from data measured by the sensor at various angles of its rotation.

18. The system of Claim 17, wherein a set of said various angles is predetermined.

19. The system of Claim 1 comprising a control unit configured to reconstruct an image using minimization of total variation optimization technique from data measured by the sensor for various angles of its rotation.

20. The system of Claim 1 comprising a control unit configured to reconstruct an image using an ℓ1 minimization technique from data measured by the sensor for various angles of its rotation.

21. The system of Claim 1 comprising a control unit configured to reconstruct an image using a maximum a posteriori estimation technique from data measured by the sensor for various angles of its rotation.

22. The system of Claim 1 comprising a control unit configured to reconstruct an image using a penalized maximum likelihood estimation technique from data measured by the sensor for various angles of its rotation.

23. An imaging system for use in compressed imaging, the system comprising a pixel matrix sensor and optics compressively projecting light from an object plane on a pixel vector of said sensor, the system configured to affect a direction of the light projection and measure data indicative of Fourier transform of the object plane light field by matching an orientation of said pixel vector within the pixel matrix and the direction of the light projection.

24. The system of Claim 23 comprising a slit.

25. The system of Claim 23 comprising a cylindrical lens or mirror.

26. The system of Claim 23 comprising a source of coherent radiation.

27. The system of Claim 23, said sensor having a sensitivity peak being in infrared range with a frequency higher than 3 THz.

28. The system of Claim 23, said sensor having a sensitivity peak being in visible range.

29. The system of Claim 23 comprising a control unit configured to reconstruct an image from data measured at various pixel vector orientations within the pixel matrix.

30. A method for use in compressed imaging, the method comprising reconstructing an image from data indicative of Fourier transform of an object plane light field, a set of spatial frequencies of the data having a star configuration in two-dimensional spatial frequency space, an envelope of the star being of a substantially circular shape.

31. The method of Claim 30 using minimization of total variation optimization technique.

32. The method of Claim 30 using an ℓ1 minimization technique.

33. The method of Claim 30 using a maximum a posteriori estimation technique.

34. The method of Claim 30 using a penalized maximum likelihood estimation technique.

35. A method for use in compressed imaging, the method comprising reconstructing an image from data indicative of Fourier transform of an object plane light field, a set of spatial frequencies of the data having a star configuration in two-dimensional spatial frequency space, a ratio between a length of a shortest star ray and a length of a longest ray being less than 0.65 or larger than 0.75.

36. The method of Claim 35 using minimization of total variation optimization technique.

37. The method of Claim 35 using an ℓ1 minimization technique.

38. The method of Claim 35 using a maximum a posteriori estimation technique.

39. The method of Claim 35 using a penalized maximum likelihood estimation technique.

40. A method for use in compressed imaging, the method comprising sequentially projecting light from an object plane on various directions within a rotation plane of a rotative vector sensor and rotating the vector sensor so as to measure data indicative of Fourier transform of the object plane light field by said sensor for the various directions of the projected light.

41. A method for use in compressed imaging, the method comprising sequentially compressively projecting light from an object plane on various directions within a pixel matrix sensor plane and measuring data indicative of Fourier transform of the object plane light field for the various directions by a pixel vector within the pixel matrix.

42. An imaging system, substantially as described hereinabove.

43. An imaging system, substantially as illustrated in any of the drawings.

Description:

METHOD AND SYSTEM FOR COMPRESSED IMAGING

RELATED APPLICATIONS

The present invention claims priority from the U.S. Provisional Application No. 60/907,943, filed on April 24, 2007, the disclosure of which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

The invention is generally in the field of compressed or compressive imaging.

BACKGROUND

Compressed imaging is a subfield of the emerging field of compressed or compressive sensing or sampling. Compressed imaging exploits the large redundancy typical of human- or machine-intelligible images in order to capture fewer samples than commonly captured. In contrast to the common imaging approach, in which a conventional image with a large number of pixels is first captured and then, often, loss-compressed digitally, the compressed imaging approach attempts to obtain the image data in a compressed way by minimizing the collection of data that is redundant for some further task. The further task may be visualization. In other words, compressed imaging avoids collection of data which will not be of value for human viewing or for some machine processing. Thus, compressed imaging uses sensing processes allowing production only of images that are loss-compressed when compared with conventional ones. This production step, if done, is called reconstruction. A few reconstruction methods are known. For example, the following article may be consulted: E. J. Candès, J. Romberg and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information", IEEE Transactions on Information Theory, vol. 52(2), 489-509, Feb. 2006 ([1]). This article is incorporated herein by reference. Image reconstruction from "incomplete" data is possible due to the fact that common images are highly redundant, as we experience with conventional compression techniques (e.g. JPEG).

GENERAL DESCRIPTION

There is a need in the art for compressed imaging techniques. The inventor presents a new technique that can be applied, for example, in scanning, inspection, surveillance, remote sensing, and in visible, infrared or terahertz radiation imaging. The technique may utilize at least one pixel sensor extending in one dimension (vector sensor) and moving in a way including rotation relative to the imaged scene. Pixels of the vector sensor(s) are typically arranged along a straight line, although this is not required. Different pixel subsets may be arranged along parallel lines and shifted by a non-whole number of pixels relative to each other and/or be sensitive to different wavelengths. The sensor is preceded by optics which projects on the sensor a signal (a field, for example the intensity field) indicative of the 2D Fourier transform of the object plane field along the dimension of the sensor (1D Fourier field). Due to the motion, a series of such 1D fields or field strips is obtained, and due to the rotation, the series includes field strips extending in various directions. Thus, the strips can cover the 2D Fourier space. However, the spatial frequencies of the object plane field which contribute to at least one of the sensor measurements are distributed non-uniformly in orthogonal spatial frequency coordinates (i.e. in the 2D Fourier space): spatial frequencies with larger magnitudes are separated by longer arcs, i.e. by larger spatial frequency distances, in the dimension of rotation. The full set of measurements may be augmented by one of the reconstruction processes so that the total number of pixels, which will be shown in the reconstructed image, will be greater than the total number of pixels measured within the series.
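
By way of illustration only (not part of the application), the following numpy sketch marks which bins of a discrete 2D spatial-frequency grid such a set of rotated vector-sensor orientations would cover, producing the star-shaped, centre-dense pattern described above; the grid size, the number of lines and the function name are assumptions chosen to mirror the example of Fig. 1A discussed later.

```python
import numpy as np

def star_mask(n=256, num_lines=32, samples_per_line=256):
    """Mark the bins of an n-by-n centred spatial-frequency grid lying nearest
    to `num_lines` radial lines through the origin (cf. Fig. 1A)."""
    mask = np.zeros((n, n), dtype=bool)
    radius = n // 2 - 1                               # circular envelope
    r = np.linspace(-radius, radius, samples_per_line)
    for theta in np.arange(num_lines) * np.pi / num_lines:
        u = np.round(n / 2 + r * np.cos(theta)).astype(int)
        v = np.round(n / 2 + r * np.sin(theta)).astype(int)
        mask[v, u] = True
    return mask

mask = star_mask()
print("fraction of the Fourier plane covered: %.3f" % mask.mean())  # roughly 0.1
```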

It should be noted that using vector sensor(s) may be especially preferable if imaging is to be performed in those wavelength regions in which matrix pixel sensors are expensive.

As well, matrix pixel sensors may be effectively utilized within the inventor's technique. If this is the case, the optics is still set up to project on the matrix sensor a series of "one-dimensional" signals. Imaging can be performed not by all pixels of the matrix at a time, but by a rotating pixel vector, i.e. a "vector trace", within the matrix.

Selection or definition of the current read-out pixel vector can be done electronically. Imaging with a matrix sensor presents one of the ways to avoid physical rotation of the vector sensor; however, elements or parts of the projecting optics may still need to be rotated. The imaging scheme relying on the use of a matrix sensor may help to save energy, increase sensor lifetime, and generate information-dense image data. These properties may be of high value in field measurements or in surveillance, as they may relax memory and data transmission resource requirements and imaging system servicing requirements.

The following should be understood with respect to the motion of the vector pixel sensor and of the vector pixel trace in the case of the pixel matrix sensor:

First, in those cases in which the motion of the sensor relative to the imaged scene is reduced to simple rotation, the sensed spatial frequencies form a regular star in the 2D Fourier space. In some embodiments, the star is at least 16-pointed. The star may be at least 32-pointed. The star envelope is a circle. Second, the motion may have components other than rotation. For example, the imaging system may be carried by an airplane. In this case, Fourier coefficients acquire a phase shift proportional to the airplane velocity. It should be understood that the unshifted phases can be restored if the motion is known.

Third, in case of an electronic control of the read-out pixel vector, the length of the vector pixel trace typically varies for a given non-circular pixel matrix shape, depending on the direction or rotation angle of the pixel vector. In particular, the vector trace length will vary for the most typical, rectangular, pixel matrix if all pixels are read out along the sensed directions. However, it should be understood that not all pixels of a pixel vector on which light is projected have in fact to be read out or kept in memory, in both cases - that of the vector sensor(s) and that of the matrix sensor. Hence, in either of these cases, star "rays" may be of close lengths or of significantly different lengths. For example, in some embodiments a ratio of the length of the shortest "ray" of the star to the length of the longest "ray" of the star is less than 0.65 or even 0.5, and in some embodiments this ratio is larger than 0.75 or even 0.9. Additionally, irregularities of the star shape may be associated with variations in angular and radial sampling pitch. In other words, the pitches do not have to be constant. They may be selected to match specific application data acquisition goals, which for example may be to collect spatial frequencies more densely if the sensor is oriented in a certain direction, so as to image specific object features. Non-regular (non-equidistant) angular or radial sampling may permit better modeling of the acquisition process. For instance, the angular steps can be adapted to capture the Fourier samples on a pseudo-polar grid, which may simplify the reconstruction process and/or may improve its precision. For example, the grid may be selected for optimal presentation of the image on the common rectangular grid.
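
As a concrete but purely illustrative example of such non-equidistant angular sampling, the numpy sketch below generates the ray angles of a pseudo-polar-style grid, in which the ray slopes, rather than the angles, are equispaced; the function name and the parameterisation are assumptions, not taken from the application.

```python
import numpy as np

def pseudo_polar_angles(n=32):
    """Ray angles of a pseudo-polar-style grid with 2*n rays: the ray slopes
    are equispaced rather than the angles, so the angular pitch is non-uniform
    (denser near the diagonals)."""
    slopes = (np.arange(n) - (n - 1) / 2) / (n / 2)    # equispaced slopes in (-1, 1)
    near_horizontal = np.arctan(slopes)                # rays grouped around the u-axis
    near_vertical = np.pi / 2 - np.arctan(slopes)      # rays grouped around the v-axis
    return np.sort(np.concatenate([near_horizontal, near_vertical]) % np.pi)

theta = pseudo_polar_angles()
print(np.degrees(np.diff(theta)))   # non-constant angular steps
```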

In a broad aspect of the invention, there is provided an imaging system for use in compressed imaging. The system may include at least one rotative vector sensor and optics projecting light from an object plane on the sensor, and be configured to measure data indicative of Fourier transform of an object plane light field at various angles of the vector sensor rotation.

The system may include a pixel matrix sensor and optics, compressively projecting light from an object plane on a pixel vector of the sensor, and be configured to affect a direction of the light projection and measure data indicative of Fourier transform of the object plane light field by matching an orientation of the pixel vector within the pixel matrix and the direction of the light projection.

The system may include at least two vector sensors in a staggered configuration. The system may include at least two vector sensors arranged in a stack configuration. The system may include at least two vector sensors with sensitivity peak wavelengths differing by more than 20% of the shortest of the sensitivity peak wavelengths.

The system may include a slit. It may include a cylindrical lens and/or mirror. The system may include a 4-f optical element arrangement. It may include a 2-f optical element arrangement. The system may include a source of coherent radiation. It may include a beam splitter and be configured as a holographic system.

The vector sensor may have a sensitivity peak between 300 GHz and 3 THz. The sensitivity peak may be in the infrared range with a frequency higher than 3 THz. The peak may be in the visible range. The system may include a control unit configured to initiate measurements by said at least one vector sensor at predetermined rotation angles. Alternatively or additionally, the control unit may be configured to reconstruct an image from data measured by the sensor at various angles of its rotation. A set of the various angles may be predetermined. The control unit may be configured to reconstruct an image using a minimization of total variation optimization technique or an ℓ1 minimization technique from data measured by the sensor for various angles of its rotation. Reconstruction may utilize a maximum a posteriori estimation technique. Reconstruction may be done using a penalized maximum likelihood estimation technique.

In another broad aspect of the invention, there is provided a method for use in compressed imaging, the method including reconstructing an image from data indicative of Fourier transform of an object plane light field, a set of spatial frequencies of the data having a star configuration in two-dimensional spatial frequency space, an envelope of the star being of a substantially circular shape.

The method may include reconstructing an image from data indicative of Fourier transform of an object plane light field, a set of spatial frequencies of the data having a star configuration in two-dimensional spatial frequency space, a ratio between a length of a shortest star ray and a length of a longest ray being less than 0.65 or larger than 0.75.

The reconstruction may be done using a minimization of total variation optimization technique or an ℓ1 minimization technique.

In yet another broad aspect of the invention, there is provided a method for use in compressed imaging, the method including sequentially projecting light from an object plane on various directions within a rotation plane of a rotative vector sensor and rotating the vector sensor so as to measure data indicative of Fourier transform of the object plane light field by the sensor for the various directions of the projected light.

In yet another broad aspect of the invention, there is provided a method for use in compressed imaging, the method including sequentially compressively projecting light from an object plane on various directions within a pixel matrix sensor plane and measuring data indicative of Fourier transform of the object plane light field for the various directions by a pixel vector within the pixel matrix.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to understand the invention and to see how it may be carried out in practice, a few embodiments of it will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

Fig. 1A shows an example of a star-shaped spatial frequency set suitable for realization of a compressed imaging scheme according to the invention;

Figs. 1B and 1C present an original image and an image reconstructed from the set of Fourier coefficients mapped in Fig. 1A;

Fig. 2 shows an example of an imaging system usable for compressed imaging with coherent light in accordance with the invention;

Figs. 3A and 3B illustrate compressed imaging simulation performed for the system of Fig. 2;

Fig. 4 shows an example of an imaging system usable for imaging with incoherent light according to the invention;

Fig. 5 presents an exemplary arrangement of multiple vector sensors for use in various imaging systems of the invention;

Figs. 6A-6D illustrate compressed imaging simulations performed for the system of Fig. 4 and for a conventional linear scanning system;

Fig. 7 shows an example of a holographic imaging system usable for compressed imaging according to the invention;

Fig. 8 shows an example of an imaging system using a pixel matrix sensor in accordance with the invention.

DESCRIPTION OF EMBODIMENTS

In this section, first some mathematical ideas applicable within the invented technique are illustrated, and then examples of various suitable optical systems are presented.

Typically, for an image f[n,m] defined on N×N pixels (n=1,2,...,N, m=1,2,...,N), an equivalent number N×N of Fourier coefficients is needed for reconstruction of the image by direct means (e.g. by inverse Fourier transform). But, in the compressed imaging framework, satisfactory reconstruction can be obtained by using appropriate nonlinear recovery algorithms applied to only part of the conventionally needed Fourier coefficients. As mentioned above, the Fourier coefficients which will be directly or indirectly measured by the moving sensor and its optical system will be non-uniformly distributed in the Fourier space, and low spatial frequency components will be sampled more densely than high spatial frequency components. In Fig. 1A there is shown an example of a set of spatial frequencies which is distributed in such a way. Another distribution of this kind was presented in [1]. The set of Fig. 1A includes L=32 radial lines in the Fourier plane (spatial frequency plane); the radial lines are inclined at different polar angles θ_l = lπ/L, l = 0,...,(L-1), relative to the horizontal axis. Along each of the radial lines, the frequencies of the set are distributed uniformly. The total number of frequencies on each radial line is 256; the frequencies lie within a circle C in the illustration and satisfy the inequality |ω| ≤ ω_max.

When the Fourier coefficients of the image are known on such a set, the image may be reconstructed. This is illustrated by Figs. 1B and 1C. The first of these images is "conventional": it is the original infrared image of the inventor. This image has 256 by 256 pixels. The second of these images is reconstructed from only 256×32 Fourier coefficients F(ω,θ_l), l = 0,...,(L-1), determined from the first image. Although the set {F(ω,θ_l)} with 256 frequency values on each of the 32 radial lines covers only 12.5% of the original Fourier set, the original image could be reconstructed essentially completely, as seen from Fig. 1C. In this example, the reconstruction was carried out digitally by the minimization of total variation optimization technique:

min ||Df||_1 subject to F̂(ω,θ_l) = F(ω,θ_l) for all θ_l, l = 0,...,(L-1),

where ||·||_1 denotes the ℓ1 norm and D denotes the finite difference operator. In more detail, the following minimization was done by the inventor:

min_f Σ_{n,m} |Df[n,m]| subject to F̂(ω,θ_l) = F(ω,θ_l), |ω| ≤ ω_max, for all θ_l, l = 0,...,(L-1),

where the finite difference operator D was given by

Df[n,m] = ( f[n+1,m] - f[n,m], f[n,m+1] - f[n,m] ),

and F̂ denotes the Fourier transform of the reconstructed image f̂. Basically, the reconstruction algorithm sought a solution f̂ with minimum complexity - defined here as the total variation Σ_{n,m} |Df[n,m]| - and whose Fourier coefficients over the radial strips {F̂(ω,θ_l)}, l = 0,...,(L-1), matched those found from the original image, {F(ω,θ_l)}, l = 0,...,(L-1). The reconstruction criterion used by the inventor differed from the criterion used in [1] in that only spatial frequencies within the circle |ω| ≤ ω_max were used for reconstruction by the inventor. This criterion relates to the technique with which sets of Fourier coefficients can be obtained by compressed imaging.
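
A minimal numerical sketch of this kind of reconstruction (not the inventor's code) is given below: it approximates the constrained total-variation minimization by alternating a gradient step on a smoothed TV term with re-imposition of the measured Fourier coefficients on the star-shaped frequency set; the function names, the smoothing constant, the step size and the iteration count are assumptions.

```python
import numpy as np

def tv_grad(f, eps=1e-8):
    """Gradient of a smoothed isotropic total variation of image f
    (forward differences, periodic boundaries)."""
    dx = np.roll(f, -1, axis=1) - f
    dy = np.roll(f, -1, axis=0) - f
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def reconstruct(measured, mask, n_iter=300, step=0.05):
    """Alternate a step that lowers the total variation with re-imposition of
    the measured Fourier coefficients (the constraint on the star-shaped set)."""
    f = np.real(np.fft.ifft2(measured))          # zero-filled starting point
    for _ in range(n_iter):
        f = f - step * tv_grad(f)                # reduce total variation
        F = np.fft.fft2(f)
        F[mask] = measured[mask]                 # enforce data consistency
        f = np.real(np.fft.ifft2(F))
    return f

# usage sketch (mask as in the earlier star_mask() example, in DFT ordering):
#   mask = np.fft.ifftshift(star_mask()); measured = np.fft.fft2(img) * mask
#   rec = reconstruct(measured, mask)
```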

In accordance with the technique of the invention, direct or indirect measurements of the Fourier transform of the object field may be done with a rotationally moving vector sensor and provide a suitable set of spatial frequencies and Fourier coefficients for satisfactory reconstruction.

Referring to Fig. 2, there is shown an exemplary imaging system 100 configured for obtaining a desired set of field Fourier coefficients for compressed imaging reconstruction with coherent light. Imaging system 100 samples the Fourier plane by using the common 4-f configuration. The system includes spherical lenses L1, L2 and a cylindrical lens L3 with focal lengths f1, f2 and f3, a slit D, and a line light sensor S. The light sensor, together with lens L3 and slit D, or an object O, which is to be imaged, may be set up on a rotative mount. This mount may form a part of the imaging system. System 100 is arranged in such a way that a series of radial lines in the Fourier plane of the object can be masked out and then Fourier transformed optically. Object O is positioned at distance f1 from lens L1 and is coherently illuminated; the object-reflected field is represented by the function f(x,y). (Accordingly, the imaging system may include a source of coherent illumination, such as a laser.) Hence, the function f(x,y) is two-dimensionally (2D) Fourier transformed by lens L1. Slit D is located at distance 2f1 from the object and is (currently) aligned at in-plane angle θ_l. It filters out the radial Fourier spectrum F(ω,θ_l). The following lenses L2 and L3 are conventional one-dimensional (1-D) optical Fourier transformers. Lens L3, which is perpendicular to the slit, performs a 1-D Fourier transform of the masked Fourier spectrum, and lens L2 projects it on the vector sensor S. Thus, the sensor captures the 1-D Fourier transform of the radial strip in the Fourier domain with orientation θ_l; that is, the vector sensor measurement can be written as g_θl(r) = ℑ_ω{F(ω,θ_l)}, |ω| ≤ ω_max, where ℑ_ω denotes the one-dimensional (1-D) Fourier operator in the radial ω direction. By selecting the finite length of the slit, L_M, the maximum measured radial frequency of F(ω,θ_l) can be defined as ω_max = 2πL_M/λf1, where λ is the wavelength of the coherent light and f1 is the focal length of lens L1. As a result, the measured spatial frequency samples lie within a circle, similar to circle C shown in Fig. 1A. From the measured field g_θl(r) the respective Fourier strip F(ω,θ_l) can be obtained by simply inverse Fourier transforming the measured field numerically. By rotating the imaging system (or the vector sensor, the cylindrical lens, and the slit) with respect to the object, a desired number (e.g. L) of exposures can be taken, capturing g_θl(r) for all θ_l, l = 0,...,(L-1). Then, the inverse 1-D Fourier transforms along the radial direction are taken for all {g_θl(r)}, yielding the set of required radial Fourier samples {F(ω,θ_l)}, l = 0,...,(L-1).
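
The digital post-processing step just described (recovering each radial Fourier strip from the recorded 1-D field, together with the ω_max relation) can be illustrated with the short numpy sketch below; it is a stand-in under assumed centred-sampling conventions, with hypothetical function names and illustrative parameter values, not part of the application.

```python
import numpy as np

def omega_max(slit_length, wavelength, focal_length):
    """omega_max = 2*pi*L_M / (lambda * f1), the maximum measured radial frequency."""
    return 2 * np.pi * slit_length / (wavelength * focal_length)

def radial_strips(exposures):
    """Recover the radial Fourier strips F(omega, theta_l) from the recorded
    1-D fields g_theta_l(r) (one row per rotation angle) by an inverse 1-D FFT,
    a discrete stand-in for the inverse Fourier transform along r."""
    g = np.asarray(exposures, dtype=complex)
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(g, axes=1), axis=1), axes=1)

# e.g. omega_max(slit_length=1e-2, wavelength=10e-6, focal_length=0.1)
#      -> about 6.3e4 rad/m (illustrative values, not those used in the text)
```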

It should be noted that if the vector sensor is an intensity sensor, then the values g_θl(r) need to be nonnegative in order not to lose information. This may be guaranteed if the object field has a sufficiently large dc component. Otherwise, various methods can be used to avoid negative g_θl(r) values. One way of doing this is by biasing the field at the recorder, for example by superimposing on g_θl(r) a coherent plane wave with measured or predetermined intensity.

Also, it should be remembered that system 100 is just a representative example. Other variations of the 4-f system, or equivalent systems, can be utilized (see for example J. W. Goodman, "Introduction to Fourier optics", chapter 8, or J. Shamir, "Optical systems and processing", SPIE Press, WA, 1999, chapters 5 and 13 and chapters 5 and 6). As well, different implementations of the 1D Fourier transform may be used (see for example J. Shamir, "Optical systems and processing", chapter 13). Generally, optics usable in the inventor's technique may include such optical elements as mirrors, prisms, and/or spatial light modulators (SLM).

Figs. 3A and 3B illustrate a compressed imaging simulation performed for the system described in Fig. 2. The original object is shown in Fig. 3A. Its size was assumed to be 2.56 × 2.56 mm². The vector sensor was assumed to have 256 pixels of size 10 μm. The slit length was L_M = 1 cm, and its width was 39 μm. The focal length was μm, and the lenses' apertures were 5 cm each. The object was captured with L=25 exposures taken with θ_l in steps of 7.2 degrees. It was assumed that the object was still and that the rotation of the sensor was controlled by an appropriate control unit, so that images were taken at predetermined angles or that those angles were measured by the control unit. (The control unit may be based on, for example, a special purpose computing device or a specially programmed general task computer.) In the simulated experiment the maximal radial frequency was ω_max = 2πL_M/λf1 = 2π×10^5 rad/m. The achieved reconstruction is demonstrated in Fig. 3B. The image was completely reconstructed although the number of measured pixels was 25×256, which is more than 10 times less than the number of pixels in the image of Fig. 3A. Hence, a compression rate of c=10.24 was obtained by optical means only.

Referring to Fig. 4, there is shown an exemplary imaging system 200 suitable for use with incoherent light. System 200 includes a cylindrical lens L1 and a vector sensor S. It also includes optional relay optics RO (e.g., a magnifying lens set, an optical aberration setup, or anamorphic lenses for collimating light in the x' direction, as described in L. Levi, "Applied Optics", John Wiley and Sons Inc., NY, Vol. 1, p. 430, 1992). Lens L1 projects object O on the sensor. Particularly, lens L1 is aligned with and defines an x' axis, which is in-plane rotated by angle θ_l with respect to the x axis selected in the object plane. With the imaging condition fulfilled in the y' direction, the system performs an integral projection of the field f(x',y') on the y' axis (see L. Levi, "Applied Optics", John Wiley and Sons Inc., NY, Vol. 1, p. 430, 1992, if needed). Linear sensor S, aligned with the y' axis, captures the line integral

g_θl(r) = K ∫ f(x', r/M_y) dx',

where K is a normalization factor and M_y defines the lateral magnification along y'.

This integral, which is proportional to the projection of f(x',y') on y', can be recognized as a Radon transform. According to the "central slice theorem", the Radon projection is the 1-D Fourier transform of the slice F(ω,θ_l) of the 2-D spectrum. Therefore, the intensity measured by sensor S is g_θl(r) = ℑ_ω{F(ω,θ_l)}, and F(ω,θ_l) can be obtained by inverse Fourier transforming the measured field g_θl(r). By rotating the imaging system with respect to the object and taking L exposures, the field g_θl(r) is captured for all θ_l, l = 0,...,(L-1).
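
The central-slice relation invoked here can be checked numerically on a synthetic image; the short sketch below is an illustration only (not the application's processing chain) and compares the 1-D FFT of an axis-aligned projection with the corresponding column of the 2-D FFT, avoiding the interpolation details that arise at oblique angles.

```python
import numpy as np

img = np.zeros((128, 128))
img[40:90, 30:100] = 1.0                 # synthetic object

# Projection onto the y axis: line integrals along x (the theta = 0 direction).
proj = img.sum(axis=1)

# By the central slice theorem, the 1-D spectrum of the projection equals the
# corresponding central slice (here the u = 0 column) of the 2-D spectrum.
slice_1d = np.fft.fft(proj)
slice_2d = np.fft.fft2(img)[:, 0]

print(np.allclose(slice_1d, slice_2d))   # True
```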

It should be noted herein that, although the above description separates the steps of object Fourier representation calculation and reconstruction, in practice these operations may be fused together: the Fourier calculation and the reconstruction may be presented as a single operation, which will utilize the 2D Fourier transform implicitly. Further, this single operation may be described without reference to the Fourier transform. It could be said that the system of Fig. 4 captures linear projections of the image and thus optically performs the Radon transform, and that the further reconstruction is done by some constrained inverse Radon transform. It should be understood, however, that despite changes in the reconstruction process, a field indicative of the Fourier transform of the object is still measured.

Referring to Fig. 5, there is shown an exemplary staggered arrangement 250 of two vector sensors S1 and S2, which may be used for improving the measurement resolution in either of the above-detailed imaging approaches: to this end, arrangement 250 may replace the single sensor S in either system 100 or 200. Such a replacement makes use of the field extending perpendicularly to the sensor: the intensity is constant over lines perpendicular to the vector sensor. In arrangement 250 both vector sensors are exposed to the same intensity distribution, but sample this distribution differently. Thus the staggered configuration permits an overall finer sampling: the two-stage staggered sensor permits sampling at interval δ/2 instead of δ, where δ denotes the vector sensor pixel size. Multiple (more than two) staggered sensors may be utilized if an even finer resolution is desired.
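
The corresponding read-out logic is trivial; the sketch below (hypothetical names, assuming the two sensors are offset by exactly half a pixel) interleaves the two readouts into a single record sampled at δ/2.

```python
import numpy as np

def interleave(s1, s2):
    """Merge readouts of two identical vector sensors offset by half a pixel
    into one record sampled at delta/2 instead of delta."""
    out = np.empty(s1.size + s2.size, dtype=s1.dtype)
    out[0::2] = s1       # samples at 0, delta, 2*delta, ...
    out[1::2] = s2       # samples at delta/2, 3*delta/2, ...
    return out

# fine = interleave(readout_s1, readout_s2)
```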

Similarly to the case with staggered sensors, the vector sensor can be replaced by multiple adjacent sensors sensitive to different wavelengths, which together with a proper optical relay can implement a multispectral imaging system. In the case of measurements with coherent light, the wavelength of the coherent illumination may be tuned.

As well, a stack of (aligned) vector sensors can be used to collect more light, even of the same wavelength. Since the projected signal is "one-dimensional", aligned pixels will produce the same or close measurements.

Referring to Figs. 6A-6D, these present results of numerical simulations performed by the inventor for the system described above with reference to Fig. 4 and for a conventionally arranged scanning system. In Fig. 6A there is shown an object located at a distance of 300 m from the imaging system. The figure has 256×256 pixels.

Relay optics is assumed to perform a lateral magnification of 0.001. It could also be used for preconditioning the incoming signal, for example by filtering or polarizing. Lens L1 was assumed to have a magnification of 0.2 in the y' direction and an aperture of 70 mm. Distances z1 and z2 in Fig. 4 were assumed to be 0.5 m and 0.04 m, respectively. The system was assumed to operate in the long-wave infrared regime with an average wavelength λ = 10 μm. Vector sensor S had 256 pixels of size δ = 20 μm. The resolvable spatial frequency in the object plane is sensor-limited by its maximum value ω_max = 2π · 0.001 · 0.2/δ = 62.6 rad/m. The sensor scanned the 2-D Fourier spectrum of the image with a rotational motion. Fig. 6B shows a result of image reconstruction based on only L=32 exposures, capturing the 32 radial strips F(ω,θ_l) of the 2D Fourier domain as shown in Fig. 1A. The reconstruction appears to be of a high quality. If the scanning were performed conventionally, i.e. by a linear sensor moving translationally, 256 exposures would have to be made for obtaining the conventionally used grid of Fourier coefficients. Hence, there is an eight-fold difference in acquisition time between the two scanning regimes. Therefore it is seen that the sensed image is intrinsically compressed in the technique of the invention.

Fig. 6C shows the image that would be obtained with the conventional linear scanning and with 32 equidistant exposures, or alternatively with a 2D sensor having 256×32 pixels. It is evident that many details that are preserved in Fig. 6B are missing in Fig. 6C. Even efficient post-processing of Fig. 6C, yielding Fig. 6D, did not reveal details that are seen in Fig. 6B.

Referring to Fig. 7, there is schematically presented an imaging system 300 implementing a Fourier digital holographic scheme for measurements of the complex Fourier spectrum. System 300 is to be used with coherent light. It includes a coherent light source CLS, beam splitters BS1 and BS2, a lens L1, a sensor S, and optics that makes a reference beam B_R propagate from beam splitter BS1 to beam splitter BS2 (the latter optics is not shown). Coherent illumination from the light source is reflected from the object (which is not shown) and results in creation of the object field f(x,y). Lens L1 is positioned to perform the 2D Fourier transform of the field f(x,y), and the distances between the object plane and the lens and between the lens and the sensor are equal to the lens focal length. Hence, sensor S measures the encoded Fourier field g_θl(r) = ℑ_ω{F(ω,θ_l)} after mixing with the reference beam. The type of encoding depends on the type of holography, as described for example in J. W. Goodman, "Introduction to Fourier optics" (McGraw-Hill, second ed., NY, 1996).

The phase-shift interferometer technique, or any other in-line or off-axis holographic technique, can be used [see for example J. W. Goodman, "Introduction to Fourier optics", chapter 9]. By this method the sensor measures directly the (encoded) Fourier radial spectrum g_θl(r) = ℑ_ω{F(ω,θ_l)}.

It should be noted with respect to holographic schemes that in such measurements the well-known conjugate-symmetry property of the Fourier transform of real objects may be utilized: thanks to this property, only half of the complex Fourier coefficients need to be measured. This can be implemented, for example, by using a vector sensor of half size performing a rotational motion of 180° around an axis passing through one of its edges.
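
A small sketch of how the unmeasured half of a radial line could be filled in from the measured half is given below, assuming a real-valued object field and a strip sampled at ω = 0 .. ω_max; the names and the sample layout are assumptions.

```python
import numpy as np

def complete_strip(half_strip):
    """Given F(omega, theta_l) for omega = 0 .. omega_max (M samples), fill in
    the negative frequencies of a real object via the conjugate symmetry
    F(-omega, theta_l) = conj(F(omega, theta_l))."""
    positive = np.asarray(half_strip, dtype=complex)
    negative = np.conj(positive[-1:0:-1])     # omega = -omega_max .. -delta_omega
    return np.concatenate([negative, positive])

# full = complete_strip(measured_half)        # 2*M - 1 samples from M measured ones
```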

Holographic schemes, and more generally the technique of the invention, can as well work with various Fourier-related transforms, for example with the Fresnel transform.

Referring to Fig. 8, there is schematically shown an imaging system 400 using a pixel matrix sensor S_M and operating with incoherent light. System 400 includes the same optics as system 200. It is equipped with an appropriate control unit 410, which controls the rotative cylindrical lens L1 and the read-out process from pixel matrix sensor S_M. The control unit may be based on, for example, a special purpose computing device or a specially programmed general task computer. It should be understood that a control unit can be provided in other embodiments as well, when desired.

The reconstruction can be carried out by optimization techniques other than the above-mentioned total variation minimization technique. In general, any a priori knowledge or assumption about the object features can be incorporated into the optimization technique used. For common images, high-quality results are expected from searches for reconstructed images with minimum complexity. For example, high-quality reconstruction may be obtained by using ℓ1 minimization techniques, or by using a maximum entropy criterion, or maximum a priori methods with generalized Gaussian priors, or wavelet "pruning" methods. As well, the reconstruction may rely on maximum a posteriori estimation techniques or penalized maximum likelihood estimation techniques.

The above-mentioned total variation minimization may be viewed as an ℓ1 minimization of the gradient together with the assumption that the images to be captured are relatively smooth. Techniques of ℓ1 minimization may be especially convenient when they can be efficiently implemented by using "linear programming" algorithms - see E. J. Candès, J. Romberg and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information"; D. L. Donoho, "Compressed Sensing", IEEE Transactions on Information Theory, vol. 52(4), 1289-1306, April 2006; and Y. Tsaig and D. L. Donoho, "Extensions to Compressed Sensing", Signal Processing, vol. 86, 549-571, March 2006. The compressed imaging technique described above may also utilize algorithms for motion estimation and change detection efficiently applied to the collected data. For example, "opposite ray algorithms" may be used, involving complete rotations of the line sensor (i.e. rotations of 360° rather than 180°). In a full rotation two frames are captured. However, motion and change can still be estimated with only a half-cycle rotation, by applying tracking algorithms to the data represented as a sinogram.
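
The equivalence between ℓ1 minimization and linear programming noted above can be illustrated with a generic basis-pursuit sketch, min ||x||_1 subject to Ax = b, obtained by splitting x into nonnegative parts; the sensing matrix, the sizes and the tolerance below are illustrative assumptions, and real arithmetic is used (complex Fourier constraints would be split into their real and imaginary parts).

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A @ x = b as a linear program by writing
    x = u - v with u, v >= 0 and minimizing sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

# Toy usage: a sparse vector recovered from fewer measurements than unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-6))   # typically True for sparse x
```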

Additionally, the following should be noted regarding the compressed imaging technique described herein. This technique can be applied for capturing not only still images but also video sequences. As well, within this technique, color imaging and/or imaging in various spectral ranges is allowed.

Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention without departing from its scope defined in and by the appended claims.