


Title:
OMNIDIRECTIONAL IMAGING DEVICE
Document Type and Number:
WIPO Patent Application WO/2015/181443
Kind Code:
A1
Abstract:
A panoramic camera (500) comprises: - an input element (LNS1), - an aperture stop (AS1), and - a focusing unit (300), wherein the input element (LNS1) and the focusing unit (300) are arranged to form an annular optical image (IMG1) on an image plane (PLN1), and the aperture stop (AS1) defines an entrance pupil (EPUk) of the imaging device (500) such that the effective F-number (Feff) of the imaging device (500) is in the range of 1.0 to 5.6.

Inventors:
AIKIO MIKA (FI)
MÄKINEN JUKKA-TAPANI (FI)
Application Number:
PCT/FI2015/050351
Publication Date:
December 03, 2015
Filing Date:
May 22, 2015
Assignee:
TEKNOLOGIAN TUTKIMUSKESKUS VTT OY (FI)
International Classes:
G02B13/06; G02B17/06; G03B3/04; G03B15/00; G03B37/00
Foreign References:
US20130057971A1 (2013-03-07)
JP2006285002A (2006-10-19)
US20040252384A1 (2004-12-16)
EP2028516A1 (2009-02-25)
US4566763A (1986-01-28)
US5473474A (1995-12-05)
US20110074917A1 (2011-03-31)
US20130194382A1 (2013-08-01)
US20120105980A1 (2012-05-03)
Other References:
AIKIO, M. ET AL.: "Omnidirectional Camera", IEEE 9TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP), 5 September 2013 (2013-09-05), Romania, pages 217-221
Attorney, Agent or Firm:
TAMPEREEN PATENTTITOIMISTO OY (Tampere, FI)
Claims:
CLAIMS

1. An imaging device (500) comprising:

- an input element (LNS1),

- an aperture stop (AS1), and

- a focusing unit (300),

wherein the input element (LNS1) comprises:

- an input surface (SRF1),

- a first reflective surface (SRF2),

- a second reflective surface (SRF3), and

- an output surface (SRF4),

wherein the input surface (SRF1) is arranged to provide a first refracted beam (B1k) by refracting light of an input beam (B0k), the first reflective surface (SRF2) is arranged to provide a first reflected beam (B2k) by reflecting light of the first refracted beam (B1k), the second reflective surface (SRF3) is arranged to provide a second reflected beam (B3k) by reflecting light of the first reflected beam (B2k) such that the second reflected beam (B3k) does not intersect the first refracted beam (B1k), the output surface (SRF4) is arranged to provide an output beam (B4k) by refracting light of the second reflected beam (B3k), the input element (LNS1) and the focusing unit (300) are arranged to form an annular optical image (IMG1) on an image plane (PLN1), and the aperture stop (AS1) defines an entrance pupil (EPUk) of the imaging device (500) such that the effective F-number (Feff) of the imaging device (500) is in the range of 1.0 to 5.6.

2. The device (500) of claim 1 wherein the ratio (f1/Wk) of the focal length (f1) of the focusing unit (300) to the width (Wk) of the entrance pupil (EPUk) is in the range of 1.0 to 5.6, and the ratio (f1/Δhk) of the focal length (f1) to the height (Δhk) of said entrance pupil (EPUk) is in the range of 1.0 to 5.6.

3. The device (500) of claim 1 or 2 wherein the focusing unit (300) is arranged to form a focused beam (B6k) impinging on an image point (Pk') of the annular optical image (IMG1), the position of the image point (Pk') corresponds to an elevation angle (θk) of an input beam (B0k), and the dimensions (dAS1) of the aperture stop (AS1) and the focal length (f1) of the focusing unit (300) have been selected such that the cone angle (Δφak + Δφbk) of the focused beam (B6k) is greater than 9° for at least one elevation angle (θk) which is in the range of 0° to +35°.

4. The device (500) according to any of the claims 1 to 3 wherein the focusing unit (300) is arranged to form a focused beam (B6k) impinging on an image point (Pk') of the annular optical image (IMG1), the position of the image point (Pk') corresponds to an elevation angle (θk) of an input beam (B0k), and the dimensions (dAS1) of the aperture stop (AS1) and the focal length (f1) of the focusing unit (300) have been selected such that the cone angle (Δφak + Δφbk) of the focused beam (B6k) is greater than 9° for each elevation angle (θk) which is in the range of 0° to +35°.

5. The device (500) according to any of the claims 1 to 4 wherein the focusing unit (300) is arranged to form a focused beam (B6k) impinging on an image point (Pk') of the annular optical image (IMG1), the position of the image point (Pk') corresponds to an elevation angle (θk) of an input beam (B0k), the modulation transfer function (MTF) of the imaging device (500) at a reference spatial frequency (νREF) is higher than 40% for each elevation angle (θk) which is in the range of 0° to +35°, and the reference spatial frequency (νREF) is equal to 100 line pairs/mm divided by the square root of a dimensionless outer diameter (dMAX/mm), said dimensionless outer diameter (dMAX/mm) being calculated by dividing the outer diameter of the annular optical image (IMG1) by one millimeter (10⁻³ meters).

6. The device (500) according to any of the claims 1 to 4 wherein the focusing unit (300) is arranged to form a focused beam (B6k) impinging on an image point (Pk') of the annular optical image (IMG1), the position of the image point (Pk') corresponds to an elevation angle (θk) of an input beam (B0k), the modulation transfer function (MTF) of the imaging device (500) at a first spatial frequency (ν1) is higher than 50% for each elevation angle (θk) which is in the range of 0° to +35°, and the first spatial frequency (ν1) is equal to 300 line pairs divided by the outer diameter (dMAX) of the annular optical image (IMG1).

7. The device (500) according to any of the claims 1 to 6 wherein the first refracted beam (B1k), the first reflected beam (B2k), and the second reflected beam (B3k) propagate in a substantially homogeneous material without propagating in a gas.

8. The device (500) according to any of the claims 1 to 7 wherein the optical image (IMG1) has an inner radius (rMIN) and an outer radius (rMAX), and the ratio of the inner radius (rMIN) to the outer radius (rMAX) is in the range of 0.3 to 0.7.

9. The device (500) according to any of the claims 1 to 8 wherein the vertical field of view (θMAX-θMIN) of the imaging device (500) is defined by a first angle value (θMIN) and by a second angle value (θMAX), wherein the first angle value (θMIN) is lower than or equal to 0°, and the second angle value (θMAX) is higher than or equal to +35°.

10. The device (500) of claim 9 wherein the first angle value (θMIN) is lower than or equal to -30°, and the second angle value (θMAX) is higher than or equal to +45°.

11. The device (500) according to any of the claims 1 to 10 wherein the first reflective surface (SRF2) of the input element (LNS1) is a substantially conical surface.

12. The device (500) according to any of the claims 1 to 11, wherein the first reflective surface (SRF2) and the second reflective surface (SRF3) of the input element (LNS1) are arranged to reflect light by total internal reflection (TIR).

13. The device (500) according to any of the claims 1 to 12 wherein the vertical position (hSRF3) of the boundary of the second reflective surface (SRF3) of the input element (LNS1) is higher than the vertical position (hSRF1A) of the upper boundary of the input surface (SRF1) of the input element (LNS1).

14. The device (500) according to any of the claims 1 to 13 wherein the input element (LNS1) comprises a central hole for attaching the input element (LNS1) to one or more other components.

15. The device (500) according to any of the claims 1 to 14 wherein the device (500) is arranged to form an image point (Pk') of the annular optical image (IMG1) by focusing light of an input beam (B0k), and the shapes of the surfaces (SRF1, SRF2, SRF3, SRF4) of the input element (LNS1) have been selected such that the radial position (rk) of the image point (Pk') depends in a substantially linear manner on the elevation angle (θk) of the input beam (B0k).

16. The device (500) according to any of the claims 1 to 15, wherein the radial distortion of the annular optical image (IMG1) is smaller than 20% when the vertical field of view (θMAX-θMIN) is defined by the angles θMIN = 0° and θMAX = +35°.

17. The device (500) according to any of the claims 1 to 16 comprising a wavefront modifying unit (200), wherein the input element (LNS1) and the wavefront modifying unit (200) are arranged to provide an intermediate beam (B5k) such that the intermediate beam (B5k) is substantially collimated after passing through the aperture stop (AS1), and the focusing unit (300) is arranged to focus light of the intermediate beam (B5k) to said image plane (PLN1).

18. The device (500) according to any of the claims 1 to 17 wherein the device (500) is arranged to form a first image point from first light received via a first entrance pupil, and to form a second image point from second light received via a second different entrance pupil, the imaging device (500) is arranged to form a first intermediate beam from the first light and to form a second intermediate beam from the second light such that the first intermediate beam and the second intermediate beam pass through the aperture stop (AS1), and the aperture stop (AS1) is arranged to define the entrance pupils by preventing propagation of marginal rays (B50k) such that the light of the marginal rays (B50k) does not contribute to forming the annular optical image (IMG1).

19. The device (500) according to any of the claims 1 to 18 wherein the focusing unit (300) is arranged to provide a focused beam (B6k), and the diameter (dAS1) of the aperture stop (AS1) has been selected such that the ratio of a first sum (Δφak + Δφbk) to a second sum (Δβak + Δβbk) is in the range of 0.7 to 1.3, wherein the first sum (Δφak + Δφbk) is equal to the cone angle of the focused beam (B6k) in the tangential direction of the annular optical image (IMG1), and the second sum (Δβak + Δβbk) is equal to the cone angle of the focused beam (B6k) in the radial direction of the annular optical image (IMG1).

20. A method for capturing an image by using the imaging device according to any of the claims 1 to 19, the method comprising forming the annular optical image (IMG1) on the image plane (PLN1).

Description:
OMNIDIRECTIONAL IMAGING DEVICE

FIELD

The present invention relates to optical imaging.

BACKGROUND

A panoramic camera may comprise a fish-eye lens system for providing a panoramic image. The panoramic image may be formed by focusing an optical image on an image sensor. The fish-eye lens may be arranged to shrink the peripheral regions of the optical image so that the whole optical image can be captured by a single image sensor. Consequently, the resolving power of the fish-eye lens may be limited at the peripheral regions of the optical image.

SUMMARY

An object of the present invention is to provide a device for optical imaging. An object of the present invention is to provide a method for capturing an image.

According to a first aspect, there is provided an imaging device (500) comprising:

- an input element (LNS1),

- an aperture stop (AS1), and

- a focusing unit (300),

wherein the input element (LNS1) comprises:

- an input surface (SRF1),

- a first reflective surface (SRF2),

- a second reflective surface (SRF3), and

- an output surface (SRF4),

wherein the input surface (SRF1) is arranged to provide a first refracted beam (B1k) by refracting light of an input beam (B0k), the first reflective surface (SRF2) is arranged to provide a first reflected beam (B2k) by reflecting light of the first refracted beam (B1k), the second reflective surface (SRF3) is arranged to provide a second reflected beam (B3k) by reflecting light of the first reflected beam (B2k) such that the second reflected beam (B3k) does not intersect the first refracted beam (B1k), the output surface (SRF4) is arranged to provide an output beam (B4k) by refracting light of the second reflected beam (B3k), the input element (LNS1) and the focusing unit (300) are arranged to form an annular optical image (IMG1) on an image plane (PLN1), and the aperture stop (AS1) defines an entrance pupil (EPUk) of the imaging device (500) such that the effective F-number (Feff) of the imaging device (500) is in the range of 1.0 to 5.6.

According to a second aspect, there is provided a method for capturing an image by using the imaging device (500), the method comprising forming the annular optical image (IMG1) on the image plane (PLN1).

Further aspects are defined in the claims. The aperture stop may provide high light collection power, and the aperture stop may improve the sharpness of the image by preventing propagation of marginal rays, which could cause blurring of the optical image. In particular, the aperture stop may prevent propagation of those marginal rays which could cause blurring in the tangential direction of the annular optical image.

The imaging device may form an annular optical image, which represents the surroundings of the imaging device. The annular image may be converted into a rectangular panorama image by digital image processing. The radial distortion of the annular image may be low. In other words, the relationship between the elevation angle of rays received from the objects and the positions of the corresponding image points may be substantially linear. Consequently, the pixels of the image sensor may be used effectively for a predetermined vertical field of view, and all parts of the panorama image may be formed with an optimum resolution.
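The substantially linear relationship between the elevation angle of an input beam and the radial position of the corresponding image point can be sketched as follows. All numeric values (angle limits and radii) are illustrative assumptions, not values from this application; the outer radius corresponds to the minimum elevation angle and the inner radius to the maximum elevation angle, as described for the annular image below:

```python
def radial_position(theta_deg, theta_min=0.0, theta_max=35.0,
                    r_min=1.4, r_max=2.8):
    """Map an elevation angle (degrees) to a radial position (mm) on the
    annular image: theta_min maps to the outer radius r_max, theta_max
    maps to the inner radius r_min, linearly in between."""
    frac = (theta_deg - theta_min) / (theta_max - theta_min)
    return r_max - frac * (r_max - r_min)

print(radial_position(0.0))   # outer radius, lowest elevation angle
print(radial_position(35.0))  # inner radius, highest elevation angle
```

A mapping of exactly this form has zero radial distortion by construction; a real lens only approximates it, which is why the application bounds the distortion (e.g. smaller than 20%) instead of requiring exact linearity.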

The imaging device may have a substantially cylindrical object surface. The imaging device may effectively utilize the pixels of the image sensor for capturing an annular image, which represents the cylindrical object surface. For certain applications, it is not necessary to capture images of objects which are located directly above the imaging device. For those applications, the imaging device may utilize the pixels of an image sensor more effectively when compared with e.g. a fish-eye lens. The imaging device may be attached e.g. to a vehicle in order to monitor obstacles, other vehicles and/or persons around the vehicle. The imaging device may be used e.g. as a stationary surveillance camera. The imaging device may be arranged to capture images for a machine vision system.

In an embodiment, the imaging device may be arranged to provide panorama images for a teleconference system. For example, the imaging device may be arranged to provide a panorama image of several persons located in a single room. A teleconference system may comprise one or more imaging devices for providing and transmitting panorama images. The teleconference system may capture and transmit a video sequence, wherein the video sequence may comprise one or more panorama images.

The imaging device may comprise an input element, which has two refractive surfaces and two reflective surfaces to provide a folded optical path. The folded optical path may allow reducing the size of the imaging device. The imaging device may have a low height, due to the folded optical path.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows, by way of example, in a cross sectional view, an imaging device which comprises an omnidirectional lens,

Fig. 2 shows, by way of example, in a cross sectional view, an imaging device which comprises the omnidirectional lens,

Fig. 3a shows, by way of example, in a three dimensional view, forming an annular optical image on an image sensor,

Fig. 3b shows, by way of example, in a three dimensional view, forming several optical images on the image sensor,

Fig. 4 shows, by way of example, in a three dimensional view, upper and lower boundaries of the viewing region of the imaging device,

Fig. 5a shows an optical image formed on the image sensor,

Fig. 5b shows, by way of example, forming a panoramic image from the captured digital image,

Fig. 6a shows, by way of example, in a three-dimensional view, an elevation angle corresponding to a point of an object,

Fig. 6b shows, by way of example, in a top view, an image point corresponding to the object point of Fig. 6a,

Fig. 7a shows, by way of example, in a side view, an entrance pupil of the imaging device,

Fig. 7b shows, by way of example, in an end view, the entrance pupil of Fig. 7a,

Fig. 7c shows, by way of example, in a top view, the entrance pupil of Fig. 7a,

Fig. 8a shows, by way of example, in a top view, the aperture stop of the imaging device,

Fig. 8b shows, by way of example, in an end view, rays passing through the aperture stop,

Fig. 8c shows, by way of example, in a side view, rays passing through the aperture stop,

Fig. 8d shows, by way of example, in an end view, propagation of peripheral rays in the imaging device,

Fig. 8e shows, by way of example, in a top view, propagation of peripheral rays from the input surface to the aperture stop,

Fig. 9a shows, by way of example, in a side view, rays impinging on the image sensor,

Fig. 9b shows, by way of example, in an end view, rays impinging on the image sensor,

Fig. 9c shows modulation transfer functions for several different elevation angles,

Fig. 10 shows, by way of example, functional units of the imaging device,

Fig. 11 shows, by way of example, characteristic dimensions of the input element,

Fig. 12 shows, by way of example, an imaging device implemented without the beam modifying unit, and

Fig. 13 shows, by way of example, detector pixels of an image sensor.

DETAILED DESCRIPTION

Referring to Fig. 1, an imaging device 500 may comprise an input element LNS1, an aperture stop AS1, a focusing unit 300, and an image sensor DET1. The imaging device 500 may have a wide viewing region VREG1 about an axis AX0 (Fig. 4). The imaging device 500 may have a viewing region VREG1 which completely surrounds the optical axis AX0. The viewing region VREG1 may represent a 360° angle about the axis AX0. The input element LNS1 may be called e.g. an omnidirectional lens or a panoramic lens. The optical elements of the imaging device 500 may form a combination which may be called e.g. the omnidirectional objective. The imaging device 500 may be called e.g. an omnidirectional imaging device or a panoramic imaging device. The imaging device 500 may be e.g. a camera.

The optical elements of the device 500 may be arranged to refract and/or reflect light of one or more light beams. Each beam may comprise a plurality of light rays. The input element LNS1 may comprise an input surface SRF1, a first reflective surface SRF2, a second reflective surface SRF3, and an output surface SRF4. A first input beam B01 may impinge on the input surface SRF1. The first input beam B01 may be received e.g. from a point P1 of an object O1 (Fig. 3a). The input surface SRF1 may be arranged to provide a first refracted beam B11 by refracting light of the input beam B01, the first reflective surface SRF2 may be arranged to provide a first reflected beam B21 by reflecting light of the first refracted beam B11, the second reflective surface SRF3 may be arranged to provide a second reflected beam B31 by reflecting light of the first reflected beam B21, and the output surface SRF4 may be arranged to provide an output beam B41 by refracting light of the second reflected beam B31.

The input surface SRF1 may have a first radius of curvature in the vertical direction, and the input surface SRF1 may have a second radius of curvature in the horizontal direction. The second radius may be different from the first radius, and refraction at the input surface SRF1 may cause astigmatism. In particular, the input surface SRF1 may be a portion of a toroidal surface. The reflective surface SRF2 may be e.g. a substantially conical surface. The reflective surface SRF2 may cross-couple the tangential and sagittal optical power, which may cause astigmatism and coma (comatic aberration). The refractive surfaces SRF1 and SRF4 may contribute to the lateral color characteristics. The shapes of the surfaces SRF1, SRF2, SRF3, SRF4 may be optimized e.g. to minimize the total amount of astigmatism, coma and/or chromatic aberration. The shapes of the surfaces SRF1, SRF2, SRF3, SRF4 may be iteratively optimized by using optical design software, e.g. software available under the trade name "Zemax". Examples of suitable shapes for the surfaces are specified e.g. in Tables 1.2 and 1.3, and in Tables 2.2 and 2.3.

The imaging device 500 may optionally comprise a wavefront modifying unit 200 to modify the wavefront of output beams provided by the input element LNS1. The wavefront of the output beam B41 may be optionally modified by the wavefront modifying unit 200. The wavefront modifying unit 200 may be arranged to form an intermediate beam B51 by modifying the wavefront of the output beam B41. The intermediate beam may also be called e.g. a corrected beam or a modified beam. The aperture stop AS1 may be positioned between the input element LNS1 and the focusing unit 300. The aperture stop may be positioned between the modifying unit 200 and the focusing unit 300. The aperture stop AS1 may be arranged to limit the transverse dimensions of the intermediate beam B51. The aperture stop AS1 may also define the entrance pupil of the imaging device 500 (Fig. 7b).
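Because the aperture stop defines the entrance pupil, the effective F-number behaves like the ratio of the focal length f1 of the focusing unit to the width Wk of the entrance pupil (cf. claim 2). A minimal sketch of that relationship; the focal length and pupil width below are illustrative assumptions, not values from this application:

```python
def effective_f_number(focal_length_mm, pupil_width_mm):
    # Effective F-number approximated as f1 / Wk: the focal length of the
    # focusing unit divided by the width of the entrance pupil EPUk
    # defined by the aperture stop AS1.
    return focal_length_mm / pupil_width_mm

# Illustrative values:
f_eff = effective_f_number(10.0, 4.0)
print(f_eff)                # 2.5
print(1.0 <= f_eff <= 5.6)  # True: within the range of 1.0 to 5.6
```

A smaller effective F-number (wider pupil for a given focal length) corresponds to the higher light collection power discussed above.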

The light of the intermediate beam B51 may be focused on the image sensor DET1 by the focusing unit 300. The focusing unit 300 may be arranged to form a focused beam B61 by focusing light of the intermediate beam B51. The focused beam B61 may impinge on a point P1' of the image sensor DET1. The point P1' may be called e.g. an image point. The image point may overlap one or more detector pixels of the image sensor DET1, and the image sensor DET1 may provide a digital signal indicative of the brightness of the image point.

A second input beam B0k may impinge on the input surface SRF1. The direction DIRk of the second input beam B0k may be different from the direction DIR1 of the first input beam B01. The beams B01, B0k may be received e.g. from two different points P1, Pk of an object O1.

The input surface SRF1 may be arranged to provide a refracted beam B1k by refracting light of the second input beam B0k, the first reflective surface SRF2 may be arranged to provide a reflected beam B2k by reflecting light of the refracted beam B1k, the second reflective surface SRF3 may be arranged to provide a reflected beam B3k by reflecting light of the reflected beam B2k, and the output surface SRF4 may be arranged to provide an output beam B4k by refracting light of the reflected beam B3k. The wavefront modifying unit 200 may be arranged to form an intermediate beam B5k by modifying the wavefront of the output beam B4k. The aperture stop AS1 may be arranged to limit the transverse dimensions of the intermediate beam B5k. The focusing unit 300 may be arranged to form a focused beam B6k by focusing light of the intermediate beam B5k. The focused beam B6k may impinge on a point Pk' of the image sensor DET1. The point Pk' may be spatially separate from the point P1'.

The input element LNS1 and the focusing unit 300 may be arranged to form an optical image IMG1 on the image sensor DET1, by receiving several beams B01, B0k from different directions DIR1, DIRk. The input element LNS1 may be substantially axially symmetric about the axis AX0. The optical components of the imaging device 500 may be substantially axially symmetric about the axis AX0. The input element LNS1 may be axially symmetric about the axis AX0. The axis AX0 may be called e.g. the symmetry axis, or the optical axis.

The input element LNS1 may also be arranged to operate such that the wavefront modifying unit 200 is not needed. In that case the surface SRF4 of the input element LNS1 may directly provide the intermediate beam B51 by refracting light of the reflected beam B31. The surface SRF4 of the input element LNS1 may directly provide the intermediate beam B5k by refracting light of the reflected beam B3k. In this case, the output beam of the input element LNS1 may be directly used as the intermediate beam B5k.

The aperture stop AS1 may be positioned between the input element LNS1 and the focusing unit 300. The center of the aperture stop AS1 may substantially coincide with the axis AX0. The aperture stop AS1 may be substantially circular.

The input element LNS1, the optical elements of the (optional) modifying unit 200, the aperture stop AS1, and the optical elements of the focusing unit 300 may be substantially axially symmetric with respect to the axis AX0. The input element LNS1 may be arranged to operate such that the second reflected beam B3k formed by the second reflective surface SRF3 does not intersect the first refracted beam B1k formed by the input surface SRF1.

The first refracted beam B1k, the first reflected beam B2k, and the second reflected beam B3k may propagate in a substantially homogeneous material without propagating in a gas.

The imaging device 500 may be arranged to form the optical image IMG1 on an image plane PLN1. The active surface of the image sensor DET1 may substantially coincide with the image plane PLN1. The image sensor DET1 may be positioned such that the light-detecting pixels of the image sensor DET1 are substantially in the image plane PLN1. The imaging device 500 may be arranged to form the optical image IMG1 on the active surface of the image sensor DET1. The image plane PLN1 may be substantially perpendicular to the axis AX0.

The image sensor DET1 may be attached to the imaging device 500 during manufacture of the imaging device 500 so that the imaging device 500 may comprise the image sensor DET1. However, the imaging device 500 may also be provided without the image sensor DET1. For example, the imaging device 500 may be manufactured or transported without the image sensor DET1. The image sensor DET1 may be attached to the imaging device 500 at a later stage, prior to capturing the images IMG1. SX, SY, and SZ denote orthogonal directions. The direction SY is shown e.g. in Fig. 3a. The symbol k may denote e.g. a one-dimensional or a two-dimensional index. For example, the imaging device 500 may be arranged to form an optical image IMG1 by focusing light of several input beams B01, B02, B03, ..., B0k-1, B0k, B0k+1, ...

Referring to Fig. 2, the focusing unit 300 may comprise e.g. one or more lenses 301, 302, 303, 304. The focusing unit 300 may be optimized for off-axis performance.

The imaging device 500 may optionally comprise a window WN1 to protect the surface of the image sensor DET1. The wavefront modifying unit 200 may comprise e.g. one or more lenses 201. The wavefront modifying unit 200 may be arranged to form an intermediate beam B5k by modifying the wavefront of the output beam B4k. In particular, the input element LNS1 and the wavefront modifying unit 200 may be arranged to form a substantially collimated intermediate beam B5k from the light of a collimated input beam B0k. The collimated intermediate beam B5k may have a substantially planar wavefront.

In an embodiment, the input element LNS1 and the wavefront modifying unit 200 may also be arranged to form a converging or diverging intermediate beam B5k. The converging or diverging intermediate beam B5k may have a substantially spherical wavefront.

Referring to Fig. 3a, the imaging device 500 may be arranged to focus light B6k on a point Pk' on the image sensor DET1, by receiving light B0k from an arbitrary point Pk of the object O1. The imaging device 500 may be arranged to form an image SUB1 of an object O1 on the image sensor DET1. The image SUB1 of the object O1 may be called e.g. a sub-image. The optical image IMG1 formed on the image sensor DET1 may comprise the sub-image SUB1. Referring to Fig. 3b, the imaging device 500 may be arranged to focus light B6R on the image sensor DET1, by receiving light B0R from a second object O2. The imaging device 500 may be arranged to form a sub-image SUB2 of the second object O2 on the image sensor DET1. The optical image IMG1 formed on the image sensor DET1 may comprise one or more sub-images SUB1, SUB2. The optical sub-images SUB1, SUB2 may be formed simultaneously on the image sensor DET1. The optical image IMG1 representing the 360° view around the axis AX0 may be formed simultaneously and instantaneously.

In an embodiment, the objects O1, O2 may be e.g. on substantially opposite sides of the input element LNS1. The input element LNS1 may be located between a first object O1 and a second object O2.

The input element LNS1 may provide output light B4R by receiving light B0R from the second object O2. The wavefront modifying unit 200 may be arranged to form an intermediate beam B5R by modifying the wavefront of the output beam B4R. The aperture stop AS1 may be arranged to limit the transverse dimensions of the intermediate beam B5R. The focusing unit 300 may be arranged to form a focused beam B6R by focusing light of the intermediate beam B5R.

Referring to Fig. 4, the imaging device 500 may have a viewing region VREG1. The viewing region VREG1 may also be called e.g. the viewing volume or the viewing zone. The imaging device 500 may form a substantially sharp image of an object O1 which resides within the viewing region VREG1.

The viewing region VREG1 may completely surround the axis AX0. The upper boundary of the viewing region VREG1 may be a conical surface, which has an angle 90°-θMAX with respect to the direction SZ. The angle θMAX may be e.g. in the range of +30° to +60°. The lower boundary of the viewing region VREG1 may be a conical surface, which has an angle 90°-θMIN with respect to the direction SZ. The angle θMIN may be e.g. in the range of -30° to +20°. The angle θMAX may represent the maximum elevation angle of an input beam with respect to a reference plane REF1, which is perpendicular to the direction SZ. The reference plane REF1 may be defined by the directions SX, SY. The angle θMIN may represent the minimum elevation angle of an input beam with respect to the reference plane REF1.
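Whether a point belongs to the viewing region reduces to a test on its elevation angle relative to the reference plane REF1 (the SX-SY plane). A minimal sketch; the boundary angles below are illustrative assumptions within the stated ranges:

```python
import math

def elevation_deg(x, y, z):
    """Elevation angle (degrees) of the direction (x, y, z), measured
    from the reference plane REF1 (the SX-SY plane) toward SZ."""
    return math.degrees(math.atan2(z, math.hypot(x, y)))

def in_viewing_region(x, y, z, theta_min=-30.0, theta_max=45.0):
    # theta_min and theta_max are illustrative boundary angles.
    return theta_min <= elevation_deg(x, y, z) <= theta_max

print(in_viewing_region(1.0, 0.0, 0.0))  # horizontal direction: True
print(in_viewing_region(0.0, 0.0, 1.0))  # directly above, on AX0: False
```

The second example illustrates the earlier point that objects directly above the imaging device fall outside the viewing region.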

The vertical field of view (θMAX-θMIN) of the imaging device 500 may be defined by a first angle value θMIN and by a second angle value θMAX, wherein the first angle value θMIN may be lower than or equal to e.g. 0°, and the second angle value θMAX may be higher than or equal to e.g. +35°. The vertical field of view (θMAX-θMIN) of the imaging device 500 may also be defined such that the first angle value θMIN is lower than or equal to -30°, and the second angle value θMAX is higher than or equal to +45°.

The vertical field of view (= θMAX-θMIN) of the device 500 may be e.g. in the range of 5° to 60°. The imaging device 500 may be capable of forming the optical image IMG1 e.g. with a spatial resolution which is higher than e.g. 90 line pairs per mm.

Referring to Fig. 5a, the imaging device 500 may form a substantially annular two-dimensional optical image IMG1 on the image sensor DET1. The imaging device 500 may form a substantially annular two-dimensional optical image IMG1 on an image plane PLN1, and the image sensor DET1 may be positioned in the image plane PLN1.

The image IMG1 may be an image of the viewing region VREG1. The image IMG1 may comprise one or more sub-images SUB1, SUB2 of objects residing in the viewing region VREG1. The optical image IMG1 may have an outer diameter dMAX and an inner diameter dMIN. The inner boundary of the optical image IMG1 may correspond to the upper boundary of the viewing region VREG1, and the outer boundary of the optical image IMG1 may correspond to the lower boundary of the viewing region VREG1. The outer diameter dMAX may correspond to the minimum elevation angle θMIN, and the inner diameter dMIN may correspond to the maximum elevation angle θMAX.

The image sensor DET1 may be arranged to convert the optical image IMG1 into a digital image DIMG1. The image sensor DET1 may provide the digital image DIMG1. The digital image DIMG1 may represent the annular optical image IMG1. The digital image DIMG1 may be called e.g. an annular digital image DIMG1.

The inner boundary of the image IMG1 may surround a central region CREG1 such that the diameter of the central region CREG1 is smaller than the inner diameter dMIN of the annular image IMG1. The device 500 may be arranged to form the annular image IMG1 without forming an image on the central region CREG1 of the image sensor DET1. The image IMG1 may have a center point CP1. The device 500 may be arranged to form the annular image IMG1 without focusing light to the center point CP1. The active area of the image sensor DET1 may have a length LDET1 and a width WDET1. The active area means the area which is capable of detecting light. The width WDET1 may denote the shortest dimension of the active area in a direction which is perpendicular to the axis AXO, and the length LDET1 may denote the dimension of the active area in a direction which is perpendicular to the width WDET1. The width WDET1 of the sensor DET1 may be greater than or equal to the outer diameter dMAX of the annular image IMG1 so that the whole annular image IMG1 may be captured by the sensor DET1.

Referring to Fig. 5b, the annular digital image DIMG1 may be converted into a panoramic image PAN1 by performing a de-warping operation. The panoramic image PAN1 may be formed from the annular digital image DIMG1 by digital image processing.

The digital image DIMG1 may be stored e.g. in a memory MEM1. However, the digital image DIMG1 may also be converted into the panoramic image PAN1 pixel by pixel, without a need to store the whole digital image DIMG1 in the memory MEM1.

The conversion may comprise determining signal values associated with the points of the panoramic image PAN1 from signal values associated with the points of the annular digital image DIMG1. The panorama image PAN1 may comprise e.g. a sub-image SUB1 of the first object O1 and a sub-image SUB2 of the second object O2. The panorama image PAN1 may comprise one or more sub-images of objects residing in the viewing region of the imaging device 500.

The whole optical image IMG1 may be formed instantaneously and simultaneously on the image sensor DET1. Consequently, the whole digital image DIMG1 may be formed without stitching, i.e. without combining two or more images taken in different directions. The panorama image PAN1 may be formed from the digital image DIMG1 without stitching. In an embodiment, the imaging device 500 may remain stationary while capturing the digital image DIMG1, i.e. it is not necessary to change the orientation of the imaging device 500 for capturing the whole digital image DIMG1.

The image sensor DET1 may comprise a two-dimensional rectangular array of detector pixels, wherein the position of each pixel may be specified by coordinates (x,y) of a first rectangular system (Cartesian system). The image sensor DET1 may provide the digital image DIMG1 as a group of pixel values, wherein the position of each pixel may be specified by the coordinates. For example, the position of an image point Pk' may be specified by coordinates (xk, yk) (or by indicating the corresponding column and row of a detector pixel of the image sensor DET1).

In an embodiment, the positions of image points of the digital image DIMG1 may also be expressed by using polar coordinates (γk, rk). The positions of the pixels of the panorama image PAN1 may be specified by coordinates (u,v) of a second rectangular system defined by image directions SU and SV. The panorama image PAN1 may have a width uMAX and a height vMAX. The position of an image point of the panorama image PAN1 may be specified by coordinates u,v with respect to a reference point REFP. An image point Pk' of the annular image IMG1 may have polar coordinates (γk, rk), and the corresponding image point Pk' of the panorama image PAN1 may have rectangular coordinates (uk, vk).

The de-warping operation may comprise mapping positions expressed in the polar coordinate system of the annular image DIMG1 into positions expressed in the rectangular coordinate system of the panorama image PAN1.
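The polar-to-rectangular mapping described above can be sketched as a short de-warping routine. This is an illustrative sketch only, not the patent's implementation: the function name, the assumed centre point and the radii are hypothetical, and a practical device would interpolate between pixels rather than use nearest-neighbour sampling.

```python
import numpy as np

def dewarp(annular, center, r_min, r_max, pan_w, pan_h):
    """Convert an annular image DIMG1 into a panoramic strip PAN1.

    Each panorama pixel (u, v) is mapped to polar coordinates
    (gamma, r) of the annular image: gamma covers 0..2*pi across the
    panorama width, and r runs from r_min (top row, i.e. the inner
    boundary / maximum elevation) to r_max (bottom row)."""
    cy, cx = center
    uu, vv = np.meshgrid(np.arange(pan_w), np.arange(pan_h))
    gamma = 2.0 * np.pi * uu / pan_w
    r = r_min + (r_max - r_min) * vv / max(pan_h - 1, 1)
    # nearest-neighbour sampling, clamped to the sensor area
    x = np.clip(np.rint(cx + r * np.cos(gamma)).astype(int), 0, annular.shape[1] - 1)
    y = np.clip(np.rint(cy + r * np.sin(gamma)).astype(int), 0, annular.shape[0] - 1)
    return annular[y, x]

# hypothetical 64x64 sensor image with a ring of inner radius 10, outer radius 30
annular = np.zeros((64, 64))
panorama = dewarp(annular, center=(32, 32), r_min=10.0, r_max=30.0, pan_w=180, pan_h=20)
```

The top row of the panorama is taken from the inner radius because, per the text above, the inner boundary of the annular image corresponds to the upper boundary of the viewing region.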

The imaging device 500 may provide a curvilinear, i.e. distorted, image IMG1 of its surroundings VREG1. The imaging device 500 may provide a large field size and sufficient resolving power, wherein the image distortion caused by the imaging device 500 may be corrected by digital image processing.

In an embodiment, the device 500 may also form a blurred optical image on the central region CREG1 of the image sensor DET1. The imaging device 500 may be arranged to operate such that the panorama image PAN1 is determined mainly from the image data obtained from the annular region defined by the inner diameter dMIN and the outer diameter dMAX. The annular image IMG1 may have an inner radius rMIN (=dMIN/2) and an outer radius rMAX (=dMAX/2). The imaging device 500 may focus the light of the input beam B0k to the detector DET1 such that the radial coordinate rk may depend on the elevation angle θk of the input beam B0k.

Referring to Fig. 6a, the input surface SRF1 of the device 500 may receive an input beam B0k from an arbitrary point Pk of an object O1. The beam B0k may propagate in a direction DIRk defined by an elevation angle θk and by an azimuth angle φk. The elevation angle θk may denote the angle between the direction DIRk of the beam B0k and the horizontal reference plane REF1. The direction DIRk of the beam B0k may have a projection DIRk' on the horizontal reference plane REF1. The azimuth angle φk may denote the angle between the projection DIRk' and a reference direction. The reference direction may be e.g. the direction SX.

The beam B0k may be received e.g. from a point Pk of the object O1. Rays received from a remote point Pk at the entrance pupil EPUk of the input surface SRF1 may together form a substantially collimated beam B0k. The input beam B0k may be a substantially collimated beam.

The reference plane REF1 may be perpendicular to the symmetry axis AXO. The reference plane REF1 may be perpendicular to the direction SZ. When the angles are expressed in degrees, the angle between the direction SZ and the direction DIRk of the beam B0k may be equal to 90°−θk. The angle 90°−θk may be called e.g. the vertical input angle.

The input surface SRF1 may simultaneously receive several beams from different points of the object O1.

Referring to Fig. 6b, the imaging device 500 may focus the light of the beam B0k to a point Pk' on the image sensor DET1. The position of the image point Pk' may be specified e.g. by polar coordinates γk, rk. The annular optical image IMG1 may have a center point CP1. The angular coordinate γk may specify the angular position of the image point Pk' with respect to the center point CP1 and with respect to a reference direction (e.g. SX). The radial coordinate rk may specify the distance between the image point Pk' and the center point CP1. The angular coordinate γk of the image point Pk' may be substantially equal to the azimuth angle φk of the input beam B0k.

The annular image IMG1 may have an inner radius rMIN and an outer radius rMAX. The imaging device 500 may focus the light of the input beam B0k to the detector DET1 such that the radial coordinate rk may depend on the elevation angle θk of said input beam B0k.

The ratio of the inner radius rMIN to the outer radius rMAX may be e.g. in the range of 0.3 to 0.7.

The radial position rk may depend on the elevation angle θk in a substantially linear manner. An input beam B0k may have an elevation angle θk, and the input beam B0k may provide an image point Pk' which has a radial position rk. An estimate rk,est for the radial position rk may be determined from the elevation angle θk e.g. by the following mapping equation:

rk,est = rMIN + f1·(θk − θMIN)   (1)

f1 may denote the focal length of the imaging device 500. The angles of equation (1) may be expressed in radians. The focal length f1 of the imaging device 500 may be e.g. in the range of 0.5 to 20 mm.
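Equation (1) can be evaluated numerically. A minimal sketch with illustrative values (f1 = 1.26 mm as in the example of Fig. 9c; the inner radius rMIN = 0.9 mm and θMIN = 0° are hypothetical sample values, not taken from the patent):

```python
import math

def radial_position_estimate(theta_k, theta_min, r_min, f1):
    """Mapping equation (1): r_est = r_min + f1*(theta_k - theta_min).
    Angles in radians, lengths in mm."""
    return r_min + f1 * (theta_k - theta_min)

# elevation angle 35 degrees with theta_min = 0 degrees
r_est = radial_position_estimate(math.radians(35.0), 0.0, 0.9, 1.26)
```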

The input element LNS1 and the optional modifying unit 200 may be arranged to operate such that the intermediate beam B5k is substantially collimated. The input element LNS1 and the optional modifying unit 200 may be arranged to operate such that the intermediate beam B5k has a substantially planar wavefront. The focal length f1 of the imaging device 500 may be substantially equal to the focal length of the focusing unit 300 when the intermediate beam B5k is substantially collimated after passing through the aperture stop AS1.

The input element LNS1 and the wavefront modifying unit 200 may be arranged to provide an intermediate beam B5k such that the intermediate beam B5k is substantially collimated after passing through the aperture stop AS1. The focusing unit 300 may be arranged to focus light of the intermediate beam B5k to the image plane PLN1. The input element LNS1 and the optional modifying unit 200 may also be arranged to operate such that the intermediate beam B5k is not fully collimated after the aperture stop AS1. In that case the focal length f1 of the imaging device 500 may also depend on the properties of the input element LNS1, and/or on the properties of the modifying unit 200 (if the device 500 comprises the unit 200).

In the general case, the focal length f1 of the imaging device 500 may be defined based on the actual mapping properties of the device 500, by using equation (2):

f1 = (rk+1 − rk) / (θk+1 − θk)   (2)

The angles of equation (2) may be expressed in radians. θk denotes the elevation angle of a first input beam B0k. θk+1 denotes the elevation angle of a second input beam B0k+1. The angle θk+1 may be selected such that the difference θk+1−θk is e.g. in the range of 0.001 to 0.02 radians. The first input beam B0k may form a first image point Pk' on the image sensor DET1. rk denotes the radial position of the first image point Pk'. The second input beam B0k+1 may form a second image point Pk+1' on the image sensor DET1. rk+1 denotes the radial position of the second image point Pk+1'.
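The difference quotient of equation (2) can be sketched directly; the sample radial positions below are hypothetical measured values, chosen so that the angular step falls in the stated 0.001 to 0.02 radian range:

```python
def local_focal_length(r_k, r_k1, theta_k, theta_k1):
    """Equation (2): f1 = (r_{k+1} - r_k) / (theta_{k+1} - theta_k).
    Angles in radians; the angular step should be small (0.001..0.02 rad)."""
    return (r_k1 - r_k) / (theta_k1 - theta_k)

# hypothetical pair of image points whose elevations differ by 0.01 rad
f1 = local_focal_length(1.200, 1.2126, 0.500, 0.510)
```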

θMIN may denote the elevation angle which corresponds to the inner radius rMIN of the annular image IMG1. The focal length f1 of the imaging device 500 may be e.g. in the range of 0.5 to 20 mm. In particular, the focal length f1 may be in the range of 0.5 mm to 5 mm.

The relationship between the elevation angle θk of the input beam B0k and the radial position rk of the corresponding image point Pk' may be approximated by the equation (1). The actual radial position rk of the image point Pk' may slightly deviate from the estimated value rk,est given by the equation (1). The relative deviation Δr/rk,est may be calculated by the following equation:

Δr/rk,est = (rk − rk,est)/rk,est · 100%   (3a)

The radial distortion of the image IMG1 may be e.g. smaller than 20%. This may mean that the relative deviation Δr/rk,est of the radial position rk of each image point Pk' from a corresponding estimated radial position rk,est is smaller than 20%, wherein said estimated value rk,est is determined by the linear mapping equation (1).

The shapes of the surfaces SRF1, SRF2, SRF3, SRF4 may be selected such that the relative deviation Δr/rk,est is in the range of -20% to 20%. The radial distortion of the optical image IMG1 may be smaller than 20% when the vertical field of view (θMAX−θMIN) is defined by the angles θMIN = 0° and θMAX = +35°.

The root mean square (RMS) value of the relative deviation Δr/rk,est may depend on the focal length f1 of the imaging device 500. The RMS value of the relative deviation Δr/rk,est may be calculated e.g. by the following equation:

RMS = sqrt[ (1/(rMAX − rMIN)) · ∫ from rMIN to rMAX ((r − rest)/rest)² dr ]   (3b)

where

rest = rMIN + f1·(θ(r) − θMIN)   (3c)

θ(r) denotes the elevation angle of an input beam which produces an image point at a radial position r with respect to the center point CP1. The angles of equation (3c) may be expressed in radians. The focal length f1 of the imaging device 500 may be determined from the equation (3b), by determining the focal length value f1 which minimizes the RMS value of the relative deviation over the range from rMIN to rMAX. The focal length of the imaging device 500 may be defined to be the focal length value f1 that provides the minimum RMS relative deviation.
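The minimization in equations (3b)-(3c) can be sketched with a discrete grid search over candidate focal lengths. This is an illustrative sketch under stated assumptions, not a prescribed calibration procedure: the function names are invented, and the sample data is synthetic, distortion-free data generated with f1 = 1.26 mm and a hypothetical rMIN = 0.9 mm.

```python
import numpy as np

def rms_relative_deviation(f1, r, theta, r_min, theta_min):
    """Discrete form of equation (3b): RMS of (r - r_est)/r_est over
    sampled radial positions r with measured elevation angles theta(r)."""
    r_est = r_min + f1 * (theta - theta_min)          # equation (3c)
    return float(np.sqrt(np.mean(((r - r_est) / r_est) ** 2)))

def fit_focal_length(r, theta, r_min, theta_min, candidates):
    """Return the candidate f1 that minimizes the RMS relative deviation."""
    rms = [rms_relative_deviation(f, r, theta, r_min, theta_min) for f in candidates]
    return candidates[int(np.argmin(rms))]

# synthetic distortion-free mapping generated with f1 = 1.26 mm
theta = np.linspace(0.0, np.radians(35.0), 50)
r = 0.9 + 1.26 * theta
f_best = fit_focal_length(r, theta, 0.9, 0.0, np.linspace(0.5, 5.0, 451))
```

With distortion-free input data, the grid search recovers the generating focal length; with real measured data the residual RMS value at the minimum quantifies the radial distortion.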

The radial distortion may be compensated when forming the panorama image PAN1 from the image IMG1. However, the pixels of the image sensor DET1 may be used in an optimum way when the radial distortion is small, in order to provide a sufficient resolution in all parts of the panorama image PAN1.

The imaging device 500 may receive a plurality of input beams from different points of the object O1, and the light of each input beam may be focused on a different point of the image sensor DET1 to form the sub-image SUB1 of the object O1.

Referring to Figs. 7a to 7c, the input beam B0k may be coupled to the input element LNS1 via a portion EPUk of the input surface SRF1. The portion EPUk may be called the entrance pupil EPUk. The input beam B0k may comprise e.g. peripheral rays B0ak, B0bk, B0dk, B0ek and a central ray B0ck. The aperture stop AS1 may define the entrance pupil EPUk by preventing propagation of marginal rays.

The entrance pupil EPUk may have a width Wk and a height Δhk. The position of the entrance pupil EPUk may be specified e.g. by the vertical position zk of the center of the entrance pupil EPUk, and by the polar coordinate angle ωk of the center of the entrance pupil EPUk. The polar coordinate ωk may specify the position of the center of the entrance pupil EPUk with respect to the axis AXO, by using the direction SX as the reference direction. The angle ωk may be substantially equal to the angle φk+180°.

The input beam B0k may be substantially collimated, and the rays B0ak, B0bk, B0ck, B0dk, B0ek may be substantially parallel to the direction DIRk of the input beam B0k. The aperture stop AS1 may define the position and the dimensions Wk, Δhk of the entrance pupil EPUk according to the direction DIRk of the input beam B0k such that the position and the dimensions Wk, Δhk of the entrance pupil EPUk may depend on the direction DIRk of the input beam B0k. The dimensions Wk, Δhk of the entrance pupil EPUk may depend on the direction DIRk of the input beam B0k. The position of the center of the entrance pupil EPUk may depend on the direction DIRk of the input beam B0k. The entrance pupil EPUk may be called the entrance pupil of the imaging device 500 for rays propagating in the direction DIRk. The device 500 may simultaneously have several different entrance pupils for substantially collimated input beams received from different directions.

The imaging device 500 may be arranged to focus light of the input beam B0k via the aperture stop AS1 to an image point Pk' on the image sensor DET1. The aperture stop AS1 may be arranged to prevent propagation of rays which would cause blurring of the optical image IMG1. The aperture stop AS1 may be arranged to define the dimensions Wk, Δhk of the entrance pupil EPUk. Furthermore, the aperture stop AS1 may be arranged to define the position of the entrance pupil EPUk.

For example, a ray LB00k propagating in the direction DIRk may impinge on the input surface SRF1 outside the entrance pupil EPUk. The aperture stop AS1 may define the entrance pupil EPUk so that light of a ray LB00k does not contribute to forming the image point Pk'. The aperture stop AS1 may define the entrance pupil EPUk so that the light of marginal rays does not propagate to the image sensor DET1, wherein said marginal rays propagate in the direction DIRk and impinge on the input surface SRF1 outside the entrance pupil EPUk.

Rays B0ak, B0bk, B0ck, B0dk, B0ek which propagate in the direction DIRk and which impinge on the entrance pupil EPUk may contribute to forming the image point Pk'. Rays which propagate in a direction different from the direction DIRk may contribute to forming another image point, which is different from the image point Pk'. Rays which propagate in a direction different from the direction DIRk do not contribute to forming said image point Pk'.

Different image points Pk' may correspond to different entrance pupils EPUk. A first image point may be formed from first light received via a first entrance pupil, and a second image point may be formed from second light received via a second, different entrance pupil. The imaging device 500 may form a first intermediate beam from the first light, and the imaging device 500 may form a second intermediate beam from the second light such that the first intermediate beam and the second intermediate beam pass through the common aperture stop AS1.

The input element LNS1 and the focusing unit 300 may be arranged to form an annular optical image IMG1 on the image sensor DET1 such that the aperture stop AS1 defines an entrance pupil EPUk of the imaging device 500, the ratio f1/Wk of the focal length f1 of the focusing unit 300 to the width Wk of the entrance pupil EPUk is in the range of 1.0 to 5.6, and the ratio f1/Δhk of the focal length f1 to the height Δhk of said entrance pupil EPUk is in the range of 1.0 to 5.6.

Referring to Figs. 8a to 8c, the aperture stop AS1 may define the dimensions and the position of the entrance pupil EPUk by preventing propagation of marginal rays. The aperture stop AS1 may be substantially circular. The aperture stop AS1 may be defined e.g. by a hole which has a diameter dAS1. For example, an element 150 may have a hole which defines the aperture stop AS1. The element 150 may comprise e.g. a metallic, ceramic or plastic disk, which has a hole. The diameter dAS1 of the substantially circular aperture stop AS1 may be fixed or adjustable. The element 150 may comprise a plurality of movable lamellae for defining a substantially circular aperture stop AS1 which has an adjustable diameter dAS1.

The input beam B0k may comprise rays B0ak, B0bk, B0ck, B0dk, B0ek which propagate in the direction DIRk.

The device 500 may form a peripheral ray B5ak by refracting and reflecting light of the ray B0ak. A peripheral ray B5bk may be formed from the ray B0bk. A peripheral ray B5dk may be formed from the ray B0dk. A peripheral ray B5ek may be formed from the ray B0ek. A central ray B5ck may be formed from the ray B0ck.

The horizontal distance between the rays B0ak, B0bk may be equal to the width Wk of the entrance pupil EPUk. The vertical distance between the rays B0dk, B0ek may be equal to the height Δhk of the entrance pupil EPUk. A marginal ray B00k may propagate in the direction DIRk so that the marginal ray B00k does not impinge on the entrance pupil EPUk. The aperture stop AS1 may be arranged to block the marginal ray B00k such that the light of said marginal ray B00k does not contribute to forming the optical image IMG1. The device 500 may form a marginal ray B50k by refracting and reflecting light of the marginal ray B00k. The aperture stop AS1 may be arranged to prevent propagation of the ray B50k so that light of the ray B50k does not contribute to forming the image point Pk'. The aperture stop AS1 may be arranged to prevent propagation of the light of the ray B00k so that said light does not contribute to forming the image point Pk'. A portion of the beam B5k may propagate through the aperture stop AS1. Said portion may be called e.g. the trimmed beam B5k. The aperture stop AS1 may be arranged to form a trimmed beam B5k by preventing propagation of the marginal rays B50k. The aperture stop AS1 may be arranged to define the entrance pupil EPUk by preventing propagation of marginal rays B50k.

The imaging device 500 may be arranged to form an intermediate beam B5k by refracting and reflecting light of the input beam B0k. The intermediate beam B5k may comprise the rays B5ak, B5bk, B5ck, B5dk, B5ek. The direction of the central ray B5ck may be defined e.g. by an angle φck. The direction of the central ray B5ck may depend on the elevation angle θk of the input beam B0k.

Fig. 8d shows propagation of peripheral rays in the imaging device 500, when viewed from a direction which is parallel to the projected direction DIRk' of the input beam B0k (the projected direction DIRk' may be e.g. parallel with the direction SX). Fig. 8d shows propagation of peripheral rays from the surface SRF3 to the image sensor DET1. The surface SRF3 may form peripheral rays B3dk, B3ek by reflecting light of the beam B2k. The surface SRF4 may form peripheral rays B4dk, B4ek by refracting light of the rays B3dk, B3ek. The modifying unit 200 may form peripheral rays B5dk, B5ek from the light of the rays B4dk, B4ek. The focusing unit 300 may form focused rays B6dk, B6ek by focusing light of the rays B5dk, B5ek.

Fig. 8e shows propagation of rays in the imaging device 500, when viewed from the top. Fig. 8e shows propagation of light from the input surface SRF1 to the aperture stop AS1. The input surface SRF1 may form a refracted beam B1k by refracting light of the input rays B0ck, B0dk, B0ek. The surface SRF2 may form a reflected beam B2k by reflecting light of the refracted beam B1k. The surface SRF3 may form a reflected beam B3k by reflecting light of the reflected beam B2k. The surface SRF4 may form a refracted beam B4k by refracting light of the reflected beam B3k. The modifying unit 200 may form an intermediate beam B5k from the refracted beam B4k. The beam B5k may pass via the aperture stop AS1, which prevents propagation of marginal rays.

Fig. 9a shows rays impinging on the image sensor DET1 in order to form an image point Pk'. The focusing unit 300 may be arranged to form the image point Pk' by focusing light of the intermediate beam B5k. The intermediate beam B5k may comprise e.g. peripheral rays B5ak, B5bk, B5dk, B5ek and a central ray B5ck. The focusing unit 300 may be arranged to provide a focused beam B6k by focusing light of the intermediate beam B5k. The focused beam B6k may comprise e.g. rays B6ak, B6bk, B6ck, B6dk, B6ek. The focusing unit 300 may form a peripheral ray B6ak by refracting and reflecting light of the ray B5ak. A peripheral ray B6bk may be formed from the ray B5bk. A peripheral ray B6dk may be formed from the ray B5dk. A peripheral ray B6ek may be formed from the ray B5ek. A central ray B6ck may be formed from the ray B5ck.

The direction of the peripheral ray B6ak may be defined by an angle φak with respect to the axis AXO. The direction of the peripheral ray B6bk may be defined by an angle φbk with respect to the axis AXO. The direction of the central ray B6ck may be defined by an angle φck with respect to the axis AXO. The rays B6ak, B6bk, B6ck may be in a first vertical plane, which includes the axis AXO. The first vertical plane may also include the direction DIRk of the input beam B0k.

Δφak may denote the angle between the direction of the ray B6ak and the direction of the central ray B6ck. Δφbk may denote the angle between the direction of the ray B6bk and the direction of the central ray B6ck. The sum Δφak + Δφbk may denote the angle between the peripheral rays B6ak, B6bk. The sum Δφak + Δφbk may be equal to the cone angle of the focused beam B6k in the radial direction of the annular optical image IMG1. The direction of the peripheral ray B6dk may be defined by an angle Δβdk with respect to the direction of the central ray B6ck. The central ray B6ck may propagate in the first vertical plane, which also includes the axis AXO. The direction of the peripheral ray B6ek may be defined by an angle Δβek with respect to the direction of the central ray B6ck. Δβdk may denote the angle between the direction of the ray B6dk and the direction of the central ray B6ck. Δβek may denote the angle between the direction of the ray B6ek and the direction of the central ray B6ck. The sum Δβdk + Δβek may denote the angle between the peripheral rays B6dk, B6ek. The sum Δβdk + Δβek may be equal to the cone angle of the focused beam B6k in the tangential direction of the annular optical image IMG1. The cone angle may also be called the vertex angle or the full cone angle. The half cone angle of the focused beam B6k may be equal to Δβdk in a situation where Δβdk is equal to Δβek.

The sum Δφak + Δφbk may depend on the dimensions of the aperture stop AS1 and on the focal length of the focusing unit 300. In particular, the sum Δφak + Δφbk may depend on the diameter dAS1 of the aperture stop AS1. The diameter dAS1 of the aperture stop AS1 and the focal length of the focusing unit 300 may be selected such that the sum Δφak + Δφbk is e.g. greater than 9°.

The sum Δβdk + Δβek may depend on the diameter of the aperture stop AS1 and on the focal length of the focusing unit 300. In particular, the sum Δβdk + Δβek may depend on the diameter dAS1 of the aperture stop AS1. The diameter dAS1 of the aperture stop AS1 and the focal length of the focusing unit 300 may be selected such that the sum Δβdk + Δβek is e.g. greater than 9°. The dimensions (dAS1) of the aperture stop AS1 may be selected such that the ratio (Δφak + Δφbk)/(Δβdk + Δβek) is in the range of 0.7 to 1.3, in order to provide sufficient image quality. In particular, the ratio (Δφak + Δφbk)/(Δβdk + Δβek) may be in the range of 0.9 to 1.1 to optimize spatial resolution in the radial direction of the image IMG1 and in the tangential direction of the image IMG1. The cone angle (Δφak + Δφbk) may have an effect on the spatial resolution in the radial direction (DIRk'), and the cone angle (Δβdk + Δβek) may have an effect on the spatial resolution in the tangential direction (the tangential direction is perpendicular to the direction DIRk'). The light of an input beam B0k having elevation angle θk may be focused to provide a focused beam B6k, which impinges on the image sensor DET1 at the image point Pk'. The F-number F(θk) of the imaging device 500 for the elevation angle θk may be defined by the following equation:

F(θk) = 1 / (2 · NAIMG,k)   (4a)

where NAIMG,k denotes the numerical aperture of the focused beam B6k. The numerical aperture NAIMG,k may be calculated by using the angles Δφak and Δφbk:

NAIMG,k = nIMG · sin((Δφak + Δφbk)/2)   (4b)

nIMG denotes the refractive index of the light-transmitting medium immediately above the image sensor DET1. The angles Δφak and Δφbk may depend on the elevation angle θk. The F-number F(θk) for the focused beam B6k may depend on the elevation angle θk of the corresponding input beam B0k.
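Equations (4a)-(4b) combine into a short helper. A sketch assuming air above the sensor (nIMG = 1); the function name and the example cone angle are illustrative:

```python
import math

def f_number(dphi_a, dphi_b, n_img=1.0):
    """F(theta_k) = 1/(2*NA) with NA = n_img*sin((dphi_a + dphi_b)/2),
    per equations (4a) and (4b). Angles in radians."""
    na = n_img * math.sin((dphi_a + dphi_b) / 2.0)
    return 1.0 / (2.0 * na)

# a symmetric focused beam with a 20 degree full cone angle in air
F = f_number(math.radians(10.0), math.radians(10.0))
```

For this example cone angle the result falls inside the 1.0 to 5.6 range of effective F-numbers discussed in the text.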

A minimum value FMIN may denote the minimum value of the function F(θk) when the elevation angle θk is varied from the lower limit θMIN to the upper limit θMAX. The effective F-number of the imaging device 500 may be defined to be equal to said minimum value FMIN.

The light-transmitting medium immediately above the image sensor DET1 may be e.g. gas, and the refractive index may be substantially equal to 1. The light-transmitting medium may also be e.g. a (protective) light-transmitting polymer, and the refractive index may be substantially greater than 1.

The modulation transfer function MTF of the imaging device 500 may be measured or checked e.g. by using an object O1 which has a stripe pattern. The image IMG1 may comprise a sub-image of the stripe pattern such that the sub-image has a certain modulation depth. The modulation transfer function MTF is equal to the ratio of the image modulation to the object modulation. The modulation transfer function MTF may be measured e.g. by providing an object O1 which has a test pattern formed of parallel lines, and by measuring the modulation depth of the corresponding image IMG1. The modulation transfer function MTF may be normalized to unity at zero spatial frequency. In other words, the modulation transfer function may be equal to 100% at the spatial frequency 0 line pairs/mm. The spatial frequency may be determined at the image plane PLN1, i.e. on the surface of the image sensor DET1.
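The measurement just described reduces to two ratios. A minimal sketch (function names and the intensity readings are illustrative, not from the patent):

```python
def modulation(i_max, i_min):
    """Modulation depth (Michelson contrast) of a stripe pattern."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(image_modulation, object_modulation):
    """MTF = image modulation / object modulation at a given spatial frequency."""
    return image_modulation / object_modulation

# hypothetical readings: object contrast 1.0, image intensities 0.77 and 0.23
value = mtf(modulation(0.77, 0.23), modulation(1.0, 0.0))   # approx. 0.54, i.e. 54%
```

The example readings were chosen to reproduce the 54% MTF value quoted in the text for the spatial frequency 90 line pairs/mm.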

The lower limit of the modulation transfer function MTF may be limited by the optical aberrations of the device 500, and the upper limit of the modulation transfer function MTF may be limited by diffraction.

Fig. 9c shows, by way of example, the modulation transfer function MTF of an imaging device 500 for three different elevation angles θk=0°, θk=20°, and θk=35°. The solid curves show the modulation transfer function when the test lines appearing in the image IMG1 are oriented tangentially with respect to the center point CP1. The dashed curves show the modulation transfer function when the test lines appearing in the image IMG1 are oriented radially with respect to the center point CP1. Fig. 9c shows modulation transfer function curves of the imaging device 500 specified in Tables 1.1 to 1.3.

Each curve of Fig. 9c represents the average of the modulation transfer functions MTF determined at the wavelengths 486 nm, 587 nm and 656 nm.

The outer diameter of the annular image IMG1 and the modulation transfer function MTF of the device 500 may depend on the focal length f1 of the device 500. In the case of Fig. 9c, the focal length f1 is equal to 1.26 mm and the outer diameter of the annular image IMG1 is equal to 3.5 mm.

For example, the modulation transfer function MTF at the spatial frequency 90 line pairs/mm may be substantially equal to 54%. For example, the modulation transfer function MTF at the spatial frequency 90 line pairs/mm may be higher than 50% for the whole vertical field of view from 0° to +35°. The whole width of the annular image IMG1 may comprise approximately 300 line pairs when the spatial frequency is equal to 90 line pairs/mm and the outer diameter of the annular image IMG1 is equal to 3.5 mm (3.5 mm × 90 line pairs/mm = 315 line pairs). The modulation transfer function MTF of the imaging device 500 at a first spatial frequency v1 may be higher than 50% for each elevation angle θk which is in the vertical field of view from θMIN to θMAX, wherein the first spatial frequency v1 is equal to 300 line pairs divided by the outer diameter dMAX of the annular optical image IMG1, and the effective F-number Feff of the device 500 may be e.g. in the range of 1.0 to 5.6.

The shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at a first spatial frequency v1 may be higher than 50% for at least one elevation angle θk which is in the range of 0° to +35°, wherein the first spatial frequency v1 is equal to 300 line pairs divided by the outer diameter of the annular optical image IMG1, and the effective F-number Feff of the device 500 may be e.g. in the range of 1.0 to 5.6. The modulation transfer function at the first spatial frequency v1 and at said at least one elevation angle θk may be higher than 50% in the radial direction and in the tangential direction of the optical image IMG1.

The shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at a first spatial frequency v1 may be higher than 50% for each elevation angle θk which is in the range of 0° to +35°, wherein the first spatial frequency v1 is equal to 300 line pairs divided by the outer diameter of the annular optical image IMG1, and the effective F-number Feff of the device 500 may be e.g. in the range of 1.0 to 5.6. The modulation transfer function at the first spatial frequency v1 and at each of said elevation angles θk may be higher than 50% in the radial direction and in the tangential direction of the optical image IMG1.

The width WDET1 of the active area of the image sensor DET1 may be greater than or equal to the outer diameter of the annular image IMG1. The shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at a first spatial frequency ν1 may be higher than 50% for each elevation angle θk which is in the range of 0° to +35°, wherein the first spatial frequency ν1 is equal to 300 line pairs divided by the width WDET1 of the active area of the image sensor DET1, and the effective F-number Feff of the device 500 may be e.g. in the range of 1.0 to 5.6. The modulation transfer function at the first spatial frequency ν1 and at each of said elevation angles θk may be higher than 50% in the radial direction and in the tangential direction of the optical image IMG1.

Fig. 10 shows functional units of the imaging device 500. The imaging device 500 may comprise a control unit CNT1, a memory MEM1, a memory MEM2, and a memory MEM3. The imaging device 500 may optionally comprise a user interface UIF1 and/or a communication unit RXTX1.

The input element LNS1 and the focusing unit 300 may be arranged to form an optical image IMG1 on the image sensor DET1. The image sensor DET1 may convert the optical image IMG1 into a digital image DIMG1, which may be stored in the operational memory MEM1. The control unit CNT1 may be configured to form a panoramic image PAN1 from the digital image DIMG1. The panoramic image PAN1 may be stored e.g. in the memory MEM2. The control unit CNT1 may comprise one or more data processors. The control unit CNT1 may be configured to control operation of the imaging device 500 and/or to process image data. The memory MEM3 may comprise a computer program PROG1. The computer program code PROG1 may be configured to, when executed on at least one processor CNT1, cause the imaging device 500 to capture the annular image DIMG1 and/or to convert the annular image DIMG1 into a panoramic image PAN1.
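Converting the annular image DIMG1 into a panoramic image PAN1 is essentially a polar-to-rectangular resampling. The following is a minimal sketch of such an unwrapping, assuming nearest-neighbour sampling and a known image centre; it is an illustration, not the method mandated by the specification:

```python
import math

def unwrap_annular(img, cx, cy, r_min, r_max, out_w, out_h):
    # Map each pixel (row, col) of the panoramic output to a point of the
    # annular input image: row selects a radius between r_max (top row)
    # and r_min (bottom row), col selects an azimuth angle from 0 to 2*pi.
    out = [[0] * out_w for _ in range(out_h)]
    for row in range(out_h):
        r = r_max - (r_max - r_min) * row / max(out_h - 1, 1)
        for col in range(out_w):
            phi = 2.0 * math.pi * col / out_w
            x = int(round(cx + r * math.cos(phi)))
            y = int(round(cy + r * math.sin(phi)))
            out[row][col] = img[y][x]  # nearest-neighbour sampling
    return out
```

A practical implementation would typically use bilinear interpolation and a vectorized remapping, but the coordinate mapping is the same.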

The device 500 may be arranged to receive user input from a user via the user interface UIF1 . The device 500 may be arranged to display one or more images DIMG, PAN1 to a user via the user interface UIF1 . The user interface UIF1 may comprise e.g. a display, a touch screen, a keypad, and/or a joystick.

The device 500 may be arranged to send the images DIMG and/or PAN1 by using the communication unit RXTX1 . COM1 denotes a communication signal. The device 500 may be arranged to send the images DIMG and/or PAN1 e.g. to a remote device or to an Internet server. The communication unit RXTX1 may be arranged to communicate e.g. via a mobile communications network, via a wireless local area network (WLAN), and/or via the Internet. The device 500 may be connected to a mobile communication network such as the Global System for Mobile communications (GSM) network, 3rd Generation (3G) network, 3.5th Generation (3.5G) network, 4th Generation (4G) network, Wireless Local Area Network (WLAN), Bluetooth®, or other contemporary and future networks.

The device 500 may also be implemented in a distributed manner. For example, the digital image DIMG may be transmitted to a (remote) server, and forming the panoramic image PAN1 from the digital image DIMG may be performed by the server.

The imaging device 500 may be arranged to provide a video sequence, which comprises one or more panoramic images PAN1 determined from the digital images DIMG1. The video sequence may be stored, communicated, encoded and/or decoded by using a data compression codec, e.g. the MPEG-4 Part 2 codec, the H.264/MPEG-4 AVC codec, the Windows Media Video (WMV) codec, the DivX Pro codec, or a future codec (e.g. High Efficiency Video Coding, HEVC, H.265). The video data may also be encoded and/or decoded by using a lossless codec.

The images PAN1 may be communicated to a remote display or image projector such that the images PAN1 may be displayed by said remote display (or projector). Also the video sequence comprising the images PAN1 may be communicated to a remote display or image projector.

The input element LNS1 may be produced e.g. by molding, turning (with a lathe), milling, and/or grinding. In particular, the input element LNS1 may be produced e.g. by injection molding, by using a mold. The mold for making the input element LNS1 may be produced e.g. by turning, milling, grinding and/or 3D printing. The mold may be produced by using a master model. The master model for making the mold may be produced by turning, milling, grinding and/or 3D printing. The turning or milling may comprise using a diamond bit tool. If needed, the surfaces may be polished e.g. by flame polishing and/or by using abrasive techniques.

The input element LNS1 may be a solid body of transparent material. The material may be e.g. plastic, glass, fused silica, or sapphire.

In particular, the input element LNS1 may consist of a single piece of plastic, which may be produced by injection molding. Said single piece of plastic may be coated or uncoated. Consequently, large numbers of input elements LNS1 may be produced at relatively low manufacturing cost.

The shape of the surface SRF1 may be selected such that the input element LNS1 may be easily removed from a mold.

The thickness of the input element LNS1 may depend on the radial position. The input element LNS1 may have a maximum thickness at a first radial position and a minimum thickness at a second radial position (the second radial position may e.g. be smaller than 90% of the outer radius of the input element LNS1). The ratio of the minimum thickness to the maximum thickness may be e.g. greater than or equal to 0.5 in order to facilitate injection molding.

The optical interfaces of the optical elements may optionally be coated with anti-reflection coating(s).

The reflective surfaces SRF2, SRF3 of the input element LNS1 may be arranged to reflect light by total internal reflection (TIR). The orientations of the reflective surfaces SRF2, SRF3 and the refractive index of the material of the input element LNS1 may be selected to provide the total internal reflection (TIR).

In an embodiment, the imaging device 500 may be arranged to form the optical image IMG1 from infrared light. The input element LNS1 may comprise e.g. silicon or germanium for refracting and transmitting infrared light.

The image sensor DET1 may comprise a two-dimensional array of light-detecting pixels. The two-dimensional array of light-detecting pixels may also be called a detector array. The image sensor DET1 may be e.g. a CMOS image sensor (Complementary Metal Oxide Semiconductor) or a CCD image sensor (Charge Coupled Device). The active area of the image sensor DET1 may be substantially parallel to a plane defined by the directions SX and SY.

The resolution of the image sensor DET1 may be selected e.g. from the following list: 800 x 600 pixels (SVGA), 1024 x 600 pixels (WSVGA), 1024 x 768 pixels (XGA), 1280 x 720 pixels (WXGA), 1280 x 800 pixels (WXGA), 1280 x 960 pixels (SXGA), 1360 x 768 pixels (HD), 1400 x 1050 pixels (SXGA+), 1440 x 900 pixels (WXGA+), 1600 x 900 pixels (HD+), 1600 x 1200 pixels (UXGA), 1680 x 1050 pixels (WSXGA+), 1920 x 1080 pixels (full HD), 1920 x 1200 pixels (WUXGA), 2048 x 1152 pixels (QWXGA), 2560 x 1440 pixels (WQHD), 2560 x 1600 pixels (WQXGA), 3840 x 2160 pixels (UHD-1), 5120 x 2160 pixels (UHD), 5120 x 3200 pixels (WHXGA), 4096 x 2160 pixels (4K, DCI 4K), 4096 x 1716 pixels, 7680 x 4320 pixels (UHD-2).

In an embodiment, the image sensor DET1 may have an aspect ratio of 1:1 in order to minimize the number of inactive detector pixels. In an embodiment, the imaging device 500 does not need to be fully symmetric about the axis AXO. For example, the image sensor DET1 may overlap only half of the optical image IMG1, in order to provide a 180° view. This may provide a more detailed image for the 180° view.

In an embodiment, one or more sectors may be removed from the input element LNS1 to provide a viewing region, which is smaller than 360°.

In an embodiment, the input element LNS1 may comprise one or more holes, e.g. for attaching the input element LNS1 to one or more other components. In particular, the input element LNS1 may comprise a central hole. The input element LNS1 may comprise one or more protrusions, e.g. for attaching the input element LNS1 to one or more other components. The direction SZ may be called e.g. the vertical direction, and the directions SX and SY may be called e.g. horizontal directions. The direction SZ may be parallel to the axis AXO. The direction of gravity may be substantially parallel to the axis AXO. However, the direction of gravity may also be arbitrary with respect to the axis AXO. The imaging device 500 may have any orientation with respect to its surroundings.

Fig. 11 shows radial dimensions and vertical positions for the input element LNS1. The input surface SRF1 may have a lower boundary having a semi-diameter rSRF1B. The lower boundary may define a reference plane REF0. The input surface SRF1 may have an upper boundary having a semi-diameter rSRF1A. The upper boundary may be at a vertical position hSRF1A with respect to the reference plane REF0. The surface SRF2 may have a lower boundary having a semi-diameter rSRF2B. The surface SRF2 may have an upper boundary having a semi-diameter rSRF2A and a vertical position hSRF2A. The surface SRF3 may have a boundary having a semi-diameter rSRF3 and a vertical position hSRF3. The surface SRF4 may have a boundary having a semi-diameter rSRF4 and a vertical position hSRF4.

For example, the vertical position hSRF4 of the boundary of the refractive output surface SRF4 may be higher than the vertical position hSRF2A of the upper boundary of the reflective surface SRF2. For example, the vertical position hSRF3 of the boundary of the reflective surface SRF3 may be higher than the vertical position hSRF1A of the upper boundary of the input surface SRF1. Tables 1.1 to 1.3 show parameters, coefficients, and extra data associated with an imaging device of example 1.

Table 1.1. General parameters of the imaging device 500 of example 1.

Table 1.2. Characteristic parameters of the surfaces of example 1.

Surface      Type              Radius    Thickness  Refractive  Abbe  Diameter
                               (mm)      (mm)       index       Vd    (mm)
1 (SRF1)     Toroidal          -29.2     12.3       1.531       56    Not applicable
2            Coordinate break            1
3 (SRF2)     Odd Asphere       Infinite  -5         1.531       56    26
4 (SRF3)     Even Asphere      184.9     5.4        1.531       56    12
5 (SRF4)     Even Asphere      4.08      6          AIR         AIR   7.2
6            Even Asphere      -23       2          1.531       56    6.4
7            Even Asphere      -9.251    5          AIR         AIR   6.4
8            Aperture stop               0.27       AIR         AIR   2.6
9            Standard          3.17      1.436      1.587       59.6  3.4
10           Standard          -3.55     0.62       1.689       31.2  3.4
11           Standard          10.12     1.47       AIR         AIR   3.8
12           Even Asphere      -3.3      0.9        1.531       56    3.4
13           Even Asphere      -2.51     0          AIR         AIR   4
14           Even Asphere      3.61      1.07       1.531       56    4.6
15           Even Asphere      3.08      1.4        AIR         AIR   4.6
16           Plane             Infinite  0.5        1.517       64.2  6.2
17 (SRF17)   Plane             Infinite  1.5        AIR         AIR   6.2
18 (SRF18)   Image                                                    3.5

Table 1.3. Coefficients and extra data for defining the shapes of the surfaces of example 1.

Surface   α1        α2          α3          α4          α5
13        -0.088    9.328E-03   7.336E-03   -1.670E-03  3.009E-04
14        0.065     -0.031      -4.011E-04  -2.644E-04  6.290E-05
15        0.168     -0.075      3.363E-04   6.978E-04   -6.253E-05

The standard surface may mean a spherical surface centered on the optical axis AXO, with the vertex located at the current axis position. A plane may be treated as a special case of the spherical surface with infinite radius of curvature. The z-coordinate of a standard surface may be given by:

z = cr² / (1 + √(1 - (1 + K)c²r²))

r denotes the radius, i.e. the horizontal distance of a point from the axis AXO. The z-coordinate denotes the vertical distance of said point from the vertex of the standard surface. The z-coordinate may also be called the sag. c denotes the curvature of the surface (i.e. the reciprocal of the radius of curvature). K denotes the conic constant. The conic constant K is less than -1 for a hyperboloid. The conic constant K is -1 for a paraboloid surface. The conic constant K is in the range of -1 to 0 for an ellipsoid surface. The conic constant K is 0 for a spherical surface. The conic constant K is greater than 0 for an oblate ellipsoid surface.
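As an illustrative numerical check of the standard-surface sag formula above (the helper name is hypothetical, not from the specification):

```python
import math

def standard_sag(c, K, r):
    # Sag z of a standard (conic) surface with curvature c (1/radius of
    # curvature) and conic constant K, at radial distance r from the
    # axis AXO: z = c*r**2 / (1 + sqrt(1 - (1 + K)*c**2*r**2)).
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + K) * c * c * r * r))

# For a sphere (K = 0) of radius 10 mm, the sag at r = 1 mm equals the
# exact spherical sag R - sqrt(R**2 - r**2).
```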

A toroidal surface may be formed by defining a curve in the SY-SZ-plane, and then rotating the curve about the axis AXO. The toroidal surface may be defined using a base radius of curvature in the SY-SZ-plane, as well as a conic constant K and polynomial aspheric coefficients. The curve in the SY-SZ-plane may be defined by:

z = cy² / (1 + √(1 - (1 + K)c²y²)) + α1y² + α2y⁴ + α3y⁶ + α4y⁸ + α5y¹⁰ + ...   (5)

α1, α2, α3, α4, α5, ... denote polynomial aspheric constants. y denotes the horizontal distance of a point from the axis AXO. The z-coordinate denotes the vertical distance of said point from the vertex of the surface, c denotes the curvature, and K denotes the conic constant. The curve of equation (5) is then rotated about the axis AXO at a distance R from the vertex, in order to define the toroidal surface. The distance R may be called e.g. the radius of rotation.

An even asphere surface may be defined by:

z = cr² / (1 + √(1 - (1 + K)c²r²)) + α1r² + α2r⁴ + α3r⁶ + α4r⁸ + α5r¹⁰ + ...   (6)

α1, α2, α3, α4, α5, ... denote polynomial aspheric constants. r denotes the radius, i.e. the horizontal distance of a point from the axis AXO. The z-coordinate denotes the vertical distance of said point from the vertex of the surface, c denotes the curvature, and K denotes the conic constant. An odd asphere surface may be defined by:

z = cr² / (1 + √(1 - (1 + K)c²r²)) + β1r + β2r² + β3r³ + β4r⁴ + β5r⁵ + ...   (7)

β1, β2, β3, β4, β5, ... denote polynomial aspheric constants. r denotes the radius, i.e. the horizontal distance of a point from the axis AXO. The z-coordinate denotes the vertical distance of said point from the vertex of the surface, c denotes the curvature, and K denotes the conic constant. The default value of each polynomial aspheric constant may be zero, unless a non-zero value has been indicated.

In case of an odd asphere, the coefficient (β1, β2, β3, β4, β5) of at least one odd power (e.g. r¹, r³, r⁵) deviates from zero. In case of an even asphere, the coefficients of the odd powers (e.g. r¹, r³, r⁵) are zero. The values shown in the tables have been indicated according to the coordinate system defined in the operating manual of the Zemax software (ZEMAX Optical Design Program, User's Manual, October 8, 2013). The operating manual is provided by Radiant Zemax, LLC, Redmond, USA.
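The even and odd asphere sag formulas (6) and (7) can be sketched numerically as follows; this is an illustrative helper under the coefficient conventions described above, and the function names are hypothetical:

```python
import math

def conic_sag(c, K, r):
    # Base conic term shared by equations (6) and (7).
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + K) * c * c * r * r))

def even_asphere_sag(c, K, r, alphas):
    # Equation (6): conic term plus alpha_i * r**(2*i), even powers only.
    z = conic_sag(c, K, r)
    for i, a in enumerate(alphas, start=1):
        z += a * r ** (2 * i)
    return z

def odd_asphere_sag(c, K, r, betas):
    # Equation (7): conic term plus beta_i * r**i, all integer powers,
    # so at least one odd-power coefficient is non-zero.
    z = conic_sag(c, K, r)
    for i, b in enumerate(betas, start=1):
        z += b * r ** i
    return z
```

With all polynomial coefficients set to zero, both formulas reduce to the conic (standard-surface) sag, matching the default-value convention stated above.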

Fig. 12 shows an example where the imaging device 500 does not need to comprise the beam modifying unit 200 between the input element LNS1 and the aperture stop AS1. In this case, the input element LNS1 may directly provide the intermediate beam B5k. Tables 2.1 to 2.3 show parameters associated with an example 2, where the output beam of the input element LNS1 is directly guided via the aperture stop AS1.

Table 2.1. General parameters of the imaging device 500 of example 2.

Effective F-number Feff                 1:3.8
Upper limit θMAX of elevation angle     +11°
Lower limit θMIN of elevation angle     -11°
Focal length f1                         1.26 mm
Total system height                     20 mm
Outer diameter of input element LNS1    24 mm
Image disc outer radius rMAX            1.6 mm
Image disc inner radius rMIN            0.55 mm

Table 2.2. Characteristic parameters of the surfaces of example 2.

Table 2.3. Coefficients and extra data for defining the shapes of the surfaces of example 2.

The notation E-03 means 10⁻³, E-04 means 10⁻⁴, E-05 means 10⁻⁵, E-06 means 10⁻⁶, E-07 means 10⁻⁷, and E-08 means 10⁻⁸.

The device 500 of example 1 (specified in tables 1.1, 1.2, 1.3) and/or the device of example 2 (specified in tables 2.1, 2.2, 2.3) may be used e.g. when the wavelength of the input beam B0k is in the range of 450 nm to 650 nm. The device 500 of example 1 and/or the device of example 2 may provide a high performance simultaneously for the whole wavelength range from 450 nm to 650 nm. The device 500 of example 1 or 2 may be used e.g. for capturing a color image IMG1 by receiving visible input light.

The device 500 of example 1 or 2 may also be scaled up or scaled down, e.g. according to the size of the image sensor DET1. The optical elements of the device 500 may be selected so that the size of the optical image IMG1 matches the size of the image sensor DET1. An imaging device may have dimensions which are determined e.g. by multiplying the dimensions of example 1 or 2 by a constant value. Said constant value may be called e.g. a scaling-up factor or a scaling-down factor.

Referring to Fig. 13, the image sensor DET1 may comprise a plurality of detector pixels PIX. The detector pixels PIX may be arranged in a two-dimensional rectangular array. An individual pixel PIX may have a width WPIX. The pixel width WPIX may be e.g. in the range of 1 μm to 10 μm. The highest spatial frequency νCUT1 which can be detected by the image sensor DET1 may be called the spatial cut-off frequency νCUT1 of the image sensor DET1. The cut-off frequency νCUT1 may be equal to 0.5/WPIX (= 0.5 line pairs / WPIX). For example, the cut-off frequency νCUT1 may be 71 line pairs / mm when the pixel width WPIX is equal to 7 μm. In an embodiment, the shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at the spatial cut-off frequency νCUT1 may be higher than 50% for each elevation angle θk which is in the range of 0° to +35°, wherein the cut-off frequency νCUT1 is equal to 0.5/WPIX, and the effective F-number Feff of the device 500 may be e.g. in the range of 1.0 to 5.6. The modulation transfer function at the cut-off frequency νCUT1 and at each of said elevation angles θk may be higher than 50% in the radial direction and in the tangential direction of the optical image IMG1. In an embodiment, the performance of the imaging optics 500 may also be evaluated based on the size of the image sensor DET1. The image sensor DET1 may have a diagonal dimension sDET1. A reference spatial frequency νREF may be determined according to the following equation:

νREF = (43.2 mm / sDET1) × 10 line pairs / mm   (8)
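Equation (8) and the sensor cut-off frequency νCUT1 = 0.5/WPIX can be checked with a short sketch (illustrative only; function names are hypothetical):

```python
def reference_frequency_eq8(s_det1_mm):
    # Equation (8): vREF = (43.2 mm / sDET1) * 10 line pairs / mm.
    return 43.2 / s_det1_mm * 10.0

def cutoff_frequency(w_pix_mm):
    # Spatial cut-off frequency of the image sensor: 0.5 line pairs
    # per pixel width (pixel width given in mm).
    return 0.5 / w_pix_mm

# A 5.8 mm sensor diagonal gives vREF of about 74 line pairs/mm, and a
# 7 um pixel gives a cut-off of about 71 line pairs/mm, as in the text.
```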

The shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at the reference spatial frequency νREF may be higher than 40% for each elevation angle θk which is in the range of 0° to +35°, wherein the reference spatial frequency νREF is determined according to the equation (8), and the effective F-number Feff of the device 500 is e.g. in the range of 1.0 to 5.6. The modulation transfer function at the reference spatial frequency νREF and at each of said elevation angles θk may be higher than 40% in the radial direction and in the tangential direction of the optical image IMG1.

For example, the diagonal dimension sDET1 of the sensor may be substantially equal to 5.8 mm. The reference spatial frequency νREF calculated from the diagonal dimension 5.8 mm by using the equation (8) may be substantially equal to 74 line pairs / mm. The curves of Fig. 9c show that the modulation transfer function MTF of the imaging device 500 of example 1 satisfies the condition that the modulation transfer function MTF is greater than 50% at the reference spatial frequency νREF = 74 line pairs / mm, for the elevation angles θk = 0°, θk = 20°, and θk = 35°, in the radial direction, and in the tangential direction of the optical image. Alternatively, the reference spatial frequency νREF may also be determined according to the following equation:

νREF = (100 line pairs / mm) / √(dMAX / mm)   (9)

where dMAX denotes the outer diameter of the image IMG1. Typically, the spatial resolution of the optical image IMG1 does not need to be higher than the resolution determined by the size of the detector pixels. The reference spatial frequency νREF may be determined according to the equation (9) so that the requirements for the spatial resolution of very small images may be more relaxed than in the case of larger images. For example, the reference spatial frequency νREF calculated for the outer diameter dMAX = 2 mm by using the equation (9) may be substantially equal to 71 line pairs / mm. The reference spatial frequency νREF corresponding to an outer diameter dMAX = 3.5 mm may be substantially equal to 53 line pairs / mm. The reference spatial frequency νREF corresponding to an outer diameter dMAX = 10 mm may be substantially equal to 32 line pairs / mm.

The modulation transfer function MTF of the imaging device 500 at the reference spatial frequency νREF may be higher than 40% for each elevation angle θk which is in the range of 0° to +35°, and the reference spatial frequency νREF may be equal to 100 line pairs / mm divided by the square root of the dimensionless outer diameter dMAX/mm of the annular optical image IMG1. The dimensionless outer diameter dMAX/mm is calculated by dividing the outer diameter dMAX of the annular optical image IMG1 by one millimeter. The shapes of the optical surfaces of the input element LNS1 and the diameter dAS1 of the aperture stop AS1 may be selected such that the modulation transfer function MTF of the imaging device 500 at the reference spatial frequency νREF may be higher than 40% for each elevation angle θk which is in the range of 0° to +35°, wherein the reference spatial frequency νREF is determined according to the equation (9), and the effective F-number Feff of the device 500 is e.g. in the range of 1.0 to 5.6. The modulation transfer function at the reference spatial frequency νREF and at each of said elevation angles θk may be higher than 40% in the radial direction and in the tangential direction of the optical image IMG1. The symbol mm means millimeter, i.e. 10⁻³ meters.
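The worked values quoted above for equation (9) can be reproduced with a short sketch (illustrative only; the function name is hypothetical):

```python
import math

def reference_frequency_eq9(d_max_mm):
    # Equation (9): vREF = (100 line pairs/mm) / sqrt(dMAX / mm),
    # where d_max_mm is the dimensionless outer diameter dMAX/mm.
    return 100.0 / math.sqrt(d_max_mm)

# Outer diameters of 2 mm, 3.5 mm and 10 mm give approximately
# 71, 53 and 32 line pairs/mm respectively, matching the text.
```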

For the person skilled in the art, it will be clear that modifications and variations of the devices and the methods according to the present invention are conceivable. The figures are schematic. The particular embodiments described above with reference to the accompanying drawings are illustrative only and are not meant to limit the scope of the invention, which is defined by the appended claims.