
Title:
AN APPARATUS AND A METHOD FOR PRODUCING A DEPTH-MAP
Document Type and Number:
WIPO Patent Application WO/2015/059346
Kind Code:
A1
Abstract:
An apparatus (10) comprising: a first image sensor (106); first optics (104) for the first image sensor (106); a second image sensor (206); second optics (204) for the second image sensor (206), wherein the first optics (104) and the second optics (204) are differently configured to provide different axial chromatic aberration at the respective first image sensor (106) and at the second image sensor (206).

Inventors:
VILERMO MIIKKA (FI)
VÄÄNÄNEN MAURI (FI)
Application Number:
PCT/FI2014/050011
Publication Date:
April 30, 2015
Filing Date:
January 08, 2014
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
G01B11/22; G02B30/00; G06T7/00; H04N9/097; H04N13/02
Domestic Patent References:
WO2013156101A1 (2013-10-24)
Foreign References:
US20110286634A1 (2011-11-24)
US20130215299A1 (2013-08-22)
US20100172020A1 (2010-07-08)
US20110109749A1 (2011-05-12)
US6188514B1 (2001-02-13)
US20120033105A1 (2012-02-09)
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (Virpi Tognetty, Karakaari 7, Espoo, FI)
Claims:
CLAIMS

1. An apparatus comprising:

a first image sensor;

first optics for the first image sensor;

a second image sensor;

second optics for the second image sensor,

wherein the first optics and the second optics are differently configured to provide different axial chromatic aberration at the respective first image sensor and at the second image sensor.

2. An apparatus as claimed in claim 1, wherein the difference in axial chromatic aberration at the respective first image sensor and at the second image sensor is dependent upon distance to imaged objects in an imaged scene.

3. An apparatus as claimed in claim 1 or 2, wherein the axial chromatic aberration of at least one of the first optics or the second optics is dependent upon a distance to imaged objects in an imaged scene.

4. An apparatus as claimed in any preceding claim, wherein the difference in axial chromatic aberration at the respective first image sensor and the second image sensor is dependent upon a distance between a first plane of the first optics and imaged objects in an imaged scene and a distance between a second plane of the second optics and imaged objects in an imaged scene.

5. An apparatus as claimed in any preceding claim, wherein the difference in axial chromatic aberration at the respective first image sensor and the second image sensor is dependent upon a distance between a common plane of the first optics and the second optics and imaged objects in an imaged scene.

6. An apparatus as claimed in any preceding claim comprising: circuitry configured to produce a depth-map using output from the first image sensor and output from the second image sensor.

7. An apparatus as claimed in claim 6, wherein the output from the first image sensor is a first image and the output from the second image sensor is a second image, having the same resolution as the first image and wherein the depth map has the same resolution as the output from the first image sensor and the output from the second image sensor.

8. An apparatus as claimed in any preceding claim comprising: circuitry configured to estimate distance to an imaged object using images of the imaged object at the respective first image sensor and at the second image sensor.

9. An apparatus as claimed in any preceding claim comprising: circuitry configured to compare a parameter for output from the first image sensor with the parameter for output from the second image sensor.

10. An apparatus as claimed in any preceding claim comprising: circuitry configured to match an output, from a color pixel of the first image sensor, having an associated first value of a parameter with an output, from a color pixel of the second image sensor, having an associated second value of the parameter and use the first value and the second value of the parameter to estimate a distance of an optical object corresponding to the matched color pixels from the apparatus.

11. An apparatus as claimed in claim 10 comprising: circuitry configured to determine a difference between the first value and the second value of the parameter and use the difference to determine a distance to the imaged object.

12. An apparatus as claimed in any of claims 9 to 11, wherein the parameter is sharpness.

13. An apparatus as claimed in claim 12, wherein the circuitry is configured to:

determine a first sharpness value for a set of pixels in the first image;

determine a second sharpness value for a corresponding set of pixels in the second image; compare the first sharpness value and the second sharpness value;

use the result of the comparison to estimate distance to the image object.

14. An apparatus as claimed in any of claims 9 to 11, wherein the parameter is a Fourier transform of at least a portion of an image.

15. An apparatus as claimed in claim 14, wherein the circuitry is configured to:

determine a first Fourier transform for a set of pixels in the first image;

determine a second Fourier transform for a corresponding set of pixels in the second image;

divide the first Fourier transform by the second Fourier transform; and

use the result of the division to estimate distance to the image object.

16. An apparatus as claimed in any of claims 9 to 11, wherein the parameter is color.

17. An apparatus as claimed in any of claims 9 to 11, wherein the parameter is an auto-focus setting for an auto-focused image.

18. An apparatus as claimed in any of claims 9 to 11, wherein the circuitry is configured to compare, for each color channel, a parameter for images of the imaged object at the respective first image sensor and at the second image sensor.

19. An apparatus as claimed in any of claims 9 to 18 comprising: circuitry configured to determine whether a first condition is satisfied, and configured to enable use of image disparity between the first image sensor and the second image sensor to determine a distance to an imaged object, when a first condition is not satisfied.

20. An apparatus as claimed in any of claims 9 to 19 comprising: circuitry configured to determine whether a first condition is satisfied, and configured to enable use of the first value and the second value of the parameter to determine a distance to an imaged object, when the first condition is satisfied.

21. An apparatus as claimed in claim 19 or 20, wherein the first condition requires that a difference between the first value and the second value of the parameter is greater than a threshold sensitivity of the image sensors.

22. An apparatus as claimed in claim 19, 20 or 21, wherein the first condition requires that the distance to the imaged object is beyond a threshold distance.

23. An apparatus as claimed in claim 22, wherein the first optics and the second optics are differently configured to have significantly different axial chromatic aberration for objects within the threshold distance.

24. An apparatus as claimed in claim 22, wherein the first optics and the second optics are differently configured to have similar axial chromatic aberration for objects within the threshold distance.

25. An apparatus as claimed in any of claims 6 to 24, further comprising circuitry configured to enable correction of images to correct for the axial chromatic aberration.

26. An apparatus as claimed in any preceding claim, embodied in a camera module for an electronic device.

27. An apparatus as claimed in any preceding claim, further comprising circuitry configured to change at least one of the first optics and the second optics to adapt focal length, chromatic aberration, or where an optical axis of the optics meets the respective image sensor.

28. An apparatus as claimed in any of claims 6 to 27, wherein the circuitry comprises at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, control at least partially operation of the circuitry.

29. An apparatus as claimed in any preceding claim, wherein the first image sensor is a single image sensor comprising in excess of 10 million pixels and the second image sensor is a single image sensor comprising the same number of pixels as the first image sensor.

30. A method comprising:

using a difference between a first value of a parameter measured in respect of a first pixel of a first image sensor imaging an object via first optics and a second value of the parameter measured in respect of a second pixel of a second image sensor imaging the same object via second optics, to estimate a distance to the imaged object in a manner dependent upon a difference in axial chromatic aberration between the first optics and the second optics.

31. A method as claimed in claim 30, wherein the difference is dependent upon known axial chromatic aberration introduced by the first and second optics.

32. A method as claimed in claim 30 or 31, wherein one or both of the measured first value of the parameter and the measured second value of the parameter suffers from axial chromatic aberration and wherein there is a difference in axial chromatic aberration between the first image sensor and the second image sensor.

33. A method as claimed in claim 30, 31 or 32 comprising: using image disparity between the first image sensor and the second image sensor to determine a distance to an imaged object, when the distance to the imaged object is within a threshold distance.

34. A method as claimed in claim 33, wherein using image disparity to determine a distance comprises: matching at least some pixels output from the first image sensor at a first position within the first image sensor with at least some pixels output from the second image sensor at a second position within the second image sensor and using the first position and the second position to estimate a distance of an imaged object corresponding to the matched pixels from the apparatus.

35. A method as claimed in any of claims 30 to 34 additionally comprising: using a difference between a first value of a parameter measured in respect of a first pixel of a first image sensor imaging an object and a second value of the parameter measured in respect of a second pixel of a second image sensor imaging the same object, to determine a distance to the imaged object, when the distance to the imaged object is within a threshold distance.

36. A method as claimed in any of claims 30 to 35 comprising: using a difference between a first value of a parameter measured at a first pixel of a first image sensor imaging an object and a second value of the parameter measured at a second pixel of a second image sensor imaging the same object, to determine a distance to the imaged object, when the distance to the imaged object is outside a threshold distance.

37. A method as claimed in claim 35 or 36, wherein using a difference to determine a distance comprises matching an output, from a color pixel of the first image sensor, having a first value of the parameter with an output, from a color pixel of the second image sensor, having a second value of the parameter and using the first value and the second value of the parameter to estimate a distance of an imaged object corresponding to the matched color pixels from the apparatus.

38. A method as claimed in any of claims 30 to 37 comprising using the determined distance to compensate one of the first image and the second image to form a compensated image.

39. A method as claimed in any of claims 30 to 38, wherein the parameter is image sharpness.

40. A method as claimed in any of claims 30 to 38, wherein the parameter is color.

41. A method as claimed in any of claims 30 to 38, wherein the parameter is an image Fourier transform.

42. A method as claimed in any of claims 30 to 38, wherein the parameter is an auto-focus setting for an auto-focused image.

43. A stereoscopic method of producing a depth-map comprising:

for each pixel of an imaged scene, processing a difference between a first parameter value measured in respect of a pixel by a first image sensor via first optics and a second parameter value measured in respect of the pixel by a second image sensor via second optics, to determine a distance for the pixel in a manner dependent upon a difference in axial chromatic aberration between the first optics and the second optics.

44. An apparatus comprising means for performing the method of any of claims 30 to 42.

45. A computer program which when run on a processor enables the processor to control the performance of the method as claimed in any of claims 30 to 42.

46. An apparatus comprising:

at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, enable performance of the method as claimed in any of claims 30 to 42.

Description:
TITLE

An apparatus and a method for producing a depth-map.

TECHNOLOGICAL FIELD

Embodiments of the present invention relate to an apparatus and a method for producing a depth-map.

BACKGROUND

It is possible to produce a depth-map for a scene that indicates a depth to one or more objects in the scene by processing stereoscopic images. Two images are recorded at offset positions at different image sensors. Each image sensor records the scene from a different perspective. The apparent offset in position of an object between the images caused by the parallax effect (disparity) may be used to estimate a distance to the object so long as the distance is not too large compared to the offset between the image sensors.

BRIEF SUMMARY

According to various, but not necessarily all, embodiments of the invention there is provided an apparatus comprising: a first image sensor; first optics for the first image sensor; a second image sensor; second optics for the second image sensor, wherein the first optics and the second optics are differently configured to provide different axial chromatic aberration at the respective first image sensor and at the second image sensor.

According to various, but not necessarily all, embodiments of the invention there is provided a method comprising: using a difference between a first value of a parameter measured in respect of a first pixel of a first image sensor imaging an object via first optics and a second value of the parameter measured in respect of a second pixel of a second image sensor imaging the same object via second optics, to estimate a distance to the imaged object in a manner dependent upon a difference in axial chromatic aberration between the first optics and the second optics.

According to various, but not necessarily all, embodiments of the invention there is provided a stereoscopic method of producing a depth-map comprising: for each pixel of an imaged scene, processing a difference between a first parameter value measured in respect of a pixel by a first image sensor via first optics and a second parameter value measured in respect of the pixel by a second image sensor via second optics, to determine a distance for the pixel in a manner dependent upon a difference in axial chromatic aberration between the first optics and the second optics.

BRIEF DESCRIPTION

For a better understanding of various examples of embodiments of the present invention reference will now be made by way of example only to the accompanying drawings in which:

Figure 1 illustrates an example of an imaging apparatus for which first optics and second optics are differently configured to provide different axial chromatic aberration at a first image sensor and at a second image sensor;

Figures 2A and 2B illustrate the concept of axial chromatic aberration;

Figure 3 illustrates an example of an imaging apparatus for which first optics and second optics are differently configured to provide different axial chromatic aberration at a first image sensor and at a second image sensor;

Figure 4 illustrates an example of circuitry comprising distance estimation circuitry;

Figure 5 illustrates a method that uses sharpness comparison of the output from the first image sensor and the output from the second image sensor to determine distance;

Figure 6 illustrates an example of a method which may also be performed by the distance estimation circuitry;

Figure 7 illustrates an example of circuitry as previously described additionally comprising color correction circuitry;

Figure 8 illustrates an example of circuitry additionally comprising optics control circuitry;

Figure 9 illustrates an example of an electronic device comprising a camera module;

Figure 10A illustrates an example of circuitry comprising a processor and memory storing a computer program; and

Figure 10B illustrates a delivery mechanism for a computer program.

DETAILED DESCRIPTION

Figure 1 and subsequent Figures illustrate examples of an imaging apparatus 10 comprising a first image sensor 106; first optics 104 for the first image sensor 106; a second image sensor 206; and second optics 204 for the second image sensor 206, wherein the first optics 104 and the second optics 204 are differently configured to provide different axial chromatic aberration at the respective first image sensor 106 and at the second image sensor 206.

An object 2 in a scene is imaged separately in the first image sensor 106 and in the second image sensor 206.

Figures 2A and 2B illustrate the concept of axial chromatic aberration. Axial chromatic aberration occurs when optics 4 focus different colors of light in different focal planes.

In Figure 2A, the optics 4 does not introduce any axial chromatic aberration. The object 2 lying in the object plane OP is imaged as an imaged object 2' to the focal plane FP of the optics 4. There is no axial chromatic aberration. In computational optics, where the optics 4 is considered to have a response function F, the image I and object O are related by I = F * O, where * is a convolution. The function F is not dependent upon the wavelength of light but is dependent upon the object distance. The object distance a, the image distance b, and the focal length f of the lens may be related in accordance with the lens equation 1/f = 1/a + 1/b.

In Figure 2B, the light 3 from the object 2 in the object plane OP is focussed by the optics 4 into different focal planes F1, F2, F3. A first color of light is focussed into the focal plane F1, a second color of light is focussed in the focal plane F2 and a third color of light is focussed in the focal plane F3. In computational optics, where the optics 4 is considered to have a response function F, the image I and object O are related by I = F * O, where * is a convolution. The function F is dependent upon the wavelength of light λ and the object distance a. The object distance a, the image distance b, and the focal length f of the lens may be related in accordance with the lens equation 1/f(λ) = 1/a + 1/b.

The wavelength-dependent focal length arising from chromatic aberration smears an object pixel over wavelength-dependent circles of confusion. The sharpness of the image I, at a particular image distance b, is therefore dependent upon the wavelength of the light.
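By way of illustration only (this sketch is not part of the application), the following Python fragment evaluates the lens equation above for a few assumed per-color focal lengths and estimates the resulting wavelength-dependent circle of confusion at a fixed sensor plane. The focal lengths, aperture and object distance are assumed values chosen purely for the example.

```python
# Illustrative sketch only: thin-lens relation 1/f(lambda) = 1/a + 1/b,
# rearranged to b = 1 / (1/f - 1/a), plus an approximate circle-of-confusion
# diameter at a fixed sensor plane. All numeric values are assumptions.

def image_distance(f_lambda: float, a: float) -> float:
    """Image distance b for focal length f(lambda) and object distance a."""
    return 1.0 / (1.0 / f_lambda - 1.0 / a)

def blur_diameter(f_lambda: float, a: float, sensor_b: float, aperture: float) -> float:
    """Approximate circle-of-confusion diameter when the sensor sits at distance sensor_b."""
    b = image_distance(f_lambda, a)
    return aperture * abs(sensor_b - b) / b

# Assumed optics: the focal length varies slightly with color (axial chromatic aberration).
focal = {"red": 4.02e-3, "green": 4.00e-3, "blue": 3.98e-3}   # metres
a = 2.0                                                        # object distance (metres)
sensor_b = image_distance(focal["green"], a)                   # sensor placed in focus for green
for color, f in focal.items():
    print(color, blur_diameter(f, a, sensor_b, aperture=1.4e-3))
```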

Figure 3 illustrates an example of the imaging apparatus 10 illustrated in Figure 1. The optical apparatus 10 comprises a first image sensor 106; first optics 104 for the image sensor 106; a second image sensor 206; and second optics 204 for the second image sensor 206. The first optics 104 and the second optics 204 are differently configured to provide different axial chromatic aberration at the respective first image sensor 106 and the second image sensor 206.

The first optics 104 has a first optical axis 110. The first optical axis 110 is an imaginary straight line that defines a path along which light propagates through the first optics 104 without refraction. The first optical axis 110 may pass through a centre of curvature of each optic surface within the first optics 104, and may coincide with the axis of rotational symmetry.

Likewise, the second optics 204 has a second optical axis 210. The second optical axis 210 is an imaginary straight line that defines a path along which light propagates through the second optics 204 without refraction. The second optical axis 210 may pass through a centre of curvature of each optic surface within the second optics 204, and may coincide with the axis of rotational symmetry. In this example the first optical axis 110 and the second optical axis 210 are displaced from each other. The first optical axis 110 and the second optical axis 210 are parallel and are displaced by an offset 100.

The first image sensor 106 is aligned with the first optical axis 110. The second image sensor 206 is aligned with the second optical axis 210. There is therefore an offset displacement 100 between the first image sensor 106 and the second image sensor 206.

In the illustrated example the first image sensor 106 is a single image sensor. In some, but not necessarily all, examples it may comprise in excess of 10 million pixels. It may, for example, comprise 40 million or more pixels. A pixel in this document comprises a red, a green and a blue sub-pixel.

Likewise, the second image sensor 206 is a single image sensor in this example. It may comprise in excess of 10 million pixels. It may, for example, comprise 40 million or more pixels where each pixel comprises a red, a green and a blue sub-pixel. The resolution of the first image sensor 106 and the resolution of the second image sensor 206 may be the same. That is, they may have the same number of pixels arranged in the same size array with the same size of pixel.

The first optics 104 may be a lens system comprising one or more lenses in this example. Each lens may have optically symmetric characteristics about a common first optical axis 1 10. In some examples, the first optics 104 comprises a single lens. However, in other examples, the first optics may comprise a combination of multiple lenses. The lens or lenses of the first optics 104 are chosen so that the first optics 104 has a particular axial chromatic aberration that is dependent upon distance.

The second optics 204 may be a lens system comprising one or more lenses. Each lens may have optically symmetric characteristics about a common second optical axis 210. In some examples, the second optics 204 comprises a single lens. However, in other examples, the second optics 204 may comprise a combination of multiple lenses. The lens or lenses of the second optics 204 are selected so that the second optics 204 has a particular axial chromatic aberration that is dependent upon distance.

An object 2 may be at a variable distance d from a common plane Y occupied by the first optics 104 and the second optics 204. The imaged object 2 lies in an object plane OP. The variable distance between the object plane OP and the plane of the optics Y is a distance d.

In general a difference in axial chromatic aberration at the respective first image sensor 106 and the second image sensor 206 is dependent upon a depth distance between a first plane of the first image sensor 106 and an imaged object 2 in an image scene and a depth distance d between a second plane of the second image sensor 206 and the imaged object 2 in an image scene. In the example of Figure 3, the first and second planes are coincident.

The consequence of axial chromatic aberration is a wavelength dependent blurring/sharpness. The blurring/sharpness of an imaged object is therefore dependent upon wavelength of light and the distance d to the imaged object.

Therefore, by providing for differential analysis of the consequences of axial chromatic aberration at the first image sensor 106 and the second image sensor 206, one can estimate the distance d of the imaged object from the apparatus 10.

The consequences of axial chromatic aberration at the first image sensor 106 may be represented by a parameter P for the first image.

The consequences of axial chromatic aberration at the second image sensor 206 may be represented by the parameter P for the second image. The difference in the parameter P due to axial chromatic aberration at the first image sensor 106 and at the second image sensor 206 may be greater than a threshold sensitivity s of the image sensors beyond a threshold distance dT. The threshold distance dT is greater than the focal distance F of the first and second optics. The threshold distance dT may be greater than 3 metres.

Figure 4 illustrates an example of circuitry 50 comprising distance estimation circuitry 52 configured to produce a depth-map 53 using output 107 from the first image sensor 106 and output 207 from the second image sensor 206. The circuitry 50 may be part of the apparatus 10 or separate from the apparatus 10.

The distance estimation circuitry 52 is configured to estimate a distance to an image object 2 corresponding to a single pixel using output from the first image sensor 106 and output from the second image sensor 206.

When the first image sensor and the second image sensor have the same resolution, the produced depth-map 53 may also have the same resolution. That is, the depth-map 53 may be a pixel-by-pixel depth-map.

Figure 5 illustrates a method 60 that uses differential analysis on the output 107 from the first image sensor 106 and the output 207 from the second image sensor 206. The method 60 may be performed by distance estimation circuitry 52. In more detail, at block 61 of the method 60, pixels of the first image sensor 106 and pixels of the second image sensor 206 are matched. A color pixel of the first image sensor 106 and a color pixel of the second image sensor 206 are matched if they image the same optical object. That is, the pair of pixels correspond and the first pixel of the first image sensor 106 and the second pixel of the second image sensor 206 relate to the same pixel of the image scene.

Next, at block 65, the method 60 uses a difference between a first value of a parameter P1 measured in relation to a first pixel of the first image sensor 106 imaging an object and a second value of the parameter P2 measured in relation to a matched second pixel of the second image sensor 206 imaging the same object, to determine a distance to the imaged object. This is repeated for each matched pixel pair to produce a depth-map 53 at block 66.

The block 65 comprises in more detail, a series of blocks 62, 63, 64 which are carried out for each matched pair of pixels.

At block 62, the method 60 determines a first value of the parameter P1 in relation to a first pixel for the first image sensor 106. Then the method 60 determines a second value of the parameter P2 in relation to a second pixel for the second image sensor 206, where the first and second pixels form a matched pair.

Then at block 63, the method 60 calculates a difference D12 between the first value of the parameter P1 and the second value of the parameter P2 for the corresponding pair of matched pixels. The difference may be expressed in different ways. For example it may be expressed as an arithmetic subtraction or as a quotient.

Next at block 64, the method 60 uses the difference D12 to look up a distance d. This is repeated for matched pixel pairs so that a distance d is obtained for multiple pixels. This is output at block 66 as a depth-map 53.

The circuitry 50 may store in a memory, such as memory 82 in Figure 10A, a data structure that maps differences D12 to axial distances d. The data structure may, for example, be a lookup table.

This difference D12 may arise because one of the first optics 104 and the second optics 204 does not introduce any axial chromatic aberration but the other of the first optics 104 and the second optics 204 introduces axial chromatic aberration. Alternatively, this difference D12 may occur because both the first optics 104 and the second optics 204 introduce axial chromatic aberration.
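As a non-limiting illustration of blocks 61 to 66 and of the lookup table just described, the following Python sketch estimates a pixel-by-pixel depth-map from registered outputs of the two image sensors. The parameter function, the lookup-table contents and the nearest-neighbour lookup are assumptions introduced only for the example.

```python
import numpy as np

def depth_map(image1: np.ndarray, image2: np.ndarray,
              parameter, diff_to_distance: dict) -> np.ndarray:
    """Sketch of method 60: per-pixel parameter difference D12 mapped to distance d.

    image1, image2   -- matched (registered) images of equal resolution (block 61)
    parameter        -- function returning a per-pixel parameter value P for an image
    diff_to_distance -- lookup table mapping sampled differences D12 to distances d
    """
    p1 = parameter(image1)          # block 62: first value of the parameter, P1
    p2 = parameter(image2)          # block 62: second value of the parameter, P2
    d12 = p1 - p2                   # block 63: difference D12, here an arithmetic subtraction

    # Block 64: for every pixel, look up the distance of the nearest tabulated difference.
    keys = np.array(sorted(diff_to_distance))
    values = np.array([diff_to_distance[k] for k in keys])
    nearest = np.abs(d12[..., None] - keys).argmin(axis=-1)
    return values[nearest]          # block 66: the depth-map, one distance per pixel
```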

Various examples of the parameter P will now be provided.

In one example, but not necessarily all examples, the parameter P is pixel color C. A particular color C may be defined as a point in color space. If the color space is spanned by N color vectors, then a point in the color space may be defined as a distance along the spanning color vectors. Thus if a color space is defined by red (R), green (G) and blue (B) color vectors, a color C may be defined by a red intensity, a green intensity and a blue intensity (RGB). As a consequence of axial chromatic aberration, a pixel has a point in the color space that is dependent upon an axial distance of the imaged object from the apparatus 10; changing the axial distance d changes the location of the point in color space. The difference D12 is then a difference between a first color of the first pixel for the first image sensor 106 and the color of the matched second pixel for the second image sensor 206. However, this differential analysis will only provide an accurate result when the matched pixels are within an area of pixels of the same color, such that the color of the first and second pixels is dependent only upon blurred images of the same colored pixels.

In another example, but not necessarily all examples, the parameter P is sharpness. In some, but not necessarily all examples, sharpness for a pixel may be calculated as a squared sum of the local gradients in the neighbourhood of the pixel divided by the number of pixels in the neighbourhood. The sharpness may be calculated separately for each color channel.
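One possible reading of this sharpness measure, sketched in Python for a single color channel, is the mean of the squared local gradient magnitudes over a small neighbourhood. The gradient operator and the neighbourhood size are illustrative choices rather than values given in the application.

```python
import numpy as np

def sharpness_map(channel: np.ndarray, radius: int = 2) -> np.ndarray:
    """Per-pixel sharpness of one color channel: squared local gradients summed
    over a (2*radius+1)^2 neighbourhood and divided by the number of pixels in it."""
    gy, gx = np.gradient(channel.astype(float))   # local gradients
    g2 = gx ** 2 + gy ** 2                        # squared gradient magnitude
    k = 2 * radius + 1
    padded = np.pad(g2, radius, mode="edge")
    total = np.zeros_like(g2)
    for dy in range(k):                           # box sum over the neighbourhood
        for dx in range(k):
            total += padded[dy:dy + g2.shape[0], dx:dx + g2.shape[1]]
    return total / (k * k)
```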

In some cases the sharpness estimation from one color component may be more accurate than from another color component. For example, the axial chromatic aberration difference may be greater in the red color channel compared to other color channels. It may then be beneficial to use the depth calculated from the red color channel for the other color channels (green and blue). In such a case the depth map is not a pixel-by-pixel depth map. Also, depth map estimation is at its most accurate in the area between the points where the color components reach their maximum sharpness. Two lenses may have more maximum-sharpness points for the color components than a single lens. Therefore, the depth map estimation can be made more accurate than with a single lens.

In another example, but not necessarily all examples, the parameter P is the Fourier transform of a sample of the image comprising the matched pixel.

In this example, one uses computational optics, and assumes that the second optics 204 applies a known depth-dependent filter X(d) compared to the first optics 104. The relationship between the first image I1 and the second image I2 will therefore be I2 = I1 * X(d). For a corresponding sample of the images, X(d) can be determined using Fourier analysis:

X(d) = F^-1( F(I2) / F(I1) )

d can then be determined from a lookup table.

A first Fourier transform F(I1) for a set of pixels in the first image is determined. A second Fourier transform F(I2) for a corresponding set of pixels in the second image is determined. Then a quotient is obtained by dividing the second Fourier transform by the first Fourier transform. Then the inverse Fourier transform of the quotient is calculated and the result is used to estimate distance to the image object.
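Under the stated model I2 = I1 * X(d), this comparison can be sketched in Python as below. The regularisation term and the reduction of X(d) to a single scalar for the lookup table are assumptions introduced purely for the illustration.

```python
import numpy as np

def estimate_filter(patch1: np.ndarray, patch2: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Estimate the depth-dependent filter X(d), assuming I2 = I1 * X(d) for this patch."""
    f1 = np.fft.fft2(patch1)                 # first Fourier transform F(I1)
    f2 = np.fft.fft2(patch2)                 # second Fourier transform F(I2)
    quotient = f2 / (f1 + eps)               # divide F(I2) by F(I1), regularised
    return np.real(np.fft.ifft2(quotient))   # inverse transform recovers X(d)

def estimate_distance(patch1: np.ndarray, patch2: np.ndarray,
                      width_to_distance: dict) -> float:
    """Map an illustrative width measure of X(d) to a distance d via a lookup table."""
    x = estimate_filter(patch1, patch2)
    width = float((np.abs(x) > 0.1 * np.abs(x).max()).sum())  # assumed scalar summary
    key = min(width_to_distance, key=lambda w: abs(w - width))
    return width_to_distance[key]
```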

In the above examples of parameter P, the focus settings for the first and second optics remain fixed. In another example, but not necessarily all examples, autofocus is used for one or both of the first and second optics (in a particular color channel). In this example, the parameter P may be the autofocus settings for the respective first and second optics in one or more color channels. A sharpness value, for example as defined above, may be used for automatic focusing.

Figure 6 illustrates an example of a method 70 which may also be performed by the distance estimation circuitry 52.

At block 71, it is determined whether a first condition is satisfied or not.

The first condition may for example require that a difference between the first value of the parameter P1 and the second value of the parameter P2 is greater than a threshold sensitivity of the image sensors. This condition may be applied on a pixel-by-pixel basis. Alternatively or in addition, the first condition may require that the distance to the imaged object 2 is beyond a threshold distance dT.

If the first condition is not satisfied, then the method 70 moves to block 72, which enables distance estimation in the near field. At block 72, block 73 and/or block 74 is performed.

If the first condition is satisfied, then the method moves to block 75 for distance estimation in the far field. At block 75, distance estimation is performed using block 76.

At block 74, the method 70 uses image disparity between the first image sensor 106 and the second image sensor 206 to determine a distance d to an imaged object 2. Using image disparity to determine a distance may comprise matching multiple pixels output from the first image sensor 106 at a first position within the first image sensor with multiple pixels output from the second image sensor 206 at a second position within the second image sensor, and using the first position and the second position to estimate a distance, from the apparatus 10, of an imaged object 2 corresponding to the matched pixels.

At block 73, the method 70 uses a difference between a first value of the parameter P1 measured in respect of a first pixel of a first image sensor 106 imaging an object 2 and a second value of the parameter P2 measured in respect of a second pixel of a second image sensor 206 imaging the same object 2 to determine a distance d to the imaged object 2. This is repeated for matching pairs of pixels.

At block 76, the method 70 may comprise using a difference between a first value of the parameter P1 measured in respect of a first pixel of a first image sensor 106 imaging an object 2 and a second value of the parameter P2 measured in respect of a second pixel of a second image sensor 206 imaging the same object 2 to determine a distance d to the imaged object 2. This is repeated for matching pairs of pixels.

Using a parameter difference to determine a distance d, in either of blocks 73 or 76, comprises matching an output from a color pixel of the first image sensor 106, having a first parameter value P1, with an output from a color pixel of the second image sensor 206, having a second parameter value P2, and using the first parameter value P1 and the second parameter value P2 to estimate a distance d of an optical object corresponding to the matched color pixels from the apparatus. A suitable method has been described in detail in relation to Figure 5.
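For a single matched pixel pair, the branching of Figure 6 might be sketched as follows. The helper arguments and the stereo triangulation relation distance = focal length * baseline / disparity used for the near-field branch are standard assumptions made for the illustration; they are not taken from the application.

```python
def estimate_pixel_distance(p1: float, p2: float, disparity_px: float,
                            sensitivity: float, focal_length: float,
                            baseline: float, pixel_pitch: float,
                            diff_to_distance: dict) -> float:
    """Sketch of method 70 for one matched pixel pair."""
    d12 = p1 - p2
    if abs(d12) > sensitivity:
        # First condition satisfied (blocks 75/76, far field): the parameter
        # difference is informative, so look it up in the tabulated differences.
        key = min(diff_to_distance, key=lambda k: abs(k - d12))
        return diff_to_distance[key]
    # First condition not satisfied (blocks 72/74, near field): fall back to
    # image disparity between the two sensors (standard stereo triangulation).
    return focal_length * baseline / (disparity_px * pixel_pitch)
```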

In some cases one color component in the two lenses may have the same axial chromatic aberration. The color component can be different in different lenses. For example, the red component in lens 1 and the blue component in lens 2 may have the same axial chromatic aberration. This component (or components) may be used to align the two images or for creating a depth map for a part of the image using disparity (block 74). The two remaining color components are then used for estimating the depth map using their parameter difference (block 73).

The first optics 104 and the second optics 204 may be differently configured to have similar axial chromatic aberration for objects within the threshold distance dT and significantly different axial chromatic aberration for objects beyond the threshold distance dT. Consequently, up to and including the threshold distance dT, the distance of an object is determined via block 72 of Figure 6, whereas beyond the threshold distance dT the distance is determined via block 75 of Figure 6.

Alternatively, the first optics 104 and the second optics 204 may be differently configured to have significantly different axial chromatic aberration for objects 2 within the threshold distance dT. The threshold distance may be determined by the sensitivity of the image sensors. In the near field, the distance to the object may be determined using block 73 and/or block 74 of Figure 6.

Figure 7 illustrates an example of circuitry 50 as previously described additionally comprising color correction circuitry 54. The color correction circuitry 54 is configured to use the depth-map 53 to compensate the color of the less well corrected image by replacing it pixel by pixel with the color of the better corrected image.
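A minimal sketch of this replacement step, assuming the two images have already been brought into pixel correspondence (for example using the depth-map 53 and the sensor offset), might be:

```python
import numpy as np

def color_correct(less_corrected: np.ndarray, better_corrected: np.ndarray,
                  correspondence: np.ndarray) -> np.ndarray:
    """Replace each pixel of the less well corrected image with the color of its
    corresponding pixel in the better corrected image.

    correspondence -- assumed (H, W, 2) array of integer pixel coordinates into the
                      better corrected image, e.g. derived from the depth-map and offset.
    """
    out = less_corrected.copy()
    ys, xs = correspondence[..., 0], correspondence[..., 1]
    out[:, :] = better_corrected[ys, xs]    # pixel-by-pixel color replacement
    return out
```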

Figure 8 illustrates an example of circuitry 50 as previously described additionally comprising optics control circuitry 56. The optics control circuitry 56 is configured to change one or more of the first optics 104 and the second optics 204 to adapt their focal length, axial chromatic aberration, or where an optical axis 110, 210 of the optics 104, 204 meets the respective image sensor 106, 206.

Figure 9 illustrates an example of an electronic device 60 comprising a camera module 62. The optical apparatus 10 is embodied in the camera module 62.

Implementation of the circuitry 50 can be in hardware alone (a circuit, a processor...), have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware).

The circuitry 50 may be implemented using instructions that enable hardware functionality, for example, by using executable computer program instructions in a general-purpose or special-purpose processor that may be stored on a computer readable storage medium (disk, memory etc) to be executed by such a processor.

Figure 10A illustrates an example of circuitry 50. The circuitry 50 comprises at least one processor 80; and at least one memory 82 including computer program code 84, the at least one memory 82 and the computer program code 84 configured to, with the at least one processor 80, control at least partially operation of the circuitry 50 as described above.

The processor 80 and memory 82 are operationally coupled and any number or combination of intervening elements can exist (including no intervening elements).

The processor 80 is configured to read from and write to the memory 82. The processor 80 may also comprise an output interface via which data and/or commands are output by the processor 80 and an input interface via which data and/or commands are input to the processor 80.

The memory 82 stores a computer program 84 comprising computer program instructions that control the operation of the apparatus 10 when loaded into the processor 80. The computer program instructions 84 provide the logic and routines that enable the apparatus to perform the methods illustrated in and/or described in relation to Figs 1 to 12. The processor 80 by reading the memory 82 is able to load and execute the computer program 84.

The apparatus 10 in this example therefore comprises: at least one processor 80; and at least one memory 82 including computer program code 84, the at least one memory 82 and the computer program code 84 configured to, with the at least one processor 80, cause the apparatus 10 at least to perform: using a difference between a first parameter value P1 measured at a first pixel of a first image sensor 106 imaging an object 2 and a second parameter value P2 measured at a second pixel of a second image sensor 206 imaging the same object, to determine a distance d to the imaged object 2.

The computer program 84 may arrive at the apparatus 10 via any suitable delivery mechanism 86, as illustrated in Figure 10B. The delivery mechanism may be, for example, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a compact disc read-only memory (CD-ROM) or digital versatile disc (DVD), an article of manufacture that tangibly embodies the computer program 84. The delivery mechanism may be a signal configured to reliably transfer the computer program 84. The apparatus 10 may propagate or transmit the computer program 84 as a computer data signal.

Although the memory 82 is illustrated as a single component it may be implemented as one or more separate components some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.

References to 'computer-readable storage medium', 'computer program product', 'tangibly embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.

As used in this application, the term 'circuitry' refers to all of the following:

(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and

(b) to combinations of circuits and software (and/or firmware), such as (as applicable): (i) to a combination of processor(s) or (ii) to portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and

(c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.

It will be appreciated that the foregoing description describes and enables a stereoscopic method of producing a depth-map 53 comprising:

for each pixel of an imaged scene, processing a difference between a first parameter value P1 measured for the pixel by a first image sensor 106 and a second parameter value P2 measured for the pixel by a second image sensor 206 to determine a distance d for the pixel.

In the foregoing description, the first image sensor 106 and the second image sensor 206 have been described as separate sensors.

The image sensors 106, 206 may in some examples be physically separate and independent. The image sensors 106, 206 in other examples may be functionally independent but not physically independent. For example, different portions of a single image sensor may be used for the first image sensor 106 and the second image sensor 206. For example, the same or overlapping portions of a single image sensor may be used at different times as the first image sensor 106 and as the second image sensor 206.

The imaging apparatus 10 may, for example, be an electronic device or a module for incorporation within an electronic device. Examples of electronic devices include dedicated cameras, devices with camera functionality such as mobile cellular telephones or personal digital assistants etc.

As used here 'module' refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.

The blocks illustrated in Figs 6 and 7 may represent steps in a method and/or sections of code in the computer program 84. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.

Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.

Features described in the preceding description may be used in combinations other than the combinations explicitly described.

Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.

Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.
