Title:
DETERMINING AN INTERSECTION LOCATION OF AN OPTICAL AXIS OF A LENS WITH A CAMERA SENSOR
Document Type and Number:
WIPO Patent Application WO/2018/087424
Kind Code:
A1
Abstract:
This specification describes a method comprising: based on a plurality of images of at least one displayed straight line captured by an image capture device (10) having at least one lens (20) and a sensor (50), the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines. The lens may be a fisheye lens. The camera module may include an optical system with a combination of one or more lenses and mirrors.

Inventors:
BILCU RADU (FI)
SCHRADER MARTIN (FI)
BALDWIN ANDREW (FI)
Application Number:
PCT/FI2017/050755
Publication Date:
May 17, 2018
Filing Date:
November 02, 2017
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
G06T7/80; G01M11/02; G02B7/02; G02B27/00; G02B27/62; G03B5/00; G03B43/00; G06T5/00; G06T5/50; G06T7/00; H04N17/00
Foreign References:
US6002525A1999-12-14
US20050036706A12005-02-17
US20050018175A12005-01-27
US20090059041A12009-03-05
US20150146048A12015-05-28
Other References:
BRITO, J. H. ET AL.: "Radial Distortion Self-Calibration", IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 23 June 2013 (2013-06-23), PORTLAND, OR, USA, pages 1368 - 1375, XP032493036
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims

1. A method comprising:

based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

2. A method according to claim 1, comprising fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.

3. A method according to claim 2, comprising determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomial.

4. A method according to any preceding claim, further comprising checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line.

5. A method according to claim 4, wherein checking the curvature of the straight line comprises fitting a second order polynomial to the line in each captured image.

6. A method according to any of claims 4 to 5, further comprising controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.

7. A method according to any of claims 4 to 6, comprising performing a half-interval search of displayed lines at half intervals while the number of pixels between the left and right hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.

8. A method according to claim 7, wherein when first and second predetermined thresholds are reached, controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search.

9. A method according to any preceding claim, wherein the displayed straight lines comprise perpendicular straight lines.

10. A method according to claim 9, wherein the displayed lines comprise horizontal and vertical straight lines.

11. A method according to any preceding claim, wherein the displayed straight lines each comprise a borderline between two different coloured areas of a displayed pattern, the borderline demarcating the two different coloured areas.

12. A method according to any preceding claim, further comprising capturing images of a displayed straight line respectively located on a display at different quadrants of the display, e.g. a left hand side, right hand side, top, and bottom of the display.

13. A method according to any preceding claim, comprising controlling a display to display the at least one straight line.

14. A method according to any preceding claim, comprising determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern, and a determined centrepoint of the pattern in the captured image.

15. A method according to any preceding claim, comprising causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.

16. A computer program comprising machine readable instructions that when executed by computing apparatus causes it to perform the method of any preceding claim.

17. Apparatus configured to perform the method of any of claims 1 to 15.

18. Apparatus comprising:

at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to perform a method comprising:

based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

19. Apparatus according to claim 18, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.

20. Apparatus according to claim 19, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform: determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomial.

21. Apparatus according to any of claims 18 to 20, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line.

22. Apparatus according to claim 21, wherein checking the curvature of the straight line comprises fitting a second order polynomial to the line in each captured image.

23. Apparatus according to claims 21 or 22, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.

24. Apparatus according to any of claims 21 to 23, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

performing a half-interval search of displayed lines at half intervals while the number of pixels between the left and right hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.

25. Apparatus according to claim 24, wherein when first and second predetermined thresholds are reached, the computer program code, when executed by the at least one processor, causes the apparatus to perform:

controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search.

26. Apparatus according to any of claims 18 to 25, wherein the displayed straight lines comprise perpendicular straight lines.

27. Apparatus according to claim 26, wherein the displayed lines comprise horizontal and vertical straight lines.

28. Apparatus according to any of claims 18 to 27, wherein the displayed straight lines each comprise a borderline between two different coloured areas of a displayed pattern, the borderline demarcating the two different coloured areas.

29. Apparatus according to any of claims 18 to 28, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

capturing images of a displayed straight line respectively located on a display at different quadrants of the display, e.g. a left hand side, right hand side, top, and bottom of the display.

30. Apparatus according to any of claims 18 to 29, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

controlling a display to display the at least one straight line.

31. Apparatus according to any of claims 18 to 30, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform:

determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern, and a determined centrepoint of the pattern in the captured image.

32. Apparatus according to any of claims 18 to 31, wherein the computer program code, when executed by the at least one processor, causes the apparatus to perform: causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.

33. A computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causes the performance of: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

34. Apparatus comprising means for:

based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

Description:
Determining an Intersection Location of an Optical Axis of a Lens with a Camera Sensor

Field

This specification generally relates to determining an intersection location of an optical axis of a lens with a camera sensor.

Background

Wide angle lenses, such as fisheye lenses, introduce strong geometrical distortions of a scene. For example, when a straight line is imaged, the projection of the line onto the imaging sensor is strongly distorted, and may appear curved. The distortion usually increases for pixels further away from the optical axis of the lens, and is zero or negligible close to the optical axis of the lens. Lenses with a narrower field of view also introduce geometrical distortions, but with reduced effect. In many applications, the captured images are warped to correct for the lens distortion, which requires knowledge of the optical axis position and lens distortion parameters. In other applications, precise measurement of the lens distortion and optical axis position is needed for 3D modelling of the scene. Lens distortion modelling and measurement usually require knowledge of the location of the intersection of the optical axis with the sensor.

Summary

According to a first aspect, the specification describes a method comprising: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

The method may further comprise fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.

The method may further comprise determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomial. The method may further comprise checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line. Checking the curvature of the straight line may comprise fitting a second order polynomial to the line in each captured image.

The method may further comprise controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.

The method may further comprise performing a half-interval search of displayed lines at half intervals while the number of pixels between the left and right hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.

The method may further comprise, when first and second predetermined thresholds are reached, controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search. The displayed straight lines may comprise perpendicular straight lines.

The displayed lines may comprise horizontal and vertical straight lines.

The displayed straight lines may each comprise a borderline between two different coloured areas of a displayed pattern, the borderline demarcating the two different coloured areas.

The method may further comprise capturing images of a displayed straight line respectively located on a display at different quadrants of the display, e.g. a left hand side, right hand side, top, and bottom of the display.

The method may further comprise controlling a display to display the at least one straight line.

The method may further comprise determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern, and a determined centrepoint of the pattern in the captured image.

The method may further comprise causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.

According to a second aspect, the specification describes a computer program comprising machine readable instructions that when executed by computing apparatus causes it to perform any method as described with reference to the first aspect.

According to a third aspect, the specification describes an apparatus configured to perform any method as described with reference to the first aspect.

According to a fourth aspect, the specification describes an apparatus comprising:

at least one processor; and

at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to perform a method comprising:

based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: fitting a respective first order polynomial to each of a plurality of lines from captured images, and calculating the deviation of each line from its respective first order polynomial.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: determining the intersection location of two non-parallel lines from the captured images having the smallest deviations from their respective fitted first order polynomial.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: checking the curvature of each line in each captured image, and determining the position of the intersection location of the optical axis with the sensor relative to the line based on the curvature of the line. Checking the curvature of the straight line may comprise fitting a second order polynomial to the line in each captured image.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling a display to subsequently display a straight line at a position based on the curvature of at least one line from a captured image.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: performing a half-interval search of displayed lines at half intervals while the number of pixels between the left and right hand lines is above a first predetermined threshold, and while the number of pixels between the top and bottom lines is above a second predetermined threshold.

When first and second predetermined thresholds are reached, the computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling display of a respective pair of intersecting lines passing through each respective pixel in the final interval following the half-interval search.

The displayed straight lines may comprise perpendicular straight lines.

The displayed lines may comprise horizontal and vertical straight lines.

The displayed straight lines may each comprise a borderline between two different coloured areas of a displayed pattern, the borderline demarcating the two different coloured areas.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: capturing images of a displayed straight line respectively located on a display at different quadrants of the display, e.g. a left hand side, right hand side, top, and bottom of the display.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: controlling a display to display the at least one straight line.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: determining a correspondence between display pixel locations and a location on the sensor of the camera module, based on at least one captured image of a displayed known pattern, and a determined centrepoint of the pattern in the captured image.

The computer program code, when executed by the at least one processor, may cause the apparatus to perform: causing display of at least one straight line on a display, and causing movement of the camera module relative to the display to capture a plurality of images having lines in a different respective position in each image.

According to a fifth aspect, the specification describes a computer-readable medium having computer-readable code stored thereon, the computer-readable code, when executed by at least one processor, causes the performance of: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

According to a sixth aspect, the specification describes an apparatus comprising means for: based on a plurality of images of at least one displayed straight line captured by an image capture device having at least one lens and a sensor, the line having a different relative position in each respective image, wherein the lens introduces a degree of radial distortion to the captured images and has an optical axis, determining a location of intersection of the optical axis of the lens with the sensor by determining an intersection location of two non-parallel lines from the captured images having the smallest deviations from respective fitted straight lines.

Brief Description of the Figures

For a more complete understanding of the methods, apparatuses and computer-readable instructions described herein, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:

Figure 1 is a schematic illustration of a calibration system for determining the position of the intersection of the lens optical axis and the sensor plane, according to embodiments of this specification;

Figure 2 is a flow chart illustrating an example of operations which may be performed by a module for detection of the intersection of the optical axis and the sensor, according to embodiments of this specification;

Figure 3 is a flow chart illustrating an example of operations which may be performed by a module for detection of the intersection of the optical axis and the sensor, according to embodiments of this specification;

Figure 4 illustrates an example of a known pattern to be displayed to the camera module;

Figure 5 illustrates an example of a pattern to be displayed to the camera module;

Figure 6 illustrates an example of a pattern to be displayed to the camera module;

Figure 7 illustrates an example of an image captured by the camera module, and fitting a polynomial to the border line;

Figure 8 illustrates an example of a known pattern to be displayed to the camera module;

Figure 9 is a schematic illustration of an example configuration of the camera module according to embodiments of this specification;

Figure 10 is a computer-readable memory medium upon which computer-readable code may be stored, according to embodiments of this specification.

Detailed Description

In the description and drawings, like reference numerals may refer to like elements throughout.

In brief, embodiments described herein provide identification of the location on a camera sensor at which the optical axis of the lens intersects the sensor. The optical axis describes a line through the optical system with the highest degree of rotational symmetry: rotating the optical system around this line while imaging an object will not change the image of the object. The embodiments can be applied to any kind of lens having radial distortion. Such lenses may include, but are by no means limited to, wide angle lenses including fisheye lenses. Moreover, there are camera modules, such as catadioptric cameras, where the optical system comprises a combination of one or more lenses and mirrors. The embodiments can also be applied to such catadioptric cameras. Determining the position of the optical axis of the lens with respect to the camera sensor allows the camera to be calibrated against the lens, or facilitates movement of the lens relative to the camera sensor into a better position, e.g. during manufacture.

In one phase, a display is controlled to display a plurality of images of at least one displayed straight line. A camera module including the sensor and the lens is controlled to capture the displayed images. A module for detection of the intersection of the lens optical axis and the sensor, which may be integral with or external to the camera module, processes captured images. It also controls the display of images on the display. Plural images, each with straight lines at different locations, are displayed. The lines appear curved in the captured images because of the distortion caused by the lens. The module for detection of the intersection of the lens optical axis and the sensor fits a first order polynomial to lines from some of the captured images, and determines the deviation of the coordinates of the (curved) lines in the captured images from their respective fitted straight lines. This is performed horizontally and vertically. The display may be controlled to display straight lines at all vertical and horizontal positions, at the resolution of the display, in a relatively small area around the point on the display that corresponds to the point on the camera sensor at which the optical axis of the lens intersects with the sensor. The module for detection of the intersection of the lens optical axis and the sensor determines the intersection location of the lines having the smallest deviation from their respective fitted straight lines, both horizontally and vertically, and thus determines the location of intersection of the optical axis of the lens with the sensor.

This can provide identification of the intersection of the lens optical axis with the sensor accurately and reliably. Moreover, it does not require fitting of complex curve equations, which impose a processing burden and which can provide errors in results. It also does not require any specific positional relationship between the display and the camera module.

The fitted straight lines refer to lines which best fit the coordinates of the lines in the captured images. The lines may be fit according to any suitable line fitting method.
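As a concrete illustration, the straight-line fitting and deviation measurement can be sketched as follows. This is a minimal sketch only: the specification does not mandate a particular fitting method, and the use of `np.polyfit` with an RMS residual as the deviation metric is an assumption for illustration.

```python
import numpy as np

def line_straightness_deviation(xs, ys):
    """Fit a first order polynomial (a straight line) to the extracted
    line coordinates and return the RMS deviation of the points from
    the fit. Lines imaged closer to the optical axis deviate less."""
    coeffs = np.polyfit(xs, ys, 1)        # y = a*x + b
    fitted = np.polyval(coeffs, xs)
    return float(np.sqrt(np.mean((ys - fitted) ** 2)))

# A truly straight imaged line has (near-)zero deviation, while a line
# bowed by radial distortion deviates measurably from its linear fit.
xs = np.linspace(0.0, 100.0, 50)
straight = 0.5 * xs + 3.0
bowed = straight + 0.001 * (xs - 50.0) ** 2
```

The horizontal line and the vertical line with the smallest such deviation would then be selected as the two non-parallel lines whose intersection is computed.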

In an iterative phase prior to the above phase, the display is controlled to display straight lines at positions that are increasingly closer to the point on the display that corresponds to the point on the camera sensor at which the optical axis of the lens intersects. This is performed without needing to know the positional relationship between the display, lens and sensor. In particular, a straight line is caused to be displayed at a position on the display, and the direction of curvature (left or right, up or down) in the captured image is used to determine the location of the intersection of the optical axis and the sensor relative to the displayed line. Next, a straight line is displayed at a location on the display that is in the determined relative direction, and the direction of curvature (left or right, up or down) in the captured image is used to determine the location of the intersection of the optical axis and the sensor relative to the newly displayed line. The iteration may utilise a half-interval search, for instance. Once a sufficiently small area of the display has been identified as corresponding to the sensor area containing the intersection location of the optical axis of the lens with the sensor, the first phase may be used to identify the location to a greater level of accuracy. The iterative phase reduces the time needed to find the intersection location of the optical axis of the lens with the sensor, but may be omitted in some implementations.
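The iterative phase can be sketched as below. The curvature direction is read from the sign of the quadratic coefficient of a second order polynomial fit, and a half-interval search narrows the candidate display region. The `capture_curvature` callback, the sign convention (positive curvature meaning the axis lies towards `lo`), and the one-pixel threshold are assumptions for illustration; in practice the callback would display a line at the given position, capture it, and evaluate its curvature.

```python
import numpy as np

def curvature_sign(xs, ys):
    """Sign of the quadratic coefficient of a second order polynomial
    fitted to an imaged line; it indicates on which side of the
    displayed line the optical-axis intersection lies."""
    a, _, _ = np.polyfit(xs, ys, 2)
    return 1 if a > 0 else -1

def half_interval_search(lo, hi, capture_curvature, threshold=1):
    """Narrow the display interval [lo, hi] (in display pixels) while
    its width exceeds the threshold, halving it at each step based on
    the curvature sign of the line displayed at the midpoint."""
    while hi - lo > threshold:
        mid = (lo + hi) // 2
        if capture_curvature(mid) > 0:   # axis lies towards lo
            hi = mid
        else:                            # axis lies towards hi
            lo = mid
    return lo, hi
```

For example, with a simulated callback whose curvature sign flips at display column 317, `half_interval_search(0, 1024, lambda p: 1 if p > 317 else -1)` narrows the interval to a single-pixel bracket around that column.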

Figure 1 is a schematic illustration of a calibration system 1. The system includes a camera module 10 which comprises a lens 20 and a sensor 50. The lens may be a wide angle lens 20 such as, for example, a fisheye lens. The lens 20 can also be a combination of two or more individual lenses. Alternatively, the lens may be any other kind of lens which introduces some form of radial distortion to the captured image. The camera module 10 can be a catadioptric module where instead of one or more lenses 20, a combination of one or more mirrors and lenses can be used to project the image of a scene onto the sensor 50.

The system also includes a display 30, for instance, a flat display. The flat display 30 may be a TV display. Alternatively, the flat display 30 may comprise a pattern projected onto a flat wall by a high resolution projector. The display 30 may comprise any suitable means for displaying an image to the camera module 10. The camera module 10 is configured to capture at least one image displayed on the display. For example, in a configuration set up for calibration, the camera module 10 is positioned such that the lens 20 is directed at the display 30.

The lens 20 produces geometrical distortions in images captured by the camera module 10. For example, straight lines imaged by camera module 10 may appear as curved lines in the captured images. The distortion increases for pixels further away from the optical axis of the lens. The least distortion is produced at the location at which the optical axis of the lens intersects with the sensor.

The system 1 additionally comprises a detection module 40 for detection of the location of the intersection of the lens optical axis and the sensor. The module 40 may form part of a computing device such as a PC, or a server, for example. Alternatively, the detection module 40 may form part of the camera module 10. The detection module 40 may be configured to receive captured images from the camera module 10. The detection module 40 may be configured to process images captured by the camera module 10. The detection module 40 may be configured to control the display 30 to display a pattern.

The camera module 10 is configured to capture a plurality of images each of at least one displayed straight line, the line being located at a different relative position in each respective image. The detection module 40 is configured to determine an intersection location of two non-parallel lines from the captured images having the smallest deviation from a respective fitted straight line. An example of how this determination may be performed is described in more detail with reference to Figures 2 and 3.
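The final step reduces to intersecting the two straightest fitted lines. The sketch below assumes each line is represented in slope-intercept form y = m*x + c; in practice a near-vertical line would instead be parameterized as x as a function of y to avoid an unbounded slope.

```python
def intersect_lines(line_a, line_b):
    """Intersection of two non-parallel lines given as (slope,
    intercept) pairs for y = m*x + c. The returned (x, y) point is
    the estimated sensor location of the optical-axis intersection."""
    (m1, c1), (m2, c2) = line_a, line_b
    if abs(m1 - m2) < 1e-12:
        raise ValueError("lines are parallel; no unique intersection")
    x = (c2 - c1) / (m1 - m2)
    return x, m1 * x + c1
```

For instance, intersecting a horizontal line y = 240 with a diagonal line y = x yields the point (240.0, 240.0).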

Accurate determination of the intersection location of the optical axis with the sensor may be helpful for 3D modelling of a scene. Lens distortion modelling and measurement typically require knowledge of the location at which the optical axis intersects the sensor 50 area. Once the intersection location of the optical axis with the sensor has been determined, images captured by the camera module 10 can be modified using software, e.g. to model a 3D scene. The detection of the location of the intersection between the lens optical axis and the sensor may be made during the manufacturing process. The determined intersection location of the optical axis with the sensor may be used to move the lens relative to the camera sensor 50 into a desired position, for instance such that the optical axis of the lens intersects with a position at or close to the centre of the camera sensor. The determined intersection of the optical axis and the sensor may alternatively be used at the manufacturing stage to configure image processing software such that the software processes captured images optimally. Additionally, after the manufacturing stage, the determination of the location of the intersection of the lens optical axis and the sensor may be made in order to determine any changes to the position of the optical axis of the lens relative to the sensor 50 during the life of the camera module. Changes to the optical axis relative to the sensor 50 can occur in the case that the camera module suffers an impact, for example, if the camera module is bumped or dropped.

The system 1 may be configured such that the display 30 appears as large as possible in the field of view of the camera module 10, advantageously occupying the whole of the field of view of the camera module 10. To achieve this, the display 30 may be positioned close to the lens of the camera module, without the display 30 extending beyond the field of view of the lens 20. The display 30 may include a large number of pixels. For example, the display may be a 4k or 8k resolution display. By providing a large number of pixels and providing the display close to the camera lens, the accuracy of the optical axis location determination may be improved. Typically, the higher the resolution of the display, the higher the accuracy of detection of the intersection between the lens optical axis and the sensor.

Figure 2 is a flow chart illustrating various operations which may be performed by the calibration system. In some embodiments, not all of the illustrated operations need to be performed. Operations may also be performed in a different order compared to the order presented in Figure 2. In operation S110, the display 30 is controlled to display a plurality of images of at least one displayed straight line, and the camera module 10 is configured to capture this plurality of images. The camera module 10 may provide the captured images to a detection module 40. The detection module 40 may form part of the camera module 10. Alternatively, the detection module 40 may form part of a separate computing device such as a PC or a server.

In operation S120, the detection module 40 is configured to fit a first order polynomial to each of the lines from the captured images. In the captured images, the distortion from the camera lens 20 results in the lines in the captured images having a curvature which increases with the distance of the projection of the line on the sensor 50 from the intersection location of the optical axis of the lens 20 with the sensor. In step S130, the detection module 40 is configured to determine the deviation of the coordinates of the imaged lines from their respective fitted straight lines.

In step S140, the detection module 40 is configured to determine the intersection location of the lines having the smallest deviation from the respective fitted lines. The determined intersection location of the lines is determined to be the intersection location of the optical axis of the lens with the sensor of the camera module 10.
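Operations S120 to S140 can be sketched as follows, assuming NumPy is available; the function names and the simple line parameterisations are illustrative, not part of this specification:

```python
import numpy as np

def straightness(xs, ys):
    """S120/S130: fit a first-order polynomial x = a*y + b to a detected
    near-vertical line and return (max deviation from the fit, (a, b))."""
    a, b = np.polyfit(ys, xs, 1)
    return np.max(np.abs(xs - (a * ys + b))), (a, b)

def intersect(v_fit, h_fit):
    """S140: intersection of x = a1*y + b1 (near-vertical) with
    y = a2*x + b2 (near-horizontal)."""
    (a1, b1), (a2, b2) = v_fit, h_fit
    y = (a2 * b1 + b2) / (1.0 - a1 * a2)
    return a1 * y + b1, y

# a perfectly straight vertical line at x = 100 has (near-)zero deviation:
ys = np.linspace(0.0, 1080.0, 25)
dev, v_fit = straightness(np.full_like(ys, 100.0), ys)
x0, y0 = intersect(v_fit, (0.0, 50.0))   # horizontal line y = 50
```

In the method, `straightness` would be evaluated for each candidate line, and `intersect` applied to the vertical and horizontal candidates with the smallest deviations.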

An iterative method which may be carried out in order to perform the above steps is described below in relation to Figure 3. In the iterative method, the display 30 is controlled to display straight lines at positions that are increasingly close to the point on the display that corresponds to the point on the camera sensor 50 at which the optical axis of the lens 20 intersects the sensor 50. In a final stage of the iterative method, the display 30 is controlled to display straight lines at all vertical and horizontal pixel positions, at the resolution of the display, in a relatively small area around that point. However, it will be appreciated that any other suitable method of determining an intersection location of two straight lines in the captured images may be used.

Figure 3 is a flow chart illustrating various operations which may be performed by the calibration system. In some embodiments, not all of the illustrated operations need to be performed. Operations may also be performed in a different order compared to the order presented in Figure 3.

In operation S210, the detection module 40 may check the curvature of a vertical line displayed on the left hand side of the display. Initially, the display is configured to show a known pattern including a centrepoint 60. An example pattern is a checkerboard, such as that shown in Figure 4. The checkerboard in Figure 4 is merely an example of a pattern which may be used as the known pattern in the method. The checkerboard may be of any suitable size. For example, the checkerboard may be displayed over a given number of display pixels. In the example of Figure 4, the checkerboard is a 4x4 checkerboard pattern. It will be appreciated that any other dimensioned checkerboard may be used, as long as the centrepoint 60 of the pattern is suitable for detection in a captured image of the pattern.

The centrepoint 60 of the pattern is displayed close to the left border of the display. The pixel coordinates of the centrepoint 60 of the pattern in this step are at dispXleft (i.e. on the left hand side of the display) and dispYres/2 (where dispYres is the vertical resolution of the display). However, the vertical pixel coordinate may instead be any other vertical pixel coordinate on the display. The camera module is configured to capture an image of the displayed pattern. The centrepoint 60 of the pattern is detected in the captured image of the pattern by the detection module 40. In this way, the detection module 40 may determine the correspondence between the display coordinates (coordinates on the display 30 at which the centre of the checkerboard is displayed) and the image coordinates (coordinates on the camera sensor 50 at which the centre of the checkerboard is present).

The display is subsequently configured to show a pattern containing two differently coloured areas, as shown for example in Figure 5. In the example of Figure 5, the image includes one white area and one black area on different portions of the screen. A vertical borderline 70 demarcates the two areas. In the example of Figure 5, the borderline is located at a horizontal pixel position dispXleft. The vertical size of each area is equal to the vertical resolution of the display.

The camera module 10 is configured to capture an image of the displayed pattern. The border line between the two areas is detected in the captured image. The border line 70 may appear curved in the captured image due to the distortion introduced by the wide angle lens of the camera module. The amount of distortion introduced depends on the distance between the border line and the intersection location of the optical axis of the lens 20 with the sensor 50.

The detection module 40 is configured to calculate a polynomial function of second order to fit the border line 70 in the captured image. The polynomial is of the form x = Ay² + By + C, and may be measured with respect to a coordinate system applied to the captured image as shown in Figure 7. The second order polynomial may be fit according to any suitable fitting method. The sign of the parameter A is determined. In this example, if the sign of A is positive, then this indicates that the intersection location of the optical axis with the sensor 50 is to the right hand side of the curve. If A is negative at this step, the procedure is stopped. If A is positive at this step, the procedure continues in the next step to determine the curvature of a line on the right hand side of the display as follows.
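The second order fit and the sign test on A might be sketched as follows; the synthetic borderline and its coefficients are illustrative assumptions, not values from this specification:

```python
import numpy as np

def curvature_sign(xs, ys):
    """Fit x = A*y**2 + B*y + C to a detected near-vertical borderline
    and return the sign of A (per operation S210, a positive A is taken
    to indicate the optical axis lies to the right of the curve)."""
    A, _B, _C = np.polyfit(ys, xs, 2)
    return np.sign(A)

# synthetic borderline whose ends bow toward a centre on its right;
# the coefficients here are illustrative assumptions:
ys = np.linspace(0.0, 1080.0, 101)
xs = 100.0 + 1e-4 * (ys - 540.0) ** 2   # opens toward +x, so A > 0
print(curvature_sign(xs, ys))           # → 1.0
```

The same fit with x and y exchanged, y = Ax² + Bx + C, serves for the horizontal borderlines of operations S230 and S240.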

In operation S220, the detection module 40 may check the curvature of a vertical line displayed on the right hand side of the display. In a similar way to the above operation, the display is configured to show the known pattern including a centrepoint, such as the checkerboard described above and shown in Figure 4. The centrepoint 60 of the pattern is displayed close to the right border of the display. The pixel coordinates of the centrepoint 60 of the pattern in this step are at dispXright (i.e. on the right hand side of the display) and dispYres/2 (where dispYres is the vertical resolution of the display). As described above, a vertical pixel coordinate other than dispYres/2 may be used for the centrepoint coordinates. The camera module 10 is configured to capture an image of the displayed pattern. The centrepoint 60 of the pattern is detected in the captured image of the pattern. As described above, in this way, the system may determine the correspondence between the display coordinates and the image coordinates.

The display is configured to show a pattern containing two differently coloured areas, with a border line between the two areas being located at a horizontal pixel position dispXright. The vertical size of each area is equal to the vertical resolution of the display.

The camera module is configured to capture an image of the displayed pattern. The borderline between the two areas is detected in the captured image. The detection module 40 is configured to determine a second order polynomial to fit the borderline in the captured image. The sign of the parameter A is determined. In this step, if the sign of A is negative, then this indicates that the intersection location of the optical axis with the sensor 50 is to the left hand side of the curve. If A is positive at this step, the procedure is stopped. If A is negative at this step, the procedure continues as follows.

In operation S230, the detection module 40 may check the curvature of a horizontal line displayed at the top of the display. The display is configured to show a known pattern including a centrepoint 60, such as a checkerboard. The centrepoint 60 of the pattern is displayed close to the top border of the display. The pixel coordinates of the centrepoint 60 of the pattern in this case are at dispYtop (i.e. at the top of the display) and dispXres/2 (where dispXres is the horizontal resolution of the display). A horizontal pixel coordinate other than dispXres/2 may be used for the centrepoint of the displayed checkerboard. The camera module is configured to capture an image of the displayed pattern. The centrepoint 60 of the pattern is detected in the captured image of the pattern. The display is configured to show a pattern containing two differently coloured areas, as shown for example in Figure 6. In the example of Figure 6, the image includes one white area and one black area on different portions of the screen. A horizontal borderline 80 demarcates the two areas. In the example of Figure 6, the borderline is located at a vertical pixel position dispYtop. The horizontal size of each area is equal to the horizontal resolution of the display.

The camera module is configured to capture an image of the displayed pattern. The borderline between the two areas is detected in the captured image. The detection module 40 is configured to calculate a polynomial function of second order to fit the border line in the captured image.

The sign of the parameter A is determined. In this example, if the sign of A is positive, then this indicates that the intersection location of the optical axis with the sensor 50 is below the curve. If A is negative at this step, the procedure is stopped.

If A is positive at this step, the procedure continues to determine the curvature of a horizontal line at the bottom of the display as follows.

In operation S240, the detection module 40 may check the curvature of a horizontal line displayed at the bottom of the display. The display is configured to show a known pattern including a centrepoint 60, such as a checkerboard. The centrepoint 60 of the pattern is displayed close to the bottom border of the display. The pixel coordinates of the centrepoint 60 of the pattern in this case are at dispYbottom (i.e. at the bottom of the display) and dispXres/2. A horizontal pixel coordinate other than dispXres/2 may be used for the centrepoint of the displayed checkerboard. The camera module is configured to capture an image of the displayed pattern. The centrepoint 60 of the pattern is detected in the captured image of the pattern. The display is configured to show a pattern containing two differently coloured areas, with a borderline being located at a vertical pixel position dispYbottom. The horizontal size of each area is equal to the horizontal resolution of the display.

The camera module is configured to capture an image of the displayed pattern. The borderline between the two areas is detected in the captured image.

The system is configured to calculate a polynomial function of second order to fit the border line in the captured image. The polynomial is of the form y = Ax² + Bx + C. The sign of the parameter A is determined. In this example, if the sign of A is negative, then this indicates that the intersection location of the optical axis with the sensor is above the curve. If A is positive at this step, the procedure is stopped. If A is negative at this step, the procedure continues as follows.

In step S250, a half-interval search is performed. Each iteration may include the following sub-steps. In sub-step (a), the known pattern described above is displayed. The centrepoint of the pattern is displayed at (dispXleft + dispXright)/2 and (dispYtop + dispYbottom)/2. An image of the pattern is captured. The coordinates of the centrepoint of the pattern are detected in the captured image. In sub-step (b), a pattern including two areas separated by a horizontal borderline is displayed. The borderline is located at vertical coordinate (dispYtop + dispYbottom)/2. An image is captured and the borderline is detected in the captured image. A polynomial of second order is fitted to the detected borderline in the captured image. The value of A is determined. If A is greater than zero, this indicates that the intersection location of the optical axis with the sensor 50 is below the borderline. If A is determined to be greater than zero, the coordinate dispYtop is updated as dispYtop = (dispYtop + dispYbottom)/2. If A is less than zero, this indicates that the intersection location of the optical axis with the sensor 50 is above this line. If A is determined to be less than zero, the coordinate dispYbottom is updated as dispYbottom = (dispYtop + dispYbottom)/2.

In sub-step (c), a pattern including two areas separated by a vertical borderline is displayed. The borderline is located at horizontal coordinate (dispXleft + dispXright)/2. An image is captured and the borderline is detected in the captured image. A polynomial of second order is fitted to the detected borderline in the captured image. The value of A is determined. If A is greater than zero, this indicates that the intersection location of the optical axis with the sensor 50 is to the right hand side of the borderline. If A is determined to be greater than zero, the coordinate dispXleft is updated as dispXleft = (dispXleft + dispXright)/2. If A is less than zero, this indicates that the intersection location of the optical axis with the sensor 50 is to the left of the borderline. If A is determined to be less than zero, the coordinate dispXright is updated as dispXright = (dispXleft + dispXright)/2.

Sub-steps (a)-(c) are cyclically repeated with the updated coordinate values of dispXleft, dispXright, dispYtop and dispYbottom, while the difference between the left and right horizontal coordinates is greater than N pixels (i.e. while dispXright - dispXleft > N), and while the difference between the top and bottom vertical coordinates is greater than M pixels (i.e. while dispYbottom - dispYtop > M). In this way, the search interval is halved at each iteration, until the length of the search interval reaches a minimum of N pixels for the horizontal interval and M pixels for the vertical interval. The values of N and M may be set to any suitable value. In one example, N and M may each be set equal to 4.
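The half-interval search of sub-steps (a)-(c) can be sketched as follows; `sign_a_h` and `sign_a_v` are hypothetical callables standing in for the display/capture/second-order-fit pipeline described above, returning the sign of A for a borderline shown at the given coordinate:

```python
def half_interval_search(sign_a_h, sign_a_v, xl, xr, yt, yb, N=4, M=4):
    """Sketch of the S250 half-interval search over display coordinates
    [xl, xr] x [yt, yb].  The callables are stand-ins, not part of the
    original text."""
    while (xr - xl) > N and (yb - yt) > M:
        ym = (yt + yb) / 2.0
        if sign_a_h(ym) > 0:      # axis below the horizontal borderline
            yt = ym
        else:                     # axis above it
            yb = ym
        xm = (xl + xr) / 2.0
        if sign_a_v(xm) > 0:      # axis right of the vertical borderline
            xl = xm
        else:                     # axis left of it
            xr = xm
    return xl, xr, yt, yb

# example with a hypothetical axis at (1000, 400):
axis_x, axis_y = 1000.0, 400.0
res = half_interval_search(lambda y: 1 if y < axis_y else -1,
                           lambda x: 1 if x < axis_x else -1,
                           0.0, 1920.0, 0.0, 1080.0)
print(res)  # an interval of a few pixels bracketing (1000, 400)
```

Each iteration halves both intervals, so the number of captured images grows only logarithmically with the display resolution.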

The iteration may be stopped before dispXleft = dispXright and dispYtop = dispYbottom. This is because as the projection of the borderlines on the sensor approach the intersection point between the optical axis of the lens and the sensor, the borderline in the captured images becomes straighter. The value of A for lines very close to the optical axis becomes very small, meaning the line may be straight, or the curvature cannot be reliably detected. Once the degree of curvature of the lines is sufficiently low, which occurs after a certain number of iterations, the procedure continues as follows to determine the coordinates of two intersecting straight lines.

In step S260, the final search interval is scanned to find the intersection location of the optical axis with the sensor 50. The search intervals are between the final updated values of dispXleft, dispXright, dispYtop and dispYbottom after the final iteration in step S250. Each pair of horizontal and vertical coordinates in the interval is scanned. In each iteration, the known pattern is displayed with the centrepoint at the horizontal and vertical coordinates being scanned. An image of the pattern is captured, and the coordinates of the centrepoint are detected. A pattern including two areas with a horizontal borderline is displayed. The borderline is displayed at the vertical coordinate being scanned. An image of the pattern is captured. The borderline is detected in the captured image. A first order polynomial of the form y = Ax + B is fitted to the borderline. A pattern including two areas with a vertical borderline is then displayed. The borderline is displayed at the horizontal coordinate being scanned. An image of the pattern is then captured. The borderline is detected in the captured image. A first order polynomial of the form x = Ay + B is fitted to the borderline.

The deviation of each point of the detected line from the fitted line is calculated. The intersection location of the optical axis with the sensor 50 is determined to be the point in the captured image for which both horizontal and vertical lines have the smallest deviations from the respective fitted line. Therefore, the intersection location of the optical axis with the sensor 50 may be determined based on an intersection location of two lines which have a degree of curvature below a given threshold. The two intersecting lines may be selected based on their deviation from a respective fitted straight line being the smallest.
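The scan of step S260 might be sketched as follows; `fit_dev` is a hypothetical stand-in for the display/capture/first-order-fit pipeline, returning the deviation of the captured borderline (vertical 'v' or horizontal 'h', displayed at the given coordinate) from its fitted straight line:

```python
def scan_for_axis(fit_dev, xs_range, ys_range):
    """Sketch of step S260: scan every candidate coordinate pair in the
    final search interval and keep the pair whose displayed borderlines
    deviate least from their fitted straight lines.  `fit_dev` is a
    hypothetical stand-in, not part of the original text."""
    best, best_dev = None, float('inf')
    for x in xs_range:
        for y in ys_range:
            dev = max(fit_dev('v', x), fit_dev('h', y))
            if dev < best_dev:
                best, best_dev = (x, y), dev
    return best

# example with a hypothetical axis at (1000, 400): the deviation of a
# borderline is modelled as proportional to its distance from the axis
fit_dev = lambda o, c: abs(c - (1000 if o == 'v' else 400))
best = scan_for_axis(fit_dev, range(998, 1003), range(398, 403))
print(best)  # → (1000, 400)
```

The cost of this exhaustive scan grows with the product of the interval lengths, which is why larger values of N and M increase the processing time of this step.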

The steps described above may be performed once, and the horizontal and vertical coordinates of the intersection location of the optical axis with the sensor 50 determined. Alternatively, the above procedure may be run multiple times to obtain multiple estimates of the coordinates of the intersection location of the optical axis with the sensor 50. There may be small differences between the determined coordinates of the intersection location of the optical axis and the sensor 50 from each run due to noise in the estimation. From the multiple estimations the average, median, or any other statistical measure may be computed in order to obtain a single pair of coordinates. As described above, the values of N and M may be set to any suitable value. However, it will be understood that larger values of N and M increase the processing time in step S260, as a larger number of optical axis intersection coordinate candidates are scanned over the larger search interval. The known pattern displayed in the above steps is not limited to being a checkerboard.

Alternatively, the known pattern may be a circle. The circle may be positioned at given coordinates of the display in the same way as the centrepoint of the checkerboard in order to determine the correspondence between the display's coordinates and the coordinates of the captured images. An example of a circle having a centrepoint at dispXleft, dispYres/2 is shown in Figure 8a. An example of the same circle is shown in Figure 8b, having a centrepoint at dispXres/2, dispYtop. It will be appreciated that any suitable pattern may be used. Additionally, it will be appreciated that a checkerboard pattern as shown in Figure 4 is not limited to being a 4x4 checkerboard, but may include any suitable number of rectangles. It will also be appreciated that the patterns may be provided in any suitable colour. In the examples described herein, the patterns are in black and white, but they are not limited to such colours. The centrepoint of the checkerboard is described as being detected in order to determine the correspondence between the display coordinates and the camera sensor 50 coordinates. However, it will be appreciated that any of the intersection points on the checkerboard may be chosen to be detected for the coordinate correspondence determination.

Any displayed pattern may be scaled so that it is completely contained on the display. In this way, detection of the pattern may be improved. It will be appreciated that instead of patterns including two areas demarcated by a borderline, a pattern showing a single line may be displayed. Alternatively, more than one line at a time may be displayed. For example, two intersecting lines may be displayed.

However, by using a pattern including two areas each having a different colour such as black and white, the detectability of the line may be improved. The detectability of a single line may depend on the sensor 50 and display resolutions.

If the display is a TV display, it may include a larger number of horizontal pixels than vertical pixels. Therefore, instead of displaying a vertical line on the display, the display may only show horizontal lines, and the camera module may be rotated 90 degrees relative to the display. Alternatively, the relative roll between the displayed border and the camera module may be any suitable angle, in which case the errors in the detection of the beginning and end of the border lines may be taken into account.

In some examples, the relative rotation between the display and sensor 50 plane of the camera module may be limited, including the roll angle.

It will be appreciated that the borderlines displayed may not only be horizontal and vertical lines, but may be lines displayed at any angle on the display. Additionally, the displayed lines need not be perpendicular to each other, but may be displayed at any angle to each other.

Alternatively to changing the pattern on a fixed display, the display may be configured to display a fixed pattern, where the display is moveable relative to the camera module. For example, the display may be configured to be moved by a high precision motor so that the pattern is imaged on the sensor 50 at different locations. The detection module 40 may be configured to control the position of the display 30 relative to the camera module. This may involve moving the position of the display 30, for example by controlling a motor connected to the display. Alternatively, this may involve moving the position of the camera module 10. The camera module 10 may be controlled by the detection module 40 to rotate relative to the display.

The display may be configured to show the displayed patterns at as large a size as possible, so that as large a portion of the field of view of the lens as possible is covered.

The display may be provided at as small a distance as possible from the lens of the camera module. This increases the coverage of the lens field of view. The method is not limited to the half-interval search algorithm disclosed above, but any other efficient search algorithm could be implemented.

In some embodiments, the steps S210 - S250 are not performed. In such embodiments, only step S260 is performed, and each pixel of the display is scanned. In this case, respective first order polynomials are fitted to captured images of displayed horizontal and vertical lines for each pixel of the display. This may increase the overall processing time taken to determine the intersection location of the optical axis with the sensor. In some embodiments, step S260 is not performed. In such embodiments, only steps S210 - S250 are performed. In this case, second order polynomials are fitted to captured images of displayed horizontal and vertical lines as described in steps S210 - S250. Instead of stopping the iteration when the search interval is reduced to a few pixels and performing step S260, the half interval search may continue until two of the respective fitted lines have a value of A of approximately zero, indicating that the line is a first order polynomial. The intersection location of the optical axis with the sensor 50 is determined to be the point at which the first order polynomials intersect. When the curvature reaches a small value as the pixels being searched approach the optical axis, the uncertainty in the sign of A may increase, and so the location of the intersection of the optical axis with the sensor 50 may be determined to a lower level of accuracy compared to performing the method depicted in Figure 3.

By performing steps S210 - S260 a search interval including the optical axis may be determined relatively quickly by using the half interval searching method, and the position of the intersection of the optical axis and the sensor may be determined to a high level of accuracy by scanning each (displayed) pixel in the final search interval and fitting respective first order polynomials to the captured lines for each pixel in the search interval. However, it will be appreciated that not all of the steps need to be performed in combination to obtain a determination of the intersection location of the optical axis with the sensor 50.

Figure 9 is a schematic block diagram of an example configuration of a camera module 10 such as described with reference to Figures 1 to 6. The camera module 10 may comprise memory and processing circuitry. The memory 11 may comprise any combination of different types of memory. In the example of Figure 9, the memory comprises one or more read-only memory (ROM) media 13 and one or more random access memory (RAM) media 12. The camera module 10 comprises one or more sensors 50 which may be configured to receive a projection of an image through the lens 20. The processing circuitry 14 may be configured to process a captured image as described with reference to Figures 2 and 3. Alternatively, the processing circuitry may be separate from the camera module 10. In this case, the camera module 10 may further comprise an output interface 15 configured for transmitting a captured image to the processing circuitry.

The memory described with reference to Figure 9 may have computer readable instructions 13A stored thereon, which, when executed by the processing circuitry 14, cause the processing circuitry 14 to cause performance of various ones of the operations described above. The processing circuitry 14 described above with reference to Figure 9 may be of any suitable composition and may include one or more processors 14A of any suitable type or suitable combination of types. For example, the processing circuitry 14 may be a programmable processor that interprets computer program instructions and processes data. The processing circuitry 14 may include plural programmable processors. Alternatively, the processing circuitry 14 may be, for example, programmable hardware with embedded firmware. The processing circuitry 14 may be termed processing means. The processing circuitry 14 may alternatively or additionally include one or more Application Specific Integrated Circuits (ASICs). In some instances, processing circuitry 14 may be referred to as computing apparatus.

The processing circuitry 14 described with reference to Figure 9 is coupled to the memory 11 (or one or more storage devices) and is operable to read/write data to/from the memory. The memory may comprise a single memory unit or a plurality of memory units 13 upon which the computer readable instructions 13A (or code) are stored. For example, the memory 11 may comprise both volatile memory 12 and non-volatile memory 13. For example, the computer readable instructions 13A may be stored in the non-volatile memory 13 and may be executed by the processing circuitry 14 using the volatile memory 12 for temporary storage of data or of data and instructions. Examples of volatile memory include RAM, DRAM, and SDRAM etc. Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc. The memories 11 in general may be referred to as non-transitory computer readable memory media.

The term 'memory', in addition to covering memory comprising both non-volatile memory and volatile memory, may also cover one or more volatile memories only, one or more nonvolatile memories only, or one or more volatile memories and one or more non-volatile memories.

The computer readable instructions 13A described herein with reference to Figure 9 may be pre-programmed into the camera module 10. Alternatively, the computer readable instructions 13A may arrive at the camera module 10 via an electromagnetic carrier signal or may be copied from a physical entity such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD. The computer readable instructions 13A may provide the logic and routines that enable the camera module 10 to perform the functionalities described above. The combination of computer-readable instructions stored on memory (of any of the types described above) may be referred to as a computer program product.

Figure 10 illustrates an example of a computer-readable medium 16 with computer-readable instructions (code) stored thereon. The computer-readable instructions (code), when executed by a processor, may cause any one of or any combination of the operations described above to be performed.

As will be appreciated, the camera module 10 described herein may include various hardware components which may not have been shown in the Figures since they may not have direct interaction with the features shown.

Embodiments may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "memory" or "computer-readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.

Reference to, where relevant, "computer-readable storage medium", "computer program product", "tangibly embodied computer program" etc., or a "processor" or "processing circuitry" etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), signal processing devices and other devices. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device, whether instructions for a processor, or configured or configuration settings for a fixed function device, gate array, programmable logic device, etc.

As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile device or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.

This definition of 'circuitry' applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term "circuitry" would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term "circuitry" would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it will also be appreciated that the flow diagrams of Figures 2 and 3 are examples only and that various operations depicted therein may be omitted, reordered and/or combined.

Although various aspects are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.

It is also noted herein that while the above describes various examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the appended claims.