Title:
METHOD AND SYSTEM FOR OBTAINING OPTICAL PARAMETERS OF CAMERA
Document Type and Number:
WIPO Patent Application WO/2004/092826
Kind Code:
A1
Abstract:
The present invention is a method and system for analyzing the mapping mechanism of a camera and accordingly obtaining its optical parameters. A specific mapping characteristic, that one sight ray exclusively corresponds to one imaged point, is utilized; with reference to a particular imaged point, absolute spatial coordinates conforming to the mapping characteristic are searched for in order to analyze the mapping mechanism of the camera. A planar target with a physical central-symmetric pattern (PCP) is employed to locate the principal point and absolutely orient the optical axis, with the aid of both the center of the PCP and the center of its corresponding image with the similar geometric feature, termed the imaged central-symmetric pattern (ICP). Then, referring to the optical axis, the relative distance between the camera and the target is actively adjusted to enable the mapping traces, cast from different calibration marks on the target, to overlap on the image plane. Based on this phenomenon, the sight ray can be analyzed by the overlapping mechanism, and a methodology is developed thereby to obtain the optical parameters of the camera. Because the invention solely employs measured data to deduce the parameters, meaning that no postulation of a given mapping mechanism is necessary, it is most suitable for application to cameras with an unknown optical model. In fact, the greater the deformation of the image, the higher the sensitivity of the invention in operation. Hence, the applications of wide-angle cameras can be widely expanded and, furthermore, the invention can evaluate or determine the specifications of the camera. The operating procedure of the invention is simple and low-cost, wherefore it has major commercial and industrial practicability.

Inventors:
JAN GWO-JEN (CN)
CHANG CHUANG-JAN (CN)
Application Number:
PCT/IB2004/001109
Publication Date:
October 28, 2004
Filing Date:
April 12, 2004
Assignee:
APPRO TECHNOLOGY INC (CN)
JAN GWO-JEN (CN)
CHANG CHUANG-JAN (CN)
International Classes:
G01M11/02; G03B13/00; G03B37/06; G03B43/00; (IPC1-7): G03B37/06; G01M11/02; G03B43/00
Foreign References:
US20030090586A1 (2003-05-15)
EP1028389A2 (2000-08-16)
US5185667A (1993-02-09)
US5870135A (1999-02-09)
Claims:
CLAIMS

What is claimed is:
1. A method for obtaining the optical parameters of a camera, which utilizes the specific characteristic that one single sight ray in space corresponds to one single imaged point on an image plane to obtain the optical parameters of a camera, the method comprises: placing a target with a physical central-symmetric pattern (PCP) thereon in the field of view (FOV) of the camera, in which the PCP is composed of a central mark, located at the geometric center thereof, and at least two calibration marks, individually termed the first calibration mark and the second calibration mark, located on a straight radial line centered at the central mark; collimating the target and the camera to enable an optical axis of the camera to perpendicularly pass through the central mark; recording the pixel coordinate of an imaged point imaged by the first calibration mark; moving the target along the optical axis by locking the central mark thereon in order to enable the second calibration mark to image in an overlapping manner at the same pixel coordinate of the imaged point; extracting both the spatial absolute coordinates of the first and second calibration marks and deducing a sight ray defined by the two spatial absolute coordinates; and regarding the point of intersection of the sight ray and the optical axis as a viewpoint of the camera.
2. The method according to claim 1, wherein the step of collimating the target and the camera is fulfilled by locating a principal point on the image plane, hence a spatial sight ray perpendicularly passing through both the principal point and the central mark representing the optical axis.
3. The method according to claim 2, wherein the method of locating the principal point comprises: further providing a plurality of center-symmetric geometric figures to the PCP; placing the target in the FOV of the camera to allow the PCP to image on the image plane; adjusting the relative position between the target and the camera until the image of the PCP turns into an imaged central-symmetric pattern (ICP); and examining the symmetry of the ICP with at least one symmetric index to ensure that the imaged traces of the plurality of geometric figures are symmetrical, the feature coordinate of the imaged point mapped by the central mark locating the principal point.
4. The method according to claim 3, wherein the plurality of geometric figures is selected from the group comprising concentric circles, concentric rectangles, concentric triangles and concentric polygons.
5. The method according to claim 3, wherein the plurality of geometric figures is a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
7. The method according to claim 3, wherein the at least one symmetric index comprises imaged-distortion indexes, a horizontal deviation index and a vertical deviation index.
8. A method for obtaining the optical parameters of a camera, which utilizes the specific characteristic that one single sight ray in space corresponds to one single imaged point on an image plane to obtain the optical parameters of a camera, the method comprises: placing a target with a physical central-symmetric pattern (PCP) thereon in the field of view (FOV) of the camera, in which the PCP is composed of a central mark, located at the geometric center thereof, and a plurality of calibration marks defined by a plurality of geometric figures; collimating the target and the camera to enable an optical axis of the camera to perpendicularly pass through the central mark; varying the relative position between the target and the camera along the optical axis and recording a plurality of object-to-image conjugate-coordinate pairs corresponding to the plurality of calibration marks separately at the different relative positions to form an object-to-image conjugate-coordinate array; and searching a target point along the optical axis to enable an overlapping index to approach optimal trace-overlap by means of analyzing the object-to-image conjugate-coordinate array on the basis of the target point, in which the target point is a viewpoint of the camera.
9. The method according to claim 7, wherein the step of collimating the target and the camera is fulfilled by locating a principal point on the image plane, hence a spatial sight ray perpendicularly passing through both the principal point and the central mark representing the optical axis.
10. The method according to claim 8, wherein the method for locating the principal point further comprises: placing the target in the FOV of the camera to allow the PCP to image on the image plane; adjusting the relative position between the target and the camera until the image of the PCP turns into an imaged central-symmetric pattern (ICP); and examining the symmetry of the ICP with at least one symmetric index to ensure that the imaged traces of the plurality of geometric figures satisfy a symmetry requirement, the feature coordinate of the imaged point mapped by the central mark locating the principal point.
11. The method according to claim 9, wherein the at least one symmetric index comprises imaged-distortion indexes, a horizontal deviation index and a vertical deviation index.
12. The method according to claim 7, wherein the plurality of geometric figures is selected from the group comprising concentric circles, concentric rectangles, concentric triangles and concentric polygons.
13. The method according to claim 7, wherein the plurality of geometric figures is a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
14. The method according to claim 7, wherein the plurality of object-to-image conjugate-coordinate pairs is composed of the absolute coordinates of the plurality of calibration marks, or the coordinates of the camera, and the pixel coordinates corresponding to the plurality of calibration marks, in which three parameters, including an image height, an object height and an object distance, can be deduced by means of analyzing the plurality of object-to-image conjugate-coordinate pairs.
15. The method according to claim 7, wherein the overlapping index is a divergent length, which is deduced by the steps comprising: analyzing the object-to-image conjugate-coordinate array to obtain a plurality of data points; and adjacently connecting the plurality of data points to form the divergent length, which is minimized to make the overlapping index optimal.
16. The method according to claim 14, wherein the plurality of data points expresses the relationship between two variables of the zenithal distance (α) and the image height (ρ), representing a projection curve of the camera as a whole and being obtained by means of analyzing the object-to-image conjugate-coordinate array and the postulated locations of the target point.
17. The method according to claim 14, wherein the plurality of data points expresses the relationship between two variables of the zenithal focal length (zFL) and the image height (ρ), representing the level of distortion of the camera as a whole and being obtained by means of analyzing the object-to-image conjugate-coordinate array and the postulated locations of the target point.
18. The method according to claim 16, wherein the zenithal focal length (zFL) is determined by the mathematical equation as follows: zFL = ρ·cot(α), wherein: ρ is the image height, which is the distance between an imaged point and a principal point on the image plane; and α is the zenithal distance, which is the angular distance of a sight ray away from the optical axis.
19. A method for obtaining the optical parameters of a camera, which utilizes the specific characteristic that one single sight ray in space corresponds to one single imaged point on an image plane to obtain the optical parameters of a camera, the method comprises: looking for at least two different absolute coordinates in space, all of which project at the same imaged point, in order to define the sight ray; deducing a zenithal distance (α) on behalf of the sight ray, which is the angular distance of the sight ray away from an optical axis of the camera; further deducing a plurality of zenithal distances (α) on behalf of a plurality of sight rays separately corresponding to a plurality of imaged points; and obtaining a projection function describing the projecting behavior of the camera from the relationship between the plurality of imaged points and the plurality of zenithal distances (α).
20. The method according to claim 18, wherein the method for defining the sight ray further comprises: placing a target with a physical central-symmetric pattern (PCP) thereon in the field of view (FOV) of the camera, in which the PCP is composed of a central mark, located at the geometrical center thereof, and at least two calibration marks, individually termed the first calibration mark and the second calibration mark, located on a straight radial line centered at the central mark; collimating the target and the camera to enable the optical axis of the camera to perpendicularly pass through the central mark; recording the pixel coordinate of the imaged point imaged by the first calibration mark; moving the target along the optical axis by locking the central mark thereon in order to enable the second calibration mark to image in an overlapping manner at the same pixel coordinate of the imaged point; and extracting both the spatial absolute coordinates of the first and second calibration marks, the two spatial absolute coordinates defining the sight ray.
21. The method according to claim 19, wherein the point of intersection of the sight ray and the optical axis is a viewpoint of the camera.
22. The method according to claim 19, wherein the step of collimating the target and the camera is fulfilled by locating a principal point on the image plane, hence a spatial sight ray perpendicularly passing through both the principal point and the central mark representing the optical axis, the method further comprising: further providing a plurality of center-symmetric geometric figures to the PCP; placing the target in the FOV of the camera to allow the PCP to image on the image plane; adjusting the relative position between the target and the camera until the image of the PCP turns into an imaged central-symmetric pattern (ICP); examining the symmetry of the ICP with at least one symmetric index to ensure that the imaged traces of the plurality of geometric figures are symmetrical, the feature coordinate of the imaged point mapped by the central mark locating the principal point; and according to the given position of the target, picking the spatial sight ray perpendicularly passing through both the principal point and the central mark as the optical axis.
23. The method according to claim 21, wherein the at least one symmetric index comprises imaged-distortion indexes, a horizontal deviation index and a vertical deviation index.
24. The method according to claim 21, wherein the plurality of geometric figures is selected from the group comprising concentric circles, concentric rectangles, concentric triangles and concentric polygons.
25. The method according to claim 21, wherein the plurality of geometric figures is a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
26. The method according to claim 18, wherein through analyzing the relationship between the plurality of imaged points and the plurality of zenithal distances (α) a viewpoint of the camera is obtained.
27. The method according to claim 25, wherein the viewpoint is obtained by the steps comprising: placing a target with a physical central-symmetric pattern (PCP) thereon in the field of view (FOV) of the camera, in which the PCP is composed of a central mark, located at the geometric center thereof, and a plurality of calibration marks defined by a plurality of geometric figures; collimating the target and the camera to enable the optical axis of the camera to perpendicularly pass through the central mark; varying the relative position between the target and the camera along the optical axis and recording a plurality of object-to-image conjugate-coordinate pairs corresponding to the plurality of calibration marks separately at the different relative positions to form an object-to-image conjugate-coordinate array; and searching a target point along the optical axis to enable an overlapping index to approach optimal trace-overlap by means of analyzing the object-to-image conjugate-coordinate array on the basis of the target point, in which the target point is the viewpoint of the camera.
28. The method according to claim 26, wherein the plurality of object-to-image conjugate-coordinate pairs is composed of the absolute coordinates of the plurality of calibration marks, or the coordinates of the camera, and the pixel coordinates corresponding to the plurality of calibration marks, in which three parameters, including the image height, object height and object distance, can be deduced by means of analyzing the plurality of object-to-image conjugate-coordinate pairs.
29. The method according to claim 26, wherein the overlapping index is a divergent length, which is deduced by the steps comprising: analyzing the object-to-image conjugate-coordinate array to obtain a plurality of data points; and adjacently connecting the plurality of data points to form the divergent length, which is minimized to make the overlapping index optimal.
30. The method according to claim 28, wherein the plurality of data points expresses the relationship between two variables of the zenithal distance (α) and the image height (ρ), representing a projection curve of the camera as a whole and being obtained by means of analyzing the object-to-image conjugate-coordinate array and the postulated locations of the target point.
31. The method according to claim 28, wherein the plurality of data points expresses the relationship between two variables of the zenithal focal length (zFL) and the image height (ρ), representing the level of distortion of the camera as a whole and being obtained by means of analyzing the object-to-image conjugate-coordinate array and the postulated locations of the target point.
32. The method according to claim 30, wherein the zenithal focal length (zFL) is determined by the mathematical equation as follows: zFL = ρ·cot(α), wherein: ρ is the image height, which is the distance between an imaged point and a principal point on the image plane; and α is the zenithal distance, which is the angular distance of a sight ray proceeding away from the optical axis.
33. A system for obtaining the optical parameters of a camera, which is employed to analyze the relationship between a plurality of sight rays in object space and a plurality of imaged points on an image plane, the system comprises: a target possessed of a physical central-symmetric pattern (PCP) which is composed of a central mark and a plurality of center-symmetric geometric figures defining a plurality of calibration marks; a camera equipped with a non-linear perspective projection lens used to capture the rays from the PCP and form a corresponding image on the image plane; an adjusting platform possessed of three rigid axes which are perpendicular to one another in order to define a coordinate system used to adjust the relative position between the target and the camera; a platform controller connected with the adjusting platform and used to provide power to and limit the moving range of the adjusting platform; and a processing unit connected with the camera and the platform controller, which is used to command the platform controller to adjust the positions of the three rigid axes and grab the absolute coordinates of the plurality of calibration marks and their corresponding imaged pixel coordinates in order to form an object-to-image conjugate-coordinate array as the basis of calculation for obtaining the camera's projection function representing the relationship between the object space and the image plane as the mapping mechanism of the camera.
34. The system according to claim 32, wherein the system further comprises an illuminant used to light up the target.
35. The system according to claim 32, wherein the plurality of calibration marks is individually constructed by an active lighting element.
36. The system according to claim 34, wherein the active lighting element is an LED (Light-Emitting Diode).
37. The system according to claim 32, wherein the processing unit further comprises: a frame grabber connected with the camera, which is employed to turn the analog signals captured by the camera into digital ones; a digital image processor connected with the frame grabber, which is employed to process the digital signals in order to extract the imaged pixel coordinates; and a CPU in charge of controlling the frame grabber and the digital image processor.
38. The system according to claim 32, wherein the processing unit is a personal computer (PC).
39. The system according to claim 32, wherein the camera is selected from the group comprising a CCD camera, a CMOS camera and one mounted with an image sensor.
40. The system according to claim 32, wherein the plurality of geometric figures is selected from the group comprising concentric circles, concentric rectangles, concentric triangles and concentric polygons.
41. The system according to claim 32, wherein the plurality of geometric figures is a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
Description:
METHOD AND SYSTEM FOR OBTAINING OPTICAL PARAMETERS OF CAMERA

BACKGROUND OF THE INVENTION

Field of Invention

The present invention relates to a method and system for obtaining the optical parameters of a camera. Particularly, it is a method and system for analyzing a camera whose lens diverges severely from the rectilinear projection mechanism, such as a fisheye lens, to obtain the optical parameters comprising the principal point, the viewpoint, the focal length constant and the projection function.

Related Art

Camera systems in the field of artificial vision have preferred lenses with a narrow field of view (FOV) in order to obtain images approaching an ideal perspective projection mechanism, for precise measurement and simple image processing. The pinhole model is usually the basis for deducing the camera's parameters. The obtained intrinsic and extrinsic parameters can be employed in visual applications in the quest for improved precision, for instance in 3-D cubical inference, stereoscopy, automatic optical inspection, etc. As for image deformation, a polynomial function is used to describe the deviation of original images from the ideal model or to perform calibration. These applications, however, currently have a common limitation of narrow visual angles and an insufficient depth of field.
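To make the ideal perspective (pinhole) mechanism concrete, the following is a minimal sketch of its projection rule, under the usual convention that an incident ray at zenithal distance α maps to an image height ρ = f·tan(α); the function name and the numeric values are illustrative assumptions, not taken from the patent.

```python
import math

def rectilinear_image_height(alpha: float, f: float) -> float:
    """Image height rho (same unit as f) for a ray at zenithal distance alpha."""
    return f * math.tan(alpha)

# Example: a hypothetical 4 mm pinhole-model lens maps a 30-degree ray
# to an image height of about 2.31 mm.
print(rectilinear_image_height(math.radians(30.0), f=4.0))
```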

A fisheye camera (also termed a fisheye image sensor) mounted with a fisheye lens, which focuses deeper and wider, can capture a clear image with a FOV of over 180 degrees, but a severe barrel distortion develops. If the application is a surveillance system and the request is only to monitor the movement of people or things, a partial distortion in images can be tolerated. If the purpose is to take pictures for virtual reality (VR), it is also acceptable that images "look like" normal ones. However, if the purpose involves the measurement of an object's physical size or 3-D image metering, it has to be admitted that techniques for precisely obtaining the optical parameters of the fisheye camera are still absent.

Because the optical geometry of the fisheye camera is far from the rectilinear perspective projection model, the optical parameters are difficult to deduce precisely with the methods employed in the related art for normal cameras. Therefore, technologies developed for the usual visual disciplines have not resulted in the capability to process images of the fisheye camera (simplified as "fisheye images" hereinafter).

R. Y. Tsai [1987] (Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, Aug. 1987, pp. 323-344) introduced a radial alignment constraint in the radial-symmetric projection mechanism to derive the parameters of the camera. He employed five non-coplanar points of known absolute coordinates in viewed space and the positions of their corresponding images in the image plane, referring to the radial alignment of the optical axis constraining the vectors of the image distance and absolute distance to deduce the coefficients of a rotation matrix and translation matrix, which stand for the orientation, displacement and viewpoint of the camera. The focal length is obtained through the hypothesis of the rectilinear projection geometry, but a non-linear function is taken to describe the distortion mechanism of the image. Its chief merit is the ability to obtain the parameters of the camera with only simple experimental devices. In cameras with little distortion, the results from Tsai's model are quite accurate. But its demonstration is also based on the hypothesis of the rectilinear projection; it will involve a large error under a severely nonlinear projection system like the fisheye lens, and the results will be dependent on the arrangement of calibration marks. Hence, Tsai's model cannot be directly applied in the case of wide-angle cameras, such as ones mounted with the fisheye lens.

However, if an artificial vision system has the advantages of wide-angle views, clear images and the capability of handling a cubical projection mechanism, it will be substantially more functional and competitive, with a wider application field. Moreover, the excellent advantages of a nearly infinite view depth, simple structure and tiny volume are strengths in which other kinds of lenses are scarcely comparable to the fisheye lens. However, the severe distortion is a vital disadvantage in some applications, so the issue of identifying the features and irregular mapping mechanism of the fisheye lens, and accordingly developing the related methodology, is extremely important. Further, applications depending on the accuracy of image calibration, for example in stereoscopy or autonomous robotic vision, are difficult to handle accurately without the precise optical parameters of the fisheye camera.

Owing to the poor accuracy of the optical parameters of a camera deduced on the basis of the rectilinear perspective projection model, some alternative solutions have been advanced for handling the transformation of the fisheye image. Among these alternative approaches, an image-based algorithm aims at a specific camera which mounts a specific lens conforming to a specific projection mechanism, so as to deduce the optical parameters based simply on the images displayed. With reference to FIGs. 1A and 1B, wherein FIG. 1A expresses the imageable area 1 of a fisheye image in a framed oval/circular region and FIG. 1B is the hemispherical spatial projecting geometry corresponding to FIG. 1A, both figures note the zenithal distance α, which is the angle defined by an incident ray and the optical axis 21, and the azimuthal distance β, which is the angular vector in the polar coordinate system whose origin is set at the principal point. Quoting the positioning concept of a globe, β is the angle referring to the mapping domain 13' of the prime meridian 13 on the equatorial plane in the polar coordinate system, shown in FIG. 1B.

Thus, π/2 − α is regarded as latitude and β as longitude. Therefore, if several imaged points are situated along the same radius of the imageable area 1, their corresponding spatial incident rays would be on the same meridional plane (like the sector determined by the arc C'E'G' and two spherical radii); namely, their azimuthal distances (β) are invariant, such as points D, E, F, and G in FIG. 1A corresponding to points D', E', F', and G' in FIG. 1B.

(Note: the phenomenon utilized by the image-based algorithm is not only relevant to the fisheye lens; actually, it is the radial alignment constraint in Tsai's model on condition of using a rectilinear perspective projection lens.) In addition to the specific projection mechanism, the image-based algorithm makes several basic postulates: first, the imageable area 1 of the fisheye image is an analyzable oval or circle, and the intersection of the major axis 11 and the minor axis 12 (or two diameters instead) situates the principal point, which is cast by the optical axis 21 shown in FIG. 1B; secondly, the boundary of the image is projected by the light rays of α = π/2; third, α and ρ are linearly related, wherein ρ, termed a principal distance, is the length between an imaged point (such as point E) and the principal point (point C). For example, the value of α at point E is supposed to be π/4 since it is located in the middle of the radius of the imageable area 1 and, therefore, the sight ray corresponding to point E is destined to pass through point E' in the hemispherical sight space, as shown in FIG. 1B. The same occurs with points C and C', points D and D', points F and F', and so on. An imaged point on the image plane can be denoted as (u, v) in the Cartesian coordinate system or as (ρ, β) in the polar coordinate system, both taking the principal point as their origin; the vector coordinate of its corresponding sight ray in space is denoted as (α, β).

Although the mapping mechanism was not really put into discussion in the image-based algorithm, it is actually the equidistant projection (simplified as the EDP hereinafter) with the postulation of a 180-degree visual angle (simplified as the EDPπ hereinafter). The EDP's projection function is ρ = kα, wherein k is a constant, actually the focal length constant f. In order to fit the postulations described above, a qualified camera body mounted with a qualified lens is utterly necessary; generally it is a special combination with no room for flexibility. Based on the EDPπ postulation, the focal length constant (f) can be obtained by dividing the radius of the imageable area 1 by π/2; the spatial angle (α, β) of the corresponding incident ray can also be analyzed from the planar coordinates (u, v) in the imageable area 1.

Therefore, in light of the known skills of image analysis, an "ideal EDPπ image" can be transformed into the image remapped by the rectilinear perspective projection, referring to any projection line as a datum axis. This image-based algorithm is easy and no extra calibrating object is needed.
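A minimal sketch of this image-based EDPπ back-projection follows, assuming the radius of the imageable area 1 has already been extracted and the pixel coordinates are taken relative to the principal point; all names and values are illustrative, not part of the patent.

```python
import math

def edp_pi_backproject(u, v, radius):
    """Back-project a pixel (u, v), taken relative to the principal point."""
    f = radius / (math.pi / 2.0)   # EDP-pi: boundary rays have alpha = pi/2
    rho = math.hypot(u, v)         # image height (principal distance)
    alpha = rho / f                # EDP projection function: rho = f * alpha
    beta = math.atan2(v, u)        # azimuthal distance
    return alpha, beta

# A point halfway along the radius back-projects to alpha = pi/4,
# matching the point-E example of FIGs. 1A and 1B.
print(edp_pi_backproject(100.0, 0.0, radius=200.0))   # (~0.7854, 0.0)
```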

US patent 5,185,667 accordingly developed a method to transform fisheye images conforming to the rectilinear perspective projection model along the projection mechanism shown in FIGs. 1A and 1B so as to monitor a hemispherical field of view (180 degrees by 360 degrees). This patented technology has been applied in endoscopy, surveillance and remote control as disclosed in US patents 5,313,306, 5,359,363 and 5,384,588. However, it is worth noting that these serial US patents did not concretely demonstrate a general fitness toward average fisheye lenses. Thus, the image-transformation accuracy of the patented technology is a big question when no specific fisheye lens is used.

Currently, in practice, system-application manufacturers must ask for limited-specification fisheye lenses combined with particular camera bodies and provide exclusive software; only then does the patented technology (US patent 5,185,667) have practical and commercial value.

Major parts of the image-based postulates mentioned, however, are unrealistic because many essential factors or variations have not been taken into consideration. First, the EDPπ might just be a special case among possible projection geometric models (note: it is, however, the most familiar projection model of the fisheye lens). Referring to FIG. 2, three possible and typical projection curves of the fisheye lens are shown, implying moreover that the natural projection mechanism of the fisheye lens might be the following: the stereographic projection (or SGP, whose projection function is ρ = 2f·tan(α/2)) and the orthographic projection (or OGP, whose projection function is ρ = f·sin(α)).
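For comparison, the three typical projection functions just mentioned can be written out as a short sketch (f is the focal length constant and α the zenithal distance in radians; the code is illustrative only):

```python
import math

def edp(alpha, f):   # equidistant projection: rho = f * alpha
    return f * alpha

def sgp(alpha, f):   # stereographic projection: rho = 2f * tan(alpha / 2)
    return 2.0 * f * math.tan(alpha / 2.0)

def ogp(alpha, f):   # orthographic projection: rho = f * sin(alpha)
    return f * math.sin(alpha)

# The three models nearly agree for small alpha and diverge as alpha grows,
# which is the point made about FIG. 2.
for alpha in (0.2, 0.8, 1.4):
    print(alpha, edp(alpha, 1.0), sgp(alpha, 1.0), ogp(alpha, 1.0))
```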

Moreover, the coverage of the FOV is not constantly equal to π, perhaps being either larger or smaller. From the curves in FIG. 2, the differences between the three projection models obviously increase with growing zenithal distance (α). Thus, distortions will develop if all projection geometries are locked on the EDPπ to transform images accordingly. Secondly, the FOV of π is hard to evaluate since the shape of the imageable area 1 is always presented as a circle, irrespective of the angular scale of the FOV. A third factor concerns the errors caused in locating the image border even though the FOV is certainly equal to π. The radial decay caused by the radiometric response is an unavoidable phenomenon in a lens, especially when dealing with a larger FOV. This property will induce a radial decay in the image intensity, occurring especially with some simple lenses, so that the actual boundary is extremely hard to set under that bordering effect. Perhaps no real border feature even exists under the consideration of the diffraction phenomenon of light. Finally, if the imageable area 1 of a camera is larger than the sensitive zone of a CCD, only parts of the "boundary" of an image will show; hence the image transformation cannot be effectively executed. Consequently, the image-based algorithm depends extensively on the selected devices, irrespective of whether the lens conforms to the ideal EDPπ postulation or not. Otherwise, the method will result in poor accuracy, modeling errors, a doubtful extracted imageable area 1, an unstable principal point, and practical limitations; these problems would keep the methods in the related art from accurately solving the extrinsic and intrinsic parameters of the camera in the interests of developing computer vision systems, not to mention the viewpoint, on behalf of a camera's absolute position, which plays a key role in 3-D metering.

Furthermore, Margaret M. Fleck [Perspective Projection: The Wrong Image Model, 1994] has demonstrated that the projection mechanisms of lenses hardly fit a single ideal model across the whole angular range in practice; otherwise, optics engineers could not develop lenses with special projection functions, such as the fovea lens, in light of the different requirements in applications. Thus, to impose the postulation of the EDP on all fisheye cameras is an extreme imposition.

On the other hand, although a lens is usually designed with a specific projective mechanism, the refractivity of light, limited by the properties of the material in question, keeps the lens from a perfect design. Moreover, following manufacture it is difficult to verify whether lenses match the expected specifications or not. Further, when a fisheye lens is installed in a real system (such as a camera), its focal length constant may vary accordingly (depending on the precision of the mechanical installation). Consequently, a simple and common technology that can verify the optical features of fabricated devices, in order to provide a guarantee of quality for the products at their sale, would significantly increase their value.

The Gaussian optics model is a convenient means for describing the imaging logic of an optical system. It is usually the reference model in tracing a camera's errors. The model regards an optical system (such as a camera) as a black box whose features have been defined by several cardinal points. That is to say, the complicated projection geometry is ignored and the projective behavior of light rays is logically analyzed directly with the aid of the cardinal points. Referring to FIG. 3, the cardinal points defined by the Gaussian optics model comprise the first and second focal points F1 and F2, the first and second principal points P1 and P2, and the first and second nodal points. If the incident medium of the optical system is air, the nodal points are regarded as the principal points; at the same time, the first principal point P1 is also termed the front nodal point (FNP), and the second principal point P2 is called the back nodal point (BNP). Otherwise, two principal planes 141 and 142 are defined as the datum planes turning the proceeding directions of light rays being projected into the optical system. The intersections determined by the two principal planes 141 and 142 and the optical axis 224 are simply the two principal points P1 and P2. In accordance with the cardinal points F1, F2, P1, and P2 and the principal planes 141 and 142, the infinite light rays passing through the first focal point F1 will turn to parallel the optical axis 224 at the first principal plane 141, like the lines OC and CO'; conversely, if light rays are projected into the optical system in parallel directions, they will turn to pass the second focal point F2 when meeting the second principal plane 142, like the lines OB and BO'. A characteristic of this mapping mechanism is that a light ray from the object point O projected toward the first principal point P1 (i.e. the line OP1) will turn in the direction along the optical axis 224 after passing through P1, and turn again in the direction parallel to the line OP1 after passing through P2 (i.e. the line P2O') until it is mapped on the sensitive element to form the imaged point O'. In other words, the incident ray passing through P1 is parallel to the spatial trace of the light ray passing through P2. In the case of a single lens, the phenomenon appears only in the paraxial zone of a thin lens. However, the Gaussian optics model is an ideal imaging logic which average cameras seek to emulate. A wide-angle lens has to attain this imaging mechanism and is quite different from the fisheye lens.

Regarding a lens such as the fisheye lens, specialists skilled in the art think of no "single viewpoint"; this is correct in the aspect of Gaussian optics. However, if the limits of Gaussian optics could be overcome, the inherent mapping mechanism of the fisheye lens might be analyzable, so that the "single viewpoint" could be logically positioned and the optical parameters deduced thereby. At this point, not only is the reliability of analyzing fisheye images raised, but the applications can also be largely expanded in the field of 3-D metering and so forth. Thus, the present invention will carefully look into these issues and free the procedures of camera parameterization from the ideal image-based postulations, such as the EDP and the image boundary, so as to precisely obtain the optical parameters of the fisheye camera.

SUMMARY OF THE INVENTION

In view of the foregoing, it is an object of this invention to provide a method and system aimed at cameras with lenses of a non-linear perspective projection mechanism, in order to analyze the natural optical projection properties of an optical system.

Another object of this invention is to provide a method and system for obtaining optical parameters (comprising the viewpoint, the orientation of the optical axis, the focal length constant and the projection mechanism of the camera) based simply on the natural optical projection phenomena of a lens so as to extend the applications of the fisheye camera to the fields of stereographic measuring and 3-D metering.

Another object of this invention is to provide a method and system for analyzing image distortion according to the coordinates on the image plane, which can directly quantify image distortion by the zenithal distances (a) deduced from the coordinates of imaged points.

Another object of this invention is to provide a method and system capable of examining a lens, or the spatial mapping mechanism of the camera mounted with the lens, to act as a basis for determining the specifications or testing the qualities of products.

In accordance with the objects described above, the present invention refers to the degree of distortion of an image projected from a target with a physical central-symmetric pattern (PCP) so as to adjust the absolute coordinate of a camera in order to make the features of the image similar to the PCP; namely, an imaged central-symmetric pattern (ICP) appears on the image plane. Next, an object-to-image conjugate-coordinate array, composed of the spatial absolute coordinates of calibration marks on the target and the corresponding image coordinates on the image plane, is sampled and utilized to describe the projecting behavior between object space and the image plane. Accordingly, the projection relationship between the object space (i.e. the sight rays) and the image (i.e. the imaged coordinates) is deduced. Thus, the optical parameters of the camera system can be obtained.
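A minimal sketch of the trace-overlap idea in this summary follows, under simplifying assumptions not fixed by the patent: the optical axis is taken as the Z-axis, each conjugate pair is reduced to (mark radius, mark plane position, image height), and the patent's divergent length is stood in for by the spread of zenithal distances at nearly equal image heights; all names are hypothetical.

```python
import math
from collections import defaultdict

def divergence(pairs, z_vp):
    """Spread of zenithal distances at (almost) equal image heights."""
    groups = defaultdict(list)
    for r, z, rho in pairs:               # mark radius, mark plane Z, image height
        alpha = math.atan2(r, z_vp - z)   # zenithal distance seen from (0, 0, z_vp)
        groups[round(rho, 1)].append(alpha)
    return sum(max(a) - min(a) for a in groups.values() if len(a) > 1)

def search_viewpoint(pairs, z_lo, z_hi, steps=1000):
    """Scan candidate viewpoints along the axis; the minimum is the VP estimate."""
    candidates = [z_lo + i * (z_hi - z_lo) / steps for i in range(steps + 1)]
    return min(candidates, key=lambda z: divergence(pairs, z))
```

At the true viewpoint the (α, ρ) data from all relative positions collapse onto one projection curve, so this divergence reaches its minimum; away from it, the profiles spread apart, which is the behavior the experiment figures later illustrate.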

The present invention does not borrow any assumptions from existing ideal projection functions. Deducing the optical projection mechanism and quantifying the optical parameters of a camera solely with the assistance of the measured projection relationships between the given coordinates of the calibration marks and their corresponding imaged coordinates is a significant characteristic of the present invention. The invention makes a breakthrough from the limitations and presumptions that those skilled in the related art have strongly believed. The invention is suitable for application to the fisheye camera or the kind with special projection functions, and can even act as reverse engineering to analyze camera devices having unknown projection models.

Owing to the capability of the invention to precisely deduce the projection function of the camera, its inverse projection function can calibrate the image distortions and further be applied in the fields of stereology and 3-D metering.

Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will become more fully understood from the detailed description given herein below by illustration only, which illustrations are not limitative of the present invention, and wherein:

FIGs. 1A and 1B show the schematic view of a calibration method based on an image-based algorithm aiming at the EDPπ of the fisheye images in the related art;

FIG. 2 sketches three typical projection functions of the fisheye lens;

FIG. 3 shows the schematic view of the mapping optical path of the Gaussian optics model;

FIG. 4 cubically shows the 3-D optical paths between the PCP and the fisheye camera in the invention;

FIG. 5 shows one embodiment of the PCP, which is an octagonal symmetric pattern defined by three concentric circles;

FIG. 6 shows the first embodiment of the theoretical model disclosed in the invention, where the specific sight ray is determined by three different calibration marks while the target is moved to three different positions;

FIG. 7 shows the first embodiment of the measuring system disclosed in the invention and the related coordinate systems referred to;

FIG. 8 shows the second embodiment of the measuring system disclosed in the invention and the related coordinate systems referred to;

FIG. 9 shows the second embodiment of the theoretical model disclosed in the invention, which takes the solid center of the PCP as the origin of the absolute coordinate system and moves the camera to equivalently deduce the specific sight ray;

FIG. 10 statistically shows the moving traces of the camera in the platform coordinate system measured in an experiment as the ICP is attained, the above traces representing as well the spatial traces of the optical axis in the platform coordinate system;

FIG. 11 statistically shows the pixel coordinates of the imaged center measured in the experiment;

FIG. 12A statistically shows the profiles of the average image heights (ρ) defined by the three concentric circles varied by the different locations (referring to FIG. 10) of the camera in the experiment;

FIG. 12B shows the varying ranges and the overlapping situation of the average image heights (ρ) corresponding to FIG. 12A;

FIG. 13 shows the beautiful overlapping profiles of the zenithal distances (α) to the image heights (ρ) as the viewpoint is exactly set in the experiment;

FIG. 14 shows the divergent profiles of the zenithal distances (α) to the image heights (ρ) when the location of the viewpoint is shifted from its exact position;

FIG. 15 shows the beautiful overlapping profiles of the zenithal focal length (zFL) to the image heights (ρ) as the viewpoint is exactly set in the experiment;

FIG. 16 shows the divergent profiles of the zenithal focal length (zFL) to the image heights (ρ) when the location of the viewpoint is shifted from its exact position; and

FIG. 17 shows the divergent length composed of multiple traces of the zFL, which is used to evaluate the overlapping degree of the profiles, an example taken from FIGs. 15 and 16.

DETAILED DESCRIPTION OF THE INVENTION

Several coordinate systems are defined in advance of the detailed technical disclosure, for convenience of analysis:

1. The absolute coordinate system of W(X, Y, Z) places its origin at the geometric center of a target, and defines the direction perpendicularly away from the target as the positive Z-axis.

2. The image-plane coordinate system of C'(x, y) or P'(ρ, β) represents the image plane of the camera in the Cartesian coordinate system or the polar coordinate system, in which its origin is set at the principal point.

3. The pixel coordinate system of I(u, v) represents images which can be directly observed on a computer screen with a unit of "pixel". The principal point is imaged at the coordinate denoted as I(uc, vc) on the computer screen. Basically, the dimensions on the image plane, C'(x', y') or P'(ρ', β'), can correspond to the pixel coordinate system of I(u, v). Therefore, the Cartesian coordinate system of C(u, v) or the polar coordinate system of P(ρ, β) can represent as well the pixel coordinate system of I(u, v), in which I(uc, vc) is the origin.

4. The camera outer-space coordinate system of N(α, β, h) describes the geometry of the sight rays in the field of view (FOV) of the camera.

5. The camera inner-space coordinate system of S(α', β', f) describes the projection geometry inside the camera.

The serial numbers of sampled points will be identified at the subscript positions, and the sampling sequence is indicated by means of an array index. For example, Wn(a, b, c)[k] expresses that a calibration mark n is located at (a, b, c) in the absolute coordinate system during the k-th test. The other coordinates are denoted by a similar rule. Part of the denotation could be omitted in the interests of fluent comprehension while readability is not adversely affected. These coordinate denotations will be quoted in the invention hereinafter.
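As an illustration only, the denotation above can be mirrored by a plain data structure; the coordinate values here are hypothetical:

```python
# W[n][k]: absolute coordinate of calibration mark n during the k-th test,
# mirroring the denotation Wn(a, b, c)[k]; values are hypothetical.
W = {
    313: [(10.0, 0.0, 0.0), (10.0, 0.0, 5.0)],
    323: [(20.0, 0.0, 0.0), (20.0, 0.0, 5.0)],
}
print(W[313][1])   # mark 313 in the second test: (10.0, 0.0, 5.0)
```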

The fisheye lens diverges severely from the Gaussian optics model, being a non-linear perspective projection lens, which means its projective behavior cannot be interpreted by the well-known pinhole model following the rectilinear perspective projection mechanism. Compared with other lenses, an image captured by the fisheye lens (simplified as a fisheye image) is possessed of a severe barrel distortion. The fisheye lens is frequently employed to create dramatic or extraordinary effects, but it is found lacking in the accurate mapping of the original dimensions and features of objects.

However, there are still a couple of rules to follow while mapping images. Rule 1: the distortions throughout the fisheye image are distributed with a radial symmetry whose point of origin is termed the principal point, and the optical projection geometry in space symmetrically encircles the optical axis of the camera. Rule 2: all object points located on the same specific sight ray in object space are totally projected onto a specific imaged point on the image plane. The projection mechanism in space can be postulated as follows: incident rays (including active or inactive reflective light rays) cast from an object in the FOV will logically converge on a unique spatial optical center (termed the viewpoint, simplified as VP) and then divergently map on the image plane in light of a projection function. The rules and the postulation described above are well known to specialists skilled in the related art of optical engineering.

The present invention designs a particular target according to the characteristic of the radial symmetry of distortion across a fisheye image (rule 1) to locate the principal point on the image plane and position the optical axis in space. Then, the specific projection relationship between the sight ray and the imaged point (rule 2) is analyzed to obtain the absolute coordinate of the VP on the optical axis as well as the absolute coordinate of the sight ray in space; the focal length constant is deduced accordingly, and the projection model of the camera is induced consequently. The present invention needs no assumption of any existent projection models, such as the equidistant projection (EDP), the stereographic projection (SGP) or the orthographic projection (OGP); any camera possessed of the mapping properties of fisheye images, or similar ones, is analyzable in the invention.

The spatial projecting symmetry of rule 1 is illustrated in FIG. 4, which shows the optical projection paths between the fisheye camera and the planar target 30 placed in the FOV thereof, wherein the fisheye lens 221 and the image plane 225 stand equivalently for the fisheye camera. From the view of geometry, a planar drawing capable of representing an axis-symmetric geometrical arrangement in space can image a center-symmetric image inside the camera. Therefore, referring to FIG. 5, a planar target 30 with a physical central-symmetric pattern (PCP) 31 thereon is placed in the FOV of the camera.

The PCP 31 is composed of a central mark 38, located at the geometric center thereof, and a plurality of calibration marks 311-318, 321-328, 331-338 defined by a plurality of center-symmetric geometric figures. The relative position of the target 30 and the camera is adjusted in order to obtain an imaged central-symmetric pattern (ICP) 226 on the image plane 225. Obtaining the ICP 226 means the optical axis 224 perpendicularly penetrates both the principal point 227 on the image plane 225 and the central mark 38 of the PCP 31.

The position of the optical axis 224 in space can be determined absolutely by referring to the target 30 because its absolute position is man-made and given in advance. The feature coordinate of the blob imaged by the central mark 38 (or the center of gravity of the imaged blob) is regarded as the principal point 227 on the image plane 225.

If the projection behavior of a camera conforms to any known circular-function relationship (note: meaning the product of a circular function and a focal length), the incident rays cast from the PCP 31 will certainly and essentially achieve a collimating mechanism; namely, referring to FIG. 4 again, all incident rays will converge at a logical optical center of the fisheye lens 221, termed the front cardinal point (FCP) 222, and divergently refract onto the image plane 225 (or the optical sensor) from the back cardinal point (BCP) 223 according to the projection function, so that two light cones, whose zeniths are separately at the FCP 222 and BCP 223, are formed. The FCP 222 and BCP 223 are two reference points for the two distinct spaces delimiting the projecting behaviors inside and outside the fisheye camera: sight rays refer to the FCP 222, and the image plane 225 refers to the BCP 223, while analyzing the projection mechanism of the fisheye camera. The distance between the two cardinal points 222 and 223 is arbitrary because it is not a parameter of the camera system. The present invention therefore merges the two cardinal points 222 and 223 at a single viewpoint (VP), or picks the FCP 222 on behalf of the VP, in order to simplify the imaging logic. Such a technique of expression is often seen in volumes on optics in discussing lenses.

The equivalent mapping mechanism of rule 2 is shown in FIG. 6. As far as an optical model is concerned, the different object points on the sight ray 80 (such as the three calibration marks 313, 323, and 333 in FIG. 5, whose absolute coordinates are W313[p], W323[q], and W333[r] respectively while the target 30 is passed through the three different locations p, q, and r) can hardly be told apart simply by a single image message (such as the imaged point 91) on the image plane 225. Another aspect is that if at least two different object points simultaneously map at the same imaged blob, the sight ray 80 defined by these object points can be determined by the spatial absolute coordinates thereof. The intersection of the sight ray 80 and the optical axis 224 situates the FCP 222 or, instead, the VP.
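The geometric core of this mechanism can be sketched in a few lines: two calibration marks that image at the same pixel define the sight ray 80, and intersecting that line with the optical axis (taken here as the Z-axis, an assumption made only for the sketch) locates the FCP/VP; the coordinates are hypothetical.

```python
import numpy as np

def viewpoint_on_axis(p1, p2):
    """Intersect the line through p1 and p2 with the Z-axis (x = y = 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    # Choose t so the x and y components of p1 + t*d vanish (least squares).
    t = -(p1[0] * d[0] + p1[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
    return p1 + t * d   # for consistent measurements, x and y come out ~0

# Two marks imaging in an overlapping manner, e.g. W313[p] and W333[r]:
print(viewpoint_on_axis((10.0, 0.0, 0.0), (30.0, 0.0, 40.0)))   # [0. 0. -20.]
```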

The projection mechanism of any sight ray 80 (also called the incident ray) of the fisheye lens can be explained by the Gaussian optics model. The sight ray 80 is assumed to be refracted at the FCP 222 (that is, the FNP 222' in the Gaussian optics model, referring to FIG. 3) and then mapped on the image plane 225 to form an imaged point 91 whose coordinate is C'(u, v) after the sight ray 80 meets the optical axis 224; therefore a track parallel to the sight ray 80 from the imaged point 91 can be inferred inversely to obtain the corresponding BNP 223'. If the projection behavior of the sight ray 80 conforms to the Gaussian optics model, the BNP 223' matches the BCP 223, and the focal length constant (f) can be derived from an object distance, an object height and an image height by employing simple mathematical geometry. Only lenses following the Gaussian optics model obtain the same focal length as a constant wherever the imaged point 91 is located.

If the sight ray 80 corresponding to any coordinate on the image plane 225 is analyzable, the mapping geometry of the camera is totally describable without the need for the camera's projection function. This is a vital subject and basis of the invention and will be disclosed hereinafter.

Due to the severe distortions caused by the fisheye lens, it is impossible to enable all sight rays 80 to pass through a single BNP 223'. That is to say, there is no unique focal length constant in view of the Gaussian optics model. However, the geometric projection mechanism between a specific sight ray 80 and its corresponding imaged point 91 is still separately describable by a Gaussian model. The invention terms the individual focal length attained by this method a zenithal focal length (simplified as the zFL hereinafter), that is, the distance between the BNP 223' and the principal point 227 shown in FIG. 6; wherein the location of the BNP 223' is determined by the line parallel to the sight ray 80 and passing the imaged point 91 C'(u, v) observable on the image plane 225.

The zFL can also be called the image-height focal length because image heights of equivalent values will correspond to the same zFL, and each image height is determined by an imaged point 91. Thus, the existence of a different but unique zFL corresponding to every single imaged point 91 is definitely inferred in view of the Gaussian optics model, but its value decreases while the image height increases. Based on this one-to-one correspondence, the mapping/distortion mechanism of the fisheye camera can also be described by the zFL, one of the parameters of the camera.

If the projection function of the fisheye lens is describable by a circular function, the relationship between the image height (ρ) and the zenithal distance (α) is deducible as well; wherein the zenithal distance (α) is the angular distance of an incident ray away from the optical axis 224 in space. Taking the EDP as an example, the image height (ρ) is determined by the product of the zenithal distance (α) and the focal length constant (f), namely ρ = f·α; the value of α is derivable when both ρ and f are given.
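A short worked sketch of these two relations, with illustrative numbers only (the EDP image height ρ = f·α and the zenithal focal length zFL = ρ·cot(α) introduced hereinbefore):

```python
import math

f = 2.0                        # assumed focal length constant (mm)
alpha = math.radians(60.0)     # zenithal distance of the sight ray
rho = f * alpha                # EDP image height: about 2.094 mm
zfl = rho / math.tan(alpha)    # zFL = rho * cot(alpha): about 1.209 mm
print(rho, zfl)                # the zFL is smaller than f and keeps
                               # decreasing as the image height grows
```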

Referring to FIG. 4 again, the relationship between the zenithal distance (α) defined by the outer light cone and the image height (ρ), which is the radius at the bottom of the inner light cone, is described by the projection function. Inversely, if the relationship can be measured, the projection function can be inferred as well. This mechanism is not limited to one single function with a closed form, such as trigonometric functions. The present invention terms as an ideal lens the kind whose mapping mechanism throughout the entire FOV can be described by a single circular function. Logically, once the natural projection function of the lens is obtained, there does exist a BCP 223 in a model.

However, as far as the outer light cone is concerned, it is correct to say that there is only one FCP 222, because the origin of the sight ray 80 utilized to describe the absolute projection space is modeled as infinite, and it is reasonable to regard the camera as a point.

Referring to FIG. 4 again, if there is a camera mounting an ideal lens, the value of α corresponding to a physical object point in the FOV can be obtained by a simple tangent function if the absolute coordinate of the FCP 222 is given. Furthermore, the point of intersection of the incident sight ray 80 and the optical axis 224 situates the FCP 222; taking the image plane 225 as a base and referring to the focal length constant (f) as the height, the unique image height (ρ) corresponding to the sight ray 80 can infer the BCP 223, the zenith of the inner light cone.

The absolute coordinate of the FCP 222 and the orientation of the optical axis 224, both of which are extrinsic parameters of the camera, represent the position of the camera; the focal length constant and the projection function are regarded as intrinsic parameters of the camera. The invention develops a measuring system and an analyzing methodology to verify that these parameters are logically deducible without knowledge of the camera's projection model.

The realization of the mapping mechanism described above is the key basis for designing the measuring system of the invention. The arrangement of the measuring system is shown in FIG. 7, in which the movement of the target 30 corresponds to the one in FIG. 6. The measuring system executes a computer program that automates the measuring procedure, comprising the capture of images, the extraction of the feature coordinates of the imaged blobs, and the deduction of the intrinsic and extrinsic parameters of the camera 22.

Generally speaking, the measuring system is a composition of hardware devices and software elements used to realize the mapping mechanism described above. Apart from the devices and elements in operation, the quality of measurement is also greatly influenced by the surrounding factors of the laboratory, such as the relative positions of the devices and both the specification and installation of lamps. The theoretical model in FIG. 6 coupled with the measuring system in FIG. 7 presents the first embodiment of the invention. In practice, however, the first embodiment causes irregular illumination, cast from the illuminant 24, on the surface of the target 30 while the target 30 moves to different locations; this can certainly affect the experimental accuracy. A second embodiment of the invention is therefore introduced, with the benefits of simplified calculation and uniform illumination, as shown in FIG. 8, in which the target 30 is fixed as the origin of reference of the absolute coordinate system 28 and the camera 22 is moved instead. The corresponding theoretical model of the second embodiment is shown in FIG. 9. The description hereinafter takes the second embodiment as a representative example to disclose the details of the invention. However, this does not imply a limitation of the invention; any variations or modifications following the same spirit are not to be regarded as a departure from the spirit and scope of the invention.

The present invention defines four interdependent coordinate systems in the measuring system; their relative positions are shown in FIG. 8: (1) the absolute coordinate system 28 (denoted as W=(X, Y, Z)) defined by the target 30; (2) the platform coordinate system 29 (denoted as W'=(X', Y', Z')) defined by the adjusting platform 23 driving the orientation and location of the camera 22; (3) the pixel coordinate system 27 (denoted as I=(u, v)) displayed on a computer screen and corresponding to the image-plane coordinate system 27' (denoted as C'(x', y') or P'(p', ß')) on the image plane 225; and (4) the camera coordinate system 26 (denoted as N(a, ß, h) and S(a', ß', f)) utilized to describe the imaging geometry of the camera 22.

The camera coordinate system 26 is composed of N(a, ß, h) and S(a', ß', f), in which a and ß have been defined hereinbefore, while a' and ß' are the corresponding angular distances determined by virtual rays with reference to the image plane 225. Referring to FIG. 4 again, S(a', ß', f) defines the refracted light rays bounded on the inner light cone placing its zenith at the BCP 223, while N(a, ß, h) defines the corresponding sight ray 80 bounded on the outer light cone whose zenith is at the FCP 222. Owing to the irregular refraction from the outer to the inner space of the camera 22, a' is not equal to a, but ß' is usually the equivalent of ß (or ß+π). The functional relationship between a and a' can represent the mapping mechanism of the camera 22; however, a' cannot be directly observed.

The image-plane coordinate system 27' defines the dimensions of images on the image plane 225, either in the Cartesian coordinate system (C'(x', y')) or in the polar coordinate system (P'(p', ß')), placing the origin at the principal point 227.

The pixel coordinate system 27 expresses the dimensions of images displayed on the computer screen, either in the Cartesian coordinate system (C(x, y)) or in the polar coordinate system (P(p, ß)), placing the origin at the feature coordinate imaged by the principal point 227 (denoted as I(uc, vc) = C(0, 0) = P(0, ß)) with the pixel as unit.

Referring to FIG. 5 again, the absolute coordinate system 28 regards the center of the PCP 31 (i.e. the barycentric coordinate of the central mark 38) as the origin, and defines the X-axis by the feature coordinates of the horizontal calibration marks 335, 325, 315, 38, 311, 321 and 331 and the Y-axis by the feature coordinates of the vertical calibration marks 333, 323, 313, 38, 317, 327 and 337; accordingly W38 = W(0, 0, 0).

During the experiment, the position of the target 30 is kept fixed so that the absolute coordinates of all the calibration marks 38, 311-318, 321-328, and 331-338 are known. The camera 22 is then moved within a particular object space, and the resulting image changes enable the mapping mechanism of the sight ray 80, defined by a and ß in the camera coordinate system 26, to be analyzed. The details of the analysis will be disclosed hereinafter.

Referring to FIG. 8 again, the camera 22 is fixed on the adjusting platform 23, which is composed of three mutually perpendicular rigid axes, that is, the X' rigid axis 231, the Y' rigid axis 232 and the Z' rigid axis 233, correspondingly representing the X'-, Y'- and Z'-axis in the platform coordinate system 29; wherein the positive direction of the Z'-axis points away from the target 30. Ideally, the three axes (X', Y', Z') of the platform coordinate system 29 have to be parallel to the ones (X, Y, Z) of the absolute coordinate system 28. In practice, however, there is initially a six-dimensional difference between the two coordinate systems 28 and 29. Therefore, in addition to freely driving the three rigid axes 231, 232, and 233 of the adjusting platform 23 to position the camera 22 fixed thereon, an omnidirectional base 70 is installed on the Y'-axis (under the camera 22) for panning, tilting or rotating the camera 22. This mechanical arrangement can collimate the optical axis 224 to the Z-axis. The details of the operation will be disclosed hereinafter.

The pixel coordinate system 27 is utilized to express the two-dimensional memory coordinates of digital video signals captured by the camera 22, digitized by a frame grabber 252 and then provided to a CPU 251 or a digital image processor 253. Logically, the values in the pixel coordinate system 27 can represent the dimensions of images on the image plane 225; however, the proportion between their units reveals a scaling relationship, termed the aspect ratio. A square image displayed on the screen might not have an aspect ratio equal to 1; a non-unit aspect ratio turns an originally circular image into an elliptic one.

In practice, the image mapped on the image plane 225 is displayed on the screen for observers, who can only read the values in the pixel coordinate system 27 to indirectly represent the dimensions of images. The value of the aspect ratio can also be determined by the invention. The details will be disclosed hereinafter.
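For illustration only, a minimal sketch of how a pixel coordinate might be corrected by the aspect ratio before an image height is computed; whether the scale factor belongs on the u- or the v-axis depends on the hardware, so the choice below is an assumption, and the names are hypothetical:

    def pixel_to_centered(u, v, uc, vc, aspect_ratio):
        # Shift the origin to the principal point I(uc, vc) and rescale one
        # axis so that an image circle squeezed into an ellipse by a non-unit
        # aspect ratio becomes circular again (v-axis scaling is assumed).
        return u - uc, (v - vc) * aspect_ratio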

The measuring system not only builds a mechanical structure for the coordinate systems described above but also functionally serves as a device for capturing images, calculating feature coordinates, and adjusting coordinate systems. The details of the rest of the major devices are described as follows: 1. The camera 22: a BW camera applied to surveillance, equipped with a 1/2-inch CCD (charge coupled device) and a fisheye lens (with a vendor's specification of 2.8 mm focal length). It is capable of focusing at infinity, outputs video signals conforming to the NTSC (National Television System Committee) standard and transmits the video signals to the frame grabber 252. In another embodiment, a CMOS (Complementary Metal Oxide Semiconductor) camera, or any camera mounting an image sensor, is adopted instead of the CCD camera.

2. An illuminant 24: an important element in the invention. The category and arrangement of the illuminant 24 directly affect the distribution of the illumination and cause different experimental results. The invention takes two lamps driven by high-frequency converters as the illuminant 24 to light up the target 30. The relative position of the illuminant 24 and the target 30 is fixed for the full duration of the experiment in order to keep the illumination stable.

3. A platform controller 21: utilized to control the movement of the adjusting platform 23 through software commands, to provide power to the adjusting platform 23 and to limit its moving range. If necessary, users can also manually adjust the orientation of the camera 22.

4. A processing unit 25: a normal personal computer (PC), which is employed to retrieve, process and calculate the images of the camera 22 and to command the platform controller 21 to adjust the position of the camera 22. The CPU 251 is utilized to execute the software, handle the entire operation and manage data; the digital image processor 253, which is connected to the frame grabber 252, is employed to process digital signals in order to extract pixel coordinates; the frame grabber 252 is utilized to turn analog signals into digital ones and store them in a memory, so as to supply the digital image processor 253 and the CPU 251 in calculating, in real time, the imaged feature coordinates corresponding to the calibration marks 38, 311-318, 321-328, and 331-338. The frame grabber 252, the digital image processor 253 and the CPU 251 are integrated in the PC, which runs the MS Windows operating system. The software developed for the experimental operation will be disclosed hereinafter.

5. The target 30: fixed in the FOV of the camera 22 as a reference for analyzing the sight ray 80. A physical central-symmetric pattern (PCP) 31 is illustrated on the target 30. The PCP 31 is composed of a central origin located at its geometric center and a plurality of calibration marks defined by a plurality of center-symmetric geometric figures. The embodiment of the PCP 31 shown in FIG. 5 takes the central mark 38 as its central origin and defines three concentric circles as the plurality of center-symmetric geometric figures. Eight individual calibration marks 311-318, 321-328, and 331-338 are symmetrically placed on each of the three concentric circles to form three symmetric regular octagons. The radii of the three concentric circles are 20 mm, 40 mm and 60 mm respectively. The locations of the calibration marks 311-318, 321-328, and 331-338 begin at 0 degree and shift by π/4 each, totaling 24 marks. They are black squares, each 8 mm wide and 8 mm long. In addition, the four outermost calibration marks 331, 333, 335, and 337 serve as tangent points to form a square whose four vertexes are the test marks 341-344.

The PCP 31 is drafted with computer-aided design (CAD) software and printed on a piece of high-quality photo paper by an ink-jet printer to form the target 30.

In another embodiment, the marks 38, 311-318, 321-328, 331-338, and 341-344 are formed by LEDs (light emitting diodes) as active lighting elements in order to attain better image quality; in that case, the illuminant 24 can be absent from the measuring system. During the experiment, the target 30 is fixed at a proper location on an experimental table so that its absolute coordinate can be precisely defined.

The embodiment of the PCP 31 is not limited to the one in FIG. 5, which depicts three regular octagons defined by three concentric circles. Any PCP 31 performs well as long as it fits a concentric and symmetric design. Hence triangles, rectangles, squares or any other polygons shaped by a number of calibration marks are all possible forms for the PCP 31.

The better choice, however, is for each of the geometric figures to be composed of an even number of calibration marks, which has the advantage of easy calculation. The limiting case of a polygon is a circle, such as the PCP 31 shown in FIG. 4. Besides, a 3-D calibration target 30 may serve the same function if it symmetrically surrounds the optical axis 224.

Before entering into the details of implementing the invention, the issues that the invention intends to solve are listed as follows:
1. deducing the principal point 227 on the image plane 225 and situating the absolute position of the optical axis 224 in space;
2. deducing the absolute coordinate of the FCP 222 (also called the viewpoint);
3. deducing a length profile of the zFL (also termed the length profile of the image-height focal length);
4. deducing the projection function from the absolute coordinate system 28 to the camera coordinate system 26; and
5. deducing the degree of distortion of the image and the calibration mechanism.

Relative to the above issues, the invention discloses an experimental procedure and a deductive method described as follows: A. Locate the principal point 227 I(uc, vc) in the pixel coordinate system and collimate the optical axis 224 to W(0, 0, z) by regularizing the image of the PCP 31 to achieve an ICP 226.

According to the axial-symmetric projection geometry of the fisheye lens, the radial-symmetric distortion of the fisheye image and the centric-symmetric arrangement of the PCP 31, a concentric and symmetric image, i.e. the ICP 226, can be achieved if and only if the optical axis 224 is collimated to the Z-axis of the absolute coordinate system 28. The spatial disposition of the measuring system is adjusted in light of the symmetry of the imaged blobs (actually the symmetry of their barycentric coordinates, termed the feature coordinates) mapped by the calibration marks and displayed on the computer screen. A computer program, which controls the procedure to dynamically adjust the absolute coordinate of the camera 22, performs the work of adjusting the camera's orientation with manual assistance. The camera coordinate system 26 is therefore collimated to the absolute coordinate system 28 once the adjustment is completed. At that time, the geometric center of the ICP 226 (i.e. the feature coordinate of the imaged blob mapped by the central mark 38 if the PCP 31 is similar to the one shown in FIG. 5) is the principal point 227; meanwhile, the optical axis 224 perpendicularly penetrates both the principal point 227 and the geometric center of the PCP 31 (i.e. the feature coordinate of the central mark 38). The details are described as follows: 1. Use one's eyesight to properly set the relative position between the adjusting platform 23 and the target 30 in order to make the three rigid axes 231-233 of the adjusting platform 23 as parallel as possible to the axes of the absolute coordinate system 28.

2. Properly place the illuminant 24 to distribute uniform illumination on the target 30 and regard the center of the PCP 31 (the barycenter of the central mark 38) as the origin W(0, 0, 0) of the absolute coordinate system 28.

3. Install the camera 22 on the Y' rigid axis 232 of the adjusting platform 23. An omnidirectional base 70 is mounted at the bottom of the camera 22 to manually pan, tilt or rotate the camera 22. The optical axis 224, denoted as S(0, 0, f) in the camera coordinate system 26, has to coincide with the Z'-axis, denoted as W'(0, 0, z) in the platform coordinate system 29, so that the movement of the camera 22 along the Z' rigid axis 233 can be regarded as equivalent to the one along the optical axis 224. Therefore, in practice, the utmost care is exerted to collimate the Z-axis of the absolute coordinate system 28, the Z'-axis of the platform coordinate system 29 and the optical axis 224 of the camera coordinate system 26 in order to align them along the same straight line.

4. Vary the coordinate of the camera 22 on the adjusting platform 23 to locate the four test marks 341-344 near the four corners of the computer screen in order to maximize the calibrated range.

5. A symmetry-analyzing background program is employed to keep tracing the geometric center of the ICP 226 (i.e. the feature coordinate of the imaged blob mapped by the central mark 38) and, by referring to the center I(u38, v38), to calculate the "image-distortion indexes" and "horizontal/vertical deviation indexes" of the imaged blobs mapped by the calibration marks 311-318, 321-328 and 331-338. These indexes are displayed on the computer screen and fed back to the program, which commands the platform controller 21 to drive the adjusting platform 23 and vary the coordinate of the camera 22 in the platform coordinate system 29, that is W'(x', y', z'), until the indexes approach optimal values. If the indexes displayed on the screen have reached satisfactory standards, the program goes on to the next step; otherwise this step is repeated.

6. Record the "imaged-distortion indexes", the "horizontal/vertical deviation indexes" and the "object-to-image conjugate coordinates", denoted as (Wc'(x', y', z')[0], In(u, v)[0]), obtained during the procedure. Wherein, Wc'(x', y', z')[0] means the platform coordinate of the camera 22, while In(u, v)[0] is the pixel coordinate of the calibration mark n; n may be equal to 38, 311-318, 321-328 or 331-338, representing any calibration mark of the PCP 31 in FIG. 5; k=0 denotes the initial position of the camera 22, increasing by 1 with each movement, such as the p, q and r measured in FIGs. 6 and 9.

The collimation procedure for the camera coordinate system 26 and the absolute coordinate system 28 is thus achieved. The "imaged-distortion indexes" and the "horizontal/vertical deviation indexes" are interpreted here before the discussion advances. The symmetry-analyzing background program keeps running during the whole experiment. In order to enable the software to guide the task of adjustment, not only the image of the PCP 31 but also these indexes, representing the symmetry of the image, are displayed on the screen. The measuring system actively adjusts the position of the camera 22 in light of these symmetric indexes, sometimes with manual assistance. The imaged-distortion indexes and the horizontal/vertical deviation indexes are defined as follows:

a. The imaged-distortion indexes (su[m][k], sv[m][k]) are the summations of the imaged differences between the calibration marks 311-318, 321-328, 331-338 and the central mark 38, taken individually in the u-vector and v-vector of the pixel coordinate system 27. Referring to the serial numbers shown in FIG. 5, the formulae of the indexes are su[m][k] = Σ (u(300+m*10+a)[k] − u38[k]) and sv[m][k] = Σ (v(300+m*10+a)[k] − v38[k]), summed over a, wherein 1 ≤ m ≤ 3, 1 ≤ a ≤ 8 and k=0. Here u(300+m*10+a) is simply un, standing for the u-vector of In(u, v), and the same rule applies to v(300+m*10+a). The imaged-distortion indexes (su[m][k], sv[m][k]) should both approach zero if an ideal ICP 226 is obtained, by reason of the symmetric distribution of the calibration marks 311-318, 321-328, and 331-338.

b. The horizontal deviation index is the standard deviation of the v-vectors of In(un, vn)[k], which are the feature coordinates of all horizontal imaged blobs in the pixel coordinate system 27. Referring to the PCP 31 shown in FIG. 5, n is equal to 335, 325, 315, 38, 311, 321 and 331, so the horizontal deviation index is the standard deviation of the series composed of v335[k], v325[k], v315[k], v38[k], v311[k], v321[k] and v331[k].

c. The vertical deviation index is the standard deviation of the u-vectors of In(un, vn)[k], which are the feature coordinates of all vertical imaged blobs in the pixel coordinate system 27. Referring to the PCP 31 shown in FIG. 5, n is equal to 333, 323, 313, 38, 317, 327 and 337, so the vertical deviation index is the standard deviation of the series composed of u333[k], u323[k], u313[k], u38[k], u317[k], u327[k] and u337[k].
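A minimal sketch of the three indexes defined above, assuming the feature coordinates are held in dictionaries keyed by (mark serial number, test index k); the container layout and function names are assumptions for illustration, not the disclosed software:

    import statistics

    def imaged_distortion_indexes(u, v, m, k):
        # su[m][k], sv[m][k]: summed u- and v-differences between the eight
        # marks on concentric circle m (serials 300 + m*10 + a, a = 1..8)
        # and the central mark 38; both approach zero for an ideal ICP.
        su = sum(u[(300 + 10 * m + a, k)] - u[(38, k)] for a in range(1, 9))
        sv = sum(v[(300 + 10 * m + a, k)] - v[(38, k)] for a in range(1, 9))
        return su, sv

    def deviation_indexes(u, v, k):
        # Horizontal index: stdev of the v-vectors of the horizontal marks;
        # vertical index: stdev of the u-vectors of the vertical marks.
        horizontal = (335, 325, 315, 38, 311, 321, 331)
        vertical = (333, 323, 313, 38, 317, 327, 337)
        return (statistics.pstdev(v[(n, k)] for n in horizontal),
                statistics.pstdev(u[(n, k)] for n in vertical))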

Minimizing the symmetric indexes described above collimates the optical axis 224 (S(0, 0, f)) to the Z-axis of the absolute coordinate system 28. This implies that the Z-axis also perpendicularly passes through the principal point 227 (I(uc, vc)) on the image plane 225, and the optical axis 224 becomes traceable by referring to the given absolute coordinate of the PCP 31. Nevertheless, the absolute coordinate of the camera 22 (i.e. the absolute coordinate of the viewpoint) remains unknown at this point.

The aspect ratio is also a parameter in the field of camera calibration. The invention can easily attain this parameter because the horizontal vectors and the vertical vectors of In(un, vn)[k] reflect it directly. If the aspect ratio is equal to one, in an ideal situation the image heights (p) of the vertexes of a regular polygon will be exactly the same after calibration. This is the case in practice.

B. Deduce the absolute coordinate of an identical sight ray 80 by realizing the overlapping mechanism of different calibration marks located in the same radial direction, and locate the viewpoint of the camera 22.

Deducing the mapping mechanism of the camera 22 from the analysis that different absolute coordinates map onto the same imaged point 91 is a significant innovation of the invention. These different coordinates construct a sight ray 80, termed the identical sight ray 80. Any single imaged point on the image plane 225 can be modally analyzed to obtain its corresponding identical sight ray 80.

Referring to the measuring system in FIG. 8, the camera 22 is moved further along the optical axis 224, which is locked on the normal passing through the central mark 38. The imaged blobs mapped by the calibration marks 311-318, 321-328, and 331-338 move closer to the principal point 227 as the object distances grow. During this process, different calibration marks may map at the same location of one imaged blob. The relative offsets of the camera 22 (actively driven by the program) are measurable. This offset data, coupled with the feature coordinate of the particular imaged blob overlapped by different calibration marks and the given absolute coordinates of the calibration marks 311-318, 321-328, 331-338, can infer the absolute location in space of the identical sight ray 80 corresponding to that particular imaged blob.

Referring to FIG. 6 again, the present invention takes the first embodiment as an example to explain how to locate the identical sight ray 80 in theory; in this embodiment the camera 22 is fixed but the target 30 is moved. If at least two different calibration marks (like the three calibration marks 313, 323, and 333 in the vertical direction on the target 30) jointly map at the same imaged point 91 (I(u, v)) while they move to at least two different absolute coordinates in space (such as W313[p], W323[q], and W333[r] in the figure), the identical sight ray 80 corresponding to the imaged point 91 can be defined thereby. Because the line composed of the calibration marks lying on the same diameter of the PCP 31 is constantly perpendicular to the optical axis 224, the particular overlapped imaged point 91 is obtainable, that is I313(u, v)[p] = I323(u, v)[q] = I333(u, v)[r], while driving the target 30 to move along the optical axis 224. Actually, this is identical to Tsai's radial alignment constraint and is a characteristic of the radial-symmetric mapping mechanism, well known to those skilled in the art. The intersection point of the identical sight ray 80 and the optical axis 224 is exactly the FCP 222, also termed the viewpoint (VP), representing the absolute coordinate of the camera 22 in space.
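The geometry of the identical sight ray 80 can be illustrated with a small numeric sketch: given two or more mark positions (radial offset from the axis, position along the axis) known to image at the same point, fit the line through them and intersect it with the optical axis. The function name and data below are hypothetical, not measured values from the disclosure:

    import numpy as np

    def viewpoint_on_axis(marks_rz):
        # Each row is (R, Z): radial offset from the optical axis and position
        # along it for one mark position on the identical sight ray 80.
        pts = np.asarray(marks_rz, dtype=float)
        slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], 1)  # R = slope*Z + b
        return -intercept / slope    # Z where R = 0, i.e. the FCP / viewpoint

    # Marks of radii 20, 40, 60 mm imaging at one pixel at axial positions
    # 100, 200, 300 mm put the viewpoint at Z = 0.
    print(viewpoint_on_axis([(20, 100), (40, 200), (60, 300)]))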

In addition to marking the imaged point 91 I(u, v) distorted by the fisheye lens, FIG. 6 also shows the imaged point 92 I(u', v') calibrated to conform to the rectilinear mapping mechanism. The difference between the two imaged points 91 and 92 is customarily called the distortion value of the imaged point 91 I(u, v).

Nevertheless, in practice, in order to keep the illumination uniform and simplify the calculation, the invention adopts the second embodiment and implements it in an experiment; wherein, on the contrary, the camera 22 is moved but the target 30 is fixed during the experiment; the same mapping mechanism as in FIG. 6 is thereby achieved. Referring to FIG. 9, move the camera 22 (represented by the FCP 222) away from the target 30 along the optical axis 224, which has already been collimated to the Z' rigid axis 233, in order to change the relative offset between the target 30, possessed of the three calibration marks 313, 323, and 333, and the camera 22 (represented by the FCP 222). Further, enable the three calibration marks 313, 323, and 333, whose coordinates are separately W313, W323 and W333, to map at the same coordinate of the imaged point 91 (i.e. I313[p] = I323[q] = I333[r]) on the image plane 225 in three different tests numbered p, q, and r (which means the FCP 222 of the camera 22 is individually located at Wc[p], Wc[q] and Wc[r]).

Denote the initial offset of the FCP 222 (Wc[p]) from the central mark 38 (W38(0, 0, 0)) as D. Keeping its direction unchanged, the camera 22 is driven along the optical axis 224 through several movements; meanwhile, the feature coordinates of the imaged points (In[k]) mapped by the calibration marks 38, 311-318, 321-328, and 331-338 are extracted and separately coupled with the corresponding locations (Wc'[k]) of the camera 22 in the platform coordinate system 29 to form object-to-image conjugate-coordinate pairs (Wc'[k], In[k]), in which k is the sample sequence.

The procedure for data extraction is divided into two parts: (1) each time after the camera 22 moves a distance of dZ' along the Z' rigid axis 233, actively and finely adjust the position, but keep the direction, of the camera 22 through the symmetry-analyzing program in order to keep an optimal symmetry of the ICP 226; (2) extract the data of the object-to-image conjugate-coordinate pair (Wc'(x', y', z')[k], In(u, v)[k]) in each test, and pool the pairs of the sequence of tests to form an object-to-image conjugate-coordinate array. Continuing after the steps in section A, the details are described as follows:
7. Keep the arrangement of the measuring system unchanged from the former section A, while the optical axis 224 has been collimated to the Z-axis of the absolute coordinate system 28 and the initial object-to-image conjugate-coordinate pair (Wc'[0], In[0]) has already been obtained. Set the initial offset of the FCP 222 from the target 30 as D, the aim of the calculation (note: generally the orientation of the camera 22 need not be adjusted in this procedure);
8. Raise the location index k and actively control the camera 22 to increase an offset of dZ' along the Z' rigid axis 233;
9. In light of the symmetric indexes (the imaged-distortion indexes and the horizontal/vertical deviation indexes) displayed on the screen, finely adjust the position of the camera 22 (i.e. W'(X', Y'), while the z'-vector is fixed) until the symmetry of the ICP 226 reaches a preset standard; then record the object-to-image conjugate-coordinate pair (Wc'[k], In[k]);
10. If the location of the camera 22 is still within the default experimental range, the program returns to step 8; otherwise it goes on to the next step;
11. Close the symmetry-analyzing background program;
12. Finish acquiring the data of the object-to-image conjugate-coordinate array provided for calculation; and
13. Deduce the related coefficients of the parameters of the camera 22 (the details will be described hereinafter).
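Steps 7-13 amount to a simple acquisition loop; the sketch below restates them in Python, with platform and imager standing in for the platform controller 21 and the image-processing chain, whose interfaces are invented here purely for illustration:

    def acquire_conjugate_array(platform, imager, d_z, k_max):
        pairs = []                                # the conjugate-coordinate array
        for k in range(k_max + 1):
            if k > 0:
                platform.move_z(d_z)              # step 8: add an offset dZ'
            platform.refine_xy_until_symmetric()  # step 9: optimize ICP symmetry
            w = platform.position()               # Wc'(x', y', z')[k]
            blobs = imager.feature_coordinates()  # In(u, v)[k] for every mark n
            pairs.append((w, blobs))              # one conjugate-coordinate pair
        return pairs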

The following introduces the data obtained in a practical experiment to verify the practicability of the invention. In the experiment, each offset of the camera 22 (i.e. the dZ') moving along the Z' rigid axis 233 is increased by 10 mm. There are in total 19 offsets, plus the initial one, constructing an object-to-image conjugate-coordinate array (Wc'[0..19], In[0..19]) composed of 20 object-to-image conjugate-coordinate pairs. After the procedure described above, the data for deriving the parameters of the camera 22 are obtained and analyzed by the following steps: 1. The position profiles of the camera 22 (Wc'[0..19]) in the platform coordinate system 29: FIG. 10 illustrates the distribution of the series of positions of the camera 22 from Wc'[0] = W'(-7.5mm, -15mm, 0mm) to Wc'[19] = W'(-8mm, -19mm, 190mm), while the ICP 226 is reached in each test; the profiles also suggest the traces of the optical axis 224 in the platform coordinate system 29.

Wc'[0] = W'(-7.7mm, -15.0mm, 0mm) indicates the initial position in the platform coordinate system 29, in which the x'-vector is -7.7 mm and the y'-vector -15.0 mm; the z'-vector here is treated as the reference point during the experiment.

Although the profiles of Xc'[0..19] and Yc'[0..19] show slight deviations, they still hold linearity. This reveals that the optical axis 224 can be efficiently traced by means of the symmetry of the ICP 226. Nevertheless, it also emerges that the platform coordinate system 29 was not perfectly collimated to the absolute coordinate system 28 in the experiment, but the deviation is quite small, that is, 0.3% in the x'-vector and 2% in the y'-vector. The result suggests that the camera's offsets in the absolute coordinate system 28 can reliably be replaced by the ones just along the Z'-axis, because the resulting error is only 0.002%. Therefore, the absolute offsets of the camera 22 (Zc[k]) during the experiment are regarded as Zc'[k] + D.

2. The position profiles of the imaged blob mapped by the central mark 38 (I38(u, v)[0..19]) in the pixel coordinate system 27: FIG. 11 illustrates the feature coordinates of the imaged blobs mapped by the central mark 38; each data pair also stands for the principal point 227 practically measured in the pixel coordinate system 27. In accordance with the spatial mapping symmetry of the camera 22 shown in FIG. 4, I38(u, v)[k] should be a constant and should not vary with the position of the camera 22 while it moves along the Z'-axis (Z'[0..19]) in the platform coordinate system 29. Based on the measured data, the standard deviations of the principal point 227 are separately 0.25 pixel in the u-vector and 0.18 pixel in the v-vector. Further, the result of linear matching locates the principal point 227 at I(uc, vc) = I(318.1, 236.1) pixels. In conclusion, the slight values of the standard deviations attest to the reliability of the experimental result and verify the postulation that the coordinate of the principal point 227 is a constant.

3. The featured image-height profiles (pm[0..19]; m=[1..3]) of the ICP 226 in the pixel coordinate system 27: FIG. 12A illustrates the profiles of three average image heights (individually p1, p2 and p3, each defined by the calibration marks located on the same circle, from the inside to the outside; for example, p1 is determined by the calibration marks 311-318) in the pixel coordinate system 27, which vary with Zc'[0..19] while the camera 22 moves along the Z'-axis.

The formula is as follows:

pm[k] = (1/8) Σ p(300+m*10+a)[k], summed over a = 1..8 -------------------- (3)

wherein k is the serial number of the samplings, m the layer of the concentric circles from the inside to the outside, 300+m*10+a the serial numbers of the calibration marks 311-318, 321-328 and 331-338 shown in FIG. 5, pn[k] the image height of the imaged blob mapped by mark n measured from the principal point 227, and pm[k] the average image-height array corresponding to the concentric circles in each test. FIG. 12B, redrawn from FIG. 12A, shows a clear overlapping phenomenon between each pair of the three average image heights (p1[0..19], p2[0..19] and p3[0..19]) corresponding to the three layers of the concentric circles. This supports the postulation described above that the measured image-height ranges hold the information for positioning the identical sight ray 80. In an ideal model, the image heights defined by equi-radius calibration marks should be equal to each other when the ICP 226 reaches perfect symmetry. The experimental result shows that the deviation of the descriptive statistics of the equi-radius calibration marks on a selected circle is 0.22 pixel; this proves that the measuring system can imitate the circular symmetric projection mechanism and performs satisfactorily in practice.
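A minimal sketch of formula (3) as reconstructed above, assuming the same dictionary layout as in the earlier index sketch and a known principal point I(uc, vc); all names are illustrative:

    import math

    def average_image_height(u, v, uc, vc, m, k):
        # pm[k]: mean image height of the eight marks on concentric circle m
        # (serials 300 + m*10 + a, a = 1..8), measured from I(uc, vc).
        heights = [math.hypot(u[(300 + 10 * m + a, k)] - uc,
                              v[(300 + 10 * m + a, k)] - vc)
                   for a in range(1, 9)]
        return sum(heights) / len(heights)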

The optical parameters of the camera are deduced from the data measured in the experiment. Taking the first embodiment as an example and referring to FIG. 6 again, if Wc is the optical origin, the zenithal distance (a), i.e. the angular distance of the sight ray 80 away from the optical axis 224, is formulated as:

a = arctan(R1/Z[p]) = arctan(R2/Z[q]) = arctan(R3/Z[r]) -------------------------------------- (4)

wherein R1..3 denote the radii of the three concentric circles on the target 30, namely the object heights in absolute space; Z[p] represents the object distance of the target 30 on the Z-axis while k=p, that is to say Z[p]=D here, and Z[q] as well as Z[r] are denotations following the same rule. A line segment is determined by W313[p], W323[q] and W333[r] if Wn[p..r] are given. The extension of this line segment toward the optical axis 224 intersects it to determine the absolute coordinate of Wc. Similarly, in FIG. 9, the positions (Wc'[p..r]) of the moving camera 22 in the platform coordinate system 29 are observable and controllable while the target 30 is fixed. The absolute coordinates (Wc[p], Wc[q] and Wc[r]) of the camera 22 can accordingly be obtained by comparing similar triangles bounded by the optical axis 224 and the target 30, which are perpendicular to each other, if the three offsets of the camera 22 are given. These are the two theoretical models of the invention for solving the FCP 222.

However, considering the limitations of sampling in the experiment, it is hard in practice to obtain two image heights (or two imaged coordinates mapped by the calibration marks) exactly coinciding with each other. Moreover, the unavoidable errors caused by the random quality of the image signals while setting the feature coordinates also suggest that the sight ray 80 should not be directly deduced, and, accordingly, that the FCP 222 should not be located thereby, even when exactly coinciding feature coordinates are attained.

In view of these practical limitations, the present invention proposes an alternative method to analyze the measured data. Three groups of data are categorized from the original one: the image heights (pm[0..19]; m=[1..3]), the object heights (Rm; m=[1..3]) and the camera's offsets (Wc'[0..19]). The three groups of data are sufficiently sampled to be overdetermined, and are employed to deduce the FCP 222 (or the viewpoint) and the mapping mechanism of the camera 22.

First, the image heights are inversely proportional to the object distances (i.e. the distances of the camera 22); FIG. 12A shows this phenomenon, from which the image heights cannot be directly related to the mapping mechanism of the camera 22. However, on the basis of the postulation about the identical sight ray 80, if the object heights (i.e. the physical lengths of the radii of the PCP 31) are represented in another aspect, such as the zenithal distance (a), the mutual meanings of the three profiles in FIG. 12A become related, namely the overlapping of image heights and/or the overlapping of angular distances. The fact that one identical sight ray 80 holds one zenithal distance (a) offers a consistent explanation for all the sampled image heights if the object heights are replaced with a. Therefore, the FCP 222 (or the viewpoint) ought to be located first in order to obtain accurate object distances and turn the object heights exactly into the zenithal distances (a). The overlapping phenomenon of the ranges of different image heights (p) revealed in the experiment implies that replacing the object heights with the zenithal distances (a) also presents an overlapping mechanism. Therefore, the object distance (the distance between the camera 22 and the target 30) is involved as a factor to deduce the sight ray 80 of the camera 22; the overlapping phenomenon of the zenithal distances (a), similar to FIG. 12B, would then appear as well.

Therefore, on the basis of the conception described in FIG. 9, the method of trial-and-error is utilized in searching for the FCP 222 along the optical axis 224 (note: at this time the optical axis 224 has been positioned). In other words, taking the successive target points one by one on the optical axis 224, each point is supposed to be the FCP 222, so that the initial distance (D[p]) between Wc[p] and the target 30 is accordingly determined. The offsets between Wc[p], Wc[q] and Wc[r] are already given, so D[q] and D[r] can be derived from D[p]. Based on the three given coordinates (Wc[p], Wc[q] and Wc[r]) and referring to an equal image height (I313[p] = I323[q] = I333[r]), only when D[p] is accurate can the three corresponding zenithal distances (i.e. a313, a323 and a333), transformed from the object heights by the tangent function, be equal to one another.

In the experiment, the object-to-image conjugate-coordinate array (Wc'(x', y', z')[0..19], In(u, v)[0..19]) is extracted at 20 positions from k=0 to k=19.

Namely, aiming at every object height (Rm), extract the image-height profile (pm[0..19]) at the twenty given positions of the camera 22 (Wc'[0..19]). Twenty object distances (D[0..19]) are obtained as well in the course of the experiment, once the distance of Wc[0] is assumed to be D[0]. The a-profiles (am[0..19]; m=[1..3]) are accordingly deduced by referring to both D[0..19] and the object heights, i.e. the radii of the concentric circles. The task of posing the camera 22 is realized through the overlapping degree of the traces of the zenithal distances (am[0..19]; m=[1..3]) plotted against the image heights (pm[0..19]; m=[1..3]); this is termed the first overlapping index in the invention. The overlapping phenomenon will appear only if the FCP 222, i.e. the value of D[0], is accurately fixed.
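The trial-and-error search can be sketched as follows: for a candidate D[0], every sampled offset yields one zenithal distance per circle, and the correct D[0] is the one whose (a, p) traces overlap. The code is a hypothetical outline of that idea, not the disclosed program:

    import math

    def alpha_profiles(radii_mm, z_offsets_mm, d0_mm):
        # For a trial initial distance D[0], a_m[k] = atan(R_m / (D[0] + Z'[k])).
        # With the correct D[0] the three (a, p) traces overlap on one curve.
        return [[math.atan2(r, d0_mm + z) for z in z_offsets_mm]
                for r in radii_mm]

    # Hypothetical usage: the three circle radii and twenty 10-mm offsets.
    profiles = alpha_profiles([20.0, 40.0, 60.0],
                              [10.0 * k for k in range(20)], d0_mm=150.0)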

This is the first method for posing the camera 22 disclosed in the invention. FIG. 13 illustrates the traces of the data points of am[0..19] against pm[0..19] when D[0] is accurately acquired; it reveals an extremely good overlapping phenomenon in the profiles corresponding to the three concentric circles. The functional relationship between the image height (p) and the zenithal distance (a) is exactly the projection function of the camera 22, and hence the curve shown in FIG. 13 is the so-called projection curve or projection function in optics. To date, the measurement of the projection function of a camera with a nonlinear perspective projection model is still unattainable in the related art (note: for the camera, not for the lens); the present invention, however, can achieve this function with the assistance of only simple equipment. On the other hand, if D[0] is shifted from the accurate value, taking 50 mm as an example, an obvious divergent phenomenon of the a-traces appears, as shown in FIG. 14.

In conclusion, the object-to-image conjugate-coordinate array is found to be capable of deducing the projection function and posing the camera 22 (i.e. positioning the FCP 222). Although the experimental result shows that the projection function is close to the EDP (equidistant projection), this is just a special case and does not limit the projection model of the invention. The method disclosed in the invention can be widely applied to various sorts of projection curves.

The projection curve is able to describe the mapping mechanism of the camera 22, but unable to quantify the distortion degree of the camera system. From the experimental result, it is clear that the lens used in the embodiment is similar to one with the EDP, so that its projective curve is supposedly a straight line, and the result is quite close. In the aspect of a rectilinear projection model, there is a nonlinear relationship (beyond the direct proportion) between the distortion degrees and the image heights. For the convenience of further explaining the distortion mechanism of the camera system, the invention defines an optical parameter, termed the zenithal focal length (simplified as the zFL hereinafter). Referring to FIG. 6, the zFL (also termed the focal length constant in the Gaussian optics model) is the distance between the BNP 223' and the principal point 227, which can express the specific mapping mechanism of the sight ray 80 of a particular zenithal distance (a), and is formulated as follows:

zFLm[0..19] = pm[0..19]·cot(am[0..19]) ------------------ (5)

In the aspect of the one-to-one correspondence between the imaged coordinate I(u, v) and the identical sight ray 80, the zFL can be regarded as the focal length constant in conformity with the rectilinear perspective projection model when only one specific imaged coordinate is considered. The zFL varies along with different image heights (p); the bigger the margins between different zFLs, the more severe the radial distortion of the camera system. Therefore, an image height can be explained as a zenithal distance (a) in the object space; however, in the matter of the mapping mechanism, the image height also depends on the zFL, so the function zFL(p) can directly reveal the distortion degrees of the camera system, subsequently presented in their totality as "the zFL-curve" or "the zFL-function".
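A one-line reading of formula (5), sketched in Python with an EDP lens as a worked example; the 2.8 mm constant is again the vendor specification of the embodiment's lens, and the decreasing output illustrates the zFL falling as the image height grows:

    import math

    def zenithal_focal_length(p, alpha_rad):
        # Formula (5): zFL = p * cot(a), the Gaussian focal length that places
        # an imaged point at height p for a sight ray of zenithal distance a.
        return p / math.tan(alpha_rad)

    f = 2.8                               # EDP lens: p = f * a
    for deg in (10, 40, 70):
        a = math.radians(deg)
        print(deg, round(zenithal_focal_length(f * a, a), 3))
    # prints roughly 2.772, 2.330 and 1.245: the zFL shrinks with image height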

When turning the image heights (pm[0..19]) in FIG. 12A into the zFLs (zFLm[0..19]), it is necessary to refer to the object distances, and the overlapping phenomenon of the zFL-profiles also has the capability of locating the FCP 222 of the camera 22. This is the second method for posing the camera 22 disclosed in the invention. The image heights (pm[0..19]) shown in FIG. 12A can be replaced by the zFLs (zFLm[0..19]) with a consistent explanation. Therefore, the method of trial-and-error is employed in searching for the FCP 222 along the optical axis 224; namely, by taking the successive target points on the optical axis 224 one by one, each point is supposed to be the FCP 222, so that the initial distance (D[0]) is accordingly determined. Further, the image heights (pm[0..19], m=[1..3]) are accordingly turned into the corresponding zFLs (zFLm[0..19], m=[1..3]), and the task of posing the camera 22 is realized through the overlapping degree of the zFL-profiles; this is termed the second overlapping index in the invention.

FIG. 15 illustrates the profiles of the zFL-function showing a very good overlapping phenomenon. It also signifies that the FCP 222 on the optical axis 224 can be truly positioned with the aid of the practical results of the experiment. On the other hand, when D[0] is shifted from the accurate value, taking 5 mm as an example, an obvious divergent phenomenon appears among the three zFL-profiles, as shown in FIG. 16. Besides, FIG. 15 also directly exposes the distortion degrees of the image, or the distortion mechanism of the camera system.

It is worth noting that, comparing FIG. 16 with FIG. 14, the divergent phenomenon in FIG. 16, with only a 5-mm shift of the D-value, is much more apparent than the one in FIG. 14 with a 50-mm shift of the D-value. This proves that the sensitivity of zFL(p) to the position of the FCP 222 of the camera 22 is much higher than that of a(p). It also reveals that, in practice, utilizing the overlapping degree of the zFL-curve to fix the FCP 222 of the camera 22 is the superior method. Moreover, when the image height (pm[0..19]) approaches zero, the zFL approaches the focal length of the lens mounted in the camera 22; ideal lenses generally take this value as their focal length constant.

To make the method for positioning the viewpoint more multi-functional and suitable for any kind of projection function, the invention further discloses a method for verifying the overlapping degree of the profiles. Owing to the high sensitivity of the zFL-function, taking it as an example, a "divergent length" (also called the "feature length") of the overlapping portion of the profiles is calculated, after rearranging the three groups of data in FIGs. 15 and 16, to evaluate the overlapping degree of the curve of zFL(p), as shown in FIG. 17. The method connects all adjacent points representing the relationships of the zFLs to the image heights (p), and then the total length (i.e. the divergent length) of the overlapping portion is calculated. If the divergent length is at its minimum, like the curve notated as zFL in FIG. 17, the overlapping degree of the tracks of zFL(p) is supposed to be optimal, and accordingly the tested target point on the optical axis 224 is the FCP 222 (or the viewpoint) of the camera 22. Otherwise, the tracks of zFL(p) reflect a longer divergent length, like the one notated as zFL shift.
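The divergent-length test can be sketched as follows: pool the (p, zFL) points of the three circles, connect adjacent points in order of increasing image height, and sum the segment lengths; the trial viewpoint minimizing this total is taken as the FCP 222. This is a hypothetical rendering of the verification idea, not the disclosed software:

    import math

    def divergent_length(points):
        # points: pooled (p, zFL) pairs from all three circles.
        pts = sorted(points)              # order by image height p
        return sum(math.hypot(p2 - p1, z2 - z1)
                   for (p1, z1), (p2, z2) in zip(pts, pts[1:]))

    # Scanning candidate D[0] values and keeping the one with the smallest
    # divergent_length(...) locates the FCP 222 (the viewpoint).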

Furthermore, the invention utilizes the nature of the image projected from the innovative PCP 31 to estimate the quality of the arrangement of the measuring system, to modify the arrangement accordingly, and to predict whether the camera system is capable of being examined. The mapping mechanisms of some defective cameras fall apparently below expectations because their distortion models are unpredictable. For example, if the optical axis 224 of a lens-set is not perpendicular to the image plane 225 in the camera 22, it is impossible to get an utterly symmetric image no matter how much effort is expended. The method disclosed in the invention, however, can screen out such cameras, with all sorts of defects, before calibration.

In conclusion, the method disclosed in the invention attains the goal of evaluating the specifications and obtaining the optical parameters of the camera 22, either by the projection function or by the zFL-function of the said camera 22.

Therefore, the method and the measuring system disclosed in the invention perform satisfactorily indeed in analyzing the mapping mechanism of the camera 22. Further, the present invention can guide or modify the arrangement of the measuring system, as well as determine the reliability of the measured parameters, by the distribution of the measured data, and can finally be applied in calibrating cameras or employed to develop image-processing/image-transformation technologies.

Overall, the invention has the following advantages: 1. The capabilities of accurately locating the optical axis 224, posing the camera 22 (namely, fixing the absolute coordinate of the FCP 222) and evaluating the projection function and the focal length constant of the camera 22.

2. The capability of quantifying the distortion of the imaged points through the zFL-function.

3. The capability of verifying the reliability of the measuring system through the measured data.

4. The capability of verifying the quality of the target camera 22 through the measured data.

5. The capability of directly turning the imaged points into the corresponding projection angles (i. e. the zenithal distances) in space.

6. The capability of being applied in stereoscopic applications.

7. The merits of simplicity and low cost make the method suitable for any kind of nonlinear mapping mechanism of the camera 22.

The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.