

Title:
METHOD FOR EXPLORING OPTICAL PARAMETERS OF CAMERA
Document Type and Number:
WIPO Patent Application WO/2004/092825
Kind Code:
A1
Abstract:
The present invention is a method for exploring the optical parameters of a camera. Images projected from a plane target with a center-symmetric pattern are utilized to guide the alignment function of the system, which takes advantage of the characteristic that image deformation is symmetric about the principal point because the projecting optical paths symmetrically surround the optical axis. The absolute position of the camera is deduced from the spatial absolute coordinates of the calibration points on the target and the imaged coordinates thereof, on the basis of a given projection model and the located optical axis. The simple target can imitate the delicate and complex multicollimator calibration mechanism. The accuracy of the absolute position of the optical axis essentially affects the measurement quality. Two image-processing strategies are created by the present invention in order to analyze the symmetry of the images. These indirect indexes are employed to position the principal point on the image plane and the spatial absolute position of the optical axis. The method of trial and error is employed to determine the exact location of the optical projection center, and the focal length constant is then deducible. Referring to the intrinsic and extrinsic parameters obtained, fisheye images can be transformed with metering accuracy, so that the relative applications of the fisheye camera are widely expanded.

Inventors:
JAN GWO-JEN (CN)
CHANG CHUANG-JAN (CN)
Application Number:
PCT/IB2004/001106
Publication Date:
October 28, 2004
Filing Date:
April 13, 2004
Assignee:
APPRO TECHNOLOGY INC (CN)
JAN GWO-JEN (CN)
CHANG CHUANG-JAN (CN)
International Classes:
G01M11/02; G03B13/12; G03B37/06; G03B43/00; (IPC1-7): G03B37/06; G03B43/00; G01M11/02
Foreign References:
US20030090586A12003-05-15
EP1028389A22000-08-16
US5185667A1993-02-09
US5870135A1999-02-09
Claims:
11280pctF CLAIMS What is claimed is:
1. A method for exploring the optical parameters of a camera, the camera having a nonlinear perspective projection lens which conforms to one of a plurality of given projection models, the method comprising: providing a target with a physical central-symmetric pattern (PCP) composed of a pattern center and a plurality of center-symmetric geometric figures; placing the target in the field of view (FOV) of the camera to allow the PCP to image on an image plane; adjusting the relative position between the target and the camera in order to obtain an imaged central-symmetric pattern (ICP), imaged by the PCP, on the image plane; and examining the symmetry of the ICP by at least one symmetry index until the at least one symmetry index meets the requirement for accuracy, whereupon the feature coordinate of the imaged point projected from the pattern center is taken as the principal point on the image plane.
2. The method according to claim 1, wherein the pattern center further determines a spatial sight ray, which perpendicularly passes through the pattern center, to be an optical axis of the camera.
3. The method according to claim 1, wherein the plurality of center-symmetric geometric figures is selected from the group comprising concentric circles, concentric rectangles, concentric triangles or concentric polygons.
5. The method according to claim 1, wherein the plurality of center-symmetric geometric figures is a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
6. The method according to claim 1, wherein the symmetry index is determined by the steps comprising: calculating a distance-summation by summing up a plurality of border distances lying in the same radial direction, where each border distance is defined as the length from the principal point to one of a plurality of imaged contours projected from the plurality of center-symmetric geometric figures; and computing a plurality of differences, each obtained by subtracting two distance-summations in the opposite radial directions, the plurality of differences composing the symmetry index.
7. The method according to claim 1, wherein the symmetry index is determined by the steps comprising: calculating a distance-summation by summing up a plurality of border distances lying in the same radial direction, where each border distance is defined as the length from the principal point to one of a plurality of imaged contours projected from the plurality of center-symmetric geometric figures; and computing a plurality of sums, each obtained by adding two distance-summations in the opposite radial directions, the plurality of sums composing the symmetry index.
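As an informal illustration of the difference-type index of claims 6 and 7 (not part of the claim language), the computation can be sketched as follows; the data layout, function name and sample values are hypothetical assumptions:

```python
def symmetry_differences(contour_radii):
    """Difference-type symmetry index: for each pair of opposite radial
    directions, sum the border distances measured from the candidate
    principal point to every imaged contour, then subtract the two sums.

    contour_radii: dict mapping a direction angle in degrees to the list
    of border distances measured along that radial direction.
    """
    diffs = []
    for angle, radii in contour_radii.items():
        opposite = (angle + 180) % 360
        if opposite in contour_radii and angle < opposite:
            s_here = sum(radii)                     # distance-summation, one direction
            s_there = sum(contour_radii[opposite])  # distance-summation, opposite direction
            diffs.append(s_here - s_there)          # claim 6: difference index
    return diffs
```

For a perfectly centered ICP every difference vanishes; the sum-type index of claim 7 is obtained by replacing the subtraction with an addition.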
9. The method according to claim 1, wherein when the plurality of center-symmetric geometric figures is a plurality of concentric circles, the symmetry index is determined by the steps comprising: transforming the ICP by a polar-coordinate transformation so as to turn the plurality of concentric circles into a plurality of lines; and examining the linearity of the plurality of lines as comprising the symmetry index.
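A minimal sketch of the polar-transformation test of claim 9, with assumed names and synthetic points: each contour is unwrapped about a candidate center, and the flatter the resulting rho profile, the closer the candidate is to the principal point.

```python
import math

def polar_unwrap(points, center):
    """Map image points (x, y) to (rho, beta) about a candidate principal
    point; a circle centered on the true principal point unwraps into a
    line of constant rho."""
    cx, cy = center
    return [(math.hypot(x - cx, y - cy), math.atan2(y - cy, x - cx))
            for (x, y) in points]

def flatness(points, center):
    """Spread of rho along one unwrapped contour: a value near zero means
    the unwrapped contour is a straight line, so the candidate center is
    (close to) the principal point."""
    rhos = [rho for rho, _ in polar_unwrap(points, center)]
    return max(rhos) - min(rhos)
```

Minimizing this flatness over candidate centers is one plausible way to realize the linearity examination of claim 9.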
10. The method according to claim 2, wherein a viewpoint (VP) is further located along the optical axis by the steps comprising: selecting one of the plurality of given projection models as a test projection function; postulating a test point on the optical axis; taking the test point as a reference point and deducing at least two zenithal distances (α) defined by at least two geometric figures selected from the plurality of center-symmetric geometric figures; calculating at least two principal distances (ρ) defined by at least two imaged contours on the image plane corresponding to the at least two geometric figures; and obtaining at least two focal lengths by separately substituting one of the at least two zenithal distances (α) and the corresponding one of the at least two principal distances (ρ), comprising at least two groups of data, into the test projection function; when the at least two focal lengths are equal to a constant, the test point is the VP and the test projection function is the natural projection function of the camera.
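The trial-and-error search of claim 10 can be sketched numerically. All numbers below are hypothetical: three physical PCP radii, a true viewpoint 100 mm above the target, and image radii synthesized from an equidistant-projection lens with f = 1.5 mm; the search then recovers the height at which the per-contour focal lengths agree.

```python
import math

# Hypothetical setup: physical radii of the PCP circles (mm), a true VP
# height of 100 mm above the target, and image radii synthesized from an
# equidistant-projection lens with focal length constant f = 1.5 mm.
R_PHYS = [20.0, 50.0, 100.0]
H_TRUE, F_TRUE = 100.0, 1.5
RHO_IMG = [F_TRUE * math.atan(R / H_TRUE) for R in R_PHYS]

def focal_lengths(h_test):
    """For a test point at height h_test on the optical axis, deduce one
    focal length per contour from the EDP test function rho = f * alpha."""
    alphas = [math.atan(R / h_test) for R in R_PHYS]  # zenithal distances
    return [rho / a for rho, a in zip(RHO_IMG, alphas)]

def spread(h_test):
    """The deduced focal lengths agree (spread ~ 0) only when the test
    point coincides with the true viewpoint."""
    f = focal_lengths(h_test)
    return max(f) - min(f)
```

A real search would scan h_test along the optical axis and keep the height that minimizes spread(); the common value of the focal lengths at that height is the focal length constant.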
11. The method according to claim 10, wherein the spatial absolute coordinate of the VP is determined by referring to the absolute position of the PCP.
12. The method according to claim 10, wherein the test projection function is selected from the group comprising an equidistant projection (EDP), an orthographic projection (OGP) and a stereographic projection (SGP).
13. The method according to claim 10, wherein the at least two groups of data are further examined by a Σ-algorithm to minimize the value of an error function whose mathematical form is E(z) = Σ_{i=1}^{N} w_i(D)·e_i(z)², wherein e_i(z) is an error function obtained by subtracting a focal length constant from the test projection function, w_i(D) is a weight function and N is the number of imaged contours sampled.
14. The method according to claim 10, wherein the at least two groups of data are further examined by a σ-algorithm to minimize the value of an error function whose mathematical form is E(z) = Σ_{i=1}^{N} w_i(D)·(f_i(D) − f̄)², wherein f_i(D) is the focal length constant corresponding to the i-th imaged contour based on the test projection function, f̄ is the mean of the N focal length constants, w_i(D) is a weight function and N is the number of imaged contours sampled.
15. The method according to claim 1, wherein the target is mounted on an adjusting platform having three rigid axes which are perpendicular to each other and capable of adjusting the position of the target.
16. The method according to claim 1, wherein the camera is mounted on a camera holder comprising a PTZ head for adjusting the direction of the lens of the camera.
Description:
METHOD FOR EXPLORING OPTICAL PARAMETERS OF CAMERA

BACKGROUND OF THE INVENTION Field of Invention The invention relates to a method for exploring the optical parameters of a camera.

Particularly, it is a method of utilizing the center-symmetrical characteristic of camera image deformation to develop image-process techniques in order to situate the principal point and analyze the optical parameters of cameras in consideration of various nonlinear perspective projection models. The analyzable parameters comprise the intrinsic projection function and the absolute coordinates representing the extrinsic position of the camera.

Related Art The camera systems in the field of artificial vision have preferred using lenses with a narrow field of view (FOV) in order to obtain images approaching an ideal perspective projection mechanism for precise measurement and easy image processing. The pinhole model is usually the basis for deducing the camera's parameters. The intrinsic and extrinsic parameters obtained can be employed in visual applications in quest of higher precision, for instance in 3-D cubical inference, stereoscopy, automatic optical inspection, etc. Regarding image deformation, a polynomial function is used to describe the deviation between original images and the ideal model, or to conduct the work of calibration.

These applications, however, currently have the common limitations of narrow visual angles and an insufficient depth of field.

A fisheye camera (also termed a fisheye image sensor), mounted with a fisheye lens that focuses deeper and wider, can capture a clear image with a FOV of 180 degrees or even more, but a severe barrel distortion develops. Because the optical geometry of the fisheye camera is extremely different from the rectilinear perspective projection model, its optical parameters can hardly be deduced precisely by the methods used in the related art for normal cameras. Therefore, technologies developed for the usual visual disciplines have not resulted in any capability of processing the images of the fisheye camera (simplified as "fisheye images" hereinafter).

Eventually, the panospherical imaging field has replaced the use of the fisheye image sensor (also called a dioptric sensor) by alternatively developing various camera systems with complex reflective optical elements (also called catadioptric sensors) as compensation. These solutions employ optical components such as reflectors or prisms to take panoramic views, for instance the technologies disclosed in US patents 6,118,474 and 6,288,843 B1. However, the catadioptric systems often elongate the ray traces, complicate the image-forming mechanism and attenuate the imaging signals by indirectly taking the reflective images through the added optical elements. A blind area is unavoidable at the center of an image because of the frontal installation of the reflective element.

To expand the FOV, a camera system with a mechanical pan-tilt-zoom motoring function is another solution in the related art, which separately captures surrounding images in a row to achieve a panoramic view, such as the technology disclosed in US patent 6,256,058 B1. Or conversely, a number of cameras are deployed to simultaneously capture images in different directions in order to seam a

panorama together. However, the first, rotating method cannot capture an entire scene in a single shot, so the flaw of asynchronism remains in the results. Furthermore, the volume of both systems can hardly be shrunk to approach a hidden function or to take a close-range view, not to mention the heavy camera bodies which consume more electricity, or the rotating device which is relatively easily thrown out of order. In addition to the extra cost of multiple cameras, the sampling and integration of the images from individual cameras still present many problems. Hence, adopting lenses with a very wide FOV (such as the fisheye lens or compound catadioptric sensors) to capture an entire scene in a single shot is a tendency of this kind of camera system when considering many practical requirements in applications.

Owing to the poorly deduced accuracy of the optical parameters of a camera based on the rectilinear perspective projection model, some alternative solutions evolved to tackle the transformation of fisheye images. They involve an image-based algorithm that aims at a specific camera mounted with a specific lens conforming to a specific projection mechanism, so as to deduce the optical parameters based solely on the images displayed. With reference to FIG. 1A and FIG. 1B, wherein FIG. 1A expresses the imageable area 1 of a fisheye image in a framed oval/circular region and FIG. 1B is the hemispherical spatial projecting geometry corresponding to FIG. 1A, both figures note the zenithal distance α, which is the angle defined by an incident ray and the optical axis 21, as well as the azimuthal distance β, which is the angular vector in the polar coordinate system whose origin is set at the principal point. Citing the positioning concept of a globe, β is the angle referring to the mapping domain 13' of the prime meridian 13 on the equatorial plane in the polar coordinate system, as shown in FIG. 1B.

Thus, π/2−α is regarded as the latitude and β as the longitude. Therefore, if several imaged points lie on the same radius of the imageable area 1, their corresponding spatial incident rays lie on the same meridional plane (such as the sector determined by the arc C'E'G' and two spherical radii); that is, their azimuthal distances (β) are invariant, as with points D, E, F, and G in FIG. 1A corresponding to points D', E', F', and G' in FIG. 1B.

In addition to the specific projection mechanism, the image-based algorithm further needs several basic postulates: first, the imageable area 1 of the fisheye image is an analyzable oval or circle, and the intersection of the major axis 11 and minor axis 12 (or, rather, of the two diameters) situates the principal point, which is cast by the optical axis 21 as shown in FIG. 1B; secondly, the boundary of the image is projected by the light rays of α = π/2; third, α and ρ are linearly related, wherein ρ, termed a principal distance, is the length between an imaged point (such as point E) and the principal point (point C). For example, the value of α at point E is supposed to be π/4 since it is located in the middle of the radius of the imageable area 1; hence the sight ray corresponding to point E is destined to pass through point E' in the hemispherical sight space, as shown in FIG. 1B; the same is true with points C and C', points D and D', points F and F', etc.

An imaged point on the image plane can be denoted as C'(u, v) in a Cartesian coordinate system or as P'(ρ, β) in a polar coordinate system, both taking the principal point as their origin. Although the mapping mechanism was not really put on discussion in the image-based algorithm, it is actually the equidistant projection (EDP), whose function is ρ = kα, where k is a constant and is, in fact, the focal length constant f.

The US patent 5,185,667 accordingly developed a method to transform fisheye images conforming to the rectilinear perspective projection model in accordance with the

projection mechanism shown in FIGs. 1A and 1B so as to monitor a hemispherical field of view (180 degrees by 360 degrees). This patented technology has been applied in endoscopy, surveillance and remote control, as disclosed in US patents 5,313,306, 5,359,363 and 5,384,588. The present invention terms the EDP coupled with a FOV of 180 degrees the "EDPπ". Based on the EDPπ postulation, the focal length constant (f) can be obtained by dividing the radius of the imageable area 1 by π/2; the spatial angle (α, β) of the corresponding incident ray can also be analyzed from the planar coordinates C'(u, v) on the imageable area 1. In light of known image-analyzing skills, an "ideal EDPπ image" can be transformed into the image remapped by the rectilinear perspective projection, referring to any projection line as a datum axis. This image-based algorithm is easy, and no extra calibration object is needed. However, it is worth noting that these serial US patents did not concretely demonstrate general suitability to average fisheye lenses. Thus, the accuracy of the patented technology in transforming images remains a big question insofar as no specific fisheye lens is used. The current practice has system-application manufacturers asking for limited-specification fisheye lenses combined with particular camera bodies and providing exclusive software; only then does the patented technology (US patent 5,185,667) have practical and commercial value.
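The image-based EDPπ recipe just described can be condensed into a short sketch (a plausible reconstruction for illustration, not code from the cited patents); the function name and arguments are assumptions:

```python
import math

def edp_pi_angles(u, v, uc, vc, R):
    """Under the EDPπ postulate (rho = f * alpha, FOV of 180 degrees),
    recover the incident-ray angles (alpha, beta) of a pixel (u, v).
    (uc, vc) is the principal point; R is the imageable-area radius."""
    f = R / (math.pi / 2)              # boundary rho = R maps to alpha = pi/2
    rho = math.hypot(u - uc, v - vc)   # principal distance
    alpha = rho / f                    # EDP: alpha is linear in rho
    beta = math.atan2(v - vc, u - uc)  # azimuthal distance
    return alpha, beta
```

A point halfway along the radius yields α = π/4, matching point E in FIG. 1A.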

Major parts of the image-based postulates mentioned above, however, are unrealistic because many essential factors or variations have not been taken into consideration. First, the EDPπ might just be a special case among possible geometrical projection models (note: it is, however, the most common projection model of the fisheye lens). With reference to FIG. 2, it shows three possible and typical projection curves of the fisheye lens, and implies that the natural projection mechanism of the fisheye lens may lie in the

following projections: the stereographic projection (or SGP, whose projection function is ρ = 2f·tan(α/2)) and the orthographic projection (or OGP, whose projection function is ρ = f·sin(α)). Moreover, the coverage of the FOV is not constantly equal to π, ranging from larger to smaller. From the curves in FIG. 2, the differences between the three projection models obviously increase with growing zenithal distance (α).
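The three projection functions of FIG. 2 and their growing divergence can be checked directly; this is a sketch with assumed names, not part of the disclosure:

```python
import math

def edp(f, a): return f * a                    # equidistant projection
def sgp(f, a): return 2 * f * math.tan(a / 2)  # stereographic projection
def ogp(f, a): return f * math.sin(a)          # orthographic projection

def model_gap(f, a):
    """Spread between the three model image heights at zenithal distance a;
    near a = 0 the models coincide, and the gap widens as a grows."""
    heights = [edp(f, a), sgp(f, a), ogp(f, a)]
    return max(heights) - min(heights)
```

This widening gap is exactly why locking every lens to one model causes distortion at large zenithal distances.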

Thus, distortions will develop if all projection geometries are locked on the EDPπ and images are transformed accordingly. Secondly, the actual angular coverage of the FOV is difficult to evaluate since the imageable area 1 is always presented as a circle irrespective of the angular scale of the FOV. A third factor concerns the errors caused in locating the image border even if the FOV is certainly equal to π. The radial decay caused by the radiometric response is an unavoidable phenomenon in a lens, especially when dealing with a larger FOV. This property induces a radial decay of the image intensity, especially with some simple lenses, so that the real boundary is extremely hard to set under that bordering effect. There could even be no actual border feature at all upon consideration of the diffraction of light. Finally, if the imageable area 1 of a camera is larger than the sensitive zone of a CCD, only parts of the "boundary" of an image will show up, and therefore the image transformation cannot be effectively executed.

Consequently, the image-based algorithm depends considerably on the chosen devices, irrespective of whether the lens conforms to the ideal EDPπ postulation or not.

Otherwise, the method results in poor accuracy, modeling errors, a doubtful extracted imageable area 1, an unstably situated principal point, as well as practical limitations.

Moreover, Margaret M. Fleck [Perspective Projection: The Wrong Image Model,

1994] has demonstrated that the projection mechanisms of lenses hardly fit a single ideal model over the whole angular spectrum in practice; besides, optics engineers may develop lenses with special projection functions, such as the fovea lens, in light of the different requirements in applications. Thus, imposing the postulation of the EDP on all fisheye cameras is extremely forced.

Obviously, there has been no discussion in the related art about how to position the principal point in a real camera system, not to mention the deduction of the extrinsic parameters (namely, the position of the optical axis and the viewpoint thereon representing the camera in the absolute coordinate system) and the intrinsic parameters (namely, the projection function and its coefficients, such as "2", "f" and "α/2" in the projection function ρ = 2f·tan(α/2)). These limitations keep the fisheye lens from advanced applications. The present invention will carefully look into these issues and free the procedure of camera parameterization from ideal image-based postulations, such as the EDPπ and the image boundary, in order to precisely obtain the optical parameters and to exactly transform fisheye images with fidelity on the basis of the obtained parameters.

Apart from this, visual measurement can also be well developed via the technology disclosed by the present invention.

SUMMARY OF THE INVENTION In view of the foregoing, the object of this invention is to provide a camera-parameterizing method, which aims at the camera mounted with the lens of a non-linear perspective projection mechanism, simply based on the natural optical projection phenomenon of the lens.

Another object of this invention is to provide a method absolutely positioning the principal point and the optical axis based on the characteristic of barrel distortion which is symmetrical to the principal point owing to the phenomenon that projecting optical paths symmetrically surround the optical axis. The optical axis is therefore traceable and the optical parameters of a camera, such as the viewpoint (VP), the focal length constant and the projection function, are inducible accordingly.

In accordance with the objects described above, the present invention provides a method for exploring the optical parameters of a camera, which is an advanced research based on the US patent applications 09/981,942 and 10/234,258. A physical central-symmetric pattern (PCP) is designed on a target according to the center-symmetric characteristic of the fisheye image's distortion. The target is placed in the FOV of the fisheye camera and its position is adjusted until an imaged central-symmetric pattern (ICP) appears on the image plane. At least one symmetry index is employed to test the ICP's symmetry. If the ICP's symmetry satisfies an accuracy request, the geometrical center of the ICP is where the principal point can be located. Furthermore, the sight ray perpendicularly passing through the geometric center of the PCP will stand for the optical axis. Thus, the absolute position of the optical axis in space can be deduced by referring to the given position of the PCP.

Based on the traceable spatial trace of the optical axis of the camera, the method of trial-and-error is employed at every point on the optical axis with the numerical limitations composed of the absolute physical radii of the PCP and the measured imaged radii of the ICP in order to obtain an optical center (or termed the viewpoint, simplified as the VP) satisfying a specific projection model. The focal length constant of the camera

can also be determined by the mathematical equation of the specific projection model as the VP is already fixed. The projection model involved in deduction can either be the equidistant projection (EDP), the stereographic projection (SGP) or the orthographic projection (OGP), those well-known projection models in the related art, or a particular projection function provided by lens designers or manufacturers.

The method disclosed by the invention can ensure the extrinsic and intrinsic optical parameters of the fisheye camera, so the fisheye images can be accordingly transformed into ones with various image formats.

Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS The present invention will become more fully understood from the detailed description given herein below, which is given by way of illustration only and thus is not limitative of the present invention, and wherein: FIGs. 1A and 1B show the schematic view of a calibration method based on an image-based algorithm aiming at the EDPπ of fisheye images in the related art; FIG. 2 sketches three typical projection functions of the fisheye lens; FIG. 3 shows an embodiment of the physical central-symmetric pattern (PCP)

according to the spirit of the invention; FIG. 4 cubically shows the 3-D optical paths between the PCP and the fisheye camera in the invention; FIG. 5A shows a schematic view of multi-collimated optical paths simulated by an aligned PCP on two meridional planes a π-distance from each other, and also a schematic view of optical paths for interpreting the projection behavior of sight rays through a small sphere (taking the EDP as an example); FIG. 5B shows a schematic view of the cubical optical paths highlighting a part of FIG. 5A; FIG. 6 shows the employed embodiment of the PCP in an experiment based on the invention; FIG. 7 shows the schematic view of a device arrangement performing the task of adjusting the position between the fisheye camera and the target; FIG. 8A shows the imaged schematic view in an experiment imaged by the PCP shown in FIG. 6; FIG. 8B shows the signal-intensity curves of the image contours shown in FIG. 8A along the four directions of northeast, southwest, northwest and southeast; FIG. 9 shows an imaged schematic view transformed by a polar-coordinate transformation, which takes the principal point as the origin, from the image shown in FIG. 8A; and FIG. 10 shows the approaching curves for seeking the viewpoint in light of three different projection functions in an experiment.

DETAILED DESCRIPTION OF THE INVENTION The technology disclosed in the present invention is an advanced research based on the US patent applications 09/981,942 and 10/234,258.

The fisheye lens is a non-linear perspective projection lens, which means its projecting behavior cannot be interpreted by the well-known pinhole model when spatial sight rays pass through this kind of lens. The fisheye lens has the merits of a wide field of view (FOV) and an infinite depth of field in comparison with other lenses following the rectilinear projection, but a severe barrel distortion comes along in its images. Namely, the quantities of distortion throughout an image are distributed with a radial symmetry whose point of origin is termed the principal point. The projection mechanism of a camera in space can be described as follows: the incident rays cast from an object in the FOV will logically converge onto a unique spatial optical center (or termed the viewpoint, simplified as the VP) and then divergently map onto the image plane in light of a projection function; meanwhile, the optical projection geometry in the FOV symmetrically encircles the optical axis of the camera. This geometrical optical model is well known to those skilled in the related art of optical engineering. However, no proper analytical technology has come up yet; hence only the rectilinear perspective projection model is available as a foundation for developing a computer vision system. This limitation comes from the huge quantity of barrel distortion, which could not be analyzed yet. However, the characteristic of distortion turns into a key feature in the invention to develop a method for parameterizing the fisheye camera, a method that works better the severer the distortion is.

From a geometrical view, a planar drawing capable of representing an axis-symmetric geometric arrangement in space can image a center-symmetric image inside a

camera. Therefore, a planar target 22, as shown in FIG. 3, with a physical central-symmetric pattern (PCP) 220 thereon, is placed in the FOV of a camera. The relative position of the target 22 and the camera is adjusted in order to obtain an imaged central-symmetric pattern (ICP) 230 on the image plane 23, as shown in FIG. 4. Obtaining the ICP 230 here means the optical axis 21 perpendicularly penetrates both the image center 235 and the pattern center 225, and the front cardinal point (FCP) 242 and the back cardinal point (BCP) 243 are both located on the optical axis 21. The position of the optical axis 21 in space can be absolutely determined by referring to the target 22 because its absolute position is man-made and given in advance. Therefore, seeking the ICP 230 is the core procedure.

The PCP 220 shown in FIG. 3 can be regarded as an arc-laid optical layout imitating the multicollimator. The long-established multicollimator metrology has been employed to calibrate large-scale aerial convex lenses. It utilizes an arc-laid array composed of independent point-light sources with accurate alignment to generate a bundle of light rays converging at a specific point whose absolute position in space is given. By adjusting the position of a test camera to make its image clearest, the VP of the test camera is regarded as coinciding with the preset light-ray converging point. Therefore, the VP can be positioned by referring to the test camera's physical body. Each light ray from the point-light sources simulates an incident ray from infinity with a known zenithal distance α, which is the angle extending from the optical axis 21 to the incident ray.

Because the coordinate of the imaged point imaged by each light ray can be measured accurately, the α-to-ρ (ρ is the image height) projection profile of the lens can be obtained from the directly measured data.
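Given such an α-to-ρ profile, deciding which circular-function model the lens obeys reduces to a small fitting exercise. The sketch below is illustrative only: the (α, ρ) pairs are hypothetical, synthesized here from a stereographic lens with f = 2.0 mm, and the function names are assumptions.

```python
import math

# Hypothetical multicollimator data: (alpha, rho) pairs generated from a
# stereographic-projection lens with focal length constant f = 2.0 mm.
PAIRS = [(a, 2 * 2.0 * math.tan(a / 2)) for a in (0.3, 0.6, 0.9, 1.2)]

MODELS = {
    "EDP": lambda f, a: f * a,
    "SGP": lambda f, a: 2 * f * math.tan(a / 2),
    "OGP": lambda f, a: f * math.sin(a),
}

def fit(model):
    """Least-squares focal length constant for one model and its residual;
    the natural projection model is the one with a near-zero residual."""
    g = [MODELS[model](1.0, a) for a, _ in PAIRS]  # model shape with f = 1
    f = sum(gi * rho for gi, (_, rho) in zip(g, PAIRS)) / sum(gi * gi for gi in g)
    residual = sum((rho - f * gi) ** 2 for gi, (_, rho) in zip(g, PAIRS))
    return f, residual
```

Fitting each candidate model and comparing residuals recovers both the projection function and its focal length constant from the measured profile.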

As far as operating models are concerned, the physical arrangement of the multicollimator can measure the image element, module or system of any circularly axis-symmetric projecting optical paths, and accordingly obtain the projection model thereof; the projection model put in operation is not limited to certain closed circular functions. Certainly, the multicollimator is also suitable for examining the fisheye lens. However, the accurate arc mechanism of the multicollimator is too sophisticated to be realized in normal labs. The present invention creates a much easier way by employing a planar drawing to indirectly imitate the multicollimator's arrangement in space.

FIG. 3 shows an embodiment of the PCP 220 having a solid circular center and a plurality of center-symmetric geometric figures (such as the concentric circles therein) designed in light of the spirit of the invention described above. The measurement/calibration mechanism of the multicollimator will be used to assist in describing the basis of the present invention. FIG. 4 shows the planar target 22 in the 3-D space of the system and the optical projection paths generated thereby in the FOV of the fisheye camera, wherein the fisheye lens 24 and the image plane 23 stand equivalently for the fisheye camera. If the projection behavior of a camera conforms to any known circular-function relationship (meaning the product of a circular function and a focal length), the incident rays cast from the PCP 220 will certainly and essentially achieve a collimating mechanism; namely, all incident rays will converge at a logical optical center of the fisheye lens 24, termed the front cardinal point (FCP) 242, and then refract divergently onto the image plane 23 (or the optical sensor) from the back cardinal point (BCP) 243 according to the projection function. The FCP 242 and BCP 243 are two referred points for the two distinct spaces delimiting the projecting behavior inside and

outside the fisheye camera. Sight rays refer to the FCP 242 and the image plane 23 refers to the BCP 243 while analyzing the projection mechanism of the fisheye camera. The distance between the two cardinal points 242 and 243 is arbitrary because it is not a parameter of the camera system. The present invention therefore merges the two cardinal points 242 and 243 into a single viewpoint (VP) 241, as shown in FIG. 5A, in order to unify the imaging logic. FIG. 5A shows the optical paths on two meridional planes comprising the optical axis 21 corresponding to the 3-D model of FIG. 4. FIG. 5A also reveals that α′ is inferred backwards from ρ. The logical relationship between α and α′ is ruled by the natural projection model of the test lens.

To present the theoretical foundation of the invention clearly, the referred coordinate systems are defined as follows: 1. The absolute coordinate system W (X, Y, Z) places its origin at the geometrical center of the target 22, and defines the positive direction of the Z-axis as the direction perpendicularly away from the target 22.

2. The camera outer-space projection coordinate system is E (α, β, h), wherein α, β, and h are three defined vectors. This coordinate system imitates the well-known geodetic coordinate system G (φ, λ, h), where φ is the latitude, λ the longitude, and h the height, identical to the h in E (α, β, h). With reference to FIG. 5B, the three vectors in E (α, β, h) are similar to the three in G (φ, λ, h), except that α refers to the optical axis 21 whereas φ refers to the equatorial plane 31. The inner and outer projection spaces of the camera are therefore demarcated into two hemispheres by the lens when the origin of E (α, β, h) is set at the VP 241. If h is positive, the object point 221 is located within 180 degrees, in an "object projection space"; if h is negative, the object point 221 is located at a visual angle larger than 180 degrees. Furthermore, the imaged points 231, 302 in the "image projection space" behind the fisheye lens 24 are no longer defined by α and h.

3. The image-plane coordinate system C' (x, y) or P' (ρ, β) represents the image plane 23 in the Cartesian coordinate system or the polar coordinate system, respectively, wherein its origin is set at the principal point 235.

4. The pixel coordinate system I (u, v) represents the image which can be directly observed on a computer screen, with a unit of "pixel". The principal point 235 is imaged at the coordinate denoted as I (uc, vc) on the computer screen. Basically, the imaged dimensions on the image plane 23, C' (x', y') or P' (ρ', β'), can correspond to the pixel coordinate system I (u, v). Therefore, the Cartesian coordinate system C (u, v) or the polar coordinate system P (ρ, β) can represent the pixel coordinate system I (u, v) as well, where I (uc, vc) is the origin.
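The mapping between the pixel coordinate system and the polar image-plane system defined above can be sketched as follows. This is a minimal illustration, not the patent's own code; the 0.0098 mm cell pitch matches the 9.8 μm CCD cell quoted later in this document, and the sample pixel values are hypothetical.

```python
import math

CELL_MM = 0.0098  # assumed sensor cell pitch (mm per pixel), per the 9.8 um CCD cell

def pixel_to_polar(u, v, uc, vc, cell=CELL_MM):
    """Map a pixel coordinate I(u, v) to polar image-plane coordinates
    P(rho, beta) about the principal point I(uc, vc)."""
    x = (u - uc) * cell           # Cartesian offset C'(x, y) in mm
    y = (v - vc) * cell
    rho = math.hypot(x, y)        # image height rho
    beta = math.atan2(y, x)       # azimuth beta on the image plane
    return rho, beta

# the principal point itself maps to the polar origin P(0, beta)
rho0, _ = pixel_to_polar(320, 240, 320, 240)
```

The helper name `pixel_to_polar` and the 320/240 principal-point location are illustrative assumptions only.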

FIG. 5B also shows the orientation-and-location relationship between E (α, β, h) and W (X, Y, Z) once the coordinate systems are set up. The objective in establishing the coordinate systems is to align the Z-axis of W (X, Y, Z) with the optical axis 21 and make them overlap, as shown in FIG. 5B. This figure utilizes a "small sphere" 30, a technical term in cartography, to identically express the optical projection traces in the inner and outer projection spaces of a camera mounting a fisheye lens conforming to the equidistant projection (EDP). The concept shown in the figure is suitable for other projection functions as well; the lens category usable in the present invention is not limited to the EDP. Some technical terms of geodesy and cartography, both highly developed disciplines, will be introduced hereinafter in order to assist the description of the theoretical foundation and the image-transformation principle of the present invention.

FIG. 5A shows an arc boundary of a "large sphere" 40 in order to explain how the PCP 220 on the target 22 imitates the arc-laid point-light sources of the multicollimator, in addition to the small sphere 30 with the radius of f (the focal length constant). Once the optical axis 21 perpendicularly passes through the pattern center 225 of the PCP 220, the planar target 22 is normally secant to the large sphere 40, and the outermost circle of the PCP 220 is regarded as the secant circle (a small circle in geodetic terms) on the surface of the large sphere 40.

Naturally, sight rays cast from any point (such as point 221) on the target 22 will perpendicularly penetrate the surface of the small sphere 30 at incident point 301 and converge toward the spherical center (namely, the VP 241). This means that every concentric circle of the PCP 220 constructs a symmetric light cone in the outer projection space of the camera, whose convergent point is the VP 241, like the cubical optical paths shown in FIG. 4. Logically, the sight rays will be refracted in light of a projection function while passing through the VP 241, and then project onto the image plane 23 to form the imaged point 231. Based on the spatially axis-symmetric characteristic mentioned above, the projected image is expected to be an imaged central-symmetric pattern (ICP) 230 if the optical axis 21 is aligned with the pattern center 225; the geometric-symmetric center of the ICP 230 is exactly the principal point 235.

Hence, the relative position between the target 22 and the test camera ought to be properly adjusted until the ICP 230 is obtained (that is, until the symmetry of the projected image reaches a certain preset accuracy); meanwhile, the feature coordinate imaged by the pattern center 225 can be regarded as the location of the principal point 235, which is the origin, denoted as C' (0, 0) or P' (0, β), on the image plane 23. This location is I (uc, vc) in the pixel coordinate system. The sight ray passing through the principal point 235 and being perpendicular to the image plane 23 would pass perpendicularly through the pattern center 225 of the PCP 220 as well. Thus, the sight ray passing perpendicularly through the pattern center 225 can stand for the position of the optical axis 21. The above procedure achieves the function of tracing the optical axis 21; this is a breakthrough in exploring the extrinsic parameters of the fisheye camera.

Many kinds of test patterns 220 are available in the invention, not just the planar concentric circles shown in FIG. 3. Every PCP 220 composed of concentric-and-symmetric geometric figures is a practicable embodiment; namely, concentric rectangles, concentric triangles or concentric hexagons are all applicable in the invention in addition to concentric circles. Even a combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons is a possible embodiment of the PCP 220. Of course, a 3-D calibration target may achieve the same function if it symmetrically surrounds the optical axis 21, but it will not result in an easier processing procedure.

An embodiment of the invention is presented below in order to concretely demonstrate the positioning method for the principal point 235 and the optical axis 21. In a practical experiment, the PCP 220 is designed as shown in FIG. 6, which is printed with a laser printer on a piece of A3-size paper as an embodiment of the target 22. In consideration of the severe distortion of the fisheye image, which increases rapidly along the outward radial direction, the radii of the concentric circles of the PCP 220 are designed to be progressively larger outwards in order to match this optical phenomenon of the fisheye lens. The radial scales of the concentric circles can refer to the initially imaged contours projected from a plain target 22 like the one shown in FIG. 3. The object-to-image contour relationship is obtained first at a proper measured location, and the widths of the physical concentric circles are then determined accordingly in order to enable the system to clearly display both the middle and the outer image ranges simultaneously. Besides, the visible contours of the alternating black and white concentric circles will benefit the subsequent image-processing procedure.

With reference to FIG. 7, the target 22 is fixed on an adjusting platform 50 and is moved as close to the camera 60 as possible in order to allow the PCP 220 to lie across the whole FOV of the fisheye lens 24; the projected image will now cover the most sensitive zone of the CCD. The above arrangement is devised for sampling the image information at larger visual angles, because this part of the image is mostly able to reflect the specific projection model of the fisheye lens 24; that is, referring to FIG. 2 again, the differences between different projection models become more obvious as the angles get larger.

The test camera 60 is a CCD B/W camera (Type CV-M50E, by Mechademic Company, Japan) mounting a fisheye lens (Type DW9813, by Daiwon Optical Co., Korea); this is a fairly simple camera system. The following specifications are offered by the vendors: the focal length is 1.78 mm and the diagonal FOV is 170 degrees; both the length and height of each CCD cell are 9.8 μm, which is the reference unit when calculating the image height (ρ) in the pixel coordinate system.

The adjusting platform 50 is mainly composed of three rigid axes perpendicular to each other, namely, the X' rigid axis 51, the Y' rigid axis 52 and the Z' rigid axis 53.

Every movement of the target 22 will represent the relative offset in the absolute coordinate system W (X, Y, Z) because the relative position between the target 22 and the three rigid axis bodies 51, 52, and 53, which can be precisely controlled by a computer, is firmly fixed. For the purpose of simplifying the description, the coordinates at which the three rigid axis bodies 51, 52, and 53 are located stand for the absolute coordinates of physical positions, and the positive direction of the Z-axis is defined as the direction in which the test camera 60 moves away from the target 22. Ideally, the optical axis 21 in E (α, β, h) has to be parallel with the Z' rigid axis 53 in W (X, Y, Z) in the final adjustment.

However, in practice there is an initial six-dimensional difference between E (α, β, h) and W (X, Y, Z), including three offset variables and three rotation variables; hence the two coordinate systems have to be aligned. First, the camera holder 70 is moved to a proper location based on visual judgment and the PTZ-head 71 is adjusted in order to turn the camera 60 to aim at the target 22, namely, to make the optical axis 21 of the camera 60 appear perpendicular to the target plane. Then, a computer program finely adjusts the absolute coordinate of the target 22 by referring to the displayed image and the symmetric indexes thereof; meanwhile, the orientation of the camera 60 is adjusted as well by the PTZ-head 71 under the camera 60 to seek the optimum symmetry of the displayed image. Based on this device arrangement, the optical axis 21 supposedly aligns with the Z' rigid axis 53 if, ideally, the optical axis 21 is adjusted to pass perpendicularly through the feature coordinate of the pattern center 225 on the target 22.

Two image-symmetry judging methods are disclosed in the invention for examining the alignment relationship between E (α, β, h) and W (X, Y, Z) so as to position the principal point 235 and the optical axis 21. However, this does not signify a limitation of the invention; any variations or modifications following the same spirit of judging image symmetry are not to be regarded as a departure from the spirit and scope of the invention.

FIG. 8A is a schematic view of the ICP 230 processed by the method of the present invention and shown on the computer screen. Taking the image as the reference plane and the principal point 235 (note: in practice this is actually the imaged blob of the pattern center 225) as the datum point, eight radial symmetric directions, namely the south, north, east, west, northeast, southwest, northwest and southeast, are selected as sampling directions for extracting the contours of the imaged concentric circles. Two kinds of marks are displayed on the screen as well in order to indicate two different sampled border points; referring to the image center and moving along the radial lines outwards, "-" proceeds from black to white and "+" moves from white to black. A border-distance is defined as the length from the sampled border point to the image center.

All border-distances in the same direction are summed up into a "distance-summation"; that is to say, eight distance-summations are calculated and separately denoted as "SS", "NN", "EE", "WW", "NE", "SW", "NW" and "SE". The difference between two distance-summations in opposite directions is supposed to be close to zero if the ICP 230 reaches ideal symmetry; namely, there are four differences (diff1 = NN-SS, diff2 = EE-WW, diff3 = NE-SW and diff4 = NW-SE), each of which is expected to achieve the value of zero. Alternatively, the sum of two distance-summations in opposite directions should have the largest value if the ICP 230 reaches ideal symmetry; namely, there are four sums (sum1 = NN+SS, sum2 = EE+WW, sum3 = NE+SW and sum4 = NW+SE), each of which is expected to attain the maximum value. Therefore, the four differences or the four sums or both (categorized together as the first symmetric index in the invention), displayed on the computer screen, are the references for examining the orientation of the target 22, and accordingly the relative position of the target 22 and the test camera 60 is adjusted in order to reach an optimal symmetry of the ICP 230.
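The first symmetric index described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's own program; the direction labels follow the text, while the function name and the sample border-distance lists are hypothetical.

```python
# Eight radial sampling directions and their opposite-direction pairings.
DIRS = ("NN", "SS", "EE", "WW", "NE", "SW", "NW", "SE")
OPPOSITE = (("NN", "SS"), ("EE", "WW"), ("NE", "SW"), ("NW", "SE"))

def first_symmetric_index(border_distances):
    """border_distances: dict mapping a direction to the list of distances
    from its sampled border points to the image center.  Returns the four
    opposite-direction differences (expected -> 0 at ideal symmetry) and
    the four opposite-direction sums (expected -> maximum)."""
    s = {d: sum(border_distances[d]) for d in DIRS}  # distance-summations
    diffs = [s[a] - s[b] for a, b in OPPOSITE]
    sums = [s[a] + s[b] for a, b in OPPOSITE]
    return diffs, sums

# A perfectly symmetric (hypothetical) sample yields four zero differences.
sample = {d: [10.0, 20.0, 30.0] for d in DIRS}
diffs, sums = first_symmetric_index(sample)
```

In use, the diffs would be driven toward zero by adjusting the target/camera pose, exactly as the text describes.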

Techniques for processing fisheye images are currently still rare. In the invention, a computer program conducts the imaged-contour extraction through an image-processing method. An imaged-border-identifying algorithm is created in the invention according to the special characteristics of fisheye images; this algorithm operates automatically in the background to conduct the imaged-contour extraction during the experiment. Owing to the rapid radial decay of the radiometric response of fisheye images, with reference to FIG. 8B, the peaks of the original signal intensity (expressed by the solid signal curves) decay rapidly near the border of the fisheye image, so that representative feature signals are hard to recognize in this area. Thus an unsharp-mask processing program is developed in the invention to handle the progressive decay. First, a histogram-equalizing process is performed in order to elevate the signal levels near the border area, manifested by the dashed signal curves. Next, a non-causal low-pass filter is applied to generate the dynamic threshold levels (expressed by the horizontal solid lines). The profiles of the dynamic threshold levels feature the edges at their crossing points with the equalized dashed signal curves. These edges are automatically delimited by the processing program and shown as the square waves at the bottom. The skills for extracting the coordinates of imaged contours are an important subject in the discipline of image metering; different kinds of imaging or photographing techniques should correspond to different processing methods owing to their different energy spectra. Fisheye images exhibit a very particular phenomenon; that is to say, large differences in image quality arise as the zenithal distance (α) increases. This point needs attention while processing images of this kind. Other details related to the image-processing techniques are well known to those skilled in the related art, so they are skipped here.
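The border-identifying steps above (equalize, low-pass to a dynamic threshold, mark the crossings) can be sketched on a single radial intensity profile. This is only an assumed reading of the algorithm, not the patent's implementation: the rank-based equalization, the moving-average window size, and the synthetic decaying profile are all illustrative choices.

```python
import numpy as np

def extract_borders(profile, win=15):
    """Locate black/white borders along one radial intensity profile:
    histogram-equalize the decayed signal, use a non-causal moving average
    of the equalized curve as the dynamic threshold, and report the
    crossing points of the two curves as border indices."""
    p = np.asarray(profile, dtype=float)
    # histogram equalization via the empirical rank (CDF) of each sample
    eq = np.argsort(np.argsort(p)) / (len(p) - 1)
    # non-causal low-pass (centered moving average) -> dynamic threshold
    thr = np.convolve(eq, np.ones(win) / win, mode="same")
    square = eq > thr                                     # black/white runs
    edges = np.flatnonzero(np.diff(square.astype(int)))   # border indices
    return edges, square

# Synthetic radial profile: alternating bright/dark bands whose contrast
# decays outward, mimicking the imaged concentric circles (hypothetical).
profile = [np.exp(-i / 120) * (1.0 if (i // 25) % 2 == 0 else 0.25)
           for i in range(200)]
edges, _ = extract_borders(profile)
```

Despite the outward decay, the equalized signal still crosses its threshold near each band boundary, which is the point of the dynamic-threshold idea.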

The second symmetric index in the invention is also based on the characteristic that fisheye-lens imaging is symmetric about the principal point 235 on the image plane 23.

Take the concentric circles of the PCP 220 in FIG. 6 as an example. If the optical axis 21 is already perpendicularly aligned with the pattern center 225 on the target 22, transforming P' (ρ, β) in FIG. 8A into a Cartesian plane (namely, the polar-coordinate transformation) will turn the circles into horizontal lines, as shown in FIG. 9, where the X-axis is β, the Y-axis is ρ, and the origin corresponds to the principal point 235 in FIG. 8A. The contour linearity of the transformed black/white lines is the second symmetric index in the invention. The experimental results show that this second symmetric index is highly sensitive: only a slight offset from the correct relative position between the target 22 and the camera 60 causes severely bent curves on the screen.

Hence, this second symmetric index is a commendable reference whether it is evaluated by computer calculation or by naked-eye observation. This deduction is also suitable for other circularly symmetric targets.
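The second symmetric index can be scored numerically as sketched below: contour samples P(ρ, β) are unwrapped into the (β, ρ) plane, where each imaged circle should become a horizontal line, so the ρ-spread of each unwrapped contour measures its linearity. The scoring rule (maximum deviation of ρ from its contour mean) and the sample contours are assumptions for illustration.

```python
def contour_linearity(contours):
    """contours: list of contours, each a list of (rho, beta) samples.
    Returns the largest deviation of rho from its per-contour mean;
    zero means every contour unwraps to a perfectly horizontal line."""
    worst = 0.0
    for pts in contours:
        rhos = [rho for rho, _beta in pts]
        mean = sum(rhos) / len(rhos)
        worst = max(worst, max(abs(r - mean) for r in rhos))
    return worst

# Hypothetical samples: aligned contours have constant rho over beta,
# while a misaligned contour undulates in rho.
aligned = [[(1.0, b / 10.0) for b in range(63)],
           [(2.5, b / 10.0) for b in range(63)]]
bent = [[(1.0 + 0.1 * (b % 7), b / 10.0) for b in range(63)]]
```

A computer (or a human looking at the unwrapped plot) simply drives this score toward zero while adjusting the pose.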

When the symmetry of the ICP 230 is at its optimum, as examined by either the first or the second symmetric index, or both, the optical axis 21 is regarded as being perpendicularly aligned with the pattern center 225, and the imaged point projected from the pattern center 225 is considered the principal point 235; meanwhile, the axis orthogonal to the pattern center 225 passes through the principal point 235 and is perpendicular to the image plane 23. That is to say, the spatial sight ray representing this orthogonal axis can absolutely position the optical axis 21 of the fisheye lens 24. Thus it can be seen that an innovative contribution of the invention is this: the absolute coordinate of the optical axis 21 can be obtained by referring to the absolute position of the PCP 220. This implies that the spatial absolute coordinate of the VP 241 on the optical axis 21 can also be determined by referring to the absolute position of the PCP 220. Hence the issue of posing the fisheye camera is solved.

Referring to FIG. 5A again, after the exposure of the optical axis 21, the VP 241 must be located on the optical axis 21 in light of the theory of optics; the possible range of the VP 241 thus shrinks to a quite limited scope. Under the numerical constraints of the radii of the concentric circles of the PCP 220 and the ICP 230, denoted as (ri, ρi), the method of trial-and-error is employed to examine each test point postulated as the VP 241 on the optical axis in order to find the optimal spot of the VP 241 in conformity with a specific projection model, after which the focal length constant (denoted as f) of the fisheye camera is derivable. The details are as follows: if the location of the VP 241 on the optical axis 21 is given, the value of D is determined by referring to the coordinate of the pattern center 225 of the PCP 220; accordingly, the zenithal distance αi defined by the i-th concentric circle of the PCP 220 is determined, that is, αi = tan⁻¹(ri/D). Further, the principal distances (ρi) corresponding to the i-th imaged contours are derivable from the image plane 23. The values of fi can be figured out by dividing ρi by αi in the case of the EDP (ρ = f·α). If the test camera is an ideal EDP camera, all the fi calculated from the different concentric circles are supposed to be a constant. Hence, the optical feature of the fisheye camera can be identified by changing the value of D as well as adopting different projection models, such as the stereographic projection (SGP, wherein ρ = 2f·tan(α/2)) or the orthographic projection (OGP, wherein ρ = f·sin(α)), until a successful matching level is attained.
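The trial-and-error step above can be sketched as follows: with the VP postulated at distance D from the target, αi = tan⁻¹(ri/D), and the per-contour focal lengths fi follow the postulated projection model; for an ideal EDP camera the fi agree only at the true D. The synthetic conjugate pairs below (f = 1.78 mm, D = 20 mm, matching the numbers elsewhere in this document) are an assumption.

```python
import math

def focal_candidates(pairs, D, model="EDP"):
    """pairs: conjugate pairs (r_i on the target, rho_i on the image).
    Returns the per-contour focal length f_i for the postulated model."""
    fis = []
    for r, rho in pairs:
        a = math.atan2(r, D)                       # zenithal distance alpha_i
        if model == "EDP":                         # rho = f * alpha
            fis.append(rho / a)
        elif model == "SGP":                       # rho = 2 f tan(alpha/2)
            fis.append(rho / (2.0 * math.tan(a / 2.0)))
        else:                                      # OGP: rho = f sin(alpha)
            fis.append(rho / math.sin(a))
    return fis

# Synthetic EDP camera: every f_i equals f only when D is guessed right.
f_true, D_true = 1.78, 20.0
pairs = [(r, f_true * math.atan2(r, D_true)) for r in (5.0, 10.0, 20.0, 40.0)]
good = focal_candidates(pairs, D_true)   # constant f_i at the true D
bad = focal_candidates(pairs, 25.0)      # f_i spread apart at a wrong D
```

The spread of the fi is exactly the quantity the trial-and-error scan over D (and over projection models) minimizes.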

For descriptive purposes, the VP 241 is postulated as the origin E (0, 0, 0) of the E (α, β, h) coordinate system, and the optical axis 21 (denoted as E (0, β, h), where β and h are arbitrary) is postulated to overlap the Z-axis (denoted as W (0, 0, z), where z is a real number) in the absolute coordinate system. Set the radii of the concentric circles of the PCP 220 as ri and the corresponding image heights as ρi. If the distance D between the VP 241 and the PCP 220 is given, since both ρi and αi are functions of D, the projection function of the EDP takes the mathematical form ρi(D) = f·αi(D), wherein i = 1...N and N is the number of imaged contours on the ICP 230 which can be processed or sampled. If the N-th imaged contour is taken as the common reference, namely ρN(D) = f·αN(D), the relation with the i-th imaged contour is given as: ρi(D)/ρN(D) - αi(D)/αN(D) = 0 --------------------(1) However, the value of D cannot be foreseen in advance because the VP 241 is not yet fixed. A free point (0, 0, z) therefore replaces (0, 0, D) in equation (1); a difference is given as: ei(z) = ρi(D)/ρN(D) - αi(z)/αN(z) --------------------(2) Because αi is decided by z and ri through the function αi(z) = tan⁻¹(ri/z), and the scales of ρi are fixed on the image plane 23 (namely, ρi(D) is invariable while the value of z changes), at least two conjugate coordinates (namely, the pairs (ri, ρi) representing the information concerning a corresponding object point 221 and imaged point 231) need to be measured in the experiment in order to decide the value of ei(z). Scanning along the optical axis 21, the object distance D can be fixed at the minimum of ei(z) according to equation (2); meanwhile the exact spot of the VP 241 is consequently fixed.

However, equation (2) only refers to two selected concentric circles. In order to cover the overall FOV and investigate the effective range of the test projection function, multiple traces spanning the image are necessary and ought preferably to reach larger visual angles. To fairly deal with the contribution of each imaged contour to the test projection function, a weight function is defined by referring to the increasing range of each imaged contour, which is: wi(D) = (ρi(D) - ρi-1(D)) / ρN(D) --------------------(3) where ρ0(D) is a null value and is treated as the radius of the principal point 235.

Thus, the error function, which is practically used in the overall evaluation to search for the VP 241 on the optical axis 21, is: e(z) = Σ(i=1..N) wi(D)·|ei(z)| --------------------(4) where z is the distance of a free point on the optical axis 21 from the PCP 220. The VP 241 is located at the point where e(z) is minimum, or null. Equation (4) is established on the postulation of the EDP; if the postulation is changed to other possible projection models, such as the SGP (ρ = 2f·tan(α/2)) or the OGP (ρ = f·sin(α)), equations (1) to (4) have to be derived once again according to their projection functions separately. Overall, the derivation in accordance with the idea described above is termed the "ε-algorithm" in the invention.
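The ε-algorithm scan over z can be sketched as below, evaluating equations (2) through (4) under the EDP postulate at free points along the optical axis; the minimum of e(z) marks the object distance D of the VP. The synthetic camera (f = 1.78 mm, D = 20 mm) and the 0.1 mm scan step are illustrative assumptions.

```python
import math

def error_e(z, rs, rhos):
    """Weighted error e(z) of equation (4) under the EDP postulate.
    rs: target radii r_i; rhos: measured image heights rho_i, with the
    N-th (reference) contour as the last entry of each list."""
    alphas = [math.atan2(r, z) for r in rs]
    e, prev_rho = 0.0, 0.0
    for i in range(len(rs)):
        w = (rhos[i] - prev_rho) / rhos[-1]               # weight, eq. (3)
        ei = rhos[i] / rhos[-1] - alphas[i] / alphas[-1]  # difference, eq. (2)
        e += w * abs(ei)                                  # accumulate eq. (4)
        prev_rho = rhos[i]
    return e

# Synthetic EDP measurements, then a scan of z from 10 mm to 40 mm.
f_true, D_true = 1.78, 20.0
rs = [5.0, 10.0, 20.0, 40.0]
rhos = [f_true * math.atan2(r, D_true) for r in rs]
best_e, best_z = min((error_e(z / 10.0, rs, rhos), z / 10.0)
                     for z in range(100, 400))
```

Here the scan recovers the postulated D because the synthetic data are exactly EDP; with real measurements the residual e at the minimum reflects measurement error and model mismatch.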

To obtain the focal length constant f, the measured ρi(D) and the respective αi(D) are used to obtain: fi(D) = ρi(D)/αi(D). Similarly, fi(D) will be equal to (1/2)·ρi(D)/tan(αi(D)/2) if the postulated projection function becomes the SGP; or fi(D) equals ρi(D)/sin(αi(D)) if the postulated projection function is the OGP. The fi(D) should all be equal to the inherent focal length constant f of the fisheye camera as long as the postulated projection function is exact, no error occurs in measurement, and the D value is correctly inferred.

Put into practice, the descriptive-statistics standard deviation of all fi(D) can be the basis to evaluate the accuracy of the postulated projection model. Namely, the following equation can qualify the applicability of the postulated projection function, which is termed the "σ-algorithm" in the invention: σ(z) = [Σ(i=1..N) (fi(z) - f̄(z))² / (N-1)]^(1/2) / f̄(z), where f̄(z) is the mean of the fi(z). For an advanced evaluation of the reliability of the test results (including the position of the optical axis 21 and the matching projection model), and referring to FIG. 7 again, the target 22 is moved twice along the positive direction of the Z-axis from the initial coordinate of the target 22 where the alignment of the optical axis 21 is completed, covering 5 mm each time. The position of the camera 60 and the target's coordinates on the X' and Y' rigid axes are fixed during the two additional tests. Counting the first test, the three experiments are named "Test1", "Test2" and "Test3" in sequence.
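The σ-algorithm described above can be sketched for the EDP postulate as follows: at each trial distance z the per-contour focal lengths fi(z) = ρi/αi(z) are computed, and their relative standard deviation (σ divided by the mean, dimensionless as in Table 1) qualifies the postulated projection function. The synthetic camera (f = 1.78 mm, D = 20 mm) is an assumption.

```python
import math

def sigma_index(z, rs, rhos):
    """Relative spread (std / mean) of the per-contour focal lengths
    f_i(z) = rho_i / alpha_i(z) under the EDP postulate; near zero only
    when both the model and the trial distance z are right."""
    fis = [rho / math.atan2(r, z) for r, rho in zip(rs, rhos)]
    mean = sum(fis) / len(fis)
    var = sum((fi - mean) ** 2 for fi in fis) / (len(fis) - 1)
    return math.sqrt(var) / mean

# Synthetic EDP measurements: sigma vanishes at the true D and grows
# as the trial distance moves away from it.
f_true, D_true = 1.78, 20.0
rs = [5.0, 10.0, 20.0, 40.0]
rhos = [f_true * math.atan2(r, D_true) for r in rs]
```

Scanning `sigma_index` over z yields the σ-profile of FIG. 10; swapping in the SGP or OGP fi formulas gives the other model postulates.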

Table 1: the parameters and results of the three tests

              ε-algorithm                  σ-algorithm
offset        D(EDP)  D(OGP)  D(SGP)      D(EDP)  D(OGP)  D(SGP)
0  (Test1)    19.3    26.4    15.6        19.9    26.2    16.5
5  (Test2)    24.4    33.1    19.7        24.9    32.8    20.7
10 (Test3)    29.4    40.1    23.8        29.9    39.4    24.9
f             1.82    2.44    2.99        1.85    2.42    3.10
ε / σ         0.03    0.03    0.054       0.003   0.004   0.004
(unit: mm, except ε and σ, which have no unit)

Table 1 lists the inferred values of D, f, ε and σ obtained in the three tests. Each test separately employs the ε-algorithm and the σ-algorithm to handle the measured data under postulations selected from the group comprising the EDP, the OGP and the SGP.

Comparing the absolute offsets in the left column, the results show that the test lens is very close to the EDP type, because the D-values shown in the EDP columns, irrespective of whether they are inferred by the ε-algorithm or the σ-algorithm, faithfully reflect the 5 mm offset each time; however, a difference of about 0.5 mm persists between the D-values of the two algorithms in each test. Moreover, the inferred values of f (1.82 mm / 1.85 mm) under the EDP postulation are quite close to the 1.78 mm provided by the specifications; the difference may be caused by the manual fabrication of the lens. On the other hand, the results corresponding to the OGP and the SGP are all far from the given data, namely the absolute offsets and the focal length constant (f). The very small values of ε and σ shown in the last row demonstrate the excellent accuracy of the two algorithms disclosed in the invention.

FIG. 10 shows the ε-profiles and the σ-profiles obtained while testing the D-value along the Z-axis in the absolute coordinate system, taking "Test1" as an example. The ε- or σ-curves all reveal obvious minimums under the six test conditions (three projection models multiplied by two algorithms). The single minimum in each test condition verifies the existence and location of the VP 241; it also proves the practicability of the invention.

Nevertheless, different locations of the VP 241 and different values of the focal length constant result when different projection functions are postulated for the same lens; this implies that a single test is not enough to find the real natural projection function of the test lens. In practice, a single particular circular projection function is also not enough to entirely describe the projecting behavior of a single lens.

The method disclosed in the invention, however, is not limited to a specific projection model, such as the EDP. Any non-linear projection lens with a given projection function is analyzable, and the test camera mounting the lens is accordingly parameterized.

The method disclosed in the invention has the function of categorizing the real natural projection models of test cameras without the postulation of an exact π-FOV. The transformation and presentation of fisheye images in the invention are completely based on the optical models, starting from the principal point 235 and moving outwards, so the problems of blurred boundaries and their uncertain visual angles can be ignored.

Consequently, users can designate a user-defined area on the image plane 23, which is the part effectively complying with a particular projection model, and transform the image solely within the user-defined area. Thus, the problem of fixing the boundaries is eliminated, and any precondition that the whole imaging area has to comply with a particular projection function is removed. In some cases, users can even shorten the sampling area to meet the required accuracy.

In conclusion, the principal point 235, the optical axis 21 and the VP 241 can be positioned accurately with the aid of the method in the invention, and the morphologic fidelity of the transformed images can accordingly be recovered. Hence, the applications of the fisheye camera will be widely expanded by the present invention.

In general, the invention possesses the following advantages: 1. The present invention enables the function of tracing the position of the optical axis 21 to be concretely realized in practice, so as to be able to further search for the spatial absolute coordinate of the VP 241. This is a breakthrough in parameterizing cameras.

2. The present invention radically simplifies the logic of image transformation, so that such transformation is faster and cheaper, because the necessary optical parameters, such as the principal point 235 and the focal length constant, can be precisely inferred by the invention; the transformed images accordingly recover morphological fidelity.

3. The method of camera parameterization disclosed in the invention is suitable for various fisheye cameras with different projection mechanisms, without any limitation to a particular projection model, e.g. the EDP.

4. The present invention needs no postulation established on an assumed and uncertain image boundary; hence the problem of the blurred boundary of the fisheye image is eliminated.

5. The present invention can accurately locate the single VP 241 in a particular projection model as the optical center for image transformation, so image metering through the fisheye camera becomes practicable.

6. The high precision of the optical parameters will greatly extend the applicable visual angles of current visual systems.

The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.