
Title:
METHOD FOR DETERMINING A SPATIAL UNCERTAINTY IN IMAGES OF AN ENVIRONMENTAL AREA OF A MOTOR VEHICLE, DRIVER ASSISTANCE SYSTEM AS WELL AS MOTOR VEHICLE
Document Type and Number:
WIPO Patent Application WO/2019/012004
Kind Code:
A1
Abstract:
The invention relates to a method for determining a spatial uncertainty (25) between object points in an environmental area (5) of a motor vehicle (1) and image points representing the object points in at least one image (24) of the environmental area (5) which is generated by means of image data (22) of at least one camera (3) of the motor vehicle (1), wherein a distribution (D, D1, D2, D3, D4) for at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) of the vehicle-side camera (3) is provided, a sample set (11) with random values (12) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) is determined based on the at least one distribution (D, D1, D2, D3, D4), a set (21) of reference world coordinates of at least one predetermined reference object point (26) in the environmental area (5) is determined by means of a coordinate transformation (13) dependent on the sample set (11) and the spatial uncertainty (25) is determined as a function of the set (21) of projected reference world coordinates of the reference object point (26). The invention moreover relates to a driver assistance system (2) as well as to a motor vehicle (1).

Inventors:
SIOGKAS GEORGE (IE)
VOROS ROBERT (IE)
STARR MICHAEL (IE)
Application Number:
PCT/EP2018/068827
Publication Date:
January 17, 2019
Filing Date:
July 11, 2018
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
G06T7/73; G06V10/98
Domestic Patent References:
WO2012139660A1 (2012-10-18)
WO2012139636A1 (2012-10-18)
Other References:
SUNDARESWARA R ET AL: "Bayesian Modelling of Camera Calibration and Reconstruction", 3-D DIGITAL IMAGING AND MODELING (3DIM 2005), FIFTH INTERNATIONAL CONFERENCE, OTTAWA, ON, CANADA, 13-16 JUNE 2005, PISCATAWAY, NJ, USA, IEEE, 13 June 2005, pages 394-401, XP010811021, ISBN: 978-0-7695-2327-9, DOI: 10.1109/3DIM.2005.24
SHIGANG LI ET AL: "Easy Calibration of a Blind-Spot-Free Fisheye Camera System Using a Scene of a Parking Space", IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, IEEE, PISCATAWAY, NJ, USA, vol. 12, no. 1, 1 March 2011, pages 232-242, XP011348841, ISSN: 1524-9050, DOI: 10.1109/TITS.2010.2085435
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (DE)
Claims:
Claims

1. Method for determining a spatial uncertainty (25) between object points in an environmental area (5) of a motor vehicle (1) and image points representing the object points in at least one image (24) of the environmental area (5) which is generated by means of image data (22) of at least one camera (3) of the motor vehicle (1), wherein:

- a distribution (D, D1, D2, D3, D4) for at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) of the vehicle-side camera (3) is provided,

- a sample set (11) with random values (12) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) is determined based on the at least one distribution (D, D1, D2, D3, D4),

- a set (21) of reference world coordinates of at least one predetermined reference object point (26) in the environmental area (5) is determined by means of a coordinate transformation (13) dependent on the sample set (11), wherein predetermined reference world coordinates of the predetermined reference object point (26) are converted from a world coordinate system (W) of the environmental area (5) into an image coordinate system (I) of the image data (22) and re-projected to the world coordinate system (W); and

- the spatial uncertainty (25) is determined as a function of the set (21) of projected reference world coordinates of the reference object point (26).

2. Method according to claim 1,

characterized in that

based on the spatial uncertainty (25), a systematic error is determined between at least one end point of a line marking (8) on a roadway of the motor vehicle (1) and a representation of the end point in the at least one image (24) of the roadway generated by the image data (22) of the at least one camera (3).

3. Method according to claim 1 or 2,

characterized in that

a type of the distribution (D, D1, D2, D3, D4) and/or at least one measure describing the distribution (D, D1, D2, D3, D4), in particular a mean value and/or a standard deviation of values of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz), is determined and the sample set (11) is determined depending on the type of the distribution (D, D1, D2, D3, D4) and/or the at least one measure.

4. Method according to any one of the preceding claims,

characterized in that

as the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) at least one rotation (Rx, Ry, Rz) and/or at least one translation (Tz) is determined, wherein a distribution (D, D1, D2, D3, D4) as well as a sample set (11) with random values (12) is determined for each extrinsic camera parameter (Tz, Rx, Ry, Rz).

5. Method according to any one of the preceding claims,

characterized in that

the sample set (11) with random values (12) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) is simulated by means of a Monte Carlo method as a function of the at least one distribution (D, D1, D2, D3, D4).

6. Method according to claim 5,

characterized in that

within the Monte Carlo method, the random values (12) are generated by means of a deterministic random number generator as a function of the at least one distribution (D, D1, D2, D3, D4).

7. Method according to any one of the preceding claims,

characterized in that

the at least one distribution (D, D1, D2, D3, D4) is determined by approximating measurement values (9) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) by means of a predetermined distribution function, in particular a Gaussian distribution.

8. Method according to claim 7,

characterized in that

the measured values (9) of the at least one extrinsic camera parameter (Tz, Rx, Ry, Rz) are measured during at least one camera calibration of the at least one camera (3).

9. Method according to any one of the preceding claims,

characterized in that

the coordinate transformation (13) is additionally performed as a function of at least one predefined intrinsic camera parameter (16).

10. Method according to any one of the preceding claims,

characterized in that

a two-dimensional reference subarea (15) is predefined in the environmental area (5) with reference object points (26), and the coordinate transformation (13) is performed on the reference object points (26) of the reference subarea (15).

11. Method according to any one of the preceding claims,

characterized in that

- the conversion of the at least one reference object point (26) from the world coordinate system (W) into the image coordinate system (I) is performed as a function of the sample set (11) and

- the re-projection of the reference object point (26) transferred into the image coordinate system (I) back into the world coordinate system (W) is performed as a function of at least one predetermined extrinsic ideal camera parameter (20).

12. Method according to any one of the preceding claims,

characterized in that

- the conversion of the at least one reference object point (26) from the world coordinate system (W) into the image coordinate system (I) is performed as a function of at least one predetermined extrinsic ideal camera parameter (20) and

- the re-projection of the reference object point (26) transferred into the image coordinate system (I) back into the world coordinate system (W) is performed as a function of the sample set (11).

13. Method according to any one of the preceding claims,

characterized in that

for the set (21) of reference world coordinates dependent on the sample set (11), a mean value and a standard deviation are determined and the spatial uncertainty (25) is determined as a function of the mean value and the standard deviation.

14. Driver assistance system (2) for a motor vehicle (1) comprising at least one camera (3) for capturing image data (22) from an environmental area (5) of the motor vehicle (1), and an image processing device (6) which is designed to perform a method according to one of the preceding claims.

15. Motor vehicle (1) with a driver assistance system (2) according to claim 14.

Description:
Method for determining a spatial uncertainty in images of an environmental area of a motor vehicle, driver assistance system as well as motor vehicle

The invention relates to a method for determining a spatial uncertainty between object points in an environmental area of a motor vehicle and image points representing the object points in at least one image of the environmental area which is generated by means of image data of at least one camera of the motor vehicle. The invention also relates to a driver assistance system as well as to a motor vehicle.

It is already known from the prior art to assist a driver of a motor vehicle by means of driver assistance systems. To this end, at least one camera of the motor vehicle captures image data from an environmental area of the motor vehicle in order to recognize objects based on the image data and to determine spatial positions of the objects relative to the motor vehicle. Such a driver assistance system can, for example, be an automatic line marking detection system, wherein roadway markings in the form of lines on a roadway surface are recognized as the objects and their positions relative to the motor vehicle are determined. Based on the spatial position or location of the road markings with respect to the motor vehicle, it is then possible, for example, to recognize or calculate the current lane of the motor vehicle as well as the adjacent lanes. For example, the driver can be supported during lane-changing maneuvers.

In the detection of the environmental area by means of cameras, in particular, original images of the surroundings of the motor vehicle are projected onto an arbitrary image plane. Such original images could, for example, be raw data of a fish-eye camera, which distorts the environmental area due to its fish-eye lens. The images projected onto the image plane can, for example, be top views of the environmental area. A relation between the original image and the projected image can be described via intrinsic and extrinsic camera parameters of the camera capturing the raw data. These intrinsic and extrinsic camera parameters influence the precision as well as the quality of the projection of the original image onto the image plane. Moreover, the accuracy of the projection itself is limited by the resolution of the original image and of the projected image. These two sources of error result in a combined, systematic error in the form of a spatial uncertainty between the original image and the projected image. It is the object of the present invention to provide a solution as to how a spatial uncertainty in image data of a camera of a motor vehicle can be determined particularly quickly and reliably.

According to the invention, this object is solved by a method, by a driver assistance system as well as by a motor vehicle having the features according to the respective independent patent claims. Advantageous embodiments of the invention are the subject matter of the dependent patent claims, the description and the figures.

According to one embodiment of a method for determining a spatial uncertainty between object points in an environmental area of a motor vehicle and image points representing the object points in at least one image of the environmental area which is generated by means of image data of at least one camera of the motor vehicle, in particular a distribution for at least one extrinsic camera parameter of the vehicle-side camera is provided and a sample set with random values of the at least one extrinsic camera parameter is determined based on the at least one distribution. A set of reference world coordinates of at least one predetermined reference object point in the environmental area can be determined by means of a coordinate transformation dependent on the sample set, wherein predetermined reference world coordinates of the predetermined reference object point are converted from a world coordinate system of the environmental area into an image coordinate system of the image data and are re-projected to the world coordinate system. In particular, the spatial uncertainty is determined as a function of the set of projected reference world coordinates of the reference object point.

According to a particularly preferred embodiment of a method for determining a spatial uncertainty between object points in an environmental area of a motor vehicle and image points representing the object points in at least one image of the environmental area which is generated by means of image data of at least one camera of the motor vehicle, a distribution for at least one extrinsic camera parameter of the vehicle-side camera is provided and a sample set with random values of the at least one extrinsic camera parameter is determined based on the at least one distribution. A set of reference world coordinates of at least one predetermined reference object point in the environmental area is determined by means of a coordinate transformation dependent on the sample set, wherein predetermined reference world coordinates of the predetermined reference object point are converted from a world coordinate system of the environmental area into an image coordinate system of the image data and are re-projected to the world coordinate system. Moreover, the spatial uncertainty is determined as a function of the set of projected reference world coordinates of the reference object point.

In the method, first the image data from the environmental area of the motor vehicle can be detected by the at least one vehicle-side camera and, for example, provided to an image processing device of the driver assistance system. The at least one camera is, in particular, a fish-eye camera which has a fish-eye lens for enlarging its detection area. The image data or raw image data detected by the camera are distorted due to the fish-eye lens. The distorted image data can be projected onto an image plane to produce the image. In particular, image data from at least four cameras forming a surround view camera system are acquired, wherein the distorted image data can be combined and projected onto the image plane in the form of a top view image of the environmental area of the motor vehicle.

On the basis of the projected image, for example on the basis of the top view image, objects, in particular roadway markings, can be recognized in the environmental area of the motor vehicle and their positions in the world coordinate system of the environmental area can be determined. However, these determined position data are associated with an error, in particular a systematic error, which represents the spatial uncertainty of the determined positions in the world coordinate system. The world coordinate system is a three-dimensional coordinate system, for example a vehicle coordinate system, wherein positions of object points in the environmental area can be described by means of an x-coordinate in the vehicle longitudinal direction, a y-coordinate in the vehicle transverse direction and a z-coordinate in the vehicle vertical direction.

By means of the camera, the object points described in the world coordinate system are mapped into a two-dimensional image coordinate system. Positions of image points are described in the two-dimensional image coordinate system, in particular by means of a u-coordinate in the horizontal image direction and a v-coordinate in the vertical image direction. A relation between the world coordinate system and the image coordinate system can be described via extrinsic, external camera parameters and intrinsic, internal camera parameters. The camera parameters thus enable a coordinate transformation between the world coordinate system or an object space and the image coordinate system or an image space. The extrinsic camera parameters describe, in particular, a position and an orientation of the camera, i.e. a pose of the camera, in the object space and establish a connection between the world coordinate system and a camera coordinate system. The intrinsic camera parameters establish the relationship between the camera coordinate system and the image coordinate system.
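Purely as an illustrative sketch, this transformation chain can be written out in Python with a simple pinhole model standing in for the fish-eye model of the cameras discussed here; the intrinsic matrix K and the camera pose (R, t) below are assumed example values:

    # Minimal sketch of the chain world -> camera -> image; all values assumed.
    import numpy as np

    def project_world_to_image(p_world, R, t, K):
        """World point (x, y, z) -> camera frame (extrinsics) -> pixel (intrinsics)."""
        p_cam = R @ p_world + t        # extrinsic step: world -> camera coordinates
        p_img = K @ p_cam              # intrinsic step: camera frame -> image plane
        return p_img[:2] / p_img[2]    # perspective division yields (u, v)

    K = np.array([[800.0, 0.0, 640.0],     # assumed focal length and image centre
                  [0.0, 800.0, 360.0],
                  [0.0,   0.0,   1.0]])
    R = np.array([[0.0, 1.0,  0.0],        # assumed pose: camera looking straight
                  [1.0, 0.0,  0.0],        # down at the road from above
                  [0.0, 0.0, -1.0]])
    t = np.array([0.0, 0.0, 1.2])          # assumed height of 1.2 m above the origin
    print(project_world_to_image(np.array([0.5, 0.3, 0.0]), R, t, K))  # -> (u, v)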

The better known and more stable the extrinsic camera parameters are, the smaller the spatial uncertainty between the object points in the environmental area and the image points imaging the object points in the images. However, the extrinsic camera parameters are often subject to fluctuations. In addition, usually only a small number of values is available for the extrinsic camera parameters. In order to be able to determine the spatial uncertainty as accurately as possible, the at least one distribution of the values of the at least one extrinsic camera parameter is predetermined. Preferably, at least one rotation and/or at least one translation is determined as the at least one extrinsic camera parameter, wherein a distribution as well as a sample set with random values are determined for each extrinsic camera parameter. By means of the rotation values and the translation values, the position as well as the orientation of the camera, i.e. the pose of the camera, can thus be determined.

In particular, a translation along the z-axis is determined as a first extrinsic camera parameter, a rotation around the x-axis as a second extrinsic camera parameter, a rotation around the y-axis as a third extrinsic camera parameter, and a rotation around the z-axis as a fourth extrinsic camera parameter. The translation value describes a displacement of the camera in the z-direction relative to an origin of the world coordinate system. The rotation values describe a rotation of the camera around the three Euler angles. Thereby, the distribution of the values of the associated extrinsic camera parameter is specified or determined, in particular, for each of the extrinsic camera parameters.

Based on the respective distribution of the values of the extrinsic camera parameter, the sample set of random values of the extrinsic camera parameter is generated, in particular simulated. This means that, based on the distribution, a plurality of values of the extrinsic camera parameter is artificially generated which are distributed according to the predetermined distribution. In particular, a first sample set with random values for the translation in the z-direction is generated on the basis of a first distribution of translation values in the z-direction. Based on a second distribution of rotation values around the x-axis, a second sample set with random values for the rotation around the x-axis is determined, based on a third distribution of rotation values around the y-axis, a third sample set with random values for the rotation around the y-axis is determined, and based on a fourth distribution of rotation values around the z-axis, a fourth sample set with random values for the rotation around the z-axis is determined. The respective sample set represents, in particular, all possible fluctuations which the associated extrinsic camera parameter can have. In other words, the sample set represents all possible values which the associated extrinsic camera parameter can assume.
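A hedged Python sketch of this sampling step is given below; the means and standard deviations are placeholders standing in for values obtained from calibration:

    # One sample set per extrinsic camera parameter, drawn from assumed Gaussians.
    import numpy as np

    rng = np.random.default_rng(seed=42)   # deterministic pseudo-random generator
    n = 10_000                             # size of each sample set

    params = {                             # (mean, standard deviation), assumed
        "Tz": (1.20, 0.010),               # translation along z in metres
        "Rx": (0.00, 0.002),               # rotations around x, y and z in radians
        "Ry": (0.35, 0.002),
        "Rz": (0.00, 0.003),
    }
    sample_sets = {name: rng.normal(mu, sigma, n)
                   for name, (mu, sigma) in params.items()}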

Then, a coordinate transformation with two coordinate transformation steps is performed depending on the determined sample set. The at least one predetermined reference object point, whose position in the environmental area is known, is projected from the world coordinate system into the image coordinate system in a first transformation step, and the projected reference object point is re-projected into the world coordinate system from the image coordinate system in a second transformation step. Since the coordinate transformation, in particular the first transformation step or the second transformation step, is carried out as a function of the sample set with random values of the extrinsic camera parameters, a plurality of reference world coordinates, i.e. the set of reference world coordinates, is formed during the transformation.
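The following Python sketch illustrates these two transformation steps under the same assumed pinhole setup as above (the fish-eye distortion of the real cameras is omitted): the forward projection uses extrinsics drawn from the sample set, and the back-projection onto the road plane z = 0 uses ideal extrinsics, corresponding to the first of the two variants described further below:

    # Round-trip sketch: world -> image with sampled extrinsics, image -> world
    # with ideal extrinsics; pinhole model and all numeric values are assumptions.
    import numpy as np

    def euler(rx, ry, rz):
        """Rotation matrix from Euler angles around x, y and z (assumed order)."""
        cx, sx, cy, sy = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        return (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
                np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
                np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))

    def to_image(p_w, R, t, K):
        p = K @ (R @ p_w + t)                       # first step: world -> image
        return p[:2] / p[2]

    def to_ground(uv, R, t, K):
        ray = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
        centre = -R.T @ t                           # camera centre in world frame
        return centre - (centre[2] / ray[2]) * ray  # intersect road plane z = 0

    K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1.0]])  # assumed intrinsics
    R_ideal = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, -1.0]])     # camera looking down
    t_ideal = np.array([0.0, 0.0, 1.2])                            # assumed 1.2 m height
    p_ref = np.array([0.5, 0.3, 0.0])              # predetermined reference ground point

    rng = np.random.default_rng(seed=1)
    cloud = []
    for _ in range(1000):
        R_s = euler(*rng.normal(0.0, 0.002, 3)) @ R_ideal            # sampled rotations
        t_s = t_ideal + np.array([0.0, 0.0, rng.normal(0.0, 0.01)])  # sampled Tz
        uv = to_image(p_ref, R_s, t_s, K)                 # step 1: sampled pose
        cloud.append(to_ground(uv, R_ideal, t_ideal, K))  # step 2: ideal pose
    cloud = np.array(cloud)  # the set of reference world coordinates around p_ref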

In particular, it is provided that the coordinate transformation is additionally performed as a function of at least one predetermined intrinsic camera parameter. The at least one intrinsic camera parameter serves in particular to establish the connection between the three-dimensional camera coordinate system and the two-dimensional image coordinate system. For example, a focal length of the camera, coordinates of an image center point and pixel scalings in both image coordinate directions can be specified as the intrinsic camera parameters. The at least one intrinsic camera parameter is assumed to be constant. In particular, the translations or displacements of the camera in the x-direction and in the y-direction are also assumed to be constant and are given for the coordinate transformation.

The set of reference world coordinates resulting from the coordinate transformation represents the possible reference world coordinates of the object point as a function of the different random values of the extrinsic camera parameter. In other words, the set of reference world coordinates represents the spatial uncertainty in the position determination of an object point, which is introduced by the different, mutually divergent random values of the extrinsic camera parameter. In particular, the spatial uncertainty is determined for each pixel within the projected image.

According to the method, computing power of the image processing device is thus used in order to artificially generate the extensive sample set with random values of the at least one extrinsic camera parameter. Since the sample set in particular maps all fluctuations or changes of the extrinsic camera parameters, the spatial uncertainty can be determined particularly reliably on the basis of the sample set. In addition, there is the advantage that no complex analytical solution for determining the spatial uncertainty has to be found. Based on the spatial uncertainty, the integration of the object recognition into a Bayesian network, in particular into a Kalman filter or a particle filter, can also be advantageously facilitated.

Preferably, based on the spatial uncertainty, a systematic error is determined between at least one end point of a line marking on a roadway of the motor vehicle and the representation of the end point in the at least one image of the roadway generated by the image data of the at least one camera. Thus, line markings on a road surface are recognized as the objects in the environmental area of the motor vehicle. Based on the spatial uncertainty, a position of the line marking end points can be determined particularly precisely. Thus, a driver assistance system can be implemented in the form of a line marking detection system which can support a driver, for example, when changing lanes to an adjacent lane or when keeping to the current lane.

In a further development of the invention, a type of the distribution and/or at least one measure describing the distribution, in particular a mean value and/or a standard deviation of values of the at least one extrinsic camera parameter, is determined and the sample set is determined as a function of the type of the distribution and/or the at least one measure. The frequency of occurring values of the at least one extrinsic camera parameter can be defined by the type of the distribution and/or by the at least one measure describing the distribution. If the type of the distribution and/or the at least one measure describing the distribution is known, a sample set can be simulated whose random values represent the real values of the at least one extrinsic camera parameter particularly realistically.

Particularly preferably, the sample set with random values of the at least one extrinsic camera parameter is simulated by means of a Monte Carlo method as a function of the at least one distribution. In the Monte Carlo method, the random values are, in particular, generated by means of a deterministic random number generator as a function of the at least one distribution. For example, each extrinsic camera parameter can be described as a vector, wherein the components of the vector are determined by means of the deterministic random number generator or pseudo-random number generator with the aid of the specified distribution. Thus, the generated sample set can be described as a two-dimensional matrix comprising the vectors with the random values of the extrinsic camera parameters. In the present case, the matrix has, for example, four vectors which correspond to the four extrinsic camera parameters. Using the Monte Carlo method, a particularly large sample set or sample volume can thus be determined numerically, by means of which the spatial uncertainty for each image point of the generated image of the environmental area can then be determined with high reliability.
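For illustration, such a matrix could be built as follows; the sizes and the mean/standard deviation pairs are assumptions, and the seeded generator makes the pseudo-random values reproducible:

    # The matrix described above: one row of n random values per extrinsic
    # parameter, generated by a seeded (hence deterministic) generator.
    import numpy as np

    rng = np.random.default_rng(seed=0)       # reproducible pseudo-random numbers
    n = 10_000
    means = [1.20, 0.00, 0.35, 0.00]          # assumed means for Tz, Rx, Ry, Rz
    stds  = [0.010, 0.002, 0.002, 0.003]      # assumed standard deviations
    M = np.vstack([rng.normal(m, s, n) for m, s in zip(means, stds)])
    print(M.shape)                            # (4, n): four vectors x1 ... xn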

In a particularly advantageous embodiment, the at least one distribution is predetermined by approximating measurement values of the at least one extrinsic camera parameter by means of a predetermined distribution function, in particular a Gaussian distribution. In this case, the distribution, based on which the sample set with random values of the at least one extrinsic camera parameter is then generated, is determined from measured values of the at least one extrinsic camera parameter. In particular, the measured values of the at least one extrinsic camera parameter are measured during at least one camera calibration of the at least one camera. The measured values are thereby approximated by the distribution function, for example the Gaussian distribution, with a mean value as well as a standard deviation depending on the measured values. The measured values represent the basis on which the distribution is generated. Based on a first number of measured values of the extrinsic camera parameter, a second number of random values, which is significantly greater in comparison to the first number, is generated. Based on the second number of random values, the spatial uncertainty can then be determined particularly accurately and reliably. Thus, it can be advantageously prevented that a large number of camera calibrations or calibration cycles has to be performed for measuring the extrinsic camera parameters. Rather, the extrinsic camera parameters necessary for determining the spatial uncertainty are simulated with the aid of computing power of the image processing device.
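A brief sketch of this step: a handful of invented measured values is approximated by a Gaussian, whose fitted mean and standard deviation then parameterise the much larger simulated sample set:

    # Approximate a few measured calibration values by a Gaussian, then sample.
    import numpy as np

    measured_rx = np.array([0.0021, 0.0018, 0.0025, 0.0019, 0.0023])  # assumed radians
    mu, sigma = measured_rx.mean(), measured_rx.std(ddof=1)  # fitted Gaussian measures
    rng = np.random.default_rng(seed=7)                      # deterministic generator
    rx_samples = rng.normal(mu, sigma, 10_000)               # far larger second set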

It may be provided that a two-dimensional reference subarea with reference object points is predefined in the environmental area and the coordinate transformation is performed on the reference object points of the reference subarea. The two-dimensional reference subarea in the environmental area is a so-called region of interest (ROI) which is located in particular on a roadway of the motor vehicle. Thus, a z-coordinate of the reference object points within the reference subarea is equal to 0. The coordinate transformation for determining the set of reference world coordinates is thus performed on the basis of reference ground points. By setting the z-coordinate to the value 0, the coordinate transformation can be carried out particularly quickly and simply.
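For illustration, such a reference subarea could be laid out as a grid of ground points with z = 0; the extent and spacing are assumptions:

    # Reference subarea on the road: a grid of ground points with z = 0.
    import numpy as np

    xs, ys = np.meshgrid(np.arange(2.0, 8.0, 0.5),    # assumed 2-8 m ahead
                         np.arange(-3.0, 3.0, 0.5))   # assumed +/-3 m to the side
    roi = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])  # (N, 3)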

According to one embodiment of the invention, the conversion of the at least one reference object point from the world coordinate system into the image coordinate system is carried out as a function of the sample set, and the re-projection of the reference object point transferred into the image coordinate system back into the world coordinate system is performed as a function of at least one predetermined extrinsic ideal camera parameter.

According to this embodiment, the simulated sample set with random values of the extrinsic camera parameter is therefore used during the first coordinate transformation step. Thus, a set of reference image coordinates of a reference image point corresponding to the reference object point can be determined from the reference world coordinates as a function of the sample set. This means that when projecting the reference object point located at a reference world position into the image coordinate system, a plurality of reference image positions arises. Based on the sample set with the plurality of random values for the at least one extrinsic camera parameter, a reference image point is thus formed whose reference image position has a spatial uncertainty. This spatial uncertainty of the reference image position is represented by the set of reference image coordinates.

The generated set of reference image coordinates is transformed back into the world coordinate system in the second coordinate transformation step, wherein the at least one extrinsic ideal camera parameter is used in the back-transformation. Therein, the set of reference world coordinates can be determined from the set of reference image coordinates as a function of the at least one predetermined extrinsic ideal camera parameter. The extrinsic ideal camera parameter describes an ideal calibration of the camera. For this purpose, a constant value can be predetermined for each extrinsic camera parameter, for example, and the second transformation step can be performed based on these constant values of the extrinsic camera parameters. In addition, the same, constant intrinsic camera parameters are assumed in both coordinate transformation steps; the intrinsic camera parameters are thus kept stable. The set of reference world coordinates, which is then used to determine the spatial uncertainty between the object space and the image space, is produced by the re-transformation of the set of reference image coordinates from the image coordinate system into the world coordinate system.

In a further embodiment of the invention, the transformation of the at least one reference object point from the world coordinate system into the image coordinate system is carried out as a function of at least one predetermined extrinsic ideal camera parameter. The re-projection of the reference object point transferred into the image coordinate system back into the world coordinate system is carried out as a function of the sample set.

In this embodiment, the sample set with random values of the at least one extrinsic camera parameter is used only in the second transformation step. Within the first transformation step, reference image coordinates of a reference image point corresponding to the reference object point can be determined from the reference world coordinates as a function of the at least one predetermined extrinsic ideal camera parameter. Thus, in the first transformation step, in which the ideal calibration of the camera is assumed and therefore the at least one predetermined extrinsic ideal camera parameter is used, only one reference image point at a particular reference image position is generated from the reference object point. Within the second transformation step, the set of reference world coordinates can be determined from the reference image coordinates as a function of the sample set. The individual reference image point determined in the first transformation step at the reference image position, which is described by means of the reference image coordinates, is thus transformed back into the world coordinate system as a function of the simulated sample set. Thus, the set of reference world coordinates, which describes the spatial uncertainty between the image space and the object space, arises only in the second transformation step.
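A minimal sketch of this second variant, under the same assumed pinhole setup as in the earlier round-trip sketch: the ideal forward step yields a single pixel, which is then back-projected once per sampled pose:

    # Second variant: one ideal pixel, many sampled back-projections onto z = 0.
    import numpy as np

    K_inv = np.linalg.inv(np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1.0]]))
    uv = np.array([840.0, 693.33, 1.0])      # ideal (rounded) pixel of the assumed
                                             # reference ground point (0.5, 0.3, 0)
    def back_project(R, t):
        """Intersect the viewing ray of pixel uv with the road plane z = 0."""
        ray, centre = R.T @ K_inv @ uv, -R.T @ t
        return centre - (centre[2] / ray[2]) * ray

    R_ideal = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, -1.0]])   # camera looking down
    rng = np.random.default_rng(seed=5)
    cloud = []
    for _ in range(1000):
        a = rng.normal(0.0, 0.002)           # sampled rotation around x (assumed;
        Rx = np.array([[1, 0, 0],            # rotations around y and z analogous)
                       [0, np.cos(a), -np.sin(a)],
                       [0, np.sin(a),  np.cos(a)]])
        t_s = np.array([0.0, 0.0, 1.2 + rng.normal(0.0, 0.01)])  # sampled Tz
        cloud.append(back_project(Rx @ R_ideal, t_s))
    cloud = np.array(cloud)                  # set of reference world coordinates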

Preferably, a mean value and a standard deviation are determined for the set of reference world coordinates, which is dependent on the sample set. The spatial uncertainty is determined as a function of the mean value and the standard deviation. Since the random values of the extrinsic camera parameter can be represented or characterized by means of a distribution, the values of the reference world coordinates can also be represented or characterized by means of a distribution. The spatial uncertainty can then be determined from this distribution. The spatial uncertainty, which describes the systematic error within the at least one image generated by means of the image data of the camera, can, for example, be fed to the Bayesian network.
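Concretely, the condensation to a mean value and a standard deviation could look as follows; the synthetic point cloud is a stand-in for the set of reference world coordinates produced in the sketches above, so the snippet runs on its own:

    # Condense the set of reference world coordinates into mean and std.
    import numpy as np

    rng = np.random.default_rng(seed=3)      # synthetic stand-in for the cloud
    cloud = rng.normal(loc=[0.50, 0.30, 0.0], scale=[0.004, 0.004, 0.0],
                       size=(1000, 3))
    mean = cloud.mean(axis=0)    # estimated world position of the reference point
    std = cloud.std(axis=0)      # spatial uncertainty along each world axis
    # mean and a covariance built from std could, e.g., feed a Kalman filter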

The invention also relates to a driver assistance system for a motor vehicle comprising at least one camera for capturing image data from an environmental area of the motor vehicle and an image processing device which is designed to carry out a method according to the invention or an advantageous embodiment thereof. The driver assistance system is particularly designed as an automatic line marking detection system. By means of the driver assistance system, roadway markings, for example lines on a roadway surface for the motor vehicle, can thus be recognized. Based on the road markings, the driver can be supported, for example, during lane keeping or during lane change maneuvers.

A motor vehicle according to the invention comprises a driver assistance system according to the invention. The motor vehicle is designed in particular as a passenger car. The motor vehicle can, for example, comprise a surround view camera system with at least four cameras which can capture image data from the environmental area around the motor vehicle. The cameras have in particular fish-eye lenses.

The preferred embodiments presented with reference to the method according to the invention and their advantages apply accordingly to the driver assistance system according to the invention and to the motor vehicle according to the invention.

Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or alone without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.

The invention is explained in more detail on the basis of preferred exemplary embodiments and with reference to the attached drawings.

The figures show:

Fig. 1 a schematic representation of an embodiment of a motor vehicle according to the invention;

Fig. 2a to 2d schematic representations of measured values of extrinsic camera parameters as well as distributions of the measured values of the extrinsic camera parameters;

Fig. 3a, 3b schematic representations of sample sets with random values of the extrinsic camera parameters;

Fig. 4 a schematic representation of a coordinate transformation between a world coordinate system and an image coordinate system;

Fig. 5 a schematic representation of distorted image data of a camera of the motor vehicle;

Fig. 6 a schematic representation of a top view image generated from image data of a camera of the motor vehicle with a spatial uncertainty; and

Fig. 7 a further schematic representation of a coordinate transformation between a world coordinate system and an image coordinate system.

In the figures, identical as well as functionally identical elements are provided with the same reference characters.

Fig. 1 shows a motor vehicle 1 according to the present invention. In the present case, the motor vehicle 1 is designed as a passenger car. The motor vehicle 1 comprises a driver assistance system 2, which can assist a driver of the motor vehicle 1 during his driving task. For this purpose, the driver assistance system 2 in the present case has four cameras 3, which form a surround view camera system 4. The cameras 3 are designed to detect image data from an environmental area 5 of the motor vehicle 1. The image data acquired by the cameras 3 can be fed to an image processing device 6 of the driver assistance system 2, which is designed to generate images of the environmental area 5, for example top-view images, from the image data of the cameras 3. Moreover, the image processing device 6 can be designed to recognize objects 7, in particular roadway markings 8 on a roadway of the motor vehicle 1, in the image data as well as to determine their positions in a world coordinate system W. The world coordinate system W is a three-dimensional coordinate system with an x-axis oriented along a vehicle longitudinal direction, a y-axis oriented along a vehicle transverse direction, and a z-axis oriented along a vehicle vertical direction.

The image data acquired by the cameras 3 are described in a two-dimensional image coordinate system I which comprises a u-axis in the horizontal image direction and a v-axis in the vertical image direction. A relation between the world coordinate system W and the image coordinate system I is established via extrinsic and intrinsic camera parameters of the cameras 3. The extrinsic camera parameters, which describe a position and an orientation of a camera 3 in the world coordinate system W, establish a relation between the world coordinate system W and a camera coordinate system. The intrinsic camera parameters describe a relation between the camera coordinate system and the image coordinate system I. In particular, the extrinsic camera parameters can introduce a systematic error during the generation of images from the image data of the cameras 3, which describes a spatial uncertainty between object points in the world coordinate system W of the environmental area 5 and image points in the image coordinate system I of the generated image. Such an image 24 in the form of a top view image comprising the systematic error visualized as a distribution 25 is exemplarily shown in Fig. 6.

In order to be able to reliably determine the systematic error 25, it is desirable that a large number of values for the extrinsic camera parameters is available. Fig. 2a to 2d show measured values 9 of extrinsic camera parameters Tz, Rx, Ry, Rz, which, for example, can be measured during a camera calibration of the cameras 3. Fig. 2a shows a histogram of measured values 9 for a first extrinsic camera parameter in the form of a translation Tz in the z-direction. A position d is indicated on the abscissa and a number A on the ordinate. The translation Tz in the z-direction describes a displacement of the camera 3 with respect to an origin of the world coordinate system W. Fig. 2b shows a histogram of measured values 9 for a second extrinsic camera parameter in the form of a rotation Rx around the x-axis. Fig. 2c shows a histogram of measured values 9 for a third extrinsic camera parameter in the form of a rotation Ry around the y-axis. In Fig. 2d, a histogram of measured values 9 for a fourth extrinsic camera parameter in the form of a rotation Rz around the z-axis is shown. In Fig. 2b, 2c and 2d, an angle a is indicated on the abscissa and a number A on the ordinate. The rotations Rx, Ry, Rz around the respective axes x, y, z describe rotations of the camera 3 around the Euler angles.

Through the translations and rotations, a pose of the camera 3 can be specified. These measured values 9 of the extrinsic camera parameters Tz, Rx, Ry, Rz, which are mostly available only in a limited, small number, are approximated by respective distributions D1, D2, D3, D4, in particular Gaussian distributions. For each of the distributions D1, D2, D3, D4, at least one measure describing the distribution D1, D2, D3, D4, for example a mean value and a standard deviation, is determined. A first distribution D1 according to Fig. 2a describes a distribution of the measured values 9 of the translation Tz, a second distribution D2 according to Fig. 2b describes the distribution of the measured values 9 of the rotation Rx, a third distribution D3 according to Fig. 2c describes the distribution of the measured values 9 of the rotation Ry and a fourth distribution D4 according to Fig. 2d describes the distribution of the measured values 9 of the rotation Rz.

Fig. 3a schematically shows a distribution D, which is generated based on measured values 9 of an extrinsic camera parameter. Based on the distribution D, a distribution parameter 10 is determined. The distribution parameter 10 describes in particular the type of the distribution D, for example a Gaussian distribution, as well as the at least one measure describing the distribution D, for example the mean value and the standard deviation. Based on the distribution parameter 10, a sample set 11 with random values 12 for the extrinsic camera parameter described by the distribution D can be determined by means of Monte Carlo simulation. For this purpose, the random values 12 can be generated by means of a deterministic random number generator. Thus, for example, a sample set 11 with random values 12 for the translation Tz, a sample set 11 with random values 12 for the rotation Rx, a sample set 11 with random values 12 for the rotation Ry, and a sample set 11 with random values 12 for the rotation Rz can be determined. Fig. 3b shows that the sample sets 11, of which only one sample set 11 is shown schematically, can be represented in the form of a matrix M with rows m and columns n. Rows m of the matrix M are formed by vectors, each vector comprising the concrete values x1, x2, ..., xn of the random values 12 of one extrinsic camera parameter. For the four extrinsic camera parameters Tz, Rx, Ry, Rz, the matrix M comprises four rows m.

Based on the sample sets 11 with random values 12 of the extrinsic camera parameters Tz, Rx, Ry, Rz, a coordinate transformation 13 is performed, which is shown in Fig. 4. The coordinate transformation 13 has a first transformation step 14 between the world coordinate system W and the image coordinate system I and a second transformation step 19 between the image coordinate system I and the world coordinate system W. Here, the first transformation step 14 is carried out based on the sample sets 11 by transferring a reference subarea 15 with reference object points, which are described in reference world coordinates, from the world coordinate system W into the image coordinate system I. The reference subarea 15, a so-called region of interest, is described in world coordinates x, y, z with a height of 0. This means that the reference subarea 15 is a two-dimensional region whose z-coordinates are 0.

In addition, in the first transformation step 14, constant intrinsic camera parameters 16 as well as constant translation parameters 17 in the x-direction and y-direction are assumed. After the first transformation step 14, a raw image is produced with a set 18 of reference image coordinates, the set 18 of reference image coordinates being dependent on the sample sets 11 with the random values 12 of the extrinsic camera parameters Tz, Rx, Ry, Rz. A plurality of sets 18 of reference image coordinates is shown in Fig. 5 in a raw image 22 distorted by the fish-eye lenses of the capturing camera 3. The sets 18 of reference image coordinates can be represented by a distribution 23 which describes a fish-eye error when projecting the reference world coordinates of the reference subarea 15 into the image coordinate system I.

In the second transformation step 19, the set 18 of reference image coordinates is projected back from the image coordinate system I into the world coordinate system W. The constant intrinsic camera parameters 16 as well as the constant translation parameters 17 in the x-direction and in the y-direction from the first transformation step 14 are again used. In addition, a set 20 of extrinsic ideal camera parameters is used which describes an ideal camera calibration of the camera 3. In the second transformation step 19, a set 21 of reference world coordinates is formed, by means of which the systematic error can be determined. A plurality of sets 21 of reference world coordinates are shown in the projected top view image 24 in Fig. 6, wherein the systematic error describing the spatial uncertainty can be represented by a distribution 25. The spatial uncertainty can be described or quantified by means of a mean value and a standard deviation of the distribution 25 at each image position in the top view image 24.

Fig. 7 shows the coordinate transformation 13 with the first transformation step 14 and the second transformation step 19, wherein, in contrast to the coordinate transformation 13 according to Fig. 4, the sample sets 11 are used during the second transformation step 19. In the first transformation step 14, the reference subarea 15 with the reference object points 26 is transformed from the world coordinate system W into the image coordinate system I by means of the set 20 of the extrinsic ideal camera parameters, the constant intrinsic camera parameters 16 and the constant translation parameters 17 in the x-direction and in the y-direction, which are not shown here. In this case, a raw image 27 with reference image points 28 is produced. The reference image coordinates of the reference image points 28 have, in particular, no error 23 due to the assumption of the ideal calibration of the camera 3. These reference image points 28 are transformed back from the image coordinate system I into the world coordinate system W in the second transformation step 19. For this purpose, the sample sets 11 with the random values 12, the constant intrinsic camera parameters 16 (not shown here) and the constant translation parameters 17 in the x-direction and in the y-direction are used. In this way, the systematic errors 25 are introduced in the second transformation step 19.

In order to determine the systematic error 25, the coordinate transformation 13 described with reference to Fig. 4 or the coordinate transformation 13 described with reference to Fig. 7 can be used. The coordinate transformations 13 shown in Fig. 4 and 7 produce identical results, i.e. identical systematic errors 25, if the first transformation step 14 is the ideal inverse of the second transformation step 19.