Inventors: CHAH, Ehsan (IE); MURPHY, Alan (IE)
Claims

1. Method for determining an angle for ascertaining a pose of a trailer (12), which is connected to a motor vehicle (10), by executing the following method steps:
a) capturing a first image and a second image by means of a capturing unit (14) at different points of time (I_t, I_t+Δt), wherein each image shows an area of a ground on which the trailer is situated, and the capturing unit (14) is arranged at the trailer (12),
b) ascertaining a set of image motion vectors (P) based on the first and the second image,
c) ascertaining a respective set of space motion vectors (S) for each of multiple angle datasets (Y) depending on the respective angle dataset (Y) and on the set of image motion vectors (P), wherein each of the multiple angle datasets (Y) includes a yaw angle (F), a roll angle and/or a pitch angle,
d) calculating a respective deviation value for each of the multiple angle datasets (Y) depending on the respective set of space motion vectors (S),
e) selecting that angle dataset (Y) from the multiple angle datasets (Y) which satisfies a preset criterion, wherein the angle to be determined is based on the selected angle dataset (Y).

2. Method according to claim 1, wherein the set of space motion vectors (S) is ascertained depending on preset internal parameters and preset external parameters of the capturing unit (14).

3. Method according to any one of the preceding claims, wherein each image is divided into blocks (B), the corresponding blocks (B) in the second image are associated with a plurality of blocks (B) in the first image and an image motion vector (P) is ascertained for each block (B) of the second image depending on this association.

4. Method according to claim 3, wherein a region (ROI) is set for the first and the second image and image motion vectors (P) and/or blocks (B) are ascertained only within this region (ROI).

5. Method according to any one of claims 3 to 4, wherein the blocks (B) for the first and the second image are ascertained depending on an odometric parameter of the motor vehicle (10).

6. Method according to any one of claims 3 to 5, wherein each image motion vector (P) of the second image is compared to the image motion vectors (P) of the immediately adjacent blocks (B) and the image motion vector (P) is discarded depending on this comparison.

7. Method according to any one of claims 3 to 6, wherein a weighting factor (w) is ascertained for each image motion vector (P) depending on an analysis of the similarity of the respective image motion vector (P) with the image motion vectors (P) of the immediately adjacent blocks (B) and the weighting factor (w) is taken into account in calculating the deviation value.

8. Method according to any one of the preceding claims, wherein a yaw angle (F) is ascertained from the selected angle dataset (Y) and the presence of a jack-knife condition of the trailer (12) is assessed based on this yaw angle (F).

9. Method according to any one of the preceding claims, wherein a tolerance value is preset and exceeding or falling below the tolerance value by the deviation value defines the preset criterion.

10. Method according to any one of claims 1 to 8, wherein in step e) that angle dataset (Y) is selected which has the lowest deviation value.

11. Method according to any one of the preceding claims, wherein the capturing unit (14) continuously captures images and the two most current images are used for the method at every point of time.

12. Angle determination system for a motor vehicle (10) including
- a capturing unit (14), which can be attached to a trailer (12) of the motor vehicle (10) and is formed to capture images of a ground on which the trailer (12) is situated, and
- an evaluation unit, which is configured to perform a method according to any one of the preceding claims.

13. Trailer (12) with an angle determination system according to claim 12, wherein the optical capturing unit (14) is designed as a camera and a viewing direction of the camera forms an angle between 20 and 70 degrees, in particular an angle between 40 and 50 degrees, preferably of 45 degrees, with a perpendicular on the ground.

14. Driver assistance system with an angle determination system according to any one of claims 12 to 13.

15. Computer program product with program code means stored on a computer-readable medium to perform the method according to any one of the preceding claims when the computer program product is processed on a processor of an electronic evaluation unit.
The present invention relates to a method for determining an angle for ascertaining a pose of a trailer connected to a motor vehicle. Moreover, the invention relates to an angle determination system for a motor vehicle including a capturing unit, which can be attached to a trailer of the motor vehicle and is formed to capture images of a ground. The angle determination system comprises an evaluation unit, which is configured to carry out the method according to the invention.
When a motor vehicle with a trailer is maneuvered, different angles can form between the motor vehicle and the trailer. The term angle here refers in particular to the yaw angle. The yaw angle of the trailer describes in particular how much a longitudinal axis of the trailer is deflected with respect to a longitudinal axis of the motor vehicle.
According to the applicant's current internal state of knowledge, various variants for determining this yaw angle exist. The JLR Trailer Assist uses a preset, known target mark attached to the trailer. This target mark has three black circles on a white background.
These three black circles can be detected by corresponding image processing. The Ford Pro Trailer Backup Assistant uses a checkered pattern as the target mark. This checkered pattern is attached to a drawbar of the trailer. Other methods in turn attempt to capture and track a trailer drawbar by means of camera technology.
Common to all of these approaches is that they use a preset target or a preset target mark. This means that in these methods the angle determination depends on the quality and reliability of the target mark. The object of the invention is to provide a more reliable method by which an angle of the trailer, in particular a yaw angle of the trailer, can be determined.
This object is achieved by the independent claims. Advantageous configurations with convenient developments are indicated in the dependent claims.
The method for determining an angle for ascertaining a pose of a trailer connected to a motor vehicle comprises the following method steps. First, a first image and a second image are captured in a step a). The first and the second image are captured by means of a capturing unit at different points of time. Each image shows an area of a ground on which the trailer is situated. The capturing unit is arranged at the trailer. The ground is in particular a road or a floor covering. Fundamentally, the ground can be any solid surface on which the trailer is positioned. Usually, a motor vehicle and its trailer move on a road. The ground on which the trailer stands can be understood as a ground in the sense of this claim.
Preferably, the capturing unit is arranged at the end of the trailer that faces away from the motor vehicle, that is, in the rear area of the trailer. If such an arrangement is selected, the motor vehicle towing the trailer is usually not in the field of view of the capturing unit. The motor vehicle can be construed as the towing vehicle. In particular, the towing vehicle comprises an interface by which it can be connected to the trailer.
In a further step b), a set of image motion vectors is ascertained based on the first and the second image. Known methods from image processing are preferably employed for this purpose. For example, the images can be divided into multiple reference blocks. A certain reference block from the first image can be associated with the second image. This reference block can be represented by a single point, which can be expressed in image coordinates. An image motion vector can then be ascertained, for example, as the difference of these two points. Preferably, the blocks are determined or selected such that they adjoin each other. Theoretically, the blocks can partially overlap, but this is mostly not provided. Preferably, all blocks are rectangular and can even be square. The edge length of square blocks can vary according to demand, often from 8 pixels to 64 pixels. With the aid of rectangular or square blocks, the first and the second image can be divided into a grid. The resulting grid structure facilitates associating a block of the first image with a block of the second image.
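The block-based ascertainment of an image motion vector can be sketched, for example, as an exhaustive block-matching search. This is an illustrative sketch only; the function name, the search radius and the sum-of-squared-differences cost are assumptions, not taken from the patent.

```python
import numpy as np

def block_motion_vector(img1, img2, block, search=4):
    """Locate a block (y, x, size) of img1 in img2 by exhaustive search and
    return the displacement of its centre point as a 2-D image motion vector."""
    y, x, s = block
    ref = img1[y:y + s, x:x + s].astype(float)
    best, best_err = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + s > img2.shape[0] or xx + s > img2.shape[1]:
                continue
            cand = img2[yy:yy + s, xx:xx + s].astype(float)
            err = np.sum((ref - cand) ** 2)  # sum of squared differences
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best  # (dx, dy) in pixels
```

Applied to every block of the grid, this yields the set of image motion vectors P of step b).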
In a step c), a respective set of space motion vectors is ascertained for each of multiple angle datasets depending on the respective angle dataset and on the set of image motion vectors. Each of the multiple angle datasets includes a yaw angle, a roll angle and/or a pitch angle. The space motion vectors differ from the image motion vectors in particular with respect to their dimension: the image motion vectors are preferably two-dimensional, the space motion vectors preferably three-dimensional. However, both types of motion vectors can have a higher dimension through the use of homogeneous coordinates. Using homogeneous coordinates for the space motion vectors and the image motion vectors makes translations in particular easier to calculate or represent; in particular, a translation can then be represented as a matrix multiplication. In step c), a mapping rule is used in particular, which can generate a set of space motion vectors with the aid of the set of image motion vectors and the respective angle dataset. Depending on the case of application, further parameters or quantities can be taken into account. The angle datasets as well as the different types of motion vectors are preferably represented as matrices.
For example, the angle dataset can be expressed as a 4x4 matrix. The image motion vectors can be expressed, for example, as 3x1 matrices by means of homogeneous coordinates. Accordingly, a space motion vector can be expressed as a 4x1 matrix. For ascertaining the set of space motion vectors, a matrix multiplication is preferably employed; however, this does not exclude the use of other methods. The respective angle dataset or angle matrix in particular contains multiple different angles. The yaw angle is of particular interest here; however, the roll angle and the pitch angle can also be ascertained in step c). If only the yaw angle is of interest, method step c) can be simplified accordingly. For example, a corresponding system of equations provided for step c) can be simplified by setting the roll angle or the pitch angle to zero. By default, however, the angle dataset is calculated with multiple different angles. These different angles, in particular the yaw angle, the roll angle and/or the pitch angle, can be communicated to a vehicle assistance system as result values. Mostly, the set of image motion vectors includes multiple image motion vectors; however, it is possible that it contains only a single image motion vector. The same applies analogously to the set of space motion vectors.
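The matrix representation described above can be sketched as follows. The yaw-pitch-roll factorization, the embedding of the 3x1 image vector on the ground plane and the function names are assumptions for illustration; the patent leaves the concrete mapping rule open.

```python
import numpy as np

def angle_matrix(yaw, roll=0.0, pitch=0.0):
    """Build a 4x4 homogeneous rotation matrix from a candidate angle dataset."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    return T

def to_space_vector(p, T):
    """Map a homogeneous 3x1 image motion vector to a homogeneous 4x1 space
    motion vector by matrix multiplication (z = 0: ground-plane assumption)."""
    p4 = np.array([p[0], p[1], 0.0, p[2]])  # embed on the ground plane
    return T @ p4
```

With the roll and pitch angles set to zero, the mapping reduces to a pure yaw rotation, matching the simplification mentioned above.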
In a step d), a respective deviation value is calculated for each of the multiple angle datasets depending on the respective set of space motion vectors. The multiple angle datasets can in particular be preset. For example, the multiple angle datasets can take into account all yaw angles within a preset interval. Preferably, the multiple angle datasets are preset or selected such that the angle to be determined is contained in them with high probability. If no assumptions can be made at all, it is still possible to cover a complete angular range from 0 to 360 degrees by means of the multiple angle datasets. For example, 360 angle datasets can be preset, each representing a different yaw angle; in this case, the 360 angle datasets would cover the yaw angles from 0 to 360 degrees in steps of 1 degree. However, if already available information can be used, it is not required to cover the range of all possible yaw angles. The spacing of the multiple angle datasets can also be chosen differently; it is mostly reasonable to increase the number of angle datasets in an interval around a supposed yaw angle.
In a step e), that angle dataset is selected from the multiple angle datasets which satisfies a preset criterion. The angle to be determined is based on the selected angle dataset. For example, the preset criterion can be selecting the lowest deviation value. However, it can also be provided that the deviation value has to exceed or fall below a preset tolerance value. If, for example, no angle dataset among the multiple angle datasets satisfies this, it can be provided that further multiple angle datasets are determined. According to method step c), at least one further set of space motion vectors would be ascertained for these further angle datasets. With the aid of this new set of space motion vectors, at least one new deviation value could be calculated in method step d(). Thus, method steps c) to e) can be executed cyclically until the preset criterion is satisfied. This method for determining the angle does not require an artificial reference object or target mark.
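The selection of steps c) to e) can be sketched as a search over candidate angle datasets. The function `deviation` stands in here for steps c) and d) (space-vector computation plus deviation value); its name and the interface are illustrative assumptions.

```python
def select_angle(candidates, deviation, tolerance=None):
    """Step e): pick the angle dataset satisfying the preset criterion.

    If a tolerance is given, the first candidate whose deviation value falls
    below it is returned; otherwise the candidate with the lowest deviation
    value is returned."""
    best, best_dev = None, float("inf")
    for angle in candidates:
        dev = deviation(angle)
        if tolerance is not None and dev < tolerance:
            return angle            # criterion: falling below the tolerance
        if dev < best_dev:
            best, best_dev = angle, dev
    return best                     # criterion: lowest deviation value
```

If no candidate satisfies the tolerance, a caller could generate further angle datasets and repeat the call, giving exactly the cyclic execution of steps c) to e) described above.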
With the aid of images showing the ground on which the trailer is situated, this method can ascertain the pose of the trailer. Other methods, which require artificial target marks, suffer in particular from the following disadvantages: the target marks can bleach over time or become blurred, which would aggravate or even prevent the angle determination. If target marks rich in contrast are used, mirroring effects can aggravate their detection. In addition, target marks can become polluted such that they can be detected only with difficulty or not at all. If too many target marks are employed, this can aggravate or confuse their capture. None of these disadvantages occurs within the scope of this invention.
In addition, the capturing unit can be attached at any location of the trailer; it is only required that a certain extent of ground is visible in the captured images. Since the method according to the invention does not require a target mark for angle determination, it additionally requires less digital storage space; in particular, a so-called flash memory is not required. The method according to the invention can carry out the angle determination as soon as two images from different points of time are present, whereas methods based on the capture of a target mark require a certain time for identifying the target mark. This invention can be employed in different trailer systems in a simple manner.
A further variant of this invention provides that the set of space motion vectors is ascertained depending on preset internal parameters and preset external parameters of the capturing unit. The capturing unit is preferably designed as a camera, in particular as a rear camera of the trailer. The capturing unit can be described by internal parameters as well as external parameters, each of which can be described by a matrix. The internal parameters in particular include an image distance or a focal length of a lens of the capturing unit. In addition, the internal parameters can include an offset.
The external parameters in particular arise from the respective position of the capturing unit in space. By means of a combination of the internal parameters and the external parameters, a coordinate transformation from an image plane into a world coordinate system can be performed; thereby, a vector of the image plane can be transferred into a vector of a world coordinate system. The internal and external parameters of the capturing unit are often combined in the form of a so-called calibration matrix. Thereby, the type of the respective capturing unit or camera as well as its positioning and orientation can be taken into account in calculating space motion vectors.
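The transformation from the image plane to a world coordinate system can be sketched as follows. This sketch assumes a simplified geometry (camera at a known height looking straight down at the ground plane) and a standard pinhole intrinsic matrix K; the real extrinsic orientation of the trailer camera would add a rotation, which is omitted here for clarity.

```python
import numpy as np

def backproject_to_ground(u, v, K, cam_height):
    """Intersect the viewing ray of pixel (u, v) with the ground plane,
    assuming a downward-looking camera at height cam_height described by
    the intrinsic (calibration) matrix K."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # image plane -> normalized ray
    scale = cam_height / ray[2]                     # stretch ray to the ground
    return ray * scale                              # 3-D point on the ground
```

Applying this to both endpoints of an image motion vector yields the corresponding space motion vector as the difference of the two ground points.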
A further variant of this invention provides that each image is divided into blocks, the corresponding blocks in the second image are associated with a plurality of blocks in the first image and an image motion vector is ascertained for each block of the second image depending on this association. This variant can be referred to as block association and is known as block matching. Preferably, each image is divided into rectangular blocks. Fundamentally, the blocks can be elliptical, hexagonal, diamond-shaped or otherwise formed, but rectangular blocks are mostly used for dividing the images. The positions of corresponding blocks in the two images usually do not coincide.
The association of the corresponding blocks can be effected with the aid of characteristic points or characteristic features. Characteristic features of an image can be ascertained, for example, with the aid of image feature detectors. If a block shows, for example, a motor vehicle, this motor vehicle can be shown in another block in the second image; this results, for example, from a proper motion of the motor vehicle. In this case, the motor vehicle itself could be used as the characteristic point or characteristic feature. Instead of the motor vehicle, the characteristic feature can be a corner, an edge or a conspicuous symbol. The block of the first image and the block of the second image that show this motor vehicle or characteristic feature would accordingly be associated as corresponding blocks.
Since these blocks are shifted relative to each other in the image plane, an image motion vector can be ascertained from this shift. An image motion vector is in particular a two-dimensional vector, which expresses the motion of corresponding blocks from one image to the other. With the aid of the image motion vector, image coordinates of the first image can be converted into image coordinates of the second image. A block can in particular be represented by a point; preferably, the center point of the respective block is used. The number of blocks can vary, but the first image often has the same number of blocks as the second image. By means of a correspondingly fine rasterization, more precise image motion vectors can be ascertained. Image motion vectors are in particular two-dimensional and preferably relate to the images generated by the capturing unit. Space motion vectors are preferably three-dimensional and in particular represent a motion of the ground. It is to be considered here that the motor vehicle moves and an apparent motion of the ground in the images results from this. The motion of the motor vehicle finds expression in the image motion vectors, and this in turn correspondingly affects the space motion vectors.
A further variant of this invention provides that a region is set for the first and the second image and image motion vectors and/or blocks are ascertained only within this region.
The ascertainment of image motion vectors for each individual block of the first and second image is usually computationally expensive and often requires many digital resources. Therefore, this variant of the invention provides that image motion vectors and blocks, respectively, are ascertained only in a set region. This is preferably the region in which the motion of the motor vehicle and the trailer, respectively, occurs. Non-relevant regions, such as the upper region of a tree or the sky, can mostly be neglected without impairing the precision or accuracy of the method. By an intelligent selection of the region, the method for determining the angle can be considerably accelerated.
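The restriction to a region of interest can be sketched as a filter over the block grid. The tuple layout of the ROI and the function name are illustrative assumptions.

```python
def blocks_in_roi(img_shape, block_size, roi):
    """Enumerate only those block origins whose block lies fully inside the
    region of interest roi = (y0, x0, y1, x1); blocks outside the ROI (sky,
    tree tops, ...) are skipped and never cost any computation."""
    h, w = img_shape
    y0, x0, y1, x1 = roi
    origins = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            if y >= y0 and x >= x0 and y + block_size <= y1 and x + block_size <= x1:
                origins.append((y, x))
    return origins
```

Only the returned block origins are then fed into the block-matching step, which is where the acceleration comes from.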
A further variant of this invention provides that the blocks for the first and the second image are ascertained depending on an odometric parameter of the motor vehicle. Thus, the regions can be set depending on an odometric parameter of the motor vehicle. The odometric parameter can be retrieved, for example, via the on-board network of the motor vehicle, that is, by an electronic evaluation unit via a CAN bus or FlexRay bus. The odometric parameter can be, for example, a rotational speed, a speed, a slip angle or the like; multiple odometric parameters can also be used. Thus, the region within which image motion vectors are ascertained can be set, for example, depending on a speed or a direction of the traveling motor vehicle. Thereby, setting the region can be configured dynamically depending on a proper motion of the motor vehicle or the trailer.
A further variant of this invention provides that each image motion vector of the second image is compared to the image motion vectors of the immediately adjacent blocks and the image motion vector is discarded depending on this comparison. Instead of the immediately adjacent blocks, all blocks can be taken into account that do not exceed a preset distance to the block of the image motion vector. Ideally, each image motion vector can be associated with exactly one block. A set of image motion vectors can be ascertained with the aid of the first and the second image; this set usually contains many individual image motion vectors. Thus, each individual block can fundamentally have its own image motion vector.
It often occurs that individual image motion vectors deviate greatly from adjacent image motion vectors. These greatly deviating image motion vectors can be referred to as outliers. Outliers can considerably disturb the method for determining the angle; therefore, it can be reasonable to identify them and not take them into account in the further method. For this purpose, a geometric deviation of each individual image motion vector with respect to its immediately adjacent image motion vectors can be examined, for example. This comparison or geometric deviation can be ascertained, for example, based on a correlation function. Similarly, other similarity calculations or a neural network can be employed to recognize the outliers. If such an analysis yields, for example, that a motion vector exceeds a preset tolerance value, it is designated as an outlier.
Preferably, such outliers are discarded and no longer taken into account; the term discarding here means in particular ignoring. Discarded image motion vectors are preferably left out in calculating the deviation value in step d). In particular, a non-linear approach is employed, in which a deviation counter is incremented by 1 in each pair-wise comparison of motion vectors if an underlying geometric tolerance value is exceeded. Since a concerned image motion vector has multiple immediately adjacent blocks and image motion vectors, respectively, this pair-wise comparison can be performed multiple times, and the deviation counter can be incremented by 1 in each comparison. Thus, the deviation counter can indicate how many deviating motion vectors surround a motion vector; this depends in particular on the extent of the geometric deviation of two image motion vectors from each other. If the deviation counter exceeds a certain threshold value, the concerned image motion vector is preferably discarded and no longer taken into account. Thereby, erroneous image motion vectors can be recognized. After such an exemplary method, ideally only useful image motion vectors remain.
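The deviation-counter approach described above can be sketched as follows. The concrete tolerance and threshold values are assumptions; the patent leaves them open.

```python
import numpy as np

def discard_outliers(vectors, geo_tol=2.0, count_thresh=2):
    """Pair-wise comparison of each motion vector with its immediate grid
    neighbours: a deviation counter is incremented whenever the geometric
    distance exceeds geo_tol; vectors whose counter exceeds count_thresh
    are discarded (the returned boolean mask is False there)."""
    h, w, _ = vectors.shape
    keep = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            counter = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if np.linalg.norm(vectors[y, x] - vectors[ny, nx]) > geo_tol:
                        counter += 1  # one pair-wise comparison exceeded the tolerance
            if counter > count_thresh:
                keep[y, x] = False    # too many deviating neighbours: discard
    return keep
```

The mask can then be used to exclude the discarded vectors from the deviation-value calculation of step d).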
A further variant of this invention provides that a weighting factor is ascertained for each image motion vector depending on an analysis of the similarity of the respective image motion vector with the image motion vectors of the immediately adjacent blocks, and the weighting factor is taken into account in calculating the deviation value. Weighting factors are often used in calculating image motion vectors. With the aid of the weighting factors, certain blocks of the first and second image can be weighted more strongly; thus, important, relevant blocks can be emphasized. In the simplest case, a value of 1 can be applied for each weighting factor, in which case no weighting occurs. The weighting factors can subsequently be employed in optimization algorithms that serve to determine the angle.
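One possible similarity analysis is sketched below: each vector is weighted by a Gaussian of its mean geometric deviation from its immediate neighbours. The Gaussian form and the sigma parameter are assumptions; the patent leaves the similarity analysis open.

```python
import numpy as np

def similarity_weights(vectors, sigma=2.0):
    """Weight each motion vector by how well it agrees with its immediate
    neighbours: vectors similar to their surroundings get a weight near 1,
    dissimilar ones a weight near 0."""
    h, w, _ = vectors.shape
    weights = np.ones((h, w))
    for y in range(h):
        for x in range(w):
            dists = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                        continue
                    dists.append(np.linalg.norm(vectors[y, x] - vectors[ny, nx]))
            weights[y, x] = np.exp(-np.mean(dists) ** 2 / (2 * sigma ** 2))
    return weights
```

In the deviation-value calculation, these weights then take the place of the constant value 1 of the unweighted case.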
A further variant provides that a yaw angle is ascertained from the selected angle dataset and the presence of a jack-knife condition of the trailer is assessed based on this yaw angle. The jack-knife condition is in particular present if the yaw angle exceeds the so-called jack-knife angle. In this case, the trailer can no longer be normally maneuvered and possibly even risks rolling over. Preferably, it is provided that a driver of the motor vehicle is warned of the occurrence of the jack-knife condition of the trailer. Based on the yaw angle, it can be assessed how far the trailer is away from the jack-knife condition. In particular, it can be estimated based on a progression of the yaw angle whether the occurrence of the jack-knife condition is to be expected. For example, if the yaw angle continuously and monotonically increases and at the same time exceeds a warning value, it can possibly be assumed that the trailer will no longer be maneuverable if this tendency continues. In this case, a driver of the motor vehicle could receive an acoustic, visual or haptic warning message. Thus, situations in which the driver can no longer control the trailer can be prevented, and driving a motor vehicle with a trailer becomes safer and more reliable.
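The assessment against the jack-knife angle can be sketched as a simple classification. The concrete angle values are illustrative assumptions: the actual jack-knife angle depends on the vehicle/trailer combination.

```python
def jackknife_status(yaw_deg, jackknife_deg=60.0, warn_deg=45.0):
    """Classify a yaw angle against an assumed jack-knife angle and an
    assumed warning value below it."""
    if abs(yaw_deg) >= jackknife_deg:
        return "jackknife"   # trailer can no longer be normally maneuvered
    if abs(yaw_deg) >= warn_deg:
        return "warning"     # driver gets an acoustic/visual/haptic warning
    return "ok"
```

Evaluated on the time series of yaw angles, the "warning" state can be raised before the jack-knife condition actually occurs.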
A further variant of this invention provides that a tolerance value is preset and exceeding or falling below the tolerance value by the deviation value defines the preset criterion. Preferably, new angle datasets are selected if the tolerance value has not been exceeded or fallen below. Thus, method steps c) to e) can be performed cyclically until the tolerance value has been exceeded or fallen below. In this case, one could consider method steps c) to e) as a loop; in a programming language, one would denote such a loop as a "while loop". This means that method steps c) to e) are repeated until the tolerance value has been exceeded or fallen below. The same can be effected with respect to the preset criterion. Thus, it can be ensured that the selected angle dataset contains correspondingly accurate information for the angle to be determined. The angle to be determined is based on the selected angle dataset and can be derived from its information. In the simplest case, the angle to be determined is directly contained in the angle dataset and only has to be read out.
An order of the angle datasets to be processed can be set for cyclically executing method steps c) to e). This order can be changed during execution of the method. In particular, the order of the angle datasets to be processed can depend on a progression or development of the deviation value. For example, if the deviation value is to be as low as possible and the deviation value increases with increasing yaw angle, this indicates a wrong search direction with respect to the yaw angle. In this case, it is reasonable to no longer use angle datasets with greater yaw angles in the cyclic method, but angle datasets with smaller yaw angles. Thus, the "correct" angle dataset can be ascertained faster.
A further variant of this invention provides a method wherein that angle dataset is selected in step e) which has the lowest deviation value. In particular, a number of multiple angle datasets can be preset in this case. A deviation value is calculated for each of these angle datasets according to step d), and the angle dataset with the lowest deviation value is selected. This variant offers the advantage that the calculation effort is known in advance.
A further variant of this invention provides that the capturing unit continuously captures images and the two most current images are used for the method at every point of time. The method is not restricted to two individual images: two images are employed for an analysis or calculation of image motion vectors, but further images can be taken into account as well. A newly captured image preferably replaces the older image. For example, if the first image was captured at a first point of time and the second image at a later second point of time, a third image can be generated at a third point of time. This third image would then be the most current image and would replace the first image; the second image would become the first image and the third image would become the second image. With the aid of this new image pair, the method can be performed again. Thus, new current images can be continuously taken into account for determining the angle, and a change of the angle during travel of the motor vehicle with the trailer can be tracked. The angle can thus be determined as a function of time, recorded and evaluated with regard to the jack-knife condition. Moreover, the respective angular values, which can be ascertained as a function of time, can be used by a vehicle assistance system of the trailer or of the motor vehicle.
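The sliding pair of the two most current images can be sketched with a two-element buffer (class name is an illustrative assumption):

```python
from collections import deque

class ImagePairBuffer:
    """Keeps only the two most current images: each newly captured frame
    replaces the oldest one, so the method always works on the pair
    (previous image, current image)."""
    def __init__(self):
        self._buf = deque(maxlen=2)  # the oldest frame is dropped automatically

    def push(self, image):
        self._buf.append(image)

    @property
    def pair(self):
        return tuple(self._buf) if len(self._buf) == 2 else None
```

After each capture, the angle determination is simply re-run on `pair`, yielding the time-dependent angle described above.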
The invention further provides an angle determination system for a motor vehicle with a capturing unit, which can be attached to a trailer of the motor vehicle and is formed to capture images of a ground on which the trailer is situated. In addition, the angle determination system comprises an evaluation unit, which is configured to carry out one of the above-described variants. The capturing unit is in particular formed as a camera and can be mounted on the trailer of the motor vehicle, preferably in its rear area. In particular, it is oriented to optically capture the ground beneath the trailer. The mentioned advantages and examples apply analogously to the angle determination system.
A further variant of the invention provides a trailer with an angle determination system, wherein the optical capturing unit is designed as a camera and a viewing direction of the camera forms an angle between 20 and 70 degrees, in particular an angle between 40 and 50 degrees, preferably of 45 degrees, with a perpendicular on the ground. If the camera forms an angle of 45 degrees with a perpendicular on the ground, it is ensured that the camera always captures a part of the ground in this viewing direction. Thus, it can be ensured that the first and the second image each show the ground.
A further embodiment relates to a driver assistance system with an angle determination system according to one of the previous variants. In particular, the corresponding yaw angles, roll angles and/or pitch angles can be calculated or derived from the respective angle datasets. Among other things, these angles represent important information for the driver assistance system. In particular, this angle information can be extracted at multiple different points of time. If the evaluation unit has correspondingly powerful digital resources, multiple angle datasets per second, and thus the associated yaw angles, roll angles and/or pitch angles, can be ascertained. Thus, the driver assistance system can predict the occurrence or imminence of the jack-knife condition in time. This statement also applies to further odometric calculations or statements based on the angle datasets.
A further embodiment provides a motor vehicle with the driver assistance system. If the motor vehicle has the driver assistance system with the angle determination system, the motor vehicle can be safely and reliably operated with the trailer. A further embodiment relates to a computer program product with program code means stored on a computer-readable medium to perform the method according to any one of the previous variants when the computer program product is processed on a processor of an electronic evaluation unit. Preferably, the images as well as all further information required for performing the method are fed to the electronic evaluation unit. The electronic evaluation unit can then calculate the respective angle datasets and angles.
In particular, the following formulas can be used to ascertain the deviation value.
Equation 1 represents a motion vector. The motion vector v_i includes the components x_i and y_i. Therein, the index i denotes the respective block. Based on equation 2, a weighted average value x̄ can be calculated for multiple motion vectors. Thereto, a weighting factor w_i is associated with each component x_i. The index n indicates over how many motion vectors the average is taken. With the aid of equation 3, a sum of squared deviations can be ascertained for the set of space motion vectors. Equation 3 contains the respective weighting factors w_i, the respective y components y_i, the respective x components x_i, as well as the weighted average value x̄ from equation 2. With regard to the calculation of the deviation value, the motion vectors are in particular to be understood as elements of the set of space motion vectors S. Depending on context, the motion vectors v_i can relate to the image motion vectors P or to the space motion vectors S.
Instead of a squared deviation, a deviation of absolute errors can also be used. Thereto, equation 4 is preferably employed. In contrast to equation 3, the deviation according to amount is used in equation 4 instead of the squared deviation. The deviation value can for example be ascertained based on the two alternatives of equation 5. In this example, the deviation value is designated as RMSE, which means root mean square error. The second alternative of equation 5 is the deviation value corresponding to equation 4.
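The deviation calculation of equations 1 to 5 can be sketched as follows; the plain-Python form and the example weights are illustrative assumptions, since the text fixes only the formulas, not an implementation:

```python
import math

def weighted_mean(values, weights):
    # Equation 2 (sketch): weighted average of one vector component
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def weighted_sq_deviation(xs, ys, weights):
    # Equation 3 (sketch): weighted sum of squared deviations of the x and
    # y components of the motion vectors from their weighted means
    mx = weighted_mean(xs, weights)
    my = weighted_mean(ys, weights)
    return sum(w * ((x - mx) ** 2 + (y - my) ** 2)
               for w, x, y in zip(weights, xs, ys))

def weighted_abs_deviation(xs, ys, weights):
    # Equation 4 (sketch): deviation according to amount (absolute errors)
    mx = weighted_mean(xs, weights)
    my = weighted_mean(ys, weights)
    return sum(w * (abs(x - mx) + abs(y - my))
               for w, x, y in zip(weights, xs, ys))

def rmse(xs, ys, weights):
    # Equation 5, first alternative: root mean square error
    return math.sqrt(weighted_sq_deviation(xs, ys, weights) / len(xs))
```

A set of identical motion vectors yields a deviation value of zero, which is the expected behaviour for straight travel.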
P = C_i · C_e · Y · S [6]

P = B_{t+1} - B_t [7]
For performing the method steps c) to e), equation 6 can in particular be used.
With the aid of equation 6, given space motion vectors S can be converted into image motion vectors P. With corresponding mathematical transformations, a set of space motion vectors S can similarly be ascertained from the set of image motion vectors P. Equation 6 in particular represents a matrix equation, wherein the respective image coordinates are preferably expressed in homogeneous coordinates. In particular, the matrix P is a 3x1 matrix. Preferably, the matrix C_i represents the internal parameters of the capturing unit and is designed as a 3x3 matrix. The matrix C_e in particular represents the external parameters of the capturing unit and is designed as a 3x4 matrix in this example. The matrix Y relates to the respective angle dataset and is in particular designed as a 4x4 matrix. The matrix S represents the space motion vectors and is in particular designed as a 4x1 matrix. Thus, the matrix S has a higher dimension than the matrix P in this example. This is especially due to the fact that the image motion vectors P are preferably two-dimensional and the space motion vectors S are mostly three-dimensional. When using homogeneous coordinates, the image motion vectors P become three-dimensional and the space motion vectors S four-dimensional.
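A minimal numerical sketch of equation 6 follows; all matrix entries (focal length, principal point, camera offset) are placeholder values, not calibration data from the text, and the angle matrix is reduced to a pure yaw rotation for illustration:

```python
import numpy as np

# Illustrative internal parameters C_i (3x3) and external parameters C_e (3x4)
C_i = np.array([[800.0, 0.0, 320.0],
                [0.0, 800.0, 240.0],
                [0.0, 0.0, 1.0]])
C_e = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.5]])])

def angle_matrix(yaw):
    # 4x4 angle matrix Y for a candidate angle dataset (yaw only here)
    c, s = np.cos(yaw), np.sin(yaw)
    psi = np.eye(4)
    psi[:2, :2] = [[c, -s], [s, c]]
    return psi

def project(S, yaw):
    # Equation 6: P = C_i * C_e * Y * S, with homogeneous normalization
    P = C_i @ C_e @ angle_matrix(yaw) @ S
    return P / P[2]

S = np.array([1.0, 0.5, 0.0, 1.0])   # homogeneous space motion vector (4x1)
p = project(S, yaw=0.1)              # homogeneous image motion vector (3x1)
```

As stated in the text, S is four-dimensional and P three-dimensional in homogeneous coordinates.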
Based on the equations 1 to 6, a cycle including the steps c) to e) is now presented by way of example. Preferably, multiple angle datasets Y are first preset; this means that multiple matrices Y can be set in advance. Therein, this setting is preferably effected such that the angle to be determined is contained therein. Preferably, all three spatial angles are varied by the same amount across the different matrices Y. For each of these multiple angle datasets, a set of associated space motion vectors S is ascertained. That is, for each angle matrix Y, a corresponding set of space motion vectors S can be calculated, for example by correspondingly applying equation 6. For calculating the deviation value for each of the multiple angle datasets Y, the equations 1 to 5 can now be used. The components represented in equations 1 to 5 mostly relate to the space motion vectors S. The image motion vectors P are preferably calculated only once for a given pair of first and second image and remain unchanged within the cyclic calculation as long as no new images enter the method. Therefore, the vector v_i was correlated with the matrix S_i in equation 1. Therein, equality does not exist between v_i and S_i; equation 1 only suggests that a plurality of motion vectors can be interpreted as a matrix.
In the calculation of the deviation value for the corresponding angle dataset Y, the calculation can be effected differently according to the two options of equations 3 and 4. With the aid of equation 3 as well as the first term of equation 5, a minimum of the squared deviation can be ascertained. A minimum of the deviation according to amount can be ascertained with the aid of equation 4 as well as the second term of equation 5. Thus, a respective set of space motion vectors S can be ascertained for each angle dataset Y by corresponding application of the equations 1 to 6.
In step e), that angle dataset Y is in particular selected which satisfies a previously set criterion. This criterion can for example mean that the angle dataset Y with the lowest deviation value is selected. However, it can also be provided that the preset criterion has not yet been reached and further angle datasets are introduced into the cyclic calculation. In this case, the equations 1 to 6 can be executed again in order to satisfy method step e). Method step e) is in particular satisfied when the underlying criterion is met.
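The cycle of steps c) to e) can be illustrated with a simplified grid search; the back-projection and the deviation function are deliberately reduced stand-ins (a pure yaw rotation, and the horizontality of the vectors under straight travel), not the full equations 1 to 6:

```python
import math

def deviation(space_vectors):
    # For straight travel the true angle dataset yields horizontal space
    # motion vectors, so the summed magnitude of the y components serves
    # as a simple deviation value in this sketch.
    return sum(abs(y) for _, y in space_vectors)

def back_project(image_vectors, yaw):
    # Hypothetical stand-in for inverting equation 6: undo the candidate
    # yaw rotation on each image motion vector.
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * y, -s * x + c * y) for x, y in image_vectors]

def select_angle(image_vectors, candidate_yaws):
    # Steps c) to e): compute a deviation value per candidate angle
    # dataset and select the one with the lowest deviation.
    return min(candidate_yaws,
               key=lambda yaw: deviation(back_project(image_vectors, yaw)))
```

With image motion vectors generated by a true yaw of 0.3 rad, the candidate 0.3 produces the lowest deviation and is selected.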
Equation 7 is in particular employed for executing method step b). A reference block B_t at the point of time t is subtracted from a reference block B_{t+1} at the point of time t+1. In this case, the variable B preferably represents the image coordinates of the central point of the concerned block. Therein, it is to be noted that corresponding blocks are used in each case; this means that the two blocks in equation 7 preferably relate to different images. B_{t+1} can relate to a block of the second image and B_t to a block of the first image.
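Equation 7 then reduces to a coordinate difference of corresponding block centres, for example:

```python
def block_motion_vector(center_t, center_t1):
    # Equation 7: image motion vector P as the difference of the centre
    # coordinates of corresponding blocks in the first and second image.
    (x0, y0), (x1, y1) = center_t, center_t1
    return (x1 - x0, y1 - y0)
```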
Further features of the invention are apparent from the claims, the figures and the description of the figures. The features and feature combinations mentioned above in the description, as well as the features and feature combinations mentioned below in the description of the figures and/or shown in the figures alone, are usable not only in the respectively specified combination, but also in other combinations, without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention which are not explicitly shown in the figures or explained, but arise from and can be generated by separate feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed which do not comprise all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the back-references of the claims.
Now, the present invention is explained in more detail based on the attached drawings.
The figures show:
Fig. 1 a schematic top view of a motor vehicle with a trailer; Fig. 2 a schematic flow diagram of exemplary method steps;
Fig. 3 a schematic representation of an image, which is divided into blocks and shows respective motion vectors;
Fig. 4 a schematic representation of motion vectors in a non-straight travel of the trailer;
Fig. 5 an exemplary diagram comparing a variant of the method to reference measurements. Fig. 1 exemplarily shows a top view of a system having a motor vehicle 10 as well as a trailer 12. In the rear area of the trailer 12, a capturing unit 14 is arranged. The trailer 12 can comprise multiple capturing units 14, which can be attached at different locations of the trailer 12. In the example of Fig. 1, a yaw angle F is shown, which is not zero. It is assumed that both the motor vehicle 10 and the trailer 12 are located on a road, which is assumed to be flat in this example. The motor vehicle 10 can be designed as a truck or passenger car.
In Fig. 2, a possible flow diagram of the invention is exemplarily shown. In a first step S1, two images are captured by the capturing unit 14. Therein, the first image is captured at the point of time t, the second image at the point of time t+Δt. Preferably, these two images are captured as surround view images. According to the example of Fig. 2, it is assumed that the capturing unit 14 is designed as a calibrated camera. A surround view is in particular a virtual plane view of the ground in the environment of the trailer. This surround view is preferably designed as a top view of the motor vehicle 10 and the trailer 12; Fig. 1 shows such a top view of the trailer 12. The first and the second image are conditioned such that they are transformed into a top view analogous to the one shown in Fig. 1. This top view is virtual since it is generated by means of image processing. In generating this top view, multiple images can be combined into a first or second image. Therein, the intrinsic camera parameters and the external camera parameters are preferably taken into account, and distortions are preferably removed at the same time.
In a step S2, a motion of the trailer 12 is assessed with the aid of the two images. Step S2 can preferably have multiple sub-steps S2a to S2c. In a step S2a, the two images are for example conditioned: distortions such as a pincushion distortion, a barrel distortion or a fish-eye distortion can be removed or at least reduced. Conditioning the two images to a plane top view (O-view) as well as removing the distortions are not essential for the invention, but quite advantageous. Thereby, a preset region ROI can be determined more simply, and further method steps can similarly be facilitated. The creation of a plane top view as well as the conditioning of the images usually normalizes the appearance of textures between the individual images, whereby the association of respective corresponding blocks B can be considerably facilitated.
In addition, image feature detectors such as SIFT or SURF can be employed to achieve scale and rotation invariance, whereby distorted features can still be reliably associated between the individual images. With the creation of the plane top view, scale, rotation, affine and perspective invariance can be automatically generated for all features of the ground. With the aid of the thus generated invariance, advanced feature detectors become redundant, which can considerably improve the method. In a step S2b, a block association (block matching) can for example be employed for calculating motion vectors.
In step S2b, image motion vectors P are preferably calculated. Thereto, the first or the second image is divided into multiple blocks B. Certain blocks B can be combined into the region ROI. Fig. 3 exemplarily shows a first image, which is divided into multiple blocks B. With the aid of the index i, the respective blocks B can be serially numbered: bottom left in Fig. 3 there is the first block B_1, in the further course the block B_i, and B_n is shown as the last block. The index i can either traverse the entire image or only the region ROI; the traversal of the blocks can of course be implemented otherwise. The region of interest (ROI) is at the top left in the example of Fig. 3. Fig. 3 shows motion vectors in the case of a straight motion of the motor vehicle 10 with the trailer 12; therefore, all motion vectors extend horizontally and have the same length. The motion vectors in Fig. 3 and Fig. 4 in particular represent space motion vectors, even if the representation in the figures is only two-dimensional. As Fig. 3 exemplarily shows, rectangular blocks are preferably employed; in the case of Fig. 3, these blocks are even square and form a regular grid. The size of the blocks can differ; the edge length of the square blocks can be 8, 16, 24 or 32 pixels.
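The division into square blocks described above can be sketched as follows; skipping partial blocks at the image border is an illustrative choice, not a requirement of the text:

```python
def block_grid(height, width, edge=16):
    # Top-left corners of the square blocks B covering the image; edge
    # lengths of 8, 16, 24 or 32 pixels as named in the text. Partial
    # blocks at the image border are skipped in this sketch.
    return [(y, x)
            for y in range(0, height - edge + 1, edge)
            for x in range(0, width - edge + 1, edge)]
```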
In the association of blocks B with the respective corresponding blocks, individual pixels within a block B can be ignored to accelerate the association. In Fig. 4, the second image is exemplarily shown. The individual motion vectors v_i are no longer horizontal, and the region ROI has shifted. In Fig. 3, the block B_i is exemplarily shown with a plus (+) in the region ROI at the bottom left. This plus represents a characteristic feature, for example an edge, a corner or another characteristic. This characteristic feature is found again in the region ROI in Fig. 4. The block B_i is present in the first image (Fig. 3) and in the second image (Fig. 4); therefore, a plus is also exemplarily drawn in Fig. 4 at the bottom left in the region ROI. These two blocks B_i of the first and the second image are thus corresponding blocks B. This means that a motion has occurred between the first image of Fig. 3 and the second image of Fig. 4. Based on the characteristic feature symbolized by the plus, the concerned block of Fig. 3 can be associated with the corresponding block of Fig. 4. The association is fundamentally effected for all blocks B of the region ROI, but it can be restricted to certain blocks B. Individual blocks B can be ignored to accelerate the method; this can be provided if enough characteristic features are present to generate the image motion vectors P.
In particular, the region ROI can be ascertained depending on an odometric parameter of the motor vehicle 10; this means that the region ROI can be estimated by means of a vehicle speed and/or vehicle direction. In order to associate the concerned blocks B of Fig. 3 with the corresponding blocks B of Fig. 4, different association strategies can be used, for example a multi-dimensional block matching approach or other intelligent approaches. Thus, an artificial neural network can for example be employed to allow an efficient association of the respective blocks B. Likewise, typical block matching methods as used in the field of video compression can be employed, for example approaches in which all possible blocks within the image are examined. In order to configure the block matching more efficiently, further approaches can be employed. A correlation, a cross-correlation, a sum of absolute differences, a sum of squared differences or further advanced approaches can be applied as a typical deviation function for ascertaining the deviation value; transformed differences or an analysis in the frequency spectrum, for example, belong to the advanced approaches.
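A minimal exhaustive block matcher with the sum of absolute differences (SAD) as deviation function might look as follows; the function names and the search radius are illustrative choices, not taken from the text:

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences, a typical block matching cost.
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def match_block(ref_block, image, top_left, search_radius=4):
    # Exhaustive search: find the offset in `image` that minimizes the SAD
    # against `ref_block`. Returns the offset (dx, dy), i.e. the image
    # motion vector for this block. A sketch, not an optimized matcher.
    h, w = ref_block.shape
    y0, x0 = top_left
    best, best_offset = None, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > image.shape[0] or x + w > image.shape[1]:
                continue
            cost = sad(ref_block, image[y:y + h, x:x + w])
            if best is None or cost < best:
                best, best_offset = cost, (dx, dy)
    return best_offset
```

Applied to a second image that is a pure shift of the first, the matcher recovers the shift exactly.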
In a step S2c, outliers can be ascertained among the image motion vectors P. These outliers are preferably discarded such that they no longer disturb the method in the further course. In ascertaining the image motion vectors P, numerous vectors usually arise that greatly deviate from the surrounding image motion vectors P. Fundamentally, all image motion vectors P can be used. However, it is advantageous to provide the method with an algorithm that can decide whether an image motion vector P is to be regarded as an outlier. Thereby, image motion vectors P originating from objects above the ground, such as walls or curbs, can be filtered out. Filtering out outliers in the image motion vectors P has a correspondingly advantageous effect on calculating the space motion vectors S. The method, and especially the cyclic calculations, can be considerably accelerated if outliers have previously been filtered out of the image motion vectors.
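One possible outlier test, sketched with a median-based threshold; the text does not prescribe a specific rule, so the threshold factor k and the fallback scale are assumptions:

```python
import statistics

def filter_outliers(vectors, k=3.0):
    # Discard image motion vectors far from the median vector. The
    # distance threshold k * (median distance) is an assumed heuristic;
    # a fallback scale of 1.0 avoids a zero threshold.
    mx = statistics.median(v[0] for v in vectors)
    my = statistics.median(v[1] for v in vectors)
    dists = [((x - mx) ** 2 + (y - my) ** 2) ** 0.5 for x, y in vectors]
    scale = statistics.median(dists) or 1.0
    return [v for v, d in zip(vectors, dists) if d <= k * scale]
```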
In step S3, the deviation value or the deviation values are calculated. The aim of this step is to find that angle dataset Y which best reflects the spatial orientation of the capturing unit 14. For representing the angle dataset Y or the angle matrix, a Cartesian coordinate system is preferably employed, which relates to the motor vehicle 10. Thereto, angles to the respective axes result; these angles represent the yaw angle F, the pitch angle and the roll angle. The deviation function in particular exploits geometric characteristics of the space motion vectors S. Thereto, the equations 1 to 6 can in particular be used. With the aid of optimization algorithms such as a gradient descent method, a Gauss-Newton method or a Levenberg-Marquardt method, a minimum of the deviations can be ascertained. This minimum is in particular used as the deviation value.
During a travel of the motor vehicle 10 with the trailer 12 that only extends straight on flat ground, the situation shown in Fig. 3 in particular results. The motion vectors v_i shown in Fig. 3 are then all horizontal and have the same length. Therein, the division of the image into blocks B as well as the calculation of image motion vectors P and space motion vectors S succeeds all the more easily if the first and the second image were previously created as virtual top views. A preceding image processing removing distortions is also advantageous. In Fig. 3 and 4, space motion vectors S in the form of motion vectors v_i are exemplarily drawn in all blocks B. These motion vectors v_i each have x components x_i as well as y components y_i. From these components, the set of space motion vectors S can be ascertained with the aid of equation 6. The components of the space motion vectors S can respectively be employed for calculating the deviation value by means of the equations 1 to 5.
This method can be employed for each capturing unit 14 and thus in particular for each camera. Therein, the yaw angle F is of particular interest. As Fig. 1 shows, this yaw angle F results from a longitudinal axis of the motor vehicle 10 and a longitudinal axis of the trailer 12. In a step S4, the corresponding yaw angle F can now be derived from the selected angle dataset Y. Therein, that angle dataset is used which satisfies the preset criterion, for example the angle dataset Y with the lowest deviation value. Based on the thus ascertained angle dataset Y or angle matrix, the yaw angle F can be ascertained. In the simplest case, the corresponding entry representing the yaw angle F is read from the angle matrix.
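Reading the yaw angle from the selected angle matrix could, under an assumed rotation convention, look like this; the text leaves the exact parametrization of the angle matrix open:

```python
import math

def yaw_from_angle_matrix(psi):
    # Read the yaw angle F from a 4x4 angle matrix whose upper-left 3x3
    # block is a rotation; a yaw-about-the-vertical-axis convention is
    # assumed here.
    return math.atan2(psi[1][0], psi[0][0])
```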
If the yaw angle F is calculated and captured continuously in time, a critical situation of the trailer 12 can be recognized in time. In particular, the occurrence of the so-called jack-knife condition can be recognized and averted in time. Fig. 5 exemplarily shows a diagram in which the yaw angle F is plotted with respect to a counting index. Therein, the curve DR indicates measured angular values from a rotary encoder, while the curve OF shows the yaw angle F ascertained according to the method presented in Fig. 2. Fig. 5 clearly shows that the presented method is well capable of accurately ascertaining the yaw angle F without time delay.
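A timely jack-knife warning based on the continuously calculated yaw angle could be sketched as follows; both thresholds are illustrative assumptions, not values from the text:

```python
def jackknife_imminent(yaw_history_deg, limit_deg=60.0, rate_limit_deg=15.0):
    # Warn if the yaw angle approaches a critical value or changes too
    # fast between two samples. Both thresholds are assumed for
    # illustration only.
    current = yaw_history_deg[-1]
    rate = current - yaw_history_deg[-2] if len(yaw_history_deg) > 1 else 0.0
    return abs(current) > limit_deg or abs(rate) > rate_limit_deg
```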
With the aid of this method, restrictions can be overcome that exist in methods depending on target marks. Target marks such as checkered patterns or other distinctive points are no longer required; the visual capture of a roadway or ground is already sufficient to ascertain the pose of the motor vehicle 10 and the trailer 12, respectively. The capturing unit 14 or the camera can basically be attached at any location of the trailer 12. Since target marks are no longer required, the storage demand and the employment of digital resources can be reduced. The presented method and the mentioned examples can basically be employed with all variants of trailers 12. Thus, this invention shows how the pose of the trailer 12 can be ascertained in an efficient manner without particular further requirements. Thereby, the yaw angle F as well as further vehicle angles can be reliably ascertained. Preferably, this method provides that all three vehicle angles (yaw angle F, roll angle and pitch angle) are always ascertained at the same time. These three vehicle angles can be integrated in a vehicle assistance system.