

Title:
DETERMINING AN ANGULAR POSITION OF A TRAILER WITHOUT TARGET
Document Type and Number:
WIPO Patent Application WO/2018/153915
Kind Code:
A1
Abstract:
The method for determining an angular position of a trailer shall be improved. For this purpose, a method is proposed containing the step of obtaining a raw image of at least a part of a trailer (10) by means of a rear camera of a towing vehicle. The raw image is divided into blocks. A texture value is determined for each of the blocks. Those blocks (12) whose texture value meets a pregiven repetition criterion are labelled. Finally, the angular position of the trailer is determined on the basis of the labelled blocks (12).

Inventors:
SHIVAMURTHY SWAROOP KAGGERE (IE)
STARR MICHAEL (IE)
Application Number:
PCT/EP2018/054275
Publication Date:
August 30, 2018
Filing Date:
February 21, 2018
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
G06T7/40; B60D1/24; B62D13/06; G06T7/11; G06T7/73
Foreign References:
DE102011113191A1 (2013-03-14)
DE102011120814A1 (2013-06-13)
US9085261B2 (2015-07-21)
US20150115571A1 (2015-04-30)
US20130158863A1 (2013-06-20)
Other References:
XIE X ET AL: "A Galaxy of Texture Features", 1 October 2008, 20081001, PAGE(S) 375 - 406, ISBN: 978-1-84816-115-3, XP008122517
PHILIPPE ET AL: "Texture Segmentation", 24 May 2016 (2016-05-24), XP055471409, Retrieved from the Internet [retrieved on 20180430]
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (DE)
Claims

1. Method for determining an angular position of a trailer (10) with respect to a towing vehicle, to which the trailer (10) is coupled, by

- obtaining a raw image of at least a part of the trailer (10) by means of a rear camera of the towing vehicle,

characterized by

- dividing the raw image into blocks,

- determining a texture value for each of the blocks,

- labelling blocks (12) the texture value of which meets a pregiven repetition criterion in a first map and

- determining the angular position (19) of the trailer on the basis of the labelled blocks (12) of the first map.

2. Method according to claim 1,

characterized by

grouping the blocks of the raw image into horizontal slices (sl1 to sl6) and performing the labelling of the blocks on the basis of a histogram of texture of each of the slices (sl1 to sl6).

3. Method according to claim 1 or 2,

characterized by

determining a luminance value for each of the blocks and

labelling blocks (13) the luminance value of which meets a pregiven repetition criterion in a second map, wherein the step of determining the angular position (19) of the trailer (10) is performed on the basis of the labelled blocks of the first map and the second map.

4. Method according to one of the preceding claims,

characterized by dividing the first map and/or the second map into sectors (sc1 to sc5) and using only those of the sectors (sc1 to sc5) for determining the angular position (19) which include the most labelled blocks.

5. Method according to one of the preceding claims,

characterized by

performing an edge detection on a boundary of a region of the labelled blocks and a region of unlabeled blocks, wherein the angular position (19) of the trailer (10) is determined on the basis of one or more detected edges (15, 16).

6. Method according to claim 5,

characterized by

performing a pixel level edge refinement on the one or more detected edges (15, 16).

7. Method according to one of the preceding claims,

characterized by

determining a centre of the labelled blocks for determining the angular position (19) of the trailer (10).

8. Method according to claim 7,

characterized by

measuring an angle of the centre (17) with respect to a pregiven normal (18) for determining the angular position (19) of the trailer (10).

9. Method according to claim 7,

characterized by

transforming the labelled blocks (12, 13) into polar coordinates for determining the angular position (19) of the trailer (10).

10. Method according to one of the preceding claims,

characterized in that

the labelled blocks (12, 13) represent the towbar (11) of the trailer (10).

11. Evaluation device for determining an angular position (19) of a trailer (10) with respect to a towing vehicle, to which the trailer (10) is coupled, including

- a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer (10),

characterized by

- a data processing device for

o dividing the raw image into blocks,

o determining a texture value for each of the blocks,

o labelling blocks (12) the texture value of which meets a pregiven repetition criterion in a first map and

o determining the angular position (19) of the trailer on the basis of the labelled blocks (12) of the first map.

Description:
Determining an angular position of a trailer without target

The present invention relates to a targetless method for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, wherein a raw image of at least a part of the trailer is obtained by means of a rear camera of the towing vehicle. Moreover, the present invention relates to an evaluation device for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, by a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer.

In a vehicle/trailer combination with a vehicle and a trailer, there is the problem that the rear space and the trailer itself are barely visible to the driver of the vehicle. The rear space around the vehicle/trailer combination is visible only in fragments via the interior mirror and the two exterior mirrors. This causes difficulties, in particular when reversing or performing other maneuvers. Especially with small or narrow trailers, which are covered by the towing vehicle and barely visible via the exterior mirrors, the driver has hardly any possibility of recognizing which angular position the trailer is currently in. Assistance systems for trailer operation also often require the determination of the angular position of the trailer.

Furthermore, other difficult scenarios may be encountered when driving the vehicle with a trailer. Specifically, when driving forward at speed, a trailer assistance function providing trailer sway detection may be helpful, and when driving across a slope, a trailer slip detection would be useful. In such scenarios the trailer yaw angle should be detected and the respective status of the trailer with respect to the vehicle should be indicated to the user constantly.

Therefore, it is the aim of the present invention to obtain the angle of the towed trailer with respect to the longitudinal axis of the towing vehicle, to which the trailer is coupled. As soon as the angle is obtained, numerous functions which utilize this trailer angle become available to the user. Thus, a trailer parking assistance system can for example be provided when the reverse gear is engaged. In addition, the detected angle can be used for recognizing swaying of the trailer when the vehicle/trailer combination travels forward at a certain speed. Moreover, the detected angle can be used to recognize slipping of the trailer when the vehicle/trailer combination travels on an inclination. Further, the detected angle can also be used to overlay a trajectory of the trailer on a screen for the user depending on the current angular position, for example when reversing. Numerous other applications for the detected angle are conceivable.

Heretofore, some methods are already available to determine the angle of a trailer with respect to the longitudinal axis of the towing vehicle. Volkswagen, for example, provides a camera-based trailer assistant to determine the trailer angle. Moreover, Jaguar Land Rover provides a trailer assistant in which a known target (three black circles on a white background) is attached to the trailer. The corresponding algorithm then detects the known target.

Ford also provides a trailer assistant, which detects a known target. A checkerboard pattern is there attached to the drawbar of the trailer. The algorithm again detects the known checkerboard pattern.

From the printed matter US 9,085,261 B2, a rearview system with trailer angle detection is known. Rearward-directed captures are made by a camera directed rearward at the towing vehicle. If a trailer is coupled to the towing vehicle, the trailer angle is calculated from the captured images by means of a processor. Therein, a known target pattern at the trailer is in particular evaluated.

Moreover, from the printed matter US 2015/0115571 A1, a method for visual assistance by a graphic overlay on an image of a reversing camera is known. By this visual assistance, the driver can for example be assisted in reversing towards a trailer in order to align the tow coupling with the drawbar as exactly as possible. Therein, a camera model is provided to match the camera image in vehicle coordinates with world coordinates. The method predicts the path of the vehicle corresponding to the current steering angle.

Moreover, the printed matter US 2013/0158863 A1 discloses a prediction of a reversing path of a trailer with the aid of GPS and camera images. Therein, a current position of the vehicle and the trailer is obtained with the aid of a GPS system. The current location of the vehicle/trailer combination and a target position of the vehicle/trailer combination are presented on a screen. For this purpose, the reversing path of the trailer is predicted depending on the steering angle of the vehicle and the angle between vehicle and trailer, and overlaid on the screen.

The object of the present invention is to reliably determine the angle that a trailer occupies with respect to the towing vehicle with as little effort as possible.

According to the invention, this object is solved by a method according to claim 1 as well as an evaluation device according to claim 11. Advantageous further developments of the invention are apparent from the dependent claims.

Accordingly, there is provided a method for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled. Preferably, the angular position is the angle the trailer occupies relative to the longitudinal axis of the towing vehicle. For this method, a raw image of at least a part of the trailer is obtained by means of a rear camera of the towing vehicle. Thus, a raw image is captured by means of a camera of the towing vehicle directed rearwardly to the trailer. Therein, it is not necessarily required that the entire trailer is depicted in the raw image. Rather, it is for example sufficient if the drawbar (also called towbar) of the trailer, or only a part of the drawbar, is contained in the raw image.

The raw image is divided into blocks. These blocks preferably have equal size. For example, the raw image is divided into NxM blocks. Each block may have the size of 8x8 pixels. For each of the blocks a texture value is determined. Thus, a block may be reduced to one pixel representing the texture value. This means that the raw image is transformed to a texture value map (first map).
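As an illustration of the block division and texture reduction described above, the following sketch divides a grayscale image into 8x8 blocks and reduces each block to a single texture value. The patent does not fix a particular texture metric, so block variance is used here purely as a hypothetical stand-in.

```python
# Sketch: divide a grayscale image (list of pixel rows) into 8x8 blocks and
# reduce each block to one texture value. Block variance is an illustrative
# stand-in for the unspecified texture metric of the method.

def block_texture_map(image, block=8):
    """Return an (N/block) x (M/block) map of per-block texture values."""
    rows, cols = len(image), len(image[0])
    tex_map = []
    for by in range(0, rows - block + 1, block):
        row = []
        for bx in range(0, cols - block + 1, block):
            pixels = [image[y][x]
                      for y in range(by, by + block)
                      for x in range(bx, bx + block)]
            mean = sum(pixels) / len(pixels)
            variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
            row.append(variance)  # one value per block -> the "first map"
        tex_map.append(row)
    return tex_map
```

A uniform block yields texture value 0, while a block containing edges or mixed surfaces yields a high value, which is the property the repetition criterion later exploits.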

Those blocks of the raw image or those pixels of the texture value map are labelled which meet a pregiven repetition criterion in the first map. For example, if a texture is repeated rarely or even not, it is an unusual texture. Such rare or unusual texture may represent the towbar of the trailer since it appears just once in the raw image. After labelling the blocks, i.e. the first map is generated, the angular position of the trailer may be determined on the basis of the labelled blocks of the first map. For instance, the centre of the region with the labelled blocks with respect to the position of the towing hook or a respective normal of vehicle may represent the angle of the towbar or the trailer.

The method can be performed targetless, meaning that detection of the trailer can be performed without the user requiring a target sticker. In contrast, target-based trailer assistant systems require the user to place an easy-to-detect target with a known design on the trailer. This helps the camera to identify the trailer it is meant to track.

For a targetless method like the present one, this means that one step less is required to set up the system at the beginning, and the targetless method also makes maintenance of the system much easier, as there is no sticker to suffer aging, dirt or damage.

Preferably, the blocks of the raw image are grouped into (horizontal) slices and the labelling of the blocks is performed on the basis of a histogram of texture of each of the slices. The advantage of dividing the image into slices is that a different threshold can be applied for every cropped portion based on a localized histogram study. Under adverse circumstances, there is a chance of missing some portion of the trailer towbar with a global histogram threshold. Also, slice-based texture analysis is capable of handling changes in the surface (e.g. the road can have different types of surfaces).

Another embodiment includes the steps of determining a luminance value of each of the blocks and labelling blocks the luminance value of which meet a pregiven repetition criterion in a second map, wherein the step of determining the angular position of the trailer is performed on the basis of the labelled blocks of the first map and the second map. In other words, beside an analysis of unusual texture features an analysis of unusual luminance features is performed. In a similar way as the texture analysis, the luminance analysis is performed on each of the blocks. A low repetition rate of the luminance values may indicate the towbar of the trailer. Based on these luminance values the second map is generated. The angular position of the towbar or trailer, respectively, can now be obtained from the information of the first map and the second map.

Additionally, the step of dividing the first map and/or the second map into sectors may be performed, using only those sectors for determining the angular position which include the most labelled blocks. Such sectorizing may reduce the number of blocks to be processed. Specifically, those one or more sectors can be selected for further processing which include the most labelled blocks, i.e. which show a very low repetition of texture values and/or luminance values, for example.
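The sector selection described above can be sketched as follows; the per-sector counts of labelled blocks are assumed to be precomputed, and the function name is illustrative, not taken from the patent.

```python
# Sketch: keep only the sector(s) that contain the most labelled blocks.
# The input is a list of labelled-block counts, one entry per sector.

def peak_sectors(labelled_counts):
    """Return the indices of sectors holding the maximum number of labelled blocks."""
    peak = max(labelled_counts)
    return [i for i, count in enumerate(labelled_counts) if count == peak]
```

Only the returned sectors would then be passed on to edge detection, which is how the sectorizing reduces the processing load.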

In a further embodiment an edge detection may be performed on a boundary between labelled blocks and unlabeled blocks, wherein the angular position of the trailer is determined on the basis of one or more detected edges. If, for example, the region with the labelled blocks covers the region of the raw image which represents the towbar, the boundaries or edges of this region correspond to the edges of the towbar. Moreover, a pixel level edge refinement may be performed on the one or more detected edges. If the edges are obtained on block basis, the edges are very coarse. Therefore, a refinement of the edges on pixel basis improves the accuracy of the edges tremendously. Consequently, the angular position of the trailer can be determined more accurately when using edges refined on pixel basis.

In one embodiment a centre of the labelled blocks may be identified for determining the angular position of the trailer. The centre of the labelled blocks may, for example, be determined by using the detected edges. It is easy to identify a centre line of a left edge and a right edge of the region with the labelled blocks, i.e. the towbar, if the middle of the left towbar edge boundary and the corresponding right towbar edge boundary are identified for a couple of pixels towards the towball and a line fitting algorithm is applied.
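The centre-line idea above can be sketched as follows: for each pixel row, take the midpoint of the left and right towbar boundary and fit a line through the midpoints. A plain least-squares fit stands in for the line fitting algorithm, which the description leaves unspecified.

```python
# Sketch: fit a centre line x = slope*y + intercept through the midpoints of
# the left and right towbar edge boundaries (one x coordinate per pixel row).

def centre_line(left_edge, right_edge):
    """Return (slope, intercept) of the least-squares centre line."""
    mids = [(l + r) / 2 for l, r in zip(left_edge, right_edge)]
    ys = list(range(len(mids)))
    n = len(ys)
    mean_y = sum(ys) / n
    mean_x = sum(mids) / n
    # Standard least-squares slope over (y, x) pairs.
    sxy = sum((y - mean_y) * (x - mean_x) for y, x in zip(ys, mids))
    syy = sum((y - mean_y) ** 2 for y in ys)
    slope = sxy / syy
    return slope, mean_x - slope * mean_y
```

For a towbar whose midpoints drift one pixel per row, the fitted slope is 1, i.e. the centre line is inclined 45° to the image column axis.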

In a further development, the angle of the centre line of the region of the labelled blocks, running through the towball, is measured against a pregiven normal for determining the angular position of the trailer. This pregiven normal may be the longitudinal axis of the towing vehicle. The difference between this normal and the centre line of the region with the labelled blocks may represent the angle of the towbar or trailer.

Alternatively, the pixels of the centre of the region of labelled blocks may be transformed into polar coordinates for determining the angular position of the trailer. Such a transformation into polar coordinates transforms an angle into a distance, which is easier to detect. Thus, a specific angle of the trailer will be transformed into a certain horizontal position in a polar coordinate map.
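A minimal sketch of this polar transformation, assuming labelled-block centres and a known towball position in image coordinates (both names are illustrative):

```python
import math

# Sketch: map (x, y) image points to (radius, angle) around the towball.
# After this transformation the trailer angle appears as a horizontal
# position in the polar map, as described above.

def to_polar(points, towball):
    """Return a list of (radius, angle_deg) pairs; 0 deg is the image-down
    direction, taken here as the vehicle normal (an assumption)."""
    tx, ty = towball
    out = []
    for x, y in points:
        dx, dy = x - tx, y - ty
        radius = math.hypot(dx, dy)
        angle = math.degrees(math.atan2(dx, dy))  # 0 deg = straight behind
        out.append((radius, angle))
    return out
```

A point straight behind the towball maps to angle 0°, while a point level with it maps to ±90°, so the trailer angle can be read off as a coordinate rather than estimated geometrically.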

As already indicated above, the labelled blocks may represent the towbar of the trailer. A towbar is characterized by a plurality of individual components which result in high texture differences and high luminance differences. Therefore, the towbar can be detected easily with the above described method.

The above described object is also solved by an evaluation device for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, including

- a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer, and

- a data processing device for

o dividing the raw image into blocks, o determining a texture value for each of the blocks,

o labelling blocks the texture value of which meet a pregiven repetition criterion in a first map and

o determining the angular position of the trailer on the basis of the labelled blocks of the first map.

The advantages and variations of the inventive method as described above also apply to the inventive evaluation device. In this case the method features correspond to respective functional features of the device.

Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.

The attached drawings show in:

Fig. 1 a software architecture for a targetless trailer assist system;

Fig. 2 a flowchart of the functionality of determining the angle of the trailer;

Fig. 3 a raw image divided into blocks;

Fig. 4 the raw image divided into slices;

Fig. 5 an example of labelling unusual texture in processed slices;

Fig. 6 an example of labelling unusual luminance in processed slices;

Fig. 7 an example of dividing the image into sectors to find a peak sector with an unusual texture and luminance;

Fig. 8 the image of the trailer with a block level accurate towbar boundary; and

Fig. 9 an actual towbar centre line and a normal to determine the angle of the towbar.

The present invention will now be described in more detail with exemplary embodiments representing preferred examples of the invention.

The proposed method for determining an angular position of a trailer may be used for a trailer assistant system of a vehicle. The method can be described as targetless, since no target sticker needs to be attached to the trailer. The proposed method or trailer assist system is a vision-based automated feedback solution. The basic idea behind the solution is to segment the trailer towbar, which has the potential to provide features useful for determining the angle of the trailer with respect to the longitudinal axis of the towing vehicle.

In one embodiment the core idea is to use texture differences in an image to determine the position and/or angle of a trailer, or a component of it, in the image. Ideally, any texture-based segmentation can be used for the trailer towbar detection. The proposed method is embedded-friendly and can be directly ported to any hardware without effort. A corresponding system will provide better control of the vehicle with the trailer, so that the number of accidents due to various blind spots and instability created by the trailer can be reduced.

Fig. 1 shows a software architecture for an embodiment of a targetless trailer assist system. The main block of the system is a trailer feedback module 1. The trailer feedback module 1 receives input data 2, which may be a video frame or a sensor map. The input data are fed to a block cost metric module 3 of the trailer feedback module 1. The block cost metric module 3 extracts a texture feature of each block of the image, frame, map etc. The extracted texture feature for each block is analyzed in an unusual texture analysis module 4. In parallel, the block cost metric module 3 extracts luminance data for each block of the image, frame, map etc. The respective data are analyzed in an unusual luminance analysis module 5.

Unusual texture features and unusual luminance features obtained from the analysis modules 4 and 5 are used in a feature detection module 6 for detecting a unique feature on the towbar. Thus, those blocks of the image are known which represent the towbar. Based on these blocks, a detection of edges of the towbar is performed in edge detection module 7. The resulting edges are used in an angle detector 8 to obtain the trailer yaw angle. The output of the angle detector 8 of the trailer feedback module 1 is an output signal 9 which can be used for controlling the brake or the acceleration, or for providing safe steering direction change feedback.

Fig. 2 represents a possible flowchart of an exemplary method for determining the angular position of a trailer. The method starts at step S1. The functionality of the trailer feedback module 1 is defined through a set of configuration parameters. In step S2 the feedback module allocates and initializes necessary resources like memory buffers and data structures. In step S3 the configuration parameters are initialized.

In step S4 the incoming video frame 2 is divided into NxM macro blocks (the block size may be 8x8). Furthermore, sets of macro blocks may be grouped in separate slices. Finally, a block cost metric may be calculated for each block in step S4. This step may involve the extraction of texture or luminance, wherein each pixel of a first map is a representation of the texture of a respective block and one pixel of a second map is a representation of the luminance of this block.

In step S5 an unusual texture analysis is performed on each slice. This step involves the extraction of less repeating texture patterns in each slice. Following or parallel step S6 performs an unusual luminance analysis on each slice. This step involves the extraction of less repeating luminance patterns in each slice.

Step S7 performs a texture feature detection of the towbar. This step involves dividing the region of interest (ROI) into sectors around the towball centre and detecting the sector or sectors with the peak unusual texture pattern. Additionally, in step S7 a luminance feature detection is performed on the towbar. Similarly, this step involves dividing the ROI into sectors around the towball centre and detecting the sector with the peak unusual luminance pattern. In step S8 towbar edge detection takes place. This step involves the detection of one or more edges, specifically to the left and right of the towbar. In the final step S9, detection of the trailer yaw angle is performed based on the towbar centre, for example. The process or the trailer feedback module 1 ends at step S10.

Fig. 3 shows an image of a rear camera of a vehicle towing a trailer 10. The trailer has a towbar 11 which couples the trailer 10 to the vehicle (not shown). For further processing the image is divided into equally sized blocks. The number of blocks may be configurable (NxM). Ideally, N and M are set to multiples of 8 for better embedded performance. In this example, a single block has the size of 8x8 pixels.

In accordance with Fig. 4, the blocks may be divided into equally sized configurable slices, i.e. the slices have essentially the same extension within a specified region. In the example of Fig. 4, six horizontal slices sl1 to sl6 are defined in the centre region of the image. The blocks are not shown for the sake of clarity. The slices may have any form; the horizontal slices as shown in Fig. 4 are efficient for unusual pattern studies. The slices may be used to mask regions outside a predefined ROI (in Figs. 3 to 9 the ROI is below the semicircle) which will not be processed. Specifically, each column of blocks of the ROI may be split into a number of slices. There may be a configuration parameter to select the number of actual slices to be processed.

A block cost metric may be calculated for each block in order to simplify the image. Ideally, any block cost metric can be used for high level block based texture study. The block cost metric may determine block differences with respect to neighbour blocks. This step will extract a texture value for each block and optionally also a peak repeating luminance value of each block. The resulting two smaller maps (first map and second map) are further analyzed for the trailer towbar detection.

An unusual texture analysis (compare step S5) is performed on the first map with the texture values. This analysis may mainly involve the following steps:

Determine a histogram of texture for each slice.

Extract the peak histogram bin index and value.

Apply a threshold defining unusual texture as x % (configurable parameter) of the peak histogram bin value.

Create a binary map marking histogram bins lying below the threshold as non-repeating (1) and the others as repeating (0). Label all blocks 12 with non-repeating patterns as unusual texture.

These blocks will be further analyzed and classified.
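The per-slice repetition criterion above can be sketched as follows; the value of 10 % for the configurable parameter x is an arbitrary illustrative choice.

```python
from collections import Counter

# Sketch of the per-slice repetition criterion: build a histogram of the
# texture values of one slice, take x % of the peak bin as the threshold,
# and label every block whose bin count lies below it as unusual (1).

def label_unusual(slice_values, x_percent=10):
    """Return a binary list: 1 = non-repeating (unusual), 0 = repeating."""
    hist = Counter(slice_values)
    peak = max(hist.values())
    threshold = peak * x_percent / 100.0
    return [1 if hist[value] < threshold else 0 for value in slice_values]
```

A texture value that appears in almost every block of the slice (e.g. road surface) falls above the threshold and stays unlabelled, while a value appearing only once, such as that of the towbar, is marked as unusual.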

As can be seen in Fig. 5, the unusual texture analysis was limited to slices sl1, sl2 and sl3. The processing of the other slices would not be effective for determining the angular position of the towbar.

Similarly, an unusual luminance analysis may be performed as shown in Fig. 6. This analysis may involve the following steps:

Determine a histogram of luminance for each slice.

Extract the peak histogram bin index and value.

Apply a threshold defining unusual luminance as x % (configurable parameter) of the peak histogram bin value.

Create a binary map (second map) marking histogram bins lying below the threshold as non-repeating (1) and the others as repeating (0).

Label all blocks with non-repeating patterns as unusual luminance.

These blocks will be further analyzed and classified.

Fig. 6 shows blocks 13 of unusual luminance. Again, only slices sl1 to sl3 are analyzed. The non-repeating blocks 13 mainly correspond to the towbar 11.

As a further optional step, a unique feature detection is performed on the towbar 11 as shown in Fig. 7. This unique feature detection may involve the following steps:

Split the ROI into a number of sectors sc1 to sc5 (configurable) with respect to the towball centre 14.

The size of each sector is equal to 180° divided by the number of sectors.

Find the sectors with peak unusual patterns for both texture (first map) and luminance (second map).

Label these unique blocks separately.

As a next step, a towbar edge detection may be performed. This edge detection may involve the detection of a towbar edge boundary 15 to the left of the towball and a towbar edge boundary 16 to the right of the towball, as shown in Fig. 8. Optionally, a pixel level edge refinement may be done on the edge blocks afterwards to get an accurate pixel level boundary of the towbar on both sides. For obtaining a pixel level boundary, a simple gradient check on the block level boundary and its neighbours can be performed. The pixel level towbar boundaries can be used to determine the towbar centre. For example, an average of the left towbar edge boundary 15 and the corresponding right towbar edge boundary 16 may be used for each pixel line. Subsequently, a line fitting algorithm can be used to find the exact centre line of the towbar 11.
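One plausible reading of the "simple gradient check" for pixel level refinement is to search near the coarse block boundary for the pixel with the strongest intensity step; the following is a sketch under that assumption, not a verbatim implementation of the method.

```python
# Sketch: refine a coarse block-level edge position to pixel accuracy by
# searching a small window around it for the largest horizontal gradient.

def refine_edge(row_pixels, block_x, search=8):
    """Return the pixel column near block_x with the largest intensity step."""
    lo = max(1, block_x - search)
    hi = min(len(row_pixels) - 1, block_x + search)
    # Pick the column where |pixel[x] - pixel[x-1]| is maximal.
    return max(range(lo, hi),
               key=lambda x: abs(row_pixels[x] - row_pixels[x - 1]))
```

Applied per pixel row on both towbar sides, this turns the coarse 8-pixel-wide block boundary into a pixel-accurate boundary suitable for the subsequent centre line fit.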

Finally, the trailer yaw angle detection has to be performed, i.e. the angular position of the trailer or towbar, respectively, has to be determined as shown in Fig. 9. The angle detection can be performed by measuring the angle 19 of the trailer centre line 17 with respect to a normal 18. The normal represents an angle of 0° of the trailer with respect to the towing vehicle.
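Assuming the towbar centre line has been fitted as x = slope*y + b in image coordinates, with the normal 18 pointing straight down the image, the yaw angle follows directly from the slope; this small helper is illustrative.

```python
import math

# Sketch: the yaw angle is the angle between the fitted towbar centre line
# x = slope*y + b and the vertical vehicle normal. Slope 0 means the trailer
# is straight behind the vehicle (0 degrees).

def yaw_from_slope(slope):
    """Return the trailer yaw angle in degrees for a given centre-line slope."""
    return math.degrees(math.atan(slope))
```

A slope of 1 (the centre line drifting one pixel sideways per pixel row) corresponds to a 45° trailer angle, matching the geometric picture of Fig. 9.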

Another method for determining the angular position of the towbar can include the conversion of the pixels of the ROI into polar coordinates with respect to the towball position 14. The average of all blocks classified as "non-repeating" can be used to get the actual towbar angle 19.

The described targetless trailer angle detection does not require a calibration drive, which would need to be performed correctly in all iterations. Thus, there is no risk of inaccurate calibration parameters leading to highly inaccurate angle measurements. Further, the proposed solution is highly embedded-friendly and can be easily ported to any processor. Additionally, the inventive method is highly runtime efficient. It needs neither a large memory space nor any persistent memory.

Furthermore, the approach does not need a physical measurement of the towball position. Moreover, camera calibration data is not needed to detect the yaw angle of the trailer. Finally, the inventive method does not require learning steps to determine the angular position of the trailer. In summary, it is a very efficient method for determining the angular position of the trailer.