
Title:
VISION SYSTEM AND METHOD FOR A MOTOR VEHICLE
Document Type and Number:
WIPO Patent Application WO/2017/207669
Kind Code:
A1
Abstract:
A vision system (10) for a motor vehicle comprises a stereo imaging apparatus (11) with imaging devices (12) adapted to capture images from a surrounding of the motor vehicle, and a processing device (14) adapted to process images captured by said imaging devices (12) and to detect objects, and track detected objects over several time frames, in the captured images. The processing device (14) is adapted to obtain an estimated value for the intrinsic yaw error of the imaging devices (12) by solving a set of equations, belonging to one particular detected object (30), using a non-linear equation solver method, where each equation corresponds to one time frame and relates a frame time, a disparity value of the particular detected object, an intrinsic yaw error and a kinematic variable of the ego vehicle.

Inventors:
LINDGREN LEIF (SE)
MEDLEY FREDRIK (SE)
KNUTSSON PER (SE)
Application Number:
PCT/EP2017/063229
Publication Date:
December 07, 2017
Filing Date:
May 31, 2017
Assignee:
AUTOLIV DEV (SE)
LINDGREN LEIF (SE)
MEDLEY FREDRIK (SE)
KNUTSSON PER (SE)
International Classes:
G06T7/80
Foreign References:
US 2014/0168377 A1 (2014-06-19)
US 2008/0144924 A1 (2008-06-19)
US 2012/0224069 A1 (2012-09-06)
DE 10 2012 009 577 A1 (2012-11-29)
DE 10 2013 224 502 A1 (2014-06-12)
Other References:
BADINO HERNAN ET AL: "Visual Odometry by Multi-frame Feature Integration", 2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, IEEE, 2 December 2013 (2013-12-02), pages 222 - 229, XP032575745, DOI: 10.1109/ICCVW.2013.37
Claims:
1. Vision system (10) for a motor vehicle, comprising a stereo imaging apparatus (11) with imaging devices (12) adapted to capture images from a surrounding of the motor vehicle, and a processing device (14) adapted to process images captured by said imaging devices (12) and to detect objects, and track detected objects over several time frames, in the captured images, characterized in that said processing device (14) is adapted to obtain an estimated value for the intrinsic yaw error of the imaging devices (12) by solving a set of equations, belonging to one particular detected object (30), using a non-linear equation solver method, where each equation corresponds to one time frame and relates a frame time, a disparity value of the particular detected object, an intrinsic yaw error and a kinematic variable of the ego vehicle.

2. Vision system as claimed in claim 1, characterized in that the kinematic variable of the ego vehicle is the vehicle speed.

3. Vision system as claimed in any one of the preceding claims, characterized in that the set of equations is based on the assumption of an essentially constant ego vehicle speed.

4. Vision system as claimed in any one of the preceding claims, characterized in that said vision system (10) is adapted to determine whether the ego vehicle speed is essentially constant, and to discard in said yaw error estimation image frames not fulfilling this condition.

5. Vision system as claimed in any one of the preceding claims, characterized in that said vision system (10) is adapted to determine whether a detected object is stationary or near-stationary, and to discard in said yaw error estimation a detected object not being stationary or near-stationary.

6. Vision system as claimed in claim 5, characterized in that said vision system (10) is adapted to determine the speed of a detected object, in particular by using a tracker in said processing device (14) adapted to track detected objects over time.

7. Vision system as claimed in any one of the preceding claims, characterized in that said vision system (10) inserts into said set of equations the known ego vehicle speed available on a data bus (20) of the motor vehicle.

8. Vision system as claimed in any one of the preceding claims, characterized in that the ego vehicle speed is derived by solving said set of equations by using said non-linear equation solver method.

9. Vision system as claimed in any one of the preceding claims, characterized in that the set of equations is based on the assumption that the ego vehicle essentially moves straight.

10. Vision system as claimed in any one of the preceding claims, characterized in that said vision system (10) is adapted to determine whether the ego vehicle essentially moves straight, and to discard image frames not fulfilling this condition.

11. Vision system as claimed in claim 10, characterized in that whether the ego vehicle essentially moves straight is determined on the basis of a signal from a yaw rate sensor (22) and/or a steering wheel angle sensor (23) of the motor vehicle.

12. Vision system as claimed in any one of the preceding claims, characterized in that the set of equations fulfills one or more of the following properties:

- all equations have the same form;

- all equations are obtained by equalizing the current distance to the particular detected object (30) as obtained from vehicle kinematics, to the current distance as obtained from the disparity value;

- the yaw error is expressed as a shift to the disparity value.

13. Vision system as claimed in any one of the preceding claims, characterized in that the estimated value for the intrinsic yaw error is used for calibrating the yaw angle between the imaging devices (12).

14. Vision system as claimed in any one of the preceding claims, characterized in that at least three, preferably at least five equations are used in said set of equations.

15. Vision method for a motor vehicle, comprising capturing images from a surrounding of the motor vehicle using a stereo imaging apparatus (11) with stereo imaging devices (12), processing images captured by said imaging devices (12), detecting objects, and tracking detected objects over several time frames, in the captured images, characterized by obtaining an estimated value for the intrinsic yaw error of the imaging devices (12) by solving a set of equations, belonging to one particular detected object (30), using a non-linear equation solver method, where each equation corresponds to one time frame and relates a frame time, a disparity value of the particular detected object, an intrinsic yaw error and a kinematic variable of the ego vehicle.

Description:
Vision system and method for a motor vehicle

The invention relates to a vision system for a motor vehicle, comprising a stereo imaging apparatus with imaging devices adapted to capture images from a surrounding of the motor vehicle, and a processing device adapted to process images captured by said imaging devices and to detect objects in said images corresponding to real-world objects in the surrounding of the motor vehicle, and to track detected objects over several time frames in the captured images. The invention also relates to a corresponding vision method.

The yaw angle, also known as squint angle, between the left camera and the right camera in a stereo camera system must be determined with great accuracy, because an error in this angle results in large distance estimation errors in the stereo calculations. The distance error grows with the square of the distance. For an automotive stereo camera the squint angle will not be constant over the vehicle lifetime, due to thermal changes and the long lifetime of automotive systems. Therefore, an online solution for estimating a squint angle error is needed.

It is known to estimate the squint angle error using radar or lidar distance information as a reference value, which however requires a radar or lidar reference system. DE 10 2012 009 577 A1 describes a method of calibrating the squint angle between two stereo cameras in a motor vehicle from a comparison of a reference driven distance determined from odometric data to a stereoscopic driven distance determined by image processing. However, the external odometric data needed to calculate the reference driven distance constitute an additional systematic source of uncertainty for the determination of the squint angle error.

DE 10 2013 224 502 A1 discloses a method of calibrating a stereo camera of a vehicle. The method involves separately determining the optical flow of the images from both cameras. From the optical flow and the estimated ego motion of the vehicle, corresponding 3D environment data are calculated. From a comparison of the 3D environment data of both cameras, a calibration error is determined. Both of the above mentioned methods require exact knowledge of the movement of the ego vehicle given by some other sensor.

US 2014/0168377 A1 discloses a method for aligning a stereo camera of a vehicle mounted object detection system, wherein an image from each camera at two different times is used to determine an observed displacement of a stationary object, like a traffic sign, relative to the vehicle. A predicted displacement of the object relative to the vehicle is also determined using, for example, a difference of size of the object in images taken at two different times. A correction determined from the difference between the observed displacement and the predicted displacement is used to correct for misalignment of the cameras.

The object of the present invention is to provide a vision system and a method of controlling a vision system which allow for an accurate determination of a yaw angle error between the stereo cameras during operation of a motor vehicle. The invention solves this object with the features of the independent claims.

According to the invention, a set of equations is solved by using a non-linear equation solver method in order to obtain an estimated value for the intrinsic yaw error. Each equation corresponds to one time frame and relates a time of the time frame, a calculated disparity value of the particular detected object, an intrinsic yaw error, and a kinematic variable of the ego vehicle. It has been found that in this manner, the yaw angle error between the stereo cameras can be accurately determined during movement of the motor vehicle, allowing for a corresponding yaw angle calibration of the stereo cameras.

The estimated value for the intrinsic yaw error can advantageously be used for calibrating the yaw angle between the imaging devices during driving.

Preferably, all equations in the set have the same form. In particular, all equations in the set are preferably obtained by equalizing the current distance to the particular detected object as obtained from vehicle kinematics to the current distance as obtained from the disparity value. Herein, the current distance as obtained from the disparity value may be expressed as x = f · b / (d + ε), where f is the focal length of the imaging devices in the baseline direction, b is the baseline, i.e. the distance between the imaging devices, d is the current disparity value, and ε is the intrinsic yaw error. In many cases, the baseline direction is horizontal. However, the baseline direction may also have a significant vertical component, for example if the stereo imaging devices are located far away from each other, like in the front bumper and behind the windscreen of the vehicle. In such cases, the baseline and focal length direction are measured in the non-horizontal direction between the imaging devices.
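By way of illustration only, the disparity-to-distance relation above can be sketched as follows (not part of the patent; the numeric values in the example are assumed):

```python
# Distance from disparity with the yaw error expressed as a disparity
# shift: x = f * b / (d + eps). Illustrative sketch, not patent text.
def distance_from_disparity(d, f, b, eps=0.0):
    """d: disparity (pixels), f: focal length (pixels),
    b: baseline (metres), eps: intrinsic yaw error as a
    disparity shift (pixels). Returns the distance x in metres."""
    return f * b / (d + eps)

# Assumed example values: f = 1200 px, b = 0.3 m, d = 10 px.
x_ideal = distance_from_disparity(10, 1200, 0.3)        # 36.0 m
x_shift = distance_from_disparity(10, 1200, 0.3, 0.5)   # about 34.3 m
```

Even a sub-pixel yaw error of 0.5 px shifts the estimated distance by more than a metre at this range, illustrating why the squint angle must be calibrated accurately.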

In the ideal case, where ε = 0, the above equation gives the well-known relation between the distance x and the disparity d of a detected object. In the real case, where ε ≠ 0, the above equation is used with the additional advantageous feature of expressing the yaw error ε as a simple shift to the disparity value d. In other words, the calculated disparity d and the yaw error ε are related to each other by addition or subtraction in the equation: the ideal disparity D is expressed by adding (or subtracting) the yaw error ε to or from the calculated disparity d. This inventive approach is accurate because of the small angle approximation: the yaw angle ε is very small, such that sin ε ≈ ε. An example of an unacceptable error ε is 0.05 degrees.

In order to improve the accuracy in the yaw error estimation, the number of equations used in the set of equations is preferably larger than the number of unknown variables, preferably by a factor of at least 1.5, more preferably by a factor of at least 2, even more preferably by a factor of at least 3. Preferably the number of equations used in the set is at least three, preferably at least five, more preferably at least ten. Generally speaking, the higher the number of equations used, the higher the precision with which the unknown variables are determined.

Preferably, the set of equations is based on the assumption of an essentially constant ego vehicle speed v, simplifying the equations for the yaw error calculation and thus reducing the computational effort. In particular, the above mentioned current distance x to the particular detected object as obtained from vehicle kinematics can be simply expressed by the equation x = (t0 - t) · v, where t0 is the (unknown) time when the particular detected object is at x = 0, i.e. on the straight line connecting the imaging devices, t is the frame time of the current time frame, such that t0 - t is the time to collision TTC, and the kinematic variable v is the ego vehicle speed, i.e. the longitudinal speed of the ego vehicle, generally relative to the particular detected object, or, in case of a stationary detected object, the absolute ego vehicle speed (relative to ground). Summarizing the above, a practically preferred general form of equation to be used in the yaw error estimation is

(t0 - t) · (d + ε) - f · b / v = 0
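For illustration, this general form can be expressed as a per-frame residual that vanishes when model and measurement agree (a sketch with assumed numbers, not taken from the patent):

```python
# Per-frame residual of the model (t0 - t) * (d + eps) - f * b / v = 0.
def residual(t0, eps, t, d, f, b, v):
    """t0: crossing time (s), eps: yaw error as a disparity shift (px),
    t: frame time (s), d: measured disparity (px), f: focal length (px),
    b: baseline (m), v: ego vehicle speed (m/s)."""
    return (t0 - t) * (d + eps) - f * b / v

# With assumed values f = 1200 px, b = 0.3 m, v = 15 m/s (f*b/v = 24)
# and t0 - t = 1 s, a shifted disparity of d + eps = 24 px satisfies
# the equation exactly:
r = residual(2.0, 0.5, 1.0, 23.5, 1200.0, 0.3, 15.0)  # 0.0
```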

Preferably, the vision system is adapted to determine whether a detected object is stationary or near-stationary, and to discard a detected object which is not fulfilling this condition, i.e., which is moving. This can be done by determining the speed of a detected object and discarding the object in case it has non-zero speed, or a speed exceeding a predetermined threshold. The speed of a detected object is preferably estimated by a tracker in said processing device adapted to track detected objects or object candidates over time. Other ways of determining the speed of a detected object are possible. Also, other ways of determining whether a detected object is stationary or near-stationary are possible, for example by using a classifier in the image processing of detected objects, and only taking account of classified objects which are known to be stationary, like poles, traffic signs, lamp posts, trees, zebra crossings, dashed lane markings etc.
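The stationary-object gate described above might be sketched as follows; the speed threshold is an assumed value, not one given in the patent:

```python
# Keep a detected object for yaw-error estimation only if the tracker
# reports it as stationary or near-stationary. The 0.2 m/s threshold
# is an illustrative assumption.
def admissible_for_yaw_estimation(tracked_speed_mps, threshold_mps=0.2):
    return abs(tracked_speed_mps) <= threshold_mps
```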

In the case of a stationary detected object, the above mentioned condition of an essentially constant ego vehicle speed relative to the detected object reduces to an essentially constant absolute ego vehicle speed (relative to ground). Consequently, the vision system is preferably adapted to determine whether the ego vehicle speed is essentially constant, and to discard in said yaw error estimation image frames not fulfilling this condition. The condition of essentially constant ego vehicle speed can be easily evaluated, for example by monitoring the ego vehicle speed available on a vehicle data bus, namely the ego vehicle speed measured by an angular velocity sensor arranged in a measuring relationship to a rotating part in the powertrain of the ego vehicle, like the crankshaft, which velocity is proportional to the wheel speed and is displayed by the cockpit speedometer. Other ways of evaluating the condition of an essentially constant ego vehicle speed are possible, for example by monitoring the more exact speed value provided by a satellite navigation system (GPS) receiver of the motor vehicle, or by evaluating the signal from a longitudinal acceleration sensor, which should be zero or near-zero.

As mentioned above, the kinematic variable may in particular be the longitudinal vehicle speed. However, the kinematic variable may also be, for example, the integrated longitudinal vehicle speed, or the longitudinal acceleration of the ego vehicle. In the case of longitudinal vehicle speed, there are different ways of handling the speed variable in the set of equations. In a preferred embodiment, the speed variable v is regarded as known, i.e. the ego vehicle speed v provided by a speed sensor and/or a satellite navigation system receiver of the motor vehicle is inserted into the equations.
This has the advantage that by reducing the number of unknown variables by one, the accuracy in the determination of the other unknown variables, including the yaw error, can be improved. On the other hand, the speed variable v may be regarded as unknown, and the ego vehicle speed v can be derived by solving the set of equations by using the non-linear equation solver method, in the same manner as deriving other unknowns such as the intrinsic yaw error. In this application, the invention delivers an ego vehicle speed value independent of the speed value related to the wheel speed and displayed by the cockpit speedometer, which may be used for calibrating the ego vehicle speed.

Preferably, the set of equations is based on the assumption that the ego vehicle essentially moves straight. This allows for simplification of the equations, and for using the vehicle speed as the longitudinal velocity in the equations, since the lateral speed component is zero or near zero. In this regard, the vision system preferably is adapted to monitor a condition of an essentially straight moving ego vehicle, and to discard image frames not fulfilling said condition. Herein, the condition of an essentially straight moving ego vehicle can be monitored on the basis of a signal from a yaw rate sensor and/or a steering wheel angle sensor of the motor vehicle.
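A minimal sketch of this frame gating, assuming illustrative tolerances for "essentially constant speed" and "essentially straight" (neither value is specified in the patent):

```python
# Keep an image frame only if recent speed samples from the data bus
# are essentially constant and the yaw rate indicates straight driving.
# Both tolerances are assumed, illustrative values.
def frame_admissible(speed_samples_mps, yaw_rate_rad_s,
                     speed_tol_mps=0.1, yaw_tol_rad_s=0.01):
    spread = max(speed_samples_mps) - min(speed_samples_mps)
    return spread <= speed_tol_mps and abs(yaw_rate_rad_s) <= yaw_tol_rad_s
```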

In the following the invention shall be illustrated on the basis of preferred embodiments with reference to the accompanying drawings, wherein:

Fig. 1 shows a vision system for a motor vehicle according to an embodiment of the invention. The vision system 10 is mounted in a motor vehicle and comprises an imaging apparatus 11 for acquiring images of a region surrounding the motor vehicle, for example a region in front of the motor vehicle. The imaging apparatus 11 comprises a plurality of optical imaging devices 12, in particular cameras, forming a stereo imaging apparatus 11 and operating in the visible and/or infrared wavelength range, where infrared covers near IR with wavelengths below 5 microns and/or far IR with wavelengths beyond 5 microns.

The imaging apparatus 11 is coupled to a data processing device 14 adapted to process the image data received from the imaging apparatus 11. The data processing device 14 may comprise a pre-processing section 13 adapted to control the capture of images by the imaging apparatus 11, receive the signal containing the image information from the imaging apparatus 11, rectify or warp pairs of left/right images into alignment and/or create disparity or depth images, which per se is known in the art. The image pre-processing section 13 may be realized by a dedicated hardware circuit, for example a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). Alternatively the pre-processing section 13, or part of its functions, can be realized by software in a microprocessor or in a System-On-Chip (SoC) device comprising, for example, FPGA, DSP, ARM and/or microprocessor functionality.

Further image and data processing carried out in the processing device 14 by corresponding software advantageously comprises identifying and preferably also classifying possible objects in front of the motor vehicle, such as pedestrians, other vehicles, bicyclists and/or large animals, by a classifier; tracking over time the position of object candidates identified in the captured images by a tracker; and activating or controlling at least one driver assistance device 18 depending on an estimation performed with respect to a tracked object, for example an estimated collision probability. The driver assistance device 18 may in particular comprise a display device to display information relating to a detected object. However, the invention is not limited to a display device. The driver assistance device 18 may in addition or alternatively comprise a warning device adapted to provide a collision warning to the driver by suitable optical, acoustical and/or haptic warning signals; one or more restraint systems such as occupant airbags or safety belt tensioners, pedestrian airbags, hood lifters and the like; and/or dynamic vehicle control systems such as brake or steering control devices.

The data processing device 14 is preferably a digital device which is programmed or programmable and preferably comprises a microprocessor, micro-controller, digital signal processor (DSP) or a System-On-Chip (SoC) device. The data processing device 14, pre-processing section 13 and the memory device 25 are preferably realised in an on-board electronic control unit (ECU) and may be connected to the imaging apparatus 11 via a separate cable or a vehicle data bus. In another embodiment the ECU and one or more of the imaging devices 12 can be integrated into a single unit, where a one-box solution including the ECU and all imaging devices 12 can be preferred. All steps from imaging, image pre-processing, image processing to possible activation or control of the driver assistance device 18 are performed automatically and continuously during driving in real time.

The processing device 14 has access to information obtained from other vehicle sensors 19, like the velocity sensor 21, yaw rate sensor 22, steering wheel sensor 23 etc. in the motor vehicle via a digital data bus 20, for example a CAN bus. The velocity sensor 21 may be an angular velocity sensor arranged in a measuring relationship to a rotating part in the powertrain of the ego vehicle, like the crankshaft or driveshaft, between the transmission and the wheels. The yaw rate sensor 22 may for example be an acceleration sensor measuring the lateral acceleration of the vehicle.

In the following, the invention is described by way of example as shown in Figure 1, namely through detecting and tracking a pole 30 in the images captured by the imaging apparatus 11. The pole 30 is detected by an object detection section and tracked by a tracker in the image processing device 14. It may be assumed that the pole 30 is detected in time frame tk and found again in the images corresponding to subsequent time frames tk+1, tk+2, .... For each frame, the corresponding disparity dk, dk+1, dk+2, ... is calculated in the processing means 14 in the usual manner as the baseline distance in pixels between the detected object, here the pole 30, in the left and right stereo images.

The vision system permanently monitors the ego vehicle speed provided by the vehicle speed sensor 21. It shall be assumed that the vehicle moves with constant velocity on the road 40. Furthermore, the processing device 14 determines that the vehicle is moving straight on the road, based on the signal from the yaw rate sensor 22, or alternatively from the steering wheel sensor 23. Based on information from the tracker in the image processing section of the data processing device, the pole 30 is correctly estimated to be stationary. Since all conditions are met, the object 30 in the time frames tk, tk+1, tk+2, ... is regarded admissible by the processing device 14. The processing device 14 then sets up the following set of equations for the pole 30:

(t0 - tk) · (dk + ε) - f · b / v = 0

(t0 - tk+1) · (dk+1 + ε) - f · b / v = 0

(t0 - tk+2) · (dk+2 + ε) - f · b / v = 0

etc., where it is understood that at least three equations are sufficient to estimate a value of the yaw angle error, and preferably more than three equations, for example at least ten equations, are used in order to achieve a sufficient accuracy in the yaw error estimation. It can be seen that all equations have the same form, and differ in the value of the frame time t and the disparity d only.

In the above equations, the values for tk, tk+1, tk+2, ... and the corresponding disparity values dk, dk+1, dk+2, ... for the pole 30 are inserted, together with the known values for f, b and v. This gives a set of equations with two unknowns, namely the collision time t0 (where the pole is on a line connecting the camera devices 12) and the yaw error ε. The set of equations forms a non-linear least squares problem and may be easily solved for t0 and ε using a non-linear equation solver method like the Gauss-Newton algorithm.
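A compact sketch of such a solve, using a hand-rolled Gauss-Newton iteration on synthetic data (the patent does not prescribe a particular implementation; all numeric values below are assumed):

```python
# Gauss-Newton solve of (t0 - t_i)*(d_i + eps) - f*b/v = 0 for the two
# unknowns t0 and eps. Illustrative sketch with synthetic data.
def estimate_yaw_error(ts, ds, f, b, v, iters=25):
    c = f * b / v
    t0 = ts[0] + c / ds[0]   # initial guess assuming eps = 0
    eps = 0.0
    for _ in range(iters):
        # accumulate the 2x2 normal equations J^T J * delta = -J^T r
        a11 = a12 = a22 = g1 = g2 = 0.0
        for t, d in zip(ts, ds):
            r = (t0 - t) * (d + eps) - c   # per-frame residual
            j1 = d + eps                   # d(r)/d(t0)
            j2 = t0 - t                    # d(r)/d(eps)
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            g1 += j1 * r;   g2 += j2 * r
        det = a11 * a22 - a12 * a12
        t0  += (-g1 * a22 + g2 * a12) / det
        eps += (-g2 * a11 + g1 * a12) / det
    return t0, eps

# Synthetic frames: true t0 = 2.0 s, true eps = 0.5 px, f = 1200 px,
# b = 0.3 m, v = 15 m/s, five frames 50 ms apart.
ts = [0.00, 0.05, 0.10, 0.15, 0.20]
ds = [1200 * 0.3 / (15 * (2.0 - t)) - 0.5 for t in ts]
t0_est, eps_est = estimate_yaw_error(ts, ds, 1200.0, 0.3, 15.0)
```

On this noise-free synthetic data the iteration recovers t0 and ε essentially exactly; in practice a library least-squares solver (e.g. a Levenberg-Marquardt routine) would typically be used instead.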

The yaw angle error ε calculated from a single detected object, here the pole 30, could be further filtered over time to improve the accuracy. Also, a suitable average of ε values over a plurality of detected objects could be formed. The resulting value of ε has been found to be a sufficiently accurate measure of the yaw angle error, which can be advantageously used in a yaw angle error calibration during driving of the motor vehicle.

Constant ego vehicle speed is not a strict requirement for realizing the invention. For example, an alternative to the equation x = (t0 - t) · v used above would be the equation xi = α · ∫ v(t) dt, letting the ego vehicle speed v(t) be variable, where i is a frame index and α is a multiplicative error of the vehicle speed. The error of the vehicle speed can be regarded as multiplicative, since the speed can be expressed as wheel-radius = α · wheel-radius(set) and v = α · v(wheel-speed). It may also be possible to set α = 1 in the above equation.
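The variable-speed distance model can be sketched numerically; the trapezoidal integration and the sampling interface are assumptions for illustration, not details from the patent:

```python
# x_i = alpha * integral of v(t) dt, computed with the trapezoidal rule
# from sampled bus speeds. alpha is the multiplicative wheel-speed scale
# error (alpha = 1 means the bus speed is exact). Illustrative sketch.
def travelled_distance(times, speeds, alpha=1.0):
    s = 0.0
    for k in range(1, len(times)):
        s += 0.5 * (speeds[k - 1] + speeds[k]) * (times[k] - times[k - 1])
    return alpha * s
```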

As mentioned above, the invention allows estimating f · b / v. Therefore, if f and b are known, v and α can be estimated.

Also, if v, or an integrated v, i.e. ∫ v(t) dt, is known from other sources, like a satellite navigation receiver, f could be regarded as unknown and thereby estimated online. On the other hand, if f is considered known, the baseline b, which varies slightly over temperature, can be estimated.