


Title:
METHOD AND DEVICE FOR THE ESTIMATION OF CAR EGOMOTION FROM SURROUND VIEW IMAGES
Document Type and Number:
WIPO Patent Application WO/2016/131585
Kind Code:
A1
Abstract:
A method and device for determining an ego-motion of a vehicle. Respective sequences of consecutive images are obtained from a front view camera, a left side view camera, a right side view camera and a rear view camera and merged. A virtual projection of the images to a ground plane is provided using an affine projection. An optical flow is determined from the sequence of projected images, an ego-motion of the vehicle is determined from the optical flow and the ego-motion is used to predict a kinematic state of the car.

Inventors:
GUERREIRO RUI (GB)
PANAKOS ANDREAS (GB)
SILVA CARLOS (GB)
YADAV DEV (GB)
Application Number:
PCT/EP2016/050937
Publication Date:
August 25, 2016
Filing Date:
January 19, 2016
Assignee:
APPLICATION SOLUTIONS (ELECTRONICS AND VISION) LTD (GB)
International Classes:
G06T3/40; G06T7/20
Foreign References:
EP 2511137 A1 (2012-10-17)
US 2010/0220190 A1 (2010-09-02)
Other References:
ANDREA GIACHETTI ET AL: "The Use of Optical Flow for Road Navigation", IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, IEEE INC, NEW YORK, US, vol. 14, no. 1, 1 February 1998 (1998-02-01), XP011053264, ISSN: 1042-296X
NOURANI-VATANI N ET AL: "Practical visual odometry for car-like vehicles", 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION : (ICRA) ; KOBE, JAPAN, 12 - 17 MAY 2009, IEEE, PISCATAWAY, NJ, USA, 12 May 2009 (2009-05-12), pages 3551 - 3557, XP031509637, ISBN: 978-1-4244-2788-8
Attorney, Agent or Firm:
BÜCHNER, Jörg (Sieboldstraße 19, Nuernberg, DE)
Claims:
CLAIMS

1. Method for determining an ego-motion of a vehicle comprising

- recording a first sequence of consecutive images of a front view camera, a second sequence of consecutive images of a left side view camera, a third sequence of consecutive images of a right side view camera, and a fourth sequence of consecutive images of a rear view camera,

- merging the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images,

- providing a virtual projection of the images of the sequence of merged images to a ground plane using an affine projection, thereby obtaining a sequence of projected images,

- determining an optical flow based on the sequence of projected images, the optical flow comprising motion vectors of target objects in the surroundings of the vehicle,

- determining an ego-motion of the vehicle based on the optical flow,

- predicting a kinematic state of the car based on the ego-motion.

2. Method according to claim 1, wherein the determination of the ego-motion comprises

- deriving an angular velocity of the vehicle around an instantaneous center of curvature from the optical flow,

- using the derived angular velocity to derive a velocity of the vehicle.

3. Method according to claim 1 or claim 2, wherein the determination of the ego-motion comprises deriving a current position vector of a target object and a current velocity relative to the target object from a previous position of the target object, a previous velocity relative to the target object and an angular velocity with respect to a rotation around an instantaneous center of curvature.

4. Method according to one of the claims 1 to 3, wherein the determination of the ego-motion comprises

deriving an angular velocity of the vehicle around an instantaneous center of curvature from a wheel speed and a steering angle using an Ackermann steering model,

merging the determined ego-motion and the angular velocity of the vehicle in an incremental pose update.

5. Method according to one of the claims 1 to 4, wherein the determination of the ego-motion comprises

deriving motion vectors from the optical flow,

applying a RANSAC procedure to the motion vectors.

6. Method according to one of the claims 1 to 5, wherein the determination of the ego-motion comprises

deriving motion vectors from the optical flow,

deriving a vector of ego-motion from the motion vectors of the optical flow,

applying a prediction filter to the vector of ego-motion for predicting a future position of the vehicle.

7. Method according to claim 6, wherein an input to the prediction filter is derived from one or more vectors of ego-motion and from one or more motion sensor values.

8. Method according to one of the preceding claims, comprising detecting image regions that correspond to objects that are not at a ground level and masking out/disregarding the detected image regions.

9. Computer program product for executing a method according to one of the claims 1 to 8.

10. Ego-Motion detection system for a motor vehicle comprising a computation unit, the computation unit comprising a first input connection for receiving data from a front view camera, a second input connection for receiving data from a right side view camera, a third input connection for receiving data from a left side view camera, a fourth input connection for receiving data from a rear view camera, the computation unit comprising a processing unit for

- obtaining a first sequence of consecutive images from the front view camera, a second sequence of consecutive images from the left side view camera, a third sequence of consecutive images from the right side view camera, and a fourth sequence of consecutive images from the rear view camera,

- merging the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images,

- providing a virtual projection of the images of the sequence of merged images to a ground plane using an affine projection, thereby obtaining a sequence of projected images,

- determining an optical flow based on the sequence of projected images, the optical flow comprising motion vectors of target objects in the surroundings of the vehicle,

- determining an ego-motion of the vehicle based on the optical flow,

- predicting a kinematic state of the car based on the ego-motion.

11. Ego-Motion detection system according to claim 10, comprising a front view camera that is connected to the first input, a right side view camera that is connected to the second input, a left side view camera that is connected to the third input, a rear view camera that is connected to the fourth input.

12. Car with an ego-motion detection system according to claim 11, the front view camera being provided at a front side of the car, the right side view camera being provided at a right side of the car, the left side view camera being provided at a left side of the car, and the rear view camera being provided at a rear side of the car.

Description:
METHOD AND DEVICE FOR THE ESTIMATION OF CAR EGOMOTION FROM SURROUND VIEW IMAGES

The current specification relates to driver assistance systems.

Advanced driver assistance systems (ADAS) are systems developed to automate and enhance vehicle systems for safety and better driving. Many driver assistance systems use information about a car's position, orientation and motion state to assist the driver in various ways. This information may even be used to drive the vehicle autonomously.

Among others, visual odometry can be used to determine a car's position. In a system for visual odometry, cameras are used to record input images and image corrections are applied. Features are detected, the features are matched across image frames and an optical flow field is constructed, for example by using a correlation to establish a correspondence between two images, by feature extraction and correlation or by constructing an optical flow field using the Lucas-Kanade method. Odometry errors are detected, the corresponding outliers are removed and the camera motion is estimated from the optical flow, for example using a Kalman filter or by minimizing a cost function that is based on geometric properties of the features.
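By way of illustration only, and not as part of the claimed subject matter, such a visual-odometry front end could be sketched as follows, assuming the OpenCV library is available; the feature-detection parameters and the use of a 2D rigid-motion fit are assumptions chosen for this example.

```python
import cv2

def sparse_flow(prev_gray, next_gray):
    """Detect corners in the previous frame and track them into the next
    frame with pyramidal Lucas-Kanade; returns matched point pairs."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts0, None)
    ok = status.ravel() == 1
    return pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)

def estimate_frame_motion(pts0, pts1):
    """Fit a 2D rigid/similarity motion between the tracked features;
    outlying matches (odometry errors) are rejected internally by RANSAC."""
    matrix, _inliers = cv2.estimateAffinePartial2D(pts0, pts1, method=cv2.RANSAC)
    return matrix  # 2x3 matrix: rotation/scale part and translation
```

In this sketch the per-frame motion estimate plays the role of the camera-motion step described above; a Kalman filter could then be applied to the sequence of estimates.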

The US 2014/0247352 discloses a multi-camera top view vision system, which generates a stitched virtual top view image.

The following references [1] to [12] relate to the subject matter of the present specification and are hereby incorporated by reference.

[1] Reza N. Jazar, "Vehicle Dynamics: Theory and Applications", Springer, 19/03/2008.

[2] Thomas D. Gillespie, "Fundamentals of Vehicle Dynamics", Society of Automotive Engineers, 1992.

[3] Alonzo Kelly, "Essential Kinematics for Autonomous Vehicles", Robotics Institute, Carnegie Mellon University, 1994.

[4] Gideon P. Stein, Ofer Mano, Amnon Shashua, "A Robust Method for Computing Vehicle Ego-motion", IEEE Intelligent Vehicles Symposium, 2000.

[5] Joao P. Barreto, Frederick Martin, Radu Horaud, "Visual Servoing/Tracking Using Central Catadioptric Images", Int. Symposium on Experimental Robotics, Advanced Robotics Series, 2002.

[6] Alejandro J. Weinstein and Kevin L. Moore, "Pose Estimation of Ackerman Steering Vehicles for Outdoors Autonomous Navigation", Proceedings of the 2010 IEEE International Conference on Industrial Automation, Valparaiso, Chile, March 2010.

[7] Oliver Pink, Frank Moosmann, Alexander Bachmann, "Visual Features for Vehicle Localization and Ego-Motion Estimation", Proceedings of the IEEE Intelligent Vehicles Symposium, 2009.

[8] D. Cheda, D. Ponsa, A.M. Lopez, "Camera Egomotion Estimation in the ADAS Context", Annual Conference on Intelligent Transportation Systems, 2010.

[9] Gim Hee Lee, Friedrich Fraundorfer, Marc Pollefeys, "Motion Estimation for Self-Driving Cars with a Generalized Camera", CVPR, 2013.

[10] Marco Zucchelli, Jose Santos-Victor, Henrik I. Christensen, "Constrained Structure and Motion Estimation from Optical Flow", ICPR, 2002.

[11] Dan Simon, "Optimal State Estimation: Kalman, H Infinity and Nonlinear Approaches", John Wiley & Sons, 2006.

[12] P. Wayne Power, Johann A. Schoones, "Understanding Background Mixture Models for Foreground Segmentation", Proceedings Image and Vision Computing New Zealand, 2002.

The references [1], [2], [3] explain models of vehicle kinematics which can be used in an egomotion context.

Stein et al. [4] propose a single camera application where the egomotion of the vehicle is consistent with the modelled road. Image features in the two images are combined in a global probability function, which introduces a global constraint to cope with the aperture problem.

Barreto et al. [5] describe a visual control of robot motion using central catadioptric systems and present the Jacobian matrix linking the robot's joint velocities to image observations. The solution presented is treated as a least squares problem, but they also define the state vector that can be used in an Extended Kalman Filter.

Weinstein and Moore [6] study a localization scheme for Ackermann steering vehicles, to be used in outdoors autonomous navigation using a low cost GPS and inclinometer. They use an Extended Kalman Filter to estimate the pose of the vehicle and the sensor biases.

Pink et al. [7] present a method for vehicle pose estimation and motion tracking using visual features. They assume an initial vehicle pose and then track the pose in geographical coordinates over time, using image data as the only input. They track the vehicle position based on the Ackermann model.

Cheda et al. [8] study egomotion estimation from a monocular camera under the ADAS context and compare the performance of nonlinear and linear algorithms.

Lee et al. [9] present a visual ego-motion estimation algorithm for a self-driving car. They model the multi-camera system as a generalized camera and apply the nonholonomic motion constraint of the car.

Zucchelli et al. [10] provide a formulation of a constrained minimization problem for structure and motion estimation from optical flow. They also present the solution of the optimization problem by Levenberg-Marquardt and direct projection.

Dan Simon [11] proposes multiple-model estimation methods on page 301, section 10.2, in which the update phase of the Kalman filter is reformulated to weight different models.

Power and Schoones [12] describe a Gaussian mixture model (GMM) algorithm and an approximation based on expectation maximization.

According to the present specification, an egomotion of a vehicle is defined as the 3D motion of a camera relative to a fixed coordinate system of the environment, which is also known as a world coordinate system. Furthermore, egomotion also refers to a two-dimensional motion in a given plane of the three-dimensional world coordinate system. This egomotion is also referred to as "2D egomotion".

According to the present specification, the egomotion is calculated from an optical flow. The optical flow is the apparent motion of an image caused by the relative motion between a camera and the scene, wherein "scene" refers to the objects in the surroundings of the car.

A method according to the present specification uses an Ackermann model of the steering geometry to describe the vehicle motion and an incremental pose update as a framework to integrate multiple sources of vehicle pose.

The optical flow is calculated using features that are detected in an image frame of a sequence of images and then matched in a consecutive frame. This information is used to generate the optical flow field for the detected features in those two image frames, or consecutive images. The consecutive images are a projection of the three-dimensional scene into a two-dimensional plane, which is also referred to as the "viewport plane".

A model of the road may be used to simplify the estimation of the optical flow. The road forms a simple planar structure and can be represented by only three dominant parameters: the forward translation, the pitch and the yaw. However, according to the present specification, the egomotion can also be estimated with sufficient accuracy without the use of a road model.

According to a second approach, a Horn-Schunck method is used to estimate the optical flow. A global constraint is introduced to solve the aperture problem and a road model is fitted to the flow fields to remove outliers.

According to the present specification, a four camera setup of a surround view system is used to generate a surround view: the images of the four cameras are merged into a single projection to a ground plane, which represents the street level and which is also referred to as "top down view". A 2D egomotion is computed from an affine projection of the top down view. Flow field outliers, such as measurement errors or vectors of moving objects, are filtered out using a suitable procedure, such as RANSAC.

The projected view, which is an affine projection of the surround view to the ground plane, is interpreted using a prior calibration, which provides depth and scale information. Alternatively or in addition, the structure is reconstructed using structure-from-motion algorithms, which give an explicit reconstruction of the observed scenes and thereby provide an estimate of object distances.

According to one embodiment, the motion is filtered in order to obtain a consistent position over time. The tracking process estimates the real position of the vehicle with a consistent movement model. According to the present specification, an Ackermann steering model is used as movement model to represent a vehicle with an Ackermann steering geometry.
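As a purely illustrative sketch, not part of the claimed subject matter, the merging of the four camera images into a single top-down ground-plane view could look as follows, assuming that per-camera image-to-ground homographies are available from a prior calibration; the names and output size are example assumptions.

```python
import cv2
import numpy as np

def merge_top_down(frames, homographies, size=(800, 800)):
    """Warp each camera frame onto the ground plane with its calibrated
    homography and average the overlapping regions into one top-down view.

    frames       : dict camera name -> BGR image
    homographies : dict camera name -> 3x3 image-to-ground homography
                   (assumed to come from a prior calibration)
    """
    acc = np.zeros((size[1], size[0], 3), np.float32)
    weight = np.zeros((size[1], size[0], 1), np.float32)
    for name, img in frames.items():
        warped = cv2.warpPerspective(img, homographies[name], size)
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        acc += warped.astype(np.float32) * mask
        weight += mask
    return (acc / np.maximum(weight, 1.0)).astype(np.uint8)
```

Averaging overlapping pixels is only one possible blending choice; brightness or feature matching, as described below, could be used instead.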

According to a further embodiment, the Ackermann model is combined with multiple odometric measurements, such as GPS measurements, vehicle sensors, etc.

In a first aspect, the present specification discloses a method for determining an ego-motion of a motor vehicle, such as a passenger car, a utility vehicle or a minibus.

A front view camera records a first sequence of consecutive images, a left side view camera records a second sequence of consecutive images, a right side view camera records a third sequence of consecutive images, and a rear view camera records a fourth sequence of consecutive images. The first, second, third and fourth image sequences each comprise at least two consecutive images.

The image sequences are transferred to a computational unit of the motor vehicle. The computational unit merges the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images. The merged images correspond to a surround view or 360° view of the vehicle's surroundings at a given time.

Preferably, the respective images and the view fields of adjacent cameras overlap at least partially. By way of example, the images can be merged by matching brightness values on the basis of the individual pixels, correlating the brightness of the pixels. According to a further embodiment, higher level features such as lines or edges or regions of high contrast or brightness gradient of images from adjacent cameras are matched to each other. In a simple embodiment, the images are merged according to a field of view, position and orientation of the cameras.

The images of the sequence of merged images, or patches thereof, are projected to a ground plane using an affine projection or transformation, thereby providing a sequence of projected images. Furthermore, a two-dimensional optical flow is determined based on the sequence of projected images. The optical flow comprises motion vectors of target objects in the surroundings of the vehicle. According to one particular embodiment, an optical flow at a given time is provided by comparing two projected images, which are consecutive in time.
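For illustration only, the two-dimensional optical flow between two consecutive projected images could be computed with a dense flow method such as Farnebäck's, as sketched below; the parameter values are assumptions, not values prescribed by the specification.

```python
import cv2

def top_down_flow(proj_prev, proj_next):
    """Dense optical flow between two consecutive ground-plane projections
    (grayscale images); returns an HxWx2 array of per-pixel motion vectors."""
    return cv2.calcOpticalFlowFarneback(proj_prev, proj_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
```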

An egomotion of the vehicle is determined based on the optical flow. In particular, it is derived by comparing projected images of a first and of a second time and by determining the amount by which a pixel or a group of pixels corresponding to an object in the surroundings has moved. According to the present application, the ego-motion can be derived from the individual camera images of the surround view system or from the merged image of all cameras of the surround view system.

A kinematic state of the vehicle, such as a position, a speed or a movement, is determined based on the ego-motion of the vehicle. The kinematic state is determined, by way of example, with respect to a previous position of the car, to a fixed coordinate system, to an object in the surroundings, or to the instantaneous centre of curvature.

According to one embodiment, the derivation of the ego-motion comprises deriving an angular velocity of the vehicle around an instantaneous centre of curvature from the optical flow and using the derived angular velocity to derive a velocity of the vehicle, and in particular to derive a velocity of a centre of gravity of the vehicle in a plane that is parallel to a ground plane using an Ackermann steering model.

According to one particular embodiment, the determination of the ego-motion comprises deriving a current position vector of a target object on a ground plane and a current velocity relative to the target object using a previous position of the target object, a previous velocity relative to the target object and an angular velocity with respect to a rotation around an instantaneous centre of curvature with respect to a yaw motion of the vehicle.

According to a further embodiment, the Ackermann steering model is used to derive an angular velocity of a yaw motion of the vehicle around an instantaneous centre of curvature from a wheel speed and a steering angle. In particular, the obtained angular speed can be merged with the derived egomotion in an incremental pose update and it can be used as a further input to a prediction filter, such as a Kalman filter. Alternatively, other filters, such as a recursive double least squares estimator or a double exponential smoothing filter or other smoothing filters, such as various types of low pass filters for digital signal processing, may be used as well.

According to embodiments of the present specification, kinematic states of the vehicle, which are obtained from different sources, such as the derived vehicle egomotion, vehicle sensors and a GPS system, are used as an input to the same prediction filter, or they are used as inputs to different prediction filters and the resulting outputs of the different prediction filters are combined to form an estimate of the kinematic state of the vehicle.
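A minimal, illustrative sketch of such a prediction filter is given below: a one-dimensional Kalman filter that fuses a vision-derived yaw rate with an Ackermann (wheel-speed/steering-angle) yaw rate. The noise parameters are assumed example values and do not correspond to a specific vehicle.

```python
class YawRateKalman:
    """Minimal 1-D Kalman filter for the yaw rate omega.
    Process model: omega is approximately constant between frames."""

    def __init__(self, q=1e-3, r_vision=2e-2, r_ackermann=1e-2):
        self.omega = 0.0   # state estimate [rad/s]
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (assumed value)
        self.r = {"vision": r_vision, "ackermann": r_ackermann}

    def predict(self):
        self.p += self.q
        return self.omega

    def update(self, z, source):
        """Fuse one measurement z from 'vision' or 'ackermann'."""
        r = self.r[source]
        k = self.p / (self.p + r)          # Kalman gain
        self.omega += k * (z - self.omega)
        self.p *= (1.0 - k)
        return self.omega

# usage: one filter, several measurement sources per time step
kf = YawRateKalman()
kf.predict()
kf.update(0.11, "vision")      # yaw rate derived from the optical flow
kf.update(0.10, "ackermann")   # yaw rate from wheel speed / steering angle
```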

According to a further embodiment, the different sources of vehicle motion can be merged or combined in a probabilistic framework. A likelihood of being correct is determined for each source given a previous measurement. The pose is then updated with the most correct source. In one embodiment, the different sources of vehicle motion are mixed in a Gaussian mixture model.
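The probabilistic combination can be illustrated by the following sketch, which scores each motion source with a Gaussian likelihood around the value predicted from the previous step and selects the most likely source; the variances are illustrative assumptions.

```python
import math

def most_likely_source(predicted, measurements, variances):
    """Pick the motion source whose measurement best matches the prediction.

    predicted    : value predicted from the previous step (e.g. yaw rate)
    measurements : dict source name -> measured value
    variances    : dict source name -> assumed measurement variance
    """
    def likelihood(z, var):
        return math.exp(-0.5 * (z - predicted) ** 2 / var) / math.sqrt(2 * math.pi * var)

    return max(measurements, key=lambda s: likelihood(measurements[s], variances[s]))

best = most_likely_source(
    predicted=0.10,
    measurements={"vision": 0.11, "wheel_odometry": 0.09, "gps": 0.14},
    variances={"vision": 4e-4, "wheel_odometry": 1e-4, "gps": 1e-2})
```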

According to one embodiment, deriving the egomotion from the optical flow comprises applying a random sample consensus (RANSAC) procedure to motion vectors, which may be motion vectors of the optical flow or ego-motion vectors. The RANSAC procedure may be applied before and/or after applying a prediction filter, such as a Kalman filter. According to the RANSAC procedure, a model is fitted by regression to a subset of the data and the quality of the model is evaluated by measuring the data inliers to the model. The process is repeated until the solution has a pre-determined statistical significance.

By way of example, a sample subset containing minimal data items is randomly selected from the input dataset in a first step. A fitting model and the corresponding model parameters are computed using only the elements of this sample subset. The size of the sample subset is the smallest sufficient to determine the model parameters. In a second step, the algorithm checks which elements of the entire dataset are consistent with the model instantiated by the estimated model parameters obtained from the first step. A data element will be considered as an outlier if it does not fit the fitting model instantiated by the set of estimated model parameters within some error threshold that defines the maximum deviation attributable to the effect of noise.

According to one embodiment, the determination of the ego-motion comprises deriving motion vectors of individual target objects from the optical flow, and deriving a vector of ego-motion, also referred to as an average motion vector, from the motion vectors of the optical flow. A prediction filter such as a Kalman filter is applied to the vector of ego-motion for predicting a future vector of ego-motion or a future position of the vehicle for tracking the vehicle's position. In a particular embodiment, an input to the prediction filter is derived from one or more vectors of ego-motion and motion sensor values, such as wheel speed sensor, acceleration sensor and GPS system output. According to a further embodiment, image regions are detected that correspond to objects that are not located at a ground level and the detected image regions are disregarded or masked out.

According to a further aspect, the present specification discloses a computer program product, such as an executable file in a persistent memory, such as a memory stick, a hard-disk or a DVD, or in volatile memory, such as a computer RAM. The executable file or executable code causes a processing unit to execute one of the preceding methods when it is loaded into a program memory of a processor.

According to a further aspect, the present specification discloses an Ego-motion detection system for a motor vehicle.

The Ego-motion detection system comprises a computation unit, the computation unit having a first input connection for receiving data from a front view camera, a second input connection for receiving data from a right side view camera, a third input connection for receiving data from a left side view camera, and a fourth input connection for receiving data from a rear view camera.

The four input connections may also be realized by a single input connection, for example if image data from the respective cameras is transmitted in alternating time slices or alternating data chunks. In particular, the camera data may be transmitted via cables of a data bus.

The computation unit comprises a processing unit, such as a microprocessor with a computer memory, which is operative to obtain a first sequence of consecutive images from the front view camera, a second sequence of consecutive images from the left side view camera, a third sequence of consecutive images from the right side view camera, and a fourth sequence of consecutive images from the rear view camera via the respective input connections.

Furthermore, the camera or the cameras may comprise a camera processing unit for basic image processing. The camera processing unit is different from the main processing unit that does the egomotion calculation.

Furthermore, the processing unit is operative to merge the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images, and to provide a virtual projection of the images of the sequence of merged images, or patches thereof, to a ground plane using an affine projection or transformation, thereby obtaining a sequence of projected images.

Herein, a virtual projection refers to the operation of mapping the content of a first memory area to the content of a second memory area according to a transformation algorithm of the projection.

Moreover, the processing unit is operative to determine an optical flow based on the sequence of projected images, to determine an ego-motion of the vehicle based on the optical flow and to predict a kinematic state of the car based on the ego-motion. The optical flow comprises motion vectors of target objects in the surroundings of the vehicle.

Furthermore, the current specification discloses the aforementioned ego-motion detection system with a front view camera that is connected to the first input, a right side view camera that is connected to the second input, a left side view camera that is connected to the third input, a rear view camera that is connected to the fourth input.

Moreover, the current specification discloses a car or a motor vehicle with the aforementioned ego-motion detection system, wherein the front view camera is provided at a front side of the car, the right side view camera is provided at a right side of the car, the left side view camera is provided at a left side of the car, and the rear view camera is provided at a rear side of the car.

The subject matter of the present specification is further explained with respect to the following Figures in which

Fig. 1 shows a car with a surround view system,

Fig. 2 illustrates a car motion of the car of Fig. 1 around an instantaneous centre of rotation,

Fig. 3 illustrates a projection to a ground plane of an image point recorded with the surround view system of Fig. 1,

Fig. 4 illustrates in further detail the ground plane projection of Fig. 3, and

Fig. 5 shows a procedure for deriving an egomotion of the car.

In the following description, details are provided to describe the embodiments of the present specification. It shall be apparent to one skilled in the art, however, that the embodiments may be practised without such details.

Figure 1 shows a car 10 with a surround view system 11. The surround view system 11 comprises a front view camera 12, a right side view camera 13, a left side view camera 14 and a rear view camera 15. The cameras 12 - 15 are connected to a CPU of a controller, which is not shown in Fig. 1. The controller is connected to further sensors and units, such as a velocity sensor, a steering angle sensor, a GPS unit, and acceleration and orientation sensors.

Figure 2 illustrates a car motion of the car 10. A wheel base B of the car and a wheel track L are indicated. The car 10 is designed according to an Ackermann steering geometry in which an orientation of the steerable front wheels is adjusted such that all four wheels of a vehicle are oriented in tangential direction to a circle of instant rotation. An instantaneous centre of curvature "ICC" is in register with the rear axis of the car 10 at a distance R, wherein R is the radius of the car's instant rotation with respect to the yaw movement.

A two-dimensional vehicle coordinate system is indicated, which is fixed to a reference point of the car and aligned along a longitudinal and a lateral axis of the car. A location of the instantaneous centre of curvature relative to the vehicle coordinate system is indicated by a vector P_ICC.

In an Ackermann steering geometry according to Fig. 2, an angle between an inner rear wheel, the instant centre of curvature and an inner front wheel is equal to a steering angle α of the inner front wheel. Herein, "inner wheel" refers to the respective wheel that is closer to the centre of curvature. A motion of the inner front wheel relative to a ground plane is indicated by a letter v.

Fig. 3 shows a projection of an image point to a ground plane 16. An angle of inclination Θ relative to the vertical can be estimated from a location of the image point on the image sensor of the right side view camera 13. If the image point corresponds to a feature of the road, the location of the corresponding object point is the projection of the image point onto the ground plane. In the example of Fig. 3, the camera 13 has an elevation H above the ground plane. Consequently, the corresponding object point is located at a distance H*cos(Θ) from the right side of the car 10.

According to one embodiment, the vehicle is tracked using a constant acceleration model and a one-step procedure, wherein the values of a car position X_k and a car velocity Ẋ_k = d/dt(X_k) at time k·Δt are predicted using the respective values of the position and velocity at the earlier time (k-1)·Δt according to the equations

X_k = X_{k-1} + Ẋ_{k-1} · Δt    (1)

Ẋ_k = ω × X_{k-1} + Ẍ_{k-1} · Δt    (2)

or

X_k = X_{k-1} + Ẋ_{k-1}    (1a)

Ẋ_k = ω × X_{k-1} + Ẍ_{k-1}    (2a)

for time units in which Δt = 1. Herein, X_k, X_{k-1} refer to positions of the car relative to the vehicle coordinate system of Fig. 2, which is fixed to the car 10, wherein the positions X_k, X_{k-1} of the car 10 are evaluated at times k·Δt, (k-1)·Δt, respectively, and wherein a position of the vehicle coordinate system is evaluated at time (k-1)·Δt.

The car velocity at the reference point can be derived from the location of the reference point relative to the instantaneous centre of curvature and the current angular velocity according to

V_car = −ω × P_ICC    (3)

wherein ω is a vector of instantaneous rotation and P_ICC is the position of the instantaneous centre of curvature relative to the vehicle coordinate system. The relationship according to equation (3) is used in equations (2), (2a). In equations (1) - (2a) the vector arrows have been omitted for easier reading.

A vehicle position X_k' relative to a fixed reference frame, also known as "world coordinate system", is derived from the vector X_k and a location R of the vehicle coordinate system relative to the fixed reference frame. By way of example, the movement of the vehicle coordinate system can be derived using GPS and/or other sensors, such as a wheel speed sensor, a steering angle sensor, acceleration and orientation sensors.
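For illustration, equations (1) and (3) can be evaluated in the planar case as sketched below, where the cross product with the scalar yaw rate ω reduces to ω·(−y, x); the numerical values are arbitrary example inputs, not values taken from the specification.

```python
import numpy as np

def cross2d(omega, vec):
    """Planar cross product: scalar yaw rate omega x 2D vector."""
    return omega * np.array([-vec[1], vec[0]])

def predict_pose(x_prev, v_prev, omega, p_icc, dt):
    """One-step prediction following equations (1) and (3):
    X_k = X_{k-1} + Xdot_{k-1} * dt  and  V_car = -omega x P_ICC."""
    x_next = x_prev + v_prev * dt      # equation (1)
    v_car = -cross2d(omega, p_icc)     # equation (3)
    return x_next, v_car

# illustrative values: 0.1 s step, 0.2 rad/s yaw rate, ICC 8 m to the left
x1, v_car = predict_pose(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                         omega=0.2, p_icc=np.array([0.0, 8.0]), dt=0.1)
```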

According to a further embodiment, the accuracy is improved by incorporating a time dependent acceleration according to

Ẍ_k = ζ × X_{k-1} + ω × (ω × X_{k-1}) + Ẍ_{k-1}    (4)

wherein ζ is related or proportional to the time derivative of the angular rotation ω. The first term on the right hand side of equation (4) is also referred to as "Euler acceleration" and the second term is also referred to as "Coriolis acceleration". Under the assumption that the car stays on track, the centrifugal acceleration is compensated by the car's tyres and does not contribute to the vehicle motion.

In general, the angular velocity ω is time dependent. According to one embodiment, the angular velocity ω at time (k-1)·Δt is used in a computation according to equations (2), (2a) or (3).

According to the present specification, a mean velocity v between times (k-2)·Δt and (k-1)·Δt can be derived from the comparison of two subsequent projections of camera images. In a first approximation, the mean velocity is used as the instant velocity at time (k-1)·Δt.

According to one embodiment, the angular velocity ω is derived from the steering angles of the front wheels and a rotation speed of a front wheel using an Ackermann steering model. The Ackermann steering model gives a good approximation for a car steering with Ackermann geometry, especially for slow velocities when there is little or no slip between the tires and the road. The steering angle of the front wheels can in turn be derived from an angular position of the steering column and the known lateral distance L between the front wheels. According to a further embodiment, the ego-motion, which is derived from the image sequences of the vehicle cameras, is used to derive the angular velocity ω.

With reference to Fig. 2, a radius of curvature R_2 with respect to the instantaneous curvature centre and an inner front wheel can be derived as R_2 = B/sin(α), wherein α is a steering angle of the inner front wheel and B is the wheel base of the car. If the inner front wheel moves with a velocity v, which can be derived from a rotation speed of the inner front wheel and the wheel's diameter, the angular velocity of the instantaneous rotation of the car in a horizontal plane, also known as "yaw", is ω = v/R_2 = v·sin(α)/B.
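For illustration, the relation ω = v·sin(α)/B can be evaluated directly; the numerical values below are arbitrary example inputs.

```python
import math

def ackermann_yaw_rate(v_wheel, steering_angle, wheel_base):
    """Yaw rate omega = v * sin(alpha) / B from the Ackermann steering model.
    v_wheel        : speed of the inner front wheel [m/s]
    steering_angle : steering angle alpha of the inner front wheel [rad]
    wheel_base     : wheel base B [m]
    """
    return v_wheel * math.sin(steering_angle) / wheel_base

# example: 5 m/s wheel speed, 10 degrees steering, 2.7 m wheel base -> ~0.32 rad/s
omega = ackermann_yaw_rate(5.0, math.radians(10.0), 2.7)
```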

For better accuracy, the instantaneous position can be computed using input from further odometric sensors, such as a GPS system, speed and acceleration sensors of the vehicle or other kinds of odometric sensors. In particular, GPS position values can be used to correct a drift from the true position.

According to one embodiment, the egomotion is estimated from an affine projection or transformation to a ground plane, wherein the images of the cameras of the surround view system are merged into the projection to the ground plane. Figs. 3 and 4 show a projection to a ground plane 16.

Under the assumption that an image point corresponds to an object point of an object on the ground plane, the image point can be projected to a location of the corresponding object point on the ground plane. An angle Θ of incidence is derived from a location of the image point on the camera sensor. A location Y of the projection is then derived using the height H of the camera sensor above street level as Y = H * cos(Θ).

Fig. 4 shows an isometric view of the affine projection of Fig. 3. In Fig. 4, a point in a viewport plane 17 is denoted by p = (u, v) and a corresponding point in the ground plane 16 is denoted by P = (X, Y). A distance between the viewport plane 17 and a projection center C is denoted by the letter "f".

According to further embodiments, the camera image is evaluated and the observed scene is reconstructed. In one embodiment, a sidewalk is detected and its height estimated. According to another embodiment, stationary objects, such as a lamp post or a tree, are detected and their orientation relative to the ground plane is estimated.

Objects which are not located at street level and/or which have a proper motion can distort the optic flow and lead to inaccuracies in the derived egomotion. According to one embodiment, the optical flow vectors resulting from such objects are filtered out using a RANSAC (random sample consensus) procedure in which outliers are suppressed. According to further embodiments, a road border is recognized using edge recognition and a digital map, which is stored in a computer memory of the car 10.
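A minimal sketch of such a RANSAC-based suppression of flow-field outliers is given below, under the simplifying assumption that the ego-motion on the ground plane is modelled as a single dominant translation per frame; thresholds and iteration counts are illustrative.

```python
import numpy as np

def ransac_ego_translation(flow_vectors, n_iter=100, threshold=0.2, rng=None):
    """Estimate the dominant ground-plane translation from flow vectors,
    rejecting vectors of moving objects or measurement errors as outliers.

    flow_vectors : (N, 2) array of per-feature motion vectors
    returns      : (translation estimate, boolean inlier mask)
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(flow_vectors), dtype=bool)
    for _ in range(n_iter):
        # minimal sample: one vector suffices to hypothesise a pure translation
        candidate = flow_vectors[rng.integers(len(flow_vectors))]
        inliers = np.linalg.norm(flow_vectors - candidate, axis=1) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on all inliers of the best hypothesis
    return flow_vectors[best_inliers].mean(axis=0), best_inliers
```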

According to a further embodiment, roll and pitch motions of the car are determined, for example by using acceleration and/or orientation sensors of the car, and the ego-motion vectors are corrected by subtracting or by compensating the roll and pitch motions.

According to further modifications, the derived ego-motion is used for a lane-keeping application or for other electronic stabilization applications.

Fig. 5 shows, by way of example, a procedure for obtaining an egomotion. In a step 30, camera images are acquired from the cameras 12 - 15. The camera images are combined into a combined image in a step 31. In a step 32, an image area is selected for the determination of ego-motion. For example, image areas which correspond to objects outside a street zone, such as buildings and other installations, may be clipped. In a step 33, the image points are projected to a ground surface, for example by applying an affine transformation or a perspective projection.

In a step 34, corresponding image points are identified in consecutive images. In a step 35, optical flow vectors are derived by comparing the locations of the corresponding image points, for example by computing the difference vector between the position vectors of the corresponding locations. In a step 36, a filter procedure is applied, such as a RANSAC procedure or other elimination of outliers and interpolation, or by applying a Kalman filter. In particular, the filtering may involve storing image values, such as image point brightness values, of a given time window in computer memory and computing an average of the image values. In a step 37, an egomotion vector of the car is derived from the optical flow.

The particular sequence of the steps of Fig. 5 is only provided by way of example. For example, the images of the cameras may also be combined after carrying out the projection to ground level.

Although the above description contains much specificity, this should not be construed as limiting the scope of the embodiments but merely as providing an illustration of the foreseeable embodiments. Especially the above stated advantages of the embodiments should not be construed as limiting the scope of the embodiments but merely as explaining possible achievements if the described embodiments are put into practice. Thus, the scope of the embodiments should be determined by the claims and their equivalents, rather than by the examples given.

Reference

10 car

11 surround view system

12 front view camera

13 right side view camera

14 left side view camera

15 rear view camera

16 ground plane

30 - 37 method steps