Title:
METHOD AND DEVICE FOR PROVIDING A VISUALIZATION OF A VEHICLE, AND VEHICLE
Document Type and Number:
WIPO Patent Application WO/2021/073827
Kind Code:
A1
Abstract:
The invention provides a method for providing a visualization of a vehicle, the method having the steps: receiving, for each vehicle camera of a plurality of vehicle cameras of the vehicle, current values of camera extrinsic parameters of said vehicle camera; receiving vehicle suspension data relating to a suspension of wheels of the vehicle; and visualizing the vehicle using a predefined model of the vehicle, wherein the vehicle is visualized on a ground surface; wherein the ground surface is modelled to contact the wheels of the vehicle, based on the current values of the camera extrinsic parameters of the vehicle cameras and based on the suspension data.

Inventors:
PANAKOS ANDREAS - C/O CONTI TEMIC MICROELECTRONIC GMBH (DE)
MEYERS MORITZ - C/O CONTI TEMIC MICROELECTRONIC GMBH (DE)
FRIEBE MARKUS - C/O CONTI TEMIC MICROELECTRONIC GMBH (DE)
Application Number:
PCT/EP2020/075877
Publication Date:
April 22, 2021
Filing Date:
September 16, 2020
Assignee:
CONTINENTAL AUTOMOTIVE GMBH (DE)
International Classes:
G06T17/00; B60W40/00; G06T15/20
Domestic Patent References:
WO2016162226A12016-10-13
Foreign References:
EP2620917A12013-07-31
DE102014221990A12016-05-04
Attorney, Agent or Firm:
BOBBERT, Christiana (DE)
Claims:
CLAIMS

1. A method for providing a visualization of a vehicle (3), the method having the steps: receiving, for each vehicle camera (22-2n) of a plurality of vehicle cameras (22-2n) of the vehicle (3), current values of camera extrinsic parameters of said vehicle camera (22-2n); receiving vehicle suspension data relating to a suspension of wheels (33) of the vehicle (3); and visualizing the vehicle (3) using a predefined model of the vehicle (3), wherein the vehicle (3) is visualized on a ground surface; wherein the ground surface is modelled to contact the wheels (33) of the vehicle (3), based on the current values of the camera extrinsic parameters of the vehicle cameras (22-2n) and based on the suspension data.

2. The method according to claim 1, further comprising the steps of: computing, for each vehicle camera (22-2n), a difference between the current values of the camera extrinsic parameters of said vehicle camera (22-2n) and predefined initial values of the camera extrinsic parameters of said vehicle camera (22-2n); and computing a current three-dimensional posture of the vehicle (3) based on said calculated differences between the current values of the camera extrinsic parameters of the vehicle cameras (22-2n) and the predefined initial values of the camera extrinsic parameters of the vehicle cameras (22-2n); wherein the ground surface is modelled based on the computed current three-dimensional posture of the vehicle (3).

3. The method according to claim 2, wherein the three-dimensional posture of the vehicle (3) is computed in homogeneous coordinates, using affine transformations, and wherein said affine transformations comprise rotations related to a rotation of the vehicle (3) and translations related to a translation of the vehicle (3).

4. The method according to claim 3, wherein said three-dimensional posture of said vehicle (3) is computed by applying multivariate interpolations to the current values of the camera extrinsic parameters.

5. The method according to claim 4, wherein the multivariate interpolations comprise at least one of a bilinear interpolation and a bicubic interpolation.

6. The method according to any of the previous claims, further comprising the step of: computing, for each wheel (33), a displacement based on the vehicle suspension data; wherein the ground surface is modelled based on the computed displacements of the wheels (33) of the vehicle (3).

7. The method according to claim 6, wherein the vehicle suspension data comprises, for each wheel (33), information regarding a current suspension height; and wherein, for each wheel (33), the displacement is computed based on a difference between the current suspension height and a predefined initial suspension height.

8. The method according to any of the previous claims 2 to 7, wherein a position of each wheel (33) of the vehicle (3) is computed based on the three-dimensional posture of said vehicle (3) and/or based on the computed displacement of said wheel (33) of the vehicle (3).

9. The method according to claim 8, wherein the ground surface is modelled using a multivariate interpolation using the computed positions of the wheels (33).

10. The method according to claim 9, wherein the multivariate interpolations using the computed positions of the wheels (33) comprise at least one of a bilinear interpolation and a bicubic interpolation.

11. A device (1) for providing a visualization of a vehicle (3), comprising: an interface (11) for receiving, for each vehicle camera (22-2n) of a plurality of vehicle cameras (22-2n) of the vehicle (3), current values of camera extrinsic parameters of said vehicle camera (22-2n), and for receiving vehicle suspension data relating to a suspension of wheels (33) of the vehicle (3); and a computation unit (12) adapted to compute a visualization of the vehicle (3) using a predefined model of the vehicle (3), wherein the vehicle (3) is visualized on a ground surface; wherein the computation unit (12) is adapted to model the ground surface so as to contact the wheels (33) of the vehicle (3), based on the current values of the camera extrinsic parameters of the vehicle cameras (22-2n) and based on the suspension data.

12. The device (1) according to claim 11, further comprising a display (13) for outputting the computed visualization of the vehicle (3).

13. A vehicle (3) comprising a plurality of vehicle cameras (22-2n); and a device (1) according to one of claims 11 or 12.

14. The vehicle (3) according to claim 13, further comprising at least one sensor (21) adapted to

- measure the current camera extrinsic parameters of the vehicle cameras (22-2n) and the suspension data, and

- provide the measured current camera extrinsic parameters of the vehicle cameras (22-2n) and the suspension data to the device (1).

15. The vehicle (3) according to claim 13 or 14, comprising at least four vehicle cameras (22-2n) arranged around the vehicle (3).

Description:
METHOD AND DEVICE FOR PROVIDING A VISUALIZATION OF A VEHICLE,

AND VEHICLE

The invention relates to a method for providing a visualization of a vehicle, to a device for providing a visualization of a vehicle, and to a vehicle.

State of the Art

Modern vehicles can be equipped with surround-view systems for providing a visualization of a surrounding of the vehicle based on camera data provided by vehicle cameras of the vehicle. To provide a more realistic appearance, the vehicle itself may be visualized. The visualization of the vehicle may comprise the animation of multiple features such as wheels, a steering wheel, front and rear lights, doors, a hood of the vehicle, and the like.

A limitation for animations may originate from a predefined surface structure, i.e. a mesh, which represents the ground below the vehicle. For example, if the ground is modelled to be horizontally planar, there is no way to reflect the true structure of the ground surface in real time.

In view of the above, it is therefore an object of the present invention to provide a more realistic visualization of the vehicle.

Summary of the Invention

In accordance with the invention, a method for providing a visualization of a vehicle as recited in claim 1 and a device for providing a visualization of a vehicle as recited in claim 11 are provided. The invention further provides a vehicle as recited in claim 13. Various preferred features of the invention are recited in the dependent claims.

According to a first aspect, therefore, the invention provides a method for providing a visualization of a vehicle, wherein, for each vehicle camera of a plurality of vehicle cameras of the vehicle, current values of camera extrinsic parameters of said vehicle camera are received. Vehicle suspension data relating to a suspension of wheels of the vehicle is received. The vehicle is visualized using a predefined model of the vehicle. The vehicle is visualized on a ground surface. The ground surface is modelled to contact the wheels of the vehicle, based on the current values of the camera extrinsic parameters of the vehicle cameras and based on the suspension data.

According to a second aspect, the invention provides a device for providing a visualization of a vehicle, comprising an interface and a computation unit. The interface is adapted to receive, for each vehicle camera of a plurality of vehicle cameras of the vehicle, current values of camera extrinsic parameters of said vehicle camera, and to receive vehicle suspension data relating to a suspension of wheels of the vehicle. The computation unit computes a visualization of a vehicle using a predefined model of the vehicle, wherein the vehicle is visualized on a ground surface. The computation unit models the ground surface so as to contact the wheels of the vehicle, based on the current values of the camera extrinsic parameters of the vehicle cameras and based on the suspension data.

According to a third aspect, the invention provides a vehicle comprising a plurality of vehicle cameras and a device for providing a visualization of a vehicle according to the invention.

The invention provides a realistic simulation of the vehicle, including vertical motion of the vehicle on non-planar ground surfaces. Accordingly, the invention can provide a better visualization of the vehicle, leading to an improved human-machine interface.

By having a more realistic visualization of the vehicle at hand, the driver can recognize obstacles and uneven road structures and can control the vehicle accordingly. Moreover, the visualization model may be provided as input to a driver assistance system which can control vehicle functions of the vehicle based on the visualization. For example, the driver assistance system may automatically or semi-automatically accelerate, decelerate or steer the vehicle.

According to the invention, the camera extrinsic parameters may be provided in matrix form as follows:

| R  T |
| 0  1 |

Herein, R refers to a 3x3 rotation matrix and T to a 3x1 translation vector. The camera extrinsic parameters refer to coordinate system transformations from three-dimensional world coordinates to three-dimensional camera coordinates.

The camera extrinsic parameters define the position of the center of the vehicle camera and the heading of the vehicle camera in world coordinates. The translation vector T provides the position of the origin of the world coordinate system expressed in terms of the camera coordinate system.
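By way of illustration only, the assembly of such a 4x4 extrinsic matrix and its use for a world-to-camera transformation can be sketched as follows. The function names are hypothetical and not part of the application; the sketch assumes NumPy conventions:

```python
import numpy as np

def extrinsic_matrix(R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Assemble the 4x4 homogeneous extrinsic matrix [R T; 0 1]
    from a 3x3 rotation matrix R and a 3x1 translation vector T."""
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = T.reshape(3)
    return E

def world_to_camera(E: np.ndarray, p_world: np.ndarray) -> np.ndarray:
    """Transform a three-dimensional world point into camera coordinates."""
    p_h = np.append(p_world, 1.0)   # homogeneous coordinates
    return (E @ p_h)[:3]
```

With R the identity and T = (1, 2, 3), the world origin maps to (1, 2, 3) in camera coordinates, matching the description of T above.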

According to an embodiment of the method for providing a visualization of the vehicle, for each vehicle camera, a difference between the current values of the camera extrinsic parameters of said vehicle camera and predefined initial values of the camera extrinsic parameters of said vehicle camera is computed. A current three-dimensional posture of the vehicle is computed based on said calculated differences between the current values of the camera extrinsic parameters of the vehicle cameras and the predefined initial values of the camera extrinsic parameters of the vehicle cameras. The ground surface is modelled based on the computed current three-dimensional posture of the vehicle.

According to an embodiment of the method for providing a visualization of the vehicle, the three-dimensional posture of the vehicle is computed in homogeneous coordinates, using affine transformations. Said affine transformations comprise rotations related to a rotation of the vehicle and translations related to a translation of the vehicle.

According to an embodiment of the method for providing a visualization of the vehicle, said three-dimensional posture of said vehicle is computed by applying multivariate interpolations to the current values of the camera extrinsic parameters.

According to an embodiment of the method for providing a visualization of the vehicle, the multivariate interpolations comprise at least one of a bilinear interpolation and a bicubic interpolation.

According to an embodiment of the method for providing a visualization of the vehicle, for each wheel, a displacement is computed based on the vehicle suspension data. The ground surface is modelled based on the computed displacements of the wheels of the vehicle.

According to an embodiment of the method for providing a visualization of the vehicle, the vehicle suspension data comprises, for each wheel, information regarding a current suspension height. For each wheel, the displacement is computed based on a difference between the current suspension height and a predefined initial suspension height.

According to an embodiment of the method for providing a visualization of the vehicle, a position of each wheel of the vehicle is computed based on the three-dimensional posture of said vehicle and/or based on the computed displacement of said wheel of the vehicle.
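The per-wheel displacement described above is simply the difference between the current and the predefined initial suspension height. A minimal illustrative sketch (hypothetical function and wheel labels, not part of the application):

```python
def wheel_displacements(current_heights: dict, initial_heights: dict) -> dict:
    """Per-wheel vertical displacement, computed as the difference
    between the current and the predefined initial suspension height."""
    return {wheel: current_heights[wheel] - initial_heights[wheel]
            for wheel in current_heights}
```

For example, a wheel whose suspension height dropped from 0.32 m to 0.30 m yields a displacement of -0.02 m.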

According to an embodiment of the method for providing a visualization of the vehicle, the ground surface is modelled using a multivariate interpolation using the computed positions of the wheels.

According to an embodiment of the method for providing a visualization of the vehicle, the multivariate interpolations using the computed positions of the wheels comprise at least one of a bilinear interpolation and a bicubic interpolation.

According to an embodiment of the method for providing a visualization of the vehicle, the visualization may comprise a bowl-view-type visualization.

According to an embodiment, the device for providing a visualization of the vehicle further comprises a display for outputting the computed visualization of the vehicle.

According to an embodiment, the vehicle further comprises at least one sensor adapted to measure the current camera extrinsic parameters of the vehicle cameras and the suspension data, and to provide the measured current camera extrinsic parameters of the vehicle cameras and the suspension data to the device.

According to an embodiment, the vehicle comprises at least four vehicle cameras arranged around the vehicle.

Brief description of the drawings

For a more complete understanding of the invention and the advantages thereof, exemplary embodiments of the invention are explained in more detail in the following description with reference to the accompanying drawing figures, in which like reference characters designate like parts and in which:

Fig. 1 shows a schematic block diagram of a device for providing a visualization of a vehicle according to an embodiment of the invention;

Fig. 2 shows a schematic view of a rear portion of a vehicle, illustrating a suspension height of the vehicle;

Fig. 3 shows a schematic top view of a vehicle;

Fig. 4 shows a schematic side view of a vehicle;

Fig. 5 shows a schematic block diagram of a vehicle according to an embodiment of the invention; and

Fig. 6 shows a schematic flow diagram of a method for providing a surround view image according to an embodiment of the invention.

Detailed description of the invention

The accompanying drawings are included to provide a further understanding of the present invention and are incorporated in and constitute a part of this specification. The drawings illustrate particular embodiments of the invention and together with the description serve to explain the principles of the invention. Other embodiments of the invention and many of the attendant advantages of the invention will be readily appreciated as they become better understood with reference to the following detailed description.

Figure 1 shows a schematic block diagram of a device 1 for providing a visualization of a vehicle. The device 1 comprises an interface 11 which is connected via cables or via a wireless connection to a plurality of vehicle cameras 22 to 2n of a vehicle. Herein, n can be any integer greater than 2. Preferably, there are at least four vehicle cameras 22 to 2n which are arranged around the vehicle. In particular, the vehicle cameras 22 to 2n may comprise a front camera, a back camera and at least one side camera for each side of the vehicle. The vehicle cameras 22 to 2n are arranged to provide a 360-degree view. Adjacent vehicle cameras 22 to 2n may have partially overlapping detection regions.

The interface 11 is further connected to a sensor 21 which measures the camera extrinsic parameters of the vehicle cameras 22 to 2n. The sensor 21 may comprise at least one of yaw rate sensors, acceleration sensors, position sensors, and the like. The sensor 21 may provide the current camera extrinsic parameters as a 4x4 matrix:

| R  T |
| 0  1 |     formula (1)

Herein, R refers to a 3x3 rotation matrix and T to a 3x1 translation vector. The 4x4 matrix corresponds to a vehicle posture.

The camera extrinsic parameters may also be provided in the form (x, y, z, Rx, Ry, Rz), where x, y, z correspond to the T vector and Rx, Ry, Rz to the R matrix via the formula:

R = Rz(a) · Ry(b) · Rx(c).
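As an illustration of this composition (not part of the application as filed; function names and angle conventions are assumptions), the rotation matrix R can be built from the three elementary rotations as follows:

```python
import numpy as np

def Rx(c):
    """3x3 rotation about the longitudinal x-axis by angle c (radians)."""
    return np.array([[1, 0, 0],
                     [0, np.cos(c), -np.sin(c)],
                     [0, np.sin(c),  np.cos(c)]])

def Ry(b):
    """3x3 rotation about the horizontal y-axis by angle b (radians)."""
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(a):
    """3x3 rotation about the vertical z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def rotation_from_angles(a, b, c):
    """Compose R = Rz(a) · Ry(b) · Rx(c) as in the formula above."""
    return Rz(a) @ Ry(b) @ Rx(c)
```

The result is always a proper rotation (orthogonal, determinant 1), as expected for the R block of the extrinsic matrix.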

The sensor 21 further measures vehicle suspension data relating to a suspension of wheels of the vehicle and provides the measured suspension data to the interface 11. The suspension data may comprise a suspension height of each wheel of the vehicle.

The vehicle cameras 22 to 2n provide respective camera images to the interface 11.

The interface 11 provides the camera images, the suspension data and the camera extrinsic parameters of the vehicle cameras 22 to 2n to a computation unit 12 of the device 1. The computation unit 12 may comprise at least one of a processor, microprocessor, integrated circuit, ASIC, and the like. The computation unit 12 may further comprise at least one memory for storing the received camera extrinsic parameters, suspension parameters and camera images and for storing program instructions.

The computation unit 12 computes a visualization of the vehicle using a predefined three-dimensional model of the vehicle. The predefined model of the vehicle may comprise features such as wheels, a steering wheel, front and rear lights, doors, a hood of the vehicle, and the like. The computation unit 12 is adapted to visualize the model of the vehicle on a ground surface.

The computation unit 12 stores initial camera extrinsic parameters and initial suspension data. The algorithm carried out by the computation unit 12 is based on an estimation of the current vehicle body state using the difference between the current camera extrinsic parameters and the initial camera extrinsic parameters combined with differences between the current suspension heights of the wheels and the initial suspension heights of the wheels.

The computation unit 12 generates the ground surface in such a way that the wheels of the vehicle contact the ground surface. The computation of the ground surface is carried out based on the current values of camera extrinsic parameters of the vehicle cameras and based on the suspension data.

The computation unit 12 computes, for each vehicle camera 22 to 2n, the differences between the current values of the camera extrinsic parameters of the vehicle camera and predefined initial values of the camera extrinsic parameters of the vehicle camera. The computation unit 12 further computes a current three-dimensional posture of the vehicle based on the calculated differences between the current values of the camera extrinsic parameters of the vehicle cameras and the predefined initial values of the camera extrinsic parameters of the vehicle cameras 22 to 2n. The three-dimensional posture of the vehicle is computed in homogeneous coordinates, using affine transformations.

The affine transformations comprise rotations related to a rotation of the vehicle and translations related to a translation of the vehicle. The computing unit 12 may apply multivariate interpolations to the current values of the camera extrinsic parameters to compute the three-dimensional posture of the vehicle.

The computing unit 12 may compute the multivariate interpolations as a bilinear interpolation.

In more detail, the computing unit 12 may compute the difference between the current camera extrinsic parameters and the initial camera extrinsic parameters to estimate the current three-dimensional posture of the vehicle. Using homogeneous coordinates, the vehicle posture is given by the combination of affine transformations corresponding to rotation and translation by the 4x4 matrix of formula (1).

Rz(a) is a 3x3 rotation matrix about the vertical z-axis by an angle a. Rz(a) does not reflect any changes of the vehicle posture due to non-planar ground surfaces. Accordingly, an identity matrix may be used.

Ry(b) is a 3x3 rotation matrix about the horizontal y-axis by an angle b taken from the camera extrinsic parameters. The angle b may be calculated by applying a bilinear interpolation to the camera extrinsic parameters of the vehicle cameras corresponding to the y-axis rotation.

Rx(c) is a 3x3 rotation matrix about the longitudinal x-axis by an angle c taken from the camera extrinsic parameters. The angle c is calculated by applying a bilinear interpolation to the camera extrinsic parameters of the vehicle cameras corresponding to the x-axis rotation.

T is a 3x1 translation matrix (or vector), where the x- and y-displacements may be set to identity because they do not reflect any changes of the vehicle posture due to non-planar ground surfaces. The z-coordinate may be calculated by applying a bilinear interpolation to the camera extrinsic parameters of the vehicle cameras corresponding to the z-axis translation.
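The bilinear interpolation used above can be sketched as follows. This is a minimal illustration under the assumption that the four cameras' per-axis extrinsic deltas are treated as values at the corners of a unit square and evaluated at the body centre; the function names and camera-to-corner mapping are hypothetical, not taken from the application:

```python
def bilinear(q11, q21, q12, q22, x, y):
    """Bilinear interpolation on the unit square, with corner values
    q11 at (0,0), q21 at (1,0), q12 at (0,1), q22 at (1,1)."""
    return (q11 * (1 - x) * (1 - y) + q21 * x * (1 - y)
            + q12 * (1 - x) * y + q22 * x * y)

def estimate_pitch(front, left, right, rear, x=0.5, y=0.5):
    """Hypothetical sketch: estimate the vehicle's y-axis rotation angle
    at the body centre from four cameras' y-rotation deltas (current
    minus initial extrinsic angle), placed at the square's corners."""
    return bilinear(front, left, right, rear, x, y)
```

If all four cameras report the same angular delta, the interpolated body angle equals that delta, which matches the intuition of a rigid body tilt.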

Further, the computing unit 12 may compute, for each wheel, a displacement based on the vehicle suspension data. The ground surface can be modelled based on the computed displacements of the wheels of the vehicle. Herein, the vehicle suspension data comprises, for each wheel, information regarding a current suspension height. For each wheel, the displacement is computed based on a difference between the current suspension height and a predefined initial suspension height. A position of each wheel of the vehicle is computed based on the three-dimensional posture of the vehicle and based on the computed displacement of the wheel of the vehicle. The displacements may be computed in the x-, y-, and z-directions, and wheel angles are additionally calculated using the suspension displacement, i.e. the difference between the current and the initial suspension position and orientation. The bottom of each wheel of the predefined model of the vehicle is placed at a respective, possibly different, position and angle.

The surface is calculated such that it matches the wheels and the vehicle heights. The inner ground plane mesh surface height under the vehicle may be estimated using a bilinear interpolation using the four bottom heights of the vehicle wheels.
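The inner mesh estimation can be sketched as follows. The function name, mesh resolution, and the unit-square parametrization of the area between the four wheels are assumptions made for illustration only:

```python
import numpy as np

def inner_ground_heights(h_fl, h_fr, h_rl, h_rr, nx=10, ny=10):
    """Heights of an equally spaced inner ground mesh under the
    vehicle, bilinearly interpolated from the four wheel-bottom
    heights (front-left, front-right, rear-left, rear-right)."""
    u = np.linspace(0.0, 1.0, nx)      # 0 = front edge, 1 = rear edge
    v = np.linspace(0.0, 1.0, ny)      # 0 = left edge,  1 = right edge
    U, V = np.meshgrid(u, v, indexing="ij")
    return ((1 - U) * (1 - V) * h_fl + (1 - U) * V * h_fr
            + U * (1 - V) * h_rl + U * V * h_rr)
```

The mesh corners coincide with the four wheel-bottom heights, so the modelled surface contacts the wheels as required.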

Generally, the surface can be modelled using a multivariate interpolation using the computed positions of the wheels.

The ground plane mesh may be equally spaced under the vehicle.

The outer ground plane mesh surface height is modelled to smooth out to zero height. Accordingly, a simple interpolation may be applied.

The computation unit 12 may further apply a stabilization filtering mechanism, such as a Kalman filter, to each calculated height, making the visualization smoother and more stable.
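Such a per-height stabilization filter can be sketched, for example, as a minimal one-dimensional Kalman filter for a near-constant state; the class name and the noise values q and r are hypothetical tuning choices, not taken from the application:

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for smoothing a single height value,
    assuming a near-constant state with process noise q and
    measurement noise r (illustrative tuning values)."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict: state assumed unchanged, uncertainty grows by q.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z.
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Feeding the filter a stream of noisy height measurements yields a smoothed estimate that converges to the underlying height, which is the stabilizing effect described above.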

The computation unit 12 may further generate a surround view of the vehicle. The vehicle may be visualized inside a bowl. The ground surface of the vehicle is generated according to the steps outlined above. At a farther distance, the ground surface transitions into a wall-shaped portion. The computation unit 12 may project the camera images received from the vehicle cameras 22 to 2n onto the bowl. Thereby, a surround view of the vehicle is generated which may be presented to the driver of the vehicle on a display 13 of the device, e.g. at a dashboard of the vehicle.

Figure 2 shows a schematic view of a rear portion of a vehicle 3. A suspension height s of a wheel 33 of the vehicle 3 is depicted. Further, a camera 22 is located at a rear side of the vehicle 3 at a height H. A coordinate system is defined, wherein x denotes the longitudinal axis, y denotes the horizontal axis, and z denotes the vertical axis.

Figure 3 shows a schematic top view of the vehicle 3, having four vehicle cameras 22 to 25. A first vehicle camera 22 is located at a front of the vehicle 3, a second vehicle camera 23 is located at a first side of the vehicle 3, a third vehicle camera 24 is located at a second side of the vehicle 3, and a fourth vehicle camera 25 is located at the back of the vehicle 3.

Figure 4 shows a schematic side view of the vehicle 3.

Figure 5 shows a schematic block diagram of a vehicle 3. The vehicle comprises a plurality of vehicle cameras 22 to 2n, in particular front cameras, back cameras and/or side cameras. The vehicle 3 further comprises a sensor 21 for determining the camera extrinsic parameters of the vehicle cameras 22 to 2n.

The sensor 21 provides the camera extrinsic parameters to a device 1 for providing a surround view image. Further, the vehicle cameras 22 to 2n provide respective camera images to the device 1. The device 1 is arranged according to one of the previously described embodiments. As described above, the device 1 provides a visualization of the vehicle 3 using a predefined model of the vehicle 3, wherein the vehicle 3 is visualized on a ground surface. The ground surface contacts the wheels of the vehicle 3.

The device 1 can provide the generated visualization of the vehicle 3 to a display 31 of the vehicle 3. Accordingly, the visualization of the vehicle 3 may be presented to a driver of the vehicle 3. The device 1 may further provide the generated visualization of the vehicle 3 to a driver assistance system 32 which may be adapted to control at least one driving function of the vehicle 3. For example, the driver assistance system 32 may accelerate, decelerate or steer the vehicle 3 in accordance with the visualization of the vehicle 3.

Figure 6 shows a schematic flow diagram of a method for providing a surround view image.

In a first method step S1, current values of camera extrinsic parameters are received, corresponding to a plurality of vehicle cameras 22 to 2n of a vehicle 3, preferably at least four vehicle cameras 22 to 2n.

In a second method step S2, suspension data relating to a suspension of wheels 33 of the vehicle 3 are received.

In a third method step S3, a three-dimensional posture of the vehicle 3 is computed in homogeneous coordinates, using affine transformations. The affine transformations comprise rotations related to a rotation of the vehicle 3 and translations related to a translation of the vehicle 3. The three-dimensional posture of the vehicle 3 is computed by applying multivariate interpolations to the current values of the camera extrinsic parameters.

In a fourth method step S4, a displacement is computed for each wheel based on the vehicle suspension data. The suspension data may comprise information regarding a current suspension height of each wheel 33. The displacement may be computed based on a difference between the current suspension height and a predefined initial suspension height.

In a fifth method step S5, a visualization of the vehicle 3 is computed, wherein the vehicle 3 is visualized on a ground surface. The ground surface is modelled to contact the wheels of the vehicle 3. The ground surface is generated based on the computed displacements of the wheels of the vehicle 3.

A position of each wheel of the vehicle 3 may be computed based on the three-dimensional posture of the vehicle and based on the computed displacement of the wheel of the vehicle 3.

The ground surface may be modelled using a multivariate interpolation using the computed positions of the wheels.

In a sixth method step S6, a surround view may be generated using the visualization of the vehicle 3 and camera images provided by the vehicle cameras 22 to 2n. To generate the surround view, a virtual bowl may be generated comprising the visualization of the vehicle 3, in particular including the modelled ground surface. The surround view may be generated by projecting the camera images onto the virtual bowl surrounding the model of the vehicle 3. The surround view may be presented to a driver of the vehicle 3. Alternatively or additionally, the surround view may be used in a driver assistance system 32 to control driving functions of the vehicle 3.

REFERENCE SIGNS

1 device

3 vehicle

11 interface

12 computation unit

13 display

21 sensor

22-2n vehicle cameras

31 display

32 driver assistance system

33 wheel

H camera height

s suspension height

S1-S6 method steps

x, y, z coordinates