


Title:
MAP LINE INTERFACE FOR AUTONOMOUS DRIVING
Document Type and Number:
WIPO Patent Application WO/2019/207160
Kind Code:
A1
Abstract:
The influence of an uncertain Gaussian ego position and pose or heading for the resulting map information, which is not necessarily Gaussian, is disclosed. In order to transport the map line information to other subsystems such as the lane fusion module, we need to approximate the map line distribution by a suitable data structure which is both accurate and compact. Suitable approximations of the resulting map line distributions such as mean values of map lines only, mean values combined with standard deviation values, and mean values combined with the corresponding covariance matrices are also presented. The usage of mean values and covariance matrices approximates the complete distributions rather accurately, and is therefore both from an accuracy point of view as well as from a bandwidth point of view the way to represent map lines in interfaces.

Inventors:
PFEIFLE MARTIN (DE)
Application Number:
PCT/EP2019/060937
Publication Date:
October 31, 2019
Filing Date:
April 29, 2019
Assignee:
PFEIFLE MARTIN (DE)
VISTEON GLOBAL TECH INC (US)
International Classes:
G01C21/30; B60W30/12; B60W40/072; G05D1/02
Other References:
MATTHAEI RICHARD ET AL: "Map-relative localization in lane-level maps for ADAS and autonomous driving", 2014 IEEE INTELLIGENT VEHICLES SYMPOSIUM PROCEEDINGS, IEEE, 8 June 2014 (2014-06-08), pages 49 - 55, XP032620366, DOI: 10.1109/IVS.2014.6856428
GACKSTATTER C ET AL: "Fusion of clothoid segments for a more accurate and updated prediction of the road geometry", INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2010 13TH INTERNATIONAL IEEE CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 19 September 2010 (2010-09-19), pages 1691 - 1696, XP031792796, ISBN: 978-1-4244-7657-2
LI FRANCK ET AL: "Map-Aided Dead-Reckoning With Lane-Level Maps and Integrity Monitoring", IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, IEEE, vol. 3, no. 1, 1 March 2018 (2018-03-01), pages 81 - 91, XP011679893, ISSN: 2379-8858, [retrieved on 20180319], DOI: 10.1109/TIV.2018.2792843
TOLEDO-MOREO R ET AL: "Fusing GNSS, Dead-Reckoning, and Enhanced Maps for Road Vehicle Lane-Level Navigation", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, IEEE, US, vol. 3, no. 5, 1 October 2009 (2009-10-01), pages 798 - 809, XP011278678, ISSN: 1932-4553, DOI: 10.1109/JSTSP.2009.2027803
LI FRANCK ET AL: "Estimating localization uncertainty using multi-hypothesis map-matching on high-definition road maps", 2017 IEEE 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), IEEE, 16 October 2017 (2017-10-16), pages 1 - 6, XP033330462, DOI: 10.1109/ITSC.2017.8317804
Attorney, Agent or Firm:
MERH-IP MATIAS ERNY REICHL HOFFMANN PATENTANWÄLTE PARTG MBB (DE)
Claims:
CLAIMS

What is claimed is:

1. A system (1) for a map line interface for providing a trajectory for an autonomous vehicle, comprising:

a camera module (61) for detecting lane boundaries on a road the autonomous vehicle is driving on;

a LiDAR module (62) for detecting lane boundaries on the road and objects on and/or next to the road;

a database (42; 63) storing a high-definition (HD) map of the road;

a lane fusion module (7) configured to fuse information on lane boundaries provided by the camera module (61), the LiDAR module (62), and the high-definition map; and

a planning module (8) for providing a trajectory along the road for the autonomous vehicle based on the fused information.

2. The system (1) according to claim 1, further comprising:

an inertial measurement unit (22) for providing first location information of the autonomous vehicle based on data measured by a plurality of sensors; and

a first ego-motion estimation module (45) for providing interoceptive ego-motion information based on the first location information, wherein

the planning module (8) is configured to provide the trajectory based on the interoceptive ego-motion information.

3. The system (1) according to claim 1 or 2, further comprising:

a visual odometry module (49) for determining position and orientation of the autonomous vehicle by analyzing image data provided by the camera module (61); and

a second ego-motion estimation module (46) for providing exteroceptive ego-motion information based on the position and orientation determined by the visual odometry module (49), wherein

the planning module (8) is configured to provide the trajectory based on the exteroceptive ego-motion information.

4. The system (1) according to at least one of claims 1 to 3, further comprising:

a navigation ECU (2) for providing a route path for the autonomous vehicle selected by a user; and

a map matching module (43) for matching the route path with map information provided by the HD map, wherein

the planning module (8) is configured to provide the trajectory based on the matching result provided by the map matching module (43).

5. The system (1) according to claim 4, further comprising:

a GNSS (global navigation satellite system) module (21) for obtaining second location information of the autonomous vehicle from a satellite system;

a localization module (44) for providing precise location information based on the map matching result provided by the map matching module (43) and the second location information, wherein the planning module (8) is configured to provide the trajectory based on the precise location information.

6. A method for a map line interface for providing a trajectory for an autonomous vehicle, the method comprising:

obtaining points from a high-definition (HD) map corresponding to a curve (L), wherein the points are provided in a map coordinate system;

converting the points from the HD map to a vehicle-based coordinate system; and

fitting a polynomial to a subset of the points.

7. The method according to claim 6, wherein the curve (L) is defined as:

y(x) = y_off + tan(b)·x + (c0/2)·x² + (c1/6)·x³,

wherein x is a distance measured along an x-axis which corresponds to a forward direction of the vehicle-based coordinate system,

wherein y_off is the offset between a reference point of the autonomous vehicle and the lane boundary, b is an angle between the right lane boundary and the autonomous vehicle's movement (i.e. the heading of the autonomous vehicle), c0 is a curvature of the lane boundary, and c1 is the rate of change (first derivative) of c0.

8. The method according to claim 7, wherein the polynomial is:

v = p0 + p1·u + p2·u² + p3·u³, and

wherein the polynomial is fitted using a least-squares method.

9. The method according to claim 8, wherein:

y_off := p0;

b := tan⁻¹(p1);

c0 := 2p2; and

c1 := 6p3.

10. The method according to claim 9, further comprising:

generating n samples from an ego distribution, wherein n is an integer;

for each sample, refitting the polynomial to obtain a line sample (S);

analyzing a distribution over the line samples (S).

11. The method according to at least one of claims 6 to 10, further comprising:

providing first location information of the autonomous vehicle based on data measured by a plurality of sensors of an inertial measurement unit (22); and

providing interoceptive ego-motion information based on the first location information; and

providing a trajectory for the autonomous vehicle based on the interoceptive ego-motion information.

12. The method according to claim 11, further comprising:

determining position and orientation of the autonomous vehicle by analyzing image data provided by a camera module (61);

providing exteroceptive ego-motion information based on the determined position and orientation; and

providing the trajectory based on the exteroceptive ego-motion information.

13. The method according to claim 12, further comprising:

providing a route path for the autonomous vehicle selected by a user;

matching the route path with map information provided by the HD map; and

providing the trajectory based on the matching result.

14. The method according to claim 13, further comprising:

obtaining second location information of the autonomous vehicle from a satellite system;

providing precise location information based on the map matching result and the second location information; and

providing the trajectory based on the precise location information.

15. The method according to at least one of claims 6 to 14, wherein the fitting of the polynomial is repeated a plurality of times using parameters that are varied within a preset range in order to generate a distribution and the distribution is analyzed by calculating a mean value and a covariance.

AMENDED CLAIMS

received by the International Bureau on 01 October 2019 (01.10.2019)

CLAIMS

What is claimed is:

1. A system (1) for a map line interface for providing a trajectory for an autonomous vehicle, comprising:

a camera module (61) for detecting lane boundaries on a road the autonomous vehicle is driving on;

a LiDAR module (62) for detecting lane boundaries on the road and objects on and/or next to the road;

a database (42; 63) storing a high-definition (HD) map of the road;

a lane fusion module (7) configured to obtain points from the high-definition (HD) map corresponding to a curve (L), wherein the points are provided in a map coordinate system, to convert the points from the HD map to a vehicle-based coordinate system, to fit a polynomial to a subset of the points, and to fuse information on lane boundaries provided by the camera module (61), the LiDAR module (62), and the fitted polynomial;

a navigation ECU (2) for providing a route path for the autonomous vehicle selected by a user;

a map matching module (43) for matching the route path with map information provided by the HD map; and

a planning module (8) for providing a trajectory along the road for the autonomous vehicle based on the fused information, wherein

the planning module (8) is configured to provide the trajectory based on the matching result provided by the map matching module (43).

2. The system (1) according to claim 1, further comprising:

an inertial measurement unit (22) for providing first location information of the autonomous vehicle based on data measured by a plurality of sensors; and

a first ego-motion estimation module (45) for providing interoceptive ego-motion information based on the first location information, wherein

the planning module (8) is configured to provide the trajectory based on the interoceptive ego-motion information.

3. The system (1) according to claim 1 or 2, further comprising:

a visual odometry module (49) for determining position and orientation of the autonomous vehicle by analyzing image data provided by the camera module (61); and

a second ego-motion estimation module (46) for providing exteroceptive ego-motion information based on the position and orientation determined by the visual odometry module (49), wherein

the planning module (8) is configured to provide the trajectory based on the exteroceptive ego-motion information.

4. The system (1) according to claim 1, further comprising:

a GNSS (global navigation satellite system) module (21) for obtaining second location information of the autonomous vehicle from a satellite system;

a localization module (44) for providing precise location information based on the map matching result provided by the map matching module (43) and the second location information, wherein the planning module (8) is configured to provide the trajectory based on the precise location information.

5. A method for a map line interface for providing a trajectory for an autonomous vehicle, the method comprising:

detecting, by a camera module (61), lane boundaries on a road the autonomous vehicle is driving on;

detecting, by a LiDAR module (62), lane boundaries on the road and objects on and/or next to the road;

obtaining, from a database (42; 63) storing a high-definition (HD) map of the road, points from the high-definition (HD) map corresponding to a curve (L), wherein the points are provided in a map coordinate system;

converting the points from the HD map to a vehicle-based coordinate system;

fitting a polynomial to a subset of the points;

fusing information on lane boundaries provided by the camera module (61), the LiDAR module (62), and the fitted polynomial;

providing a route path for the autonomous vehicle selected by a user;

matching the route path with map information provided by the HD map; and

providing a trajectory along the road for the autonomous vehicle based on the fused information and the matching result.

6. The method according to claim 5, wherein the curve (L) is defined as:

y(x) = y_off + tan(b)·x + (c0/2)·x² + (c1/6)·x³,

wherein x is a distance measured along an x-axis which corresponds to a forward direction of the autonomous vehicle coordinate system,

wherein y_off is the offset between a reference point of the autonomous vehicle and the lane boundary, b is an angle between the right lane boundary and the autonomous vehicle's movement (i.e. the heading of the autonomous vehicle), c0 is a curvature of the lane boundary, and c1 is the rate of change (first derivative) of c0.

7. The method according to claim 6, wherein the polynomial is:

v = p0 + p1·u + p2·u² + p3·u³, and

wherein the polynomial is fitted using a least-squares method.

8. The method according to claim 7, wherein:

y_off := p0;

b := tan⁻¹(p1);

c0 := 2p2; and

c1 := 6p3.

9. The method according to at least one of claims 5 to 8, further comprising:

providing first location information of the autonomous vehicle based on data measured by a plurality of sensors of an inertial measurement unit (22); and

providing interoceptive ego-motion information based on the first location information; and

providing a trajectory for the autonomous vehicle based on the interoceptive ego-motion information.

10. The method according to claim 9, further comprising:

determining position and orientation of the autonomous vehicle by analyzing image data provided by a camera module (61);

providing exteroceptive ego-motion information based on the determined position and orientation; and

providing the trajectory based on the exteroceptive ego-motion information.

11. The method according to claim 5, further comprising:

obtaining second location information of the autonomous vehicle from a satellite system;

providing precise location information based on the map matching result and the second location information; and

providing the trajectory based on the precise location information.

12. The method according to at least one of claims 5 to 11,

wherein the fitting of the polynomial is repeated a plurality of times using parameters that are varied within a preset range in order to generate a distribution and the distribution is analyzed by calculating a mean value and a covariance.

Description:
MAP LINE INTERFACE FOR AUTONOMOUS DRIVING

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. provisional application serial no. 62/663,991, filed April 27, 2018, the disclosure of which is hereby incorporated in its entirety by reference.

BACKGROUND

I. INTRODUCTION

Modern cars are equipped with a large amount of sensors to perceive their environment. Sensors such as cameras, LiDARs or radars can be used to detect objects around the cars and also to detect lanes of the road that the cars traverse. The detection and tracking of road boundaries is an important task for driver assistance systems as many functions, such as lane departure warning, lane keeping assistance, lane change warning systems, and the like, depend on a correct, or accurate, representation of the lanes.

In order to achieve such a correct representation of the lanes, information from various sources is correlated together. For example, information from sensors of the vehicle, such as front facing perspective cameras, side-facing fisheye cameras, LiDARs, radars, digital maps in combination with a Global Navigation Satellite System and/or Inertial Measurement Unit (GNSS/IMU) system, from the trajectories of leading vehicles, other suitable information, or a combination thereof are correlated together to provide a correct representation of the lanes. As many of these subsystems may be provided by different vendors, it is crucial for a system integrator, e.g. the Original Equipment Manufacturer (OEM) itself or the Tier 1 supplier, to define clear and meaningful interfaces, which allow the individual testing of submodules as well as their easy substitution.

Especially for map related topics, the encapsulation of information derived from the map is crucial, as the map submodule might vary from region to region due to the fact that there are different map providers for different regions. Some OEMs even plan to put the map and positioning related submodule within a dedicated Position/Map electronic control unit (positioning ECU) which then communicates the map content to an Advanced driver-assistance system electronic control unit (ADAS ECU) via clear interfaces on top of Ethernet (see fat arrow in FIG. 1). An interface for lane geometry is disclosed herein to enable the exchange of information between a positioning ECU or submodule and an ADAS ECU/submodule.

II. RELATED WORK

Typically, High Definition (HD) map information is used for longitudinal control, such as in the case of speed limit changes and/or road changes ahead. Using map information for lateral control is more difficult, as the quality of the information derived from the map heavily depends on the precision of the positional and heading information of an ego-vehicle (e.g., an autonomous vehicle using ego information such as heading information, position information, other information, or a combination thereof). Uncertainty in the ego-vehicle position and pose directly results in uncertainty in the line information retrieved from the digital map. Many approaches use a three-dimensional clothoid road model for representation of lane boundaries or lane center lines, due to its high relevance and flexibility in road analysis owing to it being the favorable and common engineering solution to road and railway design. A common local approximation to the clothoid is based on the 3rd-order Taylor series. FIG. 2 depicts this representation using a four-dimensional state vector. FIG. 2 illustrates a vehicle EV (ego-vehicle) driving on a road with two lanes. Obviously, each lane boundary, i.e. the left lane boundary LL, the right lane boundary RL, and the lane center line CL, can be represented by such a four-dimensional vector, in which the first two dimensions y_off and b describe the translation and rotation to move the car coordinate system to the coordinate system described by the lane line, and the other two dimensions c0 and c1 describe the lane line's inherent curvature and its rate of change, respectively. This representation can be used by planning and execution to move the ego-vehicle EV close to the centerline of the lane. The state vector illustrated in FIG. 2 is a four-dimensional state vector describing the position and heading of the ego-vehicle with respect to the lane boundary, for example the right lane boundary l_right. In particular, the state vector is x = (y_off, b, c0, c1)^T, wherein y_off describes the offset between a reference point of the ego-vehicle EV and the lane boundary, b describes the angle between the right lane boundary and the ego-vehicle's movement (i.e. the heading of the ego-vehicle EV), c0 describes the curvature of the lane boundary, and c1 describes the rate of change (first derivative) of c0.

SUMMARY

Delivering an accurate representation of the lane ahead of an autonomously driving vehicle is one of the key functionalities of a good ADAS perception system. This statement holds especially for driving on a highway. Functions such as lane keeping assistance rely on a proper representation of the lanes from the perception subsystem. To achieve such a proper representation, information from various sensors such as cameras, LiDARs, radars, and very detailed maps (called High Definition or HD maps) are taken into consideration. In particular, the information obtained from various sensors may be fused together for generating a precise lane geometry.
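The fusion of line estimates from different sources can be illustrated with a generic Gaussian fusion step. The sketch below is illustration only, not the fusion algorithm of this disclosure; the function name and the inverse-covariance weighting rule are assumptions:

```python
import numpy as np

def fuse_line_estimates(x1, P1, x2, P2):
    """Inverse-covariance (information) fusion of two Gaussian estimates of the
    4D line vector (y_off, b, c0, c1). Generic illustration only; the
    disclosure does not prescribe this particular fusion rule."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)                            # fused covariance
    x = P @ (I1 @ np.asarray(x1) + I2 @ np.asarray(x2))   # precision-weighted mean
    return x, P
```

Fusing two equally confident estimates averages their mean vectors while halving the covariance, which is why transmitting covariance matrices over the interface (rather than mean values only) is useful to a downstream fusion module.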

The influence of an uncertain Gaussian ego position, i.e. the position of the ego-vehicle, and pose, i.e. the heading or direction of the ego-vehicle, for the resulting map line information, which is not necessarily Gaussian anymore, is disclosed. In particular, the ego-vehicle may be an autonomous vehicle. In order to transport the map line information to other subsystems such as the lane fusion module, we need to approximate the map line distribution by a suitable data structure which is both accurate and compact. Suitable approximations of the resulting map line distributions such as mean values of map lines only, mean values combined with standard deviation values, and mean values combined with the corresponding covariance matrices are also presented. The usage of mean values and covariance matrices approximates the complete distributions rather accurately, and is therefore both from an accuracy point of view as well as from a bandwidth point of view the way to represent map lines in interfaces, for example the interfaces between different electrical components or ECUs (Electronic Control Units), including a navigation ECU, an HD map ECU, and an autonomous driving ECU, working together in an autonomous vehicle.

A question addressed in this disclosure is whether there exists a representation of the drivable line along with uncertainty information coming directly from the map.

BRIEF DESCRIPTION OF THE DRAWINGS

Further details, features and advantages of designs of the invention result from the following description of embodiment examples in reference to the associated drawings.

FIG. 1 is a block diagram of a system illustrating communications between a Positioning/Map ECU and an ADAS ECU which is embodied as an autonomous driving ECU;

FIG. 2 is an overhead representation of the right lane with respect to ego-vehicle movement;

FIG. 3 is a block diagram of a system for autonomous driving based on a precise map and a precise positioning system;

FIG. 4A is a graphic representation of lane geometry given a position P and heading h;

FIG. 4B is a graphic representation of the lane geometry of FIG. 4A, where the position P and heading h are uncertain;

FIG. 4C is a graphic representation of the lane geometry of FIG. 4A, with five different Monte Carlo sample points of position P and heading h;

FIG. 4D is a plot of 4-dimensional vectors corresponding to the five different Monte Carlo sample points of position P and heading h of FIG. 4C;

FIG. 5 is a block diagram of communications between an infotainment ECU and an ADAS ECU;

FIG. 6 is a plot of Monte Carlo samples for a first input distribution of a vehicle in a highway driving scenario, and with a heading uncertainty standard deviation of 5 degrees;

FIG. 7 is a plot of Monte Carlo samples for a second input distribution of a vehicle in a "street turn" driving scenario, and with a heading uncertainty standard deviation of 10 degrees;

FIG. 8A is a block diagram of an example embodiment using the subject line representation from maps as an input to a Lane Fusion module;

FIG. 8B is a block diagram of another example embodiment using the subject line representation directly for planning a route for driving;

FIG. 8C is a block diagram of yet another example embodiment using the subject line representation for sanity checks in an ASIL decomposition map.

DETAILED DESCRIPTION

Recurring features are marked with identical reference numerals in the figures. In the following, a map line interface for autonomous driving is disclosed.

Test vehicles for autonomous driving are often equipped with a high precision and very expensive Global Navigation Satellite System and/or Inertial Measurement Unit (IMU/GNSS) system which allows localization errors within a few centimeters. Based on such a precise localization module and a similarly accurate high definition (HD) map, it is possible to derive the above described four parameters for the centerline of the ego lane from the map with a relatively low error and use it directly for autonomous driving. See, for example, the system for autonomous driving based on a precise map and a precise positioning system illustrated in FIG. 3.

FIG. 3 generally illustrates a system 11 that includes a real time kinematic (RTK) system 5 that provides precise lane and heading information to a lane generator 7'. A precise HD map 6 provides a precise lane geometry in the world coordinate system, which may be for example WGS 84, to the lane generator 7'. The lane generator 7' provides a precise lane geometry in the car coordinate system, or in a similar coordinate system used in planning, to a planning module (ECU) 8 of the ADAS. For example, the four-dimensional state vector may be generated by the lane generator and provided to the planning module. The planning module 8 may be a component of an autonomous driving ECU 3 as depicted in FIG. 1.
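The lane geometry handed to the planning module can be illustrated with a short sketch. The class below is hypothetical (the names `MapLine` and `lateral_offset` are not from the disclosure); it evaluates the 3rd-order Taylor approximation of the clothoid, y(x) = y_off + tan(b)·x + (c0/2)·x² + (c1/6)·x³, from the four-dimensional state vector:

```python
from dataclasses import dataclass
import math

@dataclass
class MapLine:
    """Four-dimensional map line state vector (hypothetical container)."""
    y_off: float  # lateral offset between vehicle reference point and lane line [m]
    b: float      # angle between lane line and vehicle heading [rad]
    c0: float     # curvature of the lane line [1/m]
    c1: float     # rate of change of the curvature [1/m^2]

    def lateral_offset(self, x: float) -> float:
        # 3rd-order Taylor approximation of the clothoid.
        return (self.y_off + math.tan(self.b) * x
                + 0.5 * self.c0 * x * x
                + self.c1 * x ** 3 / 6.0)

line = MapLine(y_off=-1.5, b=0.0, c0=0.001, c1=0.0)
print(line.lateral_offset(0.0))    # -1.5 at the vehicle reference point
print(line.lateral_offset(100.0))  # ~3.5, one hundred metres ahead
```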

FIG. 1 illustrates a system architecture of system 1 for an autonomous vehicle. The system 1 comprises a navigation ECU 2, an autonomous driving ECU 3, an HD (high-definition) map ECU 4, a GNSS (global navigation satellite system) module 21, and an inertial measurement unit (IMU) 22. The IMU 22 may comprise further sensors such as a gyro sensor, an odometer, an accelerometer, and a magnetometer.

The HD map ECU 4 comprises a route path module 41, a database 42 storing an HD map, a map matching module 43, an HD localization module 44, a first ego-motion estimation module 45 using interoceptive ego-motion information, a second ego-motion estimation module 46 using exteroceptive ego-motion information, a data provider 47 for providing data from the HD map ECU 4 to the autonomous driving ECU 3, a landmark extraction module 48, and a visual odometry module 49. The navigation ECU 2 interfaces with the route path module 41 for providing a route path which may have been selected by the user of the autonomous vehicle. The route path module 41 interfaces with the map matching module 43 in order to provide information of the selected route path used for matching the information provided by the HD map 42 with the actual route path 43 in order to obtain a position of the vehicle.

The route path module 41, the database 42, the map matching module 43, and the HD localization module 44 all interface with the data provider 47 in order to output data to the autonomous driving ECU 3.

The GNSS module 21 provides a location of the vehicle obtained from a satellite positioning system to the HD localization module 44. The HD localization module 44 estimates the exact location of the autonomous vehicle using the satellite position and the output of the map matching module 43.

The IMU module 22 interfaces with the first ego-motion estimation module 45 to generate interoceptive ego-motion information.

The autonomous driving ECU 3 controls driving functions of the autonomous vehicle. For example, the autonomous driving ECU 3 may control a steering angle of the steering wheel of the vehicle, an acceleration, and a deceleration of the vehicle. Furthermore, the autonomous driving ECU 3 provides data from a camera and other sensors, such as radar and LiDAR, to the landmark extraction module 48 and the visual odometry module 49. The landmark extraction module 48 applies algorithms to extract landmarks from camera images used for localization of the vehicle. The visual odometry module 49 provides data for generating exteroceptive ego-motion information to the second ego-motion estimation module 46.

Obviously, in series cars it is not feasible to integrate high cost GNSS/IMU systems.

Sophisticated positioning algorithms use interoceptive and exteroceptive ego-motion information as well as additional information coming from visual odometry (using for example camera data) or from the HD localization objects contained in the HD map and detected by a camera and/or a LiDAR to achieve a substantially accurate position of the ego-vehicle.

Clearly, the better the positional information of the ego-vehicle, the more accurate the lines (such as lane boundaries of a road) are, which are retrieved from the HD map. The first question we tackle is how the 4D distribution of the line representation for autonomous driving looks given an uncertain position and heading for the ego-vehicle and a piece-wise linear line or set of markers from the digital map. The line from the map provides a function which transforms the position and heading distribution from 3D space to a 4D distribution in the line representation space used for autonomous driving, called drivable line space. In other words, "lines" may correspond to possible trajectories of the ego-vehicle, for example in relation to a detected lane marking or other objects detected by the ADAS perception system. The second question is how the resulting distribution of the lines for autonomous driving can be expressed by a data representation which is accurate, compact, and usable by subsequent modules, such as the autonomous driving ECU 3 of FIG. 1 or the planning module 8 of FIG. 3 or FIG. 8. FIGS. 4A-4D present an informal problem statement.

Similarly to the illustration of FIG. 2, FIGS. 4A to 4C show a top view of an exemplary road an autonomous vehicle (ego-vehicle EV) is driving on. The road has two lanes. On the left, the road has a continuous left lane boundary LL. On the right, the road has a continuous right lane boundary RL. The center line CL is marked by a broken line.

With reference to FIG. 4A, it is assumed that the position P and heading h are certain. We can then compute the four-dimensional state vector illustrated in FIG. 2.

In this case, the map line can be regarded as a function f: R³ → R⁴, transforming the certain values (x, y, h) into the certain values (y_off, b, c0, c1).

With reference to FIG. 4B, it is assumed that the position P and heading h are uncertain. A Monte Carlo sampling can be applied to the uncertain position P and heading h to generate N sample points; N = 5 in the example shown in FIG. 4C. With reference to FIG. 4C, a four-dimensional state vector Xi is computed for each of the N samples.

With reference to FIG. 4D, the resulting distribution of the N vectors Xi is not necessarily Gaussian. This disclosure examines how well a Gaussian with covariance cov(X), or the standard deviation values σ(X), approximates the distribution.
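The Monte Carlo summarization can be sketched as follows: draw N pose samples from the Gaussian ego distribution, recompute the line vector for each sample, and approximate the resulting 4D distribution by its sample mean and covariance. The function signature and the toy line model in the test are assumptions, not the literal disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def line_distribution_summary(fit_line, mean_pose, pose_cov, n=4000):
    """Monte Carlo summary of the map line distribution (sketch).

    fit_line(x, y, h) -> (y_off, b, c0, c1) is the map line function.
    """
    poses = rng.multivariate_normal(mean_pose, pose_cov, size=n)
    samples = np.array([fit_line(px, py, ph) for px, py, ph in poses])
    # Approximate the generally non-Gaussian 4D distribution by mean + covariance.
    return samples.mean(axis=0), np.cov(samples, rowvar=False)
```

With a toy linear model such as `lambda x, y, h: (-y, -h, 0.0, 0.0)` (a straight boundary along the map x-axis), the summary recovers roughly the negated pose mean and the pose variances; for nonlinear map geometry the output distribution is skewed, which is exactly the case studied here.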

In this disclosure, the following three data structures that may represent the distribution for map lines resulting from positional uncertainty are evaluated:

• The four-dimensional mean vector (y_off, b, c0, c1) only;

• The vector along with its standard deviation values; and

• The vector along with the corresponding covariance matrix.

In industry, there exists an initiative called ADAS Interface Specification (ADASIS), which transmits the map information in front of the vehicle via CAN (ADASIS v2) or via Ethernet (ADASIS v3). In both cases, representations of the map line geometry are sent out via interfaces. The map lines are sent out in WGS84 coordinates as stored in the database. In addition, the position of the ego-vehicle is sent out with some accuracy values. FIG. 5 depicts some applications and a possible architecture. The problem of how to get correct drivable line representations out of uncertain position and map lines is not tackled. The reason is that map content is so far used mainly for longitudinal control and not for lateral control.
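A rough sense of the bandwidth cost of the three candidate representations can be obtained by counting elements (the counts below are an illustrative assumption; since the 4×4 covariance matrix is symmetric, only its 10 unique entries need to be transmitted):

```python
# Payload size, in 32-bit floats, of one map line for the three candidate
# representations (element counts are assumptions for illustration).
mean_only = 4                         # (y_off, b, c0, c1)
mean_plus_std = 4 + 4                 # mean vector plus one standard deviation each
mean_plus_cov = 4 + 4 * (4 + 1) // 2  # mean vector plus upper triangle of 4x4 covariance

for name, n in [("mean only", mean_only),
                ("mean + std", mean_plus_std),
                ("mean + covariance", mean_plus_cov)]:
    print(f"{name}: {n} floats ({4 * n} bytes)")
```

Even the richest representation stays within a few tens of bytes per line, which supports the conclusion that mean plus covariance is viable from a bandwidth point of view.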

FIG. 5 illustrates another example of a system architecture. In FIG. 5 there is shown an infotainment ECU 9 and an ADAS ECU 3'. The ADAS ECU 3' may be regarded as a modified example of the autonomous driving ECU 3 of FIG. 1. The ADAS ECU 3' comprises a camera module 31, a radar module 32, a LiDAR module 33, an e-horizon reconstruction module 34, a fusion module 35, a module providing an environmental model 36, a planning module 37, and an e-horizon provider 38. Furthermore, there is a DC runtime layer, which refers to a runtime software layer like DriveCore runtime used for autonomous drive platforms and/or systems. This DriveCore runtime software layer is a middle-layer that bridges the gap between the sensors 31, 32, 33, 34 and autonomous driving functionality of an autonomous vehicle.

The camera module 31 may provide image data of a road the autonomous vehicle is driving on. The radar module 32 may provide distance information of other vehicles driving in front or in a surrounding of the autonomous vehicle. The LiDAR module 33 may provide a three-dimensional model of the autonomous vehicle's surrounding. The fusion module 35 may fuse sensor data and map data provided by the camera module 31, the radar module 32, the LiDAR module 33, and the e-horizon reconstruction module 34. By fusing the provided data, the fusion module 35 may provide data for updating or generating the environmental model 36. This environmental model 36 may be used by the planning module 37. This planning module may correspond to the planning module 8 of FIG. 3 or of FIG. 8. The fusion module 35 may correspond, for example, to the lane fusion module of FIG. 8.

The infotainment ECU 9 comprises a map database 91 and an e-horizon provider 92. The map database 91 stores an HD map similarly to the database 42 of FIG. 2. This HD map may comprise layers providing information on hazard spots, i.e. parts of a road which present a danger temporarily or permanently, such as tight turns, road construction sites, accident sites, danger of rock fall, danger of congestion, and the like. Furthermore, the HD map may comprise layers providing information on current traffic conditions and/or speed limits.

The e-horizon provider 92 may provide information used for positioning, map matching, and dead reckoning. In some system architectures, an infotainment ECU 9 may interface with an HD map ECU 4 and/or with an autonomous driving ECU 3 of FIG. 1.

III. COMPUTATION OF MAP LINES BASED ON UNCERTAIN POSITIONAL INFORMATION

In general there is uncertainty in the positional and heading (hereafter referred to as "ego") information and possible uncertainty in the map information. Here we deal with the case where the map information is considered accurate, or by reduction of the model we take the best estimate. The problem then becomes to compute the distribution of the line parameters as a function of the ego distribution and the map information. Let (e_x, e_y) represent the x and y coordinates of the ego position and e_h represent the heading (angle) in terms of the map coordinate system.

We concentrate on the case where the information about the ego distribution is given to us via the three-dimensional mean and a 3x3 covariance matrix. For computational and sampling purposes we may approximate the true distribution over (e_x, e_y, e_h) using a Gaussian with mean μ and covariance Σ. [This is the Gaussian that best matches the true distribution (if it is well-behaved) in terms of the Kullback-Leibler distance; proof by expanding the definition and differentiating.]

A. The Curve Function

Let x be the distance along the x axis (forward direction) of the car coordinate system. Then we define the line as:

y(x) = y_off + tan(b) x + (c0/2) x^2 + (c1/6) x^3.

Suppose we are given points from a map [HD map information comes in this form] and we know somehow that they correspond to the same curve {(s_i, t_i)}_i in the map coordinate system. We convert these points to their equivalent in the car coordinate system {(u_i, v_i)}_i, where (u_i, v_i) = coordinate_transform(e_x, e_y, e_h, s_i, t_i). Then we fit a polynomial to a subset of these points: v = p0 + p1 u + p2 u^2 + p3 u^3 via a method such as least-squares, i.e.

(p0, p1, p2, p3) = arg min Σ_i (v_i − (p0 + p1 u_i + p2 u_i^2 + p3 u_i^3))^2.

Finally, we set y_off := p0, b := arctan(p1), c0 := 2 p2, c1 := 6 p3. This gives us a way to convert map information into lines in the car coordinate system, all based on the vector (e_x, e_y, e_h). To complete the analysis, consider a possible alternative position and heading of the car (e'_x, e'_y, e'_h). Through the described procedure, this would give rise to a different line for the same map points: (y'_off, b', c'_0, c'_1). This polynomial refitting gives us the recipe for mapping ego and map information to line parameters. As mentioned previously, we seek to use this mapping in order to understand the resulting distribution over the lines.
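The refitting procedure (coordinate transform, cubic least-squares fit, and the mapping y_off := p0, b := arctan(p1), c0 := 2p2, c1 := 6p3) can be sketched as follows; the straight lane marking and pose values are illustrative assumptions:

```python
import numpy as np

def to_car_frame(ex, ey, eh, sx, sy):
    """Transform map points (sx, sy) into the car frame of pose (ex, ey, eh)."""
    dx, dy = np.asarray(sx) - ex, np.asarray(sy) - ey
    c, s = np.cos(eh), np.sin(eh)
    u = c * dx + s * dy        # forward distance along the car's x axis
    v = -s * dx + c * dy       # lateral offset
    return u, v

def fit_line_params(u, v):
    """Least-squares cubic fit v = p0 + p1*u + p2*u^2 + p3*u^3,
    then map the coefficients to (y_off, b, c0, c1)."""
    p3, p2, p1, p0 = np.polyfit(u, v, 3)   # polyfit returns highest degree first
    return p0, np.arctan(p1), 2.0 * p2, 6.0 * p3

# Illustrative straight lane marking 1.5 m to the left of the map x axis,
# viewed from the pose (0, 0, 0) (assumed values).
sx = np.linspace(0.0, 50.0, 20)
sy = np.full_like(sx, 1.5)
u, v = to_car_frame(0.0, 0.0, 0.0, sx, sy)
y_off, b, c0, c1 = fit_line_params(u, v)
```

For this unrotated, unshifted pose the fit simply recovers the lateral offset of 1.5 m with zero heading angle and curvature.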

1) Invariance of least-squares: We also considered polynomial fitting via total least squares, whose objective function is invariant to the coordinate system used since it relies on the orthogonal distances between the polynomial and the data points. This method would be more computationally expensive and in general is not guaranteed to converge to an acceptable solution. We do not expect that the total least-squares solution would be better for obtaining line parameters from map points. Nor is it known a priori that any coordinate-system-invariant fitting would produce a more accurate curve than least squares when we know the expected heading.
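One relevant check: when the sampled points lie exactly on a cubic, a least-squares fit through them reproduces the coefficients with zero residual, so the distance measure used cannot influence the result in that case. A minimal sketch with assumed coefficients:

```python
import numpy as np

def refit_exact(p, n=5):
    """Sample n points from the cubic with coefficients p = (p0, p1, p2, p3)
    and refit by least squares. With n >= 4 points lying exactly on a cubic,
    the fit is perfect, so the choice of distance measure cannot matter."""
    u = np.linspace(0.0, 10.0, n)
    v = p[0] + p[1] * u + p[2] * u**2 + p[3] * u**3
    q3, q2, q1, q0 = np.polyfit(u, v, 3)
    residual = v - np.polyval([q3, q2, q1, q0], u)
    return np.array([q0, q1, q2, q3]), residual

# Assumed illustrative coefficients.
coeffs, residual = refit_exact((0.5, 0.1, -0.01, 0.001))
```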

However, when we are already given a line (such as one obtained from least-squares fitting of map points), that line will have an implicit coordinate system by definition. If we then want to calculate its parameters relative to an alternative coordinate system, we simply take many points (five are enough) along this line and apply an invariant method such as the total least-squares objective. This makes sure that the newly fitted curve is not influenced by a coordinate-variant distance measure. Luckily, regardless of this we can still use simple least-squares fitting, since we know that its solution will be achieved with all distances zero (we can obtain a perfect fit) and thus will not be affected by the distance measure used, since all distance measures must yield the same zero distances.

B. Monte Carlo Curve Sampling

One way to understand the resulting distribution over the line parameters is to sample from it. This means sampling lines in the car coordinate system. To do this we first generate n samples from the ego distribution,

{(x_i, y_i, h_i)}_{i=1}^{n} ~ N(μ, Σ),

and for each of the coordinate system samples we refit the polynomial to obtain the needed line samples {z_i = (y_off,i, b_i, c0,i, c1,i)}_{i=1}^{n}. The line samples then give us the information needed to analyze the distribution over possible lines:

E_mc = (1/n) Σ_i z_i ≈ E(z),
C_mc = (1/(n−1)) Σ_i (z_i − E_mc)(z_i − E_mc)^T ≈ Cov(z),

where E and Cov are the statistical expectation and covariance of the vector, respectively, and E_mc and C_mc are their approximators.
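The Monte Carlo approximators E_mc and C_mc can be sketched end-to-end as follows; the map points and the ego pose covariance are illustrative assumptions:

```python
import numpy as np

def line_params(ex, ey, eh, sx, sy):
    """Refit line parameters (y_off, b, c0, c1) for one ego pose sample."""
    dx, dy = sx - ex, sy - ey
    c, s = np.cos(eh), np.sin(eh)
    u, v = c * dx + s * dy, -s * dx + c * dy
    p3, p2, p1, p0 = np.polyfit(u, v, 3)
    return np.array([p0, np.arctan(p1), 2.0 * p2, 6.0 * p3])

def mc_line_distribution(mean, cov, sx, sy, n=2000, seed=0):
    """Monte Carlo approximators E_mc and C_mc of the line distribution."""
    rng = np.random.default_rng(seed)
    poses = rng.multivariate_normal(mean, cov, size=n)
    z = np.array([line_params(ex, ey, eh, sx, sy) for ex, ey, eh in poses])
    return z.mean(axis=0), np.cov(z, rowvar=False)

# Illustrative straight marking 1.5 m to the left; pose uncertain in
# position (0.2 m) and heading (about 1 degree) -- assumed values.
sx = np.linspace(0.0, 50.0, 20)
sy = np.full_like(sx, 1.5)
mean = np.array([0.0, 0.0, 0.0])
cov = np.diag([0.2**2, 0.2**2, np.radians(1.0)**2])
E_mc, C_mc = mc_line_distribution(mean, cov, sx, sy)
```

`E_mc` and `C_mc` are exactly the four-dimensional mean vector and 4x4 covariance matrix discussed as interface candidates.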

C. Sigma Points Method

The sigma points method is a basic and quick way to get special "sample" points from an input distribution. The points are mapped through a possibly non-linear function, as is the case with our polynomial refitting. The mapped points are then used to approximate the true distribution that would result from applying the non-linear mapping. A separate set of weights is used to compute the estimated mean and covariance matrix of the mapped distribution. The sigma points method tries to preserve the structure of the resulting distribution so that the variance and higher-order moments match the true distribution.

There are a few alternative formulations which try to preserve different properties of the true distribution. We use the 7-point version with n = 3, κ > 0, α ∈ (0, 1], β = 2 — parameters usually used in the unscented Kalman filter (UKF). The points defined in this disclosure are as follows:

χ_0 = μ,
χ_i = μ + (sqrt((n + λ) Σ))_i, i = 1, ..., n,
χ_{n+i} = μ − (sqrt((n + λ) Σ))_i, i = 1, ..., n,

where λ = α^2 (n + κ) − n and (sqrt((n + λ) Σ))_i is the i'th column of the square root of the matrix (n + λ) Σ.

For each of these sigma points {(x_i, y_i, h_i)}_{i=1}^{7} we apply our refitting procedure to obtain 7 line parameter vectors {ζ_i = (y_off,i, b_i, c0,i, c1,i)}_{i=1}^{7}. These line parameter vectors are then used in the following way to estimate the mean and covariance of the true line distribution:

E_sp = Σ_i W_i^m ζ_i ≈ E(z),
C_sp = Σ_i W_i^c (ζ_i − E_sp)(ζ_i − E_sp)^T ≈ Cov(z),

where E and Cov are the statistical expectation and covariance of the vector, respectively, E_sp and C_sp are their approximators, and W_i^m and W_i^c are the mean and covariance weights of the sigma points method.
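A sketch of the sigma points construction and the resulting approximators E_sp and C_sp, following the standard unscented transform; α = 1, β = 2, κ = 1 are assumed defaults honoring κ > 0, not values from the disclosure:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Standard 2n+1 unscented-transform sigma points and weights.
    For n = 3 this yields the 7-point version."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # column-wise matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def unscented_estimate(f, mean, cov):
    """Approximators E_sp and C_sp for the distribution of f(X), X ~ N(mean, cov)."""
    pts, wm, wc = sigma_points(mean, cov)
    z = np.array([f(p) for p in pts])
    E_sp = wm @ z                     # weighted mean of the mapped points
    d = z - E_sp
    C_sp = (wc[:, None] * d).T @ d    # weighted covariance of the mapped points
    return E_sp, C_sp
```

Here f would be the curve-refitting map from (x, y, h) to (y_off, b, c0, c1); for a linear f the unscented estimate is exact, which makes a convenient sanity check.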

IV. EXPERIMENTAL EVALUATION

In this section, we will present various examples depicting the true distribution of the map lines for a Gaussian ego position and a given map line. We will show and discuss in detail the limitations of using an approximation to this true distribution while analyzing what information is best to convey over our interface.

We will explore two scenarios: one that is similar to a highway situation and one akin to a street turn situation. Before discussing the results, we will explain the common structure of the experimental figures included here.

A. Figure Explanation

Figures 6 and 7 share the same structure. Each of the plots depicts the line L fitted based on the mean position and heading in solid black. This solid black line L represents the lane marking (LL, RL, CL) of a road plotted in two dimensions. In each plot the positional distribution P is shown in the center of the subplot using an ellipse which contains 99% of the probability mass. This ellipse indicates the position P of the vehicle. The heading distribution is shown in the center with a black V indicating the mean heading (in all cases directly forward) and black arrows between which lies 99% of the probability mass for the heading.

The plots (FIGS. 6 and 7) show a simplified illustration of Monte Carlo samples S. The samples can be seen as arrows whose position and orientation signify (e_x, e_y, e_h) and as black dotted lines which indicate the corresponding (refitted) lines (y_off, b, c0, c1).

B. Analysis

Here we analyze the results in our experimental figures (6 and 7). We assessed the true distribution over lines L by simply looking at the empirical distribution arising from applying curve refitting to the Monte Carlo samples S. We assess the four-dimensional Gaussian distributions over lines which are obtained from Monte Carlo and sigma points individually by looking at their samples on the map as well as their marginal distributions. Our visual comparison will involve these true distribution samples vs. the resulting Gaussian distribution samples.

1) The Gaussian approximation: Our first observation, on our highway scenario, is that if the uncertainty in the heading is reasonable, then the Gaussian distributions arising from both methods seem to fit the true distribution fairly well. The output Gaussian samples encompass the samples from the true distribution. This means we do not expect many false negatives when using the information provided over the interface rather than the true distribution.

However, we see that there exists a class of lines for which all correlations are large (positive or negative). This is our street turn scenario. The marginals already look non-Gaussian, even for small uncertainty in the heading. We also see some indication that there are false negatives, especially for the sigma points method.

2) Significance of the heading distribution: FIG. 6 depicts a scenario where the standard deviation of the heading is 5 degrees. This is already a larger value than commonly encountered in practical experience, and this larger value was chosen to be able to evaluate and confirm the methods even in this range. However, as we see in FIG. 7, changes in the heading distribution do affect the accuracy of the interface. The level depicted in this figure may never be experienced in practical situations, but it gives a good idea of what levels are tolerable for the interface.

Notice that increasing the heading uncertainty leads to increases in the correlations between parameters.

It has been found that increasing the heading uncertainty may create enough false positives to require additional information for decision making. Some of the marginal distributions, especially between the curvature and its rate of change, have become dramatically non-Gaussian; we are assigning more and more probability mass to parameter configurations that do not occur (hence the false positives).

In the example of FIG. 7 the use of the Gaussian approximation rather than the true distribution is clearly detrimental for decision making in our street turn scenario because too many false positives occur.

3) Significance of the covariances: Notice that the correlation matrix shows high correlation between most of the parameters, indicating that the covariance matrix contains important information. Passing only the variances forces the user to effectively use an axis-aligned Gaussian (zero covariances) to model the distribution. We notice that a few of the marginals are very poorly represented. It has also been shown that because of this, the reduction causes too many false negatives, i.e. we fail to detect the true line location in many instances.

4) Trade-off between Monte Carlo and sigma points: Our two ways of deriving the Gaussian approximation of the line parameter distribution seem to yield similar results in all cases (FIGS. 6 and 7). The sigma points method suffers from more false positives. Considering that Monte Carlo is more computationally expensive, we may prefer to use the sigma points results whenever the increase in false positives is acceptable.

FIG. 8A shows an example embodiment of a system architecture using the subject line representation from maps as an input to a lane fusion module 7. FIG. 8B is a block diagram of another example embodiment using the subject line representation directly for planning a route for driving. FIG. 8C is a block diagram of yet another example embodiment using the subject line representation for sanity checks in an ASIL decomposition.

In FIG. 8A, data provided by a camera module 61, a LiDAR module 62, and a map lane detection module 63 are fused by a lane fusion module 7. This lane fusion module 7 may correspond to the lane generator 7' of FIG. 3. The output of the lane fusion module 7 is provided to the planning module 8, which may interface with an autonomous driving module such as the one illustrated in FIG. 1.

In FIG. 8B, a line representation derived from an HD map is directly used for planning.

The architecture of FIG. 8C is a modification of the one illustrated in FIG. 8A. Here, sensor data from a camera module 61 and a LiDAR module 62 are fused by the lane fusion module 7. A voter V compares the output of the lane fusion module 7 with the data provided by the HD map.

V. CONCLUSION

In this document, the impact of uncertain position and heading information on retrieving drivable line information from a digital map is described. Basically, the geometry from the map provides us with a non-linear function (curve refitting) which transforms a 3D Gaussian into a 4D distribution in the drivable line space. A way to visualize and store this complex distribution is provided via sampled curves which were passed through the non-linear function. Less expensive parametric ways of storing the information are provided by approximating it with a Gaussian distribution. It is shown herein that there is a large enough class of lines and scenarios (in particular for highways) where this approximation is sufficient and even desirable for decision making. Two ways of arriving at this distribution were given, one via Monte Carlo sampling and one via the sigma points method.

With regard to the main question of what to pass over the interface, important insights are disclosed. Passing only the mean of the distribution may be sufficient for some applications, but in general we are looking to inform about the uncertainty of the distribution as well. The experimental results show that there is significant and valuable information available in the covariance matrix due to the geometry involved in alternative line recalculation. An approach where we pass only the mean and the variances (and not the covariances) would be equivalent to assuming a distribution that has zero covariance between the parameters; it is shown herein that this can lead to too many false negatives. Passing the mean of the distribution (four values) and the covariance matrix (4 + 6 unique values) over the interface requires passing 14 values in total, rather than four if only passing the mean. This will not detrimentally burden the interface with respect to bandwidth. Therefore it is highly advantageous to use the mean vector and the corresponding covariance matrix for representing drivable lines. As mentioned in section IV-B4, it may be computationally desirable for common applications to obtain this representation via the sigma points method.
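The value count generalizes: for an n-parameter line, the interface carries the n mean values plus the n(n+1)/2 unique entries of the symmetric covariance matrix; for n = 4 this gives the 14 values mentioned above. A trivial sketch:

```python
def interface_payload(n_params=4):
    """Values passed per line over the interface: the mean vector plus the
    unique entries (diagonal + upper triangle) of the symmetric covariance."""
    return n_params + n_params * (n_params + 1) // 2

# For the four-dimensional line vector: 4 mean values + 10 unique
# covariance entries = 14 values in total.
```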

In addition, the chosen description containing the covariance matrix along with the four-dimensional mean value (y_off, b, c0, c1) can naturally be processed by the fusion module. The mean value can be regarded as a measurement update, whereas the corresponding covariance matrix can be handled as a kind of dynamic measurement noise. This dynamic measurement noise, often denoted by R in the Kalman filter domain, changes for every measurement.
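A minimal sketch of how such a measurement could be fused, assuming the fusion module maintains the line parameters directly as its state (so the measurement matrix H is the identity); the numerical values are illustrative only:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Kalman measurement update where the map-line mean z is the measurement
    and its covariance matrix R is the per-measurement (dynamic) noise.
    The state directly contains (y_off, b, c0, c1), so H = I."""
    S = P + R                      # innovation covariance (H = I)
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)        # corrected state estimate
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

# Illustrative fusion of a predicted state with one map-line measurement.
x = np.array([1.4, 0.0, 0.0, 0.0])          # predicted (y_off, b, c0, c1)
P = np.diag([0.09, 0.01, 1e-4, 1e-6])       # predicted covariance
z = np.array([1.5, 0.01, 0.0, 0.0])         # map-line mean as measurement
R = np.diag([0.04, 0.01, 1e-4, 1e-6])       # map-line covariance as dynamic R
x_new, P_new = kalman_update(x, P, z, R)
```

The posterior mean moves toward the measurement and the posterior variances shrink, which is exactly how a covariance-equipped map-line interface plugs into a standard fusion filter.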

Thus the chosen representation for the interface between map line detection and lane fusion naturally fits the standard Kalman filter based fusion algorithms (see FIG. 8A). As the chosen representation for map lines is identical to the output representation of the lane fusion module, the planning module could choose either the map representation or the result of lane fusion (which in principle may use the same interface) for its lane keeping activities without changing its input interface (see FIG. 8B). Finally, the drivable line representation could be used for sanity checks comparing the fusion result to the map representation of the lines. Such a setup is especially helpful for ASIL decomposition. As a distance measure, the Earth Mover's Distance (described at https://en.wikipedia.org/wiki/Earth_mover's_distance) could be used (see FIG. 8C).
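For two line representations that are each given as a mean vector and covariance matrix, the Earth Mover's Distance between their Gaussian approximations has a closed form (the 2-Wasserstein distance between Gaussians). A sketch assuming SciPy is available; the closed form is a standard result, not taken from this disclosure:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """2-Wasserstein (earth mover) distance between N(m1, S1) and N(m2, S2):
    W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2*(S2^(1/2) S1 S2^(1/2))^(1/2))."""
    s2_half = sqrtm(S2)
    cross = np.real(sqrtm(s2_half @ S1 @ s2_half))  # discard numerical imaginary parts
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))
```

A voter as in FIG. 8C could apply this to the fused line and the map line (each a 4D mean plus 4x4 covariance) and flag disagreement above a threshold.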

The system, methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, along with internal and/or external memory. The processes may also, or alternatively, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as computer executable code capable of being executed on a machine readable medium. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof.
In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.

Obviously, many modifications and variations of the present invention are possible in light of the above teachings and may be practiced otherwise than as specifically described while within the scope of the appended claims.