


Title:
GENERATING A FILTERED POINT CLOUD AND GUIDING A VEHICLE AT LEAST IN PART AUTOMATICALLY
Document Type and Number:
WIPO Patent Application WO/2023/285254
Kind Code:
A1
Abstract:
According to a method for generating a filtered point cloud (10) for guiding a vehicle (1), a sensor dataset representing an environment of the vehicle (1) is generated by an active optical sensor system (4) and a sensor point cloud (8) is generated by a computing unit (3). The filtered point cloud (10) is generated as a subset of the sensor point cloud (8) depending on a reference point cloud (9) comprising a plurality of reference points, wherein for each of a plurality of points of the sensor point cloud, the respective point of the sensor point cloud (8) is added to the filtered point cloud (10) only if a spatial region corresponding to a predefined vicinity of the respective point of the sensor point cloud (8) is free of reference points.

Inventors:
GARNAULT ALEXANDRE (FR)
BRADAI BENAZOUZ (FR)
MORENO-LAHORE PEDRO (FR)
BERNARD JEAN-CHARLES (JP)
Application Number:
PCT/EP2022/068826
Publication Date:
January 19, 2023
Filing Date:
July 07, 2022
Assignee:
VALEO SCHALTER & SENSOREN GMBH (DE)
International Classes:
G06V20/58; G06V10/762
Other References:
ASVADI ALIREZA ET AL: "3D Lidar-based static and moving obstacle detection in driving environments: An approach based on voxels and multi-region ground planes", ROBOTICS AND AUTONOMOUS SYSTEMS, ELSEVIER BV, AMSTERDAM, NL, vol. 83, 11 July 2016 (2016-07-11), pages 299 - 311, XP029674186, ISSN: 0921-8890, DOI: 10.1016/J.ROBOT.2016.06.007
KIRAN B RAVI ET AL: "Real-Time Dynamic Object Detection for Autonomous Driving Using Prior 3D-Maps", 23 January 2019, ADVANCES IN DATABASES AND INFORMATION SYSTEMS; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, PAGE(S) 567 - 582, ISBN: 978-3-319-10403-4, XP047501277
MURO SHOTARO ET AL: "Moving-object Tracking with Lidar Mounted on Two-wheeled Vehicle :", PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 1 January 2019 (2019-01-01), pages 453 - 459, XP055966162, ISBN: 978-989-7583-80-3, Retrieved from the Internet DOI: 10.5220/0007948304530459
WOJTECH ROLF: "On-Device Motion Detection", 20 May 2021 (2021-05-20), XP055966344, Retrieved from the Internet [retrieved on 20220929]
Attorney, Agent or Firm:
CLAASSEN, Maarten (DE)
Claims:
Claims

1. Method for generating a filtered point cloud (10) for guiding a vehicle (1) at least in part automatically, wherein a sensor dataset representing an environment of the vehicle (1) is generated by an active optical sensor system (4) of the vehicle (1) and a sensor point cloud (8) is generated by at least one computing unit (3) of the vehicle (1) depending on the sensor dataset, characterized in that the filtered point cloud (10) is generated by the at least one computing unit (3) as a subset of the sensor point cloud (8) depending on a predetermined reference point cloud (9) comprising a plurality of reference points; and for each of a plurality of points of the sensor point cloud (8), the respective point of the sensor point cloud (8) is added to the filtered point cloud (10) by the at least one computing unit (3), only if a spatial region corresponding to a predefined vicinity of the respective point of the sensor point cloud (8) is free of reference points.

2. Method according to claim 1, characterized in that for each of the plurality of points of the sensor point cloud (8), the spatial region corresponding to the predefined vicinity of the respective point of the sensor point cloud (8) is identified and it is determined whether the spatial region is free of reference points by the at least one computing unit (3).

3. Method according to claim 1, characterized in that for all reference points of the reference point cloud (9), a further spatial region corresponding to a predefined further vicinity of the respective reference point of the reference point cloud (9) is identified and it is determined whether the further spatial region is free of points of the sensor point cloud (8) by the at least one computing unit (3).

4. Method for guiding a vehicle (1) at least in part automatically, characterized in that a filtered point cloud (10) is generated by carrying out a method according to one of the preceding claims; and at least one control signal for guiding the vehicle (1) at least in part automatically is generated depending on the filtered point cloud (10).

5. Method according to claim 4, characterized in that a perception algorithm is carried out by the at least one computing unit (3), wherein the filtered point cloud (10) is used as an input to the perception algorithm; and the at least one control signal is generated depending on an output of the perception algorithm.

6. Method according to claim 5, characterized in that the reference point cloud (9) is used as a further input to the perception algorithm; and/or the at least one control signal is generated depending on the output of the perception algorithm and depending on the reference point cloud (9).

7. Method according to one of claims 4 to 6, characterized in that one or more clusters (11a, 11b, 11c, 11d, 12a, 12b) of points of the filtered point cloud (10) are determined and a bounding box and/or an object class is assigned to each of the plurality of clusters (11a, 11b, 11c, 11d, 12a, 12b) by the at least one computing unit (3); and the at least one control signal is generated depending on the assigned bounding boxes and/or object classes.

8. Method according to one of claims 4 to 6, characterized in that a camera image (6) depicting the environment of the vehicle (1) is generated by a camera system (5) of the vehicle (1); one or more clusters (11a, 11b, 11c, 11d, 12a, 12b) of points of the filtered point cloud (10) are determined by the at least one computing unit (3); a region of interest of the camera image (6) is determined by the at least one computing unit (3) depending on the determined one or more clusters (11a, 11b, 11c, 11d, 12a, 12b); an object detection algorithm is applied by the at least one computing unit (3) to the region of interest; and the at least one control signal is generated depending on an output of the object detection algorithm.

9. Electronic vehicle guidance system (2) for a vehicle (1), the electronic vehicle guidance system (2) comprising an active optical sensor system (4), which is configured to generate a sensor dataset representing an environment of the vehicle (1); and at least one computing unit (3), which is configured to generate a sensor point cloud (8) depending on the sensor dataset, characterized in that the at least one computing unit (3) is configured to generate the filtered point cloud (10) as a subset of the sensor point cloud (8) depending on a predetermined reference point cloud (9) comprising a plurality of reference points; and for each of a plurality of points of the sensor point cloud (8), add the respective point of the sensor point cloud (8) to the filtered point cloud (10), only if a spatial region corresponding to a predefined vicinity of the respective point of the sensor point cloud (8) is free of reference points.

10. Electronic vehicle guidance system (2) according to claim 9, characterized in that the active optical sensor system (4) is designed as a laser scanner.

11. Electronic vehicle guidance system (2) according to one of claims 9 or 10, characterized in that the electronic vehicle guidance system (2) comprises a control unit, which is configured to generate at least one control signal for guiding the vehicle (1) at least in part automatically depending on the filtered point cloud (10).

12. Electronic vehicle guidance system (2) according to one of claims 9 to 11, characterized in that the electronic vehicle guidance system (2) comprises a camera system (5) for the vehicle (1), which is configured to generate a camera image (6) depicting the environment of the vehicle (1); and the at least one computing unit (3) is configured to

- determine one or more clusters (11a, 11b, 11c, 11d, 12a, 12b) of points of the filtered point cloud (10);

- determine a region of interest of the camera image (6) depending on the determined one or more clusters (11a, 11b, 11c, 11d, 12a, 12b); and

- apply an object detection algorithm to the region of interest; and

the control unit is configured to generate the at least one control signal depending on an output of the object detection algorithm.

13. Computer program comprising instructions, which, when executed by an electronic vehicle guidance system (2) according to one of claims 9 to 12, cause the electronic vehicle guidance system (2) to carry out a method according to one of claims 1 to 8.

14. Computer readable storage medium storing a computer program according to claim 13.

Description:
Generating a filtered point cloud and guiding a vehicle at least in part automatically

The present invention is directed to a method for generating a filtered point cloud for guiding a vehicle at least in part automatically, wherein a sensor dataset representing an environment of the vehicle is generated by an active optical sensor system of the vehicle and a sensor point cloud is generated by at least one computing unit of the vehicle depending on the sensor dataset. The invention is further directed to a method for guiding a vehicle at least in part automatically, to an electronic vehicle guidance system, to a computer program and to a computer-readable storage medium.

Many automatic and partially automatic driving functions for vehicles, in particular motor vehicles, such as autonomous driving functions or advanced driver assistance systems, ADAS, rely on the efficient processing of data generated by various sensors of the vehicle. In particular, active optical sensor systems, such as lidar systems, may provide a valuable input for these functions. Such active optical sensor systems may provide sensor data, which may be in the form of a two-dimensional or three-dimensional point cloud or may be converted to such a point cloud. However, a point cloud obtained via an active optical sensor system potentially contains a large number of points for each sensor frame. Consequently, the processing of such big point clouds requires a significant amount of computing resources in terms of memory and computational time. In particular in view of embedded systems, commonly used in the context of automotive applications, this is a drawback.

It is an object of the present invention to reduce the computational requirements for processing sensor data for guiding a vehicle at least in part automatically, in particular sensor data obtained by means of an active optical sensor system.

This object is achieved by the subject-matter of the independent claims. Further implementations and preferred embodiments are subject-matter of the dependent claims.

The invention is based on the idea to filter a sensor point cloud by comparing the sensor point cloud to a predetermined reference point cloud such that the filtered point cloud contains only points, which do not have a corresponding reference point in a predefined vicinity. According to an aspect of the invention, a method for generating a filtered point cloud, which is suitable for being used for guiding a vehicle, in particular a motor vehicle, at least in part automatically is provided. Therein, a sensor dataset representing an environment of the vehicle is generated by an active optical sensor system of the vehicle, and a sensor point cloud is generated by at least one computing unit of the vehicle depending on the sensor dataset. The filtered point cloud is generated by the at least one computing unit as a subset of the sensor point cloud depending on a predetermined reference point cloud, which comprises a plurality of reference points. The sensor point cloud comprises a plurality of points, and for each of the points of the sensor point cloud, the respective point of the sensor point cloud is added to the filtered point cloud by the at least one computing unit, only if a spatial region corresponding to a predefined vicinity of the respective point of the sensor point cloud is free of reference points, in particular if and only if the spatial region is free of reference points of the reference point cloud.
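Purely by way of illustration, the filtering criterion described above can be sketched in a few lines of code. The sketch below assumes Python with the NumPy and SciPy libraries, Cartesian coordinates in a common map frame, and an illustrative function name and radius value; none of these choices are features of the invention.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_point_cloud(sensor_points: np.ndarray,
                       reference_points: np.ndarray,
                       radius: float = 0.1) -> np.ndarray:
    """Keep only sensor points whose predefined vicinity (a sphere of the
    given radius, in meters) contains no reference point.

    sensor_points:    (N, 3) array in the same coordinate frame as the map
    reference_points: (M, 3) array of the predetermined reference point cloud
    """
    # Index the reference point cloud once, so that every vicinity check is
    # a fast nearest-neighbor query instead of a scan over all reference points.
    reference_tree = cKDTree(reference_points)

    # Distance from each sensor point to its closest reference point.
    nearest_distance, _ = reference_tree.query(sensor_points, k=1)

    # A point is added to the filtered point cloud only if its vicinity is
    # free of reference points, i.e. the closest reference point is farther
    # away than the radius.
    return sensor_points[nearest_distance > radius]
```

Points whose nearest reference point lies inside the radius are discarded, which corresponds to removing the static structure already represented by the reference point cloud.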

Here and in the following, “light” may be understood such that it comprises electromagnetic waves in the visible range, in the infrared range and/or in the ultraviolet range. Accordingly, the expression “optical” may be understood to be related to light according to this meaning.

By definition, an active optical sensor system comprises a light source for emitting light or light pulses, respectively. For example, the light source may be implemented as a laser, in particular as an infrared laser. Furthermore, an active optical sensor system comprises, by definition, at least one optical detector to detect reflected parts of the emitted light. In particular, the active optical sensor system is configured to generate one or more sensor signals, which represent the sensor dataset, based on the detected fractions of the light, and process and/or output the sensor signals.

For example, the active optical sensor system may be implemented as a lidar system. A known design of lidar systems is the so-called laser scanner, in which a laser beam is deflected by means of a deflection unit so that different deflection angles of the laser beam may be realized. The deflection unit may, for example, contain a rotatably mounted mirror. Alternatively, the deflection unit may include a mirror element with a tiltable and/or pivotable surface. The mirror element may, for example, be configured as a micro-electro-mechanical system, MEMS. In the environment, the emitted laser beams can be partially reflected, and the reflected portions may in turn hit the laser scanner, in particular the deflection unit, which may direct them to a detector unit of the laser scanner comprising the at least one optical detector. In particular, each optical detector of the detector unit generates an associated detector signal based on the portions detected by the respective optical detector. Based on the spatial arrangement of the respective optical detector, together with the current position of the deflection unit, in particular its rotational position or its tilting and/or pivoting position, it is thus possible to conclude the direction of incidence of the detected reflected components of light. A processing unit or an evaluation unit of the laser scanner may, for example, perform a time-of-flight measurement to determine a radial distance of the reflecting object. Alternatively or additionally, a method, according to which a phase difference between the emitted and detected light is evaluated, may be used to determine the distance.

A point cloud may be understood as a set of points, wherein each point is characterized by respective coordinates in a two-dimensional or three-dimensional coordinate system.

In case of a three-dimensional point cloud, the three-dimensional coordinates may, for example, be determined by the direction of incidence of the reflected components of light and the corresponding time-of-flight or radial distance measured for this respective point. In other words, the three-dimensional coordinate system may be a three-dimensional polar coordinate system. However, the information may also be pre-processed to provide three-dimensional Cartesian coordinates for each of the points. In general, the points of a point cloud are given in an orderless or unsorted manner, in contrast to, for example, a camera image. In addition to the spatial information, namely the two-dimensional or three-dimensional coordinates, the point cloud may also store additional information or measurement values for the individual points such as an echo pulse width, EPW, of the respective sensor signal.
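As an illustration of such a pre-processing step, the following sketch converts measurements given as azimuth angle, elevation angle and time of flight into Cartesian coordinates; the radial distance follows from the round-trip time as d = c·Δt/2. The angle conventions, array layout and function name are assumptions made for the example only.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def polar_to_cartesian(azimuth: np.ndarray,
                       elevation: np.ndarray,
                       time_of_flight: np.ndarray) -> np.ndarray:
    """Convert lidar measurements into an (N, 3) Cartesian point cloud.

    azimuth, elevation: directions of incidence in radians
    time_of_flight:     round-trip times of the reflected pulses in seconds
    """
    # Radial distance of the reflecting object from the time-of-flight
    # measurement: the pulse travels to the object and back.
    r = SPEED_OF_LIGHT * time_of_flight / 2.0

    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)
```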

A computing unit may in particular be understood as a data processing device. The computing unit can therefore in particular process data to perform computing operations. This may also include operations to perform indexed accesses to a data structure, for example a look-up table, LUT.

In particular, the computing unit may include a computer, a microcontroller, or an integrated circuit, for example, an application-specific integrated circuit, ASIC, a field-programmable gate array, FPGA, a system on a chip, SoC, and/or an electronic control unit, ECU. The computing unit may also include a processor, for example a microprocessor, a central processing unit, CPU, a graphics processing unit, GPU, and/or a signal processor, in particular a digital signal processor, DSP. The computing unit may also include a physical or a virtual cluster of computers or other of said units. In this case, the computing unit may also be referred to as a computing system, for example.

In various embodiments, the computing unit includes one or more hardware and/or software interfaces and/or one or more memory units. A memory unit may be implemented as a volatile data memory, for example a dynamic random access memory, DRAM, a static random access memory, SRAM, or as a non-volatile data memory, for example a read-only memory, ROM, a programmable read-only memory, PROM, an erasable read-only memory, EPROM, an electrically erasable read-only memory, EEPROM, a flash memory or flash EEPROM, a ferroelectric random access memory, FRAM, a magnetoresistive random access memory, MRAM, or a phase-change random access memory, PCRAM.

The at least one computing unit may, for example, comprise a pre-processing or evaluation unit of the active optical sensor system and/or one or more computing units of the vehicle, which are arranged separately from the active optical sensor system. In particular, the sensor point cloud may be generated based on the sensor dataset partly or completely by the pre-processing or evaluation unit of the active optical sensor system. However, the sensor point cloud may also be generated partly or completely by the one or more computing units external to the active optical sensor system.

The reference point cloud is, in particular, stored on a memory unit of the vehicle or a memory unit of the at least one computing unit and also represents the environment of the vehicle. In general, the reference point cloud is determined and stored on the memory unit prior to the method according to the invention, such that it may be considered as a map, in particular a high-definition map, which is static and represents the environment at a previous point in time. However, in some implementations, also determining and storing the reference point cloud on the memory may be considered as a part of the method according to the invention.

The reference point cloud may cover a greater spatial region in the environment than the sensor point cloud. The reference point cloud may be determined by one or more further active optical sensor systems or laser scanners et cetera, which are mounted to one or more further vehicles.

Since each point of the sensor point cloud as well as each reference point of the reference point cloud corresponds to a well-defined position in the real environment, the sensor point cloud and the reference point cloud or, in other words, their respective positions, may be compared to each other, and a distance between individual points of the sensor point cloud and individual points of the reference point cloud may be determined. In this way, the at least one computing unit may analyze for each of the points of the sensor point cloud whether the spatial region in the real world corresponding to the vicinity of the respective point contains any of the reference points of the reference point cloud. This may be done by analyzing the respective spatial region for each of the considered points of the sensor point cloud. Alternatively, respective vicinities of the reference points of the reference point cloud may be analyzed for points of the sensor point cloud. In both ways, it may be ensured that only such points of the sensor point cloud are added to the filtered point cloud, which have a spatial region free of reference points. In other words, points of the sensor point cloud, which lie closer to any reference point of the reference point cloud than allowed by the predefined vicinity, are not added to the filtered point cloud. In yet other words, the respective point of the sensor point cloud is discarded for generating the filtered point cloud.

In this way, the number of points of the filtered point cloud is significantly reduced compared to the number of points of the sensor point cloud. Nevertheless, the loss of information by the filtering is acceptable since the reference point cloud contains the respective reference points in the spatial region around the discarded point of the sensor point cloud. Since measurements by active optical sensor systems are, in general, not absolutely exact, it is necessary to define a certain vicinity of the points of the sensor point cloud to be analyzed for reference points. The vicinity may, for example, be defined by a respective threshold distance.

The sensor point cloud, which may also be considered as a live point cloud or online point cloud of a respective frame during operation of the vehicle, contains points corresponding to objects, which already have been present in the environment at the time the reference point cloud has been established. These points correspond, for example, to an infrastructure of the environment, road parts or other objects, which may in the following be denoted as static objects. In addition, the sensor point cloud contains points, which correspond to objects, which have not been present in the environment at the time the reference point cloud has been established. These objects may, for example, include other road users such as further vehicles, pedestrians et cetera or changes in the infrastructure. By generating the filtered point cloud in the described manner, it is achieved that, for each frame, the points corresponding to static objects may be removed from the sensor point cloud, which significantly reduces the amount of points to be processed and stored and therefore significantly reduces the required computational time and memory. Therefore, the subsequent processing of the filtered point cloud instead of the sensor point cloud, in some implementations together with the reference point cloud, allows for a reliable and robust perception of the environment at reduced computational requirements.

The filtered point cloud may then, in particular by an electronic vehicle guidance system of the vehicle, be used as an input for guiding the vehicle at least in part automatically. For example, the at least one computing unit may use the filtered point cloud as an input for an automatic perception algorithm, for example, a computer vision algorithm or another perception algorithm.

Visual perception algorithms, also denoted as algorithms for automatic visual perception, computer vision algorithms or machine vision algorithms, may be considered as computer algorithms for performing a visual perception task automatically. A visual perception task may, for example, be understood as a task for extracting information from image data. In particular, the visual perception task may in principle be performed by a human, who is able to visually perceive an image corresponding to the image data. By means of a visual perception algorithm, the visual perception task is, however, performed automatically without requiring the support of a human. For example, a visual perception algorithm may be understood as an image processing algorithm or an algorithm for image analysis, which may, for example, be trained using machine learning and may, in particular, be based on an artificial neural network, for example, a convolutional neural network. Examples for visual perception algorithms include object detection algorithms, object tracking algorithms, classification algorithms, in particular image classification algorithms, and/or segmentation algorithms, in particular semantic segmentation algorithms.

Corresponding algorithms may analogously be performed based on input data other than images visually perceivable by a human. For example, point clouds or images of infrared cameras et cetera may also be analyzed by means of correspondingly adapted computer algorithms. Strictly speaking, however, the corresponding algorithms are not visual perception algorithms since the corresponding sensors may operate in domains, which are not perceivable by the human eye, such as the infrared range. Therefore, here and in the following, such algorithms are denoted as perception or automatic perception algorithms. Perception algorithms therefore include visual perception algorithms but are not restricted to tasks perceivable by a human. Consequently, a perception algorithm according to this understanding may be considered as a computer algorithm for performing a perception task automatically, for example, by using an algorithm for sensor data analysis or for processing sensor data, which may, for example, be trained using machine learning and may, for example, be based on an artificial neural network. Also the generalized perception algorithms may include object detection algorithms, object tracking algorithms, classification algorithms and/or segmentation algorithms, such as semantic segmentation algorithms.

In case an artificial neural network is used to implement a visual perception algorithm, a commonly used architecture is a convolutional neural network, CNN. In particular, a 2D-CNN may be applied to respective 2D-camera images. Also for other perception algorithms, CNNs may be used. For example, 3D-CNNs, 2D-CNNs or 1D-CNNs may be applied to point clouds depending on the spatial dimensions of the point cloud and the details of processing.

The output of a perception algorithm depends on the specific underlying perception task. For example, an output of an object detection algorithm may include one or more bounding boxes defining a spatial location and, optionally, orientation of one or more respective objects in the environment and/or corresponding object classes for the one or more objects. The output of a semantic segmentation algorithm applied to a camera image may include a pixel level class for each pixel of the camera image. Analogously, the output of a semantic segmentation algorithm applied to a point cloud may include a corresponding point level class for each of the points. The pixel level classes or point level classes may, for example, define a type of object the respective pixel or point belongs to.

According to several implementations of the method for generating a filtered point cloud, for each of the plurality of points of the sensor point cloud, the spatial region corresponding to the predefined vicinity of the respective point of the sensor point cloud is identified and it is determined whether the spatial region is free of reference points by the at least one computing unit.

According to several implementations, the spatial region is identified as a spherical region, which has a predefined radius, and the respective point of the sensor point cloud corresponds to a center point of this spherical region. In other words, only such points of the sensor point cloud are added to the filtered point cloud, which have a distance to all of the reference points, which is greater than the radius of the spherical region.

According to several implementations, for all reference points of the reference point cloud, a further spatial region corresponding to a predefined further vicinity of the respective reference point of the reference point cloud is identified and it is determined whether the further spatial region is free of points of the sensor point cloud by the at least one computing unit.
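A sketch of this reference-centered variant is given below; it again assumes Python with NumPy and SciPy and an illustrative radius, and it discards every sensor point that falls into the further vicinity of at least one reference point. For equal radii, this yields the same filtered point cloud as the point-centered check.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_reference_centered(sensor_points: np.ndarray,
                              reference_points: np.ndarray,
                              radius: float = 0.1) -> np.ndarray:
    """Discard every sensor point lying within the further vicinity
    (a ball of the given radius) of any reference point."""
    # Index the sensor point cloud so that each reference point can query
    # the sensor points inside its further vicinity.
    sensor_tree = cKDTree(sensor_points)
    indices_per_reference = sensor_tree.query_ball_point(reference_points,
                                                         r=radius)

    # Collect the indices of all sensor points that fall into at least one
    # further vicinity; these points are not added to the filtered cloud.
    discarded = set()
    for indices in indices_per_reference:
        discarded.update(indices)

    keep = np.ones(len(sensor_points), dtype=bool)
    keep[list(discarded)] = False
    return sensor_points[keep]
```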

According to a further aspect of the invention, a method for guiding a vehicle at least in part automatically is provided. Therein, a filtered point cloud is generated by carrying out a method for generating a filtered point cloud according to the invention. At least one control signal for guiding the vehicle at least in part automatically is generated depending on the filtered point cloud, in particular by a control unit of the vehicle, for example, of the at least one computing unit. In particular, the vehicle is guided at least in part automatically depending on the at least one control signal.

For example, the vehicle may comprise an electronic vehicle guidance system, which comprises the at least one computing unit, the active optical sensor system and the control unit, and the method for guiding a vehicle at least in part automatically is carried out by the electronic vehicle guidance system.

An electronic vehicle guidance system may be understood as an electronic system, configured to guide a vehicle in a fully automated or a fully autonomous manner and, in particular, without a manual intervention or control by a driver or user of the vehicle being necessary. The vehicle carries out all required functions, such as steering maneuvers, deceleration maneuvers and/or acceleration maneuvers as well as monitoring and recording the road traffic and corresponding reactions automatically. In particular, the electronic vehicle guidance system may implement a fully automatic or fully autonomous driving mode according to level 5 of the SAE J3016 classification. An electronic vehicle guidance system may also be implemented as an advanced driver assistance system, ADAS, assisting a driver for partially automatic or partially autonomous driving. In particular, the electronic vehicle guidance system may implement a partly automatic or partly autonomous driving mode according to levels 1 to 4 of the SAE J3016 classification. Here and in the following, SAE J3016 refers to the respective standard dated June 2018. Guiding the vehicle at least in part automatically may therefore comprise guiding the vehicle according to a fully automatic or fully autonomous driving mode according to level 5 of the SAE J3016 classification. Guiding the vehicle at least in part automatically may also comprise guiding the vehicle according to a partly automatic or partly autonomous driving mode according to levels 1 to 4 of the SAE J3016 classification.

The at least one control signal may be provided to one or more actuators of the vehicle for implementing the automatic or partly automatic control of the vehicle accordingly.

According to several implementations of the method for guiding a vehicle at least in part automatically, a perception algorithm is carried out by the at least one computing unit, wherein the filtered point cloud is used as an input to the perception algorithm. The at least one control signal is generated depending on an output of the perception algorithm.

It should be understood that the output of the perception algorithm, for example a segmented image, a segmented point cloud, object tracking information, bounding boxes et cetera, may be further processed or used in further processing steps in order to determine one or more parameters or the like for guiding the vehicle at least in part automatically, and the at least one control signal is generated depending on the one or more parameters.

According to several implementations, the reference point cloud is used as a further input to the perception algorithm, in particular for generating the output of the perception algorithm.

In particular, the reference point cloud does not only serve as a resource for filtering the sensor point cloud but also as an additional source of information regarding, in particular, the static objects in the environment. By considering the reference point cloud as well as the filtered point cloud for the perception algorithm, an improved functionality and/or a higher reliability of the output of the perception algorithm is achieved.

According to several implementations, the at least one control signal is generated depending on the output of the perception algorithm and depending on the reference point cloud. In other words, the at least one control signal is generated taking into account the reference point cloud explicitly and not only via its influence on the filtered point cloud.

According to several implementations, one or more clusters of points of the filtered point cloud are determined by the at least one computing unit and a bounding box and/or an object class is assigned to each of the plurality of clusters by the at least one computing unit. The at least one control signal is generated depending on the assigned bounding boxes and/or assigned object classes.

Therein, determining the one or more clusters and assigning the bounding boxes and/or object classes to each of the clusters may be considered to be a part of the perception algorithm.

In particular, the perception algorithm may comprise an object detection algorithm, which is directly applied to the filtered point cloud or directly applied to the filtered point cloud and the reference point cloud.

The bounding boxes assigned to each of the clusters may correspond to two-dimensional geometric figures in case the filtered point cloud is provided as a two-dimensional point cloud or to three-dimensional geometric figures in case the filtered point cloud is provided as a three-dimensional point cloud. For example, the respective bounding boxes may be implemented as rectangles or cuboids, respectively.
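Purely as an illustration of such a bounding box, the following sketch computes an axis-aligned rectangle or cuboid from the points of a cluster; rotated boxes or boxes regressed by a learned object detection algorithm are equally possible and are not covered by this simple example.

```python
import numpy as np

def axis_aligned_bounding_box(cluster: np.ndarray):
    """Return (min_corner, max_corner) of an axis-aligned bounding box.

    cluster: (N, 2) array of points for a rectangle,
             (N, 3) array of points for a cuboid.
    """
    # The box spans the per-axis extremes of the cluster points.
    min_corner = cluster.min(axis=0)
    max_corner = cluster.max(axis=0)
    return min_corner, max_corner
```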

The object classes assigned to the clusters may correspond to the respective types of objects represented by the corresponding points of the cluster or, in other words, to the type of the object, which causes the corresponding reflections of light emitted by the active optical sensor system. The bounding boxes and/or the object classes may be considered as an output of the perception algorithm.

According to several implementations, a camera image depicting the environment of the vehicle is generated by a camera system of the vehicle. One or more clusters of points of the filtered point cloud are determined by the at least one computing unit, and a region of interest of the camera image is determined by the at least one computing unit depending on the determined one or more clusters. An object detection algorithm is applied by the at least one computing unit to the region of interest. The at least one control signal is generated depending on an output of the object detection algorithm. In other words, the object detection algorithm is applied to the camera image, wherein the object detection algorithm is restricted to the region of interest. It should be understood that the region of interest may contain one or more regions of the camera image, which may or may not be connected to each other. In this way, the computational effort, computation time and memory requirements for carrying out the object detection algorithm are reduced.

Also here, the output of the object detection algorithm may comprise respective bounding boxes and/or object classes for each of the objects identified in the region of interest of the camera image. The region of interest may contain one or more connected or disconnected regions, for example, up to one region for each of the identified clusters.

According to a further aspect of the invention, an electronic vehicle guidance system for a vehicle is provided. The electronic vehicle guidance system comprises an active optical sensor system for being mounted to the vehicle, which is configured to generate a sensor dataset representing an environment of the vehicle, in particular when it has been mounted to the vehicle. The electronic vehicle guidance system comprises at least one computing unit, which is configured to generate a sensor point cloud depending on the sensor dataset. The at least one computing unit is configured to generate the filtered point cloud as a subset of the sensor point cloud depending on a predetermined reference point cloud comprising a plurality of reference points. The at least one computing unit is configured to, for each of a plurality of points of the sensor point cloud, add the respective point of the sensor point cloud to the filtered point cloud, only if a spatial region corresponding to a predefined vicinity of the respective point of the sensor point cloud is free of reference points of the reference point cloud.

According to several implementations of the electronic vehicle guidance system, the active optical sensor system is designed as a lidar system, in particular a laser scanner lidar system.

According to several implementations, the electronic vehicle guidance system, in particular the at least one computing unit, comprises a control unit, which is configured to generate at least one control signal for guiding the vehicle at least in part automatically depending on the filtered point cloud.

According to several implementations, the electronic vehicle guidance system comprises a camera system, in particular for being mounted to the vehicle, which is configured to generate a camera image depicting the environment of the vehicle, in particular when it has been mounted to the vehicle. The at least one computing unit is configured to determine one or more clusters of points of the filtered point cloud and to determine a region of interest of the camera image depending on the determined one or more clusters. The at least one computing unit is configured to apply an object detection algorithm to the region of interest. The control unit is configured to generate the at least one control signal depending on an output of the object detection algorithm.

Further implementations of the electronic vehicle guidance system according to the invention follow directly from the various implementations of the method for generating a filtered point cloud according to the invention and the implementations of the method for guiding a vehicle according to the invention and vice versa. In particular, an electronic vehicle guidance system according to the invention is configured to carry out a method according to the invention or carries out such a method.

According to a further aspect of the invention, a vehicle, in particular a motor vehicle, is provided, which comprises an electronic vehicle guidance system according to the invention.

According to a further aspect of the invention, a computer program comprising instructions is provided. When the computer program or the instructions, respectively, are executed by an electronic vehicle guidance system according to the invention, in particular by the at least one computing unit of the electronic vehicle guidance system, the instructions cause the electronic vehicle guidance system to carry out a method according to the invention.

According to a further aspect of the invention, a computer-readable storage medium storing a computer program according to the invention is provided.

Further features of the invention are apparent from the claims, the figures and the figure description. The features and combinations of features mentioned above in the description as well as the features and combinations of features mentioned below in the description of figures and/or shown in the figures may be comprised by the invention not only in the respective combination stated, but also in other combinations. In particular, embodiments and combinations of features, which do not have all the features of an originally formulated claim, are also comprised by the invention. Moreover, embodiments and combinations of features which go beyond or deviate from the combinations of features set forth in the recitations of the claims are comprised by the invention.

In the figures:

Fig. 1 shows schematically a vehicle comprising an exemplary implementation of an electronic vehicle guidance system according to the invention;

Fig. 2 shows a schematic flow diagram of an exemplary implementation of a method according to the invention;

Fig. 3 shows schematically a camera image representing an environment of a vehicle;

Fig. 4 shows schematically a sensor point cloud representing an environment of a vehicle;

Fig. 5 shows schematically a reference point cloud;

Fig. 6 shows schematically a filtered point cloud;

Fig. 7 shows schematically clusters of points of a filtered point cloud;

Fig. 8 shows schematically a camera image and a reference point cloud; and

Fig. 9 shows schematically a camera image and a reference point cloud.

Fig. 1 shows schematically a motor vehicle 1 comprising an exemplary implementation of an electronic vehicle guidance system 2 according to the invention.

The electronic vehicle guidance system 2 comprises at least one computing unit 3 and a lidar system 4, which is, for example, implemented as a laser scanner. Optionally, the vehicle guidance system 2 also comprises a camera 5 mounted on the vehicle 1. The vehicle guidance system 2 may carry out a method for generating a filtered point cloud according to the invention and/or a method for guiding the vehicle 1 at least in part automatically according to the invention. The function of the vehicle guidance system 2 and corresponding methods are explained in the following in more detail with reference to the figures Fig. 2 to Fig. 7.

Fig. 2 shows a flow diagram of an exemplary implementation of a method for generating a filtered point cloud according to the invention. Therein, step S1 may be considered as a part of the method or may be carried out prior to the method according to the invention, while steps S2 and S3 are comprised by the method according to the invention. Step S4 is an optional step of the method according to the invention.

In step S1, a reference point cloud 9, as schematically shown in Fig. 5, depicting an environment of the vehicle 1 is generated and stored on a memory unit (not shown) of the at least one computing unit 3. To this end, one or more further vehicles (not shown) may be equipped with respective lidar systems to gather a large number of reference scan points representing the road and other static objects in the environment. The reference point cloud 9 may then be considered as a high-definition map, HD map, of the environment. The reference point cloud 9 may also be considered as an offline HD map, which remains unchanged during individual subsequent sensor frames of the lidar system 4. The reference point cloud 9 contains points corresponding to infrastructure elements such as trees, buildings, road, landscape et cetera. Mobile or dynamic objects, such as pedestrians, vehicles, mobile barriers et cetera are preferably not contained in the reference point cloud 9.

The at least one computing unit 3 may accurately determine the position of the vehicle 1 in the reference coordinate system corresponding to the reference point cloud 9 by using known methods, for example, based on the evaluation of GPS signals and various sensor data of the vehicle, including but not limited to data obtained by means of the lidar system 4 and/or the camera 5.
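Once the pose of the vehicle 1 in the reference coordinate system is known, the sensor point cloud of each frame can be expressed in that same coordinate system before the comparison of step S3. The following sketch assumes the estimated pose is available as a rotation matrix and a translation vector; how the pose is obtained, and the function name itself, are assumptions outside the scope of the example.

```python
import numpy as np

def to_reference_frame(sensor_points: np.ndarray,
                       rotation: np.ndarray,
                       translation: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from the vehicle coordinate system
    into the reference (map) coordinate system.

    rotation:    (3, 3) rotation matrix of the vehicle pose in the map frame
    translation: (3,)   position of the vehicle origin in the map frame
    """
    # Rotate each point into the map orientation, then shift by the
    # vehicle position.
    return sensor_points @ rotation.T + translation
```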

In step S2, the lidar system 4 generates, during a respective sensor frame, a sensor dataset representing the environment of the vehicle 1 and the at least one computing unit 3 generates a sensor point cloud 8 depending on the sensor dataset, as depicted schematically in Fig. 4. The sensor point cloud 8 comprises points corresponding to the static objects also represented by the reference point cloud 9. In addition, the sensor point cloud 8 may comprise additional points corresponding to changes in the environment or in the infrastructure or to mobile or dynamic objects in the environment. A few examples of such objects are depicted schematically in the camera image of Fig. 3. For example, there may be persons 7a, 7b or other objects 7c, 7d in the scene.

In step S3, the at least one computing unit 3 may compare the sensor point cloud 8 to the reference point cloud 9 in order to generate a filtered point cloud 10, as depicted in Fig. 6, wherein the static objects are effectively removed from the sensor point cloud 8. To this end, the at least one computing unit 3 may analyze a spatial region corresponding to a predefined vicinity of a given point of the sensor point cloud 8 and discard this point or, in other words, not add this point to the filtered point cloud 10, if there is a reference point of the reference point cloud 9 in the vicinity of the respective point of the sensor point cloud 8. For example, only points of the sensor point cloud 8, which do not have a reference point of the reference point cloud 9 closer than a predefined threshold distance, for example 10 cm, are added to the filtered point cloud 10.
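For this concrete threshold, a brute-force form of the comparison could read as follows; it is functionally equivalent to indexing the reference point cloud with a spatial data structure, only slower, and the 0.10 m default merely mirrors the example threshold mentioned above.

```python
import numpy as np

def filter_brute_force(sensor_points: np.ndarray,
                       reference_points: np.ndarray,
                       threshold: float = 0.10) -> np.ndarray:
    """Keep a sensor point only if no reference point is closer than the
    threshold distance (here 10 cm, as in the example above)."""
    kept = []
    for point in sensor_points:
        # Distance of this sensor point to every reference point.
        distances = np.linalg.norm(reference_points - point, axis=1)
        if np.all(distances > threshold):
            kept.append(point)
    return np.asarray(kept)
```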

Fig. 6 shows respective clusters of points 11a, 11b, 11c, 11d in the filtered point cloud 10, which are associated to the objects 7a, 7b, 7c, 7d in the camera image 6.

In step S4, the filtered point cloud 10 may then be used by the at least one computing unit 3 to carry out a perception algorithm, for example an object detection algorithm or a segmentation algorithm.

To this end, the at least one computing unit 3 may directly apply the perception algorithm to the filtered point cloud 10. Alternatively, the at least one computing unit 3 may use the filtered point cloud 10 to determine a respective region of interest in the camera image 6 and apply the perception algorithm to the camera image 6 while restricting it to the region of interest.

In Fig. 7, it is schematically shown that one or more clusters 12a, 12b of points are identified in the filtered point cloud 10 by the at least one computing unit 3. For extracting the clusters 12a, 12b, known approaches or known methods may be used. After the clustering, the locations of the clusters 12a, 12b in the camera image 6 may be determined. For example, the coordinates of the clusters 12a, 12b may be extracted in the local vehicle coordinate system.
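One known approach that could be used for this clustering step is density-based clustering. The sketch below relies on the DBSCAN implementation of scikit-learn; the library choice, the function name and the parameter values are assumptions made for the illustration and are not prescribed by the method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_clusters(filtered_points: np.ndarray,
                     eps: float = 0.5,
                     min_samples: int = 5):
    """Group the filtered point cloud into clusters of nearby points.

    Returns a list of (N_i, 3) arrays, one per detected cluster; points
    labelled as noise (label -1) are ignored.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(filtered_points)
    return [filtered_points[labels == label]
            for label in np.unique(labels)
            if label != -1]
```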

For each cluster 12a, 12b, the coordinates of the cluster in the vehicle coordinate system may be transformed to pixel coordinates of the camera system using the respective calibration parameters for the camera 5. The corresponding pixel coordinates may be denoted as region of interest. The at least one computing unit 3 may then apply the classification and/or detection algorithm to the region of interest and provide its result to following software modules of the perception system.
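A sketch of such a transformation using a pinhole camera model is given below. The intrinsic matrix and the extrinsic pose of the camera 5 would in practice come from the calibration; here they are plain function arguments, and the region of interest is returned as simple pixel bounds, both conventions being assumptions made only for this example.

```python
import numpy as np

def cluster_to_roi(cluster: np.ndarray,
                   intrinsics: np.ndarray,
                   cam_rotation: np.ndarray,
                   cam_translation: np.ndarray):
    """Project a cluster given in vehicle coordinates into the image and
    return its region of interest as (u_min, v_min, u_max, v_max).

    intrinsics:      (3, 3) camera matrix from the calibration
    cam_rotation:    (3, 3) rotation from the vehicle frame to the camera frame
    cam_translation: (3,)   translation from the vehicle frame to the camera frame
    """
    # Transform the cluster points into the camera coordinate system.
    points_cam = cluster @ cam_rotation.T + cam_translation

    # Pinhole projection; the cluster is assumed to lie in front of the
    # camera, i.e. all depths are positive.
    projected = points_cam @ intrinsics.T
    pixels = projected[:, :2] / projected[:, 2:3]

    # The region of interest is the pixel-aligned bounding box of the
    # projected cluster points.
    u_min, v_min = pixels.min(axis=0)
    u_max, v_max = pixels.max(axis=0)
    return u_min, v_min, u_max, v_max
```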

In this way, a significant reduction of the processing power and time consumption compared to traditional camera detection systems may be achieved, in which the whole image has to be searched for patterns to be recognized.

It is noted that a precise HD map with infrastructure and other static obstacles in terms of the reference point cloud 9 may also be used for other applications in the context of automated driving. Automated driving vehicle systems may be equipped with software processing modules for estimating the surrounding objects' possible trajectories based on their past trajectories, orientation, motion model, further characteristics et cetera. Also, the ego vehicle's possible trajectories may be estimated analogously, and a respective trajectory the ego vehicle should follow may be selected. Among the trajectories, which are calculated, some may be discarded because of a possible collision with surrounding objects, because they may infringe traffic rules and/or because they may lead to collisions with infrastructure obstacles.

Assuming a precise HD map with infrastructure obstacles is given, such as the reference point cloud 9, and that the vehicle 1 is precisely localized in this HD map, it is possible for a trajectory estimation system of the vehicle 1 to refine this trajectory calculation or estimation, as the HD map provides information about the infrastructure, which may go beyond the field of view of the vehicle's sensors 4, 5. The drivable area for the vehicle 1 may then be estimated, even if the infrastructure is not in direct line of sight of the sensors 4, 5 or is occluded.

In Fig. 8, an example for an HD point cloud map 13 is shown as well as an inset showing a respective camera image 15. The vehicle's sensor perception is, for example, limited to approximately 150 m. The information contained in the point cloud map 13, however, enables a description of the infrastructure over several hundreds of meters including, for example, the road boundaries. The box 14 indicates to what extent the at least one computing unit 3 is able to estimate a safe trajectory for the vehicle 1.

Fig. 9 shows another example for an HD point cloud map 16 and an inset showing a corresponding camera image 18. The box 17 indicates a position of an emergency area for the vehicle 1 to safely stop, which is identifiable in the point cloud map 16 but not in the field of view of the camera 5 at the moment of the snapshot given by the camera image 18.