Title:
METHOD FOR DETERMINING A RESPECTIVE BOUNDARY OF AT LEAST ONE OBJECT, SENSOR DEVICE, DRIVER ASSISTANCE DEVICE AND MOTOR VEHICLE
Document Type and Number:
WIPO Patent Application WO/2015/176933
Kind Code:
A1
Abstract:
The invention relates to a method for determining a respective boundary (8) of at least one object (7) in an environmental region (5) of a motor vehicle (1) based on sensor data of an optical sensor (3) of a sensor device (2) of the motor vehicle (1), wherein the following steps are performed by means of an image processing device (4) of the sensor device (2). A point cloud (11) with a plurality of points (10) is determined based on the sensor data and the point cloud (11) is transformed into a plan view image (13), which represents the environmental region (5) of the motor vehicle (1). With the image processing device (4), the following steps are performed in the plan view image (13). A reference point (14) is determined in the plan view image (13), which describes a position of the optical sensor (3), and the plan view image (13) is divided into a plurality of segments (15) starting from the reference point (14). Furthermore, a respective selection point (9) from the points (10) of the point cloud (11) is determined in the respective segment (15), wherein the respective selection point (9) in the segment (15) has the lowest distance to the reference point (14) compared to the other points (10) of the point cloud (11) in the segment (15). The respective boundary (8) of the at least one object (7) is determined based on the respectively determined selection points (9).

Inventors:
NGUYEN DUONG-VAN (IE)
HUGHES CIÁRAN (IE)
HORGAN JONATHAN (IE)
Application Number:
PCT/EP2015/059400
Publication Date:
November 26, 2015
Filing Date:
April 29, 2015
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
G06K9/00; G06K9/32
Other References:
RUIMIN ZOU: "Free Space Detection Based On Occupancy Gridmaps", 1 April 2012 (2012-04-01), XP055202458, Retrieved from the Internet [retrieved on 20150715]
HERNÁN BADINO ET AL: "Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming", INTERNET CITATION, 1 January 2007 (2007-01-01), pages 1 - 12, XP008131597, Retrieved from the Internet [retrieved on 20110114]
HERNÁN BADINO ET AL: "The Stixel World - A Compact Medium Level Representation of the 3D-World", 9 September 2009, PATTERN RECOGNITION: 31ST DAGM SYMPOSIUM, JENA, GERMANY, SEPTEMBER 9-11, 2009; PROCEEDINGS; [LECTURE NOTES IN COMPUTER SCIENCE; 5748], SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 51 - 60, ISBN: 978-3-642-03797-9, XP019127048
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (Laiernstr. 12, Bietigheim-Bissingen, DE)
Claims:

1. Method for determining a respective boundary (8) of at least one object (7) in an environmental region (5) of a motor vehicle (1) based on sensor data of an optical sensor (3) of a sensor device (2) of the motor vehicle (1), wherein the following steps are performed by means of an image processing device (4) of the sensor device (2):

- determining a point cloud (11) with a plurality of points (10) based on the sensor data, and

- transforming the point cloud (11) into a plan view image (13), which represents the environmental region (5) of the motor vehicle (1),

characterized in that

the following steps are performed with the image processing device (4) in the plan view image (13):

- determining a reference point (14) in the plan view image (13), which describes a position of the optical sensor (3),

- dividing the plan view image (13) into a plurality of segments (15) starting from the reference point (14),

- determining a respective selection point (9) from the points (10) of the point cloud (11) in the respective segment (15), wherein the respective selection point (9) in the segment (15) has the lowest distance to the reference point (14) compared to the other points (10) of the point cloud (11) in the segment (15), and

- determining the respective boundary (8) of the at least one object (7) based on the respectively determined selection points (9).

2. Method according to claim 1,

characterized in that

for determining the respective boundary (8) of the at least one object (7), selection points (9) are selected and connected by a polyline (12).

3. Method according to claim 2,

characterized in that

the following steps are performed in selecting the selection points (9):

- providing a vector situated from a first selection point (a) in a first segment to a second selection point (b) in a second segment adjacent to the first one, which contains one of the selection points (9), and

- connecting the first (a) and the second selection point (b) by the polyline (12) if at least one further of the selection points (9) is disposed on a side of the vector opposing the reference point (14), which is disposed after the second segment in a direction of the vector.

4. Method according to claim 3,

characterized in that

the first selection point (a) is connected to a third selection point (c) by the polyline (12), wherein the third selection point (c) from a predetermined set of the selection points (9) each disposed in segments (15) following the second segment in the direction of the vector, has the lowest distance to the first selection point (a).

5. Method according to claim 3,

characterized in that

the first selection point (a) is connected to a fourth selection point by the polyline (12) if a third selection point (c) from a predetermined set of the selection points each disposed in segments (15) following the second segment in the direction of the vector, has the lowest distance to the first selection point (a), and a ratio of an angle between a line from the reference point (14) to the first selection point (a) and a line from the reference point (14) to the third selection point (c) as well as a distance between the first (a) and the third selection point (c) exceeds a predetermined limit value.

6. Method according to any one of claims 2 to 5,

characterized in that

the respective selection points (9) are adapted and connected by the polyline (12) depending on a ratio of an angle between three of the selection points (a,b,c) and a distance between two of the selection points (a,c).

7. Method according to any one of claims 2 to 6,

characterized in that

a position of at least one of the selection points (9) is adapted depending on a ratio of an angle between three of the selection points (a,b,c) and a distance between two of the selection points (a,c).

8. Method according to any one of the preceding claims,

characterized in that

the points (10) of the point cloud (11) are divided into at least two clusters (16).

9. Method according to claim 8,

characterized in that

the determination of the respective boundary (8) of the at least one object (7) is performed separately for each of the clusters (16) based on the respectively determined selection points (9).

10. Method according to claim 8 or 9,

characterized in that

the clusters (16) are combined if one of the segments (15) contains the selection points (9) of each of these clusters (16).

11. Sensor device for a motor vehicle (1) including at least one optical sensor (3) for providing sensor data of an environmental region (5) of the motor vehicle (1), which is adapted to perform a method according to any one of the preceding claims.

12. Sensor device (2) according to claim 11,

characterized in that

the optical sensor (3) is a camera.

13. Driver assistance device with a sensor device (2) according to claim 12.

14. Motor vehicle (1 ) with a driver assistance device according to claim 13.

Description:
Method for determining a respective boundary of at least one object, sensor device, driver assistance device and motor vehicle

The invention relates to a method for determining a respective boundary of at least one object in an environmental region of a motor vehicle based on sensor data of an optical sensor of a sensor device of the motor vehicle, wherein the following steps are performed by means of an image processing device of the sensor device. A point cloud with a plurality of points is determined based on the sensor data and the point cloud is transformed into a plan view image, which represents the environmental region of the motor vehicle. In addition, the invention relates to a sensor device for a motor vehicle, to a driver assistance device with a sensor device as well as to a motor vehicle with a driver assistance device.

In sensor data of optical sensors, boundaries of objects can be determined. For example, objects can be recognized in images captured with a camera, and the boundaries thereof can be marked. The representation of boundaries can for example be used in a rearview camera of a motor vehicle in order to make the driver aware of obstacles.

Methods for determining the boundary of the object are already known from the prior art. Thus, for example, points of a point cloud of the object are connected to each other with methods known under the generic term of convex hull. In the chapter "Convex Hulls: Basic Algorithms" by Franco P. Preparata and Michael Ian Shamos in the book "Computational Geometry", 1985, pages 95 to 149, a series of convex hull methods is presented. A further method, in which a Bayes' model is used, is known from the article "A geometric Approach to automated Fixture Layout Design" by Y. Zeng and C.-M. Chew, Computer Aided Design, Volume 42, Issue 3, March 2010, pages 202 to 212.

However, it is disadvantageous in the mentioned prior art that the optimum boundary of the object cannot be determined. Thus, the boundary of a parking lot defined by the points of the point cloud can, for example, be identified as a closed quadrangle and be presented in a plan view image as such. This can entail that the parking lot is recognized as occupied by a driver assistance device of the motor vehicle. Thus, in this case, care has to be taken that the parking lot is not represented as blocked or closed. For example, this is the case if the two points, which are the frontmost points of a side of the parking lot facing the motor vehicle, are connected to each other.

It is the object of the invention to provide a method, a sensor device, a driver assistance device as well as a motor vehicle, in which measures are taken which ensure that boundaries of objects located in an environmental region of a motor vehicle can be particularly reliably determined.

According to the invention, this object is solved by a method, by a sensor device, by a driver assistance device as well as by a motor vehicle having the features according to the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.

A method according to the invention serves for determining a respective boundary of at least one object in an environmental region of a motor vehicle based on sensor data of an optical sensor of a sensor device of the motor vehicle, wherein the following steps are performed by means of an image processing device of the sensor device: A point cloud with a plurality of points is determined based on the sensor data, and the point cloud is transformed into a plan view image, which represents the environmental region of the motor vehicle. According to the invention, it is provided that the following steps are performed with the image processing device in the plan view image: A reference point is determined in the plan view image, which describes a position of the optical sensor, and the plan view image is divided into a plurality of segments starting from the reference point. Furthermore, a respective selection point is determined from the points of the point cloud in the respective segment, wherein the respective selection point in the segment has the lowest distance to the reference point compared to the other points of the point cloud in the segment. In addition, the respective boundary of the at least one object is determined based on the respectively determined selection points.

Presently, sensor data is acquired with an optical sensor of a sensor device. The sensor data can in particular represent a picture of the environmental region of the motor vehicle. For example, sensor data can be continuously acquired with the optical sensor, which describes the environmental region of the motor vehicle. A point cloud is determined from the sensor data and displayed in the plan view image representing the environmental region of the motor vehicle. Moreover, the plan view image is divided into multiple segments, whereby the points of the point cloud are also divided. A selection point is determined in each segment. This selection point is that one of the points which has the lowest distance to the reference point, which can for example describe the position of the optical sensor, compared to the other points in the segment. The boundary of the object can then be determined based only on the selection points. By the method according to the invention, it becomes possible to determine the boundary of the object particularly accurately. Thus, for example, a parking lot is surrounded by the objects, which delimit it. By determining the boundary of these objects, the boundary of the parking lot can thus also be determined. The objects are present in the form of the point cloud, which is processed by the method according to the invention such that only the points of the point cloud relevant to the motor vehicle are determined or used as the boundary. The advantage is that the boundary can be represented very precisely and reliably, while at the same time only the required points are used. This approach allows working with an optimally reduced amount of sensor data.
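
The per-segment selection described above can be summarized in a short sketch. The following Python snippet is only an illustration of the principle under assumed conventions (a fixed number of equally sized angular segments, Euclidean distances); all function and variable names are invented here and do not stem from the patent. The embodiment described further below uses a Manhattan distance instead.

```python
import numpy as np

def select_boundary_points(points_xy, reference_xy, num_segments=36):
    """Per angular segment around the reference point, keep the point of the
    point cloud that lies closest to the reference point."""
    ref = np.asarray(reference_xy, dtype=float)
    pts = np.asarray(points_xy, dtype=float)
    offsets = pts - ref
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])            # angle of each point
    seg_ids = ((angles + np.pi) / (2.0 * np.pi) * num_segments).astype(int)
    seg_ids = np.clip(seg_ids, 0, num_segments - 1)
    dists = np.linalg.norm(offsets, axis=1)                      # Euclidean distance (illustrative)
    selection = {}
    for seg, dist, pt in zip(seg_ids, dists, pts):
        if seg not in selection or dist < selection[seg][0]:
            selection[seg] = (dist, pt)                          # nearest point per segment so far
    # order by segment index so the points can later be joined by a polyline
    return [selection[seg][1] for seg in sorted(selection)]
```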

In an embodiment, it is provided that selection points are selected and connected by a polyline for determining the respective boundary of the at least one object. Thus, it can be provided that only certain selection points are used for determining the boundary. Since the selection points, or at least a part of them, are connected by the polyline, the boundary is present not only in the form of individual points but continuously. Now, for each position in the plan view image or in the environmental region, it can be determined if it is located in front of or behind the respective boundary of the object. Without the polyline, the boundary of the object would only be partially defined, namely at the positions of the points.

In particular, in selecting the selection points, the following steps are performed: A vector is provided, which is situated from a first selection point in a first segment to a second selection point in a second segment adjacent to the first one, which contains one of the selection points. Furthermore, the first and the second selection point are connected by the polyline if at least one further of the selection points is disposed on a side of the vector opposing the reference point, which is disposed after the second segment in the direction of the vector. Thus, a course for the polyline can be preset by the vector. Selection points which would, for example, be located behind the polyline starting from the reference point are presently not taken into account. Now, it can be ensured that only those selection points are used which are required for a reliable representation of the boundary.
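
Which side of that vector a further selection point falls on can be decided with the sign of a 2D cross product. The sketch below is only an illustration of such a test, not the patented method itself; the function name and the convention that points are (x, y) pairs are assumptions made here.

```python
import numpy as np

def lies_opposite_reference(first_pt, second_pt, candidate_pt, reference_pt):
    """True if candidate_pt lies on the side of the vector from first_pt to
    second_pt that faces away from the reference point."""
    a = np.asarray(first_pt, dtype=float)
    v = np.asarray(second_pt, dtype=float) - a
    def side(p):
        w = np.asarray(p, dtype=float) - a
        return np.sign(v[0] * w[1] - v[1] * w[0])   # sign of the 2D cross product
    s_candidate = side(candidate_pt)
    # opposite (and non-zero) signs mean the candidate and the reference point
    # lie on different sides of the vector
    return s_candidate != 0 and s_candidate != side(reference_pt)
```

Under the rule stated above, the first and the second selection point would be connected by the polyline if this test is true for at least one further selection point in a subsequent segment.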

In a further embodiment, it is provided that the first selection point is connected to a third selection point by the polyline, wherein the third selection point from a predetermined set of the selection points each disposed in segments, which follow the second segment in the direction of the vector, has the lowest distance to the first selection point. This has the advantage that therefore at least one of the selection points cannot be taken into account, which is unnecessary for the definition of the boundary. Thus, with this embodiment, it is additionally ensured that only the respective selection points are selected or used, which are required for the reliable definition of the boundary.

Preferably, the first selection point is connected to a fourth selection point by the polyline if a third selection point from a predetermined set of the selection points, each disposed in segments which follow the second segment in the direction of the vector, has the lowest distance to the first selection point, and a ratio of an angle between a line from the reference point to the first selection point and a line from the reference point to the third selection point as well as a distance between the first and the third selection point exceeds a predetermined limit value. This has the advantage that an excessively large area can be prevented from being bounded by the polyline. The ratio can also be referred to as blocking state. Thus, if the blocking state exceeds the limit value, the selection points, more precisely the first selection point and the third selection point, are not connected by the polyline. The limit value is for example set such that the distance from the first selection point to the second selection point is less than the dimensions of the motor vehicle.

In particular, it is provided that the respective selection points are adapted and connected by the polyline depending on a ratio of an angle between three of the selection points and a distance between two of the selection points. It is advantageous that an additional criterion for the selection of the selection point can thus be established. Selection points unnecessary for the definition of the boundary can thus be disregarded. In particular, those selection points can be disregarded which, for example, span an area with the adjacent selection points that is considerably smaller than the dimensions of the motor vehicle. Thus, there is no risk that the boundary of a parking lot or a passage is erroneously determined.

Preferably, it is provided that a position of at least one of the selection points is adapted depending on a ratio of an angle between three of the selection points and a distance between two of the selection points. This is advantageous because the boundary can thereby be determined particularly well. Some of the selection points can be positioned such that they are not used for determining the boundary although they have been determined according to one of the preceding steps. Thus, there may be certain positions of the selection points which result in them being shifted or substituted.

In a further embodiment, it is provided that the points of the point cloud are divided into at least two clusters. Thus, for example, one respective cluster can be provided for each of the objects in the environmental region. Further, the clustering has the advantage that the respective boundary can now be determined separately, for example for each of the objects. Thus, it is possible to associate an importance of the respective boundary for the motor vehicle, which can differ from cluster to cluster.

In particular, it is provided that the determination of the respective boundary of the at least one object is performed separately for each of the clusters based on the respectively determined selection points. The boundary of the respective cluster can now be particularly precisely determined because for example only the respective points are contained in the cluster, which belong to a single one of the objects.

In particular, it is also provided that the clusters are combined if one of the segments contains the selection points of each of these clusters. This has the advantage that the boundary is not erroneously determined by two overlapping clusters. In this case, for example, the boundary of one of the clusters could extend through the center of the set of points of the other cluster. The determination of this boundary would be unnecessary since the boundary relevant to the motor vehicle is already determined by the boundary of the other cluster.
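
A minimal sketch of such a merge rule, assuming each selection point carries a cluster label and a segment index (the data structures and names are invented here for illustration):

```python
def merge_clusters_sharing_a_segment(cluster_of_point, segment_of_point):
    """Merge clusters whose selection points fall into the same segment.
    Both arguments map a selection-point id to a cluster id / segment id."""
    clusters_per_segment = {}
    for point_id, segment in segment_of_point.items():
        clusters_per_segment.setdefault(segment, set()).add(cluster_of_point[point_id])
    for clusters in clusters_per_segment.values():
        if len(clusters) > 1:                      # one segment, several clusters
            target = min(clusters)                 # merge all of them into one label
            for point_id, cluster in cluster_of_point.items():
                if cluster in clusters:
                    cluster_of_point[point_id] = target
    return cluster_of_point
```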

In a particular embodiment, the optical sensor is a camera. The camera has the advantage that the environmental region can be captured over a large area with a comparatively low-cost sensor. Furthermore, it is thereby possible to integrate the method according to the invention in an existing system including the camera. The camera is preferably a video camera, which is able to provide a plurality of images (frames) per second. The camera can be a CCD camera or a CMOS camera.

Additionally or alternatively, it is provided that the point cloud is determined based on sensor data of a laser scanner. The optical sensor is therefore formed as a laser scanner.

A sensor device according to the invention for a motor vehicle includes at least one optical sensor for providing sensor data of an environmental region of the motor vehicle, and is adapted to perform a method according to the invention.

A driver assistance device according to the invention includes a sensor device according to the invention. A motor vehicle according to the invention includes a driver assistance device according to the invention. Thus, for example, an obstacle for the motor vehicle can be acquired with the optical sensor as the object. The object can be an item or a living entity, with which a collision is to be prevented with the aid of the driver assistance device. However, the obstacles are also relevant if, for example, the parking lot is to be bounded. All of the obstacles located on a travel path or a trajectory from a current site to a destination are to be acquired with the driver assistance device. Among these obstacles, only the boundary facing the motor vehicle can be of relevance to the motor vehicle.

The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the sensor device according to the invention, to the driver assistance device according to the invention as well as to the motor vehicle according to the invention.

Further features are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.

Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.

The figures show:

Fig. 1 in schematic illustration a plan view of a motor vehicle and a parking lot with a boundary;

Fig. 2 in schematic illustration points of a point cloud and segments starting from a reference point;

Fig. 3 a flow diagram of a method according to an embodiment of the invention;

Fig. 4 a further flow diagram of a method according to an embodiment of the invention;

Fig. 5 a further flow diagram of a method according to an embodiment of the invention;

Fig. 6 in schematic illustration three selection points from the point cloud;

Fig. 7 in schematic illustration three selection points, a shifted selection point and the reference point from the point cloud;

Fig. 8 in schematic illustration the three selection points, the shifted selection point and the reference point from the point cloud;

Fig. 9 in schematic illustration the three selection points, the shifted selection point and the reference point from the point cloud;

Fig. 10 in schematic illustration an environmental region of the motor vehicle with the selection points in the segments;

Fig. 11 in schematic illustration the environmental region with the selection points in the segments and the respective boundary;

Fig. 12 in schematic illustration the environmental region with selection points in the segments, wherein the selection points are divided into clusters and one of the boundaries is determined based on two combined clusters;

Fig. 13 in schematic illustration a side view of the environmental region with a point cloud of objects; and

Fig. 14 in schematic illustration the environmental region in plan view analogous to Fig. 13 with the respectively determined boundary.

In Fig. 1, a plan view of a motor vehicle 1 according to an embodiment of the invention is illustrated in schematic illustration. In the present embodiment, the motor vehicle 1 is a passenger car. The motor vehicle 1 has a sensor device 2, which includes an optical sensor 3 and an image processing device 4. The optical sensor 3 can be a camera, in particular a video camera, which continuously captures a sequence of frames. The image processing device 4 then processes the sequence of frames in real time and can determine a boundary 8 based on it - as described in more detail below.

However, the optical sensor 3 can also be formed as a laser scanner, which determines the point cloud 11 of the environmental region 5 by emitting and receiving optical pulses. The laser scanner can also be an imaging laser scanner, which additionally acquires an intensity of a reflected signal of the laser scanner besides a geometry of the object 7.

Sensor data is provided with the optical sensor 3, which represents a picture of the environmental region 5. A point cloud 11, which includes a plurality of points 10, is determined from the sensor data by means of the image processing device 4. For example, the image processing device 4 can determine characteristic points as the points 10 of the point cloud 11 from the sensor data or image data. For example, this can be effected by means of an interest operator.
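
The patent does not prescribe a specific interest operator. Purely as a hedged illustration, a standard corner detector can stand in for it; the snippet below assumes OpenCV and a single camera frame, and the function name and parameter values are invented here.

```python
import cv2
import numpy as np

def characteristic_points(frame_bgr, max_points=500):
    """Return an (N, 2) array of image points found by a corner detector,
    standing in for the 'interest operator' mentioned in the text."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:                       # no corners found in this frame
        return np.empty((0, 2), dtype=float)
    return corners.reshape(-1, 2)             # OpenCV returns shape (N, 1, 2)
```

Such image points would still have to be transformed into the plan view image (for example via a known ground-plane mapping) before the segment-based selection described below can be applied; that transformation is not shown here.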

Selection points 9 are determined from the points 10 of the point cloud 11. They are used to determine the boundary 8 of the object 7. For this purpose, the selection points 9 are connected by a polyline 12. Thus, the boundary 8 is a connection of the selection points 9 of the points 10 of the point cloud 11 of the object 7 by the polyline 12. In the shown embodiment, an empty parking lot 6 is located in an environmental region 5 of the motor vehicle 1. This parking lot 6 represents an object 7, which has the boundary 8.

For example, this boundary 8 can be presented on a screen of the motor vehicle 1 together with an image of the environmental region 5. Alternatively or additionally, the boundary 8 or data describing the boundary 8 can be used as an input signal for a driver assistance device, which assists the driver in parking. The motor vehicle 1 can be precisely parked into the parking lot 6 if the boundary 8 has been reliably determined.

In the embodiment according to Fig. 1, the optical sensor 3 is disposed in an interior of the motor vehicle 1, in particular behind the windshield, and captures the environmental region 5 in front of the motor vehicle 1. However, the invention is not restricted to such an arrangement of the optical sensor 3. The arrangement of the optical sensor 3 can be different according to the embodiment. For example, the optical sensor 3 can also be disposed in a rear region of the motor vehicle 1 and capture the environmental region 5 behind the motor vehicle 1. Several such optical sensors 3 can also be employed, which are each formed for capturing the environmental region 5.

Now, Fig. 2 shows a schematic plan view image 13, which contains the points 10 of the point cloud 11. In the plan view image 13, a reference point 14 is determined, which is presently identical to the position of the optical sensor 3 of the motor vehicle 1. However, the reference point 14 does not necessarily have to be identical to the position of the optical sensor 3. The optical sensor 3 can also be disposed at a known distance from the reference point 14. The position of the optical sensor 3 relative to the reference point 14 is therefore arbitrary, provided that this relative position is known.

Starting from the reference point 14, the plan view image 13 is divided into segments 15. In the embodiment, the segments 15 are formed as circular segments each including the reference point 14. By dividing the plan view image 13 into the segments 15, the points 10 of the point cloud 11 are also divided. The respective selection point 9 for each of the segments 15 can now be determined by selecting that point 10 of the respective segment 15 which has the lowest distance to the reference point 14 among the points 10 within this segment 15.

In Fig. 2, a coordinate axis extending horizontally in the image plane through the reference point 14 is denoted by y, while a coordinate axis extending vertically in the image plane through the reference point 14 is denoted by x. The size of the respective segments 15 depends on an angle φ, which specifies a respective size of the respective segment 15 and describes which area is swept by the segment 15 starting from the reference point 14. In the present case, with the environmental region 5 extending over 180° in a horizontal field of view of the optical sensor 3, the number of segments 15 is calculated as 180/φ. Based on a star S, which has one of the points 10 at each of its apices, it is apparent which of the points 10 are selected as the selection points 9. The selection points 9 are then connected by the polyline 12.
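
As a hedged illustration of this segmentation, a point can be mapped to its segment index from its angle relative to the reference point. The conventions below (angle measured from the y axis, a field of view spanning 0° to 180°) are assumptions made for the sketch, not statements of the patent, and the names are invented.

```python
import math

def segment_index(point_xy, reference_xy, phi_deg=5.0):
    """Map a plan-view point to one of the 180/phi circular segments fanning
    out from the reference point over an assumed 180 degree field of view."""
    num_segments = int(180.0 / phi_deg)
    dx = point_xy[0] - reference_xy[0]
    dy = point_xy[1] - reference_xy[1]
    angle_deg = math.degrees(math.atan2(dx, dy))   # 0 deg along the y axis (assumed convention)
    if not 0.0 <= angle_deg <= 180.0:
        return None                                # point lies outside the field of view
    return min(int(angle_deg / phi_deg), num_segments - 1)
```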

Fig. 3 shows a simplified flow diagram of a method according to an embodiment of the invention. Thus, in a step S1, the points 10 of the point cloud 11 are divided into at least one cluster 16. Therein, the at least one cluster 16 can be selected such that it includes the points 10 of the object 7. If multiple objects 7 are present, a separate cluster 16 can be provided for each of the objects 7. Thereafter, in a step S2, a list of the selection points 9 is created. The selection points 9 are calculated according to the flow diagram of Fig. 4. In a further step S3, the polyline 12 is determined according to the flow diagram of Fig. 5.

The flow diagram in Fig. 4 shows how the respective selection points 9 are determined. For this purpose, in a step S4, a loop over each of the points 10 of the respective segment 15 is started. In a step S5, it is queried if all of the points 10 of the respective segment 15 have been processed. If this is affirmed, the point 10 closest to the reference point 14 of each of the segments 15 is respectively determined as the selection point 9. Now, the method is preliminarily terminated in a step S7. If not all of the points have been processed after step S5 and the query in this step is therefore negated, a step S8 follows, and a distance, in particular a Manhattan distance, from one of the points 10 to the reference point 14 is calculated. The Manhattan distance is also referred to as city-block distance and is based on a neighborhood of four. Thus, only horizontal or vertical path segments are used to determine the distance. An alternative is a neighborhood of eight, in which diagonal path segments are also possible.

Step S8 is initialized with a step S9. Herein, the shortest distance from the respective segment 15 to the reference point 14 is taken as a basis. After step S8, a step S10 follows, in which it is checked if the distance calculated in step S8 is less than or equal to the shortest distance, which has been calculated in step S9. If the result in step S10 is negative, it is continued with step S4, and if the result of step S10 is positive, it is continued with a step S11. In step S11, the point 10 currently present as the selection point 9 is stored in a vector. Thereafter, the method is continued with step S4.
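
The loop of Fig. 4 (steps S4 to S11) can be paraphrased in a few lines of code. This is only a sketch under assumed data structures (points grouped per segment as (x, y) pairs); it is not the literal implementation of the flow diagram.

```python
def nearest_point_per_segment(points_by_segment, reference_xy):
    """For every segment keep the point with the smallest Manhattan
    (city-block) distance to the reference point."""
    def manhattan(p):
        return abs(p[0] - reference_xy[0]) + abs(p[1] - reference_xy[1])

    selection_points = {}
    for segment, points in points_by_segment.items():
        shortest = float("inf")                       # initialisation, compare step S9
        for point in points:                          # loop over the segment's points (S4/S5)
            distance = manhattan(point)               # distance calculation (S8)
            if distance <= shortest:                  # comparison with the shortest so far (S10)
                shortest = distance
                selection_points[segment] = point     # store the current candidate (S11)
    return selection_points
```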

In the flow diagram of Fig. 5, the connection of the selection points 9 by the polyline 12 is described. In a step S12, a loop for each of the selection points 9 is performed. In a subsequent step S13, it is checked if the selection points 9 were all passed. If this is the case, the method ends in a step S14. If this is not the case, a step S15 follows, which continues with the next selection point 9. If a new selection point 9 is not found, the method is continued in step S12. In contrast, if a new selection point 9 is found, step S16 follows. In step S16, a predetermined set or number of selection points 9 adjacent to the current selection point 9 is determined.

In a next step S17, it is checked if one of these adjacent selection points 9 from the predetermined set belongs to a different cluster 16 than the current selection point 9. If this is the case, the two respective clusters 16, namely the cluster 16 of the current selection point 9 and the cluster 16 of the adjacent selection points 9, are combined in a step S18, and the method is continued at step S16. If this is not the case and the current selection point 9 and the adjacent selection points 9 belong to the same cluster 16, the method is continued with a step S19. In step S19, it is checked if all of those adjacent selection points 9 from the predetermined set are located on a side of a vector opposing the reference point 14. The vector extends from the current selection point 9 to the nearest selection point 9. In other words, in step S16, at least one further selection point 9 is determined starting from the current selection point 9, which is disposed in a consecutively adjacent segment 15. At this place, multiple consecutively adjacent selection points 9 can also be selected. However, in the following, this is only described in a simplified manner with the one selection point 9.

Now, if the next selection point 9 is on the opposing side of this vector with respect to the reference point 14, the current selection point 9 is connected to the next selection point 9 by the polyline 12 in a step S20. If this is not the case, the next selection point 9 is connected to the further selection point 9 by the polyline 12 in a step S21. Finally, the obtained polyline 12 is optionally improved or adapted in a step S22. After the improvement in step S22, the procedure of the flow diagram again starts at step S12 with the next selection point 9. The improvement method of step S22 is explained below by way of example with reference to Figs. 6 to 9 based on three selection points a, b and c.

In order to avoid too great distances between two selection points 9 connected by the polyline 12, a variable or a state bs is introduced, which can also be referred to as blocking state. This state bs is explained by the example of the first selection point a, which is connected to the second selection point b by the polyline 12. The state can be determined with the following formula:

bs = a_diff / d_diff

Herein, a_diff is the angle between the first selection point a, the reference point 14 and a selection point 9 different from the second selection point b. d_diff is the distance between the first selection point a and the selection point 9 different from the second selection point b. The quotient of a_diff and d_diff thus results in the state bs. Now, the state bs is intended to be less than a limit value. If the state exceeds the limit value, the polyline 12 is removed, and an alternative connection to another one of the respective selection points 9 is searched with the aid of one of the previous steps. The state bs also finds use in step S22, which is to improve the boundary 8 with respect to its shape. In particular if the boundary 8 or polyline 12 extends in a zigzag shape, it can be optimized with the following improvement approaches.

Fig. 6 shows a first variant of how the behavior of the polyline 12 can be adapted. The first selection point a is connected to the second selection point b, and the second selection point b in turn is connected to the third selection point c. The selection points a, b, c span an angle Θ. Presently, this angle Θ is smaller than a predetermined value, for example 90°. In addition, the state bs is less than a second limit value Z with respect to the first selection point a and the third selection point c. In this case, the polylines 12 are removed from the second selection point b, and the first selection point a and the third selection point c are directly connected by the polyline 12. The next neighboring point after the first selection point a is therefore the third selection point c.
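
A hedged sketch of this blocking state follows, with the angle taken at the reference point between the two selection points and the distance taken between the points themselves; the function names and the (x, y) point convention are assumptions of this illustration.

```python
import math

def blocking_state(first_point, other_point, reference_point):
    """bs = a_diff / d_diff: angular difference seen from the reference point
    divided by the mutual distance of the two selection points."""
    def angle_at_reference(p):
        return math.atan2(p[1] - reference_point[1], p[0] - reference_point[0])

    a_diff = abs(angle_at_reference(first_point) - angle_at_reference(other_point))
    d_diff = math.dist(first_point, other_point)
    return a_diff / d_diff if d_diff > 0 else float("inf")

# The two points are only connected by the polyline while this value stays
# below the applicable limit value.
```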

Fig. 7 shows a further possible arrangement of selection points a, b, c. Here, the first selection point a forms an angle Θ with the second selection point b and the third selection point c. The selection points a, b, c are in a particular relation to the reference point 14. Presently, the first selection point a is farther away from the reference point 14 than the second selection point b. In addition, the third selection point c is closest to the reference point 14 along the axis y. Now, the third selection point c is shifted such that the shifted selection point V forms a right-angled triangle with the second selection point b and the third selection point c, which faces the reference point 14. The shift or replacement of the third selection point c with the shifted selection point V is only performed if the state bs of the first selection point a and the third selection point c is less than the second limit value Z.

Fig. 8 shows a further arrangement of the selection points a, b, c, wherein the first selection point a, the second selection point b and the third selection point c form an angle Θ. The first selection point a has a greater distance to the reference point 14 than the second selection point b and the third selection point c. In addition, the third selection point c is disposed on the left side of the second selection point b with respect to the line y. If this constellation is true and the condition on the state bs is satisfied by the second selection point b and the third selection point c, the third selection point c is replaced with the selection point V. In this case, the selection point V is situated on a line segment from the reference point 14 to the third selection point c and is defined there via a perpendicular from the second selection point b in the vertical direction of the image plane. Here too, the state bs again has to be less than the second limit value Z.

In Fig. 9, the angle Θ is also spanned by the first selection point a, the second selection point b and the third selection point c. The first selection point a is farther away from the reference point 14 than the second selection point b. The third selection point c is disposed on the left side of the second selection point b with respect to the line x. The third selection point c is now replaced with the selection point V if the angle Θ is below a predetermined value, for example 90°, and the state bs of the second selection point b and the third selection point c is below the second limit value Z. In this case, the position of the selection point V is situated on a line segment from the reference point 14 to the third selection point c, wherein the final definition of the position of the point V is effected by a line, horizontal in the image plane and parallel to y, from the second selection point b to the line segment.
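
One possible reading of the Fig. 9 rule is sketched below: the replacement point V is placed on the segment from the reference point 14 to c, where that segment meets the line through b parallel to y (i.e. at b's x coordinate). This geometric interpretation and all names are assumptions of the sketch, not the authoritative construction.

```python
import numpy as np

def shifted_point_fig9(point_b, point_c, reference_point):
    """Place the replacement point V on the segment from the reference point
    to point_c, where that segment meets the line through point_b parallel
    to the y axis (i.e. at point_b's x coordinate)."""
    ref = np.asarray(reference_point, dtype=float)
    b = np.asarray(point_b, dtype=float)
    c = np.asarray(point_c, dtype=float)
    direction = c - ref
    if direction[0] == 0.0:                     # degenerate case: ref and c share the same x
        return c
    t = (b[0] - ref[0]) / direction[0]          # parameter where the segment reaches b's x
    t = min(max(t, 0.0), 1.0)                   # clamp so V stays between ref and c
    return ref + t * direction
```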

Fig. 10 shows in schematic illustration a plan view image 13 of the environmental region 5 with a superimposed model of the motor vehicle 1, which is divided into the segments 15. In the plan view image 13, there are the selection points 9, which are connected by the polyline 12 according to the method. The selection points 9 are only connected to each other if they belong to the same cluster 16. In Fig. 10, thus, two different clusters 16 are depicted. The effect of the state bs is also illustrated by a dashed line. Presently, the state bs is only applied in the cluster 16 represented on the left side in Fig. 10.

Fig. 11 also shows a plan view image 13 with a superimposed model of the motor vehicle 1 in a further embodiment. Here, the polylines 12 or the boundary 8 are shown after step S22, which is applied for improvement. The result after step S22 is referred to as improved polyline 17. It is apparent that the improved polyline 17 oscillates less than the polyline 12 without improvement.

In Fig. 12, a further plan view image 13 analogous to Fig. 10 and Fig. 11 is illustrated. Here, the boundary 8 or the polyline 12 is shown for the case that two of the clusters 16 have been combined. Thus, the boundary 8 does not separately extend in each of the clusters 16, but extends over the respectively next selection points 9 from the two clusters 16 together.

Fig. 13 shows an image of the environmental region 5, in which the points 10 of the point cloud 1 1 are illustrated. Several objects 7 are present in the environmental region 5. The points 10 of the point cloud 1 1 are divided into the respective clusters 16, wherein the clusters 16 are each associated with an object 7. The environmental region 5 is presently captured with the optical sensor 3 or a camera having a fish eye lens. This explains the distortions and the wide image angle in Fig. 13. Fig. 14 shows a transformation of the environmental region 5 from Fig. 13 into a plan view image 13, wherein only two of the boundaries 8 of the two respective objects 7 are represented in Fig. 14.