


Title:
A DEVICE FOR ASSISTED STEERING
Document Type and Number:
WIPO Patent Application WO/2023/025983
Kind Code:
A1
Abstract:
A device that includes a scanning unit and a control unit communicatively connected to each other. The scanning unit is configured to be fixed to a vehicle and to generate a point cloud that has a field of view around a point of origin located in the scanning unit. The control unit is configured to receive the point cloud and extract from it at least a first group of points that represent at least one selected region of a surface supporting the vehicle. The control unit is further configured to determine a tilt of the vehicle from said first group of points and to create a compensated point cloud where coordinates of the points are adjusted by compensating the effect of the tilt. The control unit is configured to detect one or more objects of interest for steering functions from the compensated point cloud.

Inventors:
PYYKÖNEN PASI (FI)
KUTILA MATTI (FI)
TARKIAINEN MIKKO (FI)
Application Number:
PCT/FI2022/050537
Publication Date:
March 02, 2023
Filing Date:
August 17, 2022
Export Citation:
Assignee:
TEKNOLOGIAN TUTKIMUSKESKUS VTT OY (FI)
International Classes:
G01S17/42; G01S17/89; G01S17/931; G01S7/00; G01S17/875
Foreign References:
DE102017105209A12018-09-13
US9052721B12015-06-09
DE102019117312A12020-12-31
Attorney, Agent or Firm:
BOCO IP OY AB (FI)
Claims:

CLAIMS

1. A device including a scanning unit and a control unit communicatively connected to each other, wherein the scanning unit is configured to be fixed to a vehicle and generate a point cloud having a field of view around a point of origin located in the scanning unit; the scanning unit is configured to feed the point cloud to the control unit; wherein the control unit is configured to extract from the point cloud at least a first group of points that represent a first selected region of a surface supporting the vehicle; the control unit is configured to select at least two points from the first group of points; the control unit is configured to determine a tilt of the vehicle based on coordinates of the at least two points in said first group of points; the control unit is configured to create a compensated point cloud where coordinates of the points are adjusted by compensating the effect of the tilt; the control unit is configured to detect one or more objects of interest from the compensated point cloud.

2. A device according to claim 1, characterized in that the control unit is configured to extract from the point cloud points of interest that fulfil at least one defined criterion and thus correspond with the one or more objects of interest in the field of view of the scanning unit; the control unit is configured to apply the points of interest for the detection of the one or more objects of interest.

3. The device according to claim 1 or 2, characterized in that the control unit is configured to use vertical coordinates to select the at least two points in the first group of points, wherein a vertical coordinate of each point in the first group of points indicates a distance from the point of origin to said point in a direction parallel to the direction of gravitation.

4. The device according to claim 3, characterized in that the control unit is configured to determine an average point representing an average of vertical coordinates of the points in the first group of points; determine a minimum point representing a minimum of vertical coordinates of the points in the first group of points; determine the tilt from coordinates of the average point and the minimum point.

5. The device according to claim 3, characterized in that the control unit is configured to determine an average point representing an average of vertical coordinates of the points in the first group of points; determine a maximum point representing a maximum of vertical coordinates of the points in the first group of points; determine the tilt from coordinates of the average point and the maximum point.

6. The device according to any of claims 1 to 5, characterized in that the control unit is configured to extract from the point cloud also a second group of points that represent a second selected region of the surface supporting the vehicle.

7. The device according to claim 6, characterized in that the first selected region is in front of the vehicle and the second selected region is at the back of the vehicle.

8. The device according to any of claims 1 to 7, characterized in that the control unit is configured to detect the one or more objects of interest in a direction that is orthogonal to the direction of gravitation and orthogonal to the direction in which the vehicle moves.

9. The device according to any of claims 1 to 7, characterized in that the control unit is configured to detect the one or more objects of interest in a direction in which the vehicle moves.
10. A method for a control unit communicatively connected to a scanning unit releasably fixed to a vehicle, the method comprising: receiving a point cloud having a field of view around a point of origin located in the scanning unit fixed to the vehicle; extracting from the point cloud at least a first group of points that represent a first selected region of a surface supporting the vehicle; selecting at least two points from the first group of points; determining a tilt of the vehicle based on coordinates of the at least two points in said first group of points; creating a compensated point cloud where coordinates of the points are adjusted by compensating the effect of the tilt; detecting one or more objects of interest from the compensated point cloud.

11. The method according to claim 10, characterized by extracting from the point cloud points of interest that fulfil at least one defined criterion and thus correspond with the one or more objects of interest in the field of view of the scanning unit; applying the points of interest for the detection of the one or more objects of interest.

12. The method according to claim 10 or 11, characterized by using vertical coordinates of the at least two points in the first group of points to determine the tilt of the vehicle, wherein a vertical coordinate of each point in the first group of points indicates a distance from the point of origin to the point in a direction parallel to the direction of gravitation.

13. The method according to claim 12, characterized by determining an average point representing an average of vertical coordinates of the points in the first group of points; determining a minimum point representing a minimum of vertical coordinates of the points in the first group of points; determining the tilt from coordinates of the average point and the minimum point.
14. The method according to claim 12, characterized by determining an average point representing an average of vertical coordinates of the points in the first group of points; determining a maximum point representing a maximum of vertical coordinates of the points in the first group of points; determining the tilt from coordinates of the average point and the maximum point.

15. A computer program product comprising instructions which, when the program is executed by a computer operated as a control unit, cause the computer to carry out the steps of the method of any of claims 10 to 14.

Description:
A DEVICE FOR ASSISTED STEERING

FIELD

The disclosure relates to assisted steering, and particularly to use of point clouds in assisted steering functions.

BACKGROUND

The term vehicle refers in general to a means of carrying or transporting people or goods. In its broadest scope, the term includes watercraft, amphibious vehicles, aircraft and spacecraft, but most often the term refers to a land vehicle, a piece of mechanized equipment that applies steering and drive forces against the ground. The contact with the ground may be implemented through wheels, tracks, rails or skis, for example. Driving a vehicle means controlling the operation and movement of the vehicle, and in the early days, vehicles were practically fully controlled by human drivers. Through the development of mechatronics, artificial intelligence and multi-agent systems, more and more of the monitoring, agency and action functions have been transferred to automated driving systems, even if the human operator remains responsible for the vehicle's performance.

One important function of vehicular automation is to enable a driver (human or automated) to steer a vehicle accurately relative to a physical landmark. Conventional satellite-based navigation systems are applicable for some steering functions, but their accuracy is typically not sufficient for steering functions where tolerances are in the order of twenty centimetres or less. Examples of such steering functions include driving a bus to the exact vicinity of a curb, or positioning a bus in an exact position under a charging station.

For more accurate operations, special-purpose machines have conventionally been equipped with an inertial measurement unit (IMU) that includes a combination of accelerometers, gyroscopes and/or magnetometers. However, IMUs are integrated systems where the sensors are jointly adapted for a specific type of vehicle and for a defined set of steering functions. Due to this, they are quite expensive and not applicable for general use in devices that can be installed in various types of land vehicles, or even retrofitted to an existing car, truck or bus.

Another conventionally applicable vehicle control and navigation method is to use a lidar, a measuring system that detects and locates objects on the same principle as a radar but emits pulsed laser light instead of microwaves. A lidar generates a point cloud that represents measured distances to surrounding objects. However, a lidar needs to be fixed to a vehicle, and a vehicle does not provide a static frame of reference for the point cloud. Land vehicles include various suspension systems and active mechanisms that improve passenger comfort and assist passenger entry and exit. Most of these systems operate independently of each other or are dynamically controlled by the driver. The resulting unpredictable tilting and swinging motions of the body of the vehicle introduce distortion that prevents use of point clouds for steering functions that require higher accuracy than what is achievable with satellite-based navigation systems.

BRIEF DESCRIPTION

An object of the present disclosure is to provide a device and a method to alleviate at least some of the above described challenges in assisted steering functions.

This object is achieved with a device and a method, which are characterized by what is stated in the independent claims. Some exemplary embodiments of the disclosure are disclosed in the dependent claims.

The following examples are based on the idea of determining the tilt of a surface that supports a vehicle from its point cloud coordinates, and using the determined tilt to compensate its effect in the point cloud coordinates before they are used to detect objects in the vicinity of the vehicle.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following the disclosure will be described in greater detail by means of preferred embodiments with reference to the accompanying drawings, in which

Figure 1 illustrates some basic elements and a vehicle setup for a device applicable in assisted steering;

Figure 2 illustrates components of a control unit of the device;

Figure 3 shows an example of the vehicle setup of Figure 1 in an operational situation;

Figure 4 illustrates example positions for applicable reference areas;

Figure 5 illustrates a reference area of Figure 4 in another view;

Figure 6 illustrates the principle for determining the tilt of the supporting surface; and

Figure 7 illustrates steps of a method implemented in the control unit of the device.

DETAILED DESCRIPTION

The block chart of Figure 1 illustrates some basic elements necessary for understanding the following examples of the invention. Figure 1 shows a setup of an initial situation where a vehicle 100 stands or runs on a supporting surface 102. The supporting surface 102 is even, i.e. it is essentially perpendicular to the direction of gravitation, so that the vehicle presses against the supporting surface and the suspension systems of the vehicle can maintain a neutral mode wherein the body stands or runs on the supporting surface 102 without tilting or swinging. A scanning unit 104 is fixed to the vehicle and is configured to generate a point cloud that represents measured distances to objects around the scanning unit. The scanning unit emits pulses of light and measures the amount of time before each reflected light pulse is seen by a detector. Since the speed of light is known, the round-trip time determines the travel distance of a light pulse, which is twice the distance between the scanning unit and the point of reflection. The scanning unit 104 thus uses emitted and detected light pulses to generate a point cloud, wherein each point has its set of Cartesian coordinates (x, y, z). These coordinates represent measured distances between the scanning unit and detected objects. Devices generating point clouds through 3-D scanning are widely known and commercially available, so their operation will not be described in more detail herein. Fixing the scanning unit to the vehicle means in this context that during operation the scanning unit does not move in relation to the body of the vehicle and thus provides a point of origin for the coordinates in the point cloud. The fixing can be permanent, for example by welding or use of a permanent adhesive, or the scanning unit may be releasably fixed to the body of the vehicle with screws or by various locking/latching mechanisms.
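The round-trip relation described above can be sketched as follows. This is an illustrative Python snippet, not code from the patent, and the function name is a hypothetical one chosen for the example.

```python
# Speed of light in vacuum, in m/s (the difference in air is negligible here).
C = 299_792_458.0

def distance_from_round_trip(t_seconds):
    """One-way distance to the point of reflection: the pulse travels
    there and back, so the one-way distance is half of C * t."""
    return C * t_seconds / 2.0
```

For example, a measured round-trip time of 100 nanoseconds corresponds to a distance of roughly 15 metres between the scanning unit and the point of reflection.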

The scanning unit 104 is communicatively coupled to a control unit 106 and is thus enabled to feed the point cloud data to the control unit. The control unit 106 processes point cloud information from at least one scanning unit 104, but a vehicle can be equipped with more than one scanning unit, and the control unit can process information from them in an integrated manner. The block chart of Figure 2 illustrates components of the control unit. The control unit 106 is a device that may comprise a processing component 210. The processing component 210 is a combination of one or more computing devices suitable for performing systematic execution of operations upon predefined data. The processing component may comprise one or more arithmetic logic units, special registers and control circuits. The processing component may comprise or may be connected to a memory unit 212 that provides a data medium where computer-readable data or programs, or user data, can be stored. The memory unit may comprise one or more units of volatile or non-volatile memory, for example EEPROM, ROM, PROM, RAM, DRAM, SRAM, firmware, programmable logic, etc. The control unit 106 may also comprise, or be connected to, an interface unit 214 that comprises at least one input unit for inputting data to the internal processes of the control unit, and at least one output unit for outputting data from the internal processes of the control unit.

If a line interface is applied, the interface unit 214 typically comprises plug-in units acting as a gateway for information delivered to its external connection points and for information fed to the lines connected to its external connection points. If a radio interface is applied, the interface unit 214 typically comprises a radio transceiver unit, which includes a transmitter and a receiver. A transmitter of the radio transceiver unit may receive a bitstream from the processing component 210 and convert it to a radio signal for transmission by an antenna. Correspondingly, the radio signals received by the antenna may be led to a receiver of the radio transceiver unit, which converts the radio signal into a bitstream that is forwarded for further processing to the processing component 210. Different line or radio interfaces may be implemented in one interface unit.

The interface unit 214 may also comprise a user interface with a keypad, a touch screen, a microphone, or the like for inputting data, and a screen, a touch screen, a loudspeaker, or the like for outputting data to the manual or automated driver of the vehicle. The output data may include, for example, information on an object of interest that has been detected by the scanning unit, in a form applicable for a manual or automated steering function of the vehicle.

The memory unit 212, the processing component 210 and the interface unit 214 are electrically interconnected to provide means for performing systematic execution of operations on the received and/or stored data according to predefined, essentially programmed processes. These operations comprise the procedures described herein for the control unit 106 of the device of Figure 1. The point cloud typically includes a limited set of points that in combination provide a field of view over a predefined region around the scanning unit. In the field of view, some specific object is relevant and needs to be detected for a steering function. The control unit 106 is thus configured to apply one or more defined criteria to the point cloud and thereby extract from the point cloud one or more points of interest that correspond with one or more objects of interest in the vicinity of the vehicle.

Figure 1 illustrates the notation of the coordinate system applied in this description. The point of origin is in the scanning unit. A direction parallel to the direction of gravitation is considered the vertical direction and is denoted as the z-direction. A direction parallel to the direction in which the vehicle moves is considered a first horizontal direction and is denoted as the y-direction. A second horizontal direction is orthogonal to the vertical direction and the first horizontal direction and is denoted as the x-direction. As an example, let us consider a situation where a driver wants to stop a bus at exactly a defined distance from the curb. For this, the control unit needs to apply a criterion that extracts from the point cloud a set of points of interest that most likely include points that correspond to the edge of the curb. The problem in the use of such a criterion is, however, illustrated with Figure 3.

Figure 3 shows the same vehicle setup as Figure 1 but in an operational situation where the vehicle 100 is a bus that stands or runs on a supporting surface 102 that is tilted with respect to a plane perpendicular to the z-direction. For passenger comfort, a hydraulic system of the bus maintains the body of the bus in an upright position even if the road is tilted. In addition, during driving, minor unevenness of the road may further wobble the body, not much but quite unpredictably. As can be understood from the drawing, the vehicle becomes correspondingly tilted with respect to the supporting surface 102, so that many of the criteria applied to analyse the point cloud in the situation of Figure 1 are no longer applicable in the situation of Figure 3. This is specifically the case for analyses that need greater accuracy, like positioning the bus accurately beside a curb or driving the bus under a charging station.

In the following examples, this problem is overcome by complementing the control unit to implement a stage where the control unit determines from the point cloud at least one reference area and uses information from that reference area to compensate the effect of the tilt of the vehicle before any criterion for object detection is applied. Figure 4 illustrates example positions for the reference areas. For many applications, a required accuracy can be achieved with one reference area, like Area 1, but by use of additional reference areas, accuracy can be improved. For a reference area, the control unit is configured to extract from the point cloud at least a first group of points that represent at least one selected region of a surface that supports the vehicle. Tilt of the vehicle refers herein to tilt of the vehicle with respect to the supporting surface, and it can then be determined based on coordinates of the points in the first group of points. With the information on the tilt of the vehicle, the effect of this tilt can be compensated from the point cloud before any criterion for object detection is applied.

For added accuracy, the control unit may be configured to extract from the point cloud a second group of points that represent another selected region of the surface that supports the vehicle. Tilt of the vehicle can then be determined based on points in the first group of points and in the second group of points. As shown in Figure 4, even more reference areas can be applied. Availability of reference areas depends naturally on the extent of the field of view of the scanning unit. For example, if necessary, additional scanning units may be positioned in different parts of the body of the vehicle and be jointly connected to the control unit to provide reference areas and their respective groups of points to the control unit so that they can be used jointly for determination of the tilt of the vehicle. As shown in Figure 4, one reference area may be, for example, in front of the vehicle and another reference area on the back of the vehicle. Other reference area configurations may also be used without deviating from the scope.

As an example, let us look in more detail at the case of Figures 1 and 3, and one reference area (Area 1 of Figure 4). If we assume that the scanning unit is fixed to the front of the bus, as shown in Figure 4, and one coordinate unit corresponds with 1 metre, an example condition for a first group of points could be {0 < x_gi < 0.4 and 0 < y_gi < 0.2}, wherein x_gi represents x-coordinates and y_gi represents y-coordinates of the first group of points, in other words points of the point cloud included in the selected reference area. Figures 4 and 5 illustrate the reference area 110 corresponding to the first group of points of this example in two different views.
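The extraction condition above can be sketched as a simple filter over the point cloud. The representation of points as (x, y, z) tuples and the function name are assumptions made for illustration only.

```python
def extract_reference_group(points, x_max=0.4, y_max=0.2):
    """Return the first group of points: those whose horizontal
    coordinates satisfy 0 < x < 0.4 and 0 < y < 0.2 (units: metres),
    i.e. the points falling inside the selected reference area."""
    return [(x, y, z) for (x, y, z) in points
            if 0.0 < x < x_max and 0.0 < y < y_max]
```

Points outside the reference area, for example those with x = 0.5 or y = 0.3, are simply discarded for the tilt determination but remain available in the full point cloud.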

The control unit is then configured to determine a tilt of the vehicle from said first group of points. The determination may be based on one calculation, or several calculations may be combined for more accuracy. The most straightforward method would be to select any two points from the reference area and determine the tilt of the vehicle based on their x- and z-coordinates. However, as the supporting surface (here the road) may have dents and bumps, a more accurate result can be achieved by means of averaging. The control unit may be configured to determine an average point p_ave that represents an average of vertical coordinates of the points in the first group of points, and a minimum point p_min that represents a minimum of vertical coordinates of the points in the first group of points. As shown in Figure 6, the tilt of the vehicle (angle α) may then be determined from the x- and z-coordinates of the average point and the minimum point as:

α = arctan((z_ave − z_min) / (x_ave − x_min))
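A minimal sketch of this averaging calculation follows, assuming points are (x, y, z) tuples. Taking the average point as the mean over the coordinates of the group is one plausible reading of the text; the function name is illustrative.

```python
import math

def tilt_from_group(group):
    """Tilt angle a = arctan((z_ave - z_min) / (x_ave - x_min)), where
    p_ave is the mean of the points in the group and p_min the point
    with the smallest vertical (z) coordinate."""
    x_ave = sum(p[0] for p in group) / len(group)
    z_ave = sum(p[2] for p in group) / len(group)
    p_min = min(group, key=lambda p: p[2])
    return math.atan2(z_ave - p_min[2], x_ave - p_min[0])
```

For points lying on a surface rising at a 10% grade, the function returns arctan(0.1), about 5.7 degrees.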

Alternatively, the control unit may be configured to compute a maximum point p_max that represents a maximum of vertical coordinates of the points in the first group of points, and a minimum point p_min that represents a minimum of vertical coordinates of the points in the first group of points. The tilt of the vehicle (angle α) may then be determined from the x- and z-coordinates of the maximum point and the minimum point as:

α = arctan((z_max − z_min) / (x_max − x_min))

The computations may also be implemented in parallel to provide a way to check and ensure that the computed tilt angle is correct. It should be noted that these points are advantageous examples that are easily implemented with known library functions of available software systems. Other selected points and/or other averaging methods, well known to a person skilled in the art, may be applied without deviating from the scope of protection.

The determinations are implemented in the control unit with coded programs, advantageously invoking library software available in the specific software system applied by the control unit. Much of the functionality needed by such applications is readily provided in system libraries. The choice of the applied library depends on a diverse range of requirements, such as desired features, ease of the API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC) and performance in speed, to name some. For example, the Point Cloud Library (PCL) is a large-scale, open project for point cloud processing.

When the tilt angle α is determined, the control unit is configured to apply it to create a compensated point cloud where coordinates of the points are adjusted by compensating the effect of the tilt of the vehicle. The effect of the tilt of the vehicle may be compensated by rotating coordinates of the points of the point cloud. The control unit may be configured to apply a library function to rotate the coordinates, or it may be configured with written code to implement the rotation. For example, Eigen is a high-level C++ library of template headers for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers and related algorithms. With Eigen, the operation could be implemented with the function Affine3f::rotate (https://eigen.tuxfamily.org/dox/group__TutorialGeometry.html).
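A plain sketch of the rotation-based compensation follows, assuming the tilt lies in the x-z plane so that the compensation is a rotation about the y-axis. In practice a library routine such as Eigen's Affine3f::rotate would be used; the function name here is illustrative.

```python
import math

def compensate_tilt(points, a):
    """Rotate each (x, y, z) point about the y-axis by -a, so that a
    supporting surface rising with slope tan(a) in the x-direction
    becomes level in the compensated point cloud."""
    c, s = math.cos(a), math.sin(a)
    return [(x * c + z * s, y, -x * s + z * c) for (x, y, z) in points]
```

After compensation, a point that lay on the tilted supporting surface has a vertical coordinate of zero, so criteria defined for the level situation of Figure 1 become applicable again.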

The control unit is then configured to detect the one or more objects of interest from the compensated point cloud. Typically, the control unit is configured to apply one or more defined criteria to the compensated point cloud to extract from it points of interest that correspond with the one or more objects of interest in the field of view of the scanning unit. For example, the PCL framework contains algorithms for filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. These algorithms can be used, for example, to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract key points and compute descriptors to recognize objects in the world based on their geometric appearance, create surfaces from point clouds and visualize them (https://pointclouds.org/documentation/index.html). Naturally, the software libraries and other programming tools mentioned in the above description are examples only. For a person skilled in the art it is clear that a range of specially coded or commercially available products may be used to determine the tilt angle of a supporting surface based on its point cloud coordinates, and to use the tilt angle to compensate the effect of the tilt in coordinates of points before detecting objects in the scanned field of view of the point cloud.

The flow chart of Figure 7 illustrates steps of a method implemented in the control unit 106 of the device described with Figures 1 to 6. The method begins by the control unit receiving (stage 700) from the scanning unit a point cloud, which has a field of view around a point of origin. The point of origin is located in the scanning unit that is fixed to a vehicle. The control unit extracts (stage 702) from the point cloud at least a first group of points that represent at least one selected region (REF) of a surface supporting the vehicle. The control unit selects (stage 704) at least two points from the first group of points, and determines (stage 706) the tilt of the vehicle based on coordinates of the at least two points in said first group of points. The control unit creates (stage 708) a compensated point cloud (cPC) where coordinates of the points are adjusted by compensating the effect of the tilt.

One or more objects of interest can then be detected (710) from the compensated point cloud.
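The stages above can be combined into one end-to-end sketch. The region bounds, the (x, y, z) tuple representation and the function name are illustrative assumptions, and the object detection itself (stage 710) is left to subsequent processing of the returned cloud.

```python
import math

def compensated_cloud(cloud):
    """Stages 702-708: extract the reference group (REF), determine the
    tilt from the average and minimum points, and return the compensated
    point cloud (cPC) ready for object detection."""
    # Stage 702: first group of points in the selected region.
    group = [(x, y, z) for (x, y, z) in cloud if 0 < x < 0.4 and 0 < y < 0.2]
    # Stages 704-706: tilt from the average point and the minimum point.
    x_ave = sum(p[0] for p in group) / len(group)
    z_ave = sum(p[2] for p in group) / len(group)
    p_min = min(group, key=lambda p: p[2])
    a = math.atan2(z_ave - p_min[2], x_ave - p_min[0])
    # Stage 708: rotate about the y-axis to compensate the tilt.
    c, s = math.cos(a), math.sin(a)
    return [(x * c + z * s, y, -x * s + z * c) for (x, y, z) in cloud]
```

Feeding in a cloud whose points lie on a uniformly tilted surface yields a compensated cloud in which all vertical coordinates are level, after which distance-based detection criteria can be applied as in the untilted case.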