

Title:
DEVICE FOR AND METHOD OF DRIVING SUPERVISION
Document Type and Number:
WIPO Patent Application WO/2022/111784
Kind Code:
A1
Abstract:
Method of and device (100) for driving supervision, wherein the device (100) comprises means (108) configured to receive sensor data, means (110) configured to estimate a driving trajectory of a vehicle and means (114) configured to analyze the sensor data, wherein the means (108) configured to receive sensor data, the means (110) configured to estimate the driving trajectory of the vehicle, and the means (114) configured to analyze the sensor data and the driving trajectory are configured to cooperate for receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.

Inventors:
BARBANTAN RARES (RO)
Application Number:
PCT/EP2020/025534
Publication Date:
June 02, 2022
Filing Date:
November 24, 2020
Assignee:
PORSCHE AG (DE)
International Classes:
G08G1/01; B60W40/09; B60W50/14; G08G1/16
Domestic Patent References:
WO 2014029882 A1 (2014-02-27)
Foreign References:
US 20140322676 A1 (2014-10-30)
US 20100023197 A1 (2010-01-28)
US 20190263417 A1 (2019-08-29)
FR 3074123 A1 (2019-05-31)
US 20180060970 A1 (2018-03-01)
US 20170166217 A1 (2017-06-15)
Claims:

1. A method of driving supervision, characterized by receiving (202) sensor data from at least one sensor of a vehicle, determining (204) a driving trajectory of the vehicle depending on the sensor data, providing (206) a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining (208) a reward depending on a difference between the driving trajectory and the reference, and outputting (210) the reward.

2. The method according to claim 1, characterized by determining (204) at least one predicted trajectory of a traffic participant from the sensor data, wherein the reference is determined (206) depending on the at least one predicted trajectory.

3. The method according to one of the previous claims, characterized in that the reward is determined (208) depending on an artificial intelligence model, wherein the artificial intelligence model is trained on different driving styles from recorded driving of, in particular, professional drivers.

4. The method according to claim 3, characterized in that the artificial intelligence model is trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecologic, the method further comprising determining the reference for driving depending on the driving style.

5. The method according to one of the previous claims, characterized by outputting (210) the reward on a graphical user interface of the vehicle or to a social media interface.

6. The method according to one of the previous claims, characterized by determining (208) for different driving situations a plurality of differences between different driving trajectories and respective references, and determining (208) the reward depending on the plurality of differences.

7. The method according to one of the previous claims, characterized by determining a goal depending on the vehicle type, the driving mode for the vehicle or the driving style, determining if the difference meets the goal, providing the reward if the goal is met and not providing the reward otherwise.

8. The method according to one of the previous claims, characterized by determining the driving mode selected by a driver of the vehicle via a user interface.

9. Device (100) for driving supervision, characterized in that the device (100) comprises means (108) configured to receive sensor data, means (110) configured to estimate a driving trajectory of a vehicle and means (114) configured to analyze the sensor data, wherein the means (108) configured to receive sensor data, the means (110) configured to estimate the driving trajectory of the vehicle, and the means (114) configured to analyze the sensor data and the driving trajectory are configured to cooperate for performing steps of the method according to one of the preceding claims.

Description:
Device for and method of driving supervision

The invention concerns a device for and method of driving supervision.

US 20180060970 A1 discloses a driver assistance system for collision mitigation by analyzing driving behavior with regard to critical traffic situations, of which the driver may be warned. US 20170166217 A1 and WO 2014029882 A1 disclose aspects of an adaptation of corresponding warnings according to different vehicle types, driving modes and styles.

The method and device according to the independent claims further improve the driving supervision.

The method of driving supervision comprises receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.

Advantageously, the method comprises determining at least one predicted trajectory of a traffic participant from the sensor data, wherein the reference is determined depending on the at least one predicted trajectory.

In one aspect, the reward is determined depending on an artificial intelligence model, wherein the artificial intelligence model is trained on different driving styles from recorded driving of, in particular, professional drivers.

The artificial intelligence model may be trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecologic, the method further comprising determining the reference for driving depending on the driving style. Advantageously, the method comprises outputting the reward on a graphical user interface of the vehicle or to a social media interface.

Advantageously, the method comprises determining for different driving situations a plurality of differences between different driving trajectories and respective references, and determining the reward depending on the plurality of differences.

Advantageously, the method comprises determining a goal depending on the vehicle type, the driving mode for the vehicle or the driving style, determining if the difference meets the goal, providing the reward if the goal is met and not providing the reward otherwise.

The method may comprise determining the driving mode selected by a driver of the vehicle via a user interface.

The device for driving supervision comprises means configured to receive sensor data, means configured to estimate a driving trajectory of a vehicle and means configured to analyze the sensor data, wherein the means configured to receive sensor data, the means configured to estimate the driving trajectory of the vehicle, and the means configured to analyze the sensor data and the driving trajectory are configured to cooperate for performing steps of the method.

Further advantageous embodiments are derivable from the following description and the drawing. In the drawing:

Fig. 1 schematically depicts a device for driving supervision,

Fig. 2 depicts steps in a method for driving supervision.

Figure 1 depicts a device 100 for driving supervision. The device 100 in the example is connectable to or comprises at least one first sensor 102, at least one second sensor 104 and at least one third sensor 106. The at least one first sensor 102 in the example is a camera. The at least one second sensor 104 in the example is a radar sensor. The at least one third sensor 106 in the example is a LIDAR sensor. Other sensors may be used. There may be more or fewer than three sensors.

The device 100 comprises a first module 108 configured to receive sensor data, a second module 110 configured to estimate a driving path of a vehicle, a third module 112 configured to supervise driving and a fourth module 114 configured to analyze the sensor data and the driving path.

The device 100 may be mountable to the vehicle. The device 100 may be a controller for the vehicle. The modules of the device 100 may be distributed across various controllers that are mounted to the vehicle and configured to communicate with one another.

These modules are configured to cooperate according to the method described below for receiving sensor data from at least one sensor of a vehicle, determining a driving trajectory of the vehicle depending on the sensor data, providing a reference for driving according to a vehicle type, a driving mode for the vehicle or a driving style, determining a reward depending on a difference between the driving trajectory and the reference, and outputting the reward.

The first module 108 is configured to receive input from the at least one first sensor 102, e.g. the camera. The first module 108 is further configured to receive input from the at least one second sensor 104, e.g. the radar sensor.

The first module 108 is further configured to receive input from the at least one third sensor 106, e.g. the LIDAR sensor.

The first module 108 may be configured to determine from the sensor data of different sensors fused sensor data. The first module 108 may be configured to determine an object list comprising one object or several objects detected in sensor data received by the sensors. The first module 108 may be configured to determine separate object lists for different sensors. The first module 108 may be configured to provide a first input for the second module 110 and to provide a second input for the third module 112.

The first input may be fused sensor data. The second input may comprise the object list or the separate object lists and may include further characteristics of the fused sensor data.
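The following is a minimal, illustrative sketch in Python of the object-list handling outlined above. The class, function and field names, as well as the naive distance-based duplicate check standing in for a real sensor-fusion association step, are assumptions made for the example only and are not part of the application.

```python
# Illustrative sketch of per-sensor object lists and a naive fusion step;
# all names and the duplicate check are assumptions for this example.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DetectedObject:
    object_id: int
    position: Tuple[float, float]   # (x, y) in vehicle coordinates, metres (assumed)
    velocity: Tuple[float, float]   # (vx, vy) in m/s (assumed)
    source_sensor: str              # "camera", "radar" or "lidar"


def merge_object_lists(per_sensor_lists: List[List[DetectedObject]],
                       distance_threshold: float = 1.5) -> List[DetectedObject]:
    """Merge per-sensor object lists into one fused list by dropping detections
    that lie closer than distance_threshold to an object already kept."""
    fused: List[DetectedObject] = []
    for object_list in per_sensor_lists:
        for obj in object_list:
            duplicate = any(
                (obj.position[0] - kept.position[0]) ** 2
                + (obj.position[1] - kept.position[1]) ** 2
                < distance_threshold ** 2
                for kept in fused
            )
            if not duplicate:
                fused.append(obj)
    return fused
```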

The second module 110 is configured to receive further sensor data, e.g. steering wheel angle, acceleration or other vehicle parameters.

The second module 110 is configured to estimate a future driving path based on the further sensor data, e.g. the steering wheel angle, the acceleration or the other vehicle parameters.

The second module 110 may comprise a model for estimating the future driving path. The model may be an artificial intelligence based model. This artificial intelligence based model may be trained to predict the future driving path based on the further sensor data.

The first input from the first module 108 may also be used as an input, in particular to this artificial intelligence based model, to estimate the future driving path.

The second module 110 may be configured to use map data to determine road data and to estimate the future driving path depending on the road data.

The second module 110, in particular the model, may be parameterized based on a vehicle type and/or a selected driving mode. The driving mode may be a normal mode, a sport mode or an ecological mode.

The future driving path determined by the second module 110 is provided to the third module 112. The third module 112 is configured to determine a predicted path for at least one traffic participant. The third module 112 is configured to determine the predicted path based on the second input, i.e. the object list or the object lists from the sensors. In the example, a plurality of predicted paths is determined for the traffic participants, i.e. the object or objects detected in the sensor data.

The third module 112 is configured in the example to determine, for each detected object, a predicted trajectory with a corresponding uncertainty.

The third module 112 may be configured to determine, based on the future driving path of the vehicle, a future trajectory of the vehicle.

The third module 112 is configured to evaluate the predicted trajectory of at least one object and the future trajectory of the vehicle to determine a parameter indicating a safety of the driving. The evaluation may consider a potential future collision when the future trajectory of the vehicle and the predicted trajectory of at least one object cross one another. The evaluation may determine the parameter to indicate a high risk of a collision in that case. The evaluation may determine the parameter to indicate a risk of collision in case a distance between these trajectories is less than a threshold without crossing one another. A high acceleration or high speed of the at least one object or the vehicle may increase the risk compared to a lower acceleration or speed. The parameter may be determined to indicate a higher level of the risk of the collision at a higher acceleration or speed than when the acceleration or speed is lower. A stability of the vehicle may be determined and the level of the risk may be adjusted to a higher level when the vehicle is in an unstable driving situation than when the vehicle is in a stable driving situation.

The parameter may define the risk level based on the vehicle type and/or the selected driving mode as well. The sport mode may lead to the parameter indicating a higher risk level or a lower risk level than the normal mode or the ecological mode. In the example, the parameter may define three levels of the risk, namely from high to low: Imminent, Critical and Standard.
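An illustrative Python sketch of the risk evaluation and the three levels named above (Imminent, Critical, Standard) follows. The threshold values, the speed limit and the helper names are assumptions chosen only for the example; they are not values from the application.

```python
# Sketch of mapping the evaluated situation onto the three named risk levels;
# thresholds and helper names are assumed for illustration only.
from typing import List, Tuple

Point = Tuple[float, float]


def min_distance(traj_a: List[Point], traj_b: List[Point]) -> float:
    """Smallest Euclidean distance between any pair of trajectory points."""
    return min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for ax, ay in traj_a for bx, by in traj_b)


def risk_level(ego_trajectory: List[Point],
               object_trajectory: List[Point],
               ego_speed: float,
               vehicle_unstable: bool,
               crossing_threshold: float = 0.5,
               proximity_threshold: float = 2.0,
               high_speed: float = 30.0) -> str:
    """Return "Imminent", "Critical" or "Standard" (from high risk to low risk)."""
    distance = min_distance(ego_trajectory, object_trajectory)
    if distance < crossing_threshold:       # trajectories effectively cross
        level = "Imminent"
    elif distance < proximity_threshold:    # close to one another without crossing
        level = "Critical"
    else:
        level = "Standard"
    # High speed or an unstable driving situation raises the risk by one level.
    if ego_speed > high_speed or vehicle_unstable:
        level = {"Standard": "Critical", "Critical": "Imminent"}.get(level, level)
    return level
```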

The third module 112 may be configured to signal the criticality of the future estimated behavior to a driver e.g. on the three levels: Imminent, Critical and Standard.

The third module 112 may be configured to output data to the fourth module 114. The data may be the predicted trajectory of at least one object and the future trajectory of the vehicle and/or the parameter indicating a safety of the driving.

The fourth module 114 is configured to analyze the predicted trajectory of at least one object and the future trajectory of the vehicle. The fourth module 114 in the example is configured to determine and display a safety analysis for a scenario the vehicle is driving in. The fourth module 114 may be adapted to show a history of driving for individual drivers of the vehicle. The history may be a scenario based safety analysis or show an evolution of the safety analysis for the individual driver. The fourth module 114 may be configured to determine the safety analysis by means of an artificial intelligence. The artificial intelligence may be trained to predict goal achievements. The fourth module 114 may be configured to determine a data analysis for possible vehicle malfunctions as well.

The fourth module 114 may be configured to analyze the predicted trajectory of at least one object and the future trajectory of the vehicle and the at least one parameter indicating a safety of the driving.

The fourth module 114 is configured in one aspect to create a safe driving rating e.g. for an individual driver. The fourth module 114 may be configured to upload the rating to a cloud based tool for learning and estimating unsafe behaviors. The fourth module 114 is in another aspect configured to use an external application or external applications for simulating different behaviors. The fourth module 114 is in another aspect configured to use an external application or external applications for presenting safe and unsafe situations.

The fourth module 114 may be configured to implement a gamification concept for rewards. For example, a reward is given for driving the vehicle for 100 km without any warnings. As another example, a reward is given for a perfect overtaking maneuver. The reward may include a badge that is displayed to the driver via a graphical user interface of the vehicle.
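A hedged Python sketch of this gamification concept follows, using the two examples named above (100 km without warnings, a perfect overtaking maneuver). The badge names and the record structure are assumptions made for illustration.

```python
# Sketch of a simple badge-based reward scheme; badge names and the record
# structure are assumptions, not part of the application.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DrivingRecord:
    km_without_warning: float = 0.0
    clean_overtakes: int = 0
    badges: List[str] = field(default_factory=list)


def update_rewards(record: DrivingRecord) -> List[str]:
    """Return newly earned badges based on the current driving record."""
    new_badges = []
    if record.km_without_warning >= 100.0 and "100 km warning-free" not in record.badges:
        new_badges.append("100 km warning-free")
    if record.clean_overtakes >= 1 and "Perfect overtaking" not in record.badges:
        new_badges.append("Perfect overtaking")
    record.badges.extend(new_badges)
    return new_badges  # e.g. shown to the driver on the graphical user interface
```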

The artificial intelligence may be used for the prediction and/or the analysis of the data as follows.

For prediction:

An artificial intelligence model may be trained on generic driver behavior to accurately predict how other participants in traffic will behave.

A short history of a vehicle’s movement, together with current dynamic attributes such as speed, acceleration, yaw and yaw rate, may be input to the artificial intelligence model to predict the vehicle's future trajectory. A sequential model may be used, e.g. a long short-term memory, LSTM.
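A minimal sketch, assuming PyTorch as the framework, of such a sequential (LSTM) prediction model: a short movement history with dynamic attributes goes in, a predicted future trajectory comes out. The feature set, layer sizes and prediction horizon are illustrative assumptions only.

```python
# Sketch of an LSTM-based trajectory predictor; sizes and the feature layout
# (x, y, speed, acceleration, yaw, yaw rate) are assumptions for the example.
import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    def __init__(self, n_features: int = 6, hidden_size: int = 64, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            batch_first=True)
        # Map the last hidden state to `horizon` future (x, y) positions.
        self.head = nn.Linear(hidden_size, horizon * 2)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time_steps, n_features)
        _, (hidden, _) = self.lstm(history)
        out = self.head(hidden[-1])
        return out.view(-1, self.horizon, 2)  # (batch, horizon, 2)


# Example: predict 10 future positions from a 2-second history sampled at 10 Hz.
model = TrajectoryPredictor()
history = torch.randn(1, 20, 6)
future_xy = model(history)
```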

For analytics:

An artificial intelligence model may be trained on different driving styles from recorded driving of professional drivers.

The model may be used to classify the driving style of the driver, e.g. sporty, aggressive, safe, ecologic. The model may output a “driving style” difference to assess how close a driver gets to a desired driving style. The model may output a “rating”.
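The following Python sketch illustrates one possible reading of this analytics model: a classifier over the four named styles, a “driving style” difference, and a rating derived from it. The feature vector, network sizes and the way the difference and rating are formed are assumptions for the example.

```python
# Sketch of a driving-style classifier with a "driving style" difference and a
# rating; feature layout, sizes and formulas are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

STYLES = ["sporty", "aggressive", "safe", "ecologic"]


class StyleClassifier(nn.Module):
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, len(STYLES)))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)  # logits over the four styles


def style_difference(logits: torch.Tensor, desired_style: str) -> float:
    """Distance between the classified style and the desired style;
    a smaller value means the driver is closer to the desired style."""
    probabilities = F.softmax(logits, dim=-1).squeeze(0)
    return float(1.0 - probabilities[STYLES.index(desired_style)])


def rating(difference: float) -> float:
    """Simple rating on a 0..100 scale derived from the style difference."""
    return round(100.0 * (1.0 - difference), 1)
```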

The fourth module 114 may be adapted to use this “rating” to determine the reward or to suggest improvements for the driving.

For example, an improvement is a recommendation for a driver of the vehicle to increase the distance to another vehicle in front or to start braking sooner in tight corners.

The model may be trained on famous drivers. The model may output a suggestion or recommendation for achieving a driving style most similar to that of the famous driver. The model may output the reward depending on the difference to the driving style of the famous driver, e.g. giving a higher reward the closer the driving style is imitated. The output may be provided for sharing on social media.

A different reward may be determined depending on the vehicle type, the selected driving mode or the driving style.

The method of driving supervision is described with reference to figure 2 below.

In a step 202, sensor data from at least one sensor of the vehicle is received.

The sensor data may be captured while the vehicle moves in a drive cycle.

In a step 204, the driving trajectory of the vehicle is determined. The driving trajectory is determined in the example from the sensor data as described above. The at least one predicted trajectory of the traffic participant may be determined from the sensor data as well.

In a step 206, the reference for driving according to the vehicle type, the driving mode for the vehicle or the driving style is provided as described above. The reference may be determined depending on the at least one predicted trajectory as well. The reference may be determined by the artificial intelligence model. The artificial intelligence model may be trained to classify the driving style of the driver, in particular as sporty, aggressive, safe, ecologic. In this case, the reference for driving may be determined depending on the driving style into which the artificial intelligence model has classified the sensor data.

In a step 208, the reward is determined depending on the difference between the driving trajectory and the reference as described above. In the example, the reward is determined depending on the artificial intelligence model.

The artificial intelligence model may be trained on different driving styles from recorded driving of, in particular, professional drivers.

For different driving situations, a plurality of differences between different driving trajectories and respective references may be determined. These differences may be determined within one driving cycle or, for the same driver, across different driving cycles of the vehicle. The reward is in this case determined depending on the plurality of differences, e.g. by summing up individual rewards determined for the different driving situations.
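A minimal Python sketch of this aggregation follows; the mapping from a trajectory difference to an individual reward is an assumption chosen only for illustration.

```python
# Sketch of summing individual rewards over several driving situations; the
# difference-to-reward mapping is an assumed example formula.
from typing import List


def individual_reward(difference: float, scale: float = 10.0) -> float:
    """Smaller difference between driving trajectory and reference -> higher reward."""
    return max(0.0, scale - difference)


def total_reward(differences: List[float]) -> float:
    """Sum the individual rewards determined for the different driving situations."""
    return sum(individual_reward(d) for d in differences)


# Example: differences collected within one driving cycle.
print(total_reward([1.2, 0.4, 3.5]))
```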

A goal may be provided depending on the vehicle type, the driving mode for the vehicle or the driving style. In this case, the reward may be provided, if the difference meets the goal. In one aspect, the reward is not provided, if the goal is not met.
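A short sketch of this goal check is given below; the goal value per driving mode is an assumed example and not a value taken from the application.

```python
# Sketch of providing the reward only when the goal for the selected driving
# mode is met; the per-mode goal values are assumptions for the example.
from typing import Optional

# Maximum allowed difference between driving trajectory and reference per mode (assumed).
GOALS = {"normal": 5.0, "sport": 3.0, "ecological": 4.0}


def reward_if_goal_met(difference: float, driving_mode: str, reward: float) -> Optional[float]:
    """Provide the reward if the goal is met, and do not provide it otherwise."""
    return reward if difference <= GOALS[driving_mode] else None
```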

In a step 210 the reward is output, e.g. on a graphical user interface of the vehicle or to a social media interface. The driving rating may be determined and uploaded to the cloud based tool as well.

The driving mode may be selectable by the driver in the vehicle via a user interface. In this case, the driving mode selected by the driver of the vehicle may be recognized for use in the method.