Title:
ACTION PLANNING SYSTEM AND METHOD FOR AUTONOMOUS VEHICLES
Document Type and Number:
WIPO Patent Application WO/2018/162521
Kind Code:
A1
Abstract:
An action planning system (100) and method for autonomous vehicles are provided. The system (100) comprises one or more processors (108) and one or more non-transitory computer-readable storage media (110) having stored thereon a computer program used by the one or more processors (108), wherein the computer program causes the one or more processors (108) to estimate a future environment of an autonomous vehicle (114), generate a possible trajectory for the autonomous vehicle (114), predict motion and reactions of each dynamic obstacle in the future environment of the autonomous vehicle (114) based on current local traffic context, and generate a prediction iteratively over timesteps.

Inventors:
SADAT SEYED ABBAS (US)
GLASER THOMAS (US)
HARDY JASON S (US)
JACOB MITHUN (US)
MUELLER JOERG (US)
Application Number:
PCT/EP2018/055536
Publication Date:
September 13, 2018
Filing Date:
March 07, 2018
Assignee:
BOSCH GMBH ROBERT (DE)
International Classes:
G01C21/00; B60W30/08; B60W30/095; G01C21/34; G05D1/02
Foreign References:
US20080303696A12008-12-11
EP2495713A12012-09-05
US20110190972A12011-08-04
EP1898232A12008-03-12
Other References:
None
Claims:
What is claimed is:

1. A non-transitory computer-readable storage medium having stored thereon a computer program for evaluating reactions and interactions with one or more vehicles, the computer program comprising a routine of instructions for causing a machine to perform:

searching through a tree of possible action sequence combinations for an ego vehicle; and

capturing reactions and interactions with one or more vehicles.

2. The non-transitory computer-readable storage medium of claim 1, wherein the capturing of reactions and interactions with the one or more vehicles is performed by an iterative full environment prediction.

3. A method comprising:

estimating, by one or more processors, a future environment of an autonomous vehicle;

generating, by the one or more processors, a possible trajectory for the autonomous vehicle;

predicting, by the one or more processors, motion and reactions of each dynamic obstacle in the future environment of the autonomous vehicle based on current local traffic context; and

generating, by the one or more processors, a prediction iteratively over timesteps.

4. The method of claim 3 further comprising:

decoupling, by the one or more processors, an action decision time resolution from an iterative prediction resolution.

5. A system for an autonomous vehicle comprising:

one or more processors; and

one or more non-transitory computer-readable storage media having stored thereon a computer program used by the one or more processors, wherein the computer program causes the one or more processors to:

estimate a future environment of an autonomous vehicle;

generate a possible trajectory for the autonomous vehicle;

predict motion and reactions of each dynamic obstacle in the future environment of the autonomous vehicle based on current local traffic context; and

generate a prediction iteratively over timesteps.

6. The system of claim 5 wherein the computer program further causes the one or more processors to:

decouple an action decision time resolution from an iterative prediction resolution.

Description:
Action Planning System and Method for Autonomous Vehicles

FIELD

[0001] This disclosure relates generally to autonomous vehicles and, more particularly, to action planning systems and methods for autonomous vehicles.

BACKGROUND

[0002] Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to the prior art by inclusion in this section.

[0003] Conventional action planning devices are based on a finite state machine (FSM) approach in which the autonomous vehicle decides to take specific actions based on heuristic transition conditions. This FSM approach, however, only allows the autonomous vehicle to react passively to changes in its environment and provides no future look-ahead for how a given action might affect how the traffic situation evolves. FIGs. 1 and 2 illustrate possible actions, such as a "follow lane" action with a target vehicle or a "lane change" action between two target vehicles on an adjacent lane, that are evaluated by prior art action planning devices using the FSM approach.

SUMMARY

[0004] A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.

[0005] Embodiments of the disclosure relate to a non-transitory computer-readable storage medium having stored thereon a computer program for evaluating reactions and interactions with one or more vehicles, the computer program comprising a routine of instructions for causing a machine to perform searching through a tree of possible action sequence combinations for an ego vehicle and capturing reactions and interactions with one or more vehicles.

[0006] Another aspect of the disclosed embodiments is a method performed by one or more processors that includes estimating a future environment of an autonomous vehicle, generating a possible trajectory for the autonomous vehicle, predicting motion and reactions of each dynamic obstacle in the future environment of the autonomous vehicle based on current local traffic context, and generating a prediction iteratively over timesteps. The method further includes decoupling, by the one or more processors, an action decision time resolution from an iterative prediction resolution.

[0007] Another aspect of the disclosed embodiments is a system for an autonomous vehicle that includes one or more processors and one or more non-transitory computer-readable storage media having stored thereon a computer program used by the one or more processors, wherein the computer program causes the one or more processors to estimate a future environment of an autonomous vehicle, generate a possible trajectory for the autonomous vehicle, predict motion and reactions of each dynamic obstacle in the future environment of the autonomous vehicle based on current local traffic context, and generate a prediction iteratively over timesteps.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] These and other features, aspects, and advantages of this disclosure will become better understood when the following detailed description of certain exemplary embodiments is read with reference to the accompanying drawings, in which like characters represent like parts throughout the drawings, wherein:

[0009] FIG. 1 is a simplified diagram showing a possible follow-lane action using a prior art FSM implemented action planning device for an autonomous vehicle;

[0010] FIG. 2 is a simplified diagram showing a possible lane-change action using the prior art FSM implemented action planning device for an autonomous vehicle;

[0011] FIG. 3A is an illustration showing an automated driving system according to an embodiment of the disclosure;

[0012] FIG. 3B is a simplified diagram showing a bird's-eye view of a map and a multilane road including a plurality of neighboring vehicles in proximity to an autonomous vehicle according to a described embodiment of the disclosure; and

[0013] FIGs. 4A-4D are views of the autonomous vehicle traveling autonomously in proximity to a plurality of neighboring vehicles according to a described embodiment of the disclosure.

DETAILED DESCRIPTION

[0014] The following description is presented to enable any person skilled in the art to make and use the described embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the described embodiments. Thus, the described embodiments are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.

[0015] FIGs. 3A and 3B illustrate an automated driving system 100 in accordance with one aspect of the disclosure. As depicted in FIG. 3A, the driving system 100 may be integrated into a vehicle 114, a machine device, or any suitable portable or mobile device/vessel. While certain aspects of the disclosure are particularly useful in connection with specific types of vehicles, the vehicle may be any type of vehicle including, but not limited to, cars, trucks, motorcycles, buses, boats, sport-utility vehicles, two-wheelers, airplanes, helicopters, lawnmowers, recreational vehicles, amusement park vehicles, trams, golf carts, trains, trolleys, ultralights, and the like. The vehicle may have one or more automated driving systems. The machine device may be any type of device including, but not limited to, cellular phones, laptops, tablets, wearable devices such as watches, glasses, or goggles, or any other suitable portable device. As depicted in FIG. 3A, the automated driving system 100 includes a processor 108, a computer-readable medium 110, and a communication module 112. A route planning module 102, an action planning module 104, and a trajectory planning module 106 may be provided in the automated driving system 100.

[0016] Although the route planning module 102, the action planning module 104, the trajectory planning module 106, the processor 108, the computer-readable medium 110, and the communication module 112 are depicted as being within the same block 100, it will be understood by those of ordinary skill in the art that these components may or may not be housed in the same block 100. In various aspects described herein, the processor 108, the computer-readable medium 110, and the communication module 112 may be integrated into a computer located outside the automated driving system 100. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle 114 and others by a remote processor disposed within the machine device 116.

[0017] The computer-readable medium 110 stores information accessible by the route planning module 102, the action planning module 104, the trajectory planning module 106, and the processor 108, including computer-executable instructions that may be executed or otherwise used by the route planning module 102, the action planning module 104, the trajectory planning module 106, and the processor 108. The processor 108 may be any conventional processor. Alternatively, the processor 108 may be a dedicated device such as an ASIC.

[0018] The communication module 112 is in wired or wireless communication, directly or indirectly, with other computers such as one or more electronic control units, one or more processors, or networks. Other sensing devices such as sensors, user interfaces such as a touch display, mouse, keyboard, audio input, or camera, and other suitable computer-implemented modules may be integrated into or communicatively coupled to the system 100.

[0019] Route, action, and trajectory planning processes running on the route planning module 102, the action planning module 104, and the trajectory planning module 106 generate a candidate route, action, and trajectory that an ego vehicle may follow through an environment during a configurable time horizon T. In some aspects, these processes may instead run on the processor 108, whereby the route planning module 102, the action planning module 104, and the trajectory planning module 106 are computer-executable instructions programmed on the processor 108.
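For illustration only, the following Python sketch shows one plausible way the three planning stages could be composed into a single planning cycle over a configurable horizon T. The Planner interface, the plan_cycle function, and the default 8-second horizon are assumptions of this sketch and are not specified by the disclosure.

```python
from typing import Any, Protocol, Tuple

class Planner(Protocol):
    """Hypothetical common interface for modules 102, 104, and 106."""
    def plan(self, context: Any, horizon_s: float) -> Any: ...

def plan_cycle(route_planner: Planner,
               action_planner: Planner,
               trajectory_planner: Planner,
               environment: Any,
               horizon_s: float = 8.0) -> Tuple[Any, Any, Any]:
    # Each stage consumes the environment plus the upstream candidates and
    # produces a candidate the ego vehicle may follow during the horizon T.
    route = route_planner.plan(environment, horizon_s)
    action = action_planner.plan((environment, route), horizon_s)
    trajectory = trajectory_planner.plan((environment, route, action), horizon_s)
    return route, action, trajectory
```

Whether these stages run on dedicated modules or as instructions on the processor 108 is an implementation choice, as noted above.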

[0020] As depicted in FIG. 3B, the action planning module 104 is operable to perform a limited horizon search through a tree of possible action sequence combinations for the autonomous vehicle and to combine the horizon search with iterative, full-environment prediction to capture reactions and interactions with other traffic participants. For example, the action planning module 104 performs an optimal graph search to efficiently explore the tree of possible ego actions up to a limited time horizon, or planning horizon. The time horizon includes a start point in time and an end point in time at which the horizon can be evaluated. The time horizon is necessary to constrain the size of the search space and enable efficient re-planning.
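As a rough illustration of the limited horizon search described above, the following Python sketch explores the tree of ego action sequences with a best-first graph search, using a static cost estimate as the heuristic at the planning horizon. The action names, the step and static_cost callables, and the Node structure are hypothetical; the disclosure does not prescribe this particular formulation.

```python
import heapq
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

# Hypothetical discrete ego actions of the kind discussed in this disclosure.
ACTIONS = ["follow_lane", "lane_change", "abort_lane_change"]

@dataclass
class Node:
    state: Any          # predicted environment (ego vehicle + other vehicles)
    actions: List[str]  # ego action sequence from the root to this node
    cost: float         # accumulated utility cost of the sequence

def limited_horizon_search(root_state: Any,
                           step: Callable[[Any, str], Tuple[Any, float]],
                           static_cost: Callable[[Any], float],
                           horizon: int) -> List[str]:
    """Best-first search over ego action sequences up to `horizon` timesteps.

    `step(state, action)` advances the predicted environment by one timestep
    under the chosen ego action and returns (next_state, incremental_cost);
    `static_cost(state)` estimates the remaining utility cost at the end of
    the planning horizon.  Both callables are assumptions of this sketch.
    """
    tie = 0  # tie-breaker so the heap never compares Node objects
    frontier = [(static_cost(root_state), tie, Node(root_state, [], 0.0))]
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if len(node.actions) == horizon:
            return node.actions  # cheapest complete action sequence found first
        for action in ACTIONS:
            next_state, inc = step(node.state, action)
            child = Node(next_state, node.actions + [action], node.cost + inc)
            tie += 1
            heapq.heappush(frontier, (child.cost + static_cost(next_state), tie, child))
    return []
```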

[0021] In order to ensure the end point in time is quickly reached, whereby the most suitable path can be found without having to traverse all possible paths or connections, a graph search technique is used. The graph search technique may be a horizon search technique; however, other suitable graph search techniques may be used. A precomputed static cost is used to estimate a utility cost for a given ego position at the end of the planning time horizon. In one embodiment, the trajectory planning module 106 includes the graph search technique that the ego vehicle 114 may follow through the environment during the configurable time horizon T. In another embodiment, the graph search technique is stored on the action planning module 104. The generated trajectory is then stored in the computer-readable medium 110. Further details on action planning using iterative action search and full-environment prediction are described below.
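The precomputed static cost mentioned above could, for example, be realized as a lookup table built once before the search. In the simplified sketch below, the cost of an ego position is merely its remaining straight-line distance to a goal point; the make_static_cost helper, its inputs, and the distance metric are all assumptions and only stand in for whatever utility estimate an actual implementation would precompute.

```python
import math
from typing import Dict, Iterable, Tuple

Position = Tuple[float, float]

def make_static_cost(candidate_positions: Iterable[Position], goal: Position):
    """Precompute a utility-cost estimate for ego positions at the horizon end.

    Simplified stand-in: cost = straight-line distance to the goal, computed
    once so the horizon search can look it up cheaply for any candidate
    end-of-horizon ego position.
    """
    table: Dict[Position, float] = {
        p: math.dist(p, goal) for p in candidate_positions
    }

    def static_cost(position: Position) -> float:
        # Positions outside the precomputed set are treated as unreachable.
        return table.get(position, float("inf"))

    return static_cost
```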

[0022] FIGs. 4A-4D are views of the autonomous vehicle 114 on a two-lane road 200 traveling autonomously in proximity to a plurality of neighboring vehicles 160, 162 according to a described embodiment of the disclosure. As depicted in FIG. 4A, the vehicle 160 is currently following a potential path 202 in the left lane of the two-lane road 200. The autonomous vehicle 114 is currently positioned in the right lane of the two-lane road 200, with the left and right lanes separated by a dividing line 204. The autonomous vehicle 114, or "ego vehicle," described herein starts in a follow lane action and enumerates all possible actions it can switch to at the next timestep. In one aspect, as depicted in FIG. 4A, the ego vehicle 114 continues in the follow lane action and follows a potential path 208 in the right lane of the two-lane road 200. In another aspect, if the lane change action has a lower initial cost, the automated driving system 100 of the ego vehicle 114 may switch from the follow lane action to a lane change action, and a planned path 210 for the ego vehicle 114 is depicted in FIG. 4B as crossing over the dividing line 204 to approach the potential path 202 of the neighboring vehicle 160. The action planning module 104, having the horizon search programmed therein, explores switching from the follow lane action to the lane change action. In one embodiment, an iterative full-environment prediction is used, which allows the horizon search to evaluate reactions and interactions with the other vehicles 160, 162 without explicitly searching over the possible actions of the other vehicles 160, 162. The iterative full-environment prediction keeps the horizon search tractable. Each time a new action is evaluated in the horizon graph search, a prediction process is performed that estimates the future environment of the ego vehicle 114 at the next timestep, assuming the ego vehicle 114 follows the selected action, e.g. a follow lane action, a lane change action, or an abort lane change action. This prediction process includes generating a possible trajectory for the ego vehicle 114 and then predicting the motion and reactions of each dynamic obstacle in the environment of the ego vehicle 114 based on its current local traffic context. The prediction process further includes generating a prediction of how other vehicles may react to their current traffic situation, iteratively over short timesteps. The horizon graph search is thus able to model interactions between the ego vehicle 114 and other traffic participants without requiring any active planning for the other vehicles 160, 162. This iterative passive prediction is much less expensive and scales much better than attempting to perform joint active planning for the ego vehicle 114 and the other traffic participants.
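A minimal sketch of the iterative, passive full-environment prediction described in this paragraph is shown below, assuming a very simple state representation (lane index, longitudinal position s, speed v) and a toy gap-keeping reaction rule. None of these modeling choices come from the disclosure, which leaves the motion and reaction models open.

```python
from typing import Dict, List

def predict_environment(ego_state: Dict, others: List[Dict], ego_action: str,
                        dt: float = 0.2, steps: int = 5) -> List[Dict]:
    """Roll the whole environment forward for one evaluated ego action.

    At each short timestep the ego follows a trajectory for the selected
    action, then every other vehicle reacts passively to its local traffic
    context (here: brake if the gap to the vehicle ahead in its lane shrinks).
    """
    ego = dict(ego_state)
    vehicles = [dict(v) for v in others]
    snapshots = []
    for _ in range(steps):
        # 1. Generate the ego motion for the selected action.
        if ego_action == "lane_change":
            ego["lane"] = ego.get("target_lane", ego["lane"])
        ego["s"] += ego["v"] * dt

        # 2. Predict each dynamic obstacle from its current local context.
        for veh in vehicles:
            ahead = [o for o in vehicles + [ego]
                     if o is not veh and o["lane"] == veh["lane"] and o["s"] > veh["s"]]
            if ahead and min(o["s"] for o in ahead) - veh["s"] < 2.0 * veh["v"]:
                veh["v"] = max(0.0, veh["v"] - 2.0 * dt)  # gap too small: brake
            veh["s"] += veh["v"] * dt

        snapshots.append({"ego": dict(ego), "others": [dict(v) for v in vehicles]})
    return snapshots
```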

[0023] In one aspect, the ego vehicle 114 can either continue in the lane change action or switch from the lane change action to an abort lane change action. As depicted in FIG. 4C, the ego vehicle 114 proceeds to continue in the lane change action by entering the potential path 202 of the neighboring vehicle 160, and the planned path 210 thereby merges into the potential path 202. If the ego vehicle 114 either decides not to switch lanes or is not likely able to change lanes because the on-coming neighboring vehicle 160 approaches along the potential path 202, as depicted in FIG. 4D, the automated driving system 100 of the ego vehicle 114 switches from the lane change action to the abort lane change action. A planned path 212 for the ego vehicle 114 is shown as crossing over the dividing line 204 to approach the left lane of the two-lane road 200.

[0024] Now returning to FIG. 4B, at timestep t1, the ego vehicle 114 generates a lane change trajectory but the other vehicle 160 in the left lane has not yet reacted. At timestep t2, whether the ego vehicle continues with the lane change behavior as illustrated in FIG. 4C or switches to an abort lane change action as shown in FIG. 4D, the ego vehicle trajectory takes the ego vehicle 114 partially into the left lane, causing a reaction from the left-lane vehicle 160. In some embodiments, the action search is restricted to only evaluate switching behaviors at a small subset of timesteps, effectively decoupling the iterative prediction time resolution from the action search time resolution. This greatly improves computational efficiency and allows for the evaluation of longer planning horizons, which generates more intelligent action decisions. The limited horizon search through the tree of possible ego actions for intelligent automated driving decision making provides a significant decision-making performance improvement. In one example, the ego vehicle 114 explicitly evaluates the expected utility of its actions at future timesteps. In another example, the ego vehicle 114 incrementally predicts the actions of other agents based on its own anticipated future state. The ego vehicle 114 explicitly evaluates and compares different possible timings of each action transition to maximize safety, comfort, and progress towards its goal. Iterative prediction allows the action search to understand how other dynamic obstacles react to and interact with the ego vehicle 114 without the need to perform joint anticipatory planning for each dynamic obstacle in the environment. Restricted branch points decouple the action decision time resolution from the iterative prediction resolution, allowing more efficient exploration of longer planning horizons.
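The restricted branch points discussed above could be implemented roughly as follows, reusing the Node and ACTIONS names from the earlier search sketch: the prediction still advances every timestep, but new ego actions are only branched at every k-th timestep, and the current action is simply continued in between. The branch_every parameter and the expand_node helper are illustrative assumptions rather than part of the disclosure.

```python
def expand_node(node: "Node", step, branch_every: int = 5) -> list:
    """Expand one search node with restricted branch points (simplified sketch).

    Only every `branch_every`-th timestep is a branch point where all ego
    actions are considered; elsewhere the previous action is continued, so the
    action decision resolution is coarser than the prediction resolution.
    """
    at_branch_point = len(node.actions) % branch_every == 0
    candidates = ACTIONS if at_branch_point else [node.actions[-1]]
    children = []
    for action in candidates:
        next_state, inc = step(node.state, action)
        children.append(Node(next_state, node.actions + [action], node.cost + inc))
    return children
```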

[0025] The embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.

[0026] Embodiments within the scope of the disclosure may also include non-transitory computer-readable storage media or machine-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such non-transitory computer-readable storage media or machine-readable media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such non-transitory computer-readable storage media or machine-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. Combinations of the above should also be included within the scope of the non-transitory computer-readable storage media or machine-readable media.

[0027] Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network.

[0028] Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.

[0029] While the patent has been described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, embodiments in accordance with the patent have been described in the context of particular embodiments. Functionality may be separated or combined in blocks differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.