

Title:
METHOD AND SYSTEM FOR CONFIGURING VARIATIONS IN AUTONOMOUS VEHICLE TRAINING SIMULATIONS
Document Type and Number:
WIPO Patent Application WO/2023/009925
Kind Code:
A1
Abstract:
A method includes receiving a base simulation scenario that includes features of a scene through which a vehicle may travel and receiving a simulation variation for an object in the scene. The simulation variation defines multiple values for a characteristic of the object. The method includes adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model. The motion planning model iteratively simulates variations of the object based on values for the characteristic of the object. In response to each simulated variation of the object, the motion planning model selects a continued trajectory for the vehicle, wherein the continued trajectory is either the planned trajectory or an alternate trajectory.

Inventors:
NAYHOUSE MICHAEL (US)
ACKENHAUSEN THOMAS (US)
CARMODY PATRICK (US)
YU TINTIN (US)
Application Number:
PCT/US2022/073251
Publication Date:
February 02, 2023
Filing Date:
June 29, 2022
Assignee:
ARGO AI LLC (US)
International Classes:
B60W40/00; B60W30/095; G05D1/00; G06N3/08
Domestic Patent References:
WO2021183748A1 (2021-09-16)
WO2019199880A1 (2019-10-17)
Foreign References:
US20200353943A1 (2020-11-12)
US20200074230A1 (2020-03-05)
Attorney, Agent or Firm:
SINGER, James (US)
Claims:
CLAIMS

1. A method of generating a vehicle motion planning simulation scenario, the method comprising, by a processor: receiving, from a data store containing a plurality of simulation scenarios, a base simulation scenario that includes features of a scene through which a vehicle may travel; receiving, from the data store, a simulation variation for an object in the scene, the simulation variation defining a plurality of values for a characteristic of the object; adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario; and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model in which the motion planning model: simulates movement of the vehicle along a planned trajectory, iteratively simulates variations of the object based on the plurality of values for the characteristic of the object, in response to each simulated variation of the object, selects a continued trajectory for the vehicle, wherein the continued trajectory is either the planned trajectory or an alternate trajectory, and causes the vehicle to move along the continued trajectory.

2. The method of claim 1, further comprising, defining the simulation variation by: outputting, via a user interface that includes a display device, the characteristic of the object; receiving, via the user interface, one or more variations for the characteristic of the object; outputting, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic; and saving the one or more variations for the characteristic to the data store as the simulation variation.

3. The method of claim 2, wherein the characteristic comprises a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object.

4. The method of claim 1, further comprising: receiving, from the data store, a second base simulation scenario that includes features of a second scene through which the vehicle may travel; receiving, from the data store, the simulation variation; adding the simulation variation to the second base simulation scenario to yield a second augmented simulation scenario; and applying the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model.

5. The method of claim 1, further comprising: receiving, from the data store, a second simulation variation for a second object in the scene, the second simulation variation defining a second plurality of values for a characteristic of the second object; adding the second simulation variation to the base simulation scenario to yield a second augmented simulation scenario; and applying the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model, wherein the motion planning model iteratively simulates variations of the object and the second object based on the plurality of values for the characteristic of the object and the plurality of values for the characteristic of the second object, and, in response to each variation of the object or the second object, selects the continued trajectory for the vehicle.

6. The method of claim 1, further comprising: receiving, from the data store, a second simulation variation for the object in the scene, the second simulation variation defining a second plurality of values for a second characteristic of the object; and adding the simulation variation and the second simulation variation to the base simulation scenario to yield the augmented simulation scenario.

7. The method of claim 1, further comprising: generating an augmentation element that includes a second object and a behavior for the second object; and adding the simulation variation and the augmentation element to the base simulation scenario to yield the augmented simulation scenario.

8. The method of claim 1, wherein simulating movement of the vehicle along the planned trajectory comprises running the vehicle on a test track, wherein perception data from one or more vehicle sensors is augmented by simulated variations of the object.

9. The method of claim 1, wherein: at least one variation of the simulated object at least partially interferes with the planned trajectory of the vehicle; and the continued trajectory is an alternate trajectory that will keep the vehicle at least a threshold distance away from the object.

10. A vehicle motion planning model training system, comprising: a processor; a data store containing a plurality of simulation scenarios; and a memory that stores programming instructions that are configured to cause the processor to train a vehicle motion planning model by: receiving, from the data store, a base simulation scenario that includes features of a scene through which a vehicle may travel; receiving, from the data store, a simulation variation for an object in the scene, the simulation variation defining a plurality of values for a characteristic of the object; adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario; and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model in which the motion planning model: simulates movement of the vehicle along a planned trajectory, iteratively simulates variations of the object based on the plurality of values for the characteristic of the object, in response to each simulated variation of the object, selects a continued trajectory for the vehicle, wherein the continued trajectory is either the planned trajectory or an alternate trajectory, and causes the vehicle to move along the continued trajectory.

11. The training system of claim 10, wherein the instructions are further configured to cause the processor to define the simulation variation by: outputting, via a user interface that includes a display device, the characteristic of the object; receiving, via the user interface, one or more variations for the characteristic of the object; outputting, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic; and saving the one or more variations for the characteristic to the data store as the simulation variation.

12. The training system of claim 11, wherein the characteristic comprises a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object.

13. The training system of claim 10, wherein the instructions are further configured to cause the processor to: receive, from the data store, a second base simulation scenario that includes features of a second scene through which the vehicle may travel; receive, from the data store, the simulation variation; add the simulation variation to the second base simulation scenario to yield a second augmented simulation scenario; and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model.

14. The training system of claim 10, wherein the instructions are further configured to cause the processor to: receive, from the data store, a second simulation variation for a second object in the scene, the second simulation variation defining a second plurality of values for a characteristic of the second object; add the second simulation variation to the base simulation scenario to yield a second augmented simulation scenario; and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model, wherein the motion planning model iteratively simulates variations of the object and the second object based on the plurality of values for the characteristic of the object and the plurality of values for the characteristic of the second object, and, in response to each variation of the object or the second object, selects the continued trajectory for the vehicle.

15. A computer program product comprising: a memory that stores programming instructions that are configured to cause a processor to train a vehicle motion planning model by: receiving, from a data store containing a plurality of simulation scenarios, a base simulation scenario that includes features of a scene through which a vehicle may travel; receiving, from the data store, a simulation variation for an object in the scene, the simulation variation defining a plurality of values for a characteristic of the object; adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario; and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model in which the motion planning model: simulates movement of the vehicle along a planned trajectory, iteratively simulates variations of the object based on the plurality of values for the characteristic of the object, in response to each simulated variation of the object, selects a continued trajectory for the vehicle, wherein the continued trajectory is either the planned trajectory or an alternate trajectory, and causes the vehicle to move along the continued trajectory.

16. The product of claim 15, wherein the instructions are further configured to cause the processor to define the simulation variation by: outputting, via a user interface that includes a display device, the characteristic of the object; receiving, via the user interface, one or more variations for the characteristic of the object; outputting, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic; and saving the one or more variations for the characteristic to the data store as the simulation variation.

17. The product of claim 16, wherein the characteristic comprises a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object.

Description:
TITLE: METHOD AND SYSTEM FOR CONFIGURING VARIATIONS IN AUTONOMOUS VEHICLE TRAINING SIMULATIONS

CROSS-REFERENCE AND CLAIM OF PRIORITY

[0001] This patent application claims priority to U.S. Patent Application Nos. 17/387,922 and 17/387,927, each filed July 28, 2021, and to U.S. Patent Application No. 17/468,600 filed September 7, 2021. All of the priority applications are fully incorporated herein by reference.

BACKGROUND

[0002] Many vehicles today, including but not limited to autonomous vehicles (AVs), use motion planning systems to decide, or help the driver make decisions about, where and how to move in an environment. Motion planning systems rely on artificial intelligence models to analyze moving actors that the vehicle sensors may perceive, make predictions about what those actors may do, and select or recommend a course of action for the vehicle that takes the actor’s likely action into account.

[0003] To make predictions and determine courses of action, the vehicle’s motion planning model must be trained on data that the vehicle may encounter in an environment. The more unique scenarios that are used to train a vehicle’s motion planning model, the better the model can be at making motion planning decisions. However, the range of possible scenarios that a vehicle may encounter is limitless. Manual development of a large number of unique simulation scenarios would require a significant investment in time and manpower, as well as a continued cost to update individual scenarios as the motion planning model improves and vehicle behavior changes.

[0004] While systems are available to randomly develop simulation scenarios, the number of possible random scenarios is limitless. Purely random simulation would require the motion planning model to consider an extremely large number of events that may not be relevant, or that would at least be extremely unlikely, in the real world. This causes a significant waste of computing resources and time. In addition, it can require the vehicle to be trained on a large number of less relevant scenarios well before the random process yields more relevant scenarios.

[0005] Therefore, methods of identifying and developing an effective set of relevant simulation scenarios, and of training the vehicle’s model on such scenarios, are needed.

[0006] This document describes methods and systems that address issues such as those discussed above, and/or other issues.

SUMMARY

[0007] In various embodiments, a method of generating a vehicle motion planning model simulation scenario is disclosed. The method may be embodied in computer programming instructions and/or implemented by a system that includes a processor. The method includes receiving, from a data store containing multiple simulation scenarios, a base simulation scenario that includes features of a scene through which a vehicle may travel. The method includes receiving, from the data store, a simulation variation for an object in the scene, the simulation variation defining multiple values for a characteristic of the object. The method further includes adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model. The motion planning model simulates movement of the vehicle along a planned trajectory and iteratively simulates variations of the object based on the multiple values for the characteristic of the object. In response to each simulated variation of the object, the motion planning model selects a continued trajectory for the vehicle, either the planned trajectory or an alternate trajectory, and it causes the vehicle to move along the continued trajectory.
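
By way of a non-authoritative illustration, the flow summarized above can be pictured as a small loop. The following Python sketch is illustrative only; the toy scenario format, the stand-in planner, the clearance threshold, and all names are assumptions for this document, not the disclosed implementation.

```python
# Illustrative sketch only; the scenario format, the toy planner, and the
# 2.7 m clearance threshold are assumptions, not the disclosed system.

def plan_trajectory(scenario):
    # Stand-in motion planner: by default the vehicle follows its planned path.
    return "planned"

def select_continued_trajectory(scenario, planned):
    # Pick an alternate trajectory if the varied object crowds the lane.
    offset = scenario["objects"]["parked_car"]["lateral_offset_m"]
    return "alternate" if abs(offset) < 2.7 else planned

base_scenario = {"objects": {"parked_car": {"lateral_offset_m": -3.0}}}
variation = {"object": "parked_car",
             "characteristic": "lateral_offset_m",
             "values": [-2.5, -2.7, -2.9, -3.1]}      # plurality of values

augmented = base_scenario                              # augmented simulation scenario
for value in variation["values"]:                      # iteratively simulate variations
    augmented["objects"][variation["object"]][variation["characteristic"]] = value
    planned = plan_trajectory(augmented)
    continued = select_continued_trajectory(augmented, planned)
    print(f"{variation['characteristic']}={value}: follow {continued} trajectory")
```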

[0008] In other embodiments, a method of generating training scenario variations is disclosed. The method may be embodied in a scenario variation generation system configured to receive a base scenario including objects in a scene and execute a user interface configured to display data associated with characteristics of the objects in the scene. The characteristic may include a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object. The user interface may be further configured to receive variation input related to characteristics of the objects. In response to the user interface receiving variation input defining multiple values for characteristics of the objects, the scenario variation generation system may store a scenario variation in a data store containing simulation scenarios, wherein the scenario variation is based on the multiple values and configured to augment one or more of the simulation scenarios.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1A illustrates example elements of a vehicle simulation scenario, while FIG. 1B illustrates a modified version of the simulation scenario of FIG. 1A.

[0010] FIG. 2 illustrates example subsystems of an autonomous vehicle.

[0011] FIGs. 3A and 3B are flowcharts that illustrate a process by which a training system may generate an augmented simulation scenario.

[0012] FIGs. 4A and 4B illustrate certain elements of the processes of FIGs. 3A and 3B in an alternate format.

[0013] FIGs. 5A and 5B illustrate development of a simulation scenario augmentation using an example set of candidate object data.

[0014] FIG. 6 is a flowchart that illustrates a process by which a training system may identify interaction zones within which the system may modify a base simulation scenario.

[0015] FIG. 7 illustrates an example augmented simulation scenario in which an obstruent object behavior has been introduced into the scenario.

[0016] FIG. 8 illustrates an example augmented simulation scenario in which a deviant object behavior has been introduced into the scenario.

[0017] FIG. 9 illustrates how a system may incorporate element distributions into different segments of an interaction zone when generating multiple augmented simulation scenarios.

[0018] FIG. 10 shows an example input form for configuring simulated variations.

[0019] FIG. 11 shows another example input form for configuring simulated variations.

[0020] FIG. 12 shows computed values for a characteristic of an object.

[0021] FIG. 13 shows another example input form for configuring simulated variations.

[0022] FIG. 14 shows computed values for two characteristics of an object.

[0023] FIG. 15 shows another example input form for configuring simulated variations.

[0024] FIG. 16 shows computed values for two characteristics of an object.

[0025] FIG. 17 shows another example input form for configuring simulated variations.

[0026] FIG. 18 shows computed values for a characteristic of an object.

DETAILED DESCRIPTION

[0027] As used in this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” (or “comprises”) means “including (or includes), but not limited to.” Definitions for additional terms that are relevant to this document are included at the end of this Detailed Description.

[0028] An autonomous vehicle (AV) software stack includes various software platforms which handle various tasks that help the AV move throughout an environment. These include tasks such as perception, motion planning, and motion control. An AV stack may reside in a software repository (in the physical form of computer-readable memory) that is available to a vehicle’s original equipment manufacturer (OEM) and/or to an OEM’s suppliers. An AV stack also may be directly deployed on a vehicle. To be effective, an AV stack must be trained on multiple simulation scenarios before it is deployed on a vehicle. Training is a process that applies a simulation scenario definition to one or more of the AV stack’s systems so that the AV stack can process the simulation scenario and generate a response. Supplemental training also may be done after an AV stack is deployed on a vehicle, with additional simulation scenarios that will continue to improve the AV stack’s operation and help the AV recognize and react to an increased variety of conditions when it encounters them in the real world.

[0029] A simulation scenario definition is a set of parameters and/or programming instructions that identify one or more objects in a scene, along with the initial locations, dimensions, and other configuration parameters of those objects. For such objects, the simulation scenario definition may include acceleration profiles or other profiles that guide the object’s possible movements in the scene. Some objects may be actors that are moving or which could move, such as vehicles, pedestrians or animals. Other objects may be static objects that can occlude the field of view of the AV’s perception system, such as vegetation or buildings. FIG. 1A is a graphic illustration of an example simulation scenario for a vehicle 101 that is moving along a first street 117 according to a planned trajectory 102 past an intersection with a second street 118. The simulation scenario definition includes actors that are a parked vehicle 113 and pedestrians 114 and 115. The configuration for parked vehicle 113 may define that the vehicle is not currently moving, but that it could start to move forward and/or into the vehicle’s lane. The configuration for pedestrian 114 may define that the pedestrian is moving parallel to first street 117 and toward second street 118, and that it is equally likely to cross either the first street 117 or the second street 118 when it reaches the intersection. The configuration for pedestrian 115 may define that the pedestrian is moving parallel to first street 117, and that it has a higher probability of continuing to move forward than of turning to cross the first street 117. The simulation scenario of FIG. 1A may be stored in a training system database for use in training one or more subsystems of an AV stack.
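
A simulation scenario definition of this kind could be represented, for example, with simple data structures. The following Python sketch is a minimal illustration assuming hypothetical field names and probabilities; it is not the data model used by the disclosed system.

```python
# Minimal, hypothetical encoding of a simulation scenario definition.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    object_id: str
    object_class: str                 # e.g. "vehicle", "pedestrian", "vegetation"
    position: tuple                   # (x, y) in scene coordinates, meters
    dimensions: tuple                 # (length, width, height), meters
    is_actor: bool = False            # actors are moving or could move
    behavior_probabilities: dict = field(default_factory=dict)

@dataclass
class SimulationScenario:
    scenario_id: str
    planned_trajectory: list          # waypoints for the vehicle under test
    objects: list

# Rough sketch of the FIG. 1A scene: a parked vehicle and a pedestrian.
fig_1a = SimulationScenario(
    scenario_id="base_fig_1a",
    planned_trajectory=[(0.0, 0.0), (50.0, 0.0)],
    objects=[
        SceneObject("parked_vehicle_113", "vehicle", (20.0, 3.0), (4.5, 1.8, 1.5),
                    is_actor=True,
                    behavior_probabilities={"remain_static": 0.8, "pull_into_lane": 0.2}),
        SceneObject("pedestrian_114", "pedestrian", (30.0, 5.0), (0.5, 0.5, 1.7),
                    is_actor=True,
                    behavior_probabilities={"cross_first_street": 0.5,
                                            "cross_second_street": 0.5}),
    ],
)
```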

[0030] This document describes methods and systems for augmenting base simulation scenarios such as that shown in FIG. 1A by introducing one or more variations into the base scenario. The variation may include introducing one or more new actors into the base scenario, varying the configuration for one or more of the actors that already exist in the scenario, or both. For example, FIG. 1B shows a variation of the simulation scenario of FIG. 1A in which an additional actor - in this case parked car 119 - is added to the scene. A training system may store and use an augmented simulation scenario such as that of FIG. 1B, in addition to the base scenario of FIG. 1A, to train the AV stack. Augmenting base scenarios using such variations may reduce the number of base scenarios needed to effectively train the AV stack. Furthermore, simulation variations, being typically focused on a subset of a scenario, may be generally easier for the user to maintain than a base scenario and may use fewer resources, such as memory or storage space, than base scenarios. By reducing the number of base scenarios and augmenting that smaller number of base scenarios with easier-to-maintain (or automatically generated) simulation variations, the overall maintenance burden may be significantly reduced. For example, the user may no longer need to create a new base scenario simply to add an object or modify a behavior or characteristic of an existing object in the base scenario. And the user may be able to perform a maintenance update on a large number of scenarios by performing the maintenance update on the relatively smaller number of base scenarios, each of which is augmented by associated simulation variations. Furthermore, simulation variations that prove valuable when applied to one base scenario may be readily applied to other base scenarios without incurring the burden of updating or reconfiguring the other base scenarios. Moreover, using fewer resources may increase the total number of scenarios available to train the AV stack.

[0031] Before further exploring the details of the present embodiments, we provide some background information about AV systems. FIG. 2 shows a high-level overview of subsystems of an AV stack that may be relevant to the discussion below. Certain components of the subsystems may be embodied in processor hardware and computer-readable programming instructions that are part of a computing system 201 that is either onboard the vehicle or that is offboard and stored on one or more memory devices. The subsystems may include a perception system 202 that includes sensors that capture information about moving actors and other objects that exist in the vehicle’s immediate surroundings. Example sensors include cameras, LiDAR sensors and radar sensors. The data captured by such sensors (such as digital images, LiDAR point cloud data, or radar data) is known as perception data. The perception data may include data representative of one or more objects in the environment.

[0032] The perception system may include one or more processors and computer-readable memory with programming instructions and/or trained artificial intelligence models that, during a run of the AV, will process the perception data to identify objects and assign categorical labels and unique identifiers to each object detected in a scene. Categorical labels may include categories such as vehicle, bicyclist, pedestrian, building, and the like. Methods of identifying objects and assigning categorical labels to objects are well known in the art, and any suitable classification process may be used, such as those that make bounding box predictions for detected objects in a scene and use convolutional neural networks or other computer vision models. Some such processes are described in Yurtsever et al., “A Survey of Autonomous Driving: Common Practices and Emerging Technologies” (published in IEEE Access, April 2020).

[0033] The vehicle’s perception system 202 may deliver perception data to the vehicle’s forecasting system 203. The forecasting system (which also may be referred to as a prediction system) will include processors and computer-readable programming instructions that are configured to process data received from the perception system and forecast actions of other actors that the perception system detects.

[0034] The vehicle’s perception system 202, as well as the vehicle’s forecasting system 203, will deliver data and information to the vehicle’s motion planning system 204 and motion control system 205 so that the receiving systems may assess such data and initiate any number of reactive motions to such data. The motion planning system 204 and motion control system 205 include and/or share one or more processors and computer-readable programming instructions that are configured to process data received from the other systems, compute a trajectory for the vehicle, and output commands to vehicle hardware to move the vehicle according to the determined trajectory. Example actions that such commands may cause include causing the vehicle’s brake control system to actuate, causing the vehicle’s acceleration control subsystem to increase speed of the vehicle, or causing the vehicle’s steering control subsystem to turn the vehicle. Various motion planning techniques are well known, for example as described in Gonzalez et al., “A Review of Motion Planning Techniques for Automated Vehicles,” published in IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4 (April 2016).

[0035] The subsystems described above may be implemented as components of an AV stack, which may be trained on various simulation scenarios. As described above, the system 201 on which the subsystems may be installed may be a vehicle’s computer processing hardware, or it may be one or more memory devices that are offboard the vehicle. The system 201 may be in communication with a remote server 206 that provides updates and/or commands, or which receives data from the AV stack. In addition, in the present embodiments, the system 201 on which the AV stack is installed will be in electronic communication with a training system 209. The training system 209 will include a processor 211, a data store 212 containing a variety of stored simulation scenarios, and a memory containing programming instructions 213 for generating, modifying and using simulation scenarios to train the system 201.

[0036] Optionally, the training system 209 also may include a user interface 214 for presenting information to a user and receiving information and/or commands from the user. For example, the user interface 214 may include a display via which the system may output graphic illustrations of simulation scenarios, as well as one or more menus or forms that present features or options for augmenting the scenario. The user interface also may include an input device such as a mouse, keyboard, keypad, microphone and/or touch-screen elements of the display via which the user may select variations for a displayed scenario. The variations may include new actors, configuration parameters for new or existing actors, or other information.

[0037] FIG. 10 shows an example input form 1000 for configuring simulated variations of an object in a scene. The form displays several characteristics 1001 of the object that can be varied. Here, the variable characteristics 1001 include dimensions of the object, such as its height, or its position in the scene (shown here as longitudinal distance and lateral offset with respect to a reference in the scene, such as the starting position of the AV in the simulation). Other variable characteristics 1001 may be related to time and/or motion, such as the transition time for a traffic light to change or the acceleration of an object. Other variable characteristics 1001 may relate to behavior of the object, such as the distance from the vehicle to the object that triggers a behavior of the object. For example, when the vehicle comes within the trigger distance of an object simulating a pedestrian, the pedestrian may begin crossing the street. Similarly, a parked car such as the one configured in input form 1000 of FIG. 10 may begin to move forward and/or into the vehicle’s lane when the vehicle comes within the configured trigger distance. The example input form includes an Edit button (such as button 1002) for each variable characteristic, allowing the user to configure aspects of each variation. For example, the user may change the value 1003 of the corresponding characteristic.

[0038] FIG. 11 shows an example input form 1100 for configuring the variation of one characteristic 1001 of the object in the scene. Here, the characteristic 1001 is the lateral offset of the object with respect to the reference in the scene. The example input form presents the user with several discrete options 1101 for configuring the variation, including disabling variation of the characteristic. The example input form also presents the user with fields 1102 enabling the user to enter numeric values related to the configuration. Here, the user has configured the lateral offset to sweep through values between -2.5 meters and -3.4 meters, in steps of -0.1 meters, without repeating the immediately preceding value (repeat=0). Referring to FIG. 12, in response to the user configuring the variation, the user interface 214 may display a form 1200 including a table of values 1201 for the configured variation. Optionally, the user interface may include fields that enable a user to select a particular value 1202, 1202a-n from the table 1201 to use for the characteristic 1001 of the object, rather than the value for the characteristic 1001 of the base scenario. The user may select a button 1203 to indicate the selected value 1202 should be used in a modified scenario. In response, the user interface may output a graphical display of the modified scenario, using the selected value 1202 for the characteristic 1001 of the object. The user interface includes fields (e.g. of input form 1100) through which a user may enter data to refine the variation configuration based on the displayed scenario, e.g. to provide a good range of scenarios to train the AV stack. When the user is satisfied with the variation configuration, the user enters an indication (e.g., button 1103 of input form 1100) that the configuration is complete. In response, the system may store the variation configuration, e.g., in a training system database, where the variation configuration may be applied to other base scenarios.
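
For instance, a sweep of the kind configured in FIG. 11 (from -2.5 m to -3.4 m in steps of -0.1 m, repeat=0) might be expanded into concrete values as in the following sketch. The function name, signature, and rounding are assumptions for illustration, not the disclosed implementation.

```python
# Sketch of expanding a configured sweep (start, stop, step, repeat) into
# concrete characteristic values; parameter names are assumptions.

def sweep_values(start, stop, step, repeat=0):
    """Return values from start to stop (inclusive) in the given step,
    emitting each value (1 + repeat) times."""
    values = []
    count = int(round((stop - start) / step)) + 1
    for i in range(count):
        value = round(start + i * step, 6)
        values.extend([value] * (1 + repeat))
    return values

# Lateral offset swept from -2.5 m to -3.4 m in steps of -0.1 m, repeat=0:
lateral_offsets = sweep_values(-2.5, -3.4, -0.1, repeat=0)
print(lateral_offsets)   # [-2.5, -2.6, ..., -3.4] -- ten values
```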

[0039] FIG. 13 shows an example input form 1300 for configuring the initial longitudinal distance of the object from the reference in the scene. Here, the user has configured the longitudinal distance to step through a number of defined values (in this example, five values) without repeating the immediately preceding value (repeat=0). Referring to FIG. 14, in response to the user configuring the variation, the user interface 214 may display a form 1400 including a first table 1201 of values 1202a-n for the first configured variation and a second table 1401 of values 1402a-n for the second configured variation. While the variation of each characteristic 1001a, 1001b is configured separately, the training system may consider the variations jointly when augmenting a simulation scenario. Here, the lateral offset is configured to sweep through ten values, but the longitudinal distance is configured to step through only five defined values. In some examples, when the final defined value for a characteristic 1001a, 1001b is reached, that final value is retained as values for other characteristics 1001a-n continue to vary according to each variation configuration. Here, the sequence of values for each characteristic 1001b repeats as values for other characteristics 1001a continue to vary. The example shown includes many possible combinations of values for both characteristics 1001a, 1001b but does not include every combination of values.
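
Pairing the two variation tables jointly, with the shorter sequence repeating as the longer one continues (as in FIG. 14), could look like the following sketch. The specific longitudinal distances are made-up placeholders, since the figure's values are not reproduced in this text.

```python
# Pairing two separately configured variations; the shorter list of defined
# values cycles while the longer sweep continues. Distances are placeholders.
from itertools import cycle, islice

lateral_offsets = [round(-2.5 - 0.1 * i, 1) for i in range(10)]   # ten swept values
longitudinal_distances = [5.0, 7.5, 10.0, 12.5, 15.0]             # five defined values

rows = list(zip(lateral_offsets,
                islice(cycle(longitudinal_distances), len(lateral_offsets))))
for lateral, longitudinal in rows:
    print(f"lateral_offset={lateral:+.1f} m, longitudinal_distance={longitudinal:.1f} m")
# Many pairs are exercised, but not every combination of the two value sets.
```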

[0040] FIG. 15 shows an example configuration form 1500, similar to the example input form 1100 of FIG. 11, with the repeat parameter set to four instead of zero, configuring the lateral offset to sweep through values between -2.5 meters and -3.4 meters, in steps of -0.1 meters, repeating each value four additional times at each step, for a total of 50 values. Referring to FIG. 16, in response to the user configuring the variation, the user interface 214 may display a form 1600 including a first table 1201 of values 1202a-n for the first configured variation and a second table 1401 of values 1402a-n for the second configured variation (the table 1401 shown is truncated at the first twelve values). Here, the user has set the repeat parameter so that the total number of times each value occurs at each step is equal to the number of defined values in the other characteristic’s 1001a or 1001b variation configuration. Therefore, all combinations of values for both characteristics’ 1001a, 1001b variation configurations may be simulated. As disclosed above, the user may select a pair of values 1202a-n, 1402a-n from the tables 1201, 1401 to use for the characteristics 1001 of the objects, rather than the values for the characteristic 1001 of the base scenario. In response to the selection, the user interface may output a graphical display of the scenario, modified by the selected values for the characteristics 1001 of the object. The user may continue to refine the variation configurations based on the displayed scenario.

[0041] The user may configure the variation of additional characteristics 1001 of objects in the scene. FIG. 17 shows an example configuration form 1700 including the trigger distance for the object. Here, the user has configured the trigger distance to step through 10 pseudo-random values between 4 and 15. Referring to FIG. 18, in response to the user configuring the variation, the user interface 214 may display a form 1800 including a table of values 1201 for the configured variation. Here, the values 1202 include the pseudo-random values between 4 and 15. The user interface may generate a seed value for the random number generator so that the same values are used for the trigger distance during each subsequent simulation. Alternatively, the user may choose to enter a new seed number to generate a new set of ten pseudo-random values to be used in subsequent simulations. The user may also configure variations in the motion profile of an object. For example, the user may configure the braking performance of a vehicle (or all vehicles, e.g. in a simulation of roads covered with snow or ice). In this example, the user may configure the braking performance to step through some number of random values having a Gaussian distribution (e.g., having a configurable mean and standard deviation) to simulate varying levels of degraded braking performance, e.g. due to varying degrees of simulated snow or ice on the road. A sketch of such seeded value generation appears below. In some examples, when the user is satisfied with the variation configuration for one object (e.g., the range of degraded braking performance), the user enters an indication (e.g., button 1103 of input form 1100) that the configuration is complete. In response, the system may store the variation configuration, e.g., in a training system database, where the variation configuration may be applied to any or all other objects within the base scenario, or any or all objects in other base scenarios. Other distributions of random numbers may be used, including Poisson distributions (e.g., having a configurable mean).

[0042] For example, the user may store the degraded-braking-performance variation so that it can be applied to the AV in other base scenarios. Similarly, the user may store variation configurations related to other characteristics 1001 of the AV, such as starting speed or position, so that the other characteristics 1001 may also be used in other base scenarios. Moreover, variation configurations may be applied to actual or live testing of an AV, e.g. on a test track or other controlled environment. In this way, the user may test the actual performance of the AV (e.g., using the trained AV stack) against any or all of the variation configurations without having to manually set up configuration changes. In some examples, the user may decide that it is unnecessary to simulate every combination of values for every variable characteristic 1001 of multiple objects in the scene. For example, certain combinations of characteristics 1001 of one or more objects may not result in a reasonable scenario.
The user interface may display additional configuration forms and menus allowing the user to remove invalid or unnecessary combinations of object characteristics 1001 from the variation configurations.
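
The seeded pseudo-random trigger-distance values and the Gaussian braking-performance variation described above might be generated along the following lines. The seeds, counts, and braking parameters below are arbitrary illustrative choices, not values from the disclosure.

```python
# Sketch of seeded pseudo-random and Gaussian variation values; the seed
# handling and parameter names are assumptions, not the disclosed system.
import random

def pseudo_random_values(low, high, count, seed):
    rng = random.Random(seed)        # fixed seed -> same values on every run
    return [round(rng.uniform(low, high), 2) for _ in range(count)]

def gaussian_values(mean, std_dev, count, seed):
    rng = random.Random(seed)
    return [round(rng.gauss(mean, std_dev), 3) for _ in range(count)]

trigger_distances = pseudo_random_values(4, 15, count=10, seed=42)
braking_factors = gaussian_values(mean=0.7, std_dev=0.1, count=10, seed=7)
print("trigger distances:", trigger_distances)
print("braking performance factors:", braking_factors)
```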

[0043] FIG. 3A is a flowchart that illustrates a process by which a training system (such as system 209 in FIG. 2) may generate an augmented simulation scenario for training an AV stack. FIG. 4A illustrates certain elements of the process in an alternate format. Referring to FIGs. 3A and 4A, at 301 the system will choose a base simulation scenario from its set of stored simulation scenarios. Simulation scenarios may include data obtained from actual runs of vehicles in the real world, data generated through manual or automated simulation processes, or both. The system may select the base simulation scenario in response to a user input received via a user interface. Alternatively, the selection may occur automatically through processes such as random selection, by choosing the next scenario in a ranked order, or by applying a rule set such as one that selects a base scenario that satisfies one or more conditions. The system will then generate an augmentation element for the simulation scenario at 401. Generation of the augmentation element includes various sub-processes as described below.

[0044] Optionally, the sub-processes may include identifying an interaction zone within the base simulation scenario (step 302). An interaction zone is a time and distance range in the base simulation scenario into which the system will introduce an augmentation element. The interaction zone may include a physical location in the scene, a time in the simulation, or both a physical location and time. The locations and times may be specific points or ranges. In some embodiments, the system may identify the interaction zone as that which a user enters via a user interface. In others, the system may automatically choose the interaction zone based on one or more rules. Example rules may include rules to select a time and/or position that satisfies one or more conditions, such as:

[0045] - a position that is at or within a threshold distance from an intersection;

[0046] - a time range and position along which the vehicle will make a lane change; or

[0047] - a time range and position along which the vehicle will make a protected or unprotected left turn or right turn at an intersection.

[0048] Additional aspects of the interaction zone selection process will be described below in the context of FIG. 6.

[0049] At 303 the system will choose whether to add a new object to the scene or modify the behavior of an existing actor in the scene. This choice may be received: (a) from a user via the user interface; (b) automatically and randomly; or (c) automatically based on one or more rules. If received via the user interface, the choice may be received in response to a set of options that the system outputs to the user via the user interface. If the system identified an interaction zone at 302 and if the simulation scenario does not include any suitable object within the interaction zone, the system may require that the choice be a new object, as no existing object whose behavior can be modified will be available in the interaction zone.

[0050] If the choice at 303 is to select a new object, or if the choice at 303 is to modify behavior of an existing object and multiple candidate objects are available, then at 304 the system may select an object class to employ in the augmentation element. Example object classes may include pedestrian, vehicle, cyclist, vegetation, building and/or other object classes. If the choice at 303 was to modify the behavior of an existing object, then the candidate object classes for selection may be limited to those present in the scene (or in the interaction zone, if applicable). The selection of object class at 304 may be: (a) received from a user via the user interface; (b) automatically and randomly selected by the system; or (c) automatically selected by the system based on one or more rules. If received via the user interface, the choice may be received in response to a set of options that the system outputs to the user via the user interface.

[0051] Once the object class is selected, at 305 the system may select one or more classification parameters for the selected object class. Classification parameters are type labels for each object class. The system may store the type labels in a data store along with a mapping of each type to a probability that the object will be one that corresponds to the type. For example, for the “vehicle” object class, the system may store type labels and mapped probabilities that include [sedan, 0.8] and [truck, 0.2]. (In practice, the system would maintain several additional type labels for this class.) The system will then use the probabilities to apply a weighted randomization function to select one of the available types. An example randomization function is one that will calculate the sum of all the weights, choose a random number that is less than the sum, and subtract each type’s weight from the random number until the system finds a type for which the random number is less than that type’s weight. Other randomization functions may be used, including functions that consider other parameters.
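
The weighted randomization described here corresponds to a standard weighted-selection loop, sketched below: sum the weights, draw a random number below the sum, then walk the types subtracting weights until the draw falls within a type's weight. The helper name and the final fallback line are assumptions for illustration; the probability table is the one from the example above.

```python
# Weighted random selection by the subtract-until-below-weight method.
import random

def weighted_choice(weighted_types, rng=random):
    total = sum(weight for _, weight in weighted_types)
    draw = rng.uniform(0, total)
    for type_label, weight in weighted_types:
        if draw < weight:
            return type_label
        draw -= weight
    return weighted_types[-1][0]      # guard against floating-point edge cases

vehicle_types = [("sedan", 0.8), ("truck", 0.2)]
print(weighted_choice(vehicle_types))
```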

[0052] At 306 the system will select a behavior from a position distribution of candidate behaviors for the object type along with mapped likelihoods of each candidate behavior. For example, for an object that is a bus, which is a type of vehicle, the position distribution may be [static, 0.8] and [dynamic, 0.2]. The system may then use a randomization function as described above to select one of these behaviors and determine whether or not the bus will move in the simulation scenario. Behaviors may be dynamic behaviors, such as one or more characteristics 1001 of motion as described by way of example above. Alternatively, they may be static characteristics 1001 of the object, such as a size or position of an occlusion such as vegetation or a building. The system may select a single behavior or multiple behaviors to apply to an object in the augmented simulation scenario.

[0053] As noted above at step 302 in FIG. 3 A, the system may choose the interaction zone early in the process. However, if not done early in the process, then after the system selects the actor class, classification parameters and position distribution for the augmentation element, then at 307 after selecting the augmentation element (object and behavior) the system may then identify the interaction zone into which the augmentation element will be introduced, whether by adding a new actor or modifying behavior of an existing actor. Identification of the interaction zone in this step will be done according to any of the processes described above for step 302, or below.

[0054] FIGs. 5A and 5B illustrate the steps of object class selection, classification parameter selection, and behavior selection using an example set of candidate actor data. In FIGs. 5A and 5B, the candidate object classes 504 are vehicle, pedestrian, cyclist and vegetation. FIG. 5A shows that in response to selecting the “vehicle” object class, the system then considers the classification parameters and mapped probabilities 505 of [car, 0.5], [truck, 0.2], [bus, 0.2] and [emergency vehicle, 0.1]. Position distributions are not yet known in FIG. 5A because they may vary depending on the classification parameter selected. However, FIG. 5B shows that after the system selects the “bus” classification parameter, the system may then consider the position distribution for behaviors and mapped probabilities 506 of [static, 0.8] and [dynamic, 0.2] for buses, and it may select one of those behaviors to apply to a bus in the augmentation element.
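
The two-stage draw of FIGs. 5A and 5B could be pictured with the standard library's weighted sampling, as in the sketch below. Here random.choices stands in for the subtraction loop described earlier, and the fallback distribution for types not shown in FIG. 5B is an assumption, not part of the figures.

```python
# Two-stage weighted draw: first a classification parameter, then a behavior
# from that type's position distribution. Tables follow FIGs. 5A and 5B.
import random

classification_parameters = {       # type label -> probability (FIG. 5A)
    "car": 0.5, "truck": 0.2, "bus": 0.2, "emergency vehicle": 0.1,
}
position_distributions = {          # behavior probabilities per type (FIG. 5B)
    "bus": {"static": 0.8, "dynamic": 0.2},
}

chosen_type = random.choices(list(classification_parameters),
                             weights=list(classification_parameters.values()))[0]
behaviors = position_distributions.get(chosen_type, {"static": 0.5, "dynamic": 0.5})
chosen_behavior = random.choices(list(behaviors), weights=list(behaviors.values()))[0]
print(chosen_type, chosen_behavior)
```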

[0055] In practice, the system may add any number of additional classification parameters, behaviors, or both for any object class. For example, additional sub-parameters of the “vehicle/car” class and parameter may include “parked car”, which may be associated with various yaw range behaviors indicating not only whether the parked car will remain static or move, but also whether the vehicle is parallel to the lane of travel or skewed into the drivable area of the lane. The system may therefore determine any number of behaviors to apply to an object when modifying a base simulation scenario.

[0056] Returning to FIGs. 3A and 4A, once the system generates the augmentation element for the simulation scenario at 401, the system will apply the augmentation element to the base simulation scenario to yield the augmented simulation scenario at 308. To apply the augmentation element, the system will add the object to, or (if it already exists) modify behavior of the object in, the base simulation scenario to include the selected behavior(s) for the specified object in the interaction zone. Other than adding the augmentation element, the system will leave the base simulation scenario relatively intact so that the training and assessment (discussed later) can focus on changes in vehicle reaction that the augmentation element may cause.

[0057] FIG. 3B is a flowchart that illustrates another process by which a training system (such as system 209 in FIG. 2) may generate an augmented simulation scenario for training an AV stack. FIG. 4B illustrates certain elements of the process in an alternate format. Referring to FIGs. 3B and 4B, at 301 the system will choose a base simulation scenario from its set of stored simulation scenarios. Simulation scenarios may include data obtained from actual runs of vehicles in the real world, data generated through manual or automated simulation processes, or both. The system may select the base simulation scenario in response to a user input received via a user interface. Alternatively, the selection may occur automatically through processes such as random selection, by choosing the next scenario in a ranked order, or by applying a rule set such as one that selects a base scenario that satisfies one or more conditions. The system will then generate an augmentation element for the simulation scenario at 421. Generation of the augmentation element includes various sub-processes as described below.

[0058] At 322 the system will identify objects in the scene having one or more variable characteristics. The variable characteristics 1001 may be configured by a user via a user interface. At 323 the system may determine values to apply to the characteristic 1001 of the object based on a set of defined values or a range or distribution of values defined for the characteristic. The system may determine values to apply to the characteristic 1001 of the object based on user-configured parameters for variation of the characteristic. The system may apply the values to modify the characteristic 1001 of the object to generate the augmentation element for the simulation scenario. Once the system generates the augmentation element for the simulation scenario at 421, the system will apply the augmentation element to the base simulation scenario to yield the augmented simulation scenario at 308. Other than adding the augmentation element, the system will leave the base simulation scenario relatively intact so that the training and assessment (discussed below) can focus on changes in vehicle reaction that the augmentation element may cause.

[0059] Once the system generates an augmented simulation scenario, at 311 the system may test the augmented simulation scenario by applying the augmented simulation scenario to the AV stack, optionally over multiple iterations and optionally with varied parameters. To test the augmented simulation scenario, the system will apply a planned trajectory of the vehicle to the scene in the augmented simulation scenario. The vehicle’s perception system will detect the augmentation element in the simulation, and the vehicle’s motion planning system will compute a continued trajectory in response to the detected augmentation element. The continued trajectory may be unchanged from the planned trajectory, in which case the vehicle will continue along the planned trajectory in the simulation. Alternatively, the continued trajectory may be an alternate trajectory, such as one that will ensure that the vehicle avoids moving within a threshold distance of the object in the simulation. Optionally, in the iteration process at 311 the system may access an evaluation data set 411, which is a set of data describing an expected behavior for the vehicle in response to a simulation scenario. The expected behavior may be as simple as an expectation that the vehicle not collide with another object, or it may include other parameters such as acceleration and/or deceleration limits.

[0060] At 312 the system may save details of the simulation, including the augmentation element (object and behavior) and the vehicle’s response (computed continued trajectory) to a simulated vehicle log for further analysis.

[0061] The system will then save the augmented simulation scenario to a data store at 313. The data store may be that which includes the base simulation scenario (in which case the augmented scenario may be used as a new base scenario in the future), a separate data store, or both.

[0062] Optionally, at 314 a human operator may label the vehicle’s reaction in the simulation as a desirable reaction or an undesirable reaction to help train the AV’s motion planning model. Alternatively, at 314, if the system included an evaluation data set 411, the system may automatically label the vehicle’s reaction as desirable or undesirable depending on whether the vehicle’s simulated performance met the expected parameters that are contained in the evaluation data set. As yet another alternative, at 314 the system may help expedite and/or improve a human labeling process by extracting data from the evaluation data set 411 and using that data to suggest a label for the vehicle’s reaction, in which case the human operator may either accept the suggested label or enter an alternative label. After the AV stack is trained with the augmented simulation scenario, the trained model may then be deployed in an AV to operate the vehicle at 315.
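
An automated check against an evaluation data set, of the kind used for the optional labeling at 314, might look like the following sketch. The specific limits and log field names are illustrative assumptions, not values or formats from the disclosure.

```python
# Sketch of labeling a simulated reaction against an evaluation data set;
# limits and log fields are illustrative assumptions.

evaluation_data_set = {
    "min_clearance_m": 0.5,          # never come closer than this to any object
    "max_deceleration_mps2": 4.0,    # comfort/safety braking limit
}

def label_reaction(simulated_log, expected):
    """Return 'desirable' if the simulated run met the expected parameters."""
    if simulated_log["min_distance_to_object_m"] < expected["min_clearance_m"]:
        return "undesirable"
    if simulated_log["max_deceleration_mps2"] > expected["max_deceleration_mps2"]:
        return "undesirable"
    return "desirable"

run = {"min_distance_to_object_m": 0.8, "max_deceleration_mps2": 3.1}
print(label_reaction(run, evaluation_data_set))   # suggested label for review
```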

[0063] The process discussed above and illustrated in FIGs. 3A, 3B, 4A and 4B may be repeated for additional augmentation elements, thus allowing for a massive exploration of the state space by adding a wide variety of augmentation elements to a base simulation scenario while leaving the base scenario relatively intact. However, while the number of potential augmentation elements is very large, the system limits processing requirements by restricting the addition of augmentation elements to those that appear in designated interaction zones. This allows the training process to be configurable to focus on particular scene characteristics 1001 that the operator of the training system selects, such as intersections, lane change events, left turns, or the like.

[0064] As noted above, either before or after selecting the augmentation element, the system will define an interaction zone in the base simulation scenario. The interaction zone includes elements of both position and time at which a new object will be inserted, or in which an existing object’s behavior will be modified, to yield the augmented simulation scenario. The position of an interaction zone will typically be one that bears a relation to the vehicle’s planned trajectory. For example, positions of an interaction zone may include lanes through which the vehicle’s planned trajectory will travel, lanes that are within threshold distances of the vehicle’s planned trajectory, sidewalks or crosswalks that are within threshold distances of the vehicle’s planned trajectory, intersections that are present along the vehicle’s planned trajectory, or other positions within the scene.

[0065] In some situations, the system may define the interaction zone as one that a user has specified via user input in a user interface. In some situations, the system may automatically define the interaction zone in a base simulation scenario using a process such as that described in FIG. 6. To automatically define the interaction zone, at 601 the system may perform a base simulation by simulating movement of the vehicle along a planned trajectory in the base simulation scenario. While doing this, the system will identify a trigger event at 602. A trigger event is an event of interest that the system identifies, or that an operator of the system specifies, as a candidate for additional training data. Each trigger event will include a starting position and/or a starting time, and optionally an ending position and/or ending time. Trigger events may include, for example:

[0066] - a position and time in the planned trajectory in which the vehicle will implement a lane change maneuver;

[0067] - a position and time in the planned trajectory in which the vehicle will implement a left turn or a right turn;

[0068] - a position and time in the planned trajectory in which the vehicle will enter an intersection; or

[0069] - a position and time in the planned trajectory in which the vehicle is approaching (i.e., reaches a threshold distance from) an intersection.

[0070] The system will make these determinations by receiving a signal of intent from the AV’s motion planning system. The signal of intent is a communication from the motion planning system which identifies a motion that the AV plans to implement in the simulation.

[0071] In some situations, when identifying trigger events at 602, the system may identify multiple candidate trigger events and filter out any trigger events that do not meet one or more specified criteria, such as events having a route length or time that exceeds a specified value. In this way, the system can help create interaction zones that are a relatively small segment of the entire base simulation scenario. The filter also may remove trigger events for which the system has already generated a threshold number of augmentation elements. In this way the system can devote processing resources to trigger events that are most needed for motion planning model training, and avoid using processing resources to train the model on events for which the system already has at least a threshold amount of training data.

[0072] At 603 the system will identify the position(s) of the trigger event (including, for example, the starting position and ending position). At 604 the system will identify the time of the trigger event, which may be a single time or a time window that includes a starting time and stopping time in the simulation scenario. (In this disclosure, “time” does not necessarily require determination of time according to a world clock in any particular time zone. Although world time may be used, “time” also may refer to a time position measured with respect to a start time of the simulation, a time measured by a computer clock, or another time.) In response to identifying the trigger event while simulating the movement of the vehicle in the base simulation scenario, at 605 the system will then define the interaction zone as a position (or positions) and time (or time window) that are determined with respect to the location and time of the trigger event. For example, the system may define the interaction zone to: (a) equal the locations and times of the trigger event; (b) be a position range along the planned trajectory that includes the location of the trigger event and a time window that includes the time of the trigger event; (c) include a position (or position range) along the planned trajectory that begins a specified distance ahead of the location of the trigger event; or (d) include a time window in the simulation that begins a specified amount of time after the time of the trigger event. Other interaction zone definitions may be employed in various embodiments.
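As a non-limiting illustration, the following Python sketch derives an interaction zone from a trigger event in the spirit of options (a) through (d) above; the InteractionZone fields, the margin and offset parameters, and their default values are hypothetical, and the sketch reuses the hypothetical TriggerEvent class from the earlier sketch.

# Illustrative sketch only; fields, parameters, and defaults are hypothetical.
from dataclasses import dataclass

@dataclass
class InteractionZone:
    start_position: float   # meters along the planned trajectory
    end_position: float
    start_time: float       # seconds from simulation start
    end_time: float

def zone_from_trigger(event: TriggerEvent,
                      position_margin_m: float = 20.0,
                      time_margin_s: float = 2.0,
                      lead_distance_m: float = 0.0,
                      lag_time_s: float = 0.0) -> InteractionZone:
    """Define an interaction zone relative to a trigger event's location and time.

    With zero margins and offsets, the zone equals the trigger event, as in option (a).
    Nonzero margins expand it into a position range and time window around the event,
    as in option (b). lead_distance_m offsets the start of the position range relative
    to the event location, in the spirit of option (c), and lag_time_s delays the time
    window relative to the event time, in the spirit of option (d).
    """
    end_pos = event.end_position if event.end_position is not None else event.start_position
    end_time = event.end_time if event.end_time is not None else event.start_time
    return InteractionZone(
        start_position=event.start_position - position_margin_m - lead_distance_m,
        end_position=end_pos + position_margin_m,
        start_time=event.start_time - time_margin_s + lag_time_s,
        end_time=end_time + time_margin_s + lag_time_s,
    )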

[0073] As noted above, an augmentation element will include an object and one or more behaviors for the object. Selection of the behaviors introduces an element of randomness, as described above in the discussion of FIGs. 3A and 4A, which helps the system explore a very large state space and include events that may be rarely encountered in the real world. For behaviors of objects that are actors, in addition to simple behavior elements such as “remain static” or “be dynamic” (i.e., move), the system may define other behavior elements such as posture or pose, acceleration profile, and even “noise” elements that may be more difficult for the vehicle’s perception system to discern, such as presenting a vehicle with its right turn signal blinking while continuing to move straight through an intersection. These behaviors may follow the base definitions of an obstruent augmentation element, an ambiguous augmentation element, a deviant augmentation element, or a combination of any of these. Features of these base definitions are described below.

[0074] An obstruent augmentation element is an object having a location that will at least partially block the vehicle’s planned path. An example of this is shown in FIG. 7, in which vehicle 701 is moving on road 717 along a planned trajectory 702. Parked vehicle 719 is obstruent because its position is skewed, not parallel to the road, and therefore partially within a threshold distance from the centerline of the vehicle’s planned trajectory 702. In the augmented simulation, when the vehicle’s perception system detects the parked vehicle 719, the vehicle’s motion planning system may alter the trajectory of vehicle 701 to: (a) veer slightly to the left to maintain a threshold distance between vehicle 701 and parked vehicle 719; and/or (b) move more slowly as it approaches the parked vehicle 719 in case the parked vehicle begins motion and pulls into the lane in front of the vehicle 701. In various embodiments, an obstruent augmentation element may include any known object class, such as vehicle, pedestrian, cyclist, animal, or vegetation, or even an unknown/unidentifiable object. The system may include a mapping for each object class with potential behaviors/states, along with probabilities of each behavior or state. By way of example, potential states of a vehicle may include parked, moving forward, turning left, turning right, accelerating, and decelerating, among other states. Potential states of a pedestrian may include walking into a lane, walking parallel to a lane, standing facing a lane, and standing facing away from a lane, among other states.
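The class-to-state mapping with associated probabilities described above might be represented as in the following illustrative Python sketch; the specific classes, states, and probability values are hypothetical examples only and are not taken from the disclosure.

# Illustrative sketch only; the classes, states, and probabilities are hypothetical examples.
import random

# Mapping of object class -> {state: probability}, as described above.
STATE_DISTRIBUTIONS = {
    "vehicle": {"parked": 0.30, "moving_forward": 0.30, "turning_left": 0.10,
                "turning_right": 0.10, "accelerating": 0.10, "decelerating": 0.10},
    "pedestrian": {"walking_into_lane": 0.25, "walking_parallel_to_lane": 0.35,
                   "standing_facing_lane": 0.20, "standing_facing_away": 0.20},
}

def sample_state(object_class: str) -> str:
    """Randomly choose a behavior/state for an object according to its class mapping."""
    dist = STATE_DISTRIBUTIONS[object_class]
    states, weights = zip(*dist.items())
    return random.choices(states, weights=weights, k=1)[0]

# Example: sample a state for an obstruent vehicle placed in the interaction zone.
print(sample_state("vehicle"))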

[0075] An ambiguous augmentation element is an object that exhibits a behavior or combination of behaviors that are not common for that class of object, such that a perception system of the vehicle will be expected to assign substantially equal likelihoods to the behavior being one of at least two candidate behaviors. (In this document, “substantially equal” means that the values of the two likelihoods are within 10 percent or less of each other.) Examples include a combination of behaviors that are inconsistent with each other, such as (i) a vehicle that exhibits a blinking turn signal while continuing to move straight through an intersection; (ii) a parked vehicle that exhibits a blinking turn signal while remaining parked for more than a threshold period of time; or (iii) an object that randomly changes its classification from a first classification to a second classification (such as changing from pedestrian to unknown). Other examples include behaviors that are not associated with the class of object, such as vegetation that moves. Other examples include an object that flickers in and out of existence or that appears in the simulation for no more than a limited number of cycles, or a vehicle that randomly activates and deactivates its brake lights. Each of these states may render one or more characteristics 1001 of the object ambiguous to the vehicle’s perception system.
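For illustration, the “substantially equal” test defined above (two likelihoods within 10 percent or less of each other) could be expressed as in the following Python sketch; the helper name and the example likelihood values are hypothetical.

# Illustrative sketch only; the helper name and example values are hypothetical.
def substantially_equal(likelihood_a: float, likelihood_b: float, tolerance: float = 0.10) -> bool:
    """Return True if two candidate-behavior likelihoods are within 10 percent of each other."""
    larger = max(likelihood_a, likelihood_b)
    if larger == 0:
        return True
    return abs(likelihood_a - likelihood_b) / larger <= tolerance

# Example: a perception system assigning likelihoods of 0.48 and 0.52 to two candidate
# behaviors would treat the object's behavior as ambiguous.
print(substantially_equal(0.48, 0.52))   # True
print(substantially_equal(0.30, 0.70))   # False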

[0076] A deviant augmentation element is an object that exhibits a behavior or combination of behaviors that are dynamic (i.e., that result in movement of the object) and that will cause the vehicle’s motion planning system to react by modifying its trajectory. An example of this is shown in FIG. 8, in which vehicle 801 is moving on road 817 along a planned trajectory 802. Parked vehicle 819 is deviant because it pulls away from its position and, rather than simply moving forward, follows a trajectory 822 that crosses both the vehicle’s planned trajectory 802 and the road 817. In the augmented simulation, when the vehicle’s perception system detects the parked vehicle 819 beginning to move along this deviant trajectory 822, the vehicle’s motion planning system may alter the trajectory of vehicle 801 by causing the vehicle 801 to stop until the (formerly) parked vehicle 819 completely crosses the road 817, or to take other evasive action. Other deviant behaviors include, for example, motions that violate one or more traffic laws.

[0077] In some embodiments, the system may generate augmentation elements and introduce different categories of augmentation elements within different segments of the interaction zone. Optionally, to promote random generation of augmentation elements in situations for which the system may require more data, the system may assign weights to different segments of the interaction zone. In addition, it may assign different weights to different categories of augmentation elements and/or different classes of objects in each segment. The system may then incorporate these distributions into its randomization function when generating objects and behaviors to use as augmentation elements. An example of this is shown in FIG. 9, in which the area in front of vehicle 901 includes a first interaction zone segment 902 that includes four subregions that cross the road and extend into an intersecting road. Each subregion is assigned an object/behavior distribution, which in this case may be a distribution for the placement of deviant vehicles in each subregion, so that randomly generated deviant vehicle behaviors in the interaction zone segment are distributed across the subregions in amounts that are substantially equal to the distributions shown. The area also includes a second interaction zone segment 903 that includes three subregions that cross the road, and which in this case correspond to new pedestrian objects. Each subregion is assigned an object/behavior distribution, which in this case may be a distribution for the placement of new pedestrians in each subregion, so that randomly generated simulated pedestrians are distributed across the subregions in amounts that are substantially equal to the distributions shown. The area also includes a third interaction zone segment 904 that includes two subregions along the road, and which in this case correspond to static vehicle locations (one in the lane of travel, and one in a parking lane). Each subregion is assigned an object/behavior distribution, which in this case may be a distribution for the placement of new parked or otherwise non-moving vehicles in each subregion, so that randomly generated simulated static vehicles are distributed across the subregions in amounts that are substantially equal to the distributions shown.
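As an illustration of the weighted, per-segment randomization described above, the following Python sketch distributes randomly generated objects across subregions according to assigned distributions; the segment names, subregion labels, weights, and counts are hypothetical and do not correspond to the specific values shown in FIG. 9.

# Illustrative sketch only; segment names, subregion weights, and counts are hypothetical.
import random

# Each interaction zone segment maps its subregions to a placement distribution,
# analogous to segments 902, 903, and 904 described above.
SEGMENTS = {
    "deviant_vehicles": {"subregion_1": 0.4, "subregion_2": 0.3, "subregion_3": 0.2, "subregion_4": 0.1},
    "new_pedestrians":  {"crosswalk_a": 0.5, "crosswalk_b": 0.3, "crosswalk_c": 0.2},
    "static_vehicles":  {"travel_lane": 0.6, "parking_lane": 0.4},
}

def place_augmentation_elements(segment: str, count: int) -> list:
    """Randomly assign `count` new objects to subregions according to the segment's distribution."""
    subregions, weights = zip(*SEGMENTS[segment].items())
    return random.choices(subregions, weights=weights, k=count)

# Example: distribute 10 randomly generated deviant vehicles across the first segment.
print(place_augmentation_elements("deviant_vehicles", 10))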

[0078] Finally, returning to FIG. 3A and FIG. 3B, in some embodiments at 316 the system may deploy augmented simulation scenarios on-board the vehicle while the vehicle operates in a real-world environment such as a test track. In this case, referring to FIG. 2, the training system 209 may be onboard the vehicle, and it may deliver augmented simulation scenarios to the vehicle’s forecasting system 203 and/or motion planning system 204. These systems will combine the data received from the vehicle’s perception system 202 with the augmented simulation scenarios to expose the vehicle to additional scenarios that it may encounter in the real world but has not yet encountered. Example methods for combining real-world perception data with simulation data are described in U.S. patent application number 17/074,807, the disclosure of which is fully incorporated into this document by reference.

[0079] The above-disclosed features and functions, as well as alternatives, may be combined into many other different systems or applications. Various components may be implemented in hardware, software, or embedded software. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements may be made by those skilled in the art, each of which is also intended to be encompassed by the disclosed embodiments.

[0080] Terminology that is relevant to the disclosure provided above includes:

[0081] An “automated device” or “robotic device” refers to an electronic device that includes a processor, programming instructions, and one or more physical hardware components that, in response to commands from the processor, can move with minimal or no human intervention. Through such movement, a robotic device may perform one or more automatic functions or function sets. Examples of such operations, functions or tasks may include, without limitation, operating wheels or propellers to effectuate driving, flying or other transportation actions, operating robotic lifts for loading, unloading, medical-related processes, construction-related processes, and/or the like. Example robotic devices may include, without limitation, autonomous vehicles, drones and other autonomous robotic devices.

[0082] The term “vehicle” refers to any moving form of conveyance that is capable of carrying one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions. Alternatively, it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle’s autonomous system and may take control of the vehicle. Autonomous vehicles also include vehicles in which autonomous systems augment human operation of the vehicle, such as vehicles with driver-assisted steering, speed control, braking, parking and other advanced driver assistance systems.

[0083] The term “object,” when referring to an object that is detected by a vehicle perception system or simulated by a simulation system, is intended to encompass both stationary objects and moving (or potentially moving) actors, except where specifically stated otherwise by use of the term “actor” or “stationary object.”

[0084] When used in the context of autonomous vehicle motion planning, the term “trajectory” refers to the plan that the vehicle’s motion planning system will generate, and which the vehicle’s motion control system will follow when controlling the vehicle’s motion. A trajectory includes the vehicle’s planned position and orientation at multiple points in time over a time horizon, as well as the vehicle’s planned steering wheel angle and angle rate over the same time horizon. An autonomous vehicle’s motion control system will consume the trajectory and send commands to the vehicle’s steering controller, brake controller, throttle controller and/or other motion control subsystem to move the vehicle along a planned path.

[0085] A “trajectory” of an actor that a vehicle’s perception or prediction systems may generate refers to the predicted path that the actor will follow over a time horizon, along with the predicted speed of the actor and/or position of the actor along the path at various points along the time horizon.
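Purely for illustration, a trajectory as defined above might be represented by a data structure such as the following Python sketch; the field names and units are assumptions rather than part of the definition.

# Illustrative sketch only; field names and units are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectoryPoint:
    time: float              # seconds within the planning horizon
    x: float                 # planned position, meters
    y: float
    heading: float           # planned orientation, radians
    steering_angle: float    # planned steering wheel angle, radians
    steering_rate: float     # planned steering wheel angle rate, radians per second

@dataclass
class Trajectory:
    """The plan consumed by the motion control system over a time horizon."""
    points: List[TrajectoryPoint]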

[0086] In this document, the terms “street,” “lane,” “road” and “intersection” are illustrated by way of example with vehicles traveling on one or more roads. However, the embodiments are intended to include lanes and intersections in other locations, such as parking areas. In addition, for autonomous vehicles that are designed to be used indoors (such as automated picking devices in warehouses), a street may be a corridor of the warehouse and a lane may be a portion of the corridor. If the autonomous vehicle is a drone or other aircraft, the term “street” or “road” may represent an airway and a lane may be a portion of the airway. If the autonomous vehicle is a watercraft, then the term “street” or “road” may represent a waterway and a lane may be a portion of the waterway.

[0087] An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement. The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.

[0088] The terms “memory,” “memory device,” “computer-readable medium,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “computer-readable medium,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices. A computer program product is a memory device with programming instructions stored on it.

[0089] The terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions, such as a microprocessor or other logical circuit. A processor and memory may be elements of a microcontroller, custom configurable integrated circuit, programmable system-on-a-chip, or other electronic device that can be programmed to perform various functions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.

[0090] A “machine learning model” or a “model” refers to a set of algorithmic routines and parameters that can predict an output(s) of a real-world process (e.g., prediction of an object trajectory, a diagnosis or treatment of a patient, a suitable recommendation based on a user search query, etc.) based on a set of input features, without being explicitly programmed. A structure of the software routines (e.g., number of subroutines and relation between them) and/or the values of the parameters can be determined in a training process, which can use actual results of the real-world process that is being modeled. Such systems or models are understood to be necessarily rooted in computer technology, and in fact, cannot be implemented or even exist in the absence of computing technology. While machine learning systems utilize various types of statistical analyses, machine learning systems are distinguished from statistical analyses by virtue of the ability to learn without explicit programming and being rooted in computer technology.

[0091] A typical machine learning pipeline may include building a machine learning model from a sample dataset (referred to as a “training set”), evaluating the model against one or more additional sample datasets (referred to as a “validation set” and/or a “test set”) to decide whether to keep the model and to benchmark how good the model is, and using the model in “production” to make predictions or decisions against live input data captured by an application service. The training set, the validation set, and/or the test set, as well as the machine learning model, are often difficult to obtain and should be kept confidential.

[0092] In this document, when relative terms of order such as “first” and “second” are used to modify a noun, such use is simply intended to distinguish one item from another, and is not intended to require a sequential order unless specifically stated.

[0093] In addition, terms of relative position such as “front” and “rear”, or “ahead” and “behind”, when used, are intended to be relative to each other and need not be absolute, and only refer to one possible position of the device associated with those terms depending on the device’s orientation.

[0094] In a first set of embodiments, a method of generating a vehicle motion planning model simulation scenario is disclosed. The method may be embodied in computer programming instructions and/or implemented by a system that includes a processor. The method includes receiving, from a data store containing multiple simulation scenarios, a base simulation scenario that includes features of a scene through which a vehicle may travel. The method includes receiving, from the data store, a simulation variation for an object in the scene, the simulation variation defining multiple values for a characteristic of the object. The method further includes adding the simulation variation to the base simulation scenario to yield an augmented simulation scenario and applying the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model. The motion planning model simulates movement of the vehicle along a planned trajectory and iteratively simulates variations of the object based on the multiple values for the characteristic of the object. In response to each simulated variation of the object, the motion planning model selects a continued trajectory for the vehicle, either the planned trajectory or an alternate trajectory, and it causes the vehicle to move along the continued trajectory.
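By way of illustration only, the method summarized above might be sketched in Python as follows. The scenario and model interfaces used here (add, plan, set_object_characteristic, simulate, select_trajectory, move_along, object_id, values) are hypothetical placeholders for whatever simulation and motion planning APIs an implementation actually provides; the sketch is not a definitive implementation of the claimed method.

# Illustrative sketch only; the scenario and model interfaces are hypothetical.
def train_on_augmented_scenario(motion_planning_model, base_scenario, simulation_variation):
    """Apply an augmented simulation scenario to a motion planning model, per the summary above."""
    # Add the simulation variation to the base scenario to yield the augmented scenario.
    augmented_scenario = base_scenario.add(simulation_variation)
    planned_trajectory = motion_planning_model.plan(augmented_scenario)

    # Iteratively simulate one variation of the object per value of the characteristic.
    for value in simulation_variation.values:
        augmented_scenario.set_object_characteristic(simulation_variation.object_id, value)
        motion_planning_model.simulate(augmented_scenario, planned_trajectory)

        # In response to the simulated variation, keep the planned trajectory or select an
        # alternate one, then move the (simulated) vehicle along the continued trajectory.
        continued_trajectory = motion_planning_model.select_trajectory(planned_trajectory)
        motion_planning_model.move_along(continued_trajectory)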

[0095] Implementations of the first set of embodiments may include one or more of the following optional features. In some examples, the method includes defining the simulation variation by outputting, via a user interface that includes a display device, the characteristic of the object and receiving, via the user interface, one or more variations for the characteristic of the object. The method may further include outputting, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic and saving the one or more variations for the characteristic to the data store as the simulation variation. The characteristic may include a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object. In some examples, the method further includes receiving, from the data store, a second base simulation scenario that includes features of a second scene through which the vehicle may travel. The method may include receiving, from the data store, the simulation variation, adding the simulation variation to the second base simulation scenario to yield a second augmented simulation scenario, and applying the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model.

[0096] In some variations of the first set of embodiments, the method includes receiving, from the data store, a second simulation variation for a second object in the scene, the second simulation variation defining multiple values for a characteristic of the second object. The method may further include adding the second simulation variation to the base simulation scenario to yield a second augmented simulation scenario and applying the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model. The motion planning model may iteratively simulate variations of the object and the second object based on the multiple values for the characteristic of the object and the multiple values for the characteristic of the second object and, in response to each variation of the object or the second object, select the continued trajectory for the vehicle.

[0097] In some variations of the first set of embodiments, the method further includes receiving, from the data store, a second simulation variation for the object in the scene, the second simulation variation defining multiple values for a second characteristic of the object. The method may further include adding the simulation variation and the second simulation variation to the base simulation scenario to yield the augmented simulation scenario. The method may further include generating an augmentation element that includes a second object and a behavior for the second object and adding the simulation variation and the augmentation element to the base simulation scenario to yield the augmented simulation scenario.

[0098] In some variations of the first set of embodiments, simulating movement of the vehicle along the planned trajectory includes running the vehicle on a test track, wherein perception data from one or more vehicle sensors is augmented by simulated variations of the object. In some examples, one variation of the simulated object partially interferes with the planned trajectory of the vehicle and the continued trajectory is an alternate trajectory that will keep the vehicle at least a threshold distance away from the object.

[0099] In a second set of embodiments, a method of generating training scenario variations is disclosed. The method may be embodied in a scenario variation generation system configured to receive a base scenario including objects in a scene and execute a user interface configured to display data associated with characteristics of the objects in the scene. The characteristic may include a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object. The user interface may be further configured to receive variation input related to characteristics of the objects. In response to the user interface receiving variation input defining multiple values for characteristics of the objects, the scenario variation generation system may store a scenario variation in a data store containing simulation scenarios, wherein the scenario variation is based on the multiple values and configured to augment one or more of the simulation scenarios.

[00100] In some variations of the second set of embodiments, the method further includes, after receiving input related to the characteristics of the objects, displaying the multiple values by the user interface, receiving, by the user interface, a selection including one or more of the values, augmenting the base scenario with the selected values, and displaying the augmented scenario by the user interface.

[00101] In some variations of the second set of embodiments, the multiple values include evenly spaced values in a range, a Gaussian distribution of values in a range, a uniform distribution of values in a range, or user-configured discrete values. The method may further include receiving, by the user interface, variation input related to two or more characteristics of one or more objects, the variation input defining multiple values for the two or more characteristics of the one or more objects. After receiving input related to the two or more characteristics, the method may include receiving a selection including values to remove from the scenario variation.
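For illustration, the kinds of value sets described above (evenly spaced values in a range, a Gaussian distribution of values, a uniform distribution of values in a range, or user-configured discrete values) might be generated as in the following Python sketch; the function names and the example walking-speed values are hypothetical.

# Illustrative sketch only; function names and example values are hypothetical.
import random

def evenly_spaced(low: float, high: float, count: int) -> list:
    """Evenly spaced values across a range, inclusive of both endpoints."""
    if count < 2:
        return [low]
    step = (high - low) / (count - 1)
    return [low + i * step for i in range(count)]

def gaussian_values(mean: float, std_dev: float, count: int) -> list:
    """A Gaussian distribution of values around a mean."""
    return [random.gauss(mean, std_dev) for _ in range(count)]

def uniform_values(low: float, high: float, count: int) -> list:
    """A uniform distribution of values in a range."""
    return [random.uniform(low, high) for _ in range(count)]

# Example: candidate values for a simulated pedestrian's walking speed, in meters per second.
print(evenly_spaced(0.5, 2.0, 4))      # [0.5, 1.0, 1.5, 2.0]
print(uniform_values(0.5, 2.0, 4))
print(gaussian_values(1.25, 0.3, 4))
# User-configured discrete values can simply be listed directly, e.g. [0.8, 1.4, 2.0].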

[00102] In other embodiments, a vehicle motion planning model training system includes a processor, a data store of simulation scenarios, and a memory that stores programming instructions that are configured to cause the processor to train a vehicle motion planning model. The training system receives, from the data store, a base simulation scenario that includes features of a scene through which a vehicle may travel. The system also receives, from the data store, a simulation variation for an object in the scene, the simulation variation defining multiple values for a characteristic of the object. The system then adds the simulation variation to the base simulation scenario to yield an augmented simulation scenario and applies the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model. The motion planning model simulates movement of the vehicle along a planned trajectory and iteratively simulates variations of the object based on the multiple values for the characteristic of the object. In response to each simulated variation of the object, the motion planning model selects a continued trajectory for the vehicle, either the planned trajectory or an alternate trajectory, and causes the vehicle to move along the continued trajectory.

[00103] Implementations of the disclosure may include one or more of the following optional features. In some examples, the training system outputs, via a user interface that includes a display device, the characteristic of the object and receives, via the user interface, one or more variations for the characteristic of the object. The training system may output, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic and save the one or more variations for the characteristic to the data store as the simulation variation. The characteristic may include a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object. In some examples, the training system receives, from the data store, a second base simulation scenario that includes features of a second scene through which the vehicle may travel. The training system may receive, from the data store, the simulation variation, add the simulation variation to the second base simulation scenario to yield a second augmented simulation scenario, and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model. The training system may receive, from the data store, a second simulation variation for a second object in the scene, the second simulation variation defining multiple values for a characteristic of the second object. The training system may add the second simulation variation to the base simulation scenario to yield a second augmented simulation scenario and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model. The training system may iteratively simulate variations of the object and the second object based on the multiple values for the characteristic of the object and the multiple values for the characteristic of the second object and, in response to each variation of the object or the second object, select the continued trajectory for the vehicle.

[00104] The training system may receive, from the data store, a second simulation variation for the object in the scene, the second simulation variation defining multiple values for a second characteristic of the object. The training system may add the simulation variation and the second simulation variation to the base simulation scenario to yield the augmented simulation scenario. The training system may generate an augmentation element that includes a second object and a behavior for the second object and add the simulation variation and the augmentation element to the base simulation scenario to yield the augmented simulation scenario.

[00105] In some examples, simulating movement of the vehicle along the planned trajectory includes running the vehicle on a test track, wherein perception data from one or more vehicle sensors is augmented by simulated variations of the object. In some examples, one variation of the simulated object partially interferes with the planned trajectory of the vehicle and the continued trajectory is an alternate trajectory that will keep the vehicle at least a threshold distance away from the object.

[00106] In other embodiments, a computer program product is disclosed. The product includes a memory that stores programming instructions that are configured to cause a processor to train a vehicle motion planning model. The product receives, from a data store, a base simulation scenario that includes features of a scene through which a vehicle may travel. The product also receives, from the data store, a simulation variation for an object in the scene, the simulation variation defining multiple values for a characteristic of the object. The product then adds the simulation variation to the base simulation scenario to yield an augmented simulation scenario and applies the augmented simulation scenario to an autonomous vehicle motion planning model to train the motion planning model. The motion planning model simulates movement of the vehicle along a planned trajectory and iteratively simulates variations of the object based on the multiple values for the characteristic of the object. In response to each simulated variation of the object, the motion planning model selects a continued trajectory for the vehicle, either the planned trajectory or an alternate trajectory, and causes the vehicle to move along the continued trajectory.

[00107] Implementations of the disclosure may include one or more of the following optional features. Optionally, in some embodiments the product outputs, via a user interface that includes a display device, the characteristic of the object and receives, via the user interface, one or more variations for the characteristic of the object. The product may output, via the display device, a revised simulation scenario in which the object exhibits the one or more variations of the characteristic and save the one or more variations for the characteristic to the data store as the simulation variation. The characteristic may include a dimension, a position, a velocity, an acceleration, or a behavior-triggering distance of the object. In some examples, the product receives, from the data store, a second base simulation scenario that includes features of a second scene through which the vehicle may travel. The product may receive, from the data store, the simulation variation, add the simulation variation to the second base simulation scenario to yield a second augmented simulation scenario, and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model. The product may receive, from the data store, a second simulation variation for a second object in the scene, the second simulation variation defining multiple values for a characteristic of the second object. The product may add the second simulation variation to the base simulation scenario to yield a second augmented simulation scenario and apply the second augmented simulation scenario to the autonomous vehicle motion planning model to train the motion planning model. The motion planning model may iteratively simulate variations of the object and the second object based on the multiple values for the characteristic of the object and the multiple values for the characteristic of the second object and, in response to each variation of the object or the second object, select the continued trajectory for the vehicle.

[00108] The product may receive, from the data store, a second simulation variation for the object in the scene, the second simulation variation defining multiple values for a second characteristic of the object. The product may add the simulation variation and the second simulation variation to the base simulation scenario to yield the augmented simulation scenario. The product may generate an augmentation element that includes a second object and a behavior for the second object and add the simulation variation and the augmentation element to the base simulation scenario to yield the augmented simulation scenario.

[00109] In some embodiments, simulating movement of the vehicle along the planned trajectory includes running the vehicle on a test track, wherein perception data from one or more vehicle sensors is augmented by simulated variations of the object. In some examples, one variation of the simulated object partially interferes with the planned trajectory of the vehicle and the continued trajectory is an alternate trajectory that will keep the vehicle at least a threshold distance away from the object.