Title:
SYSTEMS AND METHODS FOR TRAINING AND CONTROLLING AN ARTIFICIAL NEURAL NETWORK WITH DISCRETE VEHICLE DRIVING COMMANDS
Document Type and Number:
WIPO Patent Application WO/2019/105974
Kind Code:
A1
Abstract:
Systems, devices, and methodologies are provided for training and controlling a neural network. The neural network is trained using definitive and random training modes to train neurons in a monolithic network. The neural network output is used to control an autonomous or semi-autonomous vehicle.

Inventors:
SALEEM MUNEEB (US)
Application Number:
PCT/EP2018/082782
Publication Date:
June 06, 2019
Filing Date:
November 28, 2018
Assignee:
VOLKSWAGEN AG (DE)
AUDI AG (DE)
International Classes:
G05D1/02; B60W30/00
Other References:
HUBSCHNEIDER CHRISTIAN ET AL: "Adding navigation to the equation: Turning decisions for end-to-end vehicle control", 2017 IEEE 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), IEEE, 16 October 2017 (2017-10-16), pages 1 - 8, XP033330568, DOI: 10.1109/ITSC.2017.8317923
Attorney, Agent or Firm:
RITTNER & PARTNER PATENTANWÄLTE MBB (DE)
Claims:
CLAIMS

1. A method for training a monolithic neural network for controlling a vehicle, the method comprising:

inputting first and second definitive driving commands into the neural network; inputting first and second driving maneuvers corresponding to the first and second definitive driving commands into the neural network with the first and second definitive driving commands as part of a definitive training mode; and

inducing confusion into the neural network, as part of a random training mode, by inputting a unique random command into the neural network along with the first and second driving maneuvers so that the neural network controls an autonomous vehicle by identifying open road space.

2. The method of claim 1, wherein the definitive training mode further comprises inputting a third definitive command and a third driving maneuver into the neural network, and wherein the random training mode further includes randomly selected third driving maneuvers.

3. The method of claim 1, wherein the training takes place in real-time and the driving maneuvers are input via manual operation of the vehicle captured by camera sensors mounted to the vehicle.

4. The method of claim 1, wherein the first definitive command is a right turn command and the first driving maneuver is a right turn.

5. The method of claim 4, wherein the second definitive command is a left turn command and the second driving maneuver is a left turn.

6. The method of claim 5, wherein during each neutral command, one of a right turn maneuver and a left turn maneuver is input.

7. The method of claim 6, wherein during each neutral command, one of a plurality of right turn and left turn maneuvers at a plurality of different angles is input.

8. A vehicle comprising:

one or more sensors mounted to the vehicle and configured to capture data about the vehicle surroundings in real-time; and

an autonomous or semi-autonomous driving system having an end-to-end neural network with a single trained model operably trained in definitive and neutral commands and configured to process the sensed data and determine and output a driving trajectory,

wherein the driving system executes an automatic driving maneuver of the vehicle smoothly based on the driving trajectory.

9. The vehicle of claim 8, wherein the definitive commands include left turn commands, right turn commands and straight commands.

10. The vehicle of claim 9, wherein the neutral command training provides input to each of the left turn, right turn, and straight commands thereby smoothing the driving trajectory between each of the definitive commands and smoothing transitions between one control mode and another control mode of the neural network.

11. The vehicle of claim 8, further comprising a navigation system, wherein data from the navigation system is input into the end-to-end neural network and processed to determine the driving trajectory.

12. The vehicle of claim 8, wherein the one or more sensors comprise camera or LIDAR sensors.

13. The vehicle of claim 8, wherein the neural network receives real-time feedback during the execution of the driving maneuver to provide temporal control and update the output trajectory.

14. The vehicle of claim 8, wherein the vehicle is configured to be operated in a random mode so that the vehicle transitions smoothly between definitive driving maneuvers.

15. A system for automatically and autonomously executing smooth driving maneuvers comprising:

a vehicle;

an autonomous driving system integrated into the vehicle and having a command controller that controls and executes smooth driving maneuvers; and

means for continuously determining a trajectory of the vehicle by using neutral command reference data so that an output control command is continuously updated and the vehicle performs a smooth driving maneuver in response to the continuously updated control command.

16. The system of claim 15, wherein the means for continuously determining a trajectory comprises an end-to-end deep neural network with a single model trained in definitive and neutral command data.

17. The system of claim 16, wherein the definitive command data includes left turning data, right turning data, and straight moving data, and wherein the neutral command data includes left turning data, right turning data, and straight moving data.

18. The system of claim 16, wherein the end-to-end deep neural network includes recurrent layers that receive feedback from the command controller in real-time in order to continuously adjust the control command throughout execution of a driving maneuver.

19. The system of claim 18, wherein the driving maneuver is a turn and the continuous adjustment comprises adjusting the angle of the steering wheel from before the turn through completion of the turn.

20. A method for training an end-to-end artificial neural network in a vehicle to control an autonomous driving system, the method comprising: executing a definitive training mode in which a first defined command and a first driving maneuver are input to a model in the neural network, and a second defined command and a second driving maneuver are input into the neural network; and

executing a random training mode in which a plurality of neutral commands and a plurality of randomly selected first and second driving maneuvers are input into the model in the neural network,

wherein the trained neural network includes a single model made up of a plurality of neurons, and the neurons trained in the random mode contribute to the production of a safe and smooth trajectory for the vehicle, which is then input to the vehicle, resulting in smooth execution of the determined driving maneuver.

Description:
SYSTEMS AND METHODS FOR TRAINING AND CONTROLLING AN ARTIFICIAL NEURAL NETWORK WITH DISCRETE VEHICLE DRIVING COMMANDS

BACKGROUND

[0001] The present disclosure relates to systems, components, and methodologies for using an artificial neural network in autonomous driving. In particular, the present disclosure relates to training an artificial neural network and controlling a vehicle using the trained artificial neural network.

SUMMARY

[0002] According to the present disclosure, an end-to-end artificial neural network is provided that has a single model.

[0003] In accordance with at least one embodiment, the single model is trained in definitive and neutral commands to more smoothly control a vehicle maneuver.

[0004] In accordance with at least one embodiment, the definitive commands may be forward, left, and right turn commands that are input into the model along with respective forward, left, and right maneuvers executed by a vehicle driver. The neutral commands may be a plurality of neutral commands that are input in the model along with randomly selected forward, left, and right turn maneuvers executed by the vehicle driver. In this manner, the neutral commands are input to induce "confusion" into the neural network during training. In accordance with at least one embodiment, the maneuvers are input in real-time via sensors on the vehicle.

[0005] The neural network may be trained so that it can determine right, left, or forward trajectories at upcoming intersections. The neural network may be further configured to process navigational inputs such as voice commands or mapped route guidance by mapping them to particular spatial coordinates. The neural network may control a command controller so that a command is output to a vehicle component to execute the predicted trajectory at the intersection.

[0006] Additional features of the present disclosure will become apparent to those skilled in the art upon consideration of illustrative embodiments exemplifying the best mode of carrying out the disclosure as presently perceived.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] The detailed description particularly refers to the accompanying figures in which:

[0008] Fig. 1 is a schematic and diagrammatic illustration of a vehicle control system including sensors and inputs to an autonomous driving system having a neural network and command controller output to execute driving maneuvers;

[0009] Fig. 2 is a block diagram of a training regime for training the neural network of Fig. 1 to identify driving trajectories and predict driving maneuvers;

[0010] Fig. 3 is a schematic and diagrammatic illustration of an end-to-end neural network of Fig. 1;

[0011] Fig. 4 is an illustration of an exemplary two-dimensional model of a mapping of neural network commands for the neural network of Fig. 1, including straight, right, left, and neutral, and their relative polarity and distance from neutral in the plane;

[0012] Fig. 5 is an illustration of an exemplary three-dimensional model of a semantically-sensitive mapping of neural network commands for the neural network of Fig. 1, including straight, right, left, and neutral, their relative polarity, as well as secondary commands and their relative coordinates and distance from neutral in the planes;

[0013] Fig. 6A is an illustration of an exemplary steering angle over time for a human driver compared with an autonomous vehicle having a neural network that is trained only in definitive commands; and

[0014] Fig. 6B is an illustration of an exemplary steering angle over time for a human driver compared with an autonomous vehicle having a neural network that is trained in definitive and confusion-induced commands.

DETAILED DESCRIPTION

[0015] The figures and descriptions provided herein may have been simplified to illustrate aspects that are relevant for a clear understanding of the herein described devices, systems, and methods, while eliminating, for the purpose of clarity, other aspects that may be found in typical devices, systems, and methods. Those of ordinary skill may recognize that other elements and/or operations may be desirable and/or necessary to implement the devices, systems, and methods described herein. Because such elements and operations are well known in the art, and because they do not facilitate a better understanding of the present disclosure, a discussion of such elements and operations may not be provided herein. However, the present disclosure is deemed to inherently include all such elements, variations, and modifications to the described aspects that would be known to those of ordinary skill in the art.

[0016] Fig. 1 is a schematic and diagrammatic view of an exemplary vehicle control system 10. According to Fig. 1, a vehicle 15 may include an autonomous driving system 20 including a neural network 25, a local storage or memory 30, and a component controller 35 implemented in part using one or more computer processors. Vehicle 15 may further include sensors 40, such as camera or video sensors, coupled to the vehicle to capture data about the environment surrounding the vehicle and communicate the captured data to the neural network. Vehicle 15 may also include a navigation system 45 including one or more computer processors running software configured to capture global positioning data and communicate navigational data to the vehicle neural network 25. Neural network 25 may be trained using real-time and/or stored driving maneuver data along with commands, as described further in Fig. 2, resulting in smooth human-like maneuverability of the vehicle 15 in response to outputs from the component controller 35. During training, the data and command inputs alternatively may not be in 'real-time'. The commands plus imagery data can be stored on disk and processed repetitively until the model is trained.

[0017] A neural network 225 may be a single model end-to-end neural network as described in Fig. 3. Neural network training 200 may include two modes of training, definitive mode 230 and random mode 232. In each mode, the network is trained on the same trajectories of straight, right, and left, with different definitive and neutral input commands. In definitive mode training 230, a plurality of definitive commands is provided as an input to the neural network along with corresponding definitive driving maneuver data. For example, a discrete definitive command may be a right turn command 234 that is input along with right turn sensor data 236 for an approaching intersection. Another definitive command may be a left turn command 238 that is input with left turn sensor data 240 for an approaching intersection. Another definitive command may be a straight command 242 and straight sensor data 244 for an approaching intersection. The sensor data may be image data selected and input from stored historic data sets of known driving maneuvers. Alternatively, the sensor data may be image data collected in real time during manual driving of the vehicle that includes the neural network 225.

[0018] In random mode training, a neutral mode command is provided as an input to the neural network 225 along with corresponding randomly selected driving data, the driving data corresponding to the straight, right turn, and left turn data as was used in definitive command training. For example, the random command may be a neutral command 246 that is input along with right turn sensor data 248 for an approaching intersection.

[0019] In this mode, the neutral command 246 may also be input along with left turn sensor data 250 for an approaching intersection. The neutral command may also be input along with straight sensor data 252 for an approaching intersection. In this manner, in random mode, the network is trained in the overall dynamics of driving under all valid maneuvers, while still maintaining control over which trajectory or maneuver to choose. While the control input is in random training mode, the neural network is allowed to learn the possible valid maneuvers in each driving situation due to the random mode simply being a mix of all the other definitive commands. In this manner, the neural network learns to follow the free space in the road and avoid obstacles.

[0020] Since the neural network model is monolithic, the command inputs flow through the same computational graph, and therefore, lessons learned in random mode training are shared with the other definitive command modes as well. Due to the monolithic nature and the induced confusion provided by the random mode with unique random commands, such as neutral command inputs, some of the neurons in the network automatically learn to detect features in the images that are crucial to safe navigation, such as open spaces in the random mode. According to this embodiment, the neural network 225 receives 50% of its training in definitive mode training 230 and 50% of its training in random mode training 232. However, the percentage of time in each training mode may vary depending on the dataset used and the driving scenarios the network is being trained for.
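
To make the two-mode regime concrete, the following Python sketch builds a labeled training set in the manner paragraphs [0017]-[0020] describe: each definitive command is paired with its own maneuver data, while the neutral command is paired with randomly selected maneuvers from the same pool, mixed roughly 50/50. The coordinate values and function names are illustrative assumptions consistent with Figs. 4-5, not details taken from the application.

    import random

    # Hypothetical 2-D command embeddings (cf. Fig. 4); the exact coordinates
    # are assumptions chosen to satisfy the polarities stated in [0024].
    COMMANDS = {
        "straight": (0.707, 0.707),   # (+, +)
        "right":    (0.259, -0.966),  # (+, -)
        "left":     (-0.966, 0.259),  # (-, +)
        "neutral":  (0.0, 0.0),       # origin: a mix of all maneuvers
    }

    def build_training_set(maneuver_data, random_fraction=0.5):
        """maneuver_data maps 'straight'/'right'/'left' to lists of sensor
        frames (e.g. camera images) recorded during that maneuver."""
        samples = []
        # Definitive mode: each command paired with its matching maneuver data.
        for cmd in ("straight", "right", "left"):
            for frame in maneuver_data[cmd]:
                samples.append((COMMANDS[cmd], frame, cmd))
        # Random mode: the neutral command paired with randomly chosen
        # maneuvers, inducing "confusion" so the network learns the driving
        # dynamics shared across all commands.
        n_random = int(len(samples) * random_fraction / (1.0 - random_fraction))
        for _ in range(n_random):
            cmd = random.choice(("straight", "right", "left"))
            samples.append((COMMANDS["neutral"],
                            random.choice(maneuver_data[cmd]), cmd))
        random.shuffle(samples)
        return samples

With random_fraction=0.5, definitive and random samples appear in equal numbers, matching the 50/50 split described above; other ratios follow from the same formula.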

[0021] An illustrative embodiment of the real-time, end-to-end processing of the neural network 300 is shown in Fig. 3. Sensor data 350 may be captured and input from a plurality of optical sensors such as cameras 366 configured to capture at least left, front, and right images relative to the exterior of the vehicle. These inputs may be fed into convolutional layers 352 of the network. Optionally, NavFusion embedding 354 of input from a navigational system 368 may be added to the convolutional image output. Each voice command or guidance given by a navigation system may be mapped according to the semantic mapping described in Figs. 4-5 and fed directly into the neural network.

[0022] Dense layers 356 predict the vehicle command (left, right, straight), and recurrent layers 358 may determine the time-series of the collected data as part of a feedback loop 370 with the command controller 362, which outputs the command and drives execution 360 of a vehicle component 364. In this illustrative embodiment, the component 364 is a steering wheel, and the recurrent layers 358 utilize long short-term memory (LSTM), the real-time continuous input of images, and the real-time command controller 362 output to adjust the steering wheel angle over time throughout the execution of the turn maneuver.
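
The pipeline of paragraphs [0021]-[0022] can be pictured as a single PyTorch module, sketched below. The layer widths, kernel sizes, and two-dimensional command input are assumptions made for illustration; the application does not specify the network's dimensions.

    import torch
    import torch.nn as nn

    class EndToEndDrivingNet(nn.Module):
        """Illustrative single-model network: convolutional layers over camera
        images, fused with a command/navigation embedding, followed by dense
        and recurrent (LSTM) layers that emit a steering angle per time step."""

        def __init__(self, cmd_dim=2, hidden=128):
            super().__init__()
            self.conv = nn.Sequential(               # convolutional layers 352
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.dense = nn.Sequential(              # dense layers 356
                nn.Linear(48 + cmd_dim, hidden), nn.ReLU(),
            )
            self.rnn = nn.LSTM(hidden, hidden, batch_first=True)  # recurrent layers 358
            self.steer = nn.Linear(hidden, 1)        # steering-angle output

        def forward(self, images, command):
            # images: (batch, time, 3, H, W); command: (batch, time, cmd_dim)
            b, t = images.shape[:2]
            feats = self.conv(images.flatten(0, 1)).view(b, t, -1)
            x = self.dense(torch.cat([feats, command], dim=-1))
            x, _ = self.rnn(x)       # temporal feedback across the maneuver
            return self.steer(x)     # steering angle for each time step

Feeding the command embedding at every time step is what lets one monolithic model serve all definitive and neutral modes through a single computational graph.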

[0023] During deployment of the trained neural network, the vehicle can be operated in the random mode for prolonged periods of time. The mode of operation may be input by a user via a user interface such as a keyboard, touchscreen, or vehicle interface. Sensors are active and capture images or other data indicative of the vehicle surroundings. During this time, the vehicle executes all the discrete maneuvers it has been trained on when each of them becomes relevant and safe/valid to execute. For example, if the vehicle is inside a parking lot in random mode, it will keep going straight until it reaches a turn or intersection, at which point it will randomly choose a maneuver it has been trained on, and execute that maneuver if feasible. For example, if the vehicle chooses to execute a left turn driving maneuver at the intersection, it can then drive as if in straight definitive mode until it reaches another turn or intersection. In this manner, the vehicle avoids curbs and obstacles, and allows the neural network to produce trajectories which are a combination of the definitive commands it has been trained on. The network may use lessons learned across different definitive training sessions together in a single driving control/command mode. The network deployed in the command mode can make a trajectory determination using a minimum of a single captured image.
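
A control-loop sketch of this random-mode deployment follows. The sensor, controller, and model interfaces (capture, at_intersection, is_feasible, execute, predict) are hypothetical stand-ins, since the application describes the behavior but not an API.

    import random

    def drive_in_random_mode(model, sensors, controller,
                             maneuvers=("left", "right", "straight")):
        # Operate the vehicle under the neutral command for prolonged
        # periods, picking a random trained maneuver at each intersection.
        neutral = (0.0, 0.0)
        while sensors.active():
            frame = sensors.capture()                   # real-time image capture
            if sensors.at_intersection(frame):
                choice = random.choice(maneuvers)       # any trained maneuver
                trajectory = model.predict(frame, command=choice)
                if controller.is_feasible(trajectory):  # execute only if safe/valid
                    controller.execute(trajectory)
                    continue
            # Otherwise follow the free road space under the neutral command,
            # e.g. keep going straight through a parking lot until a turn appears.
            controller.execute(model.predict(frame, command=neutral))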

[0024] Fig. 4 illustrates the two-dimensional relationship of the neutral command to each of the right, left, and straight commands so that they are suitable numerical inputs for the neural network. The neutral command 402 is at the origin point (0, 0). As can be seen, each of straight command 404, left command 408, and right command 406 is located on a unit circle forming a triangle 410. The coordinates for each definitive command 404, 406, and 408 are chosen so that their polarities are distinct, thereby helping the neural network to learn and associate polarities with the definitive command. For example, straight command coordinates are both positive (+,+), right command 406 coordinates are positive, negative (+,-), and left command 408 coordinates are negative, positive (-,+). Neutral command (0,0) has no polarity and is symmetrically at the center of the triangle, as it is the combination of all three definitive commands represented by the vertices of the triangle.
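
One coordinate assignment satisfying the constraints just described (vertices on the unit circle, distinct polarities, neutral at the center) is sketched below; the specific angles are an assumption, since the application fixes only the polarities and the unit-circle placement.

    import math

    # Equilateral triangle on the unit circle, oriented so the stated
    # polarities hold: straight (+,+), right (+,-), left (-,+).
    EMBEDDING = {
        "straight": (math.cos(math.radians(45)), math.sin(math.radians(45))),
        "right": (math.cos(math.radians(-75)), math.sin(math.radians(-75))),
        "left": (math.cos(math.radians(165)), math.sin(math.radians(165))),
        "neutral": (0.0, 0.0),  # origin: combination of all three commands
    }

    # The neutral origin is the centroid of the three definitive vertices,
    # reflecting that it mixes all three commands symmetrically.
    xs, ys = zip(*(EMBEDDING[c] for c in ("straight", "right", "left")))
    assert abs(sum(xs) / 3) < 1e-9 and abs(sum(ys) / 3) < 1e-9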

[0025] Fig. 5 illustrates how the mapping of the four commands of Fig. 4 can be extended to a semantically sensitive numerical mapping of additional commands to number sequences. As can be seen, similar commands are close to each other in the embedding space. For example, exit left 516 may lie between straight command 504 and left turn command 508 (90 degrees from straight), but closer to straight command 504 depending on the angle of the exit. In this manner, relative degrees of turns between 0 degrees and a 90 degree turn can be defined. Likewise, exit right 514 may lie between straight command 504 and right turn command 506. An optional third dimension "z" may be added to show relationships including braking and accelerating driving maneuvers. For example, stop slowly 512 may lie in the z-plane, indicative of the rate of deceleration and smooth transition to a stopped state.
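
Continuing the illustrative coordinates from the previous sketch, intermediate commands such as exit left can be placed by interpolating along the unit circle between two definitive commands, with a hypothetical third coordinate for braking maneuvers; the parameter values below are assumptions, not taken from the application.

    import math

    def interpolate_command(a, b, t):
        """Place a new command on the unit circle between commands a and b
        (each an (x, y) point); t=0 returns a, t=1 returns b, so a shallow
        exit uses a small t to stay closer to the straight command."""
        ang_a, ang_b = math.atan2(a[1], a[0]), math.atan2(b[1], b[0])
        ang = ang_a + t * (ang_b - ang_a)
        return (math.cos(ang), math.sin(ang))

    straight = (math.cos(math.radians(45)), math.sin(math.radians(45)))
    left = (math.cos(math.radians(165)), math.sin(math.radians(165)))

    # "Exit left" sits between straight and a full 90-degree left turn,
    # closer to straight for a shallow exit (t chosen arbitrarily here).
    exit_left = interpolate_command(straight, left, t=0.3)

    # A hypothetical third dimension "z" for braking/accelerating maneuvers:
    # a negative z could encode the rate of deceleration, so "stop slowly"
    # pairs the neutral (x, y) origin with a small-magnitude z value.
    stop_slowly = (0.0, 0.0, -0.2)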

[0026] Figs. 6A-6B illustrate how the neural network of this disclosure and the use of the random training command result in driving maneuvers that more closely resemble smooth human driving trajectories. For example, Fig. 6A depicts how the neural network may behave through turning steering angles over time 682 when the neural network has not been trained with the neutral mixed command. Compared with the human manual trajectory 680 for the same steering maneuvers, the neural network driven vehicle exhibits abrupt turning behaviors 683, 684 and an inability to transition fully and smoothly 685. This is due to the strong data correlation between the definitive commands and the outputs that occurs. By way of contrast, in Fig. 6B, the human trajectory 690 is closely followed by the neural network driven vehicle trajectory 692. Vertical dashed lines may indicate transitions between control modes. By adding the random training and neutral commands, the correlation to the definitive commands is no longer as strong, removing all-or-nothing maneuvering and trajectories.

[0027] Although the network has been disclosed as having convolutional layers and sensed image inputs, other sensor inputs, such as radar or LIDAR sensors and inputs, may be used. Additional inputs can be provided via a vehicle CAN or similar internal communication network of the vehicle. Further, commands may be expanded beyond simply left, right, and straight commands to include, for example, reverse or stop commands as described with respect to Figs. 4-5. The system may permit a range of operation from full autonomy to partially-supervised, or semi-autonomous, driving capabilities.

[0028] Deep neural networks have recently been shown to be able to control the steering of an automotive vehicle by learning a mapping between raw image sensor data and steering direction. In accordance with at least one embodiment, the network may operate in an end-to-end manner without any external control over the network's predictions. Although such a network could theoretically be trained on a set of external commands to control the network, such a system would be unable to learn the dynamics of driving. As a result, such training would be unable to transition smoothly, or in a human-like fashion, between one command or mode and another command or mode. This technical problem stems from a strong data correlation between commands and output trajectories. More specifically, a single neural network cannot perform different discrete driving tasks using a single type of output (e.g. steering).

[0029] Classical approaches to solving such technical obstacles to human-like driving maneuvers by an autonomous vehicle use a modular approach, in contrast to the single end-to-end model used in the presently disclosed embodiments. Such modular approaches consist of multiple modules for each subtask. Some of these modules contain neural network modules that solve a specific subtask related to driving instead of trying to solve the entire problem of driving end-to-end on raw sensor data. For example, a module could output segmented images predicting where in the image it perceives free space, vehicles, trees, pedestrians, etc., whereas another module could be responsible for detecting traffic signs. Use of such a modular approach further requires a rule-based planner that could then plan a trajectory based on the information from these modules for the vehicle to follow. Additionally, in such a configuration, a final module could then elaborate on this trajectory to produce a sequence of steering angles to follow for a smooth drive.

[0030] Such conventional modular approaches, however, require extensive tuning, and each module must be configured and/or trained carefully. Additionally, the whole system's reliability depends on each module functioning properly. Furthermore, this approach assumes, and requires, a structured environment because it can only account for objects and situations that the original programmer accounted for at the system design and data selection phase.

[0031] To the contrary, the disclosed embodiments provide a technical solution to these conventional deficiencies by providing a method to control the sometimes unpredictable output of a driving artificial neural network by devising a new training method for driving networks. This approach may be termed "induced confusion mode."

[0032] In accordance with disclosed embodiments, this approach involves training the same neural network on all commands separately and, additionally, training the same network on an additional auxiliary random mode or command. This auxiliary random mode may constitute training the network on all other trajectories while keeping this random mode as the control input for the network. This approach teaches the network the overall dynamics of driving under all valid maneuvers, while still maintaining control over which trajectory or maneuver to choose, given an external command. Moreover, this approach enables smooth, human-like steering behavior for the vehicle, while paying attention to the dynamics of driving by learning to avoid obstacles and drive within free road space automatically. Although illustrative embodiments are disclosed in terms of a vehicle, the neural network may be trained to direct maneuvers of other systems or robotic devices.

[0033] Disclosed embodiments may include apparatus/systems for performing the operations disclosed herein. An apparatus/system may be specially constructed for the desired purposes, or it may comprise a general purpose apparatus/system selectively activated or reconfigured by a program stored in the apparatus/system.

[0034] Disclosed embodiments may also be implemented in one or a combination of hardware, firmware, and software. They may be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the operations described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium may include read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices including thumb drives and solid state drives, and others.

[0035] Unless specifically stated otherwise, and as may be apparent from the following description and claims, it should be appreciated that throughout the specification descriptions utilizing terms such as "processing," "computing," "calculating," "determining," or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.

[0036] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. A "computing platform" or "controller" may comprise one or more processors.

[0037] Further, the term computer readable medium is meant to refer to any machine-readable medium (automated data medium) capable of storing data in a format readable by a mechanical device. Examples of computer-readable media include magnetic media such as magnetic disks, cards, tapes, and drums, punched cards and paper tapes, optical disks, barcodes, and magnetic ink characters. Further, computer readable and/or writable media may include, for example, a magnetic disk (e.g., a floppy disk, a hard disk), an optical disc (e.g., a CD, a DVD, a Blu-ray), a magneto-optical disk, a magnetic tape, semiconductor memory (e.g., a non-volatile memory card, flash memory, a solid state drive, SRAM, DRAM), an EPROM, an EEPROM, etc.

[0038] While various exemplary embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should instead be defined only in accordance with the following claims and their equivalents.