Title:
METHODS FOR AUTONOMOUS VEHICLE LOCALIZATION
Document Type and Number:
WIPO Patent Application WO/2019/070824
Kind Code:
A1
Abstract:
Disclosed are systems, methods, and computer-readable storage media to control a vehicle. In one aspect, a method includes capturing point-cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors, identifying a point in the point cloud data as a non-matching point in response to the point having no corresponding point in a map used to determine a position of the autonomous vehicle, determining whether the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map, determining the overlap score in response to the determining whether the non-matching point is to be used in the determination of the overlap score, determining a position of the autonomous vehicle based on the overlap score and the map, and controlling the autonomous vehicle based on the position.

Inventors:
JENSEN KENNETH JAMES (US)
Application Number:
PCT/US2018/054124
Publication Date:
April 11, 2019
Filing Date:
October 03, 2018
Assignee:
UBER TECHNOLOGIES INC (US)
International Classes:
G01S17/06; G01C21/30; G01S17/89; G01S7/48; G05D1/02
Domestic Patent References:
WO2017155970A12017-09-14
Foreign References:
CN106052697A2016-10-26
Other References:
DAVID L DORIA ET AL: "Consistency and confidence: A dual metric for verifying 3D object detections in multiple LiDAR scans", 2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCV WORKSHOPS : KYOTO, JAPAN, 27 SEPTEMBER - 4 OCTOBER 2009, INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, PISCATAWAY, NJ, 27 September 2009 (2009-09-27), pages 1481 - 1488, XP031664468, ISBN: 978-1-4244-4442-7
Attorney, Agent or Firm:
SCHEER, Bradley W. et al. (US)
Claims:
We claim:

1. A method of controlling a vehicle, comprising:

capturing point-cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors;

identifying a point in the point cloud data as a non-matching point based at least on the point having no corresponding point in a map used to determine a position of the autonomous vehicle;

determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map;

determining the overlap score based at least partly on the determining that the non-matching point is to be used in the determination of the overlap score;

determining a position of the autonomous vehicle based on the overlap score and the map; and

controlling the autonomous vehicle based on the position.

2. The method of claim 1, wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: generating ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point;

determining that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, including the non-matching point in the determination of the overlap score.

3. The method of claim 1, wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: determining that the non-matching point corresponds to a non-occluded point of the map; and including the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

4. The method of claim 1, wherein determining the overlap score further comprises relating (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

5. The method of claim 1, further comprising:

determining a height of the one or more LIDAR sensors; and

adjusting the point-cloud data based on the height,

wherein determining that the non-matching point is to be used in the determination of the overlap score comprises determining that the non-matching point is to be used in the determination of the overlap score based on the adjusted point cloud data.

6. The method of claim 5, further comprising:

determining a height parameter for the map;

comparing the height parameter to the height of the one or more LIDAR sensors; and

adjusting the point-cloud data based on the comparison.

7. The method of claim 5, further comprising:

determining an angle of the one or more LIDAR sensors with respect to a reference plane; and adjusting the point-cloud data based on the angle.

8. The method of claim 1, wherein controlling the autonomous vehicle based on the position comprises:

determining a motion plan for the autonomous vehicle; and sending a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

9. A computing system for a vehicle, comprising:

one or more hardware processors;

one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform operations, the operations comprising:

capturing point-cloud data representative of a surrounding of the vehicle with one or more LIDAR sensors;

identifying a point in the point cloud data as a non-matching point in response to the point having no corresponding point in a map used to determine a position of the vehicle;

determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map;

determining the overlap score in response to the determining that the non-matching point is to be used in the determination of the overlap score; determining a position of the vehicle based on the overlap score and the map; and

controlling the vehicle based on the position.

10. The computing system of claim 9, wherein determining that the non-matching point is to be used in a determination of the overlap score comprises:

generating ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point;

determining that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, including the non-matching point in the determination of the overlap score.

11. The computing system of claim 9, wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: determining that the non-matching point corresponds to a non-occluded point of the map; and including the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

12. The computing system of claim 9, wherein determining the overlap score further comprises relating (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

13. The computing system of claim 9, further comprising one or more light detection and ranging (LIDAR) sensors, wherein the point-cloud data is captured using the one or more light detection and ranging sensors.

14. The computing system of claim 13, further comprising the vehicle.

15. An apparatus for controlling a vehicle, comprising: means for capturing point-cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors;

means for identifying a point in the point cloud data as a non-matching point based at least on the point having no corresponding point in a map used to determine a position of the autonomous vehicle;

means for determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map;

means for determining the overlap score based at least partly on the determining that the non-matching point is to be used in the determination of the overlap score;

means for determining a position of the autonomous vehicle based on the overlap score and the map; and means for controlling the autonomous vehicle based on the position.

16. The apparatus of claim 15, wherein the means for determining that the non-matching point is to be used in a determination of the overlap score is configured to: generate ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point;

determine that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, include the non-matching point in the determination of the overlap score.

17. The apparatus of claim 15, wherein the means for determining that the non-matching point is to be used in a determination of the overlap score is configured to: determine that the non-matching point corresponds to a non-occluded point of the map; and

include the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

18. The apparatus of claim 15, wherein the means for determining the overlap score is configured to relate (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

19. The apparatus of claim 15, further comprising: means for determining a height of the one or more LIDAR sensors; and means for adjusting the point-cloud data based on the height,

wherein the means for determining that the non-matching point is to be used in the determination of the overlap score is configured to determine that the non-matching point is to be used in the determination of the overlap score based on the adjusted point cloud data.

20. The apparatus of claim 19, further comprising: means for determining a height parameter for the map;

means for comparing the height parameter to the height of the one or more LIDAR sensors; and

means for adjusting the point-cloud data based on the comparison.

21. The apparatus of claim 19, further comprising: means for determining an angle of the one or more LIDAR sensors with respect to a reference plane; and

means for adjusting the point-cloud data based on the angle.

22. The apparatus of claim 15, wherein the means for controlling the autonomous vehicle based on the position is configured to determine a motion plan for the autonomous vehicle, and to send a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

Description:
METHODS FOR AUTONOMOUS VEHICLE LOCALIZATION

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Application No. 15/853,448, filed December 22, 2017 and entitled "SYSTEMS, DEVICES, AND METHODS FOR IMPROVED AUTONOMOUS VEHICLE CONTROL," which claims priority to U.S. Provisional Application No. 62/567,439, filed October 3, 2017 and entitled "SYSTEMS, DEVICES, AND METHODS FOR IMPROVED AUTONOMOUS VEHICLE CONTROL." The contents of these prior applications are considered part of this application and are hereby incorporated by reference in their entireties.

TECHNICAL FIELD

[0002] The present disclosure generally relates to the control of autonomous vehicles, and in particular to improving an ability to correlate location information in a map with location data obtained by a vehicle while in motion.

BACKGROUND

[0003] Control of autonomous vehicles may rely on comparisons between maps of a local area and information acquired by on-board vehicle sensors. These sensors may rely, in various aspects, on one or more of optical, radio, and laser-based technologies. The comparisons between the maps and the data obtained by the vehicle sensors may enable the vehicle to identify its position along a roadway, in real time, and make adjustments to its position and/or speed based on the identified position.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings.

[0005] FIG. 1 is an overview diagram of an autonomous vehicle.

[0006] FIG. 2 shows an example embodiment of a map data collection vehicle.

[0007] FIG. 3A is a block diagram of an example system to control the navigation of a vehicle according to example embodiments of the present disclosure.

[0008] FIG. 3B is an example expanded view of an autonomous vehicle.

[0009] FIG. 4 is an expanded view of an example controller or vehicle computing system.

[0010] FIG. 5 shows example scenes captured by LIDAR sensor(s) of an autonomous vehicle and a map generating vehicle.

[0011] FIG. 6 is a visual representation of a best fit correspondence between two sets of point cloud data representing two scenes respectively.

[0012] FIGs. 7A-B show the two sets of point cloud data of FIG. 6.

[0013] FIGs. 8A-B show example two dimensional representations of point cloud data used to determine an overlap score.

[0014] FIGs. 9A-B illustrate tracing of a ray from a laser source to a destination.

[0015] FIG. 10 is an example of a comparison of point cloud data captured by an autonomous vehicle and data included in a map.

[0016] FIG. 11 is a flowchart of an example method of controlling an autonomous vehicle.

DETAILED DESCRIPTION

[0017] The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.

[0018] Maps of an autonomous system can be used to perceive the environment, detect objects, and localize and control autonomous vehicles. One type of map is a map that provides coordinates of certain map features, such as roads, addresses, points of interest, traffic information, and the like. Such maps, sometimes referred to as coverage maps, can be used to generate routing information from point A to point B. However, the maps of an autonomous system used for the functions listed above may require a higher-fidelity representation of the world. For example, a map for autonomous vehicles ("AV Map") may include a 3-D point cloud of the environment. As will be described below, such AV maps can be generated by driving a mapping vehicle equipped with a sensor within an area being mapped. The mapping vehicle may drive a section of road a number of times to generate a dense set of map data for the section of road. In this way, the map data represents the world from the perspective of the sensor on the mapping vehicle. The map data can include geolocation metadata so that the autonomous vehicle can determine the sensor data that is expected at a given location.

[0019] Maps may be generated from data collected in a variety of environments. For example, some maps are generated based on data collected from vehicles having roof-mounted sensors, e.g., a LIDAR (Light Detection and Ranging) sensor, a RADAR (Radio Detection and Ranging) sensor, or a visual sensor (e.g., a camera sensor). Other maps are generated from data collected by vehicles with sensors mounted on an extension platform. The extension platform may position the sensor substantially above the vehicle's roof line. Furthermore, while the data collected for maps may vary, vehicles using the maps for autonomous position determination may also have sensors that vary in their positions. For example, a small compact car may have a LIDAR sensor located at a first distance from the ground while a large semi-tractor may have a LIDAR sensor positioned at a larger second distance. Thus, a sensor mounted to the top of a compact car and to the top of a truck may generate different point clouds because of the different perspectives.

[0020] Additionally, the angle of a sensor used to collect map data may also vary from map to map. For example, some vehicles may be positioned in a slightly nose up attitude, resulting in the sensor being slightly inclined with respect to the horizontal. Other map generating vehicles may have a nose down stance, resulting in the sensor line of sight falling slightly below the horizontal.

[0021] The disclosed methods, devices, and systems provide for improvements in autonomous vehicles in a variety of environments, such as those described above. In some aspects disclosed, an autonomous vehicle generates an overlap score. The overlap score provides an indication of how well position information derived from a three-dimensional LIDAR point cloud captured by the autonomous vehicle matches position information (e.g., point cloud data) included in a particular map at a particular location and, in some embodiments, orientation. The autonomous vehicle generates multiple overlap scores from a single set of captured position information by calculating the overlap score at different candidate positions within the particular map. Each overlap score provides an indication of how likely the corresponding candidate position corresponds to the vehicle's true position within the map data. A candidate location corresponding to map data having the best overlap score may be considered the best approximation of the vehicle's position in some aspects.

[0022] As discussed above, in some embodiments, an autonomous vehicle's sensor that is collecting the position information is configured differently (e.g., different heights from the ground) than a sensor that captured the map data. This difference may result in a reduced overlap score when data derived from the two sensors is compared because, for example, the perspective of the vehicle's sensor is different from the perspective of the sensor that was used to generate the map. At a given location, different perspectives can cause certain objects or map features to be occluded or not occluded. In some cases, the reduction in overlap score, even for the overlap score at a location within the map that corresponds to the vehicle's true location, may result in the autonomous vehicle failing to recognize its current position from the map data. This may cause the vehicle to provide a warning, to exit from autonomous driving, or to rely on less accurate means of estimating its position.

[0023] To improve the determination of overlap between a vehicle's captured sensor data ("vehicle data") and map data, the disclosed methods, devices, and systems may detect differences between the vehicle data and map data that can be explained by differences in positions of the respective sensors used to collect the data. For example, in some aspects, if a point in the vehicle data does not have a match in the map data, a ray may be traced from the vehicle's sensor position through the non-matching point in the vehicle data. If the traced ray intersects a point in the map data, the non-matching point may be included in the overlap score determination. Otherwise, it may not be included. For example, in some aspects, an overlap score is a ratio of a number of points in the vehicle data that match points in the map data to a total number of points in the vehicle data. If the non-matching point is not included in the overlap score, the total number of points does not include the non-matching point.
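The bookkeeping in this paragraph can be summarized with a short sketch. The Python below is illustrative only and is not part of the application; the predicate names matches_map and ray_intersects_map are hypothetical stand-ins for the matching test and the ray-tracing test described here (a possible ray test is sketched after paragraph [0060] below).

```python
def overlap_score(vehicle_points, matches_map, ray_intersects_map):
    """Relate matched vehicle points to the points kept in the denominator.

    A non-matching point stays in the total only if a ray traced from the
    sensor through it intersects map structure; otherwise it is dropped,
    since the map carries no information along that ray.
    """
    matched = included = 0
    for point in vehicle_points:
        if matches_map(point):
            matched += 1
            included += 1
        elif ray_intersects_map(point):
            included += 1  # genuine disagreement; counts against the score
        # else: excluded entirely (likely a perspective/occlusion difference)
    return matched / included if included else 0.0
```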

[0024] Some aspects may determine points within the map data that are occluded. For example, these points may not have been visible to a sensor used to collect the map data. If a non-matching point in the vehicle data falls within an occluded portion of the map data, the non-matching point may not be included in the overlap score determination. Otherwise, the non-matching point may be included in the overlap score determination. Use of the above techniques may improve an overlap score's representation of how well the vehicle data matches the map data.

[0025] FIG. 1 is an overview diagram of an autonomous driving truck. The vehicle 102 includes a sensor 104, a controller or vehicle computing system 106, and a map database 118. The sensor 104 captures point cloud data representing a scene 105. In the illustrated embodiment, the sensor 104 is positioned on the roof of the truck, resulting in a distance of D1 between the ground and the sensor 104. The sensor 104 may be a LIDAR sensor. In some other aspects, the sensor 104 may be a visual sensor or an infrared sensor. The sensor 104 may be comprised of multiple physical sensors or multiple physical sensor devices. The sensor 104 may provide three-dimensional data. For example, data in an X, Y, and Z dimension may be obtained from the sensor 104.

[0026] The controller or vehicle computing system 106 compares data collected by the sensor 104 to map data included in the map database 118. As discussed above, the controller or vehicle computing system 106 computes an overlap score between the data collected by the sensor 104 and the map data in the map database 118 to determine a position of the vehicle 102.

[0027] FIG. 2 shows an example embodiment of a map data collection vehicle. The map data collection vehicle 202 includes a sensor 204, a controller 206, and the map database 118. The sensor 204 operationally captures point cloud data representing a scene 205 in front of the vehicle 202. The point cloud data captured by the sensor 204 may include three-dimensional data, such as data across each of X, Y, and Z dimensions. The controller 206 stores map data derived from point cloud data collected by the sensor 204 in the map database 118. The map data may reflect a perspective of the sensor 204, which is located at a distance D2 from the ground. Note that distance D2 is a different distance from that of distance D1, discussed above with respect to FIG. 1 and the vehicle 102.

[0028] With reference to the figures, example embodiments of the present disclosure will be discussed in further detail. FIG. 3A is a block diagram of an example system 300 to control the navigation of a vehicle 102 according to example embodiments of the present disclosure. The autonomous vehicle 102 is capable of sensing its environment and navigating with little to no human input. The autonomous vehicle 102 can be a ground-based autonomous vehicle (e.g., car, truck, bus, etc.), an air-based autonomous vehicle (e.g., airplane, drone, helicopter, or other aircraft), or other types of vehicles (e.g., watercraft). The autonomous vehicle 102 can be configured to operate in one or more modes, for example, a fully autonomous operational mode and/or a semi-autonomous operational mode. A fully autonomous (e.g., self-driving) operational mode can be one in which the autonomous vehicle can provide driving and navigational operation with minimal and/or no interaction from a human driver present in the vehicle. A semi-autonomous (e.g., driver-assisted) operational mode can be one in which the autonomous vehicle operates with some interaction from a human driver present in the vehicle.

[0029] As discussed above, the autonomous vehicle 102 can include one or more sensors 104, a controller or vehicle computing system 106, and one or more vehicle controls 108. The vehicle controls 108 may include, for example, one or more of a motor controller, a steering controller, or a braking controller. The vehicle computing system 106 can assist in controlling the autonomous vehicle 102. In particular, the vehicle computing system 106 can receive sensor data from the one or more sensors 104, attempt to comprehend the surrounding environment by performing various processing techniques on data collected by the sensors 104, and generate an appropriate motion path through such surrounding environment. The vehicle computing system 106 can control the one or more vehicle controls 108 to operate the autonomous vehicle 102 according to the motion path.

[0030] The vehicle computing system 106 can include one or more processors 130 and at least one memory 132. The one or more processors 130 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 132 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 132 can store data 134 and instructions 136 which are executed by the processor 130 to cause the vehicle computing system 106 to perform operations. In some implementations, the one or more processors 130 and at least one memory 132 may be comprised in one or more computing devices, such as computing device(s) 129, within the vehicle computing system 106.

[0031] In some implementations, the vehicle computing system 106 can further be connected to, or include, a positioning system 120. The positioning system 120 can determine a current geographic location of the autonomous vehicle 102. The positioning system 120 can be any device or circuitry for analyzing the position of the autonomous vehicle 102. For example, the positioning system 120 can determine actual or relative position by using a satellite navigation positioning system (e.g., a GPS system, a Galileo positioning system, the Global Navigation Satellite System (GLONASS), or the BeiDou Satellite Navigation and Positioning system), an inertial navigation system, a dead reckoning system, an IP address, triangulation and/or proximity to cellular towers or WiFi hotspots, and/or other suitable techniques for determining position. The position of the autonomous vehicle 102 can be used by various systems of the vehicle computing system 106.

[0032] As illustrated in FIG. 3A, in some embodiments, the vehicle computing system 106 can include a perception system 110, a prediction system 112, and a motion planning system 114 that cooperate to perceive the surrounding environment of the autonomous vehicle 102 and determine a motion plan to control the motion of the autonomous vehicle 102 accordingly. In some implementations, the vehicle computing system 106 can also include a feature extractor/concatenator 122 and a speed limit context awareness machine-learned model 124 that can provide data to assist in determining the motion plan to control the motion of the autonomous vehicle 102.

[0033] In particular, in some implementations, the perception system 110 can receive sensor data from the one or more sensors 104 that are coupled to or otherwise included within the autonomous vehicle 102. As examples, the one or more sensors 104 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), and/or other sensors. The sensor data can include information that describes the location of objects within the surrounding environment of the autonomous vehicle 102.

[0034] As one example, for LIDAR systems, the sensor data can include the location (e.g., in three-dimensional space relative to the LIDAR system) of a number of points that correspond to objects that have reflected a ranging laser. For example, a LIDAR system can measure distances by measuring the Time of Flight (TOF) that it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light.
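As a point of reference only, the time-of-flight arithmetic mentioned here reduces to a one-line calculation; this small helper is an illustration, not code from the application.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0


def lidar_range_m(round_trip_time_s):
    """Convert a round-trip laser time of flight into a one-way distance."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0


# A 0.5 microsecond round trip corresponds to roughly 75 meters.
print(lidar_range_m(0.5e-6))
```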

[0035] As another example, for RADAR systems, the sensor data can include the location (e.g., in three-dimensional space relative to the RADAR system) of a number of points that correspond to objects that have reflected a ranging radio wave. For example, radio waves (pulsed or continuous) transmitted by the RADAR system can reflect off an object and return to a receiver of the RADAR system, giving information about the object's location and speed. Thus, a RADAR system can provide useful information about the current speed of an object.

[0036] As yet another example, for one or more cameras, various processing techniques (e.g., range imaging techniques such as, for example, structure from motion, structured light, stereo triangulation, and/or other techniques) can be performed to identify the location (e.g., in three-dimensional space relative to the one or more cameras) of a number of points that correspond to objects that are depicted in imagery captured by the one or more cameras. Other sensor systems can identify the location of points that correspond to objects as well.

[0037] Thus, the one or more sensors 104 can be used to collect sensor data that includes information that describes the location (e.g., in three-dimensional space relative to the autonomous vehicle 102) of points that correspond to objects within the surrounding environment of the autonomous vehicle 102.

[0038] In addition to the sensor data, the perception system 110 can retrieve or otherwise obtain map data 118 that provides detailed information about the surrounding environment of the autonomous vehicle 102. The map data 118 can provide information regarding: the identity and location of different travel ways (e.g., roadways), road segments, buildings, or other items or objects (e.g., lampposts, crosswalks, curbing, etc.); the location and directions of traffic lanes (e.g., the location and direction of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular roadway or other travel way); traffic control data (e.g., the location and instructions of signage, traffic lights, or other traffic control devices); and/or any other map data that provides information that assists the vehicle computing system 106 in comprehending and perceiving its surrounding environment and its relationship thereto.

[0039] The perception system 110 can identify one or more objects that are proximate to the autonomous vehicle 102 based on sensor data received from the one or more sensors 104 and/or the map data 118. In particular, in some implementations, the perception system 110 can determine, for each object, state data that describes a current state of such object. As examples, the state data for each object can describe an estimate of the object's: current location (also referred to as position); current speed; current heading (also referred to together as velocity); current acceleration; current orientation; size/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; and/or other state information.

[0040] In some implementations, the perception system 110 may determine state data for each object over a number of iterations. In particular, the perception system 110 can update the state data for each object at each iteration. Thus, the perception system 110 can detect and track objects (e.g., vehicles, pedestrians, bicycles, and the like) that are proximate to the autonomous vehicle 102 over time.

[0041] The prediction system 112 may receive the state data from the perception system 110 and predict one or more future locations for each object based on such state data. For example, the prediction system 112 can predict where each object will be located within the next 5 seconds, 10 seconds, 20 seconds, etc. As one example, an object can be predicted to adhere to its current trajectory according to its current speed. As another example, other, more sophisticated prediction techniques or modeling can be used.

[0042] The motion planning system 114 may determine a motion plan for the autonomous vehicle 102 based at least in part on the predicted one or more future locations for the object provided by the prediction system 112 and/or the state data for the object provided by the perception system 110. Stated differently, given information about the current locations of objects and/or predicted future locations of proximate objects, the motion planning system 114 can determine a motion plan for the autonomous vehicle 102 that best navigates the autonomous vehicle 102 relative to the objects at such locations.

[0043] As one example, in some implementations, the motion planning system 114 can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle 102 based at least in part on the current locations and/or predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle 102 approaches a possible impact with another object and/or deviates from a preferred pathway (e.g., a preapproved pathway).

[0044] Thus, given information about the current locations and/or predicted future locations of objects, the motion planning system 114 can determine a cost of adhering to a particular candidate pathway. The motion planning system 114 can select or determine a motion plan for the autonomous vehicle 102 based at least in part on the cost function(s). For example, the candidate motion plan that minimizes the cost function can be selected or otherwise determined. The motion planning system 114 can provide the selected motion plan to a vehicle controller 116 that controls one or more vehicle controls 108 (e.g., actuators or other devices that control gas flow, acceleration, steering, braking, etc.) to execute the selected motion plan.
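A minimal sketch of the cost-based selection described in this paragraph, assuming a made-up plan representation and cost weighting purely for illustration; the actual cost functions used by the motion planning system 114 are not specified here.

```python
def select_motion_plan(candidate_plans, cost_fn):
    """Return the candidate motion plan with the lowest cost."""
    return min(candidate_plans, key=cost_fn)


# Hypothetical plans and a toy cost that penalizes small obstacle gaps and
# deviation from a preferred pathway.
plans = [
    {"name": "keep_lane", "min_gap_m": 4.0, "deviation_m": 0.0},
    {"name": "nudge_left", "min_gap_m": 6.0, "deviation_m": 0.5},
]


def cost(plan):
    return 10.0 / plan["min_gap_m"] + 2.0 * plan["deviation_m"]


print(select_motion_plan(plans, cost)["name"])  # "keep_lane" under these weights
```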

[0045] FIG. 3B is an example of an expanded view of the autonomous vehicle 102. As discussed above, the autonomous vehicle 102 may include the sensor 104, the vehicle controller or vehicle computing system 106, a motor controller 210, a steering controller 212, a braking controller 214, and the map database 118. The controller 106 is operably connected to each of the sensor 104, motor controller 210, steering controller 212, and braking controller 214, via any known interconnect technology. In FIG. 3B, this is illustrated as a bus 230. The vehicle controller 106 may be configured to capture multiple point cloud data sets from the sensor 104 and compare the point cloud data sets to map data in the map database 118. Based on the comparison, the controller 106 may determine a position of the vehicle 102. The vehicle controller 106 may control the position and/or speed of the vehicle 102 by issuing commands to one or more of the motor controller 210, steering controller 212, and/or braking controller 214. For example, if the controller 106 determines a speed of the truck should be increased, the controller 106 may transmit a command to the motor controller 210 indicating an increased level of fuel is to be provided to the motor. In embodiments utilizing electric motors, the vehicle controller 106 may transmit a command to the motor controller 210 indicating an increased current or voltage is to be provided to the motor. If the vehicle controller 106 determines a position of the vehicle 102 should be adjusted to the left or right, the controller 106 may send a command indicating same to the steering controller 212.

[0046] FIG. 4 is an expanded view of an example controller or vehicle computing system 106. The example controller or vehicle computing system 106 of FIG. 4 includes one or more hardware processors 130, a hardware memory or memories 132, and one or more interfaces 440. The hardware processor(s) 130, memories 132, and interfaces 440 may be operably connected via any known interconnect technology, such as a bus 430. In some aspects, instructions stored in the memory/memories 132 may configure the one or more hardware processors 130 to perform one or more of the functions discussed below to provide for autonomous control of a vehicle, such as the vehicle 102. The interface(s) 440 may provide for electronic communication between the controller 106 and one or more of the motor controller 210, steering controller 212, braking controller 214, sensor 104, and/or map database 118.

[0047] FIG. 5 shows example scenes captured by LIDAR sensor(s) of an autonomous vehicle and a map generating vehicle. While a LIDAR sensor may capture point cloud data, for ease of illustration the scenes 502, 510, and 520 are shown as two dimensional images. An autonomous vehicle sensor may capture point cloud data for the scene 502. The scene 502 may be captured, in some aspects, by the sensor 104. The scene 502 shows a relatively straight road 504, a tree 506, and a pedestrian 508. The scene 502 shows that the tree 506 is a distance D3 from the LIDAR sensor (e.g. 104) capturing the scene 502.

[0048] Point cloud data for a scene 510 may be captured by a vehicle generating map data. For example, point cloud data for the scene 510 may be captured by the sensor 204 of the vehicle 202. The scene 510 also shows the road 504 and the tree 506. The tree 506 in the scene 510 is a distance D4 from the imaging sensor 204. D4 is less than D3 in the scene 502. Thus, the scene 510 may be captured when the vehicle 202 is positioned closer to the tree than the vehicle 102 was positioned when capturing the scene 502.

[0049] Point cloud data for the scene 520 may also be captured by a vehicle generating map data. The scene 520 shows the road 504 and the tree 506. The tree 506 in the scene 520 is further from the LIDAR sensor capturing the scene 520 (e.g. a distance D4′, not shown) than in either the scene 510 or 502.

[0050] Some aspects may generate a first overlap score 552 for point cloud data representing the scenes 502 and 510, and a second overlap score 554 for point cloud data representing scenes 502 and 520. The two overlap scores 552 and 554 may be compared, with results of the comparison used to select a set of point cloud data for a scene that is closest to the scene 502 captured by the autonomous vehicle.

[0051] To generate an overlap score, a positional relationship between two scenes may be determined. For example, in some aspects, a "best fit" process may be performed so as to establish a correspondence between point cloud data representing the scene 502 and point cloud data representing the scene 510, and similarly to establish a correspondence between point cloud data representing the scene 502 and point cloud data representing the scene 520.

[0052] FIG. 6 is a visual representation of a best fit correspondence between point cloud data 502i and 510i representing the scenes 502 and 510 respectively. As shown in FIG. 6, aligning features in each of the point cloud data sets 502i and 510i may result in offsets in one or more dimensions between the point cloud data sets 502i and 510i. For example, point cloud data 502i is shown, in a simplified two dimensional view, offset to the right and below the point cloud data 510i to obtain the best fit. A size of the offsets in a first and second dimension is shown by D5 and D6 respectively. In some aspects, after the correspondence between point cloud data 502i and point cloud data 510i is established as visually indicated in FIG. 6 (in two dimensions only for ease of illustration), an overlap score may then be determined based on portions of each set of point cloud data representing a shared three-dimensional space. The shared three-dimensional space is established based on the correspondence. In some aspects, correspondence between two sets of point cloud data establishes coordinate offsets between points in each of the point clouds that represent a common point in three-dimensional space. For example, if the point cloud data 502i is offset to the right relative to point cloud data 510i by ten (10) pixels, the correspondence in the horizontal dimension will indicate that data at a position (x, y, z) in point cloud data 502i is to be compared to data at position (x+10, y, z) in point cloud data 510i, as in the sketch below.
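The offset bookkeeping in the example above can be expressed as a simple coordinate shift. The snippet below is a sketch under the assumption that the registration step has already produced per-axis offsets; it is not code from the application.

```python
import numpy as np


def shift_into_map_frame(vehicle_points, offsets_xyz):
    """Apply the per-axis offsets from the best-fit step so vehicle points can
    be compared directly against map coordinates."""
    return np.asarray(vehicle_points, dtype=float) + np.asarray(offsets_xyz, dtype=float)


# With the ten-unit horizontal offset from the example, a vehicle point at
# (x, y, z) is looked up in the map at (x + 10, y, z).
print(shift_into_map_frame([[1.0, 2.0, 0.5]], (10.0, 0.0, 0.0)))  # [[11.  2.  0.5]]
```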

[0053] In some aspects, after the correspondence illustrated in FIG. 6 is established, the portion of three-dimensional space that is shared by the two sets of point cloud data is illustrated as region 610. An overlap score for region 610 of the two point clouds may then be determined in some aspects.

[0054] FIGs. 7A-B show the two sets of point cloud data 502i and 510i respectively. Point cloud data 502i represents the scene 502 of FIG. 5 in three dimensions (only two dimensions shown in FIG. 7A), while point cloud data 510i represents scene 510 of FIG. 5 in three dimensions (only two dimensions shown in FIG. 7B). Regions 610a and 610b are shown within each of point cloud data 502i and 510i. These regions 610a-b represent the region 610 shown in FIG. 6, but contained within each of the respective point cloud data 502i and 510i.

[0055] Magnified views of portions of regions 610a and 610b are shown as 710a and 710b respectively. Portion 710a includes four points 720a-d while portion 710b includes three points 722a-c. The portions 710a-b are shown in two dimensions for ease of illustration, but the portions shown are intended to represent point cloud data in a three-dimensional space. A location within portion 710b corresponding to point 720d in region 610a is shown as 722d. The two points 720d and 722d may be considered corresponding or "matching" because they have equivalent coordinates in their respective images or point clouds after the images or point clouds are aligned, for example, via a registration process. In some aspects, if the two points have coordinates within a predetermined range of each other, they may also be considered corresponding. The predetermined range may be provided to allow for some error in alignment of point clouds. For example, in some of the disclosed embodiments, the predetermined range may represent a relatively small distance error in the alignment, such as less than any of 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 2, or 1 centimeters. In some embodiments, an overlap score may relate a number of matching points to a total number of points. Thus, in the simplified examples of FIGs. 7A-B, some embodiments would determine an overlap score of 0.75 for the two portions 710a and 710b. This approach may be extended to encompass the entirety of regions 610a-b and/or point cloud data 502i and 510i.

[0056] FIGs. 8A-B show example two dimensional representations of point cloud data used to determine an overlap score. The point cloud data 802 may be captured from an autonomous vehicle and compared to map data including the point cloud data 810. The point cloud data 802 shows a road 820, bridge 822, and building 824. The map point cloud data 810 also includes the road 820, bridge 822, and building 824. The map point cloud data 810 differs from the vehicle point cloud data 802 in that a portion of the building 826 is visible above the bridge 822 in the vehicle point cloud data 802 but not in the map point cloud data 810. In some aspects, this difference may be caused by a difference in height of the two LIDAR sensors that captured the two sets of point cloud data 802 and 810. For example, as shown in FIGs. 1 and 2, the sensor 104, mounted on a roof of the vehicle 102, and used to capture point cloud data 802, may be further from the ground (i.e. distance D1) than the sensor 204, mounted on a roof of the car 202 (i.e. distance D2) and used to capture map point cloud data 810. An overlap score based on the point cloud data 802 and 810 may be reduced due to the difference in perspective caused by the different distances D1 and D2, and the resulting building portion 826 appearing in point cloud 802 but not in point cloud data 810.

[0057] Aspects disclosed may determine whether a portion of a three-dimensional space including the building portion 826 in point cloud data 802 was obscured in point cloud data 810. In some aspects, this determination may be performed in response to determining that the portion 826 in point cloud data 802 is not present in point cloud data 810. In some aspects, if any portion of three-dimensional space including the building portion 826 was obscured in point cloud data 810, that portion may not be included in an overlap score based on the point cloud data 802 and point cloud data 810. For example, if an obscured portion of the space occupied by building portion 826 included a particular number of data points, those data points would not be included in the overlap score, in that the total number of points used in the score would not include them.

[0058] FIGs. 9A-B illustrate tracing of a ray from a laser source to a destination. FIG. 9A shows the scene 502. Point cloud data representing the scene 502 may be captured by an autonomous vehicle, such as the autonomous vehicle 102 discussed above with respect to at least FIGs. 1 and 3. FIG. 9B shows one embodiment of map data 910. The map data 910 may include point cloud data captured by a map data collection vehicle, such as the map data collection vehicle 202 discussed above with respect to FIG. 2. The map data 910 may not include the tree 506 or the pedestrian 508.

[0059] As discussed above, point cloud data representing the scene 502 may be compared to point cloud data representing the map data 910 when determining a position of an autonomous vehicle in at least some of the disclosed embodiments. The comparison of the two point cloud data sets may identify points representing the tree 506 in scene 502 that do not have corresponding points in the point cloud representing the map data 910, since the tree 506 is not present in the map data 910. These points may be considered non-matching points. In other words, corresponding or "matching" points may be two points in two respective point cloud data sets that have equivalent coordinates after the two point cloud data sets are aligned, for example, via a registration process. In some aspects, some degree of error may be provided for in determining whether two points are corresponding. For example, in some aspects, two points may be considered corresponding if they are within a first predetermined number of units in a first axis (such as an X axis), a second predetermined number of units in a second axis (such as a Y axis), and a third predetermined number of units in a third axis (such as a Z axis).

[0060] In some aspects, to determine whether each point representing the tree in scene 502 should be included in an overlap score between the two scenes 502 and 910, ray tracing data may be generated for the point cloud data representing the scene 910. The ray tracing data may be used to determine if a point exists in the scene 910's point cloud data at any position along the path of the ray (e.g. ray 920 in FIG. 9B). A source of the ray may be a laser source position of a LIDAR sensor capturing the scene 910. The ray may intersect a location in the scene 910 occupied by a non-matching point in the point cloud data for scene 502. The ray may also extend beyond the non-matching point. If the ray intersects a point in the point cloud data representing the scene 910, then the non-matching point in the scene 502 may be included in an overlap score. If the ray does not intersect any point in the point cloud data representing the scene 910, in some embodiments, the non-matching point may not be included in the overlap score.
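One way to realize the ray test described in this paragraph is to rasterize the map into occupancy voxels and step along the ray from the laser source through, and beyond, the non-matching point. The voxel size, step size, and maximum range below are assumptions for illustration; this is a sketch, not the application's implementation.

```python
import numpy as np


def ray_intersects_map(sensor_origin, non_matching_point, map_voxels,
                       voxel_size=0.5, max_range_m=120.0):
    """Sample the ray from the laser source through the non-matching point and
    report whether any sample falls in an occupied map voxel."""
    origin = np.asarray(sensor_origin, dtype=float)
    direction = np.asarray(non_matching_point, dtype=float) - origin
    direction /= np.linalg.norm(direction)
    step = 0.5 * voxel_size  # sample finer than the voxel size
    for r in np.arange(step, max_range_m, step):
        voxel = tuple(np.floor((origin + r * direction) / voxel_size).astype(int))
        if voxel in map_voxels:
            return True  # map structure along the ray; the point is included
    return False  # nothing along the ray; the point may be excluded


# Hypothetical usage: one occupied map voxel lies beyond the non-matching point.
occupied = {tuple(np.floor(np.array([6.2, 0.0, 0.0]) / 0.5).astype(int))}
print(ray_intersects_map((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), occupied))  # True
```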

[0061] FIG. 10 is an example of a comparison of point cloud data captured by an autonomous vehicle (e.g. 102) and data included in a map. FIG. 10 illustrates that for some points, both a normal vector associated with a point and the point itself match. For some additional portion of points, the locations of the points match between the two data sets but the normal vectors are different. This may occur, for example, with foliage, which may be repositioned frequently by wind or even in some aspects by passing vehicles. For some other points, there may not be a point in the map data at a location of a point in the vehicle's point cloud.

[0062] FIG. 11 is a flowchart of an example method of controlling an autonomous vehicle. The process 1100 discussed below with respect to FIG. 11 may be performed, in some aspects, by the vehicle controller 106. For example, instructions stored in the memory 132 may configure the one or more hardware processors 130 to perform one or more of the functions discussed below with respect to FIG. 11 and process 1100.

[0063] Process 1100 determines an overlap score. The overlap score indicates an amount of overlap between position data collected by a first vehicle, for example, from a LIDAR sensor, and map data. The map data may be generated prior to performance of the process 1100. The map data may be based on data obtained from a separate sensor mounted on a different second vehicle. As discussed above, differences in height between the first sensor and the second sensor may result in reduced overlap between the vehicle data and the map data. In some aspects, this may have deleterious effects if not properly compensated.

[0064] In block 1110, a point cloud is captured with a LIDAR sensor. For example, in some aspects, point cloud data representing the scene 105 may be captured by the sensor 104. The point cloud data is representative of a surrounding of an autonomous vehicle. For example, as discussed above with respect to FIG. 1, the sensor 104 may capture the scene 105 immediately in front of the vehicle 102.

[0065] In block 1120, a point is identified in the point cloud data as a non-matching point in response to the point having no corresponding point in a map. The map may also include three-dimensional data or point cloud data. The three-dimensional space represented by each of the point cloud data and the map is compared. In some aspects, at least a portion of points in the captured point cloud data will match a point in the map data. Another portion of points may not have a corresponding matching point in the map. For example, if a point in the point cloud data represents a reflection of LIDAR signals from an object at a location in the point cloud, the corresponding location in the map may be empty, or in other words, may not indicate a reflection of LIDAR data.

[0066] Corresponding or "matching" points may be two points in the captured point cloud data and the map data respectively that have equivalent coordinates after the two point cloud data sets are aligned, for example, via a registration process. In some aspects, some degree of error may be provided for in determining whether the two points are corresponding or matching. For example, in some aspects, two points may be considered corresponding if they are within a first predetermined number of units in a first axis (such as an X axis), a second predetermined number of units in a second axis (such as a Y axis), and a third predetermined number of units in a third axis (such as a Z axis). In some aspects, if the two points have coordinates within a predetermined range of each other, they may also be considered corresponding. The predetermined range may be provided to allow for some error in alignment of point clouds. For example, in some of the disclosed embodiments, the predetermined range may represent a relatively small absolute distance error in the alignment, such as less than any of 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 2, or 1 centimeters.

[0067] In some aspects, a height parameter for the map may be determined. For example, some aspects may store information relating to distance D2 with the map generated by the vehicle 202. The height parameter may be compared to a height of the one or more LIDAR sensors used to capture the point cloud in block 1110 (e.g. D1). The point-cloud data may be adjusted based on the comparison.

[0068] In some aspects, an angle of the one or more LIDAR sensors with respect to a reference plane may be determined, and compared with an angle associated with the map data. The point-cloud data may be adjusted based on the angle in some aspects. After one or more of the adjustments, the determination of whether the point is a matching point or a non-matching point may be made, as in the sketch below.
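The height and angle compensations of paragraphs [0067] and [0068] amount to translating and rotating the captured cloud before matching. The sketch below assumes sensor-frame points with Z up, a pitch about the Y axis, and example heights; the sign conventions and numeric values are illustrative assumptions, not details from the application.

```python
import numpy as np


def adjust_point_cloud(points, vehicle_sensor_height_m, map_height_param_m,
                       pitch_rad=0.0):
    """Rotate out a known sensor pitch relative to the reference plane, then
    shift the cloud by the difference between the vehicle sensor height and
    the map's height parameter."""
    pts = np.asarray(points, dtype=float)
    c, s = np.cos(-pitch_rad), np.sin(-pitch_rad)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    pts = pts @ rot_y.T                        # undo the nose-up/nose-down attitude
    pts[:, 2] += vehicle_sensor_height_m - map_height_param_m  # match map perspective
    return pts


# Example with assumed heights: vehicle LIDAR at 3.2 m (D1), map sensor at
# 2.0 m (D2), and a one-degree nose-up attitude.
adjusted = adjust_point_cloud([[10.0, 0.0, -3.2]], 3.2, 2.0, np.radians(1.0))
print(adjusted)
```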

[0069] In block 1130, a determination is made as to whether the non-matching point is used in an overlap score. The determination is based on one or more comparisons of the point cloud data and the map. For example, in some aspects, block 1130 may determine whether the corresponding position in the map is obscured in the map data. As discussed above, because of differences in perspective between a LIDAR sensor used to capture the map data and a LIDAR sensor used to capture point cloud data for an autonomous vehicle, some objects visible to the LIDAR sensor(s) of the autonomous vehicle may not be visible to the LIDAR sensor collecting data for the map.

[0070] Some aspects may generate ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a location in the map corresponding to the location of the non-matching point. If a point is at the location in the map, the non-matching point may be included in the determination of the overlap score. If there is no point at the location in the map, the non-matching point may not be included in the determination of the overlap score.

[0071] Some aspects may determine whether the non-matching point corresponds to a non-occluded point of the map, and include the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point. For example, as discussed with respect to FIGs. 8A-B, some objects, such as the building 826, may be represented by the point cloud data representing a scene captured by an autonomous vehicle, but may be obscured in point cloud data of the map. In these cases, non-matching points may not be considered in an overlap score.

[0072] In block 1140, the overlap score is determined based on the included points and the map. The overlap score relates a number of points that match the map to a total number of points included in the determination of the overlap score.

[0073] In block 1150, a position of the vehicle is determined based on the overlap score. For example, as discussed above, an overlap score may be determined for a plurality of map point cloud data sets. A set of point cloud data in the map having the highest overlap score with the vehicle point cloud data may be selected as most closely approximating the vehicle position. Offsets from the selected map point cloud data may then be computed to determine a more precise position of the vehicle relative to the selected map point cloud data.
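The selection step in this paragraph can be sketched as a search over candidate map point cloud sets, keeping the one with the highest overlap score. The candidate representation below is assumed for illustration, and score_fn stands for an overlap score routine like the sketches above; this is not code from the application.

```python
def best_candidate(vehicle_cloud, candidates, score_fn):
    """candidates: iterable of (candidate_pose, map_point_cloud) pairs.

    Returns the pose whose map point cloud best overlaps the vehicle cloud,
    along with the winning score; offsets against that map data can then be
    used to refine the vehicle position."""
    best_pose, best_score = None, float("-inf")
    for pose, map_cloud in candidates:
        score = score_fn(vehicle_cloud, map_cloud)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```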

[0074] In block 1160, the vehicle is controlled based on the determined position. For example, as discussed above, the controller 106 may send a command to one or more of the motor controller 210, steering controller 212, and braking controller 214 to control the vehicle based on the determined position. For example, if the determined position indicates the vehicle is not centered in a lane of travel, the controller 106 may send a command to the steering controller 212 to adjust the path of the vehicle to move the vehicle closer to the center of the lane. As another example, if the determined position is within a predetermined threshold distance of a curve in a road, the controller 106 may send a command to the braking controller 214 and/or motor controller 210 to reduce a speed of the vehicle.

[0075] In some aspects, process 1100 includes determining a motion plan for the autonomous vehicle. The motion plan may be based in some aspects on the overlap score. Process 1100 may include sending a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

[0076] Example 1 is a method of controlling a vehicle, comprising:

capturing point-cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors; identifying a point in the point cloud data as a non-matching point based at least on the point having no corresponding point in a map used to determine a position of the autonomous vehicle; determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map; determining the overlap score based at least partly on the determining that the non-matching point is to be used in the determination of the overlap score; determining a position of the autonomous vehicle based on the overlap score and the map; and controlling the autonomous vehicle based on the position.

[0077] In Example 2, the subject matter of Example 1 optionally includes wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: generating ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point; determining that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, including the non-matching point in the determination of the overlap score.

[0078] In Example 3, the subject matter of any one or more of Examples 1-2 optionally include wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: determining that the non-matching point corresponds to a non-occluded point of the map; and including the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

[0079] In Example 4, the subject matter of any one or more of Examples 1-3 optionally include wherein determining the overlap score further comprises relating (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

[0080] In Example 5, the subject matter of any one or more of Examples 1-4 optionally include determining a height of the one or more LIDAR sensors; and adjusting the point-cloud data based on the height, wherein determining that the non-matching point is to be used in the determination of the overlap score comprises determining that the non-matching point is to be used in the determination of the overlap score based on the adjusted point cloud data.

[0081] In Example 6, the subject matter of Example 5 optionally includes determining a height parameter for the map; comparing the height parameter to the height of the one or more LIDAR sensors; and adjusting the point-cloud data based on the comparison.

[0082] In Example 7, the subject matter of any one or more of Examples 5-6 optionally include determining an angle of the one or more LIDAR sensors with respect to a reference plane; and adjusting the point-cloud data based on the angle.
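
A minimal sketch of the point-cloud adjustment recited in Examples 5-7, assuming the adjustment amounts to a rigid transform that compensates for the LIDAR height relative to the map's height parameter and for the sensor's tilt relative to a reference plane; the function, arguments, and rotation axis are assumptions rather than details from the disclosure.

```python
import numpy as np

def adjust_point_cloud(points, sensor_height, map_height, tilt_deg=0.0):
    """Illustrative adjustment of vehicle point-cloud data before comparison
    with the map: compensate for the difference between the vehicle's LIDAR
    height and the map's height parameter, and for the sensor's tilt angle
    relative to a horizontal reference plane."""
    pts = np.asarray(points, dtype=float)

    # Rotate about the y-axis to undo the sensor's assumed tilt (pitch)
    # relative to the reference plane.
    tilt = np.radians(tilt_deg)
    rot = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
                    [0.0,          1.0, 0.0],
                    [-np.sin(tilt), 0.0, np.cos(tilt)]])
    pts = pts @ rot.T

    # Shift z so the points are expressed relative to the map's height datum.
    pts[:, 2] += sensor_height - map_height
    return pts


# Example: sensor mounted 1.9 m up and pitched 2 degrees, map datum at 1.7 m.
cloud = [[5.0, 0.0, -1.9], [10.0, 2.0, -1.8]]
print(adjust_point_cloud(cloud, sensor_height=1.9, map_height=1.7, tilt_deg=2.0))
```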

[0083] In Example 8, the subject matter of any one or more of Examples 1-7 optionally include wherein controlling the autonomous vehicle based on the position comprises determining a motion plan for the autonomous vehicle; and sending a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

[0084] Example 9 is a vehicle, comprising: one or more light detection and ranging (LIDAR) sensors; one or more hardware processors; one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform operations, the operations comprising: capturing point-cloud data representative of a surrounding of the vehicle; identifying a point in the point cloud data as a non-matching point in response to the point having no corresponding point in a map used to determine a position of the vehicle; determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map; determining the overlap score in response to the determining that the non-matching point is to be used in the determination of the overlap score; determining a position of the vehicle based on the overlap score and the map; and controlling the vehicle based on the position.

[0085] In Example 10, the subject matter of Example 9 optionally includes wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: generating ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point; determining that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, including the non-matching point in the determination of the overlap score.

[0086] In Example 11, the subject matter of any one or more of Examples 9-10 optionally include wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: determining that the non-matching point corresponds to a non-occluded point of the map; and including the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

[0087] In Example 12, the subject matter of any one or more of Examples 9-11 optionally include wherein determining the overlap score further comprises relating (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

[0088] In Example 13, the subject matter of any one or more of Examples 9-12 optionally include wherein the one or more tangible, non-transitory, computer readable media collectively store further instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform further operations, the further operations comprising: determining a height of the one or more LIDAR sensors; and adjusting the point-cloud data based on the height, wherein determining that the non-matching point is to be used in the determination of the overlap score comprises determining that the non-matching point is to be used in the determination of the overlap score based on the adjusted point cloud data.

[0089] In Example 14, the subject matter of Example 13 optionally includes wherein the one or more tangible, non-transitory, computer readable media collectively store further instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform further operations, the further operations comprising: determining a height parameter for the map; comparing the height parameter to the height of the one or more LIDAR sensors; and adjusting the point-cloud data based on the comparison.

[0090] In Example 15, the subject matter of any one or more of Examples 13-14 optionally include wherein the one or more tangible, non-transitory, computer readable media collectively store further instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform further operations, the further operations comprising: determining an angle of the one or more LIDAR sensors with respect to a reference plane; and adjusting the point-cloud data based on the angle.

[0091] In Example 16, the subject matter of any one or more of Examples 9-15 optionally include wherein controlling the autonomous vehicle based on the position comprises determining a motion plan for the autonomous vehicle; and sending a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

[0092] Example 17 is a computing system for a vehicle, comprising: one or more hardware processors; one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the one or more hardware processors cause the one or more hardware processors to perform operations, the operations comprising: capturing point-cloud data representative of a surrounding of the vehicle with one or more LIDAR sensors; identifying a point in the point cloud data as a non-matching point in response to the point having no corresponding point in a map used to determine a position of the vehicle; determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map; determining the overlap score in response to the determining that the non-matching point is to be used in the determination of the overlap score; determining a position of the vehicle based on the overlap score and the map; and controlling the vehicle based on the position.

[0093] In Example 18, the subject matter of Example 17 optionally includes wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: generating ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point; determining that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, including the non-matching point in the determination of the overlap score.

[0094] In Example 19, the subject matter of any one or more of Examples 17-18 optionally include wherein determining that the non-matching point is to be used in a determination of the overlap score comprises: determining that the non-matching point corresponds to a non-occluded point of the map; and including the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

[0095] In Example 20, the subject matter of any one or more of Examples 17-19 optionally include wherein determining the overlap score further comprises relating (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

[0096] Example 21 is an apparatus for controlling a vehicle, comprising: means for capturing point-cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors; means for identifying a point in the point cloud data as a non-matching point based at least on the point having no corresponding point in a map used to determine a position of the autonomous vehicle; means for determining that the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map; means for determining the overlap score based at least partly on the determining that the non-matching point is to be used in the determination of the overlap score; means for determining a position of the autonomous vehicle based on the overlap score and the map; and means for controlling the autonomous vehicle based on the position.

[0097] In Example 22, the subject matter of Example 21 optionally includes wherein the means for determining that the non-matching point is to be used in a determination of the overlap score is configured to: generate ray-tracing data representing a tracing of a ray from a laser source of the one or more LIDAR sensors to a destination point that intersects the non-matching point; determine that the ray intersects a reference point in the map; and in response to determining that the ray intersects the reference point in the map, include the non-matching point in the determination of the overlap score.

[0098] In Example 23, the subject matter of any one or more of Examples 21-22 optionally include wherein the means for determining that the non-matching point is to be used in a determination of the overlap score is configured to: determine that the non-matching point corresponds to a non-occluded point of the map; and include the non-matching point in the determination of the overlap score in response to the non-matching point corresponding to a non-occluded point.

[0099] In Example 24, the subject matter of any one or more of Examples 21-23 optionally include wherein the means for determining the overlap score is configured to relate (i) a number of points that match to the map to (ii) a total number of points included in the determination of the overlap score.

[00100] In Example 25, the subject matter of any one or more of Examples 21-24 optionally include means for determining a height of the one or more LIDAR sensors; and means for adjusting the point-cloud data based on the height, wherein the means for determining that the non-matching point is to be used in the determination of the overlap score is configured to determine that the non-matching point is to be used in the determination of the overlap score based on the adjusted point cloud data.

[00101] In Example 26, the subject matter of Example 25 optionally includes means for determining a height parameter for the map; means for comparing the height parameter to the height of the one or more LIDAR sensors; and means for adjusting the point-cloud data based on the comparison.

[00102] In Example 27, the subject matter of any one or more of Examples 25-26 optionally include means for determining an angle of the one or more LIDAR sensors with respect to a reference plane; and means for adjusting the point-cloud data based on the angle.

[00103] In Example 28, the subject matter of any one or more of Examples 21-27 optionally include wherein the means for controlling the autonomous vehicle based on the position is configured to determine a motion plan for the autonomous vehicle, and to send a signal to one or more of a steering controller, engine controller, or braking controller based on the motion plan.

[00104] As used herein, the term "machine-readable medium," "computer-readable medium," or the like may refer to any component, device, or other tangible medium able to store instructions and data temporarily or permanently. Examples of such media may include, but are not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)), and/or any suitable combination thereof. The term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term "machine-readable medium" may also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a "machine-readable medium" may refer to a single storage apparatus or device, as well as "cloud-based" storage systems or storage networks that include multiple storage apparatus or devices. The term "machine-readable medium" excludes transitory signals per se.

[00105] Where a phrase similar to "at least one of A, B, or C," "at least one of A, B, and C," "one or more of A, B, or C," or "one or more of A, B, and C" is used, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or any combination of the elements A, B, and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C may be present.

[00106] Changes and modifications may be made to the disclosed embodiments without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure, as expressed in the following claims.

[00107] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings that form a part of this document: Copyright 2017, Uber, Inc., All Rights Reserved.