

Title:
MULTIPATH CORRECTION FOR REFLECTIONS
Document Type and Number:
WIPO Patent Application WO/2022/220969
Kind Code:
A1
Abstract:
Ranging and detection data is processed to identify and correct for a multipath reflection. A set of ranging and detection points is obtained. A first point under test from the set of ranging and detection points is selected. The first point under test is determined to be a multipath reflection. A first point of reflection on a surface of a surface model is determined. A location of the first point under test is corrected based on the first point of reflection on the surface.

Inventors:
LI BO (US)
CURRY JAMES (US)
WANG SHAOGANG (US)
Application Number:
PCT/US2022/020158
Publication Date:
October 20, 2022
Filing Date:
March 14, 2022
Assignee:
AURORA OPERATIONS INC (US)
International Classes:
G01S7/40; G01S13/89; G01S13/931
Foreign References:
US20170031003A12017-02-02
EP3588128A12020-01-01
KR101790864B12017-10-26
KR101792114B12017-11-02
US20120039422A12012-02-16
US20080158058A12008-07-03
Attorney, Agent or Firm:
SUEOKA, Greg T. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising: obtaining, using one or more processors, a set of ranging and detection points; selecting, using the one or more processors, a first point under test from the set of ranging and detection points; determining, using the one or more processors, that the first point under test is a multipath reflection; determining, using the one or more processors, a first point of reflection on a surface of a surface model; and correcting, using the one or more processors, a location of the first point under test based on the first point of reflection on the surface.

2. The method of claim 1, further comprising: selecting a second point under test from the set of ranging and detection points; determining that the second point under test is a second multipath reflection; determining a second point of reflection on the surface of the surface model; and correcting a location of the second point under test based on the second point of reflection on the surface.

3. The method of claim 1, wherein determining the first point of reflection on the surface uses one of a binary search and a linear step progression.

4. The method of claim 1, further comprising: transforming the first point under test and the surface model into a common coordinate system.

5. The method of claim 1, wherein correcting the location of the first point under test based on the first point of reflection on the surface comprises: determining a ground surface plane at the first point of reflection on the surface; and mirroring the first point under test across the ground surface plane at the first point of reflection.

6. The method of claim 5, wherein determining the ground surface plane at the first point of reflection on the surface comprises: querying a neighborhood of the first point of reflection on the surface; and computing a normal associated with the ground surface plane based on the neighborhood of the first point of reflection on the surface.

7. The method of claim 1, wherein the first point under test is determined to be the multipath reflection when the first point under test is determined to be beyond the surface of the surface model.

8. The method of claim 1, wherein the first point under test is determined to be the multipath reflection when an altitude of the first point under test is determined to be below an altitude of the surface of the surface model, wherein the surface model is a model of ground surface and based on a three-dimensional map.

9. A system comprising one or more processors and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations: obtain a set of ranging and detection points; select a first point under test from the set of ranging and detection points; determine that the first point under test is a multipath reflection; determine a first point of reflection on a surface of a surface model; and correct a location of the first point under test based on the first point of reflection on the surface.

10. The system of claim 9, wherein the operations further comprise: select a second point under test from the set of ranging and detection points; determine that the second point under test is a second multipath reflection; determine a second point of reflection on the surface of the surface model; and correct a location of the second point under test based on the second point of reflection on the surface.

11. The system of claim 9, wherein determining the first point of reflection on the surface uses one of a binary search and a linear step progression.

12. The system of claim 9, wherein the operations further comprise: transform the first point under test and the surface model into a common coordinate system.

13. The system of claim 9, wherein the operations to correct the location of the first point under test based on the first point of reflection on the surface further comprise: determine a ground surface plane at the first point of reflection on the surface; and mirror the first point under test across the ground surface plane at the first point of reflection.

14. The system of claim 13, wherein the operations to determine the ground surface plane at the first point of reflection on the surface further comprise: query a neighborhood of the first point of reflection on the surface; and compute a normal associated with the ground surface plane based on the neighborhood of the first point of reflection on the surface.

15. The system of claim 9, wherein the first point under test is determined to be the multipath reflection when the first point under test is determined to be beyond the surface of the surface model.

Description:
MULTIPATH CORRECTION FOR REFLECTIONS

BACKGROUND

[0001] A challenge to automotive detection and ranging (e.g., radio detection and ranging, “RADAR,” and light detection and ranging, “LIDAR”) arises from multipath. Multipath is defined relative to the direct path between the sensor and the object. Multipath is caused by an electromagnetic signal (e.g., light in the case of LIDAR, or a radio wave in the case of RADAR) reflecting off more than one object before being received by the sensor. Automotive RADARs, for example, experience severe multipath effects caused by reflections off the ground, guardrails, other vehicles, etc. One major source of RADAR multipath is the ground reflection, which produces RADAR points underground. Some approaches eliminate or ignore points produced by the multipath. For example, some approaches gate out underground points during association. However, problems with such an approach include (1) that a large amount of information and bandwidth are wasted, (2) that some objects are occluded and only visible via multipath reflection, so ignoring such points causes false negatives, and (3) that other points produced by multipath may cause false positives. A particular problem is detecting and correcting for multipath reflections.

SUMMARY

[0002] This specification relates to methods and systems for detecting and correcting a multipath reflection. According to one aspect of the subject matter described in this disclosure, a method includes obtaining, using one or more processors, a set of ranging and detection points; selecting, using the one or more processors, a first point under test from the set of ranging and detection points; determining, using the one or more processors, that the first point under test is a multipath reflection; determining, using the one or more processors, a first point of reflection on a surface of a surface model; and correcting, using the one or more processors, a location of the first point under test based on the first point of reflection on the surface.

[0003] In general, another aspect of the subject matter described in this disclosure includes a system comprising one or more processors and memory operably coupled with the one or more processors, wherein the memory stores instructions that, in response to the execution of the instructions by one or more processors, cause the one or more processors to perform the following operations of obtaining a set of ranging and detection points; selecting a first point under test from the set of ranging and detection points; determining that the first point under test is a multipath reflection; determining a first point of reflection on a surface of a surface model; and correcting a location of the first point under test based on the first point of reflection on the surface.

[0004] Other implementations of one or more of these aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.

[0005] These and other implementations may each optionally include one or more of the following features. For instance, the method further comprises selecting a second point under test from the set of ranging and detection points; determining that the second point under test is a second multipath reflection; determining a second point of reflection on the surface of the surface model; and correcting a location of the second point under test based on the second point of reflection on the surface. For instance, features may also include determining the first point of reflection on the surface uses a binary search. For instance, features may also include determining the first point of reflection on the surface uses a linear step progression. For instance, the method further comprises transforming the first point under test and the surface model into a common coordinate system. For instance, the method for determining the ground surface plane at the first point of reflection on the surface further comprises querying a neighborhood of the first point of reflection on the surface; and computing a normal associated with the ground surface plane based on the neighborhood of the first point of reflection on the surface. For instance, features may also include that the set of ranging and detection points includes points from one or more of a light detection and ranging (LIDAR) sensor and radio detection and ranging (RADAR) sensor. For instance, features may also include that the first point under test is determined to be the multipath reflection when the first point under test is determined to be beyond the surface of the surface model. For instance, features may also include that the first point under test is determined to be the multipath reflection when an altitude of the first point under test is determined to be below an altitude of the surface of the surface model, wherein the surface model is a model of ground surface and based on a three-dimensional map.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] These and other aspects and features of the present implementations will become apparent upon review of the following description of specific implementations in conjunction with the accompanying figures, wherein:

[0007] FIG. 1 is a block diagram illustrating an example of a hardware and software environment for an autonomous vehicle in accordance with some implementations.

[0008] FIG. 2 is a block diagram of a multipath correction engine in accordance with some implementations.

[0009] FIG. 3 is a diagram illustrating an example of the multipath problem in accordance with some implementations.

[0010] FIG. 4 is a block diagram illustrating an example data flow in accordance with some implementations.

[0011] FIG. 5 is a block diagram illustrating an example flowchart for correcting for a reflection in accordance with some implementations.

[0012] FIG. 6 is a block diagram illustrating another example flowchart for correcting for a reflection in accordance with some implementations.

[0013] FIG. 7 illustrates a flowchart of a method 700 for determining a point of reflection in accordance with some implementations.

DETAILED DESCRIPTION

[0014] Implementations of the disclosure are generally related to the use of a surface model to identify and correct multipath reflections. A point beyond (e.g., under or behind) the surface of the surface model is identified as resulting from a multipath reflection. When a point is identified as a result of multipath reflection, in some implementations, the point is corrected: for example, the point of reflection on the surface model is determined and the point is mirrored through an estimated surface plane at the point of reflection.

Vehicle

[0015] Referring to the drawings, wherein like numbers denote like parts throughout the several views, FIG. 1 illustrates an example hardware and software environment for an autonomous vehicle within which various techniques disclosed herein may be implemented. The vehicle 100, for example, may include a powertrain 102 including a prime mover 104 powered by an energy source 106 and capable of providing power to a drivetrain 108, as well as a control system 110 including a direction control 112, a powertrain control 114, and a brake control 116. The vehicle 100 may be implemented as any number of different types of vehicles, including vehicles capable of transporting people and/or cargo, and capable of traveling by land, by sea, by air, underground, undersea, and/or in space, and it will be appreciated that the aforementioned components 102-116 may vary widely based upon the type of vehicle within which these components are utilized.

[0016] For simplicity, the implementations discussed hereinafter will focus on a wheeled land vehicle such as a car, van, truck, bus, etc. In such implementations, the prime mover 104 may include one or more electric motors and/or an internal combustion engine (among others). The energy source 106 may include, for example, a fuel system (e.g., providing gasoline, diesel, hydrogen, etc.), a battery system, solar panels or other renewable energy source, and/or a fuel cell system. The drivetrain 108 includes wheels and/or tires along with a transmission and/or any other mechanical drive components suitable for converting the output of the prime mover 104 into vehicular motion, as well as one or more brakes configured to controllably stop or slow the vehicle 100 and direction or steering components suitable for controlling the trajectory of the vehicle 100 (e.g., a rack and pinion steering linkage enabling one or more wheels of the vehicle 100 to pivot about a generally vertical axis to vary an angle of the rotational planes of the wheels relative to the longitudinal axis of the vehicle). In some implementations, combinations of powertrains and energy sources may be used (e.g., in the case of electric/gas hybrid vehicles), and in other implementations multiple electric motors (e.g., dedicated to individual wheels or axles) may be used as a prime mover. In the case of a hydrogen fuel cell implementation, the prime mover 104 may include one or more electric motors and the energy source 106 may include a fuel cell system powered by hydrogen fuel.

[0017] The direction control 112 may include one or more actuators and/or sensors for controlling and receiving feedback from the direction or steering components to enable the vehicle 100 to follow a desired trajectory. The powertrain control 114 may be configured to control the output of the powertrain 102, e.g., to control the output power of the prime mover 104, to control a gear of a transmission in the drivetrain 108, etc., thereby controlling a speed and/or direction of the vehicle 100. The brake control 116 may be configured to control one or more brakes that slow or stop vehicle 100, e.g., disk or drum brakes coupled to the wheels of the vehicle.

[0018] Other vehicle types, including but not limited to airplanes, space vehicles, helicopters, drones, military vehicles, all-terrain or tracked vehicles, ships, submarines, construction equipment, etc., will necessarily utilize different powertrains, drivetrains, energy sources, direction controls, powertrain controls and brake controls. Moreover, in some implementations, some of the components can be combined, e.g., where directional control of a vehicle is primarily handled by varying an output of one or more prime movers. Therefore, implementations disclosed herein are not limited to the particular application of the herein-described techniques in an autonomous wheeled land vehicle.

[0019] In the illustrated implementation, full or semi-autonomous control over the vehicle 100 is implemented in a vehicle control system 120, which may include one or more processors 122 and one or more memories 124, with each processor 122 configured to execute program code instructions 126 stored in a memory 124. The processor(s) can include, for example, graphics processing unit(s) (“GPU(s)”) and/or central processing unit(s) (“CPU(s)”).

[0020] Sensors 130 may include various sensors suitable for collecting information from a vehicle’s surrounding environment for use in controlling the operation of the vehicle 100. For example, sensors 130 can include one or more detection and ranging sensors (e.g., RADAR (Radio Detection and Ranging) sensor 134, LIDAR (Light Detection and Ranging) sensor 136), a 3D positioning sensor 138, e.g., a satellite navigation system such as GPS (Global Positioning System), GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema, or Global Navigation Satellite System), BeiDou Navigation Satellite System (BDS), Galileo, Compass, etc. The 3D positioning sensors 138 can be used to determine the location of the vehicle on the Earth using satellite signals. The sensors 130 can optionally include a camera 140 and/or an IMU (inertial measurement unit) 142. The camera 140 can be a monographic or stereographic camera and can record still and/or video images. The IMU 142 can include multiple gyroscopes and accelerometers capable of detecting linear and rotational motion of the vehicle 100 in three directions. One or more encoders 144, such as wheel encoders, may be used to monitor the rotation of one or more wheels of vehicle 100.

[0021] The outputs of sensors 130 may be provided to a set of control subsystems 150, including a localization 152 subsystem, a perception 154 subsystem, a planning subsystem 156, and a control subsystem 158. The localization 152 subsystem is principally responsible for precisely determining the location and orientation (also sometimes referred to as “pose” or “pose estimation”) of the vehicle 100 within its surrounding environment, and generally within some frame of reference. In some implementations, the pose is stored within the memory 124 as localization data 127. In some implementations, a surface model is generated from a high-definition map, and stored within the memory 124 as surface model data 128. In some implementations, the detection and ranging sensors store their sensor data in the memory 124, e.g., a RADAR point cloud is stored as RADAR data 129. In some implementations, calibration data 125 is stored in the memory 124. The perception 154 subsystem is principally responsible for detecting, tracking, and/or identifying objects within the environment surrounding vehicle 100. As described in more detail below with reference to FIG. 2, in some implementations, perception 154 comprises a multipath correction engine 202 to detect and correct for a multipath reflection.

[0022] A machine learning model in accordance with some implementations can be utilized in tracking objects. The planning subsystem 156 is principally responsible for planning a trajectory or a path of motion for vehicle 100 over some timeframe given a desired destination as well as the static and moving objects within the environment. A machine learning model in accordance with some implementations can be utilized in planning a vehicle trajectory. The control subsystem 158 is principally responsible for generating suitable control signals for controlling the various controls in the vehicle control system 120 in order to implement the planned trajectory of the vehicle 100. Similarly, a machine learning model can be utilized to generate one or more signals to control the autonomous vehicle 100 to implement the planned trajectory.

[0023] It will be appreciated that the collection of components illustrated in FIG. 1 for the vehicle control system 120 is merely one example. Individual sensors may be omitted in some implementations. Additionally, or alternatively, in some implementations, multiple sensors of the same types illustrated in FIG. 1 may be used for redundancy and/or to cover different regions around a vehicle. Moreover, there may be additional sensors of other types beyond those described above to provide actual sensor data related to the operation and environment of the wheeled land vehicle. Likewise, different types and/or combinations of control subsystems may be used in other implementations. Further, while subsystems 152-158 are illustrated as being separate from processor 122 and memory 124, it will be appreciated that in some implementations, some or all of the functionality of a subsystem 152-158 may be implemented with program code instructions 126 resident in one or more memories 124 and executed by one or more processors 122, and that these subsystems 152-158 may in some instances be implemented using the same processor(s) and/or memory. Subsystems may be implemented at least in part using various dedicated circuit logic, various processors, various field programmable gate arrays (“FPGA”), various application-specific integrated circuits (“ASIC”), various real time controllers, and the like. As noted above, multiple subsystems may utilize circuitry, processors, sensors, and/or other components. Further, the various components in the vehicle control system 120 may be networked in various manners.

[0024] In some implementations, the vehicle 100 may also include a secondary vehicle control system (not illustrated), which may be used as a redundant or backup control system for the vehicle 100. In some implementations, the secondary vehicle control system may be capable of fully operating the autonomous vehicle 100 in the event of an adverse event in the vehicle control system 120, while in other implementations, the secondary vehicle control system may only have limited functionality, e.g., to perform a controlled stop of the vehicle 100 in response to an adverse event detected in the primary vehicle control system 120. In still other implementations, the secondary vehicle control system may be omitted.

[0025] In general, an innumerable number of different architectures, including various combinations of software, hardware, circuit logic, sensors, networks, etc. may be used to implement the various components illustrated in FIG. 1. Each processor may be implemented, for example, as a microprocessor and each memory may represent the random-access memory (“RAM”) devices comprising a main storage, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc. In addition, each memory may be considered to include memory storage physically located elsewhere in the vehicle 100, e.g., any cache memory in a processor, as well as any storage capacity used as a virtual memory, e.g., as stored on a mass storage device or another computer controller. One or more processors 122 illustrated in FIG. 1, or entirely separate processors, may be used to implement additional functionality in the vehicle 100 outside of the purposes of autonomous control, e.g., to control entertainment systems, to operate doors, lights, convenience features, correct for multipath reflections, etc.

[0026] In addition, for additional storage, the vehicle 100 may include one or more mass storage devices, e.g., a removable disk drive, a hard disk drive, a direct access storage device (“DASD”), an optical drive (e.g., a CD drive, a DVD drive, etc.), a solid state storage drive (“SSD”), network attached storage, a storage area network, and/or a tape drive, among others.

[0027] Furthermore, the vehicle 100 may include a user interface 118 to enable vehicle 100 to receive a number of inputs from and generate outputs for a user or operator, e.g., one or more displays, touchscreens, voice and/or gesture interfaces, buttons and other tactile controls, etc. Otherwise, user input may be received via another computer or electronic device, e.g., via an app on a mobile device or via a web interface.

[0028] Moreover, the vehicle 100 may include one or more network interfaces, e.g., network interface 162, suitable for communicating with one or more networks 176 to permit the communication of information with other computers and electronic devices, including, for example, a central service, such as a cloud service, from which the vehicle 100 receives information including trained machine learning models and other data for use in autonomous control thereof. The one or more networks 176, for example, may be a communication network and include a wide area network (“WAN”) such as the Internet, one or more local area networks (“LANs”) such as Wi-Fi LANs, mesh networks, etc., and one or more bus subsystems. The one or more networks 176 may optionally utilize one or more standard communication technologies, protocols, and/or inter-process communication techniques. In some implementations, data collected by the one or more sensors 130 can be uploaded to a computing system 172 via the network 176 for additional processing.

[0029] In the illustrated implementation, the vehicle 100 may communicate via the network 176 and signal line 178 with a computing system 172. In some implementations, the computing system 172 is a cloud-based computing device. As described below in more detail with reference to FIG. 2, the multipath correction engine 202 is included in the perception 154 subsystem of the vehicle 100. In some implementations not shown in FIG. 1, the multipath correction engine 202 may be configured and executed on a combination of the computing system 172 and the vehicle control system 120 of the vehicle 100. In other implementations, either the computing system 172 or the vehicle control system 120 of the vehicle 100 alone executes the functionality of the multipath correction engine 202.

[0030] Each processor illustrated in FIG. 1, as well as various additional controllers and subsystems disclosed herein, generally operates under the control of an operating system and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc., as will be described in greater detail below. Moreover, various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer (e.g., computing system 172) coupled to vehicle 100 via network 176, e.g., in a distributed, cloud-based, or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers and/or services over a network.

[0031] In general, the routines executed to implement the various implementations described herein, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, will be referred to herein as “program code.” Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices, and that, when read and executed by one or more processors, perform the steps necessary to execute steps or elements embodying the various aspects of the present disclosure. Moreover, while implementations have been and hereinafter are described in the context of fully functioning computers and systems, it will be appreciated that the various implementations described herein are capable of being distributed as a program product in a variety of forms, and that implementations can be implemented regardless of the particular type of computer readable media used to actually carry out the distribution.

[0032] Examples of computer readable media include tangible, non-transitory media such as volatile and non-volatile memory devices, floppy and other removable disks, solid state drives, hard disk drives, magnetic tape, and optical disks (e.g., CD-ROMs, DVDs, etc.) among others.

[0033] In addition, various program code described hereinafter may be identified based upon the application within which it is implemented in a specific implementation. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the present disclosure should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, APIs, applications, applets, etc.), it should be appreciated that the present disclosure is not limited to the specific organization and allocation of program functionality described herein.

[0034] The example environment illustrated in FIG. 1 is not intended to limit implementations disclosed herein. Indeed, other alternative hardware and/or software environments may be used without departing from the scope of implementations disclosed herein.

Multipath Correction Engine 202

[0035] As described above, the perception 154 subsystem is principally responsible for detecting, tracking, and/or identifying objects within the environment surrounding vehicle 100. In some implementations, a multipath correction engine 202 to detect and correct for a multipath reflection is included in the perception 154 subsystem, as is illustrated in FIG. 2. However, in other implementations, which are not shown, the features and functionality of the multipath correction engine 202 or its subcomponents may be located and/or executed on the vehicle 100, on the computing system 172, or a combination thereof.

[0036] Referring to FIG. 2, a block diagram of a multipath correction engine in accordance with some implementations is illustrated. In the illustrated implementation, the multipath correction engine 202 comprises a pose estimation receiver 222, a surface model receiver 224, a RADAR data receiver 226, a calibration data receiver 228, a coordinate transformer 230, a reflection detector 232, a point of reflection determiner 234, a reflection surface plane estimator 236, and a correction determination and application engine 238, which are described in detail below.

[0037] The pose estimation receiver 222 receives a pose estimation. As described above, the localization 152 subsystem determines the location and orientation (also sometimes referred to as “pose” or “pose estimation”) of the vehicle 100. In some implementations, the pose estimation receiver 222 receives the pose estimation from the localization 152 subsystem. In some implementations, the pose is stored within the memory 124 as localization data 127, and the pose estimation receiver 222 receives the pose estimation by retrieving the pose estimation from the memory 124.

[0038] The surface model receiver 224 receives a surface model. The surface model models the surface(s) off which detection and ranging signal may have reflected. For example, a surface model may include a surface model of the ground surface between the vehicle 100 and the object(s) detected by the detection and ranging sensor(s).

[0039] For clarity and convenience, the description herein frequently refers to an example in which the detection and ranging sensor(s) are a RADAR 134 sensor, the surface being modeled is that of the road surface, and the road surface creates a multipath reflection so that at least a subset of the RADAR data 129 received by the RADAR 134 sensor is inaccurate as a result of the multipath reflection from the road surface. However, it should be recognized that this is merely an illustrative, example scenario, that the multipath correction engine 202 may be adapted to use other ranging and detection technologies using other forms of electromagnetic radiation (e.g., LIDAR and light rather than RADAR and radio waves), and that such adaptations are within the scope of this disclosure. It should further be recognized that the ground, or road, surface is merely an illustrative, example scenario, and that the multipath correction engine 202 may be adapted to correct for multipath reflections off of other surfaces, e.g., a guard rail or Jersey barrier, a sound wall, other structures, another vehicle, etc.

[0040] In some implementations, the surface model receiver 224 receives a surface model that has been previously generated. In other implementations, the surface model receiver 224 receives a surface model by generating the surface model ad hoc from a three-dimensional map that includes surface information. In some implementations, the three-dimensional map is a high-definition map of the type used by self-driving vehicles, which provides the location of the ground surface (in addition to other information such as signs, traffic lanes, etc.). In some implementations, the three-dimensional map may be generated dynamically by the vehicle 100 itself as the vehicle drives, while in other implementations the three-dimensional map may include historic information (e.g., from other vehicles that have driven the area recently).

[0041] The RADAR data receiver 226 receives RADAR data 129 generated by the RADAR 134 sensor. As mentioned above, the example described herein refers to RADAR, but the description may be adapted to other detection and ranging technologies, such as LIDAR. The RADAR data 129 may occasionally be referred to as a point cloud, and comprises multiple points, which are associated with one or more objects detected by the RADAR 134 sensor.

[0042] The calibration data receiver 228 receives calibration data describing the relationship of the ranging and detection sensor(s) to the vehicle. For example, the calibration data may describe the location of a RADAR 134 sensor in relation to the center of the rear axle, or any other reference point, on the vehicle 100. In some implementations, the calibration information may be set at the factory. In some implementations, the calibration information may be modified or recalibrated, e.g., when a ranging and detection sensor or a component to which one is mounted is replaced.

[0043] The coordinate transformer 230 transforms data into a common coordinate system. Depending on the implementation, the various forms of raw data may use different coordinate systems and/or units. For example, as a non-limiting example of the possible variety of coordinate systems, consider that the vehicle pose estimation may use the XYZ Cartesian coordinate system or a homogeneous UVW inverse spherical coordinate system, while the surface model 128 uses the XYZ Cartesian coordinate system, and the ranging and detection (RADAR or LIDAR) point cloud may use a polar or spherical coordinate system. Use of a common coordinate system may beneficially simplify subsequent calculations and determinations. In one implementation, the common coordinate frame is the Cartesian coordinate system, while in other implementations, a different coordinate system may be the common coordinate system.
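
As a concrete illustration of the transform chain described in paragraph [0043], the following Python sketch converts a spherical RADAR return into a common world Cartesian frame. This is a minimal sketch: the function name and the 4x4 homogeneous-matrix interfaces for the calibration and pose transforms are assumptions for illustration, not an API from this disclosure.

    import numpy as np

    def radar_point_to_world(rng, azimuth, elevation,
                             sensor_to_vehicle, vehicle_to_world):
        """Transform one RADAR return (range, azimuth, elevation in the
        sensor's spherical frame) into a common world Cartesian frame.
        sensor_to_vehicle: 4x4 homogeneous matrix from calibration data 125.
        vehicle_to_world:  4x4 homogeneous matrix from the vehicle pose."""
        # Spherical -> Cartesian in the sensor frame.
        x = rng * np.cos(elevation) * np.cos(azimuth)
        y = rng * np.cos(elevation) * np.sin(azimuth)
        z = rng * np.sin(elevation)
        p_sensor = np.array([x, y, z, 1.0])  # homogeneous coordinates
        # Chain the calibration and pose transforms into the common frame.
        return (vehicle_to_world @ sensor_to_vehicle @ p_sensor)[:3]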

[0044] To summarize and simplify, the vehicle pose estimation provides the orientation and location of the vehicle 100. The calibration data 125 provides the location of the ranging and detection sensor, i.e., the RADAR 134 sensor of the example described herein, on the vehicle 100, e.g., how high off the ground and where on the vehicle the sensor is mounted. The surface model describes the surface(s) around the vehicle 100. Specifically, in the detailed example described herein, the surface map is that of the road surface. However, the description herein could generate a surface map of other surfaces, such as buildings, guardrails, sound walls, etc., and be used to identify and correct for reflections off of such surfaces. The ranging and detection data, RADAR data points as described in the example herein, describes what that sensor has detected or “sees.” The coordinate transformer 230 obtains these data and transforms them into a common coordinate system, placing the RADAR 134 sensor and the RADAR points in spatial relation to the ground surface (as modeled).

[0045] For clarity and convenience, the features and functionality of the reflection detector 232, point of reflection determiner 234, reflection surface plane estimator 236, and correction determination and application engine 238 are described with reference to the example scenario illustrated in FIG. 3.

[0046] Referring now to FIG. 3, detection and ranging sensor 302 (e.g., a RADAR 134 sensor) emits an electromagnetic wave illustrated by dashed line 332, which reflects off the point 312a. That reflection, which is illustrated by dashed line 334a, is not directly received by the sensor 302. Rather, the path of 334a intersects the ground as modeled by surface model 322 at the point of reflection 308, which causes another reflection illustrated by dashed line 336, which is received by the sensor 302. Because the sensor 302 received the return signal along dashed line 336 and assumes that the return signal followed a direct path with no reflection(s), the sensor data (e.g., RADAR data 129 point) indicates that there is an object at 312b (not at 312a).

[0047] The reflection detector 232 determines whether a point is a result of a multipath reflection. In some implementations, a point is a result of multipath when it is determined to be located beneath the surface of the surface model. In some implementations, the reflection detector 232 selects a point under test from the set of ranging and detection points (e.g., point 312b from a RADAR point cloud), determines that point’s altitude relative to the altitude of the ground surface, and determines whether the point is a multipath reflection. The reflection detector 232 then proceeds to the next point in the detection and ranging data set.
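
A minimal sketch of this below-surface test, assuming a hypothetical surface_altitude(x, y) query into the surface model (the disclosure does not specify the query interface):

    def is_multipath_candidate(point_xyz, surface_altitude):
        """Flag a ranging point as a multipath reflection when its altitude
        is below the modeled ground surface at the same (x, y) location.
        surface_altitude is an assumed callable: (x, y) -> ground z."""
        x, y, z = point_xyz
        return z < surface_altitude(x, y)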

[0048] In some implementations, the reflection detector 232 determines that a point under test is a multipath reflection when the altitude of the point under test is below that of the surface of the surface model. In some implementations, the reflection detector 232 assigns to the point under test a probability of the point being a multipath reflection point, e.g., assuming a Gaussian distribution. Referring again to FIG. 3, as described above, the sensor 302 detected point 312b due to the multipath reflection off of point 312a. However, sensor 302 may have some degree of error or uncertainty. For example, sensor 302 may have an uncertainty in resolution of plus or minus a few degrees (or portions thereof) with regard to elevation. Therefore, the detected point 312b could be as low as 304b or as high as 310b (if it were not the result of a multipath reflection and located below the surface). With low, glancing angles or high uncertainty, accurately identifying whether or not the point should be identified as a reflection may become more difficult. In implementations where a Gaussian distribution is assumed and points under test are assigned a probability that the detected point 312b is a multipath reflection, a threshold may be set (and modified in some implementations, e.g., using machine learning) to accurately identify which points, if any, are a result of multipath reflection.

[0049] In some implementations, the point may be identified, or classified, as a multipath reflection, which may be corrected as described below, or not. In other implementations, based on the probability that a point is a multipath reflection, the point may be identified as a multipath reflection (e.g., the probability satisfies a first threshold), which may be corrected as described below; identified as not a multipath reflection (e.g., the probability satisfies a second threshold); or identified as uncertain (e.g., the probability is between the first and second thresholds). Points where it is uncertain whether they are a result of multipath may be treated differently in different implementations; for example, they may be subject to additional processing, or omitted or removed from the data set.
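
The probabilistic variant described in paragraphs [0048] and [0049] might look like the sketch below. The Gaussian assumption and the two-threshold classification come from the text; the function names and threshold values are illustrative placeholders:

    import math

    def multipath_probability(point_z, ground_z, sigma_z):
        """Probability that the point's true altitude lies below the ground
        surface, assuming Gaussian altitude uncertainty with standard
        deviation sigma_z: the Gaussian CDF of (ground_z - point_z)."""
        return 0.5 * (1.0 + math.erf((ground_z - point_z) /
                                     (sigma_z * math.sqrt(2.0))))

    def classify_point(p_multipath, t_reflection=0.95, t_direct=0.05):
        """Three-way classification per [0049]; thresholds are placeholders."""
        if p_multipath >= t_reflection:
            return "multipath"        # satisfies the first threshold
        if p_multipath <= t_direct:
            return "not_multipath"    # satisfies the second threshold
        return "uncertain"            # between the first and second thresholds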

[0050] The point of reflection determiner 234 determines a point of reflection for a point determined to be a multipath reflection, i.e., a point resulting from a multipath reflection. In some implementations, the point of reflection is determined to be the intersection point of the ground surface from the surface model and the segment connecting the RADAR 134 sensor and the RADAR point determined to be a multipath reflection. Referring to FIG. 3, the reflection point 308 is the intersection of the surface model 322 and the segment (dashed lines 336 and 334b) between sensor 302 and point 312b. Depending on the implementation, the intersection, i.e., the point of reflection 308, may be found using various methods. In one implementation, described below with reference to FIG. 7, a binary search is applied by the point of reflection determiner 234 to identify the intersection. In another implementation, the point of reflection determiner 234 applies a linear step progression to identify the intersection.

[0051] The reflection surface plane estimator 236 determines, from the surface model, a surface plane at the point of reflection associated with a multipath reflection point. In some implementations, the reflection surface plane estimator 236 queries the neighborhood around the point of reflection to estimate the ground surface plane at the point of reflection, for example, by computing the normal. The size of the neighborhood and the resolution at which the neighborhood is queried varies depending on the implementation. In some implementations, the neighborhood may be a defined value, for example, a four-meter radius. In some implementations, the neighborhood may be based on a resolution of equipment, such as the RADAR 134 sensor or HD map.

[0052] In some implementations, the neighborhood is larger than the resolution of the HD map and/or surface model. For example, in one implementation, the neighborhood may be four times the resolution of the surface model. Such implementations may benefit by not overfitting for local variation in the surface at the point of reflection.
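
One plausible realization of the neighborhood query and normal computation of paragraphs [0051] and [0052] is a least-squares plane fit over nearby surface-model samples. The 2D radius query and the SVD-based fit are assumptions, not a method this disclosure prescribes:

    import numpy as np

    def estimate_surface_normal(surface_points, reflection_point, radius=4.0):
        """Estimate the ground surface plane normal at the point of reflection
        by fitting a plane to surface-model samples within `radius` meters
        (horizontally). The 4 m default echoes the example in [0051]; per
        [0052] it might instead be several times the model resolution."""
        dists = np.linalg.norm(surface_points[:, :2] - reflection_point[:2],
                               axis=1)
        neighborhood = surface_points[dists <= radius]
        centroid = neighborhood.mean(axis=0)
        # The right singular vector with the smallest singular value of the
        # centered neighborhood is the least-squares plane normal.
        _, _, vt = np.linalg.svd(neighborhood - centroid)
        normal = vt[-1]
        # Orient the normal upward for a ground surface.
        return normal if normal[2] >= 0.0 else -normal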

[0053] Referring to FIG. 3, the point of reflection is at point 308, and the neighborhood around 308 is illustrated as having the diameter l, which is also illustrated as the distance between the high and low bounds of uncertainty for the sensor 302. The surface model 322 in the neighborhood, l, of 308 is illustrated as flat for convenience and ease of illustration and explanation. The height, h, is the vertical distance between the sensor 302 and the point of reflection 308.

[0054] The correction determination and application engine 238 determines and applies a correction to a multipath reflection. The correction determination and application engine 238 determines the correction by mirroring the original point through the estimated plane. Referring again to FIG. 3, the neighborhood, l, around 308 is horizontal, and that horizontal plane intersecting the point of reflection 308 is extended and illustrated by line 306. When mirroring the original point 312b across plane 306, the correction determination and application engine 238 obtains point 312a, the point spatially, or geometrically, corrected for the multipath reflection off of the ground surface.
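
The mirroring step itself is standard point-plane reflection geometry; a minimal sketch, with names assumed for illustration:

    import numpy as np

    def mirror_across_plane(point, plane_point, plane_normal):
        """Reflect a point through the plane defined by plane_point and
        plane_normal: the geometric correction of paragraph [0054] that maps
        detected point 312b back to corrected point 312a."""
        n = plane_normal / np.linalg.norm(plane_normal)
        signed_dist = np.dot(point - plane_point, n)  # distance along normal
        return point - 2.0 * signed_dist * n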

[0055] The correction determination and application engine 238 may apply the correction differently in different implementations. In some implementations, the original point may be replaced with the corrected point. For example, point 312a may overwrite point 312b in the RADAR data 129. In some implementations, a difference between the original point and the corrected point may be applied to the RADAR data 129. For example, the RADAR data 129 stores point 312b and a correction (up N units and over O units) describing the position of point 312a relative to point 312b. In some implementations, the correction may be applied so as to retain the original data. For example, the RADAR data 129 includes an original data column (e.g., storing point 312b), and the correction determination and application engine 238 populates a corrected data column with corrections (e.g., 312a) and, for points that are not multipath reflections, copies the data from the original data column into the corrected data column.

Data Flow

[0056] Referring now to FIG. 4, a block diagram illustrating an example data flow through the multipath correction engine 202 and its subcomponents is described in accordance with some implementations. GPS and IMU data 402 are generated by the 3D positioning 138 and IMU 142 sensors, respectively. One or more HD maps 404 are generated or maintained by the mapping engine 164. The localization 152 subsystem obtains the HD map of the vehicle’s location and, along with the GPS and/or IMU data 402, determines a vehicle pose estimation of the vehicle’s location and orientation, which may be stored as localization data 127. The localization 152 subsystem also uses the HD map to generate a surface model 128. The multipath correction engine 202 obtains the vehicle pose estimation, the surface model 128, RADAR point cloud data 129 generated by a RADAR 134 sensor (e.g., with elevation resolution capability), and calibration data 125. The multipath correction engine 202 transforms the data into a common coordinate frame 415 and determines a set of points 416 corrected for reflection. Depending on the implementation, the original point that was determined to be a result of multipath reflection may or may not be retained. For example, in some implementations, the original point may be retained and tagged or associated with the corrected point so that subsequent analysis may be performed on the data to gain insights. In another example, in some implementations, the point resulting from the correction for reflection may replace, or overwrite, the original point that was determined to be a result of multipath reflection.

Methods

[0057] FIG. 5 is a block diagram illustrating an example flowchart of a method 500 for correcting for a reflection in accordance with some implementations. In block 505, a determination is made as to whether any point is unprocessed. When it is determined that no point is unprocessed (505-NO), the method 500 ends. When it is determined that a point has not been processed (505-YES), an unprocessed point is selected and a determination is made, in block 510, as to whether that point is a reflection.

[0058] When it is determined that the point is not a reflection (510-NO), the method 500 returns to block 505. Blocks 505 and 510 are repeated until it is determined that no point remains unprocessed (505-NO), and the method 500 ends, or until it is determined, in block 510, that the point is a reflection. When it is determined that a point is a reflection (510-YES), the method continues at block 515.

[0059] In block 515, the surface reflection point is found. In block 520, a geometry-based correction is performed, and the method 500 returns to block 505. Blocks 505, 510, 515, and 520 are repeated until all points are processed by the method 500, each being determined not to be a reflection (510-NO) or determined to be a reflection (510-YES) and subsequently corrected at block 520.

[0060] FIG. 6 is a block diagram illustrating an example flowchart of a method 600 for correcting for a reflection in accordance with some implementations. In block 605, RADAR data for a point under test is obtained. In block 610, the ground surface model is obtained. In block 615, the point under test and the ground surface model are, in some implementations, transformed into a common coordinate system. At block 620, the altitude of the point under test is determined to be below the altitude of the ground from the ground surface model obtained in block 610. In block 625, the point of reflection is identified. In block 630, the ground surface plane is estimated at the point of reflection. In block 635, the spatial position of the point under test is corrected by mirroring the point under test through the ground surface plane estimated in block 630.
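
Putting the blocks of method 600 together, a hedged end-to-end sketch for one point under test might read as follows. The surface_model.altitude and surface_model.points interfaces are assumptions; estimate_surface_normal and mirror_across_plane echo the sketches above, and find_reflection_point is sketched after method 700 below:

    import numpy as np

    def correct_point_under_test(radar_point, surface_model, sensor_pos):
        """End-to-end sketch of method 600 for a single point, assuming all
        inputs are already in a common coordinate system (block 615)."""
        point = np.asarray(radar_point, dtype=float)
        x, y, z = point
        # Block 620: only points below the modeled ground surface are treated
        # as multipath reflections; direct returns pass through unchanged.
        if z >= surface_model.altitude(x, y):
            return point
        # Block 625: locate the point of reflection on the surface.
        reflection = find_reflection_point(sensor_pos, point, surface_model)
        # Block 630: estimate the ground surface plane at the reflection point.
        normal = estimate_surface_normal(surface_model.points, reflection)
        # Block 635: mirror the point under test through the estimated plane.
        return mirror_across_plane(point, reflection, normal)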

[0061] FIG. 7 illustrates a flowchart of a method 700 for determining a point of reflection in accordance with some implementations. In block 705, RADAR data is obtained. In block 710, a ground surface model is obtained. In block 715, an initial interval [Left, Right] is defined, where Left is set to the sensor position and Right is set to the RADAR point under test. At block 720, a midpoint, Mid, is determined as equal to (Left+Right)/2. At block 725, a determination is made as to whether the midpoint, Mid, is above the ground.

[0062] When it is determined, at block 725, that the midpoint, Mid, is above the ground (725-YES), the method 700 proceeds to block 730. At block 730, the Left value is set to the midpoint, Mid, value, and the method continues at block 720 where a new midpoint, Mid, is determined. Blocks 720, 725, and 730 are repeated until it is determined, at block 725, that the midpoint, Mid, is not above the ground (725-NO).

[0063] When it is determined, at block 735, that the midpoint, Mid, is below the ground (735-YES), the method 700 proceeds to block 740. At block 740, the Right value is set to the midpoint, Mid, value, and the method continues at block 720 where a new midpoint, Mid, is determined. Blocks 720, 725, 730, 735, and 740 are repeated until it is determined, at block 735, that the midpoint, Mid, is not below the ground (735-NO), and the method 700 proceeds to block 745, where the midpoint, Mid, is the reflection point.
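
A minimal sketch of this binary search, with a convergence tolerance standing in for the “on the ground” outcome at block 745 (the tolerance and iteration cap are assumptions, since the flowchart does not specify termination criteria):

    import numpy as np

    def find_reflection_point(sensor_pos, radar_point, surface_model,
                              tol=0.01, max_iter=50):
        """Binary search for the surface crossing along the segment from the
        sensor to the point under test, following method 700. Left starts at
        the sensor (above ground) and Right at the RADAR point (below ground);
        the interval halves until the midpoint lies on the modeled surface
        within tol meters."""
        left = np.asarray(sensor_pos, dtype=float)
        right = np.asarray(radar_point, dtype=float)
        for _ in range(max_iter):
            mid = (left + right) / 2.0                    # block 720
            clearance = mid[2] - surface_model.altitude(mid[0], mid[1])
            if abs(clearance) <= tol:
                return mid                                # block 745
            if clearance > 0.0:       # midpoint above ground: block 730
                left = mid
            else:                     # midpoint below ground: block 740
                right = mid
        return (left + right) / 2.0   # best estimate after max_iter halvings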

Other Considerations

[0064] The previous description is provided to enable practice of the various aspects described herein. Various modifications to these aspects will be understood, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout the previous description that are known or later come to be known are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”

[0065] It is understood that the specific order or hierarchy of blocks in the processes disclosed is an example of illustrative approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged while remaining within the scope of the previous description. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[0066] The previous description of the disclosed implementations is provided to enable others to make or use the disclosed subject matter. Various modifications to these implementations will be readily apparent, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the previous description. Thus, the previous description is not intended to be limited to the implementations shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

[0067] The various examples illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given example are not necessarily limited to the associated example and may be used or combined with other examples that are shown and described. Further, the claims are not intended to be limited by any one example.

[0068] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of various examples must be performed in the order presented. As will be appreciated, the blocks in the foregoing examples may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.

[0069] The various illustrative logical blocks, modules, circuits, and algorithm blocks described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and blocks have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0070] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some blocks or methods may be performed by circuitry that is specific to a given function.

[0071] In some examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable storage medium or non-transitory processor-readable storage medium. The blocks of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable storage media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable storage medium and/or computer-readable storage medium, which may be incorporated into a computer program product.

[0072] The preceding description of the disclosed examples is provided to enable others to make or use the present disclosure. Various modifications to these examples will be readily apparent, and the generic principles defined herein may be applied to some examples without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.