

Title:
IMAGE BASED LOCATIONING
Document Type and Number:
WIPO Patent Application WO/2020/263982
Kind Code:
A1
Abstract:
This disclosure relates to systems and methods of obtaining accurate motion and orientation estimates for a vehicle (10) traveling at high speed based on images of a road surface. A purpose of these systems and methods is to provide a supplementary or alternative means of locating a vehicle (10) on a map, particularly in cases where other locationing approaches (e.g., GPS) are unreliable or unavailable.

Inventors:
VOLKERINK ERIK (US)
KHOCHE AJAY (US)
Application Number:
PCT/US2020/039362
Publication Date:
December 30, 2020
Filing Date:
June 24, 2020
Assignee:
TRACKONOMY SYSTEMS INC (US)
International Classes:
G01C3/08; G01P5/00
Foreign References:
US20170064287A12017-03-02
US20170045889A12017-02-16
US20090279741A12009-11-12
US20040221790A12004-11-11
US20160014394A12016-01-14
Other References:
MOTILAL AGRAWAL, KURT KONOLIGE: "Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS", 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR'06), 3 November 2020 (2020-11-03), pages 1063 - 1068, XP055670058, ISSN: 1051-4651, DOI: 10.1109/ICPR.2006.962
NICOLA KROMBACH; DAVID DROESCHEL; SEBASTIAN HOUBEN; SVEN BEHNKE: "Feature-based Visual Odometry Prior for Real-time Semi-dense Stereo SLAM", ROBOTICS AND AUTONOMOUS SYSTEMS, 19 October 2018 (2018-10-19), pages 1 - 46, XP080924920, DOI: 10.1016/j.robot.2018.08.002
PAUL TIMOTHY FURGALE: "Extensions to the Visual Odometry Pipeline for the Exploration of Planetary Surfaces", THESES, 1 November 2011 (2011-11-01), pages 1 - 156, XP055780244
Attorney, Agent or Firm:
CHOI, Christopher (US)
Claims:
CLAIMS

1. A method of processing a sequence of images to determine an image based trajectory of a vehicle (10) along a road, comprising by a data processing system (22):

rectifying each stereoscopic pair of images (38) to a common epipolar plane (40);

detecting features in each of the rectified stereoscopic pair of images (38) in each successive frame (44);

matching corresponding features in each rectified stereoscopic pair of images (38), wherein the matching comprises matching points in one image of the rectified stereoscopic pair with corresponding features in the other image of the stereoscopic image pair to produce a feature disparity map (46);

calculating a depth at each location point to obtain a sparse three-dimensional depth map of the road (48); and

determining motion and orientation of the vehicle (10) between successive stereoscopic image frames based on execution of an optical flow process by the data processing system (22) that determines estimates of motion and orientation between successive stereoscopic pairs of images (50).

2. The method of claim 1, comprising by the data processing system (22), converting the images of the stereoscopic pair of images (38) to a grayscale format.

3. The method of claim 1, wherein the disparity map measures pixel displacement between the matched features.

4. The method of claim 1, further comprising estimating a current trajectory of the vehicle (10) based on the estimates of motion and orientation between successive stereoscopic pairs of images (38).

5. The method of claim 1, further comprising by the data processing system (22), dividing an image region above a road surface into multiple tracks (62, 64, 66), and processing the multiple tracks to obtain multiple independent estimates of a trajectory of the vehicle (10).

6. The method of claim 5, further comprising by the data processing system (22), combining multiple ones of the estimates to determine estimates of motion and orientation of the vehicle (10) between successive stereoscopic frames.

7. The method of claim 6, further comprising by the data processing system (22), identifying and rejecting as outliers motion and orientation estimates that are inconsistent with a majority of other motion and orientation estimates.

8. The method of claim 5, further comprising by the data processing system (22), processing each of the tracks independently of other tracks to obtain a respective set of features for the stereographic image in the frame for each track.

9. The method of claim 8, further comprising by the data processing system (22), determining a respective set of disparity and depth maps for each track.

10. The method of claim 9, further comprising by the data processing system (22), dividing an imaged road region into a rectangular array multiple tracks wide and multiple tracks long (68, 70, 72, 74, 76, 78), and processing the tracks to obtain respective independent sets of estimates of motion and orientation along a trajectory of the vehicle (10).

11. The method of claim 10, wherein the dividing comprises dividing the imaged road region into the rectangular array by capturing a stereographic image set and windowing the imaged road surface into the sets of tracks.

12. The method of claim 10, wherein the dividing comprises dividing the imaged road region into the rectangular array using optical elements configured to direct light reflecting from a surface of the road to respective ones of the sets of tracks.

13. The method of claim 10, wherein the dividing comprises dividing the imaged road region into the rectangular array using respective pairs of stereoscopic image devices arrayed according to the track layout.

14. A method of processing a sequence of successive stereographic image frames to determine estimates of motion and orientation of a vehicle (10), the method comprising:

dividing an image region above a road surface into multiple tracks;

for each track, by the data processing system (22), estimating motion and orientation between corresponding features in respective stereographic image pairs;

removing, by the data processing system (22), inconsistent motion and orientation estimates as outliers;

determining, by the data processing system (22), estimates of motion and orientation based on an aggregation of consistent motion and orientation estimates; and

processing the multiple tracks to obtain multiple independent estimates of a trajectory of the vehicle (10).

15. The method of claim 14, wherein consistent motion and orientation estimates are averaged to determine the motion and orientation estimates.

16. A computer program product for execution by a computer system and comprising at least one non-transitory computer-readable medium having computer-readable program code portions embodied therein to process a sequence of images to determine an image based trajectory of a vehicle (10) along a road, the computer-readable program code portions comprising:

an executable code portion to rectify each stereoscopic pair of images (38) to a common epipolar plane;

an executable code portion to detect features in each of the rectified stereoscopic pair of images (38) in each successive frame;

an executable code portion to match corresponding features in each rectified stereoscopic pair of images (38), wherein the matching comprises matching points in one image of the rectified stereoscopic pair with corresponding features in the other image of the stereoscopic image pair to produce a feature disparity map;

an executable code portion to calculate a depth at each location point to obtain a sparse three-dimensional depth map of the road; and

an executable code portion to determine motion and orientation of the vehicle (10) between successive stereoscopic image frames based on execution of an optical flow process by the data processing system (22) that determines estimates of motion and orientation between successive stereoscopic pairs of images.

Description:
IMAGE BASED LOCATIONING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/865,192, filed 22 June 2019, which is incorporated herein by reference.

BACKGROUND

[0002] Locationing systems can track mobile targets in real time. These systems typically ascertain information relating to their geographic locations based on communications with a variety of different wireless locationing systems (e.g., the Global Positioning System (GPS), cellular network systems (e.g., GSM), and wireless local area networks (e.g., a system of Wi-Fi access points)). No single approach, however, provides continuous tracking information under all circumstances. For example, GPS tracking requires a tracking device to have an unobstructed view of at least four GPS satellites at the same time, making GPS tracking in urban and indoor environments problematic. Dead reckoning may be used to supplement GPS locationing when GPS signals are unavailable or inaccurate (e.g., as a result of signal multipath error). However, dead-reckoning navigation is limited by the rapid accumulation of errors and requires a complex fusion process to integrate dead-reckoning navigation data with GPS navigation data. Map-matching techniques can improve locationing accuracy by identifying the most likely locations of a vehicle on a road network. However, the accuracy of map-matching techniques depends significantly on the accuracy of the position estimates for the mobile target being tracked and the fidelity of the spatial road map used to locate the mobile target in a geographic region.

DESCRIPTION OF DRAWINGS

[0003] FIGS. 1A and 1B respectively show diagrammatic side and bottom views of an example vehicle that includes a locationing system on the vehicle chassis.

[0004] FIG. 2 shows a diagrammatic view of an example locationing system.

[0005] FIG. 3 is a diagrammatic view of one or more light sources illuminating a surface and one or more imaging devices capturing images of the illuminated surface.

[0006] FIG. 4 shows components of an example of the locationing system shown in FIGS. 1A and 1B.

[0007] FIG. 5 is a flow diagram of an example image based locationing method.

[0008] FIG. 6A is a diagrammatic view of an example of an imaged region divided into a set of parallel tracks.

[0009] FIG. 6B is a diagrammatic view of an example of an imaged region divided into an array of tracks.

[0010] FIG. 7 is a flow diagram of an example process of estimating motion during a time step based on the positions determined for multiple tracks.

[0011] FIG. 8 shows one or more locationing systems located at different locations on an example vehicle.

[0012] FIG. 9 shows an example vehicle that includes multiple locationing systems located on the vehicle chassis.

[0013] FIGS. 10A and 10B respectively show examples of different systems for maintaining the performance of the locationing system.

DETAILED DESCRIPTION

[0014] In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.

[0015] This disclosure relates to systems and methods of obtaining accurate motion and orientation estimates for a vehicle traveling at high speed based on images of a road surface. A purpose of these systems and methods is to provide a supplementary or alternative means of locating a vehicle on a map, particularly in cases where other locationing approaches (e.g., GPS) are unreliable or unavailable.

[0016] FIGS. 1A and 1B show an example of a vehicle 10 that includes an example locationing system 12 on the chassis 14 of the vehicle 10. The locationing system 12 may be integrated with the vehicle 10 during manufacture or it may be an add-on component that is retrofitted to the vehicle 10. In the illustrated example, the locationing system 12 may be located anywhere on the chassis 14 that provides a vantage point from which the locationing system 12 can capture images of the road surface. In some examples, the locationing system 12 is configured to simultaneously capture images of both the road surface and at least one wheel (e.g., wheel 16) so that wheel slippage can be optically detected and incorporated into the locationing process.

[0017] In the example shown in FIG. 2, the locationing system 12 includes at least one light source 18, at least one image capture device 20, a data processing system 22, and a data storage system 24. In some examples, the data processing system 22 is implemented by or integrated with the central control system of the vehicle. In other examples, the data processing system 22 is implemented, at least in part, by a remote data processing system or service that communicates with the locationing system 12.

[0018] Referring to FIG. 3, the light source 18 typically is a high brightness illumination source, such as a light emitting diode (LED) that emits light 19 of a particular wavelength towards a road surface 25. In some examples, the light source 18 emits light in the infrared wavelength range (e.g., 750-1400 nm). In other examples, different wavelengths of light may be used.

[0019] The image capture device 20 may include one or more imaging devices 26 that are capable of capturing images of a road surface at a high rate (e.g., 750-1000 frames per second). In some examples, the images are captured as grayscale images. In some examples (see FIG. 4), the image capture device 20 includes two stereoscopic imaging devices 28, 30 that are configured to capture a respective frame that includes at least two simultaneous stereoscopic images 38 for each image capture event. In some examples, a trigger signal or a global shutter 32 synchronizes the capture of the image frames by the imaging devices 28, 30. The image capture device 20 also may include one or more lenses 34 (e.g., fixed lenses or automatic zoom lenses), one or more optical filters 36 at least one of which may be matched to pass a wavelength of light generated by the light source 18, and an automatic exposure control system. The various components of the light source 18 and the image capture device 20 may be integrated and configured to optimize one or more parameters of the captured images (e.g., contrast).

[0020] The data processing system 22 processes the images captured by the image capture device 20 to determine an image-based trajectory of the vehicle along a road. In some examples, the image capture device 20 consists of a monocular camera with an optional ranging sensor that enables vehicle motion and depth to be calculated in terms of pixel motion across successive images. In preferred examples, however, the image capture device 20 includes two or more cameras that are fixed in position in a known geometry, which allows vehicle motion and depth to be calculated in real world dimensions (e.g., meters).

[0021] Referring to FIG. 5, in one example, the data processing system 22 processes a sequence of frames of stereoscopic image pairs that are captured at different times (e.g., at equal time intervals) to determine respective estimates of three-dimensional points on the road surface.

[0022] The data processing system 22 rectifies each stereo image pair to a common epipolar plane (FIG. 5, block 40). The data processing system 22 may use any of a wide variety of methods to rectify the stereo image pair in each frame. Examples of such methods include planar rectification, cylindrical rectification, and polar rectification. In some examples, the resulting rectified images have epipolar lines that are parallel to the horizontal axis and corresponding points have identical vertical coordinates.
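For concreteness, a minimal sketch of the planar-rectification option follows, assuming an offline-calibrated stereo rig. The use of OpenCV and the calibration inputs (K1, D1, K2, D2, R, T) are illustrative assumptions, not requirements of the disclosure.

```python
# Sketch of planar stereo rectification with OpenCV. The calibration inputs
# K1, D1, K2, D2 (camera matrices and distortion coefficients) and R, T (the
# rotation and translation between the two cameras) are assumed to come from
# an offline stereo calibration of the two imaging devices.
import cv2

def rectify_pair(img_left, img_right, K1, D1, K2, D2, R, T):
    h, w = img_left.shape[:2]
    # Compute rectification transforms that map both images to a common
    # epipolar plane, so corresponding points share the same image row.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return rect_left, rect_right, Q  # Q can reproject disparities to 3-D
```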

[0023] If the images of the stereo pair are not grayscale images, the images optionally may be converted to a grayscale format (FIG. 5, block 42).

[0024] The data processing system 22 detects features in each of the rectified stereo pair of images in each successive frame (FIG. 5, block 44). In general, features are points in an image that can be uniquely identified based on comparisons of colors, highlights, and other features in the pair of images. Examples include points that have high contrast with their neighbors in a local region of an image.
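A minimal sketch of this detection step is shown below. The disclosure does not name a particular detector; the ORB detector is used here only as one example of a detector that responds to high-contrast points.

```python
# Sketch of feature detection in each rectified image of a frame. ORB is an
# illustrative choice of keypoint detector, not one prescribed by the patent.
import cv2

orb = cv2.ORB_create(nfeatures=500)

def detect_features(rect_left, rect_right):
    kp_l, desc_l = orb.detectAndCompute(rect_left, None)
    kp_r, desc_r = orb.detectAndCompute(rect_right, None)
    return (kp_l, desc_l), (kp_r, desc_r)
```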

[0025] The data processing system 22 matches corresponding features in the rectified stereo pair of images (FIG. 5, block 46). In this process, the processing system 22 matches points (e.g., pixels) or features in one image of the stereo pair with corresponding interest points or features in the other image of the stereo pair. The result of this process is a feature disparity map, where the disparity is the pixel displacement between the matched features.
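A sketch of the matching and disparity computation follows, assuming the ORB keypoints and descriptors from the previous sketch. The row-consistency check exploits the earlier rectification, after which correct matches should lie on (nearly) the same row.

```python
# Sketch of left/right feature matching. Disparity is taken as the horizontal
# pixel displacement between matched keypoints.
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_features(kp_l, desc_l, kp_r, desc_r, max_row_diff=1.0):
    disparities = []  # list of (x_left, y_left, disparity) entries
    for m in matcher.match(desc_l, desc_r):
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        if abs(yl - yr) <= max_row_diff:      # epipolar consistency check
            disparities.append((xl, yl, xl - xr))
    return disparities
```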

[0026] In some examples, the depth at each feature point is calculated to obtain a sparse three-dimensional surface profile (FIG. 5, block 48). In some examples, the data processing system 22 produces a depth map by calculating an elevation value for each feature in the disparity map using triangulation.
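For a rectified stereo pair, the triangulation reduces to the standard relation Z = f·B/d, where f is the focal length in pixels, B the stereo baseline, and d the disparity. The sketch below illustrates this; the focal length and baseline values are placeholders, not parameters taken from the disclosure.

```python
# Sketch of sparse depth from disparity via rectified-stereo triangulation.
# focal_px and baseline_m are illustrative values only.
def sparse_depth_map(disparities, focal_px=1400.0, baseline_m=0.12):
    points_3d = []
    for x, y, d in disparities:
        if d <= 0:
            continue                      # reject degenerate matches
        z = focal_px * baseline_m / d     # depth along the optical axis
        points_3d.append((x, y, z))
    return points_3d
```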

[0027] The data processing system 22 determines the motion of the vehicle (e.g., orientation and translation) between successive frames (FIG. 5, block 50). In some examples, an optical flow process is used to determine the motion and orientation of the vehicle between successive stereoscopic image frames.
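One way to realize this step, sketched below, is to fit a rigid transform to the 3-D feature points matched between frame t and frame t+1 using the SVD-based Kabsch method; the frame-to-frame correspondence itself (e.g., a pyramidal optical-flow tracker on the left images) is omitted. This is an assumed realization, not a solver prescribed by the disclosure.

```python
# Sketch of frame-to-frame motion estimation from matched 3-D road points.
import numpy as np

def rigid_motion(pts_prev, pts_curr):
    """pts_prev, pts_curr: (N, 3) arrays of matched 3-D points."""
    c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t                           # orientation change and translation
```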

[0028] In the examples described above, a stereoscopic image set in a frame is used to estimate the current trajectory of the vehicle. In order to obtain a denser and more robust dataset that is less sensitive to outliers, the imaged region of the road surface under the vehicle is divided into multiple tracks that can be processed to obtain multiple independent estimates of the vehicle’s trajectory. Multiple of these estimates can be combined or otherwise used to determine estimates of the vehicle’s motion and orientation between successive stereoscopic image frames. In some examples, the multiple independent sets of estimates can be used to identify and reject as outliers motion and orientation estimates that are inconsistent with the majority of other estimates.
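One way to obtain such tracks, sketched below, is simply to window a captured frame into a rectangular array of sub-images, each of which is then run through the same stereo pipeline; the particular 2 x 3 layout is illustrative. FIGS. 6A and 6B, discussed next, show example track layouts.

```python
# Sketch of dividing an imaged road region into a rectangular array of tracks
# by windowing a captured frame. rows and cols are illustrative values.
def split_into_tracks(image, rows=2, cols=3):
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols
    tracks = []
    for r in range(rows):
        for c in range(cols):
            tracks.append(image[r * th:(r + 1) * th, c * tw:(c + 1) * tw])
    return tracks
```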

[0029] Referring to FIG. 6A, in one example, an imaged road region 60 is divided into three parallel tracks 62, 64, 66, each of which is processed as an independent stereoscopic image frame. Each track 62, 64, 66 is processed independently of the other tracks 62, 64, 66 to obtain a respective set of interest points or features for the stereoscopic images in the frame for each track 62, 64, 66. From this information, the data processing system 22 can determine a respective set of disparity and depth maps for each track 62, 64, 66.

[0030] FIG. 6B shows another example, in which the imaged road region 60 is divided into a rectangular array of six tracks 68, 70, 72, 74, 76, 78, each of which can be processed to obtain respective independent sets of estimates of motion and orientation along the vehicle's trajectory.

[0031] In some examples, the imaged region 60 is divided into the tracks 62-66 and 68-78 by capturing a stereoscopic image set and windowing or bucketing the frame into regions that correspond to the sets of tracks 62-66 and 68-78. In other examples, the imaged region 60 is divided into the sets of tracks 62-66 and 68-78 using lenses or diffractive optical elements that are configured to direct the light reflecting from the road surface to respective ones of the sets of tracks 62-66 and 68-78. In still other examples, the imaged region 60 is divided into the sets of tracks 62-66 and 68-78 using respective pairs of stereoscopic image devices 28, 30 (e.g., cameras) arrayed according to the desired track layout.

[0032] FIG. 7 shows an example method of determining estimates of the motion and orientation of a vehicle from successive stereoscopic image frames. In this example, for each track, the data processing system 22 estimates the motion and orientation between corresponding features in a respective stereoscopic pair of images (FIG. 7, block 80). The data processing system 22 removes inconsistent motion and orientation estimates as outliers (FIG. 7, block 82). Estimates of the motion and orientation are determined based on an aggregation of the consistent motion and orientation estimates (FIG. 7, block 84). In some examples, the consistent motion and orientation estimates are averaged to determine the motion and orientation estimates.
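A minimal sketch of this aggregation, applied to the translation component of the per-track estimates, is shown below; estimates far from the median are rejected as outliers and the remainder are averaged. The same idea extends to the rotation component, and the rejection threshold is an illustrative value, not one taken from the disclosure.

```python
# Sketch of aggregating per-track motion estimates (FIG. 7):
# reject outliers relative to the median, then average the rest.
import numpy as np

def aggregate_translations(track_translations, max_dev_m=0.05):
    t = np.asarray(track_translations)           # (n_tracks, 3)
    median = np.median(t, axis=0)
    dev = np.linalg.norm(t - median, axis=1)
    consistent = t[dev <= max_dev_m]              # drop inconsistent estimates
    return consistent.mean(axis=0) if len(consistent) else median
```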

[0033] FIG. 8 shows an example of a vehicle 85, in which one or more image-based locationing systems 86, 88 are mounted on the front and/or rear ends of the vehicle 85 and oriented to obtain images of the road surface.

[0034] FIG. 9 shows an example of a vehicle 96 in which first and second imaging devices 90, 92 and a light source 94 are located on the chassis of the vehicle 96 at different locations along the longitudinal dimension of the vehicle 96. Each of the image capture devices 90, 92 is operable to obtain a series of successive stereoscopic image frames of the light reflecting from the road surface. The data processing system 22 is operable to determine the vehicle's direction of travel based on the locations of features detected in the images captured by the first imaging device 90 and the locations of matching features detected in the images captured by the second imaging device 92. By increasing the distance between the complementary images, this approach increases the accuracy of measured changes in direction. In another example, the first image capture device 90 is located under the vehicle 96 on the chassis and the second image capture device 92 is located on the roof of the vehicle 96.
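As a purely illustrative reading of this geometry, and not a procedure spelled out in the disclosure, the heading angle could be recovered from the lateral offset of a matched road feature over the known longitudinal separation of the two devices; a longer separation makes the angle less sensitive to a given offset error.

```python
# Hedged geometric sketch: heading angle from the lateral offset of a feature
# observed by the front device and later by the rear device, over the known
# longitudinal separation of the two devices. Inputs are hypothetical.
import math

def heading_from_feature(lateral_offset_m, camera_separation_m):
    # Larger camera_separation_m reduces the angular error produced by a
    # given error in lateral_offset_m.
    return math.atan2(lateral_offset_m, camera_separation_m)  # radians
```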

[0035] FIG. 10A shows an example of a vehicle 98 that includes a cleaning system 100 for the locationing system 12. In this example, the cleaning system 100 includes a funnel that has a large input end 102 and a small output end 104. Movement of the vehicle 98 over the road causes air 106 to flow into the input end 102 of the funnel 100 at a rate that depends at least in part on the speed of the vehicle 98. The cleaning system 100 is designed so that, under typical driving conditions, the input air is ejected from the output end 104 of the funnel 100 at a pressure that is sufficient to remove dust and other debris from the optical and other components of the locationing system 12.

[0036] FIG. 10B shows an example of a vehicle that includes a cleaning system 108 for the locationing system 12. In this example, the cleaning system 108 includes an electrically powered cylindrical brush that is configured to remove dust and other debris from the optical and other components of the locationing system 12. In some examples, the rotating brush is configured to move into and out of contact with the locationing system components either on a regular basis (e.g., each time the vehicle is turned on) or on demand (e.g., when a sensor detects that the optical and/or other components require cleaning).

[0037] Referring back to FIG. 2, in some examples, the data processing system 22 also integrates data received from one or more auxiliary locationing sensors 38 into the locationing process. Example auxiliary locationing sensors 38 include a wheel encoder, an inertial measurement unit (IMU), an inertial navigation system (INS), a global positioning system (GPS), a sound navigation and ranging (SONAR) system, a LIDAR system, and a radar system.

[0038] Examples of the subject matter described herein, including the disclosed systems, methods, processes, functional operations, and logic flows, can be implemented in data processing apparatus (e.g., computer hardware and digital electronic circuitry) operable to perform functions by operating on input and generating output. Examples of the subject matter described herein also can be tangibly embodied in software or firmware, as one or more sets of computer instructions encoded on one or more tangible non-transitory carrier media (e.g., a machine readable storage device, substrate, or sequential access memory device) for execution by data processing apparatus.

[0039] The details of specific implementations described herein may be specific to particular embodiments of particular inventions and should not be construed as limitations on the scope of any claimed invention. For example, features that are described in connection with separate embodiments may also be incorporated into a single embodiment, and features that are described in connection with a single embodiment may also be implemented in multiple separate embodiments. In addition, the disclosure of steps, tasks, operations, or processes being performed in a particular order does not necessarily require that those steps, tasks, operations, or processes be performed in the particular order; instead, in some cases, one or more of the disclosed steps, tasks, operations, and processes may be performed in a different order or in accordance with a multi-tasking schedule or in parallel.

[0040] Other embodiments are within the scope of the claims.