

Title:
SYSTEM AND METHOD OF DETECTING, POPULATING AND/OR VERIFYING CONDITION, ATTRIBUTIONS, AND/OR OBJECTS ALONG A NAVIGABLE STREET NETWORK
Document Type and Number:
WIPO Patent Application WO/2011/053335
Kind Code:
A1
Abstract:
A method of detecting, populating and/or verifying conditions, attributions, and/or objects along a navigable street is provided. The method includes placing a single global positioning system-enabled device having a user interface screen in a vehicle. Then, collecting probe data from the global positioning system-enabled device as the vehicle travels along the navigable street. The method is further characterized by providing the global positioning system-enabled device with a forward facing still image capturing camera opposite the user interface screen and orienting the camera to face in the direction of forward vehicle travel and outwardly from a front windshield of the vehicle. Then, capturing a plurality of still images from the camera, geocoding the still images, and analyzing the geocoded still images to detect, populate and/or verify conditions, attributions, and/or objects along the navigable street.

Inventors:
MORLOCK, Clayton, R. (30 Storrs Hills Road, Lebanon, NH, 03766, US)
COOKE, Donald (360 River Road, Lyme, NH, 03768, US)
Application Number:
US2009/069948
Publication Date:
May 05, 2011
Filing Date:
December 31, 2009
Assignee:
TELE ATLAS NORTH AMERICA (11 Lafayette Street, Lebanon NH, 03766-1445, US)
MORLOCK, Clayton, R. (30 Storrs Hills Road, Lebanon, NH, 03766, US)
COOKE, Donald (360 River Road, Lyme, NH, 03768, US)
International Classes:
G08G1/00
Foreign References:
US20080170755A12008-07-17
US20030191568A12003-10-09
US20050137786A12005-06-23
US20070063875A12007-03-22
US6810323B12004-10-26
US6526352B12003-02-25
Attorney, Agent or Firm:
STEARNS, Robert, L. (Dickinson Wright PLLC, 38525 Woodward Avenue Suite 200, Bloomfield Hills MI, 48304-5092, US)
Claims:
What is claimed is:

1. A method of detecting, populating and/or verifying conditions, attributions, and/or objects along a navigable street (18), comprising:

placing a single global positioning system-enabled device (12) having a user interface screen (20) in a vehicle (10);

collecting probe data from the global positioning system-enabled device (12) as the vehicle (10) travels along the navigable street (18);

characterized by:

providing the global positioning system-enabled device (12) with a forward facing still image capturing camera (14) opposite the user interface screen (20) and orienting the camera (14) to face in the direction of forward vehicle travel and outwardly from a front windshield of the vehicle (10);

capturing a plurality of still images from the camera (14);

geocoding the still images; and

analyzing the geocoded still images to detect, populate and/or verify conditions, attributions, and/or objects along the navigable street (18).

2. The method of claim 1 further including comparing sequential geocoded still images with one another and discerning changes between the compared sequential geocoded still images.

3. The method of claim 2 further including assigning pixel coordinates to the geocoded still images and comparing the pixel coordinates of an object between the sequential geocoded still images.

4. The method of claim 3 further detecting the shape of the object in the geocoded still images and comparing the shape with a database of verified object shapes.

5. The method of claim 3 further including detecting colors of the object in the geocoded still images and comparing the colors with a database of verified object colors.

6. The method of claim 3 further including calculating the distance between the sequential geocoded still images and comparing the pixel coordinates of the object in the sequential geocoded still images with a database of similar pixel coordinates over said distance for verified types of objects.

7. The method of claim 2 further including determining that a change between the sequential geocoded still images indicates a rapidly approaching object and triggering an alarm in the vehicle to alert the driver of the rapidly approaching object.

8. The method of claim 7 further including configuring the global positioning system-enabled device (12) in communication with an engine control unit of the vehicle (10) and automatically sending a signal to the engine control unit in response to the rapidly approaching object from the global positioning system-enabled device (12) to slow the vehicle.

9. The method of claim 7 further including determining the change by comparing sizes of the object in the sequential geocoded still images.

10. The method of claim 7 further including determining the change by comparing colors of the object in the sequential geocoded still images.

11. The method of claim 1 further including obtaining direct input from the user via the global positioning system-enabled device (12) in response to questions posed via the global positioning system-enabled device (12).

12. The method of claim 1 further including limiting the camera to only store predetermined types of models of road furniture.

13. The method of claim 1 further including limiting the camera to store only a single type of model of road furniture.

14. A system for detecting, populating and/or verifying conditions, attributions, and/or objects along a navigable street, comprising:

a vehicle (10);

a single global positioning system-enabled device (12) disposed in said vehicle (10) and being configured in wireless electrical communication with a database;

collecting probe data from the global positioning system-enabled device (12) as the vehicle (10) travels along the navigable street (18) and storing the probe data in said database;

characterized by:

said global positioning system-enabled device (12) having a forward facing still image camera (14) oriented to face in the direction of forward vehicle travel and outwardly from a front windshield (16) of the vehicle (10) to communicate still images to said database for analysis.

15. The system of claim 14 further including limiting the camera (14) to store only predetermined types of models of road furniture.

16. The system of claim 15 further including limiting the camera (14) to store only a single type of model of road furniture.

Description:
SYSTEM AND METHOD OF DETECTING, POPULATING AND/OR VERIFYING CONDITION, ATTRIBUTIONS, AND/OR OBJECTS ALONG A NAVIGABLE STREET NETWORK

BACKGROUND OF THE INVENTION

Field of the Invention

[0001] This invention relates generally to methods and systems for analyzing objects along a navigable street, and more particularly to methods and systems of verifying and updating objects and attributions in a street network database and detecting sudden changes and conditions along a navigable street.

Related Art

[0002] Pattern or object recognition from still or video images taken along a navigable street network and communicated via a Global Positioning System is known. Known methods and apparatus used to conduct pattern or object recognition are complex and typically require the use of multiple, expensive, dedicated image recording devices mounted in elaborate mapping vans, wherein the images can take a good deal of time to be processed. Because the image recording devices are dedicated, they are relatively costly.

SUMMARY OF THE INVENTION

[0003] In accordance with one aspect of the invention, a method of detecting, populating and/or verifying conditions, attributions, and/or objects along a navigable street is provided. The method includes placing a single global positioning system-enabled device having a user interface screen in a vehicle. Then, collecting probe data from the global positioning system-enabled device as the vehicle travels along the navigable street. The method is further characterized by: providing the global positioning system-enabled device with a forward facing still image capturing camera opposite the user interface screen and orienting the camera to face in the direction of forward vehicle travel and outwardly from a front windshield of the vehicle. Then, capturing a plurality of still images from the camera; geocoding the still images; and analyzing the geocoded still images to detect, populate and/or verify conditions, attributions, and/or objects along the navigable street.

[0004] In accordance with the invention, a relatively inexpensive method of detecting, populating and/or verifying conditions, attributions, and/or objects along a navigable street network is provided. The method utilizes a single global positioning system-enabled device having a still image capturing camera. By comparing and contrasting the images obtained from the camera with one another and with information maintained in a database, the attributes and objects (road furniture) along a street of the street network are able to be accurately assessed. Further, the global positioning system-enabled device provides a mechanism for feedback from a user, thereby further providing an ability to communicate with the user to have the user verify the nature of objects and attributes along the street.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] These and other aspects, features and advantages of the invention will become more readily appreciated when considered in connection with the following detailed description of presently preferred embodiments and best mode, appended claims and accompanying drawings, in which:

[0006] Figures 1A-1B illustrate a vehicle having a global positioning system-enabled device constructed in accordance with one presently preferred aspect of the invention;

[0007] Figures 2A-2C illustrate a series of schematic images of a road sign taken along a navigable street from the vehicle of Figure 1;

[0008] Figures 3A-3C illustrate a series of schematic images of an overhead road sign taken along a navigable street from the vehicle of Figure 1; and

[0009] Figures 4A-4C illustrate a series of schematic images of a vehicle traveling ahead of the vehicle of Figure 1 along a navigable street.

DETAILED DESCRIPTION OF PRESENTLY PREFERRED EMBODIMENTS

[00010] In accordance with one aspect of the invention, a series of 2-D images obtained from a Global Positioning System (GPS)-enabled device having a camera facing forward in a vehicle are processed for populating, updating and verifying various types of road furniture (e.g. road signs, construction barriers) and various attributions (e.g. speed limits, address ranges) along the street network and for detecting sudden changes along the street network. Accordingly, each vehicle serves in essence as a low cost mapping vehicle. In accordance with the invention, the GPS-enabled device can be one of many different types of devices, such that this invention is applicable to all kinds of navigation systems including, but not limited to, personal navigation devices (PND), handheld devices, personal digital assistants (PDAs), mobile telephones with navigation software (smart phones), and in-car navigation systems built into a vehicle. During use, the owner of the GPS-enabled device, for privacy reasons, is aware that images are being transmitted from the device, wherein the owner can optionally block the sending of images, if desired. The GPS-enabled devices with integral forward facing camera constructed in accordance with the invention are relatively inexpensive compared to more elaborate video imaging devices in high-tech mapping vans, and can be preprogrammed to take and store only geotagged images of interest, such as only one particular type of road furniture or attribution, for example, while not capturing and/or deleting other images not of interest. Accordingly, the GPS-enabled device can function as intended, such as for navigation purposes, and can also transmit images of interest without having to have an overabundance of storage memory. Further, with the types of images taken being controlled, i.e. the type of road furniture being imaged, an ability to limit the number of images generated is recognized. As such, the ability to manage the volume of the images captured is greatly improved, in addition to being able to manage the specific types of images captured.
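The store-only-images-of-interest behavior described above can be sketched as a simple pre-filter. This is an illustrative sketch, not the patent's implementation: the names `filter_snapshots`, `TARGET_MODEL`, and the stand-in classifier are all hypothetical, standing in for whatever on-device recognition the device is preprogrammed with.

```python
# Hypothetical sketch: keep only geotagged snapshots that match the single
# road-furniture type this device is preprogrammed to look for, discarding
# everything else to conserve on-device storage and limit image volume.

TARGET_MODEL = "stop_sign"  # each device may be assigned a different type

def filter_snapshots(snapshots, classify):
    """Keep snapshots whose classifier label matches TARGET_MODEL.

    `snapshots` is a list of dicts with 'image' and 'geotag' keys;
    `classify` is any callable returning a label for an image.
    """
    kept = []
    for snap in snapshots:
        if classify(snap["image"]) == TARGET_MODEL:
            kept.append(snap)  # store the geotagged image of interest
        # non-matching images are simply dropped, freeing memory
    return kept

# Usage with a stand-in classifier:
snaps = [
    {"image": "img_a", "geotag": (43.64, -72.25)},
    {"image": "img_b", "geotag": (43.65, -72.26)},
]
labels = {"img_a": "stop_sign", "img_b": "yield_sign"}
kept = filter_snapshots(snaps, lambda img: labels[img])
print(len(kept))  # only the stop-sign snapshot survives
```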

[00011] Referring in more detail to the drawings, Figure 1 illustrates, by way of example and without limitation, a vehicle 10, such as an automobile traveling along a road, for example, having a Global Positioning System (GPS)-enabled personal navigation device 12, such as those manufactured by TomTom NV (www.tomtom.com). It should be recognized that the term "vehicle" is not intended to be limited to automobiles as illustrated, and thus, is intended to include bikes, pedestrians, scooters, or any other type of vehicle that can travel along a mapped road. The GPS-enabled device 12 is equipped with a single forward facing camera 14 such that the camera 14 is configured to face in the direction of forward vehicle travel. As such, the camera 14 is able to take sequential snap shots outwardly from a windshield 16 of the vehicle 10 as it travels along a road 18. The camera 14 can be actuated to take a snap shot as frequently as desired, such that the frequency can be varied as needed to obtain the type of information sought. The camera can be provided having a predetermined focal length and field of view, as desired, and can be provided having any suitable type of lens, including a fish eye lens (180 degree view angle) or a standard lens (90 degree field of view). Depending on the type of device used, and discussed hereafter as being a PND, the device 12 can have a rearward facing viewing screen 20 for displaying a navigation map to assist the driver in locating a destination, for example. In addition, the device 12 can be provided with sufficient processing capabilities to process images taken in order to verify the type of road furniture imaged in the snap shot. Accordingly, it is not necessary for the device 12 to be in constant communication with an external processor.

[00012] In accordance with one aspect of the invention, with reference to Figure 2, the GPS-enabled device 12, with the camera 14 facing forward in the direction of vehicle travel, is prompted automatically, with knowledge by the vehicle user and without vehicle user interface, to take a series of pictures in front of the vehicle 10 as the vehicle travels along the road 18. Given that the location of the GPS-enabled device is known via triangulation performed via at least 3, and usually about 10-12, satellites (not shown), each photographic image, also referred to as a snap shot, taken by the camera 14 is geocoded, meaning that the position of the camera 14 at the time the snap shot was taken, as computed by the GPS-enabled device 12, along with possibly other heading information, is associated with the image as metadata. As the vehicle 10 traverses the road surface 18, sequential images are captured at times t1 (Figure 2A), t2 (Figure 2B), and t3 (Figure 2C), wherein the time lapse between the successive images (Δt) can be varied as desired. It is contemplated that the Δt could be established sufficiently small so that successive images could overlap one another, if desired.
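The geocoding step above amounts to attaching the device's GPS fix (and optionally heading) to each snapshot as metadata. A minimal sketch, assuming a `GeocodedImage` record type of my own invention rather than any format the patent specifies:

```python
# Hypothetical sketch of geocoding a snapshot: the GPS fix computed by the
# device at capture time, plus heading, is attached to the image as metadata.

from dataclasses import dataclass

@dataclass
class GeocodedImage:
    pixels: object        # raw still image data
    lat: float            # latitude of the camera at capture time
    lon: float            # longitude of the camera at capture time
    heading_deg: float    # direction of forward vehicle travel
    timestamp: float      # capture time in seconds

def geocode(image, gps_fix, heading_deg, t):
    """Associate the current GPS fix with a snapshot as metadata."""
    lat, lon = gps_fix
    return GeocodedImage(image, lat, lon, heading_deg, t)

# One snapshot taken while heading roughly east:
shot = geocode("raw_bytes", (43.6426, -72.2519), heading_deg=87.0, t=1.0)
print(shot.lat, shot.heading_deg)
```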

[00013] As shown in the image captured at t1 (Figure 2A), the image has a field of view 19 bounded by a left edge 26 (defined as extending between pixel coordinates (0,0) and (0,300)), a right edge 28 (defined as extending between pixel coordinates (400,0) and (400,300)), a top edge 30 (defined as extending between pixel coordinates (0,0) and (400,0)), and a bottom edge 32 (defined as extending between pixel coordinates (0,300) and (400,300)). The road sign 24 captured in the image at t1 is located at or near pixel coordinate (200,20) and comprises about 4 red pixels. In contrast, the road sign 24 captured in the image at t2 is larger and has a more defined octagonal shape, with red pixels outlining the sign 24 and white pixels generally centrally located on the sign at or near pixel coordinate (300,100). Further yet, the road sign 24 captured in the image at t3 is larger yet and is clearly comprised of red pixels outlining an octagon shape and white pixels generally centrally located on the sign at or near pixel coordinate (350,150).

[00014] With the aforementioned information obtained from the images at t1, t2, and t3, the type of sign can be verified with a high degree of confidence. First of all, it can be determined from the successive images that the sign 24 is off to the side of the road 18. This can be done by assuming that the vehicle is traveling on the road 18 in generally centered relation, such that the pixel coordinates associated with coordinate (200,"Y"), wherein "Y" can take on any pixel value ranging between 0-300, designate a close approximation of the center of the road 18. Accordingly, any "distant" object near or adjacent the road 18 will appear in an image at a coordinate approximating (200,20), and upon the object "nearing" the vehicle 10, the object will move off center from the (200,"Y") coordinate toward one of the left or right edges 26, 28, wherein the "Y" value will increase and the "200" value will either tend toward 0 or 400, depending on which side of the road 18 the object is located. And so, with all the above information, it can be determined that red and white pixels are present on the sign 24, that the sign 24 is octagonal in shape, and further yet, that the sign 24 is fixed on the right hand side of the road 18. All these factors are consistent with the sign 24 being a 'stop sign', and thus, if not present on a digital map, it can be added to the database of models, or if present, it can be verified.
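The side-of-road inference above can be sketched directly from the pixel tracks. This is an illustrative sketch under the assumptions stated in the text (400x300 image, road center near the x=200 column); the function name and the 50-pixel drift threshold are my own illustrative choices, not values from the patent:

```python
# Hypothetical sketch of the side-of-road inference: in a 400x300 image whose
# road center is near x=200, a distant object appears near (200, 20); as it
# nears the vehicle, its x coordinate drifts toward 0 (left) or 400 (right).

def side_of_road(track):
    """Classify an object's position from its pixel track (list of (x, y)).

    Returns 'left', 'right', or 'centered' based on how far the final
    x coordinate has drifted from the road-center column x = 200.
    """
    xn, _ = track[-1]
    drift = xn - 200
    if abs(drift) < 50:          # drift threshold is an illustrative assumption
        return "centered"
    return "right" if drift > 0 else "left"

# Stop-sign track from the t1..t3 images: (200,20) -> (300,100) -> (350,150)
print(side_of_road([(200, 20), (300, 100), (350, 150)]))  # -> right
```

A sign on the opposite shoulder would instead drift toward x=0 and classify as "left", while overhead furniture stays near the center column.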

[00015] In addition to the above information obtained, additional relative speed information can be used to assess attributes of the road and to further verify the presence of the stop sign 24, as well as the presence, or absence, of other types of road furniture. For example, given the ability to determine the precise or substantially precise location of the GPS-enabled device at any moment in time, the relative speed of the vehicle 10 can be determined using a simple formula: distance = rate × time. With the distance of vehicle travel known between points of probe data obtained, and with the time also known between the points of probe data, the rate of the vehicle 10 can be readily determined. As such, upon the vehicle 10 reaching the sign 24, if it is truly a 'stop sign', we would expect the rate of travel of the vehicle 10 to diminish and the vehicle to come to a stop. Accordingly, in the illustrated example, we would expect the vehicle 10 to come to a stop near time t3, as the sign 24 has become increasingly close to the vehicle 10, which further verifies that the sign 24 is in fact a 'stop sign'. Further yet, the various types of road furniture encountered along a road have a particular size, shape, height, location, color pattern, etc. As such, when the various types of road furniture are imaged from the camera of a known focal length and field of view, and from a known and/or determined distance, the road furniture will appear at an approximately known pixel coordinate over a predetermined pattern and will have known colors. The optical distance can be readily determined from the relative velocity of the road furniture toward the vehicle 10 and how fast the sign 24 is migrating within the field of view 19 of the image. Depending on the position of the road furniture relative to the center of view of the camera, regular geometric features such as a circle or rectangle will become skewed as they move away from the center of view. Circular objects will appear as oval objects, straight linear objects will appear curved, and rectangular objects will appear more trapezoidal. As such, based on the anticipated migration of objects away from the center of view of the camera, models can be computed to account for their change in appearance. Accordingly, the images obtained from the camera 14 can be compared to a database of models of the various types of road furniture (having known shapes and configurations) to verify the type of object imaged by the camera 14. As such, in the example of the stop sign 24, the database of models will contain data for a stop sign taken from predetermined distances, such that when the vehicle 10 is a known distance from the stop sign, such as a ½ mile, for example, the stop sign 24 will appear in the image at about coordinate (200,20), and then, when at a distance of a ¼ mile, the stop sign 24 will appear in the image at about coordinate (300,100). Accordingly, when images taken by users are compared to the database of models, whether via an external processor or a processor internal to the device 12, the probability of the type of road furniture captured in the image can be determined through straightforward statistical analysis. As such, the road furniture, if already mapped, can be confirmed, and if not mapped, can be added to the digital map as a new item of road furniture.
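The distance = rate × time check above can be sketched as follows. This is an illustrative sketch only: the function names, the probe values, and the 1 m/s "stopped" threshold are my own assumptions, standing in for whatever probe-data processing the system actually uses:

```python
# Hypothetical sketch of the rate = distance / time check: speeds between
# successive probe points falling toward zero as the vehicle nears the sign
# are consistent with the sign being a stop sign.

def speeds_from_probes(probes):
    """Compute the speed between successive (distance_m, time_s) probe points."""
    out = []
    for (d0, t0), (d1, t1) in zip(probes, probes[1:]):
        out.append((d1 - d0) / (t1 - t0))  # rate = distance / time
    return out

def consistent_with_stop(probes, threshold=1.0):
    """True if the vehicle's speed diminishes to near zero at the last probe."""
    v = speeds_from_probes(probes)
    return v[-1] < threshold and v[-1] < v[0]

# Cumulative distance (m) at times 0..4 s: the vehicle decelerates to a halt
# as it reaches the sign near t3 in the example above.
probes = [(0, 0), (15, 1), (25, 2), (30, 3), (30.5, 4)]
print(speeds_from_probes(probes))   # speeds fall: 15, 10, 5, 0.5 m/s
print(consistent_with_stop(probes))  # -> True
```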

[00016] Models that represent the appearance of road furniture at distances away from the camera, and the software that compares the models to the sequential camera still images, are computationally complex and memory intensive. As such, in accordance with one embodiment of this invention, the models stored in the GPS-enabled device containing a camera are limited to one or a few models. Accordingly, any given GPS-enabled device may only be looking for a particular type of road furniture, such as a stop sign, for example, while others may be looking for a yield sign or an overhead street sign, for example. This vastly reduces the amount of memory and computation time, as well as the number and type of images that need to be analyzed. If a version of a given model is not identified in a given image, then the image does not need to be maintained in memory, thereby freeing up memory space for other images or intermediate process data. In addition to reducing the amount of memory required for the individual device, it also makes managing the total number of captured images for each type of road furniture from all devices much more manageable. This results from having some devices image one type of road furniture and having other devices image other types of road furniture. Accordingly, the total number of images for each type of road furniture can be controlled, wherein the total number of images for each type of road furniture is substantially less than the total number of images taken by all devices in use.

[00017] In accordance with another aspect of the invention, with reference to Figures 3A-3C, the GPS-enabled device 12, with the camera 14 facing forward in the direction of vehicle travel, is prompted automatically, with knowledge by the vehicle user and without vehicle user interface, to take a series of pictures of the horizon in front of the vehicle 10. This example is similar to that described for the 'stop sign' example illustrated in Figure 2; however, in this example, an object, such as an overhead sign 34, for example, is captured in the images taken at times t1 (Figure 3A), t2 (Figure 3B), and t3 (Figure 3C). At t1, the image shows the overhead sign 34 as a distant object generally located at pixel coordinate (200,100); at t2, the image shows the overhead sign 34 as a slightly enlarged object at about pixel coordinate (200,50); and at t3, the image shows the overhead sign 34 as a larger object yet at about pixel coordinate (200,10). And so, as explained above, with the object 34 increasing in size, we can assume that the object is a type of road furniture getting closer to the vehicle 10 as the vehicle travels along the road 18; and further yet, with the object remaining generally centered in the image at coordinate (200,"Y"), we can assume the road furniture is centered relative to the road 18, unlike the sign 24 of Figure 2. In addition, the object 34 appears to rise within the field of view 19 as the vehicle 10 gets closer to the object 34, thereby verifying that the object 34 is overhead the vehicle 10. All this information is consistent with the object 34 being an 'overhead sign', and thus, if not present on a digital map, it can be added, or if present, it can be verified. Further, as discussed above, various colors associated with the sign can be determined, such as green and white, typically indicating a road junction sign, or blue and white, typically indicating a point-of-interest (POI), for example.
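The overhead-sign inference above can be sketched from the same kind of pixel track. This is an illustrative sketch, not the patent's algorithm: the function name, the 50-pixel centering tolerance, and the strict frame-over-frame rise check are my own assumptions. Note that in the image coordinates defined earlier, the top edge is y=0, so "rising" means y decreasing:

```python
# Hypothetical sketch of the overhead-sign inference: an object that stays
# centered on the road column (x ~ 200) while rising toward the top edge
# (y decreasing toward 0 in image coordinates) is classified as overhead.

def is_overhead(track, center_x=200, x_tol=50):
    """True if a pixel track stays centered while rising toward y = 0."""
    xs = [x for x, _ in track]
    ys = [y for _, y in track]
    centered = all(abs(x - center_x) <= x_tol for x in xs)
    rising = all(a > b for a, b in zip(ys, ys[1:]))  # y shrinks each frame
    return centered and rising

# Overhead-sign track from Figures 3A-3C: (200,100) -> (200,50) -> (200,10)
print(is_overhead([(200, 100), (200, 50), (200, 10)]))   # -> True
# The roadside stop-sign track drifts right and downward instead:
print(is_overhead([(200, 20), (300, 100), (350, 150)]))  # -> False
```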

[00018] In accordance with another aspect of the invention, with reference to Figures 4A-4C, the GPS-enabled device 12, with the camera 14 facing forward in the direction of vehicle travel, is prompted automatically, as described above, to take a series of pictures of the horizon in front of the vehicle 10. However, unlike the previous embodiments, in this embodiment the camera 14 is capturing images of another vehicle 36 in front of the vehicle 10. The vehicle 36 is captured in the images taken at times t1 (Figure 4A), t2 (Figure 4B), and t3 (Figure 4C). At times t1 and t2, the images show the vehicle 36 as an object located generally at the same pixel coordinate (200,100). As such, it is assumed that the vehicle 10 and the vehicle 36 are traveling at substantially constant speeds, and further, that their speeds are substantially the same. Otherwise, if one of the vehicles were traveling at a different speed from the other vehicle, the imaged vehicle 36 would move upwardly or downwardly and respectively decrease or increase in size in the field of view 19. For example, as shown at time t3, the vehicle 36 is located significantly lower and has increased in size in the viewing screen 20, at about pixel coordinate (200,200), in comparison with the image taken previously at t2. Accordingly, either the vehicle 10 sped up drastically between t2 and t3, the vehicle 36 slowed significantly between t2 and t3, or a combination thereof. Regardless, if the driver of the vehicle 10 is not aware of the rapidly approaching vehicle 36, an accident could be forthcoming. However, in accordance with an aspect of this embodiment, the GPS-enabled device can be equipped to indicate to the driver that the vehicle 36 is rapidly approaching. This can be done in a number of ways, either separately or in combination with one another. For example, the GPS-enabled device 12 can be equipped with an audible alarm to alert the driver; or the GPS-enabled device 12 can be configured in communication with a control module of the vehicle, which in turn is triggered to alert the driver via an alarm (audible, visual or otherwise) and/or to automatically actuate a braking mechanism on the vehicle 10 at least until the other vehicle 36 is no longer closing in on the vehicle 10. Accordingly, it should be recognized that the sequential images taken by the camera 14 continue beyond the images shown in Figure 4C. Of course, when the vehicle 36 is verified by analysis of the continued images as resuming a spaced, constant or increasing distance from the vehicle 10, then any alarm trigger can be turned off and reset.

[00019] In addition to detecting the rapidly approaching vehicle based on size and position within the field of view 19, other factors can be used to make a determination that the vehicle 36 is rapidly approaching the vehicle 10. For example, the color (i.e. brake lights), texture, or even heat signature (infrared) of the image can be used to assess the position of the vehicle 36 relative to the vehicle 10. It should be recognized that any of the aforementioned mechanisms can be used separately or in combination with any one or more of the other mechanisms.

[00020] In accordance with yet another aspect of the invention, when the GPS-enabled device 12 captures an image while the driver is navigating a route, such as along the road 18, and the street database of the area along the road 18 is not identified (attributed) with all the current attributes, e.g., the address range, the street names accessing the road 18, etc., the GPS-enabled device 12 can be configured to ask the driver or passenger (if the driver, then during a stopped condition, or if moving, then via voice prompt so the driver can communicate in a hands-free manner so as to not be distracted) to verify an existing attribution or to see if any new attributions exist. For example, if the vehicle 10 is stopped at an intersection, the GPS-enabled device displays or says: "We are unsure of the name of the street you are navigating. Please press or say 'yes' if the street sign showing the name of the street is in view on the street level image currently being displayed in your GPS-enabled device." The image, along with the typed or spoken response, is stored with the coordinates and the direction of travel, and thus, either a verification or an addition is made to the street database. It should be recognized that the same can be done for any point of interest (POI) or traffic control (one ways, reversible lanes, yields, school zones, cross walks, etc.). This could be particularly useful to verify and update areas considered difficult to map via GPS measurements, such as downtown "urban canyons", due to the inherent bouncing of signals off tall buildings.

[00021] Obviously, many modifications and variations of the present invention are possible in light of the above teachings. It is, therefore, to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described.