

Title:
SYSTEM AND METHOD FOR PREVENTING ACCIDENTS
Document Type and Number:
WIPO Patent Application WO/2016/103258
Kind Code:
A1
Abstract:
A system for improved road safety comprising a sub-system for detecting a traffic signal directed to a vehicle, the sub-system comprising: a general positioning system, an output, and an outward looking camera, all mounted within the vehicle and in data communication with a processor having image analysis functionality coupled to a database; the general positioning system for providing a general position of the vehicle; the database comprising data regarding the appearance and relative positions of road-signs and their locations, such that a road sign can be identified within an area obtained by compounding the uncertainty of the general positioning system with the imaged area; the outward looking camera for capturing an image of a field of view; the processor with image analysis functionality for identifying objects within the field of view and for locating the objects by comparison with information in the database, thereby identifying the exact position of the vehicle; and the output for outputting a driver warning.

Inventors:
RAIMAN TIMOR (IL)
Application Number:
PCT/IL2015/051240
Publication Date:
June 30, 2016
Filing Date:
December 22, 2015
Assignee:
RAIMAN TIMOR (IL)
International Classes:
G06K9/00; G08G1/096
Foreign References:
US20100103040A12010-04-29
EP2383679A12011-11-02
EP1383098A12004-01-21
US20060034484A12006-02-16
US20140362221A12014-12-11
Attorney, Agent or Firm:
FACTOR, Michael (Amal 11, Rosh HaAyin, IL)
Claims:
CLAIMS

1. A system for improved road safety, the system comprising a sub-system for detecting a traffic signal directed to a vehicle, the sub-system comprising: a general positioning system; an output; and an outward looking camera, all mounted within the vehicle and in data communication with a common processor having image analysis functionality that is coupled to a database; the general positioning system for providing a general position of the vehicle; the database comprising data regarding the appearance and relative positions of a list of road-signs and their exact locations, such that a road sign can be identified as being one of its occurrences within the area obtained by compounding the uncertainty of the general positioning system with the analyzed portion of the imaged area within that general position; and the outward looking camera for capturing an image of a field of view, the processor with image analysis functionality for identifying objects within the field of view and for locating said objects by comparison with information in the database, thereby identifying the exact position of the vehicle, and the output for outputting a driver warning.

2. The system of claim 1 further comprising at least one installation of a plurality of similar road-signs at distances exceeding an uncertainty span of the general positioning system compounded with an analyzed sub-region of the imaged area of the outward looking camera; thereby, enabling the sub-system to unambiguously identify an imaged road-sign from the plurality of similar road signs.

3. The sub-system of claim 1 wherein the general positioning system is selected from a global positioning system using geostationary satellites and a positioning system using land-based antennae.

4. The sub-system of claim 1 wherein the camera identifies at least one stationary road sign and the output includes data regarding the at least one road sign.

5. The sub-system of claim 4 wherein the at least one road sign is a traffic light.

6. The sub-system of claim 5 wherein a light of the traffic light is a red light, an amber light or a green light and said output data includes the color of said traffic light.

7. The sub-system of claim 1 wherein the output is an alert to a driver of the vehicle.

8. The sub-system of claim 7 wherein the alert comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.

9. The sub-system of claim 1 wherein the output directly controls the vehicle, bypassing a driver.

10. A system comprising a plurality of the sub-systems of claim 1 wherein each subsystem comprises a receiver and the output of each subsystem is a data signal detectable by receivers of other sub-systems.

11. The system of claim 10 wherein the output data includes information about GPS attenuations.

12. The system of claim 10 wherein the output data includes information about traffic signals.

13. A system comprising a plurality of the sub-systems of claim 1 wherein a common computer processor and database are provided with a receiver and a transmitter, the receiver for receiving signals from the outputs of the subsystems and each subsystem comprising a receiver for receiving transmissions from the transmitter.

14. The system of claim 13, wherein the transmitter coupled to the common computer processor and database transmits data calculated from outputs of each subsystem.

15. The sub-system of claim 1, wherein comparing distortion of an image of a road sign in the image stream of the camera to the road sign data in the database provides absolute distance and directional information of the road sign from the vehicle to the sub-system.

16. The sub-system of claim 15 wherein an uncertainty in identification of the road sign is a function of its occurrences within an uncertainty area of the general positioning system compounded with an uncertainty in the absolute distance of the road sign from the vehicle.

17. The subsystem of claim 1 wherein said road signs comprise a painted road sign applied to one of the group of road surfaces, tunnel walls, overhead signs and roadside signs.

18. The sub-system of claim 17, wherein the painted road sign comprises concentric markings.

19. A system comprising the sub-system of claim 16 wherein at least one installation of a plurality of similar road signs at distances exceeding the diameter of an uncertainty area of the general positioning system by an error margin of the road-sign to vehicle distance calculation enables the sub-system to unambiguously identify an imaged road-sign from the plurality of similar road signs.

20. A method for detecting an absolute position of a vehicle comprising:

• providing a subsystem comprising:

a general positioning system, an output and an outward looking camera all mounted within the vehicle, and in data communication with a common processor having image analysis functionality and coupled to a database;

• determining a general geostationary location of the vehicle in absolute coordinates using the general positioning system;

• retrieving a list of objects including traffic signs and their relative positions within the general location from the database;

• capturing an image stream having a field of view with the outward looking camera; and

• comparing the list of objects and their relative positions from the database with the image stream from the forward mounted camera and determining the actual position of the vehicle, and

• outputting a warning to the driver.

21. The method of claim 20 wherein said method further comprises identifying at least one object in a field of view of the camera by comparing candidate data regarding objects within a general vicinity of the vehicle that are listed in a database, with objects in the field of view of the camera.

22. The method of claim 21 wherein the database comprises information regarding several traffic signs within the area corresponding to the analyzed portion of the field of view of the outward looking camera as determined by the general positioning system.

23. The method of claim 21 wherein at least one object is uniquely identified by comparing an analyzed portion of the field of view of the outward looking camera with the position of the vehicle as determined by the general positioning system.

24. The method of claim 21 wherein said at least one object comprises a stationary traffic light.

25. The method of claim 20 wherein the warning comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.

26. The method of claim 20 further comprising outputting a signal to directly control the vehicle, bypassing a driver.

27. The method of claim 20 wherein data from a plurality of sub-systems is received by a computer processor which transmits information to a sub-system of a vehicle of interest.

28. The method of claim 20 wherein a base station comprises a common computer processor and database, and a receiver and a transmitter, and the base station receives signals from each sub-system and transmits information to each sub-system.

29. The method of claim 20, wherein a territory is divided into a tessellation of areas such that each area is larger than the uncertainty in position resulting from the general positioning system by at least the uncertainty in the distance of the recognized traffic sign from the vehicle, and the database associates each of the several traffic signs with the area in which it is installed; thereby, enabling candidates for the recognized traffic sign to be located in the database by association with one area and all its neighboring areas.

30. The method of claim 29 wherein the areas are assigned a coloring so that each area has at most one neighbor of a given color and at least one traffic signal is installed only in areas assigned a particular color; thereby, the at least one traffic signal occurring no more than once in an area and all its neighboring areas and hence being uniquely identifiable to the subsystem.

31. The method of claim 20 wherein at least one traffic sign is installed at distances exceeding the span of the analyzed portion of the field of view of the outward looking camera by the uncertainty in position resulting from the general positioning system; thereby, being uniquely identifiable to the subsystem.

32. The sub-system of claim 17, wherein the image-analysis functionality is split into phases, with an initial phase recognizing a constant shape shared by painted road signs and subsequent phases being foregone when the constant shape is not found in the image stream, thereby conserving processor power usage.

Description:
SYSTEM AND METHOD FOR PREVENTING ACCIDENTS

FIELD OF THE INVENTION

The present invention relates generally to systems and methods for preventing accidents, to systems and methods for vehicle navigation, to traffic control systems and methods, and to systems and methods for warning drivers to prevent road accidents.

BACKGROUND OF THE INVENTION

Despite modern technology, there are numerous road, rail, sea and air accidents every day worldwide. Some of the accidents occur due to driver error. One type of driver error is where the driver fails to act on a traffic control signal such as a stop sign, a give way sign or a traffic light. This may be due to misinterpretation of the traffic control signal or simply ignoring it, such as by driving through a red light, driving beyond the speed limit, or driving against the direction of traffic. Another type of driver error is where the driver neglects to take account of road conditions or miscalculates the braking distance to an intersection.

In order to prevent accidents by computer-assisted technology, it is generally required to identify a vehicle's location with regard to the road with high precision, so as to be able to determine the driving lane that the vehicle is in and which road signs or signals are directed to the driver. This requires knowing the position of the vehicle with high precision (e.g., the driving lane), knowing the state of traffic control signals in real time and identifying when a driver is distracted or otherwise needs to be warned.

Onboard cameras coupled with automated image analysis programs are inadequate for recognizing traffic lights, since a red light may be a traffic signal, the rear light of a vehicle or a braking light. Even where a red light within the field of view of an onboard camera is indeed a traffic signal, and is correctly identified as a stop signal, without much more information it is not clear whether this is an instruction to the vehicle with the on-board camera (the host vehicle) or whether it relates to vehicles in other traffic lanes. In some instances such lights are intended for traffic coming into a junction from a different direction, and the host vehicle has right of way and should not be stopping.

The Global Positioning System (GPS) is a satellite navigation system that provides location information anywhere on or near the Earth's surface. It comprises a number of satellites in orbit above Earth. Each satellite continually transmits messages that include the time the message was transmitted, and the satellite position. On the ground the GPS unit receives these messages and, by comparing the time at which the message was received (on its internal clock) against the time which the message was transmitted, it works out how far away it is from each satellite.

A good aerial is required in order to detect the message signals coming from the GPS satellites. The strength of a GPS signal is often expressed in decibels referenced to one milliwatt (dBm). By the time the signals have covered the roughly 20,200 km from satellite to the Earth's surface, the signal is typically as weak as -125 dBm to -130 dBm, even in clear open sky. In built-up urban environments or under tree cover the signal can drop to as low as -150 dBm (the larger the negative value, the weaker the signal). At this level some GPS devices struggle to acquire a signal (but may be able to continue tracking if a signal was first acquired in the open air). A good high-sensitivity GPS receiver can acquire signals down to -155 dBm, and tracking can be continued down to levels approaching -165 dBm.
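For concreteness, dBm converts to absolute power as P = 1 mW x 10^(dBm/10). A minimal Python sketch (added for illustration only) evaluating the levels quoted above:

```python
def dbm_to_watts(dbm: float) -> float:
    """Convert a power level in dBm (decibels referenced to 1 milliwatt) to watts."""
    return 1e-3 * 10 ** (dbm / 10)

# The signal levels quoted above, in absolute terms:
for level_dbm in (-125, -130, -150, -155, -165):
    print(f"{level_dbm} dBm = {dbm_to_watts(level_dbm):.2e} W")
# -125 dBm is about 3.2e-16 W: GPS signals at the surface are extremely weak.
```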

In theory, a three-satellite system provides all the data needed to calculate a reasonably accurate location. However, due to clock inaccuracies, in practice signals must be received from a minimum of four satellites in order to correct for errors.

Although early GPS receivers were limited in the number of satellites they could track at any one time, modern GPS receivers have enough "tracking channels" to follow all satellites in view.

To calculate the distance between the GPS receiver and each satellite, the receiver first calculates the time that a signal has taken to arrive. It does this by taking the difference between the time at which the signal was transmitted, which is included in the signal message, and the time at which the signal was received, according to an internal clock. As the signals travel at the speed of light, even a 0.001 second error equates to a 300 km inaccuracy in the calculated distance.
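The range computation itself is a one-liner; a minimal sketch of the time-of-flight relation and the 0.001-second figure above:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(t_transmit_s: float, t_receive_s: float) -> float:
    """Distance implied by the signal's travel time (assuming perfect clocks)."""
    return SPEED_OF_LIGHT * (t_receive_s - t_transmit_s)

# A 0.001 s clock error translates into roughly 300 km of range error:
print(SPEED_OF_LIGHT * 0.001 / 1000.0, "km")  # ~299.8 km
```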

To reduce the GPS error level to the order of meters would require an atomic clock. However, not only is this impracticable for consumer GPS devices, the GPS satellites themselves are only accurate to about 10 nanoseconds (in which time a signal travels 3 m). It is precisely for this reason that a minimum of four satellites is required: the additional satellite(s) are used to help correct for the error. Thus, although rarely publicized, it is important that a GPS receiver includes good error-correction algorithms, and even then the accuracy of positioning for a moving vehicle is typically only within about 30 meters. Consequently, in many cases, current GPS systems cannot identify a vehicle's location with sufficient precision to enable automated detection of traffic control signals, and the driver is required to judge the driving conditions on his own. Drivers are, however, prone to lapses of attention.
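The role of the additional satellite can be made concrete: the receiver solves for four unknowns, three position coordinates plus its own clock bias. A minimal numerical sketch (the satellite positions and pseudoranges are synthetic, chosen only for illustration) using Gauss-Newton iteration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats: np.ndarray, rhos: np.ndarray, iterations: int = 10):
    """Recover receiver position and clock bias from four or more pseudoranges.

    sats: (n, 3) satellite positions in metres; rhos: (n,) measured pseudoranges.
    Gauss-Newton iteration on the residuals ||x - s_i|| + C*b - rho_i, with the
    four unknowns (x, y, z, b).
    """
    x = np.zeros(3)  # initial guess: the centre of the Earth
    b = 0.0          # receiver clock bias in seconds
    for _ in range(iterations):
        diffs = x - sats                        # (n, 3)
        dists = np.linalg.norm(diffs, axis=1)   # (n,)
        residuals = dists + C * b - rhos
        jac = np.hstack([diffs / dists[:, None], np.full((len(sats), 1), C)])
        step, *_ = np.linalg.lstsq(jac, -residuals, rcond=None)
        x, b = x + step[:3], b + step[3]
    return x, b

# Synthetic check: four well-spread satellites at GPS orbital radius.
directions = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
sats = 26_560_000.0 * directions / np.linalg.norm(directions, axis=1, keepdims=True)
true_pos = np.array([4.0e6, 3.0e6, 3.0e6])   # a point near the Earth's surface
true_bias = 1e-4                              # 100 us of clock error ~ 30 km of range
rhos = np.linalg.norm(true_pos - sats, axis=1) + C * true_bias
pos, bias = solve_position(sats, rhos)
print(pos, bias)  # recovers true_pos and true_bias to numerical precision
```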

Thus despite onboard cameras coupled to computers and GPS systems, there is a need for a system that can detect traffic signals directed to a host vehicle for alerting the driver or for automatic or semi-automatic control.

There is thus a need for a comprehensive and affordable solution which may prevent or minimize accidents while being readily accessible to drivers.

SUMMARY OF THE INVENTION

Embodiments of the present invention identify the precise location of the vehicle, locate traffic lights and identify their status.

Additionally, a small set of new road markings is proposed. Due to the small size of this set, the road markings can be produced accurately with only a few stencils.

Although the invention enables traffic lights to be monitored by an onboard subsystem of a vehicle, the traffic light itself does not require new hardware.

In preferred embodiments, different vehicles communicate so that knowledge regarding the traffic lights is relayed to a vehicle of interest from other vehicles far earlier than the vehicle of interest is able to image the traffic light directly, for example when the traffic light is obscured.

A first aspect is directed to a system for improved road safety, the system comprising a sub-system for detecting a traffic signal directed to a vehicle, the subsystem comprising: a general positioning system; an output, and an outward looking camera all mounted within the vehicle, and in data communication with a common processor having image analysis functionality that is coupled to a database;

The general positioning system for providing a general position of the vehicle;

the database comprising data regarding the appearance and relative positions of a list of road-signs and their exact locations, such that a road sign can be identified as being one of its occurrences within the area obtained by compounding the uncertainty of the general positioning system with the analyzed portion of the imaged area within that general position; and the outward looking camera for capturing an image of a field of view, the processor with image analysis functionality for identifying objects within the field of view and for locating said objects by comparison with information in the database, thereby identifying the exact position of the vehicle, and the output for outputting a driver warning.

Preferably, the system further comprises at least one installation of a plurality of similar road-signs at distances exceeding an uncertainty span of the general positioning system compounded with an analyzed sub-region of the imaged area of the outward looking camera; thereby, enabling the sub-system to unambiguously identify an imaged road-sign from the plurality of similar road signs.
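A minimal sketch of this disambiguation rule (the record layout, function name and planar distance model are illustrative assumptions, not taken from the disclosure): a recognized sign is accepted only if its type occurs exactly once within the compounded uncertainty radius.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class SignRecord:
    sign_type: str   # e.g. "stop", "traffic_light", or a tag-lexeme identifier
    x: float         # surveyed position in metres, local tangent plane (assumed)
    y: float

def unambiguous_match(database, sign_type, gps_xy, gps_error_m, camera_span_m):
    """Return the unique database record that can explain an imaged sign.

    The search radius compounds the positioning uncertainty with the span of
    the analyzed portion of the image, per the rule described above. If the
    same sign type occurs more than once inside that radius, identification
    is ambiguous and None is returned.
    """
    radius = gps_error_m + camera_span_m
    cx, cy = gps_xy
    hits = [s for s in database
            if s.sign_type == sign_type and hypot(s.x - cx, s.y - cy) <= radius]
    return hits[0] if len(hits) == 1 else None
```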

The general positioning system may be selected from a global positioning system using geostationary satellites and a positioning system using land-based antennae.

In an embodiment of the sub-system described above, the camera identifies at least one stationary road sign and the output includes data regarding the at least one road sign.

In some embodiments and applications, the at least one road sign is a traffic light; for example, where a light of the traffic light is a red light, an amber light or a green light, and said output data includes the color of said traffic light.

Typically, the output is an alert to a driver of the vehicle. Optionally, the alert comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.

Alternatively, the output directly controls the vehicle, bypassing a driver.

A second aspect is directed to a system comprising a plurality of the subsystems described above, wherein each sub-system comprises a receiver and the output of each subsystem is a data signal detectable by receivers of other sub-systems.

In some systems, the output data includes information about GPS attenuations.

Optionally, the output data includes information about traffic signals.

An aspect of the invention is directed to a system comprising a plurality of the sub-systems as above, wherein a common computer processor and database are provided with a receiver and a transmitter, the receiver for receiving signals from the outputs of the subsystems and each subsystem comprising a receiver for receiving transmissions from the transmitter.

Typically, the transmitter coupled to the common computer processor and database transmits data calculated from outputs of each subsystem.

Optionally, comparing distortion of an image of a road sign in the image stream of the camera to the road sign data in the database provides absolute distance and directional information of the road sign from the vehicle to the sub-system.

Optionally, an uncertainty in identification of the road sign is a function of its occurrences within an uncertainty area of the general positioning system compounded with an uncertainty in the absolute distance of the road sign from the vehicle. In some systems, dedicated road signs comprise a painted road sign applied to one of the group of road surfaces, tunnel walls, overhead signs and roadside signs.

Usefully, the painted road sign comprises concentric markings.

In some systems, at least one installation of a plurality of similar road signs at distances exceeding the diameter of an uncertainty area of the general positioning system by an error margin of the road-sign to vehicle distance calculation enables the sub-system to unambiguously identify an imaged road-sign from the plurality of similar road signs.

A further aspect of the invention is directed to a method for detecting an absolute position of a vehicle comprising:

providing a subsystem comprising:

a general positioning system, an output and an outward looking camera all mounted within the vehicle, and in data communication with a common processor having image analysis functionality and coupled to a database;

determining a general geostationary location of the vehicle in absolute coordinates using the general positioning system;

retrieving a list of objects including traffic signs and their relative positions within the general location from the database;

capturing an image stream having a field of view with the outward looking camera; and

comparing the list of objects and their relative positions from the database with the image stream from the forward mounted camera and determining the actual position of the vehicle, and

outputting a warning to the driver.

The method may further comprise identifying at least one object in a field of view of the camera by comparing candidate data regarding objects within a general vicinity of the vehicle that are listed in a database, with objects in the field of view of the camera.

The database may comprise information regarding several traffic signs within the area corresponding to the analyzed portion of the field of view of the outward looking camera as determined by the general positioning system.

Using the method, at least one object may be uniquely identified by comparing an analyzed portion of the field of view of the outward looking camera with the position of the vehicle as determined by the general positioning system. Optionally, at least one object comprises a stationary traffic light.

In some embodiments, the warning comprises at least one of a haptic signal, an audible signal and a visual signal to the driver.

The method may further comprise outputting a signal to directly control the vehicle, bypassing a driver.

Optionally, in the method, data from a plurality of sub-systems is received by a computer processor which transmits information to a sub-system of a vehicle of interest.

Optionally, in the method, a base station comprises a common computer processor and database, and a receiver and a transmitter; the base station receives signals from each sub-system and transmits information to each sub-system.

Optionally, in the method, a territory is divided into a tessellation of areas such that each area is larger than the uncertainty in position resulting from the general positioning system by at least the uncertainty in the distance of the recognized traffic sign from the vehicle, and the database associates each of the several traffic signs with the area in which it is installed; thereby, enabling candidates for the recognized traffic sign to be located in the database by association with one area and all its neighboring areas.

Optionally, the areas are assigned a coloring so that each area has at most one neighbor of a given color and at least one traffic signal is installed only in areas assigned a particular color; thereby, the at least one traffic signal occurring no more than once in an area and all its neighboring areas and hence being uniquely identifiable to the subsystem.

Optionally, at least one traffic sign is installed at distances exceeding the span of the analyzed portion of the field of view of the outward looking camera by the uncertainty in position resulting from the general positioning system; thereby, being uniquely identifiable to the subsystem.

In some embodiments, the image-analysis functionality is split into phases, with an initial phase recognizing a constant shape shared by painted road signs and subsequent phases being foregone when the constant shape is not found in the image stream, thereby conserving processor power usage.

DESCRIPTION OF FIGURES

For a better understanding of the invention and to show how it may be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings.

With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention; the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.

Fig. 1 is a simplified pictorial illustration showing a system for preventing accidents, in accordance with an embodiment of the present invention;

Fig. 2 is a simplified schematic illustration of an on-board device for preventing accidents, in accordance with an embodiment of the present invention;

Fig. 3 is a simplified schematic flowchart of a method for issuing driver warnings, in accordance with an embodiment of the present invention;

Fig. 4 is a simplified schematic flowchart of a method for determining a "unique location" presently "viewed" by a camera, in accordance with an embodiment of the present invention;

Fig. 5 is a simplified schematic flowchart of a method for assigning "tag lexemes" to locations of interest, in accordance with an embodiment of the present invention;

Fig. 6 is a simplified schematic illustration of a sample surface grid coloring and "tag lexeme" placement, in accordance with an embodiment of the present invention;

Fig. 7 is a simplified schematic illustration of sample "tag lexeme" shapes, in accordance with an embodiment of the present invention;

Fig. 8 is a simplified schematic illustration of one kind of recognizable "tag lexeme" shape, in accordance with an embodiment of the present invention;

Fig. 9 is a simplified flowchart of a method for recognizing a "tag lexeme" shape, in accordance with an embodiment of the present invention;

Fig. 10 is a simplified schematic flowchart of a method for determining the "viewing angles" of a camera, in accordance with an embodiment of the present invention;

Fig. 11 is a simplified schematic flowchart of a method for looking up "recognizable objects" for "viewing angles" determination, in accordance with an embodiment of the present invention;

Fig. 12 is a simplified schematic flowchart of a method for determining a "travel vector", in accordance with an embodiment of the present invention;

Fig. 13 is a simplified schematic flowchart of a method for looking up "recognizable objects" for "travel vector" determination, in accordance with an embodiment of the present invention;

Fig. 14 is a simplified schematic flowchart of a method for determining the states of "traffic control signals", in accordance with an embodiment of the present invention;

Fig. 15 is a simplified schematic flowchart of a method for looking up "signal recognizable objects" for "traffic control signal" state determination, in accordance with an embodiment of the present invention;

Fig. 16 is a simplified schematic of a "unique location" entity relation model, in accordance with an embodiment of the present invention;

Fig. 17 is a simplified schematic of a "(signal) recognizable object" entity relation model, in accordance with an embodiment of the present invention;

Fig. 18 is a simplified pictorial illustration of a mounting fixture, in accordance with an embodiment of the present invention; and

Fig. 19 is another simplified pictorial illustration of a mounting fixture, in accordance with an embodiment of the present invention.

DEFINITIONS AND NOMENCLATURE

In all the figures similar reference numerals identify similar parts.

Throughout this document, including the claims where applicable, the term "vehicle" is used in a broad sense to denote any transportation apparatus, whether operated by a person or otherwise. In this respect, the term "driver" is used to denote whatever entity operates the vehicle and the terms "warning" and "alert" are used to denote an instruction relevant to the vehicle operation in the immediate, medium or long term.

For simplicity, the term "road" is used to denote the ground over which the vehicle moves. Although intended primarily for road vehicles, such as cars (automobiles) and the like, the term road should be understood as including other ground surfaces.

The term "traffic signal" is used to designate visual signals, particularly traffic lights. However, in general, it is not required that a "traffic signal" in and of itself bear an instruction to a "driver".

The term traffic light refers primarily to what are in some countries called traffic robots, i.e. red and green lights, and typically red, amber and green arrays, that indicate that a vehicle should stop or may continue its journey.

The term "recognizable object" is used to denote any entity, whether natural or man-made, or a collection of such entities, which form some visually recognizable pattern or patterns not necessarily unique. This includes buildings. In this respect, the terms "object recognition algorithm" or "object recognition method" refer to a means of searching for or confirming the presence of a "recognizable object" or a class of similar "recognizable objects" in an image, and the term "recognition signature" of a "recognizable object" refers to the data needed to be made available to the "object recognition algorithm" in order for it to search for or confirm the presence of said "recognizable object" in a given image. An area of the image where an "object recognition algorithm" should search for a "recognizable object" or where one was found will be referred to throughout the disclosure as "bounding box". In some embodiments of the invention the aforementioned "object recognition method" may be similar to or based upon one or more existing prior work object recognition algorithms, such as SURF.

In addition, throughout this document, the terms "recognizable signal object" and "signal recognizable object" both refer to a "recognizable object" which possesses one or more additional visually recognizable patterns ("signal recognition" patterns), possibly equivalent to the visually recognizable pattern of the "signal recognizable object" itself. In the domain of the invention, the presence of one or more such "signal recognition" patterns is indicative of a particular "traffic control signal" actively or passively conveying a particular instruction to a "driver" or "drivers" - for example, a three-colored traffic light displaying a red signal. We will say that a "recognizable signal object" is in a particular "state" when it is displaying none, one or more of its "signal recognition" patterns.

To eliminate doubt, "recognizable signal objects" whose "signal recognition" patterns are equivalent to the "signal recognizable object's" own visually recognizable pattern are simply those "signal recognizable objects" whose very presence in a scene is indicative of a particular "traffic control signal" actively or passively conveying a particular instruction to a "driver" or "drivers" - for example, a lowered boom gate.

It is noteworthy that multiple "recognizable signal objects" may be associated with one "traffic control signal".

The terms "surface grid pattern" and "tessellation of areas" are used interchangeably to denote a segmentation of the transportation medium into adjacent or overlapping segments.

Following this, a "space grid element" is one such segment, and equivalently a "(tessellation) area".

The term "unique location" is used to denote a particular coordinate in the transportation medium which is chosen to have certain data about it recorded in the system of the invention, as will become clear.

The term "tag lexeme" is used throughout the disclosure to denote a visual marking or pattern which can be applied to the transportation medium or to other entities at or near a "unique location", or otherwise be made visible or imaged by a camera located at or near a "unique location". The term "viewing", when applied to a camera, means capturing and streaming a view of a location, so that some

"recognizable objects" associated by the system of the invention with the said unique location can be imaged by the camera, as described in detail below.

We will also use the conjunct "unique location of a vehicle" in a broader sense to refer to the general area in the transportation medium such that a camera placed in this area can "view" the "unique location", according to the above terminology.

In addition, throughout this document, the term "viewing angles" is used to denote the camera orientation angles when it captures an image while "viewing" a particular unique location. In particular, the term "viewing angles" refers to the camera roll, pitch and azimuth. For the purpose of the ongoing discussion it is not necessary to specify what are the reference axes for quantifying the actual angles of roll, pitch and azimuth; but to simplify understanding of the disclosure, these can be thought of as the line of horizon for roll, the plane formed by the line of horizon and the camera for pitch and a specific arbitrary vector in this plane for azimuth - for example, the direction of normal "vehicle" traffic transition past the presently "viewed" unique location. According to this, the reference vector for the azimuth component of the "viewing angles" is likely to be different whenever a camera "views" a different "unique location".

The term "travel vector", as used throughout the disclosure, refers to the vector in the transportation medium from the coordinates of a particular "unique location" to some other coordinates - in particular, the coordinates where a camera was situated when it captured a particular image.

In addition, we will use the term "precise location" to refer to a specific coordinate in the transportation medium. Specifically, a "precise location" is the combination of a "unique location" and a "travel vector".

Similarly, the term "height" is used throughout the disclosure to denote a distance in a vector perpendicular to the plane formed by the line of horizon and a camera, usually with reference to the transportation medium surface.

Unless indicated otherwise, throughout the disclosure, we do not discuss methods for determining the "height" of a camera position associated with a particular captured image. It is assumed that either this height is close to constant or can be computed by methods similar or equivalent to other methods presented herein.

Specifically the "height" can be computed by a method similar to the method for determining the "travel vector", described below. This should not limit the scope of the invention to transportation media where the concept of "height" has little significance with respect to operation of the "vehicles" of that transportation medium.

The reader should be aware that the discussion of the computation of the "height" is omitted in order not to complicate the disclosure beyond what is necessary. Furthermore, most embodiments of the invention will require either computation or prior knowledge of the camera "height".

Finally, the term "movement function" is used throughout the disclosure to refer to a transformation, usually associated with a particular "recognizable object" recorded in the system of the invention. When, given as input camera attributes (for example, focal length and pixels per degree), a particular "height", particular "viewing angles" and a particular "travel vector", the "movement function" provides as output the "bounding box" where the recognizable object is expected to appear in an image captured by a camera possessing the given attributes situated at the given "height" at the "precise location" formed by the given "travel vector" in the presently viewed "unique location" and oriented at the given "viewing angles". Where discussion of the "height" is omitted, it may be assumed that

"movement functions" are computed from the supplied "viewing angles" and "travel vector" and an implied "height". Thus, whenever we say in the disclosure that a "movement function" is applied or computed, it is provided a "height", whether or not this is explicitly mentioned in the text.

Similarly, methods for extraction of camera attributes are not discussed.

Nevertheless, whenever it is stated that a "movement function" is applied or computed, such camera attributes may be made available to the "movement function" whether or not this is explicitly mentioned in the text. Alternatively, some embodiments of the present invention may employ a different "movement function" for various combinations of camera attributes. This is considered an implementation detail and is therefore omitted from further discussion in the body of the disclosure and the claims.
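A minimal sketch of one possible "movement function" under a pinhole-camera assumption (the axis conventions, parameter names and the spherical object-size model are illustrative assumptions; the disclosure deliberately leaves these unspecified):

```python
import numpy as np

def orientation_matrix(roll, pitch, azimuth):
    """Camera orientation from the "viewing angles" (radians). The axis
    conventions here are illustrative assumptions; the disclosure leaves the
    reference axes unspecified."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    r_az = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    r_roll = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    return r_az @ r_pitch @ r_roll

def movement_function(obj_xyz, obj_radius_m, travel_vector, height, angles,
                      focal_px, centre_px):
    """Predict the "bounding box" of a recognizable object in the image.

    obj_xyz: object coordinates relative to the "unique location" (metres);
    travel_vector: camera ground position relative to the same origin;
    angles: (roll, pitch, azimuth); focal_px / centre_px: camera attributes.
    Returns (left, top, right, bottom) in pixels, or None if the object lies
    behind the camera. A pinhole model is assumed, with +y the optical axis.
    """
    cam = np.array([travel_vector[0], travel_vector[1], height])
    v = orientation_matrix(*angles).T @ (np.asarray(obj_xyz, float) - cam)
    if v[1] <= 0.0:
        return None
    u = centre_px[0] + focal_px * v[0] / v[1]   # horizontal pixel
    w = centre_px[1] - focal_px * v[2] / v[1]   # vertical pixel
    half = focal_px * obj_radius_m / v[1]       # apparent half-size in pixels
    return (u - half, w - half, u + half, w + half)
```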

DESCRIPTION OF THE EMBODIMENTS

As is common for computer implemented inventions, specific embodiments of systems, subsystems and devices are illustrated as functional block diagrams and specific process algorithms are illustrated as flow charts.

Reference is now made to Fig. 1, which is a simplified pictorial illustration showing a system 100 for preventing accidents, in accordance with an embodiment of the present invention. The system comprises an on-board subsystem 101 for detecting a traffic signal directed to a vehicle 102, the subsystem comprising: a GPS system, an output and an outward looking, in this case forward mounted, camera, all mounted within the vehicle 102 and in data communication with a common processor having image analysis functionality that is coupled to a database; the GPS for providing a general position of the vehicle; the database for providing data regarding objects and their relative positions within that general position; and the forward mounted camera for capturing an image of a field of view, the processor with image analysis functionality for identifying objects within the field of view and for locating said objects by comparison with information in the database, thereby identifying said traffic signal unambiguously, and the output for outputting data that includes the identity of said traffic signal and its location.

Preferably, the sub-system 101 is configured and constructed to broadcast and receive signals via a local broadcast network (peer-to-peer communication network) 103. The sub-system may be integrated into a single unitary device and may comprise a modern smart phone that runs a GPS navigation application, includes a camera application and has telecommunication capability with a transmitter and receiver for transmitting and receiving data. The sub-system 101 is placed inside a vehicle 102, such as a car, for example.

Typically, the sub-system 101 is mounted in a mounting apparatus 104 or cradle, which enables the camera of the sub-system 101 to capture and/or view images of the road ahead of the vehicle.

Tag lexemes 105 are applied to the road surface or road signs (not shown) and can be viewed by the camera of the on-board sub-system 101. Preferably, the subsystem 101 is configured and constructed to broadcast and receive signals via a centralized network connection 106 to a backbone (centralized) network 107, such as the Internet, for example.

Moreover, a server 108 and database 109 are preferably provided. These are constructed and configured to broadcast and receive signals to the backbone (centralized) network 107.

It will be appreciated that there are very many implementations whereby 'computing' is performed by a single processor or by a collection of processors working together. Sometimes processing is divided between the onboard subsystem 101 and one or more servers 108. Similarly, some information will be stored in a memory, such as the flash memory of the on-board sub-system, whether a dedicated navigation-safety system or a smart-phone, both of which have flash memories. Other information may be held in a server 108. Information may be transmitted from the on-board sub-systems 101 in each vehicle 102 to a central server 108, and location-specific data may be transmitted from the central server 108 to the on-board subsystems 101 in each vehicle 102 to provide location-relevant information.

An existing road surface marking 110 might be imaged/viewed by the device 101 and might be utilized by the methods of the invention as a "recognizable object" along with other visually recognizable entities (not shown) potentially also visible to the device 101.

With reference to Fig. 2, a simplified schematic illustration of an on-board device 200 for preventing accidents, in accordance with an embodiment of the present invention is shown. On-board device 200 may be a specific implementation of subsystem 101 shown in Fig. 1. On-board device 200 is connectable / connected to a backbone network 201 and a local broadcast network 202. The device 200 is able to receive and send data/signals from/to these networks 201, 202.

The on-board device 200 comprises a processor 211, such as a central processing unit (CPU) programmed with appropriate software, and further comprises a global positioning system (GPS) sensor 203 and a forward-facing camera 204.

The on-board device 200 may also include other subsystems and functionality, such as a cabin (inside vehicle) camera 205, a cabin ambient microphone 206, orientation sensors 207, a video screen 208, an audio output device 209, a haptic device 210 for creating vibratory stimulation, and an optional local memory 212. In sub-systems 101 that are not unitary, some of these functions may be provided by separate units, such as a GPS unit, a camera unit and an onboard computer. These may exchange data via wired or wireless connections, such as Bluetooth™, for example.

In embodiments intended for alerting the driver, the sub-system 101 includes an output. Typically this is an audible alert and may be abstract or may be words generated by a speech synthesizer or pre-recorded messages such as "approaching traffic light", "approaching red traffic light", "stop sign ahead" and the like. The alert may, however, be a visual output such as a flashing light or written instruction, and may be projected onto the windscreen, onto the glasses of the driver, or even directly onto the retina of the driver. Alternatively, or additionally, the alert may be a haptic signal, such as a vibration transmitted to the driver's body through the seat, through the pedals or through the seatbelt, for example. Theoretically, though uncommon, the sense of smell or taste could be alerted. Two or more senses may be stimulated with alerts at the same time.

Reference is now made to Fig. 3, which is a simplified schematic flowchart of a method 300 for issuing driver alerts, in accordance with an embodiment of the present invention. The forward-facing camera 204 and GPS sensor 203 of the sub-system 101 or device 200 are operative to capture camera images 310 and obtain GPS readings 312. The camera images 310 and GPS readings 312 may be obtained and transmitted to the processor of the sub-system 101 or device 200 continuously, semi-continuously or intermittently.

The images and/or data/signals associated therewith 310 along with the GPS readings 312 are fed into a unique location algorithm 316 discussed in detail below with respect to the flowchart shown in Fig. 4.

Once the unique location algorithm 316 is able to recognize the "unique location" which is presently "viewed" by the device's camera, this "unique location" is fed into the instantaneous viewing angles algorithm 318, along with the images captured in step 310. The output of the instantaneous viewing angles algorithm 318 is then fed into the instantaneous travel vector algorithm 320, along with the images captured in step 310. The instantaneous travel vector algorithm 320 is discussed in detail below with reference to the flowchart shown in Fig. 12.

The output of the instantaneous travel vector algorithm 320 together with the output of the unique location algorithm 316 allows determining the instant precise location of the vehicle 322.

In embodiments and applications involving crowd-sourced supplementary data from other drivers operating similar subsystems 101, or compatible subsystems, in their vehicles, concurrently device 200 or sub-system 101 is operative to obtain crowd GPS assist data 314 via the local broadcast network 202 or the backbone network 201 from other nearby devices. Similarly, where provided, device 200 or sub-system 101 may obtain crowd GPS assist data 314 and other data from the central server 108. The GPS assist data 314 may increase the accuracy of the GPS 203 reading enough to conclude the current "precise location" of the vehicle 102 in step 322 even when this would not be possible via steps 316, 318 and 320 alone, for example when no "tag lexeme" or "recognizable objects" can be imaged by the device 200, as will become clear.

Once the current "precise location" of the vehicle is concluded in step 322, it is fed along with the current GPS reading 312 into step 324, whereby new crowd GPS assist data 314 is calculated and broadcast to the local broadcast network and/or to the central server 108.
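One plausible realization of steps 314 and 324 (the message fields and the averaging rule are illustrative assumptions, not specified by the disclosure) is for each vehicle that knows its "precise location" to broadcast the offset between that location and its raw GPS reading, with nearby vehicles applying the average of recent offsets:

```python
import time
from statistics import mean

def make_assist_message(precise_xy, gps_xy):
    """Step 324 sketch: broadcast the locally observed GPS offset once the
    precise location is known (field names are illustrative)."""
    return {
        "t": time.time(),
        "dx": precise_xy[0] - gps_xy[0],
        "dy": precise_xy[1] - gps_xy[1],
    }

def apply_assist(gps_xy, messages, max_age_s=30.0):
    """Step 314 sketch: correct a raw GPS reading using recent offsets
    received from nearby vehicles over the local broadcast network."""
    now = time.time()
    fresh = [m for m in messages if now - m["t"] <= max_age_s]
    if not fresh:
        return gps_xy
    return (gps_xy[0] + mean(m["dx"] for m in fresh),
            gps_xy[1] + mean(m["dy"] for m in fresh))
```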

Further, the current "precise location" as concluded in step 322, along with the output of the unique location algorithm 316 and the instantaneous viewing angle algorithm 318 is fed into a signal state recognition algorithm in step 328, which is discussed in detail below with respect to the flowchart shown in Fig. 14.

As detailed below (Fig. 14), in step 328, any signal states not recognized directly may be obtained via the local broadcast network 202 or the backbone network 201 from other nearby devices or from the central server 108.

In step 326, the output of step 328 is considered along with data on signal state recognitions by other nearby devices (not shown), to reduce false recognitions and to safely conclude the current signal states of all or some traffic control signals relevant to the presently "viewed" "unique location".

Thereafter, data on signal state recognition by the device 200 is broadcast via the local broadcast network and/or the backbone network to other devices and/or to the central server in step 330.

Once available, the "precise location", determined in step 322, is used to identify the relevant "traffic control signals" in step 332.

The relevant "traffic control signals" are those "traffic control signals" whose state dictates a particular way of operating the "vehicle" in the immediate or long term; particularly but not exclusively traffic lights.

In step 334, the system of the invention is then operative to consider location-related and signal-related timing factors relating to the "precise location" from step 322, the relevant "traffic control signals" from step 332 and the states of the latter from step 326. Thereafter, in a decision step 336, the system is operative to decide if the "driver" should be warned about a signal state. If the output is YES, a driver alert or warning is issued in an issue driver warning step 338 via, for example, the audio, video or haptic outputs of the on-board device 200 (Fig. 2) or sub-system 101.

Thereafter, a query step 340 is performed to query a central database for location-specific warnings relevant to the "precise location" from step 322. In addition, distractions such as noise inside the cabin, driver eye movement, etc., if detected and monitored, and driving-condition factors such as time of day, speed, weather, road bends, relevant statistics, etc., may be considered - step 342.

Following this, the system is operative to consider whether the driver should be warned about current location conditions and / or suitability of his / her driving in step 344. If yes, a driver warning is issued - step 346.

Fig. 4 is a simplified schematic flowchart of a method for determining a "unique location" presently "viewed" by a camera, in accordance with an embodiment of the present invention. With reference to Fig. 4, a method 400 for determining the unique location, which is one embodiment of the unique location algorithm 316 of Fig. 3, is detailed.

Firstly, camera images are obtained in a camera image obtaining step 402 by camera 204 (Fig. 2). In parallel, in an optional obtain viewing angle step 404, viewing angles are obtained using orientation sensors 207 and/or one or more of cameras 204 (and 205, where provided); see Fig. 2. The outputs of steps 402 and 404 are fed into a searching step 408, for searching for a "tag lexeme" in the image, for example as described hereinbelow in Figs. 7-9.

In checking if a tag lexeme is recognized, step 410, the system is operative to determine if a tag lexeme has been recognized in the previous step 408. If the response is negative, i.e. NO, more images, viewing angles and GPS readings are obtained in steps 402, 404 and 406 respectively, and the process is repeated. If the response is YES then step 416, detailed herein below, is performed.

Possibly in parallel to steps 402, 404, 408 and 410, GPS readings are obtained in step 406 using GPS sensor 203 (Fig. 2). Then, in step 412, the outputs of step 406 are used to determine the identification or ID of the space grid corresponding to the GPS reading obtained in step 406. Thereafter, all GPS grids neighboring the grid from step 412 are determined in step 414.

In step 416, the identified tag lexeme from step 410 is looked up in the database 109 or local storage 212 among tag lexemes in place in the identified grids from steps 412 and 414, where, due to the provisions of the method for tag lexeme assignment described herein below in Fig. 5, the identified tag lexeme is guaranteed to appear at most one time; and thus, the space grid where the recognized tag lexeme is in use is uniquely identified.

The output of step 416, along with the recognized tag lexeme from step 410, uniquely identifies the specific appearance of the identified tag lexeme - a "unique location" "viewed" by the camera at the time that the imaging step 402 was performed. In step 418, this "unique location" is concluded or deduced, concluding method 400.
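A minimal sketch of steps 412-418 for the hexagonal-grid example developed below in Fig. 5 (the axial-coordinate construction and index layout are illustrative assumptions):

```python
import math

def hex_cell(x, y, size):
    """Axial cell id for a pointy-top hexagonal grid of circumradius `size`.
    Standard cube-rounding construction (an assumption; the disclosure only
    requires some tessellation of the territory)."""
    qf = (math.sqrt(3) / 3 * x - y / 3) / size
    rf = (2 / 3 * y) / size
    q, r, s = round(qf), round(rf), round(-qf - rf)
    dq, dr, ds = abs(q - qf), abs(r - rf), abs(s + qf + rf)
    if dq > dr and dq > ds:
        q = -r - s
    elif dr > ds:
        r = -q - s
    return (q, r)

HEX_NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def unique_location(lexeme, gps_xy, lexeme_index, cell_size):
    """Steps 412-418: find the single placement of `lexeme` in the cell
    containing the GPS reading or in one of its neighbours.
    `lexeme_index` maps (cell, lexeme) -> "unique location" record."""
    q, r = hex_cell(gps_xy[0], gps_xy[1], cell_size)
    for dq, dr in [(0, 0)] + HEX_NEIGHBORS:
        hit = lexeme_index.get(((q + dq, r + dr), lexeme))
        if hit is not None:
            return hit   # the allocation scheme guarantees at most one match
    return None
```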

Fig. 5 is a simplified schematic flowchart of a method for assigning "tag lexemes" to locations of interest, in accordance with one embodiment of the present invention.

It will be appreciated that preferred methods of the present invention rely upon tag lexemes placed so that when an image captured from a particular coordinate is subject to image analysis, at most one instance of the same tag lexeme has the potential to be present in the analyzed portion of the image; or, when the chosen image analysis yields the camera to tag-lexeme distance, in any distance-uncertainty sub-region of the analyzed portion of the image. Thus, in preferred embodiments of the present invention, tag lexemes reappear no closer than twice the maximum GPS error, and usually further still, as necessitated by the implementation details and uncertainty margins. While there are other methods of satisfying this aspect of the invention, in Fig. 5 one such method 420 is shown by way of an example.

The method 420 involves defining a tessellating array of areas that cover the territory. The method 420 consists of the following steps:

Pick a distance d as a semi-arbitrary maximum GPS error for a given GPS system and a set of GPS receivers of interest in a distance picking step 422;

Define the surface grid pattern - for example a uniform hexagonal grid in a surface grid pattern definition step 424;

Apply the grid pattern to the surface so that no edge of any grid element is shorter than d, and in general so that any two points on the surface are closer to each other than d only if they are in the same grid element or in adjoining or overlapping grid elements, in a grid pattern to surface matching step 426;

Identify locations of interest for tag application (for example by criticality) in a tag location identification step 428; For instance, such locations may be specific lanes in a controlled intersection;

Define a finite set T of easily recognizable "tag lexemes" in a defining tag lexemes step 430; and finally

Allocate "tag lexemes" to locations of interest such that in any group of adjacent or overlapping space grids, a given lexeme appears at most once in an allocating tag lexemes step 432.

One way of performing step 432 is exemplified in 434 as follows:

Determine the number of colors required to color the map characterized by the chosen grid pattern so that each grid element has at most one adjacent or overlapping grid element of a given color. Let this number be z, in a number of colors determining step 436;

In a tag segmentation step 438, segment T into z non-intersecting subsets T1..Tz;

In a grid annotation step 440, annotate each grid element with one of the colors 1..z, such that no two grid elements neighboring or overlapping a given grid element are annotated with the same color; and

In a "tag lexeme" allocation step 442, allocate "tag lexemes" to locations of interest such that if a location resides in a grid element annotated with color i, then a tag is allocated only from the subset of T Ti.

Reference is now made to Fig. 6, which is a simplified schematic illustration of a sample surface grid coloring consisting of hexagonal cells, and "tag lexeme" placement 460, in accordance with an embodiment of the present invention.

In 460, a uniform hexagonal surface grid pattern is applied to a surface, producing uniform hexagonal surface grid elements. All grid elements, like grid element 462, bear edges longer than the worst-case GPS accuracy 468.

In 470, it is shown how a coloring of seven colors A, B, C, D, E, F and G can be applied to the surface grid pattern of 460 in accordance with 434 (Fig. 5) so that each grid element has at most one adjacent or overlapping grid element of a given color.

The hexagonal array of areas is placed over the territory, and roads, junctions and the like will be located in or will traverse one or more such hexagons. Thus an existing road 466 traverses the surface. A "tag lexeme" is applied to each "unique location" along the road 466. Thus "tag lexeme" 464 lies in a surface grid element 462 tagged with the letter A; hence, the tag lexeme 464 is denoted A1. In general in 460, tag lexemes denoted Ai, Bi and Ci each belong to a different non-intersecting subset of the set of tag lexemes. In accordance with 442 (Fig. 5), tag lexemes denoted Ai are present in surface grid elements tagged with the color A, while tag lexemes denoted Bi and Ci are present in surface grid elements tagged with the colors B and C, respectively.

Fig. 7 shows a simplified schematic illustration of sample "tag lexeme" shapes / patterns designed for ease of recognition, 480, in accordance with some embodiments of the present invention.

The "tag lexeme" shapes shown in 482 are substantially concentric images that may be characterized by complete omni-directional symmetry which can contribute to recognizability by a computationally inexpensive recognition algorithm (not shown).

The "tag lexeme" shapes shown in 484 are constructed by composing arbitrary elements onto an omni-directionally symmetrical sub-pattern / element.

The "tax lexeme" shapes shown in 486 are constructed by alteration of straight lines of two different lengths while incorporating an omni-directionally symmetrical element.

Fig. 8 is a simplified schematic illustration of one kind of recognizable "tag lexeme" shape 490, in accordance with an embodiment of the present invention.

With reference to Fig. 8, this type of recognizable tag lexeme shape 490 comprises at least one invariant element 492, an invariant element for suspect confirmation 494, an invariant element with at most one line of symmetry for orientation disambiguation 496 and a variable component 498 unique to each "tag lexeme" in the defined finite set of "tag lexemes". In a preferred embodiment of the present invention, element 492 is omni-directionally symmetrical (i.e. concentric) and/or element 494 has at least two lines of symmetry.

To facilitate recognizability by a computationally inexpensive recognition algorithm (not shown), all of the above-mentioned sub-patterns / elements are symmetrical with respect to a common line of symmetry 499a, while sub-patterns / elements 494 and 492 are also symmetrical with respect to a second line of symmetry 499b.

Fig. 9 is a simplified schematic flowchart of a method 491 for recognizing a tag lexeme shape, similar to 490 (Fig. 8), in accordance with an embodiment of the present invention.

The method 491 is one possible implementation of step 408 of method 400 (Fig. 4). The method 491 comprises the following steps:

First, obtain a camera image in an obtaining camera image step 491a;

Then, obtain viewing angle (such as by device orientation sensors) in a viewing angle obtaining step 491b;

Thereafter, predict a pavement horizon in a predicting pavement horizon step 491c;

Then, further identify an image area occupied by pavement, for example by color and lightness, in an identifying image area occupied by pavement step 491d;

The method is then operative to search the pavement area of the image for angle- and distance-modified likenesses of sub-pattern / element 492 (Fig. 8) using a computationally inexpensive recognition algorithm (not shown) in a searching step 491e;

In a first checking step 491f, it is determined whether sub-pattern / element 492 is recognized;

If NO, start over at steps 491a and 491b;

If YES, then, using a computationally inexpensive recognition algorithm (not shown), proceed to perform a second searching step 491g to search for sub-pattern / element 494 at the expected distance from 492 and at various pavement-plane rotations within the angles defined by the lines of symmetry of sub-pattern / element 494, using angle and distance modifications similar to those that produced the suspected recognized match of sub-pattern / element 492;

In a second checking step 491h, determine if sub-pattern / element 494 is identified;

If NO, start over at steps 491a and 491b;

If YES, proceed to disambiguate orientation by searching for sub-pattern / element 496 in another searching step 491i via the application of a computationally inexpensive recognition algorithm (not shown);

Thereafter, in a matching step 491j, attempt to match the variable sub-pattern / element 498 against known tag lexeme shapes in the finite set of "tag lexemes", for example by iteratively attempting to recognize the variant element of each "tag lexeme" from the set until a recognition is found or the set is exhausted (not shown);

In a conclusion step 491k, the method is operative to conclude whether or not a tag lexeme was recognized and if one was recognized, which one.
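The cascade structure of method 491 may be summarized in the following control-flow sketch. Since the computationally inexpensive recognition algorithm itself is not specified above, the detector here is a stub consulting a canned dictionary of hypothetical detections.

    # Control-flow sketch of method 491; detections are canned stand-ins
    # for real image analysis of elements 492, 494, 496 and 498.
    LEXEME_SET = ["A1", "A2", "B1", "C1"]        # the finite set of tag lexemes

    def recognize_tag_lexeme(detections, lexeme_set=LEXEME_SET):
        if detections.get("element_492") is None:    # steps 491e/491f
            return None                              # NO: restart at 491a/491b
        if detections.get("element_494") is None:    # steps 491g/491h: confirm
            return None                              # the suspect at expected
                                                     # offset and rotations
        if detections.get("element_496") is None:    # step 491i: orientation
            return None                              # cannot be disambiguated
        variant = detections.get("variable_498")     # step 491j: match variable
        return variant if variant in lexeme_set else None   # step 491k

    # One frame's worth of (hypothetical) detections on the pavement area:
    frame = {"element_492": (120, 340), "element_494": (120, 355),
             "element_496": 0.0, "variable_498": "A1"}
    print(recognize_tag_lexeme(frame))           # -> A1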

Reference is now made to Fig. 10, which is a simplified schematic flowchart of a method 500 for determining the "viewing angles" (azimuth, pitch and roll) of a camera similar to 204 (Fig. 2), in accordance with an embodiment of the present invention. This method provides more details of the instantaneous viewing angles algorithm 318 (Fig. 3).

First, in an obtaining camera image step 501, a camera image is obtained by forward facing camera 204 (Fig. 2).

Thereafter, the system of the invention is operative to look up appropriate recognizable objects in local storage memory 212 or in a server-accessible database 109, together with their object recognition methods, object recognition signatures and movement functions, and to compute predicted bounding boxes in a lookup step 502, using a method such as 550 (Fig. 11) discussed below.

Following the lookup of appropriate recognizable objects, each recognizable object's object recognition method is consulted in a recognizing step 503 to generate a list of match options for each recognizable object in the image from step 501 within the predicted bounding boxes from step 502.

Thereafter, in a viewing angle computation step 504, the recognizable object's movement function is examined for a combination of inputs which produce an output requiring the recognizable object to be present in the image at coordinates sufficiently close to those at which it was found / matched in at least one of the match options produced in the recognizing step 503. In a preferred embodiment of the invention, this consideration takes into account either a minimal or the last known travel vector according to a recent iteration of 322 (Fig. 3), since the inputs required by the movement function include both the travel vector and the viewing angles.

In an embodiment of the invention where the movement function is a system of multivariate polynomials, the calculation step 504 consists of solving a system of simultaneous equations.

In a confidence checking step 505, it is considered whether enough recognizable objects were recognized in step 503 and whether step 504 yielded exactly one viewing angle which would satisfy at least one match option of each recognized recognizable object.

If YES, then the present iteration of the method is concluded and the system of the invention, in a concluding step 507, concludes the viewing angles corresponding to the image captured in step 501.

If NO, then the system is operative to enlarge search bounding boxes for all recognizable objects in a bounding box enlarging step 506 and repeat steps 502 to 505.
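For the special case mentioned in connection with step 504, where the movement function is polynomial in its inputs, a toy sketch of the computation follows. The linear movement functions and matched coordinates are fabricated for illustration; in practice they would come from the database entities of Fig. 17.

    # Toy instance of steps 503-507 with linear movement functions.
    import numpy as np

    # movement function of object i: pixel = A_i @ [azimuth, pitch, roll] + b_i
    A1, b1 = np.array([[2.0, 0.1, 0.0], [0.0, 1.5, 0.2]]), np.array([320.0, 240.0])
    A2, b2 = np.array([[1.8, 0.0, 0.3], [0.1, 1.4, 0.0]]), np.array([500.0, 260.0])

    true_angles = np.array([0.10, -0.05, 0.02])        # ground truth for the demo
    matched = [A1 @ true_angles + b1, A2 @ true_angles + b2]  # step 503 matches

    # Step 504: solve the stacked simultaneous equations for the single
    # viewing-angle vector consistent with every match option.
    A = np.vstack([A1, A2])
    y = np.concatenate([m - b for m, b in zip(matched, [b1, b2])])
    angles, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Step 505: confidence check - accept only a solution fitting all matches.
    if residuals.size and residuals[0] > 1e-6:
        print("no single consistent solution; enlarge boxes (506) and retry")
    else:
        print("viewing angles (step 507):", np.round(angles, 4))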

Turning to Fig. 11, there is seen a simplified schematic flowchart of a method 550 for looking up recognizable objects in step 502 (Fig 10) of the viewing angle algorithm 500, in accordance with one embodiment of the present invention.

A most recent recognized unique location ID 551 (316 of Fig. 3) is fed into a computing procedure step 552 that looks up recognizable objects associated with the unique location ID with which it is provided. (See figures 16 and 17).

In a last known travel vector step 553, a last known travel vector 320 (Fig. 3) is considered by the device 211 (Fig. 2). In a use last known viewing angles step 554, the last known viewing angles 318 (Fig. 3) are considered by the device 211 (Fig. 2).

In an obtain orientation / rotation data step 555, orientation / rotation data may be obtained from orientation sensors 207 (Fig. 2) or by analysis of data from cameras 206 and / or 205 (Fig. 2).

Thereafter, in a formulating predicted bounding boxes step 556, predicted bounding boxes for each selected recognizable object may be formulated based on the aforementioned last known travel vector (step 553), last known viewing angles (step 554) and orientation / rotation data (step 555), by supplying the travel vector and viewing angles, corrected by the orientation / rotation data, as input to the movement function associated with each recognizable object (see Fig. 17).

Then, the predicted bounding boxes may be enlarged in a bounding box enlargement step 557 proportionally to the present search iteration number, in accordance with an iteration count of step 506 (Fig. 10) of the viewing angle algorithm 500 (Fig. 10).

Then, in a filtering step 558, recognizable objects whose bounding boxes are outside image boundaries are dropped.

Additionally, in a lighting conditions crowd data lookup step 559, the system may be queried for recent statistics regarding lighting conditions observed by other devices in the same general area or viewing the unique location identified in step 551.

Then, the current image, time of day, weather forecast and crowd data from previous step 559, are used in lighting conditions computation step 560. The results of step 560 are used to filter recognizable objects by lighting conditions in a second filtering step 561, where recognizable objects not relevant for current lighting conditions are dropped. (For example, some recognizable objects are only relevant for daytime, while others are exclusively relevant for night time, while still others possess different recognition signatures during dusk and are thus recorded in the system of the invention as separate recognizable objects for dusk and for daylight viewing).

Thereafter, a lookup recent recognizability statistics from crowd data step 562 is performed, where the system of the invention is queried for recent statistics regarding successful recognition of the recognizable objects of the presently viewed unique location by other devices.

This is followed by a third filtering step 563, where recognizable objects which, according to the output of the previous step 562, recently tended to fail to be recognized by devices viewing the present unique location - and thus fall below a specified recognizability threshold - are dropped.

Lastly, of particular significance, an ordering step 564 is performed, whereby the set of recognizable objects is ordered sequentially in accordance with the least effect of the travel vector - in one embodiment, this could be the average absolute value of each recognizable object's movement function applied to a number of predefined travel vectors.

Finally, in the last step 565, a predetermined number, n, of recognizable objects are picked from the top of the output of the ordering step 564, thus concluding the method 550.
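The filter-and-order pipeline of steps 556-565 may be condensed as in the following sketch; the RecognizableObject fields, threshold and sample values are simplifying assumptions mirroring the entities of Fig. 17.

    # Condensed sketch of the lookup pipeline of method 550.
    from dataclasses import dataclass

    @dataclass
    class RecognizableObject:
        name: str
        bbox: tuple                # predicted bounding box, steps 556-557
        lighting: str              # required lighting ("day"/"night"/"dusk")
        recognizability: float     # crowd-sourced recent success rate, step 562
        travel_sensitivity: float  # mean |movement function| response

    def lookup(objects, image_w, image_h, current_lighting, threshold=0.5, n=3):
        inside = [o for o in objects                        # step 558
                  if 0 <= o.bbox[0] and o.bbox[2] <= image_w
                  and 0 <= o.bbox[1] and o.bbox[3] <= image_h]
        lit = [o for o in inside if o.lighting == current_lighting]   # step 561
        seen = [o for o in lit if o.recognizability >= threshold]     # step 563
        # step 564: prefer objects *least* affected by the travel vector;
        # step 565: keep the top n.
        return sorted(seen, key=lambda o: o.travel_sensitivity)[:n]

    objs = [RecognizableObject("wall", (10, 10, 50, 50), "day", 0.9, 0.1),
            RecognizableObject("sign", (600, 10, 700, 60), "day", 0.4, 0.3),
            RecognizableObject("halo", (100, 100, 140, 140), "night", 0.8, 0.2)]
    print([o.name for o in lookup(objs, 640, 480, "day")])  # -> ['wall']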

Reference is now made to Fig. 12, which is a simplified schematic flowchart of a method 600 for determining a "travel vector" as defined herein above, of a camera similar to 204 (Fig. 2), in accordance with an embodiment of the present invention. Method 600 provides more details of the instantaneous travel vector algorithm 320 referred to in Fig. 3.

First, a camera image (step 601) is obtained by camera 204 of Fig. 2. Then system 100 is operative to look up appropriate recognizable objects, their object recognition methods, object recognition signatures and movement functions, and to compute predicted bounding boxes in a lookup step 602, using a method such as 650 of Fig. 13, for example, as discussed below.

Following the look up step 602, each recognizable object's object recognition method is consulted in a recognizing step 603 to yield all match options for that recognizable object in the image from step 601 within the predicted bounding boxes from step 602.

Then, in a travel vector computation step 604, the recognizable object's movement function is examined for a combination of inputs which would produce an output requiring the recognizable object to be present in the image at coordinates sufficiently close to those at which it was found / matched in at least one of the match options produced in the recognizing step 603. In a preferred embodiment, this consideration takes into account the present viewing angles computed in 318 (Fig. 3), since the movement function takes as input both the travel vector and the viewing angles. In an embodiment of the invention where the movement function is a system of multivariate polynomials, this step 604 amounts to solving a system of linear equations.

In a confidence checking step 605, it is considered whether enough recognizable objects were recognized in step 603 and whether step 604 yielded exactly one travel vector which would satisfy at least one match option of each recognized recognizable object. If YES, then the present iteration of the method is concluded and the system of the invention, in a concluding step 607, concludes the travel vector corresponding to the image captured in step 601. If NO, then the system is operative to enlarge search bounding boxes for all recognizable objects in a bounding box enlarging step 606 and repeat steps 602-605.

Fig. 13 is a simplified schematic flowchart of a method for looking up "recognizable objects" for "travel vector" determination, in accordance with an embodiment of the present invention.

With reference to Fig. 13, a method 650 for looking up recognizable objects in step 602 (Fig. 12) of the travel vector algorithm 600, in accordance with one embodiment of the present invention, is shown.

In a last used recognized unique location ID step 651, a most recent output of 316 (Fig. 3) is fed into step 652. Step 652 is operative to query the system of the invention and thus lookup recognizable objects associated with the unique location ID it was provided (see figures 16 and 17).

In a last used known travel vector step 653, a last known travel vector 320 (Fig. 3) is potentially considered by the device 211 (Fig. 2). In a using viewing angles computed for current image step 654, the output of method 500 (Fig 10), with captured image in step 501 being the same as the presently captured image in step 601, is consulted.

Thereafter, in a formulating predicted bounding boxes step 656, predicted bounding boxes for each selected recognizable object are formulated based on the afore-recollected last known travel vector (step 653) and viewing angles (step 654), by supplying the said travel vector and viewing angles as input to the movement function associated with each recognizable object (see Fig. 17).

Then, the predicted bounding boxes are potentially enlarged in a bounding box enlargement step 657 proportionally to the present search iteration number, in accordance with an iteration count of step 606 (Fig. 12) of the travel vector algorithm 600 (Fig. 12).

Following this, in a filtering step 658, recognizable objects whose bounding boxes are outside image boundaries are dropped.

Additionally, in a lighting conditions crowd data lookup step 659, the system of the invention is queried for recent statistics regarding lighting conditions observed by other devices in the same general area or viewing the unique location identified in step 651.

Then, the current image, time of day, weather forecast and crowd data from the previous step 659 are used in lighting conditions computation step 660.

The results of step 660 are used to filter recognizable objects by lighting conditions in a second filtering step 661, where recognizable objects not relevant for current lighting conditions are dropped. (For example, some recognizable objects are only relevant for daytime, while others are exclusively relevant for night time, while still others possess different recognition signatures during dusk and are thus recorded in the system of the invention as separate recognizable objects for dusk and for daytime.)

Thereafter, a lookup recent recognizability statistics from crowd data step 662 is performed, where the system of the invention is queried for recent statistics regarding successful recognition of the recognizable objects of the presently viewed unique location by other devices.

This is followed by a third filtering step 663, where recognizable objects which, according to the output of the previous step 662, recently tended to fail to be recognized by devices viewing the present unique location - and thus fall below a specified recognizability threshold - are dropped.

Lastly, of particular significance, an ordering step 664 is performed, whereby the set of recognizable objects is ordered sequentially in accordance with the most effect of the travel vector - in one embodiment, this could be the average absolute value of each recognizable object's movement function applied to a number of predefined travel vectors.

Finally, in the last step 665, a predefined number, n, of recognizable objects are picked from the top of the output of the ordering step 664, thus concluding the method 650.
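The ordering criterion shared by steps 564 and 664 - the average absolute response of the movement function over predefined travel vectors - may be sketched as follows; note that method 550 takes the objects least affected by travel, while method 650 takes the most affected ones. The linear movement functions below are fabricated for illustration.

    # Sensitivity-based ordering used by steps 564 (least) and 664 (most).
    import numpy as np

    movement = {                        # name -> matrix mapping a travel
        "distant_horizon": np.array([[0.01, 0.0], [0.0, 0.02]]),  # vector to
        "near_lane_mark":  np.array([[2.50, 0.1], [0.3, 2.20]]),  # pixel shift
    }
    probes = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]

    def sensitivity(name):
        return float(np.mean([np.abs(movement[name] @ v).mean() for v in probes]))

    names = sorted(movement, key=sensitivity)        # least effect first
    print("for viewing angles (564):", names)        # stable, distant objects
    print("for travel vector (664):", names[::-1])   # most affected objects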

Fig. 14 is a simplified schematic flowchart of a method for determining the states of "traffic control signals", in accordance with an embodiment of the present invention.

With reference to Fig. 14, a method 700 for recognizing the signal states of traffic control signals, in accordance with an embodiment of the present invention is shown. This method provides more details of the signal state algorithm 328 (Fig. 3).

First, in an obtain camera image step 701, a camera image is obtained by camera 204 (Fig. 2). Then, system 100 is operative to look up relevant "signal recognizable objects", their object recognition methods and signatures, and signal recognition methods and signatures, and to compute predicted bounding boxes in a lookup step 702, using a method such as 750 (Fig. 15) discussed below.

In a confirmation step 703, correct object recognition methods provided by step 702 are consulted to confirm recognition of each signal recognizable object at the expected bounding boxes. The confirmation possibly yields minor adjustments required for the correct recognition of the signal state in the next step 704. In the preferred embodiment of the invention, signals which are not confirmed in this step 703 are excluded from signal state recognition in the next step 704. This minimizes the frequency of false state recognitions, for instance if a traffic control signal is temporarily obscured by an object which might otherwise confuse the signal recognition algorithm employed in the next step 704.

Following the confirmation step 703, a state recognition step 704 is performed, whereby an attempt is made to determine the state of each of the signal recognizable objects confirmed in step 703, using the corresponding signal recognition method given by step 702.

In addition, in an obtaining crowd-cast signal states step 705, the system of the invention is queried for signal state recognitions made available in 330 (Fig. 3) by other devices viewing the present unique location. In this step 705, either or both of the networks 202 and 201 (Fig. 2) may be instrumental. The crowd-cast signal states obtained here are further marked in this step 705 as the present states of those traffic control signals for which no associated signal recognizable objects were confirmed in step 703, or for which the state of associated signal recognizable objects was not conclusively recognized in the recognition step 704 by the appropriate recognition method.

The method 700 is concluded in a concluding step 707, where the system of the invention uses the outputs of steps 704 and 705 to conclude the states of the traffic control signals associated (see Fig. 17) with the signal recognizable objects of the presently viewed unique location. In this step 707, the system of the invention is operative to consider various parameters such as recognition confidence level and inter-signal rules. (For example, if, when traffic light A is green, a conflicting traffic light B is always red, and A was recognized as green with 95% confidence while B was recognized as yellow with 15% confidence, then B will be reported as red, contrary to its recognition.)
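The inter-signal rule from the example above may be sketched as follows; the confidence margin and the rule encoding are assumptions for illustration only.

    # Toy sketch of the inter-signal rule applied in step 707: when light A
    # is green, conflicting light B must be red.
    recognitions = {"A": ("green", 0.95), "B": ("yellow", 0.15)}
    conflicts = {("A", "green"): ("B", "red")}       # rule: A green => B red

    def conclude(recognitions, conflicts, override_margin=0.5):
        states = {sig: state for sig, (state, _) in recognitions.items()}
        for (sig, state), (other, forced) in conflicts.items():
            s, conf = recognitions[sig]
            _, o_conf = recognitions[other]
            # a high-confidence recognition overrides a conflicting low one
            if s == state and conf - o_conf >= override_margin:
                states[other] = forced
        return states

    print(conclude(recognitions, conflicts))  # -> {'A': 'green', 'B': 'red'}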

Turning to Fig. 15 a simplified schematic flowchart of a method 750 for looking up "signal recognizable objects" in step 702 (Fig 14) of the signal state recognition algorithm 700, in accordance with one embodiment of the present invention is now described.

In a use last recognized unique location ID step 751, a recent output of 316 (Fig. 3) is fed into step 752. Step 752 is operative to query the system of the invention and thus lookup signal recognizable objects associated with the unique location ID it was provided (see figures 16 and 17).

In a use travel vector computed for current image step 753, the output of method 600 (Fig 12), with captured image in step 601 being the same as the presently captured image in step 701, is consulted.

In a use viewing angles computed for current image step 754, the output of method 500 (Fig 10), with captured image in step 501 being the same as the presently captured image in step 701, is consulted.

Thereafter, in a formulating predicted bounding boxes step 755, predicted bounding boxes for each selected signal recognizable object are formulated based on the afore-recollected last known travel vector (step 753) and viewing angles (step 754), by supplying the said travel vector and viewing angles as input to the movement function associated with each signal recognizable object (see Fig. 17).

Following this, in a filtering step 756, signal recognizable objects whose bounding boxes are outside image boundaries are dropped.

Additionally, in a lighting conditions crowd data lookup step 757, the system of the invention is queried for recent statistics regarding lighting conditions observed by other devices in the same general area or viewing the unique location identified in step 751.

Then, the current image, time of day, weather forecast and crowd data from previous step 757, are used in lighting conditions computation step 758.

The results of step 758 are used to filter signal recognizable objects by lighting conditions in a second filtering step 759, where signal recognizable objects that cannot be decisively confirmed / recognized using their object recognition method under current lighting conditions are dropped. (For example, a traffic light cannot be recognized / confirmed based on its external outlines at night, but the same traffic light can be recognized at night by the hue of its halo, which is not appropriate in daytime.)

Furthermore, the results of step 759 are used to filter signal recognizable objects once again in a third filtering step 760, where signal recognizable objects whose state cannot be determined using their signal state recognition method under current lighting conditions are dropped. (For example, the state of the traffic light discussed above cannot be determined at night based on the distance of the lit light from the external outlines of the signal; rather, color-based recognition methods can be applied, which might be less trustworthy in daytime.)

Fig. 16 is a simplified schematic illustration of a "unique location" entity relation model 800, in accordance with an embodiment of the present invention.

Fig. 16 is provided in order to simplify the understanding of the inter-relationship of some of the entities used elsewhere in this detailed description of preferred embodiments. The "unique location" entity relation model 800 is centered around the "unique location" entity 840.

Figures 4, 5 and 6 make use of a notion of a space grid, which corresponds in the domain of the invention to a defined region of the transportation medium. In reference to Fig 16, the system of the invention 100 (Fig. 1) is operative to uniquely identify each space grid entity 820 by a space grid ID 822 and to non-uniquely label each space grid entity with a color 821, as specified in step 440 (Fig. 5).

The system of the invention 100 (Fig. 1) is also operative to associate each space grid entity 820 with 6 other such space grid entities 820 which are adjacent to it in the domain of the invention, in the 6 to 6 relation 810 "Is Adjacent To" (assuming a hexagonal space grid, as shown in 460 (Fig. 6)). This relation is instrumental in step 414 (Fig. 4).

Figures 4, 5, 6, 10, 11, 12, 13, 14 and 15 make use of a notion of a "unique location", which, as described and defined herein above, can be thought of as a particular coordinate in the transportation medium which is chosen to have certain data about it recorded in the system of the invention.

The system of the invention 100 (Fig. 1) is operative to associate with each space grid entity 820 zero or more such unique location entities 840 by the 1 to any "Harbors" relation 830. In this case, it can be said that the associated "space grid" 820 harbors the associated "unique location" 840.

The system of the invention is further operative to uniquely identify each "unique location" 840 by a unique location ID 842.

In addition, the system of the invention is also operative to non-uniquely label each "unique location" 840 by a "recognition lexeme" 841, representing one non-unique "tag lexeme" shape as exemplified in figures 7 and 8. This labeling is taken advantage of in step 416 (Fig. 4). The assignment of this labeling is the subject of method 420 (Fig. 5).

Fig. 17 is a simplified schematic of a "(signal) recognizable object" entity relation model 900, centered around the "(signal) recognizable object" entity 906.

Fig. 17 is provided in order to simplify the understanding of the interrelationship of some of the entities used elsewhere in this detailed description of preferred embodiments.

Unique location entities 902 pictured in Fig. 17 may be similar or identical to entities 840 (Fig. 16).

Entities 906 pictured in Fig. 17 depict both signal recognizable objects and regular recognizable objects; hence, entities 906 are titled "(signal) recognizable objects". Following this syntax, items only relevant to signal recognizable objects appear in Fig. 17 in brackets, while items without brackets are relevant equally to regular "recognizable objects" and "signal recognizable objects".

As follows from the definition appearing herein above, recognizable object entities 906 need not be real objects in the domain of the invention, but rather any entity forming some visually recognizable pattern or patterns in the domain of the invention. Such patterns need not be unique. Examples of such recognizable objects in the domain of the invention may include a contour, a horizon line, a textured wall, a graffiti segment, an outline of a street light, a bright light source etc.

In order to facilitate the operation of the various methods discussed hereinabove, the system of the invention is operative to associate with each "(signal) recognizable object" entity 906, its recognition signature (910), its object recognition method (908), its movement function (912) and its object recognition required lighting conditions (914). When 906 is a signal recognizable object entity, the system of the invention is operative to also associate it with its signal recognition method (922), its signal recognition signature (924) and its signal recognition required lighting conditions (926).

When, in the domain of the invention, a camera located within the vicinity of a particular unique location 902 can image certain recognizable objects 906, the system of the invention is operative to associate with the said unique location entity 902 the said recognizable object entities 906 by the one to any relation "Has a View of" 904. This relation is instrumental in steps 552 (Fig. 11), 652 (Fig. 13) and 752 (Fig. 15). As mentioned herein above, as long as a camera is located within the region of the domain of the invention where it can image some of the recognizable objects 906 which are associated with a particular unique location 902 via the "Has a View of" relation 904, the camera is said to be "viewing" the said unique location 902.

Finally, entity 920 corresponds to a traffic control signal in the domain of the invention, as defined herein above. When a recognized or an unrecognized signal state of some signal recognizable object entities 906 is indicative of the presence or absence of one or more states of a traffic control signal 920 in the domain of the invention, the system of the invention is operative to associate the said signal recognizable object entities 906 with the said traffic control signal entity 920 via the any to one relation "Indicates State of" 918. As follows from the present paragraph, multiple signal recognizable objects 906 may indicate the presence or absence of various states of a single traffic control signal 920. The relation 918 is used in step 326 (Fig. 3).
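For concreteness, the entities and relations of Figs. 16 and 17 may be rendered as simple data structures, as in the following sketch. Field names follow the figure labels; the types and containers are an assumed, simplified encoding, not one prescribed by the model.

    # Simplified encoding of the entity relation models of Figs. 16 and 17.
    from dataclasses import dataclass, field
    from typing import Callable, Optional

    @dataclass
    class TrafficControlSignal:                     # entity 920
        name: str

    @dataclass
    class RecognizableObject:                       # entity 906
        recognition_signature: bytes                # 910
        recognition_method: str                     # 908
        movement_function: Callable                 # 912
        lighting_conditions: str                    # 914
        # present only for signal recognizable objects:
        signal_recognition_method: Optional[str] = None        # 922
        signal_recognition_signature: Optional[bytes] = None   # 924
        signal_lighting_conditions: Optional[str] = None       # 926
        indicates_state_of: Optional[TrafficControlSignal] = None  # relation 918

    @dataclass
    class UniqueLocation:                           # entity 840 / 902
        unique_location_id: int                     # 842
        recognition_lexeme: str                     # 841
        has_view_of: list = field(default_factory=list)  # relation 904

    @dataclass
    class SpaceGrid:                                # entity 820
        space_grid_id: int                          # 822
        colour: int                                 # 821
        is_adjacent_to: list = field(default_factory=list)  # relation 810
        harbors: list = field(default_factory=list)         # relation 830

    signal = TrafficControlSignal("junction light")
    halo = RecognizableObject(b"\x01", "halo-hue", lambda v, a: (0, 0), "night",
                              signal_recognition_method="colour",
                              indicates_state_of=signal)
    loc = UniqueLocation(42, "A1", has_view_of=[halo])
    print(loc.unique_location_id, "->", loc.has_view_of[0].indicates_state_of.name)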

Fig. 18 is a simplified pictorial illustration of a mounting fixture 1100, in accordance with an embodiment of the present invention.

Mounting fixture with offset arm and optional adjustable mirrors 1100 is configured to lower the effective vantage point of device camera 204 (Fig. 2) below an attachment element 1106. In the figure are seen a top view 1102 of the mounting fixture and a side view 1104. An attachment element, such as suction cup 1106, is attached to an offset arm 1108, wherein the offset shape of the arm keeps it from blocking said device camera 204.

The fixture further comprises connecting stalk 1110, bearing joints 1112 and 1116.

The joint 1112 attaches the connecting stalk 1110 to the offset arm 1108 and permits rotation of the connecting stalk 1110 around multiple axes with respect to the offset arm 1108.

The fixture further optionally comprises a pair of adjustable mirrors 1114, which by their configuration allow displacement of the device camera viewpoint below the attachment element 1106, as well as angle adjustment.

The joints 1116 attach the adjustable mirrors 1114 to the connecting stalk 1110 and permit rotation of the adjustable mirrors 1114 around multiple axes with respect to the connecting stalk 1110. The presence of multiple receptacles for the said joints 1116 exemplifies how in the present embodiment the joint position is allowed to vary or transit along the connecting stalk 1110, while one mirror is placed opposite the camera 204 (Fig 2) and the other mirror is placed at the desired position below the attachment element 1106, so that said attachment element does not obscure the view of the camera. In the pictured embodiment the rotation of joints 1112 and 1116 is achieved by implementing said joints 1112 and 1116 as ball joints.

The mounting fixture 1100 further comprises an adjustable holder 1118 for retaining the on-board device within the mounting fixture. In the present embodiment, the adjustable holder 1118 is attached to the connecting stalk 1110 by virtue of being an extrusion thereof. The adjustable holder is characterized by its ability to retain onboard device 211 (Fig. 2) or its camera 204 (Fig. 2), for example through a clamp-like mechanism, as pictured.

Fig. 19 is another simplified pictorial illustration of a mounting fixture 1200, similar to the described mounting fixture 1100 (Fig. 18), in accordance with an embodiment of the present invention.

It will be appreciated that the functional block diagrams and flowcharts taken together provide a full system and method for determining the exact position of a vehicle, identifying road signs such as traffic lights in its vicinity, and alerting the driver regarding the status of such road signs. The system could be used to provide input to an automated driving system, to a cruise-control-type automated driver assist system that may be over-ridden by the driver, or to a system that over-rides the driver.

The above description is thus a preferred embodiment. The general approach of combining a GPS system with a camera to generally and then precisely locate a vehicle and to identify traffic control signals and, where these are traffic lights, to identify the state of the traffic light, i.e. whether it is red, amber or green, is a new approach designed to generate highly accurate and unambiguous warnings and alerts in real time, despite the enormous number of roads and junctions, approach angles and distances.

The following brief description extracts and highlights the main features of the system.

Foremost, embodiments consist of a sub-system 101 which may be a unitary device 200 or a collection of interconnecting elements.

The sub-system 101 detects traffic signals directed to a vehicle 102. The sub-system includes a GPS 203 and a forward mounted camera 204, both mounted within the vehicle 102, and in data communication with a common processor 211 having image analysis functionality that is coupled to a database that may reside partially or completely within an onboard memory 212 or an external database 109 in data communication with the common processor 211 via a server 108, for example. The GPS 203 provides a general position (GPS reading 312) of the vehicle 102, and the database 109, 212 provides data regarding objects and their relative positions within that general position of the vehicle 102. The forward mounted camera 204 captures an image of a field of view 310, and the processor with image analysis functionality may apply a unique location algorithm 316, an instantaneous viewing angles algorithm 318 and an instantaneous travel vector algorithm 320 to determine the exact location of the vehicle 102.

Optionally, a plurality of sub-systems 101 described above interact, either directly via a telecommunication link 103, or indirectly via a common computer processor such as a server 108, and access a common database 109. This requires the sub-systems 101 and the server 108 to have receivers and transmitters (transceivers) and to communicate via a network 107.

The sub-systems include receivers and transmitters for receiving signals from the transmitters of other sub-systems and the server.

In general, the system identifies candidate objects within the field of view and locates and identifies these objects by comparison with information in the database 109, 212, thereby identifying objects such as traffic signals unambiguously and outputting data that includes the identity of said traffic signal and its location.

The sub-system 101 (200) may be used to identify traffic signals such as the lights of a traffic light, generating data that includes the color of the traffic light.

Generally, the sub-system 101 is configured to alert a driver of the vehicle 102 via at least one of a haptic signal, an audible signal and a visual signal to the driver. In some embodiments, however, the sub-system 101 (200) may be configured such that the output thereof directly controls the vehicle, bypassing a driver.

The system may use existing hardware such as off the shelf GPS units and cameras and may use an appropriately mounted and positioned smart phone. It may be implemented as a new software program which may include additional functionality such as navigation software, or may be implemented as an add-on or retrofit to existing systems or as a series of procedures within available systems such as WAYZ™, for example. Thus a plurality of the sub-systems, each with a transceiver, may generate positioning signals that are detectable by receivers of other sub-systems.

The GPS can only provide a very general location due to limitations of satellite positioning, including the effects of overhang, adverse weather conditions and the like. It is a particular feature of some preferred embodiments that the territory is divided into a tessellation of areas where, in one embodiment, each area is larger than the uncertainty in position resulting from the GPS, and in another embodiment, each area is larger than the worst-case accuracy of the GPS (effectively half of the former). It will be appreciated that the two embodiments apply the same concept.

By means of the aforementioned tessellation, in preferred embodiments, the uncertainty in position of a vehicle is no more than within one area and all its neighboring areas, and in embodiments where the tessellation areas are larger than the uncertainty in position resulting from the GPS, within one of four areas; while in such embodiments where also the territory is divided into a tessellating array of identical hexagons - within one of three areas.
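The effect of the tessellation may be sketched as follows: with cells larger than the worst-case GPS error, the candidate cells for a fix are the cell containing it plus its neighbors. The flat-top axial hex geometry and edge length are assumptions for the sketch; note also that the 7-cell candidate set below is a coarse over-approximation, since, as stated above, an uncertainty disc smaller than a cell overlaps at most three hexagons (only three cells meet at any vertex).

    # Candidate hexagons for a GPS fix, assuming flat-top axial hex cells.
    import math

    EDGE = 30.0      # hex edge length in metres, chosen > worst-case GPS error

    def hex_containing(x, y, edge=EDGE):
        """Approximate axial (q, r) cell for a point, via cube rounding."""
        q = (2.0 / 3.0) * x / edge
        r = (-1.0 / 3.0) * x / edge + (math.sqrt(3) / 3.0) * y / edge
        cx, cz = q, r
        cy = -cx - cz
        rx, ry, rz = round(cx), round(cy), round(cz)
        dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
        if dx > dy and dx > dz:
            rx = -ry - rz
        elif dy > dz:
            ry = -rx - rz
        else:
            rz = -rx - ry
        return int(rx), int(rz)

    def candidate_cells(x, y):
        q, r = hex_containing(x, y)
        neighbours = [(q + 1, r), (q - 1, r), (q, r + 1),
                      (q, r - 1), (q + 1, r - 1), (q - 1, r + 1)]
        return [(q, r)] + neighbours    # at most 7 cells need to be searched

    print(candidate_cells(100.0, 50.0))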

The database 109 or locally stored data in local storage 212 typically includes information regarding all traffic signals (including road markings, such as tag lexemes 105) within each area, such that a vehicle 102 knows that any traffic signal within the field of view must lie within one of the areas identified in accordance with the accuracy of the GPS, image processing and the construction of the tessellation of areas.

In this manner, a limited number of labels may uniquely label all items of interest in the road environment, such as road signs and markings; since three, four, or as in the tessellation 462, seven, sets of labels may be used over and over, such that no two tessellation areas potentially intersected by uncertainty in position of a vehicle use the same labels.

It will be appreciated that the tessellation thus employed in the preferred embodiments is one specific implementation of the more general approach of embodiments of the present invention where labels are allowed to be reused at distances exceeding the uncertainty resulting from the GPS, image capture and processing.

Amongst other objects determined in the field of view, such as buildings used for determining the location, the detected objects include road-signs.

In preferred embodiments, the road signs include painted road signs which may be painted on a road surface and which may contain concentric markings. Details of the painted road-signs are included in the database 109. These may be used to provide exact location, the distortion of an image of the painted road-sign in the image stream of the camera 204 providing absolute distance and directional information to the on-board sub-system 101.

A method for detecting traffic signals directed to a vehicle 102 consists of:

(i) providing a subsystem 101, 200 that includes at least a GPS 203, an output (e.g. 208, 209, 210) and a forward mounted camera 204, all mounted within the vehicle 102 and in data communication with a common processor 211 having image analysis functionality and coupled to a database, which may be in a local storage memory 212, in a central database 109 accessible via a server 108, for example, or distributed within both, and possibly other locations as well;

(ii) determining the general geostationary location of the vehicle in absolute coordinates using the GPS 203 (step 312 of Fig. 3; also step 406 of Fig. 4);

(iii) retrieving a list of objects and their relative positions within the general location from the database; (iv) capturing an image stream with a field of view with the forward mounted camera (step 310 of Fig. 3 and step 402 of Fig. 4); and (v) comparing the list of objects and their relative positions from the database with the image stream from the forward mounted camera 204, determining the actual position of the vehicle 102 (step 418 of Fig. 4, which goes into more detail of a specific embodiment), and identifying at least one visual traffic signal (step 332) by transforming and aligning the geostationary position of the visual traffic signal with a candidate location within a relative polar coordinate system of the vehicle by mapping onto the field of view (steps 316, 318, 320).

For example, the traffic signal comprises a traffic light, or tag lexeme 105.

Then an alert (338, 346) is output to a driver of the vehicle 102.

As shown in Fig. 2, the alert may comprise at least one of a haptic signal 210, an audible signal 209 and a visual signal 208 to the driver.

Data from a plurality of sub-systems 101 may be received by a computer processor 211, 108, which provides corrected information to a sub-system 101 of a vehicle 102 of interest. Such crowd GPS is shown as step 314 of Fig. 3.

In some embodiments, such as shown in Fig. 1, for example, a base station comprises a common computer processor or server 108 and database 109, together with a receiver and transmitter (or transceiver) for coupling to a network 107, such that the base station receives signals from and transmits signals to each on-board subsystem 101.

With reference to Fig. 4, steps 412, 414 and 416, and in general to Figs. 5, 6 and 7, the territory is divided into a tessellation of areas 424 such that each area is larger than the worst-case position accuracy resulting from the GPS 422, such that the uncertainty in position of a vehicle is no more than within one area or any of its neighboring areas, and the processor compares data from the database that relates to those areas 426.

As required in Fig. 5 and illustrated in Fig. 6, a tessellation grid is colored in preferred embodiments of the invention so that each area has at most one neighbor of a given color and the tags are segmented for reuse by colors assigned to areas where these tags are allowed to be applied. This is one implementation of the general principle of tag reuse limitation whereby in any uncertainty area resulting from GPS and image processing, a given tag (or traffic signal) appears at most once.

Generally, the database 109 comprises information regarding all traffic signals A1-A9, B1-B7, C1-C3 of Fig. 6 (the latter specifically being tag lexeme road markings), for example, within each area, such that a vehicle knows that any such traffic signal within the field of view must lie within the area containing the GPS reading or one of its neighboring areas.

Although the above description focuses by way of example on locating and acting on red light traffic signals, it will be appreciated that the system is more flexible and preferred embodiments may have additional functionality such as:

warning correctly regarding excessive speed for specific exact locations, e.g. exit lanes, where it will be appreciated that GPS systems are unreliable, since they do not have the precision to know what lane the vehicle is in, frequently give wrong warnings, and cannot know if a vehicle is in a toll lane or bus lane, for example

warning regarding entering the wrong lane, where sometimes the driver of a vehicle believes that the neighboring lane on the driver's side is designed for the same direction of travel but it is actually intended for oncoming traffic, or turning off one road at a junction, where sometimes drivers mistakenly turn into oncoming traffic

warning about needing to drive in a specific lane when a turn is approaching, according to a navigation plan (such as WAYZ)

warning about going too fast when approaching a dangerous turn, where the driver needs to know unambiguously where the car is; with conventional GPS systems, and even road signs, drivers typically ignore such warnings

warning about entering a playground or school zone, where again one needs to be totally sure about where the vehicle is, since ambiguity creates problems

GPS accuracy varies with weather. Warning systems need to be robust and trustworthy; they should issue warnings in close to 100% of cases, regardless of weather.

Preferred embodiments have been described. As is typically the case with computer implemented inventions, it will be appreciated that various modifications may be made without departing from the spirit and scope of the invention.

Accordingly, other embodiments are within the scope of the following claims.

Thus persons skilled in the art will appreciate that the present invention is not limited to what has been particularly shown and described hereinabove. Rather the scope of the present invention is defined by the appended claims and includes both combinations and sub combinations of the various features described hereinabove as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.

In the claims, the word "comprise", and variations thereof such as "comprises", "comprising" and the like, indicate that the components listed are included, but not generally to the exclusion of other components.

