


Title:
HYPERACUITY SYSTEM AND METHODS FOR REAL TIME AND ANALOG DETECTION AND KINEMATIC STATE TRACKING
Document Type and Number:
WIPO Patent Application WO/2016/073958
Kind Code:
A2
Abstract:
Certain embodiments of the methods and systems disclosed herein determine a location of a tracked object with respect to a coordinate system of a sensor array by using analog signals from sensors having overlapping nonlinear responses. Hyperacuity and real time tracking are achieved by either digital or analog processing of the sensor signals. Multiple sensor arrays can be configured in a plane, on a hemisphere or other complex surface to act as a single sensor or to provide a wide field of view and zooming capabilities of the sensor array. Other embodiments use the processing methods to adjust to contrast reversals between an image and the background.

Inventors:
UNGLAUB RICARDO A G (US)
WILCOX MICHAEL (US)
SWANSON PAUL (US)
ODELL CHRIS (US)
Application Number:
PCT/US2015/059625
Publication Date:
May 12, 2016
Filing Date:
November 06, 2015
Assignee:
LAMINA SYSTEMS INC (US)
International Classes:
H04N5/232; G06T7/00
Attorney, Agent or Firm:
JARVIS, Peter M. et al. (Two Embarcadero Center, San Francisco, California, US)
Claims:
WHAT IS CLAIMED IS: 1. A method of calculating a position of a feature of an object at hyperacuity accuracy, the method comprising:

receiving output signals from a plurality of sensors, wherein the sensors:

are configured in an array;

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor; and produce output signals based on the response profiles; and

processing the signals by:

applying logarithmic amplification to each output signal;

determining coordinates of the position of the feature with respect to an origin of the array using a linear combination of the logarithmically amplified output signals. 2. The method of claim 1 wherein the non-linear response profiles of the sensors have an approximately Gaussian profile. 3. The method of claim 1 wherein determining the coordinates of the position of the feature with respect to the origin of the array comprises using a linear combination of the logarithmically amplified output signals of at least three sensors, at least two of the three sensors being adjacent. 4. The method of claim 1 further comprising determining at least one of a velocity and an acceleration of the feature by calculating time-derivatives of the position of the feature. 5. The method of claim 1 wherein:

the sensors are optical sensors arranged on a focal plane of an objective lens; the feature is a component of a focused image of the object; and the processing of the output signals is performed with analog circuitry in real time.

6. The method of claim 5 wherein:

the array comprises six optical sensors arranged hexagonally around a central optical sensor; and

the origin of the array is a center of the central optical sensor.

7. The method of claim 5, further comprising determining positions of a plurality of dependent or independent features, wherein the independent features comprise focused images of respective objects, wherein a quantity of features in the plurality of features is at most one less than a total number of the optical sensors, and wherein the determination of the positions of the features uses at least two pairs of the logarithmically amplified output signals, each pair comprising the logarithmically amplified output signal of a central optical sensor and of another optical sensor. 8. A method of calculating a position of a feature of an object at hyperacuity accuracy, the method comprising:

receiving output signals from a plurality of sensors, wherein the sensors:

are configured in an array;

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor;

for each of a plurality of output signals, producing a corresponding first weighted signal by weighting the output signal with a respective first sinusoidal weight, and producing a corresponding second weighted signal by weighting the output signal with a respective second sinusoidal weight;

forming a first orthogonal resultant signal, wherein the first orthogonal resultant signal comprises a sum of the corresponding first weighted signals;

forming a second orthogonal resultant signal, wherein the second orthogonal resultant signal comprises a sum of the corresponding second weighted signals; and

determining the position of the feature using the first and second orthogonal resultant signals.

9. The method of claim 8, wherein the non-linear response profiles of the sensors have an approximately Gaussian profile. 10. The method of claim 8, wherein the method is performed with analog circuitry in real time. 11. The method of claim 8 further comprising using the first and second orthogonal resultant signals to determine a radial distance and an incident angle to the position of the feature relative to an origin point of the array. 12. The method of claim 8, wherein:

the sensors are optical sensors arranged on a focal plane of an objective lens; and the feature is a component of an image of the object focused onto the focal plane by the objective lens. 13. The method of claim 12 wherein:

the array comprises optical sensors arranged around a central optical sensor; and the origin of the array is a center of the central optical sensor. 14. The method of claim 13 wherein:

the array comprises six optical sensors arranged hexagonally around a central optical sensor;

the origin of the array is a center of the central optical sensor;

the corresponding first respective weighted signals are formed from the signals from the six hexagonally arranged optical sensors respectively weighted by cosines of successive multiples of π/3 radians, and wherein the second respective weighted signals are formed from the signals from the six hexagonally arranged optical sensors respectively weighted by sines of successive multiples of π/3 radians. 15. The method of claim 11, further comprising determining at least one of a velocity and an acceleration of the position of the feature using at least one of time derivatives of the radial distance and the incident angle and time derivatives of the first and second orthogonal resultant signals.

16. The method of claim 8, wherein a trajectory of the feature is determined using a plurality of positions of the feature, the plurality of positions of the feature being determined at a respective plurality of discrete times. 17. A tracking system for determining a position of a feature of an object at hyperacuity accuracy comprising:

an array comprising a plurality of sensors, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor, and the sensor produces an output signal based on the response profile; and

components configured to receive the output signals from the sensors, wherein, in response to receiving the output signals from the sensors, the components process the received output signals by:

applying logarithmic amplification to a plurality of the output signals; determining coordinates of the position of the feature with respect to an origin of the array using a combination of the logarithmically amplified output signals. 18. The tracking system of claim 17, wherein:

the sensors are optical sensors arranged on a focal plane of an objective lens; each optical sensor is configured with a respective sensor lens, so that the objective lens and the respective sensor lens of each optical sensor are arranged to produce the non-linear response profile of the optical sensor; and

the feature is a component of a focused image of the object. 19. The tracking system of claim 17 wherein:

the non-linear response profiles of the sensors have an approximately Gaussian profile. 20. The tracking system of claim 17 wherein:

the processing of the output signals is performed with analog circuitry in real time; and wherein determining the coordinates of the position of the feature with respect to the origin of the array comprises using a linear combination of the logarithmically amplified output signals of at least three sensors, at least two of the three sensors being adjacent. 21. A tracking system for determining a position of a feature of an object at hyperacuity accuracy comprising:

an array comprising a plurality of sensors, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor, and the sensor produces an output signal based on the response profile; and

components configured to receive the output signals from the sensors, wherein, in response to receiving the output signals from the sensors, the components process the received output signals from the sensors by:

for each of the output signals, producing a corresponding first weighted signal by weighting the output signal with a respective first sinusoidal weight, and producing a

corresponding second weighted signal by weighting the output signal with a respective second sinusoidal weight;

forming a first orthogonal resultant signal, wherein the first orthogonal resultant signal comprises a sum of the corresponding first weighted signals;

forming a second orthogonal resultant signal, wherein the second orthogonal resultant signal comprises a sum of the corresponding second weighted signals; and

determining the position of the feature using the first and second orthogonal resultant signals. 22. The tracking system of claim 21, wherein the processing of the signals includes:

using the first and second orthogonal resultant signals to determine a radial distance and an incident angle to the position of the feature relative to an origin point of the array. 23. The tracking system of claim 22, wherein at least one of velocity and acceleration of the position of the feature are determined using at least one of time derivatives of the radial distance and the incident angle and time derivatives of the first and second orthogonal resultant signals.

24. The tracking system of claim 21, wherein a trajectory of the feature is determined using a plurality of positions of the feature, the plurality of positions of the feature being determined at a respective plurality of discrete times. 25. The tracking system of claim 21, wherein a trajectory of the feature is determined using time derivatives of the first and second orthogonal resultant signals. 26. A tracking system for determining a position of a feature of an object at hyperacuity accuracy comprising:

an array comprising a plurality of sensors, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor, and the sensor produces an output signal based on the response profile; and

components configured to receive the output signals from the sensors, wherein, in response to receiving the output signals from the sensors, the components perform operations comprising:

determining a first estimate of coordinates of the position of the feature using a signal differencing method;

determining a second estimate of the coordinates of the position of the feature by forming a first and a second orthogonal resultant signal;

combining the first and second estimates of the coordinates of the position of the feature to obtain a final estimate of the position of the feature. 27. The tracking system of claim 26 wherein the non-linear response profiles of the sensors have an approximately Gaussian profile. 28. The tracking system of claim 26, wherein:

the sensors are optical sensors and are arranged on a focal plane of an objective lens;

each optical sensor is configured with a respective optical sensor lens, the optical sensor lens being either a spherical or hemispherical ball lens, so that the objective lens and the respective optical sensor lens of each optical sensor are arranged to produce the non-linear response profile of the optical sensor.

29. The tracking system of claim 26 wherein:

the array comprises six sensors arranged hexagonally around a central sensor; the origin of the array is a center of the central sensor; and

a termination of each sensor is adjacent to a termination of another sensor. 30. The tracking system of claim 29, wherein the signal differencing method comprises:

applying logarithmic amplification to the signals;

processing the signals with analog circuitry in real time; and

wherein determining the first estimate of the coordinates of the position of the feature comprises using a linear combination of the logarithmically amplified signals of at least three sensors, at least two of the sensors being adjacent. 31. The tracking system of claim 30, wherein determining the second estimate comprises:

for the signal of each of the six sensors arranged hexagonally around a central sensor, producing a corresponding first weighted signal by weighting the signal with a respective first sinusoidal weight, and producing a corresponding second weighted signal by weighting the signal with a respective second sinusoidal weight;

forming a first orthogonal resultant signal, wherein the first orthogonal resultant signal comprises a sum of the corresponding first weighted signals;

forming a second orthogonal resultant signal, wherein the second orthogonal resultant signal comprises a sum of the corresponding second weighted signals; and

determining the position of the feature using the first and second orthogonal resultant signals. 32. The tracking system of claim 26, wherein combining the first estimate and the second estimate uses a Proportional-Integral-Derivative (PID) control method. 33. A method of determining a position of a feature of an object at hyperacuity accuracy comprising:

receiving output signals from a plurality of sensors, wherein the sensors: are configured in an array; and

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor;

processing the output signals by:

determining a first estimate of coordinates of the position of the feature using a signal differencing method;

determining a second estimate of the coordinates of the position of the feature by forming a first and a second orthogonal resultant signal;

combining the first and second estimates of the coordinates of the position of the feature to obtain a final estimate of the position of the feature. 34. The tracking system of claim 21, wherein:

the sensors are optical sensors arranged on a focal plane of an objective lens; and wherein the components are further configured to:

detect that an initial image produced by the feature on an optical sensor is darker than an initial background image on the optical sensor; and

apply image inversion to the output signal of the sensor to produce a modified signal so that in the modified signal a modification of the initial background image is darker than a modification of the initial image produced by the feature. 35. A method of determining a position of a feature of an object at hyperacuity accuracy comprising:

receiving respective output signals from a plurality of sensors, wherein the sensors:

are configured as a plurality of arrays wherein each array comprises a respective plurality of the sensors;

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor; and produce output signals based on the response profiles; and for each array, separately processing the output signals of the respective plurality of sensors of the array to determine respective coordinates of the position of the feature with respect to an origin of the array; and

combining each array's determined respective coordinates of the position of the feature to produce output coordinates of the position of the feature with respect to a global coordinate system of the plurality of arrays. 36. A tracking system for determining a position of a feature of an object at hyperacuity accuracy comprising:

a plurality of sensors, wherein the sensors:

are configured as a plurality of arrays wherein each array comprises a respective plurality of the sensors;

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor; and produce output signals based on the response profiles;

components configured to receive the output signals from the sensors, wherein, in response to receiving the output signals from the sensors, the components process the received output signals from the sensors by:

for each array, separately processing the output signals of the respective plurality of sensors of the array to determine respective coordinates of the position of the feature with respect to an origin of the array; and

combining each array's determined respective coordinates of the position of the feature to produce output coordinates of the position of the feature with respect to a global coordinate system of the arrays. 37. The tracking system of claim 36, wherein:

in each array the respective plurality of the sensors comprises six sensors arranged hexagonally around a central sensor;

the origin of the array is a center of the central sensor; and

the arrays are configured on a common plane. 38. The tracking system of claim 36, wherein: in each array the respective plurality of the sensors comprises six sensors arranged hexagonally around a central sensor;

the origin of the array is a center of the central sensor; and

the arrays are arranged on an exterior of a body. 39. The tracking system of claim 36, wherein for each array: the respective plurality of the sensors are optical sensors arranged on a focal plane of an objective lens; and

the feature is a component of an image of the object focused onto the focal plane by the objective lens. 40. The method of claim 35, wherein for at least one array: the separate processing of the output signals of the plurality of sensors of the array comprises:

applying logarithmic amplification to each output signal of the sensor;

determining the respective coordinates of the position of the feature with respect to the origin of the array by using a linear combination of the logarithmically amplified output signals. 41. The method of claim 35, wherein for at least one array the separate processing of the output signals of the plurality of sensors of the array comprises, for each output signal:

producing a corresponding first weighted signal by weighting the output signal with a respective first sinusoidal weight, and producing a corresponding second weighted signal by weighting the output signal with a respective second sinusoidal weight;

forming a first orthogonal resultant signal, wherein the first orthogonal resultant signal comprises a sum of the corresponding first weighted signals;

forming a second orthogonal resultant signal, wherein the second orthogonal resultant signal comprises a sum of the corresponding second weighted signals; and

determining the position of the feature using the first and second orthogonal resultant signals.

42. The method of claim 35, wherein for each of the arrays, the separate processing of the output signals of the plurality of sensors of the array comprises:

applying logarithmic amplification to each output signal of the sensor;

determining first coordinates of the position of the feature with respect to the origin of the array by using a linear combination of the logarithmically amplified output signals;

producing a corresponding first weighted signal by weighting the output signal with a respective first sinusoidal weight, and producing a corresponding second weighted signal by weighting the output signal with a respective second sinusoidal weight;

forming a first orthogonal resultant signal, wherein the first orthogonal resultant signal comprises a sum of the corresponding first weighted signals;

forming a second orthogonal resultant signal, wherein the second orthogonal resultant signal comprises a sum of the corresponding second weighted signals;

determining second coordinates of the position of the feature with respect to the origin of the array using the first and second orthogonal resultant signals; and

combining the first coordinates and the second respective coordinates to produce the output coordinates of the position of the feature with respect to a global coordinate system of the plurality of arrays.

43. The tracking system of claim 36, wherein:

in response to a received input signal at a first time, the sensors of the tracking system are reconfigured as a second plurality of reconfigured arrays each comprising a second respective plurality of sensors;

for each of the second plurality of reconfigured arrays, at a second time, output signals of the second respective plurality of sensors of the reconfigured array are processed to determine second respective coordinates of the position of the feature with respect to an origin of the reconfigured array at the second time; and

the second respective coordinates of the position of the feature are combined to produce second output coordinates of the position of the feature with respect to the global coordinate system.

44. A tracking system for determining a position of a feature of an object at hyperacuity accuracy comprising:

a plurality of sensors, wherein the sensors:

are configured as a plurality of arrays wherein each array comprises a respective plurality of the sensors;

have overlapping response profiles, wherein for each sensor the response profile is non-linear and is axio-symmetric about a respective center of the sensor;

produce output signals based on the response profiles;

wherein in each array the respective plurality of the sensors comprises six sensors arranged hexagonally around a central sensor, and the origin of the array is a center of the central sensor; and

wherein there are six arrays arranged hexagonally around a central array in a common plane;

components configured to receive the output signals from the sensors, wherein, in response to receiving the output signals from the sensors, the components process the received output signals from the sensors by:

for each array, separately processing the output signals of the respective plurality of sensors of the array to determine a respective net signal for the array; and

combining each array's respective net signal to produce estimated output coordinates of the position of the feature with respect to a global coordinate system of the arrays. 45. The tracking system of claim 44, wherein:

the estimated output coordinates of the position of the feature are used to select one of the plurality of arrays; and

the output signals of the sensors of the selected array are used to calculate second coordinates of the position of the feature with respect to the global coordinate system of the arrays. 46. The tracking system of claim 36, wherein the sensors are infrared (IR) sensors and the arrays are arranged to provide a wide field of view. 47. The tracking system of claim 38 wherein: the body is either a hemisphere or a sphere;

the sensors include polarization sensors; and

the output signals of the polarization sensors are used to track a polarization plane of the feature of the object.

48. The tracking system of claim 36, wherein the sensors are pressure sensors.

Description:
HYPERACUITY SYSTEM AND METHODS FOR REAL TIME AND ANALOG

DETECTION AND KINEMATIC STATE TRACKING

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 62/077,095, filed November 7, 2014, and also claims priority to U.S. Provisional Patent Application No. 62/079,688, filed November 14, 2014, both of which are incorporated by reference herein in their entirety.

BACKGROUND OF THE INVENTION

[0002] Optical sensing, detecting, and tracking devices may comprise a lens system to project light rays from a segment of an environment onto a planar or other surface. One such device is shown schematically in FIG. 1. The surface may comprise an array of photo sensors that create electronic signals from the projected light. Examples of photo sensors include charge coupled devices (CCD) having a planar rectangular array of distinct light receptors that produce a respective pixel of the projected light, such as are used in digital cameras. Typically each such receptor in a CCD generates a voltage corresponding to the light amplitude level integrated over the exposure time. Also, the values of the pixels in a CCD are typically taken at specific time instants, rather than having a continuous time, or real-time, output. That is, the received light is digitally sampled in time by the CCD rather than producing a real-time analog output. Such devices typically include electronics that may amplify the sensor signals from the optical sensors and apply signal processing to the sensor signals to obtain desired output results, such as position, motion and acceleration of an object. Other types of photo sensors may be chosen, depending on the application.

[0003] Depending on the application, other detecting and tracking devices may be based on non-visible electromagnetic radiation. Still other detecting and tracking devices may be based on non-electromagnetic phenomena.

[0004] However, many current devices using multiple sensors typically are configured so that the sensors only react or respond to inputs local to themselves. Inputs impinging on other nearby sensors are considered to be problems to be avoided or designed around. This limits the overall resolution of, for example, location detection.

BRIEF SUMMARY OF THE INVENTION

[0005] Systems and methods are disclosed for object detection and tracking using multiple sensors, such as photo sensors or acoustic wave sensors, that are arranged to have overlapping output response profiles. The sensors' outputs can then be differenced or combined to produce a location accuracy at least as good as, and often better than, what could be achieved by the same sensors without overlapping profiles. An arrangement of sensors having such overlapping output response profiles is termed an ommatidium (plural: ommatidia). [0006] Hyperacuity is the ability of a detection or tracking device or system to discriminate a minimal angular displacement of an object moving in the system's field of view that is smaller than the sensor dimension or spacing. The hyperacuity of such systems can be limited only by the system electronics. In preferred embodiments, analog electronics are used to achieve faster results by avoiding limitations due to sampling rate; nevertheless, the methods and systems can still be implemented in digital electronics.

[0007] Multiple ommatidia can be organized into a single system to achieve various advantages, such as increased resolution and/or wider field of view. Also, multiple ommatidia can be organized hierarchically to achieve zooming capability for detection and tracking.

[0008] Multiple ommatidia can be configured on a surface to provide a wide field of view (FoV) for detection and tracking. The multiple ommatidia can use combined methods of detection and tracking, and may use IR, visible, or polarization detection.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 shows a structure of a tracking system comprising an ommatidium, according to an embodiment. [0010] FIGs. 2A, 2B, 2C, and 2D show configurations of sensors arranged to form a single ommatidium to achieve hyperacuity, according to embodiments. [0011] FIG. 3 shows an embodiment of a sensor configured with a lens, according to embodiments.

[0012] FIG. 4 shows overlapping Gaussian response profiles of sensor signals within an ommatidium, according to an embodiment. [0013] FIGs. 5A and 5B respectively show a functional diagram for a circuit to implement a differencing method and Tables of coefficients to use with the circuit components, according to embodiments.

[0014] FIGs. 6A and 6B respectively show the geometric parameters underlying an orthogonal decomposition method and a functional diagram for a circuit to implement an orthogonal decomposition method, according to embodiments.

[0015] FIGs. 7A, 7B and 7C show the geometric parameters used in an analysis of an orthogonal decomposition method, according to an embodiment.

[0016] FIG. 8 shows graphs related to a sensor output during traversal of an optical sensor by an object image, according to embodiments. [0017] FIG. 9 shows the geometry used for analyzing a trajectory of a moving object's image, according to embodiments.

[0018] FIG. 10 shows a functional block diagram of operations used to track a moving object image, according to embodiments.

[0019] FIG. 11 shows a component level block diagram of operations used to track a moving object image, according to embodiments.

[0020] FIGs. 12A and 12B show the geometry used for determining a trajectory of a moving object image, according to embodiments.

[0021] FIG. 13 shows a component level block diagram of operations used to track a moving object image, according to embodiments. [0022] FIG. 14 shows a process of dynamic contrast inversion, according to embodiments. [0023] FIG. 15 shows a component level block diagram of operations used to implement dynamic contrast inversion, according to embodiments.

[0024] FIGs. 16A, 16B and 16C show configurations of sensors and structures for multi-ommatidia systems and methods, according to embodiments.

[0025] FIGs. 17A, 17B and 17C show the geometry of a multi-ommatidia system and a corresponding table of parameters, according to embodiments.

[0026] FIG. 18 shows the geometry of a multi-ommatidia system, according to embodiments.

[0027] FIG. 19 shows the geometry for tracking an object image between ommatidia of a multi-ommatidia system, according to embodiments.

[0028] FIG. 20 shows a zooming method of a multi-ommatidia system, according to embodiments.

[0029] FIG. 21 shows a zooming method of a multi-ommatidia system, according to embodiments.

[0030] FIG. 22 is a flow chart of a method of tracking an object image in a multi-ommatidia system, according to embodiments.

[0031] FIG. 23 shows a multi-ommatidia system having adaptive grouping of sensors into ommatidia, according to embodiments.

[0032] FIG. 24 shows a functional block diagram for adaptive grouping of sensors into ommatidia within a multi-ommatidia system, according to embodiments.

[0033] FIG. 25 is a block diagram for using orthogonal differencing methods in a multi-ommatidia system, according to embodiments.

[0034] FIG. 26 is a block diagram for combined methods in a multi-ommatidia system, according to embodiments.

[0035] FIG. 27 is a component block diagram for combined methods in a multi-ommatidia system, according to embodiments.

DETAILED DESCRIPTION OF THE INVENTION

I. INTRODUCTION

[0036] Systems and methods are disclosed in which multiple sensors are arranged to form a combined sensing unit, called an ommatidium. The sensors may be of any of a variety of physical phenomena, such as visible, infrared or other electromagnetic waves, acoustic waves, or light polarization.

[0037] In exemplary embodiments the sensors in an ommatidium have overlapping output response profiles. This overlapping response is typically non-linear, and is in fact used to achieve a higher location resolution (hyperacuity) than would a system using similar sensors without such overlap. Further, exemplary embodiments will make use of analog components, such as fast logarithmic amplifiers, to achieve fast detection and tracking that is not limited by a sampling rate, as in a digital system. The methods and systems can nevertheless be implemented with sampling and digital processing.

[0038] For simplicity of exposition, the exemplary embodiments disclosed herein will be of methods for, and systems of, photo sensors but it will be clear to one of skill in the art that the methods and systems are not restricted to photo sensor systems. The methods and systems described herein for optical (visual light) based sensing and tracking using photo sensors will be seen as readily adaptable to other frequency ranges (such as infrared, ultraviolet or x-ray) of electromagnetic radiation for corresponding sensors and imaging devices. Further, the methods and systems in fact will be seen as readily adaptable to location and tracking devices based on other physical phenomena, such as for sonar devices that sense acoustic waves.

[0039] The following sections describe different embodiments of the inventions. Part II discloses systems and methods based on a single ommatidium array of sensors. Methods of achieving hyperacuity include differencing methods that may use logarithmic amplification of sensor signals and analog electronics for real-time tracking. Other methods disclosed include orthogonal decomposition methods that use sinusoidal weighting of the sensor signals to form a pair of resultant signals which are used to achieve hyperacuity tracking. Part III discloses systems and methods based on multiple ommatidia arrays. Such systems may use a single lens or multiple lenses. Part IV discloses embodiments directed to HYPRIS™ systems, which may use multiple ommatidia systems to achieve wide field of view, and other tracking and imaging results.

II. SINGLE OMMATIDIA SYSTEMS AND METHODS

[0040] The methods and systems disclosed in this section combine multiple sensors to function as a single location or position detection (herein just "detection") and location tracking (herein just "tracking") device. Part A describes exemplary physical system structures that may be used to implement the systems and methods. Part B describes methods based on differencing, and Part C describes alternate methods based on orthogonal decomposition of the sensor signals. Part D describes how the methods and systems can implement trajectory tracking. Part E describes methods to compensate when a sensor's output signal response to an image feature of a detected/tracked object in fact is darker than the output response to the imaged background. Part F describes ways to combine the various methods.

A. Single Ommatidium Systems

[0041] FIG. 1 shows an exemplary detection and tracking system 100 that can make use of the methods and systems disclosed herein. In this example, a supporting structure 120 encloses a lens 110, with a focal length f, 140, that focuses light onto a sensor field 130. The sensor field can comprise a plurality of sensor arrangements, as now described.

[0042] FIGs. 2A, 2B, 2C, and 2D show exemplary configurations of sensors: respectively 210, 220, 230 and 240. For the case that the sensors are photo sensors, these can be arranged on the sensor field 130. FIG. 2A shows a first embodiment of an ommatidium configuration, in which six sensors are arranged around a central sensor. Depending on the shape of the individual sensors, a similar configuration is shown in FIG. 2B for circular sensors.

[0043] In FIG. 2B the sensors R₀, ..., R₆ are arranged geometrically to form an ommatidium disposed in a hexagonal pattern around the central sensor R₀. Each sensor has a unique and well-defined location relative to the central sensor which is also the central or reference point of the ommatidium. When composed of photo sensors, a beam of light incident on this array of sensors will form an image spot on a given sensor, indicated at point (x, y) on R₃. Depending on the output response profile of the sensors, as shown in FIG. 3, one or more of the lenses may use the configuration 300 in which there is a surface mounted lens 310 on the sensor. The combination of lens 110 and lens 310 may implement a response profile of the sensor that is Gaussian and/or overlaps with the response profile of neighboring sensors. An example of overlapping response profiles for the configuration of FIG. 2B is shown in FIG. 4 and described further below. [0044] The described sensor response profile is to be a monotonically decreasing function from each sensor's center; these decaying functions include but are not limited to Gaussian, Airy, cosine power, sin(x)/x, sinc(x), (1 - x²)^n polynomials, etc. There are several ways to generate the nonlinear response profile, including nonlinear doping of the detector, optical termination, waveguide transmission, and intentional image feature blurring to achieve the desired response profile. Each sensor has an overlapping field of view with its adjacent neighbors.
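By way of illustration only (the parameter values and function names below are assumed, not taken from the patent), the overlap between neighboring sensors with monotonically decreasing response profiles can be sketched numerically. The snippet evaluates a Gaussian profile and a cosine-power profile for a point image at a sensor center, halfway to an adjacent sensor, and at the adjacent sensor's center, showing that a spot between two sensors stimulates both:

import numpy as np

# Hypothetical parameters for illustration only.
d = 1.0                       # center-to-neighbor spacing of the hexagonal array
sigma = 0.6                   # one-sigma width of the Gaussian profile

def gaussian_profile(r):
    """Gaussian-type monotonically decreasing response with distance r."""
    return np.exp(-r**2 / (2 * sigma**2))

def cosine_power_profile(r, width=2.0, n=4):
    """Cosine-power profile, another of the listed decaying functions."""
    r = np.clip(np.abs(r) / width, 0.0, 1.0)
    return np.cos(0.5 * np.pi * r) ** n

# A point image halfway between two adjacent sensors stimulates both,
# which is the overlap the hyperacuity methods below rely on.
for r in (0.0, d / 2, d):
    print(r, gaussian_profile(r), cosine_power_profile(r))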

[0045] FIGs. 2C and 2D show ommatidia with alternate arrangements 230 and 240 of sensors. The methods described below can also be adapted to these configurations, as will be clear to one of skill in the art after understanding the following descriptions.

B. Differencing Systems and Methods

[0046] This part describes differencing methods by which the exemplary arrangements of multiple individual sensors into ommatidia previously described can implement hyperacuity detection and tracking.

[0047] Embodiments of the differencing method will be disclosed by way of example using the seven sensor hexagonal ommatidium configuration of FIG. 2B. From the derivation given below it will be clear to one of skill in the art how the methods can be adapted for ommatidia having other configurations.

[0048] For this example case, each sensor is assumed to have a Gaussian-type, axisymmetric response profile, as exemplified in FIG. 4, that overlaps with adjacent sensors. An image spot is shown located at position (x, y) in the array's coordinate system in FIG. 2B. An image feature of the object produces a light spot that is sensed by several neighboring sensors due to their overlapping fields of view. The location of the image spot's center as observed from the center of the photo sensors R₀, R₂ and R₃ in FIG. 2B can be expressed by the set of equations

r₀² = (x - x₀)² + (y - y₀)²
r₂² = (x - x₂)² + (y - y₂)²
r₃² = (x - x₃)² + (y - y₃)²    (II.B.1)

The solution of this set of equations can be interpreted as the intersection of three circles at the point (x, y), the circles having radii r₀, r₂ and r₃ centered at (x₀, y₀), (x₂, y₂), and (x₃, y₃), respectively. Taking the difference between the second and the first equation in (II.B.1), and then taking the difference between the third and the first in (II.B.1) yields

(x - x₂)² + (y - y₂)² - (x - x₀)² - (y - y₀)² = r₂² - r₀²
(x - x₃)² + (y - y₃)² - (x - x₀)² - (y - y₀)² = r₃² - r₀²    (II.B.2)

respectively. Letting the point (x₀, y₀) = (0, 0) be the origin, which coincides with the geometric center of the array, the set (II.B.2) then becomes

(-2x₂)x + (-2y₂)y = r₂² - r₀² - x₂² - y₂²
(-2x₃)x + (-2y₃)y = r₃² - r₀² - x₃² - y₃²    (II.B.3)

[0049] The right hand sides of (II.B.3) are constant values:

A = r₂² - r₀² - x₂² - y₂²
B = r₃² - r₀² - x₃² - y₃²    (II.B.4)

Using Cramer's rule one finds from (II.B.3) and (II.B.4) the unknowns x and y as

x = (B y₂ - A y₃) / [2(x₂ y₃ - x₃ y₂)],  y = (A x₃ - B x₂) / [2(x₂ y₃ - x₃ y₂)]    (II.B.5)

Once the x and y coordinates of the image spot have been determined, the angle φ that the spot subtends with respect to the array's reference axis and the distance ρ to the origin and center of the array are given respectively by

φ = arctan(y/x),  ρ = (x² + y²)^(1/2)    (II.B.6)

[0050] The term r_i in these equations, i.e. the distances from the image spot center to the centers of the photoreceptors, is determined by analyzing the response profile of the ommatidium's sensors. The Gaussian sensor response profile to focused light stimuli has the form:

I_i = Î e^(-β r_i²)    (II.B.7)

Equation (II.B.7) represents a Gaussian-like radial-dependence response. In this equation I_i is the signal intensity or response of the i-th sensor to an image spot illuminating the sensor at a distance r_i from the sensor's axis of symmetry, and β is a constant that represents the rate of decay of the response with the distance from the center of the sensor. One may relate β to the value σ of the Gaussians used in statistics; the conversion is β = 1/(2σ²). Î is the peak value of the sensor response for a given image spot illuminating the center of the sensor. A cylindrical coordinate system is used here for the sensor response function. [0051] The square of the radial distance at which the image spot is illuminating the sensor, relative to the sensor's center, can be obtained from (II.B.7) as

r_i² = (1/β) ln(Î / I_i)    (II.B.8)

Equation (II.B.8) requires knowing Î to solve for r_i, since only I_i, the sensor signal response due to the image spot illuminating at a distance r_i, is observed or measured. It will be shown below that the knowledge of this value is not needed, as relative measures of the sensor responses will only be required.

[0052] Referring back to FIG. 2B and using equation (II.B.8), one has

r₂² - r₀² = (1/β) ln(Î / I₂) - (1/β) ln(Î / I₀)

which becomes

r₂² - r₀² = (1/β) ln(I₀ / I₂)    (II.B.9)

[0053] As shown in (II.B.9), Î was canceled out by a logarithmic property. Similarly,

r₃² - r₀² = (1/β) ln(I₀ / I₃)    (II.B.10)

Substituting (II.B.9) and (II.B.10) into the right-hand side of (II.B.4), and noting that x₂² + y₂² = x₃² + y₃² = d², produces

A = (1/β) ln(I₀ / I₂) - d²,  B = (1/β) ln(I₀ / I₃) - d²    (II.B.11)

In (II.B.11), d is the constant distance from the center of the sensor array to each photoreceptor. Substituting A and B from (II.B.11) into (II.B.5) gives the image spot coordinates (x, y) as

x = {[(1/β) ln(I₀ / I₃) - d²] y₂ - [(1/β) ln(I₀ / I₂) - d²] y₃} / [2(x₂ y₃ - x₃ y₂)]    (II.B.12)

y = {[(1/β) ln(I₀ / I₂) - d²] x₃ - [(1/β) ln(I₀ / I₃) - d²] x₂} / [2(x₂ y₃ - x₃ y₂)]    (II.B.13)

[0054] For the general case of the central sensor and two adjacent peripheral sensors i and j, one would have

x = {[(1/β) ln(I₀ / I_j) - d²] y_i - [(1/β) ln(I₀ / I_i) - d²] y_j} / [2(x_i y_j - x_j y_i)]    (II.B.14)

y = {[(1/β) ln(I₀ / I_i) - d²] x_j - [(1/β) ln(I₀ / I_j) - d²] x_i} / [2(x_i y_j - x_j y_i)]    (II.B.15)

Applying some algebra, (II.B.14) and (II.B.15) can be written as

x = {(1/β)[(y_i - y_j) ln(I₀) + y_j ln(I_i) - y_i ln(I_j)] - d²(y_i - y_j)} / [2(x_i y_j - x_j y_i)]    (II.B.16)

y = {(1/β)[(x_j - x_i) ln(I₀) - x_j ln(I_i) + x_i ln(I_j)] - d²(x_j - x_i)} / [2(x_i y_j - x_j y_i)]    (II.B.17)

Since the only variables in (II.B.16) and (II.B.17) are the signals I₀, I_i, and I_j, these equations can be written, respectively, as

x = χ_i ln(I_i) + χ_j ln(I_j) + χ₀ ln(I₀) + χ_d    (II.B.18)

y = γ_i ln(I_i) + γ_j ln(I_j) + γ₀ ln(I₀) + γ_d    (II.B.19)

Equations (II.B.18) and (II.B.19) are two linear equations in ln(I_i), ln(I_j), and ln(I₀). The coefficients, using Δ = 2(x_i y_j - x_j y_i), are given by

χ_i = y_j / (βΔ),  χ_j = -y_i / (βΔ),  χ₀ = (y_i - y_j) / (βΔ),  χ_d = -d²(y_i - y_j) / Δ
γ_i = -x_j / (βΔ),  γ_j = x_i / (βΔ),  γ₀ = (x_j - x_i) / (βΔ),  γ_d = -d²(x_j - x_i) / Δ    (II.B.20)

For the hexagonally packed sensor array in FIG. 2B, with j ≡ (i mod 6) + 1 and i = 1, 2, ..., 6, one has

x_i = d cos(iπ/3),  x_j = d cos((i + 1)π/3)
y_i = d sin(iπ/3),  y_j = d sin((i + 1)π/3)    (II.B.21)

Substituting these into (II.B.20), and noting that Δ = 2(x_i y_j - x_j y_i) = √3 d², gives

χ_i = sin((i + 1)π/3) / (√3 βd),  χ_j = -sin(iπ/3) / (√3 βd),  χ₀ = [sin(iπ/3) - sin((i + 1)π/3)] / (√3 βd),  χ_d = d[sin((i + 1)π/3) - sin(iπ/3)] / √3
γ_i = -cos((i + 1)π/3) / (√3 βd),  γ_j = cos(iπ/3) / (√3 βd),  γ₀ = [cos((i + 1)π/3) - cos(iπ/3)] / (√3 βd),  γ_d = d[cos(iπ/3) - cos((i + 1)π/3)] / √3    (II.B.22)

The coefficient values for the seven sensor hexagonal ommatidium of FIG. 2B are given in Table 1 and Table 2 in FIG. 5B, and each has the dimension of length.
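The preferred embodiment computes these quantities with analog circuitry, as described in the next paragraph. Purely as a numerical check of the differencing relations (II.B.9)-(II.B.13), the following Python sketch simulates Gaussian sensor responses for an assumed spacing d and decay rate β (illustrative values; all function names are hypothetical) and recovers a spot position from the central sensor and two adjacent peripheral sensors:

import numpy as np

d, beta = 1.0, 1.0 / (2 * 0.6**2)   # assumed array spacing and decay rate

def sensor_centers():
    """R0 at the origin, R1..R6 at angles k*pi/3 on a circle of radius d."""
    return {0: (0.0, 0.0),
            **{k: (d * np.cos(k * np.pi / 3), d * np.sin(k * np.pi / 3))
               for k in range(1, 7)}}

def gaussian_signal(cx, cy, x, y, peak=1.0):
    """Gaussian response of a sensor centered at (cx, cy) to a spot at (x, y)."""
    return peak * np.exp(-beta * ((x - cx)**2 + (y - cy)**2))

def locate(I0, Ii, Ij, i, j):
    """Differencing estimate of the spot position from the central sensor
    and two adjacent peripheral sensors i and j (equations II.B.9-II.B.13)."""
    c = sensor_centers()
    (xi, yi), (xj, yj) = c[i], c[j]
    A = np.log(I0 / Ii) / beta - d**2      # logarithms cancel the unknown peak
    B = np.log(I0 / Ij) / beta - d**2
    delta = 2.0 * (xi * yj - xj * yi)
    x = (B * yi - A * yj) / delta
    y = (A * xj - B * xi) / delta
    return x, y

# Simulate a spot at (0.18, 0.07) and recover it from sensors R0, R1, R2.
c = sensor_centers()
spot = (0.18, 0.07)
I = {k: gaussian_signal(*c[k], *spot) for k in (0, 1, 2)}
print(locate(I[0], I[1], I[2], 1, 2))   # approximately (0.18, 0.07)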

[0055] FIG. 5A shows a functional block diagram of an analog circuit implementation of equations (II.B.18) and (II.B.19) that can be realized by summing the logarithmic amplification of the sensor signals with the coefficients of Table 1 and Table 2 in FIG. 5B. A first advantage of this embodiment is that the accuracy of the calculations depends only on the accuracy of the logarithmic amplifier and the other analog circuitry, and does not have discretization inaccuracies. The calculations also avoid inaccuracies due to time-interval averaging effects that occur when using an analog-to-digital converter to sample the sensor signals at fixed sample times. The sensors are denoted by R_k for k from 0 to 6, with R₀ denoting the central sensor. The configuration 510 shows that the output of the sensor R₀ is received by a logarithmic amplifier, denoted ln, to produce an amplified output signal S₀. This same configuration is also shown being applied to the sensor outputs of sensors R_k for k from 1 to 6. The sigma symbols, as conventional, indicate summing operations, and the coefficients disclosed above are stored as constant sources. [0056] The circuit yields six pairs of solutions 512 as given by the equations above.

[0057] While implementation using analog circuitry is a preferred embodiment, digital sampling of the sensor responses could still be used to implement the solutions given in equations (II.B.18) and (II.B.19). For an ommatidium with an alternative configuration, a comparable analysis will yield an alternate set of coefficients. [0058] That the method for this configuration yields six pairs of solutions implies that up to and including six independently moving objects, producing six separate image features on the ommatidium, can be separately located. When the differencing detection method is applied in real time, the separate detections allow for tracking the six moving objects. Generalizing from this to other configurations of ommatidia further implies that if there are a total of n sensors in an ommatidium, with n-1 arranged around a central sensor, then such an ommatidium could be able to detect and track up to n-1 objects.

C. Orthogonal Decomposition Methods and Systems

[0059] The methods and systems described in this section can provide location, motion tracking, velocity and acceleration of an image feature traversing the ommatidium. As with the differencing methods and systems, in preferred embodiments the processing may use real time parallel analog processing. Other embodiments may process the sensor signals using sampling with digital circuitry. Exemplary methods generate an orthogonal decomposition of the ommatidium sensor output responses to generate continuous position signals and, through temporal derivatives of the position signals, continuous velocity and acceleration signals of the image feature traversing the ommatidium. As with previous methods and systems, hyperacuity can be achieved.

[0060] There are two aspects of the embodiments directed to the orthogonal decomposition methods. The first aspect is related to the creation of the orthogonal signals. The second aspect relates to the processing of the orthogonal signals to generate a radial distance (rho: ρ) and incident angle (phi: φ, sometimes also denoted ϕ) with respect to the ommatidium center of an image feature being tracked as it traverses the ommatidium. At both stages, velocity and acceleration of the image feature can be generated by taking the temporal derivatives of the location/position signals.

[0061] As an overview, in some embodiments the output signal intensities of the individual ommatidium sensors are correlated with an orthogonal system to provide location and motion tracking of an image feature traversing the ommatidium field of view. The sensor outputs of the ommatidium sensors are weighted with sine and cosine functions and summed using an algorithm to affix position to the sensors within the ommatidium to provide a position of an image feature with respect to the center of the ommatidium. The sums of the weighted sensor outputs represent the real and imaginary mathematical parts of the orthogonal decomposition of the image feature position. For a moving object, the orthogonal intensity components are constantly changing. The ratio of these values can be used to calculate the direction of object motion. Therefore, a change in trajectory can be extracted in real time. The temporal derivative of the position signal yields the velocity of the image feature. The second derivative yields the image feature's acceleration.

[0062] Embodiments of the orthogonal decomposition methods and systems can be understood with reference to the particular embodiment shown in FIG. 6A for the seven sensor hexagonal ommatidium configuration of FIG. 2B. As described above, the sensors are arranged

geometrically to form an ommatidium disposed in a hexagonal pattern. Each sensor has a unique and well-defined location relative to the central sensor which is also the central or reference point of the ommatidium. A beam of light incident on this array of sensors will form an image spot on a given sensor and on the extension of the Gaussian profiles of neighboring sensors. Though the following description is with respect to that configuration, one of skill in the art will see how the methods and systems can be applied to ommatidia of other configurations. [0063] FIG. 6A shows the (x, y) coordinate system for the ommatidium as a whole. In this configuration the angle formed by the rays from the origin to the centers of two adjoining sensors is π/3. FIG. 6A also shows the transformation matrix that can be used to convert the (x, y) coordinate system to a rotated coordinate system (x', y') for a given angle of rotation γ. The method can be applied in the rotated coordinate system. [0064] The orthogonal decomposition of the ommatidium's sensor signals can be best described by means of complex functions and reference to FIG. 6B. In the case here, special optical coupling in each sensor results in a Gaussian sensor response profile which is spatially dependent on the radial distance measured from the center axis of the sensor. This Gaussian profile can be expressed, for the k-th sensor, as

s_k(I_k, r_k) = η(λ) I_k e^(-β r_k²),  k = 0, 1, ..., 6    (II.C.1)

In (II.C.1), s_k(I_k, r_k) is the sensor response to an assumed thin beam of light of intensity I_k illuminating the k-th photoreceptor system normally at a distance r_k from the center of the sensor, and η(λ) is an efficiency factor that includes the photon efficiency of the sensor, the optical transmissivity of the lens system, and other losses which are wavelength (λ) dependent. The signals from the sensor onto which the light beam is impinging and that of the neighboring sensors can be added up to give a resultant system output signal S shown in FIG. 6B. A simulated response of the sensors is as shown in FIG. 4 and was described above.

[0065] Significantly more useful information can be obtained by weighting each signal s_k by the complex exponential

w_k = e^(j kπ/3) = cos(kπ/3) + j sin(kπ/3)    (II.C.2)

This complex function relates the angular positions of the sensors to the (x, y) frame of reference of the sensors arrangement or assembly for one ommatidium. These weights are indicated in FIG. 6A. After multiplying (II.C.1) and (II.C.2) and summing over all k one obtains the composite complex signal

S_c = Σ_k s_k w_k = Σ_k s_k cos(kπ/3) + j Σ_k s_k sin(kπ/3) ≡ U + jV    (II.C.3)

[0066] It is convenient to call U the real part or component, and V the imaginary part or component of this decomposition of the sensor signals, both of which are mutually orthogonal to each other. FIG. 6B shows a circuit component diagram that can be used to calculate U and V. As above, the sensors are denoted R₀ ... R₆ as shown, producing signals S₀ ... S₆. The weights are stored as part of the cosine multiplier blocks w₀ ... w₆ and the further sine blocks. The weights for the alternate (x', y') coordinates can be modified based on an input for the given angle of rotation γ.

[0067] Based on (II.C.3), one may calculate the magnitude and the phase of the complex signal sum and express the composite complex signal as

S_c = (U² + V²)^(1/2) e^(jφ)    (II.C.4)

That is, one obtains the magnitude (U² + V²)^(1/2) and phase φ = tan⁻¹(V/U) which can be used to locate the position of the image light spot in the ommatidium.

[0068] It should be noted that the U and the V signals described here comprise the orthogonal components of the intensity of the ommatidium sensor signals and should not be confused with the I, Q phase decomposition that is utilized in communications and antenna systems. Even though the mathematics of the orthogonal decomposition is comparable, the underlying principle of using sensor intensity instead of phase is novel.

[0069] For locating the light spot relative to the ommatidium image space, both the angle φ subtended between the line joining the spot and the ommatidium center and the x-axis of the ommatidium reference frame, and the distance ρ from the center of the ommatidium to the spot location must be computed. The distance is computed as the two-norm of the orthogonal signals U and V, each normalized by the resultant signal S, scaled by a coefficient that depends on the one-sigma (or beta-value) of the Gaussian curve utilized. That is,

ρ = r [(U/S)² + (V/S)²]^(1/2)    (II.C.5)

The coefficient r in equation (II.C.5) can be taken as the one-sigma value of the Gaussian response profile of each sensor.
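As a digital sketch of equations (II.C.2)-(II.C.5) only (the preferred embodiment uses analog circuitry), the snippet below weights the sensor signals by cosines and sines of successive multiples of π/3 and normalizes by the resultant sum S. It follows the reading of claim 14 in weighting only the six peripheral sensors, with the central sensor entering only the normalizing sum; that choice, the value of r_sigma, and the sample signal values are assumptions made for illustration:

import numpy as np

r_sigma = 0.6   # assumed one-sigma radius used to scale rho in (II.C.5)

def orthogonal_decomposition(s):
    """s = [s0, s1, ..., s6]: the seven sensor output intensities.
    Returns (U, V, rho, phi)."""
    s = np.asarray(s, dtype=float)
    k = np.arange(1, 7)                          # peripheral sensors R1..R6
    U = np.sum(s[1:] * np.cos(k * np.pi / 3))    # real (in-phase) component
    V = np.sum(s[1:] * np.sin(k * np.pi / 3))    # imaginary component
    S = s.sum()                                  # resultant (normalizing) signal
    rho = r_sigma * np.hypot(U, V) / S           # radial distance, eq. (II.C.5)
    phi = np.arctan2(V, U)                       # incident angle, phi = atan(V/U)
    return U, V, rho, phi

# Example: a spot sitting mostly over R1 (angular position pi/3).
print(orthogonal_decomposition([0.4, 0.9, 0.3, 0.05, 0.02, 0.05, 0.3]))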

[0070] For a larger area of an image light spot, the sensor signal is the response corresponding to the integral, over the spot's cross-sectional area, of the Gaussian profile. This is described next with the help of FIG. 7A. In FIG. 7A an image feature light spot of radius r₀ is at location (x₀, y₀) of the ommatidium, over a sensor with a Gaussian response profile 710. The center line of the image light spot is at a distance d from the i-th sensor's center located at (x_i, y_i) relative to the ommatidium reference frame.

[0071] The sensor response can now be computed as the integral of the Gaussian profile over the region R over the sensor surface 720 that the light spot covers. Thus, for the i-th sensor,

S_i = ∬_R η(λ) I_i e^(-βρ²) ds,  i = 0, 1, ..., 6    (II.C.6)

The integration region R consists of the circle defined by the image light spot of radius r₀. The integration of the sensor's Gaussian response profile is obtained by Riemann sums. That is, the sum of the products of the sensor Gaussian value g(ρ_k) at distance ρ_k with respect to the i-th sensor center (x_i, y_i), times the annular region areas ΔS_k over the light-spot region R. A three-dimensional depiction of the annular slices used for the numerical integration is depicted in the following FIG. 7C.

[0072] The integration is carried out by summing up the individual products of the sensor response (Gaussian profile) at distance ρ_k from the sensor's center and the surface area ΔS_k of the annular strip of width Δρ bounded by the light spot circle. The distance between the center (x_i, y_i) of the sensor and the center (x₀, y₀) of the image light spot is given by d. Letting the arc length of the central fiber of the annular region be a_k, the area of the annular strip within the region R of the light spot is given by

ΔS_k = a_k Δρ = ρ_k (2θ_kc) Δρ    (II.C.7)

[0073] In (II.C.7), θ_kc is the angle subtended by the radial vector ρ_k when the tip of this vector touches either point P or P' in FIGs. 7B and 7C, where P and P' are the intersections of the central annular sector fiber (dotted line in the annular region in FIGs. 7B and 7C) with the circle boundary. It can be shown from the geometry of the problem, using the law of cosines, that this angle is given by

θ_kc = arccos[(d² + ρ_k² - r₀²) / (2 d ρ_k)]    (II.C.8)

The product of the Gaussian value g(ρ_k) and the annular region area ΔS_k is the "volume" V_k given by

V_k = g(ρ_k) ΔS_k    (II.C.9)

Substituting (II.C.7) into (II.C.9) yields

V_k = g(ρ_k) ρ_k (2θ_kc) Δρ    (II.C.10)

[0074] Summing all these and replacing (II.C.8) into (II.C.10) gives

S_i ≈ Σ_k g(ρ_k) ρ_k · 2 arccos[(d² + ρ_k² - r₀²) / (2 d ρ_k)] · Δρ    (II.C.11)

The sum extends to all annular strips that fit in the region R from ρ_k = ρ₀ = d - r₀ to ρ_k = ρ_n = d + r₀. The number n of annular strips between these limits is controlled by the width Δρ of the annular strips that is selected for accuracy considerations. That is, the number of strips and the strip width are related as

Δρ = 2r₀ / n,  n ∈ Z⁺    (II.C.12)

[0075] Substituting (II.C.12) and the normalized Gaussian response function

g(ρ_k) = [1 / (2πσ²)] e^(-ρ_k² / (2σ²))    (II.C.13)

into (II.C.11) yields

S_i ≈ [η(λ) I_i / (2πσ²)] Σ_{k=1}^{n} e^(-ρ_k² / (2σ²)) ρ_k (2θ_kc) (2r₀ / n)    (II.C.14)

Substituting ρ_k = d - r₀ + (k - 1)Δρ + Δρ/2 into (II.C.14) and performing simplifications gives

S_i ≈ [2 η(λ) I_i r₀ / (n π σ²)] Σ_{k=1}^{n} exp{-[d - r₀ + (k - 1/2)(2r₀/n)]² / (2σ²)} [d - r₀ + (k - 1/2)(2r₀/n)] θ_kc    (II.C.15)

This Riemann sum can be computed numerically in simulation software.
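One possible numerical form of that Riemann sum (equations (II.C.7)-(II.C.14)) is sketched below; the spot radius, offset distances, σ, η, intensity, and strip count are assumed illustrative values, and strips whose mid-radius is non-positive are simply skipped in this simplified sketch:

import numpy as np

def spot_response(dist, r0, sigma=0.6, intensity=1.0, eta=1.0, n=200):
    """Response of a Gaussian-profile sensor to a circular spot of radius r0
    whose center lies a distance dist (> 0) from the sensor axis."""
    d_rho = 2.0 * r0 / n                          # strip width, eq. (II.C.12)
    total = 0.0
    for k in range(1, n + 1):
        rho_k = dist - r0 + (k - 0.5) * d_rho     # mid-radius of the k-th strip
        if rho_k <= 0.0:
            continue                              # spot overlaps the sensor axis
        # Half-angle subtended by the strip inside the spot, eq. (II.C.8);
        # clipping handles strips that lie entirely inside the spot.
        cos_theta = (dist**2 + rho_k**2 - r0**2) / (2.0 * dist * rho_k)
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        area = rho_k * (2.0 * theta) * d_rho      # annular strip area, (II.C.7)
        g = np.exp(-rho_k**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
        total += g * area                         # "volume" V_k, eqs. (II.C.9-10)
    return eta * intensity * total

# Response falls off as the spot center moves away from the sensor axis.
for dist in (0.1, 0.5, 1.0, 1.5):
    print(dist, round(spot_response(dist, r0=0.3), 4))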

[0076] Referring now to FIG. 8, the top trace shows the overlap volume (mathematical convolution) of a circle-shaped target (as in FIG. 7A) moving across the Gaussian response profile of a photo sensor, and the resulting signal output as that circle moves across the equator of that portion of the focal plane of the ommatidium. The bottom trace shows the percentage overlap of that same circular-shaped target with adjacent photo sensors whose Gaussian response overlaps with that of the sensor where the target is transiting an equatorial line.

[0077] The orthogonal decomposition method can be adapted to ommatidia with other configurations.

D. Tracking and Trajectory Determinations

[0078] This section describes systems and methods for continuous, analog tracking of the position, velocity, and acceleration of an image feature. Real-time parallel analog processing orthogonally decomposes the ommatidium sensor outputs into continuous signals representing the radial distance and incident angle, relative to the ommatidium reference axes, of an image feature traversing the ommatidium's field of view; the temporal derivatives of these position signals provide continuous velocity and acceleration signals of the image feature.

[0079] Further analog processing of the orthogonal signals U and V enables the determination and continuous tracking of the radial distance (ρ) and the radial angle (φ) of an image feature relative to the ommatidium reference which is the center of the central sensor (R₀) in the ommatidium using analog circuitry (see FIG. 8). Temporal derivatives of the ρ and φ signals can provide both velocity and acceleration of the object on a real time continuous basis.

[0080] The methods and systems will be described for the particular embodiment of the seven sensor hexagonal ommatidium configuration, but it will be clear to one of skill in the art that the methods and systems can be adapted for other configurations. The values of U, V, ρ and φ and their temporal derivatives may also be obtained by digitizing the ommatidium's analog sensor outputs and numerically computing these values, so the embodiments cover this approach as well, as would be apparent to one of skill in the art. The derivatives may also be obtained by sampling the ommatidium's output at discrete time instants.

[0081] FIG. 9 shows the seven sensor hexagonal ommatidium configuration 900, on which an image feature 910 moves in time across various sensors of the ommatidium along the trajectory 920. The image feature is presumed initially to be smaller than an individual sensor 221. The U and V outputs determine the radial distance (ρ) and the radial angle (φ) in real time.

[0082] FIG. 10 depicts a functional block diagram 1000 for an ommatidium O 1010 comprising seven sensors R_i, as shown. The sensor outputs s_i, i = 0, 1, ..., 6, are weighted in the Weighting Block 1020 to produce the real (U) and imaginary (V) orthogonal components and the sum S of all the sensor signals. The U and V signals are subsequently processed by analog circuitry in the Functions Block 1030 to produce ρ and φ of the image feature spot relative to the center of the ommatidium in polar coordinates, and also to produce the corresponding first and second derivatives, yielding the velocity and acceleration of the image feature spot.

[0083] This orthogonal decomposition enables the determination of the angle φ and radial distance ρ of an image feature traversing the ommatidium, where ρ is

\rho = k_a\,r\,\left(U^2 + V^2\right)^{1/2} ,

where r is the sensor radius. The angle φ is determined by U and V through the formula

\varphi = \tan^{-1}\!\left(\frac{V}{U}\right) .

Thus, with real-time analog processing, the image feature position within the ommatidium is specified from the two continuous signals U and V.

[0084] FIG. 11 shows a component level block diagram for computing the first and second derivative values of ρ and φ from the U and V inputs. The speed of the computation is limited only by the speed of the circuit components, and not by a sampling speed.
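As a minimal numerical sketch of the relations in [0083] and the derivative chain of FIG. 11, the following Python code computes ρ, φ and their first and second time derivatives from sampled U and V signals. The function names, the default values of k_a and r, and the use of finite differences (rather than the analog circuitry of the primary embodiment) are assumptions made for illustration; sampled processing is mentioned in [0080] as an alternative.

import numpy as np

def polar_from_uv(U, V, r=1.0, k_a=1.0):
    """Image-feature polar coordinates from the orthogonal signals (cf. [0083]):
    rho = k_a * r * sqrt(U^2 + V^2), phi = atan2(V, U)."""
    rho = k_a * r * np.hypot(U, V)
    phi = np.arctan2(V, U)  # quadrant-aware arctan(V/U)
    return rho, phi

def kinematics_from_uv(U, V, dt, r=1.0, k_a=1.0):
    """Sampled-signal analogue of FIG. 11: position, velocity and acceleration
    of rho and phi obtained by numerical differentiation of U(t), V(t)."""
    rho, phi = polar_from_uv(U, V, r, k_a)
    phi = np.unwrap(phi)                      # remove 2*pi jumps before differencing
    d_rho, d_phi = np.gradient(rho, dt), np.gradient(phi, dt)
    dd_rho, dd_phi = np.gradient(d_rho, dt), np.gradient(d_phi, dt)
    return rho, phi, d_rho, d_phi, dd_rho, dd_phi

# Example: a small spot circling the ommatidium center at 1 rad/s.
t = np.arange(0.0, 2.0, 0.001)
U, V = 0.3 * np.cos(t), 0.3 * np.sin(t)
rho, phi, v_rho, v_phi, a_rho, a_phi = kinematics_from_uv(U, V, dt=0.001)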

[0085] Referring now to FIG. 12A and FIG. 12B, a trajectory is shown of an image feature smaller than an individual detector traversing the ommatidium. FIG. 12A illustrates the geometry used for determining a trajectory of the image feature by a first method based on two observations of φ and ρ at different time instants. FIG. 12B illustrates the geometry used for determining a trajectory of the image feature by a second method based on continuous observations of φ and ρ.

[0086] There are two methods indicated in FIG. 12A for discrete sample determinations of the derivatives of φ and ρ. The first, shown as (a) in FIG. 12A, is based on two observations, (φ_1, ρ_1) and (φ_2, ρ_2), made by the system at two different times, t_1 and t_2, respectively, as the image feature traverses the array. Denoting the points observed by P_1 = P(t_1) and P_2 = P(t_2), the trajectory displacement S and angle η relative to the array reference line or x-axis are determined from

S = \left[\rho_1^2 + \rho_2^2 - 2\rho_1\rho_2\cos(\varphi_2 - \varphi_1)\right]^{1/2} (II.D.1)

and

\eta = \varphi_2 - \sin^{-1}\!\left[\frac{\rho_1}{S}\sin(\varphi_2 - \varphi_1)\right] . (II.D.2)

Equation (II.D.1) is obtained by straightforward application of the law of cosines, while (II.D.2) is obtained by applying the law of sines as follows:

\frac{\sin(\varphi_2 - \varphi_1)}{S} = \frac{\sin(\theta)}{\rho_2} = \frac{\sin(\alpha)}{\rho_1} , (II.D.3)

from which

\alpha = \sin^{-1}\!\left[\frac{\rho_1}{S}\sin(\varphi_2 - \varphi_1)\right] . (II.D.4)

From FIG. 12A (b), the trajectory angle η is obtained as η = φ_2 - α, which, with α given by (II.D.4), is equation (II.D.2).

[0087] An alternative method to determine the trajectory from the two (φ, ρ) data values is as follows: if

a = \rho_2\cos(\varphi_2) - \rho_1\cos(\varphi_1), \qquad b = \rho_2\sin(\varphi_2) - \rho_1\sin(\varphi_1) , (II.D.5)

then

S = \left(a^2 + b^2\right)^{1/2}, \qquad \eta = \tan^{-1}\!\left(\frac{b}{a}\right) . (II.D.6)

This method appears to be simpler to implement, as it does not require the inverse trigonometric functions of Discrete Method-1. Implementation of these two methods can be achieved using analog circuitry employing a time-delay line, or digitally from two (φ, ρ) samples.
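A short Python sketch of the two discrete methods, as reconstructed in (II.D.1) through (II.D.6), is given below. The function names and sample values are illustrative; the sign convention of the angle in Discrete Method-1 follows the geometry of FIG. 12A, which is not reproduced here, so the two angle outputs may differ by that convention.

import math

def trajectory_method_1(phi1, rho1, phi2, rho2):
    """Discrete Method-1 (II.D.1)-(II.D.4): displacement S via the law of
    cosines, trajectory angle eta via the law of sines."""
    dphi = phi2 - phi1
    S = math.sqrt(rho1**2 + rho2**2 - 2.0 * rho1 * rho2 * math.cos(dphi))
    alpha = math.asin((rho1 / S) * math.sin(dphi))
    eta = phi2 - alpha
    return S, eta

def trajectory_method_2(phi1, rho1, phi2, rho2):
    """Alternative method (II.D.5)-(II.D.6): rectangular differences, then
    magnitude and quadrant-aware angle."""
    a = rho2 * math.cos(phi2) - rho1 * math.cos(phi1)
    b = rho2 * math.sin(phi2) - rho1 * math.sin(phi1)
    return math.hypot(a, b), math.atan2(b, a)

# Both give the same displacement S; the angle conventions follow FIG. 12A.
print(trajectory_method_1(0.10, 1.00, 0.15, 1.02))
print(trajectory_method_2(0.10, 1.00, 0.15, 1.02))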

[0088] Referring to FIG. 12B for a continuous signal approach, deriving the trajectory is based on calculus. Starting with the exaggerated angle subtended between the two points P_1 = P(t_1) and P_2 = P(t_2) in FIG. 12B, one can make the following limiting observations. First, the arc s is obtained as s = ρΔφ. Relating to FIG. 12A for the discrete case, here one has ρ_1 = ρ and ρ_2 = ρ + Δρ. Second, as t_2 - t_1 → dt, the following happens: Δρ → dρ so that ρ_2 → ρ_1 = ρ; s → ds ≈ dS (that is, the arc approaches the secant); Δφ → dφ; the angle ∠P_1QP_2 → π/2; and S → dS. Therefore, for an infinitesimal time interval dt, one has

dS = \left[(\rho\,d\varphi)^2 + (d\rho)^2\right]^{1/2} , (II.D.7)

from which

\frac{dS}{dt} = \left[\left(\rho\,\frac{d\varphi}{dt}\right)^2 + \left(\frac{d\rho}{dt}\right)^2\right]^{1/2} . (II.D.8)

Since for most cases (dρ/ρ) ≈ 0, (II.D.8) can be written as

\frac{dS}{dt} = \rho\,\frac{d\varphi}{dt} + \frac{d\rho}{d\varphi}\,\frac{d\varphi}{dt} = \rho\,\frac{d\varphi}{dt}\left(1 + \frac{1}{\rho}\,\frac{d\rho}{d\varphi}\right) . (II.D.9)

One can write dρ/dφ = (dρ/dt)/(dφ/dt), so that (II.D.9) becomes

\frac{dS}{dt} = \rho\,\frac{d\varphi}{dt} + \frac{d\rho}{dt} . (II.D.10)

[0089] It can be shown that all the time-derivatives in (II.D.10) can be obtained from the time derivatives of U and V, that is, from dU/dt and dV/dt. Since φ = arctan(V/U),

\frac{d\varphi}{dt} = \frac{U\,\dfrac{dV}{dt} - V\,\dfrac{dU}{dt}}{U^2 + V^2} , (II.D.11)

and, from ρ = k_a r (U² + V²)^{1/2},

\frac{d\rho}{dt} = k_a\,r\,\frac{U\,\dfrac{dU}{dt} + V\,\dfrac{dV}{dt}}{\left(U^2 + V^2\right)^{1/2}} . (II.D.12)

Substituting (II.D.11) and (II.D.12) into (II.D.10) gives

\frac{dS}{dt} = \rho\,\frac{U\,\dfrac{dV}{dt} - V\,\dfrac{dU}{dt}}{U^2 + V^2} + k_a\,r\,\frac{U\,\dfrac{dU}{dt} + V\,\dfrac{dV}{dt}}{\left(U^2 + V^2\right)^{1/2}} . (II.D.13)

[0090] Using ρ = k_a r (U² + V²)^{1/2} and after some algebra one obtains

\frac{dS}{dt} = k_a\,r\,\frac{U\left(\dfrac{dU}{dt} + \dfrac{dV}{dt}\right) + V\left(\dfrac{dV}{dt} - \dfrac{dU}{dt}\right)}{\left(U^2 + V^2\right)^{1/2}} . (II.D.14)

This is the velocity of the image spot over the ommatidium image space. Integration of (II.D.14) over time would give displacement, and the derivative of (II.D.14) with respect to time would give the acceleration. These can be obtained using integrator and differentiator components in an analog circuit.
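The following Python sketch evaluates the speed of the image spot from sampled U and V signals, following (II.D.10) through (II.D.13) in the form reconstructed above. The function name, the default k_a and r, and the finite-difference derivatives are assumptions for illustration; the primary embodiment computes the same quantities with analog circuitry.

import numpy as np

def image_spot_speed(U, V, dt, r=1.0, k_a=1.0):
    """Sampled-signal estimate of the image-spot speed dS/dt from U(t), V(t)."""
    dU = np.gradient(U, dt)
    dV = np.gradient(V, dt)
    mag2 = U**2 + V**2
    rho = k_a * r * np.sqrt(mag2)                        # radial distance
    dphi = (U * dV - V * dU) / mag2                      # (II.D.11)
    drho = k_a * r * (U * dU + V * dV) / np.sqrt(mag2)   # (II.D.12)
    return rho * dphi + drho                             # (II.D.10)/(II.D.13)

# Example: a spot moving on a circle of radius 0.3 at 2 rad/s has speed 0.6*k_a*r.
t = np.arange(0.0, 1.0, 0.001)
U, V = 0.3 * np.cos(2 * t), 0.3 * np.sin(2 * t)
print(image_spot_speed(U, V, dt=0.001)[500])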

[0091] The angle η(t) of the instantaneous trajectory shown in FIG. 12B (b) is obtained as follows:

\alpha = \tan^{-1}\!\left(\frac{d\rho}{ds}\right) = \tan^{-1}\!\left(\frac{1}{\rho}\,\frac{d\rho}{d\varphi}\right) . (II.D.15)

Substituting (II.D.11) and (II.D.12) into (II.D.15) and using η = α + φ yields

\eta = \tan^{-1}\!\left(\frac{V}{U}\right) + \tan^{-1}\!\left(\frac{U\,\dfrac{dU}{dt} + V\,\dfrac{dV}{dt}}{U\,\dfrac{dV}{dt} - V\,\dfrac{dU}{dt}}\right) . (II.D.16)

[0092] FIG. 13 shows a block diagram of a circuit for obtaining the outputs of equations (II.D.14) and (II.D.16). When so implemented, the trajectory of the image feature can be calculated in real time.

E. Dynamic Contrast Control

[0093] This section describes embodiments of systems and methods for real-time tracking of an image feature with respect to the background and for automatically adjusting to contrast reversals between an image feature and the background. Such systems and methods can be used, for example, when an ommatidium-based system is imaging or tracking a dark object across a bright sky. Light sensors of the ommatidium can then have the bright sky background imaged onto most of their surfaces, with only a relatively small dark image feature on the sensor. As some previous embodiments presume a bright object image and a dark background, a potential problem or inaccuracy can be prevented by applying an inversion-and-biasing operation to the output signals of the sensors.

[0094] The theory of operation for dynamic contrast reversal can be understood with reference to FIG. 14 and FIG. 15. In the case of orthogonal decomposition methods and systems, tracking objects, regardless of whether they are lighter or darker than the ambient background, may be based on differentiating the U, V, and/or S signals and checking the sign, (+) or (-), of the differentiators' response signals. The sign can then trigger a command to another section of the circuit that applies the inversion-biasing mechanism, or reverts to the normal operation (i.e., a bright object against a dark background). As the image feature changes brightness relative to the background while it traverses the array, the differentiators and sign detectors enable or disable the inversion-biasing algorithm automatically so that the tracking continues using the appropriate tracking mode.

[0095] FIG. 14 shows the combinations that are possible between the object image feature and the background:

Image-Background Combinations               Differentiators Signal Levels & Output Signs
Bright Image (BI), Dark Background (DB)     (BI) > (DB) => (+)
Bright Image (BI), Bright Background (BB)   (BI) > (BB) => (+) or (BI) < (BB) => (-)
Dark Image (DI), Dark Background (DB)       (DI) > (DB) => (+) or (DI) < (DB) => (-)
Dark Image (DI), Bright Background (BB)     (DI) < (BB) => (-)

The first case is shown in 1410, in which the image feature 1412 is brighter than the background on the sensor. The graph for 1410 shows a cross section of the sensor response I: the sensor output due to the background is lower than the output due to the "target image". In the third case, 1420, in which the image feature 1422 is darker than the background, the output due to the background exceeds that due to the image feature. The detection scheme automatically switches between one mode (normal) and the other (inversion and biasing) according to the relative intensities of the object image feature and the background, as the object image feature traverses the ommatidium field of view. When the situation of 1420 is detected, the detection scheme applies inversion 1430 so that the image feature 1432 is brighter than the background, and follows this with biasing 1440 so that both outputs are positive and the image feature 1442 remains brighter than the background.
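The sign-driven mode switch and the inversion-and-biasing step of 1430/1440 can be sketched as follows in Python. The function names, the threshold, and the choice of bias are assumptions for illustration; the patent's embodiment performs the equivalent operations with differentiators, sign detectors and a switch, as in FIG. 15.

def contrast_mode(dS_dt, threshold=0.0):
    """Decide the tracking mode from the sign of the differentiated sum signal S:
    a positive swing suggests a bright image on a darker background (normal mode),
    a negative swing suggests a dark image on a brighter background (invert and bias)."""
    if dS_dt > threshold:
        return "normal"
    if dS_dt < -threshold:
        return "invert_and_bias"
    return "hold"  # no significant change: keep the current mode

def invert_and_bias(samples, bias=None):
    """Inversion-and-biasing step (cf. 1430 and 1440): negate the sensor outputs,
    then add a bias so that all outputs are positive again."""
    inverted = [-s for s in samples]
    if bias is None:
        bias = -min(inverted) if min(inverted) < 0 else 0.0
    return [s + bias for s in inverted]

# Example: a dark spot arriving on a bright background drives S downward.
print(contrast_mode(-0.8))                 # -> "invert_and_bias"
print(invert_and_bias([0.9, 0.7, 0.2]))    # dark feature re-expressed as bright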

[0096] FIG. 15 shows an embodiment of a functional block diagram of a circuit that can implement the dynamic contrast reversal. The particular embodiment shown applies to the case of U/V/S signals obtained by applying an orthogonal decomposition method to the seven sensor hexagonal ommatidium structure shown in FIG. 2B. The differentiators activate a switch to enable or disable the following inversion-and-biasing components for continual tracking of the object image feature.

F. Combined Methods and Systems

[0097] This section describes methods and systems that combine previously described methods and systems of locating or tracking an image feature of an object in a single ommatidium.

Combining the differencing methods described in section II.B with the orthogonal decomposition methods of section II.C can produce improved accuracy in position, velocity and acceleration determinations.

[0098] Besides clarity and notational economy, there are several analytical advantages gained by expressing the orthogonal and differencing methods in compact form using matrix-vector notation. In what follows, the orthogonal system is defined with respect to Lagrange's planar hexagonal packing index and number η, and the differencing system is defined with respect to the triangular number T_2 structure embedded in the hexagonal structure and in T_n for n > 2; that is, T_2 is a substructure of the hexagonal packing. This allows the implementation of zooming algorithms for both systems, as will be described below in section III.C. Another advantage is the possibility of generating a multiplicity of mathematically derived systems from these two basic ones by inversion, complementation, dual spaces, application of bilinear transformations, etc.

[0099] The orthogonal system is described here using complex exponential notation. For the differencing method, T_2 denotes, in addition to the structure it refers to, the distance between sensor centers for any T_n system.

[0100] For the orthogonal decomposition system, one has (II.F.1), where

S = S(t) = [s_i(t)], \quad E = \left[e^{\,j\theta_i}\right], \quad \Theta = [\theta_i], \quad U = \Re\!\left\{S^{T}E\right\}, \quad V = \Im\!\left\{S^{T}E\right\}, \quad \Sigma = \sum_i s_i(t) ,

with S the vector of sensor signals, E the vector of complex exponentials of the sensor angles θ_i, and Σ the sum of all sensor signals.

[0101] For the differencing system, one has (II.F.2) and (II.F.3), expressed in terms of the vector of logarithmically amplified signals

\left[\ln(s_0),\ \ln(s_i),\ \ln(s_{i+1}),\ 1\right]

and the constant

\beta = \frac{4\ln(2)}{T_2^{\,2}} .

[0102] The methods may be combined using proportional-integral-derivative (PID) methods, as in (II.F.4), where f(ρ, φ, ρ_h) is a function of its arguments, being the P part in the PID combining approach, which may include any correction terms, pre-acquisition coordinates as in the case of inter-ommatidia state propagation, etc.; where one operator in (II.F.4) represents the polar to rectangular conversion; and wherein δ is the Dirac delta function and Γ is a diagonal L-vector Select Matrix.

[0103] The matrix Λ = w·I_2 is a 2 by 2 diagonal weight matrix, where 0 ≤ w ≤ 1 would be determined based on specific tracking criteria and for particular cases. If w is set to 1, then the Orthogonal Decomposition Method is used exclusively. At the other extreme, if w is set to 0, then the Differencing Method is used exclusively. Intermediate values of w can be set by different tracking strategies or criteria. For example, if a large object is being observed, then the weight w could be set as the ratio of the object size or angular extent divided by the ommatidia FOV extent. Other criteria may involve contrast, brightness, object speed, distance from the ommatidia tracking limit, or the object's distance from the ommatidia center (where the differencing method will have all the positions converging). One can also set the weight w based on probability measures obtained from the tracking data, from a feedback loop of the tracking platform, or from an optimization tracking algorithm that alternately compares the two methods by switching the weight w between 0 and 1 and comparing the object's trajectory and noise, etc. In summary, the algorithm(s) for determining the optimal value of the combining weight w would be defined for each specific application. If w is set to zero, then also Γ = I_6 and multiple objects can be tracked simultaneously.
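A simplified Python sketch of the weighting in [0103] follows. It combines two (x, y) position estimates with the 2 by 2 weight matrix Λ = w·I_2; the use of (I_2 - Λ) as the complementary weight for the differencing estimate, the function names, and the example criterion for choosing w are assumptions, since the full matrix form (II.F.4) is not reproduced here.

import numpy as np

def combine_estimates(pos_orthogonal, pos_differencing, w):
    """Weighted combination of the two position estimates:
    w = 1 -> orthogonal decomposition only, w = 0 -> differencing only."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    Lam = w * np.eye(2)
    return Lam @ np.asarray(pos_orthogonal) + (np.eye(2) - Lam) @ np.asarray(pos_differencing)

# Example: favor the orthogonal estimate for a large object filling most of the FOV.
object_extent, fov_extent = 8.0, 10.0
w = min(object_extent / fov_extent, 1.0)
print(combine_estimates((0.52, -0.11), (0.49, -0.09), w))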

[0104] These combined methods and systems can be implemented in analog or digital circuitry, and may provide improved location detection and tracking of one or more independent image features.

III. MULTIPLE OMMATIDIA SYSTEMS AND METHODS

[0105] This section describes methods and systems that use one or more arrays of sensors that have been organized into subarrays, with each subarray acting as a single ommatidium, to perform detection and/or tracking of one or more objects.

A. Multiple Ommatidia Systems

[0106] The methods and systems disclosed for a single ommatidium can be applied when the sensor array comprises more than one individual ommatidium. The methods for single ommatidia can be extended and adapted to combine the detection and tracking results of each constituent ommatidium. Characteristics of the object or of multiple objects can be tracked, including size, shape, brightness, relative contrast (which is directly related to object detectability), and the rate at which the object(s) traverse the field of view of the system.

[0107] Various embodiments of configurations that combine multiple ommatidia, each comprising multiple sensors, are shown in FIG. 16A, FIG. 16B, and FIG. 16C, and described in further detail below. These figures show three embodiments: FIG. 16A is a planar arrangement of seven ommatidia under a single focusing lens; FIG. 16B comprises three ommatidia with independent focusing lenses (one lens per ommatidium); and FIG. 16C shows a particular example of numerous ommatidia arranged over a hemispherical (now three-dimensional) supporting structure. It will be recognized that the configuration of FIG. 16C is a specific example of embodiments in which the supporting structure is a convex hull, or an arbitrary three-dimensional body in general. Each individual, or unit, ommatidium of a multiple ommatidia system may have any one of the configurations shown in FIGs. 2A to 2D, or another configuration of multiple sensors configured to implement hyperacuity.

[0108] Methods that can be used in multiple ommatidia systems, such as to determine image feature position, include the orthogonal decomposition methods, the differencing methods, or the combined methods described previously. Other methods will be clear to one of skill in the art. The combined orthogonal-differencing methods can use weighted combinations, such as the PID-based weighting described above.

[0109] FIG. 16A shows a first configuration 1610, which may be implemented as an integrated circuit. Each individual ommatidium has the configuration of the seven sensor hexagonal ommatidium previously described. These individual ommatidia are then themselves configured as six ommatidia hexagonally arranged around a central ommatidium. The counterclockwise numbering of the sensors within each individual ommatidium is given by the second index number; the first index indicates the ommatidium number. The identification of the sensors within each ommatidia arrangement will always be given by ordered pairs (o, s), where o denotes the ommatidium number in an array, including a single ommatidium, and s denotes the sensor number within the o-th ommatidium. On each array, the center of sensor (0, 0) is taken as the origin of the array, which will be considered the origin of the global reference frame. Likewise, the center of sensor (o, 0) of the o-th ommatidium is considered the origin of the local reference frame for that particular ommatidium.

[0110] A similar alternate configuration 1620 uses circular shaped sensors. The central sensor of each seven sensor hexagonal ommatidium is indicated with a bold circle.

[0111] In FIG. 16B, the embodiment configuration 1630 comprises multiple independent ommatidia 1631 and 1632, each containing its own focusing lens, assembled mechanically on a planar support. In this case the sensors of different ommatidia are not adjacent, and larger gaps exist between the three ommatidia shown in this arrangement. The arrangement 1630 can be altered to have the three ommatidia touch each other as three tangent circles 120 degrees apart, as in configuration 1640. An "array center" can be defined for both such configurations.

[0112] The configuration 1640 of FIG. 16B can function as a hierarchical version of the configuration shown in FIG. 2C.

[0113] In FIG. 16C, the embodiment configuration 1650 is a 3-D hemispherical arrangement of unit ommatidia. As with the other arrangements, a global array center and frame, relative to which all the ommatidia observations are referred, can be defined as the center of the sphere defining the hemisphere. This is described in detail below.

[0114] FIG. 17A shows the geometry 1710 used for the methods for the configuration 1620 of FIG. 16A. The centers (x_oi, y_oi) are given in Table 3, shown in FIG. 17C. Note that the radial distance of the circle containing all six peripheral unit ommatidia centers is a fixed multiple of d, for d the radius of each circular sensor, as given in Table 3. The numbers are the cosines (for the X-coordinates) and the sines (for the Y-coordinates) of the respective angles θ_i shown in Column 1 of Table 3. These angles are the ones subtended from the reference line (the positive part of the X-axis) and the reference point (the center of the central ommatidium) to each peripheral unit ommatidium in the arrays shown in FIG. 17A.

[0115] FIG. 17B shows the transformation 1720, or mapping, of the position vector from a local unit ommatidium coordinate frame to the global coordinate frame, which is obtained by the following formulas in polar coordinates:

\rho_k = \left\{\left[x_{oi} + \rho_i\cos(\varphi_i)\right]^2 + \left[y_{oi} + \rho_i\sin(\varphi_i)\right]^2\right\}^{1/2}, \qquad \varphi_k = \tan^{-1}\!\left(\frac{y_{oi} + \rho_i\sin(\varphi_i)}{x_{oi} + \rho_i\cos(\varphi_i)}\right) . (III.A.1)

[0116] A method for the transformation from a local to the global frame can be implemented using the following pseudo-code:

(A) Read the (x_oi, y_oi) values from Table 3, or compute them using:
(A.2) For i = 2 to 6
{
(A.3) θ_oi = θ_o1 + (i - 1)·60°
(A.4) x_oi = R·cos(θ_oi)
(A.5) y_oi = R·sin(θ_oi)
}
where R is the radial distance from the array center to the peripheral unit ommatidia centers (Table 3).
(B) Read the ommatidium ρ_i, φ_i values for the actual spot-image location.
(C) a = x_oi + ρ_i·cos(φ_i)
(D) b = y_oi + ρ_i·sin(φ_i)
(E) ρ_k = (a² + b²)^{1/2}
(F) φ_k = tan⁻¹(b / a) (Note: this can be accomplished with a version of the arctan function that accounts for the quadrant of (a, b).)
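A runnable Python version of the pseudo-code above is sketched next. The function names, the starting angle θ_o1 of 30 degrees, and the numeric center spacing R are placeholders for the Table 3 values, which are not reproduced here.

import math

def peripheral_centers(R, theta_o1_deg=30.0):
    """Centers (x_oi, y_oi) of the six peripheral unit ommatidia (i = 1..6),
    per steps (A.2)-(A.5); R and theta_o1 stand in for the Table 3 values."""
    centers = {}
    for i in range(1, 7):
        theta = math.radians(theta_o1_deg + (i - 1) * 60.0)
        centers[i] = (R * math.cos(theta), R * math.sin(theta))
    return centers

def local_to_global(x_oi, y_oi, rho_i, phi_i):
    """Steps (B)-(F): map a local (rho_i, phi_i) observation of ommatidium i
    to global polar coordinates (rho_k, phi_k), per (III.A.1)."""
    a = x_oi + rho_i * math.cos(phi_i)
    b = y_oi + rho_i * math.sin(phi_i)
    return math.hypot(a, b), math.atan2(b, a)  # quadrant-aware arctan(b/a)

# Example: a spot seen by peripheral ommatidium 2 at rho = 0.4, phi = 1.2 rad.
centers = peripheral_centers(R=5.3)            # illustrative spacing only
rho_k, phi_k = local_to_global(*centers[2], rho_i=0.4, phi_i=1.2)
print(rho_k, phi_k)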

[0117] FIG. 18 shows the geometry 1800 for a configuration of multiple ommatidia covering a body, such as shown in FIG. 16C. Once the local ommatidia vectors have been transformed to the global array vectors, the transformation of the array global frame vectors onto a main system reference frame of a convex hull or hemisphere containing all the arrays can subsequently be applied as described next, using FIG. 18. The embodiment shown in FIG. 16C is for a hemispherical body supporting the ommatidia arrays.

[0118] FIG. 18 shows a vector transformation from a local unit ommatidia array of sensors 1820 on a body 1810 to global coordinates for the body as a whole. The global coordinate system for the spherical coordinates shown in FIG. 18 is an orthogonal rectangular (x, y, z) frame whose origin and z-axis coincide with the center of symmetry and axis of symmetry of the hemisphere, respectively. The polar angle θ_i for the i-th photoreceptor or array is measured relative to the z-axis, and the azimuthal angle Φ_i is measured counter-clockwise from the x-axis of the right-handed reference frame. The x-axis is the preferential or reference axis, such as for a supporting structure or vehicle. Each photo sensor (in the embodiment in which a single ommatidium is at each location) or each ommatidia array (e.g., 49-sensor arrays) in the hemisphere has its own local reference plane and an associated local frame that can be defined as follows.

[0119] The normal-to-the-surface unit vector at the i-th photoreceptor or array, which is parallel to the radial vector R_i, can be derived from FIG. 18 relative to the system's (x, y, z) reference frame as

u_i = (u_x, u_y, u_z) = \left(\sin(\theta_i)\cos(\Phi_i),\ \sin(\theta_i)\sin(\Phi_i),\ \cos(\theta_i)\right) . (III.A.2)

This vector also defines the local horizon plane, a plane tangent to the hemispherical surface at the center of the ommatidium, or at the ommatidia array's center. The south-looking unit vector is perpendicular to u_i and is contained in the plane defined by u_i and the z-axis (the trace or intersection between this plane and the hemisphere is called the local meridian of east-longitude Φ_i). This local-frame unit vector (or versor), s_i, is defined by

s_i = (s_x, s_y, s_z) = \left(\cos(\theta_i)\cos(\Phi_i),\ \cos(\theta_i)\sin(\Phi_i),\ -\sin(\theta_i)\right) . (III.A.3)

Finally, the east-looking unit vector e_i forms with the other two a right-handed system. Hence, it is defined by

e_i = u_i \times s_i = (e_x, e_y, e_z) = \left(-\sin(\Phi_i),\ \cos(\Phi_i),\ 0\right) . (III.A.4)
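A short Python sketch of the local frame vectors, in the form reconstructed above as (III.A.2) through (III.A.4), follows; the function name and the sanity check are illustrative additions.

import numpy as np

def local_frame(theta_i, phi_i):
    """Local frame {u, s, e} of the i-th ommatidium (or array) on the hemisphere,
    from its polar angle theta_i and azimuth phi_i in radians."""
    st, ct = np.sin(theta_i), np.cos(theta_i)
    sp, cp = np.sin(phi_i), np.cos(phi_i)
    u = np.array([st * cp, st * sp, ct])     # outward surface normal
    s = np.array([ct * cp, ct * sp, -st])    # south-looking, in the meridian plane
    e = np.cross(u, s)                       # east-looking: (-sin(phi), cos(phi), 0)
    return u, s, e

# Sanity check: the three vectors are unit length and mutually orthogonal.
u, s, e = local_frame(np.radians(40.0), np.radians(110.0))
assert np.allclose([u @ u, s @ s, e @ e], 1.0) and abs(u @ s) < 1e-12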

[0120] The local frame given by the set {u_i, s_i, e_i} for the i-th ommatidium, or array of ommatidia, is required for mapping any object's trajectory with respect to this local frame first, and then transforming or mapping the image coordinates (or image position vectors) to the system or hemisphere frame given by the unit direction vectors {i, j, k}. This system frame is assumed to be that of the body (the convex hull hemisphere of FIG. 18) which supports this system of ommatidia. So the hemisphere's image position vectors would in turn need to be transformed onto the body's reference frame by means of a chain product of homogeneous coordinates transformation matrices. Such matrices may include scaling, rotations, and translations from one frame to the next. An example of a homogeneous transformation matrix is given by

T = \begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix} . (III.A.6)

[0121] In more compact form, equation (III.A.6) can be written as

x_0 = R\,x_i + t . (III.A.7)

In (III.A.7), x_0 is the position vector of the image spot relative to the global frame, x_i is the position of the image spot relative to the i-th ommatidium, R is a composite 3-axes Euler rotation matrix, and t is a displacement vector representing the translation of the ommatidium or ommatidia array origin to the global frame. The rotation matrix R can be written as

R = \prod_{k=1}^{3} R_k(\alpha_k) .
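A minimal Python sketch of chaining a composite Euler rotation and a translation through a homogeneous matrix, per (III.A.6) and (III.A.7) as reconstructed above, is given below; the axis order of the Euler factors, the function names, and the numeric values are assumptions for illustration.

import numpy as np

def rot(axis, angle):
    """Elementary rotation matrix R_k(alpha_k) about the x, y or z axis."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def homogeneous(Rm, t):
    """4x4 homogeneous transform combining a rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Rm, t
    return T

# Composite rotation R = R_z(a3) R_y(a2) R_x(a1) (axis order assumed),
# then x_0 = R x_i + t applied through the homogeneous matrix.
Rm = rot("z", 0.3) @ rot("y", -0.1) @ rot("x", 0.05)
T = homogeneous(Rm, t=np.array([0.0, 0.0, 2.5]))
x_i = np.array([0.4, -0.2, 0.1, 1.0])          # local position, homogeneous coords
x_0 = T @ x_i
print(x_0[:3])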

[0122] Multiple ommatidia systems can provide improved detection and tracking capabilities, as will now be described. Other embodiments and applications include providing zooming detection and wider fields of view, as will be disclosed in section III.C and section IV.

B. Inter-Ommatidia Feature Tracking Methods and Systems

[0123] In detection and tracking using multiple ommatidia systems, it may occur that an image feature is near a boundary between two (or more) constituent unit ommatidia.

Methods are now described for maintaining detection and tracking with hyperacuity. [0124] FIG. 19 shows the geometry 1900 for a method of tracking a moving image feature in a multi-ommatidia system as it transitions between the ommatidia in the system's array. The particular embodiment is described for the 7 hexagonal planar ommatidia configured as described in FIG. 16A, 1620. However, it can be understood that the methods to be described can be adapted to other configurations.

[0125] During the transition, that is, when the light spot of the image feature begins to appear on a neighboring ommatidium and begins to fade out of the present ommatidium, the local vectors would point to the center of mass of the portion of the image light spot that affects the particular ommatidium. Because of this, the transformed vectors of each ommatidium excited by the light spot would differ by some relatively small amount. If precision tracking is required, there are two methods that can be followed to obtain the best estimates for the global set (ρ_i, φ_i), as sketched below: (i) perform a weighted average of the two sets obtained by the transformations (1) and (2), where the weights are computed as the ratio of the intensities (equivalent to areas) of the parts of the spot on each ommatidium, divided by the total light spot intensity. That is, if I is the total light spot intensity and I_1 and I_2 are the respective spot intensities detected on each ommatidium, then the weights are I_1/I and I_2/I, respectively. These weights are then used to perform a weighted average of both the radial distances and the angles obtained in the two transformations; or (ii) utilize a dynamic state estimator/propagator (or state predictor).
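A Python sketch of option (i) follows. The function name and the handling of the angle average through unit vectors (to avoid wrap-around artifacts) are implementation choices of this illustration, not requirements of the specification.

import math

def weighted_global_estimate(est_a, est_b, intensity_a, intensity_b):
    """Intensity-weighted average of two global (rho, phi) estimates of the
    same spot seen by two adjacent ommatidia; weights are I_1/I and I_2/I."""
    total = intensity_a + intensity_b
    w_a, w_b = intensity_a / total, intensity_b / total
    rho = w_a * est_a[0] + w_b * est_b[0]
    # Average the angles via their unit vectors to avoid 2*pi wrap-around.
    x = w_a * math.cos(est_a[1]) + w_b * math.cos(est_b[1])
    y = w_a * math.sin(est_a[1]) + w_b * math.sin(est_b[1])
    return rho, math.atan2(y, x)

# Example: the spot falls 70% on ommatidium A and 30% on ommatidium B.
print(weighted_global_estimate((5.02, 0.61), (4.97, 0.64), 0.7, 0.3))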

[0126] FIG. 22 shows a flow chart for an exemplary method 2200 that adapts the previously described detection and tracking methods to multiple ommatidia systems. The steps are as disclosed previously, but now include step 2210, which uses the size of the derivatives of U and V to determine whether motion is occurring. The test is whether the sum of the absolute values of the derivatives exceeds a specified threshold ε. In step 2220, in the case that the radial distance is found to exceed a minimum threshold, indicating the image feature is near another ommatidium, a kinematic state predictor determines whether another ommatidium has also detected the image feature. In the case, tested at the Om On decision branch, that another ommatidium has indeed also detected the image feature, the method proceeds to either compute an overlap or continue with sensor readings.

C. Meta-Ommatidia Zooming Methods

[0127] This section describes methods and systems by which, in a multiple ommatidia system, the separate ommatidia can be used hierarchically to provide location zooming. In some of these embodiments a location obtained for an image feature with respect to global coordinates of the multiple ommatidia system is used to select a component ommatidium of the ommatidia system, and the signals of that component ommatidium are then used to provide a finer resolution location. These hierarchical configurations of multiple ommatidia may be generalized to include the ability to zoom using both the differencing and the orthogonal decomposition methods previously described.

[0128] First embodiments of zooming make use of the orthogonal decomposition approach of section II.C. FIG. 20 illustrates how a multiple ommatidia system 2010 with 49 sensors can have its signals summed to produce a reduced set of seven signals and so effectively act as a seven sensor single ommatidium 2020. This can allow for less processing when the effective ommatidium is providing the desired detection and tracking accuracy.

[0129] FIG. 21 shows an alternate configuration 2110 of a nine sensor ommatidium. The particular arrangement of 2110 into triangles allows for sensor signal summation to produce a reduced signal set that functions as the reduced ommatidium 2120.
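A minimal Python sketch of the signal-summing idea in [0128] follows; it assumes the grouping of the 49 sensors into seven unit ommatidia is known from the (o, s) indexing of FIG. 16A, and the function name and example data are illustrative.

import numpy as np

def meta_ommatidium_signals(sensor_signals):
    """Collapse a 49-sensor multi-ommatidia array into seven effective signals:
    each unit ommatidium's seven sensor outputs are summed so the whole array
    can be processed as a single seven sensor ommatidium (cf. FIG. 20).
    sensor_signals[o][s] is the output of sensor (o, s), with o = 0..6, s = 0..6."""
    signals = np.asarray(sensor_signals, dtype=float)   # shape (7, 7)
    return signals.sum(axis=1)                          # one summed signal per unit ommatidium

# Example: a random snapshot of the 49 sensor outputs.
rng = np.random.default_rng(0)
print(meta_ommatidium_signals(rng.random((7, 7))))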

D. Dynamic Reconfiguration In Multiple Ommatidia Systems

[0130] In the multiple ommatidia systems and methods described so far, it was presumed that the individual sensors of the entire system were uniquely assigned to respective unit ommatidia. This section describes systems and methods by which the arrangement of the individual sensors of the entire system can be dynamically reconfigured "on the fly" to provide improved tracking and detection. Such methods and systems are described with respect to the particular embodiment shown in FIG. 23. However, it will be clear to one of skill in the art that the methods and systems can be adapted to other configurations.

[0131] FIG. 23 shows the configuration 2300 of sensors of the multiple ommatidia configuration 1610, with internal sensors explicitly enumerated. Also indicated are boundary sensors 2310. The methods may comprise reselecting which individual sensors are to form the respective sensors of unit seven sensor hexagonal ommatidia. Once the sensor to serve as, e.g., the central sensor of the central ommatidium is selected, the reorganization of the remaining sensors into respective unit ommatidia is chosen accordingly.

[0132] FIG. 24 shows a block diagram of the operations that perform the reorganization of the sensors. One limitation that may be enforced is that only interior sensors can be selected as centers for unit ommatidia. The methods and systems illustrated in FIG. 23 and FIG. 24 may be used in the multiple ommatidia systems and methods for increased field of view described next.

IV. HYPRIS SYSTEM APPLICATIONS OF OMMATIDIA SYSTEMS

[0133] The following describes further embodiments (referred to as HYPRIS™) that extend and apply the embodiments disclosed above. HYPRIS™ embodiments provide continuous kinematic target tracking and may be expanded to any arbitrary field of view by combining multiple ommatidia into a multi-layered hierarchy, much like a compound eye structure. Inter-ommatidia processing and tracking may be accomplished using analog processing or using sampling with numerical processing. In this section the objects discussed above will also be referred to as targets.

[0134] HYPRIS™ embodiments are based on modularity of the multiple ommatidia methods and systems. This modularity facilitates the placement of ommatidia on a smooth curved surface (such as vehicle hulls, aircraft wings, or missiles) providing full-surround situational awareness without breaking aerodynamic contours. The ability to operate on a wingtip rather than an airframe centerline may be made possible by real time compensation for wing motion relative to the airframe. The compound eye structure is useful for small or large airframes as it does not require pan and tilt control or large volume aplanatic optics.

A. Wide FoV Methods with Multiple Ommatidia

[0135] HYPRIS's modular system is extensible up to a full spherical FoV. A basic single ommatidium module may operate over a limited field of view (FoV); in some cases this may be approximately ten degrees. Parallel, analog processing of a single ommatidium produces the continuous analog signals representing the kinematic states of up to six targets for the seven sensor ommatidium embodiment used as an example above. (Recall also that a greater number of targets can be tracked with other configurations.) Two embodiments for creating a wide FoV using ommatidia are as follows. The first method assumes that the kinematic state signals of each ommatidium, arranged on a compound surface to achieve the desired FoV, will be numerically sampled and processed using algorithms to facilitate inter-ommatidia tracking of targets across the entire FoV. Although numerical processing is employed to accomplish multi-segment tracking, HYPRIS may use embedded processors to greatly reduce the computational burden required by image-based kinematic analysis of targets.

[0136] A second approach uses the multi-layer, hierarchical processing architecture possible with multi-ommatidia systems. This approach replicates the same sensing and processing methods of a basic ommatidia module at successively higher meta levels in order to preserve the ability to detect and track multiple targets over a wide FoV on a continuous analog basis. Information is propagated from higher to lower layers to detect, isolate and track objects with increasingly greater spatial resolution, all in the analog domain. Since HYPRIS's parallel analog processing is not constrained by the Nyquist digital sampling rate and the processing limitations of image-based sensor systems, it allows for very high speed operation.

[0137] The capacity to calculate and subtract self-motion of an imaging sensor platform already exists, but it requires either a separate reference system, such as an INS or GPS, for absolute accuracy, or a large computational burden if self-motion is estimated from image-based analysis by the sensor system. In further embodiments, the methods and systems disclosed herein can be used to estimate and extract self-motion using the kinematic state signals, which already contain motion cues, with algorithms and processing at a greatly reduced computational burden.

B. IR and Polarization Detection

[0138] In some embodiments, the multi-ommatidia configurations may use IR or polarization-sensitive sensors.

C. High Level Partition Methods

[0139] FIG. 25 shows a block diagram of a multiple ommatidia system as described in relation to FIG. 16A. The multiple ommatidia system is shown, in this embodiment, using an orthogonal decomposition method described in section II.C.

[0140] FIG. 26 shows a block diagram 2600 of how the multiple ommatidia system of FIG. 25 can be implemented to use both differencing and orthogonal decomposition methods. The outputs of the differencing method applied to the unit ommatidia comprise the locations, velocities and accelerations, 2610, according to the respective rectangular coordinate systems of the unit ommatidia. Further outputs 2622 and 2624 are obtained using orthogonal decomposition methods, and comprise respective radial distances and incident angles from each of the unit ommatidia. The outputs may be combined to produce net location and trajectory information.

[0141] FIG. 27 shows a component level block diagram of the multiple ommatidia system of FIG. 26. As shown, it can be implemented with continuous real-time analog components to avoid problems due to sampling limits and latency.

[0142] By implementing one or more multiple ommatidia systems, HYPRIS methods and systems can be configured to detect and track multiple targets over a wide field of view.