

Title:
SYSTEM AND METHOD FOR IDENTIFYING THE LOCATION OF AN EMITTER IN AN IMAGE
Document Type and Number:
WIPO Patent Application WO/2017/013641
Kind Code:
A1
Abstract:
System for identifying the location of an emitter in an image. The system includes a receptor-array, an imager which includes an imaging-sensor, a multi-channel receiver which includes a correlator and a processor. The receptor-array includes at least three receptors. Each receptor receives the signal from the emitter and produces a respective received-signal. The imaging-sensor includes a plurality of pixels, each associated with a unique-identifier. The imager is at a fixed spatial relationship relative to the receptor-array. The field-of-view of the imager and the field-of-view of the receptor-array at least partially overlap. The correlator determines inter-receptor characteristic models. The processor determines a received-signals characteristic model from the inter-receptor characteristic models. The processor determines a pixel corresponding to the location of the emitter in the image according to a signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. The array-to-imager correspondence model incorporates the received-signals characteristic model and the signal-to-pixel correspondence model.

Inventors:
MAOR AMNON MENASHE (IL)
GILADY YUVAL (IL)
Application Number:
PCT/IL2016/050733
Publication Date:
January 26, 2017
Filing Date:
July 07, 2016
Assignee:
ELBIT SYSTEMS EW AND SIGINT-ELISRA LTD (IL)
International Classes:
G01S5/00
Domestic Patent References:
WO2015075720A1 (2015-05-28)
WO2011158233A1 (2011-12-22)
Foreign References:
US20110273722A1 (2011-11-10)
US4385301A (1983-05-24)
Other References:
See also references of EP 3325993A4
Attorney, Agent or Firm:
KORAKH, Eliav et al. (IL)
Claims:
CLAIMS

1. A system for identifying the location of an emitter in an image, said emitter emitting a signal, said system comprising:

a receptor array, including at least three receptors, said at least three receptors associated with a plane, each one of said at least three receptors receiving said signal and producing a respective received signal;

an imager, including an imaging sensor, said imaging sensor including a plurality of pixels, each pixel being associated with a pixel unique identifier, said imager being at a fixed spatial relationship relative to said receptor array, the field of view of said imager and the field of view of said receptor array at least partially overlapping one with respect to the other;

a multi-channel receiver, each receptor being coupled with a respective channel in said multi-channel receiver, said multi-channel receiver at least including a correlator, said correlator determining inter-receptor characteristic models; and

a processor, coupled with said multi-channel receiver and with said imager, said processor determining a received signals characteristic model from said inter-receptor characteristic models, said processor determining a pixel corresponding to the location of said emitter in said image according to a signal-to-pixel correspondence model optimizing an array-to-imager correspondence model, said array-to-imager correspondence model incorporating said received signals characteristic model and said signal-to-pixel correspondence model.

2. The system according to claim 1, wherein said signal-to-pixel correspondence model is a steering vector associated with each of at least one pixel, wherein said inter-receptor characteristic models are the correlations between the signals received by each pair of receptors and the correlation between the signal received by each receptor with itself,

wherein said received signal characteristic model is a covariance matrix.

3. The system according to claim 2, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [w_x,y^T R w_x,y]

wherein w_x,y denotes said steering vector associated with each of said at least one pixel, R denotes said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

4. The system according to claim 2, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [1 / (w_x,y^T V_N V_N^T w_x,y)]

wherein w_x,y denotes said steering vector associated with each pixel of a group of pixels, V_N represents the K eigenvectors of said covariance matrix relating to the noise, K+D equals the total number of eigenvalues of said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

5. The system according to claim 1, wherein said signal-to-pixel correspondence model is a vector of phase differences between selected pairs of antennas respectively, associated with each of said at least one pixel, wherein said inter-receptor characteristic models are received phase differences between the received signals produced by said selected pairs of receptors respectively,

wherein said received signal characteristic model is a vector of phase differences between said selected pairs of receptors, and

wherein said array-to-imager correspondence model is given by:

[x, y] = arg min ||[Δφ_1^m, Δφ_2^m, ..., Δφ_M^m] - [Δφ_1^r, Δφ_2^r, ..., Δφ_M^r]||^2

wherein Δφ_m^m denotes the mth model phase difference, determined by a calibration process, Δφ_m^r denotes the mth received phase difference and [x, y] denotes said pixel unique identifier.

6. The system according to claim 1, wherein said processor further marks said pixel corresponding to said emitter.

7. The system according to claim 6, further including a display, said processor displaying said image of said scene.

8. The system according to claim 1, wherein said signal-to-pixel correspondence model is determined by:

positioning a calibration emitter and the detection module at a plurality of relative calibration directions therebetween, said detection module including said receptor array and said imager;

for each relative calibration direction, acquiring at least one respective image of said calibration emitter;

for each said relative calibration direction, detecting pixel unique identifier of said calibration emitter;

for each said relative calibration direction, receiving a signal transmitted by said calibration emitter;

for each said relative calibration direction, determining a respective signal-to-pixel correspondence model; and

for each said relative calibration direction, associating said pixel unique identifier of said calibration emitter with said respective signal-to-pixel correspondence model.

9. A method for identifying an emitter on an image, said emitter emitting a signal, said method comprising the procedures of:

determining a signal-to-pixel correspondence model;

acquiring an image of a scene, said emitter being located in said scene;

receiving by a receptor array a signal emitted by said emitter;

determining inter-receptor characteristic models from the received signals;

determining a received signals characteristic model from said inter-receptor characteristic models; and

determining a pixel corresponding to said emitter according to a signal-to-pixel correspondence model optimizing an array-to-imager correspondence model, said array-to-imager correspondence incorporating said received signals characteristic model and said signal-to-pixel correspondence model.

10. The method according to claim 9, wherein said signal-to-pixel correspondence model is a steering vector associated with each of at least one pixel,

wherein, said inter-receptor characteristic models are the correlations between the signals received by each pair of receptors and the correlation between the signal received by each receptor with itself, wherein said received signal characteristic model is a covariance matrix.

11. The method according to claim 10, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [w_x,y^T R w_x,y]

wherein w_x,y denotes said steering vector associated with each of said at least one pixel, R denotes said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

12. The method according to claim 10, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [1 / (w_x,y^T V_N V_N^T w_x,y)]

wherein w_x,y denotes said steering vector associated with each pixel of a group of pixels, V_N represents the K eigenvectors of said covariance matrix relating to the noise, K+D equals the total number of eigenvalues of said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

13. The method according to claim 9, wherein said signal-to-pixel correspondence model is a vector of phase differences between selected pairs of antennas respectively, associated with each of said at least one pixel,

wherein said inter-receptor characteristic models are received phase differences between the received signals produced by said selected pairs of receptors respectively, wherein said received signal characteristic model is a vector of phase differences between said selected pairs of receptors, and

wherein said array-to-imager correspondence model is given by:

[x, y] = arg min ||[Δφ_1^m, Δφ_2^m, ..., Δφ_M^m] - [Δφ_1^r, Δφ_2^r, ..., Δφ_M^r]||^2

wherein Δφ_m^m denotes the mth model phase difference, determined by a calibration process, Δφ_m^r denotes the mth received phase difference and [x, y] denotes said pixel unique identifier.

14. The method according to claim 9, further including the procedure of marking said pixel corresponding to said emitter on said image.

15. The method according to claim 9, wherein said procedure of determining a signal-to-pixel correspondence model includes the sub-procedures of:

positioning a calibration emitter and a detection module at a plurality of relative calibration directions therebetween, said detection module including a receptor array and an imager;

for each relative calibration direction, acquiring at least one respective image of said calibration emitter;

for each said relative calibration direction, detecting pixel unique identifier of said calibration emitter;

for each said relative calibration direction, receiving a signal transmitted by said calibration emitter;

for each said relative calibration direction, determining a respective signal-to-pixel correspondence model; and

for each said relative calibration direction, associating said pixel unique identifier of said calibration emitter with said respective signal-to-pixel correspondence model.

16. A system for identifying the location of an emitter in an image, said emitter emitting a signal, said system comprising:

a receptor array, including at least three receptors, said at least three receptors associated with a plane, each one of said at least three receptors receiving said signal and producing a respective received signal;

an imager, including an imaging sensor, said imaging sensor including a plurality of pixels, each pixel being associated with a pixel unique identifier, said imager being at a fixed spatial relationship relative to said receptor array, the field of view of said imager and the field of view of said receptor array at least partially overlapping one with respect to the other;

a multi-channel receiver, each receptor being coupled with a respective channel in said multi-channel receiver, said multi-channel receiver at least including a correlator, said correlator determining the correlations between the signals received by each pair of receptors and the correlation between the signal received by each receptor with itself; and

a processor, coupled with said multi-channel receiver and with said imager, said processor determining a covariance matrix from said correlations between the signals received by each pair of receptors and said correlation between the signal received by each receptor with itself, said processor determining a pixel corresponding to the location of said emitter according to a steering vector optimizing an array-to-imager correspondence model, said array-to-imager correspondence model incorporating said covariance matrix and said steering vector.

17. The system according to claim 16, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [1 / (w_x,y^T V_N V_N^T w_x,y)]

wherein w_x,y denotes said steering vector associated with each pixel of a group of pixels, V_N represents the K eigenvectors of said covariance matrix relating to the noise, K+D equals the total number of eigenvalues of said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

18. The system according to claim 16, wherein said array-to-imager correspondence model is as follows:

[x, y] = arg max [w_x,y^T R w_x,y]

wherein w_x,y denotes said steering vector associated with each of said at least one pixel, R denotes said covariance matrix, [x, y] denotes said pixel unique identifier and T denotes the transpose operator.

Description:
SYSTEM AND METHOD FOR IDENTIFYING THE LOCATION OF AN

EMITTER IN AN IMAGE

FIELD OF THE DISCLOSED TECHNIQUE

The disclosed technique relates to direction finding in general, and to systems and methods for identifying the location of an emitter in an image, in particular.

BACKGROUND OF THE DISCLOSED TECHNIQUE

Systems for determining the Direction Of Arrival (DOA) of a signal wavefront (e.g., an electromagnetic signal or an audio signal) emitted by a signal emitter are known in the art. A signal emitter refers herein to a device which generates (i.e., a source) or reflects a signal wavefront. The signal wavefront emitted by a signal emitter is also referred to herein as the 'source signal'. One application of determining the DOA of a source signal is, for example, determining the location of an emitter (e.g., a RADAR system, a radio transmitter, a sound source). Another exemplary application is determining the location of a wireless transmitting node in a network (e.g., a cellular telephone or a node in an ad hoc network).

A technique known in the art for determining the DOA of a source signal is to determine the phase difference between signals received by at least two receptors adapted to receive the source signal and transform it into an electric received signal (e.g., antennas or microphones). In the two-dimensional (2D) case, a DOA determining system employs at least two receptors and measures the difference in phase between the signal received by one receptor relative to the signal received by the other receptor. Further in the 2D case, a DOA determining system determines only one of the azimuth or the elevation of the signal emitter. In the three-dimensional (3D) case, a DOA determining system employs at least three receptors (i.e., which are not located on the same line), and measures the difference in phases between the source signal as received by at least two pairs of receptors. Further in the 3D case, a radio DOA determining system may determine both the azimuth and the elevation of the radio emitter.

The difference in phase between the received signal of one receptor and the received signal of another receptor relates to the DOA of the signal (i.e., the direction at which the signal emitter is located) as follows:

Δφ = (2πd/λ)·sin(Θ)    (1)

where Δφ represents the difference in phase between the signal received by the two receptors, d represents the relative distance between the two receptors, λ represents the wavelength of the source signal and Θ represents the DOA of the source signal (e.g., either azimuth or elevation). Equation (1) has a single solution as long as the relative distance d between the two receptors is smaller than or equal to half the wavelength of the signal (i.e., d ≤ λ/2).
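The relation of Equation (1) can be illustrated with a short numerical sketch. This is not part of the patent; the function name and all numeric values are illustrative assumptions, chosen so that the receptor spacing sits exactly at the unambiguous limit d = λ/2.

```python
import numpy as np

# Illustrative sketch of Equation (1): delta_phi = (2*pi*d/lam) * sin(theta).
# Inverting it recovers the DOA from a measured phase difference; the solution
# is unique only while the receptor spacing d does not exceed half a wavelength.

def doa_from_phase_difference(delta_phi, d, lam):
    """Return the DOA theta (radians) implied by phase difference delta_phi."""
    return np.arcsin(delta_phi * lam / (2 * np.pi * d))

lam = 0.1                      # source wavelength [m] (illustrative)
d = lam / 2                    # spacing at the unambiguous limit, d = lam/2
theta_true = np.deg2rad(30.0)  # ground-truth DOA
delta_phi = 2 * np.pi * d / lam * np.sin(theta_true)   # forward Equation (1)
theta_est = doa_from_phase_difference(delta_phi, d, lam)
print(round(np.rad2deg(theta_est), 3))  # 30.0
```

With d larger than λ/2 the arcsin inversion would admit multiple directions for the same measured phase, which is why the patent's background notes the d ≤ λ/2 condition.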

The publication to Schikora et al., entitled "Airborne Emitter Tracking by Fusing Heterogeneous Bearing Data", directs to a system for tracking an unknown number of emitters from the air by fusing optical bearing data with bearing data obtained from an antenna array. To that end, the system directed to by Schikora et al. determines the actual direction (i.e., azimuth and elevation) of the emitters in a global coordinate system according to an image of the area in which the emitter is located. To that end, the system directed to by Schikora et al. transforms the bearing data obtained from the imager into bearing data in the global coordinate system. Similarly, the system directed to by Schikora et al. transforms the bearing data obtained from the antenna array into bearing data in the global coordinate system. The system then fuses the bearing data and associates the bearing data with respective emitters based on the frequency of the emitters and the localization results, along with the accuracy of the sensors.

P.C.T. Application Publication 2011/158233 to Maor directs to a method and a system for determining the location properties of an emitter that transmits pulse trains. The location properties are related to the timing characteristics of the received pulse trains. These timing characteristics are either a characteristic TOA-phase or a characteristic TOA-phase curve. Furthermore, the timing characteristics may be determined according to either the TOA-phase, the characteristic TOA-phase curve, or both. According to Maor, 'location properties' relate to the actual location of the emitter, a set of possible locations of the emitter relative to a reference coordinate system, or motion properties of the emitter (e.g., relative motion between the emitter and the receivers or a set of possible trajectories of the emitter). The system directed to by Maor includes at least two receivers, each of which receives a plurality of repetitive pulses or repetitive pulse trains emitted by the emitter. The TOAs of the received pulse trains are recorded and the pulse train repetition interval (PTRI) is determined. Thereafter, the TOA-phase of each received pulse train is determined according to the TOA of each pulse train and the PTRI. The TOA-phase is determined according to the residue of the division of the TOA of the pulse trains by the number of PTRIs counted from the start of reception of the pulse trains. In other words, the time axis is wrapped according to the PTRIs and the TOA-phases are determined according to the TOA of the pulse trains on the wrapped time axis. Thereafter, a characteristic TOA-phase of the pulse trains is determined according to the TOA-phases of the pulse trains respective of each receiver. The location properties of the emitter are determined according to the characteristic TOA-phases respective of the receivers.

P.C.T. Application Publication 2015/075720 to Ben-Yishai et al. directs to a medical Wide Field Of View (WFOV) optical tracking system for determining the position and orientation of at least one target object in a reference coordinate system. The system directed to by Ben-Yishai et al. includes a light emitter attached to a target object, two light emitters attached to a display, and two WFOV optical detectors attached to the target object. One of the WFOV optical detectors is selected to be an active WFOV optical detector, which acquires at least one image of the two light emitters attached to the display (i.e., when the two light emitters attached to the display are within the field of view thereof). Each WFOV optical detector includes an optical sensor and at least two optical receptors optically coupled with the optical sensor. Another optical detector is attached to the display, and acquires at least one image of the light emitter attached to the target object. A processor, wirelessly coupled with the WFOV optical detectors and with the other optical detector, determines the position and orientation of each target object in the reference coordinate system according to representations of the light emitters in the images acquired by the optical detectors. The processor further renders one of navigational information, a real-time image, a model of a region of interest of the patient or a representation associated with the medical tool, according to the determined position and orientation of the target object. The display displays to a physician the rendered navigational information, real-time image, model of the region of interest of the patient or representation associated with the medical tool, at a position and orientation corresponding to the determined position and orientation of the target object.

SUMMARY OF THE PRESENT DISCLOSED TECHNIQUE

It is an object of the disclosed technique to provide a novel method and system for identifying the location of an emitter in an image. In accordance with the disclosed technique, there is thus provided a system for identifying the location of an emitter in an image. The emitter emits a signal. The system includes a receptor array, an imager, a multi-channel receiver and a processor. Each receptor is coupled with a respective channel in the multi-channel receiver. The processor is coupled with the multi-channel receiver and with the imager. The receptor array includes at least three receptors. The at least three receptors are associated with a plane. Each one of the at least three receptors receives the signal and produces a respective received signal. The imager includes an imaging sensor. The imaging sensor includes a plurality of pixels, each associated with a pixel unique identifier. The imager is at a fixed spatial relationship relative to the receptor array. The field of view of the imager and the field of view of the receptor array at least partially overlap one with respect to the other. The multi-channel receiver at least includes a correlator. The correlator determines inter-receptor characteristic models. The processor determines a received signals characteristic model from the inter-receptor characteristic models. The processor determines a pixel corresponding to the location of the emitter in the image according to a signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. The array-to-imager correspondence model incorporates the received signals characteristic model and the signal-to-pixel correspondence model.

In accordance with another aspect of the disclosed technique, there is thus provided a method for identifying an emitter in an image. The emitter emits a signal. The method includes the procedures of determining a signal-to-pixel correspondence model, acquiring an image of a scene, the emitter being located in the scene, and receiving by a receptor array a signal emitted by the emitter. The method further includes the procedures of determining inter-receptor characteristic models from the received signals, determining a received signals characteristic model from the inter-receptor characteristic models and determining a pixel corresponding to the emitter according to a signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. The array-to-imager correspondence model incorporates the received signals characteristic model and the signal-to-pixel correspondence model.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:

Figure 1 is a schematic illustration of a system for marking an emitter location on an image, constructed and operative in accordance with an embodiment of the disclosed technique;

Figure 2 is a schematic illustration of an exemplary imaging sensor including a matrix of pixels, in accordance with another embodiment of the disclosed technique;

Figure 3A is a schematic illustration of a correspondence between horizontal and vertical phase differences and a respective pixel number, in accordance with a further embodiment of the disclosed technique;

Figure 3B is a schematic illustration of a correspondence between azimuth phase differences and respective horizontal pixel coordinates, in accordance with a further embodiment of the disclosed technique;

Figure 3C is a schematic illustration of a correspondence between elevation phase differences and respective vertical pixel coordinates, in accordance with a further embodiment of the disclosed technique;

Figure 4 is a schematic illustration of a method for identifying an emitter in an image, operative in accordance with another embodiment of the disclosed technique;

Figure 5 is a schematic illustration of a method for marking an emitter location on an image when employing correlation-based techniques, in accordance with a further embodiment of the disclosed technique;

Figure 6 is a schematic illustration of a method for marking an emitter location on an image when the interferometer technique is employed, in accordance with another embodiment of the disclosed technique; and

Figure 7 is a schematic illustration of a method for determining a signal-to-pixel correspondence model in accordance with a further embodiment of the disclosed technique.

DETAILED DESCRIPTION OF THE EMBODIMENTS

The disclosed technique overcomes the disadvantages of the prior art by providing a system and a method for identifying the location of an emitter in an image of a scene in which the emitter is situated, without determining the DOA of the source signal. Thus, the need to determine the DOA of the emitter and to convert that DOA to pixel coordinates is alleviated, as well as the inaccuracies resulting from these conversions (e.g., terrain map errors or optical errors). The system includes a receptor array (e.g., an antenna array or a microphone array) and an imager. The spatial relation of the imager and the receptor array is fixed one with respect to the other (i.e., the receptor array and the imager do not move one with respect to the other) at least during the acquisition of the image and the reception of the emitted signal. Furthermore, the Field Of View (FOV) of the receptor array at least partially overlaps with the FOV of the imager. The imager includes an imaging sensor, which includes a plurality of pixels. Each pixel is associated with a respective unique identifier, such as a horizontal pixel coordinate and a respective vertical pixel coordinate, or a pixel number. The receptor array includes at least three receptors. Each receptor receives a source signal and produces a respective received signal. The system further includes a multi-channel receiver, which includes a respective channel for each receptor, and a correlator. The processor determines the pixel in an acquired image of the scene which corresponds to the location of the emitter (i.e., the location of the emitter in the acquired image), without determining the DOA of the source signal, according to measurements of phase differences between signals received by selected pairs of receptors, referred to herein as 'the interferometer technique'.
Alternatively, the processor determines the pixel corresponding to the location of the emitter in the acquired image of the scene (i.e., without determining the DOA of the emitter) according to correlation-based techniques (e.g., beamforming, the Multiple Signal Classification - MUSIC and the like). The processor may mark the image pixel (or group of pixels) corresponding to the location of the emitter in the image. Thus, the marked image serves as an output for the receptor array instead of, for example, a numeric representation of the direction in which the emitter is located.

Reference is now made to Figure 1, which is a schematic illustration of a system, generally referenced 100, for identifying an emitter in an image, constructed and operative in accordance with an embodiment of the disclosed technique. System 100 includes an imager 102, a receptor array, which includes at least three receptors 104₁, 104₂ and 104₃, a multi-channel receiver 106, a processor 110 and a display 112. Multi-channel receiver 106 at least includes a correlator 108. Processor 110 is coupled with imager 102, with multi-channel receiver 106 and with display 112. Multi-channel receiver 106 is further coupled with each one of receptors 104₁, 104₂ and 104₃. It is noted that multi-channel receiver 106 may also be implemented with three separate receivers, each coupled with a respective one of receptors 104₁, 104₂ and 104₃, and a separate correlator coupled with the three separate receivers, as long as these receivers and the correlator are coherent (i.e., have synchronized sampler clocks) and synchronized with respect to their output. In other words, the received signals produced by multi-channel receiver 106 are coherent. In addition, multi-channel receiver 106 and correlator 108, along with processor 110, may all be implemented on a digital signal processor (DSP).

Imager 102 includes an imaging sensor (e.g., a Charge Coupled Device (CCD) sensor or a Complementary Metal Oxide Semiconductor (CMOS) sensor), which includes an array of pixels. Each pixel is associated with a unique identifier (e.g., respective horizontal and vertical pixel coordinates or a pixel number), as further explained below. Receptors 104₁, 104₂ and 104₃ may be antennas for receiving electromagnetic radiation or sound sensors (e.g., microphones) for receiving sound waves. The receptor array and imager 102 are at a fixed spatial relation one with respect to the other (i.e., the receptor array and the imager do not move one with respect to the other) at least during the acquisition of the image and the reception of the emitted signal. The receptor array and imager 102 are also referred to herein as detection module 103. The field of view of imager 102 at least partially overlaps with the field of view of the receptor array. In addition, each pair of receptors (i.e., 104₁ and 104₂, 104₁ and 104₃, and 104₂ and 104₃) defines a baseline.

Imager 102 acquires an image of a scene, which includes, for example, a building 120 and a tree 122. Each one of receptors 104₁, 104₂ and 104₃ receives a signal originating from an emitter (not shown) located in the scene and produces a respective received signal (i.e., an electrical signal). As mentioned above, the emitter may be a radio emitter emitting electromagnetic radiation or a sound emitter emitting sound waves. Each one of receptors 104₁, 104₂ and 104₃ provides the respective received signal produced thereby to multi-channel receiver 106. Multi-channel receiver 106 receives the received signals from receptors 104₁, 104₂ and 104₃ and performs substantially identical reception operations on each of the received signals. These reception operations include, for example, down conversion, filtering, sampling and the like. Multi-channel receiver 106 produces, for example, a complex representation of the received signal (e.g., in-phase and quadrature signals, or samples of the in-phase and quadrature signals) from each of receptors 104₁, 104₂ and 104₃. Correlator 108 determines inter-receptor characteristic models. These inter-receptor characteristic models are, for example, the correlations between the received signals from each pair of receptors 104₁, 104₂ and 104₃ and of each received signal with itself, or the phase difference measurements between pairs of receptors. Processor 110 determines a received signals characteristic model from the inter-receptor characteristic models. The received signals characteristic model is, for example, a covariance matrix or a vector of phase difference measurements. Processor 110 determines the pixel corresponding to the emitter location in the image according to the signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. The array-to-imager correspondence model incorporates the received signals characteristic model and the signal-to-pixel correspondence model.
A signal-to-pixel correspondence model is a respective steering vector or respective phase difference measurements associated with each of at least one pixel (i.e., each pixel or group of pixels). The array-to-imager correspondence model "links" the signals received by the array and the pixels. Thus, processor 110 determines the pixel (e.g., pixel coordinates or pixel number) corresponding to the location of the emitter in the acquired image without determining the direction from which the signal was received.

Following is a description relating to the correlation-based techniques. As mentioned above, and with reference to Figure 1, processor 110 may employ correlation-based techniques such as the beamforming or MUSIC algorithms to determine the pixel corresponding to the location of the emitter in the image (i.e., to identify the location of the emitter in the image). In both cases, the signal-to-pixel correspondence model is a steering vector. Thus, each pixel or group of pixels is associated with a respective steering vector as follows:

w_x,y = [a_1 e^(jθ_1), a_2 e^(jθ_2), ..., a_N e^(jθ_N)]^T    (2)

where N is the number of receptors (e.g., the number of antennas or the number of microphones) and a_i e^(jθ_i) is the complex representation (i.e., amplitude and phase) of the signal received by the i-th receptor relative to a reference signal, when a calibration emitter is located at the direction corresponding to the pixel coordinates. When the signal received by the i-th receptor is employed as the reference signal, then a_i e^(jθ_i) = 1. The steering vectors w_x,y are determined by a calibration process or computationally, as further elaborated below.

After multi-channel receiver 106 receives the received signals from receptors 104₁, 104₂ and 104₃, correlator 108 correlates the received signals to determine inter-receptor characteristic models. In the correlation-based techniques, the inter-receptor characteristic models are the correlations between the signals received by each pair of receptors 104₁, 104₂ and 104₃, and the correlation of the signal received by each receptor with itself, as follows:

r_i^T r_j    (3)

where r_i denotes the signal received by the i-th receptor, r_j denotes the signal received by the j-th receptor and T denotes the transpose operator. Correlator 108 provides the results of the correlations to processor 110. Processor 110 then determines a received signals characteristic model from the inter-receptor characteristic models. In the correlation-based techniques, the received signals characteristic model is a covariance matrix, for example, as follows:

R_i,j = (1/N) r_i^T r_j    (4)

where N denotes the 'length' of the received signal (i.e., either the number of samples in the discrete case or the time duration in the continuous case).
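The construction of the covariance matrix of Equations (3) and (4) can be illustrated with a minimal Python sketch. The function name, the array layout and the three-receptor example are illustrative assumptions rather than part of the disclosed system; complex baseband samples are assumed, so the conjugate transpose takes the place of the plain transpose used in the text:

```python
import numpy as np

def covariance_matrix(received):
    """Estimate the received-signals covariance matrix R (Equation (4)).

    `received` is a (num_receptors, num_samples) array of complex baseband
    samples, one row per receptor.  Entry R[i, j] is the correlation between
    the signals of receptors i and j (Equation (3)), scaled by 1/N.
    """
    r = np.asarray(received)
    n_samples = r.shape[1]
    # Conjugate transpose handles complex samples; for real-valued samples
    # this reduces to the plain transpose used in the text.
    return (r @ r.conj().T) / n_samples

# Illustrative check with a hypothetical 3-receptor array: one common signal
# with per-receptor phase shifts yields a rank-one covariance matrix.
rng = np.random.default_rng(0)
s = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
phases = np.exp(1j * np.array([0.0, 0.4, 1.1]))
R = covariance_matrix(np.outer(phases, s))
```

For a single noiseless emitter, the resulting matrix is Hermitian and has rank one, which is what the eigendecomposition-based techniques below exploit.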

As mentioned above, processor 110 identifies the pixel corresponding to the location of the emitter in the image according to the signal-to-pixel correspondence model, which optimizes an array-to-imager correspondence model. According to the beamforming algorithm, processor 110 determines the pixel or group of pixels (i.e., the pixel coordinates or the pixel number) with the respective steering vector which maximizes the power in the received signal. According to the beamforming algorithm, processor 110 optimizes the following array-to-imager correspondence model:

[x, y] = arg max_x,y [w_x,y^T R w_x,y]    (5)

where w, R and T are as defined above.
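Equation (5) amounts to scanning all candidate steering vectors and keeping the pixel whose vector yields the highest output power. A minimal sketch, assuming a Python dictionary that maps pixel unique identifiers to their steering vectors (names and values are hypothetical, and the conjugate transpose is used for complex vectors):

```python
import numpy as np

def beamforming_pixel(R, steering_vectors):
    """Pick the pixel whose steering vector maximizes w^H R w (Equation (5)).

    `steering_vectors` maps a pixel unique identifier (e.g., an (x, y)
    coordinate pair) to its steering vector w.
    """
    def power(w):
        return np.real(w.conj().T @ R @ w)
    return max(steering_vectors, key=lambda pix: power(steering_vectors[pix]))

# Toy example with two candidate pixels: the covariance matrix is built from
# the steering vector of pixel (3, 4), so beamforming should select it.
w_a = np.exp(1j * np.array([0.0, 0.5, 1.0]))
w_b = np.exp(1j * np.array([0.0, -0.9, 0.3]))
R = np.outer(w_a, w_a.conj())
best = beamforming_pixel(R, {(3, 4): w_a, (9, 6): w_b})
```

Note that the search is over pixel identifiers directly, so no intermediate direction-of-arrival estimate is produced, matching the description above.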

According to the MUSIC algorithm, processor 110 performs an eigendecomposition to determine the eigenvalues and the eigenvectors of the covariance matrix R. Processor 110 then determines the K eigenvectors corresponding to the K smallest eigenvalues (i.e., the K eigenvalues that correspond to noise). Processor 110 determines the D pixels or groups of pixels (i.e., the pixel coordinates) corresponding to the signals received from D emitters, which optimize the following array-to-imager correspondence model:

[x, y] = arg max_x,y [1 / (w_x,y^T V_N V_N^T w_x,y)]    (6)

where V_N represents the K eigenvectors of the covariance matrix R relating to the noise, K+D equals the total number of eigenvalues of the covariance matrix R, and w and T are as defined above.
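The MUSIC selection of Equation (6) can be sketched as follows, assuming the same hypothetical pixel-to-steering-vector mapping as before. `np.linalg.eigh` returns eigenvalues in ascending order, so the first K columns of the eigenvector matrix span the noise subspace:

```python
import numpy as np

def music_pixel(R, steering_vectors, num_emitters=1):
    """Select the pixel maximizing the MUSIC pseudo-spectrum (Equation (6)).

    The K eigenvectors of R with the smallest eigenvalues span the noise
    subspace V_N; the steering vector of a true emitter direction is nearly
    orthogonal to V_N, which maximizes 1 / (w^H V_N V_N^H w).
    """
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    k = R.shape[0] - num_emitters          # K noise eigenvectors
    v_noise = eigvecs[:, :k]
    def pseudo_spectrum(w):
        proj = v_noise.conj().T @ w
        # Clamp the denominator to avoid division by zero for a perfect match.
        return 1.0 / max(np.real(proj.conj().T @ proj), 1e-12)
    return max(steering_vectors,
               key=lambda pix: pseudo_spectrum(steering_vectors[pix]))

# Toy example: R built from one emitter at pixel (3, 4) plus a little noise.
w_a = np.exp(1j * np.array([0.0, 0.5, 1.0]))
w_b = np.exp(1j * np.array([0.0, -0.9, 0.3]))
R = np.outer(w_a, w_a.conj()) + 0.01 * np.eye(3)
best = music_pixel(R, {(3, 4): w_a, (9, 6): w_b})
```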

As mentioned above, not every pixel needs to be associated with a respective steering vector w_x,y. In case the angular resolution of the imaging sensor is higher than the minimum detectable angular difference of the receptor array, each group of pixels (e.g., a group of 2, 4 or 9 pixels) may be associated with a corresponding steering vector w_x,y. In such a case, the pixel coordinates or unique identifier of such a group of pixels may be, for example, the pixel coordinates or unique identifier of a selected one of the pixels in the group. In the description above, the beamforming technique and the MUSIC technique are brought as examples of the correlation-based techniques. Additional correlation-based techniques may also be employed. To that end, the array-to-imager correspondence model relating to the employed technique should be used. As a further example, the Capon Minimum Variance Distortionless Response (MVDR) technique may be employed. To that end, the array-to-imager correspondence model relating to this technique is as follows:

[x, y] = arg max_x,y [1 / (w_x,y^T R^(-1) w_x,y)]
Following is a description relating to the interferometer technique. According to the interferometer technique, at least two phase difference measurements are employed between the signals received by respective at least two pairs of receptors. More than two phase difference measurements may also be employed. When, for example, two pairs of receptors are employed (i.e., two baselines), one pair from among receptors 104₁, 104₂ and 104₃ is selected as the first pair of receptors, and the baseline defined thereby is the first baseline. Another pair from among receptors 104₁, 104₂ and 104₃ is selected as the second pair of receptors, and the baseline defined thereby is the second baseline. For example, in Figure 1, receptors 104₂ and 104₃ are selected as the first pair of receptors and define the first baseline 114. Receptors 104₁ and 104₂ are selected as the second pair of receptors and define the second baseline 116. It is noted that although first baseline 114 and second baseline 116 are depicted in Figure 1 as being substantially orthogonal, that is not generally a requirement. It is sufficient that receptors 104₁, 104₂ and 104₃ are not located on the same line. It is further noted that the relative position between receptors 104₁, 104₂ and 104₃ need not be known (i.e., when array calibration, further explained below, is employed). As mentioned above, each pixel or group of pixels is associated with a respective signal-to-pixel correspondence model. In the interferometer technique, the signal-to-pixel correspondence model is a vector of phase differences between selected pairs of receptors, associated with each pixel or group of pixels, as follows:

[Δφ_1, Δφ_2, ..., Δφ_M]    (7)

where Δφ_m denotes the m-th model phase difference, determined by a calibration process as further explained below, and M is the number of selected pairs of receptors (i.e., the number of baselines employed).

After multi-channel receiver 106 receives the received signals from receptors 104₁, 104₂ and 104₃, correlator 108 correlates the received signals to determine inter-receptor characteristic models. In the interferometer technique, the inter-receptor characteristic models are received phase difference measurements between the selected pairs of receptors. When two pairs of receptors are employed (i.e., two baselines), the inter-receptor characteristic models are a received first phase difference, Δφ′_1, and a received second phase difference, Δφ′_2. The received first phase difference, Δφ′_1, is the phase difference between the received signals produced by the first pair of receptors (e.g., receptors 104₂ and 104₃). The received second phase difference, Δφ′_2, is the phase difference between the received signals produced by the second pair of receptors (e.g., receptors 104₁ and 104₂). Multi-channel receiver 106 provides these phase difference measurements to processor 110.

Processor 110 determines a received signals characteristic model from the inter-receptor characteristic models. In the interferometer technique, the received signals characteristic model is a vector of received phase differences between the selected pairs of antennas, as follows:

[Δφ′_1, Δφ′_2, ..., Δφ′_M]    (8)

where Δφ′_m denotes the m-th received phase difference and M is as above. Processor 110 determines a pixel corresponding to the location of the emitter in the image according to the signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. According to the interferometer technique, processor 110 determines the pixel or group of pixels with a corresponding signal-to-pixel correspondence model, which optimizes the following array-to-imager correspondence model:

[x, y] = arg min_x,y ||[Δφ′_1, Δφ′_2, ..., Δφ′_M] - [Δφ_1, Δφ_2, ..., Δφ_M]||²    (9)
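Equation (9) reduces to a nearest-neighbor search over the calibrated phase-difference vectors. A minimal sketch, using a hypothetical three-entry lookup table for two baselines (M = 2); the table values and pixel identifiers are illustrative, not taken from the disclosed system:

```python
import numpy as np

def interferometer_pixel(received_phase_diffs, model_phase_diffs):
    """Pick the pixel whose model phase-difference vector is closest, in the
    squared-norm sense, to the received phase differences (Equation (9)).

    `model_phase_diffs` maps a pixel unique identifier to the vector
    [dphi_1, ..., dphi_M] determined for that pixel by calibration.
    """
    received = np.asarray(received_phase_diffs)
    def cost(pix):
        return np.sum((received - np.asarray(model_phase_diffs[pix])) ** 2)
    return min(model_phase_diffs, key=cost)

# Toy calibration table: the received measurement [0.95, 1.9] is nearest to
# the calibrated entry of pixel (9, 6).
table = {(3, 4): [0.2, 1.1], (9, 6): [1.0, 2.0], (5, 5): [-0.4, 0.6]}
best = interferometer_pixel([0.95, 1.9], table)
```

As with the correlation-based sketches, the output is a pixel identifier rather than a direction, so no direction-of-arrival computation is needed.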

Accordingly, system 100 does not need to determine the direction from which the signal was received. Furthermore, the relative position between receptors 104₁, 104₂ and 104₃ need not be known when calibration is employed, as further explained below.

In general, in both the correlation-based techniques and the interferometer technique, the direct correspondence between the received signals and the pixels (i.e., the signal-to-pixel correspondence models) alleviates errors resulting from non-ideal receptors and optical distortions. The direct correspondence between the received signals and the pixels also alleviates the need for a positioning system or for the fusion of data (e.g., the fusion of DOA and positioning information with image information).

After determining the pixel corresponding to the emitter (i.e., either by a correlation-based technique or by the interferometer technique), processor 110 may mark the corresponding pixel or group of pixels in the acquired image with an indicator 124 (e.g., a crosshair, a circle or any other geometrical or arbitrary shape). Thus, the marked image serves as an output of the receptor array, and a user can identify the location of the radio emitter in the scene according to the location of indicator 124 in the image. In the image depicted in Figure 1, the emitter is located in building 120.

Reference is now made to Figure 2, which is a schematic illustration of an exemplary imaging sensor, generally referenced 150, including a matrix of pixels such as pixel 152 and pixel 154, in accordance with another embodiment of the disclosed technique. Each pixel in imaging sensor 150 is associated with a respective unique identifier. For example, the pixel unique identifier is the respective horizontal pixel coordinate and the respective vertical pixel coordinate corresponding to the location of that pixel in the matrix. For example, pixel 152 is associated with horizontal pixel coordinate 3 and vertical pixel coordinate 4. Similarly, pixel 154 is associated with horizontal pixel coordinate 9 and vertical pixel coordinate 6. The pixel coordinates may be designated as an ordered pair. As such, pixel 152 is associated with pixel coordinates [3,4] and pixel 154 is associated with pixel coordinates [9,6]. Alternatively, each pixel may be associated with a pixel number, which uniquely identifies the pixel and the location thereof in the array. In the example brought forth in Figure 2, array 150 includes 96 pixels. Accordingly, each pixel is associated with a number between 1 and 96, counting from right to left and from bottom to top. Thus, pixel 152 is associated with the number 39 and pixel 154 is associated with the number 69. Furthermore, each pixel is also associated with a respective horizontal angle and vertical angle related to the pixel coordinates (e.g., relative to the optical axis of the imaging sensor, which passes through the physical center of the imaging sensor, perpendicular to the imaging sensor plane). It is noted that imaging sensor 150 is brought herein as an example for explanation purposes only. In general, the number of pixels in imaging sensors may range from hundreds of pixels to tens of millions of pixels. Also, the notation 'x, y' hereinabove and below relates, in general, to the pixel unique identifier and not just to the pixel coordinates.
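Assuming the Figure 2 example is a 12x8 matrix, which is consistent with 96 pixels and with pixel numbers 39 and 69 for coordinates [3,4] and [9,6] when counting from right to left and from bottom to top, the two unique-identifier schemes can be converted into one another as sketched below. The 12x8 layout and the counting convention are inferred assumptions, not stated explicitly in the text:

```python
# Width of the assumed example sensor of Figure 2: 12 columns x 8 rows = 96.
WIDTH = 12

def coords_to_number(x, y):
    """Map 1-based pixel coordinates [x, y] to the unique pixel number,
    counting from right to left and from bottom to top (this assumes x is
    counted from the right edge and y from the bottom edge)."""
    return (y - 1) * WIDTH + x

def number_to_coords(n):
    """Inverse mapping: unique pixel number back to [x, y] coordinates."""
    return ((n - 1) % WIDTH + 1, (n - 1) // WIDTH + 1)
```

Under these assumptions, `coords_to_number(3, 4)` reproduces pixel number 39 for pixel 152 and `coords_to_number(9, 6)` reproduces pixel number 69 for pixel 154.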

Reference is now made to Figures 3A, 3B and 3C. Figure 3A is a schematic illustration of a correspondence, generally referenced 160, between first and second phase differences and a respective pixel number. Figure 3B is a schematic illustration of a correspondence, generally referenced 170, between first phase differences and respective horizontal pixel coordinates, and Figure 3C is a schematic illustration of a correspondence, generally referenced 180, between second phase differences and respective vertical pixel coordinates, in accordance with a further embodiment of the disclosed technique. With reference to Figure 3A, each pair of a first phase difference and a second phase difference is associated with a respective pixel, identified by a respective unique pixel number. In general, the azimuth of an emitter, as may be determined from the phase difference between the first pair of antennas, depends on the elevation of the emitter. In other words, more than one azimuth angle may be associated with the same first phase difference measurement (i.e., the phase difference measurement defines a cone about the horizontal baseline). However, a single azimuth angle is associated with a phase difference measurement for a given elevation angle. Thus, each row of pixels, which corresponds to a second phase difference, is associated with a respective correspondence curve, such as curves 162₁, 162₂ and 162₃, between the horizontal phase difference and the horizontal pixel coordinates. This plurality of curves, which may be employed to determine a surface such as surface 164 in Figure 3A, defines a correspondence between each pair of first and second phase differences and a respective pixel. Curves 162₁, 162₂ and 162₃ and surface 164 are brought herein as examples only. Practical curves and surfaces may vary depending on various factors, such as the mapping of the pixel numbers, the relative position between receptors, the medium characteristics (e.g., changes in refractive index) and the like.

When the predicted elevation angle of the emitter is small, the azimuth of the emitter may be approximated as independent of the elevation thereof. Thus, the correspondence between first phase differences and the horizontal pixel coordinates may be estimated independently of the elevation phase difference. With reference to Figure 3B, each first phase difference is associated with a respective horizontal pixel coordinate. For example, a first phase difference of 1 is associated with horizontal pixel coordinate 9. With reference to Figure 3C, each second phase difference is associated with a respective vertical pixel coordinate. For example, a second phase difference of 2 is associated with vertical pixel coordinate 7. Similarly to as mentioned above, and with reference to Figure 1, processor 110 determines the horizontal pixel coordinate and the vertical pixel coordinate corresponding to the direction (i.e., the azimuth direction and the elevation direction) in which the emitter is located, according to correspondence 170 between the first phase differences and the horizontal pixel coordinates and correspondence 180 between the second phase differences and the vertical pixel coordinates.

Still referring to Figure 1, when the distance between receptors 104₁, 104₂ and 104₃ increases, the accuracy of the detected phase difference between the signals received by each pair of receptors also increases. However, positioning the receptors at a relative distance larger than half the wavelength of the radio signal may cause ambiguity in the detected phase difference between the received signals. This ambiguity occurs since, when placing the receptors at a relative distance greater than half the wavelength, the number of repetitions of the wavelength associated with the radio signal, within the relative distance between the receptors, is greater than one. However, the repetitive nature of the radio signal limits the measured phase difference (i.e., either the first phase difference, Δφ₁, or the second phase difference, Δφ₂) to be between 0 and 2π radians (i.e., to portions of a wavelength). Thus, when the distance between the receptors is greater than half the wavelength, more than one wavelength repetition may be associated with the same measured phase difference (i.e., the measured phase difference may include integer multiples of 2π radians). This is also referred to as the wrapping or folding of the phase difference measurement. Consequently, more than one image pixel may be associated with the same measured first and/or second phase differences. Processor 110 may mark all of these pixels. A user may resolve this ambiguity through observation and/or additional information. For example, the user may be informed that the emitter is located to the right of building 120.
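The wrapping ambiguity can be illustrated numerically: for a baseline longer than half a wavelength, several unwrapped phase differences, and hence several candidate pixels, are consistent with one wrapped measurement in [0, 2π). The sketch below is a simplified one-sided model (sign conventions and array geometry are abstracted away), not part of the disclosed system:

```python
import numpy as np

def candidate_phase_differences(measured, baseline, wavelength):
    """List the unwrapped phase differences consistent with a wrapped
    measurement in [0, 2*pi), for a given baseline length.

    The true (unwrapped) phase difference is bounded by
    2*pi*baseline/wavelength, so when the baseline exceeds half a
    wavelength, several integer multiples of 2*pi added to the measurement
    remain geometrically feasible.
    """
    max_phase = 2 * np.pi * baseline / wavelength
    candidates = []
    k = 0
    while measured + 2 * np.pi * k <= max_phase:
        candidates.append(measured + 2 * np.pi * k)
        k += 1
    return candidates

# Half-wavelength baseline: a single candidate, hence no ambiguity.
unambiguous = candidate_phase_differences(1.0, baseline=0.5, wavelength=1.0)
# A 2.5-wavelength baseline: the same wrapped reading maps to several
# candidates, hence several candidate pixels for the processor to mark.
ambiguous = candidate_phase_differences(1.0, baseline=2.5, wavelength=1.0)
```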

The correspondence between each pixel or group of pixels and the respective signal-to-pixel model is determined by a calibration process. Referring back to Figure 1, during this calibration process, a calibration emitter (not shown) and detection module 103 are positioned in a plurality of relative calibration directions therebetween. Positioning the calibration emitter and detection module 103 in a plurality of relative calibration directions therebetween is achieved by moving either the calibration emitter or the system through a plurality of calibration locations relative to each other. Alternatively, positioning the calibration emitter and detection module 103 through a plurality of relative calibration directions therebetween is achieved by orienting detection module 103 through a plurality of calibration orientations relative to the calibration emitter. Furthermore, the calibration emitter is easily identifiable in an image or images acquired by imager 102. For example, the calibration emitter may be of a color which exhibits a high contrast relative to the background. Alternatively, the emitter may be fitted with a light reflector (e.g., a reflective sphere) or a light emitter (e.g., a Light Emitting Diode - LED).

For each relative calibration direction, imager 102 acquires an image of the calibration emitter. The location of the calibration emitter in the acquired image (i.e., the pixel coordinates or the pixel number) is determined either automatically (e.g., by processor 110 with the aid of image processing techniques) or by a user (e.g., with the aid of a cursor).

For each calibration direction, each one of receptors 104₁, 104₂ and 104₃ receives a signal originating from the calibration emitter. Each one of receptors 104₁, 104₂ and 104₃ provides the respective received signal produced thereby to multi-channel receiver 106. Multi-channel receiver 106 receives the received signals from receptors 104₁, 104₂ and 104₃. For each calibration direction, correlator 108 determines respective inter-receptor characteristic models according to the received signals, and processor 110 determines a received signals characteristic model from the inter-receptor characteristic models. Processor 110 associates the determined pixel location of the calibration emitter with the received signals characteristic model corresponding to the same calibration direction.

When employing the interferometer technique, correlator 108 determines at least two calibration phase differences. When two pairs of receptors are employed, correlator 108 determines a respective first calibration phase difference, Δφ_c1, between the signals received by the first pair of receptors, and a respective second calibration phase difference, Δφ_c2, between the signals received by the second pair of receptors, which correspond to the current relative calibration direction of the calibration emitter. Multi-channel receiver 106 provides these phase difference measurements to processor 110. When employing correlation-based techniques, processor 110 determines a respective steering vector for each relative calibration direction by determining the complex signal received by each receptor relative to a reference signal.

Accordingly, when the interferometer technique is employed, for each calibration direction processor 110 determines a correspondence between the calibration phase differences (e.g., the first calibration phase difference Δφ_c1 and the second calibration phase difference Δφ_c2) and the respective detected pixel. This correspondence may take the form of a Look-Up Table (LUT). It is noted that the number of calibration directions need not correspond to the number of pixels or groups of pixels. The number of calibration directions may be smaller than the number of pixels or groups of pixels. In such a case, the phase difference measurements of the pixels or groups of pixels that are not associated with a calibration direction may be determined by interpolation. Alternatively, processor 110 may fit a determined function (e.g., a polynomial of a selected degree) to the determined correspondences between the detected phase differences and the pixels.
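Interpolating the calibration LUT for pixels that were not themselves calibration directions can be sketched as follows, using the small-elevation case of Figure 3B where a single first phase difference maps to a single horizontal coordinate. The calibration values below are hypothetical:

```python
import numpy as np

# Hypothetical calibration LUT: a first phase difference measured at a few
# calibration directions, paired with the horizontal pixel coordinate
# detected in the corresponding calibration image (cf. Figure 3B).
calibrated_pixels = np.array([1.0, 4.0, 8.0, 12.0])
calibrated_phases = np.array([-1.2, -0.3, 0.6, 1.4])

def phase_for_pixel(x):
    """Linearly interpolate the model phase difference for a horizontal
    coordinate that was not itself a calibration direction."""
    return np.interp(x, calibrated_pixels, calibrated_phases)

# A pixel between calibration directions gets an interpolated model phase.
model_phase = phase_for_pixel(6.0)
```

A polynomial fit (e.g., `np.polyfit`) could replace the piecewise-linear interpolation, matching the alternative mentioned in the text.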

Similarly, when correlation-based techniques are employed, processor 110 determines a correspondence between the steering vector determined for each calibration position of the calibration emitter and the determined pixel of the calibration emitter. As mentioned above, the steering vector is as follows:

w_x,y = [a_1 e^(jθ_1), a_2 e^(jθ_2), ..., a_N e^(jθ_N)]^T    (2)

where N is the number of receptors (e.g., the number of antennas or the number of microphones) and a_i e^(jθ_i) is the complex representation (i.e., amplitude and phase) of the signal received by the i-th receptor relative to a reference signal, when a calibration emitter is located at the direction corresponding to the pixel coordinates.

Similarly to as noted above, the number of calibration directions need not correspond to the number of pixels. The number of calibration directions may be smaller than the number of pixels or groups of pixels. In such a case, the steering vectors of the pixels or groups of pixels that are not associated with a calibration direction may be determined by interpolation.

As mentioned above, the correspondence between each steering vector and the respective pixel or group of pixels may be determined computationally. To that end, the relative position between the receptors in the receptor array should be known. Furthermore, the wavelength of the signal (i.e., either the electromagnetic signal or the sound signal) in the medium should also be known. Consequently, the relative amplitudes and phases between the receptors of signals received from various directions can be determined. In essence, the calibration procedure registers the coordinate system associated with the imaging sensor with a coordinate system associated with the 'measurement space' of the receptor array (i.e., the phase difference measurements or the complex received signals). Consequently, a mapping between these two coordinate systems is determined, and each location in the measurement space is associated with a respective 'location' (i.e., pixel) in the imaging sensor. As mentioned above, the spatial relation between the imager and the receptor array is fixed, one with respect to the other, at least during the acquisition of the image and the reception of the signals. However, if this spatial relationship changes, the magnitude of this change (e.g., the translation and rotation) should be known and the calibration adjusted accordingly.
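A sketch of the computational alternative: with known receptor positions and signal wavelength, the steering vector of Equation (2) for a far-field direction follows directly from the path-length differences. An ideal lossless medium and unit amplitudes (a_i = 1) are assumed, and the array geometry below is hypothetical:

```python
import numpy as np

def steering_vector(receptor_positions, direction, wavelength):
    """Compute a steering vector from known array geometry (Equation (2)).

    `receptor_positions` is an (N, 3) array of receptor coordinates in
    meters, `direction` a unit vector pointing from the array toward the
    far-field emitter, and `wavelength` the signal wavelength in the medium.
    The first receptor is taken as the phase reference, so its entry is 1,
    matching the reference-signal convention in the text.
    """
    positions = np.asarray(receptor_positions, dtype=float)
    # Path-length difference of each receptor relative to the first one,
    # projected on the arrival direction.
    delays = (positions - positions[0]) @ np.asarray(direction, dtype=float)
    return np.exp(2j * np.pi * delays / wavelength)

# Hypothetical L-shaped 3-receptor array with half-wavelength spacing.
positions = [[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.0, 0.05, 0.0]]
w = steering_vector(positions, direction=[0.0, 0.0, 1.0], wavelength=0.1)
```

For the broadside direction all path-length differences vanish, so the steering vector is all ones; off-broadside directions introduce the relative phases used by the correlation-based techniques above.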

The above description relates to a receptor array which includes at least three receptors. However, it is noted that the disclosed technique is applicable to a receptor array which includes at least two receptors as well. In such a case, instead of marking a pixel, a system according to the disclosed technique employing only two receptors shall mark a column of pixels or a line of pixels.

Reference is now made to Figure 4, which is a schematic illustration of a method for identifying an emitter in an image, operative in accordance with another embodiment of the disclosed technique. In procedure 200, a signal-to-pixel correspondence model is determined for each pixel or group of pixels in an imaging sensor. This correspondence is determined by a calibration process, described hereinabove and herein below in conjunction with Figure 7, or by computation, as also explained hereinabove. In the correlation-based technique, the signal-to-pixel correspondence model is a steering vector as in Equation (2). In the interferometer technique, the signal-to-pixel correspondence model is a vector of phase differences between selected pairs of antennas as in Equation (7). After procedure 200, the method proceeds to procedure 202 and to procedure 204. In procedure 202, an image of a scene is acquired. An emitter is located in the scene. With reference to Figure 1, imager 102 acquires an image of a scene. After procedure 202, the method proceeds to procedure 210.

In procedure 204, a signal emitted by the emitter is received by a receptor array. The field of view of the receptor array at least partially overlaps with the field of view of the imager that acquired the image. With reference to Figure 1, receptors 104₁, 104₂ and 104₃ receive a signal transmitted by an emitter.

In procedure 206, inter-receptor characteristic models are determined. In the correlation-based technique, the inter-receptor characteristic models are the correlations between each pair of received signals and between each received signal and itself, as in Equation (3). In the interferometer technique, the inter-receptor characteristic models are received phase difference measurements between the selected pairs of antennas. With reference to Figure 1, correlator 108 determines inter-receptor characteristic models.

In procedure 208, a received signals characteristic model is determined. In the correlation-based technique, the received signals characteristic model is a covariance matrix determined from the correlations between each pair of received signals and between each received signal and itself, for example, as in Equation (4). In the interferometer technique, the received signals characteristic model is a vector of received phase differences between selected pairs of antennas, as in Equation (8) above. With reference to Figure 1, processor 110 determines a received signals characteristic model.

In procedure 210, a pixel or group of pixels corresponding to the location of the emitter in the acquired image is determined according to the signal-to-pixel correspondence model optimizing an array-to-imager correspondence model. The array-to-imager correspondence model incorporates the received signals characteristic model and the signal-to-pixel correspondence model. In the correlation-based technique, the array-to-imager correspondence model is, for example, as in Equation (5) for the beamforming algorithm, as in Equation (6) for the MUSIC algorithm, or as in the model brought hereinabove for the Capon algorithm. In the interferometer technique, the array-to-imager correspondence model is, for example, as in Equation (9). With reference to Figure 1, processor 110 determines a pixel or a group of pixels corresponding to the location of the emitter in the acquired image according to the signal-to-pixel correspondence model optimizing an array-to-imager correspondence model.

In procedure 212, the pixel corresponding to the location of the emitter in the acquired image is marked. The marked pixel is the pixel, or pixels, located at the pixel coordinates associated with the emitter. With reference to Figure 1, processor 110 marks the pixel corresponding to the location of the emitter in the acquired image.

As described above, the pixel corresponding to the location of the emitter in the image may also be determined employing correlation-based techniques. Reference is now made to Figure 5, which is a schematic illustration of a method for marking an emitter location on an image when employing correlation-based techniques, in accordance with a further embodiment of the disclosed technique. In procedure 220, a correspondence between each pixel or group of pixels in an imaging sensor and a respective steering vector is determined. This correspondence is determined by a calibration process, described hereinabove and herein below in conjunction with Figure 7, or by computation, as also explained hereinabove. After procedure 220, the method proceeds to procedure 222 and to procedure 224.

In procedure 222, an image of a scene is acquired. An emitter is located in the scene. With reference to Figure 1, imager 102 acquires an image of a scene. After procedure 222, the method proceeds to procedure 232. In procedure 224, a signal, emitted by the emitter, is received by a receptor array. The field of view of the receptor array at least partially overlaps with the field of view of the imager that acquired the image. With reference to Figure 1, receptors 104₁, 104₂ and 104₃ receive a signal transmitted by an emitter.

In procedure 226, the correlation between each pair of received signals, and between each received signal and itself, is determined. With reference to Figure 1, correlator 108 determines the correlation between each pair of received signals.

In procedure 228, a covariance matrix is determined from the correlations between each pair of received signals. With reference to Figure 1, processor 110 determines a covariance matrix.

In procedure 230, the pixel or group of pixels corresponding to the location of the emitter in the acquired image is determined according to the determined covariance matrix, the steering vectors and their respective corresponding pixel unique identifiers. The pixel unique identifier may be determined according to the beamforming algorithm described hereinabove in conjunction with Equation (5), according to the MUSIC algorithm described hereinabove in conjunction with Equation (6), or according to the Capon algorithm. With reference to Figure 1, processor 110 determines the pixel coordinates associated with the emitter.

In procedure 232, the pixel associated with the emitter is marked in the acquired image. The marked pixel is the pixel, or pixels, located at the pixel coordinates associated with the emitter. With reference to Figure 1, processor 110 marks the pixel associated with the emitter in the acquired image.

As mentioned above, the pixel corresponding to the location of the emitter in the image may be determined by employing the interferometer technique. Reference is now made to Figure 6, which is a schematic illustration of a method for marking an emitter location on an image when the interferometer technique is employed, in accordance with another embodiment of the disclosed technique. In procedure 250, a correspondence between each selected pixel or group of pixels of an imaging sensor and respective phase differences between at least two selected pairs of receptors is determined. This correspondence is determined by a calibration process described hereinabove, and as described herein below in conjunction with Figure 7. After procedure 250, the method proceeds to procedure 252 and to procedure 254.

In procedure 252, an image of a scene is acquired. An emitter is located in the scene. With reference to Figure 1, imager 102 acquires an image of a scene. After procedure 252, the method proceeds to procedure 260.

In procedure 254, a signal, transmitted by the emitter, is received by a receptor array. The field of view of the receptor array at least partially overlaps with the field of view of the imager that acquired the image. With reference to Figure 1, receptors 104₁, 104₂ and 104₃ receive a signal transmitted by an emitter.

In procedure 256, at least two received phase differences corresponding to the at least two selected pairs of receptors are detected. With reference to Figure 1, correlator 108 determines the first phase difference and the second phase difference.

In procedure 258, the pixel or group of pixels corresponding to the location of the emitter in the acquired image is determined according to the at least two received phase differences and the correspondences between each selected pixel or group of pixels of the imaging sensor and the respective phase differences. With reference to Figure 1, processor 110 determines the pixel associated with the emitter.

In procedure 260, the pixel or group of pixels corresponding to the location of the emitter in the acquired image is marked in the acquired image. The pixel associated with the emitter is the pixel or pixels located at the pixel coordinates associated with the emitter. With reference to Figure 1, processor 110 marks the pixel associated with the emitter in the acquired image.
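Procedures 258 and 260 can be sketched as follows, assuming (as one possibility among many) that the calibrated correspondence is held as a lookup table from pixel coordinates to phase-difference vectors and that the closest entry wins; the table contents and function names are hypothetical:

```python
import numpy as np

def locate_emitter(measured, lut):
    """Return the pixel whose calibrated phase-difference vector is
    closest to the measured one. Differences are wrapped to (-pi, pi]
    before comparison, since phases are circular quantities."""
    def dist(cal):
        d = np.angle(np.exp(1j * (np.asarray(measured) - np.asarray(cal))))
        return float(np.sum(d ** 2))
    return min(lut, key=lambda px: dist(lut[px]))

def mark_pixel(image, pixel, half=2):
    """Mark the located pixel with a small bright square in the image."""
    r, c = pixel
    image[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1] = 255
    return image

# Hypothetical calibration table: pixel -> (delta_phi_12, delta_phi_13)
lut = {(10, 20): (0.1, -0.4), (30, 40): (1.2, 0.9), (50, 60): (-2.0, 2.5)}
pixel = locate_emitter((1.15, 0.95), lut)
img = mark_pixel(np.zeros((64, 64), dtype=np.uint8), pixel)
```

A nearest-neighbour search is only one way to realize the correspondence; the text below also allows a fitted function in place of the LUT.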

Reference is now made to Figure 7, which is a schematic illustration of a method for determining a signal-to-pixel correspondence model in accordance with a further embodiment of the disclosed technique. When correlation-based techniques are employed, the correspondence vector is a steering vector as described above in conjunction with Equation (2). When the interferometer technique is employed, the correspondence vector is a vector whose entries are phase difference measurements between at least two selected pairs of receptors. In procedure 300, a calibration emitter and the detection module are positioned at a plurality of relative calibration directions therebetween. Positioning the calibration emitter and the detection module in a plurality of relative calibration directions therebetween is achieved, for example, by moving the calibration emitter or the detection module through a plurality of calibration locations relative to each other. Alternatively, positioning the calibration emitter and the detection module through a plurality of relative calibration directions therebetween is achieved by orienting the detection module through a plurality of calibration orientations relative to the calibration emitter. With reference to Figure 1, detection module 103 and a calibration emitter are positioned in a plurality of relative calibration directions therebetween. After procedure 300, the method proceeds to procedures 302 and 306.

In procedure 302, for each relative calibration direction, at least one respective image of the calibration emitter is acquired. With reference to Figure 1, imager 102 acquires at least one image of the calibration emitter for each relative calibration direction.

In procedure 304, for each relative calibration direction, the pixel unique identifier (e.g., pixel coordinates or pixel number) of the calibration emitter is determined. The pixel unique identifier of the calibration emitter in the acquired image is determined either automatically (e.g., with the aid of image processing techniques) or by a user. With reference to Figure 1, processor 110 determines the pixel coordinates of the calibration emitter. After procedure 304, the method proceeds to procedure 310.

In procedure 306, for each relative calibration direction, a signal transmitted by the calibration emitter is received. With reference to Figure 1, receptors 104₁, 104₂ and 104₃ receive the signal transmitted by the calibration emitter. After procedure 306, the method proceeds to procedure 308.

In procedure 308, for each relative calibration direction, a respective signal-to-pixel correspondence model is determined. When correlation-based techniques are employed, the correspondence vector is a steering vector as described above in conjunction with Equation (2). When the interferometer technique is employed, the signal-to-pixel correspondence model is a vector whose entries are the phase differences measured during the calibration process between at least two selected pairs of receptors. With reference to Figure 1, correlator 108 determines a signal-to-pixel correspondence model respective of each calibration direction.

In procedure 310, for each relative calibration direction, the respective calibration emitter pixel unique identifier is associated with the respective signal-to-pixel correspondence model. This correspondence may be in the form of a LUT or in the form of a function fitted to the determined correspondences. With reference to Figure 1, processor 110 associates the respective calibration emitter pixel coordinates with the respective correspondence vector.
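Both forms of the association mentioned above (a LUT, or a function fitted to the determined correspondences) can be sketched as follows. This is an illustrative example only; the record layout is assumed, and the affine least-squares map from phase differences to pixel coordinates is one fitting choice among many, not one mandated by the text:

```python
import numpy as np

def build_lut(records):
    """Discrete correspondence: pixel unique identifier ->
    calibrated phase-difference vector (one record per direction)."""
    return {pixel: tuple(phases) for pixel, phases in records}

def fit_affine_model(records):
    """Fitted-function alternative: an affine least-squares map from a
    phase-difference vector to pixel coordinates (row, col)."""
    phases = np.array([p for _, p in records], dtype=float)
    pixels = np.array([px for px, _ in records], dtype=float)
    A = np.hstack([phases, np.ones((len(phases), 1))])  # affine terms
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    return lambda phi: np.append(np.asarray(phi, dtype=float), 1.0) @ coeffs

# Hypothetical records: (pixel, (delta_phi_12, delta_phi_13)) per direction
records = [((0, 0), (0.0, 0.0)), ((0, 60), (0.0, 1.2)),
           ((40, 0), (0.8, 0.0)), ((40, 60), (0.8, 1.2))]
lut = build_lut(records)
predict = fit_affine_model(records)
row, col = predict((0.4, 0.6))   # interpolates inside the calibrated grid
```

The fitted function interpolates between calibration directions, whereas the LUT returns only the calibrated pixels themselves; a real calibration would use many more directions than the four synthetic ones above.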

The disclosed technique may be employed in various scenarios and applications. For example, in a disaster area (e.g., where earthquakes occurred, where a storm passed or where a tsunami hit), which includes collapsed buildings, a system according to the disclosed technique may be employed to locate cellphones corresponding to survivors trapped within the collapsed buildings and to indicate the location of such survivors on the acquired image. As such, the system shall acquire an image of the disaster area or a part thereof and receive signals transmitted by these cellphones. The system identifies the pixel or pixels corresponding to the cellphones and marks the pixel or pixels corresponding to these cellphones on the acquired image as described above. Such a marked image may give the user an indication regarding the location of the survivors in the scene. The user can then direct the rescue personnel toward the location of the cellphones. Moreover, the system may identify a unique device identification number (e.g., Medium Access Control - MAC address, International Mobile Station Equipment Identity - IMEI number), which mobile devices transmit unencrypted, and display these numbers on the acquired image near the marking corresponding to the detected emitter or save these numbers to a memory. These unique device identification numbers may aid in identifying the trapped persons by matching these numbers to the numbers registered at the service providers.

The disclosed technique may also be employed for security purposes. For example, imager 102 in detection module 103 may be employed as a security camera while the receptor array shall receive the signals emitted by the cellphones and mark the location of these cellphones in the acquired image. Similar to as described above, the system can identify a unique device identification number which the cellphones transmit unencrypted, display these numbers on the acquired image near the marking corresponding to the detected emitter or save these numbers to a memory, and employ these numbers to identify the persons in the image. When the receptors in the receptor array are microphones, the system according to the disclosed technique may be employed, for example, for identifying talking people in a crowd.

It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the disclosed technique is defined only by the claims, which follow.