Title:
DUAL SPECTRAL IMAGER WITH NO MOVING PARTS AND DRIFT CORRECTION METHOD THEREOF
Document Type and Number:
WIPO Patent Application WO/2017/137970
Kind Code:
A1
Abstract:
A device images radiation from a scene. An image forming optical component forms an image of the scene on an uncooled detector having two separate regions. A first filter allows radiation in a first wavelength band to be imaged on the first detector region. A second filter allows radiation in a second wavelength band to be imaged on the second detector region. Two fixedly positioned wedge-shaped components each direct radiation from the scene through the image forming optical component onto the detector through an f-number of less than 1.5. A blackbody source positioned within the device reduces drift induced by environmental changes surrounding the device. The blackbody source projects radiation through one of the wedge-shaped components onto a region of the detector that does not receive radiation from the scene. Pixel signals produced from the scene radiation are modified based on pixel signals produced from the blackbody.

Inventors:
CABIB DARIO (IL)
LAVI MOSHE (IL)
SINGHER LIVIU (IL)
Application Number:
PCT/IL2016/050166
Publication Date:
August 17, 2017
Filing Date:
February 11, 2016
Assignee:
CI SYSTEMS (ISRAEL) LTD (IL)
International Classes:
G01J3/28
Foreign References:
US20110002505A1 (2011-01-06)
US20010045516A1 (2001-11-29)
US20050263682A1 (2005-12-01)
US5729011A (1998-03-17)
US20140354802A1 (2014-12-04)
US20080180789A1 (2008-07-31)
US6249374B1 (2001-06-19)
Other References:
See also references of EP 3414537A4
Attorney, Agent or Firm:
FRIEDMAN, Mark et al. (IL)
Claims:
WHAT IS CLAIMED IS:

1. A device for imaging radiation from a scene, the radiation including at least a first and second wavelength band, the device comprising:

(a) a detector of the radiation from the scene, the detector being uncooled and including a first and second detector region, the first and second detector regions being separate;

(b) a first and a second filter, the first filter associated with the first detector region for allowing radiation in the first wavelength band to be imaged on the first detector region, the second filter associated with the second detector region for allowing radiation in the second wavelength band to be imaged on the second detector region; and

(c) an optical system for focusing the radiation from the scene onto the detector, the optical system comprising:

(i) an image forming optical component for forming an image of the scene on the detector, and

(ii) first and second substantially wedge-shaped components, the first wedge-shaped component associated with the first filter, the second wedge-shaped component associated with the second filter, each of the wedge-shaped components fixedly positioned at a distance from the image forming optical component, each of the wedge-shaped components directing radiation from a field of view of the scene through the image forming optical component onto the detector, such that the radiation is imaged separately onto the first and second detector regions through an f-number of the optical system of less than approximately 1.5, the imaged radiation on each of the detector regions including radiation in one respective wavelength band.

2. The device of claim 1, wherein each of the first and second filters is a band pass filter.

3. The device of claim 1, wherein the first filter is disposed on one of a first surface or a second surface of the first wedge-shaped component, and the second filter is disposed on one of a first surface or a second surface of the second wedge-shaped component.

4. The device of claim 3, wherein the first surface of the first wedge-shaped component is a closest surface of the first wedge-shaped component to the image forming optical component, and the first surface of the second wedge-shaped component is a closest surface of the second wedge-shaped component to the image forming optical component, and the second surface of the first wedge-shaped component is a closest surface of the first wedge-shaped component to the scene, and the second surface of the second wedge-shaped component is a closest surface of the second wedge-shaped component to the scene.

5. The device of claim 1, wherein an antireflective material is disposed on at least one of a first surface and a second surface of the first wedge-shaped component, and on at least one of a first surface and a second surface of the second wedge-shaped component.

6. The device of claim 1, wherein the first and second wedge-shaped components are substantially symmetrically disposed relative to an optical path of radiation.

7. The device of claim 1, wherein each of the first and second wedge-shaped components is oriented at a respective angle relative to an optical path of radiation from the scene to the detector.

8. The device of claim 1, wherein the presence of the first and second wedge-shaped components results in a vertical field of view approximately halved with respect to a field of view of the device defined by the image forming optical component.

9. A method for reducing drift induced by a changing environment feature when imaging radiation from a scene, the radiation from the scene including at least a first and second wavelength band in the long wave infrared region of the electromagnetic spectrum, the method comprising:

(a) focusing radiation from the scene by a first and second substantially wedge-shaped component through an image forming optical component onto a detector sensitive to radiation in the first and second wavelength bands, the detector being uncooled and including a separate first, second, and third detector region, the first and second wedge-shaped components positioned at a distance from the image forming optical component such that the radiation is imaged separately onto the first and second detector regions through an f-number less than approximately 1.5, and each of the wedge-shaped components transmitting radiation substantially in one respective wavelength band, and the imaged radiation on each of the first and second detector regions including radiation in one respective wavelength band, the imaged radiation on the first and second detector regions producing at least a first pixel signal;

(b) projecting radiation from a radiation source by at least one of the first or second wedge-shaped components through the image forming optical component onto the third detector region to produce a second pixel signal, the radiation source different from the scene, and the radiation source projected continuously onto the third detector region over the duration for which the radiation from the scene is focused onto the first and second detector regions; and

(c) modifying the first pixel signal based in part on a predetermined function to produce a modified pixel signal, the predetermined function defining a relationship between a change in the second pixel signal and a change in the first pixel signal induced by the changing environment feature.

10. The method of claim 9, wherein the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, the method further comprising:

(d) positioning the radiation source proximate to the first enclosure volume.

11. The method of claim 9, wherein the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and at least a portion of the first enclosure volume is positioned within a second enclosure volume, the method further comprising:

(d) positioning the radiation source within the second enclosure volume and outside of the first enclosure volume.

12. The method of claim 9, further comprising:

(d) determining the change in the first pixel signal induced by the changing environment feature based on the predetermined function, and wherein the modified pixel signal is produced by subtracting the determined change in the first pixel signal from the first pixel signal.

13. The method of claim 9, wherein the predetermined function is a correlation between the second pixel signal and the change in the first pixel signal induced by the changing environment feature.

14. The method of claim 13, further comprising:

(d) determining the correlation, wherein the determining of the correlation is performed prior to performing (a).

15. The method of claim 13, wherein the radiation source is a blackbody radiation source, and the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and the detector and the first enclosure volume are positioned within a chamber having an adjustable chamber temperature, and a verification of the correlation is determined by:

(i) measuring a first temperature of the blackbody radiation source at a first chamber temperature and measuring a subsequent temperature of the blackbody radiation source at a subsequent chamber temperature, the first and subsequent temperatures of the blackbody radiation source defining a first set;

(ii) measuring a first reading of the second pixel signal at the first chamber temperature and measuring a subsequent reading of the second pixel signal at the subsequent chamber temperature, the first and subsequent readings of the pixel signal defining a second set; and

(iii) verifying a correlation between the first and second sets.

16. The method of claim 13, wherein the radiation source is a blackbody radiation source, and the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and the detector and the first enclosure volume are positioned within a chamber having an adjustable chamber temperature, and a determination of the correlation includes:

(i) measuring a first reading of the first pixel signal at a first chamber temperature and measuring a subsequent reading of the first pixel signal at a subsequent chamber temperature;

(ii) subtracting the first reading of the first pixel signal from the subsequent reading of the first pixel signal to define a first set; and

(iii) measuring a first reading of the second pixel signal at the first chamber temperature and measuring a subsequent reading of the second pixel signal at the subsequent chamber temperature, the first and subsequent readings of the second pixel signal defining a second set.

17. The method of claim 16, wherein the modifying of the first pixel signal includes:

(i) measuring a first reading of the first pixel signal at a first time instance and measuring a subsequent reading of the first pixel signal at a subsequent time instance;

(ii) measuring a first reading of the second pixel signal at the first time instance and measuring a subsequent reading of the second pixel signal at the subsequent time instance; and

(iii) subtracting the first reading of the blind pixel signal from the subsequent reading of the blind pixel signal to define a third set.

18. The method of claim 17, wherein the modifying of the first pixel signal further includes:

(iv) modifying the subsequent reading of the first pixel signal based on the third set in accordance with a correlation between the first and second sets.

19. The method of claim 16, wherein the determination of the correlation further includes:

(iv) displaying the first set as a function of the second set.

20. The method of claim 16, wherein the determination of the correlation further includes:

(iv) displaying the first set as a function of a third set, the third set being defined by the first chamber temperature and the subsequent chamber temperature.

Description:
APPLICATION FOR PATENT

INVENTORS

Dario CABIB, Moshe LAVI, and Liviu SINGHER

TITLE

Dual Spectral Imager with No Moving Parts and Drift Correction Method Thereof

TECHNICAL FIELD

The present invention relates to the detection and imaging of infrared radiation.

BACKGROUND OF THE INVENTION

In order to detect and image gas clouds, especially through the use of infrared detection systems over a wide spectral range, it is often necessary to spectrally limit the incoming radiation to selected wavelength bands using spectral filtering techniques. This is accomplished by measuring the radiation emitted by a background of the gas cloud in two different wavelength bands, one which is absorbed by the gas cloud, and one which is not absorbed by the gas cloud. Devices which can detect and image gas clouds have a wide range of applications, such as, for example, constant monitoring of a scene in industrial installations and facilities for gas leakages, and identifying escaping gases from gas transporting vehicles subsequent to traffic accidents. Typically, detectors which detect radiation in the visible spectral range are of lower cost than infrared detectors. However, since most hazardous gases of interest lack colors in the visible spectral range, such devices must use higher cost infrared detectors. Typically, the least expensive infrared imaging detectors relevant for such applications are uncooled detectors, such as microbolometer type arrays.

For some of the above applications, for example when the device must provide an alarm at cloud concentration and size combinations above predetermined thresholds, quantitative data are required. To this end, these devices must use at least two spectral filters for filtering two selected wavelength bands, radiation in one wavelength band which is absorbed by the gas, and radiation in the other wavelength band which is not absorbed by the gas. The radiation in each wavelength band is imaged and analyzed separately. Device calibration methods and mathematical algorithms can be used to subsequently transform this quantitative data into scenes where the cloud image (when present) is shown, and where the quantitative information on the optical density of the specific gas of interest is stored pixel by pixel. Such a double filtering configuration is necessary in order to take into account contributions to the signals due to background infrared self-emission and drifts thereof brought about by background temperature drifts. This double filtering can be achieved with a spectral scanning method in which there is movement of an optical component of the device, such as, for example, an interferometer, a set of band pass filters mounted on a rotating wheel, or a scanning mirror to gather spectral information. Devices based on uncooled detectors must be designed with a large focusing lens numerical aperture (low f-number) in order to increase detector sensitivity to the radiation of interest relative to environment radiation. This is due to the fact that such detectors have a wide field of view. Designing an optical system with such a low f-number can be achieved with the above mentioned moving components. However, movements of optical components cause decreased system reliability, thereby increasing maintenance and operating cost of the device. In order to reduce maintenance and cost, filtering techniques can be implemented without moving parts via prisms, beam splitters, or beam combiners. However, such techniques have the effect of decreasing the focusing lens numerical aperture, thereby decreasing the sensitivity of the system to the radiation of the scene of interest relative to the environment radiation.
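By way of a simplified, non-authoritative illustration of the double filtering principle (not the patented algorithm itself), the following Python sketch compares co-registered images acquired in an absorbed band and a non-absorbed band; all names and numeric values are hypothetical:

    import numpy as np

    def gas_presence_map(img_absorbed, img_reference, threshold=0.05):
        # Gas absorption lowers the absorbed-band signal while leaving the
        # non-absorbed (reference) band essentially unchanged, so a normalized
        # band difference highlights pixels where gas is likely present.
        # Real instruments first apply radiometric calibration per pixel.
        diff = (img_reference - img_absorbed) / np.maximum(img_reference, 1e-9)
        return diff > threshold

    # Synthetic example: uniform background with a weakly absorbing "cloud".
    background = np.full((64, 64), 1000.0)
    absorbed = background.copy()
    absorbed[20:40, 20:40] *= 0.90                        # 10% absorption where gas sits
    print(gas_presence_map(absorbed, background).sum())   # -> 400 pixels flagged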

Furthermore, such infrared imaging devices based on uneooled microbolometer detectors can be used to quantitatively measure the radiance of each pixel of a scene only if the environment radiation changes (due mainly to environment temperature changes) contributing to the detector signals, can be monitored and corrected for. This is due to the fact that a quantitative measurement of infrared radiation from a scene is based on a mathematical relation between the detector signal and the radiation to be measured. This relation depends on the environment state during the measurement, and therefore the quantitative scene measurement can be done only if the environment state, and how the environment state affects that relation, is known during the measurement. The environment radiation sensed by the detector elements originates mainly from the optics and enclosures of the imaging device (besides the scene pixel to be monitored), and is a direct function of the environment temperature. If this radiation changes in time, it causes a drift in the signal, which changes its relation to the corresponding scene radiation to be measured and introduces inaccuracy.

This resulting inaccuracy prevents the use of such devices, especially in situations where they have to provide quantitative information on the gas to be monitored and have to be used unattended for monitoring purposes over extended periods of time, such as, for example, for the monitoring of a scene in industrial installations and facilities. One known method for performing drift corrections is referred to as Non-Uniformity Correction (NUC). NUC corrects for detector electronic offset and partially corrects for detector case temperature drifts by the frequent use of an opening and closing shutter which is provided by the camera manufacturer. This NUC procedure is well known and widely employed in instruments based on microbolometer detectors. The shutter used for NUC is a moving part and therefore it is desirable to reduce the number of openings and closings of such a component when monitoring for gas leakages in large installations, requiring the instrument to be used twenty-four hours a day for several years without maintenance or recalibration. Frequent opening and closing of the shutter (which is usually done every few minutes or hours) requires high maintenance expenses.
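For context, a minimal sketch of the standard one-point NUC offset correction referred to above, assuming a frame captured while the closed shutter presents a uniform scene:

    import numpy as np

    def estimate_nuc_offsets(shutter_frame):
        # With the shutter closed, every pixel views the same uniform source,
        # so deviations from the frame mean are per-pixel offset errors.
        return shutter_frame - shutter_frame.mean()

    def apply_nuc(scene_frame, offsets):
        # One-point NUC: subtract the stored offsets from each scene frame.
        # Gain non-uniformity would require a second reference temperature
        # (two-point NUC).
        return scene_frame - offsets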

To reduce the amount of shutter operations when using NUC techniques, methods for correcting for signal drift due to detector case temperature changes occurring between successive shutter openings have been developed by detector manufacturers, referred to as blind pixel methods. Known blind pixel methods rely on several elements of the detector array of the imaging device being exposed only to a blackbody radiation source placed in the detector case, and not to the scene radiation (i.e., being blind to the scene). However, such methods can only account and compensate for environmental temperature changes originating near and from the enclosure of the detector array itself, and not for changes originating near the optics or the enclosures of the imaging device. This is because in general there are gradients of temperature between the detector case and the rest of the optics and device enclosure. Therefore, known blind pixel methods may not satisfactorily compensate for environment radiation changes in imaging devices with large and/or complex optics, such as, for example, optics with wedges for directing and imaging radiation onto a detector through an objective lens system, as described below.

SUMMARY OF THE INVENTION

The present invention is a device and method for simultaneously detecting and imaging infrared radiation in at least two wavelength bands, and for correcting for signal drift as a result of the changing environment. The device and method avoid the use of moving parts, for higher reliability, without appreciably compromising on high numerical aperture (low optical f-number) for high sensitivity, and still using lower cost uncooled imaging detectors.

According to an embodiment of the teachings of the present invention there is provided, a device for imaging radiation from a scene, the radiation including at least a first and second wavelength band, the device comprising: (a) a detector of the radiation from the scene, the detector being uncooled and including a first and second detector region, the first and second detector regions being separate; (b) a first and a second filter, the first filter associated with the first detector region for allowing radiation in the first wavelength band to be imaged on the first detector region, the second filter associated with the second detector region for allowing radiation in the second wavelength band to be imaged on the second detector region; and (c) an optical system for focusing the radiation from the scene onto the detector, the optical system comprising: (i) an image forming optical component for forming an image of the scene on the detector, and (ii) first and second substantially wedge-shaped components, the first wedge-shaped component associated with the first filter, the second wedge-shaped component associated with the second filter, each of the wedge-shaped components fixedly positioned at a distance from the image forming optical component, each of the wedge-shaped components directing radiation from a field of view of the scene through the image forming optical component onto the detector, such that the radiation is imaged separately onto the first and second detector regions through an f-number of the optical system of less than approximately 1.5, the imaged radiation on each of the detector regions including radiation in one respective wavelength band.

Optionally, each of the first and second filters is a band pass filter.

Optionally, the first filter is disposed on one of a first surface or a second surface of the first wedge-shaped component, and the second filter is disposed on one of a first surface or a second surface of the second wedge-shaped component.

Optionally, the first surface of the first wedge-shaped component is a closest surface of the first wedge-shaped component to the image forming optical component, and the first surface of the second wedge-shaped component is a closest surface of the second wedge-shaped component to the image forming optical component, and the second surface of the first wedge-shaped component is a closest surface of the first wedge-shaped component to the scene, and the second surface of the second wedge-shaped component is a closest surface of the second wedge-shaped component to the scene.

Optionally, an antireflective material is disposed on at least one of a first surface and a second surface of the first wedge-shaped component, and on at least one of a first surface and a second surface of the second wedge-shaped component.

Optionally, the first and second wedge-shaped components are substantially symmetrically disposed relative to an optical path of radiation.

Optionally, each of the first and second wedge-shaped components is oriented at a respective angle relative to an optical path of radiation from the scene to the detector.

Optionally, the presence of the first and second wedge-shaped components results in a vertical field of view approximately halved with respect to a field of view of the device defined by the image forming optical component.

There is also provided according to an embodiment of the teachings of the present invention, a method for imaging radiation from a scene, the radiation including at least a first and second wavelength band, the method comprising: (a) fixedly positioning a first and a second substantially wedge-shaped component at a distance from an image forming optical component; (b) directing radiation from a field of view of the scene by the first wedge-shaped component through the image forming optical component onto a first region of an uncooled detector; (c) filtering the directed radiation by the first wedge-shaped component to allow radiation in the first wavelength band to be imaged on the first region of the detector; (d) directing radiation from the field of view of the scene by the second wedge-shaped component through the image forming optical component onto a second region of the detector, the first and second regions of the detector being separate; (e) filtering the directed radiation by the second wedge-shaped component to allow radiation in the second wavelength band to be imaged on the second region of the detector; and (f) imaging the radiation from the field of view of the scene onto the detector, the distance from the image forming optical component being such that the radiation is imaged separately onto the first and second regions of the detector through an f-number less than approximately 1.5, and the imaged radiation on each of the regions of the detector including radiation in one respective wavelength band.

Optionally, the method further comprises: (g) orienting each of the first and second wedge-shaped components at a respective angle relative to an optical path of radiation from the scene to the detector.

Optionally, the method further comprises: (g) disposing an antireflective material on at least one of a first surface and a second surface of the first wedge-shaped component, and on at least one of a first surface and a second surface of the second wedge-shaped component.

Optionally, the method further comprises: (g) fixedly positioning a first filter component to allow radiation in the first wavelength band to be imaged on the first region of the detector; and (h) fixedly positioning a second filter component to allow radiation in the second wavelength band to be imaged on the second region of the detector.

Optionally, the fixedly positioning of the first filter comprises: (i) disposing the first filter on one of a first surface or a second surface of the first wedge-shaped component, and the fixedly positioning of the second filter comprises: (i) disposing the second filter on one of a first surface or a second surface of the second wedge-shaped component.

There is also provided according to an embodiment of the teachings of the present invention, a method for reducing drift induced by a changing environment feature when imaging radiation from a scene, the radiation from the scene including at least a first and second wavelength band in the long wave infrared region of the electromagnetic spectrum, the method comprising: (a) focusing radiation from the scene by a first and second substantially wedge-shaped component through an image forming optical component onto a detector sensitive to radiation in the first and second wavelength bands, the detector being uncooled and including a separate first, second, and third detector region, the first and second wedge-shaped components positioned at a distance from the image forming optical component such that the radiation is imaged separately onto the first and second detector regions through an f-number less than approximately 1.5, and each of the wedge-shaped components transmitting radiation substantially in one respective wavelength band, and the imaged radiation on each of the first and second detector regions including radiation in one respective wavelength band, the imaged radiation on the first and second detector regions producing at least a first pixel signal; (b) projecting radiation from a radiation source by at least one of the first or second wedge-shaped components through the image forming optical component onto the third detector region to produce a second pixel signal, the radiation source different from the scene, and the radiation source projected continuously onto the third detector region over the duration for which the radiation from the scene is focused onto the first and second detector regions; and (c) modifying the first pixel signal based in part on a predetermined function to produce a modified pixel signal, the predetermined function defining a relationship between a change in the second pixel signal and a change in the first pixel signal induced by the changing environment feature.

Optionally, the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and the method further comprises: (d) positioning the radiation source proximate to the first enclosure volume.

Optionally, the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and at least a portion of the first enclosure volume is positioned within a second enclosure volume, and the method further comprises: (d) positioning the radiation source within the second enclosure volume and outside of the first enclosure volume.

Optionally, the method further comprises: (d) determining the change in the first pixel signal induced by the changing environment feature based on the predetermined function, and the modified pixel signal is produced by subtracting the determined change in the first pixel signal from the first pixel signal.

Optionally, the predetermined function is a correlation between the second pixel signal and the change in the first pixel signal induced by the changing environment feature.

Optionally, the method further comprises: (d) determining the correlation, the determining of the correlation being performed prior to performing (a).

Optionally, the radiation source is a blackbody radiation source, and the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and the detector and the first enclosure volume are positioned within a chamber having an adjustable chamber temperature, and a verification of the correlation is determined by: (i) measuring a first temperature of the blackbody radiation source at a first chamber temperature and measuring a subsequent temperature of the blackbody radiation source at a subsequent chamber temperature, the first and subsequent temperatures of the blackbody radiation source defining a first set; (ii) measuring a first reading of the second pixel signal at the first chamber temperature and measuring a subsequent reading of the second pixel signal at the subsequent chamber temperature, the first and subsequent readings of the pixel signal defining a second set; and (iii) verifying a correlation between the first and second sets.

Optionally, the radiation source is a blackbody radiation source, and the image forming optical component and the first and second wedge-shaped components are positioned within a first enclosure volume, and the detector and the first enclosure volume are positioned within a chamber having an adjustable chamber temperature, and a determination of the correlation includes: (i) measuring a first reading of the first pixel signal at a first chamber temperature and measuring a subsequent reading of the first pixel signal at a subsequent chamber temperature; (ii) subtracting the first reading of the first pixel signal from the subsequent reading of the first pixel signal to define a first set; and (iii) measuring a first reading of the second pixel signal at the first chamber temperature and measuring a subsequent reading of the second pixel signal at the subsequent chamber temperature, the first and subsequent readings of the second pixel signal defining a second set.
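One simple realization of the correlation determined in steps (i)-(iii) above is a linear fit of the first set (the drift of the scene pixel signal) against the second set (the blind, or second, pixel readings). The sketch below is illustrative only, with made-up calibration numbers:

    import numpy as np

    # Illustrative calibration data recorded while sweeping the chamber
    # temperature: second set = blind (second) pixel readings, first set =
    # drift of the scene (first) pixel signal relative to its first reading.
    blind_readings = np.array([5000.0, 5040.0, 5085.0, 5130.0])
    scene_drift = np.array([0.0, 38.0, 81.0, 124.0])

    # Model the predetermined function as a straight line through the data.
    slope, intercept = np.polyfit(blind_readings - blind_readings[0],
                                  scene_drift, 1)
    print(slope, intercept)   # slope relates blind-pixel change to scene drift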

Optionally, the modifying of the first pixel signal includes: (i) measuring a first reading of the first pixel signal at a first time instance and measuring a subsequent reading of the first pixel signal at a subsequent time instance; (ii) measuring a first reading of the second pixel signal at the first time instance and measuring a subsequent reading of the second pixel signal at the subsequent time instance; and (iii) subtracting the first reading of the blind pixel signal from the subsequent reading of the blind pixel signal to define a third set.

Optionally, the modifying of the first pixel signal further includes: (iv) modifying the subsequent reading of the first pixel signal based on the third set in accordance with a correlation between the first and second sets.
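Applied at run time, the third set (the change in the blind pixel signal between the two time instances) drives the correction of the subsequent scene reading. A minimal sketch, assuming the linear calibration fitted above:

    def correct_drift(scene_t1, blind_t0, blind_t1, slope):
        # Predict the environment-induced drift of the scene pixel from the
        # observed change in the blind pixel (the third set), then subtract it.
        predicted_drift = slope * (blind_t1 - blind_t0)
        return scene_t1 - predicted_drift

    # Example: the blind pixel rose by 40 counts; with a hypothetical slope of
    # 0.95 the scene reading is corrected downward by ~38 counts.
    print(correct_drift(10038.0, 5000.0, 5040.0, 0.95))   # -> ~10000.0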

Optionally, the determination of the correlation further includes: (iv) displaying the first set as a function of the second set.

Optionally, the determination of the correlation further includes: (iv) displaying the first set as a function of a third set, the third set being defined by the first chamber temperature and the subsequent chamber temperature.

There is also provided according to an embodiment of the teachings of the present invention, a device for reducing a drift induced by a changing environment feature when imaging radiation from a scene, the radiation from the scene including at least a first and second wavelength band in the long wave infrared region of the electromagnetic spectrum, the device comprising: (a) a radiation source, the radiation source different from the scene; (b) a detector of the radiation from the scene and of radiation from the radiation source, the detector being uncooled and sensitive to radiation in the first and second wavelength bands, and the detector including a separate first, second, and third detector region; (c) a first and a second filter, the first filter associated with the first detector region for allowing radiation in the first wavelength band to be imaged on the first detector region, the second filter associated with the second detector region for allowing radiation in the second wavelength band to be imaged on the second detector region; (d) an optical system for continuously focusing the radiation from the scene and the radiation source onto the detector, the optical system comprising: (i) an image forming optical component for forming an image of the scene on the detector and for projecting radiation from the radiation source onto the third detector region, and (ii) a first and a second substantially wedge-shaped component, the first wedge-shaped component associated with the first filter, the second wedge-shaped component associated with the second filter, each of the wedge-shaped components fixedly positioned at a distance from the image forming optical component, each of the wedge-shaped components directing radiation from a field of view of the scene through the image forming optical component onto the detector, such that the radiation is imaged separately onto the first and second detector regions through an f-number of the optical system of less than approximately 1.5, the imaged radiation on each of the detector regions including radiation in one respective wavelength band, and at least one of the first or second wedge-shaped components projecting radiation from the radiation source through the image forming optical component onto the third detector region; and the device further comprising (e) electronic circuitry configured to: (i) produce at least a first pixel signal from the imaged radiation on the first and second detector regions; (ii) produce a second pixel signal from the radiation source projected by the optical system onto the third detector region, and (iii) modify the first pixel signal according to a predetermined function to produce a modified pixel signal, the predetermined function defining a relationship between a change in the second pixel signal and a change in the first pixel signal induced by the changing environment feature.

Optionally, the electronic circuitry is further configured to: (iv) determine the change in the first pixel signal induced by the changing environment feature based on the predetermined function; and (v) subtract the determined change in the first pixel signal from the first pixel signal.

Optionally, the radiation source is a blackbody radiation source.

Optionally, the radiation from the radiation source is directed by only one of the first and second wedge-shaped components through the image forming optical component onto the third detector region.

Optionally, the device further comprises: (f) a first enclosure volume, the optical system being positioned within the first enclosure volume.

Optionally, the radiation source is positioned proximate to the first enclosure volume.

Optionally, the device further comprises: (g) a second enclosure volume, at least a portion of the first enclosure volume being positioned within the second enclosure volume, and the radiation source being positioned within the second enclosure volume and outside of the first enclosure volume.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:

FIG. 1 is a schematic side view illustrating a device for imaging radiation from a scene according to an embodiment of the invention;

FIG. 2 is a schematic side view illustrating the traversal of incident rays from the scene and scene background through the device according to an embodiment of the invention;

FIGS. 3A-3B are schematic illustrations showing filtering alternatives of the device according to an embodiment of the invention;

FIG. 4 is a schematic front view illustrating a detector and the resulting image formed on the detector, according to an embodiment of the invention.

FIG. 5 is a schematic side view illustrating a device for drift correction according to an embodiment of the invention;

FIG. 6A is a schematic front view illustrating a detector array of the device of FIG. 5;

FIG. 6B is a schematic front view illustrating blind pixels and imaged pixels according to an embodiment of the invention;

FIG. 7 is a block diagram of image acquisition electronics coupled to a detector array according to an embodiment of the invention;

FIG. 8 is a flowchart for verifying a correlation according to an embodiment of the invention;

FIG. 9 is a flowchart for determining a correlation according to an embodiment of the invention;

FIG. 10 is a flowchart for correcting for drift according to an embodiment of the invention; and

FIGS. 11A and 11B show examples of plots used for performing steps of the flowchart of FIG. 8.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is a device for detecting and imaging infrared radiation in at least two wavelength bands on two separate regions of a detector, and for correcting for signal drift as a result of the changing environment.

The principles and operation of the device according to the present invention may be better understood with reference to the drawings and the accompanying description.

Referring now to the drawings, Figure 1 shows a schematic illustration of an embodiment of a device 1 for imaging a scene 80, such as a gas cloud, in the infrared region of the electromagnetic spectrum, most preferably the Long-Wave Infrared (LWIR) region of the electromagnetic spectrum. When imaging such a scene, the device used for imaging the scene 80 is preferably positioned such that the scene 80 is interposed between the device 1 and a radiation emitting background 90, such as, for example, a collection of objects (such as pipes and walls), the horizon, the sky or any other suitable background. Infrared radiation in at least two wavelength bands, a first wavelength band λG and a second wavelength band λN, is emitted from the background. The characteristics of the scene 80 are such that it is absorbent, at least in part, and emitting of radiation in one of the wavelength bands and non-absorbent (and therefore non-emitting) of radiation in the other wavelength band. For example, the scene 80 may be absorbent and emitting of radiation in the first wavelength band (λG) and non-absorbent and non-emitting of radiation in the second wavelength band (λN). As a result, data acquired through a filter approximately centered about λG includes information about both the gas presence and the background emission. Similarly, data acquired through a filter approximately centered about λN includes information about the background emission, but does not include information about the gas presence. The algorithms mentioned above subsequently extract relevant gas cloud information from the acquired data.

The imaging itself is done by an infrared detector array 14. The detector array 14 is an uncooled detector array, such as, for example, a microbolometer type array. The detector array 14 may be positioned within a detector case 12 positioned within the device 1. Radiation from the scene 80 and the background 90 is focused onto the detector array 14 through a window 16 by collection optics 18 whose optical components are represented symbolically in Figure 1 by objective lens 20 and first and second wedge-shaped components 22 and 24. Note that "objective lens" 20 may actually be a set of one or more lenses that is represented in Figure 1 by a single lens. The collection optics 18 can be considered as an enclosure volume for maintaining the position and the orientation of the optical components. The device 1 can be considered as an enclosure volume, defined by internal walls 30, for maintaining the position and orientation of the collection optics 18 and the detector array 14. The window 16 and the objective lens 20 are preferably made of materials, such as, for example, germanium, silicon, zinc sulfide or zinc selenide, which are transparent in the infrared region.

The wedge-shaped components 22 and 24 are preferably implemented as transmitting plates which are transmissive to the wavelength bands of the infrared radiation from the scene 80 and the background 90. The objective lens 20 focuses radiation deflected by the wedge-shaped components 22 and 24 on the detector array 14 to form two simultaneous and separate images of the scene 80 on the background 90, each image being formed on one half of the detector surface, as shown in Figure 4.

For clarity of illustration, the image acquisition electronics associated with the detector array 14 are not shown in Figure 1.

The detector array 14 is divided into two non-overlapping regions, a first detector region 14a and a second detector region 14b. As should be apparent, each of the detector regions preferably includes a plurality of detector elements (not shown) corresponding to individual pixels of the imaged scene. The detector array 14 is divided into the two equal aforementioned regions by a dividing plane 32 in Figure 1, perpendicular to the detector surface and to the plane of the page, represented by the line 32 in Figure 4. The optical axis of the collection optics 18 lies in the dividing plane 32.

In a non-limiting example, Figure 1 includes the Cartesian coordinate system XYZ. In the non-limiting exemplary representation of the coordinate system XYZ in Figure 1, the detector plane is parallel to the YZ plane. Accordingly, the dividing plane 32 is parallel to the XZ plane and the optical axis is parallel to the X-axis. The wedge-shaped components 22 and 24 are wedge-shaped in the XY plane. Continued reference will be made to the non-limiting exemplary representation of the coordinate system XYZ in Figure 1 throughout this description. A front view of the detector plane and scene images is depicted in Figure 4.

The optical components of the collection optics 18 are arranged such that the numerical aperture of the collection optics 18 at the detector array 14 is effectively large. Having a large numerical aperture provides higher sensitivity of the detector array 14 to the radiation from the scene 80, and less sensitivity to radiation originating from within the internal walls 30 of the device 1, the collection optics 18, and the optical components themselves. Optical systems having a large numerical aperture have a correspondingly small f-number (defined as the ratio between the focal length and the aperture diameter of the optical system) at the detector. As will be discussed, the position of the wedge-shaped components 22 and 24 along the optical axis 32 relative to the objective lens 20 provides a numerical aperture of at least 1/3, corresponding to an f-number of less than 1.5 at the detector array 14.
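The quantities involved follow directly from the definitions above: f-number N = f/D, and, in the paraxial approximation implicit in the text, numerical aperture NA ≈ 1/(2N), so N < 1.5 corresponds to NA > 1/3. A small sketch with hypothetical lens values:

    def f_number(focal_length_mm, aperture_diameter_mm):
        # f-number: ratio of focal length to aperture diameter.
        return focal_length_mm / aperture_diameter_mm

    def numerical_aperture(n_f):
        # Paraxial approximation NA ~ 1 / (2 * N), matching the text's
        # pairing of NA >= 1/3 with an f-number of less than 1.5.
        return 1.0 / (2.0 * n_f)

    N = f_number(30.0, 25.0)          # hypothetical f = 30 mm, D = 25 mm
    print(N, numerical_aperture(N))   # -> 1.2, ~0.42 (satisfies f/# < 1.5)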

As will be detailed below, the components of the device 1 to be discussed are placed in the device 1 in a fixed position and are not movable, thereby attaining the above mentioned f-number and numerical aperture bounds with no moving parts.

Refer now to Figure 2, the traversal of incident rays from the scene 80 and the background 90 to the detector array 14. For clarity of illustration, the internal walls 30, the collection optics 18, the detector case 12, and the window 16 are not shown in Figure 2.

The broken line between the scene 80 and the device 1 signifies that the distance between the scene 80 and the device 1 as depicted in Figure 2 is not to scale. In general, the distance between the scene 80 and the device 1 is much larger than the size of the device 1 itself, and is typically on the order of tens or hundreds of meters. Additionally, the broken line signifies that the two bundles of rays 42a, 42d and 44a, 44d both originate from the entire scene and not from one half of the scene.

With continued reference to Figure 2, incident ray 42a is deflected by the first wedge-shaped component 22 resulting in a deflected ray 42b. The deflected ray 42b is focused by the objective lens 20 resulting in a focused ray 42c which is imaged on the first detector region 14a. For clarity of illustration, the rays 42a-42d are represented by continuous lines in Figure 2. Similarly, incident ray 44a is deflected by the second wedge-shaped component 24 resulting in a deflected ray 44b. The deflected ray 44b is focused by the objective lens 20 resulting in a focused ray 44c which is imaged on the second detector region 14b. For clarity of illustration, the rays 44a-44d are represented by dashed lines in Figure 2. Note that although only four incident rays 42a, 42d and 44a, 44d are depicted in Figure 2 (these are the marginal rays which define the field of view of the device in the plane of the cross section defined by the plane of the paper (XY plane)), it should be apparent that additional similar incident rays originating from the scene 80 are present and follow a path of traversal similar to the rays as described above. As such, reference to the incident rays 42a, 42d and 44a, 44d implicitly applies to all such similar incident rays originating from the scene 80 within the field of view.

Note that the incident rays 42a and 44d are parallel to each other, as are the incident rays 42d and 44a. Such a parallel relationship exists for additional pairs of incident rays (not shown) in which each ray of the pair is part of a different bundle. The parallel relationship is a result of each pair of incident rays originating from the same region of the scene 80 and background 90.

Accordingly, the incident rays which traverse the first wedge-shaped component 22 are imaged on the first detector region 14a, and the incident rays which traverse the second wedge-shaped component 24 are imaged on the second detector region 14b. As a result, the radiation from the same scene 80 and background 90 is imaged separately and simultaneously onto the detector regions 14a and 14b. This separation of the two images of the scene 80 and background 90 allows for a double filtering arrangement for gathering spectral information in order to measure and detect the gas.

In the absence of the wedge-shaped components 22 and 24, the imaging device (detector array 14 and objective lens 20) has a field of view which can be defined by a cone originating at or near the objective lens 20 and extending towards the scene 80. The field of view of such a device can equivalently be interpreted as the field of view of the objective lens 20 in combination with the detector 14. The distance and orientation of the wedge-shaped components 22 and 24 relative to the objective lens 20 is such that the field of view of the objective lens 20 in the vertical direction (XY plane) can be visualized as the angle between the deflected rays 42b and 44b.

The angles by which the incident rays 42a, 42d and 44a and 44d are deflected are a function of the angle of incidence, the apex angle of the wedge-shaped components 22 and 24, and the index of refraction of the material used to construct the wedge-shaped components 22 and 24. Accordingly, it is preferred that the above mentioned apex angle and material are selected such that the incident rays 42a, 42d and 44a and 44d are deflected by an angle which is approximately ¼ of the field of view of the objective lens 20. Such a deflection angle ensures that all of the deflected rays are incident on the objective lens 20 and are within the field of view of the device 1. Each of the wedge-shaped components 22 and 24, combined with the objective lens 20 and the detector array 14, defines a field of view. The field of view defined by the objective lens 20 and the first wedge-shaped component 22 is equal to the field of view defined by the objective lens 20 and the second wedge-shaped component 24. The field of view of the first wedge-shaped component 22 in the vertical direction (XY plane) can be visualized as the angle between the incident rays 42a and 42d. Similarly, the field of view of the second wedge-shaped component 24 in the vertical direction (XY plane) can be visualized as the angle between the incident rays 44a and 44d.
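In the standard thin-wedge (small-angle) approximation, the deviation of a ray is roughly (n - 1) times the apex angle, so the apex angle needed for a deviation of one quarter of the field of view can be estimated as sketched below. The field of view and index values are hypothetical; germanium, mentioned earlier as a candidate material, has n ≈ 4 in the LWIR:

    def wedge_apex_angle_deg(fov_deg, n):
        # Thin-wedge approximation: deviation ~ (n - 1) * apex angle.
        # Choose the apex so the deviation equals one quarter of the FOV.
        deviation_deg = fov_deg / 4.0
        return deviation_deg / (n - 1.0)

    # Hypothetical: 24-degree vertical FOV, germanium wedge (n ~ 4.0 in LWIR).
    print(wedge_apex_angle_deg(24.0, 4.0))   # -> 2.0 degrees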

The imaging device (the detector array 14 and the objective lens 20) has an f-number defined by the focal length f and aperture diameter D of the objective lens 20. The inclusion of the wedge-shaped components 22 and 24 may cause the f-number to increase if not designed properly. Therefore, the wedge-shaped components 22 and 24 should be included in a way that, while the same scene field of view is imaged on two separate halves of the detector, the small f-number is maintained. This can be accomplished if the field of view in one direction is halved (for example in the vertical direction) and the wedge-shaped components 22 and 24 are positioned at a minimum fixed distance d along the optical axis from the objective lens 20.

Positioning the wedge-shaped components 22 and 24 at a sufficiently large distance from the objective lens 20, in combination with the above mentioned deflection angles, allows for the low f-number (high numerical aperture) at the detector array 14 to be maintained. This corresponds to high optical throughput of the device 1. As a result, the same radiation from the scene is deflected by the wedge-shaped components 22 and 24 toward the objective lens 20 and imaged on the detector regions 14a and 14b through an f-number of the collection optics 18 which can be maintained close to 1 (f/1) without having to decrease the focal length f or increase the aperture diameter D.

As a result of positioning the wedge-shaped components 22 and 24 at the distance d, the vertical fields of view of the wedge-shaped components 22 and 24 are approximately half of the above mentioned vertical field of view of the objective lens 20. Note that the field of view of the device with the wedge-shaped components 22 and 24 in the horizontal direction (XZ plane) is the same as the field of view of the device without the wedge-shaped components (no need to compromise there).

Positioning the wedge-shaped components 22 and 24 too close to the objective lens 20 (i.e. d too small) would not allow for the separation of the two images of the same field of view (albeit halved) through the two different filtering components necessary for the detection of the presence or absence of a gas cloud while at the same time maintaining the low f-number for collection of the scene radiation. The distance d which provides such high optical throughput can be lower bounded by:

d ≥ D / (2·tan(Θ/4)) (1)

where D is the aperture diameter of the objective lens 20 and Θ is the vertical field of view of the objective lens 20.
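The typeset equation (1) did not survive reproduction in this copy; the form shown above is a reconstruction, following from the Θ/4 deflection having to displace each beam by half the aperture diameter over the distance d. Assuming that reconstructed form, the bound can be evaluated as follows, with hypothetical values:

    import math

    def min_wedge_distance_mm(aperture_mm, fov_deg):
        # Equation (1), assuming the reconstructed form
        # d >= D / (2 * tan(theta / 4)).
        theta = math.radians(fov_deg)
        return aperture_mm / (2.0 * math.tan(theta / 4.0))

    # Hypothetical: D = 25 mm aperture, 24-degree vertical field of view.
    print(min_wedge_distance_mm(25.0, 24.0))   # -> ~119 mm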

In order to help mitigate the effects of beam wander on the wedge-shaped components 22 and 24 while keeping their size to a minimum, the angle of each of the wedge-shaped components 22 and 24 relative to the optical axis must be designed accordingly. The distance d can also be increased beyond the minimum value described in equation (1) to mitigate the effects of beam wander.

The wedge-shaped components 22 and 24 are preferably positioned symmetrically about the optical axis, such that each is positioned at the same distance d from the objective lens 20, and each is positioned at the same angle relative to the optical axis. Such a design ensures that the same amount of radiation is imaged on the detector regions 14a and 14b via the objective lens 20 from the wedge-shaped components 22 and 24.

As a result of the arrangement of the optical components of the collection optics 18, the same scene 80 and background 90 is imaged on the detector regions 14a and 14b. As previously mentioned, the characteristics of the scene 80 are such that the scene 80 affects infrared radiation in the first wavelength band (λG) and does not affect the radiation in the second wavelength band (λN). The radiation from the scene 80 which is imaged onto the first detector region 14a only includes one of the wavelength bands. The radiation from the scene 80 which is imaged onto the second detector region 14b only includes the other one of the wavelength bands. This is accomplished by positioning filters, most preferably band pass filters, in the optical train.

With reference to the embodiment of the device 1 depicted in Figures 1 and 2, a first filter 26 and a second filter 28 are positioned relative to the respective wedge-shaped components 22 and 24.

Suppose, for example, that it is desired that the radiation from the scene 80 imaged on the first detector region 14a only includes radiation in the first wavelength band (λG), and the radiation from the scene 80 imaged on the second detector region 14b only includes radiation in the second wavelength band (λN). Accordingly, the first filter 26 eliminates radiation in spectral ranges outside of the first wavelength band (λG) and the second filter 28 eliminates radiation in spectral ranges outside of the second wavelength band (λN). Thus, the radiation from the scene 80 that is directed by the first wedge-shaped component 22 to be imaged on the first detector region 14a includes only radiation in the first wavelength band (λG), and the radiation from the scene 80 that is directed by the second wedge-shaped component 24 to be imaged on the second detector region 14b includes only radiation in the second wavelength band (λN).

In the embodiment of Figures 1 and 2, the filters 26 and 28 are not necessarily optical elements from the optics of the collection optics 18, but rather a coating on a first surface 22a of the first wedge-shaped component 22 and a first surface 24a of the second wedge-shaped component 24, respectively. The first surface 22a is the surface of the first wedge-shaped component 22 which is closest to the objective lens 20. Likewise, the first surface 24a is the surface of the second wedge-shaped component 24 which is closest to the objective lens 20.

Additionally, a second surface 22b of the first wedge-shaped component 22 and a second surface 24b of the second wedge-shaped component 24 may be coated with an antireflection coating. The second surfaces 22b and 24b are the respective surfaces of the wedge-shaped components 22 and 24 which are closest to the scene 80. The antireflection coating provides increased sensitivity of the device 1 to the radiation from the scene 80.

Refer now to Figures 3A-3B, an alternative positioning of the filters 26 and 28. Similar to the embodiment of Figures 1 and 2, the filters 26 and 28 are implemented as a coating, but in Figure 3A the coating is on the second surface 22b of the first wedge-shaped component 22. Similarly, in Figure 3B, the coating is on the second surface 24b of the second wedge-shaped component 24.

In the filter alternatives illustrated in Figures 3A and 3B, the first surfaces 22a and 24a may be coated with an antireflection coating. It is also noted that for clarity of illustration, the thickness of the coating for implementing the filters 26 and 28 is greatly exaggerated in Figures 1, 2, and 3A-3B.

As previously" discussed with reference to Figure 4, each image is formed on one half of the detector surface, also referred to as the detector plane. The two halves of the detector plane depicted in Figure 4 are shown as seen from the direction of the incoming radiation. Note that the image of the scene i formed and doubled on the detector surface after having passed through both filters separately, exemplified by the gas cloud of Figures I and 2, Both images on the detector regions 14a and 14b are upside dow with respect to the scene direction, as can be understood from the path of the various rays 42a-42f and 44a~44f depicted in Figure 2.

As should be apparent, combinations of the above mentioned filter implementations may be possible. For example, the first filter 26 may be implemented as a coating on the first surface 22a of the first wedge-shaped component 22, while the second filter 28 may be implemented as a coating on the second surface 24b of the second wedge-shaped component 24. Note that in any of the possible filter implementations, the first and second filters 26 and 28 are in fixed positions relative to the detector array 14 and the collection optics 18.

As previously discussed, the large numerical aperture and low f-number provide higher sensitivity of the detector array 14 to the radiation from the scene 80. However, changes in the environmental temperature surrounding the device 1 cause the emission of radiation originating from within the internal walls 30 of the imaging device 1, the optical system 18, and the optical components themselves to vary with time. This emission of radiation is referred to interchangeably as unwanted radiation. The unwanted radiation, in turn, leads to drifts in the imaged pixel signals, and to erroneous results in the gas path concentration of each pixel of the image of the scene as measured by the device 1 according to appropriate algorithms. Compensation for such drifts provides a means to ensure quantitative results of the gas distribution in the scene without using moving parts.

Refer now to Figure 5, a device 10 for reducing the effect of the unwanted radiation according to an embodiment of the present disclosure. The description of the structure and operation of the device 10 is generally similar to that of the device 1 unless expressly stated otherwise, and will be understood by analogy thereto. Ideally, the device 10 reduces the signal drift to a negligible amount, essentially correcting for the effect of the drift. Accordingly, the terms "correcting for", "compensating for" and "reducing", when applied to drift in imaged pixel signals, are used interchangeably herein.

For simplicity and disambiguation, the device 10 is hereinafter referred to as the imaging device 10. The term "imaging device" is used herein to avoid confusing the device 1 with the imaging device 10, and is not intended to limit the functionality of the imaging device 10 solely to imaging. The imaging device 10 may also include functionality for detection, measurement, identification and other operations relevant to infrared radiation emanating from a scene.

A specific feature of the imaging device 10 which is not present in the device 1 is image acquisition electronics 50 associated with the detector array 14. As shown in Figure 5, the image acquisition electronics 50 is electrically coupled to the detector array 14 for processing output from the detector in order to generate and record signals corresponding to the detector elements for imaging the scene 80. As will be discussed, the image acquisition electronics 50 is further configured to apply a correction to the generated scene pixel signals in order to reduce the drift in the generated scene pixel signals caused by the radiation originating from within the internal walls 30 of the imaging device 10, the optical system 18, and the optical components themselves.

Refer now to Figure 7, a block diagram of the image acquisition electronics 50. The image acquisition electronics 50 preferably includes an analog to digital conversion module (ADC) 52 electrically coupled to a processor 54. The processor 54 is coupled to a storage medium 56, such as a memory or the like. The ADC 52 converts analog voltage signals from the detector elements into digital signals. The processor 54 is configured to perform computations and algorithms based on the digital signals received from the ADC 52.
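
By way of a non-limiting software analogy only (not part of the embodiment), the data path of the image acquisition electronics 50 can be pictured as the following minimal Python sketch, in which the class name, the full-scale voltage and the bit depth are assumptions made for illustration:

    import numpy as np

    class ImageAcquisitionElectronics:
        # Sketch of the ADC 52 -> processor 54 -> storage medium 56 chain.
        def __init__(self):
            self.storage = {}  # stands in for the storage medium 56

        def adc_convert(self, analog_frame, full_scale=5.0, bits=14):
            # ADC 52: quantize analog detector voltages into digital counts.
            counts = np.round(analog_frame / full_scale * (2 ** bits - 1))
            return counts.astype(np.int32)

        def record(self, key, digital_frame):
            # Processor 54: generate and record signals for later processing.
            self.storage[key] = digital_frame

    electronics = ImageAcquisitionElectronics()
    frame = electronics.adc_convert(np.random.rand(480, 640) * 5.0)
    electronics.record("t0", frame)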

The processor 54 can be any number of computer processors including, but not limited to, a microprocessor, an ASIC, a DSP, a state machine, and a microcontroller. Such processors include, or may be in communication with, computer readable media, which stores program code or instruction sets that, when executed by the processor, cause the processor to perform actions. Types of computer readable media include, but are not limited to, electronic, optical, magnetic, or other storage or transmission devices capable of providing a processor with computer readable instructions.

As shown in Figure 5, the image acquisition electronics 50 may be positioned outside of the detector case 12. Alternatively, the image acquisition electronics 50 may be included as part of the detector array 14 and detector case 12 combination.

Another specific feature of the imaging device 10 that is different from the device 1 is the partition of the detector array 14 into more than the two separate regions of the detector array of the device 1 as shown in Figure 4. As shown in Figure 6A, the detector array 14 of the imaging device 10 is partitioned into three separate regions: a first detector region 14a, a second detector region 14b, and a third detector region 14c. The area of the third detector region 14c is significantly smaller than, or at least not larger than, the areas of the other two detector regions, and can be visualized as a strip extending across the center of the detector plane along the Z-axis.
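
Purely as an illustration of this partition (the array dimensions and the strip height below are assumptions, not taken from the embodiment), the three regions can be sketched as slices of a detector frame:

    import numpy as np

    def split_detector(frame, strip_rows=8):
        # Split a frame into two scene halves (14a, 14b) separated by a
        # central blind-pixel strip (14c) of an assumed height.
        mid = frame.shape[0] // 2
        half = strip_rows // 2
        region_14a = frame[:mid - half, :]            # first scene half
        region_14c = frame[mid - half:mid + half, :]  # blind-pixel strip
        region_14b = frame[mid + half:, :]            # second scene half
        return region_14a, region_14b, region_14c

    frame = np.zeros((480, 640))
    region_14a, region_14b, region_14c = split_detector(frame)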

The optical system, composed of the wedge-shaped components 22 and 24 and the objective lens 20, simultaneously images the scene 80 upside down in both regions 14a and 14b while projecting infrared radiation emitted by a surface 60 (e.g., a blackbody radiation source) onto the third detector region 14c. The surface 60 is in good thermal contact with the walls 30 of the device and is in the vicinity of the optical components, so that the temperature of the surface 60 can be assumed to be at all times at the temperature of the device walls 30 and optics 18, which in turn is affected by (and usually, especially when used in outdoor conditions, close to) the environment temperature. In other words, the signals of the detector elements of the third detector region 14c do not carry information from the scene 80, but rather carry information on the self-emitted radiation of the internal walls 30 and optics 18 of the device. Therefore, the pixel signals of the third detector region 14c can be used by the device 10 algorithms and electronics to correct for the unwanted changes to the signals of the detector regions 14a and 14b that are caused by the changing environment and not by the corresponding regions of the scene 80. The pixels of the third detector region 14c are referred to as "blind pixels". Additionally, a baffle or baffles may be positioned to prevent radiation from the scene 80 from reaching the third detector region 14c.

The above constitutes a third specific feature of the imaging device 10 which is different from the device 1, namely the inclusion of the blackbody radiation source 60 within the internal walls 30 of the imaging device 10. The blackbody radiation source 60 is positioned such that it emits radiation which is projected only onto the third detector region 14c, resulting in the blind pixels, as previously mentioned, producing signals which, as will be discussed in more detail below, are used to reduce the drift in the signals from the scene due to changing case and optics self-emission. The traversal of incident rays 64a and 64b from the blackbody radiation source 60 to the detector array 14 is shown in Figure 5. Note that the traversal of the rays as depicted in Figure 5 is not drawn to scale (due to drawing space constraints), and that the deflection angle between the rays 64a and 64b is approximately the same as the deflection angle between the rays 44d and 44e (Figure 2).

The blackbody radiation source 60 can be placed in various positions within the imaging device 10. Preferably, the blackbody radiation source 60 is placed in contact with the internal walls 30 of the imaging device 10 and outside of the optical system 18, and most preferably in proximity to the optical system 18. The placement of the blackbody radiation source 60 within the imaging device 10 is subject to the requirement that the radiation be focused by the optical system 18 onto only the third detector region 14c to generate the blind pixel signals.

In the non-limiting implementation of the imaging device 10 shown in Figure 5, the blackbody radiation source 60 is positioned such that the radiation from the blackbody radiation source 60 is directed by the second wedge-shaped component 24 through the objective lens 20 onto the third detector region 14c. Note that, in addition to the blackbody radiation source 60, an additional blackbody radiation source 70 can be placed in a symmetric position about the X-axis, such that the radiation from the blackbody radiation source 70 is directed by the first wedge-shaped component 22 through the objective lens 20 onto the third detector region 14c as well.

The process of reducing and/or correcting for the drift in the generated scene pixel signals is applied to all scene pixel signals. For clarity, the process will be explained with reference to correcting for the drift in a single scene pixel signal.

The optical components, the optical system 18, and the spaces between the internal walls 30 are assumed to be at a temperature TE, which is usually close to and affected by the temperature of the environment in which the imaging device 10 operates. As a result, the amount of radiation originating from the optical components and the optical system 18 is a direct function of the temperature TE.

Since the blackbody radiation source 60 (and 70, if present) is placed within the imaging device 10 and in good thermal contact with the device 10, the optical components 18 and the walls 30, the temperature of the blackbody radiation source 60 (TBB) is assumed to be the same as, or a function of, the temperature TE (i.e., TBB and TE are correlated). TBB can be measured by a temperature probe 62 placed in proximity to, or within, the blackbody radiation source 60.

A measured scene pixel signal S from a region of the scene can be expressed as the sum of two signal terms, S = S0 + SS, where S0 is a first signal term and SS is a second signal term. The first signal term S0 is the signal contribution to S corresponding to the radiation originating from the optical components, the optical system 18, and the walls 30 of the device 10. The second signal term SS is the signal contribution to S due to the radiation originating from the corresponding region of the scene 80 imaged on the pixel in question. Accordingly, the scene pixel signal S is the result of the combination of radiation originating from the device walls 30 and environment, the optical components and the optical system 18, and radiation from the scene 80, being imaged onto the two detector regions 14a and 14b.

Since the blackbody radiation source 60 is assumed to be at a temperature that is a direct function of the temperature TE, the radiation emitted by the blackbody radiation source 60 is representative of the radiation originating from the optical components, the optical system 18, the device walls 30 and the environment. Accordingly, a blind pixel signal SB may be assumed to also be a good representation of the contribution to the scene pixel signal due to the radiation originating from the environment, the optical components and the optical system 18.

As a result of the radiation originating from the optical components and the optical system 18 being a direct function of the temperature TE, the first signal term S0 (if the above assumptions are correct) is also a direct function of the temperature TE. This can be expressed mathematically as S0 = f1(TE), where f1(·) is a function.

Similarly, as a result of the blind pixel signal SB being assumed to be a good representation of the pixel signal contribution corresponding to the radiation originating from the optical components and the optical system 18, the blind pixel signal SB can also be assumed to be a direct function of the temperature TE of the walls 30, the environment and the optical system. This can be expressed mathematically as SB = f2(TE), where f2(·) is also a function.

Accordingly, since both the first signal term S0 and the blind pixel signal SB are functions of the same operating temperature TE, a correlation may exist between the first signal term S0 and the blind pixel signal SB. With knowledge of the correlation (if it exists), the first signal term S0 and the changes in time of S0 (referred to hereinafter as "scene pixel signal drifts") can be determined from the blind pixel signal SB and the changes in time of SB. Accordingly, under the above assumptions, the changes in time, or drifts, of the scene pixel signal S due to the environment can be removed and corrected for, in order to prevent gas quantity calculation errors.
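
Under these assumptions, the correction reduces to mapping a change in the blind pixel signal to a change in S0. The following is a minimal sketch of that idea, assuming a correlation function g (obtained by calibration, as described below) such that the drift of S0 is approximately g applied to the change of SB; the function and the numbers are illustrative only:

    def corrected_scene_signal(S_t, SB_t, SB_0, g):
        # S_t: scene pixel signal at time t; SB_t: blind pixel signal at
        # time t; SB_0: blind pixel signal at the reference time t0;
        # g: correlation function from calibration (process 900).
        drift = g(SB_t - SB_0)  # estimated change of S0 since t0
        return S_t - drift

    # Example with a hypothetical linear correlation g(x) = 0.8 * x:
    S_corrected = corrected_scene_signal(1520.0, 310.0, 300.0, lambda x: 0.8 * x)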

In the context of this document, the term "correlation", when applied to a relationship between sets of variables or entities, generally refers to a one-to-one relationship between the sets of variables. As such, a correlation between the first signal term S0 and the blind pixel signal SB indicates a one-to-one relationship between the first signal term S0 and the blind pixel signal SB at any temperature of the imaging device 10. This correlation is determined by a sequence of controlled measurements. The sequence of controlled measurements is performed prior to when the imaging device 10 is in operation in the field, and can be considered a calibration procedure or process to be performed during manufacturing of the device. For the purposes of this document, the imaging device 10 is considered to be in an operational stage when the radiation from the scene 80 is imaged by the detector array 14 and the drift in the generated imaged pixel signals is actively reduced by the techniques as will later be described.

Recall the assumption that the blackbody radiation source 60 is at a temperature that is a direct function of the temperature TE. According to this assumption, the blind pixel signal SB is assumed to be a good representation of the pixel signal contribution due to the radiation originating from the optical components and the optical system 18. Prior to determining the correlation function between the first signal term S0 and the blind pixel signal SB, it is first necessary to verify the validity of the above assumptions. Subsequent to the verification, the correlation function between the time changes of the first signal term S0 (scene pixel signal drifts) and the time changes of the blind pixel signal SB can be determined. Both the verification process and the process of determining the correlation function are typically conducted through experiment. In practice, only drifts, or unwanted changes of the imaged pixel signals over time, are to be corrected for, so the verification and determination of the correlations are only needed and performed between the differentials of S0 and SB, or their variations over time due to environment temperature variations.

Refer now to Figure 8, a flowchart of a process 800 for verifying the existence of a correlation between the environment temperature, the temperature of the blackbody radiation source 60 (and 70, if present) and the blind pixel signal SB. In step 801, the imaging device 10 is placed in a temperature controlled environment, such as a temperature chamber having a controllable and adjustable temperature, and is pointed at an external blackbody source at a fixed temperature T', so that the scene pixels of the detector regions 14a and 14b are exposed to unchanging radiation from the external blackbody. Such an external blackbody source is used in place of the scene 80 depicted in Figures 1, 2 and 5. In step 802, the temperature of the temperature chamber is set to an initial temperature T0. The temperature chamber and the imaging device 10 are let to stabilize to temperatures T0 and TE, respectively, by allowing an appropriate interval of time to pass.

Once the temperatures have stabilized, TBB (which may be practically equal to TE) is measured via the temperature probe 62 in step 804. In step 806, the blind pixel signal SB is measured via the image acquisition electronics 50. Accordingly, TBB and the blind pixel signal SB are measured at chamber temperature T0 in steps 804 and 806, respectively.

In step 808, the temperature of the temperature chamber is set to a different temperature T1. Similar to step 802, the temperatures of the temperature chamber and the imaging device 10 are let to stabilize to the temperature T1 and a new temperature TE, respectively, by allowing an appropriate interval of time to pass. Once the temperatures have stabilized, TBB is measured via the temperature probe 62 in step 810. In step 812, the blind pixel signal SB is measured via the image acquisition electronics 50. Accordingly, TBB and the blind pixel signal SB are measured at chamber temperature T1 in steps 810 and 812, respectively.

The process may continue over a range of chamber temperatures of interest, as shown by the decision step 813. For each selected chamber temperature, the blind pixel signal SB, TBB and TE are measured as in steps 804, 806, 810 and 812 above.
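
The measurement loop of steps 802-813 lends itself to scripting. In the sketch below, chamber_set, probe_read and blind_read are hypothetical interfaces to the temperature chamber, the temperature probe 62 and the image acquisition electronics 50, and the settling time is an arbitrary placeholder:

    import time

    def run_process_800(chamber_temps, chamber_set, probe_read, blind_read,
                        settle_seconds=3600):
        # Collect (TE, TBB, SB) triples over a range of chamber temperatures.
        records = []
        for TE in chamber_temps:
            chamber_set(TE)              # steps 802 / 808: set the chamber
            time.sleep(settle_seconds)   # allow temperatures to stabilize
            TBB = probe_read()           # steps 804 / 810: probe 62
            SB = blind_read()            # steps 806 / 812: blind pixels
            records.append((TE, TBB, SB))
        return records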

In step 814, the existence of a correlation between the environment temperature, the blind pixel signal SB and the temperature of the blackbody radiation source 60 (and 70, if present) is verified by analyzing the resultant measurements. For example, the TBB measurements from steps 804 and 810 can be plotted as a function of the operating temperatures TE established in steps 802 and 808. Similarly, the SB measurements from steps 806 and 812 can be plotted or otherwise visualized versus the range of operating temperatures TE established in steps 802 and 808. An example of plots for executing step 814 is depicted in Figures 11A and 11B.

Referring first to Figure 11A, an example of plots of the measurements of the operating temperatures (TE), the blind pixel signal (SB), and the blackbody radiation source temperature (TBB, measured via the temperature probe 62) is depicted. The plots shown in Figure 11A are intended to serve as illustrative examples, and should not be taken as limiting the scope or implementation of the process 800. Note that the x-axis in Figure 11A is designated as "time (t)", as should be apparent due to the variation of the operating temperatures (TE) as time (t) goes by. Also note that the example plots shown in Figure 11A include two y-axes. The first y-axis (shown on the left side of Figure 11A) is designated as "temperature" and corresponds to the operating temperatures (TE) and the blackbody radiation source temperature (TBB). The second y-axis (shown on the right side of Figure 11A) is designated as "signal counts" and is the measured output of the ADC 52 corresponding to the blind pixel signal (SB).

If there is a linear (or any other one-to-one) relationship between the three entities TE, TBB, and SB, the above discussed assumptions are upheld as valid, and therefore there exists a correlation between the temperatures TE, TBB, and the blind pixel signal SB.

Referring now to Figure 11B, the recognition of such a linear relationship can be shown by alternatively plotting the measurements depicted in Figure 11A. As should be apparent, the example plots shown in Figure 11B show the blackbody radiation source temperature (TBB) and the blind pixel signal (SB) signal counts versus the temperature TE, which, as previously discussed, is the environment temperature. Accordingly, the x-axis in Figure 11B is designated as "environment temperature". As in Figure 11A, Figure 11B also includes two y-axes. The first y-axis (shown on the left side of Figure 11B) is designated as "temperature" and corresponds to the blackbody radiation source temperature (TBB). The second y-axis (shown on the right side of Figure 11B) is designated as "signal counts" and is the measured output of the ADC 52 corresponding to the blind pixel signal (SB).

Similar to the plots shown in Figure 11A, the plots shown in Figure 11B are intended to serve as illustrative examples, and should not be taken as limiting the scope or implementation of the process 800. As can be clearly seen in the illustrative example depicted in Figure 11B, a linear relationship of non-zero slope (which is an example of a one-to-one relationship) exists between the three entities TE, TBB, and SB, thus implying that the three entities are correlated.
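
Step 814 can also be checked numerically rather than graphically. A simple sketch, assuming linearity is an adequate model of the one-to-one relationship, fits a straight line to each pair of entities and inspects the slope and the goodness of fit; the sample values are illustrative only:

    import numpy as np

    def is_linear(x, y, r2_min=0.99):
        # Verify an approximately linear relationship of non-zero slope
        # via the coefficient of determination of a degree-1 fit.
        x, y = np.asarray(x), np.asarray(y)
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        r2 = 1.0 - residuals.var() / y.var()
        return slope != 0.0 and r2 >= r2_min

    TE  = [10.0, 20.0, 30.0, 40.0]          # chamber setpoints
    TBB = [10.2, 20.1, 30.3, 40.2]          # probe 62 readings (illustrative)
    SB  = [1000.0, 1210.0, 1405.0, 1615.0]  # blind pixel counts (illustrative)
    assert is_linear(TE, TBB) and is_linear(TE, SB)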

Refer now to Figure 9, a flowchart of a process 900 for determining a correlation between the drifts of scene pixel signals and the changes of the blind pixel signals SB due to changes in environment temperature. Similar to the process 800, before performing the process 900, the imaging device 10 is placed in the temperature chamber. The imaging device 10 is also pointed at a source of infrared radiation representing and simulating a scene during the operation of the device 10, most conveniently a blackbody source at a known and fixed temperature. The blackbody may be positioned inside the temperature chamber, or outside of the temperature chamber and measured by the imaging device 10 through an infrared transparent window. In the process 900, measurements of the scene pixel signal S and the blind pixel signal SB are made via the image acquisition electronics 50.

In step 901 (similar to step 801 above), the device 10 is retained in the temperature chamber and pointed at the external blackbody source, which is set to a fixed temperature T'. In step 902, the temperature of the temperature chamber is set to an initial temperature T0. The chamber and the device 10 are let to stabilize at the temperature T0 by waiting an appropriate period of time. In step 904, the imaged pixel signal S and the blind pixel signal SB are measured after the temperature of the imaging device 10 reaches stabilization at T0.

In step 906, the temperature of the temperature chamber is set to a new temperature T1, and the external blackbody is maintained at the temperature T'. The chamber and the device 10 are let to stabilize at the temperature T1 by waiting an appropriate period of time. In step 908, the scene pixel signal S and the blind pixel signal SB are measured after the temperature of the imaging device 10 reaches stabilization at T1.

In step 910, the imaged pixel signal S measured in step 904 is subtracted from the imaged pixel signal S measured in step 908. The result of step 910 yields the temporal drift of the imaged pixel signal due to the change in the temperature of the temperature chamber. Also in step 910, the blind pixel signal SB measured in step 904 is subtracted from the blind pixel signal SB measured in step 908.

Similar to the process 800, the process 900 may continue over a range of chamber temperatures of interest, as shown by decision step 912. For each selected chamber temperature, the imaged pixel signal S measured in step 904 is subtracted from the imaged pixel signal S measured at the selected temperature, and the blind pixel signal SB measured in step 904 is subtracted from the blind pixel signal SB measured at the selected temperature. This procedure can be performed over the entire operating temperature range of the imaging device.

In step 914, the resultant differences in the scene pixel signals obtained in step 910 are plotted as a function of the blind pixel differences obtained at each chamber temperature. In step 916, the correlation function is determined by analyzing the results of the plot obtained in step 914. Numerical methods, such as, for example, curve-fitting, least-squares, or other suitable methods, can be used to further facilitate the determination of the correlation function.
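
Steps 914-916 amount to fitting a curve to the pairs of differences. A minimal least-squares sketch, assuming the correlation is well approximated by a low-order polynomial (degree 1 here) and using illustrative difference values:

    import numpy as np

    def fit_correlation(blind_diffs, scene_diffs, degree=1):
        # Fit the scene pixel drift as a function of the blind pixel
        # difference (steps 914-916); returns a callable correlation function.
        coeffs = np.polyfit(blind_diffs, scene_diffs, degree)
        return lambda x: np.polyval(coeffs, x)

    blind_diffs = [0.0, 55.0, 102.0, 160.0]  # SB differences per chamber step
    scene_diffs = [0.0, 44.0, 81.0, 129.0]   # S differences per chamber step
    g = fit_correlation(blind_diffs, scene_diffs)

Because the fitted function is an ordinary polynomial, evaluating it outside the measured range also gives the interpolation and extrapolation behavior mentioned below for step 918.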

As should be apparent, the resulting correlation function can be interpolated and extrapolated to cover operating temperature ranges not measured during the execution of the processes 800 and 900. In step 918, the correlation function determined in step 916 is stored in a memory coupled to the processor 54, such as, for example, the storage medium 56. Note that the typical environment temperature variations used during the execution of the processes 800 and 900 may depend on various factors such as, for example, the location of the imaging device 10 when in the operational stage and the intended specific use of the imaging device 10 when in the operational stage. For example, when the imaging device 10 is used for monitoring industrial installations and facilities for gas leakages, the temperature variations applied during the execution of the processes 800 and 900 are typically in the range of tens of degrees.

As a result of the correlation function determined by the process 900, during the operation of the imaging device 10, signal drifts of the measured scene pixel signals can be compensated in real time while the temperature of the environment changes. The process of compensating and/or correcting for the signal drifts during operation of the imaging device 10 is detailed in Figure 10.

Refer now to Figure 10, a flowchart of a process 1000 for correcting for the signal drifts in the imaged pixel signal S caused by environment temperature changes while the device 10 is operational in the field. In steps 1002-1014, the device 10 is operational in the field and monitors a scene in an industrial environment, automatically and without human intervention.

In step 1002, the scene pixel signal S is measured and stored at an initial time t0. The scene pixel signal measured at time t0 may be stored in the storage medium 56 or in a temporary memory coupled to the processor 54. In step 1004, the blind pixel signal SB is measured at the same initial time t0. In step 1006, the scene pixel signal S is measured at a subsequent time t1 after the initial time t0. In step 1008, the blind pixel signal SB is measured at the same subsequent time t1.

In step 1010, the blind pixel signal SB measured in step 1004 is subtracted from the blind pixel signal SB measured in step 1008. The drift of the scene pixel signal that occurred between the measurement times t0 and t1 (due to the change in the environment temperature) is then determined from the correlation function of signal differences determined and stored in the process 900: the resultant difference in the blind pixel signal measurements is substituted into the correlation function to yield the drift of the scene pixel signal.

In step 1012, the scene pixel signal S measured in step 1006 is modified by subtracting from it the drift value determined in step 1010. In step 1014, the scene pixel signal modified in step 1012 is used to assess the presence or absence of the gas of interest in the corresponding scene region, and to calculate the gas path concentration if the gas is present. As should be apparent, steps 1006-1014 can be repeated, as needed, for additional measurements by the device 10 of the scene pixel signals for the detection and path concentration of the gas. This is shown by decision step 1016. Accordingly, if additional scene pixel signal measurements are needed, the process 1000 returns to step 1006 (at a new subsequent time). If no additional scene pixel signal measurements are needed, the process ends at step 1018.
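
Assembling steps 1002-1016 for a single pixel gives a loop of the following shape; read_scene and read_blind are hypothetical readout helpers, g is the stored correlation function of signal differences, and assess_gas stands in for the gas detection and path concentration algorithms:

    def run_process_1000(read_scene, read_blind, g, assess_gas, n_frames):
        # Operational drift correction (steps 1002-1016) for one pixel.
        S_ref = read_scene()    # step 1002: scene pixel at time t0 (stored;
                                # the correction uses only blind differences)
        SB_ref = read_blind()   # step 1004: blind pixel at time t0
        for _ in range(n_frames):
            S = read_scene()    # step 1006: scene pixel at a later time
            SB = read_blind()   # step 1008: blind pixel at the same time
            drift = g(SB - SB_ref)    # step 1010: drift from correlation
            S_corrected = S - drift   # step 1012: remove the drift
            assess_gas(S_corrected)   # step 1014: presence / concentration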

Note that, as a result of the structure and operation of the device 10 when in the operational stage, the radiation from the blackbody source 60 (and 70, if present) is projected onto the third detector region 14c continuously over the duration for which the radiation from the scene 80 is focused onto the detector regions 14a and 14b. This is required by the process, and results in a reduced frequency of shutter opening and closing when in the operational stage, and in a more accurate determination and quantification of the relevant gas present in the scene.

Note that the blind pixel signal used to correct the drift in an imaged pixel signal is typically, and preferably, the blind pixel signal associated with the blind pixel that is positioned above or below the associated imaged pixel. In other words, the blind pixel signal used to correct the drift in an imaged pixel signal is preferably the blind pixel signal associated with the detector element closest in position to the detector element associated with the imaged pixel signal. For example, as shown in Figure 6B, the blind pixel 14c-1 is used to correct for the drift in imaged pixel 14b-1. Likewise, the blind pixel 14c-2 is used to correct for the drift in imaged pixel 14a-1.
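
This nearest-blind-pixel pairing can be expressed as: for a scene pixel at a given row and column, take the blind pixel in the same column at the strip row closest to the scene row. The geometry in the sketch (row/column indexing, strip location) is an assumption for illustration:

    def nearest_blind_pixel(scene_row, scene_col, blind_rows):
        # Pick the blind pixel closest to the given scene pixel: same
        # column, nearest row within the blind-pixel strip.
        best_row = min(blind_rows, key=lambda r: abs(r - scene_row))
        return best_row, scene_col

    # Blind strip assumed to occupy rows 236-243 of a 480-row detector:
    row, col = nearest_blind_pixel(100, 320, range(236, 244))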

As mentioned above, the processes 800, 900 and 1000 were explained with reference to correcting for the drift in a single imaged pixel signal. As previously mentioned, the same processes may be performed for each of the imaged pixel signals, and may be performed in parallel. The process for correcting for the drift may be supplemented by known methods, such as, for example, non-uniformity correction (NUC), in order to further reduce and correct for the effect of the signal drift. As a result of the drift correction via the processes 800, 900 and 1000 described above, the supplemental NUC method is performed at a reduced frequency. The frequency of operation of the supplemental NUC method is typically in the range of once per hour to once per day.

It will be appreciated that the above descriptions are intended only to serve as examples, and that many other embodiments are possible within the scope of the present invention as defined in the appended claims.