Title:
AN IMAGE PROCESSING SYSTEM AND METHOD THEREOF
Document Type and Number:
WIPO Patent Application WO/2023/046406
Kind Code:
A1
Abstract:
A method of processing images for eye tracking is disclosed. The method comprises capturing at least one image, by way of an imaging module, when an illumination module is operating in an ON state, and consecutively capturing at least one image, by way of an imaging module, when the illumination module is operating in an OFF state. The method includes receiving and processing, by way of an image processing unit, the at least one image captured when the illumination module is operating in the ON state; and the at least one image captured when the illumination module is operating in the OFF state, such that one or more resultant images suitable for eye tracking are obtained. The one or more resultant image(s) comprise(s) image information of at least one eye and of a selected range of wavelength. A system and a computer program product are also disclosed.

Inventors:
CONG WEIQUAN (SG)
SUMAN SHAILABH (SG)
GUO HENG (SG)
Application Number:
PCT/EP2022/073654
Publication Date:
March 30, 2023
Filing Date:
August 25, 2022
Assignee:
CONTINENTAL AUTOMOTIVE TECH GMBH (DE)
International Classes:
G06F3/01; B60K35/00; G02B27/00; G06T5/50; G06V40/18
Domestic Patent References:
WO2017134918A12017-08-10
WO2021087573A12021-05-14
Foreign References:
US20180270436A12018-09-20
EP3187100A12017-07-05
US7522344B12009-04-21
Attorney, Agent or Firm:
CONTINENTAL CORPORATION (DE)
Claims:
Patent claims

1. A method of processing images for eye tracking, the method comprising: capturing at least one image, by way of an imaging module, when an illumination module is operating in an ON state; and consecutively capturing at least one image, by way of an imaging module, when the illumination module is operating in an OFF state; characterized by: receiving and processing, by way of an image processing unit, the at least one image captured when the illumination module is operating in the ON state and the at least one image captured when the illumination module is operating in the OFF state, such that one or more resultant images suitable for eye tracking are obtained, wherein the one or more resultant images comprise image information

- of at least one eye captured in at least two selected ranges of wavelengths, and the method further comprises: subtracting a first pixel value from a first image and a second pixel value from a second image for yielding at least one full resolution resultant image comprising image information in at least one selected range of wavelengths.

2. The method of claim 1, wherein the image information of the one or more resultant images comprises at least one range of wavelengths of the group consisting of:

• 380 nm to 800 nm in a red, blue, green (RGB) range;

• 700nm to 1100nm in a near-infrared (NIR) range; and

• 400nm to 700nm in a visible light range.

3. The method of claims 1 to 2, further comprising: determining, by way of the image processing unit, a first pixel intensity value (P1) from the at least one image captured when the at least one illumination module is operating in the ON state; and determining, by way of the image processing unit, a second pixel intensity value (P2) from the at least one image captured consecutively when the at least one illumination module is operating in the OFF state.

4. The method of claim 3, further comprising: calculating a ratio between an average value of the first pixel intensity value (P1) from the at least one image captured when the at least one illumination module is operating in the ON state and an average value of the second pixel intensity value (P2) from the at least one image captured consecutively when the at least one illumination module is operating in the OFF state; and controlling an analog dimming function of the imaging module in response to the ratio calculated.

5. The method of claims 1 to 4, further comprising: obtaining the one or more resultant images comprising an image in a near-infrared (NIR) wavelength by subtracting the first pixel intensity value (P1) determined according to the at least one image captured when the illumination module is operating in the ON state from the second pixel intensity value (P2) determined according to the at least one image captured when the illumination module is operating in the OFF state.

6. The method according to any one of the preceding claims, further comprising aligning a first image frame captured and a second image frame captured consecutively by: determining, by way of the image processing unit, at least one identical feature between the first image frame captured and the second image frame captured consecutively; and matching the at least one identical feature determined between the first image frame captured and the second image frame captured consecutively, prior to obtaining the one or more resultant images.

7. The method according to any one of the preceding claims, wherein the eye tracking is selected from the group consisting of: estimation of line of sight of a user, for purposes of virtual reality (VR) applications; estimation of eye position of a user, for purposes of augmented reality (AR) applications; and estimation of a state of a user, for purposes of driver monitoring applications.

8. The method according to any one of the preceding claims, further comprising: obtaining the one or more resultant images comprising an image in a red, blue, green (RGB) wavelength and an infrared (IR) wavelength by capturing the at least one image, by way of the imaging module, when the illumination module is operating in the ON state; and consecutively obtaining the one or more resultant images comprising an image in a red, blue, green (RGB) wavelength and a visible light wavelength by capturing the at least one image, by way of the imaging module, when the illumination module is operating in the OFF state.

9. The method according to any one of the preceding claims, further comprising: subtracting, by way of the image processing unit, a visible light wavelength from the one or more resultant images obtained.

10. An image processing system for processing images for an eye tracking function, the system comprising: an illumination module operable to operate between an ON state and an OFF state; an imaging module operable to capture at least one image when the illumination module operates in the ON state, and to consecutively capture at least one image when the illumination module operates in the OFF state; and a processing unit operable to process information of the at least one image captured; characterized in that: the image processing unit is further operable to receive and process the at least one image captured when the illumination module is operating in the ON state and the at least one image captured when the illumination module is operating in the OFF state, such that one or more resultant images suitable for eye tracking are obtained, wherein the one or more resultant images comprise image information

- of at least one eye captured in

- at least two selected ranges of wavelengths.

11. The system of claim 10, wherein the imaging module is selected from the group consisting of: an image sensor operable to capture images in a combination of RGB wavelength and IR wavelength; and a global shutter sensor.

12. The system of claims 10 to 11, wherein the image information of the one or more resultant images comprises at least one range of wavelengths of the group comprising:

• 380 nm to 800 nm in a red, blue, green (RGB) range;

• 700nm to 1100nm in a near-infrared (NIR) range; and

• 400nm to 700nm in a visible light range.

13. The system of claims 10 to 12, wherein the illumination module is selected from the group consisting of:

• light-emitting diode (LED); and

• vertical cavity surface emitting laser (VCSEL).

14. A computer program product comprising instructions to cause the image processing system of claims 10 to 13 to execute the steps of the method of claims 1 to 10.

15. A computer-readable medium having stored thereon the computer program product of claim 14.

Description:
AN IMAGE PROCESSING SYSTEM AND METHOD THEREOF

TECHNICAL FIELD

This disclosure relates to image processing and, more particularly, to a method, system and device for processing images for eye tracking, commonly used in motor vehicle applications such as driver monitoring.

BACKGROUND

Driver monitoring systems (DMS) have been used in the automotive industry for some years to determine the status of vehicle operators. A DMS typically identifies a driver’s facial characteristics, for example eye movement or head position, to determine the status of the operator.

Increasingly, other types of automotive monitoring systems such as cabin monitoring systems (CMS) are required. A CMS monitors the entire passenger cabin to determine, for example, the number of passengers on board, intruders while the vehicle is parked, and/or potential attacks such as robbery. Such applications require image analysis to identify different types of objects within the passenger cabin.

Due to the nature of lighting conditions within the passenger cabin of a motor vehicle, it is challenging to obtain clear, full resolution images for such analysis. Further, different types of objects within a passenger cabin have different textures and depths, which increases the complexity of identifying them.

There is therefore a need to provide a method and system for processing images suitable for eye tracking, that overcomes, or at least ameliorates, the problems described above. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.

SUMMARY

A purpose of this disclosure is to ameliorate the problem of obtaining full resolution images for eye tracking, by providing the subject-matter of the independent claims.

Further, a purpose of this disclosure is to ameliorate the problem of identifying different types of objects captured in a monitoring system, by providing the subject-matter of the independent claims.

The objective of this disclosure is solved by a method of processing images for eye tracking, the method comprising: capturing at least one image, by way of an imaging module, when an illumination module is operating in an ON state; and consecutively capturing at least one image, by way of an imaging module, when the illumination module is operating in an OFF state; characterized by: receiving and processing, by way of an image processing unit, the at least one image captured when the illumination module is operating in the ON state and the at least one image captured when the illumination module is operating in the OFF state, such that one or more resultant images suitable for eye tracking are obtained, wherein the one or more resultant images comprise image information

- of at least one eye and of a selected range of wavelength.

The above-described aspect of this disclosure yields a method of processing images suitable for eye tracking, in which the image information obtained is selected from a range of wavelengths, such that specific details of the image information may be analysed for eye tracking.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, in which: the image information of the one or more resultant images comprises at least one range of wavelengths of the group consisting of:

• 380nm to 800nm in a red, blue, green (RGB) range;

• 700nm to 1100nm in a near-infrared (NIR) range; and

• 400nm to 700nm in a visible light range.

The advantage of the above aspect of this disclosure is to select image information falling within a selected scope of wavelength ranges, such that the one or more resultant images processed will only contain image information between 380nm and 1100nm, and preferably between (1) 380nm to 800nm in an RGB range; (2) 700nm to 1100nm in an NIR range; and (3) 400nm to 700nm in a visible light range.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: determining, by way of the image processing unit, a first pixel intensity value (P1) from the at least one image captured when the at least one illumination module is operating in the ON state; and determining, by way of the image processing unit, a second pixel intensity value (P2) from the at least one image captured consecutively when the at least one illumination module is operating in the OFF state. The advantage of the above aspect of this disclosure is to capture images alternately with controllable amplitude, such that image information is consecutively captured in different selected wavelengths.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: calculating a ratio between an average value of the first pixel intensity value (P1) from the at least one image captured when the at least one illumination module is operating in the ON state and an average value of the second pixel intensity value (P2) from the at least one image captured consecutively when the at least one illumination module is operating in the OFF state; and controlling an analog dimming function of the imaging module in response to the ratio calculated.

The advantage of the above aspect of this disclosure is to control analog dimming in response to a calculated ratio, where the ratio is between an average value of a first pixel value of the first image captured and an average value of a second pixel value of the second image consecutively captured. An advantage of obtaining this calculated ratio is that the brightness of the illumination module can be adjusted according to the calculated ratio, as a form of feedback between a previous frame and a subsequent frame, to achieve precise analog dimming control.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: obtaining the one or more resultant images comprising an image in a near-infrared (NIR) wavelength by subtracting the first pixel intensity value (P1) determined according to the at least one image captured when the illumination module is operating in the ON state from the second pixel intensity value (P2) determined according to the at least one image captured when the illumination module is operating in the OFF state.

The advantage of the above aspect of this disclosure is to obtain one or more resultant images containing image information in the NIR wavelength, or within a wavelength range of 700nm to 1100nm, by applying an image subtraction method. An advantage of obtaining a full resolution resultant image containing image information in the NIR wavelength, or 700nm to 1100nm, is to perform further eye tracking analysis in the aforesaid wavelength range.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: aligning a first image frame captured and a second image frame captured consecutively by: determining, by way of the image processing unit, at least one identical feature between the first image frame captured and the second image frame captured consecutively; and matching the at least one identical feature determined between the first image frame captured and the second image frame captured consecutively, prior to obtaining the one or more resultant images. One possible realization of this alignment is sketched below.
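For illustration only, the following is a minimal sketch of one way to implement the feature matching and alignment described above, using OpenCV's ORB features and a RANSAC homography. Neither ORB nor homography warping is prescribed by this disclosure; they are assumptions made for the sketch, and the function name and parameters are hypothetical.

```python
import cv2
import numpy as np

def align_consecutive_frames(first_frame, second_frame, max_features=500):
    """Warp second_frame onto first_frame by matching identical features.

    Both inputs are assumed to be 8-bit grayscale images of equal size.
    """
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(first_frame, None)
    kp2, des2 = orb.detectAndCompute(second_frame, None)

    # Brute-force Hamming matching with cross-check to keep reliable pairs
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    dst_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a homography mapping the second frame onto the first
    homography, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    height, width = first_frame.shape[:2]
    return cv2.warpPerspective(second_frame, homography, (width, height))
```

The aligned second frame can then be used in the pixelwise subtraction described elsewhere in this disclosure.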

The advantage of this alignment aspect of this disclosure is to process images captured by the imaging module by feature matching and image alignment, to produce full resolution resultant images. In particular, this aspect is advantageous for sensing fast-moving objects.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, in which: the eye tracking is selected from the group consisting of: estimation of line of sight of a user, for purposes of virtual reality (VR) applications; estimation of eye position of a user, for purposes of augmented reality (AR) applications; and estimation of a state of a user, for purposes of driver monitoring applications.

The advantage of the above aspect of this disclosure is to yield resultant images suitable for eye tracking, for purposes of virtual reality (VR) applications; augmented reality (AR) applications and driver monitoring applications.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: obtaining the one or more resultant images comprising an image in a red, blue, green (RGB) wavelength and an infrared (IR) wavelength by capturing the at least one image, by way of the imaging module, when the illumination module is operating in the ON state; and consecutively obtaining the one or more resultant images comprising an image in a red, blue, green (RGB) wavelength and a visible light wavelength by capturing the at least one image, by way of the imaging module, when the illumination module is operating in the OFF state.

The advantage of the above aspect of this disclosure is to obtain one or more resultant images containing image information in the RGB and IR wavelengths, by controlling the status of the illumination module, to process at least two images consecutively in at least two different types of wavelength, more particularly within a range of 380nm to 800nm. An advantage of obtaining a full resolution resultant image containing image information in the RGB and IR wavelengths, or within a range of 380nm to 1100nm, is to perform further eye tracking analysis in the aforesaid wavelength range.

Preferred is a method of processing images for eye tracking as described above or as described above as being preferred, further comprising: subtracting, by way of the image processing unit, a visible light wavelength from the one or more resultant images obtained.

The advantage of the above aspect of this disclosure is to apply an image subtraction method to remove the visible light wavelength, such that the one or more resultant images contain image information within a selected range of wavelengths without visible light. An advantage of this aspect of this disclosure is the suppression of undesirable noise signals.

The objective of this disclosure is solved by an image processing system for processing images for eye tracking function, the system comprising: an illumination module operable to operate between an ON state and an OFF state; an imaging module operable to capture at least one image when the illumination module operates in the ON state; and to consecutively capture at least one image when the illumination module operates in the OFF state; and a processing unit operable to process information of the at least one image captured; characterized in that: the image processing unit is further operable to obtain one or more resultant images in a selected group of wavelength, such that the one or more resultant images are suitable for eye tracking.

The above-described aspect of this disclosure yields an image processing system for processing images suitable for eye tracking, in which the image information obtained is selected from a range of wavelengths, such that specific details of the image information may be analysed for eye tracking.

Preferred is an image processing system as described above or as described above as being preferred, in which: the imaging module is selected from the group consisting of: an image sensor operable to capture images in a combination of RGB wavelength and IR wavelength; and a global shutter sensor.

The advantage of the above aspect of this disclosure is to yield an image processing system using only a single image sensor configuration suitable for capturing full resolution images in multiple wavelengths. Advantageously, this yields an image processing system which requires minimal hardware.

Preferred is an image processing system as described above or as described above as being preferred, in which: the image information of the one or more resultant images comprises at least one range of wavelengths of the group comprising:

• 380 nm to 800 nm in a red, blue, green (RGB) range;

• 700nm to 1100nm in a near-infrared (NIR) range; and

• 400nm to 700nm in a visible light range.

The advantage of the above aspect of this disclosure yields multiple images, each image containing information in a different wavelength, which may be processed by the image processing system disclosed herein.

Preferred is an image processing system as described above or as described above as being preferred, in which: the illumination module is selected from the group consisting of:

• light-emitting diode (LED); and

• vertical cavity surface emitting laser (VCSEL).

The advantage of the above aspect of this disclosure yields different types of illumination modules suitable for use in the image processing system disclosed herein.

The objective of this disclosure is solved by a computer program product comprising instructions to cause the image processing system as described above or as described above as being preferred to execute the steps of the method as described above or as described above as being preferred.

The advantage of the above aspect of this disclosure is to yield a computer program product to cause the image processing system to execute the steps of the method as disclosed herein for processing images captured.

The objective of this disclosure is solved by a computer-readable medium having stored thereon the computer program product as described above or as described above as being preferred.

The advantage of the above aspect of this disclosure is to yield a computer-readable medium for storing the computer program product, to cause the image processing system to execute the steps of the method as disclosed herein for processing images captured.

BRIEF DESCRIPTION OF DRAWINGS

Other objects and aspects of this disclosure will become apparent from the following description of embodiments with reference to the accompanying drawings in which:

FIG. 1 shows a system block diagram in accordance with a preferred embodiment.

FIG. 2 shows exemplary image frame intervals of images captured in accordance with a preferred embodiment.

FIG. 3 shows a flowchart 300 for processing images for eye tracking in a preferred embodiment.

FIGS. 4a-c show an exemplary frame-by-frame image subtraction process in a preferred embodiment.

In the various embodiments described by reference to the above figures, like reference signs refer to like components in several perspective views and/or configurations.

DETAILED DESCRIPTION OF EMBODIMENTS

The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses of the disclosure. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the disclosure or the following detailed description. It is the intent of this disclosure to present an image processing method and system which yields full resolution images for eye tracking purposes.

Hereinafter, the terms “first”, “second”, “third” and the like used in the context of this disclosure may refer to different elements in accordance with various exemplary embodiments, but are not limited thereto. The expressions may be used to distinguish one element from another element, regardless of sequence or importance. By way of an example, “a first image” and “a second image” may indicate different images regardless of order or importance. On a similar note, a first image may be referred to as the second image and vice versa without departing from the scope of this disclosure.

FIG. 1 shows a system block diagram 100 in accordance with a preferred embodiment. In particular, system 100 shows an image processing system for eye tracking functions. In an embodiment, system 100 includes a control module 102 for executing image processing functions and an imaging module 104 for capturing images. The imaging module 104 further includes an image sensor 106 with a lens or imaging optics 110 for receiving light rays, and an illumination module 108 having illumination optics 112. The imaging module 104 may include circuitry, for example a driver circuit and/or a digital-to-analog converter (DAC) circuit, for driving the imaging module 104. The control module 102 and imaging module 104 may be in electrical communication.

As shown in FIG. 1, the control module 102 is a system-on-chip (SoC) operable to control the entire image processing system 100 and execute algorithms. The image sensor 106 may be a global shutter type image sensor. Examples of imaging optics 110 include lenses and/or optical filters suitable for working with the selected wavelengths operable by the image sensor 106. As shown in FIG. 1, the image processing system 100 further includes the illumination module 108. Suitable types of illumination module 108 include light emitting diodes (LED) and vertical cavity surface emitting lasers (VCSEL). The image processing system 100 may further include illumination optics 112. Examples of illumination optics 112 include diffusers or reflectors. In an embodiment, the image sensor 106 of the imaging module is operable to capture a first image when the illumination module 108 is operating in an ON state, and to consecutively capture at least one second image when the illumination module 108 is operating in an OFF state. By alternately switching the illumination module 108 between an ON mode and an OFF mode, the image sensor 106 will receive sensor signals within a certain wavelength at a predetermined time interval. This process of capturing a first image when the illumination module 108 is operating in an ON state, and consecutively capturing at least one second image when the illumination module 108 is operating in an OFF state, may be adjusted or designed according to exposure time and gain settings based on image processing requirements. In this disclosure, the focus shall relate to eye tracking functions.

FIG. 2 shows exemplary image frame intervals of images captured, or a switching logic mode 200, in accordance with an exemplary embodiment. The aforesaid switching logic mode can be achieved by switching signals generated either by a controller or by logic circuit chips.

As shown in FIG. 2, when the illumination module 108 of the image processing system 100 is operating in the ON mode, a first image captured contains a combination of selected wavelengths, or at least two selected wavelengths. When the illumination module 108 is operating in the OFF mode, a second image consecutively captured contains only RGB wavelengths. A main advantage of this switching logic mode 200 is that two distinct resultant images may be processed by the image processing system 100 within a single image frame interval 202, each resultant image comprising image information of a selected range of wavelengths. In this exemplary embodiment, the image sensor 106 is operable to detect a combination of at least two types of wavelengths. A suitable example of an image sensor 106 operable to capture sensing signals in dual wavelengths is an image sensor operable to sense red, blue, green (RGB) and infrared (IR) wavelengths. In this exemplary embodiment, the illumination module 108 is a near infrared (NIR) light source. An advantage of this embodiment is to produce high resolution images captured under dimly lit ambient conditions. An exemplary scenario is capturing images for the eye tracking function of an operator sitting within the interior of a motor vehicle operating at night. A simplified sketch of this alternating capture sequence is given below.
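As a non-authoritative sketch of the switching logic of FIG. 2, the capture sequence can be summarized as follows. Here `camera` and `nir_illuminator` are hypothetical driver objects standing in for the imaging module 104 and illumination module 108; they are not APIs defined by this disclosure, in which the switching would typically be driven by a controller or logic circuit in hardware.

```python
def capture_frame_pair(camera, nir_illuminator):
    """Capture one frame with NIR illumination ON, then one with it OFF.

    Within a single frame interval this yields an RGB+IR frame and an
    RGB-only (ambient) frame, as described for FIG. 2.
    """
    nir_illuminator.on()             # illumination module in the ON state
    frame_on = camera.capture()      # first image: RGB + NIR information
    nir_illuminator.off()            # illumination module in the OFF state
    frame_off = camera.capture()     # second image: ambient RGB information
    return frame_on, frame_off
```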

FIG. 3 shows a flowchart 300 for processing images for eye tracking in a preferred embodiment. At step 302, the control module 102 executes a command causing the illumination module 108 to operate in an ON mode, and the imaging module 104 to capture a first image. In this embodiment, when the illumination module 108 is operating in the ON mode, the image information captured in the first image comprises image information in dual wavelengths.

The first image may comprise image information in a range of 400nm to 700nm, or a visible light range, and may further comprise image information in a range of 700nm to 1100nm, or an NIR range. Accordingly, the control module 102 is operable to determine a pixel value (P1) of the first image, which may include both visible light optical power and NIR light optical power.

At the next step 304, the control module 102 consecutively executes a command causing the illumination module 108 to operate in an OFF mode, and the imaging module 104 to consecutively capture at least one second image. When the illumination module 108 is operating in the OFF mode, the image information captured in the at least one second image comprises image information in a single wavelength range, i.e. 400nm to 700nm, or a visible light range. Accordingly, the control module 102 may determine a pixel value (P2) of the second image, which will include both visible light optical power and NIR light optical power. Optionally, at step 306, the control module 102 may further process the at least one second image captured when the illumination module 108 is operating in the OFF mode, to obtain a high-resolution colour image containing image information in the 380nm to 800nm red, blue, green (RGB) range only. An example of a suitable image processing step is a demosaicing algorithm based on the colour filter array or colour filter mosaic of the sensor; an illustrative sketch is given below.
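A minimal, purely illustrative sketch of step 306 follows, using OpenCV's standard Bayer demosaicing. The disclosure does not prescribe a particular demosaicing algorithm, and an RGB-IR sensor would in practice require a demosaic matched to its specific colour filter pattern; the file name and the RGGB pattern below are assumptions.

```python
import cv2

# Read the raw frame captured with the illumination module OFF (hypothetical file name).
raw_off = cv2.imread("frame_off_raw.png", cv2.IMREAD_GRAYSCALE)

# Demosaic assuming a conventional RGGB colour filter array to obtain a
# full-colour image; an RGB-IR mosaic would need a pattern-specific variant.
colour_off = cv2.cvtColor(raw_off, cv2.COLOR_BayerRG2BGR)
```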

At step 308, the control module 102 may further process the first image and the at least one second image captured, by subtracting the first pixel value (P1) determined from the second pixel value (P2) determined, such that a resultant image comprising image information in a range of 700nm to 1100nm, in a near-infrared (NIR) range, is yielded. Advantageously, the resultant image produced is a full resolution NIR image. Processing a full resolution NIR image for eye tracking is of importance in the field of machine vision applications, in particular where eye tracking is applicable. By way of example, applicable eye tracking functions include estimation of line of sight of a user, for purposes of virtual reality (VR) applications; estimation of eye position of a user, for purposes of augmented reality (AR) applications; and estimation of a state of a user, for purposes of driver monitoring applications. A simplified sketch of this subtraction step is given below.
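The following sketch illustrates step 308 under the sign convention of the FIGS. 4a-c formulas (P3 = P1 − P2), assuming 8-bit frames that are already aligned and of identical size; the function name is illustrative only and is not defined by this disclosure.

```python
import numpy as np

def nir_image_from_pair(frame_on, frame_off):
    """Yield a full resolution NIR image by pixelwise subtraction.

    frame_on  : image captured with the NIR illumination module ON (P1)
    frame_off : image captured with the NIR illumination module OFF (P2)
    Both are assumed to be aligned uint8 arrays of identical shape.
    """
    p1 = frame_on.astype(np.int16)   # widen to avoid unsigned underflow
    p2 = frame_off.astype(np.int16)
    nir = np.clip(p1 - p2, 0, 255).astype(np.uint8)
    return nir
```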

For clarity and brevity, the principles of the image subtraction process in this disclosure are explained in detail below. As mentioned above, one of the advantages of using a single image sensor 106, capturing both RGB sensing signals and IR sensing signals, with an NIR illumination module 108 is to capture images for the eye tracking function under dimly lit conditions.

The aforesaid configuration addresses some of the problems in eye tracking image processing systems, i.e., the lack of high-quality images to accurately estimate the position of the eyes.

Under lit conditions, for example during daytime, visible light imaging pixels and NIR imaging pixels share the same exposure time during image capture but have different quantum efficiency (QE) and irradiance at the pixel surface. The brightness difference between visible light optical power and NIR light optical power can lead to poor image quality. To counter the effects leading to poor image quality, the brightness of the NIR illumination module needs to be adjusted. This can be achieved by using a driver chip with an analog dimming function. However, analog dimming is not controllable via direct inter-chip communication. A control module, such as a controllable DAC chip or a pulse width modulation (PWM) based resistor-capacitor (RC) circuit, RC network or RC filter, for generating an analog signal may be necessary to control this analog dimming function.

In contrast, the control module 102 of this disclosure calculates a ratio between an average value of the first pixel intensity value (P1) from the at least one image captured when the at least one illumination module is operating in the ON state and an average value of the second pixel intensity value (P2) from the at least one image captured consecutively when the at least one illumination module is operating in the OFF state, and controls an analog dimming function of the imaging module 104 in response to the ratio calculated. The calculated ratio is used as additional feedback to control the analog dimming settings for the subsequent image frame, as sketched below.
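A rough sketch of this feedback loop follows. The target ratio, the gain, and the normalized 0-to-1 dimming level are illustrative assumptions, not values taken from this disclosure; writing the new level to the DAC or PWM-based RC circuit is indicated only as a comment.

```python
import numpy as np

def update_analog_dimming(frame_on, frame_off, current_level,
                          target_ratio=2.0, gain=0.1):
    """Adjust the analog dimming level from the P1/P2 intensity ratio.

    frame_on / frame_off : consecutive frames captured with the illumination
    module ON and OFF; current_level is the present dimming level in [0, 1].
    """
    avg_p1 = float(np.mean(frame_on))
    avg_p2 = max(float(np.mean(frame_off)), 1e-6)  # guard against division by zero
    ratio = avg_p1 / avg_p2

    # Simple proportional correction towards the desired ON/OFF ratio
    new_level = float(np.clip(current_level + gain * (target_ratio - ratio), 0.0, 1.0))
    # The new level would then be written to the DAC chip or PWM-based RC circuit.
    return ratio, new_level
```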

Under extreme ambient lighting conditions, an NIR illumination module may not be able to supply sufficient brightness to meet an expected ratio and yield a full resolution resultant image. Under such circumstances, an optical neutral density (ND) filter (not shown) may be combined with an NIR long pass filter to reduce the ambient visible lighting brightness while maintaining the NIR light brightness, as a mitigation solution.

Turning now to FIGS. 4a-c, the principle of generating full resolution NIR images at step 308 can be achieved using pixelwise processing. As shown in FIG. 4a, an image containing image information in the RGB wavelength and the IR wavelength is the minuend, while a corresponding image represented by FIG. 4b, containing image information in the RGB wavelength only, is the subtrahend. The respective images may be processed by the image processing system 100 at steps 302 and 308. Applying the image subtraction algorithm, the pixelwise post-processing formulas can be defined as follows:

• P3(IR)(1,1) = P1(B + IR)(1,1) − P2(B)(1,1)

• P3(IR)(1,2) = P1(G + IR)(1,2) − P2(G)(1,2)

• P3(IR)(1,3) = P1(R + IR)(1,3) − P2(R)(1,3)

• ...

• P3(IR)(2,2) = P1(IR)(2,2) − P2(IR)(2,2)

wherein P = pixel intensity value (e.g. P1 = first pixel intensity value), R = red colour value, B = blue colour value, G = green colour value, and IR = infrared value.

Thus, it can be seen that an image processing method and system having the advantage of yielding full resolution images in selected wavelengths have been provided. More advantageously, the image processing system is operable to consecutively capture images in different wavelengths, thus processing at least two images of distinct wavelengths within an image frame interval. By determining the pixel intensity value of each image pixel, the image processing method and system as disclosed herein yield full resolution images under different ambient lighting conditions, to achieve accurate eye tracking.

While exemplary embodiments have been presented in the foregoing detailed description of the disclosure, it should be appreciated that a vast number of variations exist. It should further be appreciated that the exemplary embodiments are only examples, and are not intended to limit the scope, applicability, operation, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the disclosure, it being understood that various changes may be made in the function and arrangement of elements and method of operation described in the exemplary embodiment without departing from the scope of the disclosure as set forth in the appended claims.

Reference Signs