

Title:
METHOD FOR PROCESSING IMAGE DATA, IMAGE PROCESSOR UNIT AND COMPUTER PROGRAM
Document Type and Number:
WIPO Patent Application WO/2023/110123
Kind Code:
A1
Abstract:
The invention discloses a method for processing image data of an image sensor (S), said image sensor (S) comprises an area of a pixel matrix providing image pixel data (IPD), comprising the steps of detecting defect pixels in an image pixel data (IPD) to identify possibly defective pixel positions in the sensor (S) and correcting the image pixel data (IPD) by use of the detected defect pixel positions, wherein the steps of storing the identified possibly defective pixel positions together with associated capture parameters of the image sensor (S) and correcting actual image pixel data by use of stored defect pixel positions associated with capture parameters (CP; P_i), which are related to the actual capture parameters of the image sensor (S) for the actual image pixel data (IPD), are further executed.

Inventors:
EL-YAMANY NOHA (DE)
Application Number:
PCT/EP2021/086518
Publication Date:
June 22, 2023
Filing Date:
December 17, 2021
Assignee:
DREAM CHIP TECH GMBH (DE)
International Classes:
H04N5/367
Domestic Patent References:
WO2007036055A12007-04-05
Foreign References:
US20040239782A12004-12-02
US20040263648A12004-12-30
US5694228A1997-12-02
Other References:
G. H. CHAPMAN, R. THOMAS, I. KOREN, Z. KOREN: "Relating digital imager defect rates to pixel size, sensor area and ISO", IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI AND NANOTECHNOLOGY SYSTEMS, 2012, pages 164-169
G. H. CHAPMAN, J. LEUNG, R. THOMAS, Z. KOREN, I. KOREN: "Trade-offs in imager design with respect to pixel defect rates", IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI AND NANOTECHNOLOGY SYSTEMS, 2010, pages 231-239
J. LEUNG, G. H. CHAPMAN, Y. H. CHOI, R. THOMAS, Z. KOREN, I. KOREN: "Analysing the impact of ISO on digital imager defects with an automated defect trace algorithm", PROC. SPIE 7536, SENSORS, CAMERAS, AND SYSTEMS FOR INDUSTRIAL/SCIENTIFIC APPLICATION XI, 75360F, 2010
J. LEUNG, G. H. CHAPMAN, I. KOREN, Z. KOREN: "Characterisation of gain enhanced in-field defects in digital imagers", IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI SYSTEMS, 2009, pages 155-163, XP031595064
J. LEUNG, J. DUDAS, G. H. CHAPMAN, I. KOREN, Z. KOREN: "Quantitative analysis of in-field defects in image sensor arrays", IEEE INTERNATIONAL SYMPOSIUM ON DEFECT AND FAULT TOLERANCE IN VLSI SYSTEMS, 2007, pages 526-534
NOHA EL-YAMANY: "Robust Defect Pixel Detection and Correction for Bayer Imaging Systems", IS&T INTERNATIONAL SYMPOSIUM ON ELECTRONIC IMAGING 2017, DIGITAL PHOTOGRAPHY AND MOBILE IMAGING XIII, SOCIETY FOR IMAGING SCIENCE AND TECHNOLOGY, pages 46-51
Attorney, Agent or Firm:
MEISSNER BOLTE PATENTANWÄLTE RECHTSANWÄLTE PARTNERSCHAFT MBB (DE)
Claims:

1. Method for processing image data of an image sensor (S), said image sensor (S) comprises an area of a pixel matrix providing image pixel data (IPD), comprising the steps of:

a) Detecting defective pixels in an image pixel data (IPD) to identify possibly defective pixel positions in the sensor (S), and

b) Correcting the image pixel data (IPD) by use of the detected defective pixel positions,

characterised in

c) Storing the identified possibly defective pixel positions together with associated capture parameters of the image sensor (S), and

d) Correcting actual image pixel data by use of stored defective pixel positions associated with capture parameters (CP; P_i) which are related to the actual capture parameters of the image sensor (S) for the actual image pixel data (IPD).

2. Method according to claim 1, characterised by:

a) Temporarily storing the identified possibly defective pixel positions together with associated capture parameters of the image sensor (S) of each image pixel data (IPD) provided by the image sensor (S);

b) Counting the error number as count of the same pixel position being detected as defective for the respective capture parameter (CP; P_i); and

c) Identifying a pixel position as defective for the related capture parameter (CP; P_i) if the error number for the pixel position related to the capture parameter (CP; P_i) exceeds a predefined threshold value.

3. Method according to claim 2, characterised by setting a flag in a data set stored for each identified possibly defective pixel position related to a capture parameter (CP; P_i) in case that the related error number exceeds the predefined threshold (T_i).

4. Method according to one of claims 1 to 3, characterised in that capture parameters (CP; P_i) are selected from a group of analogue sensor gain (AG), sensitivity (ISO), exposure time (ET) and temperature.

5. Method according to claim 4, characterised in that the capture parameters (CP; P_i) each include a single parameter or a set of different parameters.
6. Image processor unit (1) for processing image pixel data (IPD) of an image sensor (S), said image sensor (S) comprises a sensor area of a pixel matrix providing image pixel data (IPD), wherein the image processor unit (1) is arranged to detect defective pixels in the image pixel data (IPD) to identify possibly defective pixel positions in the image sensor (S) and to correct the image pixel data (IPD) by use of the detected defective pixel positions, characterised in that the image processor unit (1) is further arranged to:

a) Store the identified possibly defective pixel positions together with associated capture parameters (CP; P_i) of the image sensor (S) in a data memory (2); and

b) Correct an actual image pixel data (IPD) by use of stored defective pixel positions associated with capture parameters (CP; P_i) which are related to the actual capture parameters (CP; P_i) of the image sensor (S) for the actual image pixel data (IPD).

7. Image processor unit (1) according to claim 6, characterised in that the image processor unit (1) is further arranged to:

a) Temporarily store the identified possibly defective pixel positions together with associated capture parameters of the image sensor (S) for each image pixel data (IPD) provided by the image sensor (S);

b) Count the error number (C_i) as count of the same pixel positions being defective for the respective capture parameter; and

c) Identify a pixel position as defective for the related capture parameter (CP; P_i) if the error number for the pixel position related to the capture parameter (CP; P_i) exceeds a predefined threshold value (T_i).

8. Image processor unit (1) according to claim 7, characterised in that the image processor unit (1) is further arranged to set a flag (F_i) in a data set stored for each identified possibly defective pixel position related to the capture parameter (CP; P_i) in case that the related error number (C_i) exceeds the predefined threshold.
9. Image processor unit (1) according to one of claims 6, 7 or 8, characterised in that the image processor unit (1) is further arranged to select the capture parameters (CP; P_i) from a group of analogue sensor gain (AG), sensitivity (ISO), exposure time (ET) and temperature.

10. Image processor unit (1) according to claim 9, characterised in that the capture parameters (CP; P_i) each include a single parameter or a set of different parameters.

11. Computer program comprising instructions which, when the program is executed by the image processing unit (1) according to one of claims 6 to 10, cause the image processing unit (1) to carry out the steps of the method of one of claims 1 to 5.

Description:
Method for processing image data, image processor unit and computer program

The invention relates to a method for processing image data of an image sensor, said image sensor comprising a sensor area of a pixel matrix providing image pixel data, comprising the steps of:

a) Detecting defective pixels in image pixel data to identify possibly defective pixel positions in the sensor, and

b) Correcting the image data by use of the detected defective pixel positions.

The invention further relates to an image processor unit for processing image data of an image sensor, said image sensor comprises a sensor area of a pixel matrix providing image pixel data, wherein the image processor unit is arranged to detect defective pixels in the image data to identify possibly defective pixel positions in the image sensor and to correct the image data by use of the detected defective pixel positions.

The invention further relates to a computer program comprising instructions to carry out the above method.

Digital imagers are widely used in everyday products, such as smart phones, tablets, notebooks, cameras, cars and wearables. An important problem in the image signal processing pipeline in camera systems used in those products is the detection and correction of defective imaging sensor pixels which develop during or after the fabrication process of the sensor. If those defective pixels are not corrected early in the image signal processing pipeline, subsequent filtering or demosaicing operations will cause them to spread and appear as coloured clusters of pixels that degrade the image quality. In order to maintain a high image quality, it is desired to have a defective pixel detection and correction function preferably in an early stage of an image processor unit.

Typically, defective pixel detection takes place as a per-frame image processing or analysis step that seeks to identify defective sensor pixel positions based on raw single frame or multiple raw frames. The detection mechanism does not keep track of the identified defect positions over the lifetime of the image sensor.

The existing methods have fundamental drawbacks: only partial defect detection can be achieved, the presence of false positives cannot be avoided, and there is a likelihood of over-correction or under-correction due to a mismatch between the defect detection solution parameters and the camera capture conditions. This negatively affects the final image quality.

G. H. Chapman, R. Thomas, I. Koren and Z. Koren, “Relating digital imager defect rates to pixel size, sensor area and ISO”, in: IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems, pp. 164-169, 2012 confirms that the number of in-field defects increases continuously over the sensor lifetime in a linear fashion. The rate of fault pixels is dependent on the power of the ISO and the exposure time. Shrinking the pixel size to gain a larger number of pixels in the given sensor area will result in defect rates growing faster than the pixel numbers.

G. H. Chapman, J. Leung, R. Thomas, Z. Koren and I. Koren, "Trade-offs in imager design with respect to pixel defect rates", in: IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems, pp. 231-239, 2010 discloses a relationship between increased defect rate and increased image sensitivity (ISO). A good pixel is identified when the dark current rate and the dark offset is zero. The normalized pixel output can be determined as a function of the incident illumination rate, the dark current rate, the exposure setting, the dark offset and the amplification from the ISO setting. In a hot pixel, the normalized pixel output increases with exposure time.

J. Leung, G. H. Chapman, Y. H. Choi, R. Thomas, Z. Koren and I. Koren, "Analysing the impact of ISO on digital imager defects with an automated defect trace algorithm", in: Proc. SPIE 7536, Sensors, Cameras, and Systems for Industrial/Scientific Application XI, 75360F, 2010 discloses that hot pixels can be identified by a series of dark field images at increasing exposure times. In the defect trace algorithm, the probability is calculated for each identified hot pixel by moving over the images recursively forward in time and using Bayesian equations. The appearance of hot pixels is affected by the exposure time and ISO setting used to capture the images. To avoid faults, a detection correction scheme is proposed.

J. Leung, G. H. Chapman, I. Koren and Z. Koren, "Characterisation of gain enhanced in-field defects in digital imagers", in: IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, pp. 155-163, 2009 presents the effect on defects of different camera settings, mainly different ISO amplifications. High ISO amplifications intensify the defect parameters so that defects that were not visible before become visible at high ISO settings.

J. Leung, J. Dudas, G. H. Chapman, I. Koren and Z. Koren, "Quantitative analysis of in-field defects in image sensor arrays", in: IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems, pp. 526-534, 2007 proposes a use of calibrated uniform illuminations to search for the existence of particular defect types to estimate the relative prevalence of each defect type of an image sensor. In particular, fully-stuck, partially-stuck, abnormal-sensitivity and hot pixels are the types of examined defects.

An object of the present invention is to provide an improved method and an image processor unit which provides a robust learning-based solution to identify the defective pixel positions of an image sensor, learns those positions over the camera lifetime and improves the correction of actual image data.

The object is achieved by the method comprising the features of claim 1, the image processor unit comprising the features of claim 6 and the computer program comprising the features of claim 11. Further embodiments are disclosed in the dependent claims. It is proposed to:

a) Store the identified possibly defective pixel positions together with associated capture parameters of the image sensor, and

b) Correct actual image data by use of stored defective pixel positions associated with capture parameters which are related to the actual capture parameters of the image sensor for the actual image data.

Due to the correlation between the identified defective sensor pixel positions and the camera capture settings, an over-correction or under-correction of defects can be avoided. This results in high image quality.

The method can have the additional steps of:

a) Temporarily storing the identified possibly defective pixel positions together with associated capture parameters of the image sensor for each image pixel data provided by the image sensor;

b) Counting the error number as a count of the same pixel position being detected as defective for a respective capture parameter; and

c) Identifying a pixel position as defective for the related capture parameter if the error number for the pixel position related to the capture parameter exceeds a predefined threshold value.

Temporarily storing the possibly defective pixel positions in a buffer together with an occurrence counter for the number of errors allows fast and easy processing to exclude pixel positions identified as false positives. The counter for the error number can be set to increase each time a defect is identified again in the same pixel position with the same capture parameter.

The same capture parameter is present if the parameter or a specific set of parameters lies within a given range of similarity. There is no need for a capture parameter to be exactly the same as the preceding capture parameter.
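One possible notion of "the same capture parameter" is a relative tolerance test, sketched below. The 10% tolerance and the function name are illustrative assumptions, not values from the text.

```python
# Hypothetical similarity test: two capture-parameter values are treated
# as "the same" if they lie within a relative tolerance of each other.
# The 10% tolerance is an assumed choice, not specified in the text.

def same_capture_parameter(p_a, p_b, rel_tol=0.10):
    """Treat two capture-parameter values as equal within rel_tol."""
    return abs(p_a - p_b) <= rel_tol * max(abs(p_a), abs(p_b))

print(same_capture_parameter(4.0, 4.2))  # True: within 10% of each other
print(same_capture_parameter(4.0, 8.0))  # False: clearly different settings
```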

In case that the related error number, e.g. of an occurrence counter, exceeds a predefined threshold, a confirmation flag in a data set (temporary defect buffer) can be stored for each identified possibly defective pixel position related to a capture parameter.

Thus, each capture parameter is related to a respective occurrence counter (error number) and to a respective confirmation flag.
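The counter-and-flag bookkeeping described above can be sketched as follows. This is an illustrative model, not the patented implementation; the class name, the dictionary layout and the threshold value are assumptions.

```python
from collections import defaultdict

# Sketch of the temporary defect buffer: each (pixel position,
# capture-parameter bin) pair carries an occurrence counter C_i and a
# confirmation flag F_i. Names and the threshold are illustrative.

CONFIRM_THRESHOLD = 3  # assumed threshold T_i

class TemporaryDefectBuffer:
    def __init__(self, threshold=CONFIRM_THRESHOLD):
        self.threshold = threshold
        self.counters = defaultdict(int)   # (x, y, p_bin) -> C_i
        self.flags = {}                    # (x, y, p_bin) -> F_i

    def report_defect(self, x, y, p_bin):
        """Increment the occurrence counter; set the flag once it exceeds T_i."""
        key = (x, y, p_bin)
        self.counters[key] += 1
        if self.counters[key] > self.threshold:
            self.flags[key] = True
        return self.flags.get(key, False)

buf = TemporaryDefectBuffer()
for _ in range(5):
    confirmed = buf.report_defect(120, 64, p_bin=2)
print(confirmed)  # True: the counter has exceeded the threshold
```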

Once a confirmation flag is set for the respective capture parameter, the pixel position is no longer treated as a possible false positive but as a confirmed defect; the pixel positions comprising the respective flag indicating a defect for the actual capture setting of the sensor will be corrected in a succeeding image processing step.

Capture parameters can be selected for example from the group of analogue sensor gain, sensitivity (ISO), exposure time and temperature. This group can be expanded by other parameters which are related to the occurrence of defects of pixels or to the visibility of defective pixels.

The capture parameters associated with the defective pixel positions can be a single capture parameter or a set of different capture parameters.

The image processor unit can be an image processor arranged to a) Store the identified possibly defective pixel positions together with associated capture parameters of the image sensor in a data memory, and b) Correct an actual image by use of stored defective pixel positions associated with capture parameters which are related to the actual capture parameters of the image sensor for the actual image data.

The image processor unit can be a programmable data processor including a memory for temporarily storing the data set including the indicated defective pixel positions and the related capture parameters, occurrence counters and confirmation flags. The image processor can also be a hardware unit arranged in a hardware logic, e.g. an FPGA. The invention is described by an example with enclosed figures. It shows:

Figure 1 - Block diagram of the method for processing image data and functional parts of an image processor unit;

Figure 2 - A structure of a data set stored for each identified possibly defective pixel position;

Figure 3 - Block diagram of an image processor unit connected to an image sensor.

Figure 1 presents a flow diagram of the method of correcting defective pixels based on the image pixel data IPD provided as whole data from an electronic image sensor.

In the first step A), an active (i.e. dynamic) defective pixel detection mechanism identifies possibly defective pixel positions of the sensor by evaluating the incoming image pixel data IPD. This method step A) can be performed for each incoming raw frame of a picture. This identification can be executed or controlled by pre-selected parameters depending on the design of the active defective pixel detection mechanism and the respective needs.

The detection of defective pixels is well-known and described in the above-mentioned prior art, e.g. in Noha El-Yamany: "Robust Defect Pixel Detection and Correction for Bayer Imaging Systems", in: IS&T International Symposium on Electronic Imaging 2017, Digital Photography and Mobile Imaging XIII, Society for Imaging Science and Technology, pp. 46-51, incorporated by reference herein.

For example, a pixel in a specific colour channel can be identified as defective in case that the pixel is significantly different from its same-coloured neighbours in a small window centered at the pixel and if the local brightness difference at the pixel is significantly different from the local brightness differences of the same-coloured neighbours in the small window centered at the pixel.

A hot pixel can be identified if the local brightness difference at a pixel position is larger than a first pre-selected tuning parameter (detection strength parameter) and larger than a false-positives control parameter as a second pre-selected tuning parameter. A cold pixel can be identified if the local brightness difference at the pixel position is smaller than the negative value of the pre-selected detection strength parameter and smaller than a second false-positives control parameter calculated as a function of a pre-selected tuning parameter and the minimum value of the local brightness of the surrounding pixels.

The pixel positions x, y are then stored in the step B) in a temporary defects buffer 2 together with the respective camera capture parameters CP, P_i used for obtaining the image pixel data IPD. The reason for storing the identified possibly defective pixel positions in a temporary defect dictionary is that the defect identification process is prone to errors, and typically not only real defects will be picked up, but also false positives. Hence the identified defect positions do not solely represent sensor defect positions, but also include good pixel positions.

In a step C), a refinement of identified defects takes place. The step C) confirms the identified possible defects for a respective camera capture parameter CP, P_i in order to exclude false positives and to ensure only real sensor defective pixel positions are stored in the confirmed defects storage (dictionary or buffer) in step D), which are later corrected by the defective pixel correction mechanism in step E). The result is a flow of corrected image pixel data together with the remaining image pixel data which does not need to be corrected and can be processed in a parallel flow in case known defective pixels are detected in step A) for this image pixel data IPD.

The sequence of the procedure in Figure 1 takes place e.g. for every incoming frame during camera capture (i.e. during the use of the camera) so that the defective pixel positions are continuously learned until they are fully identified.

For every identified defect position, the camera capture parameters CP, P_i that influence the defect rate, i.e. the visibility of defective pixels, are associated with that pixel position x, y. Hence, in both the temporary and the confirmed defect dictionaries, the defect positions are not only stored but also associated with the camera capture settings, i.e. the camera capture parameters CP, P_i. As a result, there are multiple identified defect position maps, each associated with a capture setting. At any one time, only the confirmed defect positions stored in the confirmed defects storage 3 in step D) that are associated with the capture setting closest to the runtime setting are corrected. This avoids over-correction and under-correction of defects due to including more or fewer defective positions than necessary.
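The per-setting defect maps and the "closest capture setting" selection can be sketched as below. The dictionary layout, the gain values and the distance metric are illustrative assumptions.

```python
# Illustrative sketch (not the patented implementation): confirmed defect
# positions are kept per capture-parameter sample P_i, and only the map
# whose setting is closest to the runtime capture setting is applied.

confirmed_defects = {
    1.0: {(10, 20)},                   # analogue gain 1.0 -> defect map
    4.0: {(10, 20), (33, 7)},          # higher gain reveals more defects
    8.0: {(10, 20), (33, 7), (5, 5)},
}

def defects_for_runtime(gain, maps):
    """Pick the defect map associated with the closest stored capture setting."""
    closest = min(maps, key=lambda g: abs(g - gain))
    return maps[closest]

print(sorted(defects_for_runtime(3.5, confirmed_defects)))
# The runtime gain 3.5 is closest to the stored setting 4.0,
# so only that map's positions are corrected.
```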

The detection of defect pixels can be performed, for example, on the raw data coming from the Bayer sensor. In the following, I(i,j) denotes the pixel at the position (i,j) in the input raw Bayer image I, where i = 1, 2, ..., H; j = 1, 2, ..., W; H and W are the height and width of I, respectively, and O(i,j) denotes the pixel at the position (i,j) after DPC. Normalized intensities within the range [0.0, 1.0] are assumed for both I and O. I_avg(i,j) denotes the robust local average estimate of the same-color neighbors in an S×S window centered at (i,j). A pixel I(i,j) is identified as a defect if two conditions are met.

Condition (A):

The pixel is significantly different from its same-color neighbors in the S×S Bayer window centered at that pixel.

For hot pixels: I(i,j) > (1 + M1) × I_avg(i,j)   (1)

For cold pixels: I(i,j) < (1 - M1) × I_avg(i,j)   (2)

Condition (A) is rather intuitive, since a defect pixel responds abnormally to illumination, and thus becomes visibly different from its neighbors. M1 is an algorithm parameter, which by design denotes the detection strength, and M1 ∈ (0.0, 1.0). The smaller M1 is, the stronger the detection is. I_avg should be insensitive to the presence of a hot/cold/mixed couplet in the same color channel, in order to enable the algorithm to robustly identify the defect, whether it is a singlet or belongs to a couplet in the same color channel. The α-trimmed mean [14] is one example of how to calculate I_avg, because the highest and lowest α/2 values are discarded prior to computing the local average. For α = 2 and S = 5, I_avg(i,j) can be calculated as follows:

1. Sorting the eight same-color neighbors of I(i,j), Q(i,j), where

Q(i,j) = {I(i-2,j-2), I(i-2,j), I(i-2,j+2), I(i,j-2), I(i,j+2), I(i+2,j-2), I(i+2,j), I(i+2,j+2)}   (3)

2. Discarding the maximum and minimum values

3. Calculating I_avg(i,j) by averaging the remaining six values.

Condition (B):

In the 3x3 Bayer window centered at the pixel, the local brightness difference at the pixel is significantly higher than the smallest local brightness difference for each color channel, for hot pixels. Conversely, for cold pixels, the local brightness difference at the pixel is significantly lower than the largest local brightness difference for each color channel.

For hot pixels:

d_lb(i,j) > M2 × min(d_lb(i,j-1), d_lb(i,j+1))   (4)

d_lb(i,j) > M2 × min(d_lb(i-1,j), d_lb(i+1,j))   (5)

d_lb(i,j) > M2 × min(d_lb(i-1,j-1), d_lb(i-1,j+1), d_lb(i+1,j-1), d_lb(i+1,j+1))   (6)

For cold pixels:

d_lb(i,j) < M2 × max(d_lb(i,j-1), d_lb(i,j+1))   (7)

d_lb(i,j) < M2 × max(d_lb(i-1,j), d_lb(i+1,j))   (8)

d_lb(i,j) < M2 × max(d_lb(i-1,j-1), d_lb(i-1,j+1), d_lb(i+1,j-1), d_lb(i+1,j+1))   (9)

All the conditions in equations (4)-(6) must hold for condition (B) to be true for hot pixels. Similarly, all the conditions in equations (7)-(9) must hold for condition (B) to be true for cold pixels. d_lb(k,l) is the local brightness difference at pixel position (k,l), which is calculated as d_lb(k,l) = I(k,l) - I_avg(k,l). Because a defect pixel is visibly different from its neighbors, it is expected that the local brightness difference measured at the defect location will be considerably different from that at the other pixel locations in a very small neighborhood of the defect. Condition (B) is imposed, therefore, to differentiate between a defect and a small image feature/detail, thus enabling the algorithm to be robust against false positives. The use of the minimum (min) operation in equations (4)-(6), for hot pixels, and the maximum (max) operation in equations (7)-(9), for cold pixels, enables the algorithm to robustly identify the defect, whether it is a singlet or belongs to a hot/cold/mixed couplet in different color channels. M2 is an algorithm parameter, which by design denotes false positives control, and M2 ∈ [1.0, U], where U is an upper bound. The larger M2 is, the weaker the detection is and the fewer the false positives are.
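The hot-pixel test of conditions (A) and (B), with the α-trimmed mean of equation (3), can be sketched as follows. The helper names and the parameter values M1 = 0.5, M2 = 2.0 are illustrative assumptions; only the formulas follow the text.

```python
import numpy as np

# Sketch of the hot-pixel test from conditions (A) and (B), equations (1)
# and (4)-(6), on a normalized raw Bayer image I (values in [0.0, 1.0]).
# Parameter names M1, M2 follow the text; helper names are ours.

def i_avg(I, i, j):
    """Alpha-trimmed mean (alpha = 2) of the eight same-color neighbors
    in the 5x5 Bayer window: drop min and max, average the remaining six."""
    q = [I[i-2, j-2], I[i-2, j], I[i-2, j+2],
         I[i,   j-2],            I[i,   j+2],
         I[i+2, j-2], I[i+2, j], I[i+2, j+2]]
    q = sorted(q)[1:-1]          # discard the minimum and the maximum
    return sum(q) / len(q)

def d_lb(I, i, j):
    """Local brightness difference at (i, j)."""
    return I[i, j] - i_avg(I, i, j)

def is_hot_defect(I, i, j, M1=0.5, M2=2.0):
    # Condition (A), eq. (1): much brighter than same-color neighbors.
    if not I[i, j] > (1 + M1) * i_avg(I, i, j):
        return False
    d = d_lb(I, i, j)
    # Condition (B), eqs. (4)-(6): compare against the 3x3 neighborhood.
    return (d > M2 * min(d_lb(I, i, j-1), d_lb(I, i, j+1)) and
            d > M2 * min(d_lb(I, i-1, j), d_lb(I, i+1, j)) and
            d > M2 * min(d_lb(I, i-1, j-1), d_lb(I, i-1, j+1),
                         d_lb(I, i+1, j-1), d_lb(I, i+1, j+1)))

I = np.full((9, 9), 0.2)
I[4, 4] = 0.9                    # a single stuck-high (hot) pixel
print(is_hot_defect(I, 4, 4))    # True: both conditions hold
```

The cold-pixel test would mirror this with equations (2) and (7)-(9), swapping the comparison directions and min for max.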

Figure 2 shows an example of the structure of the data set stored in step B) in the temporary defect buffer 2. The pixel positions identified as possibly defective by use of any appropriate defective pixel detection mechanism are stored as defect candidate x coordinate and defect candidate y coordinate. Together with the coordinates (positions) of the possibly defective pixel positions (defect candidates), the camera capture parameters P_i associated with the defect candidate at the pixel position (x, y) are also stored in the data set related to that pixel position. The capture parameter can also be a parameter set including a group of parameters. It is possible to store the capture parameters P_i, or a set of capture parameters, separately in a parameter list and to link this capture parameter data to the data-set entry shown in Figure 2 by use of an assigned number or pointer.

The data set for the possibly defective pixel positions x, y stored in the temporary defect buffer 2 also includes an occurrence counter C_i which is associated with the defect candidate in the x, y position as well as the capture parameter (including parameter set) P_i. This occurrence counter C_i can be incremented each time a defect is identified again in the same pixel position x, y and the same capture parameter P_i. The maximum value for the occurrence counter C_i and the increment strategy depend on the allocated number of bits for it in the buffer storage.

The related capture parameters P_i for setting the occurrence counter C_i need not be identical. It is possible to define a specific similarity within which capture parameters P_i are handled as the same.

Further, the data set includes confirmation flags F_i associated with the defect candidate at that pixel position x, y as well as with the capture parameter P_i. The flag can, for example, be defined as binary zero or one and can be set to one when the defect candidate at that pixel position x, y is transformed into a confirmed defect. The fields in the data set can be encoded for each buffer entry as efficiently as possible in order to reduce the number of bits for each entry and, ultimately, to reduce the buffer size.
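One way to encode a buffer entry with few bits is to pack the fields into a single word, as sketched below. The field widths are illustrative assumptions, not values from the text.

```python
# Hypothetical bit-packing of one temporary-defect-buffer entry, following
# the idea that each field is encoded with as few bits as possible.
# The field widths below are assumptions, not values from the text.

X_BITS, Y_BITS, P_BITS, C_BITS = 12, 12, 4, 3   # + 1 bit for the flag F_i

def pack_entry(x, y, p_bin, counter, flag):
    """Pack (x, y, P_i, C_i, F_i) into a single integer word."""
    word = x
    word = (word << Y_BITS) | y
    word = (word << P_BITS) | p_bin
    word = (word << C_BITS) | min(counter, (1 << C_BITS) - 1)  # saturate C_i
    word = (word << 1) | (1 if flag else 0)
    return word

def unpack_entry(word):
    flag = word & 1;                      word >>= 1
    counter = word & ((1 << C_BITS) - 1); word >>= C_BITS
    p_bin = word & ((1 << P_BITS) - 1);   word >>= P_BITS
    y = word & ((1 << Y_BITS) - 1);       word >>= Y_BITS
    return word, y, p_bin, counter, flag

w = pack_entry(1920, 1080, 5, 3, True)
print(unpack_entry(w))  # (1920, 1080, 5, 3, 1)
```

With these widths each entry occupies 32 bits, which directly bounds the buffer size per the trade-off described in the text.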

If the temporary defects buffer fills up quickly before a sufficient number of frames has been processed, an eviction rule can be implemented in order to clear up some room for more defect candidates to enter the buffer. It is also possible to impose a constraint on how many defects can be identified per frame or frame slice. These options depend on the memory allocated for the buffers.
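One plausible eviction rule, sketched below, drops the candidate with the lowest occurrence counter when the buffer is full. The rule itself is an assumption; the text does not specify which entry to evict.

```python
# Assumed eviction policy (not specified in the text): when the temporary
# defects buffer is full, drop the candidate seen least often, since it is
# the least likely to be a real defect.

def insert_candidate(buffer, key, max_entries):
    """buffer maps a candidate key (e.g. (x, y, p_bin)) -> occurrence counter."""
    if key in buffer:
        buffer[key] += 1
        return
    if len(buffer) >= max_entries:
        weakest = min(buffer, key=buffer.get)   # least-seen candidate
        del buffer[weakest]
    buffer[key] = 1

candidates = {"a": 4, "b": 1, "c": 2}
insert_candidate(candidates, "d", max_entries=3)
print(sorted(candidates))  # "b" (counter 1) was evicted to make room for "d"
```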

The size of the buffer should be determined in accordance with the expected defect rate for the imaging sensor used as well as the false-positive rate expected from the defect detection mechanism used, to ensure that the learned defect positions converge after a sufficient time to full sensor defective pixel identification, i.e. close to a 100% defect detection rate.

The two common types of defects in imaging sensors are hot defects and cold defects. The hot defects are the main type. Defective pixels respond abnormally to incident illumination in comparison to good pixels. Hot pixels respond more strongly, thus appear as bright or saturated dots in the image, while cold pixels respond less or do not respond at all, thus appear as dark or black dots in the image.

Sensor defective pixel characterisation can reveal which sensor parameters have an impact on the rate and visibility of defective pixels. For example, the capture parameters of analogue gain, sensitivity (ISO) and exposure time have an effect on the detection rate of defective pixels. Increasing the analogue gain, the sensitivity (ISO) and/or the exposure time leads to an increased rate and visibility of hot defective pixels.

Thus, with increasing analogue gain, sensitivity (ISO) and/or exposure time, new defects can become visible in addition to the ones that were visible at a lower analogue gain, sensitivity (ISO) and/or exposure time. That effectively means that the defect maps overlap over different capture settings. This information is very valuable later in the confirmation of candidate defects and is used according to the present invention.

In addition, defective pixels can develop after the sensor fabrication and during the imager lifetime. The detection rate and visibility of defective pixels increase with temperature as well. Each camera capture setting has a corresponding combination of analogue gain (AG), exposure time (ET) and temperature, among other parameters. Depending on the parameters that affect the rate/visibility of defective pixels, one can discretely sample the parameter space into M samples P_i, i = 1, 2, ..., M.

For example, if the analogue gain (AG) is the parameter that affects the sensor defect rate/visibility, then the parameter space can be sampled by P_i = AG_i, where i = 1, 2, 3, ..., M, AG_1 is the smallest analogue gain and AG_M is the largest analogue gain supported by the sensor. The other values of AG are assigned to the capture parameters AG_2 through AG_(M-1), where M depends on the granularity, i.e. the quantization with which the analogue gain AG is sampled, as well as the allocated number of bits for its encoding in the buffer entry.

If the exposure time ET is the parameter that affects the sensor defect rate and visibility, then the parameter space can be sampled by P_i = ET_i, where i = 1, 2, 3, ..., M. ET_1 can be the smallest exposure time and ET_M the largest exposure time supported by the exposure control algorithm. Other values of ET can be assigned to ET_2 through ET_(M-1). M depends on the granularity, i.e. the quantization with which the exposure time ET is sampled, as well as the allocated number of bits for its encoding in the buffer entry.
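The discrete sampling of a capture parameter into M bins P_1 through P_M can be sketched as below. The log-spaced grid is an assumption; any quantization with the desired granularity would do, and the gain range is illustrative.

```python
import math

# Sketch of discretely sampling a capture parameter into M bins P_1..P_M,
# as described for analogue gain and exposure time. The log-spaced grid
# and the example range are assumptions, not values from the text.

def parameter_bin(value, p_min, p_max, M):
    """Map a capture-parameter value to a bin index i in 1..M."""
    value = min(max(value, p_min), p_max)   # clamp to the supported range
    t = math.log(value / p_min) / math.log(p_max / p_min)
    return 1 + min(int(t * M), M - 1)

# Analogue gain between AG_1 = 1.0 and AG_M = 16.0, sampled into M = 8 bins.
print(parameter_bin(1.0, 1.0, 16.0, 8))   # 1  (smallest supported gain)
print(parameter_bin(16.0, 1.0, 16.0, 8))  # 8  (largest supported gain)
```

The number of bins M trades off granularity against the bits allocated for the parameter field in each buffer entry, as noted above.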

If both the analogue gain AG and the exposure time ET affect the defect rate/visibility, then the capture parameter P_i can optionally sample combinations of capture parameters, like the analogue gain AG and the exposure time ET, or a mathematical combination thereof (e.g. the product or sum of the single parameters).

Any relevant capture parameter that affects the sensor defect rate, and any possible combination thereof (parameter set), can be used and stored in the data set.

It is clear then that the capture parameters will be defined depending on the sensor capture parameters that have a direct impact on the defect rate and the visibility of possibly defective pixels. Since defect detection is a process that is prone to errors, and hence not all defect candidates are real defects but some of them are just false positives (i.e. good pixels), the data sets stored in the temporary defect buffer are refined during the lifetime of the sensor in order to exclude false positives from the identified defect candidates.

The occurrence counter C_i keeps track of how many times a defect has been detected at a given sensor pixel position x, y over the camera lifetime and over various capture settings. For a given defect candidate position x, y, when the occurrence counter C_i exceeds a pre-defined threshold T_i, the corresponding flag F_i is set to a confirmed state, thus confirming the defect position x, y for the capture setting P_i. When a defect candidate position x, y is confirmed at a given capture setting P_i, it can as a result also be confirmed at other capture settings P_j if certain conditions hold. For example, imaging sensor characterisation reveals that with increasing analogue gain, sensitivity (ISO) and exposure time, new defects become visible in addition to the ones that are already visible at the lower analogue gain, sensitivity (ISO) and exposure time, which effectively means that the defect maps overlap over different capture settings. Taking that knowledge into account, a defect candidate that has been confirmed at one of these capture parameters, i.e. analogue gain AG_i or exposure time ET_i, is by default also confirmed at the larger parameters AG_j or ET_j if AG_i or ET_i is smaller than AG_j or ET_j. That transfer of confirmation expedites the learning process and helps to converge faster to full sensor defect identification.
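The counting, thresholding and transfer of confirmation described above can be sketched as follows. The class and method names are illustrative assumptions, and a single threshold T is used for all parameter samples for brevity; the text allows per-sample thresholds T_i.

```python
class DefectBuffer:
    """Sketch of the temporary defects buffer refinement logic."""

    def __init__(self, m, threshold):
        self.m = m                  # number of capture-parameter samples P_1..P_M
        self.threshold = threshold  # confirmation threshold T (per-sample T_i possible)
        self.counts = {}            # (x, y, i) -> occurrence counter C_i
        self.confirmed = {}         # (x, y, i) -> confirm flag F_i

    def report_defect(self, x, y, i):
        """Called each time the detector flags pixel (x, y) at capture setting P_i."""
        key = (x, y, i)
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] >= self.threshold:
            # Confirm at P_i and, per the monotonicity observation (defects visible
            # at a lower gain/exposure stay visible at higher ones), transfer the
            # confirmation to all larger parameter samples P_j with j > i.
            for j in range(i, self.m + 1):
                self.confirmed[(x, y, j)] = True

    def is_confirmed(self, x, y, i):
        return self.confirmed.get((x, y, i), False)
```

The transfer loop is what expedites the learning: one confirmed detection at a small gain immediately covers every larger gain sample as well.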

The refinement stops when all the confirm flags are in the confirmed state for all the identified defective pixel positions x, y. However, the refinement with increasing threshold T_i values could be pursued periodically in order to check if new defects have developed over time and should be included amongst those in the defect dictionary to be corrected as well.

The confirmed defect positions along with their corresponding capture parameters can be stored in a separate data memory, i.e. in the confirmed defects dictionary or buffer, in step D). The confirmed defects buffer can essentially be co-located with the temporary defects buffer and share its entries and fields. They could also be two physically separate buffers, depending on the available memory for the buffers and the hardware structure of the image processor unit.
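One possible entry layout for such buffers is sketched below. The field names and the decision to copy confirmed entries into a second list are design assumptions; as the text notes, a single co-located buffer would work just as well.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectEntry:
    """Illustrative buffer entry for one candidate defect position."""
    x: int                                              # pixel column
    y: int                                              # pixel row
    counters: List[int] = field(default_factory=list)   # C_1..C_M per sample P_i
    flags: List[bool] = field(default_factory=list)     # F_1..F_M confirm flags

def extract_confirmed(temporary: List[DefectEntry]) -> List[DefectEntry]:
    """Copy entries confirmed for at least one P_i into a separate confirmed buffer."""
    return [e for e in temporary if any(e.flags)]
```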

Figure 3 shows an exemplary block diagram of an image processor unit 1 which is connected to an image sensor S to receive the image pixel data IPD (raw data) and the capture parameters CP used for obtaining the image pixel data IPD.

The image processor unit 1 can be an integrated circuit comprising a processor and a computer program arranged to process the image pixel data IPD and the capture parameters CP according to the method of the present invention. It can also be a hard-coded integrated circuit, e.g. an FPGA or the like.

The image processor unit 1 is arranged to actively detect defective pixels in step A) and to store the pixel positions x, y and the related capture parameters CP in step B) in the temporary defect buffer for image pixels identified as possibly defective. The image pixel data IPD which are not identified as defective are provided as the output of the image processor unit 1.

Repeating the active defective pixel detection step A) during the lifetime leads to a refinement of identified defects in step C). The image processor unit 1 is therefore arranged to increment the occurrence counters C_i for the related capture parameters CP in the temporary defects buffer 2 and to set a confirm flag F_i in case the occurrence counter C_i for the respective capture parameter P_i reaches or exceeds the pre-defined threshold value T_i.

Then the confirmed defective pixel positions x, y are stored in step D) in a separate defect storage 3 together with the associated capture parameters P_i for which the confirm flag F_i is set.

A separate image processor can then be provided to perform the defective pixel correction in step E). This can also be included in the same integrated circuit by use of another image processor, the same image processor or the image processor unit 1. Therefore, the block of the image processor unit 1 shown in Figure 3 can be a separate functional unit of the same hardware logic or can also be a separate functional and hardware unit. This depends on the chosen design strategy.

By use of the image processor unit 1 of the present invention, the camera capture parameter CP (including a parameter set) is compared to the capture parameters P_i in the confirmed defects storage 3, and only the defect positions x, y associated with the capture parameter P_i closest to the run-time capture parameter are corrected by the defective pixel correction mechanism E). Other pixel positions x, y are left intact.
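The run-time selection step can be sketched as follows. The function name, the dictionary layout and the absolute-difference distance metric are assumptions for illustration; the text only requires that the sample closest to the run-time parameter is chosen.

```python
def select_positions_to_correct(confirmed, p_samples, p_runtime):
    """Return the defect positions to correct for the current frame.

    confirmed: dict mapping sample index i (1-based) -> set of (x, y) positions
    p_samples: list of the M stored parameter sample values P_1..P_M
    p_runtime: the capture parameter of the frame currently being processed
    """
    # find the stored parameter sample closest to the run-time parameter
    i_closest = min(range(len(p_samples)),
                    key=lambda i: abs(p_samples[i] - p_runtime)) + 1
    # only positions associated with that sample are corrected;
    # all other pixel positions are left intact
    return confirmed.get(i_closest, set())
```

Restricting the correction to the closest sample is what avoids both over-correction (touching pixels that are not visible defects at this setting) and under-correction (missing pixels that are).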

This strategy achieves a high image quality by avoiding an over-correction that results from correcting pixels that are most likely not visible defects at the run-time capture setting, and an under-correction that results from not considering all the pixels that are most likely visible defects at the run-time capture setting.

Full sensor defective pixel identification can be achieved after sufficient learning, i.e. close to a 100 % detection rate. Full false positive elimination can be achieved after sufficient learning, i.e. close to a 0 % false positive rate. Due to the correlation and association between the identified sensor defect positions (or maps) and the camera capture settings, over-correction or under-correction can be avoided. No pre-calibration or pre-knowledge of the sensor defect parameters is needed, as the proposed framework learns the defect positions x, y on its own over the camera usage (lifetime) and associates those positions with the camera capture parameters. The image processor unit 1 requires a very low computational complexity and is suitable for real-time resource-constrained image signal processors. It provides a generic scheme for various colour filter arrangements, since the proposed learning-based framework utilises the spatial coordinates of the identified defects as well as the camera parameters without any regard to how the colour channels are arranged. It provides a generic scheme for various defective pixel solutions, as any defective pixel detection mechanism could be used and incorporated in the proposed learning-based framework.