


Title:
FIXED PATTERN NOISE REDUCTION
Document Type and Number:
WIPO Patent Application WO/2014/046848
Kind Code:
A1
Abstract:
A method, including receiving signals, from a rectangular array (16) of sensor elements (18) arranged in rows and columns (56, 58), corresponding to an image captured by the array. The method also includes analyzing the signals along a row or a column to identify one or more local turning points, and processing the signals at the identified local turning points to recognize fixed pattern noise in the captured image. The method further includes correcting values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

Inventors:
ZILBERSTEIN CHAYA (IL)
SKARBNIK NIKOLAY (IL)
WOLF STUART (IL)
STARODUBSKY NATALIE (IL)
Application Number:
PCT/US2013/056700
Publication Date:
March 27, 2014
Filing Date:
August 27, 2013
Assignee:
GYRUS ACMI INC DBA OLYMPUS SURGICAL TECHNOLOGIES AMERICA (US)
International Classes:
H04N5/365; G06T5/00; H04N5/217
Foreign References:
EP1401196A1 (2004-03-24)
FR2953086A1 (2011-05-27)
EP1605403A1 (2005-12-14)
US20040135895A1 (2004-07-15)
US20070291142A1 (2007-12-20)
EP1206128A1 (2002-05-15)
Attorney, Agent or Firm:
KLIGLER, Daniel (61576 Tel Aviv, IL)
Claims:
CLAIMS

1. A method, comprising:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

analyzing the signals along a row or a column to identify one or more local turning points;

processing the signals at the identified local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

2. The method according to claim 1, wherein the one or more local turning points comprise one or more local minima.

3. The method according to claim 2, wherein processing the signals to recognize the fixed pattern noise comprises determining that an absolute value of differences of the signals at the one or more local minima is less than a predetermined threshold.

4. The method according to claim 3, wherein the predetermined threshold is a function of a noise level of the sensor elements, and wherein correcting the values of the signals comprises adding the noise level to the signals at the one or more local minima.

5. The method according to claim 1, wherein the one or more local turning points comprise one or more local maxima.

6. The method according to claim 5, wherein processing the signals to recognize the fixed pattern noise comprises determining that an absolute value of differences of the signals at the one or more local maxima is less than a predetermined threshold.

7. The method according to claim 6, wherein the predetermined threshold is a function of a noise level of the sensor elements, and wherein correcting the values of the signals comprises subtracting the noise level from the signals at the one or more local maxima.

8. The method according to claim 1, wherein analyzing the signals along the row or the column comprises evaluating the signals of a given sensor element and of nearest neighbor sensor elements of the given sensor element along the row or the column.

9. The method according to claim 8, wherein the nearest neighbor sensor elements comprise a first sensor element and a second sensor element, and wherein the first sensor element, the given sensor element, and the second sensor element are contiguous.

10. The method according to claim 8, wherein the nearest neighbor sensor elements comprise a first plurality of sensor elements and a second plurality of sensor elements, wherein the first and the second pluralities are disjoint, and wherein the given sensor element lies between the first and the second pluralities.

11. The method according to claim 1, wherein the sensor elements along the row or the column comprise first elements configured to detect a first color and second elements configured to detect a second color different from the first color, and wherein analyzing the signals along the row or the column comprises analyzing the signals from the first elements to identify the one or more local turning points.

12. A method, comprising:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

identifying a first row and a second row contiguous with the first row; and prior to receiving second row signals from the second row:

receiving first row signals from the first row;

analyzing the first row signals along the first row to identify one or more first row local turning points;

processing the first row signals at the identified first row local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the first row signals from the sensor elements at the identified first row local turning points so as to reduce the fixed pattern noise in the image.

13. The method according to claim 12, wherein the one or more first row local turning points comprise one or more first row local minima.

14. The method according to claim 12, wherein the one or more first row local turning points comprise one or more first row local maxima.

15. A method, comprising:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

analyzing the signals along a diagonal of the array to identify one or more local turning points;

processing the signals at the identified local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

16. Apparatus, comprising:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

analyze the signals along a row or a column to identify one or more local turning points,

process the signals at the identified local turning points to recognize fixed pattern noise in the captured image, and

correct values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

17. Apparatus, comprising:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

identify a first row and a second row contiguous with the first row, and prior to receiving second row signals from the second row:

receive first row signals from the first row,

analyze the first row signals along the first row to identify one or more first row local turning points,

process the first row signals at the identified first row local turning points to recognize fixed pattern noise in the captured image, and

correct values of the first row signals from the sensor elements at the identified first row local turning points so as to reduce the fixed pattern noise in the image.

18. Apparatus, comprising:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

analyze the signals along a diagonal of the array to identify one or more local turning points,

process the signals at the identified local turning points to recognize fixed pattern noise in the captured image, and

correct values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

Description:
FIXED PATTERN NOISE REDUCTION

FIELD OF THE INVENTION

The present invention relates generally to imaging, and specifically to reduction of fixed pattern noise in an image.

BACKGROUND OF THE INVENTION

Fixed pattern noise (FPN), from arrays of sensor elements, is easily apparent to a human observer of an image due to the observer's inherent sensitivity to edges in the image. Consequently, any system which reduces the FPN in an image presented to an observer would be advantageous.

SUMMARY OF THE INVENTION

An embodiment of the present invention provides a method, including:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

analyzing the signals along a row or a column to identify one or more local turning points;

processing the signals at the identified local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

In a disclosed embodiment the one or more local turning points include one or more local minima. Processing the signals to recognize the fixed pattern noise may include determining that an absolute value of differences of the signals at the one or more local minima is less than a predetermined threshold. The predetermined threshold may be a function of a noise level of the sensor elements, and correcting the values of the signals may consist of adding the noise level to the signals at the one or more local minima.

In an alternative disclosed embodiment the one or more local turning points include one or more local maxima. Processing the signals to recognize the fixed pattern noise may include determining that an absolute value of differences of the signals at the one or more local maxima is less than a predetermined threshold. The predetermined threshold may be a function of a noise level of the sensor elements, and correcting the values of the signals may include subtracting the noise level from the signals at the one or more local maxima.

In an alternative embodiment analyzing the signals along the row or the column includes evaluating the signals of a given sensor element and of nearest neighbor sensor elements of the given sensor element along the row or the column. The nearest neighbor sensor elements may include a first sensor element and a second sensor element, and the first sensor element, the given sensor element, and the second sensor element may be contiguous. Alternatively, the nearest neighbor sensor elements include a first plurality of sensor elements and a second plurality of sensor elements, the first and the second pluralities may be disjoint, and the given sensor element may lie between the first and the second pluralities.

In a further alternative embodiment the sensor elements along the row or the column include first elements configured to detect a first color and second elements configured to detect a second color different from the first color, and analyzing the signals along the row or the column may include analyzing the signals from the first elements to identify the one or more local turning points.

There is further provided, according to an embodiment of the present invention, a method, including:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

identifying a first row and a second row contiguous with the first row; and prior to receiving second row signals from the second row:

receiving first row signals from the first row;

analyzing the first row signals along the first row to identify one or more first row local turning points;

processing the first row signals at the identified first row local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the first row signals from the sensor elements at the identified first row local turning points so as to reduce the fixed pattern noise in the image.

Typically, the one or more first row local turning points include one or more first row local minima. Alternatively or additionally, the one or more first row local turning points include one or more first row local maxima.

There is further provided, according to an embodiment of the present invention, a method, including:

receiving signals, from a rectangular array of sensor elements arranged in rows and columns, corresponding to an image captured by the array;

analyzing the signals along a diagonal of the array to identify one or more local turning points;

processing the signals at the identified local turning points to recognize fixed pattern noise in the captured image; and

correcting values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

There is further provided, according to an embodiment of the present invention, apparatus, including:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

analyze the signals along a row or a column to identify one or more local turning points,

process the signals at the identified local turning points to recognize fixed pattern noise in the captured image, and

correct values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

There is further provided, according to an embodiment of the present invention, apparatus, including:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

identify a first row and a second row contiguous with the first row, and prior to receiving second row signals from the second row:

receive first row signals from the first row,

analyze the first row signals along the first row to identify one or more first row local turning points,

process the first row signals at the identified first row local turning points to recognize fixed pattern noise in the captured image, and

correct values of the first row signals from the sensor elements at the identified first row local turning points so as to reduce the fixed pattern noise in the image.

There is further provided, according to an embodiment of the present invention, apparatus, including:

a rectangular array of sensor elements arranged in rows and columns, configured to output signals corresponding to an image captured by the array; and

a processor which is configured to:

analyze the signals along a diagonal of the array to identify one or more local turning points,

process the signals at the identified local turning points to recognize fixed pattern noise in the captured image, and

correct values of the signals from the sensor elements at the identified local turning points so as to reduce the fixed pattern noise in the image.

The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic illustration of a fixed pattern noise reduction system, according to an embodiment of the present invention;

Fig. 2 is a schematic illustration of a first display generated on a screen, according to an embodiment of the present invention;

Fig. 3 is a schematic illustration of a second display generated on the screen, according to an embodiment of the present invention;

Fig. 4 is a flowchart of steps performed by a processor in checking for fixed pattern noise, and in correcting for the noise when found, according to an embodiment of the present invention; and

Fig. 5 is a flowchart of steps performed by the processor in checking for fixed pattern noise, and in correcting for the noise when found, according to an alternative embodiment of the present invention.

DETAILED DESCRIPTION OF EMBODIMENTS

OVERVIEW

In an embodiment of the present invention, the presence of fixed pattern noise, generated by sensor elements in a rectangular array, is compensated for. The elements are arranged in rows and columns, and capture an image from light incident on the array. The elements in turn generate respective signals corresponding to the image captured by the array.

The element signals are analyzed, along a row, along a column, or along any other straight line, to identify elements wherein the signals form one or more local turning points, i.e., one or more local maxima or one or more local minima. The signals from the identified elements are processed, typically by measuring an amplitude of the local turning point, to recognize that the signals do correspond to fixed pattern noise.

For elements that are identified as producing fixed pattern noise, the values of the signals from the elements are corrected in order to reduce the fixed pattern noise in the image. The correction is typically accomplished by adding a noise level (of noise generated by the elements of the array) to an identified local minimum, or by subtracting a noise level from an identified local maximum.

The compensation for the presence of fixed pattern noise may be performed for both "gray-scale" and color arrays. The compensation is designed to suit real-time implementation with moderate computational/hardware demands, and needs no prior off-line or on-line calibration of the array of elements.
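By way of illustration only, the turning-point test and correction described above can be sketched in a few lines of Python for a single row of pixel values; the function name and the assumption that the noise level NL is already known are ours, not part of the disclosure.

import numpy as np

def correct_row_fpn(row, noise_level):
    # Return a copy of row with isolated local extrema pulled back by noise_level.
    corrected = row.astype(float).copy()
    for m in range(1, len(row) - 1):
        left, center, right = row[m - 1], row[m], row[m + 1]
        if left < center and right < center:      # local maximum: likely an FPN spike
            corrected[m] = center - noise_level
        elif left > center and right > center:    # local minimum: likely an FPN dip
            corrected[m] = center + noise_level
    return corrected

row = np.array([24, 24, 27, 24, 24, 21, 24, 24])
print(correct_row_fpn(row, noise_level=3.0))      # all values return to 24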

DETAILED DESCRIPTION

Reference is now made to Fig. 1, which is a schematic illustration of a fixed pattern noise reduction system 10, according to an embodiment of the present invention. System 10 may be applied to any imaging system wherein images are generated using a rectangular array of sensor elements, herein by way of example assumed to comprise photo-detectors. Herein, by way of example, system 10 is assumed to be applied to images generated by an endoscope 12 which is imaging a body cavity 14 of a patient undergoing a medical procedure. To implement its imaging, endoscope 12 comprises a rectangular array 16 of substantially similar individual sensor elements 18, as well as a lens system 20 which focuses light onto the sensor elements. In the description herein, as required, sensor elements 18 are distinguished from each other by having a letter appended to the identifying numeral (so that array 16 can be considered as comprising sensor elements 18A, 18B, 18C, ...).

Although Fig. 1 illustrates array 16 and lens system 20 as being at the distal end of the endoscope, it will be understood that the array and/or the lens system may be at any convenient locations, including locations at the proximal end of the endoscope, and locations external to the endoscope. Except where otherwise indicated in the following description, for simplicity and by way of example, the individual sensor elements of the array are assumed to comprise "gray-scale" sensor elements generating output levels according to the intensity of incident light, and regardless of the color of the incident light. Typically, array 16 comprises a charge coupled device (CCD) array of sensor elements 18. However, array 16 may comprise any other type of sensor elements known in the art, such as complementary metal oxide semiconductor (CMOS) detectors or hybrid CCD/CMOS detectors.

Array 16 operates in a periodic manner, so that within a given period it outputs voltage levels of its sensor elements as a set of values. In the description herein a set of values output for one period of operation of the array is termed a frame. Array 16 may operate at a standard frame rate, such as 60 frames/second (where the period of operation is 1/60 s). Alternatively, array 16 may operate at any other frame rate, which may be higher or lower than a standard rate of 60 frames/second.

System 10 is supplied with the video signal of an endoscope module 22, comprising a processor 24 communicating with a memory 26. Endoscope module 22 also comprises a fixed pattern noise (FPN) reduction module 28, which may be implemented in software, hardware, or a combination of software and hardware. The functions of endoscope module 22 and FPN reduction module 28 are described below. While for simplicity and clarity FPN module 28 and processor 24 have been illustrated as being separate from endoscope 12, this is not a requirement for embodiments of the present invention. Thus, in one embodiment FPN module 28 and the functionality used by processor 24 (for operation of the FPN module) are incorporated into a handle of the endoscope. Such an embodiment may operate as a stand-alone system, substantially independent of the endoscope module. Processor 24 uses software stored in memory 26, as well as FPN module 28, to operate system 10. Results of the operations performed by processor 24 may be presented to a medical professional operating system 10 on a screen 32, which typically displays an image of body cavity 14 undergoing the procedure. The software used by processor 24 may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.

Fig. 2 is a schematic illustration of a first display 34 generated on screen 32, according to an embodiment of the present invention. Display 34 is produced by processor 24, from a single frame output by array 16, as a rectangular array 50 of pixels 52, and the pixels are distinguished as necessary by appending a letter to the identifying numeral. Array 50 and array 16 have a one-to-one correspondence, so that each pixel 52A, 52B, 52C, ... has a level, or value, representative of an intensity of light registered by a corresponding sensor element 18A, 18B, 18C, ... . Except where otherwise indicated, pixels 52 are assumed by way of example to have 256 gray levels, varying in value from 0 (black) to 255 (white). In the following description, the arrays are assumed to have N rows x M columns, where N, M are positive integers. Typically, although not necessarily, N, M are industry standard values, and (N,M) may equal, for example, (1080, 1920).

Theoretically, if array 16 is illuminated with light that has a constant intensity over the array, the level of each pixel 52 in array 50 is equal. Thus, if array 16 is illuminated with low-intensity light that has an equal intensity over the whole array, the pixel value of each pixel 52 should be the same, and in Fig. 2 this value is assumed to be 24. In fact, random noise causes a random variation in the pixel value, but for simplicity and for the purposes of description such random noise is disregarded.

A region 54 of array 50 is shown in more detail in a callout 56, and exemplary levels of the pixels are provided in the callout. As illustrated in the callout, the levels of most of the pixels are 24. However, two columns differ from 24: a column 58 has pixel values which are generally larger than 24, and a column 60 has pixel values which are generally smaller than 24. The differences from the expected value of 24 are typically caused by practical limitations such as small differences in the individual response and/or construction of sensor elements 18, and/or differences in amplification stages after the sensor elements. The differences from the expected value appear on screen 32 as a lighter or darker pixel compared to the surrounding pixels, and because of the characteristics of the human visual system, spatially coherent differences along a curve are particularly evident. (In CCD/CMOS sensors such a common curve is a row or a column, such as column 58 or 60.) The differences, termed fixed pattern noise, are corrected by system 10.
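This kind of column-type fixed pattern noise can be reproduced with a short simulation; the numerical values below are assumed for illustration and correspond only loosely to callout 56.

import numpy as np

frame = np.full((6, 8), 24, dtype=float)   # uniformly illuminated region
frame[:, 3] += 3                           # a column that reads high (cf. column 58)
frame[:, 5] -= 3                           # a column that reads low (cf. column 60)
print(frame)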

Fig. 3 is a schematic illustration of a second display 70 generated on screen 32, according to an embodiment of the present invention. Display 70 is produced from a single frame of array 16, and has varying gray levels, such as occur in an image of cavity 14. A callout 72 gives levels of pixels 52 that the processor generates from sensor elements 18 for a region 74 of the display, showing numerical values of the varying gray levels. The broken lines beside callout 72 represent the existence of pixels (whose values are not shown) on either side of the callout. Overall there are N rows x M columns of pixels; the significance of pixels 76 and 78, and of columns 80 and 82, is explained below.

Fig. 4 is a flowchart of steps performed by processor 24 in checking for fixed pattern noise, and in correcting for the noise when found, according to an embodiment of the present invention. The processor performs the steps on each frame of pixel values received from array 16, prior to displaying the values, corrected as required by the flowchart, on screen 32. The steps of the flowchart are typically performed sequentially on sets of pixel values as the sets are received by the processor, so that any correction needed in the value of a pixel in a given set may be made before subsequent pixel sets are received. Alternatively, the steps may be performed after receipt by the processor of a complete frame, and prior to receipt of an immediately following frame. In either case, any corrections generated by the flowchart may be performed in real time.

It will be understood that while the flowchart may generate a corrected value for a particular pixel, the corrected value is only used in generating the display on screen 32. Any corrected pixel value is not used in subsequent iterations of the flowchart. In the flowchart description, 0 < m < M and 0 < n < N, where m and n are integers and M, N are as defined above. The values m, n define a pixel (m, n), and in the flowchart m, n are also used as counters. Pixel (m, n) has a pixel value p(m,n).

In an initial step 102, the processor measures a noise level NL generated by array 16. The noise level used may comprise any convenient noise level known in the art, such as a root mean square level or an average level. Typically, although not necessarily, the noise level may be measured prior to array 16 being used for imaging in system 10, for example, at a facility manufacturing the array. In some embodiments an operator of system 10 may provide a value of noise level NL to the processor without the processor having to make an actual measurement.

From the noise level, the processor sets a correction factor X to be applied in subsequent steps of the flowchart, where X is a function of NL. In some embodiments, X is set equal to NL.

In a row defining step 104 the processor sets n equal to 1, to analyze the first row of the frame. In a column defining step 106 the processor sets m equal to 2, to analyze the second pixel in the row. (As is explained further below, the processor considers sets of three pixels, and analyzes the central pixel of the set. Thus, the first pixel analyzed is the second pixel of the row.)

In a first local turning point condition 108, the processor checks if the values of the pixels immediately preceding and immediately following the pixel being analyzed are both less than the value, p(m,n), of the pixel being analyzed. The condition checks if the pixel being analyzed, i.e., the pixel at the center of the three pixels of the set being considered, is at a local turning point, in this case a local maximum.

If condition 108 provides a valid return, then in an optional condition step 110 the processor checks if absolute values of differences Δ1 and Δ2 are less than a predefined multiple K of correction factor X. The term KX acts as a threshold value. In step 110,

Δ1 = p(m - 1, n) - p(m, n), Δ2 = p(m + 1, n) - p(m, n) (1)

The condition checked by the processor in step 110 is:

|Δ1| < KX AND |Δ2| < KX (2)

Typically K > 2. In one exemplary embodiment K = 4.

Step 110 may be applied to prevent system 10 from deforming image edges, such as may occur if the amplitude of the detected turning point is large. In the case that it is applied, a suitable value of K may be determined by an operator of system 10, and/or by a manufacturer of array 16, without undue experimentation. Typically, lower values of K result in better edge preservation at the cost of less noise removal, and higher K values result in better noise removal, but with a greater chance of modifying weak edges.

If condition 110 holds, then in a first adjustment step 112 the value p(m,n) is reduced by X and the flowchart then continues at a pixel increment step 114. If condition 110 doesn't hold, the flowchart proceeds directly to step 114. If step 110 is not applied, a valid return from condition 108 leads directly to first adjustment step 112.

If first turning point condition 108 doesn't hold, the flowchart continues at a second local turning point condition 116.

In second turning point condition 116, the processor checks the set of pixels already checked in condition 108. In condition 116 the processor checks if the pixel being analyzed, i.e., central pixel p(m,n) in the set of pixels {p(m-1,n), p(m,n), p(m+1,n)}, is at a local minimum. If condition 116 holds, then the processor may apply an optional condition step 118, which is substantially the same as optional condition step 110 described above, and performs the same function. Typically, although not necessarily, in embodiments where step 110 is applied step 118 is also applied.

If condition 118 holds, then in a second adjustment step 120 the value p(m,n) is increased by X and the flowchart continues at pixel increment step 114. If condition 118 doesn't hold, the flowchart proceeds to step 114. If step 118 is not applied, a valid return from condition 116 leads directly to second adjustment step 120.

In a row checking condition 122, the processor checks if an end of a row being checked has been reached. If the end has not been reached, then the flowchart returns to condition 108, and initiates analysis of a subsequent pixel in the row. If condition 122 provides a valid return, indicating that the end of a row has been reached, the processor increments row counter n by one in an increment row step 124. The processor then checks, in a last row condition 126, if the last row of array 16 has been reached. If the last row has not been reached, the flowchart returns to step 106, to begin analyzing a subsequent row of array 16.

If condition 126 provides a valid return, so that all rows of array 16 have been analyzed, the flowchart ends.

Referring back to Fig. 3, it will be understood that application of the flowchart to the pixels of callout 72 determines that pixel 76 is a local maximum, so that the processor decreases its value by X prior to displaying the pixel on screen 32. Similarly, application of the flowchart to the other pixels of column 80 determines that most of the other pixels of the column are at local maxima, and have their value decreased by X. From callout 72 pixel 78 is a local minimum, so that the processor increases its value by X prior to display on screen 32. Similarly, most of the other pixels of column 82 are at local minima, and those that are at local minima have their value increased by X.

Consideration of the flowchart illustrates that in performing the steps of the flowchart processor 24 checks sequential sets of three contiguous pixels. In each set the processor checks if the central pixel is at a local turning point, i.e., is at a local maximum, wherein condition 108 is valid, or is at a local minimum, wherein condition 116 is valid. If the central pixel is at a local maximum, its value is decreased; if the central pixel is at a local minimum, its value is increased.

As is apparent from the flowchart description, and unlike prior art methods for reducing FPN, there is no need for calibration of array 16 prior to implementation of the steps of the flowchart.
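The procedure of the flowchart can be summarized by the following Python sketch; the function and parameter names are ours (the disclosure specifies no code), the correction factor is taken as X = NL, and the optional threshold test of steps 110 and 118 uses the exemplary value K = 4. As in the flowchart, corrected values are written to a separate output and are not reused in subsequent comparisons.

def reduce_fpn_rowwise(frame, noise_level, K=4, apply_threshold=True):
    # frame: 2-D NumPy array of pixel values, N rows x M columns
    X = noise_level                        # correction factor X as a function of NL
    out = frame.astype(float).copy()
    N, M = frame.shape
    for n in range(N):                     # row counter
        for m in range(1, M - 1):          # central pixel of each set of three
            d1 = frame[n, m - 1] - frame[n, m]
            d2 = frame[n, m + 1] - frame[n, m]
            within = (abs(d1) < K * X and abs(d2) < K * X) or not apply_threshold
            if d1 < 0 and d2 < 0 and within:     # local maximum (condition 108)
                out[n, m] = frame[n, m] - X      # step 112
            elif d1 > 0 and d2 > 0 and within:   # local minimum (condition 116)
                out[n, m] = frame[n, m] + X      # step 120
    return out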

Fig. 5 is a flowchart of steps performed by processor 24 in checking for fixed pattern noise, and in correcting for the noise when found, according to an alternative embodiment of the present invention. Apart from the differences described below, the steps of the flowchart of Fig. 5 are generally similar to those of the flowchart of Fig. 4, and steps indicated by the same reference numerals in both flowcharts are generally similar when implemented.

In contrast to the flowchart of Fig. 4 (wherein the processor analyzed the central pixel in a set of three contiguous pixels), in the flowchart of Fig. 5 processor 24 analyzes the central pixel in a set of more than three contiguous pixels, typically an odd number of pixels, to identify central pixels that are local turning points.

A first step 132 includes the actions of step 102. In addition, in step 132 a value of a whole number c is selected. The value of c defines the number of pixels S in the sets analyzed by the processor, according to the equation:

S = 2c + 1 (3)

In the following description whole number c, by way of example is assumed to be 4, so that S = 9, i.e., processor 24 considers sets of nine pixels.

In a column defining step 136 the processor sets m equal to c + 1, to analyze the central pixel in the first set of nine pixels. In this example, m = 5, so the processor begins by analyzing the fifth pixel in the row being considered.

A first local turning point condition 138 includes the action of condition 108, described above, where the central pixel is compared with its nearest neighbors to determine if the central pixel is at a local turning point, in this case a local maximum.

In addition, assuming the nearest-neighbor comparison is valid, in condition 138 the processor also compares, iteratively, the central pixel with its next-nearest neighbors and succeeding next-nearest neighbors. The iteration is illustrated in the flowchart by an increment step 149, where the value of "a", defined below in expression (4), increments, and a line 150. For a set of nine pixels, in addition to the nearest neighbors, there are next-nearest neighbors, next-next-nearest neighbors, and next-next-next-nearest neighbors, so that there are three iterations. If a comparison is valid, the processor assigns a turning point parameter Ga to be 1. If the comparison is invalid, Ga is assigned to be 0.

An expression for condition 138 is:

p(m, n) > p(m - a, n) AND p(m, n) > p(m + a, n), a = 1, 2, ..., c (4)

An optional condition step 140 is generally similar to step 110, and may be applied to each iteration of condition 138 to check that absolute values of differences Δ1 and Δ2 are less than the predefined threshold KX (described above). In condition step 140,

Δ1 = p(m - a, n) - p(m, n), Δ2 = p(m + a, n) - p(m, n), a = 1, 2, ..., c (5)

In a decision 152, the processor evaluates all the values of Ga determined in condition 138, assuming they have been validated in step 140. If the evaluation is valid, then the flowchart proceeds to step 112 where the value X is subtracted from p(m,n). If the evaluation is invalid, p(m,n) is unchanged, and the flowchart continues at step 114.

In general, in decision 152 there are c values of turning point parameter Ga, each of the values being 0 or 1. In a first embodiment, if 50% or more of the values are 1, then the evaluation in decision 152 is returned valid. In a second embodiment, if 75% or more of the values are 1, then the decision is valid. In other embodiments, the condition for decision 152 to be valid is another, preset, fraction of unit values. In some embodiments, weighting may be attached to the different values of Ga; for example, G1 (the nearest neighbor value of the turning point parameter) may be assigned a higher weight than G2 (the next-nearest neighbor value of the turning point parameter).
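One possible reading of decision 152 is sketched below; the helper name and the particular weights are illustrative assumptions, since the text only outlines the voting rule.

def accept_turning_point(G, fraction=0.5, weights=None):
    # G is the list [G1, ..., Gc] of 0/1 flags, G[0] being the nearest-neighbor flag
    if weights is None:
        return sum(G) >= fraction * len(G)          # e.g. the 50% or 75% rule
    total = sum(w * g for w, g in zip(weights, G))
    return total >= fraction * sum(weights)         # weighted vote

print(accept_turning_point([1, 0, 0, 0]))                        # False (pixel 76)
print(accept_turning_point([1, 1, 1, 1]))                        # True (pixel 78)
print(accept_turning_point([1, 1, 0, 0], weights=[4, 3, 2, 1]))  # weighted vote: True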

If the nearest-neighbor comparison in condition 138 is invalid, then the flowchart continues to a second local turning point condition 146. Condition 146 includes the action of condition 116, where by comparing with its nearest neighbors the processor determines if the central pixel is at a local minimum.

In addition, assuming the nearest-neighbor comparison in condition 146 is valid, the processor performs an iterative process on succeeding next-nearest neighbors of the central pixel. The iterative process is substantially as described above for condition 138, and is illustrated in the flowchart by a line 154 and an increment step 153, generally similar to step 149 described above. In condition 146, the processor evaluates values of turning point parameter Ga according to expression (6):

p(m, n) < p(m - a, n) AND p(m, n) < p(m + a, n), a = 1, 2, ..., c (6)

An optional condition step 148 is substantially the same as step 140, described above.

A decision 156 is substantially the same as decision 152, described above.

Thus, in decision 156 the processor evaluates all the values of Ga determined in condition 146, assuming they have been validated in step 148. If the evaluation is valid, then the flowchart proceeds to step 120 where the value X is added to p(m,n). If the evaluation is invalid, p(m,n) is unchanged, and the flowchart continues at step 114.

Referring back to Fig. 3, and assuming that c is set equal to 4, application of the flowchart of Fig. 5 to pixel 76 gives G1 = 1, G2 = 0, G3 = 0, and G4 = 0. Consequently, for either the first or second embodiment referred to above, no adjustment is made to the value of pixel 76. In the case of pixel 78, G1 = 1, G2 = 1, G3 = 1, and G4 = 1. Thus, for either the first or second embodiment referred to above, X is added to the value of pixel 78.

Consideration of the flowchart of Fig. 5 shows that signals from a central array element are compared with signals from pluralities of elements on either side of the central element. All the elements in the comparison are contiguous, and the two pluralities are disjoint.
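The per-offset comparisons of conditions 138 and 146 can be sketched as follows; the function returns the flags G1 through Gc for a central pixel and is intended to be combined with a voting helper such as the one shown after decision 152. The names are again ours, not the disclosure's.

def turning_point_flags(row, m, c, K, X, maximum=True):
    # Return [G1, ..., Gc] for the pixel at index m of the 1-D sequence row.
    flags = []
    for a in range(1, c + 1):
        d1 = row[m - a] - row[m]
        d2 = row[m + a] - row[m]
        if maximum:
            is_extremum = d1 < 0 and d2 < 0             # expression (4)
        else:
            is_extremum = d1 > 0 and d2 > 0             # expression (6)
        within = abs(d1) < K * X and abs(d2) < K * X    # optional steps 140/148
        flags.append(1 if (is_extremum and within) else 0)
    return flags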

The description above has assumed that analysis of pixel values derived from array 16 is on a row by row basis. Such an analysis could be performed as the data of a given row becomes available to the processor, and the processor is typically able to perform its analysis before data from the row immediately following the given row is available to the processor. Alternatively, the analysis on a row by row basis may be made by the processor when a complete frame of data is available to the processor. In this case the analysis is typically completed before an immediately succeeding frame of data is available to the processor.

In cases where complete frames of data are available to the processor, a first alternative embodiment of the present invention comprises analyzing the frame data on a column by column basis. As with the analysis on a row by row basis, all columns may be analyzed before an immediately succeeding frame is available to the processor. Those having ordinary skill in the art will be able to adapt the description of the flowcharts of Figs. 4 and 5, mutatis mutandis, for analysis of pixel values on a column by column basis.
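Assuming the reduce_fpn_rowwise sketch shown earlier is in scope, column by column analysis can be illustrated simply by operating on the transposed frame; this is one convenient adaptation, not the only one.

def reduce_fpn_columnwise(frame, noise_level, **kwargs):
    # analyze columns by treating the transposed frame row by row
    return reduce_fpn_rowwise(frame.T, noise_level, **kwargs).T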

In addition, in a second alternative embodiment that may be implemented for cases where complete frames of data are available, the data may be analyzed on a diagonal by diagonal basis. Such an embodiment is not limited to "simple" diagonals, where elements lie on diagonals having a slope of +1 so that succeeding elements are generated on a "one across and one up" basis, or a slope of -1, so that succeeding elements are generated on a "one across and one down" basis. Rather, embodiments of the present invention comprise analysis on a diagonal by diagonal basis, where a diagonal is any straight line having an equation of the form:

y = ax + b (7)

where "a" and "b" are real numbers, "a" is the slope of the diagonal, and x and y represent variables on respective orthogonal axes.

It will be understood that for any given value of slope "a" sequential values of "b" in equation (7) define a family of parallel diagonals that may be used for the diagonal by diagonal basis referred to above. Those having ordinary skill in the art will be able to adapt the description of the flowcharts above for analysis of pixel values on such a diagonal by diagonal basis.
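One way, not specified in the disclosure, to enumerate such a family of parallel diagonals for an integer slope "a" is sketched below; the pixel values gathered along each diagonal can then be passed to the same 1-D analysis used for rows.

def diagonals(N, M, slope=+1):
    # Yield lists of (row, col) coordinates lying on each diagonal n = slope*m + b.
    for b in range(-abs(slope) * (M - 1), N + abs(slope) * (M - 1)):
        diag = [(slope * m + b, m) for m in range(M) if 0 <= slope * m + b < N]
        if len(diag) >= 3:               # at least three pixels are needed for a set
            yield diag

for d in diagonals(4, 4):                # the simple "one across and one up" family
    print(d)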

It will be understood that regardless of the different types of analysis described above, all analyses may be accomplished in real time.

For simplicity, the description above has assumed that the sensor elements of array 16 generate gray scale values according to the intensity of light incident on the elements, independent of the color of the light. Embodiments of the present invention comprise arrays which have elements that are dependent on the color of the incident light. Such elements, herein referred to as "color elements," typically have a color filter in front of the elements, and may, for example, be arranged in a Bayer configuration. In the case of an array having color elements, the analysis described above is on sub-arrays of color elements having the same color. Thus, for an RGB (red, green, blue) array, the analysis is performed on a red sub-array of elements, a green sub-array of elements, and a blue sub-array of elements.
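For a Bayer-patterned color array, the same-color sub-arrays can be extracted by strided slicing and corrected independently; the 2x2 layout below and the reuse of the earlier reduce_fpn_rowwise sketch are assumptions made for illustration only.

BAYER_OFFSETS = {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}

def reduce_fpn_color(frame, noise_level):
    # frame: 2-D NumPy array holding the raw color mosaic
    out = frame.astype(float).copy()
    for key, (r0, c0) in BAYER_OFFSETS.items():
        sub = frame[r0::2, c0::2]                          # same-color sub-array
        out[r0::2, c0::2] = reduce_fpn_rowwise(sub, noise_level)
    return out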

It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.